The CineGRID collaboration

University of Amsterdam, Jeroen Roodhart



  1. The CineGRID collaboration
     University of Amsterdam, Jeroen Roodhart, 12/02/2009
     ATG - Informatiseringscentrum - Universiteit van Amsterdam
     Systems- and Network Engineering Research Group

  2. Plan
     • What is CineGrid
     • Who are involved
     • Use cases
     • Grand vision
     • Storage in CineGrid context
     • Current setup in Amsterdam
     • Experience
     • Lessons
     • Future
     • Summary

  3. What is CineGrid?
     CineGrid is a non-profit international membership organization...
     CineGrid's mission is to build an interdisciplinary community focused on the research, development and demonstration of networked collaborative tools, enabling the production, use and exchange of very high-quality digital media over high-speed photonic networks. (From the site: http://cinegrid.org)

  4. Who are involved
     • Lots of (big) names; more importantly, you can find them in the members section of the site.

  5. Use cases (CdL): Keio/Calit2 collaboration
     • Trans-Pacific 4K teleconference over a dedicated 1 Gbps link
     • Participants: Keio University (President Anzai), UCSD (Chancellor Fox), Sony, NTT, SGI

  6. Use cases (CdL): CineGrid @ SARA

  7. Use cases (CdL): Holland Festival 2007 – Era la Notte

  8. Use cases
     • Scientific visualisation
     • Film editing processes
     • High-definition collaboration environments
     • Medical applications
     • Entertainment venues
       – Dome theatres
       – 4K Cinema

  9. Grand Vision (Cees de Laat's "find the beautiful lady on the beach")
     • RDF describing the infrastructure
     • Application: find video containing x, then trans-code it to view on a Tiled Display
     [Diagram: RDF descriptions linking the components – RDF/CG, RDF/VIZ, RDF/ST, RDF/NDL, RDF/CPU – to the content] (PG&CdL)

  10. Grand Vision (ctd.)
      • CineGrid will be using iRODS
      • Intention to place NDL and semantic information within iRODS (a sketch of how this could look follows below)
        – Storage / content delivery nodes / transcoding
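      A minimal sketch of what placing content plus semantic metadata in iRODS could look like using the standard icommands; the zone path, object name and attribute names below are hypothetical, not from the slides:

        # Register a 4K clip in the iRODS data grid (hypothetical names)
        iput -K era_la_notte_4k.dpx /cinegridZone/home/uva/content/

        # Attach semantic/NDL-style metadata as AVU triples on the data object
        imeta add -d /cinegridZone/home/uva/content/era_la_notte_4k.dpx resolution 4096x2160
        imeta add -d /cinegridZone/home/uva/content/era_la_notte_4k.dpx storage-node thumper1.uva.nl

        # Query objects by metadata, e.g. to find candidates for transcoding
        imeta qu -d resolution = 4096x2160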

  11. Storage in CineGrid context: 3840*2160 (CdL)

  12. Storage in CineGrid context (ctd.) (CdL)

      Format     x     y     Rate (/s)  Color (bits/pix)  Frame (pix)  Frame (MByte)  Flow (MByte/s)  Stream (Gbit/s)
      720p       1280   720  60         24                  921600     2.8            170             1.3
      HD 1080p   1920  1080  30         24                 2073600     6.2            190             1.5
      HD 2k      2048  1080  24         36                 2211840     10             240             1.2
                             48                                                       480             2.4
      SHD        3840  2160  30         24                 8294400     25             750             6.0
      4k         4096  2160  24         36                 8847360     40             960             7.6
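      To see where the frame and flow figures come from, here is a back-of-the-envelope check of the 4k row in shell arithmetic (a sketch added here, not part of the original slides):

        #!/usr/bin/bash
        # Uncompressed 4k digital cinema: 4096 x 2160 pixels, 36 bits/pixel, 24 frames/s
        pixels=$((4096 * 2160))               # 8847360 pixels per frame
        frame_bytes=$((pixels * 36 / 8))      # ~39.8 MByte per frame (table: 40)
        flow_bytes=$((frame_bytes * 24))      # ~955 MByte/s sustained (table: 960)
        echo "frame: $((frame_bytes / 1000000)) MByte"
        echo "flow : $((flow_bytes / 1000000)) MByte/s"
        echo "link : $((flow_bytes * 8 / 1000000)) Mbit/s  (~7.6 Gbit/s uncompressed)"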

  13. Storage in CineGrid context
      • Buying n x 1T disks doesn't work
      • A traditional approach using FC-AL, NFS or CIFS might not be fast enough
      • We may now run into issues with
        – Complexity
        – Scalability
        – Speed considerations
        – Integrity

  14. Why think about storage (ctd.)
      • CineGRID
        – Large data sets
        – Traditional interconnecting of sites
          • Relaying movies to display sites
          • Display movie (from local cache)
        – Interconnect services
          • Streaming server transcodes the 4k movie to lower res
          • Display movie stream
      • Both models have different storage requirements

  15. Current setup in Amsterdam (CdL)

  16. Current setup in Amsterdam (ctd.)

  17. Current setup in Amsterdam (ctd.)
      • Choices:
        – Existing 10Ge interconnect (we're into networking)
        – Sun x4500 Thumpers
          • 48 x 1Tb data disks / 2 x Opteron
          – Running OpenSolaris "Nevada"
          – ZFS filesystem
        – Looking into an upgrade to the x4540, more on this later

  18. Current setup in Amsterdam (ctd.)
      • About 18T in use, 13T left
      • RAIDZ1 for now (speed/space considerations)
      • Both thumpers are synced, using ZFS snapshot streaming
      • 10Ge connection to "streaming node" node41 and (suitcees/node41) to the "Optiputer net"
      • Syncing of thumpers may stop if other use dictates

  19. Experience
      • Ease of administration:

        #!/usr/bin/bash
        # Build one raidz2 vdev per disk "row": each vdev takes one disk from
        # each of the six controllers (c0, c1, c4, c5, c6, c7).
        for i in 1 2 3 4 5 6 7; do
          if [ $i = "1" ]; then
            zpool_command="zpool create -f mypool raidz2 "
          else
            zpool_command="zpool add mypool raidz2 "
          fi
          $zpool_command c0t${i}d0 c1t${i}d0 c4t${i}d0 c5t${i}d0 c6t${i}d0 c7t${i}d0
        done
        # The remaining target-0 disks become hot spares
        zpool add mypool spare c0t0d0 c1t0d0 c6t0d0 c7t0d0

        Wait two minutes --> approx. 32T filesystem

  20. Experience (ctd.)
      • Syncing filesystems:

        # Make a "point in time" snapshot of the pool
        > zfs snapshot mypool@20080521_1
        # Stream the snapshot using RBUDP to the other host
        > zfs send mypool@20080521_1 | \
            sendstream 192.168.57.25 8000m 8000
        # On the other host, receive the stream:
        > recvstream 192.168.57.24 8000 | \
            zfs receive mypool/basketcees@20080521_1

        This may take longer...

  21. Experience (ctd.)

  22. Experience (ctd.)
      • Got a Thor to play with :)
        – Can we move the Thumper disks to the Thor?

  23. Experience (ctd.)
      • Yes you can!
        – And you'll even keep your ZFS volumes if you export them nicely ;) (see the sketch below)
      • You probably don't want to move the OS disks though...
      • So let's look at some tests...
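      A minimal sketch of the "export nicely" step, assuming the pool name from the earlier example (the slides do not show the exact commands used):

        # On the Thumper: cleanly export the pool before pulling the data disks
        zpool export mypool

        # ...physically move the 48 data disks to the Thor...

        # On the Thor: discover the relocated disks and import the pool again
        zpool import mypool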

  24. Experience (ctd.): Thor

  25. Experience (ctd.): Thor

  26. Experience (ctd.): Thor
      • Not much faster! But there's some strange variation here...
      • So what if we split per controller... (one possible layout is sketched below)
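      One reading of "split per controller" is to build each raidz2 vdev from disks on a single controller rather than one disk per controller; a hypothetical sketch with assumed device names, not the measured configuration:

        # One raidz2 vdev per controller (only the first two controllers shown)
        zpool create -f mypool \
          raidz2 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0
        zpool add mypool \
          raidz2 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0
        # ...repeat for the remaining controllers...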

  27. Experience (ctd.): Thor

  28. Experience (ctd.): Thor

  29. Lessons
      • Raw storage system speed may be enough for uncompressed 4K, but that doesn't scale to concurrent streaming
      • When using Thors it can help to think about controller/disk assignment
      • With the ixgb NICs we max out at 6 Gbps on a 10Ge link (comparable when using iRODS); this needs to improve to make use of Thor speed! (see the throughput-test sketch below)
      • Considering the "CineGRID application"
        – We don't yet have a standard solution from storage to "film display"
        – Conventional streaming tools use the "file system paradigm"
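      The slides do not say which tool produced the 6 Gbps figure; a typical way to measure raw TCP throughput on such a 10Ge link is with iperf (hostname below is taken from the setup slides, the window and stream counts are assumptions):

        # On the receiving streaming node: start an iperf server with a large TCP window
        iperf -s -w 4M

        # On the storage node: run four parallel streams for 30 seconds
        iperf -c node41 -w 4M -P 4 -t 30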

  30. Future
      • Filesystems
        – Linux: Ext4 and BTRFS look promising
        – Solaris: ZFS remains very strong in benchmarks and usability
      • Clustering
        – Lustre
        – GlusterFS
        – pNFS
      • Networking
        – RDMA/iWARP interconnect (nice if we went 100Ge)

  31. Summary
      • Storage backend speed of individual modern systems may be sufficient for one stream, _but_ we will likely want more
      • We need to consider the entire component stack of the CineGRID application
      • We probably need an approach where "streaming nodes" can access data using cluster technology
        – Fast interconnect (RDMA/QDR Infiniband)
        – More than one storage server
        – New technologies may lead to more elegant designs (e.g. SSD/ZFS/Lustre)

  32. Backup slides
