
CS 744: GOOGLE FILE SYSTEM (Shivaram Venkataraman, Fall 2020)



  1. Good morning! CS 744: GOOGLE FILE SYSTEM Shivaram Venkataraman Fall 2020

  2. ANNOUNCEMENTS - Assignment 1 out later today (5pm or before) - Group submission form - Anybody on the waitlist? - Collaboration

  3. OUTLINE 1. Brief history 2. GFS 3. Discussion 4. What happened next?

  4. HISTORY OF DISTRIBUTED FILE SYSTEMS

  5. SUN NFS (from CS 537) [Figure: multiple clients issue read RPCs (e.g. reading 4096 bytes of a file f) to a central File Server backed by a Local FS]

  6. [Figure: example namespace with /backups (bak1, bak2, bak3), /home/tyler (.bashrc, 537/p1, 537/p2), /etc, /bin; mounts: /dev/sda1 on /, /dev/sdb1 on /backups, NFS on /home]

  7. CACHING [Figure: Client 1 and Client 2 cache blocks (cache: A, cache: B) from the NFS server's Local FS; a stale read can occur when the server copy changes] The client cache records the time when a data block was fetched (t1). Before using the data block, the client does a STAT request to the server to get the last-modified timestamp for the file (t2) (not the block...), compares it to the cache timestamp, and refetches the data block if the file has changed since it was cached (t2 > t1).
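The validation rule on this slide fits in a short sketch. This is a minimal illustration, assuming hypothetical `stat_rpc` and `read_rpc` callables standing in for the slide's STAT and read RPCs; it is not the real NFS client code.

```python
import time

class CachedBlock:
    def __init__(self, data, fetched_at):
        self.data = data
        self.fetched_at = fetched_at        # t1: when the block was fetched

class NFSClientCache:
    def __init__(self, stat_rpc, read_rpc):
        self.stat_rpc = stat_rpc            # returns the file's last-modified time (t2)
        self.read_rpc = read_rpc            # fetches one block from the server
        self.blocks = {}                    # (path, block_no) -> CachedBlock

    def read_block(self, path, block_no):
        key = (path, block_no)
        cached = self.blocks.get(key)
        if cached is not None:
            t2 = self.stat_rpc(path)        # per-file timestamp, not per-block
            if t2 <= cached.fetched_at:     # file unchanged since we fetched it
                return cached.data
        # Miss or stale: refetch from the server and record t1.
        data = self.read_rpc(path, block_no)
        self.blocks[key] = CachedBlock(data, time.time())
        return data
```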

  8. ANDREW FILE SYSTEM - Design for scale - Whole-file caching - Callbacks from server
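A rough sketch of whole-file caching with server callbacks, assuming a hypothetical `server` object with a `fetch_whole_file` method; the real AFS protocol (Vice/Venus) is more involved.

```python
class AFSClient:
    """Caches whole files; the server promises to 'break the callback'
    when another client updates a cached file."""

    def __init__(self, server):
        self.server = server
        self.cache = {}      # path -> whole-file contents
        self.valid = {}      # path -> True while the callback promise holds

    def open(self, path):
        # Reuse the local copy as long as the callback has not been broken.
        if self.valid.get(path):
            return self.cache[path]
        data = self.server.fetch_whole_file(path, client=self)
        self.cache[path] = data
        self.valid[path] = True
        return data

    def break_callback(self, path):
        # Invoked by the server when someone else writes the file.
        self.valid[path] = False
```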

  9. WORKLOAD PATTERNS (1991)

  10. OceanStore / PAST (late 1990s and early 2000s) - Wide-area storage systems - Fully decentralized - Built on distributed hash tables (DHTs)

  11. GFS: WHY? - Files are large - Access pattern: sequential reads, frequent appends - Fault tolerance: components have frequent failures - Scalability: large number of concurrent writers

  12. GFS: WHY? (motivation) - Large scale → components with failures - Files are huge - Applications are different: appends, concurrent writers

  13. GFS: WORKLOAD ASSUMPTIONS (e.g. log analysis, indexing) - “Modest” number of large files - Two kinds of reads: large streaming and small random - Writes: many large, sequential writes; no random writes - High bandwidth more important than low latency

  14. GFS: DESIGN - Single Master (coordinator / leader) for metadata - Chunkservers for storing data - No POSIX API! - No caches!
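A minimal sketch of the resulting read path, assuming hypothetical `master.lookup` and chunkserver `read_chunk` stubs rather than the actual GFS RPC interface: the client contacts the master only for metadata, then talks to a chunkserver directly, with no client-side data caching.

```python
CHUNK_SIZE = 64 * 1024 * 1024   # 64 MB chunks

def gfs_read(master, chunkservers, path, offset, length):
    """Read `length` bytes of `path` starting at byte `offset`.
    (Sketch ignores reads that span a chunk boundary.)"""
    chunk_index = offset // CHUNK_SIZE
    # 1. Metadata from the single master: chunk handle + replica locations.
    handle, replica_addrs = master.lookup(path, chunk_index)
    # 2. Data directly from one chunkserver replica; file data never flows
    #    through the master and is not cached by the client.
    replica = chunkservers[replica_addrs[0]]
    return replica.read_chunk(handle, offset % CHUNK_SIZE, length)
```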

  15. CHUNK SIZE TRADE-OFFS - Smaller chunks → more chunks → more client-to-master requests and larger metadata at the master - Larger chunks → fewer client-to-chunkserver requests and less metadata, but hotspots and internal fragmentation - GFS uses 64 MB chunks
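To make the metadata side of the trade-off concrete, here is a back-of-the-envelope calculation; the figure of roughly 64 bytes of master metadata per chunk is an assumption taken from the paper's ballpark and is used only for illustration.

```python
CHUNK_SIZE = 64 * 1024 * 1024      # bytes per chunk (GFS default)
METADATA_PER_CHUNK = 64            # assumed bytes of master state per chunk

def master_metadata_bytes(total_data_bytes, chunk_size=CHUNK_SIZE):
    num_chunks = -(-total_data_bytes // chunk_size)    # ceiling division
    return num_chunks * METADATA_PER_CHUNK

one_pb = 10**15
print(master_metadata_bytes(one_pb) / 10**9)                          # ~0.95 GB per PB
print(master_metadata_bytes(one_pb, chunk_size=4 * 1024**2) / 10**9)  # ~15 GB per PB
# Shrinking the chunk from 64 MB to 4 MB multiplies master metadata by ~16x.
```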

  16. GFS: REPLICATION [Figure: a primary replica and two secondary replicas per chunk] - 3-way replication to handle faults - Primary replica for each chunk - Chain replication (consistency) - Decouple data and control flow - Data flow: pipelining, network-aware
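A minimal sketch of the decoupled control and data flow, using hypothetical `push_data`, `assign_serial`, and `apply_mutation` stubs rather than the real chunkserver interface: data is pipelined along the replica chain first, then the primary orders the mutation for all replicas.

```python
def gfs_write(data, primary, secondaries):
    """Sketch of one write to a chunk with an assigned primary replica."""
    chain = [primary] + secondaries
    # 1. Data flow: push the data once into the replica chain; each chunkserver
    #    buffers it and forwards it to the next hop (pipelined, network-aware).
    chain[0].push_data(data, next_hops=chain[1:])
    # 2. Control flow: the client asks the primary to commit. The primary
    #    assigns a serial number so every replica applies mutations in the
    #    same order, applies the write itself, and forwards the order on.
    serial = primary.assign_serial()
    for replica in chain:
        replica.apply_mutation(data, serial)
    return serial
```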

  17. RECORD APPENDS (the consistency model is tricky) - Write: client specifies the offset - Record Append: GFS chooses the offset - Consistency is subtle because there might be failures at the primary replica for the chunk - At-least-once → a retried record may appear more than once - Atomic → the entire record appears together
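A sketch of how an application might live with these semantics, assuming a hypothetical `gfs_file` object with `append` and `scan_records`; the per-record unique id is an application-level convention for filtering duplicates, not something GFS itself provides.

```python
import json
import uuid

def record_append(gfs_file, payload, max_retries=3):
    """Append one self-identifying record; retries may leave duplicates behind."""
    record = json.dumps({"id": str(uuid.uuid4()), "data": payload})
    for _ in range(max_retries):
        ok, offset = gfs_file.append(record)     # GFS chooses the offset
        if ok:
            return offset                        # record is there at least once
    raise IOError("record append failed after retries")

def read_records(gfs_file):
    """Yield each logical record once, skipping duplicates from retried appends."""
    seen = set()
    for raw in gfs_file.scan_records():
        rec = json.loads(raw)
        if rec["id"] in seen:
            continue                             # duplicate left by a retry
        seen.add(rec["id"])
        yield rec["data"]
```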

  18. MASTER OPERATIONS - No “directory” inode! No data structure that tracks the files in a directory; simplifies locking - Replica placement considerations: not all replicas on the same rack (failures), disk utilization, balancing (write) operations - Implementing deletes: lazy garbage collection
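A minimal sketch of the flat namespace and lazy deletion, with invented names (`MasterNamespace`, the hidden-name convention) standing in for the real master data structures; the multi-day grace period before reclamation follows the paper's description of garbage collection.

```python
import time

GRACE_PERIOD = 3 * 24 * 3600       # seconds before hidden files are reclaimed

class MasterNamespace:
    def __init__(self):
        # Flat lookup table from full pathname to file metadata; there is no
        # per-directory inode that lists the files "in" a directory.
        self.files = {}

    def delete(self, path):
        # Deletion is just a rename to a hidden, timestamped name.
        meta = self.files.pop(path)
        self.files[f"{path}.__deleted__.{int(time.time())}"] = meta

    def garbage_collect(self):
        # Background scan reclaims hidden files after the grace period;
        # orphaned chunks are collected lazily in later chunkserver scans.
        now = time.time()
        for name in list(self.files):
            if ".__deleted__." in name:
                deleted_at = int(name.rsplit(".", 1)[1])
                if now - deleted_at > GRACE_PERIOD:
                    del self.files[name]
```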

  19. FAULT TOLERANCE - Chunk replication with 3 replicas - Master: replication of the operation log and checkpoints; shadow master - Data integrity using checksum blocks
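A minimal sketch of the checksum-block idea: each chunk is covered by checksums over fixed-size blocks that are verified before data is returned. The 64 KB granularity matches the paper; using `zlib.crc32` and these helper names is an illustrative choice.

```python
import zlib

BLOCK_SIZE = 64 * 1024    # checksum granularity within a chunk

def compute_checksums(chunk_bytes):
    """One 32-bit checksum per 64 KB block of the chunk."""
    return [zlib.crc32(chunk_bytes[i:i + BLOCK_SIZE])
            for i in range(0, len(chunk_bytes), BLOCK_SIZE)]

def verified_read(chunk_bytes, checksums, offset, length):
    """Verify every checksum block overlapping the request before returning
    data; a mismatch means this replica is corrupt and the read should be
    redirected to another replica while the bad one is re-replicated."""
    first = offset // BLOCK_SIZE
    last = (offset + length - 1) // BLOCK_SIZE
    for b in range(first, last + 1):
        block = chunk_bytes[b * BLOCK_SIZE:(b + 1) * BLOCK_SIZE]
        if zlib.crc32(block) != checksums[b]:
            raise IOError(f"checksum mismatch in block {b}")
    return chunk_bytes[offset:offset + length]
```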

  20. DISCUSSION https://forms.gle/iUJh1MeVkKVRkt2X7

  21. GFS SOCIAL NETWORK You are building a new social networking application. The operations you will need to perform are: (a) add a new friend id for a given user, and (b) generate a histogram of the number of friends per user. How will you do this using GFS as your storage system? [Handwritten notes: one file per user? adding a new friend; metadata; a large number of small files]

  22. GFS EVAL List your takeaways from “Table 3: Performance metrics” [Handwritten note: read rate > write rate]

  23. GFS SCALE The evaluation (Table 2) shows clusters with up to 180 TB of data. What part of the design would need to change if we instead had 180 PB of data?

  24. WHAT HAPPENED NEXT

  25. Keynote at PDSW-DISCS 2017: 2nd Joint International Workshop On Parallel Data Storage & Data Intensive Scalable Computing Systems

  26. GFS EVOLUTION Motivation: - GFS Master: one machine not large enough for a large FS; single bottleneck for metadata operations (data path offloaded); fault tolerant, but not HA - Lack of predictable performance: no guarantees of latency (GFS problem: one slow chunkserver → slow writes)

  27. GFS EVOLUTION GFS master replaced by Colossus - Metadata stored in BigTable - Recursive structure? If metadata is ~1/10,000 the size of the data: 100 PB data → 10 TB metadata; 10 TB metadata → 1 GB metametadata; 1 GB metametadata → 100 KB meta...
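The recursion on this slide is just repeated division by the ~1/10,000 ratio; a few lines of arithmetic reproduce the numbers, stopping once the top level fits comfortably on one machine (the 1 MB cut-off is an arbitrary choice for the sketch).

```python
RATIO = 10_000                       # metadata ≈ data / 10,000 per level

def metadata_levels(data_bytes, small_enough=1 * 1024**2):
    """Yield successive metadata sizes until the top level is tiny."""
    size = data_bytes
    while size > small_enough:
        size /= RATIO
        yield size

for level, size in enumerate(metadata_levels(100 * 1000**5), start=1):  # 100 PB
    print(f"level {level}: {size:,.0f} bytes")
# level 1: 10,000,000,000,000 bytes (~10 TB of metadata)
# level 2:      1,000,000,000 bytes (~1 GB of metametadata)
# level 3:            100,000 bytes (~100 KB)
```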

  28. GFS EVOLUTION Need for efficient storage - Rebalance old, cold data - Distribute newly written data evenly across disks - Manage both SSDs and hard disks

  29. Heterogeneous storage - F4: Facebook blob stores - Key-value stores

  30. NEXT STEPS - Assignment 1 out tonight! - Next week: MapReduce, Spark
