  1. Distributed Systems: Principles and Paradigms
     Maarten van Steen
     VU Amsterdam, Dept. Computer Science
     Room R4.20, steen@cs.vu.nl
     Chapter 11: Distributed File Systems
     Version: December 2, 2009

  2. Contents
     Chapter 01: Introduction
     02: Architectures
     03: Processes
     04: Communication
     05: Naming
     06: Synchronization
     07: Consistency & Replication
     08: Fault Tolerance
     09: Security
     10: Distributed Object-Based Systems
     11: Distributed File Systems
     12: Distributed Web-Based Systems
     13: Distributed Coordination-Based Systems

  3. Distributed File Systems (11.1 Architecture)
     General goal: Try to make a file system transparently available to remote clients.
     [Figure: two models for accessing remote files. Remote access model: the file stays on the server, and the client sends requests to access the remote file. Upload/download model: (1) the file is moved to the client, (2) accesses are done on the client, (3) when the client is done, the file is returned to the server.]

  4. Example: NFS Architecture (11.1 Architecture)
     NFS is implemented using the Virtual File System abstraction, which is now used for lots of different operating systems.
     [Figure: the basic NFS architecture. On both client and server, a system call layer sits on top of a virtual file system (VFS) layer. On the client, the VFS layer dispatches either to the local file system interface or to the NFS client, whose RPC client stub communicates across the network with the server's RPC server stub, NFS server, and local file system interface.]

  5. Example: NFS Architecture (11.1 Architecture)
     Essence: VFS provides a standard file system interface and hides the difference between accessing a local or a remote file system.
     Question: Is NFS actually a file system?

  6. NFS File Operations (11.1 Architecture)

     Oper.     v3   v4   Description
     Create    Yes  No   Create a regular file
     Create    No   Yes  Create a nonregular file
     Link      Yes  Yes  Create a hard link to a file
     Symlink   Yes  No   Create a symbolic link to a file
     Mkdir     Yes  No   Create a subdirectory
     Mknod     Yes  No   Create a special file
     Rename    Yes  Yes  Change the name of a file
     Remove    Yes  Yes  Remove a file from a file system
     Rmdir     Yes  No   Remove an empty subdirectory
     Open      No   Yes  Open a file
     Close     No   Yes  Close a file
     Lookup    Yes  Yes  Look up a file by means of a name
     Readdir   Yes  Yes  Read the entries in a directory
     Readlink  Yes  Yes  Read the path name in a symbolic link
     Getattr   Yes  Yes  Get the attribute values for a file
     Setattr   Yes  Yes  Set one or more file-attribute values
     Read      Yes  Yes  Read the data contained in a file
     Write     Yes  Yes  Write data to a file

  7. Cluster-Based File Systems (11.1 Architecture)
     Observation: When dealing with very large data collections, a simple client-server approach is not going to work ⇒ to speed up file accesses, apply striping techniques by which files can be fetched in parallel.
     [Figure: whole-file distribution, in which complete files (a through e) are placed on different servers, versus a file-striped system, in which the blocks of each file are spread across several servers.]
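     The file-striped idea in a minimal sketch. BlockServer, stripe, and fetch are hypothetical names standing in for real storage servers; the point is that the blocks of one file land on different servers and can be fetched in parallel.

        from concurrent.futures import ThreadPoolExecutor

        BLOCK_SIZE = 4  # toy block size; real systems use tens of KB to MB

        class BlockServer:
            """Hypothetical in-memory stand-in for one storage server."""
            def __init__(self):
                self.blocks = {}
            def put_block(self, key, data):
                self.blocks[key] = data
            def get_block(self, key):
                return self.blocks[key]

        def stripe(data, servers):
            """Split the file into blocks and place block i on server i % len(servers)."""
            blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
            for i, block in enumerate(blocks):
                servers[i % len(servers)].put_block(("f", i), block)
            return len(blocks)

        def fetch(nblocks, servers):
            """Fetch all blocks in parallel and reassemble the file."""
            with ThreadPoolExecutor() as pool:
                parts = pool.map(lambda i: servers[i % len(servers)].get_block(("f", i)),
                                 range(nblocks))
            return b"".join(parts)

        servers = [BlockServer() for _ in range(3)]
        n = stripe(b"hello striped file systems", servers)
        assert fetch(n, servers) == b"hello striped file systems"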

  8. Example: Google File System (11.1 Architecture)
     [Figure: the organization of a Google cluster of servers. A GFS client sends a file name and chunk index to the master, which replies with the contact address of a chunk server; the client then sends a chunk ID and byte range directly to that chunk server and receives the chunk data. The master exchanges instructions and chunk-server state with the chunk servers, each of which stores its chunks in a local Linux file system.]
     The Google solution: divide files into large 64 MB chunks, and distribute/replicate chunks across many servers:
     - The master maintains only a (file name, chunk server) table in main memory ⇒ minimal I/O
     - Files are replicated using a primary-backup scheme; the master is kept out of the loop
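     A sketch of the resulting two-step read path, assuming a hypothetical in-memory Master table and ChunkServer objects (locate(), read(), and the table layout are illustrative, not the real GFS interfaces).

        CHUNK_SIZE = 64 * 2**20  # 64 MB chunks

        class ChunkServer:
            """Hypothetical chunk server: chunk handle -> chunk contents."""
            def __init__(self):
                self.chunks = {}
            def read(self, handle, offset, length):
                return self.chunks[handle][offset:offset + length]

        class Master:
            """Keeps only (file name, chunk index) -> (handle, chunk servers) in memory."""
            def __init__(self):
                self.table = {}
            def locate(self, fname, chunk_index):
                return self.table[(fname, chunk_index)]

        def gfs_read(master, fname, offset, length):
            # Step 1: ask the master which chunk servers hold the chunk.
            handle, servers = master.locate(fname, offset // CHUNK_SIZE)
            # Step 2: fetch the data directly from a chunk server; the master
            # never touches file data, which keeps its I/O load minimal.
            return servers[0].read(handle, offset % CHUNK_SIZE, length)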

  9. P2P-Based File Systems (11.1 Architecture)
     [Figure: the Ivy distributed file system consists of a file system layer (Ivy) on top of a block-oriented storage layer (DHash), which in turn runs on the Chord DHT layer; the node where a file system is rooted participates in all three layers.]
     Basic idea: store data blocks in the underlying P2P system:
     - Every data block with content D is stored on a node with hash h(D). This allows for an integrity check.
     - Public-key blocks are signed with the associated private key and looked up with the public key.
     - A local log of file operations keeps track of (blockID, h(D)) pairs.
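     A minimal sketch of the content-hash blocks, assuming a plain dictionary stands in for the DHash/Chord layer and SHA-1 plays the role of h; fetching a block under its own hash makes the integrity check trivial.

        import hashlib

        store = {}  # stands in for the DHash/Chord layer: block ID -> block contents

        def put_block(data: bytes) -> str:
            """Store a content-hash block under h(D); the hash is the block ID."""
            key = hashlib.sha1(data).hexdigest()
            store[key] = data
            return key

        def get_block(key: str) -> bytes:
            """Fetch a block and verify its integrity by recomputing the hash."""
            data = store[key]
            if hashlib.sha1(data).hexdigest() != key:
                raise ValueError("block failed integrity check")
            return data

        block_id = put_block(b"some file data")
        assert get_block(block_id) == b"some file data"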

  10. RPCs in File Systems (11.3 Communication)
      Observation: Many (traditional) distributed file systems deploy remote procedure calls to access files. When wide-area networks need to be crossed, alternatives need to be exploited.
      [Figure: (a) reading data from a file using one RPC per operation: a LOOKUP to resolve the name, then a READ for the file data, each with its own round trip. (b) reading data using a single compound procedure (as in NFSv4): LOOKUP, OPEN, and READ are grouped into one request, saving round trips.]
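      A toy sketch of the compound idea. server_execute and its tiny file table are hypothetical; the point is that several operations travel in one message and are executed in order, stopping at the first failure.

        def server_execute(compound):
            """Execute a list of (operation, argument) pairs in order; stop on failure."""
            files = {"/etc/motd": b"hello"}
            results, handle = [], None
            for op, arg in compound:
                if op == "LOOKUP":
                    if arg not in files:
                        results.append(("LOOKUP", "ENOENT"))
                        break                      # remaining operations are not executed
                    results.append(("LOOKUP", "ok"))
                    handle = arg
                elif op == "OPEN":
                    results.append(("OPEN", "ok"))
                elif op == "READ":
                    results.append(("READ", files[handle]))
            return results

        # One message instead of three separate RPC round trips.
        print(server_execute([("LOOKUP", "/etc/motd"), ("OPEN", None), ("READ", None)]))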

  11. Example: RPCs in Coda (11.3 Communication)
      Observation: When dealing with replicated files, sequentially sending information is not the way to go.
      [Figure: (a) sending an invalidation message to each caching client one at a time, waiting for a reply before contacting the next; (b) sending the invalidation messages to all clients in parallel and collecting the replies, as Coda does with its MultiRPC mechanism.]
      Note: In Coda, clients can cache files, but will be informed when an update has been performed.
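      A small sketch of the difference, with a sleep standing in for one RPC round trip; sent in parallel, all invalidations overlap and the total latency stays close to a single round trip.

        import time
        from concurrent.futures import ThreadPoolExecutor

        def invalidate(client):
            """Stand-in for one invalidation RPC; each call costs one round trip."""
            time.sleep(0.1)                       # simulated round-trip delay
            return f"{client}: invalidated"

        clients = ["client-A", "client-B", "client-C"]

        # Sequential: total time grows linearly with the number of caching clients.
        start = time.time()
        seq = [invalidate(c) for c in clients]
        print("sequential:", round(time.time() - start, 2), "s")

        # Parallel (MultiRPC-style): all invalidations are outstanding at once.
        start = time.time()
        with ThreadPoolExecutor() as pool:
            par = list(pool.map(invalidate, clients))
        print("parallel:  ", round(time.time() - start, 2), "s")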

  12. File Sharing Semantics (11.5 Synchronization)
      Problem: When dealing with distributed file systems, we need to take into account the ordering of concurrent read/write operations and the expected semantics (i.e., consistency).
      [Figure: (a) on a single machine, process A reads the original file "ab" and writes "c"; a subsequent read by process B returns "abc". (b) in a distributed system, client machine #1 reads "ab" from the file server and writes "c" to its local copy; a read on client machine #2 still gets the original "ab".]

  13. File Sharing Semantics (11.5 Synchronization)
      Semantics:
      - UNIX semantics: a read operation returns the effect of the last write operation ⇒ can only be implemented for remote access models in which there is only a single copy of the file
      - Transaction semantics: the file system supports transactions on a single file ⇒ the issue is how to allow concurrent access to a physically distributed file
      - Session semantics: the effects of read and write operations are seen only by the client that has opened (a local copy of) the file ⇒ what happens when the file is closed? Only one client may actually win (see the sketch below)
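      A minimal sketch of session semantics, assuming a dictionary plays the server and SessionFile is a hypothetical client-side handle; writes stay in a private copy until close, and when two sessions close the same file, the last close wins.

        class SessionFile:
            """Session semantics: open() copies the file, writes stay local,
            and the copy is written back only on close() (last close wins)."""
            def __init__(self, server, name):
                self.server, self.name = server, name
                self.data = server[name]            # private working copy
            def append(self, more: bytes):
                self.data += more                   # invisible to other clients
            def close(self):
                self.server[self.name] = self.data  # changes become visible only now

        server = {"f": b"ab"}
        s1 = SessionFile(server, "f")
        s2 = SessionFile(server, "f")
        s1.append(b"c")
        assert server["f"] == b"ab"                 # other clients still see the old file
        s1.close()                                  # publishes "abc"
        s2.close()                                  # s2 closed last, so its unmodified copy wins
        assert server["f"] == b"ab"                 # s1's update is lost: only one client wins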

  14. Example: File Sharing in Coda (11.5 Synchronization)
      Essence: Coda assumes transactional semantics, but without the full-fledged capabilities of real transactions.
      Note: Transactional issues reappear in the form of "this ordering could have taken place."
      [Figure: client A opens file f for reading (session S_A) and receives a copy; while S_A is still in progress, client B opens f for writing (session S_B), updates it, and closes it, upon which the server invalidates A's copy. The net effect is as if session S_A had taken place entirely before session S_B.]

  15. Consistency and Replication (11.6 Consistency and Replication)
      Observation: In modern distributed file systems, client-side caching is the preferred technique for attaining performance; server-side replication is done for fault tolerance.
      Observation: Clients are allowed to keep (large parts of) a file, and will be notified when control is withdrawn ⇒ servers are now generally stateful.
      [Figure: file delegation with recall: (1) the client asks for a file, (2) the server delegates the file to the client, (3) the server later recalls the delegation, (4) the client returns the updated file.]

  16. Example: Client-Side Caching in Coda (11.6 Consistency and Replication)
      [Figure: the use of local copies when opening a session in Coda. Client A opens file f for reading (session S_A) and receives a copy together with a callback promise; when A closes and later re-opens f, the server answers "OK (no file transfer)" because the cached copy is still valid. Client B opens f for writing (sessions S_B) and closes; the resulting update makes the server invalidate A's copy (callback break), so A's next open transfers the file again.]
      Note: By making use of transactional semantics, it becomes possible to further improve performance.
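      A sketch of callback promises under these assumptions (in-memory Server and Client classes with illustrative method names): a client reuses its cached copy for free until the server breaks the callback after an update.

        class Client:
            def __init__(self, name, server):
                self.name, self.server = name, server
                self.cache = {}                     # file -> data, valid while the callback holds
            def open_read(self, fname):
                if fname in self.cache:             # callback promise intact: no transfer
                    return self.cache[fname]
                data = self.server.fetch(fname, self)
                self.cache[fname] = data
                return data
            def callback_break(self, fname):        # invoked by the server after an update
                self.cache.pop(fname, None)

        class Server:
            def __init__(self):
                self.files = {"f": b"v1"}
                self.callbacks = {}                 # file -> clients holding a callback promise
            def fetch(self, fname, client):
                self.callbacks.setdefault(fname, set()).add(client)
                return self.files[fname]
            def store(self, fname, data):
                self.files[fname] = data
                for c in self.callbacks.pop(fname, set()):
                    c.callback_break(fname)         # break all outstanding promises

        server = Server()
        a = Client("A", server)
        a.open_read("f")                            # transfers f and records a callback promise
        a.open_read("f")                            # served from the cache: no file transfer
        server.store("f", b"v2")                    # an update elsewhere breaks the callback
        assert a.open_read("f") == b"v2"            # the next open transfers the new version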

  17. Example: Server-Side Replication in Coda (11.6 Consistency and Replication)
      [Figure: two clients with different AVSGs for the same replicated file. Servers S1, S2, and S3 each hold a replica of file f; a broken network partitions them so that client A can reach only S1 and S2, while client B can reach only S3.]
      Main issue: ensure that concurrent updates are detected:
      - Each client has an Accessible Volume Storage Group (AVSG), a subset of the actual VSG.
      - Version vector: CVV_i(f)[j] = k means that server S_i knows that server S_j has seen version k of f.
      - Example: A updates f ⇒ the vectors at S1 and S2 become [+1, +1, +0]; B updates f ⇒ the vector at S3 becomes [+0, +0, +1]. When the partition heals, neither vector dominates the other, so the conflicting updates are detected.
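      A minimal sketch of that detection step. The absolute vector values below are illustrative (starting from a common base version [1, 1, 1] rather than the +1 increments above); what matters is that neither vector dominates the other.

        def dominates(v, w):
            """True if version vector v reflects at least every update that w reflects."""
            return all(a >= b for a, b in zip(v, w))

        def conflict(v, w):
            """Two version vectors conflict when neither dominates the other,
            i.e., the updates they record were made concurrently."""
            return not dominates(v, w) and not dominates(w, v)

        base = [1, 1, 1]              # version of f every server had before the partition
        s1 = [2, 2, 1]                # A's update reached S1 and S2
        s3 = [1, 1, 2]                # B's update reached only S3

        assert conflict(s1, s3)       # concurrent updates detected when the servers reconnect
        assert not conflict(s1, base) # s1 simply dominates the old version: no conflict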

  18. Fault Tolerance (11.7 Fault Tolerance)
      Observation: Fault tolerance is handled by simply replicating file servers, generally using a standard primary-backup protocol.
      [Figure: a primary-backup protocol for item x. W1: the client issues a write request; W2: the request is forwarded to the primary server for x; W3: the primary tells the backups to update; W4: the backups acknowledge the update; W5: the primary acknowledges that the write has completed. Reads (R1: read request, R2: response to read) can be served directly by a backup server.]
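      A sketch of the write path under these assumptions (in-memory Primary and Backup classes with illustrative method names); the client's write is acknowledged only after every backup has acknowledged the update.

        class Backup:
            def __init__(self):
                self.store = {}
            def update(self, key, value):          # W3: primary tells the backup to update
                self.store[key] = value
                return "ack"                        # W4: backup acknowledges the update
            def read(self, key):                    # R1/R2: reads may be served locally
                return self.store.get(key)

        class Primary:
            def __init__(self, backups):
                self.store, self.backups = {}, backups
            def write(self, key, value):            # W2: the forwarded write arrives here
                self.store[key] = value
                for b in self.backups:              # W3/W4: update all backups first
                    assert b.update(key, value) == "ack"
                return "write completed"            # W5: only now acknowledge the client

        backups = [Backup(), Backup()]
        primary = Primary(backups)
        print(primary.write("x", 42))               # W1 ... W5
        print(backups[0].read("x"))                 # a read served by a backup sees the update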

  19. High Availability in P2P Systems (11.7 Fault Tolerance)
      Problem: There are many fully decentralized file-sharing systems, but because churn is high (i.e., nodes come and go all the time), we may face an availability problem ⇒ replicate files all over the place (replication factor r_rep).
      Alternative: apply erasure coding:
      - Partition a file F into m fragments, and recode them into a collection F* of n > m fragments.
      - Property: any m fragments from F* are sufficient to reconstruct F.
      - Replication factor: r_ec = n / m (a toy example follows).
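      A toy instance of the m-of-n property with m = 2 and n = 3: split the file into two fragments plus one XOR parity fragment; any two of the three fragments reconstruct the file, so r_ec = 3/2 = 1.5 instead of r_rep = 3 for keeping three full copies.

        def xor(a: bytes, b: bytes) -> bytes:
            return bytes(x ^ y for x, y in zip(a, b))

        def encode(data: bytes):
            """Split data into m = 2 fragments and add one parity fragment (n = 3)."""
            half = len(data) // 2
            f1, f2 = data[:half], data[half:2 * half]   # assume even length for simplicity
            return {1: f1, 2: f2, 3: xor(f1, f2)}       # parity = f1 XOR f2

        def decode(frags):
            """Reconstruct the original data from any m = 2 of the n = 3 fragments."""
            if 1 in frags and 2 in frags:
                return frags[1] + frags[2]
            if 1 in frags and 3 in frags:
                return frags[1] + xor(frags[1], frags[3])
            return xor(frags[2], frags[3]) + frags[2]

        fragments = encode(b"availability")             # 12 bytes, even length
        lost_one = {k: v for k, v in fragments.items() if k != 2}   # fragment 2 is lost
        assert decode(lost_one) == b"availability"
        print("replication factor r_ec =", 3 / 2)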
