Using Clusters for Scalable Services

Clusters are a common vehicle for improving scalability and availability at a single service site in the network. Are network services the "killer app" for clusters?

Internet Server Clusters

• Incremental scalability: just wheel in another box...
• Excellent price/performance: high-end PCs are commodities (high-volume, low margins).
• Fault-tolerance: "simply a matter of software".
• High-speed cluster interconnects are on the market: SANs + Gigabit Ethernet; cluster nodes can coordinate to serve requests with low latency.
• "Shared nothing."

[Saito] Porcupine: A Highly Available Cluster-based Mail Service

Yasushi Saito, Brian Bershad, and Hank Levy
University of Washington, Department of Computer Science and Engineering, Seattle, WA
http://porcupine.cs.washington.edu/

The Porcupine Wheel: functional homogeneity, replication, dynamic transaction scheduling, and automatic reconfiguration, driving availability, scale, performance, and manageability.

Yasushi's Slides

Yasushi's slides can be found on his web site at HP:
http://www.hpl.hp.com/personal/Yasushi_Saito/
I used his job talk slides with a few of my own mixed in, which follow.

Porcupine Replication: Overview

To add/delete/modify a message:
• Find and update any replica of the mailbox fragment.
  Do whatever it takes: make a new fragment if necessary... pick a new replica if the chosen replica does not respond.
• The replica asynchronously transmits updates to the other fragment replicas.
  Continuous reconciling of replica states.
• Log/force the pending update state and the target nodes that are to receive the update.
  On recovery, continue transmitting updates where you left off.
• Order updates by loosely synchronized physical clocks (see the sketch below).
  Clock skew should be less than the inter-arrival gap for a sequence of order-dependent requests... use nodeID to break ties.
• How many node failures can Porcupine survive? What happens if nodes fail "forever"?
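
The ordering rule in the last bullet is worth making concrete. Below is a minimal sketch in Python of ordering updates by a (loosely synchronized clock, nodeID) pair; the Update class and its field names are assumptions made up for illustration, not Porcupine's actual data structures.

    # Sketch: totally ordering replica updates by (loosely synchronized clock, nodeID).
    # The Update class and its fields are hypothetical, for illustration only.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Update:
        clock: float     # timestamp from the originating node's loosely synchronized clock
        node_id: int     # originating node; breaks ties when timestamps collide
        op: str          # e.g., "add", "delete", "modify"

        def order_key(self):
            # Total order: compare timestamps first, then nodeIDs.
            return (self.clock, self.node_id)

    def apply_in_order(updates):
        # Every replica sorts by the same key, so all replicas converge on one order
        # no matter in which order the network delivered the updates.
        for u in sorted(updates, key=Update.order_key):
            print(f"apply {u.op} from node {u.node_id} at t={u.clock}")

    apply_in_order([Update(10.002, 3, "modify"),
                    Update(10.001, 7, "add"),
                    Update(10.001, 2, "delete")])

Because Porcupine's updates are also idempotent (see the Key Points on the next page), replaying an update again after a recovery is harmless; the (clock, nodeID) order only has to resolve order-dependent updates.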
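
The replication overview above talks about updating "any replica of the mailbox fragment"; the Key Points slide on the next page names the mechanism: hashing with balanced bucket assignment, and read-one/write-all (ROWA) replication with load-balancing reads. Here is a sketch of what that might look like; the bucket count, replication factor, and least-loaded read policy are assumptions for illustration, not Porcupine's exact scheme.

    # Sketch: hash users to buckets, map buckets to a replica set of nodes,
    # read from one replica (load-balanced), write to all (ROWA).
    # Bucket count, replication factor, and load metric are illustrative assumptions.
    import hashlib

    NUM_BUCKETS = 256
    REPLICATION = 3
    nodes = [f"node{i}" for i in range(8)]
    node_load = {n: 0 for n in nodes}

    # Balanced bucket assignment: spread buckets round-robin over the nodes,
    # giving each bucket REPLICATION distinct replicas.
    bucket_map = {b: [nodes[(b + r) % len(nodes)] for r in range(REPLICATION)]
                  for b in range(NUM_BUCKETS)}

    def bucket_for(user: str) -> int:
        return int(hashlib.md5(user.encode()).hexdigest(), 16) % NUM_BUCKETS

    def read_replica(user: str) -> str:
        # "Read one": pick the least-loaded replica of this user's bucket.
        replicas = bucket_map[bucket_for(user)]
        return min(replicas, key=lambda n: node_load[n])

    def write_replicas(user: str) -> list:
        # "Write all": the update must eventually reach every replica of the bucket,
        # via the asynchronous propagation described in the replication overview above.
        return bucket_map[bucket_for(user)]

    print(read_replica("alice@example.com"), write_replicas("alice@example.com"))

Reads can go to whichever replica is least loaded (the "load-balancing reads"), while writes must reach all replicas, which is exactly what the asynchronous update propagation and reconciliation above provide.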

[Saito] Key Points

• COTS/NOW/ROSE: off-the-shelf components.
• Shared-nothing architecture (vs. shared disk).
• Functionally homogeneous (anything anywhere).
• Hashing with balanced bucket assignment to nodes (sketched at the end of the previous page).
• ROWA ("read one, write all") replication with load-balancing reads.
• Soft state vs. hard state: minimize hard state.
• Leverage weak consistency: "ACID vs. BASE".
• Idempotent updates and total ordering (loosely synchronized clocks).
• Operation logging/restart.
• Spread and affinity.

How Do Computers Fail?

Porcupine's failure assumptions: large clusters are unreliable.
• Assumption: live nodes respond correctly in bounded time, most of the time.
• The network can partition.
• Nodes can become very slow temporarily.
• Nodes can fail (and may never recover).
• Byzantine failures are excluded.

Gribble's Slides

Steve Gribble's slides can be found on his web site at UW:
http://www.cs.washington.edu/homes/gribble/pubs.html
Go to "selected talks" for the slides on DDS. I actually used his job talk slides with a few of my own mixed in on the basics of two-phase commit, which follow. It is important to understand the similarities/differences between Porcupine and DDS, and how they flow from the failure assumptions and application assumptions of each project.

Taming the Internet Service Construction Beast: Persistent, Cluster-based Distributed Data Structures (in Java!)

Steven D. Gribble (gribble@cs.berkeley.edu)
Ninja Research Group (http://ninja.cs.berkeley.edu)
The University of California at Berkeley, Computer Science Division

Committing Distributed Transactions

Transactions may touch data stored at more than one site. Each site commits (i.e., logs) its updates independently.
Problem: any site may fail while a commit is in progress, but after updates have been logged at another site. An action could "partly commit", violating atomicity.
Basic problem: individual sites cannot unilaterally choose to abort without notifying other sites.
"Log locally, commit globally."

Two-Phase Commit (2PC)

Solution: all participating sites must agree on whether or not each action has committed.
• Phase 1. The sites vote on whether or not to commit.
  Precommit: each site prepares to commit by logging its updates before voting "yes" (and enters the prepared phase).
• Phase 2. Commit iff all sites voted to commit.
  A central transaction coordinator gathers the votes. If any site votes "no", the transaction is aborted. Else, the coordinator writes the commit record to its log. The coordinator notifies the participants of the outcome.
Note: one server ==> no 2PC is needed, even with multiple clients.
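
Before walking through the protocol step by step on the next page, here is a minimal coordinator-side sketch in Python of the two phases just described. The Participant interface, the log() helper, and the in-memory demo participant are assumptions for illustration; a real coordinator forces its log to stable storage and retries notification after failures.

    # Sketch of a 2PC coordinator: commit iff every participant votes "yes".
    # The Participant interface and log() helper are hypothetical, for illustration.

    class Participant:
        def prepare(self, tx_id) -> bool:
            """Validate, force-log the updates, and return the vote (True = commit)."""
            raise NotImplementedError
        def commit(self, tx_id): ...
        def abort(self, tx_id): ...

    def log(record: str):
        # Stand-in for a forced write to a node's stable log.
        print("LOG:", record)

    def two_phase_commit(tx_id, participants) -> bool:
        # Phase 1: gather votes. Any "no" (or an exception standing in for a timeout) aborts.
        try:
            votes = [p.prepare(tx_id) for p in participants]
        except Exception:
            votes = [False]
        decision = all(votes)

        # The decision becomes durable once the commit/abort record hits the coordinator's log.
        log(f"{tx_id} {'COMMIT' if decision else 'ABORT'}")

        # Phase 2: notify participants (asynchronously in practice; retried after failures).
        for p in participants:
            (p.commit if decision else p.abort)(tx_id)
        return decision

    class InMemoryParticipant(Participant):
        def __init__(self, name, vote=True):
            self.name, self.vote = name, vote
        def prepare(self, tx_id):
            log(f"{self.name}: prepared {tx_id}" if self.vote else f"{self.name}: votes no on {tx_id}")
            return self.vote
        def commit(self, tx_id):
            log(f"{self.name}: commit {tx_id}")
        def abort(self, tx_id):
            log(f"{self.name}: abort {tx_id}")

    print(two_phase_commit("tx1", [InMemoryParticipant("A"), InMemoryParticipant("B")]))

The key point is that the outcome is decided at the coordinator's log write, not at the notifications: failures after that write are recoverable by re-driving phase 2, while a coordinator failure before it leaves prepared participants uncertain, as the next page discusses.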

The 2PC Protocol

1. Tx requests commit, by notifying the coordinator (C).
   C must know the list of participating sites.
2. Coordinator C requests each participant (P) to prepare.
3. Participants validate, prepare, and vote.
   Each P validates the request, logs the validated updates locally, and responds to C with its vote to commit or abort. If P votes to commit, Tx is said to be "prepared" at P.
4. Coordinator commits.
   Iff the P votes are unanimously to commit, C writes a commit record to its log and reports "success" for the commit request. Else abort.
5. Coordinator notifies participants.
   C asynchronously notifies each P of the outcome for Tx. Each P logs the outcome locally and releases any resources held for Tx.

Handling Failures in 2PC (a recovery sketch follows at the end of this page)

1. A participant P fails before preparing.
   Either P recovers and votes to abort, or C times out and aborts.
2. Each P votes to commit, but C fails before committing.
   The participants wait until C recovers and notifies them of the decision to abort. The outcome is uncertain until C recovers.
3. P or C fails during phase 2, after the outcome is determined.
   Carry out the decision by reinitiating the protocol on recovery. Again, if C fails, the outcome is uncertain until C recovers.

More Slides

The following are slides on "other" perspectives on Internet server clusters. We did not cover them in class this year, but I leave them in to add some context for the work we did discuss.

Clusters: A Broader View

MSCS ("Wolfpack") is designed as basic infrastructure for commercial applications on clusters.
• "A cluster service is a package of fault-tolerance primitives."
• The service handles startup, resource migration, failover, and restart.
• But: apps may need to be "cluster-aware".
  Apps must participate in recovery of their internal state, using facilities for logging, checkpointing, replication, etc.
• The service and node OS support uniform naming and virtual environments.
  Preserve continuity of access to migrated resources, and continuity of the environment for migrated resources.

Wolfpack: Resources

• The components of a cluster are nodes and resources.
  Shared nothing: each resource is owned by exactly one node.
• Resources may be physical or logical.
  Disks, servers, databases, mailbox fragments, IP addresses, ...
• Resources have types, attributes, and expected behavior.
• (Logical) resources are aggregated in resource groups.
  Each resource is assigned to at most one group.
• Some resources/groups depend on other resources/groups (see the dependency sketch at the end of this page).
  An admin-installed registry lists the resources and the dependency tree.
• Resources can fail.
  The cluster service/resource managers detect failures.

Fault-Tolerant Systems: The Big Picture

[Figure: a layered view of fault tolerance. Applications and services (database, mail service, cluster service) sit above the file/storage and messaging systems, which in turn sit above redundant hardware. Each layer has its own techniques: replication, logging, checkpointing, and voting at the application/service level; RAID parity, checksums, and ack/retransmission in the storage and messaging layers; parity and ECC in the hardware.]
Note the dependencies: redundancy can appear at any/each/every level, and the question is what failure semantics each level presents to the level above.
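
As promised under "Handling Failures in 2PC" above, here is a sketch of the recovery behavior, in Python. The log record names and the query_coordinator() helper are assumptions for illustration; the point is that a participant that has logged "prepared" but sees no outcome cannot decide on its own.

    # Sketch: 2PC recovery decisions driven by the local log. Record names and the
    # query_coordinator() helper are hypothetical, for illustration only.

    def participant_recover(local_log, query_coordinator):
        # local_log is the ordered list of records this participant forced to disk.
        if "commit" in local_log:
            return "commit"          # outcome already known; just finish cleanup
        if "abort" in local_log:
            return "abort"
        if "prepared" not in local_log:
            return "abort"           # failed before preparing (case 1): abort unilaterally
        # Prepared but no outcome logged (cases 2 and 3): we may NOT decide unilaterally.
        # Block until the coordinator recovers and reports the outcome.
        return query_coordinator()

    def coordinator_recover(coord_log, participants):
        # On coordinator recovery, re-drive phase 2 from the logged decision.
        decision = "commit" if "commit" in coord_log else "abort"
        for p in participants:
            p.deliver_outcome(decision)   # idempotent: participants ignore duplicates
        return decision

    # A participant that crashed after logging "prepared" must ask the coordinator:
    print(participant_recover(["prepared"], query_coordinator=lambda: "commit"))  # -> commit
    print(participant_recover([], query_coordinator=lambda: "commit"))            # -> abort

This is the blocking weakness of 2PC that the failure cases above describe: a prepared participant holds its locks and other resources until the coordinator's fate is resolved.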
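
As referenced from the Wolfpack: Resources slide above, here is a small sketch of why the admin-installed dependency tree matters: when a resource group fails over to another node, its resources must be brought online in dependency order. The resource names and dependency edges below are invented for illustration.

    # Sketch: bring a failed-over resource group online in dependency order
    # (a topological sort of the admin-installed dependency tree).
    # The resource names and edges below are invented for illustration.
    from graphlib import TopologicalSorter

    # "X depends on Y" means Y must be online before X.
    depends_on = {
        "mail-service": ["database", "ip-address"],
        "database":     ["disk"],
        "ip-address":   [],
        "disk":         [],
    }

    def online_order(deps):
        # TopologicalSorter takes {node: predecessors}; predecessors come out first.
        return list(TopologicalSorter(deps).static_order())

    print(online_order(depends_on))
    # e.g. ['disk', 'ip-address', 'database', 'mail-service'] -- dependencies first.
    # Taking the group offline would use the reverse of this order.

Failback and planned migration follow the same rule, which is one reason the dependency tree lives in the cluster registry rather than inside each application.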
