  1. An open, shared platform for developing, deploying, and accessing planetary-scale applications
  Timothy Roscoe, Intel Research at Berkeley
  APAN, Cairns, 1 July 2004

  2. PlanetLab is…
  • Large collection of machines spread around the world for distributed systems teaching and research
  • Focus/catalyst for networking and systems community
  • Intel project ⇒ consortium of companies and universities

  3. Lots of work being done in widely distributed systems…
  • Students and researchers had no vehicle to try out their next n great ideas in this space
    – Lots of architectures
    – Lots of simulations
    – Lots of emulation on large clusters
    – Lots of folks calling their 17 friends before the next deadline
  • …but not the surprises and frustrations of experience at scale to drive innovation

  4. The value proposition
  • Institutions join, provide nodes
    – IA32 architecture servers
    – Hosted outside the firewall
    – Provide power, cooling, & bandwidth
  • In exchange, users get “slices” of many machines worldwide

  5. Origins and early progress
  • “Underground” meeting, March 2002
  • Intel seeds:
    – First 100 nodes
    – Operational support
  • First node up July 2002
  • By SOSP (deadline March 2003), 25% of accepted papers refer to PlanetLab
  • Large presence at SIGCOMM
  • 11 out of 27 papers in NSDI 2004
  • Currently ~500 class and research projects

  6. PlanetLab today
  • About 403 nodes, 169 sites, ~30 countries, 5 continents
  • Universities, labs, POPs, co-location facilities, DSL lines
  • Huge presence in systems research conferences
  • > 400 projects so far

  7. Network coverage

  8. What is PlanetLab good for?
  • Planetary-scale applications:
    – Low latency to widely spread users
    – Span boundaries: jurisdictional and administrative
    – Simultaneous viewpoints: on the network or sensors
    – Hardware deployment is undesirable
  • Long-running services, not only experiments / exercises

  9. What is it used for?
  • PlanetLab addresses the related problems of:
    – Deploying widely-distributed services
    – Evaluating competing approaches in a realistic setting
    – Evolving the network architecture to better support such services
  • So far, PlanetLab has been highly successful at this.

  10. PlanetLab is not…
  • A distributed supercomputer
  • A simulation platform
  • An Internet emulator
  • An arena for repeatable experiments
  • Completely representative of the current Internet

  11. PlanetLab is…
  • An opportunity to qualitatively validate distributed systems in a real deployment environment
  • An opportunity to gain valuable experience about what works and what doesn’t in the wide area at scale

  12. Why is it successful?
  • Community model
    – “Network effects”
    – Lots of benefit from small entry fee
  • Sliceability
    – Enables multiple approaches
    – Sharing of the platform
  • Virtual machine interface
    – Emphasis on multiplexing the machine
    – Isolation left to the VMM

  13. PlanetLab relationships
  • PlanetLab ↔ member institutions
    – Shared control of nodes
  • PlanetLab ↔ users
    – Distributed virtualization, slices
  • PlanetLab ↔ research builders
    – Shared interfaces, unbundled management
  • PlanetLab ↔ rest of the Internet
    – Isolation, security, packet auditing

  14. Architectural principles
  • Application-Centric Interfaces
    – Evolve an existing API rather than create a new one
  • Distributed Virtualisation
    – Slices and isolation
  • Unbundled Management
    – Community infrastructure services
  • Self-obsolescence
    – Everything initially built should eventually be replaceable

  15. Application-Centric Interfaces
  • Inherent problems
    – Learning curve for users
    – Stable platform versus research into platforms
    – Integrating testbeds with desktop machines
  • Approach
    – Take a popular API (Linux), evolve the implementation
    – Later, separate the isolation and application interfaces
    – Provide a generic “shim” library for desktops

  16. Distributed Virtualization
  • Services run in slices
  • Slice: set of virtual machines (slivers)
  • Created by a slice creation service acting as a broker
  [Node structure figure: the VMM (Linux + stuff) at the bottom, with the node manager, a local admin slice, and the service slivers running above it]

  17. Requirements for slices
  • Isolation of VMs:
    – Allocate resources
    – Partition or contextualize namespaces
    – Provide stable programming base
  • Isolation of PlanetLab:
    – Resource accounting and limits
    – Auditing of slice actions (e.g. packets): an unexpected requirement!

  18. Node virtualization options
  • Full virtualization (VMware, etc.)
    – Inadequate scaling for PlanetLab
    – Long term: investigate hardware support
  • Paravirtualization (Xen, Denali)
    – Not yet mature, but supports multiple OSes
    – Very attractive in the medium term
  • Syscall-level virtualization (VServers)
    – Works today, fairly stable
    – Only one operating system (Linux)

  19. VServers
  • Third-party Linux kernel patch
    – Provides the illusion of a virtual Linux kernel
    – Much more scalable than UML
  • Node manager runs in the special context 0
  • Linux “capabilities” for access control
    – Local admin slice has extra privilege
  • Slice creation provides (sketch below):
    – New file system (copy-on-write subset of the main root)
    – Installed ssh keys for slice users
    – Empty root password
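The slice-creation steps above boil down to populating a private file tree and installing credentials. A minimal Python sketch of that idea follows; the paths, function name, and slice name are assumptions for illustration, not PlanetLab's actual tooling.

```python
# Illustrative sketch only: the paths and steps are assumptions, not
# PlanetLab's actual slice-creation code. Real VServers build the per-slice
# root as a copy-on-write subset of the main root and run the sliver in its
# own kernel context; here plain directories stand in for that.
import os

def create_sliver(slice_name, user_ssh_keys):
    root = os.path.join("/tmp/vservers-demo", slice_name)  # hypothetical per-slice root
    ssh_dir = os.path.join(root, "root", ".ssh")
    os.makedirs(ssh_dir, exist_ok=True)                     # stand-in for the COW file system

    # Install the slice users' public keys so they can ssh straight into the sliver.
    with open(os.path.join(ssh_dir, "authorized_keys"), "w") as f:
        f.write("\n".join(user_ssh_keys) + "\n")

    # The root password is left empty/locked: access is via the installed keys only.
    return root

print(create_sliver("demo_slice", ["ssh-rsa AAAA... user@example.org"]))
```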

  20. Resource allocation
  • rspecs describe resource bundles
  • The node manager doles out rcaps for the node: rcap <- acquire(rspec)
  • rcaps can be transferred or traded
  • An rcap is eventually bound to a slice (redeemed): bind(slice_name, rcap)
  • Binding resources to slices is a higher-level policy
    – (e.g. Globus, SHARP, Emulab, etc.)
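As a concrete reading of the acquire/bind vocabulary, here is a toy, in-memory node manager in Python. The class, resource names, and slice name are hypothetical; the real node manager enforces these reservations in the kernel/VMM rather than in a dictionary.

```python
# Toy, in-memory reading of the acquire/bind vocabulary above.
import secrets

class NodeManager:
    def __init__(self, capacity):
        self.capacity = capacity        # e.g. {"cpu_share": 100, "mem_mb": 4096}
        self.rcaps = {}                 # rcap -> rspec, not yet bound to a slice
        self.bindings = {}              # slice_name -> list of bound rspecs

    def acquire(self, rspec):
        """Reserve a resource bundle; return an unforgeable capability (rcap)."""
        for k, v in rspec.items():
            if self.capacity.get(k, 0) < v:
                raise ValueError("insufficient " + k)
        for k, v in rspec.items():
            self.capacity[k] -= v
        rcap = secrets.token_hex(16)    # rcaps can be handed around or traded
        self.rcaps[rcap] = rspec
        return rcap

    def bind(self, slice_name, rcap):
        """Redeem an rcap, binding its resources to a slice on this node."""
        rspec = self.rcaps.pop(rcap)    # raises KeyError if unknown or already used
        self.bindings.setdefault(slice_name, []).append(rspec)

nm = NodeManager({"cpu_share": 100, "mem_mb": 4096})
cap = nm.acquire({"cpu_share": 10, "mem_mb": 256})
nm.bind("demo_slice", cap)             # the binding policy itself lives at a higher level
```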

  21. Dynamic slice creation
  [Diagram: nodes N1 … Nm each run an agent that advertises a description (rspec) to a broker (PlanetLab Central, SHARP, Emulab, etc.); an application’s service manager obtains candidates from the broker, reserves a ticket (rcap), and acquires it on the nodes, where the rcap becomes a lease]
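A similarly toy sketch of the brokered flow in the diagram, with hypothetical names: per-node agents advertise what their node offers, a service manager asks the broker for candidate nodes and reserves a ticket, and the ticket (an rcap) would later be redeemed on each node via bind(). This is not PlanetLab Central's or SHARP's actual API.

```python
# Hypothetical broker for the agent / broker / service-manager flow.
class Broker:
    def __init__(self):
        self.adverts = {}                 # node -> rspec advertised by its agent

    def advertise(self, node, rspec):
        self.adverts[node] = rspec        # agent: "here is what node N offers"

    def candidates(self, rspec):
        return [n for n, offer in self.adverts.items()
                if all(offer.get(k, 0) >= v for k, v in rspec.items())]

    def reserve(self, node, rspec):
        return {"node": node, "rspec": rspec, "rcap": "ticket-0001"}  # illustrative ticket

broker = Broker()
broker.advertise("N1", {"cpu_share": 100})
broker.advertise("N2", {"cpu_share": 20})

want = {"cpu_share": 50}
ticket = broker.reserve(broker.candidates(want)[0], want)
print(ticket)                             # the service manager redeems this on N1
```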

  22. Network Virtualization
  • How to present a shared physical network interface to VMs?
  • Partition vs. contextualization
  • More than simple multiplexing:
    – Raw sockets
    – Per-slice routing tables
    – GRE tunnels
    – Shared DNS sockets
    – Etc.

  23. Current solution: very simple
  • Port reservations: included in the rspec
  • “Safe raw sockets”
    – Tap packets sent or received by the slice’s own sockets
    – Send any packet that could be sent by a socket owned by the slice
    – E.g. user-space TCP (sketched below)
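To make "user-space TCP over raw sockets" concrete, here is a plain Linux raw-socket receive in Python. This uses the ordinary kernel facility and needs root (CAP_NET_RAW) on a normal host; PlanetLab's "safe raw sockets" offer roughly this programming model to unprivileged slices, with the kernel restricting taps and sends to the slice's own flows and port reservations.

```python
# Ordinary raw socket receive; on PlanetLab the equivalent "safe" socket is
# limited to the slice's own traffic.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_TCP)
s.settimeout(5.0)
try:
    pkt, src = s.recvfrom(65535)          # raw TCP segment, IP header included
    print(str(len(pkt)) + "-byte segment from " + src[0])
except socket.timeout:
    print("no TCP traffic observed in 5 seconds")
finally:
    s.close()
```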

  24. Packet auditing
  • “Sniffer sockets” annotate a packet trace with slice identifiers
    – Used by privileged slices
    – An administrative slice logs all traffic
    – A web server allows remote inspection
  • Enables the mapping <ip packet, time> → <PI email addr> (toy sketch below)
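The <ip packet, time> → <PI email addr> mapping is essentially a join between the administrative slice's flow log and the slice registry. A toy illustration with made-up data structures; nothing here is PlanetLab's actual logger or schema.

```python
# Hypothetical audit lookup: trace an observed flow back to a responsible contact.
from datetime import datetime

# (slice, (src ip, src port), active-from, active-to) records from the admin slice
flow_log = [
    ("demo_slice", ("10.0.0.5", 5851),
     datetime(2004, 7, 1, 9, 0), datetime(2004, 7, 1, 17, 0)),
]
pi_contact = {"demo_slice": "pi@example.edu"}   # hypothetical slice -> PI registry

def audit(src_ip, src_port, when):
    for slice_name, (ip, port), start, end in flow_log:
        if (ip, port) == (src_ip, src_port) and start <= when <= end:
            return pi_contact.get(slice_name)
    return None

print(audit("10.0.0.5", 5851, datetime(2004, 7, 1, 12, 0)))  # -> pi@example.edu
```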

  25. Unbundled Management
  • PlanetLab management is a planetary-scale application
    – Allow many groups to experiment with infrastructure services in parallel
    – Low-level, sharable interfaces
    – Nodes only provide local abstractions
    – Includes VM creation ⇒ slices are defined by services

  26. Unbundled Management
  • Partition management into orthogonal services:
    – Resource discovery
    – Monitoring system health
    – Topology management
    – Managing user accounts and credentials
    – Software distribution and updates
  • Approach
    – Management services run in their own slices
    – Allow competing alternatives
    – Engineer for innovation (minimal interfaces)

  27. What do people use it for? (some we know about)
  • Overlay networks
    – RON, ROM++, ESM, XBone, ABone, etc.
  • Content dist. networks
    – CoDeeN, ESM, UltraPeer emulation, Gnutella mapping
  • Network measurement
    – Scriptroute, *Probe, I3, etc.
  • Management and monitoring
    – Ganglia, InfoSpect, Scout Monitor, BGP sensors, etc.
  • Application-level multicast
    – ESM, Scribe, TACT, etc.
  • Distributed hash tables
    – Chord, Tapestry, Pastry, Bamboo, etc.
  • Wide-area distributed storage
    – Oceanstore, SFS, CFS, Palimpsest, IBP
  • Virtualization and isolation
    – Denali, VServers, SILK, management VMs, etc.
  • Router design implications
    – NetBind, Scout, NewArch, Icarus, etc.
  • Resource allocation
    – SHARP, Slices, XenoCorp, automated contracts
  • Testbed federation
    – NetBed, RON, XenoServers
  • Distributed query processing
    – PIER, IrisLog, Sophia, etc.
  • Etc., etc., etc.

  28. Example #1: OpenHash (Brad Karp, Sylvia Ratnasamy, Sean Rhea)
  • Sharable, stable DHT service
    – “Turn on, put in, get out”
    – Accessible from any machine
    – “Redir” allows consistent hashing over arbitrary node sets (sketched below)
  • Implemented over Bamboo
    – Extremely robust and churn-resilient DHT implementation
  • Going live this week
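A rough sketch of the consistent-hashing idea behind "Redir": hash the current node set onto a ring, hash the key, and pick the first node at or after it, so any client can find the responsible node without coordination. The hash choice and node names are illustrative; this is not OpenHash's actual protocol.

```python
# Minimal consistent-hash ring over an arbitrary node set.
import bisect
import hashlib

def h(s):
    return int(hashlib.sha1(s.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes):
        self._points = sorted((h(n), n) for n in nodes)
        self._keys = [p for p, _ in self._points]

    def owner(self, key):
        i = bisect.bisect_left(self._keys, h(key)) % len(self._points)
        return self._points[i][1]         # node responsible for this key

ring = Ring(["planetlab1.example.org",
             "planetlab2.example.org",
             "planetlab3.example.org"])
print(ring.owner("my-application-key"))   # node a client would send put/get to
```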
