Large-scale Virtualization in the Emulab Network Testbed


  1. Large-scale Virtualization in the Emulab Network Testbed Mike Hibler, Robert Ricci, Leigh Stoller, Jonathon Duerig, Shashi Guruprasad, Tim Stack, Kirk Webb, Jay Lepreau

  2-5. Emulab: Network Testbed  (figure build slides)

  6-11. What's Wrong?
        • Too small
        • Inefficient
        Solution?
        • Virtualize
        • Multiplex

  12-15. The Basic Idea: Use virtualization to perform network experiments using fewer physical resources ... and do this in a way that is transparent to applications and preserves experiment fidelity.

  16-21. Challenges                      Opportunities
         • Fidelity                      • Closed world
         • Preserve network topology     • Can re-run experiments

  22-23. Not Just Picking a VM Technology: a complete virtual network experimentation system

  24. Full System
      • Virtualization technology (host and network)
      • Resource mapping
      • Feedback-directed emulation
      • IP address assignment
      • Scalable control system
      • Routing table calculation (sketch below)
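The deck never expands on the routing item, but the step is mechanical enough to sketch: precompute a static next-hop table for every node in the experiment topology. A minimal version assuming shortest-path (hop-count) routes; the function name and adjacency format are illustrative, not Emulab's actual interface.

```python
from collections import deque

def routing_tables(adjacency):
    """adjacency: node -> set of neighbors. Returns node -> {dest: next_hop}."""
    tables = {}
    for src in adjacency:
        next_hop = {}
        visited = {src}
        # Seed the BFS with src's neighbors, each remembering itself
        # as the first hop on the path.
        queue = deque((nbr, nbr) for nbr in adjacency[src])
        while queue:
            node, first = queue.popleft()
            if node in visited:
                continue
            visited.add(node)
            next_hop[node] = first
            queue.extend((nbr, first) for nbr in adjacency[node])
        tables[src] = next_hop
    return tables

# Example: line topology a - b - c - d
print(routing_tables({"a": {"b"}, "b": {"a", "c"},
                      "c": {"b", "d"}, "d": {"c"}}))
```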

  25. <virtualization>

  26. Start: FreeBSD jail  (sketch below)
      • Namespace isolation
      • Virtual disks
      We added network virtualization:
      • Ability to bind to multiple interfaces
      • New virtual network device (veth)
      • Separate routing tables
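For concreteness, a hedged sketch of launching a plain FreeBSD jail of that era from Python, using the classic jail(8) syntax ("jail path hostname ip-number command"). The veth device and per-jail routing tables were Emulab kernel extensions with no stock command equivalent, so only the baseline jail is shown; the path and address are made up.

```python
import subprocess

def start_jail(root, hostname, ip, command="/bin/sh /etc/rc"):
    # Classic jail(8) invocation: jail <path> <hostname> <ip-number> <command>
    subprocess.run(["jail", root, hostname, ip, command], check=True)

# Example (FreeBSD only): start_jail("/jails/node1", "node1", "10.0.0.1")
```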

  27-34. (figure-only build slides)

  35. <mapping>

  36. What does it mean to make a good mapping?

  37. Good Mapping
      • Pack well: use resources efficiently; specify packing criteria
      • Do it quickly: mapping is on the critical path for creating an experiment

  38. assign
      • Solves an NP-hard problem: pack both nodes and links, avoid scarce resources
      • Based on simulated annealing (paper: [Ricci+:CCR03]; skeleton below)
      • We extended it for virtual nodes
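Not assign's actual code, but the shape of a simulated-annealing mapper is easy to convey. A minimal skeleton: `cost` stands in for assign's scoring of link violations and scarce-resource use, and all names are illustrative.

```python
import math, random

def anneal(vnodes, pnodes, cost, steps=100_000, t0=10.0):
    """Map each virtual node to a physical node, minimizing cost(placement)."""
    placement = {v: random.choice(pnodes) for v in vnodes}
    cur = cost(placement)
    best, best_cost = dict(placement), cur
    for step in range(steps):
        temp = t0 * (1.0 - step / steps) + 1e-9      # linear cooling schedule
        v = random.choice(vnodes)                    # perturb one mapping
        old = placement[v]
        placement[v] = random.choice(pnodes)
        new = cost(placement)
        # Metropolis rule: always accept improvements; sometimes accept
        # worse placements so the search can escape local minima.
        if new <= cur or random.random() < math.exp((cur - new) / temp):
            cur = new
            if new < best_cost:
                best, best_cost = dict(placement), new
        else:
            placement[v] = old                       # undo rejected move
    return best, best_cost
```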

  39. Resource-Based Packing
      • Use quantities we can directly measure
      • Resource-based system: "This virtual node uses 100 MHz of CPU"; "This physical node has 3 GHz of CPU"
      • Works well for heterogeneous virtual and physical nodes
      (feasibility-check sketch below)
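The accounting behind the slide fits in a few lines: sum each virtual node's declared demands onto its physical host and reject any packing that exceeds a capacity. A sketch with made-up units and dictionaries; something like it could also serve inside the `cost` function of the annealing skeleton above.

```python
def fits(placement, demand, capacity):
    """placement: vnode -> pnode; demand/capacity: node -> {resource: amount}."""
    used = {}
    for vnode, pnode in placement.items():
        for resource, amount in demand[vnode].items():
            used.setdefault(pnode, {}).setdefault(resource, 0)
            used[pnode][resource] += amount
    # Valid only if no physical node's capacity is exceeded for any resource.
    return all(used[p][r] <= capacity[p][r] for p in used for r in used[p])

demand = {"v1": {"cpu_mhz": 100, "ram_mb": 64},
          "v2": {"cpu_mhz": 100, "ram_mb": 64}}
capacity = {"p1": {"cpu_mhz": 3000, "ram_mb": 2048}}
print(fits({"v1": "p1", "v2": "p1"}, demand, capacity))  # True
```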

  40. Assigning Quickly

  41-43. Small Topologies  (graph slides)

  44-46. Virtual Topologies  (graph slides)

  47-49. Prepass  (figure slides; coarsening sketch below)
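The prepass slides are figure-only here; in the published system the prepass shrinks the problem by coarsening the virtual topology before annealing, so assign maps clusters of virtual nodes rather than individual ones. A sketch assuming a simple greedy edge contraction with a cluster-size cap; the real prepass is more sophisticated.

```python
def coarsen(edges, max_cluster=4):
    """edges: iterable of (a, b) virtual links. Returns node -> cluster id."""
    cluster, size = {}, {}
    def cid(n):
        if n not in cluster:
            cluster[n], size[n] = n, 1
        while cluster[n] != n:              # find with path compression
            cluster[n] = cluster[cluster[n]]
            n = cluster[n]
        return n
    for a, b in edges:
        ra, rb = cid(a), cid(b)
        # Contract the edge while the combined cluster stays under the cap.
        if ra != rb and size[ra] + size[rb] <= max_cluster:
            cluster[rb] = ra
            size[ra] += size[rb]
    return {n: cid(n) for n in cluster}

# v1..v4 collapse into one cluster; v5 would push it over the cap.
print(coarsen([("v1", "v2"), ("v2", "v3"), ("v3", "v4"), ("v4", "v5")]))
```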

  50-51. Scaling With Prepass  (graph slides)

  52-55. Mapping Quality  (graph slides)

  56. <feedback>

  57-58. How do I know how tightly I can pack my virtual nodes? I don't!

  59. Closed, repeatable world

  60-68. The Plan  (loop sketch below)
         • Pick a packing
         • Run experiment
         • Monitor for artifacts
         • If artifacts found: re-pack and repeat
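The loop is simple enough to write down. A sketch in which every helper (`pick_packing`, `run_experiment`, `find_artifacts`, `measured_usage`) is a hypothetical stand-in for Emulab's actual machinery, passed in as callables.

```python
def feedback_run(topology, pick_packing, run_experiment,
                 find_artifacts, measured_usage, max_rounds=5):
    """Drive the plan: pack, run, watch for artifacts, re-pack, repeat."""
    usage_estimates = {}                       # vnode -> measured demands
    for _ in range(max_rounds):
        packing = pick_packing(topology, usage_estimates)
        results = run_experiment(packing)
        artifacts = find_artifacts(results)    # e.g. CPU pegged, paging
        if not artifacts:
            return results                     # packing held up: done
        # Re-pack: feed measured per-vnode usage back into the mapper.
        usage_estimates.update(measured_usage(results))
    raise RuntimeError("no artifact-free packing found within max_rounds")
```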

  69. Picking Initial Packing
      • Start one-to-one, possibly with a subset of the topology
      • Or start tightly packed, optimistically assuming low usage

  70. Monitoring for Artifacts  (sketch below)
      • CPU near 100%
      • Significant paging activity
      • Disk utilization
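One way to watch for those three signals on a physical host, using the third-party psutil package; Emulab's monitor is its own code, and the thresholds here are illustrative only.

```python
import psutil

def check_artifacts(window=5.0, cpu_thresh=95.0, disk_thresh=50e6):
    """Sample for `window` seconds; return which artifact signals fired."""
    swap0 = psutil.swap_memory().sout            # cumulative bytes swapped out
    io0 = psutil.disk_io_counters()
    cpu = psutil.cpu_percent(interval=window)    # blocks; avg CPU% over window
    paged = psutil.swap_memory().sout - swap0
    io1 = psutil.disk_io_counters()
    bytes_per_s = (io1.read_bytes + io1.write_bytes
                   - io0.read_bytes - io0.write_bytes) / window
    signals = []
    if cpu >= cpu_thresh:
        signals.append("cpu near 100%")
    if paged > 0:
        signals.append("significant paging")
    if bytes_per_s >= disk_thresh:
        signals.append("high disk utilization")
    return signals

print(check_artifacts())
```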

  71. Re-Packing
      • Measure resource use
      • Feed measurements into resource-based packing

  72. Feedback in a Nutshell
      • Rely on packing, not isolation
      • Discover packing factors empirically
      • Re-use packing factors between experiments

  73. <numbers>

  74. kindex: Packing Factors

  75-79. Feedback Case Study

         Round                    Transactions Per Second   Response Time (s)
         Bootstrap: 74 physical   2.29                      0.43
         Round 1: 7 physical      1.85                      0.53
         Round 2: 7 physical      2.29                      0.43

  80. Deployed Use
      • Creation time: 7 minutes for 100 nodes
      • 5,125 experiments; 296,621 virtual nodes
      • 32% of Emulab nodes are virtual
      • Average packing factor: 5.75
      • Average experiment: 58 nodes; max: 1,000 nodes

  81. Conclusion
      • Virtualization increases Emulab's capacity, transparently and with fidelity preserved
      • Requires solving several challenging problems
      • Proven useful in production
      www.emulab.net

  82. <end/>

  83. Related Work
      • Virtual network emulation: ModelNet, DieCast
      • Virtual machines: Xen, VMware, vservers, OpenVZ, NET
      • Network virtualization: NetNS, OpenVZ, Trellis, IMUNES
      • Feedback-based mapping: Hippodrome

  84. ModelNet
      • Applications can only run on edge nodes: single-homed only
      • More basic virtualization
      • No artifact detection, no feedback system
      • Emulab has a richer control framework
      • Emulab scales to much larger interior networks

  85. Minimal Effective Virtualization
      • Application transparency
      • Application fidelity
      • System capacity

  86. Application Transparency
      • Real applications (vs. simulation, virtual machines)
      • Keep most semantics of unshared machines (vs. simple processes)
      • Preserve experimenter's topology: full network virtualization

  87. Application Fidelity
      • Physical results ≈ virtual results
      • Virtual node interference
      • Perfect resource isolation vs. detect artifacts and re-run

  88. System Capacity
      • Low overhead: in-kernel (vservers, jails) vs. hypervisor (VMware, Xen)
      • Don't prolong experiments (cf. DieCast)
