  1. A.A. (Academic Year) 2010-2011

  2. Tecnologie per la Virtualizzazione delle Reti di Calcolatori (Technologies for the Virtualization of Computer Networks). Massimo RIMONDINI, RCNG – 02/11/10

  3. Virtualization HA! HA! Easy!

  4. Virtualization D’Oh!

  5. Virtualization: “... the act of decoupling the (logical) service from its (physical) realization ...”; “execution of software in an environment separated from the underlying hardware resources”; “sufficiently complete simulation of the underlying hardware to allow software, typically a guest operating system, to run unmodified”; “complete simulation of the underlying hardware”

  6. Virtualization: full virtualization (emulation); partial virtualization; paravirtualization (OS-assisted virtualization); hardware-assisted virtualization; OS-level virtualization

  7. Full Virtualization (Emulation). Emulation of a fully fledged hardware box (e.g., x86). Binary translation for non-virtualizable instructions (which have different semantics in rings ≠ 0); direct execution for performance. Examples: VirtualBox, Parallels, ~VMware, Microsoft Virtual PC, QEMU, Bochs. [Picture from Understanding Full Virtualization, Paravirtualization, and Hardware Assist. White Paper. VMware.]
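
  A toy sketch (my own illustration, not any vendor's actual algorithm) of the per-instruction decision a binary-translating VMM makes: non-virtualizable instructions are rewritten/emulated, everything else runs directly on the CPU. Instruction names are simplified placeholders.

      # Toy model of binary translation + direct execution.
      # The set lists a few x86 instructions that behave differently
      # outside Ring 0 instead of trapping.
      NON_VIRTUALIZABLE = {"popf", "smsw", "sgdt"}

      def emulate(insn):
          print("VMM emulates", insn)          # translated: safe equivalent

      def run_natively(insn):
          print("CPU runs", insn, "directly")  # direct execution

      def execute_block(instructions):
          for insn in instructions:
              if insn in NON_VIRTUALIZABLE:
                  emulate(insn)
              else:
                  run_natively(insn)

      execute_block(["mov", "add", "popf", "mov"])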

  8. Partial Virtualization. E.g., address space virtualization. Supports multiple instances of a specific hardware device; does not support running a guest OS. Examples: FreeBSD network stack virtualization project, IBM M44/44X. (Mostly) of historical interest, although address space “virtualization” is a basic component of modern OSs.
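
  Address space virtualization is easiest to see as page-table translation; a minimal, self-contained sketch (toy page table, made-up mappings):

      PAGE_SIZE = 4096
      page_table = {0: 7, 1: 3, 2: 42}   # virtual page -> physical frame

      def translate(vaddr):
          vpage, offset = divmod(vaddr, PAGE_SIZE)
          if vpage not in page_table:
              raise MemoryError("page fault: page %d unmapped" % vpage)
          return page_table[vpage] * PAGE_SIZE + offset

      print(hex(translate(0x1234)))   # page 1 -> frame 3: 0x3234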

  9. Paravirtualization (OS-assisted virtualization). VMM and/or hypervisor; the guest OS communicates with the hypervisor. Requires changes to the guest OS (to prevent non-virtualizable instructions from reaching bare metal). Better performance; supports hardware-assisted virtualization. Examples: Xen, VMware, Microsoft Hyper-V, Oracle VM Server for SPARC, VirtualBox. [Picture from Understanding Full Virtualization, Paravirtualization, and Hardware Assist. White Paper. VMware.]

  10. Hypervisor (and VMM). Type 1 (native): runs on bare metal; loads prior to the OS (Microsoft Hyper-V, VMware vSphere). Type 2 (hosted): runs within a conventional OS. Virtual Machine Monitor: essentially the same as a hypervisor (?). [Picture from Understanding Full Virtualization, Paravirtualization, and Hardware Assist. White Paper. VMware.]

  11. Transparent Paravirtualization Huh? Photo credit goes to Flickr user Alexy.

  12. Transparent Paravirtualization. Virtual Machine Interface (VMI): a single VMI-compliant guest kernel. VMI calls may have two implementations: inline native instructions (run on bare metal) or indirect calls to the hypervisor. paravirt-ops (IBM + VMware + Red Hat + XenSource): part of the Linux kernel since 2.6.20. [Picture from Understanding Full Virtualization, Paravirtualization, and Hardware Assist. White Paper. VMware.]
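
  The paravirt-ops idea can be sketched as an indirection table bound once at boot: each sensitive operation points either at native code or at a hypercall. The names below (detect_hypervisor, the write_cr3 handlers) are illustrative, not the real kernel symbols:

      def native_write_cr3(value):
          print("native: load page-table base", hex(value))

      def hypercall_write_cr3(value):
          print("hypercall: ask the hypervisor to load", hex(value))

      def detect_hypervisor():
          return True   # assumption: pretend we booted on a hypervisor

      # pv-ops style dispatch table, filled in once at boot time
      pv_ops = {"write_cr3": hypercall_write_cr3 if detect_hypervisor()
                             else native_write_cr3}

      pv_ops["write_cr3"](0x1000)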

  13. Hardware-assisted Virtualization. The hypervisor runs below Ring 0; sensitive calls are automatically trapped to the hypervisor. Effective guest isolation. AMD-V (Pacifica), Intel VT-x (Vanderpool). Examples: VirtualBox, KVM, Microsoft Virtual PC, Xen, Parallels, ... [Picture from Understanding Full Virtualization, Paravirtualization, and Hardware Assist. White Paper. VMware.]
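
  On a Linux host, support for these extensions shows up in the CPU flags ('vmx' for Intel VT-x, 'svm' for AMD-V); a small, Linux-specific check:

      with open("/proc/cpuinfo") as f:
          for line in f:
              if line.startswith("flags"):
                  flags = set(line.split(":", 1)[1].split())
                  if "vmx" in flags:
                      print("Intel VT-x available")
                  elif "svm" in flags:
                      print("AMD-V available")
                  else:
                      print("no hardware virtualization support")
                  break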

  14. OS-Level Virtualization. A single OS/kernel: actually isolation (of contexts), not virtualization. No emulation overhead; requires a host kernel patch. Guests share the same system call interface, which limits the set of runnable guests. Processes in a virtual server are regular processes on the host; resources (e.g., memory) can be requested at runtime. Examples: Linux-VServer, Parallels Virtuozzo Containers, OpenVZ, Solaris Containers, FreeBSD Jails; to a certain extent, UMview and UML.
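
  A conceptual sketch of “isolation, not virtualization”: the guest is an ordinary host process confined to its own filesystem root (real tools add network, PID, and resource isolation on top). Requires root; /srv/guestfs is an assumed, pre-populated root tree:

      import os

      pid = os.fork()
      if pid == 0:                    # child = the "virtual server"
          os.chroot("/srv/guestfs")   # confine the filesystem view
          os.chdir("/")
          os.execv("/bin/sh", ["sh"]) # still a regular host process
      else:
          os.waitpid(pid, 0)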

  15. Comparison:

      Approach                          | Able to run a guest OS | Unmodified guest | Unmodified host | Overhead                     | Flexibility
      Full virtualization (emulation)   | ✓                      | ✓                | Depends         | High                         | Limited
      Partial virtualization            | ✗                      | ✓                | ✓               | Low                          | Limited
      Paravirtualization (OS-assisted)  | ✓                      | ✗                | ✗               | Low                          | High
      Hardware-assisted virtualization  | ✓                      | ✓                | ✓               | Mostly offloaded to hardware | Average
      OS-level virtualization           | ✗                      | Almost           | ✗               | Low                          | High

  16. Which virtualization for networking?

  17. Requirements depend heavily on the context: operational network vs. experimentation. In any case: performance and scalability; flexibility (configuration, programmability for development, support for mobility); strong isolation; and, ahem... usability.

  18. Tools for Managing Virtual Network Scenarios

  19. Netkit (Roma Tre University). VM engine: UML. VM interconnection: uml_switch. Core: shell scripts. Routing engine: Quagga, XORP. Lab description: (mostly) the native router language. Lightweight; easy-to-share labs; supports several networking technologies, including MPLS forwarding.
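
  A minimal sketch of what a Netkit lab looks like (file names follow Netkit's documented conventions; the addresses are made up), generated here with a short Python script:

      import os

      files = {
          "lab.conf":    "pc1[0]=A\npc2[0]=A\n",   # both VMs on collision domain A
          "pc1.startup": "ifconfig eth0 10.0.0.1 netmask 255.255.255.0 up\n",
          "pc2.startup": "ifconfig eth0 10.0.0.2 netmask 255.255.255.0 up\n",
      }
      for vm in ("pc1", "pc2"):                    # one directory per VM
          os.makedirs(os.path.join("mylab", vm), exist_ok=True)
      for name, content in files.items():
          with open(os.path.join("mylab", name), "w") as f:
              f.write(content)
      # then start the lab with: lstart -d mylab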

  20. Netkit

  21. VNUML (Universidad Politécnica de Madrid, Telefónica I+D). VM engine: UML. VM interconnection: uml_switch. Core: Python + Perl scripts. Routing engine: Quagga. Lab description: XML. Build, then play. Support for distributed emulation (segmentation): round robin, or round robin weighted by CPU load measured before deployment (see the sketch below).
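
  The two segmentation policies can be sketched as follows (my own toy implementation, not VNUML code); the host load values are made-up measurements taken before deployment:

      import itertools

      def round_robin(vms, hosts):
          cycle = itertools.cycle(hosts)
          return {vm: next(cycle) for vm in vms}

      def weighted_round_robin(vms, host_loads):
          spare = {h: 1.0 - load for h, load in host_loads.items()}
          count = dict.fromkeys(host_loads, 0)
          assignment = {}
          for vm in vms:
              # pick the host with the lowest VMs-per-spare-capacity ratio
              h = min(count, key=lambda h: (count[h] + 1) / max(spare[h], 1e-9))
              count[h] += 1
              assignment[vm] = h
          return assignment

      print(weighted_round_robin(["r1", "r2", "r3", "r4"],
                                 {"hostA": 0.2, "hostB": 0.6}))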

  22. VNUML. H: ssh, scp, rsync; W: SNMP, telnet, TFTP.

  23. VNUML backplane: one or more 802.1Q-compliant switches; host-switch and switch-switch connections are trunks.

  24. Marionnet (Université Paris). VM engine: UML. VM interconnection: uml_switch, VDE. Core: OCaml. Routing engine: Quagga. Lab description: GUI (dot-based layout). Ability to export a project file for reuse. Network impairments (delay, loss → unidirectional links, bandwidth, flipped bits; a toy model follows). Switch status LEDs.
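
  A toy model (not Marionnet code) of the per-direction impairments listed above, applied to a single frame; all probabilities are arbitrary example values:

      import random

      def impair(frame, delay_ms=10, loss=0.05, flip_prob=0.001):
          if random.random() < loss:
              return None                    # frame dropped
          frame = bytearray(frame)
          for i in range(len(frame)):
              if random.random() < flip_prob:
                  frame[i] ^= 1 << random.randrange(8)   # flip one bit
          return delay_ms, bytes(frame)      # deliver after delay_ms

      print(impair(b"\x00\x01\x02\x03"))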

  25. Marionnet: introducing indirection to support stable endpoints and port “defects”.

  26. GINI (McGill University). VM engine: UML. VM interconnection: customized uml_switch, plus an implementation of a wireless channel. Core: C + Python. Routing engine: custom implementation (compliant with the Linux TCP/IP stack). Lab description: GUI, XML. Integrated task manager to start/stop nodes; real-time performance plots.

  27. Vincent Perrier. VM engine: UML, (QEMU+)KVM, OpenWrt. VM interconnection: improved uml_switch. Core: C. Routing engine: N/A. Lab description: custom markup. Several customizations and hacks: can plot the value of any kernel variable; the switch supports TCP sockets and real-time configuration via XML messages; a CD-ROM image is built on the fly for machine differentiation.

  28. IMUNES (University of Zagreb). VM engine: N/A (stack virtualization). VM interconnection: N/A. Core: N/A. Routing engine: N/A. GUI. Based on FreeBSD VirtNET.

  29. IMUNES VirtNET: network state replication

  30. IMUNES VirtNET: network state replication. Within a vimage it is possible to configure network interfaces, open sockets, and run processes; (in some ways) similar to UMview.

  31. Virtual Routers

  32. DynaMIPS, by Christophe Fillot (University of Technology of Compiègne). Supported platforms (as of v0.2.7): Cisco 7200 (NPE-100 to NPE-400), Cisco 3600 (3620, 3640, and 3660), Cisco 2691, Cisco 3725, Cisco 3745. No acceleration; CPU idle times must be tuned. “Of course, this emulator cannot replace a real router: you should be able to get a performance of about 1 kpps [...], to be compared to the 100 kpps delivered by a NPE-100 [...]. So, it is simply a complementary tool to real labs for administrators of Cisco networks or people wanting to pass their CCNA/CCNP/CCIE exams.”

  33. DynaMIPS. Fully virtualized hardware: ROM, RAM, NVRAM; chassis; console, AUX; PCMCIA ATA disks. ATM/Frame Relay/Ethernet virtual switch between emulator instances; port adapters, network modules. Interface binding: UNIX socket, VDE, tap host interface (optionally via libpcap), UDP port (see the sketch below). Some opcodes are missing (mostly FPU). Can manage multiple instances (“hypervisor” mode). Development has stalled, but it remains a milestone.
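
  The “UDP port” binding boils down to shipping raw frames as UDP payloads between emulator instances; a minimal sketch (loopback addresses and port numbers are arbitrary examples):

      import socket

      sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      sock.bind(("127.0.0.1", 10001))    # local end of the virtual link
      peer = ("127.0.0.1", 10002)        # the other emulator's endpoint

      sock.sendto(b"\xca\xfe" + b"rest of the frame", peer)  # transmit
      # frame, _ = sock.recvfrom(2048)                       # receive side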

  34. DynaMIPS Dynagen: a Python frontend Dynagui

  35. DynaMIPS Dynagen: a Python frontend Dynagui GNS3

  36. So, Performance and Scalability Are Two Bugaboos... Carrier-grade equipment: 40 Gbps to 92 Tbps; software routers: 1 Gbps to 3 Gbps. Per-packet processing capability must scale with O(line_rate); aggregate switching capability must scale with O(port_count × line_rate). But software routers need not run on a single server: RouteBricks (Columbia + Intel + UCLA + Berkeley + ..., a 10-author paper!), Click-based.

  37. RouteBricks: a cluster router architecture. Parallelism across servers: nodes can make independent decisions on a subset of the overall traffic. Parallelism within servers: CPU, I/O, memory.

  38. RouteBricks intra-cluster routing. Valiant Load Balancing (VLB): source → random node → destination. Randomizes input traffic; no centralized scheduling; beware of reordering (a sketch follows below). Topology: a full mesh is not feasible (server fanout is limited!); commodity Ethernet switches are not viable either (missing load-sensitive routing features, cost); hence tori and butterflies.
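
  VLB in a handful of lines (toy server names; randomizing per flow rather than per packet would mitigate the reordering issue noted above):

      import random

      servers = ["s0", "s1", "s2", "s3"]

      def vlb_path(src, dst):
          via = random.choice(servers)   # phase 1: spread traffic randomly
          return [src, via, dst]         # phase 2: forward to destination

      print(vlb_path("s0", "s3"))        # e.g. ['s0', 's2', 's3']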

  39. RouteBricks: butterfly topology, 2-ary 4-fly.
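
  The wiring of a 2-ary n-fly can be generated mechanically; a sketch for the 2-ary 4-fly above (n stages of 2^(n-1) switches, with stage i flipping address bit n-2-i):

      def butterfly_links(n):
          rows = 2 ** (n - 1)
          links = []
          for stage in range(n - 1):
              bit = 1 << (n - 2 - stage)
              for row in range(rows):
                  links.append(((stage, row), (stage + 1, row)))        # straight
                  links.append(((stage, row), (stage + 1, row ^ bit)))  # cross
          return links

      print(len(butterfly_links(4)))   # 2-ary 4-fly: 3 * 8 * 2 = 48 links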

  40. RouteBricks experimental setup: 4 Intel Xeon servers (Nehalem microarchitecture), one 10 Gbps external line each, full-mesh topology. Bottleneck: with 64-byte packets, <19 Mpps sustained, caused by the CPU (per-byte CPU load is higher for smaller packets). Programmability: a NIC driver + 2 Click elements.

  41. Click (UCLA): a modular software router with UNIX-pipe-like composition; implemented as a Linux kernel extension. 333,000 64-byte packets per second on a 700 MHz Pentium III. Building blocks: elements and connections.
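
  The element/connection model can be mimicked in a few lines of Python (a conceptual sketch, not Click's actual C++ API); “>>” stands in for Click's “->” connector:

      class Element:
          def __init__(self, fn):
              self.fn, self.next = fn, None
          def __rshift__(self, other):   # connect two elements
              self.next = other
              return other
          def push(self, packet):
              out = self.fn(packet)
              if out is not None and self.next:
                  self.next.push(out)

      seen = {"n": 0}
      def count(p):
          seen["n"] += 1
          return p

      src = Element(lambda p: p)                         # like FromDevice
      src >> Element(count) >> Element(lambda p: None)   # Counter -> Discard
      src.push(b"pkt")
      print(seen["n"])   # 1 packet counted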
