PdP: Parallelizing Data Plane in Virtual Network Substrate
Yong Liao, Dong Yin, Lixin Gao
University of Massachusetts at Amherst
Network Virtualization Platform
- Multiple heterogeneous concurrent virtual networks
- Flexibility: customizable virtual networks
- High performance: good forwarding speed
- Isolation: minimal interference among virtual networks
- Low cost: facilitates wide-area deployment
Existing Network Virtualization Platforms
- VINI: user-mode forwarding; slow but highly customizable
- Trellis: kernel-mode forwarding; faster but less customizable
- VRouter (Xen): close to native speed; needs hardware support to scale
- Supercharging PlanetLab (SPP): special-purpose hardware with superior speed; harder to program
Existing Network Virtualization Platforms

            Flexibility   Performance             Isolation   Cost
  VINI      Good          Slow forwarding         Good        Low
  Trellis   Moderate      Close to native speed   Moderate    Low
  VRouter   Good          Close to native speed   Good        High
  SPP       Moderate      Superior speed          Moderate    High
Main Ideas of PdP
- Accelerate data forwarding with multiple forwarding engines
  - Faster aggregate forwarding speed
  - Commodity hardware is inexpensive
- Run virtual network data plane and control plane in VMs
  - Isolation among virtual networks
  - Better flexibility for customization
Architecture of PdP
[Figure: a management host runs the VNet control planes; a multiplexer & demultiplexer connects the physical NICs to a pool of forwarding engines hosting the VNet data planes; incoming unprocessed packets flow toward the forwarding engines, and outgoing processed packets flow back out the NICs.]
- Multiple forwarding engines (FEs)
  - Sliced into virtual nodes
  - Isolation
- Multiplexer & demultiplexer
  - Classifies packets to data-plane VMs
  - Sends packets out to physical NICs
  - High speed (a dispatch sketch follows below)
- Control plane and data plane run in VMs
  - Customizable
  - Isolation and manageability
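To make the dispatching concrete, here is a minimal Click sketch of spreading one VNet's incoming packets across three forwarding engines. The round-robin policy appears later in the slides' TCP experiments; the device names (eth0 through eth3) are illustrative assumptions, not details from the slides.

  // Spread incoming, unprocessed packets over three FEs (round-robin).
  // eth0 faces the physical wire; eth1-eth3 are hypothetical links
  // to the three forwarding engines.
  rr :: RoundRobinSwitch;

  FromDevice(eth0) -> rr;
  rr[0] -> Queue -> ToDevice(eth1);  // to forwarding engine 1
  rr[1] -> Queue -> ToDevice(eth2);  // to forwarding engine 2
  rr[2] -> Queue -> ToDevice(eth3);  // to forwarding engine 3

Per-packet round-robin maximizes balance but can reorder packets within a flow, which is exactly the out-of-order issue measured in the TCP experiments later.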
VNet Data Plane
- Mapping between VNets and forwarding engines
  - Multiple FEs can serve one VNet
  - Open question: how to allocate FEs to VNets
- Each virtual node performs route lookup and encapsulation for virtual links, in user mode (see the sketch below)
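As an illustration of what a virtual node does, the following is a minimal user-mode Click sketch of route lookup followed by UDP encapsulation onto virtual links. All addresses, ports, MAC addresses, and the two-entry route table are hypothetical.

  // Route lookup: each output port corresponds to one virtual link.
  rt :: StaticIPLookup(
          10.1.0.0/16 0,   // prefix reached over virtual link 0
          10.2.0.0/16 1);  // prefix reached over virtual link 1

  FromDevice(eth0)
    -> Strip(14)           // remove the Ethernet header
    -> CheckIPHeader       // sanity-check and annotate the IP header
    -> rt;

  // Encapsulation for virtual links: add an outer UDP/IP header,
  // then an Ethernet header, and send the packet back out.
  rt[0] -> UDPIPEncap(192.168.0.1, 9000, 192.168.0.2, 9000)
        -> EtherEncap(0x0800, 0:1:2:3:4:5, 0:1:2:3:4:6)
        -> Queue -> ToDevice(eth0);
  rt[1] -> UDPIPEncap(192.168.0.1, 9001, 192.168.0.3, 9001)
        -> EtherEncap(0x0800, 0:1:2:3:4:5, 0:1:2:3:4:7)
        -> Queue -> ToDevice(eth0);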
Multiplexer & Demultiplexer
- Packet classifier
  - Different ports map to different VMs (see the sketch below)
- Packet dispatcher sends packets out
  - The FE has already marked the outgoing NIC
- Can potentially become a bottleneck
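A minimal kernel-Click sketch of the classifier half, assuming each VNet's tunneled traffic is identified by a distinct (hypothetical) UDP port and each data-plane VM is reached through its own virtual interface:

  // Classify incoming packets to per-VNet data-plane VMs by UDP port.
  cl :: IPClassifier(udp && dst port 9000,  // VNet 1
                     udp && dst port 9001,  // VNet 2
                     -);                    // everything else

  FromDevice(eth0) -> Strip(14) -> CheckIPHeader -> cl;
  // Unstrip restores the original Ethernet header for illustration;
  // a real deployment would rewrite it for the VM's interface.
  cl[0] -> Unstrip(14) -> Queue -> ToDevice(vif1);  // VM of VNet 1
  cl[1] -> Unstrip(14) -> Queue -> ToDevice(vif2);  // VM of VNet 2
  cl[2] -> Discard;

Because every packet of every VNet crosses this one element chain, the classifier is the natural place for the bottleneck noted above.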
Prototype Implementation
[Figure: the control-plane host and the multiplexer & demultiplexer host connect to the forwarding engines through a Gbit Ethernet switch.]
- Commodity PCs: P4 2.6 GHz CPU, 1 GB memory, Gbit NICs
- Multiplexer & demultiplexer: kernel-mode Click
- VNet data plane: user-mode Click in a VM
- VNet control plane: XORP in a VM
- Control plane/data plane interaction: forwarding tables on the forwarding engines are updated via multicast
Raw UDP Packet Forwarding Speed
[Figure: forwarding speed (Kpps) vs. input speed (Kpps) for user-mode Click, kernel-mode Click, and PdP with one, two, and three FEs. Small forwarding table with two entries; results are similar for a large table.]
Raw UDP Packet Loss Rate and RTT
[Figure: loss rate vs. input speed (Kpps) in UDP packet forwarding, for user-mode Click, kernel-mode Click, and PdP with one, two, and three FEs.]

                     User Click   PdP     Kernel Click
  Two-hop RTT (ms)   0.208        0.296   0.132
TCP Performance
- Aggregate throughput is close to kernel-mode Click
- Out-of-order packets
[Figure: TCP throughput (Mbps) for user Click, one FE, two FEs, three FEs, and kernel Click; bar values are 860, 763, 565, 369, and 360 Mbps.]

  % of out-of-order pkts   one FE 0.31%   two FEs 10.19%   three FEs 13.02%
  % of out-of-order pkts   round-robin 12.27%   proportional 10.02%
Conclusion and Future Work
- PdP provides maximal flexibility to customize VNets
- Forwarding speed of PdP scales with the number of FEs
- Future work: hardware multiplexer/demultiplexer
- Future work: flow-based classification to address the out-of-order packet problem (see the sketch below)
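As a sketch of that future direction: in Click, the per-packet round-robin dispatch could be replaced by a hash over the IP source and destination addresses, pinning each flow to one FE so its packets stay in order. The offsets assume the Ethernet header has already been stripped; the device names are hypothetical.

  // Flow-based dispatch: hash bytes 12-19 of the IP header,
  // i.e., the source and destination IPv4 addresses.
  hs :: HashSwitch(12, 8);

  FromDevice(eth0) -> Strip(14) -> CheckIPHeader -> hs;
  hs[0] -> Unstrip(14) -> Queue -> ToDevice(eth1);  // FE 1
  hs[1] -> Unstrip(14) -> Queue -> ToDevice(eth2);  // FE 2
  hs[2] -> Unstrip(14) -> Queue -> ToDevice(eth3);  // FE 3

The trade-off is load balance: a hash keeps each flow intact but can load the FEs unevenly when a few flows dominate the traffic.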