Docker Overlay Networks

Docker Overlay Networks Performance Analysis - PowerPoint PPT Presentation

Docker Overlay Networks: Performance analysis in high-latency environments. Students: Siem Hermans, Patrick de Niet. Research Project 1. Supervisor: Dr. Paola Grosso. System and Network Engineering.


  1. Docker Overlay Networks: Performance analysis in high-latency environments. Students: Siem Hermans, Patrick de Niet. Research Project 1. Supervisor: Dr. Paola Grosso. System and Network Engineering.

  2. Research question: “What is the performance of various Docker overlay solutions when implemented in high-latency environments, and more specifically in the GÉANT Testbeds Services (GTS)?”

  3. Related Work
     Internal:
     • Claassen, J. (2015, July). Container Network Solutions. Retrieved January 31, 2016, from http://rp.delaat.net/2014-2015/p45/report.pdf
     • Rohprimardho, A. (2015, August). Measuring the Impact of Docker on Network I/O Performance. Retrieved January 31, 2016, from http://rp.delaat.net/2014-2015/p92/report.pdf
     External:
     • Kratzke, N. (2015). About Microservices, Containers and their Underestimated Impact on Network Performance. CLOUD COMPUTING 2015, 180.
     • Barker, S. K., & Shenoy, P. (2010, February). Empirical evaluation of latency-sensitive application performance in the cloud. In Proceedings of the First Annual ACM SIGMM Conference on Multimedia Systems (pp. 35-46). ACM.

  4. Docker - Concepts
     Basics:
     • Containerization
     • Gaining traction
     • Performance increases
     • Role of Docker
     (Diagram: container vs. virtual machine.)

  5. Multi-host networking
     • Virtual networks that span the underlying hosts
     • Powered by libnetwork (see the sketch below)
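The slide names no concrete interface, but the following minimal sketch shows what "powered by libnetwork" looks like in practice, using the docker-py SDK. It assumes a Docker Engine already configured for multi-host networking (e.g. backed by a key-value store, as the deck sets up later); the network name, subnet, and image are illustrative assumptions.

# Minimal sketch: create a libnetwork overlay network and attach a
# container to it via the docker-py SDK. Assumes an Engine already
# set up for multi-host networking; names/subnet are illustrative.
import docker

client = docker.from_env()

# One subnet shared by all hosts that join the overlay.
ipam = docker.types.IPAMConfig(
    pool_configs=[docker.types.IPAMPool(subnet="10.10.0.0/24")]
)

# driver="overlay" selects libnetwork's VXLAN-based overlay driver.
net = client.networks.create("gtsperf-overlay", driver="overlay", ipam=ipam)

# A container attached to this network on any participating host can
# reach peers on other hosts directly over the VXLAN tunnel.
container = client.containers.run(
    "alpine", "sleep 3600", network=net.name, detach=True
)
print(container.short_id, "attached to", net.name)

Weave Net hooks into this same plugin API, whereas Flannel stays outside libnetwork, which is what the comparison on the next slide summarizes.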

  6. Overlay solutions (comparison after Kratzke, N. (2015)):
     • Libnetwork (native overlay driver): based on SocketPlane; integrates OVS APIs into Docker; VXLAN-based forwarding
     • Weave Net: previously routing-based on pcap, now uses OVS; libnetwork plugin; VXLAN-based forwarding
     • Flannel: flanneld agent; no integration with libnetwork; subnet per host; UDP or VXLAN forwarding
     • Project Calico: technically not an overlay (no tunneling); routing via BGP; segmentation via iptables; state distribution via BGP route reflectors

  7. GÉANT - Introduction
     • European research community: Amsterdam, Bratislava, Ljubljana, Milan, Prague
     GÉANT Testbeds Service (GTS):
     • OpenStack platform, interconnected by MPLS
     • KVM for compute nodes
     • Resembles IaaS providers; shared infrastructure

  8. Topologies (1)
     • Four full-mesh instances
     • DSL 2.0 grammar (JSON)
     • Local site; feasibility evaluation (a generation sketch follows the fragment below)

     DSL FullMesh {
       id="FullMesh_Dispersed"
       host {
         id="h1" location="AMS"
         port { id="port11" }
         port { id="port12" }
       }
       link {
         id="l1"
         port { id="src" }
         port { id="dst" }
       }
       adjacency h1.port14, l1.src
       adjacency h2.port24, l1.dst
     } {...}
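The fragment above declares hosts, links, and adjacencies by hand. As a sketch of how the four-site full-mesh descriptions could be generated rather than hand-written, the Python snippet below emits DSL text for every site pair. The identifier pattern (hN, portXY, lN) follows the slide, but the generator itself is an illustrative assumption, not part of GTS.

# Sketch: generate a full-mesh GTS DSL topology for the four sites.
# Identifier naming mirrors the slide's fragment; otherwise assumed.
from itertools import combinations

sites = ["AMS", "MIL", "LJU", "BRA"]

hosts, links, adjacencies = [], [], []
port_counter = {s: 0 for s in sites}

for link_id, (a, b) in enumerate(combinations(sites, 2), start=1):
    # Each circuit consumes one fresh port on either endpoint host.
    port_counter[a] += 1
    port_counter[b] += 1
    pa = f"port{sites.index(a) + 1}{port_counter[a]}"
    pb = f"port{sites.index(b) + 1}{port_counter[b]}"
    links.append(
        f'  link {{ id="l{link_id}" port {{ id="src" }} port {{ id="dst" }} }}'
    )
    adjacencies.append(f"  adjacency h{sites.index(a) + 1}.{pa}, l{link_id}.src")
    adjacencies.append(f"  adjacency h{sites.index(b) + 1}.{pb}, l{link_id}.dst")

for i, site in enumerate(sites, start=1):
    ports = " ".join(
        f'port {{ id="port{i}{p}" }}' for p in range(1, port_counter[site] + 1)
    )
    hosts.append(f'  host {{ id="h{i}" location="{site}" {ports} }}')

print('DSL FullMesh {')
print('  id="FullMesh_Dispersed"')
print("\n".join(hosts + links + adjacencies))
print("}")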

  9. Topologies (2)
     • Scaling up from the single-site feasibility check
     • Calico dropped
     • Full mesh divided into:
       1. Point-to-point, synthetic benchmarks
       2. Star topology, real-world scenario
     Setup:
     • Flannel VXLAN tunneling
     • Key-value store placement (see the sketch below)
     • Storing network state
     • Separate distributed system
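On the key-value store placement: flanneld reads its network configuration (address pool, per-host subnet size, and UDP or VXLAN backend) from the store. A minimal sketch, assuming an etcd v2 endpoint and an illustrative address range:

# Sketch: publish a Flannel network configuration (VXLAN backend) to
# an etcd v2 key-value store over its HTTP API. The etcd address and
# the address range are illustrative assumptions for a GTS-like setup.
import json
import requests

ETCD = "http://127.0.0.1:2379"  # assumed etcd endpoint on the KV-store node
KEY = "/v2/keys/coreos.com/network/config"  # default key flanneld watches

config = {
    "Network": "10.40.0.0/16",     # pool carved into per-host subnets
    "SubnetLen": 24,               # one /24 per host, as on this slide
    "Backend": {"Type": "vxlan"},  # alternative: {"Type": "udp"}
}

resp = requests.put(ETCD + KEY, data={"value": json.dumps(config)})
resp.raise_for_status()
print("flannel config stored:", resp.json()["node"]["key"])

With this key in place, each flanneld instance leases a subnet from the pool and sets up the VXLAN (or UDP) forwarding for it, which is why the placement of the store itself matters in a high-latency mesh.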

  10. Methodology - Performance
      Synthetic benchmark (PtP); placement of nodes:
      • Netperf: latency, jitter
      • Iperf: TCP/UDP throughput, jitter
      Latency-sensitive application (media streaming), after Barker, S. K., & Shenoy, P. (2010, February):
      • Darwin Streaming Server, Faban RTSP clients
      • Jitter (with netperf)
      • Bitrate
      (A benchmark-driver sketch follows below.)
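A sketch of how the synthetic PtP measurements could be driven from one endpoint. It assumes netserver and an iperf3 server are already running on the peer; the peer address and durations are placeholders, and iperf3 is used here for its JSON output (the slide does not name an iperf version).

# Sketch: drive the synthetic point-to-point benchmarks from Python.
# Assumes netperf (>= 2.5, for omni output selectors) and iperf3 are
# installed, with netserver/iperf3 -s running on the peer.
import json
import subprocess

PEER = "10.10.0.2"  # hypothetical container/VM at the remote site

# Request/response latency; the selectors report min / mean / 99th
# percentile, the three statistics tabulated on slide 13.
netperf = subprocess.run(
    ["netperf", "-H", PEER, "-t", "TCP_RR", "-l", "30", "--",
     "-O", "MIN_LATENCY,MEAN_LATENCY,P99_LATENCY"],
    capture_output=True, text=True, check=True,
)
print("netperf TCP_RR:\n", netperf.stdout)

# TCP throughput via iperf3 in JSON mode for easy parsing.
iperf = subprocess.run(
    ["iperf3", "-c", PEER, "-t", "30", "-J"],
    capture_output=True, text=True, check=True,
)
bps = json.loads(iperf.stdout)["end"]["sum_received"]["bits_per_second"]
print(f"iperf3 TCP throughput: {bps / 1e6:.1f} Mbps")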

  11. Results - GÉANT: documentation, provisioning, access, setup, support, VPN, resources

  12. Results - PtP: VM-to-VM latency

  13. Results - PtP: Docker-to-Docker latency, in milliseconds (ms). (A sketch of how these statistics can be derived follows the table.)

      Circuit      Solution   Min. latency   Mean latency   99th % latency
      AMS – MIL    LIBNET     36.3           36.5            37.0
      AMS – MIL    WEAVE      36.2           36.5            37.0
      AMS – MIL    FLANNEL    42.5           42.9            43.0
      AMS – LJU    LIBNET     30.1           30.3            31.0
      AMS – LJU    WEAVE      29.8           30.3            31.0
      AMS – LJU    FLANNEL    29.8           30.3            31.0
      AMS – BRA    LIBNET     17.6           17.7            18.0
      AMS – BRA    WEAVE      17.4           17.7            18.0
      AMS – BRA    FLANNEL    17.4           17.7            18.0
      MIL – LJU    LIBNET     61.8           62.1            62.4
      MIL – LJU    WEAVE      59.6           59.8            60.0
      MIL – LJU    FLANNEL    55.6           55.8            56.0
      MIL – BRA    LIBNET     12.7           13.0            14.0
      MIL – BRA    WEAVE      12.9           13.1            14.0
      MIL – BRA    FLANNEL    12.9           13.1            14.0
      BRA – LJU    LIBNET     47.1           47.4            48.0
      BRA – LJU    WEAVE      43.1           59.5           130.0
      BRA – LJU    FLANNEL    43.1           43.4            44.0
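As noted above the table, here is a small sketch of how its three per-circuit statistics (minimum, mean, 99th percentile) can be derived from raw RTT samples. The sample values are made up for illustration, not measured data.

# Sketch: derive min / mean / 99th-percentile latency from raw RTT
# samples. The sample list is illustrative, not measured; real runs
# would use thousands of netperf TCP_RR samples per circuit.
import math
import statistics

rtt_ms = [36.4, 36.3, 36.6, 36.5, 37.0, 36.5, 36.4, 36.8, 36.5, 36.3]

def percentile(samples, pct):
    """Nearest-rank percentile: smallest sample with rank >= pct% of n."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

print(f"min:  {min(rtt_ms):.1f} ms")
print(f"mean: {statistics.mean(rtt_ms):.1f} ms")
print(f"p99:  {percentile(rtt_ms, 99):.1f} ms")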

  14. Results - PtP throughput (bar charts): AMS-to-BRA TCP throughput (0–250 Mbps) and AMS-to-BRA UDP throughput (0–300 Mbps), for VM, Flannel, Weave, and Libnet.

  15. Results - Streaming experiment (charts): BRA–AMS concurrency jitter (mean jitter in ms) and BRA–AMS concurrency bitrate (bitrate per stream in Mbps, mean and maximum), for the VM baseline and the LIBNET, WEAVE, and FLANNEL instances at 1, 3, and 9 concurrent workers.

  16. Conclusion & Future Work
      • Measurements are currently only valid within the GTS environment:
        – Repeat the performance analysis in a heavily shared environment (e.g. Amazon EC2)
        – Perform experiments with more compute resources (CPU capping)
      • Anomalies in throughput performance (UDP, TCP) not identified:
        – Similar behavior discovered in the work of J. Claassen
        – Ideally, more measurements to increase accuracy
      • No significant performance degradation from implementing Docker overlays within GTS
      • Ideally, use Weave within the GTS environment

  17. Questions? Thank you.
      Research Project 1, System and Network Engineering
      A: Science Park 904, Amsterdam, NH
      W: rp.delaat.net | github.com/siemhermans/gtsperf
      siem.hermans@os3.nl | patrick.deniet@os3.nl
