FACILITATING ICN DEPLOYMENT WITH AN EXTENDED OPENFLOW PROTOCOL
Piotr Zuraniewski, Niels van Adrichem, Wieger Ijntema, Daan Ravesteijn (TNO)
Christos Papadopoulos, Chengyu Fan (CSU)
SDN4ICN: CONNECTING ICN "ISLANDS"
- A "forklift upgrade" of the Internet to speak ICN is not realistic; migration scenarios are needed.
- One ICN deployment mode: ICN "islands" separated by a traditional network.
- Setting up tunnels manually to enable communication is tedious, error-prone, etc.
- SDN could help: set up a tunnel on demand, based on the ICN name (address)...
- ...but that would require parsing ICN packets in SDN, which is not trivial.
[Figure: an ICN Interest crossing a non-ICN network towards the ICN data source – a) available in cache? b) pending in the PIT? c) if not, then forward.]
SDN – NO ICN SUPPORT OUT OF THE BOX
- Contrary to common belief, it is not easy to test and deploy an arbitrary new protocol in SDN.
- Reason: OpenFlow matches on pre-defined fields (ingress port, src/dst MAC, src/dst UDP port, etc.).
- Asking the controller to handle every single "unsupported" packet is not an option due to the performance penalty.
- NDN packets have a complex structure: a nested Type-Length-Value (TLV) format.
- Not only the "Value" but also the "Type" and "Length" fields can be of variable size.
- The ICN name of interest can be buried deep inside the packet (see the parsing sketch below).
[Figure: nested TLV layout – an outer T0/L0 element enclosing T1/L1/V1 and T2/L2/V2.]
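To make the nesting concrete, below is a minimal sketch in plain C of walking an NDN Interest's TLVs to locate the Name element. It is illustrative only: the helper names are ours, the type codes follow the public NDN packet-format specification, and a real eBPF version would additionally need verifier-friendly bounds checks and bounded loops.

#include <stdint.h>
#include <stddef.h>

#define NDN_TLV_INTEREST 0x05
#define NDN_TLV_NAME     0x07

/* NDN encodes Type and Length as variable-size numbers: one byte if < 253,
 * otherwise a marker byte (253/254/255) followed by 2, 4 or 8 bytes. */
static int read_varnum(const uint8_t *buf, size_t len, size_t *pos, uint64_t *out)
{
    if (*pos >= len)
        return -1;
    uint8_t first = buf[(*pos)++];
    if (first < 253) {
        *out = first;
        return 0;
    }
    size_t extra = (first == 253) ? 2 : (first == 254) ? 4 : 8;
    if (*pos + extra > len)
        return -1;
    *out = 0;
    for (size_t i = 0; i < extra; i++)
        *out = (*out << 8) | buf[(*pos)++];
    return 0;
}

/* Returns the offset of the Name TLV's value inside the packet, or -1. */
static long find_interest_name(const uint8_t *pkt, size_t len, uint64_t *name_len)
{
    size_t pos = 0;
    uint64_t type, length;

    if (read_varnum(pkt, len, &pos, &type) || type != NDN_TLV_INTEREST)
        return -1;
    if (read_varnum(pkt, len, &pos, &length))
        return -1;

    /* Walk the TLVs nested inside the Interest until we hit the Name. */
    while (pos < len) {
        if (read_varnum(pkt, len, &pos, &type))
            return -1;
        if (read_varnum(pkt, len, &pos, &length))
            return -1;
        if (type == NDN_TLV_NAME) {
            *name_len = length;
            return (long)pos;
        }
        pos += length; /* skip this element's value */
    }
    return -1;
}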
REQUIRED SOLUTION: FLEXIBLE, EASY-TO-PROGRAM MATCHING ON THE SWITCH
- Avoid frequent controller communication.
- Universal and "future-proof" (solves more than ICN parsing).
- High performance, preferably line rate.
- Easy to deploy, with no changes to the OpenFlow standard.
- Our proposition: extend the OpenFlow protocol to allow matching on the result of an extended Berkeley Packet Filter program executed locally on the switch.
- The extensibility stays within the current standard.
- The "ultimate" extension – all protocols can be handled.
- Inspired by the architecture first described by Jouet, Cziva and Pezaros.
INTERMEZZO: BERKELEY PACKET FILTER
- BPF: a way of filtering packets in the kernel (McCanne, Van Jacobson 1993).
- You use it every day with tcpdump/libpcap/wireshark/...
- A BPF program is compiled to bytecode and attached to the network tap interface.
- Extended BPF (eBPF) – programs can be written in C, and loops are possible (a minimal example follows below).
- https://blog.cloudflare.com/bpf-the-forgotten-bytecode/
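As a flavour of what such a filter looks like when written in C rather than as a tcpdump expression, here is a minimal eBPF socket-filter sketch that keeps only NDN-over-UDP traffic. It is not from the project: the section/licence macros follow common libbpf conventions, and the use of UDP port 6363 as the NDN port is an assumption for illustration.

#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/udp.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("socket")
int ndn_udp_filter(struct __sk_buff *skb)
{
    struct ethhdr eth;
    struct iphdr iph;
    struct udphdr udph;

    if (bpf_skb_load_bytes(skb, 0, &eth, sizeof(eth)) < 0)
        return 0;
    if (eth.h_proto != bpf_htons(ETH_P_IP))
        return 0;

    if (bpf_skb_load_bytes(skb, sizeof(eth), &iph, sizeof(iph)) < 0)
        return 0;
    if (iph.protocol != IPPROTO_UDP)
        return 0;

    /* Assume no IP options for brevity. */
    if (bpf_skb_load_bytes(skb, sizeof(eth) + sizeof(iph), &udph, sizeof(udph)) < 0)
        return 0;

    /* Keep only packets on the (assumed) NDN-over-UDP port 6363. */
    if (udph.dest == bpf_htons(6363))
        return skb->len;   /* non-zero: accept the packet on this socket */

    return 0;              /* zero: filter the packet out */
}

char _license[] SEC("license") = "GPL";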
ARCHITECTURE DETAILS
[Figure: architecture overview.]
ARCHITECTURE DETAILS – EBPF PROGRAM
[Figure: the eBPF program's role in the architecture.]
ARCHITECTURE DETAILS – OUR IMPLEMENTED EXTENSIONS
- Ryu controller modified to handle eBPF programs; it can send them to, and remove them from, the switch.
- Experimenter OpenFlow message: capable of transporting programs of up to 64 kB, which allows for complex code if needed.
- OFSoftSwitch now has an experimenter flow match field:
  - Matching happens locally on the switch (the controller is not asked).
  - Many concurrent eBPF programs can be present.
  - Each can be parametrized (e.g., with an ICN name; see the sketch below).
  - Metadata is also handled (port ID, table ID).
  - Our own vendor extension is used – OpenFlow compliant.
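As a rough illustration of what a parametrizable match program could look like on the switch side, the sketch below returns a match/no-match verdict depending on whether the packet's NDN name starts with a prefix the controller has placed in a BPF map. The map layout, section name, fixed name offset and all identifiers are our assumptions for illustration, not the paper's actual implementation.

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#define MAX_PREFIX   32
#define NAME_OFFSET  44   /* assumed: Eth + IPv4 + UDP headers + Interest TLV header */

struct match_param {
    __u32 prefix_len;
    __u8  prefix[MAX_PREFIX];   /* name prefix the controller wants to match */
};

struct {
    __uint(type, BPF_MAP_TYPE_ARRAY);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, struct match_param);
} match_params SEC(".maps");

SEC("classifier")   /* or whichever hook the switch datapath attaches eBPF to */
int ndn_name_match(struct __sk_buff *skb)
{
    __u32 key = 0;
    struct match_param *p = bpf_map_lookup_elem(&match_params, &key);
    if (!p || p->prefix_len > MAX_PREFIX)
        return 0;

    __u8 name[MAX_PREFIX];
    if (bpf_skb_load_bytes(skb, NAME_OFFSET, name, sizeof(name)) < 0)
        return 0;

    /* Compare byte-by-byte up to the configured prefix length. */
    for (__u32 i = 0; i < MAX_PREFIX; i++) {
        if (i >= p->prefix_len)
            break;
        if (name[i] != p->prefix[i])
            return 0;   /* no match: the flow entry does not apply */
    }
    return 1;           /* match: the switch applies the flow's actions */
}

char _license[] SEC("license") = "GPL";

In the actual system the program body would arrive in the 64 kB experimenter message and the parameter (the ICN name) with the flow entry; here a plain BPF array map stands in for that parameter channel.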
SDN-ENHANCED ICN FORWARDING
- Once the switch can match on the ICN name, any OpenFlow-supported action can be used to transport (tunnel) the traffic over the legacy network: use a GRE tunnel, rewrite the IP (a rewrite sketch follows below), push an MPLS label, ...
- ICN routing information is leaked to the SDN controller so that it can create the correct flows.
- Two modes are possible:
  - Proactive: the controller pre-installs eBPF programs and flows with matching/tunneling actions.
  - Reactive: installation happens after a switch encounters an unknown ICN name.
[Figure: ICN islands interconnected over a non-ICN network via SDN gateways, with the SDN controller and the ICN data source.]
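For the "rewrite the IP" flavour of tunneling, one detail worth spelling out is that changing the destination address also requires patching the IPv4 header checksum. Below is a minimal, framework-agnostic C sketch of that rewrite (names are ours; incremental checksum update per RFC 1624); the same logic applies whether the rewrite is done by an OpenFlow set-field action or inside an eBPF program.

#include <stdint.h>
#include <linux/ip.h>

/* Rewrite the IPv4 destination address and update the header checksum
 * incrementally. One's-complement arithmetic is byte-order agnostic, so
 * all values are used exactly as stored in the header. */
static void rewrite_dst_ip(struct iphdr *iph, uint32_t new_daddr /* as stored */)
{
    uint32_t old = iph->daddr;
    uint32_t sum = (uint16_t)~iph->check;

    /* ~HC + ~m + m': subtract the old address words, add the new ones. */
    sum += (uint16_t)~(old >> 16) + (uint16_t)~(old & 0xffff);
    sum += (new_daddr >> 16) + (new_daddr & 0xffff);

    /* Fold carries back into 16 bits. */
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);

    iph->check = ~sum;
    iph->daddr = new_daddr;
}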
TNO/SCINET TESTBED
- Three locations, one in NL and two in the USA: TNO (the Netherlands), Colorado State University, and NCAR – Wyoming Supercomputing Center.
- Connectivity via the plain IPv4 Internet.
- Each location hosted:
  - a VM with the vanilla NDN 0.4.1 stack and a Python script advertising the RIB to the controller;
  - a VM with the modified SDN switch and the ICN eBPF program.
- Additionally, TNO hosted the modified Ryu SDN controller.
- Tunneling: switches rewrite the dstIP based on the ICN name to IP mapping (e.g., /tno → dstIP := 1.1.1.1).
[Figure: the three sites (1.1.1.1/24, 2.2.2.2/24, 3.3.3.3/24) connected over the Internet.]
TNO/SCINET TESTBED
- Especially for Dave: ~10 bearer channels in ISDN :-)
- Test 1: general connectivity – ndnping(server) between each pair of nodes.
- Test 2: a specific application – repo-ng file transfer.
- In both cases connectivity was seamlessly established.
- Performance testing was not a goal here; data transfer speed can be improved by using an application with window control*).
*) reviewer's remark
[Figure: file transfer experiment.]
MODIFIED SWITCH PERFORMANCE
- How fast can the switch match on an ICN name and perform tunnelling?
- 100,000 Interest packets sent at various speeds (PPS); 30 repetitions each time.
- Look for the breaking point, i.e., the first PPS value at which losses appear.
- Four set-ups: one "operational" set-up using eBPF matching and header rewriting (so the packet can be consumed by the next-hop IPv4 router), and three "non-operational" set-ups for baselining only, e.g., set-up (D) being a pure pass-through.

test set-up | eBPF match | IP/MAC rewrite | purpose     | last loss-less [PPS] | first loss [PPS] | first-loss mean [pkts] | first-loss stdev [pkts]
(A)         | Y          | Y              | operational | 2100                 | 2200             | 99997.8                | 9.5
(B)         | N          | Y              | evaluation  | 2100                 | 2200             | 99999.9                | 0.5
(C)         | Y          | N              | evaluation  | 4000                 | 4100             | 99998.1                | 10.6
(D)         | N          | N              | evaluation  | 4100                 | 4200             | 99998.4                | 8.9

- Conclusion: the eBPF match is cheap; rewriting is expensive and "costs" about 2000 PPS.
GOING GIGABITS? EXPRESS DATA PATH + MASTER STUDENTS*) TO THE RESCUE
- 2000 Interest PPS may generate lots of data in return, but on its own it means a speed of only ~2 Mbps.
- Newer Linux kernels (4.13+) and several NIC vendors offer the eXpress Data Path (XDP).
- XDP hooks into the network-adapter device driver; there is no kernel bypass.
- eBPF programs can be triggered by XDP upon packet reception (see the sketch below).
*) Based on "eBPF filter acceleration for arbitrary packet matching in the Linux kernel", MSc thesis, Jeffrey Panneman, TNO/UvA, Aug 2017.
[Figure: XDP/eBPF hooks in the kernel network stack; source: https://www.slideshare.net/ThomasGraf5/cilium-fast-ipv6-container-networking-with-bpf-and-xdp]
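A minimal sketch of how an XDP program in this spirit could look: it is invoked per received frame in the driver, looks up the next ICN gateway in a BPF map and bounces the rewritten frame back out with XDP_TX. The map layout, the fixed name offset, the simplification that the name prefix can be copied as a flat byte string, and all identifiers are ours; checksum updates and real TLV parsing (see the earlier sketch) are omitted for brevity.

#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

#define NAME_OFFSET 46   /* assumed: Eth + IPv4 + UDP + outer TLV headers */

struct name_key {
    char prefix[32];             /* ICN name prefix, NUL-padded */
};

struct fwd_entry {
    __u32 dst_ip;                /* next gateway, network byte order */
    __u8  dst_mac[6];
};

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 256);
    __type(key, struct name_key);
    __type(value, struct fwd_entry);
} ndn_fwd_map SEC(".maps");

SEC("xdp")
int ndn_xdp_fwd(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;
    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;

    struct iphdr *iph = (void *)(eth + 1);
    if ((void *)(iph + 1) > data_end)
        return XDP_PASS;

    /* Copy the (assumed flat) name prefix; a real program would walk the TLVs. */
    struct name_key key = {};
    if ((void *)((char *)data + NAME_OFFSET + sizeof(key.prefix)) > data_end)
        return XDP_PASS;
    __builtin_memcpy(key.prefix, (char *)data + NAME_OFFSET, sizeof(key.prefix));

    struct fwd_entry *fwd = bpf_map_lookup_elem(&ndn_fwd_map, &key);
    if (!fwd)
        return XDP_PASS;

    /* Rewrite L2/L3 destinations and send the frame back out of the same port
     * towards the next ICN island gateway (IP checksum update omitted here). */
    __builtin_memcpy(eth->h_dest, fwd->dst_mac, 6);
    iph->daddr = fwd->dst_ip;
    return XDP_TX;
}

char _license[] SEC("license") = "GPL";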
XDP/EBPF RESULTS – SNEAK PREVIEW
- Netronome Agilio CX 2x10 Gbps NIC with a driver supporting XDP ("xdpdrv").
- Interest packets sent towards the card at a 10 Gbps rate.
- Four test flavors; "map-match-and-rewrite" is the operational one, with the eBPF program doing ICN matching and MAC/IP header manipulation.
- No losses observed up to 2 Gbps; loss rate at 4 Gbps ~1e-7.
- All of this using only one core.
- No OpenFlow here; control is done via BPF "maps" (see the sketch below).
- RFC 2544-guided tests; error bars too small to be noticed.
[Figure: loss-rate measurement results.]
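To illustrate what "control via maps" can mean in practice, here is a small user-space sketch that installs a name-prefix-to-next-hop entry into a pinned BPF map using libbpf. The pin path, key/value layout (matching the XDP sketch above), the gateway MAC and the /tno → 1.1.1.1 mapping borrowed from the testbed slide are used purely as an example, not as the project's actual control plane.

#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <linux/types.h>
#include <linux/bpf.h>
#include <bpf/bpf.h>

struct name_key {
    char prefix[32];
};

struct fwd_entry {
    __u32 dst_ip;
    __u8  dst_mac[6];
};

int main(void)
{
    /* Assumed pin path for the map created by the XDP program. */
    int map_fd = bpf_obj_get("/sys/fs/bpf/ndn_fwd_map");
    if (map_fd < 0) {
        perror("bpf_obj_get");
        return 1;
    }

    struct name_key key = {};
    strncpy(key.prefix, "/tno", sizeof(key.prefix) - 1);

    struct fwd_entry entry = {
        .dst_ip  = inet_addr("1.1.1.1"),                    /* testbed mapping */
        .dst_mac = {0x02, 0x00, 0x00, 0x00, 0x00, 0x01},    /* made-up gateway MAC */
    };

    if (bpf_map_update_elem(map_fd, &key, &entry, BPF_ANY) < 0) {
        perror("bpf_map_update_elem");
        return 1;
    }

    printf("installed forwarding entry for %s\n", key.prefix);
    return 0;
}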
CONCLUSIONS
- ICN – new architecture: the proposed framework allows for easy development of ICN with parametrizable, flexible eBPF programs.
- SDN – flexibility: the capability to match on an arbitrary part of a datagram; virtually any current and future protocol can be handled.
- Accelerated data plane – performance: the current performance of the whole stack is ~Mbps, but modern data plane solutions (XDP) look very promising, with Gbps rates.
- Next steps: a control plane for XDP, hardware offload, ...