Dataplane Broker (DPB)
Steven Simpson, Arsham Farshad, Paul McCherry, Abubakr “Ali” Magzoub
Problem statement
● Multi-site (multi-VIM)
  – Each VNF assigned to a site
  – Some VLs split across sites
  – WIM responsible for inter-site connectivity
● Dataplane Broker (DPB)
  – Can act as WIM
[Figure: VNFs distributed across Site 1 and Site 2, with VLs split between the sites]
Wide-area L2 connections
● VLAN endpoints
  – Plugin framework for base ‘fabric’ layer
  – Heterogeneous physical network
    ● Corsa DP2000 series
    ● Generic OpenFlow
● Multiswitch
  – Functional isolation of VLs
● Multipoint
  – NSes can be split over 2+ sites
● Bandwidth guarantees
  – Non-functional isolation
  – Traffic from one NS shouldn’t be able to drown out another
  – Asymmetric
● Scalability
  – Hierarchical abstraction
  – Not looking for optimal solution
● Open source
Network abstraction
● Named terminals
  – Associated with sliced resources at specific locations, e.g., lancaster-openstack, paris-vpngw, berlin-ofx
● Numerically labeled circuits
  – Distinguish services occupying the same terminal
  – Map to encapsulation technology (e.g., VLAN ids)
● Services
  – Connect 2+ circuits
  – Bandwidth guarantees
● Logical switch
  – Logical network subtype
  – Maps directly to a physical switch
  – Uses an adaptor to map to the fabric technology
[Figure: a logical network whose terminals (site1-opst, site1-ofx, site2-ofx, site2-opst, site3-ofx, site3-opst) carry numbered circuits (91, 435, 961, 2010) joined by services]
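The terminal/circuit/service abstraction can be illustrated with a minimal data-model sketch. This is purely illustrative (the class and field names are assumptions, not DPB’s actual API): a circuit is a named terminal plus a numeric label, and a service ties two or more circuits together with per-circuit (hence potentially asymmetric) bandwidth guarantees.

```python
from dataclasses import dataclass

# Illustrative model of the DPB abstraction, NOT the real DPB API.

@dataclass(frozen=True)
class Circuit:
    terminal: str   # named terminal, e.g. "lancaster-openstack"
    label: int      # numeric label, mapped to e.g. a VLAN id

@dataclass
class Service:
    circuits: frozenset   # 2+ circuits joined at L2
    ingress_mbps: dict    # per-circuit guarantee, so asymmetry is expressible

svc = Service(
    circuits=frozenset({Circuit("site1-opst", 91), Circuit("site2-ofx", 961)}),
    ingress_mbps={Circuit("site1-opst", 91): 10, Circuit("site2-ofx", 961): 10},
)
assert len(svc.circuits) >= 2  # a service always connects at least two circuits
```

The label is what distinguishes two services sharing a terminal: (site1-opst, 91) and (site1-opst, 435) are different circuits on the same terminal.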
Aggregator
● Control of inferior networks
  – Inferiors are either aggregators or ‘logical switches’
  – Leaves are always switches
● Same northbound interface
  – Hierarchies could be built
  – ‘Trunks’ connect ‘internal’ terminals of inferiors
  – Own terminals map to ‘external’ terminals of inferiors
  – Aggregator manages capacity of its own trunks
  – Aggregator service maps to a set of inferior services
[Figure: an aggregator over two inferior networks (site1, site2) joined by a trunk; external circuits 91 and 961 map to label 73 at each end of the trunk (‘synonymities’)]
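The mapping from an aggregator service to a set of inferior services can be sketched in a few lines. This is a simplification under assumed names (it ignores trunk capacity accounting and label allocation, which the aggregator also manages): the aggregator translates its own terminals to inferior terminals, then groups circuits by inferior so each inferior receives its own sub-service.

```python
# Sketch of aggregator service decomposition (illustrative names, not DPB's API).

# External aggregator terminal -> (inferior network, inferior terminal)
MAPPING = {
    "agg:site1": ("inferior-site1", "opst"),
    "agg:site2": ("inferior-site2", "opst"),
}

def decompose(circuits):
    """Group an aggregator service's circuits by the inferior that owns them.

    circuits: list of (aggregator terminal, label) pairs.
    Returns {inferior network: [(inferior terminal, label), ...]}.
    """
    per_inferior = {}
    for term, label in circuits:
        inferior, inner_term = MAPPING[term]
        per_inferior.setdefault(inferior, []).append((inner_term, label))
    return per_inferior

parts = decompose([("agg:site1", 91), ("agg:site2", 961)])
assert set(parts) == {"inferior-site1", "inferior-site2"}
```

In the real system each inferior sub-service would also include circuits on the trunk's internal terminals, with the aggregator deducting the requested bandwidth from the trunk's remaining capacity.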
Fabric adaptation
● Fabric adaptors are plugins for specific technologies
  – Different adaptor usable by each logical switch
  – Network heterogeneity
  – No persistent state
● OpenFlow adaptor
  – Uses VLAN OF operations for VLAN switching
  – Some metering applied to implement QoS
  – OF1.5
  – Custom Ryu controller app implements multiple isolated learning switches in one physical switch
[Figure: VLANCircuitFabric.java drives tupleslicer.py (a Ryu app) over REST; the Ryu app controls the physical switch via OpenFlow]
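The “multiple isolated learning switches in one physical switch” idea can be simulated in plain Python: the controller keys its MAC table by VLAN as well as by address, so forwarding state learned in one VL slice is invisible to every other slice. This is a behavioural sketch only; the actual tupleslicer.py is a Ryu OpenFlow app that installs flow rules rather than forwarding packets itself.

```python
# Simulation of per-VLAN-isolated MAC learning (sketch; the real logic lives
# in a Ryu controller app emitting OpenFlow rules, not in plain Python).

class SlicedLearningSwitch:
    def __init__(self):
        self.table = {}  # (vlan, mac) -> port: one MAC table per slice

    def packet_in(self, vlan, src, dst, in_port):
        """Learn src's port, then return dst's port, or None to flood
        (flooding would be confined to ports carrying this VLAN)."""
        self.table[(vlan, src)] = in_port
        return self.table.get((vlan, dst))

sw = SlicedLearningSwitch()
sw.packet_in(91, "aa:aa", "bb:bb", 1)                   # learn aa:aa in slice 91
assert sw.packet_in(91, "bb:bb", "aa:aa", 2) == 1       # known within same slice
assert sw.packet_in(961, "cc:cc", "aa:aa", 3) is None   # slice 961 is isolated
```

Keying every lookup by the VLAN tuple is what provides the functional isolation of VLs; the QoS metering mentioned above provides the non-functional isolation separately.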
Fabric adaptation
● Corsa adaptor
  – Uses a custom Ryu app to switch between internal ports of the VFC
  – Uses the switch-management REST API to attach VFC ports to physical ports and VLANs
● (De-)tagging handled by attachments, not by OpenFlow
● Shaping applied to attachments
  – QoS not implemented by OpenFlow
[Figure: PortSlicedVFCFabric.java drives portslicer.py (a Ryu app) over REST and the Corsa management REST API; inside the Corsa DP2000, the VFC’s OpenFlow forwarding context switches between internal ports whose attachments (VLAN ids 91, 73) map to physical ports and a tunnel]
Aggregate bandwidths
[Slides: measurement charts of aggregate bandwidths]
Sub-optimal results
[Slides: charts illustrating sub-optimal results]
Future of DPB
● Service modification
  – Pretend that resources consumed by the current configuration are available for new path computation
● Bandwidth matrix
  – For better expression of (say) E-TREE
● OVSDB as fabric
  – Similar to the Corsa architecture
● Multi-segment
  – Establish all disjoint segments or fail
● Alternative metrics
  – Latency, reliability, …
● Multitenancy
  – In the control plane
  – Better isolation of one user’s services from other users’ control
Acknowledgements
OSM multi-VIM issues
● IP pool splitting
  – OSM must co-ordinate IP configuration as it splits a VL, not after
  – Same subnet; disjoint IP pools
  – Our work-around: block DHCP
  – Watch out for connected internal and external VLDs
  – What about switch-like and router-like behaviour across interfaces?
  – Holistic solution to related issues?
● Pre-existing networks (including management)
  – Don’t connect them during ns-create!
  – Assume they are already connected
  – Or deal with:
    ● Modification of existing services
    ● Merging of two services into one
● Surprise unrelated subnets
  – Detection:
    ● vim-network-name expressed or implied; and
    ● profile unspecified
Multi-tenant multi-VIM management networks
● Per-tenant VIM management network?
  – Distinct VIM tenants and default management network names
  – Per-tenant isolation of management networks
  – Overlapping subnets
  – Juju client needs a distinct netns context to access multiple simultaneously
  – VPN in?
● Tool (vpnmgr) to set up multi-VIM configurations
  – Admin credentials of OSM and all requested VIMs
  – Create VIM projects at each site
    ● Create VIM network
  – Create VPN gateway(s)
  – Gather endpoints and connect with broker
  – Create OSM tenant
    ● Populate with VIMs’ project credentials and local network names
    ● Provide Juju with VPN credentials
● Or do it through OSM?
  – Need VPN gateways as VNFs
  – Need VLD pinning (or dummy VNFs)
Multi-VIM IP pool split
● A VNF could consist of multiple and variable VDUs (scaling)
● VL(D) profiles:
  – Subnet (e.g., 192.168.10/24)
  – DHCP range (e.g., 30-40)
  – Some defined by VNFD/NSD providers
  – Rest defined at deployment
[Figure: three VNFs joined by VLs]
Multi-VIM IP pool split
● Express as NSD
  – Assign VNFs to different VIMs
● Deploy it
  – Leads to WIM interaction
● OSM 5/6 implementation
  – No IP address co-ordination
[Figure: one VNF deployed at Site 1, two at Site 2]
Multi-VIM IP pool split
● No VNF spans two or more sites
● No internal VL spans sites
● Some external VLs span sites
  – Some may span more than two
  – A split VL will need representation at each site
● VL profiles must be defined before splitting
  – Representations of the same VL at different sites must be compatible
  – Representations of different VLs at different sites must be distinct
  – To permit L2 inter-site connectivity
[Figure: VLs 192.168.10/24 (20-39), 192.168.20/24 and 10.30.67/24 (20-39) split across Site 1 and Site 2]
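The two constraints above (same VL ⇒ compatible representations; different VLs ⇒ distinct representations) can be checked mechanically. A minimal sketch, assuming compatibility simply means identical subnets and treating each site's representations as a name-to-subnet map:

```python
# Sketch of the split-VL compatibility rules. Assumption: "compatible" is
# modelled as "same subnet string"; a real check would also compare gateways,
# DNS settings, etc.

def compatible(sites):
    """sites: list of {vl_name: subnet} maps, one per site.

    True iff every VL has the same subnet wherever it is represented,
    and no two distinct VLs share a subnet."""
    seen = {}  # vl_name -> subnet, across all sites
    for site in sites:
        for vl, subnet in site.items():
            if seen.setdefault(vl, subnet) != subnet:
                return False  # same VL, incompatible representations
    subnets = list(seen.values())
    return len(subnets) == len(set(subnets))  # distinct VLs stay distinct

site1 = {"vl-a": "192.168.20.0/24", "vl-b": "10.30.67.0/24"}
site2 = {"vl-a": "192.168.20.0/24", "vl-b": "10.30.67.0/24"}
assert compatible([site1, site2])
```

Distinctness is what makes L2 inter-site bridging safe: if two unrelated VLs shared a subnet, joining their segments would merge their address spaces.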
Multi-VIM IP pool split
● Each OpenStack site provides a DHCP agent for each VL it represents
  – One address is used as the default gateway, DNS server and DHCP server
  – Agent only responds to DHCP requests of MACs known locally to use that network
  – No awareness of DHCP at the other site
  – DHCP ranges for the same VL at each site must not overlap!
  – DHCP ranges must anticipate scaling
[Figure: each split VL (192.168.20/24, 10.30.67/24) is given disjoint DHCP ranges at the two sites, (30-39) at Site 1 and (20-29) at Site 2; 192.168.10/24 remains local to Site 1]
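The non-overlap rule amounts to carving one VL's DHCP pool into disjoint per-site sub-ranges before deployment. A minimal sketch of that split (illustrative; a real allocator would weight each site's share by its expected VDU count to anticipate scaling):

```python
# Sketch: carve one VL's DHCP host range into disjoint per-site sub-ranges,
# so the independent DHCP agents at each site can never hand out the same IP.

def split_dhcp_range(start, end, n_sites):
    """Split host addresses [start, end] into n_sites disjoint (lo, hi) ranges."""
    total = end - start + 1
    per_site = total // n_sites
    ranges = []
    for i in range(n_sites):
        lo = start + i * per_site
        hi = end if i == n_sites - 1 else lo + per_site - 1  # last gets remainder
        ranges.append((lo, hi))
    return ranges

# e.g. a pool of .20-.39 in 192.168.20/24, split across two sites:
assert split_dhcp_range(20, 39, 2) == [(20, 29), (30, 39)]
```

This reproduces the split shown in the figure: one site's agent serves (20-29), the other serves (30-39), and neither needs to know the other exists.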
Multi-VIM IP pool split
● Inter-site connectivity
  – Get VLAN tags of the VIM representations of multi-site VLs
    ● 42 & 57
    ● 69 & 60
  – Add site identification as context
    ● Site 1.42 & Site 2.57
    ● Site 1.69 & Site 2.60
  – Estimate bandwidth at each end point
    ● Site 1.42 (10M) & Site 2.57 (10M)
    ● Site 1.69 (10M) & Site 2.60 (10M)
  – Supply to WIM
[Figure: the split VLs tagged 42 and 69 at Site 1 and 57 and 60 at Site 2]
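The three steps above assemble, for each split VL, a set of (site, VLAN tag, bandwidth) endpoints to hand to the WIM. A sketch of that assembly, with entirely hypothetical field names (the slides do not show DPB's actual request schema):

```python
# Sketch of the per-VL information supplied to the WIM. Field names
# ("segments", "terminal", "label", "ingress-mbps") are hypothetical.

def wim_request(endpoints):
    """endpoints: list of (site_terminal, vlan_tag, mbps) tuples for one VL."""
    return {
        "segments": [
            {"terminal": term, "label": vlan, "ingress-mbps": mbps}
            for term, vlan, mbps in endpoints
        ]
    }

# The first split VL from the slide: tag 42 at Site 1, tag 57 at Site 2.
req = wim_request([("site1", 42, 10), ("site2", 57, 10)])
assert len(req["segments"]) == 2
```

The site terminal disambiguates the tags: 42 and 57 identify the same VL, but only within their respective sites, so the WIM needs both the tag and the site context.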
Multi-VIM IP pool split
● Site 1.42 (10M) & Site 2.57 (10M)
● Site 1.69 (10M) & Site 2.60 (10M)
● Broadcasts are visible across both sites
  – ARPs work
  – DHCP requests are seen by both agents, but only one responds
[Figure: the two inter-site L2 connections established between Site 1 and Site 2]
New management networks through OSM
● Define a VLD
  – Include a VPN gateway as a VNF
● Deploy across sites
  – But only VNFs can be assigned to VIMs
● Create tenant-specific VIMs using the new network as default management
[Figure: a mgmt VLD spanning Sites 1-4, reached through public VPN gateways]