


  1. Elastic Load-Balancing Using Octavia: deep dive
     • Dean H. Lorenz, IBM Research – Haifa
     • Allan Hu, Cloud Networking Services, IBM NSJ
     • OpenStack Austin 2016

  2. Load Balancing 101
     • Users access a service
       ‒ Service hosted on cloud
     • Pool of back-end servers (aka members)
       ‒ High availability: server failure ≠ service failure
       ‒ Performance: add/remove servers to match load
     • One service IP (aka VIP)
       ‒ Clients do not know which back-end serves them
       ‒ Need to split incoming VIP traffic
     [Diagram: clients reach the service IP (VIP); incoming VIP traffic is split across the pool of back-end servers]

  3. Load Balancing 101 (2)
     • Load balancer (on VIP)
       ‒ Distributes new VIP connections to members
       ‒ High availability: avoids failed servers
       ‒ Performance: avoids overloaded servers
     • The LB is not the pool manager: it does not add/remove servers
       ‒ But it uses all available servers and reports broken ones
       ‒ Health Monitor + Stats Collector
     • LB algorithm / policy
       ‒ Balance something: # connections, CPU load, …
       ‒ Affinity: similar packets go to the same back-end
         • All packets from the same flow (minimum affinity)
         • All packets from the same source (quicker TLS handshakes)
         • All packets from the same HTTP user
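The per-flow affinity described above can be sketched with a simple hash of the flow's source IP and port. This is an illustrative model (the function name and back-end addresses are made up, and real LBs use their own hash functions), not code from Octavia:

```python
import hashlib

def pick_backend(src_ip, src_port, backends):
    """Pick a back-end for a flow by hashing its source IP:port.

    Every packet of the same flow hashes to the same value, so the
    whole flow sticks to one back-end (minimum affinity). Hashing
    only src_ip would give per-source affinity instead, which keeps
    a client's TLS sessions on one server.
    """
    key = f"{src_ip}:{src_port}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(backends)
    return backends[index]

backends = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]

# Same flow always lands on the same back-end:
first = pick_backend("198.51.100.7", 51000, backends)
second = pick_backend("198.51.100.7", 51000, backends)
assert first == second
assert first in backends
```

Note that a stateless hash like this preserves affinity without the LB keeping a per-connection table, which matters later for the "not so smart" Distributor.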

  4. Load-Balancing as a Service (LBaaS)
     • Neutron LBaaSv2 API
       ‒ LB (VIP) → Listeners (protocol) → Pool → Members, Health monitor
       ‒ CLI: neutron lbaas-{loadbalancer,listener,pool,member,healthmonitor}-{create,delete,list,show,update}
     • Octavia (operator-grade LB)
       ‒ One VM per LB (aka Amphora) running HAProxy
       ‒ 2 VMs for active-standby HA (Mitaka)
     [Diagram: user API → Neutron networking services (LBaaS handler) → Octavia driver → one Amphora (LB VM) per LB, each in front of its back-ends]

  5. Load-Balancing as a Service (LBaaS) (2)
     • Neutron LBaaSv2 API
       ‒ LB (VIP) → Listeners (protocol) → Pool → Members, Health monitor
       ‒ CLI: neutron lbaas-{loadbalancer,listener,pool,member,healthmonitor}-{create,delete,list,show,update}
     • Octavia (operator-grade LB)
       ‒ One VM per LB (aka Amphora) running HAProxy
       ‒ 2 VMs for active-standby HA (Mitaka)
       ‒ Many pieces under the hood, with lots of pluggability
     [Diagram: Octavia controller internals: Octavia API, Octavia Worker, Health Manager, Housekeeping Manager, and a DB, plus drivers for Barbican (certificates), Nova (compute), Neutron (networking), and the Amphorae]

  6. Amphora can do even more
     • HAProxy is great
       ‒ L7 content switching (new in Octavia, Mitaka)
       ‒ Monitors back-end health
       ‒ Cookie insertion (session stickiness)
       ‒ SSL termination
       ‒ Authentication (not supported in Octavia yet)
       ‒ Compression (not supported in Octavia yet)
       ‒ …
     • Would be nice to include other functions
       ‒ E.g., cache, FW, rewrite, …
     • The more it does, the more resources it needs
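Several of the HAProxy features listed above (SSL termination, cookie-based stickiness, health checks) show up together in an ordinary HAProxy configuration. The fragment below is an illustrative, hand-written example; the certificate path, server IPs, and health-check URL are invented, and it is not the configuration Octavia actually renders inside an Amphora:

```
frontend vip_in
    bind *:443 ssl crt /etc/haproxy/site.pem    # SSL termination at the LB
    default_backend members

backend members
    balance roundrobin
    cookie SRV insert indirect nocache          # cookie insertion: session stickiness
    option httpchk GET /healthz                 # back-end health monitoring
    server m1 10.0.0.11:80 check cookie m1
    server m2 10.0.0.12:80 check cookie m2
```

Each extra feature enabled here costs CPU and memory in the Amphora, which is exactly the resource pressure the slide warns about.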

  7. Remind me again; why did I need a LB?
     • High availability
       ‒ The Amphora is a single point of failure
       ‒ But active-standby was only just added in Mitaka
     • Performance
       ‒ Huge, successful service…
       ‒ A single Amphora might not be able to handle the load
     [Diagram: one Amphora load balancer (on the VIP) in front of several services, each with its own pool of back-end servers]

  8. Elastic Load Balancing (ELB)
     • Remind me again; why did I need a LB?
       ‒ High availability: the Amphora is a single point of failure (active-standby just added in Mitaka)
       ‒ Performance: a huge, successful service… one Amphora might not be able to handle the load
     • Elastic Load-Balancing (ELB)
       ‒ Pool of Amphorae
       ‒ Need to split incoming VIP traffic over the Amphora pool
       ‒ Déjà vu…
     [Diagram: a pool of Amphorae replaces the single Amphora in front of the back-end pools]

  9. LBaaS Challenge: cost-effectively provide LBaaS for cloud workloads
     • Customers expect the cloud to support their elastic workloads
       ‒ Cheap for small workloads (free tier)
       ‒ Acceptable performance for large workloads, no matter how large
     • LBaaS should
       ‒ Use as few resources as possible for small workloads
       ‒ Have the resources to handle huge workloads
     • Existing Octavia topologies have, per LB:
       ‒ One active VM (too small for large workloads? too much for a free tier? maybe use containers?)
       ‒ (Optionally) one idle standby VM: 50% utilization

  10. Introducing: Active-Active, N+1 Topology
     • N Amphorae, all active
       ‒ Can handle a large load
     • 2-stage VIP traffic splitting
       1) Distributor to Amphorae
       2) Amphora to back-end servers
     • Standby Amphora
       ‒ Ready to replace a failed Amphora (takes over its load)
       ‒ The failed Amphora is recreated as the new standby
       ‒ Can generalize to more than one standby: N + k
     • Disclaimer: the Active-Active topology is still a draft blueprint (+ demo code)
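The N+1 failover bookkeeping above can be sketched in a few lines. This is a hypothetical model of the described behavior (class and Amphora names are invented), not code from the Octavia blueprint:

```python
# N+1 topology: N active Amphorae plus one standby. On failure, the
# standby takes over the failed Amphora's slot, and the failed Amphora
# is rebuilt as the new standby.

class AmphoraPool:
    def __init__(self, active, standby):
        self.active = list(active)   # the N active Amphorae
        self.standby = standby       # the "+1" standby

    def handle_failure(self, failed):
        # The standby takes over the failed Amphora's slot ...
        slot = self.active.index(failed)
        self.active[slot] = self.standby
        # ... and the failed Amphora is recreated as the new standby.
        self.standby = f"rebuilt-{failed}"

pool = AmphoraPool(["amp-1", "amp-2", "amp-3"], "amp-standby")
pool.handle_failure("amp-2")
assert pool.active == ["amp-1", "amp-standby", "amp-3"]
assert pool.standby == "rebuilt-amp-2"
```

Generalizing to N + k would simply keep a list of standbys instead of a single one, popping one on each failure.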

  11. The Distributor
     • Equivalent to a GW router
       ‒ Should have similar high-availability attributes
       ‒ Needs to handle the entire VIP load
       ‒ HW is a good match
     • "Not so smart" LB
       ‒ More like ECMP
       ‒ L3 only, but must have per-flow affinity (cannot break TCP)
     • Could be shared (multi-tenant)
       ‒ SSL termination is only at the Amphora
     • Could be DNS
       ‒ If you have enough (public) IPs

  12. Our SDN SW Distributor
     • 1-arm direct routing
       ‒ Co-located on the same LAN as the Amphorae
       ‒ L2 forwarding: replace own MAC with the MAC of an Amphora
       ‒ Direct Server Return: return traffic goes directly to the GW router
       ‒ Amphorae do not advertise the VIP
     • OpenFlow rules (using groups)
       ‒ Select an Amphora by hash of SrcIP:Port
     • OVS VM
       ‒ Can be any OpenFlow switch
       ‒ Multi-tenant
       ‒ No HA for now
     [Diagram: traffic from the front-end router reaches the Distributor on the public VIP and is forwarded on the private LAN to one of the VIP-configured Amphora instances; return traffic bypasses the Distributor]

  13. Our SDN SW Distributor (2)
     • The Distributor's OpenFlow select group, dumped from OVS:

       $ sudo ovs-ofctl -O OpenFlow15 dump-groups br-data
       OFPST_GROUP_DESC reply (OF1.5) (xid=0x2):
        group_id=1,type=select,selection_method=hash,fields(ip_src,tcp_src),
         bucket=bucket_id:0,actions=set_field:fa:16:3e:95:86:06->eth_dst,IN_PORT,
         bucket=bucket_id:1,actions=set_field:fa:16:3e:9d:c9:d2->eth_dst,IN_PORT,
         bucket=bucket_id:2,actions=set_field:fa:16:3e:ef:97:60->eth_dst,IN_PORT

     • A bucket is selected by hashing ip_src and tcp_src; each bucket rewrites eth_dst to one Amphora's MAC and sends the packet back out the ingress port (IN_PORT)
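What the select group does per packet can be modeled in a few lines: hash (ip_src, tcp_src), pick a bucket, rewrite the destination MAC, and leave the destination IP (the VIP) untouched, which is what makes this direct routing. This is an illustrative sketch only; the packet fields are simplified and OVS does not compute its bucket hash this way:

```python
import hashlib

# Buckets from the group dump: (bucket_id, Amphora MAC).
BUCKETS = [
    (0, "fa:16:3e:95:86:06"),
    (1, "fa:16:3e:9d:c9:d2"),
    (2, "fa:16:3e:ef:97:60"),
]

def distribute(packet):
    """Model of the select group: hash the flow, rewrite eth_dst."""
    key = f"{packet['ip_src']}:{packet['tcp_src']}".encode()
    h = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    _, mac = BUCKETS[h % len(BUCKETS)]
    packet["eth_dst"] = mac   # L2 forwarding to the chosen Amphora
    return packet             # dst IP (the VIP) is left unchanged

pkt = {"ip_src": "203.0.113.5", "tcp_src": 40000,
       "ip_dst": "192.0.2.10", "eth_dst": "distributor-mac"}
out = distribute(dict(pkt))
assert out["eth_dst"] in {mac for _, mac in BUCKETS}
assert out["ip_dst"] == pkt["ip_dst"]   # VIP untouched: direct routing
```

Because only the MAC changes, the Amphora (which holds the VIP locally but does not advertise it) accepts the packet, and its reply can go straight to the gateway router, giving Direct Server Return.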
