  1. CAP for Networks, Or: How to Stop Worrying and Embrace Failure. Aurojit Panda, Colin Scott, Ali Ghodsi, Teemu Koponen, Scott Shenker (UC Berkeley, KTH, VMware, ICSI)

  2. Keshav raps about SDN

  3. CAP Theorem: in the presence of network Partitions, pick one of • Service Correctness • Service Availability

  4. CAP Theorem: Impact. Divides the database community (even today): NoSQL puts Availability above all; SQL puts Correctness above all.

  5. How does the CAP theorem apply to networks?

  6-7. What about Networks? Traditionally, connectivity was the only concern: • Correctness: deliver packets to their destination • Availability: deliver packets to their destination • Correctness is the same as Availability

  8-10. The move to SDN. SDN provides more sophisticated functionality: • Tenant isolation (ACL enforcement) • Fine-grained load balancing • Virtualization. Control plane partitions no longer imply data plane partitions, since control traffic often does not use the data plane network.

  11. Availability ≠ Correctness. During control plane partitions: • the data plane may remain connected, so packets can still be delivered (Availability) • but control plane data becomes inconsistent, so policies can be violated (Correctness) • hence Availability does not imply Correctness.

  12-13. How does the CAP theorem apply to SDN? Concretely: can one provide correct isolation and availability in the presence of link failures?

  14-17. Network Model. [Figure: two controllers, each managing one switch over an out-of-band control network; hosts A (10.1.1.1) and B (10.1.1.2) attach to Controller 1's switch, C (10.1.2.1) and D (10.1.2.2) to Controller 2's switch.] • Out-of-band control network. • Routing and forwarding based on addresses. • Policy specification using end-host names. • Each controller is only aware of its local name-address bindings. (A small sketch of this model follows below.)
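
To make the model concrete, here is a minimal Python sketch. The dictionaries and the `allowed` helper are illustrative, not taken from the paper; the point is that policy is written over host names, forwarding is decided over addresses, and each controller only holds its local name-address bindings.

```python
# Minimal sketch of the network model (illustrative names and helpers,
# not taken from the paper).

# Policy is specified over end-host *names*: A must not reach B.
policy_isolate = {("A", "B")}

# Each controller only knows the name-address bindings of the hosts
# attached to its own switch.
controller1_bindings = {"10.1.1.1": "A", "10.1.1.2": "B"}
controller2_bindings = {"10.1.2.1": "C", "10.1.2.2": "D"}

def allowed(src_addr, dst_addr, local_bindings):
    """Decide whether a switch may forward a packet, given only its
    controller's local name-address bindings. Returns True/False when
    both endpoints can be named locally, and None when it cannot tell."""
    src = local_bindings.get(src_addr)
    dst = local_bindings.get(dst_addr)
    if src is None or dst is None:
        return None  # identity-address disconnect: cannot apply the policy
    return (src, dst) not in policy_isolate

# Controller 1 can decide A -> B locally, but not A -> C:
print(allowed("10.1.1.1", "10.1.1.2", controller1_bindings))  # False (blocked)
print(allowed("10.1.1.1", "10.1.2.1", controller1_bindings))  # None (unknown)
```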

  18-23. Isolation Result. [Figure: during a control network partition, B migrates from Controller 1's switch (where it was 10.1.1.2) to Controller 2's switch and receives the new address 10.1.2.1; A (10.1.1.1) and D (10.1.2.2) stay put. A then sends packets 10.1.1.1 → 10.1.2.1 and 10.1.1.1 → 10.1.2.2.] • Consider a policy isolating A from B. • A control network partition occurs, so Controller 1 never learns B's new address. • The only possible choices: let all packets through, including those from A to B (sacrificing Correctness), or drop all packets, including those from A to D (sacrificing Availability). (The sketch below walks through the dilemma.)
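
A self-contained sketch of the dilemma, in Python. The addresses and the migration follow the figure; the `decide` helper and its permit-unknown/drop-unknown defaults are illustrative, not the paper's mechanism.

```python
# Sketch of the isolation dilemma during a control-plane partition
# (illustrative helpers, not from the paper).

policy_isolate = {("A", "B")}          # A must never reach B

# Controller 1's bindings, frozen when the partition starts: it knows
# only its local hosts, with B still at its old address.
controller1_bindings = {"10.1.1.1": "A", "10.1.1.2": "B"}

# Ground truth after B migrates behind the other switch during the
# partition and is assigned the new address 10.1.2.1.
ground_truth = {"10.1.1.1": "A", "10.1.2.1": "B", "10.1.2.2": "D"}

def decide(src_addr, dst_addr, bindings, permit_unknown):
    """Forwarding decision at Controller 1's switch: when a destination's
    name is unknown, fall back to a blanket default."""
    src, dst = bindings.get(src_addr), bindings.get(dst_addr)
    if src is None or dst is None:
        return permit_unknown
    return (src, dst) not in policy_isolate

for dst_addr in ("10.1.2.1", "10.1.2.2"):          # A -> B (new addr), A -> D
    print(ground_truth[dst_addr], dst_addr,
          "| permit-unknown forwards:",
          decide("10.1.1.1", dst_addr, controller1_bindings, True),
          "| drop-unknown forwards:",
          decide("10.1.1.1", dst_addr, controller1_bindings, False))

# permit-unknown also delivers A -> B  => Correctness is lost.
# drop-unknown   also blocks   A -> D  => Availability is lost.
```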

  24-26. Workarounds for Isolation. • The identity-address disconnect underlies the isolation result. • The network can label packets with the sender's identity. • Route (and filter) based on identity instead of address. (A sketch of this workaround follows below.)
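
A minimal sketch of the identity-labeling workaround, assuming the ingress switch stamps each packet with the sender's name and the egress switch filters on the names of its own local hosts. The `stamp` and `egress_allows` helpers are invented for illustration; the idea is that isolation then only ever needs information local to one of the two switches, so both correctness and availability survive the partition.

```python
# Sketch of identity labeling: filter on who sent the packet, not on
# what address it came from (illustrative helpers, not from the paper).

policy_isolate = {("A", "B")}

# The ingress switch still knows which host is attached to each of its
# own ports, even during a control-plane partition.
ingress_bindings = {"10.1.1.1": "A"}

def stamp(src_addr, dst_addr):
    """Label the packet with the sender's identity at the ingress switch."""
    return {"src_id": ingress_bindings[src_addr], "dst_addr": dst_addr}

# The egress switch knows its own local hosts (B has just migrated here
# and been assigned 10.1.2.1), so it can enforce the name-level policy
# without ever learning the remote address-to-name bindings.
egress_bindings = {"10.1.2.1": "B", "10.1.2.2": "D"}

def egress_allows(packet):
    dst_id = egress_bindings[packet["dst_addr"]]
    return (packet["src_id"], dst_id) not in policy_isolate

print(egress_allows(stamp("10.1.1.1", "10.1.2.1")))  # False: A -> B blocked
print(egress_allows(stamp("10.1.1.1", "10.1.2.2")))  # True:  A -> D delivered
```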

  27. Workarounds are not general. Example: edge-disjoint traffic engineering, where two flows must traverse disjoint links. • Requires a consistent topology across controllers. (A toy sketch follows below.)
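
A toy sketch of why this policy needs a consistent, shared view across controllers: two partitioned controllers that each compute a path for their own flow independently can end up sharing a link, whereas with a shared view the second controller can avoid the links the first flow uses. The topology, flows, and helpers here are invented for illustration and are not from the paper.

```python
# Toy example: edge-disjoint traffic engineering fails when partitioned
# controllers route independently (invented topology, not from the paper).
from collections import deque

# Undirected links: a direct path u-v plus two 2-hop detours u-a-v, u-b-v.
links = {("u", "v"), ("u", "a"), ("a", "v"), ("u", "b"), ("b", "v")}

def neighbors(node):
    return [y for x, y in links if x == node] + [x for x, y in links if y == node]

def shortest_path(src, dst, forbidden=frozenset()):
    """BFS shortest path avoiding a set of forbidden (undirected) links."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in seen and frozenset((path[-1], nxt)) not in forbidden:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def edges_of(path):
    return {frozenset(e) for e in zip(path, path[1:])}

# Partitioned controllers each route their own u -> v flow independently
# and both pick the direct link, violating edge-disjointness.
p1 = shortest_path("u", "v")    # controller 1, flow 1
p2 = shortest_path("u", "v")    # controller 2, flow 2
print(p1, p2, "disjoint:", not (edges_of(p1) & edges_of(p2)))          # False

# With a consistent shared view, controller 2 can avoid flow 1's links.
p2_fixed = shortest_path("u", "v", forbidden=edges_of(p1))
print(p2_fixed, "disjoint:", not (edges_of(p1) & edges_of(p2_fixed)))  # True
```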

  28-29. Can one provide correct isolation and availability in the presence of link failures? Not in general.

  30. In the Paper • More policies and proofs • More details on workarounds • Other ways to model the network

  31. CAP for Networks? Choices for network architects: • Availability above all: Traditional Routing?, BGP, NOX Routing • Correctness above all: Security Policies?, ICING?

  32. Backup Slides

  33. Host Migration • Our model assumes host migrations happen without controller involvement. • In part this is because host migrations are surprisingly common: • Soundararajan and Govil 2010: 6 migrations/day/VM • In a datacenter, ~480,000 migrations/day • 5.5 migrations per second (arithmetic check below) • Controller involvement is too expensive in datacenters. • NVP and Floodlight work in a similar manner. • In enterprises, controller involvement is complicated by mobility.
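
A back-of-the-envelope check of the numbers above. The 80,000-VM datacenter size is an assumption chosen to reproduce the slide's ~480,000 migrations/day; it is not stated on the slide.

```python
# Rough arithmetic behind the migration rate (the 80,000-VM figure is an
# assumption, chosen so that 6 migrations/day/VM gives ~480,000/day).
migrations_per_vm_per_day = 6            # Soundararajan and Govil 2010
vms_in_datacenter = 80_000               # assumed datacenter size
per_day = migrations_per_vm_per_day * vms_in_datacenter
per_second = per_day / (24 * 60 * 60)
print(per_day, per_second)               # 480000 per day, ~5.5 per second
```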
