  1. QoS Services with Dynamic Packet State Ion Stoica Carnegie Mellon University (joint work with Hui Zhang and Scott Shenker)

  2. Today’s Internet
     • Service: best-effort datagram delivery
     • Architecture: “stateless” routers
       – excepting routing state, routers do not maintain any fine-grained state about traffic
     • Properties
       – scalable
       – robust
     istoica@cs.cmu.edu

  3. Trends
     • Deploy more sophisticated services, e.g., traffic management, Quality of Service (QoS)
     • Two types of solutions:
       – Stateless: preserve original Internet advantages
         • RED – support for congestion control
         • Differentiated services (Diffserv) – provide QoS
       – Stateful: routers perform per-flow management
         • Fair Queueing – support for congestion control
         • Integrated services (Intserv) – provide QoS

  4. Stateful Solutions: Router Complexity
     • Data path
       – Per-flow classification
       – Per-flow buffer management
       – Per-flow scheduling
     • Control path
       – install and maintain per-flow state for data and control planes
     [Figure: output interface with classifier, per-flow state for flows 1…n, buffer management, and scheduler]

  5. Stateless vs. Stateful
     • Stateless solutions are more
       – scalable
       – robust
     • Stateful solutions provide more powerful and flexible services
       – Fair Queueing vs. RED
       – Intserv vs. Diffserv

  6. Question
     • Can we achieve the best of both worlds, i.e., provide the services implemented by stateful networks while maintaining the advantages of stateless architectures?

  7. Answer
     • Yes, at least in some interesting cases:
       – Per-flow guaranteed services [SIGCOMM’99]
       – Fair Queueing approximation [SIGCOMM’98]
       – Large spatial service granularity [NOSSDAV’98]

  8. Scalable Core (SCORE)
     • A contiguous and trusted region of network in which
       – edge nodes perform per-flow management
       – core nodes do not perform any per-flow management

  9. The Approach
     1. Define a reference stateful network that implements the desired service
     2. Emulate the functionality of the reference network in a SCORE network
     [Figure: Reference Stateful Network vs. SCORE Network]

  10. The Idea
     • Instead of having core routers maintain per-flow state, have packets carry per-flow state
     [Figure: Reference Stateful Network vs. SCORE Network]

  11. The Technique: Dynamic Packet State (DPS)
     • Ingress node: compute and insert flow state in packet’s header


  13. The Technique: Dynamic Packet State (DPS)
     • Core node:
       – process packet based on the state it carries and the node’s state
       – update both the packet’s and the node’s state

  14. The Technique: Dynamic Packet State (DPS)
     • Egress node: remove state from packet’s header
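The three roles above (ingress inserts state, core processes and updates it, egress strips it) can be sketched as a minimal pipeline. This is an illustration only; the dict-based packet and all function names are assumptions, not the prototype's code:

```python
# Minimal sketch of the Dynamic Packet State processing path.
# The dict-based "packet" and all names here are illustrative assumptions.

def ingress(packet, flow_state):
    """Ingress node: compute per-flow state and insert it in the header."""
    packet["dps"] = flow_state
    return packet

def core(packet, node_state, rule):
    """Core node: process the packet based on the state it carries plus the
    node's aggregate state, then update both (no per-flow state kept here)."""
    packet["dps"], node_state = rule(packet["dps"], node_state)
    return packet, node_state

def egress(packet):
    """Egress node: strip the state before the packet leaves the SCORE region."""
    packet.pop("dps", None)
    return packet
```

Note that the core node only sees the state carried in the packet and its own aggregate state, which is exactly what lets it avoid per-flow bookkeeping.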

  15. Examples
     • Support for congestion control
     • Per-flow guaranteed services

  16. Core-Stateless Fair Queueing (CSFQ)
     • Approximate the functionality of a network in which every node performs Fair Queueing (FQ)
     [Figure: reference network with FQ at every node vs. SCORE network with FQ at the edges and CSFQ in the core]

  17. Algorithm Outline
     • Ingress nodes: estimate rate r for each flow and insert it in the packets’ headers
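The slides do not show how r is estimated; the CSFQ paper [SIGCOMM’98] uses exponential averaging over packet interarrival times. A sketch of that estimator (the function name, the averaging constant K = 0.1 s, and the byte/second units are assumed values for illustration):

```python
import math

def estimate_rate(r_old, pkt_len, now, t_prev, K=0.1):
    """Exponentially averaged flow rate, in the style of the CSFQ paper:
        r_new = (1 - e^(-T/K)) * (l / T) + e^(-T/K) * r_old
    where T is the packet interarrival time, l the packet length, and
    K the averaging constant (0.1 s here is an assumed value)."""
    T = now - t_prev
    w = math.exp(-T / K)
    return (1.0 - w) * (pkt_len / T) + w * r_old
```

For steady traffic of one 1000-byte packet per millisecond, the estimate converges toward 1,000,000 bytes/s; the exponential weight makes it robust to short-term burstiness.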


  19. Algorithm Outline
     • Core node:
       – Compute fair rate f on the output link
       – Enqueue packet with probability P = min(1, f/r)
       – Update packet label to r = min(r, f)

  20. Algorithm Outline
     • Egress node: remove state from packet’s header

  21. Example: CSFQ
     • Assume estimated fair rate f = 4 at a core node with a 10 Mbps output link
       – flow 1, r = 8 => P = min(1, 4/8) = 0.5; expected rate of forwarded traffic 8 * P = 4
       – flow 2, r = 6 => P = min(1, 4/6) = 0.67; expected rate of forwarded traffic 6 * P = 4
       – flow 3, r = 2 => P = min(1, 4/2) = 1; expected rate of forwarded traffic 2
     [Figure: flows labeled 8, 6, and 2 entering a 10 Mbps core node with a FIFO queue; surviving packets leave relabeled 4, 4, and 2]
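The arithmetic on this slide follows directly from the core-node rule on slide 19; a small sketch reproducing it (the function name is an assumption):

```python
def core_forward(r, f):
    """CSFQ core-node rule: forwarding probability P = min(1, f/r) and the
    new packet label min(r, f). A real node would draw a random number
    against P to decide enqueue/drop; here we just return the pair."""
    p = min(1.0, f / r)
    return p, min(r, f)

# Slide numbers: fair rate f = 4 on a 10 Mbps link.
# flow 1 (r = 8): P = 0.5,  relabel 4, expected forwarded rate 8 * 0.5 = 4
# flow 2 (r = 6): P ~ 0.67, relabel 4, expected forwarded rate 6 * (2/3) = 4
# flow 3 (r = 2): P = 1,    relabel 2, expected forwarded rate 2
```

Summing the expected forwarded rates (4 + 4 + 2) exactly fills the 10 Mbps link, which is how the node picks f in the first place: the largest fair rate whose induced load fits the link.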

  22. Simulation Results
     • 1 UDP (10 Mbps) and 31 TCPs sharing a 10 Mbps link
       – fair rate 0.31 Mbps
     [Figure: UDP flow #1 at 10 Mbps and TCP flows #2…#32 sharing a 10 Mbps bottleneck link]

  23. Throughput of TCP and UDP Flows with RED, FRED, FQ, CSFQ
     [Figure: four panels (RED, FRED, FQ, CSFQ) plotting per-flow throughput in Mbps against flow number 1–32]

  24. Results
     • Complexity – n is the number of (active) flows

                FIFO/RED   FRED    FQ         CSFQ
       State    O(1)       O(n)    O(n)       O(n) edge, O(1) core
       Time     O(1)       O(1)    O(log n)   O(1)

     • Accuracy
       – the extra service that a flow can receive in CSFQ as compared to FQ is bounded

  25. Examples
     • Support for congestion control
     • Per-flow guaranteed services

  26. Guaranteed Services
     • Intserv:
       – provides per-flow bandwidth and delay guarantees, and achieves high resource utilization
       – supports fine-grained and short-lived reservations
       – not scalable
     • Diffserv (Premium Service):
       – scalable (on data path)
       – cannot provide low delay guarantees and high resource utilization simultaneously
         • even at low utilization (e.g., 10%) in a medium network (e.g., 15 hops), the worst-case queueing delay is > 200 ms
       – centralized admission control (e.g., Bandwidth Broker) – not appropriate for short-lived reservations

  27. Goal
     • Unicast Intserv guaranteed service semantics
     • Diffserv-like scalability

  28. Solution
     • Data path: approximate Jitter-Virtual Clock (Jitter-VC) with Core-Jitter Virtual Clock (CJVC)
     • Control path: approximate distributed admission control
     [Figure: reference network with Jitter-VC at every node vs. SCORE network with Jitter-VC at the edges and CJVC in the core]

  29. Theoretical Results
     • CJVC provides the same end-to-end delay guarantees as Jitter-VC (and Weighted Fair Queueing)
     • Admission control: provides the semantics of a hard-state protocol, but…
       – typically achieves only 80% link utilization

  30. Implementation
     • Problem: where to insert the state?
     • Possible solutions:
       – between link layer and network layer headers (e.g., MPLS)
       – as an IP option
       – find room in the IP header
     • Current implementation (FreeBSD 2.2.6): use 17 bits in the IP header
       – 4 bits in the DS field (former TOS)
       – 13 bits by reusing the fragment offset
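The 17-bit encoding above can be sketched as a split of one state value across the two header fields. This is only an illustration of the bit budget; the names and the exact bit ordering used by the prototype are assumptions:

```python
# Per the slide: 4 bits borrowed from the DS field (former TOS),
# 13 bits from the reused fragment offset. Ordering is an assumption.
DS_BITS, FRAG_BITS = 4, 13

def split_state(state):
    """Split a 17-bit DPS value into (DS-field part, fragment-offset part),
    putting the high bits in the DS field."""
    assert 0 <= state < (1 << (DS_BITS + FRAG_BITS)), "state must fit in 17 bits"
    return state >> FRAG_BITS, state & ((1 << FRAG_BITS) - 1)

def join_state(ds, frag):
    """Reassemble the 17-bit value at a core or egress node."""
    return (ds << FRAG_BITS) | frag
```

Reusing the fragment offset is only safe because packets carrying DPS state inside the SCORE region are not IP fragments, which is why the prototype can repurpose those bits.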

  31. Status
     • Working prototype in FreeBSD 2.2.6 that implements:
       – Core-Stateless Fair Queueing
       – Guaranteed services
         • data path – Core Jitter Virtual Clock
         • control path – distributed admission control

  32. Conclusions
     • Diffserv has serious limitations:
       – no flow protection
       – cannot provide guaranteed services and high resource utilization simultaneously
       – no scalable admission control architecture (e.g., Bandwidth Broker)
     • DPS is compatible with Diffserv: it can greatly enhance the functionality while requiring minimal changes
     • Let’s do it in Qbone!

  33. More Information
     http://www.cs.cmu.edu/~istoica/DPS
