Scaled VIP Algorithms for Joint Dynamic Forwarding and Caching in Named Data Networks




  1. Scaled VIP Algorithms for Joint Dynamic Forwarding and Caching in Named Data Networks
     Ying Cui, Shanghai Jiao Tong University, Shanghai, China
     Joint work with Fan Lai, Feng Qiu, Wenjie Bian, and Edmund Yeh

  2. Background
     • Promising future Internet architecture: NDN/CCN
       – replace the connection-based model with a content-centric model
       – reflect how the Internet is primarily used today
       – improve the efficiency of content dissemination
     • Content delivery in NDN
       – two packet types: Interest Packets (IPs) and Data Packets (DPs)
       – three data structures per node: Forwarding Information Base (FIB), Pending Interest Table (PIT), Content Store (a minimal sketch follows below)
     • Goal of NDN
       – optimally utilize bandwidth and storage resources
       – jointly design forwarding and caching
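For readers unfamiliar with NDN, the following is a minimal, simplified sketch of how the three per-node data structures interact when packets arrive. It is an illustration only, not the paper's implementation; names such as NdnNode and on_interest are my own.

```python
# Minimal, simplified sketch of NDN per-node data structures (illustrative only).
class NdnNode:
    def __init__(self):
        self.content_store = {}   # object name -> cached Data Packet
        self.pit = {}             # object name -> set of requesting faces (pending Interests)
        self.fib = {}             # object name -> list of candidate outgoing faces

    def on_interest(self, name, in_face):
        # 1. Content Store hit: answer immediately with the cached data.
        if name in self.content_store:
            return ("data", self.content_store[name], in_face)
        # 2. PIT hit: aggregate (suppress) the Interest, just record the new face.
        if name in self.pit:
            self.pit[name].add(in_face)
            return ("suppressed", None, None)
        # 3. Otherwise create a PIT entry and forward via the FIB.
        self.pit[name] = {in_face}
        return ("forward", name, self.fib.get(name, []))

    def on_data(self, name, data):
        # Cache the object and satisfy all faces recorded in the PIT entry.
        self.content_store[name] = data
        return self.pit.pop(name, set())
```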

  3. Previous Work
     • Separate forwarding and caching
       – [Chai'12]: caching based on the concept of betweenness centrality
       – [Ming'12]: cooperative caching schemes (without joint forwarding and caching design)
       – [Yi'12]: adaptive multipath forwarding scheme (without joint caching design)
     • Joint forwarding and caching
       – [Xie'12]: single-path routing and caching
       – [Amble'11]: single-hop throughput optimal routing and caching
       – [Yeh'14]: VIP framework, multi-hop joint dynamic forwarding and caching

  4. VIP Framework
     • Virtual interest packets (VIPs)
       – capture the (measured) demand for the respective data objects
       – this demand is unavailable in the actual network due to interest suppression (illustrated in the sketch below)
     • Virtual control plane
       – operates on VIPs at the data object level
       – develops throughput optimal control operating on VIPs
     • Actual plane
       – handles Interest and Data Packets at the data chunk level
       – specifies efficient actual control using VIP flow rates and queue lengths
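To make the distinction between measured VIP demand and suppressed actual Interests concrete, here is a minimal sketch (my own illustration, not code from the paper): every exogenous request increments the node's VIP count for the object, while the actual Interest goes out only if no PIT entry already exists.

```python
from collections import defaultdict

# Illustrative sketch: VIPs count every exogenous request at the data-object level,
# whereas actual Interests are aggregated (suppressed) by the PIT.
class VipBookkeeping:
    def __init__(self):
        self.vip_count = defaultdict(int)   # V_n^k: per-object VIP queue size at this node
        self.pit = set()                    # object names with a pending actual Interest

    def on_exogenous_request(self, obj):
        self.vip_count[obj] += 1            # one VIP per request: the full demand stays visible
        if obj in self.pit:
            return None                     # actual Interest suppressed by PIT aggregation
        self.pit.add(obj)
        return obj                          # actual Interest is sent for this object
```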

  5. Motivation and Challenge
     • Problem with the previous virtual control plane
       – actual network: has interest suppression, but no explicit demand information
       – virtual control plane: captures demand via VIPs, but does not reflect interest suppression
       – VIP scaling is introduced to bridge the two
     • Aim: further improve the performance of the previous VIP algorithms by improving the virtual control plane
       – provide sufficient demand information
       – reflect more accurately the actual interest traffic with suppression
     • Challenge
       – how to capture both demand and the interest suppression effect?
       – how to maintain throughput optimality when reflecting interest suppression?

  6. Network Model
     • General multi-hop network (bi-directed graph)
       – node set: N nodes, index n, cache size L_n
       – link set: L directed links, index (a,b), capacity C_ab
       – data set: K data objects, index k, size D
     • Discrete time system
       – time slot index t, slot duration 1
     • Exogenous data request arrivals
       – per-slot arrivals A_n^k(t): requests for object k arriving at node n in slot t
       – long-term arrival rate λ_n^k
     • Content sources
       – the content source for data object k is src(k) ∈ N
       – sources are fixed; caching points vary over time
     (a sketch of these model containers follows below)
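A minimal sketch of how this model might be held in code; the class and field names are my own, and the notation (L_n, C_ab, D, src(k)) follows the slides.

```python
from dataclasses import dataclass, field

# Illustrative containers for the network model (names are my own choice).
@dataclass
class NetworkModel:
    num_objects: int                                     # K data objects
    object_size: float                                   # D (bits), equal for all objects
    cache_size: dict = field(default_factory=dict)       # n -> L_n (bits)
    link_capacity: dict = field(default_factory=dict)    # (a, b) -> C_ab (bits/slot)
    source: dict = field(default_factory=dict)           # k -> src(k)

    def neighbors(self, n):
        """Nodes b reachable over a directed link (n, b)."""
        return [b for (a, b) in self.link_capacity if a == n]
```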

  7. Virtual Control Plane
     • Virtual interest packet (VIP)
       – one corresponding VIP is generated for each exogenous request arrival
     • VIP counts V_n^k(t)
       – represent the VIP queue size at node n for object k (one VIP queue is maintained per object); the per-slot update is sketched below
     • VIP transmission rates μ_ab^k(t)
       – no actual packets are sent in the virtual plane; Data Packets travel on the reverse path
     • Caching state s_n^k(t) (1: cached, 0: not cached)
       – assume a node can gain access to any data object for which it holds a VIP
     • I/O rate of the storage disk: r_n
       – maximum rate of producing copies of cached objects
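The per-slot VIP queue update in the virtual plane can be sketched as follows. This follows the form of the unscaled dynamics in [Yeh'14] as I read them (VIPs drained by outgoing allocations, added by exogenous arrivals and neighbor transmissions, and sunk by the local cache at rate r_n); the scaled variant on the next slide modifies these terms.

```python
# Sketch of the (unscaled) per-slot VIP queue update at node n for object k,
# following the form of the VIP dynamics in [Yeh'14]; variable names are illustrative.
def update_vip_count(V_nk, out_rates, in_rates, arrivals, cached, r_n):
    """
    V_nk      : current VIP count V_n^k(t)
    out_rates : allocated VIP rates mu_{nb}^k(t) towards each neighbor b
    in_rates  : VIP rates mu_{an}^k(t) received from each neighbor a
    arrivals  : exogenous VIP arrivals A_n^k(t) in this slot
    cached    : caching state s_n^k(t) (True if object k is cached at n)
    r_n       : cache I/O rate, i.e. the VIP sinking rate when cached
    """
    drained = max(V_nk - sum(out_rates), 0.0)     # VIPs forwarded out of node n
    backlog = drained + arrivals + sum(in_rates)  # new exogenous and neighbor VIPs
    sunk = r_n if cached else 0.0                 # VIPs sunk by the local cache
    return max(backlog - sunk, 0.0)
```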

  8. VIP Scaling
     • Scaling parameter θ_n^k (reduces to the previous results if θ_n^k = 1)
       – reflects the average per-slot interest suppression effect relative to the VIP arrival rate
       – can be approximated by a predetermined constant
       – or estimated online using, e.g., an Exponential Moving Average (EMA); a sketch follows below
     • Dynamics of scaled VIPs: outgoing VIPs, scaled-down incoming VIPs, and VIPs sunk due to caching
       – provide a downward gradient from the entry points of data requests to the content source and caching nodes
       – capture to some extent the demand information
       – reflect to some extent the interest suppression
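One way the EMA estimate of θ_n^k(t) might be computed is sketched below. The slide does not spell out the update, so both the smoothing form with parameter alpha and the per-slot suppression measurement (the ratio of exogenous requests to actual Interests sent after PIT aggregation, which is 1 with no suppression and grows with it) are assumptions of mine, not the paper's definition.

```python
# Illustrative EMA estimator for the scaling parameter theta_n^k(t).
# Assumption: the per-slot measurement is the ratio of exogenous requests (VIPs)
# to actual Interests sent after PIT aggregation; the exact definition and the
# smoothing constant alpha are not taken from the paper.
def ema_theta(theta_prev, vip_arrivals, actual_interests_sent, alpha=0.125):
    if actual_interests_sent == 0:
        return theta_prev                     # no measurement this slot; keep the old estimate
    suppression_ratio = vip_arrivals / actual_interests_sent   # >= 1 under PIT suppression
    return (1 - alpha) * theta_prev + alpha * suppression_ratio
```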

  9. Stability Region with VIP Scaling
     • The stability region Λ is characterized in terms of long-term VIP transmission rates, long-term data transmission rates, and caching chances
     • For any arrival rate vector λ in int(Λ), there exists a policy that keeps the scaled VIP queues stable without knowing the exact value of λ
     • Interpretation
       – the region becomes larger as θ_n^k increases
       – it is a superset of the previous VIP stability region

  10. Scaled VIP Algorithm
     • Forwarding: for each link (a,b) ∈ L, choose the data object with the maximum backpressure (BP) weight and allocate the link to it (sketched below)
     • Interpretation
       – at slot t, for object k and link (a,b), allocate the entire normalized reverse link capacity C_ba/D to transmit VIPs for the object with the maximum BP weight
       – balance out VIP counts (spread VIPs from high-potential nodes to low-potential ones)
       – capture interest suppression at the receiving node
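A sketch of the forwarding decision on one link. The exact scaled BP weight is not reproduced on the slide, so the form used here, V_a^k minus the θ-scaled VIP count at the receiving node b, is an assumption chosen to match the "capture interest suppression at the receiving node" remark; the paper's weight may differ.

```python
# Sketch of the scaled VIP forwarding rule on one link (a, b) at slot t.
# Assumption: BP weight W_ab^k = V_a^k - theta_b^k * V_b^k (theta-scaled receiving-node
# term models suppression at b); the paper's exact weight may differ.
def forward_on_link(a, b, vip_counts, theta, C_ba, D, objects):
    """
    vip_counts : dict (node, object) -> VIP count V_n^k(t)
    theta      : dict (node, object) -> scaling parameter theta_n^k
    C_ba       : reverse link capacity (bits/slot); D: object size (bits)
    Returns (k_star, mu): the chosen object and the VIP rate allocated to it on (a, b).
    """
    def bp_weight(k):
        return vip_counts[(a, k)] - theta[(b, k)] * vip_counts[(b, k)]

    k_star = max(objects, key=bp_weight)
    if bp_weight(k_star) > 0:
        return k_star, C_ba / D   # allocate the entire normalized reverse capacity
    return None, 0.0              # no positive backpressure: transmit nothing on this link
```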

  11. Scaled VIP Algorithm
     • Caching: at each node n, choose the caching state to maximize the total VIP count of the cached objects subject to the cache size constraint (sketched below)
     • Interpretation
       – knapsack problem: at slot t, node n allocates its cache space to the L_n/D data objects with the highest VIP counts
       – balance out VIP counts (sink VIPs with high potential)
       – same as the previous caching rule without VIP scaling
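Because all objects have the same size D, the knapsack reduces to caching the L_n/D objects with the largest VIP counts, as the interpretation states. A minimal sketch, with illustrative names:

```python
# Sketch of the caching decision at node n in slot t: with equal object size D,
# the knapsack reduces to caching the floor(L_n / D) objects with the largest VIP counts.
def choose_cache(vip_counts_at_n, L_n, D):
    """
    vip_counts_at_n : dict object -> VIP count V_n^k(t) at node n
    Returns the set of objects to cache (s_n^k = 1) for the next slot.
    """
    capacity = int(L_n // D)                          # number of objects that fit
    ranked = sorted(vip_counts_at_n, key=vip_counts_at_n.get, reverse=True)
    return set(ranked[:capacity])
```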

  12. Features of the Scaled VIP Algorithm
     • Joint design
       – forwarding and caching are both based on VIP counts
     • Dynamic design
       – adaptive to varying VIP counts (instantaneous congestion)
     • Distributed design
       – forwarding: exchange VIP counts with neighbors
       – caching: local VIP counts only
     • Complexity
       – same order as the previous VIP algorithm: O(N^2 K)
     • Extend Lyapunov drift techniques
       – incorporate scaling
     • Develop algorithms for the actual plane from the scaled VIP algorithm using the mapping in the previous VIP framework

  13. Throughput Optimality
     • Interpretation
       – adaptively stabilizes all VIP queues in the virtual plane for any λ in int(Λ)
       – exploits both bandwidth and storage resources to maximally balance out the VIP load and prevent congestion buildup
       – the upper bound on the average total number of scaled VIPs is smaller than the previous upper bound for VIPs without scaling

  14. Experiment Evaluation
     • Baselines
       – previous VIP algorithm
       – six other baselines, combining:
         forwarding: shortest path, potential-based routing
         caching decision: LCE, LCD, Age-based, LFU
         caching replacement: LRU, FIFO, BIAS, UNIF, Age-based, LFU
     • Performance metric: delay between the arrival time of the requested Data Packet and the creation time of the Interest Packet
     • Arrival models and parameters (request generation sketched below)
       – request arrivals follow a Poisson process with the same rate λ (requests/node/slot)
       – content popularity follows Zipf (0.75); data object k is requested w.p. p_k
       – Interest Packet size 125 B, chunk size 50 KB, object size 5 MB
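A sketch of how per-slot requests could be generated under these settings (Zipf exponent 0.75 over the objects, Poisson arrivals at rate λ per node per slot); the use of numpy and the function names are my own.

```python
import numpy as np

# Illustrative request generator matching the stated settings:
# Zipf(0.75) popularity over K objects and Poisson(lambda) arrivals per node per slot.
def zipf_popularity(K, s=0.75):
    weights = 1.0 / np.arange(1, K + 1) ** s
    return weights / weights.sum()            # p_k for k = 1..K

def generate_requests(rng, node_ids, p_k, lam):
    """Return a dict node -> array of requested object indices for one slot."""
    requests = {}
    for n in node_ids:
        num = rng.poisson(lam)                               # A_n(t) ~ Poisson(lambda)
        requests[n] = rng.choice(len(p_k), size=num, p=p_k)  # each request picks object k w.p. p_k
    return requests

# Example: 3000 objects, lambda = 40 requests/node/slot, as in the DTelekom setup.
rng = np.random.default_rng(0)
p_k = zipf_popularity(3000)
slot_requests = generate_requests(rng, node_ids=range(10), p_k=p_k, lam=40)
```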

  15. DTelekom Network
     • Setup: 3000 objects, cache size 2 GB (400 objects), link capacity 500 Mb/s; all nodes generate requests and can be sources
     • The previous VIP and scaled VIP algorithms with different scaling parameters achieve better delay than the six baseline schemes
     • Scaled VIP algorithms with constant θ = 1.5 and with EMA θ_n^k(t) greatly improve the performance of the VIP algorithm
     • A large θ overemphasizes suppression and underestimates demand, leading to performance degradation
     • Delay improvement over the previous VIP algorithm: 36% with constant θ = 1.5 (λ = 40); 57% with EMA θ (λ = 60)

  16. GEANT Network
     • Setup: 3000 objects, cache size 2 GB (400 objects), link capacity 500 Mb/s; all nodes generate requests and can be sources
     • Delay improvement over the previous VIP algorithm: 47% with constant θ = 1.5 (λ = 40); 58% with EMA θ (λ = 40)

  17. Abilene Network
     • Setup: 3000 objects, cache size 5 GB (1000 objects), link capacity 500 Mb/s; all nodes generate requests and can be sources
     • Delay improvement over the previous VIP algorithm: 31% with constant θ = 1.5 (λ = 60); 40% with EMA θ (λ = 60)

  18. Service Network
     • Setup: 3000 objects, cache size 5 GB (1000 objects), link capacity 500 Mb/s; consumer nodes generate requests, and Node 1 is the source node
     • Delay improvement over the previous VIP algorithm: 31% with constant θ = 1.5 (λ = 40); 58% with EMA θ (λ = 30)

  19. Summary
     • Propose a modified VIP framework based on scaled VIPs
       – captures both demand and interest suppression
     • Characterize the stability region with VIP scaling
     • Develop scaled VIP forwarding and caching algorithms
     • Prove throughput optimality of the scaled VIP algorithm
     • Demonstrate superior delay performance through experiments
     • The framework can be extended to incorporate congestion control

  20. References
     [Yeh'14] E. Yeh, T. Ho, Y. Cui, M. Burd, R. Liu, and D. Leong. VIP: A framework for joint dynamic forwarding and caching in named data networks. In Proceedings of the 1st ACM Conference on Information-Centric Networking (ICN '14), pages 117–126, New York, NY, USA, 2014.
     [Amble'11] M. M. Amble, P. Parag, S. Shakkottai, and L. Ying. Content-aware caching and traffic management in content distribution networks. In Proceedings of IEEE INFOCOM 2011, pages 2858–2866, Shanghai, China, Apr. 2011.
     [Xie'12] H. Xie, G. Shi, and P. Wang. TECC: Towards collaborative in-network caching guided by traffic engineering. In Proceedings of IEEE INFOCOM 2012 Mini-Conference, pages 2546–2550, Orlando, FL, USA, Mar. 2012.
     [Chai'12] W. Chai, D. He, I. Psaras, and G. Pavlou. Cache "less for more" in information-centric networks. In IFIP Networking 2012, pages 27–40, Berlin, Heidelberg, 2012.
     [Ming'12] Z. Ming, M. Xu, and D. Wang. Age-based cooperative caching in information-centric networks. In IEEE INFOCOM Workshops, pages 268–273, Mar. 2012.
     [Yi'12] C. Yi, A. Afanasyev, L. Wang, B. Zhang, and L. Zhang. Adaptive forwarding in named data networking. ACM SIGCOMM Computer Communication Review, 42(3):62–67, June 2012.

