Fundamental Limits of Caching

Urs Niesen, joint work with Mohammad Maddah-Ali
Bell Labs, Alcatel-Lucent
Video on Demand

Video on demand is getting increasingly popular:
- Netflix streaming service
- Amazon Instant Video
- Hulu
- Verizon / Comcast on Demand
- ...
⇒ Places significant stress on service providers' networks
⇒ Caching (prefetching) can be used to mitigate this stress
Caching (Prefetching)

[Figure: normalized demand (0 to 100) versus time of day (0 to 24)]

- High temporal traffic variability
- Caching can help smooth traffic
The Role of Caching

Conventional beliefs about caching:
- Caches are useful to deliver content locally
- Local cache size matters
- Statistically identical users ⇒ identical cache content

Insights from this work:
- The main gain in caching is global
- Global cache size matters
- Statistically identical users ⇒ different cache content
Problem Setting

A server stores N files (assume N ≥ K for simplicity) and is connected through a shared link to K users, each equipped with a cache of size M.

Placement phase: each cache stores an arbitrary function of the files (linear, nonlinear, ...), chosen before the requests are known.

Delivery phase:
- requests are revealed to the server
- the server sends an arbitrary function of the files over the shared link

Question: what is the smallest worst-case rate R(M) needed in the delivery phase?
Conventional Caching Scheme

N files, K users, cache size M: every user caches the same M/N fraction of each file.

Performance of the conventional scheme:
R(M) = K · (1 − M/N)

- Caches provide content locally ⇒ local cache size matters
- Identical cache content at all users
[Plot sequence: rate R versus cache size M for the conventional scheme, shown for N = K = 2, 4, 8, 16, 32, 64, 128, 256, 512; R decreases linearly from K at M = 0 to 0 at M = N, so the rate at any fixed M/N grows linearly with the number of users.]
Proposed Caching Scheme

N files, K users, cache size M. Design guidelines advocated in this work:
- The main gain in caching is global
- Global cache size matters
- Different cache content at users

Performance of the proposed scheme:
R(M) = K · (1 − M/N) · 1/(1 + KM/N)
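The two rate expressions can be compared numerically. A minimal sketch (function names are illustrative, not from the original work):

```python
def conventional_rate(N, K, M):
    """Conventional scheme: each user's uncached fraction is served separately."""
    return K * (1 - M / N)

def proposed_rate(N, K, M):
    """Proposed scheme: local gain (1 - M/N) times global coded gain 1/(1 + K*M/N)."""
    return K * (1 - M / N) / (1 + K * M / N)

# Example: N = K = 32, each cache holds a quarter of the library (M/N = 1/4).
print(conventional_rate(32, 32, 8))  # 24.0
print(proposed_rate(32, 32, 8))      # 24/9, roughly 2.67

# Scaling the system with M/N = 1/4 fixed: the conventional rate grows
# linearly in K, while the proposed rate stays bounded.
for K in (4, 8, 32, 128, 512):
    N, M = K, K // 4
    print(K, conventional_rate(N, K, M), round(proposed_rate(N, K, M), 2))
```

The second factor 1/(1 + KM/N) is the global gain: it depends on the aggregate cache size KM across all users, not on any single cache.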
[Plot sequence: rate R versus cache size M, proposed scheme versus conventional scheme, for N = K = 2, 4, 8, 16, 32, 64, 128, 256, 512; as the system scales, the proposed scheme's rate stays far below the conventional scheme's.]
Recall: Conventional Scheme

N = 2 files (A and B), K = 2 users, cache size M = 1.

Placement: split each file into two halves, A = (A1, A2) and B = (B1, B2); both users cache (A1, B1).

Delivery for demands (A, A): the server sends A2, and each user combines it with its cached A1 to recover A.
⇒ Identical cache content at users
⇒ Gain from delivering content locally
Delivery for demands (A, B): the server must send both A2 and B2, for a delivery rate of 1 (one full file).
⇒ Multicast is only possible for users with the same demand
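The conventional delivery for demands (A, B) can be checked with actual bytes; a sketch with illustrative placeholder contents:

```python
# Split each file into two halves: A = A1 + A2, B = B1 + B2.
A1, A2 = b"aaaa", b"AAAA"
B1, B2 = b"bbbb", b"BBBB"

# Placement: both users cache the same halves (A1, B1).
# Delivery for demands (A, B): no single message helps both users,
# so the server sends A2 and B2 separately -> rate 1 (one full file).
broadcast = A2 + B2

user1_A = A1 + broadcast[:4]  # user 1 wants A: cached A1 plus received A2
user2_B = B1 + broadcast[4:]  # user 2 wants B: cached B1 plus received B2

print(user1_A)  # b'aaaaAAAA'
print(user2_B)  # b'bbbbBBBB'
```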
[Plot: N = 2, K = 2; at M = 1 the conventional scheme needs rate R = 1.]
Proposed Scheme

N = 2 files (A and B), K = 2 users, cache size M = 1.

Placement: user 1 caches (A1, B1), user 2 caches (A2, B2): different cache content at the two users.

Delivery for demands (A, B): the server sends the single coded message A2 ⊕ B1. User 1 XORs out its cached B1 to recover A2; user 2 XORs out its cached A2 to recover B1. Delivery rate 1/2, half that of the conventional scheme.
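The coded delivery above can likewise be verified with bytes; a sketch with illustrative placeholder contents:

```python
# Split each file into two halves: A = A1 + A2, B = B1 + B2.
A1, A2 = b"aaaa", b"AAAA"
B1, B2 = b"bbbb", b"BBBB"

def xor(x, y):
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(a ^ b for a, b in zip(x, y))

# Placement: user 1 caches (A1, B1); user 2 caches (A2, B2).
# Delivery for demands (A, B): one coded message serves both users -> rate 1/2.
msg = xor(A2, B1)

# User 1 wants A: XOR out cached B1 to get A2, then prepend cached A1.
user1_A = A1 + xor(msg, B1)
# User 2 wants B: XOR out cached A2 to get B1, then append cached B2.
user2_B = xor(msg, A2) + B2

print(user1_A)  # b'aaaaAAAA'
print(user2_B)  # b'bbbbBBBB'
```

One half-file broadcast is simultaneously useful to both users, which is exactly the global coded gain the scheme exploits.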