Asymptotically Exact TTL-Approximations of the Cache Replacement Algorithms LRU(m) and h-LRU
Nicolas Gast (Inria), Benny Van Houdt (University of Antwerp)
ITC 2016, September 13-15, Würzburg, Germany
Caches are everywhere
A cache sits between the user/application (fast access) and the data source (slow access).
Examples: processor caches, databases, CDNs.
Caching policies
Popularity-oblivious policies
◮ Cache-replacement policies (LRU, RANDOM) [3]
◮ TTL-caches [4]
Popularity-aware policies / learning
◮ LFU and variants [5]
◮ Optimal policies for networks of caches [6]
[3] Started with King (1971), Gelenbe (1973).
[4] E.g., Fofack et al. (2013), Berger et al. (2014).
[5] Optimizing TTL Caches under Heavy-Tailed Demands (Ferragut et al., 2016).
[6] Adaptive Caching Networks with Optimality Guarantees (Ioannidis and Yeh, 2016).
Contributions (and Outline)
1. Two cache replacement policies
2. Performance analysis via TTL approximation
3. Asymptotic exactness of the approximation
4. Comparison between LRU, LRU(m) and h-LRU
5. Conclusion
The two policies generalize the LRU policy
LRU: on a hit, do nothing (the cache content is unchanged); on a miss, insert the requested item and evict the LRU (least-recently-used) item.
Example with a stream of requests (animated in the slides).
(Note: RANDOM and FIFO are similar, differing only in which item is evicted on a miss.)
(Assumption: all objects have the same size.)
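To make the baseline policy concrete, here is a minimal sketch of a single-list LRU cache (not from the slides; the class name and interface are illustrative choices). On a hit the item is moved to the most-recently-used position; on a miss it is inserted and, if the cache is full, the least-recently-used item is evicted.

```python
from collections import OrderedDict

class LRUCache:
    """Single-list LRU cache; all objects are assumed to have the same size."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()          # keys ordered from LRU to MRU

    def request(self, key):
        if key in self.items:               # hit: refresh recency, content unchanged
            self.items.move_to_end(key)
            return True
        self.items[key] = None              # miss: insert the requested item
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict the least-recently-used item
        return False
```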
The LRU(m) and h-LRU policies
LRU(m) [7], where m = (m_1, ..., m_h) gives the list sizes: when an item in list i is requested, it is exchanged with the LRU item of list i+1; a missed item enters list 1. The first list(s) can be virtual, i.e., store only meta-data.
h-LRU [8]: when an item in list i is requested, it is copied to the MRU position of list i+1 (evicting that list's LRU item); a missed item enters list 1. Only the last list is the actual cache; the other lists are virtual.
[Figure: example with three lists, labeled "virtual" and "cache", showing the exchange/copy moves.]
[7] Variant of RAND(m) of [Gast, Van Houdt 2015].
[8] Introduced as k-LRU in [Martina et al. 2014].
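A minimal simulation sketch of the two policies under the reading above (not from the slides; the function names, the deque representation, and the handling of partially filled lists are assumptions of this write-up):

```python
from collections import deque

def lru_m_request(lists, sizes, item):
    """LRU(m): on a hit in list i < h, exchange the item with the LRU item of
    list i+1; on a hit in the last list, move it to the MRU position; on a
    miss, insert it in list 1 and evict that list's LRU item if needed."""
    for i, lst in enumerate(lists):
        if item in lst:
            lst.remove(item)
            if i + 1 < len(lists):
                nxt = lists[i + 1]
                nxt.appendleft(item)            # item moves up to MRU of list i+1
                if len(nxt) > sizes[i + 1]:
                    lst.appendleft(nxt.pop())   # LRU of list i+1 moves down (exchange)
            else:
                lst.appendleft(item)            # already in the last list
            return True                         # hit
    lists[0].appendleft(item)                   # miss: item enters list 1
    if len(lists[0]) > sizes[0]:
        lists[0].pop()                          # evict the LRU item of list 1
    return False

def h_lru_request(lists, sizes, item):
    """h-LRU: if the requested item is in list i, copy it to the MRU position
    of list i+1 (evicting that list's LRU item); an unseen item enters list 1.
    Only the last list is the real cache, so a request is a hit only if the
    item is found there."""
    hit = item in lists[-1]
    present = [i for i, lst in enumerate(lists) if item in lst]
    if not present:
        lists[0].appendleft(item)               # unseen item enters (virtual) list 1
        if len(lists[0]) > sizes[0]:
            lists[0].pop()
        return False
    i = max(present)                            # highest list currently holding the item
    lists[i].remove(item)
    lists[i].appendleft(item)                   # refresh its recency in list i
    if i + 1 < len(lists):
        nxt = lists[i + 1]
        nxt.appendleft(item)                    # copy into the next list
        if len(nxt) > sizes[i + 1]:
            nxt.pop()                           # evict that list's LRU item
    return hit

# Example: three lists of 2 items each, fed a short request stream
sizes = [2, 2, 2]
caches = [deque() for _ in sizes]
for req in "abacbdaab":
    lru_m_request(caches, sizes, req)
```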
In this talk: performance analysis and comparison
Qualitatively: popular items end up in the cache while less popular items are filtered out, but it takes time to adapt.
Quantitatively: related work covers variants [9] or less accurate approximations [10]. We present TTL approximations for MAP arrivals (in this talk: IRM).
[9] RAND(m) in [Gast, Van Houdt 2015], for which a product-form solution exists.
[10] Heuristic for h-LRU [Martina et al. 2014].
Pure LRU: the Che approximation
The cache is modeled as a TTL cache: a request for an object in the cache resets its timer, a miss starts a new timer, and an object is evicted T time units after its last request.
If the requests for object k form a Poisson process of intensity λ_k:
(TTL) object k is in the cache with probability π_k(T) = 1 − e^{−λ_k T};
(Fixed point) T satisfies ∑_k π_k(T) = cache size.
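A small numerical sketch of this fixed point (not from the slides; the Zipf example and the bracketing interval are arbitrary choices): solve for the characteristic time T, then read off the per-object occupancy probabilities.

```python
import numpy as np
from scipy.optimize import brentq

def che_approximation(lams, cache_size):
    """Solve sum_k (1 - exp(-lam_k * T)) = cache_size for T and return
    the per-object probabilities pi_k(T) = 1 - exp(-lam_k * T)."""
    lams = np.asarray(lams, dtype=float)
    f = lambda T: np.sum(1.0 - np.exp(-lams * T)) - cache_size
    T = brentq(f, 1e-12, 1e12)      # f is increasing in T, so bisection-style solving works
    return T, 1.0 - np.exp(-lams * T)

# Example: Zipf-like popularities over 1000 objects, cache of size 100
lams = 1.0 / np.arange(1, 1001)
T, hit_prob = che_approximation(lams, cache_size=100)
```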
The TTL-approximation for LRU(m)
Each list ℓ is modeled as a TTL cache with its own characteristic time T_ℓ: a request resets (or starts) the timer, and an object is evicted from list ℓ after T_ℓ time units without a request.
If the requests for object k form a Poisson process of intensity λ_k:
object k is in list ℓ with probability π_{k,ℓ}(T_1, ..., T_h) ∝ ∏_{i=1}^{ℓ} (e^{λ_k T_i} − 1);
T_1, ..., T_h satisfy ∑_k π_{k,i}(T_1, ..., T_h) = size of list i, for each i.
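A sketch of how this fixed point could be computed numerically (not from the paper; the normalisation with an extra "not cached" state of weight 1, the solver, and the initial guess are assumptions of this write-up — for h = 1 it reduces to the Che approximation above):

```python
import numpy as np
from scipy.optimize import fsolve

def lru_m_ttl(lams, sizes):
    """TTL approximation of LRU(m): find T_1..T_h such that, for each list i,
    sum_k pi_{k,i}(T) equals the size of list i, with
    pi_{k,l}(T) proportional to prod_{i<=l} (exp(lam_k T_i) - 1)."""
    lams = np.asarray(lams, dtype=float)
    sizes = np.asarray(sizes, dtype=float)
    h = len(sizes)

    def probs(T):
        # weights[k, l] for l = 0..h; l = 0 ("not cached") has weight 1 (empty product)
        w = np.ones((len(lams), h + 1))
        w[:, 1:] = np.cumprod(np.expm1(np.outer(lams, T)), axis=1)
        return w / w.sum(axis=1, keepdims=True)

    def residual(T):
        pi = probs(T)
        return pi[:, 1:].sum(axis=0) - sizes    # expected occupancy of each list - its size

    T0 = np.full(h, sizes.sum() / lams.sum())   # crude initial guess
    T = fsolve(residual, T0)
    return T, probs(T)

# Example: Zipf-like popularities, three lists of 50 objects each
lams = 1.0 / np.arange(1, 1001)
T, pi = lru_m_ttl(lams, sizes=[50, 50, 50])
in_some_list = pi[:, 1:].sum(axis=1)            # probability object k is in one of the lists
```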
The TTL-approximation for h-LRU
[Figure: three lists, each with its own timer; a request copies the object to the next list.]
First idea: track the set of lists in which an object is present [Martina et al. 2014]. Problem: the number of states is 2^h.