Web Caching and Content Delivery
Caching for a Better Web
Performance is a major concern in the Web. Proxy caching is the most widely used method to improve Web performance.
• Duplicate requests to the same document are served from the cache.
• Hits reduce latency, bandwidth demand, and server load.
• Misses increase latency (extra hops).
[Figure: clients → proxy cache → Internet → servers; hits are served at the proxy, misses are forwarded upstream.]
[Source: Geoff Voelker]
Proxy Caching
How should we build caching systems for the Web?
• Seminal paper [Chankhunthod96]
• Proxy caches [Duska97]
• Akamai DNS interposition [Karger99]
• Cooperative caching [Tewari99, Fan98, Wolman99]
• Popularity distributions [Breslau99]
• Proxy filtering and transcoding [Fox et al]
• Consistency [Tewari, Cao et al]
• Replica placement for CDNs [et al]
[Voelker]
Issues for Web Caching
• Binding clients to proxies, handling failover
  Manual configuration, router-based “transparent caching”, WPAD (Web Proxy Auto-Discovery)
• Proxy may confuse/obscure interactions between server and client.
• Consistency management
  To a first approximation the Web is a wide-area read-only file service...but it is much more than that.
  Caching responses vs. caching documents; deltas [Mogul+Bala/Douglis/Misha/others@research.att.com]
• Prefetching, request routing, scale, performance
  Web caching vs. content distribution (CDNs, e.g., Akamai)
End-to-End Content Delivery
[Figure: end-to-end delivery path: the client request stream passes through proxies and caches, across the Internet, to CDN surrogate/distributor servers and the hosting network’s server array + storage; “upstream” is toward the origin, “downstream” toward the clients.]
Proxy Cache Effectiveness
How to measure Web cache effectiveness (goals)?
• Hit ratio (object or byte) — a sketch for computing these metrics from a trace follows below
• Savings in bandwidth or server load
• Reduction in perceived user latency
What factors determine/limit effectiveness?
• Capacity?
• User population?
• Proxy placement in the network?
• Updates and invalidations?
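As a minimal sketch (not from the slides), here is one way to compute the object hit ratio and byte hit ratio from a proxy request trace in Python; the trace format and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Request:
    url: str    # object identifier
    size: int   # bytes transferred for this response
    hit: bool   # True if the proxy served it from cache

def cache_metrics(trace):
    """Return (object hit ratio, byte hit ratio) for a list of Requests."""
    hits = sum(1 for r in trace if r.hit)
    hit_bytes = sum(r.size for r in trace if r.hit)
    total_bytes = sum(r.size for r in trace)
    return hits / len(trace), hit_bytes / total_bytes

# Hypothetical toy trace: two hits on a small popular page, one miss on a large object.
trace = [Request("/index.html", 2_000, True),
         Request("/index.html", 2_000, True),
         Request("/big.iso", 500_000, False)]
ohr, bhr = cache_metrics(trace)
print(f"object hit ratio = {ohr:.2f}, byte hit ratio = {bhr:.3f}")
```

Note that a high object hit ratio does not imply large bandwidth savings if the misses are to large objects, which is why the slide lists hit ratio and bandwidth savings as distinct goals.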
Web Traffic Characterization
Research question: how do goals and traffic behavior shape strategies for deploying and managing proxy caches?
• Replacement policy: which objects to retain in the cache?
  Large vs. small; relative importance of popularity and stability
• Deployment: where to place the cache? Close to the server or to the client?
• How many users per cache?
• Prefetching?
Since the Web is in active deployment on a large scale, Web traffic characterization is an empirical science.
• A science of mass behavior: observe and test hypotheses.
Zipf
[Breslau/Cao99] and others observed that Web accesses can be modeled using Zipf-like probability distributions.
• Rank objects by popularity: lower rank i ==> more popular.
• The probability that any given reference is to the i-th most popular object is p_i.
  Not to be confused with p_c, the percentage of cacheable objects.
Zipf says: “p_i is proportional to 1/i^α, for some α with 0 < α < 1”.
• Higher α gives more skew: popular objects are way popular.
• Lower α gives a more heavy-tailed distribution.
• In the Web, α ranges from 0.6 to 0.8 [Breslau/Cao99].
• With α = 0.8, 0.3% of the objects get 40% of requests.
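As a minimal sketch (not from the slides), the Python below builds a normalized Zipf-like distribution and measures the request share captured by the most popular 0.3% of objects. The object count N is an assumed illustration parameter; the resulting share depends on N (and on α), so it will not exactly reproduce the 40% figure quoted above.

```python
import numpy as np

def zipf_pmf(n_objects, alpha):
    """Zipf-like popularity: p_i proportional to 1/i**alpha, normalized to sum to 1."""
    ranks = np.arange(1, n_objects + 1)
    weights = 1.0 / ranks**alpha
    return weights / weights.sum()

N, alpha = 1_000_000, 0.8   # assumed object count; alpha at the high end of the Web range
p = zipf_pmf(N, alpha)

top = int(0.003 * N)        # the most popular 0.3% of objects
share = p[:top].sum()
print(f"top {top} objects ({100 * top / N:.1f}%) receive {100 * share:.1f}% of requests")
```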
Zipf-like Reference Distributions
Probability of access to the object with popularity rank i [Zipf 49, Duska et al. 97, Breslau et al. 98]:
  p_i ∝ 1/i^α, normalized such that Σ p_i = 1
(This is equivalent to a power-law or Pareto distribution.)
[Figure: p_i vs. popularity rank for α = 0.7, showing the popular head and the heavy tail of the distribution.]
Importance of Traffic Models
Analytical models like this help us to predict cache hit ratios (object hit ratio or byte hit ratio).
• E.g., get object hit ratio as a function of cache size by integrating under segments of the Zipf curve, assuming perfect LFU replacement (see the sketch below).
• Must consider update rate: do object update rates correlate with popularity?
• Must consider object size: how does size correlate with popularity?
• Must consider proxy cache population: what is the probability of object sharing?
• Enables construction of synthetic load generators, e.g., SURGE [Barford and Crovella 99].
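A minimal sketch of that estimate (not from the slides), assuming equal-sized static objects, no updates, and an assumed object count N: under perfect LFU a cache holding the C most popular objects hits on exactly those objects, so the predicted object hit ratio is the cumulative Zipf probability of the top C.

```python
import numpy as np

def zipf_pmf(n_objects, alpha):
    """Zipf-like popularity: p_i proportional to 1/i**alpha, normalized."""
    ranks = np.arange(1, n_objects + 1)
    weights = 1.0 / ranks**alpha
    return weights / weights.sum()

N, alpha = 1_000_000, 0.7           # assumed object count and a typical Web alpha
p = zipf_pmf(N, alpha)
cum = np.cumsum(p)                  # cum[C-1] = probability mass of the C most popular objects

# Predicted steady-state object hit ratio for several cache capacities (in objects).
for C in (1_000, 10_000, 100_000):
    print(f"cache of {C:>7} objects -> predicted object hit ratio {cum[C - 1]:.2f}")
```

This is an idealized ceiling; real hit ratios are lower once updates, uncacheable objects, and limited sharing across the user population are taken into account.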
The “Trickle-Down Effect”
[Figure: clients send a flood of requests to the cache; only a trickle of misses continues on to servers.]
What is the effect on “downstream” traffic? What is the significance of this effect? How does it impact design choices for components “behind” the caches?
A Look at the Miss Stream
[Figure: log-log plots of a synthetic SURGE-generated Zipf-like trace with low locality (α = 0.6) and of its miss stream: the head of the popularity distribution is flattened, the midrange tapers, and the tail is intact.]
Effect on Server Trace (ibm.com)
[Figure: the 1998 ibm.com server trace: high locality, fits Zipf with α = 0.76, heavily skewed (77% of requests go to 1% of objects).]
What’s Happening? (LRU)
Suppose the cache fills up in R references (a property of the trace and the cache size; R acts as an LRU stack distance). Then a cache miss on the object with rank i occurs only if:
• i is referenced... [probability p_i]
• ...and i has not been referenced in the last R requests. [probability (1 - p_i)^R]
So P(a miss is to object i) is q_i = p_i (1 - p_i)^R.
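A minimal sketch (not from the slides) that evaluates this miss-stream model numerically, using an assumed object count N and the R and α values from the next slide; it shows that the miss stream is dominated by moderately popular objects rather than by the most popular ones.

```python
import numpy as np

def zipf_pmf(n_objects, alpha):
    """Zipf-like popularity: p_i proportional to 1/i**alpha, normalized."""
    ranks = np.arange(1, n_objects + 1)
    weights = 1.0 / ranks**alpha
    return weights / weights.sum()

N, alpha, R = 1_000_000, 0.7, 10_000   # N is assumed; R and alpha echo the next slide
p = zipf_pmf(N, alpha)

# The slide's model: a miss on object i requires a reference to i (probability p_i)
# that was not preceded by another reference to i within the last R requests.
q = p * (1.0 - p)**R

print(f"rank with the highest miss probability: {np.argmax(q) + 1}")  # not rank 1
print(f"q_1 / max(q) = {q[0] / q.max():.2e}")   # the most popular object barely misses at all
```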
Miss Stream Probability by Popularity
Moderately popular objects now dominate.
[Figure: q_i from the model with R = 10^4, α = 0.7, alongside the miss stream of the IBM 1998 trace through a 32 MB cache.]
Object Hit Ratio by Popularity (1)
[Figure: object hit ratio by popularity rank for the synthetic trace, α = 0.6.]
Object Hit Ratio by Popularity (2)
[Figure: object hit ratio by popularity rank for the IBM 1998 trace.]
Limitations/Features of This Study
• Static (cacheable) objects: ignores misses caused by updates (invalidation/expiration).
• LRU replacement; cache effectiveness varied by capacity.
• The cache intercepts all client traffic.
• Ignores the effect on downstream traffic volume.
Proxy Deployment and Use
Where to put it? How to direct user Web traffic through the proxy?
• Request redirection: much more to come on this topic…
Must the server consent?
• Protected content
• Client identity
“Transparent” caching and the end-to-end principle
• Must the client consent?
Interception Switches
The client doesn’t know. The server doesn’t know. Neither side told HTTP to disable it.
Is it legal? Good thing? Bad thing?
[Figure: an interception switch redirects client traffic into an ISP cache array.]
Shouldn’t This Be Illegal?
[Figure: an end-to-end path with a “middle” box interposed between the two ends.]
RFC 1122: the Internet architecture (IPv4) specifies that each packet has a unique destination “host” address.
Problems:
• Middle boxes may be subversive
• IPsec and SSL
• Dynamic routing
Cache Effectiveness
Previous work has shown that hit rate increases with population size [Duska et al. 97, Breslau et al. 98].
However, single proxy caches have practical limits:
• Load, network topology, organizational constraints
One technique to scale the client population is to have proxy caches cooperate.