SELF-TUNING MEMORY MANAGEMENT FOR DATA SERVERS
By Sangeetha Sivaprakasam
Introduction :

What is memory tuning ?
When you run multiple instances on a computer, each instance dynamically acquires and frees memory to adjust for changes in the workload of the instance.

Outline :
1) Introduction
2) Need for memory tuning
3) Self-tuning server caching
4) Automatic tuning of server and cache memory
5) Exploiting distributed memory
6) Integrating speculative prefetching with caching
7) Self-tuning caching and prefetching for web-based systems
8) Conclusion
9) Bibliography
Need for memory tuning :
• In the case of complex software.
• In the case of a data server running in multi-user mode with multiple data-intensive decision-support queries.
• Increasing data volumes and critical decisions.
• Memory contention: thrashing and memory bottlenecks.
• Automatic tuning decisions reduce the cost of human administration.
Self-tuning server caching :
• Memory in a data server is used for caching frequently accessed data to avoid disk I/O.
• The cache manager's goal is to maximize the cache hit ratio.
• The most widely used replacement policy is the LRU (Least Recently Used) algorithm.
LRU performs poorly in two cases:
a) Sequential scans over a large set of pages.
b) Random accesses to page sets with highly skewed cardinalities.
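A minimal sketch (in Python, not from the talk) of an LRU page cache that demonstrates the sequential-scan weakness listed above; `load_page` is a stand-in for a disk read.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU page cache: evicts the least recently used page."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # page_id -> data, oldest first

    def access(self, page_id, load_page):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)   # hit: mark as most recent
            return self.pages[page_id]
        data = load_page(page_id)             # miss: simulated disk I/O
        if len(self.pages) >= self.capacity:
            self.pages.popitem(last=False)    # evict least recently used
        self.pages[page_id] = data
        return data

# A sequential scan over more pages than the cache holds flushes every
# previously hot page -- the deficiency noted above.
cache = LRUCache(capacity=3)
for p in [1, 1, 1, 2, 3, 4, 5]:   # scanning pages 2..5 evicts hot page 1
    cache.access(p, load_page=lambda pid: f"data-{pid}")
print(1 in cache.pages)  # False: the hot page was pushed out by the scan
```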
Self-tuning server caching :
• To overcome these deficiencies, a number of tuning methods have been developed, but they are not fully self-tuning.
The various approaches are :
1) PANDORA :
• This approach relies on explicit tuning hints from programs.
• It is a hint-processing approach; e.g., the hints come from a query-processor engine.
• The difficulty is that the hint-passing approach is very limited and bears high risk.
Self-tuning server caching :
2) SISYPHUS :
• This approach aims to tune the cache manager by partitioning the overall cache into separate "pools".
• It works well for partitioning index vs. data pages.
• The difficulty is choosing appropriate pool sizes and a proper assignment of page classes to pools.
3) SPHINX :
• It abandons LRU and adopts a replacement policy based on access frequencies.
• The LFU (Least Frequently Used) policy is optimal for a static workload in which pages have independent reference probabilities.
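The LFU policy above can be sketched as follows; the counter-based implementation and the access trace are illustrative, not from the talk.

```python
# Minimal LFU: evict the page with the lowest access frequency.
class LFUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.count = {}   # page_id -> access frequency

    def access(self, page_id):
        if page_id not in self.count and len(self.count) >= self.capacity:
            victim = min(self.count, key=self.count.get)  # least frequently used
            del self.count[victim]
        self.count[page_id] = self.count.get(page_id, 0) + 1

cache = LFUCache(2)
for p in [1, 1, 1, 2, 3]:
    cache.access(p)
# Page 1 (frequency 3) survives; page 2 (frequency 1) was evicted for page 3.
print(sorted(cache.count))  # [1, 3]
```

Under a static workload the counters converge to the true reference probabilities, which is why LFU is optimal there; under a shifting workload the counters retain stale history, which is the weakness LRU-k addresses next.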
Self-tuning server caching :
• The problems in SPHINX can also be improved by using a "Nike approach": the LRU-k algorithm.
• It uses three steps: observe, predict, react.
Observation :
• It keeps a bounded amount of each relevant page's reference history: the k last reference time points.
• 'Relevant' means all pages that are currently in the cache plus some more pages that are potential caching candidates.
• By the five-minute rule, the history of pages not referenced within the last 5 minutes can be safely discarded.
Self-tuning server caching :
Prediction :
• A page's specific access rate is known as the page's heat.
• heat(p) = k / (now - t_k), where t_k is the page's k-th last reference time point.
• The probability of accessing the page within the next T time units is 1 - e^(-heat(p) * T).
• It is optimal to rank pages by these near-future access probabilities.
Reaction :
• When a page slot must be freed up in the cache, the LRU-k algorithm replaces the page with the smallest value of the above estimated probability.
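A small sketch of the heat and probability formulas above; the reference histories are hypothetical and k = 3 is just an example.

```python
import math

def heat(k_last_refs, now, k):
    """heat(p) = k / (now - t_k), with t_k the k-th most recent reference."""
    t_k = k_last_refs[-1]  # oldest of the k retained reference time points
    return k / (now - t_k)

def access_probability(h, T):
    """P(page is referenced within the next T time units) = 1 - e^(-heat * T)."""
    return 1.0 - math.exp(-h * T)

# Hypothetical reference histories (k = 3, times in seconds), most recent first.
now = 100.0
hot  = [99.0, 98.0, 97.0]   # 3 references within the last 3 seconds
cold = [95.0, 60.0, 10.0]   # 3 references spread over 90 seconds

h_hot, h_cold = heat(hot, now, 3), heat(cold, now, 3)
# Reaction: evict the page with the smallest estimated probability.
victim = "cold" if access_probability(h_cold, 10) < access_probability(h_hot, 10) else "hot"
print(victim)  # cold
```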
Self-tuning server caching :
• This algorithm can be generalized to variable-size caching units (documents) rather than fixed-size pages.
• We calculate the temperature of each document.
• Cached documents are simply ranked by their temperature.
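One common definition of temperature is heat per unit of size, so that small hot documents outrank large lukewarm ones; that definition is an assumption here, sketched with made-up documents.

```python
def temperature(h, size):
    # Assumption: temperature = heat per unit of cache space, so a large
    # document must be proportionally hotter to justify the space it occupies.
    return h / size

docs = {"a.html": (0.5, 10), "b.pdf": (0.6, 600)}  # name -> (heat, size)
ranked = sorted(docs, key=lambda d: temperature(*docs[d]), reverse=True)
print(ranked)  # ['a.html', 'b.pdf']: the small hot document ranks first
```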
Automatic tuning of server and cache memory :
• A data server also needs to manage working memory for long-running operations.
• Memory management should not focus on a single global performance metric.
• It has to consider different workload classes.
• The system cannot automatically infer the importance of each class; this needs a human administrator.
• The mechanism for handling multiple workload classes is class-specific memory areas.
• The partitioning is merely conceptual, not physical: a memory area can be shared by multiple workload classes.
Automatic tuning of server and cache memory :
• Approaches for automatic memory tuning can be described as a feedback loop.
OBSERVATION : Uses moving-time-window averaging; the observation window must be carefully chosen.
PREDICTION : An algorithm is used to predict the performance change. As far as response-time predictions are concerned, the response time Ri of class i is a function of the memory areas M1, ..., Mm. Approximating Ri(M1, ..., Mm) is difficult.
REACTION : The class most in need of memory is found by max(Ri / Gi, 1 <= i <= m), where Ri is the response time and Gi is the response-time goal of class i.
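One observe-react cycle of this feedback loop can be sketched as follows; the response-time samples, window size, and goals are all hypothetical.

```python
# R_i: observed response time of class i; G_i: its response-time goal.

def observe(samples, window):
    """Moving-time-window average over the most recent response times."""
    recent = samples[-window:]
    return sum(recent) / len(recent)

def react(R, G):
    """Pick the class most in need of memory: the one maximizing R_i / G_i."""
    return max(range(len(R)), key=lambda i: R[i] / G[i])

R = [observe([1.2, 1.4, 2.0, 2.2], window=2),   # class 0: averages 2.1 s
     observe([0.5, 0.6, 0.5, 0.6], window=2)]   # class 1: averages 0.55 s
G = [1.5, 0.6]                                   # response-time goals per class

needy = react(R, G)   # class 0 misses its goal worse (2.1/1.5 > 0.55/0.6)
print(needy)  # 0
```

The hard part the slide points out is the prediction step between these two functions: estimating how Ri would change if the memory areas M1, ..., Mm were resized.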
Exploiting distributed memory :
Two cases :
• High-end data servers implemented on server clusters.
• A collection of independent servers with data replicated across all of them.
• A distributed caching algorithm controls the dynamic replication of data objects (fixed-size pages or dynamic documents) in the caches.
Two approaches :
1) Egoistic caching.
2) Altruistic caching.
Exploiting distributed memory :
Egoistic :
• Each server runs a local cache replacement algorithm such as LRU or LRU-k.
• Data that is not locally cached may still be fetched from a remote server's cache.
• It ends up with the hottest data fully replicated in all caches, with little space left for other data.
Altruistic :
• It aims at minimizing this replication by giving preference in the local cache replacement to data that is not cache-resident at a different server.
Exploiting distributed memory :
• For high-bandwidth networks the altruistic approach is better, as the remote-access overhead is affordable.
• But even the fastest interconnect can become congested under high load.
• A mathematical cost model decides which method is preferable under the current workload and system settings.
• The benefit metric is the mean response time of data requests over all servers.
• Since the model includes disk queueing, the entire approach can even contribute to disk load balancing.
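A toy illustration of such a cost-model decision; the hit rates and latency formulas below are invented for the sketch and are not the paper's actual model.

```python
# Estimate mean request response time under each caching mode, then pick
# the cheaper one. Times are in milliseconds; all numbers are hypothetical.
def mean_response(hit_local, hit_remote, t_cache, t_net, t_disk):
    miss = 1.0 - hit_local - hit_remote
    return hit_local * t_cache + hit_remote * t_net + miss * t_disk

def choose_policy(t_net):
    # Egoistic: heavy replication -> high local hit rate, few remote hits.
    egoistic = mean_response(0.60, 0.05, t_cache=0.1, t_net=t_net, t_disk=10.0)
    # Altruistic: little replication -> more data reachable in remote caches.
    altruistic = mean_response(0.40, 0.45, t_cache=0.1, t_net=t_net, t_disk=10.0)
    return "altruistic" if altruistic < egoistic else "egoistic"

print(choose_policy(t_net=1.0))   # fast network favors altruistic caching
print(choose_policy(t_net=25.0))  # congested network favors egoistic caching
```

The crossover captures the trade-off stated above: altruistic caching wins while remote accesses are cheap, and loses once interconnect congestion makes a remote hit cost more than the replication it avoids.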