Address generation
Processes generate logical addresses to physical memory while they are running.
How/when do these addresses get generated?
- Address binding - fixing a physical address to the logical address of a process's address space
  - Compile time - if the program location is fixed and known ahead of time
  - Load time - if the program location in memory is unknown until load time AND the location is then fixed
  - Execution time - if processes can be moved in memory during execution
    - Requires hardware support
Relocatable address generation
(Figure: program Prog P, containing a call foo(), passes through Compilation, Assembly, Linking, and Loading. The jump target is symbolic after compilation (jmp _foo), module-relative after assembly (jmp 75, foo at 75), program-relative after linking (jmp 175, foo at 175, P starting at 100 above the library routines), and absolute after loading at 1000 (jmp 1175, foo at 1175).)
(Figure: the three binding times. Compile-time binding: addresses are absolute in the image (P at 1100, jmp 1175). Load-time binding: addresses are relative (P at 100, jmp 175) and fixed once the program is loaded at 1000. Execution-time binding: relative addresses (jmp 175) are translated at every reference by adding a base register (1000), so the process can be moved during execution.)
Making systems more usable
- Dynamic loading - load only those routines that are accessed while running
  +) Does not load unused routines
- Dynamic linking - defer linking of shared code, such as system libraries and windowing code, until run time
  +) More efficient use of disk space
- Overlays - allow procedures to "overlay" each other to decrease the memory size required to run the program
  +) Allows more programs to be run
  +) Programs can be larger than memory
Basics - logical and physical addressing
- Memory Management Unit (MMU) - dynamically converts logical addresses into physical addresses
- The MMU's base address (relocation) register is set when the process is loaded
(Figure: a program-generated address is added to process i's relocation register (1000) by the MMU to form the physical memory address; process i occupies memory from 1000 up to its maximum address, with the operating system at the bottom of memory.)
Basics - swapping
- Swapping - allows processes to be temporarily "swapped" out of main memory to a backing store (typically disk)
- Swapping has several uses:
  - Allows multiple programs to be run concurrently
  - Allows the O.S. finer-grain control of which processes can be run
(Figure: process i is swapped in from the backing store while process m is swapped out; processes j and k remain in memory above the operating system.)
Basics - simple memory protection
- "keep addresses in play"
  - Relocation register gives the starting address for the process
  - Limit register limits the offset accessible from the relocation register
(Figure: a logical address is first compared against the limit register; if it is smaller, it is added to the relocation register to form the physical memory address, otherwise an addressing error is raised.)
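The limit-then-relocate check in the figure can be sketched as follows (a minimal illustration; the function name and register values are made up for the example, not taken from the slides):

```python
# Sketch of the MMU's limit/relocation check. The logical address is
# compared against the limit register first; only in-range addresses
# are relocated into physical memory.

def translate(logical_addr, limit, relocation):
    """Return the physical address, or raise on an out-of-range access."""
    if logical_addr >= limit:          # offset beyond the process's space
        raise MemoryError("addressing error")
    return logical_addr + relocation   # relocate into physical memory

# A process loaded at physical 1000 with a 500-byte address space:
assert translate(175, limit=500, relocation=1000) == 1175
```

Note that the comparison happens before the addition: an out-of-range logical address must never reach physical memory.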
Basics - overlays
- Overlays - allow different parts of the same program to "overlay" each other in memory to reduce the memory requirements
- Example - scanner program
(Figure: a 20K overlay driver plus 100K of window init and data structures stay resident, while the 140K image-editing code and the 120K capture code overlay each other in the same region.)
Memory management architectures
- Fixed size allocation
  - Memory is divided into fixed partitions
  - Fixed partitioning (partition > proc. size)
    - Different constant-size partitions
  - Paging (partition < proc. size)
    - Constant-size partitions
- Dynamically sized allocation
  - Memory allocated to fit processes exactly
    - Dynamic partitioning (partition = proc. size)
    - Segmentation
Multiprogramming with fixed partitions
- Memory is divided into fixed size partitions
- Processes are loaded into partitions of equal or greater size
(Figure: 5000K of memory split into partitions of 2800K, 1200K, and 500K above the O.S., each partition with its own job queue; the unused space inside each partition is internal fragmentation.)
Multiprogramming with f ixed partitions
Dynamic partitioning
- Allocate contiguous memory equal to the process size
- Corresponds to one job queue for memory
(Figure: 5000K of memory holds P1 through P4 packed above the O.S., fed from a single job queue; the leftover holes between allocations are external fragmentation.)
(Figure: a sequence of memory snapshots under dynamic partitioning, with the O.S. in the bottom 128K. P1 (320K), P2 (224K), and P3 (288K) are loaded in turn, leaving a 64K hole; P2 is swapped out and P4 (128K) takes part of its hole, leaving 96K; P1 is swapped out and P5 (224K) loaded in its place; eventually no single hole is large enough for P6, even though the scattered 64K and 96K holes would suffice in total.)
Dealing with external fragmentation
- Compaction - from time to time, shift processes around to collect all free space into one contiguous block
- Placement algorithms: first-fit, best-fit, worst-fit
(Figure: memory holding P3 (288K), P4 (128K), and P5 (224K) has separate 64K and 96K holes that cannot hold P6; after compaction the free space forms one 256K block and P6 (128K) fits.)
Compaction examples
(Figure: step-by-step compaction traces under FIRST-FIT and BEST-FIT placement. In each case memory is 1. scanned, then 2. compacted, shifting processes P1-P6 down toward the O.S. until the free space forms one contiguous block at the top.)
Compaction algorithms
- First-fit: place the process in the first hole that fits
  - Attempts to minimize scanning time by finding the first available hole
  - Lower memory will get smaller and smaller segments (until the compaction algorithm is run)
- Best-fit: smallest hole that fits the process
  - Attempts to minimize the number of compactions that need to be run
- Worst-fit: largest hole in memory
  - Attempts to maximize the external fragment sizes
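The three placement policies above can be sketched over a list of free holes (a minimal illustration; the function name, hole list representation, and sizes are invented for the example):

```python
# Each hole is (start, size). Returns the chosen hole's start address,
# or None when no hole fits (time to compact, or make the job wait).

def place(holes, size, policy):
    fits = [h for h in holes if h[1] >= size]
    if not fits:
        return None
    if policy == "first":
        return fits[0][0]                        # first adequate hole
    if policy == "best":
        return min(fits, key=lambda h: h[1])[0]  # smallest adequate hole
    if policy == "worst":
        return max(fits, key=lambda h: h[1])[0]  # largest hole
    raise ValueError(policy)

holes = [(0, 64), (200, 256), (600, 96)]         # in address order
assert place(holes, 90, "first") == 200          # first hole >= 90K
assert place(holes, 90, "best") == 600           # tightest fit (96K)
assert place(holes, 90, "worst") == 200          # largest hole (256K)
```

First-fit stops scanning early; best-fit and worst-fit must examine every hole (unless the hole list is kept sorted by size).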
Memory management using paging
- Fixed partitioning of memory suffers from internal fragmentation, due to the coarse granularity of the fixed memory partitions
- Memory management via paging:
  - Permit the physical address space of a process to be noncontiguous
  - Break physical memory into fixed-size blocks called frames
  - Break a process's address space into same-sized blocks called pages
  - Pages are relatively small compared to processes (reduces the internal fragmentation)
Memory management using paging
- Logical address: <page number, page offset>
  - page number: the high m-n bits; page offset: the low n bits
(Figure: a logical address space of pages 0-3 is mapped through a page table (0->1, 1->4, 2->2, 3->5) into noncontiguous frames of physical memory; the offset passes through unchanged.)
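The <page number, offset> split can be sketched with bit operations (an illustration only; the page size, function name, and page-table contents are made up, not taken from the figure):

```python
# Split an m-bit logical address into <page number, offset> with n
# offset bits, then translate through a flat page table (a list
# indexed by page number, holding frame numbers).

PAGE_BITS = 10                 # n = 10: 1K-byte pages
PAGE_SIZE = 1 << PAGE_BITS

def translate(logical, page_table):
    page = logical >> PAGE_BITS             # high m-n bits
    offset = logical & (PAGE_SIZE - 1)      # low n bits
    frame = page_table[page]
    return (frame << PAGE_BITS) | offset    # <frame, offset>

page_table = [4, 2, 5]         # page 0 -> frame 4, page 1 -> frame 2, ...
assert translate(1 * PAGE_SIZE + 100, page_table) == 2 * PAGE_SIZE + 100
```

Because the page size is a power of two, the split is a shift and a mask rather than a division, which is what makes it cheap enough to do on every memory reference.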
Hardware Support for Paging
- The page table needs to be stored somewhere
  - Registers
  - Main memory
- Page Table Base Register (PTBR) - points to the in-memory location of the page table
- Translation look-aside buffers make translation faster
- Paging implementation issues
  - Two memory accesses per address?
  - What if the page table > page size?
  - How do we implement memory protection?
  - Can code sharing occur?
Paging system performance
- The page table is stored in memory; thus, every logical address access results in TWO physical memory accesses:
  1) Look up the page table
  2) Look up the true physical address for the reference
- To make logical-to-physical address translation quicker:
  - Translation Look-aside Buffer - a very small associative cache that maps logical page references to physical page references
  - Locality of reference - a reference to an area of memory is likely to cause another access to the same area
Translation lookaside buffer
(Figure: the CPU issues <page #, offset>; on a TLB hit the frame # comes directly from the TLB, otherwise the page table in memory supplies it; either way the physical address is <frame #, offset>.)
TLB implementation
- In order to be fast, TLBs must implement an associative search, where the cache is searched in parallel
  - EXPENSIVE
  - The number of entries varies (8 -> 2048)
- Because the TLB translates logical pages to physical pages, the TLB must be flushed on every context switch in order to work
  - Can improve performance by associating process bits with each TLB entry
- A TLB must implement an eviction policy, which flushes old entries out of the TLB
  - Occurs when the TLB is full
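A toy model of the TLB behaviors above — eviction when full, flush on context switch (the class, FIFO eviction policy, and 2-entry size are illustrative choices, not details from the slides; real TLBs search all entries in parallel in hardware):

```python
from collections import OrderedDict

class TLB:
    """Tiny software model of a TLB: page -> frame, FIFO eviction."""
    def __init__(self, entries=8):
        self.entries = entries
        self.cache = OrderedDict()

    def lookup(self, page):
        return self.cache.get(page)          # None means a TLB miss

    def insert(self, page, frame):
        if len(self.cache) >= self.entries:
            self.cache.popitem(last=False)   # evict the oldest entry
        self.cache[page] = frame

    def flush(self):
        self.cache.clear()                   # on context switch

tlb = TLB(entries=2)
tlb.insert(0, 9); tlb.insert(1, 3)
assert tlb.lookup(0) == 9        # hit
tlb.insert(2, 7)                 # full: page 0 is evicted
assert tlb.lookup(0) is None     # miss after eviction
tlb.flush()
assert tlb.lookup(1) is None     # empty after a context switch
```

Tagging entries with process bits (address-space identifiers) avoids the full flush, at the cost of matching one more field on every lookup.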
Memory protection with paging
- Associate protection bits with each page table entry
  - Read/write access - can provide read-only access for re-entrant code
  - Valid/invalid bits - tell the MMU whether or not the page exists in the process address space

  Page    Frame #   R/W   V/I
   0         5       R     V
   1         3       R     V
   2         2       W     V
   3         9       W     V
   4         -       -     I
   5         -       -     I

- Page Table Length Register (PTLR) - stores how long the page table is, to avoid an excessive number of unused page table entries
Multilevel paging
- For modern computer systems,
  - # frames << # pages
- Example:
  - 8 kbyte page/frame size
  - 32-bit addresses
  - 4 bytes per PTE
  - How many page table entries?
  - How large is the page table?
- Multilevel paging - page the page table itself
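Working the example's two questions directly from the given numbers:

```python
# 8 KB pages, 32-bit addresses, 4-byte page table entries.
page_size = 8 * 1024            # 2**13 bytes
addr_space = 2 ** 32
pte_size = 4

entries = addr_space // page_size      # one PTE per page: 2**19
table_size = entries * pte_size        # flat page table size

assert entries == 2 ** 19              # 524,288 entries
assert table_size == 2 * 1024 * 1024   # 2 MB of page table per process
```

A contiguous 2 MB table per process is exactly the problem multilevel paging solves: the table itself is broken into pages, and only the pieces in use need to be resident.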
Multilevel paging
- Page the page table
- Logical address --> [section #, page #, offset]
(Figure: the logical address <p1, p2, d> is translated in two steps: p1 indexes the outer page table to find a page table, p2 indexes that page table to find frame f, and the physical address is <f, d>.)
- How do we calculate the size of the section and page fields?
Virtual memory management overview
- What have we learned about memory management?
  - Processes require memory to run
  - We have assumed that the entire process is resident during execution
- Some functions in processes never get invoked
  - Error detection and recovery routines
  - In a graphics package, functions like smooth, sharpen, brighten, etc. may not get invoked
- Virtual memory - allows for the execution of processes that may not be completely in memory (an extension of the paging technique from the last chapter)
- Benefits?
Virtual memory overview
- Hides physical memory from the user
- Allows a higher degree of multiprogramming (only bring in pages that are accessed)
- Allows large processes to be run on small amounts of physical memory
- Reduces the I/O required to swap processes in/out (makes the system faster)
- Requires:
  - Pager - pages in/out pages as required
  - "Swap" space in order to hold processes that are partially resident
  - Hardware support to do address translation
Demand paging
- Each process address space is broken into pages (as in the paged memory management technique)
- Upon execution, swap in a page only if it is not in memory (lazy swapping, or demand paging)
- Pager - a process that takes care of swapping pages in/out to/from memory
(Figure: pages move between memory and disk under the pager's control.)
Demand paging implementation
- One page-table entry per page
- Valid/invalid bit - tells whether the page is resident in memory
- For each page brought in, mark the valid bit
(Figure: logical memory holds pages A-E; the page table maps A -> frame 9 (valid), C -> frame 2 (valid), and E -> frame 5 (valid), while B and D are marked invalid because they are still on disk.)
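The valid-bit mechanism can be sketched as follows (an illustration built around the figure's A-E example; the function name, free-frame list, and the frames chosen on a fault are invented for the sketch):

```python
# Each page-table entry is a frame number, or None when the valid bit
# is off. Touching an invalid page triggers a page fault; the pager
# brings the page in and marks the entry valid.

INVALID = None

def access(page, page_table, free_frames):
    frame = page_table[page]
    if frame is INVALID:             # page fault: lazily swap in
        frame = free_frames.pop()    # pager picks a free frame
        page_table[page] = frame     # entry is now valid
    return frame

# Pages A..E; only A, C, E resident, as in the figure:
table = [9, INVALID, 2, INVALID, 5]
free = [6, 7]
assert access(0, table, free) == 9   # A: hit, already resident
assert access(1, table, free) == 7   # B: fault, brought into frame 7
assert table[1] == 7                 # valid bit now set for B
```

A real implementation also reads the page's contents from swap space before marking the entry valid; the sketch tracks only the bookkeeping.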
Another example
(Figure: a second demand-paging snapshot of pages A-E, with a different set of pages resident in physical memory and the remaining entries marked invalid in the page table.)
Chapter 4 Memory Management
4.1 Basic memory management
4.2 Swapping
4.3 Virtual memory
4.4 Page replacement algorithms
4.5 Modeling page replacement algorithms
4.6 Design issues for paging systems
4.7 Implementation issues
4.8 Segmentation
Memory Management
- Ideally programmers want memory that is
  - large
  - fast
  - non-volatile
- Memory hierarchy
  - small amount of fast, expensive memory - cache
  - some medium-speed, medium-price main memory
  - gigabytes of slow, cheap disk storage
- Memory manager handles the memory hierarchy
Basic Memory Management
Monoprogramming without Swapping or Paging
(Figure: three simple ways of organizing memory for an operating system with one user process.)
Analysis of Multiprogramming System Performance
- Arrival and work requirements of 4 jobs
- CPU utilization for 1-4 jobs with 80% I/O wait
- Sequence of events as jobs arrive and finish
  - note: numbers show the amount of CPU time jobs get in each interval
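The utilization figures behind this analysis come from the standard probabilistic model: with n processes each waiting on I/O a fraction p of the time, the CPU is busy whenever at least one process is runnable, giving utilization 1 - p^n. A quick check with the slide's 80% I/O wait:

```python
# CPU utilization under the multiprogramming model: 1 - p**n, where p
# is the fraction of time each process spends waiting on I/O.

def utilization(n, p=0.80):
    return 1 - p ** n

assert round(utilization(1), 2) == 0.20   # one job: CPU busy 20% of the time
assert round(utilization(4), 2) == 0.59   # four jobs: ~59% busy
```

The model assumes the processes' I/O waits are independent, which overstates utilization somewhat, but it captures why adding jobs raises throughput with I/O-bound workloads.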
Relocation and Protection
- Cannot be sure where a program will be loaded in memory
  - address locations of variables and code routines cannot be absolute
  - must keep a program out of other processes' partitions
- Use base and limit values
  - address locations are added to the base value to map to a physical address
  - address locations larger than the limit value are an error
Swapping (1)
- Memory allocation changes as
  - processes come into memory
  - processes leave memory
- Shaded regions are unused memory
Swapping (2)
- Allocating space for a growing data segment
- Allocating space for a growing stack & data segment
Memory Management with Bit Maps
- Part of memory with 5 processes, 3 holes
  - tick marks show allocation units
  - shaded regions are free
- Corresponding bit map
- Same information as a list
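The bitmap scheme's main cost is searching for a run of free units, which can be sketched as a linear scan (a minimal illustration; the function name and the example bitmap are invented):

```python
# Memory is tracked in fixed allocation units, one bit per unit
# (1 = in use, 0 = free). Allocating k units means finding a run
# of k consecutive zero bits.

def find_free_run(bitmap, k):
    """Index of the first run of k free units, or None if no run exists."""
    run_start, run_len = None, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i        # a new run of free units begins
            run_len += 1
            if run_len == k:
                return run_start
        else:
            run_len = 0              # run broken by an in-use unit
    return None

bitmap = [1, 1, 0, 1, 0, 0, 0, 1]
assert find_free_run(bitmap, 3) == 4     # the three-unit hole at index 4
assert find_free_run(bitmap, 4) is None  # no hole is large enough
```

This scan is why the linked-list representation on the next slide can be attractive: the list stores each hole's length directly instead of requiring a bit-by-bit search.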
Memory Management with Linked Lists
Four neighbor combinations for the terminating process X
Page Size (1)
- Small page size
- Advantages
  - less internal fragmentation
  - better fit for various data structures, code sections
  - less unused program in memory
- Disadvantages
  - programs need many pages, larger page tables
Page Size (2)
- Overhead due to page table and internal fragmentation:

    overhead = s*e/p + p/2

  where
    s = average process size in bytes
    p = page size in bytes
    e = bytes per page table entry
  (the first term is page table space, the second is internal fragmentation)
- Optimized when p = sqrt(2*s*e)
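Checking the optimum numerically (the example sizes for s and e are invented for the illustration):

```python
from math import sqrt, isclose

def overhead(s, e, p):
    """Page-table space s*e/p plus mean internal fragmentation p/2."""
    return s * e / p + p / 2

def optimal_page_size(s, e):
    """Minimizer of overhead(s, e, p): p = sqrt(2*s*e)."""
    return sqrt(2 * s * e)

s, e = 1 << 20, 8            # 1 MB average process, 8-byte entries
p = optimal_page_size(s, e)  # sqrt(2 * 2**20 * 8) = 4096
assert isclose(p, 4096)
# Neighboring page sizes cost more:
assert overhead(s, e, 2048) > overhead(s, e, p)
assert overhead(s, e, 8192) > overhead(s, e, p)
```

The optimum follows from setting the derivative -s*e/p**2 + 1/2 to zero; at p = sqrt(2*s*e) the two overhead terms are exactly equal.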
Separate Instruction and Data Spaces
- One address space
- Separate I and D spaces
Shared Pages
Two processes sharing the same program, and thus sharing its page table
Cleaning Policy
- Need for a background process, the paging daemon
  - periodically inspects the state of memory
- When too few frames are free
  - selects pages to evict using a replacement algorithm
- It can use the same circular list (clock) as the regular page replacement algorithm, but with a different pointer
Implementation Issues
Operating System Involvement with Paging
Four times when the OS is involved with paging:
- Process creation
  - determine program size
  - create page table
- Process execution
  - MMU reset for new process
  - TLB flushed
- Page fault time
  - determine virtual address causing the fault
  - swap target page out, needed page in
- Process termination time
  - release page table, pages
Backing Store
(a) Paging to a static swap area
(b) Backing up pages dynamically
Separation of Policy and Mechanism
Page fault handling with an external pager
Segmentation (1)
- One-dimensional address space with growing tables
- One table may bump into another
Segmentation (2)
Allows each table to grow or shrink independently
Segmentation (3)
Comparison of paging and segmentation
Implementation of Pure Segmentation
(a)-(d) Development of checkerboarding
(e) Removal of the checkerboarding by compaction
Segmentation with Paging: MULTICS (1)
- Descriptor segment points to page tables
- Segment descriptor - numbers are field lengths
Segmentation with Paging: MULTICS (2)
A 34-bit MULTICS virtual address
Segmentation with Paging: MULTICS (3)
Conversion of a 2-part MULTICS address into a main memory address
Segmentation with Paging: MULTICS (4)
- Simplified version of the MULTICS TLB
- Existence of 2 page sizes makes the actual TLB more complicated
Page Replacement Algorithms and Performance Modeling
Virtual memory performance
- What is the limiting factor in the performance of virtual memory systems?
  - In the page-fault handling sequence above, steps 5 and 6 (the disk operations) require on the order of 10 milliseconds, while the rest of the steps require on the order of microseconds/nanoseconds.
  - Thus, disk accesses typically limit the performance of virtual memory systems.
- Effective access time - mean memory access time from logical address to physical address retrieval

    effective access time = (1-p)*ma + p*page_fault_time
    p = probability that a page fault will occur
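Plugging illustrative numbers into the effective-access-time formula shows how thoroughly the disk dominates (the 100 ns memory access and 10 ms fault-service time are example values consistent with the orders of magnitude quoted above):

```python
# Effective access time, per the formula on the slide.
def effective_access_time(p, ma=100e-9, fault_time=10e-3):
    """p = page-fault probability, ma = memory access time (seconds)."""
    return (1 - p) * ma + p * fault_time

# Even one fault per 100,000 accesses nearly doubles the mean:
eat = effective_access_time(p=1e-5)
assert eat > 1.9 * 100e-9
assert eat < 2.1 * 100e-9
```

To keep the slowdown under 10% with these numbers, the fault rate would have to stay below about one fault per million accesses, which is why replacement algorithms matter so much.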