Part III: Storage Management
Chapter 8: Memory Management
Fall 2010
Address Generation
- Address generation has three stages:
  - Compile: compiler
  - Link: linker or linkage editor
  - Load: loader
- Pipeline: source code -> compiler -> object module -> linker -> load module -> loader -> memory
Three Address Binding Schemes
- Compile Time: If the compiler knows the location where a program will reside, it can generate absolute code. Example: compile-and-go systems and MS-DOS .COM-format programs.
- Load Time: Since the compiler may not know the absolute address, it generates relocatable code. Address binding is delayed until load time.
- Execution Time: If the process may be moved in memory during its execution, then address binding must be delayed until run time. This is the most commonly used scheme.
Address Generation: Compile Time
Linking and Loading
Address Generation: Static Linking
Loaded into Memory
- Code and data are loaded into memory at addresses 10000 and 20000, respectively.
- Every unresolved address must be adjusted.
Logical, Virtual, and Physical Addresses
- Logical Address: the address generated by the CPU.
- Physical Address: the address seen and used by the memory unit.
- Virtual Address: Run-time binding may generate different logical and physical addresses. In this case, the logical address is also referred to as the virtual address. (Logical = Virtual in this course.)
Dynamic Loading
- Some routines in a program (e.g., error handling) may not be used frequently.
- With dynamic loading, a routine is not loaded until it is called.
- To use dynamic loading, all routines must be in a relocatable format.
- The main program is loaded and executes.
- When a routine A calls B, A checks to see if B is loaded. If B is not loaded, the relocatable linking loader is called to load B and update the address table. Then, control is passed to B.
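As a rough modern analogy, POSIX dlopen()/dlsym() let a program defer loading a routine until it is first needed. This is only a sketch of the idea at the API level, not the relocatable linking loader described above; the library name liberror.so and the routine handle_error are hypothetical.

/* Dynamic loading sketch: the routine is not loaded until report_error()
 * is actually called.  Compile with -ldl on Linux. */
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

typedef void (*error_fn)(const char *msg);

void report_error(const char *msg)
{
    /* Load the (hypothetical) library only when the routine is needed. */
    void *lib = dlopen("liberror.so", RTLD_LAZY);
    if (lib == NULL) {
        fprintf(stderr, "cannot load liberror.so: %s\n", dlerror());
        exit(EXIT_FAILURE);
    }
    error_fn handle_error = (error_fn) dlsym(lib, "handle_error");
    if (handle_error == NULL) {
        fprintf(stderr, "cannot resolve handle_error: %s\n", dlerror());
        exit(EXIT_FAILURE);
    }
    handle_error(msg);   /* control is passed to the loaded routine */
    dlclose(lib);
}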
Dynamic Linking
- Dynamic loading postpones the loading of routines until run time. Dynamic linking postpones both linking and loading until run time.
- A stub is added to each reference to a library routine. A stub is a small piece of code that indicates how to locate and load the routine if it is not loaded.
- When a routine is called, its stub is executed. The called routine is loaded, its address replaces the stub, and it executes.
- Dynamic linking usually applies to language and system libraries. A Windows DLL is a dynamic linking library.
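A minimal sketch of the stub idea, simulated with a function pointer: the first call goes to a stub, which resolves the real routine, replaces the stub's address, and passes control to it. The names are illustrative; a real dynamic linker does this with lazy-binding stubs generated by the linker.

/* Stub-based lazy binding, simulated in ordinary C. */
#include <stdio.h>

static void real_routine(void)                   /* the "library routine"  */
{
    puts("library routine runs");
}

static void routine_stub(void);                  /* forward declaration    */

/* Calls go through this pointer; it initially points at the stub.         */
static void (*routine)(void) = routine_stub;

static void routine_stub(void)
{
    /* A real dynamic linker would locate and load the routine here.       */
    routine = real_routine;                      /* replace the stub       */
    routine();                                   /* pass control to it     */
}

int main(void)
{
    routine();   /* first call: the stub resolves the routine, then runs it */
    routine();   /* later calls go directly to the routine                  */
    return 0;
}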
Memory Management Schemes
- Monoprogramming Systems: MS-DOS
- Multiprogramming Systems:
  - Fixed Partitions
  - Variable Partitions
  - Paging
Monoprogramming Systems
(figure: three memory layouts from address 0 to max — the OS in low memory with the user program above it; the OS in ROM at the top of memory with the user program below; and device drivers in ROM at the top, the user program in the middle, and the OS in low memory)
Why Multiprogramming?
- Suppose a process spends a fraction p of its time in the I/O wait state.
- Then, the probability of n processes all being in the wait state at the same time is p^n.
- The CPU utilization is 1 - p^n.
- Thus, the more processes in the system, the higher the CPU utilization.
- Well, since CPU power is limited, throughput decreases when n is sufficiently large.
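A quick sanity check of the formula, assuming an I/O-wait fraction of p = 0.8 (a value chosen only for illustration): utilization grows from 20% with one process to roughly 89% with ten.

/* CPU utilization 1 - p^n for n processes, each waiting a fraction p. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double p = 0.8;                 /* assumed I/O-wait fraction */
    for (int n = 1; n <= 10; n++)
        printf("n = %2d  utilization = %.1f%%\n",
               n, 100.0 * (1.0 - pow(p, n)));
    return 0;
}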
Multiprogramming with Fixed Partitions
- Memory is divided into n (possibly unequal) partitions.
- Partitioning may be done at startup time and altered later.
- Each partition may have its own job queue. Or, all partitions may share the same job queue.
(figure: memory divided into the OS area and four partitions of 300k, 200k, 150k, and 100k)
Relocation and Protection: 1/2
- Because executables may run in any partition, relocation and protection are needed.
- Recall the base/limit register pair for memory protection.
- It could also be used for relocation.
- The linker generates relocatable code starting at address 0. The base register contains the starting address.
Relocation and Protection: 2/2
(figure: the logical address generated by the CPU is compared with the limit register; if it is smaller, the base register is added to form the physical address, otherwise the access is not in your space and traps to the OS as an addressing error)
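A small sketch of the check in this diagram, done in software for clarity (the hardware does it on every memory reference). The base and limit values are made up for illustration.

/* Base/limit translation: bounds check, then relocation. */
#include <stdio.h>
#include <stdlib.h>

typedef unsigned long addr_t;

static addr_t base_reg  = 300000;   /* partition start (illustrative)  */
static addr_t limit_reg = 120000;   /* partition length (illustrative) */

addr_t translate(addr_t logical)
{
    if (logical >= limit_reg) {
        fprintf(stderr, "addressing error: %lu outside partition\n", logical);
        exit(EXIT_FAILURE);          /* stands in for a trap to the OS */
    }
    return base_reg + logical;       /* physical address */
}

int main(void)
{
    printf("logical 1000 -> physical %lu\n", translate(1000));  /* 301000 */
    return 0;
}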
Relocation: How Does It Work?
Multiprogramming with Variable Partitions
- The OS maintains a memory pool and allocates whatever a job needs.
- Thus, partition sizes are not fixed; the number of partitions also varies.
(figure: successive memory snapshots as processes A, B, and C are loaded and removed, leaving free holes of varying sizes)
Memory Allocation: 1/2
- When a memory request is made, the OS searches all free blocks (i.e., holes) to find a suitable one.
- There are some commonly seen methods:
  - First Fit: Search starts at the beginning of the set of holes; allocate the first hole that is large enough.
  - Next Fit: Search starts from where the previous first-fit search ended.
  - Best Fit: Allocate the smallest hole that is larger than the requested size.
  - Worst Fit: Allocate the largest hole that is larger than the requested size.
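A minimal first-fit sketch over a linked list of holes. The struct and its fields are assumptions for illustration; the other strategies differ only in how the list is scanned (best fit keeps the smallest adequate hole, worst fit the largest, next fit resumes where the previous search stopped).

/* First-fit search over the hole (free-block) list. */
#include <stddef.h>

struct hole {
    size_t       start;   /* starting address of the free block */
    size_t       size;    /* size of the free block in bytes    */
    struct hole *next;
};

/* Return the first hole large enough for the request, or NULL if none fits. */
struct hole *first_fit(struct hole *holes, size_t request)
{
    for (struct hole *h = holes; h != NULL; h = h->next)
        if (h->size >= request)
            return h;
    return NULL;
}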
Memory Allocation: 2/2
- If the hole is larger than the requested size, it is cut into two. The piece of the requested size is given to the process; the remaining piece becomes a new hole.
- When a process returns a memory block, it becomes a hole and must be combined with its neighbors.
(figure: the cases of freeing block X between neighbors A and B, before and after X is freed, depending on whether A and B are allocated or free; adjacent free space merges into one hole)
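A sketch of the two operations on this slide: splitting a hole on allocation and coalescing on free. The hole type repeats the illustrative struct from the first-fit sketch; only the left-neighbor merge case is shown.

/* Splitting and coalescing holes (illustrative only). */
#include <stddef.h>

struct hole {
    size_t       start;   /* starting address of the free block */
    size_t       size;    /* size in bytes                      */
    struct hole *next;
};

/* Cut a hole: hand the first `request` bytes to the process and shrink the
 * hole so the remainder stays on the free list. */
size_t allocate_from(struct hole *h, size_t request)
{
    size_t block = h->start;
    h->start += request;
    h->size  -= request;
    return block;
}

/* Coalesce on free: if the returned block begins exactly where the hole on
 * its left ends, extend that hole instead of creating a new one.          */
void free_after(struct hole *left, size_t start, size_t size)
{
    if (left != NULL && left->start + left->size == start)
        left->size += size;        /* merge with the adjacent hole */
}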
Fragmentation
- Processes are loaded into and removed from memory; eventually, memory is cut into small holes that are not large enough to run any incoming process.
- Free memory holes between allocated ones are called external fragmentation.
- It is unwise to allocate exactly the requested amount of memory to a process, because of address boundary alignment requirements or the minimum requirements of memory management.
- Thus, memory that is allocated to a partition but is not used is internal fragmentation.
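A tiny illustration of internal fragmentation from alignment: if requests are rounded up to a multiple of the allocation unit, the unused tail is wasted. The 16-byte unit is an assumption for illustration.

/* Internal fragmentation caused by rounding up to the allocation unit. */
#include <stdio.h>

#define ALLOC_UNIT 16

int main(void)
{
    size_t request   = 100;
    size_t allocated = (request + ALLOC_UNIT - 1) / ALLOC_UNIT * ALLOC_UNIT;
    printf("requested %zu, allocated %zu, internal fragmentation %zu bytes\n",
           request, allocated, allocated - request);   /* 100, 112, 12 */
    return 0;
}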
External/Internal Fragmentation
(figure: a memory map of used and free partitions; the free gaps between allocated partitions are external fragmentation, while the unused space inside an allocated partition is internal fragmentation)
Compaction for External Fragmentation
- If processes are relocatable, we may move used memory blocks together to make a larger free memory block.
(figure: memory before and after compaction — the used blocks are slid together so the scattered free holes merge into one large free block)
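A sketch of compaction over a flat memory array: each used block is slid down over the free gaps, leaving one contiguous hole at the top. The memory array and block table are assumptions for illustration; blocks are assumed sorted by address, and the processes must be relocatable since their addresses change.

/* Compaction: move all used blocks to low memory. */
#include <stddef.h>
#include <string.h>

struct block { size_t start; size_t size; };   /* one used partition */

/* Returns the start of the single remaining free hole. */
size_t compact(unsigned char *memory, struct block *blocks, size_t nblocks)
{
    size_t next_free = 0;
    for (size_t i = 0; i < nblocks; i++) {
        memmove(memory + next_free, memory + blocks[i].start, blocks[i].size);
        blocks[i].start = next_free;           /* block has been relocated */
        next_free += blocks[i].size;
    }
    return next_free;
}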
Paging: 1/2
- The physical memory is divided into fixed-sized page frames, or frames.
- The virtual address space is also divided into blocks of the same size, called pages.
- When a process runs, its pages are loaded into page frames.
- A page table stores the page numbers and their corresponding page frame numbers.
- The virtual address is divided into two fields: page number and offset (within that page).
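A small sketch of how the two fields are split off and recombined. The 4 KiB page size and the tiny page table are assumptions for illustration.

/* Virtual-to-physical translation with a one-level page table. */
#include <stdio.h>

#define PAGE_SHIFT 12                        /* 4 KiB pages              */
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define PAGE_MASK  (PAGE_SIZE - 1)

static unsigned page_table[] = { 5, 9, 2, 7 };   /* page -> frame numbers */

unsigned translate(unsigned vaddr)
{
    unsigned page   = vaddr >> PAGE_SHIFT;        /* page number            */
    unsigned offset = vaddr &  PAGE_MASK;         /* offset within the page */
    unsigned frame  = page_table[page];           /* look up the frame      */
    return (frame << PAGE_SHIFT) | offset;        /* physical address       */
}

int main(void)
{
    unsigned v = 0x1234;                          /* page 1, offset 0x234   */
    printf("virtual 0x%x -> physical 0x%x\n", v, translate(v));  /* 0x9234 */
    return 0;
}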