Memory Management
Disclaimer: some slides are adapted from the book authors' slides with permission
Recap: Virtual Memory (VM)
[Figure: processes A, B, and C each see their own virtual address space; the MMU maps them onto shared physical memory]
Recap: MMU
• Hardware that translates virtual addresses to physical addresses: 1) base register; 2) base + limit registers (segmentation); 3) paging
Recap: Page Table based Address Translation
[Figure: virtual address 0x12345678 is split into page # 0x12345 and offset 0x678; the page table maps page # 0x12345 to frame # 0xabcde; frame # and offset combine into the physical address 0xabcde678]
Recap: Translation Lookaside Buffer
• Caches frequent address translations
– So that the CPU doesn't need to access the page table on every memory reference
– Much faster
Multi-level Paging
• Two-level paging
Two Level Address Translation
[Figure: the virtual address is split into a 1st-level index, a 2nd-level index, and an offset; the base pointer locates the 1st-level page table, whose entry locates a 2nd-level page table, whose entry supplies the frame #; frame # + offset form the physical address]
Example
Virtual address format (24 bits): 8-bit 1st-level index | 8-bit 2nd-level index | 8-bit offset

Vaddr:          0x082370   0x0703FE   0x072370
1st level idx:  __         07         __
2nd level idx:  __         03         __
Offset:         __         FE         __
Multi-level Paging
• Can save table space
• How, why? Second-level tables need to be allocated only for the regions of the address space actually in use
Quiz
• What is the minimum page table size of a process that uses only 4MB of memory space?
– Assume a PTE size is 4B
– Single-level (20-bit page #, 12-bit offset): 4B * 2^20 entries = 4MB
– Two-level (10-bit 1st level, 10-bit 2nd level, 12-bit offset): one 1st-level table + one 2nd-level table = 4B * 2^10 + 4B * 2^10 = 8KB
Paging Summary
• Advantages
– Efficient use of memory space
• No external fragmentation
• Two main issues
– Translation speed can be slow
• Solution: TLB
– Table size is big
• Solution: Multi-level page table
Concepts to Learn
• Demand paging
Virtual Memory (VM)
• Abstraction
– 4GB linear address space for each process
• Reality
– 1GB of actual physical memory shared with 20 other processes
• Does each process use the (1) entire virtual memory (2) all the time?
Demand Paging
• Idea: instead of keeping all of a process's pages in memory all the time, keep only part of them in memory, on an on-demand basis
Page Table Entry (PTE)
• PTE format (architecture specific)
  | V (1 bit) | M (1 bit) | R (1 bit) | P (2 bits) | Page Frame No (20 bits) |
– Valid bit (V): whether the page is in memory
– Modify bit (M): whether the page has been modified
– Reference bit (R): whether the page has been accessed
– Protection bits (P): readable, writable, executable
Partial Memory Mapping
• Not all pages are in memory (i.e., not all PTEs have valid = 1)
Page Fault
• When a virtual address cannot be translated to a physical address, the MMU generates a trap to the OS
• Page fault handling procedure
– Step 1: allocate a free page frame
– Step 2: bring in the stored page from disk (if necessary)
– Step 3: update the PTE (mapping and valid bit)
– Step 4: restart the faulting instruction
Page Fault Handling
Demand Paging
Starting Up a Process
[Figure: address space with stack, heap, data, and code regions; all pages are initially unmapped]
Starting Up a Process
[Figure: the CPU accesses the next instruction in the code region]
Starting Up a Process
[Figure: the access to an unmapped page triggers a page fault]
Starting Up a Process
• The OS: 1) allocates a free page frame; 2) loads the missed page from the disk (exec file); 3) updates the page table entry
Starting Up a Process
• Over time, more pages are mapped in as needed
Anonymous Page
• An executable file contains the code (binary)
– So the code can be read back from the executable file
• What about the heap?
– No backing storage (unless it is swapped out later)
– Simply map a new free page (anonymous page) into the address space
Program Binary Sharing
• Multiple instances of the same program
– E.g., 10 bash shells
[Figure: Bash #1 and Bash #2 share a single copy of the bash text (code) in physical memory]