  1. SE350: Operating Systems Lecture 5: Multithreaded Kernels

  2. Outline • Use cases for multithreaded programs • Kernel vs. user-mode threads • Concurrency’s problems

  3. Recall: Why Processes & Threads? Goals: • Multiprogramming: Run multiple applications concurrently • Protection: Don't want bad applications to crash system! Solution: • Process: unit of execution and allocation • Virtual Machine abstraction: give process illusion it owns machine (i.e., CPU, Memory, and IO device multiplexing) Challenge: • Process creation & switching expensive • Need concurrency within same app (e.g., web server) Solution: • Thread: Decouple allocation and execution • Run multiple threads within same process
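
  To make "run multiple threads within same process" concrete, here is a minimal sketch (not from the slides) using POSIX threads; worker and shared_counter are illustrative names. Both threads run in the same address space, so they see the same global variable.

      /* Sketch: two POSIX threads sharing one process's address space. */
      #include <pthread.h>
      #include <stdio.h>

      static int shared_counter = 0;          /* lives in the process's shared data segment */

      static void *worker(void *arg) {
          const char *name = arg;
          shared_counter++;                   /* both threads update the same variable */
          printf("%s sees counter = %d\n", name, shared_counter);
          return NULL;
      }

      int main(void) {
          pthread_t t1, t2;
          pthread_create(&t1, NULL, worker, "thread A");
          pthread_create(&t2, NULL, worker, "thread B");
          pthread_join(t1, NULL);
          pthread_join(t2, NULL);
          return 0;
      }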

  4. Multithreaded Processes • PCBs could point to multiple TCBs • Switching threads within one process (same PCB) is a simple thread switch • Switching threads across processes (different PCBs) requires changes to memory and I/O address tables
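
  One way to picture "PCBs could point to multiple TCBs" is the sketch below; the struct layout and field names are hypothetical, not any real kernel's definitions.

      /* Hypothetical control-block layout: one PCB owns per-process state
       * and points to one TCB per thread inside that process. */
      typedef struct tcb {
          void          *stack_ptr;   /* saved stack pointer for this thread */
          void          *pc;          /* saved program counter               */
          unsigned long  regs[16];    /* saved general-purpose registers     */
          struct tcb    *next;        /* next thread in the same process     */
      } tcb_t;

      typedef struct pcb {
          int      pid;
          void    *page_table;        /* memory translation state            */
          void    *io_state;          /* open files, sockets, ...            */
          tcb_t   *threads;           /* list of this process's TCBs         */
      } pcb_t;

      /* Switching between two TCBs under the same PCB only swaps CPU state;
       * switching across PCBs also swaps page_table and io_state.           */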

  5. Example Multithreaded Programs • Embedded systems • Elevators, planes, medical systems, smart watches • Single program, concurrent operations • Most modern OS kernels • Internally concurrent to deal with concurrent requests by multiple users/applications • But no protection needed within kernel • Database servers • Access to shared data by many concurrent users • Background utility processing must also be done

  6. Example Multithreaded Programs (cont.) • Network servers • Concurrent requests from network • Again, single program, multiple concurrent operations • File server, web server, and airline reservation systems • Parallel programming (more than one physical CPU) • Split program into multiple threads for parallelism • This is called multiprocessing • Some multiprocessors are actually uniprogrammed • Multiple threads in one address space but one program at a time

  7. A Typical Use Case • Web Server: fork process for each client connection; create threads to get request and issue response; create threads to read data, access DB, etc.; join and respond • Client Browser: fork process for each tab; create thread to render page; run GET in separate thread; spawn multiple outstanding GETs; as they complete, render portion
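
  A rough sketch of the server side of this use case, assuming hypothetical helpers accept_client() and handle_request(); it illustrates the thread-per-connection pattern, not the exact server shown in lecture.

      #include <pthread.h>

      /* Hypothetical helpers, declared so the sketch compiles. */
      extern int  accept_client(int listen_fd);   /* waits for the next client connection */
      extern void handle_request(int client_fd);  /* reads request, accesses DB, responds */

      static void *connection_thread(void *arg) {
          int client_fd = (int)(long)arg;
          handle_request(client_fd);              /* get request, read data, respond */
          return NULL;
      }

      void serve_forever(int listen_fd) {
          for (;;) {
              int client_fd = accept_client(listen_fd);
              pthread_t t;
              pthread_create(&t, NULL, connection_thread, (void *)(long)client_fd);
              pthread_detach(t);                  /* thread cleans up after itself when done */
          }
      }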

  8. Kernel Use Cases • Thread for each user process • Thread for sequence of steps in processing I/O • Threads for device drivers • …

  9. Device Drivers • Device-specific code in kernel that interacts directly with device hardware • Supports standard, internal interface • Same kernel I/O system can interact easily with different device drivers • Special device-specific configuration supported with ioctl() syscall • Device drivers are typically divided into two pieces • Top half: accessed in call path from system calls • Implements a set of standard, cross-device calls like open(), close(), read(), write(), ioctl(), etc. • This is kernel's interface to device driver • Top half will start I/O to device, may put thread to sleep until finished • Bottom half: runs as interrupt routine • Gets input or transfers next block of output • May wake sleeping threads if I/O now complete
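
  The top-half / bottom-half split can be sketched as below; every helper name (start_device_io, sleep_on, wake_up, etc.) is invented for illustration and does not match any particular kernel's driver API.

      struct dev_state {
          char buffer[512];
          int  data_ready;            /* set by the bottom half */
      };

      /* Hypothetical kernel helpers, declared only so the sketch compiles. */
      extern void start_device_io(struct dev_state *dev);
      extern void sleep_on(struct dev_state *dev);
      extern void wake_up(struct dev_state *dev);
      extern void fetch_data_from_hardware(struct dev_state *dev);
      extern int  copy_to_caller(char *dst, const char *src, int len);

      /* Top half: runs in the calling thread's context on the read() path. */
      int mydev_read(struct dev_state *dev, char *user_buf, int len) {
          start_device_io(dev);               /* kick off the transfer            */
          while (!dev->data_ready)
              sleep_on(dev);                  /* put the calling thread to sleep  */
          return copy_to_caller(user_buf, dev->buffer, len);
      }

      /* Bottom half: runs as the device's interrupt routine. */
      void mydev_interrupt(struct dev_state *dev) {
          fetch_data_from_hardware(dev);      /* get input / transfer next block  */
          dev->data_ready = 1;
          wake_up(dev);                       /* wake threads whose I/O completed */
      }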

  10. Life Cycle of An I/O Request: User Program → Kernel I/O Subsystem → Device Driver Top Half → Device Driver Bottom Half → Device Hardware

  11. Multithreaded Kernel Code [Figure: kernel address space with globals, heap, and one stack/TCB per kernel thread, alongside two user-level processes whose PCBs each point to per-thread TCBs and stacks over shared code, globals, and heap] • User programs use syscalls to create, join, yield, and exit threads • Kernel handles scheduling and context switching • Simple, but a lot of transitions between user and kernel mode
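
  A user-space view of the create/join/yield/exit lifecycle, sketched with POSIX calls (pthread_create, pthread_join, sched_yield, pthread_exit); on a kernel-thread implementation such as Linux, each of these crosses into the kernel.

      #include <pthread.h>
      #include <sched.h>
      #include <stdio.h>

      static void *child(void *arg) {
          (void)arg;
          printf("child: yielding the CPU\n");
          sched_yield();                           /* ask the kernel scheduler to run someone else */
          pthread_exit((void *)42);                /* terminate only this thread                   */
      }

      int main(void) {
          pthread_t t;
          void *result;
          pthread_create(&t, NULL, child, NULL);   /* create: trap into the kernel      */
          pthread_join(t, &result);                /* join: block until the child exits */
          printf("parent: child exited with %ld\n", (long)result);
          return 0;
      }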

  12. Kernel vs. User-Mode Threads • We have been talking about kernel-supported threads • Each user-level thread maps to one kernel thread • Every thread can run or block independently • One process may have several threads waiting on different events • Examples: Windows, Linux • Downside of kernel-supported threads: a bit expensive • Need to cross into kernel mode to schedule • Solution: user-supported threads

  13. Basic Cost of System Calls • A minimal syscall has ~25x the cost of a function call • Scheduling could be many times more • Streamline syscall processing as much as possible • Other optimizations seek to process as much of the syscall in user space as possible (e.g., Linux vDSO)
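
  One way to see the rough 25x figure for yourself is to time a cheap syscall against a plain function call; a Linux-specific sketch, using syscall(SYS_getpid) to force a real kernel crossing (the exact ratio varies by machine and mitigation settings).

      #include <stdio.h>
      #include <time.h>
      #include <unistd.h>
      #include <sys/syscall.h>

      #define N 1000000

      static int plain_function(int x) { return x + 1; }

      static double seconds_now(void) {
          struct timespec ts;
          clock_gettime(CLOCK_MONOTONIC, &ts);
          return ts.tv_sec + ts.tv_nsec / 1e9;
      }

      int main(void) {
          volatile long sink = 0;
          double t0 = seconds_now();
          for (int i = 0; i < N; i++)
              sink += plain_function(i);            /* stays in user mode                */
          double t1 = seconds_now();
          for (int i = 0; i < N; i++)
              sink += syscall(SYS_getpid);          /* crosses into the kernel each time */
          double t2 = seconds_now();
          printf("function call: %.1f ns, syscall: %.1f ns\n",
                 (t1 - t0) / N * 1e9, (t2 - t1) / N * 1e9);
          return 0;
      }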

  14. User-Mode Threads • Lighter-weight option • Many user-level threads are mapped to a single kernel thread • User program provides scheduler and thread package • Examples: Solaris Green Threads, GNU Portable Threads • Downside of user-mode threads • Multiple threads may not run in parallel on a multicore • When one thread blocks on I/O, all threads block • Option: Scheduler Activations • Have kernel inform user level when a thread blocks …
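
  A minimal flavor of the user-level approach, sketched with the (old but still available on Linux) POSIX ucontext API: the "thread package" here is just two contexts with their own stacks, and a yield is a swapcontext() in user space with no kernel scheduling involved.

      #include <stdio.h>
      #include <ucontext.h>

      static ucontext_t main_ctx, t1_ctx, t2_ctx;
      static char stack1[64 * 1024];   /* each user-level thread gets its own stack */
      static char stack2[64 * 1024];

      static void thread1(void) {
          printf("thread 1: running, yielding to thread 2\n");
          swapcontext(&t1_ctx, &t2_ctx);       /* user-level yield: no kernel scheduler involved */
          printf("thread 1: resumed, finishing\n");
      }

      static void thread2(void) {
          printf("thread 2: running, yielding back to thread 1\n");
          swapcontext(&t2_ctx, &t1_ctx);
      }

      int main(void) {
          getcontext(&t1_ctx);
          t1_ctx.uc_stack.ss_sp = stack1;
          t1_ctx.uc_stack.ss_size = sizeof stack1;
          t1_ctx.uc_link = &main_ctx;          /* return to main when thread 1 finishes */
          makecontext(&t1_ctx, thread1, 0);

          getcontext(&t2_ctx);
          t2_ctx.uc_stack.ss_sp = stack2;
          t2_ctx.uc_stack.ss_size = sizeof stack2;
          t2_ctx.uc_link = &t1_ctx;
          makecontext(&t2_ctx, thread2, 0);

          swapcontext(&main_ctx, &t1_ctx);     /* start running thread 1 */
          printf("main: user-level threads done\n");
          return 0;
      }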

  15. Classification • Most operating systems have either one or many address spaces, and one or many threads per address space • One address space, one thread: MS/DOS, early Macintosh • Many address spaces, one thread each: Traditional UNIX • One address space, many threads: Embedded systems (Geoworks, VxWorks, JavaOS, Pilot(PC), etc.) • Many address spaces, many threads: Mach, OS/2, Linux, Windows 10, Win NT to XP, Solaris, HP-UX, OS X

  16. Putting it Together: Process (Unix) • A process bundles memory (code, globals, heap, stack), resources and I/O state (e.g., file and socket contexts), a sequential stream of instructions, and CPU state (PC, SP, registers), all stored in the OS • Example code run by the process:
      A(int tmp) {
        if (tmp < 2)
          B();
        printf("%d\n", tmp);
      }
      B() { C(); }
      C() { A(2); }
      A(1);

  17. Putting it Together: Processes • Switch overhead: high • CPU state: low • Memory/IO state: high • Process creation: high • Protection • CPU: yes • Memory/IO: yes • Sharing overhead: high (involves at least one context switch) • OS CPU scheduler runs 1 process at a time (1 core)

  18. Putting it Together: Threads • Switch overhead: medium • CPU state: low • Thread creation: medium • Protection • CPU: yes • Memory/IO: no • Sharing overhead: low (ish) (thread switch overhead low) • OS CPU scheduler runs 1 thread at a time (1 core)

  19. Putting it Together: Multi-Cores • Switch overhead: low (only CPU state) • Thread creation: low • Protection • CPU: yes • Memory/IO: no • Sharing overhead: low (thread switch overhead low, may not need to switch at all!) • OS CPU scheduler runs 4 threads at a time (Core 1, Core 2, Core 3, Core 4)

  20. Hyperthreading [Figure: colored blocks show which instructions from Thread 1 and Thread 2 execute each cycle on superscalar, multiprocessor, fine-grained multithreading, and simultaneous multithreading architectures] • Superscalar processors can execute multiple instructions that are independent • Multiprocessors can execute multiple independent threads • Fine-grained multithreading executes two independent threads by switching between them • Hyperthreading duplicates register state to make a second (hardware) "thread" (virtual core) • From OS's point of view, virtual cores are separate CPUs • OS can schedule as many threads at a time as there are virtual cores (but, sub-linear speedup!) • See: http://www.cs.washington.edu/research/smt/index.html
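
  Because each virtual core appears to the OS as a separate CPU, a program can simply ask how many logical CPUs the scheduler sees; a small sketch using sysconf (the _SC_NPROCESSORS_ONLN name is a common glibc/BSD extension rather than strict POSIX).

      #include <stdio.h>
      #include <unistd.h>

      int main(void) {
          /* On a hyperthreaded machine this counts virtual cores, not physical cores. */
          long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
          printf("scheduler sees %ld logical CPUs\n", ncpus);
          return 0;
      }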

  21. Putting it Together: Hyperthreading • Switch overhead between hardware threads: very low (done in hardware) • Contention for ALUs/FPUs may hurt performance • OS CPU scheduler sees the hardware threads (VCores) and runs 8 threads at a time (PCore 1 to PCore 4, two hardware threads each)

  22. Recall: Thread Abstraction • Illusion: Infinite number of processors • Each thread runs on dedicated virtual processor • Reality: few processors, multiple threads running at variable speed • To map arbitrary set of threads to fixed set of cores, kernel implements scheduler [Figure: programmer abstraction of threads 1-5, each on its own processor, vs. physical reality of 2 processors multiplexed among running and ready threads]

  23. Programmer vs. Processor View • Programmer's view: x = x + 1; y = y + x; z = x + 5y; • Possible execution #1: all three statements run back to back • Possible execution #2: after x = x + 1, the thread is suspended, other thread(s) run, then the thread is resumed and runs y = y + x and z = x + 5y • Possible execution #3: after y = y + x, the thread is suspended, other thread(s) run, then the thread is resumed and runs z = x + 5y
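
  These interleavings are exactly why unsynchronized updates to shared variables are a problem; the sketch below usually loses increments because x = x + 1 is a load, an add, and a store that can interleave between threads (output varies from run to run).

      #include <pthread.h>
      #include <stdio.h>

      static long x = 0;                       /* shared by both threads */

      static void *increment_many(void *arg) {
          (void)arg;
          for (int i = 0; i < 1000000; i++)
              x = x + 1;                       /* load, add, store: not atomic */
          return NULL;
      }

      int main(void) {
          pthread_t t1, t2;
          pthread_create(&t1, NULL, increment_many, NULL);
          pthread_create(&t2, NULL, increment_many, NULL);
          pthread_join(t1, NULL);
          pthread_join(t2, NULL);
          printf("x = %ld (expected 2000000)\n", x);   /* typically less than 2000000 */
          return 0;
      }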

  24. Possible Interleavings [Figure: three possible executions, each interleaving Threads 1, 2, and 3 in a different order]
