Processes and Non-Preemptive Scheduling - PowerPoint PPT Presentation



1. Processes and Non-Preemptive Scheduling

Otto J. Anshus, University of Tromsø / University of Oslo
Tore Larsen, University of Tromsø
Kai Li, Princeton University
28.08.03

2. An aside on concurrency

• Timing and sequence of events are key concurrency issues
• We will study classical OS concurrency issues, including implementation and use of classic OS mechanisms to support concurrency
• A later course on parallel programming may revisit this material
• In a later course on distributed systems you may want to use formal tools to understand and model timing and sequencing better
• Single-CPU computers are designed to uphold a simple and rigid model of sequencing and timing. "Under the hood," even single-CPU systems are distributed in nature, and are carefully organized to uphold strict external requirements

3. Process

• An instance of a program under execution
  – Program specifying (logical) control-flow (thread)
  – Data
  – Private address space
  – Open files
  – Running environment
• The most important operating system concept
• Used for supporting the concurrent execution of independent or cooperating program instances
• Used to structure applications and systems

4. Supporting and Using Processes

• Multiprogramming
  – Supporting concurrent execution (parallel or transparently interleaved) of multiple processes (or threads)
  – Achieved by process or context switching: switching the CPU(s) back and forth among the individual processes, keeping track of each process's progress
• Concurrent programs
  – Programs (or threads) that exploit multiprogramming for some purpose (e.g. performance, structure)
  – Independent or cooperating
  – Operating systems are an important application area for concurrent programming; there are many others (event-driven programs, servers, ++)

5. Implementing processes

• OS needs to keep track of all processes
  – Keep track of each process's progress
  – Parent process
  – Metadata (priorities etc.) used by OS
  – Memory management
  – File management
• Process table with one entry (Process Control Block) per process
• Will also align processes in queues

6. Primitives of Processes

• Creation and termination
  – fork, exec, wait, kill
• Signals
  – Action, Return, Handler
• Operations
  – block, yield
• Synchronization
  – We will talk about this later

7. fork (UNIX)

• Spawns a new process (with new PID)
• Called in parent process
• Returns in parent and child process
  – Return value in parent is child's PID
  – Return value in child is '0'
• Child gets a duplicate, but separate, copy of parent's user-level virtual address space
• Child gets an identical copy of parent's open file descriptors

8. fork, exec, wait, kill

• Return value of fork is tested for error, zero, or positive
  – Zero: this is the child process
    • Typically redirect standard files, and
    • Call exec to load a new program instead of the old
  – Positive: this is the parent process
• wait: parent waits for child's termination
  – wait before corresponding exit: parent blocks until exit
  – exit before corresponding wait: child becomes a zombie (un-dead) until wait
• kill: specified process terminates
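The fork/wait pattern of slides 7-8 can be sketched in Python, whose os module wraps the UNIX calls directly. This is a minimal POSIX-only sketch; the exit status 7 is an arbitrary illustration, not from the slides:

```python
import os

def spawn_and_wait():
    """Fork a child, let it terminate, and reap it in the parent."""
    pid = os.fork()          # returns 0 in the child, the child's PID in the parent
    if pid == 0:
        # Child: a real program would typically redirect standard files
        # and call os.exec* here to load a new program instead of the old.
        os._exit(7)          # terminate immediately with status 7
    # Parent: block until the child exits; without this wait the
    # terminated child would linger as a zombie.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)

if __name__ == "__main__":
    print(spawn_and_wait())  # prints 7
```

Note the dual return value: the same fork call returns twice, once in each process, and the zero/positive test is what separates the child's code path from the parent's.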

9. Context Switching Issues

• Performance
  – Should be no more than a few microseconds
  – Most time is spent SAVING and RESTORING the context of processes
  – The less processor state to save, the better
  – Pentium has a multitasking mechanism, but SW can be faster if it saves less of the state
• How to save time on the copying of context state?
  – Re-map (address) instead of copy (data)
• Where to store kernel data structures "shared" by all processes
  – Memory
• How to give processes a fair share of CPU time
  – Preemptive scheduling; the time-slice defines the maximum time interval between scheduling decisions

10. When may OS switch contexts?

• Only when OS runs
• Events potentially causing a context switch:
  – Process created (fork)                      [non-preemptive scheduling]
  – Process exits (exit)
  – Process blocks implicitly (I/O, IPC)
  – Process blocks explicitly (yield)
  – User or system level trap
    • HW
    • SW: user-level system call
    • Exception                                 [preemptive scheduling]
  – Kernel preempts current process
• Potential scheduling decision at any of the above
• + "Timer" to be able to limit running time of processes

11. Process State Transition (MULTIPROGRAMMING)

• Uniprocessor: interleaving ("pseudoparallelism")
• Multiprocessor: overlapping ("true parallelism")

[State diagram: Create a process → Ready; Ready → Running via Scheduler dispatch; Running → Ready via Yield (call scheduler); Running → Blocked via Block for resource (call scheduler); Blocked → Ready when resource becomes available (move to ready queue); Running → Terminate (call scheduler).]

12. Example Process State Transitions of User Level Processes: Non-Preemptive Scheduling

[Diagram: user-level processes P1-P4 above the KERNEL; a trap enters the Trap Handler and Service routine (which may call the scheduler); the Scheduler and Dispatcher operate on the ReadyQueue and BlockedQueue of memory-resident PCBs (current PCB marked); the Trap Return Handler resumes the current process. The same Running/Ready/Blocked transitions as slide 11 apply.]
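The non-preemptive state machine of slide 11 can be written down as a small transition table. This is a hypothetical sketch; the lowercase state and event names are mine, not from the slides:

```python
# Transition table for the Running/Ready/Blocked diagram (slide 11).
# Keys are (state, event) pairs; values are the next state.
TRANSITIONS = {
    ("ready",   "dispatch"):           "running",
    ("running", "yield"):              "ready",
    ("running", "block"):              "blocked",
    ("running", "terminate"):          "terminated",
    ("blocked", "resource_available"): "ready",
}

def step(state, event):
    """Apply one event; the diagram allows no other transitions."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"no transition for {event!r} in state {state!r}")

if __name__ == "__main__":
    s = "ready"
    for e in ("dispatch", "block", "resource_available", "dispatch", "terminate"):
        s = step(s, e)
    print(s)  # prints terminated
```

Note that every arrow leaving Running goes through the scheduler, which is exactly the "only when OS runs" point of slide 10: under non-preemptive scheduling the running process must itself trap into the kernel before a switch can happen.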

13. Stacks

• Remember: we have only one copy of the kernel in memory
  => all processes "execute" the same kernel code
  => must have a kernel stack for each process
• Used for storing parameters, return address, and locally created variables in frames or activation records
• Each process has
  – a user stack
  – a kernel stack
    • always empty when the process is in user mode executing instructions
• Does the kernel need its own stack(s)?

14. Scheduler

• Non-preemptive scheduler invoked by explicit block or yield calls, possibly also fork and exit
• The simplest form:
  Scheduler:
    save current process state (store to PCB)
    choose next process to run
    dispatch (load PCB and run)
• Does this work?
• PCB (or something like it) must be resident in memory
• Remember the stacks

15. More on Scheduler

• Should the scheduler use a special stack?
  – Yes,
    • because a user process can overflow and it would require another stack to deal with stack overflow
    • because it makes it simpler to pop and push to rebuild a process's context
  – Must have a stack when booting...
• Should the scheduler simply be a "kernel process"?
  – You can view it that way because it has a stack, code, and its own data structure
  – This process always runs when there is no user process
• "Idle" process
  – In kernel or at user level?

16. Win NT Idle

• No runnable thread exists on the processor
  – Dispatch Idle Process (really a thread)
• Idle is really a dispatcher in the kernel
  – Enable interrupts; receive pending interrupts; disable interrupts
  – Analyze interrupts; dispatch a thread if so needed
  – Check for deferred work; dispatch a thread if so needed
  – Perform power management
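The simplest-form scheduler of slide 14 (save state, choose next, dispatch) can be mimicked in miniature with Python generators standing in for processes: a generator's suspended frame plays the role of its saved PCB state, and each yield models the explicit yield() call back into the kernel. This is a toy sketch of the control structure only, not of how a real kernel saves PCBs or switches stacks:

```python
from collections import deque

def task(name, steps):
    """A 'process': does some work, then explicitly yields to the scheduler."""
    for i in range(steps):
        yield f"{name}:{i}"

def run(procs):
    """Non-preemptive round-robin scheduler sketch."""
    ready = deque(procs)               # the ready queue
    trace = []
    while ready:
        proc = ready.popleft()         # choose next process to run
        try:
            trace.append(next(proc))   # dispatch: run until the next yield
            ready.append(proc)         # it yielded: back to the ready queue
        except StopIteration:
            pass                       # it terminated: drop it
    return trace

if __name__ == "__main__":
    print(run([task("A", 2), task("B", 1)]))  # ['A:0', 'B:0', 'A:1']
```

Because nothing here ever interrupts a running task, a task that loops without yielding would starve every other task, which is precisely the weakness of non-preemptive scheduling that the "+Timer" remark on slide 10 addresses.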

17. Threads and Processes

[Diagram: a traditional process; processes in individual address spaces; kernel threads vs. user-level threads; the kernel address space and kernel level.]

18. Project OpSys

[Diagram slide for the course project.]

19. Where Should PCB Be Saved?

• Save the PCB on the user stack
  – Many processors have a special instruction to do it efficiently
  – But, need to deal with the overflow problem
  – When the process terminates, the PCB vanishes
• Save the PCB on the kernel heap data structure
  – May not be as efficient as saving it on the stack
  – But, it is very flexible and there are no other problems

20. Job swapping

• The processes competing for resources may have combined demands that result in poor system performance
• Reducing the degree of multiprogramming by moving some processes to disk, and temporarily not considering them for execution, may be a strategy to enhance overall system performance
• From which state(s), to which state(s)? Try extending the earlier state-transition examples using two suspended states.
• The term is also used in a slightly different setting, see MOS Ch. 4.2 pp. 196-197
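Slide 20 suggests extending the state diagram with two suspended states. One plausible reading, with swapped-out counterparts of Ready and Blocked as in classic textbook treatments, is sketched below; the state and event names are assumptions, not taken from the slides:

```python
# Running/Ready/Blocked extended with two suspended (swapped-out) states.
SWAP_TRANSITIONS = {
    ("ready",             "dispatch"):           "running",
    ("running",           "yield"):              "ready",
    ("running",           "block"):              "blocked",
    ("running",           "terminate"):          "terminated",
    ("blocked",           "resource_available"): "ready",
    # Swapping: memory-resident processes move to disk and back.
    ("ready",             "swap_out"):           "ready_suspended",
    ("blocked",           "swap_out"):           "blocked_suspended",
    ("ready_suspended",   "swap_in"):            "ready",
    ("blocked_suspended", "swap_in"):            "blocked",
    # A suspended-blocked process whose resource arrives stays on disk
    # but is now eligible to run once swapped back in.
    ("blocked_suspended", "resource_available"): "ready_suspended",
}

def swap_step(state, event):
    """Apply one event in the swap-aware state machine."""
    return SWAP_TRANSITIONS[(state, event)]
```

The key design point is the last entry: resource arrival and memory residency are independent, so a blocked-suspended process must pass through ready-suspended rather than jumping straight to Ready.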

21. Add Job Swapping to State Transition Diagram

[State diagram: the Running/Ready/Blocked diagram of slide 11 extended with a Swapped state. Transitions: Create a process → Ready; Ready → Running via Scheduler dispatch; Running → Ready via Yield (call scheduler); Running → Blocked via Block for resource (call scheduler); Blocked → Ready when resource becomes available (move to ready queue); Running → Terminate (call scheduler); Swap out → Swapped; Swapped → Ready via Swap in.]

22. Job Swapping, ii

[Diagram: the CPU is fed from the Ready Queue; processes blocked on I/O sit in I/O queues (I/O Waiting); partially executed swapped-out processes are swapped out to disk and swapped back in; Terminate leaves the system.]

23. Concurrent Programming w/ Processes

• Clean programming model
  – File tables are shared
  – User address space is private
• Processes are protected from each other
• Sharing requires some sort of IPC (InterProcess Communication)
• Slower execution
  – Process switch, process control expensive
  – IPC expensive

24. I/O Multiplexing: More than one State Machine per Process

• select blocks for any of multiple events
• Handle (one of the events) that unblocks select
  – Advance appropriate state machine
• Repeat

25. Concurrent prog. w/ I/O Multiplexing

• Establishes several control flows (state machines) in one process
• Uses select
• Offers the application programmer more control than the process model (How?)
• Easy sharing of data among state machines
• More efficient (no process switch to switch between control flows in the same process)
• Difficult programming
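The select loop of slides 24-25 can be sketched with two pipes standing in for I/O connections, each driving its own tiny state machine (here just a byte counter). A hypothetical, POSIX-only toy; the payloads are arbitrary:

```python
import os
import select

def multiplex():
    """One process, several control flows: select blocks for any event,
    and the handler advances the state machine of whichever fd fired."""
    r1, w1 = os.pipe()
    r2, w2 = os.pipe()
    os.write(w1, b"ab")                  # connection 1 delivers 2 bytes
    os.close(w1)
    os.write(w2, b"xyz")                 # connection 2 delivers 3 bytes
    os.close(w2)

    counts = {r1: 0, r2: 0}              # per-connection state machines
    open_fds = {r1, r2}
    while open_fds:
        readable, _, _ = select.select(list(open_fds), [], [])
        for fd in readable:              # handle the event that unblocked select
            data = os.read(fd, 1)
            if data:
                counts[fd] += 1          # advance this connection's state machine
            else:                        # EOF: the writer closed its end
                os.close(fd)
                open_fds.discard(fd)
    return counts[r1], counts[r2]

if __name__ == "__main__":
    print(multiplex())
```

The per-fd dictionaries are what the slides mean by "easy sharing of data among state machines": all connection state lives in one address space, with no IPC and no process switch between control flows.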
