Execution States with Swapping
Processes, Execution, and State (3F)
4/8/2018
Outline
• 3F. Execution State Model
• 4A. Introduction to Scheduling
• 4B. Non-Preemptive Scheduling
• 4C. Preemptive Scheduling
• 4D. Adaptive Scheduling
• 4E. Scheduling and Performance
• 4F. Real-Time Scheduling
• 9F. Performance under Load

[Diagram: execution state model with swapping — create → ready; ready ⇄ running; running → exit; running → blocked (resource request); blocked → ready (allocate); ready ⇄ swapped (swap out / swap in); blocked → swapped-and-waiting (swap out); swapped-and-waiting → swapped-ready (allocate).]

Un-dispatching a Running Process
• somehow we enter the operating system
  – e.g. via a yield system call or a clock interrupt
• state of the process has already been preserved
  – user-mode PC, PS and registers are already saved on the stack
  – supervisor-mode registers are also saved on the (supervisor-mode) stack
  – descriptions of the address space, pointers to the code, data and stack segments, and all other resources are already stored in the process descriptor
• yield the CPU
  – call the scheduler to select the next process

(Re-)Dispatching a Process
• decision to switch is made in supervisor mode
  – after the state of the current process has been saved
  – the scheduler has been called to yield the CPU
• select the next process to be run
  – get a pointer to its process descriptor(s)
• locate and restore its saved state
  – restore the code, data, and stack segments
  – restore the saved registers, PS, and finally the PC
• and we are now executing in a new process

Blocking and Unblocking Processes
• a process needs an unavailable resource
  – data that has not yet been read in from disk
  – a message that has not yet been sent
  – a lock that has not yet been released
• it must be blocked until the resource is available
  – change the process state to blocked
• un-block when the resource becomes available
  – change the process state to ready
  – notify the scheduler that a change has occurred

Blocking and Unblocking Processes (mechanics)
• blocked/unblocked are merely notes to the scheduler
  – blocked processes are not eligible to be dispatched
  – anyone can set them, anyone can change them
• this usually happens in a resource manager
  – when a process needs an unavailable resource: change the process's scheduling state to "blocked", then call the scheduler and yield the CPU
  – when the required resource becomes available: change the process's scheduling state to "ready"
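The blocking/unblocking protocol above can be sketched as a toy resource manager. This is a minimal illustration, not any real kernel's API; the `PCB` and `ResourceManager` names are invented for the example. Note that block/unblock really is just a note on the descriptor that the resource manager writes:

```python
# Toy model of blocking/unblocking: the scheduling state is just a
# note on the process descriptor, set and cleared by a resource manager.

class PCB:
    """Minimal process descriptor: a name plus a scheduling state."""
    def __init__(self, name):
        self.name = name
        self.state = "ready"          # ready / blocked / running

class ResourceManager:
    def __init__(self):
        self.holder = None            # process that currently owns the resource
        self.waiters = []             # processes blocked waiting for it

    def request(self, proc):
        if self.holder is None:
            self.holder = proc        # resource available: grant it
            return True
        proc.state = "blocked"        # unavailable: mark the caller blocked
        self.waiters.append(proc)     # (it would now yield the CPU)
        return False

    def release(self):
        self.holder = None
        if self.waiters:              # resource is now available:
            nxt = self.waiters.pop(0)
            self.holder = nxt
            nxt.state = "ready"       # un-block: mark ready, notify scheduler

a, b = PCB("A"), PCB("B")
rm = ResourceManager()
rm.request(a)                         # A gets the resource immediately
rm.request(b)                         # B must block
print(b.state)                        # blocked
rm.release()                          # A releases; B is granted it and unblocked
print(b.state)                        # ready
```

In a real system the "notify scheduler" step matters because the scheduler may need to re-examine the ready queue, and possibly swap the newly ready process back in.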

Primary and Secondary Storage
• primary = main (executable) memory
  – primary storage is expensive and very limited
  – only processes in primary storage can be run
• secondary = non-executable (e.g. disk)
  – blocked processes can be moved to secondary storage
  – swap out code, data, stack and non-resident context
  – make room in primary for other "ready" processes
• returning to primary memory
  – the process is copied back when it becomes unblocked

Why We Swap
• make the best use of limited memory
  – a process can only execute if it is in memory
  – the max number of processes is limited by memory size
  – if it isn't READY, it doesn't need to be in memory
• improve CPU utilization
  – when there are no READY processes, the CPU is idle
  – idle CPU time is wasted: reduced throughput
  – we need READY processes in memory
• swapping takes time and consumes I/O
  – so we want to do it as little as possible

Swapping Out
• the process's state is in main memory
  – code and data segments
  – non-resident process descriptor
• copy them out to secondary storage
• update the resident process descriptor
  – process is no longer in memory
  – pointer to its location on the secondary storage device
• freed memory is available for other processes

Swapping Back In
• re-allocate memory to contain the process
  – code and data segments, non-resident process descriptor
  – if we are lucky, some may still be there
• read that data back from secondary storage
• change the process state back to ready
• what about the state of the computations?
  – saved registers are on the stack
  – the user-mode stack is in the saved data segments
  – the supervisor-mode stack is in the non-resident descriptor
• this involves a lot of time and I/O
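The swap-out/swap-in bookkeeping above can be sketched as a toy swapper. All names (`Proc`, `swap_out_one`, the memory size) are illustrative assumptions; a real swapper would copy actual segments and track disk blocks:

```python
# Toy swapper: blocked processes are moved to secondary storage to free
# primary memory; the resident descriptor keeps a pointer to the disk copy.

MEMORY = 100                           # total primary memory (arbitrary units)

class Proc:
    def __init__(self, name, size, state):
        self.name, self.size, self.state = name, size, state
        self.in_memory = True
        self.swap_location = None      # set when swapped out

def used_memory(procs):
    return sum(p.size for p in procs if p.in_memory)

def swap_out_one(procs, disk):
    """Evict a blocked, in-memory process: it isn't READY, so it
    doesn't need to be in memory."""
    for p in procs:
        if p.state == "blocked" and p.in_memory:
            disk[p.name] = "code+data+stack+non-resident descriptor"
            p.in_memory = False
            p.swap_location = p.name   # resident descriptor keeps the pointer
            return p
    return None

def swap_in(p, procs, disk):
    """Copy a newly-unblocked process back before it can run."""
    assert MEMORY - used_memory(procs) >= p.size, "must free memory first"
    del disk[p.swap_location]          # read the data back
    p.in_memory, p.swap_location = True, None
    p.state = "ready"                  # eligible to be dispatched again

disk = {}
procs = [Proc("A", 60, "blocked"), Proc("B", 30, "ready")]
victim = swap_out_one(procs, disk)     # A is blocked: swap it out
print(victim.name, used_memory(procs))
swap_in(victim, procs, disk)           # resource granted: bring A back
print(victim.state, used_memory(procs))
```

The check in `swap_in` reflects the cost noted above: swapping back in may first require freeing memory, which is why swapping is done as little as possible.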
What is CPU Scheduling?
• choosing which ready process to run next
• goals:
  – keeping the CPU productively occupied
  – meeting the user's performance expectations
• a process may block to await
  – completion of a requested I/O operation
  – availability of a requested resource
  – some external event
• or a process can simply yield

Three State Scheduling Model
[Diagram: new processes enter the ready queue; the dispatcher selects a process from the ready queue and dispatches it to the CPU (running); a running process may yield (or be preempted) back to the ready queue, may exit, or may issue a resource request to the resource manager and become blocked; when the resource is granted, the process returns to the ready queue. The dispatcher and context switcher implement the ready ⇄ running transitions.]

Scheduling: Algorithms, Mechanisms and Performance
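The three-state model can be written down as an explicit transition table; a sketch, covering only the three core states plus create/exit (the swapped states from the fuller model are omitted). The table contents follow the diagram above; the function name is invented:

```python
# The three-state scheduling model as an explicit transition table.
# Illegal moves (e.g. blocked -> running without passing through
# ready) are rejected.

LEGAL = {
    ("ready", "running"):   "dispatch",
    ("running", "ready"):   "yield (or preemption)",
    ("running", "blocked"): "resource request",
    ("blocked", "ready"):   "resource granted",
    ("running", "exit"):    "exit",
}

def transition(state, new_state):
    if (state, new_state) not in LEGAL:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

# Walk one process through a typical lifetime: created into ready,
# dispatched, blocks on I/O, unblocked, dispatched again, exits.
s = "ready"
for nxt in ["running", "blocked", "ready", "running", "exit"]:
    s = transition(s, nxt)
print(s)
```

Making the table explicit captures the key invariant: a blocked process can never be dispatched directly; it must first be marked ready.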

Goals and Metrics
• goals should be quantitative and measurable
  – if something is important, it must be measurable
  – if we want "goodness" we must be able to quantify it
  – you cannot optimize what you do not measure
• metrics: the way and units in which we measure
  – choose a characteristic to be measured
    • it must correlate well with goodness/badness of service
    • it must be a characteristic we can measure or compute
  – find a unit to quantify that characteristic
  – define a process for measuring the characteristic

CPU Scheduling: Proposed Metrics
• candidate metric: time to completion (seconds)
  – different processes require different run times
• candidate metric: throughput (procs/second)
  – same problem, not different processes
• candidate metric: response time (milliseconds)
  – some delays are not the scheduler's fault
    • time to complete a service request, wait for a resource
• candidate metric: fairness (standard deviation)
  – per user, per process, are all equally important

Rectified Scheduling Metrics
• mean time to completion (seconds)
  – for a particular job mix (benchmark)
• throughput (operations per second)
  – for a particular activity or job mix (benchmark)
• mean response time (milliseconds)
  – time spent on the ready queue
• overall "goodness"
  – requires a customer-specific weighting function
  – often stated in Service Level Agreements

Different Kinds of Systems have Different Scheduling Goals
• time sharing
  – fast response time to interactive programs
  – each user gets an equal share of the CPU
  – execution favors higher-priority processes
• batch
  – maximize total system throughput
  – delays of individual processes are unimportant
• real-time
  – critical operations must happen on time
  – non-critical operations may not happen at all
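The rectified metrics are straightforward to compute for a toy job mix; a sketch with invented numbers, where "response time" is taken as time spent on the ready queue (turnaround minus run time), as defined above:

```python
# Computing the rectified scheduling metrics for a hypothetical
# benchmark job mix (all jobs arrive at time 0, none blocks).

from statistics import mean, pstdev

jobs = [
    {"run": 4.0, "completion": 4.0},   # completion = turnaround time
    {"run": 2.0, "completion": 6.0},
    {"run": 1.0, "completion": 7.0},
]

mean_completion = mean(j["completion"] for j in jobs)   # seconds
total_time = max(j["completion"] for j in jobs)
throughput = len(jobs) / total_time                     # jobs per second

# time on the ready queue = turnaround time - actual run time
waits = [j["completion"] - j["run"] for j in jobs]
mean_response = mean(waits)
fairness = pstdev(waits)     # standard deviation of waits as a fairness measure

print(round(mean_completion, 2), round(throughput, 2),
      round(mean_response, 2), round(fairness, 2))
```

A lower standard deviation of wait times means the scheduler is treating processes more equally, which is exactly why the slides propose it as the fairness metric.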
Non-Preemptive Scheduling
• the scheduled process runs until it yields the CPU
  – may yield specifically to another process
  – may merely yield to the "next" process
• works well for simple systems
  – small numbers of processes
  – with natural producer/consumer relationships
• depends on each process to voluntarily yield
  – a piggy process can starve others
  – a buggy process can lock up the entire system

Non-Preemptive: First-In-First-Out
• algorithm:
  – run the first process in the queue until it blocks or yields
• advantages:
  – very simple to implement
  – seems intuitively fair
  – all processes will eventually be served
• problems:
  – highly variable response time (delays)
  – a long task can force many others to wait (the convoy effect)
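The convoy effect is easy to demonstrate with a small simulation. A sketch, assuming all jobs arrive at time 0 and run to completion without blocking; the function name is invented:

```python
# FIFO (run-to-completion) simulation showing the convoy effect:
# a long job at the head of the queue inflates everyone's wait time.

def fifo_waits(run_times):
    """Wait time of each job = total run time of the jobs ahead of it
    (all jobs arrive at time 0 and none blocks)."""
    waits, clock = [], 0
    for t in run_times:
        waits.append(clock)            # job starts when its predecessors finish
        clock += t
    return waits

long_first = fifo_waits([10, 1, 1, 1])   # convoy: long job at the head
short_first = fifo_waits([1, 1, 1, 10])  # same jobs, short ones first
print(sum(long_first) / 4)               # mean wait with the convoy
print(sum(short_first) / 4)              # mean wait without it
```

The same four jobs finish at the same total time either way, but mean wait is 8.25 time units with the long job first versus 1.5 with it last: FIFO's weakness is not throughput but response-time variance.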
