
Processes, Execution, and State (Operating Systems Principles, Lecture 4)



4/6/2016

Operating Systems Principles
Scheduling: Algorithms, Mechanisms, Performance
  4A. Introduction to Scheduling
  4B. Non-Preemptive Scheduling
  4C. Preemptive Scheduling
  4D. Adaptive Scheduling
  4E. Introduction to System Performance
Mark Kampe (markk@cs.ucla.edu)

What is CPU Scheduling?
• choosing which ready process to run next
• goals:
  – keeping the CPU productively occupied
  – meeting the user's performance expectations
(figure: the dispatcher takes processes from the ready queue and gives them the CPU; a running process yields or is preempted back onto the ready queue, or waits on the resource manager until its request is granted; new processes enter the ready queue)

Goals and Metrics
• goals should be quantitative and measurable
  – if something is important, it must be measurable
  – if we want "goodness", we must be able to quantify it
  – you cannot optimize what you do not measure
• metrics ... the way and units in which we measure
  – choose a characteristic to be measured
    • it must correlate well with goodness/badness of service
    • it must be a characteristic we can measure or compute
  – find a unit to quantify that characteristic
  – define a process for measuring the characteristic

CPU Scheduling: Proposed Metrics
• candidate metric: time to completion (seconds)
  – different processes require different run times
• candidate metric: throughput (procs/second)
  – same problem: throughput depends on the job mix
• candidate metric: response time (milliseconds)
  – some delays are not the scheduler's fault
    • time to complete a service request, wait for a resource
• candidate metric: fairness (standard deviation)
  – per user, per process ... are all equally important?

Rectified Scheduling Metrics
• mean time to completion (seconds)
  – for a particular job mix (benchmark)
• throughput (operations per second)
  – for a particular activity or job mix (benchmark)
• mean response time (milliseconds)
  – time spent on the ready queue
• overall "goodness"
  – requires a customer-specific weighting function
  – often stated in Service Level Agreements
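Given a recorded schedule, the rectified metrics above are straightforward to compute. A minimal Python sketch, assuming all jobs arrive at time zero and run back-to-back (the job mix and function name are illustrative, not from the slides):

```python
# Compute mean time to completion, throughput, and mean response time
# for a recorded non-preemptive schedule. All numbers are illustrative.

def schedule_metrics(run_times, order):
    """Jobs all arrive at t=0 and run back-to-back in `order`."""
    t = 0
    completion, response = [], []
    for job in order:
        response.append(t)          # time spent waiting on the ready queue
        t += run_times[job]
        completion.append(t)        # time from arrival to completion
    return {
        "mean_completion": sum(completion) / len(completion),
        "throughput": len(order) / t,   # jobs per time unit
        "mean_response": sum(response) / len(response),
    }

run_times = {"A": 10, "B": 10, "C": 100}    # hypothetical job mix
m = schedule_metrics(run_times, ["A", "B", "C"])
print(m)   # mean_completion = (10 + 20 + 120)/3 = 50.0
```

Note that all three metrics are only meaningful relative to this particular job mix, which is exactly why the slides insist on a named benchmark.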

Basic Scheduling State Model
(figure: create → ready; ready → running via allocate; running → exit; running → blocked via request; blocked → ready when the request is satisfied)
• a process may block to await
  – completion of a requested I/O operation
  – availability of a requested resource
  – some external event
• or a process can simply yield

Non-Preemptive Scheduling
• scheduled process runs until it yields the CPU
  – may yield specifically to another process
  – may merely yield to the "next" process
• works well for simple systems
  – small numbers of processes
  – with natural producer/consumer relationships
• depends on each process to voluntarily yield
  – a piggy process can starve others
  – a buggy process can lock up the entire system

Non-Preemptive: First-In-First-Out
• Algorithm:
  – run the first process in the queue until it blocks or yields
• Advantages:
  – very simple to implement
  – seems intuitively fair
  – all processes will eventually be served
• Problems:
  – highly variable response time (delays)
  – a long task can force many others to wait (convoy)

Example: First In First Out
(figure: two Gantt charts of the same three jobs, 0 to 120 ms)
• short jobs first: completions at 10, 20, and 120; Tav = (10 + 20 + 120)/3 = 50
• long job first: completions at 100, 110, and 120; Tav = (100 + 110 + 120)/3 = 110

Non-Preemptive: Shortest Job First
• Algorithm:
  – all processes declare their expected run time
  – run the shortest until it blocks or yields
• Advantages:
  – likely to yield the fastest response time
• Problems:
  – some processes may face unbounded wait times
  – ability to correctly estimate required run time
• Is this fair? Is this even "correct" scheduling?

Starvation
• unbounded waiting times
  – not merely a CPU scheduling issue
  – it can happen with any controlled resource
• caused by case-by-case discrimination
  – where it is possible to lose every time
• ways to prevent it
  – strict (FIFO) queuing of requests
    • credit for time spent waiting is equivalent
  – ensure that individual queues cannot be starved
  – input metering to limit queue lengths
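The convoy effect in the First-In-First-Out example and SJF's remedy can be reproduced in a few lines; the job lengths (10, 10, and 100 ms) are inferred from the completion times in the example above:

```python
def completion_times(run_times):
    """Non-preemptive: all jobs arrive at t=0 and run in list order."""
    t, done = 0, []
    for r in run_times:
        t += r
        done.append(t)
    return done

jobs = [100, 10, 10]                   # the long job arrives first
fifo = completion_times(jobs)          # [100, 110, 120] -> mean 110
sjf = completion_times(sorted(jobs))   # [10, 20, 120]   -> mean 50
print(sum(fifo) / 3, sum(sjf) / 3)
```

The same three jobs do the same total work either way; only the ordering changes, and the mean time to completion drops from 110 to 50.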

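One way to realize the "credit for time spent waiting" idea from the starvation slide is to age SJF's cost function, so that a long-waiting job eventually beats any fresh short job. A minimal sketch, with a hypothetical `credit_rate` tuning parameter:

```python
def pick_next(jobs, now, credit_rate=1.0):
    """SJF with aging: effective cost = declared run time minus credit
    for time already spent waiting. credit_rate is an assumed knob."""
    def effective(job):
        run_time, arrived = job
        return run_time - credit_rate * (now - arrived)
    return min(jobs, key=effective)

# a 100-unit job that has waited since t=0 eventually beats a fresh 10-unit job
jobs = [(100, 0), (10, 95)]
print(pick_next(jobs, now=100))   # -> (100, 0)
```

With `credit_rate > 0` waiting times are bounded, which directly addresses SJF's "unbounded wait times" problem at some cost in mean response time.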
Non-Preemptive: Priority
• Algorithm:
  – all processes are given a priority
  – run the highest priority until it blocks or yields
• Advantages:
  – users control the assignment of priorities
  – can optimize a per-customer "goodness" function
• Problems:
  – still subject to (less arbitrary) starvation
  – per-process may not be fine enough control

Preemptive Scheduling
• a process can be forced to yield at any time
  – if a higher-priority process becomes ready
    • perhaps as a result of an I/O completion interrupt
  – if the running process's priority is lowered
• Advantages
  – enables enforced "fair share" scheduling
• Problems
  – introduces gratuitous context switches
  – creates potential resource sharing problems

Forcing Processes to Yield
• need to take the CPU away from a process
  – e.g. the process makes a system call, or a clock interrupt occurs
• consult the scheduler before returning to the process
  – if any ready process has had its priority raised
  – if any process has been awakened
  – if the current process has had its priority lowered
• the scheduler finds the highest-priority ready process
  – if it is the current process, return as usual
  – if not, yield on behalf of the current process

Preemptive: Round-Robin
• Algorithm
  – processes are run in (circular) queue order
  – each process is given a nominal time-slice
  – a timer interrupts the process if its time-slice expires
• Advantages
  – greatly reduced time from ready to running
  – intuitively fair
• Problems
  – some processes will need many time-slices
  – extra interrupts/context-switches add overhead

Example: Round-Robin
(figure: the same three jobs, 0 to 120 ms, run to completion in turn vs. time-sliced)
• run-to-completion: first dispatched at 0, 30, and 60; Trsp = (0 + 30 + 60)/3 = 30
• round-robin time-slicing: first dispatched at 0, 11, and 22; Trsp = (0 + 11 + 22)/3 = 11

Costs of an extra context-switch
• entering the OS
  – taking the interrupt, saving registers, calling the scheduler
• cycles to choose who to run
  – the scheduler/dispatcher does work to choose
• moving OS context to the new process
  – switch process descriptor, kernel stack
• switching process address spaces
  – map out the old process, map in the new process
• losing hard-earned L1 and L2 cache contents
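A short round-robin simulation makes both the response-time improvement and the context-switch cost concrete. The 10 ms quantum and 1 ms switch cost are assumptions chosen to reproduce the example's first-dispatch times of 0, 11, and 22 ms; they are not stated on the slide:

```python
from collections import deque

def rr_first_run_times(bursts, quantum, switch_cost):
    """Time at which each process first gets the CPU under round-robin.
    All processes are assumed ready at t=0."""
    ready = deque(range(len(bursts)))
    remaining = list(bursts)
    first_run = [None] * len(bursts)
    t = 0
    while ready:
        i = ready.popleft()
        if first_run[i] is None:
            first_run[i] = t
        ran = min(quantum, remaining[i])
        remaining[i] -= ran
        t += ran
        if remaining[i] > 0:
            ready.append(i)        # slice expired: back of the queue
        if ready:
            t += switch_cost       # pay for the switch to the next process

    return first_run

print(rr_first_run_times([30, 30, 60], quantum=10, switch_cost=1))
# -> [0, 11, 22], mean response time 11
```

Setting the quantum larger than any burst degenerates to run-to-completion order (first dispatches at 0, 30, and 60), matching the non-preemptive chart.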

So which approach is best?
• preemptive has better response time
  – but what should we choose for our time-slice?
• non-preemptive has lower overhead
  – but how should we order the processes?
• there is no one "best" algorithm
  – performance depends on the specific job mix
  – goodness is measured relative to specific goals
• a good scheduler must be adaptive
  – responding automatically to changing loads
  – configurable to meet different requirements

Response Time/Throughput Trade-off
(figure: throughput and response time plotted against the ratio of time-slice to context-switch overhead, from 1000 down to 1)

The "Natural" Time-Slice
• CPU share = time_slice x slices/second
  – 2% = 20ms/sec = 2ms/slice x 10 slices/sec
  – 2% = 20ms/sec = 5ms/slice x 4 slices/sec
• context switches are far from free
  – they waste otherwise useful cycles
  – they introduce delay into useful computations
• natural rescheduling interval
  – when a process blocks for resources or I/O
  – the optimal time-slice would be based on this period
• the natural time-slice is different for each process

Dynamic Multi-Queue Scheduling
• create multiple ready queues
  – some with short time-slices that run more often
  – some with long time-slices that run infrequently
  – different queues may get different CPU shares
• Advantages:
  – response time very similar to Round-Robin
  – relatively few gratuitous preemptions
• Problem:
  – how do we know where a process belongs?

Dynamic Equilibrium
• natural equilibria are seldom calibrated
• usually the net result of
  – competing processes
  – negative feedback
• once set in place, these processes
  – are self-calibrating
  – automatically adapt to changing circumstances
• the tuning is in the rate and feedback constants
  – avoid over-correction, ensure convergence

Dynamic Multi-Queue Scheduling
(figure: the scheduler divides the CPU among four ready queues; #tse and #yield are per-queue counts of time-slice expirations and voluntary yields)

  share   queue                  ts max   #tse   #yield
  20%     real time queue        ∞        ∞      ∞
  50%     short quantum queue    500us    10     ∞
  25%     medium quantum queue   2ms      50     10
  5%      long quantum queue     5ms      ∞      20
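The per-queue counters in the figure suggest a feedback rule: demote a process after repeated time-slice expirations, promote it after repeated voluntary yields. A minimal sketch of that rule, assuming made-up queue parameters and thresholds rather than the ones in the figure:

```python
class MultiQueueScheduler:
    """Toy multi-level feedback: each level has a time-slice and
    demotion/promotion thresholds. CPU-bound processes drift down to
    longer-quantum queues; interactive processes drift back up."""

    def __init__(self, time_slices, demote_after, promote_after):
        self.time_slices = time_slices      # quantum per queue level
        self.demote_after = demote_after    # slice expirations before demotion
        self.promote_after = promote_after  # voluntary yields before promotion
        self.level = {}                     # pid -> current queue level
        self.expiries = {}
        self.yields = {}

    def admit(self, pid, level=0):
        self.level[pid] = level
        self.expiries[pid] = self.yields[pid] = 0

    def on_slice_expired(self, pid):        # process used its whole quantum
        self.expiries[pid] += 1
        if self.expiries[pid] >= self.demote_after:
            self.level[pid] = min(self.level[pid] + 1,
                                  len(self.time_slices) - 1)
            self.expiries[pid] = 0

    def on_yield(self, pid):                # process gave up the CPU early
        self.yields[pid] += 1
        if self.yields[pid] >= self.promote_after:
            self.level[pid] = max(self.level[pid] - 1, 0)
            self.yields[pid] = 0

sched = MultiQueueScheduler(time_slices=[0.5, 2, 5],
                            demote_after=3, promote_after=3)
sched.admit("cpu_hog")
for _ in range(3):
    sched.on_slice_expired("cpu_hog")       # always burns its full slice
print(sched.level["cpu_hog"])               # demoted from queue 0 to queue 1
```

This is the negative feedback from the "Dynamic Equilibrium" slide in miniature: the counters answer "how do we know where a process belongs" by letting its own behavior decide.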

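The share arithmetic from "The 'Natural' Time-Slice" slide, in code: the same 2% CPU share can be delivered as many short slices or a few long ones, and only the number of context switches per second differs.

```python
def cpu_share(slice_ms, slices_per_sec):
    """CPU share = time_slice x slices/second, as a fraction of one second."""
    return slice_ms * slices_per_sec / 1000.0

# the same 2% share (20 ms/sec), delivered two different ways:
print(cpu_share(2, 10))   # 2 ms slices, 10 per second -> 0.02
print(cpu_share(5, 4))    # 5 ms slices,  4 per second -> 0.02, fewer switches
```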