Department of Computer Science
Institute for System Architecture, Operating Systems Group

THREADS

MICHAEL ROITZSCH

ADMINISTRIVIA

■ due to date and room clashes we have to divert from our regular schedule:
  ■ there was no exercise last week
  ■ there is no exercise on Oct 31st
    ■ moved to Nov 7th
  ■ there is no exercise tomorrow
■ alternative 1: this Friday, 1.00 PM, INF E06
■ alternative 2: postpone and reorder

EXERCISES ALTERNATIVE 2

■ Nov 7th: practical „getting started“ exercise (room will be announced)
■ Nov 14th: Brinch Hansen paper reading
■ Nov 28th: paper reading
■ Dec 5th: practical exercise „IPC“
■ Dec 12th: paper reading
■ Jan 9th: practical exercise „Bastei“
■ Jan 23rd: paper reading

PAPER READING

■ read the paper for the exercise:
  Per Brinch Hansen: The nucleus of a multiprogramming system
  ■ you find the link on the course website
■ understand it
■ be prepared to summarize it
■ be prepared to discuss it

RECAP
MICROKERNEL ABSTRACTIONS

■ kernel:
  ■ provides system foundation
  ■ usually runs in privileged CPU mode
■ microkernel:
  ■ kernel provides mechanisms, no policies
  ■ most functionality implemented in user mode, unless dictated otherwise by
    ■ security
    ■ performance

  Resource        Mechanism
  CPU             Thread
  Memory          Task
  Communication   IPC
  Platform        Virtual Machine

VIRTUAL MACHINE

■ provides an exclusive instance of a full system platform
■ may be a synthetic platform (bytecode)
■ full software implementations
■ hardware-assisted implementations in the kernel (hypervisor)
■ see virtualization lecture on Dec 11th

IPC

■ inter-process communication
  ■ between threads
■ two-way agreement, synchronous
■ short IPC: register only
■ long IPC: direct and indirect
■ memory mapping with flexpages
■ see communication lecture on Nov 6th
■ (a simplified C sketch of short vs. long messages follows after the ALTERNATIVES slide)

TASK

■ (virtual) address space
■ unit of memory management
■ provides spatial isolation
■ common memory content can be shared
  ■ shared libraries
  ■ kernel
■ see memory lecture next week

[Figure: address space layout from 0 to 4G]

ALTERNATIVES

[Figure: comparison of Monolith, Exokernel, Microkernel, and Software Isolation, with user, shared, system, and privileged parts]
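To make the short vs. long IPC distinction above concrete, here is a minimal, hypothetical C sketch. The structures and field names (short_msg, buffer_desc, long_msg) are illustrative assumptions only and do not reflect the actual L4 message format or ABI; real long IPC would additionally carry flexpage (mapping) items.

    /* Hypothetical sketch only: NOT the L4 IPC ABI. It merely illustrates
     * the idea behind "short" (register-only) vs. "long" IPC messages. */
    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* short IPC: the payload fits into a few register-sized words that the
     * kernel transfers directly between sender and receiver registers */
    struct short_msg {
        uintptr_t word[2];          /* e.g. two machine words */
    };

    /* long IPC: the message additionally describes memory that the kernel
     * copies into the receiver (direct/indirect strings) */
    struct buffer_desc {
        void   *base;
        size_t  length;
    };

    struct long_msg {
        struct short_msg   regs;    /* register part is always present   */
        struct buffer_desc string;  /* buffer to be copied by the kernel */
    };

    int main(void)
    {
        char payload[] = "hello";
        struct short_msg s = { .word = { 42, 0 } };
        struct long_msg  l = { .regs = s, .string = { payload, sizeof payload } };

        printf("short message: %zu bytes of registers\n", sizeof s);
        printf("long message:  %zu bytes of descriptor\n", sizeof l);
        return 0;
    }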
THREADS

BASICS

■ abstraction of code execution
■ unit of scheduling
■ provides temporal isolation
■ typically requires a stack
■ thread state:
  ■ instruction pointer
  ■ stack pointer
  ■ CPU registers, flags

[Figure: CPU (IP, SP, Regs) referencing Stack and Code]

STACK

■ storage for function-local data
  ■ local variables
  ■ return address
■ one stack frame per function
■ grows and shrinks dynamically
■ grows from high to low addresses

[Figure: Stack Frame 1, Stack Frame 2, Stack Frame 3]

KERNEL'S VIEW

■ maps user-level threads to kernel-level threads
  ■ usually a 1:1 mapping
  ■ threads can be implemented in userland
■ assigns threads to hardware
  ■ one kernel-level thread per logical CPU
  ■ with hyper-threading & multicore, we have more than one hardware thread now

KERNEL ENTRY

■ thread can enter kernel:
  ■ voluntarily
    ■ system call
  ■ forced
    ■ interrupt
    ■ exception

[Figure: CPU (IP, SP, Regs) pointing to user Stack and Code]

KERNEL ENTRY

■ IP and SP point into kernel
■ user CPU state stored in TCB
  ■ old IP and SP
  ■ registers
  ■ flags
  ■ FPU state
  ■ MMX, SSE

[Figure: CPU (IP, SP, Regs) now pointing to kernel Stack and Code]
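As a concrete illustration of the state saved on kernel entry, here is a minimal C sketch of a thread control block. The layout and field sizes are assumptions for illustration; this is not the actual Fiasco/L4 TCB.

    /* Illustrative only, not the actual Fiasco/L4 TCB layout: the kind of
     * user CPU state a kernel saves on entry and restores on exit. */
    #include <stdint.h>

    struct user_state {
        uintptr_t ip;            /* old user instruction pointer           */
        uintptr_t sp;            /* old user stack pointer                 */
        uintptr_t gpr[8];        /* general-purpose registers              */
        uintptr_t flags;         /* flags / condition-code register        */
        uint8_t   fpu[512];      /* FPU/MMX/SSE save area (FXSAVE-sized)   */
    };

    struct tcb {                 /* kernel object, one per thread          */
        struct user_state state; /* valid while the thread is not running  */
        void *utcb;              /* user-visible, untrusted part (UTCB)    */
    };

The utcb pointer anticipates the KTCB/UTCB split described on the next slide.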
TCB

■ thread control block
■ kernel object, one per thread
■ stores thread's state while it is not running
■ untrusted parts can be stored in user space
  ■ separation into KTCB and UTCB

KERNEL EXIT

■ once the kernel has provided its services, it returns to userland
  ■ by restoring the saved user IP and SP
■ to the same thread or a different thread
  ■ the old thread may be blocking now
  ■ waiting for some resource
■ returning to a different thread might involve switching address spaces

SCHEDULING

BASICS

■ scheduling describes the decision which thread to run on a CPU at a given time
■ When do we schedule?
  ■ current thread blocks or yields
  ■ time quantum expired
■ How do we schedule?
  ■ RR, FIFO, RMS, EDF
  ■ based on thread priorities

POLICY

■ scheduling decisions are policies
  ■ should not be in a microkernel
■ L4 has facilities to implement scheduling in user land
  ■ each thread has an associated preempter
  ■ kernel sends an IPC when thread blocks
  ■ preempter tells kernel where to switch to
■ no efficient implementation yet
  ■ scheduling is the only policy still in L4

QUANTA

■ a thread's time quantum defines the time it owns the CPU before it is preempted
■ preemption is the process of (involuntarily) blocking a thread in favor of another one
■ flavors of time quanta
  ■ time slices for round robin scheduling
  ■ execution time budgets for real-time
■ time quanta get replenished
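The time-quantum bookkeeping can be made concrete with a small sketch, assuming a periodic timer tick. The struct fields and the replenish-on-expiry policy are illustrative assumptions, not the Fiasco implementation.

    /* Sketch of time-slice accounting on a periodic timer tick. */
    #include <stdbool.h>
    #include <stdio.h>

    struct thread {
        unsigned priority;
        unsigned quantum_left;   /* remaining ticks of the current time slice */
        unsigned quantum_full;   /* value used to replenish the time slice    */
    };

    /* called from the timer interrupt for the running thread; returns true
     * if the quantum expired and the scheduler should preempt it */
    static bool timer_tick(struct thread *current)
    {
        if (current->quantum_left > 0)
            current->quantum_left--;

        if (current->quantum_left == 0) {
            current->quantum_left = current->quantum_full;  /* replenish */
            return true;                                    /* preempt   */
        }
        return false;
    }

    int main(void)
    {
        struct thread t = { .priority = 5, .quantum_left = 3, .quantum_full = 3 };
        for (int tick = 1; tick <= 4; tick++)      /* the third tick preempts */
            printf("tick %d: preempt=%d\n", tick, timer_tick(&t));
        return 0;
    }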
L4

■ scheduling in L4 is based on thread priorities
■ time-slice-based round robin within the same priority level
■ kernel manages priority and timeslice as part of the thread state
■ Fiasco has some additional real-time scheduling
■ see scheduling lecture on Dec 4th

EXAMPLE

■ thread 1 is a high priority driver thread, waiting for an interrupt (blocking)
■ threads 2 and 3 are ready with equal priority

[Figure: priority diagram with Thread 1, Thread 2, Thread 3; the same diagram accompanies each EXAMPLE slide below]

EXAMPLE

■ 1 hardware thread
■ threads 2 and 3 get their time slice filled
■ scheduler selects 2 to run

EXAMPLE

■ thread 1 becomes ready (device interrupt arrived)
■ its time slice is filled
■ thread 2 is preempted

EXAMPLE

■ thread 1 blocks again (interrupt handled, waiting for next)
■ thread 2 has time left

EXAMPLE

■ thread 2's time slice has expired
■ scheduler selects the next thread on the same priority level (round robin)
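The selection rule used in the example (highest priority first, round robin among equals) can be sketched as follows. The per-priority circular ready lists and the 256 priority levels are assumptions for illustration, not the Fiasco ready-queue code.

    /* Sketch of priority-based round-robin selection. */
    #include <stddef.h>
    #include <stdio.h>

    #define NUM_PRIOS 256

    struct thread {
        struct thread *next;     /* circular ready-list link per priority */
        unsigned priority;       /* 0 = lowest, NUM_PRIOS-1 = highest     */
    };

    /* one circular ready list per priority level */
    static struct thread *ready[NUM_PRIOS];

    /* pick the ready thread with the highest priority; equal-priority
     * threads take turns because the list head advances on every pick */
    static struct thread *pick_next(void)
    {
        for (int p = NUM_PRIOS - 1; p >= 0; p--) {
            if (ready[p]) {
                struct thread *t = ready[p];
                ready[p] = t->next;          /* round robin within the level */
                return t;
            }
        }
        return NULL;                         /* nothing ready: idle */
    }

    int main(void)
    {
        struct thread a = { .priority = 3 }, b = { .priority = 3 };
        a.next = &b; b.next = &a;            /* two ready threads, equal priority */
        ready[3] = &a;

        struct thread *first  = pick_next();
        struct thread *second = pick_next();
        struct thread *third  = pick_next();
        printf("%c %c %c\n", first == &a ? 'a' : 'b',
               second == &a ? 'a' : 'b', third == &a ? 'a' : 'b');   /* a b a */
        return 0;
    }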
SYNCHRONIZATION

BASICS

■ synchronization used for
  ■ mutual exclusion
  ■ producer-consumer scenarios
■ traditional approaches that do not work
  ■ spinning, busy waiting
  ■ disabling interrupts

ATOMIC OPS

■ for concurrent access to data structures
■ use atomic operations to protect manipulations
■ only suited for simple critical sections

EXPECTATION

[Figure: Thread 1 and Thread 2 alternating, each marked "Thread 1 in critical section" / "Thread 2 in critical section"]

SOLUTION

■ serializer and atomic operations can be combined into a nice counting semaphore

[Figure: Thread 1 and Thread 2 enter their critical sections via a Serializer Thread]

SEMAPHORES

■ semaphore
  ■ shared counter for correctness
  ■ wait queue for fairness
  ■ down (P) and up (V) operation
■ semaphore available iff counter > 0
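Here is a minimal sketch of the down/up operations, assuming the counter is manipulated with atomic operations and may go negative to count waiters; block_on() and wake_one() are stand-ins for the enqueue/block and dequeue/unblock calls to the semaphore thread described on the next slide, not a real L4 API.

    /* Counting-semaphore sketch: atomic counter fast path, blocking helper
     * only on contention. Simplified; not the actual L4 semaphore protocol. */
    #include <stdatomic.h>
    #include <stdio.h>

    struct semaphore {
        atomic_int counter;      /* > 0: available; < 0: -counter waiters */
    };

    /* stand-ins for the contention case (on L4: IPC to the semaphore thread) */
    static void block_on(struct semaphore *s) { (void)s; /* enqueue + block   */ }
    static void wake_one(struct semaphore *s) { (void)s; /* dequeue + unblock */ }

    static void down(struct semaphore *s)     /* P operation */
    {
        if (atomic_fetch_sub(&s->counter, 1) <= 0)
            block_on(s);         /* was not available: wait our turn */
    }

    static void up(struct semaphore *s)       /* V operation */
    {
        if (atomic_fetch_add(&s->counter, 1) < 0)
            wake_one(s);         /* someone is waiting: hand over */
    }

    int main(void)
    {
        struct semaphore s = { .counter = 1 };  /* binary use: mutual exclusion */
        down(&s);                /* 1 -> 0: acquired without contention */
        up(&s);                  /* 0 -> 1: released, nobody to wake    */
        printf("counter = %d\n", atomic_load(&s.counter));
        return 0;
    }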
SEMAPHORES

■ counter increments and decrements using atomic operations
■ when necessary, call semaphore thread to block/unblock and enqueue/dequeue

[Figure: Thread 1 and Thread 2 issue down/up; the Semaphore Thread enqueues/dequeues and blocks/unblocks them]

BENEFITS

■ cross-task semaphores, when counter is in shared memory
■ IPC only in the contention case
■ good for mutual exclusion when contention is rare
■ for producer-consumer scenarios, contention is the common case
  ■ solution for small critical sections in scheduling lecture

NOVA

INTRODUCTION

■ NOVA is a research microhypervisor currently developed by Udo Steinberg
■ explore technologies for a small and robust platform that hosts:
  ■ legacy operating systems
  ■ native NOVA applications

FEATURES

  Mechanism        L4   NOVA
  Thread           ✔    ✔
  Task             ✔    ✔
  IPC              ✔    ✔
  Virtual Machine  ✘    ✔

KERNEL STYLES

Process-Style
■ one kernel stack per thread
■ context switch: switch to stack of target thread
■ state retained on stack at switch time
■ can switch anytime
■ target thread resumes at last context switch point
■ Fiasco, Linux

Interrupt-Style
■ one kernel stack per CPU
■ context switch: save state of current thread, discard stack, restore state of target thread
■ state must be serializable at switch time
■ all state made explicit
■ target thread resumes with empty stack in continuation function
■ NOVA
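To illustrate the interrupt-style column, here is a minimal sketch in which the only explicit kernel state is a continuation function stored in the TCB; the per-CPU stack is simply abandoned and the next thread resumes in its continuation. All names and helpers are assumptions for illustration, not NOVA code.

    /* Interrupt-style context switch sketch: serialize state, discard the
     * per-CPU stack, resume the target thread in its continuation. */
    #include <stdio.h>
    #include <stdint.h>

    struct tcb;
    typedef void (*continuation_t)(struct tcb *self);

    struct tcb {
        uintptr_t      user_ip;  /* serialized user state (abridged)        */
        continuation_t cont;     /* where this thread resumes in the kernel */
    };

    /* a real interrupt-style kernel would reset the per-CPU kernel stack
     * pointer in assembly before calling func; here we just call it */
    static void on_empty_stack(continuation_t func, struct tcb *t)
    {
        func(t);
    }

    static void switch_to(struct tcb *current, struct tcb *next,
                          continuation_t resume_current_here)
    {
        current->cont = resume_current_here;   /* state made explicit          */
        on_empty_stack(next->cont, next);      /* discard stack, run next      */
        /* a real kernel would never return to this point */
    }

    static void ret_to_user(struct tcb *self)  /* example continuation */
    {
        printf("thread with ip=%#lx resumes\n", (unsigned long)self->user_ip);
    }

    int main(void)
    {
        struct tcb a = { .user_ip = 0x1000, .cont = ret_to_user };
        struct tcb b = { .user_ip = 0x2000, .cont = ret_to_user };
        switch_to(&a, &b, ret_to_user);        /* b resumes in its continuation */
        return 0;
    }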