

  1. Changelog: changes made in this version not seen in first lecture. 18 Feb 2019: counting to binary semaphores: really correct implementation (after some failed attempts)

  2. Locks part 2

  3. last time: disabling interrupts for locks (finish); compilers and processors reorder loads/stores; cache coherency — modified/shared/invalid; atomic read-modify-write operations; spinlocks; mutexes (start). (A small reordering example is sketched below.)
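To make the reordering point concrete, here is a small example of my own (not from the lecture) showing how C11 release/acquire atomics stop both the compiler and the processor from reordering the data store past the flag store; the names data, ready, producer, and consumer are hypothetical.

    #include <stdatomic.h>

    int data;                 /* plain shared variable */
    atomic_int ready = 0;     /* publication flag */

    void producer(void) {
        data = 42;
        /* release store: no earlier store may be reordered after this line */
        atomic_store_explicit(&ready, 1, memory_order_release);
    }

    int consumer(void) {
        /* acquire load: no later load may be reordered before this line */
        while (atomic_load_explicit(&ready, memory_order_acquire) == 0)
            ;   /* spin until the producer publishes */
        return data;   /* guaranteed to observe 42 */
    }

Without the release/acquire ordering (for example, with plain non-atomic accesses), the consumer could observe ready == 1 but still read stale data.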

  4. spinlock problems: spinlocks can send a lot of messages on the shared bus (makes every non-cached memory access slower…); wasting CPU time waiting for another thread (could we do something useful instead?)


  5. problem: busy waits. The spinlock loop while (xchg(&lk->locked, 1) != 0) { } just spins: what if it's going to be a while? waiting for a process that's waiting for I/O? really would like to do something else with the CPU instead… (A fuller spinlock sketch follows below.)
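For reference, a minimal user-space version of that busy-wait loop, in the spirit of xv6's acquire()/release() but written here for illustration; the function names spin_acquire/spin_release and the use of GCC/Clang's __sync builtins are my choices, not the lecture's.

    struct spinlock { volatile int locked; };   /* 0 = free, 1 = held */

    static void spin_acquire(struct spinlock *lk) {
        /* atomic read-modify-write: store 1 into locked and return the old value;
           keep spinning until the old value was 0, i.e. we are the one who took it */
        while (__sync_lock_test_and_set(&lk->locked, 1) != 0)
            ;   /* this busy loop is exactly the wasted CPU time the slide is about */
    }

    static void spin_release(struct spinlock *lk) {
        __sync_lock_release(&lk->locked);   /* store 0 with release semantics */
    }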

  6. mutexes: intelligent waiting — mutexes are locks that wait better: instead of running an infinite loop, give away the CPU; lock = go to sleep, add self to list; sleep = scheduler runs something else; unlock = wake up sleeping thread. (A short user-level usage sketch follows below.)
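As a user-level point of comparison (my own example, not part of the slides): POSIX mutexes already behave this way; on Linux with glibc, a contended pthread_mutex_lock typically puts the calling thread to sleep (via futex) rather than spinning.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static long counter = 0;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);    /* if contended: go to sleep, scheduler runs something else */
            counter++;
            pthread_mutex_unlock(&lock);  /* may wake up a sleeping waiter */
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld\n", counter);   /* always 2000000 */
        return 0;
    }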


  7. mutex implementation idea: shared list of waiters; spinlock protects the list of waiters from concurrent modification; lock = use spinlock to add self to list, then wait without spinlock; unlock = use spinlock to remove item from list


  8. mutex: one possible implementation

    struct Mutex {
        SpinLock guard_spinlock;   // protects lock_taken and wait_queue; only held for a very
                                   // short amount of time (compared to the mutex itself)
        bool lock_taken = false;   // tracks whether any thread has locked and not unlocked
        WaitQueue wait_queue;      // list of threads that discovered the lock is taken and are
                                   // waiting for it to be free (these threads are not runnable)
    };

    LockMutex(Mutex *m) {
        LockSpinlock(&m->guard_spinlock);
        if (m->lock_taken) {
            put current thread on m->wait_queue
            make current thread not runnable     /* xv6: myproc()->state = SLEEPING; */
            UnlockSpinlock(&m->guard_spinlock);
            run scheduler
        } else {
            m->lock_taken = true;
            UnlockSpinlock(&m->guard_spinlock);
        }
    }

    UnlockMutex(Mutex *m) {
        LockSpinlock(&m->guard_spinlock);
        if (m->wait_queue not empty) {
            remove a thread from m->wait_queue   // choose a thread to hand the lock off to,
                                                 // instead of setting lock_taken to false
            make that thread runnable            /* xv6: myproc()->state = RUNNABLE; */
        } else {
            m->lock_taken = false;
        }
        UnlockSpinlock(&m->guard_spinlock);
    }

    subtle: what if UnlockMutex() runs in between UnlockSpinlock() and "run scheduler" in LockMutex()? this is the reason we make the thread not runnable before releasing the guard spinlock

    if woken up there, we also need to make sure the scheduler doesn't run us on another core until we switch to the scheduler (and save our regs); xv6 solution: acquire the ptable lock; Linux solution: separate 'on cpu' flags
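To make the sleep/wake mechanics concrete outside the kernel, here is a user-space approximation of the same design, written for illustration only; it is not the lecture's or xv6's code. A pthread spinlock stands in for guard_spinlock, a simple LIFO list of stack-allocated nodes stands in for wait_queue, and the Linux futex(2) system call plays the role of "make thread not runnable / make thread runnable". The names Waiter, MutexInit, wait_head, and woken are all invented here.

    #include <stdbool.h>
    #include <stdatomic.h>
    #include <pthread.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/futex.h>

    struct Waiter {
        atomic_int woken;          /* 0 = still waiting, 1 = lock handed to us */
        struct Waiter *next;
    };

    struct Mutex {
        pthread_spinlock_t guard;  /* plays the role of guard_spinlock */
        bool lock_taken;
        struct Waiter *wait_head;  /* plays the role of wait_queue (LIFO; a real one would be FIFO) */
    };

    static void MutexInit(struct Mutex *m) {
        pthread_spin_init(&m->guard, PTHREAD_PROCESS_PRIVATE);
        m->lock_taken = false;
        m->wait_head = NULL;
    }

    void LockMutex(struct Mutex *m) {
        struct Waiter self = { .woken = 0, .next = NULL };
        pthread_spin_lock(&m->guard);
        if (m->lock_taken) {
            self.next = m->wait_head;            /* put current thread on the wait queue */
            m->wait_head = &self;
            pthread_spin_unlock(&m->guard);
            /* sleep until the lock is handed to us */
            while (atomic_load(&self.woken) == 0)
                syscall(SYS_futex, &self.woken, FUTEX_WAIT, 0, NULL, NULL, 0);
            /* lock was handed directly to us; lock_taken stays true */
        } else {
            m->lock_taken = true;
            pthread_spin_unlock(&m->guard);
        }
    }

    void UnlockMutex(struct Mutex *m) {
        pthread_spin_lock(&m->guard);
        if (m->wait_head != NULL) {
            struct Waiter *w = m->wait_head;     /* choose a thread to hand the lock to */
            m->wait_head = w->next;
            atomic_store(&w->woken, 1);
            syscall(SYS_futex, &w->woken, FUTEX_WAKE, 1, NULL, NULL, 0);  /* make it runnable */
        } else {
            m->lock_taken = false;
        }
        pthread_spin_unlock(&m->guard);
    }

Note how the slide's subtle race is handled here: the waiter releases the guard before sleeping, but FUTEX_WAIT atomically re-checks that woken is still 0, so a wakeup that races in between is not lost; in the kernel version, marking the thread SLEEPING before releasing the guard serves the same purpose.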

