
Higher Level Synchronization



Higher Level Synchronization
Operating Systems Principles
Mark Kampe (markk@cs.ucla.edu)
4/24/2016

9A. Practical Problems – locking and waiting
9B. Semaphores and Condition Variables
9C. File Level Locking
9D. Bottlenecks, Contention and Granularity

Using Condition Variables

    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
    ...
    /* waiting thread: block until ready becomes non-zero */
    pthread_mutex_lock(&lock);
    while (ready == 0)
        pthread_cond_wait(&cond, &lock);
    pthread_mutex_unlock(&lock);
    ...
    /* signaling thread: set ready and wake one waiter */
    if (pthread_mutex_lock(&lock) == 0) {
        ready = 1;
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
    }

The Bounded Buffer Problem

    void producer( FIFO *fifo, char *msg, int len ) {
        for( int i = 0; i < len; i++ ) {
            pthread_mutex_lock(&mutex);
            while (fifo->count == MAX)
                pthread_cond_wait(&empty, &mutex);   /* wait for a free slot */
            put(fifo, msg[i]);
            pthread_cond_signal(&fill);              /* announce new data */
            pthread_mutex_unlock(&mutex);
        }
    }

    void consumer( FIFO *fifo, char *msg, int len ) {
        for( int i = 0; i < len; i++ ) {
            pthread_mutex_lock(&mutex);
            while (fifo->count == 0)
                pthread_cond_wait(&fill, &mutex);    /* wait for data */
            msg[i] = get(fifo);
            pthread_cond_signal(&empty);             /* announce a free slot */
            pthread_mutex_unlock(&mutex);
        }
    }

Semaphores – signaling devices

• used when direct communication was not an option
  – e.g. between villages, ships, trains

Semaphores - History

• concept introduced in 1968 by Edsger Dijkstra
  – cooperating sequential processes
• THE classic synchronization mechanism
  – behavior is well specified and universally accepted
  – a foundation for most synchronization studies
  – a standard reference for all other mechanisms
• more powerful than simple locks
  – they incorporate a FIFO waiting queue
  – they have a counter rather than a binary flag
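The bounded-buffer code above is not self-contained: it assumes a FIFO type, a shared mutex, the two condition variables, and put/get helpers declared elsewhere. A minimal sketch of those missing pieces (the FIFO layout, the MAX capacity, and the helper bodies are assumptions, not taken from the slides) could look like this:

    #include <pthread.h>

    #define MAX 64                          /* assumed buffer capacity */

    typedef struct {                        /* assumed FIFO layout */
        char data[MAX];
        int  head, tail, count;
    } FIFO;

    static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  empty = PTHREAD_COND_INITIALIZER;  /* a slot has been freed */
    static pthread_cond_t  fill  = PTHREAD_COND_INITIALIZER;  /* an item has arrived   */

    /* both helpers assume the caller holds mutex */
    static void put(FIFO *fifo, char c) {
        fifo->data[fifo->tail] = c;
        fifo->tail = (fifo->tail + 1) % MAX;
        fifo->count++;
    }

    static char get(FIFO *fifo) {
        char c = fifo->data[fifo->head];
        fifo->head = (fifo->head + 1) % MAX;
        fifo->count--;
        return c;
    }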

Semaphores - Operations

• a semaphore has two parts:
  – an integer counter (initial value unspecified)
  – a FIFO waiting queue
• P (proberen/test) ... "wait"
  – decrement the counter; if count >= 0, return
  – if the counter < 0, add the process to the waiting queue
• V (verhogen/raise) ... "post" or "signal"
  – increment the counter
  – if the counter >= 0 and the queue is non-empty, wake the first process in line

Using semaphores for exclusion

• initialize semaphore count to one
  – count reflects # of threads allowed to hold the lock
• use the P/wait operation to take the lock
  – the first will succeed
  – subsequent attempts will block
• use the V/post operation to release the lock
  – restore the semaphore count to non-negative
  – if any threads are waiting, unblock the first in line

Using semaphores for notifications

• initialize semaphore count to zero
  – count reflects # of completed events
• use the P/wait operation to await completion
  – if already posted, it will return immediately
  – else all callers will block until V/post is called
• use the V/post operation to signal completion
  – increment the count
  – if any threads are waiting, unblock the first in line
• one signal per wait: no broadcasts

Counting Semaphores

• initialize semaphore count to the number of available resources
  – count reflects # of available resources
• use the P/wait operation to consume a resource
  – if one is available, it will return immediately
  – else all callers will block until V/post is called
• use the V/post operation to produce a resource
  – increment the count
  – if any threads are waiting, unblock the first in line
• one signal per wait: no broadcasts

The Producer/Consumer Problem

    void producer( FIFO *fifo, char *msg, int len ) {
        for( int i = 0; i < len; i++ ) {
            sem_wait(&empty);        /* wait for a free slot          */
            sem_wait(&mutex);        /* exclusive access to the FIFO  */
            put(fifo, msg[i]);
            sem_post(&mutex);
            sem_post(&full);         /* announce a new item           */
        }
    }

    void consumer( FIFO *fifo, char *msg, int len ) {
        for( int i = 0; i < len; i++ ) {
            sem_wait(&full);         /* wait for an item              */
            sem_wait(&mutex);
            msg[i] = get(fifo);
            sem_post(&mutex);
            sem_post(&empty);        /* announce a free slot          */
        }
    }

Implementing Semaphores

    /* sem_t here is the slide's own structure, with value, lock and cond fields */
    void sem_wait(sem_t *s) {
        pthread_mutex_lock(&s->lock);
        while (s->value <= 0)
            pthread_cond_wait(&s->cond, &s->lock);
        s->value--;
        pthread_mutex_unlock(&s->lock);
    }

    void sem_post(sem_t *s) {
        pthread_mutex_lock(&s->lock);
        s->value++;
        pthread_cond_signal(&s->cond);
        pthread_mutex_unlock(&s->lock);
    }
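The producer/consumer code above leaves the empty, full, and mutex semaphores undeclared. Using the POSIX semaphore API, one plausible initialization for a buffer of MAX slots (the names mirror the slide; MAX itself is an assumption) would be:

    #include <semaphore.h>

    #define MAX 64                     /* assumed buffer capacity */

    sem_t empty;                       /* counts free slots                    */
    sem_t full;                        /* counts filled slots                  */
    sem_t mutex;                       /* binary semaphore protecting the FIFO */

    void init_sems(void) {
        sem_init(&empty, 0, MAX);      /* all slots start out free         */
        sem_init(&full,  0, 0);        /* no items produced yet            */
        sem_init(&mutex, 0, 1);        /* one thread in the FIFO at a time */
    }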

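As a concrete illustration of the "semaphores for notifications" pattern described above, a semaphore initialized to zero lets one thread sleep until another announces that its work is done. This is only a sketch; the worker function and the surrounding program are assumptions:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t done;                  /* count starts at 0: nothing completed yet */

    static void *worker(void *arg) {
        (void) arg;
        /* ... do the actual work ... */
        sem_post(&done);                /* V: signal completion */
        return NULL;
    }

    int main(void) {
        pthread_t tid;
        sem_init(&done, 0, 0);
        pthread_create(&tid, NULL, worker, NULL);
        sem_wait(&done);                /* P: block until the worker posts */
        printf("worker finished\n");
        pthread_join(tid, NULL);
        return 0;
    }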
Implementing Semaphores in OS

    void sem_wait(sem_t *s) {
        for (;;) {
            save = intr_enable( ALL_DISABLE );     /* block interrupts              */
            while ( TestAndSet( &s->lock ) );      /* spin for the semaphore's lock */
            if (s->value > 0) {
                s->value--;
                s->lock = 0;
                intr_enable( save );
                return;
            }
            add_to_queue( &s->queue, myproc );     /* no resource: go to sleep */
            myproc->runstate |= PROC_BLOCKED;
            s->lock = 0;
            intr_enable( save );
            yield();
        }
    }

    void sem_post(struct sem_t *s) {
        struct proc_desc *p = 0;
        save = intr_enable( ALL_DISABLE );
        while ( TestAndSet( &s->lock ) );
        s->value++;
        if (p = get_from_queue( &s->queue ))       /* wake the first waiter, if any */
            p->runstate &= ~PROC_BLOCKED;
        s->lock = 0;
        intr_enable( save );
        if (p)
            reschedule( p );
    }

(locking to solve the sleep/wakeup race)

• requires a spin-lock to work on SMPs
  – sleep/wakeup may be called on two processors
  – the critical section is short and cannot block
  – we must spin, because we cannot sleep ... the lock we need is the one that protects the sleep operation
• also requires interrupt disabling in sleep
  – wakeup is often called from interrupt handlers
  – an interrupt is possible during the sleep/wakeup critical section
  – if the spin-lock is already held, wakeup will block forever
• very few operations require both of these

Limitations of Semaphores

• semaphores are a very spartan mechanism
  – they are simple, and have few features
  – more designed for proofs than for synchronization
• they lack many practical synchronization features
  – it is easy to deadlock with semaphores
  – one cannot check the lock without blocking
  – they do not support reader/writer shared access
  – no way to recover from a wedged V'er
  – no way to deal with priority inheritance
• nonetheless, most OSs support them

Object Level Locking

• mutexes protect code critical sections
  – brief durations (e.g. nanoseconds to milliseconds)
  – other threads operating on the same data
  – all operating in a single address space
• persistent objects are more difficult
  – critical sections are likely to last much longer
  – many different programs can operate on them
  – they may not even be running on a single computer
• solution: lock objects (rather than code)

File Descriptor Locking

    int flock( fd, operation )

• supported operations:
  – LOCK_SH ... shared lock (multiple allowed)
  – LOCK_EX ... exclusive lock (one at a time)
  – LOCK_UN ... release a lock
• the lock applies to open instances of the same fd
  – distinct opens are not affected
• locking is purely advisory
  – does not prevent reads, writes, or unlinks

Advisory vs Enforced Locking

• enforced locking
  – done within the implementation of object methods
  – guaranteed to happen, whether or not the user wants it
  – may sometimes be too conservative
• advisory locking
  – a convention that "good guys" are expected to follow
  – users are expected to lock the object before calling methods
  – gives users flexibility in what to lock, and when
  – gives users more freedom to do it wrong (or not at all)
  – mutexes are advisory locks
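As a brief sketch of flock()-style advisory locking in practice (the file name and the work done inside the critical section are assumptions, not from the slides):

    #include <sys/file.h>
    #include <fcntl.h>
    #include <unistd.h>

    int update_shared_file(void) {
        int fd = open("shared.dat", O_RDWR);    /* assumed shared file */
        if (fd < 0)
            return -1;
        flock(fd, LOCK_EX);       /* wait for an exclusive advisory lock     */
        /* ... read, modify and rewrite the shared data ... */
        flock(fd, LOCK_UN);       /* release the lock                        */
        close(fd);                /* closing the descriptor also releases it */
        return 0;
    }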

Ranged File Locking

    int lockf( fd, cmd, offset, len )

• supported cmds:
  – F_LOCK ... get/wait for an exclusive lock
  – F_ULOCK ... release a lock
  – F_TEST/F_TLOCK ... test, or non-blocking lock request
  – offset/len specifies the portion of the file to be locked
• the lock applies to the file (not the open instance)
  – distinct opens are not affected
• locking may be enforced
  – depending on the underlying file system

Cost of not getting a Lock

• protect critical sections to ensure correctness
• many critical sections are very brief
  – in and out in a matter of nanoseconds
• blocking is much more (e.g. 1000x) expensive
  – microseconds to yield and context switch
  – milliseconds if swapped out or a queue forms
• performance depends on the conflict probability

    C_expected = (C_get * (1 - P_conflict)) + (C_block * P_conflict)

Performance: lock contention

• the riddle of parallelism:
  – parallelism: if one task is blocked, the CPU runs another
  – concurrent use of shared resources is difficult
  – critical sections serialize tasks, eliminating parallelism
• what if everyone needs to use one resource?
  – one process gets the resource
  – other processes get in line behind it (a convoy)
  – parallelism is eliminated; B runs after A finishes
  – that resource becomes a bottleneck

Probability of Conflict

Convoy Formation

• in general, the probability that somebody else is already in the critical section:
    P_conflict = 1 - (1 - (T_critical / T_total))^threads
• unless a FIFO queue forms (a convoy):
    P_conflict = 1 - (1 - ((T_wait + T_critical) / T_total))^threads
  – newcomers have to get into line, and an (already huge) T_wait gets even longer
• if T_wait reaches the mean inter-arrival time, the line becomes permanent and parallelism ceases

Performance: resource convoys

[graph: throughput vs. offered load, comparing an "ideal" curve with a "convoy" curve]
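A short sketch of ranged locking in the spirit of the slide (the record size and the use of fixed-size records are assumptions; note that POSIX lockf() takes the start of the range from the current file offset, so the offset is established with lseek()):

    #include <fcntl.h>
    #include <unistd.h>

    #define RECORD_SIZE 128                    /* assumed fixed-size records */

    /* lock, rewrite, and unlock a single record of an already-open file */
    int update_record(int fd, off_t recno, const char *buf) {
        off_t offset = recno * RECORD_SIZE;

        lseek(fd, offset, SEEK_SET);           /* position the start of the range  */
        lockf(fd, F_LOCK, RECORD_SIZE);        /* wait for an exclusive lock on it */

        pwrite(fd, buf, RECORD_SIZE, offset);  /* update only this record          */

        lseek(fd, offset, SEEK_SET);
        lockf(fd, F_ULOCK, RECORD_SIZE);       /* release just this record         */
        return 0;
    }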

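As a worked illustration of the conflict-probability and expected-cost formulas above (all of the timing numbers are made-up assumptions):

    #include <math.h>
    #include <stdio.h>

    /* probability that an arriving thread finds the critical section busy */
    double p_conflict(double t_critical, double t_total, int threads) {
        return 1.0 - pow(1.0 - t_critical / t_total, threads);
    }

    /* expected cost of entering the critical section */
    double c_expected(double c_get, double c_block, double p) {
        return c_get * (1.0 - p) + c_block * p;
    }

    int main(void) {
        /* assume a 100ns critical section entered every 10us, by 16 threads */
        double p = p_conflict(100e-9, 10e-6, 16);
        /* assume 50ns to get an uncontended lock, 50us when forced to block */
        printf("P_conflict = %.3f, C_expected = %.2g seconds\n",
               p, c_expected(50e-9, 50e-6, p));
        return 0;
    }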