INF4140 - Models of concurrency Høsten 2015 October 19, 2015

Abstract

This is the "handout" version of the slides for the lecture, i.e., a rendering of the slides' content in a way that does not waste so much paper when printed. The material is found in [Andrews, 2000]. Being a handout version of the slides, some figures and graph overlays may not be rendered in full detail; most of the overlays, especially the long ones, are removed, since they make little sense on a handout/paper. Scroll through the real slides instead if you need the overlays. This handout version also contains additional remarks and footnotes, which would clutter the slides and which typically contain elaborations that may be given orally in the lecture. The material about weak memory models is currently not included here.

1 Message passing and channels

1. Oct. 2015

1.1 Intro

Outline

Course overview:

• Part I: concurrent programming; programming with shared variables
• Part II: "distributed" programming

Outline: asynchronous and synchronous message passing

• Concurrent vs. distributed programming 1
• Asynchronous message passing: channels, messages, primitives
• Example: filters and sorting networks
• From monitors to client–server applications
• Comparison of message passing and monitors
• About synchronous message passing

Shared memory vs. distributed memory

More traditional system architectures have one shared memory:

• many processors access the same physical memory
• example: fileserver with many processors on one motherboard

Distributed memory architectures:

• each processor has private memory and communicates over a "network" (interconnect)
• Examples:

1 The dividing line is not absolute. One can make perfectly good use of channels and message passing also in a non-distributed setting.
– Multicomputer: asynchronous multi-processor with distributed memory (typically contained inside one case)
– Workstation clusters: PCs in a local network
– Grid systems: machines on the Internet, resource sharing
– Cloud computing: e.g., cloud storage services
– NUMA architectures
– Cluster computing
. . .

Shared memory concurrency in the real world

[Figure: two threads (thread 0, thread 1) accessing one shared memory]

• the simple memory architecture above does not reflect reality
• out-of-order executions:
  – modern systems: complex memory hierarchies, caches, buffers . . .
  – compiler optimizations, SMP, multi-core architectures, and NUMA

[Figures: SMP architectures with per-CPU L1 caches and shared or per-CPU L2 caches over one shared memory; a NUMA architecture with per-CPU local memory]
Concurrent vs. distributed programming

Concurrent programming:

• Processors share one memory
• Processors communicate via reading and writing of shared variables

Distributed programming:

• Memory is distributed ⇒ processes cannot share variables (directly)
• Processes communicate by sending and receiving messages via shared channels, or (in future lectures): communication via RPC and rendezvous

1.2 Asynch. message passing

Asynchronous message passing: channel abstraction

Channel: abstraction, e.g., of a physical communication network 2

• one-way, from sender(s) to receiver(s)
• unbounded FIFO (queue) of waiting messages
• preserves message order
• atomic access
• error-free
• typed

Variants: errors possible, untyped, . . .

Asynchronous message passing: primitives

Channel declaration

    chan c(type1 id1, ..., typen idn);

Messages: n-tuples of values of the respective types

Communication primitives:

• send c(expr1, ..., exprn); — non-blocking, i.e., asynchronous
• receive c(var1, ..., varn); — blocking: the receiver waits until a message is sent on the channel
• empty(c); — true if the channel is empty

[Figure: processes P1 and P2 connected by channel c; P1 sends, P2 receives]

Simple channel example in Go

    package main

    import "fmt"

    func main() {
        messages := make(chan string, 0) // declare + initialize

        go func() { messages <- "ping" }() // send
        msg := <-messages                  // receive
        fmt.Println(msg)
    }

2 but remember also: the producer–consumer problem
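The slides list `empty(c)` as a primitive. Go has no direct equivalent on unbuffered receives, but a non-blocking receive via `select` with a `default` branch gives a similar emptiness test. The following is a sketch of this workaround, not part of the slides:

```go
package main

import "fmt"

func main() {
	c := make(chan int, 1)

	// Channel holds no message yet: default branch fires,
	// analogous to empty(c) being true.
	select {
	case v := <-c:
		fmt.Println("received", v)
	default:
		fmt.Println("channel empty")
	}

	c <- 42 // send one message

	// Now a message is waiting: the receive branch fires.
	select {
	case v := <-c:
		fmt.Println("received", v)
	default:
		fmt.Println("channel empty")
	}
}
```

Note that such a test is only a snapshot: another goroutine may send or receive between the test and any subsequent operation.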
Example: message passing

[Figure: process A sends on channel foo, process B receives; result: (x,y) = (1,2)]

    chan foo(int);

    process A {
        send foo(1);
        send foo(2);
    }

    process B {
        receive foo(x);
        receive foo(y);
    }

Example: shared channel

[Figure: processes A1 and A2 both send on channel foo, process B receives; result: (x,y) = (1,2) or (2,1)]

    process A1 {
        send foo(1);
    }

    process A2 {
        send foo(2);
    }

    process B {
        receive foo(x);
        receive foo(y);
    }

The same example in Go:

    func main() {
        foo := make(chan int, 10)
        go func() {
            time.Sleep(1000)
            foo <- 1 // send
        }()

        go func() {
            time.Sleep(1)
            foo <- 2
        }()
        fmt.Println("first  =", <-foo)
        fmt.Println("second =", <-foo)
    }

Asynchronous message passing and semaphores

Comparison with general semaphores:

    channel  ≃ semaphore
    send     ≃ V
    receive  ≃ P
Number of messages in the queue = value of the semaphore (this ignores the content of the messages).

    type dummy interface{}    // dummy type
    type Semaphore chan dummy // type definition

    func (s Semaphore) Vn(n int) {
        for i := 0; i < n; i++ {
            s <- true // send something
        }
    }

    func (s Semaphore) Pn(n int) {
        for i := 0; i < n; i++ {
            <-s // receive
        }
    }

    func (s Semaphore) V() {
        s.Vn(1)
    }

    func (s Semaphore) P() {
        s.Pn(1)
    }

Listing 1: 5 Phils

    package main

    import (
        "fmt"
        "math/rand"
        "sync"
        "time"

        "andrewsbook/semchans" // semaphores using channels
    )

    var wg sync.WaitGroup

    const m = 5 // let's make just 5

    var forks = [m]semchans.Semaphore{
        make(semchans.Semaphore, 1),
        make(semchans.Semaphore, 1),
        make(semchans.Semaphore, 1),
        make(semchans.Semaphore, 1),
        make(semchans.Semaphore, 1)}

    func main() {
        for i := 0; i < m; i++ { // initialize the sem's
            forks[i].V()
        }
        wg.Add(m)
        for i := 0; i < m; i++ {
            go philosopher(i)
        }
        wg.Wait()
    }

    func philosopher(i int) {
        defer wg.Done()
        r := rand.New(rand.NewSource(99)) // random generator
        fmt.Printf("start P(%d)\n", i)
        for true {
            fmt.Printf("P(%d) is thinking\n", i)
            forks[i].P()
            // time.Sleep(time.Duration(r.Int31n(0))) // small delay for DL
            forks[(i+1)%m].P()
            fmt.Printf("P(%d) starts eating\n", i)
            time.Sleep(time.Duration(r.Int31n(5))) // small delay
            fmt.Printf("P(%d) finishes eating\n", i)
            forks[i].V()
            forks[(i+1)%m].V()
        }
    }
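The claim above, that the number of messages queued in the channel equals the semaphore's value, can be observed directly in Go: `len` on a buffered channel reports the number of buffered (i.e., sent but not yet received) elements. A minimal sketch, not from the slides:

```go
package main

import "fmt"

func main() {
	sem := make(chan struct{}, 3) // general semaphore with maximum value 3

	sem <- struct{}{} // V: increments the "value"
	sem <- struct{}{} // V
	fmt.Println(len(sem)) // queued messages = semaphore value

	<-sem // P: decrements the "value"
	fmt.Println(len(sem))
}
```

As with `empty(c)`, reading `len` is only a snapshot and should not be used for synchronization decisions in concurrent code.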
1.2.1 Filters

Filters: one-way interaction

A filter F is a process which:

• receives messages on input channels,
• sends messages on output channels, and
• whose output is a function of the input (and the initial state).

[Figure: filter F with input channels in1 ... inn (receive) and output channels out1 ... outn (send)]

• A filter is specified as a predicate.
• Some computations are naturally seen as a composition of filters.
• Cf. stream processing/programming (feedback loops) and dataflow programming.

Example: a single filter process

Problem: sort a list of n numbers into ascending order.

Process Sort with input channel input and output channel output.

Define:
n: number of values sent to output.
sent[i]: the i'th value sent to output.

Sort predicate:

    (∀i : 1 ≤ i < n. sent[i] ≤ sent[i+1])
    ∧ values sent to output are a permutation of values from input.

Filter for merging of streams

Problem: merge two sorted input streams into one sorted stream.

Process Merge with input channels in1 and in2 and output channel out:

    in1: 1 4 9 . . .
    in2: 2 5 8 . . .
    out: 1 2 4 5 8 9 . . .

A special value EOS marks the end of a stream.

Define:
n: number of values sent to out.
sent[i]: the i'th value sent to out.

The following shall hold when Merge terminates:

    in1 and in2 are empty ∧ sent[n+1] = EOS
    ∧ (∀i : 1 ≤ i < n. sent[i] ≤ sent[i+1])
    ∧ values sent to out are a permutation of values from in1 and in2.