Distributed Programming
Reasoning about Synchronous Message Passing

Message Passing

Dr. Liam O’Connor
University of Edinburgh LFCS (and UNSW)
Term 2 2020
Where we are at

In the last lecture, we saw monitors and the readers and writers problem, concluding our examination of shared variable concurrency. For the rest of this course, our focus will be on message passing, both as a useful concurrency abstraction on one computer and as the foundation for distributed programming. In this lecture, we will introduce message passing and discuss simple non-compositional proof techniques for synchronous message passing.
Distributed Programming

- concurrent program: processes + communication + synchronization
- distributed program: processes can be distributed across machines
  → they cannot use shared variables (usually; DSM is an exception)
- processes do share communication channels
- they access channels by message passing, remote procedure call (RPC), or rendezvous
- languages: Promela (synchronous and asynchronous MP), Java (RPC), Ada (rendezvous)
- libraries: sockets, message passing interface (MPI), parallel virtual machine (PVM), JCSP, etc.
Message Passing

A channel is a typed FIFO queue between processes. We distinguish synchronous from asynchronous channels.

                     Ben-Ari   Promela
  send a message     ch ⇐ x    ch ! x
  receive a message  ch ⇒ y    ch ? y

Synchronous channels
If the channel is synchronous, the queue has capacity 0. Both the send and the receive operation block until both are ready to execute. When they are, they proceed at the same time and the value of x is assigned to y.

Asynchronous channels
If the channel is asynchronous, the send operation doesn’t block: it appends the value of x to the FIFO queue associated with the channel ch. Only the receive operation blocks, until the channel ch contains a message. When it does, the oldest message is removed and its content is stored in y.
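Both behaviours can be observed in Go, where an unbuffered channel is synchronous and a buffered channel approximates the asynchronous FIFO case. A minimal sketch (the helper names `roundTrip` and `fifoOrder` are ours, not from the slides):

```go
package main

import "fmt"

// roundTrip sends v over an unbuffered (synchronous) channel from a
// separate goroutine; send and receive rendezvous, transferring v.
func roundTrip(v int) int {
	ch := make(chan int) // capacity 0: both sides block until ready
	go func() { ch <- v }()
	return <-ch
}

// fifoOrder shows the asynchronous case: a buffered channel is a FIFO
// queue, so the oldest message is received first.
func fifoOrder() (int, int) {
	ch := make(chan int, 2) // sends don't block until the buffer is full
	ch <- 1
	ch <- 2
	return <-ch, <-ch
}

func main() {
	fmt.Println(roundTrip(42)) // 42
	a, b := fifoOrder()
	fmt.Println(a, b) // 1 2
}
```

Note that without the goroutine, `ch <- v` on an unbuffered channel would deadlock: the synchronous send has no partner to hand the value to.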
Taxonomy of Asynchronous Message Passing

  Rel       RelFIFO       RelDup       RelDupFIFO
  Fair      FairFIFO      FairDup      FairDupFIFO

Rel = “reliable”, Dup = “with duplication”
FIFO = “order-preserving”, Fair = “lossy but fair”

RelFIFO ≃ TCP and FairDup ≃ UDP (if only it were fair)
Algorithm 2.1: Producer-consumer (channels)

channel of integer ch

producer:
      integer x
      loop forever
p1:       x ← produce
p2:       ch ⇐ x

consumer:
      integer y
      loop forever
q1:       ch ⇒ y
q2:       consume(y)
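Algorithm 2.1 transliterates directly into Go, with an unbuffered channel playing the role of the synchronous channel ch. In this sketch (ours) the infinite loops are bounded to n items, and produce is stubbed out as emitting 0..n−1:

```go
package main

import "fmt"

// produceConsume runs the producer and consumer of Algorithm 2.1 for a
// bounded number of items and returns what the consumer saw.
func produceConsume(n int) []int {
	ch := make(chan int) // synchronous: each send waits for its receive
	go func() {          // producer
		for x := 0; x < n; x++ {
			ch <- x // p2: ch ⇐ x
		}
		close(ch) // signal end of the (here: finite) stream
	}()
	var consumed []int // consumer
	for y := range ch { // q1: ch ⇒ y
		consumed = append(consumed, y) // q2: consume(y)
	}
	return consumed
}

func main() {
	fmt.Println(produceConsume(5)) // [0 1 2 3 4]
}
```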
Conway’s Problem

Example
Input on channel inC: a sequence of characters.
Output on channel outC: the sequence of characters from inC, with
- runs of 2 ≤ n ≤ 9 occurrences of the same character c replaced by n followed by c, and
- a newline character after every Kth character in the output.

Let’s use message-passing for separation of concerns:

inC → compress → pipe → output → outC
Algorithm 2.2: Conway’s problem

constant integer MAX ← 9
constant integer K ← 4
channel of integer inC, pipe, outC

compress:
      char c, previous ← 0
      integer n ← 0
      inC ⇒ previous
      loop forever
p1:       inC ⇒ c
p2:       if (c = previous) and (n < MAX − 1)
p3:           n ← n + 1
          else
p4:           if n > 0
p5:               pipe ⇐ i2c(n+1)
p6:               n ← 0
p7:           pipe ⇐ previous
p8:           previous ← c

output:
      char c
      integer m ← 0
      loop forever
q1:       pipe ⇒ c
q2:       outC ⇐ c
q3:       m ← m + 1
q4:       if m ≥ K
q5:           outC ⇐ newline
q6:           m ← 0
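The two processes can be sketched in Go with unbuffered channels, so all communication is synchronous as on the slide. The infinite loops are bounded by closing the channels at end of input; the `conway` driver and all names are ours:

```go
package main

import "fmt"

const MAX = 9 // longest run encoded as a single digit
const K = 4   // newline after every Kth output character

// compress replaces each run of n equal characters (2 <= n <= MAX) by
// the digit n followed by the character, forwarding the result on pipe.
func compress(in <-chan byte, pipe chan<- byte) {
	previous, ok := <-in // inC ⇒ previous
	if !ok {
		close(pipe)
		return
	}
	n := 0 // repeats seen beyond the first occurrence
	for c := range in {
		if c == previous && n < MAX-1 {
			n++
		} else {
			if n > 0 {
				pipe <- byte('0' + n + 1) // i2c(n+1)
				n = 0
			}
			pipe <- previous
			previous = c
		}
	}
	if n > 0 { // flush the final run
		pipe <- byte('0' + n + 1)
	}
	pipe <- previous
	close(pipe)
}

// output copies pipe to out, inserting '\n' after every Kth character.
func output(pipe <-chan byte, out chan<- byte) {
	m := 0
	for c := range pipe {
		out <- c
		m++
		if m >= K {
			out <- '\n'
			m = 0
		}
	}
	close(out)
}

// conway wires the pipeline inC → compress → pipe → output → outC.
func conway(s string) string {
	in, pipe, out := make(chan byte), make(chan byte), make(chan byte)
	go func() {
		for i := 0; i < len(s); i++ {
			in <- s[i]
		}
		close(in)
	}()
	go compress(in, pipe)
	go output(pipe, out)
	var res []byte
	for c := range out {
		res = append(res, c)
	}
	return string(res)
}

func main() {
	fmt.Println(conway("aaabccccdd")) // "3ab4\nc2d"
}
```

On "aaabccccdd", compress produces "3ab4c2d" and output inserts a newline after the fourth character.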
Reminder: Matrix Multiplication

Example

    ⎛ 1 2 3 ⎞   ⎛ 1 0 2 ⎞   ⎛  4 2  6 ⎞
    ⎜ 4 5 6 ⎟ × ⎜ 0 1 2 ⎟ = ⎜ 10 5 18 ⎟
    ⎝ 7 8 9 ⎠   ⎝ 1 0 0 ⎠   ⎝ 16 8 30 ⎠

Let p, q, r ∈ ℕ. Let A = (a_{i,j}) with 1 ≤ i ≤ p, 1 ≤ j ≤ q, A ∈ T^(p×q), and B = (b_{j,k}) with 1 ≤ j ≤ q, 1 ≤ k ≤ r, B ∈ T^(q×r), be two (compatible) matrices. Recall from maths that another matrix C = (c_{i,k}) ∈ T^(p×r) is their product, A × B, iff, for all 1 ≤ i ≤ p and 1 ≤ k ≤ r:

    c_{i,k} = Σ_{j=1}^{q} a_{i,j} · b_{j,k}
Algorithms for Matrix Multiplication

The standard algorithm for matrix multiplication is:

    for all rows i of A do:
        for all columns k of B do:
            set c_{i,k} to 0
            for all columns j of A do:
                add a_{i,j} · b_{j,k} to c_{i,k}

Because of the three nested loops, its complexity is O(p · q · r). In case both matrices are square, i.e., p = q = r, that’s O(p³). (Subtle optimisations exist for this very common case. The current best yields an upper bound of O(p^2.3727). Ask Aleks in your next algorithms class.)
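The three nested loops translate directly; a sketch in Go (the function name `matMul` is ours), checked against the example matrices above:

```go
package main

import "fmt"

// matMul implements the standard O(p·q·r) algorithm: iterate over the
// rows of A, the columns of B, and the shared dimension q.
func matMul(a, b [][]int) [][]int {
	p, q, r := len(a), len(b), len(b[0])
	c := make([][]int, p)
	for i := 0; i < p; i++ { // rows of A
		c[i] = make([]int, r)
		for k := 0; k < r; k++ { // columns of B
			for j := 0; j < q; j++ { // shared dimension
				c[i][k] += a[i][j] * b[j][k]
			}
		}
	}
	return c
}

func main() {
	a := [][]int{{1, 2, 3}, {4, 5, 6}, {7, 8, 9}}
	b := [][]int{{1, 0, 2}, {0, 1, 2}, {1, 0, 0}}
	fmt.Println(matMul(a, b)) // [[4 2 6] [10 5 18] [16 8 30]]
}
```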
Process Array for Matrix Multiplication

[Diagram: a 3 × 3 array of multiplier processes. Source processes at the top feed the columns of B downwards; Zero processes on the right feed in the initial partial sum 0; each cell holds one coefficient of A (the rows 1 2 3, 4 5 6, 7 8 9). Partial sums flow from right to left, so the rows of the result (4, 2, 6; 10, 5, 18; 16, 8, 30) emerge at the Result processes on the left, while the values passed downwards end in Sink processes at the bottom.]
Computation of One Element

[Diagram: the bottom row of cells, with coefficients 7, 8, 9, computing the element 30. The column (2, 2, 0) of B arrives from the north, the Zero process feeds in 0 on the right, and the partial sums 0, then 0 + 9·0 = 0, then 0 + 8·2 = 16, then 16 + 7·2 = 30 flow from right to left towards Result.]
Algorithm 2.3: Multiplier process with channels

integer FirstElement
channel of integer North, East, South, West
integer Sum, SecondElement

      loop forever
p1:       North ⇒ SecondElement
p2:       East ⇒ Sum
p3:       Sum ← Sum + FirstElement · SecondElement
p4:       South ⇐ SecondElement
p5:       West ⇐ Sum
Algorithm 2.4: Multiplier with channels and selective input

integer FirstElement
channel of integer North, East, South, West
integer Sum, SecondElement

      loop forever
      either
p1:       North ⇒ SecondElement
p2:       East ⇒ Sum
      or
p3:       East ⇒ Sum
p4:       North ⇒ SecondElement
p5:       South ⇐ SecondElement
p6:       Sum ← Sum + FirstElement · SecondElement
p7:       West ⇐ Sum
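Go’s select statement implements exactly this selective input. A sketch of one cell, plus a tiny harness (`dotRow`, our addition, wires one row of cells to compute a single dot product; the full array would run SIZE rounds per cell):

```go
package main

import "fmt"

// multiplier is one cell of the process array. Per round it accepts the
// next element from the North and the partial sum from the East in
// either order (selective input, as in Algorithm 2.4), forwards the
// element South, and sends the updated sum West.
func multiplier(coeff int, north, east <-chan int, south, west chan<- int, rounds int) {
	for i := 0; i < rounds; i++ {
		var x, sum int
		select { // "either ... or ...": whichever input is ready first
		case x = <-north:
			sum = <-east
		case sum = <-east:
			x = <-north
		}
		south <- x
		sum += coeff * x
		west <- sum
	}
}

// dotRow wires len(coeffs) cells in a row and pushes one column vector
// through them, returning the dot product that reaches the left end.
func dotRow(coeffs, xs []int) int {
	n := len(coeffs)
	sums := make([]chan int, n+1) // east/west links between neighbours
	for i := range sums {
		sums[i] = make(chan int, 1)
	}
	for j := 0; j < n; j++ {
		north := make(chan int, 1)
		south := make(chan int, 1) // buffered, so no Sink process is needed
		north <- xs[j]
		go multiplier(coeffs[j], north, sums[j+1], south, sums[j], 1)
	}
	sums[n] <- 0 // the Zero process seeds the initial partial sum
	return <-sums[0]
}

func main() {
	// the bottom row of the earlier diagram: coefficients 7, 8, 9 and
	// the column (2, 2, 0) of B give 7·2 + 8·2 + 9·0 = 30
	fmt.Println(dotRow([]int{7, 8, 9}, []int{2, 2, 0})) // 30
}
```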
Multiplier Process in Promela

proctype Multiplier(byte Coeff;
                    chan North;
                    chan East;
                    chan South;
                    chan West)
{
  byte Sum, X;
  byte i;
  for (i : 0..(SIZE-1)) {
    if
    :: North ? X -> East ? Sum;
    :: East ? Sum -> North ? X;
    fi;
    South ! X;
    Sum = Sum + X*Coeff;
    West ! Sum;
  }
}
Algorithm 2.5: Dining philosophers with channels

channel of boolean forks[5]

philosopher i:
      boolean dummy
      loop forever
p1:       think
p2:       forks[i] ⇒ dummy
p3:       forks[i+1] ⇒ dummy
p4:       eat
p5:       forks[i] ⇐ true
p6:       forks[i+1] ⇐ true

fork i:
      boolean dummy
      loop forever
q1:       forks[i] ⇐ true
q2:       forks[i] ⇒ dummy
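In Go, each fork process can be modelled as a buffered channel holding a single token (send true, then wait to get it back). The sketch below is ours: it bounds the infinite loops to a fixed number of meals, and, unlike the slide’s version (which can deadlock when all five philosophers grab their left fork simultaneously), it breaks the symmetry for the last philosopher so the harness is guaranteed to terminate:

```go
package main

import (
	"fmt"
	"sync"
)

const N = 5

// philosopher i eats `meals` times, taking and returning its two forks.
// Each fork channel carries one token, standing in for the fork process.
func philosopher(i int, left, right chan bool, meals int, wg *sync.WaitGroup, eaten []int) {
	defer wg.Done()
	for m := 0; m < meals; m++ {
		// think
		first, second := left, right
		if i == N-1 {
			// Symmetry-breaking fix (not on the slide): the last
			// philosopher takes the right fork first, which rules out
			// the circular wait and hence deadlock.
			first, second = right, left
		}
		<-first    // forks[i] ⇒ dummy
		<-second   // forks[i+1] ⇒ dummy
		eaten[i]++ // eat (each goroutine writes only its own slot)
		second <- true // return the forks
		first <- true
	}
}

// dine runs all five philosophers and reports how often each ate.
func dine(meals int) []int {
	forks := make([]chan bool, N)
	for i := range forks {
		forks[i] = make(chan bool, 1)
		forks[i] <- true // fork process q1: forks[i] ⇐ true
	}
	eaten := make([]int, N)
	var wg sync.WaitGroup
	for i := 0; i < N; i++ {
		wg.Add(1)
		go philosopher(i, forks[i], forks[(i+1)%N], meals, &wg, eaten)
	}
	wg.Wait()
	return eaten
}

func main() {
	fmt.Println(dine(3)) // every philosopher eats 3 times: [3 3 3 3 3]
}
```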
Synchronous Message Passing

Recall that, when message passing is synchronous, the exchange of a message requires coordination between sender and receiver (sometimes called a handshaking mechanism). In other words, the sender is blocked until the receiver is ready to cooperate.

Examples
- MPI_Ssend in MPI
- synchronous languages such as Signal, Lustre, and Esterel
Synchronous Transition Diagrams

Definition
A synchronous transition diagram is a parallel composition P1 ∥ ... ∥ Pn of some (sequential) transition diagrams P1, ..., Pn called processes.

The processes Pi
- do not share variables
- communicate along unidirectional channels C, D, ... connecting at most 2 different processes by way of
  - output statements C ⇐ e for sending the value of expression e along channel C
  - input statements C ⇒ x for receiving a value along channel C into variable x
Edges in (Sequential) Transition Diagrams

For shared variable concurrency, labels b; f, where b is a Boolean condition and f is a state transformation, sufficed.

Example: an edge from location ℓ to ℓ′ labelled

    t = 1; in1 ← True

Now, we call such transitions internal.
I/O Transitions

We extend this notation to message passing by allowing the guard to be combined with an input or an output statement, giving edges from ℓ to ℓ′ labelled

    b; C ⇒ x; f
or
    b; C ⇐ e; f
Example 1

Let P = P1 ∥ P2 be given as:

    P1:  s1 --[C ⇐ 1]--> s2
    P2:  t1 --[C ⇒ x]--> t2

Obviously, {True} P {x = 1}, but how to prove it?