Concurrent Counting is harder than Queuing. Costas Busch, Rensselaer Polytechnic Institute. Srikanta Tirthapura, Iowa State University. 1
Arbitrary graph 2
Distributed Counting: some processors request a counter value. 3
Distributed Counting: final state, in which the requesters hold the distinct counts 1, 2, 3, 4. 4
Distributed Queuing: processors A, B, C, D perform the enqueue operations Enq(A), Enq(B), Enq(C), Enq(D). 5
Distributed Queuing: final state. Each node learns its predecessor (A: Previous = nil; D: Previous = A; B: Previous = D; C: Previous = B), forming the queue A, D, B, C from head to tail. 6
Applications. Counting: parallel scientific applications, load balancing (counting networks). Queuing: distributed directories for mobile objects, distributed mutual exclusion. 7
Ordered Multicast: multicast with the condition that all messages are received at all nodes in the same order. Either queuing or counting will do. Which is more efficient? 8
Queuing vs Counting? Both induce total orders. Queuing = finding one's predecessor, which needs only local knowledge. Counting = finding one's rank, which needs global knowledge. 9
Problem: Is there a formal sense in which counting is a harder problem than queuing? Reductions don't seem to help. 10
Our Result: Concurrent counting is harder than concurrent queuing on a variety of graphs, including many common interconnection topologies: the complete graph, the mesh, the hypercube, and perfect binary trees. 11
Model: A synchronous system G = (V, E) whose edges have unit delay. Congestion: each node can process only one message in a single time step. Concurrent one-shot scenario: a set R ⊆ V of nodes issue queuing (or counting) operations at time zero, and no more operations are added later. 12
Cost Model: C_Q(v) is the delay until v gets back its queuing result. The cost of algorithm A on request set R is C_Q(A, R) = Σ_{v∈R} C_Q(v). Queuing Complexity = min_A max_{R⊆V} C_Q(A, R). Counting complexity is defined similarly. 13
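In code, the per-run cost reads as a sum of per-node delays. This is a minimal sketch; the dictionary-based representation and the node names are hypothetical, not from the slides:

```python
# Sketch of the cost model. delays[v] = time step at which node v
# receives its queuing (or counting) result; R is the request set.

def cost(delays, R):
    """C(A, R): total cost of one run, summed over all requesters in R."""
    return sum(delays[v] for v in R)

# Example: three requesters whose results arrive at times 1, 3 and 4.
delays = {"u": 1, "v": 3, "w": 4}
print(cost(delays, {"u", "v", "w"}))  # 8
```

The complexity measure then takes the worst request set R and the best algorithm A over such runs.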
Lower Bounds on Counting: For arbitrary graphs, Counting Cost = Ω(n log* n). For graphs with diameter D, Counting Cost = Ω(D²). 14
Theorem: For graphs with diameter D, Counting Cost = Ω(D²). Proof: consider an arbitrary algorithm for counting. 15
Take a shortest path of length D in the graph. 16
Make the nodes along this path count; in the figure the eight path nodes receive the counts 6, 2, 7, 1, 5, 3, 8, 4. 17
The node that receives count k must be aware of the k-1 processors with smaller counts. Since at most 2r other path nodes lie within distance r, that node decides after at least (k-1)/2 time steps. 18
Summing the delays (k-1)/2 over all counts k = 1, ..., D gives Counting Cost = Ω(D²). End of proof. 19
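Summing the per-node delays makes the Ω(D²) bound explicit:

```latex
\text{Counting Cost} \;\ge\; \sum_{k=1}^{D}\frac{k-1}{2}
\;=\; \frac{D(D-1)}{4}
\;=\; \Omega(D^2).
```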
Theorem: For arbitrary graphs, Counting Cost = Ω(n log* n). Proof: consider an arbitrary algorithm for counting. 20
It suffices to prove the bound for the complete graph with n nodes: any algorithm on any graph with n nodes can be simulated on the complete graph. 21
The initial state affects the outcome. Initial state: red nodes count, blue nodes don't count; node v is red. 22
Final state: the red nodes receive the counts 1 through 5, with v among them. 23
A different initial state (red: count, blue: don't count), again containing v. 24
Final state: this time the red nodes receive the counts 1 through 8. 25
Let A(v) be the set of nodes whose input may affect the decision of v. 26
Suppose that there is an initial state for which v decides count k. Then |A(v)| ≥ k. 27
Two initial states that agree on A(v) give the same result for v. 28
If |A(v)| < k, some initial state agreeing on A(v) has fewer than k counting nodes, so v would have to decide less than k, a contradiction. Thus |A(v)| ≥ k. 29
Suppose that v decides at time t. We show: |A(v)| ≤ 2^{2^{...^{2}}}, a tower of 2t twos. 30
Since v decides at time t and |A(v)| ≥ k, the tower bound |A(v)| ≤ 2^{2^{...^{2}}} (2t twos) gives t = Ω(log* k). 31
Cost of node v: t = Ω(log* k). If n nodes wish to count, summing Ω(log* k) over k = 1, ..., n gives Counting Cost = Ω(n log* n). 32
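The arithmetic behind the sum can be checked numerically. A minimal sketch, where `log_star` is the iterated logarithm:

```python
import math

# Numerical illustration of the bound: log*(k) is the iterated
# logarithm, and the sum of log*(k) over k = 1..n grows like n * log*(n).

def log_star(x):
    """How many times log2 must be applied before x drops to <= 1."""
    count = 0
    while x > 1:
        x = math.log2(x)
        count += 1
    return count

n = 1 << 16
total = sum(log_star(k) for k in range(1, n + 1))
# The upper half of the counts alone contributes at least (n/2) * log*(n/2).
assert total >= (n // 2) * log_star(n // 2)
print(log_star(n))  # 4
```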
Define A(v, t): the nodes that affect v up to time t, and B(v, t): the nodes that v affects up to time t. Let a(t) = max_x |A(x, t)| and b(t) = max_x |B(x, t)|. 33
After time step t+1 the sets grow to A(v, t+1) and B(v, t+1). 34
A(v, t) ⊆ A(v, t+1). 35
Consider a node z that is eligible to send a message to v at time t+1: there is an initial state in which z actually sends a message to v. 36
Consider two nodes s and z, both eligible to send a message to v at time t+1. Suppose that A(s, t) ∩ A(z, t) = ∅. Then there is an initial state in which both send a message to v. 37
However, v can receive only one message at a time. 38
Therefore: A(s, t) ∩ A(z, t) ≠ ∅. 39
Number of nodes like s: every such s satisfies A(s, t) ∩ A(z, t) ≠ ∅, and each of the at most a(t) nodes in A(z, t) affects at most b(t) nodes, so there are at most a(t) · b(t) of them. 40
Therefore: |A(v, t+1)| ≤ |A(v, t)| + a(t) · (a(t) · b(t)), since each of the at most a(t) · b(t) eligible senders contributes at most a(t) new nodes. 41
Thus: a(t+1) ≤ a(t) · (1 + a(t) · b(t)). We can also show: b(t+1) ≤ b(t) · (1 + 2a(t)). Together these give: a(t) ≤ 2^{2^{...^{2}}}, a tower of 2t twos. End of proof. 42
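The tower-like growth of these recurrences can be checked numerically. A sketch with assumed initial values a(0) = b(0) = 1 (illustrative, not from the slides):

```python
# Even the upper-bound recurrences explode like a tower of twos, which
# is why reaching |A(v)| >= k forces t = Omega(log* k).

def tower(h):
    """Tower of h twos: 2^2^...^2 (h twos)."""
    v = 1
    for _ in range(h):
        v = 2 ** v
    return v

a, b = 1, 1
for t in range(1, 3):
    # the recurrences from the proof (upper bounds, not exact values)
    a, b = a * (1 + a * b), b * (1 + 2 * a)
    assert a <= tower(2 * t)  # a(t) stays below a tower of 2t twos

print(a, b)  # 14 15
```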
Upper Bound on Queuing: For graphs with spanning trees of constant degree, Queuing Cost = O(n log n). For graphs whose spanning trees are lists or perfect binary trees, Queuing Cost = O(n). 43
An arbitrary graph 44
Spanning tree 45
Spanning tree 46
Distributed Queue: A is enqueued first (Previous = nil); A is both the tail and the head. 47
B issues enq(B) (Previous = ?); A (Previous = nil) is still the tail and the head. 48
The enq(B) request is forwarded hop by hop toward the tail A. 49-52
A informs B: Previous = A. B becomes the new tail; A remains the head. 53
Concurrent Enqueue Requests: B issues enq(B) and C issues enq(C) (both with Previous = ?); A (Previous = nil) is the tail and the head. 54
Both requests travel toward the tail A. 55-56
enq(C) reaches A first: A informs C (Previous = A) and C becomes the new tail, while enq(B) is still in transit. 57
enq(B) is now forwarded toward the new tail C. 58
C informs B: Previous = C. B becomes the new tail; the queue is A, C, B from head to tail. 59
Final state: A (Previous = nil), C (Previous = A), B (Previous = C); A is the head and B is the tail. 60
Paths of enqueue requests: each request follows a path in the tree to the node that becomes its predecessor. 61
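The enqueue walkthrough above can be sketched sequentially. This is a simplification: message routing on the spanning tree and true concurrency are omitted, and only the tail-chasing bookkeeping is shown:

```python
# Each request reaches the current tail, which informs the requester of
# its predecessor; the requester then becomes the new tail.

def enqueue(state, node):
    state["previous"][node] = state["tail"]  # e.g. "A informs B: Previous = A"
    state["tail"] = node                     # the requester is the new tail

state = {"tail": None, "previous": {}}
for node in ["A", "C", "B"]:  # enq(A); then enq(C) wins the race; then enq(B)
    enqueue(state, node)

print(state["previous"])  # {'A': None, 'C': 'A', 'B': 'C'}
```

This reproduces the final state of the walkthrough: A at the head, C after A, and B as the tail.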
Nearest-Neighbor TSP tour on the spanning tree over the nodes A-F; A is the origin (the first element in the queue). 62
Repeatedly visit the closest unused node in the tree. 63-64
The resulting Nearest-Neighbor TSP tour. 65
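The greedy rule from the slides can be sketched over hop distances in a spanning tree. The tree below is illustrative, not the exact tree in the figure:

```python
from collections import deque

tree = {"A": ["B"], "B": ["A", "C", "D"], "C": ["B"],
        "D": ["B", "E"], "E": ["D"]}

def hop_dist(tree, s):
    """BFS distances from s to every node of the tree."""
    d, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        for w in tree[u]:
            if w not in d:
                d[w] = d[u] + 1
                q.append(w)
    return d

def nn_tour(tree, origin, requests):
    """Repeatedly visit the closest unused requester, as on the slides."""
    order, cur, left, total = [origin], origin, set(requests), 0
    while left:
        d = hop_dist(tree, cur)
        cur = min(left, key=lambda v: (d[v], v))  # closest unused node
        total += d[cur]
        order.append(cur)
        left.remove(cur)
    return order, total

order, length = nn_tour(tree, "A", {"B", "C", "D", "E"})
print(order, length)  # ['A', 'B', 'C', 'D', 'E'] 5
```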
[Herlihy, Tirthapura, Wattenhofer, PODC 2001] For a spanning tree of constant degree: Queuing Cost ≤ 2 × Nearest-Neighbor TSP length. 66
[Rosenkrantz, Stearns, Lewis, SICOMP 1977] If a weighted graph satisfies the triangle inequality: Nearest-Neighbor TSP length ≤ log n × Optimal TSP length. 67
A weighted graph of the distances between the nodes A-F (edge weights as in the figure). 68
The distance weights satisfy the triangle inequality: w(e1) ≤ w(e2) + w(e3). 69
The Nearest-Neighbor TSP tour on this weighted graph has length 8. 70
The optimal TSP tour has length 6. 71
It can be shown that the optimal TSP length is at most 2n (n = number of nodes in the graph), since a tour that walks the spanning tree visits every tree edge twice. 72
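The "every edge twice" argument can be sketched as a depth-first walk of the spanning tree; it traverses each of the n-1 tree edges once down and once back up, so a tour of all nodes has length at most 2(n-1) < 2n. The tree below is illustrative:

```python
def dfs_tour_length(tree, root):
    """Length of a depth-first walk that visits all nodes of the tree."""
    length, seen = 0, {root}

    def walk(u):
        nonlocal length
        for w in tree[u]:
            if w not in seen:
                seen.add(w)
                length += 2  # descend the edge and later return over it
                walk(w)

    walk(root)
    return length

tree = {"A": ["B"], "B": ["A", "C", "D"], "C": ["B"],
        "D": ["B", "E"], "E": ["D"]}
print(dfs_tour_length(tree, "A"))  # 8 == 2 * (5 - 1)
```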
Therefore, for a constant-degree spanning tree: Queuing Cost = O(Nearest-Neighbor TSP) = O(Optimal TSP × log n) = O(n log n). 73
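Chaining the three bounds (the PODC 2001 factor 2, the SICOMP 1977 factor log n, and the 2n bound on the optimal tour):

```latex
\text{Queuing Cost}
\;\le\; 2 \cdot \text{NN-TSP}
\;\le\; 2\log n \cdot \text{OPT-TSP}
\;\le\; 2\log n \cdot 2n
\;=\; O(n \log n).
```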
For special cases we can do better: if the spanning tree is a list or a balanced binary tree, Queuing Cost = O(n). 74