Consensus over Stochastically Switching Directed Topologies
S. Vanka, V. Gupta and M. Haenggi
Department of Electrical Engineering, University of Notre Dame
IEEE IT School 2009
Outline
1 Introduction
2 Problem Formulation and Main Results
3 Key Ingredients of the Proof
  Expected deviation from average consensus point is zero
  Constructing a martingale
  Bounding martingale differences
  Using the Azuma-Hoeffding Inequality
4 Applications
  Consensus over Fading Channels
5 Conclusions
6 Future Work
Introduction: Reaching Consensus
Decentralized algorithm for a system of n agents 1, 2, ..., n to achieve agreement on the value of their states
  Initial state x_i^o for the i-th node
  Iterative exchange of information specified by a time-varying communication digraph G = (V, E), where
    Vertex set V: set of all participating agents 1, 2, ..., n
    An edge e_ji ≜ (j, i) ∈ E iff the message from node j is used by node i
Average Consensus Algorithm: x_t = W_{t-1} x_{t-1}, where x_t ≜ [x_t^(1) x_t^(2) ... x_t^(n)]^T is the state vector
  Average consensus point: x*_av = n^{-1} 1* x_o
  W_t = I − h L_t, where L_t is the Laplacian of G_t
  For all t, G_t balanced ⇒ W_t doubly stochastic ⇒ x_t can reach average consensus
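A minimal simulation sketch of the update x_{t+1} = (I − h L_t) x_t on a fixed balanced digraph (here a directed 4-cycle); the graph, step size h, and initial state are illustrative assumptions rather than anything taken from the slides.

```python
import numpy as np

# Illustrative balanced digraph on n = 4 nodes: a directed cycle (in-degree = out-degree = 1)
adj = np.array([[0, 1, 0, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1],
                [1, 0, 0, 0]], dtype=float)
L = np.diag(adj.sum(axis=1)) - adj   # digraph Laplacian L = D_out - A

h = 0.3                              # step size; h * max_i L_ii < 1 keeps W stochastic with positive diagonal
W = np.eye(4) - h * L                # doubly stochastic because the digraph is balanced

x = np.array([1.0, 3.0, -2.0, 6.0])  # initial state x_o
avg = x.mean()                       # average consensus point n^{-1} 1* x_o = 2.0

for t in range(100):
    x = W @ x                        # x_{t+1} = W x_t

print(np.round(x, 6), avg)           # all entries converge to 2.0
```

Because the cycle is balanced, W is doubly stochastic and each step preserves the average, so the trajectory ends exactly at the average consensus point.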
Introduction: Prior Work
Deterministically varying communication topologies
  Roots going back to Tsitsiklis (1984)
  Problems studied include criteria for convergence, optimizing the convergence rate, etc. (Jadbabaie, Lin and Morse (2003), Xiao and Boyd (2004), Olfati-Saber and Murray (2004), Ren and Beard (2006), ...)
Randomly varying communication topologies
  Problems studied include randomized gossip, convergence on random graphs, and consensus in networks with noise and packet losses (Boyd et al. (2006), Hatano and Mesbahi (2005), Porfiri and Stilwell (2007), Salehi and Jadbabaie (2008), Fagnani and Zampieri (2008), Huang (2007a&b), Hovareshti et al. (2008), ...)
Introduction: Average Consensus over Random Networks
State after t + 1 iterations: x_{t+1} = W_t W_{t−1} ⋯ W_0 x_o
⇒ Time evolution determined by the product W_t W_{t−1} ⋯ W_0 of random stochastic matrices
Suppose that {W_t : t = 0, 1, ...} is a matrix-valued i.i.d. sequence
  All update matrices W_t have positive diagonal elements
  The linear system x_{t+1} = E[W] x_t asymptotically reaches consensus
It is known that
  The state x_t almost surely reaches consensus (Salehi-Jadbabaie 2008)
  The consensus point is a random variable
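The following Monte Carlo sketch illustrates the two claims above: with i.i.d. stochastic updates (positive diagonals) whose mean E[W] drives a consensus-reaching system, every trajectory reaches consensus, but the consensus point itself is random. The update model, each directed edge of a bidirectional ring present independently with probability 0.5, is an assumption made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, h = 4, 0.3
x_o = np.array([1.0, 3.0, -2.0, 6.0])

def random_W():
    """One i.i.d. update matrix: each directed edge of a bidirectional ring is
    present independently with probability 0.5 (assumed model, not from the slides).
    Row-stochastic with positive diagonal; E[W] is doubly stochastic."""
    adj = np.zeros((n, n))
    for i in range(n):
        for j in ((i + 1) % n, (i - 1) % n):
            if rng.random() < 0.5:
                adj[i, j] = 1.0
    return np.eye(n) - h * (np.diag(adj.sum(axis=1)) - adj)

for run in range(5):
    x = x_o.copy()
    for t in range(300):
        x = random_W() @ x           # x_{t+1} = W_t x_t
    # within a run all entries agree; across runs the agreed-upon value differs
    print(f"run {run}: state ≈ {np.round(x, 3)}")
```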
Problem Formulation and Main Results: Problem Formulation
Linear dynamics x_{t+1} = W_t x_t, fixed initial state x_0
Ideally: reach the average consensus point 1 (n^{-1} 1* x_0)
But what if not all W_t are balanced?
Matrix sequence {W_t}:
  An i.i.d. sequence of stochastic matrices
  All W_t have positive diagonal entries
  The system x_{t+1} = E[W] x_t reaches average consensus ⇒ x_t almost surely reaches consensus
Can we quantify the resulting deviation from the average consensus point?
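A sketch of how this deviation could be quantified empirically under the same assumed random-ring model: run many independent trials and record δ_T = x_av(T) − x_av(0) at a large T. Individual realizations of W_t are generally not balanced, yet the empirical mean of δ_T is near zero, while its spread is what the concentration results that follow control.

```python
import numpy as np

rng = np.random.default_rng(1)
n, h, T, trials = 4, 0.3, 300, 400
x_o = np.array([1.0, 3.0, -2.0, 6.0])

def random_W():
    # Assumed model: each directed edge of a bidirectional ring present independently w.p. 0.5,
    # so individual realizations are generally NOT balanced, but E[W] is doubly stochastic.
    adj = np.zeros((n, n))
    for i in range(n):
        for j in ((i + 1) % n, (i - 1) % n):
            if rng.random() < 0.5:
                adj[i, j] = 1.0
    return np.eye(n) - h * (np.diag(adj.sum(axis=1)) - adj)

deviations = []
for _ in range(trials):
    x = x_o.copy()
    for t in range(T):
        x = random_W() @ x
    deviations.append(x.mean() - x_o.mean())    # δ_T = x_av(T) − x_av(0)

deviations = np.array(deviations)
print("empirical mean of δ_T:", round(float(deviations.mean()), 4))  # ≈ 0
print("empirical std  of δ_T:", round(float(deviations.std()), 4))   # the spread the results bound
```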
Problem Formulation and Main Results: Geometric Interpretation
(Figure: geometric illustration for n = 2)
Instantaneous average: x_av(t) ≜ n^{-1} 1* x_t
For balanced communication graphs, x_av(t) = x_av(0) = average consensus point
Interested in characterizing the deviation x_t − 1 x_av(0) of the instantaneous state from the average consensus point
Problem Formulation and Main Results: Geometric Interpretation (continued)
The deviation x_t − 1 x_av(0) can be written as the sum of e_t and r_t, where
  e_t ≜ 1 (x_av(t) − x_av(0)): deviation of the instantaneous average
  r_t ≜ x_t − 1 x_av(t): disagreement
Define δ_t ≜ x_av(t) − x_av(0), so that ‖e_t‖ = |δ_t|
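A short sketch computing this decomposition along one simulated trajectory (same assumed random-ring model), verifying x_t − 1 x_av(0) = e_t + r_t and tracking δ_t and the disagreement.

```python
import numpy as np

rng = np.random.default_rng(2)
n, h = 4, 0.3
x_o = np.array([1.0, 3.0, -2.0, 6.0])

def random_W():
    # Assumed model: directed ring edges (both directions) present independently w.p. 0.5
    adj = np.zeros((n, n))
    for i in range(n):
        for j in ((i + 1) % n, (i - 1) % n):
            if rng.random() < 0.5:
                adj[i, j] = 1.0
    return np.eye(n) - h * (np.diag(adj.sum(axis=1)) - adj)

ones = np.ones(n)
x_av0 = x_o.mean()
x = x_o.copy()
for t in range(1, 51):
    x = random_W() @ x
    x_av_t = x.mean()                      # instantaneous average x_av(t)
    e_t = ones * (x_av_t - x_av0)          # deviation of the instantaneous average
    r_t = x - ones * x_av_t                # disagreement vector
    delta_t = x_av_t - x_av0               # δ_t
    # decomposition check: x_t − 1 x_av(0) = e_t + r_t
    assert np.allclose(x - ones * x_av0, e_t + r_t)
    if t % 10 == 0:
        print(f"t={t:3d}  δ_t={delta_t:+.4f}  ‖r_t‖_∞={np.abs(r_t).max():.4f}")
```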
Problem Formulation and Main Results: Our Results
For a fixed initial state x_o, let W̄ ≜ E[W] and P ≜ I − n^{-1} 1 1*. Let 1 = λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_n > −1 be the eigenvalues of W̄, and µ ≜ max(|λ_2|, |λ_n|). Then for all ε > 0 and all t:
  Deviation δ_t of the instantaneous average from the average consensus point:
    P(|δ_t| ≥ ε) ≤ min(1, 2 exp(−ε² β(t)))
  Distance r_t from the consensus subspace:
    P(‖r_t − P W̄^t x_o‖_∞ ≥ ε) ≤ min(1, 2 exp(−ε² β(t)))
where β(t) ≜ (1 − µ²) / (2 C² ‖x_o‖_∞² (1 − µ^{2t})).
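A sketch of how the bound can be evaluated for the assumed random-ring model: form W̄, compute its (real) eigenvalues and µ, then β(t) and the tail bound. The constant C is not specified on this slide, so it is left as a placeholder parameter (set to 1 here); the printed numbers therefore only illustrate the mechanics of evaluating the bound, not its tightness.

```python
import numpy as np

n, h, C = 4, 0.3, 1.0          # C: constant from the paper, not given on the slide; placeholder
x_o = np.array([1.0, 3.0, -2.0, 6.0])

# Mean update matrix W̄ for the assumed bidirectional-ring model (each edge present w.p. 0.5)
W_bar = np.zeros((n, n))
for i in range(n):
    W_bar[i, i] = 1.0 - h                               # 1 − h · E[out-degree], E[out-degree] = 1
    W_bar[i, (i + 1) % n] = W_bar[i, (i - 1) % n] = 0.5 * h

lam = np.sort(np.linalg.eigvalsh(W_bar))[::-1]          # real eigenvalues, 1 = λ_1 ≥ ... ≥ λ_n
mu = max(abs(lam[1]), abs(lam[-1]))                     # µ = max(|λ_2|, |λ_n|)

def beta(t):
    # β(t) = (1 − µ²) / (2 C² ‖x_o‖_∞² (1 − µ^{2t}))
    return (1 - mu**2) / (2 * C**2 * np.linalg.norm(x_o, np.inf)**2 * (1 - mu**(2 * t)))

eps = 2.0
for t in [1, 5, 20, 100]:
    bound = min(1.0, 2 * np.exp(-eps**2 * beta(t)))
    print(f"t={t:3d}  µ={mu:.2f}  β(t)={beta(t):.4f}  P(|δ_t| ≥ {eps}) ≤ {bound:.3f}")
```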
Key Ingredients of the Proof
Will highlight the key results used to derive the concentration bound on the deviation δ_t from the average consensus point
The proof has four main ingredients:
  Expected deviation from the average consensus point is zero
  Constructing a martingale
  Bounding the differences between successive terms of this sequence by leveraging the connectivity of W̄
  Using the Azuma-Hoeffding inequality to bound the deviation δ_t from its mean
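For reference, the Azuma-Hoeffding inequality invoked in the last step, stated in its standard two-sided form (not reproduced from the slides): a martingale with bounded differences has sub-Gaussian deviation from its starting value.

```latex
% Azuma-Hoeffding: if (M_k)_{k \ge 0} is a martingale with |M_k - M_{k-1}| \le c_k a.s., then
\[
  \Pr\bigl(\,|M_t - M_0| \ge \epsilon\,\bigr)
  \;\le\; 2\exp\!\left(-\frac{\epsilon^{2}}{2\sum_{k=1}^{t} c_k^{2}}\right),
  \qquad \epsilon > 0 .
\]
```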
Key Ingredients of the Proof: Expected Deviation in the Consensus Subspace
Can show that E[δ_t] = 0, exploiting:
(1) The matrix sequence {W_t} is i.i.d.
(2) W̄ is doubly stochastic

  E[δ_t] = E[ n^{-1} 1* ( (∏_{k=0}^{t−1} W_k) x_o − x_o ) ]
         = n^{-1} 1* ( W̄^t − I ) x_o                 by (1)
         = n^{-1} ( 1* W̄^t − 1* ) x_o = 0            by (2)
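A numerical sanity check of the two ingredients under the assumed random-ring model: (2) 1* W̄^t = 1* because W̄ is doubly stochastic, and (1) E[W_{t−1} ⋯ W_0] = W̄^t by independence (checked here only approximately, by Monte Carlo); together they give E[δ_t] = n^{-1} 1* (W̄^t − I) x_o = 0.

```python
import numpy as np

rng = np.random.default_rng(3)
n, h, t = 4, 0.3, 5
x_o = np.array([1.0, 3.0, -2.0, 6.0])
ones = np.ones(n)

def random_W():
    # Assumed model: each directed edge of a bidirectional ring present independently w.p. 0.5
    adj = np.zeros((n, n))
    for i in range(n):
        for j in ((i + 1) % n, (i - 1) % n):
            if rng.random() < 0.5:
                adj[i, j] = 1.0
    return np.eye(n) - h * (np.diag(adj.sum(axis=1)) - adj)

# Mean matrix W̄ = E[W] for this model
W_bar = np.zeros((n, n))
for i in range(n):
    W_bar[i, i] = 1.0 - h
    W_bar[i, (i + 1) % n] = W_bar[i, (i - 1) % n] = 0.5 * h
W_bar_t = np.linalg.matrix_power(W_bar, t)

# Ingredient (2): W̄ doubly stochastic, so 1* W̄^t = 1*
print(np.allclose(ones @ W_bar_t, ones))                     # True

# Ingredient (1): independence gives E[W_{t-1} ... W_0] = W̄^t (Monte Carlo check)
est = np.mean([np.linalg.multi_dot([random_W() for _ in range(t)])
               for _ in range(10000)], axis=0)
print(np.round(est - W_bar_t, 2))                            # ≈ 0 up to Monte Carlo error

# Hence E[δ_t] = n^{-1} 1* (W̄^t − I) x_o = 0
print(ones @ (W_bar_t - np.eye(n)) @ x_o / n)                # 0.0 (up to floating point)
```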