distributed motion coordination of robotic networks




  1. Distributed motion coordination of robotic networks
     Lecture 5 – agreement
     Jorge Cortés
     Applied Mathematics and Statistics, Baskin School of Engineering
     University of California at Santa Cruz
     http://www.ams.ucsc.edu/~jcortes
     Summer School on Geometry, Mechanics and Control
     Centro Internacional de Encuentros Matemáticos
     Castro Urdiales, June 25-29, 2007

  2. Roadmap
     Lecture 1: Introduction, examples, and preliminary notions
     Lecture 2: Models for cooperative robotic networks
     Lecture 3: Rendezvous
     Lecture 4: Deployment
     Lecture 5: Agreement

  3. Today
     1 Agreement – basic coordination capability
     2 Non-deterministic continuous-time dynamical systems – nonsmooth stability analysis
     3 Algebraic graph theory – interplay between graph theory and linear algebra

  4. Outline
     1 Agreement
       Quick graph theory review
     2 Design of distributed algorithms for χ-consensus
       Implications on allowable interconnection topologies
       Systematic design of distributed algorithms for χ-consensus
       Exponential convergence of power-mean consensus algorithms
     3 Further results: switching topologies and finite-time convergence
     4 Conclusions

  5. Reaching agreement is critical
     Coordination tasks such as
     - self-organization
     - formation pattern
     - distributed estimation
     - parallel processing
     require individual agents to
     - agree on the identity of a leader
     - jointly synchronize their operation
     - decide on the specific pattern to form
     - fuse the information gathered

  6. Agreement
     Objective: given χ : R^n → R, agree on the value of χ, i.e., for (p_1(0), …, p_n(0)) ∈ R^n,
         p_i(t) → χ(p_1(0), …, p_n(0))
     Directed graph G as communication topology (fixed, for simplicity)
     Traditional topic in computer science
     In cooperative control, consensus for computing, e.g.,
     - weighted-least-squares estimate of a noisy signal
     - product of conditionally independent probabilities
     - statistical moments of spatially distributed measurements

  7. Let’s agree!
     We want to compute the average age of the people sitting at the table
     Each of us can only talk to his/her neighbors to the left and to the right

  8. Let’s agree!
     We want to compute the average age of the people sitting at the table
     Each of us can only talk to his/her neighbors to the left and to the right
     How could we do it? For instance, by message passing:
     - I tell my right-neighbor and left-neighbor my age. They also tell me the same information about themselves
     - When I receive the information about the age of my right-neighbor, I pass it to my left-neighbor, and the other way around
     - We keep doing this message passing until everybody knows everybody’s age, and then each one of us can compute the average
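
The message-passing scheme above can be sketched as synchronous flooding on a ring; the ages, the synchronous-round model, and the termination test are illustrative assumptions, not part of the lecture:

```python
# Sketch of the message-passing (flooding) scheme on a ring:
# each person forwards everything they know to both neighbors each round,
# until everyone knows all ages; then each computes the average locally.

def flood_average(ages):
    n = len(ages)
    # known[i] maps person index -> age, as learned by person i
    known = [{i: ages[i]} for i in range(n)]
    rounds = 0
    while any(len(k) < n for k in known):
        new_known = [dict(k) for k in known]
        for i in range(n):
            # send my whole dictionary to my left and right neighbors
            for nbr in ((i - 1) % n, (i + 1) % n):
                new_known[nbr].update(known[i])
        known = new_known
        rounds += 1
    # every person now knows every age and can average
    return sum(known[0].values()) / n, rounds

avg, rounds = flood_average([30, 42, 25, 51, 38])
```

Note how memory per person grows linearly with the table size, one of the drawbacks raised on the next slide.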

  9. Message passing
     Drawbacks of this approach?
     - With more people sitting at the table, the algorithm takes longer
     - How many messages do I send/receive, and how much memory do I need to keep track of them?
     - How do I know when I have received everybody’s ages?
     - What if somebody sits down at the table while we are running our message passing? Or if somebody leaves?

  10. Let’s agree – averaging law
     Here is an alternative:
     - I define a variable p, which I initially set to my own age
     - At each communication round, I send my current p
     - When I receive p_right from my right-neighbor and p_left from my left-neighbor, I recompute my p as follows:
         p_new = (1/2) p_old + (1/4)(p_right + p_left)

  11. Let’s agree – averaging law
     Here is an alternative:
     - I define a variable p, which I initially set to my own age
     - At each communication round, I send my current p
     - When I receive p_right from my right-neighbor and p_left from my left-neighbor, I recompute my p as follows:
         p_new = (1/2) p_old + (1/4)(p_right + p_left)
     Amazingly, this algorithm makes p tend to the average age of everybody sitting at the table!
     - Convergence is exponentially fast
     - I don’t need to know how many of us there are
     - I need very little memory to run the algorithm
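
A minimal simulation of the averaging law on a ring of five people (the ages are made up for illustration):

```python
# Each person updates p_new = (1/2) p_old + (1/4)(p_right + p_left),
# synchronously, on a ring of n people.

def averaging_round(p):
    n = len(p)
    return [0.5 * p[i] + 0.25 * (p[(i + 1) % n] + p[(i - 1) % n])
            for i in range(n)]

ages = [30.0, 42.0, 25.0, 51.0, 38.0]
p = ages[:]
for _ in range(200):
    p = averaging_round(p)
# every entry of p is now (numerically) the average age, 37.2
```

The update matrix is doubly stochastic, so the average is preserved exactly at every round while the disagreement shrinks by a constant factor per round, which is the exponential convergence claimed on the slide.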

  12. Outline
     1 Agreement
       Quick graph theory review
     2 Design of distributed algorithms for χ-consensus
       Implications on allowable interconnection topologies
       Systematic design of distributed algorithms for χ-consensus
       Exponential convergence of power-mean consensus algorithms
     3 Further results: switching topologies and finite-time convergence
     4 Conclusions

  13. Graph-theoretic notions
     G = (V, E, A) weighted digraph, with E ⊆ V × V and A = (a_ij) ∈ R^{n×n}_{≥0}
     Weighted out-degree and in-degree:
         d_out(i) = Σ_{j=1}^n a_ij   and   d_in(i) = Σ_{j=1}^n a_ji
     G is weight-balanced if each vertex has equal in- and out-degree
     G is topologically balanced if each vertex has the same number of incoming and outgoing edges

  14. Laplacian matrix – connecting algebra and graph theory
     The graph Laplacian of the weighted digraph G is
         L(G) = D_out(G) − A(G)
     Lemma (Properties of the Laplacian matrix). The following statements hold:
     1 L(G) 1_n = 0_n
     2 G is undirected if and only if L(G) is symmetric
     3 if G is undirected, then L(G) is positive semidefinite
     4 G contains a globally reachable vertex if and only if rank L(G) = n − 1
     5 G is weight-balanced if and only if 1_n^T L(G) = 0_n^T if and only if Sym(L(G)) = (1/2)(L(G) + L(G)^T) is positive semidefinite
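
Properties 1 and 5 can be checked numerically; the 3-node directed cycle with unit weights used below is an illustrative choice (it is weight-balanced, since every vertex has d_out = d_in = 1):

```python
# L(G) = D_out(G) - A(G) for a weight matrix A, and a check that
# rows sum to zero (L 1_n = 0_n) and, for a weight-balanced digraph,
# columns sum to zero as well (1_n^T L = 0_n^T).

def laplacian(A):
    n = len(A)
    return [[sum(A[i]) - A[i][j] if i == j else -A[i][j]
             for j in range(n)] for i in range(n)]

# directed 3-cycle with unit weights: 0 -> 1 -> 2 -> 0
A = [[0, 1, 0],
     [0, 0, 1],
     [1, 0, 0]]
L = laplacian(A)

row_sums = [sum(row) for row in L]                              # L(G) 1_n
col_sums = [sum(L[i][j] for i in range(3)) for j in range(3)]   # 1_n^T L(G)
```

For a digraph that is not weight-balanced, the row sums would still vanish but the column sums would not.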

  15. Disagreement function
     Disagreement function:
         Φ_G(P) = (1/2) Σ_{i,j=1}^n a_ij (p_j − p_i)²
     - If G is weakly connected, Φ_G(P) = 0 iff all agents are in agreement
     - If G is weight-balanced, Φ_G(P) = P^T L(G) P
     - If G is weight-balanced and weakly connected,
         λ_n(Sym(L)) ‖P − Ave(P) 1‖² ≥ Φ_G(P) ≥ λ_2(Sym(L)) ‖P − Ave(P) 1‖²
     - For G undirected, λ_2 is the algebraic connectivity

  16. Stating the consensus problem
     Network of agents: ṗ_i = u_i, i ∈ {1, …, n}
     Continuous χ : V ⊂ R^n → R
     Continuous u : V ⊂ R^n → R^n asymptotically achieves χ-consensus if, for any P(0) = (p_1(0), …, p_n(0)) ∈ V, any solution of ṗ_i = u_i starting at P(0) stays in V and
         p_i(t) → χ(P(0)),   as t → +∞
     Problem: given a weighted digraph G, design u such that
     - u is distributed over G, and
     - u asymptotically achieves χ-consensus

  17. Average consensus via gradient descent
     Disagreement function:
         Φ_G(P) = (1/2) Σ_{(i,j)∈E} (p_j − p_i)²
     Gradient descent of Φ_G:
         ṗ_i = −∂Φ_G/∂p_i = Σ_{j∈N_G(i)} (p_j − p_i)
     With G connected, LaSalle implies consensus is reached. To what?

  18. Average consensus via gradient descent
     Disagreement function:
         Φ_G(P) = (1/2) Σ_{(i,j)∈E} (p_j − p_i)²
     Gradient descent of Φ_G:
         ṗ_i = −∂Φ_G/∂p_i = Σ_{j∈N_G(i)} (p_j − p_i)
     With G connected, LaSalle implies consensus is reached. To what?
     Note that χ_ave = (1/n) Σ_{i=1}^n p_i is conserved along trajectories:
         L_{−∇Φ_G}(χ_ave) = (1/n) Σ_{i=1}^n Σ_{j∈N_G(i)} (p_j − p_i) = 0
     Asymptotic consensus + conservation of χ_ave ⇒ gradient flow achieves average-consensus
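
The gradient flow ṗ_i = Σ_{j∈N_G(i)} (p_j − p_i) can be sketched with a forward-Euler discretization; the path graph, initial condition, and step size below are illustrative choices:

```python
# Euler discretization of the gradient flow of the disagreement function
# on an undirected path graph 0 - 1 - 2 - 3.

def consensus_step(p, neighbors, h=0.1):
    return [p[i] + h * sum(p[j] - p[i] for j in neighbors[i])
            for i in range(len(p))]

neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
p = [1.0, 5.0, 3.0, 7.0]
ave = sum(p) / len(p)     # chi_ave = 4.0, conserved along the flow
for _ in range(500):
    p = consensus_step(p, neighbors)
# all entries converge to chi_ave
```

Since the graph is undirected, each edge contributes (p_j − p_i) + (p_i − p_j) = 0 to the total update, so the average is conserved at every Euler step, mirroring the conservation argument on the slide.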

  19. Necessary and sufficient conditions for χ-consensus
     Is this (consensus + conservation of χ) the only way to solve χ-consensus?
     Let χ : R^n → R be continuous and surjective
     Let u : R^n → R^n be continuous, with trajectories of ṗ_i = u_i bounded
     Theorem. u asymptotically achieves χ-consensus if and only if
     1 trajectories converge to diag(R^n)
     2 χ is constant along trajectories, and
     3 χ(p, …, p) = p for all p ∈ R
     The result can be stated to account for weaker continuity assumptions on χ and u

  20. Design of distributed algorithms for χ-consensus
     From a design perspective, the result tells us how to synthesize algorithms for χ-consensus:
     1 steer the system to agreement while, at the same time,
     2 conserving the value of χ
     Within the class of (time-independent) feedback laws, there are no other ways to solve the problem
     Additional constraint for us: do 1 and 2 with only local information

  21. Example: weighted power means – I
     For w ∈ R^n_{>0} with Σ_{i=1}^n w_i = 1 and r ∈ R,
         χ_{w,r}(p_1, …, p_n) = (Σ_{i=1}^n w_i p_i^r)^{1/r}  if p_i > 0 for all i,
         χ_{w,r}(p_1, …, p_n) = 0  otherwise
     For w = (1/n, …, 1/n), we just write χ_r:
         χ_{−1}   Harmonic mean               n (1/p_1 + ⋯ + 1/p_n)^{−1}
         χ_0      Geometric mean              (p_1 ⋯ p_n)^{1/n}
         χ_1      Arithmetic mean or average  (1/n)(p_1 + ⋯ + p_n)
         χ_2      Root-mean-square            √((p_1² + ⋯ + p_n²)/n)
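
The equal-weight power means χ_r can be sketched in a few lines; the r = 0 (geometric-mean) case is handled as the r → 0 limit in a separate branch:

```python
# Equal-weight power mean chi_r(p) = ((1/n) sum p_i^r)^(1/r) for r != 0,
# and the geometric mean (p_1 ... p_n)^(1/n) as the r -> 0 limit.

import math

def power_mean(p, r):
    n = len(p)
    if r == 0:
        return math.prod(p) ** (1.0 / n)
    return (sum(x ** r for x in p) / n) ** (1.0 / r)

p = [1.0, 2.0, 4.0]
harmonic   = power_mean(p, -1)   # 3 / (1 + 1/2 + 1/4) = 12/7
geometric  = power_mean(p, 0)    # (1 * 2 * 4)^(1/3) = 2
arithmetic = power_mean(p, 1)    # 7/3
rms        = power_mean(p, 2)    # sqrt((1 + 4 + 16)/3) = sqrt(7)
```

The computed values also illustrate the power-mean inequality: χ_r is nondecreasing in r, so harmonic ≤ geometric ≤ arithmetic ≤ root-mean-square.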

  22. Example: weighted power means – II
     Coordination algorithm u_{w,r} over the weighted digraph G = (V, E, A):
         (u_{w,r})_i(p_1, …, p_n) = (1/w_i) p_i^{1−r} Σ_{j=1}^n a_ij (p_j − p_i)
     is distributed over G
     Does u_{w,r} achieve χ_{w,r}-consensus?

  23. Example: weighted power means – II
     Coordination algorithm u_{w,r} over the weighted digraph G = (V, E, A):
         (u_{w,r})_i(p_1, …, p_n) = (1/w_i) p_i^{1−r} Σ_{j=1}^n a_ij (p_j − p_i)
     is distributed over G
     Does u_{w,r} achieve χ_{w,r}-consensus?
     1 u_{w,r} preserves χ_{w,r} if and only if G is weight-balanced:
         L_{u_{w,r}} χ_{w,r} = grad χ_{w,r} · u_{w,r} ∝ 1_n^T L(G) P
     2 for the equilibria to be in agreement, G must be weakly connected
     Convergence to diag(R^n) is established with LaSalle via V = χ_{w,r+1}^{r+1}:
         L_{u_{w,r}} χ_{w,r+1}^{r+1} = −(r + 1) P^T L(G) P ≤ 0
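
A forward-Euler sketch of u_{w,r} on the unit-weight directed 3-cycle (which is weight-balanced and weakly connected), with equal weights and r = 2, illustrates consensus to the root-mean-square of the initial state; the graph, initial condition, step size, and horizon are illustrative assumptions:

```python
# Euler sketch of (u_{w,r})_i = (1/w_i) p_i^(1-r) * sum_j a_ij (p_j - p_i)
# with w = (1/3, 1/3, 1/3) and r = 2, so chi_{w,r} is the RMS.

def step(p, A, w, r, h):
    n = len(p)
    return [p[i] + h * (p[i] ** (1 - r) / w[i]) *
            sum(A[i][j] * (p[j] - p[i]) for j in range(n))
            for i in range(n)]

A = [[0, 1, 0],            # directed unit-weight cycle 0 -> 1 -> 2 -> 0,
     [0, 0, 1],            # weight-balanced: d_out(i) = d_in(i) = 1
     [1, 0, 0]]
w, r = [1 / 3, 1 / 3, 1 / 3], 2
p = [1.0, 2.0, 3.0]
chi0 = sum(wi * x ** r for wi, x in zip(w, p)) ** (1 / r)  # RMS of p(0)
for _ in range(20000):
    p = step(p, A, w, r, h=0.001)
# all entries approach chi0 (the discretization introduces a small drift
# in the conserved quantity, so agreement with chi0 is only approximate)
```

On a digraph that is not weight-balanced the simulated agents would still agree, but not on χ_{w,r}(p(0)), matching point 1 above.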

  24. Questions from the power mean example
     1 What do digraphs that are both weight-balanced and weakly connected look like?
     2 Is systematic design of distributed algorithms possible for general χ?
     3 Is convergence of u_{w,r} exponential?
