1. The cavity method for matchings. Marc Lelarge, INRIA & ENS, Cargese 2014.

2. MAXIMUM MATCHINGS
Let $G=(V,E)$ be a simple finite graph. A matching of $G$ is a subset of edges with no common end-vertex. Notation: $B=(B_e)_{e\in E}\in\{0,1\}^E$, with $\sum_{e\in\partial v}B_e\le 1$ for all $v\in V$. The size of the matching is $\sum_{e\in E}B_e$. A maximum matching is a matching with maximum size, denoted $\nu(G)=\max_B\sum_e B_e$.
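As a concrete illustration of these definitions, here is a minimal brute-force sketch in Python that computes $\nu(G)$ by exhaustive search on a tiny graph; the function names and the example graph are mine, not from the slides.

```python
from itertools import combinations

def is_matching(edge_subset):
    """A set of edges is a matching iff no vertex is used twice."""
    seen = set()
    for u, v in edge_subset:
        if u in seen or v in seen:
            return False
        seen.update((u, v))
    return True

def matching_number(edges):
    """nu(G): size of a largest matching, by exhaustive search (tiny graphs only)."""
    for k in range(len(edges), 0, -1):
        if any(is_matching(c) for c in combinations(edges, k)):
            return k
    return 0

# Example: a triangle with a pendant edge; a maximum matching is {(0,1), (2,3)}.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
print(matching_number(edges))  # 2
```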

3. MOTIVATION
• Optimization is easy: flow problem.
• Counting is hard: #P-complete.
• Called the monomer-dimer model in statistical physics, with a correspondence with the Ising model. Heilmann, Lieb 72
• Link with XORSAT.

4. XORSAT
Does there exist a solution to the linear system $Hx=b$, i.e. does $b$ belong to the image of $H$?

5. CONNECTION WITH MATCHINGS
For any $k\times k$ submatrix $A$ of $H$, we have in $GF(2)$:
$\det(A)=\sum_{\sigma\in S_k}\prod_{i=1}^{k}A_{i,\sigma(i)}$.
Lemma 1. Let $G$ be the bipartite graph associated to $H$; then we have $\mathrm{rk}_2(H)\le\nu(G)$.
For a given $H\in\{0,1\}^{n\times m}$ and a random $b\in\{0,1\}^m$, we then have
$P(\mathrm{XORSAT}(H,b)\text{ has a solution})=2^{\mathrm{rk}_2(H)-m}\le 2^{\nu(G)-m}$.
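A small numerical check of Lemma 1 (a sketch; the bitmask encoding of $H$ and all helper names are mine): compute $\mathrm{rk}_2(H)$ by Gaussian elimination over GF(2) and compare it with the matching number of the associated bipartite graph.

```python
import random
from itertools import combinations

def rank_gf2(rows, ncols):
    """Rank over GF(2); each row is encoded as an integer bitmask."""
    rows, rank = list(rows), 0
    for col in range(ncols):
        pivot = next((i for i in range(rank, len(rows)) if (rows[i] >> col) & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and (rows[i] >> col) & 1:
                rows[i] ^= rows[rank]
        rank += 1
    return rank

def matching_number(edges):
    """nu(G) by brute force (tiny graphs only)."""
    def ok(sub):
        seen = set()
        for u, v in sub:
            if u in seen or v in seen:
                return False
            seen.update((u, v))
        return True
    return max((k for k in range(len(edges) + 1)
                for c in combinations(edges, k) if ok(c)), default=0)

random.seed(0)
m, n = 5, 6  # m equations, n variables
H = [[random.randint(0, 1) for _ in range(n)] for _ in range(m)]
# Bipartite graph of H: one side for rows, one side for columns, an edge iff H[i][j] = 1.
edges = [(('row', i), ('col', j)) for i in range(m) for j in range(n) if H[i][j]]
bitrows = [sum(H[i][j] << j for j in range(n)) for i in range(m)]
print(rank_gf2(bitrows, n), "<=", matching_number(edges))  # rk_2(H) <= nu(G)
```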

6. GIBBS MEASURE FOR MATCHINGS
Introduce the Gibbs measure on matchings:
$\mu^z_G(B)=\frac{z^{\sum_e B_e}}{P_G(z)}$,
where $P_G(z)$ is the partition function:
$P_G(z)=\sum_B z^{\sum_e B_e}\prod_{v\in V}\mathbb{1}\Big(\sum_{e\in\partial v}B_e\le 1\Big)=\sum_{k=0}^{\nu(G)}m_k(G)\,z^k$,
where $m_k(G)$ is the number of matchings of size $k$.
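The partition function of a tiny graph can be checked directly by enumerating all matchings; the following sketch (names mine) computes the coefficients $m_k(G)$ and evaluates $P_G(z)$.

```python
from itertools import combinations

def is_matching(edge_subset):
    seen = set()
    for u, v in edge_subset:
        if u in seen or v in seen:
            return False
        seen.update((u, v))
    return True

def matching_counts(edges):
    """[m_0, m_1, ...]: number of matchings of each size (tiny graphs only)."""
    counts = []
    for k in range(len(edges) + 1):
        c = sum(1 for sub in combinations(edges, k) if is_matching(sub))
        if c == 0:          # no matching of size k implies none of any larger size
            break
        counts.append(c)
    return counts

def partition_function(edges, z):
    """P_G(z) = sum_k m_k(G) z^k."""
    return sum(m * z**k for k, m in enumerate(matching_counts(edges)))

# The 4-cycle has m_0 = 1, m_1 = 4, m_2 = 2, hence P_G(z) = 1 + 4z + 2z^2.
cycle4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(matching_counts(cycle4), partition_function(cycle4, 2.0))  # [1, 4, 2] 17.0
```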

7. CAVITY METHOD ON TREES
If $G$ is a tree then, for every edge $e=(uv)$,
$\mu^z_G(B_e=1)=\frac{Y_{u\to v}(z)\,Y_{v\to u}(z)}{z+Y_{u\to v}(z)\,Y_{v\to u}(z)}=x_e(z)\in(0,1)$,
where
$Y_{u\to v}(z)=\frac{z}{1+\sum_{w\in\partial u\setminus v}Y_{w\to u}(z)}$.
Notation: $Y(z)=zR_G(Y(z))$. Zdeborová, Mézard 06
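On a tree, the recursion for $Y_{u\to v}(z)$ can be run directly and compared with the exact Gibbs marginal obtained by enumeration; the sketch below (the adjacency-dictionary representation and helper names are mine) does this on a small path.

```python
from itertools import combinations

def tree_messages(adj, z):
    """Y_{u->v}(z) = z / (1 + sum over w in N(u) minus v of Y_{w->u}(z)), by recursion (trees only)."""
    memo = {}
    def Y(u, v):
        if (u, v) not in memo:
            memo[(u, v)] = z / (1 + sum(Y(w, u) for w in adj[u] if w != v))
        return memo[(u, v)]
    return {(u, v): Y(u, v) for u in adj for v in adj[u]}

def edge_marginal(Y, u, v, z):
    """x_e(z) = Y_{u->v} Y_{v->u} / (z + Y_{u->v} Y_{v->u})."""
    p = Y[(u, v)] * Y[(v, u)]
    return p / (z + p)

def exact_marginal(adj, u, v, z):
    """mu_G^z(B_e = 1) by enumerating all matchings (tiny graphs only)."""
    edges = sorted({tuple(sorted((a, b))) for a in adj for b in adj[a]})
    def is_matching(sub):
        seen = set()
        for a, b in sub:
            if a in seen or b in seen:
                return False
            seen.update((a, b))
        return True
    num = den = 0.0
    for k in range(len(edges) + 1):
        for sub in combinations(edges, k):
            if is_matching(sub):
                den += z**k
                num += z**k * (tuple(sorted((u, v))) in sub)
    return num / den

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # the path 0-1-2-3
z = 1.5
Y = tree_messages(adj, z)
print(edge_marginal(Y, 1, 2, z), exact_marginal(adj, 1, 2, z))  # both ~0.1935
```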

8. BELIEF PROPAGATION ON GENERAL GRAPHS
The map $(x_1,\dots,x_k)\mapsto\frac{z}{1+\sum_{i=1}^k x_i}$ is non-increasing, hence iterating the map $zR_G(\cdot)$ from $Y^0=0$, we build a sequence $Y^t(z)$ such that
$0\le Y^{2t}(z)\le Y^-(z)\le Y(z)\le Y^+(z)\le Y^{2t+1}(z)\le z$,
and
$Y^-(z)=zR_G(Y^+(z))$, $Y^+(z)=zR_G(Y^-(z))$.
In particular, we have $Y^-_{\vec e}\,R_{\overleftarrow e}(Y^-)=Y^+_{\overleftarrow e}\,R_{\vec e}(Y^+)$.
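A numerical sketch of this iteration on a graph with cycles (the representation and function names are mine): starting from the all-zero messages, the even and odd iterates bracket the fixed point, and on a finite graph they end up coinciding (next slide).

```python
def bp_even_odd(adj, z, iters=200):
    """Iterate Y^{t+1}_{u->v} = z / (1 + sum over w in N(u) minus v of Y^t_{w->u}) from Y^0 = 0.

    Returns the last even and odd iterates, as numerical proxies for Y^- and Y^+."""
    msgs = {(u, v): 0.0 for u in adj for v in adj[u]}
    even = odd = msgs
    for t in range(1, iters + 1):
        msgs = {(u, v): z / (1 + sum(msgs[(w, u)] for w in adj[u] if w != v))
                for u in adj for v in adj[u]}
        even, odd = (msgs, odd) if t % 2 == 0 else (even, msgs)
    return even, odd

adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}  # the 4-cycle, not a tree
Ym, Yp = bp_even_odd(adj, z=2.0)
print(max(abs(Ym[e] - Yp[e]) for e in Ym))  # ~0: Y^-(z) = Y^+(z) on finite graphs
```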

9. $Y^+(z)=Y^-(z)$: PROOF
Recall: $Y^-_{\vec e}\,R_{\overleftarrow e}(Y^-)=Y^+_{\overleftarrow e}\,R_{\vec e}(Y^+)$.
We define
$D_v(Y)=\sum_{e\in\partial v}\frac{Y_{\vec e}\,R_{\overleftarrow e}(Y)}{1+Y_{\vec e}\,R_{\overleftarrow e}(Y)}=\frac{\sum_{e\in\partial v}Y_{\vec e}}{1+\sum_{e\in\partial v}Y_{\vec e}}$,
where $\vec e$ denotes the orientation of $e$ pointing towards $v$; this is an increasing function of $Y$.
Also $D_v(Y^+(z))\ge D_v(Y^-(z))$, while, summing over all vertices, we have
$\sum_{v\in V}D_v(Y^+(z))=\sum_{v\in V}D_v(Y^-(z))$,
so that $D_v(Y^+(z))=D_v(Y^-(z))$ for every $v$, and hence $Y^+(z)=Y^-(z)$.

10. LIMIT OF BELIEF PROPAGATION
Bethe internal energy:
$U_B(x)=-\sum_{e\in E}x_e$.
Bethe entropy:
$S_B(x)=-\sum_{v\in V}\Big[\frac{1}{2}\sum_{e\in\partial v}\big(x_e\ln x_e-(1-x_e)\ln(1-x_e)\big)+\Big(1-\sum_{e\in\partial v}x_e\Big)\ln\Big(1-\sum_{e\in\partial v}x_e\Big)\Big]$.
The Bethe entropy is concave on $FM(G)=\{x\in\mathbb{R}^E:\ x_e\ge 0,\ \sum_{e\in\partial v}x_e\le 1\}$. Vontobel 13

11. LIMIT OF BELIEF PROPAGATION
Belief Propagation maximizes the Bethe free entropy
$\Phi_B(x,z)=-U_B(x)\ln z+S_B(x)$.
From the messages $Y(z)$, compute, for $e=(uv)$,
$x_e(z)=\frac{Y_{u\to v}(z)\,Y_{v\to u}(z)}{z+Y_{u\to v}(z)\,Y_{v\to u}(z)}\in(0,1)$.
Then, we have
$\Phi_B(x(z),z)=\max_{x\in FM(G)}\Phi_B(x,z)$.
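The maximization claim can be probed numerically. In the sketch below (my own conventions: the Bethe entropy is implemented exactly as written on slide 10, and the sampled points are kept strictly inside $FM(G)$), the Bethe free entropy at the BP marginals dominates its value at every sampled feasible point.

```python
import math, random

def bethe_free_entropy(adj, x, z):
    """Phi_B(x, z) = -U_B(x) ln z + S_B(x), with U_B and S_B as on slide 10."""
    edges = sorted({tuple(sorted((u, v))) for u in adj for v in adj[u]})
    U = -sum(x[e] for e in edges)
    S = 0.0
    for v in adj:
        inc = [tuple(sorted((v, w))) for w in adj[v]]
        s = sum(x[e] for e in inc)
        S -= 0.5 * sum(x[e] * math.log(x[e]) - (1 - x[e]) * math.log(1 - x[e]) for e in inc)
        S -= (1 - s) * math.log(1 - s)
    return -U * math.log(z) + S

def bp_marginals(adj, z, iters=500):
    """x_e(z) computed from (numerically converged) BP messages."""
    Y = {(u, v): 0.0 for u in adj for v in adj[u]}
    for _ in range(iters):
        Y = {(u, v): z / (1 + sum(Y[(w, u)] for w in adj[u] if w != v))
             for u in adj for v in adj[u]}
    return {tuple(sorted((u, v))): Y[(u, v)] * Y[(v, u)] / (z + Y[(u, v)] * Y[(v, u)])
            for u in adj for v in adj[u]}

adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}  # 4-cycle
z = 2.0
x_bp = bp_marginals(adj, z)
best = bethe_free_entropy(adj, x_bp, z)
random.seed(1)
for _ in range(1000):
    x = {e: random.uniform(0.01, 0.3) for e in x_bp}  # small entries keep x inside FM(G)
    assert bethe_free_entropy(adj, x, z) <= best + 1e-9
print("Bethe free entropy at the BP point:", best)
```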

12. LIMIT $z\to\infty$
Since $S_B(x)\le|E|$, for $z$ sufficiently large, $x(z)$ is on the optimal face of $FM(G)$, so that
$\lim_{z\to\infty}\sum_e x_e(z)=\nu^*(G)$,
where $\nu^*(G)$ is the fractional matching number. Chertkov 08
Relation between LP relaxation and BP: Bayati, Shah, Sharma 08; Sanghavi, Shah, Willsky 09; Sanghavi, Malioutov, Willsky 11; Bayati, Borgs, Chayes, Zecchina 11
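The fractional matching number is itself a linear program, so it can be computed directly; the sketch below assumes SciPy is available (the function name is mine). On the triangle it returns $3/2$, which is also the value that $\sum_e x_e(z)$ approaches as $z$ grows.

```python
import numpy as np
from scipy.optimize import linprog

def fractional_matching_number(n_vertices, edges):
    """nu*(G): maximize sum_e x_e subject to sum_{e in dv} x_e <= 1, x_e >= 0."""
    A = np.zeros((n_vertices, len(edges)))
    for j, (u, v) in enumerate(edges):
        A[u, j] = A[v, j] = 1.0
    res = linprog(c=-np.ones(len(edges)), A_ub=A, b_ub=np.ones(n_vertices),
                  bounds=[(0, 1)] * len(edges), method="highs")
    return -res.fun

triangle = [(0, 1), (1, 2), (0, 2)]
print(fractional_matching_number(3, triangle))  # 1.5 (x_e = 1/2 on each edge), while nu(G) = 1
```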

13. SIMPLIFICATION OF BP WHEN $z\to\infty$
The only relevant quantity is $I^Y_{\vec e}=\mathbb{1}\big(\lim_{z\to\infty}Y_{\vec e}(z)=\infty\big)$.
For $\{0,1\}$-valued messages, define $J=P_G(I)$ by
$J_{u\to v}=\mathbb{1}\Big(\sum_{w\in\partial u\setminus v}I_{w\to u}=0\Big)$.
Then we have $I^Y=P_G\circ P_G(I^Y)$ and $2\nu^*(G)=\sum_v F_v(I^Y)$, with
$F_v(I)=\Big(1\wedge\sum_{u\in\partial v}I_{u\to v}\Big)+\Big(1-\sum_{u\in\partial v}I_{v\to u}\Big)^+$.
Indeed, $I^Y$ minimizes $\sum_v F_v(I)$ under the constraint $I=P_G\circ P_G(I)$.
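The $0$-$1$ message passing is easy to run on small examples. The sketch below (names mine) iterates $P_G\circ P_G$ starting from all-ones messages, a choice mimicking the odd BP iterates which start at $z$; that this initialisation reaches the right fixed point is an assumption of this sketch. On the examples shown, $\frac{1}{2}\sum_v F_v$ matches the fractional matching numbers $3/2$, $2$ and $5/2$.

```python
def half_nu_star(adj):
    """Iterate I -> P_G(P_G(I)) to a fixed point and return (1/2) sum_v F_v(I)."""
    def P(I):
        # J_{u->v} = 1 iff no neighbour of u other than v sends a 1 to u.
        return {(u, v): int(sum(I[(w, u)] for w in adj[u] if w != v) == 0)
                for u in adj for v in adj[u]}

    msgs = {(u, v): 1 for u in adj for v in adj[u]}  # all-ones start (see lead-in)
    while True:
        nxt = P(P(msgs))
        if nxt == msgs:
            break
        msgs = nxt

    total = 0
    for v in adj:
        incoming = sum(msgs[(u, v)] for u in adj[v])
        outgoing = sum(msgs[(v, u)] for u in adj[v])
        total += min(1, incoming) + max(0, 1 - outgoing)  # F_v(I)
    return total / 2

graphs = {
    "triangle": {0: [1, 2], 1: [0, 2], 2: [0, 1]},
    "path P4": {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]},
    "cycle C5": {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [0, 3]},
}
for name, adj in graphs.items():
    print(name, half_nu_star(adj))  # 1.5, 2.0, 2.5
```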

14. IMPLICATION FOR VERTEX COVER
The vertex cover number is the solution of the following ILP:
$\tau(G)=\min\sum_{v\in V}y_v$ s.t. $y_u+y_v\ge 1$ for all $(uv)\in E$, $y_v\in\{0,1\}$.
Then $(F_v(I^Y)/2,\ v\in V)$ is a minimum half-integral vertex cover. In particular, $(F_v(I^Y),\ v\in V)$ is a 2-approximate solution to vertex cover on $G$.
For bipartite graphs, a slight extension of these results gives a minimum vertex cover. Lelarge 14

15. RANDOM GRAPHS
We are interested in a sequence $G_n$ of random diluted graphs: $\deg(v;G_n)=O(1)$ as the number of vertices $n$ tends to infinity. Important examples of random graphs on $\{1,\dots,n\}$:
- Erdős-Rényi graphs with parameter $p=\lambda/n$.
- Uniform measure on $k$-regular graphs.
- Graphs with prescribed degree distribution $F^*$: independently for each vertex, we draw a random number of half-edges with distribution $F^*$. If the total number of half-edges is even, we match them uniformly (see the sketch below).
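A sketch of the third construction (the resampling convention for an odd number of half-edges and all names are mine; the slide only specifies the even case):

```python
import math
import random

def configuration_model(n, degree_sampler, seed=0):
    """Draw i.i.d. degrees, then pair the half-edges uniformly at random.

    If the total number of half-edges is odd, the degrees are simply resampled."""
    rng = random.Random(seed)
    while True:
        degrees = [degree_sampler(rng) for _ in range(n)]
        if sum(degrees) % 2 == 0:
            break
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    rng.shuffle(stubs)  # a uniformly random pairing of the half-edges
    return [(stubs[i], stubs[i + 1]) for i in range(0, len(stubs), 2)]

def poisson(rng, lam=2.0):
    """Poisson(lam) sampler (Knuth's method), to keep the sketch dependency-free."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

edges = configuration_model(20, poisson)
print(len(edges), "edges (the multigraph may contain loops and multiple edges)")
```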

16. CAVITY METHOD ON LARGE GRAPHS
- Introduce the Gibbs measure on matchings $\mu^z_G(B)=\frac{z^{\sum_e B_e}}{P_G(z)}$, so that the size of a maximum matching of the graph $G=(V,E)$ is given by $\frac{1}{2}\lim_{z\to\infty}\sum_{v\in V}\sum_{e\in\partial v}\mu^z_G(B_e=1)$.
- Show that on trees, the marginal $\mu^z_G(B_e=1)$ can be computed by a message passing algorithm with a unique fixed point.
- Show that on trees, when $z\to\infty$, this message passing algorithm reduces to the previously described $0$-$1$ valued message passing algorithm and that the limit of $\mu^z_G(B_e=1)$ can be computed from the minimal fixed point solution.
- Using a convexity argument, invert the limits in $n$ and $z$.

17. ? CONVERGENCES ?
Do we have convergence of the (normalized) matching number?
$\lim_{n\to\infty}\lim_{z\to\infty}\frac{1}{2n}\sum_{v\in V_n}\sum_{e\in\partial v}\mu^z_{G_n}(B_e=1)$
YES in the following cases:
• Karp & Sipser 81 for Erdős-Rényi graphs.
• Bohman & Frieze 09 with a 'log-concave' condition on the degree distribution.
For the random assignment problem: it converges at zero temperature to $\zeta(2)$, Aldous 01, and at very high temperature, Talagrand 03.

18. CAVITY METHOD ON LARGE GRAPHS
- Introduce the Gibbs measure on matchings $\mu^z_G(B)=\frac{z^{\sum_e B_e}}{P_G(z)}$, so that the size of a maximum matching of the graph $G=(V,E)$ is given by $\frac{1}{2}\lim_{z\to\infty}\sum_{v\in V}\sum_{e\in\partial v}\mu^z_G(B_e=1)$.
- Show that on trees, the marginal $\mu^z_G(B_e=1)$ can be computed by a message passing algorithm with a unique fixed point. Unimodularity.
- Show that on trees, when $z\to\infty$, this message passing algorithm reduces to the previously described $0$-$1$ valued message passing algorithm and that the limit of $\mu^z_G(B_e=1)$ can be computed from the minimal fixed point solution.
- Using a convexity argument, invert the limits in $n$ and $z$.

21. RESULT FOR RANDOM BIPARTITE GRAPHS
We consider standard bipartite graphs between variable and function nodes $G_N=G(N,\Lambda,\Gamma)$, where $N$ is the number of variable nodes, $\Lambda(x)=\sum_{d\ge 0}\Lambda_d x^d$ is the variable-node degree distribution and $\Gamma(x)=\sum_{d\ge 0}\Gamma_d x^d$ is the function-node degree distribution.
Proposition 1 (Bordenave, Lelarge, Salez 12; Salez 12; Lelarge 12; Leconte, Lelarge, Massoulié 13). For a sequence of graphs $G_N=G(N,\Lambda,\Gamma)$, where $\Lambda$ and $\Gamma$ are fixed, we have
$\frac{1}{M}\nu(G_N)\to 1-\max_{x\in[0,1]}H(x)$,
where $M$ is the number of function nodes and
$H(x)=\Gamma\Big(1-\frac{\Lambda'(1-x)}{\Lambda'(1)}\Big)-\frac{\Gamma'(1)}{\Lambda'(1)}\Big(1-\Lambda(1-x)-x\,\Lambda'(1-x)\Big)$.
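The limit in Proposition 1 is a one-dimensional optimization, so it is easy to evaluate numerically. The sketch below (the grid search and function names are mine) uses two degree profiles whose limiting value can be checked by hand: variable nodes of degree 3 attached to degree-1 function nodes, where one expects $1/3$, and 3-regular bipartite graphs, which always have a perfect matching, so the limit is $1$.

```python
def limit_matching_fraction(Lam, dLam, Gam, dGam, grid=10001):
    """Evaluate 1 - max_{x in [0,1]} H(x) for H as in Proposition 1."""
    def H(x):
        return (Gam(1 - dLam(1 - x) / dLam(1.0))
                - dGam(1.0) / dLam(1.0) * (1 - Lam(1 - x) - x * dLam(1 - x)))
    return 1 - max(H(i / (grid - 1)) for i in range(grid))

# Lambda(x) = x^3, Gamma(x) = x: expected limit 1/3.
print(limit_matching_fraction(lambda x: x**3, lambda x: 3 * x**2,
                              lambda x: x, lambda x: 1.0))
# Lambda(x) = Gamma(x) = x^3 (3-regular bipartite): expected limit 1.
print(limit_matching_fraction(lambda x: x**3, lambda x: 3 * x**2,
                              lambda x: x**3, lambda x: 3 * x**2))
```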
