

  1. Disordered Systems and Random Graphs 1. Amin Coja-Oghlan (Goethe University), based on joint work with Dimitris Achlioptas, Oliver Gebhard, Max Hahn-Klimroth, Joon Lee, Philipp Loick, Noela Müller, Manuel Penschuck, Guangyan Zhou

  2. Overview. Lecture 1: introduction
     • random graphs and phase transitions
     • the cavity method
     • the first/second moment method
     • Belief Propagation and density evolution

  3. Overview. Lecture 2: random 2-SAT
     • the contraction method
     • spatial mixing
     • the Aizenman-Sims-Starr scheme
     • the interpolation method

  4. Overview. Lecture 3: group testing
     • basics of Bayesian inference
     • analysis of combinatorial algorithms
     • spatial coupling
     • information-theoretic lower bounds

  5. Disordered systems. From glasses to random graphs [MP00]
     [figure: network of Si and O atoms in a glass]
     • (spin) glasses are disordered materials rather than crystals
     • lattice models are difficult to grasp even non-rigorously
     • classical mean-field models: complete interaction
     • diluted mean-field models: sparse random graph topology

  6. Disordered systems. The binomial random graph G = G(n, p) [ER60]
     • vertex set x_1, ..., x_n
     • connect any two vertices with probability p = d/n independently
     • local structure converges to a Po(d) Galton-Watson tree
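
     A minimal simulation sketch of this model (stdlib Python; function names are illustrative, not from the slides). The degree of a fixed vertex is Bin(n-1, d/n), which converges to Po(d), so the average degree should be close to d:

     ```python
     import random
     from collections import Counter

     def binomial_random_graph(n, d, rng=random):
         """Sample G(n, p) with p = d/n: each of the C(n,2) pairs is an edge independently."""
         p = d / n
         return [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < p]

     n, d = 10_000, 3.0
     edges = binomial_random_graph(n, d)
     deg = Counter()
     for i, j in edges:
         deg[i] += 1
         deg[j] += 1
     print(f"average degree ~ {sum(deg.values()) / n:.2f} (expected ~ {d})")
     ```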

  7. The Potts antiferromagnet. Definition
     • fix d > 0, q ≥ 2 and β > 0
     • the Boltzmann distribution reads

         µ_{G,β}(σ) = exp(−β Σ_{vw∈E(G)} 1{σ_v = σ_w}) / Z(G,β)   (σ ∈ {1,...,q}^n)

         Z(G,β) = Σ_{τ∈{1,...,q}^n} exp(−β Σ_{vw∈E(G)} 1{τ_v = τ_w})
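
     A brute-force sketch of Z(G, β) (my own sanity-check code, exponential in n, so only for tiny graphs):

     ```python
     import math
     from itertools import product

     def potts_partition_function(n, edges, q, beta):
         """Z(G, beta) = sum over colourings of exp(-beta * #monochromatic edges)."""
         z = 0.0
         for sigma in product(range(q), repeat=n):
             mono = sum(1 for v, w in edges if sigma[v] == sigma[w])
             z += math.exp(-beta * mono)
         return z

     # Triangle with q = 3: Z = 6 + 18 e^{-beta} + 3 e^{-3 beta}.
     print(potts_partition_function(3, [(0, 1), (1, 2), (0, 2)], q=3, beta=1.0))
     ```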

  8. The physics story: replica symmetry breaking. Replica symmetry [KMRTSZ07]
     • fix a large d and increase β
     • for small β there are no extensive long-range correlations:

         µ_{G,β}({σ_{x_1} = τ_1, σ_{x_2} = τ_2}) ∼ q^{−2}   (τ_1, τ_2 ∈ {1,...,q})

     • in fact, there is non-reconstruction and rapid mixing

  9. The physics story: replica symmetry breaking. Dynamic replica symmetry breaking [KMRTSZ07]
     • still no extensive long-range correlations for moderate β:

         µ_{G,β}({σ_{x_1} = τ_1, σ_{x_2} = τ_2}) ∼ q^{−2}   (τ_1, τ_2 ∈ {1,...,q})

     • but there is reconstruction and torpid mixing

  10. The physics story: replica symmetry breaking. Static replica symmetry breaking [KMRTSZ07]
     • for large β long-range correlations emerge:

         µ_{G,β}({σ_{x_1} = τ_1, σ_{x_2} = τ_2}) ≁ q^{−2}   (τ_1, τ_2 ∈ {1,...,q})

     • a few pure states dominate

  11. The stochastic block model. The Potts model as an inference problem [DKMZ11]
     • choose a random colouring σ* ∈ {1,...,q}^n
     • then choose a random graph G* with

         P[G* = G | |E(G*)| = |E(G)|] ∝ µ_{G,β}(σ*)

     • given G*, can we (partly) infer σ*?
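
     The slide's planted model reweights graphs by µ_{G,β}(σ*); a common, closely related formulation in the sparse regime is the two-parameter stochastic block model, sketched below. The parameters d_in, d_out are my own illustrative names; d_in < d_out gives the disassortative regime matching the antiferromagnet:

     ```python
     import random

     def sparse_sbm(n, q, d_in, d_out, rng=random):
         """Plant a uniform colouring sigma*, then connect monochromatic pairs
         with probability d_in/n and bichromatic pairs with probability d_out/n."""
         sigma = [rng.randrange(q) for _ in range(n)]
         edges = []
         for i in range(n):
             for j in range(i + 1, n):
                 p = d_in / n if sigma[i] == sigma[j] else d_out / n
                 if rng.random() < p:
                     edges.append((i, j))
         return sigma, edges

     sigma_star, g_star = sparse_sbm(n=2000, q=3, d_in=1.0, d_out=6.0)
     ```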

  12. Rigorous work. Techniques
     • Classical random graph techniques
       - method of moments
       - branching processes
       - large deviations
     • Mathematical physics techniques
       - coupling arguments
       - exchangeable arrays and the cut metric
       - Belief Propagation and the contraction method
       - the interpolation method

  13. Rigorous work. Success stories
     [figure: bipartite factor graph with constraint nodes a_1, a_2, a_3 and variable nodes x_1,...,x_6, y_1,...,y_6]
     • solution space geometry [ACO08, M12]
     • random k-SAT [AM02, AP03, COP16, DSS15]
     • low-density parity check codes [G63, KRU13]
     • stochastic block model [AS15, M14, MNS13, MNS14, COKPZ16]
     • group testing [MTT08, COGHKL20]
     • ...

  14. Rigorous work. Theorem [COKPZ17]
     Let Λ(x) = x log x and B*_{q,β}(d) = sup_π B_{q,β,d}(π), where the sup ranges over distributions π on probability vectors on {1,...,q} and, for γ ∼ Po(d) and µ_1^{(π)}, µ_2^{(π)}, ... independent samples from π,

         B_{q,β,d}(π) = E[ Λ( Σ_{σ=1}^q Π_{i=1}^γ (1 − (1 − e^{−β}) µ_i^{(π)}(σ)) ) / ( q (1 − (1 − e^{−β})/q)^γ )
                          − (d/2) Λ( 1 − (1 − e^{−β}) Σ_{σ=1}^q µ_1^{(π)}(σ) µ_2^{(π)}(σ) ) / ( 1 − (1 − e^{−β})/q ) ].

     Then

         d_cond(q,β) = inf{ d > 0 : B*_{q,β}(d) > ln q + (d/2) ln(1 − (1 − e^{−β})/q) }.
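
     A Monte Carlo sketch of the functional B_{q,β,d}(π) under my reading of the theorem (all names are mine; this is a numerical check, not part of the result). At the point mass on the uniform vector the functional should reduce to the replica symmetric value ln q + (d/2) ln(1 − (1 − e^{−β})/q):

     ```python
     import math, random

     def poisson(lam, rng=random):
         """Knuth's Poisson sampler (stdlib only)."""
         l, k, p = math.exp(-lam), 0, 1.0
         while True:
             p *= rng.random()
             if p < l:
                 return k
             k += 1

     def B_mc(q, beta, d, sample_mu, trials=100_000, rng=random):
         """Monte Carlo estimate of B_{q,beta,d}(pi); sample_mu() draws one
         probability vector on {0,...,q-1} from pi."""
         c = 1.0 - math.exp(-beta)
         base = 1.0 - c / q
         lam_fn = lambda x: x * math.log(x)          # Lambda(x) = x log x
         acc = 0.0
         for _ in range(trials):
             g = poisson(d, rng)
             mus = [sample_mu() for _ in range(g)]
             s = sum(math.prod(1.0 - c * mu[sig] for mu in mus) for sig in range(q))
             mu1, mu2 = sample_mu(), sample_mu()
             t2 = 1.0 - c * sum(mu1[sig] * mu2[sig] for sig in range(q))
             acc += lam_fn(s) / (q * base ** g) - (d / 2) * lam_fn(t2) / base
         return acc / trials

     q, beta, d = 3, 2.0, 4.0
     uniform = lambda: [1.0 / q] * q
     print(B_mc(q, beta, d, uniform))                                   # MC estimate
     print(math.log(q) + d / 2 * math.log(1 - (1 - math.exp(-beta)) / q))  # RS value
     ```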

  15. Random 2-SAT. The 2-SAT problem
     • Boolean variables x_1, ..., x_n
     • truth values +1 and −1
     • four types of clauses: x_i ∨ x_j, x_i ∨ ¬x_j, ¬x_i ∨ x_j, ¬x_i ∨ ¬x_j
     • a 2-SAT formula is a conjunction Φ = ⋀_{i=1}^m a_i of clauses
     • S(Φ) = set of satisfying assignments
     • Z(Φ) = |S(Φ)|

  16. Random 2-SAT. Example
     [figure: factor graph with variables x_1, x_2, x_3 and clauses a_1, a_2, a_3]
     • Φ = (¬x_1 ∨ x_2) ∧ (x_1 ∨ x_3) ∧ (¬x_2 ∨ ¬x_3)
     • Z(Φ) = 2 and S(Φ) consists of the two assignments

         σ_{x_1} = +1, σ_{x_2} = +1, σ_{x_3} = −1
         σ_{x_1} = −1, σ_{x_2} = −1, σ_{x_3} = +1

     • glassy because variables may appear with opposing signs
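
     A brute-force check of this example (my own encoding: a literal is a pair (i, s) meaning "σ_{x_i} = s"):

     ```python
     from itertools import product

     PHI = [((1, -1), (2, +1)),   # ¬x_1 ∨ x_2
            ((1, +1), (3, +1)),   #  x_1 ∨ x_3
            ((2, -1), (3, -1))]   # ¬x_2 ∨ ¬x_3

     def satisfying_assignments(formula, n):
         """Enumerate {±1}^n and keep the assignments satisfying every clause."""
         return [sigma for sigma in product((+1, -1), repeat=n)
                 if all(any(sigma[i - 1] == s for i, s in c) for c in formula)]

     sols = satisfying_assignments(PHI, 3)
     print(len(sols), sols)   # 2 [(1, 1, -1), (-1, -1, 1)]
     ```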

  17. Random 2-SAT. Computational complexity
     • 2-SAT admits an efficient decision algorithm [K67]
     • in fact, WalkSAT solves the problem efficiently [P91]
     • the problem is NL-complete [IS87, P94]
     • however, computing log Z(Φ) is #P-hard [V79]

  18. Random 2-SAT. Random 2-SAT
     • for a fixed 0 < d < ∞ let m ∼ Po(dn/2)
     • Φ = conjunction of m independent random clauses
     • variable degrees have distribution Po(d)
     • key questions: is Z(Φ) > 0, and if so, what is lim_{n→∞} (1/n) log Z(Φ)?
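
     A sketch of the sampler, assuming each clause consists of two independent uniformly random literals (which yields the stated Po(d) degrees, since 2m ∼ Po(dn) literal slots are spread uniformly over n variables):

     ```python
     import math, random

     def poisson(lam, rng=random):
         """Knuth's Poisson sampler."""
         l, k, p = math.exp(-lam), 0, 1.0
         while True:
             p *= rng.random()
             if p < l:
                 return k
             k += 1

     def random_2sat(n, d, rng=random):
         """m ~ Po(dn/2) clauses, each of two independent uniform literals (i, s).
         Repeated variables and tautological clauses are allowed, as in the model."""
         m = poisson(d * n / 2, rng)
         lit = lambda: (rng.randrange(1, n + 1), rng.choice((+1, -1)))
         return [(lit(), lit()) for _ in range(m)]

     phi = random_2sat(n=1000, d=1.5)
     ```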

  19. Random 2-SAT. Prior work
     • the threshold for S(Φ) = ∅ occurs at d = 2 [CR92, G96]
     • computation of log Z(Φ) via the replica/cavity method [MZ96]
     • the scaling window [BBCKW01]
     • partial results on a 'soft' version [T01, MS07, P14]
     • existence of a function φ(d) such that [AM14]

         lim_{n→∞} log Z(Φ)/n = φ(d)   for almost all d ∈ (0, 2)

  20. The satisfiability threshold. Bicycles
     • the clause l ∨ l′ is logically equivalent to the two implications

         l ∨ l′ ≡ (¬l → l′) ∧ (¬l′ → l)

     • Φ is satisfiable unless there is an implication chain

         x_i → ··· → ¬x_i → ··· → x_i

     • such chains are called bicycles
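
     The implication view yields the classical linear-time criterion: Φ is satisfiable iff no variable lies in the same strongly connected component of the implication digraph as its negation. This is the Aspvall-Plass-Tarjan characterization, not the specific reference on the slide; the sketch below uses Kosaraju's algorithm and my ±i literal encoding:

     ```python
     from collections import defaultdict

     def sat_2sat(n, clauses):
         """Decide 2-SAT: each clause l ∨ l' gives edges ¬l → l' and ¬l' → l;
         satisfiable iff no i shares an SCC with -i. Literals are ±i, i in 1..n."""
         fwd, rev = defaultdict(list), defaultdict(list)
         for (i, s), (j, t) in clauses:
             for u, v in ((-s * i, t * j), (-t * j, s * i)):
                 fwd[u].append(v)
                 rev[v].append(u)
         nodes = [l for i in range(1, n + 1) for l in (i, -i)]

         order, seen = [], set()           # pass 1: finish order on fwd
         for start in nodes:
             if start in seen:
                 continue
             seen.add(start)
             stack = [(start, iter(fwd[start]))]
             while stack:
                 u, it = stack[-1]
                 nxt = next((v for v in it if v not in seen), None)
                 if nxt is None:
                     order.append(u)
                     stack.pop()
                 else:
                     seen.add(nxt)
                     stack.append((nxt, iter(fwd[nxt])))

         comp = {}                          # pass 2: components on rev
         for start in reversed(order):
             if start in comp:
                 continue
             comp[start], stack = start, [start]
             while stack:
                 u = stack.pop()
                 for v in rev[u]:
                     if v not in comp:
                         comp[v] = start
                         stack.append(v)
         return all(comp[i] != comp[-i] for i in range(1, n + 1))

     # The example formula from slide 16 is satisfiable:
     print(sat_2sat(3, [((1, -1), (2, +1)), ((1, +1), (3, +1)), ((2, -1), (3, -1))]))
     ```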

  21. The satisfiability threshold. Theorem [CR92, G96]
     • If d < 2 then Φ does not contain a bicycle w.h.p.
     • If d > 2 then Φ contains a bicycle w.h.p.

  22. The second moment method. A naive attempt
     • we aim to compute log Z(Φ) for a typical Φ
     • Jensen's inequality shows that

         log Z(Φ) ≤ log E[Z(Φ) | m] + o(n)   w.h.p.

  23. The second moment method. The first moment
     • computing E[Z(Φ) | m] is a cinch:

         E[Z(Φ) | m] = 2^n (3/4)^m

     • hence,

         (1/n) log Z(Φ) ≤ (1 − d) log 2 + (d/2) log 3   w.h.p.
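
     A small empirical sanity check of the first moment (my own code; each clause is satisfied by a fixed σ with probability 3/4, since it fails only if both independent uniform literals are false):

     ```python
     import random
     from itertools import product

     def count_sat(clauses, n):
         """Z(Phi) by enumeration (tiny n only)."""
         return sum(all(any(sigma[i - 1] == s for i, s in c) for c in clauses)
                    for sigma in product((+1, -1), repeat=n))

     def lit(n, rng):
         return (rng.randrange(1, n + 1), rng.choice((+1, -1)))

     n, m, trials, rng = 6, 8, 10_000, random.Random(0)
     avg = sum(count_sat([(lit(n, rng), lit(n, rng)) for _ in range(m)], n)
               for _ in range(trials)) / trials
     print(avg, 2 ** n * (3 / 4) ** m)   # the two numbers should be close
     ```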

  24. The second moment method. The second moment
     • this bound is tight if E[Z(Φ)²] = O(E[Z(Φ)]²)
     • we calculate

         E[Z(Φ)² | m] = Σ_{σ,τ∈{±1}^n} P[Φ ⊨ σ, Φ ⊨ τ | m]
                      = Σ_{ℓ=−n}^{n} Σ_{σ,τ: σ·τ=ℓ} (1/2 + (1 + ℓ/n)²/16)^m
                      = Σ_{ℓ=−n}^{n} 2^n C(n, (n+ℓ)/2) (1/2 + (1 + ℓ/n)²/16)^m

  25. The second moment method. The second moment
     • hence,

         (1/n) log E[Z(Φ)² | m] ∼ log 2 + max_{−1≤α≤1} [ H((1+α)/2) + (d/2) log(1/2 + (1+α)²/16) ]

     • at α = 0 the above expression evaluates to

         2 log 2 + d log(3/4) ∼ (2/n) log E[Z(Φ) | m]

     • therefore, we succeed iff the max is attained at α = 0 :(
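
     Why the sad face: the bracketed exponent f(α) = H((1+α)/2) + (d/2) log(1/2 + (1+α)²/16) has derivative d/9 > 0 at α = 0, so the maximum sits at some α > 0 for every d > 0 and the naive second moment overshoots. A grid-search sketch (my own code) makes this visible:

     ```python
     import math

     def H(x):
         """Binary entropy in nats."""
         return 0.0 if x in (0.0, 1.0) else -x * math.log(x) - (1 - x) * math.log(1 - x)

     def f(alpha, d):
         return H((1 + alpha) / 2) + (d / 2) * math.log(0.5 + (1 + alpha) ** 2 / 16)

     for d in (0.5, 1.0, 1.5, 1.9):
         best = max((f(a / 1000, d), a / 1000) for a in range(-999, 1000))
         print(f"d = {d}: max at alpha ~ {best[1]:+.3f}")   # strictly positive
     ```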

  26. The cavity method. The factor graph
     [figure: factor graph with variables x_1, x_2, x_3 and clauses a_1, a_2, a_3]
     • vertices x_1, ..., x_n represent variables
     • vertices a_1, ..., a_m represent clauses
     • the graph G(Φ) contains few short cycles
     • locally, G(Φ) resembles a Galton-Watson branching process

  27. The cavity method. The Boltzmann distribution
     • assuming S(Φ) ≠ ∅, define

         µ_Φ(σ) = 1{σ ∈ S(Φ)} / Z(Φ)   (σ ∈ {±1}^{x_1,...,x_n})

     • let σ = σ_Φ be a sample from µ_Φ

  28. The cavity method. Belief Propagation
     • define the variable-to-clause messages by

         µ_{Φ,x→a}(σ) = µ_{Φ−a}(σ_x = σ)   (σ = ±1)

     • "marginal of x upon removal of a"

  29. The cavity method. Belief Propagation
     • define the clause-to-variable messages by

         µ_{Φ,a→x}(σ) = µ_{Φ−(∂x∖a)}(σ_x = σ)   (σ = ±1)

     • "marginal of x upon removal of all neighbours b ∈ ∂x, b ≠ a"

  30. The cavity method. The replica symmetric ansatz
     The messages (approximately) satisfy

         µ_{Φ,x→a}(σ) ∝ Π_{b∈∂x∖a} µ_{Φ,b→x}(σ)

         µ_{Φ,a→x}(σ) ∝ 1 − 1{σ ≠ sign(x,a)} µ_{Φ,∂a∖x→a}(−sign(∂a∖x, a))
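
     A sketch of these fixed point equations as plain message-passing sweeps (the data layout, slot encoding, lack of damping, and iteration count are my choices, not from the slides; clauses use the (i, s) literal encoding from slide 16):

     ```python
     from collections import defaultdict

     def bp_2sat(n, clauses, iters=100):
         """Parallel BP sweeps for the RS equations; eta[a][slot] stores the
         variable-to-clause message eta_{x->a}(+1) for the variable in that slot."""
         var_adj = defaultdict(list)               # variable -> [(clause, slot)]
         for a, clause in enumerate(clauses):
             for slot, (i, _) in enumerate(clause):
                 var_adj[i].append((a, slot))

         eta = {a: [0.5, 0.5] for a in range(len(clauses))}
         for _ in range(iters):
             new = {}
             for a, clause in enumerate(clauses):
                 msgs = []
                 for slot, (i, _) in enumerate(clause):
                     p_plus, p_minus = 1.0, 1.0    # products over b in dx \ a
                     for b, bslot in var_adj[i]:
                         if b == a:
                             continue
                         s_b = clauses[b][bslot][1]     # sign of x_i in clause b
                         _, t = clauses[b][1 - bslot]   # the other literal of b
                         other_plus = eta[b][1 - bslot]
                         viol = other_plus if t == -1 else 1.0 - other_plus  # eta(-t)
                         # mu_{b->i}(sigma) = 1 if sigma = sign(i,b), else 1 - viol
                         p_plus *= 1.0 if s_b == +1 else 1.0 - viol
                         p_minus *= 1.0 if s_b == -1 else 1.0 - viol
                     z = p_plus + p_minus
                     msgs.append(0.5 if z == 0 else p_plus / z)
                 new[a] = msgs
             eta = new
         return eta

     # The short-cycle example from slide 16; on such small loopy instances BP
     # only approximates the true marginals.
     PHI = [((1, -1), (2, +1)), ((1, +1), (3, +1)), ((2, -1), (3, -1))]
     print(bp_2sat(3, PHI))
     ```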

  31. The cavity method. The Bethe free entropy
     • we expect that

         log Z(Φ) ∼ Σ_{i=1}^n log Σ_{σ=±1} Π_{a∈∂x_i} µ_{Φ,a→x_i}(σ)
                   + Σ_{i=1}^m log( 1 − Π_{x∈∂a_i} µ_{Φ,x→a_i}(−sign(x, a_i)) )
                   − Σ_{i=1}^n Σ_{a∈∂x_i} log Σ_{σ=±1} µ_{Φ,x_i→a}(σ) µ_{Φ,a→x_i}(σ)
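
     A sketch evaluating this formula term by term from (approximately) converged messages; it reuses bp_2sat and PHI from the previous block, and the explicit normalization of the clause-to-variable messages is my own bookkeeping. On a loopy instance like PHI the result only approximates the exact log Z(Φ) = log 2:

     ```python
     import math
     from collections import defaultdict

     def bethe_free_entropy(n, clauses, eta):
         """Variable + clause - edge terms of the Bethe formula above."""
         var_adj = defaultdict(list)
         for a, clause in enumerate(clauses):
             for slot, (i, _) in enumerate(clause):
                 var_adj[i].append((a, slot))

         def mu_a_to_x(a, slot, sigma):
             """Normalized clause-to-variable message mu_{a->x}(sigma)."""
             _, s = clauses[a][slot]
             _, t = clauses[a][1 - slot]
             viol = eta[a][1 - slot] if t == -1 else 1.0 - eta[a][1 - slot]
             w = {+1: 1.0 if s == +1 else 1.0 - viol,
                  -1: 1.0 if s == -1 else 1.0 - viol}
             return w[sigma] / (w[+1] + w[-1])

         total = 0.0
         for i in range(1, n + 1):                  # variable terms
             prods = {sig: math.prod(mu_a_to_x(a, slot, sig) for a, slot in var_adj[i])
                      for sig in (+1, -1)}
             total += math.log(prods[+1] + prods[-1])
         for a, clause in enumerate(clauses):       # clause terms
             viol = math.prod(eta[a][slot] if s == -1 else 1.0 - eta[a][slot]
                              for slot, (i, s) in enumerate(clause))
             total += math.log(1.0 - viol)
         for i in range(1, n + 1):                  # edge terms
             for a, slot in var_adj[i]:
                 e_plus = eta[a][slot]
                 s = e_plus * mu_a_to_x(a, slot, +1) + (1 - e_plus) * mu_a_to_x(a, slot, -1)
                 total -= math.log(s)
         return total

     PHI = [((1, -1), (2, +1)), ((1, +1), (3, +1)), ((2, -1), (3, -1))]
     print(bethe_free_entropy(3, PHI, bp_2sat(3, PHI)), math.log(2))
     ```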

  32. The cavity method. Density evolution
     • consider the empirical distribution of the messages:

         π_Φ = (1/(2m)) Σ_{i=1}^n Σ_{a∈∂x_i} δ_{µ_{Φ,x_i→a}(+1)}

     • for d_+, d_− ∼ Po(d/2) and µ_0, µ_1, µ_2, ... samples from π_Φ,

         µ_0 =_d ( Π_{i=1}^{d_+} µ_i ) / ( Π_{i=1}^{d_+} µ_i + Π_{i=1}^{d_−} µ_{d_+ + i} )
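
     This distributional fixed point equation can be approximated by population dynamics, a standard heuristic that the slide does not spell out: keep a pool approximating π and repeatedly replace a random element by the recursion applied to fresh samples from the pool. A sketch with illustrative parameters:

     ```python
     import math, random

     def poisson(lam, rng):
         """Knuth's Poisson sampler."""
         l, k, p = math.exp(-lam), 0, 1.0
         while True:
             p *= rng.random()
             if p < l:
                 return k
             k += 1

     def population_dynamics(d, pool_size=10_000, sweeps=20, rng=random.Random(0)):
         pool = [rng.random() for _ in range(pool_size)]   # arbitrary initialization
         for _ in range(sweeps * pool_size):
             dp, dm = poisson(d / 2, rng), poisson(d / 2, rng)
             num = math.prod(rng.choice(pool) for _ in range(dp))
             den = num + math.prod(rng.choice(pool) for _ in range(dm))
             pool[rng.randrange(pool_size)] = num / den
         return pool

     pool = population_dynamics(d=1.5)
     print(sum(pool) / len(pool))   # mean of the approximate fixed point
     ```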
