Disordered systems and random graphs 2
Amin Coja-Oghlan, Goethe University
Based on joint work with Dimitris Achlioptas, Oliver Gebhard, Max Hahn-Klimroth, Joon Lee, Philipp Loick, Noela Müller, Manuel Penschuck, Guangyan Zhou
Overview
This lecture: random 2-SAT
• Belief Propagation and density evolution
• the contraction method
• spatial mixing
• the Aizenman-Sims-Starr scheme
• the interpolation method
Random 2-SAT
The 2-SAT problem
• Boolean variables x_1, ..., x_n
• truth values +1 and −1
• four types of clauses: x_i ∨ x_j, x_i ∨ ¬x_j, ¬x_i ∨ x_j, ¬x_i ∨ ¬x_j
• a 2-SAT formula is a conjunction Φ = ∧_{i=1}^m a_i of clauses
• S(Φ) = set of satisfying assignments
• Z(Φ) = |S(Φ)|
Random 2-SAT
[Figure: factor graph with variables x_1, x_2, x_3 and clauses a_1, a_2, a_3]
• for a fixed 0 < d < ∞ let m = Po(dn/2)
• Φ = conjunction of m independent random clauses
• variable degrees have distribution Po(d)
• Key questions: is Z(Φ) > 0 and if so, what is lim_{n→∞} (1/n) log Z(Φ)?
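To make the model concrete, here is a minimal Python sketch that samples a random 2-SAT formula and computes Z(Φ) by brute force. The helper names are ours, not from the lecture, and for simplicity the number of clauses is fixed at m = dn/2 rather than drawn from Po(dn/2).

```python
import itertools
import random

def random_2sat(n, m, rng):
    """Sample m random 2-SAT clauses over x_1,...,x_n.
    A clause is a pair of signed literals: +i stands for x_i, -i for ¬x_i."""
    clauses = []
    for _ in range(m):
        i, j = rng.sample(range(1, n + 1), 2)  # two distinct variables
        clauses.append((rng.choice((-1, 1)) * i, rng.choice((-1, 1)) * j))
    return clauses

def count_sat(n, clauses):
    """Z(Phi): brute-force count of satisfying assignments; only viable for tiny n."""
    def lit_sat(sigma, l):
        # literal l is satisfied iff the variable's spin matches the literal's sign
        return sigma[abs(l) - 1] == (1 if l > 0 else -1)
    return sum(
        all(lit_sat(sigma, l1) or lit_sat(sigma, l2) for l1, l2 in clauses)
        for sigma in itertools.product((-1, 1), repeat=n)
    )

rng = random.Random(0)
n, d = 12, 1.5
Z = count_sat(n, random_2sat(n, round(d * n / 2), rng))
print(Z)  # Z(Phi) for one sample; below the threshold d = 2, typically Z > 0
```

For sanity, the formula x_1 ∨ x_2 alone excludes exactly one of the four assignments, so `count_sat(2, [(1, 2)])` is 3.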
The cavity method
The factor graph
• vertices x_1, ..., x_n represent variables
• vertices a_1, ..., a_m represent clauses
• the graph G(Φ) contains few short cycles
• locally G(Φ) resembles a Galton-Watson branching process
The cavity method
The Boltzmann distribution
• assuming S(Φ) ≠ ∅ define
  µ_Φ(σ) = 1{σ ∈ S(Φ)} / Z(Φ)   (σ ∈ {±1}^{x_1,...,x_n})
• let σ = σ_Φ be a sample from µ_Φ
The cavity method
Belief Propagation
• define the variable-to-clause messages by
  µ_{Φ,x→a}(σ) = µ_{Φ−a}(σ_x = σ)   (σ = ±1)
• "marginal of x upon removal of a"
The cavity method
Belief Propagation
• define the clause-to-variable messages by
  µ_{Φ,a→x}(σ) = µ_{Φ−(∂x∖a)}(σ_x = σ)   (σ = ±1)
• "marginal of x upon removal of all clauses b ∈ ∂x, b ≠ a"
The cavity method
The replica symmetric ansatz
The messages (approximately) satisfy
  µ_{Φ,x→a}(σ) ∝ ∏_{b∈∂x∖a} µ_{Φ,b→x}(σ)
  µ_{Φ,a→x}(σ) ∝ 1 − 1{σ ≠ sign(x,a)} · µ_{Φ,∂a∖x→a}(−sign(∂a∖x, a))
The cavity method
The Bethe free entropy
• we expect that
  log Z(Φ) ∼ Σ_{i=1}^n log Σ_{σ=±1} ∏_{a∈∂x_i} µ_{Φ,a→x_i}(σ)
    + Σ_{i=1}^m log( 1 − ∏_{x∈∂a_i} µ_{Φ,x→a_i}(−sign(x,a_i)) )
    − Σ_{i=1}^n Σ_{a∈∂x_i} log Σ_{σ=±1} µ_{Φ,x_i→a}(σ) µ_{Φ,a→x_i}(σ)
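As a sanity check, the BP equations and the Bethe formula can be run on a tiny formula. The following Python sketch (helper names are ours; messages are stored as the probability of σ_x = +1) iterates the two update rules from the replica symmetric ansatz and then evaluates the three sums of the Bethe free entropy. On acyclic factor graphs BP is exact, so there the result equals log Z.

```python
import math

def bp_log_z(n, clauses, iters=100):
    """Belief Propagation / Bethe free entropy for a 2-SAT formula.
    clauses: pairs of signed literals, e.g. (1, -3) encodes x_1 OR (NOT x_3).
    Exact on tree factor graphs; a heuristic approximation otherwise."""
    var_edges = {x: [] for x in range(1, n + 1)}
    for a, cl in enumerate(clauses):
        for pos, lit in enumerate(cl):
            var_edges[abs(lit)].append((a, pos))
    # messages indexed by (clause, position); value = probability of sigma = +1
    m_va = {(a, pos): 0.5 for a in range(len(clauses)) for pos in range(2)}
    m_av = dict(m_va)

    for _ in range(iters):
        # variable-to-clause: product of the other incoming clause messages
        new_va = {}
        for a, cl in enumerate(clauses):
            for pos, lit in enumerate(cl):
                p, q = 1.0, 1.0
                for b, bpos in var_edges[abs(lit)]:
                    if (b, bpos) != (a, pos):
                        p *= m_av[(b, bpos)]
                        q *= 1 - m_av[(b, bpos)]
                new_va[(a, pos)] = p / (p + q)
        m_va = new_va
        # clause-to-variable: clause satisfied unless the other literal fails
        for a, cl in enumerate(clauses):
            for pos, lit in enumerate(cl):
                other = cl[1 - pos]
                mv = m_va[(a, 1 - pos)]
                viol = (1 - mv) if other > 0 else mv  # P(other literal violated)
                up = 1.0 if lit > 0 else 1.0 - viol   # unnormalised value at +1
                um = 1.0 - viol if lit > 0 else 1.0   # unnormalised value at -1
                m_av[(a, pos)] = up / (up + um)

    # Bethe free entropy: variable + clause - edge contributions
    F = 0.0
    for x in range(1, n + 1):
        p, q = 1.0, 1.0
        for a, pos in var_edges[x]:
            p *= m_av[(a, pos)]
            q *= 1 - m_av[(a, pos)]
        F += math.log(p + q)
    for a, cl in enumerate(clauses):
        viols = [(1 - m_va[(a, pos)]) if lit > 0 else m_va[(a, pos)]
                 for pos, lit in enumerate(cl)]
        F += math.log(1 - viols[0] * viols[1])
    for (a, pos), mv in m_va.items():
        ma = m_av[(a, pos)]
        F -= math.log(mv * ma + (1 - mv) * (1 - ma))
    return F

print(bp_log_z(3, [(1, 2), (2, 3)]))  # tree formula; exact value is log 5
```

On (x_1 ∨ x_2) ∧ (x_2 ∨ x_3), whose factor graph is a path (hence a tree), the routine returns log 5, matching the five satisfying assignments.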
The cavity method
Density evolution
• consider the empirical distribution of the messages:
  π_Φ = (1/(2m)) Σ_{i=1}^n Σ_{a∈∂x_i} δ_{µ_{Φ,x_i→a}(+1)}
• with d_+, d_− ~ Po(d/2) and µ_0, µ_1, µ_2, ... samples from π_Φ:
  µ_0 =_d ( ∏_{i=1}^{d_+} µ_i ) / ( ∏_{i=1}^{d_+} µ_i + ∏_{i=1}^{d_−} µ_{d_+ + i} )
The cavity method
Summary: the replica symmetric prediction [MZ96]
For d < 2 there is a unique distribution π_d on (0,1) s.t.
  µ_0 =_d ( ∏_{i=1}^{d_+} µ_i ) / ( ∏_{i=1}^{d_+} µ_i + ∏_{i=1}^{d_−} µ_{d_+ + i} )
and lim_{n→∞} n^{−1} log Z(Φ) = B_d, where
  B_d = E[ log( ∏_{i=1}^{d_+} µ_i + ∏_{i=1}^{d_−} µ_{d_+ + i} ) ] − (d/2) E[ log(1 − µ_1 µ_2) ]
The cavity method
Theorem [ACOHKLMPZ20]
For d < 2 there is a unique distribution π_d on (0,1) s.t.
  µ_0 =_d ( ∏_{i=1}^{d_+} µ_i ) / ( ∏_{i=1}^{d_+} µ_i + ∏_{i=1}^{d_−} µ_{d_+ + i} )
and lim_{n→∞} n^{−1} log Z(Φ) = B_d, where
  B_d = E[ log( ∏_{i=1}^{d_+} µ_i + ∏_{i=1}^{d_−} µ_{d_+ + i} ) ] − (d/2) E[ log(1 − µ_1 µ_2) ]
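The fixed point π_d and the value B_d can be approximated numerically by population dynamics: represent π_d by a large pool of samples and repeatedly resample the pool through the distributional equation, then average the Bethe functional over the pool. A minimal Python sketch (function names and parameters are ours; Poisson sampling via Knuth's method):

```python
import math
import random

rng = random.Random(1)

def poisson(lam):
    """Knuth's Poisson sampler; adequate for small lam."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def population_dynamics(d, pop_size=10_000, sweeps=20):
    """Approximate the density evolution fixed point pi_d by a sample pool."""
    pop = [0.5] * pop_size
    for _ in range(sweeps):
        new = []
        for _ in range(pop_size):
            a = math.prod(rng.choice(pop) for _ in range(poisson(d / 2)))
            b = math.prod(rng.choice(pop) for _ in range(poisson(d / 2)))
            new.append(a / (a + b))
        pop = new
    return pop

def bethe_value(d, pop, samples=10_000):
    """Monte Carlo estimate of B_d from a pool approximating pi_d."""
    acc = 0.0
    for _ in range(samples):
        a = math.prod(rng.choice(pop) for _ in range(poisson(d / 2)))
        b = math.prod(rng.choice(pop) for _ in range(poisson(d / 2)))
        acc += math.log(a + b) - (d / 2) * math.log(1 - rng.choice(pop) * rng.choice(pop))
    return acc / samples

pop = population_dynamics(1.5)
print(bethe_value(1.5, pop))  # Monte Carlo estimate of B_d at d = 1.5
```

At d = 0 both Poisson degrees are 0, the pool stays at the point mass on 1/2 and the estimate is exactly log 2, consistent with Z = 2^n for the empty formula.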
The cavity method 1.0 0.70 d=1.9 d=1.5 0.65 d=1.2 0.8 0.60 0.6 0.55 0.4 0.50 0.45 0.2 0.40 0.0 0.0 0.2 0.4 0.6 0.8 1.0 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 d
The proof strategy
Outline
1. Contraction method: unique solution to density evolution
2. Spatial mixing: the empirical distribution π_Φ
3. Aizenman-Sims-Starr: derivation of the Bethe formula
4. Interpolation method: concentration of log Z(Φ)
The proof strategy
Outline
1. Contraction method: unique solution to density evolution
2. Spatial mixing: the empirical distribution π_Φ
3. Aizenman-Sims-Starr: derivation of the Bethe formula
4. Interpolation method: concentration of log Z(Φ)
Comparison with prior work [DM10, DMS13, MS07, P14, T01]
• zero temperature: hard constraints
• spatial mixing: delicate construction of extremal boundaries
• Aizenman-Sims-Starr instead of varying temperature β
Step 1: the contraction method
Proposition
For d < 2 there is a unique distribution π_d on (0,1) s.t.
  µ_0 =_d ( ∏_{i=1}^{d_+} µ_i ) / ( ∏_{i=1}^{d_+} µ_i + ∏_{i=1}^{d_−} µ_{d_+ + i} )
Step 1: the contraction method
Log-likelihood ratios
• let d ~ Po(d)
• let s_1, s'_1, s_2, s'_2, ... ∈ {±1} be uniform and independent
• introducing
  η_i = log( µ_i / (1 − µ_i) ) ∈ ℝ
we obtain
  η_0 =_d Σ_{i=1}^{d} s_i log( (1 + s'_i tanh(η_i/2)) / 2 )
Step 1: the contraction method
The Wasserstein space
• W_2(ℝ) = {probability measures on ℝ with finite 2nd moment}
• for ρ, ρ' ∈ W_2(ℝ) define
  ∆_2(ρ, ρ') = inf_{X~ρ, X'~ρ'} E[(X − X')²]^{1/2}
• this metric turns W_2(ℝ) into a complete separable metric space
Step 1: the contraction method
The Banach fixed point theorem
• a map F : W_2(ℝ) → W_2(ℝ) is a contraction if for some ε > 0
  ∆_2(F(ρ), F(ρ')) ≤ (1 − ε) ∆_2(ρ, ρ')   (ρ, ρ' ∈ W_2(ℝ))
• a contraction on a complete metric space has a unique fixed point
Step 1: the contraction method
Lemma
The map F : W_2(ℝ) → W_2(ℝ) that maps ρ to the distribution of
  Σ_{i=1}^{d} s_i log( (1 + s'_i tanh(η_i/2)) / 2 )   (η_1, η_2, ... samples from ρ)
is a contraction.
Step 1: the contraction method
Proof
Let (η_1, η'_1), (η_2, η'_2), ... be optimally coupled pairs with η_i ~ ρ, η'_i ~ ρ'. Then
  ∆_2(F(ρ), F(ρ'))² ≤ E[ ( Σ_{i=1}^{d} s_i log( (1 + s'_i tanh(η_i/2)) / (1 + s'_i tanh(η'_i/2)) ) )² ]
    = E[ Σ_{i=1}^{d} log²( (1 + s'_i tanh(η_i/2)) / (1 + s'_i tanh(η'_i/2)) ) ]   [the cross terms vanish because the s_i are uniform and independent]
    = d · E[ log²( (1 + s'_1 tanh(η_1/2)) / (1 + s'_1 tanh(η'_1/2)) ) ]
    = (d/2) Σ_{s=±1} E[ ( ∫_{η_1∧η'_1}^{η_1∨η'_1} ∂_z log(1 + s tanh(z/2)) dz )² ]
    ≤ (d/2) E[ (η_1 − η'_1)² ] = (d/2) ∆_2(ρ, ρ')²,
which yields a contraction because d/2 < 1 for d < 2.
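The contraction estimate can be observed numerically: evolve two coupled pools of log-likelihood ratios through the map F with shared randomness, exactly as in the coupling of the proof, and watch the empirical squared gap shrink by roughly the factor d/2 per step. A sketch under our own naming, not code from the lecture:

```python
import math
import random

rng = random.Random(2)

def poisson(lam):
    """Knuth's Poisson sampler; adequate for small lam."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def coupled_step(pairs, d):
    """Apply F once to both coordinates with identical randomness (same degree,
    same signs s_i, s'_i, same partner indices), mirroring the proof's coupling."""
    n, out = len(pairs), []
    for _ in range(n):
        x = y = 0.0
        for _ in range(poisson(d)):
            s, sp = rng.choice((-1, 1)), rng.choice((-1, 1))
            eta, etap = pairs[rng.randrange(n)]
            x += s * math.log((1 + sp * math.tanh(eta / 2)) / 2)
            y += s * math.log((1 + sp * math.tanh(etap / 2)) / 2)
        out.append((x, y))
    return out

d = 1.5
# start from two very different pools: N(0,1) vs N(0,9)
pairs = [(rng.gauss(0, 1), rng.gauss(0, 3)) for _ in range(10_000)]
for t in range(5):
    gap = sum((x - y) ** 2 for x, y in pairs) / len(pairs)
    print(t, gap)  # empirical squared gap, an upper bound on Delta_2^2
    pairs = coupled_step(pairs, d)
```

The printed gaps decay geometrically (in expectation by at most d/2 = 0.75 per iteration), illustrating why density evolution has a unique fixed point for d < 2.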
Step 2: spatial mixing
Proposition
For d < 2 the empirical distribution of marginals
  π_Φ = (1/n) Σ_{i=1}^n δ_{µ_Φ(σ_{x_i} = 1)}
converges to the density evolution fixed point π_d.
Step 2: spatial mixing
The Galton-Watson tree
• a random tree T comprising variable and clause nodes
• the root x_0 is a variable
• each variable node spawns Po(d) clause nodes
• each clause node has one variable node child
Step 2: spatial mixing
The Gibbs uniqueness property
• T^(2ℓ) = top 2ℓ levels of T
• we are going to show that
  lim_{ℓ→∞} E[ max_{σ ∈ S(T^(2ℓ))} | µ_{T^(2ℓ)}(σ_{x_0} = 1) − µ_{T^(2ℓ)}(σ_{x_0} = 1 | σ_{∂^{2ℓ} x_0} = σ_{∂^{2ℓ} x_0}) | ] = 0
Step 2: spatial mixing
[Figure: tree with an extremal ±1 boundary condition]
The extremal boundary condition
• given T^(2ℓ) we construct σ⁺ ∈ S(T^(2ℓ)) that maximises
  µ_{T^(2ℓ)}(σ_{x_0} = 1 | σ_{∂^{2ℓ} x_0} = σ⁺_{∂^{2ℓ} x_0})
• we start by setting σ⁺_{x_0} = 1 and proceed inductively
• given σ⁺_x the spins σ⁺_y of the children nudge x towards σ⁺_x
Step 2: spatial mixing
Extremal density evolution
• the process leads to a modified density evolution equation
  η_0 =_d Σ_{i=1}^{d} s_i log( (1 + s_i tanh(η_i/2)) / 2 )
• the contraction method applies
• we re-discover the solution π_d to the original density evolution
• consequently, π_Φ converges to π_d
Step 2: spatial mixing
Corollary
For any fixed k ≥ 2 we have
  lim_{n→∞} E[ Σ_{σ∈{±1}^k} | µ_Φ(σ_{x_1} = σ_1, ..., σ_{x_k} = σ_k) − ∏_{i=1}^k µ_Φ(σ_{x_i} = σ_i) | ] = 0
Step 3: Aizenman-Sims-Starr
Proposition
We have lim_{n→∞} E[log(1 ∨ Z(Φ_{n+1}))] − E[log(1 ∨ Z(Φ_n))] = B_d
Step 3: Aizenman-Sims-Starr
Proposition
We have lim_{n→∞} E[log(1 ∨ Z(Φ_{n+1}))] − E[log(1 ∨ Z(Φ_n))] = B_d
Corollary
We have lim_{n→∞} (1/n) E[log(1 ∨ Z(Φ_n))] = B_d
Proof
Just write a telescoping sum
  E[log(1 ∨ Z(Φ_n))] = E[log(1 ∨ Z(Φ_1))] + Σ_{N=1}^{n−1} ( E[log(1 ∨ Z(Φ_{N+1}))] − E[log(1 ∨ Z(Φ_N))] );
the summands converge to B_d, so after dividing by n the Cesàro averages converge to B_d as well.
Step 3: Aizenman-Sims-Starr
[Figure: coupling of Φ'_n with Φ_n and Φ_{n+1}]
A coupling argument
• let Φ'_n comprise m' ~ Po(d(n−1)/2) random clauses
• obtain Φ_n by adding ∆'' ~ Po(d/2) clauses
• to obtain Φ_{n+1} add the variable x_{n+1} and ∆''' ~ Po(d) clauses
• hence we compare
  E[ log( (Z(Φ_n) ∨ 1) / (Z(Φ'_n) ∨ 1) ) ]   and   E[ log( (Z(Φ_{n+1}) ∨ 1) / (Z(Φ'_n) ∨ 1) ) ]