Phase Transition Phenomena in Integral Geometry
Martin Lotz (Warwick University)
Joint work with Joel Tropp (Caltech) and Dennis Amelunxen, Michael McCoy, Ivan Nourdin, Giovanni Peccati
Jena, September 18, 2019
Motivation
Let x_0 ∈ R^n be s-sparse and b = Ax_0 for A ∈ R^{m×n} (s < m < n).

    minimize ‖x‖_1 subject to Ax = b.    (⋆)

When is x_0 the unique solution of (⋆)?
◮ Every row of A is seen as a measurement or observation that reveals information about x_0.
◮ Motivation: applications in seismic imaging, signal processing, medical imaging, statistics and machine learning, ...
Compressed Sensing
Let x_0 ∈ R^n be s-sparse and b = Ax_0 for random A ∈ R^{m×n} (s < m < n).

    minimize ‖x‖_1 subject to Ax = b.

◮ Donoho, Candès, Romberg & Tao, Rudelson & Vershynin: successful recovery of x_0 when m ≥ const · log(n/m) · s (the "complexity" m is proportional to the "information content" s).
◮ Phase transitions for successful recovery were observed and precisely located by Donoho & Tanner and by Stojnic.
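The empirical success probabilities shown on the next slides can be reproduced with a short Monte Carlo experiment. The sketch below (the helper name l1_recover and all trial counts and tolerances are my own illustrative choices, not taken from the talk) casts ℓ1 minimization as a linear program and records how often a fixed s-sparse x_0 is recovered as m grows.

# A minimal sketch of the Monte Carlo experiment behind the success-probability
# plots: for random Gaussian A, solve the l1-minimization problem (cast as a
# linear program) and check whether the s-sparse x0 is recovered.  Parameter
# choices and tolerances are illustrative, not the ones used for the slides.

import numpy as np
from scipy.optimize import linprog

def l1_recover(A, b):
    """Solve  min ||x||_1  s.t.  Ax = b  via the standard LP reformulation."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])        # minimize sum(t)
    I = np.eye(n)
    A_ub = np.block([[I, -I], [-I, -I]])                  # |x_i| <= t_i
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])               # Ax = b
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]

rng = np.random.default_rng(0)
n, s, trials = 200, 50, 20
x0 = np.zeros(n); x0[:s] = rng.standard_normal(s)

for m in range(25, 201, 25):
    successes = 0
    for _ in range(trials):
        A = rng.standard_normal((m, n))
        x_hat = l1_recover(A, A @ x0)
        successes += np.linalg.norm(x_hat - x0) <= 1e-5 * np.linalg.norm(x0)
    print(f"m = {m:3d}: empirical success probability {successes / trials:.2f}")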
Observed Phase Transitions
Let x_0 be s-sparse and b = Ax_0 for random A ∈ R^{m×n} (s < m < n).

    minimize ‖x‖_1 subject to Ax = b.

[Figure, built up over several slides: empirical probability of success against the number of equations m, for s = 50, n = 200, with data points added at m = 25, 50, 75, 100, 125, 150, 175, 200.]
Phase Transitions for Linear Inverse Problems
Associate to a solution x_0 of Ax = b and a convex problem

    minimize f(x) subject to Ax = b    (⋆)

a parameter δ(f, x_0), the statistical dimension of f at x_0.

Theorem [Amelunxen, L, McCoy & Tropp, 2014]
Let η ∈ (0, 1) and let x_0 ∈ R^n be a fixed vector. Suppose A ∈ R^{m×n} has independent standard normal entries, and let b = Ax_0. Then

    m ≥ δ(f, x_0) + a_η √n  ⟹  (⋆) recovers x_0 with probability ≥ 1 − η;
    m ≤ δ(f, x_0) − a_η √n  ⟹  (⋆) recovers x_0 with probability ≤ η,

where a_η := √(4 log(4/η)).
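As a rough numerical illustration of the theorem for f = ‖·‖_1, the sketch below evaluates a predicted transition location δ(‖·‖_1, x_0) and the window width a_η √n for n = 200, s = 50. The formula used inside delta_l1 is the descent-cone recipe from the Amelunxen–Lotz–McCoy–Tropp paper as I recall it; treat both the function and the formula as assumptions rather than material from the slides.

# Hypothetical illustration of the theorem: estimate the phase-transition
# location m ~ delta(f, x0) for f = l1-norm and an s-sparse x0 in R^n, together
# with the transition width a_eta * sqrt(n).  The expression for delta below is
# the descent-cone recipe from the ALMT paper as recalled from memory; treat it
# as an assumption rather than a quotation.

import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def delta_l1(s, n):
    """inf_{tau >= 0}  s(1 + tau^2) + (n - s) E[(|g| - tau)_+^2],  g ~ N(0,1)."""
    def objective(tau):
        excess = 2 * ((1 + tau**2) * norm.sf(tau) - tau * norm.pdf(tau))
        return s * (1 + tau**2) + (n - s) * excess
    res = minimize_scalar(objective, bounds=(0, 10), method="bounded")
    return res.fun

n, s, eta = 200, 50, 0.05
a_eta = np.sqrt(4 * np.log(4 / eta))
print("predicted transition near m ~", round(delta_l1(s, n)))
print("window half-width a_eta*sqrt(n) ~", round(a_eta * np.sqrt(n)))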
Phase Transitions for Linear Inverse Problems
[Figure: empirical phase-transition diagrams for linear inverse problems.]
From Optimization to Geometry
The problem

    minimize f(x) subject to Ax = b

has x_0 as its unique solution if and only if the optimality condition

    ker A ∩ D(f, x_0) = {0}

is satisfied, where

    D(f, x_0) := ⋃_{τ>0} { y ∈ R^n : f(x_0 + τy) ≤ f(x_0) }

is the convex descent cone of f at x_0.
◮ A Gaussian ⇒ ker A uniformly distributed in the Grassmannian.
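The equivalence can be seen in one line; the following sketch of the argument is my own reconstruction and is not taken from the slides.

\begin{itemize}
  \item If $0 \neq y \in \ker A \cap D(f, x_0)$, then $f(x_0 + \tau y) \le f(x_0)$
        for some $\tau > 0$ while $A(x_0 + \tau y) = b$, so $x_0$ is not the
        unique minimizer.
  \item Conversely, if $x \neq x_0$ is feasible with $f(x) \le f(x_0)$, then
        $x - x_0 \in \ker A \cap D(f, x_0) \setminus \{0\}$.
\end{itemize}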
The Mathematical Problem
Given a closed convex cone C ⊂ R^n and a random linear subspace L with dim L = k, find

    P{C ∩ L ≠ {0}} := ν_k({L ∈ Gr(k, n) : C ∩ L ≠ {0}}),

where ν_k is the normalized Haar measure on the Grassmannian.
◮ Bounds can be derived from Gordon's escape-through-the-mesh argument;
◮ Exact formulas for the probability of intersection are based on the Crofton formula from (spherical) integral geometry.
Spherical Intrinsic Volumes
v_0(C), ..., v_n(C): the spherical/conic intrinsic volumes of C.
[Figure: intrinsic volume profiles of four cones — v_k(R^n_{≥0}); v_k(L), dim L = k; v_k(Circ(n, π/4)); v_k({x : x_1 ≤ · · · ≤ x_n}).]
The Crofton and Kinematic Formula
◮ Kinematic Formula:

    P{C ∩ QD ≠ {0}} = 2 Σ_{k odd} Σ_i v_i(C) v_{n−i+k}(D),

  where Q is uniformly distributed on SO(n).
◮ Crofton Formula:

    P{C ∩ L ≠ {0}} = 2 Σ_{k odd} v_{m+k}(C),

  where L is uniform in Gr(n−m, n).
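The Crofton formula can be checked numerically for the nonnegative orthant, whose conic intrinsic volumes have the classical closed form v_k(R^n_{≥0}) = C(n, k)/2^n (a standard fact, stated here as an assumption since it is not on the slides). The sketch below compares the Crofton prediction with a Monte Carlo estimate, testing via a small feasibility LP whether ker A meets the orthant nontrivially; the helper names are mine.

# Numerical sanity check of the spherical Crofton formula for the nonnegative
# orthant C = R^n_{>=0}, assuming v_k(C) = binom(n, k) / 2^n (standard fact).
# Take L = ker(A) for Gaussian A in R^{m x n}, so L is uniform in Gr(n-m, n),
# and test C ∩ L != {0} by checking feasibility of {y >= 0, Ay = 0, sum(y) = 1}.

import numpy as np
from math import comb
from scipy.optimize import linprog

def crofton_prediction(n, m):
    v = np.array([comb(n, k) for k in range(n + 1)], dtype=float) / 2**n
    return 2 * sum(v[m + k] for k in range(1, n - m + 1, 2))

def orthant_meets_kernel(A):
    n = A.shape[1]
    A_eq = np.vstack([A, np.ones((1, n))])
    b_eq = np.append(np.zeros(A.shape[0]), 1.0)
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.success  # feasible => nontrivial intersection

rng = np.random.default_rng(0)
n, m, trials = 20, 10, 500
hits = sum(orthant_meets_kernel(rng.standard_normal((m, n))) for _ in range(trials))
print("Monte Carlo:", hits / trials, " Crofton:", crofton_prediction(n, m))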
Moments
Associate to a cone C the discrete random variable X_C with

    P{X_C = k} = v_k(C),

and define the statistical dimension as the expectation

    δ(C) = E[X_C] = Σ_{k=0}^{n} k v_k(C).

[Figure: intrinsic volume profile of Circ(n, π/4).]
It appears that X_C might concentrate around δ(C).
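For the nonnegative orthant this random variable is explicit: under the same binomial assumption as above, X_C is Binomial(n, 1/2) and δ(C) = n/2. A minimal sketch:

# Intrinsic volume profile and statistical dimension of the nonnegative orthant.
# Uses the standard fact (stated here as an assumption) that
# v_k(R^n_{>=0}) = binom(n, k) / 2^n, so X_C is Binomial(n, 1/2), delta(C) = n/2.

from math import comb

n = 25
v = [comb(n, k) / 2**n for k in range(n + 1)]   # conic intrinsic volumes
delta = sum(k * vk for k, vk in enumerate(v))   # statistical dimension E[X_C]
print(delta)        # 12.5 = n/2
print(sum(v))       # 1.0: the intrinsic volumes sum to one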
Concentration of Measure
Theorem (Amelunxen–L–McCoy–Tropp 2014)
Let C be a convex cone, and X_C a discrete random variable with distribution P{X_C = k} = v_k(C). Let δ(C) = E[X_C]. Then

    P{|X_C − δ(C)| > λ} ≤ 4 exp( −λ²/8 / (ω(C) + λ) )  for λ ≥ 0,

where ω(C) := min{δ(C), n − δ(C)}.
◮ Refined by McCoy–Tropp 2014.
◮ Central Limit Theorem by Goldstein–Nourdin–Peccati 2017.
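A quick check of the bound against the exact binomial tail for the orthant (the same assumption on v_k as above; the bound is far from tight here, but it does hold):

# Compare the exact tail of X_C with the concentration bound, for the orthant
# C = R^n_{>=0} where X_C is Binomial(n, 1/2), delta(C) = omega(C) = n/2.
# (The binomial description of X_C is the same assumption as above.)

import numpy as np
from math import comb, exp

n = 100
delta = omega = n / 2
v = np.array([comb(n, k) for k in range(n + 1)], dtype=float) / 2**n

for lam in [5, 10, 20, 30]:
    exact = sum(v[k] for k in range(n + 1) if abs(k - delta) > lam)
    bound = 4 * exp(-(lam**2 / 8) / (omega + lam))
    print(f"lambda={lam:3d}  exact={exact:.3e}  bound={bound:.3e}")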
Approximate Crofton Formula
Applying the concentration result to the Crofton formula:

Corollary
Let η ∈ (0, 1) and assume L is uniformly distributed in Gr(n−m, n). Then

    δ(C) ≤ m − a_η √n  ⟹  P{C ∩ L = {0}} ≥ 1 − η;
    δ(C) ≥ m + a_η √n  ⟹  P{C ∩ L = {0}} ≤ η,

where a_η := √(4 log(4/η)).
◮ If C = D(f, x_0), define δ(f, x_0) = δ(C).
Approximate Kinematic Formula
Applying the concentration result to the Kinematic formula:

Corollary
Let η ∈ (0, 1) and assume one of C, D is not a subspace. Then

    δ(C) + δ(D) ≤ n − a_η √n  ⟹  P{C ∩ QD = {0}} ≥ 1 − η;
    δ(C) + δ(D) ≥ n + a_η √n  ⟹  P{C ∩ QD = {0}} ≤ η,

where a_η := √(4 log(4/η)).
Euclidean Integral Geometry
Do similar results hold in Euclidean space R^n?

Steiner Formula

    Vol_n(K + λB^n) = Σ_{i=0}^{n} λ^{n−i} κ_{n−i} V_i(K),

where K is a convex body and κ_i = Vol_i(B^i).
◮ V_i(K): the i-th intrinsic volume;
◮ Wills functional: W(K) = V_0(K) + V_1(K) + · · · + V_n(K).
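The Steiner formula can be verified numerically for an axis-parallel box, using the standard fact (an assumption here, not stated on the slides) that V_i of a box is the i-th elementary symmetric polynomial of its side lengths. The sketch below compares the Steiner polynomial with a Monte Carlo estimate of Vol_n(K + λB^n).

# Steiner formula sanity check for an axis-parallel box K = [0,a_1] x ... x [0,a_n],
# assuming V_i(K) is the i-th elementary symmetric polynomial of the side lengths.
# The volume of K + lambda*B^n is estimated by Monte Carlo and compared with
# sum_i lambda^(n-i) * kappa_(n-i) * V_i(K).

import itertools
import numpy as np
from math import gamma, pi

def kappa(i):                     # volume of the i-dimensional unit ball
    return pi**(i / 2) / gamma(i / 2 + 1)

def intrinsic_volumes(sides):     # elementary symmetric polynomials e_0, ..., e_n
    n = len(sides)
    return [sum(np.prod(c) for c in itertools.combinations(sides, i)) for i in range(n + 1)]

sides, lam = [1.0, 2.0, 0.5], 0.7
n = len(sides)
V = intrinsic_volumes(sides)
steiner = sum(lam**(n - i) * kappa(n - i) * V[i] for i in range(n + 1))

rng = np.random.default_rng(1)
box_hi = np.array(sides)
pts = rng.uniform(-lam, box_hi + lam, size=(200_000, n))      # sample a bounding box
dist = np.linalg.norm(np.clip(pts, 0, box_hi) - pts, axis=1)  # distance to K
vol_bounding = np.prod(box_hi + 2 * lam)
mc = vol_bounding * np.mean(dist <= lam)

print("Steiner:", round(steiner, 3), " Monte Carlo:", round(mc, 3))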
Crofton and Kinematic Formula
A different normalization (Nijenhuis, ...):

    Ṽ_i(K) := (ω_{n+1}/ω_{i+1}) V_i(K),    W̃(K) := Σ_{i=0}^{n} Ṽ_i(K),

where ω_k is the surface area of the k-dimensional unit ball.
◮ Crofton Formula:

    ∫_{Af(n−i,n)} W̃(K ∩ E) μ_{n−i}(dE) = Σ_{k=i}^{n} Ṽ_k(K),

  or, dividing by W̃(K),

    ∫_{Af(n−i,n)} [W̃(K ∩ E)/W̃(K)] μ_{n−i}(dE) = Σ_{k=i}^{n} Ṽ_k(K)/W̃(K).
Crofton and Kinematic Formula
With the same normalization (Nijenhuis, ...):
◮ Kinematic Formula:

    ∫_{G_n} W̃(K ∩ gM) μ(dg) = Σ_{j=0}^{n} Σ_{k=n−j}^{n} Ṽ_j(K) Ṽ_k(M),

  or, dividing by W̃(K) W̃(M),

    ∫_{G_n} [W̃(K ∩ gM) / (W̃(K) W̃(M))] μ(dg) = Σ_{j=0}^{n} Σ_{k=n−j}^{n} [Ṽ_j(K)/W̃(K)] [Ṽ_k(M)/W̃(M)].
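In analogy with the conic case, dividing by W̃(K) turns the normalized intrinsic volumes into a probability distribution on {0, ..., n}. The sketch below computes this profile for a box, again treating the elementary-symmetric-polynomial formula for V_i as an assumption, with ω_k = 2π^{k/2}/Γ(k/2).

# Normalized intrinsic volumes of a box, as a sketch of the Euclidean analogue
# of the intrinsic-volume distribution: tilde V_i = (omega_{n+1}/omega_{i+1}) V_i
# with omega_k = 2 pi^{k/2} / Gamma(k/2), and the profile tilde V_i / tilde W.
# The box formula V_i = e_i(side lengths) is the same assumption as above.

import itertools
import numpy as np
from math import gamma, pi

def omega(k):                     # surface area of the k-dimensional unit ball
    return 2 * pi**(k / 2) / gamma(k / 2)

def intrinsic_volumes(sides):
    n = len(sides)
    return [sum(np.prod(c) for c in itertools.combinations(sides, i)) for i in range(n + 1)]

sides = [3.0, 2.0, 1.0, 1.0]
n = len(sides)
V = intrinsic_volumes(sides)
V_tilde = [omega(n + 1) / omega(i + 1) * V[i] for i in range(n + 1)]
W_tilde = sum(V_tilde)
profile = [v / W_tilde for v in V_tilde]   # a probability distribution on {0,...,n}
print([round(p, 3) for p in profile])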