Constructive Discrepancy Minimization for Convex Sets
Thomas Rothvoss (University of Washington, Seattle)
Discrepancy theory

◮ Set system S = {S_1, ..., S_m}, S_i ⊆ [n]
◮ Coloring χ : [n] → {−1, +1}
◮ Discrepancy

    disc(S) = min_{χ:[n]→{±1}} max_{S∈S} | Σ_{i∈S} χ(i) |

[Figure: a set S ∈ S with its elements colored −1 / +1]

Known results:
◮ n sets, n elements: disc(S) = O(√n)  [Spencer ’85]
◮ Every element in ≤ t sets: disc(S) < 2t  [Beck & Fiala ’81]

Main method: find a partial coloring χ : [n] → {0, ±1} with
◮ low discrepancy max_{S∈S} |χ(S)|
◮ |supp(χ)| ≥ Ω(n)
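As a concrete illustration of the definition (not part of the talk), here is a tiny brute-force sketch that computes disc(S) exactly by trying all 2^n colorings; it only makes the min-max in the definition explicit and is feasible only for very small n.

```python
from itertools import product

def discrepancy(sets, n):
    """Exact disc(S): minimize over all colorings chi: [n] -> {-1,+1}
    the maximum of |sum_{i in S} chi(i)| over the sets S."""
    best = float("inf")
    for chi in product((-1, 1), repeat=n):
        worst = max(abs(sum(chi[i] for i in S)) for S in sets)
        best = min(best, worst)
    return best

# Toy instance: 3 sets over [4] = {0, 1, 2, 3}
sets = [{0, 1}, {1, 2, 3}, {0, 2}]
print(discrepancy(sets, 4))   # prints 1: the odd-size set {1,2,3} forces discrepancy >= 1
```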
Discrepancy theory (2)

Lemma (Spencer)
For m sets on n ≤ m elements there is a partial coloring of discrepancy O(√(n log(2m/n))).

◮ Run the argument log n times
◮ Total discrepancy is √n + √(n/2) + √(n/4) + ... + 1 = O(√n)
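Written out, the last bullet is a geometric-type sum (the log factors from the lemma are suppressed here; they do not change the order of the bound):

\[
\sum_{k=0}^{\log n} \sqrt{\tfrac{n}{2^k}} \;=\; \sqrt{n}\sum_{k\ge 0} 2^{-k/2} \;\le\; \frac{\sqrt{n}}{1-2^{-1/2}} \;\le\; 3.5\,\sqrt{n} \;=\; O(\sqrt{n}).
\]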
Thm of Spencer-Gluskin-Giannopoulos

Goal: For K := { x ∈ R^n : | Σ_{j∈S_i} x_j | ≤ 100√n  ∀ i ∈ [n] },
find a point x ∈ {−1, 0, 1}^n ∩ K with |supp(x)| ≥ Ω(n).

[Figure: the cube corners (±1, ±1), the body K as an intersection of strips of width ≥ 100 around 0, and a point x ∈ {−1, 0, 1}^n ∩ K]

◮ K is an intersection of n strips, each of width ≥ 100
◮ Gaussian measure: γ_n(K) ≥ (γ_n(strip of width 100))^n ≥ e^{−n/100}
◮ Counting argument: any such K admits a partial coloring
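A small numeric check of the measure bound (illustrative, not from the talk): for a random set system we compute the product of the one-dimensional strip measures, which lower-bounds γ_n(K) by the Šidák-Khatri lemma on the next slide, and compare it with e^{−n/100}.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 40
sets = rng.random((n, n)) < 0.5                 # random set system S_1, ..., S_n over [n]

# Strip i is {x : |<1_{S_i}, x>| <= 100*sqrt(n)}; its Gaussian measure is
# 2*Phi(100*sqrt(n)/||1_{S_i}||) - 1, with ||1_{S_i}|| = sqrt(|S_i|) <= sqrt(n).
half_widths = 100 * np.sqrt(n) / np.sqrt(sets.sum(axis=1))
strip_measures = 2 * norm.cdf(half_widths) - 1

lower_bound = np.prod(strip_measures)           # Sidak-Khatri: gamma_n(K) >= this product
print(lower_bound, np.exp(-n / 100))            # the product dominates e^{-n/100}
```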
Gaussian measure

Lemma (Šidák-Khatri ’67)
For K convex and symmetric and a strip S (both centered at 0),
γ_n(K ∩ S) ≥ γ_n(K) · γ_n(S).

[Figure: a symmetric convex body K and a strip S, both containing 0]
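A quick Monte Carlo sanity check of the lemma in R^2 (the body K, the strip S, and the sample size below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1_000_000, 2))          # samples from gamma_2

in_K = np.abs(X[:, 0] + X[:, 1]) <= 1.0          # K: a symmetric strip in direction (1, 1)
in_S = np.abs(X[:, 0]) <= 0.5                    # S: an axis-aligned strip

lhs = np.mean(in_K & in_S)                       # estimate of gamma_2(K ∩ S)
rhs = np.mean(in_K) * np.mean(in_S)              # estimate of gamma_2(K) * gamma_2(S)
print(lhs >= rhs)                                # True, up to sampling error
```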
Partial coloring approaches

◮ Spencer ’85, Gluskin ’87, Giannopoulos ’97:
  ◮ (+) very general
  ◮ (−) non-constructive
◮ Bansal ’10: SDP-based random walk for Spencer’s theorem
  ◮ (+) poly-time
  ◮ (−) custom-tailored to Spencer’s setting
◮ Lovett-Meka ’12:
  ◮ (+) poly-time
  ◮ (+) simple and elegant
  ◮ (+/−) works for any K = { x : |⟨x, v_i⟩| ≤ λ_i  ∀ i ∈ [m] } with Σ_{i=1}^m e^{−λ_i²/16} ≤ n/16
Our main result

Theorem (R. 2014)
For a convex symmetric set K ⊆ R^n with γ_n(K) ≥ e^{−δn}, one can find a
y ∈ K ∩ [−1, 1]^n with |{ i : y_i = ±1 }| ≥ εn in poly-time.

Algorithm:
(1) Take a random x* ∼ γ_n.
(2) Compute y* = argmin { ‖x* − y‖_2 : y ∈ K ∩ [−1, 1]^n }.

[Figure: the Gaussian sample x* outside the cube and its projection y* onto K ∩ [−1, 1]^n]
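A minimal sketch of the two-step algorithm above, assuming K is given explicitly as a symmetric polytope {x : |Ax| ≤ b} (componentwise); the solver, tolerances, and the toy Spencer-type instance are illustrative choices, not from the talk.

```python
import numpy as np
import cvxpy as cp

def partial_coloring(A, b, seed=0):
    """One round: sample x* ~ N(0, I_n), then project it onto K ∩ [-1,1]^n,
    where K = {x : |A x| <= b} componentwise."""
    n = A.shape[1]
    rng = np.random.default_rng(seed)
    x_star = rng.standard_normal(n)              # step (1): Gaussian sample

    y = cp.Variable(n)
    constraints = [cp.abs(A @ y) <= b,           # y in K
                   cp.abs(y) <= 1]               # y in [-1, 1]^n
    cp.Problem(cp.Minimize(cp.sum_squares(x_star - y)), constraints).solve()
    return y.value                               # step (2): nearest point y*

# Toy instance: Spencer-type strips |sum_{j in S_i} x_j| <= 100*sqrt(n)
n = 50
rng = np.random.default_rng(1)
A = (rng.random((n, n)) < 0.5).astype(float)     # random set system
b = 100 * np.sqrt(n) * np.ones(n)
y_star = partial_coloring(A, b)
# count coordinates pushed to ±1 (up to solver tolerance)
print("coordinates at +/-1:", int(np.sum(np.abs(y_star) > 1 - 1e-3)))
```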
Isoperimetric inequality

◮ For a set K, let K_Δ := { x : d(x, K) ≤ Δ }

[Figure: the set K and its Δ-expansion K_Δ]

Lemma
γ_n(K) ≥ e^{−δn}  ⇒  γ_n(K_{3√(δn)}) ≥ 1 − e^{−δn}

◮ Isoperimetric inequality: half-spaces are the worst case!

[Figure: the standard Gaussian density (1/√(2π)) e^{−z²/2}; a half-space of measure e^{−δn} and the complement of its expansion, also of measure e^{−δn}, are at distance ≤ 3√(δn)]
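A quick numeric sanity check of the lemma in the worst case (a half-space), using the one-dimensional reduction: if H = {x : x_1 ≥ t} has γ_n(H) = e^{−δn}, then H_Δ = {x : x_1 ≥ t − Δ}. The parameters below are arbitrary illustrative choices.

```python
from math import exp, sqrt
from scipy.stats import norm

delta, n = 0.05, 200                      # illustrative parameters
target = exp(-delta * n)                  # desired measure e^{-delta*n}
t = -norm.ppf(target)                     # half-space {x_1 >= t} has measure e^{-delta*n}
Delta = 3 * sqrt(delta * n)               # expansion distance from the lemma

measure_expanded = norm.cdf(Delta - t)    # gamma_n(H_Delta) for the half-space
print(measure_expanded >= 1 - target)     # True: matches the lemma's conclusion
```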
Analysis

[Figure: x* outside the cube, its projection y*, and the bodies K, [−1, 1]^n and K(I*)]

◮ W.h.p. ‖x* − y*‖_2 ≥ (1/5)·√n
◮ For any Q with γ_n(Q) ≥ e^{−o(n)}:  Pr[ d(x*, Q) ≥ (1/5)·√n ] ≤ e^{−Ω(n)}
◮ Def. I* := { i : |y*_i| = 1 }. Suppose |I*| ≤ εn.
◮ K(I*) := K ∩ { |x_i| ≤ 1 : i ∈ I* }
◮ ‖x* − y*‖_2 = d(x*, K(I*))
◮ K(I*) is still large: γ_n(K(I*)) ≥ γ_n(K) · (γ_n(strip of width 1))^{εn} ≥ e^{−2δn}
◮ W.h.p. d(x*, K(I*)) ≤ 3√(2δn) ≪ (1/5)·√n for δ small enough, contradicting the first bullet
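Spelling out the measure bound in the second-to-last bullet (a sketch: the constant 0.39 comes from γ_1([−1, 1]) = 2Φ(1) − 1 ≈ e^{−0.382}, and the condition on ε is an assumption chosen here so that the bound closes):

\[
\gamma_n(K(I^*)) \;\ge\; \gamma_n(K)\cdot \prod_{i\in I^*}\gamma_1([-1,1])
\;\ge\; e^{-\delta n}\,\bigl(2\Phi(1)-1\bigr)^{\varepsilon n}
\;\ge\; e^{-\delta n}\, e^{-0.39\,\varepsilon n}
\;\ge\; e^{-2\delta n}
\qquad\text{if } \varepsilon \le \delta/0.39,
\]

where the first inequality applies the Šidák-Khatri lemma once for each strip { |x_i| ≤ 1 }, i ∈ I*.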