Discrepancy Theory and Applications to Bin Packing

Thomas Rothvoss (joint work with Becca Hoberg)
Discrepancy theory

◮ Set system S = {S_1, ..., S_m}, S_i ⊆ [n]
◮ Coloring χ : [n] → {−1, +1}
◮ Discrepancy:
  disc(S) = min_{χ:[n]→{±1}} max_{S∈S} | Σ_{i∈S} χ(i) |.

[figure: elements of a set S colored −1 / +1]

Known results:
◮ n sets, n elements: disc(S) = O(√n)  [Spencer ’85]
◮ Every element in ≤ t sets: disc(S) < 2t  [Beck & Fiala ’81]

Main method: Iteratively find a partial coloring.
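The definition above can be checked directly on a toy instance. The following minimal sketch (not from the talk) computes disc(S) by brute force over all 2^n colorings, which is only feasible for very small n:

```python
from itertools import product

def discrepancy(sets, n):
    """Brute-force disc(S): minimize, over all 2^n colorings chi,
    the maximum |sum_{i in S} chi(i)| over the sets S.
    Exponential in n -- intended only for tiny examples."""
    best = float("inf")
    for chi in product((-1, +1), repeat=n):
        worst = max(abs(sum(chi[i] for i in S)) for S in sets)
        best = min(best, worst)
    return best

# Elements [n] = {0, 1, 2, 3}; three overlapping sets.
sets = [{0, 1}, {1, 2}, {0, 2, 3}]
print(discrepancy(sets, 4))  # → 1
```

Here disc = 1 is forced: the set {0, 2, 3} has odd size, so its signed sum is always odd.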
Discrepancy algorithm

Theorem (R., FOCS 2014). For a convex symmetric set K ⊆ ℝ^n with Pr[gaussian ∈ K] ≥ e^{−Θ(n)}, one can find a y ∈ K ∩ [−1, 1]^n with |{i : y_i = ±1}| ≥ Θ(n) in poly-time.

Algorithm:
(1) take a random Gaussian x*
(2) compute y* = argmin{ ‖x* − y‖₂ : y ∈ K ∩ [−1, 1]^n }

[figure: the cube [−1, 1]^n, the body K, and the Gaussian point x* projected to y*]
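Step (2) is a Euclidean projection onto an intersection of convex sets. As an illustration (an assumption, not the talk's setting), the sketch below takes K to be a Euclidean ball of radius R and computes the projection onto K ∩ [−1, 1]^n with Dykstra's alternating-projection method, which converges to the exact nearest point in the intersection:

```python
import numpy as np

def proj_ball(x, R):
    """Euclidean projection onto the ball ||x||_2 <= R."""
    nrm = np.linalg.norm(x)
    return x if nrm <= R else x * (R / nrm)

def proj_box(x):
    """Projection onto the cube [-1, 1]^n."""
    return np.clip(x, -1.0, 1.0)

def proj_intersection(x, R, iters=2000):
    """Dykstra's algorithm: project x onto ball(R) ∩ [-1, 1]^n.
    Plain alternating projections would find *a* point of the
    intersection; the correction terms p, q make it the nearest one."""
    y = x.copy()
    p = np.zeros_like(x)
    q = np.zeros_like(x)
    for _ in range(iters):
        z = proj_ball(y + p, R)
        p = y + p - z
        y = proj_box(z + q)
        q = z + q - y
    return y

rng = np.random.default_rng(0)
n = 50
x_star = rng.standard_normal(n)             # step (1): random Gaussian
y_star = proj_intersection(x_star, R=3.0)   # step (2): nearest point in K ∩ cube
```

After convergence, many coordinates of y* sit exactly on the cube boundary ±1, which is what the partial-coloring argument exploits.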
Analysis

◮ W.h.p. ‖x* − y*‖₂ ≥ √n / 5.
◮ Fact: For any set Q: Pr[gaussian ∈ Q] ≥ e^{−o(n)} ⇒ E[dist(gaussian, Q)] ≤ o(√n).
◮ Key observation: ‖y* − x*‖₂ = min{ ‖y − x*‖₂ : y ∈ K and |y_i| ≤ 1 for all tight i }.
◮ Suppose only o(n) coordinates are tight. Then the strip STRIP cut out by those o(n) constraints satisfies Pr[gaussian ∈ K ∩ STRIP] ≥ e^{−Ω(n)}.
◮ Then E[dist(gaussian, K ∩ STRIP)] ≤ o(√n). Contradiction!

[figure: the cube [−1, 1]^n, the body K, the point x* with projection y*, and the slab K ∩ STRIP]
Application to Bin Packing
Bin Packing

Input: Items with sizes s_1, ..., s_n ∈ [0, 1]
Goal: Pack items into a minimum number of bins of size 1.

[figure: items of sizes s_i packed into bin 1, bin 2, ...]

◮ NP-hard to distinguish OPT ≤ 2 or OPT ≥ 3 [Garey & Johnson ’79]
◮ First Fit Decreasing [Johnson ’73]: APX ≤ (11/9)·OPT + 4
◮ [de la Vega & Lueker ’81]: APX ≤ (1 + ε)·OPT + O(1/ε²) in time O(n)·f(ε)
◮ [Karmarkar & Karp ’82]: APX ≤ OPT + O(log² OPT) in poly-time
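First Fit Decreasing, the simplest of the algorithms above, can be stated in a few lines. A minimal sketch (the instance below is illustrative, not from the talk):

```python
def first_fit_decreasing(sizes):
    """First Fit Decreasing [Johnson '73]: sort items largest-first and
    put each item into the first open bin it fits in, opening a new bin
    if none fits.  Guarantee: APX <= (11/9) * OPT + O(1)."""
    loads, packing = [], []          # current load / contents of each bin
    for s in sorted(sizes, reverse=True):
        for j in range(len(loads)):
            if loads[j] + s <= 1.0 + 1e-9:
                loads[j] += s
                packing[j].append(s)
                break
        else:                        # no open bin fits: open a new one
            loads.append(s)
            packing.append([s])
    return packing

# ten items of total size 3.36, so OPT >= 4 bins
example = [0.44, 0.44, 0.4, 0.4, 0.3, 0.3, 0.3, 0.26, 0.26, 0.26]
print(len(first_fit_decreasing(example)))  # → 4
```

On this instance FFD happens to be optimal; the 11/9 ratio is only attained on adversarial inputs.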
The Gilmore-Gomory LP relaxation

◮ b_i = #items with size s_i
◮ Feasible patterns:
  P = { p ∈ ℤ^n_{≥0} : Σ_{i=1}^n s_i p_i ≤ 1 }
◮ Gilmore-Gomory LP relaxation:
  min Σ_{p∈P} x_p
  s.t. Σ_{p∈P} p_i · x_p ≥ b_i   ∀ i ∈ [n]
       x_p ≥ 0                    ∀ p ∈ P
  In matrix form (the columns of A are the patterns): min 1^T x s.t. Ax ≥ b, x ≥ 0.
◮ Can find x with 1^T x ≤ OPT_f + δ in time poly(‖b‖₁, 1/δ)
The Gilmore-Gomory LP — Example

[figure: instance with item sizes 0.44, 0.4, 0.3, 0.26 and multiplicities b = (2, 2, 3, 3); the matrix A has one integer column per feasible pattern, and the optimal fractional solution puts weight 1/2 on several patterns]
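The pattern set P for the example instance is small enough to enumerate explicitly. A sketch (the enumeration is exponential in the number of distinct sizes, which is fine here):

```python
from itertools import product
from math import floor

def feasible_patterns(sizes, eps=1e-9):
    """All nonzero p in Z^n_{>=0} with sum_i sizes[i] * p[i] <= 1,
    i.e. the pattern set P of the Gilmore-Gomory LP.  Each count p[i]
    is bounded by floor(1 / sizes[i]), so a bounded product suffices."""
    bounds = [floor(1.0 / s) for s in sizes]
    pats = []
    for p in product(*(range(b + 1) for b in bounds)):
        if any(p) and sum(s * k for s, k in zip(sizes, p)) <= 1.0 + eps:
            pats.append(p)
    return pats

sizes = [0.44, 0.4, 0.3, 0.26]   # the example instance
P = feasible_patterns(sizes)
```

The columns of A are exactly these patterns; e.g. (2, 0, 0, 0) packs two 0.44-items, and (1, 0, 1, 1) fills a bin to exactly 1.00.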
Karmarkar-Karp’s Grouping

[figure: the sorted input is cut into consecutive groups of total size Σ s_i ∈ [2, 3]; each group is replaced by copies of a single size — new input: 3×, 4×, 4×]

◮ increases OPT by O(log n)
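The grouping step shown in the figure can be sketched as follows (a minimal illustration in the spirit of Karmarkar-Karp: the talk's exact rounding rule may differ in details such as which group boundary is used):

```python
def kk_grouping(sizes):
    """Cut the sorted items (largest first) into consecutive groups of
    total size in [2, 3]; every item in a group is rounded up to the
    largest size in its group, so the new instance has one distinct
    size per group, each with a multiplicity."""
    items = sorted(sizes, reverse=True)
    groups, cur, tot = [], [], 0.0
    for s in items:
        cur.append(s)
        tot += s
        if tot >= 2.0:          # sizes <= 1, so tot < 3 is guaranteed here
            groups.append(cur)
            cur, tot = [], 0.0
    if cur:                     # last (possibly lighter) group
        groups.append(cur)
    # new input: (rounded-up size, multiplicity) per group
    return [(g[0], len(g)) for g in groups]
```

Since each group has total size at least 2, there are only O(OPT) groups, which is what bounds the extra cost of the rounding.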