The Entropy Rounding Method in Approximation Algorithms

Thomas Rothvoß, Department of Mathematics, M.I.T. Cargèse 2011

A general rounding problem. Given a matrix $A \in \mathbb{R}^{n \times m}$ and a fractional solution $x \in [0,1]^m$, find an integral $y \in \{0,1\}^m$ such that $\|Ax - Ay\|_\infty$ stays small.


  1. Entropy. For a random variable $Z$, the entropy is
     $$H(Z) = \sum_{z} \Pr[Z = z] \cdot \log_2\Big(\frac{1}{\Pr[Z = z]}\Big).$$
     Properties:
     ◮ Uniform distribution maximizes entropy: if $Z$ attains $k$ distinct values, then $H(Z) \le \log_2(k)$, attained if $\Pr[Z = z] = \frac{1}{k}$ for all $z$.
     ◮ One likely event: $\exists z: \Pr[Z = z] \ge (\frac{1}{2})^{H(Z)}$.
     ◮ Subadditivity: $H(f(Z, Z')) \le H(Z) + H(Z')$.
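
A minimal Python check of these three properties (my own illustration, not from the talk):

```python
import math

def entropy(probs):
    """H(Z) = sum_z Pr[Z=z] * log2(1/Pr[Z=z])."""
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

k = 8
uniform = [1.0 / k] * k
print(entropy(uniform) == math.log2(k))            # uniform attains log2(k)

skewed = [0.7, 0.1, 0.1, 0.05, 0.05]
print(entropy(skewed) <= math.log2(len(skewed)))   # log2(k) is the maximum
print(max(skewed) >= 0.5 ** entropy(skewed))       # one likely event
```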

  2. Chernoff-type bound. Lemma: Let $X_1, \ldots, X_n$ be independent random variables with $\Pr[X_i = \pm 1] = \frac{1}{2}$. Then
     $$\Pr\Big[\Big|\sum_{i=1}^{n} X_i\Big| \ge \lambda \sqrt{n}\Big] \le 2e^{-\lambda^2/2}.$$

  3. Chernoff-type bound, weighted version. Lemma: Let $X_1, \ldots, X_n$ be independent random variables with $\Pr[X_i = \pm \alpha_i] = \frac{1}{2}$. Then
     $$\Pr\Big[\Big|\sum_{i=1}^{n} X_i\Big| \ge \lambda \|\alpha\|_2\Big] \le 2e^{-\lambda^2/2}.$$
     ◮ Standard deviation: $\sqrt{\sum_i \mathrm{Var}[X_i]} = \sqrt{\sum_i E[(X_i - E[X_i])^2]} = \sqrt{\sum_{i=1}^{n} \alpha_i^2} = \|\alpha\|_2$.
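
A quick Monte Carlo sanity check of the weighted tail bound (my own sketch):

```python
import random, math

def tail_prob(alpha, lam, trials=100_000):
    """Estimate Pr[|sum_i X_i| >= lam * ||alpha||_2] for X_i = +-alpha_i."""
    norm = math.sqrt(sum(a * a for a in alpha))
    hits = 0
    for _ in range(trials):
        s = sum(a if random.random() < 0.5 else -a for a in alpha)
        if abs(s) >= lam * norm:
            hits += 1
    return hits / trials

alpha = [random.uniform(-1, 1) for _ in range(200)]
for lam in (1.0, 2.0, 3.0):
    est = tail_prob(alpha, lam)
    bound = 2 * math.exp(-lam ** 2 / 2)
    print(f"lambda={lam}: empirical {est:.4f} <= bound {bound:.4f}")
```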

  4–5. An isoperimetric inequality. Lemma (special case of an isoperimetric inequality, Kleitman '66): For any $X \subseteq \{0,1\}^n$ of size $|X| \ge 2^{0.8n}$ and $n \ge 2$, there are $x, y \in X$ with $\|x - y\|_1 \ge n/2$.
     ◮ Proof with a weaker constant ($n/10$ instead of $n/2$): if every pair of points of $X$ had distance less than $n/10$, then $X$ would fit into a Hamming ball of radius $n/10$ around one of its points, but such a ball contains only
     $$\sum_{0 \le q < n/10} \binom{n}{q} \le \Big(\frac{en}{n/10}\Big)^{n/10} = (10e)^{n/10} < 2^{0.8n}$$
     points, a contradiction.
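
The volume bound in the proof sketch is easy to verify numerically (my own check):

```python
import math

# Check that a Hamming ball of radius n/10 has fewer than 2^{0.8 n} points,
# i.e. sum_{0 <= q < n/10} C(n, q) < 2^{0.8 n}.
for n in (20, 50, 100, 200):
    ball = sum(math.comb(n, q) for q in range(0, n // 10))
    print(n, ball < 2 ** (0.8 * n))   # True for each n
```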

  6–19. Theorem [Beck's entropy method]:
     $$H_{\chi \in \{\pm 1\}^m}\Big(\Big\lfloor \frac{A\chi}{2\Delta} \Big\rceil\Big) \le \frac{m}{5} \;\Rightarrow\; \exists \text{ half-coloring } \chi_0: \|A\chi_0\|_\infty \le \Delta.$$
     Here a half-coloring is a vector $\chi_0 \in \{0, \pm 1\}^m$ with at least $m/2$ nonzero entries. (Figure: the image points $A\chi$ for $\chi \in \{\pm 1\}^m$, partitioned into grid cells of side length $2\Delta$.)
     ◮ By the "one likely event" property, $\exists$ cell: $\Pr[A\chi \in \text{cell}] \ge (\frac{1}{2})^{m/5}$.
     ◮ At least $2^m \cdot (\frac{1}{2})^{m/5} = 2^{0.8m}$ colorings $\chi$ have $A\chi \in \text{cell}$.
     ◮ By the isoperimetric inequality, pick $\chi_1, \chi_2$ among them differing in half of the entries.
     ◮ Define $\chi_0(i) := \frac{1}{2}(\chi_1(i) - \chi_2(i)) \in \{0, \pm 1\}$.
     ◮ Then $\|A\chi_0\|_\infty \le \frac{1}{2}\|A\chi_1 - A\chi_2\|_\infty \le \Delta$.
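
The counting argument can be replayed by brute force on a tiny toy matrix (the instance below is my own, not from the talk; the cell-bucketing mirrors the proof):

```python
import itertools, math
from collections import defaultdict

# Bucket all 2^m colorings by the grid cell of A @ chi (cells of side
# 2*Delta), take the most populated cell, and turn two far-apart
# colorings in it into a half-coloring.
A = [[1, 1, 0, 0, 1, 1, 0, 1],
     [0, 1, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1, 0]]
m, Delta = len(A[0]), 2.0

cells = defaultdict(list)
for chi in itertools.product((-1, 1), repeat=m):
    img = [sum(a * c for a, c in zip(row, chi)) for row in A]
    cells[tuple(math.floor(v / (2 * Delta)) for v in img)].append(chi)

popular = max(cells.values(), key=len)
chi1 = popular[0]
chi2 = max(popular, key=lambda c: sum(x != y for x, y in zip(chi1, c)))
chi0 = [(x - y) // 2 for x, y in zip(chi1, chi2)]   # entries in {0, -1, +1}

print("nonzero entries of chi0:", sum(c != 0 for c in chi0))
# Since chi1, chi2 share a cell, every coordinate of A(chi1 - chi2) has
# absolute value < 2*Delta, so ||A chi0||_inf <= Delta is guaranteed:
print("||A chi0||_inf:", max(abs(sum(a * c for a, c in zip(row, chi0)))
                             for row in A))
```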

  20. A slight generalization. Theorem: For any auxiliary function $f(\chi)$ with $\|A\chi - f(\chi)\|_\infty \le \Delta$:
     $$H_{\chi \in \{\pm 1\}^m}(f(\chi)) \le \frac{m}{5} \;\Rightarrow\; \exists \text{ half-coloring } \chi_0: \|A\chi_0\|_\infty \le \Delta.$$
     (Figure: $f(\chi)$ approximates the point $A\chi$ within distance $\Delta$.)

  21. A bound on the entropy. Lemma: For $\alpha \in \mathbb{R}^m$ and $\Delta > 0$, set $\lambda := \frac{\Delta}{\|\alpha\|_2}$. Then
     $$H_{\chi \in \{\pm 1\}^m}\Big(\Big\lfloor \frac{\alpha^T\chi}{2\Delta} \Big\rceil\Big) \le G(\lambda) := \begin{cases} 9e^{-\lambda^2/5} & \text{if } \lambda \ge 2 \\ \log_2(32 + 64/\lambda) & \text{if } \lambda < 2. \end{cases}$$
     (Plot: $G(\lambda)$ decays like $e^{-\Omega(\lambda^2)}$ for $\lambda \ge 2$ and grows like $O(\log \frac{1}{\lambda}) + O(1)$ as $\lambda \to 0$.)
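
In code, the bound $G$ is a direct transcription (reused in a later sketch):

```python
import math

def G(lam):
    """Entropy bound G(lambda), where lambda = Delta / ||alpha||_2."""
    return 9 * math.exp(-lam**2 / 5) if lam >= 2 else math.log2(32 + 64 / lam)

for lam in (0.1, 0.5, 1.0, 2.0, 4.0):
    print(lam, round(G(lam), 3))
```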

  22–25. Proof, case $\lambda \ge 2$. Recall $\Delta = \lambda \cdot \|\alpha\|_2$ with $\lambda \ge 2$ and $Z = \lfloor \frac{\alpha^T\chi}{2\Delta} \rceil$, so
     $$H(Z) = \sum_{i \in \mathbb{Z}} \Pr[Z = i] \cdot \log\Big(\frac{1}{\Pr[Z = i]}\Big).$$
     (Plot: the function $x \cdot \log(\frac{1}{x})$ on $[0,1]$; the events $Z = -1, 0, 1$ correspond to the intervals $[-3\Delta, -\Delta], [-\Delta, \Delta], [\Delta, 3\Delta]$.)
     ◮ $\Pr[Z = 0] \ge 1 - e^{-\Omega(\lambda^2)}$, hence $\Pr[Z = 0] \cdot \log(\frac{1}{\Pr[Z = 0]}) \le e^{-\Omega(\lambda^2)}$.
     ◮ For $i \neq 0$: $\Pr[Z = i] \le \Pr[\alpha^T\chi \text{ is } \ge i\lambda \text{ times the standard deviation}] \le e^{-\Omega(i^2\lambda^2)}$, hence $\Pr[Z = i] \cdot \log(\frac{1}{\Pr[Z = i]}) \le e^{-\Omega(i^2\lambda^2)} \cdot \log(e^{i^2\lambda^2})$.
     ◮ Summing over $i$: $H(Z) \le e^{-\Omega(\lambda^2)}$.

  26–28. Proof, case $\lambda < 2$. Partition the line into blocks of length $2\|\alpha\|_2$ (endpoints $\ldots, -3\|\alpha\|_2, -\|\alpha\|_2, \|\alpha\|_2, 3\|\alpha\|_2, \ldots$); each block contains $\frac{2\|\alpha\|_2}{2\Delta} = \frac{1}{\lambda}$ intervals of length $2\Delta$. By subadditivity:
     $$H(Z) \le H(\text{which block of length } 2\|\alpha\|_2) + H(\text{index within block}) \le O(1) + O\Big(\log \frac{1}{\lambda}\Big).$$

  29–31. Entropy rounding (extended version). Algorithm:
     ◮ Input: $A \in [-1,1]^{n \times m}$, $x \in [0,1]^m$
     ◮ Row weights $w(i)$ with $\sum_i w(i) = 1$ and $w(i) \ge 0$
     (1) $y := x$
     (2) FOR phase $k$ = last bit TO 1 DO
     (3)   Call $y_i$ active if its $k$-th bit is 1
     (4)   Find a half-coloring $\chi$: active vars $\to \{-1, +1, 0\}$ with $|A_i\chi| \le \Delta_i$, where $\Delta_i$ is chosen such that $G\big(\frac{\Delta_i}{\sqrt{\#\text{active vars}}}\big) \le \frac{w(i) \cdot \#\text{active vars}}{5}$
     (5)   Update $y' := y + (\frac{1}{2})^k \chi$
     (6)   REPEAT WHILE $\exists$ active var.
     ◮ In each step the entropy condition of Beck's theorem holds:
     $$H\Big(\Big(\Big\lfloor \frac{A_i\chi}{2\Delta_i} \Big\rceil\Big)_{i=1,\ldots,n}\Big) \overset{\text{Subadd.}}{\le} \sum_{i=1}^{n} H\Big(\Big\lfloor \frac{A_i\chi}{2\Delta_i} \Big\rceil\Big) \le \sum_{i=1}^{n} G\Big(\frac{\Delta_i}{\sqrt{\#\text{act. var}}}\Big) \le \frac{\#\text{act. var}}{5}$$
     ◮ Here we use that $\alpha \in [-1,1]^{m'}$ implies $\|\alpha\|_2 \le \sqrt{m'}$.
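
A structural sketch of the rounding loop in Python (my own), assuming $x$ has a finite binary expansion and an oracle `find_half_coloring` (hypothetical name) that realizes step (4); the talk obtains such an oracle from Beck's theorem, constructively via [Bansal '10]:

```python
import numpy as np

def entropy_rounding(A, x, find_half_coloring, bits=20):
    """Round x bit by bit; find_half_coloring(B) must return a vector in
    {0, +1, -1} over B's columns, nonzero on at least half of them."""
    y = np.round(np.asarray(x, float) * 2**bits) / 2**bits
    for k in range(bits, 0, -1):                  # phase k = last bit .. 1
        while True:
            # y_i is active if the k-th bit of y_i is 1 (lower bits are 0)
            active = np.where(np.round(y * 2**k).astype(int) % 2 == 1)[0]
            if len(active) == 0:
                break
            chi = find_half_coloring(A[:, active])
            y[active] += 0.5**k * chi             # clears bit k wherever chi != 0
    return y                                      # integral after the last phase
```

Each inner iteration zeroes the $k$-th bit of at least half of the active variables, so a phase ends after $O(\log m)$ iterations.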

  32–40. Entropy rounding (extended version), continued: error per row.
     ◮ Consider row $i$ while rounding bit $k$, and let $v := \#\text{active vars}$. Solving $G(\frac{\Delta_i}{\sqrt{v}}) \le \frac{w(i) \cdot v}{5}$ for $\Delta_i$ gives two regimes:
     $$\Delta_i \sim \Big(\frac{1}{2}\Big)^{v \cdot w(i)} \cdot \sqrt{v} \;\text{ for } v \ge \Theta\Big(\frac{1}{w(i)}\Big), \qquad \Delta_i \sim \sqrt{\ln\Big(\frac{1}{v \cdot w(i)}\Big)} \cdot \sqrt{v} \;\text{ for } v \le \Theta\Big(\frac{1}{w(i)}\Big).$$
     ◮ Over the iterations $1, 2, \ldots, \log m$, the number $v$ of active variables halves each time; the per-iteration errors decay geometrically on both sides of $v \approx \Theta(\frac{1}{w(i)})$.
     ◮ By convergence of this series: $|A_i x - A_i y| \le O\Big(\sqrt{\frac{1}{w(i)}}\Big)$.
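
A numeric look at the convergence claim (my own sketch; the constants in the two regimes are placeholders absorbed into the $\Theta(\cdot)$ notation):

```python
import math

def total_error(m, w):
    """Sum the per-iteration error bounds as v halves from m down to 1."""
    total, v = 0.0, float(m)
    while v >= 1:
        if v >= 1 / w:
            total += 0.5 ** (v * w) * math.sqrt(v)               # large-v regime
        else:
            total += math.sqrt(math.log(1 / (v * w))) * math.sqrt(v)
        v /= 2
    return total

for w in (1e-1, 1e-2, 1e-3, 1e-4):
    # the ratio to sqrt(1/w) stays bounded as w shrinks
    print(w, round(total_error(10**6, w) / math.sqrt(1 / w), 2))
```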

  41–44. Example: discrepancy of set systems.
     ◮ Given: a set system $\mathcal{S}$ with $n$ sets and $n$ elements; row $i$ of $A$ is the incidence vector of set $S_i$, e.g.
     $$A = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \end{pmatrix}.$$
     ◮ Pick $x := (\frac{1}{2}, \ldots, \frac{1}{2})$ and weight $w(i) := \frac{1}{n}$ for each row.
     ◮ There is $y \in \{0,1\}^n$ with $\|Ax - Ay\|_\infty = O(\sqrt{1/(1/n)}) = O(\sqrt{n})$.
     ◮ The coloring $\chi(i) = +1$ if $y_i = 1$, $\chi(i) = -1$ if $y_i = 0$ has discrepancy $O(\sqrt{n})$.
     ◮ This recovers the "Six standard deviations suffice" theorem [Spencer '85].
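
For intuition, a brute-force check on a tiny random set system (my own illustration; Spencer's bound is about the worst case):

```python
import itertools, math, random

n = 12
A = [[random.randint(0, 1) for _ in range(n)] for _ in range(n)]

# minimum discrepancy over all 2^n colorings
best = min(
    max(abs(sum(a * c for a, c in zip(row, chi))) for row in A)
    for chi in itertools.product((-1, 1), repeat=n)
)
print("min discrepancy:", best, "   6*sqrt(n):", round(6 * math.sqrt(n), 1))
```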

  45–48. Summarizing theorem. Input:
     ◮ matrix $A \in [-1,1]^{n \times m}$ such that for every column submatrix $A' \subseteq A$ there is an auxiliary $f$ with $-\Delta \le A'\chi - f(\chi) \le \Delta$ and $H(f(\chi)) \le \frac{\#\mathrm{cols}(A')}{10}$
     ◮ vector $x \in [0,1]^m$
     ◮ row weights $w(i)$ with $\sum_i w(i) = 1$.
     Then there is a random variable $y \in \{0,1\}^m$ with:
     ◮ Bounded difference: $|A_i x - A_i y| \le O(\log m) \cdot \Delta_i$ and $|A_i x - A_i y| \le O(\sqrt{1/w(i)})$
     ◮ Preserved expectation: $E[y_i] = x_i$
     ◮ Randomness: $y = x + \sum_{k \ge 1} \sum_{t=1}^{\log m} (\frac{1}{2})^k \cdot (\text{rand. } \pm 1) \cdot \chi^{(k,t)}$
     ◮ $y$ can be computed by an SDP in poly-time using [Bansal '10].

  49–53. Bin Packing With Rejection. Input: items $i \in \{1, \ldots, n\}$ with size $s_i \in [0,1]$ and rejection penalty $\pi_i \in [0,1]$. Goal: pack or reject each item; minimize #bins + rejection cost.
     (Figure: four input items with penalties $\pi = (0.9, 0.7, 0.4, 0.6)$ are processed one by one; each is either placed into one of two unit-capacity bins or rejected, marked ×.)
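
To make the objective concrete, a brute-force exact solver for tiny instances (my own illustration; the instance matches my reading of the example slide further below):

```python
from functools import lru_cache
from itertools import combinations

def bpr_opt(sizes, penalties):
    """Exact optimum of Bin Packing With Rejection for tiny n (brute force):
    try every reject set, cover the rest with feasible bins via recursion."""
    n = len(sizes)
    fits = [S for r in range(1, n + 1) for S in combinations(range(n), r)
            if sum(sizes[i] for i in S) <= 1 + 1e-9]

    @lru_cache(maxsize=None)
    def min_bins(packed):                     # packed: frozenset of items
        if not packed:
            return 0
        return min(1 + min_bins(packed - set(S))
                   for S in fits if set(S) <= packed)

    return min(
        sum(penalties[i] for i in rejected)
        + min_bins(frozenset(range(n)) - set(rejected))
        for r in range(n + 1) for rejected in combinations(range(n), r)
    )

# One bin for items 1, 3, 4 (0.44 + 0.3 + 0.26 = 1.0) plus rejecting
# item 2 (penalty 0.7) gives 1.7, which is optimal here.
print(bpr_opt([0.44, 0.4, 0.3, 0.26], [0.9, 0.7, 0.4, 0.6]))   # 1.7
```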

  54–56. Known results.
     Bin Packing With Rejection:
     ◮ APTAS [Epstein '06]
     ◮ Faster APTAS [Bein, Correa & Han '08]
     ◮ AFPTAS: $APX \le OPT + \frac{OPT}{(\log OPT)^{1-o(1)}}$ [Epstein & Levin '10]
     Bin Packing:
     ◮ $APX \le OPT + O(\log^2 OPT)$ [Karmarkar & Karp '82]
     Theorem: There is a randomized approximation algorithm for Bin Packing With Rejection with $APX \le OPT + O(\log^2 OPT)$ (with high probability).

  57. The column-based LP. Set Cover formulation:
     ◮ Bins: sets $S \subseteq [n]$ with $\sum_{i \in S} s_i \le 1$, of cost $c(S) = 1$
     ◮ Rejections: sets $S = \{i\}$, of cost $c(S) = \pi_i$
     LP:
     $$\min \sum_{S \in \mathcal{S}} c(S) \cdot x_S \quad \text{s.t.} \quad \sum_{S \in \mathcal{S}} \mathbf{1}_S \cdot x_S \ge \mathbf{1}, \qquad x_S \ge 0 \;\; \forall S \in \mathcal{S}.$$

  58–60. The column-based LP: example. Items with sizes $s = (0.44, 0.4, 0.3, 0.26)$ and penalties $\pi = (0.9, 0.7, 0.4, 0.6)$. The feasible bins are the 4 singletons, all 6 pairs, and the triples $\{1,3,4\}$ and $\{2,3,4\}$, giving 12 bin columns (cost 1 each, in the order $\{1\},\{2\},\{3\},\{4\},\{1,2\},\{1,3\},\{1,4\},\{2,3\},\{2,4\},\{3,4\},\{1,3,4\},\{2,3,4\}$) plus 4 rejection columns:
     $$\min\; (1\;1\;1\;1\;1\;1\;1\;1\;1\;1\;1\;1 \mid .9\;.7\;.4\;.6)\, x$$
     $$\begin{pmatrix} 1&0&0&0&1&1&1&0&0&0&1&0&1&0&0&0 \\ 0&1&0&0&1&0&0&1&1&0&0&1&0&1&0&0 \\ 0&0&1&0&0&1&0&1&0&1&1&1&0&0&1&0 \\ 0&0&0&1&0&0&1&0&1&1&1&1&0&0&0&1 \end{pmatrix} x \ge \begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \end{pmatrix}, \qquad x \ge 0.$$
     (Figure: a fractional solution assigns $x_S = \frac{1}{2}$ to three of the bin columns.)
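
A short script (my own) that regenerates the 12 + 4 columns of this LP:

```python
from itertools import combinations

s  = [0.44, 0.4, 0.3, 0.26]
pi = [0.9, 0.7, 0.4, 0.6]
n  = len(s)

cols = [(1.0, set(S)) for r in range(1, n + 1)
        for S in combinations(range(n), r)
        if sum(s[i] for i in S) <= 1 + 1e-9]      # feasible bins, cost 1
cols += [(pi[i], {i}) for i in range(n)]          # rejection columns

print(len(cols), "columns")                       # 12 bins + 4 rejections
for cost, S in cols:
    print(sorted(i + 1 for i in S), cost)
```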

  61–66. Massaging the LP.
     ◮ Sort items so that $s_1 \ge \ldots \ge s_n$.
     ◮ Call an item large if its size is at least $\varepsilon := \frac{\log OPT_f}{OPT_f}$, otherwise small.
     ◮ Add up rows $1, \ldots, i$ of the constraint matrix to obtain row $i$ of the matrix $A$ (for the large items $i$): "1 slot per item" becomes "$i$ slots for the largest $i$ items".
     ◮ Append the row vector $\big(\sum_{i \in S:\, s_i < \varepsilon} s_i\big)_S$ ("space for small items").
     ◮ Append the objective function $c$.
     $$\begin{pmatrix} 1&0&1&1&0&0 \\ 1&1&0&0&1&0 \\ 0&1&1&0&0&1 \\ 0&1&1&1&0&0 \\ 1&1&0&1&0&0 \end{pmatrix} \;\rightarrow\; A = \begin{pmatrix} 1&0&1&1&0&0 \\ 2&1&1&1&1&0 \\ 2&2&2&1&1&1 \end{pmatrix}$$
     followed by the "space for small items" row and the objective row.
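
The row transformation in code (my own sketch; the 0/1 pattern matrix is the one from the slide, while the sizes, threshold and costs are hypothetical placeholders):

```python
import numpy as np

B = np.array([[1, 0, 1, 1, 0, 0],      # item 1 (largest)
              [1, 1, 0, 0, 1, 0],      # item 2
              [0, 1, 1, 0, 0, 1],      # item 3
              [0, 1, 1, 1, 0, 0],      # item 4 (small)
              [1, 1, 0, 1, 0, 0]])     # item 5 (small)
sizes = np.array([0.5, 0.4, 0.35, 0.1, 0.05])   # hypothetical
cost  = np.ones(B.shape[1])                     # each bin column costs 1
eps   = 0.2                                     # hypothetical threshold

large = sizes >= eps
A_large = np.cumsum(B[large], axis=0)           # "i slots for largest i items"
small_space = (B[~large] * sizes[~large, None]).sum(axis=0)
A = np.vstack([A_large, small_space, cost])     # + small-items row + objective
print(A)
```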

  67–75. Entropy bound for monotone matrices. Theorem: Let $A$ be a column-monotone matrix with maximum entry $\le \Delta$ and sum of the last row $\sigma = \sum_i A_{n,i}$. Then there is an auxiliary random variable $f$ with $\|A\chi - f(\chi)\|_\infty = O(\Delta)$ and $H_\chi(f(\chi)) \le O(\frac{\sigma}{\Delta})$.
     Example (entries nondecreasing down each column), with $\chi = (+1,-1,-1,+1)$:
     $$A = \begin{pmatrix} 1&0&0&0 \\ 1&0&1&0 \\ 1&0&2&0 \\ 1&1&2&0 \\ 2&1&2&0 \\ 2&1&2&1 \\ 2&1&2&2 \\ 2&1&3&2 \end{pmatrix}$$
     ◮ Idea: describe the random walk $A_1\chi, \ldots, A_\sigma\chi$ $O(\Delta)$-approximately.
     ◮ For each dyadic interval $D$ of length $2^k \cdot \Delta$: $g_D(\chi) :=$ the distance the walk covers in $D$, rounded to multiples of $\frac{\Delta}{1.1^k}$.
     ◮ Standard deviation of $g_D$: $\sqrt{\text{max dependence} \cdot |D|} \le \sqrt{\Delta \cdot 2^k\Delta} = 2^{k/2} \cdot \Delta$.
     ◮ $H(g_D) \le G\Big(\frac{\Delta/1.1^k}{2^{k/2}\Delta}\Big) \le G(2^{-k}) = O(\log 2^k) = O(k)$.
     ◮ Total entropy of $g = (g_D)_D$: $\sum_{k \ge 1} \frac{\sigma}{2^k\Delta} \cdot O(k) = O\Big(\frac{\sigma}{\Delta}\Big)$, since there are $\frac{\sigma}{2^k\Delta}$ intervals at level $k$.
     ◮ Combining the $g_D$'s, we know each $A_i\chi$ up to an error of $O(\Delta)$.
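
The total-entropy computation can be replayed numerically with the bound $G$ from earlier (my own sketch):

```python
import math

def G(lam):
    return 9 * math.exp(-lam**2 / 5) if lam >= 2 else math.log2(32 + 64 / lam)

def total_entropy(sigma, Delta=1.0):
    """Sum H(g_D) <= G((Delta/1.1^k) / (2^(k/2) Delta)) over all dyadic
    levels k, with sigma/(2^k Delta) intervals at level k."""
    total, k = 0.0, 1
    while 2**k * Delta <= sigma:
        total += (sigma / (2**k * Delta)) * G(1 / (1.1**k * 2**(k / 2)))
        k += 1
    return total

for sigma in (10**3, 10**5, 10**7):
    # the ratio to sigma stays roughly constant, i.e. H(g) = O(sigma/Delta)
    print(sigma, round(total_entropy(sigma) / sigma, 3))
```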
