Compact quadratizations for pseudo-Boolean functions
Elisabeth Rodríguez-Heck, joint work with Endre Boros (Rutgers University) and Yves Crama (University of Liège)
January 10, 2019, 23rd Combinatorial Optimization Workshop, Aussois
Pseudo-Boolean optimization

General problem: pseudo-Boolean optimization. Given a pseudo-Boolean function f : {0,1}^n → R, solve

    min_{x ∈ {0,1}^n} f(x).

Theorem (Hammer et al., 1963)
Every pseudo-Boolean function f : {0,1}^n → R admits a unique multilinear expression.

◮ Given f, finding its unique multilinear representation can be costly! (Size of the input: O(2^n).)
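The unique multilinear representation can be recovered from the 2^n function values by a Möbius transform over subsets, which makes the exponential cost explicit. A minimal brute-force sketch (the helper names are mine, not from the talk):

```python
from itertools import chain, combinations

def multilinear_coefficients(f, n):
    """Coefficient a_S of each monomial prod_{i in S} x_i, via the Moebius
    transform a_S = sum_{T subseteq S} (-1)^{|S|-|T|} f(chi_T).
    Exponential in n, reflecting the O(2^n) input size."""
    def point(T):
        return tuple(1 if i in T else 0 for i in range(n))
    subsets = chain.from_iterable(combinations(range(n), k) for k in range(n + 1))
    coeffs = {}
    for S in subsets:
        a = sum((-1) ** (len(S) - len(T)) * f(point(T))
                for k in range(len(S) + 1) for T in combinations(S, k))
        if a != 0:
            coeffs[S] = a
    return coeffs

# XOR of two bits has the multilinear expression x1 + x2 - 2*x1*x2.
print(multilinear_coefficients(lambda x: x[0] ^ x[1], 2))
# {(0,): 1, (1,): 1, (0, 1): -2}
```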
Multilinear 0–1 optimization

Assumption: f is given as a multilinear polynomial, with a set of monomials S ⊆ 2^[n] and coefficients a_S ≠ 0 for S ∈ S:

    min  Σ_{S ∈ S} a_S Π_{i ∈ S} x_i
    s.t. x_i ∈ {0,1},  for i = 1, ..., n

Example: f(x1, x2, x3) = 9x1x2x3 + 8x1x2 − 6x2x3 + x1 − 2x2 + x3
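For small n this problem can be solved by plain enumeration of {0,1}^n. A sketch minimizing the example above (the dictionary encoding and function names are mine):

```python
from itertools import product

# Monomials of the example f = 9*x1*x2*x3 + 8*x1*x2 - 6*x2*x3 + x1 - 2*x2 + x3,
# stored as {S: a_S} with 0-based variable indices.
terms = {(0, 1, 2): 9, (0, 1): 8, (1, 2): -6, (0,): 1, (1,): -2, (2,): 1}

def evaluate(terms, x):
    """Evaluate the multilinear polynomial sum_S a_S prod_{i in S} x_i
    at a binary point x (the product is 1 iff all x_i with i in S are 1)."""
    return sum(a * all(x[i] for i in S) for S, a in terms.items())

best = min(product((0, 1), repeat=3), key=lambda x: evaluate(terms, x))
print(best, evaluate(terms, best))  # (0, 1, 1) -7
```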
Quadratization: definition and desirable properties

Definition (Anthony, Boros, Crama, & Gruber, 2017)
Given a pseudo-Boolean function f(x), where x ∈ {0,1}^n, a quadratization of f is a function g(x, y) such that
◮ g is quadratic,
◮ g(x, y) depends on the original variables x and on m auxiliary variables y,
◮ f(x) = min{ g(x, y) : y ∈ {0,1}^m } for all x ∈ {0,1}^n.

Which quadratizations are "good"?
◮ Small number of auxiliary variables (compact).
◮ Small number of positive quadratic terms (x_i x_j, x_i y_j, ...) (an empirical measure of submodularity).
◮ ...
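The defining condition f(x) = min_y g(x, y) can be checked exhaustively for small instances. A sketch of such a checker, tested here on the cubic negative monomial −x1x2x3 and its one-variable quadratization g(x, y) = −y(x1 + x2 + x3 − 2) from later in the talk:

```python
from itertools import product

def is_quadratization(f, g, n, m):
    """Check f(x) == min over y in {0,1}^m of g(x, y) for every x in {0,1}^n."""
    return all(
        f(x) == min(g(x, y) for y in product((0, 1), repeat=m))
        for x in product((0, 1), repeat=n)
    )

# Freedman & Drineas (2005) quadratization of f(x) = -x1*x2*x3.
f = lambda x: -x[0] * x[1] * x[2]
g = lambda x, y: -y[0] * (x[0] + x[1] + x[2] - 2)
print(is_quadratization(f, g, 3, 1))  # True
```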
Application in computer vision: image restoration

Input: blurred image. Output: restored image. (Image from the Corel database.)
Persistencies

Weak Persistency Theorem (Hammer, Hansen, & Simeone, 1984)
Let (QP) be a quadratic optimization problem on x ∈ {0,1}^n, and let (x̃, ỹ) be an optimal solution of the continuous standard linearization of (QP):

    min  c_0 + Σ_{j=1}^n c_j x_j + Σ_{1 ≤ i < j ≤ n} c_ij y_ij
    s.t. y_ij ≥ x_i + x_j − 1,  1 ≤ i < j ≤ n
         y_ij ≤ x_i,            1 ≤ i < j ≤ n
         y_ij ≤ x_j,            1 ≤ i < j ≤ n
         0 ≤ y_ij ≤ 1,          1 ≤ i < j ≤ n
         0 ≤ x_i ≤ 1,           i = 1, ..., n

such that x̃_j = 1 for j ∈ O and x̃_j = 0 for j ∈ Z. Then, for any minimizing vector x* of (QP), switching x*_j = 1 for j ∈ O and x*_j = 0 for j ∈ Z will also yield a minimum of f.

Update after talk: see also the survey (Boros & Hammer, 2002).
Persistencies

◮ The Weak Persistency Theorem is not the strongest form of persistency.
◮ A maximal set of variables to fix can be computed in polynomial time, based on a network flow algorithm (Boros et al., 2008).
◮ In computer vision, image restoration and related problems with up to millions of variables are solved efficiently thanks to the use of persistencies.
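Weak persistency can be illustrated on a toy quadratic: solve the continuous standard linearization as an LP and fix the variables that come out integral. A sketch for f(x1, x2) = 3x1x2 − 2x1 − x2 (the instance and the SciPy-based modeling are mine, not from the talk):

```python
from scipy.optimize import linprog  # requires SciPy

# f(x1, x2) = 3*x1*x2 - 2*x1 - x2; LP variables (x1, x2, y12),
# where y12 is the standard linearization of the product x1*x2.
c = [-2.0, -1.0, 3.0]
A_ub = [
    [1.0, 1.0, -1.0],   # y12 >= x1 + x2 - 1
    [-1.0, 0.0, 1.0],   # y12 <= x1
    [0.0, -1.0, 1.0],   # y12 <= x2
]
b_ub = [1.0, 0.0, 0.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 3)

# The LP optimum here is integral: x1 = 1, x2 = 0. By weak persistency,
# these values can be fixed when minimizing f over {0,1}^2.
print(round(res.x[0]), round(res.x[1]))  # 1 0
```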
Termwise quadratizations

Main idea: quadratize monomial by monomial, using disjoint sets of auxiliary variables.

f(x) = −35x1x2x3x4x5 + 50x1x2x3x4 − 10x1x2x4x5 + 5x2x3x4 + 5x4x5 − 20x1

Negative monomial (Kolmogorov & Zabih, 2004; Freedman & Drineas, 2005)

    −Π_{i=1}^n x_i = min_{y ∈ {0,1}} −y (Σ_{i=1}^n x_i − (n − 1))

◮ One auxiliary variable is sufficient.
◮ No positive quadratic terms.

Check that, for every x ∈ {0,1}^n, min_y g(x, y) = −Π_{i=1}^n x_i; two cases:
1. If x_i = 1 for all i, then min_y −y has minimum value −1, reached for y = 1.
2. If there exists i such that x_i = 0, then min_y −Cy with C ≤ 0 has minimum value 0, reached for y = 0.

Positive monomial (Ishikawa, 2011)

    Π_{i=1}^n x_i = min_{y ∈ {0,1}^k} Σ_{i=1}^k y_i (c_{i,n} (−|x| + 2i) − 1) + |x|(|x| − 1)/2

◮ Number of auxiliary variables: k = ⌊(n − 1)/2⌋.
◮ (n choose 2) positive quadratic terms.
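Ishikawa's positive-monomial rule can likewise be verified exhaustively for small n. A sketch, assuming the standard choice of constants c_{i,n} = 1 if n is odd and i = k, and c_{i,n} = 2 otherwise (this encoding is my reading of Ishikawa, 2011; the talk does not spell it out):

```python
from itertools import product

def ishikawa_g(x, y, n):
    """Ishikawa's quadratization of the positive monomial x1*...*xn,
    with k = floor((n-1)/2) auxiliary variables."""
    k = (n - 1) // 2
    s = sum(x)  # the Hamming weight |x|
    c = lambda i: 1 if (n % 2 == 1 and i == k) else 2
    return (sum(y[i - 1] * (c(i) * (-s + 2 * i) - 1) for i in range(1, k + 1))
            + s * (s - 1) // 2)

def check_positive_monomial(n):
    """min over y of g(x, y) must equal prod x_i (i.e. 1 iff all x_i = 1)."""
    k = (n - 1) // 2
    return all(
        min(ishikawa_g(x, y, n) for y in product((0, 1), repeat=k)) == all(x)
        for x in product((0, 1), repeat=n)
    )

print([check_positive_monomial(n) for n in range(3, 8)])
# [True, True, True, True, True]
```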
Upper bound for the positive monomial: ⌈log(n)⌉ − 1

Theorem 3 (simplified version)
Assume that n = 2^ℓ and let |x| = Σ_{i=1}^n x_i be the Hamming weight of x ∈ {0,1}^n. Then

    g(x, y) = (1/2) (|x| − Σ_{i=1}^{ℓ−1} 2^i y_i) (|x| − Σ_{i=1}^{ℓ−1} 2^i y_i − 1)

is a quadratization of the positive monomial P_n(x) = Π_{i=1}^n x_i using ⌈log(n)⌉ − 1 auxiliary variables.

Proof idea: check that, for every x ∈ {0,1}^n, min_y g(x, y) = Π_{i=1}^n x_i.
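The proof idea translates directly into a brute-force check: z(z − 1)/2 is nonnegative for every integer z and vanishes iff z ∈ {0, 1}, and the y variables can shift |x| down by any even amount up to 2^ℓ − 2. A sketch verifying Theorem 3 for ℓ = 2 and ℓ = 3 (function names are mine):

```python
from itertools import product

def g_log(x, y, ell):
    """Theorem 3's quadratization of the positive monomial for n = 2**ell,
    using ell - 1 auxiliary variables y_1, ..., y_{ell-1}."""
    z = sum(x) - sum(2 ** i * y[i - 1] for i in range(1, ell))
    return z * (z - 1) // 2  # z*(z-1) is always even, so // is exact

def check_log(ell):
    n = 2 ** ell
    return all(
        min(g_log(x, y, ell) for y in product((0, 1), repeat=ell - 1)) == all(x)
        for x in product((0, 1), repeat=n)
    )

print([check_log(ell) for ell in (2, 3)])  # [True, True]
```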