Maximizing Volume Subject to Combinatorial Constraints

Sasho Nikolov (University of Toronto)
Outline
1. The Problem(s)
2. Algorithm for the Maximum d-Subdeterminant
3. Extending to j < d and Beyond
The Problem(s): Maximum j-Simplex

Given points v_1, ..., v_n ∈ R^d, find the largest j-dimensional simplex ∆ ⊆ conv{v_1, ..., v_n}.

A j-dimensional simplex is the convex hull of j + 1 affinely independent points.

We can assume ∆ = conv{v_{i_1}, ..., v_{i_{j+1}}}: a maximum-volume simplex can always be chosen with all of its vertices among the input points.
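Not part of the talk, but as a concrete baseline: a brute-force reference implementation of Maximum j-Simplex, using the Gram-determinant formula vol = sqrt(det(A^T A)) / j! for the j-dimensional volume of a simplex (A has columns v_i - v_0). Function names and the toy data are illustrative.

```python
import math
from itertools import combinations

def gram_det(pts):
    # det of the Gram matrix A^T A, where A's columns are v_i - v_0;
    # Laplace expansion is fine at this toy scale
    def det(m):
        if len(m) == 1:
            return m[0][0]
        return sum((-1) ** c * m[0][c] *
                   det([row[:c] + row[c + 1:] for row in m[1:]])
                   for c in range(len(m)))
    v0, rest = pts[0], pts[1:]
    edges = [[p[k] - v0[k] for k in range(len(v0))] for p in rest]
    return det([[sum(x * y for x, y in zip(a, b)) for b in edges] for a in edges])

def simplex_volume(pts):
    # j-dimensional volume of conv(pts) for j + 1 affinely independent
    # points in R^d (j may be smaller than d)
    j = len(pts) - 1
    return math.sqrt(max(gram_det(pts), 0.0)) / math.factorial(j)

def max_j_simplex_bruteforce(points, j):
    # exhaustive search over all (j+1)-subsets; exponential in n, useful
    # only to sanity-check an approximation algorithm on small inputs
    return max(simplex_volume(s) for s in combinations(points, j + 1))
```

For example, among the four corners of the unit square the best 2-simplex is a half-square triangle of area 1/2.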
The Problem(s): Maximum j-Subdeterminant

Given a positive semidefinite n × n matrix M with rank(M) = d, find the j × j principal submatrix M_{S,S} (the rows and columns of M indexed by the same set S, |S| = j) with the largest determinant.

[Figure: M with the rows and columns indexed by S highlighted.]
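For concreteness, a sketch (not from the talk) of the trivial exponential baseline that the algorithms below improve on: it evaluates all C(n, j) principal minors. Names and the toy matrix are illustrative.

```python
from itertools import combinations

def det(m):
    # Laplace expansion; exponential, but fine for small matrices
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** c * m[0][c] *
               det([row[:c] + row[c + 1:] for row in m[1:]])
               for c in range(len(m)))

def max_subdet_bruteforce(M, j):
    # exhaustive search over all j-subsets of row/column indices;
    # returns (best determinant, best index set)
    n = len(M)
    return max((det([[M[r][c] for c in S] for r in S]), S)
               for S in combinations(range(n), j))
```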
Motivation

Maximum Volume Simplex:
- a natural problem in computational geometry [GK94, GKL95];
- an instance of a general class of problems: approximate a complicated body by a "simple" one contained inside it, e.g. the John ellipsoid, the largest-volume inscribed ellipsoid.

Maximum Subdeterminant has applications in:
- modeling diversity in machine learning [KT12];
- maximum entropy sampling [Lee06];
- low-rank matrix approximation [GT01];
- discrepancy theory [LSV86, Mat11].

Maximizing volume ↔ maximizing diversity.
What Was Known

Approximation equivalence:
- an α(j)-approximation for j-MSD gives a √α(j)-approximation for j-MVS;
- an α(j)-approximation for j-MVS gives a (j + 1) α(j)^2-approximation for j-MSD.

Hardness [Kou06, ÇM13, DEFM14]: NP-hard to approximate better than c^j for some constant c > 1 and any j = d^{Ω(1)}.

Approximation algorithms:
- [Kha95, Pac04, ÇM09]: a j^{j/2}-approximation for all j;
- [DEFM14]: a (log d)^{d/2}-approximation for j = d.
Main Result

Theorem. There exists a deterministic polynomial-time algorithm that approximates
- Maximum j-Subdeterminant up to a factor of j^j / j! ≈ e^j, and
- Maximum Volume j-Simplex up to a factor of √(j^j / j!) ≈ e^{j/2}.

Since j-dimensional volume is a degree-j homogeneous function, this is morally a constant-factor approximation: a factor of about e per dimension.
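A quick numeric sanity check (not from the talk) of the "≈ e^j" step: by Stirling's formula, j! ~ sqrt(2πj) (j/e)^j, so the approximation factor j^j / j! equals e^j up to a polynomial correction of roughly 1 / sqrt(2πj).

```python
import math

# the normalized ratio (j**j / j!) * sqrt(2*pi*j) / e**j should approach 1
for j in (5, 10, 20):
    factor = j ** j / math.factorial(j)
    print(j, factor * math.sqrt(2 * math.pi * j) / math.e ** j)
```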
Part 2: Algorithm for the Maximum d-Subdeterminant
An Upper Bound from Ellipsoids

Cholesky decomposition: M = V^T V, where V is a d × n matrix with columns v_1, ..., v_n. Then det(M_{S,S}) = det(V_S)^2.

[Figure: a centered ellipse E with semi-axes a_1, a_2 containing the points.]

Take any centered ellipsoid E with v_1, ..., v_n ∈ E, and let a_1, ..., a_d be the major semi-axes of E. Hadamard's inequality gives

  max_{S : |S| = d} det(M_{S,S}) = max_{S : |S| = d} det(V_S)^2 ≤ ‖a_1‖^2 ··· ‖a_d‖^2.
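The identity det(M_{S,S}) = det(V_S)^2 for M = V^T V and |S| = d can be checked directly on toy data (a sketch, not from the talk; 2×2 determinants are written out explicitly).

```python
# toy data: d = 2, n = 3; the columns of V are v_1, v_2, v_3
V = [[1, 0, 2],
     [0, 1, 1]]
d, n = len(V), len(V[0])

# M = V^T V is n x n, positive semidefinite, rank d
M = [[sum(V[k][r] * V[k][c] for k in range(d))
      for c in range(n)] for r in range(n)]

def det2(a, b, c, dd):
    return a * dd - b * c

S = (0, 2)
det_MSS = det2(M[0][0], M[0][2], M[2][0], M[2][2])   # det of M_{S,S}
det_VS = det2(V[0][0], V[0][2], V[1][0], V[1][2])    # det of V_S
assert det_MSS == det_VS ** 2
```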
The Löwner Ellipsoid

Löwner ellipsoid E_L: the minimum-volume ellipsoid containing ±v_1, ..., ±v_n.
- E_L minimizes the upper bound ‖a_1‖^2 ··· ‖a_d‖^2.
- Intuition: the body touches E_L in all directions.

Theorem ([Joh48]). E_L = F · B^d if and only if there exist weights c_1, ..., c_n ≥ 0 such that

  Σ_{i=1}^n c_i v_i v_i^T = F F^T   and   Σ_{i=1}^n c_i = d.

The c_i give a distribution over the v_i that approximates E_L.
Idea: treat the c_i as probabilities and sample.
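The talk takes the John decomposition as given. As a hedged sketch of how such weights can actually be computed, here is the standard multiplicative-update (Titterington/Khachiyan-style) iteration for D-optimal design, specialized to d = 2 so the 2×2 inverse can be written in closed form. The function name, iteration count, and toy data are illustrative, and this is not claimed to be the method used in the paper.

```python
def john_weights(vs, iters=1000):
    # Maximize log det(sum_i c_i v_i v_i^T) over c_i >= 0, sum c_i = d,
    # for the centered Lowner ellipsoid of {+v_i, -v_i} in R^2.
    # At a fixed point the leverage scores v_i^T X^{-1} v_i equal 1 on the
    # support of c -- exactly John's conditions from the theorem above.
    # Assumes the points span R^2, so X is invertible.
    d, n = 2, len(vs)
    c = [d / n] * n                      # uniform start, sums to d
    for _ in range(iters):
        # X = sum_i c_i v_i v_i^T, a 2x2 matrix [[xx, xy], [xy, yy]]
        xx = sum(ci * v[0] * v[0] for ci, v in zip(c, vs))
        xy = sum(ci * v[0] * v[1] for ci, v in zip(c, vs))
        yy = sum(ci * v[1] * v[1] for ci, v in zip(c, vs))
        detX = xx * yy - xy * xy
        # leverage scores v^T X^{-1} v, via the closed-form 2x2 inverse
        lev = [(v[0] * v[0] * yy - 2.0 * v[0] * v[1] * xy
                + v[1] * v[1] * xx) / detX for v in vs]
        # multiplicative update; since sum_i c_i * lev_i = tr(X^{-1} X) = d,
        # the constraint sum c_i = d is preserved automatically
        c = [ci * li for ci, li in zip(c, lev)]
    return c

weights = john_weights([(1.0, 0.0), (0.0, 1.0), (2.0, 2.0)])
```

For this toy input the weights converge to (8/15, 8/15, 14/15), which one can verify satisfies John's conditions: all three leverage scores equal 1 at the optimum.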
The Randomized Rounding Algorithm

Algorithm: sample S := {i_1, ..., i_d} independently with replacement, where index i is sampled with probability p_i := c_i / d.

Analysis (Binet-Cauchy formula and John's theorem):

  E[det(V_S)^2] = Σ_{S ⊆ [n] : |S| = d} d! (Π_{i ∈ S} p_i) det(V_S)^2
                = (d! / d^d) Σ_{S ⊆ [n] : |S| = d} (Π_{i ∈ S} c_i) det(V_S)^2
                = (d! / d^d) det(Σ_{i=1}^n c_i v_i v_i^T)
                = (d! / d^d) det(F F^T) = (d! / d^d) ‖a_1‖^2 ··· ‖a_d‖^2

Derandomize using the method of conditional expectations.
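The heart of the analysis is the Binet-Cauchy step: Σ_{|S|=d} (Π_{i∈S} c_i) det(V_S)^2 = det(Σ_i c_i v_i v_i^T). On small instances this can be verified exhaustively (a sketch, not from the talk; the data is a toy example).

```python
import math
from itertools import combinations

# toy data: d = 2, n = 4; weights c_i >= 0 with sum c_i = d
V = [[1, 0, 2, 1],
     [0, 1, 1, 3]]
c = [0.5, 0.7, 0.4, 0.4]
d, n = len(V), len(V[0])

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# left side: sum over all d-subsets S of (prod_{i in S} c_i) * det(V_S)^2
lhs = sum(math.prod(c[i] for i in S) *
          det2([[V[r][i] for i in S] for r in range(d)]) ** 2
          for S in combinations(range(n), d))

# right side: det of X = sum_i c_i v_i v_i^T
X = [[sum(c[i] * V[r][i] * V[s][i] for i in range(n))
      for s in range(d)] for r in range(d)]
assert abs(lhs - det2(X)) < 1e-9
```

Multiplying `lhs` by d!/d^d then gives E[det(V_S)^2] under the sampling distribution p_i = c_i / d.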
Part 3: Extending to j < d and Beyond
Relaxation Suggested by the Rounding

Cue from the rounding algorithm: solve the optimization problem

  maximize Σ_{S ⊆ [n] : |S| = j} (Π_{i ∈ S} c_i) det(V_S)^2   subject to   Σ_{i=1}^n c_i = j.

By the same analysis, randomized rounding gives a j^j / j!-approximation.

The objective equals

  E_j(Σ_{i=1}^n c_i v_i v_i^T) = e_j(λ_1, ..., λ_d),

where e_j is the degree-j elementary symmetric polynomial and λ_1, ..., λ_d are the eigenvalues of Σ_{i=1}^n c_i v_i v_i^T.
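A small sketch (illustrative names, not from the talk) of the identity behind the objective: E_j(X), the sum of all j × j principal minors of X, equals e_j of X's eigenvalues. The check uses a 2×2 matrix whose eigenvalues (3 and 1) are known in closed form.

```python
from itertools import combinations

def det(m):
    # Laplace expansion, fine for tiny matrices
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** c * m[0][c] *
               det([row[:c] + row[c + 1:] for row in m[1:]])
               for c in range(len(m)))

def elem_sym(vals, j):
    # e_j(vals) by dynamic programming: the degree-j coefficient
    # of prod_i (1 + vals[i] * t)
    e = [1.0] + [0.0] * j
    for v in vals:
        for k in range(j, 0, -1):
            e[k] += v * e[k - 1]
    return e[j]

def E_j(M, j):
    # sum of all j x j principal minors of M
    return sum(det([[M[r][c] for c in S] for r in S])
               for S in combinations(range(len(M)), j))

# toy check: M = [[2, 1], [1, 2]] has eigenvalues 3 and 1
M = [[2.0, 1.0], [1.0, 2.0]]
assert abs(E_j(M, 1) - elem_sym([3.0, 1.0], 1)) < 1e-9   # both equal trace(M) = 4
assert abs(E_j(M, 2) - elem_sym([3.0, 1.0], 2)) < 1e-9   # both equal det(M) = 3
```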