sparsifying sums of positive semidefinite matrices

Sparsifying sums of positive semidefinite matrices - Cristiane Sato (PowerPoint presentation)



  1. Sparsifying sums of positive semidefinite matrices
     Cristiane Sato
     Joint work with Nick Harvey¹ and Marcel Silva²
     Federal University of the ABC Region, Brazil (Center of Mathematics, Computing and Cognition)
     ¹ University of British Columbia
     ² University of São Paulo

  2-5. Cut Sparsifiers
  Theorem (Karger ’94)
  ◮ weighted graph G = (V, E, w), where w : E → R+
  ◮ ε > 0 small
  There exists a subgraph H = (V, F, y) of G with y : F → R+ s.t.
  ◮ |F| = O(n ln n / ε²)
  ◮ the weight of every cut is approximately preserved, i.e., w(δ_G(S)) = (1 ± ε) y(δ_H(S)) for all S ⊆ V
  ◮ Application: faster algorithms by preprocessing the graph
  ◮ How sparse can H be? Can we build H efficiently?
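The cut values that the theorem preserves can be computed directly from a weighted edge list. A minimal sketch (the example graph, weights, and the function name `cut_weight` are illustrative, not from the talk):

```python
def cut_weight(edges, S):
    """Weight of the cut delta(S): edges with exactly one endpoint in S."""
    S = set(S)
    return sum(w for (u, v, w) in edges if (u in S) != (v in S))

# Toy weighted graph on vertices 0..3 (a 4-cycle).
edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0), (3, 0, 1.0)]

print(cut_weight(edges, {0}))      # edges 01 and 30 cross: 2.0 + 1.0 = 3.0
print(cut_weight(edges, {0, 1}))   # edges 12 and 30 cross: 1.0 + 1.0 = 2.0
```

A sparsifier H must reproduce every one of the 2^n values cut_weight(edges, S) up to a (1 ± ε) factor while keeping only O(n ln n / ε²) edges.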

  6-7. Weighted Laplacians
  ◮ G = (V, E, w) a weighted graph, where w : E → R+
  ◮ the Laplacian of G is the V × V matrix Lapl_G s.t.
      Lapl_G(i, i) = weighted degree of i, i.e., Σ_{j : ij ∈ E} w_ij
      Lapl_G(i, j) = −w_ij if ij ∈ E (and 0 otherwise)
  ◮ Lapl_G is positive semidefinite: all eigenvalues are ≥ 0
      Notation: Lapl_G ⪰ 0
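The entrywise definition, and the claim that all eigenvalues are ≥ 0, can be checked in a few lines of NumPy. A minimal sketch (the example graph is illustrative):

```python
import numpy as np

def laplacian(n, edges):
    """Weighted Laplacian: L[i,i] = weighted degree of i, L[i,j] = -w_ij for ij in E."""
    L = np.zeros((n, n))
    for i, j, w in edges:
        L[i, i] += w
        L[j, j] += w
        L[i, j] -= w
        L[j, i] -= w
    return L

edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0), (3, 0, 1.0)]
L = laplacian(4, edges)

# Positive semidefinite: smallest eigenvalue is >= 0 (up to roundoff).
eigs = np.linalg.eigvalsh(L)
print(np.all(eigs >= -1e-9))  # True
```

Each row of L sums to zero (the weighted degree cancels the off-diagonal −w_ij entries), which is why the all-ones vector is always in the kernel.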

  8-10. Spectral sparsifiers
  Theorem (Spielman, Teng ’04)
  ◮ G = (V, E, w) a weighted graph, where w : E → R+
  ◮ ε > 0 small
  There are new weights y : E → R+ s.t.
  ◮ y has n polylog(n)/ε² nonzero entries
  ◮ H := (V, E, y) satisfies Lapl_G ⪯ Lapl_H ⪯ (1 + ε) Lapl_G
  ◮ y may be found in Õ(m) time
  Notation: A ⪯ B ⟺ B − A is positive semidefinite
  ◮ Application: nearly-linear-time solvers for symmetric, diagonally dominant linear systems (Spielman, Teng; Koutis, Miller, Peng)

  11-16. Spectral sparsifiers are cut sparsifiers
  ◮ h the incidence vector of S ⊆ V: h_v = 1 if v ∈ S, 0 otherwise
  ◮ h^T Lapl_G h = Σ_{ij ∈ E} w_ij (h_i − h_j)²
  ◮ h^T Lapl_G h is the w-weight of the cut δ(S), i.e., w(δ(S))
  ◮ h^T Lapl_H h is the y-weight of the cut δ(S), i.e., y(δ(S))
  ◮ Lapl_H ⪰ Lapl_G implies h^T (Lapl_H − Lapl_G) h ≥ 0, that is, h^T Lapl_H h ≥ h^T Lapl_G h
  ◮ Lapl_H ⪯ (1 + ε) Lapl_G implies h^T Lapl_H h ≤ (1 + ε) h^T Lapl_G h
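The identity h^T Lapl_G h = w(δ(S)) behind this argument is easy to verify numerically. A minimal sketch (the example graph is illustrative):

```python
import numpy as np

def laplacian(n, edges):
    """Weighted Laplacian built entrywise from the edge list."""
    L = np.zeros((n, n))
    for i, j, w in edges:
        L[i, i] += w; L[j, j] += w
        L[i, j] -= w; L[j, i] -= w
    return L

edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0), (3, 0, 1.0)]
L = laplacian(4, edges)

S = {0, 1}
h = np.array([1.0 if v in S else 0.0 for v in range(4)])  # incidence vector of S

# h^T Lapl_G h = sum over edges of w_ij (h_i - h_j)^2 = weight of the cut delta(S)
quad = h @ L @ h
cut = sum(w for (i, j, w) in edges if (i in S) != (j in S))
print(quad, cut)  # both 2.0
```

Since the quadratic form evaluated at any 0/1 vector is a cut weight, the two-sided Loewner bound on Lapl_H immediately gives the two-sided bound on every cut.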

  17-19. Laplacian matrix as a sum of matrices
  ◮ G = (V, E, w) a weighted graph, where w : E → R+
  ◮ the Laplacian of G can be written as
      Lapl_G := Σ_{ij ∈ E} w_ij (e_i − e_j)(e_i − e_j)^T,
    where each summand is the V × V matrix with entry 1 at positions (i, i) and (j, j) and entry −1 at positions (i, j) and (j, i)
  ◮ Lapl_G is a sum of rank-one positive semidefinite matrices
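The rank-one decomposition can be checked against the entrywise definition directly. A minimal NumPy sketch (the example graph is illustrative):

```python
import numpy as np

n = 4
edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0), (3, 0, 1.0)]
I = np.eye(n)  # rows are the standard basis vectors e_0, ..., e_3

# Lapl_G as a sum of rank-one p.s.d. terms: w_ij (e_i - e_j)(e_i - e_j)^T
L_sum = sum(w * np.outer(I[i] - I[j], I[i] - I[j]) for (i, j, w) in edges)

# Entrywise definition: weighted degrees on the diagonal, -w_ij off-diagonal
L_entry = np.zeros((n, n))
for i, j, w in edges:
    L_entry[i, i] += w; L_entry[j, j] += w
    L_entry[i, j] -= w; L_entry[j, i] -= w

print(np.allclose(L_sum, L_entry))  # True
```

Each term w_ij (e_i − e_j)(e_i − e_j)^T is an outer product of a vector with itself, hence rank one and p.s.d., which is exactly the setting of the next theorem.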

  20-22. Sparsifiers of Sums of Rank-One PSD Matrices
  Theorem (Batson, Spielman, Srivastava ’09)
  ◮ B_1, ..., B_m p.s.d. n × n matrices of rank one
  ◮ B := Σ_i B_i
  ◮ ε > 0 small
  There are new weights y ∈ R^m_+ s.t.
  ◮ y has O(n/ε²) nonzero entries
  ◮ B ⪯ Σ_i y_i B_i ⪯ (1 + ε) B
  ◮ y may be found in O(m n³/ε²) time
  ◮ (Lee, Sun ’15) Almost-linear-time method
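The BSS construction itself does not fit in a few lines, but the conclusion B ⪯ Σ_i y_i B_i ⪯ (1 + ε) B is straightforward to verify for any candidate y, since A ⪯ B holds exactly when the smallest eigenvalue of B − A is nonnegative. A minimal sketch with random rank-one B_i (the data and function names are illustrative; y = all-ones reproduces B exactly, while a scaled-down y violates the lower bound):

```python
import numpy as np

def is_psd(M, tol=1e-9):
    """A is p.s.d. iff its smallest eigenvalue is >= 0 (up to roundoff)."""
    return np.linalg.eigvalsh((M + M.T) / 2).min() >= -tol

def sandwich_holds(Bs, y, eps):
    """Check B <= sum_i y_i B_i <= (1+eps) B in the Loewner (p.s.d.) order."""
    B = sum(Bs)
    Y = sum(yi * Bi for yi, Bi in zip(y, Bs))
    return is_psd(Y - B) and is_psd((1 + eps) * B - Y)

rng = np.random.default_rng(0)
vs = rng.standard_normal((20, 5))          # m = 20 vectors in R^5
Bs = [np.outer(v, v) for v in vs]          # rank-one p.s.d. matrices B_i

print(sandwich_holds(Bs, np.ones(20), 0.1))        # True: y = all-ones gives Y = B
print(sandwich_holds(Bs, 0.5 * np.ones(20), 0.1))  # False: Y = B/2 violates B <= Y
```

The theorem says a y passing this check exists with only O(n/ε²) nonzero entries; the checker above is how one would validate the output of any implementation.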

  23. Sparsifiers of Sums of PSD Matrices
  Theorem (de Carli Silva, Harvey, S. ’11)
  ◮ B_1, ..., B_m p.s.d. n × n matrices of any rank
  ◮ B := Σ_i B_i
  ◮ ε > 0 small
  There are new weights y ∈ R^m_+ s.t.
  ◮ y has O(n/ε²) nonzero entries
  ◮ B ⪯ Σ_i y_i B_i ⪯ (1 + ε) B
  ◮ y may be found in O(m n³/ε²) time

  24. Applications
  ◮ spectral sparsifiers of graphs with extra properties
  ◮ cut sparsifiers of uniform hypergraphs (especially 3-uniform)
  ◮ sparse solutions to semidefinite programs

  25-26. Sparsifiers with Costs
  Theorem
  ◮ G = (V, E, w) a weighted graph, where w : E → R+
  ◮ ε > 0 small
  ◮ “costs” c : E → R+
  There are new weights y : E → R+ s.t.
  ◮ y has O(n/ε²) nonzero entries
  ◮ the reweighted graph H := (V, E, y) satisfies Lapl_G ⪯ Lapl_H ⪯ (1 + ε) Lapl_G
  ◮ c^T w ≤ c^T y ≤ (1 + ε) c^T w
  ◮ y may be found in O(m n³/ε²) time

  27-29. Add extra info to the Laplacian
  ◮ start from Lapl_G = Σ_{ij ∈ E} w_ij (e_i − e_j)(e_i − e_j)^T; each summand has entries w_ij at (i, i) and (j, j) and −w_ij at (i, j) and (j, i)
  ◮ augment each summand with one extra coordinate carrying the cost: place c_ij in a new diagonal entry, with zeros elsewhere in the new row and column
  ◮ the augmented summands are still p.s.d. but now have rank up to two, so the any-rank theorem applies; the extra diagonal entry of the Loewner sandwich is what yields the cost condition c^T w ≤ c^T y ≤ (1 + ε) c^T w
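The augmentation can be written out concretely. A minimal sketch (the graph and costs are illustrative; here the corner entry is normalized as c_ij · w_ij so that the corner of the unweighted sum equals c^T w, whereas the slide writes c_ij, absorbing the edge weight into the cost):

```python
import numpy as np

n = 4
edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0), (3, 0, 1.0)]
costs = [1.0, 4.0, 2.0, 1.0]   # illustrative c_ij values, one per edge

def augmented_summand(i, j, w, c, n):
    """(n+1) x (n+1) p.s.d. block: the edge's Laplacian term plus a cost entry."""
    B = np.zeros((n + 1, n + 1))
    B[i, i] += w; B[j, j] += w
    B[i, j] -= w; B[j, i] -= w
    B[n, n] = c * w            # extra diagonal entry carries the cost term
    return B

Bs = [augmented_summand(i, j, w, c, n) for (i, j, w), c in zip(edges, costs)]
total = sum(Bs)

w_vec = np.array([w for (_, _, w) in edges])
c_vec = np.array(costs)
print(np.isclose(total[n, n], c_vec @ w_vec))  # corner of the sum equals c^T w
```

Each augmented summand has rank two rather than one, which is precisely why the any-rank sparsification theorem (slide 23) is needed instead of BSS.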

  30. Cut Sparsifiers of 3-Uniform Hypergraphs
  Theorem
  ◮ G = (V, E, w) a weighted 3-uniform hypergraph, where w : E → R+
  ◮ i.e., E ⊆ (V choose 3): each edge is a 3-element subset of V
  ◮ ε > 0 small
  There are new weights y : E → R+ s.t.
  ◮ y has O(n/ε²) nonzero entries
  ◮ the reweighted hypergraph H := (V, E, y) satisfies w(δ_G(S)) ≤ y(δ_H(S)) ≤ (1 + ε) w(δ_G(S)) for all S ⊆ V
  ◮ y may be found in O(m n³/ε²) time
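For hypergraphs, δ(S) consists of the hyperedges with at least one vertex on each side of the cut. A minimal sketch of the cut function the theorem preserves (the hypergraph and weights are illustrative):

```python
def hypercut_weight(hyperedges, S):
    """Weight of delta(S): hyperedges with at least one vertex on each side."""
    S = set(S)
    return sum(w for (e, w) in hyperedges
               if any(v in S for v in e) and any(v not in S for v in e))

# 3-uniform hypergraph on vertices 0..4: each edge is a 3-element vertex set.
H = [({0, 1, 2}, 1.0), ({1, 2, 3}, 2.0), ({2, 3, 4}, 0.5)]

print(hypercut_weight(H, {0}))        # only {0,1,2} is cut: 1.0
print(hypercut_weight(H, {0, 1, 2}))  # {1,2,3} and {2,3,4} are cut: 2.5
```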
