A regularized least-squares method for sparse low-rank approximation of multivariate functions


  1. Workshop "Numerical methods for high-dimensional problems", April 18, 2014. A regularized least-squares method for sparse low-rank approximation of multivariate functions. Mathilde Chevreuil, joint work with Prashant Rai, Loic Giraldi, Anthony Nouy. GeM – Institut de Recherche en Génie Civil et Mécanique, LUNAM Université, UMR CNRS 6183 / Université de Nantes / Centrale Nantes

  2. Motivations: Chorus ANR project • Aero-thermal regulation in an aircraft cabin: 39 random parameters; database of 2000 evaluations of the model. Motivations and framework | Sparse LR approx. | Tensor formats & alg. | Conclusion

  3. Motivations • In telecommunication: the electromagnetic field and the Specific Absorption Rate (SAR) induced in the body, over 4 random parameters. FDTD method: 2 days per run, so few evaluations of the model are available.

  4. Motivations • Aim: construct a surrogate model of the true model, from a small collection of evaluations of the true model, that allows fast evaluation of output quantities of interest, observables, or objective functions. Applications: propagation (estimation of quantiles, sensitivity analysis, ...); optimization or identification; probabilistic inverse problems.

  5. Uncertainty quantification using functional approaches • Stochastic/parametric models: uncertainties are represented by "simple" random variables ξ = (ξ1, ..., ξd) : Θ → Ξ defined on a probability space (Θ, B, P); the model maps ξ to the output u(ξ). Ideal approach: compute an accurate and explicit representation of u(ξ):
u(ξ) ≈ Σ_{α ∈ I_P} u_α φ_α(ξ), ξ ∈ Ξ,
where the φ_α(ξ) constitute a suitable basis of multiparametric functions: polynomial chaos [Ghanem and Spanos 1991, Xiu and Karniadakis 2002, Soize and Ghanem 2004]; piecewise polynomials, wavelets [Deb 2001, Le Maître 2004, Wan 2005].

  6. Motivations • Approximation spaces: S_P = span{φ_α(ξ) = φ^(1)_{α1}(ξ1) ··· φ^(d)_{αd}(ξd); α ∈ I_P} with a predefined index set I_P, e.g. the nested sets
{α ∈ N^d; |α|_∞ ≤ r} ⊃ {α ∈ N^d; |α|_1 ≤ r} ⊃ {α ∈ N^d; |α|_q ≤ r}, 0 < q < 1.
Issues: • approximation of a high-dimensional function u(ξ), ξ ∈ Ξ ⊂ R^d, where #(I_P) ≈ 10^10, 10^100, 10^1000, ...; • use of deterministic solvers in a black-box manner, requiring numerous evaluations of possibly fine deterministic models.
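To make the nesting of these index sets concrete, here is a small Python sketch (not from the slides; the dimension d, degree r, and exponent q are illustrative choices) that counts the multi-indices in each set:

```python
# Cardinality of the index sets {α ∈ N^d : |α|_∞ ≤ r}, {α : |α|_1 ≤ r},
# and {α : |α|_q ≤ r} for 0 < q < 1, illustrating why the last set is much smaller.
from itertools import product

def count_indices(d, r, norm):
    """Count multi-indices α in {0,...,r}^d satisfying the given norm bound."""
    return sum(1 for alpha in product(range(r + 1), repeat=d) if norm(alpha) <= r)

d, r, q = 4, 4, 0.5
n_inf = count_indices(d, r, lambda a: max(a))                             # |α|_∞ ≤ r
n_one = count_indices(d, r, lambda a: sum(a))                             # |α|_1 ≤ r
n_q   = count_indices(d, r, lambda a: sum(x ** q for x in a) ** (1 / q))  # |α|_q ≤ r

print(n_inf, n_one, n_q)  # the sets are nested: n_inf ≥ n_one ≥ n_q
```

Even in this tiny example the quasi-norm set with q = 1/2 keeps only a few dozen indices out of the 625 in the full tensor set, which is the point of such sparsity-oriented index sets in high dimension.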

  7. Motivations • Objective: compute an approximation of u in S_P,
u(ξ) ≈ Σ_{α ∈ I_P} u_α φ_α(ξ),
using few samples {u(y^q)}_{q=1}^Q, where {y^q}_{q=1}^Q is a collection of sample points and the u(y^q) are solutions of the deterministic problem. • Exploit structures of u(ξ): u(ξ) can be sparse on particular basis functions; u(ξ) can have suitable low-rank representations. Can we exploit sparsity within the low-rank structure of u?
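As a toy illustration of this objective (the model u, the monomial basis, and the sample points below are hypothetical stand-ins, not the method of the talk), fitting the coefficients u_α by ordinary least squares from a few samples looks like:

```python
# Fit the coefficients u_α of u(ξ) ≈ Σ_α u_α φ_α(ξ) by least squares
# from Q samples {(y^q, u(y^q))}.
import numpy as np

rng = np.random.default_rng(0)

def u(y):                         # "true model" (cheap, illustrative stand-in)
    return 1.0 + 2.0 * y[:, 0] - 3.0 * y[:, 0] * y[:, 1]

# Monomial basis φ_α(ξ) = ξ1^α1 · ξ2^α2 for α in a small index set I_P
index_set = [(0, 0), (1, 0), (0, 1), (1, 1)]
def design_matrix(y):
    return np.column_stack([y[:, 0] ** a1 * y[:, 1] ** a2 for a1, a2 in index_set])

Q = 20                            # few samples
y = rng.uniform(-1, 1, size=(Q, 2))
coeffs, *_ = np.linalg.lstsq(design_matrix(y), u(y), rcond=None)
print(np.round(coeffs, 6))        # recovers (1, 2, 0, -3) up to round-off
```

In the regime the slides target, the basis is far too large for plain least squares with few samples, which is what motivates the sparse and low-rank structures that follow.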

  8. Outline 1. Motivations and framework 2. Sparse low-rank approximation 3. Tensor formats and algorithms (canonical decomposition, Tensor Train format) 4. Conclusion


  10. Low rank approximation • Approximation of the function u using tensor approximation methods. Exploit the tensor structure of the function space:
S_P = S^1_{P1} ⊗ ... ⊗ S^d_{Pd}, with S^k_{Pk} = span{φ^(k)_i}_{i=1}^{Pk}.
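This tensor structure can be seen numerically: evaluating all Π_k P_k multivariate basis functions at one point ξ is a Kronecker product of the univariate basis evaluations. A minimal sketch (illustrative monomial bases and sizes, not from the slides):

```python
# Tensor-product structure S_P = S¹_{P1} ⊗ ... ⊗ S^d_{Pd}: the full multivariate
# basis evaluated at ξ is the Kronecker product of the univariate evaluations.
import numpy as np

xi = np.array([0.3, -0.5, 0.8])           # one point, d = 3
P = [3, 2, 4]                             # dimensions P_k of the univariate spaces
# univariate bases φ^(k)_i(ξ_k) = ξ_k^i, i = 0..P_k-1
phis = [xi[k] ** np.arange(P[k]) for k in range(len(P))]

full = phis[0]
for p in phis[1:]:
    full = np.kron(full, p)               # all products φ^(1)_{α1}·φ^(2)_{α2}·φ^(3)_{α3}

print(full.size)                          # Π_k P_k = 24 basis functions
```

The multiplicative growth of this Kronecker vector with d is exactly the curse of dimensionality that the low-rank subsets below avoid.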

  11. Low rank approximation • Low-rank tensor subsets M:
M = {v = F_M(p1, p2, ..., pn)} with dim(M) = O(d).
[Nouy 2010, Khoromskij and Schwab 2010, Ballani 2010, Beylkin et al 2011, Matthies and Zander 2012, Doostan et al 2012, ...]

  12. Low rank approximation • Sparse low-rank tensor subsets M_{m-sparse}, ideally
M_{m-sparse} = {v = F_M(p1, p2, ..., pn); ‖p_i‖_0 ≤ m_i, 1 ≤ i ≤ n},
with dim(M_{m-sparse}) ≪ dim(M).
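A back-of-the-envelope comparison of the three parameter counts (the sizes d, P_k, rank m, and per-factor sparsity below are hypothetical, chosen only to show the orders of magnitude):

```python
# Rough parameter counts: the full tensor basis on S_P has Π_k P_k coefficients;
# a rank-m canonical representation has m·d·P_k parameters (dim(M) = O(d));
# m-sparse factors ||p_i||_0 ≤ m_sparse reduce this further.
d, Pk, m, m_sparse = 20, 10, 5, 3         # hypothetical sizes
full = Pk ** d                            # dim(S_P) = Π_k P_k = 10^20
low_rank = m * d * Pk                     # canonical rank-m parameter count
sparse = m * d * m_sparse                 # with sparsity constraints on each factor
print(full, low_rank, sparse)
```

With these numbers: 10^20 coefficients for the full space, 1000 for the low-rank subset, 300 with sparse factors, which is why the sparse low-rank subsets are tractable from few samples.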

  13. Low rank approximation: least-squares in low-rank subsets • Approximation v(ξ) ∈ M defined by
min_{v ∈ M} ‖u − v‖²_Q with ‖u − v‖²_Q = (1/Q) Σ_{k=1}^Q |u(y^k) − v(y^k)|².
[Beylkin et al 2011, Doostan et al 2012]

  14. Low rank approximation: least-squares in low-rank subsets • Approximation v(ξ) ∈ M_{m-sparse} defined by
min_{v ∈ M} ‖u − v‖²_Q s.t. ‖p_i‖_0 ≤ m_i.

  15. Low rank approximation: least-squares in low-rank subsets • The ℓ0-constrained problem is relaxed into a regularized problem:
min_{v ∈ M} ‖u − v‖²_Q s.t. ‖p_i‖_0 ≤ m_i → min_{v ∈ M} ‖u − v‖²_Q + Σ_{i=1}^n λ_i ‖p_i‖_1 (Lasso).

  16. Low rank approximation: least-squares in low-rank subsets • Alternating least-squares with sparse regularization: for 1 ≤ i ≤ n, and for fixed p_j with j ≠ i, solve
min_{p_i} ‖u − F_M(p1, ..., p_i, ..., pn)‖²_Q + λ_i ‖p_i‖_1.
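One way this alternating scheme can be sketched (my illustration under simplifying assumptions, not the authors' code): a rank-one surrogate in d = 2 with Legendre bases, where each factor update is a small Lasso problem, solved here by plain ISTA (iterative soft-thresholding). The model u, sample size, and λ are hypothetical.

```python
# Alternating least-squares with ℓ1 regularization for a rank-one, d = 2 surrogate
# v(ξ) = (φ¹(ξ1)ᵀ p1)·(φ²(ξ2)ᵀ p2); fixing one factor makes the other a Lasso problem.
import numpy as np

def ista_lasso(A, b, lam, p0=None, n_iter=2000):
    """Minimize (1/Q)||A p - b||² + lam·||p||_1 by soft-thresholded gradient steps."""
    Q = len(b)
    L = 2.0 * np.linalg.norm(A, 2) ** 2 / Q        # Lipschitz constant of the smooth part
    p = np.zeros(A.shape[1]) if p0 is None else p0.copy()
    for _ in range(n_iter):
        g = 2.0 * A.T @ (A @ p - b) / Q            # gradient of the least-squares term
        z = p - g / L
        p = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return p

def phi(x):
    return np.polynomial.legendre.legvander(x, 4)  # Legendre basis, degrees 0..4

rng = np.random.default_rng(1)
Q = 50
y = rng.uniform(-1, 1, size=(Q, 2))                # sample points y^q
u = (1 + y[:, 0] ** 2) * (2 * y[:, 1])             # hypothetical rank-one "true model"

p1, p2 = rng.standard_normal(5), rng.standard_normal(5)
for _ in range(20):                                # alternate over the two factors
    p1 = ista_lasso(phi(y[:, 0]) * (phi(y[:, 1]) @ p2)[:, None], u, 1e-4, p0=p1)
    p2 = ista_lasso(phi(y[:, 1]) * (phi(y[:, 0]) @ p1)[:, None], u, 1e-4, p0=p2)

v = (phi(y[:, 0]) @ p1) * (phi(y[:, 1]) @ p2)      # surrogate at the sample points
print(np.max(np.abs(u - v)))                       # small training residual
```

Each inner problem is convex even though the overall rank-one fit is not; the ℓ1 terms both select sparse factor coefficients and resolve the scale ambiguity between the factors.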


  18. Approximation in canonical tensor subset • Rank-one canonical tensor subset:
R_1 = {w = w^(1) ⊗ ... ⊗ w^(d); w^(k) ∈ S^k_{Pk}}, with w^(k)(ξ_k) = φ^(k)(ξ_k)^T w^(k).

  19. Approximation in canonical tensor subset • Equivalently, in terms of coefficient vectors:
R_1 = {w = ⟨φ, w^(1) ⊗ ... ⊗ w^(d)⟩; w^(k) ∈ R^{Pk}},
where φ = (φ^(1) ⊗ ... ⊗ φ^(d))(ξ) and dim(R_1) = Σ_{k=1}^d P_k.

  20. Approximation in canonical tensor subset • Sparsity-inducing version with ℓ1-ball constraints:
R^γ_1 = {w = ⟨φ, w^(1) ⊗ ... ⊗ w^(d)⟩; w^(k) ∈ R^{Pk}, ‖w^(k)‖_1 ≤ γ_k},
where φ = (φ^(1) ⊗ ... ⊗ φ^(d))(ξ) and dim(R^γ_1) = Σ_{k=1}^d P_k.

  21. Approximation in canonical tensor subset • Rank-m tensor subsets:
R^{γ1,...,γm}_m = {v = Σ_{i=1}^m w_i; w_i ∈ R^{γi}_1}.
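Evaluating such a rank-m canonical representation never forms the full tensor: v(ξ) = Σ_{i=1}^m Π_k φ^(k)(ξ_k)ᵀ w_i^(k). A minimal sketch (the monomial basis and the coefficient vectors below are illustrative):

```python
# Evaluate a rank-m canonical tensor v(ξ) = Σ_{i=1}^m Π_k φ^(k)(ξ_k)ᵀ w_i^(k)
# as a sum of m products of d univariate contractions.
import numpy as np

def eval_rank_m(xi, W, P=4):
    """W[i][k] is the coefficient vector w_i^(k) ∈ R^P; bases φ^(k)_j(ξ_k) = ξ_k^j."""
    total = 0.0
    for factors in W:                       # sum over the m rank-one terms
        term = 1.0
        for k, w in enumerate(factors):     # product over the d dimensions
            term *= (xi[k] ** np.arange(P)) @ w
        total += term
    return total

# rank-2, d = 3 example: v(ξ) = ξ1·ξ2·ξ3 + (1 + ξ1²)
W = [
    [np.array([0, 1, 0, 0.]), np.array([0, 1, 0, 0.]), np.array([0, 1, 0, 0.])],
    [np.array([1, 0, 1, 0.]), np.array([1, 0, 0, 0.]), np.array([1, 0, 0, 0.])],
]
print(eval_rank_m(np.array([0.5, 2.0, -1.0]), W))   # 0.5·2·(−1) + (1 + 0.25) = 0.25
```

The cost per evaluation is m·Σ_k P_k inner products, matching the dim(R^γ_1) = Σ_k P_k count per rank-one term on the slide.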
