Numerical tensor methods and their applications

Numerical tensor methods and their applications, I.V. Oseledets (PowerPoint presentation)



  1. Numerical tensor methods and their applications. I.V. Oseledets, 2 May 2013

  2. What this course is about. This course is mostly on numerical methods of linear algebra in multilinear settings.

  3. What this course is about. This course is mostly on numerical methods of linear algebra in multilinear settings. Goal: develop universal tools for working with high-dimensional problems.

  4. All lectures. 4 lectures: 2 May, 08:00 - 10:00: Introduction: ideas, matrix results, history. 7 May, 08:00 - 10:00: Novel tensor formats (TT, HT, QTT). 8 May, 08:00 - 10:00: Advanced tensor methods (eigenproblems, linear systems). 14 May, 08:00 - 10:00: Advanced topics, recent results and open problems.

  5. Lecture 1: Motivation; matrix background; canonical and Tucker formats; historical overview.

  6. Motivation. Main points: high-dimensional problems appear in diverse applications; standard methods do not scale well in many dimensions.

  7. Motivation. Solution of high-dimensional differential and integral equations on fine grids. Typical cost: O(N^3) → O(N) or even O(log^α N).

  8. Motivation. Ab initio computations and computational material design: protein-ligand docking (D. Zheltkov); density functional theory for large clusters (V. Khoromskaia). [Figures: computed orbitals v_1^{(1)}, ..., v_6^{(1)} and convergence of the energies E_F and E_EN for grid sizes n = 61, 121, 241.]

  9. Motivation. Construction of reduced-order models for multiparametric/stochastic systems in engineering. Diffusion problem: ∇·(a(p)∇u) = f(p), p = (p_1, p_2, p_3, p_4). Approximate u using only a few snapshots.

  10. Motivation. Data mining and compression: images; computational data (temperature).

  11. Why tensors are important. Multivariate functions are related to multidimensional arrays, or tensors:

  12. Why tensors are important. Multivariate functions are related to multidimensional arrays, or tensors. Take a function f(x_1, ..., x_d), take a tensor-product grid, and get a tensor: A(i_1, ..., i_d) = f(x_1(i_1), ..., x_d(i_d)).
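The sampling step on the slide can be sketched in a few lines of NumPy; the grid and the sample function below are illustrative choices, not taken from the slides.

```python
import numpy as np

# Sample f(x1, x2, x3) = 1/(x1 + x2 + x3) on a tensor-product grid:
# A[i1, i2, i3] = f(x(i1), x(i2), x(i3)).
n = 10
x = np.linspace(1.0, 2.0, n)                  # the same 1D grid in every mode
X1, X2, X3 = np.meshgrid(x, x, x, indexing="ij")
A = 1.0 / (X1 + X2 + X3)                      # a 10 x 10 x 10 tensor
```

With `indexing="ij"` the k-th output axis of `meshgrid` corresponds to the k-th input grid, so the tensor indices line up with the function arguments.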

  13. Literature. T. Kolda and B. Bader, Tensor decompositions and applications, SIREV (2009). W. Hackbusch, Tensor spaces and numerical tensor calculus, 2012. L. Grasedyck, D. Kressner, C. Tobler, A literature survey of low-rank tensor approximation techniques, 2013.

  14. Software. Some software will be used: Tensor Toolbox 2.5 (T. Kolda); TT-Toolbox (http://github.com/oseledets/TT-Toolbox). There is also a Python version (http://github.com/oseledets/ttpy) which now has similar functionality.

  15. Where tensors come from. d-dimensional PDE: ∆u = f, u = u(x_1, ..., x_d). PDE with M parameters: A(p) u(p) = f(p), u = u(x, p_1, ..., p_M). Data (images, video, hyperspectral images). Latent variable models, joint probability distributions. Factor models. Many others.

  16. Definitions. A tensor is a d-dimensional array: A(i_1, ..., i_d), 1 ≤ i_k ≤ n_k. A mathematically more correct definition: a tensor is a multilinear form.

  17. Definitions. Tensors form a linear vector space. The natural norm is the Frobenius norm: ||A|| = ( Σ_{i_1,...,i_d} |A(i_1, ..., i_d)|^2 )^{1/2}.
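In code, the Frobenius norm of a d-dimensional array is just the 2-norm of its flattened entries; a minimal NumPy check:

```python
import numpy as np

# Frobenius norm of a 3-tensor: square root of the sum of squared entries,
# equal to the 2-norm of the flattened array.
A = np.random.rand(4, 5, 6)
fro = np.sqrt(np.sum(np.abs(A) ** 2))
assert np.isclose(fro, np.linalg.norm(A.ravel()))
```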

  18. Curse of dimensionality. Curse of dimensionality: storage of a d-tensor with mode sizes n requires n^d elements.
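To make the n^d growth concrete, a back-of-the-envelope calculation (the mode size and dimension are arbitrary example values):

```python
# Storage of a d-tensor with mode size n: n**d entries.
n, d = 32, 10
entries = n ** d                 # 32**10 = 2**50, about 1.13e15 entries
bytes_needed = entries * 8       # float64, 8 bytes per entry
print(bytes_needed / 2 ** 50)    # 8.0 (PiB) -- hopeless to store directly
```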

  19. Basic questions. How to break the curse of dimensionality? How to perform (multidimensional) sampling? How to do everything efficiently and in a robust way?

  20. Real-life problems. If you really need to compute something high-dimensional, there is usually a way: Monte Carlo; special basis sets (radial basis functions); best N-term approximations (wavelets, sparse grids). But we want algebraic techniques...

  21. Separation of variables. One of the few fruitful ideas is separation of variables.

  22. What is separation of variables. Separation rank 1: f(x_1, ..., x_d) = u_1(x_1) u_2(x_2) ... u_d(x_d). More general: f(x_1, ..., x_d) ≈ Σ_{α=1}^r u_1(x_1, α) ... u_d(x_d, α).

  23. Analytical examples. How to compute separated representations? Analytical expressions (B. N. Khoromskij and many others): f(x_1, ..., x_d) = 1/(x_1 + ... + x_d), based on the identity 1/x = ∫_0^∞ exp(−px) dp, with rank r = O(log ε^{−1} log δ^{−1}).
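A minimal numerical sketch of this identity: discretizing the integral with the substitution p = e^t and the trapezoidal rule (a standard sinc-quadrature trick; the step and truncation range below are ad hoc choices, not from the slides) turns 1/(x_1 + ... + x_d) into a short sum of separable exponentials.

```python
import numpy as np

# 1/s = ∫_0^∞ exp(-p s) dp.  With p = e^t this becomes an integral over
# the whole real line, which the trapezoidal rule approximates very well.
h = 0.2
t = np.arange(-16.0, 5.0 + h, h)
p, w = np.exp(t), h * np.exp(t)          # quadrature nodes and weights

x = np.array([0.7, 1.3, 2.0])            # one sample point, d = 3
# Each term w_k * exp(-p_k x_1) * ... * exp(-p_k x_d) is separable (rank 1).
approx = np.sum(w * np.prod(np.exp(-np.outer(p, x)), axis=1))
exact = 1.0 / x.sum()
print(abs(approx - exact))               # small (roughly 1e-7 here)
```

The number of quadrature nodes plays the role of the separation rank r; shrinking the step h and widening the range trades rank for accuracy, which is the r = O(log ε^{−1} log δ^{−1}) behavior quoted above.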

  24. Numerical computation of separated representations. We can try to compute the separated decomposition numerically. How do we do that?

  25. Canonical format. Tensors: canonical format: A(i_1, ..., i_d) ≈ Σ_{α=1}^r U_1(i_1, α) ... U_d(i_d, α). What happens in d = 2?
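Assembling a tensor from canonical factors is a one-liner with `einsum`; here is a d = 3 sketch with arbitrary sizes:

```python
import numpy as np

# Canonical format, d = 3:
#   A(i1, i2, i3) = sum_a U1(i1, a) U2(i2, a) U3(i3, a)
n1, n2, n3, r = 4, 5, 6, 3
U1, U2, U3 = (np.random.rand(n, r) for n in (n1, n2, n3))
A = np.einsum("ia,ja,ka->ijk", U1, U2, U3)
# Storage: (n1 + n2 + n3) * r numbers instead of n1 * n2 * n3.
```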

  26. Two-dimensional case. A(i_1, i_2) ≈ Σ_{α=1}^r U_1(i_1, α) U_2(i_2, α)

  27. Two-dimensional case. A(i_1, i_2) ≈ Σ_{α=1}^r U_1(i_1, α) U_2(i_2, α). Matrix form: A ≈ U V^⊤, where U is n × r and V is m × r: an approximate rank-r factorization.

  28. SVD: definition. The fabulous SVD (singular value decomposition): every matrix can be represented as a product A = U S V*, where U and V have orthonormal columns and S is a diagonal matrix with singular values σ_i ≥ 0 on the diagonal.
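By the Eckart-Young theorem, keeping the r largest singular triplets gives the best rank-r approximation in the spectral (and Frobenius) norm, with error exactly σ_{r+1}; a quick NumPy check:

```python
import numpy as np

# Truncated SVD: best rank-r approximation of a matrix.
np.random.seed(0)
A = np.random.rand(50, 40)
U, s, Vt = np.linalg.svd(A, full_matrices=False)

r = 10
Ar = U[:, :r] * s[:r] @ Vt[:r]        # rank-r truncation
err = np.linalg.norm(A - Ar, 2)       # spectral-norm error
assert np.isclose(err, s[r])          # equals sigma_{r+1}
```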

  29. SVD: complexity. The complexity of the SVD is O(n^3) (too much to compute a decomposition with only O(nr) parameters).

  30. SVD: complexity. The complexity of the SVD is O(n^3) (too much to compute a decomposition with only O(nr) parameters). Are there faster algorithms?

  31. Skeleton decomposition. Yes: based on the skeleton decomposition A ≈ C Â^{−1} R, where C consists of r columns of A, R of r rows of A, and Â is the r × r submatrix on their intersection. Ex. 1: Prove it. Ex. 2: Have you met the skeleton decomposition before?
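For an exactly rank-r matrix and any r rows and r columns whose intersection Â is nonsingular, the skeleton formula reconstructs A exactly (this is Ex. 1); a numerical check:

```python
import numpy as np

# Skeleton decomposition on an exactly rank-r matrix: A = C Ahat^{-1} R.
np.random.seed(1)
n, m, r = 30, 20, 5
A = np.random.rand(n, r) @ np.random.rand(r, m)   # rank r by construction

rows, cols = np.arange(r), np.arange(r)           # first r rows/columns
C, R = A[:, cols], A[rows, :]                     # (almost surely nonsingular
Ahat = A[np.ix_(rows, cols)]                      #  intersection for random A)
A_rec = C @ np.linalg.solve(Ahat, R)              # C Ahat^{-1} R
print(np.linalg.norm(A - A_rec))                  # zero up to roundoff
```

For matrices of only approximate low rank the choice of rows and columns matters, which is where the maximum-volume principle on the next slides comes in.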

  32. Maximum volume principle. What happens if the matrix is only of approximate low rank? A = R + E, rank R = r, ||E||_C = ε, where ||·||_C is the Chebyshev (entrywise maximum) norm.

  33. Maximum volume principle. Select the submatrix Â such that its volume is maximal (volume = absolute value of the determinant). Then ||A − C Â^{−1} R||_C ≤ (r + 1)^2 ε.

  34. Proof. E. E. Tyrtyshnikov, S. A. Goreinov, On quasioptimality of skeleton approximation of a matrix in the Chebyshev norm, doi: 10.1134/S1064562411030355. Write A = [A_11 A_12; A_21 A_22] with the maxvol submatrix in the (1,1) block, C = [A_11; A_21], R = [A_11 A_12], and H = A − C A_11^{−1} R. Need: |h_ij| ≤ (r + 1)^2 δ_{r+1}(A).

  35. Proof. Let Z = [A_11 v; u^⊤ a_ij] be the (r + 1) × (r + 1) matrix obtained by bordering A_11 with a row u^⊤ and a column v of A meeting at the entry a_ij. The entry h_ij can be found from [I 0; −u^⊤ A_11^{−1} 1] Z = [A_11 v; 0 h_ij], so det Z = h_ij det A_11. Therefore |h_ij^{−1}| = ||Z^{−1}||_C, and since ||Z^{−1}||_2 ≤ (r + 1) ||Z^{−1}||_C, we get |h_ij| ≤ (r + 1) σ_{r+1}(Z).

  36. Proof. Finally, σ_{r+1}(Z) = min_{U_Z, V_Z} ||Z − U_Z V_Z^⊤||_2 (minimum over rank-r factors, by Eckart-Young) ≤ (r + 1) ||Z − U_Z V_Z^⊤||_C ≤ (r + 1) δ_{r+1}(A), which gives |h_ij| ≤ (r + 1)^2 δ_{r+1}(A).

  37. Maxvol algorithm (1). OK, then: how to find a good submatrix? Crucial algorithm: find the maxvol r × r submatrix Â in an n × r matrix A. Characteristic property: A Â^{−1} = [I; Z] (after a row permutation), with |z_ij| ≤ 1.
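A greedy row-swapping sketch of the maxvol iteration (the stopping tolerance, starting rows, and loop structure are my assumptions; the reference algorithm is the one by Goreinov, Tyrtyshnikov et al.):

```python
import numpy as np

def maxvol(A, tol=1e-2, max_iter=100):
    """Greedy maxvol sketch for a tall n x r matrix A.

    While B = A @ inv(A[idx]) has an entry |b_ij| > 1 + tol, swap row j of
    the current submatrix for row i.  Each swap multiplies the volume of
    A[idx] by |b_ij| > 1, so the iteration terminates; at convergence
    |B| <= 1 + tol entrywise, matching the characteristic property above.
    """
    n, r = A.shape
    idx = np.arange(r)                    # initial guess: first r rows
    for _ in range(max_iter):
        B = A @ np.linalg.inv(A[idx])
        i, j = np.unravel_index(np.argmax(np.abs(B)), B.shape)
        if abs(B[i, j]) <= 1.0 + tol:
            break
        idx[j] = i                        # row i enters the submatrix
    return idx

np.random.seed(2)
A = np.random.rand(100, 5)
idx = maxvol(A)
B = A @ np.linalg.inv(A[idx])
print(np.abs(B).max())                    # <= 1 + tol
```

In practice the initial rows come from a pivoted LU or QR factorization rather than being taken blindly, and the inverse is updated with rank-1 formulas instead of being recomputed each sweep.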
