Tensor numerical methods in scientific computing: Basic theory and initial applications. Boris Khoromskij, Max-Planck-Institute for Mathematics in the Sciences, Leipzig. CEMRACS-2013, CIRM, Marseille-Luminy, 25.07.2013.


  1. Tensor numerical methods in scientific computing: Basic theory and initial applications. Boris Khoromskij, Max-Planck-Institute for Mathematics in the Sciences, Leipzig. CEMRACS-2013, CIRM, Marseille-Luminy, 25.07.2013.

  2. Outline of the talk:
     1 Introduction
     2 Separation of variables: from canonical to tensor network formats
     3 Quantized tensor approximation of logarithmic complexity:
       multi-resolution folding to quantized images, R^{2^L} ↦ ⊗_{ℓ=1}^{L} R^2;
       theory of quantized approximation of multidimensional functional vectors;
       representation (approximation) of operators in quantized spaces
     4 Large-scale eigenvalue problems in quantum chemistry
     5 Stochastic/parametric PDEs and multi-dimensional preconditioning
     6 High-dimensional dynamics: Fokker-Planck, master and molecular Schrödinger equations
     7 Superfast QTT-FFT, convolution and QTT-FWT
     8 Conclusions

  3. Main target: tractable methods for solving d-dimensional PDEs.
     High-dimensional applications: d-dimensional operators (Green's functions, Fourier, convolution and wavelet transforms); molecular systems (electronic structure, quantum molecular dynamics); PDEs in R^d (quantum information, stochastic PDEs, dynamical systems).
     ◮ Elliptic (parameter-dependent) BVP: find u ∈ H^1_0(Ω), Ω ⊂ R^d, s.t. Hu := −div(a grad u + u v) + Vu = F in Ω.
     ◮ Elliptic EVP: find a pair (λ, u) ∈ R × H^1_0(Ω), s.t. Hu = λu in Ω ⊂ R^d, ⟨u, u⟩ = 1.
     ◮ Parabolic-type equations (σ ∈ {1, i}): find u : R^d × (0, T) → R with u(x, 0) ∈ H^2(R^d), s.t. σ ∂u/∂t + Hu = 0.
     Tensor methods adapt gainfully to the main challenges:
     ◮ High spatial dimension: Ω = (−b, b)^d ⊂ R^d (d = 2, 3, ..., 100, ...).
     ◮ Multiparametric equations: a(y, x), F(y, x), u(y, x), y ∈ R^M (M = 1, 2, ..., 100, ...).
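The separable (Kronecker) structure of such discretized operators is what the tensor methods discussed next exploit. As a concrete illustration, here is a minimal numpy sketch, not taken from the slides, that assembles the d-dimensional finite-difference Laplacian as a Kronecker sum of one-dimensional blocks; the uniform grid, Dirichlet boundary conditions, and function names are illustrative assumptions.

```python
import numpy as np
from functools import reduce

def laplace_1d(n, h):
    """1D finite-difference Laplacian on n interior points (Dirichlet BC)."""
    return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def laplace_nd(d, n, h):
    """d-dimensional Laplacian as a Kronecker sum: sum_k I x ... x A x ... x I."""
    A, I = laplace_1d(n, h), np.eye(n)
    total = np.zeros((n**d, n**d))
    for k in range(d):
        factors = [A if j == k else I for j in range(d)]
        total += reduce(np.kron, factors)
    return total

# d = 3, n = 8: the assembled matrix has (n^d)^2 = 262144 entries, while the
# separable (rank-d) representation stores only d copies of an n x n block.
L = laplace_nd(3, 8, 1.0 / 9)
print(L.shape)   # (512, 512)
```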

  4. Tensor numerical methods in higher dimensions: focus, building blocks, benefits.
     Main focus: O(d) numerical approximation of d-dimensional PDEs.
     Basic ingredients:
     ◮ Traditional numerical methods.
     ◮ Numerical multilinear algebra.
     ◮ Low-parametric separable approximation of d-variate functions: theory/algorithms.
     ◮ Tensor representation of linear operators: Green's functions, convolution(d), FFT(d), wavelets, multi-particle Hamiltonians, preconditioners.
     ◮ Iterative solvers for steady-state and time-dependent PDEs on "tensor manifolds".
     "Separation" of variables beats the "curse of dimensionality":
     ◮ O(dN) tensor numerical methods: N^d → O(dN).
     Super-compression:
     ◮ O(d log N) quantized tensor approximation (QC, QTT): N^d → O(d log N).
     Guiding principle:
     ◮ Validation of numerical algorithms on real-life high-dimensional PDEs.
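To make the N^d → O(d log N) compression claim concrete, the following small numpy experiment, not from the slides, folds a vector of 2^L function samples into an L-fold 2 × 2 × ... × 2 quantized tensor and checks the ranks of its sequential unfoldings, which equal the minimal TT/QTT ranks; the choice of exp as test function, the grid, and the tolerance are illustrative assumptions.

```python
import numpy as np

# Quantization (QTT) idea: a vector of N = 2^L samples is folded into an
# L-dimensional 2 x 2 x ... x 2 tensor; for well-behaved functions the folded
# tensor has tiny TT ranks, so O(N) storage shrinks to O(log N).
L = 12
N = 2 ** L
h = 1.0 / N
v = np.exp(np.arange(N) * h)           # samples of exp(x) on a uniform grid
T = v.reshape([2] * L)                 # multi-resolution folding R^{2^L} -> (R^2)^{⊗L}

# Ranks of the sequential unfoldings (these equal the minimal TT/QTT ranks).
for k in range(1, L):
    s = np.linalg.svd(T.reshape(2 ** k, -1), compute_uv=False)
    rank = int(np.sum(s > 1e-12 * s[0]))
    print(f"unfolding {k}: rank {rank}")   # exp folds to QTT rank 1 for every k
```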

  5. Separable representation of (discrete) functions in a tensor-product Hilbert space.
     Tensor-product Hilbert space V_n = V_1 ⊗ ... ⊗ V_d, n = (n_1, ..., n_d), n_ℓ = dim V_ℓ.
     ◮ Euclidean vector space V_n = R^{n_1 × ... × n_d}, V_ℓ = R^{n_ℓ} (ℓ = 1, ..., d):
       V = [v_i] ∈ V_n, ⟨W, V⟩ = ∑_i w_i v_i, i = (i_1, ..., i_d), i_ℓ ∈ I_ℓ = {1, ..., n_ℓ}.
     ◮ Tensors are functions of a discrete variable, V_n ∋ V : I_1 × ... × I_d → R.
     Separable representation in V_n: rank-1 tensors
       V = [v_{i_1...i_d}] = v^{(1)} ⊗ ... ⊗ v^{(d)} ∈ V_n,  v_{i_1...i_d} = ∏_{ℓ=1}^{d} v^{(ℓ)}_{i_ℓ}.
     ◮ The scalar product: ⟨W, V⟩ = ⟨w^{(1)} ⊗ ... ⊗ w^{(d)}, v^{(1)} ⊗ ... ⊗ v^{(d)}⟩ = ∏_{ℓ=1}^{d} ⟨w^{(ℓ)}, v^{(ℓ)}⟩_{V_ℓ}.
     ◮ Storage: Stor(V) = ∑_{ℓ=1}^{d} n_ℓ ≪ dim V_n = ∏_{ℓ=1}^{d} n_ℓ.
     ◮ O(d) bilinear operations: addition, Hadamard product, contraction, convolution, ...
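The factor-wise scalar product and the O(dN) vs. N^d storage gap can be checked directly. Below is a minimal numpy sketch, not from the slides; the dimensions d, n and the random factor vectors are illustrative assumptions.

```python
import numpy as np
from functools import reduce

# Rank-1 tensors stored by their d factor vectors only (O(d*n) numbers).
d, n = 4, 20
rng = np.random.default_rng(0)
v_factors = [rng.standard_normal(n) for _ in range(d)]
w_factors = [rng.standard_normal(n) for _ in range(d)]

# Factor-wise scalar product: product of d one-dimensional inner products.
dot_separable = np.prod([w @ v for w, v in zip(w_factors, v_factors)])

# Reference: form the full n^d tensors (only feasible for tiny d and n).
V = reduce(np.multiply.outer, v_factors)
W = reduce(np.multiply.outer, w_factors)
dot_full = np.sum(W * V)

print(np.allclose(dot_separable, dot_full))          # True
print(f"separable storage: {d * n}, full storage: {n ** d}")
```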

  6. Parametrization by separation of variables: from canonical to tensor network formats.
     Def. Canonical R-term representation in V_n: V ∈ C_R(V_n) if [Hitchcock '27, ...]
       V = ∑_{k=1}^{R} v_k^{(1)} ⊗ ... ⊗ v_k^{(d)},  v_k^{(ℓ)} ∈ V_ℓ.
     ◮ d = 2: rank-R matrices, V = ∑_{k=1}^{R} u_k v_k^T.
     (Figure: visualization of the canonical model for d = 3, A = ∑_{k=1}^{R} V_k^{(1)} ⊗ V_k^{(2)} ⊗ V_k^{(3)}.)
     ◮ Advantages: storage = dRN, simple multilinear algebra.
     ◮ Limitations: C_R(V_n) is a non-closed set ⇒ lack of stable approximation methods.
     Example. f(x) = x_1 + ... + x_d: rank_Can(f) = d, but f is approximated by rank-2 elements,
       f(x) = lim_{ε→0} ( ∏_{ℓ=1}^{d} (1 + ε x_ℓ) − 1 ) / ε.
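The example above can be reproduced numerically: on a grid, f(x) = x_1 + ... + x_d is a canonical rank-d tensor, yet it is approximated arbitrarily well by the difference of two rank-1 terms whose magnitudes blow up like 1/ε, which is exactly the instability caused by the non-closedness of C_R. A small numpy sketch, not from the slides (grid and parameters are illustrative assumptions):

```python
import numpy as np
from functools import reduce

d, n = 6, 8
x = np.linspace(0.0, 1.0, n)                 # the same 1D grid in every direction
ones = np.ones(n)
outer_all = lambda vecs: reduce(np.multiply.outer, vecs)

# Exact tensor of f: canonical rank-d representation (d rank-1 terms).
F = sum(outer_all([x if j == k else ones for j in range(d)]) for k in range(d))

# Rank-2 approximation f ~ ( prod_l (1 + eps*x_l) - 1 ) / eps for decreasing eps;
# the two rank-1 terms grow like 1/eps while their difference stays close to F.
for eps in (1e-1, 1e-3, 1e-5):
    term1 = outer_all([1.0 + eps * x for _ in range(d)]) / eps
    term2 = outer_all([ones] * d) / eps
    err = np.max(np.abs((term1 - term2) - F))
    print(f"eps = {eps:.0e}: max error {err:.1e}, term size ~ {1.0/eps:.0e}")
```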

  7. Orthogonal Tucker model.
     Def. Rank r = [r_1, ..., r_d] Tucker tensors: V ∈ T_r(V_n) if [Tucker '66]
       V = ∑_{k_1,...,k_d=1}^{r} b_{k_1...k_d} v^{(1)}_{k_1} ⊗ ... ⊗ v^{(d)}_{k_d} ∈ T_1 ⊗ ... ⊗ T_d,  T_ℓ = span{v^{(ℓ)}_{k_ℓ}}_{k_ℓ=1}^{r_ℓ} ⊂ R^{n_ℓ}.
     ◮ d = 2: SVD of a rank-r matrix, A = U D V^T, U ∈ R^{n×r}, D ∈ R^{r×r} [Schmidt 1905].
     ◮ Storage: drN + r^d, r = max r_ℓ ≪ N (efficient for d = 3, e.g. the Hartree-Fock equation).
     Beginning of tensor numerical methods: Tucker approximation of 3D functions (e.g. f = e^{−r}, 1/r) [BNK, Khoromskaia '07].
     (Figure: Tucker decomposition diagram and the approximation error vs. Tucker rank for a Slater function, AR = 10, n = 64; the error curves E_FN, E_FE, E_C decay rapidly with the rank.)
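A compact HOSVD-style illustration of the orthogonal Tucker format for d = 3, not from the slides: the factor bases are taken from truncated SVDs of the three mode unfoldings and the core is obtained by orthogonal projection. The Slater-type test function, grid, and ranks are illustrative assumptions, chosen to mimic the error-vs-rank behaviour shown on the slide.

```python
import numpy as np

def unfold(T, mode):
    """Mode-k matricization of a tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Orthogonal Tucker factors from truncated SVDs of the unfoldings; core by projection."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])                 # orthonormal basis of T_l
    B = T
    for U in factors:                            # core B = T x_1 U1^T x_2 U2^T x_3 U3^T
        B = np.tensordot(B, U, axes=(0, 0))      # contract current leading mode, result axis goes last
    return B, factors

def tucker_to_full(B, factors):
    T = B
    for U in factors:                            # T = B x_1 U1 x_2 U2 x_3 U3
        T = np.tensordot(T, U, axes=(0, 1))
    return T

# Sampled Slater-type function exp(-|x|) on a 64^3 grid: the Tucker rank needed
# for a given accuracy stays small, as the slide's error plot indicates.
n = 64
g = np.linspace(-5, 5, n)
X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
T = np.exp(-np.sqrt(X**2 + Y**2 + Z**2))

for r in (2, 4, 8):
    B, Us = hosvd(T, [r, r, r])
    err = np.linalg.norm(T - tucker_to_full(B, Us)) / np.linalg.norm(T)
    print(f"Tucker rank {r}: relative error {err:.1e}")
```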

  8. Matrix Product States (MPS) factorization.
     In quantum physics/information: matrix product states (MPS) and tree tensor network states (TNS) of slightly entangled systems, matrix product operators (MPO), DMRG optimization [White '92; ...; Östlund, Rommer '95; ...; Cirac, Verstraete '06, ...].
     Re-invented in numerical multilinear algebra: hierarchical dimension splitting with O(dr log d N) storage [BNK '06]; Hierarchical Tucker (HT) ≡ TNS [Hackbusch, Kühn '09]; Tensor Train (TT) ≡ MPS (open boundary conditions) [Oseledets, Tyrtyshnikov '09].
     Def. Tensor Train (MPS): given r = (r_0, r_1, ..., r_d) with r_0 = r_d = 1, V ∈ TT[r] ⊂ V_n is parametrized by a contracted product of tri-tensors G^{(ℓ)} ∈ R^{r_{ℓ−1} × n_ℓ × r_ℓ}:
       V[i_1 ... i_d] = ∑_α G^{(1)}_{α_1}[i_1] G^{(2)}_{α_1 α_2}[i_2] ··· G^{(d)}_{α_{d−1}}[i_d] ≡ G^{(1)}[i_1] G^{(2)}[i_2] ... G^{(d)}[i_d],
       where G^{(ℓ)}[i_ℓ] is an r_{ℓ−1} × r_ℓ matrix, 1 ≤ i_ℓ ≤ n_ℓ.
     Example. f(x) = x_1 + ... + x_d, rank_TT(f) = 2:
       f = (x_1  1) [1 0; x_2 1] ··· [1 0; x_{d−1} 1] (1; x_d)   (matrix rows separated by semicolons).
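The rank-2 TT representation of f(x) = x_1 + ... + x_d in the example translates directly into three-way cores, and any entry of the tensor is then a product of small matrices. A minimal numpy sketch, not from the slides; d, n, the grid and the sample index are illustrative assumptions.

```python
import numpy as np

# Explicit rank-2 TT (MPS) cores for f(x) = x_1 + ... + x_d on a grid,
# following the slide: V[i_1,...,i_d] = G1[i_1] G2[i_2] ... Gd[i_d], r_0 = r_d = 1.
d, n = 8, 32
x = np.linspace(0.0, 1.0, n)                       # the same grid in every variable

cores = []
G1 = np.zeros((1, n, 2)); G1[0, :, 0] = x; G1[0, :, 1] = 1.0
cores.append(G1)                                   # row vector (x_1, 1)
for _ in range(d - 2):
    G = np.zeros((2, n, 2))
    G[0, :, 0] = 1.0; G[1, :, 0] = x; G[1, :, 1] = 1.0
    cores.append(G)                                # matrix [1 0; x_l 1]
Gd = np.zeros((2, n, 1)); Gd[0, :, 0] = 1.0; Gd[1, :, 0] = x
cores.append(Gd)                                   # column vector (1; x_d)

def tt_entry(cores, idx):
    """Evaluate one tensor entry as a product of r_{l-1} x r_l matrices."""
    M = cores[0][:, idx[0], :]
    for G, i in zip(cores[1:], idx[1:]):
        M = M @ G[:, i, :]
    return M[0, 0]

idx = (3, 17, 0, 31, 8, 8, 20, 5)
print(tt_entry(cores, idx), sum(x[i] for i in idx))   # the two values agree
```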

  9. Benefits and limitations of the TT format.
     (Figure: TT network for d = 5, a chain of cores with mode sizes n_1, ..., n_5 and connecting ranks r_1, ..., r_4.)
     ◮ Advantages: storage dr^2 N ≪ N^d, N = max n_k; efficient and robust multilinear algebra with polynomial scaling in r and linear scaling in d; can be implemented by stable QR/SVD algorithms.
     ◮ Limitations: strong entanglement in a system, large mode size N. Multilinear matrix-vector algebra and DMRG iterations cost O(dRr^3 N^2); for R, r ∼ 10^2, N ∼ 10^3–10^4 the local problems become intractable.
     ◮ Rank bounds: r_TT ≤ R_Can, r_Tuck ≤ r_TT^2, r_Tuck ≤ R_Can.
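To illustrate the "stable QR/SVD algorithms" point, here is a compact TT-SVD-style compression sketch in plain numpy, not from the slides: successive truncated SVDs of matricizations produce the TT cores. The test function sin(x_1 + ... + x_4), the grid and the tolerance are illustrative assumptions; its exact TT ranks are 2.

```python
import numpy as np

def tt_svd(T, tol=1e-10):
    """Compress a full tensor into TT cores by successive truncated SVDs
    (a textbook TT-SVD sketch, not tuned for performance)."""
    shape = T.shape
    d = len(shape)
    cores, r_prev = [], 1
    M = T.reshape(r_prev * shape[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = max(1, int(np.sum(s > tol * s[0])))          # truncated TT rank r_k
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        M = (s[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, shape[d - 1], 1))
    return cores

# Compress f(x) = sin(x_1 + ... + x_4) sampled on a 20^4 grid.
n, d = 20, 4
x = np.linspace(0, np.pi, n)
grids = np.meshgrid(*([x] * d), indexing="ij")
T = np.sin(sum(grids))
cores = tt_svd(T)
print([G.shape for G in cores])   # TT ranks (1,2,2,2,1); storage ~ d*r^2*n vs n^d
```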

  10. Tensor network formats without loops.
     (Figure: tree tensor network diagrams of four formats — Tucker (a core tensor connected to factor matrices U^{(1)}, ..., U^{(d)}), TNS/HT (a binary tree of transfer tensors G^{(12)}, G^{(34)}, G^{(1234)}, ...), MPS/TT (a linear chain of cores G^{(1)}, ..., G^{(d)}), and QTT-Tucker (a TT chain whose Tucker factors U^{(ℓ)} are themselves quantized into chains U^{(ℓ,1)}, ..., U^{(ℓ,L)}). The canonical format corresponds to Tucker with a diagonal core, i.e. the indices γ_k tied to a single summation index.)
