Tensor numerical approach to space-time approximation of multi-dimensional parabolic equations

Boris N. Khoromskij
Max-Planck-Institute for Mathematics in the Sciences, Leipzig, Germany
Linz, RICAM, Research Semester, WS2, 10.11.2016

Tensor methods vs. the "curse of dimensionality": $N^d \to dN \to d \log N$?

PDE-based applications in higher dimensions:
◮ Elliptic (parameter-dependent) boundary-value/spectral problems: find $u \in H^1_0(\Omega)$, $\Omega \subset \mathbb{R}^d$, such that
  $\mathcal{H} u := -\mathrm{div}(a\, \mathrm{grad}\, u + u\, v) + V u = F \;(= \lambda u)$ in $\Omega$.
◮ Parabolic-type dynamics: find $u : \mathbb{R}^d \times (0, T) \to \mathbb{R}$, $u(x, 0) \in H^2(\mathbb{R}^d)$, such that
  $\sigma\, \partial u / \partial t + \mathcal{H} u = F$, $\sigma \in \{1, i\}$.

Computational challenges: multi-dimensionality, multi-parametric problems, huge grids in $\mathbb{R}^d$ and in $t \in [0, T]$, multi-dimensional integral transforms (convolution), high oscillations in coefficients, redundancy in data representation.

Big Data ≇ Big Knowledge! (curse of redundancy) ⇒ Knowledge = o(Data)?
Parallelization issues.
Separability concept in computational quantum chemistry: spectra/dynamics

1929, P.A.M. Dirac: "The fundamental laws necessary for the mathematical treatment of a large part of physics and the whole of chemistry are thus completely known, and the difficulty lies only in the fact that application of these laws leads to equations that are too complex to be solved."

1961, R. Bellman: "In view of all that we have said in the foregoing sections, the many obstacles we appear to have surmounted, what casts the pall over our victory celebration? It is the curse of dimensionality, a malediction that has plagued the scientist from earliest days."

◮ Low-rank nonlinear tensor approximation vs. information redundancy.

1927–1966, Hitchcock, Tucker: canonical (CP) and Tucker tensor formats in chemometrics.
1992, S. White: DMRG and Matrix Product States for quantum spin systems.
2003, Dirac–Frenkel molecular dynamics on the rank-1 tensor manifold.
Since 2006: tensor numerical methods for PDEs in $\mathbb{R}^d$.

Outline of the talk

1. Big-data compression via separation of variables: basic tensor formats
2. Range-separated tensor format: reduced model for many-particle interactions
3. Quantized-TT (QTT) tensor approximation of logarithmic complexity
4. $O(\log n)$ numerical quadratures by simple QTT algebraic operations
5. Elliptic PDEs with highly oscillating coefficients in log-complexity: tensor approximation vs. asymptotic homogenization
6. PDEs on complex geometry: low-rank approximation in iso-geometric analysis
7. $d$-dimensional dynamics: time stepping vs. equations in $2+1$, $3+1$, ..., $d+1$ dimensions
   – QTT integration for the retarded potential IE (wave equation)
   – Rank estimates
   – Heat equation, Fokker–Planck, chemical master equations
Separable representation in tensor-product Hilbert space

◮ Euclidean vector space $\mathbb{V}_n = \mathbb{R}^{n_1 \times \cdots \times n_d} = \bigotimes_{\ell=1}^{d} \mathbb{R}^{n_\ell}$, $\mathbf{n} = (n_1, \ldots, n_d)$,
$V = [v_{\mathbf{i}}] \in \mathbb{V}_n$: $\langle W, V \rangle = \sum_{\mathbf{i}} w_{\mathbf{i}} v_{\mathbf{i}}$, $\mathbf{i} = (i_1, \ldots, i_d)$, $i_\ell \in \{1, \ldots, n_\ell\}$.

Separable representations in $\mathbb{V}_n$ via rank-1 tensors:
$V = [v_{i_1 \ldots i_d}] = v^{(1)} \otimes \cdots \otimes v^{(d)} \in \mathbb{V}_n$, with $v_{i_1 \ldots i_d} = \prod_{\ell=1}^{d} v^{(\ell)}_{i_\ell}$.

◮ Storage: $\mathrm{Stor}(V) = \sum_{\ell=1}^{d} n_\ell \ll \dim \mathbb{V}_n = \prod_{\ell=1}^{d} n_\ell$ ($dn \ll n^d$).
◮ The scalar product reduces to univariate operations:
$\langle W, V \rangle = \langle w^{(1)} \otimes \cdots \otimes w^{(d)},\, v^{(1)} \otimes \cdots \otimes v^{(d)} \rangle = \prod_{\ell=1}^{d} \langle w^{(\ell)}, v^{(\ell)} \rangle_{V_\ell}$.

Fast multilinear algebra: the $d$-dimensional Hadamard product, contraction, convolution product, addition etc. all reduce to 1D operations!

Low-parametric separable representations: canonical, Tucker, MPS (TT) formats

◮ The most commonly used tensor formats extend rank-$R$ matrices:
$d = 2$: $V = \sum_{k=1}^{R} u_k v_k^T \equiv G_1 G_2^T \equiv U D V^T \in \mathbb{R}^{m \times n}$.
– Canonical (CP) tensors (multidimensional rank-$R$ representation) [Hitchcock '27].
– The orthogonal Tucker decomposition (multidimensional truncated SVD) [Tucker '66].
– Matrix product states (MPS) factorization [S. White '92 et al.] ⇒ Variants of MPS: Tensor train (TT) [Oseledets, Tyrtyshnikov '09]; Hierarchical Tucker (HT) [Hackbusch, Kühn '09].

Def. Canonical $R$-term representation in $\mathbb{V}_n$: $V \in \mathcal{C}_R(\mathbb{V}_n)$,
$V = \sum_{k=1}^{R} v_k^{(1)} \otimes \cdots \otimes v_k^{(d)}$, $v_k^{(\ell)} \in \mathbb{R}^{n_\ell}$.

◮ Advantages: storage $= dRN$, simple multilinear algebra; but CP approximation is hard.

Figure: CP decomposition of a 3-tensor $A$ as a sum of $R$ rank-1 terms $V^{(1)}_k \otimes V^{(2)}_k \otimes V^{(3)}_k$.
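The reduction of the scalar product to univariate operations can be checked numerically. The following sketch (a minimal numpy illustration, not from the talk; all variable names are mine) compares the full $O(n^d)$ inner product of two rank-1 tensors with the $O(dn)$ product of univariate scalar products:

```python
import numpy as np

# Rank-1 (separable) tensors in V_n for d = 3: only the factor vectors are stored.
n = (4, 5, 6)
rng = np.random.default_rng(0)
v = [rng.standard_normal(n_l) for n_l in n]   # factors of V
w = [rng.standard_normal(n_l) for n_l in n]   # factors of W

# Full tensors, formed here only for verification (never in practice).
V = np.einsum('i,j,k->ijk', *v)
W = np.einsum('i,j,k->ijk', *w)

# <W, V> summed over all n1*n2*n3 entries ...
full = np.sum(W * V)
# ... equals the product of d univariate scalar products: O(dn) work.
separable = np.prod([w_l @ v_l for w_l, v_l in zip(w, v)])

assert np.isclose(full, separable)
```

The same pattern (apply a 1D operation factor-wise, then combine) underlies the fast Hadamard, convolution, and contraction operations mentioned above.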
Orthogonal Tucker decomposition

Def. Rank-$\mathbf{r} = (r_1, \ldots, r_d)$ Tucker tensors: $V \in \bigotimes_{\ell=1}^{d} T_\ell \subset \mathbb{V}_n$, $T_\ell = \mathrm{span}\{v^{(\ell)}_{k_\ell}\}_{k_\ell=1}^{r_\ell} \subset \mathbb{R}^{n_\ell}$, $V^{(\ell)} \in \mathbb{R}^{n_\ell \times r_\ell}$:
$V = \sum_{k_1, \ldots, k_d = 1}^{\mathbf{r}} b_{k_1 \ldots k_d}\, v^{(1)}_{k_1} \otimes \cdots \otimes v^{(d)}_{k_d} = B \times_1 V^{(1)} \times_2 \cdots \times_d V^{(d)}$.

◮ Storage: $drN + r^d$, $r = \max r_\ell \ll N = \max n_\ell$. For functional tensors $r = O(\log N)$.

Figure: Tucker decomposition diagram (left); error decay in the Tucker rank for the Slater function, AR = 10, $n = 64$ (right).

Basics of tensor numerical approximation [BNK, CMAM '06], [BNK, Khoromskaia '07]: low-rank decomposition of functional tensors, e.g. for $v(x) = e^{-\|x\|}$, $\frac{1}{\|x\|}$: $\|V - V_r\| \le e^{-cr}$.

Matrix Product States (Tensor Train) factorization

Def. MPS (TT) format: given $\mathbf{r} = (r_1, \ldots, r_d)$ with $r_d = 1$, $V = [v_{\mathbf{i}}] \in TT[\mathbf{r}] \subset \mathbb{V}_n$ is parametrized by a contracted product of tri-tensors $G^{(\ell)} \in \mathbb{R}^{r_{\ell-1} \times n_\ell \times r_\ell}$:
$v_{i_1 \ldots i_d} = \sum_{\alpha} G^{(1)}_{\alpha_1}[i_1]\, G^{(2)}_{\alpha_1 \alpha_2}[i_2] \cdots G^{(d)}_{\alpha_{d-1}}[i_d] \equiv G^{(1)}[i_1]\, G^{(2)}[i_2] \cdots G^{(d)}[i_d]$,
where $G^{(\ell)}[i_\ell]$ is an $r_{\ell-1} \times r_\ell$ matrix, $1 \le i_\ell \le n_\ell$. Storage: $d r^2 N \ll N^d$.

Figure: TT factorization of a 5-tensor $A = [a_{i_1 i_2 i_3 i_4 i_5}] \in \mathbb{R}^{n_1 \times \cdots \times n_5}$.

For $r = 1$ all formats coincide! 1D (univariate) function-related vectors $f = \{f(x_i)\} \in \mathbb{R}^N$ are non-compressible?
◮ Find a hidden low-rank tensor structure in large functional vectors and matrices!
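The TT entry formula above is just a product of small matrices. A minimal numpy sketch (my own illustration, with random cores and the convention $r_0 = r_d = 1$) evaluates one entry at $O(d r^2)$ cost and cross-checks it against the fully contracted tensor:

```python
import numpy as np

# TT (MPS) cores G^(l) of shape (r_{l-1}, n_l, r_l), with r_0 = r_d = 1.
d, n, r = 4, 5, 3
rng = np.random.default_rng(1)
ranks = [1] + [r] * (d - 1) + [1]
cores = [rng.standard_normal((ranks[l], n, ranks[l + 1])) for l in range(d)]

def tt_entry(cores, idx):
    """v[i1,...,id] = G1[i1] G2[i2] ... Gd[id]: a product of small matrices."""
    M = cores[0][:, idx[0], :]
    for G, i in zip(cores[1:], idx[1:]):
        M = M @ G[:, i, :]
    return M.item()           # final 1-by-1 matrix -> scalar

# Full tensor by contracting all cores (only feasible for tiny d, n).
full = cores[0]
for G in cores[1:]:
    full = np.tensordot(full, G, axes=([-1], [0]))
full = full.squeeze(axis=(0, -1))

idx = (2, 0, 4, 1)
assert np.isclose(tt_entry(cores, idx), full[idx])
```

The storage count $d r^2 N$ is visible here: each of the $d$ cores holds at most $r^2 N$ numbers, versus $N^d$ for the full array.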
Range-separated (RS) tensor format for approximating functions with multiple cusps

Figure: Villin protein (left); short- and long-range parts of the reference Newton kernel $1/\|x\|$ (right). [Benner, Khoromskaia, BNK '16]

◮ Long-range interaction potential in a large $N_0$-particle system with centers $x_\nu \in \mathbb{R}^3$, $\nu = 1, \ldots, N_0$, and a radial function $p(\|x\|)$ (say, $p(\|x\|) = 1/\|x\|$):
$P(x) = \sum_{\nu=1}^{N_0} z_\nu\, p(\|x - x_\nu\|)$, $z_\nu \in \mathbb{R}$, $x \in \Omega = [-b, b]^3$.

◮ The interaction energy of the system of $N$ charged particles:
$E_N = E_N(x_1, \ldots, x_N) = \frac{1}{2} \sum_{j=1}^{N} \sum_{k=1,\, k \ne j}^{N} \frac{z_j z_k}{\|x_j - x_k\|}$.  (1)

RS canonical/Tucker/TT formats: beneficial properties [Benner, Khoromskaia, BNK, arXiv:1606.09218, 2016]

Definition (RS-canonical (Tucker, TT) tensors). The RS-canonical tensor format defines the class of $d$-tensors $A \in \mathbb{R}^{n_1 \times \cdots \times n_d}$ represented as a sum of a rank-$R$ CP tensor and a cumulated CP tensor generated by localized replicas of $U_0$, with $\mathrm{rank}(U_\nu) = \mathrm{rank}(U_0) \le R_0$, $U_\nu = \mathrm{Replica}(U_0)$:
$A = \sum_{k=1}^{R} \xi_k\, u_k^{(1)} \otimes \cdots \otimes u_k^{(d)} + \sum_{\nu=1}^{N_0} c_\nu U_\nu$, with $\mathrm{diam}(\mathrm{supp}\, U_\nu) \le n_0 = O(1)$.

Theorem. The storage size of an RS-canonical tensor is bounded by
$\mathrm{Stor}(A) \le dRn + (d+1) N_0 + d R_0 n_0$.
Each entry of an RS-CP tensor can be calculated at $O(dR + d R_0 n_0)$ cost. For the $\varepsilon$-rank of the Tucker approximation to the long-range CP tensor $U$, $\mathbf{r}_0 = \mathrm{rank}_{Tuck}(U)$:
$|\mathbf{r}_0| \le C\, b\, \log^{3/2}(|\log(\varepsilon / N_0)|)$, $\mathrm{rank}_{Can}(U) \le |\mathbf{r}_0|^2$.
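For reference, the interaction energy (1) can be evaluated directly at $O(N^2)$ cost; the RS format targets exactly the grid-based representation whose cost this brute-force sum makes painful for large $N$. The sketch below (my own, with random particle data) computes (1) with an explicit double loop and cross-checks it against a vectorized version:

```python
import numpy as np

# Random configuration of N charged particles in Omega = [-1, 1]^3.
rng = np.random.default_rng(2)
N = 50
x = rng.uniform(-1.0, 1.0, size=(N, 3))   # particle centers x_k
z = rng.choice([-1.0, 1.0], size=N)       # charges z_k

def energy_direct(x, z):
    """E_N = (1/2) sum_{j} sum_{k != j} z_j z_k / ||x_j - x_k||, as in (1)."""
    E = 0.0
    N = len(z)
    for j in range(N):
        for k in range(N):
            if k != j:
                E += z[j] * z[k] / np.linalg.norm(x[j] - x[k])
    return 0.5 * E

# Vectorized cross-check: pairwise distance matrix, j = k terms excluded.
D = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
np.fill_diagonal(D, np.inf)
E_vec = 0.5 * np.sum(z[:, None] * z[None, :] / D)

assert np.isclose(energy_direct(x, z), E_vec)
```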