Proper Orthogonal Decomposition: Theory and Reduced-Order Modeling

Stefan Volkwein
with M. Gubisch, O. Lass (U. Konstanz), M. Hinze (U. Hamburg), K. Kunisch (U. Graz)
University of Konstanz, Department of Mathematics and Statistics, Numerics & Optimization Group

Summer School on Reduced-Basis Methods, München, 16 September 2013

Introduction / Motivation
Motivation for our Research Areas [Grimm, Gubisch, Iapichino, Lass, Mancini, Trenz, V., Wesche]

Problem: time-variant, nonlinear, parametrized PDE systems
Efficient and reliable numerical simulation in multi-query cases
→ finite element or finite volume discretizations too complex

Multi-query examples:
- fast simulation for different parameters on small computers
- parameter estimation, optimal design and feedback control
→ usage of a reduced-order SURROGATE MODEL

Time-variant, nonlinear coupled PDEs → methods from linear system theory not directly applicable
Nonlinear model-order reduction → proper orthogonal decomposition and reduced-basis method
Error control for reduced-order model → new a-priori and a-posteriori error analysis

(PDE: Partial Differential Equation)

Introduction / Singular Value Decomposition (SVD)

Given vectors: $y_1, \dots, y_n \in \mathbb{R}^m$
Data matrix: $Y = [\,y_1, \dots, y_n\,] \in \mathbb{R}^{m \times n}$

Singular value decomposition: $U \in \mathbb{R}^{m \times m}$, $V \in \mathbb{R}^{n \times n}$ orthogonal with
$$U^\top Y V = \begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix} = \Sigma \in \mathbb{R}^{m \times n}, \qquad D = \mathrm{diag}(\sigma_1, \dots, \sigma_d) \in \mathbb{R}^{d \times d}$$

Singular values: $\sigma_1 \ge \dots \ge \sigma_d > 0$, $\operatorname{rank} Y = d$

Frobenius norm:
$$\|Y\|_F = \Bigl( \sum_{i=1}^{m} \sum_{j=1}^{n} Y_{ij}^2 \Bigr)^{1/2} \quad \text{for } Y \in \mathbb{R}^{m \times n}$$

Approximation quality:
$$\|Y - Y^\ell\|_F^2 = \sum_{i=\ell+1}^{d} \sigma_i^2 \quad \text{with } Y^\ell = U \begin{pmatrix} D^\ell & 0 \\ 0 & 0 \end{pmatrix} V^\top \text{ and } D^\ell = \mathrm{diag}(\sigma_1, \dots, \sigma_\ell)$$

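The truncation error formula above is easy to check numerically. The following is a minimal sketch in Python/NumPy (not part of the slides; the data matrix is random, so the singular values themselves are illustrative only):

```python
import numpy as np

m, n, ell = 100, 40, 5
Y = np.random.default_rng(0).standard_normal((m, n))

# Full SVD: Y = U @ diag(sigma) @ Vt with sigma_1 >= sigma_2 >= ...
U, sigma, Vt = np.linalg.svd(Y, full_matrices=False)

# Rank-ell truncation Y^ell = U_ell diag(sigma_1,...,sigma_ell) V_ell^T
Y_ell = U[:, :ell] @ np.diag(sigma[:ell]) @ Vt[:ell, :]

lhs = np.linalg.norm(Y - Y_ell, 'fro') ** 2   # ||Y - Y^ell||_F^2
rhs = np.sum(sigma[ell:] ** 2)                # sum of the discarded sigma_i^2
print(lhs, rhs)                               # agree up to round-off
```
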
Introduction / Singular Value Decomposition and Images

Approximation $\|Y - Y^\ell\|_F^2 = \sum_{i=\ell+1}^{d} \sigma_i^2$ for a given photo

[Figure: rank-$\ell$ reconstructions of a photo; panel captions (translated from German): 0.5% of the matrix basis → 45% information, 1% → 56%, 5% → 76%, 10% → 85%, 20% → 92%, original image]

Outline of Lecture 1

- The method of Proper Orthogonal Decomposition (POD)
- Reduced-order modeling utilizing the POD method

The POD Method
The Method of Proper Orthogonal Decomposition (POD)

Topics:
- Definition of a (discrete variant of the) POD basis
- Efficient computation of a POD basis
- POD for dynamical systems
- A continuous variant of the POD basis and asymptotic analysis

The POD Method / Discrete Variant of the POD Method
POD as a Minimization Problem

Given multiple snapshots: $\{y_j^k\}_{j=1}^{n} \subset X$, $1 \le k \le \wp$, with a (real) Hilbert space $X$

Snapshot subspace:
$$V = \operatorname{span}\bigl\{\, y_j^k \;\big|\; 1 \le j \le n \text{ and } 1 \le k \le \wp \,\bigr\} \subset X$$
with dimension $d \in \{1, \dots, \min(n\wp, \dim X)\}$

Proper Orthogonal Decomposition (POD): for any $\ell \in \{1, \dots, d\}$ solve
$$(\mathbf{P}^\ell) \quad \min \sum_{k=1}^{\wp} \sum_{j=1}^{n} \alpha_j \Bigl\| y_j^k - \sum_{i=1}^{\ell} \langle y_j^k, \psi_i \rangle_X\, \psi_i \Bigr\|_X^2 \quad \text{s.t.} \quad \{\psi_i\}_{i=1}^{\ell} \subset X \text{ and } \langle \psi_i, \psi_j \rangle_X = \delta_{ij},\ 1 \le i, j \le \ell$$
with positive weights $\alpha_j$

Optimal solution to $(\mathbf{P}^\ell)$: POD basis $\{\bar\psi_i\}_{i=1}^{\ell}$ of rank $\ell$

Orthogonal projection: define $\mathcal{P}^\ell : X \to V^\ell = \operatorname{span}\{\bar\psi_1, \dots, \bar\psi_\ell\} \subset V$ by
$$\mathcal{P}^\ell \psi = \sum_{i=1}^{\ell} \langle \psi, \bar\psi_i \rangle_X\, \bar\psi_i \quad \text{for } \psi \in X$$
$$\Rightarrow \quad \sum_{k=1}^{\wp} \sum_{j=1}^{n} \alpha_j \Bigl\| y_j^k - \sum_{i=1}^{\ell} \langle y_j^k, \bar\psi_i \rangle_X\, \bar\psi_i \Bigr\|_X^2 = \sum_{k=1}^{\wp} \sum_{j=1}^{n} \alpha_j \bigl\| y_j^k - \mathcal{P}^\ell y_j^k \bigr\|_X^2$$

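To make the minimization problem concrete, here is a minimal sketch in NumPy. It assumes $X = \mathbb{R}^m$ with the Euclidean inner product and a single snapshot group ($\wp = 1$); the weighted projection error in $(\mathbf{P}^\ell)$ is evaluated for a random orthonormal basis and for the POD basis (obtained here, for illustration, from the left singular vectors of the column-weighted snapshot matrix), and the POD basis gives the smaller value:

```python
import numpy as np

def pod_objective(Y, alpha, Psi):
    """Weighted projection error sum_j alpha_j * ||y_j - P^ell y_j||^2."""
    residual = Y - Psi @ (Psi.T @ Y)
    return float(np.sum(alpha * np.sum(residual ** 2, axis=0)))

m, n, ell = 80, 30, 4
rng = np.random.default_rng(3)
Y = rng.standard_normal((m, 6)) @ rng.standard_normal((6, n))  # snapshots as columns
alpha = np.full(n, 1.0 / n)                                    # positive weights alpha_j

# Candidate 1: a random orthonormal set of ell vectors
Q, _ = np.linalg.qr(rng.standard_normal((m, ell)))
# Candidate 2: POD basis = leading left singular vectors of Y * diag(sqrt(alpha))
U, _, _ = np.linalg.svd(Y * np.sqrt(alpha), full_matrices=False)
Psi_pod = U[:, :ell]

print(pod_objective(Y, alpha, Q))        # larger value
print(pod_objective(Y, alpha, Psi_pod))  # smaller value: POD minimizes (P^ell)
```
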
The POD Method / Discrete Variant of the POD Method
Equivalent POD Formulation

POD as a minimization problem: for any $\ell \in \{1, \dots, d\}$ solve
$$(\mathbf{P}^\ell) \quad \min \sum_{k=1}^{\wp} \sum_{j=1}^{n} \alpha_j \Bigl\| y_j^k - \sum_{i=1}^{\ell} \langle y_j^k, \psi_i \rangle_X\, \psi_i \Bigr\|_X^2 \quad \text{s.t.} \quad \{\psi_i\}_{i=1}^{\ell} \subset X \text{ and } \langle \psi_i, \psi_j \rangle_X = \delta_{ij},\ 1 \le i, j \le \ell$$

Orthonormal basis elements: for $1 \le j \le n$ and $1 \le k \le \wp$ we have
$$\Bigl\| y_j^k - \sum_{i=1}^{\ell} \langle y_j^k, \psi_i \rangle_X\, \psi_i \Bigr\|_X^2 = \bigl\| y_j^k \bigr\|_X^2 - \sum_{i=1}^{\ell} \bigl| \langle y_j^k, \psi_i \rangle_X \bigr|^2$$

POD as a maximization problem: for $\ell \in \{1, \dots, d\}$ solve
$$(\hat{\mathbf{P}}^\ell) \quad \max \sum_{k=1}^{\wp} \sum_{j=1}^{n} \alpha_j \sum_{i=1}^{\ell} \bigl| \langle y_j^k, \psi_i \rangle_X \bigr|^2 \quad \text{s.t.} \quad \{\psi_i\}_{i=1}^{\ell} \subset X \text{ and } \langle \psi_i, \psi_j \rangle_X = \delta_{ij},\ 1 \le i, j \le \ell$$
$\Rightarrow$ maximize the first $\ell$ Fourier coefficients $\sum_{i=1}^{\ell} \sum_{j=1}^{n} \alpha_j \bigl| \langle y_j^k, \psi_i \rangle_X \bigr|^2$ on average over all $k$

Lagrange functional for $(\hat{\mathbf{P}}^\ell)$: for $\Psi = (\psi_1, \dots, \psi_\ell) \in X^\ell$ and $\Lambda = ((\lambda_{ij})) \in \mathbb{R}^{\ell \times \ell}$ define
$$L(\Psi, \Lambda) = \sum_{k=1}^{\wp} \sum_{j=1}^{n} \alpha_j \sum_{i=1}^{\ell} \bigl| \langle y_j^k, \psi_i \rangle_X \bigr|^2 + \sum_{i=1}^{\ell} \sum_{j=1}^{\ell} \lambda_{ij} \bigl( \delta_{ij} - \langle \psi_i, \psi_j \rangle_X \bigr)$$

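The identity that links the two formulations is just Pythagoras for an orthonormal set. A minimal sketch (assumption: $X = \mathbb{R}^m$ with the Euclidean inner product, random data) checking it numerically:

```python
import numpy as np

m, ell = 50, 4
rng = np.random.default_rng(0)
y = rng.standard_normal(m)

# Orthonormal vectors psi_1,...,psi_ell via a QR factorization
Psi, _ = np.linalg.qr(rng.standard_normal((m, ell)))

coeffs = Psi.T @ y                                  # Fourier coefficients <y, psi_i>
lhs = np.linalg.norm(y - Psi @ coeffs) ** 2         # ||y - sum_i <y,psi_i> psi_i||^2
rhs = np.linalg.norm(y) ** 2 - np.sum(coeffs ** 2)  # ||y||^2 - sum_i <y,psi_i>^2
print(lhs, rhs)  # agree up to round-off, so minimizing (P^ell) is maximizing (P^ell-hat)
```
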
The POD Method / Discrete Variant of the POD Method
Lagrangian Framework in (Infinite-Dimensional) Optimization

POD as a maximization problem: for any $\ell \in \{1, \dots, d\}$ solve
$$(\hat{\mathbf{P}}^\ell) \quad \max \sum_{k=1}^{\wp} \sum_{j=1}^{n} \alpha_j \sum_{i=1}^{\ell} \bigl| \langle y_j^k, \psi_i \rangle_X \bigr|^2 \quad \text{s.t.} \quad \{\psi_i\}_{i=1}^{\ell} \subset X \text{ and } \langle \psi_i, \psi_j \rangle_X = \delta_{ij},\ 1 \le i, j \le \ell$$

Lagrange functional for $(\hat{\mathbf{P}}^\ell)$: for $\Psi = (\psi_1, \dots, \psi_\ell) \in X^\ell$ and $\Lambda = ((\lambda_{ij})) \in \mathbb{R}^{\ell \times \ell}$ define
$$L(\Psi, \Lambda) = \sum_{k=1}^{\wp} \sum_{j=1}^{n} \alpha_j \sum_{i=1}^{\ell} \bigl| \langle y_j^k, \psi_i \rangle_X \bigr|^2 + \sum_{i=1}^{\ell} \sum_{j=1}^{\ell} \lambda_{ij} \bigl( \delta_{ij} - \langle \psi_i, \psi_j \rangle_X \bigr)$$

Necessary optimality conditions: let $\bar\Psi = (\bar\psi_1, \dots, \bar\psi_\ell)$ denote a solution to $(\hat{\mathbf{P}}^\ell)$

Constraint qualification condition: there is a Lagrange multiplier $\bar\Lambda = ((\bar\lambda_{ij}))$ with
$$\frac{\partial L}{\partial \psi_i}(\bar\Psi, \bar\Lambda) = 0 \ \text{in } X \ \text{for } 1 \le i \le \ell \qquad \text{and} \qquad \frac{\partial L}{\partial \lambda_{ij}}(\bar\Psi, \bar\Lambda) = 0 \ \text{in } \mathbb{R} \ \text{for } 1 \le i, j \le \ell$$
$\Rightarrow$ first-order necessary optimality conditions for $(\hat{\mathbf{P}}^\ell)$

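To connect these abstract conditions with the eigenvalue problem on the next slide, here is a brief sketch of the derivative with respect to $\psi_i$ (not spelled out on the slides). For a direction $\delta \in X$,
$$\frac{\partial L}{\partial \psi_i}(\Psi, \Lambda)\,\delta = 2 \sum_{k=1}^{\wp} \sum_{j=1}^{n} \alpha_j \langle y_j^k, \psi_i \rangle_X \langle y_j^k, \delta \rangle_X - \sum_{j=1}^{\ell} (\lambda_{ij} + \lambda_{ji}) \langle \psi_j, \delta \rangle_X .$$
Requiring this to vanish for all $\delta$ gives $2\mathcal{R}\psi_i = \sum_{j=1}^{\ell} (\lambda_{ij} + \lambda_{ji})\,\psi_j$ with the operator $\mathcal{R}$ introduced on the next slide, while $\partial L/\partial\lambda_{ij} = 0$ recovers the orthonormality constraints. A standard argument (the objective and the constraints are invariant under orthogonal changes of basis of $\operatorname{span}\{\psi_1, \dots, \psi_\ell\}$, so the symmetric multiplier matrix may be taken diagonal) then reduces this to the eigenvalue problem $\mathcal{R}\bar\psi_i = \bar\lambda_i \bar\psi_i$.
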
The POD Method / Discrete Variant of the POD Method
First-Order Necessary Optimality Conditions

POD as a maximization problem: for any $\ell \in \{1, \dots, d\}$ solve
$$(\hat{\mathbf{P}}^\ell) \quad \max \sum_{k=1}^{\wp} \sum_{j=1}^{n} \alpha_j \sum_{i=1}^{\ell} \bigl| \langle y_j^k, \psi_i \rangle_X \bigr|^2 \quad \text{s.t.} \quad \{\psi_i\}_{i=1}^{\ell} \subset X \text{ and } \langle \psi_i, \psi_j \rangle_X = \delta_{ij},\ 1 \le i, j \le \ell$$

First-order necessary optimality conditions: $\bar\Psi = (\bar\psi_1, \dots, \bar\psi_\ell)$ and $\bar\Lambda = ((\bar\lambda_{ij}))$ satisfy
$$\frac{\partial L}{\partial \psi_i}(\bar\Psi, \bar\Lambda) = 0 \ \text{in } X \ \text{for } 1 \le i \le \ell \qquad \text{and} \qquad \langle \psi_i, \psi_j \rangle_X = \delta_{ij} \ \text{for } 1 \le i, j \le \ell$$

Summation operator: define $\mathcal{R} : X \to X$ by
$$\mathcal{R}\psi = \sum_{k=1}^{\wp} \sum_{j=1}^{n} \alpha_j \langle \psi, y_j^k \rangle_X\, y_j^k \quad \text{for } \psi \in X$$

Theorem ($X$ separable Hilbert space):
a) $\mathcal{R}$ is linear, compact, selfadjoint and nonnegative
b) there are eigenfunctions $\{\bar\psi_i\}_{i \in I}$ and eigenvalues $\{\bar\lambda_i\}_{i \in I}$ with
$$\mathcal{R}\bar\psi_i = \bar\lambda_i \bar\psi_i, \qquad \bar\lambda_1 \ge \bar\lambda_2 \ge \dots \ge \bar\lambda_d > \bar\lambda_{d+1} = \dots = 0$$
c) $\{\bar\psi_i\}_{i=1}^{\ell}$ solves $(\hat{\mathbf{P}}^\ell)$ and $(\mathbf{P}^\ell)$
d) $\displaystyle \sum_{k=1}^{\wp} \sum_{j=1}^{n} \alpha_j \sum_{i=1}^{\ell} \bigl| \langle y_j^k, \bar\psi_i \rangle_X \bigr|^2 = \sum_{i=1}^{\ell} \bar\lambda_i, \qquad \sum_{k=1}^{\wp} \sum_{j=1}^{n} \alpha_j \Bigl\| y_j^k - \sum_{i=1}^{\ell} \langle y_j^k, \bar\psi_i \rangle_X\, \bar\psi_i \Bigr\|_X^2 = \sum_{i > \ell} \bar\lambda_i$

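A minimal sketch of the theorem in the finite-dimensional setting (assumption: $X = \mathbb{R}^m$ with the Euclidean inner product and $\wp = 1$, random snapshots): the operator $\mathcal{R}$ becomes the symmetric positive semidefinite matrix $R = \sum_j \alpha_j\, y_j y_j^\top = Y\,\mathrm{diag}(\alpha)\,Y^\top$, the POD basis consists of its leading eigenvectors, and part d) can be checked directly:

```python
import numpy as np

m, n, ell = 60, 20, 3
rng = np.random.default_rng(1)
Y = rng.standard_normal((m, n))          # snapshots y_1,...,y_n as columns
alpha = np.full(n, 1.0 / n)              # positive weights alpha_j

R = Y @ np.diag(alpha) @ Y.T             # symmetric positive semidefinite
lam, Psi = np.linalg.eigh(R)             # eigenvalues in ascending order
lam, Psi = lam[::-1], Psi[:, ::-1]       # reorder: lambda_1 >= lambda_2 >= ...

# Part d) of the theorem: captured energy and projection error
Psi_ell = Psi[:, :ell]
captured = np.sum(alpha * np.sum((Psi_ell.T @ Y) ** 2, axis=0))
proj_err = np.sum(alpha * np.sum((Y - Psi_ell @ (Psi_ell.T @ Y)) ** 2, axis=0))
print(captured, np.sum(lam[:ell]))       # equal up to round-off
print(proj_err, np.sum(lam[ell:]))       # equal up to round-off
```
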
The POD Method / Discrete Variant of the POD Method
POD Basis Computation

POD: for any $\ell \in \{1, \dots, d\}$ solve
$$(\mathbf{P}^\ell) \quad \min \sum_{k=1}^{\wp} \sum_{j=1}^{n} \alpha_j \Bigl\| y_j^k - \sum_{i=1}^{\ell} \langle y_j^k, \psi_i \rangle_X\, \psi_i \Bigr\|_X^2 \quad \text{s.t.} \quad \{\psi_i\}_{i=1}^{\ell} \subset X \text{ and } \langle \psi_i, \psi_j \rangle_X = \delta_{ij},\ 1 \le i, j \le \ell$$

Eigenvalue problem:
$$\mathcal{R}\bar\psi_i = \sum_{k=1}^{\wp} \sum_{j=1}^{n} \alpha_j \langle \bar\psi_i, y_j^k \rangle_X\, y_j^k = \bar\lambda_i \bar\psi_i \quad \text{for } 1 \le i \le \ell, \qquad \bar\lambda_1 \ge \bar\lambda_2 \ge \dots \ge \bar\lambda_\ell > 0$$

Approximation quality: the POD basis $\{\bar\psi_i\}_{i=1}^{\ell}$ of rank $\ell$ satisfies
$$\sum_{k=1}^{\wp} \sum_{j=1}^{n} \alpha_j \Bigl\| y_j^k - \sum_{i=1}^{\ell} \langle y_j^k, \bar\psi_i \rangle_X\, \bar\psi_i \Bigr\|_X^2 = \sum_{i > \ell} \bar\lambda_i$$
$\Rightarrow$ for rapidly decaying $\bar\lambda_i$'s, good approximation quality even for small $\ell \ll d = \dim V$

In practical computations: heuristic choice of $\ell$ by requiring
$$E(\ell) = \frac{\sum_{i=1}^{\ell} \bar\lambda_i}{\sum_{i \in I} \bar\lambda_i} = \frac{\sum_{i=1}^{\ell} \bar\lambda_i}{\sum_{k=1}^{\wp} \sum_{j=1}^{n} \alpha_j \bigl\| y_j^k \bigr\|_X^2} \approx 99\,\%$$
$\Rightarrow$ the tail eigenvalues $\{\bar\lambda_i\}_{i > \ell}$ are not required for the computation of $E(\ell)$

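A minimal sketch of the energy criterion in the same $\mathbb{R}^m$ setting as above (random, low-rank snapshots so that few modes suffice): the denominator of $E(\ell)$ is the weighted snapshot energy, which is available directly from the data, so in practice only the leading eigenvalues are needed (here, for simplicity, all of them are computed):

```python
import numpy as np

m, n = 200, 50
rng = np.random.default_rng(2)
# Snapshots with quickly decaying content so that a small ell already suffices
Y = (rng.standard_normal((m, 5)) * [1.0, 0.3, 0.1, 0.03, 0.01]) @ rng.standard_normal((5, n))
alpha = np.full(n, 1.0 / n)

total_energy = np.sum(alpha * np.sum(Y ** 2, axis=0))      # denominator of E(ell)
lam = np.linalg.eigvalsh(Y @ np.diag(alpha) @ Y.T)[::-1]   # lambda_1 >= lambda_2 >= ...

E = np.cumsum(lam) / total_energy          # E(1), E(2), ...
ell = int(np.argmax(E >= 0.99) + 1)        # smallest ell with E(ell) >= 99 %
print(ell, E[ell - 1])
```
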