
Data-driven reduced model construction with the time-domain Loewner framework and operator inference. Benjamin Peherstorfer, Courant Institute of Mathematical Sciences, New York University. In collaboration with Serkan Gugercin and Karen Willcox.


  1. Data-driven reduced model construction with the time-domain Loewner framework and operator inference. Benjamin Peherstorfer, Courant Institute of Mathematical Sciences, New York University. In collaboration with Serkan Gugercin and Karen Willcox. 1 / 37

  2. Many-query applications: control, inference, optimization, visualization, model calibration, multi-discipline coupling, uncertainty quantification.

  3. Intro: Surrogate model types
  Surrogate models
  ◮ Approximate the QoI of the high-fidelity model
  ◮ Often at significantly reduced costs
  Simplified surrogate models
  ◮ Drop nonlinear terms, average
  ◮ Early-stopping schemes
  Data-fit surrogate models
  ◮ Learn the input-QoI map induced by the high-fidelity model
  ◮ SVMs, Gaussian processes, neural networks
  (Projection-based) reduced models
  ◮ Approximate the state x_k in a low-dimensional subspace
  ◮ Project E, A, B, C onto the low-dimensional space
  ◮ POD, interpolatory model reduction, RBM
  [Figure: states x_1, x_2, ..., x_K in R^N]


  5. Intro: Dynamical systems
  Consider the linear time-invariant (LTI) system
      Σ :  E x_{k+1} = A x_k + B u_k,   y_k = C x_k,   k ∈ N
  ◮ Time-discrete single-input single-output (SISO) LTI system
  ◮ System matrices E, A ∈ R^{N×N}, B ∈ R^{N×1}, C ∈ R^{1×N}
  ◮ Input u_k and output y_k at time step t_k, k ∈ N
  ◮ State x_k at time step t_k, k ∈ N
  ◮ Asymptotically stable
  Deriving QoIs from the outputs y_k and states x_k, k ∈ N
  ◮ The high-dimensional state x_k makes computing the QoI expensive
  ◮ Repeated QoI computations can become prohibitively expensive
  ◮ Uncertainty propagation, statistical inference, control, ...
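As a concrete illustration of time-stepping the system above, here is a minimal NumPy sketch; the helper name `simulate_lti` and the tiny stable example matrices are illustrative choices, not part of the slides.

```python
import numpy as np

def simulate_lti(E, A, B, C, u, x0=None):
    """Time-step the discrete-time SISO LTI system
        E x_{k+1} = A x_k + B u_k,   y_k = C x_k,
    returning outputs y_0..y_{K-1} and the state snapshots x_0..x_{K-1}."""
    N = A.shape[0]
    x = np.zeros(N) if x0 is None else x0.copy()
    ys, xs = [], []
    Einv = np.linalg.inv(E)  # fine for a small demo; factorize/solve for large N
    for uk in u:
        ys.append((C @ x).item())   # y_k = C x_k
        xs.append(x.copy())
        x = Einv @ (A @ x + B.ravel() * uk)  # E x_{k+1} = A x_k + B u_k
    return np.array(ys), np.column_stack(xs)

# Tiny asymptotically stable example (spectral radius of E^{-1}A < 1)
E = np.eye(2)
A = np.array([[0.5, 0.1], [0.0, 0.4]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
y, X = simulate_lti(E, A, B, C, u=np.ones(50))
```

For the constant input u_k = 1 the output settles at the steady state x* solving (E - A) x* = B, so y converges to the first component of x*.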

  6. Intro: Classical (intrusive) model reduction
  Construct an n-dimensional basis V = [v_1, ..., v_n] ∈ R^{N×n}
  ◮ Proper orthogonal decomposition (POD)
  ◮ Interpolatory model reduction
  ◮ Reduced basis method (RBM)
  ◮ ...
  Project the full-model operators E, A, B, C onto the reduced space:
      Ẽ = V^T E V ∈ R^{n×n},  Ã = V^T A V ∈ R^{n×n},  B̃ = V^T B ∈ R^{n×p},  C̃ = C V ∈ R^{q×n}
  Construct the reduced model
      Σ̃ :  Ẽ x̃_{k+1} = Ã x̃_k + B̃ u_k,   ỹ_k = C̃ x̃_k,   k ∈ N
  with n ≪ N and ‖y_k − ỹ_k‖ small in an appropriate norm.
  In general, the projection step requires the full-model operators E, A, B, C.
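The projection step above can be sketched in a few lines; the random stable full model and the QR-orthonormalized basis V are placeholder choices for illustration (in practice V would come from POD, interpolatory reduction, or RBM).

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 50, 4

# Invented full model: diagonal stable A so the demo is trivially well-posed
E = np.eye(N)
A = 0.9 * np.diag(rng.uniform(0.1, 1.0, N))
B = rng.standard_normal((N, 1))
C = rng.standard_normal((1, N))

# Placeholder n-dimensional orthonormal basis
V, _ = np.linalg.qr(rng.standard_normal((N, n)))

# Galerkin projection onto the reduced space, as on the slide
E_r = V.T @ E @ V   # n x n
A_r = V.T @ A @ V   # n x n
B_r = V.T @ B       # n x 1  (p = 1 here)
C_r = C @ V         # 1 x n  (q = 1 here)
```

Note that forming E_r, A_r, B_r, C_r requires multiplying by the full operators, which is exactly the intrusiveness the data-driven approach avoids.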


  8. Intro: Black-box models and model reduction
  The full model is often given as a black box
  ◮ Operators E, A, B, C unavailable
  ◮ Time-step the model Σ with inputs to obtain outputs and states:
      u = [u_1, u_2, ..., u_K]^T,   y = [y_1, y_2, ..., y_K]^T,   X = [x_1, x_2, ..., x_K]
  Goal: Learn a reduced model from the data u, y, and X
  ⇒ Learn reduced equations (in contrast to data-fit surrogates)
  ⇒ Discover the “dynamics” that govern the full model
  [Figure: input → black-box dynamical system (E x_{k+1} = A x_k + B u_k, y_k = C x_k) → output]
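The black-box setting can be mimicked with a stepping function whose internals are hidden from the caller; the toy system inside `black_box_step` below is an invented stand-in for the full model, and only the recorded trajectories u, y, X are available for learning.

```python
import numpy as np

def black_box_step(x, u):
    """One hidden time step: the caller never sees these matrices."""
    A = np.array([[0.6, 0.1, 0.0],
                  [0.0, 0.5, 0.1],
                  [0.0, 0.0, 0.4]])
    b = np.array([1.0, 0.5, 0.25])
    c = np.array([1.0, 0.0, 1.0])
    return A @ x + b * u, c @ x   # (next state, current output)

K = 30
u = np.random.default_rng(1).standard_normal(K)   # training input signal
x = np.zeros(3)
ys, xs = [], []
for k in range(K):
    xs.append(x)                       # record state snapshot x_k
    x, yk = black_box_step(x, u[k])    # advance the black box one step
    ys.append(yk)                      # record output y_k

y = np.array(ys)          # output trajectory
X = np.column_stack(xs)   # snapshot matrix [x_1 x_2 ... x_K]
```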


  11. Literature overview
  System identification
  ◮ System identification [Ljung, 1987], [Viberg, 1995], [Qin, 2006], ...
  ◮ Eigensystem realization [Kung, 1978], [Mendel, 1981], [Juang, 1985], [Kramer & Gugercin, 2016], ...
  ◮ Finite impulse response estimation [Rabiner et al., 1978], [Mendel, 1991], [Abed-Meraim, 1997, 2010], ...
  Learning in the frequency domain
  ◮ Loewner framework [Antoulas, Anderson, 1986], [Lefteriu, Antoulas, 2010], [Mayo, Antoulas, 2007], [Beattie, Gugercin, 2012], [Ionita, Antoulas, 2012], ...
  ◮ Vector fitting [Drmac, Gugercin, Beattie, 2015a], [Drmac, Gugercin, Beattie, 2015b], ...
  Learning from states
  ◮ Learning models with dynamic mode decomposition [Tu et al., 2013], [Proctor, Brunton, Kutz, 2016], [Brunton, Brunton, Proctor, Kutz, 2016], ...
  ◮ Learning models [Chung, Chung, 2014], [Xie, Mohebujjaman, Rebholz, Iliescu, 2017], ...
  ◮ Sparse identification [Brunton, Proctor, Kutz, 2016], ...
  Machine learning
  ◮ Gaussian process regression, support vector machines, ...

  12. Data-driven reduced model construction with time-domain Loewner models: inferring frequency-response data from time-domain data. Joint work with Serkan Gugercin and Karen Willcox.

  13. Loewner: Transfer function
  The transfer function of the LTI system Σ is
      H(z) = C (zE − A)^{−1} B,   z ∈ C
  Consider a reduced model Σ̃ with
      H̃(z) = C̃ (zẼ − Ã)^{−1} B̃,   z ∈ C
  Measure the error of the reduced transfer function H̃ as
      ‖H − H̃‖_{H∞} = sup_{|z|=1} |H(z) − H̃(z)|
  Relate it to the error in the quantity of interest:
      ‖y − ỹ‖_{ℓ2} ≤ ‖H − H̃‖_{H∞} ‖u‖_{ℓ2}
  If H̃ approximates H well, then ỹ approximates y well.
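A minimal sketch of evaluating H(z) and approximating the H∞-type error quantity by sampling the unit circle; the example system here is invented for illustration, and the sampled maximum is only a surrogate for the supremum.

```python
import numpy as np

def transfer_function(E, A, B, C, z):
    """Evaluate H(z) = C (zE - A)^{-1} B of a SISO LTI system."""
    return (C @ np.linalg.solve(z * E - A, B)).item()

# Toy stable system: H(z) = 1/(z - 0.5) + 1/(z - 0.3)
E = np.eye(2)
A = np.diag([0.5, 0.3])
B = np.ones((2, 1))
C = np.ones((1, 2))

# Sample |H(z)| on |z| = 1 as a surrogate for sup_{|z|=1} |H(z)|
zs = np.exp(1j * np.linspace(0.0, 2.0 * np.pi, 200))
h_inf = max(abs(transfer_function(E, A, B, C, z)) for z in zs)
```

For a reduced model one would sample |H(z) − H̃(z)| the same way to estimate the bound on ‖y − ỹ‖_{ℓ2}.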

  14. Loewner: Interpolatory model reduction
  Select m = 2n interpolation points z_1, ..., z_m ∈ C
  Construct the bases
      V = [(z_1 E − A)^{−1} B, ..., (z_n E − A)^{−1} B] ∈ C^{N×n}
      W = [(z_{n+1} E^T − A^T)^{−1} C^T, ..., (z_{n+n} E^T − A^T)^{−1} C^T] ∈ C^{N×n}
  Project (Petrov-Galerkin) to obtain the operators
      Ẽ = W^T E V,  Ã = W^T A V,  B̃ = W^T B,  C̃ = C V
  The reduced model Σ̃ with transfer function H̃ then satisfies
      H(z_i) = H̃(z_i),   i = 1, ..., m
  This requires the full-model operators E, A, B, C.
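The construction above can be sketched directly; the real-valued interpolation points and the diagonal toy system are illustrative choices, and the final loop checks the stated interpolation property H(z_i) = H̃(z_i) at all 2n points.

```python
import numpy as np

def interpolatory_bases(E, A, B, C, z_right, z_left):
    """Columns (z_i E - A)^{-1} B for V and (z_j E^T - A^T)^{-1} C^T for W."""
    V = np.column_stack([np.linalg.solve(z * E - A, B).ravel() for z in z_right])
    W = np.column_stack([np.linalg.solve(z * E.T - A.T, C.T).ravel() for z in z_left])
    return V, W

def H(Em, Am, Bm, Cm, z):
    return (Cm @ np.linalg.solve(z * Em - Am, Bm)).item()

# Toy full model of order N = 3, reduced to n = 2 with m = 2n = 4 points
E = np.eye(3)
A = np.diag([0.5, 0.3, 0.2])
B = np.ones((3, 1))
C = np.ones((1, 3))
z_right, z_left = [2.0, 3.0], [4.0, 5.0]   # real points keep the demo real-valued

V, W = interpolatory_bases(E, A, B, C, z_right, z_left)

# Petrov-Galerkin projection
E_r, A_r = W.T @ E @ V, W.T @ A @ V
B_r, C_r = W.T @ B, C @ V

# The reduced transfer function interpolates the full one at all 2n points
for z in z_right + z_left:
    assert abs(H(E, A, B, C, z) - H(E_r, A_r, B_r, C_r, z)) < 1e-8
```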

  15. Loewner: Interpolatory model reduction with Loewner
  The Loewner framework derives Σ̃ directly from H(z_1), ..., H(z_m) with
      L_ij = (H(z_i) − H(z_{n+j})) / (z_i − z_{n+j}),
      M_ij = (z_i H(z_i) − z_{n+j} H(z_{n+j})) / (z_i − z_{n+j}),   i, j = 1, ..., n
  The reduced operators of Σ̃ are
      Ẽ = −L,  Ã = −M,  B̃ = [H(z_1), ..., H(z_n)]^T,  C̃ = [H(z_{n+1}), ..., H(z_{n+n})]
  Data-driven (nonintrusive) construction of Σ̃:
  ◮ No access to E, A, B, C required
  ◮ Requires transfer-function values (frequency-response data)
  [Antoulas, Anderson, 1986] [Lefteriu, Antoulas, 2010] [Mayo, Antoulas, 2007]
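A sketch of assembling the Loewner matrix L and the shifted Loewner matrix M from transfer-function samples, following the formulas above. Here the 2n samples are generated from a known toy system so that the result can be checked; in the data-driven setting they would come from measurements.

```python
import numpy as np

def loewner_model(z, Hvals):
    """Build E~ = -L, A~ = -M, B~, C~ from 2n samples H(z_1), ..., H(z_2n)."""
    n = len(z) // 2
    z1, z2 = z[:n], z[n:]            # points z_1..z_n and z_{n+1}..z_{2n}
    H1, H2 = Hvals[:n], Hvals[n:]
    L = np.empty((n, n), dtype=complex)
    M = np.empty((n, n), dtype=complex)
    for i in range(n):
        for j in range(n):
            L[i, j] = (H1[i] - H2[j]) / (z1[i] - z2[j])
            M[i, j] = (z1[i] * H1[i] - z2[j] * H2[j]) / (z1[i] - z2[j])
    return -L, -M, H1.reshape(-1, 1), H2.reshape(1, -1)

def Hz(Em, Am, Bm, Cm, z):
    return (Cm @ np.linalg.solve(z * Em - Am, Bm)).item()

# Generate samples from a known toy system (stand-in for measured data)
E = np.eye(3); A = np.diag([0.5, 0.3, 0.2])
B = np.ones((3, 1)); C = np.ones((1, 3))
z = np.array([2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
Hvals = np.array([Hz(E, A, B, C, zi) for zi in z])

E_t, A_t, B_t, C_t = loewner_model(z, Hvals)

# The Loewner model matches the data at all 2n points
for zi, Hi in zip(z, Hvals):
    assert abs(Hz(E_t, A_t, B_t, C_t, zi) - Hi) < 1e-6
```

Note that only samples of H are used; the full operators enter nowhere, which is the nonintrusive point of the framework.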

  16. Loewner: Problem formulation
  We can time-step the full LTI model Σ for K ∈ N time steps
  ◮ Given inputs u = [u_0, u_1, ..., u_{K−1}]^T ∈ C^K
  ◮ Compute outputs y = [y_0, y_1, ..., y_{K−1}]^T ∈ C^K via time stepping
  ◮ The transfer function H is unavailable (E, A, B, C, x_k are unavailable as well)
  Our goal is to approximate transfer-function values from y
  ◮ Given interpolation points z_1, ..., z_m
  ◮ Perform a single time-domain simulation of Σ to steady state
  ◮ Derive approximate values Ĥ(z_1), ..., Ĥ(z_m) from the output y
  ◮ Construct Σ̂ that approximates the (classical) Loewner model Σ̃:
      Ĥ(z_i) ≈ H̃(z_i) = H(z_i),   i = 1, ..., m
      (time-domain Loewner model ≈ classical Loewner model = full model)
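The slides' scheme extracts all Ĥ(z_i) from a single time-domain simulation; as a simpler sketch of the underlying principle only, one can drive the asymptotically stable system with u_k = z^k for |z| = 1 and read H(z) off the steady-state output, since y_k ≈ H(z) z^k once transients decay. The helper `estimate_H` and the toy system below are invented for illustration and are not the paper's algorithm.

```python
import numpy as np

def estimate_H(step, x0, z, K=400):
    """Drive a stable black-box system with u_k = z^k and return y_{K-1}/z^{K-1},
    which converges to H(z) as the transient dies out."""
    x, y_last = x0, 0.0
    for k in range(K):
        x, y_last = step(x, z ** k)
    return y_last / z ** (K - 1)

# Toy black box: x_{k+1} = 0.5 x_k + u_k, y_k = x_k, so H(z) = 1/(z - 0.5)
A = np.array([[0.5]])
b = np.array([1.0])
c = np.array([1.0])
step = lambda x, u: (A @ x + b * u, c @ x)

z = np.exp(1j * 0.3)                 # a point on the unit circle
H_hat = estimate_H(step, np.zeros(1), z)
H_true = 1.0 / (z - 0.5)
```

Asymptotic stability is what makes this work: the transient contribution decays geometrically, leaving only the frequency response at z.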
