PDE-Constrained Optimization using Progressively-Constructed Reduced-Order Models

Matthew J. Zahr and Charbel Farhat
Institute for Computational and Mathematical Engineering
Farhat Research Group, Stanford University

World Congress on Computational Mechanics XI
July 20-25, 2014, Barcelona, Spain
Advanced Reduced-Order Modeling Strategies for Parametrized PDEs and Applications II

Zahr and Farhat, Progressive ROM-Constrained Optimization
Outline

1. PDE-Constrained Optimization
2. ROM-Constrained Optimization
   - Basis Construction
   - Reduced Sensitivities
   - Training
3. Numerical Experiments
   - Airfoil Design
4. Conclusion
Motivation

- PDE-constrained optimization is ubiquitous in engineering:
  - Design optimization
  - Optimal control
  - Parameter estimation (inverse problems)
- Notoriously expensive, as many calls to a PDE solver (CFD, structural dynamics, acoustic models) may be required
- Good candidate for model reduction: it is a many-query application
Problem Formulation

Goal: rapidly solve PDE-constrained optimization problems of the form

  minimize_{w ∈ ℝ^N, µ ∈ ℝ^p}  f(w, µ)
  subject to  R(w, µ) = 0

where R : ℝ^N × ℝ^p → ℝ^N is the discretized (nonlinear) PDE, w is the PDE state vector, µ is the vector of parameters, and N is assumed to be very large.
Reduced-Order Model

Model Order Reduction (MOR) assumption: the state vector lies in a low-dimensional affine subspace

  w ≈ w_r = w̄ + Φy

where y ∈ ℝ^n are the reduced coordinates of w in the basis Φ ∈ ℝ^{N×n}, w̄ is piecewise constant in µ, and n ≪ N.

Substitute the assumption into the High-Dimensional Model (HDM), R(w, µ) = 0:

  R(w̄ + Φy, µ) ≈ 0

Require the projection of the residual onto a low-dimensional left subspace, with basis Ψ ∈ ℝ^{N×n}, to be zero:

  R_r(y, µ) = Ψᵀ R(w̄ + Φy, µ) = 0
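The reduced equations above can be solved with a Newton iteration on the n-dimensional system R_r(y, µ) = 0. Below is a minimal sketch, not the authors' implementation: the HDM is a hypothetical cubic-reaction system, the names `solve_rom`, `K`, and `f` are illustrative, and Ψ = Φ is the Galerkin choice.

```python
import numpy as np

def solve_rom(R, J, wbar, Phi, Psi, mu, tol=1e-10, maxit=50):
    """Newton solve of the projected ROM equations
    R_r(y, mu) = Psi^T R(wbar + Phi y, mu) = 0."""
    y = np.zeros(Phi.shape[1])
    for _ in range(maxit):
        w = wbar + Phi @ y
        Rr = Psi.T @ R(w, mu)            # reduced residual, shape (n,)
        if np.linalg.norm(Rr) < tol:
            break
        Jr = Psi.T @ J(w, mu) @ Phi      # reduced Jacobian, shape (n, n)
        y -= np.linalg.solve(Jr, Rr)
    return y

# Toy HDM: R(w, mu) = K w + mu * w**3 - f (a cubic reaction system)
N, n = 200, 5
rng = np.random.default_rng(0)
K = 2.0 * np.eye(N) + 0.1 * rng.standard_normal((N, N))
f = rng.standard_normal(N)
R = lambda w, mu: K @ w + mu * w**3 - f
J = lambda w, mu: K + np.diag(3.0 * mu * w**2)

# In practice Phi comes from HDM snapshots; here a random orthonormal basis
Phi = np.linalg.qr(rng.standard_normal((N, n)))[0]
Psi = Phi                                 # Galerkin: Psi = Phi
wbar = np.zeros(N)
y = solve_rom(R, J, wbar, Phi, Psi, mu=0.1)
print(np.linalg.norm(Psi.T @ R(wbar + Phi @ y, 0.1)))  # reduced residual ~0
```

Note that each Newton step only requires an n×n linear solve; the N-dimensional operations are matrix-vector products with the basis.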
Reduced Optimization Problem

ROM-constrained optimization problem:

  minimize_{y ∈ ℝ^n, µ ∈ ℝ^p}  f(w̄ + Φy, µ)
  subject to  Ψᵀ R(w̄ + Φy, µ) = 0

Issues that must be considered:
- Basis construction
- Reduced sensitivity derivation
- Training
State-Sensitivity POD

MOR assumption: w ≈ w_r = w̄ + Φy  ⟹  ∂w_r/∂µ = Φ ∂y/∂µ

Collect state and sensitivity snapshots by sampling the HDM:

  X = [ w(µ₁) − w̄,  w(µ₂) − w̄,  …,  w(µₙ) − w̄ ]
  Y = [ ∂w/∂µ(µ₁),  ∂w/∂µ(µ₂),  …,  ∂w/∂µ(µₙ) ]

Use Proper Orthogonal Decomposition (POD) to generate a reduced basis from each individually:

  Φ_X = POD(X),   Φ_Y = POD(Y)

Concatenate to get the Reduced-Order Basis (ROB):

  Φ = [ Φ_X  Φ_Y ]
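The state-sensitivity POD construction above can be sketched via the SVD. The snapshot matrices here are random stand-ins for actual HDM samples, and the relative truncation tolerance is an assumption.

```python
import numpy as np

def pod(S, tol=1e-8):
    """POD basis of snapshot matrix S: left singular vectors whose
    singular values exceed tol relative to the largest one."""
    U, s, _ = np.linalg.svd(S, full_matrices=False)
    k = int(np.sum(s > tol * s[0]))
    return U[:, :k]

# Hypothetical snapshots: states w(mu_i) and sensitivities dw/dmu(mu_i)
N, p, m = 500, 2, 8
rng = np.random.default_rng(1)
X = rng.standard_normal((N, m))        # columns: w(mu_i) - wbar
Y = rng.standard_normal((N, m * p))    # columns: dw/dmu_k(mu_i), one per parameter

Phi_X, Phi_Y = pod(X), pod(Y)
Phi = np.hstack([Phi_X, Phi_Y])        # concatenated ROB
Phi = np.linalg.qr(Phi)[0]             # re-orthogonalize the concatenation
print(Phi.shape)                       # (500, 24) for these full-rank snapshots
```

Compressing states and sensitivities separately, as on this slide, keeps both subspaces represented even when one set of snapshots dominates the other in magnitude.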
Sensitivities

For gradient-based optimization, sensitivities are required.

HDM sensitivities:

  R(w(µ), µ) = 0  ⟹  ∂w/∂µ = −(∂R/∂w)⁻¹ ∂R/∂µ

ROM sensitivities:

  R_r(y(µ), µ) = 0  ⟹  ∂w_r/∂µ = Φ ∂y/∂µ = Φ A⁻¹ B

  A = Σ_{j=1}^{N} R_j (∂(Ψᵀe_j)/∂w) Φ + Ψᵀ (∂R/∂w) Φ

  B = −( Σ_{j=1}^{N} R_j (∂(Ψᵀe_j)/∂µ) + Ψᵀ ∂R/∂µ )
Minimum-Error Reduced Sensitivities

- True sensitivities of the ROM may be difficult to compute if Ψ = Ψ(µ), e.g. LSPG [Bui-Thanh et al. 2008, Carlberg et al. 2011]
- They may not represent the HDM sensitivities well: gradients of the reduced optimization functions may not be close to the true gradients
- Instead, define the quantity that minimizes the sensitivity error in some norm Θ ≻ 0:

  ∂y/∂µ = arg min_a ‖∂w/∂µ − Φa‖_Θ

  ⟹  ∂y/∂µ = −(Θ^{1/2} Φ)† Θ^{1/2} (∂R/∂w)⁻¹ ∂R/∂µ
Minimum-Error Reduced Sensitivities

Select Θ^{1/2} = ∂R/∂w:

  ∂y/∂µ = −( (∂R/∂w) Φ )† ∂R/∂µ

- Exactly reproduces the HDM sensitivities at training points if the sensitivity basis is not truncated
- May cause convergence issues for the reduced optimization problem
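With Θ^{1/2} = ∂R/∂w, the pseudo-inverse formula above is an ordinary least-squares problem. A small sketch with an illustrative random Jacobian verifies the exact-reproduction property: if the HDM sensitivity lies in range(Φ), the reduced sensitivity recovers it.

```python
import numpy as np

def reduced_sensitivity(dRdw, dRdmu, Phi):
    """Minimum-error reduced sensitivity with Theta^{1/2} = dR/dw:
    dy/dmu = -((dR/dw) Phi)^+ dR/dmu, computed via least squares."""
    dydmu, *_ = np.linalg.lstsq(dRdw @ Phi, -dRdmu, rcond=None)
    return dydmu

N, n, p = 300, 10, 3
rng = np.random.default_rng(2)
dRdw = rng.standard_normal((N, N)) + N * np.eye(N)  # well-conditioned Jacobian
Phi = np.linalg.qr(rng.standard_normal((N, n)))[0]

# Construct data so that the true sensitivity dw/dmu lies in range(Phi)
a = rng.standard_normal((n, p))
dwdmu = Phi @ a
dRdmu = -dRdw @ dwdmu          # then dw/dmu = -(dR/dw)^{-1} dR/dmu holds

dydmu = reduced_sensitivity(dRdw, dRdmu, Phi)
print(np.linalg.norm(Phi @ dydmu - dwdmu))  # ~0: sensitivity reproduced
```

Note the HDM Jacobian is never inverted: only the N×n product (∂R/∂w)Φ enters the least-squares solve.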
Training: Offline-Online (Database) Approach

- Identify samples in the offline phase to be used for training
- Collect snapshots by running the HDM (state vector and sensitivities)
- Build the ROB Φ
- Solve the optimization problem

  minimize_{y ∈ ℝ^n, µ ∈ ℝ^p}  f(w̄ + Φy, µ)
  subject to  Ψᵀ R(w̄ + Φy, µ) = 0

[Lassila et al. 2010, Rozza et al. 2010, Manzoni et al. 2012]
Offline-Online Approach

[Figure: (a) schematic of the algorithm in parameter space: HDM samples are compressed into the ROB Φ, which the optimizer queries through the ROM; (b) idealized optimization trajectory; (c) breakdown of computational effort between the HDM sampling and the ROM-based optimization]
Training: Progressive Approach

- Collect snapshots by running the HDM (state vector and sensitivities) at the initial guess for the optimization problem
- Build the ROB Φ from this sparse training
- Solve the optimization problem

  minimize_{y ∈ ℝ^n, µ ∈ ℝ^p}  f(w̄ + Φy, µ)
  subject to  Ψᵀ R(w̄ + Φy, µ) = 0
              ½ ‖R(w̄ + Φy, µ)‖₂² ≤ ε

- Use the solution of the above problem to enrich the training and repeat until convergence

Similar approaches are found in [Arian et al. 2000, Afanasiev et al. 2001, Fahl 2001]
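The progressive loop can be sketched end-to-end on a toy linear HDM, where the reduced subproblem has a closed-form solution; for a nonlinear HDM the inner solve would also carry the residual-norm bound from this slide. This is only an illustrative sketch: all problem data, sizes, and names are assumptions, and the objective is a simple state-matching functional.

```python
import numpy as np

# Toy linear HDM R(w, mu) = K w - (f0 + F mu) with objective
# f(w) = 0.5 * ||w - w_t||^2; the target w_t is reachable at mu_star.
N, p = 120, 2
rng = np.random.default_rng(3)
K = 3.0 * np.eye(N) + 0.05 * rng.standard_normal((N, N))
f0, F = rng.standard_normal(N), rng.standard_normal((N, p))
mu_star = np.array([0.7, -0.3])
w_t = np.linalg.solve(K, f0 + F @ mu_star)

def hdm_solve(mu):
    """HDM sample: state w(mu) and sensitivity dw/dmu."""
    return np.linalg.solve(K, f0 + F @ mu), np.linalg.solve(K, F)

mu = np.zeros(p)                        # initial guess
for outer in range(10):
    # 1) sample the HDM at the current iterate
    wbar, dwdmu = hdm_solve(mu)
    # 2) build the ROB from the sensitivity snapshots at this sample
    Phi = np.linalg.qr(dwdmu)[0]
    # 3) solve the reduced problem; for this linear HDM the Galerkin ROM
    #    state is affine in mu, w_r(mu) = c + M mu, so the reduced
    #    objective minimizes in closed form via least squares
    A = Phi.T @ K @ Phi
    M = Phi @ np.linalg.solve(A, Phi.T @ F)
    c = wbar + Phi @ np.linalg.solve(A, Phi.T @ (f0 - K @ wbar))
    mu, *_ = np.linalg.lstsq(M, w_t - c, rcond=None)
    # 4) check convergence against the HDM; otherwise the new iterate
    #    becomes the next training sample and the loop repeats
    if np.linalg.norm(hdm_solve(mu)[0] - w_t) < 1e-8:
        break
print(mu)  # recovers mu_star = [0.7, -0.3]
```

Because the HDM here is linear in µ, one enrichment already makes the ROM exact along the sensitivity directions; the nonlinear case motivates the repeated sample-compress-optimize cycle of this slide.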
Progressive Approach

[Figure: (a) schematic of the algorithm in parameter space: each optimizer iterate triggers a new HDM sample, which updates the ROB Φ before the next ROM-constrained solve; (b) idealized optimization trajectory converging to (w*, µ*); (c) breakdown of computational effort, alternating HDM samples and ROM-based optimization]