  1. Efficient PDE Optimization under Uncertainty using Adaptive Model Reduction and Sparse Grids
     Matthew J. Zahr
     Advisor: Charbel Farhat
     Computational and Mathematical Engineering, Stanford University
     Joint work with: Kevin Carlberg (Sandia CA), Drew Kouri (Sandia NM)
     SIAM Annual Meeting, MS 137: Model Reduction of Parametrized PDEs
     Boston, Massachusetts, USA, July 15, 2016

  2. Multiphysics optimization – a key player in next-gen problems
     Current interest in computational physics reaches far beyond analysis of a single configuration of a physical system, into design (shape and topology) and control in an uncertain setting.
     [Figures: Micro-Aerial Vehicle, EM Launcher, Engine System]

  3. PDE-constrained optimization under uncertainty
     Goal: Efficiently solve stochastic PDE-constrained optimization problems

     $\underset{\mu \in \mathbb{R}^{n_\mu}}{\text{minimize}} \;\; \mathbb{E}[J(u, \mu, \cdot)] \quad \text{subject to} \;\; r(u; \mu, \xi) = 0 \;\; \forall \xi \in \Xi$

     where
       $r : \mathbb{R}^{n_u} \times \mathbb{R}^{n_\mu} \times \mathbb{R}^{n_\xi} \to \mathbb{R}^{n_u}$ is the discretized stochastic PDE,
       $J : \mathbb{R}^{n_u} \times \mathbb{R}^{n_\mu} \times \mathbb{R}^{n_\xi} \to \mathbb{R}$ is the quantity of interest,
       $u \in \mathbb{R}^{n_u}$ is the PDE state vector,
       $\mu \in \mathbb{R}^{n_\mu}$ are the (deterministic) optimization parameters,
       $\xi \in \mathbb{R}^{n_\xi}$ are the stochastic parameters, and
       $\mathbb{E}[F] \equiv \int_\Xi F(\xi)\, \rho(\xi)\, d\xi$.

     Each function evaluation requires integration over the stochastic space – expensive.
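To see why, consider a minimal Python sketch of one objective evaluation by quadrature; `solve_pde` and `qoi` are hypothetical stand-ins for the discretized PDE solver and quantity of interest, not part of the original formulation:

```python
def expected_qoi(mu, nodes, weights, solve_pde, qoi):
    """Approximate E[J(u(mu, xi), mu, xi)] by numerical quadrature.

    nodes, weights : quadrature rule for the stochastic space Xi
    solve_pde      : solves r(u; mu, xi) = 0 for u (one full PDE solve)
    qoi            : evaluates J(u, mu, xi)
    """
    total = 0.0
    for xi, w in zip(nodes, weights):
        u = solve_pde(mu, xi)          # expensive: one PDE solve per node
        total += w * qoi(u, mu, xi)
    return total
```

With a full PDE solve at every quadrature node, even a modest rule makes each optimization iteration expensive, which motivates the two levels of inexactness introduced next.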

  4. Proposed approach: managed two-level inexactness
     Two levels of inexactness to obtain an inexpensive approximation model:
     - Anisotropic sparse grids used for inexact integration of risk measures
     - Reduced-order models used for inexact evaluations at collocation nodes

     $\underset{\mu \in \mathbb{R}^{n_\mu}}{\text{minimize}} \; F(\mu) \quad \longrightarrow \quad \underset{\mu \in \mathbb{R}^{n_\mu}}{\text{minimize}} \; m_k(\mu)$

  5. Proposed approach: managed two-level inexactness
     Two levels of inexactness to obtain an inexpensive approximation model:
     - Anisotropic sparse grids used for inexact integration of risk measures
     - Reduced-order models used for inexact evaluations at collocation nodes

     Manage inexactness with a trust-region method:
     - Embedded in a globally convergent trust-region method
     - Error indicators that account for both sources of inexactness
     - Refinement of the integral approximation and reduced-order model via dimension-adaptive sparse grids and a greedy method over collocation nodes

     $\underset{\mu \in \mathbb{R}^{n_\mu}}{\text{minimize}} \; F(\mu) \quad \longrightarrow \quad \underset{\mu \in \mathbb{R}^{n_\mu}}{\text{minimize}} \; m_k(\mu) \quad \text{subject to} \;\; \|\mu - \mu_k\| \leq \Delta_k$

  6. The connection between the objective function and model
     First-order consistency [Alexandrov et al., 1998]:
     $m_k(\mu_k) = F(\mu_k), \qquad \nabla m_k(\mu_k) = \nabla F(\mu_k)$

     The Carter condition [Carter, 1989, Carter, 1991]:
     $\|\nabla F(\mu_k) - \nabla m_k(\mu_k)\| \leq \eta \|\nabla m_k(\mu_k)\|, \qquad \eta \in (0, 1)$

     Asymptotic gradient bound [Heinkenschloss and Vicente, 2002]:
     $\|\nabla F(\mu_k) - \nabla m_k(\mu_k)\| \leq \xi \min\{\|\nabla m_k(\mu_k)\|, \Delta_k\}, \qquad \xi > 0$

     The asymptotic gradient bound permits the use of an error indicator $\varphi_k$:
     $\|\nabla F(\mu) - \nabla m_k(\mu)\| \leq \xi \varphi_k(\mu), \qquad \xi > 0$
     $\varphi_k(\mu_k) \leq \kappa \min\{\|\nabla m_k(\mu_k)\|, \Delta_k\}$
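In code, the adaptivity condition becomes a refinement loop executed before each trust-region step. A minimal Python sketch, where the model object and its `refine` method (which would add sparse grid points or enrich the reduced basis) are assumed interfaces:

```python
import numpy as np

def model_update(mu_k, Delta_k, model, kappa=1e-2, max_refine=20):
    """Refine the model m_k until its error indicator satisfies
    phi_k(mu_k) <= kappa * min(||grad m_k(mu_k)||, Delta_k)."""
    for _ in range(max_refine):
        g_norm = np.linalg.norm(model.grad(mu_k))
        if model.phi(mu_k) <= kappa * min(g_norm, Delta_k):
            return model                 # accurate enough for this iteration
        model = model.refine(mu_k)       # add grid points / enrich basis
    return model
```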

  7. Trust-region method with inexact gradients [Kouri et al., 2013]
     1: Model update: Choose model $m_k$ and error indicator $\varphi_k$ such that
        $\varphi_k(\mu_k) \leq \kappa \min\{\|\nabla m_k(\mu_k)\|, \Delta_k\}$
     2: Step computation: Approximately solve the trust-region subproblem
        $\hat{\mu}_k = \underset{\mu \in \mathbb{R}^{n_\mu}}{\arg\min} \; m_k(\mu) \quad \text{subject to} \;\; \|\mu - \mu_k\| \leq \Delta_k$
     3: Step acceptance: Compute the actual-to-predicted reduction
        $\rho_k = \dfrac{F(\mu_k) - F(\hat{\mu}_k)}{m_k(\mu_k) - m_k(\hat{\mu}_k)}$
        if $\rho_k \geq \eta_1$ then $\mu_{k+1} = \hat{\mu}_k$ else $\mu_{k+1} = \mu_k$
     4: Trust-region update:
        if $\rho_k \leq \eta_1$ then $\Delta_{k+1} \in (0, \gamma\|\hat{\mu}_k - \mu_k\|]$
        if $\rho_k \in (\eta_1, \eta_2)$ then $\Delta_{k+1} \in [\gamma\|\hat{\mu}_k - \mu_k\|, \Delta_k]$
        if $\rho_k \geq \eta_2$ then $\Delta_{k+1} \in [\Delta_k, \Delta_{\max}]$
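The following Python sketch assembles these four steps into a loop; `F`, `make_model`, and `solve_subproblem` are assumed interfaces (for instance, `make_model` could wrap the `model_update` sketch above), not the authors' implementation:

```python
import numpy as np

def trust_region(F, make_model, solve_subproblem, mu, Delta,
                 eta1=0.1, eta2=0.75, gamma=0.5, Delta_max=10.0,
                 max_iter=100, tol=1e-6):
    """Basic trust-region iteration with an adaptively refined model m_k."""
    for k in range(max_iter):
        model = make_model(mu, Delta)                  # step 1: model update
        if np.linalg.norm(model.grad(mu)) < tol:
            break
        mu_hat = solve_subproblem(model, mu, Delta)    # step 2: step computation

        pred = model.value(mu) - model.value(mu_hat)   # predicted reduction
        actual = F(mu) - F(mu_hat)                     # actual reduction
        rho = actual / pred if pred > 0 else -np.inf   # step 3: acceptance ratio

        step = np.linalg.norm(mu_hat - mu)
        if rho >= eta1:
            mu = mu_hat                                # accept trial step

        # step 4: trust-region radius update
        if rho <= eta1:
            Delta = gamma * step                       # shrink: in (0, gamma*step]
        elif rho >= eta2:
            Delta = min(2.0 * Delta, Delta_max)        # expand: in [Delta, Delta_max]
        # otherwise keep Delta (any value in [gamma*step, Delta] is admissible)
    return mu
```

Global convergence hinges on step 1: the model may be arbitrarily cheap far from a stationary point, so long as the error indicator bound tightens with $\|\nabla m_k\|$ and $\Delta_k$.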

  8. Trust-region method with inexact gradients and objective
     1: Model update: Choose model $m_k$ and error indicator $\varphi_k$ such that
        $\varphi_k(\mu_k) \leq \kappa \min\{\|\nabla m_k(\mu_k)\|, \Delta_k\}$
     2: Step computation: Approximately solve the trust-region subproblem
        $\hat{\mu}_k = \underset{\mu \in \mathbb{R}^{n_\mu}}{\arg\min} \; m_k(\mu) \quad \text{subject to} \;\; \|\mu - \mu_k\| \leq \Delta_k$
     3: Step acceptance: Compute an approximation of the actual-to-predicted reduction
        $\rho_k = \dfrac{\psi_k(\mu_k) - \psi_k(\hat{\mu}_k)}{m_k(\mu_k) - m_k(\hat{\mu}_k)}$
        if $\rho_k \geq \eta_1$ then $\mu_{k+1} = \hat{\mu}_k$ else $\mu_{k+1} = \mu_k$
     4: Trust-region update:
        if $\rho_k \leq \eta_1$ then $\Delta_{k+1} \in (0, \gamma\|\hat{\mu}_k - \mu_k\|]$
        if $\rho_k \in (\eta_1, \eta_2)$ then $\Delta_{k+1} \in [\gamma\|\hat{\mu}_k - \mu_k\|, \Delta_k]$
        if $\rho_k \geq \eta_2$ then $\Delta_{k+1} \in [\Delta_k, \Delta_{\max}]$
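The only change from the previous algorithm is step 3: the exact objective $F$ is never evaluated; the cheaper approximation $\psi_k$ replaces it in the numerator. As a sketch:

```python
def acceptance_ratio(model, psi, mu_k, mu_hat):
    """Approximate actual-to-predicted reduction: psi_k stands in
    for the exact (never-evaluated) objective F."""
    return (psi(mu_k) - psi(mu_hat)) / (model.value(mu_k) - model.value(mu_hat))
```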

  9. Inexact objective function evaluations
     Asymptotic objective decrease bound [Kouri et al., 2014]:
     $|F(\mu_k) - F(\hat{\mu}_k) + \psi_k(\hat{\mu}_k) - \psi_k(\mu_k)| \leq \sigma \min\{m_k(\mu_k) - m_k(\hat{\mu}_k), r_k\}^{1/\omega}$
     where $\omega \in (0, 1)$, $r_k \to 0$, $\sigma > 0$.

     The asymptotic objective decrease bound permits the use of an error indicator $\theta_k$:
     $|F(\mu_k) - F(\mu) + \psi_k(\mu) - \psi_k(\mu_k)| \leq \sigma \theta_k(\mu), \qquad \sigma > 0$
     $\theta_k(\hat{\mu}_k)^\omega \leq \eta \min\{m_k(\mu_k) - m_k(\hat{\mu}_k), r_k\}$
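Unlike the gradient condition, this condition is checked after the trial step $\hat{\mu}_k$ is known. A minimal sketch under the same assumed model interfaces as above:

```python
def refine_objective_model(mu_k, mu_hat, model, psi_model,
                           eta=0.5, omega=0.9, r_k=1e-3, max_refine=20):
    """Refine psi_k until theta_k(mu_hat)**omega <= eta * min(pred_red, r_k)."""
    pred_red = model.value(mu_k) - model.value(mu_hat)
    for _ in range(max_refine):
        if psi_model.theta(mu_hat) ** omega <= eta * min(pred_red, r_k):
            return psi_model             # objective approximation good enough
        psi_model = psi_model.refine(mu_hat)
    return psi_model
```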

 10. Trust-region method ingredients for global convergence
     Approximation models: $m_k(\mu)$, $\psi_k(\mu)$

     Error indicators:
     $\|\nabla F(\mu) - \nabla m_k(\mu)\| \leq \xi \varphi_k(\mu), \qquad \xi > 0$
     $|F(\mu_k) - F(\mu) + \psi_k(\mu) - \psi_k(\mu_k)| \leq \sigma \theta_k(\mu), \qquad \sigma > 0$

     Adaptivity:
     $\varphi_k(\mu_k) \leq \kappa \min\{\|\nabla m_k(\mu_k)\|, \Delta_k\}$
     $\theta_k(\hat{\mu}_k)^\omega \leq \eta \min\{m_k(\mu_k) - m_k(\hat{\mu}_k), r_k\}$

     Global convergence:
     $\liminf_{k \to \infty} \|\nabla F(\mu_k)\| = 0$

 11. First layer of inexactness: anisotropic sparse grids
     Stochastic collocation using anisotropic sparse grid nodes to approximate the integral with a summation [Kouri et al., 2013, Kouri et al., 2014]

     $\underset{u \in \mathbb{R}^{n_u},\, \mu \in \mathbb{R}^{n_\mu}}{\text{minimize}} \; \mathbb{E}[J(u, \mu, \cdot)] \quad \text{subject to} \;\; r(u, \mu, \xi) = 0 \;\; \forall \xi \in \Xi$
         $\Downarrow$
     $\underset{u \in \mathbb{R}^{n_u},\, \mu \in \mathbb{R}^{n_\mu}}{\text{minimize}} \; \mathbb{E}_{\mathcal{I}}[J(u, \mu, \cdot)] \quad \text{subject to} \;\; r(u, \mu, \xi) = 0 \;\; \forall \xi \in \Xi_{\mathcal{I}}$
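The sketch below conveys the anisotropy idea with a plain anisotropic tensor-product Gauss-Legendre rule on $[-1,1]^d$; a true dimension-adaptive Smolyak sparse grid, as used here, instead combines such rules over an adaptively grown index set:

```python
import numpy as np
from itertools import product

def anisotropic_grid(levels):
    """Anisotropic tensor-product Gauss-Legendre rule on [-1, 1]^d.

    Per-dimension levels place more nodes in important stochastic
    directions; a Smolyak construction would sparsify this tensor grid.
    """
    rules = [np.polynomial.legendre.leggauss(2 ** l + 1) for l in levels]
    nodes = np.array(list(product(*[r[0] for r in rules])))
    weights = np.array([np.prod(w) for w in product(*[r[1] for r in rules])])
    weights /= weights.sum()     # normalize: uniform density on the cube
    return nodes, weights

# e.g. 9 nodes in xi_1 but only 3 in xi_2: xi_1 is resolved more finely
nodes, weights = anisotropic_grid([3, 1])
E_J = sum(w * (x[0] ** 2 + np.sin(x[1])) for x, w in zip(nodes, weights))
```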

 12. Second layer of inexactness: reduced-order models
     Stochastic collocation of the reduced-order model over anisotropic sparse grid nodes to approximate the integral with a cheap summation

     $\underset{u \in \mathbb{R}^{n_u},\, \mu \in \mathbb{R}^{n_\mu}}{\text{minimize}} \; \mathbb{E}[J(u, \mu, \cdot)] \quad \text{subject to} \;\; r(u, \mu, \xi) = 0 \;\; \forall \xi \in \Xi$
         $\Downarrow$
     $\underset{u \in \mathbb{R}^{n_u},\, \mu \in \mathbb{R}^{n_\mu}}{\text{minimize}} \; \mathbb{E}_{\mathcal{I}}[J(u, \mu, \cdot)] \quad \text{subject to} \;\; r(u, \mu, \xi) = 0 \;\; \forall \xi \in \Xi_{\mathcal{I}}$
         $\Downarrow$
     $\underset{y \in \mathbb{R}^{k_u},\, \mu \in \mathbb{R}^{n_\mu}}{\text{minimize}} \; \mathbb{E}_{\mathcal{I}}[J(\Phi y, \mu, \cdot)] \quad \text{subject to} \;\; \Phi^T r(\Phi y, \mu, \xi) = 0 \;\; \forall \xi \in \Xi_{\mathcal{I}}$
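At each collocation node the full system $r(u, \mu, \xi) = 0$ of dimension $n_u$ is replaced by a $k_u$-dimensional Galerkin system. A minimal Newton sketch (the `residual` and `jacobian` callables are assumed interfaces):

```python
import numpy as np

def solve_rom(Phi, residual, jacobian, mu, xi, tol=1e-10, max_iter=25):
    """Galerkin reduced-order model: solve Phi^T r(Phi y; mu, xi) = 0
    for the reduced coordinates y via Newton's method.

    Phi      : (n_u, k_u) reduced basis, k_u << n_u
    residual : r(u, mu, xi) -> (n_u,)
    jacobian : dr/du(u, mu, xi) -> (n_u, n_u)
    """
    y = np.zeros(Phi.shape[1])
    for _ in range(max_iter):
        u = Phi @ y
        r_red = Phi.T @ residual(u, mu, xi)            # reduced residual
        if np.linalg.norm(r_red) < tol:
            break
        J_red = Phi.T @ jacobian(u, mu, xi) @ Phi      # small k_u x k_u system
        y -= np.linalg.solve(J_red, r_red)
    return y
```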

 13. First two ingredients for global convergence
     Approximation models built on two levels of inexactness:
     $m_k(\mu) = \mathbb{E}_{\mathcal{I}_k}[J(\Phi_k y(\mu, \cdot), \mu, \cdot)]$
     $\psi_k(\mu) = \mathbb{E}_{\mathcal{I}'_k}[J(\Phi'_k y(\mu, \cdot), \mu, \cdot)]$

     Error indicators that account for both sources of error:
     $\varphi_k(\mu) = \alpha_1 E_1(\mu; \mathcal{I}_k, \Phi_k) + \alpha_2 E_2(\mu; \mathcal{I}_k, \Phi_k) + \alpha_3 E_4(\mu; \mathcal{I}_k, \Phi_k)$
     $\theta_k(\mu) = \beta_1 (E_1(\mu; \mathcal{I}'_k, \Phi'_k) + E_1(\mu_k; \mathcal{I}'_k, \Phi'_k)) + \beta_2 (E_3(\mu; \mathcal{I}'_k, \Phi'_k) + E_3(\mu_k; \mathcal{I}'_k, \Phi'_k))$

     Reduced-order model errors:
     $E_1(\mu; \mathcal{I}, \Phi) = \mathbb{E}_{\mathcal{I} \cup \mathcal{N}(\mathcal{I})}[\| r(\Phi y(\mu, \cdot), \mu, \cdot) \|]$
     $E_2(\mu; \mathcal{I}, \Phi) = \mathbb{E}_{\mathcal{I} \cup \mathcal{N}(\mathcal{I})}[\| r^\lambda(\Phi y(\mu, \cdot), \Psi \lambda_r(\mu, \cdot), \mu, \cdot) \|]$

     Sparse grid truncation errors:
     $E_3(\mu; \mathcal{I}, \Phi) = \mathbb{E}_{\mathcal{N}(\mathcal{I})}[| J(\Phi y(\mu, \cdot), \mu, \cdot) |]$
     $E_4(\mu; \mathcal{I}, \Phi) = \mathbb{E}_{\mathcal{N}(\mathcal{I})}[\| \nabla J(\Phi y(\mu, \cdot), \mu, \cdot) \|]$
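A sketch of how the gradient indicator $\varphi_k = \alpha_1 E_1 + \alpha_2 E_2 + \alpha_3 E_4$ might be assembled; every interface here (`grid.rule`, the `rom` residual and gradient norms) is an assumption for illustration, not the authors' code:

```python
def gradient_error_indicator(mu, grid, rom, alpha=(1.0, 1.0, 1.0)):
    """phi_k(mu) = a1*E1 + a2*E2 + a3*E4.

    grid.rule(S) yields (node, weight) pairs of the collocation rule
    on index set S; 'I+N' denotes I u N(I), 'N' the neighborhood N(I).
    """
    a1, a2, a3 = alpha
    # ROM errors: primal and adjoint residual norms averaged over I u N(I)
    E1 = sum(w * rom.primal_residual_norm(mu, xi)
             for xi, w in grid.rule('I+N'))
    E2 = sum(w * rom.adjoint_residual_norm(mu, xi)
             for xi, w in grid.rule('I+N'))
    # Sparse grid truncation error: gradient mass on the neighborhood N(I)
    E4 = sum(w * rom.qoi_gradient_norm(mu, xi)
             for xi, w in grid.rule('N'))
    return a1 * E1 + a2 * E2 + a3 * E4
```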

 14. Derivation of gradient error indicator
     For brevity, let
     $J(\xi) \leftarrow J(u(\mu, \xi), \mu, \xi)$, $\qquad \nabla J(\xi) \leftarrow \nabla J(u(\mu, \xi), \mu, \xi)$
     $J_r(\xi) = J(\Phi y(\mu, \xi), \mu, \xi)$, $\qquad \nabla J_r(\xi) = \nabla J(\Phi y(\mu, \xi), \mu, \xi)$
     $r_r(\xi) = r(\Phi y(\mu, \xi), \mu, \xi)$, $\qquad r^\lambda_r(\xi) = r^\lambda(\Phi y(\mu, \xi), \Psi \lambda_r(\mu, \xi), \mu, \xi)$

     Separate the total error into contributions from ROM inexactness and sparse grid truncation:
     $\| \mathbb{E}[\nabla J] - \mathbb{E}_{\mathcal{I}}[\nabla J_r] \| \leq \mathbb{E}[\| \nabla J - \nabla J_r \|] + \| \mathbb{E}[\nabla J_r] - \mathbb{E}_{\mathcal{I}}[\nabla J_r] \|$

 15. Derivation of gradient error indicator
     For brevity, let
     $J(\xi) \leftarrow J(u(\mu, \xi), \mu, \xi)$, $\qquad \nabla J(\xi) \leftarrow \nabla J(u(\mu, \xi), \mu, \xi)$
     $J_r(\xi) = J(\Phi y(\mu, \xi), \mu, \xi)$, $\qquad \nabla J_r(\xi) = \nabla J(\Phi y(\mu, \xi), \mu, \xi)$
     $r_r(\xi) = r(\Phi y(\mu, \xi), \mu, \xi)$, $\qquad r^\lambda_r(\xi) = r^\lambda(\Phi y(\mu, \xi), \Psi \lambda_r(\mu, \xi), \mu, \xi)$

     Separate the total error into contributions from ROM inexactness and sparse grid truncation:
     $\| \mathbb{E}[\nabla J] - \mathbb{E}_{\mathcal{I}}[\nabla J_r] \| \leq \mathbb{E}[\| \nabla J - \nabla J_r \|] + \| \mathbb{E}[\nabla J_r] - \mathbb{E}_{\mathcal{I}}[\nabla J_r] \|$
     $\qquad \leq \zeta' \, \mathbb{E}[\alpha_1 \| r_r \| + \alpha_2 \| r^\lambda_r \|] + \mathbb{E}_{\mathcal{I}^c}[\| \nabla J_r \|]$

 16. Derivation of gradient error indicator
     For brevity, let
     $J(\xi) \leftarrow J(u(\mu, \xi), \mu, \xi)$, $\qquad \nabla J(\xi) \leftarrow \nabla J(u(\mu, \xi), \mu, \xi)$
     $J_r(\xi) = J(\Phi y(\mu, \xi), \mu, \xi)$, $\qquad \nabla J_r(\xi) = \nabla J(\Phi y(\mu, \xi), \mu, \xi)$
     $r_r(\xi) = r(\Phi y(\mu, \xi), \mu, \xi)$, $\qquad r^\lambda_r(\xi) = r^\lambda(\Phi y(\mu, \xi), \Psi \lambda_r(\mu, \xi), \mu, \xi)$

     Separate the total error into contributions from ROM inexactness and sparse grid truncation:
     $\| \mathbb{E}[\nabla J] - \mathbb{E}_{\mathcal{I}}[\nabla J_r] \| \leq \mathbb{E}[\| \nabla J - \nabla J_r \|] + \| \mathbb{E}[\nabla J_r] - \mathbb{E}_{\mathcal{I}}[\nabla J_r] \|$
     $\qquad \leq \zeta' \, \mathbb{E}[\alpha_1 \| r_r \| + \alpha_2 \| r^\lambda_r \|] + \mathbb{E}_{\mathcal{I}^c}[\| \nabla J_r \|]$
     $\qquad \lesssim \zeta \, \mathbb{E}_{\mathcal{I} \cup \mathcal{N}(\mathcal{I})}[\alpha_1 \| r_r \| + \alpha_2 \| r^\lambda_r \|] + \alpha_3 \mathbb{E}_{\mathcal{N}(\mathcal{I})}[\| \nabla J_r \|]$

     In the last step the exact expectations are estimated over the computable sets $\mathcal{I} \cup \mathcal{N}(\mathcal{I})$ and $\mathcal{N}(\mathcal{I})$, which yields exactly the indicator terms $E_1$, $E_2$, $E_4$ defining $\varphi_k$ on slide 13.
