Multilevel discrete least squares polynomial approximation
Raúl Tempone (Alexander von Humboldt Professor, RWTH Aachen; KAUST)
Joint work with: A.-L. Haji-Ali (Heriot-Watt), F. Nobile (EPFL), S. Wolfers (ex-KAUST, now G-Research)
DCSE Fall School on ROM and UQ, November 4–8, 2019
Contents 1. Problem framework 2. Weighted discrete least squares approximation 3. Multilevel least squares approximation 4. Application to random elliptic PDEs 5. Conclusions
PDEs with random parameters

Consider a differential problem

L(u; y) = G    (*)

depending on a set of random parameters y = (y_1, ..., y_N) ∈ Γ ⊂ R^N with joint probability measure μ on Γ. We assume that (*) has a unique solution u(y) in a suitable function space V, and we focus on a quantity of interest Q : V → R.

Goal: approximate the whole response function y ↦ f(y) := Q(u(y)) : Γ → R by multivariate polynomials, and possibly derive approximate statistics such as E[f], Var[f], etc.
Polynomial approximation on downward closed sets

Assume f ∈ L²_μ(Γ). We seek an approximation of f in a finite-dimensional polynomial subspace

V_Λ = span{ ∏_{n=1}^N y_n^{p_n} : p = (p_1, ..., p_N) ∈ Λ },

with Λ ⊂ N^N a downward closed index set.

Definition. An index set Λ is downward closed if

p ∈ Λ and q ≤ p ⟹ q ∈ Λ.

(Figure: a downward closed index set in N², plotted in the (p_1, p_2) plane.)
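The downward-closedness condition translates directly into code. A small sketch (the helper names are mine, not from the slides) that builds the total-degree index set, a standard downward closed example, and checks the definition componentwise:

```python
from itertools import product

def is_downward_closed(index_set):
    """Check: p in the set and q <= p componentwise implies q in the set."""
    members = set(index_set)
    return all(
        q in members
        for p in members
        for q in product(*(range(pi + 1) for pi in p))
    )

def total_degree_set(N, k):
    """Total-degree index set {p in N^N : p_1 + ... + p_N <= k}."""
    return [p for p in product(range(k + 1), repeat=N) if sum(p) <= k]

Lam = total_degree_set(2, 3)
assert is_downward_closed(Lam)
assert not is_downward_closed([(0, 0), (2, 0)])  # (1, 0) is missing
```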
Weighted discrete least squares approximation

1. Sample M points y^(1), ..., y^(M) ∈ Γ independently from a distribution ν ≪ μ, with density ρ = dν/dμ;
2. define the weight function w(y) = 1/ρ(y);
3. find the weighted discrete least squares approximation on V_Λ:

Π̂_M f = argmin_{v ∈ V_Λ} ‖f − v‖_M,  with ‖g‖²_M = (1/M) ∑_{j=1}^M w(y^(j)) g(y^(j))².

Here E[‖g‖²_M] = ∫_Γ w(y) g(y)² ν(dy) = ∫_Γ g(y)² μ(dy) = ‖g‖²_{L²_μ}.

Algebraic system: let {φ_j}_{j=1}^{|Λ|} be a basis of V_Λ, orthonormal w.r.t. μ, and write Π̂_M f(y) = ∑_{j=1}^{|Λ|} c_j φ_j(y). Then c = (c_1, ..., c_{|Λ|})^T satisfies

G c = f̂,  with G_{i,j} = (φ_i, φ_j)_M, f̂_i = (f, φ_i)_M.
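The three steps and the algebraic system above can be sketched in a few lines. The sketch below (all function names are illustrative) takes μ uniform on [−1, 1], the rescaled Legendre polynomials as the μ-orthonormal basis, and ν = μ so that w ≡ 1:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def phi(j, y):
    """Legendre polynomial of degree j, orthonormal w.r.t. the uniform
    measure mu(dy) = dy/2 on [-1, 1]: sqrt(2j + 1) * P_j(y)."""
    return np.sqrt(2 * j + 1) * Legendre.basis(j)(y)

def weighted_lsq(f, ys, w, degrees):
    """Assemble and solve G c = f_hat with G_{ij} = (phi_i, phi_j)_M and
    f_hat_i = (f, phi_i)_M, using the weighted discrete inner product
    (g, h)_M = (1/M) sum_j w(y^(j)) g(y^(j)) h(y^(j))."""
    M = len(ys)
    Phi = np.column_stack([phi(j, ys) for j in degrees])
    W = w(ys) / M
    G = Phi.T @ (W[:, None] * Phi)
    f_hat = Phi.T @ (W * f(ys))
    return np.linalg.solve(G, f_hat), G

rng = np.random.default_rng(0)
ys = rng.uniform(-1, 1, 2000)                          # nu = mu, so w = 1
c, G = weighted_lsq(np.cos, ys, lambda y: np.ones_like(y), range(5))
# For M large enough, G concentrates around the identity
assert np.linalg.norm(G - np.eye(5)) < 0.5
```

The first coefficient approximates (cos, φ_0)_μ = sin(1), which gives a quick sanity check on the assembly.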
Optimality of discrete least squares approximation

Theorem ([Cohen-Migliorati 2017], [Cohen-Davenport-Leviatan 2013]). For arbitrary r > 0 define

κ_r := (1 − log 2) / (2(1 + r))  and  K_{Λ,w} := sup_{y ∈ Γ} w(y) ∑_{j=1}^{|Λ|} φ_j(y)².

If M / log M ≥ K_{Λ,w} / κ_r, then:

• P( ‖G − I‖ ≤ 1/2 ) > 1 − 2 M^{−r};
• ‖f − Π̂_M f‖_{L²_μ} ≤ (1 + √2) inf_{v ∈ V_Λ} ‖√w (f − v)‖_{L^∞} with probability > 1 − 2 M^{−r};
• E[ ‖f − Π̂^c_M f‖²_{L²_μ} ] ≤ C_M inf_{v ∈ V_Λ} ‖f − v‖²_{L²_μ} + 2 ‖f‖²_{L²_μ} M^{−r},

where Π̂^c_M f = Π̂_M f · 1_{‖G − I‖ ≤ 1/2} and C_M = 1 + 4κ_r / log M → 1 as M → ∞.
Sufficient number of points - uniform measure

Uniform measure μ = U(Γ) with Γ = ∏_{i=1}^N Γ_i [Chkifa-Cohen-Migliorati-Nobile-Tempone 2015]: when sampling from the same distribution (ν = μ and w = 1), one has

|Λ| ≤ K_{Λ,1} ≤ |Λ|².

Hence, (unweighted) discrete least squares is stable and optimally convergent under the condition

M / log M ≥ |Λ|² / κ_r  (quadratic proportionality).
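To see what the sampling condition costs in practice, the sketch below (function names mine) finds the smallest M with M / log M ≥ K / κ_r and compares the unweighted case K = |Λ|² with the optimal-sampling case K = |Λ|:

```python
import math

def kappa(r):
    """kappa_r = (1 - log 2) / (2 (1 + r)) from the stability theorem."""
    return (1 - math.log(2)) / (2 * (1 + r))

def min_samples(K, r):
    """Smallest integer M >= 2 with M / log M >= K / kappa_r
    (brute-force search; fine for illustration)."""
    target = K / kappa(r)
    M = 2
    while M / math.log(M) < target:
        M += 1
    return M

card = 20                                 # |Lam|
M_unweighted = min_samples(card**2, r=1)  # K = |Lam|^2, quadratic
M_optimal = min_samples(card, r=1)        # K = |Lam|,   linear
assert M_optimal < M_unweighted
```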
Sufficient number of points - optimal measure

[Cohen-Migliorati 2017] For arbitrary μ, when sampling from the optimal measure

dν*/dμ (y) = ρ*(y) = (1/|Λ|) ∑_{j=1}^{|Λ|} φ_j(y)²  ⟹  K_{Λ,w*} = |Λ|.

Hence, weighted discrete least squares is stable and optimal with

M / log M ≥ |Λ| / κ_r  (linear proportionality).

Sampling algorithms for the optimal distribution are available (marginalization [Cohen-Migliorati 2017], acceptance-rejection [Haji-Ali-Nobile-Tempone-Wolfers 2017]). However, the optimal distribution depends on Λ, which is inconvenient for adaptive algorithms.
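In the univariate uniform case, one simple way to draw from ρ* (not necessarily the algorithm of the cited papers) is to exploit its mixture structure: pick a degree j uniformly, then draw from φ_j² dμ by acceptance-rejection, using the envelope sup_y φ_j(y)² = 2j + 1. A hedged sketch:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

rng = np.random.default_rng(1)

def phi_sq(j, y):
    """Square of the mu-orthonormal Legendre polynomial of degree j
    (mu uniform on [-1, 1]); its sup over [-1, 1] is 2j + 1."""
    return (2 * j + 1) * Legendre.basis(j)(y) ** 2

def rho_star(degrees, y):
    """Optimal density rho*(y) = (1/|Lam|) sum_j phi_j(y)^2 w.r.t. mu."""
    return np.mean([phi_sq(j, y) for j in degrees], axis=0)

def sample_optimal(degrees, size):
    """Mixture + acceptance-rejection: pick a degree j uniformly, then
    accept uniform proposals y against phi_j(y)^2 / (2j + 1)."""
    out = np.empty(size)
    for i in range(size):
        j = int(rng.choice(np.asarray(degrees)))
        while True:
            y = rng.uniform(-1.0, 1.0)
            if rng.uniform(0.0, 2 * j + 1) < phi_sq(j, y):
                out[i] = y
                break
    return out

degrees = list(range(5))
ys = sample_optimal(degrees, 2000)
w = 1.0 / rho_star(degrees, ys)   # least squares weights w = 1/rho*
assert np.all(np.abs(ys) <= 1)
```

Since w = dμ/dν*, the sample mean of the weights should be close to 1, a cheap check that the sampler matches the density.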
Sufficient number of points - Chebyshev measure

Alternatively, for the uniform measure μ (or more generally a product measure μ = ⊗_{j=1}^N μ_j with each μ_j a doubling measure, i.e. μ_j(2I) ≤ L μ_j(I)), one can sample from the arcsine (Chebyshev) distribution. Then

K_{Λ,w} ≤ C^N |Λ|,  so  M / log M ≥ (C^N / κ_r) |Λ|.

Still linear scaling in |Λ|, but with a constant depending exponentially on N. Advantage: the sampling measure does not depend on Λ, which makes it well suited to adaptivity.
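In the univariate uniform case the arcsine sampler and its weight are fully explicit: y = cos(πU) with U ~ U(0, 1) has density 1/(π√(1 − y²)), so ρ(y) = 2/(π√(1 − y²)) relative to μ (density 1/2), and w(y) = 1/ρ(y) = (π/2)√(1 − y²). A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_chebyshev(size):
    """Arcsine (Chebyshev) samples on [-1, 1]: y = cos(pi * U)."""
    return np.cos(np.pi * rng.uniform(0.0, 1.0, size))

def chebyshev_weight(y):
    """w = 1/rho with rho = d nu/d mu: for mu uniform (density 1/2) and
    nu arcsine (density 1/(pi sqrt(1 - y^2))), w(y) = (pi/2) sqrt(1 - y^2)."""
    return 0.5 * np.pi * np.sqrt(1.0 - y**2)

ys = sample_chebyshev(100_000)
# Since w = d mu / d nu, the weights average to 1 under nu
assert abs(chebyshev_weight(ys).mean() - 1.0) < 0.02
```

Note that, unlike the optimal sampler, nothing here refers to Λ: the same samples can be reused as the index set grows, which is the point made on the slide.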
Multilevel least squares approximation

In practice f(y) = Q(u(y)) cannot be evaluated exactly, as it requires the solution of a differential equation. We introduce a sequence of approximations f_{n_ℓ}, n_ℓ ∈ N, with increasing cost, such that

lim_{ℓ→∞} ‖f − f_{n_ℓ}‖_{L²_μ} = 0  (or possibly in a stronger norm).

Similarly, we introduce a sequence of nested downward closed sets

Λ_{m_0} ⊂ Λ_{m_1} ⊂ ... ⊂ Λ_{m_k} ⊂ ...  such that  lim_{k→∞} inf_{v ∈ V_{Λ_{m_k}}} ‖f − v‖_{L²_μ} = 0.

Correspondingly, for each Λ_{m_k} we introduce a weighted discrete least squares projector Π̂_{M_k} using M_k random points, with M_k / log M_k = O(|Λ_{m_k}|).
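A toy end-to-end sketch of the multilevel idea: write f_{n_L} as a telescoping sum of model differences and project each difference with its own least squares operator, pairing rich polynomial spaces with coarse models and small spaces with fine models. Everything here is illustrative (the "model hierarchy" f_n is a Taylor truncation of exp(y) standing in for a PDE solve, ν = μ, w ≡ 1, and the level pairings are mine):

```python
import math

import numpy as np
from numpy.polynomial.legendre import Legendre

rng = np.random.default_rng(3)

def lsq_project(g, ys, n_basis):
    """Discrete least squares projection of g onto the first n_basis
    mu-orthonormal Legendre polynomials (mu uniform on [-1, 1])."""
    Phi = np.column_stack(
        [np.sqrt(2 * j + 1) * Legendre.basis(j)(ys) for j in range(n_basis)]
    )
    c, *_ = np.linalg.lstsq(Phi, g(ys), rcond=None)
    return c

def f_n(n):
    """Toy 'discretized' model: order-n Taylor truncation of exp(y)."""
    return lambda y: sum(y**k / math.factorial(k) for k in range(n + 1))

# (polynomial dimension, model order) per level: the richest space pairs
# with the cheapest model, as in the telescoping multilevel sum.
levels = [(8, 2), (4, 5), (2, 12)]
coeffs = np.zeros(8)
prev = lambda y: np.zeros_like(y)
for n_basis, n in levels:
    diff = lambda y, f=f_n(n), p=prev: f(y) - p(y)   # level difference
    coeffs[:n_basis] += lsq_project(diff, rng.uniform(-1, 1, 200), n_basis)
    prev = f_n(n)

def eval_approx(c, y):
    return sum(
        c[j] * np.sqrt(2 * j + 1) * Legendre.basis(j)(y) for j in range(len(c))
    )

grid = np.linspace(-1, 1, 101)
err = np.max(np.abs(eval_approx(coeffs, grid) - np.exp(grid)))
assert err < 0.2
```

Because the level differences shrink as the model is refined, the small polynomial spaces on the fine levels contribute little error while saving most of the cost; this is the mechanism the next slides quantify.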