Adaptive Approximation for Multivariate Linear Problems with Inputs Lying in a Cone


  1. Adaptive Approximation for Multivariate Linear Problems with Inputs Lying in a Cone. Fred J. Hickernell, Department of Applied Mathematics, Center for Interdisciplinary Scientific Computation, Illinois Institute of Technology. hickernell@iit.edu, mypages.iit.edu/~hickernell. Joint work with Yuhan Ding, Peter Kritzer, and Simon Mak. This work was partially supported by NSF-DMS-1522687 and NSF-DMS-1638521 (SAMSI). RICAM Workshop on Multivariate Algorithms and Information-Based Complexity, November 9, 2018.

  2. Thank you. Thank you all for your participation. Thanks to Peter Kritzer and Annette Weihs for doing all the work of organizing.

  3. Context. Tidbits from talks this week, with my responses:
Many results for f in a ball (Henryk, Klaus, Greg, Stefan, Erich, ...). Response: let's obtain analogous results for f in a cone.
Let's learn the appropriate kernel from the function data (Houman). Response: that will only work for nice functions, i.e., those in a cone.
"This adaptive algorithm has no theory" (Klaus). Response: we want to construct adaptive algorithms with theory.
Tractability (Henryk)? Yes! POD weights (Greg)? Yes! Function values are expensive (Mac)? Yes!

  4. Multivariate Linear Problems. Given f ∈ F, find S(f) ∈ G, where S : F → G is linear, e.g.,
S(f) = ∫_{R^d} f(x) ϱ(x) dx (integration),
S(f) = f (function approximation),
S(f) = ∂f/∂x_1 (differentiation),
−∇²S(f) = f with S(f) = 0 on the boundary (Poisson solve).
Successful algorithms: A(C, Λ) := {A : C × (0, ∞) → G such that ‖S(f) − A(f, ε)‖_G ≤ ε for all f ∈ C ⊆ F, ε > 0}, where A(f, ε) may depend on function values (Λ^std), Fourier (series) coefficients (Λ^ser), or arbitrary linear functionals (Λ^all). Typical fixed-budget approximations (a concrete instance is sketched below):
S_app(f, n) = Σ_{i=1}^n f(x_i) g_i, g_i ∈ G (function values),
S_app(f, n) = Σ_{i=1}^n f̂_i g_i, g_i ∈ G (series coefficients),
S_app(f, n) = Σ_{i=1}^n L_i(f) g_i, g_i ∈ G (general linear functionals),
and A(f, ε) = S_app(f, n) plus a stopping criterion. C is a cone.
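To make the template S_app(f, n) = Σ_{i=1}^n f(x_i) g_i concrete, here is a minimal Python sketch, not from the talk, that instantiates it for the integration problem S(f) = ∫_0^1 f(x) dx with G = R; the midpoint nodes and weights are an illustrative assumption, and any other choice of x_i and g_i fits the same template.

```python
import numpy as np

# Minimal sketch of the fixed-budget approximation
#   S_app(f, n) = sum_{i=1}^n f(x_i) * g_i
# for S(f) = integral of f over [0, 1], so G = R and the g_i are
# scalar weights. Midpoint nodes/weights are one illustrative choice.
def s_app(f, n):
    x = (np.arange(n) + 0.5) / n  # nodes x_i in [0, 1]
    g = np.full(n, 1.0 / n)       # weights g_i in G = R
    return float(np.sum(f(x) * g))

print(s_app(np.cos, 64))  # ~ sin(1) = 0.8415..., using n = 64 function values
```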

  5. Issues.
Solvability [1]: A(C, Λ) ≠ ∅.
Construction: find A ∈ A(C, Λ).
Cost: cost(A, f, ε) = # of function data, and cost(A, C, ε, ρ) = max_{f ∈ C ∩ B_ρ} cost(A, f, ε), where B_ρ := {f ∈ F : ‖f‖_F ≤ ρ} (see the counting sketch after this slide).
Complexity [2]: comp(A(C, Λ), ε, ρ) = min_{A ∈ A(C, Λ)} cost(A, C, ε, ρ).
Optimality: cost(A, C, ε, ρ) ≤ comp(A(C, Λ), ωε, ρ).
Tractability [3]: comp(A(C, Λ), ε, ρ) ≤ C ρ^p ε^{−p} d^q.
Implementation in open source software.
[1] Kunsch, R. J., Novak, E. & Rudolf, D. Solvable Integration Problems and Optimal Sample Size Selection. Journal of Complexity. To appear (2018).
[2] Traub, J. F., Wasilkowski, G. W. & Woźniakowski, H. Information-Based Complexity (Academic Press, Boston, 1988).
[3] Novak, E. & Woźniakowski, H. Tractability of Multivariate Problems, Volume I: Linear Information. EMS Tracts in Mathematics 6 (European Mathematical Society, Zürich, 2008).
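Since cost(A, f, ε) counts function data, one way to measure it in practice is to instrument the input function with a counter. A hedged Python sketch follows; the algorithm A is a placeholder for any callable with the interface A(f, ε) from the slides, not a specific method from the talk.

```python
import numpy as np

# Sketch of measuring cost(A, f, eps) = "# of function data": wrap f
# in a counter, run the algorithm, and read off how many values it
# requested. `A` stands for any algorithm of the form A(f, eps).
class CountingFunction:
    def __init__(self, f):
        self.f = f
        self.n_evals = 0

    def __call__(self, x):
        self.n_evals += np.size(x)  # count each requested function value
        return self.f(x)

def cost(A, f, eps):
    cf = CountingFunction(f)
    A(cf, eps)         # run A on the instrumented input
    return cf.n_evals  # total function data used by A on f
```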

  6. Cones. Ball: B_ρ := {f ∈ F : ‖f‖_F ≤ ρ}. Assume the set of inputs, C ⊆ F, is a (non-convex) cone, not a ball. Cone means f ∈ C ⟹ af ∈ C for all a ∈ R. Cones are unbounded. If we can bound the error ‖S(f) − S_app(f, n)‖_G for f in the cone, then we can typically also bound the error for af (see the one-line calculation below). Philosophy: what we cannot observe about f is not much worse than what we can observe about f.
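The claim that a bound for f yields a bound for af is a one-line consequence of linearity: both S and the fixed-data approximation S_app(·, n) = Σ_{i=1}^n L_i(·) g_i are linear, so the error scales with |a|.

```latex
% Error scaling under the cone property: S and S_app(., n) are linear,
% so the approximation error for af is |a| times the error for f.
\[
  \| S(af) - S_{\mathrm{app}}(af, n) \|_{\mathcal{G}}
  = |a| \, \| S(f) - S_{\mathrm{app}}(f, n) \|_{\mathcal{G}}
  \qquad \text{for all } a \in \mathbb{R}.
\]
```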

  7. "But I Like Balls!" How might you construct an adaptive algorithm if you insist on using balls?
Step 1. Pick a ball with a default radius ρ, and assume the input f ∈ B_ρ.
Step 2. Choose n large enough so that ‖S(f) − S_app(f, n)‖_G ≤ ‖S − S_app(·, n)‖_{F→G} ρ ≤ ε, where S_app(f, n) = Σ_{i=1}^n L_i(f) g_i.
Step 3. Let f_min ∈ F be the minimum norm interpolant of the data L_1(f), ..., L_n(f).
Step 4. If C ‖f_min‖_F ≤ ρ for some preset inflation factor C, then return A(f, ε) = S_app(f, n); otherwise, set ρ = 2C ‖f_min‖_F and go to Step 2.
This succeeds for the cone defined as those functions in F whose norms are not much larger than the norms of their minimum norm interpolants. (See the sketch below.)
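Steps 1 through 4 describe a norm-inflation loop. Below is a hedged Python sketch of that loop; the callables s_app, err_bound (for the operator norm ‖S − S_app(·, n)‖_{F→G}), and fmin_norm (for ‖f_min‖_F) are hypothetical placeholders that a concrete choice of F, G, S, and Λ would have to supply, and err_bound(n) is assumed to decay to 0 as n grows.

```python
# Sketch of the adaptive loop in Steps 1-4. Hypothetical stand-ins:
#   s_app(f, n)     -> S_app(f, n), built from the data L_1(f),...,L_n(f)
#   err_bound(n)    -> ||S - S_app(., n)||_{F -> G}
#   fmin_norm(f, n) -> ||f_min||_F, norm of the minimum norm interpolant
def adaptive_A(f, eps, s_app, err_bound, fmin_norm, rho=1.0, C=2.0):
    while True:
        n = 1
        while err_bound(n) * rho > eps:  # Step 2: ball-based error bound
            n *= 2
        if C * fmin_norm(f, n) <= rho:   # Steps 3-4: data consistent with B_rho?
            return s_app(f, n)           # return A(f, eps) = S_app(f, n)
        rho = 2 * C * fmin_norm(f, n)    # inflate the ball, repeat Step 2
```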
