Radial basis function partition of unity methods for PDEs

Elisabeth Larsson, Scientific Computing, Uppsala University

Credit goes to a number of RBF-PUM collaborators:
Alfa Heryudono, Ali Safdari, Alison Ramage, Lina von Sydow, Victor Shcherbakov, Igor Tominec

Localized Kernel-Based Meshless Methods for Partial Differential Equations, ICERM, Aug 8, 2017
Outline

Introduction
RBF-PUM
Theoretical results
Numerical results
RBF-QR
Convergence
Robustness
3-D results
Cost
Adaptivity
Summary
Motivation for RBF-PUM

Global RBF approximation
+ Ease of implementation in any dimension.
+ Flexibility with respect to geometry.
+ Potentially spectral convergence rates.
− Computationally expensive for large problems.

RBF-PUM
◮ Local RBF approximations on patches are blended into a global solution using a partition of unity.
◮ Provides spectral or high-order convergence.
◮ Solves the computational cost issues.
◮ Allows for local adaptivity.

Wendland (2002), Fasshauer (2007), Cavoretto, De Rossi, Perracchione et al., Larsson, Heryudono et al.
The RBF partition of unity method

Global approximation
$\tilde{u}(x) = \sum_{j=1}^{P} w_j(x)\,\tilde{u}_j(x)$,
where $\tilde{u}_j$ is the local RBF approximation on patch $\Omega_j$.

PU weight functions
Generate weight functions from compactly supported $C^2$ Wendland functions
$\psi(\rho) = (4\rho + 1)(1 - \rho)_+^4$
using Shepard's method: $w_i(x) = \psi_i(x) / \sum_{j=1}^{M} \psi_j(x)$.

Cover
Each $x \in \Omega$ must be in the interior of at least one $\Omega_j$.
Patches that do not contain unique points are pruned.
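As an illustration, a minimal sketch (not from the slides) of the partition of unity weights, assuming circular patches described by arrays of centers and radii; the function and variable names are illustrative only:

    import numpy as np

    def wendland_c2(rho):
        # psi(rho) = (4*rho + 1) * (1 - rho)_+^4, compactly supported on [0, 1]
        return (4 * rho + 1) * np.maximum(1 - rho, 0) ** 4

    def shepard_weights(x, centers, radii):
        # x: (n, d) evaluation points, centers: (M, d) patch centers, radii: (M,)
        # Returns the (n, M) array w_i(x) = psi_i(x) / sum_j psi_j(x).
        dist = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        psi = wendland_c2(dist / radii[None, :])
        return psi / psi.sum(axis=1, keepdims=True)  # requires every x to lie in at least one patch

The normalization in the last line is exactly why the cover condition matters: if some evaluation point lies outside all patches, the denominator vanishes.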
Differentiating RBF-PUM approximations

Applying an operator globally
$\Delta\tilde{u} = \sum_{i=1}^{M} \left[ \Delta w_i\, \tilde{u}_i + 2\nabla w_i \cdot \nabla\tilde{u}_i + w_i\, \Delta\tilde{u}_i \right]$

Local differentiation matrices
Let $u_i$ be the vector of nodal values in patch $\Omega_i$; then
$u_i = A\lambda_i$, where $A_{ij} = \phi_j(x_i)$ $\Rightarrow$
$Lu_i = A_L A^{-1} u_i$, where $A_{L,ij} = L\phi_j(x_i)$.

The global differentiation matrix
Local contributions are added into the global matrix.
[Figure: sparsity pattern of the global differentiation matrix, nz = 10801]
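A minimal sketch of the local differentiation matrix $A_L A^{-1}$ for the Laplacian, assuming Gaussian basis functions $\phi_j(x) = e^{-\varepsilon^2\|x - x_j\|^2}$ in 2-D; the names are illustrative and not from the slides:

    import numpy as np

    def local_laplacian_matrix(nodes, eps):
        # nodes: (n, 2) node coordinates of one patch, eps: RBF shape parameter
        diff = nodes[:, None, :] - nodes[None, :, :]
        r2 = np.sum(diff ** 2, axis=2)
        A = np.exp(-eps ** 2 * r2)                    # A_ij = phi_j(x_i)
        A_L = (4 * eps ** 4 * r2 - 4 * eps ** 2) * A  # A_L,ij = Laplacian of phi_j at x_i
        return np.linalg.solve(A.T, A_L.T).T          # A_L @ inv(A), without forming inv(A) explicitly

For small $\varepsilon$ the matrix $A$ becomes severely ill-conditioned in this direct basis, which is what motivates the RBF-QR approach discussed later.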
An RBF-PUM collocation method

Choices & Implications
◮ Nodes and evaluation points coincide.
  Square matrix, iterative solver available (Heryudono, Larsson, Ramage, von Sydow 2015).
◮ Global node set.
  Solutions $\tilde{u}_i(x_k) = \tilde{u}_j(x_k)$ for $x_k$ in overlap regions.
◮ Patches are cut by the domain boundary.
  Potentially strange shapes and lowered local order.
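To make the square-matrix point concrete, a minimal sketch (not from the slides) of assembling and solving a collocated Poisson problem from blended local operators; `patch_idx`, `local_ops` (the weight-blended local Laplacians $L_j$, cf. the local-matrix slide below), `f`, `g`, and `on_boundary` are assumed inputs:

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spsolve

    def solve_poisson_collocation(n, patch_idx, local_ops, f, g, on_boundary):
        # Accumulate local contributions; duplicate (row, col) entries are summed.
        rows, cols, vals = [], [], []
        for idx, L_j in zip(patch_idx, local_ops):
            r, c = np.meshgrid(idx, idx, indexing="ij")
            rows.append(r.ravel()); cols.append(c.ravel()); vals.append(np.asarray(L_j).ravel())
        L = sp.csr_matrix((np.concatenate(vals), (np.concatenate(rows), np.concatenate(cols))),
                          shape=(n, n)).tolil()
        rhs = np.array(f, dtype=float)
        # Dirichlet boundary conditions: replace boundary rows by identity rows.
        for k in np.flatnonzero(on_boundary):
            L[k, :] = 0.0
            L[k, k] = 1.0
            rhs[k] = g[k]
        return spsolve(L.tocsr(), rhs)

A direct sparse solve is used here for brevity; the cited work uses an iterative solver for larger problems.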
An RBF-PUM least squares method

Choices & Implications
◮ Each patch has an identical node layout.
  The computational cost for setup is drastically reduced.
◮ Evaluation nodes are uniform.
  Easy to generate both local and global high-quality node sets.
◮ Patches have nodes outside the domain.
  Good for local order, but requires denser evaluation points.
The RBF-PUM interpolation error

$E_\alpha = D^\alpha(I(u) - u) = \sum_{j=1}^{M} \sum_{|\beta| \le |\alpha|} \binom{\alpha}{\beta} D^\beta w_j\, D^{\alpha-\beta}(I(u_j) - u_j)$

The weight functions
For $C^k$ weight functions and $|\alpha| \le k$,
$\|D^\alpha w_j\|_{L^\infty(\Omega_j)} \le \dfrac{C_\alpha}{H_j^{|\alpha|}}, \quad H_j = \mathrm{diam}(\Omega_j)$.

The local RBF interpolants (Gaussians)
Define the local fill distance $h_j$ (Rieger, Zwicknagl 2010):
$\|D^\alpha(I(u_j) - u_j)\|_{L^\infty(\tilde{\Omega}_j)} \le c_{\alpha,j}\, h_j^{m_j - d/2 - |\alpha|}\, \|u_j\|_{\mathcal{N}(\tilde{\Omega}_j)}$,
$\|D^\alpha(I(u_j) - u_j)\|_{L^\infty(\tilde{\Omega}_j)} \le e^{\gamma_{\alpha,j} \log(h_j)/\sqrt{h_j}}\, \|u_j\|_{\mathcal{N}(\tilde{\Omega}_j)}$.
RBF-PUM interpolation error estimates

Algebraic estimate for $H_j/h_j = c$
$\|E_\alpha\|_{L^\infty(\Omega)} \le K \max_{1 \le j \le M} C_j H_j^{m_j - d/2 - |\alpha|} \|u\|_{\mathcal{N}(\tilde{\Omega}_j)}$

$K$ — maximum number of patches $\Omega_j$ overlapping at one point
$m_j$ — related to the local number of points
$\tilde{\Omega}_j$ — $\Omega_j \cap \Omega$

Spectral estimate for fixed partitions
$\|E_\alpha\|_{L^\infty(\Omega)} \le K \max_{1 \le j \le M} C e^{\gamma_j \log(h_j)/\sqrt{h_j}} \|u\|_{\mathcal{N}(\tilde{\Omega}_j)}$

Implications
◮ A bad patch reduces the global order.
◮ There are two refinement modes.
◮ The estimates give guidelines for adaptive refinement.
Error estimate for PDE approximation

The PDE estimate
$\|\tilde{u} - u\|_{L^\infty(\Omega)} \le C_P E_L + C_P \|L_{\cdot,X} L_{Y,X}^{+}\|_\infty \,(C_M \delta_M + E_L)$,
where $C_P$ is a well-posedness constant and $C_M \delta_M$ is a small multiple of the machine precision.

Implications
◮ The interpolation error $E_L$ provides the convergence rate.
◮ The norm of the inverse/pseudoinverse can be large.
◮ The matrix norm is better with oversampling.
◮ The finite precision accuracy limit involves the matrix norm.

Follows strategies from Schaback (2007) and Schaback (2016).
Does RBF-PUM require stable methods?

In order to achieve convergence we have two options:
◮ Refine patches such that the diameter $H$ decreases.
◮ Increase node numbers such that $N_j$ increases.
◮ In both cases, the theory assumes $\varepsilon$ fixed.

The effect of patch refinement
[Figure: local approximations on patches of size $H = 1$, $H = 0.5$, and $H = 0.25$, all with $\varepsilon = 4$]

The RBF-QR method: stable as $\varepsilon \to 0$ for $N \gg 1$
Effectively a change to a stable basis.

Fornberg, Piret (2007), Fornberg, Larsson, Flyer (2011), Larsson, Lehto, Heryudono, Fornberg (2013)
Effects on the local matrices

Local contribution to a global Laplacian
$L_j = (W_j^{\Delta} A_j + 2 W_j^{\nabla} \odot A_j^{\nabla} + W_j A_j^{\Delta})\, A_j^{-1}$.

Typically: $A_j$ ill-conditioned, $L_j$ better conditioned.
[Figure: relative error in $A_j^{\Delta} A_j^{-1}$ without RBF-QR for N = 10, 20, 40, as a function of $\varepsilon$]

RBF-QR for accuracy
◮ Stable for small RBF shape parameters $\varepsilon$
◮ Change of basis $\tilde{A} = A\, Q\, R_1^{-T} D_1^{-T}$
◮ Same result in theory: $\tilde{A}_L \tilde{A}^{-1} = A_L A^{-1}$
◮ More accurate in practice
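A minimal sketch of the blended local operator $L_j$, again assuming Gaussian RBFs in 2-D and the direct (non-RBF-QR) basis; here w, w_x, w_y, w_lap are the PU weight and its derivatives evaluated at the patch nodes (e.g. obtained from the Shepard weights sketched earlier), and the names are illustrative:

    import numpy as np

    def local_operator(nodes, eps, w, w_x, w_y, w_lap):
        # Builds L_j = (W^Delta A + 2 W^grad (.) A^grad + W A^Delta) A^{-1},
        # where the (.) term sums the x- and y-component products.
        diff = nodes[:, None, :] - nodes[None, :, :]
        r2 = np.sum(diff ** 2, axis=2)
        A = np.exp(-eps ** 2 * r2)
        A_x = -2 * eps ** 2 * diff[:, :, 0] * A         # d/dx of phi_j at x_i
        A_y = -2 * eps ** 2 * diff[:, :, 1] * A         # d/dy of phi_j at x_i
        A_lap = (4 * eps ** 4 * r2 - 4 * eps ** 2) * A  # Laplacian of phi_j at x_i
        B = (w_lap[:, None] * A
             + 2 * (w_x[:, None] * A_x + w_y[:, None] * A_y)
             + w[:, None] * A_lap)
        return np.linalg.solve(A.T, B.T).T              # B @ inv(A_j)

With RBF-QR, $A_j$ and the derivative matrices would instead be evaluated in the stable basis; the product $L_j$ is the same in exact arithmetic but more accurate in practice.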
Poisson test problems in 2-D

Domain $\Omega = [-2, 2]^2$. Uniform nodes in the collocation case.

$u_R(x, y) = \dfrac{1}{25x^2 + 25y^2 + 1}$

$u_T(x, y) = \sin(2(x - 0.1)^2)\cos((x - 0.3)^2) + \sin^2((y - 0.5)^2)$
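For reference, the two manufactured solutions as plain Python functions; deriving the corresponding Poisson right-hand sides by applying the Laplacian is assumed to be done separately:

    import numpy as np

    def u_runge(x, y):
        return 1.0 / (25 * x ** 2 + 25 * y ** 2 + 1)

    def u_trig(x, y):
        return np.sin(2 * (x - 0.1) ** 2) * np.cos((x - 0.3) ** 2) + np.sin((y - 0.5) ** 2) ** 2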
Error results with and without RBF-QR

◮ Least squares RBF-PUM
◮ Fixed shape $\varepsilon = 0.5$, or scaled such that $\varepsilon h = c$
◮ Left: 5 × 5 patches; Right: 55 points per patch

[Figure: errors in spectral mode and algebraic mode for the RBF-QR, Direct, and Scaled approaches, with reference slopes p = −7 and p = 1]

◮ With RBF-QR, better results for $H/h$ large.
◮ The scaled approach is good until saturation.
Convergence as a function of patch size

[Figure: error versus patch size $H$ for the Runge (left) and Trig (right) test problems]

Collocation (dashed lines) and least squares (solid lines).
◮ Points per patch $n$ = 28, 55, 91.
◮ Theoretical rates $p$ = 4, 7, 10.
◮ Numerical rates $p \approx$ 3.9, 6.9, 9.8.