NUMERICAL INSTABILITY (CONT.)

$$G(x,y) = -\frac{1}{2\pi\lambda^2}\left(\log\|x-y\| + K_0(\lambda\|x-y\|)\right).$$

Why not use existing tech for $\log$ and $K_0$ and add the results together?

[Figure: error in the potential, gradient, and Hessian vs. $\lambda R$ for (a) the interior problem and (b) the exterior problem. The error climbs from near machine precision toward $O(1)$ as $\lambda R$ decreases.]

The error increases as the product of $\lambda = \sqrt{\mathrm{Re}/\delta t}$ and the radius of the disc $R$ goes to zero.
THE MEANING OF λR

Note that $\lambda = \sqrt{\mathrm{Re}/\delta t}$. The value of $\lambda R$ is small if
- the Reynolds number is small (viscous fluids),
- the grid is fine, or
- time steps are relatively long.

Note that $\lambda R < 1$ when $\delta t > \mathrm{Re}\, R^2$, i.e. when the CFL condition is violated. This regime is important for implicit methods for viscous fluids.
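The regimes above are easy to check numerically. A minimal sketch, with made-up values of $\mathrm{Re}$, $R$, and $\delta t$ (not numbers from the talk):

```python
import math

def lam(reynolds, dt):
    """lambda = sqrt(Re / dt), the modified-biharmonic parameter."""
    return math.sqrt(reynolds / dt)

# viscous fluid, fine grid: illustrative values only
Re, R = 1e-2, 1e-3
for dt in (1e-6, 1e-2, 1.0):
    lr = lam(Re, dt) * R
    cfl_violated = dt > Re * R**2
    print(f"dt={dt:g}: lambda*R={lr:.3g}, dt > Re*R^2: {cfl_violated}")
```

Note that $\lambda R < 1$ is algebraically equivalent to $\delta t > \mathrm{Re}\,R^2$, which the loop confirms for each time step.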
OUR GOAL

Our goal: analytical formulas for the low-rank interaction between well-separated points which are stable for any $\lambda R$.

Go back to basics: look at the separation-of-variables problem for the modified biharmonic equation.
SEPARATION OF VARIABLES

Let $\Omega$ be the interior or exterior of a disc of radius $R$ and consider the modified biharmonic equation:
$$\Delta(\Delta - \lambda^2)\, u = 0, \quad x \in \Omega, \qquad u = f, \quad \partial_n u = g, \quad x \in \partial\Omega.$$

Separation-of-variables representation:
$$u(r,\theta) = \sum_{n=-\infty}^{\infty} u_n(r)\, e^{in\theta}.$$

ODE for $u_n(r)$:
$$\left(\frac{d^2}{dr^2} + \frac{1}{r}\frac{d}{dr} - \frac{n^2}{r^2}\right)\left(\frac{d^2}{dr^2} + \frac{1}{r}\frac{d}{dr} - \frac{n^2}{r^2} - \lambda^2\right) u_n(r) = 0.$$
SEPARATION OF VARIABLES (CONT.)

ODE for $u_n(r)$:
$$\left(\frac{d^2}{dr^2} + \frac{1}{r}\frac{d}{dr} - \frac{n^2}{r^2}\right)\left(\frac{d^2}{dr^2} + \frac{1}{r}\frac{d}{dr} - \frac{n^2}{r^2} - \lambda^2\right) u_n(r) = 0.$$

Four linearly independent solutions: $r^{|n|}$, $I_n(\lambda r)$, $r^{-|n|}$, and $K_n(\lambda r)$.

Interior problem: by imposing continuity at $r = 0$, the functions $r^{|n|}$ and $I_n(\lambda r)$ are a basis for the interior problem.

Exterior problem: by imposing decay conditions at $r = \infty$, the functions $r^{-|n|}$ and $K_n(\lambda r)$ are a basis for the exterior problem.
A BAD BASIS (EXT.)

For the exterior problem, we have $u_n(r) = \alpha_n r^{-|n|} + \beta_n K_n(\lambda r)$.

Coefficient recovery problem:
$$\begin{pmatrix} R^{-|n|} & K_n(\lambda R) \\ -|n|\, R^{-|n|-1} & -\frac{\lambda}{2}\left(K_{n-1}(\lambda R) + K_{n+1}(\lambda R)\right) \end{pmatrix} \begin{pmatrix} \alpha_n \\ \beta_n \end{pmatrix} = \begin{pmatrix} f_n \\ g_n \end{pmatrix}.$$

This problem is ill-conditioned for small $\lambda R$. Intuitively, this is because $K_n(\lambda r)$ and $r^{-|n|}$ are similar functions for small $r$.

Asymptotic expansion for $K_n(\lambda r)$:
$$K_n(\lambda r) = \tfrac{1}{2}\left(\tfrac{1}{2}\lambda r\right)^{-|n|} \sum_{k=0}^{|n|-1} \frac{(|n|-k-1)!}{k!}\left(-\tfrac{1}{4}\lambda^2 r^2\right)^k + (-1)^{|n|+1} \ln\!\left(\tfrac{1}{2}\lambda r\right) I_n(\lambda r) + (-1)^{|n|}\, \tfrac{1}{2}\left(\tfrac{1}{2}\lambda r\right)^{|n|} \sum_{k=0}^{\infty} \left(\psi(k+1) + \psi(|n|+k+1)\right) \frac{\left(\tfrac{1}{4}\lambda^2 r^2\right)^k}{k!\,(|n|+k)!}.$$
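A quick way to see the ill-conditioning is to form the $2\times 2$ recovery matrix above numerically. A sketch using `scipy.special.kv` (the parameter values are illustrative, not from the talk):

```python
import numpy as np
from scipy.special import kv

def exterior_recovery_matrix(n, lam, R):
    """2x2 map from (alpha_n, beta_n) to (u_n(R), u_n'(R)) in the
    (r^{-|n|}, K_n(lambda r)) basis; K_n'(z) = -(K_{n-1}(z)+K_{n+1}(z))/2."""
    n = abs(n)
    z = lam * R
    return np.array([
        [R**(-n),           kv(n, z)],
        [-n * R**(-n - 1), -0.5 * lam * (kv(n - 1, z) + kv(n + 1, z))],
    ])

n, R = 3, 1.0
for lam in (1.0, 1e-2, 1e-4):
    c = np.linalg.cond(exterior_recovery_matrix(n, lam, R))
    print(f"lambda*R = {lam * R:g}: cond = {c:.2e}")
```

As $\lambda R$ shrinks, the two columns become nearly parallel (the $K_n$ column is dominated by its $r^{-|n|}$-like leading term) and the condition number blows up.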
A BETTER BASIS (EXT.)

We can define a new basis function for the exterior problem which is not asymptotically similar to $r^{-|n|}$ and $K_n$.

Definition of $Q_n$:
$$Q_n(r) = K_n(\lambda r) - \frac{2^{|n|-1}\,(|n|-1)!}{\lambda^{|n|}\, r^{|n|}}.$$

- $Q_n$ has a different leading-order term for small $\lambda$ and $R$.
- The pair $(Q_n, K_n)$ is a better-conditioned basis than $(r^{-|n|}, K_n)$ in the small $\lambda R$ regime.
- $Q_n$ is still a solution of the ODE for $u_n$ because it is a linear combination of $r^{-|n|}$ and $K_n$.
- It is simple to evaluate $Q_n$ with tweaks to existing software.
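One such tweak is to avoid the catastrophic cancellation in the defining subtraction: for small $\lambda r$ one can evaluate $Q_n$ from the expansion of $K_n$ with its leading singular term (the $k = 0$ term of the finite sum) deleted. A sketch of this idea, assuming scipy is available; `kmax` is an illustrative truncation, not a tuned value:

```python
import math
from scipy.special import iv, kv, psi

def Qn_series(n, lam, r, kmax=30):
    """Q_n(r) = K_n(lam*r) - 2^{|n|-1}(|n|-1)!/(lam*r)^{|n|}, evaluated by
    dropping the k = 0 (leading singular) term from the expansion of K_n.
    Intended for small lam*r; kmax truncates the convergent infinite sum."""
    n = abs(n)
    z = lam * r
    # finite sum, k = 1 .. |n|-1 (the k = 0 term is the subtracted singularity)
    s1 = sum(math.factorial(n - k - 1) / math.factorial(k) * (-0.25 * z * z)**k
             for k in range(1, n))
    total = 0.5 * (0.5 * z)**(-n) * s1
    # logarithmic term
    total += (-1)**(n + 1) * math.log(0.5 * z) * iv(n, z)
    # regular infinite sum, truncated at kmax
    s2 = sum((psi(k + 1) + psi(n + k + 1)) * (0.25 * z * z)**k
             / (math.factorial(k) * math.factorial(n + k)) for k in range(kmax))
    total += (-1)**n * 0.5 * (0.5 * z)**n * s2
    return total

# sanity check at moderate lam*r, where the naive subtraction is still OK
n, lam, r = 3, 1.0, 0.5
naive = kv(n, lam * r) - 2**(n - 1) * math.factorial(n - 1) / (lam * r)**n
print(Qn_series(n, lam, r), naive)
```

For small $\lambda r$ the naive subtraction cancels almost all digits, while the series form never forms the large singular term at all.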
A BAD BASIS (INT.)

For the interior problem, we have $u_n(r) = \alpha_n r^{|n|} + \beta_n I_n(\lambda r)$.

Coefficient recovery problem:
$$\begin{pmatrix} R^{|n|} & I_n(\lambda R) \\ |n|\, R^{|n|-1} & \frac{\lambda}{2}\left(I_{n-1}(\lambda R) + I_{n+1}(\lambda R)\right) \end{pmatrix} \begin{pmatrix} \alpha_n \\ \beta_n \end{pmatrix} = \begin{pmatrix} f_n \\ g_n \end{pmatrix}.$$

This problem is again ill-conditioned for small $\lambda R$.

Asymptotic expansion for $I_n(\lambda r)$:
$$I_n(\lambda r) = \sum_{k=0}^{\infty} \frac{\left(\frac{\lambda r}{2}\right)^{2k+|n|}}{k!\,(k+|n|)!} = \frac{1}{2^{|n|}\,|n|!}(\lambda r)^{|n|} + \frac{1}{2^{|n|+2}\,(|n|+1)!}(\lambda r)^{|n|+2} + \cdots$$
A BETTER BASIS (INT.)

Again, we can define a new basis function for the interior problem which is not asymptotically similar to $r^{|n|}$ and $I_n$.

Definition of $P_n$:
$$P_n(r) = I_n(\lambda r) - \frac{1}{|n|!}\left(\frac{\lambda r}{2}\right)^{|n|}.$$

- $P_n$ has a different leading-order term for small $\lambda$ and $R$.
- The pair $(r^{|n|}, P_n)$ is a better-conditioned basis than $(r^{|n|}, I_n)$ in the small $\lambda R$ regime.
- $P_n$ is still a solution of the ODE for $u_n$ because it is a linear combination of $r^{|n|}$ and $I_n$.
- It is simple to evaluate $P_n$ with tweaks to existing software.
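As with $Q_n$, subtracting the leading term directly loses digits when $\lambda r$ is small; a cancellation-free alternative is to sum the series for $I_n$ starting from $k = 1$. A minimal sketch (scipy assumed; `kmax` is an illustrative truncation):

```python
import math
from scipy.special import iv

def Pn_series(n, lam, r, kmax=30):
    """P_n(r) = I_n(lam*r) - (lam*r/2)^{|n|} / |n|!, summed term by term
    with the k = 0 term of the I_n series omitted (no cancellation)."""
    n = abs(n)
    half_z = 0.5 * lam * r
    return sum(half_z**(2 * k + n) / (math.factorial(k) * math.factorial(k + n))
               for k in range(1, kmax))

# sanity check against the naive subtraction at moderate lam*r
n, lam, r = 2, 1.0, 1.0
naive = iv(n, lam * r) - (0.5 * lam * r)**n / math.factorial(n)
print(Pn_series(n, lam, r), naive)
```

For tiny $\lambda r$ the series form returns the correct $O((\lambda r)^{|n|+2})$ value directly, whereas the naive difference of two nearly equal numbers returns mostly roundoff.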
NUMERICAL RESULTS (CONT.)

Question: what is the practical effect of the condition number of the coefficient recovery problem on the accuracy of the solution?
NUMERICAL RESULTS (CONT.)

Numerical experiment:
$$u(x;\lambda) = \sum_{j=1}^{n_s} \lambda^2 c_j\, G(x, s_j) + \lambda\, d_j\, \partial_{v_{j,1}} G(x, s_j) + q_j\, \partial_{v_{j,2}} \partial_{v_{j,3}} G(x, s_j).$$

For several values of $\lambda$ and $R$:
- Evaluate $u$ and $\partial_n u$ on $\partial\Omega$
- Solve the corresponding separation-of-variables problem (order $N = 50$, using 100 points on $\partial\Omega$) with the new and old basis functions
- Evaluate the error in the potential, gradient, and Hessian

Should be good to about machine precision, with some precision loss in the derivatives.

[Figure: source and target geometry for the experiment.]
NUMERICAL RESULTS (CONT.)

Errors for the exterior problem: $(r^{-|n|}, K_n)$ vs $(Q_n, K_n)$. Top row: $\lambda \to 0$. Bottom row: $R \to 0$.

[Figure: errors $E_u$, $E_g$, $E_h$ (potential, gradient, Hessian) vs. $\lambda R$. The $(r^{-|n|}, K_n)$ basis degrades as $\lambda R \to 0$, while the $(Q_n, K_n)$ basis remains near machine precision.]
NUMERICAL RESULTS (CONT.)

Errors for the interior problem: $(r^{|n|}, I_n)$ vs $(r^{|n|}, P_n)$. Top row: $\lambda \to 0$. Bottom row: $R \to 0$.

[Figure: errors $E_u$, $E_g$, $E_h$ (potential, gradient, Hessian) vs. $\lambda R$. The $(r^{|n|}, I_n)$ basis degrades as $\lambda R \to 0$, while the $(r^{|n|}, P_n)$ basis remains near machine precision.]
REALITY CHECK

How is this a decomposition? Recall
$$u(x_i) = \sum_{j=1}^{n} q_j\, \partial_{v_j} \partial_{w_j} G(x_i, s_j),$$
$$A = \begin{pmatrix} \partial_{v_1}\partial_{w_1} G(x_1, s_1) & \partial_{v_2}\partial_{w_2} G(x_1, s_2) & \cdots & \partial_{v_n}\partial_{w_n} G(x_1, s_n) \\ \partial_{v_1}\partial_{w_1} G(x_2, s_1) & \partial_{v_2}\partial_{w_2} G(x_2, s_2) & \cdots & \partial_{v_n}\partial_{w_n} G(x_2, s_n) \\ \vdots & \vdots & \ddots & \vdots \\ \partial_{v_1}\partial_{w_1} G(x_m, s_1) & \partial_{v_2}\partial_{w_2} G(x_m, s_2) & \cdots & \partial_{v_n}\partial_{w_n} G(x_m, s_n) \end{pmatrix}$$
for well-separated points.
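For well-separated clusters this matrix is numerically low-rank, which can be checked directly by inspecting its singular values. A sketch using the plain kernel $G$ (not the mixed derivatives of the slide) with made-up cluster geometry and $\lambda = 1$:

```python
import numpy as np
from scipy.special import k0

rng = np.random.default_rng(0)
lam = 1.0

def G(x, y):
    """Modified biharmonic Green's function (with the normalization above)."""
    d = np.linalg.norm(x - y)
    return -(np.log(d) + k0(lam * d)) / (2 * np.pi * lam**2)

# targets in a unit box at the origin, sources in a unit box centered at (5, 0)
m = n = 60
targets = rng.uniform(-0.5, 0.5, (m, 2))
sources = rng.uniform(-0.5, 0.5, (n, 2)) + np.array([5.0, 0.0])

A = np.array([[G(x, s) for s in sources] for x in targets])
s = np.linalg.svd(A, compute_uv=False)
print("normalized singular values:", s[:30] / s[0])
```

The singular values decay geometrically with a rate set by the separation ratio, which is exactly what makes a low-rank factorization $A \approx L R^\intercal$ possible.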
ANALYTICAL DECOMPOSITION

$$A = L R^\intercal.$$

The form of $L$ is straightforward:
$$L = \begin{pmatrix} Q_0(|x_1 - c|) & K_0(\lambda|x_1 - c|) & \cdots & Q_p(|x_1 - c|)\, e^{ip\theta_1} & K_p(\lambda|x_1 - c|)\, e^{ip\theta_1} \\ Q_0(|x_2 - c|) & K_0(\lambda|x_2 - c|) & \cdots & Q_p(|x_2 - c|)\, e^{ip\theta_2} & K_p(\lambda|x_2 - c|)\, e^{ip\theta_2} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ Q_0(|x_m - c|) & K_0(\lambda|x_m - c|) & \cdots & Q_p(|x_m - c|)\, e^{ip\theta_m} & K_p(\lambda|x_m - c|)\, e^{ip\theta_m} \end{pmatrix}$$

What is $R^\intercal$? It is the map from the sources to the coefficients: evaluate both $u$ and $\partial_n u$ on the disc boundary, separate each mode with the FFT, then solve a $2\times 2$ system per mode for the coefficients.

Note that there is an analytical formula for $R^\intercal$ [Askham, 2017].
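The mode-by-mode recovery step can be sketched end to end: sample $u$ and $\partial_r u$ on a circle, FFT to Fourier coefficients $f_n, g_n$, and solve the $2\times 2$ system per mode. For simplicity this sketch uses the old $(r^{-|n|}, K_n)$ basis at a moderate $\lambda R$, where it is still well-conditioned; the geometry and coefficients are made up:

```python
import numpy as np
from scipy.special import kv

lam, R, N, M = 1.0, 1.0, 8, 64  # modes |n| <= N, M samples on the circle
rng = np.random.default_rng(1)
alpha = rng.standard_normal(2 * N + 1)  # true coefficients, index n = -N..N
beta = rng.standard_normal(2 * N + 1)

def basis(n, r):
    """(value, d/dr) of r^{-|n|} and K_n(lam r); K_n' = -(K_{n-1}+K_{n+1})/2."""
    a = abs(n)
    val = (r**(-a), kv(a, lam * r))
    der = (-a * r**(-a - 1), -0.5 * lam * (kv(a - 1, lam * r) + kv(a + 1, lam * r)))
    return val, der

# synthesize u and du/dr on the circle from the known coefficients
theta = 2 * np.pi * np.arange(M) / M
u = np.zeros(M, dtype=complex)
du = np.zeros(M, dtype=complex)
for i, n in enumerate(range(-N, N + 1)):
    (v0, v1), (d0, d1) = basis(n, R)
    u += (alpha[i] * v0 + beta[i] * v1) * np.exp(1j * n * theta)
    du += (alpha[i] * d0 + beta[i] * d1) * np.exp(1j * n * theta)

# FFT separates the modes; then one 2x2 solve per mode recovers (alpha_n, beta_n)
fhat = np.fft.fft(u) / M   # Fourier coefficient f_n sits at index n mod M
ghat = np.fft.fft(du) / M
rec_alpha, rec_beta = [], []
for n in range(-N, N + 1):
    (v0, v1), (d0, d1) = basis(n, R)
    a, b = np.linalg.solve([[v0, v1], [d0, d1]], [fhat[n % M], ghat[n % M]])
    rec_alpha.append(a)
    rec_beta.append(b)

print(np.max(np.abs(np.array(rec_alpha) - alpha)))
```

At small $\lambda R$ this same pipeline with the old basis loses digits in the $2\times 2$ solves; swapping in the $(Q_n, K_n)$ pair is the fix the talk proposes.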
WHAT OF EFFICIENCY?

- Because the formulas for $L$ and $R^\intercal$ are known, forming these matrices is $O((m+n)p)$.
- The SVD, on the other hand, is $O(mn^2)$.
- Even randomized methods for the SVD are $O(mnp)$.

It is not always the case that sources are well-separated from targets. Can we make a stable FMM with the above?
AN FMM

The preceding provides a stable fast multipole method. A fast multipole method is based on:

1. a formula for representing the sum due to a localized subset of the points (a multipole expansion): $(Q_n, K_n)$
2. a formula for representing the sum due to points outside of a disc (a local expansion): $(r^{|n|}, P_n)$
3. formulas for translating between these representations (translation operators): see the preprint!
4. a hierarchical organization of source and target points in space
COMPUTING THE PARTICULAR SOLUTION
EVALUATING THE PARTICULAR SOLUTION

To compute the particular solution, we need to evaluate integrals of the form
$$v(x) = Vf(x) := \int_\Omega K(x, y)\, f(y)\, dy,$$
where $K(x,y) = \log|x-y|$ or $K(x,y) = K_0(\lambda|x-y|)$.

- No solve, just apply
- Weakly singular integrand
- Expensive on an unstructured discretization (adaptive quadrature, etc.)

Fast methods for regular domains: disc solvers, "box codes" (Ethridge and Greengard, Cheng et al., Langston and Zorin).
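To see why the singularity is only "weak": in polar coordinates about the target, the Jacobian $r\,dr$ tames both kernels, and ordinary adaptive quadrature recovers the radial integrals to high accuracy. A sketch for a target at the center of a disc of radius $R$ with $f \equiv 1$, checked against the closed forms $\int_0^R r\log r\,dr = \frac{R^2}{2}\log R - \frac{R^2}{4}$ and $\int_0^R r\,K_0(\lambda r)\,dr = (1 - \lambda R\, K_1(\lambda R))/\lambda^2$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import k0, k1

R, lam = 1.0, 2.0

# radial integrals; the angular integral just contributes a factor 2*pi here
log_val, _ = quad(lambda r: np.log(r) * r, 0.0, R)
k0_val, _ = quad(lambda r: k0(lam * r) * r, 0.0, R)

log_exact = R**2 / 2 * np.log(R) - R**2 / 4
k0_exact = (1.0 - lam * R * k1(lam * R)) / lam**2

print(log_val, log_exact)  # integrand is weakly singular at r = 0, yet integrable
print(k0_val, k0_exact)
```

The difficulty in practice is not integrability but doing this cheaply for every target on an unstructured mesh, which is what box codes address.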
BOX CODES, IN BRIEF

Box codes (typically) work on level-restricted trees and are very efficient (density $f$ defined on leaves):
- Limited number of possible local interactions (precomputation of integrals to near machine precision)
- (Plane wave) FMM for the far field
- Very fast, even on adaptive grids
BOX CODE SPEED
BOX CODE ERROR ESTIMATE

The application of a bounded operator is easy to analyze. The box code computes the volume integral at collocation nodes to a specified precision.

Notation:
- $\tilde f$: approximation to $f$ by polynomials on each leaf
- $\tilde V \tilde f(x)$: value of $V \tilde f(x)$ computed using the box code
- $\epsilon$: precision of the FMM

From multipole estimates:
$$|\tilde V \tilde f(x) - V \tilde f(x)| \le \epsilon\, \|\tilde f\|_1.$$

From the triangle inequality and boundedness of $V$:
$$\frac{|\tilde V \tilde f(x) - Vf(x)|}{\|\tilde f\|_\infty} \le \epsilon\, |\Omega| + C(\Omega)\, \frac{\|f - \tilde f\|_\infty}{\|\tilde f\|_\infty}.$$

This gives an a priori error estimate (similar for $\nabla V$).
EMBEDDING IN A BOX

Let $\Omega$ be contained in a box $\Omega_B$ and let $f_e$ with $f_e|_\Omega = f$ be defined on all of $\Omega_B$. Then
$$Vf_e(x) = \int_{\Omega_B} G_L(x, y)\, f_e(y)\, dy$$
is another particular solution, and $Vf_e$ can be computed using a box code.

[Figure: the domain $\Omega$ with an adaptive tree structure overlaying it.]