Programming Derivatives of RBFs
Robert Schaback
Georg-August-Universität Göttingen
Akademie der Wissenschaften zu Göttingen
ICERM, August 2017
Overview
Motivation
Examples
Theory
Remarks on Implementation
Summary and Outlook
Motivation
Motivation: Need for Derivatives
For unsymmetric collocation you have to take $\Delta$
For symmetric collocation you have to take $\Delta$ and $\Delta^2$
For divergence-free vector fields derived from kernels $K$ you need $(\nabla\nabla^T - \Delta\cdot Id)\,K$
Students never get derivatives right
Idea: Recursion
Observation: Derivatives of RBFs often are (modified) RBFs
Assume RBF family $\{\phi_p(r)\}_p$ parametrized by $p$
Express derivatives via $\phi_p(r)$, $\phi_{p-1}(r)$ etc.
Observation: Strange pattern of derivative recursions
Observation: The pattern comes from the $f$-form of RBFs
Idea: Recursion on f-form
Write $\phi_p(r) = f_p(r^2/2)$, i.e. $\phi_p(\sqrt{2s}) = f_p(s)$, $s = r^2/2$
Well-known from Bochner-Schoenberg theory
Goal: Express $f_p$ derivatives via $f_{p-1}$, $f_{p-2}$ etc.
Examples
Example: Gaussian
$\phi(r) = \exp(-r^2/2)$
$f(s) = \exp(-s)$
$f'(s) = -f(s)$
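As a minimal illustration of the recursion in code (not from the talk; the function name fgauss and its interface are invented), the k-th s-derivative of the Gaussian f-form is simply $(-1)^k \exp(-s)$:

function v = fgauss(s, k)
% k-th derivative of the Gaussian f-form f(s) = exp(-s),
% obtained by applying f'(s) = -f(s) k times: f^(k)(s) = (-1)^k * exp(-s).
v = (-1)^k * exp(-s);
end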
Example: Multiquadrics
$\phi_m(r) = (1 + r^2/2)^{-m}$
$f_m(s) = (1 + s)^{-m}$
$f_m'(s) = -m\, f_{m+1}(s)$
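A corresponding sketch (again with an invented name, fmq) applies $f_m' = -m\,f_{m+1}$ k times, so $f_m^{(k)}(s) = (-1)^k\, m(m+1)\cdots(m+k-1)\,(1+s)^{-m-k}$:

function v = fmq(s, m, k)
% k-th derivative of the multiquadric f-form f_m(s) = (1+s)^(-m),
% via the recursion f_m' = -m*f_{m+1}:
% f_m^(k)(s) = (-1)^k * m*(m+1)*...*(m+k-1) * (1+s)^(-m-k).
c = (-1)^k * prod(m:m+k-1);      % coefficient; prod of empty range is 1 for k = 0
v = c * (1 + s).^(-(m + k));
end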
Example: Powers
$\phi_m(r) = r^m$
$f_m(s) = (\sqrt{2s})^m$
$\frac{d}{ds}\sqrt{2s} = 1/\sqrt{2s}$
$f_m'(s) = m(\sqrt{2s})^{m-1}/\sqrt{2s} = m\, f_{m-2}(s)$
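A similar sketch (invented name fpow) iterates $f_m' = m\, f_{m-2}$:

function v = fpow(s, m, k)
% k-th derivative of the power f-form f_m(s) = (2s)^(m/2),
% via the recursion f_m' = m*f_{m-2}:
% f_m^(k)(s) = m*(m-2)*...*(m-2k+2) * (2s)^((m-2k)/2).
c = prod(m:-2:m-2*k+2);          % product of the k factors m, m-2, ...; 1 for k = 0
v = c * (2*s).^((m - 2*k)/2);
end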
Example: Thin-Plate Splines
$\phi_{2m}(r) = r^{2m}\log r$
$f_{2m}(s) = (\sqrt{2s})^{2m}\log(\sqrt{2s})$
$f_{2m}'(s) = 2m(\sqrt{2s})^{2m-1}\log(\sqrt{2s})/\sqrt{2s} + (\sqrt{2s})^{2m}/(2s) = 2m\, f_{2m-2}(s) + (2s)^{m-1}$, where $(2s)^{m-1}$ is a polynomial
The polynomial part vanishes in the conditionally positive definite setting
Dealing with powers is clear
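For the first derivative, a sketch (invented name ftps_d1) has to carry the polynomial term explicitly:

function v = ftps_d1(s, m)
% First derivative of the thin-plate spline f-form
% f_{2m}(s) = (2s)^m * log(sqrt(2s)), using
% f_{2m}'(s) = 2m*f_{2m-2}(s) + (2s)^(m-1).
% (s = 0 needs special handling because of the logarithm.)
f2m2 = (2*s).^(m-1) .* log(sqrt(2*s));   % f_{2m-2}(s)
v = 2*m*f2m2 + (2*s).^(m-1);
end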
Matérn-Sobolev Kernels
$\phi_\nu(r) = r^\nu K_\nu(r)$
$f_\nu(s) = (\sqrt{2s})^\nu K_\nu(\sqrt{2s})$
$\frac{d}{dz}\bigl(z^\nu K_\nu(z)\bigr) = -z^\nu K_{\nu-1}(z)$
$f_\nu'(s) = -(\sqrt{2s})^\nu K_{\nu-1}(\sqrt{2s})/\sqrt{2s} = -f_{\nu-1}(s)$
This would not work without $s = r^2/2$
$\nu = m - d/2$ $\Rightarrow$ $\nu - 1$ means $m \Rightarrow m - 1$ or $d \Rightarrow d + 2$
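A direct sketch (invented name fmatern) uses $f_\nu^{(k)}(s) = (-1)^k f_{\nu-k}(s)$ together with MATLAB's built-in besselk:

function v = fmatern(s, nu, k)
% k-th derivative of the Matern-Sobolev f-form
% f_nu(s) = (sqrt(2s))^nu * K_nu(sqrt(2s)),
% via the recursion f_nu' = -f_{nu-1}:  f_nu^(k) = (-1)^k * f_{nu-k}.
r = sqrt(2*s);
v = (-1)^k * r.^(nu - k) .* besselk(abs(nu - k), r);   % K_{-nu} = K_nu
end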
Wendland Kernels
$\phi_{d,k} \in C^{2k}$, SPD on $\mathbb{R}^d$, minimal degree $\lfloor d/2\rfloor + 3k + 1$
$\phi_\ell(r) := (1-r)_+^\ell$
$(I\phi)(r) := \int_r^\infty t\,\phi(t)\,dt$
$\phi_{d,k}(r) := I^k \phi_{\lfloor d/2\rfloor + k + 1}(r)$
$f_{d,k}(s) := I^k \phi_{\lfloor d/2\rfloor + k + 1}(\sqrt{2s})$
$(I\phi)'(r) = -r\,\phi(r)$
$f_{d,k}'(s) = -\sqrt{2s}\, I^{k-1}\phi_{\lfloor d/2\rfloor + k + 1}(\sqrt{2s})/\sqrt{2s} = -I^{k-1}\phi_{\lfloor (d+2)/2\rfloor + (k-1) + 1}(\sqrt{2s}) = -f_{d+2,k-1}(s)$
This would not work without $s = r^2/2$
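As a concrete instance not spelled out on the slide: $\phi_{3,1}(r) = (1-r)_+^4(4r+1)/20$ and $\phi_{5,0}(r) = (1-r)_+^3$, so the recursion predicts $f_{3,1}'(s) = -f_{5,0}(s)$. A small check against a finite difference:

% Check f_{3,1}'(s) = -f_{5,0}(s) for the Wendland kernel phi_{3,1}
% (phi_{3,1}(r) = (1-r)_+^4 (4r+1)/20, phi_{5,0}(r) = (1-r)_+^3).
f31 = @(s) max(1-sqrt(2*s),0).^4 .* (4*sqrt(2*s)+1) / 20;
f50 = @(s) max(1-sqrt(2*s),0).^3;
s = 0.1;  h = 1e-6;
deriv_fd  = (f31(s+h) - f31(s-h)) / (2*h);   % finite-difference derivative
deriv_rec = -f50(s);                         % derivative via the recursion
disp([deriv_fd, deriv_rec])                  % should agree up to finite-difference error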
Laplacian
$\Delta\phi(r) = \phi''(r) + (d-1)\,\phi'(r)/r$ (singular!)
$\phi(r) = f(r^2/2)$
$\phi'(r) = r\,f'(r^2/2)$
$\phi''(r) = r^2 f''(r^2/2) + f'(r^2/2)$
$\Delta\phi = r^2 f''(r^2/2) + d\,f'(r^2/2) = 2s\,f''(s) + d\,f'(s)$
$\Delta^2\phi = 4s^2 f^{(4)}(s) + 4(d+2)s\,f^{(3)}(s) + d(d+2)\,f''(s)$
No visible singularities in f-form
Other derivatives via e.g. $\frac{\partial}{\partial x}\,\phi(r) = \phi'(r)\,\frac{x}{r} = r\,f'(r^2/2)\,\frac{x}{r} = x\,f'(s)$
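A small check of the f-form Laplacian (assumed test setup; the Gaussian is used only because its Laplacian has a known closed form):

% Laplacian of a radial kernel via the f-form: Delta(phi)(x) = 2s*f''(s) + d*f'(s),
% with s = ||x||^2/2.  Test with the Gaussian f(s) = exp(-s) in d dimensions:
d = 3;
x = [0.3 -0.5 0.7];
s = sum(x.^2)/2;
f1 = -exp(-s);                  % f'(s)  for the Gaussian
f2 =  exp(-s);                  % f''(s) for the Gaussian
lap = 2*s*f2 + d*f1;            % = (2s - d)*exp(-s), the known closed form
disp(lap)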
Theory
General Result
Theorem (Dimension walk): The radial Fourier transform $F_d$ on $\mathbb{R}^d$ satisfies $F_{d+2} f_p' = -F_d f_p$
Closedness assumption between $\{f_p\}_p$ and $\{g_q\}_q$: $F_d f_p = g_{A(d,p)}$, $F_d g_q = f_{B(d,q)}$
Theorem: $f_p' = -F_{d+2} F_d f_p = -f_{B(d+2,\,A(d,p))}$
No separate derivative program needed
Derivatives and dimensions may be fractional
Proof of Dimension Walk
Radial Fourier transform $F_\nu$ for $\nu = (d-2)/2$:
$(F_\nu f_p)(t) = \int_0^\infty f_p(s)\, s^\nu H_\nu(st)\, ds$
$f_p(s) = \int_0^\infty (F_\nu f_p)(t)\, t^\nu H_\nu(ts)\, dt$
$H_\nu(-z^2/4) = (z/2)^{-\nu} J_\nu(z) = \sum_{k=0}^\infty \frac{(-z^2/4)^k}{k!\,\Gamma(k+\nu+1)}$
$H_\nu' = -H_{\nu+1}$, $d \Rightarrow d+2$
$f_p'(s) = \int_0^\infty (F_\nu f_p)(t)\, t^\nu\, t\, H_\nu'(ts)\, dt = -\int_0^\infty (F_\nu f_p)(t)\, t^{\nu+1} H_{\nu+1}(ts)\, dt = -F_{\nu+1}^{-1} F_\nu(f_p)(s)$
$F_{\nu+1} f_p' = -F_\nu f_p$
Remarks on Implementation
Matrix Formulation
Kernel matrix: $\phi(\|x_j - y_k\|_2) = f(\|x_j - y_k\|_2^2/2)$

function dsqh=distsqh(X, Y)
% X and Y are matrices with points as rows.
% Returns the matrix of halved squared distances ||x_j - y_k||^2/2,
% using ||x - y||^2/2 = ||x||^2/2 + ||y||^2/2 - x'*y.
nX=length(X(:,1)); nY=length(Y(:,1));
Xsh=sum((X.*X)')/2;                      % row vector of ||x_j||^2/2
Ysh=sum((Y.*Y)')/2;                      % row vector of ||y_k||^2/2
dsqh=repmat(Xsh',1,nY)+repmat(Ysh,nX,1)-X*Y';
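A possible usage sketch (assumed setup, not from the slides): assemble a Gaussian kernel matrix, an evaluation matrix, and a Laplacian matrix from the halved squared distances, using the f-form derivatives from the earlier slides:

% Usage sketch (assumed setup): Gaussian kernel in f-form f(s) = exp(-s)
% on points scattered in [0,10]^2.
X = 10*rand(20,2);  Y = 10*rand(10,2);  d = 2;
A = exp(-distsqh(X,X));                  % kernel matrix f(s) = exp(-s)
E = exp(-distsqh(Y,X));                  % evaluation matrix on Y
L = (2*distsqh(Y,X) - d).*E;             % Laplacian matrix 2s*f''(s) + d*f'(s)
fv = sin(X(:,1));                        % some data values
u  = E * (A \ fv);                       % evaluate the kernel interpolant on Y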