On a new orthonormal basis for RBF native spaces and its fast computation
Stefano De Marchi and Gabriele Santin
Torino, June 11, 2014
Outline
1 Introduction
2 Change of basis
3 WSVD Basis
4 The new basis
5 Numerical Results
6 Further work
7 References
Introduction: RBF Approximation

1. Data: $\Omega \subset \mathbb{R}^n$, a set of distinct points $X = \{x_1, \dots, x_N\} \subset \Omega$, and samples $f_1, \dots, f_N$ of a test function $f$, where $f_i = f(x_i)$.

2. Approximation setting:
- kernel $K = K_\varepsilon$, positive definite and radial;
- native space $\mathcal{N}_K(\Omega)$, the Hilbert space whose reproducing kernel is $K$;
- finite-dimensional subspace $\mathcal{N}_K(X) = \mathrm{span}\{K(\cdot, x) : x \in X\} \subset \mathcal{N}_K(\Omega)$.

Aim: find $s_f \in \mathcal{N}_K(X)$ such that $s_f \approx f$.
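To make the setting concrete, here is a minimal sketch (not from the slides) of interpolation in the standard basis of translates with a Gaussian kernel; the domain, shape parameter, and test function are illustrative assumptions. Later sketches reuse this kernel matrix $A$.

```python
import numpy as np

def gaussian_kernel(X, Y, eps=2.0):
    # K_eps(x, y) = exp(-eps^2 ||x - y||^2), evaluated for all pairs of rows of X and Y
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return np.exp(-(eps ** 2) * d2)

# Illustrative data: N scattered points in Omega = [0, 1]^2 and an assumed test function f
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(100, 2))
f = lambda P: np.exp(P[:, 0] * P[:, 1])
fX = f(X)

# Interpolant in the standard basis of translates: s_f = sum_i c_i K(., x_i)
A = gaussian_kernel(X, X)          # kernel matrix A_ij = K(x_i, x_j)
c = np.linalg.solve(A, fX)         # may be ill-conditioned: this is the instability issue

# Evaluate s_f on new points and check the error
Xe = rng.uniform(0.0, 1.0, size=(500, 2))
err = np.abs(gaussian_kernel(Xe, X) @ c - f(Xe)).max()
print("max error on test points:", err)
```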
Introduction: Problem setting and questions

Problem: the standard basis of translates of $\mathcal{N}_K(X)$ (which is data-dependent) is unstable and not flexible.

Question 1: Is it possible to find a "better" basis $U$ of $\mathcal{N}_K(X)$?
Question 2: How can information about $K$ and $\Omega$ be embedded in the basis $U$?
Question 3: Can we extract $U' \subset U$ such that $s'_f$ is as good as $s_f$?
The "natural" basis

The "natural" (data-independent) basis for Hilbert spaces (Mercer's theorem, 1909).
Let $K$ be a continuous, positive definite kernel on a bounded $\Omega \subset \mathbb{R}^n$. Then $K$ has an eigenfunction expansion with non-negative coefficients (the eigenvalues) such that
\[
K(x, y) = \sum_{j > 0} \lambda_j \, \varphi_j(x) \, \varphi_j(y), \quad \forall x, y \in \Omega.
\]
Moreover,
\[
\lambda_j \varphi_j(x) = \int_\Omega K(x, y) \, \varphi_j(y) \, dy =: T[\varphi_j](x), \quad \forall x \in \Omega, \ j > 0.
\]
- $\{\varphi_j\}_{j > 0}$ is orthonormal in $\mathcal{N}_K(\Omega)$;
- $\{\varphi_j\}_{j > 0}$ is orthogonal in $L_2(\Omega)$, with $\|\varphi_j\|^2_{L_2(\Omega)} = \lambda_j \to 0$ and $\sum_{j > 0} \lambda_j = K(0, 0)\,|\Omega|$ (the operator is of trace class).

Notice: it is not always possible to find the functions $\varphi_j$ explicitly [Fasshauer, McCourt 2012, for GRBF].
Change of basis: Notation

Let $\Omega \subset \mathbb{R}^n$ and $X = \{x_1, \dots, x_N\} \subset \Omega$.
- $T_X = \{K(\cdot, x_i),\ x_i \in X\}$: the standard basis of translates;
- $U = \{u_i \in \mathcal{N}_K(\Omega),\ i = 1, \dots, N\}$: another basis such that $\mathrm{span}(U) = \mathrm{span}(T_X) = \mathcal{N}_K(X)$.

At $x \in \Omega$, $T_X$ and $U$ can be expressed as the row vectors
\[
T(x) = [K(x, x_1), \dots, K(x, x_N)] \in \mathbb{R}^N, \qquad
U(x) = [u_1(x), \dots, u_N(x)] \in \mathbb{R}^N.
\]
We also need the scalar products
\[
(f, g)_{L_2(\Omega)} := \int_\Omega f(x)\, g(x)\, dx \;\approx\; \sum_{j=1}^N w_j f(x_j)\, g(x_j) =: (f, g)_{\ell^2_w(X)}.
\]
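A small sketch of the discrete scalar product $(f, g)_{\ell^2_w(X)}$ used above as a stand-in for the $L_2(\Omega)$ product; the equal-weight rule on the unit interval is an assumption for illustration only.

```python
import numpy as np

def l2w_inner_product(fX, gX, w):
    # (f, g)_{l^2_w(X)} = sum_j w_j f(x_j) g(x_j)  ~  (f, g)_{L_2(Omega)}
    return np.sum(w * fX * gX)

# Example: equal weights w_j = |Omega| / N on Omega = [0, 1] (an assumed cubature rule)
N = 100
x = np.linspace(0.0, 1.0, N)
w = np.full(N, 1.0 / N)
print(l2w_inner_product(np.sin(np.pi * x), np.sin(np.pi * x), w))  # ~ 1/2
```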
Change of basis: General idea

Some useful results [Pazouki, Schaback 2011].

Change of basis. Let $A_{ij} = K(x_i, x_j)$, $A \in \mathbb{R}^{N \times N}$. Any basis $U$ arises from a factorization $A = V_U \cdot C_U^{-1}$, where $V_U = (u_j(x_i))_{1 \le i, j \le N}$ and $C_U$ is the change-of-basis matrix such that $U(x) = T(x) \cdot C_U$.

Some consequences of this factorization:

1. The interpolant $P_{f,X}$ at $x$ can be written as
\[
P_{f,X}(x) = \sum_{j=1}^N \Lambda_j(f)\, u_j(x) = U(x)\,\Lambda(f), \quad \forall x \in \Omega,
\]
where $\Lambda(f) = [\Lambda_1(f), \dots, \Lambda_N(f)]^T \in \mathbb{R}^N$ is a column vector of values of linear functionals given by
\[
\Lambda(f) = C_U^{-1} \cdot A^{-1} \cdot f_X = V_U^{-1} \cdot f_X,
\]
where $f_X$ is the column vector of the evaluations of $f$ at $X$.
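The factorization can be turned into code directly: given any invertible change-of-basis matrix $C_U$, the interpolant is $U(x)\Lambda(f)$ with $\Lambda(f) = V_U^{-1} f_X$. A sketch under these assumptions (the helper name is mine, not from the paper):

```python
import numpy as np

def interpolant_in_basis(A, C_U, T_eval, fX):
    """Evaluate P_{f,X} at new points for the basis induced by C_U.

    A      : kernel matrix, A_ij = K(x_i, x_j)
    C_U    : change-of-basis matrix, U(x) = T(x) C_U
    T_eval : rows T(x) = [K(x, x_1), ..., K(x, x_N)] at the evaluation points
    fX     : values of f on X
    """
    V_U = A @ C_U                      # from A = V_U C_U^{-1}
    Lam = np.linalg.solve(V_U, fX)     # Lambda(f) = V_U^{-1} f_X = C_U^{-1} A^{-1} f_X
    return (T_eval @ C_U) @ Lam        # P_{f,X}(x) = U(x) Lambda(f)

# With C_U = I this reduces exactly to the standard basis of translates of the first sketch.
```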
Change of basis: Consequences

2. If $U$ is an $\mathcal{N}_K(\Omega)$-orthonormal basis, we get the stability estimate
\[
|P_{f,X}(x)| \le \sqrt{K(0,0)}\, \|f\|_K, \quad \forall x \in \Omega. \tag{1}
\]
In particular, for fixed $x \in \Omega$ and $f \in \mathcal{N}_K(\Omega)$, the values $\|U(x)\|_2$ and $\|\Lambda(f)\|_2$ are the same for all $\mathcal{N}_K(\Omega)$-orthonormal bases, independently of $X$:
\[
\|U(x)\|_2 \le \sqrt{K(0,0)}, \qquad \|\Lambda(f)\|_2 \le \|f\|_K. \tag{2}
\]
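Bounds (1) and (2) can be checked numerically for any $\mathcal{N}_K(\Omega)$-orthonormal basis; the sketch below (my own, not from the slides) builds one via a Cholesky factorization (the decomposition of the next slide) and uses the computable quantity $\|P_{f,X}\|_K^2 = f_X^T A^{-1} f_X \le \|f\|_K^2$ in place of $\|f\|_K$.

```python
import numpy as np

def check_stability_bounds(A, K_eval_to_X, fX, K00, tol=1e-8):
    # An N_K(Omega)-orthonormal basis from A = L L^T: C_U = L^{-T}, so C_U^T A C_U = I
    L = np.linalg.cholesky(A)
    C_U = np.linalg.inv(L).T
    # ||U(x)||_2 <= sqrt(K(0,0)) at every evaluation point x, cf. (2)
    U_eval = K_eval_to_X @ C_U
    ok_U = np.all(np.linalg.norm(U_eval, axis=1) <= np.sqrt(K00) + tol)
    # ||Lambda(f)||_2 <= ||f||_K, cf. (2); here ||P_{f,X}||_K <= ||f||_K is the surrogate
    Lam = np.linalg.solve(A @ C_U, fX)               # Lambda(f) = V_U^{-1} f_X
    Pf_norm = np.sqrt(fX @ np.linalg.solve(A, fX))   # ||P_{f,X}||_K
    ok_Lam = np.linalg.norm(Lam) <= Pf_norm + tol
    return ok_U, ok_Lam
```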
Change of basis: Other results [Pazouki, Schaback 2011]

Change of basis.
- Each $\mathcal{N}_K(\Omega)$-orthonormal basis $U$ arises from an orthonormal decomposition $A = B^T \cdot B$ with $B = C_U^{-1}$, $V_U = B^T = (C_U^{-1})^T$.
- Each $\ell_2(X)$-orthonormal basis $U$ arises from a decomposition $A = Q \cdot B$ with $Q = V_U$, $Q^T Q = I$, $B = C_U^{-1} = Q^T A$.

Notice: the best bases in terms of stability are the $\mathcal{N}_K(\Omega)$-orthonormal ones!

Q1: Is it possible to find a "better" basis? Yes, we can!
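A sketch of how the two decompositions could be realized in practice: a Cholesky factorization gives $A = B^T B$ (hence an $\mathcal{N}_K(\Omega)$-orthonormal, Newton-type basis), and a QR factorization gives $A = Q B$ with $Q^T Q = I$ (hence an $\ell_2(X)$-orthonormal basis). The explicit inverses are for clarity only; triangular solves would be preferable.

```python
import numpy as np

def native_orthonormal_basis(A):
    # A = B^T B with B = C_U^{-1}, V_U = B^T; here B = L^T from the Cholesky A = L L^T
    L = np.linalg.cholesky(A)
    B = L.T
    return np.linalg.inv(B), B.T        # C_U, V_U

def l2_orthonormal_basis(A):
    # A = Q B with Q = V_U, Q^T Q = I and B = C_U^{-1} = Q^T A; here via a QR factorization
    Q, B = np.linalg.qr(A)
    return np.linalg.inv(B), Q          # C_U, V_U
```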
WSVD Basis: Main idea (I)

Q2: How to embed information on $K$ and $\Omega$ in $U$?

Symmetric Nyström method [Atkinson, Han 2001]. The main idea for the construction of our basis is to discretize the "natural" basis introduced in Mercer's theorem. To this aim, consider on $\Omega$ a cubature rule $(X, W)$, that is, a set of distinct points $X = \{x_j\}_{j=1}^N \subset \Omega$ and a set of positive weights $W = \{w_j\}_{j=1}^N$, $N \in \mathbb{N}$, such that
\[
\int_\Omega f(y)\, dy \approx \sum_{j=1}^N f(x_j)\, w_j, \quad \forall f \in \mathcal{N}_K(\Omega). \tag{3}
\]
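The sketches that follow assume one simple admissible choice of $(X, W)$: a midpoint tensor grid on $\Omega = [0,1]^2$ with equal weights summing to $|\Omega| = 1$. This is only an illustrative assumption, not the rule used in the talk.

```python
import numpy as np

def midpoint_grid_cubature(n_per_dim):
    # Tensor midpoint rule on Omega = [0, 1]^2: N = n_per_dim^2 nodes, equal weights |Omega|/N
    t = (np.arange(n_per_dim) + 0.5) / n_per_dim
    xx, yy = np.meshgrid(t, t)
    X = np.column_stack([xx.ravel(), yy.ravel()])
    w = np.full(X.shape[0], 1.0 / X.shape[0])
    return X, w
```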
WSVD Basis: Main idea (II)

Thus, the operator $T_K$ can be evaluated on $X$ as
\[
\lambda_j \varphi_j(x_i) = \int_\Omega K(x_i, y)\, \varphi_j(y)\, dy, \quad i = 1, \dots, N, \ \forall j > 0,
\]
and then discretized using the cubature rule by
\[
\lambda_j \varphi_j(x_i) \approx \sum_{h=1}^N K(x_i, x_h)\, \varphi_j(x_h)\, w_h, \quad i, j = 1, \dots, N. \tag{4}
\]
Letting $W = \mathrm{diag}(w_j)$, it suffices to solve the following discrete eigenvalue problem in order to find the approximation of the eigenvalues and eigenfunctions (evaluated on $X$) of $T_K$:
\[
\lambda\, v = (A \cdot W)\, v. \tag{5}
\]
WSVD Basis: Main idea (III)

A solution is to rewrite (4), using the fact that the weights are positive, as
\[
\lambda_j \left( \sqrt{w_i}\, \varphi_j(x_i) \right) = \sum_{h=1}^N \left( \sqrt{w_i}\, K(x_i, x_h)\, \sqrt{w_h} \right) \left( \sqrt{w_h}\, \varphi_j(x_h) \right), \quad \forall i, j = 1, \dots, N, \tag{6}
\]
and then to consider the corresponding scaled eigenvalue problem
\[
\left( \sqrt{W} \cdot A \cdot \sqrt{W} \right) \left( \sqrt{W} \cdot v \right) = \lambda \left( \sqrt{W} \cdot v \right),
\]
which is equivalent to the previous one, now involving the symmetric and positive definite matrix $A_W := \sqrt{W} \cdot A \cdot \sqrt{W}$.
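A sketch of this symmetric Nyström step: form $A_W = \sqrt{W} A \sqrt{W}$, solve the symmetric eigenproblem, and undo the scaling to recover approximations of the eigenfunctions on $X$. The helper name is mine.

```python
import numpy as np

def nystrom_eigenpairs(A, w):
    # Scaled, symmetric form of (5): A_W (sqrt(W) v) = lambda (sqrt(W) v)
    sqw = np.sqrt(w)
    A_W = sqw[:, None] * A * sqw[None, :]     # A_W = sqrt(W) A sqrt(W), SPD
    lam, V = np.linalg.eigh(A_W)              # ascending eigenvalues of the symmetric matrix
    lam, V = lam[::-1], V[:, ::-1]            # reorder: largest eigenvalue first
    Phi = V / sqw[:, None]                    # undo scaling: columns ~ (phi_j(x_i))_i, cf. (6)
    return lam, Phi
```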
WSVD Basis: Definition

$\{\lambda_j, \varphi_j\}_{j > 0}$ are then approximated by the eigenvalues/eigenvectors of $A_W := \sqrt{W} \cdot A \cdot \sqrt{W}$. This matrix is normal, hence a singular value decomposition of $A_W$ is a unitary diagonalization.

Definition: a weighted SVD basis $U$ is a basis for $\mathcal{N}_K(X)$ such that
\[
V_U = \sqrt{W}^{-1} \cdot Q \cdot \Sigma, \qquad C_U = \sqrt{W} \cdot Q \cdot \Sigma^{-1},
\]
since $A = V_U C_U^{-1}$, where $A_W = Q \cdot \Sigma^2 \cdot Q^T$ is the SVD. Here $\Sigma_{jj} = \sigma_j$, $j = 1, \dots, N$, and $\sigma_1^2 \ge \dots \ge \sigma_N^2 > 0$ are the singular values of $A_W$.
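A sketch of the definition: since $A_W$ is symmetric positive definite, its unitary diagonalization (equal to its SVD up to ordering and signs) can be computed with a symmetric eigensolver, and $V_U$, $C_U$ follow directly. The helper name is mine.

```python
import numpy as np

def wsvd_basis(A, w):
    sqw = np.sqrt(w)
    A_W = sqw[:, None] * A * sqw[None, :]     # A_W = sqrt(W) A sqrt(W)
    s2, Q = np.linalg.eigh(A_W)               # A_W = Q Sigma^2 Q^T (ascending order)
    s2, Q = s2[::-1], Q[:, ::-1]              # sigma_1^2 >= ... >= sigma_N^2 > 0
    sigma = np.sqrt(s2)
    V_U = (Q / sqw[:, None]) * sigma          # V_U = W^{-1/2} Q Sigma, rows = U(x_i)
    C_U = (Q * sqw[:, None]) / sigma          # C_U = W^{1/2} Q Sigma^{-1}, U(x) = T(x) C_U
    return V_U, C_U, sigma
```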
WSVD Basis: Properties

This basis is in fact an approximation of the "natural" one (provided $w_i > 0$ and $\sum_{i=1}^N w_i = |\Omega|$).

Properties of the new basis $U$ (cf. [De Marchi, Santin 2013]):
\[
u_j(x) = \frac{1}{\sigma_j^2} \sum_{i=1}^N w_i\, u_j(x_i)\, K(x, x_i) \approx \frac{1}{\sigma_j^2}\, T_K[u_j](x), \quad \forall\, 1 \le j \le N, \ \forall x \in \Omega;
\]
- $\mathcal{N}_K(\Omega)$-orthonormal;
- $\ell^2_w(X)$-orthogonal, with $\|u_j\|^2_{\ell^2_w(X)} = \sigma_j^2$, $\forall u_j \in U$;
- $\sum_{j=1}^N \sigma_j^2 = K(0, 0)\, |\Omega|$.
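The last three properties translate into matrix identities that can be checked numerically: $C_U^T A C_U = I$ (native-space orthonormality), $V_U^T W V_U = \Sigma^2$ (weighted discrete orthogonality), and $\sum_j \sigma_j^2 = \mathrm{trace}(A_W) = \sum_i w_i K(x_i, x_i)$, which equals $K(0,0)\,|\Omega|$ for a radial kernel. A sketch reusing the hypothetical wsvd_basis helper above:

```python
import numpy as np

def check_wsvd_properties(A, w, V_U, C_U, sigma, tol=1e-8):
    I = np.eye(len(w))
    # (u_i, u_j)_K = (C_U^T A C_U)_{ij}: should be the identity matrix
    ok_native = np.allclose(C_U.T @ A @ C_U, I, atol=tol)
    # (u_i, u_j)_{l^2_w(X)} = (V_U^T W V_U)_{ij}: should be diag(sigma_j^2)
    ok_l2w = np.allclose(V_U.T @ (w[:, None] * V_U), np.diag(sigma ** 2), atol=tol)
    # sum_j sigma_j^2 = sum_i w_i K(x_i, x_i) = K(0,0) |Omega| for radial K
    ok_trace = np.isclose(np.sum(sigma ** 2), np.sum(w * np.diag(A)))
    return ok_native, ok_l2w, ok_trace
```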
WSVD Basis: Approximation

Interpolant: $s_f(x) = \sum_{j=1}^N (f, u_j)_K\, u_j(x)$, $\forall x \in \Omega$.

WDLS: $s_f^M := \mathrm{argmin} \left\{ \|f - g\|_{\ell^2_w(X)} : g \in \mathrm{span}\{u_1, \dots, u_M\} \right\}$.

Weighted Discrete Least Squares as truncation: let $f \in \mathcal{N}_K(\Omega)$, $1 \le M \le N$. Then, $\forall x \in \Omega$,
\[
s_f^M(x) = \sum_{j=1}^M \frac{(f, u_j)_{\ell^2_w(X)}}{(u_j, u_j)_{\ell^2_w(X)}}\, u_j(x)
         = \sum_{j=1}^M \frac{(f, u_j)_{\ell^2_w(X)}}{\sigma_j^2}\, u_j(x)
         = \sum_{j=1}^M (f, u_j)_K\, u_j(x).
\]
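A sketch of the truncation: the native-space coefficients are $(f, u_j)_K = (f, u_j)_{\ell^2_w(X)} / \sigma_j^2$, so the WDLS approximant $s_f^M$ is obtained by keeping only the first $M$ terms of the interpolant; with $M = N$ it reduces to $s_f$. Helper and argument names are mine, matching the earlier sketches.

```python
import numpy as np

def wdls_approximant(T_eval, C_U, V_U, sigma, w, fX, M):
    """Evaluate s_f^M at new points; T_eval[i, j] = K(x_eval_i, x_j)."""
    coef = (V_U.T @ (w * fX)) / sigma ** 2   # (f, u_j)_K = (f, u_j)_{l^2_w(X)} / sigma_j^2
    U_eval = T_eval @ C_U                    # U(x) = T(x) C_U at the evaluation points
    return U_eval[:, :M] @ coef[:M]          # s_f^M(x) = sum_{j <= M} (f, u_j)_K u_j(x)
```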