
Generating Sparse Representations by Adaptive Multiscale Approximations - PowerPoint PPT Presentation



  1. Generating Sparse Representations by Adaptive Multiscale Approximations
     Angela Kunoth, University of Cologne, Germany

  2. Generating Sparse Representations by Adaptive Multiscale Approximations
     Angela Kunoth, University of Cologne, Germany
     Central topic: efficient extraction, representation, and analysis of information: sparse representations
     Goal: maximal gain of knowledge with an (ideally provably) minimal number of degrees of freedom and amount of work
     Essential ingredients: adaptive multiscale/wavelet representations

  3. Generating Sparse Representations by Adaptive Multiscale Approximations
     Angela Kunoth, University of Cologne, Germany
     Central topic: efficient extraction, representation, and analysis of information: sparse representations
     Goal: maximal gain of knowledge with an (ideally provably) minimal number of degrees of freedom and amount of work
     Essential ingredients: adaptive multiscale/wavelet representations
     Problem classes:
     - Explicitly given information: fit and/or analysis of (multivariate, nonlinear) data on nonuniform grids
     - Implicitly given information:
       - solution of (elliptic or parabolic) partial differential equations (PDEs)
       - PDE-constrained control problems

  4. Part I: Explicitly Given Data: Approximation of Surfaces
     Problem: given scattered (not uniformly distributed) points
     $P = \{(x_1, z_1), \dots, (x_N, z_N)\}$ with sites $X = \{x_1, \dots, x_N\} \subset \mathbb{R}^n$, $n \in \{1, 2, 3\}$, and values $Z = \{z_1, \dots, z_N\} \subset \mathbb{R}$
     Goal: construct a function $f : \mathbb{R}^n \to \mathbb{R}$ representing $P$ (example shown for $n = 2$; a small setup sketch follows below)
     Solution method: adaptive coarse-to-fine construction with thresholding
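For concreteness, a minimal sketch of the data setup in Python/NumPy. The point distribution, the underlying surface, and the noise level are illustrative assumptions, not taken from the talk; the point is only the format of the input: scattered sites $X \subset \mathbb{R}^2$ with values $Z \subset \mathbb{R}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Scattered, non-uniformly distributed sites X in the unit square (n = 2).
N = 10_000
X = rng.random((N, 2)) ** 1.5          # elementwise power skews the sampling density

# Observed values Z: samples of some unknown surface plus measurement noise.
def surface(x):                        # hypothetical test surface
    return np.sin(3 * np.pi * x[:, 0]) * np.exp(-x[:, 1])

Z = surface(X) + 0.01 * rng.standard_normal(N)

# Task: from (X, Z) alone, build f: R^2 -> R with few degrees of freedom.
print(X.shape, Z.shape)                # (10000, 2) (10000,)
```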

  5. An Adaptive Coarse-to-Fine Method with Thresholding [Castaño, Kunoth '03-'06]
     Ansatz: $f(x) = \sum_{\lambda \in \Lambda} d_\lambda \psi_\lambda(x)$, with $\Lambda$ an appropriate set of indices $\lambda = (j, k, e)$
     Multiscale basis functions: $\{\psi_\lambda\}_{\lambda \in \Lambda}$ preorthogonal (boundary-adapted) B-spline wavelets
     Fitting of the coefficients $\{d_\lambda\}_{\lambda \in \Lambda}$:
     - Approximation: $\min \sum_{i=1}^N (z_i - f(x_i))^2$
     - Approximation with regularization: $\min \sum_{i=1}^N (z_i - f(x_i))^2 + \nu \|f\|_{H^\alpha}^2$
     Approximation (with regularization) leads to the normal equations
     $(A^T A \,(+\, \nu D))\, d = A^T z \;\Longleftrightarrow\; (M \,(+\, \nu D))\, d = b$
     with $M \in \mathbb{R}^{\#\Lambda \times \#\Lambda}$, $M_{\lambda,\lambda'} = \sum_{i=1}^N \psi_\lambda(x_i)\, \psi_{\lambda'}(x_i)$, and $b \in \mathbb{R}^{\#\Lambda}$, $b_\lambda = \sum_{i=1}^N z_i\, \psi_\lambda(x_i)$
     [Figure: typical sparsity structure of M]
     Amount of data $N \gg \#\Lambda$ degrees of freedom (a numerical sketch of the fit follows below)
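A minimal numerical sketch of the (regularized) least-squares fit. Assumptions not in the slides: a 1-D problem, simple hat functions standing in for the preorthogonal B-spline wavelets, an identity matrix standing in for the $H^\alpha$ penalty matrix $D$, and illustrative values of $N$, the level, and $\nu$. Only the structure of the normal equations $(M + \nu D)\, d = b$ with $M = A^T A$, $b = A^T z$ is the point here.

```python
import numpy as np

# Stand-in single-scale basis: 1-D hat functions on a dyadic grid; the talk
# uses preorthogonal boundary-adapted B-spline wavelets instead.
def hat(t, center, h):
    return np.clip(1.0 - np.abs(t - center) / h, 0.0, None)

rng = np.random.default_rng(1)
N = 5000                                   # data points, N >> #Lambda
x = rng.random(N)
z = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(N)

j = 4                                      # level -> 2^j + 1 basis functions
h = 2.0 ** (-j)
centers = np.linspace(0.0, 1.0, 2 ** j + 1)

# Observation matrix A with A[i, lam] = psi_lam(x_i).
A = hat(x[:, None], centers[None, :], h)

nu = 1e-3                                  # regularization weight (assumed)
D = np.eye(len(centers))                   # placeholder for the H^alpha penalty

M = A.T @ A                                # M_{lam,lam'} = sum_i psi_lam(x_i) psi_lam'(x_i)
b = A.T @ z                                # b_lam = sum_i z_i psi_lam(x_i)
d = np.linalg.solve(M + nu * D, b)         # normal equations (M + nu D) d = b

f_at_x = A @ d
print("RMS fit error:", np.sqrt(np.mean((z - f_at_x) ** 2)))
```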

  6. Choice of the index set $\Lambda$ in $f = \sum_{\lambda \in \Lambda} d_\lambda \psi_\lambda$ in order to ...
     ... get a reasonable reconstruction
     ... avoid processing redundant information
     [Figures: reconstructions with $\#\Lambda_a = 33$ and $\#\Lambda_b = 8614$ coefficients for $\#P = 10{,}000$ data points]

  7. Data-Driven Coarse-to-Fine Construction of $\Lambda$
     1. Start with the tree $\Lambda_3$ at the (coarsest) level $j = 3$
     2. Discard children containing fewer than $q$ data points in their support, yielding $\tilde\Lambda_3$
     3. Compute the approximation $f_3 := \sum_{\lambda \in \tilde\Lambda_3} d_\lambda \psi_\lambda$
     4. Threshold small coefficients ... and obtain the tree $\Lambda_4$ for level $j = 4$
     ... repeat steps 2.-4.
     5. Stop at the highest level $J$, determined only by the data
     A sketch of this loop is given below.
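A schematic Python sketch of this coarse-to-fine loop. It keeps only the control flow of the slide: 1-D data, hat functions at dyadic positions as stand-ins for the wavelet tree, and illustrative values of $q$, $\varepsilon$, and the levels are all assumptions; coefficient decay and thresholding behave differently for the actual B-spline wavelet basis.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.sort(rng.random(2000))                       # scattered 1-D data sites
z = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(x.size)

def psi(t, j, k):
    """Stand-in 'wavelet': hat function at level j, position k (not a B-spline wavelet)."""
    h = 2.0 ** (-j)
    return np.clip(1.0 - np.abs(t - k * h) / h, 0.0, None)

def support_count(j, k):
    h = 2.0 ** (-j)
    return np.count_nonzero(np.abs(x - k * h) < h)  # data points inside supp(psi_{j,k})

def fit(indices):
    A = np.column_stack([psi(x, j, k) for (j, k) in indices])
    d, *_ = np.linalg.lstsq(A, z, rcond=None)       # least-squares coefficients
    return d

j0, J, q, eps = 3, 7, 5, 1e-3                       # illustrative parameters
Lam = [(j0, k) for k in range(2 ** j0 + 1)]         # 1. coarsest tree
for j in range(j0, J):
    Lam = [lam for lam in Lam if support_count(*lam) >= q]                 # 2. prune by data support
    d = fit(Lam)                                                           # 3. approximation f_j
    Lam = [lam for lam, c in zip(Lam, d) if abs(c) > eps or lam[0] == j0]  # 4. threshold (keep coarsest)
    # children of the kept level-j indices form the candidate tree on level j+1
    Lam = Lam + [(j + 1, 2 * k + m) for (jj, k) in Lam if jj == j for m in (0, 1)]

Lam = [lam for lam in Lam if support_count(*lam) >= q]   # final prune and fit at level J
d = fit(Lam)
print("final #Lambda =", len(Lam))
```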

  8. Numerical Performance of Multilevel Functions: Spline Wavelets vs. Hierarchical Bases
     Solution of $A^T A\, d = A^T z$ with a hierarchical basis and with a B-spline wavelet basis
     Data: 160,000 gridded data points from the GTOPO30 Digital Elevation Model
     [Figures: error decay at the highest level $J = 7$; log(error) vs. CG iterations, without nesting and with nested iterations (see the sketch below)]
     Further issues:
     - Regularization: construction of surfaces with smoothness constraints
     - Robust regression: handling of outliers [Castaño, Kunoth, IEEE Trans. Image Proc., 2006]
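A minimal sketch of the nested-iteration idea behind the right-hand plot: solve the normal equations on a coarse level, prolong the solution by linear interpolation, and use it as the initial guess for CG on the next level. Assumptions: 1-D single-scale hat-function spaces and SciPy's cg, so the iteration counts will not reproduce the multilevel effects of the hierarchical or wavelet bases on the slide; only the nesting strategy is illustrated.

```python
import numpy as np
from scipy.sparse.linalg import cg

# Dense gridded 1-D data (the talk fits 160,000 gridded DEM points in 2-D).
x = np.linspace(0.0, 1.0, 20000)
z = np.sin(2 * np.pi * x) + 0.3 * np.sin(9 * np.pi * x)

def hat_matrix(j):
    """Columns: nodal hat functions of the level-j dyadic grid evaluated at the data."""
    h = 2.0 ** (-j)
    centers = np.linspace(0.0, 1.0, 2 ** j + 1)
    return np.clip(1.0 - np.abs(x[:, None] - centers[None, :]) / h, 0.0, None)

def prolongation(j):
    """Linear interpolation from the level-j grid to the level-(j+1) grid."""
    nc, nf = 2 ** j + 1, 2 ** (j + 1) + 1
    P = np.zeros((nf, nc))
    P[0::2, :] = np.eye(nc)                      # fine nodes coinciding with coarse nodes
    P[1::2, :-1] += 0.5 * np.eye(nc - 1)         # midpoints: average of the
    P[1::2, 1:] += 0.5 * np.eye(nc - 1)          # two neighbouring coarse nodes
    return P

def solve_level(j, x0=None):
    A = hat_matrix(j)
    G, b = A.T @ A, A.T @ z                      # normal equations of the LS fit
    its = [0]
    d, _ = cg(G, b, x0=x0, callback=lambda _: its.__setitem__(0, its[0] + 1))
    return d, its[0]

# Without nesting: solve directly on the finest level from a zero initial guess.
_, it_flat = solve_level(7)

# Nested iteration: prolongated coarse solution as initial guess on each level.
d, total = solve_level(3)
for j in range(3, 7):
    d, it = solve_level(j + 1, x0=prolongation(j) @ d)
    total += it
print("CG iterations, finest level only:", it_flat, " nested (all levels):", total)
```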

  9. Example from a Photogrammetric Application (fixed $\alpha$, $\nu$)
     Original data: 3D point set (330,000 points) of an industrial site, acquired with a Leica Cyrax 2500 (Prof. Staiger, GH Essen)
     [Figures: vertical view of the original data (section); sampling geometry (section); reconstructions for $J = 4$, $J = 5$, $J = 6$; coefficients of the wavelets of type $(1,1)$ over $(k_x, k_y)$]
     Wavelet reconstruction with regularization: $\nu = 0.01$, $\alpha = 4$, thresholding parameter $\varepsilon = 10^{-3}$
     $\#\Lambda_6 = 2623$ coefficients vs. a full grid with 16384 coefficients
     Treatment of outliers ... multilevel GCV [Castaño, Kunoth, Numer. Algor. '05] (a sketch of standard GCV follows below)
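The parameters $\nu$ and $\alpha$ are fixed by hand here; the reference points to (multilevel) GCV for an automatic choice. Below is a sketch of the standard, single-level generalized cross-validation criterion $\mathrm{GCV}(\nu) = \frac{1}{N}\|z - A d(\nu)\|^2 / \big(\tfrac{1}{N}\,\mathrm{tr}(I - H(\nu))\big)^2$ with $H(\nu) = A (A^T A + \nu D)^{-1} A^T$, again with the 1-D hat-function stand-in basis and illustrative data; the multilevel GCV of [Castaño, Kunoth '05] is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4000
x = rng.random(N)
z = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(N)
z[rng.choice(N, 20, replace=False)] += 2.0   # a few gross outliers, as in scanned point clouds

j = 5
h = 2.0 ** (-j)
centers = np.linspace(0.0, 1.0, 2 ** j + 1)
A = np.clip(1.0 - np.abs(x[:, None] - centers[None, :]) / h, 0.0, None)
D = np.eye(A.shape[1])                       # placeholder for the H^alpha penalty

def gcv(nu):
    """GCV(nu) = (1/N)||z - A d||^2 / ((1/N) tr(I - H))^2, H = A (A^T A + nu D)^{-1} A^T."""
    K = A.T @ A + nu * D
    d = np.linalg.solve(K, A.T @ z)
    trace_H = np.trace(np.linalg.solve(K, A.T @ A))   # tr(H) without forming H
    resid = z - A @ d
    return (resid @ resid / N) / ((1.0 - trace_H / N) ** 2)

nus = 10.0 ** np.arange(-6, 1)
scores = [gcv(nu) for nu in nus]
print("nu with minimal GCV score:", nus[int(np.argmin(scores))])
```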

  10. Part II: Implicitly Given Data: Optimal Control Problems Constrained by a Parabolic PDE
      Given: target state $y_*(t,\cdot)$, weight $\omega > 0$, end time $T > 0$, initial condition $y_0$, right-hand side $f$
      Minimize
      $J(y, u) = \frac{1}{2} \int_0^T \|y(t,\cdot) - y_*(t,\cdot)\|_Z^2 \, dt + \frac{\omega}{2} \int_0^T \|u(t,\cdot)\|_U^2 \, dt$
      subject to
      $y'(t) + A(t)\, y(t) = f(t) + u(t)$ a.e. $t \in (0, T) =: I$,  $y(0) = y_0$   (PDE)
      with $y' := \frac{\partial}{\partial t} y$, state $y = y(t,x)$, control $u = u(t,x)$
      Spaces: state space $Y = H^1_0(\Omega)$, control space $U = Y' = H^{-1}(\Omega)$, observation space $Z = Y = H^1_0(\Omega)$
      Operator: $A(t) : Y \to Y'$, $\langle A(t) v(t,\cdot), w(t,\cdot) \rangle := \int_\Omega [\nabla v(t,x) \cdot \nabla w(t,x) + v(t,x)\, w(t,x)]\, dx$, $\Omega \subset \mathbb{R}^d$;
      $A(t)$ is a second-order linear selfadjoint operator, coercive and continuous on $Y$

  11. Part II: Implicitly Given Data: Optimal Control Problems Constrained by a Parabolic PDE
      Setting as on slide 10: minimize $J(y, u)$ subject to $y'(t) + A(t)\, y(t) = f(t) + u(t)$, $y(0) = y_0$   (PDE)
      The PDE-constrained control problem requires the repeated solution of (PDE)
      ⇒ a fast solver for the parabolic problem $y'(t) + A(t)\, y(t) = f(t) + u(t)$, $y(0) = y_0$, is the core ingredient
      Conventional time discretizations (e.g., the Crank-Nicolson method)
      ⇒ require a fast solver for an elliptic PDE in each time step (see the sketch below)
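A minimal sketch of what "one elliptic solve per time step" means for Crank-Nicolson applied to $y' + A y = f + u$. Assumptions: a 1-D finite-difference stand-in for the operator $\langle A v, w\rangle = \int (\nabla v \cdot \nabla w + v\, w)\, dx$ (i.e., $-\Delta + I$ with homogeneous Dirichlet conditions), and illustrative choices of grid, $T$, $y_0$, $f$, and $u$.

```python
import numpy as np

# 1-D finite-difference discretization of A = -Laplace + I on (0,1),
# homogeneous Dirichlet boundary conditions (interior nodes only).
m = 127
h = 1.0 / (m + 1)
xs = np.linspace(h, 1.0 - h, m)
A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
     - np.diag(np.ones(m - 1), -1)) / h**2 + np.eye(m)

T, n_steps = 1.0, 200
dt = T / n_steps

def rhs(t):
    f = np.sin(np.pi * xs)                                   # illustrative source f
    u = 0.5 * np.sin(2 * np.pi * t) * np.sin(2 * np.pi * xs) # illustrative control u
    return f + u

y = xs * (1.0 - xs)                       # initial condition y_0
I = np.eye(m)
lhs = I + 0.5 * dt * A
rhs_mat = I - 0.5 * dt * A
for n in range(n_steps):
    t_mid = (n + 0.5) * dt
    # Crank-Nicolson step: (I + dt/2 A) y^{n+1} = (I - dt/2 A) y^n + dt (f + u)(t_mid);
    # this linear elliptic-type system is solved in every time step.
    y = np.linalg.solve(lhs, rhs_mat @ y + dt * rhs(t_mid))

print("||y(T)||_max =", np.abs(y).max())
```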

  12. Numerical Solution of a Single Elliptic PDE
      Elliptic PDE $Ay = f$ such that $\|Av\|_{Y'} \sim \|v\|_Y$:
      find $y \in Y$: $\langle v, Ay \rangle = \langle v, f \rangle$ for all $v \in Y$
      ⇒ Conventional finite element discretization on a uniform grid: $Y_h \subset Y$, $\dim Y_h < \infty$, $A_h y_h = f_h$
      Obstructions:
      - Large linear systems of equations ⇒ iterative solver
      - High desired accuracy ⇒ small $h$ ⇒ larger problem ⇒ worse condition, $\mathrm{cond}_2(A_h) \sim h^{-2}$ (checked numerically below)
      - Resolution of singularities in data and/or geometry ⇒ small $h$
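The growth $\mathrm{cond}_2(A_h) \sim h^{-2}$ can be checked numerically. A sketch with the standard 1-D P1 stiffness matrix of $-\Delta$ on a uniform grid, used here purely as an illustration:

```python
import numpy as np

def stiffness_1d(m):
    """P1 finite-element stiffness matrix of -Laplace on (0,1), m interior nodes."""
    h = 1.0 / (m + 1)
    return (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
            - np.diag(np.ones(m - 1), -1)) / h

for level in range(3, 9):
    m = 2 ** level - 1
    h = 1.0 / (m + 1)
    kappa = np.linalg.cond(stiffness_1d(m))
    # cond_2(A_h) grows like h^{-2}, i.e. kappa * h^2 stays roughly constant
    print(f"h = 2^-{level}:  cond_2(A_h) = {kappa:9.1f},  cond * h^2 = {kappa * h * h:.3f}")
```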

  13. Numerical Solution of a Single Elliptic PDE (continued)
      Elliptic PDE $Ay = f$ and its finite element discretization $A_h y_h = f_h$, with the obstructions listed on slide 12
      Ingredients for an efficient numerical solution:
      (i) Multilevel preconditioner $C_h$: multigrid methods, BPX preconditioner, wavelet discretization
          ⇒ $\mathrm{cond}_2(C_h A_h) \sim 1$ (a preconditioning sketch follows below)
          Proofs: [Braess, Hackbusch '80s], [Dahmen, Kunoth '92], [Oswald '92]
      (ii) Nested iteration
      (iii) Additionally: adaptive refinement (for nonsmooth solutions)
            ⇒ a-posteriori error estimation, local grid refinement
            ⇒ convergence/convergence rates?
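To illustrate ingredient (i), a sketch of a BPX-type multilevel preconditioner for the 1-D model stiffness matrix, implemented as multilevel diagonal scaling, $C_h = \sum_j P_j \,\mathrm{diag}(A_j)^{-1} P_j^T$ with linear-interpolation prolongations $P_j$ and Galerkin coarse-level matrices. This is a textbook stand-in that is spectrally equivalent to BPX, not the construction of the cited papers; with it, preconditioned CG iteration counts should stay essentially level-independent.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def stiffness_1d(m):
    h = 1.0 / (m + 1)
    return (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
            - np.diag(np.ones(m - 1), -1)) / h

def prolong(mc):
    """Linear interpolation: mc interior nodes -> 2*mc + 1 interior nodes."""
    P = np.zeros((2 * mc + 1, mc))
    for i in range(mc):
        P[2 * i, i] += 0.5
        P[2 * i + 1, i] = 1.0
        P[2 * i + 2, i] += 0.5
    return P

def bpx_like_preconditioner(J, j0=1):
    """Multilevel diagonal scaling: C = sum_j P_{j->J} diag(A_j)^{-1} P_{j->J}^T."""
    A = stiffness_1d(2 ** J - 1)
    mats = [(A, np.eye(A.shape[0]))]            # (level matrix, prolongation to level J)
    transfer, Aj = np.eye(A.shape[0]), A
    for j in range(J - 1, j0 - 1, -1):
        P = prolong(2 ** j - 1)
        Aj = P.T @ Aj @ P                       # Galerkin coarse-level matrix
        transfer = transfer @ P                 # prolongation level j -> finest level J
        mats.append((Aj, transfer.copy()))
    def apply_C(r):
        r = np.asarray(r).ravel()
        return sum(T @ ((T.T @ r) / np.diag(Ajj)) for Ajj, T in mats)
    return A, LinearOperator(A.shape, matvec=apply_C)

for J in range(5, 11):
    A, C = bpx_like_preconditioner(J)
    b = np.ones(A.shape[0])
    its = [0]
    _, _ = cg(A, b, M=C, callback=lambda _: its.__setitem__(0, its[0] + 1))
    print(f"level J = {J:2d}:  PCG iterations = {its[0]}")
```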
