Stochastic Heat Kernel Estimation on Sampled Manifolds


  1. Stochastic Heat Kernel Estimation on Sampled Manifolds Symposium on Geometry Processing 2017 Tristan Aumentado-Armstrong & Kaleem Siddiqi July 4, 2017 Centre for Intelligent Machines & School of Computer Science McGill University, Canada

  2. Table of contents: 1. Introduction 2. Theoretical Background 3. Algorithm 4. Empirical Results 5. Discussion

  3. Introduction

  4. Sampled Manifolds and Heat Kernels Many datasets can be viewed as random samples from a Riemannian manifold (e.g. 3D point clouds, natural images). The underlying manifold has an associated heat kernel, with many applications in computer vision and graphics: • Feature extraction (Sun et al, 2009; Gebal et al, 2009) • Shape matching (Ovsjanikov et al, 2010) • Shape retrieval (Bronstein et al, 2011)

  5. Heat Kernel Applications for Shapes I Invariance properties (Bronstein et al, 2010); feature point detection & description (Sun et al, 2009).

  6. Heat Kernel Applications for Shapes II Segmentation (Gebal et al, 2009); symmetry (Sun et al, 2009; Ovsjanikov et al, 2010).

  7. Heat Kernel Applications for Shapes III Point matching (Ovsjanikov et al, 2010).

  8. Heat Kernel Applications for Shapes IV Shape retrieval & global description (Ovsjanikov et al, 2009).

  9. Computing the Heat Kernel The most straightforward spectral approach involves a global eigenvalue problem. Others have devised more sophisticated approaches: • Multiresolution prolongation (Vaxman et al, 2010) • Rational Chebyshev approximation (Patanè and Spagnuolo, 2014)

  10. A Stochastic Perspective The heat kernel on a manifold has been studied with stochastic methods since Itô (1950). Basic idea: the heat kernel describes the transition density function of Brownian motion on the manifold. Question: can we use this outlook to compute the heat kernel? What properties would such an algorithm have?

  11. Answer We show how to use a Monte Carlo algorithm, via trajectories simulated on the sampled manifold.

  12. Theoretical Background

  13. The Laplace-Beltrami Operator Let (M, g) be a Riemannian manifold in R^D with dim(M) = d. The Laplace-Beltrami operator (LBO) on it is given by ∆_g = div_g grad_g = ∇_g · ∇_g. In local coordinates, this can also be written as: ∆_g = Σ_{j,k} g^{jk} ∂²/(∂x^j ∂x^k) [diffusion] − Σ_{j,k,ℓ} g^{jk} Γ^ℓ_{jk} ∂/∂x^ℓ [convection], where Γ^ℓ_{jk} are the Christoffel symbols.

  14. The Riemannian Heat Equation The heat kernel is the fundamental solution to the heat equation on the manifold: ∆_g u = ∂u/∂t. In spectral form: K_t(x, y) = Σ_{i=0}^∞ exp(−λ_i t) φ_i(x) φ_i(y), where ∆_g φ_i = −λ_i φ_i. The heat kernel K_t(x, y) is our quantity of interest.
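The spectral sum can be sketched numerically. A minimal stand-in (not from the paper), using a small graph Laplacian in place of ∆_g: with a symmetric Laplacian L having eigenvalues λ_i ≥ 0, the kernel is Φ e^{−tΛ} Φ^T (the slide's sign convention ∆_g φ_i = −λ_i φ_i gives the same e^{−λ_i t} factors):

```python
import numpy as np

def spectral_heat_kernel(L, t):
    """Heat kernel of a symmetric discrete Laplacian L via its eigenpairs:
    K_t = sum_i exp(-lambda_i t) phi_i phi_i^T."""
    lam, phi = np.linalg.eigh(L)  # L phi_i = lambda_i phi_i, lambda_i >= 0
    return phi @ np.diag(np.exp(-lam * t)) @ phi.T

# Toy example: graph Laplacian of a 6-cycle as a stand-in for the LBO.
n = 6
A = np.roll(np.eye(n), 1, axis=0) + np.roll(np.eye(n), -1, axis=0)
L = np.diag(A.sum(axis=1)) - A

K = spectral_heat_kernel(L, t=0.5)
print(np.allclose(K, K.T))                                   # symmetric in x, y
print(np.allclose(spectral_heat_kernel(L, 0.0), np.eye(n)))  # K_0 is the identity
```

This is exactly the global eigenvalue computation the stochastic method seeks to avoid: every eigenpair of L is needed up front.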

  15. Stochastic Calculus on Manifolds (I) An Itô diffusion is a stochastic differential equation (SDE): dX_t = μ(X_t) dt [drift] + σ(X_t) dB_t [diffusion]. The drift (convection) and diffusion coefficients define the SDE. (Figures: effect of changing μ vs. changing σ.)
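Such an SDE can be simulated with the standard Euler-Maruyama scheme. A short sketch (not from the paper) on a Euclidean example, the Ornstein-Uhlenbeck process dX = −X dt + 0.5 dB, whose mean E[X_T] = x₀e^{−T} is known in closed form; the step size and path count are illustrative:

```python
import numpy as np

def euler_maruyama(mu, sigma, x0, T, dt, rng):
    """Simulate dX_t = mu(X_t) dt + sigma(X_t) dB_t with Euler-Maruyama."""
    n = int(T / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        dB = rng.normal(scale=np.sqrt(dt))  # Brownian increment ~ N(0, dt)
        x[i + 1] = x[i] + mu(x[i]) * dt + sigma(x[i]) * dB
    return x

rng = np.random.default_rng(0)
ends = np.array([euler_maruyama(lambda x: -x, lambda x: 0.5, 2.0, 3.0, 1e-2, rng)[-1]
                 for _ in range(500)])
# Monte Carlo mean should approach E[X_T] = 2 e^{-3}.
print(abs(ends.mean() - 2.0 * np.exp(-3.0)) < 0.08)
```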

  16. Stochastic Calculus on Manifolds (II) Every Itô diffusion has an infinitesimal generator: L = Σ_ℓ μ^ℓ ∂/∂x^ℓ + (1/2) Σ_{i,j} [σσ^T]^{ij} ∂²/(∂x^i ∂x^j), which encodes its short-time behaviour. Definition: Brownian motion (BM) on a manifold. BM on (M, g) is an Itô diffusion with L = ∆_g / 2. For BM, the infinitesimal generator can be rewritten as: L = (1/2) ∆_g = (1/2) Σ_{j,k} g^{jk} ∂²/(∂x^j ∂x^k) − (1/2) Σ_{j,k,ℓ} g^{jk} Γ^ℓ_{jk} ∂/∂x^ℓ, identifying [σσ^T]^{jk} = g^{jk} and μ^ℓ = −(1/2) Σ_{j,k} g^{jk} Γ^ℓ_{jk}.

  17. Stochastic Calculus on Manifolds (III) Hence, given local coordinates on a manifold, we can simulate Brownian motion on it via: dX^i_t = μ^i(X_t) dt + Σ_k σ^i_k(X_t) dB^k_t, with convection (drift) and diffusion coefficients given by μ^k = −(1/2) Σ_{i,j} g^{ij} Γ^k_{ij} and σ = √(g⁻¹), where (g^{ij}) = g⁻¹ is the inverse of the metric tensor g.
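As a concrete sketch (not from the paper), this recipe can be worked out on the unit sphere: in spherical coordinates (θ, φ) the metric is g = diag(1, sin²θ), which gives μ = (cot θ / 2, 0) and σ = diag(1, 1/sin θ). The clamp away from the chart poles and the step size are crude illustrative choices:

```python
import numpy as np

def bm_on_sphere(theta0, phi0, T, dt, rng):
    """Brownian motion on S^2 in spherical coordinates.
    Metric g = diag(1, sin^2 theta) -> drift mu = (cot(theta)/2, 0),
    diffusion sigma = diag(1, 1/sin(theta))."""
    th, ph = theta0, phi0
    for _ in range(int(T / dt)):
        dB = rng.normal(scale=np.sqrt(dt), size=2)
        th += 0.5 / np.tan(th) * dt + dB[0]
        th = np.clip(th, 1e-3, np.pi - 1e-3)  # crude guard at chart singularities
        ph += dB[1] / np.sin(th)
    return th, ph

# Consistency with the spectrum: for generator Delta_g / 2 on S^2 the l = 1
# eigenfunction cos(theta) has lambda_1 = 2, so E[cos theta_t] = cos(theta_0) e^{-t}.
rng = np.random.default_rng(1)
end = np.array([bm_on_sphere(np.pi / 3, 0.0, 0.5, 1e-3, rng)[0] for _ in range(2000)])
print(abs(np.cos(end).mean() - np.cos(np.pi / 3) * np.exp(-0.5)) < 0.08)
```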

  18. Relation to the Manifold Heat Kernel The BM X_t is a Markov process with a transition density function p(x, t | y). There is an intuitive connection between p(x, t | y) and K_t(x, y): • p(x, t | y): probability density of a random walk (BM) on (M, g) reaching x from y in time t. • K_t(x, y): amount of some substance that diffuses over (M, g) from x to y in time t. Prop. (Hsu, 2002): equivalence of p(x, t | y) and K_t(x, y): p(x, t | y) = K_t(x, y), the heat kernel of ∂u/∂t = Lu.

  19. Transition Density Estimation (I) We may estimate p(x, t | y) instead of K_t(x, y). But how? A Monte Carlo approach: use kernel density estimation (KDE) over a set of n_T sample trajectories. (Figures: sample trajectories illustrating regions of high and low p(x, t | y).)

  20. Transition Density Estimation (II) Kernel density estimation (KDE) performs distance weighting: p̂(x, t | y) = (1/n_T) Σ_{j=1}^{n_T} K_δ(X^{(j)}_{x,t} − y) • X^{(j)}_{x,t}: trajectory j at time t starting from x • K_δ: a kernel function (e.g. Gaussian density) • δ: the kernel bandwidth (controls the smoothing level) Includes some theoretical guarantees. Problem: assumes the process is in Euclidean space, not on a curved manifold.
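A minimal Euclidean sketch of this estimator (assumptions: 1D Brownian endpoints, a Gaussian kernel, and an illustrative bandwidth), checked against the known transition density N(0, t):

```python
import numpy as np

def kde_transition_density(endpoints, y, delta):
    """Gaussian KDE: hat p(x, t | y) = (1/n_T) sum_j K_delta(X_t^{(j)} - y)."""
    u = (endpoints - y) / delta
    return np.exp(-0.5 * u**2).sum() / (len(endpoints) * delta * np.sqrt(2 * np.pi))

# Endpoints of Euclidean BM started at x = 0 at time t are distributed N(0, t),
# so the estimate at y = 0 should approach 1 / sqrt(2 pi t).
rng = np.random.default_rng(2)
t = 1.0
endpoints = rng.normal(scale=np.sqrt(t), size=20000)
est = kde_transition_density(endpoints, 0.0, delta=0.1)
print(abs(est - 1 / np.sqrt(2 * np.pi * t)) < 0.03)
```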

  21. Transition Density Estimation (III) KDE on manifolds is possible, but requires geodesic distances. However, we can approximate the manifold KDE. Prop. (Ozakin & Gray, 2009): manifold KDE approximation: K̂_t(x, y) = (1/(n_T δ_h^d)) Σ_{j=1}^{n_T} Ψ(||X^{(j)}_{x,t} − y|| / δ_h), with X^{(j)}_{x,t} trajectory j at time t starting from x, Ψ a kernel, ||·|| the ambient norm, and δ_h the bandwidth. Intuition: at small scales, (M, g) has intrinsic distances like R^D, but has surface area like R^d.

  22. Algorithm

  23. Outline of Algorithm The algorithm has three steps: 1. Local surface construction 2. Stochastic trajectory generation 3. Transition density estimation

  24. 1. Moving Least Squares Surface Construction Use PCA on neighbours N_k(p) to get local coordinates per point p. Fit z_p with weighted moving least squares (MLS), where z_p(x, y) = Σ_{i,j} γ_{i,j} x^i y^j with deg(z_p) = 2. Local surface: Λ_p = {x, y, z_p(x, y)}. (Figure: global vs. local coordinates.)
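Step 1 can be sketched roughly as below. This is a simplified stand-in, not the paper's implementation: the Gaussian MLS weight, its bandwidth, and the neighbourhood size are illustrative assumptions, and neighbours are found by brute force rather than a KD-tree:

```python
import numpy as np

def local_quadratic_surface(P, p_idx, k=15):
    """Fit a degree-2 patch z_p(x, y) = sum_ij gamma_ij x^i y^j over the k
    nearest neighbours of P[p_idx], in a PCA local frame."""
    p = P[p_idx]
    nbrs = P[np.argsort(np.linalg.norm(P - p, axis=1))[:k]]
    # PCA: the smallest-variance direction serves as the local normal.
    C = np.cov((nbrs - nbrs.mean(axis=0)).T)
    _, vecs = np.linalg.eigh(C)
    frame = vecs[:, ::-1]                   # columns: tangent1, tangent2, normal
    loc = (nbrs - p) @ frame                # local coordinates (x, y, z)
    x, y, z = loc[:, 0], loc[:, 1], loc[:, 2]
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    sw = np.sqrt(np.exp(-(x**2 + y**2) / (2 * 0.5**2)))  # assumed MLS weights
    gamma, *_ = np.linalg.lstsq(A * sw[:, None], z * sw, rcond=None)
    return gamma, frame

# Sanity check on samples from the paraboloid z = x^2 + y^2.
rng = np.random.default_rng(3)
xy = rng.uniform(-0.5, 0.5, size=(200, 2))
P = np.column_stack([xy, (xy**2).sum(axis=1)])
P[0] = [0.0, 0.0, 0.0]
gamma, _ = local_quadratic_surface(P, 0)
print(abs(gamma[0]) < 0.05)  # fitted patch passes close to the base point
```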

  25. 2. Stochastic Trajectory Generation (I) Can use standard stochastic numerical integration on a Λ_p: X_t = X_s + ∫_s^t μ(X_r) dr + ∫_s^t σ(X_r) dB_r. Problem: how to transition from Λ_p to Λ_q? Repeat until T_max: • Simulate for time T_δ on Λ_p • Find closest point q to X_t • Project X_t to Λ_q & set p ← q
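The simulate/project loop above can be sketched in miniature. In this stand-in (not the paper's code) the exact unit sphere replaces the fitted patches Λ_p, so "project X_t to Λ_q" becomes radial normalization, and steps are kept only roughly tangent; the names and parameters are illustrative:

```python
import numpy as np

def hop_trajectory(x0, T_max, T_delta, dt, rng):
    """Sketch of step 2: simulate for time T_delta, then re-project onto the
    surface (here the unit sphere stands in for the local patch Lambda_q)."""
    x = np.array(x0, float)
    t = 0.0
    while t < T_max:
        for _ in range(int(T_delta / dt)):   # simulate for time T_delta
            step = rng.normal(scale=np.sqrt(dt), size=3)
            x += step - x * (x @ step)       # keep the step roughly tangent
        x /= np.linalg.norm(x)               # "project onto Lambda_q"
        t += T_delta
    return x

rng = np.random.default_rng(4)
x = hop_trajectory([0.0, 0.0, 1.0], T_max=1.0, T_delta=0.1, dt=1e-3, rng=rng)
print(abs(np.linalg.norm(x) - 1.0) < 1e-9)   # trajectory ends on the surface
```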

  26. 2. Stochastic Trajectory Generation (II) Problem: evaluating μ and σ is expensive. Assumption: minor geometric effects on X_t are less important than the variance between trajectories at high t. Solution: if t > τ, linearize the curvature terms in the SDE. Similar to the multiresolution idea (Vaxman et al, 2010).

  27. 3. Kernel Density Estimation (I) Parameters: • P: input points (|P| = N) • n_T: trajectories per point • S: set of feature points • H_T: times of interest. Approach: • Generate n_T trajectories from each q ∈ S • ∀ p ∈ P, q ∈ S, t ∈ H_T: K̂_t(p, q) = (1/(n_T δ_h^d π^{d/2})) Σ_{j=1}^{n_T} exp(−||X^{(j)}_{q,t} − p||² / δ_h²), which is manifold KDE with a Gaussian kernel. This fills K̂_t(x, y) as an |S| × N × |H_T| array.
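The inner computation, for one source q and one time t, can be sketched as below. The inputs here are random toy data, not real trajectories, and the function name is illustrative:

```python
import numpy as np

def heat_kernel_estimates(P, endpoints, delta, d=2):
    """Fill hat K_t(p, q) for every p in P from the n_T trajectory endpoints of
    one source q at one time t, via the Gaussian manifold-KDE formula
    hat K = 1/(n_T delta^d pi^{d/2}) sum_j exp(-||X_j - p||^2 / delta^2)."""
    n_T = len(endpoints)
    diff = P[:, None, :] - endpoints[None, :, :]   # shape (N, n_T, D)
    sq = (diff**2).sum(axis=-1)                    # squared ambient distances
    return np.exp(-sq / delta**2).sum(axis=1) / (n_T * delta**d * np.pi**(d / 2))

# Toy inputs: random "points" and "endpoints" in ambient R^3.
rng = np.random.default_rng(5)
P = rng.normal(size=(100, 3))
endpoints = rng.normal(size=(200, 3))
K = heat_kernel_estimates(P, endpoints, delta=0.5)
print(K.shape == (100,) and bool((K >= 0).all()))
```

Looping this over all q ∈ S and t ∈ H_T fills the |S| × N × |H_T| array described above.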

  28. 3. Kernel Density Estimation (II) Problem: how to choose the kernel bandwidth δ_h? (a) Theory (Milstein et al, 2004) shows optimal δ_h ≈ n_T^{−1/(4+d)}. (b) Use the short-time autodiffusion expansion: K̂_t(p, p) ≈ (4πt)^{−d/2} (1 + R(p) t / 6), where R(p) is the Ricci curvature (for d = 2, R = 2K). Calibrate δ_h with K̂_t(p, p) around the theory-based value using a few small times.
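Both ingredients of the bandwidth choice are one-liners. A small sketch (function names are illustrative; the calibration search itself is omitted):

```python
import numpy as np

def theory_bandwidth(n_T, d):
    """Milstein-style rate: optimal bandwidth scales as n_T^{-1/(4+d)}."""
    return n_T ** (-1.0 / (4 + d))

def autodiffusion_expansion(t, R, d=2):
    """Short-time target K_t(p,p) ~ (4 pi t)^{-d/2} (1 + R(p) t / 6)."""
    return (4 * np.pi * t) ** (-d / 2) * (1 + R * t / 6)

# For n_T = 1000 trajectories on a surface (d = 2): 1000^{-1/6} = 10^{-1/2}.
print(round(theory_bandwidth(1000, 2), 4))  # 0.3162
# On the unit sphere (Gaussian curvature K = 1, so R = 2K = 2) the curvature
# correction raises K_t(p, p) above the flat value (4 pi t)^{-1}:
print(autodiffusion_expansion(0.01, R=2.0) > (4 * np.pi * 0.01) ** (-1.0))
```

Calibration then tries bandwidths near `theory_bandwidth` and keeps the one whose measured K̂_t(p, p) best matches the expansion at a few small t.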

  29. Algorithmic Complexity 1. Moving least squares surface construction: O(N log N) 2. Stochastic trajectory generation: O(|S| n_T T_max [log(N)/T_δ + C_step/h]) 3. Kernel density estimation: O(n_T |S| N |H_T|). Assumes a KD-tree in 3D.

  30. Empirical Results

  31. The Sphere (I) We sample N = 500 points from S². Manifold fidelity: (Figure: ||X_t|| over t for varying T_δ; green: ||X_t|| = 1, blue: ||X_t|| = 1.005.) The true K_t can be computed from the spherical harmonics.
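The analytic reference on S² is the standard Legendre expansion K_t(θ) = Σ_ℓ (2ℓ+1)/(4π) e^{−ℓ(ℓ+1)t} P_ℓ(cos θ), where θ is the angle between the two points. A sketch using the three-term Legendre recurrence (truncation level is an illustrative choice):

```python
import numpy as np

def sphere_heat_kernel(theta, t, lmax=60):
    """Heat kernel on S^2 between points at angle theta:
    K_t = sum_l (2l+1)/(4 pi) exp(-l(l+1) t) P_l(cos theta)."""
    x = np.cos(theta)
    P_prev, P_curr = np.ones_like(x), x                   # P_0 and P_1
    K = (P_prev + 3 * np.exp(-2 * t) * P_curr) / (4 * np.pi)
    for l in range(2, lmax + 1):
        # Recurrence: l P_l = (2l - 1) x P_{l-1} - (l - 1) P_{l-2}
        P_prev, P_curr = P_curr, ((2 * l - 1) * x * P_curr - (l - 1) * P_prev) / l
        K += (2 * l + 1) / (4 * np.pi) * np.exp(-l * (l + 1) * t) * P_curr
    return K

# The kernel integrates to 1: 2 pi * int_0^pi K_t(theta) sin(theta) dtheta = 1.
theta = np.linspace(0, np.pi, 2001)
f = sphere_heat_kernel(theta, t=0.1) * np.sin(theta)
total = 2 * np.pi * np.sum((f[1:] + f[:-1]) / 2 * np.diff(theta))
print(abs(total - 1.0) < 1e-4)
```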

  32. The Sphere (II) (Figures: K_t(x, x) and antipodal K_t(x, y) over t for n_T = 70, 150, 1000; green: stochastic estimate, blue: analytic value.)

  33. TOSCA Meshes (I) Only the surface construction step (1) is changed by having a mesh structure (i.e. choosing the neighbours N_k(p)). Manifold fidelity: (problems if high curvature & large T_δ).

  34. TOSCA Meshes (II) Compare to the linear FEM spectral approach (Reuter et al, 2006): the stochastic approach has sparser log arrival-time maps (n_T = 500). (Figure: stochastic vs. spectral maps.)

  35. Timing on Larger Point Clouds (I) Using KD-trees (no edge/face information used). Fix: T_δ = 0.25, h_s = 0.05, n_T = 200, T_max = 500, |H_T| = 5.
