
DIGITAL GEOMETRY PROCESSING: Algorithms for Representing, Analyzing and Comparing 3D Shapes
Today: Surface Scanning, Processing and Reconstruction
• 3D shape acquisition (scanning)
• Shape alignment: Iterative Closest Point (ICP)
• Point cloud processing and reconstruction


  1. Local Alignment
  • What does it mean for an alignment to be good? Intuition: we want corresponding points to be close after transformation.
  Problems: 1. We don't know which points correspond. 2. We don't know the optimal alignment.

  2. Iterative Closest Point (ICP)
  • Approach: iterate between finding correspondences and finding the transformation.
  Given a pair of shapes, X and Y, iterate:
  1. For each $x_i$, find its nearest neighbor $y_i$.
  2. Find the deformation $(R, t)$ minimizing: $\sum_{i=1}^N \| R x_i + t - y_i \|_2^2$


  11. Iterative Closest Point
  • Requires two main computations:
  1. Computing nearest neighbors.
  2. Computing the optimal transformation.

  12. ICP: Nearest Neighbor Computation
  Closest points: $y_i = \arg\min_{y \in Y} \| y - x_i \|$
  How to find closest points efficiently? Straightforward complexity: $O(N_X \cdot N_Y)$, where $N_X$ is the number of points on X and $N_Y$ the number of points on Y.
  Y divides the space into Voronoi cells: $V(y \in Y) = \{ z \in \mathbb{R}^3 : \| y - z \| < \| y' - z \| \ \forall y' \in Y,\, y' \neq y \}$
  Given a query point, determine to which cell it belongs.
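The nearest-neighbor step above can be sketched with a kd-tree, one of the space-partitioning structures the later slides mention; the function name and the use of SciPy's `cKDTree` are illustrative choices, not from the slides.

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbors(X, Y):
    """For each point x_i in X, return its nearest neighbor y_i in Y."""
    tree = cKDTree(Y)       # build the spatial partition of Y once
    _, idx = tree.query(X)  # logarithmic-time search per query (on average)
    return Y[idx]
```

Building the tree once and querying all of X amortizes the construction cost across ICP iterations.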


  14. Closest points: Voronoi Cells
  [Figure: Voronoi diagram of a 2D point set, showing a cell V(y). Source: M. Bronstein]
  $V(y \in Y) = \{ z \in \mathbb{R}^3 : \| y - z \| < \| y' - z \| \ \forall y' \in Y,\, y' \neq y \}$

  15. Closest points: Voronoi Cells
  [Figure: exact vs. approximate Voronoi cells. Source: M. Bronstein]
  • To reduce search complexity, approximate the Voronoi cells.
  • Use binary space partition trees (e.g. kd-trees or octrees).
  • Approximate nearest neighbor search complexity: $O(\log n)$ per query.

  16. ICP: Optimal Transformation
  Problem formulation: given two sets of points $\{x_i\}, \{y_i\}, i = 1..N$ in $\mathbb{R}^3$, find the rigid transform $(R, t)$ that minimizes: $\sum_{i=1}^N \| R x_i + t - y_i \|_2^2$


  18. ICP: Optimal Transformation
  Problem formulation: given two sets of points $\{x_i\}, \{y_i\}, i = 1..N$ in $\mathbb{R}^3$, find the rigid transform $(R, t)$ that minimizes: $\sum_{i=1}^N \| R x_i + t - y_i \|_2^2$
  Closed-form solution with rotation matrices:
  1. Construct $C = \sum_{i=1}^N (y_i - \mu_Y)(x_i - \mu_X)^T$, where $\mu_X = \frac{1}{N} \sum_i x_i$ and $\mu_Y = \frac{1}{N} \sum_i y_i$.
  2. Compute the SVD of C: $C = U \Sigma V^T$.
     • If $\det(U V^T) = 1$: $R_{opt} = U V^T$.
     • Else: $R_{opt} = U \tilde{\Sigma} V^T$, where $\tilde{\Sigma} = \mathrm{diag}(1, 1, -1)$.
  3. Set $t_{opt} = \mu_Y - R_{opt}\, \mu_X$.
  Note that C is a 3×3 matrix, so its SVD is very fast.
  Arun et al., Least-Squares Fitting of Two 3-D Point Sets
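The closed-form steps above translate directly into code; this is a minimal sketch, where the function name and the (N, 3) array convention are assumptions.

```python
import numpy as np

def optimal_rigid_transform(X, Y):
    """Return R, t minimizing sum_i ||R x_i + t - y_i||^2 for (N, 3) arrays."""
    mu_x, mu_y = X.mean(axis=0), Y.mean(axis=0)
    C = (Y - mu_y).T @ (X - mu_x)      # 3x3 cross-covariance sum_i y'_i x'_i^T
    U, _, Vt = np.linalg.svd(C)
    # Use diag(1, 1, -1) when det(U V^T) = -1 to avoid a reflection
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ S @ Vt
    t = mu_y - R @ mu_x
    return R, t
```

Given exact correspondences, this recovers the ground-truth rigid motion up to floating-point precision.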

  19. Iterative Closest Point
  Given a pair of shapes, X and Y, iterate:
  1. For each $x_i$, find its nearest neighbor $y_i$.
  2. Find the deformation minimizing: $\sum_{i=1}^N \| R x_i + t - y_i \|_2^2$
  Convergence:
  • The error $\sum_{i=1}^N d^2(x_i, Y)$ decreases at each iteration.
  • Converges to a local minimum.
  • With a good initial guess: the global minimum. [Besl & McKay '92]
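Putting the two computations together, the full iteration can be sketched as follows; the kd-tree correspondence search, the convergence test, and all names are illustrative choices under the assumptions of the slides.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(X, Y, n_iters=50, tol=1e-8):
    """Align point set X to Y; returns the rigid transform (R, t)."""
    R, t = np.eye(3), np.zeros(3)
    prev_err = np.inf
    tree = cKDTree(Y)
    for _ in range(n_iters):
        Xt = X @ R.T + t                       # current alignment guess
        dists, idx = tree.query(Xt)            # 1. correspondences
        P, Q = X, Y[idx]
        mu_p, mu_q = P.mean(axis=0), Q.mean(axis=0)
        C = (Q - mu_q).T @ (P - mu_p)
        U, _, Vt = np.linalg.svd(C)            # 2. closed-form fit (Arun et al.)
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
        R = U @ S @ Vt
        t = mu_q - R @ mu_p
        err = np.mean(dists ** 2)              # non-increasing across iterations
        if prev_err - err < tol:
            break
        prev_err = err
    return R, t
```

As the slides note, this converges to a local minimum, so the result depends on the initial pose of X relative to Y.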

  20. Variations of ICP
  1. Selecting source points (from one or both scans): sampling.
  2. Matching to points in the other mesh.
  3. Weighting the correspondences.
  4. Rejecting certain (outlier) point pairs.
  5. Assigning an error metric to the current transform.
  6. Minimizing the error metric w.r.t. the transformation.


  22. Iterative Closest Point
  Given a pair of shapes, X and Y, iterate:
  1. For each $x_i$, find its nearest neighbor $y_i$.
  2. Find the deformation minimizing: $\sum_{i=1}^N \| R x_i + t - y_i \|_2^2$
  Problem: uneven sampling.

  23. Iterative Closest Point
  Given a pair of shapes, X and Y, iterate:
  1. For each $x_i$, find its nearest neighbor $y_i$.
  2. Find the deformation minimizing: $\sum_{i=1}^N d(R x_i + t, P(y_i))^2 = \sum_{i=1}^N \left( (R x_i + t - y_i)^T n_{y_i} \right)^2$
  Solution: minimize the distance to the tangent plane. Chen & Medioni, '91

  24. Iterative Closest Point
  Given a pair of shapes, X and Y, iterate:
  1. For each $x_i$, find its nearest neighbor $y_i$.
  2. Find the deformation minimizing: $\sum_{i=1}^N d(R x_i + t, P(y_i))^2 = \sum_{i=1}^N \left( (R x_i + t - y_i)^T n_{y_i} \right)^2$
  Solution: minimize the distance to the tangent plane. Kok-Lim Low, '04

  25. Iterative Closest Point
  $R_{opt}, t_{opt} = \arg\min_{R^T R = I,\, t} \sum_{i=1}^N \left( (R x_i + t - y_i)^T n_{y_i} \right)^2$
  Question: how to minimize the error?
  Challenge: although the error is quadratic (linear derivative), the space of rotation matrices is not linear.
  Problem: no closed-form solution!

  26. Iterative Closest Point
  Common solution: linearize the rotation. Assume the rotation angle is small:
  $R x_i \approx x_i + r \times x_i$, where $r / \|r\|$ is the axis and $\|r\|$ the angle of rotation.
  Note: this follows from Rodrigues' formula
  $R(r, \alpha)\, x_i = x_i \cos(\alpha) + (r \times x_i) \sin(\alpha) + r (r^T x_i)(1 - \cos(\alpha))$
  and the first-order approximations $\sin(\alpha) \approx \alpha$, $\cos(\alpha) \approx 1$.

  27. Iterative Closest Point
  Given a pair of shapes, X and Y, iterate:
  1. For each $x_i$, find its nearest neighbor $y_i$.
  2. Find $(r, t)$ minimizing: $E(r, t) = \sum_{i=1}^N \left( (x_i + r \times x_i + t - y_i)^T n_{y_i} \right)^2$
  Setting $\frac{\partial}{\partial r} E(r, t) = 0$ and $\frac{\partial}{\partial t} E(r, t) = 0$ leads to a 6×6 linear system $A x = b$ with
  $A = \sum_i \begin{pmatrix} x_i \times n_{y_i} \\ n_{y_i} \end{pmatrix} \begin{pmatrix} x_i \times n_{y_i} \\ n_{y_i} \end{pmatrix}^T$, $\quad b = \sum_i (y_i - x_i)^T n_{y_i} \begin{pmatrix} x_i \times n_{y_i} \\ n_{y_i} \end{pmatrix}$, $\quad x = \begin{pmatrix} r \\ t \end{pmatrix}$
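The 6×6 system above can be assembled and solved as in this sketch; the function name, the stacked-Jacobian formulation, and the reconstruction of an approximate R from the small-angle vector r are illustrative.

```python
import numpy as np

def point_to_plane_step(X, Y, N):
    """One linearized point-to-plane step: solve A [r; t] = b for matched
    source points X, target points Y, and unit target normals N (all (n, 3))."""
    cross = np.cross(X, N)                     # rows are x_i x n_{y_i}
    J = np.hstack([cross, N])                  # row i = (x_i x n_i, n_i), shape (n, 6)
    A = J.T @ J                                # the 6x6 matrix of the slide
    d = np.einsum('ij,ij->i', Y - X, N)        # (y_i - x_i)^T n_i
    b = J.T @ d
    r, t = np.split(np.linalg.solve(A, b), 2)
    # Small-angle rotation: R x ~ x + r x x, i.e. I plus the cross-product matrix
    R = np.eye(3) + np.array([[0.0, -r[2], r[1]],
                              [r[2], 0.0, -r[0]],
                              [-r[1], r[0], 0.0]])
    return R, t
```

In a full point-to-plane ICP this step replaces the closed-form SVD fit, and R is typically re-orthonormalized after each update since the small-angle matrix is only approximately a rotation.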

  28. Iterative Closest Point. Aligning the bunny to itself.

  29. 3D Point Cloud Processing
  A raw point cloud sampling of a shape is typically insufficient for most applications. Main stages in processing:
  1. Shape scanning (acquisition).
  2. If there are multiple scans, align them.
  3. Smoothing: remove local noise and outliers.
  4. Estimate surface normals.
  5. Surface reconstruction:
  • Implicit representation
  • Triangle mesh

  30. Normal Estimation and Outlier Removal
  Fundamental problems in point cloud processing. Although seemingly very different, both can be solved with the same general approach.

  31. Normal Estimation
  Assume we have a clean sampling of the surface. Our goal is to find the best approximation of the tangent direction, and thus the normal, at each point.


  33. Normal Estimation
  Assume we have a clean sampling of the surface.
  Goal: find the best approximation of the normal at P.
  Method: given a line l through P with normal n, for another point $p_i$:
  $d(p_i, l)^2 = \frac{((p_i - P)^T n)^2}{n^T n} = ((p_i - P)^T n)^2$ if $\|n\| = 1$

  34. Normal Estimation
  Assume we have a clean sampling of the surface.
  Method: find n minimizing $\sum_{i=1}^k d(p_i, l)^2$ for a set of k points (e.g. the k nearest neighbors of P):
  $n_{opt} = \arg\min_{\|n\|=1} \sum_{i=1}^k ((p_i - P)^T n)^2$


  36. Normal Estimation
  Using a Lagrange multiplier:
  $\frac{\partial}{\partial n} \left( \sum_{i=1}^k ((p_i - P)^T n)^2 \right) - \lambda \frac{\partial}{\partial n} (n^T n) = 0$
  $\Rightarrow \left( \sum_{i=1}^k (p_i - P)(p_i - P)^T \right) n = \lambda n \quad \Leftrightarrow \quad C n = \lambda n$

  37. Normal Estimation
  The normal n must be an eigenvector of the matrix $C = \sum_{i=1}^k (p_i - P)(p_i - P)^T$: $C n = \lambda n$.
  Moreover, since:
  $n_{opt} = \arg\min_{\|n\|=1} \sum_{i=1}^k ((p_i - P)^T n)^2 = \arg\min_{\|n\|=1} n^T C n$

  38. Normal Estimation
  The normal n must be an eigenvector of $C = \sum_{i=1}^k (p_i - P)(p_i - P)^T$. Moreover, $n_{opt}$ must be the eigenvector corresponding to the smallest eigenvalue of C.

  39. Normal Estimation
  Method outline (PCA):
  1. Given a point P in the point cloud, find its k nearest neighbors.
  2. Compute $C = \sum_{i=1}^k (p_i - P)(p_i - P)^T$.
  3. n: the eigenvector corresponding to the smallest eigenvalue of C.

  40. Normal Estimation
  Method outline (PCA):
  1. Given a point P in the point cloud, find its k nearest neighbors.
  2. Compute $C = \sum_{i=1}^k (p_i - P)(p_i - P)^T$.
  3. n: the eigenvector corresponding to the smallest eigenvalue of C.
  Variant on the theme: use the centroid $\bar{P} = \frac{1}{k} \sum_{i=1}^k p_i$ and $C = \sum_{i=1}^k (p_i - \bar{P})(p_i - \bar{P})^T$.
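The three-step outline can be sketched as follows; the kd-tree neighbor search and all names are illustrative, and `numpy.linalg.eigh` is used because it returns eigenvalues in ascending order, so the first eigenvector is the normal estimate.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=10):
    """Per-point normal: the smallest-eigenvalue eigenvector of the local covariance."""
    tree = cKDTree(points)
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k)        # 1. k nearest neighbors (includes p itself)
        q = points[idx] - p
        C = q.T @ q                        # 2. sum_i (p_i - P)(p_i - P)^T
        _, v = np.linalg.eigh(C)           # eigenvalues in ascending order
        normals[i] = v[:, 0]               # 3. smallest-eigenvalue eigenvector
    return normals
```

Note that the sign of each normal is arbitrary: PCA only determines the normal direction up to orientation, which is why consistent orientation is a separate step in practice.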

  41. Normal Estimation
  Critical parameter: k. Because of uneven sampling, one typically fixes a radius r and uses all points inside a ball of radius r.
  How to pick an optimal r?

  42. Normal Estimation: Curvature Effect
  Due to curvature, a large r can lead to estimation bias. Due to noise, a small r can lead to errors.

  43. Normal Estimation
  [Figure: estimation error under Gaussian noise for different values of curvature (2D), for sufficiently dense sampling. Source: Mitra et al. '04]

  44. Normal Estimation – Neighborhood Size
  [Figure: neighborhood-size behavior under 1× and 2× noise. Source: Mitra et al. '04]
  Unfortunately, the curvature is not known in practice, so it is difficult to pick the optimal size.

  45. Normal and Curvature Estimation
  Use a variety of clean 3D models (a training set of 3D triangle meshes) to train a deep-learning-based method to estimate normals and curvature.
  PCPNET: Learning Local Shape Properties from Raw Point Clouds, Proc. Eurographics 2018, Guerrero, Kleiman, O., Mitra

  46. Outlier Removal Goal: remove points that do not lie close to a surface.

  47. Outlier Estimation – PCA
  For the covariance matrix $C = \sum_{i=1}^k (p_i - P)(p_i - P)^T$ and any vector v, the Rayleigh quotient is
  $\frac{v^T C v}{v^T v} = \sum_{i=1}^k \left( (p_i - P)^T v \right)^2 = \sum_{i=1}^k \left( \|p_i - P\| \cos(\theta_i) \right)^2$ if $\|v\| = 1$,
  where $\theta_i$ is the angle between v and $p_i - P$.
  Intuitively, $v_{min}$ maximizes the sum of angles to each vector $(p_i - P)$.

  48. Outlier Estimation
  If all the points are on a line, then $\lambda_{min}(C) = 0$ while $\lambda_{max}(C)$ is large: there exists a direction v along which the point cloud has no variability, so $\lambda_1 / \lambda_2$ is small (eigenvalues in increasing order).
  If the points are scattered randomly, then $\lambda_{max}(C) \approx \lambda_{min}(C)$, i.e. $\lambda_1 / \lambda_2 \approx 1$.

  49. Outlier Estimation – PCA
  If all the points are on a line, then $\lambda_{min}(C) = 0$ while $\lambda_{max}(C)$ is large; if the points are scattered randomly, then $\lambda_{max}(C) \approx \lambda_{min}(C)$.
  Thus, we can remove points where $\lambda_1 / \lambda_2 > \epsilon$ for some threshold (eigenvalues in increasing order).
  In 3D we expect two zero eigenvalues, so use $\lambda_2 / \lambda_3 > \epsilon$ for some threshold.
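The eigenvalue-ratio test can be sketched as below; this variant uses the smallest-to-middle ratio $\lambda_1 / \lambda_2$ as the flatness criterion, and the parameters `k` and `eps`, the function name, and the kd-tree neighbor search are all illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(points, k=10, eps=0.1):
    """Keep points whose local covariance is flat (smallest eigenvalue ratio below eps)."""
    tree = cKDTree(points)
    keep = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k)
        q = points[idx] - p
        w = np.linalg.eigvalsh(q.T @ q)   # eigenvalues in ascending order
        # Surface-like neighborhood: smallest eigenvalue much smaller than the next
        if w[0] > eps * w[1]:
            keep[i] = False
    return points[keep]
```

Points lying near a locally flat piece of the surface pass the test, while isolated points, whose neighborhoods have no low-variability direction, are flagged.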

  50. 3D Point Cloud Processing
  A raw point cloud sampling of a shape is typically insufficient for most applications. Main stages in processing:
  1. Shape scanning (acquisition).
  2. If there are multiple scans, align them.
  3. Smoothing: remove local noise and outliers.
  4. Estimate surface normals.
  5. Surface reconstruction:
  • Implicit representation
  • Triangle mesh

  51. 3D Point Cloud Reconstruction
  Main goal: construct a polygonal (e.g. triangle mesh) representation of the point cloud.
  [Figure: PCD → reconstruction algorithm → curve/surface]


  53. 3D Point Cloud Reconstruction
  Main problem: the data is unstructured; e.g. in 2D the points are not ordered. This is an inherently ill-posed (a.k.a. difficult) problem.
  [Figure: PCD → reconstruction algorithm → curve/surface]

  54. 3d Point Cloud Reconstruction Today: Reconstruction through Implicit models.

  55. Implicit surfaces
  Given a function f(x), the surface is defined as the zero level set $\{ x \ \text{s.t.} \ f(x) = 0 \}$, with $f(x) > 0$ on one side and $f(x) < 0$ on the other.
  Example (circle of radius r): $f(x, y) = x^2 + y^2 - r^2$

  56. Implicit surfaces Some 3d scanning technologies (e.g. CT, MRI) naturally produce implicit representations CT scans of human brain


  58. Implicit surfaces
  Converting from a point cloud to an implicit surface. Simplest method:
  1. Given a point x in space, find the nearest point p in the PCD.
  2. Set $f(x) = (x - p)^T n_p$: the signed distance to the tangent plane at p ($f(x) > 0$ on one side, $f(x) < 0$ on the other).
  Hugues Hoppe: Surface reconstruction from unorganized points
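The two steps above can be sketched as a closure over the point cloud and its precomputed normals (e.g. from the PCA estimation earlier); the function name and interface are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def make_implicit(points, normals):
    """Return f(x) = (x - p)^T n_p, the signed distance to the tangent
    plane of the nearest sample p in the point cloud."""
    tree = cKDTree(points)
    def f(x):
        _, i = tree.query(x)                  # nearest point p in the PCD
        return (x - points[i]) @ normals[i]   # signed distance to its tangent plane
    return f
```

Evaluating f on a regular grid and extracting the zero level set (e.g. with marching cubes) then yields the reconstructed triangle mesh.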
