Smoothed Analysis of ICP

  1. Smoothed Analysis of ICP. David Arthur and Sergei Vassilvitskii (Stanford University).

  2. Matching Datasets Problem: Given two point sets A and B, translate A to best match B.

  5. ICP: Iterative Closest Point. Problem: Given two point sets A and B, translate A to best match B. (Example point sets A and B shown in figure.) Which translation is best? Minimize φ(x) = Σ_{a∈A} ‖(a + x) − N_B(a + x)‖².

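The objective above can be sketched directly in code. This is a minimal illustration, not the authors' implementation; the helper names `nearest_in_B` and `phi` are ours.

```python
# Sketch of the ICP objective phi(x) for translation-only matching.
# N_B(p) denotes the nearest neighbor of p in B.
import numpy as np

def nearest_in_B(p, B):
    """Return the point of B closest to p (the slides' N_B)."""
    dists = np.linalg.norm(B - p, axis=1)
    return B[np.argmin(dists)]

def phi(x, A, B):
    """phi(x) = sum over a in A of ||(a + x) - N_B(a + x)||^2."""
    return sum(np.linalg.norm((a + x) - nearest_in_B(a + x, B)) ** 2 for a in A)

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[2.0, 0.0], [3.0, 0.0]])
# Translating A by exactly (2, 0) aligns it with B, so phi vanishes there.
assert phi(np.array([2.0, 0.0]), A, B) == 0.0
```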
  6. ICP: Iterative Closest Point. Given A, B with |A| = |B| = n: 1. Begin with some translation x₀. 2. Compute N_B(a + x_i) for each a ∈ A. 3. Fixing N_B(·), compute the optimal x_{i+1} = (1/|A|) Σ_{a∈A} (N_B(a + x_i) − a). (Example point sets A and B shown in figure.)

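The three steps above can be sketched as a short loop. This is an illustrative translation-only ICP, assuming exact nearest-neighbor search; the function name and example data are ours.

```python
# Minimal sketch of the ICP update from the slides (translation only):
# given x_i, assign each a in A its nearest neighbor in B, then set
# x_{i+1} = (1/|A|) * sum over a of (N_B(a + x_i) - a).
import numpy as np

def icp_translation(A, B, x0, max_iters=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iters):
        # Step 2: nearest-neighbor assignment under the current translation.
        nn = np.array([B[np.argmin(np.linalg.norm(B - (a + x), axis=1))] for a in A])
        # Step 3: for fixed assignments the optimal translation is the mean offset.
        x_next = (nn - A).mean(axis=0)
        if np.allclose(x_next, x):
            break  # assignments (and hence x) have stabilized
        x = x_next
    return x

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[5.0, 1.0], [6.0, 1.0]])  # A shifted by (5, 1)
x = icp_translation(A, B, x0=[4.8, 0.9])
assert np.allclose(x, [5.0, 1.0])
```

Starting near the true offset, the loop recovers the exact translation in a couple of iterations; from a poor start it can settle on a worse local fixed point, which is exactly why the iteration count is worth analyzing.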
  12. Notes on ICP. Accuracy: bad in the worst case. Time to converge: the assignment N_B(·) is never repeated ⇒ O(nⁿ) iterations. Better bounds? [ESE SoCG 2006]: Ω(n log n) in 2d, O(dn²)^d in d dimensions. We tighten the bounds and show Ω(n²/d)^d. But ICP runs very fast in practice, and the worst-case bounds don't do it justice.

  16. When Worst Case is Too Bad. The theoretician's dilemma: an algorithm with horrible worst-case guarantees (unbounded competitive ratio, exponential running time, ...) that is nonetheless widely used in practice. From worst case to ...? Best case? Average case? Smoothed analysis (Spielman & Teng '01).

  18. Smoothed Analysis. What is smoothed analysis? Add some random noise to the input, then look at the worst-case expected complexity. How do we add random noise? Easy in geometric settings: perturb each point by N(0, σ). This makes rigorous the common assumption "Let P be a set of n points in general position..."

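The perturbation step can be sketched as follows. This assumes independent Gaussian noise per coordinate, which is our reading of the slides' N(0, σ); the function name is ours.

```python
# Smoothed-analysis setup: perturb each input point by Gaussian noise.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

def smooth(points, sigma):
    """Return a perturbed copy: each coordinate gets N(0, sigma^2) noise."""
    points = np.asarray(points, dtype=float)
    return points + rng.normal(0.0, sigma, size=points.shape)

P = np.zeros((4, 2))              # adversarial input: all points coincide
P_smoothed = smooth(P, sigma=0.1)
# After smoothing, the points are in "general position" almost surely:
# all pairwise distances are strictly positive.
assert all(np.linalg.norm(p - q) > 0 for p, q in combinations(P_smoothed, 2))
```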
  19. Notes on ICP (recap). ICP runs very fast in practice, and the worst-case bounds don't do it justice. Theorem: The smoothed complexity of ICP is (Diam/σ)² · n^{O(1)}.

  20. Proof of Theorem. Outline: bound the minimal potential drop that occurs in every step. Two cases: 1. A small number of points change their NN assignments ⇒ bound the potential drop from recomputing the translation. 2. A large number of points change their NN assignments ⇒ bound the potential drop from the new nearest-neighbor assignments. In both cases, quantify how "general" the general position obtained after smoothing is.

  24. Proof: Part I. Warm-up: if every point is perturbed by N(0, σ), then the minimum distance between points is at least ε with probability 1 − n²(ε/σ)^d. Proof: consider two points p and q. Fix the position of p. The random perturbation of q will put it at least ε away with probability 1 − (ε/σ)^d. Easy generalization: consider sets of up to k points, P = {p₁, p₂, ..., p_k} and Q = {q₁, q₂, ..., q_k}. Then ‖Σ_{p∈P} p − Σ_{q∈Q} q‖ ≥ ε with probability 1 − n^{2k}(ε/σ)^d. We will take ε = σ/poly(n) and k = O(d).

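The warm-up bound can be sanity-checked by simulation. This is a rough Monte Carlo sketch under our reading of the noise model (i.i.d. Gaussian per coordinate), not a proof; all names and constants here are illustrative.

```python
# Monte Carlo check of the warm-up lemma: after N(0, sigma) perturbation,
# the minimum pairwise distance is >= eps with probability
# at least 1 - n^2 (eps/sigma)^d.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n, d, sigma = 5, 2, 1.0
eps = sigma / 100.0
trials, bad = 2000, 0
for _ in range(trials):
    # Worst case for the lemma: all points coincide before perturbation.
    pts = rng.normal(0.0, sigma, size=(n, d))
    min_dist = min(np.linalg.norm(p - q) for p, q in combinations(pts, 2))
    if min_dist < eps:
        bad += 1
failure_bound = n**2 * (eps / sigma)**d       # = 25 * 1e-4 = 0.0025
# The empirical failure rate should respect the bound (plus sampling slack).
assert bad / trials <= failure_bound + 0.01
```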
  26. Proof: Part I (cont). Recall: x_{i+1} = (1/|A|) Σ_{a∈A} (N_B(a + x_i) − a). If only k points changed their NN assignments, then with high probability ‖x_{i+1} − x_i‖ ≥ ε/n. Fact: for any set S with mean c(S), and any point y: Σ_{s∈S} ‖s − y‖² = |S| · ‖c(S) − y‖² + Σ_{s∈S} ‖s − c(S)‖². Thus the total potential dropped by at least n · (ε/n)² = ε²/n.

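The mean-decomposition fact above is a standard identity; a quick numeric check, with arbitrary illustrative data of our choosing:

```python
# Numeric check of the fact: for any finite set S with mean c(S) and any y,
#   sum ||s - y||^2 = |S| * ||c(S) - y||^2 + sum ||s - c(S)||^2.
import numpy as np

rng = np.random.default_rng(2)
S = rng.normal(size=(7, 3))      # arbitrary point set
y = rng.normal(size=3)           # arbitrary point
c = S.mean(axis=0)

lhs = np.sum(np.linalg.norm(S - y, axis=1) ** 2)
rhs = len(S) * np.linalg.norm(c - y) ** 2 + np.sum(np.linalg.norm(S - c, axis=1) ** 2)
assert np.isclose(lhs, rhs)
```

With S the set of offsets N_B(a + x_i) − a and y the old translation x_i, this is exactly why moving to the mean x_{i+1} drops the potential by |A| · ‖x_{i+1} − x_i‖².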
  27. Proof: Part II. Suppose many points change their NN assignments. What could go wrong? (Example point sets A and B shown in figure.)

  30. Proof: Part II (cont). What can we say about the points? Every active point in A must be near the bisector of two points in B. Then the translation vector must lie in this slab.

  31. Proof: Part II (cont). For a different active point in A, the slab has a different orientation, and the translation vector must lie in this slab as well.

  33. Proof: Part II (cont). But if the slabs are narrow, then because of the perturbation their orientations will appear random. Intuitively, we do not expect a large (ω(d)) number of slabs to have a common intersection. Thus we can bound the minimum slab width from below.

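The slab picture above can be made concrete: the translations x for which a + x lies within some width of the bisector of b₁, b₂ ∈ B form a slab in translation space, and the nearest-neighbor assignment of a switches exactly where that signed offset crosses zero. The names (a, b1, b2) and example coordinates are ours.

```python
# Signed distance of a + x from the bisector hyperplane of b1 and b2:
# zero on the bisector, negative on b1's side, positive on b2's side.
import numpy as np

def bisector_offset(a, x, b1, b2):
    u = (b2 - b1) / np.linalg.norm(b2 - b1)   # unit normal of the bisector
    mid = (b1 + b2) / 2.0
    return np.dot((a + x) - mid, u)

a = np.array([0.0, 0.0])
b1, b2 = np.array([1.0, 0.0]), np.array([3.0, 0.0])
# a + x is closer to b1 when the offset is negative and to b2 when positive;
# the NN assignment of a flips exactly at offset 0 (here, the line x1 = 2).
assert bisector_offset(a, np.array([1.0, 5.0]), b1, b2) < 0   # still closer to b1
assert bisector_offset(a, np.array([3.0, -2.0]), b1, b2) > 0  # now closer to b2
```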
  35. Proof: Finish. Theorem: With probability 1 − 2p, ICP will finish after at most O(n¹¹ d (D/σ)² p^{−2/d}) iterations. Since ICP always runs in at most O(dn²)^d iterations, we can take p = O(dn²)^{−d} to show that the smoothed complexity is polynomial. The n¹¹ comes from many union bounds, but the bound is linear in d!
