
Shape Co-analysis and Constrained Clustering - Daniel Cohen-Or (PowerPoint presentation transcript)



  1. Shape Co-analysis and Constrained Clustering. Daniel Cohen-Or, Tel-Aviv University

  2. High-level shape analysis: shape abstraction [Mehra et al. 08], upright orientation [Fu et al. 08], learning segmentation [Kalogerakis et al. 10], exploration of shape collections [Kim et al. 12], illustrating assemblies [Mitra et al. 10], symmetry hierarchy [Wang et al. 11]

  3. Segmentation and Correspondence

  4. Individual vs. Co-segmentation

  5. Individual vs. Co-segmentation

  6. Challenge: similar geometries can be associated with different semantics

  7. Challenge: similar semantics can be represented by different geometries

  8. Large sets are more challenging; methods do not give perfect results

  9. Descriptor-based unsupervised co-segmentation [Sidi et al. 11]

  10. Co-segmentation: clustering in feature space

  11. Clustering (basic stuff): takes a set of points…

  12. Clustering (basic stuff): takes a set of points, and groups them into several separate clusters

  13. Clustering is not easy… – Clean separation into groups is not always possible – Must make “hard splitting” decisions – The number of groups is not always known, and can be very difficult to determine from the data

  14. Clustering is hard!

  15. Clustering is hard! Hard to determine number of clusters

  16. Clustering is hard! Hard to determine number of clusters

  17. Clustering is hard! Hard to decide where to split clusters

  18. Clustering is hard! Hard to decide where to split clusters

  19. Clustering • Two general types of input for clustering: – Spatial coordinates (points, feature space), or – Inter-object distance matrix

  20. Clustering • Spatial coordinates (points, feature space), or inter-object distance matrix? Algorithms: K-Means, EM, Mean-Shift, Linkage, DBSCAN, Spectral Clustering
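
The two input types map to different algorithm families. A minimal sketch (assuming NumPy and the scikit-learn API): K-Means consumes point coordinates directly, while spectral clustering can consume a precomputed affinity derived from a distance matrix.

```python
import numpy as np
from sklearn.cluster import KMeans, SpectralClustering

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)),   # cluster near the origin
               rng.normal(3.0, 0.3, (50, 2))])  # cluster near (3, 3)

# Input type 1: spatial coordinates -- K-Means works on the points directly.
labels_km = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Input type 2: a distance matrix -- convert distances to affinities, then
# cluster spectrally with a precomputed affinity.
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
A = np.exp(-D**2 / (2 * D.mean()**2))                       # Gaussian affinity
labels_sp = SpectralClustering(n_clusters=2, affinity='precomputed',
                               random_state=0).fit_predict(A)
```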

  21. Clustering 101

  22. Initial co-segmentation • Over-segmentation mapped to a descriptor space (geodesic distance, shape diameter function, normal histogram): a high-dimensional feature space
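
A hedged sketch of how such a descriptor space might be assembled: each super-face becomes one high-dimensional point by concatenating normalized histograms of its per-face descriptor values. The function and its inputs (geodesic_dist, sdf, normal_angle) are hypothetical names for illustration, not the paper's code.

```python
import numpy as np

def super_face_feature(geodesic_dist, sdf, normal_angle, bins=16):
    """Concatenate normalized histograms of three per-face descriptors."""
    hists = []
    for values, value_range in ((geodesic_dist, (0.0, 1.0)),
                                (sdf,           (0.0, 1.0)),
                                (normal_angle,  (0.0, np.pi))):
        h, _ = np.histogram(values, bins=bins, range=value_range)
        hists.append(h / max(h.sum(), 1))  # normalize to a distribution
    return np.concatenate(hists)           # a point in 3*bins dimensions
```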

  23. Co-segmentation: points represent some kind of object parts, and we want to cluster them as a means to co-segment the set of objects

  24. Clustering • Underlying assumptions behind all clustering algorithms: – Neighboring points imply similar parts.

  25. Clustering • Underlying assumptions behind all clustering algorithms: – Distant points imply dissimilar parts.

  26. Clustering • When assumptions fail, result is not useful: – Similar parts are distant in feature space

  27. Clustering • When assumptions fail, result is not useful: – Dissimilar parts are close in feature space

  28. Clustering • Assumptions might fail because: – Data is difficult to analyze – Similarity/Dissimilarity of data not well defined – Feature space is insufficient or distorted

  29. Supervised clustering: add a training set of labeled data (pre-clustered)

  30. Supervised Clustering Then clustering becomes easy…
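
Why labels make it easy, in a minimal sketch: with a pre-clustered training set, grouping new points reduces to classification, here via nearest neighbors (assuming the scikit-learn API).

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X_train = np.array([[0.0, 0.0], [0.2, 0.1], [3.0, 3.0], [3.1, 2.9]])
y_train = np.array([0, 0, 1, 1])              # pre-clustered training data

clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
print(clf.predict([[0.1, 0.2], [2.8, 3.2]]))  # -> [0 1]
```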

  31. Supervised segmentation [Kalogerakis et al. 10, van Kaick et al. 11] [figure: input shape and labeled shape; training shapes with part labels Head, Neck, Torso, Leg, Tail, Ear]

  32. Semi-supervised clustering • Supervision as pair-wise constraints: Must-Link and Cannot-Link

  33. Semi-supervised clustering • Cluster the data while respecting the constraints

  34. Learning from labeled and unlabeled data

  35. Supervised learning

  36. Unsupervised learning

  37. Semi-supervised learning

  38. Constrained clustering: Cannot-Link and Must-Link constraints
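
One standard way to cluster under such constraints is the assignment rule of COP-KMeans [Wagstaff et al. 2001]; the sketch below is an illustration of that rule, not the talk's own method: each point joins its nearest center that violates no constraint.

```python
import numpy as np

def partners(constraints, i):
    """Yield the partner of point i for every constraint involving i."""
    for a, b in constraints:
        if a == i:
            yield b
        elif b == i:
            yield a

def constrained_assign(X, centers, must_link, cannot_link):
    """Assign each point to the nearest center that violates no constraint.

    A label of -1 means no feasible cluster exists (COP-KMeans fails then).
    """
    labels = -np.ones(len(X), dtype=int)
    for i in range(len(X)):
        for c in np.argsort(np.linalg.norm(centers - X[i], axis=1)):
            cl_bad = any(labels[j] == c for j in partners(cannot_link, i))
            ml_bad = any(labels[j] not in (-1, c) for j in partners(must_link, i))
            if not (cl_bad or ml_bad):
                labels[i] = c
                break
    return labels
```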

  39. Active Co-analysis of a Set of Shapes. Wang et al., SIGGRAPH Asia 2012

  40. Active Co-Analysis • A semi-supervised method for co-segmentation with minimal user input

  41. Automatically suggest to the user which constraints can be effective. [pipeline: initial co-segmentation → constrained clustering (active learning loop) → final result]

  42. Constrained Clustering: Cannot-Link and Must-Link constraints

  43. Spring System • A spring system is used to re-embed all the points in the feature space, so that the result of clustering will satisfy the constraints.

  44. Spring System • Result of clustering after re-embedding (mistakes circled): [figure: result of spring re-embedding and clustering; final result]
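
A toy sketch of the spring idea, not the paper's exact formulation: must-link springs attract, cannot-link springs repel, and anchor springs (standing in for the neighbor springs) resist drifting from the original embedding.

```python
import numpy as np

def spring_reembed(X, must_link, cannot_link, iters=200, step=0.05):
    """Iteratively relax a spring system over the feature-space points X."""
    Y = X.copy()
    for _ in range(iters):
        F = step * (X - Y)              # anchor springs: pull toward input
        for a, b in must_link:          # attracting springs
            d = Y[b] - Y[a]
            F[a] += step * d
            F[b] -= step * d
        for a, b in cannot_link:        # repelling springs (unit direction)
            d = Y[b] - Y[a]
            n = np.linalg.norm(d) + 1e-9
            F[a] -= step * d / n
            F[b] += step * d / n
        Y += F
    return Y  # cluster Y afterwards with any standard method
```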

  45. Spring System [figure: spring types – neighbor springs, random springs, Cannot-Link springs, Must-Link springs]


  47. Constrained clustering & co-segmentation

  48. Uncertain points • “Uncertain” points are located using the Silhouette Index: darker points have lower confidence

  49. Silhouette Index • Silhouette Index of node x: s(x) = (b(x) − a(x)) / max(a(x), b(x)), where a(x) is the average distance from x to the points in its own cluster and b(x) is the average distance from x to the points in the nearest other cluster

  50. Constraint Suggestion • Pick the super-faces with the lowest confidence • Pair them with the highest-confidence super-faces • Ask the user to add constraints between such pairs
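
A sketch of the suggestion step (assuming scikit-learn's silhouette_samples): per-point silhouette scores act as confidence, and each low-confidence super-face is paired with a high-confidence one for the user to constrain. The pairing rule here is a plausible simplification, not necessarily the paper's exact criterion.

```python
import numpy as np
from sklearn.metrics import silhouette_samples

def suggest_queries(features, labels, n_queries=5):
    """Return (uncertain, confident) super-face pairs to show the user."""
    s = silhouette_samples(features, labels)  # confidence per super-face
    uncertain = np.argsort(s)[:n_queries]     # lowest-confidence points
    queries = []
    for i in uncertain:
        # Pair with the most confident point of its current cluster;
        # the user answers with a Must-Link or Cannot-Link constraint.
        same = np.where(labels == labels[i])[0]
        j = same[np.argmax(s[same])]
        queries.append((int(i), int(j)))
    return queries
```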


  52. Candelabra: 28 shapes, 164 super-faces, 24 constraints

  53. Fourleg: 20 shapes, 264 super-faces, 69 constraints

  54. Tele-alien: 200 shapes, 1869 super-faces, 106 constraints

  55. Vase: 300 shapes, 1527 super-faces, 44 constraints

  56. Cannot-Link Springs

  57. Constraints as Features (CVPR 2013) • Goal: modify the data so that distances fit the constraints • Basic idea: convert the constraints into extra features that are added to the data (augmentation), recalculate the distances, and run unconstrained clustering on the modified data; the result is then more likely to satisfy the constraints • This idea is applied to Cannot-Link constraints; Must-Link constraints are handled differently

  58. Cannot-Link constraints • The points should be distant. • What value should be given: D(c1, c2) = X? – It should relate to max(D(x, y)), but how? • If the distances are modified, how do we restore the triangle inequality?

  59. Constraints as Features • Solution: – Add an extra dimension, in which the Cannot-Link pair is far apart (±1)

  60. Constraints as Features • Solution: – Add an extra dimension, in which the Cannot-Link pair is far apart (±1) – What values should the other points be given?

  61. Constraints as Features • Values of other points: – Points closer to c1 should have values closer to +1 – Points closer to c2 should have values closer to −1 • Formulation: see the sketch after slide 66 below • Simple Euclidean distance does not convey real closeness.

  62. Constraints as Features • Point A should be “closer” to c1, despite a smaller Euclidean distance to c2. [figure: points A, c1, c2]

  63. Constraints as Features • Use a Diffusion Map, where this holds true. [figure: points A, c1, c2 under diffusion distances]

  64. Constraints as Features • Diffusion Maps are related to a random-walk process on a graph • Affinity matrix: A(i, j) = exp(−‖xi − xj‖² / 2σ²) • Eigen-analysis of the normalized A forms a Diffusion Map
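
A minimal diffusion-map sketch following the standard construction the slide alludes to (Gaussian affinities, normalization, eigen-analysis); sigma and the diffusion time t are free parameters.

```python
import numpy as np

def diffusion_map(X, sigma=1.0, t=1, n_coords=2):
    """Embed points X into diffusion-map coordinates."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    A = np.exp(-D2 / (2 * sigma**2))       # Gaussian affinity matrix
    d = A.sum(axis=1)                      # node degrees
    S = A / np.sqrt(np.outer(d, d))        # symmetrically normalized affinity
    w, V = np.linalg.eigh(S)               # eigen-analysis (ascending order)
    idx = np.argsort(-w)[1:n_coords + 1]   # skip the trivial top eigenvector
    psi = V / np.sqrt(d)[:, None]          # recover random-walk eigenvectors
    return (w[idx] ** t) * psi[:, idx]     # diffusion coordinates
```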

  65. Constraints as Features • Use Diffusion Map distances • Calculate the value of each point in the new dimension (see the sketch after slide 66)

  66. Constraints as Features • Create a new distance matrix of distances in the new extra dimension • Add one such distance matrix per Cannot-Link constraint • Cluster the data by the modified distance matrix
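
The transcript omits the exact formulas, so the sketch below uses an assumed per-point value that matches what the slides state: +1 at c1, −1 at c2, interpolated in between by diffusion-map distance; one extra-dimension distance matrix is added per Cannot-Link pair, and the modified matrix is then clustered without constraints.

```python
import numpy as np

def augment_distances(D, coords, cannot_links, weight=1.0):
    """Add one extra-dimension distance matrix per Cannot-Link pair.

    D:      original inter-object distance matrix (n x n)
    coords: diffusion-map coordinates of the n points (see sketch above)
    """
    D_new = D.copy()
    for c1, c2 in cannot_links:
        d1 = np.linalg.norm(coords - coords[c1], axis=1)  # distance to c1
        d2 = np.linalg.norm(coords - coords[c2], axis=1)  # distance to c2
        # Assumed interpolation: f = +1 at c1, -1 at c2, in between elsewhere.
        f = (d2 - d1) / (d1 + d2 + 1e-12)
        D_new += weight * np.abs(f[:, None] - f[None, :])
    return D_new

# Unconstrained clustering of the modified distances, e.g. with recent
# scikit-learn: AgglomerativeClustering(n_clusters=k, metric='precomputed',
#                                       linkage='average').fit_predict(D_new)
```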

  67. Constraints as Features [figure: results compared across the original data, the spring method, and the features method]

  68. Constraints as Features! Unconstrained clustering of the modified data

  69. Results on UCI benchmark datasets (CVPR 2013)

  70. Summary • A new semi-supervised clustering method. • Constraints are embedded into the data, reducing the problem to an unconstrained setting.

  71. Thank you!

