Evaluation

Used datasets:
– Berkeley Segmentation Dataset (BSDS500) [AMFM11]: 500 natural images.
– NYU Depth Dataset (NYUV2) [SHKF12]: 1449 images of indoor scenes with depth information.

Figure: Images and corresponding ground truth segmentations from the BSDS500 and the NYUV2.

David Stutz | October 7th, 2014
Evaluation

Parameters have been optimized on training sets with respect to:
– Boundary Recall (Rec): the fraction of boundary pixels in the ground truth segmentation correctly detected in the superpixel segmentation. → 100% is best.
– Undersegmentation Error (UE): the error made when comparing the ground truth segmentation to the superpixel segmentation. → 0% is best.

Qualitative and quantitative comparison on test sets.
Table of Contents
– Introduction (Goals)
– Related Work
– SEEDS
– SEEDS with Depth
– Evaluation (Qualitative, Quantitative, Runtime)
– Conclusion
Qualitative Comparison – FH
Figure: Superpixel segmentations generated by FH.

Qualitative Comparison – SLIC
Figure: Superpixel segmentations generated by SLIC.

Qualitative Comparison – oriSEEDS
Figure: Superpixel segmentations generated by oriSEEDS.

Qualitative Comparison – reSEEDS*
Figure: Superpixel segmentations generated by reSEEDS*.

Qualitative Comparison – SEEDS3D
Figure: Superpixel segmentations generated by SEEDS3D.

Qualitative Comparison – VCCS
Figure: Superpixel segmentations generated by VCCS.
Table of Contents (current section: Evaluation • Quantitative)
Quantitative Comparison – BSDS500
Figure: Boundary Recall (Rec) and Undersegmentation Error (UE) plotted against the number of superpixels for oriSEEDS and reSEEDS*.

Quantitative Comparison – BSDS500
Figure: Rec and UE against the number of superpixels for SLIC, oriSEEDS and reSEEDS*.

Quantitative Comparison – BSDS500
Figure: Rec and UE against the number of superpixels for FH, SLIC, oriSEEDS and reSEEDS*.
Quantitative Comparison – NYUV2
Figure: Rec and UE against the number of superpixels for oriSEEDS, reSEEDS* and SEEDS3D.

Quantitative Comparison – NYUV2
Figure: Rec and UE against the number of superpixels for FH, SLIC, oriSEEDS, reSEEDS* and SEEDS3D.

Quantitative Comparison – NYUV2
Figure: Rec and UE against the number of superpixels for FH, SLIC, oriSEEDS, reSEEDS*, SEEDS3D and VCCS.
Table of Contents (current section: Evaluation • Runtime)
Comparison – Runtime

Runtime is an important aspect, especially for real-time applications. Runtime (in seconds) measured on:
– i7 @ 3.4 GHz with 16 GB RAM.
– No multi-threading and no GPU.

Pixel counts:
– BSDS500: 481 · 321 = 154,401 pixels.
– NYUV2: 608 · 448 = 272,384 pixels.
Comparison – Runtime
Figure: Runtime t against the number of superpixels on the BSDS500 and the NYUV2 for oriSEEDS, reSEEDS* and SEEDS3D.

Comparison – Runtime
Figure: Runtime t against the number of superpixels on the BSDS500 and the NYUV2 for FH, SLIC, oriSEEDS, reSEEDS* and SEEDS3D.

Comparison – Runtime
Figure: Runtime t against the number of superpixels on the BSDS500 and the NYUV2 for FH, SLIC, oriSEEDS, reSEEDS*, SEEDS3D and VCCS.
Comparison – Runtime – Discussion

FH is quite fast at ∼60 ms on the BSDS500.
– It cannot be sped up further.

However, SLIC and SEEDS can be sped up:
– SLIC and SEEDS run iteratively. → Reduce the number of iterations T.
– Reduce the size Q of the color histograms used by SEEDS.
Comparison – Runtime
Figure: Runtime t against the number of superpixels on the BSDS500 and the NYUV2 for SLIC (T = 10) and for oriSEEDS and reSEEDS* (T = 2, Q = 7³).

Comparison – Runtime
Figure: As before, additionally with SLIC (T = 1) and with oriSEEDS and reSEEDS* (T = 1, Q = 3³).

Comparison – Runtime
Figure: Rec and UE against the number of superpixels on the BSDS500 for SLIC (T = 10) and for oriSEEDS and reSEEDS* (T = 2, Q = 7³).

Comparison – Runtime
Figure: As before, additionally with SLIC (T = 1) and with oriSEEDS and reSEEDS* (T = 1, Q = 3³).

Comparison – Runtime
Figure: Rec and UE against the number of superpixels on the NYUV2 for SLIC (T = 10) and for oriSEEDS and reSEEDS* (T = 2, Q = 7³).

Comparison – Runtime
Figure: As before, additionally with SLIC (T = 1) and with oriSEEDS and reSEEDS* (T = 1, Q = 3³).
Table of Contents (current section: Conclusion)
Conclusion – First Part

The conclusion is split into three observations.

Conclusion 1: Our implementation of SEEDS offers state-of-the-art performance while running in real time!

In addition:
– The number of superpixels is controllable.
– Compactness is adjustable.
– Performance can be traded for runtime.
Conclusion – Second Part

Conclusion 2: Using depth information for superpixel segmentation does not yield a significant performance increase.
– At least not for SEEDS.

Possible explanations:
– The performance of SEEDS leaves little room for improvement.
– Scenes from the NYUV2 are highly cluttered and the provided depth images are of low quality.
Conclusion – Third Part

Conclusion 3: Many superpixel algorithms show state-of-the-art performance. Therefore, other aspects become important:
– Runtime
– Ease of use (implementation, parameters, etc.)
– Control over the number of superpixels
– A compactness parameter

Based on these considerations, our implementation of SEEDS is an excellent choice.
The End – Thanks

Thank you for your attention.
david.stutz@rwth-aachen.de

Questions?
Appendix – SEEDS

Input: image I, block size w(1) × h(1), levels L, histogram size Q
Output: superpixel segmentation S

1. // Initialization:
2. group w(1) × h(1) pixels to form blocks at level l = 1
3. for l = 2 to L
4.   group 2 × 2 blocks at level (l − 1) to form blocks at level l
5. for l = 1 to L
6.   // For l = L, these are the initial superpixels.
7.   for each block B_i^(l) at level l
8.     // h_{B_i^(l)}(q) is the fraction of pixels in B_i^(l) falling in bin q.
9.     compute color histogram h_{B_i^(l)}
Appendix – SEEDS

Input: image I, block size w(1) × h(1), levels L, histogram size Q
Output: superpixel segmentation S

10. // Block updates:
11. for l = L − 1 to 1
12.   for each block B_i^(l) at level l
13.     let S_j be the superpixel B_i^(l) belongs to
14.     if a neighboring block belongs to a different superpixel S_k
15.       // ∩(h, h′) = Σ_{q=1}^{Q} min(h(q), h′(q)).
16.       then if ∩(h_{B_i^(l)}, h_{S_k}) > ∩(h_{B_i^(l)}, h_{S_j − B_i^(l)})
17.         then assign B_i^(l) to superpixel S_k
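The block update above hinges on histogram intersection as the similarity measure. A minimal numpy sketch of that criterion; the histogram values are made-up toy numbers, not taken from the slides:

```python
import numpy as np

def intersection(h1, h2):
    """Histogram intersection: sum of bin-wise minima (step 15)."""
    return np.minimum(h1, h2).sum()

# Toy normalized color histograms (Q = 4 bins, illustrative values).
h_block = np.array([0.5, 0.3, 0.2, 0.0])   # histogram of block B_i
h_Sk    = np.array([0.6, 0.2, 0.2, 0.0])   # neighboring superpixel S_k
h_Sj    = np.array([0.1, 0.1, 0.2, 0.6])   # current superpixel minus the block

# The block moves to S_k if its histogram matches S_k better than S_j (step 16).
move = intersection(h_block, h_Sk) > intersection(h_block, h_Sj)
```

With these values the intersection with S_k (0.9) exceeds that with S_j (0.4), so the block would be reassigned.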
Appendix – SEEDS

Input: image I, block size w(1) × h(1), levels L, histogram size Q
Output: superpixel segmentation S

18. // Pixel updates:
19. for n = 1 to N
20.   let S_j be the superpixel x_n belongs to
21.   if a neighboring pixel belongs to a different superpixel S_k
22.     // h(x_n) denotes the bin of pixel x_n.
23.     then if h_{S_k}(h(x_n)) > h_{S_j}(h(x_n))
24.       then assign x_n to superpixel S_k
25. return S
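The pixel update reduces to a single histogram lookup per boundary pixel. A sketch of that test in numpy; the histogram counts are invented for illustration:

```python
import numpy as np

# Unnormalized color histograms of two adjacent superpixels (Q = 3 bins);
# the counts are made up for illustration.
h_Sj = np.array([10, 2, 1])   # superpixel the pixel x_n currently belongs to
h_Sk = np.array([3, 9, 2])    # neighboring superpixel

def should_move(pixel_bin, h_from, h_to):
    """Pixel update test: move x_n to S_k if S_k's histogram has more
    mass in x_n's color bin than S_j's histogram."""
    return h_to[pixel_bin] > h_from[pixel_bin]

moved = should_move(1, h_Sj, h_Sk)       # x_n falls in bin 1: 9 > 2, so move
stayed = not should_move(0, h_Sj, h_Sk)  # bin 0: 3 > 10 is false, so stay
```

Because only a constant amount of work is done per pixel, this is what makes the update step cheap in practice.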
Appendix – SEEDS

Input: image I, block size w(1) × h(1), levels L, histogram size Q
Output: superpixel segmentation S

19. // Mean pixel updates:
20. for n = 1 to N
21.   let S_j be the superpixel x_n belongs to
22.   if a neighboring pixel belongs to a different superpixel S_k
23.     // d(x_n, S_j) = ‖I(x_n) − I(S_j)‖² + β ‖x_n − µ(S_j)‖².
24.     then if d(x_n, S_k) < d(x_n, S_j)
25.       then assign x_n to superpixel S_k
26. return S
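The distance d used by the mean pixel updates combines a color term with a β-weighted spatial term. A sketch of that comparison; the value of β and all colors/positions are illustrative assumptions, not values from the slides:

```python
import numpy as np

BETA = 0.5  # compactness weight β; illustrative value, not taken from the slides

def d(color, pos, mean_color, mean_pos, beta=BETA):
    """Distance from step 23: squared color difference to the superpixel's
    mean color plus beta-weighted squared distance to its mean position."""
    color, pos = np.asarray(color, float), np.asarray(pos, float)
    mean_color, mean_pos = np.asarray(mean_color, float), np.asarray(mean_pos, float)
    return np.sum((color - mean_color) ** 2) + beta * np.sum((pos - mean_pos) ** 2)

# A pixel with color (0.2, 0.2, 0.2) at position (5, 5); superpixel means made up.
d_j = d([0.2, 0.2, 0.2], [5, 5], [0.8, 0.8, 0.8], [5, 6])   # current superpixel S_j
d_k = d([0.2, 0.2, 0.2], [5, 5], [0.25, 0.2, 0.2], [6, 5])  # neighboring S_k
move = d_k < d_j   # step 24: reassign x_n to S_k
```

Larger β pulls pixels toward spatially compact superpixels; smaller β favors color fidelity.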
Appendix – Comparison – BSDS500
Figure: Rec and UE against the number of superpixels for FH, TP, SLIC, ERS, oriSEEDS and reSEEDS*.
Appendix – Comparison – NYUV2
Figure: Rec and UE against the number of superpixels for FH, TP, SLIC, ERS, oriSEEDS, reSEEDS*, SEEDS3D, DASP and VCCS.
Appendix – Runtime

It can be shown that SEEDS runs in time linear in the number of pixels N:

O(QTN)   (1)

with
– Q the number of histogram bins,
– T the number of iterations at each level.

However, in practice, the runtime also depends on the number of levels L!
Appendix – Runtime
Figure: Runtime t against the number of superpixels on the BSDS500 and the NYUV2 for SLIC (T = 10 and T = 1) and for oriSEEDS and reSEEDS* (T = 2, Q = 7³ and T = 1, Q = 3³).
Appendix – Boundary Recall

Let G be a ground truth segmentation and S be a superpixel segmentation. Some definitions [NP12]:
– True Positives TP(G, S): the number of boundary pixels in G for which there is a boundary pixel in S in range r.
– False Negatives FN(G, S): the number of boundary pixels in G for which there is no boundary pixel in S in range r.

Boundary Recall is defined as

Rec(G, S) = TP(G, S) / (TP(G, S) + FN(G, S)).   (2)
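Equation (2) can be computed directly from two binary boundary maps. A minimal numpy sketch, realizing the range-r tolerance by dilating the superpixel boundary map with shifted copies (the toy maps are invented for illustration):

```python
import numpy as np

def boundary_recall(gt_boundary, sp_boundary, r=1):
    """Rec = TP / (TP + FN) from Eq. (2): the fraction of ground-truth
    boundary pixels with a superpixel boundary pixel within range r."""
    H, W = gt_boundary.shape
    # Dilate the superpixel boundary map by r pixels using shifted copies,
    # so dil[y, x] is True iff some sp boundary pixel lies within range r.
    dil = np.zeros_like(sp_boundary, dtype=bool)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            shifted = np.zeros_like(dil)
            src_y = slice(max(dy, 0), H + min(dy, 0))
            dst_y = slice(max(-dy, 0), H + min(-dy, 0))
            src_x = slice(max(dx, 0), W + min(dx, 0))
            dst_x = slice(max(-dx, 0), W + min(-dx, 0))
            shifted[dst_y, dst_x] = sp_boundary[src_y, src_x]
            dil |= shifted
    tp = np.sum(gt_boundary & dil)    # TP(G, S)
    fn = np.sum(gt_boundary & ~dil)   # FN(G, S)
    return tp / (tp + fn)

# Toy 5x5 maps: ground-truth boundary in column 2, superpixel boundary in column 3.
gt = np.zeros((5, 5), dtype=bool); gt[:, 2] = True
sp = np.zeros((5, 5), dtype=bool); sp[:, 3] = True
```

With r = 1 every ground-truth boundary pixel has a superpixel boundary pixel next to it (Rec = 1.0); with r = 0 none does (Rec = 0.0), which shows how the tolerance r affects the measure.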
Appendix – Undersegmentation Error

Let G be a ground truth segmentation, S be a superpixel segmentation and N be the total number of pixels. Undersegmentation Error is defined as

UE(G, S) = (1/N) · Σ_{G_i ∈ G} Σ_{S_j ∩ G_i ≠ ∅} min(|S_j ∩ G_i|, |S_j − G_i|).   (3)
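Equation (3) translates into a short double loop over segments and overlapping superpixels. A numpy sketch on label maps (the toy 4×4 example is invented for illustration):

```python
import numpy as np

def undersegmentation_error(gt, sp):
    """UE from Eq. (3): each superpixel S_j overlapping a ground-truth
    segment G_i contributes the smaller of its overlap |S_j intersect G_i|
    and its leakage |S_j - G_i|, normalized by the number of pixels N."""
    err = 0
    for gi in np.unique(gt):
        mask_g = (gt == gi)
        for sj in np.unique(sp[mask_g]):          # superpixels with S_j ∩ G_i ≠ ∅
            mask_s = (sp == sj)
            overlap = np.sum(mask_s & mask_g)     # |S_j ∩ G_i|
            leakage = np.sum(mask_s & ~mask_g)    # |S_j − G_i|
            err += min(overlap, leakage)
    return err / gt.size

# 4x4 toy example: the ground truth splits the image into left and right halves.
gt = np.repeat([[0, 0, 1, 1]], 4, axis=0)
sp_exact = np.repeat([[0, 0, 1, 1]], 4, axis=0)   # superpixels aligned with gt
sp_leaky = np.repeat([[0, 1, 1, 2]], 4, axis=0)   # middle superpixel straddles the edge
```

The aligned segmentation yields UE = 0, while the straddling superpixel leaks four pixels into each segment, yielding UE = 8/16 = 0.5.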
P. Arbeláez, M. Maire, C. Fowlkes, and J. Malik. From contours to regions: An empirical evaluation. In Computer Vision and Pattern Recognition, Conference on, pages 2294–2301, Miami, Florida, June 2009.
P. Arbeláez, M. Maire, C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation. Pattern Analysis and Machine Intelligence, Transactions on, 33(5):898–916, May 2011.
R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk. SLIC superpixels. Technical report, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland, June 2010.
R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk. SLIC superpixels compared to state-of-the-art superpixel methods. Pattern Analysis and Machine Intelligence, Transactions on, 34(11):2274–2281, November 2012.
C. Bishop. Pattern Recognition and Machine Learning. Springer Verlag, New York, 2006.
J. Borovec and J. Kybic. jSLIC: Superpixels in ImageJ. In Computer Vision Winter Workshop, 2014.
A. Barla, F. Odone, and A. Verri. Histogram intersection kernel for image classification. In Image Processing, International Conference on, volume 3, pages 513–516, Barcelona, Spain, September 2003.
G. Bradski. The OpenCV Library. Dr. Dobb's Journal of Software Tools, 2000. http://opencv.org/.
Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. Pattern Analysis and Machine Intelligence, Transactions on, 23(11):1222–1239, November 2001.
T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein. Introduction to Algorithms. MIT Press, Cambridge, 2009.
C. Conrad, M. Mertz, and R. Mester. Contour-relaxed superpixels. In Energy Minimization Methods in Computer Vision and Pattern Recognition, volume 8081 of Lecture Notes in Computer Science, pages 280–293. Springer Berlin Heidelberg, 2013.
J. Chang, D. Wei, and J. W. Fisher. A video representation using temporal superpixels. In Computer Vision and Pattern Recognition, Conference on, pages 2051–2058, Portland, Oregon, June 2013.
F. Drucker and J. MacCormick. Fast superpixels for video analysis. In Motion and Video Computing, Workshop on, pages 1–8, Snowbird, Utah, December 2009.
H. Fu, X. Cao, D. Tang, Y. Han, and D. Xu. Regularity preserved superpixels and supervoxels. Multimedia, Transactions on, 16(4):1165–1175, June 2014.
P. F. Felzenszwalb and D. P. Huttenlocher. Efficient graph-based image segmentation. Computer Vision, International Journal of, 59(2), 2004.
D. Forsyth and J. Ponce. Computer Vision: A Modern Approach. Prentice Hall Professional Technical Reference, New Jersey, 2002.
S. Gupta, P. Arbeláez, and J. Malik. Perceptual organization and recognition of indoor scenes from RGB-D images. In Computer Vision and Pattern Recognition, Conference on, pages 564–571, Portland, Oregon, June 2013.
S. Holzer, R. B. Rusu, M. Dixon, S. Gedikli, and N. Navab. Adaptive neighborhood selection for real-time surface normal estimation from organized point cloud data using integral images. In Intelligent Robots and Systems, International Conference on, pages 2684–2689, Vilamoura, Portugal, October 2012.
V. Jain, S. C. Turaga, K. L. Briggman, M. N. Helmstaedter, W. Denk, and H. S. Seung. Learning to agglomerate superpixel hierarchies. In Neural Information Processing Systems, Conference on, pages 648–656. Curran Associates, December 2011.
K. Klasing, D. Althoff, D. Wollherr, and M. Buss. Comparison of surface normal estimation methods for range sensing applications. In Robotics and Automation, International Conference on, pages 3206–3211, Kobe, Japan, May 2009.
A. Levinshtein, A. Stere, K. N. Kutulakos, D. J. Fleet, S. J. Dickinson, and K. Siddiqi. TurboPixels: Fast superpixels using geometric flows. Pattern Analysis and Machine Intelligence, Transactions on, 31(12):2290–2297, December 2009.
M.-Y. Liu, O. Tuzel, S. Ramalingam, and R. Chellappa. Entropy rate superpixel segmentation. In Computer Vision and Pattern Recognition, Conference on, pages 2097–2104, Providence, Rhode Island, June 2011.
R. Mester, C. Conrad, and A. Guevara. Multichannel segmentation using contour relaxation: Fast super-pixels and temporal propagation. In Image Analysis, volume 6688 of Lecture Notes in Computer Science, pages 250–261. Springer Berlin Heidelberg, 2011.
M. Meilă. Comparing clusterings by the variation of information. In Learning Theory and Kernel Machines, volume 2777 of Lecture Notes in Computer Science, pages 173–187. Springer Berlin Heidelberg, 2003.
M. Meilă. Comparing clusterings: an axiomatic view. In Machine Learning, International Conference on, pages 577–584, Bonn, Germany, 2005.
D. Martin, C. Fowlkes, and J. Malik. Learning to detect natural image boundaries using local brightness, color, and texture cues. Pattern Analysis and Machine Intelligence, Transactions on, 26(5):530–549, May 2004.
D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Computer Vision, International Conference on, volume 2, pages 416–423, Vancouver, Canada, July 2001.
G. Mori. Guiding model search using segmentation. In Computer Vision, International Conference on, volume 2, pages 1417–1423, Beijing, China, October 2005.
A. P. Moore, S. J. D. Prince, J. Warrell, U. Mohammed, and G. Jones. Superpixel lattices. In Computer Vision and Pattern Recognition, Conference on, pages 1–8, Anchorage, Alaska, June 2008.
A. P. Moore, S. J. D. Prince, and J. Warrell. Lattice cut - constructing superpixels using layer constraints. In Computer Vision and Pattern Recognition, Conference on, pages 2117–2124, San Francisco, California, June 2010.
G. Mori, X. Ren, A. A. Efros, and J. Malik. Recovering human body configurations: combining segmentation and recognition. In Computer Vision and Pattern Recognition, Conference on, volume 2, pages 326–333, Washington, D.C., June 2004.
P. Neubert and P. Protzel. Superpixel benchmark and comparison. In Forum Bildverarbeitung, Regensburg, Germany, November 2012.
S. Osher and R. Fedkiw. Level Set Methods and Dynamic Implicit Surfaces. Springer Verlag, New York, 2003.
J. Papon, A. Abramov, M. Schoeler, and F. Wörgötter. Voxel cloud connectivity segmentation - supervoxels for point clouds. In Computer Vision and Pattern Recognition, Conference on, pages 2027–2034, Portland, Oregon, June 2013.
F. Perbet and A. Maki. Homogeneous superpixels from random walks. In Machine Vision and Applications, Conference on, pages 26–30, Nara, Japan, June 2011.
W. M. Rand. Objective criteria for the evaluation of clustering methods. American Statistical Association, Journal of the, 66(336):846–850, 1971.
X. Ren and L. Bo. Discriminatively trained sparse code gradients for contour detection. In Advances in Neural Information Processing Systems, volume 25, pages 584–592. Curran Associates, 2012.
R. B. Rusu, N. Blodow, and M. Beetz. Fast point feature histograms (FPFH) for 3D registration. In Robotics and Automation, International Conference on, pages 3212–3217, Kobe, Japan, May 2009.
X. Ren, L. Bo, and D. Fox. RGB-(D) scene labeling: Features and algorithms. In Computer Vision and Pattern Recognition, Conference on, pages 2759–2766, Providence, Rhode Island, June 2012.
R. B. Rusu and S. Cousins. 3D is here: Point Cloud Library (PCL). In Robotics and Automation, International Conference on, Shanghai, China, May 2011.
C. Rohkohl and K. Engel. Efficient image segmentation using pairwise pixel similarities. In Pattern Recognition, volume 4713 of Lecture Notes in Computer Science, pages 254–263. Springer Berlin Heidelberg, 2007.
M. Reso, J. Jachalsky, B. Rosenhahn, and J. Ostermann. Temporally consistent superpixels. In Computer Vision, International Conference on, pages 385–392, Sydney, Australia, December 2013.
X. Ren and J. Malik. Learning a classification model for segmentation. In Computer Vision, International Conference on, pages 10–17, Nice, France, October 2003.
C. Y. Ren and I. Reid. gSLIC: a real-time implementation of SLIC superpixel segmentation. Technical report, University of Oxford, Oxford, England, 2011.
B. C. Russell, A. Torralba, K. P. Murphy, and W. T. Freeman. LabelMe: A database and web-based tool for image annotation. Computer Vision, International Journal of, 77(1-3):157–173, 2008.
R. B. Rusu. Semantic 3D Object Maps for Everyday Manipulation in Human Living Environments. PhD thesis, Technische Universität München, Munich, Germany, 2009.
N. Silberman and R. Fergus. Indoor scene segmentation using a structured light sensor. In Computer Vision Workshops, International Conference on, pages 601–608, Barcelona, Spain, November 2011.
A. Schick, M. Fischer, and R. Stiefelhagen. Measuring and evaluating the compactness of superpixels. In Pattern Recognition, International Conference on, pages 930–934, Tsukuba, Japan, November 2012.
N. Silberman, D. Hoiem, P. Kohli, and R. Fergus. Indoor segmentation and support inference from RGBD images. In Computer Vision, European Conference on, volume 7576 of Lecture Notes in Computer Science, pages 746–760. Springer Berlin Heidelberg, 2012.
J. Shi and J. Malik. Normalized cuts and image segmentation. Pattern Analysis and Machine Intelligence, Transactions on, 22(8):888–905, August 2000.
J. Shotton, J. Winn, C. Rother, and A. Criminisi. TextonBoost for image understanding: Multi-class object recognition and segmentation by jointly modeling texture, layout, and context. Computer Vision, International Journal of, 81(1):2–23, 2009.
D. Tang, H. Fu, and X. Cao. Topology preserved regular superpixel. In Multimedia and Expo, International Conference on, pages 765–768, Melbourne, Australia, July 2012.
J. Tighe and S. Lazebnik. SuperParsing: Scalable nonparametric image parsing with superpixels. In Computer Vision, European Conference on, volume 6315 of Lecture Notes in Computer Science, pages 352–365. Springer Berlin Heidelberg, 2010.
R. Unnikrishnan, C. Pantofaru, and M. Hebert. Toward objective evaluation of image segmentation algorithms. Pattern Analysis and Machine Intelligence, Transactions on, 29(6):929–944, June 2007.
O. Veksler, Y. Boykov, and P. Mehrani. Superpixels and supervoxels in an energy optimization framework. In Computer Vision, European Conference on, volume 6315 of Lecture Notes in Computer Science, pages 211–224. Springer Berlin Heidelberg, 2010.
M. van den Bergh, X. Boix, G. Roig, B. de Capitani, and L. van Gool. SEEDS: Superpixels extracted via energy-driven sampling. In Computer Vision, European Conference on, volume 7578 of Lecture Notes in Computer Science, pages 13–26. Springer Berlin Heidelberg, 2012.
M. van den Bergh, X. Boix, G. Roig, B. de Capitani, and L. van Gool. SEEDS: Superpixels extracted via energy-driven sampling. Computing Research Repository, abs/1309.3848, 2013.
M. van den Bergh, G. Roig, X. Boix, S. Manen, and L. van Gool. Online video seeds for temporal window objectness. In Computer Vision, International Conference on, pages 377–384, Sydney, Australia, December 2013.
A. Vedaldi and B. Fulkerson. VLFeat: An open and portable library of computer vision algorithms. http://www.vlfeat.org/, 2008.
L. Vincent and P. Soille. Watersheds in digital spaces: an efficient algorithm based on immersion simulations. Pattern Analysis and Machine Intelligence, Transactions on, 13(6):583–598, June 1991.
A. Vedaldi and S. Soatto. Quick shift and kernel methods for mode seeking. In Computer Vision, European Conference on, volume 5305 of Lecture Notes in Computer Science, pages 705–718. Springer Berlin Heidelberg, 2008.
D. Weikersdorfer. Efficiency by Sparsity: Depth-Adaptive Superpixels and Event-based SLAM. PhD thesis, Technische Universität München, Munich, Germany, 2014.
D. Weikersdorfer, D. Gossow, and M. Beetz. Depth-adaptive superpixels. In Pattern Recognition, International Conference on, pages 2087–2090, Tsukuba, Japan, November 2012.
S. Wang, H. Lu, F. Yang, and M.-H. Yang.