Object-aware Guidance for Autonomous Scene Reconstruction

Object-aware Guidance for Autonomous Scene Reconstruction - PowerPoint PPT Presentation

Ligang Liu, Xi Xia, Han Sun, Qi Shen, Juzhan Xu, Bin Chen, Hui Huang, Kai Xu. University of Science and Technology of China, Shenzhen University


  1. Object-aware Guidance for Autonomous Scene Reconstruction. Ligang Liu, Xi Xia, Han Sun, Qi Shen, Juzhan Xu, Bin Chen, Hui Huang, Kai Xu. University of Science and Technology of China, Shenzhen University, National University of Defense Technology

  2. Background • Commodity RGB-D sensors Microsoft Kinect PrimeSense Intel RealSense

  3. Background • RGB-D sensor allows real-time reconstruction KinectFusion [Izadi et al. 2011]

  4. Background • Other real-time reconstruction methods Voxel Hashing ElasticFusion [Nießner et al. 2013] [Whelan et al. 2015]

  5. Background • Indoor scene reconstruction -> 3D object models

  6. Background • Manual scanning is a laborious task [Kim et al. 2013]: time-consuming and prone to inaccurate scanning

  7. Background • Modern robots are increasingly reliable and controllable. Unimation, 1958; Fetch, 2015

  8. Motivation • Automatic • Never feels tired • Stable and accurate

  9. Goal

  10. Existing Works • High-quality scanning and reconstruction of a single object [Wu et al. 2014]

  11. Existing Works • Global path planning and exploration [Xu et al. 2017]

  12. Existing Works • Active reconstruction and segmentation [Xu et al. 2015]

  13. Existing Works • Local view planning for recognition [Xu et al. 2016]

  14. Conclusion of Existing Works • Two-pass scene reconstruction and understanding. • Only low-level information is available in the first exploration pass. First pass: exploration & reconstruction [Xu et al. 2017]. Second pass: segmentation & recognition [Nan et al. 2012]

  15. Conclusion of Existing Works • Two-pass scene reconstruction and understanding. • Only low-level information is available in the first exploration pass. First pass: reconstruction & segmentation [Xu et al. 2015]. Second pass: object recognition [Xu et al. 2016]

  16. The Main Challenge

  17. Motivation • Humans explore unknown scenes object by object!

  18. Motivation • Humans tend to scan object by object!

  19. Our Solution • Key idea: objects recognized online serve as an important guidance map for planning the robot's scanning.

  20. The Next Best Object Problem Which object should I scan next? Object of Interest (OOI)

  21. Overview • Objectness-based Segmentation • Objectness-based Global Path Planning • Objectness-based Local View Planning

  22. Model-Driven Objectness • Objectness should measure both similarity and completeness

  23. Partial Matching Query Dataset Model Dataset

  24. Partial Matching Query Dataset Model 3DMatch [Zeng et al. 2016]

  25. Partial Matching Query Dataset Model
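The partial-matching slides retrieve a complete dataset model for a partially scanned query by matching local descriptors (3DMatch in the talk). A minimal sketch of that matching step, assuming each keypoint already carries a descriptor vector; the toy 2D vectors and function names below are illustrative, not the paper's pipeline:

```python
# Hedged sketch of descriptor-based partial matching: each query
# keypoint descriptor is paired with its nearest-neighbor descriptor
# on a complete dataset model. Real systems use learned descriptors
# such as 3DMatch; the 2D toy vectors here are stand-ins.

def match_descriptors(query_desc, model_desc):
    """For each query descriptor, return the index of the closest
    model descriptor under squared Euclidean distance."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(model_desc)),
                key=lambda j: sq_dist(q, model_desc[j]))
            for q in query_desc]

query = [(0.0, 0.1), (0.9, 1.0)]                    # partial-scan keypoints
model = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]        # complete-model keypoints
print(match_descriptors(query, model))               # → [0, 1]
```

In practice these correspondences would then be filtered (e.g. by geometric consistency) before estimating the alignment between the partial scan and the retrieved model.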

  26. Model-Driven Objectness

  27. Model-Driven Objectness • Objectness should measure both similarity and completeness
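The slide states only that objectness must reflect both similarity to a retrieved model and completeness of the scan. One way to read that requirement as a formula is a blend where either factor near zero suppresses the score; the weighting and all names below are assumptions for illustration, not the paper's definition:

```python
# Hedged sketch of a model-driven objectness score combining
# similarity (to a retrieved CAD model) and completeness (of the
# scan). The geometric-style blend and the alpha weight are
# illustrative assumptions.

def objectness(similarity: float, completeness: float,
               alpha: float = 0.5) -> float:
    """Combine similarity and completeness (both in [0, 1]) into one
    score; a segment must match some model AND be well scanned to
    score highly."""
    assert 0.0 <= similarity <= 1.0 and 0.0 <= completeness <= 1.0
    return (similarity ** alpha) * (completeness ** (1.0 - alpha))

print(objectness(1.0, 1.0))   # → 1.0 (perfect match, fully scanned)
print(objectness(0.0, 0.9))   # → 0.0 (no matching model: score collapses)
```

A multiplicative blend is one defensible choice here because it penalizes segments that are well scanned but match nothing, exactly the case where "objectness" should be low.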

  28. Next Best Object • Objectness • Distance • Orientation • Size
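The slide lists four factors behind the next-best-object choice. A minimal sketch of how they might be combined, assuming a simple linear scoring rule; the weights, field names, and numbers are hypothetical, not the paper's formulation:

```python
# Hedged NBO sketch: score candidate objects from the four factors the
# slide names. Higher objectness and size favor an object; travel
# distance and reorientation cost penalize it. Weights are illustrative.

def nbo_score(objectness, distance, orientation_cost, size,
              w=(1.0, 0.5, 0.3, 0.2)):
    return (w[0] * objectness + w[3] * size
            - w[1] * distance - w[2] * orientation_cost)

def next_best_object(candidates):
    """Pick the candidate with the highest NBO score."""
    return max(candidates, key=lambda c: nbo_score(
        c["objectness"], c["distance"], c["orientation_cost"], c["size"]))

chair = {"name": "chair", "objectness": 0.9, "distance": 2.0,
         "orientation_cost": 0.1, "size": 0.4}
lamp = {"name": "lamp", "objectness": 0.5, "distance": 0.5,
        "orientation_cost": 0.2, "size": 0.1}
print(next_best_object([chair, lamp])["name"])   # → "lamp" (much closer)
```

Under these weights the nearby lamp beats the farther chair despite its lower objectness, showing how the trade-off between recognition confidence and travel cost plays out.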

  29. Technical Challenge • How to segment and recognize objects during reconstruction? Missing data; recognition and segmentation constitute a chicken-and-egg problem

  30. Pre-segmentation [Whelan et al. 2015; Tateno et al. 2015] • Indoor object → scanned model → pre-segmented components

  31. Post-segmentation • Couples segmentation and recognition in the same optimization

  32. Post-segmentation
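The post-segmentation slides say segmentation and recognition are coupled in one optimization. A 1D toy (essentially k-means) that only illustrates the alternation pattern, recognize the current segments, then re-segment under those labels; the paper's actual energy and solver are not given in the slides:

```python
# Hedged sketch of coupling segmentation and recognition by
# alternating optimization. The toy recognizer and segmenter below
# are stand-ins for the paper's components.

def recognize(segment):
    """Toy 'recognition': summarize a segment by the mean of its points."""
    return sum(segment) / len(segment)

def resegment(points, centers):
    """Toy 'segmentation': assign each point to the nearest recognized center."""
    groups = [[] for _ in centers]
    for p in points:
        i = min(range(len(centers)), key=lambda k: abs(p - centers[k]))
        groups[i].append(p)
    return [g for g in groups if g]

def couple(points, init_segments, max_iters=20):
    """Alternate recognition and segmentation until labels stabilize."""
    segments = init_segments
    centers = [recognize(s) for s in segments]
    for _ in range(max_iters):
        segments = resegment(points, centers)
        new_centers = [recognize(s) for s in segments]
        if new_centers == centers:
            break
        centers = new_centers
    return segments

# A bad initial split is repaired once recognition and segmentation
# inform each other.
print(couple([0, 1, 2, 10, 11, 12], [[0, 1, 2, 10], [11, 12]]))
# → [[0, 1, 2], [10, 11, 12]]
```

The point of the toy is the fixed-point structure: each pass uses the other task's latest output, which is exactly how a coupled formulation escapes the chicken-and-egg problem named on the previous slide.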

  33. Post-segmentation Results Pre-segmentation Post-segmentation

  34. Dataset Construction

  35. Dataset Construction • Two advantages: • Decreases the difference between CAD models and scanned models • Segmented components & component pairs make retrieval easier

  36. The Next Best View Problem • Which view of the OOI should I scan next?

  37. Next Best View
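The next-best-view choice can be read as picking the candidate view that reveals the most currently unseen surface of the OOI. A minimal sketch under that assumption; visibility is abstracted as a set of surface-patch ids per view, where a real system would ray-cast against the partial reconstruction:

```python
# Hedged NBV sketch: among candidate views of the object of interest,
# pick the one that would reveal the most not-yet-reconstructed
# surface patches. Patch ids and view names are illustrative.

def next_best_view(candidate_views, seen_patches):
    """candidate_views: {view_name: set of patch ids visible from it}."""
    def gain(view):
        return len(candidate_views[view] - seen_patches)
    return max(candidate_views, key=gain)

views = {
    "front": {1, 2, 3},
    "left":  {3, 4, 5, 6},
    "top":   {2, 3},
}
seen = {1, 2, 3}                       # already-reconstructed patches
print(next_best_view(views, seen))     # → "left" (reveals patches 4, 5, 6)
```

Repeating this greedy step until every view's gain drops below a threshold yields a simple local scanning loop for one object before moving to the next NBO.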

  38. System Pipeline • Key techniques: • Objectness-based segmentation: pre-segmentation; post-segmentation (important) • Objectness-based reconstruction: the next best object (NBO); the next best view (NBV)

  39. Evaluation • Virtual scene datasets: SUNCG (66 scenes), ScanNet (38 scenes)

  40. Comparison • Comparing object recognition with PointNet++ [Qi et al. 2017]

  41. Comparison • Comparing Rand Index of segmentation
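The Rand Index used in this comparison is a standard segmentation metric: the fraction of point pairs on which two segmentations agree about being in the same segment or in different segments. A self-contained implementation (the label lists are illustrative):

```python
from itertools import combinations

# Rand Index between two segmentations, given as per-point labels.
# Label values themselves do not matter, only the grouping they induce.

def rand_index(labels_a, labels_b):
    assert len(labels_a) == len(labels_b)
    agree = total = 0
    for i, j in combinations(range(len(labels_a)), 2):
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        agree += same_a == same_b
        total += 1
    return agree / total

# Identical groupings score 1.0 even with swapped label names.
print(rand_index([0, 0, 1, 1], [1, 1, 0, 0]))   # → 1.0
```

Note the metric is invariant to label permutation, which is why it suits comparing an automatic segmentation against a ground-truth one whose segment ids are arbitrary.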

  42. Comparison • Comparing object coverage rate and quality against tensor-field-guided autoscanning [Xu et al. 2017], under depth noise

  43. Comparison • Comparing object coverage rate and quality against tensor-field-guided autoscanning [Xu et al. 2017]

  44. More Results

  45. Limitations • No similar models • Cluttered scenes

  46. Limitations & Future Works • Single object • Group structure

  47. Future Works • Combine with image-based methods • Driverless cars with LiDAR

  48. Conclusion • An object-guided approach for autonomous scene exploration, reconstruction, and understanding • Model-driven objectness • Objectness-based segmentation • Objectness-based NBO strategy • Objectness-based NBV strategy • Coupled global exploration and local scanning • Coupled segmentation and recognition

  49. Thank you! Q & A
