

  1. Single-View and Multi-View Planar Models for Dense Monocular Mapping Alejo Concha, José M. Fácil and Javier Civera SLAMLab – Robotics, Perception and Real-Time Group Universidad de Zaragoza, Spain International Workshop on Lines, Planes and Manhattan Models for 3-D Mapping (LPM 2017) September 28, 2017, IROS 2017, Vancouver.

  2. Index • Motivation • Background (direct mapping) • Dense monocular mapping • Superpixels in monocular mapping • Superpixel triangulation • Dense mapping using superpixels • Superpixel fitting • Learning-based planar models in monocular mapping • Data-driven primitives • Layout • Deep models • Conclusions

  3. Motivation • The scene model in feature-based monocular SLAM is sparse and limited. • Our goal: dense mapping from monocular (RGB) image sequences.

  4. Background: Dense Monocular Mapping [Chart: accuracy, density and cost of sparse/semi-dense and dense methods, for high- and low-texture scenes]

  5. Dense Monocular Mapping: Low Texture [Chart: accuracy, density and cost of dense methods, for high- and low-texture scenes]

  6. Superpixels (mid-level) • Image segmentation based on color and 2D distance. • Reasonable features for textureless areas. • We assume that regions of homogeneous color are approximately planar. [Chart: accuracy, density and cost of sparse/semi-dense, dense, superpixel and dense + superpixel methods, for high- and low-texture scenes]
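As an illustration (not from the original slides): a minimal sketch of this kind of color-plus-2D-position segmentation using scikit-image's SLIC superpixels; the library choice and parameter values are our own assumptions.

```python
import numpy as np
from skimage.segmentation import slic

# SLIC clusters pixels by color similarity and 2D proximity, which is the
# kind of segmentation described above. Each resulting label can then be
# treated as a candidate near-planar region.
image = np.random.rand(240, 320, 3)          # stand-in for a video frame
labels = slic(image, n_segments=300, compactness=10.0)
print(labels.shape, labels.max())            # per-pixel superpixel ids
```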

  7. Dense Mapping: Low Texture [Chart: accuracy, density and cost of dense methods, for high- and low-texture scenes]

  8. Semi-dense Mapping: Low Texture [Chart: accuracy, density and cost of sparse/semi-dense methods, for high- and low-texture scenes]

  9. 2D Superpixels: Low Texture [Chart: accuracy, density and cost of superpixel methods, for high- and low-texture scenes]

  10. Superpixel Triangulation • Multi-view model: homography H = K (R + t nᵀ / d) K⁻¹ • Error: contour reprojection error (ε) • Monte Carlo initialization: for every superpixel we create several plausible {n, d} hypotheses and rank them by their error.

  11. Superpixel Triangulation • Multi-view model: homography H = K (R + t nᵀ / d) K⁻¹ • Error: contour reprojection error (ε) • Mapping: minimize the reprojection error.
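A minimal NumPy sketch of the model on these two slides: the plane-induced homography H = K (R + t nᵀ / d) K⁻¹, a contour reprojection error, and a Monte Carlo search over {n, d} hypotheses. This is an illustration under our own assumptions (sampling ranges, nearest-neighbour contour distance), not the authors' implementation.

```python
import numpy as np

def plane_homography(K, R, t, n, d):
    """Plane-induced homography H = K (R + t n^T / d) K^{-1} that maps
    pixels on the plane (n, d) from the reference view to a second view."""
    return K @ (R + np.outer(t, n) / d) @ np.linalg.inv(K)

def contour_reprojection_error(H, contour_ref, contour_obs):
    """Mean distance between the reference contour warped by H and the
    contour observed in the second image (both Nx2 pixel arrays)."""
    pts = np.hstack([contour_ref, np.ones((len(contour_ref), 1))])
    warped = (H @ pts.T).T
    warped = warped[:, :2] / warped[:, 2:3]
    # nearest-neighbour distance from each warped point to the observed contour
    d2 = ((warped[:, None, :] - contour_obs[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2.min(axis=1)).mean()

def monte_carlo_init(K, R, t, contour_ref, contour_obs, n_hyp=100, seed=0):
    """Sample plane hypotheses {n, d} and rank them by reprojection error."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_hyp):
        n = rng.normal(size=3)
        n /= np.linalg.norm(n)               # random unit normal
        d = rng.uniform(0.5, 5.0)            # random plane distance (metres)
        err = contour_reprojection_error(
            plane_homography(K, R, t, n, d), contour_ref, contour_obs)
        if best is None or err < best[0]:
            best = (err, n, d)
    return best                              # (error, normal, distance)
```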

  12. Superpixels in low-textured areas [Chart: accuracy, density and cost of superpixel methods, for high- and low-texture scenes]

  13. Using Superpixels in Monocular SLAM

  14. Dense + Superpixels

  15. Dense + Superpixels [Chart: accuracy, density and cost of the combined dense + superpixel method, for high- and low-texture scenes]

  16. Dense + Superpixels (5 centimetres error!) [Panels: Video (input), PMVS (high-gradient pixels), Dense (TV-regularization), Superpixels, PMVS + Superpixels, Dense + Superpixels] Based on: Richard A. Newcombe, Steven J. Lovegrove and Andrew J. Davison. DTAM: Dense Tracking and Mapping in Real-Time. ICCV 2011, pages 2320–2327. IEEE, 2011. Yasutaka Furukawa and Jean Ponce. Accurate, Dense, and Robust Multiview Stereopsis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(8):1362–1376, 2010. Alejo Concha, Wajahat Hussain, Luis Montano and Javier Civera. Manhattan and Piecewise-Planar Constraints for Dense Monocular Mapping. RSS 2014. Alejo Concha and Javier Civera. Using Superpixels in Monocular SLAM. ICRA 2014.

  17. Fitting 3D Superpixels to Semi-dense Maps • TV-regularization is expensive; a GPU may be needed for real-time operation. • Semi-dense mapping plus superpixels is a reasonable option: cheaper than TV-regularization (runs on a CPU) with only a small loss in density. • Given a semi-dense map, superpixels can be initialized via SVD more accurately and at a lower cost (see the sketch below). • LIMITATION: parallax is required! Code at https://github.com/alejocb/dpptam
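A minimal sketch of the SVD initialization mentioned above: fit a plane to the semi-dense 3D points that fall inside a superpixel. This is illustrative, not the dpptam code.

```python
import numpy as np

def fit_plane_svd(points):
    """Least-squares plane fit to an Nx3 array of 3D points.
    Returns (unit normal n, distance d) such that n . x = d on the plane."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value of the
    # centered points is the direction of least variance: the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    return n, n @ centroid
```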

  18. Data-driven primitives (mid-level) • Feature discovery on RGB-D training data. • Extracts patterns that are consistent in depth (D) and discriminative in RGB. • At test time, mid-level depth patterns can be predicted from a single RGB view.

  19. Multiview Layout (high-level) (a) Sparse/semi-dense reconstruction. (b) Plane normals from 3D vanishing points (image VPs, back-projection, 3D clustering). (c) Plane distances from a sparse/semi-dense multi-view reconstruction. (d) Superpixel segmentation, geometric and photometric feature extraction. (e), (f) Classification (AdaBoost).
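For step (b): back-projecting an image vanishing point through the calibration matrix gives a 3D direction in the camera frame, from which the dominant plane normals can be taken. A minimal sketch; the function name and interface are our own.

```python
import numpy as np

def vp_to_direction(K, vp):
    """Back-project a vanishing point (pixel coordinates) to a unit 3D
    direction in the camera frame; Manhattan plane normals can be read
    off the three dominant directions."""
    d = np.linalg.inv(K) @ np.array([vp[0], vp[1], 1.0])
    return d / np.linalg.norm(d)
```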

  20. Superpixels, Data-Driven Primitives and Layout

  21. Superpixels, Data-Driven Primitives and Layout • NYU dataset, high-parallax sequences

  22. Superpixels, Data-Driven Primitives and Layout • NYU dataset, low-parallax sequences

  23. Single-View Depth Prediction • Several networks already exist (Eigen14, Eigen15, Liu15, Chakrabarti16, Cao16, Godard16, Ummenhofer16, …)

  24. Deep Learning Depth vs. Multiview Depth • Deep learning: fairly accurate in all pixels. Multiview: very accurate in high-gradient pixels, inaccurate in low-gradient ones. • Deep learning: fairly accurate from a single view. Multiview: very accurate for high-parallax motion, inaccurate for low-parallax motion. • Deep learning: no model for the error. Multiview: good model for the error. • Deep learning: approximate scale. Multiview: 3D reconstruction up to scale. • Deep learning: errors depend on the image content. Multiview: errors depend on the geometry.

  25. Fusing depth from deep learning and multiple views • The fusion is not trivial: there is no uncertainty model for CNN depth, and the errors of the two sources come from different causes. • Our assumption: in general, deep learning depth is more accurate; multi-view depth is more accurate for high-texture, high-parallax pixels.
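A minimal sketch of one way to realize this assumption: align the multi-view scale to the CNN prediction and blend where multi-view depth is available. The median scale alignment and the fixed blending weight are our own illustrative choices, not the authors' fusion.

```python
import numpy as np

def fuse_depths(deep_depth, mv_depth, mv_valid, w_mv=0.7):
    """Fuse a dense CNN depth map with a semi-dense multi-view depth map.

    deep_depth : HxW dense single-view prediction (approximate scale).
    mv_depth   : HxW multi-view depth, up to scale, valid only where
                 mv_valid is True (high-gradient, high-parallax pixels).
    """
    # One-parameter robust scale alignment of multi-view depth to the CNN.
    scale = np.median(deep_depth[mv_valid] / mv_depth[mv_valid])
    fused = deep_depth.copy()
    # Trust multi-view depth more on the pixels where it exists.
    fused[mv_valid] = (w_mv * scale * mv_depth[mv_valid]
                       + (1.0 - w_mv) * deep_depth[mv_valid])
    return fused
```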

  26. Results • The error of deep learning depth is ~50% lower than the multi-view one. • Our fusion reduces the error by ~10% over the deep-learning-only results. • The scale-invariant metric shows that our fusion improves the estimated structure. • Deep depth generalizes well (Eigen15 was trained on NYU but remains accurate on TUM).
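The scale-invariant metric referred to above is presumably the scale-invariant log error of Eigen et al. 2014, which ignores a global scale factor and so scores the quality of the structure; a minimal sketch:

```python
import numpy as np

def scale_invariant_error(pred, gt):
    """Scale-invariant log error (Eigen et al. 2014): invariant to a
    global scaling of pred, so it measures structure rather than scale."""
    d = np.log(pred) - np.log(gt)
    return (d ** 2).mean() - d.mean() ** 2
```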

  27. Conclusions (no free lunch!) • Point-based features (low-level): high accuracy iff ↑ texture and ↑ parallax. • Superpixels (mid-level): high accuracy iff ↓ texture and ↑ parallax. • Data-driven primitives (mid-level): fair accuracy for ↑ texture and ↓ parallax; not fully dense. • Layout (high-level): fair accuracy even for ↓ texture and ↓ parallax; assumes a predetermined scene shape. • Deep learning (mid/high-level): fair accuracy even for ↓ texture and ↓ parallax; fully dense; more general.
