Wide RGB-D for Scaled Layout Reconstruction


  1. Wide RGB-D for Scaled Layout Reconstruction Alejandro Perez-Yus, Gonzalo Lopez-Nicolas, Jose J. Guerrero Universidad de Zaragoza, Spain International Workshop on Lines, Planes and Manhattan Models for 3-D Mapping September 28, 2017 at IROS 2017, Vancouver

  2. RGB-D cameras provide valuable information, but with limited FOV

  3. Fisheye cameras are able to view the whole scene, but lack depth information

  4. Our proposal: Use both * Hybrid camera system * The depth camera provides 3D certainty and scale * The fisheye camera covers a 180º field of view

  5. How? With layout reconstruction * Presented at ECCV 2016: A. Perez-Yus, G. Lopez-Nicolas, J.J. Guerrero, “Peripheral Expansion of Depth Information via Layout Estimation”

  6. * Watch video at: https://youtu.be/nQYvhAhvv6U

  7. Outline of the method

  8. Outline of the method

  9. Calibration problem * Fisheye calibration has to be performed separately to model distortion properly

  10. Calibration * New method that combines: * RGB-to-depth calibration [1] * Omnidirectional camera models [2]
[1] C. Herrera et al., “Joint depth and color camera calibration with distortion correction”, PAMI 2012
[2] D. Scaramuzza et al., “A toolbox for easily calibrating omnidirectional cameras”, IROS 2006
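
As background for [2], the Scaramuzza model back-projects a fisheye pixel to a 3D viewing ray through a polynomial of its radial distance. A minimal Python sketch; the polynomial coefficients below are made-up placeholders (a real calibration toolbox supplies them):

```python
import numpy as np

def fisheye_ray(u, v, poly=(-180.0, 0.0, 1.2e-3, 0.0, 1.5e-7)):
    # Back-project a pixel (u, v), already centered on the principal point,
    # to a unit viewing ray with the Scaramuzza polynomial model:
    # ray = (u, v, f(rho)), f(rho) = a0 + a1*rho + ... + a4*rho^4.
    rho = np.hypot(u, v)                         # radial distance in pixels
    z = sum(a * rho**i for i, a in enumerate(poly))
    ray = np.array([u, v, z], dtype=float)
    return ray / np.linalg.norm(ray)             # unit direction
```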

  11. Calibration A. Perez-Yus, G. Lopez-Nicolas, J.J. Guerrero, “A novel hybrid camera system with depth and fisheye cameras”, International Conference on Pattern Recognition (2016)

  12. Outline of the method

  13. Line extraction To avoid rectifying the image, we use a method that extracts lines directly from omnidirectional images with revolution symmetry: J. Bermudez-Cameo, G. Lopez-Nicolas, J.J. Guerrero, “Automatic Line Extraction in Uncalibrated Omnidirectional Cameras with Revolution Symmetry”, International Journal of Computer Vision (2015)

  14. Extraction of the VPs Manhattan environments are assumed. We extract the 3 VPs in a two-stage optimization: 1. A coarse estimate from the normals of the 3D points 2. A final extraction with lines (more accurate)
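
To make the first stage concrete, here is a simplified illustration (not the authors' exact optimization) of fitting a Manhattan frame to the depth-point normals: find the rotation that brings every normal close to some coordinate axis.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def manhattan_frame(normals):
    # normals: (N, 3) array of unit surface normals from the 3D points.
    def cost(rotvec):
        R = Rotation.from_rotvec(rotvec).as_matrix()
        aligned = normals @ R.T                  # rotate all normals at once
        # a normal aligned with an axis has max |component| equal to 1
        return np.sum((1.0 - np.abs(aligned).max(axis=1)) ** 2)
    res = minimize(cost, x0=np.zeros(3), method="Nelder-Mead")
    R = Rotation.from_rotvec(res.x).as_matrix()
    return R                                     # rows: Manhattan directions in camera frame
```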

  15. Line classification * Three main directions * Above/below the horizon * Long lines * Associated with 3D plane intersections

  16. Outline of the method

  17. Line projection and scaling Lines below the horizon are intersected with the floor plane to obtain their 3D coordinates
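
The intersection itself is a standard ray-plane computation. A minimal sketch, assuming a gravity-aligned camera frame and a known camera height (both simplifications):

```python
import numpy as np

def ray_floor_intersection(ray, cam_height):
    # ray: unit viewing ray from the camera; floor plane: z = -cam_height.
    if ray[2] >= 0:
        return None                      # ray points at or above the horizon
    t = -cam_height / ray[2]             # scale so the ray reaches the floor
    return t * ray                       # metric 3D point on the floor plane
```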

  18. Line projection and scaling The height of the ceiling is computed assuming floor/ceiling symmetry: projected onto the 2D floor plane, the floor and ceiling contours should overlap.
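
One way to read this criterion: the 2D footprint of a ceiling ray grows with the assumed ceiling height, so the height can be chosen to make the projected ceiling contour overlap the known floor contour. A sketch using a simple nearest-point distance as the overlap measure (an assumption; the paper's exact criterion may differ):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ceiling_height(ceil_rays, floor_pts, h_lo=1.0, h_hi=5.0):
    # ceil_rays: (M, 3) unit rays with positive z (pointing up at the ceiling)
    # floor_pts: (N, 2) 2D floor-contour points from the scaled floor lines
    def overlap_cost(h):
        t = h / ceil_rays[:, 2]                  # scale rays to plane z = h
        xy = (t[:, None] * ceil_rays)[:, :2]     # 2D footprint of ceiling lines
        d = np.linalg.norm(xy[:, None, :] - floor_pts[None, :, :], axis=2)
        return np.mean(d.min(axis=1))            # mean nearest-point distance
    res = minimize_scalar(overlap_cost, bounds=(h_lo, h_hi), method="bounded")
    return res.x                                 # ceiling height above the camera
```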

  19. Line projection and scaling (Example)

  20. Corner extraction We extract four types of corners, in either the floor or the ceiling plane

  21. Corner extraction Corners are scored to favour their appearance in the layout hypotheses generation when: * Their lines are longer * Their lines are closer to the intersection point * They are formed by more lines * Their lines are associated with 3D intersections
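
A toy version of such a score, combining the four cues above; the field names and weights are illustrative (the slide gives the cues, not the formula):

```python
def corner_score(lines, w=(1.0, 1.0, 0.5, 2.0)):
    # Each line is a dict with hypothetical fields:
    # 'length', 'dist_to_corner', 'has_3d'.
    total_len = sum(l["length"] for l in lines)                        # longer lines
    proximity = sum(1.0 / (1.0 + l["dist_to_corner"]) for l in lines)  # closer lines
    n_lines = len(lines)                                               # more lines
    n_3d = sum(l["has_3d"] for l in lines)                             # 3D-backed lines
    return w[0] * total_len + w[1] * proximity + w[2] * n_lines + w[3] * n_3d
```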

  22. Corner extraction (example)

  23. Outline of the method

  24. Hypotheses generation 1. Sample 2-5 corners, with probability of selection increasing with their scores 2. Sort them clockwise 3. Join the corners with walls oriented along Manhattan directions 4. Optionally add undetected corners to keep alternately-oriented Manhattan walls 5. Close the layout
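
Steps 1-2 are straightforward to sketch; the Manhattan wall snapping and closing of steps 3-5 are omitted here, and the sampling details are illustrative:

```python
import numpy as np

def sample_hypothesis(corners, scores, rng=None):
    # corners: (N, 2) candidate corner positions on the floor plane
    # scores: per-corner scores from the corner extraction stage
    rng = rng or np.random.default_rng()
    p = np.asarray(scores, dtype=float)
    p /= p.sum()                                 # score-weighted probabilities
    k = min(int(rng.integers(2, 6)), len(corners))   # draw 2-5 corners
    idx = rng.choice(len(corners), size=k, replace=False, p=p)
    pts = np.asarray(corners, dtype=float)[idx]
    d = pts - pts.mean(axis=0)
    order = np.argsort(-np.arctan2(d[:, 1], d[:, 0]))  # clockwise around centroid
    return pts[order]
```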

  25. Hypotheses generation example

  26. Invalid hypotheses

  27. Hypotheses in 3D

  28. Outline of the method

  29. Layout evaluation methods * Sum of Scores (SS) * Sum of Edges (SE) * Angle Coverage (AC) * Orientation Map (OM), from [3]
[3] D.C. Lee et al., “Geometric reasoning for single image structure recovery”, CVPR 2009

  30. Experimental evaluation * We created our own data, including: * RGB-D + fisheye camera system: 70 images * Google Tango * We measure the quality of the layout extraction with Pixel Accuracy, i.e. the percentage of pixels of the resulting labeled image that match the manually labeled ground truth
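
The metric itself is simple; a minimal sketch over two label maps of equal size:

```python
import numpy as np

def pixel_accuracy(pred, gt):
    # pred, gt: HxW integer label maps (predicted layout vs. ground truth)
    pred, gt = np.asarray(pred), np.asarray(gt)
    return 100.0 * np.mean(pred == gt)   # percentage of matching pixels
```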

  31. Experimental results * With few hypotheses we obtain good results → corner extraction and scoring work well * The method gets considerably worse results when the depth information is removed

  32. Results: Tango + scaling

  33. Results: Tango + scaling

  34. Bonus: New calibration method A. Perez-Yus, E. Fernandez-Moral, G. Lopez-Nicolas, J.J. Guerrero, P. Rives, “Extrinsic calibration of multiple RGB-D cameras from line observations”, IEEE Robotics and Automation Letters 2018

  35. New calibration method

  36. New calibration method

  37. Wide RGB-D for Scaled Layout Reconstruction Alejandro Perez-Yus, Gonzalo Lopez-Nicolas, Jose J. Guerrero Universidad de Zaragoza, Spain International Workshop on Lines, Planes and Manhattan Models for 3-D Mapping September 28, 2017 at IROS 2017, Vancouver
