Collaborative Visual SLAM Framework for a Multi-Robot System



  1. Collaborative Visual SLAM Framework for a Multi-Robot System. Nived Chebrolu, David Marquez-Gamez and Philippe Martinet. 7th Workshop on Planning, Perception and Navigation for Intelligent Vehicles, Hamburg, Germany, 28 September 2015.

  2. Motivation for a collaborative system: Multi-robot system for disaster relief operations (picture taken from project SENEKA, Fraunhofer IOSB).

  3. Contribution of this paper: A system for collaborative visual SLAM.

  4. Perception sensor: Monocular camera.

  5. Main components of the system: Monocular SLAM System, Visual Place Recognition, Merging Maps, Collaborative SLAM Framework.

  6. Monocular Visual SLAM. Goal: Given a sequence of images, obtain the trajectory of the camera and the structure/model of the environment. Examples: MonoSLAM, PTAM, DTAM.
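
     As background, not something on the slide: feature-based monocular SLAM, as in MonoSLAM and PTAM, is commonly posed as a joint optimization of camera poses and map points that minimizes reprojection error,

     $$
     \{\hat{T}_i\},\{\hat{X}_j\} \;=\; \arg\min_{\{T_i\},\{X_j\}} \sum_{i,j} \rho\!\left( \left\| u_{ij} - \pi\!\left(T_i X_j\right) \right\|^2 \right),
     $$

     where $T_i \in SE(3)$ are the camera poses, $X_j$ the 3D map points, $u_{ij}$ the observed pixel of point $j$ in image $i$, $\pi$ the camera projection, and $\rho$ a robust cost such as the Huber norm. Direct methods such as DTAM, and LSD-SLAM on the next slide, instead minimize a photometric error defined directly on pixel intensities.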

  7. Large-Scale Direct Visual SLAM (LSD-SLAM): output.

  8. Monocular SLAM: System Overview. Components: Tracking, Depth estimation, Map optimization.
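
     A structural sketch of how the three blocks on this slide fit together in an LSD-SLAM-style pipeline. All class and function names below are hypothetical placeholders for illustration, not the actual LSD-SLAM or paper code; the real steps (photometric tracking, semi-dense depth filtering, pose-graph optimization with g2o) are only stubbed.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Keyframe:
    image: np.ndarray            # grayscale image
    pose: np.ndarray             # 4x4 camera-to-world transform
    inv_depth: np.ndarray        # semi-dense inverse-depth estimate
    inv_depth_var: np.ndarray    # per-pixel inverse-depth variance

@dataclass
class SlamMap:
    keyframes: list = field(default_factory=list)

def track(frame, keyframe):
    """Placeholder for tracking: estimate the frame pose relative to the
    keyframe by minimizing a photometric error (Gauss-Newton on SE(3))."""
    return keyframe.pose.copy()

def update_depth(keyframe, frame, pose):
    """Placeholder for depth estimation: refine the keyframe's semi-dense
    inverse-depth map by small-baseline stereo against the tracked frame."""
    pass

def optimize_map(slam_map):
    """Placeholder for map optimization: pose-graph optimization over
    keyframe constraints (done with g2o in the paper's software stack)."""
    pass

def needs_new_keyframe(pose, keyframe):
    """Placeholder heuristic: create a keyframe once the camera has moved
    far enough away from the current reference keyframe."""
    return np.linalg.norm(pose[:3, 3] - keyframe.pose[:3, 3]) > 0.5

def process_frame(frame, slam_map):
    if not slam_map.keyframes:               # bootstrap with the first frame
        h, w = frame.shape
        slam_map.keyframes.append(
            Keyframe(frame, np.eye(4), np.ones((h, w)), np.ones((h, w))))
        return
    kf = slam_map.keyframes[-1]
    pose = track(frame, kf)                  # 1. Tracking
    update_depth(kf, frame, pose)            # 2. Depth estimation
    if needs_new_keyframe(pose, kf):
        slam_map.keyframes.append(
            Keyframe(frame, pose, kf.inv_depth.copy(), kf.inv_depth_var.copy()))
        optimize_map(slam_map)               # 3. Map optimization
```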

  9. Outline recap: Monocular SLAM System, Visual Place Recognition, Merging Maps, Collaborative SLAM Framework.

  10. Place Recognition System: Context. Where? Is the place already visited?

  11. FAB-MAP Approach: Overlap detection scheme.
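
      For reference, this is the question FAB-MAP answers probabilistically. Summarizing Cummins and Newman's published formulation rather than anything shown on the slide: each image is reduced to a bag of visual words $z_k$, and the probability that it was taken at an already-visited location $L_i$, or at a new place, is updated recursively with Bayes' rule,

      $$
      p(L_i \mid \mathcal{Z}^k) \;=\; \frac{p(z_k \mid L_i, \mathcal{Z}^{k-1})\, p(L_i \mid \mathcal{Z}^{k-1})}{p(z_k \mid \mathcal{Z}^{k-1})},
      $$

      where $\mathcal{Z}^k$ is the set of observations up to time $k$ and the observation likelihood is approximated with a Chow-Liu tree over word co-occurrences. The P(Seen) and P(New) values on the next slide correspond to this posterior for the best existing match and for the new-place hypothesis.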

  12. Experimental Results - A Simple Scenario:
      Image Num.   P(Seen)   P(New)
      1            0.991     0.001
      2            0.085     0.910
      3            0.922     0.002
      4            0.991     0.001
      5            0.911     0.131

  13. Outline recap: Monocular SLAM System, Visual Place Recognition, Merging Maps, Collaborative SLAM Framework.

  14. Merging Maps: Context. What is the transformation between two views?
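
      Because monocular SLAM recovers each robot's map only up to an unknown scale, the view-to-view (and hence map-to-map) transformation sought here is naturally a similarity transform rather than a rigid one; this matches LSD-SLAM's use of Sim(3) and is an assumption, not a statement from the slide:

      $$
      p' \;=\; s\,R\,p + t, \qquad s > 0,\; R \in SO(3),\; t \in \mathbb{R}^3,
      $$

      so merging the maps amounts to estimating the scale $s$, rotation $R$ and translation $t$ that carry points $p$ from one robot's frame into the other robot's frame.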

  15. Procedure for Merging Maps: (1) Initial estimate using Horn's method; (2) Refine the estimate using direct image alignment; (3) Final refinement using ICP.
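
      A minimal sketch of the first step, the closed-form initial estimate from matched 3D points. Horn's original method solves this absolute-orientation problem with unit quaternions; the sketch below uses the equivalent SVD-based closed form (Umeyama's formulation) and also recovers the scale, since two monocular maps need not agree in scale. Function and variable names are illustrative, not the paper's code.

```python
import numpy as np

def initial_alignment(src, dst):
    """Closed-form similarity alignment: find s, R, t minimizing
    sum_i || dst_i - (s * R @ src_i + t) ||^2 for corresponding 3D points.
    src, dst: (N, 3) arrays of matched points from the two maps."""
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst

    cov = dst_c.T @ src_c / len(src)              # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:  # guard against reflections
        S[2, 2] = -1.0
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)       # variance of the source cloud
    s = np.trace(np.diag(D) @ S) / var_src        # optimal scale
    t = mu_dst - s * R @ mu_src
    return s, R, t

# Self-check on synthetic correspondences with a known transform.
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 3))
a = 0.4
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
dst = 1.7 * src @ R_true.T + np.array([0.3, -1.0, 2.0])
s, R, t = initial_alignment(src, dst)
print(np.isclose(s, 1.7), np.allclose(R, R_true), np.allclose(t, [0.3, -1.0, 2.0]))
```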

  16. Experimental Results: Input. (a) First image, (b) Second image, (c) Depth map for first image, (d) Depth map for second image.

  17. Experimental Results: Output. (e) Before applying transformation, (f) After applying transformation.

  18. Outline recap: Monocular SLAM System, Visual Place Recognition, Merging Maps, Collaborative SLAM Framework.

  19. Overall Scheme. Figure: Overall scheme of our collaborative SLAM system.
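
      A hypothetical sketch of the server-side logic the overall scheme implies: each robot runs its own monocular SLAM and streams keyframes to a central server, which runs place recognition across robots, estimates the inter-map transform on a match, and feeds global information back. Class and method names (CollaborationServer, place_recognizer.query, map_merger.estimate_transform) are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Keyframe:
    robot_id: str
    image: np.ndarray
    pose: np.ndarray                  # 4x4 pose in the robot's local map frame

@dataclass
class GlobalMap:
    robot_to_global: dict = field(default_factory=dict)  # per-robot frame -> global frame
    keyframes: list = field(default_factory=list)

class CollaborationServer:
    """Central server: collects keyframes from all robots, detects scene
    overlap between robots, merges maps, and feeds global poses back."""

    def __init__(self, place_recognizer, map_merger):
        self.place_recognizer = place_recognizer   # FAB-MAP-style overlap detector
        self.map_merger = map_merger               # Horn init + image alignment + ICP
        self.global_map = GlobalMap()

    def on_keyframe(self, kf: Keyframe):
        # 1. Place recognition: has any robot already seen this scene?
        match = self.place_recognizer.query(kf, self.global_map.keyframes)
        self.global_map.keyframes.append(kf)

        # 2. Map merging: on a cross-robot match, estimate the transform that
        #    carries the new robot's frame into the matched robot's frame.
        if match is not None and match.robot_id != kf.robot_id:
            T_match_from_kf = self.map_merger.estimate_transform(match, kf)
            self._register(kf.robot_id, match.robot_id, T_match_from_kf)

        # 3. Feedback: return the robot's frame-to-global transform, if known.
        return self.global_map.robot_to_global.get(kf.robot_id)

    def _register(self, new_robot, known_robot, T_known_from_new):
        # Compose: new frame -> known frame -> global frame.
        base = self.global_map.robot_to_global.setdefault(known_robot, np.eye(4))
        self.global_map.robot_to_global[new_robot] = base @ T_known_from_new
```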

  20. Experimental Results. Case study: experimental settings.
      Robotic platform: two TurtleBots.
      Sensor: uEye monocular camera with wide-angle lens.
      Images: 640 × 480 pixels @ 30 Hz.
      Environment: 20 m × 20 m indoor area (semi-industrial).
      Computation: Core 2 Duo laptop.
      Software: ROS, OpenCV, g2o library.

  21. At Instance 1: (a) Robot R1, (b) Robot R2.

  22. At Instance 2: (c) Robot R1, (d) Robot R2.

  23. At Instance 3: (e) Robot R1, (f) Robot R2.

  24. Global Map: (g) Combined trajectory, (h) Combined depth map. Global map computed at the central server.
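
      Once the merging step has produced the transform between the two robots' map frames, building the combined trajectory shown here is just a change of coordinates. A minimal sketch (values and names are placeholders, with R1's frame taken as the global frame):

```python
import numpy as np

def to_global(points, s, R, t):
    """Re-express an (N, 3) array of positions, given in a robot's local map
    frame, in the global frame via the similarity transform from map merging."""
    return s * points @ R.T + t

# Placeholder trajectories and a placeholder transform from the merging step.
traj_r1 = np.zeros((4, 3))                            # R1: already in the global frame
traj_r2 = np.ones((4, 3))                             # R2: in its own local frame
s, R, t = 1.0, np.eye(3), np.array([5.0, 0.0, 0.0])   # estimated R2 -> global
combined = np.vstack([traj_r1, to_global(traj_r2, s, R, t)])
print(combined.shape)   # (8, 3): both trajectories in one frame
```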

  25. Summary. A collaborative visual SLAM framework with:
      1. Monocular SLAM process for each robot.
      2. Detection of scene overlap amongst several robots.
      3. Global map computation fusing measurements from all robots.
      4. Feedback mechanism for global information to be communicated back to each robot.

  26. Scope for Future Work:
      1. Investigate the advantage due to feedback in terms of localization accuracy and map quality.
      2. Towards a decentralized system: direct robot-to-robot communication.
      3. Adapting for a hybrid team of robots (e.g., UAVs and ground robots).

  27. Thank You: Thank you very much for your attention! Q&A.
