Collaborative Visual SLAM Framework for a Multi-Robot System
Nived Chebrolu, David Marquez-Gamez and Philippe Martinet
7th Workshop on Planning, Perception and Navigation for Intelligent Vehicles
Hamburg, Germany, 28 September 2015
Motivation for a collaborative system
Multi-robot systems for disaster relief operations.
[Figure: picture taken from project SENEKA, Fraunhofer IOSB]
Contribution of this paper
A system for collaborative visual SLAM.
Perception sensor
Monocular camera.
Main components of the system
- Monocular SLAM system
- Visual place recognition
- Merging maps
- Collaborative SLAM framework
Monocular visual SLAM
Goal: given a sequence of images, recover the trajectory of the camera and the structure/model of the environment.
Representative systems: MonoSLAM, PTAM, DTAM.
Large-Scale Direct Visual SLAM (LSD-SLAM)
[Figure: example LSD-SLAM output]
Monocular SLAM: System Overview
Three main components: tracking, depth estimation, and map optimization.
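As a rough illustration of the tracking step, the sketch below computes the photometric residuals that direct methods such as LSD-SLAM minimize when aligning a new frame against a keyframe that carries a semi-dense depth map. This is a minimal numpy sketch under assumed inputs (grayscale images I_ref and I_cur, depth map D_ref, intrinsics K, candidate pose R, t); it is not the actual implementation used in the paper.

```python
import numpy as np

def photometric_residuals(I_ref, D_ref, I_cur, K, R, t):
    """Photometric error of a candidate pose (R, t): the quantity that
    direct tracking minimizes. Nearest-neighbour sampling, no robust
    weighting; for illustration only."""
    h, w = I_ref.shape
    v, u = np.mgrid[0:h, 0:w]
    valid = D_ref > 0                          # semi-dense: only pixels with depth
    # Back-project the reference pixels to 3-D points in the keyframe.
    pix = np.stack([u[valid], v[valid], np.ones(int(valid.sum()))])
    P = np.linalg.inv(K) @ pix * D_ref[valid]
    # Transform into the current frame and project with the intrinsics K.
    Pc = R @ P + t[:, None]
    proj = K @ Pc
    uc = np.round(proj[0] / proj[2]).astype(int)
    vc = np.round(proj[1] / proj[2]).astype(int)
    inb = (uc >= 0) & (uc < w) & (vc >= 0) & (vc < h) & (Pc[2] > 0)
    # Residual: intensity difference between corresponding pixels.
    return (I_cur[vc[inb], uc[inb]].astype(float)
            - I_ref[v[valid][inb], u[valid][inb]].astype(float))
```

Depth estimation then refines D_ref with small-baseline stereo against tracked frames, and map optimization keeps the keyframe pose graph consistent (e.g. with g2o).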
Visual Place Recognition
Place Recognition System: Context
Is this place already visited?
FAB-MAP Approach
[Figure: overlap detection scheme]
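FAB-MAP decides from appearance alone whether an image comes from a previously visited place, evaluating P(seen) vs. P(new) with a bag-of-words model over visual words. As a much-simplified stand-in (not FAB-MAP itself), the sketch below scores appearance overlap between two images with OpenCV ORB features and a ratio test; the score definition is an assumption for illustration.

```python
import cv2

def overlap_score(img_a, img_b, ratio=0.75):
    """Crude appearance-overlap score between two grayscale images;
    a simplified stand-in for FAB-MAP's probabilistic place model."""
    orb = cv2.ORB_create()
    _, des_a = orb.detectAndCompute(img_a, None)
    _, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(des_a, des_b, k=2)
    # Lowe's ratio test keeps only distinctive matches.
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good) / max(len(des_a), 1)
```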
Experimental Results: A Simple Scenario

Image Num.   P(Seen)   P(New)
1            0.991     0.001
2            0.085     0.910
3            0.922     0.002
4            0.991     0.001
5            0.911     0.131
Merging Maps
Merging Maps: Context
What is the transformation between two views?
Procedure for Merging Maps
1. Initial estimate using Horn's method (step 1 is sketched below).
2. Refine the estimate using direct image alignment.
3. Refine the final estimate using ICP.
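For step 1, the closed-form alignment of matched 3-D point pairs can be sketched as follows. Note this sketch uses the SVD-based solution of Arun et al., which returns the same rigid transform as Horn's quaternion-based method; the point correspondences P and Q (3xN arrays, columns matched) are assumed given.

```python
import numpy as np

def initial_alignment(P, Q):
    """Closed-form rigid alignment of matched 3-D point sets P -> Q
    (both 3xN, columns in correspondence)."""
    mu_p = P.mean(axis=1, keepdims=True)
    mu_q = Q.mean(axis=1, keepdims=True)
    H = (P - mu_p) @ (Q - mu_q).T              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Force det(R) = +1 so the result is a rotation, not a reflection.
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = mu_q - R @ mu_p
    return R, t                                 # so that Q ≈ R @ P + t
```

The subsequent refinements by direct image alignment and ICP are omitted here.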
Experimental Results: Input
[Figures: (a) first image, (b) second image, (c) depth map for first image, (d) depth map for second image]
Experimental Results: Output
[Figures: (e) before applying the transformation, (f) after applying the transformation]
Collaborative SLAM Framework
Overall Scheme
[Figure: overall scheme of our collaborative SLAM system]
Experimental Results
Case study: experimental settings
- Robotic platform: two TurtleBots.
- Sensor: uEye monocular camera with a wide-angle lens.
- Images: 640 × 480 pixels at 30 Hz.
- Environment: 20 m × 20 m indoor area (semi-industrial).
- Computation: Core 2 Duo laptop.
- Software: ROS, OpenCV, g2o library.
At Instance 1
[Figures: (a) robot R1, (b) robot R2]
At Instance 2
[Figures: (c) robot R1, (d) robot R2]
At Instance 3
[Figures: (e) robot R1, (f) robot R2]
Global Map
[Figures: (g) combined trajectory, (h) combined depth map]
Global map computed at the central server.
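Conceptually, once the merging transform (R, t) between two robots' maps is known, the server can express one robot's map points in the other's frame and concatenate them into the global map. A minimal sketch; the function name and the 3xN point-array representation are illustrative assumptions, not the paper's actual data structures.

```python
import numpy as np

def fuse_maps(points_r1, points_r2, R, t):
    """Express robot R2's map points (3xN) in robot R1's frame using the
    merging transform (R, t) and concatenate into one global point set."""
    return np.hstack([points_r1, R @ points_r2 + t.reshape(3, 1)])
```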
Summary
A collaborative visual SLAM framework with:
1. A monocular SLAM process for each robot.
2. Detection of scene overlap amongst several robots.
3. Global map computation fusing measurements from all robots.
4. A feedback mechanism for communicating global information back to each robot.
Scope for Future Work
1. Investigate the advantage of feedback in terms of localization accuracy and map quality.
2. Towards a decentralized system: direct robot-to-robot communication.
3. Adapting to a hybrid team of robots (e.g. UAVs and ground robots).
Thank You
Thank you very much for your attention!
Q&A