Indoor Localization Robot Based on Computer Vision Ubiquitous Computing Course 2015 Soliman Nasser
Outline ● Why Localization? ● Why Computer Vision? ● GPS ● Motion Capture System ● Triangulation ● PnP ● 3D Cameras ● SLAM ● Summary
Why Localization ? 1) History ...
Why Localization ? 2) Robot Navigation and Mapping
Why Localization ? 3) Guiding (Museums, ...) 4) Airports, Malls, Supermarkets, ... 5) and more ...
Why Computer Vision ? In previous lectures, we saw many techniques for Indoor Localization, none of them based on computer vision. Question: problem solved?
Why Computer Vision ? Question: problem solved? Answer:
Why Computer Vision ? Microsoft Indoor Localization Competition - IPSN 2015
Why Computer Vision ? Microsoft Indoor Localization Competition - IPSN 2015. The problem? Accuracy.
Why Not GPS ? * Works great outdoors. Example: DJI Phantom hovering outdoors on a windy day /home/soliman/UbiquitousComputing/phantom-outdoor.mp4
Why Not GPS ? ** Doesn't work indoors: there is no GPS signal. Example: DJI Phantom indoors /home/soliman/UbiquitousComputing/phantom-indoor.mov
Motion Capture System Industry manufacturers: Vicon, OptiTrack, ... Accuracy: millimeters. Cost: very, very high. IR spectrum → indoor only. Usually used in research labs.
Motion Capture System Very high FPS localization: 100-1000 FPS, 6DOF
Motion Capture System
Motion Capture System Example: autonomous micro-quadcopter flying indoors (lab), 100 FPS. Why do we need such a high FPS? /home/soliman/UbiquitousComputing/ladybird-neimanem.mp4 /home/soliman/UbiquitousComputing/ladybird-mute.mov /home/soliman/UbiquitousComputing/VCQ.mov
Triangulation Triangulation is the process of determining 3D world coordinates for an object given 2D views from multiple cameras.
Triangulation
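As a minimal sketch of the idea (the camera positions and bearing angles below are assumed for illustration, not taken from the slides), 2D triangulation intersects the bearing rays cast from two cameras at known positions:

```python
import math

def triangulate_2d(cam_a, theta_a, cam_b, theta_b):
    """Intersect two bearing rays (angles from the +x axis) cast from
    two cameras at known 2D positions; returns the target's (x, y)."""
    ta, tb = math.tan(theta_a), math.tan(theta_b)
    # Ray A: y = ya + ta*(x - xa);  Ray B: y = yb + tb*(x - xb)
    x = (cam_b[1] - cam_a[1] + ta * cam_a[0] - tb * cam_b[0]) / (ta - tb)
    y = cam_a[1] + ta * (x - cam_a[0])
    return (x, y)

# Two cameras 4 m apart on the x axis, both seeing a target at (2, 3):
# the two viewing angles alone pin down the 3rd coordinate.
target = triangulate_2d((0.0, 0.0), math.atan2(3, 2),
                        (4.0, 0.0), math.atan2(3, -2))
```

The full 3D case used by motion-capture systems follows the same principle, just with projection matrices and a least-squares intersection of more than two rays.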
Perspective-n-Point The aim of the Perspective-n-Point (PnP) problem is to determine the position and orientation of a camera given its intrinsic parameters and a set of n correspondences between 3D world points and their 2D image projections.
Perspective-n-Point
Perspective-n-Point How can we estimate 6DOF from a chessboard? 1) Chessboard corner detection (2D points) 2) Given 3D points - constant (the board geometry is known) 3) Finding the correspondences between the 2D and 3D points 4) Solve PnP
Perspective-n-Point Example: * AR Drone (as a camera) * Chessboard (known 3D points) Parrot AR Drone, flying autonomously and following the chessboard (about 10 FPS "only"). /home/soliman/UbiquitousComputing/ARDrone-PnP.mp4
3D Cameras Unlike normal 2D cameras, 3D cameras output an RGB-D matrix. RGB-D: besides the normal RGB output, depth information is also available.
3D Cameras Microsoft Kinect (Xbox 360): a known IR pattern is projected onto the scene, and triangulation with the IR camera is applied to estimate depth. Why not use two cameras? Works indoors only - why?
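Once the depth is known, each pixel can be turned into a 3D point in the camera frame by inverting the pinhole projection. A minimal sketch (the intrinsics `fx, fy, cx, cy` below are illustrative Kinect-like values, not calibrated ones):

```python
def depth_to_point(u, v, depth, fx, fy, cx, cy):
    """Back-project one depth pixel (u, v) with measured depth (meters)
    to a 3D point in the camera frame, using the pinhole model:
    x = (u - cx) * z / fx,  y = (v - cy) * z / fy."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Illustrative intrinsics (assumed values, would normally come from
# the device's calibration).
p = depth_to_point(400, 300, 2.0, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```

Applying this to every pixel of the depth image yields the point cloud that RGB-D SLAM systems consume.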
RGBDSLAM RGBDSLAM = RGB-D + SLAM. SLAM: Simultaneous Localization And Mapping
SLAM using Kinect TurtleBot - a robot with a Kinect. Localization and mapping using the Kinect. /home/soliman/UbiquitousComputing/turtlebot.mp4
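The "mapping" half of SLAM can be hinted at with an occupancy-grid update. This is a toy 1D sketch that assumes the robot pose is already known; real SLAM estimates the pose and the map jointly:

```python
def update_grid(grid, robot_cell, hit_cell):
    """One occupancy-grid update from a single range reading:
    cells along the sensor beam are marked free (0) and the cell at
    the measured range is marked occupied (1); untouched cells keep
    their previous value (-1 = unknown)."""
    step = 1 if hit_cell > robot_cell else -1
    for c in range(robot_cell, hit_cell, step):
        grid[c] = 0          # free space along the beam
    grid[hit_cell] = 1       # obstacle at the range reading
    return grid

# Robot in cell 2 measures an obstacle in cell 7 of a 10-cell world.
grid = update_grid([-1] * 10, robot_cell=2, hit_cell=7)
```

Fusing many such updates from a moving sensor (while simultaneously correcting the sensor's own pose) is what systems like RGBDSLAM on the TurtleBot do, in 3D and with probabilistic cell values instead of 0/1.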
Summary
|                  | Triangulation | PnP        | 3D cameras | GPS | Other sensors      |
| Accuracy         | Very High     | Med        | Very High  | High | Low (limited dist) |
| Cost             | $$$$          | $$         | $$         | $    | $                  |
| Time complexity  | Low           | Low+       | Med        | Med  | Low                |
| Indoor/Outdoor   | Usually In    | Usually In | In         | Out  | In/Out             |
| Usage complexity | High          | Med        | Med        | Low  | Depends            |
END Thank you for listening :)