3D Vision Torsten Sattler and Martin Oswald Spring 2018
3D Vision
• Understanding geometric relations
  • between images and the 3D world
  • between images
• Obtaining 3D information describing our 3D world
  • from images
  • from dedicated sensors
3D Vision
• Extremely important in robotics and AR / VR
  • Visual navigation
  • Sensing / mapping the environment
  • Obstacle detection, …
• Many further application areas
• A few examples …
Google Tango (officially discontinued, lives on as ARCore)
Google Tango
Image-Based Localization
Geo-Tagging Holiday Photos (Li et al. ECCV 2012)
Augmented Reality (Middelberg et al. ECCV 2014)
Large-Scale Structure-from-Motion Video credit: Johannes Schönberger
Virtual Tourism
3D Urban Modeling UNC/UKY UrbanScape project
3D Urban Modeling
Mobile Phone 3D Scanner
Mobile Phone 3D Scanner
Self-Driving Cars
Self-Driving Cars
Self-Driving Cars
Micro Aerial Vehicles
Mixed Reality Microsoft HoloLens
Virtual Reality
Raw Kinect Output: Color + Depth http://grouplab.cpsc.ucalgary.ca/cookbook/index.php/Technologies/Kinect
Human-Machine Interface
3D Video with Kinect
Autonomous Micro-Helicopter Navigation Use Kinect to map out obstacles and avoid collisions
Dynamic Reconstruction
Performance Capture
Performance Capture (Oswald et al. ECCV 14)
Performance Capture
Motion Capture
Interactive 3D Modeling (Sinha et al. Siggraph Asia 08) collaboration with Microsoft Research (and licensed to MS)
Scanning Industrial Sites
As-built 3D model of an off-shore oil platform
Scanning Cultural Heritage
Cultural Heritage
Stanford's Digital Michelangelo
• Digital archive
• Art-historic studies
Archaeology
Accuracy ~1/500, obtained from DV video (i.e., 140 kB JPEGs, 576×720)
Forensics • Crime scene recording and analysis
Forensics
Sports
Surgery
3D Vision Course Team
• Martin Oswald, CNB G103.2, martin.oswald@inf.ethz.ch
• Nikolay Savinov, CAB G 81.1, nikolay.savinov@inf.ethz.ch
• Peidong Liu, CAB G 84.2, peidong.liu@inf.ethz.ch
• Torsten Sattler, CNB 104, torsten.sattler@inf.ethz.ch
• Katarina Tóthová, CAB G 102.2, katarina.tothova@inf.ethz.ch
• Johannes Schönberger, CAB G 85.1, jsch@inf.ethz.ch
• Federico Camposeco, CAB G 86.3, federico.camposeco@inf.ethz.ch
Course Objectives • To understand the concepts that relate images to the 3D world and images to other images • Explore the state of the art in 3D vision • Implement a 3D vision system/algorithm
Learning Approach
• Introductory lectures: cover basic 3D vision concepts and approaches
• Further lectures:
  • Short introduction to the topic
  • Paper presentations (by you): seminal papers and the state of the art, related to your projects
• 3D vision project:
  • Choose a topic, define its scope (by week 4)
  • Implement an algorithm/system
  • Presentation/demo and paper-style report
Grade distribution
• Paper presentation & discussions: 25%
• 3D vision project & report: 75%
Materials
Slides and more: http://www.cvg.ethz.ch/teaching/3dvision/
Also check out the on-line "shape-from-video" tutorial:
• http://www.cs.unc.edu/~marc/tutorial.pdf
• http://www.cs.unc.edu/~marc/tutorial/
Textbooks:
• Hartley & Zisserman, Multiple View Geometry
• Szeliski, Computer Vision: Algorithms and Applications
Schedule
Feb 19  Introduction
Feb 26  Geometry, Camera Model, Calibration
Mar 5   Features, Tracking / Matching
Mar 12  Project Proposals by Students
Mar 19  Structure from Motion (SfM) + papers
Mar 26  Dense Correspondence (stereo / optical flow) + papers
Apr 2   Bundle Adjustment & SLAM + papers
Apr 9   Student Midterm Presentations
Apr 16  Easter break
Apr 23  Multi-View Stereo & Volumetric Modeling + papers
Apr 30  Whitsuntide
May 7   3D Modeling with Depth Sensors + papers
May 14  3D Scene Understanding + papers
May 21  4D Video & Dynamic Scenes + papers
May 28  Student Project Demo Day = Final Presentations
Fast Forward • Quick overview of what is coming…
Camera Models and Geometry
• Pinhole camera model
• Geometric transformations in 2D and 3D
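As a concrete illustration of the pinhole model, a minimal Python/NumPy sketch with hypothetical intrinsics and pose (all numbers are made up for the example):

```python
import numpy as np

# Hypothetical intrinsics: focal length 500 px, principal point (320, 240).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Camera pose (world -> camera): rotation R and translation t, identity here.
R = np.eye(3)
t = np.zeros((3, 1))

# Projection matrix P = K [R | t].
P = K @ np.hstack([R, t])

# Project a homogeneous 3D point into the image.
X = np.array([0.2, -0.1, 2.0, 1.0])   # point 2 m in front of the camera
x = P @ X
x = x[:2] / x[2]                      # perspective division -> pixel coords
print(x)                              # ~[370., 215.]
```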
Camera Calibration
• Given 2D/3D correspondences, compute the projection matrix
• Also: radial distortion (non-linear)
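A minimal sketch of the linear step (Direct Linear Transform), assuming noise-free correspondences; the function name is hypothetical, and radial distortion would require a subsequent non-linear refinement:

```python
import numpy as np

def dlt_projection_matrix(X, x):
    """Estimate the 3x4 projection matrix from n >= 6 3D-2D correspondences
    (X: n x 3 world points, x: n x 2 pixel coordinates) via the linear DLT."""
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        A.append([Xw, Yw, Zw, 1, 0, 0, 0, 0, -u*Xw, -u*Yw, -u*Zw, -u])
        A.append([0, 0, 0, 0, Xw, Yw, Zw, 1, -v*Xw, -v*Yw, -v*Zw, -v])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)
```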
Feature Tracking and Matching
• Harris corners, KLT features, SIFT features
• Key concepts: invariance of extraction and descriptors to viewpoint, exposure, and illumination changes
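A hedged example of SIFT matching with OpenCV (the image filenames are hypothetical); Lowe's ratio test rejects ambiguous correspondences:

```python
import cv2

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical files
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute descriptors in both views.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching with the ratio test: keep a match only if it is
# clearly better than the second-best candidate.
bf = cv2.BFMatcher()
matches = bf.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
```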
3D from Images: Triangulation
[Figure: 3D point M observed as m1 in camera C1 and m2 in camera C2; the viewing rays L1 and L2 intersect at M]
Requires: calibration and correspondences
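A minimal sketch of linear triangulation, assuming known projection matrices P1, P2 and a pixel correspondence (m1, m2); the function name is hypothetical:

```python
import numpy as np

def triangulate(P1, P2, m1, m2):
    """Recover the 3D point M from its projections m1, m2 (pixel coords)
    in two calibrated views with 3x4 projection matrices P1, P2."""
    # Each view contributes two linear constraints on the homogeneous M.
    A = np.vstack([
        m1[0] * P1[2] - P1[0],
        m1[1] * P1[2] - P1[1],
        m2[0] * P2[2] - P2[0],
        m2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    M = Vt[-1]
    return M[:3] / M[3]   # de-homogenize
```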
Epipolar Geometry
• Fundamental matrix
• Essential matrix
• Also: how to compute them robustly from images
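Continuing the matching sketch above (kp1, kp2, good come from there), a hedged example of robust fundamental-matrix estimation with OpenCV's RANSAC:

```python
import cv2
import numpy as np

# Matched pixel coordinates from the SIFT sketch above.
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# RANSAC rejects outlier matches while fitting F (x2^T F x1 = 0).
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)

# With known intrinsics K, the essential matrix follows as E = K^T F K.
```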
Structure from Motion
• Initialize motion: find P1, P2 compatible with F
• Initialize structure: triangulate, minimizing the reprojection error
• Extend motion: compute the pose of a new view through matches to structure seen in 2 or more previous views
• Extend structure: initialize new structure, refine existing structure
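A hedged two-view initialization sketch with OpenCV, assuming matched points pts1/pts2 and intrinsics K are given; the function name is hypothetical:

```python
import cv2
import numpy as np

def initialize_two_view(pts1, pts2, K):
    """Initialize motion and structure from two views (n x 2 float32 points)."""
    # Initialize motion: robust essential matrix, then decompose into R, t.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, cv2.RANSAC, 0.999, 1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)     # pose of view 2 w.r.t. view 1
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    # Initialize structure: triangulate the correspondences.
    X = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)  # 4 x n homogeneous
    return P1, P2, (X[:3] / X[3]).T                    # n x 3 points

# Each further view would then be registered from 2D-3D matches
# (e.g. cv2.solvePnPRansac), followed by triangulating new points and
# refining everything with bundle adjustment.
```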
Visual SLAM
• Visual Simultaneous Localization and Mapping (Clipp et al. ICCV'09)
Stereo and Rectification Warp images to simplify epipolar geometry Compute correspondences for all pixels
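On rectified images, correspondences can be searched along horizontal scanlines. A hedged disparity example with OpenCV's semi-global block matching (the filenames are hypothetical):

```python
import cv2

# Rectified grayscale pair: epipolar lines are horizontal after rectification.
left = cv2.imread("left_rect.png", cv2.IMREAD_GRAYSCALE)    # hypothetical files
right = cv2.imread("right_rect.png", cv2.IMREAD_GRAYSCALE)

# Semi-global matching; numDisparities must be a multiple of 16.
stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
disparity = stereo.compute(left, right).astype("float32") / 16.0  # fixed-point -> px

# Depth then follows from Z = f * baseline / disparity (for disparity > 0).
```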
Multi-View Stereo
Joint 3D Reconstruction and Class Segmentation (Häne et al., CVPR 2013)
• Reconstruction only: isotropic smoothness prior
• Joint reconstruction and segmentation
• Semantic classes (figure legend): ■ ground ■ building ■ vegetation ■ clutter/stuff
Structured Light
• The projector acts as an inverse camera
• Use specific patterns to obtain correspondences
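As one common choice of pattern (an assumption, not stated on the slide), Gray codes assign each projector column a unique bit sequence; a minimal decoding sketch in Python/NumPy:

```python
import numpy as np

def decode_gray_code(bit_images):
    """Decode a stack of binarized Gray-code pattern images (list of HxW
    0/1 integer arrays, most significant bit first) into per-pixel projector
    column indices, i.e. a dense camera -> projector correspondence map."""
    code = np.zeros(bit_images[0].shape, dtype=np.int64)
    prev = np.zeros_like(code)
    for bits in bit_images:
        bits = prev ^ bits          # Gray -> binary: b_i = b_{i-1} XOR g_i
        code = (code << 1) | bits
        prev = bits
    return code
```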
Papers and Discussion
• Will cover the recent state of the art
• Each student team will present a paper (5 min per team member), followed by a discussion
• An "adversary" team leads the discussion
• Papers will be related to projects/topics
• Papers will be distributed later (depending on the chosen projects)
Projects and Reports
• Project on a 3D vision-related topic:
  • Implement an algorithm / system
  • Evaluate it
  • Write a report about it
• 3 presentations / demos:
  • Project proposal presentation (week 4)
  • Midterm presentation (week 8)
  • Project demos (week 15)
• Ideally: groups of 3 students
Course project example: Build your own 3D scanner! Example: Bouguet ICCV’98
Project Topics
DeepVO: Towards End-to-End Visual Odometry with Deep Recurrent Convolutional Neural Networks
Goal: Implement a deep recurrent convolutional neural network for end-to-end visual odometry [1].
Description: Most existing VO algorithms follow a standard pipeline of feature extraction, feature matching, motion estimation, local optimization, etc. Although some of them demonstrate superior performance, they usually need to be carefully designed and specifically fine-tuned to work well in different environments. Some prior knowledge is also required to recover an absolute scale for monocular VO. This project implements a novel end-to-end framework for monocular VO using deep Recurrent Convolutional Neural Networks (RCNNs). Since it is trained and deployed end-to-end, it infers poses directly from a sequence of raw RGB images (a video) without adopting any module of the conventional VO pipeline. Based on the RCNNs, it not only automatically learns an effective feature representation for the VO problem through Convolutional Neural Networks, but also implicitly models sequential dynamics and relations using deep Recurrent Neural Networks. Extensive experiments on the KITTI VO dataset show performance competitive with state-of-the-art methods, verifying that end-to-end deep learning can be a viable complement to traditional VO systems.
[1] Wang et al., DeepVO: Towards End-to-End Visual Odometry with Deep Recurrent Convolutional Neural Networks, ICRA 2017
Contact: Peidong Liu, CNB D102, peidong.liu@inf.ethz.ch
Recommended: Python and prior knowledge in machine learning
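To make the CNN + RNN idea concrete, a minimal PyTorch sketch; this is an illustration of the general architecture, not the exact network of [1] (layer sizes and the pose parameterization are assumptions):

```python
import torch
import torch.nn as nn

class DeepVOSketch(nn.Module):
    """Sketch of the RCNN idea: a small CNN extracts features from stacked
    consecutive frames, an LSTM models sequential dynamics, and a linear
    head regresses a 6-DoF relative pose per time step."""
    def __init__(self, hidden=256):
        super().__init__()
        self.cnn = nn.Sequential(                  # input: two stacked RGB frames
            nn.Conv2d(6, 16, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.rnn = nn.LSTM(32 * 4 * 4, hidden, batch_first=True)
        self.pose = nn.Linear(hidden, 6)           # translation + rotation (e.g. Euler)

    def forward(self, frames):                     # frames: (B, T, 6, H, W)
        B, T = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).flatten(1).view(B, T, -1)
        out, _ = self.rnn(feats)                   # hidden state carries history
        return self.pose(out)                      # (B, T, 6) relative poses
```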