

  1. 3D Photography: Features & Correspondences Kevin Köser Spring 2013 http://www.cvg.ethz.ch/teaching/2013spring/3dphoto/

  2. Schedule (tentative) Feb 18 Introduction Feb 25 Geometry, Camera Model, Calibration Mar 4 Features, Tracking/Matching Mar 11 Project Proposals by Students Mar 18 Epipolar Geometry Mar 25 Stereo Vision Apr 1 Easter Apr 8 Structure from Motion / SLAM Apr 15 Project Updates (Sechseläuten in afternoon) Apr 22 Active Ranging, Structured Light Apr 29 Volumetric Modeling May 6 Mesh-based Modeling May 13 Shape-from-X May 20 Pentecost / White Monday May 27 Student Project Demo Day

  3. Today: Features & Correspondences. (Figure: a scene point M is observed as image points m_i and m_{i+1}; 2D-3D and 2D-2D correspondences.) Correspondences are at the heart of 3D reconstruction from images.

  4. Feature matching vs. tracking. Image-to-image correspondences are key to passive triangulation-based 3D reconstruction. Matching: extract features independently in both images, then match by comparing descriptors. Tracking: extract features in the first image, then try to find the same feature again in the next view. What is a good feature?

  5. Comparing image regions: compare intensities pixel-by-pixel between I(x,y) and I'(x,y). Dissimilarity measure: Sum of Squared Differences (SSD).
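
The formula on the slide is an image that did not survive the transcript; as a reconstruction, the standard definition it refers to, with W the comparison window around the point of interest:

    \mathrm{SSD}(I, I') \;=\; \sum_{(x,y)\,\in\, W} \bigl( I'(x,y) - I(x,y) \bigr)^2

Smaller values mean more similar patches.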

  6. Feature points. Required properties: well-localized; stable across views, i.e. "repeatable" (the same 3D point should be extracted as a feature for neighboring viewpoints).

  7. Feature point extraction: find points (local image patches) that differ as much as possible from all neighboring points. (Figure: homogeneous region, edge, corner.)

  8. Feature point extraction: approximate the SSD for a small displacement Δ, starting from the per-pixel image difference (squared difference per pixel) and summing it over a window.
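
The slide's formulas are likewise missing from the transcript; a reconstruction of the standard derivation they correspond to: linearizing the per-pixel difference for a small displacement Δ, the windowed SSD becomes a quadratic form,

    \mathrm{SSD}(\Delta) \;\approx\; \sum_{x \in W} \bigl( \nabla I(x)^\top \Delta \bigr)^2
    \;=\; \Delta^\top M\, \Delta,
    \qquad
    M \;=\; \sum_{x \in W} \begin{pmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{pmatrix}

where M is the structure tensor analyzed on the next slides.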

  9. Feature point extraction. Find points for which the smallest eigenvalue of M is maximal: both eigenvalues are small in homogeneous regions, only one is large along an edge, and both are large at a corner.

  10. Harris corner detector: use a small local window; maximize the "cornerness"; only use local maxima, with subpixel accuracy through second-order surface fitting; select the strongest features over the whole image and over each tile (e.g. 1000/image, 2/tile).
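
A minimal sketch of the Harris response in Python/NumPy (not the lecture's code; the window scale sigma and the constant k = 0.04 are common default choices):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def harris_response(img, sigma=1.5, k=0.04):
        """Cornerness sketch: structure tensor M over a Gaussian window,
        scored with det(M) - k * trace(M)^2 (large where both eigenvalues
        of M are large, i.e. at corners)."""
        img = img.astype(np.float64)
        gy, gx = np.gradient(img)
        Ixx = gaussian_filter(gx * gx, sigma)   # windowed entries of M
        Iyy = gaussian_filter(gy * gy, sigma)
        Ixy = gaussian_filter(gx * gy, sigma)
        det = Ixx * Iyy - Ixy ** 2
        trace = Ixx + Iyy
        return det - k * trace ** 2

    # Keep only local maxima of the response (e.g. via scipy.ndimage.maximum_filter)
    # and the strongest features per image and per tile, as on the slide.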

  11. Simple matching • for each corner in image 1 find the corner in image 2 that is most similar and vice-versa • Only compare geometrically compatible points • Keep mutual best matches What transformations does this work for?

  12. Comparing image regions: compare intensities pixel-by-pixel between I(x,y) and I'(x,y). Dissimilarity measure: Sum of Squared Differences (SSD).

  13. Comparing image regions: compare intensities pixel-by-pixel between I(x,y) and I'(x,y). Similarity measure: Zero-mean Normalized Cross Correlation (ZNCC).
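
A sketch of the two measures in Python/NumPy (illustrative helper names, not from the lecture):

    import numpy as np

    def ssd(p, q):
        """Sum of Squared Differences: dissimilarity of two equally sized patches."""
        d = p.astype(np.float64) - q.astype(np.float64)
        return float(np.sum(d * d))

    def zncc(p, q):
        """Zero-mean Normalized Cross Correlation: similarity in [-1, 1],
        invariant to an affine brightness change (offset and gain)."""
        p = p.astype(np.float64).ravel() - p.mean()
        q = q.astype(np.float64).ravel() - q.mean()
        denom = np.linalg.norm(p) * np.linalg.norm(q)
        return float(p @ q) / denom if denom > 0 else 0.0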

  14. Feature matching: example. (Figure: five labelled corners in each image; the table lists the ZNCC scores between the candidate patches of the two images, and the mutual maxima give the matches.)
        0.96  -0.40  -0.16  -0.39   0.19
       -0.05   0.75  -0.47   0.51   0.72
       -0.18  -0.39   0.73   0.15  -0.75
       -0.27   0.49   0.16   0.79   0.21
        0.08   0.50  -0.45   0.28   0.99
  What transformations does this work for? What level of transformation do we need?
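
The mutual-best-match rule of slide 11 applied to such a score table, reusing the hypothetical zncc helper from the sketch above:

    import numpy as np

    def mutual_best_matches(patches1, patches2):
        """Score every corner pair with ZNCC and keep only mutual best matches."""
        S = np.array([[zncc(p, q) for q in patches2] for p in patches1])
        best12 = S.argmax(axis=1)   # best partner in image 2 for each corner of image 1
        best21 = S.argmax(axis=0)   # and vice versa
        return [(i, j) for i, j in enumerate(best12) if best21[j] == i]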

  15. Wide baseline matching. Requirement: cope with larger variations between images. Geometric transformations: translation, rotation, scaling, foreshortening. Photometric changes: non-diffuse reflections, illumination.

  16. Invariant detectors Rotation invariant Scale invariant Affine invariant (approximately invariant w.r.t. perspective/viewpoint)

  17. 2D transformations of a local patch: from block matching (pure translation) up to affine models, e.g. MSER. In practice, more complex distortions are hardly observable in small patches!

  18. Example: find correspondences between these images using the MSER detector [Matas '02].

  19. MSER Features Local regions, not points !

  20. Extremal Regions: - Much Brighter than Surrounding - Use intensity threshold

  21. Extremal Regions: - OR: Much Darker than Surrounding - Use intensity threshold

  22. Regions: connected pixels at some threshold. Region size = number of pixels. Maximally stable: the region size stays (nearly) constant near some threshold.
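
As a practical aside, OpenCV ships an MSER implementation; a minimal sketch of extracting such regions (the file name and the default parameters are placeholders, not from the lecture):

    import cv2

    gray = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
    mser = cv2.MSER_create()
    regions, bboxes = mser.detectRegions(gray)   # each region: Nx2 array of (x, y) pixels
    print(len(regions), "maximally stable extremal regions")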

  23. A Sample Feature

  24. The region "T" is maximally stable with respect to its surrounding.

  25. Compute the "center of gravity" of the region; compute its scatter (PCA / ellipsoid).
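
A sketch of this step in Python/NumPy for a region given as an Nx2 array of (x, y) pixel coordinates (fit_ellipse is a hypothetical helper name):

    import numpy as np

    def fit_ellipse(pts):
        """Centre of gravity and 2x2 scatter of a pixel set; the eigenvectors /
        eigenvalues of the scatter give the ellipse axes (PCA)."""
        pts = np.asarray(pts, dtype=np.float64)
        centre = pts.mean(axis=0)
        C = np.cov(pts, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(C)   # axis lengths ~ sqrt(eigvals)
        return centre, C, eigvals, eigvecs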

  26. The ellipse abstracts from the pixels! Geometric representation: position/size/shape. Different images: different positions/sizes/shapes.

  27. Still: how to compare? Idea: normalize to a "default" position/size/shape, e.g. a circle of radius 16 pixels!

  28. Idea: normalize to a "default" position/size/shape! OK, but what about the 2D orientation?
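
A sketch of the normalization under these assumptions: the scatter matrix of the region is whitened so that its k-sigma ellipse maps onto a circle of the default radius (k = 3 and radius 16 are illustrative choices, not from the slides):

    import numpy as np
    import cv2

    def normalize_region(gray, pts, radius=16, k=3.0):
        """Warp an elliptical region (Nx2 (x, y) pixel coords) to a canonical
        circular patch; the 2D orientation is still undefined at this point."""
        pts = np.asarray(pts, dtype=np.float64)
        centre = pts.mean(axis=0)                    # centre of gravity
        C = np.cov(pts, rowvar=False)                # 2x2 scatter matrix
        w, V = np.linalg.eigh(C)
        C_inv_sqrt = V @ np.diag(1.0 / np.sqrt(np.maximum(w, 1e-9))) @ V.T
        A = (radius / k) * C_inv_sqrt                # k-sigma ellipse -> circle
        t = np.array([radius, radius]) - A @ centre  # put the centre mid-patch
        M = np.hstack([A, t[:, None]]).astype(np.float32)
        size = 2 * radius + 1
        return cv2.warpAffine(gray, M, (size, size))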

  29. Idea [Lowe '99]: run over all pixels and chart the local gradient orientation in a histogram; find the dominant orientation in the histogram; rotate the local patch into the dominant orientation.
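
A sketch of the dominant-orientation step (36 bins is an assumption; Lowe's original method additionally smooths the histogram and interpolates the peak, which is omitted here):

    import numpy as np

    def dominant_orientation(patch, n_bins=36):
        """Magnitude-weighted gradient-orientation histogram over the patch;
        the strongest bin gives the dominant orientation in radians."""
        gy, gx = np.gradient(patch.astype(np.float64))
        mag = np.hypot(gx, gy)
        ang = np.arctan2(gy, gx)   # -pi .. pi
        hist, edges = np.histogram(ang, bins=n_bins, range=(-np.pi, np.pi), weights=mag)
        peak = int(np.argmax(hist))
        return 0.5 * (edges[peak] + edges[peak + 1])   # centre of the strongest bin

The patch can then be rotated into this orientation, e.g. with cv2.getRotationMatrix2D and cv2.warpAffine.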

  30. Each normalized patch is obtained from a single image!

  31. Wrap-up MSER: detect sets of pixels brighter/darker than their surrounding; fit an elliptical shape to the pixel set; warp the image so that the ellipse becomes a circle; rotate to the dominant gradient direction (other constructions are possible as well). Affine normalization of the feature leads to similar patches in different views! Two MSER regions: are they in correspondence?

  32. Traditional matching approach: compare the regions (Sum of Squared Differences). Problems: small misalignment, brightness change. Can we use a more tolerant comparison?

  33. SIFT descriptor [Lowe '99]: to handle a brightness offset, use only gradients (orientation/magnitude). Partition the patch into sectors; for each sector, store the orientations of the gradients.

  34. Quantize the gradient orientation, e.g. in 45° steps, and build an orientation histogram per sector, with the gradient magnitude as weight (e.g. bin values 35, 12, 10, 25, …, 29). With m sectors and n orientations, construct a vector of m·n values.
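
A simplified sketch of such a descriptor, with 4x4 sectors and 8 orientation bins (128 values); plain hard binning here, the soft binning of the later slides is left out:

    import numpy as np

    def sector_descriptor(patch, grid=4, n_orient=8):
        """Per-sector, magnitude-weighted orientation histograms of the
        normalized patch, concatenated and contrast-normalized."""
        gy, gx = np.gradient(patch.astype(np.float64))
        mag = np.hypot(gx, gy)
        ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
        h, w = patch.shape
        desc = []
        for i in range(grid):
            for j in range(grid):
                ys = slice(i * h // grid, (i + 1) * h // grid)
                xs = slice(j * w // grid, (j + 1) * w // grid)
                hist, _ = np.histogram(ang[ys, xs], bins=n_orient,
                                       range=(0, 2 * np.pi), weights=mag[ys, xs])
                desc.append(hist)
        desc = np.concatenate(desc)
        return desc / (np.linalg.norm(desc) + 1e-12)   # suppress contrast changes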

  35. Memory comparison for an 11x11 patch: raw grey values = 121 bytes; the "SIFT descriptor" (summed gradient magnitudes for the different sectors and orientations) = 128 bytes. Normalizing the vector (e.g. 35, 12, 10, 25, …, 29 becomes 0.12, 0.04, 0.03, 0.08, …, 0.10) suppresses the effects of changing contrast.

  36. Wrap up: normalized patch comparison vs. descriptor. Usage of gradients: compensates an intensity offset. Subdivision into sectors / per-sector histograms: compensates small alignment errors. Normalization of the histogram vector: compensates image contrast. But most important: avoid sudden "descriptor jumps".

  37. Classical histogram (quantization 45°): 22° is quantized/rounded to 0°, 23° to 45°; small differences can lead to different bins! Feature position, size, shape and orientation are uncertain, and the image content is noisy; the descriptor MUST tolerate this (no sudden changes!). Solution: "soft binning"!

  38. Histogram (quantization 45°): measurements at 20° and 22° both fall into the first bin (closest bin, 0°). If the orientation is 3° different, all measurements go to the second bin instead: a sudden change in the histogram over the bins 0°, 45°, 90°, … from (2 0 0 0) to (0 2 0 0). Classical (closest-bin) assignment.

  39. Histogram (quantization 45°) with soft weights ("bin correctness"): the measurement at 20° contributes 0.56 to the 0° bin and 0.44 to the 45° bin, the one at 22° is split similarly, giving a histogram of about (1.07, 0.93, 0, 0, …) over the bins 0°, 45°, 90°, …. If the orientation is 3° different, the descriptor changes only gradually! Soft binning.
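
A sketch that reproduces the slide's numbers: each measurement is split between its two neighbouring bins according to its "bin correctness":

    import numpy as np

    def soft_bin(angles_deg, weights=None, bin_width=45.0, n_bins=8):
        """Soft binning: distribute each measurement over the two nearest bin
        centres, so the histogram changes gradually under small shifts."""
        angles = np.mod(np.asarray(angles_deg, dtype=np.float64), n_bins * bin_width)
        weights = np.ones_like(angles) if weights is None else np.asarray(weights, float)
        hist = np.zeros(n_bins)
        pos = angles / bin_width             # fractional bin position
        lo = np.floor(pos).astype(int) % n_bins
        hi = (lo + 1) % n_bins
        frac = pos - np.floor(pos)
        np.add.at(hist, lo, weights * (1.0 - frac))
        np.add.at(hist, hi, weights * frac)
        return hist

    print(np.round(soft_bin([20, 22]), 2))   # -> [1.07 0.93 0. 0. 0. 0. 0. 0.]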

  40. Wrap-up. Detector: find interesting regions (position/size/shape), assign the dominant gradient orientation, normalize the regions. Descriptor: compute a "signature" on the normalized region; it should behave smoothly in the presence of distortions (brightness changes, normalization inaccuracies).

  41. For each region: a 128-dim. descriptor. How to find correspondences?

  42. Matching scenario I: two images in a dense photo sequence. Think about the maximum movement d (e.g. 50 pixels), search in a window of +/- d around the old position, compare descriptors and choose the most similar one.
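
A sketch of this windowed search (a square +/- d window; pos1/pos2 are arrays of (x, y) positions and desc1/desc2 the corresponding descriptors, all hypothetical names):

    import numpy as np

    def match_in_window(pos1, desc1, pos2, desc2, d=50):
        """For each feature of the previous frame, consider only features of
        the new frame within +/- d pixels and pick the most similar descriptor."""
        matches = []
        for i, (p, f) in enumerate(zip(pos1, desc1)):
            near = np.where(np.max(np.abs(pos2 - p), axis=1) <= d)[0]
            if len(near):
                j = near[np.argmin(np.linalg.norm(desc2[near] - f, axis=1))]
                matches.append((i, int(j)))
        return matches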

  43. Matching Scenario II Two arbitrary images / Wide baseline - Compare every descriptor with every other (e.g. GPU) - OR: Find small set of matches, predict others - OR: Find nearest neighbor in descriptor space

  44. Searching descriptor space. Key ideas: each descriptor consists of 128 numbers, i.e. a vector in R^128; corresponding descriptors are not far apart! Arrange all descriptors of image 1 in a kd-tree (imagine an "octree", but with more dimensions); for each descriptor of image 2, find the (approximate) nearest neighbor in the tree.
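
A minimal sketch with SciPy's kd-tree (an exact nearest-neighbour query; in 128-D, practical systems use approximate search or reduced dimensions, as the next slide notes):

    import numpy as np
    from scipy.spatial import cKDTree

    def nn_matches(desc1, desc2):
        """Build a kd-tree over the descriptors of image 1 (N1 x 128 array)
        and query it with every descriptor of image 2."""
        tree = cKDTree(desc1)
        dist, idx = tree.query(desc2, k=1)   # idx[j]: nearest feature of image 1
        return idx, dist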

  45. Searching descriptor space: "learn" the important dimensions of the 128-D space for a given scene, e.g. with PCA or LDA; project the descriptors onto the important dimensions and use a kd-tree there.

  46. Matching Techniques Spatial Search Window: - Requires/exploits good prediction - Can avoid far away similar-looking features - Good for sequences Descriptor Space: - Initial tree setup - Fast lookup for huge amounts of features - More sophisticated outlier detection required - Good for asymmetric (offline/online) problems, registration, initialization, object recognition, wide baseline matching

  47. Correspondence verification. Features have only a very local view, so mismatches occur. How to detect them? Discard matches with low similarity; delete "non-distinctive" features (those with a close match in the same image or with a similar 2nd-best match); check for bi-directional consistency; geometric verification, e.g. RANSAC.
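
A sketch combining the distinctiveness test (distance ratio against the 2nd-best match) with the bi-directional check; the 0.8 ratio is a common choice, not from the slides, and geometric verification (e.g. RANSAC) would follow as a separate step:

    import numpy as np
    from scipy.spatial import cKDTree

    def verified_matches(desc1, desc2, ratio=0.8):
        """Keep a match only if it is clearly better than the 2nd-best
        candidate and if both directions agree on the pairing."""
        t1, t2 = cKDTree(desc1), cKDTree(desc2)
        d12, i12 = t2.query(desc1, k=2)   # best and 2nd-best neighbour in image 2
        _, i21 = t1.query(desc2, k=1)     # best neighbour back in image 1
        matches = []
        for i in range(len(desc1)):
            j = int(i12[i, 0])
            distinctive = d12[i, 0] < ratio * d12[i, 1]
            mutual = int(i21[j]) == i
            if distinctive and mutual:
                matches.append((i, j))
        return matches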

  48. Object detection / pose estimation using a single MSER. An affine feature has position (2x), shape (3x) and orientation (1x), i.e. 6 degrees of freedom, more than a simple 2D point! The "conjugate rotation" property determines the "projective" scale factor when going to …

  49. 2D transformations of a local patch. (Recap figure: from block matching, over similarity, e.g. SIFT, to affine, e.g. MSER.)
