3D Camera Calibration

  1. 3D Camera Calibration. Nichola Abdo and André Borgeat. March 5th, 2010. 1/16

  2. Motivation: 3D Imaging • Acquisition of 3D information is important for many computer vision and robotics applications. • Stereo vision cameras combine several images to obtain depth measurements; this is computationally demanding and prone to errors. • Laser scanners are expensive and require a mechanism for sweeping a laser beam across the entire scene. 2/16

  3. The Photonic Mixer Device (PMD) • PMD cameras operate by time of flight (TOF), providing both intensity and distance measurements. • Distance is related to the phase shift between the reference and received signals. • Measurements are obtained by all pixels simultaneously (no need for scanning), resulting in high frame rates. [Figure: PMD camera. Source: Ringbeck and Hagebeuker] 3/16
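
Distance follows from the phase shift by the standard continuous-wave TOF relation $d = c\,\Delta\varphi / (4\pi f_{mod})$. A minimal worked example; the 20 MHz modulation frequency is an assumed, typical value, not taken from the slides:

```python
# Worked example of the phase-to-distance relation for a continuous-wave
# TOF camera: d = c * phi / (4 * pi * f_mod). The 20 MHz modulation
# frequency is an assumption, not a value from the slides.
import math

C = 299_792_458.0  # speed of light in m/s

def tof_distance(phase_shift_rad: float, f_mod_hz: float = 20e6) -> float:
    """Distance in meters for a measured phase shift in [0, 2*pi)."""
    return C * phase_shift_rad / (4 * math.pi * f_mod_hz)

def unambiguous_range(f_mod_hz: float = 20e6) -> float:
    """Maximum distance before the phase wraps around (phi = 2*pi)."""
    return C / (2 * f_mod_hz)

# A phase shift of pi/2 at 20 MHz corresponds to ~1.87 m; the phase wraps
# at ~7.49 m, beyond which distances become ambiguous.
print(tof_distance(math.pi / 2), unambiguous_range())
```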

  4. Sources of Erroneous Depth Measurements Numerous sources affect the measurements: • Distance: the distance calculation assumes a perfect sinusoidal modulation, which is practically not given, leading to a distance-dependent oscillating error. • Location of the pixel in the image: the individual sensors need time to propagate the signal, depending on their location in the sensor array. • Intensity: the brighter a pixel, the more its measurement is shifted towards the camera. • Other sources: temperature, shutter time, multiple reflections, ambient light, edges, over-/undersaturation. 4/16

  5. Related Work • Lindner and Kolb: B-spline approximation of the distance-related and intensity-related errors (2006, 2007). • Kahlmann et al.: accounted for the distance- and shutter-time-related errors using a look-up-table approach (2006). • Fuchs and May: modeled the distance-related and pixel-location-related errors as polynomials (2007); calibration also involved computing the transformation between the camera and the end effector. 5/16

  6. Depth Calibration Setup • PMD camera attached to the end effector of a robotic arm. • The arm is moved to different configurations, and images of a wall are taken from different viewing angles. • The pose of the robot is accurately provided by the robot controller. • A laser range finder provides the location of the wall in the world coordinate system. [Figure: PMD-[vision] O3 camera attached to the robotic arm] 6/16

  7. 3D Projection of the Depth Images 3D coordinate $x_i^v$ of a pixel $v = (r, c)$ in the $i$-th image:
  $x_i^v = {}^w T_t \; {}^t T_s \; A(v,\, D_i^v - E_i^v)$
  • $D_i^v$: distance measurement
  • $E_i^v$: error in the distance measurement
  • $A$: projection, accounting for lens distortion etc.
  • ${}^t T_s$: sensor-to-TCP transformation (sensor origin relative to the robot arm)
  • ${}^w T_t$: end-effector pose (location of the robot arm)
  A code sketch of this projection chain follows below. 7/16
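
Below is a minimal sketch of the projection chain. The intrinsics, image center, and both transforms are hypothetical placeholders, and $A$ is reduced to a pinhole back-projection without the lens-distortion correction the slide mentions:

```python
# Minimal sketch of the projection chain x_i^v = wTt * tTs * A(v, D - E).
# The intrinsics (fx, fy, cx, cy) and both 4x4 transforms are hypothetical
# placeholders; A here omits the lens-distortion correction of the real model.
import numpy as np

def back_project(v, depth, fx=120.0, fy=120.0, cx=32.0, cy=25.0):
    """A(v, d): pixel v = (r, c) and corrected distance -> point in the sensor frame."""
    r, c = v
    # Unit ray through the pixel, scaled so its length equals the measured
    # distance (TOF measures along the ray, not along the optical axis).
    ray = np.array([(c - cx) / fx, (r - cy) / fy, 1.0])
    return depth * ray / np.linalg.norm(ray)

def to_world(x_sensor, tTs, wTt):
    """Apply the sensor-to-TCP and end-effector transforms (4x4 homogeneous)."""
    x_h = np.append(x_sensor, 1.0)
    return (wTt @ tTs @ x_h)[:3]

# With identity extrinsics, the central pixel at a corrected 1.5 m reading
# lands at (0, 0, 1.5) in the world frame.
tTs = wTt = np.eye(4)
D, E = 1.52, 0.02  # raw distance measurement and estimated error
print(to_world(back_project((25, 32), D - E), tTs, wTt))
```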

  8. Modeling the Depth Error Baseline: Fuchs and May (2007)
  • Used as the basis for our work
  • Similar setup (camera attached to a robotic arm)
  • Explicit representation of the different error sources
  • Error model consisting of:
  • Distance-related error $D$, modeled as a polynomial: $D(D_i^v) = \sum_{k=0}^{m} d_k\,[D_i^v]^k$
  • Pixel-location-related error $P$, modeled as a function linear in row and column: $P_1(v) = p_0 + p_1 r + p_2 c$
  ⇒ Baseline error model (sketched in code below): $E_0 = D(D_i^v) + P_1(v)$
  8/16
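
A minimal sketch of the baseline model; every coefficient below is a hypothetical stand-in for a parameter the calibration would estimate, and the polynomial degree is an arbitrary choice:

```python
# Sketch of the baseline error model E_0 = D(D_i^v) + P_1(v). All coefficient
# values are hypothetical stand-ins for parameters that the calibration would
# estimate; the polynomial degree m = 3 is likewise an arbitrary choice.

def distance_error(d, coeffs=(0.01, -0.005, 0.002, -0.0001)):
    """D(d) = sum_{k=0}^{m} d_k * d^k, a polynomial in the measured distance."""
    return sum(dk * d**k for k, dk in enumerate(coeffs))

def pixel_error_linear(v, p=(0.004, 0.0002, 0.0001)):
    """P_1(v) = p0 + p1 * r + p2 * c, linear in the pixel's row and column."""
    r, c = v
    return p[0] + p[1] * r + p[2] * c

def e0(d, v):
    """Baseline model: predicted depth error for raw distance d at pixel v."""
    return distance_error(d) + pixel_error_linear(v)

# Correcting a raw 2.0 m reading at pixel (10, 40):
print(2.0 - e0(2.0, (10, 40)))
```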

  9. Modeling the Depth Error Extensions (I): Pixel-Location-Related Error
  The linear function for the pixel-location-related error does not seem to fit the data.
  [Figures: error (in cm) vs. row, and error (in cm) vs. column]
  ⇒ Different error term: $P_2(v) = p_0 + p_1 (r - r_0)^2 + p_2 (c - c_0)^2$
  9/16

  10. Modeling the Depth Error Extensions (II): Intensity-Related Error
  The intensity-related error is visible with the naked eye, and the data is available. Two candidates:
  • Plain intensities: $I_1(I_i^v) = i_0 + i_1 I_i^v + i_2 [I_i^v]^2$
  • Normalized intensities $N_i^v = i_n \cdot I_i^v \cdot [D_i^v]^2$: $I_2(I_i^v, D_i^v) = i_0 + i_1 N_i^v + i_2 [N_i^v]^2$
  ⇒ Two extended error models (sketched in code below):
  • $E_A = D(D_i^v) + P_2(v) + I_1(I_i^v)$
  • $E_B = D(D_i^v) + P_2(v) + I_2(I_i^v, D_i^v)$
  10/16
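
A sketch of the two extended models, continuing the baseline sketch above (distance_error() is repeated so the block runs on its own); again, all coefficients, the center $(r_0, c_0)$, and $i_n$ are hypothetical placeholders, not calibrated values:

```python
# Sketch of the extended error models E_A and E_B. Coefficients, the image
# center (r0, c0), and the scale i_n are hypothetical stand-ins for values
# the calibration would estimate.

def distance_error(d, coeffs=(0.01, -0.005, 0.002, -0.0001)):
    """D(d): distance-error polynomial from the baseline sketch."""
    return sum(dk * d**k for k, dk in enumerate(coeffs))

def pixel_error_quadratic(v, p=(0.004, 1e-5, 1e-5), r0=25.0, c0=32.0):
    """P_2(v) = p0 + p1*(r - r0)^2 + p2*(c - c0)^2, quadratic about the center."""
    r, c = v
    return p[0] + p[1] * (r - r0) ** 2 + p[2] * (c - c0) ** 2

def intensity_error_plain(i, c=(0.003, -1e-6, 1e-12)):
    """I_1(I) = i0 + i1*I + i2*I^2 on the raw intensity."""
    return c[0] + c[1] * i + c[2] * i**2

def intensity_error_normalized(i, d, i_n=1.0):
    """I_2(I, D): the same quadratic applied to N = i_n * I * D^2."""
    return intensity_error_plain(i_n * i * d**2)

def e_a(d, v, i):
    return distance_error(d) + pixel_error_quadratic(v) + intensity_error_plain(i)

def e_b(d, v, i):
    return distance_error(d) + pixel_error_quadratic(v) + intensity_error_normalized(i, d)

print(e_a(2.0, (10, 40), 500.0), e_b(2.0, (10, 40), 500.0))
```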

  11. Calibration
  • Find a parameterization $a^\star$ of $E$ and ${}^t T_s$ that minimizes the error.
  • Assuming a fixed, planar wall with known location $(n, d)$, the error of the distance measurement of a pixel is given by $f_i^v(a) = n^T x_i^v + d$ ($x_i^v$: 3D projection of the pixel).
  • Using a number of images $i$ taken from different locations, the calibration task can be formulated as $a^\star = \operatorname{argmin}_a \sum_i \sum_v f_i^v(a)^2$.
  • This can be solved using techniques for nonlinear least-squares estimation (e.g. Levenberg-Marquardt); see the sketch below.
  11/16
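
A minimal sketch of this least-squares formulation with SciPy's Levenberg-Marquardt solver, on synthetic head-on measurements of a wall at known distance; the full problem additionally optimizes the sensor-to-TCP transform and uses the complete per-pixel projection onto the plane $(n, d)$:

```python
# Minimal sketch of the calibration as nonlinear least squares, solved with
# Levenberg-Marquardt via SciPy. The data is synthetic: the camera looks
# perpendicularly at a wall whose distance is known per image, and only a
# degree-2 distance-error polynomial is fitted.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
wall = rng.uniform(1.0, 4.0, size=300)             # known wall distances (m)
bias = 0.02 - 0.01 * wall + 0.004 * wall**2        # simulated depth error E
raw = wall + bias + rng.normal(0.0, 0.002, 300)    # noisy raw measurements D

def residuals(a):
    """f(a): signed distance of each corrected point to the wall plane."""
    corrected = raw - (a[0] + a[1] * raw + a[2] * raw**2)
    return corrected - wall  # equals n^T x + d for this head-on geometry

fit = least_squares(residuals, x0=np.zeros(3), method="lm")
print(fit.x)  # should roughly recover (0.02, -0.01, 0.004)
```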

  12. Experiments
  Setup
  • 42 images from different locations
  • 20 images of the plain white wall
  • 22 images of a checkerboard pattern
  • Training and testing done using 6-fold cross-validation (sketched below)
  Results
  • All techniques significantly reduced the error
  • Both our techniques outperformed the baseline
  • No significant difference between our two candidates
  Average error in millimeters:

           0        E_0      E_A      E_B
  Mean     26.54    15.14    10.90    11.68
  SE       ±6.48    ±4.26    ±4.13    ±3.44

  12/16
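
A sketch of the evaluation protocol under stated assumptions: calibrate() and evaluate() are hypothetical stand-ins for the fitting and test steps, and since the slide does not say whether the standard error was computed per fold or per image, per-image is assumed here:

```python
# Sketch of the 6-fold cross-validation behind the table above: calibrate on
# five folds, measure residual depth errors on the held-out fold, and report
# the mean and standard error over all held-out results. calibrate() and
# evaluate() are hypothetical placeholders, not the authors' code.
import numpy as np

rng = np.random.default_rng(1)

def calibrate(train_images):
    return None  # placeholder: would run the least-squares fit sketched above

def evaluate(image, params):
    return rng.uniform(5.0, 20.0)  # placeholder residual error in mm

def cross_validate(images, k=6, seed=0):
    idx = np.random.default_rng(seed).permutation(len(images))
    folds = np.array_split(idx, k)
    errors = []
    for i in range(k):
        train = [images[j] for f in folds[:i] + folds[i + 1:] for j in f]
        params = calibrate(train)
        errors += [evaluate(images[j], params) for j in folds[i]]
    errors = np.asarray(errors)
    return errors.mean(), errors.std(ddof=1) / np.sqrt(len(errors))

print(cross_validate(list(range(42))))  # 42 images, as in the experiments
```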

  13. Qualitative Results: Sensor-to-Tool-Center-Point Transformation [Figures: the uncorrected plane, and the plane after applying the sensor-to-TCP transformation] 13/16

  14. Qualitative Results: Intensity Correction [Figures: error without intensity correction, and error with intensity correction] 14/16

  15. Conclusions • Our extended model outperforms the baseline model. • Accounting for the intensity-related error clearly improves the accuracy of the distance measurements. • There are other approaches that account for the intensity-related error which we did not have time to compare against (e.g. Lindner and Kolb (2007)). • Room for improvement: unexplained high variance in the individual results and some strange measurements; there are probably other important error sources not accounted for, but one has to be cautious when extending the model, as a larger number of parameters could lead to overfitting. 15/16

  16. Questions 16/16
