Using a Raster Display Device for Photometric Stereo
Nathan Funk & Yee-Hong Yang, CRV 2007


  1. DEPARTMENT OF COMPUTING SCIENCE. Using a Raster Display Device for Photometric Stereo. Nathan Funk & Yee-Hong Yang, CRV 2007, May 30, 2007

  2. Overview: 1. Background, 2. Model, 3. Experiments, 4. Conclusions, 5. Questions

  3. 1. Background • Knowledge about the lighting simplifies shape from shading • Controlling the lighting helps • Controlling lighting is not easy

  4. Motivation • Idea: use a display to control the lighting! • Use it to perform photometric stereo

  5. Related Work • Zongker (1999) and Schechner (2003): use displays as a light source • Clark (CRV 2006), “Photometric Stereo with Nearby Planar Distributed Illuminants”: derives an equivalent light source for an image, but only offers a theoretical analysis

  6. Photometric Stereo • Proposed by Woodham (1978) • Two steps: (1) photometric stereo: input images → surface normals; (2) depth from surface normals: surface normals → surface depth

  7. Photometric Stereo • For a single Lambertian surface point, the radiance is R = ρ max(0, L · n̂), where ρ is the albedo, L the light source vector, and n̂ the unit surface normal • Simplified: R = L · N, where N = ρ n̂ is the scaled normal • Known: R, L; unknown: N • N cannot be uniquely determined from a single R

  8. Photometric Stereo • Stacking the measurements from n light sources gives a linear system: [R_1, …, R_n]^T = L N, where L is the matrix whose rows are the light vectors (L_ix, L_iy, L_iz) and N = (N_x, N_y, N_z)^T is the scaled normal • Need 3 or more radiance values for each normal
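Not part of the slides: a minimal numpy sketch of this per-pixel solve, assuming the calibrated light vectors are stacked in an (n, 3) matrix L and the observed radiances for one pixel in a vector R (names are hypothetical). With three or more measurements, least squares recovers the scaled normal, whose length is the albedo.

```python
import numpy as np

def solve_normal(L, R):
    """Recover the scaled normal N = albedo * n_hat for one pixel.

    L : (n, 3) array of light source vectors (n >= 3), one row per image
    R : (n,)  array of observed radiance values for that pixel
    """
    # Least-squares solution of R = L N (exact when n == 3 and L is invertible)
    N, *_ = np.linalg.lstsq(L, R, rcond=None)
    albedo = np.linalg.norm(N)                 # rho = |N|
    n_hat = N / albedo if albedo > 0 else N    # unit surface normal
    return albedo, n_hat

# Small synthetic check with three hypothetical light directions
L = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.7],
              [0.0, 0.7, 0.7]])
R = L @ np.array([0.1, 0.2, 0.9])              # radiances for a known scaled normal
print(solve_normal(L, R))
```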

  9. 2. Model (setup diagram: screen, scene, camera)

  10. 2. Model • Need: R, the radiance values (inferred from the images), and L, the light vectors (incl. intensity) • Challenges: screens are not distant light sources; a screen’s light can be directional (LCDs); the inverse square law is significant

  11. 2. Model • Distant illumination is not achievable; instead, use 50×50 squares of pixels as sources • 6 sources → 6 images
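As an illustration only (the screen resolution and square positions below are assumptions, not the paper's layout), the display patterns could be generated along these lines:

```python
import numpy as np

def make_patterns(width=1280, height=1024, size=50, n_sources=6):
    """Generate one full-screen image per light source: a single bright
    size x size square on a black background. Square positions here are
    illustrative, spread along the top edge of the screen."""
    xs = np.linspace(0, width - size, n_sources).astype(int)
    patterns = []
    for x in xs:
        img = np.zeros((height, width), dtype=np.uint8)
        img[0:size, x:x + size] = 255   # the lit square acting as one source
        patterns.append(img)
    return patterns  # display each pattern in turn and capture one photo per pattern
```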

  12. Screen Position Calibration (diagram: display showing a calibration pattern, a mirror carrying its own calibration pattern, and the camera) • Alternative method: Francken (CRV ‘07)

  13. Lighting Model (display cross-section) • (1) Unattenuated radiance function R_U • (2) Screen directionality attenuation: R_P(θ, φ) = R_U · f(θ, φ) • (3) Inverse square law: P = I_S / r²
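A sketch of how these components might combine into an effective light vector from one on-screen source to one scene point. Every name here is hypothetical, and the cosine falloff merely stands in for the calibrated directionality function f(θ, φ):

```python
import numpy as np

def light_vector(src_pos, src_normal, point, intensity, f=lambda theta: np.cos(theta)):
    """Effective light vector from one on-screen source to a scene point.

    src_pos, src_normal, point : 3-vectors (square centre, screen normal, surface point)
    intensity : unattenuated source intensity I_S
    f         : screen directionality function (a cosine falloff is assumed here;
                the paper calibrates this from measurements)
    """
    d = point - src_pos
    r = np.linalg.norm(d)                       # distance from source to surface point
    direction = d / r                           # unit vector, source -> point
    n = src_normal / np.linalg.norm(src_normal)
    theta = np.arccos(np.clip(np.dot(direction, n), -1.0, 1.0))
    power = intensity * f(theta) / r**2         # directionality attenuation + inverse square law
    return power * (-direction)                 # points from the surface back toward the source
```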

  14. Depth from Surface Normals (diagram: scene surface with normals n_1, n_2 and depths z_1, z_2, viewed by the camera) • The constraints form a large homogeneous linear system H z = 0, where H is the coefficient matrix and z the vector of depth values • Solve the system with sparse matrix routines
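The slides do not give the constraint equations. The sketch below integrates a normal map using orthographic finite-difference constraints (an assumption; the paper builds a homogeneous perspective system H z = 0) and solves the resulting sparse least-squares problem with SciPy:

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def depth_from_normals(normals):
    """Integrate an (H, W, 3) normal map into an (H, W) depth map.

    Orthographic finite-difference constraints (a simplification of the
    paper's homogeneous perspective system H z = 0):
        nz * (z[y, x+1] - z[y, x]) = -nx
        nz * (z[y+1, x] - z[y, x]) = -ny
    """
    H, W, _ = normals.shape
    idx = lambda y, x: y * W + x           # flatten pixel coordinates
    A = lil_matrix((2 * H * W, H * W))     # sparse coefficient matrix
    b, r = [], 0
    for y in range(H):
        for x in range(W):
            nx, ny, nz = normals[y, x]
            if x + 1 < W:                  # horizontal constraint
                A[r, idx(y, x + 1)] = nz
                A[r, idx(y, x)] = -nz
                b.append(-nx); r += 1
            if y + 1 < H:                  # vertical constraint
                A[r, idx(y + 1, x)] = nz
                A[r, idx(y, x)] = -nz
                b.append(-ny); r += 1
    z = lsqr(A.tocsr()[:r], np.array(b))[0]   # sparse least-squares solve
    return z.reshape(H, W)
```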

  15. 3. Experiments (setup: enclosure, LCD screen, camera, scene)

  16. 3. Experiments • Quantitative evaluation on synthetic and real images • Objects: a sphere (ping-pong ball) and the Stanford Bunny (printed on a 3D printer)

  17. Captured Images (figure: the images displayed on the screen and the processed captured images)

  18. Results on Synthetic Images (figure)

  19. Results on Real Images (figure)

  20. Results on Real Images (figure)

  21. Results on Real Images (figure)

  22. Results on Real Images (figure)

  23. Future Work • Use more advanced methods: allow surface specularity, increase precision • Examine different displays and cameras to reduce image noise • Integration with other methods, e.g. combine with multiple-view vision

  24. Potential • Shape from shading: capture objects and faces in front of a home computer; assist in face recognition • Lighting estimation, image relighting, image-based rendering, surface reflectance measurement, …

  25. Acknowledgements (DEPARTMENT OF COMPUTING SCIENCE)

  26. Questions? Slides available at: singularsys.com/research

  27. Screen Directionality Calibration (figure)

  28. Results on Synthetic Images (figures: sphere and Stanford Bunny)
