
I-see-3D! An interactive and immersive system that dynamically adapts 2D projections to the location of a user's eyes

S. Piérard, V. Pierlot, A. Lejeune and M. Van Droogenbroeck, INTELSIG, Montefiore Institute, University of Liège, Belgium


  1. I-see-3D! An interactive and immersive system that dynamically adapts 2D projections to the location of a user's eyes. S. Piérard, V. Pierlot, A. Lejeune and M. Van Droogenbroeck, INTELSIG, Montefiore Institute, University of Liège, Belgium. IC3D, December 5th, 2012. 1 / 18

  2. Outline: 1 Introduction, 2 Method, 3 Results, 4 Conclusion. 2 / 18

  3. Outline: 1 Introduction, 2 Method, 3 Results, 4 Conclusion. 3 / 18

  4. Trompe-l'œils give the illusion of 3D at one viewpoint. This is the work of the artist Julian Beever. 4 / 18

  5. Our goal
Our non-intrusive system projects a large trompe-l'œil on the floor, with head-coupled perspective. It gives the illusion of a 3D immersive and interactive environment with 2D projectors.
◮ The user does not need to wear glasses, nor to watch a screen
◮ The user can move freely within the virtual environment
◮ Several range sensors are used (laser scanners, Kinects)
◮ Multiple projectors can be used to cover the whole area
5 / 18

  6. Some cues from which we can infer 3D
3D = { scene structure, depth, thickness, occlusions, ... }
Cues:
◮ perspective
◮ lighting (reflections, shadows, ...)
◮ motion of the observer and objects
◮ knowledge (familiar objects: geometry, typical size)
◮ stereoscopy
◮ ...
6 / 18

  7. Some previous systems with head-coupled perspective
The rendered images depend on the user's viewpoint (Cruz-Neira et al., 1992; Lee, 2008; Francone et al., 2011). In those works, the surfaces (screens or walls) are rectangular. There is no deformation between the computed images and those on the surfaces. A surface is a "window" on the virtual world. The projection is perspective.
In our system, there is a deformation between the computed image and the one on the floor. We take into account the parameters of the projectors. The projection is not perspective.
7 / 18

  8. Outline: 1 Introduction, 2 Method, 3 Results, 4 Conclusion. 8 / 18

  9. We use multiple sensors to estimate the head position
The selected non-intrusive sensors behave perfectly in darkness:
◮ low-cost range cameras (Kinects) are placed around the scene
◮ several laser range scanners observe a horizontal plane located 15 cm above the floor
9 / 18

  10. The non-intrusive head localization procedure
[Pipeline diagram: each Kinect and each laser scanner feeds a pose recovery step that yields one or several head location hypotheses; a data fusion and analysis step combines them; a validation gate rejects outliers; a Kalman filter turns the retained position estimates and their uncertainties into a single head position estimate.]
The filter has been optimized in order to minimize the variance of its output while keeping the bias (delay) in an acceptable range. We use the constant white noise acceleration (CWNA) model.
10 / 18
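The filtering stage above can be sketched as a one-dimensional Kalman filter under the CWNA model (the state holds position and velocity, and white-noise acceleration serves as process noise). The noise parameters q and r below are illustrative placeholders, not the tuned values of the actual system.

```python
# Minimal 1-D Kalman filter under the constant white noise acceleration
# (CWNA) model: state = [position, velocity].  The process noise
# intensity q and measurement variance r are illustrative values only.

def kalman_cwna(measurements, dt=1.0, q=0.1, r=0.5):
    x = [measurements[0], 0.0]           # state estimate [position, velocity]
    P = [[r, 0.0], [0.0, 1.0]]           # state covariance
    out = []
    for z in measurements:
        # --- predict: x <- F x, P <- F P F^T + Q (CWNA process noise) ---
        x = [x[0] + dt * x[1], x[1]]
        q11 = q * dt**3 / 3; q12 = q * dt**2 / 2; q22 = q * dt
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q11,
              P[0][1] + dt * P[1][1] + q12],
             [P[1][0] + dt * P[1][1] + q12,
              P[1][1] + q22]]
        # --- update with a position measurement z (H = [1, 0]) ---
        S = P[0][0] + r                  # innovation variance
        K = [P[0][0] / S, P[1][0] / S]   # Kalman gain
        y = z - x[0]                     # innovation
        x = [x[0] + K[0] * y, x[1] + K[1] * y]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
        out.append(x[0])
    return out
```

On a head moving at constant speed, the filtered positions converge toward the true trajectory while the velocity component of the state absorbs the motion.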

  11. The head-coupled projection
[Figure: the projector image plane (u, v), the user's head at (x_h, y_h, z_h), a virtual point (x, y, z), its trace (x_f, y_f, z_f = 0) on the floor, and the upper and lower clipping planes.]
The rendering matrix combines the projector calibration (entries m_{i,j}) with the head position (x_h, y_h, z_h) provided by the head localization system:

\begin{pmatrix} s s' u \\ s s' v \\ s s' w \\ s s' \end{pmatrix} =
\begin{pmatrix}
m_{1,1} & m_{1,2} & 0 & m_{1,4} \\
m_{2,1} & m_{2,2} & 0 & m_{2,4} \\
0 & 0 & 1 & 0 \\
m_{3,1} & m_{3,2} & 0 & m_{3,4}
\end{pmatrix}
\begin{pmatrix}
z_h & 0 & -x_h & 0 \\
0 & z_h & -y_h & 0 \\
0 & 0 & \frac{z_h}{\min(z) - \max(z)} & \frac{-z_h \max(z)}{\min(z) - \max(z)} \\
0 & 0 & -1 & z_h
\end{pmatrix}
\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}

11 / 18
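The geometry behind this mapping can be sketched in two steps, ignoring clipping: a perspective projection of the virtual point onto the floor as seen from the head, followed by a floor-to-pixels homography coming from the projector calibration. The function names and the identity homography below are illustrative assumptions, not the paper's notation or a real calibration.

```python
# Sketch of the two-step geometry behind the head-coupled projection:
# 1) project a virtual point onto the floor (z = 0) along the ray from
#    the head at (xh, yh, zh), so the drawing looks correct from there;
# 2) a 3x3 homography H maps floor coordinates to projector pixels.
# H below defaults to the identity, an illustrative placeholder, not a
# real projector calibration.

def project_to_floor(head, point):
    xh, yh, zh = head
    x, y, z = point
    t = zh / (zh - z)          # ray parameter where the ray hits z = 0
    xf = xh + t * (x - xh)
    yf = yh + t * (y - yh)
    return (xf, yf)

def floor_to_pixels(floor_pt, H):
    xf, yf = floor_pt
    u = H[0][0] * xf + H[0][1] * yf + H[0][2]
    v = H[1][0] * xf + H[1][1] * yf + H[1][2]
    w = H[2][0] * xf + H[2][1] * yf + H[2][2]
    return (u / w, v / w)      # homogeneous division
```

A point already lying on the floor is its own trace, whatever the head position; only points above the floor shift with the viewpoint, which is what produces the trompe-l'œil effect.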

  12. Implementation
◮ We use the OpenGL and OpenNI libraries (OpenNI provides the 3D pose recovery).
◮ Our system can be implemented without any shader.
◮ We take care of the clipping planes (limited viewing volume).
◮ The method is accurate to the pixel (the images are rendered directly in the projector's image plane).
◮ The virtual lights are placed at the real lights' locations.
◮ The shadows are rendered using Carmack's reverse algorithm.
12 / 18
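The limited viewing volume mentioned above can be sketched as a simple clipping test: a virtual point is drawable only between the floor (lower clipping plane) and an upper clipping plane kept below the user's head. The 0.2 m margin is an illustrative assumption, not the system's actual setting.

```python
# Points outside the limited viewing volume must be clipped: below the
# floor (lower clipping plane, z = 0) or above the upper clipping
# plane, which is kept below the user's head so that the projection
# stays well defined.  The 0.2 m margin is an illustrative value.

def inside_viewing_volume(z, z_head, upper_margin=0.2):
    z_upper = z_head - upper_margin   # upper clipping plane, below the head
    return 0.0 <= z <= z_upper
```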

  13. Outline: 1 Introduction, 2 Method, 3 Results, 4 Conclusion. 13 / 18

  14. A first video taken from the user’s viewpoint 14 / 18

  15. Another video taken from an external viewpoint 15 / 18

  16. Outline: 1 Introduction, 2 Method, 3 Results, 4 Conclusion. 16 / 18

  17. Conclusions
Our system gives the illusion of 3D to a single user. A virtual scene is projected all around the user on the floor with head-coupled perspective. The user can walk freely in the virtual world and interact with it directly.
◮ Multiple sensors are used in order to recover the head position. The estimation is provided by a Kalman filter.
◮ The selected sensors behave perfectly in total darkness, and the user does not need to wear anything.
◮ The whole system (sensors and projectors) can be calibrated in less than 10 minutes.
◮ The projection is neither orthographic nor perspective. The rendering method is accurate to the pixel: the images are rendered directly in the projector's image plane.
17 / 18

  18. How to cite this work
S. Piérard, V. Pierlot, A. Lejeune, and M. Van Droogenbroeck. I-see-3D! An interactive and immersive system that dynamically adapts 2D projections to the location of a user's eyes. In International Conference on 3D Imaging (IC3D), Liège, Belgium, December 2012. 18 / 18
