CS262 – Computer Vision Lect 4 - Image Formation John Magee 25 January, 2017 Slides courtesy of Diane H. Theriault
Question of the Day: • Why is Computer Vision hard?
All this effort to make sure the LIGHTING is good for a movie. Why is more light needed for a good quality movie? What factors affect how much light reaches the film or image sensor? Why does your cell phone take such lousy pictures at a party? How does this all affect Computer Vision?
How are images formed?
1. Light is emitted from a light source
2. Light hits a surface
3. Light interacts with the surface
4. Reflected light enters the camera aperture
5. The camera sensor interprets the light
Szeliski Ch. 2.2 (don't worry about all the details of the math)
Shapiro & Stockman Ch. 6, Ch. 2 (https://courses.cs.washington.edu/courses/cse576/99sp/book.html)
Light is emitted
• A point light source radiates (emits) light uniformly in all directions
• Properties of light:
– Color spectrum (wavelength distribution)
– Intensity (Watts / (Area * Solid Angle))
• Note: a solid angle is like a cone
• Note: "area" light sources, like fluorescent lights, are a little different
Light hits a surface
Surface orientation is very important for determining the amount of incident light!
• The amount of light that falls on a surface (irradiance) depends on:
– the size of the surface
– the solid angle of light subtended by the surface, which depends on the distance to the light (attenuation) and the orientation of the surface (foreshortening)
• Examples:
– Distance: 2.5 m, Orientation: 0 degrees → Solid angle: 22.6 degrees
– Distance: 5 m, Orientation: 0 degrees → Solid angle: 16.4 degrees
– Distance: ≈ 2.5 m, Orientation: 45 degrees → Solid angle: 11.4 degrees
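The two effects above can be sketched numerically. This is a toy model, not the slide's solid-angle computation: it assumes a point source with inverse-square attenuation and cosine foreshortening, which is why the ranking of the three cases matches the slide even though the numbers are in different units.

```python
import math

def relative_irradiance(distance, angle_deg, ref_distance=1.0):
    """Relative light power per unit area reaching a small surface patch.

    Combines the two effects from the slide:
    - attenuation: irradiance from a point source falls off as 1/distance^2
    - foreshortening: a patch tilted by angle_deg away from the light
      intercepts light in proportion to cos(angle)
    """
    return (ref_distance / distance) ** 2 * math.cos(math.radians(angle_deg))

# The slide's three cases, qualitatively:
print(relative_irradiance(2.5, 0))   # closest, facing the light -> largest
print(relative_irradiance(5.0, 0))   # twice as far -> 1/4 as much light
print(relative_irradiance(2.5, 45))  # same distance, tilted -> cos(45 deg) smaller
```

Doubling the distance cuts the irradiance to a quarter, and tilting the patch scales it by the cosine of the tilt, so the 0-degree, 2.5 m case receives the most light, as on the slide.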
Light interacts with a surface
Some light is absorbed, depending on the surface color. What happens to the rest?
• The orientation of a surface is defined by its "normal vector", which sticks straight up out of the surface.
• The bidirectional reflectance distribution function ("BRDF") expresses the amount, direction, and color spectrum of reflected light as a function of the amount, direction, and color spectrum of incoming light.
• A simplified BRDF can be modeled with two components:
– "Lambertian", "flat", or "matte" component: light radiated equally in all directions
– "Specular", "shiny", or "highlight" component: radiated light is reflected across the normal from the incoming light
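The two-component BRDF above can be sketched in a few lines. This is a Phong-style simplification under assumed parameters (`albedo`, `k_spec`, `shininess` are illustrative, not from the slides):

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def shade(normal, light_dir, view_dir, albedo=0.8, k_spec=0.5, shininess=20):
    """Toy two-component BRDF: Lambertian + specular.

    - Lambertian term: proportional to cos(angle between normal and light),
      and radiated equally in all directions (independent of view_dir).
    - Specular term: strongest when the view direction lines up with the
      light direction reflected across the normal.
    """
    n, l, v = normalize(normal), normalize(light_dir), normalize(view_dir)
    diffuse = albedo * max(np.dot(n, l), 0.0)
    r = 2.0 * np.dot(n, l) * n - l        # light reflected across the normal
    specular = k_spec * max(np.dot(r, v), 0.0) ** shininess
    return diffuse + specular

n = np.array([0.0, 0.0, 1.0])             # surface facing up
l = np.array([0.0, 1.0, 1.0])             # light at 45 degrees
print(shade(n, l, np.array([0.0, -1.0, 1.0])))  # viewing along the mirror direction
print(shade(n, l, np.array([0.0, 0.0, 1.0])))   # viewing from straight above
```

Viewing along the mirror direction adds the specular highlight on top of the Lambertian term; from other directions only the (view-independent) Lambertian component remains.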
Reflected light enters a camera: the pinhole model
• Light travels from the object location, through the center of projection (the pinhole), to the image location on the focal plane / image plane, which lies along the optical axis at the focal distance (focal length) behind the pinhole; the object sits at the scene depth in front of it.
• The red triangle (behind the camera) and the blue triangle (in front of the camera) are similar, therefore:
image location / focal length = object location / scene depth
• Given any three terms, you can determine the fourth.
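The similar-triangles relation is a one-line computation. A minimal sketch (the function names are mine, not from the slides):

```python
def project_pinhole(X, Z, focal_length):
    """Pinhole projection via similar triangles: x / f = X / Z.

    X: object's lateral offset from the optical axis (same units as Z)
    Z: scene depth (distance from the center of projection)
    Returns the image-plane coordinate x (same units as focal_length).
    """
    return focal_length * X / Z

def depth_from_size(X, x, focal_length):
    """'Given any three terms, you can determine the fourth':
    recover scene depth from a known object size and its image size."""
    return focal_length * X / x

x = project_pinhole(X=2.0, Z=10.0, focal_length=0.05)  # 50 mm lens, meters
print(x)                                               # image-plane offset
print(depth_from_size(2.0, x, 0.05))                   # recovers Z = 10.0
```

Note that depth and lateral offset only appear as the ratio X/Z, which is why a single image cannot distinguish a small, near object from a large, far one.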
Reflected light enters a camera
• For a given focal length, the lens equation, 1/focal length = 1/object distance + 1/image distance, determines where an object at a given depth comes into focus.
• A "blur circle" or "circle of confusion" results when projections of objects are not focused on the image plane. The size of the blur circle depends on the distance to the object and the size of the aperture.
• The allowable size of the blur circle (e.g. a pixel) determines the allowable range of depths in the scene (the "depth of field").
• Note: the "F number" or "f stop" commonly used in photography is the ratio of focal length to aperture size. (http://www.dofmaster.com/dofjs.html)
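The thin-lens equation and the f-number are both simple enough to compute directly. A small sketch (the numeric example is illustrative):

```python
def image_distance(focal_length, object_distance):
    """Thin-lens equation: 1/f = 1/z_object + 1/z_image, solved for z_image.

    Objects at different depths focus at different image distances, which is
    why a fixed image plane can only hold a limited depth range in focus.
    """
    return 1.0 / (1.0 / focal_length - 1.0 / object_distance)

def f_number(focal_length, aperture_diameter):
    """The photographic 'f stop': ratio of focal length to aperture size."""
    return focal_length / aperture_diameter

# A 50 mm lens focused on an object 5 m away (all in meters):
print(image_distance(0.050, 5.0))   # image forms just behind 50 mm
print(f_number(0.050, 0.025))       # 25 mm aperture -> f/2
```

As the object distance grows toward infinity, the image distance approaches the focal length, which is why distant scenes focus essentially at f.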
Camera sensor interprets light
• The image is quantized into pixels to go from the physical size of the projection to pixel coordinates.
http://micro.magnet.fsu.edu/optics/lightandcolor/vision.html
Szeliski 2.3, Shapiro & Stockman 2.2
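The quantization step amounts to dividing a physical image-plane position by the pixel size and shifting by the image center. A minimal sketch, assuming square pixels and a known principal point (the specific sensor numbers are illustrative):

```python
def to_pixel_coords(x_m, y_m, pixel_size_m, cx, cy):
    """Quantize a physical image-plane position into integer pixel coordinates.

    x_m, y_m:      projected position on the image plane, in meters
    pixel_size_m:  physical size of one (square) sensor pixel, in meters
    cx, cy:        pixel coordinates of the optical axis (principal point)
    """
    u = int(round(x_m / pixel_size_m + cx))
    v = int(round(y_m / pixel_size_m + cy))
    return u, v

# A point 1 mm right of the optical axis, on a sensor with 2 micron pixels,
# with the principal point at the center of a 640x480 image:
print(to_pixel_coords(0.001, 0.0, 2e-6, 320, 240))
```

The rounding is exactly where spatial quantization happens: all physical positions within one pixel's footprint map to the same pixel coordinate.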
Now what?
• The interaction between light, objects, and the camera leads to images.
• The way image values change hopefully tells us something about the objects, the light, and the camera.
Image Gradients: "the way image values change"
• The image derivative, or "gradient", at a particular point (x, y) is a vector that points in the direction of largest change.
• The gradient can be expressed in Cartesian (x, y) or polar (magnitude, angle) coordinates.
• Every point in an image may have a different gradient vector.
• Friday's lab and this week's homework will be devoted to image gradients and edges.
Discussion Questions:
• What influences are mixed together when we observe the light reflected from a surface?
• In order to infer surface orientation, what assumptions do we need to make? Can we construct restricted imaging conditions that make this job easier?
• In order to infer surface properties, what assumptions do we need to make? Can we construct restricted imaging conditions that make this job easier?
• What are some things we would like to know about objects that we can't directly observe, even if we could correctly reconstruct surface orientation, color, texture, and reflectance properties? (hint: clothes) What steps could we take to try to understand those things, given the image information?
• Think of some ways that we could define the scope of some tasks that we might be able to do, even if all we have is the image appearance and we can't infer scene structure, surface orientation, or surface properties.
Light incident on a surface
• The amount of light that falls on a surface (irradiance) depends on:
– the size of the surface
– the solid angle of light subtended by the surface
– Surfaces that are further away from the light subtend a smaller solid angle (attenuation)
– Surfaces that are turned away from the light subtend a smaller solid angle (foreshortening)
Image Gradients
• The gradient is a vector like any other vector. It just happens to represent the way the values of the image are changing.
• One way to compute the gradient is with "finite differences": just compute the difference between each pixel and the previous one (horizontally and vertically).
• Switching from the Cartesian representation (x, y) to the polar representation (magnitude, direction) is often helpful, and very, very important.
• Friday's lab and this week's homework will be devoted to image gradients and edges.
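The finite-differences recipe and the Cartesian-to-polar switch can be sketched as follows (this is one simple variant; the lab may use a different difference scheme):

```python
import numpy as np

def gradient_finite_differences(img):
    """Gradient by finite differences: each pixel minus the previous one.

    Returns the Cartesian components (gx, gy) and the polar representation
    (magnitude, angle). Outputs are trimmed to the region where both the
    horizontal and the vertical difference exist.
    """
    img = img.astype(float)
    gx = img[:, 1:] - img[:, :-1]   # horizontal differences
    gy = img[1:, :] - img[:-1, :]   # vertical differences
    gx, gy = gx[1:, :], gy[:, 1:]   # align the two arrays to a common shape
    magnitude = np.hypot(gx, gy)    # polar: length of the gradient vector
    angle = np.arctan2(gy, gx)      # polar: direction of largest change
    return gx, gy, magnitude, angle

# A vertical step edge: the gradient is horizontal, pointing across the edge.
img = np.zeros((4, 6))
img[:, 3:] = 10.0
gx, gy, mag, ang = gradient_finite_differences(img)
```

On this step edge, `gy` is zero everywhere, and the gradient magnitude is nonzero only in the single column straddling the step, with angle 0 (pointing in the +x direction, from dark toward bright).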