CS488 Visible-Surface Determination
Luc Renambot
Visible-Surface Determination
• So far in the class we have dealt mostly with simple wireframe drawings of the models
• The main reason for this was to avoid having to deal with hidden-surface removal
• Now we want to produce more sophisticated images, so we need to determine which parts of the model obscure other parts of the model
Examples
The following sets of images show a wireframe version, a wireframe version with hidden-line removal, and a solid polygonal representation of the same object
Drawing Order
• If we have no way of determining which surfaces are visible, then visibility depends on the order in which the surfaces are drawn: surfaces drawn later appear in front of surfaces drawn earlier
Principles
• We do not want to draw surfaces that are hidden. If we can quickly compute which surfaces are hidden, we can bypass them and draw only the surfaces that are visible
• For example, given a solid six-sided cube, at most 3 of the 6 faces are visible at any one time, so at least 3 of the faces do not even need to be drawn, because they are back faces
Principles
• We also want to avoid having to draw the polygons in a particular order. We would like to hand the graphics routines all the polygons in whatever order we choose, and let the graphics routines determine which polygons are in front of which others
• With the same cube as above, we do not want to have to compute for ourselves which order to draw the visible faces in, and then tell the graphics routines to draw them in that order
Principles
• The idea is to speed up the drawing, and to give the programmer an easier time, by doing some computation before drawing
• Unfortunately these computations can take a lot of time, so special-purpose hardware is often used to speed up the process
Techniques
• Two types of approaches:
  • Object space
  • Image space
Object Space
• Object-space algorithms do their work on the objects themselves, before the objects are converted to pixels in the frame buffer
• The resolution of the display device is irrelevant here, as the calculation is done at the mathematical level of the objects
• For each object a in the scene:
  • Determine which parts of object a are visible (this involves comparing the polygons in object a to the other polygons in a, and to the polygons in every other object in the scene)
Image Space
• Image-space algorithms do their work as the objects are being converted to pixels in the frame buffer
• The resolution of the display device is important here, as this is done on a pixel-by-pixel basis
• For each pixel in the frame buffer:
  • Determine which polygon is closest to the viewer at that pixel location
  • Color the pixel with the color of that polygon at that location
Approaches
• As in our discussion of vector vs. raster graphics earlier in the term:
  • The mathematical (object-space) algorithms tended to be used with vector hardware
  • The pixel-based (image-space) algorithms tended to be used with raster hardware
Homogeneous Coordinates
• When we talked about 3D transformations, we reached a point near the end where we converted the 3D (or 4D, with homogeneous coordinates) points to 2D by ignoring the Z values
• Now we will use those Z values to determine which parts of which polygons (or lines) are in front of which parts of other polygons
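As a small sketch (the Vec4/Vec3 types and the function name are our own, not from the course code), the perspective divide can keep the Z value instead of discarding it:

    typedef struct { float x, y, z, w; } Vec4;
    typedef struct { float x, y, z; } Vec3;

    /* Perspective divide: convert homogeneous coordinates to
       normalized device coordinates. The z value is retained so
       that later stages can compare depths. */
    Vec3 perspective_divide(Vec4 v)
    {
        Vec3 ndc;
        ndc.x = v.x / v.w;
        ndc.y = v.y / v.w;
        ndc.z = v.z / v.w;   /* kept for visibility tests, not dropped */
        return ndc;
    }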
Technique
• There are different levels at which the checking can be done:
  • Object
  • Polygon
  • Part of a polygon
Transparency
• There are also times when we may not want to cull out polygons that are behind other polygons
• If the frontmost polygon is transparent, then we want to be able to 'see through' it to the polygons that are behind it, as shown below
Transparent Objects
(Image: which objects are transparent in the scene?)
Coherence
• We used the idea of coherence before, in our line-drawing algorithm
• We want to exploit 'local similarity' to reduce the amount of computation needed
• This is how compression algorithms work
Coherence
• Face - properties (such as color and lighting) vary smoothly across a face (or polygon)
• Depth - adjacent areas on a surface have similar depths
• Frame - images at successive time intervals tend to be similar
• Scan Line - adjacent scan lines tend to have similar spans of objects
• Area - adjacent pixels tend to be covered by the same face
• Object - if objects are separate from each other (i.e., they do not overlap), then we only need to compare polygons within the same object, and not one object to another
• Edge - edges only disappear when they go behind another edge or face
• Implied Edge - the line of intersection of two faces can be determined from the endpoints of the intersection
Extent
• Rather than dealing with a complex object, it is often easier to deal with a simpler version of the object:
  • In 2D: a bounding box
  • In 3D: a bounding volume
Bounding Box
• We convert a complex object into a simpler outline, generally in the shape of a box
• Every part of the object is guaranteed to fall within the bounding box
Bounding Box
• Checks can then be made on the bounding box to make quick decisions (e.g., does a ray pass through the box?)
• For more detail, checks would then be made on the object inside the box
• There are many ways to define the bounding box
Bounding Box
• The simplest way is to take the minimum and maximum X, Y, and Z values of the object's vertices to create a box, as in the sketch below
• You can also have bounding boxes that rotate with the object, bounding spheres, bounding cylinders, etc.
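As a hedged sketch (the Vec3 and BoundingBox types and all function names are illustrative, not from the course code), here is the min/max construction, together with a standard slab test for the ray question above:

    #include <float.h>

    typedef struct { float x, y, z; } Vec3;
    typedef struct { Vec3 min, max; } BoundingBox;

    /* Axis-aligned bounding box: the minimum and maximum of each
       coordinate over all vertices. Every vertex lies inside it. */
    BoundingBox compute_aabb(const Vec3 *v, int n)
    {
        BoundingBox b = { {  FLT_MAX,  FLT_MAX,  FLT_MAX },
                          { -FLT_MAX, -FLT_MAX, -FLT_MAX } };
        for (int i = 0; i < n; i++) {
            if (v[i].x < b.min.x) b.min.x = v[i].x;
            if (v[i].y < b.min.y) b.min.y = v[i].y;
            if (v[i].z < b.min.z) b.min.z = v[i].z;
            if (v[i].x > b.max.x) b.max.x = v[i].x;
            if (v[i].y > b.max.y) b.max.y = v[i].y;
            if (v[i].z > b.max.z) b.max.z = v[i].z;
        }
        return b;
    }

    /* One axis of the slab test: clip the ray's parameter interval
       [*tmin, *tmax] against the slab lo <= o + t*d <= hi. */
    static int slab(float o, float d, float lo, float hi,
                    float *tmin, float *tmax)
    {
        if (d != 0.0f) {
            float t1 = (lo - o) / d, t2 = (hi - o) / d;
            if (t1 > t2) { float t = t1; t1 = t2; t2 = t; }
            if (t1 > *tmin) *tmin = t1;
            if (t2 < *tmax) *tmax = t2;
            return *tmin <= *tmax;
        }
        return o >= lo && o <= hi;   /* ray parallel to this slab */
    }

    /* Quick decision from the slide: does a ray from 'o' along
       direction 'd' pass through the box? */
    int ray_hits_box(Vec3 o, Vec3 d, BoundingBox b)
    {
        float tmin = 0.0f, tmax = FLT_MAX;
        return slab(o.x, d.x, b.min.x, b.max.x, &tmin, &tmax)
            && slab(o.y, d.y, b.min.y, b.max.y, &tmin, &tmax)
            && slab(o.z, d.z, b.min.z, b.max.z, &tmin, &tmax);
    }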
Back-Face Culling
• Back-face culling is an object-space algorithm
• It works on 'solid' objects which you are looking at from the outside
• That is, the polygons of the surface of the object completely enclose the object
Normals
• Every planar polygon has a surface normal, that is, a vector that is normal to the surface of the polygon
• Actually, every planar polygon has two normals
• Given that the polygon is part of a 'solid' object, we are interested in the normal that points OUT, rather than the normal that points in
Back Face
(Figure: a front-facing and a back-facing polygon)
• OpenGL specifies that all polygons be drawn with their vertices given in counterclockwise order, as you look at the visible side of the polygon, in order to generate the 'correct' (outward) normal
• Any polygon whose normal points away from the viewer is a 'back-facing' polygon and does not need to be investigated further
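As a sketch of how the outward normal falls out of the counterclockwise convention (the Vec3 type and function name are assumptions, not OpenGL API), the cross product of two polygon edges gives the normal by the right-hand rule:

    typedef struct { float x, y, z; } Vec3;

    /* Normal of a planar polygon from three consecutive vertices
       listed in counterclockwise order: n = (b - a) x (c - a).
       The right-hand rule makes it point out of the visible side.
       Not normalized, since only its direction matters for facing
       tests. */
    Vec3 polygon_normal(Vec3 a, Vec3 b, Vec3 c)
    {
        Vec3 u = { b.x - a.x, b.y - a.y, b.z - a.z };
        Vec3 v = { c.x - a.x, c.y - a.y, c.z - a.z };
        Vec3 n = { u.y * v.z - u.z * v.y,
                   u.z * v.x - u.x * v.z,
                   u.x * v.y - u.y * v.x };
        return n;
    }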
Computing
• To find back-facing polygons, the dot product of each polygon's surface normal is taken with a vector from the center of projection to any point on the polygon
• The dot product is then used to determine which direction the polygon is facing:
  • greater than 0: back facing
  • equal to 0: polygon viewed edge-on
  • less than 0: front facing
Dot Product
• a · b = |a| |b| cos(θ)
• a · b = ax·bx + ay·by + az·bz
• a · b = 0: orthogonal vectors
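Combining the two previous slides, a minimal back-face test in C (the Vec3 type and function names are illustrative assumptions):

    typedef struct { float x, y, z; } Vec3;

    /* a . b = ax*bx + ay*by + az*bz */
    float dot(Vec3 a, Vec3 b)
    {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    /* 'normal' is the polygon's outward surface normal; 'view'
       points from the center of projection to any point on the
       polygon. dot > 0: back facing, dot == 0: viewed edge-on,
       dot < 0: front facing. */
    int is_back_facing(Vec3 normal, Vec3 view)
    {
        return dot(normal, view) > 0.0f;
    }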
OpenGL
• OpenGL back-face culling is turned on using:
    glCullFace(GL_BACK);
    glEnable(GL_CULL_FACE);
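For context, the winding convention can be set explicitly as well; GL_CCW is already OpenGL's default, so the first line below is shown only for clarity:

    glFrontFace(GL_CCW);     /* counterclockwise vertices are front facing */
    glCullFace(GL_BACK);     /* discard the back-facing polygons */
    glEnable(GL_CULL_FACE);  /* turn culling on */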
Remarks
• Back-face culling can very quickly remove unnecessary polygons
• Unfortunately, there are often times when back-face culling cannot be used
• For example, if you wish to make an open-topped box, the inside and the outside of the box both need to be visible. So either two sets of polygons must be generated, one set facing out and another facing in, or back-face culling must be turned off to draw that object
Depth Buffer
• Early on we talked about the frame buffer, which holds the color for each pixel to be displayed
• This buffer may contain a variable number of bytes per pixel, depending on whether it is a grayscale, RGB, or color-indexed frame buffer
• All of the elements of the frame buffer are initially set to the background color
• As lines and polygons are drawn, the color is set to the color of the line or polygon at that point
Depth Buffer
• We now introduce another buffer, the same size as the frame buffer, which contains depth information instead of color information
Z-Buffering
• An image-space algorithm
• All of the elements of the z-buffer are initially set to 'very far away'
• Whenever a pixel's color is to be changed, the depth of the new color is compared to the current depth in the z-buffer
• If the new color is 'closer' than the previous color, the pixel is given the new color, and the z-buffer entry for that pixel is updated as well
• Otherwise, the pixel retains its old color and the z-buffer retains its old value
Algorithm

    for each polygon
        for each pixel p at (x, y) in the polygon's projection {
            // z ranges from -1 (far) to 0 (near)
            pz = polygon's normalized z-value at (x, y)
            if (pz > zBuffer[x, y]) {   // closer to the camera
                zBuffer[x, y] = pz
                framebuffer[x, y] = colour of pixel p
            }
        }
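A self-contained C sketch of the same idea (the buffer sizes, the -1.0 'far' sentinel, and the plot() helper are our assumptions, following the -1-to-0 depth convention on the slide):

    #define WIDTH  640
    #define HEIGHT 480
    #define Z_FAR  -1.0f   /* 'very far away' in the -1..0 convention */

    static float        zbuffer[HEIGHT][WIDTH];
    static unsigned int framebuffer[HEIGHT][WIDTH];

    /* Reset both buffers before drawing a frame. */
    void clear_buffers(unsigned int background)
    {
        for (int y = 0; y < HEIGHT; y++)
            for (int x = 0; x < WIDTH; x++) {
                zbuffer[y][x]     = Z_FAR;
                framebuffer[y][x] = background;
            }
    }

    /* Plot one candidate pixel: keep it only if it is closer than
       whatever has been drawn at (x, y) so far. */
    void plot(int x, int y, float pz, unsigned int colour)
    {
        if (pz > zbuffer[y][x]) {   /* larger z = closer here */
            zbuffer[y][x]     = pz;
            framebuffer[y][x] = colour;
        }
    }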