Hidden surface removal

• Clipping algorithms will discard objects, or parts of objects, that are outside of the viewing volume
• But that does not solve the problem of one object blocking the view of another object
• Hidden surface algorithms deal with this problem
• Some algorithms are more correctly called visible surface algorithms, but the two names are used interchangeably.

Visibility of primitives

• We don't want to waste time rendering primitives which don't contribute to the final image.
• A scene primitive can be invisible for 3 reasons:
  - Primitive lies outside the field of view
  - Primitive is back-facing (under certain conditions)
  - Primitive is occluded by one or more objects nearer the viewer
• How do we remove these efficiently?
• How do we identify these efficiently?

The visibility problem.

• Hidden/Visible Surface/Line Elimination/Determination
• Removal of faces facing away from the viewer.
• Removal of faces obscured by closer objects.

Visible surface algorithms.

• Requirements:
  - Handle a diverse set of geometric primitives
  - Handle a large number of geometric primitives
• Classification: Sutherland, Sproull, Schumacher (1974):
  - Object space
    • Geometric calculations involving polygons
    • Floating point precision: exact
    • Often process the scene in object order
  - Image space
    • Visibility at pixel samples
    • Integer precision
    • Often process the scene in image order
Visible surface algorithms.

Object based methods
• Consider objects pairwise, iteratively comparing each polygon with the rest of the polygons:
  - A and B both completely visible – display both
  - A completely obscures B – display only A (and vice versa)
  - A and B partially obscure each other – calculate the visible parts of each polygon
• Complexity is O(k²) – regarding the determination of which case applies, and any required calculation of the visible parts of each polygon, as a single operation

Image based methods
• Per pixel, consider a ray that leaves the centre of projection and passes through the pixel, and decide which object should appear at that pixel and what colour/light/texture it should be drawn in

Back face culling.

• We saw in modelling that the vertices of polyhedra are oriented in an anticlockwise manner when viewed from outside – the surface normal N points out.
• Project a polygon.
• Test the z component of the surface normal. If negative – cull, since the normal points away from the viewer.
• Or, if N . V > 0, we are viewing the back face, so the polygon is obscured.
  (A polygon faces away from the viewer if the angle between the surface normal N and the viewing direction V is less than 90 degrees, i.e. N . V > 0.)
• Only works for convex objects without holes, ie. closed orientable manifolds.
• (A code sketch of this test is given below.)

Back face culling

• Back face culling can be applied anywhere in the pipeline: world or camera coordinates, NDC (normalised device coordinates), image space.
• Where is the best point?
  - What portion of the scene is eliminated, on average?
  - Depends on the application
• If we clip our scene to the view frustum, then remove all back-facing polygons – are we done?
• NO! Most views involve overlapping polygons.

How do we handle overlapping?

• How about drawing the polygons in the "right order" so that we get the correct result (eg. blue, then green, then peach)?
• Is it just a sorting problem? Yes it is for 2D, but in 3D we can encounter intersecting polygons, or groups of non-intersecting polygons which form a cycle where ordering is impossible (later).
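As a concrete illustration of the back-face test above, here is a minimal sketch in C++. The Vec3 type and dot helper are assumptions introduced for the example; it follows the slide's sign convention, in which the viewing direction V points from the viewer towards the polygon, so N . V > 0 marks a back face.

```cpp
struct Vec3 { double x, y, z; };

// Dot product of two 3D vectors.
double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Back-face test using the convention from the slide: N is the outward
// surface normal, V the viewing direction from the eye towards the polygon.
// If N . V > 0 the angle between them is less than 90 degrees, so we are
// looking at the back of the polygon and it can be culled.
bool isBackFacing(const Vec3& normal, const Vec3& viewDir) {
    return dot(normal, viewDir) > 0.0;
}

// After projection the same idea reduces to testing the sign of the
// normal's z component, as the slide notes: negative means the normal
// points away from the viewer.
bool isBackFacingProjected(const Vec3& normal) {
    return normal.z < 0.0;
}
```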
Z-buffer Algorithm

• Some polygons will be obscured by others – we only want to draw the visible polygons
• Suppose the polygons have been passed through the projection transformation, with the z coordinate retained (ie the depth information) – suppose z is normalized to the range 0 to 1
• For each pixel (x,y), we want to draw the polygon nearest to the camera, ie largest z

(Diagram: polygons A and B viewed from the camera along the z axis, at depths z1 and z2.)

Z-buffer Algorithm

• We require two buffers:
  - a frame buffer to hold the colour of each pixel
  - a z-buffer to hold the depth information for each pixel
• Initialize all depth(x,y) to 0 and refresh(x,y) to the background colour
• For each pixel, compare its depth value z to the current depth(x,y):
  - if z > depth(x,y) then
      depth(x,y) = z
      frame buffer (x,y) = I_surface(x,y)   (Gouraud/Phong shading)

Z-buffer Algorithm

Fill each pixel with the background colour and set each z value to infinity.

  For each polygon P in the scene Do
    For each pixel (x,y) in P's projection Do
      calculate the z-coordinate, z_p, of P at (x,y)
      If z_p < value in Z-buffer at (x,y)
      Then replace value in Z-buffer at (x,y) by z_p
           colour pixel (x,y) in the colour of P
      End If
    End For
  End For

Determining depth.

Use the plane equation Ax + By + Cz + D = 0:

  z = -(D + Ax + By) / C

If at (x, y) the z value evaluates to z1, then at (x + Δx, y) the value of z is:

  z = z1 - (A/C) Δx

• Only one subtraction needed per pixel
• Depth coherence.
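A minimal C++ sketch of the z-buffer loop, following the convention of the last pseudocode slide (smaller z is nearer, buffer initialised to infinity). The buffer layout, the shade value and the rasterisation of a polygon's projection are assumptions for the example; the incremental depth step from the plane equation is included as described on the "Determining depth" slide.

```cpp
#include <limits>
#include <vector>

// Frame buffer plus z-buffer pair, initialised to the background colour
// and to "infinity" respectively.
struct Buffers {
    int width, height;
    std::vector<unsigned int> colour;   // frame buffer
    std::vector<double>       depth;    // z-buffer

    Buffers(int w, int h, unsigned int background)
        : width(w), height(h),
          colour(w * h, background),
          depth(w * h, std::numeric_limits<double>::infinity()) {}

    // Write a sample only if it is nearer than what is already stored.
    void plot(int x, int y, double z, unsigned int shade) {
        int i = y * width + x;
        if (z < depth[i]) {        // z_p < value in Z-buffer at (x,y)
            depth[i] = z;          // replace the stored depth
            colour[i] = shade;     // e.g. a Gouraud/Phong shaded value
        }
    }
};

// Depth from the plane equation Ax + By + Cz + D = 0:
//   z = -(D + A*x + B*y) / C
// Stepping one pixel along x changes z by the constant -A/C, so inside a
// scanline only one addition per pixel is needed (depth coherence).
double depthAt(double A, double B, double C, double D, double x, double y) {
    return -(D + A * x + B * y) / C;
}
double depthStepX(double A, double C) {
    return -A / C;
}
```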
Z-compositing

• Can use depth other than from polygons, e.g. a laser range return.
(Images: colour photograph; laser range return; reflected laser power. Data courtesy of UNC.)

Z Buffer - Strengths and Weaknesses

Advantages
• Simple to implement in hardware.
  - Add an additional z interpolator for each primitive.
• Memory for the z-buffer is now not expensive
• Diversity of primitives – not just polygons.
• Unlimited scene complexity
• Don't need to calculate object-object intersections.

Disadvantages
• Extra memory and bandwidth
• Waste time drawing hidden objects
• Limited precision for depth calculations in complex scenes can be a problem

Ray casting.

• Sometimes referred to as ray-tracing.
• Involves projecting an imaginary ray from the centre of projection (the viewer's eye) through the centre of each pixel into the scene.
(Diagram: eyepoint, window and scene.)

Computing ray-object intersections.

• The heart of ray tracing.
• e.g. a sphere (the easiest!).

Express the line in parametric form:

  x = x0 + t Δx ;  y = y0 + t Δy ;  z = z0 + t Δz

Equation for a sphere:

  (x − a)² + (y − b)² + (z − c)² = r²

Expand, substitute for x, y & z, and gather terms in t ⇒ a quadratic equation in t. Solve for t:
• No roots – the ray doesn't intersect.
• 1 root – the ray grazes the surface.
• 2 roots – the ray intersects the sphere (entry and exit).
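The ray-sphere case can be written out directly, as sketched below. It follows the slide's formulation: the parametric ray (x0 + tΔx, y0 + tΔy, z0 + tΔz) is substituted into (x − a)² + (y − b)² + (z − c)² = r², giving a quadratic in t whose discriminant distinguishes the no-root, one-root and two-root cases. The Vec3 type and helpers are assumptions, repeated here so the sketch is self-contained.

```cpp
#include <cmath>
#include <optional>

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}
Vec3 sub(const Vec3& a, const Vec3& b) {
    return {a.x - b.x, a.y - b.y, a.z - b.z};
}

// Intersect the ray origin + t*dir (origin = (x0,y0,z0), dir = (dx,dy,dz))
// with a sphere of the given centre and radius. Substituting the ray into
// the sphere equation gives A t^2 + B t + C = 0.
// Returns the smallest non-negative t, or nothing if there is no hit.
std::optional<double> raySphere(const Vec3& origin, const Vec3& dir,
                                const Vec3& centre, double radius) {
    Vec3 oc = sub(origin, centre);
    double A = dot(dir, dir);
    double B = 2.0 * dot(oc, dir);
    double C = dot(oc, oc) - radius * radius;

    double disc = B * B - 4.0 * A * C;
    if (disc < 0.0) return std::nullopt;   // no roots: ray misses the sphere

    double s  = std::sqrt(disc);           // disc == 0: ray grazes the surface
    double t0 = (-B - s) / (2.0 * A);      // entry point
    double t1 = (-B + s) / (2.0 * A);      // exit point
    if (t0 >= 0.0) return t0;
    if (t1 >= 0.0) return t1;              // origin is inside the sphere
    return std::nullopt;                   // sphere is behind the ray
}
```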
Ray-polygon intersection.

• Not so easy!
  1. Determine whether the ray intersects the polygon's plane.
  2. Determine whether the intersection lies within the polygon.
• Easiest to determine (2) with an orthographic projection onto the nearest axis plane and the 2D point-in-polygon test.
(Diagram: a ray and the x, y, z axes.)
(A code sketch of this test is given below.)

Ray casting.

• Easy to implement for a variety of primitives – we only need a ray-object intersection function.
• The pixel adopts the colour of the nearest intersection.
• Can draw curves and surfaces exactly – not just triangles!
• Can generate new rays inside the scene to correctly handle visibility with reflections, refraction etc – recursive ray-tracing.
• Can be extended to handle global illumination.
• Can perform area-sampling using ray super-sampling.
• But… too expensive for real-time applications.

Depth Sort Methods

• The Depth Sort Algorithm initially sorts the faces in the object into back-to-front order.
• The faces are then scan converted in this order onto the screen.
• Thus a face near the front will obscure a face at the back by overwriting it at any points where their projections overlap.
• This accomplishes hidden-surface removal without any complex intersection calculations between the two projected faces.
• This technique of painting the object in back-to-front order is often called the Painter's Algorithm.
• The Depth Sort algorithm is a hybrid algorithm in that it sorts in object space and does the final rendering in image space.

Examples of Ray-traced images.

(Images of ray-traced scenes.)
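For reference, here is a sketch of the two-step ray-polygon test described on the ray-polygon slide above. It assumes the polygon is planar, given by its plane Ax + By + Cz + D = 0 and a vertex list; step 1 intersects the ray with the plane, step 2 drops the coordinate in which the normal is largest (an orthographic projection onto the nearest axis plane) and applies a standard even-odd 2D point-in-polygon test. All names are illustrative, not from the slides.

```cpp
#include <cmath>
#include <cstddef>
#include <optional>
#include <utility>
#include <vector>

struct Vec3 { double x, y, z; };

// Even-odd (crossing-number) test: is the 2D point (px, py) inside poly?
bool pointInPolygon2D(double px, double py,
                      const std::vector<std::pair<double, double>>& poly) {
    bool inside = false;
    for (std::size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
        double xi = poly[i].first, yi = poly[i].second;
        double xj = poly[j].first, yj = poly[j].second;
        if ((yi > py) != (yj > py) &&
            px < (xj - xi) * (py - yi) / (yj - yi) + xi)
            inside = !inside;
    }
    return inside;
}

// Step 1: intersect the ray o + t*d with the plane Ax + By + Cz + D = 0.
// Step 2: project the hit point and the vertices onto the axis plane most
// perpendicular to the normal, then run the 2D point-in-polygon test.
// Returns t at the hit, or nothing if the ray misses the polygon.
std::optional<double> rayPolygon(const Vec3& o, const Vec3& d,
                                 double A, double B, double C, double D,
                                 const std::vector<Vec3>& verts) {
    double denom = A * d.x + B * d.y + C * d.z;
    if (std::fabs(denom) < 1e-12) return std::nullopt;  // ray parallel to plane
    double t = -(A * o.x + B * o.y + C * o.z + D) / denom;
    if (t < 0.0) return std::nullopt;                   // plane behind the ray

    Vec3 p{o.x + t * d.x, o.y + t * d.y, o.z + t * d.z};

    // Drop the coordinate with the largest |normal| component.
    double ax = std::fabs(A), ay = std::fabs(B), az = std::fabs(C);
    auto project = [&](const Vec3& v) -> std::pair<double, double> {
        if (ax >= ay && ax >= az) return {v.y, v.z};    // drop x
        if (ay >= az)             return {v.x, v.z};    // drop y
        return {v.x, v.y};                              // drop z
    };

    std::vector<std::pair<double, double>> poly2d;
    for (const Vec3& v : verts) poly2d.push_back(project(v));
    auto [px, py] = project(p);

    return pointInPolygon2D(px, py, poly2d) ? std::optional<double>(t)
                                            : std::nullopt;
}
```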
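And a minimal sketch of the depth sort / painter's algorithm described above: sort the faces back to front on a depth key and scan convert them in that order, letting nearer faces overwrite farther ones. The Face record, the choice of the farthest vertex z as the sort key, the "larger z is farther" convention and drawFace() are all assumptions; the handling of intersecting polygons and cyclic overlaps, which defeat a pure sort as noted earlier, is omitted.

```cpp
#include <algorithm>
#include <vector>

// Placeholder face record: in a real renderer this would carry the
// projected vertices, colour/material, etc.
struct Face {
    double farthestZ;   // depth key, e.g. the largest z of the face's vertices
    // ... vertex and shading data ...
};

// Scan conversion of one face into the frame buffer, assumed elsewhere.
void drawFace(const Face& face);

// Painter's algorithm: draw faces in back-to-front order so that each
// nearer face simply overwrites any farther face it overlaps.
void painterRender(std::vector<Face>& faces) {
    std::sort(faces.begin(), faces.end(),
              [](const Face& a, const Face& b) {
                  // Assuming larger z means farther from the viewer;
                  // flip the comparison for the opposite convention.
                  return a.farthestZ > b.farthestZ;
              });
    for (const Face& f : faces)
        drawFace(f);
}
```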