

  1. Week 7 - Friday

2.
 What did we talk about last time?
 Lighting in MonoGame
 Cube example
 Antialiasing

3.
 Partially transparent objects significantly increase the difficulty of rendering a scene
 We will talk about really difficult effects like frosted glass or light bending later
 Just rendering transparent objects at all is a huge pain because the Z-buffer doesn't work anymore
 Workarounds:
   Screen door transparency
   Sorting
   Depth peeling

4.
 We render an object with a checkerboard pattern of holes in it, leaving whatever is beneath the object showing through (see the sketch below)
 Problems:
   It really only works for 50% transparent objects
   Only one overlapping transparent object really works
 But it is simple and inexpensive
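As a minimal sketch of the idea, here is the screen-door test in plain C#; in a real renderer this decision happens per fragment in the pixel shader, so the function here is only an illustrative stand-in:

    // Screen-door test for a 50% transparent object: keep pixels in a
    // checkerboard pattern so whatever is beneath shows through the holes.
    static bool KeepScreenDoorPixel(int pixelX, int pixelY)
    {
        return (pixelX + pixelY) % 2 == 0;   // draw only the "even" squares
    }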

5.
 Most transparency methods use the over operator, which combines two colors using the alpha of the one you're putting on top:
   c_o = α_s c_s + (1 - α_s) c_d
 c_s is the new (source) color
 c_d is the old (destination) color
 c_o is the resulting (over) color
 α_s is the opacity (alpha) of the object
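A minimal sketch of the over operator in C#, applied one color component at a time (the Rgb type is just an illustrative stand-in):

    struct Rgb
    {
        public float R, G, B;
        public Rgb(float r, float g, float b) { R = r; G = g; B = b; }
    }

    // c_o = alpha_s * c_s + (1 - alpha_s) * c_d, per component
    static Rgb Over(Rgb source, float sourceAlpha, Rgb destination)
    {
        float inv = 1.0f - sourceAlpha;
        return new Rgb(
            sourceAlpha * source.R + inv * destination.R,
            sourceAlpha * source.G + inv * destination.G,
            sourceAlpha * source.B + inv * destination.B);
    }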

6.
 The over operator is order dependent
 To render correctly we can do the following (see the sketch below):
   Render all the opaque objects
   Sort the centroids of the transparent objects by distance from the viewer
   Render the transparent objects in back-to-front order
 To make sure that you don't draw on top of an opaque object, test against the Z-buffer but don't update it
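A sketch of that ordering as a method you might drop into your Game class; ISceneObject is a hypothetical stand-in for however your scene stores objects, but the render states are real MonoGame API:

    using System.Collections.Generic;
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    interface ISceneObject
    {
        Vector3 Centroid { get; }   // hypothetical: used only for sorting
        void Draw();                // hypothetical: issues the actual draw calls
    }

    void DrawScene(GraphicsDevice device, Vector3 camera,
                   List<ISceneObject> opaque, List<ISceneObject> transparent)
    {
        // 1. Opaque objects first, with normal depth testing and writing
        device.BlendState = BlendState.Opaque;
        device.DepthStencilState = DepthStencilState.Default;
        foreach (var obj in opaque)
            obj.Draw();

        // 2. Sort transparent centroids back to front (farthest first)
        transparent.Sort((a, b) =>
            Vector3.Distance(camera, b.Centroid)
                .CompareTo(Vector3.Distance(camera, a.Centroid)));

        // 3. Blend them in, testing the Z-buffer but not updating it
        device.BlendState = BlendState.AlphaBlend;
        device.DepthStencilState = DepthStencilState.DepthRead;  // read-only Z
        foreach (var obj in transparent)
            obj.Draw();

        device.DepthStencilState = DepthStencilState.Default;    // restore
    }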

7.
 It is not always possible to sort polygons
 They can interpenetrate
 Hacks:
   At the very least, use a Z-buffer test but not replacement
   Turning off culling can help
   Or render transparent polygons twice: once for each face (see the sketch below)
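A sketch of the render-twice hack, reusing the hypothetical ISceneObject from the earlier sketch and assuming MonoGame's default convention that front faces wind clockwise:

    using Microsoft.Xna.Framework.Graphics;

    // Draw one transparent object twice: back faces first, then front faces
    void DrawTwoSided(GraphicsDevice device, ISceneObject obj)
    {
        device.BlendState = BlendState.AlphaBlend;
        device.DepthStencilState = DepthStencilState.DepthRead;

        // Pass 1: cull the clockwise (front) faces, leaving only back faces
        device.RasterizerState = RasterizerState.CullClockwise;
        obj.Draw();

        // Pass 2: the usual state culls counterclockwise (back) faces
        device.RasterizerState = RasterizerState.CullCounterClockwise;
        obj.Draw();
    }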

8.
 It is possible to use two depth buffers to render transparency correctly
 First render all the opaque objects, updating the first depth buffer
 Make the second depth buffer maximally close
 On the second (and future) rendering passes, render those fragments that are closer than the z values in the first depth buffer but farther than the values in the second depth buffer
 Update the second depth buffer
 Repeat the process until no pixels are updated (see the structural sketch below)
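Real depth peeling needs custom shaders and render targets, so this is only a structural sketch of the loop; all four callbacks are hypothetical stand-ins for shader-backed passes:

    using System;

    static void DepthPeel(
        Action renderOpaque,          // opaque pass: fills depth buffer 1
        Action resetSecondDepth,      // set depth buffer 2 maximally close
        Func<int> renderPeelLayer,    // keep fragments between the two depths;
                                      // returns how many pixels were updated
        Action updateSecondDepth,     // copy this layer's depths into buffer 2
        int maxLayers)
    {
        renderOpaque();
        resetSecondDepth();

        for (int layer = 0; layer < maxLayers; layer++)
        {
            if (renderPeelLayer() == 0)
                break;                // no pixels updated: all layers peeled
            updateSecondDepth();
        }
    }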

9. [Figure: the same scene rendered with depth peeling using 1, 2, 3, and 4 layers]

10.
 Alpha values can be used for antialiasing, by lowering the opacity of edges that partially cover pixels
 Additive blending is an alternative to the over operator:
   c_o = α_s c_s + c_d
 This is only useful for effects like glows, where the new color never makes the original darker
 Unlike transparency, it can be applied in any order (see the sketch below)
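A sketch of additive glows in MonoGame with SpriteBatch; the texture and positions are assumed to come from your game:

    using System.Collections.Generic;
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    // Because additive blending is order independent, no sorting is needed
    void DrawGlows(SpriteBatch spriteBatch, Texture2D glowTexture,
                   List<Vector2> positions)
    {
        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Additive);
        foreach (var position in positions)
            spriteBatch.Draw(glowTexture, position, Color.White);
        spriteBatch.End();
    }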

11.
 I don't want to go deeply into gamma
 The trouble is that real light has a wide range of color values that we need to store in some limited range (such as 0-255)
 Then, we have to display these values, moving back from the limited range to the "real world" range

12.
 Physical computations should be performed in the linear (real) space
 To convert that linear space into nonlinear frame buffer space, we raise values to a power, typically 0.45 for PCs and 0.55 for Macs (see the sketch below)
 Each component of the physical color (0.3, 0.5, 0.6) is raised to the 0.45 power, giving (0.582, 0.732, 0.794), which is then scaled to the 0-255 range, giving (148, 187, 203)
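A minimal sketch of that conversion in C#, reproducing the example above:

    using System;

    // Encode a linear-space color for the frame buffer: raise each
    // component to the display exponent (0.45 here), then scale to 0-255
    static (int R, int G, int B) GammaEncode(double r, double g, double b,
                                             double exponent = 0.45)
    {
        int Encode(double channel) =>
            (int)Math.Round(Math.Pow(channel, exponent) * 255.0);

        return (Encode(r), Encode(g), Encode(b));
    }

    // GammaEncode(0.3, 0.5, 0.6) gives (148, 187, 203)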

13.
 Usually, gamma correction is taken care of for you
 If you are writing something where you need to do computations in the "real life" color space (such as a raytracer), you may have to worry about it
 Calculations in the wrong space can have visually unrealistic effects

14.
 We've got polygons, but they are all one color
 At most, we could have different colors at each vertex
 We want to "paint" a picture on the polygon:
   Because the surface is supposed to be colorful
   To appear as if there is greater complexity than there is (a texture of bricks rather than a complex geometry of bricks)
   To apply other effects to the surface, such as changes in material or normal

15.
 We never get tired of pipelines
 Go from object space to parameter space
 Go from parameter space to texture space
 Get the texture value
 Transform the texture value
 [Pipeline diagram: object space → projector function → parameter space → corresponder function → texture space → obtain value → value transform function → transformed texture value]

16.
 The projector function goes from model space (a 3D location on a surface) to a 2D (u, v) coordinate on a texture
 Usually, this is based on a map from the model to the texture, made by an artist
 Tools exist to help artists "unwrap" the model
 Different kinds of mapping make this easier
 In other scenarios, a mapping could be determined at run time (see the sketch below)
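A sketch of one simple run-time projector function, spherical mapping from a normalized direction to (u, v); the function name is illustrative:

    using System;

    // Spherical projector: map a normalized direction (x, y, z) from the
    // model's center to a (u, v) coordinate in [0, 1] x [0, 1]
    static (double U, double V) SphericalProject(double x, double y, double z)
    {
        double u = 0.5 + Math.Atan2(z, x) / (2.0 * Math.PI);  // longitude
        double v = 0.5 - Math.Asin(y) / Math.PI;              // latitude
        return (u, v);
    }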

17.
 From (u, v) coordinates, we have to find the corresponding texture pixel (or texel)
 Often this just maps directly from u, v ∈ [0,1] to a pixel in the full width and height range
 But matrix transformations can be applied
 Also, values outside of [0,1] can be given, with different choices of interpretation (see the sketch below)
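A sketch of a corresponder function with two common interpretations of out-of-range coordinates; the names are illustrative, not any particular library's API:

    using System;

    enum AddressMode { Wrap, Clamp }

    // Map a (u, v) coordinate to texel indices in a width-by-height texture
    static (int X, int Y) ToTexel(double u, double v, int width, int height,
                                  AddressMode mode)
    {
        double Apply(double t) => mode switch
        {
            AddressMode.Wrap  => t - Math.Floor(t),        // repeat the texture
            AddressMode.Clamp => Math.Clamp(t, 0.0, 1.0),  // stretch edge texels
            _ => t
        };

        // Scale into the full width/height range of the texture
        int x = Math.Min((int)(Apply(u) * width), width - 1);
        int y = Math.Min((int)(Apply(v) * height), height - 1);
        return (x, y);
    }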

18.
 Usually the texture value is just an RGB triple (or an RGBα value)
 But it could be procedurally generated (see the sketch below)
 It could be bump mapping or other surface data
 It might need some transformation after retrieval
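A sketch of a procedurally generated texture value: a checkerboard color computed directly from (u, v) instead of looked up in an image (names and colors are illustrative):

    // Return a dark or light color based on which checkerboard square
    // the (u, v) coordinate falls in
    static (float R, float G, float B) CheckerboardValue(double u, double v,
                                                         int squares = 8)
    {
        int column = (int)(u * squares);
        int row = (int)(v * squares);
        bool dark = (column + row) % 2 == 0;
        return dark ? (0.1f, 0.1f, 0.1f) : (0.9f, 0.9f, 0.9f);
    }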

19.
 Image texturing
 Magnification and minification
 Mipmapping
 Anisotropic filtering
 MonoGame examples
 Textures in shader code

20.
 No class on Monday or Wednesday!
   Because of October Break and then the debates
 Keep working on Project 2
 Keep working on Assignment 3
 Keep reading Chapter 6
