Week 7 - Wednesday
What did we talk about last time? Exam 1. Before that: specular and diffuse shading, and aliasing.
VertexPositionNormalTexture[] vertices = new VertexPositionNormalTexture[36]
{
    // Front face
    new VertexPositionNormalTexture(new Vector3(1, 1, 1), Vector3.UnitZ, new Vector2(.5f, 0)),
    new VertexPositionNormalTexture(new Vector3(-1, -1, 1), Vector3.UnitZ, new Vector2(1, 1)),
    new VertexPositionNormalTexture(new Vector3(-1, 1, 1), Vector3.UnitZ, new Vector2(0, 1)),
    new VertexPositionNormalTexture(new Vector3(1, 1, 1), Vector3.UnitZ, new Vector2(.5f, 0)),
    new VertexPositionNormalTexture(new Vector3(1, -1, 1), Vector3.UnitZ, new Vector2(1, 1)),
    new VertexPositionNormalTexture(new Vector3(-1, -1, 1), Vector3.UnitZ, new Vector2(0, 1)),
    // Bottom face
    new VertexPositionNormalTexture(new Vector3(1, -1, 1), -Vector3.UnitY, new Vector2(.5f, 0)),
    new VertexPositionNormalTexture(new Vector3(1, -1, -1), -Vector3.UnitY, new Vector2(1, 1)),
    new VertexPositionNormalTexture(new Vector3(-1, -1, -1), -Vector3.UnitY, new Vector2(0, 1)),
    new VertexPositionNormalTexture(new Vector3(-1, -1, -1), -Vector3.UnitY, new Vector2(.5f, 0)),
    new VertexPositionNormalTexture(new Vector3(-1, -1, 1), -Vector3.UnitY, new Vector2(1, 1)),
    new VertexPositionNormalTexture(new Vector3(1, -1, 1), -Vector3.UnitY, new Vector2(0, 1)),
    // Right face
    new VertexPositionNormalTexture(new Vector3(1, 1, 1), Vector3.UnitX, new Vector2(.5f, 0)),
    new VertexPositionNormalTexture(new Vector3(1, 1, -1), Vector3.UnitX, new Vector2(1, 1)),
    new VertexPositionNormalTexture(new Vector3(1, -1, 1), Vector3.UnitX, new Vector2(0, 1)),
    new VertexPositionNormalTexture(new Vector3(1, -1, -1), Vector3.UnitX, new Vector2(.5f, 0)),
    new VertexPositionNormalTexture(new Vector3(1, -1, 1), Vector3.UnitX, new Vector2(1, 1)),
    new VertexPositionNormalTexture(new Vector3(1, 1, -1), Vector3.UnitX, new Vector2(0, 1)),
    // Top face
    new VertexPositionNormalTexture(new Vector3(-1, 1, -1), Vector3.UnitY, new Vector2(.5f, 0)),
    new VertexPositionNormalTexture(new Vector3(1, 1, -1), Vector3.UnitY, new Vector2(1, 1)),
    new VertexPositionNormalTexture(new Vector3(1, 1, 1), Vector3.UnitY, new Vector2(0, 1)),
    new VertexPositionNormalTexture(new Vector3(1, 1, 1), Vector3.UnitY, new Vector2(.5f, 0)),
    new VertexPositionNormalTexture(new Vector3(-1, 1, 1), Vector3.UnitY, new Vector2(1, 1)),
    new VertexPositionNormalTexture(new Vector3(-1, 1, -1), Vector3.UnitY, new Vector2(0, 1)),
    // Left face
    new VertexPositionNormalTexture(new Vector3(-1, 1, 1), -Vector3.UnitX, new Vector2(.5f, 0)),
    new VertexPositionNormalTexture(new Vector3(-1, -1, 1), -Vector3.UnitX, new Vector2(1, 1)),
    new VertexPositionNormalTexture(new Vector3(-1, 1, -1), -Vector3.UnitX, new Vector2(0, 1)),
    new VertexPositionNormalTexture(new Vector3(-1, 1, -1), -Vector3.UnitX, new Vector2(.5f, 0)),
    new VertexPositionNormalTexture(new Vector3(-1, -1, 1), -Vector3.UnitX, new Vector2(1, 1)),
    new VertexPositionNormalTexture(new Vector3(-1, -1, -1), -Vector3.UnitX, new Vector2(0, 1)),
    // Back face
    new VertexPositionNormalTexture(new Vector3(1, 1, -1), -Vector3.UnitZ, new Vector2(.5f, 0)),
    new VertexPositionNormalTexture(new Vector3(-1, 1, -1), -Vector3.UnitZ, new Vector2(1, 1)),
    new VertexPositionNormalTexture(new Vector3(-1, -1, -1), -Vector3.UnitZ, new Vector2(0, 1)),
    new VertexPositionNormalTexture(new Vector3(-1, -1, -1), -Vector3.UnitZ, new Vector2(.5f, 0)),
    new VertexPositionNormalTexture(new Vector3(1, -1, -1), -Vector3.UnitZ, new Vector2(1, 1)),
    new VertexPositionNormalTexture(new Vector3(1, 1, -1), -Vector3.UnitZ, new Vector2(0, 1)),
};
Note: These cube vertices are what we did in class.
They might not be organized in the most helpful way, and their texture coordinates are totally wrong.
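To actually see the cube, the vertex array has to be drawn. Here is a minimal sketch of one way to do that with a BasicEffect, assuming an effect, a view matrix, a projection matrix, and a Texture2D have already been set up elsewhere (those names are illustrative, not from the class code):

effect.TextureEnabled = true;
effect.Texture = texture;            // assumed Texture2D loaded in LoadContent()
effect.EnableDefaultLighting();      // lighting uses the per-vertex normals
effect.World = Matrix.Identity;
effect.View = view;                  // assumed camera view matrix
effect.Projection = projection;      // assumed projection matrix

foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    pass.Apply();
    // 36 vertices interpreted as a triangle list: 12 triangles
    GraphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleList, vertices, 0, 12);
}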
When sampling any continuous signal (an image, a sound, a wave) into a discrete environment (like the computer), different signals can end up being indistinguishable from each other once sampled. This is called aliasing. We can reduce aliasing by carefully considering how sampling and reconstruction of the signal are done.
Ever seen the wheels of a car appear to spin the wrong way? Without enough samples, it may be impossible to tell which way they're spinning. You need a sampling frequency at least twice as high as the maximum frequency in the signal to reconstruct the original signal. This is called the Nyquist limit.
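As a small numerical illustration (not from the slides): a 30 Hz cosine sampled at only 40 Hz, below the 60 Hz the Nyquist limit would require, produces exactly the same samples as a 10 Hz cosine, so the two signals alias to each other:

double sampleRate = 40.0;   // Hz; reconstructing 30 Hz would need at least 60 Hz
for (int n = 0; n < 8; n++)
{
    double t = n / sampleRate;
    double trueSignal = Math.Cos(2 * Math.PI * 30 * t);   // the real 30 Hz signal
    double alias = Math.Cos(2 * Math.PI * 10 * t);        // its 10 Hz alias
    Console.WriteLine($"t={t:0.000}s  {trueSignal:0.000}  {alias:0.000}");   // the two columns match
}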
Jaggies are caused by insufficient sampling. A simple method to increase sampling is full-scene antialiasing (FSAA), which essentially renders to a higher resolution and then averages neighboring pixels together. The accumulation buffer method is similar, except that the scene is rendered multiple times with tiny offsets and the pixel values are summed together.
A variety of FSAA schemes exist with different tradeoffs between quality and computational cost
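Here is a rough sketch (not code from class) of the downsampling step in FSAA: after rendering at double the width and height, each 2x2 block of high-resolution pixels is averaged into one final pixel.

Color[] Downsample2x(Color[] highRes, int width, int height)
{
    // highRes is (2 * width) x (2 * height), stored row-major
    Color[] result = new Color[width * height];
    int w2 = 2 * width;
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
        {
            Color c00 = highRes[(2 * y) * w2 + 2 * x];
            Color c01 = highRes[(2 * y) * w2 + 2 * x + 1];
            Color c10 = highRes[(2 * y + 1) * w2 + 2 * x];
            Color c11 = highRes[(2 * y + 1) * w2 + 2 * x + 1];
            // Box filter: average the four samples for each channel
            result[y * width + x] = new Color(
                (c00.R + c01.R + c10.R + c11.R) / 4,
                (c00.G + c01.G + c10.G + c11.G) / 4,
                (c00.B + c01.B + c10.B + c11.B) / 4);
        }
    return result;
}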
For non-interactive render speeds, the A-buffer can be used. The A-buffer generates a coverage mask for each fragment in each pixel. Fragments are thrown away if their z-buffer values are farther away than those of fragments with full coverage. The final pixel color is based on merging the remaining fragments.
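The details are in the book, but a heavily simplified sketch of A-buffer resolution for a single pixel might look like the following (the Fragment struct, the 4x4 coverage mask, and the merging rule here are my assumptions for illustration, not the book's exact algorithm):

struct Fragment
{
    public Color Color;
    public float Depth;
    public ushort Coverage;   // 16 bits = one bit per sample in a 4x4 mask
}

Color ResolvePixel(List<Fragment> fragments, Color background)
{
    fragments.Sort((f1, f2) => f1.Depth.CompareTo(f2.Depth));   // front to back
    float r = 0, g = 0, b = 0;
    int covered = 0;                                            // sample bits already claimed
    foreach (Fragment f in fragments)
    {
        int newBits = f.Coverage & ~covered;                    // samples this fragment wins
        float weight = CountBits(newBits) / 16f;
        r += f.Color.R * weight;
        g += f.Color.G * weight;
        b += f.Color.B * weight;
        covered |= f.Coverage;
        if (covered == 0xFFFF)
            break;   // closer fragments already cover the whole pixel
    }
    float bgWeight = CountBits(0xFFFF & ~covered) / 16f;        // uncovered samples show background
    return new Color((int)(r + background.R * bgWeight),
                     (int)(g + background.G * bgWeight),
                     (int)(b + background.B * bgWeight));
}

int CountBits(int bits)
{
    int count = 0;
    while (bits != 0) { count += bits & 1; bits >>= 1; }
    return count;
}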
Supersampling techniques (like FSAA) are very expensive because the full shader has to run multiple times per pixel. Multisample antialiasing (MSAA) samples the same pixel multiple times but only runs the shader once. Expensive calculations (such as lighting angles) can be done once, while different texture colors can still be averaged. Color samples are not averaged in if they fall outside the edge of the triangle.
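In XNA/MonoGame we don't implement MSAA ourselves; we ask the graphics device for it. A minimal sketch, assuming the usual GraphicsDeviceManager field in the Game constructor:

graphics = new GraphicsDeviceManager(this);
graphics.PreferMultiSampling = true;   // request a multisampled back buffer
// MonoGame also lets you suggest a sample count before the device is created:
// graphics.PreparingDeviceSettings += (sender, e) =>
//     e.GraphicsDeviceInformation.PresentationParameters.MultiSampleCount = 4;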
Active research is still trying to find techniques with good visual output and good computational performance. Stochastic (random) sampling reduces the visual repetition of some artifacts. Sharing samples between pixels can reduce overall cost.
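One common stochastic pattern is jittered sampling: start from a regular grid of sub-pixel sample positions and randomly perturb each one within its own grid cell, turning regular artifacts into less objectionable noise. A small sketch (names and layout are mine):

Vector2[] JitteredSamples(int gridSize, Random random)
{
    Vector2[] samples = new Vector2[gridSize * gridSize];
    float cell = 1f / gridSize;   // cell size in fractions of a pixel
    for (int y = 0; y < gridSize; y++)
        for (int x = 0; x < gridSize; x++)
            samples[y * gridSize + x] = new Vector2(
                (x + (float)random.NextDouble()) * cell,   // random offset within the cell
                (y + (float)random.NextDouble()) * cell);
    return samples;
}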
Next up: transparency and texturing basics.
Keep working on Project 2. Keep reading Chapter 6: transparency and textures.