  1. Week 9 - Monday

  2. What did we talk about last time?
     - Mipmapping
     - Summed area tables
     - Anisotropic filtering

  3. We have been using BasicEffect to achieve most of our shading
     - BasicEffect gets so much done that it's tempting not to move any further
     - But more complex effects can be achieved by writing shader code ourselves
     - To start, open up the MonoGame Pipeline tool, right-click on the Content folder, and choose Add > New Item
     - Choose Effect from the wizard and name it something that ends with .fx

  4. We need to declare the variables we are going to use at the top of the file
     - These usually include at least the following (which are already given in the template):

         float4x4 World;
         float4x4 View;
         float4x4 Projection;
         float4x4 WorldInverseTranspose;

     - We're also going to add an ambient and diffuse light:

         float4 AmbientColor = float4(1, 1, 1, 1);
         float AmbientIntensity = 0.1;
         float4 DiffuseLightDirection = float4(0.7071f, 0.7071f, 0, 0);
         float4 DiffuseColor = float4(1, 1, 1, 1);
         float DiffuseIntensity = 0.5;

  5. We also have to define structures to take the input and the output
     - These vary because different vertex formats include different data (position, normals, colors, texture coordinates):

         struct VertexShaderInput
         {
             float4 Position : POSITION0;
             float4 Normal : NORMAL0;
         };

         struct VertexShaderOutput
         {
             float4 Position : POSITION0;
             float4 Color : COLOR0;
         };

     - The simplest possible input and output would have position only
     - The POSITION0 is called a semantic
     - Semantics are used to tell the shader what the purpose of a variable is so that it can pass the right data in and out

  6. The job of the vertex shader is, at the very least, to transform a vertex from model space to world space to view space to clip space
     - It can also do normal and color calculations:

         VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
         {
             VertexShaderOutput output;

             // Transform from model space through world and view space to clip space
             float4 worldPosition = mul(input.Position, World);
             float4 viewPosition = mul(worldPosition, View);
             output.Position = mul(viewPosition, Projection);

             // Transform the normal and compute the diffuse (Lambertian) intensity
             float4 normal = mul(input.Normal, WorldInverseTranspose);
             float lightIntensity = dot(normal, normalize(DiffuseLightDirection));
             output.Color = saturate(DiffuseColor * DiffuseIntensity * lightIntensity);

             return output;
         }

  7. The pixel shader must find the final color of the pixel fragment
     - This pixel shader uses a diffuse shading model
     - The computed lighting is added to the ambient lighting:

         float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
         {
             return saturate(input.Color + AmbientColor * AmbientIntensity);
         }

  8. You're allowed to name your vertex and pixel shaders anything you want
     - You specify which you're going to use in a technique
     - At this level, techniques only have one pass, but it is possible to use multiple techniques to achieve interesting effects:

         technique Diffuse
         {
             pass Pass1
             {
                 VertexShader = compile VS_SHADERMODEL VertexShaderFunction();
                 PixelShader = compile PS_SHADERMODEL PixelShaderFunction();
             }
         }

  9. You have to load the shader like other content:

         effect = Content.Load<Effect>("Diffuse");

     - Then, we run a loop similar to the earlier one, setting the parameters for the effect object:

         foreach (ModelMesh mesh in model.Meshes)
         {
             foreach (ModelMeshPart part in mesh.MeshParts)
             {
                 part.Effect = effect;
                 effect.Parameters["World"].SetValue(mesh.ParentBone.Transform * world);
                 effect.Parameters["View"].SetValue(view);
                 effect.Parameters["WorldInverseTranspose"].SetValue(
                     Matrix.Transpose(Matrix.Invert(mesh.ParentBone.Transform * world)));
                 effect.Parameters["Projection"].SetValue(projection);
             }
             mesh.Draw();
         }

  10. Image textures are the most common, but 3D volume textures can be used
      - These textures store data in a (u, v, w) coordinate space (see the sampling sketch below)
      - Even volume textures can be mipmapped
        - Quadrilinear interpolation!
      - In practice, volume textures are usually used for fog, smoke, or explosions
        - 3D effects that are inconsistent over the volume
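      A minimal HLSL sketch of sampling a volume texture, assuming a hypothetical VolumeTexture asset and a (u, v, w) coordinate supplied by the caller (these names are not from the slides):

         texture VolumeTexture;                  // hypothetical 3D texture, e.g., baked smoke density
         sampler3D VolumeSampler = sampler_state
         {
             Texture = <VolumeTexture>;
             MinFilter = Linear;                 // trilinear filtering within a mip level...
             MagFilter = Linear;
             MipFilter = Linear;                 // ...plus blending between mip levels = quadrilinear
         };

         float4 SampleVolume(float3 uvw)
         {
             // tex3D takes (u, v, w) instead of the (u, v) that tex2D takes
             return tex3D(VolumeSampler, uvw);
         }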

  11. A cube map is a kind of texture map with 6 faces
      - Cube maps are used to texture surfaces based on direction
      - They are commonly used in environment mapping (see the sketch below)
      - A ray is made from the center of the cube out to the surface
        - The component with the largest magnitude selects which of the 6 faces
        - The other components are used for (u, v) coordinates
      - Cube maps can cause awkward seams when jumping between faces
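      A minimal HLSL sketch of an environment-mapped pixel shader; the EnvironmentMap, CameraPosition, and input names are illustrative assumptions, not code from the slides:

         texture EnvironmentMap;                 // hypothetical cube map asset
         samplerCUBE EnvironmentSampler = sampler_state
         {
             Texture = <EnvironmentMap>;
             MinFilter = Linear;
             MagFilter = Linear;
         };

         float3 CameraPosition;                  // assumed to be set by the application

         float4 EnvironmentPixelShader(float3 worldPosition : TEXCOORD0,
                                       float3 worldNormal   : TEXCOORD1) : COLOR0
         {
             // Reflect the view ray about the normal; the hardware picks a face from
             // the largest component of this vector and uses the others as (u, v)
             float3 view = normalize(worldPosition - CameraPosition);
             float3 reflected = reflect(view, normalize(worldNormal));
             return texCUBE(EnvironmentSampler, reflected);
         }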

  12. You will never need to worry about this in this class, but texture memory space is a huge problem
      - There are many different caching strategies, similar to the ones used for RAM:
        - Least Recently Used (LRU): Swap out the least recently used texture; very commonly used
        - Most Recently Used (MRU): Swap out the most recently used texture; useful only during thrashing
      - Prefetching can be useful to maintain consistent frame rates

  13. JPEG and PNG are common compression techniques for regular images
      - In graphics hardware, these are too complicated to be decoded on the fly
      - That's why finished MonoGame projects have pre-processed .xnb files
      - Most DirectX texture compression divides textures into 4 x 4 tiles
        - Two 16-bit RGB values are recorded for each tile
        - Each texel uses 2 bits to select one of the two colors or two interpolated values between them (see the sketch below)
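      To make the 2-bit selection concrete, here is an illustrative HLSL-style sketch of how one texel of a BC1/DXT1 tile is reconstructed (in its four-color mode) from the tile's two endpoint colors; the hardware does this automatically when a compressed texture is sampled, and the function and parameter names are invented for illustration:

         // color0 and color1 are the tile's two 16-bit endpoints, expanded to float3
         float3 DecodeBC1Texel(float3 color0, float3 color1, int index)
         {
             // The four palette entries: the two endpoints plus two interpolated values
             if (index == 0) return color0;
             if (index == 1) return color1;
             if (index == 2) return (2.0 * color0 + color1) / 3.0;
             return (color0 + 2.0 * color1) / 3.0;   // index == 3
         }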

  14. Ericsson texture compression (ETC) is used in OpenGL
      - It breaks texels into 2 x 4 blocks with a single color
      - It uses per-pixel luminance information to add detail to the blocks
      - Normal maps (normals stored as textures) allow for interesting compression approaches
        - Only x and y components are needed, since the z component of a unit normal can be calculated as z = sqrt(1 - x^2 - y^2)
        - The x and y can then be stored using the BC5 format for two channels of color data (reconstruction sketched below)
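      A minimal HLSL sketch of reconstructing a normal from a two-channel (BC5) normal map; the sampler name is an assumption:

         sampler NormalSampler;                  // hypothetical BC5-compressed normal map

         float3 SampleNormal(float2 uv)
         {
             // The two stored channels hold x and y in [0, 1]; remap them to [-1, 1]
             float2 xy = tex2D(NormalSampler, uv).rg * 2.0 - 1.0;

             // For a unit-length normal, z follows from x and y
             float z = sqrt(saturate(1.0 - dot(xy, xy)));
             return float3(xy, z);
         }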

  15. A procedural texture is made by computing a function of u and v instead of looking up a texel in an image (see the sketch below)
      - Noise functions are often used to give an appearance of randomness
      - Volume textures can be generated on the fly
      - Values can be returned based on distance to certain feature points (redder colors near heat, for example)
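      A minimal example of the idea in HLSL: a checkerboard computed directly from (u, v), with no image lookup at all. The colors, scale, and function name are arbitrary choices for illustration:

         float4 CheckerboardPixelShader(float2 uv : TEXCOORD0) : COLOR0
         {
             // The color is purely a function of (u, v)
             float2 cell = floor(uv * 8.0);                  // 8 x 8 checker squares
             float checker = fmod(cell.x + cell.y, 2.0);     // alternates between 0 and 1
             return lerp(float4(0.1, 0.1, 0.1, 1), float4(0.9, 0.9, 0.9, 1), checker);
         }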

  16. Textures don't have to be static
      - The application can alter them over time
      - Alternatively, u and v values can be remapped to make the texture appear to move (see the sketch below)
      - Matrix transformations can be used for zoom, rotation, shearing, etc.
      - Video textures can be used to play back a movie in a texture
      - Blending between textures can allow an object to transform like a chameleon
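      A minimal HLSL sketch of remapping (u, v) over time to make a texture scroll; the Time parameter is assumed to be set by the application each frame, and the other names are illustrative:

         float Time;                             // assumed: seconds elapsed, set by the app
         sampler DiffuseSampler;                 // hypothetical texture being animated

         float4 ScrollingPixelShader(float2 uv : TEXCOORD0) : COLOR0
         {
             // Offset the coordinates each frame; frac keeps them in [0, 1) so they wrap
             float2 scrolled = frac(uv + float2(0.1, 0.05) * Time);
             return tex2D(DiffuseSampler, scrolled);
         }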

  17. The lighting we have discussed is based on material properties
      - Diffuse color
      - Specular color
      - Smoothness coefficient m
      - A texture can be used to modify these values on a per-pixel basis
        - A normal image texture can be considered a diffuse color map
        - One that affects specular colors is a specular color map (usually grayscale)
        - One that affects m is a gloss map (see the sketch below)
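      A sketch of how a gloss map might modulate the smoothness coefficient m per pixel; the sampler name and the output range are assumptions for illustration:

         sampler GlossSampler;                   // hypothetical grayscale gloss map

         float SpecularPower(float2 uv)
         {
             // Remap the stored [0, 1] gloss value to a usable range for m
             float gloss = tex2D(GlossSampler, uv).r;
             return lerp(1.0, 64.0, gloss);      // arbitrary range for illustration
         }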

  18. Alpha values allow for interesting effects
      - Decaling is when you apply a texture that is mostly transparent to a (usually already textured) surface
      - Cutouts can be used to give the impression of a much more complex underlying polygon
        - 1-bit alpha doesn't require sorting (see the sketch below)
        - Cutouts are not always convincing from every angle
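      A common way to get 1-bit-style cutouts in HLSL is to discard fragments below an alpha threshold with clip(); the sampler name and threshold here are assumptions:

         sampler CutoutSampler;                  // hypothetical texture with an alpha cutout

         float4 CutoutPixelShader(float2 uv : TEXCOORD0) : COLOR0
         {
             float4 color = tex2D(CutoutSampler, uv);

             // Discard the fragment entirely when alpha is below the threshold;
             // every surviving pixel is fully opaque, so no depth sorting is needed
             clip(color.a - 0.5);
             return color;
         }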

  19.
      - Bump mapping
      - BRDFs

  20. Make sure you're solid on bump mapping from Chapter 6
      - Read Chapter 7 (at a high level)
      - Finish Assignment 3
        - Due tonight by midnight!
      - Work on Project 2
