Week 10 - Wednesday
What did we talk about last time?
- Fresnel reflection
- Snell's Law
- Microgeometry effects
- Implementing BRDFs
- Ambient lighting
- Image based rendering
A more complicated tool for area lighting is environment mapping (EM)
The key assumption of EM is that only direction matters:
- Light sources must be far away
- The object does not reflect itself
In EM, we make a 2D table of the incoming radiance based on direction
Because the table is 2D, we can store it in an image
The radiance reflected by a mirror is based on the reflected view vector:
r = 2(n · v)n - v
The reflectance equation is:
L_o(v) = R_F(θ_o) L_i(r)
where R_F is the Fresnel reflectance and L_i is the incoming radiance from direction r
Steps:
1. Generate or load a 2D image representing the environment
2. For each pixel that contains a reflective object, compute the normal at the corresponding location on the surface
3. Compute the reflected view vector from the view vector and the normal
4. Use the reflected view vector to compute an index into the environment map
5. Use the texel for incoming radiance
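The reflection in step 3 can be sketched in a few lines of Python (an illustration only, not the shader code; the function name `reflect_view` is my own, and v is assumed to point from the surface toward the eye, as in the formula r = 2(n · v)n - v):

```python
def reflect_view(v, n):
    """Reflected view vector r = 2(n . v)n - v.

    v: normalized view vector (surface toward the eye)
    n: normalized surface normal
    """
    d = 2.0 * (n[0] * v[0] + n[1] * v[1] + n[2] * v[2])
    return (d * n[0] - v[0], d * n[1] - v[1], d * n[2] - v[2])
```

Looking straight down the normal reflects back along the normal; a view vector perpendicular to the normal reflects to its own negation.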
It doesn't work well with flat surfaces:
- The direction doesn't vary much, mapping a lot of the surface to a narrow part of the environment map
- Normal mapping combined with EM helps a lot
The range of values in an environment map may be large (to cover many light intensities)
- As a consequence, the space requirements may be higher than for normal textures
Blinn and Newell used a longitude/latitude system with a projection like Mercator's
- ϕ is longitude and goes from 0 to 2π
- ρ is latitude and goes from 0 to π
We can compute these from the normalized reflected view vector r:
ρ = arccos(-r_z)
ϕ = atan2(r_y, r_x)
Problems:
- There are too many texels near the poles
- The seam between the left and right halves cannot easily be interpolated across
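These two formulas can be written out directly; a small Python sketch (my own function name, not from the text; atan2's (-π, π] range is wrapped into [0, 2π) so ϕ matches the stated interval):

```python
import math

def latlong_coords(r):
    """Blinn-Newell mapping of a normalized reflected view vector r.

    Returns (phi, rho): longitude in [0, 2*pi), latitude in [0, pi].
    """
    rho = math.acos(-r[2])                    # latitude from the z component
    phi = math.atan2(r[1], r[0]) % (2.0 * math.pi)  # wrap longitude to [0, 2*pi)
    return phi, rho
```

A vector pointing along +x sits on the equator (ρ = π/2) at longitude 0; the pole problem shows up because many (x, y) directions collapse to the same ρ near 0 and π.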
Imagine the environment is viewed through a perfectly reflective sphere
The resulting sphere map (also called a light probe) is what you'd see if you photographed such a sphere (like a Christmas ornament)
The sphere map has a basis giving its own coordinate system: (h, u, f)
The image was generated by looking along the f axis, with h to the right and u up (all normalized)
To use the sphere map, convert the surface normal n and the view vector v to sphere space by multiplying by the following matrix:
| h_x h_y h_z 0 |
| u_x u_y u_z 0 |
| f_x f_y f_z 0 |
|  0   0   0  1 |
Sphere mapping only shows the environment on the front of the sphere
It is view dependent
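Since the matrix rows are just the basis vectors, multiplying a direction by it amounts to three dot products; a Python sketch of the change of basis (illustration only; `to_sphere_space` is my own name):

```python
def to_sphere_space(vec, h, u, f):
    """Express direction vec in the sphere map's (h, u, f) basis.

    Equivalent to multiplying by the matrix whose rows are h, u, f
    (the homogeneous row/column only matter for points, not directions).
    """
    dot = lambda a, b: a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
    return (dot(h, vec), dot(u, vec), dot(f, vec))
```

With the standard basis h = (1,0,0), u = (0,1,0), f = (0,0,1), the transform is the identity, which is a quick sanity check.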
Cubic environment mapping is the most popular current method
- Fast
- Flexible
Take a camera, render the scene facing in all six directions
- Generate six textures
For each point on the surface of the object you're rendering, map to the appropriate texel in the cube
Pros:
- Fast, supported by hardware
- View independent
- Shader Model 4.0 can generate a cube map in a single pass with the geometry shader
Cons:
- Better sampling uniformity than sphere maps, but still not perfect (isocubes improve this)
- Still requires high dynamic range textures (lots of memory)
- Still only works for distant objects
We have talked about using environment mapping for mirror-like surfaces
The same idea can be applied to glossy (but not perfect) reflections
By blurring the environment map texture, the surface will appear rougher
For surfaces with varying roughness, we can simply access different mipmap levels of the cube map texture
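Selecting a mip level from roughness can be as simple as a linear mapping, sketched below (my own function; real renderers often use a perceptual curve tied to how each mip was prefiltered, so treat this as the simplest possible choice):

```python
def roughness_to_mip(roughness, num_mips):
    """Map roughness in [0, 1] to a mip level of a prefiltered cube map.

    Level 0 is the sharpest (mirror) image; the highest level is the blurriest.
    """
    return roughness * (num_mips - 1)
```

Roughness 0 samples the full-resolution mirror reflection, and roughness 1 samples the blurriest level.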
Environment mapping can be used for diffuse colors as well
Such maps are called irradiance environment maps
Because the viewing angle is not important for diffuse colors, only the surface normal is used to decide which part of the irradiance map is used
Rather than go through the rigmarole with vertex buffers, I'm going to use a cube model:

cube = Content.Load<Model>("cube");

Likewise, I can use a special cube texture for skyboxes stored in a TextureCube
This texture has 6 sub-textures for top, bottom, left, right, front, and back:

cubeTexture = Content.Load<TextureCube>("Sunset");
We need projections, a camera location, a texture, and a special kind of sampler for cube textures:

float4x4 World;
float4x4 View;
float4x4 Projection;
float3 Camera;

Texture SkyBoxTexture;
samplerCUBE SkyBoxSampler = sampler_state
{
    texture = <SkyBoxTexture>;
    magfilter = LINEAR;
    minfilter = LINEAR;
    mipfilter = LINEAR;
    AddressU = Mirror;
    AddressV = Mirror;
};
Vertex shader input and output are simple
Only the position is needed for input, and only the position and texture coordinate are needed for output:

struct VertexShaderInput
{
    float4 Position : POSITION0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float3 TextureCoordinate : TEXCOORD0;
};
Other than projection, the vertex shader gives a direction as a 3D coordinate
The pixel shader uses the direction to look up the value in the texture cube:

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
    output.TextureCoordinate = worldPosition.xyz - Camera;
    return output;
}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    return texCUBE(SkyBoxSampler, normalize(input.TextureCoordinate));
}
Environment mapping can easily be done in shaders using the same cube texture we used for skyboxes
We use the same lookup, but we have to compute the reflection from the camera off the surface and out to the cube
To the skybox shader code, we add a world inverse transpose for transforming the model normal, and a tint color:

float4x4 World;
float4x4 View;
float4x4 Projection;
float4x4 WorldInverseTranspose;
float3 Camera;
float4 TintColor = float4(1, 1, 1, 1);

Texture EnvironmentTexture;
samplerCUBE EnvironmentSampler = sampler_state
{
    texture = <EnvironmentTexture>;
    magfilter = LINEAR;
    minfilter = LINEAR;
    mipfilter = LINEAR;
    AddressU = Mirror;
    AddressV = Mirror;
};
The only addition to the vertex shader input is a normal, which helps determine the direction to reflect in:

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float4 Normal : NORMAL0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float3 Reflection : TEXCOORD0;
};
The vertex shader adds in normal transformation
The pixel shader uses the reflect() intrinsic to find the reflection and looks it up in the cube map:

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
    float3 ViewDirection = worldPosition.xyz - Camera;
    float3 Normal = normalize(mul(normalize(input.Normal), WorldInverseTranspose).xyz);
    output.Reflection = reflect(normalize(ViewDirection), Normal);
    return output;
}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    return TintColor * texCUBE(EnvironmentSampler, normalize(input.Reflection));
}
The result would look better if the ship had more vertices
Work day for Project 2
Keep reading Chapter 8
Start reading Chapter 9
We'll talk about global illumination on Monday