Computer Graphics (CS 563) Lecture 4: Advanced Computer Graphics, Image-Based Effects: Part 2. Prof. Emmanuel Agu, Computer Science Dept., Worcester Polytechnic Institute (WPI)
Image Processing Graphics is concerned with creating artificial scenes from geometry and shading descriptions. In image processing, the input is an image and the output is a modified version of that image. Image processing operations include altering images, removing noise, and superimposing images.
Image Processing Example: Sobel Filter Original Image Sobel Filter
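A Sobel filter like the one above can be sketched in a few lines of Python, as a CPU stand-in for what a pixel shader would do per texel. The `sobel` function and the nested-list image format are illustrative choices, not from the slides:

```python
def sobel(image):
    """Apply the Sobel operator to a 2D grayscale image (list of lists).

    Returns the gradient magnitude at each interior pixel; border
    pixels are left at 0 for simplicity.
    """
    h, w = len(image), len(image[0])
    # Sobel kernels for horizontal (kx) and vertical (ky) gradients.
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

A vertical intensity step in the input produces a strong response along the edge, which is the behavior visible in the filtered image above.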
Image Processing Image processing applied to the output of graphics rendering is called post-processing. To post-process on the GPU, the rendered output is usually written to an offscreen buffer (e.g. a color image, z-depth buffer, etc.). The image in the offscreen buffer is treated as a texture and mapped to a screen-filling quadrilateral; a pixel shader is then invoked on each element of the texture.
Image Negative Another example
Image Distortion
Image Sharpening
Embossing
Toon Rendering
Toon Rendering for Non-Photorealistic Effects
Blurring For some operations, a texture element may be combined with neighboring texture elements (blurring). Without motion blur With motion blur
Texture Animation using Image Processing Use the GPU to modify textures from frame to frame. Animations such as fluid flow can be done this way. Example: simulating rain, by Tatarchuk et al.
Heat Shimmer
Color Correction Color correction uses a function to convert the colors in an image to other colors. Why color correct? To mimic the appearance of a type of film, portray a particular mood, or convert from one color space to another. Example: conversion from RGB to CIE's XYZ color space:

[X]   [0.412453  0.357580  0.180423] [R]
[Y] = [0.212671  0.715160  0.072169] [G]
[Z]   [0.019334  0.119193  0.950227] [B]
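As a sketch, the matrix conversion above can be applied per pixel like this (plain Python; the function name is illustrative):

```python
# RGB -> CIE XYZ conversion matrix, values as given on the slide.
RGB_TO_XYZ = [
    [0.412453, 0.357580, 0.180423],
    [0.212671, 0.715160, 0.072169],
    [0.019334, 0.119193, 0.950227],
]

def rgb_to_xyz(rgb):
    """Convert one linear RGB triple to XYZ via a 3x3 matrix multiply."""
    return [sum(row[i] * rgb[i] for i in range(3)) for row in RGB_TO_XYZ]
```

Note that the middle row sums to 1.0, so white (1, 1, 1) maps to Y = 1.0: the Y row of this matrix is the luminance of the color.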
Color Correction
Color Correction
High Dynamic Range The Sun's brightness is about 60,000 lumens; dark areas of the earth have a brightness of 0 lumens. Basically, the world around us has a range of 0–60,000 lumens (High Dynamic Range). However, a monitor's range of colors is 0–255 (Low Dynamic Range). New file formats have been created for HDR images with wider ranges, e.g. the OpenEXR file format.
High Dynamic Range Some scenes contain both very bright and very dark areas. Using a uniform scaling factor to map actual intensity to displayed pixel intensity means either some areas are underexposed, or some areas of the picture are overexposed. Under exposure Over exposure
Tone Mapping The process of scaling intensities in real-world images (e.g. HDR images) to fit in the displayable range. Trying to capture the feeling of the real scene is non-trivial. Example: when coming out of a dark tunnel, lights should seem bright.
Types of Tone Mapping Operators Global: use the same scaling factor for all pixels. Local: use different scaling factors for different parts of the image. Time-dependent: the scaling factor changes over time. Time-independent: the scaling factor does NOT change over time. Real-time rendering usually does NOT implement local operators due to their complexity.
Tone Mapping Operators
Simple (Global) Tone Mapping Methods
Tone Mapping If the range of input values is small, compute the average, then scale so that the average falls in the displayable range. A simple average may let a few large values dominate, so Reinhard suggested using the logarithm when summing pixel values:

L̄_w = exp( (1/N) · Σ_{x,y} log(δ + L_w(x,y)) )

where L̄_w is the log-average luminance, L_w(x,y) is the luminance at pixel (x,y), and the small constant δ avoids taking the log of 0.
Tone Mapping Once the log-average luminance is computed, we can define the tone mapping operator

L(x,y) = (a / L̄_w) · L_w(x,y)

where L(x,y) is the resulting luminance. The parameter a is the key of the scene (a = 0.18 is normal). High key minimizes contrasts and shadows, e.g. a = 0.72; low key maximizes contrast between light and dark, e.g. a = 0.045.
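A minimal Python sketch of the two formulas above; the helper names and the default δ = 1e-4 are assumptions, not from the slides:

```python
import math

def log_average_luminance(lum, delta=1e-4):
    """Log-average luminance L_w_bar of a 2D luminance image."""
    n = sum(len(row) for row in lum)
    # delta avoids log(0) for pure-black pixels.
    s = sum(math.log(delta + l) for row in lum for l in row)
    return math.exp(s / n)

def tone_map(lum, a=0.18):
    """Reinhard global operator: L(x,y) = (a / L_w_bar) * L_w(x,y)."""
    lw_bar = log_average_luminance(lum)
    return [[(a / lw_bar) * l for l in row] for row in lum]
```

Because the operator is global, every pixel shares the same scale factor a / L̄_w, which is why it is cheap enough for real-time use.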
Tone Mapping: Effects of a
Lens Flare and Bloom Caused by the lens of the eye/camera when directed at a bright light. Halo: refraction of light by the lens. Ciliary corona: density fluctuations of the lens. Bloom: scattering in the lens, a glow around the light. Halo, Bloom, Ciliary Corona – top to bottom
Lens Flare and Bloom Use a set of textures for glare effects. Each texture is billboarded; an alpha map controls how much to blend, and the corona can be given colors. Overlap all of them! Animate to create sparkle.
Depth of Field In photographs, a range of pixels is in focus; pixels outside this range are out of focus. This effect is known as depth of field.
Depth of Field using Accumulation Buffer Jitter view position, add weighted samples to accumulation buffer After multiple rendering passes, display picture Downside: Multiple rendering passes is expensive
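A rough sketch of the accumulation-buffer idea above, assuming a hypothetical `render_scene(jx, jy)` callback that renders the scene with the view position jittered by (jx, jy) and returns a flat list of pixel values:

```python
import random

def depth_of_field(render_scene, num_samples=8, aperture=0.01):
    """Average several renders from jittered view positions."""
    accum = None
    for _ in range(num_samples):
        # Jitter the view position within the lens aperture.
        jx = random.uniform(-aperture, aperture)
        jy = random.uniform(-aperture, aperture)
        frame = render_scene(jx, jy)
        if accum is None:
            accum = [0.0] * len(frame)
        # Add this pass with weight 1/N into the accumulation buffer.
        for i, p in enumerate(frame):
            accum[i] += p / num_samples
    return accum
```

Points at the focal distance project to the same pixel in every pass and stay sharp; points nearer or farther shift between passes and blur, which is the downside the slide notes: one blurred image costs N full rendering passes.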
Depth of Field using Scattering Scatter the shading value of each location on a surface to neighboring pixels. Sprites are used to represent circles of influence; each pixel's value is the averaged sum of all overlapping circles.
Motion Blur Antialiasing is spatial blurring; motion blur is the blurring of samples taken over time. In cameras, it is caused by exposing film to moving objects. It makes fast-moving scenes appear less jerky: 30 fps with motion blur looks better than 60 fps without it.
Motion Blur Accumulation buffer can be used to create blur Basic idea is to average series of images over time Move object to set of positions occupied in a frame, blend resulting images together
Motion Blur Can blur a moving average of frames, e.g. blur 8 images: when you render frame 9, subtract frame 1, and so on. Velocity buffer: blur in screen space using the velocity of objects.
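The moving-average trick above (add the newest frame, subtract the oldest from a running sum) can be sketched as follows; the `MotionBlur` class and its flat-list frames are illustrative, not from the slides:

```python
from collections import deque

class MotionBlur:
    """Blur over the last n frames using a running sum."""

    def __init__(self, n=8, size=4):
        self.n = n
        self.frames = deque()
        self.running_sum = [0.0] * size

    def add_frame(self, frame):
        # Add the newest frame into the running sum.
        self.frames.append(frame)
        for i, p in enumerate(frame):
            self.running_sum[i] += p
        # Once n frames are buffered, subtract the oldest
        # (e.g. when frame 9 arrives, frame 1 is removed).
        if len(self.frames) > self.n:
            oldest = self.frames.popleft()
            for i, p in enumerate(oldest):
                self.running_sum[i] -= p
        # Displayed image is the average of the buffered frames.
        return [s / len(self.frames) for s in self.running_sum]
```

The advantage over re-averaging from scratch is that each new frame costs only one add and one subtract per pixel, regardless of n.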
Fog Fog was part of the OpenGL fixed-function pipeline. Using shaders, fog is applied to the scene just before display, and shaders can generate more elaborate fog. Fog is an atmospheric effect: it adds a little realism and helps in determining distances.
Fog example Often just a matter of choosing a fog color, choosing a fog model, and turning it on.
Rendering Fog Let c_f be the color of the fog and c_s the color of the surface. The final pixel color c_p blends them:

c_p = f · c_s + (1 − f) · c_f,  f ∈ [0, 1]

How to compute f? Three ways: linear, exponential, and exponential-squared. Linear:

f = (z_end − z_p) / (z_end − z_start)
Fog Exponential: f = e^(−d_f · z_p). Squared exponential: f = e^(−(d_f · z_p)²). The exponential form is derived from Beer's law: the intensity of outgoing light diminishes exponentially with distance.
Fog f values for different depths can be precomputed and stored in a table on the GPU. The distances used in the f calculations are planar; one can also use the Euclidean (radial) distance from the viewer to create radial fog.
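The three fog factors and the blend equation above can be sketched as follows (parameter names are illustrative; d_f is the fog density):

```python
import math

def fog_linear(z_p, z_start, z_end):
    """Linear fog: f = (z_end - z_p) / (z_end - z_start), clamped to [0,1]."""
    f = (z_end - z_p) / (z_end - z_start)
    return max(0.0, min(1.0, f))

def fog_exp(z_p, density):
    """Exponential fog: f = e^(-d_f * z_p), from Beer's law."""
    return math.exp(-density * z_p)

def fog_exp2(z_p, density):
    """Squared-exponential fog: f = e^(-(d_f * z_p)^2)."""
    return math.exp(-((density * z_p) ** 2))

def apply_fog(f, surface_color, fog_color):
    """Blend: c_p = f * c_s + (1 - f) * c_f, per channel."""
    return [f * s + (1.0 - f) * c
            for s, c in zip(surface_color, fog_color)]
```

In all three models f = 1 means no fog (the surface color shows through) and f = 0 means fully fogged, matching the blend equation on the earlier slide.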
Different Atmospheres More generally, we can simulate better skies
Volume Rendering Volumetric data is represented as volumetric pixels (voxels). Methods for rendering voxels (e.g. CT/MRI data): implicit surface techniques that convert voxel samples into polygonal surfaces (called isosurfaces); treating the voxel data as a set of 2D image slices (Lacroute & Levoy); splatting, where each voxel is represented by an alpha-blended circular object (splat) that drops off in opacity at its fringes; and volume slices as textured quads (OpenGL Volumizer API).
Volumetric Texturing Represent objects as a sequence of semi-transparent textures. Good for rendering fuzzy or hairy objects.
References: Kutulakos K., CSC 2530H: Visual Modeling, course slides; UIUC CS 319, Advanced Computer Graphics, course slides; David Luebke, CS 446, U. of Virginia, slides; Chapter 2 of Real-Time Rendering; Suman Nadella, CS 563 slides, Spring 2005