System Architectures Computer Graphics Rendering Pipeline Jonathan Thaler Department of Computer Science 1 / 86
Introduction Remember Pipes & Filters... 2 / 86
Pipes & Filters Definition Pipes and Filters is a pattern for solving component-based data transformation problems. A sequence of processing steps on a data stream is expressed using components called Filters, connected through channels called Pipes. Figure: The Pipes and Filters architectural style divides a larger processing task into a sequence of smaller, independent processing steps (Filters) that are connected by channels (Pipes). 3 / 86
Pipes & Filters Definition The central concept in Pipes and Filters is the processing of a data stream. A data stream is understood as a sequence of uniform data entities (bytes, characters of a character set, digitized audio signal, ...). Sender and receiver agree on the semantics of the sequence (image, table, audio signal, ...). As a consequence, it is, for example, not clear when the stream has reached its end - i.e. there is no structural information marking the end of a data stream. Figure: A pipes and filters example for processing an incoming order. 4 / 86
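The stream-processing idea above can be sketched in a few lines: each filter is a generator that consumes an upstream iterator (the pipe) and yields transformed items. The concrete filters below (a number source, a squaring step, an even-only filter) are illustrative placeholders, not taken from the slides.

```python
def numbers(limit):
    # Source filter: emits a stream of uniform data entities (integers).
    yield from range(limit)

def square(stream):
    # Transformation filter: squares each item as it flows through.
    for x in stream:
        yield x * x

def keep_even(stream):
    # Filter in the narrow sense: drops items that are odd.
    for x in stream:
        if x % 2 == 0:
            yield x

# Compose the pipeline by chaining generators (the "pipes").
pipeline = keep_even(square(numbers(6)))
print(list(pipeline))  # [0, 4, 16]
```

Note that, matching the definition, the filters themselves carry no structural end-of-stream information; the Python generators simply stop yielding when the source is exhausted.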
Other Known Usages Computer Graphics The data structure is an incremental data stream, going from vertices, to edges, to fragments, to pixels on screen, passing through a number of transformations from 3D space to 2D on screen. 5 / 86
Introduction Computer Graphics Pipeline 6 / 86
Computer Graphics Pipeline Figure: Real-time rendering in the game Fortnite. 7 / 86
Computer Graphics Pipeline Figure: Real-time rendering in the game Doom Eternal. 8 / 86
Computer Graphics Pipeline Figure: Real-time rendering in the game Destiny 2. 9 / 86
Computer Graphics Pipeline Physically-Based Rendering 10 / 86
Computer Graphics Pipeline Figure: Photorealistic rendering of various materials and surfaces. 11 / 86
Computer Graphics Pipeline Figure: Photorealistic rendering of light scattering with photon tracing. 12 / 86
Computer Graphics Pipeline Figure: Photorealistic rendering of translucent material with subsurface scattering. 13 / 86
Computer Graphics Pipeline Towards photorealistic Real-Time Rendering 14 / 86
Computer Graphics Pipeline Figure: Towards photorealistic real-time rendering with the CryEngine. 15 / 86
Computer Graphics Pipeline Figure: Towards photorealistic real-time rendering with the Unreal Engine 4. 16 / 86
Computer Graphics Pipeline Figure: Towards photorealistic real-time rendering with the Unreal Engine 4. 17 / 86
Computer Graphics Pipeline The Rendering Pipeline 18 / 86
Computer Graphics Pipeline The central problem of 3D computer graphics is how to arrive from 3D model coordinates at 2D screen coordinates. Figure: From 3D model to 2D screen space. 19 / 86
Computer Graphics Pipeline Definition The rendering pipeline generates (renders) a two-dimensional image, given a virtual camera, three-dimensional objects, light sources, and other elements such as material properties. Figure: The pipeline stages execute in parallel, with each stage dependent upon the result of the previous stage. 20 / 86
Computer Graphics Pipeline There are four pipeline stages (the 1st runs on the CPU and/or GPU; the 2nd, 3rd and 4th run on the GPU): 1. The Application Stage is driven by the application and is therefore typically implemented in software running on general-purpose CPUs. 2. The Geometry Processing stage deals with transformations, projections, and all other types of geometry handling. 3. The Rasterization Stage typically takes as input three vertices, forming a triangle, and finds all pixels that are considered inside that triangle, then forwards these to the next stage. 4. The Pixel Processing Stage executes a program per pixel to determine its color and may perform depth testing to see whether it is visible or not. 21 / 86
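The four stages above can be sketched as a chain of plain functions, mirroring the Pipes & Filters structure of the pipeline. The data shapes and stage bodies below are simplified placeholders of my own choosing, not a real graphics API:

```python
def application_stage(scene):
    # CPU side: decide what to draw and submit it (here: a triangle list).
    return scene["triangles"]

def geometry_stage(triangles):
    # Per-vertex work: stands in for transform/projection; here just shifts x by +1.
    return [[(x + 1, y) for (x, y) in tri] for tri in triangles]

def rasterization_stage(triangles):
    # Turn each triangle into fragments; trivially emit its vertices here.
    return [v for tri in triangles for v in tri]

def pixel_processing_stage(fragments):
    # Shade each fragment; here a constant "color" per fragment.
    return [(v, "white") for v in fragments]

scene = {"triangles": [[(0, 0), (1, 0), (0, 1)]]}
image = pixel_processing_stage(
    rasterization_stage(geometry_stage(application_stage(scene))))
print(image)
```

Each stage consumes exactly the output of the previous one, which is the dependency the figure on the previous slide describes.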
Computer Graphics Pipeline Application Stage Implements the specific domain logic and performs general-purpose computing tasks. General-purpose computing tasks are traditionally: IO, collision detection, particles, global acceleration algorithms, animation, physics simulation, ... Some tasks such as physics simulations tend to be executed on GPUs, therefore the separation of tasks between CPU and GPU is blurred. Submits draw commands to the GPU for rendering. Is highly domain specific and will be discussed a bit more in the chapter on Game Engine Architecture. 22 / 86
Computer Graphics Pipeline Geometry Stage The geometry stage is typically performed on a graphics processing unit (GPU) that contains many programmable cores as well as fixed-function hardware. The geometry processing stage on the GPU is responsible for most of the per-triangle and per-vertex operations. 23 / 86
Geometry Stage Geometry Stage Computes what is to be drawn, how it should be drawn, and where it should be drawn. It runs on the GPU and is responsible for most of the per-triangle and per-vertex operations. 24 / 86
Geometry Stage Vertex Shading: View Transformation 25 / 86
Geometry Stage Vertex Shading: Projection 26 / 86
Geometry Stage Clipping 27 / 86
Geometry Stage Screen Mapping 28 / 86
Computer Graphics Pipeline Rasterization Stage Given the transformed and projected vertices with their associated shading data from geometry processing, the goal of the rasterization stage is to find all pixels that are inside a triangle being rendered. 29 / 86
Rasterization Stage Rasterization Stage All the primitives that survive clipping in the geometry stage are rasterized: all pixels that are inside a primitive are found and sent further down the pipeline to pixel processing. Rasterization is the conversion from two-dimensional vertices in screen space - each with a z-value (depth value) and various shading information associated with each vertex - into pixels on the screen. Rasterization is a synchronization point between geometry and pixel processing: triangles are formed from vertices and sent down to pixel processing. 30 / 86
Rasterization Stage Triangle Setup: differentials, edge equations, and other data for the triangle are computed. Fixed-function hardware is used for this task, which is therefore not fully programmable through shaders. Triangle Traversal: finding which samples (antialiasing) or pixels are inside a triangle. A pixel inside a triangle is referred to as a fragment. Each triangle fragment's properties are generated using data interpolated among the three triangle vertices. These properties include the fragment's depth, as well as any shading data from the geometry stage. It is also here that perspective-correct interpolation over the triangles is performed. All pixels or samples that are inside a primitive are then sent to the pixel processing stage. 31 / 86
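Triangle traversal with edge equations can be sketched as follows: a pixel center is inside the triangle when all three edge functions have the same sign, and the edge-function values double as barycentric weights for interpolating the fragment's depth and shading data. This is a minimal sketch (counter-clockwise winding assumed, no perspective-correct interpolation), not how the fixed-function hardware is actually laid out:

```python
def edge(a, b, p):
    # Twice the signed area of triangle (a, b, p); > 0 if p lies
    # to the left of the directed edge a -> b.
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterize(v0, v1, v2, width, height):
    area = edge(v0, v1, v2)          # total area, for normalizing weights
    fragments = []
    for y in range(height):
        for x in range(width):
            p = (x + 0.5, y + 0.5)   # sample at the pixel center
            w0 = edge(v1, v2, p)
            w1 = edge(v2, v0, p)
            w2 = edge(v0, v1, p)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:   # inside (CCW winding)
                # Barycentric weights: usable to interpolate depth and
                # any per-vertex shading data over the triangle.
                fragments.append((x, y, (w0 / area, w1 / area, w2 / area)))
    return fragments

frags = rasterize((0, 0), (4, 0), (0, 4), 5, 5)
print(len(frags))  # 10 fragments inside this triangle
```

A real GPU traverses in small tiles and evaluates the edge equations incrementally, but the inside test and the interpolation weights are the same idea.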
Computer Graphics Pipeline Pixel Processing Stage The goal is to compute the color of each pixel of each visible primitive. 32 / 86
Pixel Processing Stage Pixel Processing Stage Triangles that have been associated with any textures (images) are rendered with these images applied to them as desired. Visibility is resolved via the z-buffer algorithm, along with optional discard and stencil tests. Each object is processed in turn, and the final image is then displayed on the screen. 33 / 86
Pixel Processing Stage Pixel Shading 34 / 86
Pixel Processing Stage Merging with z-Buffer 35 / 86
Computer Graphics Pipeline From 3D Model to 2D Screen Coordinates A detailed and technical discussion of how to arrive from a 3D model at 2D coordinates. 36 / 86
From 3D Model to 2D Screen Coordinates Definition The key tools for projecting three dimensions down to two are a viewing model, use of homogeneous coordinates, application of linear transformations by matrix multiplication, and setting up a viewport mapping. 37 / 86
From 3D Model to 2D Screen Coordinates Definition The common transformation process for producing the desired view is analogous to taking a photograph with a camera. 1. Viewing transformation: move the camera to the location you want to shoot from and point the camera in the desired direction. 2. Modeling transformation: move the subject to be photographed into the desired location in the scene. 3. Projection transformation: choose a camera lens or adjust the zoom. 4. Apply the transformations: take the picture. 5. Viewport transformation: stretch or shrink the resulting image to the desired picture size. 38 / 86
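The five camera-analogy steps can be sketched with homogeneous coordinates and 4x4 matrices. The concrete matrices below (a translate-only model-view, a simple symmetric perspective matrix with unit focal length, an 800x600 window) are illustrative assumptions, not the slides' values:

```python
import numpy as np

def translate(tx, ty, tz):
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def viewport(width, height):
    # Maps normalized device coordinates [-1, 1] to window coordinates.
    m = np.eye(4)
    m[0, 0], m[0, 3] = width / 2, width / 2
    m[1, 1], m[1, 3] = height / 2, height / 2
    return m

# Steps 1 + 2: model-view, here moving the scene 5 units in front of the camera.
model_view = translate(0, 0, -5)

# Step 3: a simple symmetric perspective matrix (near = 1, far = 10).
n, f = 1.0, 10.0
projection = np.array([
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, (f + n) / (n - f), 2 * f * n / (n - f)],
    [0, 0, -1, 0],
])

# Step 4: apply the transforms to a homogeneous model-space point.
p = np.array([0.0, 0.0, 0.0, 1.0])
clip = projection @ model_view @ p
ndc = clip / clip[3]                 # perspective divide

# Step 5: viewport mapping to an 800x600 window.
window = viewport(800, 600) @ ndc
print(window[:2])                    # [400. 300.] - the window center
```

The perspective divide by the homogeneous w component is exactly what the use of homogeneous coordinates buys us: projection becomes a matrix multiplication followed by one division.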
From 3D Model to 2D Screen Coordinates Model-View Transform Steps 1 and 2 can be considered as doing the same thing, but as inverses (opposites) of each other. Normally they are combined together as the Model-View Transform. With the Model-View Transform we arrive at a single, unified space for assembling objects into a scene, which is also called Eye Space. 39 / 86
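The inverse relationship can be made concrete: the viewing matrix is the inverse of the camera's own placement, so moving the camera right is the same as moving the world left. A minimal sketch, using translate-only matrices of my own choosing for brevity:

```python
import numpy as np

def translate(tx, ty, tz):
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

# Place the camera at (2, 0, 0); the viewing matrix is that motion inverted.
camera_placement = translate(2, 0, 0)
viewing = np.linalg.inv(camera_placement)   # == translate(-2, 0, 0)

model = translate(5, 0, 0)                  # object placed at x = 5 in the world
model_view = viewing @ model                # single, combined Model-View Transform

p = model_view @ np.array([0.0, 0.0, 0.0, 1.0])
print(p[:3])                                # object sits at x = 3 in eye space
```

Combining the two matrices once per object, instead of applying them separately per vertex, is why the Model-View Transform is treated as a single step.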
From 3D Model to 2D Screen Coordinates Transformations Coordinate systems for transforming from 3D model coordinates into 2D screen coordinates. 40 / 86
From 3D Model to 2D Screen Coordinates Visualisation of the various transformation steps. 41 / 86