  1. Week 2 - Friday

  2.  What did we talk about last time?  Graphics rendering pipeline  Geometry Stage

  3.  We're going to start by drawing a 3D model
       Eventually, we'll go back and create our own primitives
       Like other MonoGame content, the easiest way to manage a model is to add it to your Content folder and load it through the Content Pipeline
       MonoGame can load (some) .fbx, .x, and .obj files
       Note that getting just the right kind of files (with textures or not) is sometimes challenging

  4.  First, we declare a member variable to hold the model:

           Model model;

       Then we load the model in the LoadContent() method:

           model = Content.Load<Model>("Ship");
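       As a broader sketch (not in the original slides), here is where those two pieces live in a minimal Game subclass; the class name ShipGame is just a placeholder:

           using Microsoft.Xna.Framework;
           using Microsoft.Xna.Framework.Graphics;

           public class ShipGame : Game
           {
               GraphicsDeviceManager graphics;
               Model model;                    // holds the loaded 3D model

               public ShipGame()
               {
                   graphics = new GraphicsDeviceManager(this);
                   Content.RootDirectory = "Content";  // root of pipeline output
               }

               protected override void LoadContent()
               {
                   // "Ship" must match the asset name in the Content project
                   model = Content.Load<Model>("Ship");
               }
           }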

  5.  To draw anything in 3D, we need a world matrix, a view matrix, and a projection matrix:

           Matrix world = Matrix.CreateTranslation(new Vector3(0, 0, 0));
           Matrix view = Matrix.CreateLookAt(new Vector3(0, 0, 7),
               new Vector3(0, 0, 0), Vector3.UnitY);
           Matrix projection = Matrix.CreatePerspectiveFieldOfView(0.9f,
               (float)GraphicsDevice.Viewport.Width /
               GraphicsDevice.Viewport.Height, 0.1f, 100.0f);

       Since you'll need these repeatedly, you could store them as members

  6.  The world matrix controls how the model is translated, scaled, and rotated with respect to the global coordinate system:

           Matrix world = Matrix.CreateTranslation(new Vector3(0, 0, 0));

       This code makes a matrix that moves the model 0 units in x, 0 units in y, and 0 units in z
       In other words, it does nothing
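       As a sketch of a world matrix that actually does something (the values here are arbitrary, not from the slides), you can compose transforms; in MonoGame's row-vector convention, the leftmost matrix is applied first:

           // Scale, then rotate, then translate
           Matrix world = Matrix.CreateScale(2.0f)
                        * Matrix.CreateRotationY(MathHelper.PiOver4)
                        * Matrix.CreateTranslation(new Vector3(3, 0, 0));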

  7.  The view matrix sets up the orientation of the camera
       The easiest way to do so is to give:
        Camera location
        What the camera is pointed at
        Which way is up

           Matrix view = Matrix.CreateLookAt(new Vector3(0, 0, 7),
               new Vector3(0, 0, 0), Vector3.UnitY);

       This camera is at (0, 0, 7), looking at the origin, with positive y as up
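       The same call can animate the camera; a sketch (not in the original slides) that orbits the origin, assuming it runs somewhere with access to gameTime:

           // Orbit the origin at radius 7, one radian per second
           float angle = (float)gameTime.TotalGameTime.TotalSeconds;
           Vector3 cameraPosition = new Vector3(
               7f * (float)Math.Sin(angle), 0, 7f * (float)Math.Cos(angle));
           Matrix view = Matrix.CreateLookAt(
               cameraPosition, Vector3.Zero, Vector3.UnitY);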

  8.  The projection matrix determines how the scene is projected into 2D
       It can be specified with:
        Field of view in radians
        Aspect ratio of the screen (width / height)
        Near plane location
        Far plane location

           Matrix projection = Matrix.CreatePerspectiveFieldOfView(0.9f,
               (float)GraphicsDevice.Viewport.Width /
               GraphicsDevice.Viewport.Height, 0.1f, 100f);
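       A variant sketch (not in the original slides): 0.9 radians is about 51.6 degrees, and MathHelper plus the Viewport.AspectRatio property can make the intent clearer; the 60-degree value is just an example:

           Matrix projection = Matrix.CreatePerspectiveFieldOfView(
               MathHelper.ToRadians(60f),            // field of view in radians
               GraphicsDevice.Viewport.AspectRatio,  // width / height
               0.1f,                                 // near plane
               100f);                                // far plane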

  9.  Drawing the model is done by drawing all the individual meshes that make it up
       Each mesh has a series of effects
       Effects are used for texture mapping, visual appearance, and other things
       They need to know the world, view, and projection matrices

           foreach (ModelMesh mesh in model.Meshes)
           {
               foreach (BasicEffect effect in mesh.Effects)
               {
                   effect.World = world;
                   effect.View = view;
                   effect.Projection = projection;
               }
               mesh.Draw();
           }
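       If the model renders but looks flat, one optional extra (an addition, not required by the slides) is BasicEffect's built-in three-light rig, switched on inside the same inner loop:

           effect.EnableDefaultLighting();  // built-in key, fill, and back lights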

  10.  I did not properly describe an important optimization done in the Geometry Stage: backface culling
        Backface culling removes all polygons that are not facing toward the screen
        A simple dot product is all that is needed
        This step is done in hardware in MonoGame and OpenGL
        You just have to turn it on
        Beware: If you screw up your normals, polygons could vanish
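        In MonoGame, "turning it on" amounts to setting the rasterizer state; culling counter-clockwise faces is the default, and disabling culling is a handy debugging move when polygons vanish (a sketch):

            // Default: cull triangles wound counter-clockwise on screen
            GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;

            // Debugging vanishing polygons: disable culling entirely
            GraphicsDevice.RasterizerState = RasterizerState.CullNone;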

  11.  For API design, practical top-down problem solving, hardware design, and efficiency, rendering is described as a pipeline
        This pipeline contains three conceptual stages:

            Application (decides what, how, and where to render) ->
            Geometry (produces material to be rendered) ->
            Rasterizer (renders the final image)

  12.  The goal of the Rasterizer Stage is to take all the transformed geometric data and set colors for all the pixels in screen space
        Doing so is called:
         Rasterization
         Scan conversion
        Note that the word pixel is actually a portmanteau of "picture element"

  13.  As you should expect, the Rasterization Stage is also divided into a pipeline of several functional stages:

            Triangle Setup -> Triangle Traversal -> Pixel Shading -> Merging

  14.  Data for each triangle is computed
        This could include normals
        This is boring anyway because fixed-operation (non-customizable) hardware does all the work

  15.  Each pixel whose center is overlapped by a triangle must have a fragment generated for the part of the triangle that overlaps the pixel
        The properties of this fragment are created by interpolating data from the vertices
        Again, boring, fixed-operation hardware does this
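        A hypothetical software version of that interpolation (not in the original slides): a fragment attribute is a weighted average of the three vertex attributes, using barycentric weights that sum to 1:

            // Interpolate a per-vertex attribute (e.g., color or normal) at a
            // fragment with barycentric coordinates (w0, w1, w2), w0+w1+w2 = 1
            static Vector3 InterpolateAttribute(Vector3 v0, Vector3 v1, Vector3 v2,
                                                float w0, float w1, float w2)
            {
                return v0 * w0 + v1 * w1 + v2 * w2;
            }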

  16.  This is where the magic happens
        Given the data from the other stages, per-pixel shading (coloring) happens here
        This stage is programmable, allowing for many different shading effects to be applied
        Perhaps the most important effect is texturing or texture mapping
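        On the MonoGame side, "programmable" means loading a custom effect (compiled HLSL) through the Content Pipeline instead of relying on BasicEffect; the asset and parameter names below are hypothetical:

            Effect customEffect = Content.Load<Effect>("MyShader");
            customEffect.Parameters["World"].SetValue(world);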

  17.  Texturing is gluing a (usually) 2D image onto a polygon
        To do so, we map texture coordinates onto polygon coordinates
        Pixels in a texture are called texels
        This is fully supported in hardware
        Multiple textures can be applied in some cases
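        A sketch of the mapping (not in the original slides): each vertex pairs a 3D position with a (u, v) texture coordinate in [0, 1], so the hardware knows which texels stretch across the polygon:

            // A hypothetical textured quad; (0, 0) is the image's top-left corner
            VertexPositionTexture[] quad =
            {
                new VertexPositionTexture(new Vector3(-1,  1, 0), new Vector2(0, 0)),
                new VertexPositionTexture(new Vector3( 1,  1, 0), new Vector2(1, 0)),
                new VertexPositionTexture(new Vector3(-1, -1, 0), new Vector2(0, 1)),
                new VertexPositionTexture(new Vector3( 1, -1, 0), new Vector2(1, 1)),
            };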

  18.  The final screen data containing the colors for each pixel is stored in the color buffer
        The merging stage is responsible for merging the colors from each of the fragments from the pixel shading stage into a final color for a pixel
        Deeply linked with merging is visibility: the final color of the pixel should be the one corresponding to a visible polygon (and not one behind it)

  19.  To deal with the question of visibility, most modern systems use a Z-buffer or depth buffer
        The Z-buffer keeps track of the z-values for each pixel on the screen
        As a fragment is rendered, its color is put into the color buffer only if its z-value is closer than the current value in the Z-buffer (which is then updated)
        This is called a depth test
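        A hypothetical software rendition of the depth test (not in the original slides), assuming smaller z means closer to the camera:

            // Write a fragment only if it is closer than the current occupant
            void WriteFragment(float[,] zBuffer, Color[,] colorBuffer,
                               int x, int y, float z, Color color)
            {
                if (z < zBuffer[x, y])
                {
                    zBuffer[x, y] = z;          // update the depth buffer
                    colorBuffer[x, y] = color;  // and the color buffer
                }
            }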

  20.  Pros
         Polygons can usually be rendered in any order
         Universal hardware support is available
        Cons
         Partially transparent objects must be rendered in back-to-front order (painter's algorithm)
         Completely transparent values can mess up the Z-buffer unless they are checked
         Z-fighting can occur when two polygons have the same (or nearly the same) z-values
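        A common recipe for the transparency problem (typical practice, not from the slides): draw opaque geometry first, then draw transparent geometry back to front with alpha blending on and depth writes off, so it is still depth-tested but never pollutes the Z-buffer:

            GraphicsDevice.BlendState = BlendState.AlphaBlend;
            GraphicsDevice.DepthStencilState = DepthStencilState.DepthRead;
            // ... draw transparent objects sorted far to near ...
            GraphicsDevice.DepthStencilState = DepthStencilState.Default;
            GraphicsDevice.BlendState = BlendState.Opaque;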

  21.  A stencil buffer can be used to record a rendered polygon
        This stores the part of the screen covered by the polygon and can be used for special effects
        Frame buffer is a general term for the set of all buffers
        Different images can be rendered to an accumulation buffer and then averaged together to achieve special effects like blurring or antialiasing
        A back buffer allows us to render off screen to avoid popping and tearing
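        In MonoGame, this kind of off-screen drawing is exposed through render targets; a sketch (not in the original slides):

            RenderTarget2D target = new RenderTarget2D(GraphicsDevice,
                GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height);

            GraphicsDevice.SetRenderTarget(target);  // draws now go off screen
            // ... draw the scene ...
            GraphicsDevice.SetRenderTarget(null);    // back to the back buffer
            // target can now be sampled as a texture in later passes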

  22.  This pipeline is focused on interactive graphics
        Micropolygon pipelines are usually used for film production
        Predictive rendering applications usually use ray tracing renderers
        The old model was the fixed-function pipeline, which gave little control over the application of shading functions
        The book focuses on programmable GPUs, which allow all kinds of tricks to be done in hardware

  23.  GPU architecture  Programmable shading

  24.  Read Chapter 3
        Start on Assignment 1, due next Friday, September 13 by 11:59 p.m.
        Keep working on Project 1, due Friday, September 27 by 11:59 p.m.
        Amazon Alexa Developer meetup
         Thursday, September 12 at 6 p.m.
         Here at The Point
         Hear about new technology
         There might be pizza…

  25.  Want a Williams-Sonoma internship?
         Visit http://wsisupplychain.weebly.com/
        Interested in coaching kids aged 7-18 in programming?
         Consider working at theCoderSchool
         For more information:
         ▪ Visit https://www.thecoderschool.com/locations/westerville/
         ▪ Contact Kevin Choo at kevin@thecoderschool.com
         ▪ Ask me!
