  1. Hello. I am Morgan McGuire, presenting a collaboration with Julie Dorsey, Eric Haines, John Hughes, Steve Marschner, Matt Pharr, and Peter Shirley on A Taxonomy of Bidirectional Scattering Distribution Function Lobes for Rendering Engineers. With the word “engineer” we intend to include researchers, faculty, and students: people who design rendering systems. This is different from scientists and engineers in adjacent fields, people in manufacturing, and content creators, who have different needs and their own taxonomies.

  2. We release this work under a Creative Commons license to facilitate reuse by other authors and teachers.

  3. To introduce the problem we are addressing in this work, I’ll begin with a story. (I will intentionally misattribute my coauthors’ contributions and positions to improve the story…and to tease them, which I assure you comes from my admiration of, and gratitude for, being in such illustrious company.)

  4. Pete gathered us online several months ago. He noted that there’s a lack of consistency in the terminology used to describe materials in rendering, and proposed that we standardize for our own future book editions, courses, and papers in order to reduce confusion when moving between contexts. [click] So, we surveyed major work across several fields and five decades and sought a reasonable set of terms and concepts.

  5. The first challenge that we noted is that “material” has many different meanings in the field of computer graphics and interactive techniques taken as a whole. [click] “Material” can describe the properties of:
     • physical simulation, including density, chemical properties, friction coefficients, rigidity, etc.
     • audio simulation, including the sounds it makes when struck with different objects
     • modeling, including the shape of detail features—fur, tiles, bumps, etc.
     • surface rendering for the outermost surface, or multiple thin layers of the surface, including emission and light reflection
     • volumetric rendering, including phase functions
     • game logic, including whether it can be picked up by the player, is breakable, or can be traversed by characters

  6. For example, if you tell me that you have a sphere with a particular material on it, I personally assume something like this, where the color and size of the highlight are the space of variation. But to an artist using the Substance Designer tool, these are all the same object with a different “material”. [click] Coming at this first from rendering, I hadn’t thought about the coverage mask in the wicker example, but that makes sense. That “material” might include a displacement map or all of the complexity of a full 3D modeled world—

  7. —down to individual procedurally-generated sailboats and houses…took me aback! We addressed this by narrowing our goal from “materials” to “appearance under thin-surface rendering” at the level of “uniform small patches” within a pixel or texel…

  8. Yet, we found the terminology for surface appearance still inconsistent. [click] I think the most common definition problem in rendering is “what does specular mean?” To some, it is a surface that creates perfectly sharp mirror reflection. To others, it is a surface with mirror reflection or refracted transmission. To still others, it is any blurred reflection that is near the mirror reflection direction. Vendors of paint, metal, and cloth; optical physicists; astronomers; natural media artists; CGI artists; game programmers; film VFX engineers; academic researchers, faculty, and students…all have different words for describing the appearance of this statue, even when focusing solely on appearance. And we recognize that we can’t really discuss appearance as a property of a surface anyway…

  9. The passive appearance results from:
     1. the combination of the two media, one on either side of a surface where light scatters
     2. the incident illumination conditions
     3. the imaging system’s sensitivities (e.g., a camera’s sensor response or the human retina)
     4. the context within the image (light adaptation, bloom, local contrast…all of which happen in the human visual system as well as in modern cameras!)
     “Passive” just means that the object isn’t glowing; if it is, that’s another thing… Our conclusion was to abandon “materials” and “appearance.”

  10. Because “appearance” is the result of shading, we are better served in rendering by describing the key actor in shading, [click] which is the bidirectional scattering distribution function (BSDF), [click] which is not a property of an object but an emergent property of an interface, although in practice it is common to assume surfaces are in air and attach BSDFs to them.

  11. The BSDF is a function of the incoming and outgoing direction vectors at a surface. It is the ratio of the change in outgoing radiance to the change in incident irradiance over small solid angles. It has units of “per steradian”. This means that the BSDF is a distribution of light scattering in every possible direction.
      If I choose some incoming direction of light, then I can visualize a cross section of the BSDF with a cartoon like this: the gray box is some medium, such as glass; the top of the gray box is the interface between air and the glass…let’s say that it is ground (rough, sand-blasted) glass to make this interesting. The blue arrow is the incoming light’s direction of propagation that I’ve chosen. And then here’s the distribution of scattered light…for ground glass, there’s probably a lot of reflection around the mirror direction, but a fair bit that scatters in every direction, and then some that propagates forward into the glass by transmission but…

  12. …is diffused by that rough surface, which is why it provides some privacy on a shower door by “blurring” what is seen through it. In order to draw this, I had to fix both degrees of freedom in the incoming vector and one degree of freedom in the outgoing vector, leaving only the outgoing angle in the plane of the screen. If I choose a different incoming direction or a different orientation for the diagram, then we’ll see a different distribution in 2D.
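The definition above (a ratio of outgoing radiance to incident irradiance, with units of “per steradian”) can be sketched as a tiny evaluator. This is a minimal illustrative sketch, not from the talk: the Lambertian-plus-normalized-Phong lobe shape and all parameter names are my assumptions.

```python
import math

def reflect(wi, n):
    # Mirror-reflect unit vector wi about unit normal n: r = 2(wi.n)n - wi.
    d = sum(a * b for a, b in zip(wi, n))
    return tuple(2.0 * d * b - a for a, b in zip(wi, n))

def bsdf(wi, wo, n, albedo=0.5, ks=0.3, shininess=50.0):
    """Toy BSDF: outgoing radiance per unit incident irradiance, in 1/sr.

    wi: unit vector toward the light; wo: unit vector toward the viewer;
    n: unit surface normal. Lobe shapes and parameters are illustrative.
    """
    diffuse = albedo / math.pi  # constant Lambertian lobe over the hemisphere
    cos_alpha = max(0.0, sum(a * b for a, b in zip(reflect(wi, n), wo)))
    # Normalized Phong lobe peaked around the mirror-reflection direction.
    specular = ks * (shininess + 2.0) / (2.0 * math.pi) * cos_alpha ** shininess
    return diffuse + specular
```

Evaluating this for many `wo` directions with a fixed `wi` traces out exactly the kind of 2D cross-section cartoon described above: a constant floor of diffuse scattering plus a lobe concentrated near the mirror direction.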

  13.

  14. In practice, a rendering system mainly uses the BSDF in two ways. I’ll illustrate these with toy renderer pseudocode. As shown on the left, it will evaluate the BSDF function for “direct illumination”. This happens in your rasterization pixel shader or ray hit shader. This is what everyone spent a lot of time on in the early ’80s, and rasterization-based systems continued to spend most of their computation on it until recently. As shown on the right with this mock path tracer, a Monte Carlo renderer will also sample directions with a probability distribution proportional to the BSDF (and usually some other things, like the cosine of the angle of incidence and the incoming lighting). Sampling is where the computational and research emphasis is today for offline rendering, and increasingly for game rendering. This sampling use case is also the primary motivation for today’s talk, as we’ll see in a minute.
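The two use cases could look something like this in toy code; the function names and the cosine-weighted Lambertian sampler are my own illustration, not the talk’s actual pseudocode.

```python
import math
import random

def eval_bsdf(wi, wo):
    # Use case 1: evaluate the BSDF, e.g. in a pixel or ray-hit shader.
    # Here: a Lambertian BSDF with albedo 0.5 (illustrative).
    return 0.5 / math.pi

def direct_illumination(wi, wo, n, light_radiance):
    # Shading integrand for one light sample: f(wi, wo) * L * cos(theta_i).
    cos_theta = max(0.0, sum(a * b for a, b in zip(wi, n)))
    return eval_bsdf(wi, wo) * light_radiance * cos_theta

def sample_bsdf(rng=random):
    # Use case 2: a Monte Carlo path tracer draws an outgoing direction
    # with pdf proportional to cos(theta)/pi, matching the Lambertian lobe.
    u1, u2 = rng.random(), rng.random()
    r, phi = math.sqrt(u1), 2.0 * math.pi * u2
    # Cosine-weighted sample on the hemisphere about +z; a real renderer
    # would rotate this into the frame of the surface normal.
    d = (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1))
    pdf = d[2] / math.pi
    return d, pdf
```

The Monte Carlo estimator then divides the sampled contribution by `pdf`; when the pdf matches the BSDF-times-cosine shape, as here, that ratio has low variance, which is why sampling quality dominates modern renderer performance.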

  15. Unfortunately, the BSDF terminology in the literature isn’t any better than the appearance terminology. In fact, it is mostly the same.

  16. Unfortunately, the BSDF terminology in the literature isn’t any better than the appearance terminology. In fact, it is mostly the same.

  17. When a renderer is sampling a BSDF, it has to handle light scattering differently depending on whether it is described by a discrete or a continuous probability distribution function. On the right we show a schematic of a PDF for a surface that always produces a discrete set of light rays from one incident light ray, such as a perfect mirror, and photographs of interfaces between glass, metal, or water and air that could be approximated by such a BSDF. Now, the BSDF for a mirror reflector isn’t a function in the sense we’re used to: for every input it “evaluates” to either zero or infinity. Consider that in the context of our two main use cases for BSDFs: evaluation for direct illumination and sampling for Monte Carlo ray-tracing methods. It is not useful to evaluate the mirror-reflector BSDF: it is either infinity (which we can’t use for shading, and which occurs for a single direction, thus with zero probability) or zero…so there is nothing to compute either way. We’d like to classify such BSDFs to branch in the code and exclude them from evaluation.

  18. For the sampling use case, we need to employ discrete (probability “mass”) sampling algorithms instead of continuous probability density sampling, so we again require a code branch. This is the implication of Monte Carlo for taxonomy that I promised you: this branch point in the code becomes a branch point in the taxonomy. For the case where a single incident light ray produces a continuous distribution, we can further subdivide the scenarios, driven again by the Monte Carlo considerations…
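That branch can be sketched like this; the `is_delta` flag and the dictionary-based lobe description are hypothetical illustrations of the code structure, not the taxonomy itself.

```python
import math
import random

def reflect(wi, n):
    # Mirror-reflect unit vector wi about unit normal n.
    d = sum(a * b for a, b in zip(wi, n))
    return tuple(2.0 * d * b - a for a, b in zip(wi, n))

def sample_lobe(lobe, wi, n, rng=random):
    """Return (outgoing direction, probability, kind of probability).

    A lobe is either a delta (discrete) lobe, handled with a probability
    *mass*, or a continuous lobe, handled with a probability *density*.
    """
    if lobe["is_delta"]:
        # Discrete case: exactly one outgoing direction, with mass 1.
        # The BSDF is never *evaluated* here; the sample weight carries
        # the lobe's reflectance instead.
        return reflect(wi, n), 1.0, "mass"
    # Continuous case: cosine-weighted density over the hemisphere
    # about +z (a real renderer would rotate into the frame of n).
    u1, u2 = rng.random(), rng.random()
    r, phi = math.sqrt(u1), 2.0 * math.pi * u2
    d = (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1))
    return d, d[2] / math.pi, "density"
```

The point of the classification is exactly this `if`: discrete lobes skip evaluation and use mass-based sampling, while continuous lobes go through density-based sampling, and this code branch is what becomes a branch point in the taxonomy.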
