The BRDF and Microfacet Theory

The following is a transcription of one of my independent projects while I was a student at SCAD.  The original page is still available here.  With RPS 17 and beyond, most of these features are standard, and I haven't continued development as I no longer have a RenderMan license at home.  I hope this information is useful!

Previously, I built a basic dielectric shader in RenderMan, which afforded me an opportunity to re-learn the relevant computer graphics maths.  I was not particularly concerned with physical correctness. My blurry reflection and refraction were produced simply by taking a large number of samples in a widening cone around the reflection/transmission direction, with all samples weighted equally. As I have continued my explorations with shading, I have discovered that this approach is neither particularly physically based nor efficient.

I plan to build on these concepts and create a physically-plausible material along the lines of the mia_material_x (mental ray) material. There are several stages to implementing a shader of this type. First, I plan to implement a number of diffuse and specular models to better understand the underlying maths and to practice implementing a microfacet-based BRDF. Next, I will add reflection functionality, this time by importance sampling the specular BRDF instead of sampling the standard cone around the reflection vector.

RenderMan Pro Server 16 introduced a number of new features for physically-plausible shading, particularly Multiple Importance Sampling (MIS). MIS-aware shaders follow a somewhat strict structure utilizing RSL 2.0 structs and shader objects. Once I have a basic understanding of the maths for the various components, I will move on to creating the class-based shader and structs required to work with RPS 16's MIS features.

In the several times I have attended SIGGRAPH, the technical papers were always miles over my head. A secondary goal of this project is to spend enough time learning and recalling the relevant computer graphics math so that I can gain a general understanding of a shading-related technical paper from a quick reading.

There are quite a few additional features I plan to implement over the coming weeks and months, including AOVs, BTDF refraction, translucency, emission, other BRDF models, and smooth integration into Slim (currently not practical since RPS 16 is not integrated into RenderMan for Maya).

Before I describe the individual models implemented, I want to go over some basic concepts at work. In shading, we assume that the local illumination at a surface point depends on only two vectors - the view direction and the light direction (though there may be multiple lights, as we will see later). The Bidirectional Reflectance Distribution Function (BRDF) relates these two vectors and describes the light response at the shading point as a function of them. Given a view direction v, the BRDF defines the relative contribution of each incoming (light) direction l. The BRDF is 'bidirectional' because the two roles can be swapped: given an incoming (light) direction l, it can just as well evaluate all outgoing directions v. This reciprocity is a defining feature of all physically-based BRDFs.

Physically-based BRDFs are also energy conserving - that is to say, the total amount of energy reflected as light is less than or equal to the energy of incident light. This is not the case in some simpler specular models (a classic unnormalized Phong highlight, for example, can return more energy than it receives).

The other key concept at work in physically-based BRDFs is microfacet theory. Suppose that a micropolygon of a surface has a normal n. Microfacet theory suggests that, while n represents a kind of average normal of the surface at that point, the surface itself is actually made up of microscopic gouges. These microscopic gouges are in turn made up of tiny microfacets that are each optically flat - each one a tiny perfect mirror.

The properties of this microstructure define how the surface will respond to light from a given view direction (or light direction). Given a view direction v and a light direction l, basic reflection math shows that the vector halfway between them, h, is the normal of any microfacet that reflects v into l (or l into v). In other words, this halfway vector h tells us which of the tiny microfacets we are concerned with, given where we are looking from and where the lights are. From our input parameters l and v, we can compute h directly.
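
As a quick illustration (this is not the project's shader code, just a small C sketch with a made-up vector type), computing h from l and v looks like this:

    #include <math.h>

    typedef struct { float x, y, z; } Vec3;

    static Vec3 vec3_add(Vec3 a, Vec3 b)
    {
        Vec3 r = { a.x + b.x, a.y + b.y, a.z + b.z };
        return r;
    }

    static Vec3 vec3_normalize(Vec3 a)
    {
        float len = sqrtf(a.x * a.x + a.y * a.y + a.z * a.z);
        Vec3 r = { a.x / len, a.y / len, a.z / len };
        return r;
    }

    /* l and v are unit vectors pointing away from the surface toward the
       light and the viewer; h is the normal of the microfacets that mirror
       one direction into the other. */
    static Vec3 half_vector(Vec3 l, Vec3 v)
    {
        return vec3_normalize(vec3_add(l, v));
    }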

With h defined, we can describe the overall BRDF for this particular view and light direction. The general form of a microfacet-based BRDF, based on a modified Cook-Torrance model [1], is

    f(l,v) = F(l,h) G(l,v,h) D(h) / ( 4 (n.l)(n.v) )

The equation may seem daunting at first, but each component will be described in simple-to-understand terms without math.

F(l,h) is the Fresnel function. This is present in any shader with reflections, and describes how reflections are weak when a surface is viewed straight on and intensify at glancing angles (for most non-metal surfaces). In most shaders this is evaluated against the macro-surface normal n, but in the BRDF we are concerned with the Fresnel effect on the microfacets with normal h. The difference is subtle at low roughness but very pronounced as roughness increases.
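
This page does not spell out which Fresnel formulation is used, but Schlick's approximation is a common, inexpensive choice and shows where the microfacet normal enters - the cosine is taken against h, not n. A small C sketch (f0, the reflectance at normal incidence, is an assumed parameter):

    #include <math.h>

    /* Schlick's approximation to the Fresnel term.  ldoth is the cosine of
       the angle between the light direction l and the microfacet normal h;
       f0 is the reflectance at normal incidence (roughly 0.04 for many
       dielectrics, much higher for metals). */
    static float fresnel_schlick(float ldoth, float f0)
    {
        return f0 + (1.0f - f0) * powf(1.0f - ldoth, 5.0f);
    }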

D(h) is the Distribution function, unique to each BRDF model, which describes how the microfacet normals are distributed. This function determines the overall 'smoothness' of the surface and is most responsible for the size and shape of the specular highlight. It takes some parameter representing "roughness" and essentially tells us what concentration of microfacets share this important normal h, out of all the facets pointed in every direction. The higher that concentration, the 'smoother' the surface. In the limit, D(h) spikes to infinity at the surface normal, meaning all of the microfacets share the same normal (perfect mirror reflection).
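
As one concrete example of a distribution term - not necessarily the one this project ends up using - the Beckmann distribution from the original Cook-Torrance model [1] can be written in terms of n.h and a roughness parameter m:

    #include <math.h>

    #define PI_F 3.14159265358979f

    /* Beckmann distribution: the concentration of microfacets oriented along h.
       m is the RMS slope of the microfacets; small m means a smooth surface
       and a tight highlight, large m a rough surface and a broad highlight. */
    static float beckmann_distribution(float ndoth, float m)
    {
        float cos2 = ndoth * ndoth;
        float tan2 = (1.0f - cos2) / cos2;   /* tan^2 of the angle between n and h */
        float m2   = m * m;
        return expf(-tan2 / m2) / (PI_F * m2 * cos2 * cos2);
    }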

G(l,v,h) is the Geometry function. This function accounts for occlusions within the microstructure: while the distribution term D(h) tells us what concentration of microfacets have the normal h, it does not tell us whether we can actually see all of those facets (we can't). The geometry term G has two parts, shadowing and masking. Shadowing means a microfacet is not visible from the light direction l, so it is not illuminated and does not contribute to the reflection response. Masking means a microfacet is not visible from the view direction v (even though it may be illuminated), so it also does not contribute. In the real world some of this blocked light would bounce around and eventually escape, but the error from ignoring it is small enough that it is generally not worth the compute time.
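
One simple instance of this term (again, just an example rather than a record of what the final shader uses) is the geometric attenuation factor from the Cook-Torrance paper [1], which takes the smaller of the masking and shadowing estimates and clamps the result to 1:

    /* Cook-Torrance geometric attenuation.  The first term estimates masking
       (facets hidden from the viewer), the second shadowing (facets hidden
       from the light); the result is clamped to 1. */
    static float geometry_cook_torrance(float ndoth, float ndotv,
                                        float ndotl, float vdoth)
    {
        float masking   = 2.0f * ndoth * ndotv / vdoth;
        float shadowing = 2.0f * ndoth * ndotl / vdoth;
        float g = masking < shadowing ? masking : shadowing;
        return g < 1.0f ? g : 1.0f;
    }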

The denominator 4(n.l)(n.v) is a normalization factor whose derivation is too involved for this webpage; a complete explanation can be found in Walter's 2007 paper extending the Cook-Torrance model [2][3].
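
Putting the pieces together, here is a sketch of how the example terms above (Schlick Fresnel, Beckmann distribution, Cook-Torrance geometry) combine with this normalization factor to evaluate the specular BRDF; the particular combination is illustrative only:

    /* Evaluate f(l,v) = F(l,h) G(l,v,h) D(h) / (4 (n.l)(n.v)) using the
       example terms defined above.  Because h bisects l and v, l.h == v.h,
       so a single vdoth is passed for both. */
    static float microfacet_specular(float ndotl, float ndotv, float ndoth,
                                     float vdoth, float roughness, float f0)
    {
        if (ndotl <= 0.0f || ndotv <= 0.0f)
            return 0.0f;   /* light or viewer below the surface: no reflection */

        float F = fresnel_schlick(vdoth, f0);
        float D = beckmann_distribution(ndoth, roughness);
        float G = geometry_cook_torrance(ndoth, ndotv, ndotl, vdoth);

        return (F * D * G) / (4.0f * ndotl * ndotv);
    }

In a renderer, this value would then be multiplied by the incident light energy and by n.l when accumulating each light's contribution.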

REFERENCES

[1] R. L. Cook and K. E. Torrance, "A Reflectance Model for Computer Graphics", 1982.
[2] B. Walter, S. R. Marschner, H. Li, and K. E. Torrance, "Microfacet Models for Refraction through Rough Surfaces", 2007.
[3] "Microfacet Models for Reflection and Refraction" - presentation slides.