New website, more active blog?

I've finally switched over to Squarespace and intend to keep this blog a bit more active.  Work has been incredibly busy, but I've been playing with some things on the side that I'll be posting soon enough.  Thanks for stopping by!

Blast from the past...

So yesterday I woke up to 20,000 views of one of my very old videos - one that's never really had any interest to anyone other than myself.

I put this together years ago as a learning exercise; it turns out someone on Reddit found it and posted it in the Star Trek subreddit.  I managed to track the thread down and have a chat with quite a few interested Star Trek fans.  Considering this was my most ambitious personal project (originally I wanted to do an entire sequence) and my latest professional project was Star Trek Into Darkness, it's been a nice opportunity to remember what got me into this industry in the first place.

RSL 2.0 Shader Objects

The following is a transcription of one of my independent projects while I was a student at SCAD.  The original page is still available here.  With RPS 17 and beyond, most of these features are standard, and I haven't continued development as I no longer have a RenderMan license at home.  I hope this information is useful!

Prior to this project, I had never attempted anything involving combined shading or shader objects. RSL 2.0 introduced two new concepts - shader objects and C-style structs. Both provide speed benefits to the shading pipeline.

First, the new shader objects (or class-based shaders) can contain separate methods for displacement, surface, and opacity in the simplest implementation. This allows for the sharing of data between these methods and more efficient execution by breaking the shading process into separate methods. A key example is the opacity method. In RSL 1.0 shaders, the shader may execute all the way to the end, only to end up with an opacity of 0. Malcolm mentioned this as a scenario with shading of tree leaves, and I suspected then that the speed could be improved by solving opacity first, and then wrapping the rest of the surface shader in a conditional based on the opacity being greater than some threshold. In RSL 2.0, the opacity method executes before the surface method, and will not run the surface shader if the opacity is below the threshold. This way we can accomplish the same behavior without ugly changes to the surface shader code. Additionally, values from these internal methods are cached, offering further speed advantages.

A more advanced version of shader objects utilizes multiple functions to break up the lighting process, as well as an initialization function to precompute values and speed up the shading process. This framework, along with user-created functions, is essential to creating a MIS-aware material in RPS 16.

//A simple example of a class-based setup
#include "stdrsl_ShadingContext.h" //header for Pixar's shading context struct
class rslTwoShader( uniform float Kd = 1; /*shader params go here*/ ) {
	float classVar;
	stdrsl_ShadingContext m_shadingCtx;
	
	public void construct() {
	}
	public void begin() {
		m_shadingCtx->init(); //initialize data struct
	}
	public void displacement() {
		//Runs first, modifies P and the normal
	}
	public void opacity() { //optional
		//Runs next, gives opportunity to exit before shading
	}
	public void prelighting() { //optional
		//Precompute any non-light-specific values for lighting
		//Useful for interactive relighting efficiency
	}
	public void lighting() {
		//Light-position dependent step, run per light
	}
	public void postlighting() { //optional
		//Any post-processing required
	}
	public void userCreatedFunc() {
	}
}

RSL 2.0 also introduced C-style structs. Structs can store both data and functions for use across the shading pipeline, and mainly serve to organize code and facilitate reuse. In my case, I used several Pixar structs and a custom struct for my final shader. One good example is Pixar's ShadingContext struct, which stores a myriad of data about the current shading point and provides many utility functions for dealing with that data. The ShadingContext struct is initialized in the begin() method of a class-based shader, and can be used throughout the shading pipeline for easy access to ray depth, the faceforward normal, tangents, etc.

RPS 16 PHYSICALLY-PLAUSIBLE SHADING

RenderMan Pro Server 16's physically-plausible shading makes use of both structs and class-based shaders. These shaders are constructed like any other, but utilize new lighting integrators and new required functions for multiple importance sampling.

First, an overview of the shading process with respect to multiple importance sampling. I have already covered how in some cases it is better for the material to generate the sample directions, and in other cases it is better for the light to provide sample directions to the material. With both lights and shaders, two new functions must be defined in RPS 16 to work with the MIS system.

The generateSamples() method is defined in both the material and light, and is used to store the response of that portion of the final result. In the case of the light, generateSamples() stores the sample directions and the light radiance. In the case of the material, it stores the sample directions and the material response at that direction (the evaluation of the BRDF and the PDF, but not the lighting).

//used as a part of the full shader object
public void generateSamples(string distribution;
		output __radiancesample samples[]) {
	
	//distribution = "specular", "indirectspecular", etc.
	
	if(distribution == "specular") {
		/* append to the samples array - if it already has any,
		they are from the lights */
		
		//start_size is arraylength(samples) before appending; size is
		//start_size plus the number of material samples being added
		color matresp = 0;
		float matpdf = 0;
		uniform float i;
		for(i = start_size; i < size; i+=1) {
			//do my whole BRDF calculation, resulting in:
			matresp = refl * (F*G*D) / (4*(Vn.Nn)*(Ln.Nn));
			matpdf = (D*(H.Nn))/(4*(H.Vn)); //see paper for details
			//store this material response, pdf, light dir and distance
			samples[i]->setMaterialSamples(matresp, matpdf, Ln, 1e30);
		}
	}
}

Next, the evaluateSamples() method must be defined for both the material and the light. In the case of a light, evaluateSamples takes the samples generated by the material (already containing the material response and pdf), and adds the light radiance for that sample direction, thus creating a full sample with lighting and material response. In the case of the material, the sample direction already contains information about the light radiance, and material response and PDF are added to create a full sample.

These samples are stored internally by RPS 16 in a radiance cache and can be reused in the new diffuselighting() and specularlighting() shader methods, delivering speedups in some cases.

public void evaluateSamples(string distribution;
	output __radiancesample samples[]) {
	uniform float num_samps = arraylength(samples);
	uniform float i;
	if(distribution == "specular") {
		for(i = 0; i < num_samps; i+=1) {
			//direction provided by light generateSamples()
			vector Ln = samples[i]->direction;
			
			//evaluate BRDF as above, producing matresp and matpdf
			samples[i]->setMaterialResponse(matresp, matpdf);
		}
	} else if (distribution == "diffuse") {
		//implement lambert or oren-nayar diffuse here
		//right now diffuse is only using light samples since it is
		//inefficient to sample the whole hemisphere of the material
		for(i = 0; i < num_samps; i+=1) {
			color matresp = 0;
			float cosTheta = (samples[i]->direction).Nn; //Lambert
			if(cosTheta > 0) {
				matresp = cosTheta * diffuseColor / PI;
			}
			samples[i]->setMaterialResponse(matresp, 0); //pdf = 0
		}
	}
}

There are also two new integrators, directlighting() and indirectspecular(), where these samples are put to use. These functions invoke the generateSamples() and evaluateSamples() methods of both the materials and lights, and internally handle the multiple importance weighting. The directlighting() function includes a "heuristic" parameter to control how the light and material samples are weighted against each other; the "veachpower2" value used below refers to Veach's power heuristic.

public void lighting(output color Ci, Oi) {
	__radiancesample samples[];
	color diff_col = 0;
	color spec_col = 0;
	//Kd, fresnel and num_diffuse_samps are shader params/member variables
	
	directlighting(this, getlights(), "mis", 1, "heuristic", "veachpower2",
			"materialsamples", samples, "diffuseresult", diff_col,
			"specularresult", spec_col);
	color indir_spec = indirectspecular(this, "materialsamples", samples);
	color indir_diff = indirectdiffuse(P, Nn, num_diffuse_samps);
	
	Ci += (Kd * (indir_diff + diff_col) * (1-fresnel)) + spec_col + indir_spec;
}

FumeFX to RenderMan - Maya Integration

The following is a transcription of one of my independent projects while I was a student at SCAD.  The original page is still available here.  Unfortunately, I haven't had much time or inclination to further develop this as I haven't really done anything with an fx simulation in a couple of years.  Perhaps some of the information here is useful to someone!

INTEGRATING WITH MAYA - PASSES

Based on the unwieldy nature of the previous steps, I decided to investigate adding my own passes to the RenderMan Pass Settings Tree in Maya. I started this process by examining the Slim templates for the All Purpose Material Subsurface Scattering component and the Base Volume. After analyzing the Slim template code, I isolated the TCL code relevant to creating these render passes. As with most of these experiments, a series of progressively deeper searches through the RenderMan Slim templates and TCL header files finally illuminated what actually happens with the TCL code in the main Slim template. I will not spend any time going over the basics of Slim template creation, but the interesting bit of the Slim template is posted below.

parameter float Disable {
   label {Disable Bake}
   subtype switch
   default 1
   msghandler {
      SetValue {
         set atl sss
         set ch _radiance_t
         set sty volumeScatter 
         set prop %obj
         set app [$prop GetAppearance]
         set map [[$app GetProperties -name *Crew*] GetValue]
         set d [$prop GetValue]
         set bakedFileProp [$app GetProperties -name *BakedFile*]
         set ptcFileProp [$app GetProperties -name *PtcFile*]
         set bakePassIdProp [$app GetProperties -name *BakePassId*]
         set bakePassClassProp [$app GetProperties -name *BakePassClass*]
         $bakedFileProp SetValue "\[bakemap -atlas $atl -chan $ch -map $map
                                 -style $sty -disable $d\]"
         $ptcFileProp SetValue "\[ptcfile $atl $map $sty\]" 
         $bakePassIdProp SetValue "\[bakepassid $atl $map $sty\]"
         $bakePassClassProp SetValue "\[bakepassclass $sty\]"
      }
   }
}
slimattribute string Crew {
   default {$SHADERGROUP}
}
parameter string CurrentPassId {
   provider variable locked
   default {$PASSID}
}
parameter string BakePassId {
   default {}
   provider variable locked
}
parameter string BakePassClass {
   default {}
   provider variable locked
}
parameter string PtcFile {
   default {}
   provider variable locked
}
parameter string BakedFile {
   default {}
   provider variable locked
}

The basic element of interest here is the msghandler SetValue. This TCL method runs when the value of the associated parameter changes. In this case, the parameter is a checkbox for enabling or disabling the point cloud bake. When the point cloud is enabled, parameters are retrieved about the set of objects with the shader applied ("Crew") and the current Pass ID, and four TCL functions are set in strings that will create the relevant render passes in the Pass Settings dialog. I have not yet isolated where these strings are finally executed, but I am fairly certain they are evaluated by TCL code that runs at render time. This code is mostly borrowed from the BaseVolume material.

Now that RenderMan will happily create the additional passes, I have control over the various associated render settings (shading rate, number of bounces, etc.). Next, I added conditionals to my shader code to only bake the map when the prepass is the current pass, and to only use the point cloud result in the final render pass. The pass information is not accessed in the standard way; instead, it comes through the various hidden parameters set in the previous code block.

//BakedFile and Disable are set as shader params
if(BakedFile != "" && Disable != 1) {
	color indirLight = 0;					
	texture3d(BakedFile, P, N, "_indirectdiffuse", indirLight);
	sAccum += indirLight * Intensity;
}

//Disable, CurrentPassId, BakePassId, PtcFile are all set as shader params
if(Disable != 1 && rdepth == 0 && CurrentPassId == BakePassId) {
	float area = area(P, "dicing");
	if(OI != color(0) && PtcFile != "") {
		bake3d(PtcFile, "_area,_radiosity,_extinction,Cs", P, N,
			"interpolate", 1,
			"_area", area, "_radiosity", CI, 
			"_extinction", OI, "Cs", CI);
	}
}

I am extremely happy with how this smooths the process of using the shader and am much clearer (though still finding the last few details) on how these more complex Slim templates interface with RenderMan for Maya.

INTEGRATING WITH MAYA - AOVS

The last major feature I hoped to add was support for AOVs. Having a small amount of experience with standard RSL shader AOVs, I thought this would take only a couple of minutes to implement with simple output varying parameters. It was not nearly that simple, but I was eventually able to find a fairly elegant way to implement AOVs.

//A simple RSL shader with an AOV
surface hasAOV ( output varying color extraChan = 0 ) {
	//will create an AOV with the surface colored red
	extraChan = color(1,0,0);
}

First, a bit of background about how Slim shader templates work. A Slim template is a bunch of TCL defining input and output parameters followed by a block of RSL code. Inside this RSL code, values are assigned to output parameters and at first glance everything appears exactly like a regular RSL shader. The actual process is much more complicated than that, but can be described in brief. The RSLFunction defined in a Slim template is implemented in the resulting .sl file as just that - a function. The actual surface shader code calls the function and assigns the output parameters to the actual shader outputs. First, a very simple Slim template:

slim 1 extensions cutter {
extensions fundza cutr {
	template shadingmodel HasColor {
		collection shadingmodel result {
			access output
			display hidden
			parameter color CI {
				access output
				}
			parameter color OI {
				access output
				}
			}
		RSLFunction {
		void cutrHasColor (
			output color CI;
			output color OI;
			)
		{
		OI = Os;
		CI = Os * color(1,0,0);
		}
} } } }

This template simply creates an RSL function called cutrHasColor and sets the OI and CI values (no AOVs yet). The resulting .sl code (reduced):

surface HasColor () {
	void cutrHasColor ( output color CI;
			output color OI; )
	{
		OI = Os;
		CI = Os * color(1,0,0);
	}
	
	color tmp2;
	color tmp3;
	
	cutrHasColor(tmp2, tmp3);
	Ci = tmp2;
	Oi = tmp3;
}

A close look at this .sl file, created by Slim, shows that the RSLFunction block from Slim is just carried over as a function inside the shader. The function is called and its output values are assigned to the required outputs of the surface shader (Ci and Oi). This is all well and good, but I discovered quickly that if I tried simply adding a few more output parameters to the result collection of the Slim template and function, they didn't translate over to my final .sl code. It would still just set Ci and Oi. A bit more research led me to the concept of Visualizers.

Visualizers are a type of dynamic shader template, where that last bit of RSL is defined. In this case, the overall template has a shadingmodel type. This relates to a shadingmodel visualizer, which defines two outputs, OI and CI, and plugs them into Oi and Ci of the final RSL shader. There are many built-in visualizers, and one of particular interest is the shadingmodel_aov visualizer. This visualizer references a built-in Pixar .h file with a list of predefined AOVs and runs some TCL loops to define all of the RSL code for their use. This file can be modified, but it lives within the install directory and is quite inconvenient to work with. However, when Malcolm told me about this header file, I did some more digging and decided on a different approach: defining my own visualizer inside my Slim template.

Basically, I wanted my template to have three AOVs - Fire, Smoke, and IndirFire. This way the user in post would have control over the color of each element as well as the intensity of the indirect component. Ultimately, my output collection for the Slim template now needed five outputs instead of the standard two. As I mentioned, just adding them caused no errors, but it did not hook them up to the final RSL code in any meaningful way. I added the following visualizer to my code:

slim 1 extensions kevin_george {
extensions kgeorge kdg {
	template visualizer shadingmodel_kdg {
		collection shadingmodel_kdg shadingmodel {
			detail mustvary
			parameter color CI {
				detail mustvary
				default {1 1 1}
			}
			parameter color OI {
				detail mustvary
				default {1 1 1}
			}
			parameter color Fire {
				detail mustvary
				provider connection
				default {0 0 0}
			}
			parameter color Smoke {
				detail mustvary
				provider connection
				default {0 0 0}			
			}
			parameter color IndirFire {
				detail mustvary
				provider connection
				default {0 0 0}			
			}
		}
		
		RSLDeclare output varying color Fire 0
		RSLDeclare output varying color Smoke 0
		RSLDeclare output varying color IndirFire 0	
		
		RSLMain {
			generate
			output "Ci = [getvar CI];"
			output "Oi = [getvar OI];"
			output "Fire = [getvar Fire];"
			output "Smoke = [getvar Smoke];"
			output "IndirFire = [getvar IndirFire];"
		}
	}
} }

This section includes a lot of code, but the above basically defines the parameters of a new template 'type' called shadingmodel_kdg that expects five output parameters instead of two. Additionally, the RSLDeclare statements add the actual output variables to the .sl file, and the five output commands in RSLMain write the final value assignments into the .sl file.

RenderMan was still unwilling to cooperate: it did not recognize my new template type as attachable, so I could no longer attach my shader to geometry. To tell RenderMan that this new type is attachable, the slim.ini file must be modified with the following command:

SetPref SlimAttachableMap(shadingmodel_kdg) surface

With this final step complete, I was able to assign my shader and output AOVs with RenderMan for Maya.

More valuable information can be found in the RMS 3 documentation under the section Slim -> Templates -> Templates: Advanced Topics [3].

FUTURE DEVELOPMENTS

While I accomplished much of my desired goal, I still would like to extend this shader to be a bit more universal by learning and utilizing the same channel names as would be used with Maya Fluids. This extension should make it easier for me to integrate my shader with Maya Fluid simulations as well, instead of being totally reliant on my custom primvar channel names.

On the last day of the project, I found a huge oversight on my part - temperature is really not the best way to determine where fire resides. I was using this approach because FumeFX does not give the user access to the actual 'fire' parameter. However, it does not account for the fact that very hot fire produces very hot convection areas above the flames that are not visibly burning. I have determined that I can deal with this issue by adapting my code to also export the fuel channel, and I look forward to the next version of the shader.

I would also like more control over the color and opacity - preferably through some means to set a color gradient for each of these parameters for fire and smoke to dial in and art direct the final look.

Originally, I planned to use the vector velocity data from the FumeFX simulation as motion vectors, but I did not have time to implement this feature. I'm not certain it's as straightforward as exporting a motion vector pass for compositing, but I plan to explore ways to use this velocity data in future versions.

REFERENCES

RenderMan Pro Server App Note - Volume Rendering
Stefan-Boltzmann Law
RenderMan Studio: Slim Templates - Advanced Topics
Production Volume Rendering - Siggraph 2011 Course Notes 

Sampling BRDFs and MIS

The following is a transcription of one of my independent projects while I was a student at SCAD.  The original page is still available here.  With RPS 17 and beyond, most of these features are standard, and I haven't continued development as I no longer have a RenderMan license at home.  I hope this information is useful!

To extend these BRDF models beyond simple punctual light support, the BRDF must be sampled. The first approach is a simple uniform sampling of the entire hemisphere above the surface. To do this, I used a gather() loop to generate random ray directions, which I treated as my light direction l. Many of these samples go through the entire BRDF calculation only to be discarded because they correspond to parts of the microfacet structure that are not visible to the view direction for one reason or another. An interesting side effect of this is that for very rough surfaces, I could achieve smooth renders with reasonable sample counts. As I approached mirror reflection, however, the number of samples required skyrocketed. This is expected: the reflection is concentrated in a much smaller solid angle for mirror-like surfaces, but I am still shooting rays all over the hemisphere and almost all of them are discarded by the BRDF. Obviously a better approach is needed.
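
For illustration, here is roughly what that brute-force estimator amounts to, written as a small Python sketch rather than RSL. The brdf and radiance callables and the sample count are placeholders, and the surface normal is assumed to be +Z.

import math, random

def uniform_hemisphere_sample():
    """Uniform direction on the hemisphere around n = +Z (pdf = 1 / 2pi)."""
    u1, u2 = random.random(), random.random()
    cos_theta = u1
    sin_theta = math.sqrt(max(0.0, 1.0 - cos_theta * cos_theta))
    phi = 2.0 * math.pi * u2
    return (sin_theta * math.cos(phi), sin_theta * math.sin(phi), cos_theta)

def estimate_reflection(brdf, radiance, v, num_samples=256):
    """Brute-force Monte Carlo estimate of the light reflected toward v.
    Every sample pays for a full BRDF evaluation, even when it contributes ~0."""
    pdf = 1.0 / (2.0 * math.pi)
    total = 0.0
    for _ in range(num_samples):
        l = uniform_hemisphere_sample()
        cos_nl = l[2]                       # n = +Z, so n.l is the z component
        total += brdf(l, v) * radiance(l) * cos_nl / pdf
    return total / num_samples

# toy usage: a perfectly diffuse BRDF under a constant white environment
# (the estimate converges to 1.0)
print(estimate_reflection(lambda l, v: 1.0 / math.pi, lambda l: 1.0, (0, 0, 1)))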

To more accurately sample a BRDF, we must effectively sample the distribution term by generating random ray directions that are relevant to the BRDF, as opposed to just firing them at random over the whole hemisphere. This is known as importance sampling the BRDF. The sampling equations are more mathematically involved than I will go into here, but they are described in great detail in the various papers. Essentially, we generate random microfacet normals (equivalent to the half-vector in the description above) that satisfy the BRDF's distribution term, and then do the reflection calculation on these normals and the view direction to generate the light directions l. These samples tell the shader 'where to look' so we are not spending precious time sampling areas of the hemisphere that will not contribute to the reflection at that point. Each sample is then weighted by its Probability Density Function (PDF), which accounts for the concentration of facets with that particular half-vector normal (in practice, the sample value is divided by the PDF). Averaging these samples provides a result with far less variance than uniform sampling, and is much, much faster.

Uniform sampling wastes samples in non-contributing areas

Sampling the distribution sends rays in directions important to the BRDF
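
As a concrete (if simplified) example of sampling a distribution term, here is a Python sketch of the Beckmann half-vector sampling scheme from Walter et al.'s 2007 paper. It assumes normalized vectors with the surface normal at +Z, and the returned pdf has the same D(h)(h.n)/(4(h.v)) form that appears as matpdf in the RSL 2.0 Shader Objects post.

import math, random

def sample_beckmann_halfvector(alpha):
    """Sample a microfacet normal h proportional to the Beckmann distribution
    (Walter et al. 2007), with the surface normal at +Z."""
    u1, u2 = random.random(), random.random()
    theta = math.atan(math.sqrt(-alpha * alpha * math.log(1.0 - u1)))
    phi = 2.0 * math.pi * u2
    sin_t = math.sin(theta)
    return (sin_t * math.cos(phi), sin_t * math.sin(phi), math.cos(theta))

def reflect(v, h):
    """Reflect the view direction v about the sampled microfacet normal h."""
    d = 2.0 * sum(vi * hi for vi, hi in zip(v, h))
    return tuple(d * hi - vi for vi, hi in zip(v, h))

def beckmann_D(h, alpha):
    cos_t = h[2]
    if cos_t <= 0.0:
        return 0.0
    tan2 = (1.0 - cos_t * cos_t) / (cos_t * cos_t)
    return math.exp(-tan2 / (alpha * alpha)) / (math.pi * alpha * alpha * cos_t ** 4)

def sample_direction_and_pdf(v, alpha):
    """Return a light direction l and its pdf, D(h)(h.n) / (4 (h.v))."""
    h = sample_beckmann_halfvector(alpha)
    l = reflect(v, h)
    h_dot_v = sum(vi * hi for vi, hi in zip(v, h))
    pdf = beckmann_D(h, alpha) * h[2] / (4.0 * abs(h_dot_v))
    return l, pdf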

MULTIPLE IMPORTANCE SAMPLING

One strategy is to sample the BRDF and generate directions with which to sample the environment, objects, and lights. Another approach is to sample the lights/environment and provide those directions to the surface, where the BRDF is evaluated. This is particularly useful when lights are very small or very bright. If we only sample the material, we may miss these lights entirely due to the randomness of the ray directions. Worse, we may miss them in one frame and hit them in the next, causing ugly flickering (a problem I am quite familiar with from other projects).

Uniform sampling may miss small but important areas

An environment map is importance sampled by determining which areas of the map are brightest and sending the associated ray directions back to the material to use as sample directions. Similarly, with very small lights, sending the material information about the position of the light ensures that enough samples go in that direction.

Importance sampling improves accuracy and reduces variance (same number of samples)
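
RenderMan's physically-plausible lights handle this internally, but purely as an illustration of the idea, here is a minimal Python sketch that builds a discrete CDF over pixel luminance and draws pixel indices proportionally. Mapping pixels to directions and weighting by solid angle are left out for brevity, and the pixel values are made up.

import bisect, random

def build_luminance_cdf(pixels):
    """pixels: list of (r, g, b) tuples from the environment map.
    Returns the cumulative distribution over pixel luminance."""
    lum = [0.2126 * r + 0.7152 * g + 0.0722 * b for (r, g, b) in pixels]
    total = sum(lum)
    cdf, running = [], 0.0
    for w in lum:
        running += w / total
        cdf.append(running)
    return cdf

def sample_pixel(cdf):
    """Pick a pixel index with probability proportional to its luminance."""
    return bisect.bisect_left(cdf, random.random())

# bright pixels (e.g. the sun) are chosen far more often than dim ones,
# so the material receives sample directions that actually carry energy
pixels = [(0.01, 0.01, 0.02)] * 98 + [(50.0, 45.0, 40.0), (0.5, 0.5, 0.5)]
cdf = build_luminance_cdf(pixels)
counts = {}
for _ in range(1000):
    i = sample_pixel(cdf)
    counts[i] = counts.get(i, 0) + 1
print(counts)  # index 98 (the bright pixel) dominates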

In many cases this is a good strategy, but large light sources require a large number of samples, and it can often be more efficient to just sample the material. Combining these two strategies is the basis of Multiple Importance Sampling. In the scope of this project, I used RenderMan's built-in physically-plausible lights, which are capable of generating samples to send to the material, as well as applying their light contribution to samples generated by the material. RenderMan's new RPS 16 integrators take care of the weighting between the light and material samples.
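
The weighting itself happens inside directlighting(), but the "veachpower2" heuristic it exposes is just Veach's power heuristic with an exponent of 2. Here is a small Python sketch of how two sampling strategies might be combined with it, assuming the pdfs of both strategies are known for each sample:

def power_heuristic(n_f, pdf_f, n_g, pdf_g, beta=2.0):
    """Veach's power heuristic: weight for a sample drawn from strategy f
    when strategy g could also have produced it."""
    f = (n_f * pdf_f) ** beta
    g = (n_g * pdf_g) ** beta
    return f / (f + g) if (f + g) > 0.0 else 0.0

def combine(light_samples, material_samples):
    """Each sample is (value, pdf_light, pdf_material), where
    value = BRDF * radiance * cos(theta) for that sample direction."""
    n_l, n_m = len(light_samples), len(material_samples)
    total = 0.0
    for value, pdf_l, pdf_m in light_samples:       # drawn from the light
        w = power_heuristic(n_l, pdf_l, n_m, pdf_m)
        total += w * value / (pdf_l * n_l)
    for value, pdf_l, pdf_m in material_samples:    # drawn from the BRDF
        w = power_heuristic(n_m, pdf_m, n_l, pdf_l)
        total += w * value / (pdf_m * n_m)
    return total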

Shading Specular Models

The following is a transcription of one of my independent projects while I was a student at SCAD.  The original page is still available here.  With RPS 17 and beyond, most of these features are standard, and I haven't continued development as I no longer have a RenderMan license at home.  I hope this information is useful!

Understanding various specular models was one of the main goals of this project. First, I implemented several specular models with punctual/point light sources. At that stage, the difference between the specular models is just a slight change to the shape of the fake specular ping. However, I implemented these models in the following order so that I might better understand the underlying concepts before trying to grasp a full physically-based material. All of these models are explained in great detail elsewhere on the web, so I will stick with very short descriptions.

I first started with the most basic specular model, Phong [1]. The Phong model attempts to capture the fact that surfaces do not simply reflect in one direction; the reflection tends to spread out as an indication of roughness. Phong reflection is one simple equation, where the cosine of the angle between the reflection and view vectors is raised to some power. The value of the power determines the sharpness of the highlight. Below are a couple of examples. This shader is not energy conserving in any way, so as the highlight becomes more blurred, the specular multiplier must be lowered to keep values realistic.

Jim Blinn made a key change to the Phong model by introducing the concept of a half-vector [2]. The half-vector is the vector halfway between the view and light vectors. Phong requires the reflection vector to be computed, which is a more expensive operation. Blinn observed that computing the half-vector and comparing it to the surface normal is roughly equivalent to comparing the view and reflection vectors. The required exponent is different, but the overall look is very similar to Phong while being much cheaper to compute.
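
For reference, here is the difference between the two models as a small Python sketch rather than RSL. Vectors are assumed normalized, and the exponents are not meant to be equivalent between the two:

import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(a):
    m = math.sqrt(dot(a, a))
    return tuple(x / m for x in a)

def phong_spec(l, v, n, exponent):
    """Phong: compare the view vector with the mirror reflection of the light."""
    r = tuple(2.0 * dot(n, l) * ni - li for ni, li in zip(n, l))   # reflect l about n
    return max(0.0, dot(r, v)) ** exponent

def blinn_spec(l, v, n, exponent):
    """Blinn: compare the half-vector with the normal - no reflection needed."""
    h = normalize(tuple(li + vi for li, vi in zip(l, v)))
    return max(0.0, dot(n, h)) ** exponent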

The next models I attempted to implement were the Ward isotropic and anisotropic models. Ward sought to create the simplest formula possible that matches real-world measurements, and as such the Ward model is more complex than the previous Phong and Blinn models. The isotropic version was very straightforward to implement from Ward's 1992 paper [3], but the anisotropic version requires tangent and bitangent vectors orthogonal to the surface normal. These are not straightforward to compute using built-in RenderMan functions, as the shading point only provides dPdu and dPdv, which give predictable results only on parameterized surfaces. I used a piece of code from Ali Seiffouri's website to compute the tangent and bitangent vectors, and I look forward to spending more time developing my own solution to this problem when I have an opportunity.

Finally, I implemented the Cook-Torrance specular model, or more specifically the modified Cook-Torrance presented in Walter et al.'s 2007 paper. Cook-Torrance is a proper microfacet BRDF specular model, of the form described in the BRDF overview section. Unlike the other specular models mentioned before, the Cook-Torrance specular correctly dims as the roughness increases.

I have opted to leave out the maths for the above specular models, but more detail can be found in the references if interested.

REFERENCES

[1] - The Phong Reflection model
[2] - Models of light reflection for computer synthesized pictures
[3] - Measuring and Modeling Anisotropic Reflection
Some slides from Cornell about MIS

FumeFX to RenderMan - Shading

The following is a transcription of one of my independent projects while I was a student at SCAD.  The original page is still available here.  Unfortunately, I haven't had much time or inclination to further develop this as I haven't really done anything with an fx simulation in a couple of years.  Perhaps some of the information here is useful to someone!

SHADING THE FIRE

The visible color of most fire is based on a physical property called black-body radiation. A black body is a surface that absorbs all electromagnetic radiation. When a black body is heated to a constant temperature, it emits black body radiation, some of which is in the visible spectrum. The wavelengths and intensity of this radiation are defined by temperature alone. Therefore, there is a defined 'color' for a black body at a certain temperature.

The soot particles in fire are considered black bodies, and an obvious property of fire fluid simulation is temperature. If fire simulations are conducted with realistic temperature values (or converted later), the black body radiation spectrum can be applied to the fire particles to produce a realistic flame rendering. In my case, I am using a color spline with 10 values approximating the ramp.

The brightness of actual fire is often not taken into account when rendering fire fluid simulations. The actual brightness is defined by the Stefan-Boltzmann law [2], which is quite involved but basically boils down to intensity being proportional to the temperature raised to the fourth power. I use the normalized temperature of the fire along the color gradient as the base and multiply the result by a user-controlled constant. The result of the fourth power is still in the normalized range, and the constant gives the user direct control over the maximum intensity of the fire.

float scaleFactor = pow((normalizedSplinePosition),4)*fireExponent;
color fireColor = colorSplineValue * scaleFactor;

Fire opacity control is provided by a user-controlled constant, which is multiplied by the normalized position on the temperature gradient. This way, the hotter the fire, the more opaque the fire, and the fire fades off nicely as the temperature cools. This is not entirely accurate, but it offers more directability, as it can still sometimes be difficult to dial in the contributions of the fire and smoke. I would like to extend this functionality in a more elegant way than providing a list of float values for a spline curve. A spline curve would also help with a separate issue where the cooler fire (red) is generally so transparent as to not be visible at all, and I expect to add this in a future version.

FireOi = normalizedSplinePosition * fireOpacityVal;
FireCi = FireOi * fireColor;

It is often quite difficult to nail temperatures exactly in a fluid simulation, so the shader also includes the ability to multiply the temperature values up or down by a user controlled value to dial in a look.

SHADING THE SMOKE

The smoke shading process is much simpler than the fire; currently the user only has access to a color and opacity control. FumeFX considers a voxel full of smoke to have a value of 1, though these values can be greater than 1 to indicate denser smoke. The user opacity value is multiplied by the smoke density from the voxel grid, so it acts as an 'opacity per 1 unit density' control. Color is currently just controlled via a color swatch, and color/opacity are not affected by temperature or other factors at this time.

A great resource for fire and smoke shading methods is the course notes from the Siggraph 2011 course Production Volume Rendering [1].

MULTIPLE SCATTERING

Single scattering is the effect of light penetrating into a volume along the light vector and attenuating over distance based on the density of the volume. Since this attenuation only depends on the light direction, it is basically accomplished with deep shadows, where the accumulated light attenuation is stored in the shadow map and accessed via the illuminance(P) loop in the shader.

Single scattering is great for light sources on the smoke, but it doesn't really account for the fire illuminating the smoke, or for illumination from light sources bouncing around in the smoke cloud. To be economical, this multiple scattering requires the use of a pre-computed point cloud. Writing the point cloud is quite simple, and can be done at a larger shading rate than the final renders.

//PtcFile contains the file path for the point cloud writing
//4 channels written: _area, _radiosity, _extinction, Cs

if(OI != color(0) && PtcFile != "") {
	//Check opacity to avoid writing the whole volume box
	float area = area(P, "dicing");
	bake3d(PtcFile, "_area,_radiosity,_extinction,Cs", P, N,
			"interpolate", 1,
			"_area", area, "_radiosity", bakeColor, 
			"_extinction", OI, "Cs", bakeColor);
}

This first step simply stores the radiance at each shading point in the volume where opacity is above zero. In my shader, I added controls to only scatter the fire light, so depending on the value of this check box the radiance from light sources may or may not be included in the point cloud. This way the user is able to gain up the resulting scattered light without washing out the entire smoke cloud from direct lighting.

The first point cloud is not sufficient for indirect lighting. There are two methods to generate an indirect lighting solution from this point cloud - the indirectdiffuse() call and the ptfilter external application.

Generally the indirectdiffuse() function is used as an efficient gather() loop and is raytraced, but it also has support for a point-based mode. In point-based mode, the shader evaluates the indirect lighting on the shading point at render time, though I found this to be extremely slow and opted not to use this method. I would like to revisit this in the future to explore any potential speedups I may have missed.

For my shader I utilized the ptfilter external application. Ptfilter is included with RenderMan Pro Server and performs a variety of operations on point clouds, from color bleeding and subsurface scattering to volume indirect lighting. In my initial implementation, I wrote the point clouds out via a low quality render, and then ran the following command in the shell:

ptfilter -filter volumecolorbleeding -clamp 1 -sortbleeding 1\
    old_point_cloud.ptc new_point_cloud.ptc

Ptfilter creates a new point cloud with the indirect illumination calculations baked in. This new point cloud can be used in the shader via a 3d texture lookup. The biggest problem with this workflow is the large number of steps and annoying back and forth between Maya, command line prman and ptfilter. I noticed some time ago with subsurface scattering that this ptfilter command could be built into the RMS workflow as a part of a pre-pass, but had no idea if it was possible to add my own passes to this system.

Shading Diffuse Models

The following is a transcription of one of my independent projects while I was a student at SCAD.  The original page is still available here.  With RPS 17 and beyond, most of these features are standard, and I haven't continued development as I no longer have a RenderMan license at home.  I hope this information is useful!

The Lambert model has been around for a couple of hundred years now and describes how an ideally diffuse object is lit based on the surface normal and light position (the view position does not matter - Lambert's BRDF is just a constant). By observation, though, even a diffuse surface does not follow the Lambertian model. A good example is a full moon. The moon is lit very evenly and appears from Earth to have a smooth(ish) surface. However, the light does not fall off around the edges in a way consistent with Lambert's model.

From the previous explanation, we know that surfaces are generally made up of microscopic facets. It would be appropriate to assume that Lambert's model is correct - for these small microfacets. The distribution and visibility of these microfacets defines what the macrosurface looks like under observation. In simple terms, the surface does not darken as much around the edges, because if the surface is rough some portion of the microfacets will still be reflecting light at the viewer. In my shader, I have implemented Oren-Nayar diffuse shading [1], a popular model for describing rough diffuse surfaces. In the image above, I am using a roughness of about 0.5 - a value of 1.0 appears as a nearly flat disc (like the moon) but I felt it was too extreme for the moon example image. A roughness value of 0 indicates the surface is completely flat and thus is identical to the Lambert case.
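For the curious, here is a compact Python version of the qualitative Oren-Nayar model from the paper [1]. The sigma parameter is the roughness, and setting it to 0 collapses the model back to Lambert:

import math

def oren_nayar(theta_i, theta_r, phi_i, phi_r, albedo, sigma):
    """Qualitative Oren-Nayar diffuse BRDF.
    theta_* are angles from the normal, phi_* are azimuths, sigma is roughness.
    sigma = 0 reduces to the Lambert BRDF, albedo / pi."""
    s2 = sigma * sigma
    A = 1.0 - 0.5 * s2 / (s2 + 0.33)
    B = 0.45 * s2 / (s2 + 0.09)
    alpha = max(theta_i, theta_r)
    beta = min(theta_i, theta_r)
    cos_dphi = max(0.0, math.cos(phi_i - phi_r))
    return (albedo / math.pi) * (A + B * cos_dphi * math.sin(alpha) * math.tan(beta))

# at grazing light/view angles a rough surface stays noticeably brighter than Lambert
print(oren_nayar(1.3, 1.3, 0.0, 0.0, 1.0, 0.0))   # Lambert
print(oren_nayar(1.3, 1.3, 0.0, 0.0, 1.0, 0.5))   # rough diffuse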

REFERENCES

[1] - Generalization of Lambert's Reflectance Model

The BRDF and Microfacet Theory

The following is a transcription of one of my independent projects while I was a student at SCAD.  The original page is still available here.  With RPS 17 and beyond, most of these features are standard, and I haven't continued development as I no longer have a RenderMan license at home.  I hope this information is useful!

Previously, I built a basic dielectric shader in RenderMan, which afforded me an opportunity to re-learn the relevant computer graphics maths.  I was not particularly concerned with physical correctness. My blurry reflection and refraction were just the result of a large number of samples in a widening cone around the reflection/transmission direction, with all samples weighted equally. As I have continued my explorations with shading, I have discovered that this is not particularly physically based, nor is it efficient.

I plan to build on these concepts and create a physically-plausible material along the lines of the mia_material_x (mental ray) material. There are several stages to implementing a shader of this type. First, I plan to implement a number of diffuse and specular models to better understand the underlying maths and practice implementation of a microfacet-based BRDF. Next, I will add reflection functionality, but by importance sampling the specular BRDF instead of the standard cone around the reflection vector.

RenderMan Pro Server 16 introduced a number of new features for physically-plausible shading, particularly Multiple Importance Sampling (MIS). MIS-aware shaders follow a somewhat strict structure utilizing RSL 2.0 structs and shader objects. Once I have a basic understanding of the maths for the various components, I will move on to creating the class-based shader and structs required to work with RPS 16's MIS features.

In the several times I have attended SIGGRAPH, the technical papers were always miles over my head. A secondary goal of this project is to spend enough time learning and recalling the relevant computer graphics math so that I can gain a general understanding of a shading-related technical paper from a quick reading.

There are quite a few additional features I plan to implement over the coming weeks and months, including AOVs, BTDF refraction, translucency, emission, other BRDF models, and smooth integration into Slim (currently not practical since RPS 16 is not integrated into RenderMan for Maya).

Before I describe the individual models implemented, I want to go over some basic concepts at work. In shading, we assume that the local illumination at a surface point depends on only two vectors - the view direction and the light direction (though there may be multiple lights as we will see later). The Bidirectional Reflectance Distribution Function (BRDF) represents the relationship between the light and view vectors and describes the light response at the shading point based on those two variables. Given a view direction v, the BRDF defines the relative contributions of each incoming (light) direction l. The BRDF is 'bidirectional' because it can also evaluate all outgoing directions v given an incoming (light) direction l. This reciprocal nature is a defining feature of all physically-based BRDFs.

BRDFs are also energy conserving - that is to say the total amount of energy reflected as light is less than or equal to the energy of incident light. This is not the case in some other less involved specular models.

The other key concept at work with physically-based BRDFs is microfacet theory. Suppose that a micropolygon of a surface has a normal n. Microfacet theory suggests that, while n represents a kind of average normal of the surface at that point, it is actually made up of microscopic gouges. These microscopic gouges are themselves made up of tiny microfacets that are each optically flat.

The properties of this microstructure define how the surface will respond to light from a given view direction (or light direction). Given a view direction v and light direction l, using basic reflection math, it is clear that the vector halfway between (h) represents the normal of the surface that reflects v into l (or l into v). In other words, this halfway vector h defines which of the tiny microfacets we are concerned with, given where we are looking from and where the lights are. Based on our input parameters of l and v, we can compute h.

With h defined, we can describe the overall BRDF for this particular view and light direction. The general form of a microfacet-based BRDF, based on a modified Cook-Torrance model [1], is:

f(l, v) = F(l,h) * G(l,v,h) * D(h) / ( 4 * (n.l) * (n.v) )

The equation may seem daunting at first, but each component will be described in simple-to-understand terms without math.

F(l,h) is the Fresnel function. This is present in any shader with reflections, and simply defines how reflections are weak when a surface is viewed straight on and intensify at glancing angles (for most non-metal surfaces). In most shaders, this is implemented based on the macro-surface normal n, but in the BRDF, we are concerned with the Fresnel effect of the microfacets with normal h. The difference is subtle at low roughness but very pronounced as roughness increases.

D(h) is the Distribution function. The distribution of microfacet normals is defined by this function, which is unique to each BRDF model. It determines the overall 'smoothness' of the surface, and is most responsible for the size and shape of the specular highlight. The function takes some parameter representing "roughness" and essentially tells us what concentration of the microfacets share this important normal h, out of all the facets pointing every which way. The higher that concentration, the 'smoother' the surface. A result of infinity would mean that all of the microfacets had the same normal (perfect mirror reflection).

G(l,v,h) is the Geometry function. This function accounts for occlusions in the surface. While the distribution term D(h) defines what concentration of microfacets have a normal h, it doesn't define whether or not we can see all of those faces (we can't). The geometry term G has two parts, shadowing and masking. Shadowing means a microfacet is not visible to light direction l and is thus not illuminated and not contributing to the reflection response. Masking means a microfacet is not visible to the view direction v (even though it may be illuminated), and thus is not contributing to the reflection response. In the real world, some of these reflections would bounce around and eventually become visible, but the error is so small as to not be worth the compute time currently.

The denominator, 4(n.l)(n.v), is a normalization factor whose derivation is too involved for this webpage (a complete explanation can be found in Walter's 2007 paper extending the Cook-Torrance model [2], [3]).
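
To show how the pieces plug into the equation above, here is a Python sketch of one plausible combination: Schlick's approximation for F, the Beckmann distribution for D, and the classic Cook-Torrance shadowing/masking term for G. It is not the exact formulation from any single paper, just an illustration; vectors are assumed normalized.

import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(a):
    m = math.sqrt(dot(a, a))
    return tuple(x / m for x in a)

def fresnel_schlick(v_dot_h, f0):
    """F(l,h): reflectance rises toward 1 at grazing angles."""
    return f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5

def beckmann_D(n_dot_h, alpha):
    """D(h): concentration of microfacets whose normal is h."""
    if n_dot_h <= 0.0:
        return 0.0
    tan2 = (1.0 - n_dot_h ** 2) / (n_dot_h ** 2)
    return math.exp(-tan2 / (alpha * alpha)) / (math.pi * alpha * alpha * n_dot_h ** 4)

def geometry_cook_torrance(n_dot_h, n_dot_v, n_dot_l, v_dot_h):
    """G(l,v,h): shadowing and masking of the microfacets."""
    return min(1.0,
               2.0 * n_dot_h * n_dot_v / v_dot_h,
               2.0 * n_dot_h * n_dot_l / v_dot_h)

def microfacet_brdf(l, v, n, alpha=0.3, f0=0.04):
    h = normalize(tuple(li + vi for li, vi in zip(l, v)))
    n_dot_l, n_dot_v = dot(n, l), dot(n, v)
    n_dot_h, v_dot_h = dot(n, h), dot(v, h)
    if n_dot_l <= 0.0 or n_dot_v <= 0.0:
        return 0.0
    F = fresnel_schlick(v_dot_h, f0)
    D = beckmann_D(n_dot_h, alpha)
    G = geometry_cook_torrance(n_dot_h, n_dot_v, n_dot_l, v_dot_h)
    return (F * G * D) / (4.0 * n_dot_l * n_dot_v)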

REFERENCES

[1] A Reflectance Model for Computer Graphics
[2] Microfacet Models for Refraction through Rough Surfaces
[3] Microfacet Models for reflection and refraction - slides

FumeFX to RenderMan - Part 1

The following is a transcription of one of my independent projects while I was a student at SCAD.  The original page is still available here.  Unfortunately, I haven't had much time or inclination to further develop this as I haven't really done anything with an fx simulation in a couple of years.  Perhaps some of the information here is useful to someone!

FumeFX is a fluid simulation plugin to 3D Studio Max, useful for smoke and fire effects. In the past, I did a fair bit of FX simulation with FumeFX in 3D Studio Max. A major limitation with FumeFX was the shading. The volumes could only be rendered with the software renderer, and later mental ray support was buggy to say the least (admittedly V-Ray shading was supported from the first release, though I never had access to V-Ray). A lot of time was wasted animating lights to approximate the contribution of the fire to the lighting of the environment, as well as just dialing in the shading parameters as the defaults were terrible.

In the RenderMan II class, I have experimented with volume primitives and plan to build scripts to convert FumeFX data to a RIB Archive sequence for use in RenderMan. I will also build a shader that is based on the physical properties of fire but retains art-directability. If time allows, I will incorporate this project with my RenderMan II shader and implement indirect illumination of the smoke and the environment.

VOLUMES IN RENDERMAN

Volumes in RenderMan can now be handled with RiVolumes - voxelized primitives that can carry arbitrary data at each voxel point. Additionally, these volumes are shaded with surface shaders, as opposed to the more complicated and slow raymarching approaches of VPVolumes. The data values from the voxel grid can be treated as primvars inside the surface shader, and this feature is the primary benefit of volume primitives for shading FumeFX data. FumeFX data is likewise a voxelized grid, and can be accessed and exported via MaxScript commands inside of 3DS Max.

Much of the process described below is also explained in the RPS application note Volume Rendering [1].

CONVERTING THE DATA

The FumeFX cache is not an open format, and FumeFX does not appear to have a C++ API - only a very limited MaxScript API. MaxScript is notoriously slow, and the only available way to export the data is to loop through the voxel grid and write the values out using the built-in commands. My initial approach was to use these commands to build an intermediate text-based format for later conversion to RenderMan RIB files with Python.

-- example MaxScript for FumeFX data extraction
fn PostLoad = (
-- PostLoad is called when each frame's data is
-- loaded for viewing/playback/rendering.
-- nx, ny, and nz are defined by FumeFX as the 
-- maximum voxel sizes for the current frame
    for i in 0 to (nx-1) do
    for j in 0 to (ny-1) do
    for k in 0 to (nz-1) do (
        smokeVal = GetSmoke i j k
    )
)

Download the final MaxScript function

After running the simulation, the MaxScript function PostLoad is activated through a checkbox and the sequence is played back and written to disk as ascii text files. The runtime for the full MaxScript code was up to 20 minutes for my 19 MegaVoxel grid - an obvious concern. Next, I used Python to convert these text files into RenderMan RIBs.

Initially the Python segment was very slow because I was generating giant strings to write to the RIB file. After switching to arrays and writing directly from the arrays, I was able to drop the execution time of the Python script from around 25 minutes to 30 seconds. However, these RIB files did not account for 3D Studio's Z-up coordinate system. I reworked the Python to use 3D arrays so that I could index them differently and swap Y and Z. This change caused execution times to approach 20 minutes again.
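
The file formats themselves aren't worth reproducing here, but the core of that step was just a re-indexing of the flattened voxel array. Here is a Python sketch of the idea, with the grid dimensions and data made up for the example:

def swap_y_z(values, nx, ny, nz):
    """Re-order a flat voxel array exported from 3ds Max (Z-up) so that it reads
    correctly in a Y-up scene. Input is assumed laid out with x fastest,
    then y, then z: index = x + y*nx + z*nx*ny."""
    out = [0.0] * (nx * ny * nz)
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                src = x + y * nx + z * nx * ny
                # the old z becomes the new y (and vice versa), so the grid is
                # now nx * nz * ny with x still varying fastest
                dst = x + z * nx + y * nx * nz
                out[dst] = values[src]
    return out

# usage with a made-up 2x3x4 grid of smoke densities
nx, ny, nz = 2, 3, 4
smoke = [float(i) for i in range(nx * ny * nz)]
swapped = swap_y_z(smoke, nx, ny, nz)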

Total conversion times of at least 40 minutes per frame were not acceptable, especially considering I would be working with even larger grids for production use. I ultimately decided to write one more optimized MaxScript function that would read the data from FumeFX's voxel grid, do the Y and Z axis swap, and write the RenderMan RIB files directly. With this approach, the worst-case frame conversions took about 12 minutes - much more manageable, though still slow. Additionally, I converted the RIBs to GZipped binary RIBs with catrib, a PRMan utility. File sizes were on average about four times smaller than their ASCII counterparts.

A small RenderMan for Maya issue was encountered, where the 'resize bounding box' RIB Archive control did not seem to work if the RIB was GZipped (or binary), so I made the choice to just set a large bounding box for my renders at the small expense of time in order to keep file sizes manageable.

I also utilized FumeFX 2's post-processing functionality to 'shrink' the voxel grids to the minimum size needed to contain the smoke and fire. By default, the grids can become quite large and encompass a lot of additional space that only contains velocity data. This can be useful for some applications but was not needed in this case. The grid 'shrinking' vastly reduced the number of frames that were close to the full 19 MegaVoxel grid size.

REFERENCES

[1] RenderMan Pro Server App Note - Volume Rendering