FumeFX to RenderMan - Maya Integration

The following is a transcription of one of my independent projects while I was a student at SCAD.  The original page is still available here.  Unfortunately, I haven't had much time or inclination to further develop this as I haven't really done anything with an fx simulation in a couple of years.  Perhaps some of the information here is useful to someone!

INTEGRATING WITH MAYA - PASSES

Based on the unwieldy nature of the previous steps, I decided to investigate adding my own passes to the RenderMan Pass Settings Tree in Maya. I started this process by examining the Slim templates for the All Purpose Material Subsurface Scattering component and the Base Volume. After analyzing the Slim template code, I isolated the relevant TCL code to create these render passes. As with any of these experiments, a long series of deeper and deeper searches into the RenderMan Slim templates and TCL header files finally illuminated what actually happens with the TCL code in the main Slim template. I will not spend any time going over the basics of Slim template creation, but the interesting bit of the Slim template is posted below.

parameter float Disable {
   label {Disable Bake}
   subtype switch
   default 1
   msghandler {
      SetValue {
         set atl sss
         set ch _radiance_t
         set sty volumeScatter 
         set prop %obj
         set app [$prop GetAppearance]
         set map [[$app GetProperties -name *Crew*] GetValue]
         set d [$prop GetValue]
         set bakedFileProp [$app GetProperties -name *BakedFile*]
         set ptcFileProp [$app GetProperties -name *PtcFile*]
         set bakePassIdProp [$app GetProperties -name *BakePassId*]
         set bakePassClassProp [$app GetProperties -name *BakePassClass*]
         $bakedFileProp SetValue "\[bakemap -atlas $atl -chan $ch -map $map
                                 -style $sty -disable $d\]"
         $ptcFileProp SetValue "\[ptcfile $atl $map $sty\]" 
         $bakePassIdProp SetValue "\[bakepassid $atl $map $sty\]"
         $bakePassClassProp SetValue "\[bakepassclass $sty\]"
      }
   }
}
slimattribute string Crew {
   default {$SHADERGROUP}
}
parameter string CurrentPassId {
   provider variable locked
   default {$PASSID}
}
parameter string BakePassId {
   default {}
   provider variable locked
}
parameter string BakePassClass {
   default {}
   provider variable locked
}
parameter string PtcFile {
   default {}
   provider variable locked
}
parameter string BakedFile {
   default {}
   provider variable locked
}

The basic element of interest here is the msghandler SetValue. This TCL method runs when the value of the associated parameter changes. In this case, it was a checkbox for enabling or disabling the point cloud bake. When the point cloud is enabled, parameters are retrieved about the set of objects with the shader applied ("Crew") and the current Pass ID, and four TCL functions are set in strings that will create the relevant render passes in the Pass Settings dialog. I have not isolated the area yet where these are finally executed, but I am fairly certain these are evaluated by some TCL code that is executed at render-time. This code is mostly borrowed from the BaseVolume material.

Now that RenderMan will happily create the additional passes, I have control over the various associated render settings (shading rate, number of bounces, etc.). Next, I added conditionals to my shader code to only bake the map if the prepass was the current pass, and to only use the point cloud result in the final render pass. These values are not accessed in the standard way; instead, they come through the various hidden parameters set in the previous code block.

//BakedFile and Disable are set as shader params
if(BakedFile != "" && Disable != 1) {
	color indirLight = 0;					
	texture3d(BakedFile, P, N, "_indirectdiffuse", indirLight);
	sAccum += indirLight * Intensity;
}

//Disable, CurrentPassId, BakePassId, PtcFile are all set as shader params
if(Disable != 1 && rdepth == 0 && CurrentPassId == BakePassId) {
	float area = area(P, "dicing");
	if(OI != color(0) && PtcFile != "") {
		bake3d(PtcFile, "_area,_radiosity,_extinction,Cs", P, N,
			"interpolate", 1,
			"_area", area, "_radiosity", CI, 
			"_extinction", OI, "Cs", CI);
	}
}

I am extremely happy with how this smooths the process of using the shader and am much clearer (though still finding the last few details) on how these more complex Slim templates interface with RenderMan for Maya.

INTEGRATING WITH MAYA - AOVS

The last major feature I hoped to add was support for AOVs. Having a small amount of experience with standard RSL shader AOVs, I thought this would take only a couple of minutes to implement with simple output varying parameters. It was not nearly that simple, but I was ultimately able to find a fairly elegant way to implement AOVs.

//A simple RSL shader with an AOV
surface hasAOV ( output varying color extraChan = 0 ) {
	//will create an AOV with the surface colored red
	extraChan = color(1,0,0);
}

First, a bit of background about how Slim shader templates work. A Slim template is a bunch of TCL defining input and output parameters followed by a block of RSL code. Inside this RSL code, values are assigned to output parameters and at first glance everything appears exactly like a regular RSL shader. The actual process is much more complicated than that, but can be described in brief. The RSLFunction defined in a Slim template is implemented in the resulting .sl file as just that - a function. The actual surface shader code calls the function and assigns the output parameters to the actual shader outputs. First, a very simple Slim template:

slim 1 extensions cutter {
extensions fundza cutr {
	template shadingmodel HasColor {
		collection shadingmodel result {
			access output
			display hidden
			parameter color CI {
				access output
				}
			parameter color OI {
				access output
				}
			}
		RSLFunction {
		void cutrHasColor (
			output color CI;
			output color OI;
			)
		{
		OI = Os;
		CI = Os * color(1,0,0);
		}
} } } }

This template simply creates an RSL function called cutrHasColor and sets the OI and CI values (no AOVs yet). The resulting .sl code (reduced):

surface HasColor () {
	void cutrHasColor ( output color CI;
			output color OI; )
	{
		OI = Os;
		CI = Os * color(1,0,0);
	}
	
	color tmp2;
	color tmp3;
	
	cutrHasColor(tmp2, tmp3);
	Ci = tmp2;
	Oi = tmp3;
}

A close look at this .sl file, created by Slim, shows that the RSLFunction block from Slim is just carried over as a function inside the shader. The function is called and its output values are assigned to the required output values of the surface shader (Ci and Oi). This is all well and good, but I discovered quickly that if I tried simply adding a few more output parameters to the result collection of the Slim template and function, they didn't translate over to my final .sl code. It would still just set Ci and Oi. A bit more research led me to the concept of Visualizers.

Visualizers are a type of dynamic shader template, where that last bit of RSL is defined. In this case, the overall template has a shadingmodel type. This relates to a shadingmodel visualizer, which defines two outputs, OI and CI, and plugs them into Oi and Ci of the final RSL shader. There are many built-in visualizers, and one of particular interest is the shadingmodel_aov visualizer. This visualizer references a built-in Pixar .h file with a list of predefined AOVs and runs some TCL loops to define all of the RSL code for their use. This file can be modified, but it lives within the install directory and is quite inconvenient to work with. However, when Malcolm told me about this header file, I did some more digging and decided on a different approach: defining my own visualizer inside my Slim template.

Basically, I wanted my template to have three AOVs - Fire, Smoke, and Indirect. This way the user in post would have control over the color of each element as well as the intensity of the indirect component. Ultimately, my output collection for the Slim template now needed five outputs instead of the standard two. As I mentioned, just adding them caused no errors, but it did not hook them to the final RSL code in any meaningful way. I added the following visualizer to my code:

slim 1 extensions kevin_george {
extensions kgeorge kdg {
	template visualizer shadingmodel_kdg {
		collection shadingmodel_kdg shadingmodel {
			detail mustvary
			parameter color CI {
				detail mustvary
				default {1 1 1}
			}
			parameter color OI {
				detail mustvary
				default {1 1 1}
			}
			parameter color Fire {
				detail mustvary
				provider connection
				default {0 0 0}
			}
			parameter color Smoke {
				detail mustvary
				provider connection
				default {0 0 0}			
			}
			parameter color IndirFire {
				detail mustvary
				provider connection
				default {0 0 0}			
			}
		}
		
		RSLDeclare output varying color Fire 0
		RSLDeclare output varying color Smoke 0
		RSLDeclare output varying color IndirFire 0	
		
		RSLMain {
			generate
			output "Ci = [getvar CI];"
			output "Oi = [getvar OI];"
			output "Fire = [getvar Fire];"
			output "Smoke = [getvar Smoke];"
			output "IndirFire = [getvar IndirFire];"
		}
	}
} }

This section includes a lot of code, but the above basically defines the parameters of a new template 'type' called shadingmodel_kdg that expects five output parameters instead of two. Additionally, the RSLDeclare statements add the actual output variables to the .sl file, and the five commands in RSLMain write the final values to the .sl file.

RenderMan was still unwilling to cooperate: it did not recognize my new template type as attachable, so I was no longer able to attach my shader to geometry. To tell RenderMan that this new type is attachable, the slim.ini file must be modified with the following command:

SetPref SlimAttachableMap(shadingmodel_kdg) surface

With this final step complete, I was able to assign my shader and output AOVs with RenderMan for Maya.

More valuable information can be found in the RMS 3 documentation under the section Slim -> Templates -> Templates: Advanced Topics [3].

FUTURE DEVELOPMENTS

While I accomplished much of my desired goal, I still would like to extend this shader to be a bit more universal by learning and utilizing the same channel names as would be used with Maya Fluids. This extension should make it easier for me to integrate my shader with Maya Fluid simulations as well, instead of being totally reliant on my custom primvar channel names.

On the last day of the project, I found a huge oversight on my part - temperature is really not the best way to determine where fire resides. I was using this approach because FumeFX does not give the user access to the actual 'fire' parameter. However, it does not account for the fact that very hot fire produces very hot convection areas above the flames that are not visible. I have determined that I can deal with this issue by adapting my code to also export the fuel channel, and I look forward to the next version of the shader.

I would also like more control over the color and opacity - preferably through some means to set a color gradient for each of these parameters for fire and smoke to dial in and art direct the final look.

Originally, I planned to use the vector velocity data from the FumeFX simulation as a source of motion vectors, but I did not have time to implement this feature. I'm not certain it's as straightforward as exporting a motion vector pass for compositing, but I plan to explore ways to use this velocity data in future versions.

REFERENCES

[1] Production Volume Rendering - Siggraph 2011 Course Notes
[2] Stefan-Boltzmann Law
[3] RenderMan Studio: Slim Templates - Advanced Topics
[4] RenderMan Pro Server App Note - Volume Rendering

FumeFX to RenderMan - Shading

SHADING THE FIRE

The visible color of most fire is based on a physical property called black-body radiation. A black body is a surface that absorbs all electromagnetic radiation. When a black body is heated to a constant temperature, it emits black body radiation, some of which is in the visible spectrum. The wavelengths and intensity of this radiation are defined by temperature alone. Therefore, there is a defined 'color' for a black body at a certain temperature.
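
As a concrete illustration of "color defined by temperature alone": Wien's displacement law gives the wavelength at which a black body's emission peaks. This is background physics for the paragraph above, not code from the shader itself.

```python
# Illustration: Wien's displacement law gives the peak emission
# wavelength of a black body at temperature T (in Kelvin).
# Background physics only - not part of the original shader code.

WIEN_B = 2.898e-3  # Wien's displacement constant, in meter-Kelvin

def peak_wavelength_nm(temp_kelvin):
    """Peak emission wavelength in nanometers for a black body at T."""
    return WIEN_B / temp_kelvin * 1e9

# A flame around 1800 K peaks in the infrared (~1610 nm), so only the
# red/orange tail of its emission falls in the visible spectrum.
```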

The soot particles in fire are considered black bodies, and an obvious property of fire fluid simulation is temperature. If fire simulations are conducted with realistic temperature values (or converted later), the black body radiation spectrum can be applied to the fire particles to produce a realistic flame rendering. In my case, I am using a color spline with 10 values approximating the ramp.
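
The spline itself isn't shown in the shader code here, but a minimal sketch of such a lookup might look like the following. The control colors below are made-up placeholders, not the actual ramp values from the shader.

```python
# Sketch of a piecewise-linear color spline lookup, similar in spirit
# to the 10-point ramp described above. The FIRE_RAMP control colors
# are placeholders, not the values used in the actual shader.

def spline_lookup(t, knots):
    """Linearly interpolate a list of (r, g, b) knots at t in [0, 1]."""
    t = min(max(t, 0.0), 1.0)
    scaled = t * (len(knots) - 1)
    i = min(int(scaled), len(knots) - 2)  # segment index
    f = scaled - i                        # fraction within the segment
    a, b = knots[i], knots[i + 1]
    return tuple(x + (y - x) * f for x, y in zip(a, b))

# Placeholder ramp from dark red through orange toward near-white.
FIRE_RAMP = [
    (0.1, 0.0, 0.0), (0.4, 0.05, 0.0), (0.8, 0.2, 0.0),
    (1.0, 0.4, 0.05), (1.0, 0.6, 0.1), (1.0, 0.75, 0.2),
    (1.0, 0.85, 0.4), (1.0, 0.92, 0.6), (1.0, 0.97, 0.8),
    (1.0, 1.0, 1.0),
]
```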

The brightness of actual fire is often not taken into account when rendering fire fluid simulations. The actual brightness is defined by the Stefan-Boltzmann law [2], which is quite involved but basically boils down to intensity being proportional to temperature raised to the fourth power. I used the normalized temperature of the fire along the color gradient as the base and multiplied the result by a user-controlled constant. The result of the fourth power is still in the normalized range, and the constant gives the user direct control over the maximum intensity of the fire.

float scaleFactor = pow((normalizedSplinePosition),4)*fireExponent;
color fireColor = colorSplineValue * scaleFactor;

Fire opacity control is provided by a user-controlled constant, which is multiplied by the normalized position on the temperature gradient. This way, the hotter the fire, the more opaque it is, and the fire fades off nicely as the temperature cools. This is not entirely accurate, but it offers more directability, as it can still sometimes be difficult to dial in the contributions of the fire and smoke. I would like to extend this functionality in a more elegant way than providing a list of float values for a spline curve. A spline curve would also help with a separate issue where the cooler fire (red) is generally so transparent as to not be visible at all; I expect to add this in a future version.

FireOi = normalizedSplinePosition * fireOpacityVal;
FireCi = FireOi * fireColor;

It is often quite difficult to nail temperatures exactly in a fluid simulation, so the shader also includes the ability to scale the temperature values up or down by a user-controlled value to dial in a look.

SHADING THE SMOKE

The smoke shading process is much simpler than the fire; currently the user only has access to a color and an opacity control. FumeFX considers a voxel full of smoke to have a value of 1, though values can be greater than 1 to indicate denser smoke. The user opacity value is multiplied by the smoke density from the voxel grid, so it acts as an 'opacity per 1 unit density' control. Color is currently just controlled via a color swatch, and color/opacity are not affected by temperature or other factors at this time.

A great resource for fire and smoke shading methods is the course notes from the Siggraph 2011 course Production Volume Rendering [1].

MULTIPLE SCATTERING

Single scattering is the effect of light penetrating into a volume along the light vector and attenuating over distance based on the density of the volume. Since this attenuation only depends on light direction, this is basically accomplished with deep shadows, where the accumulated light attenuation is stored in the shadow map and accessed via the illuminate(P) call from the shader.
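
The attenuation being stored is exponential (Beer-Lambert) extinction along the ray. A minimal sketch of that accumulation, with illustrative step size and density samples (not values from the actual renders):

```python
import math

# Minimal sketch of the attenuation a deep shadow map stores:
# transmittance along a ray falls off exponentially with the
# accumulated density (Beer-Lambert extinction). Step size and
# density samples here are illustrative placeholders.

def transmittance(densities, step_size):
    """Transmittance after marching through voxel density samples."""
    optical_depth = sum(densities) * step_size
    return math.exp(-optical_depth)

# Denser smoke -> less light reaches points deeper in the volume.
```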

Single scattering works well for light sources on the smoke but doesn't account for fire illuminating the smoke, or for illumination from light sources bouncing around in the smoke cloud. To be economical, this multiple scattering requires the use of a pre-computed point cloud. Writing the point cloud is quite simple and can be done at a larger shading rate than the final renders.

//PtcFile contains the file path for the point cloud writing
//4 channels written: _area, _radiosity, _extinction, Cs

if(OI != color(0) && PtcFile != "") {
	//Check opacity to avoid writing the whole volume box
	float area = area(P, "dicing");
	bake3d(PtcFile, "_area,_radiosity,_extinction,Cs", P, N,
			"interpolate", 1,
			"_area", area, "_radiosity", bakeColor, 
			"_extinction", OI, "Cs", bakeColor);
}

This first step simply stores the radiance at each shading point in the volume where opacity is above zero. In my shader, I added controls to only scatter the fire light, so depending on the value of this check box the radiance from light sources may or may not be included in the point cloud. This way the user is able to gain up the resulting scattered light without washing out the entire smoke cloud from direct lighting.

The first point cloud is not sufficient for indirect lighting. There are two methods to generate an indirect lighting solution from this point cloud - the indirectdiffuse() call and the ptfilter external application.

Generally, the indirectdiffuse() function is used as an efficient, raytraced gather() loop, but it also supports a point-based mode. In point-based mode, the shader evaluates the indirect lighting at the shading point at render time, though I found this to be extremely slow and opted not to use this method. I would like to revisit it in the future to explore any potential speedups I may have missed.

For my shader I utilized the ptfilter external application. Ptfilter is included with RenderMan Pro Server and performs a variety of operations on point clouds, from color bleeding and subsurface scattering to volume indirect lighting. In my initial implementation, I wrote the point clouds out via a low quality render, and then ran the following command in the shell:

ptfilter -filter volumecolorbleeding -clamp 1 -sortbleeding 1\
    old_point_cloud.ptc new_point_cloud.ptc

Ptfilter creates a new point cloud with the indirect illumination calculations baked in. This new point cloud can be used in the shader via a 3d texture lookup. The biggest problem with this workflow is the large number of steps and annoying back and forth between Maya, command line prman and ptfilter. I noticed some time ago with subsurface scattering that this ptfilter command could be built into the RMS workflow as a part of a pre-pass, but had no idea if it was possible to add my own passes to this system.
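
One way to smooth that back and forth is to script the ptfilter step. Below is a hedged sketch in Python, assuming ptfilter is on the PATH and using the same flags as the shell command above; the file names are just examples.

```python
import subprocess

# Sketch of scripting the manual ptfilter step described above, so the
# bake -> filter round trip doesn't have to be typed by hand each time.
# Assumes the ptfilter binary is on the PATH; the flags mirror the
# shell command shown earlier.

def build_ptfilter_cmd(src_ptc, dst_ptc):
    """Assemble the volumecolorbleeding ptfilter command line."""
    return [
        "ptfilter",
        "-filter", "volumecolorbleeding",
        "-clamp", "1",
        "-sortbleeding", "1",
        src_ptc, dst_ptc,
    ]

def run_ptfilter(src_ptc, dst_ptc):
    """Run ptfilter, raising if the external process fails."""
    subprocess.run(build_ptfilter_cmd(src_ptc, dst_ptc), check=True)
```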

FumeFX to RenderMan - Part 1

FumeFX is a fluid simulation plugin to 3D Studio Max, useful for smoke and fire effects. In the past, I did a fair bit of FX simulation with FumeFX in 3D Studio Max. A major limitation with FumeFX was the shading. The volumes could only be rendered with the software renderer, and later mental ray support was buggy to say the least (admittedly V-Ray shading was supported from the first release, though I never had access to V-Ray). A lot of time was wasted animating lights to approximate the contribution of the fire to the lighting of the environment, as well as just dialing in the shading parameters as the defaults were terrible.

In the RenderMan II class, I have experimented with volume primitives and plan to build scripts to convert FumeFX data to a RIB Archive sequence for use in RenderMan. I will also build a shader that is based on the physical properties of fire but retains art-directability. If time allows, I will incorporate this project with my RenderMan II shader and implement indirect illumination of the smoke and the environment.

VOLUMES IN RENDERMAN

Volumes in RenderMan can now be handled with RiVolumes: voxelized primitives that can carry arbitrary data at each voxel point. Additionally, these volumes are shaded with surface shaders, as opposed to the more complicated and slow raymarching approaches of VPVolumes. The data values from the voxel grid can be treated as primvars inside the surface shader, and this feature is the primary benefit of volume primitives for shading FumeFX data. FumeFX data is likewise a voxelized grid, and it can be accessed and exported via MaxScript commands inside of 3DS Max.

Much of the process described below is also explained in the RPS application note Volume Rendering [1].

CONVERTING THE DATA

FumeFX's cache format is not open, and the plugin does not appear to have a C++ API - only a very limited MaxScript API. MaxScript is notoriously slow, and the only available way to export the data is by looping through the voxel grid and writing it out using the built-in commands. My initial approach was to use these commands to build an intermediate text-based format for later conversion to RenderMan RIB files with Python.

-- example MaxScript for FumeFX data extraction
fn PostLoad = (
-- PostLoad is called when each frame's data is
-- loaded for viewing/playback/rendering.
-- nx, ny, and nz are defined by FumeFX as the 
-- maximum voxel sizes for the current frame
    for i in 0 to (nx-1) do
    for j in 0 to (ny-1) do
    for k in 0 to (nz-1) do (
        smokeVal = GetSmoke i j k
    )
)

Download the final MaxScript function

After running the simulation, the MaxScript PostLoad function is activated through a checkbox, and the sequence is played back and written to disk as ASCII text files. The runtime for the full MaxScript code was up to 20 minutes for my 19 MegaVoxel grid - an obvious concern. Next, I used Python to convert these text files into RenderMan RIBs.

Initially, the Python segment was very slow because I was generating giant strings to write to the RIB file. After switching to arrays and writing directly from the arrays, I was able to drop the execution time of the Python script from around 25 minutes to 30 seconds. However, these RIB files did not account for 3D Studio's Z-up coordinate system. I reworked the Python to use 3D arrays so that I could query them differently to account for swapping Y and Z. This change caused execution times to approach 20 minutes again.
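
For reference, the same reorder can be done without nested 3D Python lists by remapping flat-array indices directly. This is a hedged sketch, not my original conversion code, and the memory layouts assumed here are illustrative:

```python
# Sketch of the Y/Z swap using flat-array index arithmetic rather than
# nested 3D Python lists. The x-major layouts assumed below are
# illustrative, not necessarily the layout of my original converter.

def swap_yz(values, nx, ny, nz):
    """Reorder a flat voxel array so the Y and Z axes are exchanged.

    Input layout:  index = (i * ny + j) * nz + k   (x, y, z order)
    Output layout: index = (i * nz + k) * ny + j   (x, z, y order)
    """
    out = [0.0] * (nx * ny * nz)
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                out[(i * nz + k) * ny + j] = values[(i * ny + j) * nz + k]
    return out
```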

Total conversion times of at least 40 minutes per frame were not acceptable, especially considering I would be working with even larger grids for production use. I ultimately decided to write one more optimized MaxScript function that would read the data from FumeFX's voxel grid, do the Y and Z axis swap, and write the RenderMan RIB files directly. With this approach, the worst-case frame conversions took about 12 minutes, which was much more manageable (though still slow). Additionally, I converted the RIBs to gzipped binary RIBs with catrib, a PRMan utility. File sizes were on average about 4 times smaller than their ASCII counterparts.
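
As a quick illustration of why gzipping helps so much here: ASCII voxel dumps are extremely repetitive, so they compress very well. The sample text below is a made-up stand-in for real RIB content, not actual shader output:

```python
import gzip

# Quick illustration of the file-size win from gzipping RIB data:
# ASCII voxel dumps are highly repetitive, so they compress well.
# The sample text is a made-up stand-in for real RIB content.

ascii_rib = b"".join(
    b'"float density" [0.000000 0.125000 0.250000]\n' for _ in range(1000)
)
compressed = gzip.compress(ascii_rib)
ratio = len(ascii_rib) / len(compressed)
```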

I also encountered a small RenderMan for Maya issue where the 'resize bounding box' RIB Archive control did not seem to work if the RIB was gzipped (or binary), so I chose to just set a large bounding box for my renders, at a small expense of render time, in order to keep file sizes manageable.

I also utilized FumeFX 2's post-processing functionality to 'shrink' the voxel grids to the minimum size needed to contain the smoke and fire. By default, the grids can become quite large and encompass a lot of additional space that only contains velocity data. This can be useful for some applications but was not needed in this case. The grid 'shrinking' vastly reduced the number of frames that were close to the full 19 MegaVoxel grid size.

REFERENCES

[1] RenderMan Pro Server App Note - Volume Rendering