This is the second part of a two-part tutorial on Fast Subsurface Scattering in Unity. This post shows a working implementation of the effect.
At the end of this post, you will find a link to download the Unity project.
Introduction
The previous part of this tutorial explained the mechanism that allows us to approximate the look of translucent materials. Traditional surfaces are shaded based on the light coming from a direction $\hat{L}$. The shader we are going to write will add an additional component, $-\hat{L}$, which de facto works as if the material was illuminated by an opposite light source. This makes it look as if light from $\hat{L}$ passed through the material.
Finally, we have derived a view-dependent equation to model the reflectance of the back lighting:

$$I_{back} = \operatorname{saturate}\left(\hat{V} \cdot -\hat{H}\right)^{p} \cdot s, \qquad \hat{H} = \left\langle \hat{L} + \hat{N}\,\delta \right\rangle$$

where $\left\langle \cdot \right\rangle$ indicates normalisation, and:
- $\hat{L}$ is the direction the light comes from (light direction),
- $\hat{V}$ is the direction the camera is looking at the material (view direction),
- $\hat{N}$ is the orientation of the surface at the point we have to render (surface normal).
There are additional parameters which can be used to control the final look of the material. $\delta$ (the distortion), for example, changes the perceived direction of the backlight so that it is more aligned with the surface normal. Finally, $p$ and $s$ (standing for power and scale) determine how the backlight spreads, and work in a similar way to the parameters of the same name in the Blinn-Phong reflectance.
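As a quick sanity check (a worked example added here, not taken from the original talk): setting $\delta = 0$ removes the distortion entirely, and the backlight reduces to

$$\delta = 0 \quad\Rightarrow\quad \hat{H} = \hat{L}, \qquad I_{back} = \operatorname{saturate}\left(\hat{V} \cdot -\hat{L}\right)^{p} \cdot s$$

which peaks when $\hat{V} = -\hat{L}$, that is, when the light source sits directly behind the material with respect to the camera, exactly as one would expect for light shining through a thin surface.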
What’s left now is to implement this in a shader.
Extending the Standard Shader
As discussed before, we want this effect to be as realistic as possible. Our best choice is to extend Unity’s Standard shader, which already provides very good results for non-translucent materials.
❓ How to extend a Standard Shader?
If you are unfamiliar with the procedure, the specific topic of adding functions to a Standard Shader has been covered extensively in this blog. Two good starting tutorials for this are 3D Printer Shader Effect and CD-ROM Shader: Diffraction Grating.
To sum it up, the basic idea is to create a new surface shader and replace its lighting function with a custom one. In there, we will invoke the original Standard lighting function, to get the material rendered with Unity’s PBR shader.
Once we have that, we can calculate the contribution of the backlighting, and blend it with the original colour provided by the Standard lighting function.
You can find a good starting point here:
#pragma surface surf StandardTranslucent fullforwardshadows
#pragma target 3.0

sampler2D _MainTex;

struct Input
{
    float2 uv_MainTex;
};

half _Glossiness;
half _Metallic;
fixed4 _Color;

#include "UnityPBSLighting.cginc"
inline fixed4 LightingStandardTranslucent(SurfaceOutputStandard s, fixed3 viewDir, UnityGI gi)
{
    // Original colour
    fixed4 pbr = LightingStandard(s, viewDir, gi);

    // ...
    // Alter "pbr" here to include the new light
    // ...

    return pbr;
}

void LightingStandardTranslucent_GI(SurfaceOutputStandard s, UnityGIInput data, inout UnityGI gi)
{
    LightingStandard_GI(s, data, gi);
}

void surf (Input IN, inout SurfaceOutputStandard o)
{
    // Albedo comes from a texture tinted by color
    fixed4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
    o.Albedo = c.rgb;
    // Metallic and smoothness come from slider variables
    o.Metallic = _Metallic;
    o.Smoothness = _Glossiness;
    o.Alpha = c.a;
}
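For completeness, the surface shader above needs to live inside a ShaderLab file with a matching Properties block. Below is a minimal sketch of such a wrapper; the shader name and the property ranges are arbitrary choices for illustration, not something prescribed by the original project:

Shader "Custom/FastSubsurfaceScattering"
{
    Properties
    {
        _Color ("Color", Color) = (1,1,1,1)
        _MainTex ("Albedo (RGB)", 2D) = "white" {}
        _Glossiness ("Smoothness", Range(0,1)) = 0.5
        _Metallic ("Metallic", Range(0,1)) = 0.0
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        LOD 200

        CGPROGRAM
        // ... the surface shader code shown above goes here ...
        ENDCG
    }
    FallBack "Diffuse"
}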
Let's call the new lighting function used for this effect StandardTranslucent. The backlight will have the same colour as the original light; what we can control is its intensity, I:
#pragma surface surf StandardTranslucent fullforwardshadows
#include "UnityPBSLighting.cginc"
inline fixed4 LightingStandardTranslucent(SurfaceOutputStandard s, fixed3 viewDir, UnityGI gi)
{
    // Original colour
    fixed4 pbr = LightingStandard(s, viewDir, gi);

    // Calculate intensity of backlight (light translucent)
    float I = ...

    pbr.rgb = pbr.rgb + gi.light.color * I;
    return pbr;
}
❓ How come pbr is not clamped?
When adding two colours together, one should be careful not to go beyond 1. This is usually done with the saturate function, which simply clamps each colour component between 0 and 1.
If the camera you are using is set to support HDR (high dynamic range), then values above 1 are used for post-processing effects such as bloom. In this particular shader, we do not saturate the final colour, since a bloom filter will be applied to the final rendering.
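Conversely, if your camera is not HDR and no bloom is applied, a reasonable variant (an addition of mine, not part of the original shader) is to clamp the sum:

// Non-HDR variant: clamp each colour channel to [0, 1]
pbr.rgb = saturate(pbr.rgb + gi.light.color * I);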
Back Lighting
Following the equations described in the first section of this tutorial, we can proceed to write the following code:
inline fixed4 LightingStandardTranslucent(SurfaceOutputStandard s, fixed3 viewDir, UnityGI gi)
{
    // Original colour
    fixed4 pbr = LightingStandard(s, viewDir, gi);

    // --- Translucency ---
    float3 L = gi.light.dir;
    float3 V = viewDir;
    float3 N = s.Normal;

    float3 H = normalize(L + N * _Distortion);
    float I = pow(saturate(dot(V, -H)), _Power) * _Scale;

    // Final add
    pbr.rgb = pbr.rgb + gi.light.color * I;
    return pbr;
}
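The snippet above assumes that _Distortion, _Power and _Scale have been exposed as material properties and declared next to the other uniforms. A minimal sketch of plausible declarations (the ranges are illustrative, not taken from the original project):

// In the Properties block (ranges are illustrative):
//   _Distortion ("Distortion", Range(0,1)) = 0.5
//   _Power ("Power", Range(0.1,10)) = 1
//   _Scale ("Scale", Range(0,10)) = 1

// Matching uniforms in the CGPROGRAM block:
float _Distortion;
float _Power;
float _Scale;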
The code above is a direct translation of the equations from the first part of this post. The resulting translucency effect is believable (below), but it is not related in any way to the thickness of the material, which makes it very hard to control.
Local Thickness
It is obvious that the amount of back light strongly depends on the density and thickness of the material. Ideally, we would need to know the distance light has travelled inside the material, and attenuate it accordingly. You can see in the image below how three different light rays with the same incident angle travel very different lengths through the material.
From the point of view of a shader, however, we do not have access to either the local geometry or the history of the light rays. Unfortunately, there is no way of solving this problem locally. The best approach proposed is to rely on an external local thickness map: a texture, mapped onto our surface, which indicates how "thick" that part of the material is. The concept of "thickness" is used loosely here, as the real thickness actually depends on the angle the light is coming from.
The diagram above shows that there is no unique concept of "thickness" associated with the red point on the circle. The amount of material the light travels through indeed depends on the angle of the incoming light.
That being said, we have to remember that this entire approach to translucency is not about being physically accurate, but just realistic enough to fool the player’s eye. Below (credits), you can see a good local thickness map visualised on the model of a statue. White colours correspond to parts of the model where the translucent effect will be stronger, approximating the concept of thickness.
❓ How to generate the local thickness map?
The author of this technique proposed an interesting way to automatically create a local thickness map from any model. Those are the steps:
- Flip the faces of the model
- Render Ambient Occlusion to a texture
- Invert the colour of the texture
The rationale behind this procedure is that, by rendering ambient occlusion on the back faces, one can roughly "average all light transport happening inside the shape".
Instead of a texture, the thickness can also be stored directly in the vertices.
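As a sketch of that vertex-based variant (an adaptation of mine, assuming the thickness has been baked into the red channel of the vertex colours), the Input structure can expose the vertex colour and surf can read it instead of sampling a texture:

struct Input
{
    float2 uv_MainTex;
    float4 color : COLOR; // per-vertex colour; thickness baked in the red channel
};

float thickness;

void surf (Input IN, inout SurfaceOutputStandard o)
{
    // ... Albedo, Metallic, Smoothness as before ...

    // Read the baked thickness instead of sampling a map
    thickness = IN.color.r;
}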
The Final Version
We now know that we need to take into account the local thickness of the material. The easiest way is to provide a texture map that we can sample. While not physically accurate, it can produce believable results. Additionally, the local thickness is encoded in a way that allows artists to retain full control over the effect.
In this implementation, the local thickness is provided in the red channel of an additional texture, sampled in the surf function:
sampler2D _LocalThickness;
float thickness;

void surf (Input IN, inout SurfaceOutputStandard o)
{
    // Albedo comes from a texture tinted by color
    fixed4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
    o.Albedo = c.rgb;
    // Metallic and smoothness come from slider variables
    o.Metallic = _Metallic;
    o.Smoothness = _Glossiness;
    o.Alpha = c.a;

    // Sample the local thickness map (red channel)
    thickness = tex2D (_LocalThickness, IN.uv_MainTex).r;
}
❓ How come the texture is not sampled in the lighting function?
I have chosen to store this value in a variable called thickness, which is later accessed by the lighting function. As a personal preference, I tend to do this every time I have to sample a texture that is later needed by a lighting function.
If you prefer, you can sample the texture directly in the lighting function. In this case, you need to pass the UV coordinates (possibly extending SurfaceOutputStandard) and to use tex2Dlod instead of tex2D. The function takes two additional coordinates; for this specific application, you can set both of them to zero:
thickness = tex2Dlod (_LocalThickness, fixed4(uv, 0, 0)).r;
Colin and Marc proposed a slightly different equation to calculate the final intensity of the backlight. It takes into account both the thickness and an additional attenuation parameter. They also allow for an additional ambient component that is present at all times:
inline fixed4 LightingStandardTranslucent(SurfaceOutputStandard s, fixed3 viewDir, UnityGI gi)
{
    // Original colour
    fixed4 pbr = LightingStandard(s, viewDir, gi);

    // --- Translucency ---
    float3 L = gi.light.dir;
    float3 V = viewDir;
    float3 N = s.Normal;

    float3 H = normalize(L + N * _Distortion);
    float VdotH = pow(saturate(dot(V, -H)), _Power) * _Scale;
    float3 I = _Attenuation * (VdotH + _Ambient) * thickness;

    // Final add
    pbr.rgb = pbr.rgb + gi.light.color * I;
    return pbr;
}
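Like _Distortion, _Power and _Scale, the new _Attenuation and _Ambient parameters need to be exposed and declared; a sketch under the same assumptions as before:

// In the Properties block (ranges are illustrative):
//   _Attenuation ("Attenuation", Range(0,10)) = 1
//   _Ambient ("Ambient", Range(0,1)) = 0

// Matching uniforms in the CGPROGRAM block:
float _Attenuation;
float _Ambient;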
This is the final result:
Conclusion
This post concludes the series on fast subsurface scattering. The approach described in this tutorial is based on the solution presented at GDC 2011 by Colin Barré-Brisebois and Marc Bouchard in a talk called Approximating Translucency for a Fast, Cheap and Convincing Subsurface Scattering Look.
You can read the entire series here:
- Part 1. Fast Subsurface Scattering in Unity
- Part 2. Fast Subsurface Scattering in Unity
You can download all the necessary files to run this project (shader, textures, models, scenes) on Patreon.