Fast Subsurface Scattering in Unity (Part 2)

This is the second part of the tutorial on Fast Subsurface Scattering in Unity. This post will show a working implementation of this effect.

This is a two-part series:

At the end of this post, you will find a link to download the Unity project.

Introduction

The previous part of this tutorial explained the mechanism that allows approximating the look of translucent materials. Traditional surfaces are shaded based on the light coming from a direction L. The shader we are going to write will add an additional component, -L, which de facto works as if the material were illuminated by an opposite light source. This makes it look as if light from L passed through the material.

We also derived a view-dependent equation to model the intensity of the back lighting:

    \[I_{back} = \text{saturate}\left(V \cdot -\langle L + N\delta \rangle\right)^{p} \cdot s\]

where:

  • L is the direction the light comes from (light direction),
  • V is the direction the camera is looking at the material (view direction),
  • N is the orientation of the surface at the point we have to render (surface normal).

There are additional parameters which can be used to control the final look of the material. \delta, for example, changes the perceived direction of the backlight so that it is more aligned with the surface normal:

Finally, p and s (standing for power and scale) determine how the backlight spreads, and work in a similar way to the identically named parameters in the Blinn-Phong reflectance.

What’s left now is to implement this in a shader.

Extending the Standard Shader

As discussed before, we want this effect to be as realistic as possible. Our best choice is to extend Unity’s Standard shader, which already provides very good results for non-translucent materials.

❓ How to extend a Standard Shader?

Let’s call the new lighting function to be used for this effect StandardTranslucent. The backlight will have the same colour as the original light. What we can control is its intensity, I:

#pragma surface surf StandardTranslucent fullforwardshadows

#include "UnityPBSLighting.cginc"
inline fixed4 LightingStandardTranslucent(SurfaceOutputStandard s, fixed3 viewDir, UnityGI gi)
{
	// Original colour
	fixed4 pbr = LightingStandard(s, viewDir, gi);
	
	// Calculate intensity of backlight (light translucent)
	float I = ... 
	pbr.rgb = pbr.rgb + gi.light.color * I;

	return pbr;
}
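Note that a custom lighting model derived from the Standard one also needs a matching GI function, or Unity will refuse to compile the shader. A minimal sketch, which simply forwards global illumination to the Standard model, looks like this:

inline void LightingStandardTranslucent_GI(SurfaceOutputStandard s,
	UnityGIInput data, inout UnityGI gi)
{
	// Delegate GI (ambient, light probes, reflections) to the Standard model
	LightingStandard_GI(s, data, gi);
}

The naming convention Lighting<ModelName>_GI is what the surface shader compiler looks for, so the function name must match the model declared in the #pragma directive.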

❓ How come pbr is not clamped?

Back Lighting

Following the equations described in the first section of this tutorial, we can proceed to write the following code:

inline fixed4 LightingStandardTranslucent(SurfaceOutputStandard s, fixed3 viewDir, UnityGI gi)
{
	// Original colour
	fixed4 pbr = LightingStandard(s, viewDir, gi);
	
	// --- Translucency ---
	float3 L = gi.light.dir;
	float3 V = viewDir;
	float3 N = s.Normal;

	float3 H = normalize(L + N * _Distortion);
	float I = pow(saturate(dot(V, -H)), _Power) * _Scale;

	// Final add
	pbr.rgb = pbr.rgb + gi.light.color * I;
	return pbr;
}
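The code above assumes that _Distortion, _Power and _Scale are exposed as material properties and mirrored as shader variables. A minimal sketch of the declarations (the ranges and defaults here are only suggestions, to be tuned per material):

Properties
{
	_Distortion ("Distortion", Range(0,1)) = 0.5
	_Power ("Power", Range(0.1,10)) = 1
	_Scale ("Scale", Range(0,10)) = 1
}

// ...and, inside the CGPROGRAM block:
float _Distortion;
float _Power;
float _Scale;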

The code above is a direct translation of the equations from the first part of this post. The translucency effect that results is believable (below), but it is not related in any way to the thickness of the material. This makes it very hard to control.

Local Thickness

It is obvious that the amount of back light strongly depends on the density and thickness of the material. Ideally, we would need to know the distance light has travelled inside the material, and attenuate the backlight accordingly. You can see in the image below how three different light rays with the same incident angle travel very different lengths through the material.

From the point of view of a shader, however, we do not have access to either the local geometry or the history of the light rays. Unfortunately, there is no way of solving this problem locally. The best approach proposed is to rely on an external local thickness map. That is a texture, mapped onto our surface, which indicates how “thick” that part of the material is. The concept of “thickness” is used loosely, as real thickness actually depends on the angle the light is coming from.

The diagram above shows how there is no unique concept of “thickness” associated with the red point on the circle. The amount of material the light is travelling through indeed depends on the light angle L.

That being said, we have to remember that this entire approach to translucency is not about being physically accurate, but just realistic enough to fool the player’s eye. Below (credits), you can see a good local thickness map visualised on the model of a statue. White colours correspond to parts of the model where the translucent effect will be stronger, approximating the concept of thickness.

❓ How to generate the local thickness map?

The Final Version

We now know that we need to take into account the local thickness of the material. The easiest way is to provide a texture map that we can sample. While not physically accurate, it can produce believable results. Additionally, the local thickness is encoded in a way that allows artists to retain full control over the effect.

In this implementation, the local thickness is provided in the red channel of an additional texture, sampled in the surf function:

float thickness;

void surf (Input IN, inout SurfaceOutputStandard o)
{
	// Albedo comes from a texture tinted by color
	fixed4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
	o.Albedo = c.rgb;
	// Metallic and smoothness come from slider variables
	o.Metallic = _Metallic;
	o.Smoothness = _Glossiness;
	o.Alpha = c.a;

	thickness = tex2D (_LocalThickness, IN.uv_MainTex).r;
}
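As with the other parameters, the thickness map and the extra variables used by the final lighting function need to be declared in the shader. A minimal sketch, assuming the names used in this post:

sampler2D _LocalThickness; // thickness map, sampled from the red channel
float _Attenuation;        // overall attenuation of the backlight
float _Ambient;            // backlight component present at all times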

❓ How come the texture is not sampled in the lighting function?

Colin and Marc proposed a slightly different equation to calculate the final intensity of the backlight. This takes into account both the thickness and an additional attenuation parameter. They also allow for an additional ambient component that is present at all times:

inline fixed4 LightingStandardTranslucent(SurfaceOutputStandard s, fixed3 viewDir, UnityGI gi)
{
	// Original colour
	fixed4 pbr = LightingStandard(s, viewDir, gi);
	
	// --- Translucency ---
	float3 L = gi.light.dir;
	float3 V = viewDir;
	float3 N = s.Normal;

	float3 H = normalize(L + N * _Distortion);
	float VdotH = pow(saturate(dot(V, -H)), _Power) * _Scale;
	float3 I = _Attenuation * (VdotH + _Ambient) * thickness;

	// Final add
	pbr.rgb = pbr.rgb + gi.light.color * I;
	return pbr;
}

This is the final result:

Conclusion

This post concludes the series on fast subsurface scattering. The approach described in this tutorial is based on the solution presented at GDC 2011 by Colin Barré-Brisebois and Marc Bouchard in a talk called Approximating Translucency for a Fast, Cheap and Convincing Subsurface Scattering Look.

You can read the entire series here:

Become a Patron!
You can download all the necessary files to run this project (shader, textures, models, scenes) on Patreon.

Comments

21 responses to “Fast Subsurface Scattering in Unity (Part 2)”

  1. Do you know some kind of shader for face smoothness in Unity, to make a snapchat filter effect?

  2. Hi Alan,

    Do you know if there’s any way for the translucency to ignore shadows? I’m using this technique on skin and I find I lose all my beautiful SSS skin glow when I have tight shadows turned on.

    Thanks for this thoughtful tutorial! 😀

  3. TJ Holleran

    Hi Alan,
    Thanks for the awesome breakdown. After implementing locally, I tried making a semi-transparent version … intending to fade at rim. With alpha = 1 on a Unity Standard shader set to transparent … the lighting does not look to change relative to when set to opaque. But, attempting transparency with this translucent setup … #pragma surface surf StandardTranslucent fullforwardshadows alpha:fade … results in a consistent over brightening relative to the original opaque version. Is there something that fails to get passed to our LightingStandardTranslucent functions when we switch to transparent?

    Thanks in advance.

  4. Mike Monroe

    I find this tutorial was fairly complete and interesting, but then quickly rushed over the setup of things like the Attenuation, Scale, and Power parameters at the end leaving the reader to sort of guess how exactly these should be configured.

    1. Hi Mike!

      Thank you for your comment.

      The use of pow is simply to remap VdotH to a non-linear gradient. This makes the specular highlights look more focused and localised.

      Scale and Power are very often seen in other shading techniques, such as Blinn-Phong. As such, I did not want to spend too much time on that. I have covered these values in a few other tutorials, such as “A Gentle Introduction to Shaders”.

      I hope this helps!

  5. Thanks for creating this, really helps to understand the concept. However I hope you could point me in the right direction with this: when trying to implement the effect I get this error – ‘Surface shader lighting model ‘StandardTranslucent’ is missing a GI function’

    Thanks again for taking the time to create this

    1. Thank you!

      Have a look at “❓ How to extend a Standard Shader?”.
      It actually explains how to get rid of that error!

  6. Amazing tutorial, I am failing to grok the method for the local thickness. What do you mean exactly by flip the faces? Just inverting the normals to make all the faces to be back facing?

    1. Yes, that’s the idea.
      If you do so, the light will bounce “inside” the mesh.

  7. Andrew

    I saw an interesting technique for dynamic thickness maps a few years ago,

    Bind a separate low-resolution render target, which will represent a thickness map for your scene. You could use a free G-buffer channel if you’ve got one.

    Then, draw your translucent objects into the buffer with an “additive” blend mode. The color of each fragment is the normalized fragment depth, multiplied by 1 for the back-faces, and -1 for front-faces. (You could do this either with two passes, or by using the VFACE semantic).

    While this isn’t supported on all platforms, I’ve found that a good number of GPUs will actually allow negative colors in additive blends, effectively resulting in both addition and subtraction in a single pass! This will give you the depth of the back-faces, minus the depth of the front-faces, using only traditional drawing operations! I was mostly using it to render metaballs, but the same principle might be applicable for SSS!
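The technique described in this comment could be sketched roughly as follows (all structure and variable names here are hypothetical, and, as the commenter notes, negative colours in additive blends are not supported on every platform):

// In the thickness-map pass:
Cull Off        // back-faces must be rendered too
Blend One One   // additive: back-face depths add, front-face depths subtract

float4 frag (v2f i, fixed facing : VFACE) : SV_Target
{
	// i.depth01 is assumed to be the fragment depth, remapped to [0,1]
	// in the vertex shader. VFACE is positive for front-facing fragments.
	float signedDepth = (facing > 0) ? -i.depth01 : i.depth01;
	return float4(signedDepth, 0, 0, 0);
}

Summed over all faces, each texel of the render target then holds back-face depth minus front-face depth, an approximation of the object's thickness along the view ray.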

  8. Carlos

    Thank you SO much, this is literally one of the best shader tutorials I’ve ever come across in all my years studying game development.

    I love that you really seriously explain what the heck is going on with the math, and like for example why shaders can be difficult — what the limitations are, and how we get around it. By the end, it made the code you gave really make a lot more sense. The diagrams were super helpful too 🙂

    1. Thank you so much! <3

  9. “That being said, we have to remember that this entire approach to translucency is not about being physically accurate, but just realistic enough to fool the player’s eye.

    That being said, we have to remember that this entire approach to translucency is not about being physically accurate, but just realistic enough to fool the player’s eye. ”

    Paragraph is repeated.

    Otherwise, excellent tutorial, as usual. Your unity tutorials are probably the best I’ve found online. My lecturers could learn a thing or two from you…

    1. I’ve corrected it!
      And thank you! I’m a lecturer myself, so I always try to create content as if I had to teach it to my students!

  10. […] Fast Subsurface Scattering in Unity […]

  11. Wouldn’t it be a better approximation for thickness to draw somewhere else the maximum distance of the pixel for the model and then when drawing the actual fragment comparing its distance from the furthest one saved in advance ?

    Yeah, I know most models are complex but in general they are more convex than not, also if there are holes light doesnt travel that well since it would be scattered by the further piece of model, so my assumption is that it would be a good approx for the real thickness.

    1. Hey!
      The solution I described is designed to be cheap, not really realistic.
      There are many other approaches one can use to get the thickness of the material. A common one is to render the scene from the perspective of the light into a depth map, so you know how far each point was when it hit the light.

  12. Superb blog. You are very kind writing and explaining everything. Thank you again.

    1. You’re too kind, John!

  13. […] Part 2. Fast Subsurface Scattering in Unity […]

  14. […] Part 2. Fast Subsurface Scattering 🚧 […]
