in shader, tutorial, Unity3D

Fast Subsurface Scattering in Unity (Part 2)


This is the second part of the tutorial on Fast Subsurface Scattering in Unity. This post will show a working implementation of this effect.

This is a two-part series:

At the end of this post, you will find a link to download the Unity project.

Introduction

The previous part of this tutorial explained the mechanism that allows approximating the look of translucent materials. Traditional surfaces are shaded based on the light coming from a direction L. The shader we are going to write will add an additional component, -L, which de facto works as if the material was illuminated by an opposite light source. This makes it look as if light from L passed through the material.

There, we derived a view-dependent equation to model the reflectance of the back lighting:

    \[I_{back} = saturate\left(V \cdot -\left\langle L + N\delta \right\rangle\right)^{p} \cdot s\]

where:

  • L is the direction the light comes from (light direction),
  • V is the direction the camera is looking at the material (view direction),
  • N is the orientation of the surface at the point we are rendering (surface normal),
  • ⟨ ⟩ indicates the normalize operator.

There are additional parameters which can be used to control the final look of the material. δ, for example, changes the perceived direction of the backlight so that it is more aligned with the surface normal:

Finally, p and s (standing for power and scale) determine how the backlight spreads, and work in a similar way to the homonymous parameters in the Blinn-Phong reflectance.

What’s left now is to implement this in a shader.

Extending the Standard Shader

As discussed before, we want this effect to be as realistic as possible. Our best choice is to extend Unity’s Standard shader, which already provides very good results for non-translucent materials.

❓ How to extend a Standard Shader?
If you are unfamiliar with the procedure, the specific topic of adding functions to a Standard Shader has been covered extensively in this blog. Two good starting tutorials for this are 3D Printer Shader Effect and CD-ROM Shader: Diffraction Grating.

To sum it up, the basic idea is to create a new surface shader and replace its lighting function with a custom one. In there, we will invoke the original Standard lighting function, to get the material rendered with Unity’s PBR shader.

Once we have that, we can calculate the contribution of the backlighting and blend it with the original colour provided by the Standard lighting function.

You can find a good starting point here:
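A minimal sketch of such a starting point follows, assuming the custom lighting function is named StandardTranslucent (so the lighting function itself is called LightingStandardTranslucent, per Unity's naming convention). The include gives access to Unity's Standard (PBR) lighting functions:

```hlsl
// Sketch of a surface shader redirected to a custom lighting function.
// "StandardTranslucent" is a name of our choosing, not a built-in.
#include "UnityPBSLighting.cginc"
#pragma surface surf StandardTranslucent fullforwardshadows

inline half4 LightingStandardTranslucent(SurfaceOutputStandard s,
    fixed3 viewDir, UnityGI gi)
{
    // Original colour, as computed by Unity's Standard (PBR) lighting
    half4 pbr = LightingStandard(s, viewDir, gi);
    // ... the translucency contribution will be added here ...
    return pbr;
}

inline void LightingStandardTranslucent_GI(SurfaceOutputStandard s,
    UnityGIInput data, inout UnityGI gi)
{
    // Defer global illumination to the Standard shader
    LightingStandard_GI(s, data, gi);
}
```

The _GI variant is required by Unity for surface shaders with custom lighting; here it simply forwards to the Standard implementation.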

Let’s call the new lighting function to be used for this effect StandardTranslucent. The backlight will have the same colour as the original light; what we can control is its intensity, I:
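The blending step can be sketched as follows; how the intensity I is actually computed is covered in the next section:

```hlsl
inline half4 LightingStandardTranslucent(SurfaceOutputStandard s,
    fixed3 viewDir, UnityGI gi)
{
    // Original colour, from Unity's Standard (PBR) lighting
    half4 pbr = LightingStandard(s, viewDir, gi);

    // Intensity of the back lighting (derived in the next section)
    float I = 0;

    // The backlight has the same colour as the light source
    pbr.rgb = pbr.rgb + gi.light.color * I;
    return pbr;
}
```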

❓ How come pbr is not clamped?

When adding two colours together, one should be careful not to go beyond 1. This is usually done with the saturate function, which simply clamps each colour component between 0 and 1.

If the camera you are using is set to support HDR (high-dynamic range), then values above 1 are used for post processing effects such as bloom. In this particular shader, we do not saturate the final colour, since a bloom filter will be applied to the final rendering.

Back Lighting

Following the equations described in the first section of this tutorial, we can proceed to write the following code:
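A sketch of that back-lighting term is shown below. The vectors L, V and N come from the lighting function's inputs; _Distortion, _Power and _Scale are assumed material properties corresponding to δ, p and s in the equation:

```hlsl
// Inside LightingStandardTranslucent, after computing pbr:
float3 L = gi.light.dir;   // light direction
float3 V = viewDir;        // view direction
float3 N = s.Normal;       // surface normal

// H = <L + N * delta>: the distorted backlight direction
float3 H = normalize(L + N * _Distortion);

// I = saturate(V . -H)^p * s
float I = pow(saturate(dot(V, -H)), _Power) * _Scale;

pbr.rgb = pbr.rgb + gi.light.color * I;
```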

The code above is a direct translation of the equations from the first part of this post. The resulting translucency effect is believable (below), but is not related in any way to the thickness of the material. This makes it very hard to control.

Local Thickness

It is obvious that the amount of back light strongly depends on the density and thickness of the material. Ideally, we would need to know the distance light travelled inside the material, and attenuate it accordingly. You can see in the image below how three different light rays with the same incident angle travel very different lengths through the material.

From the point of view of a shader, however, we do not have access to either the local geometry or the history of the light rays. Unfortunately, there is no way of solving this problem locally. The best approach proposed is to rely on an external local thickness map. That is a texture, mapped onto our surface, which indicates how “thick” that part of the material is. The concept of “thickness” is used loosely, as real thickness actually depends on the angle the light is coming from.

The diagram above shows how there is no unique concept of “thickness” associated with the red point on the circle. The amount of material the light is travelling through indeed depends on the light angle L. That being said, we have to remember that this entire approach to translucency is not about being physically accurate, but just realistic enough to fool the player’s eye.

Below (credits), you can see a good local thickness map visualised on the model of a statue. White colours correspond to parts of the model where the translucent effect will be stronger, approximating the concept of thickness.

❓ How to generate the local thickness map?
The author of this technique proposed an interesting way to automatically create a local thickness map from any model. Those are the steps:

  1. Flip the faces of the model
  2. Render Ambient Occlusion to a texture
  3. Invert the colour of the texture

The rationale behind this procedure is that, by rendering ambient occlusion on the back faces, one roughly “averages all light transport happening inside the shape”.

Instead of a texture, the thickness can also be stored directly in the vertices.

The Final Version

We now know that we need to take into account the local thickness of the material. The easiest way is to provide a texture map that we can sample. While not physically accurate, this can produce believable results. Additionally, encoding the local thickness in a texture allows artists to retain full control over the effect.

In this implementation, the local thickness is provided in the red channel of an additional texture, sampled in the surf function:
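A sketch of that surf function follows. _LocalThickness is an assumed name for the thickness texture property; _MainTex, _Color, _Metallic and _Glossiness are the usual properties from Unity's Standard surface shader template:

```hlsl
sampler2D _LocalThickness;

// Shared with the lighting function, which runs after surf
float thickness;

void surf(Input IN, inout SurfaceOutputStandard o)
{
    fixed4 c = tex2D(_MainTex, IN.uv_MainTex) * _Color;
    o.Albedo = c.rgb;

    // Local thickness, stored in the red channel
    thickness = tex2D(_LocalThickness, IN.uv_MainTex).r;

    o.Metallic = _Metallic;
    o.Smoothness = _Glossiness;
    o.Alpha = c.a;
}
```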

❓ How come the texture is not sampled in the lighting function?

I have chosen to store this value in a variable called thickness, which is later accessed by the lighting function. As a personal preference, I tend to do this every time I have to sample a texture that is later needed by a lighting function.

If you prefer, you can sample the texture directly in the lighting function. In this case, you need to pass in the UV coordinates (possibly by extending SurfaceOutputStandard) and to use tex2Dlod instead of tex2D. That function takes a float4 rather than a float2; for this specific application, you can set the two additional coordinates to zero:
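For example (assuming the UVs have been made available to the lighting function as uv):

```hlsl
// xy: UV coordinates; z is unused, w selects the mip level (0 here)
float thickness = tex2Dlod(_LocalThickness, float4(uv, 0, 0)).r;
```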

Colin and Marc proposed a slightly different equation to calculate the final intensity of the backlight. This takes into account both the thickness and an additional attenuation parameter. They also allow for an additional ambient component that is present at all times:
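That final version can be sketched as below; _Attenuation and _Ambient are assumed material properties, and thickness is the value sampled in surf:

```hlsl
// Final back-lighting intensity, following Barré-Brisebois and Bouchard
float3 H = normalize(L + N * _Distortion);
float VdotH = pow(saturate(dot(V, -H)), _Power) * _Scale;
float3 I = _Attenuation * (VdotH + _Ambient) * thickness;

pbr.rgb = pbr.rgb + gi.light.color * I;
```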

This is the final result:

Conclusion

This post concludes the series on fast subsurface scattering. The approach described in this tutorial is based on the solution presented at GDC 2011 by Colin Barré-Brisebois and Marc Bouchard in a talk called Approximating Translucency for a Fast, Cheap and Convincing Subsurface Scattering Look.

You can read the entire series here:

You can download all the necessary files to run this project (shader, textures, models, scenes) on Patreon.

📧 Stay updated

A new tutorial is released every week.

💖 Support this blog

This website exists thanks to the contribution of patrons on Patreon. If you think these posts have either helped or inspired you, please consider supporting this blog.



  1. Wouldn’t it be a better approximation for thickness to draw, somewhere else, the maximum distance of each pixel for the model, and then, when drawing the actual fragment, compare its distance with the furthest one saved in advance?

    Yeah, I know most models are complex, but in general they are more convex than not. Also, if there are holes, light doesn’t travel that well, since it would be scattered by the further piece of the model, so my assumption is that it would be a good approximation of the real thickness.

    • Hey!
      The solution I described is designed to be cheap, not really realistic.
      There are many other approaches one can use to get the thickness of the material. A common one is to render the scene from the perspective of the light into a depth map, so you know how far each point is from the light.

Webmentions

  • Fast Subsurface Scattering in Unity (Part 1) - Alan Zucconi September 13, 2017

  • Tutorial Series - Alan Zucconi September 13, 2017