in shader, tutorial, Unity3D

Fast Subsurface Scattering in Unity (Part 2)


This is the second part of the tutorial on Fast Subsurface Scattering in Unity. This post will show a working implementation of this effect.

This is a two part series:

At the end of this post, you will find a link to download the Unity project.

Introduction

The previous part of this tutorial explained the mechanism that allows approximating the look of translucent materials. Traditional surfaces are shaded based on the light coming from a direction L. The shader we are going to write will add an additional component, -L, which de facto works as if the material were illuminated by an opposite light source. This makes it look as if light from L had passed through the material.

Finally, we have derived a view-dependent equation to model the reflectance of the back lighting:

    \[I_{back} = \text{saturate}\left(V \cdot -\left\langle L + N\delta \right\rangle \right)^{p} \cdot s\]

where:

  • L is the direction the light comes from (light direction),
  • V is the direction the camera is looking at the material (view direction),
  • N is the orientation of the surface at the point we have to render (surface normal).

There are additional parameters which can be used to control the final look of the material. \delta, for example, changes the perceived direction of the backlight so that it is more aligned with the surface normal:

Finally, p and s (standing for power and scale) determine how the backlight spreads, and work in a similar way to the identically named parameters in the Blinn-Phong reflectance.
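Before moving to the shader, the equation can be sketched numerically. The following is a minimal Python/NumPy sketch, not code from the original project; function and parameter names are my own:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def back_lighting(L, V, N, delta=0.5, power=2.0, scale=1.0):
    # Bend the back light direction towards the surface normal by delta
    H = normalize(L + N * delta)
    # The back light is strongest when the camera looks straight into -H
    VdotH = np.clip(np.dot(V, -H), 0.0, 1.0)  # saturate
    return (VdotH ** power) * scale

# Camera looking straight at a surface lit from directly behind it
L = np.array([0.0, 0.0, 1.0])    # light direction
V = np.array([0.0, 0.0, -1.0])   # view direction
N = np.array([0.0, 0.0, -1.0])   # surface normal, facing the camera
print(back_lighting(L, V, N))    # prints 1.0: maximum back light
```

With the light directly behind the surface and the camera facing it, the intensity peaks at 1; lighting the surface from the front instead drives it to 0.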

What’s left now is to implement this in a shader.

Extending the Standard Shader

As discussed before, we want this effect to be as realistic as possible. Our best choice is to extend Unity’s Standard shader, which already provides very good results for non-translucent materials.

❓ How to extend a Standard Shader?
If you are unfamiliar with the procedure, the specific topic of adding functions to a Standard Shader has been covered extensively in this blog. Two good starting tutorials for this are 3D Printer Shader Effect and CD-ROM Shader: Diffraction Grating.

To sum it up, the basic idea is to create a new surface shader and replace its lighting function with a custom one. In there, we will invoke the original Standard lighting function, to get the material rendered with Unity’s PBR shader.

Once we have that, we can calculate the contribution of the backlighting and blend it with the original colour provided by the Standard lighting function.

You can find a good starting point here:
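A minimal skeleton of such a shader might look like this (a sketch, following Unity's surface shader conventions; note that every custom lighting model also needs a matching GI function, which here simply defers to the Standard one):

```hlsl
// Skeleton of a surface shader with a custom lighting function that
// wraps Unity's Standard (PBR) lighting.
#pragma surface surf StandardTranslucent fullforwardshadows
#pragma target 3.0

#include "UnityPBSLighting.cginc"

inline fixed4 LightingStandardTranslucent(SurfaceOutputStandard s,
    fixed3 viewDir, UnityGI gi)
{
    // Original colour, as computed by Unity's Standard lighting
    fixed4 pbr = LightingStandard(s, viewDir, gi);
    return pbr;
}

// Required companion GI function; we simply defer to the Standard one
inline void LightingStandardTranslucent_GI(SurfaceOutputStandard s,
    UnityGIInput data, inout UnityGI gi)
{
    LightingStandard_GI(s, data, gi);
}
```

Omitting the `_GI` function causes a "missing a GI function" compile error, so even this bare skeleton needs it.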

Let’s call the new lighting function to be used for this effect StandardTranslucent. The backlight will have the same colour as the original light. What we can control is its intensity, I:
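As a sketch, the blend inside the custom lighting function could look like this (`I` is a placeholder here; the following sections derive its actual value):

```hlsl
// Sketch: adding the back light on top of Unity's PBR result.
inline fixed4 LightingStandardTranslucent(SurfaceOutputStandard s,
    fixed3 viewDir, UnityGI gi)
{
    // Original colour, from Unity's Standard (PBR) lighting
    fixed4 pbr = LightingStandard(s, viewDir, gi);

    float I = 0;  // intensity of the back light, derived below

    // The back light has the same colour as the main light.
    // pbr is deliberately not saturated: see the note on HDR below.
    pbr.rgb = pbr.rgb + gi.light.color * I;
    return pbr;
}
```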

❓ How come pbr is not clamped?

When adding two colours together, one should be careful not to go beyond 1. This is usually done with the saturate function, which simply clamps each colour component between 0 and 1.

If the camera you are using is set to support HDR (high-dynamic range), then values above 1 are used for post processing effects such as bloom. In this particular shader, we do not saturate the final colour, since a bloom filter will be applied to the final rendering.

Back Lighting

Following the equations described in the first section of this tutorial, we can proceed to write the following code:
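A sketch of that translation (property names such as `_Distortion`, `_Power` and `_Scale` are illustrative, standing for \delta, p and s):

```hlsl
// Direct translation of the back lighting equation from Part 1.
inline fixed4 LightingStandardTranslucent(SurfaceOutputStandard s,
    fixed3 viewDir, UnityGI gi)
{
    fixed4 pbr = LightingStandard(s, viewDir, gi);

    float3 L = gi.light.dir;  // light direction
    float3 V = viewDir;       // view direction
    float3 N = s.Normal;      // surface normal

    // Shift the back light direction towards the normal by _Distortion
    float3 H = normalize(L + N * _Distortion);
    // View-dependent intensity of the back light
    float I = pow(saturate(dot(V, -H)), _Power) * _Scale;

    pbr.rgb = pbr.rgb + gi.light.color * I;
    return pbr;
}
```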

The code above is a direct translation of the equations from the first part of this post. The resulting translucency effect is believable (below), but it is not related in any way to the thickness of the material. This makes it very hard to control.

Local Thickness

It is obvious that the amount of back light strongly depends on the density and thickness of the material. Ideally, we would need to know the distance light has travelled inside the material, and attenuate it accordingly. You can see in the image below how three different light rays with the same incident angle travel very different lengths through the material.

From the point of view of a shader, however, we do not have access to either the local geometry or the history of the light rays. Unfortunately, there is no way of solving this problem locally. The best approach proposed is to rely on an external local thickness map: a texture, mapped onto our surface, which indicates how “thick” that part of the material is. The concept of “thickness” is used loosely here, as the real thickness actually depends on the angle the light is coming from.

The diagram above shows how there is no unique concept of “thickness” associated with the red point on the circle. The amount of material the light is travelling through indeed depends on the light angle L.

That being said, we have to remember that this entire approach to translucency is not about being physically accurate, but just realistic enough to fool the player’s eye. Below (credits), you can see a good local thickness map visualised on the model of a statue. White colours correspond to parts of the model where the translucent effect will be stronger, approximating the concept of thickness.

❓ How to generate the local thickness map?
The author of this technique proposed an interesting way to automatically create a local thickness map from any model. Those are the steps:

  1. Flip the faces of the model
  2. Render Ambient Occlusion to a texture
  3. Invert the colour of the texture

The rationale behind this procedure is that by rendering ambient occlusion on the back faces, one roughly “averages all light transport happening inside the shape“.

Instead of a texture, the thickness can also be stored directly in the vertices.

The Final Version

We now know that we need to take into account the local thickness of the material. The easiest way is to provide a texture map that we can sample. While not physically accurate, this can produce believable results. Additionally, encoding the local thickness in a texture allows artists to retain full control over the effect.

In this implementation, the local thickness is provided in the red channel of an additional texture, sampled in the surf function:
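A sketch of that surf function (texture and property names are illustrative; the key point is that the sampled value is stored in a shader-scope variable for the lighting function to read):

```hlsl
sampler2D _MainTex;
sampler2D _LocalThickness;  // thickness stored in the red channel
fixed4 _Color;
half _Glossiness;
half _Metallic;

// Sampled in surf, read later by the lighting function
float thickness;

void surf(Input IN, inout SurfaceOutputStandard o)
{
    fixed4 c = tex2D(_MainTex, IN.uv_MainTex) * _Color;
    o.Albedo = c.rgb;
    o.Metallic = _Metallic;
    o.Smoothness = _Glossiness;
    o.Alpha = c.a;

    // Local thickness, used later to attenuate the back light
    thickness = tex2D(_LocalThickness, IN.uv_MainTex).r;
}
```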

❓ How come the texture is not sampled in the lighting function?

I have chosen to store this value in a variable called thickness, which is later accessed by the lighting function. As a personal preference, I tend to do this every time I have to sample a texture that is later needed by a lighting function.

If you prefer, you can sample the texture directly in the lighting function. In this case, you need to pass the UV coordinates (possibly extending SurfaceOutputStandard) and to use tex2Dlod instead of tex2D. The function takes two additional coordinates; for this specific application, you can set both of them to zero:
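For reference, such a sample could look like this (assuming `uv` has been passed into the lighting function):

```hlsl
// Lighting functions cannot use tex2D, since screen-space derivatives
// are unavailable there; tex2Dlod takes a float4, and the last two
// components can be left at zero for this application.
float thickness = tex2Dlod(_LocalThickness, float4(uv, 0, 0)).r;
```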

Colin and Mark proposed a slightly different equation to calculate the final intensity of the backlight. This takes into account both the thickness and an additional attenuation parameter. They also allow for an additional ambient component that is present at all times:
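A sketch of their formulation, replacing the intensity calculation inside the lighting function (`_Attenuation` and `_Ambient` are additional material properties; names are illustrative):

```hlsl
// Final back light intensity, following Colin and Mark's formulation:
// attenuated, offset by a constant ambient term, and scaled by the
// local thickness sampled in surf.
float3 H = normalize(L + N * _Distortion);
float VdotH = pow(saturate(dot(V, -H)), _Power) * _Scale;
float I = _Attenuation * (VdotH + _Ambient) * thickness;

pbr.rgb = pbr.rgb + gi.light.color * I;
```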

This is the final result:

Conclusion

This post concludes the series on fast subsurface scattering. The approach described in this tutorial is based on the solution presented at GDC 2011 by Colin Barré-Brisebois and Marc Bouchard in a talk called Approximating Translucency for a Fast, Cheap and Convincing Subsurface Scattering Look.

You can read the entire series here:

You can download all the necessary files to run this project (shader, textures, models, scenes) on Patreon.



15 Comments

  1. Wouldn’t it be a better approximation for thickness to draw somewhere else the maximum distance of the pixel for the model and then when drawing the actual fragment comparing its distance from the furthest one saved in advance ?

    Yeah, I know most models are complex but in general they are more convex than not; also, if there are holes, light doesn’t travel that well since it would be scattered by the further piece of the model, so my assumption is that it would be a good approximation for the real thickness.

    • Hey!
      The solution I described is designed to be cheap, not really realistic.
      There are many other approaches one can use to get the thickness of the material. A common one is to render the scene from the perspective of the light into a depth map, so you know how far each point was when it hit the light.

  2. “That being said, we have to remember that this entire approach to translucency is not about being physically accurate, but just realistic enough to fool the player’s eye.

    That being said, we have to remember that this entire approach to translucency is not about being physically accurate, but just realistic enough to fool the player’s eye. ”

    Paragraph is repeated.

    Otherwise, excellent tutorial, as usual. Your unity tutorials are probably the best I’ve found online. My lecturers could learn a thing or two from you…

  3. Thank you SO much, this is literally one of the best shader tutorials I’ve ever come across in all my years studying game development.

    I love that you really seriously explain what the heck is going on with the math, and like for example why shaders can be difficult — what the limitations are, and how we get around it. By the end, it made the code you gave really make a lot more sense. The diagrams were super helpful too 🙂

  4. I saw an interesting technique for dynamic thickness maps a few years ago,

    Bind a separate low-resolution render target, which will represent a thickness map for your scene. You could use a free G-buffer channel if you’ve got one.

    Then, draw your translucent objects into the buffer with an “additive” blend mode. The color of each fragment is the normalized fragment depth, multiplied by 1 for the back-faces, and -1 for front-faces. (You could do this either with two passes, or by using the VFACE semantic).

    While this isn’t supported on all platforms, I’ve found that a good number of GPUs will actually allow negative colors in additive blends, effectively resulting in both addition and subtraction in a single pass! This will give you the depth of the back-faces, minus the depth of the front-faces, using only traditional drawing operations! I was mostly using it to render metaballs, but the same principle might be applicable for SSS!

  5. Amazing tutorial, I am failing to grok the method for the local thickness. What do you mean exactly by flip the faces? Just inverting the normals to make all the faces to be back facing?

  6. Thanks for creating this, really helps to understand the concept. However I hope you could point me in the right direction with this: when trying to implement the effect I get this error – ‘Surface shader lighting model ‘StandardTranslucent’ is missing a GI function’

    Thanks again for taking the time to create this

  7. I find this tutorial was fairly complete and interesting, but then quickly rushed over the setup of things like the Attenuation, Scale, and Power parameters at the end leaving the reader to sort of guess how exactly these should be configured.

    • Hi Mike!

      Thank you for your comment.

      The use of pow is simply to remap VdotH to a non-linear gradient. This makes the specular highlights look more focused and localised.

      Scale and Power are very often seen in other shading techniques, such as Blinn-Phong. As such, I did not want to spend too much time on that. I have covered these values in a few other tutorials, such as “A Gentle Introduction to Shaders”.

      I hope this helps!

Webmentions

  • Learning Shaders - Alan Zucconi November 29, 2018


  • Fast Subsurface Scattering in Unity (Part 1) - Alan Zucconi November 29, 2018


  • Tutorial Series - Alan Zucconi November 29, 2018
