
Fast Subsurface Scattering in Unity (Part 1)

Most (if not all) optical phenomena that materials exhibit can be replicated by simulating how individual rays of light propagate and interact. This approach, referred to in the scientific literature as ray tracing, is often too computationally expensive for real-time applications. Most modern engines rely on massive simplifications that, despite being unable to reproduce photorealism, can produce a believable approximation. This tutorial introduces a fast, cheap and convincing solution that can be used to simulate translucent materials which exhibit subsurface scattering.

This is a two-part series:

Part 1. Fast Subsurface Scattering in Unity
Part 2. Fast Subsurface Scattering in Unity

At the end of this post, you will find a link to download the Unity project.

Introduction

The Standard material in Unity comes with a Transparent rendering mode, which allows rendering transparent materials. Transparency, in this context, is implemented with alpha blending: a transparent object is rendered on top of the existing geometry, partially showing what is behind it. While this works for many materials, transparency is a special case of a more general property called translucency (sometimes also referred to as translucidity). While transparent materials only affect the amount of light they let through (below, left), translucent ones can also alter its path (below, right).
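
For reference, this is roughly what alpha blending computes; a minimal sketch, where src and dst stand for the incoming fragment and the colour already on screen:

    // Alpha blending: the new fragment ("src") is mixed with what is
    // already on screen ("dst") according to the fragment's alpha value.
    // In ShaderLab, this blend mode is enabled with:
    //     Blend SrcAlpha OneMinusSrcAlpha
    float4 blended = src * src.a + dst * (1.0 - src.a);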

The result of this behaviour should be clear: translucent materials diffuse the light rays they let through, blurring what is behind them. Such a behaviour is rarely seen in games, since it is significantly more complex to implement. Transparent materials can be implemented naively with alpha blending, without any ray tracing. Translucent materials, on the other hand, require simulating the deviation of the light rays. Such a computation is very expensive, and it is rarely worth it in real-time rendering.

This often prevents games from achieving other optical phenomena, such as subsurface scattering. When light hits the surface of a translucent material, part of it propagates inside, bouncing between the molecules until it finds its way out. As a result, light absorbed at one point is often reemitted somewhere else. Subsurface scattering results in a diffuse glow that can be seen in materials such as skin, marble and milk.

Real Time Translucency

There are two main obstacles that make translucency so expensive. The first one is that it requires simulating the scattering of light rays inside a material. Each ray can split into multiple ones, reflecting hundreds or even thousands of times inside the material. The second obstacle is that light received at one point is reemitted somewhere else. While this might seem a minor issue, it is, in reality, a big deal.

To understand why, we first need to look at how most shaders work. In the realm of real-time rendering, GPUs expect a shader to calculate the final colour of a material using only local properties. For each vertex, shaders are designed to efficiently access only the properties that are local to that vertex. Reading the normal direction and albedo of a vertex is easy; retrieving the ones of its neighbours is not. Most real-time solutions must work around these constraints and find a way to fake the propagation of light within a material without relying on non-local information.

The approach described in this tutorial is based on the solution presented at GDC 2011 by Colin Barré-Brisebois and Marc Bouchard in a talk called Approximating Translucency for a Fast, Cheap and Convincing Subsurface Scattering Look. Their solution is integrated into the Frostbite 2 engine, which was used for DICE's Battlefield 3. While not physically accurate, the approach presented by Colin and Marc produces very believable results at a very small cost.

The idea behind their solution is very simple. In opaque materials, the light contribution comes directly from the light source. Vertices that are inclined more than 90 degrees with respect to the direction of the light, L, receive no light (bottom, left). According to the model proposed in the presentation, translucent materials have an additional light contribution, which is related to -L. Geometrically, -L can be seen as if some of the light actually passed through the material and made it to the other side (bottom, right).

Each light now accounts for two distinct reflectance contributions: the front and the back illumination. Since we want our materials to be as realistic as possible, we will use Unity's Standard PBR lighting model for the front illumination. What we need is a way to describe the contribution from -L, and to render it in a way that somehow simulates the diffusion process that might have occurred inside the material.
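
Conceptually, the shader we will build in the next part combines these two terms as in the sketch below; the function names are placeholders, not Unity's actual API:

    // Hypothetical structure of the final lighting function:
    // the standard front lighting plus the back translucency term.
    float4 LightingTranslucent(float4 albedo, float3 L, float3 V, float3 N)
    {
        float4 front = StandardLighting(albedo, L, V, N);  // Unity's Standard PBR
        float4 back  = BackTranslucency(albedo, L, V, N);  // contribution from -L
        return front + back;
    }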


Back Translucency

As discussed before, the final colour of our pixels is the sum of two components. The first one is the "traditional" lighting. The second one is the light contribution from a virtual light source illuminating the back of our model. This gives the impression that light from the original source actually passed through the material.

To understand how to model this mathematically, let's picture the following two scenarios (diagrams below). We are currently drawing the red point; since it is on the "dark" side of the material, it should be illuminated by -L. From the perspective of an external viewer, let's analyse the two extreme cases. We can see that V_B is perfectly aligned with -L, meaning that viewer B should see the back translucency at its fullest. On the other hand, viewer A should see the least amount of backlight, as V_A is perpendicular to -L.

If you are not new to shader coding, this kind of reasoning should sound familiar. We have encountered something similar in the tutorial on Physically Based Rendering and Lighting Models in Unity 5, where we showed how such a behaviour can be obtained using a mathematical operator called the dot product.

As a first approximation, we can say that the amount of back lighting due to translucency, I_{back}, is proportional to V \cdot -L. In a traditional diffuse shader, this would be N \cdot L. Notice that we have not included the surface normal in the calculation: light is simply coming out of the material, not reflecting off it.
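
In Cg, this first approximation is a single dot product; a minimal sketch, assuming lightDir and viewDir hold the normalised directions L and V:

    // Back lighting, first approximation: strongest when the viewer
    // is looking straight into the virtual back light -L.
    float backIntensity = saturate(dot(viewDir, -lightDir));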

Subsurface Distortion

However, the surface normal should have some influence, even if minor, on the angle at which the light leaves the material. The authors of this technique introduced a parameter called subsurface distortion, \delta, which forces the vector -L to point towards N. Physically speaking, the subsurface distortion controls how strongly the surface normal deflects the outgoing back light. Following the proposed solution, the intensity of the back translucency component becomes:

    \[I_{back}=V \cdot -\left\langle L+N \delta \right\rangle\]

Where \left \langle X \right\rangle = \frac{X}{\left\|X\right\|} is the unit vector pointing in the same direction as X. If you are familiar with Cg/HLSL, that is the normalize function.

When \delta=0, we return to the V \cdot -L derived in the previous paragraph. When \delta=1, however, we are calculating the dot product between the view direction and -\left\langle L+N \right\rangle. If you are familiar with the Blinn-Phong reflectance, you should recognise that \left\langle L+N \right\rangle is the vector "in between" L and N. For this reason, we will call it the halfway direction, H.

The diagram above shows all the directions used so far. H is indicated in purple, and you can see that it rests in between L and N. Geometrically speaking, varying \delta from 0 to 1 causes a shift in the perceived direction of the light L. The lightly shaded area shows the range of directions the backlight can come from. In the image below, you can see that with \delta=0 the object seems to be illuminated from the direction of -L; as \delta moves towards 1, the perceived direction of the light source shifts towards the purple one, -H.

The purpose of \delta is to simulate the tendency of certain translucent materials to diffuse the backlight with different intensities. Higher values of \delta will cause the back light to scatter more.
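
Translated into Cg, the distorted back light could look like this; still a sketch, where _Distortion is an assumed material property corresponding to \delta:

    // Bend -L towards the surface normal N by the subsurface distortion.
    // With _Distortion = 0, this reduces to the previous approximation.
    float3 H = normalize(lightDir + normal * _Distortion);
    float backIntensity = saturate(dot(viewDir, -H));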

❓ Is this H the same H used in the Blinn-Phong Reflectance?
No. The Blinn-Phong reflectance defines H as \left\langle L+V \right\rangle. Here, we are using the same letter H to indicate \left\langle L+N \right\rangle.
❓ Is \delta really interpolating between L and L+N?
Yes. Values of \delta from 0 to 1 linearly interpolate between L and L+N. This can be seen by unfolding the traditional definition of linear interpolation from L to L+N based on \delta:

    \[L \left(1-\delta\right) + \left(L+N\right) \delta=\]

    \[=L -\boxed{L \delta} + \boxed{L\delta} + N \delta =\]

    \[=L + N \delta\]

❓ How come the authors did not normalise L+N?
Geometrically speaking, the quantity L+N does not have unit length; hence, it needs to be normalised. In their final solution, however, the authors do not perform this normalisation step.

Ultimately, this entire effect is intended to be neither photo-realistic nor physically based. During their presentation, the authors made it very clear that the technique is intended as a fast approximation of translucency and subsurface scattering. Normalising does not change the results too much, but it introduces an additional computational cost.

Back Light Diffusion

At this point in the tutorial, we already have an equation that we can use to simulate translucent materials. The quantity I_{back}, however, cannot be used directly to calculate the final light contribution.

There are two main approaches that can be used. The first one relies on a texture. If you want full artistic control over the way light diffuses in the material, you should clamp I_{back} between 0 and 1 and use it to sample the final intensity of the back light from a ramp texture. Different ramp textures will simulate the light transport within different materials. We will see in the next part of this tutorial how this can be used to change the result of this shader dramatically.
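
A minimal sketch of the texture-based approach, assuming a hypothetical _RampTex material property:

    // Use the clamped back intensity to sample a 1D ramp texture.
    // The ramp encodes how this material diffuses the back light.
    float u = saturate(backIntensity);
    fixed3 backColour = tex2D(_RampTex, float2(u, 0.5)).rgb;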

The approach used by the authors of this technique, however, does not rely on a texture. It shapes the curve using Cg code only:

    \[I_{back} = saturate\left(V \cdot -\left\langle L+N\delta \right\rangle\right)^{p} \cdot s\]

The two new parameters, p (power) and s (scale), are used to change the properties of the curve.
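
Putting everything together, the back translucency term could be written in Cg as below; _Distortion, _Power and _Scale are assumed material properties corresponding to \delta, p and s:

    // Complete back translucency term, following the formula above.
    // Note: the original authors skip the normalisation step for speed.
    float3 H = normalize(lightDir + normal * _Distortion);
    float backIntensity = pow(saturate(dot(viewDir, -H)), _Power) * _Scale;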

Conclusion

This post explains the technical challenges in rendering translucent materials. An approximate solution is introduced, following the approach presented in Approximating Translucency for a Fast, Cheap and Convincing Subsurface Scattering Look. The next part of this tutorial will focus on how to actually implement this effect in a shader in Unity.

If you are interested in more sophisticated approaches to simulate subsurface scattering for real time applications, GPU Gems provides one of the best tutorials you can find.

Become a Patron!
You can download all the necessary files to run this project (shader, textures, models, scenes) on Patreon.

💖 Support this blog

This website exists thanks to the contribution of patrons on Patreon. If you think these posts have either helped or inspired you, please consider supporting this blog.


📝 Licensing

You are free to use, adapt and build upon this tutorial for your own projects (even commercially) as long as you credit me.

You are not allowed to redistribute the content of this tutorial on other platforms, especially the parts that are only available on Patreon.

If the knowledge you have gained has had a significant impact on your project, a mention in the credits would be very much appreciated. ❤️🧔🏻

Comments

  1. Hi,
    I am confused about the subsurface distortion. You say that it pulls -L toward the normal vector, but it looks to me that -H is actually being driven away. If N, -L, and V were all aligned, I would expect to have full light, but H would be perpendicular to V, resulting in no light.
    Thanks in advance for the clarification and keep up the good work!

    • Hey Richard,
      this comment is pretty old, but I tripped over the same issue. My first idea was to put the minus sign inside the brackets -> normalize(-L + N * _Distortion). Then I thought it might be more realistic to drive the light vector not towards the normal but towards the view direction, so I tried normalize(-L + V * _Distortion), which works great for me.
