# Volumetric Rendering

This is the first part of a Unity tutorial dedicated to Volumetric Rendering, raymarching and signed distance fields. These techniques allow us to overcome the biggest limitation of modern 3D engines, which only let us render the outer shell of an object. Volumetric rendering enables the creation of realistic materials that interact with light in a complex way, such as fog, smoke, water and glass. Beautifully crafted effects such as NMZ's Plasma Globe (below) would simply be impossible without volumetric rendering.

These techniques are not complicated, but they require many steps to replicate the aforementioned effects. This tutorial has got you covered.

• Part 1: Volumetric Rendering | An introduction on what rendering volume means, and how it can be done in Unity;
• Part 2: Raymarching | Focuses on the implementation of distance-aided raymarching, the de facto standard technique to render volumes;
• Part 3: Surface Shading | A comprehensive guide on how to shade volumes realistically;
• Part 4: Signed Distance Functions | An in depth discussion on the mathematical tools that allow us to generate and combine arbitrary volumes;
• Part 5: Ambient Occlusion | How to implement realistic and efficient ambient occlusion in your volumes;

This first part will provide a general introduction to volumetric rendering, and end with a simple shader that will be the base of all our future iterations:

A full Unity package will be available soon. You may want to consider subscribing to the mailing list to stay updated.

### Introduction

Spheres, cubes and all the other complex geometries are made out of triangles in 3D game engines, which are flat by definition. The real-time lighting system adopted by Unity is only capable of rendering flat surfaces. When you are rendering a sphere, for instance, Unity only draws the triangles that make up its surface. Even for materials that are semi-transparent, only the outer shell is actually drawn, and its colour is blended with the one of the object behind it. There is no attempt in the lighting system to probe into a material's volume. For the GPU, the world is made out of empty shells.

A broad range of techniques exists to overcome this strong limitation. Even though it is true that a traditional shader ultimately stops at the very surface of an object, it doesn't mean we can't go deeper. Volume rendering techniques simulate the propagation of light rays into a material's volume, allowing for stunning and sophisticated visual effects.

### Volumetric Rendering

The fragment shader of an unlit textured object looks like this:

```
fixed4 frag (v2f i) : SV_Target
{
    fixed4 col = tex2D(_MainTex, i.texcoord);
    return col;
}
```

Loosely speaking, that piece of code is invoked for every potential pixel (fragment) in the final rendered image. When the GPU invokes that fragment shader, it is because there is a triangle that is intersecting the camera frustum. In other words, the camera sees the object. Unity needs to know the exact colour of the object, so that it can assign it to the respective pixel in the rendered image.

Fragment shaders ultimately return the colour of an object at a specific location, as seen from a specific angle. The way this colour is calculated is entirely arbitrary. Nothing forbids us from "cheating" and returning something that does not necessarily match the actual geometry we are rendering. The following diagram shows an example of this when rendering a 3D cube. When the fragment shader is queried to get the colour of the cube's face, we return the same colours we would see on a sphere. The geometry is a cube, but from the camera's perspective it looks and feels exactly like a sphere.

This is the basic concept behind volumetric rendering: simulating how light would propagate within the volume of an object.

If we want to emulate the effect shown in the previous diagram, we need to describe it more precisely. Let's say that our main geometry is a cube, and we want to volumetrically render a sphere inside it. There is actually no geometry associated with the sphere, as we will render it entirely via shader code. Our sphere is centred at `_Centre` and has radius `_Radius`, both expressed in world coordinates. Moving the cube won't affect the position of the sphere, since it is expressed in absolute world coordinates. It's also worth noting that the external geometry serves no real purpose: changing it won't affect the rest of the tutorial. The triangles of the outer shell of the cube become portals that let us see inside the geometry. We could save triangles by using a quad, but a cube allows us to see the volumetric sphere from every angle.


### Volumetric Raycasting

The first approach to volumetric rendering works exactly like the previous diagram. The fragment shader receives the point we are rendering (`worldPosition`) and the direction we are looking at (`viewDirection`); it then uses a function called `raycastHit` that indicates whether we are hitting the red sphere or not. This technique is called volumetric raycasting, as it extends the rays cast from the camera into the geometry.

We can now write a stub for our fragment shader:

```
float3 _Centre;
float _Radius;

fixed4 frag (v2f i) : SV_Target
{
    float3 worldPosition = ...
    float3 viewDirection = ...
    if ( raycastHit(worldPosition, viewDirection) )
        return fixed4(1,0,0,1); // Red if we hit the ball
    else
        return fixed4(1,1,1,1); // White otherwise
}
```

Let’s tackle each one of those missing components.

#### World Position

Firstly, the world position of the fragment is the point where the rays generated from the camera hit the geometry. We have seen in Vertex and Fragment Shader how to retrieve the world position in a fragment shader:

```
struct v2f {
    float4 pos : SV_POSITION;   // Clip space
    float3 wPos : TEXCOORD1;    // World position
};

v2f vert (appdata_full v)
{
    v2f o;
    // In recent versions of Unity, use UnityObjectToClipPos(v.vertex)
    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    // In recent versions of Unity, use unity_ObjectToWorld
    o.wPos = mul(_Object2World, v.vertex).xyz;
    return o;
}
```

#### View Direction

Secondly, the view direction is the direction of the ray that comes from the camera and hits the geometry at the point we are rendering. It requires us to know the position of the camera, which Unity provides in the built-in variable `_WorldSpaceCameraPos`. The direction of a segment that passes through these two points can be calculated as follows:

`float3 viewDirection = normalize(i.wPos - _WorldSpaceCameraPos);`
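The same calculation can be sketched on the CPU side. The Python below is purely illustrative (the tutorial's shader uses the built-in `normalize`); it subtracts the camera position from the fragment's world position and scales the result to unit length:

```python
import math

def view_direction(world_pos, camera_pos):
    """Normalised vector pointing from the camera towards the fragment."""
    v = [w - c for w, c in zip(world_pos, camera_pos)]
    length = math.sqrt(sum(x * x for x in v))
    return [x / length for x in v]

# A fragment 5 units in front of a camera at the origin
d = view_direction((0.0, 0.0, 5.0), (0.0, 0.0, 0.0))  # → [0.0, 0.0, 1.0]
```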

#### Raycast Hit Function

What we need now is a function `raycastHit` that, given the point we are rendering and the direction we are looking at it from, determines if we are hitting the virtual red sphere or not. This is the problem of intersecting a sphere with a segment. Formulae for this problem exist (link), but they are generally very inefficient. If you want to go for an analytic approach, you will need to derive formulae to intersect segments with custom geometries. This solution strongly constrains the models you can create; consequently, it is rarely adopted.
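To give an idea of what such an analytic approach looks like, here is a CPU-side Python sketch of the classic ray-sphere intersection, which solves the quadratic equation |o + t·d − c|² = r² for the ray parameter t. The function name and signature are illustrative, not part of the tutorial's shader:

```python
import math

def raycast_sphere(origin, direction, centre, radius):
    """Analytic ray-sphere intersection.

    Assumes `direction` is normalised. Returns the smallest non-negative
    t at which the ray hits the sphere, or None if it misses."""
    # Vector from the sphere centre to the ray origin
    oc = [o - c for o, c in zip(origin, centre)]
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    discriminant = b * b - 4.0 * c  # a == 1 for a normalised direction
    if discriminant < 0.0:
        return None                 # no real roots: the ray misses
    t = (-b - math.sqrt(discriminant)) / 2.0
    return t if t >= 0.0 else None

# A ray fired down the z axis hits a unit sphere centred 5 units away at t = 4
t = raycast_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)  # → 4.0
```

Notice how this only works for spheres; every new shape would need its own intersection formula, which is exactly the constraint mentioned above.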

### Volumetric Raymarching with Constant Step

As explained in the previous section, pure analytical volumetric raycasting is not a feasible approach to our problem. If we want to simulate arbitrary volumes, we need to find a more flexible technique that does not rely on intersecting equations. A common solution is called volumetric raymarching, and it is based on an iterative approach.

Volumetric raymarching slowly extends the ray into the volume of the cube. At each step, it queries whether the ray is currently hitting the sphere or not.

Each ray starts from the fragment position `worldPosition`, and is iteratively extended by `STEP_SIZE` units in the direction defined by `viewDirection`. This can be done algebraically by adding `STEP_SIZE * viewDirection` to `worldPosition` after each iteration.

We can now replace `raycastHit` with the following `raymarchHit` function:

```
#define STEPS 64
#define STEP_SIZE 0.01

bool raymarchHit (float3 position, float3 direction)
{
    for (int i = 0; i < STEPS; i++)
    {
        if ( sphereHit(position) )
            return true;

        position += direction * STEP_SIZE;
    }

    return false;
}
```

The remaining piece of Maths required for this technique is to test whether a point `p` is inside a sphere:

```
bool sphereHit (float3 p)
{
    // A point is inside the sphere when its distance
    // from _Centre is smaller than _Radius
    return distance(p, _Centre) < _Radius;
}
```

Intersecting rays with spheres is hard, but iteratively checking if a point is inside a sphere is easy.
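The whole loop can be replicated on the CPU to see it work. This Python sketch mirrors the shader's `sphereHit` and `raymarchHit` (with the sphere's centre and radius passed as parameters instead of shader properties, purely for the sake of a self-contained example):

```python
def sphere_hit(p, centre, radius):
    """True if point p lies inside the sphere: the easy test."""
    return sum((a - b) ** 2 for a, b in zip(p, centre)) < radius * radius

def raymarch_hit(position, direction, centre, radius, steps=64, step_size=0.01):
    """Constant-step raymarching: extend the ray and test at every step."""
    p = list(position)
    for _ in range(steps):
        if sphere_hit(p, centre, radius):
            return True
        p = [a + d * step_size for a, d in zip(p, direction)]
    return False

# Starting 0.3 units away from the surface of a sphere of radius 0.2,
# the march reaches it within the 64 available steps
hit = raymarch_hit((0, 0, 0), (0, 0, 1), (0, 0, 0.5), 0.2)  # → True
```

Note that the ray gives up after `steps * step_size` units of travel (0.64 units with the default values), a limitation that becomes relevant later in the comments.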

The result is this rather unsexy red sphere. Despite looking like a plain circle, this is actually an unlit sphere.

### Conclusion

This post introduces the concept of Volumetric Rendering. Even if a traditional shader stops at the outer shell of a material, it is possible to keep projecting those rays inside a material's volume to create the illusion of depth. Raymarching is one of the most commonly used techniques. We have used it to draw an unlit red sphere. We will see in the following tutorials how to shade it realistically (Part 3: Surface Shading), how to make interesting shapes (Part 4: Signed Distance Functions) and even how to add shadows (Part 6: Hard and Soft Shadows). By the end of this series you'll be able to create objects like this one, with just three lines of code and a volumetric shader:

The next part of this tutorial will cover distance-aided raymarching, which is the de facto standard technique used for volumetric rendering.
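As a small preview of that idea, sketched here in Python under the same single-sphere assumption as before: instead of advancing by a fixed amount, each step jumps by the distance to the nearest surface, so the ray can move much faster without ever overshooting it.

```python
import math

def sphere_distance(p, centre, radius):
    """Signed distance from p to the sphere surface (negative inside)."""
    return math.dist(p, centre) - radius

def raymarch(position, direction, centre, radius, steps=32, epsilon=1e-4):
    """Distance-aided raymarching: each step jumps by the distance
    to the nearest surface, which is always a safe amount to travel."""
    t = 0.0
    for _ in range(steps):
        p = [a + d * t for a, d in zip(position, direction)]
        if sphere_distance(p, centre, radius) < epsilon:
            return True
        t += sphere_distance(p, centre, radius)
    return False

# Reaches a sphere 4 units away in just two iterations,
# where the constant-step version above would need hundreds
hit = raymarch((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)  # → True
```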

#### Other Resources

⚠  Part 6 of this series is available for preview on Patreon, as its written content is still being completed.

If you are interested in volumetric rendering for non-solid materials (clouds, smoke, …) or transparent ones (water, glass, …), the topic is covered in detail in the Atmospheric Volumetric Scattering series!

This tutorial would not have been possible without the invaluable contribution of many other talented developers and artists, such as Íñigo Quílez and Mikael Hvidtfeldt Christensen.

The cover for this tutorial features Clouds, by Íñigo Quílez.

##### 💖 Support this blog

This website exists thanks to the contribution of patrons on Patreon. If you think these posts have either helped or inspired you, please consider supporting this blog.

You will be notified when a new tutorial is released!

##### 📝 Licensing

You are free to use, adapt and build upon this tutorial for your own projects (even commercially) as long as you credit me.

You are not allowed to redistribute the content of this tutorial on other platforms. Especially the parts that are only available on Patreon.

If the knowledge you have gained had a significant impact on your project, a mention in the credits would be very appreciated. ❤️🧔🏻

1. Wonky

this is what I’ve done. what did I do wrong?

```
{
    Properties
    {
        _Centre ("Centre", float) = 0
    }
    {
        // No culling or depth
        Cull Off ZWrite Off ZTest Always

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            sampler2D _MainTex;
            float _Centre;

            #define STEPS 64
            #define STEP_SIZE 0.01

            struct appdata {
                float4 vertex : POSITION;
            };

            struct v2f {
                float4 vertex : SV_POSITION;
                float3 wPos : TEXCOORD1; // World Position
            };

            v2f vert (appdata v) {
                v2f o;
                o.vertex = mul(UNITY_MATRIX_MVP, v.vertex);
                o.wPos = mul(_Object2World, v.vertex).xyz;
                return o;
            }

            bool sphereHit(float3 p) {
            }

            bool raymarchHit(float3 position, float3 direction){
                for(int i=0; i<STEPS; i++){
                    if(sphereHit(position)){
                        return true;
                    }
                    position += direction * STEP_SIZE;
                }
                return false;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                float3 worldPosition = i.wPos;
                float3 viewDirection = normalize(i.wPos - _WorldSpaceCameraPos);
                if(raymarchHit(worldPosition, viewDirection)){
                    return fixed4(1,0,0,1);
                } else {
                    return fixed4(1,1,1,1);
                }
            }
            ENDCG
        }
    }
}
```

• Did you try removing `Cull Off ZWrite Off ZTest Always`?

• Jonathan

For Wonky and anyone else encountering this issue: this is most likely due to the raymarch algorithm exceeding the max number of iterations (STEPS) before it hits. You can use the debug visualization from part 2 of this tutorial to confirm. If so, increasing STEPS or STEP_SIZE will fix it.
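To see why this happens with the values from the tutorial: a constant-step march can only travel `STEPS * STEP_SIZE` = 64 × 0.01 = 0.64 world units from the fragment before the loop gives up, so any part of the sphere further away than that is never reached. A quick sanity check in Python (the helper function is purely illustrative):

```python
STEPS = 64
STEP_SIZE = 0.01

# Maximum distance a constant-step march can cover before giving up
max_reach = STEPS * STEP_SIZE  # 0.64 world units

def can_reach(distance_to_surface):
    """True if the march can reach a surface this far from the fragment."""
    return distance_to_surface <= max_reach
```

Increasing either `STEPS` (at a performance cost) or `STEP_SIZE` (at a precision cost) extends this reach, which is exactly the trade-off the distance-aided raymarching of Part 2 avoids.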

• Jonathan

Another thing to be aware of is that you shouldn’t compute the view direction in the vertex shader as an ‘optimization’, as the value will get interpolated incorrectly and cause distortions in the view.

• Thank you for the support! 😀

2. Hi, Alan. Could you share the shader code? I think some important steps are missing in the tutorial

• Hey!
At the moment the code is not available.
Hopefully, it will be when the rest of this tutorial comes out!
Although I think it will be available only through Patreon!

3. Anthony Rosenbaum

Have you tested it’s performance on iOS or Android?

• Hey! Not yet!
Volumetric rendering is generally …slow.
