This is the first part of a Unity tutorial dedicated to Volumetric Rendering, raymarching and signed distance fields. These techniques allow us to overcome the biggest limitation of modern 3D engines, which only let us render the outer shell of an object. Volumetric rendering enables the creation of realistic materials that interact with light in a complex way, such as fog, smoke, water and glass. Beautifully crafted effects such as NMZ's Plasma Globe (below) would simply be impossible without volumetric rendering.

These techniques are not complicated, but they require many steps to replicate the aforementioned effects. This tutorial has got you covered.

- **Part 1: Volumetric Rendering**: an introduction to what rendering a volume means, and how it can be done in Unity;
- **Part 2: Raymarching**: focuses on the implementation of distance-aided raymarching, the de facto standard technique to render volumes;
- **Part 3: Surface Shading**: a comprehensive guide on how to shade volumes realistically;
- **Part 4: Signed Distance Functions**: an in-depth discussion of the mathematical tools that allow us to generate and combine arbitrary volumes;
- **Part 5: Ambient Occlusion**: how to implement realistic and efficient ambient occlusion in your volumes;
- **Part 6: Hard and Soft Shadows**: how to add real shadows to your volumes;
- **Part 7: Volume Raycasting**: a variant of raymarching that can render semitransparent surfaces such as fog and smoke.

This first part will provide a general introduction to volumetric rendering, and end with a simple shader that will be the base of all our future iterations:

- Introduction
- Part 1. Volumetric Rendering
- Part 2. Volumetric Raycasting
- Part 3. Volumetric Raymarching with Constant Step
- Conclusion

**A full Unity package will be available soon. You may want to consider subscribing to the mailing list to stay updated.**

### Introduction

Spheres, cubes and all the other complex geometries are made out of triangles in 3D game engines, which are flat by definition. The real-time lighting system adopted by Unity is only capable of rendering flat surfaces. When you are rendering a sphere, for instance, Unity only draws the triangles that make its surface. Even for materials that are semi-transparent, only the outer shell is actually drawn, and its colour is blended with that of the object behind it. There is no attempt in the lighting system to probe into a material's volume. For the GPU, the world is made out of empty shells.

A broad range of techniques exists to overcome this strong limitation. Even though it is true that a traditional shader ultimately stops at the very surface of an object, it doesn't mean we can't go deeper. **Volume rendering techniques** simulate the propagation of light rays into a material's volume, allowing for stunning and sophisticated visual effects.

### Volumetric Rendering

The fragment shader of an unlit textured object looks like this:

```hlsl
fixed4 frag (v2f i) : SV_Target
{
    fixed4 col = tex2D(_MainTex, i.texcoord);
    return col;
}
```

Loosely speaking, that piece of code is invoked for every potential pixel (fragment) in the final rendered image. When the GPU invokes that fragment shader, it is because there is a triangle that is intersecting the camera frustum. In other words, the camera sees the object. Unity needs to know the exact colour of the object, so that it can assign it to the respective pixel in the rendered image.

Fragment shaders ultimately return the colour of an object at a specific location, as seen from a specific angle. The way this colour is calculated is entirely arbitrary. Nothing forbids us from "cheating" and returning something that does not necessarily match the geometry we are actually rendering. The following diagram shows an example of this when rendering a 3D cube. When the fragment shader is queried for the colour of the cube's face, we return the same colours we would see on a sphere. The geometry is a cube, but from the camera's perspective it looks and feels exactly like a sphere.

This is the basic concept behind volumetric rendering: simulating how light would propagate within the volume of an object.

If we want to emulate the effect shown in the previous diagram, we need to describe it more precisely. Let's say that our main geometry is a cube, and we want to volumetrically render a sphere inside it. There is actually no geometry associated with the sphere, as we will render it entirely via shader code. Our sphere is centred at _Centre and has radius _Radius, both expressed in world coordinates. Moving the cube won't affect the position of the sphere, since it is expressed in absolute world coordinates. It's also worth noting that the shape of the external geometry serves no purpose and won't change the rest of the tutorial. The triangles of the cube's outer shell become portals that allow us to see inside the geometry. We could save triangles by using a quad, but a cube allows us to see the volumetric sphere from every angle.

### Volumetric Raycasting

The first approach to volumetric rendering works exactly like the previous diagram. The fragment shader receives the point we are rendering (worldPosition) and the direction we are looking at (viewDirection); it then uses a function called raycastHit that indicates whether we are hitting the red sphere or not. This technique is called **volumetric raycasting**, as it extends the rays cast from the camera into the geometry.

We can now write a stub for our fragment shader:

```hlsl
float3 _Centre;
float _Radius;

fixed4 frag (v2f i) : SV_Target
{
    float3 worldPosition = ...
    float3 viewDirection = ...

    if ( raycastHit(worldPosition, viewDirection) )
        return fixed4(1,0,0,1); // Red if hit the ball
    else
        return fixed4(1,1,1,1); // White otherwise
}
```

Let’s tackle each one of those missing components.

#### World Position

Firstly, the world position of the fragment is the point where the rays generated from the camera hit the geometry. We have seen in Vertex and Fragment Shader how to retrieve the world position in a fragment shader:

```hlsl
struct v2f
{
    float4 pos  : SV_POSITION; // Clip space
    float3 wPos : TEXCOORD1;   // World position
};

v2f vert (appdata_full v)
{
    v2f o;
    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    o.wPos = mul(_Object2World, v.vertex).xyz;
    return o;
}
```

#### View Direction

Secondly, the view direction is the direction of the ray that comes from the camera and hits the geometry at the point we are rendering. It requires us to know the position of the camera, which Unity includes in the built-in variable _WorldSpaceCameraPos. The direction of a segment that passes through these two points can be calculated as follows:

```hlsl
float3 viewDirection = normalize(i.wPos - _WorldSpaceCameraPos);
```
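To make the formula concrete, here is a CPU-side sketch in Python of the same computation (my own illustration, not part of the tutorial's shader code):

```python
import math

def view_direction(world_pos, camera_pos):
    """Unit vector pointing from the camera towards the fragment,
    i.e. normalize(i.wPos - _WorldSpaceCameraPos) from the shader."""
    d = [world_pos[i] - camera_pos[i] for i in range(3)]
    length = math.sqrt(sum(c * c for c in d))
    return [c / length for c in d]

# A fragment at (3, 0, 4) seen from a camera at the origin:
print(view_direction((3, 0, 4), (0, 0, 0)))  # [0.6, 0.0, 0.8]
```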

#### Raycast Hit Function

What we need now is a function raycastHit that, given the point we are rendering and the direction we are looking at it from, determines whether we are hitting the virtual red sphere or not. This is the problem of intersecting a sphere with a segment. Formulae for this problem exist (link), but they are generally very inefficient. If you want to go for an analytic approach, you will need to derive formulae to intersect segments with custom geometries. This solution strongly constrains the models you can create; consequently it is rarely adopted.
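To give an idea of what the analytic approach looks like, here is a CPU-side Python sketch of the standard ray-sphere intersection test (my own illustration; the function name and structure are not from the tutorial):

```python
import math

def ray_sphere_intersect(origin, direction, centre, radius):
    """Analytic ray-sphere test: solve |origin + t*direction - centre|^2 = radius^2.
    This expands to a quadratic t^2 + b*t + c = 0 (a == 1 for a unit direction);
    a real, non-negative root means the ray hits the sphere."""
    ox, oy, oz = (origin[i] - centre[i] for i in range(3))
    b = 2.0 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    discriminant = b * b - 4.0 * c
    if discriminant < 0.0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(discriminant)) / 2.0  # nearest intersection
    return t if t >= 0.0 else None

# A ray from the origin along +z hits a unit sphere centred at (0, 0, 5):
print(ray_sphere_intersect((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

Even this simple case requires a square root and several multiplications per fragment, and every new shape needs its own bespoke formula, which is why the analytic route scales poorly.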

### Volumetric Raymarching with Constant Step

As explained in the previous section, pure analytical volumetric raycasting is not a feasible approach to our problem. If we want to simulate arbitrary volumes, we need to find a more flexible technique that does not rely on intersecting equations. A common solution is called **volumetric raymarching**, and it is based on an iterative approach.

Volumetric raymarching slowly extends the ray into the volume of the cube. At each step, it queries whether the ray is currently hitting the sphere or not.

Each ray starts from the fragment position worldPosition and is iteratively extended by STEP_SIZE units in the direction defined by viewDirection. This can be done algebraically by adding STEP_SIZE * viewDirection to worldPosition after each iteration.

We can now replace raycastHit with the following raymarchHit function:

```hlsl
#define STEPS 64
#define STEP_SIZE 0.01

bool raymarchHit (float3 position, float3 direction)
{
    for (int i = 0; i < STEPS; i++)
    {
        if ( sphereHit(position) )
            return true;

        position += direction * STEP_SIZE;
    }

    return false;
}
```

The remaining piece of maths required for this technique is to test whether a point p is inside a sphere:

```hlsl
bool sphereHit (float3 p)
{
    return distance(p, _Centre) < _Radius;
}
```

Intersecting rays with spheres is hard, but iteratively checking if a point is inside a sphere is easy.
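The whole loop can be mirrored on the CPU to see it in action. Below is a Python sketch of the same raymarching logic (my own illustration, using a squared-distance containment test rather than the shader's distance() intrinsic):

```python
STEPS = 64
STEP_SIZE = 0.01

def sphere_hit(p, centre, radius):
    """True if point p lies inside the sphere (squared-distance form)."""
    return sum((p[i] - centre[i]) ** 2 for i in range(3)) < radius ** 2

def raymarch_hit(position, direction, centre, radius):
    """Mirror of the shader's raymarchHit: advance the ray in fixed
    increments and test for containment at every step."""
    p = list(position)
    for _ in range(STEPS):
        if sphere_hit(p, centre, radius):
            return True
        for i in range(3):
            p[i] += direction[i] * STEP_SIZE
    return False

# A ray starting just outside a sphere of radius 0.2 and marching towards
# its centre eventually steps inside it; marching away, it never does.
print(raymarch_hit((0, 0, -0.5), (0, 0, 1), (0, 0, 0), 0.2))   # True
print(raymarch_hit((0, 0, -0.5), (0, 0, -1), (0, 0, 0), 0.2))  # False
```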

The result is this rather unsexy red sphere. Despite looking like a plain circle, this is actually an unlit sphere.

### Conclusion

This post introduces the concept of Volumetric Rendering. Even if a traditional shader stops at the outer shell of a material, it is possible to keep projecting rays inside a material's volume to create the illusion of depth. Raymarching is one of the most commonly used techniques. We have used it to draw an unlit red sphere. We will see in the following tutorials how to shade it realistically (Part 3. Surface Shading), how to make interesting shapes (Part 4. Signed Distance Functions) and even how to add shadows (Part 6. Hard and Soft Shadows). By the end of this series you'll be able to create objects like this one, with just three lines of code and a volumetric shader:

**The next part of this tutorial will cover distance-aided raymarching, which is the de facto standard technique used for volumetric rendering.**

#### Other Resources

- **Part 1: Volumetric Rendering**
- Part 2: Raymarching
- Part 3: Surface Shading
- Part 4: Signed Distance Fields
- Part 5: Ambient Occlusion
- Part 6: Hard and Soft Shadows
- Part 7: Volume Raycasting

This tutorial would not have been possible without the invaluable contribution of many other talented developers and artists, such as Íñigo Quílez and Mikael Hvidtfeldt Christensen.

- Rendering Worlds with Two Triangles with raytracing on the GPU in 4096 bytes
- Distance Estimated 3D Fractals
- HOW TO: Ray Marching
- Raymarching Distance Fields: Concepts and Implementation in Unity

The cover for this tutorial features Clouds, by Íñigo Quílez.

##### 📧 Stay updated

A new tutorial is released every week.

##### 💖 Support this blog

This website exists thanks to the contribution of patrons on Patreon. If you think these posts have either helped or inspired you, please consider supporting this blog.

https://drive.google.com/file/d/0B17EMZrhzIZrcVNNa2kwOHJSTzQ/view?usp=sharing

this is what I’ve done. what did I do wrong?

here’s my shader

```hlsl
Shader "Hidden/VolumentricRendering01"
{
    Properties
    {
        _Radius ("Radius", float) = 1
        _Centre ("Centre", float) = 0
    }
    SubShader
    {
        // No culling or depth
        Cull Off ZWrite Off ZTest Always

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            sampler2D _MainTex;
            float _Radius;
            float _Centre;

            #define STEPS 64
            #define STEP_SIZE 0.01

            struct appdata {
                float4 vertex : POSITION;
            };

            struct v2f {
                float4 vertex : SV_POSITION;
                float3 wPos : TEXCOORD1; // World Position
            };

            v2f vert (appdata v) {
                v2f o;
                o.vertex = mul(UNITY_MATRIX_MVP, v.vertex);
                o.wPos = mul(_Object2World, v.vertex).xyz;
                return o;
            }

            bool sphereHit (float3 p) {
                return distance(p, _Centre) < _Radius;
            }

            bool raymarchHit (float3 position, float3 direction) {
                for (int i = 0; i < STEPS; i++) {
                    if (sphereHit(position)) {
                        return true;
                    }
                    position += direction * STEP_SIZE;
                }
                return false;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                float3 worldPosition = i.wPos;
                float3 viewDirection = normalize(i.wPos - _WorldSpaceCameraPos);

                if (raymarchHit(worldPosition, viewDirection)) {
                    return fixed4(1,0,0,1);
                } else {
                    return fixed4(1,1,1,1);
                }
            }
            ENDCG
        }
    }
}
```

Did you try removing Cull Off ZWrite Off ZTest Always ?

much much better!! thanks Alan~!

but there’s still remain weird boundary near cube edges 🙁

https://drive.google.com/file/d/0B17EMZrhzIZrZ2NVUmpvejV6N00/view?usp=sharing

For Wonky and anyone else encountering this issue: this is most likely due to the raymarch algorithm exceeding the max number of iterations (STEPS) before it hits. You can use the debug visualization from part 2 of this tutorial to confirm. If so, increasing STEPS or STEP_SIZE will fix it.
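To make that diagnosis concrete, a quick back-of-the-envelope check (my own sketch, not from the original thread): with the constants used in the tutorial, a ray can only travel STEPS * STEP_SIZE world units before the loop gives up, and rays entering near a cube's edges may need to travel much further than that.

```python
STEPS = 64
STEP_SIZE = 0.01

# Maximum distance a ray can cover before raymarchHit returns false:
max_reach = STEPS * STEP_SIZE
print(round(max_reach, 2))  # 0.64

# For a unit cube, a ray entering near an edge can travel up to the main
# diagonal before leaving the volume, far beyond max_reach:
longest_chord = 3 ** 0.5
print(longest_chord > max_reach)  # True
```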

Another thing to be aware of is that you shouldn’t compute the view direction in the vertex shader as an ‘optimization’, as the value will get interpolated incorrectly and cause distortions in the view.

Thank you for the support! 😀

Hi, Alan. Could you share the shader code? I think some important steps are missing in the tutorial

Hey!

At the moment the code is not available.

Hopefully, it will be when the rest of this tutorial comes out!

Although I think it will be available only through Patreon!

Have you tested its performance on iOS or Android?

Hey! Not yet!

Volumetric rendering is generally …slow.