Volumetric Rendering: Raymarching



This post continues the tutorial on volumetric rendering, introducing one of the most widely used techniques: raymarching.

You can find all the other posts in this series here:


Loosely speaking, the standard behaviour of Unity 5’s lighting engine stops the rendering when a ray from the camera hits the surface of an object. There is no built-in mechanism for those rays to penetrate the surface of an object. To compensate for this, we have introduced a technique called raymarching. What we have in a fragment shader is the position of the point we are rendering (in world coordinates) and the view direction from the camera. We can manually extend those rays, making them hit custom geometries that exist only within the shader code. The barebone shader that allows us to do this is:

The rest of this post will provide different implementations for the raymarch function.

Raymarching with Constant Step

The first implementation of raymarching, introduced in Volume Rendering, used a constant step. Each ray is extended by STEP_SIZE in the view direction until it hits something. If it does, we draw a red pixel; otherwise, a white one.


Raymarching with constant step can be implemented with the following code:
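The original listing is shader code; as a rough, language-neutral sketch, the same loop in C might look like this (STEPS, STEP_SIZE and sphere_hit are illustrative stand-ins, with a unit sphere at the origin as the geometry):

```c
#include <stdbool.h>

#define STEPS     64
#define STEP_SIZE 0.1f

typedef struct { float x, y, z; } float3;

/* Illustrative hit test: is the point inside a unit sphere at the origin? */
static bool sphere_hit(float3 p) {
    return p.x * p.x + p.y * p.y + p.z * p.z < 1.0f;
}

/* Advance the ray by a fixed STEP_SIZE along the view direction.
 * Returns true on a hit (draw red) and false otherwise (draw white). */
bool raymarch_hit(float3 position, float3 direction) {
    for (int i = 0; i < STEPS; i++) {
        if (sphere_hit(position))
            return true;
        position.x += direction.x * STEP_SIZE;
        position.y += direction.y * STEP_SIZE;
        position.z += direction.z * STEP_SIZE;
    }
    return false;
}
```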

As seen already, it renders unsexy, flat geometries:


The post Surface Shading will be dedicated entirely to giving three-dimensionality to volumetric geometries. Before that, we need to focus on a better implementation of the raymarching technique.

Distance-Aided Raymarching

What makes raymarching with constant step very inefficient is the fact that rays advance by the same amount every time, regardless of the geometry that fills the volumetric world. The performance of a shader suffers immensely when loops are added. If we want real-time volumetric rendering, we need to find a more efficient solution.

We would like a way of estimating how far a ray can travel without hitting a piece of geometry. In order for this technique to work, we need to be able to estimate the distance from our geometry. In the previous post we used a function called sphereHit, which indicated whether a point was inside a sphere or not:
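As a sketch (in C for brevity; the shader version would be Cg, and the names and signature are assumptions), such a test might look like:

```c
#include <stdbool.h>

typedef struct { float x, y, z; } float3;

/* Boolean inside/outside test: true when p lies strictly inside
 * the sphere with the given centre and radius. */
bool sphere_hit(float3 p, float3 centre, float radius) {
    float dx = p.x - centre.x;
    float dy = p.y - centre.y;
    float dz = p.z - centre.z;
    return dx * dx + dy * dy + dz * dz < radius * radius;
}
```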

We can change it in such a way that instead of a boolean value, it returns a distance:

This now belongs to a family of functions called signed distance functions. As the name suggests, they provide a measure which can be positive or negative: when positive, we are outside the sphere; when negative, we are inside; and when zero, we are exactly on its surface.

What sphereDistance gives us is a conservative estimate of how far our ray can travel without hitting the sphere. Even if this example might seem trivial with a single sphere, the technique becomes valuable with more complex geometries. The following image (from Distance Estimated 3D Fractals) shows how raymarching works. Each ray advances by its distance from the closest object. In such a way we can dramatically reduce the number of steps required to hit a volume.


This brings us to the distance-aided implementation of raymarching:

To better understand how it works, we can replace the surface rendering with a colour gradient that indicates how many steps were required for the raymarcher to hit a piece of geometry:


The flat geometry that faces the camera is identified almost immediately; the edges, instead, are much trickier. This technique also estimates how close we are to any nearby geometry. We will see in a future instalment, Ambient Occlusion, how this can be helpful to add details to our volume.


This post introduces the de facto standard technique used for real-time raymarching shaders. The rays advance through the volumetric medium according to a conservative estimate of the distance to the closest nearby geometry.

The next post will focus on how to use distance functions to create geometrical primitives, and how they can be combined together to create whichever shape you want.

Other Resources





  1. I created a scene, put a cube in it, applied the material created with the shader of Raymarching with constant step, the code is correct, but the cube stays white all the time. Any reason for this?

  2. How can you have a unity camera go within the volume contained inside the cube. Say for example I wanted to clip through it while the camera passes through it? I assume once the camera clips the mesh it no longer renders anything.

    • Hey! Exactly, that’s a big issue.
      One solution is to use a billboard quad to render your volume.
      You can also play with Cull Off.
      Unfortunately there’s no one-line solution. Also, you have to take into account the camera position to interpolate the values properly; not just the view direction.

  3. Very nice tutorial, but it would be nice if the animgif would show the actual expected output of the example and the boolean cube. It’s hard to see if I’m actually doing it right. 🙂


    edit: and NOT the boolean cube, of course. No boolean pun intended! 🙂