
Interactive Map Shader: Vertex Displacement

This online course is dedicated to interactive maps, and how to create them using Shaders in Unity.

This is a tutorial in three parts:

  • Part 1: Interactive Map Shader: Vertex Displacement
  • Part 2: Interactive Map Shader: Scrolling Effect
  • Part 3: Interactive Map Shader: Terrain Shading

This effect will serve as the base for more advanced techniques, such as holographic projections and even Black Panther’s sand table.

A link to download the Unity package for this tutorial can be found at the end of this article.

The inspiration for this tutorial comes from a tweet that Baran Kahyaoglu posted to showcase some of the work he has been doing for Mapbox.

The scene (minus the map) comes from the Unity Visual Effect Graph Spaceship demo (below), which you can download here.

Anatomy of the Effect

The first thing to notice is that geographical maps are flat: when used as textures, they lack the three-dimensionality that a true 3D model of the same region would have.

The first solution is to create a 3D model of the region you want in your game, and then use the geographical map as its texture. That works perfectly, but it is time-consuming and stops you from implementing the “scrolling” effect seen in Baran Kahyaoglu’s video.

It is obvious that the best way to move forward is to go for a more technical approach. Luckily, shaders can be used to alter the geometry of a 3D model. This can be exploited to shape any flat plane into the valleys and mountains of the region we want.

For this tutorial, I will use a map of the Quillota region in Chile, which is known for its characteristic hills. The image below shows a texture of the region applied to a circular mesh.

While hills and mountains can be seen, they appear completely flat. This destroys any illusion of realism.

Normal Extrusion

The first step is to use shaders to alter the geometry using a technique called normal extrusion. What is needed is a vertex modifier: a function capable of manipulating the individual vertices of a 3D model.

How you use a vertex modifier changes based on the type of shader you have. In this tutorial, we are showing how to edit a Standard Surface Shader, which is one of the types of shaders that you can create with Unity.

There are many ways to manipulate the vertices of a 3D model. One of the first techniques that most vertex shader tutorials teach is normal extrusion. The idea is to push each vertex “outwards” (extrude), giving a more inflated look to the 3D model. The concept of “outwards” comes from the fact that each vertex is moved along its normal direction.

This works very well for smooth surfaces, but can create some weird artefacts for models whose vertices are not properly welded. This effect was also explained in one of my very first tutorials, A Gentle Introduction to Shaders, where I showed how to extrude and intrude a 3D model.

Adding normal extrusion to a surface shader is easy. Each surface shader has a #pragma directive, which is used to provide additional pieces of information and commands. One of these is vertex:vert, which indicates that the function called vert will be used to process each vertex of the 3D model.

The edited shader looks like this:

#pragma surface surf Standard fullforwardshadows addshadow vertex:vert
...
float _Amount;
...
void vert(inout appdata_base v)
{
    v.vertex.xyz += v.normal * _Amount;
}

Since we are changing the position of the vertices, we also need to use addshadow if we want the model to correctly cast shadows on itself.
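The snippet above also assumes that _Amount has been exposed in the Properties block of the shader, so that it can be tweaked from the material Inspector. A minimal sketch (the display name and range shown here are illustrative assumptions, not from the original shader):

Properties
{
    // Hypothetical entry: exposes _Amount as a slider in the Inspector.
    _Amount ("Extrusion Amount", Range(0, 1)) = 0.5
}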

❓ What is appdata_base?
We can see that we have added the vertex modifier function (vert) which takes as a parameter a structure called appdata_base. This structure is what stores the information about every single vertex of the 3D model. It contains not just the vertex position (v.vertex), but also other fields such as the normal direction (v.normal) and the texture information (v.texcoord) associated with it.

For certain applications, that is not enough and we might need other properties, such as the vertex colour (v.color) and the tangent direction (v.tangent). Vertex modifiers can be defined with a variety of other input structures, including appdata_tan and appdata_full, which provide more information at a small performance cost. You can read more about appdata (and its variants) on the Unity3D wiki.
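For reference, this is (approximately) how appdata_base is defined in UnityCG.cginc; recent versions of Unity also add an instancing ID to it:

struct appdata_base
{
    float4 vertex   : POSITION;  // vertex position, in model space
    float3 normal   : NORMAL;    // normal direction, in model space
    float4 texcoord : TEXCOORD0; // UV coordinates of the first texture set
};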


❓ How are values returned from vert?
The vertex function has no return value. If you are familiar with C#, you will know that structures are passed by value, meaning that editing v.vertex would normally only affect a copy of v whose scope is limited to the function body.

However, v is declared as inout, which means that it is used both as an input and as an output. Any change we make to it is written back to the actual variable that was passed to vert. The keywords inout and out are very common in Cg, and they loosely correspond to ref and out in C#, respectively.
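As a minimal illustration of the difference (these two functions are hypothetical, purely for demonstration):

// out: the parameter is write-only; any input value is ignored
// and the caller's variable is simply overwritten.
void resetIt  (out   float x) { x = 0;  }
// inout: the parameter is read, modified, and written back to the caller.
void doubleIt (inout float x) { x *= 2; }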


Normal Extrusion With Textures

The code used in the section above works correctly, but it is far from the effect we want to achieve. The reason is that we do not want to extrude all vertices by the same amount: we want the surface of our 3D model to match the valleys and peaks of the geographical region it represents.

Firstly, we need a way to store and retrieve how raised each point on the map is. In a nutshell, we want the extrusion to be modulated by a texture which encodes the height of our landscape. Such textures are often referred to as heightmaps, although it is not uncommon to see them called depthmaps, depending on the context.

Once the height information is available, we can modulate the extrusion of a flat plane based on the heightmap. As seen in the diagram below, this allows controlling which areas will be raised and which ones will be lowered.

It is relatively easy to find a satellite image of the geographical area of your interest, and its associated heightmap. Below, you can see a satellite map of Mars (left) and its heightmap (right), which have been used in this tutorial:

I have covered the concept of depthmaps extensively in another series, titled Inside Facebook 3D Photos: Parallax Shaders.

For this tutorial, we will assume that the heightmap is stored as a grayscale image, in which black and white correspond to the lowest and highest altitudes, respectively. We also need these values to scale linearly, meaning that (for instance) a colour difference of 0.1 corresponds to the same difference in height whether it is between 0 and 0.1, or between 0.9 and 1. When it comes to depthmaps, this is not always the case, since many of them store the depth information on a logarithmic scale.
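To see why linearity matters: if the values scale linearly and the heightmap covers a known range of elevations, a single linear interpolation is enough to recover the real altitude from a sampled value. A sketch, where _MinHeight and _MaxHeight are hypothetical properties (not part of the final shader) and height is the normalised sample in the [0, 1] range, obtained as shown in the next snippets:

// Hypothetical remap: converts a normalised heightmap sample (0..1)
// into an altitude in metres, assuming the map spans the elevations
// from _MinHeight to _MaxHeight linearly.
float altitude = lerp(_MinHeight, _MaxHeight, height);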

Sampling a texture requires two pieces of information: the texture itself, and the UV coordinates of the point we want to sample. The latter can be accessed through the texcoord field stored in the appdata_base structure, which holds the UV coordinate associated with the vertex currently being processed. Sampling textures in a surface function is done using tex2D, although tex2Dlod is required when we are in a vertex function.

In the snippet below, a texture called _HeightMap is used to modulate the amount of extrusion performed on each vertex:

sampler2D _HeightMap;
...
void vert(inout appdata_base v)
{
    fixed height = tex2Dlod(_HeightMap, float4(v.texcoord.xy, 0, 0)).r;
    v.vertex.xyz += v.normal * height * _Amount;
}

❓ Why can't tex2D be used in a vertex function?
If you look at the shader code that Unity generates for a Standard Surface Shader, you will notice that it already contains an example of how to sample textures. In particular, it samples the main texture (called _MainTex) in the surface function (called surf) using a built-in function called tex2D.

Indeed, tex2D is the right function to sample pixels from a texture, whether it is used to store colours or heights. However, tex2D cannot be used within a vertex function.

The reason is that tex2D does not just read pixels from a texture: it also decides which version of the texture to use, based on the distance from the camera. Loosely speaking, this is known as mipmapping, and it allows smaller versions of the same texture to be used automatically at different distances.

In the surface function, the shader already knows which mipmap to use. That information might not yet be available in a vertex function, which is why tex2D cannot be used reliably there. The function tex2Dlod, however, takes two extra coordinate components which, for the purpose of this tutorial, can be set to zero.
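Concretely, tex2Dlod receives its coordinates as a float4, and the fourth component selects the mipmap level to sample:

// Samples mipmap level 0: the full-resolution version of the heightmap.
fixed h0 = tex2Dlod(_HeightMap, float4(v.texcoord.xy, 0, 0)).r;
// Samples mipmap level 2: a version downscaled to a quarter
// of the resolution on each axis.
fixed h2 = tex2Dlod(_HeightMap, float4(v.texcoord.xy, 0, 2)).r;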

The result can be seen quite clearly below:

There is one small simplification that can be done in our case. The code seen so far is supposed to work on any geometry. However, we can assume that our surface is completely flat. In fact, what we really want is to use this effect on a flat plane.

Consequently, we can remove v.normal and replace it with float3(0, 1, 0):

void vert(inout appdata_base v)
{
    float3 normal = float3(0, 1, 0);

    fixed height = tex2Dlod(_HeightMap, float4(v.texcoord.xy, 0, 0)).r;
    v.vertex.xyz += normal * height * _Amount;
}

This was possible because all coordinates in appdata_base are stored in model space, meaning that they are relative to the centre and orientation of the 3D model. Translating, rotating and scaling an object using its transform in Unity changes the position, rotation and scale of the object in the scene, but leaves the vertices of its original 3D model unaffected.
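Incidentally, if you ever need to displace along a fixed direction in world space instead, the position can be converted between spaces using Unity’s built-in matrices. A minimal sketch, not needed for this tutorial:

void vert(inout appdata_base v)
{
    fixed height = tex2Dlod(_HeightMap, float4(v.texcoord.xy, 0, 0)).r;

    // Converts the vertex position from model space to world space...
    float4 worldPos = mul(unity_ObjectToWorld, v.vertex);
    // ...displaces it along the world up axis...
    worldPos.xyz += float3(0, 1, 0) * height * _Amount;
    // ...and converts it back to model space.
    v.vertex = mul(unity_WorldToObject, worldPos);
}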

What’s Next…

In the next part of this online course, we will explore how to implement a scrolling effect, so that we can actually move the geometry around.

Unity Package Download

The full package for this tutorial is available on Patreon, and it includes all the assets necessary to reproduce the technique here presented.

💖 Support this blog

This website exists thanks to the contribution of patrons on Patreon. If you think these posts have either helped or inspired you, please consider supporting this blog.


📝 Licensing

You are free to use, adapt and build upon this tutorial for your own projects (even commercially) as long as you credit me.

You are not allowed to redistribute the content of this tutorial on other platforms, especially the parts that are only available on Patreon.

If the knowledge you have gained had a significant impact on your project, a mention in the credits would be very much appreciated. ❤️🧔🏻

Comments

  1. Hi, love the tutorials! I’m curious if you could discuss the impact a mesh has on the shader (specifically the ‘disc’ in this case) and what the end result looks like. Do you have any best practices? I’d be curious what the mesh looks like in this case. Thanks!

  2. Hey, this is a great tutorial! Is there any way for a user to now click on this “terrain”? For instance, I might want to spawn an object that sits on top of the terrain where they clicked. Normally one would use a raycast from the camera to determine where the user was clicking. However, now we don’t have a collision mesh to do this. Please keep these wonderful tutorials coming! Can’t wait to see the hologram tutorial mentioned in the third post.

    • Thank you so much for the nice comments.
      Yes, I am working on it, although it might be a few more months before it gets released!

      You are right, if you add a collider to the object, you will see that it does not get updated. The shader is only changing the vertices of the mesh, not the collider itself.
      For that, you will need to update the collider points manually.

      At this point, it might be worth doing it by using a compute shader, which can calculate the points much much faster than a traditional for loop.

      • Excellent idea about compute shaders. So if I need to compute the collider points anyway, I assume I can just copy the resulting mesh to the mesh filter, rather than using the shader vertex offset technique you’re describing here? Or would it be more efficient to stick with your shader technique? Thanks again!

        • A vertex shader changes where the faces are drawn, but does not really change the original mesh filter. Because of that, the mesh collider will not be updated. And, for the same reason, you cannot see the change by reading the vertices in the mesh filter.

          You can use the same shader, with very small changes, to output the results (height) to a grayscale texture. Then you can read that in a script, pixel by pixel, to change a separate mesh that you can use for your mesh collider.

          Alternatively, you can use a compute shader to just get the values in an array, without the need to pass through a texture! That is waaaay faster.
