Ok. First thing, I’m sorry this isn’t the second part of the particles tutorial. I vastly underestimated how complex it would be to extract a clear explanation of our FX system, which is very, very cool in a twisted and intricate way. Making the rest of the explanation worthwhile will require isolating some code from a more complex set, and I’ll probably have to split it into two more parts to make the effort manageable.
But today, I’ll write about normal maps… and rocks. I have already explained how we used normal mapped sprites for Transcripted. If you don’t know what normal maps are, you can take a look at this previous post to learn the basics. Drifting Lands is a much more 3D game than Transcripted was. All ships and a lot of background elements are “real 3D”, but with pretty stylized modelling and fancy shaders that will hopefully make the whole thing look more like an illustration. The overall look of the game is influenced by a lot of games (Journey, Diablo 3, Darksiders…) and a lot of animated movies (mainly Disney and Ghibli productions). The art direction is focused more on shapes, silhouettes and harmony of colors than on detailed hi-res textures or realistic physically-based materials.
Drifting Lands takes place over, around and inside a broken planet, so there will be plenty of flying bits of rock in all the environments of the game. These background rocks represent a large part of the visual identity of the game, so we spent quite some time finding the right balance between polycount and shader complexity to create nice-looking blocks. For now, we will focus on how these rocks catch the light.
Look carefully at the floating islands of rock above. See how the external silhouette is really simple? Nothing random here: it’s been handcrafted with care by alternating long and short edges at various angles, but it’s still very low poly. Now pay closer attention to the “internal edges” where light and shadow meet. You’ll still see the same general design but with a very important difference: there are a lot of dents and notches making the whole thing visually much richer. Usually, this kind of detail comes with quite complex geometry. If you tried to model the exact same stylized look in a traditional way, it would take you a lot of time, and you would most likely fail to achieve the same shading behavior. See how we can change the light’s position and still get the same nice dented but sharp-looking edges between light and shadow?
You could think this effect is achieved by some kind of cel shading with a non-linear lighting ramp. But you would be wrong. Our rocks are indeed lit with a color ramp texture, but it’s mostly a linear gradient. No, the trick is to tweak the normals of some pixels with normal maps and create something which is in fact physically impossible.
Let’s consider only two adjacent faces of a pillar of rock. Each face of the pillar is composed of several triangles smoothed together: their normals are continuous and all point in roughly the same direction. What we want is to transform the normals of some pixels along the edge of the left face so they align with the normals of the right face. And we will also modify the normals of some pixels on the right face to align with the polygons of the left.
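To make the trick concrete, here is a minimal Python sketch of the idea (the face directions, the light direction and the Lambert diffuse term are all illustrative assumptions, not our actual shader code). A pixel on the shadowed face that borrows the lit face’s normal receives exactly the lit face’s amount of light:

```python
import math

def lambert(normal, light_dir):
    # Diffuse term: clamped dot product of two unit vectors.
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

s = math.sqrt(0.5)
left_face_normal = (-s, s, 0.0)    # left face of the pillar, looking up-left
right_face_normal = (s, s, 0.0)    # right face, looking up-right
light = (s, s, 0.0)                # light coming from the upper right

print(lambert(left_face_normal, light))    # ~0.0 -> the left face is in shadow
print(lambert(right_face_normal, light))   # ~1.0 -> the right face is fully lit

# A "dent" pixel on the left face simply borrows the right face's normal,
# so it is shaded exactly like the right face even though it geometrically
# belongs to the left one: a physically impossible but convincing notch.
dent_on_left = right_face_normal
print(lambert(dent_on_left, light))        # ~1.0 -> bright notch in the shadowed face
```

The renderer never checks that a normal is geometrically plausible, which is exactly what this whole technique exploits.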
If you want to translate that into “standard” geometry… well, you can’t; you have to use normal maps or altered geometry. When we first went down this path, we went the “hard” way: hand-drawn normal maps. For each face of the stone pillars, we searched for the tangent-space colors that would simulate the normals of the adjacent face. And here is what the result looks like…
It’s not really hard to do, just tedious and inefficient. But the result was cool, so we started to think about a better way to do it. In 3DSMax, when you model something, normals are automatically calculated to be perpendicular to the surface. And when two triangles that are not strictly coplanar are smoothed together, the normal along the common edge is an average of the two triangles’ normal directions.
Typically you don’t manipulate normals directly, you only assign smoothing groups to triangles or polys. In Maya, you define creases between faces, which is the same as telling the two triangles sharing that edge to belong to different smoothing groups. I don’t know Maya well enough, but in 3DSMax you can edit normals manually and point them wherever you want, even if it has no “real-world” meaning. For example, if you totally invert the normals, the modified faces will be lit as if the light came from the opposite direction: faces looking at the light will be dark and faces looking away will be bright. Ok, that one is not often useful… Now imagine several crossed planes creating a fake volume of vegetation. Many light positions will betray the true nature of this model, making it look very cheap.
Left : “classic” normals / Right : edited normals
But if you take all the normals of these planes and point them in the same direction, let’s say up… you get a uniformly lit bush receiving an amount of light similar to the ground it’s placed on.
Let’s get back to our rocks, shall we? Right now, we have a low poly model, but we want to add all those dents along the different edges. First step: unwrap the UVs of the low poly model. The important point here is that faces with edges to modify must be separated in the UV layout. Because of texture filtering and mip mapping, you’ll only be able to create “clean” dents between faces belonging to different UV clusters. You also have to keep clear space for padding between your clusters: I’d recommend between 8 and 16 pixels, depending on the normal map resolution and the size the objects will have on screen most of the time. You should end up with something like this:
Now it’s cutting time, but first you have to create a copy of your mesh. The original will stay low poly and the new one will be the “high poly” version. You must add geometry by cutting polygons along the edges you want to dent, but keep everything flat: you add vertices and edges but do not move anything! If edges are not displayed in the viewport, the high poly version should for now look roughly the same as the low poly one. A few shading differences may appear due to the new tessellation, but we will deal with that later. You don’t have to create really complex dent shapes right now; you’ll be able to refine the normal map manually later, because you’ll have color references.
Now it’s time for Edit Normals, the Max modifier used to manipulate normals manually. Be aware that Edit Normals must usually be the top modifier in the modifier stack: if you add other modifiers like Edit Mesh or Edit Poly on top of it, the smoothing groups are reapplied, normals are automatically recalculated and your manual editing is overwritten. For each poly to modify, the process is as follows:
– select the normals of the poly you want to edit (you can select them all at once through face selection)
– split the normals of the selected poly (this is the same as making this poly part of another smoothing group)
– unselect everything, because the newly created normals are also selected
– copy the direction of one of the normals of the adjacent face of the rock pillar
– reselect the poly you were editing; now only the normals of that poly are selected, not the normals of the surrounding faces
– paste the normal direction copied from the adjacent face
– repeat for all dents! Note that you can edit multiple polys at the same time if they face the same direction and must be converted to the same new direction.
You should now have something looking a lot like this :
Turn the light around in the viewport and see how the light reacts along the edges. That’s what we want! You could export the model as is and it would still work exactly the same in Unity (provided you’ve configured the FBX importer to “import” normals and not “calculate” them). But we don’t want to use that model, because:
1 – it’s high poly
2 – the tessellation added for the dents probably created bad-looking smoothing irregularities
The next step is normal map baking. It’s exactly what you do when you work on a AAA game and want to turn a 3-million-poly character into a 10,000-poly model. You’ve got a high poly model and a low poly model, and you want to calculate the normal map that will make the low poly model look like the high poly one. I won’t cover the whole process in detail: there would be too much to say, and you can find hundreds of tutorials out there about it. So let’s keep it a simple step by step:
– place the low and high poly models at the same coordinates (they should fit really well because they actually have the same 3D shape)
– select the low poly model
– go to the render to texture panel (shortcut ‘0’)
– choose your padding
– in the projection mapping section, enable projection mapping, pick the high poly object and go to the projection options
– select the RayTrace method, disable Use Cage, choose tangent normal space and set green orientation to up
– return to the render to texture panel, use the existing mapping channel, and in the output section add “normal map”,
– select file name and destination, texture resolution…
– … and finally click the render button!
If you see a window rendering something which does not look like a normal map, keep cool, it’s ok (in the twisted minds of Autodesk engineers at least): the file written to your hard drive is a proper normal map. Open the normal map in Photoshop and add a layer with the UV layout on top of it (you can easily render the UV layout from the Unwrap editor window: Tools > Render UVW Template). At this point you want to do two things:
– remove all the smoothing problems caused by the detail tessellation: every bluish color gradient which is not clearly a color spot corresponding to a notch or dent must be painted over with a uniform 128/128/255
– if you want, add even more detail to the dents you roughly modeled in 3DSMax. Just reuse the colors calculated by 3DSMax, or 128/128/255 if you want to “erase” some bits. Don’t forget to create color padding around every color spot you add.
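A side note on why 128/128/255 works as an eraser: assuming the standard tangent-space encoding (color = normal * 0.5 + 0.5, stored per 8-bit channel), that color decodes to almost exactly the flat normal (0, 0, 1), so any pixel painted with it shades like the unmodified surface. A quick Python check:

```python
def decode(r, g, b):
    # Standard tangent-space normal map encoding: color = normal * 0.5 + 0.5,
    # stored per 8-bit channel, so decoding is color / 255 * 2 - 1.
    return tuple(c / 255.0 * 2.0 - 1.0 for c in (r, g, b))

# 128/128/255 is (almost exactly) the flat normal (0, 0, 1): painting it
# over a spot makes that spot shade like the untouched, smooth surface.
x, y, z = decode(128, 128, 255)
print(round(x, 3), round(y, 3), z)   # 0.004 0.004 1.0
```

(The tiny 0.004 offset exists because 8-bit channels have no exact mid-gray; it is far too small to see.)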
Left : raw normal maps from render to texture / Right : manually edited normal maps
Now you want to import the low poly model into Unity (important parameters: import normals, not calculated / split tangents on). If you add a material with normal mapping and import the normal map from Photoshop, it should work!
… or maybe not completely. I was stuck on one last problem for a long time and only solved it recently. For some creases I was using the same technique, but in Unity the light behaved strangely with the normal map created in Max. This is due to the way normal maps are stored in Unity. To save resources, Unity stores normal maps in only 2 channels, even though you need 3 values to define a normal. If you store 2 components of a 3D vector but you know its length, you can calculate the third (because Pythagoras is your friend). And it happens that we do know the length of a normal: it’s one. Every Unity built-in shader uses a function called UnpackNormal whose role is to recalculate the missing Z (blue) component. But there’s just a tiny problem: there are in fact two possible Z values, one positive and one negative. By default Unity decides the right one is the positive value, and honestly I can’t blame that decision. In a realistic world, tangent-space normals with a negative Z value are impossible… but our technique relies on creating impossible surfaces.
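Here is a small Python sketch of what the Z reconstruction conceptually does (a simplified model for illustration; the real UnpackNormal also depends on which channels the texture format stores). It shows why a normal tilted beyond 90°, i.e. one with a negative Z, comes back flipped:

```python
import math

def unpack_normal(x, y):
    # Simplified model of Unity's UnpackNormal: rebuild Z from X and Y,
    # assuming a unit-length normal... and assuming Z is positive.
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return (x, y, z)

# A "legal" tangent-space normal (tilted less than 90° from the surface)
# survives the round trip: drop Z, rebuild it, get the same vector back.
x, y, z = unpack_normal(0.6, 0.0)
print((x, y, round(z, 3)))   # (0.6, 0.0, 0.8)

# An "impossible" normal tilted more than 90° has Z < 0. Once Z has been
# dropped there is no way to tell: the rebuilt Z is always positive, so
# (0.6, 0.0, -0.8) unpacks to (0.6, 0.0, +0.8). This is the bug we hit.
```

The X and Y components alone are ambiguous, and Unity picks the only answer that makes sense for real-world surfaces.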
I’ll get back to the shader we use in a later post, but here is our solution: we have to import the normal map as a standard texture. Our shader is a surface shader, and in the surface function we replace the usual:
o.Normal = UnpackNormal( tex2D( _BumpMap, IN.uv_BumpMap));
with:
o.Normal = tex2D( _BumpMap, IN.uv_BumpMap).rgb * 2 - 1;
And yes, the editor does complain that we use a standard texture as a normal map, but we know better :)
A few additional notes :
– the Unity normal map storage problem only arises when you try to tilt a normal by more than 90° around one of the axes. If you don’t do that, you’re safe with classic Unity normal maps.
– this technique is not without problems: dents on the outer silhouette of the object don’t always look terribly good, so you should keep them small.
And that’s all for now! Do not hesitate to ask questions or propose improvements to this technique.
If you like what we do, please share our work by liking our Facebook page, following us on Twitter or sharing our YouTube channel. We’ll self-publish Drifting Lands and we need all the love we can get out there ;)