So, where were we… After all, it’s been nearly a year waiting for this long-overdue second part ^^
Ha, yes! In the first part, we saw how to generate a two-part mask for the appearance and disappearance of our particles. We also covered why we use a level operation rather than a binary cutout mask. So we’ve got something a bit like this:
So far, we haven’t talked much about the diffuse aspect of our particles and their color. That’s what we’ll focus on now. The most basic thing you can do is just apply a diffuse texture to the particle, preferably something vaguely related to the animated mask. But if your fade-in mask differs from your fade-out mask, you won’t get anything very coherent over the particle’s whole life. Let’s forget that, it just sucks…
The first thing we want is to animate the color of the particle along its life. If we stick to our example particle, which looks a bit like part of an explosion, the colors are easy to choose: the first part, the appearance, is supposed to look like fiery gases expanding. So it will start as a bright yellow, turning orange then brown as the gases cool down. Then, as it fades out, it will rapidly turn to black smoke. Rather than setting colors directly in the particle’s material and tweening through the shader, we will use a texture. Sampling a texture is more costly than tweening between two colors, so why use one? Well, with a texture we can finely tune all the colors the particle will take over time. We’re not limited to two or three tints: we can set up to 128, 256 or even 512 tints along its life. 512 is definitely overkill, even 128 is, since we’re talking about FXs lasting 3 or 4 seconds at most. The second reason for using a texture is more important, but let’s talk about that a bit later. If we go step by step, I’m less likely to lose you along the way :)
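To make the idea concrete, here is a small language-agnostic sketch (in Python, not the game’s actual C# code) of what “sampling a color ramp by normalized life” boils down to. The ramp values and function names are hypothetical, purely for illustration:

```python
def sample_ramp(ramp, life_t):
    """Pick a color from a 1D ramp (a list of RGB tuples) at normalized life t in [0, 1].
    This mimics point-sampling a 1-pixel-high ramp texture along its width."""
    t = min(max(life_t, 0.0), 1.0)                 # clamp t to the ramp's range
    index = min(int(t * len(ramp)), len(ramp) - 1) # map t to the nearest texel
    return ramp[index]

# A tiny 4-texel ramp: bright yellow -> orange -> brown -> black smoke.
ramp = [(255, 230, 60), (255, 140, 0), (120, 60, 20), (30, 30, 30)]
print(sample_ramp(ramp, 0.0))   # start of life: bright yellow -> (255, 230, 60)
print(sample_ramp(ramp, 1.0))   # end of life: near black -> (30, 30, 30)
```

In the real thing, the GPU does this lookup with bilinear filtering, so neighboring texels blend smoothly instead of stepping.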
So with our color ramp, we should be able to achieve this with the correct shader (we will see the actual complete shader at the end).
And that’s the texture we’re using to define the colors along the life of the particle.
From the left of the texture to the middle is the appearance; from the middle to the right border is the disappearance. Please note that this is not related to the actual duration of our fade-in / fade-out phases. We can have a fade-out phase 5 times longer than our fade-in, but each part will still take half of the color ramp. This mapping is handled before the shader (more details a bit later too). I’ve used a ‘large’ rectangular texture here for the sake of readability. Usually, color ramp textures are defined as 1D, or one-dimensional: you only use the width to store information and only need a height of 1 pixel. Making it square is useless… or maybe not: we could use the height to store additional information. Because honestly, what we have right now is not brilliant. It’s just a plain uniform color animated over time. It still sucks.
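The phase-to-half-ramp mapping described above can be sketched like this (a Python illustration of the math, with hypothetical names, not the actual pre-shader code):

```python
def ramp_u(elapsed, fade_in, fade_out):
    """Map the particle's elapsed time to a ramp U coordinate: the appearance
    phase always fills [0, 0.5] and the disappearance phase fills [0.5, 1],
    whatever the actual durations of the two phases are."""
    if elapsed < fade_in:
        return 0.5 * elapsed / fade_in
    return 0.5 + 0.5 * min((elapsed - fade_in) / fade_out, 1.0)

# Fade-out five times longer than fade-in: each phase still gets half the ramp.
print(ramp_u(0.5, fade_in=1.0, fade_out=5.0))  # halfway through appearance -> 0.25
print(ramp_u(3.5, fade_in=1.0, fade_out=5.0))  # halfway through disappearance -> 0.75
```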
The final touch is to apply a color gradient drawn along the gradients used for the fade in / out effect. So our previous color ramp now looks more like this:
On the horizontal axis is still the life of the particle. On the vertical axis is stored the gradient drawn along the visible part of the mask. I insist : the visible part of the mask at a specific time defined on the horizontal axis. This gradient is animated with the mask and can change color over time. So the result looks a bit like this :
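Conceptually, the shader now does a 2D lookup instead of a 1D one: U comes from the particle’s life, V from the grayscale value of the animated mask at the current pixel. A minimal sketch (Python stand-in for the texture fetch, with made-up colors):

```python
def sample_ramp_2d(ramp2d, life_u, mask_v):
    """2D ramp lookup: U (columns) is the particle's normalized life,
    V (rows) is the grayscale value read from the animated mask."""
    rows, cols = len(ramp2d), len(ramp2d[0])
    u = min(int(min(max(life_u, 0.0), 1.0) * cols), cols - 1)
    v = min(int(min(max(mask_v, 0.0), 1.0) * rows), rows - 1)
    return ramp2d[v][u]

# 2 rows x 2 cols: row 0 = colors for low mask values (edges),
#                  row 1 = colors for high mask values (core).
ramp2d = [[(80, 30, 10), (40, 40, 40)],
          [(255, 230, 60), (90, 90, 90)]]
print(sample_ramp_2d(ramp2d, 0.1, 0.9))  # early life, bright core -> (255, 230, 60)
```

Adding an alpha component to each texel is what gives the transparency control discussed next.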
That’s way better! And don’t forget, that’s just ONE particle. We can display dozens of those with various masks and color ramps! What’s also cool is that we can use the alpha channel of this 2D color ramp to define the transparency of the particle along its whole life. We don’t have to make the particle disappear completely through its mask; we can also do it through the alpha channel of the ramp. Here is the alpha mask of a 2D ramp and the result: a particle fading out at the end, but mainly in the darkest (upper) part of its 2D ramp.
There’s still one thing to discuss. Since we’re applying a color gradient to the grayscale images provided by the red and green channels of the animation texture, we get a hiccup between the two phases. Something a bit like this:
Indeed, since the two animation gradients are different, the colors can’t match between the phases. To correct this problem, we have added a short transition or blending period between appearance and disappearance. And the problem is gone…
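The blending period amounts to a cross-fade between the two mask channels around the phase switch. Here is a sketch of that idea (Python illustration with hypothetical parameter names, not the actual shader code):

```python
def blended_mask(red_mask, green_mask, elapsed, fade_in, blend):
    """Cross-fade from the appearance mask (red channel) to the disappearance
    mask (green channel) over a short window centered on the phase switch."""
    start = fade_in - blend / 2.0
    t = (elapsed - start) / blend      # 0 before the window, 1 after it
    t = min(max(t, 0.0), 1.0)
    return red_mask * (1.0 - t) + green_mask * t

# Outside the window we get a pure phase mask; in the middle, an even mix.
print(blended_mask(1.0, 0.0, elapsed=0.2, fade_in=1.0, blend=0.4))  # before the window -> 1.0
print(blended_mask(1.0, 0.0, elapsed=1.0, fade_in=1.0, blend=0.4))  # mid-window, close to an even mix
```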
Be warned! If you’re an artist with no development skills, now may be the time to leave. If you think this technique could be useful to you, go get your more bearded technical fellows to read what comes next, so they can implement it their own way, and maybe a lot better than I did.
I won’t detail the whole implementation of our particle system; it would be far too long. The basics are the same as for any other 2D system developed for Unity:
– we create a pool of FXs which are actually procedurally generated quads
– when we need an FX, we initialize the first available one in the pool’s free list
– when it’s done, it automatically “self-disposes” and goes back into the pool’s free list (instead of being destroyed and garbage collected).
If you’re not familiar with pooled objects… well, you should be! So go find some documentation on the subject. Here’s the Unity tutorial about it. For most action games where you need a lot of objects and a good steady framerate, you should definitely use pools rather than instantiating / destroying objects.
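The three steps above can be sketched in a few lines. This is a minimal, language-agnostic pool (Python, with hypothetical names), not the actual game code:

```python
class FXPool:
    """Minimal object pool: spawn pops from a free list, despawn pushes back.
    All allocation happens up front, so nothing gets garbage collected later."""
    def __init__(self, size, factory):
        self.free = [factory() for _ in range(size)]
        self.active = []

    def spawn(self):
        if not self.free:
            return None              # pool exhausted: skip the FX (or grow the pool)
        fx = self.free.pop()
        self.active.append(fx)
        return fx

    def despawn(self, fx):
        self.active.remove(fx)
        self.free.append(fx)         # back to the free list, ready for reuse

pool = FXPool(2, factory=dict)
a = pool.spawn()
pool.despawn(a)
print(len(pool.free), len(pool.active))  # -> 2 0
```

In the real system, `despawn` is what the particle calls on itself when its lifetime ends.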
One other major goal is to always use as few materials as possible, because it will really hurt performance if you don’t. But how can all our particles use the same material when we clearly have to treat them individually? Each particle follows its own lifetime evolution; they can share the same textures but not the same parameters. How can we change the parameters of individual particles without creating new materials? Because yes, in Unity, when you modify a material, you either modify all the objects using it (through sharedMaterial) or you actually create a new copy of the original material, even if you’re not really aware of it.
When I was young and dumb… Errr, 3 years ago, so OK, I cannot invoke youth as an excuse: I was just dumb. Well, before, I used information stored in the geometry to make each sprite or particle unique. They all used the same material, but I would store the opacity or tint of each sprite in the vertex color of its 4 vertices. Geometry properties like UVs, normals and vertex colors are linearly interpolated across the surface of each triangle; if you set a single value on all four vertices, you get that value in the fragment shader for all pixels of the quad. It’s fun to use geometry data for purposes other than what it was created for, but it’s sometimes inefficient: you have a limited set of values available, their type is fixed, and sometimes you really need them for their original use. The other problem is that you have to actually modify the mesh content to animate your values, and updating a mesh, even a simple one, is not cheap. If you want to see a really fun and valid use of tweaking geometry data for other purposes, go over there to see how we used vertex color to animate a beating heart in Transcripted.
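Why does setting the same value on all four vertices work? Because vertex attributes are interpolated barycentrically across each triangle, and a constant interpolates to itself. A quick sketch of that property (Python illustration, hypothetical names):

```python
def interpolate_vcolor(c0, c1, c2, bary):
    """Vertex attributes are interpolated barycentrically across a triangle;
    this is how a per-vertex color reaches every fragment."""
    w0, w1, w2 = bary
    return tuple(w0 * a + w1 * b + w2 * c for a, b, c in zip(c0, c1, c2))

# The old trick: give all vertices the SAME value and every fragment sees it,
# whatever the barycentric weights are.
opacity = (0.5, 0.5, 0.5)
print(interpolate_vcolor(opacity, opacity, opacity, (0.25, 0.25, 0.5)))  # -> (0.5, 0.5, 0.5)
```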
The solution lies elsewhere, in the concept of MaterialPropertyBlock implemented by the good folks behind Unity. A MaterialPropertyBlock is exactly what we need: a set of custom data defined per renderer while still using the exact same material. All our particles use more or less the same material, but we use UV coordinates and MaterialPropertyBlocks to make them unique.
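The mental model is simple: one shared set of defaults, plus a small per-renderer override table that is merged at draw time without ever copying the material. This Python sketch only illustrates that model; it is not Unity’s API:

```python
class SharedMaterial:
    """One material shared by every particle; holds the default parameters."""
    def __init__(self, **defaults):
        self.defaults = defaults

class PropertyBlock:
    """Stand-in for a MaterialPropertyBlock: per-renderer overrides that are
    merged with the shared material at draw time, without copying the material."""
    def __init__(self):
        self.overrides = {}

    def set(self, name, value):
        self.overrides[name] = value

def resolve(material, block):
    """What the renderer effectively sees: defaults overridden by the block."""
    params = dict(material.defaults)
    params.update(block.overrides)
    return params

mat = SharedMaterial(tint=(1.0, 1.0, 1.0), life=0.0)
block = PropertyBlock()
block.set("life", 0.7)                 # this particle is 70% through its life
print(resolve(mat, block)["life"])     # -> 0.7
print(resolve(mat, block)["tint"])     # default from the shared material
```

In Unity itself, the override table is filled with `MaterialPropertyBlock.SetFloat` / `SetColor` and handed to the renderer with `Renderer.SetPropertyBlock`, while `mat` stays untouched.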
A small disclaimer is required here: using MaterialPropertyBlocks is not ‘free’ and does have an impact on performance. Rendering 1000 objects with the same material is significantly slower if you apply a unique MaterialPropertyBlock to each of their renderers: applying the MaterialPropertyBlock has a cost in itself, and it prevents automatic batching. Still, it’s significantly faster to use MaterialPropertyBlocks than to modify the geometry of each particle instance, because it doesn’t increase the number of VBOs in your scene. What I’m trying to say is that you can’t do everything with this technique. We chose this trick because it allows the creation of compelling visual effects with far fewer particles. More complex animated particles, but with very small memory requirements: one texture the size of the FX and one 2D ramp that can be shared between many different FXs. No need for large spritesheets.
I will now detail two classes and a shader implementing the basics of our system. As I said, I won’t describe our whole FX system here; it’s way too many lines of code! I’ve tried to isolate only the relevant code, and I’ve excluded one major aspect: our test particle here uses simple textures containing only its own content. Of course, we don’t use individual textures in the game. All our FX maps are packed in atlases, and there’s a bit of UV juggling to crop only what you need for each particle. Still, I’ve left mentions of these UV manipulations in the shader and scripts. All will be explained in the comments.
So here we go !
Our project :
– FX_generator.cs: a MonoBehaviour held by the main camera. This is where everything is set in motion, and the only MonoBehaviour we will use.
– QuadTransitionFX.cs: an object instanced by FX_generator by the hundreds or thousands. Not a MonoBehaviour, but this object instantiates a prefab and feeds its Renderer with a new MaterialPropertyBlock every frame. It handles the logic of a single particle and updates its ‘view’.
– fx_shader_transition.shader: a surface shader animating the texture
– a single scene with an HDR camera and a bloom camera effect. The camera holds the FX_generator MonoBehaviour.
– a single QuadTransitionFX prefab, which is a quad with a material using the fx_shader_transition shader. The animation and ramp textures are already assigned to the material.
As you can see a lot has been done ‘manually’ to simplify the code part.
And now the QuadTransitionFX Class :
And Finally the shader :
Finally, after nearly one year, it’s done. I’m afraid the shader is not exactly crystal clear, but if you take it step by step and experiment a bit with it, you should be OK and get a clear idea of how it works. If you want a LOT MORE practical examples of what you can achieve with this method, you can download the free alpha version of Drifting Lands on Steam. Pretty much all the visual FXs in the game are based on this technique.
You can download the little test project as a Unity Package for Unity5 right HERE.