Particles, post effects and general ignorance

So I thought I could be clever and remove redundant device context switches by not setting an index buffer on the device context if that very same index buffer was already set. Little did I know that I had, just recently, made the device context reset between passes, so as to prevent unnecessary render targets, shader variables and shaders from lingering on the rendering pipeline. The assumption was that if two objects with identical index buffers were rendered, say first for the shadow maps and then for the actual color pass, the same index buffer would be set twice and the second bind could be skipped. But seeing as the entire device context gets reset between passes, this isn’t the case: the cached ‘currently bound’ buffer no longer matched what was actually bound. I’m telling you this not out of stupidity (although it’s somewhat embarrassing) but out of curiosity at what kinds of glitches may appear. The glitch here was that if a light had shadows enabled, and an object which was previously shadowed moved out of the viewable area of the light source (thus no longer rendering shadows), some other object would disappear. So whenever you doubt your visibility system, or if you have some really strange and unpredictable glitch, it might be something as simple as premature optimization, which, as we know, is the root of all evil!
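
The bug above can be boiled down to a tiny sketch (all names here are illustrative, not actual Nebula code): a wrapper that caches the last index buffer to skip redundant binds, which silently breaks unless the cache is invalidated whenever the context is reset between passes.

```cpp
// Hypothetical sketch of redundant-bind elimination. The bug: if the real
// pipeline state is wiped between passes but `current` is not cleared,
// the second pass thinks the buffer is still bound and never binds it.
struct IndexBuffer { int id; };

class ContextWrapper
{
public:
    void SetIndexBuffer(IndexBuffer* ib)
    {
        if (ib == this->current) return;   // redundancy check
        this->current = ib;
        this->bindCalls++;                 // stands in for the real API bind
    }

    // Called between passes: the real device context is reset here,
    // so the cache MUST be invalidated too, otherwise the cache lies.
    void ResetForPass()
    {
        this->current = nullptr;           // the fix
    }

    int BindCalls() const { return this->bindCalls; }

private:
    IndexBuffer* current = nullptr;
    int bindCalls = 0;
};
```

With the `ResetForPass()` invalidation in place, the same buffer set twice within one pass is still skipped, but it gets re-bound after a reset instead of being dropped on the floor.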

Anyways! Let’s get to the good news! So, we have particles again. Directly in the content browser. Epic winage. More like good use of Christmas… Not only are particles in place, but so are cube maps. In effect, I’ve also implemented a shader which uses environment mapping to make stuff shiny and pretty! Another use of cube maps is of course skyboxes, and as such our at-will artist, Samuel, has made a very pretty skybox prototype which is now in use as the default skybox! This adds a much more ‘professional’ look to the content browser compared to the boring single-colored background.
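
The core of environment mapping is simple: reflect the view direction about the surface normal and use the result to index the cube map. A minimal sketch of that math (the same formula as HLSL’s `reflect()` intrinsic; types are illustrative):

```cpp
#include <cmath>

// Minimal vector type for illustration; the engine's own math types differ.
struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// R = I - 2 * (N . I) * N — the direction used to sample the cube map,
// where I is the incident (view) direction and N the surface normal.
static Vec3 Reflect(const Vec3& i, const Vec3& n)
{
    float d = 2.0f * Dot(n, i);
    return { i.x - d * n.x, i.y - d * n.y, i.z - d * n.z };
}
```

For example, a view ray going straight down onto an upward-facing surface bounces straight back up, which is exactly what makes flat shiny surfaces mirror the skybox above them.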

Environment mapped asset from our casual graphics artist

The shading system is no longer dependent on Nody, so we can once again implement new shaders. For DirectX we’re using the ye olde Effects system, which works for all our purposes, and since Microsoft seems to have decided NOT to discontinue the Effects system, we will keep using it for our DirectX implementations. The awesome part of this system is of course the flexibility of having all shading states and such directly in the shader file. Also, all shader functions and application entry points are written directly into the file, so it’s really easy to implement new stuff. For the OpenGL implementation we must design some sort of ‘language’ which allows for the same set of functionality. This could be solved using the same Nody-ish structure where you tag code as sections, so that the code itself can be loaded as different objects, which are then linked together using the very same file. But that’s a project for the future. Only when this fundamental implementation is in place can we really design a node-based shader editor which lets graphics artists create new shaders themselves.
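
As a rough illustration (not one of our actual shaders), this is the kind of thing a single Effects (.fx) file can hold: render states, both shader entry points, and the technique that wires them together:

```hlsl
// Sketch of an .fx file: states, entry points and techniques in one place.
BlendState NoBlend { BlendEnable[0] = FALSE; };
DepthStencilState DepthOn { DepthEnable = TRUE; };

cbuffer PerFrame { float4x4 ViewProjection; };

float4 vsMain(float4 pos : POSITION) : SV_POSITION
{
    return mul(pos, ViewProjection);
}

float4 psMain() : SV_TARGET
{
    return float4(1, 1, 1, 1);
}

technique11 Solid
{
    pass p0
    {
        SetBlendState(NoBlend, float4(0, 0, 0, 0), 0xFFFFFFFF);
        SetDepthStencilState(DepthOn, 0);
        SetVertexShader(CompileShader(vs_5_0, vsMain()));
        SetPixelShader(CompileShader(ps_5_0, psMain()));
    }
}
```

It is precisely this “everything in one file” property that a future OpenGL-side ‘language’ would have to replicate.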

So tessellation is also back in business. It works. It looks good. Although it’s a bit shaky, since there is a very high risk of holes appearing in the mesh along mesh seams. We’ve tested a very quick low-poly version of a high-poly mesh with a displacement map, and it looks very good! One must still be very careful with the UVs, however, and of course by extension also the displacement map. The only thing which mitigates this pain is the fact that the Unreal Engine seems to have the same problem. Until some clever fellow(s) solves it properly, we’re going to leave it to the artists to work around. Seems safe enough.

As if this wasn’t enough, I’ve also fixed post effects. By fixed I mean fiddled. By fiddled I mean adapted. Adapted to the DX11 render path. This means that post effect entities, an ingenious Nebula construction (not done by me), work again; they allow post effects to be animated whenever one of these entities is encountered. A post effect entity gets triggered when the point of interest is inside it. So in the level editor, we will be able to place a post effect entity which is triggered when entered. This can make for really cool effects, such as saturation, color balance, contrast, fog etc. The first post effect entity set will act as the default, so every level could have one to set the ‘mood’ of the level.

And I’ve also reinstated the depth of field, which can be controlled through a post effect entity like every other post effect. I personally think it’s a bit gimmicky, but it serves the purpose of softly forcing the player to focus on a specific point. The only thing which bothers me with the current DoF implementation is that it uses variables to determine at what depth the focus lies, not where on screen it lies. This should be easily handled by sending a 2D position for where in screen-space the focal point should be. The DoF range and intensity should still be variables in my opinion, but the method with which the focal point currently gets determined is a bit shaky.
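
The suggested change can be sketched as follows (this is my proposal, not the current implementation; `DepthAt` stands in for a depth buffer fetch): read the depth at a 2D screen-space focal point, then derive a blur factor from each pixel’s distance to that depth.

```cpp
#include <algorithm>
#include <cmath>

// Stand-in for sampling the depth buffer at a screen-space position.
static float DepthAt(float /*x*/, float /*y*/)
{
    return 10.0f;   // pretend the depth buffer reads 10 units here
}

// 0 = perfectly in focus, 1 = fully blurred; `range` is the distance over
// which focus falls off, analogous to the DoF range variable.
static float BlurFactor(float pixelDepth, float focalX, float focalY,
                        float range)
{
    float focalDepth = DepthAt(focalX, focalY);
    return std::min(std::fabs(pixelDepth - focalDepth) / range, 1.0f);
}
```

With this scheme the focal plane follows whatever is under the chosen screen point, while range and intensity stay as ordinary tweakable variables.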

Showing off the old and reimplemented DoF

Showing some modified HDR parameters, such as bloom color and bloom range

// Gustav

4 Comments

      • Alexander

        Cool!
        I would like to get a copy of your variant of the engine. You have many tools in your Nebula that would be very helpful for me 🙂
        I’m just getting acquainted with the Nebula source so I can use it in my little project.
        When, approximately, can we expect it to be published? Will your Nebula be published for everyone at all?

        P.S. sorry for my bad English
