Epidermis

I got a tip that skin is a very important thing to render properly. THE method for rendering light passing through semi-solid objects such as skin, leaves etc. is called subsurface scattering, which is described here: http://en.wikipedia.org/wiki/Subsurface_scattering.

The only problem with this general and near-perfect method of rendering skin is that the lighting has to be computed per light source. This means deferred rendering is out the window, which is bad! So there is a method called SSSS, or Screen-Space Subsurface Scattering, which has seen use in engines such as Unreal Engine 3.

Our graphics artist, Samuel, thought it would be a very nice addition to Nebula if I were to add this algorithm in order to render skins properly. Here is the result:

[Image: facenosss]

No Screen-Space Subsurface Scattering

[Image: facesss]

With Screen-Space Subsurface Scattering

I accomplished this by adding a new MRT, which renders absorption color, scatter color and a binary mask marking the area where the SSS should take place. We need the mask because we are working in screen space, and must therefore be very careful about exactly what we apply our algorithm to, so that ordinary static objects don't get this effect.
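The role of the mask can be shown with a toy per-pixel composite (this is just a sketch of the idea — Nebula does this on the GPU, and the function and buffer layout here are my own illustration):

```python
def composite_sss(color, sss, mask):
    """Apply the blurred SSS result only where the mask bit is set,
    leaving ordinary static geometry untouched by the screen-space pass."""
    return [s if m else c for c, s, m in zip(color, sss, mask)]

# Only the middle pixel is flagged as skin, so only it takes the SSS result.
print(composite_sss([1, 2, 3], [9, 9, 9], [0, 1, 0]))  # [1, 9, 3]
```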

The algorithm itself is essentially a horizontal and vertical blur, much like bloom, except that the lighting is blurred instead of the color, weighted by the depth of nearby pixels. This, combined with a Gaussian distribution which 'favors' reddish hues, makes skin appear more life-like, since it simulates light spreading beneath the surface of the skin.
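A toy 1D version of such a pass might look like this — the per-channel widths (red widest), the depth scale and the clamp-at-edges addressing are all illustrative values of mine, not Nebula's actual shader:

```python
import math

def sss_blur_1d(light, depth, widths=(1.0, 0.6, 0.4), radius=3, depth_scale=10.0):
    """Depth-aware 1D Gaussian blur of a lighting buffer.

    `light` is a list of (r, g, b) lighting values, `depth` the per-pixel
    depth. Each channel gets its own Gaussian width so red scatters the
    furthest, approximating light diffusing under the skin. Neighbours
    across a large depth discontinuity contribute almost nothing, so the
    blur does not bleed between separate surfaces."""
    out = []
    for i in range(len(light)):
        acc, norm = [0.0] * 3, [0.0] * 3
        for o in range(-radius, radius + 1):
            j = min(max(i + o, 0), len(light) - 1)  # clamp at the edges
            # Depth difference suppresses samples from other surfaces.
            d = abs(depth[j] - depth[i]) * depth_scale
            for c in range(3):
                w = math.exp(-(o * o) / (2.0 * widths[c] ** 2) - d * d)
                acc[c] += light[j][c] * w
                norm[c] += w
        out.append(tuple(acc[c] / norm[c] for c in range(3)))
    return out

# A single bright spike of light spreads further in red than in blue:
light = [(0.0, 0.0, 0.0)] * 7
light[3] = (1.0, 1.0, 1.0)
blurred = sss_blur_1d(light, [0.5] * 7)
```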

The exact implementation in Nebula uses two passes. The first is the standard boring old skinning pass, which calculates lighting, albedo, emissive and specularity. Then we render all geometry which uses the SSS process, and while doing so we render out the absorption map and scatter map. We also render a small data buffer which holds our variables — SSS width, SSS strength and the SSS correction factor — as well as a bit which tells us whether a certain pixel should be SSS:ed or not. Finally we apply our screen-space post effect, which runs the actual algorithm and produces the image seen above. This means that we can have per-object settings for the SSS while still rendering the result in screen space. I also fixed a minor glitch in the original implementation, which gave artifacts when the edges of the screen cut through a subsurfaced area: by using the mirror address mode for the sampler, pixels along the border of the screen no longer suffer from wrapping samples, which removes the artifact.
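The difference between the two address modes is easy to see on the CPU. Here is a sketch of mirror addressing for a 1D index (the function is mine; on the GPU the sampler hardware does this):

```python
def mirror_index(i, n):
    """Reflect an out-of-range sample index back into [0, n), like a
    'mirror' sampler address mode. With 'wrap', a sample just off the
    left edge would instead come from the far right of the screen,
    which is exactly what caused the border artifact."""
    i %= 2 * n  # Python's % is non-negative for positive n
    return i if i < n else 2 * n - 1 - i

# A blur tap one pixel off the left edge of a 4-pixel row:
print(mirror_index(-1, 4))  # 0  (reflected back; wrap would give 3)
```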

The red rectangle shows an artifact which occurs when using a wrap address mode

I also applied the same technique to the HBAO shader, seeing as it previously suffered from the same problem with artifacts along the screen borders.

// Gustav

Particles, post effects and general ignorance

So I thought I could be clever and remove redundant device context switches by not setting the index buffer on the device context if that very same index buffer had been set before. Little did I know that I had, just recently, made the device context reset for each pass, so as to prevent unnecessary render targets, shader variables and shaders from staying attached to the rendering pipeline. The assumption was that if two objects with identical index buffers were rendered — say, first for the shadow maps and then for the actual color — the same index buffer would be set twice and the second set could be skipped. But seeing as the entire device context gets reset between passes, that assumption no longer holds. I chose to tell you this not out of stupidity (although it's somewhat embarrassing) but out of curiosity at what kinds of glitches may appear. The glitch was this: if a light had shadows enabled, and an object which was previously shadowed moved out of the viewable area of the light source (thus no longer rendering its shadow), some other object would disappear. So whenever you doubt your visibility system, or have some really strange and unpredictable glitch, it might be something as simple as premature optimization, which, as we know, is the root of all evil!
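The bug and its fix boil down to one rule: a redundancy cache is only valid as long as the state it mirrors is. A minimal sketch (class and method names are mine, not Nebula's):

```python
class DeviceContext:
    """Toy device context illustrating the bug: caching the last-set
    index buffer is only safe if the cache is cleared whenever the
    real context state is reset between passes."""

    def __init__(self):
        self.bound_index_buffer = None  # the actual device state
        self._last_set = None           # the redundancy cache

    def set_index_buffer(self, ib):
        if ib == self._last_set:
            return  # skip the "redundant" bind
        self.bound_index_buffer = ib
        self._last_set = ib

    def reset(self):
        self.bound_index_buffer = None
        # The fix: invalidate the cache too. Without this line, the
        # next set_index_buffer() with the same buffer is wrongly
        # skipped, and the geometry renders with nothing bound.
        self._last_set = None

ctx = DeviceContext()
ctx.set_index_buffer("quad_ib")
ctx.reset()                      # between passes
ctx.set_index_buffer("quad_ib")  # must actually rebind
```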

Anyways! Let's get to the good news! So, we have particles again — directly in the content browser. Epic winage. More like good use of Christmas… Not only are particles in place, but so are cube maps. In effect, I've also implemented a shader which uses environment mapping to get stuff shiny and pretty! Another use of cube maps is of course skyboxes, and as such, our at-will artist, Samuel, has made a very pretty skybox prototype which is now in use as the default skybox! This adds a much more 'professional' look to the content browser than the boring single-colored background.

[Image: gun]

Environment-mapped asset from our casual graphics artist

The shading system is no longer dependent on Nody, so we can once again implement new shaders. For DirectX we're using the ye olde Effects system, which works for all our purposes. And since Microsoft seems to have decided NOT to discontinue the Effects system, we will keep using it for our DirectX implementations. The awesome part of this system is of course the flexibility of having all shading states and such directly in the shader file. Also, all shader functions and application entry points are written directly into the file, so it's really easy to implement new stuff. For the OpenGL implementation we must design some sort of 'language' which allows for the same set of functionality. This could be solved using the same Nody-ish structure where you tag code as sections, so that the code itself can be loaded as different objects, which are then linked together using the very same file. But that's a project for the future. Only when this fundamental implementation is in place can we really design a node-based shader editor which lets graphics artists create new shaders themselves.

So tessellation is also back in business. It works, and it looks good — although it's a bit shaky, since there is a very high risk of holes appearing in the mesh over mesh seams. We've tested a very quick low-poly version of a high-poly mesh with a displacement map, and it looks very good! One must still be very careful with the UVs, however, and by extension also the displacement map. The only thing which mitigates this pain is the fact that the Unreal Engine seems to have the same problem. Until some clever fellow solves it, we're going to leave it to the artists. Seems safe enough.

As if this wasn't enough, I've also fixed post effects. By fixed I mean fiddled. By fiddled I mean adapted. Adapted to the DX11 render path. This means post effect entities work again — an ingenious construction of Nebula (not done by me) which allows post effects to be animated whenever one of these entities is encountered. A post effect entity gets triggered when the point of interest is inside it. So in the level editor, we will be able to place a post effect entity which triggers when entered. This can make for really cool effects, such as saturation, color balance, contrast, fog etc. The first post effect entity that is set becomes the default, so every level can have one to set the 'mood' of the level.
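The trigger logic can be sketched as a simple containment test. I'm assuming axis-aligned boxes and a plain parameter dict here purely for illustration — Nebula's actual entity representation may differ:

```python
def active_post_effect(entities, poi, default_params):
    """Return the parameters of the first post effect entity whose
    (axis-aligned) volume contains the point of interest, falling
    back to the level's default entity."""
    for box_min, box_max, params in entities:
        if all(lo <= p <= hi for lo, p, hi in zip(box_min, poi, box_max)):
            return params
    return default_params

# A foggy region in one corner of the level; elsewhere the default applies.
fog_zone = ((0, 0, 0), (10, 10, 10), {"fog": 0.8})
print(active_post_effect([fog_zone], (5, 5, 5), {"fog": 0.0}))   # {'fog': 0.8}
print(active_post_effect([fog_zone], (20, 5, 5), {"fog": 0.0}))  # {'fog': 0.0}
```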

And I've also reinstated the depth of field, which can be controlled through a post effect entity like every other post effect. I personally think it's a bit gimmicky, but it serves the purpose of softly forcing the player to focus on a specific point. The only thing that bothers me with the current DoF implementation is that the focal depth comes from hand-set variables rather than from where the focus actually should be. This could easily be handled by sending a 2D screen-space position for the focal point. The DoF range and intensity should still be variables, in my opinion, but the way the focal point is currently determined is a bit shaky.
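The suggested change — deriving the focal depth from a 2D screen-space point — amounts to one depth-buffer read plus a simple circle-of-confusion formula. A sketch under my own assumptions (flat row-major depth buffer, linear falloff; not the actual Nebula shader):

```python
def focal_depth_at(depth_buffer, width, uv):
    """Sample a depth buffer at a 2D screen-space point (uv in [0,1]^2)
    to obtain the focal depth, instead of hand-tuning a depth variable.
    The flat row-major list stands in for the real depth target."""
    height = len(depth_buffer) // width
    x = min(int(uv[0] * width), width - 1)
    y = min(int(uv[1] * height), height - 1)
    return depth_buffer[y * width + x]

def blur_amount(pixel_depth, focal, dof_range, intensity):
    """Simple circle-of-confusion: blur grows with distance from the
    focal depth, clamps at `dof_range`, and scales with `intensity` —
    range and intensity stay artist-controlled variables, as the post
    suggests."""
    return min(abs(pixel_depth - focal) / dof_range, 1.0) * intensity

# Focus on the upper-left of a tiny 2x2 depth buffer:
db = [0.1, 0.2, 0.3, 0.4]
focal = focal_depth_at(db, 2, (0.0, 0.0))  # 0.1
```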

Showing off the old and reimplemented DoF

[Image: hdr]

Showing some modified HDR parameters, such as bloom color and bloom range

// Gustav
