The beginning of a preprocessing pipeline.

We’ve been somewhat silent. I would say very silent. Silence doesn’t mean we’ve done nothing though, quite the contrary.

We’ve been working A LOT on the tools, mainly the content browser and the level editor. On my part, I’ve been fiddling with the content browser to make it simpler, more minimalistic and faster. For example, the content browser now allows one or several particle nodes to be attached to a static model, which means that artists can build more complex (albeit non-skinned) pre-defined objects by adding particle nodes to them. The content browser is also more responsive and interactive due to the new system used when saving, loading and applying object parameter changes such as textures and other shader variables.

More relevant, however, is the new IBL pipeline, which allows a level designer to place light probes in the scene, have them render a reflection cube map and an irradiance cube map, and then have the result applied in a novel fashion on overlapping objects. To do so, a designer puts a light probe in the scene, gives it a few parameters, presses a button and woop, the entire area is affected by said reflections and irradiance. This effect gives an illusion of realtime GI, since it simulates specular light bounces through reflections and ambient light bounces through irradiance. The following image shows the result as displayed inside the level editor:

This image shows how reflections are projected inside the scene.

To do this, we first capture the scene from 6 angles using a bounding box as the capture area. This area is later used to reproject the reflection rays so that we get parallax-corrected cube maps when rendering. The result of the render is saved as a render target and is then processed by CubeMapGen, which is now integrated into Nebula as an external library. Using a predefined set of settings, we generate the reflections and, optionally, the irradiance; output them to the work folder and then assign them to the influence zones.
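The parallax correction itself boils down to intersecting the reflection ray with the probe's capture box and sampling the cube map with the direction from the probe origin to the intersection point. Here is a minimal sketch of that idea in plain C++; the function name and the hand-rolled float3 are illustrative only, not Nebula's actual code:

```cpp
#include <algorithm>

struct float3 { float x, y, z; };

static float3 operator-(float3 a, float3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float3 operator+(float3 a, float3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static float3 operator*(float3 a, float s)  { return { a.x * s, a.y * s, a.z * s }; }

// Intersect the reflection ray with the probe's axis-aligned capture box and
// return the direction from the probe origin to the hit point. Sampling the
// reflection cube map with this direction gives the parallax-corrected lookup.
// Assumes reflectDir is normalized and has no zero components (it is a sketch).
float3 ParallaxCorrect(float3 worldPos, float3 reflectDir,
                       float3 boxMin, float3 boxMax, float3 probePos)
{
    // ray parameter to each of the box's min and max planes, per axis
    float3 toMax = { (boxMax.x - worldPos.x) / reflectDir.x,
                     (boxMax.y - worldPos.y) / reflectDir.y,
                     (boxMax.z - worldPos.z) / reflectDir.z };
    float3 toMin = { (boxMin.x - worldPos.x) / reflectDir.x,
                     (boxMin.y - worldPos.y) / reflectDir.y,
                     (boxMin.z - worldPos.z) / reflectDir.z };

    // per axis, the plane the ray exits through; then the nearest exit overall
    float3 farPlanes = { std::max(toMax.x, toMin.x),
                         std::max(toMax.y, toMin.y),
                         std::max(toMax.z, toMin.z) };
    float t = std::min(farPlanes.x, std::min(farPlanes.y, farPlanes.z));

    float3 hit = worldPos + reflectDir * t;
    return hit - probePos; // use as lookup direction into the reflection cube map
}
```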

Simple stuff. But here comes the interesting part. As far as I have seen (this might be wrong), the common solution is to handle objects entering the reflection zone; each such object gets assigned a reflection map from which to calculate its reflections. Some solutions use the camera as a single point of interest and assign the reflections to every visible object when the camera enters the influence zone. We do it differently.

We had this conundrum where we visualized two different zones of irradiance separated by a sharp border, say a door. Inside the room the lighting is dark, and outside the room, in the corridor, there are strong lights in the ceiling. If an object moves between these areas, the irradiance should change gradually as the object crosses between the zones. This can be accomplished in a pixel shader by submitting N reflection and irradiance cube maps, calculating the distance between the pixel and each cube map, and blending based on that distance.
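For two probes, that per-pixel blend could look roughly like the sketch below (plain C++; the probe positions and already-fetched irradiance colors are passed in, since the actual cube map sampling is beside the point here):

```cpp
#include <cmath>

struct float3 { float x, y, z; };

static float Distance(float3 a, float3 b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Weight each probe by the inverse distance from the shaded point to the probe
// center, so the result fades smoothly from one zone to the other.
float3 BlendProbes(float3 worldPos, float3 probeAPos, float3 probeBPos,
                   float3 irradianceA, float3 irradianceB)
{
    float wA = 1.0f / (Distance(worldPos, probeAPos) + 1e-4f);
    float wB = 1.0f / (Distance(worldPos, probeBPos) + 1e-4f);
    float t = wA / (wA + wB); // 1 near probe A, 0 near probe B
    return { irradianceA.x * t + irradianceB.x * (1.0f - t),
             irradianceA.y * t + irradianceB.y * (1.0f - t),
             irradianceA.z * t + irradianceB.z * (1.0f - t) };
}
```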

We thought of another way, one in which we don’t have to do the work per object. A way that would let us draw an ‘infinite’ number of reflections and irradiances per pixel, and without the overhead of having to calculate reflections on pixels we simply cannot see. Enter deferred decal projected reflections.

By using decals, we can project reflections and irradiance into the scene using an origin of reflection and an influence area. The decal blends the reflections and irradiance based on a distance field function (box or sphere), which lets overlapping reflections blend into each other (a minimal sketch of this falloff follows the list below). The decals are then rendered into the scene as boxes, like any other object, but with a special shader that respects roughness and specularity. Using this method, we avoid:

1. Reflecting unseen pixels.
2. Submitting and binding textures for each affected object.
3. Having to do collision checks to apply reflections to dynamic objects.
4. Having an upper limit on reflection/irradiance affecting zones.
5. Popping.
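
Here is a minimal sketch of the distance-field falloff we have in mind for the projector, with both box and sphere variants; this is illustrative plain C++ rather than the actual decal shader:

```cpp
#include <algorithm>
#include <cmath>

struct float3 { float x, y, z; };

// Sphere falloff: 1 at the reflection origin, 0 at the influence radius.
float SphereWeight(float3 localPos, float radius)
{
    float d = std::sqrt(localPos.x * localPos.x +
                        localPos.y * localPos.y +
                        localPos.z * localPos.z);
    return std::max(0.0f, 1.0f - d / radius);
}

// Box falloff: 1 deep inside the influence box, fading to 0 at its faces.
// localPos is the G-buffer position transformed into the decal's local space,
// halfExtents is half the box size per axis.
float BoxWeight(float3 localPos, float3 halfExtents)
{
    float wx = 1.0f - std::abs(localPos.x) / halfExtents.x;
    float wy = 1.0f - std::abs(localPos.y) / halfExtents.y;
    float wz = 1.0f - std::abs(localPos.z) / halfExtents.z;
    return std::max(0.0f, std::min(wx, std::min(wy, wz)));
}
```

The projector shader would output its reflection and irradiance multiplied by this weight and let alpha blending accumulate overlapping probes in layer order.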

We have some limitations however, namely:

1. Decals have to be rendered ordered by layer.
2. We have to switch between box and sphere distance field functions without shader-switch overhead (sorting by shader is impossible since the layer dictates draw order).
3. We potentially switch back and forth between many different textures (if many reflections are visible simultaneously).
4. Reflections are not calculated once and stored in the G-buffer; the projector shader is somewhat complex and computationally heavy, so any simplifications are welcome.

Our solution gives us the ability to project decals into the scene instead of having to apply reflections per object, meaning we won’t get any popping or pop-in artifacts, and we scale well with the number of objects. This has most likely been done before, and there are probably caveats with this method that are yet to be discovered.

NIDL files everywhere!

Everybody who has actually worked with N3 has probably stumbled over NIDL files. They are basically XML files that are compiled to C++ code via the IDL compiler and then included in the project. They are used, for example, for defining the messages between entities/properties and for defining new scripting commands.

One of our age-old problems when creating a level editor tool has been the use of properties and attributes in the project. If the editor is to display information about properties and allow editing of attributes, it has to be linked to the actual project in some way, since all that information is defined only in the code. At first we had the idea to turn everything into a shared library and use a simple launcher that loads the game’s shared library and then runs it. That would allow the level editor to simply load the specific game’s library and add the actual properties to the project. While that would definitely be awesome, we decided against it for the time being since we don’t have enough time.

Our much simpler solution is to have the level editor only use very simple entities with graphics and physics (for picking) and just use generic editor widgets for all attributes used by the entity. The drawback with this option is that it’s not possible to see the effect of most attributes live, but at least it is possible to edit, save and batch them to the level database.
However, this does not solve the problem of the editor not knowing which attributes exist and which entity class uses them. This is where the NIDL files come into play again. I extended the IDL compiler to support new entries for attributes and properties, including information about which attributes are used by which property. Those NIDL files can then be used by the level editor to display and edit otherwise unknown attributes and properties. Additionally, the IDL compiler generates code that declares and defines all attributes and even registers them with their properties automatically, so they don’t need to be defined in the code as well, avoiding the risk of the two getting out of sync.
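
Conceptually, what the generated code provides is a single shared description of attributes and the properties that use them. The sketch below illustrates the idea only; the struct and field names are placeholders, not the actual output of our IDL compiler:

```cpp
#include <iostream>
#include <string>
#include <vector>

// Stable four-character code, the kind of id typically used when serializing.
constexpr unsigned int FourCC(char a, char b, char c, char d)
{
    return (unsigned(a) << 24) | (unsigned(b) << 16) | (unsigned(c) << 8) | unsigned(d);
}

struct AttributeDefinition
{
    std::string name;    // e.g. "MaxHealth"
    std::string type;    // e.g. "int", "float", "string"
    unsigned int fourcc; // stable id for serialization
};

struct PropertyDescription
{
    std::string name;                            // e.g. "HealthProperty"
    std::vector<AttributeDefinition> attributes; // attributes the property uses
};

// A table like this would be emitted from the NIDL file, giving the game code
// and the level editor one single source of truth for attributes.
static const PropertyDescription healthProperty = {
    "HealthProperty",
    {
        { "MaxHealth",     "int", FourCC('M', 'X', 'H', 'P') },
        { "CurrentHealth", "int", FourCC('C', 'R', 'H', 'P') },
    }
};

int main()
{
    // a tool can now enumerate editable attributes without linking any game code
    for (const auto& attr : healthProperty.attributes)
        std::cout << healthProperty.name << ": " << attr.name
                  << " (" << attr.type << ")\n";
}
```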

Transparency

Well, I’m back from being sick, so this post marks the first of the brand new work year. What we’ve realized is that we need to start wrapping things up, and that means making everything work together. To do this, everything related to materials and shaders has to be fully functional in the level editor (yeah, we have a level editor too!). I have a list of things to do, but it’s mostly smaller tasks such as fixing dynamic linkage, real-time reloading of frame shaders and the material palette, etc. Seeing as one can live without dynamic linkage (because one can just create specialized shaders), it’s more important to work on the cross-application stuff. One of the major ideas we came up with is how to texture a model and set its shader-specific variables without doing so in Maya using some very static and moronic GUI.

Currently, the .n3 files hold information about which texture is attached to which target. This is a problem because the .n3 files are in the export folder, which you will not commit. Instead, the assignment of variables has to lie in the work folder, in some sort of pre-export model file. A model exported from Maya will have all its basic stuff, such as the Solid material, and no variables attached. Then, using the level editor and Nody lite (not yet implemented), one will select a model from a list of models and see a preview of that model using the assigned material and shader variables. The user can then choose another material from a list of materials and see the model change in real time. The user will also be able to set all material variables, such as textures, tessellation factor etc., depending on the shaders the material uses, and then apply the new settings. What happens beneath the hood is that the model file gets replaced. First, the Nody lite runtime will replace the work .n3 file, so that future batches still work smoothly, and also re-batch that one model so that its binary .n3 file looks right. That user is then done with the model, can still commit his or her work folder, and others can batch what he or she just did.
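
To make the pre-export model file idea a bit more tangible, here is a rough sketch of an xml-to-binary conversion step. The element names, file layout and use of TinyXML-2 are assumptions made purely for illustration; this is not Nebula’s actual model format, reader or writer:

```cpp
#include <tinyxml2.h>
#include <cstdint>
#include <fstream>
#include <string>

// Writes a length-prefixed string, a trivial stand-in for a real binary writer.
static void WriteString(std::ofstream& out, const std::string& s)
{
    std::uint32_t len = static_cast<std::uint32_t>(s.size());
    out.write(reinterpret_cast<const char*>(&len), sizeof(len));
    out.write(s.data(), len);
}

// Reads a hypothetical work-folder model file of the form
//   <Model name="tiger_tank">
//     <Node name="hull" material="Solid" DiffuseMap="tex/tiger/hull.dds"/>
//   </Model>
// and writes a flat binary version of it.
bool ConvertXmlModelToBinary(const char* xmlPath, const char* binPath)
{
    tinyxml2::XMLDocument doc;
    if (doc.LoadFile(xmlPath) != tinyxml2::XML_SUCCESS) return false;

    const tinyxml2::XMLElement* model = doc.FirstChildElement("Model");
    if (model == nullptr) return false;

    std::ofstream out(binPath, std::ios::binary);
    WriteString(out, model->Attribute("name") ? model->Attribute("name") : "");

    for (const tinyxml2::XMLElement* node = model->FirstChildElement("Node");
         node != nullptr;
         node = node->NextSiblingElement("Node"))
    {
        // every XML attribute on the node (name, material, textures, shader
        // variables) is written out as a simple key/value pair
        for (const tinyxml2::XMLAttribute* var = node->FirstAttribute();
             var != nullptr; var = var->Next())
        {
            WriteString(out, var->Name());
            WriteString(out, var->Value());
        }
    }
    return out.good();
}
```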

This of course means that I have to write another n3 writer, which writes to an xml-style model file, and then an xml-to-n3 converter which converts the xml files to binary .n3. No problem. Nody lite will also be able to process and handle characters and particles. Nody lite can be compared to the UDK material editor. There, one has a big shader node with tons of inputs (basically it’s just one huge über-shader), where one can attach variables to each slot. This is basically just putting shader variables at different positions. Now, you might ask, how can Nody and Nody lite fare against such a worthy beast? Well, Nody not only lets you customize per-model shader variables and textures, but also lets you design the ENTIRE shader with every single tiny feature. Of course, this means that whatever engine you are using will need to support many different shaders to fully accommodate Nody, but keep in mind that one can just as easily create an über-shader in Nody and use it the exact same way. But enough bragging; here’s a couple of images showing the level editor (in which I haven’t been involved) running in DX11.

This image shows Nebula running using the level editor with one global light, four spot lights, and five point lights spread out around a tiger tank. This window renders everything currently implemented using Nody, which means it uses deferred lighting, SSAO, and of course materials. You can also see the debug shape rendering, which now also works in DX11 because of the new ShapeRenderer.