Batching

One major thing a game engine needs to handle is distribution. And by distribution I don’t only mean distributing to customers and users, but also to developers and artists. How annoying would it be to have to open every single scene in Maya, every shader project in Nody, every texture, and export them individually. Not only that, every artist and developer would have to do this just to get the engine running.

Batching solves this. Batching can be both a good and a bad thing. The good thing is that everyone can run the batcher once and have everything set up, nice and working. The bad thing is that batching may, and in most cases will, perform lots of redundant batches, which takes time. The primary reason why we’d want to redesign the way Nebula content is batched is the very fragile character batching, where everything had to be done in an exact, unintuitive order or else the batcher would crash. But newer additions, such as the Nody shading system with its header file and the xml-formatted model files, mean that we need a new set of tools to batch the new content.

Right, I changed how models should be handled. Previously, models were exported DIRECTLY from Maya, so Maya had to be started in order to export the models. Not only that, Maya also needed a plugin in order to set Nebula-specific things, such as the Nebula shader and the Nebula shader variables. To avoid this, Maya is now only used for modelling. When exporting from Maya there are two paths. If the model file already exists, only the mesh, primitive groups, and the node order will be updated. If it doesn’t exist, the exporter, currently under development, will create a basic model file using the always-existing Solid material. Also, instead of saving this directly to the export folder, everything gets saved to the work folder, so the model files, which previously only existed in export, can now be modified outside of Maya. Why, you may ask?

The idea somewhat resembles the material editor found in UDK. A model is presented to the user in a real-time preview running in Nebula. The user can then change the material, and also set variables such as textures, integers, floats and vectors, and of course see the result in real-time. Whenever a change is made, the xml-formatted model file is altered. This means that the modified model file, still in the work folder, can be committed and then batch exported without needing the huge Maya binary file, and everyone who updates and exports will see the changes. Not only is this better because one doesn’t need Maya for anything other than uv-mapping and modelling, it’s also very easy to see how the model will look with the appropriate textures and settings, since the preview is rendered directly in Nebula.
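To make that round-trip a bit more concrete, here is a minimal sketch of what "altering the xml-formatted model file" could look like. The element names and the use of tinyxml2 are my assumptions for illustration, not the actual Nebula model format or IO classes:

```cpp
// Minimal sketch: update a texture variable in the work-folder model file.
// The element and attribute names are hypothetical; the real xml-formatted
// model layout may differ. tinyxml2 is used here purely for brevity.
#include <tinyxml2.h>
#include <cstring>

bool SetModelTexture(const char* workModelPath,
                     const char* varName,
                     const char* textureResource)
{
    tinyxml2::XMLDocument doc;
    if (doc.LoadFile(workModelPath) != tinyxml2::XML_SUCCESS) return false;

    // find the first shape node in the model (hypothetical structure)
    tinyxml2::XMLElement* model = doc.FirstChildElement("Model");
    if (!model) return false;
    tinyxml2::XMLElement* node = model->FirstChildElement("ShapeNode");
    if (!node) return false;

    // look for an existing variable entry, otherwise create one
    tinyxml2::XMLElement* var = node->FirstChildElement("Variable");
    while (var && (!var->Attribute("name") ||
           std::strcmp(var->Attribute("name"), varName) != 0))
    {
        var = var->NextSiblingElement("Variable");
    }
    if (!var)
    {
        var = doc.NewElement("Variable");
        var->SetAttribute("name", varName);
        node->InsertEndChild(var);
    }
    var->SetAttribute("value", textureResource);

    // save back to the work folder; the binary file in export is always
    // regenerated by the batcher from this work-side file
    return doc.SaveFile(workModelPath) == tinyxml2::XML_SUCCESS;
}
```

The important part is that the editor only ever touches the work-side file, and the exported binary is regenerated from it.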

This editor will exist as a tool for the level editor, allowing you to edit the model and see it in your level, in real-time!

Models aside, there is also the matter of batching shaders. The easy part is compiling the source: traverse all files in folder A, compile each one, and write the result to a file in folder B. The hard part is writing the .sdh-files without having to load the entire project every time. Since loading the graph network requires us to create all nodes, which in turn creates all the graphics, we want to avoid it for performance reasons. It also seems a bit redundant to create graphics if we’re only going to use a command line interface to batch our shaders.
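The easy half really is just a directory walk. A rough sketch, using std::filesystem and a stand-in CompileShader function for whatever the real compiler entry point ends up being:

```cpp
// Rough sketch of the "traverse folder A, compile, write to folder B" loop.
// CompileShader is an assumed stand-in for the actual shader compiler call.
#include <filesystem>
namespace fs = std::filesystem;

bool CompileShader(const fs::path& src, const fs::path& dst); // assumed

void BatchShaders(const fs::path& srcDir, const fs::path& dstDir)
{
    fs::create_directories(dstDir);
    for (const auto& entry : fs::directory_iterator(srcDir))
    {
        if (!entry.is_regular_file()) continue;
        // compile each source file and write the result next to folder B
        fs::path dst = dstDir / entry.path().filename();
        CompileShader(entry.path(), dst);
    }
}
```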

The project loader already works by loading specific objects, so I can choose to only load the settings manager and thus avoid having to create the entire scene in order to batch my shaders. The only problem is that my settings manager does not have the information required to perform a complete header generation, seeing as it doesn’t have any information relating to samplers, textures or class interfaces. These are found and generated when traversing the node graph, and what do we need for that? You guessed it, the graphics scene, which we didn’t want in the first place! I’m currently working on a solution to this problem, and once it’s found, we will be able to batch the shaders with the push of a button.

When all the batchers are done, we will implement a program, very much like the current batcher, which will run all of our batchers, or a subset of them, from a GUI application. The batchers themselves should also have a way to only export modified files, so as to avoid redundant exporting. It shouldn’t be more work than checking the modification dates of both the exported file (if present) and the source, and seeing whether the exported file is older than the source.
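The up-to-date check can be exactly that simple; something along these lines, again with std::filesystem standing in for the engine’s own file routines:

```cpp
// Returns true if the source needs to be (re-)exported: either the exported
// file is missing, or it is older than the source it was built from.
#include <filesystem>
namespace fs = std::filesystem;

bool NeedsExport(const fs::path& source, const fs::path& exported)
{
    if (!fs::exists(exported)) return true;
    return fs::last_write_time(exported) < fs::last_write_time(source);
}
```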

So, to wrap this up: batching is both good and evil, but it is always vital for setting up a game environment from scratch. The key is to make the batching as fast as possible, so as to not slow down the development process whenever someone updates and starts working. It is also very important to avoid batching content which is already up-to-date, because it takes unnecessary time better spent on actual work.

Transparency

Well, I’m back from being sick, so this post marks the first of the brand new work year. What we’ve realized is that we need to start wrapping things up, and that means making everything work together. To do this, everything related to materials and shaders has to be fully functional in the level editor (yeah, we have a level editor too!). I have a list of things to do, but it’s mostly smaller tasks such as fixing dynamic linkage, real-time reloading of frame shaders, the material palette, etc. Seeing as one can live without dynamic linkage (because one can just create specialized shaders), it’s more important to work on the cross-application stuff. One of the major ideas we came up with is how to texture a model and set its shader-specific variables without doing so in Maya through some very static and moronic GUI.

Currently, the .n3-files hold information about which texture is attached to which target. This is a problem because the .n3-files are in the export folder, which you will not commit. Instead, the assignment of variables has to live in the work folder, in some sort of pre-export model file. A model exported from Maya will have all its basic stuff, such as the Solid material, and no variables attached. Then, using the level editor and Nody lite (not yet implemented), one will select a model from a list of models and see a preview of that model using the assigned material and shader variables. The user can then choose another material from a list of materials and see the model change in real-time. The user will also be able to set all material variables, such as textures, tessellation factor etc., depending on the shaders the material uses, and then apply the new settings. What happens beneath the hood is that the model file gets replaced. First, the Nody lite runtime will replace the work .n3-file, so that future batches will still work smoothly, and then re-batch that one model so that its binary .n3-file looks right. That user will then be done with the model, and will be able to commit his or her work folder, allowing others to batch what he or she just did.
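A rough sketch of what that "apply" step could boil down to; the helper names, paths and extensions here are placeholders, not the actual Nody lite API:

```cpp
// Sketch of the "apply" step; helper names, paths and extensions are
// placeholders and only illustrate the flow described above.
#include <string>

// assumed helpers: write the edited pre-export model file to the work folder,
// and convert a single xml-style model file into its binary .n3 counterpart
bool WriteWorkModel(const std::string& workPath);
bool ConvertXmlToN3(const std::string& workPath, const std::string& exportPath);

bool ApplyModelChanges(const std::string& modelName)
{
    const std::string workPath   = "work:models/"   + modelName + ".xml";
    const std::string exportPath = "export:models/" + modelName + ".n3";

    // 1. replace the pre-export model file so that future batches still work
    if (!WriteWorkModel(workPath)) return false;

    // 2. re-batch just this one model so the local binary .n3 looks right
    return ConvertXmlToN3(workPath, exportPath);
}
```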

This of course means that I have to write another n3-writer, which writes to an xml-style model file, and then an xml-to-n3-converter which converts the xml-files to binary .n3. No problem. Nody lite will also be able to process and handle characters and particles. Nody lite can be compared to the UDK material editor. There, one has a big shader node with tons of inputs (basically one huge über-shader), where one can attach variables to each slot. This is basically just putting shader variables at different positions. Now, you might ask, how can Nody and Nody lite fare against such a worthy beast? Well, Nody not only lets you customize per-model shader variables and textures, but also lets you design the ENTIRE shader with every single tiny feature. Of course, this means that whatever engine you are using, it will need to support many different shaders to fully accommodate Nody, but bear in mind that one can just as easily create an übershader in Nody and use it the exact same way. But enough bragging, here are a couple of images showing the level editor (in which I haven’t been involved) running in DX11.

This image shows Nebula running in the level editor with one global light, four spot lights, and five point lights spread out around a tiger tank. This window renders everything currently implemented using Nody, which means it uses deferred lighting, SSAO, and of course materials. You can also see the debug shape rendering, which now also works in DX11 thanks to the new ShapeRenderer.