Working with surface materials

The work of moving the render state stuff out of the model and into its own resource has been done, and it turns out that it works. Its concept is similar to what the previous post touched upon, although with some slight modifications.

For example, my initial idea was to apply a surface material ONCE per shader, and then iterate over the submeshes and render them. However, it turns out that submeshes with the same surface material properties might actually need to use different shaders: we might want instanced rendering, which picks a model matrix from an array using gl_InstanceID, and occasionally non-instanced rendering, which uses a single model matrix. One could treat every draw as instanced, i.e. always provide an array of per-object variables, but there is no semantic way in the shader language to mark variables or uniform buffer blocks as changing per draw call (and contextually there shouldn’t be). What we don’t want is to have to implement an instanced version of every surface. One option is to not use instancing for ordinary objects at all, and reserve it exclusively for particles, grass or other frequently updated, massively repeated stuff.

Instead, it might be smarter to adopt the DX11 way, where every single variable is part of a buffer block, and variables declared outside an explicit buffer block are simply considered part of the global block. This method always requires an entire block update, even if only one variable has changed, although with the new material system there is a flexible way around this.

Instead of working on a per-object basis, where each object literally sets its state for each draw, it’s more attractive to use shader storage blocks for the variables that are guaranteed to exist, and be unique, per object, such as the model matrix and object id. Each surface material will provide a single uniform buffer block which is updated ONCE when the surface is created, and modified only if the surface gets any constants changed (which is rare during runtime).

When rendering, we then only need to apply the shader, bind the surface block, then bind each object’s unique block, and we’re done with the variable updates. Or are we?
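In code, the per-frame loop I have in mind would look roughly like this (just a sketch; the binding point indices and the Surface/Object structs are made up for illustration):

#include <GL/glew.h>
#include <vector>

// Illustrative types; the real engine obviously has richer versions of these.
struct Surface { GLuint program; GLuint materialUBO; };
struct Object  { GLuint objectSSBO; GLuint vao; GLsizei indexCount; };

void DrawSurfaceGroup(const Surface& surface, const std::vector<Object>& objects)
{
    glUseProgram(surface.program);

    // Surface constants were uploaded once when the surface was created,
    // so here we only bind the block.
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, surface.materialUBO);

    for (const Object& obj : objects)
    {
        // Per-object variables (model matrix, object id, ...) live in a
        // shader storage block; we bind it, we don't re-upload it.
        glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, obj.objectSSBO);

        glBindVertexArray(obj.vao);
        glDrawElements(GL_TRIANGLES, obj.indexCount, GL_UNSIGNED_INT, nullptr);
    }
}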

A problem comes from the fact that we might want to animate some shader variable, for example the alpha blend value, which forces us to update an otherwise shared uniform buffer block. Unfortunately, this means we not only have to modify the block to use the new alpha value, but we also need to modify it back to its default state once we don’t want the change anymore.

So one idea is to have changeable values in a transient uniform buffer block (which uses persistent mapping for performance), have one or several material blocks (one for foliage stuff, one for water stuff, one for ordinary objects, etc.), and have a shader storage block for the variables which are guaranteed to change on a per-object basis. Why a shader storage block, you might ask? Well, if we can buffer all of our transforms into one huge array, we can retain a global dictionary of their variables, which can then be accessed in the shader.
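Setting up such a persistently mapped block could look something like this (a sketch assuming GL 4.4 buffer storage; the target and sizes are placeholders):

#include <GL/glew.h>
#include <cstring>

// Sketch: immutable storage that stays mapped for the lifetime of the buffer.
struct TransientBlock
{
    GLuint buffer;
    void*  mapped;
};

TransientBlock CreateTransientBlock(GLsizeiptr size)
{
    const GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT;

    TransientBlock block;
    glGenBuffers(1, &block.buffer);
    glBindBuffer(GL_UNIFORM_BUFFER, block.buffer);
    glBufferStorage(GL_UNIFORM_BUFFER, size, nullptr, flags);
    block.mapped = glMapBufferRange(GL_UNIFORM_BUFFER, 0, size, flags);
    return block;
}

// Writing a value then becomes a plain memcpy into the mapped pointer,
// with no glBufferSubData per change.
void WriteValue(const TransientBlock& block, GLintptr offset, const void* data, GLsizeiptr bytes)
{
    std::memcpy(static_cast<char*>(block.mapped) + offset, data, bytes);
}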

By buffering the transforms like this, we can minimize the need to set the same transforms twice for the same object, for example when doing shadow mapping (which is three passes: global, spot, point), the default shading, and then perhaps some extra shaders. A shader storage block is flexible in this context, because it can hold a dynamic count of instances. This basically approaches instancing, and it also means we could utilize the MultiDraw calls, since the state is prepared prior to rendering and all submeshes already have their unique data available in the shader.
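For reference, this is the kind of MultiDraw submission it would boil down to (GL 4.3 and up; the command struct layout comes from the spec, the rest is illustrative):

#include <GL/glew.h>

// Indirect command layout as defined by OpenGL for glMultiDrawElementsIndirect.
struct DrawElementsIndirectCommand
{
    GLuint count;
    GLuint instanceCount;
    GLuint firstIndex;
    GLuint baseVertex;
    GLuint baseInstance;
};

// One command per submesh; in the shader, gl_DrawID (ARB_shader_draw_parameters)
// can then index into the shared per-object storage block.
void SubmitBatch(GLuint indirectBuffer, GLsizei drawCount)
{
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
    glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, nullptr, drawCount, 0);
}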

My concern with this is that we might have the case where not all submeshes share the same surface material. This is the case if we have a composite mesh, where some part uses for example subsurface scattering, and the rest is glass, or cloth, or some other type of material. The issue is basically that there is only a very small subset of cases where we win any performance, since we need this particular scenario: a mesh where all parts share the same surface material and shaders.

The issue with clever drawing methods always falls back on addressing variables when drawing more than one object. Storing all variables in per-object buffers is trivial, but doesn’t allow us to use any of the fancy APIs like MultiDraw. Storing all variables in global buffers is difficult, because we need to define a set of global buffers (with hard-coded names) which are explicitly updated at an index representing the object. This poses a problem if we have several parameter sets, for example PBR, Foliage and UV animation, and then try to somehow use 3 global buffers, because we must also have another buffer which denotes the object’s id into those buffers (unless we can use gl_DrawID).

I think the best way might be to do something like this:
Surface material blocks are retrieved by looking at the surface variables, extracting their blocks and creating a persistently mapped buffer for them. Objects which need to modify singular values do so by writing to the persistently mapped buffer directly, which makes the value active at the next render. The issue with this is that if we use some type of buffering scheme, we can only make so many buffer changes before we need a sync. Basically, if we have a triple buffering method, we need to wait every third change, because the previous change might not have been drawn yet.
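With a triple-buffered, persistently mapped buffer, that sync boils down to a fence per region, along these lines (a sketch; how regions map to offsets in the mapped buffer is left out):

#include <GL/glew.h>

static const int NumRegions = 3;

// One fence per region; a region may only be rewritten once the GPU has
// finished the frame that read it.
struct SyncedRegions
{
    GLsync fences[NumRegions];
    int    current;
};

void BeginWrite(SyncedRegions& regions)
{
    GLsync& fence = regions.fences[regions.current];
    if (fence != nullptr)
    {
        // This is the wait described above: block until the previous use is done.
        glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, ~GLuint64(0));
        glDeleteSync(fence);
        fence = nullptr;
    }
    // ... memcpy this frame's data into region 'current' of the mapped buffer ...
}

void EndWrite(SyncedRegions& regions)
{
    // Fence the commands that consume the region, then advance to the next one.
    regions.fences[regions.current] = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    regions.current = (regions.current + 1) % NumRegions;
}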

We have one transient buffer which is the one changing every frame (time, view matrix, wind direction, global light stuff), and finally a buffer per submesh which contains its transforms and object ID. So to summarize:

  • Surface materials contain one buffer per block they are ‘touching’. Mapped for rare changes (triple buffered).
  • Transient frame buffer like the one we have now, but with more of the common variables. Mapped for changes (triple buffered).
  • Per-object transform buffer. Mapped for changes (triple buffered).

AnyFX should be made so that variables which lie outside a varblock (uniform buffer block/constant buffer) are just put in a global varblock with a reserved name. So AnyFX should probably turn over the responsibility of handling uniform buffers to the engine, and not manage them intrinsically. As such, setting a variable which lies inside a block will just assert, since doing so would have no purpose or result. Instead we set the buffer which should be used for a varblock by sending an OpenGL buffer handle to the VarBlock implementation. This will also make it simpler to integrate the API with future APIs which force explicit control of buffers (read Vulkan, DX11+), while still wrapping the shader part in a comfortable and flexible manner.
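Something along these lines is what I imagine the varblock interface turning into (purely hypothetical names, not actual AnyFX API):

#include <GL/glew.h>

// Hypothetical sketch: the engine owns the buffer, the varblock only stores
// the handle and binds it when the shader is applied.
class VarBlock
{
public:
    void SetBuffer(GLuint bufferHandle) { this->buffer = bufferHandle; }

    void Apply(GLuint bindingPoint) const
    {
        glBindBufferBase(GL_UNIFORM_BUFFER, bindingPoint, this->buffer);
    }

private:
    GLuint buffer = 0;
};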

In essence, we should minimize the buffer changes, and we’re no longer restricted to the current system where we poke inside a gigantic transform buffer every time we draw. The current method suffers from the issue that we have to sync every N’th object, and we update the same object once for each time it is drawn, which might be 6 times per frame (!). The new method would sync every N’th frame instead, which should scale better when the same object gets rendered multiple times.

The beginning of a preprocessing pipeline

We’ve been somewhat silent. I would say very silent. Silence doesn’t mean we’ve done nothing though, quite the contrary.

We’ve been working A LOT on the tools, mainly the content browser and the level editor. On my part, I’ve been fiddling with the content browser to make it simpler, more minimalistic and faster. For example, the content browser now allows a static model to be attached with one or several particle nodes, which means that artists are able to build more complex (albeit non-skinned) pre-defined objects by adding particle nodes to them. The content browser is also more responsive and interactive due to the new system used when saving, loading and applying object parameter changes such as textures and other shader variables.

However, more relevant is the new IBL pipeline, which allows a level designer to place light probes in the scene, have them render a reflection cube map and an irradiance cube map, and then have them applied in a novel fashion on overlapping objects. To do so, a designer puts a light probe in the scene, gives it a few parameters, presses a button and woop, the entire area is affected by said reflections and irradiance. This effect gives an illusion of realtime GI, since it simulates specular light bounces through reflections, and ambient light bounces through irradiance. The following image shows the result as displayed inside the level editor:

This image shows how reflections are projected inside the scene.

To do this, we first capture the scene from 6 angles using a bounding box as a capturing area. This area is later used to reproject the reflected rays so that we get parallax-corrected cube maps when rendering. The result of the render is saved as a render target, and is then processed by CubeMapGen, which is now integrated into Nebula as an external library. Using a predefined set of settings, we generate the reflections and, optionally, the irradiance; output them to the work folder and then assign them to the influence zones.
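For the curious, the parallax correction itself is the usual ray-versus-capture-box trick; a rough C++ sketch of the math (glm for vector types, all names illustrative) looks like this:

#include <glm/glm.hpp>
#include <algorithm>

// Intersect the reflection ray with the capture box and use the hit point,
// relative to the probe position, as the cube map lookup direction.
// Degenerate cases (ray component == 0, point outside the box) are ignored here.
glm::vec3 ParallaxCorrectedLookup(
    const glm::vec3& worldPos,   // shaded point
    const glm::vec3& reflectDir, // normalized reflection vector
    const glm::vec3& boxMin,     // capture bounding box in world space
    const glm::vec3& boxMax,
    const glm::vec3& probePos)   // position the cube map was captured from
{
    glm::vec3 firstHit  = (boxMax - worldPos) / reflectDir;
    glm::vec3 secondHit = (boxMin - worldPos) / reflectDir;

    // Per axis, take the far plane, then take the closest of those three hits.
    glm::vec3 furthest = glm::max(firstHit, secondHit);
    float hitDistance = std::min(furthest.x, std::min(furthest.y, furthest.z));

    glm::vec3 hitPos = worldPos + reflectDir * hitDistance;
    return hitPos - probePos;
}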

Simple stuff. But here comes the interesting part. As far as I have come across (this might be false), the common solution is to handle objects entering the reflection zone, which then get assigned a reflection map from which to calculate the reflections. Some solutions use the camera as a singular point of interest, and assign the reflections to every visible object when the camera enters the influence zone. We do it differently.

We had this conundrum where we visualized two different zones of irradiance separated by a sharp border, say a door. Inside the room the lighting is dark, and outside the room, in the corridor, you have strong lights in the ceiling. If an object moves between said areas, the irradiance should change gradually as it crosses between the zones. This can be accomplished in a pixel shader by submitting N reflection and irradiance cube maps, calculating the distance between the pixel and each cube map, and then blending based on that distance.

We thought of another way in which we don’t have to do the work per object. A way that would let us draw an ‘infinite’ amount of reflections and irradiance per pixel, and without the overhead of having to calculate reflections on pixels we simply cannot see. Enter deferred decal projected reflections.

By using decals, we can project reflections and irradiance into the scene using an origin of reflection and an influence area. The decal blends the reflections and irradiance based on a distance field function (box or sphere), which allows other reflections to be blended in. Decals are then rendered into the scene as boxes, like any other object, but with a special shader that respects roughness and specularity. Using this method, we avoid:

1. Reflecting unseen pixels.
2. Submitting and binding textures per each affected object.
3. Applying reflections on dynamic objects without doing collision checks.
4. Having an upper limit on reflection/irradiance affecting zones.
5. Popping.

We have some limitations however, namely:

1. Decals have to be rendered ordered by layer.
2. We have to switch between box and sphere distance field functions without shader switch overhead (sorting is impossible since the layer dictates draw order).
3. We potentially switch back and forth between many different textures (if we can see many reflections simultaneously).
4. We don’t calculate the reflection and store it in the G-buffer. The projector shader is somewhat complex and computationally heavy, so any simplifications are welcome.

Our solution gives us the ability to project decals into the scene instead of having to apply them per object, meaning we won’t have any popping or pop-in artifacts, and we get good scalability with the number of objects. I doubt this hasn’t been done before, and there are probably caveats with this method which are yet to be discovered.
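For completeness, the per-pixel blending boils down to a weight from the distance field; a rough sketch of what such a weight function can look like (the exact falloff curve here is just illustrative):

#include <glm/glm.hpp>

enum class FalloffShape { Box, Sphere };

// World position is transformed into the probe's unit space, where the
// influence volume is the [-1, 1] box (or the unit sphere), and the weight
// fades from 1 at the falloff start to 0 at the edge of the volume.
float ReflectionBlendWeight(
    const glm::vec3& worldPos,
    const glm::mat4& invProbeTransform, // world space -> probe unit space
    FalloffShape shape,
    float falloffStart)                 // where the fade begins, in [0..1]
{
    glm::vec3 local = glm::vec3(invProbeTransform * glm::vec4(worldPos, 1.0f));

    float distance;
    if (shape == FalloffShape::Sphere)
    {
        distance = glm::length(local);
    }
    else
    {
        glm::vec3 a = glm::abs(local);
        distance = glm::max(a.x, glm::max(a.y, a.z));
    }

    return glm::clamp((1.0f - distance) / (1.0f - falloffStart), 0.0f, 1.0f);
}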

Fun with the content tools and other updates

We have been busy lately as our game projects will be starting up soon. General updates have been a bit slow, but here is a short recap of what has happened in the last few months:

We implemented full Linux support for both Nebula and the toolchain. There are some minor issues with the math library, which is currently based on Bullet’s vectormath and some of our own code; I think some matrix functions are a bit off, but other than that everything works as expected. That obviously means that the OpenGL4 renderer has become the primary render engine now. Some other libraries have been integrated as well, for example Havok (Windows only due to licensing limitations) and Recast & Detour.

Apart from that there are obviously piles of fixes everywhere, but mostly to the toolchain, namely the level editor and the content browser.

Samuel has had some fun with the tools and took some screencasts while experimenting.

 

Physically based lighting

Seeing as we’re aiming for a bleeding edge engine, there is no need to skip out on anything. A little bird whispered in my ear that there are other ways of performing lighting than the standardized simple Blinn-Phong method commonly used, and since we’re on a pretty flexible budget when it comes to graphics performance, I thought I should give it a good looksie.

Physically based lighting basically takes more into account than regular lighting. It also provides a more ‘real’ representation of the world in terms of reflective light (albedo) and surface roughness/gloss. Couple that with the original cheat called normal maps and you’ve got yourself some pretty good-looking effects. Basically, all materials have had a new roughness map added, which allows a graphics artist to author the surface complexity of a model. This allows lighting to properly respond to the surface instead of just applying a uniform specular reflectiveness. The shader code (mostly taken and translated from http://www.altdevblogaday.com/2011/08/23/shader-code-for-physically-based-lighting/) looks like this:

// NH, NL, NV and HL are the usual clamped dot products; 'roughness' here holds
// the decoded specular power (see below), not the [0..1] roughness value.
float normalizationTerm = (roughness + 2.0f) / 8.0f;           // Blinn-Phong normalization
float blinnPhong = pow(NH, roughness);                         // distribution term
float specularTerm = normalizationTerm * blinnPhong;
float cosineTerm = NL;
float base = 1.0f - HL;                                        // Schlick fresnel approximation
float exponent = pow(base, 5.0f);
vec3 fresnelTerm = specColor.rgb + (1.0f - specColor.rgb) * exponent;
float alpha = 1.0f / sqrt((PI / 4) * roughness + (PI / 2));    // visibility approximation
float visibilityTerm = (NL * (1.0f - alpha) + alpha) * (NV * (1.0f - alpha) + alpha);
visibilityTerm = 1.0f / visibilityTerm;
vec3 spec = clamp(specularTerm * cosineTerm * fresnelTerm * visibilityTerm, 0.0f, 1.0f) * lightColor.xyz;

As you can see, this code is way more complex than the standard formulae. What you can see here is that instead of using a constant value for specular power, we use the roughness. This allows us to have a per-pixel roughness authored by a graphics artist. The only downside to this is that roughness is somewhat unintuitive in terms of encoding/decoding. To decode roughness, which is a value in the range [0..1], I use this formula (taken from Physically-based Lighting in Call of Duty: Black Ops):

float specPower = exp2(10 * specColor.a + 1);

This gives our specular power a range of [2, 2048]. Since our method uses the Blinn-Phong algorithm for distribution, the specular power is much greater than the [0..1] range, but easier to compute than the more advanced yet more intuitive Beckmann algorithm (which actually operates in the range [0..1]). The result can be seen in the picture below:

Screenshot from 2013-12-12 13:25:13

Note the specular light given off by the local lights, which was previously non-existent.

A part of performing physically based lighting is to also use reflections and proper ‘roughening’ of the reflections. Reflections affect both the specular light (since it’s actually a reflection, go figure) and the final color of the surface. To account for this in our completely deferred renderer, the environment maps on reflective objects take roughness into account, and select a specific mip level in the environment map based on the roughness. The awesome tool CubeMapGen (https://code.google.com/p/cubemapgen/) can take an ordinary cube map and generate mips where each mip is a BRDF approximation (actually there are several different algorithms, but for the sake of clarity we’ll stick to Blinn-Phong BRDFs). We can also tell it to generate each new mip using a glossiness falloff, resulting in a very good-looking mip chain for our cube maps.

You may have come across this image http://seblagarde.files.wordpress.com/2011/07/reference_top_ref_bottom_mipchain.jpg showing a series of cube maps with different levels of reflectiveness, which is exactly what we are doing and what we want. Just to clarify, this is all precomputed using an original cube map and is not done in real-time! The more interesting part is that what is visible in your environment cube map is irrelevant. What is relevant is that the average color of the cube map fits your scene in terms of colors and lighting. In the pictures below, we have the same model with roughness ranging from 0 to 1.

Roughness set to 0.0

Roughness set to 0.5

Roughness set to 1.0

As you can see, the roughness changes the surface look of the object dramatically, although it still uses the exact same shader. Also note that using the skydome cube map as the reflective cube map is a bit ugly, since it’s half bright, half dark. That’s all!

 

// Gustav

New tech!

While I’ve been hard at work with the Content Browser, I’ve also done tons of work on the rendering side. The first and probably coolest feature I’ve added is instancing, which basically lets us add model entities to the brim without an all too big loss of FPS. The bottleneck right now seems to be the visibility system. I have tested rendering 4000 models of 1134 polygons each, with varying results, from 30 FPS on my home computer to a solid 60 on a newer one. It seems that the rendering itself isn’t causing the bottleneck; instead, the CPU is bottlenecked by the visibility system.

I’ve chosen not to use a texture or a vertex stream to send the instancing transforms to the shader. Instead I render batches of 256 instances at a time, which basically divides the number of draw calls by 256. The reason the upper limit is 256 is that it is the biggest size an array can have in HLSL. No matter, it’s not really a problem, seeing as cutting the number of draw calls by 256 for large amounts of objects takes the FPS from unplayable to fluent. I have plans to investigate the other methods in the future, but I’m holding off for the moment. The thing is that texture fetching would require 4 * 3 texture fetches per vertex in the shader, because the sampler only takes 4 pixel components per fetch, a matrix consists of 4 rows, and each vertex needs ModelViewProjection, ModelView and Model. Vertex streaming seems to be the other viable option, and I will look into that if it’s necessary.
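The batching itself is nothing fancy; roughly like this (a sketch, assuming the constant buffer was created as a dynamic buffer holding 256 matrices; in the real shader each instance needs three matrices, only one array is shown here):

#include <d3d11.h>
#include <DirectXMath.h>
#include <algorithm>
#include <cstring>

static const UINT MaxInstancesPerBatch = 256;

// Upload up to 256 transforms into a dynamic constant buffer and issue one
// instanced draw per batch; SV_InstanceID indexes into the array in HLSL.
void DrawInstancedBatches(
    ID3D11DeviceContext* context,
    ID3D11Buffer* transformCB,              // dynamic cbuffer with 256 matrices
    const DirectX::XMFLOAT4X4* transforms,  // one transform per instance
    UINT numInstances,
    UINT indexCount)
{
    for (UINT offset = 0; offset < numInstances; offset += MaxInstancesPerBatch)
    {
        UINT batchCount = std::min(MaxInstancesPerBatch, numInstances - offset);

        // Upload this batch's transforms.
        D3D11_MAPPED_SUBRESOURCE mapped;
        context->Map(transformCB, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
        std::memcpy(mapped.pData, transforms + offset, batchCount * sizeof(DirectX::XMFLOAT4X4));
        context->Unmap(transformCB, 0);

        // One draw per batch instead of one per object.
        context->DrawIndexedInstanced(indexCount, batchCount, 0, 0, 0);
    }
}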

Oh, and if you’ve ever wondered how 4000 trucks would look in real time in a boring grid, here is a picture:

nebulainstanced

 

But that’s not all, nono, far from it! Every shader previously written in Nody has been converted into the old .fx format. This might seem counterproductive, but it turns out that working with shaders using Nody was far from optimal, and we made the decision to just code the shaders instead of designing them. In the future I will look into how we can use Nody to accomplish this, but with another approach.

As a result, I’ve re-implemented the old tessellation shader, and we’ve also tried it using a ‘real’ model with a ‘real’ displacement map generated from a high-poly sculpt. I don’t have any images to show it, but I can assure you that it works. There is a problem though: we must use soft edges, because otherwise we get cracks in our tessellated mesh.

nebulanotessellation nebulatessellation

In the image above we have a large cuboid which is very thin. The mesh was solid and intact before tessellation, but because the hard edge makes the normals on both sides of the edge point in different directions, the tessellated result gets cracked. As such, one cannot use tessellation with hard edges unless the level of tessellation is zero over the seam. The tessellation shader also tessellates based on eye distance, so the further away you are, the lower the tessellation factor will be. Oh, and before I forget, the tessellation shader also works for skinned meshes.

Not only can we do this directly in Nebula, and get a feel for how the result will look, but we can also render everything in a wireframe mode, giving artists a hint of how fine the tessellation is. This, we hope, will prove useful when working with tessellated meshes. Here’s a picture of the very old eagle mesh, tessellated and rendered in wireframe for your debugging pleasure.

nebulaeagletess nebulatexprev

 

As you can see, I’ve redesigned how variables are handled and displayed. On the right you can see what variables are available for the current material. Textures can also be previewed by hovering over the icon. The thumbnail picture can be clicked on, and it opens up a file browser which lets you select textures from the file system.

I’ve also taken the liberty to add a shader which lets you animate UVs. It doesn’t work with keyframes, but instead uses timing and angles. The shader has a set of parameters: linear direction, which animates the UVs in a specified direction; angle, which rotates the UVs around the UV shell center point; linear speed, which determines the speed of the linear animation; and angular speed, which determines the speed of the angular animation. With all these parameters, we can achieve rather good-looking animations. It can also tile the texture in X and Y depending on the variables.
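If you are curious, the math behind it is simple; roughly this, with the parameter names mirroring the list above (illustrative, not the actual shader source):

#include <glm/glm.hpp>
#include <cmath>

// Rotate the UVs around the shell centre, scroll them in a direction, then tile.
glm::vec2 AnimateUV(
    glm::vec2 uv,
    float time,
    glm::vec2 linearDirection, // direction of the linear animation
    float linearSpeed,         // speed of the linear animation
    float angle,               // constant rotation offset (radians)
    float angularSpeed,        // speed of the angular animation (radians/s)
    glm::vec2 numTiles)        // NumXTiles, NumYTiles
{
    float a = angle + angularSpeed * time;

    // Rotate around the UV shell centre point (0.5, 0.5).
    glm::vec2 centered = uv - glm::vec2(0.5f);
    glm::vec2 rotated(
        centered.x * std::cos(a) - centered.y * std::sin(a),
        centered.x * std::sin(a) + centered.y * std::cos(a));

    // Scroll, then tile.
    glm::vec2 animated = rotated + glm::vec2(0.5f) + linearDirection * (linearSpeed * time);
    return animated * numTiles;
}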

nebulauvanimated1 nebulauvanimated2

Above you can see the same object with different settings for the UV animations. You can see the tile count which is dependent on the NumXTiles and NumYTiles variables.

As you probably can see, the browser also allows you to change the light color, and also allows you to lock the global light to always point in the direction of the camera. Well, I guess that’s all for now!

 

 

Progress on the pipeline and everything else

We have been a bit slow on the progress reporting, mostly because we have been fairly busy recently. In the last few weeks/months we have been working towards our goal to create a new content pipeline that is easier to use, more stable, and easier to maintain and extend. The first big component, the new content browser/importer, is nearing completion and supports importing of FBX files with previewing, live updating, and configuration of texturing, materials and physics properties.
Textures can be easily added to a project by just opening them from the importer, or even by dragging and dropping them into the application. Textures can be previewed, their conversion parameters can be configured, and they can be easily applied to objects.

Apart from the work on the content browser, things have been happening behind the scenes: we started on a Unix/OpenGL4 port and have made quite some progress there. The main showstopper at the moment is the shader system, which requires all shaders to be written in GLSL and Nody to be adapted to creating GLSL shaders. Since this was more of a fun sidetrack and our game project draws closer, we chose to put it on hold and focus on more pressing things instead.
The physics integration has been rewritten from scratch, this time properly wrapping the underlying physics engine (in this case Bullet), making it easier to integrate other physics engines and also cleaning up the first version, which was a fairly quick hack. Apart from being an overall nicer and cleaner implementation, it has been integrated into the render component, making it a first-class citizen of Nebula and tying it closely to the graphics system. All physics objects use the resource subsystem and are properly loaded and reused/instanced. The use of Bullet files as a resource has been scrapped, since it caused more problems than it solved, mostly due to the fairly inconsistent manner in which tools and plugins create the files; apart from that, it locked us to the use of Bullet, which we wanted to avoid as well. Physics meshes are stored using native Nebula mesh formats and binary Nebula files, making them independent of the physics engine used.

On a positive side note, we have acquired more workforce and have a person working full time on a new level editor that will combine all the old tools into one application, be tightly coupled to the content browser, and add more extensive scripting abilities.

Since our version of Nebula3 is not really compatible with the last open source version any more we figured we should probably change its name. We have some tentative working names but haven’t really decided on something final yet.

Plug-in, Nebula-out. Get it?

I’ve been hard at work getting a Nebula 3 plugin for Maya to have all the features we’d want. Why, you may ask, isn’t the pipeline awesome as it is? Well, the answer is no, no it isn’t. It works, but it is far from smooth, and the Nebula plugin aims to address that. The plugin is currently only for Maya, but there will be a Motion Builder version as well. The plugin basically just wraps FBX exporting, and makes sure it’s suitable for Nebula by using a preset included with the Nebula distribution. It also runs the FBX batcher, which converts the fbx-file to the model files and mesh files. It can also preview the mesh if it’s exported, and will export it if it doesn’t exist already. This allows for immediate feedback on how the model will look in Nebula. It also tries to preserve the shader variables, but it’s impossible to make it keep the material. That’s because DirectX doesn’t support setting a vertex layout with a vertex shader that has a smaller input than the layout. This is a problem because converting a skin from static to skinned will cause Nebula to crash, seeing as the material is preserved between exports. So the plugin offers a way to get meshes, including characters, directly into Nebula, which is very nice indeed.

I’ve also been working with getting a complete Motion Builder scene into Nebula, and I actually got Zombie to work with all features. This means the skin, along with more than one animation clip (yay) can be loaded into Nebula seamlessly by simply saving the Motion Builder file and running the exporter. I will probably make a Nebula 3 plugin for Motion Builder as well, so we can have the exact same export and preview capabilities as in Maya.

I know I have been promising a video showing some of the stuff we’ve done, but I just haven’t had the time! Right now, I will start working on the documentation for our applications so there are clear directions for anyone who wishes to use them (mainly our graphics artists here at Campus). The plugin already redirects from Maya to three different HTML docs, each of which will describe the different tools. That’s nice and all, except for the fact that the HTML docs are completely empty.

Characters continued

So I ran into a problem with characters when I tried to load the FBX example Zombie.fbx. It turns out the skeleton in that particular scene isn’t keyed directly, but indirectly using a HIK rig. So when I tried to read the animations from the scene, I got nothing; every curve was a null pointer. I’ve tried back and forth just getting the character into the scene, but the HIK rig doesn’t really NEED to follow the actual skeleton, so there is no exact way of knowing it will fit. Instead, I go into Maya and bake the simulation. This won’t bake the animation to be per-vertex, but instead it will make sure every joint is keyed identically to all its effectors. The reason I don’t really want to read the character rig directly is not just the simple rig-to-joint connections, but also the effectors. So a skeleton might be linked to another HIK rig, but the animations are not identical, just nearly identical. Baking the simulation will make sure that every effector, along with every possibly related skeleton, gets keyed into the skinned skeleton.

I really hope MotionBuilder can do the same, because otherwise we are going to be stuck using a single animation layer until the Maya FBX exporter gets support for exporting multiple takes.

I’ve also been working on getting the model files to update when one exports a previously existing Maya scene. The thing is that if one has spent lots of time modifying textures and shader variables, and then decides to tamper with the base mesh or possibly the entire scene, the model file will still retain the information previously supplied. This is a way to compensate for the fact that we can’t set shaders, textures and shader variables in Maya, but have to do it in an external program. When this is done, only meshes with identical names to those already existing in the model file will retain their attributes; all others have to be changed in the material editor.

Characters

I’ve been hard at work getting characters to work, and now they do, well sort of…

There seem to be two problems with characters at the present moment. The first is that the Maya FBX exporter has no way of setting the skin and skeleton to the bind pose before exporting (which is super important, because the skeleton can be retrieved in bind pose, but the mesh can’t). If one does not export while in bind pose, the mesh will not be relative to the skeleton, resulting in incorrect mesh deformation. The second problem is that I can’t load the animations from the FBX examples such as Zombie.fbx. The mesh works, the skeleton works, but the animation clips don’t get loaded for some reason. I have yet to investigate this. The good news is that I actually can have an animated character! This means that the skin fragmentation, skeleton construction, animation clips (albeit only from Maya at the moment) and scene construction with a character actually work!

Now, if you found this thread because you were tearing your hair out trying to understand how Maya stores joints in FBX, here is the deal. First of all, your KFbxPose is useless, seeing as you want your joints while they are in bind pose to begin with. The only thing you need is the KFbxNode for each joint, which is easily retrievable using a recursive algorithm to traverse your joints. When you have this, all you need is the PreRotation (using KFbxNode::GetPreRotation(KFbxNode::eSOURCE_SET)) and the current orientation using KFbxNode::LclRotation.Get(). Your PreRotation corresponds to the Joint Orientation in Maya, and this rotation will be the basis for your joint. Now we have two vectors, where X, Y and Z correspond to the degrees of rotation around each axis. Note the use of degrees; if you want radians, this is where you want to convert them. The PreRotation (or Joint Orientation) consists of three angular values, rotations around X, Y and Z, but they do not make up a free-form transformation. To get a rotation matrix for these angles, you need to construct a rotation matrix for X, Y and Z (using the axis 1,0,0 for X, 0,1,0 for Y and 0,0,1 for Z) using the axis-angle principle, where the angle is your rotation value. Multiply these three matrices together and we get our final rotation matrix for the joint.

Example code showing what I just explained (note: n_deg2rad converts degrees to radians):

 

KFbxVector4 preRotation = joint->fbxNode->GetPreRotation(KFbxNode::eSOURCE_SET);

// first calculate joint orientation
matrix44 xMat = matrix44::rotationx(n_deg2rad((float)preRotation[0]));
matrix44 yMat = matrix44::rotationy(n_deg2rad((float)preRotation[1]));
matrix44 zMat = matrix44::rotationz(n_deg2rad((float)preRotation[2]));
matrix44 totalMat = matrix44::multiply(matrix44::multiply(xMat, yMat), zMat);

 

That is not enough, however. We also want the bone rotation at the moment of binding, which we get from the LclRotation as previously mentioned. Then apply the same principle to those values…

 

KFbxVector4 bindRotation = joint->fbxNode->LclRotation.Get();

// then calculate the bind value for the bone
matrix44 bindX = matrix44::rotationx(n_deg2rad((float)bindRotation[0]));
matrix44 bindY = matrix44::rotationy(n_deg2rad((float)bindRotation[1]));
matrix44 bindZ = matrix44::rotationz(n_deg2rad((float)bindRotation[2]));
matrix44 bindMatrix = matrix44::multiply(matrix44::multiply(bindX, bindY), bindZ);

 

Now we have both matrices, the bone matrix and the joint matrix. In games, we don’t really care about bones, seeing as we want a single combined joint matrix which we can use in our skinning shader. Anyhow, when we have our matrices, we want to combine them to get the actual joint bind pose…

 

// multiply them
bindMatrix = matrix44::multiply(bindMatrix, totalMat);

 

If you want quaternions as rotations, which is the case with Nebula, just convert the matrix to a quaternion; otherwise keep it like this. This solution gives you the joints NOT multiplied with their parents’ matrices, and this is because Nebula wants them parent-relative. Nebula calculates the inverted bind pose for each joint when loading them, and simply multiplies the current joint with the parent’s inverted bind pose when evaluating the skeleton. Another way of solving this would be to get the KFbxPose, but the pose only gives you matrices which are premultiplied by all parents, which means the skeleton will be in world space, detached from the parents, which in turn means Nebula won’t be able to multiply them with their parents’ matrices. So, use this method if you want to skin in realtime using the algorithm JointMatrix = JointPoseInverted * (JointPose * JointParentPose).

So the conclusion is that Nebula wants the joints in local space (not multiplied by their parents), so that Nebula can multiply in the parents afterwards. The reason behind this is that the skeleton can have several animation clips running at the same time, which means the parent joints might be affected by two animations simultaneously, and thus the parent matrix cannot be pre-multiplied. This is probably the way most game engines handle skeletons, seeing as it gives minimal re-computation for maximal flexibility.
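A sketch of what that evaluation looks like, mirroring the formula above (the Joint struct is illustrative, and the multiply order assumes matrix44::multiply(a, b) corresponds to a * b):

#include <vector>

// Joints are stored parent-relative; the inverse bind pose is computed once at
// load time, and the skinning matrix is built per frame.
struct Joint
{
    int      parent;       // -1 for the root
    matrix44 localPose;    // animated local transform (JointPose)
    matrix44 invBindPose;  // JointPoseInverted
};

void EvaluateSkeleton(const std::vector<Joint>& joints,
                      std::vector<matrix44>& worldPoses,
                      std::vector<matrix44>& skinMatrices)
{
    // Assumes joints are sorted so that parents come before their children.
    for (size_t i = 0; i < joints.size(); i++)
    {
        const Joint& joint = joints[i];
        matrix44 world = (joint.parent >= 0)
            ? matrix44::multiply(joint.localPose, worldPoses[joint.parent]) // JointPose * JointParentPose
            : joint.localPose;

        worldPoses[i]   = world;
        skinMatrices[i] = matrix44::multiply(joint.invBindPose, world);     // final skinning matrix
    }
}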

I will post a video proving that it actually works when I’ve made it work for multiple animation clips (the Zombie.fbx problem).

 

EDIT: I found on the FBX discussion forums that all of this can be done with a single function call, KFbxNode::EvaluateLocalTransform… Thankfully, I’ve learned a lot about how it’s really done under the hood, so I’m not bitter about it… Well OK, maybe a little bit…

Animations

The time I’ve had this past week has been focused on animations and characters in Nebula. There is a big difference between exporting characters in the new installment of Nebula compared to the old. The biggest feature is the fact that one can have several characters in one Maya scene, and have them exported as several individual characters! There is a pretty big difference between characters and ordinary static objects in Nebula, mainly in their model files (.n3). You see, a model file describes a scene, which is usually initiated with a transform node describing the global bounding box for the entire scene. This node then holds all meshes in the scene, with all their corresponding values for material, texture and variables. However, characters are much different! They have another parent node, called CharacterNode, which describes the character skeleton. All meshes described within the CharacterNode are counted as skins for the skeleton, which in turn means they have to be skinnable! This means that having both characters and static objects in the same scene is impossible with the current design. One might ask why I don’t just add a root node which contains both a CharacterNode with all its skins, and then have all the other nodes parallel to that node. Well, you see, Nebula has to decide whether a MODEL is a character or a static mesh, so combining both static meshes and characters would cause big problems. This also means every single skeleton needs its very own model. Currently, the batcher decides whether a Maya scene should be a character in Nebula, or a static mesh. There wouldn’t be a problem with just putting all static objects into one model and having every character in their own separate ones, if it weren’t for giving them a proper name! So one has to choose whether to make an ordinary static object scene, or a character scene, so that’s that!

And of course, the biggest problem is getting the skeletons, animation curves and skinning to work properly, seeing how many variables there are that can go wrong. Currently I think I’ve managed to get the skeleton working properly, seeing as I can have a box using three joints, unanimated, and it looks correct. However, as soon as I apply an animation, it breaks. The image to the left shows how it looks after animation, and the right one before animation.

I also realized that Nebula only accepts skins which use 72 or fewer joints, which means that more complex models need to be split into smaller fragments, where each fragment can use at most 72 joints. I should have this done by the end of the week, unless something very time-consuming turns up.
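The splitting itself can be done greedily; something like this sketch (the Triangle/Fragment types are made up, and the per-fragment joint index remapping is left out):

#include <vector>
#include <set>

static const size_t MaxJointsPerFragment = 72;

// Up to 4 joint indices per vertex, 3 vertices per triangle.
struct Triangle { int jointIndices[12]; int numJoints; };

struct Fragment
{
    std::vector<Triangle> triangles;
    std::set<int>         jointPalette; // joints referenced by this fragment
};

std::vector<Fragment> FragmentSkin(const std::vector<Triangle>& triangles)
{
    std::vector<Fragment> fragments(1);
    for (const Triangle& tri : triangles)
    {
        Fragment* current = &fragments.back();

        // How big would the palette get if we added this triangle?
        std::set<int> merged = current->jointPalette;
        merged.insert(tri.jointIndices, tri.jointIndices + tri.numJoints);

        if (merged.size() > MaxJointsPerFragment && !current->triangles.empty())
        {
            // Palette would overflow: start a new fragment for this triangle.
            fragments.push_back(Fragment());
            current = &fragments.back();
            merged.clear();
            merged.insert(tri.jointIndices, tri.jointIndices + tri.numJoints);
        }

        current->triangles.push_back(tri);
        current->jointPalette = merged;
    }
    return fragments;
}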

I’ve also been collaborating with my colleagues and we’ve started wrapping our programs together, mainly by designing a central class for handling settings. For example, if I set the project directory in Nody, it should be remembered by all toolkit applications so that one doesn’t need to reset it everywhere if one is to change the working directory.