Plug-in, Nebula-out. Get it?

I’ve been hard at work getting a Nebula 3 plugin for Maya to have all the features we’d want. Why, you may ask, isn’t the pipeline awesome as it is? Well, the answer is no, no it isn’t. It works, but it is far from smooth, and the Nebula plugin aims to address that. The plugin is currently only for Maya, but there will be a Motion Builder version as well. The plugin basically just wraps FBX exporting, making sure the output is suitable for Nebula by using a preset included with the Nebula distribution. It also runs the FBX batcher, which converts the FBX file to the model files and mesh files. It can also preview the mesh if it’s exported, and will export it if it doesn’t exist already. This allows for immediate feedback on how the model will look in Nebula. It also tries to preserve the shader variables, but it’s impossible to make it keep the material. That’s because DirectX doesn’t support setting a vertex layout together with a vertex shader that has a smaller input than the layout. This is a problem because converting a skin from static to skinned will cause Nebula to crash, seeing as the material is preserved between exports. So the plugin offers a way to get meshes, including characters, directly to Nebula, which is very nice indeed.

I’ve also been working on getting a complete Motion Builder scene into Nebula, and I actually got Zombie to work with all features. This means the skin, along with more than one animation clip (yay), can be loaded into Nebula seamlessly by simply saving the Motion Builder file and running the exporter. I will probably make a Nebula 3 plugin for Motion Builder as well, so we can have the exact same export and preview capabilities as in Maya.

I know I have been promising a video showing some of the stuff we’ve done, but I just haven’t had the time! Right now, I will start working on the documentation for our applications so there are clear directions for anyone who wishes to use them (mainly our graphics artists here at Campus). The plugin already redirects from Maya to three different HTML docs, each of which will describe the different tools. That’s nice and all, except for the fact that the HTML docs are completely empty.

Characters continued

So I ran into a problem with characters when I tried to load the FBX example Zombie.fbx. It turns out the skeleton in that particular scene isn’t keyed directly, but indirectly through an HIK rig. So when I tried to read the animations from the scene, I got nothing; every curve was a null pointer. I’ve tried back and forth just getting the character in the scene, but the HIK rig doesn’t really NEED to follow the actual skeleton, so there is no exact way of knowing it will fit. Instead, I go into Maya and bake the simulation. This won’t bake the animation to be per-vertex; instead, it will make sure every joint is keyed identically to all its effectors. The reason I don’t really want to read characters wholesale is not just the simple rig-to-joint connections, but also the effectors. A skeleton might be linked to another HIK rig whose animation is not identical, only similar. Baking the simulation will make sure that every effector, along with every possibly related skeleton, gets keyed in the skinned skeleton.

I really hope Motion Builder can do the same, because otherwise we are going to be stuck using a single animation layer until the Maya FBX exporter gets support for exporting multiple takes.

I’ve also been working on getting the model files to update when one exports a previously existing Maya scene. The thing is that if one has spent lots of time modifying textures and shader variables, and then decides to tamper with the base mesh or possibly the entire scene, the model file will still retain the information previously supplied. This compensates for the fact that we can’t set shaders, textures and shader variables in Maya, but have to do it in an external program. When this is done, only meshes with names identical to those already in the model file will retain their attributes; all others have to be changed in the material editor.
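
That name-matching merge can be sketched like this (a minimal sketch in plain C++; the map types and MergeModels are stand-ins I made up for illustration, not the actual .n3 handling code):

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical stand-ins: attributes previously set in the material editor,
// keyed by attribute name, and a model mapping mesh node names to attributes.
using Attributes = std::map<std::string, std::string>;
using Model = std::map<std::string, Attributes>;

// A re-exported model keeps the old attributes only for meshes whose names
// still exist; new or renamed meshes start out with the exporter's defaults.
Model MergeModels(const Model& oldModel, const Model& newModel)
{
    Model result = newModel;
    for (auto& [name, attrs] : result)
    {
        auto it = oldModel.find(name);
        if (it != oldModel.end())
        {
            attrs = it->second;  // identical name: retain previous attributes
        }
    }
    return result;
}
```

Anything not matched by name falls through to the defaults, which is why renamed meshes have to be touched up again in the material editor.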

Characters

I’ve been hard at work getting characters to work, and now they do, well sort of…

There seem to be two problems with characters at the present moment. The first is that the Maya FBX exporter has no way of setting the skin and skeleton to bind pose before exporting (which is super important, because the skeleton can be retrieved in bind pose, but the mesh can’t). If one does not export while in bind pose, the mesh will not be relative to the skeleton, resulting in an incorrect mesh deformation. The second problem is that I can’t load the animations from the FBX examples such as Zombie.fbx. The mesh works, the skeleton works, but the animation clips don’t get loaded for some reason. I have yet to investigate this. The good news is that I actually can have an animated character! This means that the skin fragmentation, skeleton construction, animation clips (albeit only from Maya at the moment) and scene construction with a character actually work!

Now, if you found this post because you were tearing your hair out trying to understand how Maya stores joints in FBX, here is the deal. First of all, your KFbxPose is useless, seeing as you want your joints in bind pose to begin with. The only thing you need is the KFbxNode for each joint, which is easily retrievable using a recursive algorithm to traverse the joint hierarchy. Once you have it, all you want is the pre-rotation (using KFbxNode::GetPreRotation(KFbxNode::eSOURCE_SET)) and the current orientation using KFbxNode::LclRotation.Get(). The pre-rotation corresponds to the Joint Orientation in Maya, and this rotation will be the basis for your joint. Now we have two vectors, where X, Y and Z correspond to the degree of rotation around each axis. Note the use of degrees; if you want radians, this is where you want to convert. The pre-rotation (or Joint Orientation) consists of three angular values, rotation around X, Y and Z, but they do not form a free-form transformation. To get a rotation matrix for these angles, you need to construct a rotation matrix for each of X, Y and Z (using the axis 1,0,0 for X, 0,1,0 for Y and 0,0,1 for Z) using the axis-angle principle, where the angle is the rotation value. Multiply these three matrices together and we get our final rotation matrix for the joint.

Example code showing what I just explained (note: n_deg2rad converts degrees to radians):

KFbxVector4 preRotation = joint->fbxNode->GetPreRotation(KFbxNode::eSOURCE_SET);

// first calculate joint orientation
matrix44 xMat = matrix44::rotationx(n_deg2rad((float)preRotation[0]));
matrix44 yMat = matrix44::rotationy(n_deg2rad((float)preRotation[1]));
matrix44 zMat = matrix44::rotationz(n_deg2rad((float)preRotation[2]));
matrix44 totalMat = matrix44::multiply(matrix44::multiply(xMat, yMat), zMat);
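
If you don’t have a math library at hand, here is a minimal self-contained sketch of what I assume matrix44::rotationx and friends compute: the standard rotation matrices around the principal axes, i.e. the axis-angle formula specialized for (1,0,0), (0,1,0) and (0,0,1). A 3x3 is enough to show the idea; conventions (handedness, row versus column vectors) may differ from your library.

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Mat3 = std::array<std::array<float, 3>, 3>;

// Rotation around the X axis (1,0,0) by 'rad' radians.
Mat3 RotationX(float rad)
{
    float c = std::cos(rad), s = std::sin(rad);
    return {{{1, 0, 0}, {0, c, -s}, {0, s, c}}};
}

// Rotation around the Y axis (0,1,0).
Mat3 RotationY(float rad)
{
    float c = std::cos(rad), s = std::sin(rad);
    return {{{c, 0, s}, {0, 1, 0}, {-s, 0, c}}};
}

// Rotation around the Z axis (0,0,1).
Mat3 RotationZ(float rad)
{
    float c = std::cos(rad), s = std::sin(rad);
    return {{{c, -s, 0}, {s, c, 0}, {0, 0, 1}}};
}

// Plain matrix product, used to combine the three axis rotations.
Mat3 Multiply(const Mat3& a, const Mat3& b)
{
    Mat3 r{};
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            for (int k = 0; k < 3; k++)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}
```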

 

That is not enough, however. We also want the bone’s rotation at the moment of binding, which we get from LclRotation as previously mentioned. Then apply the same principle to those values…

KFbxVector4 bindRotation = joint->fbxNode->LclRotation.Get();

// then calculate the bind value for the bone
matrix44 bindX = matrix44::rotationx(n_deg2rad((float)bindRotation[0]));
matrix44 bindY = matrix44::rotationy(n_deg2rad((float)bindRotation[1]));
matrix44 bindZ = matrix44::rotationz(n_deg2rad((float)bindRotation[2]));
matrix44 bindMatrix = matrix44::multiply(matrix44::multiply(bindX, bindY), bindZ);

 

Now we have both matrices, the bone matrix and the joint matrix. In games, we don’t really care about bones as such, seeing as we want a single joint matrix which we can use in our skinning shader. Anyhow, once we have our matrices, we also want to combine them to get the actual joint bind pose…

// multiply them
bindMatrix = matrix44::multiply(bindMatrix, totalMat);

 

If you want quaternions as rotations, which is the case with Nebula, just convert the matrix to a quaternion; otherwise, keep it like this. This solution gives you the joints NOT multiplied with their parents’ matrices, and that is because Nebula wants them unrelated. Nebula calculates the inverted bind pose for each joint when loading, and simply multiplies the current joint with the parent’s matrix when evaluating the skeleton. Another way of solving this would be to get the KFbxPose, but the pose only gives you matrices which are premultiplied by all parents, which means the skeleton will be in world space, detached from the parents, which in turn means Nebula won’t be able to multiply the joints with their parents’ matrices. So, use this method if you want to skin in real time using the algorithm JointMatrix = JointPoseInverted * (JointPose * JointParentPose).

So the conclusion is that Nebula wants the joints in local space (not multiplied by their parents), so that Nebula can apply the parents afterwards. The reason is that the skeleton can have several animation clips running at the same time, which means a parent joint might be affected by two animations simultaneously, and thus the parent matrix cannot be pre-multiplied. This is probably the way most game engines handle skeletons, seeing as it gives minimal re-computation for maximal flexibility.
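
To make the evaluation order concrete, here is a minimal sketch of that algorithm using translation-only joints (plain C++; Mat4, Joint and EvaluateSkinning are illustration names, not Nebula’s API, and a real implementation inverts the full bind pose matrix rather than just a translation):

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <vector>

// Minimal row-major 4x4 matrix standing in for Nebula's matrix44.
using Mat4 = std::array<float, 16>;

Mat4 Identity()
{
    Mat4 m{};
    m[0] = m[5] = m[10] = m[15] = 1.0f;
    return m;
}

Mat4 Translation(float x, float y, float z)
{
    Mat4 m = Identity();
    m[3] = x; m[7] = y; m[11] = z;
    return m;
}

Mat4 Multiply(const Mat4& a, const Mat4& b)
{
    Mat4 r{};
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            for (int k = 0; k < 4; k++)
                r[i * 4 + j] += a[i * 4 + k] * b[k * 4 + j];
    return r;
}

// Inverting a pure translation is enough for this sketch.
Mat4 InverseTranslation(const Mat4& m)
{
    return Translation(-m[3], -m[7], -m[11]);
}

// A joint stores its parent index (-1 for the root) and its local,
// unmultiplied pose, exactly the form the export described above produces.
struct Joint
{
    int parent;
    Mat4 localPose;
};

// Walk the skeleton (parents stored before children), accumulate world
// poses for the bind and animated skeletons, then build the skinning
// matrix per joint as animWorld * inverse(bindWorld).
std::vector<Mat4> EvaluateSkinning(const std::vector<Joint>& bind,
                                   const std::vector<Joint>& anim)
{
    std::vector<Mat4> bindWorld(bind.size());
    std::vector<Mat4> animWorld(anim.size());
    std::vector<Mat4> skin(anim.size());
    for (size_t i = 0; i < bind.size(); i++)
    {
        Mat4 bp = bind[i].parent < 0 ? Identity() : bindWorld[bind[i].parent];
        Mat4 ap = anim[i].parent < 0 ? Identity() : animWorld[anim[i].parent];
        bindWorld[i] = Multiply(bp, bind[i].localPose);
        animWorld[i] = Multiply(ap, anim[i].localPose);
        skin[i] = Multiply(animWorld[i], InverseTranslation(bindWorld[i]));
    }
    return skin;
}
```

Because the parents are multiplied in at evaluation time, two clips can blend into a parent’s local pose before any child ever sees it, which is exactly why the premultiplied KFbxPose matrices are of no use here.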

I will post a video proving that it actually works when I’ve made it work for multiple animation clips (the Zombie.fbx problem).

 

EDIT: I found on the FBX discussion forums that all of this can be done with a single function call, KFbxNode::EvaluateLocalTransform… Thankfully, I’ve learned a lot about how it’s really done under the hood, so I’m not bitter about it… Well OK, maybe a little bit…

Animations

The time I’ve had this past week has been focused on animations and characters in Nebula. There is a big difference between exporting characters in the new installment of Nebula compared to the old. The biggest feature is the fact that one can have several characters in one Maya scene, and have them exported as several individual characters! There is a pretty big difference between characters and ordinary static objects in Nebula, mainly in their model files (.n3). You see, a model file describes a scene, which is usually initiated with a transform node describing the global bounding box for the entire scene. This node then holds all meshes in the scene, with all their corresponding values for material, texture and variables. Characters, however, are much different! They have another parent node, called CharacterNode, which describes the character skeleton. All meshes described within the CharacterNode count as skins to the skeleton, which in turn means they have to be skinnable!

This means that having both characters and static objects in the same scene is impossible with the current design. One might ask why I don’t just add a root node which contains both a CharacterNode with all its skins, and all the other nodes parallel to that node. Well, you see, Nebula has to decide whether a MODEL is a character or a static mesh, so combining static meshes and characters would cause big problems. This also means every single skeleton needs its very own model. Currently, the batcher decides whether a Maya scene should become a character in Nebula or a static mesh. It wouldn’t be a problem to put all static objects into one model and every character into its own, if it weren’t for giving them all proper names! So one has to choose between making an ordinary static object scene or a character scene, and that’s that!

And of course, the biggest problem is getting the skeletons, animation curves and skinning to work properly, seeing how many variables there are that can go wrong. Currently I think I’ve managed to get the skeleton working properly, seeing as I can have a box using three joints, unanimated, and it looks correct. However, as soon as I apply an animation, it breaks. The image to the left shows how it looks after animation, and the right one before animation.
I also realized that Nebula only accepts skins which use 72 or fewer joints, which means that more complex models need to be split into smaller fragments, where each fragment uses 72 or fewer joints. I should have this done by the end of the week unless something very time-consuming turns up.

I’ve also been collaborating with my colleagues and we’ve started wrapping our programs together, mainly by designing a central class for handling settings. For example, if I set the project directory in Nody, it should be remembered by all toolkit applications so that one doesn’t need to reset it everywhere if one is to change the working directory.

Content

The past two weeks have all been centered around content, and how to get content into Nebula. We’ve been working with a format called Alembic, which provides easy-to-use export plugins for Maya. We just recently realized, though, that the Maya plugin doesn’t connect skins and skeletons, so there is no way of knowing what skin goes to what skeleton, bah! Instead, Maya animates every single vertex individually, so a 5 megabyte .mb file becomes a 50 megabyte .abc file! Not only is that space-inefficient, it’s also an enormous setback in performance when animating. Anyhow, we decided to revert to FBX, because we realized it would be much easier to dig into (mainly because there has been some work done by a couple of my old classmates). They made an exporter which allowed you to make a character in Maya, export it to FBX, and then use it in Nebula. And while that is rather nice, the application was quite difficult to expand on.

So we made a new one! It basically does all the same things at the moment, except in a far more modular way. For example, if one has several meshes in one Maya scene, the FBX batcher will export every single mesh node to its own file, and if there is only one mesh, it will create a file with the same name as its Maya counterpart. Also, the batcher will create a model (basically scene graph fragments) which holds every single mesh in your Maya scene as a separate mesh, thus allowing you to modify them by attaching different materials, variables etc. The only thing left to do is to allow parenting of objects. Seeing as Nebula models already handle parenting through a node hierarchy, one could just as easily make sure the meshes come in the exact same hierarchy in the model as they do in Maya. The only problem is that there might be some unexpected behavior when animating. This remains to be seen; however, I fear that parenting with characters will be a feature to add in the very near future.

Right, content, where was I. Yes, meshes, OK, we need meshes, and we have meshes. Although it wasn’t trouble-free. Nebula has a set of tools which greatly help with mesh importing: the MeshBuilder, MeshBuilderVertex and MeshBuilderTriangle. These three classes are all you need to represent a mesh. Getting data from FBX was trivial, but getting data into the MeshBuilder wasn’t. Since a vertex can have several UV coordinates, and the FBX models are compressed in such a manner that they have an index list and a data list for every piece of vertex information, it would require extensive parsing just to know how many vertices one would need! Instead, the MeshBuilder has a function called Inflate, which makes sure every triangle has its own unique set of vertices. Thus, one can traverse the index lists and set the data with ease, and then just remove the redundant vertices, right? WRONG! Retrieving bitangents and tangents from FBX resulted in every single vertex being unique, which in turn resulted in the MeshBuilder removing 4 vertices when cleaning up. The removal destroyed the mesh, and that was of course not acceptable. So instead we decided to get the UVs and normals, which are only unique for some vertices, then deflate (remove redundancies) and calculate the bitangents and tangents ourselves. No more exploding meshes = mission accomplished.

This wasn’t all, however. It turns out Nebula saves meshes in a compressed format, where positions are stored raw, normals, bitangents and tangents are saved as single ints, and the texture coordinates are saved as one int. Normals use a packing technique where the RG and BA components of the vector describe where in tangent space the normal is pointing, which gives you decent precision. Texture coordinates are saved as two 16-bit unsigned shorts, so each texture coordinate pair is only the size of an int. Texture coordinates need more precision than normals, but certainly don’t need 32 bits per component. Right, I thought you might want to see some proof, so I grabbed a picture for you.
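
That kind of packing can be sketched like this, assuming 8 bits per normal component and normalized 16-bit texture coordinates (the exact encoding Nebula uses may differ, and the function names are mine):

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Pack a unit normal into a single 32-bit int, 8 bits per component,
// mapping [-1, 1] to [0, 255]. The top byte is left zero here.
uint32_t PackNormal(float x, float y, float z)
{
    auto toByte = [](float f) {
        return static_cast<uint32_t>(std::lround((f * 0.5f + 0.5f) * 255.0f));
    };
    return toByte(x) | (toByte(y) << 8) | (toByte(z) << 16);
}

// Pack a texture coordinate pair into one int: two 16-bit unsigned shorts,
// mapping [0, 1] to [0, 65535].
uint32_t PackUv(float u, float v)
{
    auto toShort = [](float f) {
        return static_cast<uint32_t>(std::lround(f * 65535.0f));
    };
    return toShort(u) | (toShort(v) << 16);
}

// Unpacking is the opposite scale and bias.
float UnpackU(uint32_t packed) { return (packed & 0xFFFF) / 65535.0f; }
float UnpackV(uint32_t packed) { return (packed >> 16) / 65535.0f; }
```

With 16 bits a UV is accurate to roughly 1/65535 of the texture, plenty for texturing, while a normal at 8 bits per component is still smooth enough for lighting.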

This shows two models made in Maya and exported using the FBX batcher. The artifacts you might see on the inside of the “sphere” are a glitch caused by the SSAO shader. The character is dressed with the eagle texture, which explains her being transparent in some areas.