Lately, I’ve been hard at work with the stuff I like the most: shading! First, I reimplemented the old Nebula exponential shadow maps (which look great, by the way) in DX11 for point lights and spotlights. The directional light is a little trickier, but I will be all over that shortly. I also took the chance to replace the DSF depth buffer with a higher-precision depth buffer. The old DSF buffer stored 8 bits of normal id, 8 bits of object id and 16 bits of depth, whereas the new buffer stores 32 bits of pure depth, removing all the halo problems.
Bringing shadows to life sure wasn’t easy, but it was well worth the while. Nebula used to be limited to 4 shadow-casting lights per frame; I figured I could boost that to 16 (newer hardware can handle it). I’ve also begun working on the global light shadow algorithm, starting by making the PSSM method work. I chose PSSM because major parts of it have already been implemented, but also because it seems like a sound approach.
To handle AO, and to set its variables, I decided to make a new server for it, fitted right next to the light and shadow servers (seeing as it has to do with lighting). The method for computing screen-space ambient occlusion is called HBAO, which basically samples the depth buffer using an offset (random) texture. It uses the current sample point together with the sampled point to calculate not only the depth difference, but also the difference in angle, giving a strong occlusion effect when the angle between two surface tangents differs a lot. More about the algorithm can be found here: http://www.nvidia.com/object/siggraph-2008-HBAO.html.
The introduction of the AOServer also allows for setting the AO variables live (whenever there is an interface available). Here are two pics showing the awesome new graphics.
One picture shows a real-time AO pass, and the other shows a scene with 3 shadow-casting lights. Pretty nice, right?
I’ve also found a couple of things I want to do with Nody. First, I changed it so one can create different types of render targets: if one wants to write to a 1-, 2-, 3- or 4-channel texture, one should have no problem doing so. I’m also considering allowing render targets to be added and manipulated between frame shaders. For example, the AO pass uses the DepthBuffer from the main frame shader, and the main frame shader uses the AO buffer from the HBAO frame shader. I also want to be able to add an MRT or an RT directly to the output node, and then decide how many channels to use. Once that is done, one should not be able to add new render targets or MRTs; this avoids silent errors which might occur if the render targets have their positions switched. Also, attaching render targets to the output node should let you pick from ANY frame shader, instead of just the current one.
It would also be awesome to be able to use compute shaders in Nody, as well as nodes which let you code everything freely.