Being wrong, again
So, I was wrong again. You can’t blend normals. This should have been obvious to me, since blending two normals can null each other out completely. Take a normal, let’s say it’s (0.5, 0.5, 0.5) pre-normalization, and blend it with the vector (-0.5, -0.5, -0.5) using a blend factor of 0.5. What do you get? That’s right, zero, you get zero, resulting in complete and utter darkness. This is not good, for tons of reasons, many of which I shan’t explain. Just try constructing a TBN matrix where one vector is (0, 0, 0). Not so easy! So I removed it, and sure, objects which use alpha (mainly transparent surfaces) won’t get lit correctly, since their lighting should take place per alpha-object instead of in screen space. On the other hand, it’s hardly visible, seeing as alpha-objects often consist of glass, and many layers of glass give an additive effect because of reflections of the light, which makes the rendering error hard to spot.
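To see why this blows up, here is a minimal C++ sketch (not actual Nebula code; the vector type and lerp helper exist only for this example):

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Plain linear blend: a * (1 - t) + b * t
Vec3 lerp(Vec3 a, Vec3 b, float t)
{
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

float length(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

int main()
{
    Vec3 n0 = {  0.5f,  0.5f,  0.5f };  // first normal (pre-normalization)
    Vec3 n1 = { -0.5f, -0.5f, -0.5f };  // opposing normal

    Vec3 blended = lerp(n0, n1, 0.5f);  // alpha blend with factor 0.5
    std::printf("blended = (%f, %f, %f), length = %f\n",
                blended.x, blended.y, blended.z, length(blended));
    // The length is 0, so normalizing divides by zero and any TBN matrix
    // built from this normal is degenerate.
    return 0;
}
```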
The good news is that I finally got CSM to work! It wasn’t easy, and I’ll explain why. As you may or may not know, Nebula saves the depth of each pixel as the length of that pixel’s view-space position. This allows us to take any arbitrary geometry in the scene and calculate a surface position lying in the same direction as the pixel we are currently rendering. Why is this important? Well, saving depth in this manner lets us take, say, a point light and the shaded surface position it should light, determine the distance between the two points, and light the surface accordingly. Great! We can literally reconstruct our world-space positions by taking the depth buffer and multiplying it with a normalized vector pointing from the camera to the pixel. Now comes the hard part: what do you do when you have no 3D geometry to render, but instead a full-screen quad, as is the case with our global light? After finding this guide: http://mynameismjp.wordpress.com/2010/09/05/position-from-depth-3/, I tried it out and got some good and some not so good results.
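A rough C++ sketch of that reconstruction idea for ordinary geometry (the function names and the linear falloff are purely illustrative, not Nebula’s actual shader code):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3  operator-(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vec3  operator+(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
Vec3  operator*(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
float length(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }
Vec3  normalize(Vec3 v) { float l = length(v); return { v.x / l, v.y / l, v.z / l }; }

// Depth is stored as the length of the view-space position, so the surface
// position can be rebuilt from the camera position and a direction pointing
// from the camera through the shaded pixel.
Vec3 ReconstructWorldPos(Vec3 cameraPos, Vec3 dirToPixel, float storedDepth)
{
    return cameraPos + normalize(dirToPixel) * storedDepth;
}

// With the surface position back in world space, lighting a point light is
// just a distance check between the light and the surface.
float PointLightAttenuation(Vec3 lightPos, Vec3 surfacePos, float lightRange)
{
    float d   = length(lightPos - surfacePos);
    float att = 1.0f - d / lightRange;  // simple linear falloff, for illustration
    return att < 0.0f ? 0.0f : att;
}
```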
What you see is the same scene, using the above-mentioned algorithm to reconstruct the world-space position. If you’re lazy, this is the general concept: in the vertex shader, get a vector pointing from the camera (which is at (0, 0, 0) for a full-screen quad) to each frustum corner. Then, in the pixel shader, normalize this ray and take camera position + normalized ray * sampled depth, where the sampled depth is the length of the view-space position vector. I thought the artifact could be the result of a precision error or some such, but the solution proved that it wasn’t. Instead of using a ray to recreate the position, I simply let my geometry shader (not the GS, but the shader used to render deferred geometry) output the world-space position to a buffer holding these values. Sampling that texture instead, the big black blob disappeared. Somehow the ray-based method for calculating the world-space position must lack accuracy, so I pondered. Normalizing the vector could not be correct. Why? Well, consider the fact that I want a vector going from the start of the frustum to each pixel on the far plane. Those vectors are not unit length, since the distance straight ahead is shorter than the distance to a corner, which follows from the Pythagorean theorem. Currently the shadows work, and I’ve baked the data into an extended buffer, using the depth as the alpha component and the RGB as the world-space position components, to save using another render target. I’m not satisfied yet though, since the explanation on mynameismjp.wordpress.com must have some validity to it, but for now, I’ll keep the data in the render target.
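For now, the extended buffer roughly amounts to the packing below (a hedged C++ sketch; the function names are hypothetical and the real work of course happens in the shaders):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec4 { float x, y, z, w; };

float length(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// What the geometry pass writes per pixel: world-space position in RGB and
// the view-space distance (length of the view-space position) in alpha,
// so no extra render target is needed.
Vec4 PackPositionAndDepth(Vec3 worldPos, Vec3 viewPos)
{
    return { worldPos.x, worldPos.y, worldPos.z, length(viewPos) };
}

// What the global-light (full-screen) pass reads back: the stored position is
// used directly instead of being reconstructed from a normalized frustum ray,
// which is where the black-blob artifact appeared.
void UnpackPositionAndDepth(Vec4 sample, Vec3& worldPos, float& depth)
{
    worldPos = { sample.x, sample.y, sample.z };
    depth    = sample.w;
}
```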