We’ve been somewhat silent. I would say very silent. Silence doesn’t mean we’ve done nothing though, quite the contrary.
We’ve been working A LOT on the tools, mainly the content browser and the level editor. On my part, I’ve been fiddling with the content browser to make it simpler, more minimalistic and faster. For example, the content browser now allows one or several particle nodes to be attached to a static model, which means that artists are able to build more complex (albeit non-skinned) pre-defined objects by adding particle nodes to them. The content browser is also more responsive and interactive thanks to the new system used when saving, loading and applying object parameter changes such as textures and other shader variables.
More relevant, however, is the new IBL pipeline, which allows a level designer to place light probes in the scene, have them render a reflection cube map and an irradiance cube map, and then have these applied in a novel fashion to overlapping objects. To do so, a designer puts a light probe in the scene, gives it a few parameters, presses a button and woop, the entire area is affected by said reflections and irradiance. This effect gives an illusion of real-time GI, since it simulates specular light bounces through reflections, and ambient light bounces through irradiance. The following image shows the result as displayed inside the level editor:

This image shows how reflections are projected inside the scene.
To do this, we first capture the scene from 6 angles, using a bounding box as the capturing area. This area is later used to reproject the reflected rays so that we get parallax-corrected cube maps when rendering. The result of the render is saved to a render target and then processed by CubeMapGen, which is now integrated into Nebula as an external library. Using a predefined set of settings, we generate the reflections and, optionally, the irradiance; output them to the work folder; and then assign them to their influence zones.
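For the parallax correction, the idea is roughly this: instead of sampling the cube map with the reflected ray direction directly, we intersect the ray with the probe’s capture box and use the vector from the capture position to the intersection point. A minimal sketch in C++ (the real work happens in the shader, and all names here are ours, for illustration):

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Parallax-corrected cube map lookup direction. Assumes worldPos lies
// inside the capture box and reflDir is normalized.
Vec3 ParallaxCorrectedDir(Vec3 worldPos, Vec3 reflDir,
                          Vec3 boxMin, Vec3 boxMax, Vec3 probePos)
{
    // Distance along the ray to the box plane it exits through, per axis.
    float tx = ((reflDir.x >= 0.0f ? boxMax.x : boxMin.x) - worldPos.x) / reflDir.x;
    float ty = ((reflDir.y >= 0.0f ? boxMax.y : boxMin.y) - worldPos.y) / reflDir.y;
    float tz = ((reflDir.z >= 0.0f ? boxMax.z : boxMin.z) - worldPos.z) / reflDir.z;
    float t = std::min(tx, std::min(ty, tz));

    // The exit point, expressed relative to the probe's capture position,
    // becomes the direction used to sample the cube map.
    Vec3 hit = worldPos + reflDir * t;
    return hit - probePos;
}
```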
Simple stuff. But here comes the interesting part. As far as I have seen (this might be false), the common solution is to detect objects entering the reflection zone and assign each of them a reflection map from which to calculate reflections. Some solutions use the camera as a singular point of interest, and assign the reflections to every visible object when the camera enters the influence zone. We do it differently.
We had this conundrum where we visualized two different zones of irradiance separated by a sharp border, say a door. Inside the room the lighting is dark, and outside the room, in the corridor, you have strong lights in the ceiling. If an object were to move between said areas, the irradiance should change gradually as it crosses from one zone to the other. This can be accomplished in a pixel shader by simply submitting N reflection and irradiance cube maps, calculating the distance between the pixel and each cube map, and then blending based on that distance.
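A minimal sketch of that per-object blending, written as C++ for readability rather than shader code; the probe layout and the linear falloff are our assumptions:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Probe
{
    float x, y, z;     // cube map capture position
    float radius;      // influence radius
    int cubeMapIndex;  // which reflection/irradiance pair to sample
};

// Weight each probe by the shaded point's distance to its center, then
// normalize so the weights sum to one. The shader would then sample each
// probe's cube maps and mix the results with these weights.
std::vector<float> BlendWeights(const std::vector<Probe>& probes,
                                float px, float py, float pz)
{
    std::vector<float> weights(probes.size(), 0.0f);
    float total = 0.0f;
    for (size_t i = 0; i < probes.size(); ++i)
    {
        float dx = px - probes[i].x, dy = py - probes[i].y, dz = pz - probes[i].z;
        float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
        weights[i] = std::max(0.0f, 1.0f - dist / probes[i].radius);
        total += weights[i];
    }
    if (total > 0.0f)
        for (float& w : weights) w /= total;
    return weights;
}
```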
We thought of another way, one in which we don’t have to do the work per object. A way that would let us draw an ‘infinite’ number of reflections and irradiance volumes per pixel, without the overhead of calculating reflections for pixels we simply cannot see. Enter deferred decal projected reflections.
By using decals, we can project reflections and irradiance into the scene using an origin of reflection and an influence area. Each decal blends its reflections and irradiance based on a distance field function (box or sphere), which allows overlapping reflections to blend into each other (a sketch of these falloffs follows the list below). The decals are then rendered into the scene as boxes, like any other object, but with a special shader that respects roughness and specularity. Using this method, we avoid:
1. Reflecting unseen pixels.
2. Submitting and binding textures for each affected object.
3. Doing collision checks to apply reflections to dynamic objects.
4. Having an upper limit on the number of reflection/irradiance zones.
5. Popping.
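As for the distance field blending, the sketch below shows the two falloffs we have in mind (the exact shapes are our guess at the idea, not the actual Nebula shader code). Both functions take a point in the decal’s local space, where the influence volume is the unit box or sphere around the origin, and return a weight that is 1 at the center and fades to 0 at the boundary:

```cpp
#include <algorithm>
#include <cmath>

// Sphere falloff: the weight drops linearly with distance from the center.
float SphereWeight(float x, float y, float z)
{
    float dist = std::sqrt(x * x + y * y + z * z);
    return std::max(0.0f, 1.0f - dist);
}

// Box falloff: driven by the largest axis coordinate, so the iso-surfaces
// of the falloff are nested boxes rather than spheres.
float BoxWeight(float x, float y, float z)
{
    float dist = std::max(std::fabs(x), std::max(std::fabs(y), std::fabs(z)));
    return std::max(0.0f, 1.0f - dist);
}
```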
We have some limitations, however, namely:
1. Decals have to be rendered ordered by layer.
2. We have to switch between the box and sphere distance field functions without shader-switch overhead (sorting is impossible, since the layer dictates draw order).
3. We potentially switch back and forth between many different textures (if we can see many reflections simultaneously).
4. We don’t calculate the reflection once and store it in the G-buffer. The projector shader is somewhat complex and computationally heavy, so any simplifications are welcome; a rough sketch of its per-pixel logic follows below.
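To make the projector pass concrete, here is a rough per-pixel outline in C++, reusing Vec3, ParallaxCorrectedDir and BoxWeight from the sketches above. This is our reconstruction of the technique, not the actual shader; roughness-based mip selection and specular modulation are left out to keep it short:

```cpp
#include <functional>

// One pixel of the hypothetical projector pass. Everything comes from the
// G-buffer and the decal's own parameters, so no per-object data is bound.
// 'sampleCube' stands in for the reflection cube map lookup.
Vec3 ShadeReflectionDecal(Vec3 worldPos, Vec3 normal, Vec3 viewDir,
                          Vec3 boxMin, Vec3 boxMax, Vec3 probePos,
                          const std::function<Vec3(Vec3)>& sampleCube)
{
    // Reflect the camera-to-surface ray about the normal: r = v - 2(v.n)n.
    float vDotN = viewDir.x * normal.x + viewDir.y * normal.y + viewDir.z * normal.z;
    Vec3 refl = viewDir - normal * (2.0f * vDotN);

    // Parallax-correct against the capture volume, then sample.
    Vec3 color = sampleCube(ParallaxCorrectedDir(worldPos, refl, boxMin, boxMax, probePos));

    // Map the world position into the decal's local [-1, 1] box and weight
    // by the distance field so overlapping decals blend smoothly.
    float lx = (2.0f * worldPos.x - (boxMin.x + boxMax.x)) / (boxMax.x - boxMin.x);
    float ly = (2.0f * worldPos.y - (boxMin.y + boxMax.y)) / (boxMax.y - boxMin.y);
    float lz = (2.0f * worldPos.z - (boxMin.z + boxMax.z)) / (boxMax.z - boxMin.z);
    return color * BoxWeight(lx, ly, lz);
}
```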
Our solution gives us the ability to project decals into the scene instead of having to apply them per object, meaning we won’t have any popping or appearing artifacts, and we get good scalability with the number of objects. I doubt this hasn’t been done before, and there are probably caveats with this method which are yet to be discovered.