A random idea for a new way to do deferred lighting. The idea is to decouple lighting from geometry normals. To do that, lighting information is stored as aggregated lights (direction + color).
- 1st pass – z-prepass (render depth only)
- 2nd pass – render light geometry / quads / tiles… and output aggregated virtual directional lights for every pixel. This means a weighted average of light directions and a weighted sum of light colors per pixel.
- 3rd pass – render geometry and shade it using the buffer with aggregated directional lights (and maybe add a standard forward directional light)
2nd pass render target layout:
- RT0: aggregated light color RGB
- RT1: aggregated light direction XYZ
We want to achieve this:

AggregatedLightColor = 0.
AggregatedLightDir = 0.
for every light
{
    AggregatedLightColor += LightColor * LightAttenuation
    AggregatedLightDir += LightDir * intensity( LightColor * LightAttenuation )
}
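The loop above can be sketched in plain Python (CPU-side, not shader code). Here `intensity()` is assumed to be a luminance dot product, matching the `dot( ColorRT0, ToGrayscaleVec )` in the pixel shader output described below; the Rec. 601 luma weights are my assumption, not from the post:

```python
def intensity(color):
    # Rec. 601 luma weights as a stand-in for ToGrayscaleVec
    return 0.299 * color[0] + 0.587 * color[1] + 0.114 * color[2]

def aggregate_lights(lights):
    """lights: list of (light_dir, light_color, attenuation) hitting one pixel."""
    agg_color = [0.0, 0.0, 0.0]
    agg_dir = [0.0, 0.0, 0.0]
    for light_dir, light_color, attenuation in lights:
        attenuated = [c * attenuation for c in light_color]
        weight = intensity(attenuated)              # scalar brightness of this light
        agg_color = [a + c for a, c in zip(agg_color, attenuated)]
        agg_dir = [a + d * weight for a, d in zip(agg_dir, light_dir)]
    return agg_color, agg_dir
```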
In order to do this, we need:
- Initialize RT0 and RT1 to 0x00000000
- Set up additive blending states
- Output from the light pixel shader:
ColorRT0 = LightColor * LightAttenuation
ColorRT1 = LightDirection * dot( ColorRT0, ToGrayscaleVec )
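The 3rd pass then shades geometry from the aggregated buffers. A minimal sketch, assuming Lambert diffuse only (the aggregated direction must be renormalized, since after the weighted sum its length encodes the total weight, not a unit vector):

```python
import math

def shade(normal, albedo, agg_color, agg_dir):
    # Renormalize the aggregated direction: its magnitude is the sum of
    # weights, not a meaningful length.
    length = math.sqrt(sum(d * d for d in agg_dir))
    if length == 0.0:
        return [0.0, 0.0, 0.0]  # no light reached this pixel
    l = [d / length for d in agg_dir]
    n_dot_l = max(0.0, sum(n * d for n, d in zip(normal, l)))
    return [a * c * n_dot_l for a, c in zip(albedo, agg_color)]
```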
Cons?
- Light aggregation into a virtual directional light per pixel is an approximation. Moreover, we can’t properly blend directions using their arithmetic average. It means that with many lights per pixel (especially with opposing directions) it won’t be very accurate (though the error shouldn’t be too visible).
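A tiny worked example of the worst case (hypothetical values): two equally bright lights from exactly opposite directions cancel in the weighted direction sum, so the aggregate keeps the color but loses the direction entirely:

```python
def luma(c):
    # stand-in for ToGrayscaleVec (Rec. 601 weights, my assumption)
    return 0.299 * c[0] + 0.587 * c[1] + 0.114 * c[2]

# two white lights of equal intensity, one from +Z and one from -Z
lights = [((0.0, 0.0, 1.0), (1.0, 1.0, 1.0)),
          ((0.0, 0.0, -1.0), (1.0, 1.0, 1.0))]

agg_color = [0.0, 0.0, 0.0]
agg_dir = [0.0, 0.0, 0.0]
for direction, color in lights:
    w = luma(color)
    agg_color = [a + c for a, c in zip(agg_color, color)]
    agg_dir = [a + d * w for a, d in zip(agg_dir, direction)]

# agg_color sums to [2, 2, 2], but agg_dir is [0, 0, 0]: the direction
# information is lost even though the pixel is clearly lit from both sides.
```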
Benefits?
- Flexibility. You can use almost any lighting model
- You can render lighting at a lower resolution, since high-frequency normal map detail is added later. There will be artifacts at depth discontinuities, but maybe for some types of content (think desaturated and gray like Gears of War or Killzone 2 :)) they won’t be too visible
- Less bandwidth and memory usage (compared to deferred lighting and shading, which stores the full specular color, not just its intensity).
- A z-prepass is faster than rendering a G-buffer or normals + exponent
- Slightly simpler calculations. No need to encode / decode material properties (normal, exponent, …).
Now it’s time to find some free time and code a demo to compare it against deferred lighting/shading in a real application :).
P.S. Decoupling can also be done by storing lighting as spherical harmonics or cubemaps: link1 link2 link3 (thanks Hogdman from the gd.net forums). The downside of that method is the lack of proper specular, because the lighting data is low frequency, and it will also be slower.
P.S. 2 It looks like it would be better to store the directions as angles (RT1.xy – two weighted angles, RT1.z – sum of weights). That would ensure proper interpolation of the aggregated light direction.
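A sketch of that angle-based variant, with a hypothetical polar/azimuth encoding (note that naively averaging the azimuth still breaks at the ±π wraparound of atan2, so this is only a partial fix):

```python
import math

def dir_to_angles(d):
    x, y, z = d
    theta = math.acos(max(-1.0, min(1.0, z)))  # polar angle from +Z
    phi = math.atan2(y, x)                     # azimuth in the XY plane
    return theta, phi

def angles_to_dir(theta, phi):
    s = math.sin(theta)
    return (s * math.cos(phi), s * math.sin(phi), math.cos(theta))

def aggregate_as_angles(lights):
    """lights: list of (direction, weight).
    RT1.xy ~ weighted angle sums, RT1.z ~ sum of weights."""
    sum_theta = sum_phi = sum_w = 0.0
    for d, w in lights:
        theta, phi = dir_to_angles(d)
        sum_theta += theta * w
        sum_phi += phi * w
        sum_w += w
    if sum_w == 0.0:
        return (0.0, 0.0, 1.0)  # arbitrary fallback for unlit pixels
    # divide out the weight sum to recover the average angles
    return angles_to_dir(sum_theta / sum_w, sum_phi / sum_w)
```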
UPDATE: I prototyped this method and it doesn’t work too well :). Below is a comparison screenshot of a hard case for the idea – two point lights with very different colors influencing the same area. Left – normal lighting; right – lighting aggregated to a single direction and color: