Aggregated deferred lighting

A random idea about a new way to do deferred lighting. The idea is to decouple lighting from geometry normals. In order to do that, lighting information is stored as aggregated lights (direction + color).

  1. 1st pass – Z prepass (just render depth)
  2. 2nd pass – render light geometry / quads / tiles… and output aggregated virtual directional lights for every pixel. This means a weighted average of light directions and a weighted sum of light colors per pixel.
  3. 3rd pass – render geometry and shade it using the buffer with aggregated directional lights (and maybe add a standard forward directional light)

2nd pass render target layout:

RT0: aggregated light color RGB
RT1: aggregated light direction XYZ

We want to achieve this:

AggregatedLightColor = 0.
AggregatedLightDir = 0.

for every light
{
    AggregatedLightColor += LightColor * LightAttenuation
    AggregatedLightDir += LightDir * intensity(LightColor * LightAttenuation)
}

In order to do this, we need:

  1. Init RT0 and RT1 with 0x00000000
  2. Set up additive blending states
  3. Output from the light pixel shader:
ColorRT0 = LightColor * LightAttenuation
ColorRT1 = LightDirection * dot( ColorRT0, ToGrayscaleVec )
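
A minimal HLSL sketch of the light pass under the setup above (AggregateLight, LightColor, LightAttenuation and LightDirection are illustrative names, not a fixed interface):

// 2nd pass: one virtual directional light contribution per lit pixel.
// Both render targets are cleared to zero and use additive (ONE / ONE) blending.
struct LightOutput
{
    float4 ColorRT0 : SV_Target0; // accumulated light color
    float4 DirRT1   : SV_Target1; // accumulated light direction, weighted by intensity
};

LightOutput AggregateLight( float3 LightColor, float LightAttenuation, float3 LightDirection )
{
    const float3 ToGrayscaleVec = float3( 0.299f, 0.587f, 0.114f ); // luminance weights

    LightOutput o;
    o.ColorRT0 = float4( LightColor * LightAttenuation, 0.0f );
    o.DirRT1   = float4( LightDirection * dot( o.ColorRT0.rgb, ToGrayscaleVec ), 0.0f );
    return o;
}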

Cons?

  • Light aggregation into a virtual directional light per pixel is an approximation. Moreover, we can’t properly blend directions by taking their arithmetic average, so with many lights per pixel (especially ones with opposing directions) it won’t be very accurate. For example, two equally bright lights shining from opposite sides cancel out to a near-zero aggregated direction. Still, in most cases it shouldn’t be too visible.

Benefits?

  • Flexibility. You can use almost any lighting model.
  • You can render lighting at a lower resolution, since high-frequency normal map detail is added later. There will be artifacts at depth discontinuities, but maybe for some types of content (think desaturated and gray like Gears of War or Killzone 2 :)) they won’t be too visible.
  • Less bandwidth and memory usage (compared to deferred lighting and deferred shading, which store the full specular color, not just its intensity).
  • A Z prepass is faster than rendering a G-buffer or normals + exponent.
  • A bit simpler calculations. No need for encoding/decoding material properties (normal, exponent, …) – see the sketch after this list.
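
A sketch of what the 3rd pass could look like with a simple Lambert term (ShadeAggregated and its parameters are illustrative names; the aggregated direction has to be renormalized after the weighted sum, and any specular model can be added on top):

// 3rd pass: shade real geometry with the single aggregated virtual directional light.
float3 ShadeAggregated( float3 Albedo, float3 Normal, float3 AggColor, float3 AggDir )
{
    // AggDir is a weighted sum, so bring it back to unit length
    // (guard against a near-zero vector where lights cancel out).
    float Len = length( AggDir );
    float3 L = ( Len > 1e-5f ) ? ( AggDir / Len ) : float3( 0.0f, 0.0f, 1.0f );
    float NdotL = saturate( dot( Normal, L ) );
    return Albedo * AggColor * NdotL;
}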

Now it’s time to find some free time and code a demo in order to compare it to deferred lighting/shading in a real application :).

P.S. Decoupling can also be done by storing lighting as spherical harmonics or cubemaps: link1 link2 link3 (thanks Hogdman from gd.net forums). The downside of that method is the lack of proper specular, because of the low-frequency lighting data, and it will also be slower.

P.S. 2 It looks like it would be better to store the direction as angles (RT1.xy – two weighted angles, RT1.z – sum of weights). It would ensure proper aggregated light direction interpolation.
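
A rough sketch of that encoding (EncodeWeightedDir / DecodeWeightedDir are hypothetical helpers; note that the azimuth still wraps around at ±PI, so this isn’t a complete fix either):

// Encode one light's direction for additive accumulation into RT1.
float3 EncodeWeightedDir( float3 Dir, float Weight )
{
    float Theta = acos( clamp( Dir.z, -1.0f, 1.0f ) ); // polar angle
    float Phi   = atan2( Dir.y, Dir.x );               // azimuth, wraps at +/-PI
    return float3( Theta * Weight, Phi * Weight, Weight );
}

// Decode the blended result: divide out the summed weight, rebuild the direction.
float3 DecodeWeightedDir( float3 RT1 )
{
    float Theta = RT1.x / max( RT1.z, 1e-5f );
    float Phi   = RT1.y / max( RT1.z, 1e-5f );
    float SinTheta = sin( Theta );
    return float3( cos( Phi ) * SinTheta, sin( Phi ) * SinTheta, cos( Theta ) );
}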

UPDATE: I prototyped this method and it doesn’t work too well :). Below is a comparison screenshot with a hard case for the idea – two point lights with very different colors influencing the same area. Left – normal lighting, right – aggregated direction and color:

[comparison screenshot]


7 Responses to Aggregated deferred lighting

  1. Reg says:

    This sounds like a very interesting idea! I can't wait to see if it works, because I'm not so sure about it 🙂


  2. Dab says:

    Did you implement it? Have any screenshots? Have any screenshots with “bad” cases? 🙂 I just wonder about practical application.


  3. KriS says:

    @Dab why did you have to remind me about it at midnight? Instead of going to bed after a good evening of software rasterization programming, I had to prototype it :). It looks quite strange at intersections of lights with different colors and directions. So it doesn’t look too practical, but I think that decoupling lighting from normals and aggregating lights is a good idea. Maybe instead of directional lights I should use virtual point lights or something…


  4. Dab says:

    It just occurred to me: what if you used normals in the lighting pass? You can reconstruct normals from depth (so there’s no need to render anything but early-Z first) and it looks quite good: http://dabroz.scythe.pl/2010/05/02/ssao-revisited
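
    Something along these lines (a rough sketch, assuming ViewPos is the view-space position reconstructed from the depth buffer):

    // Face normal from screen-space derivatives of the reconstructed view-space position
    // (the sign of the cross product depends on your coordinate convention).
    float3 NormalFromDepth( float3 ViewPos )
    {
        float3 DX = ddx( ViewPos );
        float3 DY = ddy( ViewPos );
        return normalize( cross( DY, DX ) );
    }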


  5. KriS says:

    @Dab that's an interesting idea. Now I remember that NVIDIA presentation about normal reconstruction from depth. The question is how to include normal maps into this process.


  6. Dab says:

    You could modify depth in the early-Z pass (using some parallax method like POM or CSM), but I guess it won’t be “early-Z” anymore 🙂


  7. KriS says:

    A very similar method was described in that NVIDIA presentation (http://developer.nvidia.com/object/nvision08-DemoTeam.html).

