Octahedron normal vector encoding

Many rendering techniques benefit from compact encoding of normal (unit) vectors. For example, in deferred shading, G-buffer space is a limited resource. Additionally, it's nice to be able to encode world space normals with uniform precision. Some encoding techniques work only for view space normals, because they use variable precision depending on the normal direction.

World space normals have some nice properties – they don't depend on the camera. This means that on static objects, specular highlights and reflections won't wobble when the camera moves (imagine an FPS game with slight camera movement while idle). Additionally, their precision doesn't depend on the camera. This is important because sometimes we need to deal with normals pointing away from the camera, for example because of normal maps and perspective correction, or because of calculating lighting for the back side (subsurface scattering).

Octahedron-normal vectors [MSS*10] are a simple and clever extension of octahedron environment maps [ED08]. The idea is to encode normals by projecting them onto an octahedron, then folding the bottom half over the top and unwrapping it onto a single square. This gives some nice properties, like a fairly uniform value distribution and low encoding and decoding cost.

I compared octahedron encoding to storing the three raw components (XYZ) and to spherical coordinates. Not a very scientific approach – I just rendered some shiny reflective spheres. Normals were stored in world space in an R8G8B8A8 render target. The post also contains complete source code (which unfortunately isn't provided in the original paper), so you can paste it into your engine and see for yourself how this compression looks in practice.

XYZ

float3 Encode( float3 n )
{
    return n * 0.5 + 0.5;
}

float3 Decode( float3 f )
{
    return f * 2.0 - 1.0;
}

[Images: xyz, xyz_2]
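
With this encoding, any error visible on the spheres comes purely from the 8-bit quantization in the render target. To preview that directly in a shader without an actual R8G8B8A8 target, the storage step can be simulated explicitly. This is just a sketch for experimentation; Quantize8 is a hypothetical helper, not part of the original test code.

float3 Quantize8( float3 f )
{
    // Simulate writing to an 8-bit UNORM render target and reading it back.
    return round( saturate( f ) * 255.0 ) / 255.0;
}

// Round-trip through simulated 8-bit storage:
// float3 reconstructed = Decode( Quantize8( Encode( n ) ) );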

Spherical coordinates

#define MATH_PI 3.14159265f
#define MATH_INV_PI ( 1.0f / MATH_PI )

float2 Encode( float3 n )
{
    float2 f;
    f.x = atan2( n.y, n.x ) * MATH_INV_PI; // azimuth mapped to [-1, 1]
    f.y = n.z;

    f = f * 0.5 + 0.5; // remap both components to [0, 1]
    return f;
}

float3 Decode( float2 f )
{
    float2 ang = f * 2.0 - 1.0;

    // scth = ( sin, cos ) of the azimuth, scphi = ( sin, cos ) of the polar angle
    float2 scth;
    sincos( ang.x * MATH_PI, scth.x, scth.y );
    float2 scphi = float2( sqrt( 1.0 - ang.y * ang.y ), ang.y );

    float3 n;
    n.x = scth.y * scphi.x;
    n.y = scth.x * scphi.x;
    n.z = scphi.y;
    return n;
}

[Images: spherical, spherical_2]

Octahedron-normal vectors


float2 OctWrap( float2 v )
{
    // Folds the projection of a lower hemisphere normal onto the outer
    // triangles of the square.
    return ( 1.0 - abs( v.yx ) ) * ( v.xy >= 0.0 ? 1.0 : -1.0 );
}

float2 Encode( float3 n )
{
    // Project onto the octahedron ( L1 norm ), wrap the lower hemisphere
    // and remap from [-1, 1] to [0, 1].
    n /= ( abs( n.x ) + abs( n.y ) + abs( n.z ) );
    n.xy = n.z >= 0.0 ? n.xy : OctWrap( n.xy );
    n.xy = n.xy * 0.5 + 0.5;
    return n.xy;
}

float3 Decode( float2 f )
{
    f = f * 2.0 - 1.0;

    // https://twitter.com/Stubbesaurus/status/937994790553227264
    // Reconstruct z from |x| + |y| + |z| = 1 and undo the wrap for points
    // that came from the lower hemisphere ( reconstructed z < 0 ).
    float3 n = float3( f.x, f.y, 1.0 - abs( f.x ) - abs( f.y ) );
    float t = saturate( -n.z );
    n.xy += n.xy >= 0.0 ? -t : t;
    return normalize( n );
}

[Images: octahedron, octahedron_2]
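
The encoded value can also be quantized and packed manually, e.g. when writing to a structured buffer instead of a UNORM render target. Below is a minimal sketch assuming 16 bits per component packed into a single uint; PackOct16 and UnpackOct16 are hypothetical helpers built on the Encode/Decode above, and the same idea applies to 10:10 or other bit depths.

uint PackOct16( float3 n )
{
    // Quantize the two encoded components to 16 bits each and pack them into one uint.
    uint2 q = (uint2)round( saturate( Encode( n ) ) * 65535.0 );
    return q.x | ( q.y << 16 );
}

float3 UnpackOct16( uint p )
{
    // Unpack, dequantize and decode back to a unit vector.
    float2 f = float2( p & 0xFFFF, p >> 16 ) / 65535.0;
    return Decode( f );
}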

Conclusion

Spherical coordinates have a bad value distribution and bad performance. The distribution can be fixed by using some kind of spiral [SPS12]. Unfortunately, it still requires costly trigonometry, and quality is only marginally better than octahedron encoding.

One other method worth mentioning is Crytek's best fit normals [Kap10]. It provides extreme precision. On the other hand, it won't save any space in the G-buffer, as it still requires three components. Also, encoding uses a 512×512 lookup texture, so it's quite expensive.

Octahedron encoding uses a low number of instructions, and there are only two non-full-rate instructions (calculated on the "transcendental unit"): one rcp during encoding and one rcp during decoding. In addition, quality is quite good. In conclusion, octahedron-normal vectors have a great quality-to-performance ratio and blow old methods like spherical coordinates out of the water.

UPDATE: As pointed out by Alex in the comments, an interesting survey of normal encoding techniques was just released [CDE*14]. It includes a detailed comparison of octahedron normals with other techniques.

UPDATE 2: Added optimized Octahedron decoding by Rune Stubbe.

References

[MSS*10] Q. Meyer, J. Süßmuth, G. Sußner, M. Stamminger, G. Greiner – "On Floating-Point Normal Vectors", Computer Graphics Forum 2010
[ED08] T. Engelhardt, C. Dachsbacher – "Octahedron Environment Maps", VMV 2008
[Kap10] A. Kaplanyan – "CryENGINE 3: Reaching the Speed of Light", SIGGRAPH 2010
[SPS12] J. Smith, G. Petrova, S. Schaefer – "Encoding Normal Vectors using Optimized Spherical Coordinates", Computers & Graphics 2012
[CDE*14] Z. H. Cigolle, S. Donow, D. Evangelakos, M. Mara, M. McGuire, Q. Meyer – "A Survey of Efficient Representations for Independent Unit Vectors", JCGT 2014

17 Responses to Octahedron normal vector encoding

  1. Nathan Reed says:

    Thanks for doing this comparison! I came across octahedral normals last year and they seemed like a great idea; glad to know they stack up well in practice.

    Just a note: you should be able to reduce instruction count a bit by writing out sign(x) as (x >= 0 ? 1 : -1). sign() is a bit more expensive since it also checks for zero, which isn't necessary here. (Emil Persson's “Low-Level Thinking in High-Level Shading Languages” talk raised this issue; it's well worth reading if you haven't seen it!)

  2. Yes, I've seen Emil's talk. Great stuff. I've decided to use sign here for better code readability, but forgot to mention that it should be replaced by a ternary operator. Thanks for pointing it out. I updated the post and replaced sign with a ternary operator.

  3. Anonymous says:

    Does it make sense to use octahedron-normals using 12 bits per component? Does it look better than using “raw” XYZ?

  4. Adding additional bits always improves quality. At 10 bits it's hard to spot any artifacts. Obviously, octahedron-normals will look worse compared to XYZ when using the same bit depth. The purpose of this method is to compress normals and store them in two components instead of three.

  5. I think Anonymous was asking if 12:12 octahedral *beats* XYZ 8:8:8; they are the same size, but the former uses all possible 16 million values relatively evenly over the surface of the sphere, whereas for XYZ (unless using the magic Crytek table method) most of the 16M values are wasted as not being unit vectors. I haven't tried figuring out the answer myself in detail, but back of the envelope:
    the surface area of a sphere of radius 127 is about 0.2M, so only about 0.2M unit normals can be stored by 8-bit XYZ. That's only about log2(0.2M) = 17-ish bits, which would imply that even 9:9 octahedral would be competitive with 8-bit XYZ. But because of squash and stretch distortion in the octahedral mapping, I'm guessing 10:10 would be 'definitely better', saving 4 bits. Or, as Anonymous said, go to 12:12 and get significantly more representable directions. It's a shame that lerping octahedral normals is tricky over the boundaries.

    anyway, thanks for the blog post & code snippet!

  6. Anonymous says:

    That's what I meant. That makes sense, thank you. Will do some tests when I get home, if my test bed is still compiling…

  7. Morgan just posted this http://jcgt.org/published/0003/02/01/paper.pdf which shows octahedral 12:12 looking much better than XYZ 8:8:8. Timing, eh 🙂

  8. I see now. I'm currently using octahedral normals 10:10 and they definitely look better than XYZ 8:8:8. As I wrote before, it's hard to spot any artifacts. 12:12 looks like a bit of overkill to me. Thanks for the paper.

  9. Very interesting technique, thanks for writing this up!

    Is there anything stopping us from using this technique to encode viewspace normals?

    Also, regarding the "wobbling" of reflections with view space normals that you mentioned, is it actually due to using view space normals, or more due to their low-precision representation (e.g. RGBA8)?

  10. You can also use it for view space normals. ONV encodes the entire range with quite uniform precision, so it can be wasteful for view space normals. Wobbling is due to low precision. In practice I was always using 10:10 for view space normals and all specular/reflections were nice and stable.

  11. Anonymous says:

    Can we exploit this kind of encoding for mesh vertex compression?

    normalized(vertex) -> 2 16-bit values
    vertex magnitude -> 1 32-bit value

    or

    normalized(vertex) -> 2 8-bit values
    vertex magnitude -> 1 32-bit value

    How much 32-bit precision do we lose in these two cases?

  12. Tobias Zirr says:

    What do you need the epsilon for in the ONV projection?

    Also, note that the implementation in the paper linked earlier (http://jcgt.org/published/0003/02/01/paper.pdf) is superior in the way it projects: While [MSS*10] shares your projection, they use (1.0 - abs(n.yx)) * signNotZero(n.xy), i.e. components AND SIGNS are swapped. This ensures that there are no seams on Z sign change, which is important whenever Z might accidentally change its sign near zero, e.g. due to discretization. In fact, this is likely to happen when the results of encoding are naively written to lower-precision textures, in which case you don't want your normal pointing in a completely different direction!

  13. Thanks! Their wrapping method is indeed superior and I just updated the post to use it. The epsilon was introduced to fix that z sign change issue, but now it's not required.

  14. Pingback: DirectX 12 Engine – Image Based Lighting (IBL) + Tone Mapping – Nicolas Bertoa

  15. Pingback: BRE Architecture Series Part 1 – Overview – Nicolas Bertoa

  16. Pingback: Increasing wave coherence with ray binning – Interplay of Light

  17. Pingback: A Frame of Slime Rancher - My Blog
