ACES Filmic Tone Mapping Curve

Careful mapping of HDR values to LDR is an important part of a modern game rendering pipeline. One of the goals of our new renderer was to replace Reinhard's tone mapping curve with some kind of filmic tone mapping curve. We tried the one from Uncharted 2 and tried rolling our own, but weren't happy with either of these solutions. Finally, we settled on the curve from ACES, which is currently the default tone mapping curve in Unreal Engine 4.

The ACES color encoding system was designed for working seamlessly with color images regardless of input or output color space. It also features a carefully crafted filmic curve for displaying HDR images on LDR output devices. Full ACES integration is overkill for games, but we can simply sample the ODT( RRT( x ) ) transform and fit a simple curve to that data. We don't even need to run any ACES code at all, as ACES provides reference images for all transforms. There is no linear RGB D65 ODT transform, but we can just use the REC709 D65 one and remove its 2.4 gamma.
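
Conceptually, that last step looks like this in HLSL terms (the function name is mine, and this sketch assumes the REC709 ODT really bakes in a pure 2.4 gamma, as described above):

float3 Rec709ODTToLinear(float3 odtSample)
{
    // The REC709 D65 ODT output differs from a linear RGB D65 output
    // only by the display gamma, so raising the reference samples to
    // the 2.4 power recovers linear values to fit against.
    return pow(odtSample, 2.4);
}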

The curve was manually fitted (max fit error: 0.0138) to be more precise in the blacks – after all, we will be applying some kind of gamma afterwards. Additionally, the data was pre-exposed, so 1 on input maps to ~0.8 on output, and the resulting image's brightness is more consistent with the one without any tone mapping curve at all. For the original ACES curve, just multiply the input (x) by 0.6.

The fitted curve's HLSL source code (free to use under the public domain CC0 or MIT license):

float3 ACESFilm(float3 x)
{
    // Rational polynomial fit of ODT(RRT(x)), pre-exposed so that an
    // input of 1.0 maps to an output of ~0.8
    float a = 2.51f;
    float b = 0.03f;
    float c = 2.43f;
    float d = 0.59f;
    float e = 0.14f;
    return saturate((x*(a*x+b))/(x*(c*x+d)+e));
}
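
As noted above, the fit is pre-exposed; a minimal sketch of recovering the original, non-pre-exposed ACES curve (the wrapper name is mine):

float3 ACESFilmOriginal(float3 x)
{
    // Undo the baked-in pre-exposure by scaling the input by 0.6
    return ACESFilm(0.6 * x);
}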

The fitted curve plotted against the source data's sample points:

[Figure: ACES_film_curve – fitted curve vs. ACES sample points]

UPDATE: This is a very simple luminance-only fit, which oversaturates brights. That was actually consistent with our art direction, but for more realistic rendering you may want a more complex fit, like this one from Stephen Hill.
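
For convenience, the shape of that more complex fit, reproduced from memory of the linked BakingLab code – treat the linked repository as the authoritative version:

static const float3x3 ACESInputMat =
{
    {0.59719, 0.35458, 0.04823},
    {0.07600, 0.90834, 0.01566},
    {0.02840, 0.13383, 0.83777}
};

static const float3x3 ACESOutputMat =
{
    { 1.60475, -0.53108, -0.07367},
    {-0.10208,  1.10813, -0.00605},
    {-0.00327, -0.07276,  1.07602}
};

float3 RRTAndODTFit(float3 v)
{
    float3 a = v * (v + 0.0245786f) - 0.000090537f;
    float3 b = v * (0.983729f * v + 0.4329510f) + 0.238081f;
    return a / b;
}

float3 ACESFitted(float3 color)
{
    color = mul(ACESInputMat, color);  // sRGB gamut -> ACEScg, RRT saturation
    color = RRTAndODTFit(color);       // rational fit of the RRT + ODT curves
    color = mul(ACESOutputMat, color); // ACEScg -> sRGB gamut, ODT saturation
    return saturate(color);
}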

38 Responses to ACES Filmic Tone Mapping Curve

  1. opioidwp says:

    Should this be used like the Uncharted function (U()), i.e. tonemapped = U(color) / U(White); or simply tonemapped = KN(color) and done?

  2. ProtonFactor says:

    Hey, I'm interested in running the ACES code myself to see how closely your curve matches (I like to verify data and sometimes modify it). But the RRT function in the source code says the input colors should be in the ACES color space… Should I be using linear RGB as input for the RRT? Or is there a transform from linear to the ACES color space?

    • Yes, basically you can just use linear-space RGB as input (ACES is a linear RGB color space with a D60 white point). Keep in mind that my curve is shifted (pre-exposed), so it won't match a by-the-book ACES curve.

      • ProtonFactor says:

        Ah okay, thanks for the clarification. Duly noted about the shift, I’ll account for that. And also thanks for the post, it works really well by the way. My team would just like to be able to modify the curve without straying too far from the “ground truth” and hence we have to test it against the code. Anyway, thanks again.

  3. Pingback: HDR Display – First Steps | Krzysztof Narkowicz

  4. Pingback: Image dynamic range | Bart Wronski

  5. Pingback: Rendering improvements in KlayGE 4.10 (part 2): Tone mapping - KlayGE game engine

  6. Pingback: HDR Rendering With WebGL - tech-goat.com

  7. jj99 says:

    I've tried Stephen Hill's code, but it gave me quite a different result (more saturated). Then I tried the ACES code in UE, and it matches your fit more closely. The UE code is of course even more complicated and not very suitable for use without baking into a LUT.

    • It should be visible in very bright pixels – in Stephen Hill's fit they tend to lose their saturation. We are currently making a more realistic game, and I had to refit ACES so that it doesn't saturate those very bright pixels as much.

      • David Clamage says:

        Can you clarify if the input and output to your function are in linear or sRGB space? I noticed that Stephen Hill’s fit that you linked comments that there’s an sRGB->Linear and then Linear->sRGB conversion. Is there any function that goes linear->linear? I’d prefer to use the hardware sRGB conversion if possible.

          • In both my fit and the linked snippet (Hill's fit), inputs are linear RGB (BT709 primaries with a linear transfer function). Yes, the mentioned comments refer to sRGB, but having sRGB input/output makes no sense on modern hardware, and the linked sample uses this fit as linear->ACESFitted()->linear. Clearly there is no sRGB->linear conversion on input there, and there is a linear->sRGB conversion on output (“output = LinearTosRGB(ACESFitted(color) * 1.8f);” https://github.com/TheRealMJP/BakingLab/blob/master/BakingLab/ToneMapping.hlsl#L105).
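
            A sketch of that output path (LinearTosRGB here is the standard piecewise sRGB encoding – an assumption on my part, check the linked file for the exact version; ACESFitted is Hill's fit):

            float3 LinearTosRGB(float3 x)
            {
                // Piecewise sRGB OETF: linear segment near black, 1/2.4 power above
                float3 lo = 12.92 * x;
                float3 hi = 1.055 * pow(x, 1.0 / 2.4) - 0.055;
                return lerp(lo, hi, step(0.0031308, x));
            }

            float3 OutputPixel(float3 linearColor)
            {
                // linear -> ACESFitted -> linear, with one sRGB encode at the very end
                return LinearTosRGB(ACESFitted(linearColor) * 1.8f);
            }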

          • Stephen Hill says:

            To further clarify: “sRGB” in my code comments refers to the sRGB (or Rec709) colour gamut, not the display transform (which isn’t a linear transform, so can’t possibly be represented by a 3×3 matrix!). The matrices are actually a concatenation of several separate transformations. For instance:

            // sRGB => XYZ => D65_2_D60 => AP1 => RRT_SAT

            Here we start by transforming RGB (in an assumed sRGB/Rec709 colour gamut) to CIE XYZ. Then a D65 to D60 chromatic adaptation transform is applied, since sRGB/Rec709 has a D65 white point, while ACES uses D60. Then we transform from XYZ back to RGB, but with a wider AP1 (ACEScg) gamut. Finally, the ACES RRT’s (slight) desaturation transform is applied.
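
            A sketch of that concatenation with hypothetical matrix names (each 3×3 would hold the respective standard transform; the real fit pre-multiplies them into a single constant):

            float3 ApplyACESInputTransform(float3 rgb,
                                           float3x3 sRGB_to_XYZ, float3x3 D65_to_D60,
                                           float3x3 XYZ_to_AP1, float3x3 RRT_SAT)
            {
                float3 v = mul(sRGB_to_XYZ, rgb); // sRGB/Rec709 gamut -> CIE XYZ
                v = mul(D65_to_D60, v);           // chromatic adaptation, D65 -> D60
                v = mul(XYZ_to_AP1, v);           // XYZ -> wide AP1 (ACEScg) gamut
                v = mul(RRT_SAT, v);              // the RRT's slight desaturation
                return v;                         // equal to one precomputed 3x3 mul
            }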

          • nobody says:

            “LinearTosRGB(ACESFitted(color) * 1.8f)”
            Could someone explain the usage of that? I wonder how the 1.8 was chosen. Is the factor really supposed to be applied to the LDR value instead of scaling the input (i.e. tweaking exposure)? I get very different visual results with the MJP/Hill formulation compared to this blog's fit.

            • Hi,

              I think it's a similar concept to my curve – an eyeballed scale value to make brightness more consistent with other tonemappers, or with no tonemapper at all. Basically, it lets you A/B test a new curve without having to manually adjust the exposure.
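
              A sketch of the A/B comparison this enables (using ACESFitted and LinearTosRGB from the linked sample; the wrapper name is mine):

              float3 TonemapAB(float3 hdrColor, bool useCurve)
              {
                  // The eyeballed 1.8 factor makes both paths land at comparable
                  // brightness, so the curve can be judged without re-tuning exposure.
                  float3 ldr = useCurve ? ACESFitted(hdrColor) * 1.8f
                                        : saturate(hdrColor);
                  return LinearTosRGB(ldr);
              }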

  8. Pingback: Games Look Bad, Part 1: HDR and Tone Mapping | Promit's Ventspace

  9. matt77hias says:

    What do you use for the inverse tone mapping (i.e. lighting -> tone mapping -> AA -> inverse tone mapping -> post-processing -> tone mapping, adaptation, gamma -> back buffer)?
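
    For context, such a pipeline tone maps before the AA resolve and inverts afterwards so the resolve averages display-like values; the invertible pass is commonly a cheap Reinhard-style pair rather than the ACES fit above (which has no simple closed-form inverse). A generic sketch, not necessarily what this renderer uses:

    float3 TonemapForAA(float3 c)
    {
        // Maps [0, inf) to [0, 1) while preserving hue
        return c / (1.0 + max(c.r, max(c.g, c.b)));
    }

    float3 InverseTonemapForAA(float3 c)
    {
        // Exact inverse of the above for inputs below 1
        return c / (1.0 - max(c.r, max(c.g, c.b)));
    }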

  10. j4m3z0r says:

    Hey, first up: thanks for this — really neat work!

    I was wondering if you could explain the reasoning behind the input range going from 0 to 10? I've been applying it to data from a camera RAW file, which by convention is normalized to a range of 0-1. Feeding that unit-scaled data directly into this function works OK, but the peak output is then about 0.8, which means we're not using the top 20% of the output device's dynamic range (well, ~10% after gamma correction). Similarly, stretching the input to a range of 0-10 effectively gives a much steeper curve, which doesn't produce good results.

    For the time being I’m just scaling the curve by multiplying the output by 1.0 / ACESFilm(1.0). The result looks ok, but I’d really like to understand what’s going on and use the correct inputs. My current approach feels like a bit of a hack! 🙂

    Apologies if this is basic knowledge that I’ve missed — this has been my first experimentation with trying to get a more filmic look to my images, so there’s likely a lot of background I’m missing.
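
    A sketch of the normalization described above, which stretches the curve so an input of 1.0 maps exactly to 1.0 on output (the wrapper name is mine):

    float3 ACESFilmStretched(float3 x)
    {
        // ACESFilm(1.0) is ~0.8, so dividing by it rescales the peak to 1.0
        return saturate(ACESFilm(x) / ACESFilm(float3(1.0, 1.0, 1.0)));
    }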

    • The basic idea is that 1 is ~paper white, and the rest of the range is used for brighter pixels (e.g. specular highlights). This way we avoid clipping highlights. In practice, for games, the exact input range doesn't make a big difference, as dst = Tonemap( src * exposure ), where exposure is a custom factor that makes the image look right. If you care about exact units (e.g. for picture interchange), then you should use the appropriate ACES transforms (camera X -> ACES color space -> sRGB). If it's just to make photos look nice, then I think you would be better off with some fancy software emulating classic analog films.
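
      In code, that reply's formula is simply (a sketch; the names are mine and the exposure value is art-driven):

      float3 Tonemapped(float3 src, float exposure)
      {
          // exposure is tuned to make the image look right, which is why the
          // exact input range of the curve matters little in practice
          return ACESFilm(src * exposure);
      }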

      • j4m3z0r says:

        Thanks for the speedy reply!

        The intent here is to write software to make photos look nice, and simulations of various film stocks are in the pipeline, but you gotta start somewhere. In practice, using any kind of filmic tone mapping at all seems to be the big win, but you're correct that I'll ultimately need to implement the full pipeline.

        Looking at the metadata of some test raw files, they actually encode a “Specular White Level” value, which for well-exposed shots is generally around 15,000 (~0.9, since these are 16 bit unsigned values). So a better approximation is probably to treat that number as 1.0 and scale the input accordingly. That does mean that output pixels won't ever hit full brightness in a well-exposed image, but that's probably correct behavior: you want to reserve dynamic range on the output to give some kind of representation to hugely over-exposed images (which would have a lower Specular White Level, I expect).

        Thanks again!

  11. Pingback: Tone mapping in HDR technology – gleam

  12. Pingback: Tonemapping on mobile (unlit material) – Imaginary Blend

  13. Shih-Chin says:

    Hi, thanks for your sharing.

    I was wondering how to retrieve the data from ODT(RRT(x)) for numerical fitting. I've looked into the implementations of RRT.ctl and ODT.Academy.sRGB_D60sim_100nits_dim.ctl from
    https://github.com/ampas/aces-dev/tree/master/transforms/ctl

    I generated a sampling sequence with the following functions taken from those two CTL files, for i in [10^-5, 10^5]:

    float v = segmented_spline_c5_fwd(i);
    v = segmented_spline_c9_fwd(v);
    v = Y_2_linCV(v, CINEMA_WHITE, CINEMA_BLACK);
    v = std::min(v, 1.0f) * SCALE;

    I found that my sampling sequence is not close to your implementation (input (x) scaled by 0.6) or to the others' (see the linked figure).

    There is a function ‘darkSurround_to_dimSurround’ which applies a gamma transform in xyY – did you also take this function into account? Could you please elaborate on how you retrieved the ODT(RRT(x)) data? Thanks in advance!

  14. netocg says:

    Hi,

    I have a few tone mapping Nuke scripts, and one has a formula very similar to the one you are presenting.
    In my free time I like to shoot images, and I usually capture 7 different exposures. I wonder whether an HDR image assembled from camera exposures could be tone mapped with your formula to better represent the natural perception of what our eyes see?
    If so, what settings would you adjust in the formula to fit the 7 different exposure levels?

    Best regards,
    Antonio.

  15. Pingback: The evolution of tone mapping – guodong's blog

  16. Pingback: Unreal Shader for Substance Designer – Calvin's Technical Art Blog

  17. Pingback: Post Processing / Substance Painter Color Profile – leegoonz technical art director

  18. Pingback: Casual Shadertoy Path Tracing 2: Image Improvement and Glossy Reflections « The blog at the bottom of the sea

  19. Pingback: SDR and HDR
