Hats off to Aurora HDR

I am on record stating that I don’t care too much for stagey HDR preset effects, yet I find myself admitting that one of the better HDR editing programs, Aurora HDR, has done a fine job with an intractable photo. I write here to give credit where credit is due, but because many of my readers may not know what HDR is, I beg you, if you are a photographer, to please bear with me as I explain it. After all, it’s quite interesting.

Yesterday there was a magnificent sunset. I only noticed it at the last moment, because my office faces away, to the east. The sun was plummeting behind some trees, minutes at most from dropping behind the mountains. I barely had time to grab my camera and get a quick series of shots out my bedroom window. To my eye, it was a beautiful scene, because the human eye can pick out details in the shadows while still being able to handle bright objects. A camera has a harder time of it.

Figure 1. Unedited photo of sunset, 13 November 2020, Scranton, PA. Photo: author.

A human eye is better at seeing a range of brightnesses running from inky black to bright white than practically any camera you or I are apt to lay our hands upon. The span of that continuum from black to white is the dynamic range of the eye. Cameras have dynamic ranges, too, but they are limited compared to the human eye. Think of how the human ear is weak compared to a dog’s ear.

In practice, my ear just doesn’t hear a dog whistle. A camera, by contrast, “sees” whites brighter than it can handle; it just renders everything above a certain threshold as pure white, and everything below a lower one as pure black. You can see both happening above, in figure 1: the sun is terribly blown out, and vast amounts of detail have been lost to muddy brown or black.
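For the programmers among my readers, the clipping is easy to caricature in a few lines of Python; the threshold numbers here are made up purely for illustration:

```python
import numpy as np

# Made-up linear light levels for a scene, from deep shadow to blazing sun.
scene = np.array([0.001, 0.01, 0.1, 0.5, 1.0, 4.0, 16.0])

# Hypothetical sensor limits: below the noise floor everything reads as
# black; above saturation everything reads as the same white.
noise_floor, saturation = 0.005, 1.0

recorded = np.clip(scene, noise_floor, saturation)
print(recorded)  # [0.005 0.01 0.1 0.5 1. 1. 1.] -- sun and bright sky merge into white
```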

Figure 2. Photo of sunset stopped down -1 EV, 13 November 2020, Scranton, PA. Photo: author.

A modern camera will make a pretty shrewd guess about how to attractively center the exposure within its dynamic range. Dirty Harry would approve (“A man’s got to know his limitations”). The sun is so bright, however, that even when the camera squints real hard, tons of stuff goes invisibly down into undifferentiated black. That is the case in figure 2. I altered the camera’s settings to make it squint much harder (-1 EV), so you see a lot more detail in the sky around the sun, but now the branches really are black silhouettes.

Photographers make a virtue of this by artistically composing the silhouettes. Conversely, if the camera took those drops we take at the eye doctor’s so as to dilate its pupils and let more light in, large parts of the image, mostly the sky, would be blown out into undifferentiated white.

That’s the tradeoff with cameras: while I wanted to record the fantastic scene I saw with my human eyes, the camera could only shift the exposure to favor seeing the bright stuff better or the dark stuff better. Or both equally badly. It cannot simultaneously do both well. Figure 3 is what I got when I stopped the camera down -2 EV, which is basically like squinting and shading your eyes with your hand. Like figure 2, it’s otherwise unedited.

Figure 3. Photo of sunset stopped down -2 EV, 13 November 2020, Scranton, PA. Photo: author.

In figure 3 the sun very nearly resolves into a disk, and the branches in front of it are now crisply clear. The rest is a mess of black.
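For the numerically curious, each EV step halves or doubles the light reaching the sensor. Here’s a little Python sketch with invented luminance values showing why stopping down rescues the sky at the cost of the shadows:

```python
import numpy as np

def expose(scene, ev):
    """Scale linear scene light by 2**ev stops, then clip to the sensor's range."""
    return np.clip(scene * 2.0 ** ev, 0.0, 1.0)

# Invented values: shadow, leaves, sky, bright cloud, sun.
scene = np.array([0.02, 0.2, 0.9, 3.0, 12.0])

print(expose(scene,  0))  # [0.02  0.2  0.9   1.   1.]   cloud and sun both clip to white
print(expose(scene, -1))  # [0.01  0.1  0.45  1.   1.]   still clipped; shadows darker
print(expose(scene, -2))  # [0.005 0.05 0.225 0.75 1.]   cloud on scale, sun still clips,
                          #                              shadows crushed toward black
```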

Happily, there are tons of things that can be photographed that fit nicely within a camera’s dynamic range. Shooting into the sun just isn’t one of them.

This changed when digital photography matured. It turns out that the limits on a camera’s dynamic range come partly from the electronics that process, display, and print an image, not just from the sensor. It’s SCIENCE. Good camera sensors these days can, however, come a lot closer to seeing the eye’s dynamic range. Recall that an image is just a bunch of numbers reflecting (heh) the color and quantity of light hitting the wee sensor elements that record pixels. My camera’s sensor has 6,000 of them across and 4,000 down, creating 24-megapixel images; an iPhone’s has about 4,000 by 3,000, for 12 megapixels.

So, the sensor dutifully records its numbers, but the image-processing software and screens end up throwing away the information above and below those thresholds I mentioned, lumping it in with the whites and blacks respectively. If your camera outputs only JPEGs, you never have access to that data. That is why, 10 or 15 years ago, you may have noticed image-processing software getting RAW updates. These allowed things like iPhoto (or whatever) to grab files from the camera before all that truncation takes place. There are proprietary names for these files, but generically they’re called RAW.

Now, a RAW file is not magic. You can’t see it. What you see on the back of your camera after you take the picture, or in Adobe’s Lightroom or Affinity Photo or Apple’s Photos, is still truncated, but the engineers have found ways of letting us edit the RAW data through, or maybe despite, the limitations of the monitors we use to visualize it. The point is that the editor (human and program) now has access to a substantially higher dynamic range than we did when images were immediately generated as JPEGs or TIFFs or whatever.
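For a rough sense of scale, many RAW files store 12 or 14 bits per channel where a JPEG stores 8. A toy Python illustration (it ignores gamma encoding, which in reality hands the shadows a few extra levels, but the principle stands):

```python
import numpy as np

# 64 distinct deep-shadow levels out of a 14-bit RAW file's 16,384...
raw = np.arange(64)

# ...naively squeezed into a JPEG's 8 bits (256 levels total):
jpeg = np.round(raw / 16383 * 255).astype(np.uint8)

print(np.unique(jpeg))  # [0 1] -- all 64 shadow distinctions collapse into two
```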

When I started shooting RAW, I noticed it mostly in that I could use the editor to make shadows much brighter and add detail to areas that were originally black. Effectively I was bringing the dark RAW data up over the threshold. What’s amazing is precisely how much information is hidden in the blacks of a displayed RAW image. Now, if you load up a photo like my figure 3 in a RAW editor, you can bring up the shadows (and darken the highlights), but this has knock-on effects on other aspects of the image. Everything is linked, and you may have noticed that when you brighten the dark parts of an image you bring out a certain not-nice graininess (figure 4). Figure 4 is about as good as I can get the image in terms of brightness with a normal editor: the highlights are dimmed 100%, the shadows boosted 100%. Wouldn’t it be great if we could separate the highlights, midtones, and shadows and expose our picture just right for each?

Figure 4. Author’s edit of image in figure 2. Photo: author.
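No editor publishes its slider math, but here is a toy Python version of what Shadows/Highlights sliders do; the exact curve is my own invention, and it shows why lifting a shadow multiplies its noise right along with its signal, which is where the grain in figure 4 comes from:

```python
import numpy as np

def adjust(x, shadows=0.0, highlights=0.0):
    """Toy Shadows/Highlights sliders (1.0 ~ the 100% settings in the text).
    Not any real editor's algorithm -- just the general shape of the idea."""
    gain = 1.0 + shadows * (1.0 - x) ** 4 * 3.0  # up to ~4x brightening near black
    cut = highlights * x ** 2 * 0.3              # dims only the bright end
    return np.clip(x * gain - cut, 0.0, 1.0)

# A shadow pixel whose signal barely clears the sensor noise:
sig, noise = 0.02, 0.01
print(adjust(np.array([sig - noise, sig + noise]), shadows=1.0))
# [0.0388 0.1097] -- the +/-0.01 noise band comes out about 3.5x wider: grain.
```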

Bracketing is a process photographers have used nearly from the beginning. Any one photograph inevitably suffers from the tradeoffs I mentioned above. So, many film photographers would bracket the exposure, taking the shot their calculations said was just right, but then, as insurance, making another identical one a little more exposed, and yet another a little less exposed. Chances were one of the exposures would be just right even if the initial calculation had been a bit off. But if you think about it, between those three photos you actually have a higher dynamic range than the camera could achieve in any one of them. If only you could grab the best parts of each!

When photography went digital it became possible to use an editor to merge the best bits of bracketed sets of photos. The editor could take the manageable highlights from the darkest bracket image, the midtones from the ‘just-right’ one, and the shadows from the brightest image. The resulting image showed elements which, though visible to the eye, had been hopelessly beyond the camera’s dynamic range for any one shot. When done conservatively it makes for really vivid, attractive images. My iPhone does this with every image, in the phone, by default, and has for years. This kind of processing is called HDR, ‘High Dynamic Range.’
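If you’d like to try such a merge yourself, OpenCV ships a classic exposure-fusion routine (due to Mertens and colleagues); I can’t promise it’s what any particular commercial editor uses, and the filenames here are placeholders:

```python
import cv2
import numpy as np

# Placeholder filenames for a -1 / 0 / +1 EV bracket.
bracket = [cv2.imread(f) for f in
           ("sunset_minus1ev.jpg", "sunset_0ev.jpg", "sunset_plus1ev.jpg")]

# Mertens fusion weights each pixel by how well exposed it is in each frame,
# then blends -- no camera calibration or true radiance map required.
# (Hand-held brackets would first need aligning, e.g. with cv2.createAlignMTB().)
fused = cv2.createMergeMertens().process(bracket)

cv2.imwrite("sunset_hdr.jpg", np.clip(fused * 255, 0, 255).astype(np.uint8))
```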

The bracketing for HDR is no longer just an insurance policy but a standard tool of art in digital photography. Many camera bodies have built-in software to bracket images nowadays. Mine does not, or I would have bracketed my sunset (figure 1) for an HDR merge. Just about all digital editing programs worth their salt now have built-in HDR processing routines: Adobe, Affinity, Skylum, to mention the ones I know. The latter company also sells Aurora HDR, a stand-alone HDR processor. And it’s here that I’ve finished backstory and reached my main point. About time! you may say.

Aurora HDR has a second mode that proceeds from the fact that RAW camera data usually contains much more information than gets used. So the Skylum engineers developed (heh) a routine that takes a single RAW image, effectively splits it into three, and adjusts those three so that each favors one member of a traditional bracket. It then recombines them in an HDR routine. It’s like magic when it works well (figure 5).

Figure 5. HDR-processed version of the image in figure 2. Photo: author.
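Skylum hasn’t published its algorithm, so the sketch below is only my guess at the general shape of the idea, with random numbers standing in for real RAW data and OpenCV’s exposure fusion again doing the merge:

```python
import cv2
import numpy as np

# Stand-in for linear RAW data in [0, 1] (a real file might come via rawpy).
linear = np.random.default_rng(0).random((400, 600, 3)).astype(np.float32)

def develop(img, ev):
    """Shift exposure by `ev` stops, then apply a crude display gamma."""
    shifted = np.clip(img * 2.0 ** ev, 0.0, 1.0)
    return (shifted ** (1 / 2.2) * 255).astype(np.uint8)

# Fake a three-shot bracket out of the single file, then fuse it as if real.
bracket = [develop(linear, ev) for ev in (-2, 0, 2)]
fused = cv2.createMergeMertens().process(bracket)
```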

By comparing figures 4 and 5 you can see how HDR is able to get the best out of highlights and shadows. In my opinion, figure 5 is preferable to figure 4.

But nothing comes for free. The zippiest HDR image trades on using brighter shadows and dimmer highlights to compress the extended dynamic range of the human eye into the more restricted dynamic range of imaging equipment. This is another way of saying that the process reduces contrast, because contrast, which the human eye adores, is precisely the separation of different levels of brightness or color.
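You can see the arithmetic of that squeeze in a classic global tone-mapping curve, Reinhard’s L/(1+L); Aurora’s curve is surely fancier, but the principle is the same:

```python
def reinhard(L):
    """Reinhard global tone map: squeezes any luminance into [0, 1)."""
    return L / (1.0 + L)

# Two bright scene regions a full stop apart (2:1 contrast)...
dark, bright = 4.0, 8.0
print(bright / dark)                      # 2.0   in the scene
print(reinhard(bright) / reinhard(dark))  # ~1.11 on screen: contrast squeezed
```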

If you’re like me, your eye is probably drawn to figure 3, at least until you realize that while it has high drama, it’s all shadows and highlights, which is a hallmark of low dynamic range. Even if you like the abstract geometry of the maple’s bare branches, you lose it up in the clouds, whereas you retain it perfectly well in figure 2. Figure 1, which you may find bears some similarities to figure 4, brings out midtones in the golden leaves, for example, but the sky is a blah mess. Low contrast tends to push everything to be about the same brightness as everything else, and where things started out pretty similar, as in the clouds and the sky in general, it’s a disaster.

Put another way, without further processing, an HDR image, while startling in the range of tones from nature that it brings out, would nevertheless be a bit drab. My first inclination upon seeing figure 5 is to boost the contrast, but that would defeat the HDR processing.

So the HDR artists (they exist) developed other methods of adding zing to stand in for the missing contrast. One is to play with local contrast, making the edges of smaller and larger objects in the HDR photo stand out more: leaves are a good example of the former, clouds a classic case of the latter. Color is largely hors de combat in the game of brightness played in HDR, so you will often find color contrast zapped to high levels. Think of the HDR image as a picture-perfect steak that somehow lacks flavor. Local contrast and color contrast and a hundred other techniques are the A1 Sauce added to compensate.
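In its simplest form the local-contrast trick is an old one, an unsharp mask with a big blur radius, and the color zap is just a saturation boost. A rough Python sketch, with invented parameter values:

```python
import cv2
import numpy as np

def clarity(img, radius=25.0, amount=0.5):
    """Crude local-contrast boost: subtracting a heavily blurred copy makes
    edges between mid-sized objects (clouds, branches) pop, while overall
    brightness stays put."""
    blurred = cv2.GaussianBlur(img, (0, 0), radius)
    return cv2.addWeighted(img, 1.0 + amount, blurred, -amount, 0)

def saturate(img, amount=1.4):
    """Scale saturation in HSV space -- the 'zapped' color contrast."""
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] = np.clip(hsv[..., 1] * amount, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```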

Figure 6. Mannered HDR preset edit of image in figure 5. Photo: author.

But, to carry my metaphor further, the A1 Sauce covers up the sight of the steak, which was a goodly part of the delight of it, so there’s an aesthetic trade-off. And so, while HDR is simply a photographic technique, and when done conservatively yields very good results, in the hands of high-end practitioners it has a very distinctive, artificial look. It’s like the way you can instantly identify a piece of pop music as eighties by virtue of distinctive synth sounds. We’re allowed to like, even love, highly mannered, artificial forms of art, but their preciousness makes them cloying, their ubiquity soon makes us bored with them, and the bandwagon effect soon brings a lot of third-rate efforts to the public’s attention. So you’ll rarely see unqualified praise of HDR these days, but rather a revolt against it, or at least its most mannered examples. So back to my sunset (figure 6).

Figure 6 is the HDR image in figure 5, but subjected to one of the presets offered with Aurora HDR; I think it’s called ‘Bright Sun.’ It’s got a filter overlay turning things an orangey gold, and some mathematical wizardry applying what look like glows around objects. The blue of the sky is dulled by the orange, but the foliage is pin sharp and the solar disk is nearly perfectly visible. The clouds have not only been colored but also subjected to a great deal of local-contrast enhancement: see how the dark masses of cloud stand out compared with figure 2. I like it, despite what I’ve said above, but I take little credit for altering an image with one click to apply changes envisioned by another photographer.

But stepping back, my hat is most definitely off to the Aurora HDR engineers, because they are monstrously clever to achieve so much with the one overexposed RAW image file behind figure 5, and, as I say, I like figure 6.

