High Dynamic Range and Camera Curves

(= Poor Man's RAW I)


As discussed in the foregoing, the internal workings of a digital camera usually produce initial, internal "raw images" with up to 12 bits = 4096 levels per colour channel. Whatever the route, these raw images are ultimately reduced to 8-bit images in some colour space (such as sRGB or Adobe RGB) that ordinary colour monitors can display and ordinary colour printers can print.
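The arithmetic behind that reduction can be sketched in a few lines of Python. This is a simplified illustration only; a real camera also applies tone curves and colour processing before the reduction:

```python
import numpy as np

# A 12-bit sensor value ranges over 0..4095; an 8-bit image over 0..255.
# Reducing bit depth collapses 16 adjacent 12-bit levels into one 8-bit
# level, which is where fine tonal detail is lost.
raw_12bit = np.arange(4096, dtype=np.uint16)      # every possible 12-bit level
reduced_8bit = (raw_12bit >> 4).astype(np.uint8)  # drop the 4 least-significant bits

print(len(np.unique(raw_12bit)))     # distinct levels before reduction
print(len(np.unique(reduced_8bit)))  # distinct levels after reduction
```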

Unless you are the lucky owner of a (high-end) digital camera that provides RAW as well as JPEG output, this reduction is done inside the camera and outside your control. Why then all these extra levels to begin with? The answer lies in the need to make just the right selection and distribution of the available levels to represent as fairly as possible the fine details in both the shadow and the highlight parts of the picture. In certain cases this is all but impossible.

Who hasn't tried to photograph a person or an object half in shadow and half in bright sunlight, or in strong back-light? To take a few examples:


A. The eye saw something like this:

B. But the camera produced this:




C. My wife thought she captured her husband:

D. What she saw was more like this:



While the human eye has a fabulous capacity to deal with huge differences in contrast and brightness, photos B and C are inevitably what you get when using a standard consumer camera in situations like these. To improve the situation - at least somewhat - all those extra levels must somehow be brought back into play.

Take a closer look at image C above. It is a typical example of a scene that simply contains a greater brightness range than current digital consumer cameras can capture directly in one shot. This picture, rather dark in all parts other than the sky, shows us detail in the highlights that would be washed out in a "properly exposed" picture. On the other hand, had we settled for "correct exposure" of the mid-brightness parts, say the shore line and boulders, details in the sky would be washed out and details in the foreground would be hidden in shadow. Thus, it appears that we need at least three exposures to properly capture all details of this scene. And so it is, if we want to capture ALL detail.

However, there is more information hidden in image C than meets the eye at first glance, as you may already have guessed from image D, which is the same picture after some subtle processing. Let it be stated without further proof: you can NOT go from C to D by simply adjusting brightness, gamma and contrast in any digital imaging software. The "best" results will be flat and grainy and a far cry from what can actually be achieved. So what do we do - play around with levels and curves in all three colours until we get a satisfactory result like D? In a sense that is what we must do, but fortunately there are substantial shortcuts available in that process.

Imaging software like Photoshop and PhotoImpact allows us to combine a series of bracketed exposures into a single image which encompasses the tonal detail of the entire series. In doing so, the images are combined into a 16-bit (or even higher) High Dynamic Range image which you cannot print or view directly on your monitor, but you can manipulate it and watch your progress (in an 8-bit "image-of-your-HDR-picture" on your monitor) as you proceed. Obviously, all good things come at a certain cost, and as you will finally end up with an 8-bit image that you can print and display, you will have to make certain compromises and choices: broadening the overall tonal range will inevitably degrade contrast in some tones of the final 8-bit image. But the choice is now yours, and you can achieve results that far exceed the dynamic range of any single exposure.
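As a rough sketch of what such software does under the hood, the snippet below merges bracketed 8-bit exposures into a floating-point radiance map using a simple hat-shaped weighting. It assumes a perfectly linear sensor response, which real HDR software does not; it first recovers the camera's actual response curve:

```python
import numpy as np

# Hat-shaped weight: trust mid-tones, distrust clipped shadows/highlights.
def weight(z):  # z is an 8-bit image, values 0..255
    return np.minimum(z, 255 - z).astype(np.float64)

def merge_hdr(images, exposure_times):
    """Merge bracketed 8-bit exposures into a floating-point radiance map.
    Simplifying assumption: linear sensor response."""
    num = np.zeros(images[0].shape, dtype=np.float64)
    den = np.zeros_like(num)
    for img, t in zip(images, exposure_times):
        w = weight(img)
        num += w * (img.astype(np.float64) / t)  # per-pixel radiance estimate
        den += w
    return num / np.maximum(den, 1e-8)           # weighted average

# Synthetic bracket: the same scene "shot" at 1/30, 1/60 and 1/180 s.
radiance = np.array([[10.0, 300.0, 2000.0]])     # made-up "true" scene radiance
times = [1/30, 1/60, 1/180]
shots = [np.clip(radiance * t, 0, 255).astype(np.uint8) for t in times]
print(merge_hdr(shots, times))
```

The merged values recover (approximately) the original radiance, even though the brightest value is clipped in the longest exposure.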

The best is yet to come: images D and A above were not produced via bracketed exposures and an HDR image made from them. They were produced directly from the original exposures, B and C, in a one-keystroke process by means of a camera-specific curve generated as a "by-product" of an earlier production of another HDR image of quite a different subject (but with similar challenges in respect of huge differences in brightness over the scene).


First, let us see how an HDR image is created in practice:

For this, you will need at least two - preferably more - images of the same subject with different exposure times. This is much like bracketing a photograph with exposures one or more stops over and under the assumed correct one. But whereas bracketing is done to achieve one "best exposure", in HDR we shall use all the images to produce one single picture showing detail in shadow, highlight and everything in between.

Normally, one will use this for completely stationary scenes / subjects only, such as interiors, architecture and landscapes. However, here we shall use quite another subject, namely the moon. The moon is usually considered an easy target for beginners in astrophotography because it is large and bright. However, it isn't THAT easy because the moon is one of the most contrast-rich subjects that one can think of and it is very difficult to capture detail along the terminator (the border area between night and day on the moon) and the dark crater floors as well as detail on the extremely bright lunar disk away from the terminator.

Now, the moon moves quite rapidly in the field of view of long focal-length telephoto lenses or telescopes. Consequently, for an HDR image a careful alignment of the individual images is required. This is a bit tedious but otherwise straightforward with any decent imaging software, so enough about alignment and straight to the pictures that will form the basis for our HDR image:
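For the curious: one common way alignment software estimates the shift between two frames is phase correlation. The sketch below, in plain NumPy with a synthetic "moon", illustrates the idea; it is not how any particular imaging package does it:

```python
import numpy as np

def estimate_shift(ref, img):
    """Estimate the integer (row, col) translation between two frames by
    phase correlation: the normalised cross-power spectrum of the two
    FFTs has a sharp peak at the offset between them."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    cross /= np.maximum(np.abs(cross), 1e-12)    # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative shifts (wrap-around).
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, corr.shape))

# Synthetic frame: a bright disc, then the same frame moved 3 px down, 5 px right.
y, x = np.mgrid[0:64, 0:64]
ref = ((x - 32) ** 2 + (y - 32) ** 2 < 100).astype(float)
moved = np.roll(np.roll(ref, 3, axis=0), 5, axis=1)
print(estimate_shift(moved, ref))
```

Once the shift is known, the frames are simply translated back on top of each other before merging.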



E. 1/30 sec. exposure at f/10 and ISO 200


F. 1/60 sec. exposure at f/10 and ISO 200



G. 1/90 sec. exposure at f/10 and ISO 200


H. 1/180 sec. exposure at f/10 and ISO 200


From here, the production of an HDR image is very simple, provided that you have the right software. How it is done may differ in detail from software to software, but basically one just opens all the images and presses a "CREATE HDR" button of some sort. The HDR image as such will be a 12- or 16-bit-per-channel image (or even more in some software) and, like camera RAW images, it cannot be shown directly on screen or printed, but it can be exported back for normal further processing as a 3 x 16 bit TIFF or 3 x 8 bit JPEG (or TIFF) image. The HDR image may also be manipulated directly in the software's HDR mode, but that requires quite some computing power: in a true-colour 3 x 8 bit image you have 256 x 256 x 256 = 16.8 million colours - or different combinations of levels, if you will - to work with; in a 3 x 12 bit HDR image you have 4096 x 4096 x 4096, or almost 69 thousand million combinations, to manage - and yes, that does take time! Thus, I shall just export my HDR image as it is, which for the pictures above gives the following:
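That export from HDR back to a displayable 8-bit image is a tone-mapping step. As an illustration only - the actual operators in Photoshop or PhotoImpact are far more elaborate - a minimal global tone mapper might look like this:

```python
import numpy as np

def tonemap(hdr, gamma=2.2):
    """Compress a floating-point radiance map into a displayable 8-bit
    image: a simple L/(1+L) highlight compression followed by gamma
    encoding. A sketch, not a production tone-mapping operator."""
    L = hdr / hdr.max()                 # normalise to 0..1
    L = L / (1.0 + L)                   # compress highlights, lift shadows
    L = L / L.max()                     # re-stretch to use the full range
    return np.round(255 * L ** (1.0 / gamma)).astype(np.uint8)

# Made-up radiance values spanning nearly five decades of brightness.
radiance = np.array([[0.5, 40.0, 800.0, 20000.0]])
print(tonemap(radiance))
```

Note how all four values end up distinguishable within 0..255, which a straight linear scaling of this range could not achieve.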


I. Resulting HDR image exported "as is"

to 3 x 8 bit true colour RGB image

It is clear that detail along the terminator has been enhanced without washing out any detail in the bright areas of the lunar disk. As said, the image could have been (much) fine-tuned before I re-exported it as a normal JPEG image, but that is really not my main business here. HDR images may be fine for architecture and the like, but the really exciting thing is that at the same time as I produced my lunar HDR image, I also produced an individual camera- and scene-type-specific camera curve that I may use over and over again to obtain results similar to image I above using just one single exposure of future, similar subjects.
So, what is a camera-specific curve and how do I use it?
CURVES is an editing tool that allows us to remap the tonal ranges in an image or in any of its (RGB) channels. A (specific) camera curve is a curve that changes the tonal ranges in response to how that camera's sensor responds to different light-intensity levels. That response curve can be saved as a camera curve profile (e.g. a CCF file in PhotoImpact) to be used for optimisation of single-shot exposures, once you have generated the profile via the making of an HDR image as shown above. A few images and their corresponding histograms may help clarify. (As mentioned elsewhere, the histogram shows the pixel values (from 0 to 255) along the x-axis, while the vertical y-axis shows the number of pixels with a given value, i.e. the weight with which each value contributes to the tonal distribution of the image.)
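In software terms, such a curve profile boils down to a lookup table with one output level for each of the 256 input levels. The sketch below uses a made-up, gamma-like curve that lifts the shadows; it is not a real profile from any camera:

```python
import numpy as np

# A curve profile as a lookup table: output level for each input level 0..255.
# The shape (an exponent of 0.6) is purely illustrative.
levels = np.arange(256)
curve = np.round(255 * (levels / 255) ** 0.6).astype(np.uint8)

def apply_curve(image, lut):
    """Remap every pixel of an 8-bit image through the curve."""
    return lut[image]

shadows = np.array([[10, 40, 200]], dtype=np.uint8)
print(apply_curve(shadows, curve))
```

Applying the curve is a single indexing operation, which is why a saved profile can be reused on any number of single-shot exposures at essentially no cost.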

Remapping the tonal ranges by means of Curves is somewhat related to Equalizing; however, while Equalizing is an automated process that remaps the histogram / tonal ranges to be essentially flat, Curves allows for more sophisticated remapping in selected ranges only.

In the example below it is obvious that pixels with low values, i.e. the shadow parts, dominate the picture. After equalization, details are visible in the shadow regions, while the middle and background parts have become pale and flat.
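Histogram equalization itself can be sketched in a few lines: the normalised cumulative histogram becomes the remapping curve, which is exactly why it flattens the tonal distribution. The data below is synthetic, standing in for a shadow-dominated photo:

```python
import numpy as np

def equalize(image):
    """Histogram-equalize an 8-bit single-channel image: the normalised
    cumulative histogram is used as the remapping lookup table."""
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalise to 0..1
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[image]

# A shadow-dominated "image": most pixels crowd the low end of the range.
dark = np.concatenate([np.full(900, 20),
                       np.full(80, 30),
                       np.full(20, 200)]).astype(np.uint8)
print(np.unique(dark), "->", np.unique(equalize(dark)))
```

Because the process is driven entirely by the pixel counts, the dominant shadow values get pushed far up the scale - which is precisely the "pale and flat" effect described above.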

Using Curves with a camera-specific Camera Curve Profile gives a much more pleasing result:

In comparison to the stretching made above, note how subtle the changes to the histogram are this time. As a result, the grass still looks like grass and the trees in the background are still a healthy green - not pale as above - while details in the shadows have still been brought out.

This is just one curve profile for your camera. It may serve well for general purposes, but one may want to make more curves for very special scenes (as for the moon above). On the following page, we shall see in more detail how camera curve profiles are made and used in practice. It is really very simple so, stay tuned.

Copyright 2009 - Steen G. Bruun