For general photography, the results from today’s digital cameras are about as good as, and sometimes better than, the results from film cameras. Computer graphics has achieved the goal of photorealism. Now the goal is to go beyond simply matching paper and silver halide – to create display technologies that can present any visual stimulus our eyes are capable of seeing.
One area of rapid development is dynamic range. A new crop of technologies using High Dynamic Range imaging (HDR or HDRI) aims to extend the dynamic range of digital imaging well beyond that of traditional media.
About Dynamic Range
Dynamic range refers to the range of brightness levels in a particular scene – from the darkest values, just before complete and featureless black, to the lightest, just before complete and featureless white. In photography (as opposed to sensitometry or printing) this is measured in stops, with each f-stop being a doubling or halving of the amount of light received by the recording medium, be it film or a digital sensor.
For the sake of this discussion, let’s assume that a typical colour transparency film or digital camera sensor can record 6 stops of dynamic range. Most people would agree that colour negative can record about one stop more, and B&W film perhaps an additional stop beyond that. So, somewhere between six and eight stops of brightness is what most photographic systems are capable of recording.
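Because each stop doubles the light, the contrast ratio a medium can record grows exponentially with its stop count. A quick sketch, using the stop figures above as inputs:

```python
def stops_to_ratio(stops):
    """Contrast ratio spanned by a given number of f-stops.

    Each stop doubles (or halves) the amount of light, so the
    brightest recordable value is 2**stops times the darkest.
    """
    return 2 ** stops

print(stops_to_ratio(6))  # 64:1 for a typical slide film or sensor
print(stops_to_ratio(8))  # 256:1 for B&W negative film
```

Six stops sounds like a lot until you realize it is only a 64:1 brightness ratio, while a sunlit scene with deep shade can span far more than that.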
I’ll just note parenthetically that every film’s dynamic range curve has a shoulder and a toe, where values flatten out, and with digital sensors there is the question of how much noise is acceptable in the lower values. Each of these blurs the exact figures photographers can agree upon for the dynamic range that a particular film/developer combination, or sensor, can produce. I have no desire to enter that particular debate at this time.
So, if for the sake of discussion we take 6 stops of dynamic range as what a typical film or sensor can record, the problem that HDR attempts to address becomes clear: capturing more dynamic range. While the majority of day-to-day scenes are easily handled within this range, there are situations, especially for the landscape photographer, where more is needed.
Now, there are many ways to accomplish this. The traditional method has been the split neutral density filter: put a two-to-three-stop blocking filter over the part of the scene that’s overly bright, and you can then set an exposure that encompasses the available brightness range. The problem with this approach is that a full set of quality 1, 2 and 3 stop filters, in soft- and hard-edged configurations, with filter holders and lens adaptor rings, can cost close to $1,000. They can also be slow to use under the rapidly changing light at the beginning and end of the day (when they are most likely to be needed), and they only work unobtrusively when there is a clearly defined edge to the brightness transition, such as a horizon line or a cliff edge.
A more contemporary approach is to take multiple exposures of the same scene, varying (usually) just the shutter speed. Take a “normal” exposure, and then a few more at 1–2 stops over and under that point. Then, in Photoshop, blend these exposures, using from each one the parts that properly capture the portion of the scene you want.
This can work very well, but to look convincing it needs to be done with considerable skill, and it usually requires quite a bit of work with masks and brushes. Also popular, when multiple exposures aren’t possible (for example when there is movement in the scene), is to process the RAW file twice, once for the highlights and once for the shadows. This can’t, of course, extract information that isn’t in the actual file, but it can do a better job than the usual raw processing tools currently available. The same blending techniques used for merging multiple exposures are used here too, with similar issues arising.
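The mask-and-blend step can be sketched in a few lines. This is a minimal illustration rather than anyone’s production workflow: it assumes two already-aligned exposures as float arrays in [0, 1], and uses a soft luminance mask – the code equivalent of a feathered Photoshop layer mask – to take highlights from the short exposure and everything else from the long one:

```python
import numpy as np

def blend_exposures(short_exp, long_exp, softness=0.2):
    """Blend two aligned exposures of a scene (float arrays in [0, 1]).

    Where the long exposure approaches clipping we fall back on the
    short one; a soft ramp keeps the transition smooth, like a
    feathered layer mask.
    """
    # Per-pixel luminance of the long (shadow-friendly) exposure.
    lum = long_exp.mean(axis=-1, keepdims=True)
    # Mask ramps from 0 (keep long exposure) to 1 (use short exposure)
    # as the long exposure's luminance nears 1.0.
    mask = np.clip((lum - (1.0 - softness)) / softness, 0.0, 1.0)
    return mask * short_exp + (1.0 - mask) * long_exp
```

Real blends need careful alignment and often hand-painted, per-region masks; the soft ramp here just keeps the transition from showing as a hard seam.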
Dynamic Range in Photography
Photography involves a capture device (the camera), a storage medium (e.g. film), and a display or output device (e.g. paper).
The dynamic range of each stage (capture, storage and output) plays a crucial role in the quality of the results. In general, technologies with greater dynamic range produce more realistic results. But photography is a compound process, and the dynamic range of each stage must be considered. When the dynamic range of the source scene is too great for any one stage of the process, something must be sacrificed: you must give up detail in either the shadows or the highlights. Photographers have to know and work within the limitations of their camera, storage and output devices.
W. Eugene Smith spent five days in the darkroom until he came up with a print of Albert Schweitzer that he was happy with:
(for more on this, see Fredo Durand’s lecture slides on The Art and Science of Depiction).
Smith was dealing with the issue that silver halide negatives have a greater dynamic range than photographic paper – so he had to “dodge and burn” different areas of the image to get a result where both the lamp and sitter are visible.
Perhaps the greatest master of dynamic range in photography was Ansel Adams. He was the first to systematically measure the sensitivity range of all of the equipment he used. His “zone system” let him predict precisely what details he could capture on film and paper, so he could make decisions before pressing the shutter:
Color negative films have less dynamic range (or “latitude”) than black and white films. My understanding is that the multiple layers and dyes in color film result in reduced sensitivity. The first color films had very poor latitude, so film manufacturers added more layers – each color layer was split in two, a high-sensitivity and a low-sensitivity layer, using different crystal formations:
(I’m not an expert, but maybe color positive film doesn’t use this trick, hence the difference in latitude between positive and negative film?)
One way to get extended dynamic range with color photography is to use black-and-white film together with color filters. You have to take three exposures on separate sheets of black and white film: one with a red filter, one with a green, and one with a blue – and then composite the three images together. If you use glass plate negatives, you end up with images that have incredible colors and resolution. See below:
The most amazing thing about this image is that it was taken around 1915 by Prokudin-Gorskii. While it is true that this image was digitally enhanced in Photoshop, in my own experiments with a 4×5 and red/green/blue filters I could easily produce an extended tonal scale beyond anything Kodachrome delivers.
High Dynamic Range Imaging
HDR is short for High Dynamic Range. Dynamic range in a photograph, in simple terms, is the contrast in the photo – the difference between the darkest and the lightest colour values. An HDR photo is therefore one with a greater dynamic range than a normal (LDR, or Low Dynamic Range) photo.
Left: Low Dynamic Range image Right: High Dynamic Range image
The purpose of using High Dynamic Range Imaging (HDRI) is to produce a photograph of better quality than a normal one. HDR photography captures the fine details of a scene that normal photography misses; it essentially works around the limitations of the camera taking the photograph. Contrary to the common view, the purpose of HDR photography is not to produce artistic effects but to create photos that are superior in quality.
The first step in HDR photography is choosing a suitable camera. Not every digital camera will do: the technique is to take several photos of the same scene at different exposure levels and then combine them into a single HDR photograph, so you need a camera that allows manual shutter speed adjustment.
The next step is selecting a proper scene to photograph. Make sure the scene has high contrast; low-contrast scenes don’t benefit from HDR, since the camera can capture their entire dynamic range in a single shot. Besides having a high dynamic range, the scene should not contain any moving objects.
After choosing the camera and the scene, the exposure is adjusted and multiple shots are taken using the camera’s self-timer. These shots are then merged to produce an HDR photo.
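The merge step itself can be sketched as a weighted average of per-shot radiance estimates. This is a simplified, hypothetical version that assumes a linear sensor response and already-aligned float images in [0, 1]; real merges (Debevec-style, for example) first recover the camera’s response curve:

```python
import numpy as np

def merge_to_hdr(images, exposure_times):
    """Merge bracketed shots (linear float arrays in [0, 1]) into an HDR radiance map.

    Each shot estimates scene radiance as pixel_value / exposure_time;
    a hat-shaped weight trusts mid-tones and ignores clipped highlights
    and noisy shadows.
    """
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # peaks at 0.5, zero at 0 and 1
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-8)
```

The hat weight is why bracketing works: a pixel blown out in the long exposure contributes nothing there, but the same pixel, properly exposed in a shorter frame, carries full weight.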
One of the most important benefits of HDR photography is that it can capture scenes with very high contrast, rendering them much as they actually appear and producing an image of extremely high quality. An HDR photo holds all the detail the naked eye can see, whereas some of that detail is usually missing from an ordinary digital photograph because of the camera’s limitations. The HDR technique overcomes these limitations in a way that is otherwise not possible.
Since the photos are taken at various exposure levels, each part of the tonal range is well exposed in at least one frame. The best-exposed parts of these shots are then merged, so the final photo has as little noise as possible.
Do we really need HDR?
I recently read this comment from Sam Berry:
… the whole article has no mention of the fact that the reason most controlled lighting is almost always done to ratio of less than 8:1 even with neg film /modern digital capable of much more is because that’s what looks good. HDR technology now means you can reproduce your harsh midday sunlit scene perfectly, and it will look identically awful compared to the original.
The debate boils down to this: does an image with a 300:1 dynamic range look good because it represents a perceptual sweet spot – something about our visual system that works well at that ratio? Or is it simply that, for hundreds of years, all we’ve had access to are reflective images with a roughly 300:1 dynamic range, so that is what we are accustomed to?
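For scale, it’s easy to convert these contrast ratios back into stops (the base-2 logarithm of the ratio):

```python
import math

def ratio_to_stops(ratio):
    """Number of f-stops spanned by a given contrast ratio."""
    return math.log2(ratio)

print(round(ratio_to_stops(300), 1))    # 8.2 stops for a reflective print
print(round(ratio_to_stops(50000), 1))  # 15.6 stops for a 50,000:1 display
```

So the jump from a print to a 50,000:1 display is more than seven stops – nearly doubling the range we have been accustomed to for centuries.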
I had a similar question in mind before seeing the BrightSide HDR display. Now, after looking at an HDR image on a 50,000:1 HDR display, I am no longer concerned about over-brightness; 50,000:1 is still far less than the brightness of looking directly at the sun. It wasn’t blinding, and it isn’t a question of harshness. Images simply look better when they look more real.
In the coming decade, HDR digital imaging technology will arrive, and change how we take, manipulate, store, use and display images forever.