High-Dynamic-Range Imaging

Camera sensors are typically far more limited in dynamic range, i.e. the ratio between the brightest and darkest parts of a scene, than the human eye. Like other human senses, the eye is extremely adaptable and can detect very weak light sources as well as very bright ones. A typical real-world scene may exhibit a dynamic range of about 10,000 : 1, which the human eye can capture in a single view, whereas only very recent imaging sensors are capable of recording such large dynamic ranges. More extreme, yet still common, natural scenes, e.g. an indoor photograph containing bright light sources such as a window on a sunny day, may even exhibit a dynamic range of 100,000 : 1. Most consumer cameras, however, still employ sensors with a smaller dynamic range, which often leads to over- and under-exposed areas in an image, as illustrated in the following figure.



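For reference, such contrast ratios translate into the photographic stop scale via a base-2 logarithm. The following minimal Python sketch performs this conversion for the ratios quoted above (the function name is illustrative):

```python
import math

def ratio_to_stops(ratio: float) -> float:
    """Convert a contrast ratio (brightest : darkest) into photographic stops."""
    return math.log2(ratio)

for ratio in (10_000, 100_000):
    print(f"{ratio:>7}:1  ~= {ratio_to_stops(ratio):.1f} stops")
# 10000:1 ~= 13.3 stops, 100000:1 ~= 16.6 stops
```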
Therefore, adapting exposure time and aperture, whether automatically or manually, is vital to fully capture a natural scene. Even though image sensors will continue to improve, there remain applications in which hardware-based image enhancement is not practical. Large surveillance systems at airports and train stations, for instance, employ a significant number of cameras, which may easily run into the thousands. In such scenarios, where replacing cameras entails changes to other infrastructure components (e.g. data storage, data transmission, power supply or operating software), the hardware layer typically stays in place for a long period, and software-based enhancement is the only acceptable solution. High-dynamic-range (HDR) fusion is then a suitable algorithmic solution: multiple images captured with different exposure settings are merged to generate an image with a larger dynamic range than any of the input images by itself.
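One common way to realise such a fusion, under the simplifying assumption of a linear (or already linearised) sensor response and known exposure times, is a weighted average of per-image radiance estimates. The following Python sketch illustrates the idea; function and variable names are illustrative and not taken from any particular library:

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge differently exposed images into one HDR radiance map.

    Assumes linear pixel values in [0, 1] and known exposure times in
    seconds. Pixels close to under- or over-exposure receive a low weight,
    so each radiance value is taken mostly from well-exposed inputs.
    """
    numerator = np.zeros_like(images[0], dtype=np.float64)
    denominator = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        img = img.astype(np.float64)
        # "Hat" weighting: zero at the extremes, maximal for mid-grey pixels.
        weight = 1.0 - np.abs(2.0 * img - 1.0)
        numerator += weight * (img / t)   # per-image radiance estimate
        denominator += weight
    return numerator / np.maximum(denominator, 1e-8)
```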


Possibly the earliest work on high-dynamic-range imaging was carried out by Gustave Le Gray around 1850. Back then, photographic films were far more limited in dynamic range than the camera sensors used today. To still capture both the bright sky and the darker sea at the horizon, he physically combined two images captured with different exposure times [1]. Later, Charles Wyckoff invented a film with multiple layers of different sensitivity to light [2]. Images captured with this special film were then printed in pseudo-color to visualize all the details; a famous example appeared in Life magazine in 1954. With the development of digital imaging sensors, these old concepts of image fusion for HDR imaging were rediscovered.

 

In 1993, Steve Mann discussed for the first time concepts for image fusion that could be used for HDR imaging [3]. The algorithmic details were published later in a pioneering paper, which suggested reconstructing the camera response curve from a set of differently exposed images and then fusing this set to recover the HDR image [2]. A few years later, Debevec and Malik presented a similar approach [4], which mainly differed in how the camera response curve is reconstructed. Many other methods have been proposed since [5, 6, 7, 8, 9].
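This general pipeline, estimating the camera response curve and then merging the exposure stack, is also available in common libraries. As an illustration only, and not necessarily the implementation used for the results below, OpenCV's photo module provides a Debevec-style calibration, merge and tone mapping; the file names and exposure times here are placeholders:

```python
import cv2
import numpy as np

# Differently exposed shots of the same scene and their exposure times (s).
images = [cv2.imread(f) for f in ("exp_short.jpg", "exp_mid.jpg", "exp_long.jpg")]
times = np.array([1 / 1000, 1 / 60, 1 / 4], dtype=np.float32)

# Recover the camera response curve from the exposure stack (Debevec & Malik).
calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(images, times)

# Fuse the stack into a floating-point HDR radiance map using that response.
merge = cv2.createMergeDebevec()
hdr = merge.process(images, times, response)

# Tone-map the radiance map back to an 8-bit image for display.
ldr = cv2.createTonemapReinhard().process(hdr)
cv2.imwrite("result.png", np.clip(ldr * 255, 0, 255).astype(np.uint8))
```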

 

Below, some results generated with the Minimal-HDR approach are shown.

 

 

 

References

[1] N. Rosenblum. A World History of Photography. Abbeville Press, 2007.

[2] S. Mann and R. W. Picard. On being ‘undigital’ with digital cameras: Extending dynamic range by combining differently exposed pictures. Proceedings of IS&T, 1995.

[3] S. Mann. Compositing multiple pictures of the same scene. Proceedings of the 46th Annual IS&T Conference, 1993.

[4] P.E. Debevec and J. Malik. Recovering high dynamic range radiance maps from photographs. Proceedings of SIGGRAPH, 1997.

[5] S. Mann and R. Mann. Quantigraphic imaging: Estimating the camera response and exposures from differently exposed images. Computer Vision and Pattern Recognition, 2001.

[6] M. A. Robertson, S. Borman, and R.L. Stevenson. Estimation-theoretic approach to dynamic range enhancement using multiple exposures. Journal of Electronic Imaging, 2003.

[7] M.D. Grossberg and S.K. Nayar. Determining the Camera Response from Images: What is Knowable? IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003.

[8] E. Reinhard, G. Ward, S. Pattanaik, and P. Debevec. High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting (The Morgan Kaufmann Series in Computer Graphics). Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 2005.

[9] R. Szeliski. Computer Vision: Algorithms and Applications. Springer, 2010.

 

 

 
