Superresolution

Cameras are becoming increasingly accurate in capturing the infinitely detailed real world. One popular, but not very precise, measure of the amount of information a digital camera can capture is its number of pixels. Over the past two decades the pixel count has grown by roughly two orders of magnitude, from 320 × 240 (in 1990, Dycam Model 1, also known as the Logitech FotoMan) up to 4896 × 3264 (in 2009, Canon EOS-1D Mark IV). Cameras with over one billion pixels have also been developed, though only for very specialized and rare applications [1].


Although this improvement in resolution over recent years sounds very impressive, the images and videos we capture will in general never have enough pixels. We see the following reasons leading to this conclusion. First, even a “high-resolution” image (e.g. one with 20 million pixels) is still far away from the level of detail that the human eye and brain are able to capture. Second, many mobile cameras still do not have the latest high-quality imaging sensors. Surveillance cameras still operate with standard “low-resolution” video formats like NTSC and PAL (around 720 × 576 pixels). The cameras in hand-held devices such as cell phones, which make up the largest share of digital cameras sold, employ very cheap sensors due to cost restrictions; hence, their images suffer from bad illumination, motion blur and noise. Third, spatial resolution is also constrained by the infrastructure. In surveillance settings, higher resolution means that more data needs to be stored and transferred, so more storage and higher bandwidth are required, which can be quite expensive. Furthermore, many existing systems have to operate for a long time with a given resolution. Fourth, increasing the number of pixels on the imaging sensor is not enough: the sensor elements also have to become smaller in order to capture finer details, or expensive optics have to be used to achieve the same effect. However, decreasing the size of a pixel is accompanied by an increase in shot noise, as the amount of captured light is lowered. Lastly, an increased number of pixels also makes fast read-out of sensors technically more difficult and more expensive.


An alternative to increasing the resolution of imagery on the hardware side is to employ software-based solutions. One class of algorithms that increases the number of pixels and the level of detail, i.e. the spatial resolution, is called superresolution. An example of the achievable resolution improvement is depicted in the following figure.



Fig. 1: Superresolution example (left: original, right: enhanced version).
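To make the idea more concrete, the sketch below illustrates the image formation model that multi-frame superresolution methods (including MAP-based ones such as [2]) typically assume: each observed low-resolution frame is a warped, blurred, downsampled and noisy version of an underlying high-resolution image. This is a minimal illustration, not code from a particular system; the function and parameter names (observe, blur_sigma, factor, the shifts) are our own illustrative choices.

```python
# Sketch of the standard observation model:  y_k = D B W_k x + noise
import numpy as np
from scipy import ndimage

def observe(x_hr, shift_yx, blur_sigma=1.0, factor=2, noise_std=0.01):
    """Simulate one low-resolution frame from a high-resolution image."""
    warped = ndimage.shift(x_hr, shift_yx, order=3, mode="reflect")    # sub-pixel camera motion
    blurred = ndimage.gaussian_filter(warped, sigma=blur_sigma)        # optical / sensor blur
    low_res = blurred[::factor, ::factor]                              # detector sampling
    return low_res + np.random.normal(0.0, noise_std, low_res.shape)   # sensor noise

# Example: four low-resolution frames with different sub-pixel shifts.
x = np.random.rand(128, 128)   # stand-in for a real high-resolution image
shifts = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]
frames = [observe(x, s) for s in shifts]
```

Recovering the high-resolution image x from several such low-resolution frames is the inverse problem that superresolution algorithms solve.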


Superresolution has many different applications. A very obvious one is forensic investigation. As in many action movies, crime investigations often face the problem of limited spatial resolution: the suspect is simply too blurry and unclear to be identified. Other applications relate more to graphics, where the overall quality of a whole image or video is to be increased (e.g. sharpening a blurry image). Superresolution has also been applied to scientific imagery such as satellite images. In this application the motion model is essentially limited to planar motion, due to the great distance between camera and scene, which allows for good resolution improvements.


Below, some superresolution results (top: original video, bottom: enhanced video) obtained with the MAP approach from [2] are shown.






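As a rough illustration of how a MAP formulation in the spirit of [2] can be set up, one can minimize the sum of squared differences between simulated and observed low-resolution frames plus a simple smoothness prior, here by plain gradient descent. This is a simplified sketch, not Capel's actual algorithm; observe, frames, shifts and x refer to the sketch above, and observe_transpose, lam, step and iters are illustrative choices.

```python
def observe_transpose(r_lr, shift_yx, blur_sigma=1.0, factor=2):
    """Adjoint of the (noise-free) observation operator from the sketch above."""
    up = np.zeros((r_lr.shape[0] * factor, r_lr.shape[1] * factor))
    up[::factor, ::factor] = r_lr                                       # adjoint of the downsampling
    up = ndimage.gaussian_filter(up, sigma=blur_sigma)                  # Gaussian blur is symmetric
    return ndimage.shift(up, (-shift_yx[0], -shift_yx[1]),
                         order=3, mode="reflect")                       # approximate adjoint of the warp

def map_superresolve(frames, shifts, shape_hr, lam=0.05, step=0.2, iters=200):
    """Gradient descent on  sum_k ||A_k x - y_k||^2 + lam * ||grad x||^2."""
    x = np.zeros(shape_hr)
    for _ in range(iters):
        grad = np.zeros(shape_hr)
        for y, s in zip(frames, shifts):
            residual = observe(x, s, noise_std=0.0) - y                 # A_k x - y_k
            grad += observe_transpose(residual, s)                      # A_k^T (A_k x - y_k)
        grad -= lam * ndimage.laplace(x)                                # gradient of the smoothness prior
        x -= step * grad
    return x

x_sr = map_superresolve(frames, shifts, x.shape)
```

The smoothness prior plays the role of the image prior in the MAP objective; in practice, more sophisticated priors and motion models (e.g. full planar homographies, as in the satellite example above) are used instead of pure translations.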
References

[1] Pan-STARRS Project. http://pan-starrs.ifa.hawaii.edu/public/home.html. 2010.

[2] D. Capel, Image Mosaicing and Superresolution. PhD Thesis, University of Oxford, 2001.
