High dynamic range compressive imaging: a programmable imaging system

Mehrdad Abolbashari, Filipe Magalhães, Francisco Manuel Moita Araújo, Miguel V. Correia, and Faramarz Farahi


Optical Engineering 51(7), 071407 (July 2012)

Mehrdad Abolbashari
University of North Carolina, Center for Optoelectronics and Optical Communications and Center for Precision Metrology, Charlotte, North Carolina 28223
E-mail: [email protected]

Filipe Magalhães
INESC Porto, Rua do Campo Alegre 687, 4169-007 Porto, Portugal, and Universidade do Porto, Faculdade de Engenharia, Departamento de Engenharia Electrotécnica e de Computadores, Rua Dr. Roberto Frias s/n, 4200-465 Porto, Portugal

Francisco Manuel Moita Araújo
INESC Porto, Rua do Campo Alegre 687, 4169-007 Porto, Portugal

Abstract. Some scenes and objects have a wide range of brightness that cannot be captured with a conventional camera. This limitation, which limits the dynamic range of an imaged scene or object, is addressed by high dynamic range (HDR) imaging techniques, with which images spanning a very broad range of intensity can be obtained with conventional cameras. Another limitation of conventional cameras is the range of wavelengths they can capture: outside the visible region their responsivity drops, so a conventional camera cannot capture images at nonvisible wavelengths. Compressive imaging is a solution to this problem. Compressive imaging reduces the number of pixels required to one, so the camera can be replaced by a single-pixel detector, and the range of wavelengths to which such detectors are responsive is much wider than that of a conventional camera. A combination of HDR imaging and compressive imaging is introduced that benefits from the advantages of both techniques. An algorithm that combines these two techniques is proposed, and results are presented. © 2012 Society of Photo-Optical Instrumentation Engineers (SPIE). [DOI: 10.1117/1.OE.51.7.071407]

Subject terms: high dynamic range imaging; compressive imaging; compressive sampling; high dynamic range compressive imaging. Paper 111211SS received Sep. 28, 2011; revised manuscript received Apr. 25, 2012; accepted for publication Apr. 25, 2012; published online Jun. 11, 2012.

Miguel V. Correia
INESC Porto, Rua do Campo Alegre 687, 4169-007 Porto, Portugal, and Universidade do Porto, Faculdade de Engenharia, Departamento de Engenharia Electrotécnica e de Computadores, Rua Dr. Roberto Frias s/n, 4200-465 Porto, Portugal

Faramarz Farahi
University of North Carolina, Center for Optoelectronics and Optical Communications and Center for Precision Metrology, and Department of Physics and Optical Science, Charlotte, North Carolina 28223

1 Introduction
One of the limitations of imaging systems is their dynamic range. Scenes in nature usually have higher dynamic ranges than conventional cameras can capture. Therefore, full representation of a scene that has regions with both very high and very low radiances is not possible with conventional cameras. High dynamic range (HDR) imaging addresses this issue.1–3 Another challenge in imaging is the lack of cameras for some regions of the electromagnetic spectrum. For example, for terahertz and millimeter wavelengths, no commercial camera is available, and for IR wavelengths, cameras are expensive and their spatial resolutions are less than what is available in the visible region.


One solution to this problem is compressive imaging. In compressive imaging, only one detector with a single pixel is used to capture the light from the object or scene, and after collecting enough samples, the full image can be reconstructed. Compressive imaging can also be used to increase the speed of imaging in some applications, such as magnetic resonance imaging (MRI).4–7 Applications for combining HDR imaging and compressive imaging include scenarios where the scenes and/or objects have high contrast ratios, the radiation wavelength is beyond the range to which conventional cameras are sensitive, or the image capture speed is important.

The rest of this paper is organized as follows: Sec. 2 explains the basics of HDR imaging, compressive sampling, and compressive imaging. Section 3 describes the high dynamic range compressive imaging (HDRCI) technique. Section 4 presents the experiments and results, and Sec. 5 presents concluding remarks.

2 Background

2.1 High Dynamic Range Imaging
Many objects and scenes exhibit a broad range of brightness because of the range of colors, the reflection or transmission coefficients, the illumination pattern, etc. In a typical digital camera, the value for each color (red, green, and blue) is stored using 8 bits; assuming that the minimum intensity is equal to 1 and the maximum intensity is equal to 255, the dynamic range of the camera is about 48 dB for each color. To capture a scene that requires a dynamic range of more than 48 dB per color, a new imaging technique is needed. One of the most popular methods is to capture images of the same scene with different exposure times and gains, and then combine the resulting images to construct an image with high dynamic range.1,2

Early work in this area was done by Mann,8,9 who proposed an algorithm that combines different images of the same scene captured with different exposure times to construct an image with high dynamic range. The combination algorithm is based on a certainty function that is the derivative of the camera's response function. Another work in this field was reported by Maddan,10 where, once again, multiple images of a scene are captured with a CCD at different exposure times. The exposure time is varied such that at the minimum exposure time there are no saturated pixels, and at the maximum exposure time some pixels become saturated. Examining each pixel across the different exposure times, a pixel might (1) be saturated at long exposure times, taking the maximum allowable value; (2) remain dark, from either a short exposure time or simply lack of light; or (3) take values below saturation and above the noise level. To construct the final HDR image, each pixel is taken from the exposure in which its value is below saturation and has the smallest quantization error, and is therefore the most precise. Yamada et al.11 worked on dynamic range extension of TV cameras for vehicles, using the same principle of acquiring several images with different exposure times and combining them into one enhanced image with high dynamic range. There are also works on extending the dynamic range of color images. Moriwaki12 used the same principle of combining images with different exposure times to construct an adaptive-exposure color image and showed that the constructed image has better accuracy in applications like color discrimination. Debevec and Malik13 used images with different exposure times to find the response function of the imaging process up to a scale factor and then used this response function to construct HDR images. Mitsunaga and Nayar14 used images obtained with different exposure times to calculate the response function of the imaging system; only the ratio of exposure times needs to be known to accurately recover the response function.

Another technique for improving the dynamic range of an image is to change the exposure time or intensity of each pixel individually. Nayar and Mitsunaga3 proposed an imaging system in which neighboring pixels have different exposure times; their system used a mask with different transparencies in front of the detector. Mannami et al.15 proposed HDR imaging by means of a reflective liquid crystal, using an LCoS (liquid crystal on silicon) device as a spatial light modulator in front of a camera to spatially modulate the intensity of each pixel. One of the issues here, to obtain good results, is geometric alignment between the pixels of the camera and the LCoS; they used homography for geometric calibration, and an off-line calibration was conducted to infer the radiometric properties of the system. At each step they changed the values of the LCoS pixels so that the values of the corresponding camera pixels became equal to an optimal value, which corresponds to an optimal radiance. Adeyemi et al.16 used a digital micromirror device (DMD) to acquire an HDR image in a microscopy system. They also used geometric calibration to match corresponding pixels of the camera and the DMD, and the DMD was characterized for its reflected power versus the digital values of its pixels. They reported that, in principle, their system can improve the dynamic range of an image by a factor of 573, although their experimental results show an improvement by only a factor of 5. Commercially, some imaging systems, including Viper FilmStream, SMal, Pixim, and SpheronVR,2 can capture and record HDR images in one shot. Fujifilm has introduced point-and-shoot cameras that capture HDR from two images with different exposure times.17
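To make the multi-exposure principle concrete, the following Python sketch fuses a stack of 8-bit images taken with known exposure times into a single radiance map, picking for each pixel the longest exposure that is neither saturated nor lost in noise (the choice that minimizes relative quantization error, as in the selection scheme described above). The function name, the thresholds, and the assumption of a linear camera response are illustrative and not taken from the paper.

```python
import numpy as np

def fuse_exposures(images, exposure_times, saturation=250, noise_floor=5):
    """Combine a stack of differently exposed images into one HDR radiance map.

    Assumes a linear camera response; `images` is a list of 2-D uint8 arrays of
    the same scene ordered by increasing exposure, and `exposure_times` holds
    the matching exposure times.  For each pixel the longest exposure that is
    still below saturation is kept, which gives the smallest relative
    quantization error.
    """
    stack = np.stack([im.astype(float) for im in images])        # (K, H, W)
    times = np.asarray(exposure_times, dtype=float)[:, None, None]
    radiance = stack / times                                     # linear estimate per exposure
    valid = (stack < saturation) & (stack > noise_floor)         # unsaturated and above noise

    # index of the longest valid exposure for every pixel; fall back to the
    # shortest exposure when no exposure is usable for that pixel
    order = np.arange(len(images))[:, None, None]
    best = np.where(valid, order, -1).max(axis=0)
    best = np.clip(best, 0, None)
    return np.take_along_axis(radiance, best[None], axis=0)[0]

# dynamic range of a single 8-bit exposure, as quoted in the text:
# 20 * log10(255 / 1) ≈ 48 dB per color channel
```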


2.2 Compressive Sampling
Compressive sampling (CS) is an emerging area with a wide range of applications.18–20 In science and engineering the applications include imaging,21–23 communications,24,25 analog-to-digital convertors,26,27 computational biology,28 etc. The basic idea of compressive sampling comes from the fact that if a signal (either one- or multi-dimensional) can be represented sparsely in some domain, then it can be sampled at rates lower than the Nyquist rate. The Nyquist sampling theorem says that if a signal is band-limited, with B being its highest frequency, then sampling the signal at a rate of at least 2B guarantees perfect reconstruction. This condition might be considered the worst case; in other words, it is certainly sufficient for successful reconstruction of the signal, but it is not necessarily required. For example, for some multiband signals, different methods can be used to sample the signal at rates lower than the Nyquist rate.29 This sub-Nyquist rate can be achieved if the signal is sparse. The sampling can be done uniformly, nonuniformly,29 or randomly,26,30,31 in the time or frequency domain.32

The reconstruction of signals sampled by use of compressive sampling is another field of research. Many algorithms have been developed for compressive sampling reconstruction; most are either l1-minimization33,34 or greedy35,36 algorithms. Early work on reconstruction with l1 minimization was done by Santosa and Symes37 in 1983, who showed that it is possible to reconstruct a sparse spike train from part of its spectrum with l1 minimization. In 1989 Donoho and Stark38 showed that under certain conditions a noisy signal can be recovered perfectly with l1 minimization (Logan's certainty principle), while it is not possible to do so with l2 minimization. Since then there has been a lot of interest in sparse representation and l1 minimization.39,40

The theory of compressive sampling is based on two concepts: sparsity and incoherence. Sparsity means that the signal has a representation in which most of its coefficients are zero. In mathematical form, suppose that x can be represented in the basis Ψ as

x = \sum_{i=1}^{N} a_i \psi_i ,

in which only m coefficients a_i are nonzero. Then x is m-sparse in the domain Ψ. If the nonzero coefficients of x are supported on a relatively small set, then x is said to be a sparse signal. If outside that small set the coefficients are not zero but very small, decaying rapidly enough, the signal is said to be compressible.

The other basic concept in compressive sampling is incoherence. Consider two bases Ψ and Φ, each with N normalized elements. The mutual coherence between these two bases is defined as41

M(\Phi, \Psi) = \sup_{i,j} \bigl\{ \langle \varphi_i, \psi_j \rangle \bigr\} ,

which can be simply described as a similarity measure between the two bases. The incoherence criterion states that in order to reconstruct a sparse signal with as few samples as possible, the sparse basis and the measurement basis need to be as incoherent as possible. In other words, if Ψ is the sparse basis and Φ the measurement basis, then a lower M(Φ, Ψ) means that fewer samples are needed to reconstruct the signal.
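To make the ideas of sparsity, incoherence, and sub-Nyquist recovery concrete, the short Python sketch below measures an m-sparse signal with random ±1 vectors (highly incoherent with the canonical basis, with coherence 1/√N) and reconstructs it from far fewer samples than its length using a simple greedy algorithm (orthogonal matching pursuit). The dimensions, the OMP helper, and the use of a greedy solver instead of l1 minimization are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, M = 256, 8, 64                # signal length, sparsity, number of measurements

# m-sparse signal in the canonical basis (Psi = identity)
x = np.zeros(N)
x[rng.choice(N, m, replace=False)] = rng.standard_normal(m)

# random +/-1 measurement vectors (similar in spirit to binary modulator codes),
# scaled so each row has unit norm
Phi = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(N)
y = Phi @ x                          # M << N compressive measurements

# mutual coherence with the canonical basis: every entry has magnitude 1/sqrt(N)
print("coherence:", np.max(np.abs(Phi)))        # ~0.0625 for N = 256

def omp(Phi, y, m):
    """Greedy reconstruction by orthogonal matching pursuit."""
    support, residual = [], y.copy()
    for _ in range(m):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, m)
print("max reconstruction error:", np.max(np.abs(x_hat - x)))
```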

2.3 Compressive Imaging
In conventional imaging systems an image is usually captured and then compressed. This process leads us to ask whether there is a way to capture the image directly in a compressed form so that processing and compressing the raw image is not needed, thereby saving time and energy. Compressive sampling is the answer to this question. It can be shown that if an image is sparse in some basis (which is why the image can be compressed), one can use the concept of compressive sampling; that is, sample the object by multiplying the light reflected or transmitted from it with sampling functions/vectors in order to find the projection of the object on each sampling function/vector, and then use an optimization method to reconstruct the image from the acquired sampled data.

One of the early works realizing this concept was conducted by Wakin et al.42 They proposed an imaging system in which reflected light from the object is multiplied by a random code and then captured by a single detector; this is done by using a digital micromirror device (DMD) as a light modulator whose pixels reflect light to the detector according to random codes. Signals from the detector are processed to reconstruct the image. Gan43 proposed a block compressive imaging method in which the scene is captured block by block by a compressive imaging technique. In another article, Gan et al.44 proposed scrambled Hadamard codes for compressive imaging and showed that these integer-valued codes have performance comparable to a dense scrambled Fourier ensemble. Rivenson and Stern45 introduced separable sensing operators that significantly lower the complexity of compressive imaging for large images at the cost of acquiring more samples. Nagesh and Li46 proposed an architecture and a reconstruction algorithm for compressive imaging of color images. Zerom et al.47 demonstrated high-resolution quantum ghost imaging with the use of compressive imaging, reporting that their technique both reduces acquisition time and uses photons more effectively. In the field of medical imaging, there are plenty of works related to compressive imaging. Lustig et al. have a series of papers that propose methods for rapid MRI imaging.4–7 Their method is based on randomly selected samples in k-space and reconstruction with l1 minimization or total variation (TV) minimization. With these techniques the image can be acquired faster and with relatively better quality.

3 High Dynamic Range Compressive Imaging
In the previous section the advantages and techniques of HDR imaging and compressive imaging were explained. If these two techniques can be combined in a single system, we will have an imaging system that can increase the dynamic range of an image and can create the image faster and/or at wavelengths where conventional cameras do not work. For example, in fluorescence imaging there is demand for HDR cameras suitable for the infrared region of the spectrum.48 Such cameras are expensive, so HDRCI is a solution for this type of imaging.

HDRCI is a programmable imaging system49 that can control the radiometric properties of the system; it uses one detector with a single pixel to obtain data and uses postprocessing to construct the image. Figure 1 shows a schematic of the proposed system. The object is illuminated by a light source and imaged by an optical system onto a spatial light modulator (SLM). The SLM, which can be in a reflective or transmissive configuration, acts as a spatial intensity filter [second block of Fig. 1(a)] to change the intensity of different parts of the image independently (i.e., the HDR part). The SLM is also used as a device to multiply different codes [third block of Fig. 1(a)] with the image formed on the SLM (i.e., the CS part); that is, it uses sampling functions/vectors, called codes, to sample the image. For the active illumination technique that was used in the experiment, the SLM is part of the light source. The change in intensity of different parts of the image and the multiplication of codes are accomplished in the light source, and therefore the projected light is manipulated before illuminating the object. After applying a sufficient number of codes and acquiring the corresponding data, the acquired data are used to reconstruct the image.
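As a minimal numerical sketch of this measurement model (not the authors' implementation), the following Python fragment forms single-pixel samples of a synthetic scene: the SLM first applies a per-pixel attenuation mask to tame the bright region and then displays binary sampling codes, and each sample is the detector's sum of the coded, attenuated image. The scene, the attenuation factor of 16, and the number of codes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical scene with one very bright region on a dim background (linear units)
H = W = 32
scene = np.full((H, W), 2.0)
scene[8:16, 8:16] = 2000.0                 # ~60 dB brighter than the background

# per-pixel attenuation applied by the SLM (the HDR part): bright pixels are
# knocked down by a factor of 16 so they no longer dominate the detector signal
attenuation = np.where(scene > 100.0, 1 / 16.0, 1.0)

# binary sampling codes displayed on the SLM (the CS part)
K = 410                                    # number of codes, roughly 0.4 * H * W
codes = rng.integers(0, 2, size=(K, H, W)).astype(float)

# each measurement is the single-detector sum of the coded, attenuated image
samples = np.array([np.sum(c * attenuation * scene) for c in codes])
```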


Fig. 1 Schematic of HDRCI system in (a) block diagram, (b) transmissive, and (c) reflective configurations.

One issue in HDRCI is defining a meaningful parameter that measures the dynamic range of the final image. For conventional imaging, dynamic range is defined as the ratio of maximum intensity to minimum intensity. For a digital camera, dynamic range is limited by noise, such as quantization noise, shot noise, read-out noise, etc. In HDRCI, the noise of the acquired data is not the noise of the final image; rather, the pixel values, and therefore the image noise, are calculated from the acquired samples through an optimization algorithm. Therefore, we need a new definition for the minimum level of intensity other than the noise of the raw data. Our definition of the minimum level of intensity should be based on the noise level of the system as calculated for the reconstructed image. To measure that noise level in practice, the HDRCI system is run without input; namely, a black object is selected as input and the reconstructed signal is considered the output noise. The standard deviation of this output noise is taken as the noise level, or minimum intensity; that is, every signal below this threshold is considered noise and its corresponding pixel is considered a dark pixel. [Note that the proposed technique to measure the noise level does not consider the signal noise (i.e., the noise coming from the light source itself), but the definition of minimum intensity, which corresponds to the noise level, is not limited to measurement noise and can include all other noises, including signal noise.] Based on this definition, the aim of the HDRCI system is to bring the pixels of an image above a minimum level of intensity (excluding dark pixels) while avoiding saturation of pixels with high intensity. In other words, HDRCI captures information from a wider range of intensity levels than does a CS imaging system, providing a higher dynamic range.
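The noise-level definition above can be written down directly. In the sketch below (a schematic illustration, not the paper's code), `recon_dark` stands for an image reconstructed with a black object at the input and `recon_image` for a reconstructed scene; the minimum intensity is the standard deviation of the dark reconstruction, and the dynamic range follows as the ratio of the brightest reconstructed pixel to that floor, expressed in dB with the same 20 log10 convention used earlier for the 48-dB figure.

```python
import numpy as np

def noise_floor(recon_dark):
    """Minimum-intensity level of the system, estimated from a reconstruction
    obtained with a black object at the input (no signal, only system noise)."""
    return float(np.std(recon_dark))

def dynamic_range_db(recon_image, floor):
    """Dynamic range of a reconstructed image: ratio of the largest pixel value
    to the noise floor; pixels below the floor are treated as dark pixels."""
    bright = recon_image[recon_image > floor]
    return 20.0 * np.log10(bright.max() / floor) if bright.size else 0.0

# usage sketch (the arrays would come from the CS reconstruction in practice)
recon_dark = np.random.default_rng(2).normal(0.0, 0.01, size=(32, 32))
recon_image = np.abs(np.random.default_rng(3).normal(0.0, 1.0, size=(32, 32)))
print(dynamic_range_db(recon_image, noise_floor(recon_dark)))
```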

As mentioned previously, in conventional imaging an HDR image can be obtained if the exposure times are changed for all pixels or for a set of pixels. One of the basic differences between conventional imaging and HDRCI is the fact that in HDRCI the exposure time applies not to a single pixel but to an ensemble of pixels. Therefore a parameter equivalent to exposure time must be defined for HDRCI. This parameter can be defined as a combination of the analog-to-digital convertor (ADC) gain level and a mask that selects an ensemble of pixels: to change the equivalent exposure time, we block or unblock a group of pixels and increase or decrease the gain of the ADC.

In analogy to conventional HDR imaging, we introduce the algorithm for HDRCI (Fig. 2). First, we select a proper level for the ADC gain. The proper level at this step can be defined as the level at which the number of saturated samples is a small percentage of the total samples; when this percentage is between 10% and 30%, the performance is satisfactory.50 Second, an image is constructed by use of the compressive imaging technique. Pixels that have values greater than a threshold value are identified. This threshold can be defined on the basis of the histogram of the reconstructed image or some other statistic of the reconstructed image. Next, the identified pixels are attenuated in such a way that their values remain above the noise level but fall far below saturation. The amount of attenuation depends on the number of iterations of the algorithm and on the amount of increase in the ADC gain. For example, suppose that the algorithm runs for two iterations and that each pixel of the attenuator has 8 bits. Then, on average, 4 bits can be used per iteration, giving an attenuation factor of 2^4 = 16. Hence, a pixel with maximum intensity (or a saturated pixel) can be reduced by a factor of 16 relative to the highest possible level (the saturation level). The level of the ADC gain is then determined accordingly, based on the average attenuation of the whole image. For instance, if the average attenuation of the image intensity is 80% [an attenuation factor of 1/(1 − 80/100) = 5], then the gain of the ADC will be increased by a factor of 5.
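The flow chart of Fig. 2 can be summarized in a few lines of code. The sketch below is a schematic implementation assuming hypothetical `acquire` and `reconstruct` callables that stand in for the hardware (SLM plus single-pixel detector with programmable ADC gain) and the CS solver; the 90th-percentile threshold and the per-iteration attenuation step of 16 echo the worked example in the text but are not prescriptive.

```python
import numpy as np

def hdrci(acquire, reconstruct, shape, iterations=2, atten_step=16.0):
    """Iterative HDRCI loop following the algorithm described above.

    `acquire(attenuation, gain)` is assumed to return compressive samples taken
    with the given SLM attenuation mask and ADC gain, and `reconstruct(samples)`
    to return an image estimate; both are placeholders for the real system.
    """
    attenuation = np.ones(shape)
    gain = 1.0
    for _ in range(iterations):
        image = reconstruct(acquire(attenuation, gain))

        # pixels above a histogram-based threshold receive extra attenuation
        threshold = np.percentile(image, 90)            # illustrative choice
        new_atten = attenuation.copy()
        new_atten[image > threshold] /= atten_step      # e.g. use 4 of the 8 SLM bits

        # raise the ADC gain by the fraction of detected intensity removed,
        # e.g. 80% average attenuation -> 1/(1 - 0.8) = 5x higher gain
        retained = np.sum((new_atten / attenuation) * image) / np.sum(image)
        gain *= 1.0 / max(retained, 1e-6)
        attenuation = new_atten

    # final reconstruction; dividing by mask and gain restores the pixel ratios
    image = reconstruct(acquire(attenuation, gain))
    return image / (attenuation * gain)
```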


Fig. 2 Flow chart of algorithm for high dynamic range compressive imaging.

As the ADC gain changes, the noise level will also change, thereby affecting the noise level of the quantized samples. This can be avoided if the dominant noise of the quantized samples is the noise of the ADC for all gain levels; in that case, increasing the ADC gain does not change the noise level significantly. This is an acceptable assumption for the implemented system. (For the implemented system the noise-equivalent power of the photodiode is 5.5 × 10⁻¹⁴ W/√Hz, the average responsivity of the photodiode is approximately 0.5, and the bandwidth of the system is