Wide-Dynamic-Range CMOS Image Sensors—Comparative Performance Analysis





IEEE TRANSACTIONS ON ELECTRON DEVICES, VOL. 56, NO. 11, NOVEMBER 2009

Wide-Dynamic-Range CMOS Image Sensors—Comparative Performance Analysis Arthur Spivak, Alexander Belenky, Alexander Fish, Member, IEEE, and Orly Yadid-Pecht, Fellow, IEEE

Abstract—A large variety of solutions for widening the dynamic range (DR) of CMOS image sensors has been proposed throughout the years. We propose a set of criteria upon which an effective comparative analysis of the performance of wide-DR (WDR) sensors can be done. Sensors for WDR are divided into seven categories: 1) companding sensors; 2) multimode sensors; 3) clipping sensors; 4) frequency-based sensors; 5) time-to-saturation (time-to-first-spike) sensors; 6) global-control-over-the-integration-time sensors; and 7) autonomous-control-over-the-integration-time sensors. The comparative analysis for each category is based upon quantitative assessments of the following parameters: signal-to-noise ratio, DR extension, noise floor, minimal transistor count, and sensitivity. These parameters are assessed using consistent assumptions and definitions, common to all WDR sensor categories. The advantages and disadvantages of each category in terms of power consumption and data rate are discussed qualitatively. The influence of technology advancements on the proposed set of criteria is discussed as well.

Index Terms—Active pixel sensor (APS), CMOS image sensors (CIS), dynamic range (DR), noise floor (NF), sensitivity, sensors, signal-to-noise ratio (SNR).

I. INTRODUCTION

THE ABILITY to integrate various smart functions of imagers on a single chip [1], [2], using CMOS technology, has stimulated the fast growth of the production of digital cameras based on this technology. A large variety of smart functions has allowed CMOS imagers to be deployed in security vision systems, medical devices, quality control, and space research [3]. While scaling trends in CMOS technology facilitate higher resolution imaging, they also have an adverse effect on image performance, leading to limited quantum efficiency, increased leakage current, and reduced signal swing [4], and hence a reduced dynamic range (DR) of the sensor [3], [5], [6]. The DR is one of the most important figures of merit of a CMOS image sensor (CIS) and quantifies the ability of a sensor to image both highlights and shadows. At high illumination levels, a narrow DR causes the saturation of pixels with high sensitivity, resulting in loss of information. These issues provide researchers with many challenges in their efforts to realize high-quality CIS-based vision systems.

Manuscript received February 2, 2009; revised June 9, 2009. First published September 29, 2009; current version published October 21, 2009. The review of this paper was arranged by Editor A. Theuwissen. A. Spivak, A. Belenky, and A. Fish are with the VLSI Systems Center, Ben-Gurion University of the Negev, Beersheba 84105, Israel (e-mail: [email protected]; [email protected]; [email protected]). O. Yadid-Pecht is with the Electrical and Computer Engineering Department, University of Calgary, Calgary, AB T2N 1N4, Canada (e-mail: [email protected]). Digital Object Identifier 10.1109/TED.2009.2030599

The overall task of wide-DR (WDR) imaging can be divided into two distinct stages: image capture, which prevents the loss of scene details, and image compression, which allows image representation on conventional computer screens. The first stage is a challenge mainly for image sensor designers and can be implemented at the pixel or system level, whereas the second stage is mainly accomplished in software. This paper examines only the solutions relevant to the first stage, i.e., to the image capture ability. Image capture capability can be improved either by reducing the noise floor (NF) of the sensor [7]–[10] or by extending its saturation toward higher light intensities. Here, we focus on the solutions that extend the DR toward high light intensities. Various solutions for extending the DR in CISs at the pixel or system level have been presented in recent years. A qualitative summary of the existing solutions and their comparison is presented in [11]. In that paper, the WDR algorithms were divided into six general categories. This paper updates the division into categories and provides quantitative assessments of sensor performance within each category, as well as overall qualitative comparisons between the different categories.
The updated classification of the WDR schemes is as follows: 1) companding sensors, which compress their response to light through a logarithmic transfer function; 2) multimode sensors, which have a linear response at dark illumination levels and a logarithmic response at bright ones (i.e., they can switch between linear and logarithmic modes of operation); 3) clipping sensors, in which a capacity well adjustment method is applied; 4) frequency-based sensors, where the sensor output is converted into a pulse frequency; 5) time-to-saturation [(TTS); time-to-first-spike] sensors, where the image is processed according to the time at which the pixel was detected as saturated; 6) sensors with global control over the integration time; and 7) sensors with autonomous control over the integration time, where each pixel controls its own exposure period.

This paper is organized as follows: Section II presents the definitions and basic assumptions. Section III presents the quantitative study of the WDR schemes. Section IV presents the discussion. Section V summarizes the study.

II. DEFINITIONS AND BASIC ASSUMPTIONS

To be consistent throughout the comparisons presented in this paper, we have defined basic assumptions that are applicable to each algorithm. Moreover, these assumptions simplify the numeric analysis of each approach for widening the DR. We assume that pixels within each group of WDR sensors operate without color filters, i.e., the signal from the pixel is converted

0018-9383/$26.00 © 2009 IEEE


into grayscale. The symbols iph, idc, and σr denote the photogenerated current, the dark current, and the readout signal deviation, respectively. We assume that the incident light intensity is constant over the frame and that the transfer function from light intensity to photogenerated current is linear and constant. This assumption allows us to easily calculate the accumulated photocharge. For all of the sensor groups, aside from the logarithmic ones, we assume that the longest integration time is tint. We also assume that all sensors operate at a video frame rate (30 frames per second); therefore, the longest integration time is 30 ms. All signal-to-noise ratio (SNR) calculations were performed over the same range of photogenerated current iph. To simplify the calculations further, we assume that the dark signal is equal to 0.1 fA. We also assume that all capacitances within the pixel are time invariant; otherwise, all calculations become extremely complex. Each pixel is represented with functional blocks in order to make the calculations for a particular group of WDR sensors as general as possible. For example, the photodiode that integrates photocharge during the frame is replaced with the integrator block "∫". Note that, since the accumulated photogenerated charge is negative, the current flows out of the integrator block. The equivalent capacitance associated with the integrator, Cint, is equal for all pixels and is set to 10 fF. The saturation signal is 1 V for all techniques except the logarithmic one. The analog-to-digital converter (ADC) used for the final analog-to-digital (A–D) conversion has the same resolution for all the sensors. The full well capacity of a pixel (without additional capacitances) is denoted by Qmax. The in-pixel source follower amplifier, utilized for nondestructive readout, is assumed to have constant gain and to operate in the saturation region.
The gain fixed-pattern-noise (FPN) component σPRNU (where PRNU stands for "photoresponse nonuniformity") in all systems (except the logarithmic mode) is assumed to be between 0.1% and 0.3% of the photogenerated signal, according to the functionality and the minimal number of transistors required for the implementation of the pixel in the specific category of WDR sensors. The offset FPN component σOff_FPN is assumed to vary from 0.1% to 0.3% of the saturation signal, according to the functionality, the minimal number of transistors inside the pixel, and the complexity of the calibration procedure. The minimal number of transistors within the pixel is assessed assuming that each switch is implemented with one transistor and each source follower with two. The comparator is implemented with five transistors, and the memory unit with six. We also assume that pixels do not share any transistors. The minimal readout noise is 3 e−. Thermal noise stored on capacitors, referred to as "kTC" noise, is taken into consideration only when it is caused by the reset operation. This noise is a function of the sensor temperature T, given in kelvins. The body (bulk) effect is ignored; otherwise, the threshold voltage would be a function of the signal itself. The SNR of each WDR scheme is defined as the ratio between the photogenerated signal and the root mean square of the average noise power. The average noise power is calculated by summing the variances of all the noise components, assuming that none of the noise random processes are correlated. Note that all noise components are input referred.


Fig. 1. Schematic diagram of the reference APS.

When the photocurrent exceeds the bounds of the system's DR, its SNR drops to zero. However, the SNR as defined previously is not sufficient for evaluating a sensor's ability to image high-DR scenes, since it does not indicate the sensitivity of the sensor, i.e., its ability to distinguish between two adjacent light intensities. Therefore, in this paper, we analyze sensitivity along with SNR and DR throughout the whole range of relevant photocurrents. This analysis allows assessing the effectiveness of a certain DR extension. The sensitivity parameter in this paper is defined as the derivative of the pixel output with respect to the photogenerated current and is presented in volts per picoampere. The sensitivity for photocurrents exceeding the DR is zero. We also define the minimum detectable signal as the NF, evaluated in electrons. In order to compare parameters such as NF, minimal transistor count, etc., a conventional three-transistor (3T) active pixel sensor (APS) cell, shown in Fig. 1, was taken as the reference pixel. In this pixel, the photogenerated and dark currents discharge an integrator capacitance. At the end of the integration time, the voltage at the "sense" node is read out (Vout) using a source follower amplifier (Buffer in Fig. 1). Ibias is the columnwise current source associated with the Buffer. A new integration starts after the photodiode is reset by activating the "Reset" signal. The SNR of the reference sensor is given by

$$\mathrm{SNR} = 20\log_{10}\frac{i_{ph}t_{int}/C_{int}}{\sqrt{\sigma_r^2 + \frac{q(i_{ph}+i_{dc})t_{int}}{C_{int}^2} + \frac{\sigma_{PRNU}^2 i_{ph}^2 t_{int}^2}{C_{int}^2} + \sigma_{Off\_FPN}^2\frac{Q_{max}^2}{C_{int}^2}}}. \quad (1)$$

The NF is calculated as follows:

$$\mathrm{NF} = \frac{C_{int}}{q}\sqrt{\sigma_r^2 + \frac{qi_{dc}t_{int}}{C_{int}^2} + \sigma_{Off\_FPN}^2\frac{Q_{max}^2}{C_{int}^2}}. \quad (2)$$

All the calculations are performed using MATLAB. Fig. 2 presents the calculated SNR for a 3T-APS-based sensor. The calculation of the SNR for the reference pixel is performed with the following values: Qmax = 62 500 e−, Cint = 10 fF, T = 300 K, σr = 48 μV, σPRNU = 0.1%, σOff_FPN = 0.1%, and NF = 65 e−.
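The paper's curves were produced in MATLAB; as a rough equivalent, the following Python sketch evaluates (1) and (2) with the reference values above. The function names and the numeric constants for q and k are our own, not the authors':

```python
import math

# Physical constants and the paper's reference-pixel values (Section II); SI units.
q = 1.602e-19        # electron charge [C]
C_int = 10e-15       # integration capacitance [F]
t_int = 30e-3        # integration time [s]
i_dc = 0.1e-15       # dark current [A]
Q_max = 62500 * q    # full-well charge [C]
sigma_r = 48e-6      # readout noise [V]
s_prnu = 0.001       # gain FPN, 0.1% of the signal
s_off = 0.001        # offset FPN, 0.1% of the saturation signal

def snr_db(i_ph):
    """SNR of the reference 3T APS, eq. (1): signal over RMS of input-referred noise."""
    signal = i_ph * t_int / C_int
    noise_var = (sigma_r**2
                 + q * (i_ph + i_dc) * t_int / C_int**2   # shot noise
                 + (s_prnu * signal)**2                   # gain FPN
                 + (s_off * Q_max / C_int)**2)            # offset FPN
    return 20 * math.log10(signal / math.sqrt(noise_var))

def noise_floor_e():
    """Noise floor in electrons, eq. (2)."""
    var = sigma_r**2 + q * i_dc * t_int / C_int**2 + (s_off * Q_max / C_int)**2
    return (C_int / q) * math.sqrt(var)
```

With these numbers the sketch reproduces a noise floor of roughly 63 e− (the paper quotes 65 e−) and a peak SNR near 47 dB at the saturation photocurrent.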



Fig. 2. Calculated SNR for the reference sensor.

Fig. 3. Sensitivity of the reference sensor.

The "knee" in the SNR curve (Fig. 2) is caused by the offset FPN component, which is taken into account in (1). The sensitivity S (depicted in Fig. 3) of the reference sensor is given by

$$S = \frac{dV_{sense}}{di_{ph}} = \frac{d}{di_{ph}}\left(\frac{i_{ph}t_{int}}{C_{int}}\right) = \frac{t_{int}}{C_{int}}. \quad (3)$$

In all of the DR calculations, the following basic DR definition is used:

$$\mathrm{DR} = 20\log_{10}\frac{i_{max}}{i_{min}} \quad (4)$$

where imax is the largest detectable photogenerated current and imin is the smallest detectable current, defined by the NF. For all of the reviewed solutions, the maximum DR extension is represented by the DR factor (DRF) [12]. This factor represents the DR extension of each particular WDR algorithm. The calculation of the DRF is unique to each WDR algorithm. In this paper, we present it in decibels in order to bring it to the same scale as the DR. For some algorithms, it is calculated as a ratio of integration times, whereas, for others, it is defined as a ratio of integration capacitances. In some cases, it is a combination of both ratios. Power consumption for each category of WDR sensors is assessed qualitatively; a quantitative analysis of the power consumption parameter is out of the scope of this paper.

III. CALCULATION OF SNR OF WDR SENSORS

A. Companding Sensors (Logarithmic Compressed Response Photodetectors)

Fig. 4. Schematic diagram of a logarithmic response pixel.

Pixels with a compressed response, as depicted in Fig. 4, achieve an extremely high DR (over 100 dB). Their companding ability enables the representation of a wide range of light intensities by a relatively small signal voltage swing [13]–[16]. The companding characteristic is achieved through the logarithmic relation between the current flowing within the pixel and the output voltage:

$$i_{In} = i_{ph} + i_{dc} = I_0\,e^{\frac{q(V_{bias\_tr} - V_{sense} - V_{th\_tr})}{kT}} \quad (5)$$

where iIn is the pixel current, composed of the photogenerated current iph and the dark current idc; I0 is the value of iIn when transistor Mtr is at the onset of the subthreshold operation region; and k is the Boltzmann constant. The voltages Vbias_tr and Vth_tr are the bias and threshold voltages of Mtr, respectively, and Vsense is the voltage at the sensing node "sense" of the pixel. We can express the voltage signal caused by light falling onto a pixel, with reference to the signal caused by the dark current only, as

$$V_{sig} = \frac{kT}{q}\ln\left(\frac{i_{ph}}{i_{dc}}\right). \quad (6)$$

The shot noise variance in the logarithmic pixel is given by

$$\overline{V}_{shot}^{\,2} = \frac{kT}{2C_{PD}} \quad (7)$$

where CPD is the capacitance of the photodiode. By using (6) for the effective signal energy calculation and by summing all the noise sources (readout, FPN, and shot noise), we get the following expression:

$$\mathrm{SNR} = 20\log_{10}\frac{\frac{kT}{q}\ln\left(\frac{i_{ph}}{i_{dc}}\right)}{\sqrt{\sigma_r^2 + \frac{kT}{2C_{PD}} + \sigma_{PRNU\_log}^2\,i_{ph}^2 + \sigma_{Off\_FPN}^2\,V_{th\_tr}^2}} \quad (8)$$

where σr is the voltage deviation caused by the readout process, and σPRNU_log and σOff_FPN are the gain and offset FPN components, respectively. Note that σPRNU_log is a function of the photogenerated signal itself, since the sensor transfer function is nonlinear. The signal swing of the logarithmic pixel is equal to Vth_tr; thus, this parameter is the reference for the

offset FPN component calculation. Note that the logarithmic pixel depicted in Fig. 4 cannot be calibrated; therefore, the offset FPN is considered to be 0.3%. Fig. 5 shows the calculated results for the logarithmic sensor SNR; the dashed line denotes the SNR of the reference sensor. The SNR calculation was performed using the following values: T = 300 K, σr = 96 μV, CPD = 10 fF, Vth_tr = 0.5 V, σPRNU_log = 0.0026 V/iph, σOff_FPN = 0.3%, and NF = 109 e−.

Fig. 5. Calculated SNR for the logarithmic sensor.

Fig. 6. Sensitivity of the logarithmic sensor.

The NF for the logarithmic pixel is calculated by

$$\mathrm{NF} = \frac{C_{PD}}{q}\sqrt{\sigma_r^2 + \frac{kT}{2C_{PD}} + \sigma_{Off\_FPN}^2\,V_{th\_tr}^2}. \quad (9)$$

The logarithmic pixel's DRF (in relation to the reference pixel) can be expressed as follows:

$$\mathrm{DRF} = 20\log_{10}\frac{I_0}{i_{ref\_sat}} \quad (10)$$

where iref_sat is the saturation current of the reference pixel, given by

$$i_{ref\_sat} = \frac{C_{int}\,\Delta V}{t_{int}} \quad (11)$$

where ΔV is the saturation signal. Substituting the appropriate values into (10) and (11), we get a DRF of approximately 50 dB. Although logarithmic pixels provide a wide DR, they suffer from increased FPN, reduced sensitivity, and reduced signal swing. The offset FPN component can be partially removed through the calibration procedure reported in [16]. This procedure cancels out the threshold voltage variations, but other offset components, such as dark-current spatial variations across the pixel array, remain untouched. An improved version of this calibration procedure is proposed in [15]. Complex recursive algorithms that reduce the offset and even the gain components of FPN are reported in [17], but these algorithms are very difficult to implement on the same die as the pixel array.
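A hedged Python sketch of the logarithmic-pixel figures, (6)–(8) and (10)–(11). Two assumptions of ours are flagged: the text's σPRNU_log = 0.0026 V/iph is interpreted so that the gain-FPN variance term is the constant (0.0026 V)², and I0, which the paper never states, is set to a hypothetical 100 pA purely to illustrate how a ~50-dB DRF arises:

```python
import math

q, k, T = 1.602e-19, 1.381e-23, 300.0
C_pd = 10e-15       # photodiode capacitance [F]
i_dc = 0.1e-15      # dark current [A]
sigma_r = 96e-6     # readout noise [V]
v_th = 0.5          # threshold voltage = output swing [V]
s_off = 0.003       # offset FPN, 0.3% of the swing

def log_snr_db(i_ph):
    """Logarithmic-pixel SNR, eq. (8). Gain-FPN term taken as constant (0.0026 V)^2,
    our reading of the text's sigma_PRNU_log = 0.0026 V/iph."""
    signal = (k * T / q) * math.log(i_ph / i_dc)          # eq. (6)
    noise_var = (sigma_r**2 + k * T / (2 * C_pd)          # readout + shot, eq. (7)
                 + 0.0026**2 + (s_off * v_th)**2)         # gain + offset FPN
    return 20 * math.log10(signal / math.sqrt(noise_var))

# DRF, eqs. (10)-(11). I0 = 100 pA is an assumption, not a value from the paper.
C_int, t_int, dV = 10e-15, 30e-3, 1.0
i_ref_sat = C_int * dV / t_int                  # eq. (11): reference-pixel saturation current
drf_db = 20 * math.log10(100e-12 / i_ref_sat)   # eq. (10)
```

With these numbers the SNR stays in the mid-30-dB range over many decades of photocurrent, mirroring the flat, low curve of Fig. 5, and drf_db comes out near 50 dB.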

The loss of sensitivity, and the resulting loss of information, can clearly be seen when we derive the sensitivity of the logarithmic sensor:

$$S = \frac{dV_{sense}}{di_{ph}} = \frac{d}{di_{ph}}\left(\frac{kT}{q}\ln\left(\frac{i_{ph}}{i_{dc}}\right)\right) = \frac{kT}{q\,i_{ph}}. \quad (12)$$

From Fig. 6, we can tell that, although the sensor sensitivity is greater than zero throughout five decades, it drops virtually to zero after only three decades, making high light intensities very difficult to distinguish. As previously mentioned, the threshold voltage Vth limits the output voltage swing of a conventional logarithmic pixel to a few hundred millivolts. The problem of reduced swing in the logarithmic pixel is addressed in [18], where a load transistor is added within the pixel. While increasing the output voltage swing, this solution results in increased gain FPN. Moreover, logarithmic pixels suffer from image lag. This undesired lag effect is most pronounced at low light conditions, and it is caused by a long settling time constant that can exceed the frame time.

B. Multimode Sensors

Multimode sensors have multiple modes of operation, exploiting the advantages of each mode. At low illumination levels, they operate as conventional linear pixels, whereas, at high illuminations, they operate as companding pixels. A possible implementation of a multimode sensor is presented in [19]. In this implementation, the pixel can operate both in a conventional integration mode and in a current readout mode; the final logarithmic conversion of the current to voltage is performed outside the pixel. Another possible implementation is presented in [20], where the sensor passes from linear to logarithmic mode by itself during the frame. In the pixel depicted in Fig. 7, on the other hand, the transfer function can be switched between logarithmic and linear modes.
Consequently, the pixel can be sampled nondestructively in either linear or logarithmic mode and read out using an analog buffer, as presented in [21] and [22]. In the linear integration mode of operation, the SNR is given by (1).



Fig. 7. Linear–logarithmic multimode pixel with voltage readout.

Fig. 9. Sensitivity of the multimode sensor.

The sensitivity of the multimode sensor is depicted in Fig. 9. The NF is assessed using (2). As can be seen, after the pixel switches to logarithmic operation, the sensitivity drops by several orders of magnitude.

C. Clipping (Capacitance Adjustment) Sensors

Fig. 8. (a) Calculated SNR of single-mode sensors (logarithmic and linear). (b) Combined SNR of the multimode sensor.

The SNR of a pixel operating in a continuous logarithmic mode can be expressed using (8). Therefore, by deliberately "stitching" the regions of sensor operation, we can achieve an optimal SNR for each of the intensities within the DR. Fig. 8(a) shows the SNR curves of the single-mode sensors (the logarithmic and the linear modes of operation), whereas Fig. 8(b) presents the combined SNR characteristic of a multimode sensor that can operate in both linear and logarithmic modes. The criterion for switching between operating modes is derived from comparing the output signals received at the same illumination levels. The linear mode is clearly preferable over the logarithmic one prior to pixel saturation; once the pixel has saturated, the logarithmic mode is preferable. The DR extension of a multimode sensor is achieved in the logarithmic mode; hence, the DR extension is expressed by (10). The following values were used for the calculation: T = 300 K, σr = 100 μV, Qmax = 62 500 e−, NF = 65 e−, Cint = 10 fF, tint = 30 ms; linear mode: σPRNU = 0.1%, σOff_FPN = 0.1%; logarithmic mode: σPRNU_log = 0.0026 V/iph, σOff_FPN = 0.3%.
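The "stitching" of the two modes can be sketched in Python: evaluate (1) and (8) with the values above and switch at the linear-mode saturation current, exactly the criterion described in the text. As before, the constant gain-FPN term in the logarithmic branch is our interpretation of σPRNU_log:

```python
import math

q, k, T = 1.602e-19, 1.381e-23, 300.0
C_int, t_int, i_dc = 10e-15, 30e-3, 0.1e-15
Q_max = 62500 * q          # full-well charge [C]
i_sat = Q_max / t_int      # photocurrent that saturates the linear mode [A]
sigma_r = 100e-6           # readout noise [V]

def lin_snr_db(i_ph, s_prnu=0.001, s_off=0.001):
    """Linear-mode SNR, eq. (1)."""
    sig = i_ph * t_int / C_int
    var = (sigma_r**2 + q * (i_ph + i_dc) * t_int / C_int**2
           + (s_prnu * sig)**2 + (s_off * Q_max / C_int)**2)
    return 20 * math.log10(sig / math.sqrt(var))

def log_snr_db(i_ph, s_off=0.003, v_th=0.5, c_pd=10e-15):
    """Logarithmic-mode SNR, eq. (8); gain-FPN term held constant at (0.0026 V)^2."""
    sig = (k * T / q) * math.log(i_ph / i_dc)
    var = sigma_r**2 + k * T / (2 * c_pd) + 0.0026**2 + (s_off * v_th)**2
    return 20 * math.log10(sig / math.sqrt(var))

def multimode_snr_db(i_ph):
    """Switching criterion from the text: linear until saturation, logarithmic beyond."""
    return lin_snr_db(i_ph) if i_ph <= i_sat else log_snr_db(i_ph)
```

Below saturation the linear branch always wins, reproducing the shape of Fig. 8(b): the linear curve up to its peak, then the flatter logarithmic curve beyond it.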

Clipping sensors are sensors in which a capacity well adjustment method is applied: the integrating capacitance of a given pixel is increased over the integration period. For this algorithm, the DR extension is a function of the pixel capacitance and of the amount of time this capacitance integrates the charge. In this technique, the whole integration time can be divided into a certain number of slots, and at the beginning of each slot, the integrating capacitance is set to a higher value. Assuming that the integrating capacitance is reset at every integration slot, the corresponding per-slot DRFi can be expressed as follows:

$$\mathrm{DRF}_i = 20\log_{10}\left(\frac{\sum_{n=0}^{i}C_n}{\sum_{n=0}^{i-1}C_n}\cdot\frac{t_{i-1}}{t_i}\right),\qquad C_0 = C_{int},\quad 1\le i\le M \quad (13)$$

where Cn is the added capacitance during the time interval [tn, tn+1], ti is the ith integration time slot, and M is the number of capacitance adjustment steps. Therefore, the overall DRF is the combination of all the aforementioned extension factors:

$$\mathrm{DRF} = 20\log_{10}\left[\left(\frac{C_0+C_1}{C_0}\cdot\frac{t_0}{t_1}\right)\left(\frac{C_0+C_1+C_2}{C_0+C_1}\cdot\frac{t_1}{t_2}\right)\cdots\left(\frac{C_0+\cdots+C_M}{C_0+\cdots+C_{M-1}}\cdot\frac{t_{M-1}}{t_M}\right)\right] = 20\log_{10}\left(\frac{\sum_{n=0}^{M}C_n}{C_0}\cdot\frac{t_0}{t_M}\right). \quad (14)$$
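The telescoping of (13) into (14) can be checked numerically. The capacitance/slot schedule below is hypothetical, chosen only to exercise the formulas (the paper gives no such schedule for the generic case):

```python
import math

def drf_step_db(caps, times, i):
    """Per-slot DR extension, eq. (13)."""
    return 20 * math.log10(sum(caps[:i + 1]) / sum(caps[:i]) * times[i - 1] / times[i])

def drf_total_db(caps, times):
    """Overall DRF, eq. (14): the per-slot factors of (13) telescope."""
    return 20 * math.log10(sum(caps) / caps[0] * times[0] / times[-1])

# Hypothetical schedule: C0 = 10 fF plus two added capacitances, shrinking time slots.
caps = [10e-15, 40e-15, 150e-15]    # C0, C1, C2 [F]
times = [16e-3, 8e-3, 4e-3]         # slot durations t0, t1, t2 [s]
```

Summing drf_step_db over i = 1, 2 gives exactly drf_total_db, confirming that the product in (14) collapses to the single capacitance-and-time ratio.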



Fig. 10. Schematic diagram of a pixel with lateral overflowing capacitance (LOFIC).

If the pixel capacitance is reset only at the beginning of a new frame, then relations (13) and (14) become independent of the time ratios. A possible implementation of this method was demonstrated in [23], where the sensing-node depth potential was controlled by an external signal applied to each pixel within the APS array. Another solution is proposed in [24], where two images are captured using two different integration capacitances within the pixel. Afterward, each capture is multiplied by two different gains, thus producing four images of different sensitivities. An alternative solution is to use a pixel that accommodates a lateral overflow integration capacitance (LOFIC) [25]–[27], as shown in Fig. 10. The idea is to collect the photogenerated charges that overcome the potential barrier of the ConLOFIC switch, which separates the photosensing area from the pixel sensing node. Thus, two signals are read out from the pixel: the first is the signal completely transferred to the integrator capacitance when the ConLOFIC switch is disconnected from the LOFIC capacitance; the second is the signal stored on both the LOFIC and integrator capacitances when they are connected by the ConLOFIC switch. In this case, the DRF expression simplifies to

$$\mathrm{DRF} = 20\log_{10}\frac{C_{int}+C_{LOFIC}}{C_{int}} \quad (15)$$

where Cint and CLOFIC are the integrator and overflow capacitances, respectively. The "Reset_L" signal is used to reset the LOFIC capacitance before it can be connected to the "sense" node. The SNR of a pixel that does not utilize the LOFIC capacitance is given by (1). The SNR of a pixel that utilizes the LOFIC capacitance is given by

$$\mathrm{SNR} = 20\log_{10}\frac{\frac{i_{ph}t_{int}}{C_{int}+C_{LOFIC}}}{\sqrt{\sigma_r^2 + \frac{q(i_{ph}+i_{dc})t_{int}}{(C_{int}+C_{LOFIC})^2} + \frac{\sigma_{PRNU}^2 i_{ph}^2 t_{int}^2 + \sigma_{Off\_FPN}^2\,\tilde{Q}_{max}^2}{(C_{int}+C_{LOFIC})^2} + \frac{2kT}{C_{int}+C_{LOFIC}}}} \quad (16)$$

where Q̃max denotes the full well capacity of the combined capacitance, which consists of the initial and the overflowing capacitances. If a correlated double sampling (CDS) procedure is performed, then the "kTC" component can be removed from (16). However, in this case, the CDS operation requires a memory circuit for storing the reset pixel level.

Fig. 11. (a) Calculated SNR of signals accumulated with and without LOFIC. (b) Combined SNR response.

Fig. 11(a) presents the SNR calculation of each signal separately: with LOFIC and without LOFIC (no CDS). The combined SNR calculation results are shown in Fig. 11(b). The following values were used in the calculation presented in Fig. 11: T = 300 K, Vswitch = 0.8Vsat = 1.6 V, tint = 30 ms, NF = 95 e−; regular mode: Cint = 10 fF, Qmax = 62 500 e−, σPRNU = 0.1%, σOff_FPN = 0.15%, σr = 98 μV; LOFIC mode: Cint = 10 fF, CLOFIC = 200 fF, Q̃max = 1.25 Me−, σPRNU = 0.1%, σOff_FPN = 0.15%, σr = 5 μV.

The pixel in Fig. 10 can be formed using five transistors. Its offset FPN is slightly larger than that of the reference 3T pixel. The NF is assessed using (2). The DRF calculated using (15) is equal to 26 dB. Vswitch is the boundary discharge voltage for the signal processing: when the pixel does not pass Vswitch, the preferred signal is the one with the higher sensitivity; otherwise, the output signal is the voltage accumulated on the Cint and CLOFIC capacitances. The maximum SNR (not in the logarithmic scale) is improved beyond √Qmax, since the amount of charge that can be utilized for signal evaluation is
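As a sanity check on the LOFIC DR extension (15) and the sensitivity penalty it implies, a small Python sketch with the capacitance values from the text (the variable names are ours):

```python
import math

C_int = 10e-15       # integrator capacitance [F]
C_lofic = 200e-15    # lateral overflow capacitance [F]
t_int = 30e-3        # integration time [s]

# Eq. (15): DR extension from growing the well by C_LOFIC.
drf_db = 20 * math.log10((C_int + C_lofic) / C_int)   # ~26.4 dB, matching the ~26 dB in the text

# Sensitivity before and after the LOFIC is engaged, eqs. (3) and (17):
s_regular = t_int / C_int               # [V/A]
s_lofic = t_int / (C_int + C_lofic)     # [V/A] -- 21x lower, the price of the wider well
```

The 21x sensitivity drop is exactly the capacitance ratio, which is why the combined sensitivity curve of Fig. 12 shows a step down at the switching point.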



Fig. 12. Calculated sensitivity for the LOFIC sensor.

increased according to the capacitance ratio (15). Moreover, it increases even further when the column capacitance, which is shared by all pixels in the same column, is utilized for accumulating the charge flowing out of the chosen pixel [19]. The sensitivity of such a pixel varies according to the capacitance utilized for signal processing. In regular mode, the sensitivity is given by (3). When the LOFIC is utilized, the sensitivity is evaluated by

$$S = \frac{dV_{sense}}{di_{ph}} = \frac{d}{di_{ph}}\left(\frac{i_{ph}t_{int}}{C_{int}+C_{LOFIC}}\right) = \frac{t_{int}}{C_{int}+C_{LOFIC}}. \quad (17)$$

The combined sensitivity for the LOFIC sensor is depicted in Fig. 12.

Another WDR method that clips the pixel response is presented in [28]. In this method, multiple partial resets are applied to the pixel during the frame, each at a certain time point. Each reset event sets the pixel potential to some intermediate value, and the values of the intermediate resets are sorted in descending order. Consequently, brighter pixels are clipped by a certain reset, whereas darker pixels continue to integrate untouched. The difference in time and in value between adjacent reset events sets the DRF for that integration slot as follows:

$$\mathrm{DRF}_i = 20\log_{10}\left(\frac{Q_i-Q_{i-1}}{t_i-t_{i-1}}\cdot\frac{t_{int}}{Q_{max}}\right) \quad (18)$$

where Qi is the maximal charge that can be accumulated until time ti. The overall DRF is given by

$$\mathrm{DRF} = 20\log_{10}\left(\frac{Q_{max}-Q_M}{t_{int}-t_M}\cdot\frac{t_{int}}{Q_{max}}\right) \quad (19)$$

Fig. 13. Calculated SNR for the multiple-partial-reset sensor.

where QM and tM are the charge and the time point before the last integration slot, respectively. The SNR for such a sensor can be calculated as

$$\mathrm{SNR} = 20\log_{10}\frac{\frac{Q_i + i_{ph}(t_{int}-t_i)}{C_{int}}}{\sqrt{\sigma_r^2 + \frac{q\left((i_{ph}+i_{dc})(t_{int}-t_i)+Q_i\right)}{C_{int}^2} + \frac{\sigma_{PRNU}^2\left(Q_i^2+i_{ph}^2(t_{int}-t_i)^2\right)+\sigma_{Off\_FPN}^2 Q_{max}^2}{C_{int}^2} + \frac{2kT}{C_{int}}}} \quad (20)$$

The calculated SNR is depicted in Fig. 13. The calculation is performed assuming the following values: Qmax = 62 500 e−; Q0 = 0, Q1 = Qmax/2, Q2 = 3Qmax/4, Q3 = 7Qmax/8, Q4 = Qmax; t0 = 0, t1 = 3tint/4, t2 = 15tint/16, t3 = 511tint/512, t4 = tint; Cint = 10 fF, tint = 30 ms, T = 300 K, σPRNU = 0.1%, σOff_FPN = 0.1%, σr = 96 μV, and NF = 86 e−.

The pixel utilized in this method is very simple, and its structure is identical to the reference 3T pixel in Fig. 1. However, the NF is higher than that of the reference sensor, since the thermal noise signals induced by the partial resets are not correlated:

$$\mathrm{NF} = \frac{C_{int}}{q}\sqrt{\sigma_r^2 + \frac{qi_{dc}t_{int}}{C_{int}^2} + \sigma_{Off\_FPN}^2\frac{Q_{max}^2}{C_{int}^2} + \frac{2kT}{C_{int}}}. \quad (21)$$

The DRF is calculated using (19) to be 36 dB. The partial-reset method extends the DR without the need for an additional in-pixel capacitance. However, it suffers from low sensitivity:

$$S = \frac{dV_{sense}}{di_{ph}} = \frac{d}{di_{ph}}\left(\frac{i_{ph}(t_{int}-t_i)}{C_{int}}\right) = \frac{t_{int}-t_i}{C_{int}}. \quad (22)$$

The decrease in sensitivity can be seen in Fig. 14.
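The 36-dB figure follows directly from (19) with the reset schedule above; a short Python sketch reproduces it (the schedule is the one quoted in the text, reassembled from the extraction):

```python
import math

t_int = 30e-3
Q_max = 62500.0    # full well [e-]
# Reset schedule from the text: clip levels Q_i reached at times t_i; t4 = t_int ends the frame.
Q = [0.0, Q_max / 2, 3 * Q_max / 4, 7 * Q_max / 8, Q_max]
t = [0.0, 3 * t_int / 4, 15 * t_int / 16, 511 * t_int / 512, t_int]

def drf_db():
    """Overall DRF, eq. (19): Q_M, t_M belong to the last intermediate reset."""
    Q_M, t_M = Q[3], t[3]
    return 20 * math.log10((Q_max - Q_M) / (t_int - t_M) * t_int / Q_max)
```

The last slot squeezes the top eighth of the well into 1/512 of the frame, so the ratio is 512/8 = 64, i.e., 20 log10(64) ≈ 36.1 dB.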



Fig. 14. Calculated sensitivity for the multiple-partial-reset sensor.

D. Frequency-Based Sensors

In these sensors, light intensity is converted into pulse frequency [29]–[32]. The conceptual diagram of this type of pixel is depicted in Fig. 15. When the voltage at the "sense" node passes below the reference voltage Vref, the comparator generates a pulse that updates the data in the "Digital Storing Unit" block and resets the integrator by means of the feedback signal "Self Reset."

Fig. 15. Conceptual diagram of a light-to-frequency converting pixel.

The frequency of spikes generated by a certain light intensity can be calculated as follows:

$$f = \frac{i_{ph}+i_{dc}}{C_{int}(V_{reset}-V_{ref})} = \frac{i_{ph}+i_{dc}}{C_{int}\,\Delta V} \quad (23)$$

with Vreset representing the reset voltage of the photodiode and Vref the reference voltage; ΔV is the difference between Vreset and Vref. At the end of the frame, the digital data Dout are read out for final processing. In this way, the pixel outputs data already in the digital domain. It can be assessed that implementing the pixel depicted in Fig. 15 requires at least 12 transistors. Note that, in these sensors, a light intensity that does not cause the pixel to discharge beyond Vref is not recognizable. Thus, the DR of such a sensor is defined by the ratio of the maximal counter switching rate fmax to the minimal one, which can be expressed by means of the nominal integration time tint:

$$\mathrm{DR} = 20\log_{10}\frac{f_{max}}{1/t_{int}} = 20\log_{10}(f_{max}t_{int}). \quad (24)$$

Assuming fmax = 5 MHz (similar numbers are quoted in [30]), the DR is equal to 105 dB. The DRF for light-to-frequency sensors is not calculated, since these sensors detect saturation events only. The SNR at the input of the comparator is given by

$$\mathrm{SNR} = 20\log_{10}\frac{\Delta V}{\sqrt{\sigma_r^2 + \frac{q\,\Delta V}{C_{int}} + \tilde{\sigma}_{PRNU}^2 + \sigma_{Off\_FPN}^2(\Delta V)^2 + \frac{kT}{C_{int}}}} \quad (25)$$

where the gain FPN component is a constant given by

$$\tilde{\sigma}_{PRNU}^2 = \sigma_{PRNU}^2\frac{i_{ph}^2 t_{sat}^2}{C_{int}^2} = \sigma_{PRNU}^2(\Delta V)^2 \quad (26)$$

where tsat is the time it took the pixel to discharge from Vreset to Vref. The NF is calculated using (21), with the kTC component taken into account only once. The assessed noise is 210 electrons. It can easily be seen from (25) that the SNR curve of the frequency-based sensor is a straight line throughout the whole DR. If ΔV is set to its maximum possible value, the SNR reaches its peak value for every light intensity within the DR. This is an advantage over WDR sensors in which SNR dips exist in the extended DR due to the sensitivity drop. However, setting ΔV to its maximum limits the sensor's imaging ability to highly illuminated scenes; lowering ΔV enables imaging of darker scenes, but at the expense of a lower SNR. Saturation detection occurs with reference to a time-constant voltage; therefore, the sensitivity, as defined in this paper, is not calculated for this group of sensors.

E. TTS Sensors

In these imagers, the information is encoded in the time at which the pixel is detected as saturated. Various algorithms based on this principle have been presented in the literature [33]–[40]. A general schematic diagram of a TTS pixel is presented in Fig. 16.

Fig. 16. Schematic diagram of a TTS pixel.

Each pixel is connected to a global varying reference voltage, generated by the voltage ramp Vramp. When the "sense" node exceeds this voltage, the pixel comparator fires a pulse Vcomp that indicates pixel saturation. This pulse causes a time-variant signal encoding the time of saturation (Time_ref) to be written into the "Saturation Event Memory." By knowing the time until the first firing event and the reference voltage at the time of the event, the final image can be
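A hedged numeric sketch of (23) and (24) in Python. Note that with fmax = 5 MHz and tint = 30 ms, (24) evaluates to about 104 dB, close to the ~105 dB quoted in the text:

```python
import math

q = 1.602e-19
C_int, dV = 10e-15, 1.0    # integration capacitance [F] and Vreset - Vref [V]
t_int = 30e-3              # nominal integration time [s]
f_max = 5e6                # maximal counter switching rate [Hz], as assumed in the text

def spike_freq(i_ph, i_dc=0.1e-15):
    """Eq. (23): self-reset rate of the light-to-frequency pixel."""
    return (i_ph + i_dc) / (C_int * dV)

# Eq. (24): the DR spans from one event per frame (1/t_int) up to f_max.
dr_db = 20 * math.log10(f_max * t_int)
```

For the reference-pixel saturation current of roughly 0.33 pA, spike_freq returns about 33 Hz, i.e., such a pixel would self-reset ~33 times per second rather than saturate.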


Fig. 17. Calculated SNR for the TTS sensor.

reconstructed. In the particular case when Vcomp is kept constant and no firing event denoting pixel saturation has occurred, the light intensity can be retrieved conventionally by reading the pixel analog value (mantissa) through a buffer. Generally, the highest achievable current is proportional to the fastest rate of voltage ramp change. Therefore, the DR extension here is

DRF = 20 log10 (tint/ttoggle_min)   (27)

where ttoggle_min is the minimal detectable saturation time. Its value is constrained by the system performance and, particularly, by the properties of the ADC within the pixel. Equations (28) and (25) describe the SNR characteristics of this algorithm for the case of a piecewise reference voltage with infinite resolution: for pixels that saturate (i.e., reach ΔV) within tint, the SNR is constant, whereas the less illuminated pixels integrate throughout the whole integration period

SNR = 20 log10 { (iph tint/Cint) / √[ σr² + q(iph + idc)tint/Cint² + (σPRNU² iph² tint² + σOff_FPN² Qmax²)/Cint² + 2kT/Cint ] }.   (28)

If a monotonically rising ramp, which climbs from 0 V to ΔV during tint, is applied, the SNR will be calculated with (28) for all illumination levels. The value of tint in (28) is then replaced with ttoggle (the comparator switching time), given by

ttoggle = ΔV / (ΔV/tint + iph/Cint).   (29)
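Under the ramp model of (29), the comparator switching time and the resulting DR extension (27) can be sketched numerically; the circuit constants below are the same illustrative values used in the SNR calculation, and the photocurrent is an assumed example:

```python
import math

DV = 1.0        # ΔV, reference swing (V)
T_INT = 30e-3   # nominal integration time (s)
C_INT = 10e-15  # integration capacitance (F)

def t_toggle(i_ph: float) -> float:
    """Comparator switching time per (29) for a photocurrent i_ph (A)."""
    return DV / (DV / T_INT + i_ph / C_INT)

def drf_db(t_toggle_min: float) -> float:
    """DR extension per (27)."""
    return 20.0 * math.log10(T_INT / t_toggle_min)

# A dark pixel toggles only at the end of integration...
print(t_toggle(0.0))           # 0.03 s, i.e., t_int
# ...while a bright pixel (assumed 1 nA) toggles almost immediately.
print(t_toggle(1e-9))          # ≈ 1e-5 s
print(drf_db(t_toggle(1e-9)))  # ≈ 69.5 dB of extension
```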

The calculated results are depicted in Fig. 17. The following values were used in the SNR calculation shown in Fig. 17: Qmax = 62 500 e−, σr = 190 μV, σPRNU = 0.3%, ΔV = 1 V, Cint = 10 fF, σOff_FPN = 0.3%, T = 300 K, NF = 206 e−, tint = 30 ms.

Fig. 18. Calculated sensitivity for the TTS sensor.

Due to the relatively complex pixel structure, which consists of at least eight transistors, these SNR simulations were performed with higher FPN components and a higher readout variance. The increased gain FPN prevents the sensor from reaching as high an SNR as the reference sensor, although the differences in the maximum SNR values are minor. The sensitivity of a TTS sensor depends on the shape of the time-varying signal Vramp (Fig. 16) and is calculated as

S = dVsense/diph = d(iph ttoggle/Cint)/diph = ttoggle/Cint + (iph/Cint)(dttoggle/diph).   (30)

The NF is calculated using (2). From Fig. 17, we can conclude that the DRF in this method exceeds 50 dB. From Fig. 18, where the sensitivity of the TTS sensor is depicted, it can easily be understood that, for low light intensities, the preferable way of signal processing is the voltage readout by means of the buffer (Fig. 16), since the switching time converges to tint (29). For the last two decades of light intensities, which are "squeezed" into a very small voltage range and for which the switching time becomes inversely proportional to iph, the preciseness of the signal processing depends on the resolution of the saturation time representation.
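The closed form (30) can be cross-checked against a numerical derivative of the ramp model (29); the constants are the same illustrative values as above, and the 0.1-nA photocurrent is an assumed operating point:

```python
# Cross-check (30): S = t_toggle/C_int + (i_ph/C_int) * dt_toggle/di_ph,
# where V_sense = i_ph * t_toggle / C_int and t_toggle follows (29).
DV, T_INT, C_INT = 1.0, 30e-3, 10e-15  # assumed ΔV, t_int, C_int

def t_toggle(i_ph):
    return DV / (DV / T_INT + i_ph / C_INT)

def v_sense(i_ph):
    return i_ph * t_toggle(i_ph) / C_INT

def sensitivity_numeric(i_ph, h=1e-15):
    # Central-difference estimate of dV_sense/di_ph.
    return (v_sense(i_ph + h) - v_sense(i_ph - h)) / (2 * h)

def sensitivity_analytic(i_ph, h=1e-15):
    # Right-hand side of (30), with dt_toggle/di_ph itself taken numerically.
    dt = (t_toggle(i_ph + h) - t_toggle(i_ph - h)) / (2 * h)
    return t_toggle(i_ph) / C_INT + (i_ph / C_INT) * dt

i = 1e-10  # assumed 0.1-nA photocurrent
print(sensitivity_numeric(i), sensitivity_analytic(i))  # the two agree
```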

F. Sensors With Global Control Over the Integration Time

The concept of global control over the integration time is to sample the pixel array after a predetermined integration time and to reset the whole array regardless of the amount of charge each pixel has integrated until then [41]–[53]. There are several solutions implementing this concept. One of them is the conventional multiple-capture algorithm. In this algorithm, the whole sensor integrates for several different exposure times regardless of the incoming light intensity [41]–[44]. The final image can be reconstructed by choosing the value closest to saturation for each pixel. The general structure of a pixel used to implement the conventional multiple-capture algorithm is the same as that of the reference pixel (see Fig. 1). It operates identically to the reference pixel for each exposure.
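A minimal sketch of this selection rule follows; the exposure list matches the simulation values quoted below, while the full-well level and the sample capture values are illustrative:

```python
# Multiple-capture reconstruction sketch: for each pixel, pick the longest
# exposure that did not saturate, then rescale to the longest exposure so
# all pixels share one linear scale.
EXPOSURES = [20e-3, 7.5e-3, 1.8755e-3, 469e-6, 117e-6]  # seconds, longest first
FULL_WELL = 62500.0  # electrons (Q_max)

def reconstruct(captures):
    """captures[i] = signal (electrons) read out after EXPOSURES[i]."""
    for t_exp, q in zip(EXPOSURES, captures):
        if q < FULL_WELL:  # closest-to-saturation unsaturated capture
            return q * (EXPOSURES[0] / t_exp)
    # Every capture saturated: fall back to the scaled shortest capture.
    return captures[-1] * (EXPOSURES[0] / EXPOSURES[-1])

# A dim pixel keeps its long-exposure value; a bright pixel that saturates
# the two longest captures is taken from the third and scaled up ~10.7x.
print(reconstruct([1000.0, 375.0, 93.8, 23.4, 5.9]))
print(reconstruct([62500.0, 62500.0, 50000.0, 12504.0, 3119.0]))
```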


Fig. 19. Calculated SNR characteristics for a sensor utilizing conventional multiple captures.


Fig. 20. Calculated sensitivity for a sensor utilizing conventional multiple captures.

To simulate the conventional multiple-capture algorithm, the sensor signal is calculated for five different exposure times, and, after each exposure, the image is read out. The DR extension is the ratio between the longest and the shortest exposures and is given by

DRF = 20 log10 (tmax_exposure/tmin_exposure)   (31)

where tmax_exposure and tmin_exposure are the maximal and the minimal exposure times, respectively. The SNR in this WDR scheme can be represented by (1), where tint is replaced with one of the exposure times tint_i. The SNR calculation results are presented in Fig. 19. The following values were used during the simulation depicted in Fig. 19: Qmax = 62 500 e−, T = 300 K, σr = 48 μV, Cint = 10 fF, σPRNU = 0.1%, σOff_FPN = 0.1%, NF = 65 e−, tint_0 = 20 ms, tint_1 = 7.5 ms, tint_2 = 1.8755 ms, tint_3 = 469 μs, tint_4 = 117 μs.
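With the exposure set above, the DR extension (31) and the SNR dips at the boundaries between successive captures can be evaluated directly:

```python
import math

# Exposure times from the simulation above (seconds).
T_EXP = [20e-3, 7.5e-3, 1.8755e-3, 469e-6, 117e-6]

# DR extension per (31): longest over shortest exposure.
drf = 20.0 * math.log10(max(T_EXP) / min(T_EXP))
print(round(drf, 1))  # ≈ 44.7 dB, i.e., the ~45 dB quoted in the text

# SNR dip at each capture boundary: 10*log10 of the ratio of successive
# exposures (the square root in the dip expression halves the 20-dB factor).
dips = [10.0 * math.log10(a / b) for a, b in zip(T_EXP, T_EXP[1:])]
print([round(d, 2) for d in dips])  # ≈ [4.26, 6.02, 6.02, 6.03]
```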

The NF is the same as for the reference sensor. The DRF is calculated to be 45 dB using (31). The SNR dips in Fig. 19 are explained by the expression

SNRdip ≅ 20 log10 √(tint_i/tint_i+1) = 10 log10 (tint_i/tint_i+1),   i = 0, . . . , 4   (32)

where tint_i and tint_i+1 are the successive exposure periods. The sensitivity for such a sensor is given by

S = dVsense/diph = d(iph tint_i/Cint)/diph = tint_i/Cint.   (33)

The sharp falloffs in the sensitivity, as seen in Fig. 20, are caused by the reduction in the capture times for the higher light intensities. It can easily be understood from (32) and (33) that, as the DR extension increases, the SNR dips and sensitivity reduction become more severe. Obviously, there is a tradeoff between the effort to increase the DR extension and the effort to keep the SNR drop as small as possible. One of the ways to minimize the SNR drop is to perform multiple captures at a high frequency [45], [47]. However, performing captures at a high frequency complicates the pixel structure and increases the number of A–D conversions. In the conventional multiple-capture method, the longest integration time is lower than the frame time due to the need to accommodate additional exposures within the same frame. Consequently, the SNR for the low-end illumination intensities is reduced. A possible solution for extending the longest integration time is the overlapping multiple-capture technique, presented in [51]–[53]. In this technique, two captures, one long and one short, occur at the same time. In other words, the captures overlap one another; thus, the charge integrated for a long period of time is not dumped before the short integration period occurs. In order to prevent an undesired charge dump during the frame, the whole array is reset not to the highest possible value but to some intermediate one.

Fig. 21. SNR for a sensor utilizing overlapping multiple captures.

The pixel that utilizes overlapping multiple captures is identical to the reference 3T pixel (Fig. 1). The SNR in Fig. 21


Fig. 22. Sensitivity of a sensor utilizing overlapping multiple captures.

Fig. 24. SNR for a sensor utilizing autonomous control over the integration time.

Fig. 23. Schematic diagram of a conditionally reset pixel.

is assessed in the same way as for the conventional multiple captures. The following values were used: Qmax = 62 500 e−, T = 300 K, σr = 48 μV, Cint = 10 fF, σPRNU = 0.1%, σOff_FPN = 0.1%, NF = 65 e−, tint_0 = 30 ms, tint_1 = 117 μs.

The large SNR drop is caused by a sharp decrease in the short capture time. This can be seen in Fig. 22, where the sensitivity is depicted. By cascading the DR extensions, the SNR dips can be reduced, albeit at the expense of lowering the frame rate [53].

G. Sensors With Autonomous Control Over the Integration Time

The sensors that belong to this group automatically adjust the integration time of each pixel according to the illumination intensity [54]–[60] by adding conditional reset circuitry to the pixels (Fig. 23). In such sensors, every pixel is nondestructively read out to logic circuitry at certain time points. This circuitry,

based on comparators, compares the pixel value with the time-invariant threshold voltage and generates a signal notifying whether the pixel has discharged below the threshold or not. If the pixel has discharged below the threshold, the signal generated by the logic will reset the pixel by means of the Logic Decision signal fed to the Conditional Reset switch (Fig. 23). If the pixel signal level is above the threshold, it continues to integrate without reset until the next frame. The number of resets performed on a certain pixel is stored in the memory unit and is updated every time the pixel is checked for reset. At the end of the frame, the pixel's analog value (mantissa) is read out and converted by the ADC. The digital information regarding the number of resets is read out of the memory and is used for autoscaling of the digitized mantissa value. A possible four-transistor implementation of a pixel with the conditional reset feature is presented in [61]. The DR extension is equal to the ratio between the nominal integration time and the minimal exposure time and thus can be calculated by (31). The calculation of the SNR in the current group of sensors depends on the mode of shutter they are based upon (rolling [54] or global [57], [58], [60]). For sensors that operate in rolling shutter mode, the SNR can be calculated by

SNR = 20 log10 { (iph tint_i/Cint) / √[ σr² + q(iph + idc)tint_i/Cint² + σPRNU² iph² tint_i²/Cint² + σOff_FPN² Qmax²/Cint² + 2kT/Cint ] }   (34)

where tint_i is the notation for the exposure time. The calculated result can be seen in Fig. 24. The SNR relation for a sensor operating in global shutter mode will be derived in a future work. The following values were used in the calculation: Qmax = 62 500 e−, T = 300 K, σr = 96 μV, Cint = 10 fF, σPRNU = 0.1%, σOff_FPN = 0.1%, NF = 86 e−, tint_0 = 30 ms, tint_1 = 7.5 ms, tint_2 = 1.8755 ms, tint_3 = 469 μs, tint_4 = 117 μs.
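A direct numerical evaluation of (34) with the parameter values above is sketched below; the dark current, which the calculation above does not specify, is assumed negligible. The peak SNR should land just under the shot-noise bound of 10 log10(Qmax) ≈ 48 dB:

```python
import math

Q_E = 1.602e-19   # electron charge (C)
K_B = 1.381e-23   # Boltzmann constant (J/K)

# Parameter values from the calculation above.
Q_MAX, T_KELVIN = 62500.0, 300.0
SIGMA_R, C_INT = 96e-6, 10e-15
SIGMA_PRNU, SIGMA_OFF = 0.001, 0.001
I_DC = 0.0        # dark current assumed negligible (not given in the text)

def snr_db(i_ph, t_int):
    """SNR per (34) for photocurrent i_ph (A) and exposure t_int (s)."""
    signal = i_ph * t_int / C_INT                           # volts
    noise_sq = (SIGMA_R**2
                + Q_E * (i_ph + I_DC) * t_int / C_INT**2    # shot noise
                + (SIGMA_PRNU * signal)**2                  # gain (PRNU) FPN
                + (SIGMA_OFF * Q_MAX * Q_E / C_INT)**2      # offset FPN
                + 2 * K_B * T_KELVIN / C_INT)               # double kTC (no CDS)
    return 20.0 * math.log10(signal / math.sqrt(noise_sq))

# SNR at full well for the nominal 30-ms exposure:
i_full_well = Q_MAX * Q_E / 30e-3  # photocurrent that just fills the well
print(snr_db(i_full_well, 30e-3))  # ≈ 47 dB
```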

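The autoscaling readout described in this section can be sketched as follows; the checkpoint schedule is an assumption chosen so that the effective exposures reproduce the tint_0 through tint_4 values listed above:

```python
# Autonomous-integration-time reconstruction sketch: each conditional reset
# restarts integration, so a pixel last reset at checkpoint k effectively
# integrates only from that checkpoint to the end of the frame.
T_FRAME = 30e-3
# Assumed checkpoint times (s); the effective exposures are then
# 30 ms, 7.5 ms, 1.8755 ms, 469 us, and 117 us.
CHECKPOINTS = [0.0, 22.5e-3, 28.1245e-3, 29.531e-3, 29.883e-3]

def reconstruct(mantissa: float, n_resets: int) -> float:
    """Autoscale the digitized mantissa by the effective-exposure ratio."""
    t_eff = T_FRAME - CHECKPOINTS[n_resets]  # integration since last reset
    return mantissa * (T_FRAME / t_eff)

# A dim pixel (never reset) keeps its value; a bright pixel reset at the
# last checkpoint is scaled up by a factor of about 256.
print(reconstruct(0.5, 0))
print(reconstruct(0.5, 4))
```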

TABLE II. Summary of the Assessed Parameters for Frequency-Based and TTS Sensors

TABLE III. Summary of the Assessed Parameters for Global- and Autonomous-Control-Over-the-Integration-Time Sensors

Fig. 25. Calculated sensitivity for the autonomous-control-over-the-integration-time sensor.

TABLE I. Summary of the Assessed Parameters for Conventional, Logarithmic, Linear–Logarithmic, Clipping, and Partial-Reset Sensors

The calculated DRF is equal to 48 dB, given by (31). The NF (21) in this pixel is higher than that in the conventional one due to the lack of ability to perform CDS. The analysis of the sensitivity depicted in Fig. 25 is the same as that for the multiple-capture method presented in (33).

IV. DISCUSSION

The purpose of this section is to discuss the pros and cons of the aforementioned WDR techniques (Tables I–III). The parameters that have been chosen for comparison are the NF, calculated in electrons; the minimal number of transistors required for pixel implementation (Minimal # of transistors); the absolute DR (DR), calculated in decibels; the DR extension factor (DRF), evaluated in decibels; and the sensitivity of the pixel (Sensitivity), assessed in volts per picoampere. These parameters are essential for a proper understanding of the tradeoffs associated with each category and provide the basis for comparison between the reviewed algorithms. For every WDR category, the assessed sensitivity is presented not as an absolute number but rather as a range of possible values taken from the appropriate figures. The top row in Table I presents the performance characteristics of a conventional sensor based on the reference pixel (Fig. 1). The conventional sensor (Conv.) has a low NF due to its

simple pixel structure. Since the only operation that needs to be performed after the integration is the A–D conversion, the control over the reference sensor is very simple. The conventional sensor requires only a minimal amount of circuitry, and its output signal processing is straightforward. Therefore, the power consumption of such a sensor is low. The spatial resolution can be assessed qualitatively by assuming that the pixel arrays of all sensors discussed previously occupy the same area and are implemented in the same technology. Thus, as the minimal number of transistors within the pixel rises, the spatial resolution decreases. Consequently, sensors based on pixels with three transistors have the best spatial resolution. The sensitivity of the conventional sensor is constant throughout the whole DR, since both the pixel capacitance and the integration time are constant. The logarithmic sensor (Log.), mentioned earlier, has a very simple pixel structure; however, its NF is higher than that of the reference pixel due to increased offset FPN. Generally, the power consumption of the logarithmic sensor is low, as it does not require any complicated circuitry at its periphery. The control over such a sensor is simple, but it becomes more complicated when calibration procedures for FPN reduction are utilized. A remarkable DR extension is achieved through compressing the sensor response, which maps a large range of light intensities to a very small voltage signal. This fact is clearly shown by the four-orders-of-magnitude difference in sensitivity between low and high photocurrents. We assess that the logarithmic sensor has the highest sensitivity of all WDR sensors in low-light conditions and one of the lowest in the extended DR. To conclude, the logarithmic sensor provides a very wide DR and high spatial resolution.
However, this sensor requires more complex color processing due to its nonlinear response and has a reduced sensitivity at high illumination intensities due to its companding ability. The pixel that is capable of operating in both linear and logarithmic modes (multimode) consists of a minimum of three transistors; therefore, the spatial resolution of a sensor based on such a pixel is high. Generally, the NF in multimode


sensors can be regarded as low since their pixels do not contain frequently switched circuitry, and a CDS procedure for kTC noise elimination can be performed. The low NF allows the multimode sensor to reach a very high DR (118 dB), while its DRF is the same as that of the pure logarithmic sensor. The control over a multimode sensor can be more complicated than that of the pure logarithmic sensor [16] due to the need to store a linearly integrated value and then to compare it to that received in the logarithmic mode of operation. In such a case, the frame rate is lower, and the power consumption is higher than that in the pure logarithmic sensor. In the regular integration mode, the light intensities are linearly mapped to the pixel voltage, and the sensitivity is high. However, in the logarithmic mode, high light intensities are compressed to a small swing, and, as a result of that compression, the sensitivity drops abruptly. We can conclude that the multimode sensor allows a very high DR to be reached with decent spatial resolution but at the expense of very low sensitivity at the high end of the illumination range and nontrivial signal processing. Sensors utilizing the LOFIC and multiple-partial-reset (M. Partial Resets) methods are both categorized as clipping sensors. An important advantage of the latter method over LOFIC is its simpler pixel structure, allowing higher spatial resolution. Both of the aforementioned methods produce a piecewise linear pixel response; however, in the sensor with LOFIC, additional signal processing is required for selection between two or more output signals, whereas, in the multiple-partial-reset sensor, only one signal is received from the pixel; thus, the latter sensor can achieve a higher frame rate. Power consumption in the sensor using LOFIC is higher than that consumed in partial resets due to the extensive signal processing.
However, the implementation of the partial-reset method requires very accurate analog circuitry and precise timing control, since the pixel is reset to a different intermediate value each time throughout the frame. Nonuniformity of the reset values or nonuniformity in the times of the reset events throughout the APS array can cause increased gain FPN. The accuracy and precision of the reset events can limit either the size of the array or the frame rate as well. The advantages of the LOFIC sensor, relative to the partial-reset sensor, are a lower NF due to the possibility of performing CDS and a higher sensitivity in highly lighted scenes. A certain advantage of the LOFIC sensor over other WDR sensors is the maximal reachable SNR, which exceeds the √Qmax bound in proportion to the capacitance ratio (15). Table II contains the performance characteristics of the frequency-based and TTS sensors. These sensors are based on relatively complex pixels that include ADCs, self-reset circuitry, etc. For that reason, the NF is assessed to be higher compared to that of the other sensors. Another drawback of these sensors is that their spatial resolution is low, since the minimal transistor count within the pixel is higher relative to the other sensors. The power consumption is high, since the pixel is frequently self-reset and the counter is toggled at the same frequency [30]. In TTS sensors, the dominant power consumers are the in-pixel comparator and the peripheral circuits that perform the final calculation of the signal according to the threshold voltage and the time it took for the pixel to saturate [38]. Note that, in some TTS sensors, a very precise time-varying global reference

should be generated. Uniformity in the distribution of the time-varying voltage throughout the pixel array can limit the spatial resolution in TTS sensors. Frequency-based sensors also necessitate supplying additional global data lines throughout the APS array. However, most of those signal lines are digital, excluding the reference voltage. The reference voltage is time invariant, so it can easily be distributed evenly to each pixel within the array. The final readout for frequency-based sensors is usually simpler than that of TTS sensors, since each pixel already contains the digitized data output (Fig. 15). In the case that the TTS pixel is digital [34], the readout procedure is the same. However, the digital data in a frequency-based sensor are generated by the pixel itself, whereas, in a TTS sensor, a global bus should deliver the whole digital word to all pixels in parallel. Therefore, we assess that frequency-based sensors and TTS sensors based upon digital pixels have the highest frame rates relative to the other WDR schemes. In frequency-based sensors, the mapping of the light intensity to the frequency of the reset spikes is linear. On the other hand, in TTS sensors, the mapping of the incoming light intensity to the saturation time is nonlinear. An important advantage of frequency-based and TTS sensors is that they can reach a very high DR, particularly those based on TTS, where the reference voltage control is extremely flexible [38]. Moreover, as can be learned from Table II, TTS sensors reach the highest DR and DRF. Table III presents the assessment of the parameters of WDR sensors that are based on the conventional-multiple-captures (Conv. Mult. Captures), overlapping-multiple-captures (Overlap. Mult. Captures), and autonomous-control-over-integration-time (Autonom) algorithms. Sensors based on these three algorithms reach remarkable DR and DRF values while keeping the pixel structure very simple. Moreover, the DR extension in these sensors is very flexible and can be changed according to (31).
The spatial resolution of a sensor with autonomous integration time control will be slightly below that of sensors utilizing conventional or overlapping multiple captures due to the additional transistor implementing the conditional reset ability. The sensitivity of a sensor with conventional multiple captures is lower than that with overlapping captures or autonomous integration time control due to the reduced longest integration time. In overlapping multiple captures, the sensitivity is low throughout the whole extended DR, since only one capture is utilized for the DR extension. At the high end of illumination intensities in these three WDR sensors, the sensitivity in the extended DR region falls by two orders of magnitude; however, it is still relatively higher than that in the logarithmic and TTS sensors. The drawback of the conventional multiple-capture sensor is that it requires extensive peripheral DSP circuitry. The data processing in such a sensor involves multiple A–D conversions and frequent readout cycles from the memory units that store the digitized pixel values from the previous captures. In a multiple-exposure sensor, the frame rate can be limited by the time required to process the increasing number of possible outputs. It is possible to reduce the number of possible output signals and spare the need for memory units for storing pixel values from multiple frames if only two captures are utilized [51]. In this way, the final output signal can be retrieved immediately


after the second pixel readout. However, extending the DR using only two captures will cause a severe SNR drop at the boundary of the extended range (32). On the other hand, in sensors with autonomous control over the integration time, the control is not complicated, since the intermediate comparisons of the pixel values are performed against a constant voltage, and the final processing involves autoscaling of a single digitized pixel value only. The DR extension in such sensors is bounded by the minimal time of the memory update process, which occurs between the saturation checks. The dominant power consumption sources for conventional multiple-capture sensors are the memory unit and the digital signal processing circuitry. In the case of overlapping multiple captures, the main power consumer is the signal processing circuitry outside the pixel. The dominant power consumer in sensors with autonomous integration time control is the memory unit that stores the WDR bits.

V. SUMMARY

As the technology for implementation of CISs scales down, the pixel size is reduced, increasing the spatial resolution. Further improvement in the spatial resolution can be achieved by sharing certain circuitry among adjacent pixels [51]. Power consumption is reduced as supply voltages decrease. A further power decrease can be achieved by using low-power techniques [60]. Scaling down the pixel size causes a decrease in the intrapixel capacitance. A smaller capacitance raises the sensor sensitivity, improving its ability to detect low light intensities. However, such a capacitance reduction increases the kTC noise and decreases the maximal SNR. Consequently, a CDS procedure, which removes the kTC noise, should be utilized in the final signal processing, and the in-pixel capacitance should be sufficient to keep the required maximal SNR. Generally, according to the sensor transfer function, there are three approaches for widening the DR: nonlinear, piecewise linear, and linear.
The nonlinear approach, utilized in logarithmic, multimode, and TTS sensors, allows a very high DR to be reached but with a remarkable loss of sensitivity, which negatively affects the image quality. Sensors with piecewise linear and linear responses, such as clipping sensors and sensors with global or autonomous control over the integration time, reach a DR similar to that of the nonlinear sensors but with improved sensitivity. Frequency-based sensors, which have a linear response, reach a high DR but with a loss of information in the darker scenes. The quantitative assessments of the DR extension, NF, SNR, minimal transistor count, and sensitivity for each category of WDR sensors were performed and discussed. Power consumption was discussed qualitatively. The advantages and drawbacks of each of the seven sensor categories were discussed and summarized.

ACKNOWLEDGMENT

The authors would like to thank Dr. D. Lubzens, Dr. S. Lineykin, A. Teman, S. Glozman, T. Katz, and L. Blockstein for valuable discussions.


REFERENCES

[1] O. Yadid-Pecht, R. Ginosar, and Y. Shacham-Diamand, "Random access photodiode array for intelligent image capture," IEEE Trans. Electron Devices, vol. 38, no. 8, pp. 1772–1780, Aug. 1991. [2] S. K. Mendis, S. E. Kemeny, R. C. Gee, B. Pain, C. O. Staller, Q. Kim, and E. R. Fossum, "CMOS active pixel image sensors for highly integrated imaging systems," IEEE J. Solid-State Circuits, vol. 32, no. 2, pp. 187–197, Feb. 1997. [3] E. R. Fossum, "CMOS image sensors: Electronic camera-on-a-chip," IEEE Trans. Electron Devices, vol. 44, no. 10, pp. 1689–1698, Oct. 1997. [4] D. N. Yaung, S. G. Wuu, Y. K. Fang, C. S. Wang, C. H. Tseng, and M. S. Liang, "Nonsilicide source/drain pixel for 0.25-μm CMOS image sensor," IEEE Electron Device Lett., vol. 22, no. 2, pp. 71–73, Feb. 2001. [5] K. Cho, A. I. Krymski, and E. R. Fossum, "A 1.5-V 550 μW 176 × 144 autonomous CMOS active pixel image sensor," IEEE Trans. Electron Devices—Special Issue on Image Sensors, vol. 50, no. 1, pp. 96–105, Jan. 2003. [6] O. Yadid-Pecht and R. Etienne-Cummings, CMOS Imagers: From Phototransduction to Image Processing. Norwell, MA: Kluwer, 2004. [7] K. Mizobuchi, S. Adachi, J. Tejada, H. Oshikubo, N. Akahane, and S. Sugawa, "A low-noise wide dynamic range CMOS image sensor with low and high temperatures resistance," Proc. SPIE, vol. 6816, pp. 681 604-1–681 604-8, 2008. [8] E. Stevens, H. Komori, H. Doan, H. Fujita, J. Kyan, C. Parks, G. Shi, C. Tivarus, and J. Wu, "Low-crosstalk and low-dark-current CMOS image-sensor technology using a hole-based detector," in Proc. ISSCC—Image Sensors and Technology, 2008, p. 60 595. [9] S. L. Barna, "Dark current reduction circuitry for CMOS active pixel," U.S. Patent 7 186 964 B2, Mar. 6, 2007. [10] I. Shcherback, A. Belenky, and O. Yadid-Pecht, "Empirical dark current modeling for complementary metal oxide semiconductor active pixel sensor," Opt. Eng.—Special Issue on Focal Plane Arrays, vol. 41, no. 6, pp. 1216–1219, Jun. 2002. [11] O.
Yadid-Pecht, “Wide dynamic range sensors,” Opt. Eng., vol. 38, no. 10, pp. 1650–1660, Oct. 1999. [12] D. Yang and A. El Gamal, “Comparative analysis of SNR for image sensors with enhanced dynamic range,” Proc. SPIE, vol. 3649, pp. 197– 211, Jan. 1999. [13] S. G. Chamberlain and J. P. Lee, “Silicon imaging arrays with new photoelements, wide dynamic range and free from blooming,” in Proc. Custom Integr. Circuits Conf., Rochester, NY, 1984, pp. 81–85. [14] K. A. Boahen and A. G. Andreou, “A contrast sensitive retina with reciprocal synapses,” in Advances in Neural Information Processing Systems, vol. 4. San Mateo, CA: Morgan Kaufmann, 1992, pp. 764–772. [15] E. Labonne, G. Sicard, M. Renaudin, and P. Berger, “A 100 dB dynamic range CMOS image sensor with global shutter,” in Proc. 13th IEEE ICECS, Dec. 2006, pp. 1133–1136. [16] S. Kavadias, B. Dierickx, D. Scheffer, A. Alaerts, D. Uwaerts, and J. Bogaerts, “A logarithmic response CMOS image sensor with on-chip calibration,” IEEE J. Solid-State Circuits, vol. 35, no. 8, pp. 1146–1152, Aug. 2000. [17] D. Joseph and S. Collins, “Modeling, calibration, and correction of nonlinear illumination-dependent fixed pattern noise in logarithmic CMOS image sensors,” IEEE Trans. Instrum. Meas., vol. 51, no. 5, pp. 996–1001, Oct. 2002. [18] L.-W. Lai, C.-H. Lai, and Y.-C. King, “A novel logarithmic response CMOS image sensor with high output voltage swing and in-pixel fixedpattern noise reduction,” IEEE Sensors J., vol. 4, no. 1, pp. 122–126, Feb. 2004. [19] N. Akahane, R. Ryuzaki, S. Adachi, K. Mizobuchi, and S. Sugawa, “A 200 dB dynamic range iris-less CMOS image sensor with lateral overflow integration capacitor using hybrid voltage and current readout operation,” in Proc. IEEE Int. Solid-State Circuits Conf., Feb. 2006, pp. 1161–1170. [20] C. E. Fox, J. Hynecek, and D. R. Dykaar, “Wide-dynamic-range pixel with combined linear and logarithmic response and increased signal swing,” Proc. SPIE, vol. 3965, pp. 4–10, May 2000. [21] N. 
Tu, R. Hornsey, and S. Ingram, “CMOS active pixel image sensor with combined linear and logarithmic mode operation,” in Proc. IEEE Can. Conf. Elect. Comput. Eng., 1998, pp. 754–757. [22] G. Storm, R. Henderson, J. E. D. Hurwitz, D. Renshaw, K. Findlater, and M. Purcell, “Extended dynamic range from a combined linear–logarithmic CMOS image sensor,” IEEE J. Solid-State Circuits, vol. 41, no. 9, pp. 2095–2106, Sep. 2006. [23] S. Decker, D. McGrath, K. Brehmer, and C. G. Sodini, “A 256 × 256 CMOS imaging array with wide dynamic range pixels and columnparallel digital output,” IEEE J. Solid-State Circuits, vol. 33, no. 12, pp. 2081–2091, Dec. 1998.


[24] Y. Wang, S. L. Barna, S. Campbell, and E. R. Fossum, “A high dynamic range CMOS APS image sensor,” in Proc. IEEE Workshop Charge-Coupled Devices Adv. Image Sens., Jun. 2001, pp. 137–140.
[25] E. R. Fossum, “High dynamic range cascaded integration pixel cell and method of operation,” U.S. Patent 6 888 122 B2, May 3, 2005.
[26] N. Akahane, S. Sugawa, S. Adachi, K. Mori, T. Ishiuchi, and K. Mizobuchi, “A sensitivity and linearity improvement of a 100-dB dynamic range CMOS image sensor using a lateral overflow integration capacitor,” IEEE J. Solid-State Circuits, vol. 41, no. 4, pp. 851–858, Apr. 2006.
[27] W. Lee, N. Akahane, S. Adachi, K. Mizobuchi, and S. Sugawa, “A high S/N ratio and high full well capacity CMOS image sensor with active pixel readout feedback operation,” in Proc. IEEE Asian Solid-State Circuits Conf. (ASSCC), Nov. 2007, pp. 260–263.
[28] D. Hertel, A. Betts, R. Hicks, and M. ten Brinke, “An adaptive multiple-reset CMOS wide dynamic range imager for automotive vision applications,” in Proc. IEEE Intell. Veh. Symp., Eindhoven, The Netherlands, Jun. 2008, pp. 614–619.
[29] K. P. Frohmader, “A novel MOS compatible light intensity-to-frequency converter suited for monolithic integration,” IEEE J. Solid-State Circuits, vol. SSC-17, no. 3, pp. 588–591, Jun. 1982.
[30] X. Wang, W. Wong, and R. Hornsey, “A high dynamic range CMOS image sensor with in-pixel light-to-frequency conversion,” IEEE Trans. Electron Devices, vol. 53, no. 12, pp. 2988–2992, Dec. 2006.
[31] L. G. McIlrath, “A low-power low-noise ultrawide-dynamic-range CMOS imager with pixel-parallel A/D conversion,” IEEE J. Solid-State Circuits, vol. 36, no. 5, pp. 846–853, May 2001.
[32] E. Culurciello, R. Etienne-Cummings, and K. Boahen, “A biomorphic digital image sensor,” IEEE J. Solid-State Circuits, vol. 38, no. 2, pp. 281–294, Feb. 2003.
[33] V. Brajovic and T. Kanade, “New massively parallel technique for global operations in embedded imagers,” in Proc. IEEE Workshop CCDs Adv. Image Sens., Apr. 1995, pp. 1–6.
[34] A. Kitchen, A. Bermak, and A. Bouzerdoum, “PWM digital pixel sensor based on asynchronous self-resetting scheme,” IEEE Electron Device Lett., vol. 25, no. 7, pp. 471–473, Jul. 2004.
[35] A. Bermak and A. Kitchen, “A novel adaptive logarithmic digital pixel sensor,” IEEE Photon. Technol. Lett., vol. 18, no. 20, pp. 2147–2149, Oct. 2006.
[36] A. Bermak and Y.-F. Yung, “A DPS array with programmable resolution and reconfigurable conversion time,” IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 14, no. 1, pp. 15–22, Jan. 2006.
[37] Q. Luo, J. G. Harris, and Z. J. Chen, “A time-to-first spike CMOS image sensor with coarse temporal sampling,” Analog Integr. Circuits Signal Process., vol. 47, no. 3, pp. 303–313, Apr. 2006.
[38] S. Chen and A. Bermak, “Arbitrated time-to-first spike CMOS image sensor with on-chip histogram equalization,” IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 15, no. 3, pp. 346–357, Mar. 2007.
[39] D. Stoppa, M. Vatteroni, D. Covi, A. Baschirotto, A. Sartori, and A. Simoni, “A 120-dB dynamic range CMOS image sensor with programmable power responsivity,” IEEE J. Solid-State Circuits, vol. 42, no. 7, pp. 1555–1563, Jul. 2007.
[40] T. Lulé, M. Wagner, M. Verhoeven, H. Keller, and M. Böhm, “100 000-pixel, 120-dB imager in TFA technology,” IEEE J. Solid-State Circuits, vol. 35, no. 5, pp. 732–739, May 2000.
[41] O. Yadid-Pecht and E. R. Fossum, “Image sensor with ultra-high-linear dynamic range utilizing dual output CMOS active pixel sensors,” IEEE Trans. Electron Devices, vol. 44, no. 10, pp. 1721–1724, Oct. 1997.
[42] M. Mase, S. Kawahito, M. Sasaki, Y. Wakamori, and M. Furuta, “A wide dynamic range CMOS image sensor with multiple exposure-time signal outputs and 12-bit column-parallel cyclic A/D converters,” IEEE J. Solid-State Circuits, vol. 40, no. 12, pp. 2787–2795, Dec. 2005.
[43] M. Sasaki, M. Mase, S. Kawahito, and Y. Tadokoro, “A wide-dynamic-range CMOS image sensor based on multiple short exposure-time readout with multiple-resolution column-parallel ADC,” IEEE Sensors J., vol. 7, no. 1, pp. 151–158, Jan. 2007.
[44] T. Yamada, S. Kasuga, T. Murata, and Y. Kato, “A 140 dB-dynamic-range MOS image sensor with in-pixel multiple-exposure synthesis,” in Proc. IEEE Int. Solid-State Circuits Conf. (ISSCC), Feb. 2008, pp. 50–51.
[45] X. Liu and A. El Gamal, “Photocurrent estimation for a self-reset CMOS image sensor,” Proc. SPIE, vol. 4669, pp. 304–312, 2002.
[46] S. Kavusi and A. El Gamal, “Quantitative study of high dynamic range image sensor architectures,” in Proc. SPIE Electronic Imaging, vol. 5301, Jan. 2004, pp. 264–275.

[47] S. Kavusi and A. El Gamal, “Folded multiple-capture: An architecture for high dynamic range disturbance-tolerant focal plane array,” in Proc. SPIE Infrared Technology and Applications, vol. 5406, Apr. 2004, pp. 351–360.
[48] D. Yang, A. El Gamal, B. Fowler, and H. Tian, “A 640 × 512 CMOS image sensor with ultrawide dynamic range floating-point pixel-level ADC,” IEEE J. Solid-State Circuits, vol. 34, no. 12, pp. 1821–1834, Dec. 1999.
[49] J. Rhee and Y. Joo, “Wide dynamic range CMOS image sensor with pixel level ADC,” Electron. Lett., vol. 39, no. 4, pp. 360–361, Feb. 2003.
[50] A. Guilvard, J. Segura, P. Magnan, and P. Martin-Gonthier, “A digital high dynamic range CMOS image sensor with multi-integration and pixel readout request,” in Proc. SPIE Sensors, Cameras, and Systems for Scientific/Industrial Applications VIII, San Jose, CA, Jan. 2007, pp. 1–10.
[51] Y. Egawa, H. Koike, R. Okamoto, H. Yamashita, N. Tanaka, J. Hosokawa, K. Arakawa, H. Ishida, H. Harakawa, T. Sakai, and H. Goto, “A 1/2.5 inch 5.2 Mpixel, 96 dB dynamic range CMOS image sensor with fixed pattern noise free, double exposure time read-out operation,” in Proc. IEEE Asian Solid-State Circuits Conf. (ASSCC), Nov. 2006, pp. 135–138.
[52] Y. Egawa, N. Tanaka, N. Kawai, H. Seki, A. Nakao, H. Honda, Y. Iida, and M. Monoi, “A White-RGB CFA-patterned CMOS image sensor with wide dynamic range,” in Proc. IEEE Int. Solid-State Circuits Conf. (ISSCC), Feb. 2008, pp. 52–53.
[53] Y. Oike, A. Toda, T. Taura, A. Kato, H. Sato, M. Kasai, and T. Narabu, “A 121.8 dB dynamic range CMOS image sensor using pixel-variation-free midpoint potential drive and overlapping multiple exposures,” in Proc. Int. Image Sens. Workshop, Jun. 2007, pp. 30–33.
[54] O. Yadid-Pecht and A. Belenky, “In-pixel autoexposure CMOS APS,” IEEE J. Solid-State Circuits, vol. 38, no. 8, pp. 1425–1428, Aug. 2003.
[55] T. Hamamoto and K. Aizawa, “A computational image sensor with adaptive pixel-based integration time,” IEEE J. Solid-State Circuits, vol. 36, no. 4, pp. 580–585, Apr. 2001.
[56] P. M. Acosta-Serafini, I. Masaki, and C. G. Sodini, “A 1/3 VGA linear wide dynamic range CMOS image sensor implementing a predictive multiple sampling algorithm with overlapping integration intervals,” IEEE J. Solid-State Circuits, vol. 39, no. 9, pp. 1487–1496, Sep. 2004.
[57] A. Fish, A. Belenky, and O. Yadid-Pecht, “Wide dynamic range snapshot APS for ultra low-power applications,” IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 52, no. 11, pp. 729–733, Nov. 2005.
[58] A. Belenky, A. Fish, A. Spivak, and O. Yadid-Pecht, “Global shutter CMOS image sensor with wide dynamic range,” IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 54, no. 12, pp. 1032–1036, Dec. 2007.
[59] S. W. Han, S. J. Kim, J. H. Choi, C. K. Kim, and E. Yoon, “A high dynamic range CMOS image sensor with in-pixel floating-node analog memory for pixel level integration time control,” in Proc. Symp. VLSI Circuits Dig. Tech. Papers, 2006, pp. 25–26.
[60] A. Belenky, A. Fish, A. Spivak, and O. Yadid-Pecht, “A snapshot CMOS image sensor with extended dynamic range,” IEEE Sensors J., vol. 9, no. 2, pp. 103–111, Feb. 2009.
[61] O. Yadid-Pecht, B. Pain, C. Staller, C. Clark, and E. R. Fossum, “CMOS active pixel sensor star tracker with regional electronic shutter,” IEEE J. Solid-State Circuits, vol. 32, no. 2, pp. 285–288, Feb. 1997.

Arthur Spivak was born on April 29, 1979. He received the B.Sc. degree in electrical engineering from Technion–Israel Institute of Technology, Haifa, Israel, in 2005 and the M.Sc. degree in electrical engineering from Ben-Gurion University of the Negev, Beersheba, Israel, in 2009, where he is currently working toward the Ph.D. degree. Since 2006, he has been with the VLSI Systems Center, Ben-Gurion University of the Negev, where he has been engaged in the research, development, and design of analog and digital circuits integrated in CMOS imagers.

Alexander Belenky received the B.Sc. degree in physics and the M.Sc. degree in electrooptics engineering from Ben-Gurion University of the Negev, Beersheba, Israel, in 1995 and 2003, respectively, where he is currently working toward the Ph.D. degree. Since 1998, he has been with the VLSI Systems Center, Ben-Gurion University of the Negev, where he is responsible for the VLSI laboratory. His current interests are smart CMOS image sensors, image processing, and imaging systems.

Alexander Fish (S’04–M’06) received the B.Sc. degree in electrical engineering from Technion–Israel Institute of Technology, Haifa, Israel, in 1999 and the M.Sc. and Ph.D. (summa cum laude) degrees from Ben-Gurion University of the Negev, Beersheba, Israel, in 2002 and 2006, respectively. From 2006 to 2008, he was a Postdoctoral Fellow with the ATIPS Laboratory, University of Calgary, Calgary, AB, Canada. He is currently a Faculty Member with the Department of Electrical and Computer Engineering, Ben-Gurion University of the Negev. His research interests include low-power CMOS image sensors, analog and digital on-chip image processing, algorithms for dynamic range expansion, and low-power design techniques for digital and analog circuits. He has authored over 50 scientific papers and patent applications and two book chapters. Dr. Fish was the recipient of the Electrical Engineering Dean Award at Technion in 1997 and the Technion President’s Award for Excellence in Study in 1998. He was a coauthor of two papers that won Best Paper Finalist awards at the ICECS’04 and ISCAS’05 conferences. He was also the recipient of the Young Innovator Award for Outstanding Achievements in the field of information theories and applications by ITHEA in 2005. In 2006, he was honored with the Engineering Faculty Dean “Teaching Excellence” recognition at Ben-Gurion University of the Negev. He has served as a Referee for the IEEE Transactions on Circuits and Systems—I, the IEEE Transactions on Circuits and Systems—II, Sensors and Actuators, the IEEE Sensors Journal, and SPIE Optical Engineering, as well as for the ISCAS, ICECS, and IEEE Sensors conferences. He was also a Co-organizer of special sessions on “smart” CMOS image sensors at the IEEE Sensors Conference 2007 and on low-power “smart” image sensors and beyond at the IEEE ISCAS 2008.

Orly Yadid-Pecht (S’90–M’95–SM’01–F’07) received the B.Sc. degree in electrical engineering and the M.Sc. and D.Sc. degrees from Technion–Israel Institute of Technology, Haifa, Israel, in 1984, 1990, and 1995, respectively. From 1995 to 1997, she was a National Research Council (USA) Research Fellow in the area of advanced image sensors with the Jet Propulsion Laboratory, California Institute of Technology, Pasadena. In 1997, she joined the Departments of Electrical and Electro-Optical Engineering, Ben-Gurion University of the Negev, Beersheba, Israel, where she founded the VLSI Systems Center, specializing in CMOS image sensors. Since 2009, she has been an iCORE Professor of integrated sensors and intelligent systems with the University of Calgary, Calgary, AB, Canada. Her main subjects of interest are integrated CMOS sensors, smart sensors, image processing, neural nets, and microsystem implementation. She has published over a hundred papers and patents and has led over a dozen research projects supported by government and industry. Her work has over 200 external citations. In addition, she has coauthored and coedited the first book on CMOS image sensors: CMOS Imaging: From Photo-Transduction to Image Processing (2004). She also serves as a Director on the board of two companies. Dr. Yadid-Pecht has served as an Associate Editor for the IEEE Transactions on Very Large Scale Integration (VLSI) Systems and as Deputy Editor-in-Chief for the IEEE Transactions on Circuits and Systems—I. She has also served on the Board of Governors of the IEEE Circuits and Systems Society. She currently serves as an Associate Editor for the IEEE Transactions on Biomedical Circuits and Systems, the CAS Representative for the IEEE Sensors Council, and a member of the Neural Networks, Nanoelectronics and Gigascale Systems committees and the Sensory Systems committee, which she chaired during 2003–2004. She was an IEEE Distinguished Lecturer of the Circuits and Systems Society in 2005. She was also the General Chair of the IEEE International Conference on Electronics, Circuits and Systems (ICECS) and is a current member of the steering committee of this conference.
