IEEE JOURNAL OF SOLID-STATE CIRCUITS, VOL. 35, NO. 11, NOVEMBER 2000

A 30-Frames/s Megapixel Real-Time CMOS Image Processor

Daniel Doswald, Member, IEEE, Jürg Häfliger, Patrick Blessing, Norbert Felber, Peter Niederer, and Wolfgang Fichtner, Fellow, IEEE

Abstract—A real-time 1024 × 1024 image processor for digital motion camera systems is described. The application-specific IC (ASIC) corrects dark current and white imbalance pixelwise, and performs a color interpolation over nine lines of images from a miniaturized camera head housing a single charge-coupled device (CCD). A color space transformation is implemented to achieve true-color motion images with frame rates up to 30 frames/s. Additional features include a focus and illumination criteria calculation and a megapixel-to-PAL scan conversion. The chip area is 49 mm², and it was fabricated in a single-poly three-layer-metal 0.35-µm CMOS process. The device is packaged in a 208-pin ceramic quad flat package (CQFP), and dissipates 278 mW at 2.5 V.

Index Terms—1-CCD camera, autofocus, color space transformation, dark current, endoscopic imaging, focus criterion, image processor, machine vision, real-time, true-color.

I. INTRODUCTION

Fig. 1. Camera head and back-end with image processor ASIC.

REAL-TIME motion pictures providing image quality superior to standard video in terms of resolution and color fidelity are often required. In various applications such as biomedicine or machine vision, space and power requirements are the limiting factors for a camera system design. In order to meet these requirements, the camera system is often partitioned into a strongly miniaturized low-power camera head containing only a single charge-coupled device (CCD) sensor, and an image processing back-end where space and power limitations are not critical. Fig. 1 shows the prototype system with the miniaturized camera head and the back-end with this application-specific IC (ASIC).

The complete camera system is illustrated in Fig. 2. The timing generator provides the CCD image sensor, the correlated double sampling (CDS) stage, and the A/D converter with the pulses needed for operation, and the driver stage converts the pulses for the image sensor to the corresponding voltage levels and drive strengths. Two lines are read out of the CCD simultaneously and processed in parallel by two standard ICs containing a CDS stage and an A/D converter. The raw digital camera data is then coded by the transmitter and sent to the back-end over two cable pairs, where the data is received and processed by the ASIC presented in this paper.

For color imaging, the CCD sensor is covered with a Bayer color filter array (CFA), and the RGB image information is reconstructed with complex algorithms requiring an extremely high computation effort in order to achieve reasonable frame rates and artifact-free interpolation quality. In this paper, two different interpolation methods are compared and the realized implementation of the complex algorithm is described. Other 1-CCD color camera systems are described in [1], [2]. The ASIC also performs pixelwise dark-current and white-gain balancing, which ensures good results even with lower-grade CCD sensors at lower cost. In order to get "true-color" images at the output of the ASIC, different color space transformation techniques have been investigated and a matrix transformation implemented. In hand-held biomedical endoscopes, autofocusing systems with additional components (e.g., a laser for distance measurements) have to be avoided because of the added weight and for safety reasons (e.g., in eye surgery applications). For this purpose, a real-time passive autofocus criterion calculation, based only on the received images, is implemented in the image processor. An additional feature of this chip is the scan converter, which derives standard digital image signals for a PAL video encoder to enable low-cost long-term documentation and storage on a video recorder.

Manuscript received March 23, 2000; revised June 16, 2000. This work was supported by the Swiss Priority Program in Micro and Nano System Technology (MINAST). D. Doswald, N. Felber, and W. Fichtner are with the Integrated Systems Laboratory, Swiss Federal Institute of Technology (ETH), Zürich CH-8092, Switzerland (e-mail: [email protected]). J. Häfliger, P. Blessing, and P. Niederer are with the Institute of Biomedical Engineering, Swiss Federal Institute of Technology (ETH), Zürich CH-8092, Switzerland. Publisher Item Identifier S 0018-9200(00)09424-5.


Fig. 2. Block diagram of prototype camera system with image processor.

Fig. 3. Bayer RGB color filter array (CFA).

II. CFA TO RGB INTERPOLATION

The camera head provides digitized raw image data from the CCD sensor with a Bayer CFA. Only one color component (green, red, or blue) per pixel is thus available. For the reconstruction of the full RGB information of each pixel, two algorithms are discussed in this section. The most common interpolation algorithms are described in [3], [4]. Only a smooth hue transition algorithm in logarithmic exposure space based on two lines (TwoLineSH) and an enlarged pixel neighborhood interpolation algorithm (EPNI) are explained in this paper. The Bayer CFA arrangement is illustrated in Fig. 3. The interpolation is explained for the two missing color components of the considered pixels; all other pixels are treated similarly. (The marked pixels at the border of the image are mirrored to get an "infinitely repeated" interpolation area.)
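As a concrete illustration of this sampling step (not taken from the paper; the pattern phase and function name are illustrative assumptions), the following Python sketch shows how an ideal Bayer CFA retains only one color component per pixel of a full RGB image, producing the kind of raw mosaic the interpolation algorithms below must reconstruct.

import numpy as np

def bayer_sample(rgb):
    """Sample an RGB image (H x W x 3) with an ideal Bayer CFA.

    Assumed phase (illustrative): even rows G R G R ..., odd rows B G B G ...
    Returns a single-channel mosaic with one color component per pixel.
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 1]  # green on even rows, even columns
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 0]  # red on even rows, odd columns
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 2]  # blue on odd rows, even columns
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 1]  # green on odd rows, odd columns
    return mosaic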

To present the differences in the interpolation algorithms, a colorful test image is used (Fig. 4).¹ The image contains several subsections of test pictures as well as some synthetic structures that make it sensitive to interpolation artifacts. It intentionally includes spatial frequencies above the Nyquist limit for each of the color components of the CFA. This is often the case in real situations due to residues of imperfect spatial anti-aliasing filters in the optical path. Good interpolation algorithms will render such aliasing errors inconspicuous. This original RGB image, sampled by an ideal CFA filter, yields the test image that is used for comparing and evaluating the different interpolation algorithms.

The TwoLineSH algorithm avoids costly line memories. A smooth hue transition algorithm in logarithmic exposure space was modified to work on the two parallel image lines. The equations for the considered pixel block in Fig. 3 are

(1)

¹All test images presented in this paper are available in color at http://www.iis.ee.ethz.ch/publications/papers/2000/JSSC-35-11-figs.pdf.


Fig. 4. Test image used for interpolation studies (color image available for download: http://www.iis.ee.ethz.ch/publications/papers/2000/JSSC-35-11-fig4.tif).

Fig. 5. Interpolated image using the TwoLineSH algorithm (color image available for download: http://www.iis.ee.ethz.ch/publications/papers/2000/JSSC-35-11-fig5.tif).

Since this algorithm can be implemented very efficiently with only six pixels of short-time storage, it is well suited for digital motion camera applications. The result is shown in Fig. 5. The main interpolation artifacts are miscolored pixels and the "zipper" effect at edges. A method to reduce these effects is to use a neighborhood-sensitive criterion in which the green interpolation is restricted to the one dimension parallel to an assumed edge. This, however, cannot be done with only two lines. The simplest way of sensing an edge in the neighborhood of a pixel is to calculate the differences of the green pixels in the horizontal and vertical directions:

(2)

(3)

Depending on these differences, the green interpolation is performed only in the direction in which an edge is assumed. In order to reduce zipper effects at diagonal edges or picket-fence structures, a larger pixel neighborhood with special filter algorithms is required. Unfortunately, diagonal differences for the green pixels cannot be calculated, because there are no green pixels on the diagonals. One solution is to also use the red and blue pixels to predict the missing green values, which is possible under the assumption that all color channels are correlated. This results in the image model described in [4], which supposes the red and blue values to be correlated with luminance (green) over the extent of the interpolation neighborhood by

(4)


Fig. 6. Adaptive green interpolation using an enlarged neighborhood.

Fig. 7. Interpolated image using EPNI algorithm (color image available: http://www.iis.ee.ethz.ch/publications/papers/2000/JSSC-35-11-fig7.tif).

In (4), the left-hand value is either the red or blue value at the given location, and the additive term is the appropriate bias for the given pixel neighborhood. The following new green interpolation equations can be derived from this image model:

(5)

Here, the first case corresponds to averaging the four neighbor pixels, and the other cases to an edge-enhancing interpolation in the horizontal or vertical direction, respectively. Fig. 6 shows the implemented decision algorithm for the green interpolation. If the average of the neighbor pixels is larger than a programmable threshold value, enough green information is assumed to predict an edge depending on the horizontal and vertical differences.

The red and blue values are calculated using the following equations (smooth hue transition in logarithmic exposure space):

(6)

Fig. 7 shows the result of the EPNI algorithm with a threshold of 30. To interpolate a green value of an image line, five lines are used. For the chrominance interpolation, the green values of two more adjacent lines of pixels need to be known. This means that a total of seven lines are accessed to reconstruct a red or blue pixel value.
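The following Python sketch summarizes the two ingredients discussed above in simplified form: a threshold-gated choice of the green interpolation direction from the horizontal and vertical green differences, and a smooth hue transition in logarithmic exposure space for the chrominance values. It is only an illustration of the general technique; the paper's exact equations (4)-(6), pixel labels, and fixed-point arithmetic are not reproduced, and the neighborhood used here is smaller than the enlarged neighborhood of the real EPNI unit.

import numpy as np

def green_estimate(mosaic, r, c, threshold=30):
    """Edge-directed green estimate at a red/blue CFA site (simplified EPNI-style decision)."""
    left, right = float(mosaic[r, c - 1]), float(mosaic[r, c + 1])  # horizontal green neighbors
    up, down = float(mosaic[r - 1, c]), float(mosaic[r + 1, c])     # vertical green neighbors
    avg = (left + right + up + down) / 4.0
    if avg > threshold:                       # enough green signal to trust an edge decision
        dh, dv = abs(left - right), abs(up - down)
        if dh < dv:                           # smoother horizontally: assume a horizontal edge
            return (left + right) / 2.0       # interpolate parallel to the assumed edge
        if dv < dh:                           # smoother vertically: assume a vertical edge
            return (up + down) / 2.0
    return avg                                # no clear edge: plain four-neighbor average

def chroma_estimate(green_here, chroma_neighbor, green_neighbor, eps=1e-6):
    """Smooth hue transition in log exposure space: carry the neighboring
    log(chroma) - log(green) difference over to the pixel being reconstructed."""
    hue = np.log(chroma_neighbor + eps) - np.log(green_neighbor + eps)
    return float(np.exp(np.log(green_here + eps) + hue))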


III. COLOR SPACE TRANSFORMATION

TABLE I SIMULATED ERRORS WITH 3, 6, 7 AND 10 POLYNOMIAL TERMS

The CCD sensor of the camera and the display device (e.g., a cathode ray tube (CRT) monitor or a flat panel) use different device-dependent RGB color spaces. For correct display and interpretation of the images, an accurate conversion between those two color spaces must be found. Color space transformations can be divided into three categories. The first approach is a linear regression method which optimizes the correlation between the color spaces. The second category divides the source color space into small cells and uses a three-dimensional lookup table and interpolation to calculate the target color specifications. The third variant is based on cognitive methods such as fuzzy logic and neural networks which try to simulate human decision-making processes. Due to the higher implementation cost of the latter two categories, the linear regression method was implemented. For this approach, the color space transformation can be written as follows:

(7)

In these equations, the independent variables represent the color specifications in the CCD's color space, the dependent variables denote the color specifications in the CRT's color space, and the remaining constants are the polynomial coefficients. The order and the number of terms of the polynomials influence the accuracy and the implementation costs. In the following analyses, polynomials with 3, 6, 7, and 10 terms are investigated:

(8)

Extensive simulations were performed to find which of these polynomials provide sufficient accuracy. In this context, the following procedure was performed:

1) Reflectance spectra of different color patches under standard D65 illumination (Commission Internationale de L'Éclairage, CIE) define the reference emission spectra.
2) The CIE 1931 standard colorimetric observer [7] applied to the reference emission spectra provides the reference tristimulus values of the illuminated color patches. The spectral sensitivity of the CCD sensor [5], weighted by the transmittance spectrum of a near-infrared blocking filter [6], is used to calculate the sensor response to the reference emission spectra.
3) The calculated values are then white-balanced.
4) The white-balanced values are transformed using the polynomial approximation to yield the new values.
5) The new values and the spectral emittance of the three phosphors of a typical 6500-K balanced CRT monitor [8] are used to derive the additive emission spectra for the color patches as produced by the monitor.
6) The values for the CRT emission spectra of the color patches are calculated.
7) Both sets of values are transformed to the CIELAB color space. The color difference for each color patch is determined and averaged to obtain the measure for accuracy.
8) The polynomial coefficients are now optimized in order to minimize the resulting average error.

Table I shows the resulting errors for color patches of a GretagMacbeth™ ColorChecker™ (GMCC) [9], various Munsell color chips (MCC) [10], and ceramic color tiles (CCS) from CERAM Research [11]. According to these results, six polynomial terms offer a good trade-off between accuracy and implementation costs. The color space transformation can therefore be expressed by the following matrix equation, where the matrix of the polynomial coefficients is of size 6 × 3:

(9)
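A minimal sketch of how such a six-term polynomial transformation can be applied per pixel is given below. The particular choice of the three additional terms and the zero-initialized coefficient matrix are placeholders only; the paper's equations (7)-(9) and the optimized, programmable coefficients are not reproduced here.

import numpy as np

# Hypothetical 3 x 6 coefficient matrix, one row per output channel (R', G', B').
# In the real device the coefficients result from the CIELAB error minimization
# described above and are loaded into programmable registers.
M = np.zeros((3, 6))

def transform_pixel(r, g, b, m=M):
    """Apply a six-term polynomial color space transformation (illustrative term choice)."""
    terms = np.array([r, g, b, r * g, g * b, r * b])  # assumed: three linear plus three cross terms
    return m @ terms                                   # -> (R', G', B')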

In Fig. 8, the locations of the color patches of a GMCC in the CIELAB color space are shown. The crosses indicate the reference values and the circles correspond to the transformed values. The solid border line indicates the gamut of the sensor-filter combination. The color space transformation is implemented with programmable coefficients. The programming range has been optimized per coefficient based on simulations of different scene illuminations.

IV. AUTOFOCUS

State-of-the-art passive autofocus strategies as routinely installed in video cameras [12] are based on single TV lines. They are often too slow for real-time implementations, e.g., in medical endoscopes, and show unreliable performance for mainly horizontally structured objects. A new passive autofocus criterion [13] based on two-dimensional regions solves these problems.

Fig. 8. Result of the 6 × 3 matrix color space transformation: × indicates a reference value, ○ corresponds to a transformed value.

Focusing on an object can be achieved by maximizing the high-frequency content of its image. Therefore, a focus criterion has to provide a scalar measure of the high-frequency content which yields a maximum value when the object is in focus. Various strategies for deriving a focus criterion can be found in the literature [14]–[17]. The best known are the variation of intensity, the sum modulus difference (SMD), the spectrum of power (SP), and the entropy of grey levels. SMD adapted to a discrete image is well suited to hardware implementation since only additions/subtractions are required.

Fig. 9. Endoscopic image of human intestine used for focus criterion evaluation.

(10)

(11)

(12)

An enhanced criterion based on SMD uses squaring of the differences, at obviously higher hardware cost:

(13)

(14)

(15)

This method weights small signals much lower than absolute differences do, yielding better noise robustness. Furthermore, the sensitivity of the function to the focus position is much larger and secondary maxima are less probable. Fig. 10, based on the reference image in Fig. 9, shows these enhancements of the new SPSMD criterion (15) over the conventional XSMD criterion (10).
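The contrast between the two families of criteria can be sketched as follows on a two-dimensional luminance window: the SMD-type measure sums absolute differences of neighboring pixels, while the squared variant sums squared differences, so that small, noise-dominated differences contribute less and the maximum at the in-focus position becomes more pronounced. The code follows the verbal description only; the exact forms of (10)-(15) are not reproduced.

import numpy as np

def smd(window):
    """Sum-modulus-difference focus measure over a 2-D luminance window."""
    y = window.astype(np.int64)
    dh = np.abs(np.diff(y, axis=1))   # differences of horizontally neighboring pixels
    dv = np.abs(np.diff(y, axis=0))   # differences of vertically neighboring pixels
    return int(dh.sum() + dv.sum())

def spsmd(window):
    """Squared-difference variant: small (noisy) differences are weighted down,
    which sharpens the criterion around the in-focus position."""
    y = window.astype(np.int64)
    dh = np.diff(y, axis=1) ** 2
    dv = np.diff(y, axis=0) ** 2
    return int(dh.sum() + dv.sum())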

Fig. 10. Comparison of two focus calculations on Fig. 9. (a) Criterion XSMD. (b) Criterion SPSMD.

V. IMPLEMENTATION

A simplified block diagram of the chip architecture is shown in Fig. 11. The raw image data from the camera head is transmitted as a differential signal to allow long cables. After passing the on-chip differential receiver, it is conditioned by the protocol decoder for subsequent processing. The wide-range dark current and white gain compensation ensures good results even with lower-grade CCD sensors at lower cost. The CFA interpolator reconstructs the two missing color components of each pixel. The RGB data then pass the color space transformation in order to achieve the best color fidelity with different scene illuminations. Because all the aforementioned processes work on the two-channel data stream from the CCD sensor, a line serializer is needed to rearrange the data in order to provide the usual progressive-scan image to the frame buffer output. Alongside this main data path, the focus and exposure unit calculates the luminance components of the image and evaluates a two-dimensional criterion over a window of programmable size. The scan converter unit derives a standard digital image for a PAL video encoder for low-cost long-term documentation and storage on a video recorder. Two synchronous DRAM (SDRAM) controllers enable cost-efficient intermediate data storage, and the hardware-assisted parallel interface (HAPI) provides easy configuration access to all units.


Fig. 11. Block diagram of the ASIC.

Fig. 12. Architecture of correction unit.

For low-voltage differential signaling (LVDS) standard-conforming data transmission from the camera head, special full-custom level- and impedance-compatible receivers were designed. These receivers are also used for the 200-MHz system clock input. Although the measured nonreturn-to-zero (NRZ) data-rate limit, which exceeds 700 Mb/s, would allow a single 400-Mb/s data channel, transmitter and cable specifications led us to implement two 200-Mb/s channels.

CCD sensors are divided into different grade classes depending on the number of minor and major defective pixels. Using a pixelwise gain and dark current compensation, sensors with minor defects perform the same as sensors without any defects.

Besides minor pixel defects, the differing light sensitivity of each color channel may degrade image quality. Therefore, even with high-grade CCD sensors, at least the gain of each color has to be adjusted. The correction algorithm chosen can be described by the following equation:

(16)

The architecture of the integrated correction unit is illustrated in Fig. 12. The two lines are multiplexed together and alternately processed pixelwise by common arithmetic sub-blocks using a double-edge triggered (DET) design technique.

Fig. 13. Architecture of SDRAM and configuration interfaces.

Fig. 14. CFA pixels used for interpolation.

At the end of this unit, the two lines are demultiplexed for further parallel processing by the interpolation unit. The pre-gain values are taken directly from the on-chip register file, and the offset and gain values come from the correction SDRAM interface. The correction unit performs 80 million multiply–accumulate (MAC) operations each second. The external 64-Mb RAM for dark current and white gain compensation holds four programmable correction sets. This enables buffering and therefore allows loading and switching during operation without loss of frames. In addition to the pixelwise correction, a second mode without external memory is implemented. It corrects the CCD data of each color channel globally.
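A rough functional model of the per-pixel correction path is sketched below. The order of the multiply and subtract operations is an assumption consistent with the pre-gain, offset, and gain values mentioned in the text; the exact form of (16) and the fixed-point word widths are not reproduced.

def correct_pixel(raw, pre_gain, offset, gain):
    """Dark-current / white-gain correction of one raw pixel value (assumed arrangement).

    pre_gain comes from the on-chip register file; offset and gain come from the
    per-pixel correction sets in the external SDRAM (pixelwise mode) or from the
    register file (colorwise mode).
    """
    return (raw * pre_gain - offset) * gain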


The SDRAM interface for this unit is depicted in Fig. 13. Depending on the programmed mode, the correction data is taken from the register file (colorwise) or from an external SDRAM (pixelwise). Sixteen bits of data are required each 40-MHz cycle, which gives a required continuous bandwidth of 80 MB/s. To guarantee this bandwidth, a single-edge triggered (SET) 100-MHz 8-b SDRAM interface was implemented. Depending on the phase relation between the 100-MHz SET and the 20-MHz DET clock domain of the surrounding blocks, the data and control signals are read into or written out of the interface.

Since the EPNI interpolation algorithm was chosen and two lines are processed simultaneously, the interpolation unit needs seven lines in addition to the two lines coming from the correction unit. Fig. 14 depicts the seven line memories, connected as one large first-in first-out (FIFO) memory with a register chain at each line-memory output, which together form the actual CFA pixel array used for the interpolation of the two marked center pixels. The architecture of the arithmetic part of the algorithm is shown in Fig. 15. First, the pixel neighborhood is examined to decide on the direction used for the green interpolation. Then the required green values are calculated.


Fig. 15. Architecture of arithmetic part of interpolation algorithm.

Fig. 16. Architecture of color space transformation.

After the green values are known, the red and blue values of the pixels being interpolated are calculated using three smooth-hue transition interpolators which work in the logarithmic exposure space. Finally, the RGB values of each pixel are selected and output. The required memory bandwidth of this interpolation algorithm is 700 MB/s, and a total of 580 million operations/s (not including round and shift operations) are performed by the interpolation unit.

The color space transformation unit performs the previously described 6 × 3 matrix transformation. Its architecture is presented in Fig. 16. First, the three additional polynomial terms are calculated. Then the complete six-dimensional vector is fed into three identical sub-blocks (one for each color component) which perform the actual transformation. Here again, only one transformation unit was implemented for both lines together, using the same DET clocking scheme as the correction unit.

The color space transformation unit performs 240 million signed multiplications and 600 million signed MAC operations each second.

The autofocus and illumination control unit calculates a reduced-resolution luminance image and, over a programmable window, the SPSMD focus criterion. The architecture of this focus and illumination unit is presented in Fig. 17. The luminance information of each pixel is first calculated according to (17):

(17)

This luminance value is passed to the SPSMD calculation as well as to the illumination control. After a 4:1 decimation, the reduced luminance image is output during each frame at a 10-MHz rate for further processing. The focus criterion is calculated and becomes available at the output of the ASIC at the end of each frame.
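For illustration, the luminance calculation and the 4:1 decimation could look as follows. Since (17) is not reproduced above, the Rec. 601 weights and the 2 × 2 block averaging used here are only common stand-ins for the actual hardware coefficients and decimation scheme.

import numpy as np

def luminance(rgb):
    """Per-pixel luminance; Rec. 601 weights are an assumption, not taken from (17)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def decimate_4to1(y):
    """4:1 decimation by averaging 2 x 2 blocks (one possible reading of the ratio)."""
    h, w = y.shape
    return y[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))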

Fig. 17. Architecture of focus and illumination unit.

Fig. 18. Architecture of scan converter unit.

The focus and illumination unit has a 20-MHz DET clock domain and performs 230 million operations each second.

Another feature of this ASIC is the scan converter. An SDRAM as temporary frame memory and a PAL video encoder IC are the only external components required. Fig. 18 illustrates the architecture of this unit. Simultaneous write and read operations for the frame buffering of the scan converter require a complex scheduling solution. The decimated progressive-scan images are stored in the external memory at a rate of 30 frames/s and read back in an interlaced format at 50 fields/s. On-chip FIFO memories couple the 10-MHz 24-b input pixel stream to the 100-MHz clocked SDRAM control unit, and synchronize the digital image data to the 14.75-MHz domain of the external video encoder IC.
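The core of the scan conversion, reading back a buffered progressive frame as two interlaced fields, can be pictured with the short sketch below. It only shows the odd/even line selection; the 30-frames/s to 50-fields/s rate conversion, the SDRAM scheduling, and the clock-domain FIFOs described above are omitted.

def split_fields(frame):
    """Split a progressive frame (a sequence of lines) into the two interlaced fields.

    The real scan converter reads these fields back from the external SDRAM at
    50 fields/s for the PAL video encoder; this sketch covers only the line selection.
    """
    top_field = frame[0::2]      # even-numbered lines
    bottom_field = frame[1::2]   # odd-numbered lines
    return top_field, bottom_field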


The whole functionality of this multifunction ASIC is not used in every application. In order to save power and to reduce noise in the back-end when certain blocks are not in use, each block has its own clock distribution network, which can be shut down by clock gating. All clocks inside the ASIC are derived from a 100-MHz clock with a 50% duty cycle and from the 14.75-MHz video encoder clock.

VI. DESIGN METHODOLOGY AND REALIZATION

The IC contains RAMs, LVDS receivers, and standard cell blocks. The full-custom LVDS receivers are provided with isolated power supplies from individual package pins. The standard cell blocks use an HDL/synthesis methodology; synthesis was performed with Synopsys. A cycle-true C description of the ASIC was built for verification purposes and test vector generation. Timing-driven layout is used for cell placement and routing.


Chip floor planning and assembly are done with the Silicon Ensemble toolset by Cadence. Commercial design rule checking (DRC), electrical rule checking (ERC), and layout-versus-schematic (LVS) tools are used for physical verification. Built-in self-test structures for the RAM macro cells, partial scan paths, and a configurable test port ensure testability. A total of eleven clock domains, partly in DET design technique, have been implemented; they carry three phase-locked clocks and one asynchronous clock. Domain-wise clock gating allows power to be reduced when some functionality of the ASIC is unused. The datapath has been optimized with respect to word widths.

The ASIC is fully operational at 30 frames/s with a supply voltage ranging from 2.5 to 3.3 V. For the power measurements, real video images were used and all clock domains were active. Due to the lack of 2.5-V 64-Mb SDRAMs, the colorwise correction mode was used for the 2.5-V power measurements and the scan converter was operational with dummy data. Fig. 19 shows a die photograph of the ASIC. The chip features are summarized in Table II.

Fig. 19. Die micrograph with enlarged LVDS receiver.

TABLE II CHIP OVERVIEW

VII. CONCLUSION

A digital 30-frames/s megapixel real-time CMOS image processor ASIC has been implemented. By its complex interpolation algorithm, the 6 × 3 matrix color space transformation, and the enhanced autofocus criterion calculation, it improves the system performance of 1-CCD color motion cameras. The pixelwise correction of CCD imperfections such as dark current and white gain even allows the use of lower-grade CCD sensors, which reduces system cost. The ASIC performs 1790 million operations each second, not including round and shift operations. The maximum word width is 38 bits. The total memory bandwidth including all internal and external RAMs is 1620 MB/s. Implemented in a 0.35-µm CMOS technology, the ASIC uses a chip area of 49 mm².

This image processor with its algorithms could also be used to improve the quality of today's CMOS image sensors. Especially the pixelwise correction mode would help to mitigate some of the disadvantages of CMOS image sensors. Furthermore, implementing these algorithms together with a CMOS sensor on one chip would result in an improved camera-on-a-chip solution.

ACKNOWLEDGMENT

The authors wish to thank S. Oetiker and B. Schreier for their excellent work during the design of this ASIC, and M. Brändli, A. Burg, J. Hertle, R. Reutemann, and Y. Lehareinger for their essential contributions to this successful design.

REFERENCES

[1] H. Zen et al., "A new digital signal processor for progressive scan CCD," IEEE Trans. Consumer Electron., vol. 44, pp. 289–296, May 1998.
[2] M. Loinaz et al., "A 200-mW 3.3-V CMOS color camera IC producing 352 × 288 24-b video at 30 frames/s," in Dig. Tech. Papers, IEEE Int. Solid-State Circuits Conf., 1998, pp. 168–169.
[3] J. E. Adams Jr., "Interactions between color plane interpolation and other image processing functions in electronic photography," in SPIE—Cameras and Systems for Electronic Photography and Scientific Imaging, vol. 2416, Bellingham, WA, Feb. 1995, pp. 144–151.
[4] J. E. Adams Jr., "Design of practical color filter array interpolation algorithms for digital cameras," in SPIE—Real-Time Imaging II, vol. 3028, Bellingham, WA, Feb. 1997, pp. 117–125.
[5] "KAI-1010 Series—Performance Specifications," Eastman Kodak Co., 1998.
[6] "Near-Infrared Blocking Filter," Balzers Thin Films, BD 800 110 RE (0699-1), 1999.
[7] "Colorimetry," Commission Internationale de L'Éclairage (CIE), Vienna, Austria, 2nd ed., Pub. CIE 15.2, 1986.
[8] D. B. Judd, Color in Business, Science and Industry. New York: Wiley, 1975.
[9] C. S. McCamy et al., "A color-rendition chart," J. Appl. Photographic Eng., vol. 2, no. 3, pp. 95–99, 1976.
[10] "Munsell book of color—Matte finish collection," Munsell Color, Baltimore, MD, pt. 40 291, 1976.
[11] "Ceramic Color Standards Series II," CERAM Research, Ltd., Staffordshire, U.K., 1990.
[12] NV-S700 Training Manual, no. VRD9205D101, Panasonic.
[13] P. Blessing et al., "Passive autofocus for digital endoscopic imaging systems," in SPIE—Biomedical Diagnostic, Guidance, and Surgical-Assist Systems, vol. 3595, 1999, pp. 148–157.



[14] B. Liao, "A Study of Camera Focusing in Different Applications," Ph.D. dissertation, Hamburg, Germany, 1993 (in German).
[15] R. A. Jarvis, "Focus optimization criteria for computer image processing," Microscope, 1976.
[16] E. P. Krotkov, Active Computer Vision by Cooperative Focus and Stereo. New York: Springer, 1989.
[17] K. Ooi et al., "An advanced autofocus system for video camera using quasi-condition reasoning," IEEE Trans. Consumer Electron., vol. 36, Feb. 1990.

Daniel Doswald (M’00) was born in Zürich, Switzerland, in 1971. He received the Dipl. El.-Ing. (M.Sc.) and Dr. sc. techn. (Ph.D.) degrees in electronic engineering from the Swiss Federal Institute of Technology (ETH), Zürich, in 1996 and 2000, respectively. He joined the Integrated Systems Laboratory (IIS) of ETH, where he worked as a Research and Teaching Assistant in the field of ASIC and system design and test. His research interests are in digital signal processing and mixed signal circuits with applications to image processing, digital audio/video, and system-oriented VLSI design.

Jürg Häfliger was born in Baden, Switzerland, in 1971. He received the Dipl. El.-Ing. (M.Sc.) degree in electronic engineering from the Swiss Federal Institute of Technology (ETH), Zürich, Switzerland, in 1997. He joined the Institute of Biomedical Engineering and Medical Informatics of the ETH, working in the area of high-definition television systems and color science.

Patrick Blessing was born in Menzikon, Switzerland, in 1970. He received the Dipl. Masch.-Ing. (M.Sc.) and Dr. sc. techn. (Ph.D.) degrees in mechanical engineering from the Swiss Federal Institute of Technology (ETH), Zürich, Switzerland, in 1996 and 2000, respectively. He joined the Institute of Biomedical Engineering and Medical Informatics of the ETH, working in the fields of digital control of passive focus and illumination systems.

1743

Norbert Felber was born in Trimbach, Switzerland, in 1951. He received the Dipl. Phys. (M.Sc.) in 1976 from the Swiss Federal Institute of Technology (ETH), Zürich, Switzerland. Subsequently, he was a Research Assistant at the Laboratory of Applied Physics, ETH, where he received the Dr. sc. nat. (Ph.D.) degree in 1986. In 1987, he joined the Integrated Systems Laboratory (IIS) of ETH, where he is currently a Research Associate and Lecturer in the field of VLSI design and test. His research interests are in telecommunications, digital signal processing (digital filters, audio, video, pattern recognition, and image processing), optoelectronics, measurement techniques, and device characterization.

Peter Niederer received the degree in theoretical physics at the University of Zürich and the Ph.D. degree at the Swiss Federal Institute of Technology (ETH) Zürich, Switzerland, in 1967 and 1972, respectively. From 1973 to 1974, he was a Research Engineer at the Biomedical Department, General Motors Research Laboratories, Warren, MI. He then joined the Institute of Biomedical Engineering, ETH Zürich, as a Senior Researcher. In the spring of 1980, he was a Visiting Faculty Member at the University of Houston, TX. He has been a Full Professor of Biomedical Engineering at the ETH Zürich since 1992. In the field of medical optics, he has collaborated with industry relating to high-definition endoscopy. He has been a Consultant in the Biomed II program of the EU and is Editor-in-Chief of Technology and Health Care. Prof. Niederer received the Goetz Prize from the Faculty of Medicine at the University of Zurich for his work on injury biomechanics in 1980.

Wolfgang Fichtner (M’79–SM’84–F’90) received the Dipl. Ing. degree in physics and the Ph.D. degree in electrical engineering from the Technical University of Vienna, Vienna, Austria, in 1974 and 1978, respectively. From 1975 to 1978, he was an Assistant Professor in the Department of Electrical Engineering, Technical University of Vienna. From 1979 through 1985, he was with AT&T Bell Laboratories, Murray Hill, NJ. Since 1985, he has been Professor and Head of the Integrated Systems Laboratory, Swiss Federal Institute of Technology (ETH), Zürich, Switzerland. In 1993, he founded ISE Integrated Systems Engineering AG, a company in the field of technology computer-aided design. Prof. Fichtner won the Andy S. Grove Field Award for his work on numerical modeling of semiconductor devices in 2000. He is a member of the Swiss National Academy of Engineering.
