
IEEE Transactions on Consumer Electronics, Vol. 55, No. 4, November 2009

A Novel Real-Time DSP-Based Video Super-Resolution System

Sebastian Lopez, Member, IEEE, Gustavo M. Callico, Member, IEEE, Felix Tobajas, Jose F. Lopez and Roberto Sarmiento

Abstract — The possibility of increasing the spatial resolution of video sequences is becoming extremely important in present-day multimedia systems. In this sense, super-resolution represents a smart way to obtain high-resolution video sequences from a finite set of low-resolution video frames. This set of low-resolution images must be obtained under different capture conditions, from different spatial positions and/or from different cameras; this is the super-resolution paradigm, which is one of the fundamental challenges of sensor fusion. However, the vast computational cost associated with common super-resolution techniques jeopardizes their usefulness for real-time consumer applications. To alleviate this drawback, an implementation of a proprietary super-resolution algorithm mapped onto a hardware platform based on a Digital Signal Processor (DSP) is presented in this paper. The results obtained show that, after an incremental optimization procedure, we are able to obtain super-resolved CIF video sequences (352×288 pixels) at 38 frames per second¹.

Index Terms — Super-resolution, DSP, interpolation, motion estimation.

¹ This work has been supported by the Spanish Ministry of Science and Innovation (MICINN) under the project DR. SIMON (TEC2008-06846-C02-02). S. López, G. M. Callicó, F. Tobajas, J. F. López and R. Sarmiento are with the Institute of Applied Microelectronics (IUMA) of the University of Las Palmas de Gran Canaria, E-35017, Las Palmas de Gran Canaria, Spain. Contributed Paper. Manuscript received October 9, 2009.

I. INTRODUCTION

Contrary to common belief, pixel count is not an appropriate measure of the resolution of an image. Instead, the resolution of an image should be defined in terms of the fine details that are visible. In this sense, the industry practice of increasing the pixel count and reducing the pixel size does not always yield improved resolution. To guarantee the success of this strategy, dedicated hardware components such as powerful optics and complex image stabilization mechanisms are required. Furthermore, resolution enhancement by sensor miniaturization is restricted by the diffraction limit and the amount of incoming light [1]. One of the main techniques to increase the resolution of a digital image in a smart way is super-resolution. The theoretical basis of super-resolution is the generalized Nyquist theorem [2], which implies that a signal can be recovered from N sub-sampled signals, each with a sampling frequency N times smaller than the Nyquist frequency.
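Informally, and in its standard textbook formulation (stated here for clarity, not reproduced from the paper), the generalized sampling theorem says that a signal x(t) band-limited to |ω| ≤ σ, whose Nyquist sampling period is T = π/σ, can be exactly recovered from the outputs of N linear channels h_1, ..., h_N, each sampled N times slower than the Nyquist rate:

```latex
\[
  y_k[n] = (h_k * x)(nNT), \qquad k = 1, \dots, N, \qquad T = \frac{\pi}{\sigma},
\]
```

provided the channel frequency responses are linearly independent over the band. In the super-resolution setting, the sub-pixel-shifted, blurred, and down-sampled low-resolution frames play the role of the N channels.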

Following this reasoning, super-resolution uses the spatial and temporal correlation of a sequence of images with small shifts between them, obtaining a high-resolution image from a set of low-resolution images. For this reason, super-resolution is currently considered an outstanding technique for improving the quality of the images received by the users of consumer applications such as video streaming on the Internet, videoconferencing, or mobile multimedia devices, to name just a few.

There are several approaches to super-resolution; for further details, interested readers are referred to the extensive survey published in [3]. One of the first proposals was the frequency-domain super-resolution algorithm introduced by Huang and Tsay in [4]. In other state-of-the-art work, super-resolution is posed as an inverse problem, and thus a regularized reconstruction strategy is used [5]-[9]. A different approach is based on Projections Onto Convex Sets (POCS), an iterative technique for incorporating prior knowledge about the high-resolution image [10]. Along these lines, Irani and Peleg [11] proposed an iterative back-projection approach similar to the procedure used in tomography. Unfortunately, all these algorithms tend to be extremely slow due to their elevated computational cost. For this reason, efficient implementations are a must for super-resolution applications under strict real-time constraints.

Due to their balanced combination of flexibility and hardware performance, DSPs have proven to be feasible solutions for the implementation of real-time multimedia systems, such as H.264/AVC video decoders [12] and encoders [13], or audio processors [14]. However, to the best of the authors' knowledge, the suitability of DSP-based hardware platforms for accelerating super-resolution algorithms has not yet been investigated. In order to fill this niche, a low-cost video super-resolution algorithm, together with its implementation on a general-purpose DSP, is presented, implemented, and evaluated in this paper.

The remainder of this paper is organized as follows. In Section II the proposed super-resolution algorithm is described, while in Section III the main features of the hardware platform used in this work are presented. In Section IV, the most significant implementation results, in terms of quality of the super-resolved images and processing speed within the DSP, are reported.

II. SUPER-RESOLUTION ALGORITHM

Generally speaking, super-resolution algorithms can be classified as [1]:


a. Fusion algorithms: based on merging several motion-corrected low-resolution frames to form a higher-resolution one.
b. Restoration algorithms: the aim of this type of algorithm is to recover the image spectrum beyond the spatial bandwidth of the imaging system.
c. Synthesis algorithms: a class of super-resolution algorithms that use model-based techniques to improve the resolution of the input frames.

The super-resolution algorithm introduced in this work is based on image fusion, with a non-iterative behavior, which represents an advantage for real-time implementation, in contrast with the iterative nature of the fusion algorithms in the state of the art. Furthermore, it is applied to video sequences, so the number of input and output frames remains constant, which is known as dynamic super-resolution throughout the literature. In this case, in order to obtain a high-resolution frame from a set of low-resolution images, the information contained in the nearby frames must be considered. This set of frames is referred to as the sliding frames window in this work. The main stages of this super-resolution algorithm, depicted in Fig. 1, are the following:


Fig. 1. General scheme of the super-resolution algorithm.

1. Motion estimation. This stage estimates the motion vectors between the current frame and each frame within the sliding frames window. In order to estimate the motion vectors, the frame is split into macroblocks, just like in a video compression scheme, using the full-search block matching algorithm as the search method and the sum of absolute differences (SAD) as the distortion measure between the reference and the current macroblock (a minimal sketch of this search is given after this list). An important feature of the motion estimator is its precision, which should be at least the inverse of the zoom factor. Although in this work the zoom factor is always equal to 2 (the super-resolved frames are four times bigger in area than the low-resolution ones), the motion vectors are computed with quarter-pixel precision in order to enhance the quality of the final super-resolved image, as will be explained later.

2. Shift and add. In order to super-resolve each frame of the video sequence, it is necessary to obtain information from the nearby frames and mix it with the current frame. This is the goal of this stage, which takes the estimated motion vectors and the frames within the sliding frames window in order to add information to the current frame. For doing that, it uses a very high resolution grid, whose size is determined by the precision of the motion estimation stage. As the motion estimation is performed at quarter-pixel precision, the size of the very high resolution grid is sixteen times larger (four times larger in each spatial direction) than the size of the low-resolution grid. This quarter-pixel very high resolution grid is outlined in Fig. 2, where the original low-resolution pixels are represented by solid squares, while the pixels incorporated from other frames within the sliding frames window are represented by numbered squares that identify the frame they belong to. In addition, the solid thin lines represent half-pixel positions, while the dashed ones represent quarter-pixel positions.

Fig. 2. A quarter pixel very high resolution grid.

3. Holes filling. If the nearby frames do not contain enough information to fully fill the very high resolution grid, some positions on it remain empty; these are named holes within the scope of this work. To display the processed super-resolved frame properly, these holes are interpolated. In the super-resolution algorithm implemented in this work, these pixels are computed by applying bilinear interpolation to the original low-resolution pixels and the pixels incorporated by the shift and add stage. It is important to note that, as the zoom factor has been fixed to two, only the holes generated at half-pixel positions are interpolated during this filling process. Once all the empty half-pixel positions are computed, the resulting image is decimated by a factor of two in order to produce the final super-resolved image. In this process, the pixels incorporated by the shift and add stage at quarter-pixel positions (identified with the numbers 1 and 3 in Fig. 2) are lost, but they have been incorporated into the very high resolution grid in order to improve the interpolation performed at this last stage of the super-resolution algorithm.
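As an illustration of the motion estimation stage, the following minimal C sketch performs full-search block matching with the SAD criterion at integer-pel precision. It is a simplified sketch, not the authors' code: the quarter-pel refinement, the 24×24 search area of Section IV, and all DSP-specific optimizations are omitted, and all identifiers are illustrative. Frames are assumed to be padded (see the image pad unit in Section III) so that candidate blocks never fall outside the buffer.

```c
#include <stdint.h>
#include <stdlib.h>

#define MB     16   /* macroblock size, as fixed in Section IV     */
#define RANGE   4   /* +/- displacement range (illustrative value) */

typedef struct { int dx, dy; } mv_t;

/* Sum of absolute differences between the macroblock of `cur` at
 * (cx, cy) and the block of `ref` displaced by (dx, dy).           */
static uint32_t sad(const uint8_t *cur, const uint8_t *ref, int stride,
                    int cx, int cy, int dx, int dy)
{
    uint32_t acc = 0;
    for (int y = 0; y < MB; y++)
        for (int x = 0; x < MB; x++)
            acc += abs(cur[(cy + y) * stride + (cx + x)] -
                       ref[(cy + dy + y) * stride + (cx + dx + x)]);
    return acc;
}

/* Full search: exhaustively test every candidate displacement in the
 * search window and keep the one with the lowest SAD.               */
static mv_t full_search(const uint8_t *cur, const uint8_t *ref,
                        int stride, int cx, int cy)
{
    mv_t best = { 0, 0 };
    uint32_t best_sad = UINT32_MAX;
    for (int dy = -RANGE; dy <= RANGE; dy++)
        for (int dx = -RANGE; dx <= RANGE; dx++) {
            uint32_t s = sad(cur, ref, stride, cx, cy, dx, dy);
            if (s < best_sad) { best_sad = s; best.dx = dx; best.dy = dy; }
        }
    return best;
}
```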


III. DSP-BASED HARDWARE PLATFORM

The super-resolution algorithm detailed in the previous section has been mapped onto a low-cost, high-performance imaging and video development platform based on a fixed-point DSP able to perform 5760 million instructions per second (MIPS) at a clock rate of 720 MHz. Regarding its internal architecture, the DSP has eight highly independent functional units: six of them are ALUs that support single 32-bit, dual 16-bit, or quad 8-bit arithmetic instructions per clock cycle, while the other two are multipliers that support four 16×16-bit multiplications per clock cycle or, alternatively, eight 8×8-bit multiplications per clock cycle. These functional units are split into two datapaths, with one multiplier and three ALUs in each. The DSP uses a two-level internal memory architecture for program and data, as well as external memory. The first level of cache memory is extremely fast and provides 128 Kbits for program code (L1P) and 128 Kbits for data (L1D) that can be accessed without CPU stalls. The second level of cache memory (L2) offers 2 Mbits for both program and data, and is slightly slower. Finally, the external 32-Mbyte SDRAM can also be accessed by the DSP, but such accesses may cause CPU stalls. In addition, the DSP has three configurable video port peripherals that can support video capture and/or video display modes. Fig. 3 summarizes the internal architecture of the DSP considered in this work.

In order to ease the mapping task, the algorithm has been divided into the functional units shown in Fig. 4. Besides the motion estimation, shift and add, and holes filling units that implement the three processes described in Section II, the following additional units have been considered:

• Image pad: this unit is in charge of replicating the pixels that fall on the borders of the frames, in order to allow motion vectors pointing outside the region delimited by each frame (a minimal sketch of this border replication is given after this list).
• 1/Z: this unit introduces the one-frame delay necessary to correctly perform the motion estimation and shift and add processes.
• Upholes: this unit is in charge of obtaining the initial very high resolution grid from the information contained in each low-resolution input video frame within the sliding frames window.
• Resize: this unit decimates the final very high resolution grid by a factor of two in order to obtain the definitive super-resolved image with a zoom factor equal to 2.
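As a minimal illustration of the image pad unit, border replication can be expressed as clamped pixel addressing, so that a motion vector pointing outside the frame simply re-reads the nearest border pixel. The function name and the 8-bit luma assumption are illustrative, not taken from the authors' implementation.

```c
#include <stdint.h>

/* Border replication via coordinate clamping: any access outside the
 * w x h frame returns the nearest border pixel, which is equivalent
 * to physically replicating the border rows and columns.            */
static inline uint8_t pixel_padded(const uint8_t *img, int w, int h,
                                   int x, int y)
{
    if (x < 0)       x = 0;
    else if (x >= w) x = w - 1;
    if (y < 0)       y = 0;
    else if (y >= h) y = h - 1;
    return img[y * w + x];
}
```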

Fig. 3. DSP general diagram.

Fig. 4. Functional units of the implemented super-resolution algorithm.

IV. RESULTS

The complete super-resolution algorithm has been described using the C programming language in order to be implemented on the selected DSP using its associated compiler. The flexibility provided by this language allows changing a set of parameters that play a fundamental role within the super-resolution algorithm, including the length of the sliding frames window, the size of the search area in the motion estimation stage, and the dimensions of the macroblocks. However, with the aim of keeping the computational cost within affordable limits, the super-resolution implementation presented in this work has been obtained by fixing the length of the sliding frames window to two (the immediate backward and forward frames with respect to the frame to be super-resolved), the size of the macroblocks to 16×16 pixels, and the dimensions of the search area in the motion estimation stage to 24×24 pixels.

The first implementation obtained within this work was only able to process 160×128 video sequences at 3 frames per second, which is clearly insufficient for real-time consumer applications. For this reason, a set of techniques has been applied to modify the code and thereby accelerate the super-resolution process as much as possible. The techniques considered in this optimization procedure are the following:

a. Compiler options: the compiler used in this work has options that can improve the algorithm speed, such as loop unrolling, the use of SIMD instructions, etc.
b. Loop simplification: to allow an efficient pipeline schedule by the compiler, the loops in the code have to be as simple as possible; i.e., no function calls, control flow changes, data hazards, or large pieces of code should be placed inside the loops.
c. Multiple data memory access: the DSP can access up to 64 bits in a single instruction, making it possible to access eight bytes at a time and thereby accelerate the global process (see the sketch after Fig. 5).
d. Memory management: the most frequently used variables are placed in the DSP internal memory, speeding up access to these variables and, as a consequence, the whole super-resolution process. At this point, it is important to mention that, for QCIF or larger input video sequences, the very high resolution grid cannot be entirely placed in the internal memory because of the large dimensions of the former and the reduced size of the latter. Instead of allocating the whole grid in external memory, which would definitely slow down the super-resolution process, this work proposes dividing each input frame into small pieces in a manner that ensures that each very high resolution sub-grid separately fits into the internal memory. Once all the very high resolution sub-grids have been computed, the final super-resolved image is obtained by combining them together. An example of this partition strategy is summarized in Fig. 5, where a QCIF image is partitioned into four pieces in order to be super-resolved by the DSP.

Fig. 5. Partition of the input video frames.
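As an illustration of technique (c), the following portable C fragment computes the SAD of one row of pixels while reading eight pixels per memory access. On the actual DSP, this access pattern is what allows the compiler (or hand-written intrinsics) to issue a single 64-bit load instead of eight 8-bit ones; the row width is assumed to be a multiple of eight, and all names are illustrative.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* SAD of one n-pixel row (n a multiple of 8), fetching the data with
 * two 64-bit loads per iteration instead of sixteen byte loads.     */
static uint32_t sad_row8(const uint8_t *a, const uint8_t *b, int n)
{
    uint32_t acc = 0;
    for (int i = 0; i < n; i += 8) {
        uint64_t wa, wb;
        memcpy(&wa, a + i, 8);            /* one wide load */
        memcpy(&wb, b + i, 8);            /* one wide load */
        for (int k = 0; k < 8; k++)       /* per-byte |a - b|, SIMD-able */
            acc += abs((int)((wa >> (8 * k)) & 0xFF) -
                       (int)((wb >> (8 * k)) & 0xFF));
    }
    return acc;
}
```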

Based on these techniques, a set of optimizations has been applied to the original code mapped onto the DSP. Fig. 6 shows the computation times obtained for each optimization step when applied incrementally, the optimizations shown on the abscissa axis being the following:

• O1: First version, without any optimization.
• O2: The compiler options are used. This enables the use of SIMD instructions and loop unrolling techniques, and yields a better pipeline schedule.
• O3: Inefficient parts of the code are rewritten.
• O4: Floating-point operations are removed. The DSP used in this work is a fixed-point one, so floating-point operations are emulated using several instructions and cycles.
• O5: Optimized library functions are used for the calculation of the SAD in the motion estimator.
• O6: The most frequently used variables are placed in internal memory, drastically reducing the access time to these variables.
• O7: Multiple memory accesses per cycle, accessing up to eight bytes in a single memory access.
• O8: The memory accesses of the holes filling stage are optimized.


Fig. 6. Computation time for each of the optimizations introduced.

All the time measurements shown in Fig. 6 were obtained while the DSP was super-resolving 160×128 video sequences. As can be seen from this figure, one of the most important optimization steps is the use of the compiler options (O2), since the resulting computation time is three times shorter than that of the first version (O1). Other very important optimizations are the removal of floating-point operations from the algorithm and the allocation of the data in internal memory. After this optimization process, the DSP is able to super-resolve CIF video sequences at 20 frames per second, which means a reduction in computation time of almost 98% when compared with the first mapped version. However, the frame rate achieved may not be enough for certain applications that operate at frame rates of 25 or 30 frames per second. For this reason, as an additional optimization step, the full-search motion estimation algorithm has been replaced inside the DSP by the new three-step search algorithm, which has been shown to exhibit a reduced computational load with negligible quality losses with respect to the full-search algorithm for dynamic super-resolution systems [15]. Thanks to this strategy, the hardware platform selected for this work is able to super-resolve CIF video sequences at 38 frames per second.
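For reference, the coarse-to-fine pattern underlying the three-step search is sketched below, reusing the sad() routine and mv_t type from the earlier sketch. This shows only the classic variant; the new three-step search actually used here additionally checks center-biased points in the first step, as analyzed in [15].

```c
/* Classic three-step search: start with a coarse step, test the eight
 * neighbors of the current best position, halve the step, and repeat.
 * Checks roughly 25 candidates instead of the (2R+1)^2 of full search. */
static mv_t three_step_search(const uint8_t *cur, const uint8_t *ref,
                              int stride, int cx, int cy)
{
    mv_t best = { 0, 0 };
    uint32_t best_sad = sad(cur, ref, stride, cx, cy, 0, 0);
    for (int step = 4; step >= 1; step /= 2) {
        mv_t c = best;                     /* recenter on current best */
        for (int dy = -step; dy <= step; dy += step)
            for (int dx = -step; dx <= step; dx += step) {
                if (dx == 0 && dy == 0) continue;  /* center already tested */
                uint32_t s = sad(cur, ref, stride, cx, cy,
                                 c.dx + dx, c.dy + dy);
                if (s < best_sad) {
                    best_sad = s;
                    best.dx = c.dx + dx;
                    best.dy = c.dy + dy;
                }
            }
    }
    return best;
}
```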

Nevertheless, in order to consider our implementation a proper candidate for consumer imaging systems, not only the processing capabilities but also the quality of the images obtained by means of the proposed super-resolution algorithm must be considered. Consequently, the performance of the super-resolution algorithm, mapped onto the previously detailed hardware platform, is compared with bilinear interpolation. For this purpose, the setup shown in Fig. 7 is now introduced. As observed from this figure, an original high-resolution input sequence is first decimated in order to obtain an aliased low-resolution sequence that serves as the input to both the super-resolution and the bilinear interpolation algorithms. Once both algorithms have processed the input data, their outputs are compared to the original high-resolution sequence by means of the Peak Signal-to-Noise Ratio (PSNR). Finally, both PSNRs are compared in order to determine whether the super-resolution algorithm exceeds the performance provided by the bilinear interpolation.

The first frame of each of the test sequences used for this purpose is depicted in Fig. 8. According to the nature of the motion exhibited by each sequence, they are classified as sequences dominated by global motion (GALDAR and MOBILE) and sequences dominated by local movements (FOREMAN and DEADLINE). Finally, it is worth mentioning that all these video sequences are in YUV 4:2:0 format and have a size of 176×144 pixels (QCIF). Table 1 shows the average PSNR results obtained for each of the test video sequences by applying the setup shown in Fig. 7.
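For reference, the PSNR figures reported below follow the standard definition for 8-bit video, computed against the original high-resolution frames and averaged over each sequence:

```latex
\[
  \mathrm{PSNR} = 10 \log_{10} \frac{255^2}{\mathrm{MSE}}, \qquad
  \mathrm{MSE} = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N}
                 \left( I_{\mathrm{orig}}(i,j) - I_{\mathrm{proc}}(i,j) \right)^{2}.
\]
```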

TABLE I
INTERPOLATION AND SUPER-RESOLUTION RESULTS (AVERAGE PSNR, dB)

Sequence     Interpolation   Super-resolution
GALDAR       23.80           24.50
MOBILE       22.36           23.07
FOREMAN      30.66           31.00
DEADLINE     23.32           23.32

Fig. 7. Test setup (the original high-resolution sequence is decimated, processed by both interpolation and super-resolution, and each M×N output is compared against the original).

Fig. 8. Snapshot of the first frame of the sequences used for the test (GALDAR, MOBILE, FOREMAN, and DEADLINE).

As can be concluded from Table 1, the super-resolution algorithm implemented on the DSP is able to improve, or at least match, the PSNR provided by the bilinear interpolation algorithm. In particular, Table 1 shows that the major improvements in terms of PSNR are obtained for the sequences dominated by global motion.


However, as the PSNR is not always the best metric for evaluating the real quality of a video sequence, it is also necessary to visually compare the super-resolved and the interpolated test sequences. For this purpose, Fig. 9 shows enlarged details of the images resulting from applying the bilinear interpolation and the super-resolution processes to some of the test sequences shown in Fig. 8. As can be clearly observed, in all cases the edges are better recovered when applying super-resolution instead of interpolation, resulting in more pleasing images.

Fig. 9. Enlarged details of the results obtained by applying interpolation (left) and super-resolution (right) for GALDAR ((a) and (b)), MOBILE ((c) and (d)), and FOREMAN ((e) and (f)).

Finally, Fig. 10 shows the setup designed to validate the DSP-based video super-resolution system described in this paper, composed of a consumer video camera that provides the input video sequence to be super-resolved, a personal computer for downloading the super-resolution code onto the DSP, a projector that displays the super-resolved video sequence, and the DSP board itself.

Fig. 10. System setup.

V. CONCLUSIONS

Super-resolution techniques represent a compelling solution for increasing the resolution of a video sequence, in the sense that they provide superior image quality when compared with classical interpolation techniques, but at the expense of a severe increase in computational cost. For this reason, the usefulness of these techniques for real-time applications has been very limited. In order to overcome this issue, this paper has presented a novel implementation of a low-cost super-resolution algorithm on a DSP-based hardware platform. Starting from untimed C code, a successful methodology has been applied to map and verify the code on the DSP, including a set of optimizations to increase its processing speed. As a result, the selected platform is able to super-resolve CIF video sequences at 38 frames per second. In order to extend this processing capability to larger image sizes, further memory management strategies should be considered.

ACKNOWLEDGMENT

The authors would like to thank Dr. Valentín de Armas, Titular Professor at the University of Las Palmas de Gran Canaria, and Oliver Sosa and Eduardo Guerra, former M.Sc. students of the University of Las Palmas de Gran Canaria, for their very valuable contributions to this work, as well as Prof. Derek Abbott, from the University of Adelaide (Australia), for his inspiring comments.

REFERENCES

[1] T.Q. Pham, "Spatiotonal adaptivity in super-resolution of undersampled image sequences," Ph.D. Thesis, Oct. 2006.
[2] A. Papoulis, "Generalized sampling theorem," IEEE Transactions on Circuits and Systems, vol. 24, no. 11, pp. 652-654, Nov. 1977.
[3] S.C. Park, M.K. Park and M.G. Kang, "Super-resolution image reconstruction: a technical overview," IEEE Signal Processing Magazine, vol. 20, no. 3, pp. 21-36, May 2003.
[4] T.S. Huang and R.Y. Tsay, "Multiple frame image restoration and registration," in Advances in Computer Vision and Image Processing, T.S. Huang, Ed. Greenwich, CT: JAI, vol. 1, pp. 317-339, 1984.
[5] M.C. Hong, M.G. Kang and A.K. Katsaggelos, "A regularized multichannel restoration approach for globally optimal high resolution video sequence," Proceedings of SPIE Video Communications and Image Processing, vol. 3024, pp. 1306-1316, Jan. 1997.
[6] M.C. Hong, M.G. Kang and A.K. Katsaggelos, "An iterative weighted regularized algorithm for improving the resolution of video sequences," Proceedings of the International Conference on Image Processing, vol. 2, pp. 474-477, Oct. 1997.
[7] M.G. Kang, "Generalized multichannel image deconvolution approach and its applications," Optical Engineering, vol. 37, no. 11, pp. 2953-2964, Nov. 1998.
[8] R.C. Hardie, K.J. Barnard, J.G. Bognar, E.E. Armstrong and E.A. Watson, "High-resolution image reconstruction from a sequence of rotated and translated frames and its application to an infrared imaging system," Optical Engineering, vol. 37, no. 1, pp. 247-260, Jan. 1998.
[9] N.K. Bose, S. Lertrattanapanich and J. Koo, "Advances in superresolution using L-curve," Proceedings of the International Symposium on Circuits and Systems, vol. 2, pp. 433-436, May 2001.
[10] H. Stark and P. Oskoui, "High resolution image recovery from image-plane arrays, using convex projections," Journal of the Optical Society of America A, vol. 6, no. 11, pp. 1715-1726, Nov. 1989.
[11] M. Irani and S. Peleg, "Improving resolution by image registration," CVGIP: Graphical Models and Image Processing, vol. 53, no. 3, pp. 231-239, May 1991.
[12] F. Pescador, G. Maturana, M.J. Garrido, E. Juarez and C. Sanz, "An H.264/AVC video decoder based on a latest generation DSP," IEEE Transactions on Consumer Electronics, vol. 55, no. 1, pp. 205-212, Feb. 2009.
[13] R. Jianfeng, N. Kehtarnavaz and M. Budagavi, "Computationally efficient mode selection in H.264/AVC video coding," IEEE Transactions on Consumer Electronics, vol. 54, no. 2, pp. 877-886, May 2008.
[14] T.-H. Tsai, C.-N. Liu and J.-H. Hung, "VLIW-aware software optimization of AAC decoder on parallel architecture core DSP (PACDSP) processor," IEEE Transactions on Consumer Electronics, vol. 54, no. 2, pp. 933-939, May 2008.
[15] G.M. Callico, S. Lopez, O. Sosa, J.F. Lopez and R. Sarmiento, "Analysis of fast block matching motion estimation algorithms for video super-resolution systems," IEEE Transactions on Consumer Electronics, vol. 54, no. 3, pp. 1430-1438, Aug. 2008.

Sebastián López was born in Las Palmas de Gran Canaria, Spain, in 1978. He received the Electronic Engineering degree from the University of La Laguna in 2001, obtaining regional and national awards for his academic record. He received his Ph.D. degree from the University of Las Palmas de Gran Canaria in 2006, where he is currently an Assistant Professor, carrying out his research activities at the Integrated Systems Design Division of the Institute for Applied Microelectronics (IUMA). Since 2008 he has been a member of the IEEE Consumer Electronics Society as well as a member of the technical program committee of the IEEE International Conference on Consumer Electronics. Additionally, he currently serves as an active reviewer for the IEEE Transactions on Circuits and Systems for Video Technology, the Journal of Real-Time Image Processing, and IET Electronics Letters, as well as a member of the Publications Review Committee of the IEEE Transactions on Consumer Electronics and of the technical program committee of the "Signal processing for multimedia" track of DATE 2010. His research interests include motion estimation algorithms and architectures, real-time super-resolution systems, video coding standards, and reconfigurable architectures.

Gustavo M. Callicó was born in Granada, Spain, in 1970. He received the Telecommunication Engineering degree in 1995 and the Ph.D. degree and the European Doctorate in 2003, all from the University of Las Palmas de Gran Canaria (ULPGC) and all with honors. From 1996 to 1997 he held a research grant from the Spanish Ministry of Education, and in 1997 he was hired by the university as an electronics lecturer. In 1994 he joined the Institute for Applied Microelectronics (IUMA), and from 2000 to 2001 he was a visiting scientist at the Philips Research Laboratories (NatLab) in Eindhoven, The Netherlands, where he developed his Ph.D. thesis. He is currently an Assistant Professor at the ULPGC and carries out his research activities in the Integrated Systems Design Division of the IUMA. He has more than 50 publications in national and international journals and conferences and has participated in 16 research projects funded by the European Community, the Spanish Government, and international private industries. In addition, he is a member of the Publications Review Committee of the IEEE Transactions on Consumer Electronics. His current research fields include real-time super-resolution algorithms, synthesis-based design for SoCs, and circuits for multimedia processing and video coding standards, especially H.264.

Félix Tobajas was born in Zaragoza, Spain, in 1971. He received the M.S. and Ph.D. degrees in Telecommunication Engineering from the University of Las Palmas de Gran Canaria, Spain, in 1996 and 2001, respectively. He joined the Institute for Applied Microelectronics (IUMA) in 1996 and has participated in more than 15 research and industrial projects since then. From 1997 to 2001, he worked as a consultant for Vitesse Semiconductor Corporation, Camarillo (USA), on the development of high-speed communication circuits. In 2003, he received the best Ph.D. award in Telecommunication Networks and Services from the Spanish Association of Telecommunication Engineers. He is currently a Titular Professor at the Department of Electronic Engineering, University of Las Palmas de Gran Canaria. His research areas include real-time H.264 video coding/decoding systems, Networks-on-Chip, high-performance switches, and low-power VLSI design.

José F. López obtained the five-year degree in Physics (specializing in Electronics) from the University of Seville, Spain, in 1989. Since then, he has conducted his research at the Research Institute for Applied Microelectronics (IUMA), where he is part of the Integrated Systems Design Division. He also lectures at the School of Telecommunication Engineering of the University of Las Palmas de Gran Canaria (ULPGC), where he is responsible for the courses on Analogue Circuits and VLSI Circuits. In 1994, he obtained the Ph.D. degree, being awarded by the ULPGC for his research in the field of high-speed integrated circuits. Dr. López was with Thomson Composants Microondes (now United Monolithic Semiconductors, UMS), Orsay, France, in 1992. In 1995 he was with the Center for Broadband Telecommunications at the Technical University of Denmark (DTU), and in 1996, 1997, 1999, and 2000 he was funded by Edith Cowan University (ECU), Perth, Western Australia, to conduct research on low-power, high-performance integrated circuits and image processing. Dr. López has been actively involved in more than 15 research projects funded by the European Community, the Spanish Government, and international private industries. He has written around 70 papers in national and international journals and conferences.

Roberto Sarmiento is a Full Professor at the Telecommunication Engineering School of the University of Las Palmas de Gran Canaria (ULPGC), Spain, in the area of Electronic Engineering. He contributed to setting up this school, was Dean of the Faculty from 1994 to 1995, and was Vice-Chancellor for Academic Affairs and Staff at the ULPGC from 1998 to 2003. In 1993, he was a Visiting Professor at the University of Adelaide, South Australia, and later at Edith Cowan University, also in Australia. He is a founder of the Research Institute for Applied Microelectronics (IUMA) and Director of its Integrated Systems Design Division. Since 1990 he has published over 30 journal papers and book chapters and more than 100 conference papers. He has been awarded three six-year research periods by the National Agency for Research Activity Evaluation in Spain. He has participated in more than 35 projects and research programs funded by public and private organizations, acting as lead researcher in 15 of them. Among these projects, special mention goes to those funded by the European Union, such as GARDEN and the GRASS workgroup. He has also established several agreements with companies for the design of high-performance integrated circuits, the most important being those with Vitesse Semiconductor Corporation, California.
