MNR: a novel approach to correct MPEG temporal distortions


S. Delcorso et al.: MNR: A Novel Approach to Correct MPEG Temporal Distortions


Sandra Delcorso, Carolina Miro, and Joel Jung

Contributed Paper. Manuscript received January 15, 2003.

Abstract - As TV screens get ever larger, viewer tolerance to noise and artifacts has steadily declined. For next-generation TVs such as Flat TV, LCoS (Liquid Crystal on Silicon), etc., excellent picture quality becomes a distinguishing feature. Although the quality of most digital signals has been significantly improved, distortion and noise still creep in and remain an important source of picture degradation. Post-processing is a sensible solution that achieves a visual enhancement of the reconstructed images. Some of the artifacts, such as blocking and ringing, have already been widely considered. This paper therefore concentrates on a novel post-processing technique that reduces another kind of distortion known as "mosquito noise". This temporal "busyness" is removed by applying an adaptive temporal filter, operating in the frequency domain, that preserves the sharpness and naturalness of the reconstructed video signal. The proposed algorithm is effective and straightforward to implement: experimental results show that the presented temporal post-filtering process significantly improves the visual quality of video sequences. Moreover, the architectural options allow flexibility and cost trade-offs.

Index Terms - Temporal artifact, Mosquito noise, DCT-based, MPEG-2.

I. INTRODUCTION

As TV screens get ever larger, viewer tolerance to noise and artifacts has steadily declined. For next-generation TVs such as Flat TV, LCoS (Liquid Crystal on Silicon), etc., excellent picture quality becomes a distinguishing feature. Although the quality of most digital signals has been substantially improved in recent years, distortion and noise still creep in and remain an important source of quality degradation. The quantization of the DCT coefficients of each block produces highly noticeable impairments [14] in the reconstructed images, both because block contents are encoded without correlation between neighbours and because of the loss of information it introduces. This produces perceivable alterations in the reconstructed video, mainly consisting of annoying visual effects/artifacts, especially visible at medium and low bit-rates. It is now well known that post-processing video sequences at the receiver improves the subjective quality of any lossy coding process, allowing an impressive reduction of compression artifacts. Spatial artifacts such as blocking (tiled-effect aspect) and ringing (ghost effect) have already been widely studied ([3], [7] and [12]). This paper will therefore focus on another kind of visual effect, a temporal one, called mosquito noise, which manifests itself as a "flying bunch of mosquitoes". The paper is organized as follows: Section II describes the artifacts introduced by MPEG compression and the state of the art in temporal artifact reduction. Section III describes the proposed Mosquito Noise Reducer (MNR). The complexity evaluation of the MNR algorithm is performed in Section IV and the experimental results are given in Section V. Finally, conclusions are drawn and some ideas for further improvements are suggested.

II. MPEG CODING ARTIFACTS

Lossy signal compression is achieved by a quantization process carried out on the coefficients representing the signal. The information loss in this process is the root cause of a number of different artifacts in the reconstructed signal. The common types of distortions present in a block-based DCT video (such as MPEG) are typically as follows.

A. Blocking Artifact

The blocking artifact is one of the major drawbacks of block-based coding techniques. It appears as intensity discontinuities at the boundaries of adjacent DCT blocks in the reconstructed image, which look like a tiling effect. The discontinuities come from the separate coding of adjacent blocks: the higher the quantization, the stronger the blocking effect. Reducing the blocking artifact has already been widely addressed. An algorithm called DFD (DCT Frequency Deblocking), which works in the DCT frequency domain and dramatically reduces the blocking artifact, showed a level of performance among the best for MPEG-2 TV bit rates [3].

B. Ringing Artifact

The ringing artifact looks like the rippling of an edge (a kind of ghost effect). It is more pronounced along sharp edges in low-energy sections of an image. It is caused by a coarse quantization of the AC coefficients of the DCT.

C. Other Spatial Artifacts

Some other spatial artifacts are listed here. The "DCT basis image blocks" artifact manifests itself within the reconstructed block, which bears a distinct likeness to a DCT basis image; it looks as if the DCT basis image was drawn on top of the block. It is caused by coarse quantization of the coefficients, with a significant level of energy localized in a single coefficient.

0098-3063/03/$10.00 © 2003 IEEE

IEEE Transactions on Consumer Electronics, Vol. 49, No. 1, FEBRUARY 2003


The staircase effect results from an inadequate reconstruction or representation of image edges within blocks: the edge does not line up at the block borders with the edge in adjacent blocks, producing a staircase aspect. Colour bleeding causes the smearing of colour from one side of an edge to the other. It occurs at edges on smooth backgrounds where high-frequency chrominance information is coarsely quantized. This artifact is specific to strong chrominance edges.

D. Mosquito Noise Artifact

The mosquito noise effect manifests itself as a fluctuation of luminance/chrominance levels in a block on the boundary of moving objects and the background area. The intensity of the fluctuations is usually not large; however, since the human visual system is highly sensitive to change, this flickering becomes quite irritating. It is introduced by inter-frame coding: from frame to frame, the prediction error is coded with differing coarseness of quantization. Through the sequence, the same object might be coded differently, causing fluctuations of its luminance/chrominance value. The literature review shows that mosquito noise has not been thoroughly studied. Its definition is not even settled yet, and differs from one article to another. [1] sees mosquito noise when a sharp edge separating two uniform regions occurs within a block, whereas [6] defines it as a "form of edge busyness distortion sometimes associated with movement, characterized by moving artifacts and/or blotchy noise patterns superimposed over the objects". [14] describes it as a "fluctuation of luminance or chrominance levels in a block on the boundary of moving objects and the background area" from one frame to the other. Our definition matches the latter. We would add that mosquito noise mainly appears when the original sequences contain motionless, very noisy and uniform areas encoded at low or medium bit-rates. Many articles have been written on temporal noise, but very few specifically post-process MPEG sequences [2] [4] [5] [9] [10] [11]. The papers specific to MPEG post-processing are not primarily meant to deal with mosquito noise but with blocking and ringing, and their filtering happens to be incidentally effective at reducing mosquito noise. Finally, when a method performs spatial-temporal filtering on encoded signals [7] [8], it is always applied in the pixel domain.

Fig. 1. Mosquito noise reducer architecture (temporally weighted median filter and motion detector).

III. MOSQUITO NOISE REDUCER (MNR)

As its definition suggests, mosquito noise is a temporal noise and therefore requires temporal filtering. The architecture of the mosquito noise reducer algorithm is based on a temporally weighted, motion-controlled median filter (Figure 1). This median filter acts spatially as well as temporally on the DCT coefficients. It is applied on the full image; to avoid blurring the motion areas (identified by a threshold), the motion pixels are kept unchanged.

A. Motion Detector

Attention is paid to motion areas, because temporally filtering a motion area will blur the image, so a reliable distinction between motion and non-motion areas is crucial. It is based on the difference of two consecutive fields of the same parity. The resulting difference image contains both motion and noise; filtering then removes the noise such that only motion pixels remain. This motion detector is shown in Figure 2. Motion detection is applied on both luminance and chrominance to give reliable results. Average filters achieve the highest noise reduction. A 3*2 average filter, LPF1 (Figure 3), is first applied to the image difference. This filter is slightly different for luminance and chrominance: on the chrominance, a 2*2 average filter is applied to the U and V values. It reduces the noise sensitivity of the motion detector, which could otherwise wrongly interpret noise as motion information. A second filter, LPF2 (Figure 4), is then applied to suppress flickering artifacts between the current and previous frames; this time, for colour processing, the U and V pixels are combined.

Fig. 2. Motion detector.



Fig. 3. Low-pass filter LPF1 (chrominance [U][V]).

Fig. 4. Low-pass filter LPF2 (luminance [Y] and chrominance [UV]).

Finally, as the motion detector depends on signal differences, the absolute value (ABS) is then computed to straightforwardly separate motion pixels from non-motion pixels. If the motion value is greater than a certain threshold, the pixel is considered a motion pixel. Figure 5b shows the result of this detection algorithm; let us call it the motion map. It can be noticed that although the sky is very noisy, it is considered a motionless area. Figure 5a shows the corresponding original image.
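The detector chain (field difference, LPF1, LPF2, ABS, threshold) can be sketched in software. The 3*2 LPF1 size is taken from the text; the LPF2 kernel and the threshold value are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def motion_map(curr_field, prev_field, threshold=12):
    """Classify each pixel of the current field as motion / non-motion.

    Implements the chain of Fig. 2: difference of two same-parity
    fields, LPF1 (3*2 average), LPF2 (assumed 3*3 average), absolute
    value, then comparison with a threshold. The threshold value and
    the LPF2 kernel are assumptions for illustration.
    """
    diff = curr_field.astype(np.float64) - prev_field.astype(np.float64)

    # LPF1: 3*2 average, reduces the noise sensitivity of the detector
    diff = convolve2d(diff, np.ones((3, 2)) / 6.0, mode='same')

    # LPF2: second low-pass, suppresses flicker between frames (assumed 3*3)
    diff = convolve2d(diff, np.ones((3, 3)) / 9.0, mode='same')

    # ABS + threshold: pixels above the threshold are motion pixels
    return np.abs(diff) > threshold
```

A noisy but static area (like the sky in Fig. 5) stays below the threshold after the two averaging passes, while a moving object clears it.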

B. Temporally Weighted DCT Median Filter

Once motionless areas are detected, they can be filtered. The approach relies on the observation that the whole block's luminance/chrominance level flickers, so filtering the DC component will reduce this flickering effect.

Fig. 5b. The motion map: from the reconstructed picture, motion area in green (or dark), and motionless area in pink (or light).

As shown in Figure 6 and Figure 7, an 8*8 Discrete Cosine Transform is applied on the current, previous and next fields. Let M1 be the DC value to be filtered at position (i,j) in the current field, and M2 and M3 its vertical neighbors. M1' takes the median value of these three coefficients. In a second step, let M4 and M5 be the DC values at position (i,j) in the previous and following same-parity fields. The resulting value M1'' is the median value of M1', M4 and M5:

M1'' = median(M4, M5, median(M1, M2, M3))

Fig. 6. The median filter is applied on the DC coefficients.

Fig. 5a. Reconstructed picture of the “Thelma” sequence encoded at 2 Mbits/s.

The objective of this filter is to achieve noise reduction rather than blurring. Median filters are non-linear filters that are particularly effective when the noise pattern consists of strong, spike-like components and the characteristic to be preserved is edge sharpness.

This doubled median weights the temporal aspect of the filter: the result of a spatial filtering is exploited to perform a temporal filtering. Once the current field has been completely filtered, an inverse DCT is carried out. Finally, to avoid blurring the image, the mosquito noise reducer uses the motion map to distinguish motion areas from non-motion ones and replaces, on a block basis, the filtered pixels with the non-filtered ones when motion is detected. In other words, if the number of detected motion pixels within a block is greater than five, the block is considered a motion block and all its pixels are replaced by the original ones.
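A minimal software sketch of the doubled median: since the DC coefficient of an orthonormal 8*8 DCT is just a scaled block mean, the sketch below filters block means directly and applies the resulting DC correction back to the pixels, which is equivalent to filtering the DC coefficient itself. The border handling (edge blocks reuse their own value) is an assumption.

```python
import numpy as np

def mnr_dc_filter(prev, curr, nxt):
    """Median-filter the DC level of each 8x8 block: first spatially
    over the vertical neighbours, then temporally across the previous
    and next same-parity fields, i.e.
        M1'' = median(M4, M5, median(M1, M2, M3)).
    Working on block means instead of an explicit DCT is an equivalent
    shortcut (DC is proportional to the block mean)."""
    def block_means(f):
        h, w = f.shape
        return f.reshape(h // 8, 8, w // 8, 8).mean(axis=(1, 3))

    m_prev, m_curr, m_next = map(block_means, (prev, curr, nxt))

    # Spatial step: median with vertical neighbours M2 (above), M3 (below);
    # edge rows reuse themselves (assumption)
    up = np.vstack([m_curr[:1], m_curr[:-1]])
    down = np.vstack([m_curr[1:], m_curr[-1:]])
    m_sp = np.median(np.stack([m_curr, up, down]), axis=0)

    # Temporal step: median with the same block in fields t-1 and t+1
    m_final = np.median(np.stack([m_sp, m_prev, m_next]), axis=0)

    # Apply the DC correction uniformly to every pixel of the block
    corr = np.repeat(np.repeat(m_final - m_curr, 8, axis=0), 8, axis=1)
    return curr + corr
```

A block whose level momentarily jumps in the current field (the flicker) is pulled back to the level agreed on by its spatial and temporal neighbours.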



TABLE I. COMPUTATIONAL COMPLEXITY OF THE MNR ALGORITHM.

Fig. 7. Temporally weighted median filter (fields t-1, t and t+1).

C. Post-Processing Interlaced Video

The post-processing of full-resolution 720*576 interlaced sequences is performed field by field. Before post-processing, both fields are extracted and processed separately; after processing, they are re-combined. Post-processing each field separately is essential, since otherwise errors due to the spatial shift might blur the image: pixels in same-parity fields have the same spatial position, whereas pixels in two consecutive fields are above/below each other, and applying temporal filtering across different objects would blur the image.
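The field separation and re-interleaving described above can be sketched with two hypothetical helpers (the function names are ours, not the paper's):

```python
import numpy as np

def split_fields(frame):
    """Separate an interlaced frame into the top field (even lines) and
    the bottom field (odd lines), so each field can be post-processed
    on spatially aligned pixels."""
    return frame[0::2], frame[1::2]

def merge_fields(top, bottom):
    """Re-interleave the two processed fields back into a full frame."""
    frame = np.empty((top.shape[0] + bottom.shape[0],) + top.shape[1:],
                     dtype=top.dtype)
    frame[0::2], frame[1::2] = top, bottom
    return frame
```

For a 720*576 frame this yields two 720*288 fields, which is also the field size used in the bandwidth figures of Section IV.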

IV. MNR COMPLEXITY EVALUATION

The implementation of the MNR algorithm must meet the TV system requirements. Consequently, it must guarantee real-time filtering of the CCIR-656 video format, i.e. 720x576 pixels at 25 frames/s. The goal of this study is to estimate the complexity of the MNR algorithm in terms of computational complexity and memory accesses. A hardware architecture is then proposed for its implementation.

A. Computational Complexity of the MNR Algorithm

The main operations involved in the MNR algorithm are the image difference of two consecutive fields (DIFF), two average filters (LPF1 and LPF2), the DCT and IDCT applied on blocks of 8 by 8 pixels (DCT8x8, IDCT8x8), and the median filter applied on three pixels. For each of these operations, the complexity in terms of the number of additions and multiplications per pixel has been computed, considering that 5 DCTs and 2 median filters (one in the spatial and one in the temporal domain) are applied per pixel. Table I summarizes the results. The total number of operations to be performed per second for a CCIR video format is 791 Madd/s and 187 Mmul/s. One could think of a DSP implementation; however, if other post-processing algorithms, generally expensive in computational power, must be implemented within the same TV system, such a solution is likely unable to support the whole computational load.
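The throughput figures quoted above can be sanity-checked numerically. Since Table I's per-operation breakdown is not reproduced here, the per-pixel costs below are simply derived from the stated totals, not taken from the table.

```python
# Derive per-pixel operation counts from the stated totals
# (791 Madd/s and 187 Mmul/s) for the CCIR-656 format.
W, H, FPS = 720, 576, 25            # 720x576 pixels at 25 frames/s
pixels_per_s = W * H * FPS          # 10,368,000 pixels per second

adds_per_pixel = 791e6 / pixels_per_s   # roughly 76 additions per pixel
muls_per_pixel = 187e6 / pixels_per_s   # roughly 18 multiplications per pixel
```

Some 76 additions and 18 multiplications per pixel is a heavy load for a general-purpose DSP that must also run other post-processing, which motivates the dedicated architecture of paragraph IV.C.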

B. Memory Accesses

To detect the motion of a pixel in the current field, the MNR algorithm needs to access two pixels, one in the current and one in the previous field. Three consecutive pixels in the current luminance field must be accessed to compute the spatial median filter of a current pixel. The resulting pixel is temporally filtered with its corresponding pixels in the previous and next fields; two more accesses are necessary for this operation. Let us call FI the bandwidth equivalent to the transfer of one luminance plane (Y) of one CCIR-656 image, which corresponds to 10.4 Mbyte/s (720 pixels * 288 pixels * 50 fields). The total number of memory accesses is then 10 FI, which is equivalent to 104 Mbyte/s. In a typical case of a memory working at 100 MHz with 32-bit words, a total bandwidth of 400 Mbyte/s is available; the data transfer of the MNR algorithm then takes 25% of the total memory bandwidth. However, this estimation is optimistic, as it does not take into account the possible overhead due to data alignment in the memory. In a whole TV system, the bandwidth spent by the MNR algorithm becomes a restricting parameter.

C. MNR Architecture Block Diagram

Paragraphs IV.A and IV.B show that the computational complexity of the algorithm and the memory accesses are too high for an implementation in a TV system for high-volume applications. To overcome this problem, a dedicated hardware architecture is proposed and its complexity estimated. From a system point of view, we consider that the MNR module is implemented in a shared-memory system architecture. Three consecutive frames are stored in the system memory. The architecture is composed of two main computational blocks, the Motion Detection module and the Median Filtering module, combined to work in parallel as depicted in Figure 8. The Median Filtering is applied to each block of each field of the sequence; at the same time, the Motion Detection of the block takes place.
In case of a moving block detection, the original block is sent to the memory to be displayed; otherwise, the processed block is displayed.
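The bandwidth estimate of paragraph IV.B can be reproduced numerically; the 100 MHz, 32-bit memory is the paper's "typical case", not a fixed requirement.

```python
# One FI = bandwidth of one CCIR-656 luminance field stream.
FI = 720 * 288 * 50            # bytes per second, about 10.4 Mbyte/s
total_accesses = 10 * FI       # straightforward implementation: 10 FI, ~104 Mbyte/s
bus_bandwidth = 100e6 * 4      # 100 MHz memory with 32-bit words: 400 Mbyte/s
share = total_accesses / bus_bandwidth   # about a quarter of the available bus
```

Roughly a quarter of the memory bus for a single post-processing module is what makes a shared-memory software implementation unattractive and motivates the dedicated datapath below.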

Fig. 8. MNR architecture.

1) Computational blocks: The Motion Detection module first applies two 2-dimensional filters on an image difference of two consecutive fields of the same parity. The luminance and chrominance components of each pixel of the image difference are used to detect the movement of a block. An absolute value is then applied to each filtered pixel difference, and each pixel is compared with a constant; the result of the comparison determines whether the pixel is a motion pixel. In the proposed hardware implementation, the 2-dimensional filters are decomposed into two 1-dimensional filters. Each 1-dimensional filter is applied in the horizontal and vertical directions on, respectively, 5 and 3 luminance samples and on 4 and 3 chrominance components. Thus, the number of gates of the filters can be decreased. The Median Filtering module applies a spatial and a temporal median in the DCT-transformed domain, as shown in Figure 6. First, a median is applied per pixel on three DCT blocks of the current field. Next, a median is applied per pixel on a DCT block of three consecutive fields. So, five DCTs are computed per block to be filtered.

2) Complexity estimation in gate number: Five DCT blocks and one IDCT block have to be computed for the processing of an output block. The architectures proposed for the DCT-8 and IDCT-8 modules can process two DCT coefficients per cycle at 54 MHz. Consequently, we consider that two hardware DCTs will be sufficient to support the real-time processing of the 5 DCTs of the MNR algorithm. In the near future, it can be expected to perform the 5 DCTs with only one DCT module. Table II shows the results of the gate count estimations, and Table III shows the number of gates per bit considered for each operation in this estimation. The MNR algorithm amounts to 32 kgates, which results in a size of 0.4 and 0.25 mm2 for a 0.18 and a 0.12 µm technology, respectively.

TABLE II. GATE COUNT ESTIMATION.

Block             | Gates  | Size in mm2 (0.18 µm) | Size in mm2 (0.12 µm)
2 DCT (8x8)       | 17,000 | 0.2                   | 0.1
1 IDCT (8x8)      |  8,500 | 0.1                   | 0.06
Motion Detection  |  5,000 | 0.06                  | 0.04
2 Median Filter   |  1,500 | 0.02                  | 0.01
Total             | 32,000 | 0.4                   | 0.25
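The gate saving obtained by decomposing each 2-D filter into two 1-D passes rests on the separability of the average filter. A sketch follows; the 5 and 3 tap counts are the luminance figures from the text, while the uniform averaging weights are an assumption.

```python
import numpy as np

def separable_average(img, hx=5, hy=3):
    """Apply an hx*hy average as two 1-D passes (horizontal, then
    vertical), as in the proposed hardware decomposition. A direct 2-D
    filter costs hx*hy multiply-accumulates per output sample; the
    separable form costs only hx + hy."""
    kx = np.ones(hx) / hx
    ky = np.ones(hy) / hy
    # Horizontal pass over every row, then vertical pass over every column
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kx, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, ky, mode='same'), 0, tmp)
```

For the 5*3 luminance filter this cuts the per-sample cost from 15 multiply-accumulates to 8, which is the hardware motivation given in the text.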

TABLE III. NUMBER OF GATES PER BIT AND PER OPERATOR.

Operator                 | Gates per bit
1 addition               | 10 gates/bit
1 mult (x bits * y bits) | 10 gates/bit
1 inverter               | 1 gate/bit

3) Embedded memory size: Figure 9 shows the block diagram of the architecture proposed for the MNR implementation. In order to minimize the memory bandwidth between the operational part and the system memory, five DCTs are computed in parallel and the intermediate results are stored in local memories (BC1 to BC5). Median filtering in the spatial domain is performed on the blocks stored in BC2, BC3 and BC4, and a temporal median is applied on the fly between the resulting filtered block and the contents of BC1 and BC5. An IDCT is applied to the filtered data to obtain the post-processed luminance. A block-based time scheduling is proposed, as it fits well with the data locality of the algorithm and allows the size of the embedded memory to be reduced. To estimate the size of the necessary embedded memory, we must distinguish between the different types of embedded blocks:
Block of Pixels (BP) = (8x8) x 8 bit = 512 bits
Block of Coefficients (BC) = (8x8) x 12 bit = 768 bits
Block of Filtering (BF) = 2 x (12x10) x 8 bit = 1920 bits (Y+Cr)
Mask Bitmap (MB) = 2 x (90x72) bit = 12960 bits (Y+Cr)
We estimate that a total of about 23 kbit of embedded memory is necessary to implement this architecture. This memory takes a size of 0.73 mm2 in a 0.18 µm technology or 0.63 mm2 in a 0.12 µm technology. The results of the estimations are detailed in Table V. The memories considered are double-port SRAM memories.
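The bit counts listed above follow directly from the block definitions and can be checked mechanically:

```python
# Bit counts of the embedded memory blocks, exactly as defined in the text.
BP = (8 * 8) * 8           # Block of Pixels: 8x8 samples at 8 bit
BC = (8 * 8) * 12          # Block of Coefficients: 8x8 DCT values at 12 bit
BF = 2 * (12 * 10) * 8     # Block of Filtering (Y+Cr): two 12x10 tiles at 8 bit
MB = 2 * (90 * 72)         # Mask Bitmap (Y+Cr): one bit per 8x8 block of a field
```

Note that 90x72 is exactly the number of 8x8 blocks in a 720x576 frame, so the mask bitmap stores one motion bit per block for both components.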



4) I/O memory bandwidth: The resulting I/O memory bandwidth for this architecture is expressed in number of fields per second (FI) (see paragraph IV.B). One FI represents the bandwidth equivalent to the Y component of one CCIR-601 image, that is 10.4 Mbyte/s (720 pixels * 288 pixels * 50 fields). Thus, the total memory bandwidth required for the IP decoder is 6 FI (see Figure 8 and Table IV). With this architecture, the I/O memory bandwidth is reduced from the 10 FI of the straightforward implementation to 6 FI.

TABLE IV. I/O MEMORY BANDWIDTH.

INPUT                     | Bandwidth
F1 - Current field (Y+Cr) | 2 FI
F2 - Next field (Y+Cr)    | 2 FI
F0 - Precedent field (Y)  | 1 FI
OUTPUT                    |
F1 - Current field (Y)    | 1 FI
TOTAL I/O Bandwidth       | 6 FI

Fig. 9. Block diagram of the architecture proposed for the MNR implementation (Motion Detection and Median Filtering (Y) modules with their local memories).

TABLE V. EMBEDDED MEMORY SIZES.

Number of memories | Size in kbit | Size in mm2 (0.18 µm) | Size in mm2 (0.12 µm)
4 BF               | 4            | 0.2                   | 0.17
1 MB               | 13           | 0.27                  | 0.23
3 BP               | 1.5          | 0.06                  | 0.06
5 BC               | 4            | 0.2                   | 0.17
Total              | 23           | 0.73                  | 0.63


V. EXPERIMENTAL RESULTS

An accurate evaluation of the algorithm's efficiency is not an easy task. PSNR (Peak Signal to Noise Ratio) is not relevant for this kind of evaluation, as the proposed processing is temporal and based on filtering methods. Other objective quality assessment methods, such as those proposed during phase I of the VQEG (Video Quality Experts Group) study [13], are mostly based on spatial impairment detection and are consequently not relevant for mosquito noise reduction evaluation either. This is why extensive subjective tests were conducted.

A. Subjective Tests Methodology

A combination of assessments by expert and naive viewers is used. Tests are designed for the Double Stimulus Quality Scale (DSQS). Each test consists of a pair of stimuli, including the reference and the processed sequences. The tests are conducted with the rating scale shown in Figure 10.

Figure 10. Rating scale for subjective tests.

The viewing distance is in the range of four to six times the height of the picture, as recommended by the VQEG, and compliant with Recommendation ITU-R BT.500-10. The monitor is a 16/9 100 Hz Philips Matchline III, 82 cm. The maximum observation angle is 30°, and the room illumination is low. Six sequences of various contents and characteristics have been MPEG-2 encoded at three different low bit-rates each (all with M=12, N=3). The reconstructed sequences show MPEG-2 artifacts of different types and severities, with or without strong mosquito noise.

B. Subjective Test Results

The main objective of this evaluation is not only to guarantee that the proposed method gets rid of mosquito noise but also that it does not degrade the image when no mosquito noise is present. Table VI shows the subjective scores obtained.

TABLE VI. SUBJECTIVE SCORES ARE DESIGNED SUCH THAT SEQUENCE 1 IS THE ...

Sequence  | Mosquito noise        | Score
Matchline | No mosquito noise     | 0.2
Basket    | No mosquito noise     | 0
Mobcal    | No mosquito noise     | -0.1
Thelma    | Strong mosquito noise | 1.9
Film      | Strong mosquito noise | 2
Meteo     | Light mosquito noise  | 1.2

The sequences where mosquito noise is encountered essentially result from very noisy original sequences. The mosquito noise reducer algorithm efficiently removes the flickering effect on the uniform background areas but also around the edges. The subjective tests confirmed that sharpness is preserved on sequences free of mosquito noise (Matchline, Basket, Mobcal): the average scores approach 0, meaning that the filtered sequence is "equivalent to" the non-filtered one on the rating scale. Moreover, the viewers showed their preference for the filtered sequences on the sequences exhibiting strong or light mosquito noise (Thelma, Film, Meteo), with scores ranging from 1.2 to 2, meaning "better than" the non-filtered sequence. Preliminary subjective tests were also performed to compare the results obtained when applying the DNR and MNR algorithms to the same sequence. If the noise is introduced by the MPEG encoding, the MNR algorithm works better. If the temporal noise in the sequence is snow-like, the results obtained with the DNR algorithm are better, although it blurs the image a bit more.

VI. CONCLUSION

In this paper, an adapted post-processing technique has been presented that reduces the mosquito noise introduced by DCT-based compression techniques. Results are convincing and show an impressive gain in picture quality, and the efficiency is increased when the method is combined with deblocking and deringing algorithms. The method proposed in this article is not only efficient but also easy to implement. As it works in the DCT frequency domain, an implementation of this temporal filtering within a decoder is of interest. The complexity estimation of the MNR algorithm was performed: the bottleneck for a real-time execution of this algorithm is the data access to memory. A dedicated hardware architecture is therefore proposed for the implementation of the MNR algorithm. In order to decrease the I/O memory bandwidth, the MNR hardware module uses about 23 kbits of embedded memory to store intermediate data. Finally, the total area estimate is 0.73 mm2 for a 0.18 µm technology and 0.63 mm2 for a 0.12 µm technology. The gate count and embedded memory size, used as the main complexity indicators, allow us to conclude that a hardware implementation is feasible at a reasonable cost. Scaled versions of the algorithm have been analysed for further improvements. One version could consist of performing the temporal filtering using only two consecutive fields instead of three. Another could be based on using just the DC coefficient to detect the movement of the blocks. Both techniques would reduce the computational complexity, the embedded memory size and the bandwidth of the implementation.


REFERENCES

[1] Apostolopoulos J. G., Jayant N. S., "Post-processing for very low bit-rate video compression", IEEE Trans. on Image Processing, vol. 8, n. 8, pp. 1125-1129, August 1999.
[2] Boyce J. M., "Noise reduction of image sequences using adaptive motion compensated frame averaging", IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 461-465, San Francisco, March 1992.
[3] Caviedes J. E., Miro C., Gesnot A., "Algorithm and architecture for blocking artifact correction unconstrained by region types", Proceedings of PCS'2001, pp. 89-92, Seoul, April 2001.
[4] Dubois E., Sabri S., "Noise reduction in image sequences using motion-compensated temporal filtering", IEEE Trans. on Communications, vol. COM-32, n. 7, pp. 826-831, July 1984.
[5] Fan C., Namazi N. M., "Simultaneous motion estimation and filtering of image sequences", IEEE Trans. on Image Processing, vol. 8, n. 12, pp. 1788-1795, December 1999.
[6] Fenimore C., Libert J., Roitman P., "Mosquito noise in MPEG compressed video: test patterns and metrics", Human Vision and Electronic Imaging, Proc. of SPIE, vol. 3959, pp. 604-612, 2000.
[7] Jung J., Antonini M., Barlaud M., "Optimal Decoder for Block-Transform Based Video Coders", IEEE Trans. on Multimedia, to be published, December 2002.
[8] Liu Tsann-shyong, Chang Long-wen, "An adaptive temporal-spatial filter for MPEG decoded video signals", Multidimensional Systems and Signal Processing, vol. 6, pp. 251-262, 1995.
[9] Lu Cheng-Chang, "Application of short-kernel filter pairs for temporal filtering of image sequences", IEEE Trans. on Consumer Electronics, vol. 41, n. 1, pp. 49-51, February 1995.
[10] Ojo O. A., de Haan G., "Robust motion-compensated video upconversion", IEEE Trans. on Consumer Electronics, vol. 43, n. 4, pp. 1045-1056, November 1997.
[11] Ozkan M. K., Sezan M. I., Murat Tekalp A., "Motion-adaptive weighted averaging for temporal filtering of noisy image sequences", SPIE vol. 1657, Image Processing Algorithms and Techniques III, pp. 201-212, 1992.
[12] Paek H., Lee S., "A projection-based post-processing technique to reduce blocking artifacts using a priori information on DCT coefficients of adjacent blocks", Proc. of IEEE International Conference on Image Processing, vol. 2, pp. 53-56, Lausanne, September 1996.
[13] VQEG, Final Report from the Video Quality Experts Group on the Validation of Objective Models of Video Quality Assessment, http://www.vqeg.org, 1999.
[14] Yuen M., Wu H. R., "Reconstruction artifacts in digital video compression", SPIE vol. 2419, pp. 455-465, February 1995.

Sandra Del Corso was born in France on September 27, 1971. She obtained her BSc degree from the University of Northumbria in 1993 and her MSc degree from the University of Edinburgh in 1994. She has 8 years' research experience in the field of image processing, including 2.5 years building CAD models from range images at UK Robotics Ltd (formerly the National Advanced Research Robotics Centre) in Manchester and 1 year modelling the ozone map of the world at the European Space Agency in Frascati, Italy. Since July 1998, she has been working for PRF (Philips Research France). Her research interests include objective quality evaluation for high and low bit-rate consumer applications and MPEG pre-/post-processing.

Carolina Miró Sorolla obtained her M.Sc. degree in telecommunications engineering from the Universitat Politècnica de Catalunya (UPC) in Barcelona, Spain, and her Ph.D. in electrical and communications engineering from the École Nationale Supérieure des Télécommunications (ENST) in Paris, France. Since July 1999, she has been working for Philips Research France as Senior Scientist in the Architecture of Microsystems and VLSI Group. Her main research interests are VLSI architectures for video and multimedia applications and improvements in video quality.

Joel Jung was born in France on June 15, 1971. He received his Ph.D. from the University of Nice - Sophia Antipolis, France, in 2000. From 1996 to 2000, he worked with the I3S/CNRS laboratory on the improvement of video decoders based on the correction of compression and transmission artifacts. He is currently working at Philips Research France, Suresnes, France, and his research interests are video decoding, post-processing, perceptual models, objective quality metrics and low-power codecs. He is involved in the SOQUET (System for management Of Quality of service in 3G nETworks) European project, and contributes to VQEG (Video Quality Experts Group) and MPEG/JVT standards.
