A Perceptual Data Repository for Polygonal Meshes


2009 Second International Conference in Visualisation

Samuel Silva (1,2), Beatriz Sousa Santos (1,2), Carlos Ferreira (3,4), Joaquim Madeira (1,2)

(1) Institute of Electronics Engineering and Telematics of Aveiro
(2) Department of Electronics, Telecommunications and Informatics – Univ. of Aveiro
(3) Department of Economics, Management and Industrial Engineering
(4) Operations Research Center – Univ. of Lisbon

[email protected], [email protected], [email protected], [email protected]

Abstract

Building on work carried out by the authors, including several observer studies to evaluate the perceived quality of simplified models, a perceived quality data repository (available online at http://www.ieeta.pt/~sss/repository) is presented, which gathers the data obtained in those studies. The main purpose is to provide perceived quality data which other researchers can use to perform a preliminary assessment of their work.

Keywords—polygonal meshes, perceived quality

1 Introduction

Polygonal meshes are used in a wide range of applications. Models defined using polygonal meshes are mostly intended for visualization and should be perceived as having adequate (visual) quality. Meshes often need to be processed to suit particular criteria, using a variety of methods such as compression [1], simplification [2] or surface noise removal [3]. The processed meshes have to be analysed to ascertain whether they are still adequate for a specific purpose, and several metrics and tools [4, 5, 6] have been proposed which allow comparing meshes using criteria such as geometric distance, surface normal deviation or curvature properties. Even though these criteria might guarantee surface quality and/or a degree of similarity between two meshes, it is not clear how the information they provide relates to mesh quality as perceived by human observers. During the past 10 years, research concerning automatic estimation of the perceived quality of polygonal meshes has been carried out in different contexts [7, 8]. To validate the outcomes of such research it is important to compare them with perceived quality results obtained using observer studies [9, 10, 11, 12, 13], which require large preparation times and resources [14].

A repository containing perceived quality data for polygonal meshes, obtained through observer studies, is presented. It includes information regarding the experimental methodology, protocol and models used, with the purpose of allowing researchers to use it, e.g., for a faster preliminary assessment of their perceived quality metrics without the overhead of designing and performing an observer study.

In the following sections, the methodology used is presented and a description of the data available in the repository is provided, along with some conclusions and ideas for future work.

2 Experimental Methodology

All the observer studies followed a similar methodology. A brief description of the main aspects of the protocol used and of the data set creation is presented below.

2.1 Protocol

To assess visual quality, as perceived by the observers, their preferences and ratings were recorded. These two measures have been widely used in experimental sciences to obtain relative judgments from human participants [10]. With ratings, observers assign to a stimulus a number with a range and a meaning. With preferences, observers simply choose the stimulus that has more of the identified quality. Both represent conscious decisions and both have proven useful in a wide variety of situations. Moreover, preferences and ratings are probably the most adequate indices of fidelity [10], i.e., of visual similarity to an original. We also recorded decision times and the number of interactions (performed on the model), since these seemed to be related to the degree of difficulty observers encountered in performing the preference and rating tasks.
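As an illustration of the measures collected per judgment, the following is a minimal sketch of one possible record structure, written in Python; the field names are hypothetical and not the repository's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrialRecord:
    """One observer judgment, together with the auxiliary measures recorded."""
    observer_id: str
    model: str                        # original model, e.g. "lung"
    method: str                       # processing (simplification) method applied
    level: float                      # processing level, e.g. 0.2 for 20% of faces
    preference: Optional[int] = None  # first/second/third place (phase one)
    rating: Optional[int] = None      # 1 (very bad) .. 5 (very good) (phase two)
    decision_time_s: float = 0.0      # time taken to reach the judgment
    interactions: int = 0             # number of interactions performed on the model
```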


All observers received, first, an explanation about the context and aims of each observer study, as well as the tasks they had to perform. Then, they were asked for some information regarding their profile, as well as for their informed consent. Each study was divided into two phases: in the first, observers performed a preference task, expressing their preferences among the versions of an original model processed using all processing methods, for each processing level; in the second, they performed a rating task, rating a single version of a model obtained using one processing method and one processing level. The original model was always shown. At the beginning of each phase observers were presented with some training sets, to allow them to get acquainted with the task they were going to perform, as well as with model manipulation. All models were read from disk before the experiment started, in order to reduce the waiting time between test situations. A software tool was developed to help implement the protocol, as well as to easily collect and store data.

In the first phase (see figure 1), observers were sequentially presented with an original model and several processed versions of it, and were asked to classify them by assigning first, second and third places, according to the perceived quality of each simplified model compared with the original. Although they were asked to try to always assign different preferences, observers could assign the same preference to more than one processed model, whenever discrimination between those models was not possible. In the second phase (see figure 2), observers were sequentially presented with an original model and a processed version of it, and were asked to rate the latter, once again based on perceived quality, using a five-level Likert scale [17] from 1 (very bad) to 5 (very good). The number of Likert scale levels was found adequate to the observers' capacity to distinguish between levels.

Observers could interact with the models, choosing their position, orientation and scale factor, which allows a more accurate analysis of the models than just some images taken from a few pre-determined viewpoints [11]. When an observer interacts with a model on the screen, the same transformation is applied to all the other visible models, keeping them synchronized to avoid disorientation. However, only the manipulated model is seen moving in real-time, and the synchronization is only performed after the interaction has stopped, in order to minimize the computational load, resulting in smoother interaction. Moreover, the observer is allowed to reset all models to the initial conditions of position, orientation and scale factor.

All models were illuminated using one light source (two or more might not lead to good results, if incorrectly set [18]) placed at the front top left corner of the scene. We considered that lighting from the top or side would not look "natural" to users and would pose some difficulties when analysing a model. Lighting from the front was also considered, yet it was discarded since it might result in artifact masking [11] and in fewer shading "nuances" which might help detect artifacts. A specular component was added to produce highlights on the model surface. Even though they can disguise some minor artifacts, highlight shapes might reveal subtle surface differences.
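A minimal sketch of this deferred synchronization, assuming hypothetical class and function names (the authors' tool is not available in this form):

```python
import numpy as np

class ModelView:
    """Minimal stand-in for one displayed model and its current transform."""
    def __init__(self, name: str):
        self.name = name
        self.transform = np.eye(4)  # position, orientation and scale as a 4x4 matrix

def on_interaction_end(active: ModelView, others: list[ModelView]) -> None:
    """Propagate the manipulated model's transform once interaction stops.

    Only the active model moves in real time; copying the transform to the
    other views afterwards keeps them synchronized without the cost of
    updating every view during the drag.
    """
    for view in others:
        view.transform = active.transform.copy()

def reset_all(models: list[ModelView]) -> None:
    """Restore the initial position, orientation and scale for every model."""
    for view in models:
        view.transform = np.eye(4)
```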

Figure 1: User interface for the first experimental phase of the observer studies concerning simplified mesh models: the original model and four simplified versions are presented and preferences asked.

Figure 2: User interface for the second experimental phase of the observer studies concerning simplified mesh models: the original model is presented, side-by-side with a simplified version, and a rating asked.

2.2 Data Sets

To evaluate perceived quality, sets of processed models were created and observers were allowed to compare them with their originals. The main idea was to display a set of models, including the original model as well as several processed versions of it, and allow observers to express their preferences (first, second, third places, etc.), or to present an original model and one of its processed versions, allowing observers to rate the quality perceived in the latter.

A criterion when choosing the original models was that their characteristics (number of vertices, surface properties, etc.) should allow for noticeable effects when the processing was applied to them. For example, a model should not be highly oversampled, or the result of simplification might be too close to the original, with no differences being noticed by observers [15]. Preliminary experiments have also shown that if the judged models are all very similar to the original, users tend to get annoyed during the observer study. Apart from that, using very complex models would also lead to poor interaction, which would influence the evaluation, resulting in longer decision times and fatigue.

Textures have been considered in a series of studies described in the literature [12, 16] and have an important influence on model perceived quality, as they can help mask surface artifacts. Nevertheless, until now we have considered a worst-case scenario, by using models with no applied textures, in order to isolate the effects of geometry and surface properties on perceived quality.
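For illustration, the stimulus sets just described can be enumerated along the following lines; model and method names are hypothetical placeholders.

```python
from itertools import product

models = ["model_a", "model_b"]  # hypothetical model names
methods = ["m1", "m2", "m3"]     # simplification methods under test
levels = [0.20, 0.50]            # fractions of the original face count

# Phase one: each trial shows the original together with the versions produced
# by all methods at one level, and asks for a preference order.
preference_trials = [(model, level, methods)
                     for model, level in product(models, levels)]

# Phase two: each trial shows the original next to a single processed version
# and asks for a rating on the 1-5 scale.
rating_trials = list(product(models, methods, levels))

print(len(preference_trials), "preference trials,", len(rating_trials), "rating trials")
```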


3 Observer Studies

Three observer studies have already been performed using the previously described protocol. Table 1 presents their main features, and figure 3 shows some of the models used.

Table 1: Observer studies performed and their main features.

#Obs  #Models  #Methods  #Proc. Levels           Pref.  Ratings
32    4        3         2 (20% and 50%)         yes    yes
65    5        3         2 (20% and 50%)         yes    yes
55    5        4         6 (10%, 20%, ..., 50%)  yes    no

Figure 3: Some of the polygonal models used in the observer studies.

Mesh simplification was used as the processing method because its impact on meshes is generally strong and several simplification tools are easily available, such as QSlim [], NSA [19], or the simplification methods supported by the OpenMesh library [20]. Additionally, previous research work on perceived quality assessment has been performed using such methods, which allows comparison with some of our findings. Finally, we chose to maintain the same processing method across the three experiments while testing different aspects of the devised protocol. Even though texture has been considered in some of the studies described in the literature, we have not yet included it in ours: texture might help disguise poor geometry but, in some situations, it might be a negative factor [11].

In the first study [21], 32 observers evaluated sets of models built from four lung models, using three simplification methods and two simplification levels (20% and 50% of the original number of faces). Both preferences and ratings were collected.

In the second study [22], 65 observers evaluated sets of models built from five different models, using three simplification methods and two simplification levels (20% and 50% of the original number of faces). The main purpose was to use models of a different nature and obtain data from a larger number of observers. To assess the effects of different models, the three simplification methods used were the same as in the first study. The lung model used in this second experiment was taken from those used in the first, to allow verifying the consistency of results across both experiments. Both preferences and ratings were collected.

Finally, in the third study [23], 55 observers evaluated sets of models built from five different models, using four simplification methods and six simplification levels (10%, 20%, 27%, 35%, 43% and 50% of the original number of faces). The main purpose was to assess perceived quality results using a wider range of simplification levels, and how the presence of a new simplification method would influence the relative positions obtained by the other methods in the previous studies. Two of the models used had been used in the second study, in order to allow verifying the consistency of results across both experiments. In this study, only preferences were collected to keep experiment times short.
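The paper does not prescribe a particular simplification toolchain beyond naming QSlim, NSA and OpenMesh; as a minimal sketch, versions at the face-count fractions used in the third study could be produced with a quadric-decimation pass (in the spirit of QSlim), here using the Open3D library. The input file name is hypothetical.

```python
import open3d as o3d

def simplify_to_fraction(path: str, fraction: float) -> o3d.geometry.TriangleMesh:
    """Reduce a mesh to a given fraction of its original face count."""
    mesh = o3d.io.read_triangle_mesh(path)
    target = int(len(mesh.triangles) * fraction)
    return mesh.simplify_quadric_decimation(target_number_of_triangles=target)

# Face-count fractions matching the simplification levels of the third study;
# "bunny.ply" is a hypothetical input file.
for frac in (0.10, 0.20, 0.27, 0.35, 0.43, 0.50):
    simplified = simplify_to_fraction("bunny.ply", frac)
    o3d.io.write_triangle_mesh(f"bunny_{int(frac * 100):02d}.ply", simplified)
```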

4 Data Provided by the Repository

The following data from the conducted observer studies is provided online at http://www.ieeta.pt/~sss/repository:

• Description of the main aspects of each observer study;

• Mesh models evaluated in the observer studies;

• Tables containing the raw data collected during the observer studies, including decision times, number of interactions with the models and observer profiles;

• Tables containing the rank medians obtained for each model (see the sketch after this list).

Apart from the collected data, the repository also includes the software applications developed to apply the presented protocol, which can be configured to present any models desired.
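For example, the raw tables could be loaded and summarized along these lines; the file name and column layout are assumptions for illustration, not the repository's actual format.

```python
import pandas as pd

# Hypothetical file name and layout: one row per judgment, mirroring the
# repository's raw tables (the actual column names may differ).
prefs = pd.read_csv("study3_preferences.csv")
# assumed columns: observer, model, method, level, rank (1 = preferred)

# Median preference rank per simplification method and level,
# analogous to the rank-median summary tables.
rank_medians = (
    prefs.groupby(["method", "level"])["rank"]
         .median()
         .unstack("level")
)
print(rank_medians)
```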

5 Application Examples

The data provided in the repository has already been used by the authors to test several common mesh quality metrics as estimators of user perceived quality. In Sousa Santos et al. [24], data obtained in the first experiment was used to test the geometric distance and normal deviation metrics as estimators of user perceived quality, leading to some evidence that the mean geometric distance might be a good estimator of user perceived quality for strong simplifications, while the mean normal deviation worked better for light simplifications. The results for strong simplifications are depicted in the factorial planes presented in figure 4 [25]. Notice the similarity between the factorial plane concerning the results obtained for the 20% simplification level in the observer study (built from the data available in the repository) and the one obtained using the mean geometric deviation computed for the same models. The simplification methods are ranked in the same order in both situations.

Figure 4: Factorial planes, obtained using Statistica [26], depicting the association between simplification method and preference ranks as obtained in an observer study (left) and using the mean geometric distance metric (right).

A similar study can be found in Silva et al. [27], but using the data from the second experiment and a wider range of metrics. These examples show a real application scenario for the data contained in this repository. Researchers can use it, for example, to perform a preliminary validation of a new perceived quality estimation metric they are working on, or to test their ideas concerning how certain features of the models might influence user judgment. At a later research stage they will have to perform their own tests for a more reliable validation of their work. The protocol devised and tested can also be used to prepare new experiments or serve as a starting point for improvements and new ideas.
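As a rough sketch of how such metrics can be recomputed from the repository meshes, the following uses the trimesh library to estimate a mean geometric deviation and a mean normal deviation by sampling the simplified surface. The normal-deviation formulation shown is one plausible choice and the file names are hypothetical; the papers cited above define the exact metrics used.

```python
import numpy as np
import trimesh
from trimesh.proximity import closest_point
from trimesh.sample import sample_surface

def mean_deviations(original_path: str, simplified_path: str, n: int = 10000):
    """Estimate mean geometric and mean normal deviation by surface sampling."""
    original = trimesh.load(original_path)
    simplified = trimesh.load(simplified_path)

    # Sample points on the simplified surface and project them onto the original.
    points, face_idx = sample_surface(simplified, n)
    _, distances, tri_idx = closest_point(original, points)

    # Mean geometric deviation: average sample-to-surface distance.
    mean_dist = distances.mean()

    # Mean normal deviation: angle between the simplified face normal at each
    # sample and the normal of the closest face on the original mesh.
    n_simp = simplified.face_normals[face_idx]
    n_orig = original.face_normals[tri_idx]
    cos = np.clip((n_simp * n_orig).sum(axis=1), -1.0, 1.0)
    mean_angle = np.degrees(np.arccos(cos)).mean()

    return mean_dist, mean_angle

# Hypothetical file names for a repository model and one simplified version.
dist, angle = mean_deviations("original.ply", "simplified_20.ply")
print(f"mean geometric deviation: {dist:.4f}, mean normal deviation: {angle:.2f} deg")
```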

6 Conclusion and Future Work

This paper presents a repository for perceived quality data of polygonal meshes. The data has been obtained through observer studies following a protocol where observers had to judge perceived quality using preferences and ratings. The repository includes not only perceived quality data but also the models used and the software tools developed (which can be configured to present different models). The experimental protocol was tested by checking the consistency of results across observer studies (performed with completely different subject groups), by introducing common stimuli and well defined changes in each observer study. The number of subjects involved in each observer study was large, which provides a reasonable degree of significance to the obtained results.

A large quantity of data is provided regarding each observer study, including the models used, the raw experimental data, a summary of the results obtained and the main conclusions after performing a statistical analysis. This data has already been used by the authors to test several metrics as estimators of user perceived quality, as described in Sousa Santos et al. [24] and Silva et al. [27].

The provided data has been obtained using simplified models, yet we believe this is not a limitation to its applicability, since much of the work in this field uses this kind of processing method. Nevertheless, it is of paramount importance to evaluate a new metric, for example, in as many different scenarios as possible. Thus, it is our purpose to continue adding data to the repository, obtained by conducting further observer studies dealing with models processed using other processing methods, such as smoothing.

By enabling access to such perceived quality data, we aim to allow researchers to perform a quicker preliminary assessment of their findings before spending the time and resources needed to prepare their own evaluation studies. At the same time, by providing the software developed to apply the devised protocol, we make it possible for others to use this tested framework with different stimuli, or even in different contexts.

Acknowledgments The first author is supported by grant SFRH/BD/38073/2007 awarded by the Portuguese Foundation for Science and Technology (FCT). The authors would like to thank all the participants in the observer studies.



References

[1] P. Alliez and C. Gotsman, "Recent advances in compression of 3D meshes," in Advances in Multiresolution for Geometric Modeling (N. Dodgson, M. Floater, and M. Sabin, eds.), pp. 3–26, Springer-Verlag, 2005.

[2] D. Luebke, "A developer's survey of polygonal simplification algorithms," IEEE Computer Graphics & Applications, vol. 21, no. 3, pp. 24–35, 2001.

[3] O. Sorkine, "Laplacian mesh processing," in EUROGRAPHICS 2005 — State-of-the-Art Report, (Dublin, Ireland), 2005.

[4] P. Cignoni, C. Rocchini, and R. Scopigno, "Metro: measuring error on simplified surfaces," Computer Graphics Forum, vol. 17, no. 2, pp. 167–174, 1998.

[5] M. Roy, S. Foufou, and F. Truchetet, "Mesh comparison using attribute deviation metric," International Journal of Image and Graphics, vol. 4, no. 1, pp. 1–14, 2004.

[6] S. Silva, J. Madeira, and B. Sousa Santos, "PolyMeCo — an integrated environment for polygonal mesh analysis and comparison," Computers & Graphics, vol. 33, no. 2, pp. 181–191, 2009.

[7] I. Cheng and A. Basu, "Perceptually optimized 3-D transmission over wireless networks," IEEE Transactions on Multimedia, vol. 9, no. 2, pp. 386–396, 2007.

[8] Z. Karni and C. Gotsman, "Spectral compression of mesh geometry," in Proc. SIGGRAPH 2000, pp. 279–286, 2000.

[9] H. Rushmeier, B. Rogowitz, and C. Piatko, "Perceptual issues in substituting texture for geometry," in Proc. SPIE Vol. 3959, Human Vision and Electronic Imaging V, pp. 372–383, 2000.

[10] B. Watson, A. Friedman, and A. McGaffey, "Measuring and predicting visual fidelity," in Proc. SIGGRAPH 2001, pp. 213–220, 2001.

[11] B. Rogowitz and H. Rushmeier, "Are image quality metrics adequate to evaluate the quality of geometric objects?," in Proc. SPIE Vol. 4299, Human Vision and Electronic Imaging VI, pp. 340–348, 2001.

[12] Y. Pan, I. Cheng, and A. Basu, "Quality metric for approximating subjective evaluation of 3D objects," IEEE Transactions on Multimedia, vol. 7, no. 2, pp. 269–279, 2005.

[13] M. Corsini, E. D. Gelasca, T. Ebrahimi, and M. Barni, "Watermarked 3-D mesh quality assessment," IEEE Transactions on Multimedia, vol. 9, no. 2, 2007.

[14] R. Kosara, C. G. Healey, V. Interrante, D. H. Laidlaw, and C. Ware, "User studies: Why, how and when?," IEEE Computer Graphics and Applications, vol. 23, no. 4, pp. 20–25, 2003.

[15] I. Cheng, R. Shen, X. Yang, and P. Boulanger, "Perceptual analysis of level-of-detail: The JND approach," in Proc. Int. Symp. on Multimedia, pp. 533–540, 2006.

[16] J. Ferwerda, S. Pattanaik, P. Shirley, and D. Greenberg, "A model of visual masking for computer graphics," in SIGGRAPH '97, pp. 143–152, 1997.

[17] V. Barnett, Sample Survey Principles and Methods. Hodder Arnold, 3rd ed., 2003.

[18] R. Shacked and D. Lischinski, "Automatic lighting design using a perceptual quality metric," Computer Graphics Forum, vol. 20, no. 3, pp. 215–227, 2001.

[19] F. Silva, "NSA algorithm: Geometrical vs. visual quality," in Proc. Int. Conf. Computational Science and its Applications (ICCSA 2007), pp. 515–523, 2007.

[20] M. Botsch, S. Steinberg, S. Bischoff, and L. Kobbelt, "OpenMesh — a generic and efficient polygon mesh data structure," in 1st OpenSG Symp., (Darmstadt, Germany), 2002.

[21] S. Silva, B. Sousa Santos, J. Madeira, and C. Ferreira, "Comparing three methods for simplifying mesh models of the lungs: An observer test to assess perceived quality," in Proc. SPIE Vol. 5749, Image Perception, Observer Performance, and Technology Assessment, pp. 484–495, 2005.

[22] S. Silva, C. Ferreira, J. Madeira, and B. Sousa Santos, "Perceived quality of simplified polygonal meshes: Evaluation using observer studies," in Proc. Ibero-American Symposium in Computer Graphics (SIACG06), (Santiago de Compostela, Spain), pp. 169–178, 2006.

[23] S. Silva, B. Sousa Santos, J. Madeira, and C. Ferreira, "Perceived quality assessment of polygonal meshes using observer studies: A new extended protocol," in Proc. SPIE Vol. 6806, Human Vision and Electronic Imaging XIII, (San José, California, USA), pp. 68060D.1–68060D.12, 2008.

[24] B. Sousa Santos, S. Silva, C. Ferreira, and J. Madeira, "Comparison of methods for the simplification of mesh models of the lungs using quality indices and an observer study," in Proc. 3rd International Conference on Medical Information Visualization — Biomedical Visualization (MediVis05), pp. 15–21, 2005.

[25] D. G. Johnson, Applied Multivariate Methods for Data Analysis. Duxbury, 1998.

[26] "Statistica 6.0," http://www.statsoft.com (online Mar/2009).

[27] S. Silva, B. Sousa Santos, C. Ferreira, and J. Madeira, "Comparison of methods for the simplification of mesh models using quality indices and an observer study," in Proc. SPIE Vol. 6492, Human Vision and Electronic Imaging XII, pp. 64921L.1–64921L.12, 2007.

