PERIS: A programming environment for realistic image synthesis


Comput. & Graphics Vol. 12, Nos. 3/4, pp. 299-307, 1988 Printed in Great Britain.

0097-8493/88 $3.00 + .00 © 1988 Pergamon Press plc

Computer Graphics in China

PERIS: A PROGRAMMING ENVIRONMENT FOR REALISTIC IMAGE SYNTHESIS

YINING ZHU, QUNSHENG PENG and YOUDONG LIANG
CAD/CAM Research Centre, Zhejiang University, China

Abstract--A computer graphics system called PERIS is described. It offers two levels of user interfaces to serve both as a testbed to facilitate further research work and as a pragmatic system for CAD/CAM applications. Flexibility of the system is achieved by providing tools of solid modeling, surface modeling and procedural model representations for physical environment modeling, and by including interfaces of immediate display, scan line rendering, ray tracing and two-way ray tracing for realistic image synthesis. A linear octree data structure is established to support the modeling and rendering processes, greatly reducing the computations involved. Meanwhile, we also introduce a new illumination model which unifies most of the existing models on a theoretical basis. An improved Cook-Torrance model is then derived. By examining the merits and limitations of the classic ray tracing methods, we propose a new rendering technique of two-way ray tracing that allows more accurate simulation of light propagation in the environment.

1. INTRODUCTION
Computer graphics has been widely recognized nowadays both as an important discipline and as a powerful tool for the development of new technologies. The main task of computer graphics includes modeling the physical environment and rendering it into a suitable form for graphics output. Much effort has been made on 3D geometric modeling and realistic image synthesis, and several graphics systems based on progress in these fields have been developed. They not only provide graphics interfaces for various applications but also serve as testbeds for further experiments. One of the early raster graphics testbeds was designed by Whitted and Weimer [27]. In their system a set of utility programs based on raster scan conversion techniques for 3D shaded display is presented.
The principal feature of this system is its flexibility in allowing several objects of different types to be converted and merged into common span buffers, thus enabling selective display, cyclic animation and transparency effects to be simulated. Another testbed was developed at Cornell University and described by Hall and Greenberg [14]. Hall and Greenberg considered image synthesis a multistep process consisting of modeling of the physical environment, simulation of light propagation, intensity calculation at the image plane and conversion of intensity information for display. They tried to increase the flexibility in each of these steps. Unlike the testbed by Whitted and Weimer, the Cornell University testbed was oriented more towards ray tracing to achieve global illumination effects. Recently, Cook et al. developed a rendering system called Reyes [7]. This system is very successful for high-quality computer animation of complex scenes. Many new techniques, such as stochastic sampling [6], distributed ray tracing [4], shade trees [5] and an anti-aliased depth map shadow algorithm [21], were incorporated. In order to deal with an unlimited number of objects, Reyes adopts a local rendering scheme to avoid frequent data access from the data base, while retaining much of the global illumination effects by texture mapping.

Apparently, each system had its own development environment and pursued different goals. PERIS is a raster graphics system developed at the CAD/CAM Research Centre, Zhejiang University. The main goals of our system were:
• To provide powerful geometric modeling tools. It should be capable of modeling a wide range of objects composed of planes, quadric surfaces, surfaces of revolution and arbitrary sculptured surfaces. The modeling operations should be simple and reliable.
• To supply different quality levels of raster display. For users requesting an immediate display, fast but less realistic images may well suit their needs. For users who pay much attention to the quality of the resulting images, highly realistic images would be more desirable.
• To provide a user-friendly programming environment for developing new modeling and rendering algorithms.
• To offer a convenient graphics interface for general CAD/CAM applications. In particular, the user should be able to translate, rotate, scale and even assemble his product on the screen. Both the calligraphic image and the shaded image of the same object should be available.
In the following, we shall examine the system structure and discuss the design principles for each step in detail.

2. SYSTEM OVERVIEW
PERIS is implemented on a PS340, taking a DEC PDP-11/44 and a Universe 68000 as the host. The PS340 is a graphics workstation produced by Evans & Sutherland Corporation. It supports both calligraphic display and raster display. The system structure of PERIS is diagrammed in Fig. 1. The system consists of basically two parts, i.e. environment modeling and image rendering. The geometry of most objects is modeled via solid modeling and surface modeling. Nevertheless, shapes of natural


Fig. 1. System structure of PERIS: the solid, surface and procedural models build a geometric data base; together with the texture, material and light-source data bases and the illumination model description, they compose a data structure ready for rendering, which feeds the PS340 immediate display, scan line rendering, ray tracing and two-way ray tracing, followed by composition and display.

objects such as rocks, clouds and trees are usually irregular and difficult to describe by analytic surfaces. In order to cover these objects by computer-generated images, procedural models are developed in PERIS. All the objects are converted into triangles before rendering. Four rendering techniques are available. They present different quality levels of images. Further complex images can be obtained by image composition. Two kinds of user interface are offered. One is provided for system programmers who may need direct access to most of the system routines to enable them to experiment with new ideas. At the interface for general users, details of the modeling and rendering process are shielded and the system is driven simply by system commands, user menus and various interactive devices.

3. MODEL REPRESENTATIONS

3.1 Solid modeling
A complex scene may consist of various objects with different shapes. In order to model these objects, a powerful solid modeler with wide coverage should be included. Simple objects of regular shapes can be input directly. Complicated shapes are created via modeling operations on primitive objects. Nevertheless, these operations are computationally expensive due to the large number of intersection calculations between the boundary surfaces of the objects involved. When a new type of boundary surface is to be included in the domain of the modeler, a set of subroutines must be devised for computing the intersection lines between this surface and all the other types of surfaces available in the system. This not only increases the size of the system, making its maintenance difficult, but also limits its capability for modeling further

complex objects. In designing PERIS, we adopted a different approach. While still retaining their precise surface information, all boundary surfaces of objects are approximated by triangular patches. An efficient triangulation algorithm was also developed to divide a planar polygon into a minimum number of triangles. Thus, the only geometry we have to deal with during modeling operations is the triangle. The problem with this approach is the enormous number of triangles of the two objects involved in intersection tests. To cope with this, a linear octree data structure [12] is established in PERIS. The cubic space of each tree node contains a limited number of triangular patches. During intersection operations only the triangular patches of the two objects occupying the same space are taken into account, hence saving a lot of unnecessary comparisons and computations for intersection tests. Further, the adoption of the unified triangular geometry simplifies the intersection calculation, making the modeling process fast and reliable. It must be mentioned that the intersection lines generated are only approximations to the intersection of the two surfaces. Although they may not lie precisely on either of the surfaces, they suffice for graphics applications. The accurate intersection between the two surfaces can be recovered if necessary by invoking numerical methods, taking the approximation obtained as a rough estimate. To facilitate the modeling process, a set of local operations such as sweep, glue and reflection were incorporated into the system. These operations change the shape of an object locally without the expensive intersection calculation. They offer alternative ways to solid modeling.

3.2 Surface modeling
While the solid modeling process takes care of the overall figuration or topology of an object, the focus of surface modeling is concentrated on the shape or geometry of each boundary surface. The shape of each surface is determined by its control mesh, which conforms to either a rectangular or an arbitrary polygonal topology. Many surface representation schemes are available for a rectangular mesh, amongst which the Bezier patch, B-spline patch and Beta-spline patch are the most successful approaches. These three representation schemes are unified in [28] by a so-called BBB-spline patch representation. Similar to the definition of a Beta-spline surface, a BBB-spline surface consists of a number of piecewise Bezier patches with adequate geometric continuity. A number of parameters are provided for controlling the shape of a BBB-spline patch, more than for a Beta-spline patch. By adopting specific values of the shape parameters, all the other surface representation schemes can be derived. It is worth mentioning that the BBB-spline patch possesses more freedom in surface design, and shape modification can be localized without changing the control mesh. Three possible solutions are available for representing a surface corresponding to a given arbitrary polygonal mesh, i.e. recursive subdivision, the N-sided patch and N patches meeting at one vertex. In PERIS, a new subdivision scheme of recursively cutting and grinding polyhedra [15] is employed. Some useful properties such as ease of shape control, the convex hull feature and localized manipulations are provided by this scheme. In PERIS, a surface modeling process is invoked when a sculptured surface is encountered. Much attention was paid to the system's flexibility in controlling the shape of a sculptured surface and in joining two surfaces with certain geometric continuity.

3.3 Procedural models
Rendering natural scenes is one of the main goals of realistic image synthesis. Because of the infinite number of details embedded in natural objects, they are very difficult to model using the existing surface-oriented techniques. Procedural models, however, offer the best solution to the problem of representing these natural objects [10, 21, 23]. A procedural model usually requires only a sparse data base. It is basically a recursive procedure with the values of its parameters determined by stochastic processes. When a procedural model is executed, a growing number of object details with different appearances is produced. The overall shape of the object is controlled by designating a range for each parameter with its mean value and maximum variance. Note that if the associated stochastic process controls only parameters concerning the spatial occurrence of object details, the execution of procedural models will result in static objects such as mountains, grasses and trees with self-similarity properties. If a time parameter is also included in the


stochastic process, a dynamic sequence of a natural scene will be established. In PERIS, procedural models for mountains, trees and grasses were designed for the composition of a highly realistic image.

4. RENDERING

4.1 Immediate display
Immediate display is very useful both for CAD/CAM applications and for configuring a realistic image. It offers a quick way to check if one's design is just as expected. Immediate display is made available by the PS340, a high-performance computer graphics workstation. As the PS340 is a self-supporting graphics system with its own commands, function networks and interactive devices controlled by its graphics processor, great attention was paid to establishing the communications between PERIS and the PS340. In fact, we tried to include the PS340 as a subsystem of PERIS. Rendering data are picked from the geometric data base, material data base and light source data base (Fig. 1), composing a unified data structure, then converted into a format applicable to the PS340. Function networks are set up via our specially designed interface to enable the users to work with the PS340 interactively. All the display data on the PS340 can be fed back and reprocessed by PERIS for other graphics output.

4.2 Illumination model
To users who are more interested in realistic image synthesis, a low-level interface is offered by PERIS. After modeling the physical environment, a rendering process follows. The intensity function is calculated at the image plane conforming to a specified illumination model. There are several illumination models simulating either locally or globally the light propagation between objects [14, 20, 26]. Most of them are empirical models, however, and lack the support of a theoretical basis. In developing PERIS, we built a new illumination model [32] which not only unifies most of the existing models on a rigorous theoretical basis but also opens a way for users to define their own models. To derive the new illumination model, we will first examine some important concepts in photometry. Let J denote the luminous intensity of a light source along one direction. It is defined as the luminous flux emitted from the light source per unit solid angle around that direction, thus J = dF/dω. In order to describe the illumination of an emissive elemental area dS_i, we define its luminance as the luminous flux it emits per unit solid angle per unit projected area in the direction of radiation, then

I = J/(dS_i cos(i)) = dF/(cos(i) dS_i dω),   (1)


Fig. 2.

where I represents the luminance and i is the angle between the normal vector of dS_i and the direction of its emission. Note that many objects are not self-emissive. They reflect or refract light received from the environment. It is the reflective luminance of these objects that makes them visible and colorful. To determine the reflective luminance of an area dS_j perceived by an observer, we must first calculate the luminous energy captured by dS_j. Without losing generality, assume emissive area dS_i is one of the contributors that radiate luminous energy towards dS_j and I_i represents its corresponding luminance. The luminous flux dF_i emitted from dS_i to dS_j can then be calculated as (Fig. 2)

dF_i = I_i cos(θ_i) dS_i dω_i = I_i cos(θ_i) dS_i (dS_j cos(θ_j)/r²)
     = I_i cos(θ_j) dS_j (dS_i cos(θ_i)/r²)
     = I_i cos(θ_j) dS_j dω_j.

The luminous energy received by dS_j may be absorbed, reflected or transmitted by dS_j. Let K be the fraction by which the luminous flux is reduced by absorption of dS_j and D be the distribution function of its emission per unit solid angle. Thus, the luminous energy reflected by dS_j in the viewing direction can be described by

dF = D dω K dF_i.

Hence the respective luminous intensity J is

J = dF/dω = D K dF_i.   (2)

By definition,

J = I cos(θ) dS_j,   (3)

where I is the luminance of dS_j and θ is the angle between the normal vector of dS_j and the direction of its radiation (here the viewing direction, see Fig. 3). Combining (2) and (3), we get

I = D K dF_i/(cos(θ) dS_j) = D K I_i cos(θ_j) dω_j/cos(θ),

or

I = D K (I_i (N·L) dω_j)/(N·V),   (4)

where unit vectors N, L, V denote the normal of dS_j, the direction from dS_j to dS_i and the viewing direction, respectively. In general, let S be the whole set of emissive areas that make contributions to dS_j; we then derive a new illumination model as follows:

I = ∫_S D K I_i (N·L) dω_j/(N·V),   or   I = ∫_S D K E/(N·V),   (5)

with E representing the luminous flux density of dS_j. By specifying the value of the distribution function D in (5) to

Fig. 3.


(N·V)/π,   (N·V)(R·V)^n/(π(N·L)),   or   exp(−tan²(α)/m²)/(π m² cos⁴(α)(N·L))

(m describes the roughness of a reflective surface) and taking the ambient light component into account, we get the Lambert model, the Phong model and the Cook-Torrance model [3] accordingly. Note that the integration of the distribution function D over the whole space should be equal to one, i.e. (Fig. 3)

∫ D dω = 1.

Surprisingly, this normalization condition is satisfied by none of the three illumination models mentioned above. An improved Cook-Torrance model can be derived by establishing a stochastic model of the surface geometry. Suppose the surface is composed of microfacets (Fig. 4); let unit vector N be the normal of the elemental surface area dS_j located on the XY coordinate plane, let unit vector H be the normal vector of one of the microplanes on dS_j, and let (x, y, z) represent a point on the microplane. Let {z(x, y), y = constant} and {z(x, y), x = constant} be two independent stationary Gaussian processes. It can then be derived using the theory of stochastic processes that the distribution function of the angle α = cos⁻¹(N·H) takes the following form:

p(α) = 2 sin(α) exp(−tan²(α)/m²)/(m² cos³(α)).   (6)

Recall that the direction of the specular reflection of each microplane depends on its normal vector H; thus p(α) also describes the probability density of specularly reflected light in each direction. The distribution function D_s of specular light on a rough surface is then obtained:

D_s = exp(−tan²(α)/m²)/(π m² cos³(α)),   (N·V) > 0.   (7)

It is slightly different from the distribution function in the original Cook-Torrance model, by a factor (N·L)/(N·H). The difference becomes apparent when the incident angle of the arriving light gets large. By specifying different distribution functions D satisfying the normalization condition, users can develop their own models to simulate surfaces with different illumination properties. Such a user interface is provided by PERIS.

4.3 Scan line rendering
Scan line algorithms for shaded image display are recognized as basic and fast rendering algorithms. Classic scan line algorithms capitalize on object coherence to reduce the computation involved, making the algorithms very efficient. Textures, shadows and simple transparency effects can easily be included in scan line algorithms to enhance the realism of the resulting image. The incorporation of a scan line algorithm into PERIS is facilitated by the unified rendering data structure adopted in PERIS. Since each surface is approximated by many triangular patches, direct application of scan line algorithms may produce facet effects. To recover the smoothness of a surface, Phong's smoothing technique is employed. Shadows can also be simulated by introducing shadow volumes into the viewing frustum. The method proposed by Crow [8] can readily be embedded into scan line algorithms with little modification of the program.

4.4 Ray tracing
Ray tracing is the main technique available for highly realistic image synthesis. Global illumination effects such as reflections, refractions, transparency and shadows are achieved naturally by ray tracing without implementing complicated algorithms. The major problem associated with this technique is the large amount of computing time it consumes due to the intersection calculations involved between each ray and all objects in the scene. In PERIS, we adopted a new algorithm using space indexing techniques [19].
Recall the linear octree data structure established in the previous modeling process. It provides an adaptive partitioning of space. A subregion of a space cube is indexed following the scheme illustrated in Fig. 5. A voxel can then be encoded by an octal integer q_1 q_2 … q_N:

Q = q_1·8^(N−1) + q_2·8^(N−2) + … + q_N·8^0,   q_i ∈ {0, 1, 2, 3, 4, 5, 6, 7},

where N denotes the resolution of the modeling space, q_1 identifies the index of its eldest ancestor, q_2 identifies the index of its second eldest ancestor, and so on. To encode a region larger than a voxel, a string of sexadecimal digits F is appended to the end of the code; thus

q_1 q_2 … q_i FF…F

Fig. 4.



Fig. 5.

represents a cubic region of size 2^(N−i) × 2^(N−i) × 2^(N−i). Only the terminal nodes containing boundary surfaces of objects are retained in our linear octree, and they are sorted according to their encoding numbers. The successful adoption of the linear octree data structure for ray tracing owes much to the simplicity of mapping a point to an octree node. Let a point be P(x, y, z); the truncated integers of the three coordinates at resolution M (M ≤ N) have the following binary forms:

X = i_1 i_2 … i_M,   Y = j_1 j_2 … j_M,   Z = k_1 k_2 … k_M,

where i_t, j_t, k_t ∈ {0, 1} and {i_t, j_t, k_t} corresponds to the 2^(M−t) bit. It is easy to show that the octal number

T = t_1 t_2 … t_M,   t_l = i_l + 2 j_l + 4 k_l,   (8)

specifies an octree node whose corresponding cubic region takes (X, Y, Z) as its local origin (Fig. 5), hence containing point P inside. Using the adjacency information provided by the octree data structure, we can now conduct a ray to march forward and intersect the desired object directly. A subtle issue of this approach is how to skip the empty regions efficiently before a ray finally hits a surface. The efficiency of our algorithm is achieved by decreasing the number of regions that a ray has to stride over, by reducing the computations involved in skipping an empty region and by performing a binary search to find the next region. As is well known, a ray to be traced in the object space can be represented mathematically in parametric form:

x = x_0 + d_x t,   y = y_0 + d_y t,   z = z_0 + d_z t,

where (d_x, d_y, d_z) is the ray vector. Let us define the direction along which the ray vector has its longest

component as the leading direction. The direction corresponding to the second longest component of the ray vector is defined as the second direction, and the remaining direction is called the third direction. It is evident that the ray is most likely to exit from the side plane of an empty cubic region perpendicular to the leading direction. If this is the case, two multiplications suffice for calculating the exit point. Otherwise, further calculations proceed, and up to five multiplications might be involved in the rarest case where the ray exits from the side plane perpendicular to the third direction. This compares with Glassner's algorithm [13], in which a ray must be intersected with each of the six bounding planes of an empty region and nine multiplications are always needed for finding the exit point. Note that the local origin and the side length of the concerned region can be obtained simply by decoding the octal number of the respective octree node. Next, we perform a binary search to look for a terminal octree node whose corresponding region contains the exit point, hence adjacent to the region just skipped. If an EMPTY node is found, we expect its corresponding region to be as large as possible so that the ray can make a big leap forward. An octal number q_1 q_2 … q_N specifying the voxel that contains the exit point is derived immediately by using (8). Three cases may result when the binary search is implemented.
• The octal number is fully matched with the code of an existing node in the linear octree. Obviously, it is a voxel node containing boundary surfaces. The ray is tested against these surfaces to examine any possible intersections.
• Only the front i digits of the octal number are matched and the remaining part of the matched code consists of a string of F. Then a PARTIAL node which is an ancestor of the particular voxel at level i has been found, and the intersection process between the ray and the boundary surfaces contained is conducted.
• The octal number is partially matched until the (i + 1)th digit is reached and the (i + 1)th digit of the corresponding node code is not "F." Then a terminal node (EMPTY) at level i + 1 corresponding to the largest homogeneous region covering the particular voxel has been derived. Note that this node, encoded by q_1 q_2 … q_(i+1) FF…F, is implicitly represented by the data structure. The side of the respective cube has length 2^(N−i−1).
By repeatedly carrying out the two steps mentioned above, we can find the strike point of each ray on the boundary surfaces in the scene with little computation.

4.5 Two-way ray tracing
Although ray tracing plays an important role in realistic image synthesis, it does have limitations in the quality of the resulting image as well as in the efficiency of the algorithm. The illumination model posed by ray tracing does not simulate the light propagation emanating from the light sources through the environment. Illumination by some of the luminous flux transmitted via these paths thus cannot be properly presented by ray-traced images. Since ray tracing is a view-dependent process, only the luminous flux transmitted via the reverse path of ray tracing is taken into account. A simple example serves to show this deficiency. As is known, an area of surface directly illuminated by specular lights looks shiny. This appearance might be lost, however, in an image rendered by ray tracing if the area produces little specular reflection or if the specular reflection generated deviates from the viewing direction. Besides, ray tracing takes a substantial amount of time to calculate shadows. For each hit by a ray on a surface, a detective ray is sent towards each light source to test if it is shielded by any object lying in between. In the case of rendering multiple images of the same scene, the shadow calculation has to be performed repeatedly, exploiting no coherence resulting from rendering the same environment. To avoid these limitations, we incorporated a new two-way ray tracing technique. Two-way ray tracing consists of two phases. In the first phase, we trace rays emitted from the light sources. Rays incurred by a specular reflection of a shiny surface or by specular transmission of a transparent surface will be traced further. Areas of surfaces not directly illuminated by a light source are assumed to be in shadow. In the second phase, we perform the inverse ray tracing originated from the eye. A modification is made to the intensity calculation of naive ray tracing so that the illumination of an elemental surface area hit by a ray will include the contributions of its incident specular lights captured in the first phase as directed light sources [24]. Note that the first phase is a view-independent process.
When multiple images of the same scene are to be rendered, this phase can be isolated and implemented only once, thus making great savings in computing time. Again, both phases are fully supported by the linear octree data structure. The fast ray tracing methods using the space indexing techniques described in Section 4.4 are still effective, except that the cubic regions of all octree nodes are kept at voxel size in two-way ray tracing. For each region containing only one boundary surface, a stochastically located point as well as the normal of the surface at this point is reserved for reference. When an incident ray enters such a region, the point serves as the intersection point and a new reflective ray or transmissive ray is generated. Since most of the octree nodes fall into this category, a considerable amount of intersection calculation is avoided.

Fig. 7.

5. IMPLEMENTATION
The PERIS implementation environment includes a Universe 68000 microcomputer, a DEC PDP 11/44 machine and a PS340 graphics workstation. All codes are written in C and run under the UNIX operating system. PERIS is so organized that it can fully explore the benefits of the UNIX programming resources. We have used PERIS as a testbed for experimenting with new ideas in modeling and rendering. Figure 6 was generated by implementing a revised scan line algorithm. Figure 7 was generated by adopting a radiosity method. Figure 8, rendered by PERIS, serves to demonstrate the illumination difference between the original Cook-Torrance model and our improved Cook-Torrance model. In the two subpictures, a copper vase is illuminated by two D6500 light sources. The different appearances of highlights on the vase edge reflect the different specular distribution functions D_s adopted by these two models. We have also used PERIS to generate high-quality images of complex scenes. In fact, PERIS provides convenient ways to define, modify, test and render the image to be synthesized. Figures 9-11 present three different views of a living room. At the initial stage, we modeled each object in the scene and examined its


Fig. 6.

Fig. 8.


Fig. 9.

Fig. 11.

shape by immediate display. The geometry of an object was modified if it was not satisfactory. Objects were then placed into the scene at proper positions. Luminaire data were specified at the second stage. Two light sources were used and the material reflectance curves of the surfaces in the scene were obtained from the material data base. An appropriate illumination model was selected and we got an entire view of the scene by scan line rendering. Illumination effects were adjusted by changing the illumination specifications of the environment. At the third stage, a view-independent light buffer was established to facilitate shadow calculation, and multiple views of this room were finally generated using ray tracing techniques.

Fig. 10.

6. CONCLUSIONS

A programming environment for realistic image synthesis has been described. It offers flexible tools both for geometric modeling and for image rendering. Complicated objects can be built via solid modeling or surface modeling operations, while natural objects are represented by procedural models. Four rendering interfaces are available in PERIS: immediate display, scan line rendering, ray tracing and two-way ray tracing, each having its own advantages and application areas. A linear octree data structure is established to support both the modeling and rendering processes; spatial coherence is exploited in these processes, greatly reducing the computation involved.

In presenting PERIS, we have introduced a new illumination model. Established on a theoretical basis, the new model allows much flexibility in unifying various existing models and in designing special models for rendering surfaces with particular reflection and transmission distributions. As an example, we have derived an improved Cook-Torrance model. The technique of two-way ray tracing has also been advocated as a promising rendering method. It is a two-phase process: during the first phase, the light propagation in the environment is simulated by tracing the light emitted from the light sources; view-dependent images are generated in the second phase via ray tracing, using the illumination information retained in the first phase. It has been shown that two-way ray tracing not only enhances the realism of the resulting images but also increases efficiency when rendering the same environment in different viewing directions.
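The two-phase structure of two-way ray tracing can be sketched as follows (an illustrative outline under simplifying assumptions, not the PERIS implementation; the `trace_light_ray` and `trace_eye_ray` callbacks stand in for the actual intersection machinery):

```python
# Sketch of two-way ray tracing: phase 1 shoots rays from the light
# sources and accumulates the energy arriving at each surface patch
# (view-independent); phase 2 traces eye rays per view and reads the
# stored illumination instead of recomputing it for every image.
from collections import defaultdict

def forward_pass(lights, trace_light_ray, samples_per_light=64):
    """Phase 1: deposit light energy on surface patches."""
    illumination = defaultdict(float)
    for light in lights:
        energy = light["power"] / samples_per_light
        for s in range(samples_per_light):
            # trace_light_ray returns the id of the patch hit, or None
            patch = trace_light_ray(light, s)
            if patch is not None:
                illumination[patch] += energy
    return illumination

def backward_pass(pixels, trace_eye_ray, illumination):
    """Phase 2: per-view rendering that reuses the stored illumination."""
    image = {}
    for px in pixels:
        patch = trace_eye_ray(px)
        image[px] = illumination.get(patch, 0.0) if patch else 0.0
    return image
```

Since the result of the forward pass is independent of the viewpoint, only the cheaper backward pass needs to be rerun for each new viewing direction, which is the source of the efficiency gain noted above.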

REFERENCES

1. J. F. Blinn and M. E. Newell, Texture and reflection in computer generated images. Comm. ACM 19(10) (Oct. 1976).
2. J. F. Blinn, Models of light reflection for computer synthesized pictures. Comp. Graphics 11(2) (1977).
3. R. L. Cook and K. E. Torrance, A reflectance model for computer graphics. Comp. Graphics 15(3) (1981).
4. R. L. Cook, T. Porter and L. Carpenter, Distributed ray tracing. Comp. Graphics 18(3) (1984).
5. R. L. Cook, Shade trees. Comp. Graphics 18(3) (1984).
6. R. L. Cook, Stochastic sampling in computer graphics. ACM Trans. Graphics 5(1) (Jan. 1986).
7. R. L. Cook, L. Carpenter and E. Catmull, The Reyes image rendering architecture. Comp. Graphics 21(3) (1987).
8. F. C. Crow, Shadow algorithms for computer graphics. Comp. Graphics 11(2) (1977).
9. T. Duff, Compositing 3-D rendered images. Comp. Graphics 19(3) (1985).
10. A. Fournier, D. Fussell and L. Carpenter, Computer rendering of stochastic models. Comm. ACM 25(6) (1982).
11. A. Fujimoto, T. Tanaka and K. Iwata, ARTS: Accelerated ray tracing system. IEEE Comp. Graphics Appl. 6(4) (April 1986).
12. I. Gargantini, Linear octrees for fast processing of three-dimensional objects. Comp. Graphics Image Processing 20(4) (1982).
13. A. S. Glassner, Space subdivision for fast ray tracing. IEEE Comp. Graphics Appl. 4(10) (Oct. 1984).
14. R. A. Hall and D. P. Greenberg, A testbed for realistic image synthesis. IEEE Comp. Graphics Appl. 3(11) (Nov. 1983).
15. W. Lu, T. G. Jin and Y. D. Liang, Surfaces generated by cutting and grinding polyhedra with arbitrary topological meshes--C-G surfaces, in Proc. of CADDM'87, Beijing (April 1987).
16. T. Nadas and A. Fournier, GRAPE: An environment to build display processes. Comp. Graphics 21(3) (1987).
17. Q. S. Peng, Volume modeling for sculptured objects. Ph.D. thesis, University of East Anglia (Sept. 1983).
18. Q. S. Peng, A scan line algorithm for displaying sculptured objects, in Proc. of International Conf. on Engineering and Computer Graphics, Beijing (Aug. 1984).
19. Q. S. Peng, Y. N. Zhu and Y. D. Liang, A fast ray tracing algorithm using space indexing techniques, in Proc. of EUROGRAPHICS'87, Amsterdam (Aug. 1987).
20. B. T. Phong, Illumination for computer generated pictures. Comm. ACM 18(6) (June 1975).
21. W. T. Reeves, Particle systems--A technique for modeling a class of fuzzy objects. ACM Trans. Graphics 2(2) (April 1983).
22. W. T. Reeves, D. H. Salesin and R. L. Cook, Shadowing with texture maps. Comp. Graphics 21(3) (1987).
23. A. R. Smith, Plants, fractals and formal languages. Comp. Graphics 18(3) (1984).
24. D. R. Warn, Lighting controls for synthetic images. Comp. Graphics 17(3) (1983).
25. J. R. Wallace, M. F. Cohen and D. P. Greenberg, A two-pass solution to the rendering equation: A synthesis of ray tracing and radiosity methods. Comp. Graphics 21(3) (1987).
26. T. Whitted, An improved illumination model for shaded display. Comm. ACM 23(6) (June 1980).
27. T. Whitted and D. M. Weimer, A software testbed for the development of 3D raster graphics systems. ACM Trans. Graphics 1(1) (1982).
28. X. Ye, Geometric continuity and geometric continuous curves and surfaces. Master's thesis, Dept. of Math., Zhejiang Univ. (1987).
29. F. Yamaguchi and T. Tokieda, A unified algorithm for Boolean shape operations. IEEE Comp. Graphics Appl. 4(6) (1984).
30. Y. N. Zhu, Generation of highly realistic images, in Proc. of the First Chinese Annual Conf. on Computer Graphics, Hangzhou (April 1985).
31. Y. N. Zhu, A near real time shaded display algorithm for CSG representations, in Proc. of the First Chinese Annual Conf. on Computer Graphics, Hangzhou (April 1985).
32. Y. N. Zhu, Illumination theory and its applications for realistic image synthesis, in Proc. of CADDM'87, Beijing (1987).
