
A System for Human Jaw Modeling Using Intra-Oral Images

Sameh M. Yamany and Aly A. Farag
Computer Vision and Image Processing Laboratory, Department of Electrical Engineering, University of Louisville, Louisville, KY 40292
E-mail: (yamany,farag)@cairo.spd.louisville.edu, Phone: (502) 852-6130, Fax: (502) 852-1580

Abstract: A novel integrated system is developed to obtain a record of the patient's occlusion using computer vision. Data acquisition is performed with an intra-oral video camera. A modified Shape from Shading (SFS) technique using perspective projection and camera calibration is then used to extract accurate 3D information from a sequence of 2D images of the jaw. A novel technique for 3D data registration using the Grid Closest Point (GCP) transform and genetic algorithms (GA) is used to register the output of the SFS stage. Triangulation is then performed, and a solid 3D model is obtained via a rapid prototype machine. The overall purpose of this research is to develop a model-based vision system for orthodontics that will replace traditional approaches and can be used for diagnosis, treatment planning, surgical simulation and implant planning.

Keywords: Image Sequence Analysis, Shape Representation, Registration.

I. Introduction

Orthodontic treatment involves the application of force systems to teeth over time to correct malocclusion. In order to evaluate tooth movement progress, the orthodontist monitors this movement by means of visual inspection, intra-oral measurements, fabrication of plastic models (casts), photographs and radiographs, a process which is both costly and time consuming. Additionally, repeated acquisition of radiographs may result in undesired side effects. Obtaining a cast of the jaw is a complex operation for the orthodontist, an unpleasant experience for the patient, and it may not provide all the details of the jaw. Current technology in dental radiography can provide the orthodontist with 3D information of the jaw. This includes computed tomography (CT), tomosynthesis, tuned-aperture computed tomography (TACT) and localized computed tomography. While dental radiology is now widely accepted as a routine technique for dental examinations, the equipment is rather expensive and the resolution, although adequate for maxillofacial imaging, is still too low for 3D dental visualization. Furthermore, the dose required to enhance the resolution is unacceptably high. Recently, efforts have been devoted to computerized diagnosis in orthodontics.

This work was supported in part by grants from the Whitaker Foundation and the NSF (ESC-9505674).

Bernard et al. [1] developed an expert system for orthodontic measurements. However, the cephalometric measurements that are fed to the expert system are still acquired manually from the analysis of X-rays and plaster models. Laurendeau et al. [2] presented a computer-vision technique for the acquisition of orthodontic data from cheap dental wafers; the system is only capable of obtaining the teeth imprints and not the whole 3D jaw model. Stereo photogrammetry has been used to obtain the 3D jaw information from the cast [3], but user interaction is needed in such systems to determine the 3D coordinates of some points on the cast. Other systems that can measure the 3D coordinates have been developed using either mechanical contact [4] or the traveling-light principle [5].

In this paper we describe a novel system (shown in figure 1) that obtains the 3D model not from a cast but from the actual human jaw. The system consists of several modules. First, a sequence of images is obtained from overlapping segments of the jaw using a small calibrated intra-oral camera. Accurate shape from shading (SFS), enhanced by using the calibrated camera's perspective projection, is applied to obtain a 3D representation of each segment. The resulting 3D points from each segment are merged/registered using a novel and fast registration technique. A priori information is also used to align the registered segments on the correct geometry of the jaw. Once the 3D cloud of points is obtained, a surface mesh is fitted to these points to obtain a closed-contour model with no ambiguity and in a format suitable for the rapid prototype machine. Further processing is performed to separate the individual 3D information for each tooth.

The originality of this work comes from the fact that data acquisition is performed directly on the human jaw using a small off-the-shelf CCD camera. The acquisition is relatively short, totally painless and not unpleasant for the patient, a marked improvement over current practices. The acquired digital model can be stored with the patient data and retrieved on demand. These models can also be transmitted over a communication network to different remote dentists for further assistance in diagnosis. Dental measurements and virtual restoration could be performed and analyzed. Moreover, such a model will be a tremendous asset in dental training and teaching.

[Figure 1 block diagram: Data Acquisition (CCD camera) -> Images -> Filtering and Specularity Removal -> Camera Calibration and Perspective Projection Calculation -> Shape From Shading (SFS) -> 3D Points -> Registration -> Cloud of Points -> Surface Fitting -> Jaw Model -> Teeth Separation, feeding the Rapid Prototype machine and applications such as implant planning, tooth movement and surgical simulation.]

Fig. 1. (Up) System overview. (Down) Dentist office setup to extract the jaw model using a single CCD camera. We used a head model (called "Happy") that exactly resembles the human head and jaw.

II. Shape from Shading using Perspective Projection and Camera Calibration

Shape From Shading (SFS) has been primarily studied by Horn [6] and his colleagues at MIT. Since then there have been many developments in the algorithms [7][8][9]. Tsai and Shah [10] described a linear technique to solve the SFS problem that is well suited to our dental application because of its speed and accuracy. This is important because the reconstruction of the whole jaw requires the application of the SFS algorithm to more than 30 images. However, their approach, along with most of the approaches referenced in the literature, uses orthographic projection instead of perspective projection. In the case of real images, significant errors may occur when the orthographic projection model is adopted instead of the perspective one, and these errors are magnified when the camera is near the object, as it is in the dental application. Some SFS approaches using perspective projection can be found in the literature [11][12][13]; in these approaches the measurements obtained are not metric, as they lack the information about the camera parameters. In this section we present an SFS approach that uses the linear approximation implemented by Tsai and Shah [10] and extends it to incorporate the camera perspective projection.

A camera calibration process that takes into consideration both the intrinsic and the extrinsic camera parameters is performed once, before acquiring the sequence of images. To calibrate the camera, we use the relation between the 3D coordinates \tilde{M} = \{X, Y, Z\} and the image coordinates \tilde{m} = \{x, y\},

  s\,m = P\,M

where s is a scalar, m and M are the extended vectors [\tilde{m}\ 1]^T and [\tilde{M}\ 1]^T, and P is called the camera calibration matrix. In general, P = A\,[R\ \ t], where A is a matrix containing all the camera intrinsic parameters and R, t are the rotation matrix and translation vector. The matrix P has 3 x 4 = 12 elements but only 11 degrees of freedom because it is defined up to a scale factor. The usual method of calibration is to use an object of known size and shape and to extract N reference points from the object image; it can be shown that given N points (N >= 6) in general position, the camera can be calibrated. The perspective projection matrix P can be decomposed as [B\ \ b], where B is a 3 x 3 matrix and b is the translation vector. Hence

  s\,m = B\tilde{M} + b    (1)

or,

  \tilde{M} = B^{-1}(s\,m - b)    (2)

This is the equation of a line in 3D space, corresponding to the visual ray passing through the optical center and the projected point m.
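One standard way to carry out this calibration (not necessarily the authors' exact procedure) is the direct linear transformation: stack two constraints per reference point, take the null vector of the resulting system, and reshape it into P. The sketch below also implements equation (2) by back-projecting a pixel onto its visual ray; the function names are placeholders, not code from the paper.

```python
# A minimal sketch (not the authors' implementation) of camera calibration via the standard
# direct linear transformation (DLT) and of eq. (2): back-projecting a pixel onto the visual
# ray M = B^{-1}(s*m - b). Requires N >= 6 reference points in general position.
import numpy as np

def estimate_P(world_pts, image_pts):
    """world_pts: (N, 3) points on the calibration object; image_pts: (N, 2) measured pixels."""
    rows = []
    for (X, Y, Z), (x, y) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z, -x])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z, -y])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return Vt[-1].reshape(3, 4)        # P is defined up to scale: 11 degrees of freedom

def visual_ray_point(P, pixel, s):
    """Eq. (2): the 3D point at scalar s on the ray through the optical center and `pixel`."""
    B, b = P[:, :3], P[:, 3]
    m = np.array([pixel[0], pixel[1], 1.0])
    return np.linalg.solve(B, s * m - b)
```

Sweeping s in visual_ray_point traces the line described by equation (2); the SFS stage determines which value of s belongs to each pixel.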

By finding the scalar s, s(x, y) will define a unique 3D point \tilde{M} on the object. Using the image brightness equation found in [10],

  E(x,y) = R(p,q) = \frac{1 + p\,p_l + q\,q_l}{\sqrt{1+p^2+q^2}\,\sqrt{1+p_l^2+q_l^2}}    (3)

where p_l and q_l are the light source coordinates and p and q are the gradients along X and Y at the object point \tilde{M}; they can simply be written as

  p = s(x,y) - s(x-1,y),  q = s(x,y) - s(x,y-1)    (4)

which actually represents the gradient along a triangular surface patch. The brightness equation can now be written as a function of s(x, y):

  g(E(x,y), s(x,y), s(x-1,y), s(x,y-1)) = 0    (5)

Using the Taylor series expansion of the above equation, we applied the Jacobi iterative method to obtain the 3D map s(x, y), which gives a unique scalar to every point in the image, thus defining a unique 3D point on the object.
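To make equations (3)-(5) concrete, the sketch below evaluates the reflectance map and the per-pixel brightness residual, taking g = E - R as in Tsai and Shah's linear scheme. It is illustrative only: the single known light direction (p_l, q_l), the array indexing, and the function names are assumptions, not the authors' code.

```python
# Sketch of eqs. (3)-(5) for a single pixel (x, y) with x, y >= 1, assuming g = E - R.
import numpy as np

def reflectance(p, q, p_l, q_l):
    """Eq. (3): Lambertian reflectance map R(p, q) for light source direction (p_l, q_l)."""
    return (1 + p * p_l + q * q_l) / (np.sqrt(1 + p**2 + q**2) * np.sqrt(1 + p_l**2 + q_l**2))

def brightness_residual(E, s, x, y, p_l, q_l):
    """Eq. (5): g(E(x,y), s(x,y), s(x-1,y), s(x,y-1)) using the discrete gradients of eq. (4)."""
    p = s[x, y] - s[x - 1, y]      # eq. (4): gradient along X over a triangular patch
    q = s[x, y] - s[x, y - 1]      # eq. (4): gradient along Y over a triangular patch
    return E[x, y] - reflectance(p, q, p_l, q_l)
```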

At the n-th iteration, s^n(x, y) is given as follows:

  s^{n}(x,y) = s^{n-1}(x,y) + \frac{-g(s^{n-1}(x,y))}{d g(s^{n-1}(x,y)) / d s(x,y)}    (6)

where

  \frac{d g(s^{n}(x,y))}{d s(x,y)} = \frac{-(p_l + q_l)}{\sqrt{1+p^2+q^2}\,\sqrt{1+p_l^2+q_l^2}} + \frac{(p+q)(1 + p\,p_l + q\,q_l)}{\sqrt{(1+p^2+q^2)^3}\,\sqrt{1+p_l^2+q_l^2}}    (7)

Now, assuming the initial estimate s^0(x, y) = 0 for all pixels, the 3D map can be iteratively refined using equation (6).
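A compact, vectorized sketch of the update in equations (6) and (7) might look as follows. It is illustrative rather than the authors' implementation: boundary pixels are handled by wrap-around for brevity, the light direction (p_l, q_l) is assumed known, and a fixed iteration count replaces a proper convergence test.

```python
# Sketch of the Jacobi-style refinement of eqs. (6)-(7): every pixel's scalar s(x, y) is
# updated from the previous iterate, starting from s^0 = 0. Wrap-around boundaries and a
# fixed iteration count are simplifications for illustration.
import numpy as np

def sfs_iterate(E, p_l, q_l, n_iters=50, eps=1e-8):
    s = np.zeros_like(E, dtype=float)                 # initial estimate s^0(x, y) = 0
    norm_l = np.sqrt(1 + p_l**2 + q_l**2)
    for _ in range(n_iters):
        p = s - np.roll(s, 1, axis=0)                 # eq. (4): s(x, y) - s(x-1, y)
        q = s - np.roll(s, 1, axis=1)                 # eq. (4): s(x, y) - s(x, y-1)
        norm_g = np.sqrt(1 + p**2 + q**2)
        g = E - (1 + p * p_l + q * q_l) / (norm_g * norm_l)                # eq. (5) at s^{n-1}
        dg = (-(p_l + q_l) / (norm_g * norm_l)                             # eq. (7), first term
              + (p + q) * (1 + p * p_l + q * q_l) / (norm_g**3 * norm_l))  # eq. (7), second term
        dg = np.where(np.abs(dg) < eps, eps, dg)      # guard against division by zero
        s = s + (-g) / dg                             # eq. (6): Newton/Jacobi update per pixel
    return s                                          # s(x, y) fixes the 3D point via eq. (2)
```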

Fig. 2. The results of the SFS applied on two intra-oral images. Each 3D surface is shown from different view angles together with the corresponding wire-frame.

The output of applying this SFS algorithm to each jaw segment image will be an array of 3D points describing the teeth surface in this segment. However, there is no relation between the 3D points of a segment and the following one. Thus we needed a fast and accurate 3D registration technique to link the 3D points of all the segments and produce one set of 3D points describing the whole jaw surface. Details of our novel 3D registration using the Grid Closest Point (GCP) transform and Genetic Algorithms (GA) can be found in [14]. This technique was faster and more accurate than the existing techniques found in the literature.
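The actual GCP/GA registration is specified in [14] and is not reproduced here. Purely as a rough illustration of the two ingredients the name suggests (a precomputed closest-point grid that turns distance queries into table look-ups, and a genetic-style search over the six rigid-body parameters), a toy sketch could look as follows; every function name, parameter and GA detail below is a hypothetical simplification, not the published algorithm.

```python
# Toy illustration of grid-accelerated closest-point matching driven by a genetic-style
# search over rigid-body parameters. Not the GCP/GA algorithm of reference [14].
import numpy as np

def build_grid(ref, cell):
    """For each cell of a box around `ref`, store the index of the closest reference point."""
    lo = ref.min(axis=0) - cell
    shape = tuple(np.ceil((ref.max(axis=0) + cell - lo) / cell).astype(int))
    centers = lo + (np.indices(shape).reshape(3, -1).T + 0.5) * cell
    d = np.linalg.norm(centers[:, None, :] - ref[None, :, :], axis=2)   # brute force; toy sizes only
    return lo, cell, d.argmin(axis=1).reshape(shape)

def cost(points, ref, grid):
    """Mean distance from the points to their (grid-approximated) closest reference points."""
    lo, cell, nearest = grid
    idx = np.clip(((points - lo) / cell).astype(int), 0, np.array(nearest.shape) - 1)
    return np.linalg.norm(points - ref[nearest[idx[:, 0], idx[:, 1], idx[:, 2]]], axis=1).mean()

def transform(params, pts):
    """Rigid-body transform parameterized by three Euler angles and a translation."""
    rx, ry, rz, t = params[0], params[1], params[2], params[3:]
    cx, sx, cy, sy, cz, sz = np.cos(rx), np.sin(rx), np.cos(ry), np.sin(ry), np.cos(rz), np.sin(rz)
    R = (np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]) @
         np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]) @
         np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]]))
    return pts @ R.T + t

def register(moving, ref, cell=1.0, pop=60, gens=100, seed=0):
    """Genetic-style search over the 6 rigid-body parameters using the grid cost."""
    grid = build_grid(ref, cell)
    rng = np.random.default_rng(seed)
    population = rng.normal(0.0, 0.1, (pop, 6))
    best, best_cost = population[0].copy(), np.inf
    for _ in range(gens):
        costs = np.array([cost(transform(p, moving), ref, grid) for p in population])
        order = costs.argsort()
        if costs[order[0]] < best_cost:
            best, best_cost = population[order[0]].copy(), costs[order[0]]
        parents = population[order[: pop // 4]]                      # selection
        children = parents[rng.integers(0, len(parents), pop - 1)]   # reproduction
        population = np.vstack([best, children + rng.normal(0.0, 0.05, (pop - 1, 6))])  # elitism + mutation
    return best, best_cost
```

In the paper's pipeline, `moving` and `ref` would be the 3D point sets of two overlapping jaw segments produced by the SFS stage.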

III. Results and Discussion

Figure 2 shows two intra-oral images of two teeth segments and the resulting SFS surfaces. We then applied the 3D registration directly to the output of the SFS module. It took two minutes on average to register two consecutive segments. This was still time consuming because the 3D coordinates from each segment are uncorrelated. We reduced the registration time by mounting the camera on a 3D digitizer arm, as shown in figure 3. Using equation (1), we can obtain the location and direction of the optical center from the camera calibration process, M_{oc} = -B^{-1} b, where M_{oc} is the location of the optical center of the camera relative to the origin of the world coordinate system used in the camera calibration. By mounting the camera on the digitizer arm while calibrating it, M_{oc} can always be referenced to the origin of the digitizer. The digitizer has five degrees of freedom, which is suitable for the movement of the camera inside the oral cavity. For each segment, the transformation matrix corresponding to the new location of M_{oc} is calculated and the resulting 3D points are corrected accordingly. This procedure greatly reduced the time required for registration.

Fig. 3. The 3D digitizer used to track the camera position inside the oral cavity.

Yet the registration is still an essential part of the process because it compensates for slight patient movement during the data acquisition and for SFS errors. Using all of the above-mentioned techniques and algorithms, a 3D model of the jaw was obtained. We started by taking a sequence of 16 images of the upper jaw of the head model. Filtering and specularity removal is then applied to the images. These images are then fed to the SFS module and, with the a priori information on the perspective projection parameters of the camera, 3D surfaces representing each image are obtained. The data are then corrected using the transformation information from the digitizer arm. 3D registration is then performed, and the output is shown in figure 4.
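Under the assumptions above (the camera is calibrated once while mounted on the digitizer arm, and the arm reports a rigid pose for each segment), the correction step might look like the following sketch. The function names and the exact bookkeeping of the arm pose against the calibration frame are assumptions, not details from the paper.

```python
# Illustrative sketch of the digitizer-based correction: compute the optical center from the
# calibration matrix (M_oc = -B^{-1} b) and map each segment's SFS points into the common
# digitizer frame using the rigid pose (R_arm, t_arm) reported by the arm for that segment.
import numpy as np

def optical_center(P):
    """Location of the optical center implied by the calibration matrix P = [B | b]."""
    B, b = P[:, :3], P[:, 3]
    return -np.linalg.solve(B, b)

def correct_segment(points, R_arm, t_arm):
    """Rigidly transform one segment's 3D points (N x 3) into the digitizer's frame."""
    return points @ R_arm.T + t_arm
```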

Fig. 4. The output of registering 16 images to obtain the upper jaw model. The model is shown here from different view angles.

Taking into consideration the fact that the data were acquired directly from inside the oral cavity, which is a very restricted and hard-to-maneuver environment, the resulting jaw model is faithful enough to show all the information about the patient's actual jaw in a metric space. Both the time and the convenience for the patient must be considered when comparing this result with the result of scanning a cast. The results could be further improved by incorporating a structured-light system to obtain an initial 3D mesh that improves the output of the SFS technique; yet this would make the system more bulky and harder to maneuver inside the oral cavity.

IV. Conclusions and Future Work

The 3D reconstruction of the human jaw has tremendous applications. The model can be stored along with the patient data and retrieved on demand. The model can also be used in telemedicine because it can be transmitted over a communication network to different remote dentists for further assistance in diagnosis. Dental measurements and virtual restoration could be performed and analyzed. This paper describes the details of a complete system that acquires the jaw data directly from the oral cavity using a small calibrated CCD intra-oral camera. The 3D metric information is obtained using SFS and the calculated intrinsic and extrinsic camera parameters. Registration of the 3D surface segments is performed using the novel and fast GCP/GA technique.

The described system is the first phase in a project to replace current orthodontics practices. The next phase includes the analysis and simulation of orthodontic operations. The authors have some preliminary results in this second phase, which involves finite element analysis performed on the segmented teeth [15]. The results obtained can be further improved using an integrated system combining photometric stereo vision and shape from shading: stereo can be used to calculate the height of certain pixels (e.g. edges), and shape from shading can be used to estimate the normal vector at all pixels.

References

[1] C. Bernard, "Computerized diagnosis in orthodontics," Proc. 66th Gen. Session Int. Assoc. Dental Res., Montreal, 1988.
[2] D. Laurendeau and D. Poussart, "A computer-vision technique for the acquisition and processing of 3-D profiles of dental imprints: An application in orthodontics," IEEE Transactions on Medical Imaging 10, pp. 453-461, Sep 1991.
[3] S. Berkowitz, G. Gonzalez, and L. Nghiem-Phu, "An optical profilometer - a new instrument for the three-dimensional measurement of cleft palate casts," Cleft Palate J. 19, pp. 129-138, 1982.
[4] F. P. G. M. van der Linden, H. Boesma, T. Z. K. A. Peters, and J. H. Raben, "Three-dimensional analysis of dental casts by means of optocom," J. Dent. Res. 51, p. 1100, 1972.
[5] A. A. Goshtasby, S. Nambala, W. G. deRijk, and S. D. Campbell, "A system for digital reconstruction of gypsum dental casts," IEEE Transactions on Medical Imaging 16, pp. 664-674, Oct 1997.
[6] B. K. P. Horn and M. J. Brooks, Shape from Shading, Cambridge, Mass.: MIT Press, 1989.
[7] R. Kimmel and A. Bruckstein, "Tracking level sets by level sets: A method for solving the shape from shading problem," Computer Vision and Image Understanding 62(2), pp. 47-58, 1995.
[8] G. Q. Wei and G. Hirzinger, "Learning shape from shading by a multilayer network," IEEE Transactions on Neural Networks 7, pp. 985-995, July 1996.
[9] R. Zhang, P. Tsai, J. Cryer, and M. Shah, "Analysis of shape from shading techniques," Proc. Computer Vision and Pattern Recognition, pp. 377-384, 1994.
[10] P. S. Tsai and M. Shah, "A fast linear shape from shading," IEEE Conference on Computer Vision and Pattern Recognition, pp. 734-736, July 1992.
[11] K. M. Lee and J. Kuo, "Shape from shading with perspective projection," Computer Vision, Graphics, and Image Processing: Image Understanding 59, pp. 202-212, 1994.
[12] J. K. Hasegawa and C. L. Tozzi, "Shape from shading with perspective projection and camera calibration," Computers & Graphics 20(3), pp. 351-364, 1996.
[13] K. M. Lee and J. Kuo, "Shape from shading with generalized reflectance map model," Computer Vision and Image Understanding 67, pp. 143-160, August 1997.
[14] S. M. Yamany, M. N. Ahmed, and A. A. Farag, "Novel surface registration using the grid closest point (GCP) transform," IEEE International Conference on Image Processing, Chicago, October 1998.
[15] S. M. Yamany and N. A. Mohamed, "Computer vision application in orthodontic," Technical Report TR-CVIP966, CVIP Lab, Univ. of Louisville, KY 40292, Dec 1996.
