Real-time Panorama Canvas of Natural Images

Beom Su Kim, Sang Hwa Lee, and Nam Ik Cho, Member, IEEE

Abstract — This paper deals with a real-time panorama algorithm for mobile camera systems. The proposed system generates panoramic images and simultaneously shows the intermediate results on the mobile display. The proposed panorama system consists of feature point extraction, feature tracking, rotation matrix estimation, and image warping onto a cylindrical surface. Feature points are extracted by a fast Hessian detector in the low resolution and by a corner detector in the higher resolutions. Then, the feature points are tracked by matching between the input frame and the synthesized panorama image. The camera motion is modeled as a rotation matrix, which is estimated using the tracked feature points. For real-time operation of panoramic image synthesis, we propose a method to estimate the rotation matrix using a non-iterative least square method. We project the feature points of the image onto the unit sphere in the world coordinates. This enables us to estimate the parameters of the rotation matrix as a linear, non-iterative problem. Finally, we project the input frames onto the panorama surface using the rotation matrix. We also implement a real-time display interface to show the intermediate panorama results while the panorama image is being generated. Thus, the proposed panorama system paints the panorama canvas with real images. According to experiments on mobile systems such as a mobile phone and a tablet PC, the proposed system works well to generate panoramic images in real-time.

Index Terms — Panorama image, least square method, rotation matrix update, mobile system.

I. INTRODUCTION

Panoramic image synthesis stitches multiple consecutive images together on a common virtual surface. The overlapped regions between images are first matched, and the camera motion is estimated as a transformation matrix. Then, the images are warped onto the panorama surface using the estimated camera motion and the geometric relation between the panorama surface and the image coordinates [1], [2]. Panoramic images provide users with wide scenes that cannot be captured in a single picture by usual cameras. Thus, panorama synthesis overcomes the limitations on view angle and resolution of usual cameras. Recently, panorama image systems have become popular in mobile camera and PC environments.

Beom Su Kim, Sang Hwa Lee, and Nam Ik Cho are with the Department of Electrical Engineering, BK21 Information Technology, INMC, Seoul National University, Kwanak-gu, Seoul, 151-742, South Korea (e-mails: [email protected], [email protected], [email protected]).


Many research algorithms and commercial systems have been reported [5]-[7], [14], [18]. In panorama algorithms, feature matching and transformation estimation are the most important procedures, since the images are spatially warped by the estimated transformation. Early panorama systems assumed fixed camera motions, such as horizontal rotations with fixed angles, using user-constrained interfaces. This simplified the calculation of the transformation matrix with high accuracy, but the degree of freedom in handling panoramic images was restricted [7]. Brown and Lowe exploited a descriptor-based feature, SIFT, to match image correspondences and to estimate arbitrary camera motions automatically [5]. Descriptor-based features such as SIFT [3] and SURF [8] improved the performance of automatic panorama synthesis. However, since feature detection and automatic feature matching require a heavy computational load, these approaches are not suitable for mobile camera systems with lower computing power. Adams et al. proposed a feature tracking method to estimate translational camera motions in mobile systems [14]. Even though this approach restricted the camera motion to simple translation, the feature matching and transform estimation operated on mobile systems. Wagner et al. developed feature tracking to estimate a 3-DOF (degrees of freedom) rotation matrix in mobile systems [18]. Their panorama algorithm tracked the feature points in hierarchical multi-resolutions to decrease the time consumption, and iteratively updated the rotation matrix using the Gauss-Newton method. The panorama system in [18] generated panoramic images in real-time.

After spatially transforming the image coordinates, it is important to synthesize multiple images onto the common panorama surface without seams. Efros and Freeman determined the boundaries of overlapped images using dynamic programming, which selects the optimal boundary path in the overlapped regions [19]. Multi-band blending [5] and image gradient methods [20] were used to smooth the boundaries between the overlapped images by assigning adaptive weights according to the local image characteristics. Due to the different camera viewpoints, the images are subject to different exposures and brightness. Some approaches try to remove the exposure differences and radiometric distortions of cameras. Uyttendaele et al. corrected the exposure differences of images using block-based exposure adjustment [21]. Goldman and Chen eliminated the vignetting aberration of the camera by introducing an anti-vignette function obtained from radiometric camera calibration [22].

This paper proposes real-time panorama algorithms using feature matching in mobile systems.




In particular, we focus on the fast estimation of the rotation matrix, which is the main process for real-time, automatic panorama synthesis. We turn the non-linear, iterative problem of estimating the rotation matrix into a linear, non-iterative one without loss of performance. This linear, non-iterative process improves the operation speed and eliminates the floating-point precision problem of fixed-point integer coding on mobile devices. We demonstrate the proposed panorama algorithms on usual mobile systems such as mobile phones and a tablet PC.

The rest of the paper is organized as follows. Section II describes the proposed panorama algorithms in detail. Experimental results and a demonstration on mobile phones are shown in Section III. Finally, we conclude the paper in Section IV.

II. PROPOSED PANORAMA SYSTEM

The proposed panorama algorithm consists of feature extraction, feature tracking in multi-resolutions, rotation matrix estimation, warping, and a display interface. The proposed system generates the panoramic images in real-time and shows the intermediate panorama results on the display at the same time. We describe each algorithm in detail.

A. Feature Extraction

To estimate the camera motions automatically without user restriction, robust feature matching is required. We first extract the feature points using fast methods from the literature. We implement the feature extraction in three multi-resolutions to speed up the process. Since features are not well detected in the lowest resolution due to noise and quantization error, we apply the fast Hessian detector in [8],

$$
H = \begin{bmatrix} L_{xx} & L_{xy} \\ L_{xy} & L_{yy} \end{bmatrix}, \qquad
L_{xx} = \frac{\partial^2}{\partial x^2} G, \quad
L_{xy} = \frac{\partial^2}{\partial x \partial y} G, \quad
L_{yy} = \frac{\partial^2}{\partial y^2} G, \tag{1}
$$

where G is the Gaussian function. A recent study on feature detectors for visual tracking shows that the fast Hessian detector has the best repeatability even though the number of extracted features is small [11]. The Laplacian operations in (1) are approximated by binary calculations for fast processing. In the next higher resolutions, we use the faster corner detector in [10], since feature points are extracted well in the higher resolutions using corner points alone. For each feature point, we define the descriptor simply as the image pixels in a 7 × 7 window; modeling a full descriptor is a time-consuming process, so we just use the image block for feature matching.

B. Feature Tracking

Feature matching takes much time to search for the correspondences between images: every detected feature has to be compared against all candidate features to find the best correspondence. Feature matching has thus been a bottleneck for real-time operation in feature-based methods. Wagner et al. proposed a good method to decrease the time consumption of feature matching by tracking the previously detected features in the next image [18]. We also apply this tracking method for feature matching.

Fig. 1 shows the panorama canvas and the feature tracking. The panorama canvas is partitioned into 64 × 64 pixel blocks called cells. When a cell is completely filled with warped images, up to 40 feature points are extracted in the cell. To track features on the panorama canvas, we reproject the warped image on the panorama canvas to the input image coordinates, and perform block-based matching within a search range. The initial position and rotational pose of the input frame are guessed from those of the previous frames on the reprojected panorama canvas. The features to track are chosen from the cells that overlap between the panorama canvas and the input frame warped onto the panorama canvas using the initial camera pose. The selected features are first matched in the lowest resolution, and the tracking result of the lower resolution is refined in the higher resolutions. This hierarchical tracking scheme decreases the time consumption while preserving the tracking accuracy. We should note that features are not extracted in the input frame for the tracking process; the features on the panorama canvas are tracked in the input frame. When a cell is completely filled with input frames, the features in the cell are extracted and stored for tracking in the next frames.

Fig. 1. Panorama canvas and feature tracking. The cell (green block) is a 64 × 64 pixel block and is the basic unit for extracting features (red points). The input frame is initially located at the position and rotational pose of the previous frame. Then, block matching is performed in the search range for the features. The tracking results in the lower resolutions are updated in the higher resolutions.
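As an illustration of the block-based matching described above, the following minimal sketch tracks one panorama feature in an input frame. It is not the authors' implementation: it assumes grayscale numpy images, a sum-of-squared-differences cost over the 7 × 7 patch (the paper does not name its matching cost), and the hypothetical function name track_feature. In the hierarchical scheme, this would run first at the lowest resolution and then be refined with a smaller search range at the higher resolutions.

```python
import numpy as np

def track_feature(panorama, frame, feat_uv, init_xy, search=8, half=3):
    """Track one panorama feature (7x7 patch) in the input frame by block
    matching around the initial guess from the previous camera pose.
    A sketch under the assumptions stated above."""
    u, v = feat_uv
    patch = panorama[v - half:v + half + 1, u - half:u + half + 1].astype(np.float32)
    h, w = frame.shape
    x0, y0 = init_xy
    best_cost, best_xy = np.inf, init_xy
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = x0 + dx, y0 + dy
            if x < half or y < half or x + half >= w or y + half >= h:
                continue                         # candidate window leaves the frame
            cand = frame[y - half:y + half + 1, x - half:x + half + 1].astype(np.float32)
            cost = np.sum((cand - patch) ** 2)   # SSD matching cost (assumed)
            if cost < best_cost:
                best_cost, best_xy = cost, (x, y)
    return best_xy, best_cost
```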

C. Selection of Features

The features should be distinctive to be matched well. Usually, the features to be tracked are selected from the detected features based on their strength in the detection process: the response of the feature detector is evaluated, and the features with the highest strengths are selected for tracking [23]. In our feature detection process, each feature likewise has a strength from the Hessian matrix or the corner detection filter.

We propose a method to select features that considers the spatial distribution of the features. When only the strength is used for selection, the features concentrate on the complex regions of the panorama image, since the strengths of feature detectors are usually high in such regions. This concentration of features increases the matching error and the warping distortion due to localized matching. It is therefore desirable to distribute the features as uniformly as possible so that they reflect the characteristics of the whole image.

We propose a new feature strength based on the spatial distance between features. First, we select the feature with the highest strength from the feature detection. Then, we take the feature with the second highest strength and assign the spatial distance from the first feature to it as its new strength. The feature with the third highest strength receives as its new strength the minimum distance from the two previously considered features. This process of assigning the minimum distance from the previously considered features as the new strength is repeated for the next highest features. The proposed method selects the features uniformly over the image, so that the tracking is stable. Fig. 2 illustrates the proposed selection based on spatial distances: the minimum distance becomes the new strength of a feature. Thus, some features with high strengths in the feature detection are eliminated from tracking.
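This procedure can be read as a distance re-weighting pass over the strength-sorted features, similar in spirit to the adaptive non-maximal suppression of [23]. Below is a minimal numpy sketch under that reading; the function name and tie-breaking details are our assumptions, not the paper's code.

```python
import numpy as np

def select_features(points, strengths, num_select):
    """Re-score strength-sorted features by the minimum spatial distance to
    the previously considered (stronger) features, then keep the features
    with the largest distance-based strengths (Sec. II-C)."""
    order = np.argsort(strengths)[::-1]          # descending detector strength
    pts = np.asarray(points, dtype=np.float64)[order]
    new_strength = np.empty(len(pts))
    new_strength[0] = np.inf                     # the strongest feature is always kept
    for i in range(1, len(pts)):
        d = np.linalg.norm(pts[:i] - pts[i], axis=1)
        new_strength[i] = d.min()                # minimum distance is the new strength
    keep = np.argsort(new_strength)[::-1][:num_select]
    return order[keep]                           # indices into the original arrays
```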


Fig. 3. Distributions of selected features. The yellow points are the detected initial features, and the red points are those selected based on the strengths. (a) Method in [23]. (b) Proposed method.

Fig. 2. Feature selection. The new strength is the minimum spatial distance from previously selected features.

Fig. 3 compares the feature distributions of the method in [23] and the proposed method. As we can see in Fig. 3(a), the strength-based selection concentrates the features on locally complex regions of the image. Such localized matching increases the geometric error due to locally non-planar surfaces. The proposed method selects the features uniformly over the whole image, which improves the feature tracking performance.

D. Estimation of Rotation Matrix

Using the tracking results of the features, we estimate the camera motion of the input frame. We model the camera motion as a 3-D rotation between images. Usually, the users move little while capturing pictures for panoramic images, so the camera motion is modeled by a 3-D rotation without loss of accuracy. The 3-D rotation matrix has three angle parameters $\theta = (\theta_x, \theta_y, \theta_z)^T$, one per coordinate axis, and the 3-D camera rotation matrix is derived as below:

$$
O(\theta_x, \theta_y, \theta_z) = O_z(\theta_z)\, O_x(\theta_x)\, O_y(\theta_y) =
\begin{bmatrix} \cos\theta_z & -\sin\theta_z & 0 \\ \sin\theta_z & \cos\theta_z & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta_x & -\sin\theta_x \\ 0 & \sin\theta_x & \cos\theta_x \end{bmatrix}
\begin{bmatrix} \cos\theta_y & 0 & \sin\theta_y \\ 0 & 1 & 0 \\ -\sin\theta_y & 0 & \cos\theta_y \end{bmatrix}.
$$
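For reference, this composition translates directly to code. A short sketch, assuming angles in radians and numpy; the function name is illustrative:

```python
import numpy as np

def rotation_matrix(theta_x, theta_y, theta_z):
    """Compose the 3-DOF camera rotation O = Oz(tz) Ox(tx) Oy(ty) from
    Sec. II-D, in the multiplication order of the equation above."""
    cx, sx = np.cos(theta_x), np.sin(theta_x)
    cy, sy = np.cos(theta_y), np.sin(theta_y)
    cz, sz = np.cos(theta_z), np.sin(theta_z)
    Oz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ox = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Oy = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    return Oz @ Ox @ Oy
```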

Other algorithms updated the rotation parameters of the current input frame from those of the previous frame,

$$
\theta_t = \theta_{t-1} + \Delta\theta, \tag{2}
$$

where the incremental parameter vector $\Delta\theta$ is estimated to update the rotation matrix.



Finding these parameters is a non-linear problem, so an M-estimator with the Gauss-Newton method was used in an iterative process [18]. About 20 iterations are usually required for convergence of the M-estimator. In addition, the increment of the parameters between input frames is too small to be represented in fixed-point coding. This causes a precision problem on mobile devices such that the panoramic image synthesis does not work.

We propose a method to transform the non-linear estimation problem into a linear, non-iterative process. The non-linear warping onto the cylindrical surface is the main reason the parameter estimation is non-linear. Thus, we first project the feature points on the panorama canvas and in the input frames onto a unit sphere in the 3-D world coordinates, and we find the rotation parameters on the unit sphere. This projection onto the unit sphere changes the non-linear warping process into a linear transform.

Let us define some notation as follows. $W(P \mid O) = M$ denotes warping of feature coordinates P in the input frame onto the panorama canvas M by rotation matrix O. The world coordinates of a feature point are described as

$$
P_w = (X, Y, Z)^T = O^{-1} K^{-1} \pi(P), \tag{3}
$$

where $\pi(\cdot)$ is the function that maps 2-D coordinates into homogeneous coordinates by adding a z-coordinate of 1, and K is the camera calibration matrix. Since the panorama canvas is a cylindrical surface, the 3-D coordinates (X, Y, Z) are mapped onto cylindrical coordinates (u, v) as below,

$$
M = (u, v) = \left( R \tan^{-1}\!\frac{X}{Z},\; \frac{R\, Y}{\sqrt{X^2 + Z^2}} \right), \tag{4}
$$

where R is the radius of the cylinder, related to the focal length. Fig. 4 shows the warping onto the cylindrical panorama canvas.

Fig. 4. Projection of 3-D world coordinates onto the cylindrical panorama canvas [7].

Now, we obtain the coordinates of the features projected onto the unit sphere,

$$
M_S = \frac{1}{\sqrt{R^2 + v^2}}
\begin{bmatrix} R \sin(u / R) \\ v \\ R \cos(u / R) \end{bmatrix}, \tag{5}
$$

and

$$
P_S = \frac{1}{\left\lVert K^{-1} \pi(P) \right\rVert}\, K^{-1} \pi(P), \tag{6}
$$

where $M_S$ is the unit-sphere projection of a feature on the panorama canvas, and $P_S$ is the unit-sphere projection of the corresponding feature in the input frame. After projecting the feature points on the panorama canvas and those in the input frame onto the unit sphere, two corresponding points on the unit sphere are linearly related by the rotation matrix O of the current camera pose,

$$
P_S = O M_S. \tag{7}
$$

Using the relation between (5) and (7), we estimate the rotation parameters by a linear process. Consequently, we derive the formulation using the singular value decomposition (SVD) to obtain the parameters from the correlation matrix T of the corresponding feature points on the unit sphere,

$$
T = \sum_{i=1}^{N} P_{S,i}\, M_{S,i}^{T} = U \Sigma V^{T}, \tag{8}
$$

and the rotation matrix is derived as

$$
O = U \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & s \end{bmatrix} V^{T}, \tag{9}
$$

where s is a sign value [13], [15],

$$
s = \operatorname{sign}\left( \det\left( U V^{T} \right) \right). \tag{10}
$$

In (9), the value s is -1 only when a reflection occurs. But there is no possibility that s is -1 here, since the correspondences are matched within a small search range. Thus, s is always 1, and the final rotation matrix is obtained as

$$
O = U V^{T}. \tag{11}
$$

From the correlation matrix of feature correspondences on the unit sphere, we find the rotation matrix by linear algebra.
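The whole closed-form pipeline, Eqs. (5)-(11), fits in a few lines of linear algebra. The sketch below assumes numpy, row-stacked correspondences, and illustrative function names; it applies Eq. (9) with the sign guard of Eq. (10) rather than hard-coding s = 1.

```python
import numpy as np

def cylinder_to_sphere(u, v, R):
    """Eq. (5): lift a cylindrical canvas point (u, v) onto the unit sphere."""
    p = np.array([R * np.sin(u / R), v, R * np.cos(u / R)])
    return p / np.sqrt(R**2 + v**2)

def pixel_to_sphere(x, y, K_inv):
    """Eq. (6): normalize a pixel through K^{-1} onto the unit sphere."""
    p = K_inv @ np.array([x, y, 1.0])
    return p / np.linalg.norm(p)

def estimate_rotation(P_S, M_S):
    """Eqs. (8)-(11): closed-form rotation from unit-sphere correspondences.
    P_S and M_S are (N, 3) arrays of corresponding points with P_S ~ O M_S."""
    T = P_S.T @ M_S                        # correlation matrix, Eq. (8)
    U, _, Vt = np.linalg.svd(T)
    s = np.sign(np.linalg.det(U @ Vt))     # Eq. (10); +1 unless a reflection occurs
    D = np.diag([1.0, 1.0, s])
    return U @ D @ Vt                      # Eq. (9); reduces to U Vt when s = +1
```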



The process from (7) to (9) is well-known linear algebra for a 3 × 3 matrix, so it adds little computational load even on mobile processors. Furthermore, the precision problem of fixed-point coding does not occur. The proposed method estimates the rotation parameters in real-time on mobile devices.

E. Removal of Outliers

When we match the corresponding feature points in tracking, some erroneous matches, called outliers, occur. The outliers are the feature points that are not spatially matched by the estimated rotation matrix, and they cause the rotation matrix of the input frame to be estimated incorrectly. We remove the outliers from the tracked features and correct the rotation matrix more exactly. We use the iterative RANSAC (RANdom SAmple Consensus) method to remove the outliers [12]. In the case of the M-estimator, there are many iterative operations due to the Gauss-Newton method, so it is difficult to use iterative RANSAC. On the contrary, the proposed method performs a non-iterative linear operation in estimating the rotation matrix. Thus, the iterative RANSAC does not delay the panorama process in the proposed algorithm. The number of RANSAC iterations for estimating the rotation matrix in (9) is determined as in [4],

$$
N = \frac{\log(1 - p)}{\log\left(1 - (1 - \epsilon)^{S}\right)}, \tag{12}
$$

where p is the probability that ensures at least one of the random samples is free from outliers within N iterations; it is usually set to 0.99. S is the minimal number of samples, and $\epsilon$ is the outlier ratio. We need at least 2 samples to generate a hypothesis for the rotational model, and about 10 iterations remove the outliers sufficiently. Note that the iterative RANSAC is performed only in the lowest resolution, since this is enough for updating the parameters in the hierarchical scheme when the parameters are well estimated in the lowest resolution. To speed up the evaluation of each hypothesis, we adopt an early rejection scheme that discards, at an early stage, hypotheses differing greatly from the previous camera pose.

In removing the outliers with iterative RANSAC, we evaluate the reliability of the refined rotation matrix by checking the ratio of inliers. The number of inliers varies widely according to the image complexity, so it is not suitable for testing the correctness of the rotation matrix. Instead, we exploit the ratio of inliers over all extracted features. When the ratio of inliers is high, the estimated rotation matrix is reliable. However, when the ratio of outliers is high, we discard the rotation parameters and track the feature points again until a reliable rotation matrix is obtained. The removal of outliers and the evaluation of the rotation matrix are all performed in real-time.
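A sketch of the iteration count of Eq. (12) and the surrounding RANSAC loop is given below, reusing estimate_rotation from the previous sketch. The inlier threshold, the residual definition, and the default outlier ratio are assumptions, not the paper's values; with p = 0.99, S = 2, and an outlier ratio near 0.4, the formula gives roughly the 10 iterations quoted above.

```python
import numpy as np

def ransac_iterations(p=0.99, eps=0.4, S=2):
    """Eq. (12): iterations N guaranteeing, with probability p, at least one
    outlier-free sample of size S at outlier ratio eps."""
    return int(np.ceil(np.log(1 - p) / np.log(1 - (1 - eps) ** S)))

def ransac_rotation(P_S, M_S, thresh=0.01, p=0.99, eps=0.4):
    """RANSAC loop around the closed-form estimator of Sec. II-D. Returns the
    rotation refit on all inliers and the inlier ratio used as the
    reliability measure of Sec. II-E. A sketch under assumed parameters."""
    N = len(P_S)
    best_inliers = np.zeros(N, dtype=bool)
    for _ in range(ransac_iterations(p, eps, S=2)):
        idx = np.random.choice(N, 2, replace=False)    # minimal sample: 2 points
        O = estimate_rotation(P_S[idx], M_S[idx])
        err = np.linalg.norm(P_S - M_S @ O.T, axis=1)  # residual on the unit sphere
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    O = estimate_rotation(P_S[best_inliers], M_S[best_inliers])
    return O, best_inliers.mean()                      # rotation and inlier ratio
```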

F. Re-tracking Mode

When feature tracking fails, the current input frame cannot be warped onto the panorama canvas since there is no information about its position and rotational pose. Tracking failure occurs when the camera moves too fast, when there are few features to be tracked, or when the camera moves out of the panorama canvas. We implement a re-tracking mode for tracking failure. If the tracking result is not reliable, we try to track all the features on the panorama canvas until reliable parameters are estimated. Thus, the users can finish the panorama operation as they want, without stopping. The panorama operation finishes when every cell in the panorama canvas is completely filled with input frames, or when the users interrupt the panorama operation.

TABLE I
A COMPARISON OF TIME CONSUMPTION FOR PARAMETER ESTIMATION

Resolution          | Iterative method (M-estimator) | Proposed method
Small (512 × 128)   | 0.4 ms ~ 0.7 ms                | 0.007 ms ~ 0.009 ms
Middle (1024 × 256) | 0.8 ms ~ 1.0 ms                | 0.01 ms ~ 0.02 ms
Large (2048 × 512)  | 1.0 ms ~ 1.5 ms                | 0.03 ms ~ 0.05 ms

TABLE II
A COMPARISON OF TIME CONSUMPTION FOR USING RANSAC

Resolution          | Proposed method w/o RANSAC | Proposed method with RANSAC
Small (512 × 128)   | 0.007 ms ~ 0.009 ms        | 0.007 ms ~ 0.014 ms

III. EXPERIMENTAL RESULTS

We implemented the proposed panorama algorithms on mobile devices with a 1 GHz CPU. The input video frames are 320 × 240, and the panorama canvas is 2048 × 512. We also designed an AR (augmented reality) display to show the intermediate panorama results and the current input frame while generating the panoramic image. To compare the performance of the proposed system, we implemented the method in [18] using the M-estimator. However, the M-estimator does not work well in the case of small camera motions, due to the precision problem of fixed-point coding on mobile devices.

Tables I and II show the comparison of time consumption in estimating the rotation parameters. The time consumption depends on the number of tracked features, so for a fair comparison we track the same number of features at each resolution. In Table I, the proposed linear, non-iterative method is much faster than the iterative M-estimator. Compared with the case of no RANSAC, the proposed method with RANSAC is still fast enough for real-time operation.



Figs. 5 and 6 show the panorama results comparing the iterative M-estimator and the proposed linear method. Since there are some outliers in the initially tracked features, the estimated rotation matrices are unreliable in many frames. As we can see in the results of the proposed method without RANSAC, there are many warping errors, and the input frames far from the center of the panorama canvas are missed due to failure of feature tracking. With the iterative RANSAC, the proposed method shows almost the same results as the M-estimator. This proves that the proposed method is correctly derived and more efficient on mobile devices.

Fig. 5. Comparison of panorama results. (a) M-estimator. (b) Proposed linear method without RANSAC. (c) Proposed linear method with RANSAC.

Fig. 6. Comparison of panorama results. (a) M-estimator. (b) Proposed linear method without RANSAC. (c) Proposed linear method with RANSAC.

Fig. 7. Demonstration of successive operation to generate panoramic images. The current input frame and the intermediate result are shown at the same time. Users can fill the panorama canvas with natural images by moving the camera as they want.

Fig. 7 shows the continuous operation of panorama synthesis in real-time. The intermediate panorama results are shown simultaneously with the current input frame, so the users can take a panoramic image while immediately watching the intermediate results. For each input frame, the features on the panorama canvas are tracked to warp the input frame at the correct position and pose. When the camera moves slowly, the tracking is faster and more stable, since we use the previous position and parameters as the initial status for tracking. However, when the camera moves far from the previous position, the initial parameters are not helpful, and the features on the whole canvas are considered to find the matched position and pose. Thus, there is no problem when the camera moves over already-synthesized panorama regions. As we can see in Fig. 5, the users can fill the empty panorama surface with natural images by moving the camera as they want. The panorama surface acts as a painting canvas for natural images. Finally, Fig. 8 shows some panorama results that complete the canvas.

IV. CONCLUSION

This paper has proposed a real-time panorama algorithm for mobile camera systems. The proposed panorama system consists of feature point extraction, feature tracking, rotation matrix estimation, and image warping onto a cylindrical surface. Feature points are extracted by a fast Hessian detector in the low resolution and by a corner detector in the higher resolutions. Then, the feature points are tracked between the input video frame and the synthesized panorama image. The camera motion is modeled as a rotation matrix, which is estimated using the tracked feature points. For real-time operation of panoramic image synthesis, we have proposed a method to estimate the rotation matrix using a non-iterative least square method. We project the feature points of the image onto the unit sphere in the world coordinates, which enables us to estimate the parameters of the rotation matrix as a linear, non-iterative problem. Finally, we project the input frames onto the panorama surface using the rotation matrix. We also implement a real-time display interface to show the intermediate panorama results while the panorama image is being generated. Thus, the proposed panorama system paints the panorama canvas with real images. According to the experiments on mobile systems such as a mobile phone and a tablet PC, the proposed system works well to generate panoramic images in real-time. Since the users are able to watch their own panorama images while they are being made, the proposed system is expected to be attractive for mobile systems.

Fig. 8. Some panorama results that complete the canvas.



REFERENCES

[1] R. Szeliski, "Image alignment and stitching: A tutorial," Technical Report MSR-TR-2004-92, Microsoft Research.
[2] R. Szeliski, "Video mosaics for virtual environments," IEEE Computer Graphics and Applications, vol. 16, no. 2, pp. 22-30, 1996.
[3] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, pp. 91-110, 2004.
[4] R. I. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press, 2000.
[5] M. Brown and D. G. Lowe, "Automatic panoramic image stitching using invariant features," International Journal of Computer Vision, vol. 74, no. 1, 2007.
[6] S. J. Ha, H. I. Koo, S. H. Lee, N. I. Cho, and S. K. Kim, "Panorama mosaic optimization for mobile camera system," IEEE Transactions on Consumer Electronics, vol. 53, no. 4, pp. 1217-1225, 2007.
[7] S. J. Ha, S. H. Lee, N. I. Cho, S. K. Kim, and B. J. Son, "Embedded panoramic mosaic system using auto-shot interface," IEEE Transactions on Consumer Electronics, vol. 54, no. 1, pp. 16-24, 2008.
[8] H. Bay, A. Ess, T. Tuytelaars, and L. V. Gool, "SURF: Speeded-up robust features," Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346-359, 2008.
[9] K. Park and S. Bang, "Motion estimation during photographing towards minimal solution for mobile panoramic images," IEEE Transactions on Consumer Electronics, vol. 54, no. 3, pp. 992-998, 2008.
[10] E. Rosten and T. Drummond, "Machine learning for high-speed corner detection," European Conference on Computer Vision, 2006.
[11] S. Gauglitz, T. Höllerer, and M. Turk, "Evaluation of interest point detectors and feature descriptors for visual tracking," International Journal of Computer Vision, vol. 94, no. 3, pp. 335-360, 2011.
[12] M. Fischler and R. Bolles, "Random sample consensus: A paradigm for model fitting with application to image analysis and automated cartography," Communications of the ACM, 1981.
[13] M. Brown, R. I. Hartley, and D. Nister, "Minimal solutions for panoramic stitching," IEEE Conference on Computer Vision and Pattern Recognition, 2007.
[14] A. Adams, N. Gelfand, and K. Pulli, "Viewfinder alignment," Computer Graphics Forum (Proceedings of Eurographics), pp. 597-606, 2008.
[15] K. S. Arun, T. S. Huang, and S. D. Blostein, "Least-squares fitting of two 3-D point sets," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 9, no. 5, 1987.
[16] P. Baudisch, D. Tan, D. Steedly, E. Rudolph, M. Uyttendaele, C. Pal, and R. Szeliski, "Panoramic viewfinder: Providing a real-time preview to help users avoid flaws in panoramic pictures," Proceedings of the 17th Australia Conference on Computer-Human Interaction, 2005.
[17] D. Steedly, C. Pal, and R. Szeliski, "Efficiently registering video into panoramic mosaics," Proceedings of the 10th IEEE International Conference on Computer Vision, 2005.
[18] D. Wagner, A. Mulloni, T. Langlotz, and D. Schmalstieg, "Real-time panoramic mapping and tracking on mobile phones," IEEE Virtual Reality Conference, 2010.
[19] A. A. Efros and W. T. Freeman, "Image quilting for texture synthesis and transfer," ACM SIGGRAPH, pp. 341-346, Aug. 2001.
[20] A. Zomet, A. Levin, S. Peleg, and Y. Weiss, "Seamless image stitching by minimizing false edges," IEEE Transactions on Image Processing, vol. 15, no. 4, Apr. 2006.
[21] M. Uyttendaele, A. Eden, and R. Szeliski, "Eliminating ghosting and exposure artifacts in image mosaics," Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 509-516, Dec. 2001.
[22] D. B. Goldman and J. Chen, "Vignette and exposure calibration and compensation," IEEE International Conference on Computer Vision, vol. 1, pp. 899-906, Oct. 2005.
[23] M. Brown, R. Szeliski, and S. Winder, "Multi-image matching using multi-scale oriented patches," IEEE Conference on Computer Vision and Pattern Recognition, 2005.

BIOGRAPHIES

Beom Su Kim received the B.S. and M.S. degrees in electrical engineering from Seoul National University, Seoul, Korea, in 2007 and 2010, respectively. He is currently working toward the Ph.D. degree in electrical engineering at Seoul National University. His research interests are image stitching, augmented reality, embedded panorama systems, and digital image processing including document images.

Sang Hwa Lee received the B.S., M.S., and Ph.D. degrees in electrical engineering from Seoul National University, Seoul, Korea, in 1994, 1996, and 2000, respectively. He was a visiting researcher at NHK STRL, Tokyo, Japan, from 2000 to 2002. He has been with BK21 Information Technology, Department of Electrical Engineering, Seoul National University, since 2003, where he is currently a BK research professor. His research interests include image processing, stereoscopic systems, HCI, pattern recognition, and computer vision.

Nam Ik Cho received the B.S., M.S., and Ph.D. degrees in control and instrumentation engineering from Seoul National University, Seoul, Korea, in 1986, 1988, and 1992, respectively. From 1994 to 1998, he was with the University of Seoul, Seoul, Korea, as an Assistant Professor of Electrical Engineering. He joined the School of Electrical Engineering, Seoul National University, in 1999, where he is currently a Professor. His research interests include speech, image, and video signal processing, and adaptive filtering.
