Desktop virtual studio system


IEEE TRANSACTIONS ON BROADCASTING, VOL. 42, NO. 3, SEPTEMBER 1996


Desktop Virtual Studio System

M. Hayashi, K. Enami, H. Noguchi, K. Fukui, N. Yagi, S. Inoue, M. Shibata, Y. Yamanouchi, Y. Itoh
NHK Science and Technical Research Laboratories
1-10-11 Kinuta, Setagaya-ku, Tokyo, 157 Japan

Abstract - We have been developing a Desktop Virtual Studio (DVS) that allows video producers to create various studio shots on a desktop work space as if they were shot in a real studio. Electronically generated or stored sets, actors, and a camera are integrated in a virtual work space, and producers can manipulate them through specially designed man-machine interfaces. Our prototype DVS system realizes virtual shooting, in which two prerecorded actors perform in virtual sets generated by real-time computer graphics. In this paper we discuss the prototype DVS and its application to Desktop Program Production (DTPP).

I. INTRODUCTION

The filming process for program production in a TV studio has not changed much for years, and studio sets, props, actors, lighting equipment, and cameras are still the fundamental elements in the studio. The remarkable progress of electronic technology has made TV cameras compact and easy to use and has enabled lighting equipment to be remotely controlled. The process of program production, however, is still carried out in much the same way as before: studio sets are constructed on a studio floor, actors perform under appropriate lighting conditions, and programs are filmed with camera movement provided by a cameraperson. Since all the elements must be in the same place at the same time, schedules still need to be adjusted, time is lost when mistakes make refilming necessary, and otherwise efficient studio work becomes chaotic. A new way of working in the studio could be opened up if this need for simultaneity were eliminated and creators were freed from its waste of time and complexity. We have therefore been developing a system called the "Desktop Virtual Studio" (DVS), which is based on the following process.
1) Generate or convert studio sets, actors, and cameras into reproducible forms.
2) Reconstitute these in a virtual studio space implemented on

a workstation.
3) Film scenes with a virtual camera and modify them through special man-machine interfaces provided on a desktop work space.
Since elements of studio work such as set design, the actors' performances, and camera work are separated both spatially and temporally in the DVS, each can be performed individually or corrected later. This is expected to make the video production environment more efficient and flexible. We have developed an experimental DVS system in which sets are generated by real-time computer graphics (CG) and actors are recorded in advance on a video disc together with camera work data. After these sets and actors are integrated to reconstruct a virtual 3-D space, creators set up a virtual camera at a desired position in the virtual space and film the world in which the prerecorded actors appear to perform in the CG set with new camera work. The creators use a specially designed controller to pan, tilt, and zoom the virtual camera, and can change the positions of the actors and the virtual camera and modify the CG set freely using a man-machine interface on a desktop work space. In this paper, we first discuss the conventional virtual studio in order to identify the requirements for the DVS and then describe the experimental system designed to meet those requirements. Next we explain a technique necessary for the DVS, called "virtual shooting." Finally, we briefly describe the results of an experiment and conclude with a summary and some future challenges.

II. THE CONVENTIONAL VIRTUAL STUDIO

Virtual studios have recently become very popular, and quite a few systems are available commercially [1][2]. We have already developed several types of virtual studio systems [3][4] for various TV program productions. In a virtual studio, a studio set image is generated by using CG or is reproduced from an image memory that stores set images filmed beforehand. The general diagram of a virtual studio is shown in Fig. 1. A CG studio set image and an actor image filmed by a real camera are composed to produce an output image. The CG image is geared to the foreground camera movements in real time by obtaining camera data (pan, tilt, zoom, etc.) from sensors attached to the camera and sending the data to a CG machine. Thus, in the composite image, the foreground and the background are synchronized with the camera movements made by a cameraperson.

Fig. 1. General diagram of a virtual studio (electronic set generator, TV camera, actor, and output image).

The latest version of the virtual studio is the crane camera image and 3-dimensional CG compositing system used for a special NHK program entitled "Brain and Mind" [5]. The system composes an image filmed by a crane camera, which can move 3-dimensionally in a studio space, with a real-time CG image geared to the crane camera movement. Since a CG set can be corrected while studio work is going on, this version makes it easy to work with the trial and error of filming and set correction and, consequently, to get effective shots efficiently.

On the other hand, there are some problems with the conventional virtual studio. Although it separates the studio set from the other elements to some degree and frees creators from simultaneity by providing the set images electronically, the simultaneity of the actor and the camera remains as before. The process of setting a camera in front of an actor in a real studio space and filming the actor's performance simultaneously has not changed. To solve this problem, the actor and the camera also need to be provided electronically. The DVS proposed in this paper provides a solution based on the knowledge gained in using the conventional virtual studios. The basic requirements are the following:
(1) Treat a set, an actor, and a camera as suitable forms of data, such as modeling data for the set shape, image data for the actor, and camera data for the camera movement.
(2) Realize a CG studio space in three dimensions and in real time.
(3) Prepare special man-machine interfaces for the creators who operate the DVS.
An experimental system designed to meet these requirements is explained in detail in the next section.

III. DESKTOP VIRTUAL STUDIO EXPERIMENTAL SYSTEM

The configuration of the experimental system is shown in Fig. 2. A set image is generated by 3-dimensional real-time CG. Actor images filmed beforehand in a real studio space are stored on laser disks along with camera data obtained during the real filming process. The final image is obtained by carrying out virtual filming of a virtual 3-D space in which the actor images and a virtual camera are set up in the CG set. Camera work is performed using a virtual camera controller. The system consists of four blocks, (1) virtual set, (2) virtual actor, (3) virtual camera, and (4) man-machine interface, which are explained as follows.

(1) Virtual set

The 3-dimensional real-time CG set image is generated by an SGI "ONYX" graphics workstation. The generation of the CG image requires the 3-dimensional modeling data of the set, the positions of the light sources, and the position and direction of the viewpoint. Although the modeling data of the set and the positions of the light sources are prepared in advance, a user can change some of the set modeling data (for example, the height of a ceiling) through a man-machine interface on the workstation (see later explanation). The position and direction of the viewpoint are determined by a virtual camera whose position is set through the man-machine interface and whose direction is given by a virtual camera controller.

(2) Virtual actor

An actor performing in front of a chroma-key blue screen is filmed beforehand and recorded on a laser disk. A cameraperson pans a real camera to follow the actor moving around on the floor, keeping the actor at full size on the image screen so that the actor does not go out of the frame. During filming, panning data are obtained via a sensor attached to the camera head and are stored on a computer disk along with the time code of the laser disk. In the reproduction process, the actor image from the laser disk is put through a real-time image processor so that the camera work made at the time of the real filming is canceled and, at the same time, new camera work is given by image-processing the actor image. The image-processing parameters are calculated from the actor position set up by the man-machine interface, the virtual camera data set up by a virtual camera controller (see later explanation), and the real camera data stored in the computer. This refilming method is called "virtual shooting" and is explained in detail in Section IV.

Fig. 2. System configuration of the experimental Desktop Virtual Studio. Each laser disk includes six acts shot in advance and is accompanied by its camera data; actor positions and CG sets can be changed with a mouse, and the virtual camera is operated (panning, tilting, zooming) through the man-machine interface.

(3) Virtual camera

A user can pan, tilt, and zoom the virtual camera using a virtual camera controller and can change the position of the virtual camera through the man-machine interface. An ordinary camera head mounted on a desktop work space is used as the virtual camera controller (see Fig. 3), so that a user can perform virtual filming as if he or she were handling a real camera. The camera head is equipped with sensors to obtain the pan, tilt, and zoom data used in the virtual shooting process.

Fig. 3. Operation environment of the Desktop Virtual Studio.

The two image-processed actor images and the CG set image are composed by multilayer composite hardware to produce an output image. The composite order of the layers is changed by the system in accordance with the positions of the actors on the CG floor, in order to obtain the effect that, for instance, one actor hides the other.

Fig. 4. Man-machine interface screen of the Desktop Virtual Studio.
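The depth-dependent layer ordering described above can be sketched as follows. This is an illustrative model only: the layer names, the 2-D floor coordinates, and the alpha-blend rule are invented for the sketch, not taken from the paper's composite hardware.

```python
# Stack layers back-to-front by each source's distance from the virtual
# camera on the CG floor, so that a nearer actor hides a farther one.

def composite(layers, camera_pos):
    """layers: list of (name, floor_position, image) where image is a list
    of (rgb, alpha) pixels; all images are assumed to be the same size."""
    def depth(layer):
        _, pos, _ = layer
        dx, dz = pos[0] - camera_pos[0], pos[1] - camera_pos[1]
        return (dx * dx + dz * dz) ** 0.5

    out = None
    order = []
    # Farthest layer first; each nearer layer is alpha-blended over it.
    for name, _, image in sorted(layers, key=depth, reverse=True):
        order.append(name)
        if out is None:
            out = [rgb for rgb, _ in image]          # background layer
        else:
            out = [tuple(a * f + (1 - a) * b for f, b in zip(fg, bg))
                   for (fg, a), bg in zip(image, out)]
    return order, out

# CG set behind two actors; actor_b stands nearer the camera than actor_a.
layers = [
    ("actor_a", (0.0, 5.0), [((1.0, 0.0, 0.0), 1.0)]),
    ("actor_b", (0.0, 2.0), [((0.0, 1.0, 0.0), 1.0)]),
    ("cg_set",  (0.0, 9.0), [((0.0, 0.0, 1.0), 1.0)]),
]
order, image = composite(layers, camera_pos=(0.0, 0.0))
print(order)   # ['cg_set', 'actor_a', 'actor_b']
print(image)   # [(0.0, 1.0, 0.0)] -- the nearer actor hides the farther one
```

Because the ordering is recomputed from the actor positions, dragging an actor across the CG floor in the man-machine interface automatically changes who occludes whom.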

(4) Man-machine interface

The multiwindow system of a workstation is used for the man-machine interface. The screen is shown in Fig. 4. A user can select the prerecorded actors and sets, change the positions of the actors and the camera on the set floor, modify the set shape, etc., using multiple windows and a mouse. The screen is designed to enable a user to carry out direct operations on images by using image icons. A user can change the positions of the actors and the camera and can modify the set at any time while checking the results in the final composite image. Furthermore, since the camera work from the virtual camera controller is obtained every video frame (every 1/30 second), a user can operate the virtual camera with the feeling of panning,


tilting, and zooming a real camera.

IV. PRINCIPLE OF VIRTUAL SHOOTING

As explained in the previous section, the refilming method of canceling the original real camera work and applying new virtual camera work is called "virtual shooting." The real-time image processor performs this process as described in the following. In virtual shooting, an image filmed with a certain piece of camera work is regarded as a single plate in a three-dimensional space. This plate is then filmed again using different camera work. In this paper we refer to the camera shooting in the real space as the "real camera," the object filmed as the "real object," the plate on which the 2-D image is mapped as the "virtual object," and the camera set in the virtual space to shoot the 2-D plate as the "virtual camera." Virtual shooting is thus a geometric transform from a 2-D image shot by the real camera to a 2-D image filmed by the virtual camera. Let us consider a real 3-D object and a real camera in the world coordinate system (see Fig. 5-a). The projection plane of the real camera is a 2-D flat surface crossing the viewing line of the real camera at a right angle and passing through a point that represents the position of the real object. This projection plane in fact represents the virtual image of the object mapped onto a plate. If O_i are the vertex coordinates of the 3-D real object in the world coordinate system, the vertex coordinates R_i of the virtual object mapped on the plate can be expressed by the following (see Fig. 5-a,b):

    R_i = (d / z_i) R_r T_r O_i,   z_i = [R_r T_r O_i]_z        (1)

where d is the distance between the real camera and the real object, R_r is the rotation matrix of the real camera, and T_r is the translation matrix of the real camera.

Fig. 5. Process of virtual shooting: (a) the real object and the real camera in the world coordinate system, (b) the virtual object, a 2-D plate on which the real object image is mapped, (c) relocation of the virtual object, (d) shooting of the plate by the virtual camera, (e) the obtained image.


With P_i being the coordinates of R_i in the world coordinate system,

    P_i = T_r^{-1} R_r^{-1} R_i        (2)

By moving the virtual object to the origin of the world coordinate system, changing its scale and orientation, and then moving it to the new position of the virtual object, we obtain U_i (see Fig. 5-c):

    U_i = R_a S (P_i - A) + M        (3)

where R_a is the rotation matrix of the virtual object, S is the scale matrix of the virtual object, A is the position vector of the real object, and M is the position vector of the virtual object.

U_i represents the vertex coordinates of the virtual object. The final 2-D image V_i obtained by shooting U_i with the virtual camera (see Fig. 5-d) is expressed by the following (see Fig. 5-e):

    (x'_i, y'_i, z'_i)^T = R_v T_v U_i,   V_i = (f / z'_i) (x'_i, y'_i)^T        (4)

where R_v is the rotation matrix of the virtual camera, T_v is the translation matrix of the virtual camera, and f is the focal length of the virtual camera in pixels.

This transform is a perspective projection. The real-time image processor is designed to perform the perspective projection with the given parameters.
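The transform chain of Eqs. (1)-(4) can be sketched in pure Python. Everything below is an illustrative assumption rather than code from the real-time image processor: the helper names are invented, the translation matrices T_r and T_v are modeled simply as camera positions subtracted before rotating, and eye-space depth is taken as the z component.

```python
import math

def mat_vec(m, v):
    # 3x3 matrix times 3-vector
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def transpose(m):
    return [[m[c][r] for c in range(3)] for r in range(3)]

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def add(a, b):
    return [x + y for x, y in zip(a, b)]

def rot_y(theta):
    # Rotation about the vertical axis, i.e. a camera or object "pan"
    c, s = math.cos(theta), math.sin(theta)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

IDENT = rot_y(0.0)

def virtual_shoot(o, r_r, t_r, d, a, r_a, s, m, r_v, t_v, f):
    """Map one vertex o of the real 3-D object to a 2-D pixel of the
    virtual camera, following Eqs. (1)-(4)."""
    # Eq. (1): express o in real-camera coordinates and project it onto
    # the plate at distance d (the virtual object).
    c = mat_vec(r_r, sub(o, t_r))
    r = [d * x / c[2] for x in c]
    # Eq. (2): bring the plate vertex back into world coordinates
    # (for a rotation matrix, the inverse is the transpose).
    p = add(mat_vec(transpose(r_r), r), t_r)
    # Eq. (3): move the plate from the real-object position a to the new
    # virtual-object position m, with rotation r_a and uniform scale s.
    u = add([s * x for x in mat_vec(r_a, sub(p, a))], m)
    # Eq. (4): perspective projection through the virtual camera.
    c_v = mat_vec(r_v, sub(u, t_v))
    return [f * c_v[0] / c_v[2], f * c_v[1] / c_v[2]]

# A vertex 4 units in front of the real camera, refilmed by a virtual
# camera at the same pose: it should land where the real camera saw it.
px = virtual_shoot([1.0, 0.0, 4.0],
                   IDENT, [0.0, 0.0, 0.0], 4.0,    # real camera, plate at d=4
                   [0.0, 0.0, 4.0], IDENT, 1.0,    # plate stays put (A = M)
                   [0.0, 0.0, 4.0],
                   IDENT, [0.0, 0.0, 0.0], 100.0)  # virtual camera, f=100 px
print(px)  # [25.0, 0.0]
```

Canceling the original camera work is the inverse step of Eq. (2); applying the new camera work is Eq. (4). Since the virtual object is a flat plate, the whole chain collapses to a single 2-D perspective warp of the recorded actor image, which is what the real-time image processor evaluates per frame.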

Fig. 6. Output images of the Desktop Virtual Studio.

V. EXPERIMENT RESULT

Output images of the system are shown in Fig. 6. Because of the temporal separation between the actor's performance and the virtual filming, a user can perform camera work repeatedly, in a trial-and-error process. Moreover, because of the spatial separation of the real space where the actor acts from the virtual space where the virtual filming is performed, a user can film the virtual actors performing in a very large studio space even though the actor's performance was actually filmed in a tiny studio space. Since all the elements of studio work have been implemented electronically, the studio work has shifted from a real studio to a desktop work space. Therefore, final video production in the DVS is performed by an individual user working on a desktop work space. Since the environment in which the user operates the virtual camera is a world of just himself and the man-machine interfaces, he can proceed by trial and error and pursue the best shot, free from the extraordinary pressure usually placed on the individual filming an actor in a real studio.

These features of the DVS mean that it is expected to be applicable to DTPP (Desktop Program Production) [6], providing a production environment that enables users to produce various TV programs efficiently, easily, and personally. In fact, the prototype DVS system described in this paper has been made as one of the various elements of DTPP, alongside the image database, video effects, editing, etc. Even at the prototype stage, however, the DVS can be used for producing simple programs, simulating the design of a studio set, training camerapersons, simulating studio work to share production images with all the staff before actual shooting, and so on.

VI. CONCLUSION

The Desktop Virtual Studio (DVS) described here, focusing on an experimental system, is intended to free studio work from both spatial and temporal simultaneity constraints by providing the basic elements of studio work, such as a set, an actor, and a camera, electronically. The system allows video creators to produce various studio shots on a desktop work space in real time as if they were filming in a real studio space. An experimental DVS system is described here, and the virtual shooting method is explained. In the experimental system, a camera head and a multiwindow system on a workstation are used as man-machine interfaces. In the future, more flexible interfaces need to be worked out with reference to the interfaces used in the field of Virtual Reality. In more fundamental research, we will investigate the interaction between actors and will also investigate electronic image components linked to the actors and the sets.

REFERENCES

[1] R. Kunicki, VAP Video Art Production GmbH, "ELSET - Electric set for broadcast studios," Proceedings of the Digimedia Conference, Geneva, Apr. 1995. (http://www.studio.sgi.com/Features/virtualSets/accom.html)
[2] CYBERSET, Orad, http://www.orad.co.il.
[3] S. Shimoda, M. Hayashi and Y. Kanatsugu, "New chroma-key imaging technique with Hi-Vision background," IEEE Trans. on Broadcasting, Vol. 35, No. 4, pp. 357-361, Dec. 1989.
[4] T. Yamashita, M. Hayashi, "From synthevision to an electronic set," Proceedings of the Digimedia Conference, Geneva, Apr. 1995.
[5] K. Haseba, A. Suzuki, K. Fukui, M. Hayashi, "Real-time compositing system of a real camera and a computer graphic image," Proceedings of the International Broadcasting Convention (Amsterdam), Sep. 1994.
[6] K. Enami, K. Fukui, N. Yagi, "Program Production in the Age of Multimedia - DTPP: Desktop Program Production -," IEICE Trans. Inf. & Syst., Vol. E79-D, No. 6, June 1996.

BIOGRAPHY

Masaki Hayashi graduated in 1981 in electronics engineering from Tokyo Institute of Technology, from which he also received his Masters degree in 1983. He joined NHK the same year, and has been with NHK Science and Technical Research Laboratories since 1986. He is engaged in research on image processing, CG, image compositing systems, and virtual studios.

Kazumasa Enami received his B.E. degree and Ph.D. in electronics engineering from Tokyo Institute of Technology in 1971 and 1989, respectively. Since joining NHK, he has accrued considerable experience in the fields of research and development on digital video signal processing and parallel processors. He is now Director of the Program Production Technology Research Division at the Science & Technical Research Laboratories of NHK. Dr. Enami is a member of the Society of Motion Picture and Television Engineers (SMPTE) and of the Institute of Television Engineers of Japan.

Hideo Noguchi received his M.A. degree in electrical engineering from Tokyo University of Agriculture and Technology in 1972. Since 1972 he has been working at NHK (Japan Broadcasting Corporation). He was involved in research on an image database and development of the Still Picture Retrieval System for Broadcasting Stations. He is a senior research engineer at NHK Science and Technical Research Laboratories. He is a member of IEICE (the Institute of Electronics, Information and Communication Engineers), ITEJ (the Institute of Television Engineers of Japan), IPSJ (the Information Processing Society of Japan), and IIEEJ (the Institute of Image Electronics Engineers of Japan).

Kazuo Fukui received his Masters degree in electronics engineering from Tokyo Institute of Technology in 1975. He joined NHK the same year, and has been with NHK Science and Technical Research Laboratories since 1979. He is now a senior research engineer in the Program Production Technology Research Division. He is engaged in research on computer graphics and on image and video signal processing for program production.

Nobuyuki Yagi received the B.E., M.E., and Ph.D. degrees, all in electrical engineering, from Kyoto University, Japan, in 1978, 1980, and 1992, respectively. He joined NHK (Japan Broadcasting Corporation) in 1980. Since 1982, he has worked for NHK Science and Technical Research Laboratories. He is now Deputy Director of the Program Production Research Division. He has been engaged in research on image and video signal processing, motion interpretation, image coding, human interfacing, multiprocessors, and video production. He received the Best Paper Award from the ITE (Institute of Television Engineers) in 1991. He is a secretary of the Technical Committee on Image Engineering of IEICE and an editor of the IEICE Transactions on Information and Systems.

Seiki Inoue graduated in 1978 from the Electrical Engineering Department, Faculty of Engineering, Tokyo University, and obtained a Master's degree in Electronics from there in 1980. He joined NHK the same year. Since 1983, he had been with NHK Science and Technical Research Laboratories, where he was engaged in image processing and image databases. He received his Ph.D. from Tokyo University in 1992. He joined ATR in 1995 and is now engaged in communication using multimedia.

Masahiro Shibata graduated in 1979 from the Electronics Department, Faculty of Engineering, Kyoto University, from which he also received his Masters degree in 1981. He joined NHK (Japan Broadcasting Corporation) the same year, and has been with NHK Science and Technical Research Laboratories since 1984. He is engaged in research on information retrieval technology, image databases, video production systems, and interactive systems.

Yuko Yamanouchi received her Masters degree in electronics engineering from Tokyo Institute of Technology in 1988. She joined NHK the same year. Since 1990, she has been with NHK Science and Technical Research Laboratories. She is engaged in computer graphics for program production.

Yasumasa Itoh received the Bachelor's degree in engineering from the University of Electro-Communications, Chofu, Tokyo, in 1988, and the Master's degree in engineering from Tokyo Institute of Technology, Yokohama, Japan, in 1990. He has been a research scientist with NHK Science and Technical Research Laboratories, Tokyo, Japan, since 1990. His research interests include video signal processing, video editing systems for high-definition television (HDTV), signal standard conversion, and machine vision.
