Towards a Personal Robotic-aid System


Juyi Park*†, Palis Ratanaswasd*, Edward E. Brown Jr.*, Tamara E. Rogers**, Kazuhiko Kawamura* and D. Mitchell Wilkes*

*Center for Intelligent Systems, Vanderbilt University, Nashville, Tennessee 37235-0131 USA
(Tel: 1-615-343-9607; Fax: 1-615-322-7062; E-mail: [email protected], {palis; maverick; wilkes; kawamura}@vuse.vanderbilt.edu)

**Department of Computer Science, Tennessee State University, Nashville, Tennessee 37207 USA
(Tel: 1-615-963-1520; E-mail: [email protected])

† Contacting author

Abstract

A robotic-aid system could be more effective if the system were intelligent enough to understand the user's needs and adapt its behaviors accordingly. This paper presents our efforts to realize such a personal robotic-aid system through a multi-agent robot control architecture. We describe a framework for human-robot interaction, two cognitive agents responsible for human-robot interaction, and a set of memory structures. Several applications illustrate how the system interacts with the user.

Keywords: Humanoid robot, multi-agent system, human-robot interaction

1. Introduction

Figure 1. Early ISAC robotic-aid system

Robotics has evolved from the industrial robots of the 1960s to nontraditional robots for surgery and search and rescue in the 2000s. One class of robot that is gaining popularity is the anthropomorphic or humanoid robot [1,2]. Starting in 1995, the Cognitive Robotics Laboratory (CRL) of Vanderbilt University has been developing a humanoid robot called the Intelligent SoftArm Control (ISAC). Originally designed to assist the physically disabled [3] (Figure 1), ISAC gradually became a general-purpose humanoid robot intended to work with a human as a partner or an assistant [4] (Figure 2).

In controlling such a robotic-aid system, performance issues of rapid motion and high precision are less important than in its industrial counterparts. Instead, the need is to maximize the quality of the human-robot interaction and to provide a human-friendly interface to the system. Our goal is to develop a personalized cognitive robotic system to achieve just such a human-friendly interaction. Personalized means that the robot behaves differently depending on the state of the user, considering information on the user's health, physical state, emotions, and habits. For example, if the user is a small child, the robot may decide to move more slowly than normal in order not to frighten the child. Similarly, just as you expect better service from a waiter who knows you well, the robot can serve the user better if it understands the state of the person it is serving. Hence, we believe that the development of a personalized robotic-aid system is a valuable step toward a human-friendly man-machine interface.

Figure 2. Vanderbilt's humanoid robot ISAC

In order to realize such a personalized robot, we developed a multi-agent-based robot control architecture incorporating several types of memory structures. Earlier in the research we used a blackboard-based architecture for the communication of distributed components such as vision, voice and arm control. Although the blackboard approach worked well for relatively simple systems, a more sophisticated architecture is required for high-level control of a humanoid robot. As a remedy we developed a multi-agent architecture for parallel, distributed robot control [5] based on a unique design philosophy [6].

In this paper we report recent progress in developing two high-level agents, the Human Agent and the Self Agent, plus memory structures enabling ISAC to learn new skills and store personalized information about the user. The Human Agent is the humanoid's internal representation of the human. It includes information concerning the location, activity, and state of the human, as determined through observations and conversations. The Self Agent is the humanoid's internal representation of itself. It provides the system with a sense of self-awareness concerning the performance of the hardware, as well as the progress and effectiveness of tasks and behaviors.

Our approach to robot memory structures is through short- and long-term memories called the Sensory EgoSphere (SES) and the Procedural Memory (PM), respectively. The SES is a data structure that encapsulates short-term memory for the humanoid in a time-varying, spatially indexed database that connects the environment to a geodesic hemisphere [7]. It allows ISAC to maintain a spatially indexed map of sensory data relative to its environment. The PM is a data structure that encapsulates both primitive and meta behaviors and forms a basis for learning new behaviors and tasks [8]. A minimal illustrative sketch of the SES idea is given at the end of this section.

The following section describes our multi-agent architecture for the implementation of personalized human-robot interaction. The details of the architecture as they pertain to the Human Agent and the Self Agent are given in Sections 3 and 4, respectively. In Section 5 we give a short demonstration of the system to clarify its function and utility. Conclusions follow in Section 6.
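To make the SES idea concrete, the following is a minimal sketch of a spatially indexed, time-decaying short-term memory in Python. The class and method names (SensoryEgoSphere, register, query_direction) are our own illustrative assumptions, not the actual IMA implementation.

```python
import math
import time

class SensoryEgoSphere:
    """Illustrative short-term memory indexed by direction (azimuth, elevation).

    Entries carry a timestamp so that stale percepts can be pruned, mimicking
    the time-varying, spatially indexed database described above.
    """

    def __init__(self, decay_seconds=30.0):
        self.decay_seconds = decay_seconds
        self.entries = []  # dicts: {name, azimuth, elevation, data, stamp}

    def register(self, name, azimuth, elevation, data=None):
        """Post a percept (e.g., a detected object or face) at a direction in degrees."""
        self.entries.append({"name": name, "azimuth": azimuth,
                             "elevation": elevation, "data": data,
                             "stamp": time.time()})

    def _angular_distance(self, az1, el1, az2, el2):
        # Great-circle angle between two directions on the unit sphere.
        a1, e1, a2, e2 = map(math.radians, (az1, el1, az2, el2))
        cos_angle = (math.sin(e1) * math.sin(e2) +
                     math.cos(e1) * math.cos(e2) * math.cos(a1 - a2))
        return math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))

    def prune(self):
        """Drop entries older than the decay window (short-term memory)."""
        now = time.time()
        self.entries = [e for e in self.entries
                        if now - e["stamp"] < self.decay_seconds]

    def query_direction(self, azimuth, elevation, max_angle=15.0):
        """Return percepts registered within max_angle degrees of a gaze direction."""
        self.prune()
        return [e for e in self.entries
                if self._angular_distance(azimuth, elevation,
                                          e["azimuth"], e["elevation"]) <= max_angle]

    def find(self, name):
        """Look up a named percept, e.g., to saccade back to a taught object."""
        self.prune()
        return next((e for e in self.entries if e["name"] == name), None)


if __name__ == "__main__":
    ses = SensoryEgoSphere()
    ses.register("green block", azimuth=20.0, elevation=-10.0)
    print(ses.find("green block"))
```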

2. Design of Personalized Human-Robot Interaction

2.1 Categorizing Users

The first step in developing a personalized robotic-aid system is to categorize users based on their physical and psychological conditions. Physical condition includes the user's height, weight and age; for disabled users, further conditions will exist depending on the nature of the disability. Psychological condition includes, for example, preferred working speed and preferred colors. The robot should remember each user's condition and select a proper behavior from its database. It should then adjust the parameters of the selected behavior or generate a new behavior from existing ones. The HA determines the user's condition using intention reading and the human database, while the SA selects the best behavior for the human intention and generates a new behavior for the user when needed.

2.2 Framework for Human-Robot Interaction

In the CIS, our philosophy for humanoid software design is to integrate both the human and the robot in a unified multi-agent based framework [4]. Thus, we group aspects of Human-Robot Interaction (HRI) into three categories:
• Physical: structure or body, e.g., physical features, manipulation capabilities
• Sensor: channels used to gain information, e.g., about the world, the environment, and each other
• Cognitive: internal workings of the system, e.g., the human (mind and affective state) and the humanoid (reasoning, communication of its intention)

Because it can be difficult to determine the cognitive aspects of humans consistently, we limit our cases to those in which both the human and the humanoid intend to achieve a common goal. We are also interested in giving the humanoid its own emotional or affective module to make HRI more socially pleasant [9]. ISAC is equipped with sensors (cameras, microphones, infrared sensors) for capturing communication modes such as face detection, finger pointing, etc. An infrared motion detector provides ISAC with a means of sensing human presence. We use MS Speech engines for detecting human speech and use a sound-localization system [10]. Likewise, we are developing techniques for ISAC to give feedback to people through speech and gestures, as well as a visual SES display projected on ISAC's monitor. The interface is based on a multi-agent based architecture [11,12] called the Intelligent Machine Architecture (IMA) developed at Vanderbilt [5]. Figure 3 illustrates the overall IMA agent structure with the short- and long-term memory structures.

Figure 3. IMA agents and memory structures

3. Human Agent for Personalized Human-Robot Interaction

The HA [13] is a virtual agent, realized as a collection of IMA agents, that serves as an internal active representation of people in the robot's environment [14]; these agents detect, represent and monitor people. The HA facilitates HRI by determining appropriate interaction behaviors. Five IMA agents perform the core functions of the HA and are grouped into two compound IMA agents (the Monitoring Agent and the Interaction Agent). The HA receives input concerning the human from various atomic agents that detect physical aspects such as a person's face, location, etc. The HA communicates with various supporting agents responsible for functions such as detecting features of people, interfacing with memory data structures, or reporting human status information to the Self Agent.
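As a rough illustration of how the HA might aggregate reports from its atomic detection agents, the sketch below composes a hypothetical per-person state from detection messages. The class names and message fields are our own assumptions for exposition, not the actual IMA interfaces.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class Detection:
    """A single report from an atomic Human Detection Agent (illustrative)."""
    source: str          # e.g., "face_detector", "sound_localizer"
    azimuth: float       # estimated direction of the person, in degrees
    confidence: float


@dataclass
class HumanState:
    """The HA's internal representation of one person."""
    name: Optional[str] = None
    azimuth: Optional[float] = None
    intention: Optional[str] = None
    detections: List[Detection] = field(default_factory=list)


class HumanAgent:
    """Compound agent that fuses detections and exposes human status."""

    def __init__(self):
        self.people: Dict[str, HumanState] = {}

    def report_detection(self, person_id: str, det: Detection) -> None:
        state = self.people.setdefault(person_id, HumanState())
        state.detections.append(det)
        # Simple fusion: trust the most confident direction estimate so far.
        best = max(state.detections, key=lambda d: d.confidence)
        state.azimuth = best.azimuth

    def set_intention(self, person_id: str, intention: str) -> None:
        self.people.setdefault(person_id, HumanState()).intention = intention

    def status_for_self_agent(self) -> Dict[str, HumanState]:
        """What the HA would forward to the Self Agent."""
        return self.people


if __name__ == "__main__":
    ha = HumanAgent()
    ha.report_detection("person-1", Detection("face_detector", azimuth=15.0, confidence=0.9))
    ha.set_intention("person-1", "intent to interact")
    print(ha.status_for_self_agent())
```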

3.1 Monitoring Agent

The Monitoring Agent monitors human features using speech recognition, speaker identification, face detection and recognition, sound localization, and motion detection, each of which contributes to an awareness of humans and their actions in the environment. It includes three atomic IMA agents: the Observer, Identification, and Human Affect Agents. Certain atomic agents, categorized as Human Detection Agents (HDAs), each detect a feature of the human, such as the location of a face, a voice, etc., and report it to the Observer Agent as shown in Figure 4. One HDA, the Face Detection Agent, is template-based and returns the location and confidence of each detection. Another HDA, the Sound Localization Agent, determines the direction of a sound source based on the relationship between the energies of the two stereo channels. Currently, the Observer Agent integrates this information to detect and track the human during an interaction and to represent whether there are 0, 1, or 2 people in the environment.

Figure 4. Human Agent and supporting atomic agents

The Identification Agent identifies the human in order to personalize interactions. It receives information from Human Identification Agents. For speaker identification, we have implemented a simple routine using a forced-choice classifier that operates on a library of known speakers stored in the human database. The face recognition algorithm utilizes Embedded Hidden Markov Models and the Intel Open Source Computer Vision Library [15]. Location and identity information is posted on the SES.

The Human Affect Agent, following the same design paradigm as the Observer and Identification Agents, receives input from various agents, called Affect Estimation Agents, each of which represents a feature of human affect. Currently, this agent receives input from a Word Valence Agent, which monitors the human's speech for words known to carry positive or negative expression. The goal of this agent is to provide ISAC with knowledge of the person's affective state in order ultimately to influence the interaction method.

3.2 Interaction Agent

The Interaction Agent handles the more pro-active functions in HRI:
• handling communication, via the Human Intention Agent
• modeling the interaction with the human, via the Social Agent.

The Human Intention Agent, shown in an AgentBuilder window (Figure 5), handles verbal and non-verbal communication between the humanoid and the human. The AgentBuilder program is an IMA development tool that provides the interface for the design, testing, and execution of IMA agents. The Human Intention Agent processes two types of intention from people, called expressed and inferred. The HA maps speech directed toward the robot into an expressed intention based on a mapping of keywords; if the person greets the robot or requests a task of it, this is considered an expressed intention. Other intentions represent what the robot can infer from human actions and are labeled inferred intentions. As an illustration, when a person leaves the room, ISAC assumes that the person no longer intends to interact and resets its expectations. An initial suite of human intentions includes the intent to communicate (used to eliminate speech or sounds with no communicative intent), the intent to interact with ISAC, the intent for ISAC to perform a task, and the intent to end the interaction. Based on observations of interactions between ISAC and various people, we have also included a special case, the intent to observe. A minimal sketch of such a keyword-to-intention mapping is given below.
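The following is a minimal sketch of how expressed intentions could be derived from keyword spotting in recognized speech, and how an observed event could map to an inferred intention. The keyword lists and event labels are illustrative assumptions rather than the actual rule set used on ISAC.

```python
from typing import Optional

# Hypothetical keyword-to-intention mapping for expressed intentions.
EXPRESSED_KEYWORDS = {
    "intent to interact": ["hello", "hi", "isac"],
    "intent for ISAC to perform a task": ["give", "hold", "find", "look at"],
    "intent to end interaction": ["goodbye", "bye", "thank you"],
}


def classify_expressed_intention(utterance: str) -> str:
    """Map an utterance directed at the robot to an expressed intention.

    Returns "no communicative intent" when no keyword matches, which is how
    speech or noise not addressed to the robot could be filtered out.
    """
    text = utterance.lower()
    for intention, keywords in EXPRESSED_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return intention
    return "no communicative intent"


def infer_intention(event: str) -> Optional[str]:
    """Map an observed human action to an inferred intention (illustrative)."""
    inferred = {
        "person_left_room": "intent to end interaction",
        "person_watching": "intent to observe",
    }
    return inferred.get(event)


if __name__ == "__main__":
    print(classify_expressed_intention("ISAC, hold the red block"))
    print(infer_intention("person_left_room"))
```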

Figure 5. Human Intention Agent, shown in an AgentBuilder window

The Social Agent contains a rule set for social interaction which enables the robot to interact with people more naturally. The rule base is a production system that operates on features such as the level of interaction and the current human intention, and provides suggestions of appropriate behaviors to the Self Agent for consideration. The Social Agent constantly monitors the state of certain variables that represent external information, such as the number of people, the current person, and new intentions. The Self Agent then interprets each suggestion in the context of its own current state, e.g., current intention, status, tasks, etc. A minimal sketch of such a production rule appears after the list below.

Figure 6 shows the model of the levels of interaction engagement in the HA that we developed as the basis for modeling social interaction. These levels progress from a state of no interaction to the ultimate goal of completing a task with (or for) a person.
• Level 1: Solitude. No one is detected in the environment; ISAC may choose actions to actively attract people with whom to interact.
• Level 2: Awareness of People. ISAC is aware of people but has not interacted with them yet.
• Level 3: Acknowledgement. ISAC actively acknowledges the presence of a person when someone approaches.
• Level 4: Active Engagement. Represents the stage of active interaction.
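As a minimal sketch of what one production rule in such a system might look like, the condition-action pairs below are illustrative assumptions; the actual rule base and its variables are described in [29].

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class SocialContext:
    """External variables the Social Agent might monitor (illustrative)."""
    engagement_level: int      # 1 = Solitude ... 4 = Active Engagement
    num_people: int
    new_intention: str = ""


@dataclass
class Rule:
    name: str
    condition: Callable[[SocialContext], bool]
    suggestion: str            # behavior suggested to the Self Agent


RULES: List[Rule] = [
    Rule("greet_newcomer",
         lambda c: c.engagement_level == 2 and c.new_intention == "intent to interact",
         "greet person"),
    Rule("attract_attention",
         lambda c: c.engagement_level == 1 and c.num_people == 0,
         "look for someone to talk to"),
    Rule("acknowledge_approach",
         lambda c: c.engagement_level == 3,
         "acknowledge person"),
]


def suggest_behaviors(context: SocialContext) -> List[str]:
    """Fire all matching rules and return their suggestions for the Self Agent."""
    return [r.suggestion for r in RULES if r.condition(context)]


if __name__ == "__main__":
    ctx = SocialContext(engagement_level=2, num_people=1,
                        new_intention="intent to interact")
    print(suggest_behaviors(ctx))
```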

Figure 6. Levels of interaction within the HA

3.3 Demonstration of Human-Guided Learning through Shared Attention

This demonstration uses finger-pointing and the HRI framework of Figure 4 to allow people to direct ISAC's attention to objects in its workspace. ISAC is directed by a human to look at several objects (assorted color blocks) on a table. When ISAC is told to look at the green block, ISAC looks for the pointed finger. ISAC then takes the position of the green block and registers the location and name onto its SES. ISAC is directed to look at a red block and repeats the previous actions. After the blocks are registered onto the SES, ISAC returns to its initial position (looking straight ahead). The human then instructs ISAC to look at one of the previously taught objects. ISAC retrieves the named object from the SES and saccades to the location given in the SES.

The application involves several framework levels for directing attention to known and unknown objects. Two aspects of this interaction, learning and recalling object locations, are shown schematically in Figure 7. To direct ISAC's attention to an unknown object, the application begins with speech from the human directing the robot's attention to an object. The HA activates the Human Finger Agent and parses the name of the object. The Human Finger Agent finds the pointed finger in order to fixate on the object. At this point, the HA sends the object name and location to the SES Manager, which registers it on the SES. For a known object already registered on the SES, the human tells ISAC to find the desired object. When this information is returned, the world coordinates of the object are sent to the Right Arm Agent and the robot performs a pointing gesture to point to the requested object.

Figure 7. Schematic for (a) Learn Object and (b) Find Object

3.4 Human Database

ISAC needs to know information about the person, and about how the interaction is progressing, in order to offer an intelligent interaction. To centralize this information, a Human Database was developed to interface with the HA. The database is a repository of information about people with whom the robot has previously interacted and provides two major functions. The first is to store personal data used to recognize and identify people, such as names, images, and voices (Table 1). The respective recognition libraries then store the actual processed exemplars used for recognition.

Table 1. Example of People Met Database

Index  Name   Face learned  Voice learned  Last meeting
1      Karla  Yes           Yes            2002-06-18 11:33:34
2      Juyi   Yes           Yes            2002-04-26 19:47:23
3      Juan   Yes           Yes            2002-06-20 10:15:32

The second database table stores relevant task-related information about the interactions that the robot has had with people (Table 2). The table tracks tasks that people have requested or intentions that ISAC has inferred from observing them. It acts as an interaction log and could be searched to determine the robot’s most recent or frequent tasks across all people or on a given day.


Table 2. Example of Intention Histories

Index  Name   Intent      Time
1      Juyi   Color Game  2002-06-18 11:33:34
2      Palis  Recognize   2002-06-20 09:35:54
3      Juan   Handshake   2002-06-20 09:37:23
4      Karla  Color Game  2002-06-20 09:39:43
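To make the Human Database concrete, the sketch below defines the two tables described above using SQLite from Python. The column names, types, and helper functions are our own assumptions, since the paper does not specify the schema.

```python
import sqlite3

# Illustrative schema for the Human Database; column names are assumptions.
SCHEMA = """
CREATE TABLE IF NOT EXISTS people_met (
    idx            INTEGER PRIMARY KEY,
    name           TEXT NOT NULL,
    face_learned   INTEGER NOT NULL,   -- 0/1 flag
    voice_learned  INTEGER NOT NULL,   -- 0/1 flag
    last_meeting   TEXT                -- e.g., '2002-06-18 11:33:34'
);
CREATE TABLE IF NOT EXISTS intention_history (
    idx    INTEGER PRIMARY KEY,
    name   TEXT NOT NULL,
    intent TEXT NOT NULL,
    time   TEXT NOT NULL
);
"""


def open_human_database(path=":memory:"):
    """Create (or open) the database and ensure both tables exist."""
    conn = sqlite3.connect(path)
    conn.executescript(SCHEMA)
    return conn


def log_intention(conn, name, intent, time):
    """Append one interaction to the log, as in Table 2."""
    conn.execute(
        "INSERT INTO intention_history (name, intent, time) VALUES (?, ?, ?)",
        (name, intent, time))
    conn.commit()


def recent_tasks(conn, name):
    """Search the interaction log for a person's most recent requests."""
    cur = conn.execute(
        "SELECT intent, time FROM intention_history WHERE name = ? ORDER BY time DESC",
        (name,))
    return cur.fetchall()


if __name__ == "__main__":
    db = open_human_database()
    log_intention(db, "Juyi", "Color Game", "2002-06-18 11:33:34")
    print(recent_tasks(db, "Juyi"))
```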

3.5 Measuring Intention via Surface EMG Signals

As part of our ongoing investigation into identifying and measuring human intention, the Cognitive Robotics Laboratory has been developing an agent that extracts position and velocity information from surface electromyographic (EMG) signals obtained from human antagonistic muscle pairs. Specifically, these EMG signals are measured from a human subject's biceps and triceps muscles as the subject performs elbow flexion motions, with a load at the wrist, to various angular positions in the sagittal plane (see Figure 8). Ultimately, we wish to use this agent to demonstrate the control and teleoperation of ISAC's robotic arm using surface EMG signals (see Figure 9).

Figure 8. Human user performing elbow flexion motions in the sagittal plane at various elbow angle positions located on the large, multi-colored board

Figure 9. (a) Control of ISAC's McKibben artificial muscle based arm via surface electromyographic signals from a user's arm muscles; (b) EMG output extracted from a user's arm during a muscle contraction using Neurodyne's System/3 EMG amplifier

The second phase of this project is to endow the biologically inspired control agent with the ability to measure a person's intention when the person performs an arm motion. Our goal is to effectively capture the inherent primitive motion behaviors derived from a muscle contraction, or the associated sensory-motor control mechanisms that arise when a person thinks about performing an arm reaching motion. This information represents the person's volition or intention and may be used to control a flexible, multi-degree-of-freedom robotic system such as ISAC in a human-like manner.

Surface electromyographic signals offer a noninvasive method for teleoperating a robotic arm based on a user's volition or intention [16]. There have been various research attempts to use EMG signals to control rehabilitation robotic systems. Farry conducted research in this area at NASA Johnson Space Center using the 16 degree-of-freedom Utah/MIT Dexterous Hand [17]. Most researchers have focused their attention on isolating the various feature patterns that can be classified from the EMG signal using a neural network or statistical analysis. Graupe used ARMA models to isolate and identify multiple prosthetic functions from the crosstalk EMG activity commonly occurring at the stump of an amputated limb [18]. Hudgins experimented with neural networks to achieve multiple functions using EMG signals [19]. Northrup demonstrated how force and torque information could be modeled from the tonic (gravity related) and phasic (speed related) waveform components of the EMG signal in order to simulate arm reaching motions on ISAC [20]. Northrup's work represented the first phase of the CRL's attempt to develop agents for ISAC having biologically inspired control architectures.

We use techniques from the neuroscience literature to perform our analysis. Flanders suggests that the relationship between the EMG signal and the static directional force can be spatially tuned and modeled using cosine tuning functions [21]. The formula used by Flanders for fitting EMG data to a cosine tuning function is

EMG = C + A · F · cos(θ0 − θ)    (1)

where:
C = constant offset
A = scaling factor
F = magnitude of force at the wrist
θ0 = force direction of maximum EMG amplitude
θ = current force direction
EMG = amplitude of the EMG signal

This method is similar to that proposed by Georgopoulos [22], who used cosine tuning functions to spatially tune motor cortical activity in the cerebral cortex. His goal was to compute the neuronal population response during arm reaching movements in a particular direction. Flanders hypothesizes that the spatial tuning of static EMG signals represents the convergence of multiple, descending motor commands from the motor cortex. She suggests that a "cortical to motoneuronal" transformation may evolve if one assumes that all neuronal activity within the motor system (related to movement and direction) can be described using multiple cosine tuning functions. The direction with the largest neural activity is known as the "preferred direction". Knowing the preferred directions of cortical neurons and the preferred directions of motoneurons, the transformation between the two can be modeled as a weighted mapping. If the preferred direction of a motoneuron can be clearly identified after a phasic EMG burst resulting from muscle activation, then that information can be used to control a robotic arm. Hence, a human-machine interface linking cortical motion signals (representing a person's intention) and motoneuronal phasic EMG signals to the desired motion of a robotic arm could be derived. This derivation is possible if the appropriate cosine tuning functions are known and can be implemented in some type of non-linear control scheme. In other words, it can be hypothesized that a humanoid robot could be indirectly connected to and controlled by a human brain via surface electromyographic signals if a mapping from the phasic EMG signal to a specific joint angle can be calculated. The implication is that a non-invasive but clearly definable pathway linking the robot to the human brain is formed and identified. Thus, not only is a human-machine interface developed, but essentially a brain-machine interface emerges in the process as well [23].

Therefore, in this CRL research project, we are using cosine tuning functions to model EMG signals. Specifically, we are analyzing the surface EMG activation pattern of the biceps and triceps muscles during static and dynamic muscle contractions in the sagittal plane. We will use this information to derive the preferred direction of movement and velocity using the cosine tuning functions described above. ISAC serves as the experimental test-bed for this project. Afterwards, this position and velocity data will be used as the control inputs for ISAC's arm. The EMG signals will be obtained using a Neurodyne System/3 EMG amplifier. The goal is to create a human-machine interface that maps the surface EMG signal from the user's arm muscles to an associated pressure value for a particular joint on ISAC's arm [24]. When the user, for example, curls his left arm by contracting his biceps muscles and extending his triceps, ISAC emulates the behavior by curling its arm using the corresponding joint muscles. Ultimately, it is desired that this "EMG teleoperation" be performed in real time. McKibben artificial muscles are well suited for this type of research since they are innately flexible, spring-like structures that operate and behave similarly to actual human muscles. Results from this project will be published in upcoming journal papers.
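As a hedged illustration of Equation (1), the sketch below fits the cosine tuning parameters by ordinary least squares, using the identity A·F·cos(θ0 − θ) = a·cos θ + b·sin θ. The numeric samples in the example are placeholders, not measured EMG data.

```python
import numpy as np


def fit_cosine_tuning(theta_deg, emg, force=1.0):
    """Fit EMG = C + A*F*cos(theta0 - theta) by linear least squares.

    Uses A*F*cos(theta0 - theta) = a*cos(theta) + b*sin(theta),
    with a = A*F*cos(theta0) and b = A*F*sin(theta0).
    Returns (C, A, theta0_deg).
    """
    theta = np.radians(np.asarray(theta_deg, dtype=float))
    emg = np.asarray(emg, dtype=float)
    X = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
    c, a, b = np.linalg.lstsq(X, emg, rcond=None)[0]
    amplitude = np.hypot(a, b)             # equals A*F
    theta0 = np.degrees(np.arctan2(b, a))  # preferred direction
    return c, amplitude / force, theta0


def predict_emg(theta_deg, C, A, theta0_deg, force=1.0):
    """Evaluate Equation (1) for given force directions (degrees)."""
    return C + A * force * np.cos(np.radians(theta0_deg) - np.radians(theta_deg))


if __name__ == "__main__":
    # Placeholder directions (degrees) and EMG amplitudes; not measured data.
    directions = [0, 45, 90, 135, 180, 225, 270, 315]
    samples = [0.8, 1.2, 1.5, 1.2, 0.8, 0.4, 0.1, 0.4]
    C, A, theta0 = fit_cosine_tuning(directions, samples)
    print(f"C={C:.2f}, A={A:.2f}, preferred direction={theta0:.1f} deg")
```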

4. Self Agent for Cognitive Control

The Self Agent (SA) is responsible for ISAC's cognitive activities, ranging from sensor signal monitoring to task monitoring to decision-making (Figure 10). The SA integrates failure information from sensors and maintains information about the task-level status of the humanoid. Its cognitive aspects include recognition of human intention and selection of appropriate actions by forming plans and activating them. The structure and functions of the Self Agent are shown in Figure 11. Conflict resolution is currently handled based on the relative priority of the conflicting actions.

4.1 Central Executive

Inspired by several non-robotic fields such as cognitive psychology and computational neuroscience, the architecture for cognitive robots includes a structure called the Central Executive (CE), which functions similarly to its counterpart in the human brain [25]. That is, the CE provides cognitive control to the robot where it is needed, in tasks that require the active maintenance and updating of context representations and relations to guide the flow of information processing and to bias actions [26]. The focus of cognitive control on context processing and representation should allow robots to adjust intelligently to changes in internal and external contingencies while actively maintaining context information about internal goal representations [25]. Other aspects of cognitive control include goal selection and updating in context-switching tasks, as well as performance monitoring and adjustment [27].

The CE interacts with the HA, the STM, the LTM, and other agents within the SA to construct and invoke plans to perform various tasks. The goal of each generated plan is determined by input from the Intention Agent, an internal structure of the CE. Constructed plans are put into action by activating appropriate behaviors stored in the LTM to form new procedural memory or motions, guided by the attention networks and by sensory information received from the SES, which is part of the STM. The status of the human passed on from the HA is taken into consideration by the Intention Agent, while the status of the humanoid also affects how the robot generates plans within the CE.
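The sketch below shows one way such an executive loop could tie together the pieces named above (intention from the HA, behaviors from the LTM, priority-based conflict resolution). The class and method names are hypothetical, intended only to make the flow of cognitive control concrete.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class Plan:
    goal: str
    behaviors: List[str]   # names of behaviors assumed to be stored in the LTM
    priority: int = 0


class CentralExecutive:
    """Illustrative executive loop: intention -> plan -> behavior activation."""

    def __init__(self, ltm_behaviors: Dict[str, List[str]]):
        # Hypothetical LTM: maps a goal to the behaviors that accomplish it.
        self.ltm_behaviors = ltm_behaviors
        self.current_plan: Optional[Plan] = None

    def form_plan(self, intention: str, priority: int = 0) -> Optional[Plan]:
        behaviors = self.ltm_behaviors.get(intention)
        return Plan(intention, behaviors, priority) if behaviors else None

    def step(self, intention: str, priority: int = 0) -> List[str]:
        """Resolve conflicts by priority and return the behaviors to activate."""
        candidate = self.form_plan(intention, priority)
        if candidate is None:
            return []
        if self.current_plan is None or candidate.priority >= self.current_plan.priority:
            self.current_plan = candidate     # higher-priority goal wins
        return self.current_plan.behaviors


if __name__ == "__main__":
    ce = CentralExecutive({"serve drink": ["locate drink", "grasp", "hand over"]})
    print(ce.step("serve drink", priority=1))
```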

Figure 10. Levels of monitoring and action in the Self Agent

Figure 11. SA and supporting Atomic Agents

4.2 Modular Controller

In order to implement effective behavior-based control, we employed the idea of a modular controller proposed by Wolpert and Kawato [28]. Each module has a model and a controller for a specific behavior. When a command is given by the Self Agent, an estimator in each module estimates the next state. The Central Executive selects a control module based on task relevance, calculated from the command and the current and estimated states. Once a module is selected, only that module's controller is activated to control ISAC. Figure 12 illustrates this modular controller.

Figure 12. Modular controller for ISAC
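Below is a minimal sketch of this selection scheme, loosely in the spirit of Wolpert and Kawato's multiple paired forward/inverse models [28]: each module predicts the next state, and the module whose prediction best matches the observed state for the given command is the one whose controller is activated. The interfaces and the relevance measure are our own assumptions.

```python
import math
from typing import Callable, List, Sequence


class ControlModule:
    """One paired predictor/controller for a specific behavior (illustrative)."""

    def __init__(self, name: str,
                 predictor: Callable[[Sequence[float], Sequence[float]], Sequence[float]],
                 controller: Callable[[Sequence[float]], Sequence[float]]):
        self.name = name
        self.predictor = predictor    # forward model: (state, command) -> predicted next state
        self.controller = controller  # controller: state -> actuator command

    def relevance(self, state, command, observed_next) -> float:
        """Higher when the module's prediction matches what actually happened."""
        pred = self.predictor(state, command)
        error = math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, observed_next)))
        return 1.0 / (1.0 + error)


def select_module(modules: List[ControlModule], state, command, observed_next) -> ControlModule:
    """Pick the most task-relevant module; only its controller is then used."""
    return max(modules, key=lambda m: m.relevance(state, command, observed_next))


if __name__ == "__main__":
    reach = ControlModule("reach",
                          predictor=lambda s, c: [s[0] + c[0]],
                          controller=lambda s: [0.5])
    hold = ControlModule("hold",
                         predictor=lambda s, c: [s[0]],
                         controller=lambda s: [0.0])
    chosen = select_module([reach, hold], state=[0.0], command=[0.1], observed_next=[0.1])
    print(chosen.name)  # "reach" predicts the observed motion better
```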

5. Experimental Results

5.1 Demonstration of Personalized Behavior

Through social interaction, ISAC learns several things about each person, e.g., their height, favored hand (right or left) and their favorite drink. This personalized information is stored in the Human Database, as shown in Table 3. Thus, when someone approaches the robot and states, "I am Juyi, give me my drink," the HA processes this information and notifies the SA of the request. Once the Central Executive determines whether to comply, it coordinates the behavior so that ISAC picks up the person's favorite drink and offers the beverage at an appropriate position based on the person's dominant hand, height and location.

Table 3. Sample human database for demonstration

Name   Favorite Drink  Main Hand     Height
Karla  Coke            Right-handed  Short
Juyi   Coffee          Right-handed  Tall
Juan   Water           Left-handed   Tall
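A hedged sketch of how this personalization could be realized from the Table 3 records is shown below; the handover-position heuristics, offsets, and field names are illustrative assumptions, not the parameters used on ISAC.

```python
# Illustrative copy of the Table 3 records.
HUMAN_DB = {
    "Karla": {"drink": "Coke",   "hand": "right", "height": "short"},
    "Juyi":  {"drink": "Coffee", "hand": "right", "height": "tall"},
    "Juan":  {"drink": "Water",  "hand": "left",  "height": "tall"},
}


def plan_drink_handover(name: str):
    """Return the drink to fetch and a rough handover pose for a known user.

    The offsets below are placeholders meant only to show how dominant hand
    and height could parameterize the selected behavior.
    """
    profile = HUMAN_DB.get(name)
    if profile is None:
        return None  # unknown user: the robot would fall back to a default behavior
    side_offset = 0.25 if profile["hand"] == "right" else -0.25    # meters, toward dominant side
    handover_height = 1.2 if profile["height"] == "tall" else 0.9  # meters above the floor
    return {"drink": profile["drink"],
            "handover_offset_m": side_offset,
            "handover_height_m": handover_height}


if __name__ == "__main__":
    print(plan_drink_handover("Juyi"))
```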

5.2 Demonstration of Appropriate Social Interaction

In order to evaluate the performance of the Human Agent, we performed several experiments [29]. People were allowed to approach the robot and choose from a given set of tasks. ISAC demonstrated both the requested tasks and the generation of social behaviors that were not explicitly requested. An interaction began with a person greeting ISAC, who in turn greeted the person. The person then requested a task such as, "ISAC, hold an object to be given to (another specific person)." The system was evaluated for success in performing the tasks and for the social appropriateness of the robot's behavior.

Of the 28 explicit requests for tasks, the robot chose to execute the task 23 times. On the 5 occasions when it did not execute the request, the robot either 1) did not process the utterance as a request, 2) chose not to perform the task due to the interplay of the social rules, or 3) chose to execute the task, but execution was interrupted because the person requested another task.

The rule base in the Social Agent was designed to provide ISAC with the general properties of being polite and working with others, as a foundation for developing personal robots. The rule-generated responses of the robot were compared with randomly generated responses for various situations. Human raters were given 17 interpersonal interaction situations, shown in Table 4, and a pair of responses for each. The social situations were categorized in general terms, referring to tasks or conversations without specifying their nature, and were developed at a level of resolution corresponding to the human intentions ISAC can detect and label. Each question presented a situation and the response of a person in that situation. The rater was asked to score the social appropriateness of the response on a scale from one to five, with a higher score indicating greater appropriateness (see Figure 13). Each situation was presented with the robot's rule-generated response, and each was also independently presented with a response randomly chosen from the robot's options. An analysis of variance showed that the robot's responses were generally more "socially appropriate" than the random ones. The results were then tested to determine whether the difference in social appropriateness scores between the robot's responses and the randomly chosen ones was significant, and the difference was found to be statistically significant. Thus, in general, ISAC's responses were more socially appropriate than random action selections, demonstrating that the system enabled the robot to perform at a measurable level of social appropriateness.

Table 4. List of interaction situations for evaluating social appropriateness, shown with ISAC's responses.

Situation | Person A is ... | Then
1 | Alone and has been alone for a while | Looks for someone to talk to
2 | Alone. Person B walks up and does not say anything. | Invites person to interact
3 | Alone. Person B walks up and says "Hello". | Greets Person B
4 | Alone. Person B requests a task. | Starts requested task
5 | Alone. Person B asks, "How are you?" | Replies to person
6 | Talking with Person B. Person C walks up and does not say anything. | Looks to see who Person C is
7 | Talking with Person B. Person C walks up and says "Hello". | Greets Person C
8 | Talking with Person B. Person C requests a task. | Starts requested task
9 | Talking with Person B. Person B leaves. | Ends current task
10 | Doing task for Person B, who is present. Person C walks up and does not say anything. | Looks to see who Person C is
11 | Doing task for Person B, who is present. Person C walks up and says "Hello". | Greets Person C
12 | Doing task for Person B, who is present. Person C requests a task. | Starts new task
13 | Doing task for Person B, who is present. Person B requests a different task. | Starts new task
14 | Doing task for Person B, who is present. Person B leaves interaction. | Ends current task
15 | Doing task for Person B, who is not present. Person C walks up and does not say anything. | Invites person to interact
16 | Doing task for Person B, who is not present. Person C walks up and says "Hello". | Greets Person C
17 | Doing task for Person B, who is not present. Person C requests a task. | Starts new task

Figure 13. Mean of response scores by situation number (I: ISAC; R: Random)

6. Conclusion

In order to realize a personalized robotic-aid system with our humanoid robot, ISAC, we developed a multi-agent architecture. In particular, we developed two high-level agents, the Human Agent and the Self Agent. The Human Database within the Human Agent stores personalized data, and other agents within the Human Agent are used to recognize each person's intention and state. The Self Agent is responsible for performing the behaviors requested by the Human Agent according to the user's state. We believe that this approach is effective for achieving personalized human-robot interaction, resulting in a more human-friendly robot.

Acknowledgements

This work has been supported in part through DARPA MARS2020 (NASA-JSC Grant #NAG9-1446), DARPA SMDC (Grant #DASG60-1-01-0001) and also through in-house research funding through the Center for Intelligent Systems at Vanderbilt. The authors would like to thank members of the Cognitive Robotics Lab, past and present, for their contributions.

References

[1] K. Kawamura, R.A. Peters II, D.M. Wilkes, W.A. Alford and T.E. Rogers, "ISAC: Foundations in Human-Humanoid Interaction," IEEE Intelligent Systems and their Applications, vol. 15, no. 4, pp. 38-45, July-August 2000
[2] T. Fukuda, R. Michelini, V. Potkonjak, S. Tzafestas, K. Valavanis and M. Vukobratovic, "How Far Away Is 'Artificial Man'?" IEEE Robotics and Automation Magazine, pp. 66-73, March 2001
[3] K. Kawamura, S. Bagchi, M. Iskarous and M. Bishay, "Intelligent Robotic Systems in Service of the Disabled," IEEE Transactions on Rehabilitation Engineering, vol. 3, no. 1, pp. 14-21, 1995
[4] K. Kawamura, R.A. Peters II, R. Bodenheimer, N. Sarkar, J. Park, A. Spratley and K.A. Hambuchen, "Multiagent-based Cognitive Robot Architecture and its Realization," International Journal of Humanoid Robotics, vol. 1, no. 1, pp. 65-93, March 2004
[5] R.T. Pack, D.M. Wilkes and K. Kawamura, "A Software Architecture for Integrated Service Robot Development," Proc. of IEEE Systems, Man and Cybernetics, pp. 3774-3779, 1997
[6] K. Kawamura, R.T. Pack, M. Bishay and M. Iskarous, "Design Philosophy for Service Robots," Robotics and Autonomous Systems, vol. 18, pp. 109-116, 1996
[7] R.A. Peters II, K.A. Hambuchen, K. Kawamura and D.M. Wilkes, "The Sensory EgoSphere as a Short-Term Memory for Humanoids," Proc. of 2nd IEEE-RAS International Conference on Humanoid Robots, pp. 451-459, 2001
[8] D. Erol, J. Park, E. Turkay, K. Kawamura, O.C. Jenkins and M.J. Mataric, "Motion Generation for Humanoid Robots with Automatically Derived Behaviors," Proc. of the IEEE International Conference on Systems, Man, and Cybernetics (SMC), Washington, DC, pp. 1816-1821, October 2003
[9] C. Breazeal, Designing Sociable Robots, Cambridge, MA, MIT Press, 2002
[10] A.S. Sekman, M. Wilkes and K. Kawamura, "An Application of Passive Human-Robot Interaction: Human Tracking Based on Attention Distraction," IEEE Trans. on Systems, Man and Cybernetics - Part A: Systems and Humans, vol. 32, no. 2, pp. 248-259, 2002
[11] M. Minsky, The Society of Mind, New York, NY, Simon and Schuster, 1986
[12] J. Ferber, Multi-Agent Systems: An Introduction to Distributed Artificial Intelligence, Harlow, UK, Addison-Wesley, 1999
[13] K. Kawamura, T.E. Rogers and X. Ao, "Development of a Human Agent for a Multi-Agent-Based Human-Robot Interaction," International Conference on Autonomous Agents and MultiAgent Systems, pp. 1379-1386, 2002
[14] R. Bajcsy, "Active Perception," Proceedings of the IEEE, vol. 76, no. 8, pp. 996-1005, August 1988
[15] A. Nefian and M. Hayes, "Face Recognition using an Embedded HMM," IEEE Conf. on Audio and Video-Based Biometric Person Authentication, Washington, DC, pp. 19-24, 1999
[16] G.R. McMillan, "The Technology and Applications of Biopotential-Based Control," RTO Lecture Series 215 on Alternative Control Technologies: Human Factors Issues, Bretigny, France, Chapter 7, pp. 7:1-11, October 7-8, 1998
[17] K.A. Farry, I.D. Walker and R.G. Baraniuk, "Myoelectric Teleoperation of a Complex Robotic Hand," IEEE Trans. Robotics and Automation, vol. 12, no. 5, pp. 775-788, October 1996
[18] D. Graupe, J. Magnussen and A.A. Beex, "A Microprocessor System for Multifunctional Control of Upper-Limb Prosthesis via Myoelectric Signal Identification," IEEE Trans. Automatic Control, vol. AC-23, no. 4, pp. 538-544, August 1978
[19] B. Hudgins, P. Parker and R.N. Scott, "A New Strategy for Multifunction Myoelectric Control," IEEE Trans. Biomedical Engineering, vol. 40, no. 1, pp. 82-94, January 1993
[20] S.G. Northrup, Biologically Inspired Control of a Humanoid Robot with Non-Linear Actuators, Ph.D. Dissertation, Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, August 2001
[21] M. Flanders and J.F. Soechting, "Arm Muscle Activation for Static Forces in Three-Dimensional Space," Journal of Neurophysiology, vol. 64, no. 6, pp. 1818-1837, 1990
[22] A.P. Georgopoulos, R.E. Kettner and A.B. Schwartz, "Primate Motor Cortex and Free Arm Movements to Visual Targets in Three-Dimensional Space. II. Coding of the Direction of Movement by a Neuronal Population," Journal of Neuroscience, vol. 8, no. 8, pp. 2928-2937, August 1988
[23] W. Craelius, "The Bionic Man: Restoring Mobility," Science, vol. 295, pp. 1018-1021, February 8, 2002
[24] E.E. Brown Jr., D.M. Wilkes, K. Kawamura and K. Sagawa, "Development of an Upper Limb, Intelligent Orthosis using Pneumatically Actuated McKibben Artificial Muscles," Proc. of 7th International Conference on Rehabilitation Robotics, Evry, France, pp. 24-30, April 25-27, 2001
[25] T.S. Braver and D.M. Barch, "A Theory of Cognitive Control, Aging Cognition, and Neuromodulation," Neuroscience and Biobehavioral Reviews, vol. 26, pp. 809-817, 2002
[26] R. O'Reilly, T. Braver and J. Cohen, "A Biologically Based Computational Model of Working Memory," in Models of Working Memory: Mechanisms of Active Maintenance and Executive Control, A. Miyake and P. Shah, Eds., Cambridge, UK, Cambridge University Press, 1999
[27] B. Hommel, K.R. Ridderinkhof and J. Theeuwes, "Cognitive Control of Attention and Action: Issues and Trends," Psychological Research, vol. 66, pp. 215-219, 2002
[28] D.M. Wolpert and M. Kawato, "Multiple Paired Forward and Inverse Models for Motor Control," Neural Networks, vol. 11, pp. 1317-1329, 1998
[29] T. Rogers, The Human Agent: A Model for Human-Robot Interaction, Ph.D. Dissertation, Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, August 2003

