A real-time, hierarchical, sensor-based robotic system architecture


Journal of Intelligent and Robotic Systems 21: 1–27, 1998. © 1998 Kluwer Academic Publishers. Printed in the Netherlands.


TIMOTHY HEBERT, KIMON VALAVANIS* and RAMESH KOLLURU
Robotics and Automation Laboratory (RAL), CACS and A-CIM Center, University of Southwestern Louisiana, Lafayette, LA 70504, U.S.A.; e-mail: [email protected]

(Received: 4 June 1996; in final form: 21 May 1997)

Abstract. A robotic system architecture is presented and its real-time performance, when used to control a robotic gripper system for deformation-free handling of limp material, is evaluated. A major problem to be overcome has been the integrability and compatibility issues between the various components of the developed system. The software and hardware protocols and interfaces developed to control and coordinate the subsystem operations and interactions are presented. The performance of the developed real-time, hierarchical, sensor-based robotic system architecture is found to meet and satisfy a set of system operational constraints and demands as dictated by industry.

Key words: hierarchical control, system architecture, sensor-based control, system integration, compatibility.

1. Introduction

The objective of this paper is to present and discuss a real-time, hierarchical, sensor-based robotic system architecture implemented at the USL Robotics and Automation Laboratory. The overall system architecture follows the three interactive level hierarchical structure of organization, coordination and execution [22], appropriately modified and enhanced to accommodate the operational requirements and constraints of the system under consideration. The robotics system is shown in Figure 1 and the system components are listed in Table I. This system is capable of performing the following tasks:

• Real-time robot control (AdeptOne, AdeptThree, PUMA)
• System error identification and recovery
• Vision-based object tracking
• Real-time, bidirectional, 256-level speed control of conveyor belts
• Multi-system coordination and control
• Collision avoidance in semi-structured environments using potential fields

Acknowledgement: This work has been partially supported by LEQSF Industrial Ties Grant R-3130.
* To whom all correspondence should be addressed.


Figure 1. Configuration of Intelligent Material Handling System (IMHS) testbed.

Table I. IMHS system components

AdeptOne and AdeptThree robots
Puma 560C Mark II series robot
Two Metzgar A-series conveyor systems
8096 embedded controller for bidirectional control of conveyors
Two Pulnix TM-540 series CCD cameras
Adept vision multiplexer for AdeptOne
Force sensors for AdeptOne and AdeptThree
Inductive, capacitive, opto-electric, and photo-electric proximity sensors
Suction generation unit with 2.5 HP blower
Custom-made grippers for limp material handling
High-accuracy positioning systems for AdeptOne

In addition to the above, an interactive graphics package, the SILMA CIMStation, is used for simulation of the system model before the actual (real-time) system operation. Thus, a complete system simulation and real-time system operation are accomplished.


Given such a diverse system, a major problem to be overcome has been the integrability and compatibility issues between the system components, in order to meet and satisfy a set of system operational constraints. Thus, specific objectives of the paper include the design and implementation of:

• hardware-based interfaces to implement five distinct subsystems and allow supervisory control of two vision cameras, four arrays of proximity sensors, two conveyor units, a vacuum generation unit, and AdeptThree and AdeptOne industrial robot arms; and,
• software-based protocols to implement all necessary hardware connections and software-based interfaces to coordinate module actions, including object recognition and tracking, gripper/target pre-grasp alignment, and object manipulation.

Further, a prototype gripper designed to handle limp material (shown in Figure 8 of Section 3), analytically discussed in [11–14], is interfaced with the AdeptThree robot for the purpose of the overall system real-time performance evaluation.

The rest of the paper is organized as follows: Section 2 includes a brief review of existing sensor paradigms and sensor control structures. Section 3 introduces the overall architectural structure of the implemented system, exploring the hardware modules and the interconnections and interfaces between the individual units. Section 4 is a description of the software control algorithms, focusing on the algorithms which accomplish the sub-tasks of rough and fine alignment of the gripper and the target object, and subsequent manipulation of the object. The paper concludes with a case study integrating the control architecture with an indigenously designed robotic gripper for handling limp material.

2. A Review of Existing Sensor Paradigms and Control Structures

Sensors and the utility of sensor information play a major role in the design and implementation of a control architecture. The actual sensors selected, the role of the sensor information, the task to be implemented, as well as many other factors influence the design parameters of any system. Presented below are several different sensor integration paradigms and control structures. A sensor integration paradigm is the basic concept upon which the role and implementation of sensor integration can be built.

Artificial neural networks represent an attempt to model control structures after actual neural patterns. Priebe and Marchette [17], for example, have proposed a self-organizing neural network architecture for multi-sensor integration on an autonomous robot. The ability of a neural network to reorganize its “thought” patterns allows implementations of control architectures of this type to adapt to the different environmental stimuli determined through sensor information.

Object-oriented programming is a paradigm which greatly emphasizes the modularity of the developed system. The sensor information is encapsulated within a defined object; input and output to the object is through message passing.


A set of interfaces is developed along with each sensor object to accomplish the tasks required of the particular sensor. Rodger and Browse [18] have used this type of paradigm to implement multisensor object recognition, and Allen [2] has developed an object-oriented framework for multisensor robotic tasks.

Luo et al. [15] have developed a paradigm, called the Hierarchical Phase-Template Paradigm, which is based upon four temporal phases involved in information acquisition. The four phases coincide with ranges of the target from the sensor. The phases begin at the “far away” phase and progress through the “near to” and “touching” phases, ending with the “manipulation” phase. Different sensors are associated with each phase, yet the information gathered in any phase is placed into a template common to all phases. This allows already registered object information to be passed to the next phase as the need arises.

Network and rule-based systems are, perhaps, the most common form of control architectures implemented in sensor integration. These control types are particularly useful when a high level of sensor fusion needs to be performed. Rule-based networks are most useful as top-level controllers due to their organizational nature [1]. The NBS (National Bureau of Standards, or as it is known today the National Institute of Standards and Technology, NIST) Sensory and Control Hierarchy consists of an “ascending sensory processing” hierarchy coupled to a “descending task-decomposition” control hierarchy. The fewer sensors that require high levels of sensor processing receive the more complex tasks. These tasks can be decomposed into sub-tasks and shared with the more numerous, low-level sensors. Processing at each level can be done in parallel with the other levels [1].

A three-level hierarchical structure is used here to implement the structure of the developed system, as specified by the theory of Saridis and Valavanis [22]. The physical devices establish the execution level. Two levels of modules make up the structure of the coordination level. The five higher modules include the vision coordinator, the sensor coordinator, the vacuum coordinator, the conveyor coordinator, and the robot module. These modules communicate with each other and with the controllers on the next lower level to accomplish sub-tasks dictated by the highest level of the designed system, the organization level.

3. Overall System Architecture

3.1. THE ARCHITECTURAL MODEL

Intelligent machines feature an application of the theory of hierarchical intelligent control, which is based on the principle of increasing precision with decreasing intelligence (IPDI) [22]. Most modern manufacturing systems and autonomous systems are modeled and controlled by using hierarchical control architectures as represented in [21]. The functional system architecture follows the three-level hierarchical system structure [22]:


Figure 2. Hierarchical intelligent control architecture.

Organization level: This contains the system knowledge-base and is associated with the maximum “intelligence” contained in the system. Intelligence constitutes knowledge of the system and surroundings (world model) and the ability to react and interact with the environment based on both external and internal stimuli. The organizer accepts and interprets user (input) commands and related feedback from the lower levels, defines the task sequences to be executed in real-time and processes the knowledge-information, with a high degree of intelligence and little or no precision.

Coordination level: This level consists of logical components which serve to interface the organization level with distinct modules at the execution level. The coordination level is essential for dispatching organizational information to the lowest level and is composed of coordinators, each performing a specific set of functions. The coordination level is concerned with the formulation of the actual control problem to be executed by the lowest level agents. The coordination level also provides scheduling, tuning, supervision, error-detection and possibly error-recovery, etc.

Execution level: The execution level interfaces with the world environment for performance of tasks with a high degree of precision via low-level sensors and actuators of the system. Feedback from the execution level agents is used to populate the system knowledge-base at higher levels of the system.

Figure 2 shows the expanded three-level hierarchical control structure developed. In what follows, a description of each module is summarized [8].
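To make the division of responsibilities concrete, the sketch below casts the three levels as cooperating Python classes. All class and method names (Organizer, Coordinator, Controller, dispatch, execute) are illustrative assumptions rather than part of the implemented system, which runs on Adept/VAL controllers; the sketch only mirrors the downward flow of commands and upward flow of feedback described above.

```python
# Minimal sketch of the three-level hierarchy (organization -> coordination -> execution).
# All names are hypothetical; they only illustrate how commands flow down and
# feedback flows up, as described in the text.

class Controller:
    """Execution level: drives one physical device with high precision."""
    def __init__(self, device):
        self.device = device

    def execute(self, command):
        # Interact with the hardware and return raw feedback.
        print(f"[{self.device}] executing: {command}")
        return {"device": self.device, "status": "done"}


class Coordinator:
    """Coordination level: turns a sub-task into device-level commands."""
    def __init__(self, name, controllers):
        self.name = name
        self.controllers = controllers

    def dispatch(self, subtask):
        feedback = [c.execute(subtask) for c in self.controllers]
        return {"coordinator": self.name, "feedback": feedback}


class Organizer:
    """Organization level: interprets a user command as a task sequence."""
    def __init__(self, coordinators):
        self.coordinators = coordinators

    def run(self, task_sequence):
        results = []
        for coordinator_name, subtask in task_sequence:
            results.append(self.coordinators[coordinator_name].dispatch(subtask))
        return results


if __name__ == "__main__":
    vision = Coordinator("vision", [Controller("camera")])
    robot = Coordinator("robot/gripper", [Controller("AdeptThree")])
    organizer = Organizer({"vision": vision, "robot/gripper": robot})
    organizer.run([("vision", "locate object"), ("robot/gripper", "intercept object")])
```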


Figure 3. The vision system module.

The vision unit contains the AdeptVision AGS, which includes a vision multiplexer and two grayscale Pulnix TM-540 CCD cameras, and is utilized in two types of processing: inspection and object recognition. Amongst other functions, the vision unit is capable of object recognition and boundary detection. The vision unit recognizes objects entering the conveyor system and extracts information which is helpful in tracking the object and eventually positioning the gripper above the fabric for pickup. Figure 3 is a representation of the physical components of the vision module along with some of the tasks performed within the unit.

The sensor unit includes all processes involving non-visual sensors. Actions and sub-tasks of the three sensor controllers are dictated in the coordination level by the Sensor Coordination module. The lowest level within the sensor module contains proximity sensors and force sensing modules (FSM). Each of these three physical units has a logical unit associated with it. The logical control unit handles actual data readings and control signals to the physical devices. These modules represent the highest levels of precision as well as the lowest levels of intelligence implemented in the robotic gripper system architecture.

As shown in Figure 4, two classes of sensors are controlled by the sensor coordinator: force/torque sensors and proximity sensors. In the developed system, JR3 force-torque sensors, which belong to the semiconductor strain gauge type of sensors, are used. The force sensor is used in guarded mode to monitor the amount of force exerted by the gripper, to prevent damage to the gripper and mounted sensors due to collisions arising from misperceptions of the height of the target object.
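The guarded mode described above can be approximated by a simple monitoring loop: an approach move proceeds until the measured contact force exceeds a safety threshold, at which point the motion is halted to protect the gripper and its sensors. The threshold value, sensor-reading callable and stop routine below are hypothetical placeholders, not the JR3 or Adept interfaces.

```python
import itertools
import time

FORCE_LIMIT_N = 15.0   # assumed safety threshold (newtons); not a value from the paper

def guarded_approach(read_force_z, stop_motion, poll_period_s=0.002):
    """Poll the force sensor during an approach move; halt on contact.

    read_force_z: callable returning the current force along the approach axis.
    stop_motion:  callable that halts the robot move.
    """
    while True:
        force = abs(read_force_z())
        if force > FORCE_LIMIT_N:
            stop_motion()
            return force               # caller decides how to recover
        time.sleep(poll_period_s)

if __name__ == "__main__":
    # Simulated sensor: force ramps up as the gripper contacts the table.
    readings = itertools.chain([0.0, 1.2, 3.5, 9.8, 18.4], itertools.repeat(20.0))
    contact = guarded_approach(lambda: next(readings), lambda: print("motion halted"))
    print(f"stopped at {contact:.1f} N")
```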


Figure 4. The sensor system module.

Four types of proximity sensors have been integrated with the developed robotic gripper system. An array of photo-optic proximity sensors utilizes edge detection to position the gripper to match the pose (position and orientation) of the target object. Photo-electric proximity sensors are used to gauge a distance suitable for activation of suction through the gripper.

The photo-optic sensor employed is a diffuse-reflective photo-optic proximity sensor. Based on a manually adjusted sensitivity setting and a measurement of the level of diffusion of the diffracted light wave, the presence or absence of an object is determined. A binary signal line conveys an on/off signal to the BIO board of the robot controller. This signal is interpreted as object detected or object not detected.

The photo-electric sensor is also a diffuse-reflective proximity sensor. Again, a red LED generates the signal beam. With the photo-electric sensor, the sensing tip transduces the received LED beam to an electric signal. This electric signal is then propagated back to the amplifier for interpretation. This conversion process at the sensor tip introduces some noise, making the signal slightly less accurate than the photo-optic realization of the proximity switch.

A capacitive proximity sensor mounted directly on the bottom plate of the gripper is used for detection of fabric presence, specifically for the purpose of activating suction. The capacitive sensor utilized in this project has a front detachable remote sensing plate. This feature makes feasible the utilization of the entire bottom surface of the gripper as a sensing plate. Currently, a single, centrally located sensing strip is being used as the remote plate. The plate, a thin sheet of conducting material, extends the area of the capacitive field generated. This increases sensing distances and also makes sensor mounting simpler and more straightforward.

Inductive proximity sensors are utilized in conjunction with the conveyor unit to determine the linear speed of the conveyor belt.


Figure 5. The suction system module.

The sensor is mounted against the end of the drive cylinder of the conveyor belt. A single notch in the end of the cylinder’s shaft allows for a different sensor state once per revolution. As the circumference of the drive cylinder is constant and easily measured, simple kinematic relations yield the desired conveyor belt speed.

The suction unit interfaces with the robot and gripper module and the sensor module, to enable suction through the gripper when the material is directly positioned under the gripper. Further, depending on the number of panels of material and the type of material, the amount of suction necessary for pickup changes. The suction module therefore needs to be controllable, so that it can generate different amounts of suction as needed.

An integral part of the suction unit, shown in Figure 5, is a 2.5 HP blower. This blower is connected via a ribbed, plastic, 1 inch diameter hose to the gripper chamber. A 3-way diverter valve allows alternation between activation and deactivation of the suction to the gripper. This valve is connected to one of the TTL binary input/output modules of the Adept controller. This robot controller signal activates or deactivates airflow to the gripper chamber.

The air flow path of the suction module stems from one port of the three-way diverter valve. This path then divides into two: one connected to the gripper chamber, while the second serves as a bleed-off line. As the bleed-off valve is opened, the air flow from the gripper is reduced, thus decreasing the pressure potential. A second stepper motor, coupled to a valve placed in the air line to the gripper, directly regulates the air flow to the gripper as this valve is closed. The motion of the bleed-off valve is inversely related to the motion of the main-line valve. As the main-line valve is shut off, the bleed-off valve is opened. Thus the total volume of air flow to the blower is maintained, yet the air flow to the gripper is fully regulated.

The conveyor unit is capable of variable speed, electronically controlled movement. It is required to stop upon the detection of the material directly under the gripper by the hand-mounted sensors. The hardware configuration is shown in Figure 6.
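A short sketch of the belt-speed measurement described above for the inductive sensor: with one notch per revolution, the time between successive sensor pulses is one revolution period, so the linear belt speed is the drive-cylinder circumference divided by that period. The cylinder diameter in the example is an arbitrary illustrative value.

```python
import math

def belt_speed_m_per_s(pulse_interval_s, drive_cylinder_diameter_m):
    """Linear belt speed from the time between inductive-sensor pulses.

    One notch in the drive cylinder gives one pulse per revolution, so the
    belt advances one cylinder circumference per pulse interval.
    """
    circumference = math.pi * drive_cylinder_diameter_m
    return circumference / pulse_interval_s

# Example with assumed numbers (not from the paper): a 10 cm drive cylinder
# producing a pulse every 0.25 s moves the belt at about 1.26 m/s.
print(belt_speed_m_per_s(pulse_interval_s=0.25, drive_cylinder_diameter_m=0.10))
```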


Figure 6. The conveyor system module.

The Frequency Modulation Driver (FMD), which drives the conveyor unit, sends a modulated alternating-current signal to a three-phase, 1 HP induction motor coupled to the conveyor drive cylinder. Further, real-time control of the conveyor belts is accomplished, which allows a user to reverse the direction of belt motion as well as vary the conveyor speed from zero to 100 percent (in increments of 1/256).

Finally, the robot and gripper unit, consisting of the Adept industrial arm and the indigenously designed gripper, is responsible for the proper positioning and orientation of the gripper on the material to be manipulated. It is further responsible for reliable manipulation of the material from a pickup station to a placedown station under the effect of suction. The material must not distort during the pickup, transfer, or placedown phases of manipulation. The robot and gripper module is also responsible for reliable, accurate and rapid operation of the robotic gripper system.

The Adept robot arm has four degrees of freedom, as can be seen from Figure 7. Two motors are located at the base, which produce horizontal link motion. The other two motors are on the forearm: one drives the lead screw which produces a translational movement along the vertical axis, while the other rotates the gripper about the vertical axis. Though transmission mechanisms are used, no gear reducers are employed in the arm design. As a consequence, the features of the direct-drive approach are mostly maintained.

The gripper, drawn in Figure 8, is a 9 inch by 12 inch flat-plate gripper which relies on pressure differential as the mechanism of pickup. The bottom plate of the gripper is perforated in a 9 × 12 hole pattern to allow air flow through the gripper. The gripper is mounted to the robot arm as an end-of-arm tool.
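The 256-level, bidirectional speed control described above lends itself to a simple command encoding: a signed speed percentage is quantized into one of 256 steps plus a direction flag before being sent to the embedded conveyor controller. The encoding below is an illustrative assumption; the actual protocol of the 8096-based controller is not documented here.

```python
def encode_conveyor_command(speed_percent):
    """Quantize a signed speed request into (direction, level).

    speed_percent: -100.0 .. +100.0, with the sign giving belt direction.
    Returns (direction, level) with level in 0..255, i.e. increments of 1/256
    of full speed. The tuple format is an assumed encoding for illustration.
    """
    if not -100.0 <= speed_percent <= 100.0:
        raise ValueError("speed must be within -100..100 percent")
    direction = "forward" if speed_percent >= 0 else "reverse"
    level = round(abs(speed_percent) / 100.0 * 255)
    return direction, level

# Example: a 50% reverse request maps to ("reverse", 128).
print(encode_conveyor_command(-50.0))
```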


Figure 7. The robot system module.

Figure 8. Isometric and bottom view of flat-plate gripper.


Figure 9. System network architecture.

3.2. SYSTEM INTERCONNECTIONS/INTERFACES

Each of the above modules is, on its own, a cutting-edge technology; however, if these devices were not interfaced together, their worth would be minimal. The physical components are implemented across several different processing units. In order for each of these to perform its correct function at the desired time, a great deal of communication must be achieved. Consider a specific case reflected in Table II. The left column of Table II lists major steps of the algorithm used in the developed control structure for tracking and manipulation of objects. The right column lists the modules involved in the respective step of the manipulation process. The modules involved refer to the Organization, Coordination, and Controller level units discussed in conjunction with the overall system architecture and diagrammed in Figure 2 (the hierarchical intelligent control architecture).

Three communication protocols are utilized within the system. The Adept controllers function with a multibus protocol as the backbone bus. The vision module, implemented within the AdeptOne controller, utilizes the multibus to communicate with the VAL controller. Similarly on the AdeptThree, the Force/Torque unit interfaces to a multibus and thus to the organization level implemented on the VAL controller of the AdeptThree. Serial communication and binary input/output protocols complete the group of protocols utilized.

As shown in Figure 9, the main body of the organization module resides within an AdeptThree robot controller. The auxiliary body of the organization module utilizes an AdeptOne robot controller as the computational tool. These two machines are connected through a serial connection employing normal serial connectivity (9600 baud, 7 data bits, etc.).
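For illustration, the sketch below opens a comparable link with the pyserial library, using the stated 9600 baud and 7 data bits; the parity, stop bits, port name and message format are assumptions, since the text leaves them unspecified.

```python
import serial  # pyserial

# Open a link matching the stated settings: 9600 baud, 7 data bits.
# Parity, stop bits, the port name and the message format are illustrative
# assumptions, not the actual Adept-to-Adept protocol.
link = serial.Serial(
    port="/dev/ttyS0",
    baudrate=9600,
    bytesize=serial.SEVENBITS,
    parity=serial.PARITY_EVEN,
    stopbits=serial.STOPBITS_ONE,
    timeout=1.0,
)

link.write(b"OBJECT 123.4 56.7 30.0\r")   # hypothetical pose message
reply = link.readline()                    # wait for the peer's acknowledgement
link.close()
```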


Table II. Manipulation algorithm and involved architectural modules

Detection and recognition of object:
    Vision Coordinator
Extract object pose and transform to world coordinates:
    Vision Coordinator
Transmit object pose to controller:
    Organizational Level
Reposition gripper to intercept object (coarse positioning):
    Organizational Level; Robot/Gripper Coordinator; Sensor Coordinator (Force Cntrl.)
Intercept object on moving conveyor:
    Conveyor Coordinator; Sensor Coordinator (Proximity Cntrl.); Organizational Level
Utilizing proximity sensors, fine tune the alignment of the gripper and the object:
    Sensor Controller (Proximity Cntrl.); Robot/Gripper Coordinator; Sensor Coordinator (Force Cntrl.)
Grasp and manipulate object:
    Suction Coordinator; Robot/Gripper Coordinator; Sensor Coordinator (Proximity Cntrl.)

Each section of the organization module is designed to operate its functions independently of the other, with the exception of information pertaining to detected objects and synchronization routines. When an individual controller, either the auxiliary or the main controller, reaches a point where synchronization is required, it suspends program action until the timing signal has been received. Program operation then continues as normal; this method achieves strict alternation of the two controllers.

The AdeptVision AGS processor board performs the image processing required by the vision module. It communicates with the cameras via standard coaxial cable. The AdeptVision board is rack mounted inside the AdeptOne robot controller chassis. Communication between the vision processing board and the AdeptOne controller (and thus to the organization level module) is achieved through the system multi-bus.

The Force Sensing Module also utilizes the system multi-bus of the Adept controller. The force processing board is mounted inside the body of the Adept controller, allowing a very close integration of the force sensor with the robot controls. Information passed from the Force Sensing Module to the organization level includes high level information, such as torques and forces exerted by the world on the gripper, and low level information, such as interrupt signals generated when excessive forces are acting on the gripper.
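The suspend-until-signal behaviour described above amounts to strict alternation between the two controllers. The sketch below mimics it with two threads and events standing in for the two Adept controllers and their serial timing signal; it is an analogy of the protocol, not the VAL implementation.

```python
# Sketch of strict alternation between a "main" and an "auxiliary" controller:
# each side suspends at a synchronization point until the other has signalled.
import threading

main_turn = threading.Event()
aux_turn = threading.Event()
main_turn.set()                     # the main controller starts first

def controller(name, my_turn, other_turn, steps):
    for step in steps:
        my_turn.wait()              # suspend until the timing signal arrives
        my_turn.clear()
        print(f"{name}: {step}")
        other_turn.set()            # hand control to the other controller

t1 = threading.Thread(target=controller,
                      args=("main", main_turn, aux_turn, ["plan task", "move robot"]))
t2 = threading.Thread(target=controller,
                      args=("aux", aux_turn, main_turn, ["report object", "track object"]))
t1.start(); t2.start(); t1.join(); t2.join()
```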


Both the conveyor controls and the suction controls operate out of separate, PC mounted, mini-controllers. These control algorithms, developed indigenously, operate on commercially available, self-sufficient controllers that rely on the PC only for high level (user) interface and other outside communication. The Adept controllers are interfaced to these controllers via a standard RS232-C, 9600 baud, serial interface.

The major interface of the Sensor Module utilizes the BIO, or binary input/output, board of the Adept controller. Through this BIO board, the Adept controller can receive and transmit logical signals to external devices with only a 2 ms delay; in this case, to and from the integrated sensors. The BIO board, as an intrinsic part of the Adept controller, allows a coupling of the signals generated through the various sensor modules to the Robot/Gripper Controller.

4. The Control Structure

As presented earlier, the control problem is to locate a target object, align the gripper above the object, and manipulate the object. To accomplish this process, a manipulation algorithm is developed. As can be seen in Figure 10, the vision system performs object recognition on the target object and then obtains the object’s position and orientation. This pose is transformed from the coordinate frame of the camera into the world coordinate frame (located at the base of the AdeptThree robot). The gripper is positioned to intercept the object. A conveyor belt transports the object from the camera to within the robot envelope. Once sensors detect the object under the gripper, an array of optical proximity sensors helps the system fine tune the relative alignment of the object and the gripper. Upon achieving perfect alignment, suction is activated and the object is manipulated as desired.

Two classes of sensors are utilized in the control system: visual and non-visual sensors. The vision system obtains information about the environment which guides, or cues, the placement of the gripper and consequently of the second set of sensors. The following two subsections discuss the role of these two sets of sensors in the implemented control system. The first discusses rough positioning of the gripper utilizing the vision system. The second details fine tuning of the gripper pose using non-visual sensors.

4.1. ROUGH POSITIONING

The primary role of the visual sub-system is to gain the approximate pose of a target object which may need to be manipulated. A Pulnix CCD camera monitors the conveyor surface for the presence of an object. While waiting for an object to appear on the conveyor surface, attentive visual control [5] is utilized to reduce the time lag between successive image grabs of the conveyor surface. In attentive visual control, the attention of an imaging device is focused on a smaller area of interest.


Figure 10. The system control algorithm.

In the implemented system, the camera window is reduced to a single line whose normal is parallel to the direction of the conveyor’s motion. The processing time within this window is consequently reduced, allowing more frames to be acquired within a period of time. This increases the reaction rate of the control system to the presence of an object, increasing the overall system performance.

When the object is entirely within the view of the full frame camera, the vision coordinator then activates the full camera window and identifies salient features of the object. These features include recognition and identification of the object, as well as location of the object’s centroid and the major axis of a best-fit ellipse.
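Transforming the measured object pose from the camera frame into the world frame at the robot base is a standard homogeneous-transform step; the sketch below assumes a calibrated, fixed camera whose pose in the world frame is known, and the calibration numbers shown are placeholders.

```python
import numpy as np

def object_pose_in_world(T_world_camera, centroid_cam_xy, major_axis_angle_cam):
    """Map an object pose measured in the camera frame into the world frame.

    T_world_camera: 3x3 planar homogeneous transform of the camera in the
                    world frame, assumed known from calibration.
    centroid_cam_xy: object centroid in camera coordinates (metres).
    major_axis_angle_cam: orientation of the best-fit ellipse major axis (rad).
    """
    p_cam = np.array([centroid_cam_xy[0], centroid_cam_xy[1], 1.0])
    p_world = T_world_camera @ p_cam
    cam_yaw = np.arctan2(T_world_camera[1, 0], T_world_camera[0, 0])
    return p_world[:2], major_axis_angle_cam + cam_yaw

# Placeholder calibration: camera frame rotated 90 degrees, offset 1.2 m and 0.4 m.
theta = np.pi / 2
T = np.array([[np.cos(theta), -np.sin(theta), 1.2],
              [np.sin(theta),  np.cos(theta), 0.4],
              [0.0,            0.0,           1.0]])
print(object_pose_in_world(T, (0.10, 0.05), 0.3))
```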


Figure 11. Dynamic selection of the primary sensors.

The centroid information, coupled with the major axis information, is passed along to the robot coordinator to be used to reposition the gripper at the appropriate position to intercept the target object. The object pose is passed to the Robot/Gripper Coordinator, which repositions the gripper, with the correct orientation, to intercept the object advancing on the conveyor surface.

Depending on the orientation of the object (and now the gripper), different sensors determine the interception of the object by the gripper. Optimally, all of the object is under the gripper when the conveyor motion is halted for fine tuning. Therefore the sensors which are furthest from the leading edge of the object should signal interception, as shown in Figure 11. The proximity sensors, positioned on the four corners of the gripper, are used to detect the presence of the object. Based on the object’s orientation, two of the four sensors will be selected as primary sensors. The primary sensors will signal the state where a majority of the object is under the gripper, allowing fine tuning to begin.

4.2. FINE POSITIONING

Six optical proximity sensors are utilized to fine tune the relative position of the gripper and the target object. Each of these sensors can determine only whether the object is located directly under the point of the sensor. The fusion of the information gathered from these “point sensors”, combined with knowledge of the previous state of the gripper, helps to accomplish the alignment task.

The next two subsections focus on the implementation of the “fine tuning” of the relative alignment of the gripper and the object. First, the method of modeling the information gathered from the sensors is presented. An accurate model of the information gained through the sensors is vital to the system as it allows the control structure to track the relative position of the object and the gripper. This information is then passed on to a sensor information processing unit, the sensor coordinator (Figure 2), which, based on the previous state of the system and the newly gained state, determines the next motion taken by the robot control unit to complete the alignment process.


Modeling sensor information. Four of the six point sensors used in fine tuning are located on the four corners of the gripper. The remaining two sensors are symmetrically offset from the center of the bottom plate of the gripper. Two measures of information are associated with the perimeter sensors. The first is a simple, unique numerical weight for identification purposes. The second measure associated with the perimeter sensors is a unit vector corresponding to the sensor’s position relative to the gripper coordinate frame. Figure 12 shows the gripper together with the coordinate frame and the position of the six fine tuning sensors.

The unit vector of an active sensor is assigned as shown in the figure. An inactive sensor (one that does not sense the object) is assigned a vector of magnitude zero. The vectors of the individual sensors are then combined, and the result is the system motion vector. The motion vector is utilized in sensor processing to determine the direction and distance of movements. Each active sensor also contributes its unique numerical weight to the status vector, a two dimensional vector which represents the current status of the sensors. The first dimension of the status vector corresponds to the sum of the unique weights of the active sensors. The second dimension indicates the number of perimeter sensors active. Table III shows each of the six fine tuning sensors together with their contribution to the motion vector and the status vector.

Sensor processing models. Proper and accurate registration of the sensor information allows sensor processing to be accomplished with a minimum of overhead. During the fine tuning task, the objective of sensor processing is to align the gripper and the target object. This alignment is accomplished with one of two types of motion taken by the robot control unit to fine tune the pose (position and orientation) of the gripper: a translation (vector motion) or a rotation.

Figure 12. Gripper coordinate frame and relative sensor positioning.


Table III. Sensors involved in fine tuning and their respective motion vector and status vector components

Sensor   Weight   Motion vector Mi   Mj
F1        1       +1                 +1
F2        2       −1                 +1
F3        4       +1                 −1
F4        8       −1                 −1
P1       16       –                  –
P2       32       –                  –

When the motion algorithm calls for the calculation of a new move, it looks to the status vector, S. If the number of active perimeter sensors is even (Sn MOD 2 == 0), then a vector motion is taken; otherwise a rotation is called for.

When either a rotation or a vector motion is initiated, the actual move is accomplished in a series of small moves. Between, and during, each of the moves, the sensor controller monitors the state of the sensors. These moves are halted in one of two ways: either the move reaches the specified end point, or the state of the sensors changes. When a sensor changes its logical state, S (and possibly M) changes, and the current motion is halted.

The motion vector (M) provides a direction of motion within the coordinate frame of the gripper. Therefore, the first thing that must be achieved when a translation motion is called for is to transform the motion vector into the world/robot coordinate frame. After the world frame translation vector W is calculated, a robot move is initiated to a position shifted by W from the current location. Immediately after the initiation of the move, the status of the sensors is monitored for changes so that the control system can react accordingly. A rotational correction of the gripper position is done with constant degree increments. The direction of the rotation, however, is determined by the current motion vector: a modified arctangent of the motion vector provides the direction of rotation.

The motion control algorithm implemented attempts to position sensors above the target object and then “lock” those sensors in (i.e., keep the sensors detecting the object). When new sensors are locked in, a favorable change in the system state occurs, and the current system state vectors, the motion vector and the status vector, become the new “locked in” state.
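The registration and move-selection scheme above can be captured in a few lines: each active sensor contributes its weight to the status vector and its unit vector to the motion vector, the number of active perimeter sensors is counted, and the parity of that count selects a translation or a rotation, with the rotation direction given by the arctangent of the motion vector. The weights and unit vectors follow Table III; the helper functions themselves are assumptions sketched only for illustration.

```python
import math

# Per-sensor weight and unit vector in the gripper frame, as in Table III.
# P1/P2 (the two central sensors) carry weights but no motion-vector component.
SENSORS = {
    "F1": (1,  ( 1,  1)),
    "F2": (2,  (-1,  1)),
    "F3": (4,  ( 1, -1)),
    "F4": (8,  (-1, -1)),
    "P1": (16, ( 0,  0)),
    "P2": (32, ( 0,  0)),
}
PERIMETER = {"F1", "F2", "F3", "F4"}

def read_state(active):
    """Build the status vector S = (Sw, Sn) and motion vector M = (Mi, Mj)."""
    sw = sum(SENSORS[name][0] for name in active)
    sn = sum(1 for name in active if name in PERIMETER)
    mi = sum(SENSORS[name][1][0] for name in active)
    mj = sum(SENSORS[name][1][1] for name in active)
    return (sw, sn), (mi, mj)

def choose_move(status, motion):
    """Translation when an even number of perimeter sensors is active, else rotation."""
    _, sn = status
    if sn % 2 == 0:
        return ("translate", motion)                      # move along M (gripper frame)
    return ("rotate", math.atan2(motion[1], motion[0]))   # rotation direction from M

S, M = read_state({"F1", "F3"})   # example: the two right-hand corner sensors active
print(S, M, choose_move(S, M))
```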


Table IV. Motion control state table

Compare Sw and Slw   Compare Sn and Sln   System state   System response
>                    >                    1              Update Sl; calculate a new move
>                    <                    2              Do opposite of last move
>                    =                    3              a) Update Sl and calculate a new move; b) do opposite of last move
<                    >                    4              Do opposite of last move
<                    <                    5              a) Do negative direction, same as last move; b) do opposite of last move
<                    =                    6              Calculate a new move
=                    Sn > 2               7              Positioning complete
=                    Sn = 0               0              Execute special handler

After a motion is taken by the system, the current system vectors are compared to the last locked vectors so that the next move can be determined. Table IV is the state table of the motion control system, where Sl is the locked vector and S is the current status vector. The objective of the sequence of moves generated by this state table is to create one of two states. Primarily, three perimeter sensors detecting the object (Sn > 2) are required to determine that the object is lined up perfectly and that the motion algorithm can exit. The other desirable condition is to add a new sensor to the list of sensors detecting the object (Sw > Slw with Sn > Sln). This happens in two actual system states, state 1 and state 3a. At these points in the control flow, the locking vector Sl is updated, indicating that a desirable state change has occurred. Any other system state indicates that one or more sensors has lost “contact” with the object. A special system state exists when no perimeter sensors sense the object. This error condition causes a special exception handler to be initiated, which adjusts the pose of the gripper so that the singularity is no longer present and so that the normal course of motion can continue.

Based on the internal system state, one of five system responses is called for:
1. Positioning complete
2. Calculate a new move
3. Do negative direction, same as last move
4. Do opposite of last move
5. Execute special handler

If a favorable state change has occurred, the system calls for the calculation of a new move. The type of motion then taken is determined as described earlier.
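One compact reading of Table IV is as a lookup from the two comparisons, with the special conditions on Sn checked first, to a system state and response. The sketch below assumes the row assignments of the table as reconstructed above and is only illustrative.

```python
def compare(a, b):
    return ">" if a > b else "<" if a < b else "="

def classify(S, S_locked):
    """Map the current and locked status vectors to (state, response) per Table IV.

    S and S_locked are (Sw, Sn) tuples. Row assignments follow the table as
    reconstructed above; the special Sn conditions are checked first.
    """
    sw, sn = S
    slw, sln = S_locked
    if sn > 2:
        return 7, "positioning complete"
    if sn == 0:
        return 0, "execute special handler"
    table = {
        (">", ">"): (1, "update Sl; calculate a new move"),
        (">", "<"): (2, "do opposite of last move"),
        (">", "="): (3, "update Sl and calculate a new move, or do opposite of last move"),
        ("<", ">"): (4, "do opposite of last move"),
        ("<", "<"): (5, "do negative direction of last move, or do opposite of last move"),
        ("<", "="): (6, "calculate a new move"),
    }
    return table.get((compare(sw, slw), compare(sn, sln)), (None, "no table entry"))

# Example: a new perimeter sensor locked in (Sw and Sn both increased) -> state 1.
print(classify(S=(5, 2), S_locked=(1, 1)))
```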


When an “opposite last move” or a “same as last move” determination of the next motion is called for, the motion type of the last move determines which move is taken next. When one of these types of motion calculation is called for, the state vectors (the motion vector and the status vector) are not updated; the system simply examines the last move taken and reverses it or repeats it, respectively.

The exception handler routine is an important aspect of the control scheme. This zero condition handler handles the special cases when Sn = 0 (system state 0 from Table IV). When the condition arises that the exception handler must be called, there is no guiding information from the current state of the system. As zero perimeter sensors are active (thus a zero condition exists), the system can get no approximation of where the object is; nor can it get an indication of where to move next. Due to the precarious position of the preceding pose of the gripper (the position immediately before the move resulting in Sn = 0), a singularity exists. Both a rotation and a vector motion, from this position and orientation of the gripper, will end up back in a zero condition a majority of the time. The gripper must back up to a pose where two conditions are present. First, the gripper cannot be in another zero condition. Second, it must be possible to approach the same approximate position as the zero condition, except with a slightly different pose, so that the singularity does not reoccur.

5. A Case Study

The case study involves integration of the implemented control architecture with a robotic gripper developed to manipulate limp materials.

5.1. PROBLEM FRAMEWORK

One of the unsolved problems in the apparel industry is that of automated limp material handling. The current apparel manufacturing process begins as the material is spread and separated into plies. The material is then transferred to a table where it is cut, either manually or with an automated CNC cutter. The fabric is then transferred to the sewing machines where the manufacturing process is completed. Due to the limp nature of fabric, conventional industrial robotic grippers deform or distort the fabric, making automated pick and place routines neither accurate nor reliable [6]. For a discussion of existing grippers and systems developed specifically for limp material handling, the reader is referred to [4, 9, 10, 15, 20].

It is important for a robotic system working within semi- or unstructured environments to be capable of autonomous and intelligent behavior, at least to some extent. The greater the uncertainty in the environment, the greater the necessity for the system to be autonomous and intelligent.

The Robotics and Automation Laboratory at the University of Southwestern Louisiana has developed a series of robotic grippers which address automated limp material handling.


There is now a need to integrate multiple sensory information, implementing a sensor-based control structure for the robotic system so that it can be integrated into a manufacturing environment. This case study addresses the need for the development of the sensor-based control system to accurately locate and retrieve desired fabric pieces, thus integrating sensors into the existing gripper control structure [11–14].

The developed gripper system is tested when it is integrated within a static environment and a dynamic environment. A static (structured) environment is fully defined: the information regarding the pose of the object is known a priori; the object is always positioned, either manually or automatically, where it is supposed to be; and the location of the drop-off point is fixed. In this environment, the robot moves to the predefined position and enables the gripper to perform the operation of manipulation. On the other hand, a dynamic environment is semi-structured or unstructured. In this situation, the robot is required to learn where the object is located and then perform the operation of manipulation.

5.2. TESTING WITHIN A STATIC ENVIRONMENT

The process of testing within a static environment involves the following: A panel of fabric, cut to the dimensions of the gripper, is placed on a cutting table block located and oriented at a fixed position, and thus known to the robot at all times. The fabric panel is always loaded, either manually or automatically, on the cutting table block in the appropriate position and orientation. The gripper is required to pick the panel up from the cutting table block located on one of the conveyors and place it at a designated “place down” spot on the other conveyor.

Results when using the AdeptThree robot indicate a system reliability of 100% when operating at robot speeds under 80–90% of the maximum speed capacity. Fabric manipulation rates of an average of 15 panels per minute have been accomplished using the AdeptThree robot at 80% of its maximum capacity. Using the AdeptOne robot at 80% of its maximum speed, fabric manipulation rates of about 22 panels/minute have been accomplished at a system reliability of 99.6–100%.

Table V. Comparison of manipulation rates and reliability of AdeptOne and AdeptThree robots under a static environment

Robot speed   AdeptOne robot                 AdeptThree robot
              Panels/minute   Reliability    Panels/minute   Reliability
50%           16              100%           12              100%
60%           18              100%           13              100%
70%           20              100%           14              100%
80%           22              99.6%          15              100%


5.3. TESTING WITHIN A DYNAMIC ENVIRONMENT

While testing the gripper in a static environment, it is important to remember that the results obtained represent ideal conditions: no noise affects the operation results. Thus, only defects or imprecisions within the components or interfaces, or physical limitations of the components, will hinder the results. The set of results obtained from experimentation within a static environment therefore provides an upper bound on the performance of the system. A more applicable set of results can be obtained by placing the system in a dynamic environment. An environment is defined to be dynamic if there is limited, or no, a priori information about the environment within which the system is required to operate. Several visual and non-visual sensors are used to impart sensory capabilities to the robotic gripper system, in order for it to perform adequately within a semi-structured environment.

Three sets of experimental tests are performed using the developed system. The first set of experiments presented in the sequel pertains to the situation when only the visual sensors are used to position the gripper. A second set of experiments is performed during which the vision sensors (cameras) are used for rough robot positioning, and the gripper-mounted proximity sensors are activated to aid in fine positioning of the gripper for perfect alignment with the object. The vision sensors cue the proximity sensors by providing a rough pose estimate, which the proximity sensors then refine during fine positioning of the gripper. The third set of tests performed utilizes both visual and non-visual sensors, yet one of the sensors which perform fine tuning provides no readings (i.e., a faulty sensor condition). The set of experiments performed with and without establishing the hierarchical relationship between the sensors provides justification for the implementation of a hierarchical control system, as evidenced by the results presented in the sequel. The results of the third test set provide justification for expansion of the existing sensors into a larger array of sensors, as discussed in the final section.

The physical layout and configuration of the developed system favors the use of the AdeptThree robot for the purposes of experimentation. As a consequence, the experimental results presented in the following pertain to the situation when the AdeptThree robot is used at varying speeds of 50%, 80% and 100% of its maximum capability for manipulation of fabric panels.

The process of testing and performance evaluation of the developed system in a semi-structured apparel manufacturing environment is based on the following experimental setup: Panels of fabric, cut to the dimensions of the gripper, are loaded randomly on one of the two conveyors, at varying positions and orientations, one pre-separated panel at a time. The area vision cameras, associated with the AdeptOne controller, detect the presence of and identify the fabric panel. The gripper integrated with the robot is roughly positioned on top of the object, using the pose information.


Table VI. Results of timing analysis using only vision sensors for manipulation Speedbelt

tvis

ttrans

trobot (sec) Robot speed 50% 80% 100%

ttotal (sec) Robot speed 50% 80% 100%

30% 40% 50% 60% 70% 80%

0.24 0.25 0.27 0.30 0.31 0.32

2.31 2.31 2.31 2.31 2.31 2.31

2.41 2.41 2.41 2.41 2.41 2.41

5.46 5.47 5.49 5.52 5.53 5.54

1.74 1.74 1.74 1.74 1.74 1.74

1.56 1.56 1.56 1.56 1.56 1.56

4.79 4.80 4.82 4.85 4.86 4.87

4.61 4.62 4.65 4.68 4.68 4.69

The gripper-mounted non-visual sensors are used to accurately align the edges of the gripper with the edges of the fabric panel. Once the gripper is perfectly aligned with the fabric, suction is enabled through the gripper, to be turned off later when the robot reaches a predefined placedown area.

Experiments using only vision sensors. The set of experimental results presented here pertains to the situation where sensory information from only the vision system is used for pose estimation of the object on the moving conveyor. The total time required to perform one “pick and place” (manipulation) operation is given as follows:

ttotal = tvis + ttrans + trobot,

where
ttotal = total time for one manipulation operation,
tvis = time required for vision processing by the AdeptOne vision controller,
ttrans = data transfer time from the AdeptOne to the AdeptThree robot arm,
trobot = sum of the discrete time values needed for the movement of the robot arm.

The vision processing time varies with the speed of the conveyor. When the conveyor is moving slowly, the vision system is able to get a better picture of the object, as compared to the case when the belt is moving at faster speeds. Thus, the vision system can perform image processing operations faster because of higher clarity in the features of the image. This is evident from the vision processing times tvis (in seconds) presented in Table VI. As mentioned earlier, the vision processing is performed by the AdeptOne controller and the data is transferred to the AdeptThree robot, which entails a constant overhead, the data transfer time ttrans (in seconds) shown in the table.

It can be seen from the above results that fabric manipulation rates of up to 10–13 fabric panels per minute can be obtained, with the belt moving between 30–80% of its rated speed and the AdeptThree robot moving at 50–100% of its maximum rated speed. It is to be noted that if the gripper were to be integrated with the faster AdeptOne robot, much higher manipulation rates could be accomplished.
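As a quick check of the quoted manipulation rates, the number of panels handled per minute is simply 60 divided by ttotal; the two cycle times below are taken from Table VI and bracket the 10–13 panels-per-minute range stated above.

```python
def panels_per_minute(t_total_s):
    """Manipulation rate implied by a single pick-and-place cycle time."""
    return 60.0 / t_total_s

# ttotal values from Table VI: the fastest (4.61 s) and slowest (5.54 s) cases
# give roughly 13.0 and 10.8 panels per minute, respectively.
print(round(panels_per_minute(4.61), 1))
print(round(panels_per_minute(5.54), 1))
```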


Table VII. Results of timing analysis using visual and non-visual sensors for manipulation Speedbelt

30% 40% 50% 60% 70% 80%

tvis

0.24 0.25 0.27 0.30 0.31 0.32

ttrans

2.31 2.31 2.31 2.31 2.31 2.31

ttune

0.8 1.2 1.3 2.5 2.9 3.0

trobot (sec) Robot speed

ttotal (sec) Robot speed

50%

80%

100%

50%

80%

100%

1.0 1.0 1.0 1.0 1.0 1.0

0.8 0.8 0.8 0.8 0.8 0.8

0.7 0.7 0.7 0.7 0.7 0.7

4.30 4.76 4.88 6.11 6.52 6.63

4.10 4.56 4.66 5.91 6.32 6.43

4.05 4.46 4.56 5.81 6.22 6.33

This is because the vision processing is performed by the AdeptOne vision controller, so the data transfer to the robot arm is through the system multibus rather than a serial communication line. This drastically reduces the data transfer time ttrans. Further, being lighter than the AdeptThree robot, the AdeptOne robot arm moves at a much faster rate, decreasing the factor trobot substantially.

Experiments using the complete hierarchical sensory control system. In what follows, the results of the experimental analysis performed when using visual and non-visual proximity sensors for robot positioning are presented and discussed. Within this framework, the developed system can be considered as utilizing and following the hierarchy established by the developed hierarchical sensor-based control system discussed in the earlier sections. In this case, the total time required to perform one “pick and place” (manipulation) operation is given as follows:

ttotal = tvis + ttrans + ttune + trobot,

where all the terms except ttune are as explained earlier. The term ttune is defined as the time required for fine tuning the pose of the gripper for perfect alignment with the object.

It is seen that the total time required for object manipulation increases as the speed of the conveyor increases. The reasons for this are twofold: firstly, the vision processing time increases with the speed of the conveyor, for the reasons explained earlier; secondly, as the speed of the conveyor increases, its overshoot increases, and as a consequence the amount of movement required during the fine-tuning phase increases, increasing the fine-tuning time. It is seen from the above results that there is a distinct cut-off point, 50% belt speed, above which the performance of the system deteriorates.


This conclusion can be corroborated from the results obtained when using just the vision sensors. At slower speeds of the belt, fabric manipulation rates of up to 15 panels per minute are possible. Even the worst case manipulation rate of 9–10 panels/minute is within the required range specified as a system requirement. At slower belt speeds, the rates of manipulation range between 12–15 panels of material per minute, which exceeds the expectations from the system.

Testing system fault tolerance. In order to test the robustness of the developed system, an experimental set is established to test the full (visual and non-visual) control system, with the exception of one or more faulty sensors. To accomplish this, one sensor tip is isolated from its amplifier, effectively disabling the sensor. Thus a faulty sensor is modeled as one in which no reading is given; the sensor is “stuck at zero”. Two types of sensors greatly affect the operation of fine tuning: the photo-optic and the electro-optic proximity sensors. If more than one photo-optic (perimeter) sensor is disabled, fine tuning can never be achieved, because the system requires object detection by at least three of these sensors in order to claim perfect alignment. Therefore, one photo-optic (perimeter) and one electro-optic (central) sensor are disabled. As discussed earlier, the two perimeter sensors which serve to halt conveyor belt motion are labeled as the primary sensors for purposes of fine tuning. Either one of the primary sensors or one of the non-primary sensors can be disabled to set up the test set.

System performance under these fault conditions is as expected. There is very little sensor redundancy at this level, thus each sensor’s reading is essential. Each sensor, when working, directs the gripper to move towards one of the four coordinate quadrants of a Cartesian coordinate frame. If one sensor is disabled, the gripper will never move towards that quadrant. Furthermore, absence of a sensor’s signal will cause the calculation of a new move to reduce the number of perimeter sensors by 1 (if the faulty sensor should be detecting the object), causing the opposite type of corrective motion to be taken: the system will take a vector motion when it should be rotating, and vice versa.

With a non-primary sensor disabled, the gripper picked the object 20% of the time. With a primary sensor disabled, only 10% of the time was the pick operation successful. In addition, with a primary sensor disabled, 30% of the time the fabric was never even detected by the gripper (i.e., the conveyor belt was never stopped so that fine tuning and pickup could happen). This test set was performed at the lowest conveyor speed (30%) and robot speed (50%) utilized in the previous test sets. Even at these low speeds, in the roughly 15% of trials where the alignment operation was successful, operation times were more than twice those experienced with a full sensor suite.

The results serve as justification for the presence of each sensor in the system.


Figure 13. Reliability and timing values at various conveyor speeds.

It also points to a need for redundant sensors, or larger sensor arrays, to facilitate fine tuning. Additional sensors could decrease the weight of a particular sensor when determining the probability of success of a particular operation.

6. Conclusions

This discussion has focused on a hierarchical sensor-based control structure. A three-level hierarchical structure implements the actual structure of the system, while a modified temporal phase template serves as the paradigm for sensor integration. A manipulation algorithm has been developed and explained which relies upon the integration of visual and non-visual sensor arrays to define the environment. In cueing the positioning of the gripper-mounted proximity sensors, the vision sub-system fulfills a vital need within the system implementation. Once the object reaches the gripper, the vision system has provided a rough position of the target object, so that only minor adjustments to the alignment of the gripper and the object are needed. A method of registering and modeling the information of the optical “point sensors” is also explained, along with its role in the processing of the sensor information. This processed information is the basis for the motions that fine tune the alignment of the gripper and target object.

From the presented case study, it can be concluded that the robotic gripper system integrated with the implemented control architecture performs at its fullest potential, with highest performance, when using both visual and non-visual proximity sensors to assist the process of object manipulation.


Figure 13 displays two graphs comparing the operation of the system utilizing vision sensors only and using the complete array of sensors. It can be seen that while the reliability is tremendously high when using vision and proximity sensors for manipulation, the number of panels manipulated per minute decreases significantly as the speed of the belt increases. It is concluded that if the speed of the belt were to be kept below 50% of its maximum, using both vision and proximity sensors results in a highly reliable manipulation of about 13–15 panels of fabric per minute, thus exceeding the system requirement of 8–10 panels per minute.

References

1. Abidi, M. A. and Gonzalez, R. C.: Data Fusion in Robotics and Machine Intelligence, Boston, 1992.
2. Allen, P. K.: A Framework for Implementing Multisensor Robotic Tasks, in: Proceedings of the ASME International Computers in Engineering Conference and Exhibition, New York, NY, 1987, pp. 303–309.
3. Asada, H. and Youcef-Toumi, K.: Direct-Drive Robots: Theory and Practice, Cambridge, Mass., 1987.
4. Biggers, K. B. et al.: Low Level Control of the Utah/MIT Dexterous Hand, in: IEEE International Conference on Robotics and Automation, 1986, pp. 61–66.
5. Clark, J. and Ferrier, N.: Control of Visual Attention in Mobile Robots, in: Proceedings of the 1989 IEEE International Conference on Robotics and Automation, 1989, pp. 826–831.
6. Czarnecki, C.: Automated Stripping: A Robotic Handling Cell for Garment Manufacture, IEEE Robotics & Automation Magazine 2(2) (1995), pp. 4–8.
7. Fu, K. S., Gonzalez, R. C., and Lee, C. S. G.: Robotics: Control, Sensing, Vision, and Intelligence, New York, 1987.
8. Hebert, T.: Sensor Based Control of a Robotic Gripper, M.Sc. Thesis, University of Southwestern Louisiana, 1996.
9. Jacobsen, S. C. et al.: Design of the Utah/MIT Dexterous Hand, in: IEEE International Conference on Robotics and Automation, 1986, pp. 1520–1532.
10. Jacobsen, S. C. et al.: Development of the Utah Artificial Arm, IEEE Transactions on Biomedical Engineering 2(1) (1982), pp. 464–481.
11. Kolluru, R.: Modeling, Design, Prototyping and Performance Evaluation of a Robotic Gripper System for Automated Limp Material Handling, Ph.D. Dissertation, University of Southwestern Louisiana, 1996.
12. Kolluru, Valavanis, Hebert, Steward, Sonnier: Design of a Robotics Gripper System for an Automated Deformable Material Manipulator, in: Proceedings of the Sixth International Symposium on Robotics and Manufacturing, Montpelier, France, 1996.
13. Kolluru, Valavanis, Steward, Sonnier: A Flat-Surface Robotic Gripper for Handling Limp Material, IEEE Robotics and Automation Magazine 2(3) (1995), pp. 19–25.
14. Kolluru, Valavanis, Steward, Sonnier: A Sensor-Based Robotic Gripper for Limp Material Handling, in: Third IEEE Mediterranean Symposium on New Directions in Control and Automation, 11–13 July 1995, pp. 68–76.
15. Luo, R. C., Lin, M., and Scherp, R. S.: The Issues and Approaches of a Robot Multisensor Integration, in: Proceedings of the IEEE International Conference on Robotics and Automation, 1987, pp. 1941–1946.
16. Mason, M. and Salisbury, K. J. Jr.: Robot Hands and the Mechanics of Manipulation, Cambridge, Mass., 1985.
17. Priebe, C. E. and Marchette, D. J.: Temporal Pattern Recognition: A Network Architecture for Multisensor Fusion, in: Proc. SPIE, Intelligent Robots and Computer Vision: Seventh in a Series, Cambridge, Mass., 1988.
18. Rodger, J. C. and Browse, R. A.: An Object-Based Representation for Multisensory Robotic Perception, in: Proc. of the Workshop on Spatial Reasoning and Multisensor Fusion, St. Charles, Illinois, 1987, pp. 13–20.


19. Ruokangas, Black, Martin, Schoenwald: Integration of Multiple Sensors to Provide Flexible Control Strategies, in: Proceedings of the 1986 IEEE International Conference on Robotics and Automation, 1986, pp. 1947–1953.
20. Salisbury, J. and Craig, J.: Articulated Hands: Force Control and Kinematic Issues, International Journal of Robotics Research (ed. G. Saridis) 1(1) (Spring 1982), pp. 25–36.
21. Schilling, R. J.: Fundamentals of Robotics: Analysis and Control, Englewood Cliffs, NJ, 1990.
22. Valavanis, K. P. and Saridis, G. N.: Intelligent Robotics Systems: Theory, Design and Applications, Boston, 1992.

