
Visual Servoing of a Parallel Robot System

A. Traslosheros¹, J. M. Sebastián¹, L. Ángel², F. Roberti³, R. Carelli³

¹ Universidad Politécnica de Madrid, DISAM, Madrid, España.
² Universidad Pontificia Bolivariana, Bucaramanga, Colombia.
³ Instituto de Automática, Universidad Nacional de San Juan, San Juan, Argentina.

Abstract – This paper describes the visual control of a parallel robot called "RoboTenis". The system has been designed and built to carry out tasks in three dimensions and in dynamic environments; it is able to interact with objects that move at up to 1 m/s. The control strategy is composed of two nested control loops: the inner loop is faster, uses the joint information and has a sample time of 0.5 ms; the outer loop is the visual servoing loop and is the subject of this study. It is based on the prediction of the object velocity, which is obtained from visual information, and its sample time is 8.3 ms. The Lyapunov stability analysis, the system delays and the saturation of the components have been taken into account.

Keywords – Parallel robot, visual control strategies, tracking, system stability.

I. INTRODUCTION

Vision systems are used more and more frequently in robotics applications. Visual information makes it possible to know the position and orientation of the objects present in the scene and to describe the environment with relative precision. Despite these advantages, the integration of visual systems in dynamic tasks still presents many unsolved problems, which motivates important research centers [1] to investigate this field; at the University of Tokyo ([2] and [3]), for example, fast tracking (up to 2 m/s) strategies in visual servoing have been developed. In order to study and implement different visual servoing strategies, the computer vision group of the UPM (Polytechnic University of Madrid) decided to design RoboTenis. The implemented controller currently makes it possible to carry out high-speed dynamic tasks. This paper describes experiments whose principal purpose is the high-velocity (up to 1 m/s) tracking of a small object (a black ping-pong ball) with three degrees of freedom. It should be mentioned that the system has been designed with capabilities beyond those strictly required here


in order to respond to future requirements. Parallel mechanisms typically offer high stiffness, low inertia and a large payload capacity; their principal weaknesses are a small useful workspace and design difficulties. As shown in fig. 1, the mechanical structure of the RoboTenis system is inspired by the DELTA robot [4]. The kinematic model, the Jacobian matrix and the optimized design of the RoboTenis system have been presented in previous works [5]. The dynamic analysis and the joint controller have been presented in [6] and [7]. The dynamic model is based on Lagrange multipliers, which makes it possible to consider forearms of non-negligible inertia in the development of control strategies. Two control loops are incorporated in the system: in the joint loop a control signal is calculated every 0.5 ms, using the dynamic model, the kinematic model and a PD action; the other loop is considered external, since it is calculated every 8.33 ms, uses the visual data and is described in detail in this work.

Fig. 1. RoboTenis System

II. DESCRIPTION OF THE SYSTEM

This section describes the experimental environment, the elements that are part of the system and the functional characteristics of each element; for more information see [8].

A. Experimental Environment

The objective of the system is the tracking of a ping-pong ball along 600 mm. Image processing is conveniently simplified by using a black ball on a white background. The ball is moved by means of a thread (fig. 2), and its velocity is close to 1 m/s.

Fig. 2. Work environment

B. Vision System

The RoboTenis system has a camera located on the end effector, as fig. 2 shows. This camera location combines two important aspects: when the robot and the object are relatively far from each other, the field of view of the camera is wide but some error is introduced in the measurement of the ball position; when the ball and the robot are close to each other, the field of view is narrow but the precision of the ball position measurement increases. This property will be exploited in future tasks such as catching or hitting the ball. The main characteristics of the components are the following.

Camera. The camera is a SONY XC-HR50. Its principal characteristics are a high frame rate (one image every 8.33 ms), a resolution of 240 x 640 pixels and an exposure time of 1 ms; it has a progressive-scan CCD, a relatively small size (29 x 29 x 32 mm) and a low weight (50 g), see fig. 3.

Fig. 3. Camera on the RoboTenis system

Data Acquisition Card. The data acquisition system is based on a Matrox Meteor 2-MC/4 card, which is responsible for the visual data and is used in double-buffer mode: the current image is acquired while the previous image is being processed.

Image Processing. Once the images are acquired, the vision system segments the ball on the white background. The centroid and the diameter of the ball are calculated with a sub-pixel precision method. The 3D position of the ball is then computed using a preliminary camera calibration. The control system requires the velocity of the ball, which is estimated from the ball position by means of a Kalman filter ([9] and [10]).

C. System of Positioning Control

The positioning system is based on a DSPACE 1103 card, which is responsible for the generation of the trajectories and the calculation of the kinematic models, the dynamic models and the control algorithms. The motion system is composed of AC brushless servomotors, AC drives and gearboxes (for more information see [8]).
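The ball velocity estimation mentioned in Section II.B is only referenced to [9] and [10]; the following sketch shows one minimal way such an estimator could look for a single axis, assuming a constant-velocity Kalman model. Only the 8.33 ms sample period comes from the paper; the noise levels are illustrative assumptions.

```python
import numpy as np

# Minimal constant-velocity Kalman filter for one axis of the ball position.
T = 8.33e-3                                  # visual sample period [s] (from the paper)
F = np.array([[1.0, T], [0.0, 1.0]])         # state transition: [position, velocity]
H = np.array([[1.0, 0.0]])                   # only the position is measured
q, r = 1e-4, (8e-3) ** 2                     # process / measurement noise (assumed)
Q = q * np.array([[T**4 / 4, T**3 / 2], [T**3 / 2, T**2]])
R = np.array([[r]])

x = np.zeros((2, 1))                         # initial state: at rest at the origin
P = np.eye(2)

def kalman_step(z):
    """Update the [position, velocity] estimate with one position measurement z [m]."""
    global x, P
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Correct
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x = x_pred + K @ (np.array([[z]]) - H @ x_pred)
    P = (np.eye(2) - K @ H) @ P_pred
    return x[0, 0], x[1, 0]                  # filtered position, estimated velocity

# Example: a ball moving at 1 m/s observed with roughly 8 mm of measurement noise.
rng = np.random.default_rng(0)
for k in range(120):
    true_pos = 1.0 * k * T
    pos_est, vel_est = kalman_step(true_pos + rng.normal(0.0, 8e-3))
print(f"estimated velocity after ~1 s: {vel_est:.2f} m/s")
```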

D. Characteristics and Functions

The visual controller is conditioned by several characteristics of the system and by others inherited from the application itself; some of them are the following:
- High uncertainty in the data from the visual system. The small sample time (8.333 ms) amplifies the errors of the velocity estimation. For example, if the ball is located 600 mm from the camera, its diameter is measured as about 20 pixels; if the position estimate has an error of 0.25 pixels, the estimated distance error is approximately 8 mm and, consequently, the error of the estimated ball velocity is on the order of 1 m/s (these numbers are reproduced in the sketch after this list). Such errors produce large discontinuities, and the required control action exceeds the capacities of the robot; the implemented Kalman filter helps to mitigate this problem.
- The target velocities for the robot must be given continuously. In order to avoid high accelerations, the trajectory planner needs continuity in the estimated velocity. For example, an 8 mm error in one sample period (8.333 ms) would demand an acceleration equivalent to about 12 times gravity. The system guarantees smooth tracking by means of a trajectory planner that is specially designed for reference shifting, so the target point can be changed at any moment.
- Some additional limitations have to be considered. The delay between the visual acquisition and the joint motion is estimated as two visual servoing sample periods (16.66 ms); it is due mainly to image capture, image transmission and image processing. The maximum velocity of the RoboTenis end effector is 2.5 m/s.
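As a rough check, the error-propagation figures quoted above can be reproduced with a short script. The 40 mm ball diameter is an assumption (standard ping-pong ball); the 600 mm distance, 20-pixel apparent diameter, 0.25-pixel error and 8.333 ms period are the values given in the text.

```python
# Back-of-the-envelope error propagation for the RoboTenis vision loop.
T = 8.333e-3                    # visual sample period [s]
distance_mm = 600.0
diameter_px = 20.0              # apparent ball diameter at 600 mm
pixel_error = 0.25              # sub-pixel estimation error

# Depth from apparent size: Z = f * D / d, so |dZ| ~ (Z / d) * |dd|
depth_error_mm = distance_mm / diameter_px * pixel_error
print(f"depth error           ~ {depth_error_mm:.1f} mm")    # ~7.5 mm (about 8 mm)

# Differencing two such measurements one sample apart gives a velocity error:
velocity_error = depth_error_mm / 1000.0 / T
print(f"velocity error        ~ {velocity_error:.2f} m/s")    # ~0.9 m/s (about 1 m/s)

# Correcting an 8 mm reference jump within a single sample period needs
# roughly a ~ dx / T^2, i.e. about 12 times gravity, hence the trajectory planner.
accel_g = (8e-3 / T**2) / 9.81
print(f"required acceleration ~ {accel_g:.0f} g")             # ~12 g
```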

III. VISUAL CONTROL OF THE SYSTEM

The coordinate systems considered are shown in fig. 4: \Sigma_w, \Sigma_e and \Sigma_c represent the world coordinate system, the robot end-effector coordinate system and the camera coordinate system, respectively. The following notation is also used: {}^{c}p_b is the position of the ball in the camera coordinate system and {}^{w}p_e is the position of the robot end effector in the world coordinate system; {}^{w}p_e is obtained from the direct kinematic model. The rotation matrices {}^{w}R_e, {}^{w}R_c and {}^{e}R_c are constant and known; {}^{e}T_c is obtained from the camera calibration.

Fig. 4. Coordinate systems considered.

Although several alternatives exist [11], the selected controller is based on the ball position. The control scheme is shown in fig. 5: the error function is the difference between the reference position {}^{c}p_b^{*}(k) and the measured position {}^{c}p_b(k); this difference must remain constant because the goal is to track the ball. Once the error is obtained, the controller calculates the velocity of the end effector; the joint motions are then obtained by means of the trajectory planner and the Jacobian matrix. The index k denotes the sample instant considered.

Fig. 5. The visual servoing scheme uses the ball position.

A. Visual Servoing Model

As can be observed in fig. 5, the position error is

e(k) = {}^{c}p_b^{*} - {}^{c}p_b(k)    (1)

If the ball position is referenced in the camera coordinate system, it can be expressed as

{}^{c}p_b(k) = {}^{c}R_w ({}^{w}p_b(k) - {}^{w}p_c(k))    (2)

Substituting (2) into (1) we obtain

e(k) = {}^{c}p_b^{*} - {}^{c}R_w ({}^{w}p_b(k) - {}^{w}p_c(k))    (3)

In order to guarantee that the error decreases exponentially, we impose

\dot{e}(k) = -\lambda e(k), with \lambda > 0    (4)

Differentiating (2) and assuming that {}^{c}R_w is constant, we obtain

\dot{e}(k) = -{}^{c}R_w ({}^{w}v_b(k) - {}^{w}v_c(k))    (5)

Substituting (3) and (5) into (4) yields

{}^{w}v_c(k) = {}^{w}v_b(k) - \lambda {}^{c}R_w^{T} ({}^{c}p_b^{*} - {}^{c}p_b(k))    (6)

where {}^{w}v_c(k) and {}^{w}v_b(k) represent the camera and ball velocities, respectively. Since {}^{w}v_e(k) = {}^{w}v_c(k), the control law can be expressed as

u(k) = {}^{w}v_b(k) - \lambda {}^{c}R_w^{T} [{}^{c}p_b^{*} - {}^{c}p_b(k)]    (7)

Equation (7) is composed of two components: one that predicts the ball motion, {}^{w}v_b(k), and one that contains the tracking error, [{}^{c}p_b^{*} - {}^{c}p_b(k)]. The ideal control law (7) requires perfect knowledge of all its components, which is not possible; a more realistic approach consists in generalizing the previous control law as

u(k) = {}^{w}\hat{v}_b(k) - \lambda {}^{c}R_w^{T} [{}^{c}p_b^{*} - {}^{c}\hat{p}_b(k)]    (8)

As shown in (8), the estimated variables, denoted by carets, are used instead of the true terms. The resulting basic controller is shown in fig. 6.


Fig. 6. Visual servoing control system architecture.
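For readers who prefer code to block diagrams, the sketch below evaluates the control law (8) for one sample. The rotation matrix, reference distance and gain are placeholder values; in the real system the estimates come from the Kalman filter and the camera calibration described in Section II.

```python
import numpy as np

def visual_servo_command(v_ball_w_hat, p_ball_c_hat, p_ball_c_ref, R_cw, lam):
    """Control law (8): u(k) = w_v̂_b(k) - λ · cR_w^T · [c_p_b* - c_p̂_b(k)].

    v_ball_w_hat : estimated ball velocity in the world frame [m/s]
    p_ball_c_hat : estimated ball position in the camera frame [m]
    p_ball_c_ref : desired (constant) ball position in the camera frame [m]
    R_cw         : rotation matrix from the world frame to the camera frame
    lam          : control gain λ > 0
    Returns the commanded end-effector (camera) velocity in the world frame.
    """
    tracking_error = p_ball_c_ref - p_ball_c_hat
    return v_ball_w_hat - lam * R_cw.T @ tracking_error

# Example with placeholder values (not taken from the paper):
u = visual_servo_command(
    v_ball_w_hat=np.array([0.8, 0.0, 0.0]),       # ball moving at 0.8 m/s along x
    p_ball_c_hat=np.array([0.0, 0.0, 0.62]),      # ball seen 620 mm in front of the camera
    p_ball_c_ref=np.array([0.0, 0.0, 0.60]),      # keep it at 600 mm
    R_cw=np.eye(3),                               # assumed camera/world alignment
    lam=40.0,
)
print(u)   # end-effector velocity command fed to the trajectory planner
```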

B. Adjusting the λ Parameter

A fundamental aspect of the performance of the visual servoing system is the adjustment of λ: λ is chosen so that the control objective is reached in the smallest possible number of sample periods while respecting the limitations of the system. The algorithm is based on the future positions of the camera and the ball, which allows the robot to reach the control objective ({}^{c}p_b(k) = {}^{c}p_b^{*}(k)).

The future position of the ball in the world coordinate system at instant k+n is

{}^{w}\hat{p}_b(k+n) = {}^{w}\hat{p}_b(k) + {}^{w}\hat{v}_b(k) T n    (9)

where T is the visual servoing sample period (8.333 ms). The future position of the camera in the world coordinate system at instant k+n is

{}^{w}p_c(k+n) = {}^{w}p_c(k) + {}^{w}v_c(k) T n    (10)

The control objective is to reach the target position in the shortest possible time; substituting the target position and the future ball and camera positions into (2) gives

{}^{c}p_b^{*} - {}^{c}R_w [{}^{w}\hat{p}_b(k+n) - {}^{w}p_c(k+n)] = 0    (11)

Substituting (9) and (10) into (11), we obtain

{}^{c}p_b^{*} = {}^{c}R_w [{}^{w}\hat{p}_b(k) + {}^{w}\hat{v}_b(k) T n - {}^{w}p_c(k) - {}^{w}v_c(k) T n]    (12)

Since {}^{w}v_e(k) = {}^{w}v_c(k), solving (12) with (2) taken into account gives the control law

u(k) = {}^{w}\hat{v}_b(k) - \frac{1}{Tn} {}^{c}R_w^{T} [{}^{c}p_b^{*} - {}^{c}\hat{p}_b(k)]    (13)

Comparing (8) and (13), the λ parameter is obtained as

\lambda = \frac{1}{Tn}    (14)

Equation (14) gives a criterion for adjusting λ as a function of the number of sample periods n required to reach the control target.
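A sketch of how (13) and (14) could be turned into a gain-selection routine follows. The 2.5 m/s end-effector velocity limit is taken from Section II.D; treating the limit as a single Cartesian norm check is a simplifying assumption, since the paper evaluates the limit joint by joint and keeps the most restrictive one.

```python
import numpy as np

T = 8.333e-3        # visual servoing sample period [s]
V_MAX = 2.5         # end-effector velocity limit [m/s] (Section II.D)

def adjust_lambda(v_ball_w_hat, p_ball_c_hat, p_ball_c_ref, R_cw, n_max=50):
    """Pick the smallest horizon n, hence the largest λ = 1/(T·n) per (14),
    whose velocity command (13) stays below the end-effector limit."""
    err_w = R_cw.T @ (p_ball_c_ref - p_ball_c_hat)      # tracking error in the world frame
    for n in range(1, n_max + 1):
        u = v_ball_w_hat - err_w / (T * n)              # control law (13)
        if np.linalg.norm(u) <= V_MAX:
            return 1.0 / (T * n), n                     # λ and the horizon actually used
    return 1.0 / (T * n_max), n_max                     # fall back to the slowest gain

lam, n = adjust_lambda(
    v_ball_w_hat=np.array([0.8, 0.0, 0.0]),             # placeholder ball velocity [m/s]
    p_ball_c_hat=np.array([0.0, 0.05, 0.65]),           # placeholder measurement [m]
    p_ball_c_ref=np.array([0.0, 0.00, 0.60]),
    R_cw=np.eye(3),
)
print(f"n = {n} samples, lambda = {lam:.1f} 1/s")
```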

C. Implemented Algorithm

The visual servoing architecture proposed in fig. 6 does not consider the physical limitations of the system, such as delays and the saturation of its components. Fig. 7 shows the proposed visual control structure, which takes these limitations into account.

The term z^{-r} represents a delay of r periods with respect to the control signals. If the visual information ({}^{c}p_b(k)) has a delay of two sampling periods (r = 2) with respect to the joint information, then the future position of the ball at instant k+n can be obtained as

{}^{w}\hat{p}_b(k+n) = {}^{w}\hat{p}_b(k-r) + {}^{w}\hat{v}_b(k-r) T (n+r)    (15)

The future position of the camera is still given by (10). Using (14), λ can be adjusted for the control law by considering the following aspects:
- The desired velocity of the end effector is given by (13). In a physical system the maximum velocity must be limited; in our system the maximum velocity directly determines the minimum number of sample periods n in which the target can be reached. This number is not the same for all joints, so the value of the most restrictive joint (the largest time) is used in the calculation of λ.
- It is desirable that the velocity of the robot be continuous.

Fig. 7. Proposed visual servoing control architecture.
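The delay handling of fig. 7 amounts to extrapolating the last available, r-sample-old ball estimate n+r periods ahead, as in (15), while the camera state is propagated without visual delay, as in (10). A minimal sketch under a constant-velocity extrapolation, with placeholder values:

```python
import numpy as np

T = 8.333e-3     # visual sample period [s]
r = 2            # visual measurements arrive two sample periods late (Section II.D)

def predict_ball(p_ball_hat_delayed, v_ball_hat_delayed, n):
    """Eq. (15): extrapolate the delayed ball estimate (taken at k - r)
    forward by n + r periods so that it refers to the future instant k + n."""
    return p_ball_hat_delayed + v_ball_hat_delayed * T * (n + r)

def predict_camera(p_cam, v_cam, n):
    """Eq. (10): the camera state is available without visual delay."""
    return p_cam + v_cam * T * n

# Example with placeholder values: the ball estimate is two frames old.
p_future = predict_ball(np.array([0.10, 0.0, 0.60]), np.array([1.0, 0.0, 0.0]), n=3)
c_future = predict_camera(np.array([0.0, 0.0, 0.0]), np.array([0.5, 0.0, 0.0]), n=3)
print(p_future - c_future)     # relative position used in (11)-(13)
```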

D. System Stability

By means of a Lyapunov analysis it is possible to prove the stability of the system: under ideal conditions the error converges to zero; otherwise the error remains bounded under the influence of the estimation errors and the unmodeled dynamics. For the stability analysis we consider (5) and (7) and obtain the closed-loop expression

\dot{e}(k) + \lambda e(k) = 0    (16)

We choose the Lyapunov function

V = \frac{1}{2} e^{T}(k) e(k)    (17)

\dot{V} = e^{T}(k) \dot{e}(k) = -\lambda e^{T}(k) e(k) < 0    (18)

Equation (18) implies that e(k) \to 0 as k \to \infty. However, if {}^{w}v_e \equiv u does not hold, then

{}^{w}\hat{v}_e(k) = u(k) + \rho(k)    (19)

where \rho(k) is the velocity error vector produced by poor velocity estimates and unmodeled dynamics. Considering (19), (16) can be written as

\dot{e}(k) + \lambda e(k) = {}^{c}R_w \rho(k)    (20)

We again choose the Lyapunov function

V = \frac{1}{2} e^{T}(k) e(k)    (21)

\dot{V} = e^{T}(k) \dot{e}(k) = -\lambda e^{T}(k) e(k) + e^{T}(k) {}^{c}R_w \rho(k)    (22)

A sufficient condition for \dot{V} < 0 is

\|e\| > \frac{\|\rho\|}{\lambda}    (23)

If \rho(k) \to 0 (which means that the velocity controller is able to make {}^{w}v_e(k) \to u and that there is no error in the velocity estimation), then e(k) \to 0; otherwise, when (23) is not fulfilled, the error no longer decreases and remains ultimately bounded by

\|e\| \le \frac{\|\rho\|}{\lambda}    (24)
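The bound (24) can be visualized with a small discrete simulation of the perturbed error dynamics (20). The Euler discretization, the gain and the disturbance level below are illustrative choices, not values from the paper, and {}^{c}R_w is taken as the identity.

```python
import numpy as np

T, lam = 8.333e-3, 30.0            # sample period and an illustrative gain
rho = np.array([0.01, 0.0, 0.0])   # constant velocity disturbance [m/s] (assumed)
e = np.array([0.05, -0.02, 0.03])  # initial tracking error [m]

# Euler-discretized closed loop (20): ė = -λ e + cR_w ρ, with cR_w = I
for _ in range(500):
    e = e + T * (-lam * e + rho)

print(f"steady-state |e|   ~ {np.linalg.norm(e) * 1000:.1f} mm")
print(f"bound |rho|/lambda = {np.linalg.norm(rho) / lam * 1000:.1f} mm")   # eq. (24)
```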

IV. EXPERIMENTAL RESULTS

The experimental results in this section concern the visual tracking of an object moving at up to 1 m/s and show the performance of the visual servoing algorithm proposed for the RoboTenis system. The control objective consists in preserving a constant distance ([0, 0, 600]^T mm) between the camera and the moving target. The ball hangs from the structure and is moved by manual dragging (see fig. 2). Different arbitrary trajectories have been performed; as an example, fig. 8 shows the spatial evolution of the ball in one experiment.

Fig. 8. 3D ball movements

A. Performance Tracking Indexes

The arbitrary movement of the ball makes a systematic study of the experiments difficult; in consequence, two tracking indexes have been defined according to the tracking error (difference between the target and the real position) and the estimation of the ball velocity:

- Tracking relation (table I), defined in (25) as the ratio between the average of the tracking error norm and the average of the estimated ball velocity norm. This relation is expressed in mm/(m/s) and makes it possible to isolate the result of each experiment from the particular features of the ball motion in other experiments.

Tracking relation = \frac{\frac{1}{N}\sum_{k=1}^{N} \|e(k)\|}{\frac{1}{N}\sum_{k=1}^{N} \|{}^{w}\hat{v}_b(k)\|}    (25)

- Average tracking error as a function of the estimated ball velocity. To make the comparison easier, the estimated ball velocity has been divided into five groups; the corresponding tracking errors are shown in table II.

Table I. Tracking relation using the proportional and predictive control laws
  ALGORITHM      Tracking relation
  Proportional   40.45
  Predictive     20.86

Table II. Grouped average tracking error
  Estimated ball velocity groups (mm/s): 200 / 400 / 600 / 800
  Average tracking error (mm): 32.5 ... 13.5

B. Studied and Compared Control Laws

Two control laws have been used.

Proportional control law, which does not consider the predictive component of (8):

u(k) = -\lambda {}^{c}R_w^{T} [{}^{c}p_b^{*}(k) - {}^{c}\hat{p}_b(k)]    (26)

Predictive control law, which considers the predictive component of (8):

u(k) = {}^{w}\hat{v}_b(k) - \lambda {}^{c}R_w^{T} [{}^{c}p_b^{*}(k) - {}^{c}\hat{p}_b(k)]    (27)

Tables I and II show the resulting indexes when the two control laws are applied. The numerical results are averages over 10 experiments with each control algorithm. They show a better performance of the system when the predictive control algorithm is used: both a smaller tracking relation and a smaller grouped average tracking error are observed. Figures 9 and 10 show the tracking error for the two control laws used to carry out the tracking task.
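The tracking relation (25) used in table I is simply a ratio of averaged norms and could be computed from logged data as follows. The arrays below are placeholders, not real measurements; errors are assumed to be logged in mm and ball velocities in m/s, consistent with the units used in table I.

```python
import numpy as np

def tracking_relation(errors_mm, ball_velocities_mps):
    """Eq. (25): mean tracking-error norm [mm] divided by the mean estimated
    ball-velocity norm [m/s], over the N samples of one experiment."""
    mean_error = np.mean(np.linalg.norm(errors_mm, axis=1))
    mean_speed = np.mean(np.linalg.norm(ball_velocities_mps, axis=1))
    return mean_error / mean_speed

# Placeholder logged data for one experiment (not real measurements):
rng = np.random.default_rng(1)
e_log = rng.normal(0.0, 10.0, size=(1000, 3))       # tracking error [mm]
v_log = rng.normal(0.0, 0.4, size=(1000, 3))        # estimated ball velocity [m/s]
print(f"tracking relation = {tracking_relation(e_log, v_log):.2f}")
```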

Fig. 9. Proportional control law: tracking error.

Fig. 10. Predictive control law: tracking error.

V. CONCLUSIONS

In this paper a novel visual servoing structure is presented. The control strategy is used in a parallel robot in order to achieve high-velocity tracking of an object that moves along unknown trajectories. RoboTenis is, to the best of our knowledge, the first parallel robot that uses a visual tracking control system in dynamic environments. The control strategy is based on obtaining the smallest number of sample periods in which the control objective can be achieved. The existing delays in the system and the saturations in velocity and acceleration are considered in the control model; this makes it possible for RoboTenis to move as fast as its characteristics allow. The Lyapunov stability of the proposed system was proved under ideal and non-ideal conditions: when the conditions are ideal, the error converges to zero; otherwise, the error remains ultimately bounded. The experiments illustrate the high performance of the system: the ball tracking was successfully achieved with an error smaller than 20 mm, with a visual loop sample time of 8.33 ms. In future work, new control strategies will be considered in order to attain a tracking velocity of 2 m/s; with the same purpose, the interaction of the system with the environment and/or tasks such as catching or hitting the ball are desirable. For more information consult: http://www.disam.upm.es/vision/projects/robotenis/

REFERENCES

[1] Kragic, D., Christensen, H.I. (2005). Advances in robot vision. Robotics and Autonomous Systems, 52(1), 1-3.
[2] Kaneko, M., Higashimori, M., Takenaka, R., Namiki, A., Ishikawa, M. (2003). The 100 G capturing robot - too fast to see. IEEE/ASME Transactions on Mechatronics, 8(1), 37-44, March 2003.
[3] Senoo, T., Namiki, A., Ishikawa, M. (2004). High-speed batting using a multi-jointed manipulator. 2004 IEEE International Conference on Robotics and Automation (ICRA '04), vol. 2, pp. 1191-1196, 26 April - 1 May 2004.
[4] Clavel, R. (1988). DELTA: a fast robot with parallel geometry. 18th International Symposium on Industrial Robots, pp. 91-100, Sydney, Australia.
[5] Ángel, L., Sebastián, J.M., Saltarén, R., Aracil, R., Sanpedro, J. (2005). RoboTenis: Optimal Design of a Parallel Robot with High Performance. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2-6 August 2005, Alberta, Canada.
[6] Ángel, L., Sebastián, J.M., Saltarén, R., Aracil, R., Gutiérrez, R. (2005). RoboTenis: Design, Dynamic Modeling and Preliminary Control. IEEE/ASME AIM 2005, 24-28 July 2005, Monterey, California, USA.
[7] Ángel, L., Sebastián, J.M., Saltarén, R., Aracil, R. (2005). RoboTenis System. Part II: Dynamics and Control. 44th IEEE Conference on Decision and Control and European Control Conference (CDC-ECC'05), Sevilla, 2005.
[8] Ángel, L. (2005). Control Visual de Robots Paralelos. Análisis, Desarrollo y Aplicación a la Plataforma RoboTenis. Doctoral thesis, Universidad Politécnica de Madrid, December 2005.
[9] Gutiérrez, D. Estimación de la posición y de la velocidad de un objeto móvil. Aplicación al sistema RoboTenis. Final degree project, E.T.S.I.I., Universidad Politécnica de Madrid.
[10] Gutiérrez, D., Sebastián, J.M., Ángel, L. (2005). Estimación de la posición y velocidad de un objeto móvil. Aplicación al sistema RoboTenis. XXVI Jornadas de Automática, 6-8 September 2005, Alicante.
[11] Hutchinson, S.A., Hager, G.D., Corke, P.I. (1996). A tutorial on visual servo control. IEEE Transactions on Robotics and Automation, 12(5), 651-670.
[12] Carelli, R., Santos-Victor, J., Roberti, F., Tosetti, S. (2006). Direct visual tracking control of remote cellular robots. Robotics and Autonomous Systems, 54, 805-814.
