Robotics and Autonomous Systems 43 (2003) 51–78

Adaptive servo visual robot control

Oscar Nasisi∗, Ricardo Carelli

Instituto de Automática, Universidad Nacional de San Juan, Av. San Martín (Oeste) 1109, 5400 San Juan, Argentina

Received 11 December 2001; received in revised form 27 November 2002

Abstract

Adaptive controllers for robot positioning and tracking using direct visual feedback with camera-in-hand configuration are proposed in this paper. The controllers are designed to compensate for full robot dynamics. Adaptation is introduced to reduce the design sensitivity due to robot and payload dynamics uncertainties. It is proved that the control system achieves the motion control objective in the image coordinate system. Simulations are carried out to evaluate the controller performance. Also, discretization and measurement effects are considered in simulations. © 2003 Elsevier Science B.V. All rights reserved.

Keywords: Visual motion; Robots; Tracking systems; Non-linear control systems; Adaptive control

1. Introduction

The use of visual information in the feedback loop presents an attractive solution to the motion control of autonomous manipulators evolving in unstructured environments. In this context, robot motion control uses direct visual sensory information to achieve a desired relative position between the robot and a possibly moving object in the robot environment, which is called visual servoing. The visual positioning problem arises when the object is static, whereas when the object is moving, the visual tracking problem is established instead. Visual servoing is treated in references such as [1–6].

Visual servoing can be achieved either with the so-called fixed-camera approach or with the camera-in-hand approach. With the former, cameras fixed in the world-coordinate frame capture images of both the robot and its environment. The objective of this approach is to move the robot in such a way that its end-effector reaches some desired object visually captured by the cameras in the working space [7–10]. With the camera-in-hand configuration, a camera mounted on the robot moves rigidly attached to the robot hand. The objective of this approach is that the manipulator move in such a way that the projection of a static or moving object will be at a desired location in the image as captured by the camera [11–17].

Most of the above-cited works, however, have not considered the non-linear robot dynamics in the controller design. These controllers may result in unsatisfactory control under high performance requirements, including high-speed tasks and direct-drive robot actuators. In such cases, the robot dynamics has to be considered in the controller design, as partially done in [18,19] or fully included in [10,20,21]. In visual servoing control, some uncertainties may arise in relation to camera parameters, kinematics and robot dynamics. Some authors have addressed the problem of camera uncertainties, e.g. in [22–25] for different camera configurations.

∗ Corresponding author. E-mail addresses: [email protected] (O. Nasisi), [email protected] (R. Carelli). doi:10.1016/S0921-8890(02)00370-6


Kinematics uncertainty is treated in [26]. With the ever-growing power of visual processing and the consequent increase in the frequency bandwidth of visual controllers, the issues of compensating robot dynamics and designing controllers that reduce sensitivity to dynamic uncertainties are becoming more important. As regards uncertainties in robot dynamics, robust control solutions have been proposed in [10,27,28], and adaptive control solutions in [29,30] for the fixed-camera visual servoing configuration.

This paper deals with the adaptive control of robot dynamics using the camera-in-hand visual servoing approach. In previous work [31,32], the authors have proposed adaptive controllers for the camera-in-hand configuration assuming uncertainties in robot dynamics. The present paper proposes a positioning and a tracking adaptive controller using visual feedback for robots with camera-in-hand configuration. Feedback signals come directly from internal position and velocity sensors and from visual information. It is proved that the positioning control errors converge asymptotically to zero, and that the tracking errors for moving objects are ultimately bounded. The controllers are based on the robot's inverse dynamics, the definition of a manifold in the error space [33], an update law [34] and, for moving objects, on the estimation of the target velocity. As far as the authors know, these are the first direct visual adaptive stable controllers which include the non-linear robot dynamics. Although the main contribution of the work is the development of these adaptive controllers with the corresponding stability proofs, the paper also includes some simulation studies to show the performance of the proposed controllers.

The paper is organized as follows. Section 2 presents the robot and the camera models. In Section 3, the adaptive controllers for positioning and tracking control objectives are presented. Section 4 gives the stability analysis for both controllers. Section 5 describes the simulation studies for a two degree-of-freedom (DOF) direct-drive manipulator. Section 6 evaluates discretization and measurement noise effects, and Section 7 presents some concluding remarks.

2. Robot and camera models

2.1. Model of the robot

When neither friction nor any other disturbance is present, the joint-space dynamics of an n-link manipulator can be written as [35]:

H(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) = \tau,   (1)

where q is the n × 1 vector of joint displacements, τ the n × 1 vector of applied joint torques, H(q) the n × n symmetric positive definite manipulator inertia matrix, C(q,\dot{q})\dot{q} the n × 1 vector of centripetal and Coriolis torques, and g(q) the n × 1 vector of gravitational torques.

The robot model, Eq. (1), has some fundamental properties that can be exploited in the controller design [36].

Skew-symmetry. Using a proper definition of matrix C—only the vector C(q,\dot{q})\dot{q} is uniquely defined—matrices H and C in Eq. (1) satisfy

x^{T}\left[\frac{dH(q)}{dt} - 2C(q,\dot{q})\right]x = 0, \quad \forall x \in \mathbb{R}^{n}.   (2)

Linearity. A part of the dynamics structure in Eq. (1) is linear in terms of a suitably selected set of robot and payload parameters:

H(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) = \Phi(q,\dot{q},\ddot{q})\,\theta,   (3)

where \Phi(q,\dot{q},\ddot{q}) is an n × m matrix and θ is an m × 1 vector containing the selected set of robot and payload parameters.
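The skew-symmetry property (2) is easy to check numerically for a specific model. The sketch below does so for the two-link dynamics used later in Section 5 (Table 1 values); it is an illustrative check only, and the function names are ours, not the paper's.

```python
import numpy as np

# Two-link planar arm parameters (Table 1 of the paper)
m1, m2 = 23.9, 4.44          # link masses (kg)
l1 = 0.45                    # length of link 1 (m)
lc1, lc2 = 0.091, 0.105      # centres of gravity (m)
I1, I2 = 1.27, 0.24          # link inertias (kg m^2)

def H(q):
    """Inertia matrix of the two-link arm (Section 5)."""
    c2 = np.cos(q[1])
    h11 = m1*lc1**2 + m2*(l1**2 + lc2**2 + 2*l1*lc2*c2) + I1 + I2
    h12 = m2*(lc2**2 + l1*lc2*c2) + I2
    h22 = m2*lc2**2 + I2
    return np.array([[h11, h12], [h12, h22]])

def C(q, dq):
    """Centripetal/Coriolis matrix of the two-link arm (Section 5)."""
    s2 = np.sin(q[1])
    return np.array([[-m2*l1*lc2*s2*dq[1], -m2*l1*lc2*s2*(dq[0] + dq[1])],
                     [ m2*l1*lc2*s2*dq[0],  0.0]])

# Numerical check of property (2): x^T (dH/dt - 2C) x = 0 for any x
rng = np.random.default_rng(0)
q, dq = rng.standard_normal(2), rng.standard_normal(2)
eps = 1e-6
Hdot = (H(q + eps*dq) - H(q - eps*dq)) / (2*eps)   # dH/dt since H depends only on q
N = Hdot - 2*C(q, dq)
x = rng.standard_normal(2)
print("x^T (dH/dt - 2C) x =", x @ N @ x)            # ~0 up to finite-difference error
```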


2.2. Robot differential kinematics

The differential kinematics of a manipulator gives the relationship between the joint velocities \dot{q} and the corresponding end-effector translational velocity ^Wv and angular velocity ^Wω. They are related through the geometric Jacobian J_g(q) [37]:

\begin{bmatrix} {}^{W}v \\ {}^{W}\omega \end{bmatrix} = J_g(q)\,\dot{q}.   (4)

If the end-effector pose (position and orientation) is expressed by means of a minimal representation in the operational space, the Jacobian matrix can be computed by differentiating the direct kinematics with respect to the joint positions. The resulting Jacobian, termed the analytical Jacobian J_A(q), is related to the geometric Jacobian through [37]:

J_g(q) = \begin{bmatrix} I & 0 \\ 0 & T(q) \end{bmatrix} J_A(q),   (5)

where T(q) is a transformation matrix that depends on the parameterization of the end-effector orientation.

2.3. Camera model

A TV camera is supposed to be mounted at the robot end-effector. Let the origin of the camera coordinate frame (end-effector frame) with respect to the robot coordinate frame be ^Wp_C = ^Wp_C(q) ∈ R^{m_0} with m_0 = 3. The orientation of the camera frame with respect to the robot frame is denoted as ^WR_C = ^WR_C(q) ∈ SO(3).

The image captured by the camera supplies a two-dimensional array of brightness values from a three-dimensional scene. This image may undergo various types of computer processing to enhance image properties and extract image features. It is assumed here that the image features are the projections onto the 2D image plane of 3D points in the scene space. A perspective projection with focal length λ is also assumed, as depicted in Fig. 1. An object (feature) point with coordinates [^Cp_x  ^Cp_y  ^Cp_z]^T ∈ R^3 in the camera frame projects onto a point in the image plane with image coordinates [u  v]^T ∈ R^2. The position ξ = [u  v]^T ∈ R^2 of an object feature point in the image will be referred to as an image feature point [38]. In this paper, it is assumed that the object can be characterized by a set of feature points. For the sake of completeness, some preliminaries concerning single and multiple feature points are recalled below.

2.3.1. Single feature point

Following the notation of [20], let ^Wp_O ∈ R^{m_0} be the position of an object feature point expressed in the robot coordinate frame. Therefore, the relative position of this object feature located in the robot workspace, with respect

Fig. 1. Perspective projection.


to the camera coordinate frame is [^Cp_x  ^Cp_y  ^Cp_z]^T. According to the perspective projection [4], the image feature point depends uniquely on the object feature position ^Wp_O and on the camera position and orientation, and is expressed as

\xi = \begin{bmatrix} u \\ v \end{bmatrix} = -\frac{\alpha\lambda}{{}^{C}p_z} \begin{bmatrix} {}^{C}p_x \\ {}^{C}p_y \end{bmatrix},   (6)

where α is the scaling factor in pixels/m due to camera sampling and ^Cp_z < 0. This model is also called the imaging model [20]. The time derivative yields

\dot{\xi} = -\frac{\alpha\lambda}{{}^{C}p_z} \begin{bmatrix} 1 & 0 & -\dfrac{{}^{C}p_x}{{}^{C}p_z} \\ 0 & 1 & -\dfrac{{}^{C}p_y}{{}^{C}p_z} \end{bmatrix} \begin{bmatrix} {}^{C}\dot{p}_x \\ {}^{C}\dot{p}_y \\ {}^{C}\dot{p}_z \end{bmatrix}.   (7)

On the other hand, the position of the object feature point with respect to the camera frame is given by

\begin{bmatrix} {}^{C}p_x \\ {}^{C}p_y \\ {}^{C}p_z \end{bmatrix} = {}^{C}R_W(q)\,[{}^{W}p_O - {}^{W}p_C(q)].   (8)

By invoking the general formula for the velocity of a moving point in a moving frame with respect to a fixed frame [39], and considering a fixed object point, the time derivative of (8) can be expressed in terms of the camera translational and angular velocities as [13]

\begin{bmatrix} {}^{C}\dot{p}_x \\ {}^{C}\dot{p}_y \\ {}^{C}\dot{p}_z \end{bmatrix} = {}^{C}R_W \{ -{}^{W}\omega_C \times ({}^{W}p_O - {}^{W}p_C(q)) - {}^{W}v_C \}.   (9)

After operating, there results

\begin{bmatrix} {}^{C}\dot{p}_x \\ {}^{C}\dot{p}_y \\ {}^{C}\dot{p}_z \end{bmatrix} = \begin{bmatrix} -1 & 0 & 0 & 0 & -{}^{C}p_z & {}^{C}p_y \\ 0 & -1 & 0 & {}^{C}p_z & 0 & -{}^{C}p_x \\ 0 & 0 & -1 & -{}^{C}p_y & {}^{C}p_x & 0 \end{bmatrix} \begin{bmatrix} {}^{C}R_W(q) & 0 \\ 0 & {}^{C}R_W(q) \end{bmatrix} \begin{bmatrix} {}^{W}v_C \\ {}^{W}\omega_C \end{bmatrix},   (10)

where ^Wv_C and ^Wω_C stand for the camera's translational and angular velocities with respect to the robot frame, respectively. The motion of the image feature point as a function of the camera velocity is obtained by substituting (10) into (7):

\dot{\xi} = -\frac{\alpha\lambda}{{}^{C}p_z} \begin{bmatrix} -1 & 0 & \dfrac{{}^{C}p_x}{{}^{C}p_z} & \dfrac{{}^{C}p_x\,{}^{C}p_y}{{}^{C}p_z} & -\dfrac{{}^{C}p_z^{2} + {}^{C}p_x^{2}}{{}^{C}p_z} & {}^{C}p_y \\ 0 & -1 & \dfrac{{}^{C}p_y}{{}^{C}p_z} & \dfrac{{}^{C}p_z^{2} + {}^{C}p_y^{2}}{{}^{C}p_z} & -\dfrac{{}^{C}p_x\,{}^{C}p_y}{{}^{C}p_z} & -{}^{C}p_x \end{bmatrix} \begin{bmatrix} {}^{C}R_W(q) & 0 \\ 0 & {}^{C}R_W(q) \end{bmatrix} \begin{bmatrix} {}^{W}v_C \\ {}^{W}\omega_C \end{bmatrix}.   (11)

Instead of using the coordinates ^Cp_x and ^Cp_y of the object feature described in the camera coordinate frame, which are a priori unknown, it is usual to replace them by the coordinates u and v of the projection of such a feature point onto the image frame. Therefore, by using (7),

\dot{\xi} = J_{image}(\xi, {}^{C}p_z) \begin{bmatrix} {}^{C}R_W(q) & 0 \\ 0 & {}^{C}R_W(q) \end{bmatrix} \begin{bmatrix} {}^{W}v_C \\ {}^{W}\omega_C \end{bmatrix},   (12)

where J_{image}(ξ, ^Cp_z) is the so-called image Jacobian defined by [4,13]:

J_{image}(\xi, {}^{C}p_z) = \begin{bmatrix} \dfrac{\alpha\lambda}{{}^{C}p_z} & 0 & \dfrac{u}{{}^{C}p_z} & -\dfrac{uv}{\alpha\lambda} & \dfrac{\alpha^{2}\lambda^{2} + u^{2}}{\alpha\lambda} & v \\ 0 & \dfrac{\alpha\lambda}{{}^{C}p_z} & \dfrac{v}{{}^{C}p_z} & -\dfrac{\alpha^{2}\lambda^{2} + v^{2}}{\alpha\lambda} & \dfrac{uv}{\alpha\lambda} & -u \end{bmatrix}.   (13)
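As a concrete reference, the following sketch evaluates the point-feature image Jacobian of Eq. (13). The function name and the way the camera parameters are passed are our own choices for illustration; the formula itself follows (13) as reconstructed above.

```python
import numpy as np

def image_jacobian(u, v, pz, alpha, lam):
    """Image Jacobian of a single feature point, Eq. (13).

    u, v  : image coordinates of the feature (pixels)
    pz    : depth of the point in the camera frame (m), negative in the
            paper's convention
    alpha : scale factor (pixels/m), lam: focal length (m)
    Returns the 2x6 matrix mapping the camera twist to the image-feature velocity.
    """
    al = alpha * lam
    return np.array([
        [al/pz, 0.0,   u/pz, -u*v/al,            (al**2 + u**2)/al,  v],
        [0.0,   al/pz, v/pz, -(al**2 + v**2)/al,  u*v/al,            -u],
    ])

# Example with the camera parameters of Table 2 (alpha = 72727 pixels/m,
# lambda = 0.008 m) and an arbitrary feature position.
J_img = image_jacobian(u=50.0, v=-20.0, pz=-0.8, alpha=72727.0, lam=0.008)
print(J_img.shape)   # (2, 6)
```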

Finally, by using (4) and (5) we can express \dot{\xi} in terms of the robot joint velocity \dot{q} as

\dot{\xi} = J_{image}(\xi, {}^{C}p_z) \begin{bmatrix} {}^{C}R_W(q) & 0 \\ 0 & {}^{C}R_W(q) \end{bmatrix} J_g(q)\,\dot{q} = J_{image}(\xi, {}^{C}p_z) \begin{bmatrix} {}^{C}R_W(q) & 0 \\ 0 & {}^{C}R_W(q) \end{bmatrix} \begin{bmatrix} I & 0 \\ 0 & T(q) \end{bmatrix} J_A(q)\,\dot{q}.   (14)

2.3.2. Multiple feature points

In applications to objects located in a three-dimensional space, three or more feature points are required to make the visual servo control solvable [17,21]. The above imaging model can be extended to a static object located in the robot workspace having p object feature points. In this case, ^Wp_O ∈ R^{p·m_0} is a constant vector which contains the p object feature points, and the feature image vector ξ ∈ R^{2p} is redefined as

\xi = \begin{bmatrix} u_1 \\ v_1 \\ \vdots \\ u_p \\ v_p \end{bmatrix} = -\alpha\lambda \begin{bmatrix} {}^{C}p_{x1}/{}^{C}p_{z1} \\ {}^{C}p_{y1}/{}^{C}p_{z1} \\ \vdots \\ {}^{C}p_{xp}/{}^{C}p_{zp} \\ {}^{C}p_{yp}/{}^{C}p_{zp} \end{bmatrix} \in \mathbb{R}^{2p}.

The extended image Jacobian J_{image}(ξ, ^Cp_z) ∈ R^{2p×6} is given by

J_{image}(\xi, {}^{C}p_z) = \begin{bmatrix} J_{image}\!\left(\begin{bmatrix} u_1 \\ v_1 \end{bmatrix}, {}^{C}p_{z1}\right) \\ \vdots \\ J_{image}\!\left(\begin{bmatrix} u_p \\ v_p \end{bmatrix}, {}^{C}p_{zp}\right) \end{bmatrix},

where {}^{C}p_z = [\,{}^{C}p_{z1}\;\; {}^{C}p_{z2}\; \cdots \; {}^{C}p_{zp}\,]^{T} \in \mathbb{R}^{p}.
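For several feature points the extended image Jacobian is simply the row-wise stack of the single-point Jacobians, one 2 × 6 block per feature. A minimal sketch, reusing the hypothetical image_jacobian helper introduced in the previous sketch:

```python
import numpy as np

def stacked_image_jacobian(features, depths, alpha, lam):
    """Extended image Jacobian for p feature points (Section 2.3.2).

    features : iterable of (u, v) image coordinates
    depths   : iterable of the corresponding depths ^Cp_zi
    Returns a (2p x 6) matrix obtained by stacking the 2x6 blocks.
    Assumes image_jacobian() from the previous sketch is in scope.
    """
    blocks = [image_jacobian(u, v, pz, alpha, lam)
              for (u, v), pz in zip(features, depths)]
    return np.vstack(blocks)

# Three feature points, as typically needed for a 3D object (Section 2.3.2)
J_ext = stacked_image_jacobian([(10, 5), (-30, 12), (0, -40)],
                               [-0.8, -0.9, -0.85],
                               alpha=72727.0, lam=0.008)
print(J_ext.shape)   # (6, 6)
```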


Using Eqs. (13) and (14), the time derivative of the image feature vector can be expressed as

\dot{\xi} = J(q, \xi, {}^{C}p_z)\,\dot{q},   (15)

where

J(q, \xi, {}^{C}p_z) = J_{image}(\xi, {}^{C}p_z) \begin{bmatrix} {}^{C}R_W(q) & 0 \\ 0 & {}^{C}R_W(q) \end{bmatrix} \begin{bmatrix} I & 0 \\ 0 & T(q) \end{bmatrix} J_A(q)   (16)
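Equation (16) is just a chain of matrix products, so a controller implementation can assemble it directly from the pieces it already computes (image Jacobian, camera orientation, analytical Jacobian). The sketch below shows that composition; the argument names and the assumption that the orientation block T(q) is supplied by the kinematics code are ours.

```python
import numpy as np

def full_jacobian(J_image, R_CW, T_q, J_A):
    """Jacobian J(q, xi, ^Cp_z) of Eq. (16).

    J_image : (2p x 6) image Jacobian
    R_CW    : (3 x 3) rotation of the robot (world) frame expressed in the camera frame
    T_q     : (3 x 3) transformation matrix T(q) of Eq. (5)
    J_A     : (6 x n) analytical Jacobian of the manipulator
    """
    Z = np.zeros((3, 3))
    R_block = np.block([[R_CW, Z], [Z, R_CW]])       # expresses the end-effector twist in the camera frame
    T_block = np.block([[np.eye(3), Z], [Z, T_q]])   # analytical -> geometric Jacobian, Eq. (5)
    return J_image @ R_block @ T_block @ J_A
```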

will be called the Jacobian matrix hereafter in this paper.

2.3.3. Moving object

When the object moves in the robot framework, the derivative of Eq. (8) can be expressed as

\begin{bmatrix} {}^{C}\dot{p}_x \\ {}^{C}\dot{p}_y \\ {}^{C}\dot{p}_z \end{bmatrix} = {}^{C}R_W \{ -{}^{W}\omega_C \times ({}^{W}p_O - {}^{W}p_{C\,org}) + ({}^{W}\dot{p}_O - {}^{W}v_C) \}.   (17)

As both the camera-in-hand and the object are moving, there exists a relative velocity between them. Therefore the object velocity in the camera frame can be calculated as

\begin{bmatrix} {}^{C}\dot{p}_x \\ {}^{C}\dot{p}_y \\ {}^{C}\dot{p}_z \end{bmatrix} = \begin{bmatrix} -1 & 0 & 0 & 0 & -{}^{C}p_{zo} & {}^{C}p_{yo} \\ 0 & -1 & 0 & {}^{C}p_{zo} & 0 & -{}^{C}p_{xo} \\ 0 & 0 & -1 & -{}^{C}p_{yo} & {}^{C}p_{xo} & 0 \end{bmatrix} \begin{bmatrix} {}^{C}R_W(q) & 0 \\ 0 & {}^{C}R_W(q) \end{bmatrix} \begin{bmatrix} {}^{W}v_C \\ {}^{W}\omega_C \end{bmatrix} + {}^{C}R_W\,{}^{W}\dot{p}_O,   (18)

where ^Wv_C and ^Wω_C are the translational and angular velocities of the camera with respect to the robot frame. The movement of the feature point in the image plane as a function of the object velocity and the camera velocity is expressed by substituting (18) into (7):

\dot{\xi} = -\frac{\alpha\lambda}{{}^{C}p_{zo}} \begin{bmatrix} m_1^{T} \\ m_2^{T} \end{bmatrix} \begin{bmatrix} {}^{C}R_W(q) & 0 \\ 0 & {}^{C}R_W(q) \end{bmatrix} \begin{bmatrix} {}^{W}v_C \\ {}^{W}\omega_C \end{bmatrix} - \frac{\alpha\lambda}{{}^{C}p_{zo}} \begin{bmatrix} 1 & 0 & -\dfrac{{}^{C}p_{xo}}{{}^{C}p_{zo}} \\ 0 & 1 & -\dfrac{{}^{C}p_{yo}}{{}^{C}p_{zo}} \end{bmatrix} {}^{C}R_W\,{}^{W}\dot{p}_O,   (19)

\dot{\xi} = -\frac{\alpha\lambda}{{}^{C}p_{zo}} \begin{bmatrix} m_1^{T} \\ m_2^{T} \end{bmatrix} \begin{bmatrix} {}^{C}R_W(q) & 0 \\ 0 & {}^{C}R_W(q) \end{bmatrix} J_g(q)\,\dot{q} - \frac{\alpha\lambda}{{}^{C}p_{zo}} \begin{bmatrix} 1 & 0 & -\dfrac{{}^{C}p_{xo}}{{}^{C}p_{zo}} \\ 0 & 1 & -\dfrac{{}^{C}p_{yo}}{{}^{C}p_{zo}} \end{bmatrix} {}^{C}R_W\,{}^{W}\dot{p}_O,   (20)

where

m_1 = \begin{bmatrix} -1 \\ 0 \\ \dfrac{{}^{C}p_{xo}}{{}^{C}p_{zo}} \\ \dfrac{{}^{C}p_{xo}\,{}^{C}p_{yo}}{{}^{C}p_{zo}} \\ -\dfrac{{}^{C}p_{zo}^{2} + {}^{C}p_{xo}^{2}}{{}^{C}p_{zo}} \\ {}^{C}p_{yo} \end{bmatrix}, \qquad m_2 = \begin{bmatrix} 0 \\ -1 \\ \dfrac{{}^{C}p_{yo}}{{}^{C}p_{zo}} \\ \dfrac{{}^{C}p_{zo}^{2} + {}^{C}p_{yo}^{2}}{{}^{C}p_{zo}} \\ -\dfrac{{}^{C}p_{xo}\,{}^{C}p_{yo}}{{}^{C}p_{zo}} \\ -{}^{C}p_{xo} \end{bmatrix}.

By analysing the last result in Eq. (20), it can be directly concluded that

\dot{\xi} = J(q, \xi, {}^{C}p_z)\,\dot{q} + J_O(q, {}^{C}p_O)\,{}^{W}\dot{p}_O,   (21)

where

J_O(q, {}^{C}p_O) = -\frac{\alpha\lambda}{{}^{C}p_{zo}} \begin{bmatrix} 1 & 0 & -\dfrac{{}^{C}p_{xo}}{{}^{C}p_{zo}} \\ 0 & 1 & -\dfrac{{}^{C}p_{yo}}{{}^{C}p_{zo}} \end{bmatrix} {}^{C}R_W.   (22)
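Equation (21) splits the image-feature velocity into a robot-motion term and an object-motion term; the tracking controller of Section 3 needs J_O explicitly. A sketch of Eq. (22), with our own function name and argument conventions, is:

```python
import numpy as np

def object_jacobian(p_cam, R_CW, alpha, lam):
    """Object Jacobian J_O of Eq. (22) for a single feature point.

    p_cam : (3,) position of the object point in the camera frame
            (^Cp_xo, ^Cp_yo, ^Cp_zo)
    R_CW  : (3 x 3) rotation of the robot (world) frame expressed in the camera frame
    Returns the 2x3 matrix mapping the object velocity d(^Wp_O)/dt to its
    contribution to the image-feature velocity in Eq. (21).
    """
    pxo, pyo, pzo = p_cam
    P = np.array([[1.0, 0.0, -pxo/pzo],
                  [0.0, 1.0, -pyo/pzo]])
    return -(alpha*lam/pzo) * P @ R_CW
```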

A simple generalization to multiple feature points can be obtained similarly as in Section 2.3.2.

3. Adaptive controller

3.1. Problem formulation

Two cases are considered: position control for a fixed object and tracking control for a moving object.

Case (a). The object does not move and a desired trajectory is given for the image features in the image plane. The following assumptions are considered:

Assumption 1. The object is fixed, ^Wṗ_O(t) = ^Wv_O(t) = 0.

Assumption 2. There exists a joint position vector q_d such that, for a fixed object, it is possible to reach the desired features vector ξ_d.

Assumption 3. For a given object situation ^Wp_O, there exists a neighbourhood of q_d where J is invertible and, additionally, J and J^{-1} are bounded.

Assumption 4. The depth ^Cp_z, i.e. the distance from the camera to the object, is available to be used by the controller. A practical way to obtain ^Cp_z is by using external sensors such as ultrasound or additional cameras in the so-called binocular stereo approach [9].

Assumption 1 reduces the control problem to a positioning one. Assumption 2 ensures that the control problem is solvable. Assumption 3 is required for technical reasons in the stability analysis. Now, the position adaptive servo visual control problem can be formulated.


Control problem. By considering Assumptions 1–4, the desired features vector ξ_d, the initial estimates of the dynamic parameters θ in Eq. (3) and a given object situation ^Cp_O, find a control law

\tau = T(q, \dot{q}, \xi, \hat{\theta})   (23)

and a parameter update law

\frac{d\hat{\theta}}{dt} = \Psi(q, \dot{q}, \xi, \hat{\theta}, t)   (24)

such that the control error in the image plane \tilde{\xi}(t) = (\xi_d - \xi(t)) \to 0 as t \to \infty.

Case (b). The object moves along an unknown path. The following assumptions are considered:

Assumption 1. The object moves along a smooth trajectory with bounded velocity ^Wṗ_O(t) = ^Wv_O(t) and acceleration d^Wv_O(t)/dt = ^Wa_O(t).

Assumption 2. There exists a trajectory in the joint space q_d(t) such that the vector of desired fixed features ξ_d is achievable: ξ_d = i(^Wp_C(q_d(t)), ^Wp_O(t)).

Assumption 3. For the target path ^Wp_O(t), there exists a neighbourhood of q_d(t) where J is invertible and, additionally, J and J^{-1} are bounded.

Assumption 4. The depth ^Cp_z, i.e. the distance from the camera to the object, is available to be used by the controller. A practical way to obtain ^Cp_z is by using external sensors such as ultrasound or additional cameras in the so-called binocular stereo approach [9].

Assumption 1 establishes a practical restriction on the object trajectory. Assumption 2 ensures that the control problem is solvable. Assumption 3 is required for technical reasons in the stability analysis. Now, the adaptive servo visual tracking control problem can be formulated.

Control problem. By considering Assumptions 1–4, the desired features vector ξ_d, the initial estimates of the dynamic parameters θ in (3), the initial estimates of the target velocity ^Wv̂_O(t) and its derivative d^Wv̂_O(t)/dt, find a control law

\tau = T(q, \dot{q}, \xi, \hat{\theta}, {}^{W}\hat{v}_O, {}^{W}\hat{a}_O)   (25)

and a parameter update law

\frac{d\hat{\theta}}{dt} = \Psi(q, \dot{q}, \xi, \hat{\theta}, {}^{W}\hat{v}_O, {}^{W}\hat{a}_O, t)   (26)

such that the control error in the image plane \tilde{\xi}(t) = \xi_d - \xi(t) is ultimately bounded by a sufficiently small ball B_r.

3.2. Control and update laws

Case (a). Let us define a signal υ in the image error space:

\upsilon = \frac{d\tilde{\xi}}{dt} + \Lambda\tilde{\xi}.   (27)

The following control law is considered:

\tau = K\upsilon' + \Phi\hat{\theta}   (28)

with

\upsilon' = J^{-1}\upsilon,   (29)

\Phi(q, \dot{q}, \xi, \upsilon)\hat{\theta} = -\hat{H}(q)\left\{ (J^{T}J)^{-1}(J^{T}\dot{J})\dot{q} + (J^{T}J)^{-1}(J^{T}\Lambda J)\dot{q} - \frac{d}{dt}\left[(J^{T}J)^{-1}J^{T}\right]\upsilon \right\} + \hat{C}(q, \dot{q})\left[(J^{T}J)^{-1}J^{T}\Lambda\tilde{\xi}\right] + \hat{g}(q),   (30)

where K and Λ are positive definite gain (n × n) and (2p × 2p) matrices, and \hat{H}(q), \hat{C}(q, \dot{q}) and \hat{g}(q) are the estimates of H, C and g, respectively. Parameterization of (28) is possible due to Property 2.1. To estimate θ, the following parameter update law of the gradient type [40] is used:

\frac{d\hat{\theta}}{dt} = \Gamma\Phi^{T}(q, \dot{q}, \upsilon, \xi)\,\upsilon'   (31)

with Γ a positive definite adaptation gain (m × m) matrix.

Case (b). Let us define the same signal υ in the image error space as for Case (a):

\upsilon = \frac{d\tilde{\xi}}{dt} + \Lambda\tilde{\xi} = -\dot{\xi} + \Lambda\tilde{\xi}   (32)

with \dot{\xi} = d\xi/dt = J\dot{q} + J_O\,{}^{W}v_O. The target velocity ^Wv_O and its time derivative d^Wv_O/dt can be estimated through a second-order filter:

{}^{W}\hat{v}_O = \frac{b_0\,p}{p^{2} + b_1 p + b_0}\,{}^{W}p_O(t),   (33)

{}^{W}\hat{a}_O = \frac{d\,{}^{W}\hat{v}_O}{dt} = \frac{b_0\,p^{2}}{p^{2} + b_1 p + b_0}\,{}^{W}p_O(t).   (34)

Therefore

\hat{\upsilon} = -\frac{d\hat{\xi}}{dt} + \Lambda\tilde{\xi}   (35)

with

\frac{d\hat{\xi}}{dt} = J\dot{q} + J_O\,{}^{W}\hat{v}_O.   (36)

Now, the following control law is proposed:

\tau = K\hat{\upsilon}' + \Phi\hat{\theta}   (37)

with

\hat{\upsilon}' = J^{-1}\hat{\upsilon} = -\dot{q} - J^{-1}J_O\,{}^{W}\hat{v}_O + J^{-1}\Lambda\tilde{\xi},   (38)

\Phi(\dot{q}, \hat{\upsilon}, {}^{W}\hat{v}_O, {}^{W}\dot{\hat{v}}_O)\hat{\theta} = \hat{H}(q)\left\{ \dot{J}^{-1}\hat{\upsilon} - J^{-1}\dot{J}\dot{q} - J^{-1}\dot{J}_O\,{}^{W}\hat{v}_O - J^{-1}J_O\,{}^{W}\dot{\hat{v}}_O - J^{-1}\Lambda J\dot{q} - J^{-1}\Lambda J_O\,{}^{W}\hat{v}_O \right\} + \hat{C}(q, \dot{q})\left\{ -J^{-1}J_O\,{}^{W}\hat{v}_O + J^{-1}\Lambda\tilde{\xi} \right\} + \hat{g}(q),   (39)

where K and Λ are positive definite gain (n × n) and (2p × 2p) matrices, and \hat{H}(q), \hat{C}(q, \dot{q}) and \hat{g}(q) are the estimates of H(q), C(q, \dot{q}) and g(q). Parameterization of (37) is possible due to Property 2.1.

To estimate θ, the following parameter update law is considered:

\frac{d\hat{\theta}}{dt} = \Gamma\Phi^{T}(\dot{q}, \hat{\upsilon}, {}^{W}\hat{v}_O, {}^{W}\hat{a}_O)\,\hat{\upsilon}' - L\hat{\theta}   (40)

with Γ and L positive definite adaptation gain (m × m) matrices.
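To make the structure of the Case (a) controller concrete, the sketch below assembles one control step from Eqs. (27)–(31): the filtered image error, the torque command, and an Euler-integrated gradient update. It is a schematic under our own naming; the regressor Phi and the Jacobian J are assumed to be provided by the model code (for instance, the two-link expressions of Section 5).

```python
import numpy as np

def adaptive_position_step(dq, xi, xi_d, J, Phi, theta_hat, K, Lam, Gamma, dt):
    """One step of the Case (a) adaptive visual servoing law, Eqs. (27)-(31).

    J     : current Jacobian J(q, xi, ^Cp_z), assumed square and invertible
            (Assumption 3)
    Phi   : regressor matrix Phi(q, dq, xi, upsilon) such that Phi @ theta
            reproduces the model-dependent terms of Eq. (30)
    Returns the joint torque tau and the updated parameter estimate.
    """
    xi_err = xi_d - xi                       # image-space error, tilde(xi)
    xi_err_dot = -J @ dq                     # fixed object: d(tilde xi)/dt = -J dq
    upsilon = xi_err_dot + Lam @ xi_err      # Eq. (27)
    upsilon_p = np.linalg.solve(J, upsilon)  # Eq. (29), J^{-1} upsilon

    tau = K @ upsilon_p + Phi @ theta_hat    # Eq. (28)

    # Gradient update law, Eq. (31), integrated with a forward Euler step
    theta_hat = theta_hat + dt * (Gamma @ Phi.T @ upsilon_p)
    return tau, theta_hat
```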

4. Stability analysis

In this section, two propositions describe the stability properties of the adaptive controllers proposed in Section 3. First, the following technical lemma is considered.

Lemma 1. Let the transfer function H ∈ R(s)^{n×n} be exponentially stable and strictly proper. Let u and y be its input and output, respectively. If u ∈ L^n_2 ∩ L^n_∞, then y, \dot{y} ∈ L^n_2 ∩ L^n_∞ and y → 0 as t → ∞.

Lemma 1, shown in [33], implies that filtering a square-integrable and bounded function through an exponentially stable and strictly proper filter results not only in a square-integrable and bounded output, but also in an output whose time derivative retains this property. Besides, it leads the output to converge to zero.

Case (a). Let us consider the control law (28) and update law (31) in closed loop with the robot and camera models (1) and (6), as well as Assumptions 1–4 for Case (a). Then, there exists a neighbourhood of q_d(t) such that:

(a) \tilde{\theta} = \theta - \hat{\theta} \in L^m_\infty.
(b) \upsilon \in L^{2p}_\infty \cap L^{2p}_2.
(c) \tilde{\xi}(t) = (\xi_d - \xi) \to 0 as t \to \infty.

Proof. The closed-loop system is obtained by combining (1) and (28):

K J^{-1}\upsilon + \Phi\hat{\theta} = H\ddot{q} + C\dot{q} + g.   (41)

By using \hat{\theta}(t) = \theta - \tilde{\theta}(t) and Eqs. (29) and (31) we obtain

K\upsilon' + H\dot{\upsilon}' + C\upsilon' = \Phi\tilde{\theta}.   (42)

Let us consider the local non-negative function of time

V = \tfrac{1}{2}\upsilon^{T}J^{-T}HJ^{-1}\upsilon + \tfrac{1}{2}\tilde{\theta}^{T}\Gamma^{-1}\tilde{\theta}   (43)
  = \tfrac{1}{2}\upsilon'^{T}H\upsilon' + \tfrac{1}{2}\tilde{\theta}^{T}\Gamma^{-1}\tilde{\theta},   (44)

whose time derivative along the trajectories of (42) is

\dot{V} = \upsilon^{T}J^{-T}[-KJ^{-1}\upsilon + \Phi\tilde{\theta} - CJ^{-1}\upsilon] + \tfrac{1}{2}\upsilon^{T}J^{-T}\dot{H}J^{-1}\upsilon + \tilde{\theta}^{T}\Gamma^{-1}\frac{d\tilde{\theta}}{dt}.   (45)

By regarding Property 2.1 and the parameter update law of Eq. (31), it results

\dot{V} = -\upsilon^{T}J^{-T}KJ^{-1}\upsilon \le 0.   (46)

Eqs. (31) and (46) imply \tilde{\theta} \in L^m_\infty and \upsilon \in L^{2p}_\infty. By time-integrating \dot{V}, it can also be easily shown that \upsilon \in L^{2p}_2. Finally, to prove (c), we note that \upsilon = (d\tilde{\xi}/dt) + \Lambda\tilde{\xi}. By regarding \tilde{\xi}(t) as the output of an exponentially stable and strictly proper linear filter with input υ, Lemma 1 allows us to conclude that \tilde{\xi}(t) \to 0 as t \to \infty.


Remark 1. If more features than DOF of the robot are taken, a non-square Jacobian matrix is obtained. In this case a re-definition of υ as

\upsilon = \frac{d(J^{T}\tilde{\xi})}{dt} + \Lambda(J^{T}\tilde{\xi})   (47)

should be used. A similar reasoning to that used for Case (a) enables the same conclusions about control system stability to be reached.

Case (b). Let us consider the control law (37) and update law (40) in closed loop with the robot and camera models (1) and (6), as well as Assumptions 1–4 for Case (b). Then, there exists a neighbourhood of q_d(t) such that:

(a) \tilde{\theta} = \theta - \hat{\theta} \in L^m_\infty.
(b) \hat{\upsilon}' \in L^n_\infty.
(c) \tilde{\xi}(t) = \xi_d - \xi is ultimately bounded.

Proof. The closed-loop system is obtained by combining (1) and (37):

K\hat{\upsilon}' + \Phi\hat{\theta} = H\ddot{q} + C\dot{q} + g.   (48)

Using \hat{\theta} = \theta - \tilde{\theta} and Eqs. (38) and (39) it is obtained:

K\hat{\upsilon}' + H\,\widehat{D\upsilon}' + C\hat{\upsilon}' = \Phi\tilde{\theta},   (49)

where \widehat{D\upsilon}' is the estimate of the time derivative of \upsilon'. Also, \widehat{D\upsilon}' = D\hat{\upsilon}' + \varepsilon', with \varepsilon' = J^{-1}J_O({}^{W}v_O - {}^{W}\hat{v}_O), {}^{W}v_O - {}^{W}\hat{v}_O = \varepsilon_O the estimation error, and D\hat{\upsilon}' the time derivative of \hat{\upsilon}'. Then

H\,D\hat{\upsilon}' = -(K + C)\hat{\upsilon}' + \Phi\tilde{\theta} - \varepsilon,   (50)

where \varepsilon = H\varepsilon'. Let us consider the local non-negative function of time

V = \tfrac{1}{2}\hat{\upsilon}'^{T}H\hat{\upsilon}' + \tfrac{1}{2}\tilde{\theta}^{T}\Gamma^{-1}\tilde{\theta},   (51)

whose time derivative along the trajectories of (50), and considering as well the parameter update law (40), is

\dot{V} = \hat{\upsilon}'^{T}[-(K + C)\hat{\upsilon}' + \Phi\tilde{\theta} - \varepsilon] + \tfrac{1}{2}\hat{\upsilon}'^{T}\dot{H}\hat{\upsilon}' + \tilde{\theta}^{T}[-\Phi^{T}\hat{\upsilon}' + \Gamma^{-1}L\hat{\theta}].   (52)

By regarding Property 2.1, there results

\dot{V} = -\hat{\upsilon}'^{T}K\hat{\upsilon}' - \tilde{\theta}^{T}\Gamma^{-1}L\tilde{\theta} - \hat{\upsilon}'^{T}\varepsilon + \tilde{\theta}^{T}\Gamma^{-1}L\theta.   (53)

By defining the following expressions:

\mu_K = \sigma_{\min}(K), \quad \mu_{\Gamma^{-1}L} = \sigma_{\min}(\Gamma^{-1}L), \quad \gamma_{\Gamma^{-1}L} = \sigma_{\max}(\Gamma^{-1}L),

where \sigma_i(A) = \sqrt{\lambda_i(A^{T}A)} denotes the singular values of A for i: min, max, it follows that

\dot{V} \le -\mu_K\|\hat{\upsilon}'\|^{2} - \mu_{\Gamma^{-1}L}\|\tilde{\theta}\|^{2} + \|\hat{\upsilon}'\|\,\|\varepsilon\| + \gamma_{\Gamma^{-1}L}\|\theta\|\,\|\tilde{\theta}\|.   (54)

From the expressions

\left(\frac{1}{\zeta}\|\tilde{\theta}\| - \zeta\|\theta\|\right)^{2} = \frac{1}{\zeta^{2}}\|\tilde{\theta}\|^{2} - 2\|\tilde{\theta}\|\,\|\theta\| + \zeta^{2}\|\theta\|^{2},   (55)

\left(\frac{1}{\eta}\|\hat{\upsilon}'\| - \eta\|\varepsilon\|\right)^{2} = \frac{1}{\eta^{2}}\|\hat{\upsilon}'\|^{2} - 2\|\hat{\upsilon}'\|\,\|\varepsilon\| + \eta^{2}\|\varepsilon\|^{2},   (56)

with \zeta, \eta \in \mathbb{R}^{+}, it can be written:

\|\tilde{\theta}\|\,\|\theta\| = \frac{1}{2\zeta^{2}}\|\tilde{\theta}\|^{2} + \frac{\zeta^{2}}{2}\|\theta\|^{2} - \frac{1}{2}\left(\frac{1}{\zeta}\|\tilde{\theta}\| - \zeta\|\theta\|\right)^{2},

\|\hat{\upsilon}'\|\,\|\varepsilon\| = \frac{1}{2\eta^{2}}\|\hat{\upsilon}'\|^{2} + \frac{\eta^{2}}{2}\|\varepsilon\|^{2} - \frac{1}{2}\left(\frac{1}{\eta}\|\hat{\upsilon}'\| - \eta\|\varepsilon\|\right)^{2}.

By neglecting the negative terms we obtain the following inequalities:

\|\tilde{\theta}\|\,\|\theta\| \le \frac{1}{2\zeta^{2}}\|\tilde{\theta}\|^{2} + \frac{\zeta^{2}}{2}\|\theta\|^{2}, \qquad \|\hat{\upsilon}'\|\,\|\varepsilon\| \le \frac{1}{2\eta^{2}}\|\hat{\upsilon}'\|^{2} + \frac{\eta^{2}}{2}\|\varepsilon\|^{2}.

Now, going back to \dot{V}:

\dot{V} \le -\left(\mu_K - \frac{1}{2\eta^{2}}\right)\|\hat{\upsilon}'\|^{2} - \left(\mu_{\Gamma^{-1}L} - \frac{\gamma_{\Gamma^{-1}L}}{2\zeta^{2}}\right)\|\tilde{\theta}\|^{2} + \gamma_{\Gamma^{-1}L}\frac{\zeta^{2}}{2}\|\theta\|^{2} + \frac{\eta^{2}}{2}\|\varepsilon\|^{2},   (57)

which can be expressed as

\dot{V} \le -\alpha_1\|\hat{\upsilon}'\|^{2} - \alpha_2\|\tilde{\theta}\|^{2} + \rho,   (58)

where

\alpha_1 = \mu_K - \frac{1}{2\eta^{2}} > 0, \quad \alpha_2 = \mu_{\Gamma^{-1}L} - \frac{\gamma_{\Gamma^{-1}L}}{2\zeta^{2}} > 0, \quad \rho = \gamma_{\Gamma^{-1}L}\frac{\zeta^{2}}{2}\|\theta\|^{2} + \frac{\eta^{2}}{2}\|\varepsilon\|^{2}.   (59)

Eq. (51) can be stated as

V \le \beta_1\|\hat{\upsilon}'\|^{2} + \beta_2\|\tilde{\theta}\|^{2},   (60)

where \beta_1 = \tfrac{1}{2}\gamma_H, \beta_2 = \tfrac{1}{2}\gamma_{\Gamma^{-1}}, \gamma_H = \sup_q[\sigma_{\max}(H)], \gamma_{\Gamma^{-1}} = \sigma_{\max}(\Gamma^{-1}). Then

\dot{V} \le -\delta V + \rho   (61)

with

\delta = \min\left\{\frac{\alpha_1}{\beta_1}, \frac{\alpha_2}{\beta_2}\right\}.

Since ρ is bounded, (61) implies that \hat{\upsilon}' \in L^n_\infty, \tilde{\theta} \in L^m_\infty and x = (\hat{\upsilon}', \tilde{\theta})^{T} is ultimately bounded inside a ball B, which proves (a) and (b).

In addition, from (38), \hat{\upsilon} = J\hat{\upsilon}' and, by recalling Assumption 3, \hat{\upsilon} \in L^{2p}_\infty. Besides, \hat{\upsilon} can be expressed in terms of υ as

\hat{\upsilon} = \frac{d\tilde{\xi}}{dt} + \Lambda\tilde{\xi} + J_O(v_O - \hat{v}_O) = \upsilon + J_O\varepsilon_O.   (62)

Since J_O\varepsilon_O is bounded, it means that \upsilon = (d\tilde{\xi}/dt) + \Lambda\tilde{\xi} is ultimately bounded as well. From the last equation, \tilde{\xi} = O\upsilon, where O is a linear operator with finite gain. Therefore

\|\tilde{\xi}\| \le \|O\|\,\|\upsilon\|

and, since υ is ultimately bounded, \tilde{\xi} is also ultimately bounded, which proves (c).


Remark 1. If more features than DOF of the robot are regarded, a non-square Jacobian matrix is obtained. In this case a re-definition of υ as

\upsilon = \frac{d(J^{T}\tilde{\xi})}{dt} + \Lambda(J^{T}\tilde{\xi})   (63)

should be used. By reasoning just like in Proposition 2, it is possible to reach the same conclusions on control system behaviour.

5. Simulations

Computer simulations have been carried out to show the stability and performance of the proposed adaptive controllers. The robot used for the simulations is a two DOF manipulator, as shown in Fig. 2. The meaning and numerical values of the symbols in Fig. 2 are listed in Table 1.

The elements H_{ij}(q) (i, j = 1, 2) of the inertia matrix H are

H_{11}(q) = m_1 l_{c1}^{2} + m_2(l_1^{2} + l_{c2}^{2} + 2 l_1 l_{c2}\cos(q_2)) + I_1 + I_2,
H_{12}(q) = m_2(l_{c2}^{2} + l_1 l_{c2}\cos(q_2)) + I_2,
H_{21}(q) = m_2(l_{c2}^{2} + l_1 l_{c2}\cos(q_2)) + I_2,
H_{22}(q) = m_2 l_{c2}^{2} + I_2.

Fig. 2. Two DOF manipulator scheme.

Table 1
Parameters of the manipulator

Description                              Notation   Value
Length of link 1 (m)                     l1         0.45
Length of link 2 (m)                     l2         0.55
Center of gravity of l1 (m)              lc1        0.091
Center of gravity of l2 (m)              lc2        0.105
Mass of l1 (kg)                          m1         23.9
Mass of l2 + camera (kg)                 m2         4.44
Inertia of l1 (kg m2)                    I1         1.27
Inertia of l2 + camera (kg m2)           I2         0.24
Acceleration of gravity (m/s2)           g          9.8


The elements C_{ij}(q, \dot{q}) (i, j = 1, 2) of the centrifugal and Coriolis matrix C are

C_{11}(q, \dot{q}) = -m_2 l_1 l_{c2}\sin(q_2)\,\dot{q}_2,
C_{12}(q, \dot{q}) = -m_2 l_1 l_{c2}\sin(q_2)\,(\dot{q}_1 + \dot{q}_2),
C_{21}(q, \dot{q}) = m_2 l_1 l_{c2}\sin(q_2)\,\dot{q}_1,
C_{22}(q, \dot{q}) = 0.

Table 2
Parameters of the camera

Description              Notation   Value
Focal length (m)         λ          0.008
Scale factor (pixels/m)  α          72727

Fig. 3. Trajectory in the image plane.

Fig. 4. Trajectory in the robot workspace.


The entries of the gravitational torque vector g are given by

g_1(q) = (m_1 l_{c1} + m_2 l_1)\,g\sin(q_1) + m_2 l_{c2}\,g\sin(q_1 + q_2),
g_2(q) = m_2 l_{c2}\,g\sin(q_1 + q_2).

Numerical values for the camera model are listed in Table 2. All constants, design parameters and variables in the control system are expressed in the International System of Units (SI). Linear parameterization of Eqs. (31) and (39) leads to the parameter vector

\theta = \begin{bmatrix} m_1 l_{c1}^{2} & m_1 l_{c1} & m_2 l_{c2}^{2} & m_2 l_{c2} & m_2 & I_1 & I_2 \end{bmatrix}^{T}.

For controller design, it is assumed that the values of the parameters of link 1 (m_1 l_{c1}^{2}, m_1 l_{c1}, I_1) are known with uncertainties of about 10% and those of link 2 (m_2 l_{c2}^{2}, m_2 l_{c2}, m_2, I_2) with uncertainties of about 20%.
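The two-link model above is simple enough to code directly. The sketch below evaluates H, C and g with the Table 1 values and checks the linear parameterization (3) against the parameter vector θ defined above; the regressor expression is our own derivation for this particular model, not taken from the paper.

```python
import numpy as np

# Table 1 values
l1, l2 = 0.45, 0.55
lc1, lc2 = 0.091, 0.105
m1, m2 = 23.9, 4.44
I1, I2 = 1.27, 0.24
grav = 9.8

theta = np.array([m1*lc1**2, m1*lc1, m2*lc2**2, m2*lc2, m2, I1, I2])

def dynamics(q, dq):
    """Inertia matrix H, Coriolis matrix C and gravity vector g (Section 5)."""
    c2, s2 = np.cos(q[1]), np.sin(q[1])
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    H = np.array([[m1*lc1**2 + m2*(l1**2 + lc2**2 + 2*l1*lc2*c2) + I1 + I2,
                   m2*(lc2**2 + l1*lc2*c2) + I2],
                  [m2*(lc2**2 + l1*lc2*c2) + I2,
                   m2*lc2**2 + I2]])
    C = np.array([[-m2*l1*lc2*s2*dq[1], -m2*l1*lc2*s2*(dq[0] + dq[1])],
                  [ m2*l1*lc2*s2*dq[0],  0.0]])
    g = np.array([(m1*lc1 + m2*l1)*grav*s1 + m2*lc2*grav*s12,
                  m2*lc2*grav*s12])
    return H, C, g

def regressor(q, dq, ddq):
    """Regressor Phi(q, dq, ddq) such that Phi @ theta = H ddq + C dq + g,
    for theta = [m1*lc1^2, m1*lc1, m2*lc2^2, m2*lc2, m2, I1, I2] (our derivation)."""
    c2, s2 = np.cos(q[1]), np.sin(q[1])
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    Phi = np.zeros((2, 7))
    Phi[0] = [ddq[0], grav*s1, ddq[0] + ddq[1],
              l1*c2*(2*ddq[0] + ddq[1]) - l1*s2*(2*dq[0]*dq[1] + dq[1]**2) + grav*s12,
              l1**2*ddq[0] + l1*grav*s1, ddq[0], ddq[0] + ddq[1]]
    Phi[1] = [0.0, 0.0, ddq[0] + ddq[1],
              l1*c2*ddq[0] + l1*s2*dq[0]**2 + grav*s12,
              0.0, 0.0, ddq[0] + ddq[1]]
    return Phi

# Consistency check of the linearity property (3)
rng = np.random.default_rng(1)
q, dq, ddq = rng.standard_normal(2), rng.standard_normal(2), rng.standard_normal(2)
H, C, g = dynamics(q, dq)
print(np.allclose(H @ ddq + C @ dq + g, regressor(q, dq, ddq) @ theta))  # True
```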

Fig. 5. Evolution of control errors.


5.1. Case (a)—adaptive position control

Simulations are carried out using the following design parameters:

\Lambda = \mathrm{diag}\{5, 1.8\}, \quad K = \mathrm{diag}\{50, 5\}, \quad \Gamma = \mathrm{diag}\{0.7, \ldots, 0.7\}.

Robot initial conditions are

q_1(0) = 30°, \quad q_2(0) = 45°, \quad \dot{q}_1(0) = 0, \quad \dot{q}_2(0) = 0,

and the initial estimates of vector θ are

\widehat{m_1 l_{c1}^{2}}(0) = 0.264, \quad \widehat{m_1 l_{c1}}(0) = 2.632, \quad \widehat{m_2 l_{c2}^{2}}(0) = 0.0846, \quad \widehat{m_2 l_{c2}}(0) = 0.671, \quad \hat{m}_2(0) = 5.328, \quad \hat{I}_1(0) = 1.397, \quad \hat{I}_2(0) = 0.288.

Fig. 6. Evolution of parameter estimates.


Fig. 7. Trajectory in the image plane.

The object feature point was placed at

{}^{W}p_O = \begin{bmatrix} 0.5 & -0.2382 & -0.74 \end{bmatrix}^{T}.

Simulations were carried out in two stages. In the first stage, we consider the adaptive controller with uncertainty in the robot dynamics parameters. The second stage presents the non-adaptive control with wrong estimates of the dynamic parameters, which were set at the same values as the initial estimates in the adaptive controller. Simulation results are shown in Figs. 3–6. Fig. 3 shows the image feature trajectories on the image plane for the adaptive and non-adaptive controllers. Fig. 4 represents the trajectory of the manipulator's end-effector, again for the adaptive and the non-adaptive cases. Fig. 5 presents the evolution of the control errors. It is clearly seen from the above figures that the adaptive controller achieves a better control performance compared to the non-adaptive one. For the adaptive case, the control errors tend to zero, while for the non-adaptive case, the controller is unable

Fig. 8. Trajectory in the robot workspace.


Fig. 9. Coordinate W x in the work plane.

to eliminate the steady-state errors. By analyzing Fig. 6, which represents the evolution of the parameter estimates, it can be concluded that, for the involved signals, the proposed controller does not present parametric convergence, i.e. θ̃ does not converge to zero as t → ∞.

5.2. Case (b)—tracking adaptive control

Simulation conditions are the same as for Case (a). It is considered that a point object moves within the manipulator's environment describing a circular trajectory of radius r = 0.2 m and angular speed ω = 1.57 rad/s. The parameters of the velocity estimation filter in (33) and (34) are selected as b_0 = 10^4 and b_1 = 200. Simulations were carried out considering the adaptive and non-adaptive cases to obtain comparative performance results, which are shown in Figs. 7–12. Fig. 7 shows the trajectory of the image features during the tracking process. Fig. 8, on the other hand, shows the trajectory of the manipulator's end-effector in the robot frame. For the adaptive case, initial

Fig. 10. Coordinate W y in the work plane.


Fig. 11. Evolution of control errors.

estimates (θ̂_0) of the parameters are taken equal to the fixed wrong parameters of the non-adaptive case. For a better display of the adaptive controller's performance, Figs. 9 and 10 present the coordinates Wx and Wy of the manipulator and object trajectories. The good tracking performance for the example considered in the simulation is clearly seen. Control errors are explicitly shown in Fig. 11. Finally, the evolution of the estimates of the dynamic parameters is presented in Fig. 12. From the above figures, the improvement in the manipulator's performance when the adaptive controller is used, as compared to the fixed controller, can be noted. For the adaptive case, the control errors enter and remain in a small neighbourhood of the ideal zero control error.

6. Discretization and measurement noise effects

In the previous sections, a tracking adaptive servo visual control algorithm in the continuous domain has been proposed and its stability analysis has been carried out. The feasibility of implementing the proposed algorithm on a computer system motivates its discretization.


Fig. 12. Evolution of parameter estimates.

In this section, the discretization of the control law and the update law is outlined for their digital implementation. Besides, the performance of the proposed control algorithm with several sampling times and with measurement and discretization noises is evaluated through computer simulations.

The proposed scheme has two feedback loops with different sampling times. The first one (T_1) is the fast-dynamics loop and is in charge of controlling the manipulator using the joint position and velocity measurements. The second loop (T_2), with slower dynamics, computes and estimates the velocity of the moving object based on the images from a video camera, setting the tracking references for the T_1 loop. The discretization is obtained as follows:

\frac{dx}{dt} \cong \frac{x_k - x_{k-1}}{T},   (64)

where T is the sampling period.
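Equation (64) is the usual first-order backward difference; in code it amounts to one subtraction per sample. A trivial sketch, with names of our choosing:

```python
def backward_difference(x_k, x_prev, T):
    """First-order approximation of dx/dt, Eq. (64)."""
    return (x_k - x_prev) / T

# e.g. a joint velocity estimate from two consecutive encoder readings, T1 = 2.5 ms
dq_est = backward_difference(0.5012, 0.5000, 0.0025)
```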


Table 3
Performance for different sampling times

No.   T1 (s)   T2 (s)   ∫‖ξ̃‖ dt   ‖ξ̃‖ final   ∫‖θ̂‖ dt
1     0.0025   0.0025   517.02     3.88         138.01
2     0.0025   0.0250   454.84     3.45         120.91
3     0.0025   0.0500   376.22     3.81         100.6
4     0.0025   0.1000   362.24     4.74         90.73
5     0.0040   0.0500   568.2      3.84         151.53
6     0.0150   0.0500   982.56     22.67        229.74

Fig. 13. Trajectory in the robot workspace. Simulations 1, 3, 4 and 6 of Table 3.


The discrete equations of both the control law and the parameter update law for the adaptive controller are

\tau_{kT_1} = K\hat{\nu}'_{kT_1} + \Phi_{kT_1}\hat{\theta}_{kT_1}   (65)

with

\Phi_{kT_1}(\dot{q}_{kT_1}, \hat{\nu}_{kT_1}, {}^{W}\hat{v}_{O\,kT_2}, {}^{W}\dot{\hat{v}}_{O\,kT_2})\hat{\theta}_{kT_1} = \hat{H}(q_{kT_1})\{\dot{J}^{-1}\hat{\nu}_{kT_1} - J^{-1}\dot{J}\dot{q}_{kT_1} - J^{-1}\dot{J}_O\,{}^{W}\hat{v}_{O\,kT_2} - J^{-1}J_O\,{}^{W}\dot{\hat{v}}_{O\,kT_2} - J^{-1}\Lambda J\dot{q}_{kT_1} - J^{-1}\Lambda J_O\,{}^{W}\hat{v}_{O\,kT_2}\} + \hat{C}(q_{kT_1}, \dot{q}_{kT_1})\{-J^{-1}J_O\,{}^{W}\hat{v}_{O\,kT_2} + J^{-1}\Lambda\tilde{\xi}_{kT_2}\} + \hat{g}(q_{kT_1}),   (66)

\hat{\theta}_{kT_1} = (I - T_1 L)\hat{\theta}_{(k-1)T_1} + T_1\Gamma\Phi^{T}_{kT_1}\hat{\nu}'_{kT_1},   (67)

\hat{\nu}'_{kT_1} = J^{-1}\hat{\nu}_{kT_1} = -\dot{q}_{kT_1} - J^{-1}J_O\,{}^{W}\hat{v}_{O\,kT_2} + J^{-1}\Lambda\tilde{\xi}_{kT_2}.   (68)

The filter equations for the estimates of the object velocity and acceleration are

{}^{W}\hat{v}_{O\,kT_2} = \left[\frac{I}{T_2^{2}} + \frac{b_1 I}{T_2} + b_0 I\right]^{-1}\left\{\frac{b_0}{T_2}({}^{W}p_{O\,kT_2} - {}^{W}p_{O\,(k-1)T_2}) + \left(\frac{2I}{T_2^{2}} + \frac{b_1 I}{T_2}\right){}^{W}\hat{v}_{O\,(k-1)T_2} - \frac{I}{T_2^{2}}\,{}^{W}\hat{v}_{O\,(k-2)T_2}\right\},   (69)

{}^{W}\dot{\hat{v}}_{O\,kT_2} = \left[\frac{I}{T_2^{2}} + \frac{a_1 I}{T_2} + a_0 I\right]^{-1}\left\{\frac{a_0}{T_2^{2}}({}^{W}p_{O\,kT_2} - 2\,{}^{W}p_{O\,(k-1)T_2} + {}^{W}p_{O\,(k-2)T_2}) + \left(\frac{2I}{T_2^{2}} + \frac{a_1 I}{T_2}\right){}^{W}\dot{\hat{v}}_{O\,(k-1)T_2} - \frac{I}{T_2^{2}}\,{}^{W}\dot{\hat{v}}_{O\,(k-2)T_2}\right\}.   (70)

Fig. 14. Norms of the control errors. Simulations 1, 3, 4 and 6 of Table 3.
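The filters (69) and (70) are second-order discrete recursions driven by the measured object position, so each new sample costs only a few multiply–adds per axis. A sketch of the velocity filter (69), with our own class structure and assuming the scalar gains b0, b1 are applied per axis:

```python
import numpy as np

class VelocityFilter:
    """Discrete second-order velocity estimator, Eq. (69)."""

    def __init__(self, b0, b1, T2, dim=3):
        self.b0, self.b1, self.T2 = b0, b1, T2
        self.p_prev = np.zeros(dim)          # ^Wp_O at (k-1)T2
        self.v_prev = np.zeros(dim)          # estimate at (k-1)T2
        self.v_prev2 = np.zeros(dim)         # estimate at (k-2)T2

    def update(self, p_k):
        T2, b0, b1 = self.T2, self.b0, self.b1
        denom = 1.0/T2**2 + b1/T2 + b0       # scalar form of the bracketed inverse
        v_k = ((b0/T2)*(p_k - self.p_prev)
               + (2.0/T2**2 + b1/T2)*self.v_prev
               - (1.0/T2**2)*self.v_prev2) / denom
        self.p_prev, self.v_prev2, self.v_prev = p_k, self.v_prev, v_k
        return v_k

# Filter settings used in the simulations: b0 = 1e4, b1 = 200, T2 = 0.05 s,
# with the object on the circular trajectory of radius 0.2 m and 1.57 rad/s.
f = VelocityFilter(b0=1e4, b1=200.0, T2=0.05)
for k in range(5):
    t = k * 0.05
    print(f.update(np.array([0.2*np.cos(1.57*t), 0.2*np.sin(1.57*t), -0.74])))
```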

Several simulations have been carried out by considering different sampling times and measurement noises to evaluate the performance of the developed discrete controller. The simulation conditions are the same as those of the continuous case (see Section 5.2). Let us consider different sampling times both for the faster and slower dynamic loops. The following gain matrices were selected:

\Lambda = \mathrm{diag}\{20, 20\}, \quad K = \mathrm{diag}\{40, 40\}, \quad \Gamma = \mathrm{diag}\{0.2, \ldots, 0.2\}, \quad L = \mathrm{diag}\{0.08, \ldots, 0.08\}.
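The discrete controller runs at two rates: the vision quantities (ξ̃, the object velocity estimate and its derivative) are refreshed every T2, while the torque and the parameter update of Eqs. (65), (67) and (68) run every T1. The fragment below sketches the fast-loop step under that assumption; as before, the regressor Phi and the Jacobians are assumed to come from the model code, and the names are ours.

```python
import numpy as np

def fast_loop_step(dq, xi_err, v_obj_hat, J, J_O, Phi, theta_hat,
                   K, Lam, Gamma, L, T1):
    """One T1-rate step of the discrete tracking controller, Eqs. (65)-(68).

    xi_err, v_obj_hat : most recent vision-loop (T2) values of tilde(xi)
                        and the object velocity estimate
    Phi               : regressor evaluated as in Eq. (66)
    """
    # Eq. (68): filtered error expressed in joint space
    nu_p = -dq - np.linalg.solve(J, J_O @ v_obj_hat) + np.linalg.solve(J, Lam @ xi_err)

    # Eq. (65): torque command
    tau = K @ nu_p + Phi @ theta_hat

    # Eq. (67): discrete parameter update with leakage term L
    theta_next = (np.eye(len(theta_hat)) - T1 * L) @ theta_hat \
                 + T1 * (Gamma @ Phi.T @ nu_p)
    return tau, theta_next
```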

In the simulations, the case is considered where the point object moves in the manipulator environment describing a circular trajectory, with the same velocity and trajectory radius as in the continuous case. The filter parameters for the velocity and acceleration estimation were b_0 = 10^4, b_1 = 200, a_0 = 10^4 and a_1 = 200. Table 3 shows the different sampling times used for the various simulation conditions and the results obtained, based on three error indexes. The evaluated indexes are the integral of the control error norm, the error norm once the stationary state is reached, and the integral of the norm of the manipulator dynamic parameter estimates. In Table 3, T_1 represents the sampling time of the faster dynamics loop and T_2 is the vision loop sampling time. Figs. 13 and 14 show the trajectory of the robot and the norm of the control errors for some simulation conditions of Table 3.

A second simulation study was performed to determine the influence of perturbations in the closed-loop system due to measurement and sensing errors. Besides, non-modelled robot dynamics was also assumed. In this last experience, the same simulation conditions as in the previous ones were considered, i.e. the initial and final manipulator positions and the initial uncertainties of the parameters. A realistic sampling time was also considered for the evaluation, i.e. T_1 = 0.0025 s and T_2 = 0.05 s.

Table 4
Performance for different measurement noises

Order   Q (b)   ∇q1 (σ1²)    ∇q2 (σ2²)    q̇1, q̇2: m̄   q̇1, q̇2: σ²   ∫‖ξ̃‖ dt   ‖ξ̃‖ final   ∫‖θ̂‖ dt
1       12      3.14e-12     7.66e-12     0             0.05          422.05     9.39         119.22
2       6       3.14e-12     7.66e-12     0             0.05          428.88     8.94         121.8
3       4       3.14e-12     7.66e-12     0             0.05          867.13     8.17         243.09
4       12      0.00042      0.00042      0             0.05          1055.8     51.6         111.95
5       12      3.14e-12     7.66e-12     0             0.25          1906.0     21.24        494.23
6       12      3.14e-12     7.66e-12     0             0.5           3599.2     52.2         874.73
7       12      3.14e-12     7.66e-12     0.5           0.05          1023.9     8.78         331.15
8       12      3.14e-12     7.66e-12     1             0.05          2293.7     24.5         394.56
9       12      3.14e-12     7.66e-12     10            0.05          5694.5     125.64       171.24
10      6       0.00042      0.00042      1             0.5           1781.7     60.4         1083.2


Fig. 15. (a) Trajectory in the robot workspace; (b) error norm. Simulation 1 of Table 4.

The controller tuning was done using the same gain matrices as in the previous sections, because they guarantee an acceptable performance for the given conditions. Various cases were considered regarding the control system performance against perturbations, as shown in Table 4. These cases are:

• Quantization noise. It arises when a certain number of bits is considered in the image discretization process (see Table 4, rows 1–3). It can be concluded that the image discretization process has only a minor influence on the system behaviour.

• Measurement noise introduced by the optical encoders. In this case, a noise with zero mean and a different variance for each joint is considered (see rows 1 and 4, Table 4). Noise at the levels actually introduced by optical encoders does not affect the system performance, but when the noise is high enough an important degradation of the control objective can be noted.

Fig. 16. (a) Trajectory in the robot workspace; (b) error norm. Simulation 4 of Table 4.


Fig. 17. (a) Trajectory in the robot workspace; (b) error norm. Simulation 9 of Table 4.

• Velocity measurement noise due to the tachometer. This case is obtained by assuming Gaussian noise with different mean and variance values (rows 1, 5–9, Table 4). The degradation in system behaviour is remarkable when the mean value of the noise increases. It can be seen that, under the above-mentioned conditions, the system tends to become unstable.

• Worst case. Finally, the last row of Table 4 (row 10) considers the worst case and, as expected, the system performance is poor.

Figs. 15–19 show the results for the simulations and conditions of Table 4. Figs. 15–18 show simulation results for the conditions of cases 1, 4, 9 and 10 of Table 4. In each of them, curve (a) represents the evolution of the manipulator's

Fig. 18. (a) Trajectory in the robot workspace; (b) error norm. Simulation 10 of Table 4.


Fig. 19. (a) and (b) Norm of the parameters vector of the manipulator. Simulations 1, 4, 9 and 10 of Table 4.

end-effector and curve (b) the norm of the control error. Finally, Fig. 19 depicts the norm of the parameter vector estimate: curve (a) shows this norm for cases 1 and 4, and curve (b) the same for cases 9 and 10.

7. Conclusions

This paper has presented a positioning and a tracking adaptive controller for robots in camera-in-hand configuration using direct visual feedback. The full non-linear robot dynamics has been considered in the controller design. Control errors are proven to converge asymptotically to zero for the positioning controller and to be ultimately bounded for the tracking one. The work has focused on the control problem, without considering the real-time image processing problem, which is assumed to be already solved. Simulations illustrate the capability of the proposed controllers to attain suitable control performance under robot dynamics uncertainties.

References [1] K. Hashimoto, Visual servoing-real-time control of robot manipulators based on visual sensory feedback, in: K. Hashimoto (Ed.), Visual Servoing, World Scientific, Singapore, 1994. [2] C.P., Visual Control of Robots, Research Studies Press Ltd., 1996. [3] R. Hutchinson, G.D. Hager, P. Corke, A tutorial on visual servo control, IEEE Transactions on Robotics and Automation 12 (1996) 651– 670. [4] P. Corke, M. Good, Dynamic effects in visual closed-loop systems, IEEE Transactions on Robotics and Automation 12 (1996) 671–683. [5] All, Special issue on visual servoing, IEEE Robotics and Automation Magazine, 5 (1996). [6] P. Corke, S. Hutchinson, Real-time vision, tracking and control, in: Proceedings of the IEEE International Conference on Robotics and Automation, San Francisco, CA, April 2000. [7] P. Allen, A. Tomcenko, B. Yoshimi, P. Michelman, Automated tracking and grasping of a moving object with a robotic hand–eye system, IEEE Transactions on Robotics and Automation 9 (1993) 152–165. [8] G.D. Hager, W.C. Chang, A.S. Morse, Robot hand–eye coordination based on stereo vision, IEEE Control System Magazine 15 (1995) 30–39. [9] G.D. Hager, A modular system for robust positioning using feedback from stereo vision, IEEE Transacations on Robotics and Automation 13 (1997) 582–595.


[10] R. Kelly, Robust asymptotically stable visual servoing of planar robots, IEEE Transactions on Robotics and Automation 12 (1996) 759– 766. [11] L.E. Wiess, A.C. Sanderson, C.P. Newman, Dynamic sensor-based control of robots with visual feedback, IEEE Journal of Robotics and Automation 3 (1987) 404–417. [12] F. Chaumette, P. Rives, B. Espiau, Positioning of a robot with respect to an object, tracking it and estimating its velocity by visual servoing, in: Proceedings of the IEEE International Conference on Robotics and Automation, Sacramento, CA, April 1991, pp. 2248–2253. [13] K. Hashimoto, T. Kimoto, T. Ebine, H. Kimura, Manipulator control with image-based visual servoing, in: Proceedings of the IEEE international Conference on Robotics and Automation, Sacramento, CA, June 1991, pp. 2267–2272. [14] W. Jang, Z. Bien, Feature based visual servoing of an eye-in-hand robot with improved tracking performance, in: Proceedings of the IEEE International Conference on Robotics and Automation, Sacramento, CA, April 1991, pp. 2254–2260. [15] B. Espiau, F. Chaumette, P. Rives, A new approach to visual servoing in robotics, IEEE Transactions on Robotics and Automation 8 (1992) 313–326. [16] H. Hashimoto, T. Kubota, M. Sato, F. Harashima, Visual control of robotics manipulator based on neural networks, IEEE Transactions on Industrial Electronics 9 (1992) 490–496. [17] F. Chaumette, A. Santos, Tracking a moving object by visual servoing, in: Proceedings of the IFAC World Congress, vol. 9, Sydney, 1993, pp. 409–414. [18] N.P. Papanikolopoulos, P.K. Khosla, T. Kanade, Visual tracking of a moving target by a camera mounted on a robot: a combination of control and vision, IEEE Transactions on Robotics and Automation 9 (1993). [19] N.P. Papanikolopoulos, P.K. Khosla, Adaptive robotic visual tracking: theory and experiments, IEEE Transactions on Automatic Control 38 (1993) 429–445. [20] K. Hashimoto, H. Kimura, Dynamic visual servoing with non-linear model-based control, in: Proceedings of the IFAC World Congress, vol. 9, Sydney, Australia, June 1993, pp. 405–408. [21] K. Hashimoto, T. Ebine, H. Kimura, Visual servoing with hand–eye manipulator—optimal control approach, IEEE Transactions on Robotic and Automation 12 (1996) 766–774. [22] A. Astolfi, L. Hsu, M. Netto, R. Ortega, A solution to the adaptive visual servoing problem, in: Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, May 21–26, 2001, pp. 743–748. [23] E. Malis, Visual servoing invariant to changes in camera intrinsic parameters, in: Proceedings of the Eighth IEEE International Conference on Computer Vision, vol. 1, July 7–14, 2001, pp. 704–709. [24] M. Asada, T. Tanaka, K. Hosoda, Adaptive binocular visual servoing for independently moving target tracking, in: Proceedings of the IEEE International Conference on Robotics and Automation, San Francisco, CA, April 2000. [25] E. Zergeroglu, D. Dawson, Y. Fang, A. Malatpure, Adaptive camera calibration control of planar robot: elimination of camera space velocity measurements, in: Proceedings of the IEEE International Conference on Control Applications, September 25–27, 2000, pp. 560– 565. [26] C. Cheah, K. Lee, S. Kawamura, S. Arimoto, Asymptotic stability of robot control with approximate Jacobian matrix and its application to visual servoing, in: Proceedings of 39th IEEE Conference on Decision and Control, vol. 4, December 12–15, 2000, pp. 3939– 3944. [27] E. Zergeroglu, D. Dawson, M. de Queiroz, S. 
Nagarkatti, Robust visual-servo control of robot manipulators in the presence of uncertainty, in: Proceedings of the IEEE 38th Conference on Decision and Control, vol. 4, December 7–10, 1999, pp. 4137–4142. [28] A. Maruyama, M. Fujita, Robust visual servo control for planar manipulators with the eye-in-hand configurations, in: Proceedings of the IEEE 36th Conference on Decision and Control, vol. 3, December 10–12, 1997, pp. 2551–2552. [29] L. Hsu, P. Aquino, Adaptive visual tracking with uncertain manipulator dynamics and uncalibrated camera, in: Proceedings of the IEEE 38th Conference on Decision and Control, vol. 2, December 7–10, 1999, pp. 1248–1253. [30] L. Hsu, R. Costa, P. Aquino, Stable adaptive visual servoing for moving targets, in: Proceedings of 2000 American Control Conference, vol. 3, June 28–30, 2000, pp. 2008–2012. [31] R. Carelli, O. Nasisi, B. Kuchen, Adaptive robot control with visual feedback, in: Proceedings of the American Control Conference, Baltimore, MD, June 1994. [32] O. Nasisi, R. Carelli, B. Kuchen, Tracking adaptive control of robots with visual feedback, in: Proceedings of the 13th IFAC World Congress, San Francisco, USA, June 1996, pp. 265–270. [33] J. Slotine, N. Li, Adaptive manipulator control: a case of study, in: Proceedings of the IEEE International Conference on Robotics and Automation, Raleigh, NC, April 1987. [34] K. Narendra, A. Annaswamy, Stable Adaptive Systems, Prentice-Hall, Englewood Cliffs, NJ, 1989. [35] M. Spong, M. Vidyasagar, Robot Dynamics and Control, Wiley, New York, 1989. [36] R. Ortega, M. Spong, Adaptive motion control of rigid robots: a tutorial, Automatica 25 (6) (1989) 877–888. [37] L. Sciavicco, B. Sciciliano, Modeling and Control of Robot Manipulators, McGraw-Hill, New York, 1996. [38] J. Feddema, C. Lee, O.R. Mitchell, Weighted selection of image features for resolved rate visual feedback control, IEEE Transactions on Robotics and Automation 7 (1991) 31–47. [39] J.J. Craig, Introduction to Robotics Mechanics and Control, Addison-Wesley, Reading, MA, 1986. [40] S. Sastry, M. Bodson, Adaptive Control: Stability, Convergence and Robustness, Prentice-Hall, New York, 1989.


Oscar Nasisi was born in San Luis, Argentina, in 1961. He received the Electronics Engineering degree from the National University of San Juan, Argentina, the M.S. degree in Electronics Engineering from the National Universities Foundation for International Cooperation, Eindhoven, The Netherlands, and the Ph.D. degree from the National University of San Juan in 1986, 1989, and 1998, respectively. Since 1986, he has been with the Instituto de Automática, National University of San Juan, where he currently is a Full Professor. His research areas of interest are artificial vision, robotics, and adaptive control.

Ricardo Carelli was born in San Juan, Argentina. He graduated in Engineering from the National University of San Juan, Argentina, and obtained a Ph.D degree in Electrical Engineering from the National University of Mexico (UNAM). He is presently Full Professor at the National University of San Juan and Senior Researcher of the National Council for Scientific and Technical Research (CONICET, Argentina). He is Adjunct Director of the Instituto de Automática, National University of San Juan. His research interests are in robotics, manufacturing systems, adaptive control and artificial intelligence applied to automatic control. Prof. Carelli is a Senior Member of IEEE and a Member of AADECA-IFAC.
