Stable adaptive control of non-minimum phase systems *


Systems & Control Letters, Volume 2, Number 3, October 1982

B. EGARDT
Dept. of Research and Innovation, KX2, ASEA AB, S-721 83 Västerås, Sweden

C. SAMSON
I.R.I.S.A., Laboratoire d'Automatique, Campus de Beaulieu, 35042 Rennes Cedex, France

Received 1 May 1982
Revised 20 June 1982

An adaptive pole placement design for not necessarily minimum phase systems is analyzed with respect to stability. Conditions are given for boundedness of closed-loop signals when the process is subject to bounded disturbances. The most restrictive one can be avoided in a local stability result.

Keywords: Adaptive control, Stability, Pole assignment.

1. Introduction

The problem of adaptively controlling a linear system with constant but unknown parameters has attracted much interest. This particular formulation of an adaptive control problem has led to fairly simple and practical algorithms as well as an extensive amount of literature on e.g. convergence and stability properties. Probably the most widespread approach to the problem is to use a separation between identification and control, i.e. to use a control design as if the parameters were known and then just replace the unknown parameters by some estimates. Self-Tuning Regulators (STR) and Model Reference Adaptive Controllers (MRAC) belong to this class of schemes.

* This work was partly carried out at Information Systems Lab, Stanford University and partly supported by the National Science Foundation under Grant NSF ENG-78-10003 and the U.S. Army Research Office, contract DDAG-29-79C-0215.


Considerable attention has been paid to the task of presenting a global stability result for STR's and MRAC's. Only during the last few years has the problem been solved in some generality. Most of the results concern direct control schemes in the absence of disturbances, e.g. [1-7]. The term 'direct' here indicates that the scheme is designed in such a way as to estimate the controller parameters directly. Indirect schemes, on the other hand, are based on a two-step procedure consisting of estimation of the open-loop system parameters and then calculation of the control parameters. The distinction between the two approaches is not always clear, and several STR's can, for example, be seen as direct or indirect depending on the viewpoint that is preferred, see e.g. [8,9]. It is therefore not surprising that stability results for some direct schemes can also be applied to the corresponding indirect schemes [10]. This holds also for some convergence results [11].

In spite of the similarity mentioned above, the stability analysis of indirect schemes - which are often based on pole placement design, see [12-19] - often differs from the analysis of direct schemes. For example, the proofs are often centered around a persistently exciting condition, e.g. [14,17-20]. We believe that this assumption is not necessary and is more a consequence of the technical difficulties encountered in the analysis of adaptive control schemes that do not cancel the process zeros, the only ones that can be used with a non-minimum phase system. Some stability results that are not based on a persistently exciting condition have been presented for indirect schemes. For example, pole placement was considered in [21] and a linear-quadratic design was analyzed in [22-25].

In this paper we combine the techniques of Egardt [2,5,10] with the methods of Samson [22-25] to analyze the stability properties of a pole placement algorithm in the case of general bounded disturbances. As in some of the references cited above, the desire has been to get a stability result for systems with disturbances that are not characterized as particular stochastic processes. For the latter situation, completely different techniques have been used to prove stability and convergence of adaptive algorithms, see e.g. [26].

2. Problem formulation

This section gives the necessary prerequisites for the formulation of the adaptive control problem to be considered. This includes the process description and a brief account of the observer/controller configuration that forms the basis of the adaptive controller.

The process to be controlled is described by

$$y_t = A(q^{-1})\,y_{t-1} + B(q^{-1})\,u_{t-1} \qquad (1)$$

where

$$A(q^{-1}) = a_1 + a_2 q^{-1} + \cdots + a_n q^{-(n-1)},$$
$$B(q^{-1}) = b_1 + b_2 q^{-1} + \cdots + b_n q^{-(n-1)}. \qquad (2)$$

Remark. Coefficients in the A- and B-polynomials may be zero and it is not necessary to know the exact number of time-delays in the process.

When the process is subject to a bounded disturbance $v_t$, the model (1) is replaced by

$$y_t = A(q^{-1})\,y_{t-1} + B(q^{-1})\,u_{t-1} + C(q^{-1})\,v_{t-1} + v_t, \qquad (3)$$

where $C(q^{-1}) = c_1 + c_2 q^{-1} + \cdots + c_m q^{-(m-1)}$, $v_t$ is bounded and $m \le n$.
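For concreteness, here is a minimal simulation sketch of the disturbed model (3). It is not part of the paper; the coefficient values, input and disturbance sequence are hypothetical, chosen so that $B(q^{-1})$ has a zero outside the unit circle (a non-minimum phase example) while the process itself is stable. Python/NumPy is assumed.

```python
import numpy as np

# Hypothetical example of model (3) with n = 2, m = 1.
# B(q^-1) = 1 - 1.5 q^-1 has its zero at q = 1.5, outside the unit circle,
# so this example process is non-minimum phase.
a = np.array([1.2, -0.5])   # a_1, a_2
b = np.array([1.0, -1.5])   # b_1, b_2
c = np.array([0.3])         # c_1
n, m = len(a), len(c)

rng = np.random.default_rng(0)
T = 200
u = rng.uniform(-1.0, 1.0, T)          # some bounded input sequence
v = 0.05 * rng.uniform(-1.0, 1.0, T)   # bounded disturbance v_t
y = np.zeros(T)

for t in range(T):
    # y_t = A(q^-1) y_{t-1} + B(q^-1) u_{t-1} + C(q^-1) v_{t-1} + v_t
    y[t] = v[t]
    for i in range(1, n + 1):
        if t - i >= 0:
            y[t] += a[i - 1] * y[t - i] + b[i - 1] * u[t - i]
    for i in range(1, m + 1):
        if t - i >= 0:
            y[t] += c[i - 1] * v[t - i]

print("max |y_t| over the run:", np.abs(y).max())
```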

Two equivalent representations of (3) will be used later. The first one is convenient for identification purposes:

$$y_t = \theta^{\mathrm T} \varphi_{t-1} + v_t, \qquad (4)$$

where

$$\theta^{\mathrm T} = [a_1, \ldots, a_n, b_1, \ldots, b_n, c_1, \ldots, c_m], \qquad (5)$$

$$\varphi_{t-1}^{\mathrm T} = [y_{t-1}, \ldots, y_{t-n}, u_{t-1}, \ldots, u_{t-n}, v_{t-1}, \ldots, v_{t-m}]. \qquad (6)$$
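The regression form (4)-(6) is the representation a parameter estimator works with. The paper's own estimation algorithm is not contained in this excerpt; purely as an illustration, the sketch below builds the regressor of (6) and updates an estimate of $\theta$ with standard recursive least squares. Since the disturbances $v_{t-i}$ in (6) are not measurable, the sketch substitutes past prediction residuals for them (an extended-least-squares-style choice made here for the example, not necessarily what the paper does); all function and variable names are hypothetical.

```python
import numpy as np

def rls_identify(y, u, n, m, lam=1.0):
    """Standard recursive least squares on the regression form (4):
    y_t = theta^T phi_{t-1} + v_t, with phi as in (6).  The unmeasurable
    v_{t-i} entries of phi are replaced by past prediction residuals,
    a substitution used here only for illustration."""
    d = 2 * n + m                      # dimension of theta and phi
    theta = np.zeros(d)                # parameter estimate
    P = 1e3 * np.eye(d)                # "covariance" of the estimate
    res = np.zeros(len(y))             # residuals standing in for v_t
    for t in range(len(y)):
        # phi_{t-1} = [y_{t-1},...,y_{t-n}, u_{t-1},...,u_{t-n}, v_{t-1},...,v_{t-m}]
        past = lambda s, k: s[t - k] if t - k >= 0 else 0.0
        phi = np.array([past(y, i) for i in range(1, n + 1)]
                       + [past(u, i) for i in range(1, n + 1)]
                       + [past(res, i) for i in range(1, m + 1)])
        e = y[t] - theta @ phi                      # prediction error
        K = P @ phi / (lam + phi @ P @ phi)         # RLS gain
        theta = theta + K * e
        P = (P - np.outer(K, phi @ P)) / lam
        res[t] = y[t] - theta @ phi                 # posterior residual
    return theta
```

With the data from the simulation sketch above, a call such as rls_identify(y, u, n=2, m=1) would return an estimate of the parameter vector in (5).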

The second equivalent representation is a state equation in observable canonical form, convenient for the formulation of the state-variable feedback:

$$x_{t+1} = F x_t + A y_t + B u_t + C v_t, \qquad y_t = [1\;\;0\;\;\cdots\;\;0]\,x_t + v_t, \qquad (7)$$

where $F$ is the $n \times n$ shift matrix with ones on the superdiagonal and zeros elsewhere, and

$$A^{\mathrm T} = [a_1 \;\; \cdots \;\; a_n], \qquad B^{\mathrm T} = [b_1 \;\; \cdots \;\; b_n], \qquad C^{\mathrm T} = [c_1 \;\; \cdots \;\; c_m \;\; 0 \;\; \cdots \;\; 0].$$

Remark. It is easy to check that the state vector in (7) is given by

$$x_t = H^{\mathrm T} \varphi_{t-1}, \qquad (8)$$

where

$$H^{\mathrm T} = \begin{bmatrix}
a_1 & a_2 & \cdots & a_n & b_1 & b_2 & \cdots & b_n & c_1 & \cdots & c_m \\
a_2 & \cdots & a_n & 0 & b_2 & \cdots & b_n & 0 & c_2 & \cdots & 0 \\
\vdots & & & \vdots & \vdots & & & \vdots & \vdots & & \vdots \\
a_n & 0 & \cdots & 0 & b_n & 0 & \cdots & 0 & 0 & \cdots & 0
\end{bmatrix}, \qquad (9)$$

i.e. the $j$-th row of $H^{\mathrm T}$ contains the coefficients $a_j, \ldots, a_n$, $b_j, \ldots, b_n$ and $c_j, \ldots, c_m$, each block padded with trailing zeros.
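To make the state-space representation concrete, the sketch below follows the reconstruction of (7)-(9) given above (which may differ in notation from the paper's original display): it builds $F$ and the coefficient vectors with hypothetical values, simulates the state equation, and checks numerically that the state satisfies $x_t = H^{\mathrm T}\varphi_{t-1}$ at every step.

```python
import numpy as np

# Hypothetical coefficients (n = 3, m = 2), used only to illustrate (7)-(9).
a = np.array([0.7, -0.2, 0.1])
b = np.array([1.0, -1.4, 0.3])
c = np.array([0.4, 0.1])
n, m = len(a), len(c)
c_pad = np.concatenate([c, np.zeros(n - m)])   # c_k = 0 for k > m

# F is the n x n shift matrix: ones on the superdiagonal, zeros elsewhere.
F = np.diag(np.ones(n - 1), k=1)

# H^T as in (9): row j holds a_j..a_n, b_j..b_n, c_j..c_m, each block
# padded with trailing zeros.
HT = np.zeros((n, 2 * n + m))
for j in range(n):
    HT[j, :n - j] = a[j:]
    HT[j, n:2 * n - j] = b[j:]
    if j < m:
        HT[j, 2 * n:2 * n + m - j] = c[j:]

rng = np.random.default_rng(1)
T = 50
u = rng.standard_normal(T)
v = 0.1 * rng.standard_normal(T)
x = np.zeros(n)                     # state of (7), x_0 = 0
y = np.zeros(T)

for t in range(T):
    y[t] = x[0] + v[t]              # output equation of (7)
    if t >= 1:
        # phi_{t-1} as in (6), with entries before time 0 taken as zero
        past = lambda s, k: s[t - k] if t - k >= 0 else 0.0
        phi = np.array([past(y, i) for i in range(1, n + 1)]
                       + [past(u, i) for i in range(1, n + 1)]
                       + [past(v, i) for i in range(1, m + 1)])
        assert np.allclose(x, HT @ phi)              # relation (8)
    x = F @ x + a * y[t] + b * u[t] + c_pad * v[t]   # state update of (7)

print("relation (8) between the state and the regressor verified numerically")
```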