Every Continuous Nonlinear Control System Can be Obtained by Parametric Convex Programming


IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 53, NO. 8, SEPTEMBER 2008


    inf_G γ
    s.t. (D_i A + D_i G)^T P + P (D_i A + D_i G) < γP,   i = 1, 2, ..., n
         h_i G y_ij ≤ 0,   i = 1, 2, ..., n;  j = 1, 2, ..., v

where h_i is an n-dimensional row vector whose ith element is 1 and whose other elements are 0, the y_ij, j = 1, ..., v, are the vertices of the hypercube {x ∈ R^n : x_i = 1, A_i x ≥ 0}, and v is the number of such vertices. If γ < 0 and k = 0, go to Step 4). If k > 0 and γ < 0 or γ ≥ γ_k, go to Step 4). Otherwise, set k = k + 1, γ_k = γ, and go to the next step.

Step 3) Using G obtained previously, solve the following linear matrix inequality optimization problem for P and γ:

    inf_P γ
    s.t. (D_i A + D_i G)^T P + P (D_i A + D_i G) < γP,   i = 1, 2, ..., n
         P > 0

If γ < 0 or γ ≥ γ_k, then go to Step 4). Otherwise, set k = k + 1, γ_k = γ, and go to Step 2).

Step 4) If γ < 0, system (1) is globally asymptotically stable at the origin. Otherwise, no conclusion can be drawn.

III. NUMERICAL EXAMPLE

Consider (1) with

    A = [  1.1   1.6   1.0
          -0.4  -1.2   2.4
          -0.8  -0.9  -1.5 ].

Using Theorem 1 in [3], no conclusion can be drawn on the stability of this system. Using Theorem 2 in this note, however, we obtain suitable P and G:

    P = [  91.6448  110.5317   37.2098
          110.5317  148.1661   45.5072
           37.2098   45.5072   17.5660 ]

    G = [  -3.1112   -0.0092   -3.1059
          -26.0082  -15.9061  -31.0808
          -32.9510  -13.4110  -45.5564 ].

Obviously, G is not diagonally dominant. Fig. 1 shows the state trajectory with initial state x_0 = [1, -1, -1]^T.

IV. CONCLUSION

This note gave a new, less conservative stability condition for linear systems under state saturation: by imposing fewer constraints on the free matrix G, a less conservative stability condition was obtained, and an iterative linear matrix inequality approach was given to test the stability of the system. The proposed method extends readily to linear systems with partial state saturation and to controller synthesis problems. A numerical example demonstrated the effectiveness of the presented algorithm.

REFERENCES

[1] L. Hou and A. N. Michel, "Asymptotic stability of systems with saturation constraints," in Proc. 35th Conf. Decision Control, 1996, pp. 2624–2629.
[2] R. Mantri, A. Saberi, and V. Venkatasubramanian, "Stability analysis of continuous time planar systems with state saturation nonlinearity," in Proc. Int. Symp. Circuits Syst., 1996, pp. 60–63.
[3] H. Fang and Z. Lin, "Stability analysis for linear systems under state constraints," IEEE Trans. Autom. Control, vol. 49, no. 6, pp. 950–955, Jun. 2004.
[4] D. Liu and A. N. Michel, "Asymptotic stability of systems with partial state saturation nonlinearities," in Proc. 33rd Conf. Decision Control, Lake Buena Vista, CA, 1994, pp. 1311–1316.
[5] H. Kar and V. Singh, "Stability analysis of discrete-time systems in a state-space realisation with partial state saturation nonlinearities," Inst. Elect. Eng. Proc. Control Theory Appl., vol. 150, no. 3, pp. 205–208, 2003.
[6] T. Hu and Z. Lin, Control Systems with Actuator Saturation: Analysis and Design. Boston, MA: Birkhäuser, 2001.

Every Continuous Nonlinear Control System Can be Obtained by Parametric Convex Programming

Michel Baes, Moritz Diehl, and Ion Necoara

Abstract—In this short note, we define parametric convex programming (PCP) slightly differently from the usual way, extending convexity not only to the variables but also to the parameters, and we show that the widely applied model predictive control (MPC) technique is a particular case of PCP. The main result of the note is an answer to the inverse question of PCP: which feedback laws can be generated by PCP? By employing results of convex analysis, we provide a constructive proof, yet not a computational one, which allows us to conclude that every continuous feedback law can be obtained by PCP.

Index Terms—Continuous feedback laws, model predictive control (MPC), parametric convex programming (PCP).
I. INTRODUCTION

In parametric programming, an optimization problem is considered where the data are functions of some parameters. Explicit parametric programming techniques systematically subdivide the parameter space into characteristic regions where the optimal value and an optimizer are given as explicit functions of the parameters [1]–[3].

Manuscript received April 16, 2008. Current version published September 24, 2008. This work was supported by the Research Council KUL (Center of Excellence on Optimization in Engineering (OPTEC) EF/05/006, GOA AMBioRICS, IOF-SCORES4CHEM and Ph.D./postdoc/fellow grants), the Flemish Government via FWO (Ph.D./postdoc grants, projects G.0452.04, G.0499.04, G.0211.05, G.0226.06, G.0321.06, G.0302.07, G.0320.08 (Convex Dynamic Programming for MPC), G.0558.08, research communities ICCoS, ANMMM, MLDM) and via IWT (Ph.D. grants, McKnow-E, Eureka-Flite), the EU via ERNSI, and by the Belgian Federal Science Policy Office: IUAP P6/04 (DYSCO, Dynamical systems, control and optimization, 2007–2011). Recommended by Associate Editors R. Tempo and M. Fujita. The authors are with the Center of Excellence "Optimization in Engineering" (OPTEC), Electrical Engineering Department, Katholieke Universiteit Leuven, Leuven-Heverlee 3001, Belgium (e-mail: [email protected]; [email protected]; [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TAC.2008.928131

0018-9286/$25.00 © 2008 IEEE Authorized licensed use limited to: Katholieke Universiteit Leuven. Downloaded on March 15,2010 at 10:24:33 EDT from IEEE Xplore. Restrictions apply.


In recent years, a new interest in parametric programming arose from model predictive control (MPC) [3]–[7], a well-known technique in the system theory and optimal control community. MPC is the most successful advanced control technology implemented in industry due to its ability to handle complex systems with hard input and state constraints [8]–[10]. The essence of MPC is to determine a control profile that optimizes a cost criterion over a prediction window and then to apply this control profile until new process measurements become available. Then the whole procedure is repeated. Feedback is incorporated by using the measurements to update the optimization problem for the next step. In this paper, we study parametric convex programming (PCP), i.e., optimization problems where the functions defining the cost and the constraints are convex in both variables and parameters. The main result of the paper consists in deriving a constructive proof which shows that any continuous feedback law can be obtained from a PCP problem. Typically, such controllers may arise from MPC-type problems with convex stage costs and constraint sets. This note is organized as follows. In Section II, we define the main ingredients of an MPC scheme and in Section III, we introduce PCP and we show that linear MPC can be formulated as a PCP. In Section IV, we consider the inverse problem of PCP: we show that given a continuous feedback law and a strictly convex function, there exists a convex feasible set and a jointly convex cost function, whose minimizer over this set for a fixed state coincides exactly with the given feedback law. We conclude with some directions for future research. II. MODEL PREDICTIVE CONTROL We consider the discrete time dynamic system

    x(k + 1) = f(x(k), u(k)),   k ∈ N    (1)

where x ∈ R^n is the state and u ∈ R^m is the input. The control and state sequence must satisfy (x(k), u(k)) ∈ Ω for all k ≥ 0, where usually Ω is a compact, convex subset of R^{n+m}. We employ u = (u_0, u_1, ..., u_{N-1}) to denote a control sequence over a prediction horizon of length N and φ(k; x, u) to denote the state solution of (1) at step k when the initial state is x at step 0 and the control u is applied. By definition φ(0; x, u) := x. We also define a cost function

    V_N(x, u) = Σ_{i=0}^{N-1} ℓ(x_i, u_i) + V_f(x_N)    (2)

where x_i := φ(i; x, u) (thus x_0 = x), ℓ(x_i, u_i) is the stage cost associated to state x_i and control u_i, and V_f(x_N) is a terminal cost. For a given initial condition x, the set of feasible input sequences is defined by

    Π_N(x) := {u ∈ R^{Nm} : (x_i, u_i) ∈ Ω for all i ∈ {0, ..., N-1}, x_N ∈ X_f}    (3)

where X_f specifies some terminal constraints on the state. We denote by X_N the set of initial states for which a feasible input sequence exists, i.e.,

    X_N := {x ∈ R^n : Π_N(x) ≠ ∅}.    (4)
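To make definitions (3)–(4) concrete, the following sketch checks membership of a candidate control sequence in Π_N(x). The scalar dynamics, the box constraint set Ω, and the terminal interval X_f below are illustrative assumptions, not data from the paper:

```python
# Sketch: membership test for the feasible input set Pi_N(x) of (3).
# The dynamics, constraint box, and terminal set are assumed for illustration.

def simulate(x, u_seq, f):
    """Return the state trajectory x_0, ..., x_N under the control sequence u_seq."""
    traj = [x]
    for u in u_seq:
        traj.append(f(traj[-1], u))
    return traj

def in_Pi_N(x, u_seq, f, in_Omega, in_Xf):
    """Check (x_i, u_i) in Omega for i = 0..N-1 and x_N in X_f, as in (3)."""
    traj = simulate(x, u_seq, f)
    stage_ok = all(in_Omega(xi, ui) for xi, ui in zip(traj[:-1], u_seq))
    return stage_ok and in_Xf(traj[-1])

# Assumed example: x(k+1) = 0.9 x(k) + u(k), |x| <= 2, |u| <= 1, X_f = [-0.5, 0.5]
f = lambda x, u: 0.9 * x + u
in_Omega = lambda x, u: abs(x) <= 2 and abs(u) <= 1
in_Xf = lambda x: abs(x) <= 0.5

print(in_Pi_N(1.0, [-0.5, -0.3], f, in_Omega, in_Xf))  # True: steers into X_f
print(in_Pi_N(1.0, [1.0, 1.0], f, in_Omega, in_Xf))    # False: x_N leaves X_f
```

A state x then belongs to X_N of (4) exactly when at least one sequence passes this test.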

The optimal control problem for a given initial state x is formulated as follows:

    P_N(x):  V_N^0(x) := inf_{u ∈ Π_N(x)} V_N(x, u).    (5)

It follows that the domain of V_N^0 is the set X_N. The optimal control problem P_N(x) yields an optimal control sequence u_N^0(x) ∈ arg min_{u ∈ Π_N(x)} V_N(x, u) for all x ∈ X_N:

    u_N^0(x) = (u_0^0(x), u_1^0(x), ..., u_{N-1}^0(x)).    (6)

We can obtain an infinite-horizon controller by repeatedly solving the finite-horizon optimal control problem (5), where the current state of the plant is used as the initial state for the optimization. From the computed optimal control sequence only the first control sample u_0^0(x) is implemented, and the whole procedure is repeated at the next step when new measurements of the state are available. This is referred to as the receding horizon implementation of the controller, and the resulting design method is called model predictive control. Note that the optimization problem (5) depends on a parameter, the state x, appearing in the cost function but also in the constraints. Therefore, we can view (5) as a parametric program

    g^0(x) = inf_u g(x, u)   s.t. (x, u) ∈ Γ    (7)

where we can easily identify g(x, u) with V_N(x, u) and Γ with {(x, u) : x ∈ X_N, u ∈ Π_N(x)}.
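The receding-horizon procedure described above can be sketched in a few lines. The scalar system, cost weights, and the brute-force grid search standing in for the optimizer of (5) are illustrative assumptions, not the paper's algorithm:

```python
import itertools

# Sketch of the receding-horizon implementation: at each step, solve the
# finite-horizon problem (5) for the current state, apply only the first
# control sample u_0^0(x), and repeat. All numbers are assumptions.

f = lambda x, u: 0.9 * x + u              # assumed dynamics
stage = lambda x, u: x * x + 0.1 * u * u  # assumed stage cost l(x, u)
Vf = lambda x: 10.0 * x * x               # assumed terminal cost
U_GRID = [i / 10.0 for i in range(-10, 11)]  # admissible inputs, |u| <= 1
N = 3                                     # prediction horizon

def V_N(x, u_seq):
    """Cost (2): stage costs along the predicted trajectory plus terminal cost."""
    cost = 0.0
    for u in u_seq:
        cost += stage(x, u)
        x = f(x, u)
    return cost + Vf(x)

def mpc_step(x):
    """Solve (5) by enumeration and return the first element of the optimizer (6)."""
    best = min(itertools.product(U_GRID, repeat=N), key=lambda u: V_N(x, u))
    return best[0]

x = 1.0
for k in range(5):                        # closed-loop simulation
    x = f(x, mpc_step(x))
print(abs(x) < 0.1)  # the controller regulates the state toward the origin
```

The grid search is only a stand-in; in practice the inner problem is solved by a convex programming method, which is exactly the connection to PCP developed next.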

III. PARAMETRIC CONVEX PROGRAMMING: CONVEXITY IN BOTH VARIABLES AND PARAMETERS

We now define parametric convex programming.
Definition 3.1: Let Γ be a nonempty subset of R^n × R^m and let g: Γ → R. The parametric optimization problem (7) is called a Parametric Convex Program (PCP) if the function g is convex and if the set Γ is convex.
The reader should note that in our definition of PCP we consider convexity in the combined variable vector (x, u), as in [11], and not only in u, as it is sometimes considered (see, e.g., [4]).
First let us remark that linear MPC, and sometimes even robust linear MPC, can be posed in the framework of PCP as defined before. Indeed, in this case f(x, u) = Ax + Bu, Ω and X_f are polyhedral sets (i.e., described by linear inequalities), and ℓ(x, u) = x^T Q x + u^T R u, V_f(x) = x^T P x, where Q, R, P ⪰ 0 (i.e., the matrices Q, R and P are positive semi-definite). Then the optimal control problem (5) can be written as follows:

    V_N^0(x) = inf_u  u^T H u + 2 x^T F u + x^T G x
               s.t.  D x + E u ≤ d    (8)

where H, F, G, D, E and d are easily obtained from Q, R, the equality x_i = A^i x + Σ_{j=0}^{i-1} A^j B u_{i-j-1} for all i ≥ 0 (with x_0 = x as denoted before), and from the linear inequalities that define Ω and X_f. Since the matrix [H F^T; F G] ⪰ 0, it follows that V_N(x, u) = u^T H u + 2 x^T F u + x^T G x is a convex function in both variables and parameters. Moreover, the set Γ = {(x, u) : D x + E u ≤ d} is convex. In

conclusion, the MPC feedback law can be obtained in this case as the solution of a PCP followed by a projection (i.e., we extract the first m components of the optimizer).
The following result provides sufficient conditions under which the optimal value function g^0 and the optimizer u^0(x) ∈ arg min_u {g(x, u) s.t. (x, u) ∈ Γ} are "well-behaved". The continuity of g^0 is shown in [1, Theor. 2.2.11] and the continuity of u^0 is proved in [12]. Statements involving convexity are easily shown.
Theorem 3.1: Suppose that Γ is a compact and convex set and g is a continuous and convex function. Then, the optimal value function g^0 is continuous and convex. If additionally g(x, ·) is strictly convex for each fixed x, then we can always select a continuous optimizer u^0.
Algorithms for solving PCP problems exactly or approximately can be found for example in [3], [13], and [14].

IV. THE INVERSE PROBLEM OF PARAMETRIC CONVEX PROGRAMMING

The discussion from the previous section suggests to consider the following problem: instead of characterizing the properties of a control
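For a concrete instance of (8), the horizon-two scalar case can be condensed by hand. The system and weights (a, b, q, r, p) below are assumptions for illustration; the unconstrained minimizer u* = -H^{-1} F^T x makes the linearity of the resulting feedback in the parameter x explicit:

```python
# Sketch: condensing a horizon-2 scalar linear MPC into the QP form (8),
# V(x, u) = u^T H u + 2 x F u + G x^2. All numbers are illustrative assumptions.
a, b = 0.9, 1.0          # assumed dynamics x+ = a x + b u
q, r, p = 1.0, 0.1, 1.0  # assumed stage and terminal weights

# Condensed data from expanding V_N = q x0^2 + r u0^2 + q x1^2 + r u1^2 + p x2^2
H = [[r + q * b**2 + p * a**2 * b**2, p * a * b**2],
     [p * a * b**2,                   r + p * b**2]]
F = [q * a * b + p * a**3 * b, p * a**2 * b]
G = q + q * a**2 + p * a**4

def V(x, u):
    """Evaluate V_N by forward simulation (must match the condensed form)."""
    x0, x1 = x, a * x + b * u[0]
    x2 = a * x1 + b * u[1]
    return q * x0**2 + r * u[0]**2 + q * x1**2 + r * u[1]**2 + p * x2**2

def V_condensed(x, u):
    quad = sum(u[i] * H[i][j] * u[j] for i in range(2) for j in range(2))
    return quad + 2 * x * (F[0] * u[0] + F[1] * u[1]) + G * x**2

def u_star(x):
    """Unconstrained minimizer u* = -H^{-1} F^T x, via Cramer's rule."""
    det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    u0 = -(F[0] * H[1][1] - F[1] * H[0][1]) / det * x
    u1 = -(H[0][0] * F[1] - H[1][0] * F[0]) / det * x
    return [u0, u1]

x = 1.5
u = u_star(x)
print(abs(V(x, u) - V_condensed(x, u)) < 1e-9)   # condensing is exact
print(V(x, [u[0] + 0.1, u[1]]) > V(x, u))        # u* is indeed a minimizer
```

With the polyhedral constraints of (8) added, the minimizer becomes piecewise linear in x rather than linear, which is the situation handled by explicit MPC algorithms [4], [6].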


Proof: Indeed, in view of Caratheodory's Theorem (see, e.g., [16, Theor. 2.3]), there exist nonnegative numbers λ_i, t_i and vectors x_i ∈ X for all 1 ≤ i ≤ n + m + 2 that satisfy the following conditions:
1) Σ_{i=1}^{n+m+2} λ_i = 1, λ_i ≥ 0 for all i;
2) (x_i, κ(x_i), t_i) ∈ S for all i;
3) x = Σ_{i=1}^{n+m+2} λ_i x_i;
4) u = Σ_{i=1}^{n+m+2} λ_i κ(x_i);
5) t = Σ_{i=1}^{n+m+2} λ_i t_i.
In view of the convexity of g^0, we have

t=

+m+2

n

=1

i

 g0

t

i i

+m+2

n

=1



+m+2

n

=1

g0 (x ) i

i

i

x i

i

= g0 (x):

i

Fig. 1. Graph of κ(x) = x sin(x). Can this function be obtained as the solution of a PCP?

law corresponding to a given optimal control problem, one might seek a convex optimization problem for which a given control law is optimal. For example, considering the feedback control law κ(x) = x sin(x) (its graph is displayed in Fig. 1), we want to determine a convex function g and a convex set Γ such that the corresponding optimizer of the PCP (7) is this particular feedback law. The required convexity of Γ and g prevents us from considering the two following trivial constructions. The problem

    min_{u,t} t   s.t.  t ≥ g^0(x),  u = κ(x)

does not satisfy our requirements, as its feasible set is not convex due to the constraint u = κ(x). We also reject the optimization problem

    min_u ‖u - κ(x)‖

as its objective function is not jointly convex in (x, u).
In the sequel we provide a constructive proof, yet not a computational one, which allows us to conclude that κ can be obtained from a PCP. The main result of the paper is stated in the following theorem.
Theorem 4.1: Let X ⊆ R^n be a closed convex set, κ: X → R^m be a continuous function and g^0: X → R be a continuous, strictly convex function. Then we can construct a PCP of the form (7) which has optimal value function g^0 and optimizer κ.
Proof: The proof is divided into several lemmata given below. We consider the sets¹

    S := {(x, κ(x), t) : x ∈ X, g^0(x) ≤ t},   E := conv(S)

and the epigraph of g^0, defined as [15]

    epi g^0 := {(x, t) : x ∈ X, g^0(x) ≤ t}.

Lemma 4.2: For every (x, u, t) ∈ E, we have g^0(x) ≤ t.

¹Given a set A, conv(A) denotes the convex hull of A, i.e., the set of all finite convex combinations Σ_i λ_i x_i with λ_i ≥ 0, Σ_i λ_i = 1 and x_i ∈ A.

Lemma 4.3: The set E is the epigraph of a convex function.
Proof: It suffices to show that for every (x, u, t) ∈ E and every t′ ≥ t, the point (x, u, t′) belongs to E. This is obvious from Lemma 4.2: it suffices to replace every t_i by t_i + (t′ - t) in Caratheodory's decomposition of (x, u, t) to obtain one of (x, u, t′). The convexity of E implies the convexity of the corresponding function.
Lemma 4.4: Let A_M be subsets of R^n for every nonnegative real number M such that A_{M_1} ⊆ A_{M_2} when M_1 ≤ M_2. Let A := lim_{M→∞} A_M = ∪_{M≥0} A_M. Then conv(A) = ∪_{M≥0} conv(A_M).
Proof: The inclusion conv(A) ⊇ ∪_{M≥0} conv(A_M) is easy to prove: if x ∈ ∪_{M≥0} conv(A_M), there exists a number t ≥ 0 such that x ∈ conv(A_t). Hence, by Caratheodory's Theorem, we have some nonnegative numbers λ_1, ..., λ_{n+1} that sum up to 1 and points x_1, ..., x_{n+1} ∈ A_t for which x = Σ_{i=1}^{n+1} λ_i x_i. But x_i ∈ A, so that x ∈ conv(A). We also use Caratheodory's Theorem to show the reverse inclusion. If x ∈ conv(A), there exist λ_1, ..., λ_{n+1} ∈ R_+ and x_1, ..., x_{n+1} ∈ A such that Σ_{i=1}^{n+1} λ_i = 1 and Σ_{i=1}^{n+1} λ_i x_i = x. Since A is the union of the sets A_M, each of these points x_i belongs to some A_{M_i} with M_i ≥ 0. As the sets A_M form a growing sequence, we have x_i ∈ A_M, where M := max{M_1, ..., M_{n+1}}. Hence x ∈ conv(A_M), and the inclusion is proved.
Lemma 4.5: The set E is closed.
Proof: Let us define the sets

    S_M := {(x, κ(x), t) : x ∈ X, ‖x‖ ≤ M, g^0(x) ≤ t},   E_M := conv(S_M).

According to the previous lemma, we have E = ∪_{M≥0} E_M. Thus, it suffices to prove that for every M ∈ R the set E_M is closed. Let us fix M ∈ R such that E_M is nonempty. Now, let {(x_k, u_k, t_k)}_{k≥0} be a sequence in E_M that converges to z = (x, u, t), and let us prove that z belongs to E_M as well. We use Caratheodory's Theorem: there exist nonnegative numbers λ_{i,k}, t_{i,k} and vectors x_{i,k} for all 1 ≤ i ≤ n + m + 2 and k ∈ N that satisfy the following conditions:
1) Σ_{i=1}^{n+m+2} λ_{i,k} = 1, λ_{i,k} ≥ 0 for all i, k;
2) (x_{i,k}, κ(x_{i,k}), t_{i,k}) ∈ S_M for all i, k;
3) x_k = Σ_{i=1}^{n+m+2} λ_{i,k} x_{i,k} for all k;
4) u_k = Σ_{i=1}^{n+m+2} λ_{i,k} κ(x_{i,k}) for all k;
5) t_k = Σ_{i=1}^{n+m+2} λ_{i,k} t_{i,k} for all k.
Since the vectors (λ_{1,k}, ..., λ_{n+m+2,k}) belong to the (n + m + 2)-dimensional simplex, which is a compact set, we can assume without loss of generality that they converge to a vector (λ_1, ..., λ_{n+m+2}). Similarly, as the sequences {x_{1,k}}_{k≥0}, ..., {x_{n+m+2,k}}_{k≥0} belong to the compact set X ∩ B[0, M], where B[0, M] := {x : ‖x‖ ≤ M}, we


can assume that they converge to v_1, ..., v_{n+m+2} ∈ X ∩ B[0, M], respectively. Hence, we have x_k → Σ_{i=1}^{n+m+2} λ_i v_i. By continuity of κ, we get κ(x_{i,k}) → κ(v_i), implying u_k → Σ_{i=1}^{n+m+2} λ_i κ(v_i). Since the function g^0 is continuous and convex, we can write

    t = lim_{k→∞} t_k = lim_{k→∞} Σ_{i=1}^{n+m+2} λ_{i,k} t_{i,k} ≥ lim_{k→∞} Σ_{i=1}^{n+m+2} λ_{i,k} g^0(x_{i,k}) = Σ_{i=1}^{n+m+2} λ_i g^0(v_i) ≥ g^0(x).

In conclusion z ∈ E_M.
Let g: X × R^m → R be the function having E as epigraph. In view of the previous lemma, the function g is lower semi-continuous (see [17] for more details). According to our construction, g and Γ are defined as follows:

    Γ = {(x, u) : ∃ t < ∞ such that (x, u, t) ∈ E}
    g: Γ → R,   g(x, u) = inf_t {t  s.t.  (x, u, t) ∈ E}.

Lemma 4.6: For every x ∈ X, the convex optimization problem inf{g(x, u) s.t. (x, u) ∈ Γ} has a unique minimizer, which is κ(x).
Proof: Let us fix x ∈ X. In view of Lemma 4.2, we know that g^0(x) ≤ inf{t : (x, u, t) ∈ E}. Moreover, the point (x, κ(x), g^0(x))

Fig. 2. Epigraph E of the function g for which the PCP (7) has the optimizer κ(x) = x sin(x).

achieves the equality. It remains to show that this is the only point of E doing so. Suppose that there is another point z := (x, u, g^0(x)) ∈ E. Invoking once again Caratheodory's Theorem, the point z can be represented as

    u = Σ_{i=1}^{n+m+2} λ_i κ(x_i),   x = Σ_{i=1}^{n+m+2} λ_i x_i,   g^0(x) = Σ_{i=1}^{n+m+2} λ_i t_i

where t_i ≥ g^0(x_i). Now,

    Σ_{i=1}^{n+m+2} λ_i g^0(x_i) ≥ g^0( Σ_{i=1}^{n+m+2} λ_i x_i ) = g^0(x) ≥ Σ_{i=1}^{n+m+2} λ_i g^0(x_i)

implies that equality holds throughout. Since the function g^0 is strictly convex, we deduce that x_i = x for all i, and hence u = κ(x).
The procedure is illustrated in Fig. 2, which displays the epigraph E of the cost function g corresponding to the feedback law given at the beginning of this section (i.e., κ(x) = x sin(x)).
From the proof of the previous lemma, it follows that we need strict convexity of the optimal value function in order to prove uniqueness of the optimizer. This is not surprising, since Theorem 3.1 also requires strict convexity of the cost function in order to be able to select a continuous optimizer.
Note that the construction provided in this section for the convex set Γ and the continuous and convex function g which define the PCP inf_u {g(x, u) s.t. (x, u) ∈ Γ} is not unique: there are many possibilities for selecting such a set Γ and function g.

V. CONCLUSION

In this note, we have provided a constructive proof showing that any continuous feedback law can be obtained from solving a parametric convex program. It is well known that continuous nonlinear feedback laws can arise from solving model predictive control type problems with convex stage costs and constraint sets, a special case of parametric convex programming. Our result thus provides a positive answer to the question of whether every continuous state feedback can be obtained by embedded optimization, as in MPC. We would like to point out, however, that our construction of the parametric convex program is not unique and implies no computational algorithm.
A natural question arising from this note is to particularize our results to piecewise linear controllers: can any continuous piecewise linear feedback law be obtained by parametric linear programming? Should such a construction be possible, it might offer computational advantages for explicit MPC algorithms. Another direction worth investigating is the inverse problem of nonlinear optimal control theory: for a given system model f and control law κ, find out whether there exist stage costs ℓ and constraint sets Ω that give rise to a parametric convex program for which the control law κ is optimal, similar to a related question posed and answered by Kalman for the case of linear control systems [18].

ACKNOWLEDGMENT

The authors thank J. C. Willems and C. Jones for inspiring discussions.
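The key inequality behind Lemmas 4.2 and 4.6 can be checked numerically for the example κ(x) = x sin(x). The choice g⁰(x) = 1 + x² below is an assumed strictly convex value function made for illustration; the sketch verifies that every Caratheodory combination of points of S sits on or above the epigraph of g⁰, with a strict gap unless all base points coincide:

```python
import math, random

# Numerical check of Lemma 4.2 for the scalar example kappa(x) = x sin(x).
# The strictly convex value function g0(x) = 1 + x^2 is an assumption for
# illustration; the paper's construction works for any such g0.
kappa = lambda x: x * math.sin(x)
g0 = lambda x: 1.0 + x * x

random.seed(0)
ok = True
for _ in range(1000):
    # Random convex combination of n + m + 2 = 4 points of S = {(x, kappa(x), g0(x))}
    xs = [random.uniform(-3, 3) for _ in range(4)]
    lam = [random.random() for _ in range(4)]
    s = sum(lam)
    lam = [l / s for l in lam]
    x = sum(l * xi for l, xi in zip(lam, xs))
    t = sum(l * g0(xi) for l, xi in zip(lam, xs))
    # Every point of E = conv(S) lies on or above the epigraph of g0 ...
    ok &= t >= g0(x) - 1e-12
    # ... and strict convexity forces a strict gap when the x_i differ,
    # which is exactly the uniqueness argument of Lemma 4.6.
    if max(xs) - min(xs) > 0.1:
        ok &= t > g0(x)
print(ok)
```

Only at u = κ(x) does the infimum g(x, u) = inf{t : (x, u, t) ∈ E} reach the value g⁰(x), which is why the optimizer of the constructed PCP recovers the feedback law.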

REFERENCES

[1] A. Fiacco, Introduction to Sensitivity and Stability Analysis in Nonlinear Programming. London, U.K.: Academic, 1983.
[2] T. Gal, Postoptimal Analyses, Parametric Programming, and Related Topics. Berlin, Germany: de Gruyter, 1995.
[3] F. Borrelli, Constrained Optimal Control of Linear and Hybrid Systems, ser. Lecture Notes in Control and Information Sciences. Berlin, Germany: Springer, 2003, vol. 290.
[4] A. Bemporad, M. Morari, V. Dua, and E. Pistikopoulos, "The explicit linear quadratic regulator for constrained systems," Automatica, vol. 38, no. 1, pp. 3–20, Jan. 2002.
[5] M. Diehl and J. Björnberg, "Robust dynamic programming for min-max model predictive control of constrained uncertain systems," IEEE Trans. Autom. Control, vol. 49, no. 12, pp. 2253–2257, Dec. 2004.
[6] P. Tondel, T. Johansen, and A. Bemporad, "An algorithm for multiparametric quadratic programming and explicit MPC solutions," Automatica, vol. 39, no. 3, pp. 489–497, Mar. 2003.
[7] E. Kerrigan and J. Maciejowski, "Feedback min-max MPC using a single linear program: Robust stability and the explicit solution," Int. J. Robust Nonlin. Control, vol. 13, no. 3–4, pp. 1–18, 2003.
[8] J. Maciejowski, Predictive Control With Constraints. Harlow, U.K.: Prentice-Hall, 2002.


[9] D. Mayne, J. Rawlings, C. Rao, and P. Scokaert, "Constrained model predictive control: Stability and optimality," Automatica, vol. 36, no. 7, pp. 789–814, Jun. 2000.
[10] C. García, D. Prett, and M. Morari, "Model predictive control: Theory and practice—A survey," Automatica, vol. 25, no. 3, pp. 335–348, May 1989.
[11] B. Bank, J. Guddat, D. Klatte, B. Kummer, and K. Tammer, Non-Linear Parametric Optimization. Berlin, Germany: Akademie-Verlag, 1982.
[12] S. Robinson and R. Day, "A sufficient condition for continuity of optimal sets in mathematical programming," J. Mathemat. Anal. Applic., vol. 45, pp. 506–511, 1974.
[13] A. Bemporad and C. Filippi, "Approximate multiparametric convex programming," in Proc. 42nd IEEE Conf. Decision and Control, Maui, HI, Dec. 2003, pp. 3185–3190.
[14] J. Björnberg and M. Diehl, "Approximate robust dynamic programming and robustly stable MPC," Automatica, vol. 42, no. 5, pp. 777–782, May 2006.
[15] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge, U.K.: Cambridge University Press, 2004.
[16] A. Barvinok, A Course in Convexity, ser. Graduate Studies in Mathematics. Providence, RI: AMS, 2002, vol. 54.
[17] R. T. Rockafellar, Convex Analysis, ser. Princeton Mathematics Series. Princeton, NJ: Princeton University Press, 1970, vol. 28.
[18] R. Kalman, "When is a linear control system optimal?," Trans. ASME, J. Basic Eng., ser. D, vol. 88, pp. 51–60, Mar. 1964.


An Improved Path-Following Method for Mixed H2/H∞ Controller Design

Eric Ostertag

Abstract—Among the few methods available to solve bilinear matrix inequalities (BMIs) occurring in control design, the path-following method, published some years ago, appears to be one of the best, as far as linearization methods are concerned. However, few details are given in the literature about its implementation and limits. In this technical note, this method is applied to the design of mixed H2/H∞ controllers, with full details of the algorithm and some improvements over the version published a few years ago. The results obtained with the numerical example given in that same publication, as well as with some other examples, are compared with those given by other methods, including a direct BMI-solving program.

Index Terms—Bilinear matrix inequalities (BMIs), mixed H2/H∞ control, path-following method.

I. INTRODUCTION

To solve problems involving bilinear matrix inequalities (BMIs), an elegant step-by-step method implying a linearization at its central step, the path-following method, was published some years ago [1]. The algorithm was not given in full detail in that work, however, and contained some unfortunate mistakes, which are reflected in the results of the numerical example given there as an illustration, making it somewhat difficult for a reader to apply the method. The main purpose of this technical note is to reformulate this algorithm with full details and some corrections, and to introduce an additional feature which lets the user adapt it to the convergence accuracy he desires and improves that convergence by means of an automatic adaptation of the perturbation sizes.
This technical note is organized as follows. Section II describes the H2/H∞ control problem formulation and our path-following algorithm in detail. Section III applies this algorithm to the numerical example of [1] and compares the results with those given in [1] and those given in another paper in which this same example has been used. The differences between these various solutions will also be discussed. Section IV applies our algorithm to two more benchmark examples, which are drawn from a well-known library. The results of Sections III and IV are also compared with those obtained with a direct BMI-solving software. In Section V, an alternative solution for the computation of an initial, suboptimal value of the controller is proposed for the situation where the H2 and the H∞ constraints apply to closed-loop transfer functions starting from the same input. Finally, Section VI will conclude this work with some comments.

Manuscript received May 25, 2007; revised November 8, 2007. First published September 12, 2008; current version published September 24, 2008. The author is with the Laboratoire des Sciences de l'Image, de l'Informatique et de la Télédétection, LSIIT, UMR ULP-CNRS 7005, F-67412 Illkirch, France (e-mail: [email protected]). Digital Object Identifier 10.1109/TAC.2008.928309

II. APPLICATION TO THE MIXED H2/H∞ CONTROLLER DESIGN

A. Problem Formulation

The mixed H2/H∞ control problem was introduced in the early 1990s in [2], where the problem of optimal control with a robust stability constraint of [3] was transformed into a convex optimization approach. Given the following linear system:

    ẋ = A x + B_w w + B_u u
    z_1 = C_1 x + D_{1w} w + D_{1u} u
    z_2 = C_2 x + D_{2w} w + D_{2u} u
    y = C_y x + D_{yw} w    (1)

the aim is to compute a feedback gain matrix K such that, for u = K y, the H2-norm of the closed loop from w to z_2 is minimized while the H∞-norm of the closed loop from w to z_1 is less than some imposed level γ. In the following, only static state feedback control will be taken into account, which is obtained by setting C_y = I and D_{yw} = 0 in (1). The H2 condition imposes further that D_{2w} = 0. In addition, we will also assume D_{1w} = 0, which will be the case in all the numerical examples treated in Sections III–V. The closed loop then has the following state-space description:

    ẋ = (A + B_u K) x + B_w w = A_c x + B_w w
    z_1 = (C_1 + D_{1u} K) x = C_{1c} x
    z_2 = (C_2 + D_{2u} K) x = C_{2c} x.    (2)

The formulation of this problem by means of linear matrix inequalities (LMIs) and BMIs was established later in [4]–[8]. We will use here the BMI optimization formulation of [1], with some corrections: minimize γ_2^2 subject to P_1 ≻ 0, P_2 ≻ 0 and

    [ (A + B_u K)^T P_1 + P_1 (A + B_u K) + (C_1 + D_{1u} K)^T (C_1 + D_{1u} K)   P_1 B_w
      B_w^T P_1                                                                  -γ^2 I  ] ⪯ 0    (3a)

    [ (A + B_u K)^T P_2 + P_2 (A + B_u K)   P_2 B_w
      B_w^T P_2                             -I      ] ⪯ 0    (3b)

    [ P_2              (C_2 + D_{2u} K)^T
      C_2 + D_{2u} K   Z                  ] ⪰ 0,   Tr(Z) < γ_2^2.    (3c)
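For intuition about what constraint (3a) asks, here is a scalar sketch. The plant numbers, the candidate gain K, and the Lyapunov variable p are assumptions for illustration, and the 2×2 negative-semidefiniteness test uses trace/determinant conditions rather than an LMI solver:

```python
# Scalar sketch of the H-infinity constraint (3a): for a candidate gain K and
# Lyapunov variable P1 = p, the 2x2 block matrix must be negative semidefinite.
# All plant data, K, gamma, and p are illustrative assumptions.
a, b_w, b_u = 1.0, 1.0, 1.0   # assumed open-loop plant (unstable: a > 0)
c1, d1u = 1.0, 0.0            # assumed performance output z1
K, gamma, p = -3.0, 1.0, 1.0  # candidate gain, H-inf level, Lyapunov variable

a_c = a + b_u * K             # closed-loop dynamics A_c
c1_c = c1 + d1u * K           # closed-loop output C_1c

# Scalar case of (3a): M = [[2 p A_c + C_1c^2, p B_w], [p B_w, -gamma^2]]
m11 = 2 * p * a_c + c1_c**2
m12 = p * b_w
m22 = -gamma**2

# A symmetric 2x2 matrix is negative semidefinite iff trace <= 0 and det >= 0
neg_semidef = (m11 + m22 <= 0) and (m11 * m22 - m12**2 >= 0)
print(neg_semidef)  # p = 1 certifies the H-inf level gamma for this K
```

Note the bilinearity: p and K enter m11 jointly through the product p·a_c = p(a + b_u K), which is what makes (3) a BMI rather than an LMI and motivates the path-following linearization developed in the note.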
