The spectral factorization problem for multivariable distributed parameter systems




THE SPECTRAL FACTORIZATION PROBLEM FOR MULTIVARIABLE DISTRIBUTED PARAMETER SYSTEMS

Frank M. CALLIER and Joseph J. WINKIN

This paper studies the solution of the spectral factorization problem for multivariable distributed parameter systems with an impulse response having an infinite number of delayed impulses. A coercivity criterion for the existence of an invertible spectral factor is given for the cases that the delays are a) arbitrary (not necessarily commensurate) and b) equally spaced (commensurate); for the latter case the criterion is applied to a system consisting of two parallel transmission lines without distortion. In all cases, it is essentially shown that, under the given criterion, the spectral density matrix has a spectral factor whenever this is true for its singular atomic part, i.e. its series of delayed impulses (with almost periodic symbol). Finally, a small-gain type sufficient condition is studied for the existence of spectral factors with arbitrary delays. The latter condition is meaningful from the system theoretic point of view, since it guarantees feedback stability robustness with respect to small delays in the feedback loop. Moreover its proof contains constructive elements.

1 Introduction

In the control literature, much attention has been paid to the spectral factorization problem for linear time-invariant lumped- and distributed parameter systems. This problem is met under different forms in several applications. A classical one is the solution of Wiener-Hopf type problems, i.e. systems of integral equations on a half line, with kernels of a specific type. A lot of results in this field have been developed by Gohberg and Krein: see e.g. [18], [22] and the book [17]. Another context where the factorization problem arises is the theory of feedback control system design: LQ-theory and robust feedback stability, see below; another issue may be the multiplier technique in passivity theory (circle criterion, etc.): see e.g. [36], [16]. The specific spectral factorization problem of this paper is motivated by applications in feedback control system design, more precisely in the analysis of closed-loop stability robustness, i.e. the analysis of the graph distance between two possibly unstable systems for obtaining robustness estimates of feedback stability, and in the solution to the Linear-Quadratic optimal control problem by frequency domain techniques for distributed parameter, i.e. infinite-dimensional state-space, systems with bounded or unbounded control

and/or observation operators: see e.g. [8]-[10], [39], [27]-[31] and [33]-[34] and references therein. This paper studies the multivariable spectral factorization problem in the framework of the Callier-Desoer algebra of possibly unstable distributed parameter system transfer functions (see e.g. the survey paper [11] or the book [15]), whose corresponding subalgebra of proper stable transfer functions is denoted by $\hat{\mathcal{A}}_-$. The starting points are references [8] and [9], where one investigates respectively single-variable general spectral factorization and multivariable spectral factorization with no delayed impulses. The results obtained in those papers are extended here to multivariable distributed parameter systems with an impulse response having an infinite number of delayed impulses. Criteria for the existence of an invertible spectral factor are given for the cases that the delays are a) arbitrary and b) equally spaced (commensurate case). The analysis of case a) is based on a result of [26] ([1], [2]), while that of case b) is a corollary (already present in [39]). In both cases the condition is the coercivity on the extended imaginary axis of the matrix spectral density. An essential step in the sufficiency proof of this condition is to show that, once the singular atomic part of the spectral density (i.e. its series of delayed impulses) has an invertible spectral factor, then so does the overall spectral density. Indeed, the problem is then reduced to spectral factorization with no delayed impulses, whose solution is known, see [9]. It is also recalled that the coercivity condition implies the existence of invertible matrix spectral factors with entries in a larger algebra than $\hat{\mathcal{A}}_-$, viz. $H^\infty$, see e.g. [25], [34] and references therein. Finally a stronger small-gain type condition is proved to be sufficient for the existence of matrix spectral factors with entries in $\hat{\mathcal{A}}_-$. This last condition is seen to be meaningful from the system theoretic point of view, since it guarantees feedback stability robustness with respect to small delays in the feedback loop. The results are illustrated by some examples. In particular, the results of the case of commensurate delays are applied to a system consisting of two transmission lines in parallel without distortion. The paper is organized as follows. Section 1 contains the present introduction and a list of notations and abbreviations. The solution of the general spectral factorization problem of spectral densities with arbitrary delays is described in Section 2, whereas the detailed proofs are given in Section 3. The next section is devoted to two particular cases which are important in applications. The final section contains some conclusions. A list of notations and abbreviations and a remark on the causal-anticausal decomposition of a distribution with support on the real axis are given below.

List of notations and abbreviations:

$\mathbb{R}$ (respectively $\mathbb{R}_-$, $\mathbb{R}_+$) := set of real (respectively nonpositive-, nonnegative-real) numbers; $\mathbb{C}$ := field of complex numbers; $\mathbb{C}_{\sigma+}$ (respectively $\mathbb{C}^0_{\sigma+}$) := $\{s \in \mathbb{C} : \mathrm{Re}(s) \ge \sigma$ (respectively $> \sigma)\}$ ($\sigma$ is omitted if $\sigma = 0$); $S_\sigma$ (respectively $S^0_\sigma$) := $\{s \in \mathbb{C} : \sigma \le \mathrm{Re}(s) \le -\sigma$ (respectively $\sigma < \mathrm{Re}(s) < -\sigma)\}$;

LTD (respectively LTD$^-$, LTD$^+$) := set of $\mathbb{C}$-valued Laplace transformable distributions with support on $\mathbb{R}$ (respectively $\mathbb{R}_-$, $\mathbb{R}_+$); $\delta(\cdot)$ := Dirac delta distribution (Dirac impulse); $\hat f(\cdot)$ := (two-sided) Laplace transform of $f \in$ LTD; $\hat A$ := set of Laplace transforms of all $f \in A$; $L_{1,\sigma}$ := class of all functions $f$, with support on $\mathbb{R}_+$, such that $\int_0^\infty |f(t)| \exp(-\sigma t)\,dt < \infty$; $\mathrm{Mat}(A)$ := set of matrices having entries in $A$; $A^{n \times n}$ := set of $n$-by-$n$ matrices having entries in $A$; $M^*$ := hermitian transpose of the matrix $M$; $F_*(t) := F(-t)^*$, parahermitian transpose of $F \in \mathrm{Mat(LTD)}$, equivalently $\hat F_*(s) = \hat F(-\bar s)^*$ ($= \hat F(j\omega)^*$ for $s = j\omega$); $M \ge 0$ (respectively $> 0$) := $M$ is a positive semi-definite (respectively positive definite) matrix; $\log^+ x := \max(\log x, 0)$.

Preliminary remark: To any distribution $f = f_a(\cdot) + f_{sa}(\cdot) := f_a(\cdot) + \sum_{i=-\infty}^{+\infty} f_i\, \delta(\cdot - t_i) \in \mathrm{LTD}$ (where $f_a$ is a $\mathbb{C}$-valued function and $f_i \in \mathbb{C}$, $i = 0, \pm 1, \ldots$), we associate $f^+ := f_a^+(\cdot) + f_{sa}^+(\cdot) \in \mathrm{LTD}^+$ and $f^- := f_a^-(\cdot) + f_{sa}^-(\cdot) \in \mathrm{LTD}^-$ such that $f_a^+ := f_a$ almost everywhere (with respect to the Lebesgue measure) and $f_{sa}^+ := 2^{-1} f_0\, \delta(\cdot) + \sum_{i=1}^{\infty} f_i\, \delta(\cdot - t_i)$ on $\mathbb{R}_+$; $f_a^- := f_a$ almost everywhere and $f_{sa}^- := 2^{-1} f_0\, \delta(\cdot) + \sum_{i=-\infty}^{-1} f_i\, \delta(\cdot - t_i)$ on $\mathbb{R}_-$; and $f = f^+ + f^-$. The distributions $f^+$ and $f^-$ are called respectively the causal part and the anticausal part of $f$.
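As a concrete illustration (ours, not part of the paper), this splitting of the singular atomic part can be coded directly on the pairs $(t_i, f_i)$; note the half-weight $2^{-1} f_0$ assigned to the impulse at $t = 0$ on each side:

```python
# Minimal sketch of the causal-anticausal splitting of a singular atomic
# distribution f_sa = sum_i f_i * delta(. - t_i), given as (t_i, f_i) pairs.
# The impulse at t = 0 is shared equally between f^+ and f^-.

def split_atomic(impulses):
    """Return (causal, anticausal) parts of a delta train."""
    causal, anticausal = [], []
    for t, f in impulses:
        if t > 0:
            causal.append((t, f))
        elif t < 0:
            anticausal.append((t, f))
        else:  # t == 0: half-weight on each side
            causal.append((0.0, f / 2))
            anticausal.append((0.0, f / 2))
    return causal, anticausal

# Example: f_sa = delta(.) + 0.5*delta(. - 1) + 0.5*delta(. + 1)
fp, fm = split_atomic([(0.0, 1.0), (1.0, 0.5), (-1.0, 0.5)])
print(fp)  # [(0.0, 0.5), (1.0, 0.5)]
print(fm)  # [(0.0, 0.5), (-1.0, 0.5)]
```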

2 Spectral Factorization with Arbitrary Delays

2.1 Description of the problem

The spectral factorization problem is studied here in the framework of the distributed parameter system transfer function algebras $\hat{\mathcal{A}}_-$ and $\hat{\mathcal{B}}$, which have been introduced and analyzed in [5], [6], [7]. See also the tutorial paper [11] and the book [15]. Let $\sigma \le 0$. An impulse response $f \in \mathrm{LTD}^+$ is said to be in $\mathcal{A}(\sigma)$ iff, on $t \ge 0$, $f(t) = f_a(t) + f_{sa}(t)$, where its regular functional part $f_a$ is such that $\int_0^\infty |f_a(t)| \exp(-\sigma t)\,dt < \infty$ and its singular atomic part $f_{sa} := \sum_{i=0}^{\infty} f_i\, \delta(\cdot - t_i)$, where $t_0 = 0$, $t_i > 0$ for $i \ge 1$ and $f_i \in \mathbb{C}$ for $i \ge 0$ with $\sum_{i=0}^{\infty} |f_i| \exp(-\sigma t_i) < \infty$. An impulse response $f$ is said to be in $\mathcal{A}_-$ iff $f \in \mathcal{A}(\sigma)$ for some $\sigma < 0$. $\hat{\mathcal{A}}_-$ denotes the algebra of distributed parameter system proper-stable transfer functions, i.e. Laplace transforms of elements in $\mathcal{A}_-$.

Definition 2.1 Let $F = F_a + F_{sa} := F_a(\cdot) + \sum_{i=-\infty}^{\infty} F_i\, \delta(\cdot - t_i)$ be a matrix of Laplace transformable distributions of a real variable $t \in \mathbb{R}$, where

a) $F_a(\cdot)$ is an $n \times n$-matrix valued function, $F_i$ is a constant $n \times n$-matrix for $i = 0, \pm 1, \ldots$, $t_0 = 0$, $t_i > 0$ and $t_{-i} < 0$ for $i \ge 1$, and $\delta(\cdot)$ denotes the Dirac delta distribution (Dirac impulse);

b) $F$ is parahermitian self-adjoint, i.e. $F(t) = F_*(t) := F(-t)^*$, or equivalently

  $F_a(t) = (F_a)_*(t)$, $F_{-i} = F_i^*$ and $t_{-i} = -t_i$ for $i \ge 0$;  (1)

c) the Laplace transform of its causal part (i.e. its part with support on the nonnegative real numbers), viz. $F^+ = F_a^+(\cdot) + 2^{-1} F_0\, \delta(\cdot) + \sum_{i=1}^{\infty} F_i\, \delta(\cdot - t_i)$, is a matrix with all its entries in the algebra $\hat{\mathcal{A}}_-$, i.e.

  $\hat F^+ \in \mathrm{Mat}(\hat{\mathcal{A}}_-)$;  (2)

and d) $\hat F$ is positive semi-definite on the imaginary axis, i.e.

  $\hat F(j\omega)\ (= \hat F_*(j\omega) = \hat F(j\omega)^*) \ge 0$ for all $\omega \in \mathbb{R}$.  (3)

A spectral factor of the matrix spectral density $\hat F$ is a matrix valued function $\hat R = \hat R_a + \hat R_{sa} = \hat R_a(\cdot) + \sum_{k=0}^{\infty} R_k \exp(-\,\cdot\,\tau_k)$ which is in $\mathrm{Mat}(\hat{\mathcal{A}}_-)$, i.e. has all entries in $\hat{\mathcal{A}}_-$, such that

  $\hat F(j\omega) = \hat R(j\omega)^*\, \hat R(j\omega)$ for all $\omega \in \mathbb{R}$. □  (4)

The spectral factorization problem which is considered here consists of finding a necessary and sufficient condition on the matrix spectral density $\hat F$ under which it admits an invertible spectral factor $\hat R$, i.e. such that $\hat R$ and $\hat R^{-1}$ are in $\mathrm{Mat}(\hat{\mathcal{A}}_-)$.
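Before stating the main results, here is a minimal scalar example (ours, not from the paper) of a spectral density with one delayed impulse and its invertible spectral factor:

```latex
% Scalar example: one delayed impulse, |a| < 1, delay T > 0.
% On the imaginary axis:
\hat{F}(j\omega) = \bigl(1 + a\,e^{j\omega T}\bigr)\bigl(1 + a\,e^{-j\omega T}\bigr)
                 = 1 + a^2 + 2a\cos(\omega T) \;\ge\; (1-|a|)^2 \;>\; 0,
% so the coercivity condition (5) below holds with \epsilon = (1-|a|)^2.
% A spectral factor is
\hat{R}(s) = 1 + a\,e^{-sT} \in \hat{\mathcal{A}}_-,
% and it is invertible in \hat{\mathcal{A}}_-, since
\inf_{s \in \mathbb{C}_+} \bigl|1 + a\,e^{-sT}\bigr| \;\ge\; 1 - |a| \;>\; 0.
```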

2.2 Main Results

Theorem 2.1 [Spectral factorization with arbitrary delays] Let a matrix spectral density $\hat F$ be given as in Definition 2.1, such that conditions (1)-(3) hold. Under these conditions, $\hat F$ has an invertible spectral factor $\hat R$, such that $\hat R$ is in $\mathrm{Mat}(\hat{\mathcal{A}}_-)$ together with its inverse, if and only if it is (uniformly) coercive on the imaginary axis, i.e. the following inequality holds:

  there exists $\epsilon > 0$ such that $\hat F(j\omega) \ge \epsilon I$ for all $\omega \in \mathbb{R}$.  (5)

Moreover, if (5) holds, then all invertible spectral factors of $\hat F$ are unique up to left multiplication by a constant unitary matrix $U$. □

Remark 2.1 a) The proof of this result is essentially based on the fact that, under condition (5), a given spectral density matrix has an invertible spectral factor whenever so has its almost periodic part $\hat F_{sa}$, and on the fact that the latter holds by [26, Corollary 1], which is a byproduct of [2, Theorem 3]; see also [24, Theorem 5.1], [3, proof of Theorem 2.3 (especially Lemma 3.2)] and references cited therein, for additional information. Then, by using a symmetric extraction technique, the problem can be reduced to the spectral factorization of a spectral density without delays, which has been solved in [9, Theorem 1]. See Subsection 3.1.

b) Theorem 2.1 is a generalization of a similar earlier result on invertible factorizability of a matrix spectral density with equally-spaced delays, see [39, Theorem 3.1M]. See Theorem 4.1 below for more detail.

c) Condition (5) is also known to be a necessary and sufficient condition for the existence of invertible spectral factors in $\mathrm{Mat}(H^\infty)$, see [25, Theorem 3.7, p. 54]; see also [34, p. 316]. Moreover, condition (5) is stronger than

  $\omega \in \mathbb{R} \mapsto (1 + \omega^2)^{-1} \log^+ \|\hat F(j\omega)^{-1}\|$ is integrable,  (6)

where $\log^+ x := \max\{\log x, 0\}$ for any real number $x > 0$. The latter condition is necessary and sufficient for the existence of an outer spectral factor in $\mathrm{Mat}(H^\infty)$ (analytic with no zeros in the open right half-plane, bounded on the imaginary axis, with zeros on the axis still allowed), see [25, Theorem 6.14, p. 124].

d) Theorem 2.1 confirms the conjecture stated in [27, p. 174] in the case of the algebra $\hat{\mathcal{A}}_-$, and it solves the open problem described in [12].
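To illustrate the gap in part c) between conditions (5) and (6), here is a scalar example of ours (not from the paper): the density below fails coercivity at $\omega = 0$, yet satisfies (6) and admits an outer, non-invertible spectral factor.

```latex
% Scalar density with a boundary zero at \omega = 0:
\hat{F}(j\omega) \;=\; \frac{\omega^2}{1+\omega^2} \;=\; 1 - \frac{1}{1+\omega^2},
\qquad\text{i.e. } F = \delta + F_a,\quad F_a(t) = -\tfrac12\, e^{-|t|}.
% Coercivity (5) fails since \hat{F}(0) = 0, while
\int_{-\infty}^{\infty} \frac{\log^{+}\bigl|\hat{F}(j\omega)^{-1}\bigr|}{1+\omega^{2}}\,d\omega
 \;=\; \int_{-\infty}^{\infty} \frac{\log\bigl(1+\omega^{-2}\bigr)}{1+\omega^{2}}\,d\omega \;<\;\infty,
% so (6) holds, and the outer function
\hat{R}(s) \;=\; \frac{s}{s+1} \in H^{\infty},\qquad
|\hat{R}(j\omega)|^{2} \;=\; \frac{\omega^{2}}{1+\omega^{2}} \;=\; \hat{F}(j\omega),
% is a spectral factor; it is not invertible in H^\infty (zero at s = 0).
```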

A direct and important byproduct of Theorem 2.1 is the existence of normalized coprime fractions for distributed parameter system transfer functions. Recall that $\hat{\mathcal{A}}_-$ is our selected class of distributed proper-stable transfer functions. It contains the multiplicative subset $\hat{\mathcal{A}}_-^\infty$ of transfer functions that are bounded away from zero at infinity in $\mathbb{C}_+$, i.e. that are biproper-stable. Possibly unstable transfer functions are selected to be in the algebra $\hat{\mathcal{B}}$, where $\hat f \in \hat{\mathcal{B}}$ iff $\hat f = \hat n\, \hat d^{-1}$ with $\hat n \in \hat{\mathcal{A}}_-$ and $\hat d \in \hat{\mathcal{A}}_-^\infty$. Note that by [5, Theorem 3.3] a transfer function is in $\hat{\mathcal{B}}$ iff it is the sum of a completely unstable strictly proper rational function and a stable transfer function in $\hat{\mathcal{A}}_-$; hence $\hat d$ above can always be chosen biproper-stable rational, see e.g. [32, Fact 20, p. 13]. Multivariable plants have transfer matrices $\hat P$ in $\mathrm{Mat}(\hat{\mathcal{B}})$ described by a right matrix fraction $\hat P = \hat N \hat D^{-1}$, where $\hat N$ and $\hat D$ are in $\mathrm{Mat}(\hat{\mathcal{A}}_-)$ and $\det \hat D$ is in $\hat{\mathcal{A}}_-^\infty$; if this holds and there exist $\hat U$ and $\hat V$ in $\mathrm{Mat}(\hat{\mathcal{A}}_-)$ such that $\hat U \hat N + \hat V \hat D = I$ (Bezout identity) (or equivalently $[\hat N(s)^T\ \hat D(s)^T]^T$ has full column rank in $\mathbb{C}_+$), then $\hat P$ in $\mathrm{Mat}(\hat{\mathcal{B}})$ is said to have a right coprime fraction $(\hat N, \hat D)$ in $\mathrm{Mat}(\hat{\mathcal{A}}_-)$; right coprime fractions are unique up to multiplication on the right by a factor in $\mathrm{Mat}(\hat{\mathcal{A}}_-)$ together with its inverse; moreover $\hat D$ above can always be chosen biproper-stable rational such that $\hat D(\infty) = I$, see [7, proof of Theorem 2.1].

Definition 2.2 Let $\hat P \in \mathrm{Mat}(\hat{\mathcal{B}})$ have a right coprime fraction $(\hat N, \hat D)$ in $\mathrm{Mat}(\hat{\mathcal{A}}_-)$. $(\hat N, \hat D)$ is said to be normalized iff

  $(\hat N^* \hat N + \hat D^* \hat D)(j\omega) = I$ for all $\omega \in \mathbb{R}$.  (7)

We call (right) coprime fraction spectral density the expression

  $\hat F := \hat N_* \hat N + \hat D_* \hat D$. □  (8)

Remark 2.2 It follows from the proof of Theorem 2.2 below and from Theorem 2.1 that normalized right coprime fractions of a transfer matrix $\hat P \in \mathrm{Mat}(\hat{\mathcal{B}})$ are unique up to right multiplication by a constant unitary matrix: see Subsection 3.2.

Theorem 2.2 [Normalized coprime fractions] Every transfer matrix $\hat P \in \mathrm{Mat}(\hat{\mathcal{B}})$ has normalized right coprime fractions, unique up to right multiplication by a constant unitary matrix. □

Example 1: Consider a system consisting of the parallel interconnection of a strictly proper stable system with transfer function $\hat P_1 = (\hat P_1)_a \in \hat L_{1,\sigma} \subset \hat{\mathcal{A}}_-$ for some $\sigma < 0$ (i.e. the impulse response $P_1$ is absolutely continuous) and a proper stable system with transfer function $\hat P_2 = (\hat P_2)_{sa} \in \widehat{LA}^+(\sigma) \subset \hat{\mathcal{A}}_-$ (i.e. the impulse response $P_2$ is purely singular atomic). The system transfer function is given by $\hat P(s) = [\hat P_1(s), \hat P_2(s)]$ and is proper stable, i.e. $\hat P$ is in $\hat{\mathcal{A}}_-^{1 \times 2}$. It follows that the pair $(\hat N, \hat D) := (\hat P, I)$, where $I$ is the two-by-two identity matrix, is a right coprime fraction of $\hat P$; whence, by the proof of Theorem 2.2, the corresponding right coprime fraction power spectral density $\hat F := \hat N_* \hat N + \hat D_* \hat D = I + \hat P_* \hat P$ has a spectral factor $\hat R$ invertible in $\hat{\mathcal{A}}_-^{2 \times 2}$, whence $(\hat N \hat R^{-1}, \hat D \hat R^{-1})$ is a normalized right coprime fraction of $\hat P$. Moreover the singular atomic part of $\hat F$ is a diagonal matrix which is given by $\hat F_{sa} = \mathrm{diag}\,[1,\ 1 + (\hat P_2)_* \hat P_2]$. By Lemma 3.3, $\hat F_{sa}$ has a spectral factor $\hat R_1$ which is given by $\hat R_1 = \mathrm{diag}\,[1,\ \hat W]$, where $\hat W \in \hat{\mathcal{A}}_-$ is a spectral factor (invertible in $\mathrm{Mat}(\hat{\mathcal{A}}_-)$) of $1 + (\hat P_2)_* \hat P_2$. Therefore, by the proof of Theorem 2.1, a spectral factor $\hat R \in \mathrm{Mat}(\hat{\mathcal{A}}_-)$ (invertible in $\mathrm{Mat}(\hat{\mathcal{A}}_-)$) of $\hat F$ is given by $\hat R = \hat R_2\, \hat R_1$, where $\hat R_2 = \hat R_{2a} + R_{20} \in \mathrm{Mat}(\hat{\mathcal{A}}_-)$ is a spectral factor (invertible in $\mathrm{Mat}(\hat{\mathcal{A}}_-)$) of $\hat R_{1*}^{-1} \hat F_a \hat R_1^{-1} + I$.
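For a concrete instance of the factor $\hat W$ above (our numbers, not from the paper), take $\hat P_2(s) = 1 + e^{-sT}$, i.e. two unit impulses; the almost periodic factorization can then be carried out by hand:

```latex
% Worked instance (ours): \hat{P}_2(s) = 1 + e^{-sT}, z := e^{-sT}.
1 + (\hat{P}_2)_* \hat{P}_2 \;=\; 1 + (1 + z^{-1})(1 + z) \;=\; 3 + z + z^{-1}.
% Seek \hat{W}(s) = \sqrt{c}\,(1 + b\,z) with c(1+b^2) = 3 and cb = 1:
b \;=\; \tfrac{3-\sqrt{5}}{2} \;\approx\; 0.382, \qquad
c \;=\; b^{-1} \;=\; \tfrac{3+\sqrt{5}}{2},
% so that on the imaginary axis
|\hat{W}(j\omega)|^2 \;=\; c(1+b^2) + cb\,\bigl(e^{j\omega T} + e^{-j\omega T}\bigr)
 \;=\; 3 + 2\cos(\omega T).
% Since 0 < b < 1, \hat{W} and \hat{W}^{-1} both lie in \hat{\mathcal{A}}_-.
```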

3 Proofs of the Main Results

3.1 Proof of Theorem 2.1

Throughout this subsection, it is assumed that we are given a matrix spectral density $\hat F$ as in Definition 2.1, such that conditions (1)-(3) hold. We first prove the uniqueness of spectral factors, up to left multiplication by a constant unitary matrix, under condition (5).

Lemma 3.1 [Multiplicity of spectral factors] Let a matrix spectral density $\hat F$ be given as in Definition 2.1, such that conditions (1)-(3) hold, and assume that (5) holds. If

  $\hat R_1$ and $\hat R_2$ are spectral factors of $\hat F$,  (9)

then there exists a constant unitary matrix $U$, i.e. $U \in \mathrm{Mat}(\mathbb{C})$ with $U^* = U^{-1}$, such that

  $\hat R_2 = U \hat R_1$. □  (10)

Proof: Let $\hat R_1$ and $\hat R_2$ be two spectral factors of the spectral density $\hat F$. Hence the following identity holds on the imaginary axis:

  $\hat R_2 \hat R_1^{-1} = \hat R_{2*}^{-1} \hat R_{1*} = (\hat R_1 \hat R_2^{-1})_*$.

Since the matrix function on the left-hand side of this identity is in $\mathrm{Mat}(\hat{\mathcal{A}}_-)$, it is bounded and holomorphic in an open right half-plane containing $\mathbb{C}_+$, whence it can be analytically extended to a bounded entire function $U$. Then, by Liouville's theorem, see e.g. [21, pp. 203-204], $U$ is a constant matrix such that (10) holds; $U$ is moreover unitary, since on the imaginary axis $U^* U = \hat R_{1*}^{-1} \hat F \hat R_1^{-1} = I$. □

Necessity: If $\hat R$ is an invertible spectral factor, in $\mathrm{Mat}(\hat{\mathcal{A}}_-)$, of $\hat F$, then, by e.g. [11, Theorem 1] or [15, Lemmas 7.1.5 and 7.2.1] (see also Fact 1c below), $\inf\{|\det \hat R(s)| : s \in \mathbb{C}_+\} > 0$. Since $\hat R_*(j\omega) = \hat R(j\omega)^*$, it follows that

  $\inf\{\det \hat F(j\omega) : \omega \in \mathbb{R}\} = \inf\{|\det \hat R(j\omega)|^2 : \omega \in \mathbb{R}\} > 0$.

Hence (5) holds. □

Sufficiency: The proof of sufficiency of condition (5) for the existence of invertible spectral factors is divided into three steps, which correspond to the following three lemmas, viz. Lemmas 3.2, 3.3 and 3.4. In order to prove these results, the following classes of Laplace transformable distributions should be introduced.

Definition 3.1 Let $\sigma \le 0$. The sets $LA^+(\sigma)$, $LA^-(\sigma)$ and $LA(\sigma)$ of Laplace transformable distributions are given respectively by

  $LA^+(\sigma) := \mathcal{A}(\sigma)$;  (11)

  $LA^-(\sigma) := \{f \in \mathrm{LTD}^- : f(-\cdot) \in LA^+(\sigma)\}$;  (12)

and $LA(\sigma) := LA^+(\sigma) + LA^-(\sigma) := \{f \in \mathrm{LTD} : f = f_a + f_{sa} = f_a(\cdot) + \sum_{i=-\infty}^{\infty} f_i\, \delta(\cdot - t_i)$, with $f_a(\cdot)$ a $\mathbb{C}$-valued function, $f_i \in \mathbb{C}$ for $i = 0, \pm 1, \ldots$ and $t_0 = 0$, $t_i > 0$ for $i = 1, 2, \ldots$ and $t_i < 0$ for $i = -1, -2, \ldots$, such that

  $\int_{-\infty}^{\infty} |f_a(t)| \exp(-\sigma |t|)\,dt < \infty$ and $\sum_{i=-\infty}^{\infty} |f_i| \exp(-\sigma |t_i|) < \infty\}$. □  (13)

The class $LA(\sigma)$ is known in the literature as a Beurling-type algebra, see e.g. [20, Chapter 4], and is sometimes called the Wiener-Pitt (WP) algebra, see e.g. [1], [2]. $LA(\sigma)$ is equipped with a two-sided convolution product as follows. Let $f = f_a(\cdot) + \sum_{k=-\infty}^{\infty} f_k\, \delta(\cdot - t_k)$ and $g = g_a(\cdot) + \sum_{l=-\infty}^{\infty} g_l\, \delta(\cdot - \tau_l)$ belong to $LA(\sigma)$; then

  $(f * g)(t) := \int_{-\infty}^{\infty} f(t-s)\, g(s)\,ds = (f * g)_a(t) + \sum_{k=-\infty}^{\infty} \sum_{l=-\infty}^{\infty} f_k g_l\, \delta(t - t_k - \tau_l)$,  (14a)

where

  $(f * g)_a(t) := (f_a * g_a)(t) + \sum_{k=-\infty}^{\infty} f_k\, g_a(t - t_k) + \sum_{l=-\infty}^{\infty} g_l\, f_a(t - \tau_l)$.  (14b)

Moreover $LA(\sigma)$ can also be equipped with a norm $\|\cdot\|_{1\sigma}$ which is defined by

  $\|f\|_{1\sigma} := \int_{-\infty}^{\infty} |f_a(t)| \exp(-\sigma |t|)\,dt + \sum_{k=-\infty}^{\infty} |f_k| \exp(-\sigma |t_k|)$.  (15)

Observe that (see e.g. [8, Fact 3.1]), under the norm $\|\cdot\|_{1\sigma}$,

  $LA(\sigma)$ is a commutative convolution Banach algebra with unit element $\delta(\cdot)$;  (16)

and

  $LA^+(\sigma)$ and $LA^-(\sigma)$ are closed subalgebras of $LA(\sigma)$.  (17)

The distributions belonging to the classes defined above satisfy several important and useful properties which are listed in the following two facts, see e.g. [5] and [8, Fact 3.1]. Recall that a complex valued function $g$ is almost periodic in a vertical strip $S(\sigma, \sigma')$ if, for any $\epsilon > 0$, there exists $L > 0$ such that each interval of length $L$ on the imaginary axis contains at least one $\epsilon$-translation number $j\tau$, i.e. one point $j\tau$ such that

  $|g(\sigma_1 + j(\omega + \tau)) - g(\sigma_1 + j\omega)| < \epsilon$ for all $\omega \in \mathbb{R}$ and $\sigma_1 \in [\sigma, \sigma']$;

see e.g. [14, p. 73, Corollary p. 75]. Moreover $g$ is almost periodic in a right half-plane $\mathbb{C}_{\sigma+}$ if $g$ is almost periodic in every vertical strip contained in $\mathbb{C}_{\sigma+}$.

Fact 1 [Properties of distributions in $LA^+(\sigma)$] a) If $f = f_a(\cdot) + f_{sa}(\cdot)$ is a distribution in $LA^+(\sigma)$, then its Laplace transform $\hat f$ satisfies the following properties: (i) $\hat f$ is uniformly continuous in $\mathbb{C}_{\sigma+}$; (ii) $\hat f$ is holomorphic in $\mathbb{C}^0_{\sigma+}$; (iii) $\hat f_{sa}$ is analytic almost periodic in $\mathbb{C}_{\sigma+}$; (iv) [Riemann-Lebesgue] $(\hat f - \hat f_{sa})(s) \to 0$ as $|s| \to \infty$ in $\mathbb{C}_{\sigma+}$, where in particular, for any $\sigma' \ge \sigma$, $|\hat f_a(\sigma_1 + j\omega)| \to 0$ as $|\omega| \to \infty$ uniformly in $\sigma_1 \in [\sigma, \sigma']$.

b) $F \in \mathrm{Mat}(LA^+(\sigma))$ is invertible in $\mathrm{Mat}(LA^+(\sigma))$ iff

  $\inf\{|\det \hat F(s)| : s \in \mathbb{C}_{\sigma+}\} > 0$.  (18)

c) Let $F$ be in $\mathrm{Mat}(LA^+(\sigma_1))$ for some $\sigma_1 < 0$, or equivalently in $\mathrm{Mat}(\mathcal{A}_-)$. Then $\hat F^{-1} \in \mathrm{Mat}(\hat{\mathcal{A}}_-)$, i.e. $\hat F^{-1} \in \mathrm{Mat}(\widehat{LA}^+(\sigma))$ for some $\sigma < 0$, iff

  $\inf\{|\det \hat F(s)| : s \in \mathbb{C}_+\} > 0$. □  (19)

Fact 2 [Properties of distributions in $LA(\sigma)$] Let $\sigma \le 0$. Then a) if $f = f_a + f_{sa}$ is a distribution in $LA(\sigma)$, then its Laplace transform $\hat f$ satisfies the following properties: (i) $\hat f$ is uniformly continuous in $S_\sigma$; (ii) $\hat f$ is holomorphic in $S^0_\sigma$; (iii) $\hat f_{sa}$ is analytic almost periodic in $S_\sigma$; (iv) [Riemann-Lebesgue] $(\hat f - \hat f_{sa})(s) \to 0$ as $|s| \to \infty$ in $S_\sigma$, where in particular $|\hat f_a(\sigma_1 + j\omega)| \to 0$ as $|\omega| \to \infty$ uniformly in $\sigma_1 \in [\sigma, -\sigma]$.

b) $F \in \mathrm{Mat}(LA(\sigma))$ is invertible in $\mathrm{Mat}(LA(\sigma))$ iff

  $\inf\{|\det \hat F(s)| : s \in S_\sigma\} > 0$. □  (20)

Note that $LA(\sigma)^{n \times n}$, where $n$ is a positive integer, can also be equipped with a norm denoted by $\|\cdot\|_{1\sigma}$, which is given by (15) when $n = 1$ and by (21) below when $n \ge 2$, viz. the norm of a matrix distribution $F = F_a(\cdot) + \sum_{k=-\infty}^{\infty} F_k\, \delta(\cdot - t_k)$ in $LA(\sigma)^{n \times n}$ is defined by

  $\|F\|_{1\sigma} := \int_{-\infty}^{\infty} \|F_a(t)\| \exp(-\sigma |t|)\,dt + \sum_{k=-\infty}^{\infty} \|F_k\| \exp(-\sigma |t_k|)$,  (21)

where $\|\cdot\|$ is any induced (matrix) norm on $\mathbb{C}^{n \times n}$ (e.g. the norm induced by the euclidean vector norm). It is important to observe that, under the norm $\|\cdot\|_{1\sigma}$,

  $LA(\sigma)^{n \times n}$ is a convolution Banach algebra with unit element $I\delta(\cdot)$,  (22)

where $I$ is the $n$-by-$n$ identity matrix, and

  $LA(\sigma)^{n \times n}$ is not a commutative algebra unless $n = 1$.  (23)

Lemma 3.2 [Coercivity] Let a matrix spectral density $\hat F$ be given as in Definition 2.1, such that conditions (1)-(3) hold. Under these conditions, if $\hat F$ is coercive on the imaginary axis, i.e. (5) holds, then so is its almost periodic part $\hat F_{sa}$, i.e. there exists $\epsilon > 0$ such that

  $\hat F_{sa}(j\omega) \ge \epsilon I$ for all $\omega \in \mathbb{R}$. □  (24)

Proof: First observe that condition (24) holds iff

  $\inf\{\det \hat F_{sa}(j\omega) : \omega \in \mathbb{R}\} > 0$.  (25)

Now, by the uniform continuity of $\det \hat F$ in some vertical strip (see Fact 2a(i)), it follows from the coercivity of $\hat F$, i.e. (5), or equivalently

  $\inf\{\det \hat F(j\omega) : \omega \in \mathbb{R}\} > 0$,  (26)

that $\det \hat F$ is bounded away from zero in a vertical strip, i.e. there exists a $\sigma < 0$ such that $\inf\{\det \hat F(s) : s \in S_\sigma\} > 0$ (see also [8, Lemma 3.1b and Fact 3.1c], where $\sigma'$ has been set equal to 0). Thus, in view of Fact 2a(iv) and since $\det \hat F_{sa} = (\det \hat F)_{sa}$ is analytic almost periodic in $S_\sigma$ (see Fact 2a(iii)), it follows by e.g. [14, Theorem 3.6] that $\inf\{\det \hat F_{sa}(s) : s \in S_\sigma\} > 0$ (see [8, Fact 3.1c]); whence (25) holds. □

Lemma 3.3 [Almost periodic spectral factorization] Let a matrix spectral density $\hat F$ be given as in Definition 2.1, such that conditions (1)-(3) hold, and assume that it is almost periodic, i.e.

  $\hat F = \hat F_{sa}$.  (27)

Under these conditions, if $\hat F$ is coercive on the imaginary axis, i.e. (24) holds, then $\hat F$ has an invertible almost periodic spectral factor $\hat R$, such that $\hat R = \hat R_{sa}$ is in $\mathrm{Mat}(\hat{\mathcal{A}}_-)$ together with its inverse. □

Proof: We proceed in two steps.

Step 1: Assumptions (1)-(3) and condition (24) imply the existence of an almost periodic square matrix-valued function $\hat R = \hat R_{sa} \in \mathrm{Mat}(\widehat{\mathrm{LTD}}^+)$ such that

a) $\hat F_{sa}(j\omega) = \hat R(j\omega)^*\, \hat R(j\omega)$ for all $\omega \in \mathbb{R}$;  (28)

b) $\hat R \in \mathrm{Mat}(\widehat{LA}^+(0))$ and $\inf\{|\det \hat R(s)| : s \in \mathbb{C}_+\} > 0$,  (29a)

or equivalently, by Fact 1b,

  $\hat R$ and $\hat R^{-1}$ are in $\mathrm{Mat}(\widehat{LA}^+(0))$.  (29b)

Indeed, assume that $\hat F = \hat F_{sa}$ is a $\mathbb{C}^{n \times n}$-valued matrix function. By condition (24), $\hat F$ satisfies the following inequality on the imaginary axis: $x^* \hat F(\cdot)\, x \ge \epsilon \|x\|^2$ for all $x \in \mathbb{C}^n$. Moreover, since, for all $\omega \in \mathbb{R}$, $\hat F(j\omega)$ is a hermitian matrix (see (3)), the parameter $\kappa$ given by

  $\kappa := \lim_{\tau \to \infty} \frac{1}{2\tau}\, \bigl[\arg\bigl(x^* \hat F(j\omega)\, x\bigr)\bigr]_{\omega = -\tau}^{\omega = \tau}$,

which is independent of the nonzero vector $x \in \mathbb{C}^n$, is such that $\kappa = 0$. It follows, by [2, Theorem 3], or equivalently by [26, Corollary 1], that (28)-(29b) hold for some invertible almost periodic matrix-valued function $\hat R = \hat R_{sa}$.

Step 2: [Analytic extension] The square matrix-valued function $\hat R$ of Step 1 satisfies

  $\hat R$ and $\hat R^{-1}$ are in $\mathrm{Mat}(\hat{\mathcal{A}}_-)$.  (30)

Indeed, denote by $S(\sigma_1, \sigma_2)$ (respectively $S^0(\sigma_1, \sigma_2)$) the vertical strip $\{s \in \mathbb{C} : \sigma_1 \le \mathrm{Re}\,s \le \sigma_2\}$ (respectively $\{s \in \mathbb{C} : \sigma_1 < \mathrm{Re}\,s < \sigma_2\}$). Rewrite now (28) as

  $\hat R = \hat R_*^{-1}\, \hat F_{sa}$.  (31)

Observe (29b), where $\hat R_*^{-1}(s) = \hat R^{-1}(-\bar s)^*$, and note that $F$ is in $\mathrm{Mat}(LA(\sigma))$ for some $\sigma < 0$. Then by Fact 1a(i)-(ii) and Fact 2a(i)-(ii), a) the right-hand side of equation (31) is holomorphic in $S^0(\sigma, 0)$ and continuous in $S(\sigma, 0)$; b) the left-hand side of (31) has the same properties with respect to $\mathbb{C}^0_+$ and $\mathbb{C}_+$ respectively. Hence [21, Theorem 7.7.1] can be applied to (31) such that, by analytic extension (continuous up to the boundary), equation (31) holds in the strip $S(\sigma, 0)$. This implies

  $\hat R(\sigma + \cdot) = \hat R_*^{-1}(\sigma + \cdot)\, \hat F_{sa}(\sigma + \cdot)$ on the $j\omega$-axis.  (32)

Note now that in (32),

  $\hat F_{sa}(\sigma + \cdot) \in \mathrm{Mat}(\widehat{LA}(0))$,  (33)

or equivalently $\exp(-\sigma\,\cdot)\,F_{sa}(\cdot) \in \mathrm{Mat}(LA(0))$; similarly

  $\hat R_*^{-1}(\sigma + \cdot) \in \mathrm{Mat}(\widehat{LA}^-(0))$,  (34)

or equivalently $\exp(-\sigma\,\cdot)\,R_*^{-1}(\cdot) \in \mathrm{Mat}(LA^-(0))$. Hence by (31)-(34), using the convolution in $LA(0)$, it follows by (16) that $\hat R(\sigma + \cdot) \in \mathrm{Mat}(\widehat{LA}(0))$. Now note that $\exp(-\sigma\,\cdot)\,R(\cdot)$ has its support on $t \ge 0$. Hence we get $\hat R(\sigma + \cdot) \in \mathrm{Mat}(\widehat{LA}^+(0))$, or equivalently

  $\hat R \in \mathrm{Mat}(\widehat{LA}^+(\sigma))$ for some $\sigma < 0$.  (35)

Finally observe that by (29a), $\inf\{|\det \hat R(s)| : s \in \mathbb{C}_+\} > 0$; this together with (35) implies the conclusion (30) by Fact 1c. □

Lemma 3.4 [Spectral factorization without delays] Let a matrix spectral density $\hat F$ be given as in Definition 2.1, such that conditions (1)-(3) hold, and assume that it has no delays, i.e. without loss of generality

  $\hat F = \hat F_a + I$.  (36)

Under these conditions, if $\hat F$ is coercive on the imaginary axis, i.e. (5) holds, then $\hat F$ has an invertible spectral factor $\hat R$ without delays, such that $\hat R = \hat R_a + R_0$ is in $\mathrm{Mat}(\hat{\mathcal{A}}_-)$ together with its inverse. □

Proof: See [9, proof of Theorem 1]. □

By Lemmas 3.2 and 3.3, it follows from condition (5) that the almost periodic part $\hat F_{sa}$ of $\hat F$ has an invertible almost periodic spectral factor $\hat R_1$, such that $\hat R_1 = (\hat R_1)_{sa}$ is in $\mathrm{Mat}(\hat{\mathcal{A}}_-)$ together with its inverse. It follows that the spectral density $\hat G$ given by

  $\hat G := \hat R_{1*}^{-1} \hat F \hat R_1^{-1} = \hat R_{1*}^{-1} \hat F_a \hat R_1^{-1} + I$

has no delays and is coercive on the imaginary axis; whence, by Lemma 3.4, it has an invertible spectral factor $\hat R_2$ without delays, such that $\hat R_2 = (\hat R_2)_a + R_{20}$, where $R_{20}$ is a constant unitary matrix, is in $\mathrm{Mat}(\hat{\mathcal{A}}_-)$ together with its inverse. It follows that the square matrix-valued function $\hat R := \hat R_2 \hat R_1 \in \mathrm{Mat}(\hat{\mathcal{A}}_-)$ is a (right) spectral factor, invertible in $\mathrm{Mat}(\hat{\mathcal{A}}_-)$, of the spectral density $\hat F$. Finally, the uniqueness of $\hat R$ up to left multiplication by a constant unitary matrix holds by Lemma 3.1. This concludes the proof of Theorem 2.1. □

Remark 3.1 Without loss of generality, a matrix spectral density $\hat F$ which is given as in Theorem 2.1 and which is coercive on the imaginary axis, i.e. such that (1)-(3) and (5) hold, is close to the identity matrix, i.e. such that

  $\|\hat F - I\|_\infty := \sup\{\|\hat F(j\omega) - I\| : \omega \in \mathbb{R}\} < 1$.

Indeed, since $\hat{\mathcal{A}}_- \subset H^\infty$, it follows from (1)-(3) and (5) that $\epsilon I \le \hat F(j\omega) \le \mu I$ for all $\omega \in \mathbb{R}$, for some $\mu \ge \epsilon > 0$. Hence $\hat F$ can be rewritten as $\hat F = 2^{-1}(\epsilon + \mu)(I + \hat G)$, where $\hat G := 2(\epsilon + \mu)^{-1} \hat F - I$ is such that $\|\hat G\|_\infty \le (\mu - \epsilon)(\mu + \epsilon)^{-1} < 1$. The idea of factorizing a spectral density close to the identity is paramount in spectral factorization theory and has been extensively used in the literature, see e.g. [35], [36], [16], [2], [26], [17], [39] and [24]. In particular the proof of [2, Theorem 3] (which is used in the proof of Lemma 3.3) is based on this idea, which leads to a conceptual method for the computation of spectral factors; see also Subsection 4.2.
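The normalization of Remark 3.1 is easily checked numerically; the sketch below (ours; sampling-based, so the constants are grid estimates) rescales a sampled hermitian density to the close-to-identity form:

```python
# Numerical sketch (our construction): given samples of a hermitian spectral
# density F(jw) on a frequency grid, estimate eps*I <= F(jw) <= mu*I and
# rescale F = (eps+mu)/2 * (I + G) with ||G(jw)|| <= (mu-eps)/(mu+eps) < 1.
import numpy as np

def rescale_close_to_identity(F_samples):
    """F_samples: array of shape (m, n, n), hermitian PSD matrices F(jw_k)."""
    eigs = np.concatenate([np.linalg.eigvalsh(F) for F in F_samples])
    eps, mu = eigs.min(), eigs.max()               # coercivity / boundedness bounds
    c = 0.5 * (eps + mu)                           # scaling constant
    G = F_samples / c - np.eye(F_samples.shape[1])
    gain = max(np.linalg.norm(Gk, 2) for Gk in G)  # sup of spectral norms
    assert gain <= (mu - eps) / (mu + eps) + 1e-12
    return c, G, gain

# Example: F(jw) = diag(2, 3 + cos(w)) sampled on a grid
w = np.linspace(-20, 20, 401)
F = np.array([np.diag([2.0, 3.0 + np.cos(wk)]) for wk in w])
c, G, gain = rescale_close_to_identity(F)
print(c, gain)  # approx 3.0 and 1/3
```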

3.2 Proof of Theorem 2.2

This proof is based on the following two lemmas.

Lemma 3.5 [Right coprime fraction] Every transfer matrix $\hat P$ in $\mathrm{Mat}(\hat{\mathcal{B}})$ has a right coprime fraction $(\hat N, \hat D)$ in $\mathrm{Mat}(\widehat{LA}^+(\sigma)) \subset \mathrm{Mat}(\hat{\mathcal{A}}_-)$ for some $\sigma < 0$, where

  $N(t) = N_a(t) + \sum_{i=0}^{\infty} N_i\, \delta(t - t_i)$ and $D(t) = D_a(t) + I\delta(t)$,

with $N_a(\cdot)$ and $D_a(\cdot)$ in $\mathrm{Mat}(L_{1,\sigma})$ for some $\sigma < 0$. This structure is necessary as soon as one requires the denominator distribution $D(\cdot)$ to have no delayed impulses, with a singular atomic part $D_{sa}(\cdot) = I\delta(\cdot)$. □

Proof: This result follows e.g. from the proof of [11, Theorem 6] or similar results in [7, Theorem 2.1], [15, Chapter 7]. □

Lemma 3.6 [Spectral factorization of a coprime fraction spectral density] Let $\hat P \in \mathrm{Mat}(\hat{\mathcal{B}})$ have a right coprime fraction $(\hat N, \hat D)$ in $\mathrm{Mat}(\hat{\mathcal{A}}_-)$ given, without loss of generality, as in Lemma 3.5. Then the right coprime fraction power spectral density $\hat F$ given by (8) has a spectral factor $\hat R$ invertible in $\mathrm{Mat}(\hat{\mathcal{A}}_-)$. □

Proof: It follows from identity (8) and from the fact that the right coprime fraction $(\hat N, \hat D)$ is in $\mathrm{Mat}(\widehat{LA}^+(\sigma))$ for some $\sigma < 0$ (see Lemma 3.5) that $\hat F$ satisfies conditions (1)-(3). In view of Theorem 2.1 and the proof of Lemma 3.2, it remains to be shown that

  $\inf\{\det \hat F(j\omega) : \omega \in \mathbb{R}\} > 0$.  (26)

Since $(\hat N, \hat D)$ is a right coprime fraction, given as in Lemma 3.5,

  $\hat D_{sa} \equiv I$,  (37)

and

  $\begin{bmatrix} \hat D(j\omega) \\ \hat N(j\omega) \end{bmatrix}$ has full column rank for all $\omega \in \mathbb{R}$.  (38)

Now (37) implies that

  $\hat F_{sa}(j\omega) = (\hat N_{sa}^* \hat N_{sa} + \hat D_{sa}^* \hat D_{sa})(j\omega) = (\hat N_{sa}^* \hat N_{sa})(j\omega) + I > 0$ for all $\omega \in \mathbb{R}$,

whence

  $(\det \hat F)_{sa}(j\omega) = \det \hat F_{sa}(j\omega) > 0$ for all $\omega \in \mathbb{R}$,

or equivalently, by Fact 2a(iv), there exists an $\Omega > 0$ such that

  $\inf\{\det \hat F(j\omega) : |\omega| > \Omega\} > 0$.  (39)

Furthermore, in view of (38),

  $\det \hat F(j\omega) > 0$ for all $\omega \in \mathbb{R}$,  (40)

since by (37),

  $\hat F(j\omega) = [\hat D(j\omega)^*\ \hat N(j\omega)^*]\, \begin{bmatrix} \hat D(j\omega) \\ \hat N(j\omega) \end{bmatrix} \ge 0$ for all $\omega \in \mathbb{R}$.

Hence condition (26) holds. □

Now consider any right coprime fraction $(\hat N, \hat D)$ of $\hat P$, such that $(\hat N, \hat D) \in \mathrm{Mat}(\hat{\mathcal{A}}_-)$ is given as in Lemma 3.5. By Lemma 3.6, the right coprime fraction power spectral density $\hat F = \hat N_* \hat N + \hat D_* \hat D$ has a spectral factor $\hat R$ invertible in $\mathrm{Mat}(\hat{\mathcal{A}}_-)$. Hence $(\hat N \hat R^{-1}, \hat D \hat R^{-1})$ is a normalized right coprime fraction of $\hat P$. This concludes the proof of Theorem 2.2. □

4 Important Particular Cases

4.1 Spectral Densities with Equally-Spaced Delays

An important particular case in applications is the spectral factorization of spectral densities with equally-spaced delays, resulting in spectral factors also having equally-spaced delays, see e.g. Example 2 below. It can be shown that a result similar to Theorem 2.1 holds in this framework: see Theorem 4.1 below. This can be done by using the algebras $LA_T(\sigma)$, $LA_T^+(\sigma)$ and $LA_T^-(\sigma)$, where $T$ is a fixed positive real number, which are closed subalgebras respectively of $LA(\sigma)$, $LA^+(\sigma)$ and $LA^-(\sigma)$, and which are defined as follows:

  $LA_T^+(\sigma) := \{f \in \mathcal{A}(\sigma) : f_{sa}(\cdot) = \sum_{i=0}^{\infty} f_i\, \delta(\cdot - iT)$, where $f_i \in \mathbb{C}$ for $i = 0, 1, \ldots\}$;  (41)

  $LA_T^-(\sigma) := \{f \in \mathrm{LTD}^- : f(-\cdot) \in LA_T^+(\sigma)\}$;  (42)

and $LA_T(\sigma)$ denotes the set of all distributions $f \in \mathrm{LTD}$ of the form $f = f_a + f_{sa} = f_a(\cdot) + \sum_{i=-\infty}^{\infty} f_i\, \delta(\cdot - iT)$, with $f_a(\cdot)$ a $\mathbb{C}$-valued function and $f_i \in \mathbb{C}$ for $i = 0, \pm 1, \ldots$, such that

  $\int_{-\infty}^{\infty} |f_a(t)| \exp(-\sigma |t|)\,dt < \infty$ and $\sum_{i=-\infty}^{\infty} |f_i| \exp(-\sigma |i| T) < \infty$.  (43)

It turns out that these algebras enjoy the same properties as those used in the previous section (see, in particular, Facts 1 and 2) and that the proof of Theorem 4.1 goes along the lines of the proof of Theorem 2.1, where one should use [18, §14] instead of [2, Theorem 3] in the proof of Lemma 3.3 (Step 1). See [39] for more detail.

Theorem 4.1 [Spectral factorization with equally-spaced delays] Let a matrix spectral density $\hat F$ be given as in Definition 2.1, such that conditions (1)-(3) hold; assume, in addition, that $\hat F$ has $T$-equally-spaced delays, i.e. $\hat F^+ \in \mathrm{Mat}(\widehat{LA}_T^+(\sigma))$ for some $\sigma < 0$. Under these conditions, $\hat F$ has an invertible spectral factor $\hat R$, such that $\hat R$ is in $\mathrm{Mat}(\hat{\mathcal{A}}_-)$ with $T$-equally-spaced delays together with its inverse, i.e. $\hat R$ and $\hat R^{-1} \in \mathrm{Mat}(\widehat{LA}_T^+(\sigma))$ for some $\sigma < 0$, if and only if $\hat F$ is (uniformly) coercive on the imaginary axis, i.e. (5) holds. Moreover, if this condition holds, then all invertible spectral factors of $\hat F$ are unique up to left multiplication by a constant unitary matrix and thus have $T$-equally-spaced delays. □

Remark 4.1 Any spectral factor $\hat R$ of a spectral density with $T$-equally-spaced delays is such that $\hat R_{sa}$ is periodic along any vertical line in $\mathbb{C}_+$. More precisely, for any $\sigma_0 \ge 0$, the complex-matrix valued function of a real variable $\omega \mapsto \hat R_{sa}(\sigma_0 + j\omega)$ is periodic of period $2\pi T^{-1}$.

Remark 4.2 A result similar to Theorem 4.1 can be stated for the case that the delays are integer multiples of a finite number of positive real numbers that are linearly independent over $\mathbb{Z}$. In that case, $\hat F_{sa}$ is quasi-periodic [1], and one can use [1, Theorem 3] for getting a result similar to Lemma 3.3. The resulting spectral factor has delays that are integer multiples of the aforementioned real numbers.

[Figure 1: Distributed parameter circuit with transmission lines. Two RLCG transmission lines without distortion ($\alpha_k = R_k/L_k = G_k/C_k$, $k = 1, 2$), driven by input voltages $u_1(t)$, $u_2(t)$, loaded by resistances $Z_1$, $Z_2$, and coupled by a summator producing the output $y(t)$.]

We conclude this section by a simple but nontrivial illustrative example.

Example 2: Consider the distributed parameter circuit depicted in Fig. 1, consisting of two RLCG-transmission lines without distortion, i.e. $R_k/L_k = G_k/C_k =: \alpha_k$, $k = 1, 2$, loaded by resistances $Z_k$, $k = 1, 2$, and coupled by an ideal summator realized by an operational amplifier. The inputs (controls) are the voltages $u_1(t)$, $u_2(t)$ and the output (observation) is the measured voltage $y(t) = y_1(t) + y_2(t)$, where $y_k(t)$, $k = 1, 2$, denotes the voltage on $Z_k$ at time $t$.

It has been proved in [19, Formula (13)] (see the appendix for some detail) that the transfer function $\hat P_k$, $k = 1, 2$, of each transmission line is given by the formula

  $\hat P_k(s) = \dfrac{\hat y_k(s)}{\hat u_k(s)} = \dfrac{(1 + \rho_k)\, \lambda_k\, e^{-s r_k}}{1 + \rho_k \lambda_k^2\, e^{-2 s r_k}}$, $k = 1, 2$,

where

  $z_k = \sqrt{L_k / C_k}$, $r_k = \dfrac{1}{v_k} = \sqrt{L_k C_k}$, $\rho_k = \dfrac{Z_k - z_k}{Z_k + z_k}$, and $\lambda_k = e^{-\alpha_k r_k}$.

The parameters $z_k$, $v_k$ and $\rho_k$ are called respectively the wave impedance, the velocity of the wave propagation and the reflection coefficient. Now, since $y(t) = y_1(t) + y_2(t)$, the circuit has the following transfer function matrix

  $\hat P(s) = [\hat P_1(s),\ \hat P_2(s)] = \left[\dfrac{(1+\rho_1)\lambda_1\, e^{-s r_1}}{1 + \rho_1 \lambda_1^2\, e^{-2 s r_1}},\ \dfrac{(1+\rho_2)\lambda_2\, e^{-s r_2}}{1 + \rho_2 \lambda_2^2\, e^{-2 s r_2}}\right]$.

Since $|\rho_k| \lambda_k^2 < 1$, $k = 1, 2$ (see [19, text after formula (9)]), the following geometric series expansion holds:

  $\hat P_k(s) = (1 + \rho_k)\lambda_k\, e^{-s r_k} \sum_{n=0}^{\infty} \left(-\rho_k \lambda_k^2\right)^n e^{-2 r_k n s}$.

By applying the inverse Laplace transform to the last identity, it follows that

  $P_k(t) = (1 + \rho_k)\lambda_k \sum_{n=0}^{\infty} \left(-\rho_k \lambda_k^2\right)^n \delta(t - (2n+1) r_k)$.

Since each $P_k$ has impulses located at the points $(2n+1) r_k$, $n \ge 0$, the impulse response matrix $P(t)$ will have equally-spaced delayed impulses iff $r_1$ and $r_2$ are commensurate (i.e. rationally related), i.e. the velocities of the wave propagation in the two lines are commensurate. More precisely, if $r_1$ and $r_2$ are rationally related such that $r_1 = q\, r_2$ for some rational number $q = n_q / d_q$, where $n_q$ and $d_q$ are integers, then $P(t)$ has $T$-equally-spaced delayed impulses with $T = r_1 / n_q = r_2 / d_q$. Moreover $\hat P$ is proper-stable; more precisely, $\hat P = \hat P_{sa}$ is in $\hat{\mathcal{A}}_-^{1 \times 2}$. It follows that the pair $(\hat N, \hat D) := (\hat P, I)$, where $I$ is the two-by-two identity matrix, is a right coprime fraction of $\hat P$. By the proof of Theorem 2.2 and by Theorem 4.1, the corresponding right coprime fraction power spectral density $\hat F := \hat N_* \hat N + \hat D_* \hat D = I + \hat P_* \hat P = \hat F_{sa}$ has a spectral factor $\hat R = \hat R_{sa} = \sum_{k=0}^{\infty} R_k \exp(-\,\cdot\, kT)$ invertible in $\hat{\mathcal{A}}_-^{2 \times 2}$. In addition the power spectral density $\hat F$ reads $\hat F(s) = \sum_{i=-\infty}^{\infty} F_i \exp(-s i T)$, such that, under the conformal mapping transformation $z = \exp(-sT)$, an approximate spectral factor $\hat R$ of $\hat F$ can be computed e.g. by a Bauer-type method [40], [39], by a Newton-type method [37], [38], or via a discrete-time Riccati equation, see e.g. [28].
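The Bauer-type route can be sketched in a few lines of Python (our illustration; the helper name and truncation parameter are ours). The convention below yields $F(z) = R(z) R(z)^*$ on $|z| = 1$; the adjoint-ordered factorization $F = R_* R$ used in this paper follows by applying the routine to $F(z)^T$ and transposing the resulting coefficients.

```python
# Bauer-type spectral factorization sketch. Assumptions: F(z) = sum_{|i|<=m} F_i z^i,
# hermitian on |z| = 1 (i.e. F_{-i} = F_i^*) and coercive (F(z) > 0 on the circle),
# as obtained in Example 2 after truncating the geometric expansion of F.
import numpy as np

def bauer_factor(F_coeffs, m, n, M=200):
    """F_coeffs: dict i -> (n x n) array for |i| <= m. Returns [R_0, ..., R_m]."""
    # Assemble the banded hermitian block-Toeplitz matrix T, T_{ij} = F_{i-j}.
    T = np.zeros(((M + 1) * n, (M + 1) * n), dtype=complex)
    for i in range(M + 1):
        for j in range(max(0, i - m), min(M, i + m) + 1):
            T[i*n:(i+1)*n, j*n:(j+1)*n] = F_coeffs.get(i - j, np.zeros((n, n)))
    L = np.linalg.cholesky(T)                  # T = L L^*
    # The bottom block row of L converges to the factor coefficients as M grows.
    return [L[M*n:(M+1)*n, (M-k)*n:(M-k+1)*n] for k in range(m + 1)]

# Scalar test: F(z) = (1 + a z)(1 + a/z) = (1 + a^2) + a z + a z^{-1}, a = 0.5
a, n = 0.5, 1
F = {0: np.array([[1 + a*a]]), 1: np.array([[a]]), -1: np.array([[a]])}
R = bauer_factor(F, m=1, n=n)
print(R[0], R[1])  # approx [[1.0]] and [[0.5]], i.e. R(z) = 1 + 0.5 z
```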

Remark 4.3 Observe that, in view of Theorem 2.1 and the proof of Lemma 3.6, the commensurability assumption is not needed for the existence of an invertible spectral factor of the spectral density $\hat F$ considered in Example 2 above. Hence Theorem 2.1 provides here a tool for the investigation of spectral factors of spectral densities in the neighborhood of the spectral density above: the latter can be called a nominal spectral density, all the more so because a spectral factor of such a spectral density is computable by discrete spectral factorization algorithms such as those mentioned above.

4.2 Spectral Densities Close to the Identity

Inspired by Remark 3.1, we conclude this section by giving a sufficient condition for the existence of spectral factors with arbitrary delays. This condition concerns spectral densities which are close to the identity matrix. It leads to a conceptual method for the computation of a spectral factor, which is based on the alternating projection principle, see e.g. [35], [2, proof of Theorem 1], or equivalently on a fixed point equation leading to a causal power series expansion of the spectral factor, see e.g. [16, Section 9.5] or [13, Corollary 1.2, p. 39].

Theorem 4.2 [Spectral factorization close to the identity] Let a matrix spectral density $\hat F$ be given as in Definition 2.1, such that conditions (1)-(3) hold. Under these conditions, if $\hat F_{sa}$ is of the form

  $\hat F_{sa} = k\,(I - \hat G)$,  (44)

where $k$ is a positive constant and $\hat G$, in $\mathrm{Mat}(\widehat{LA}(\sigma))$ for some $\sigma < 0$, is such that

  $\|G\|_{1,0} < 1$,  (45)

then $\hat F$ has a spectral factor $\hat R$ invertible in $\mathrm{Mat}(\hat{\mathcal{A}}_-)$. Moreover this spectral factor is given by

  $\hat R = \hat R_2\, \hat R_1$,  (46)

where $\hat R_1$ is the spectral factor of $\hat F_{sa}$ which is given by

  $\hat R_1 = k^{1/2}\,(I - \hat S)$,  (47)

where

  $\hat S := \mathcal{L}[(N * G)^+] \in \mathrm{Mat}(\hat{\mathcal{A}}_-)$,  (48)

where $\mathcal{L}[F]$ denotes the Laplace transform of $F \in \mathrm{Mat(LTD)}$ and $\hat N \in \mathrm{Mat}(\widehat{\mathrm{LTD}}^-)$ is the solution to the fixed point equation

  $N = I\delta + (N * G)^-$,  (49)

and where $\hat R_2$ is a spectral factor of

  $\hat R_{1*}^{-1} \hat F_a \hat R_1^{-1} + I$. □  (50)

Proof: First consider, for any $\sigma_1 \le 0$, the following class of Laplace transformable distributions with support on $\mathbb{R}$:

  $A(\sigma_1) := \{f \in LA(\sigma_1) : f_a \equiv 0\}$.  (51)

Note that $A(\sigma_1)$ is a closed, i.e. Banach, subalgebra of $LA(\sigma_1)$, for any $\sigma_1 \le 0$. Now observe that $\hat F_{sa}$ is in $\mathrm{Mat}(\hat A(0))$. Therefore, by using the factorization theorem in [16, Section 9.5, pp. 211-215] in the framework of the (noncommutative) Banach algebra $\mathrm{Mat}(A(0))$ (see also e.g. [36, Section 3.7, pp. 72-81]), it follows from (44)-(45) that $\hat F_{sa}$ has a spectral factor $\hat R_1$ in $\mathrm{Mat}(\hat A(0))$, which is given by (47)-(49). Now, by arguments similar to those used in Step 2 of the proof of Lemma 3.3 (i.e. essentially by using an analytic extension technique), it can be proved that $\hat R_1$ is actually in $\mathrm{Mat}(\hat{\mathcal{A}}_-)$ together with its inverse. The final conclusion is obtained by following the lines of the proof of Theorem 2.1. □
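To make the conceptual method concrete, here is a minimal numerical sketch of ours in the $T$-equally-spaced case, where atomic parts become two-sided coefficient sequences. It uses strict projections (the whole index-0 atom is kept in the causal part) rather than the paper's symmetric half-split, and therefore renormalizes at the end by the constant hermitian matrix $D := k\,(I - \hat S)\hat N_*$, which makes $\hat R := D^{-1/2} k (I - \hat S)$ satisfy $\hat R_* \hat R = \hat F_{sa}$:

```python
# Fixed-point sketch (our discrete illustration, our conventions) of the causal
# power-series factorization of F_sa = k(I - G) under the small gain ||G||_1 < 1.
# Atomic parts are two-sided matrix sequences (index i <-> delay iT);
# (.)^- keeps indices k < 0 and (.)^+ keeps indices k >= 0.
import numpy as np

K = 40                                            # truncation window -K..K

def conv(a, b):                                   # two-sided convolution, truncated
    c = np.zeros_like(a)
    for i in range(2 * K + 1):
        for j in range(2 * K + 1):
            if 0 <= i + j - K <= 2 * K:
                c[i + j - K] += a[i] @ b[j]
    return c

def unit(n):                                      # unit element I*delta
    e = np.zeros((2 * K + 1, n, n)); e[K] = np.eye(n); return e

def factor(G, k=1.0, iters=60):
    """Return causal R with R_* R = k(I - G), via N = I + (N*G)^-, S = (N*G)^+."""
    n = G.shape[1]; I = unit(n); N = I.copy()
    for _ in range(iters):                        # Picard iteration, contraction
        NG = conv(N, G)
        N = I.copy(); N[:K] += NG[:K]             # anticausal projection (k < 0)
    NG = conv(N, G)
    ImS = I.copy(); ImS[K:] -= NG[K:]             # I - S, causal (k >= 0)
    Nst = np.conj(N[::-1]).transpose(0, 2, 1)     # parahermitian transpose N_*
    D = k * conv(ImS, Nst)[K]                     # constant hermitian matrix
    w, V = np.linalg.eigh(D)
    Dm12 = V @ np.diag(w ** -0.5) @ V.conj().T    # D^{-1/2}
    return k * np.einsum('ij,mjl->mil', Dm12, ImS)

# Scalar test: F = 1 - 0.4 z - 0.4 z^{-1} (z = e^{-sT}); exactly
# F = 0.8 (1 - 0.5 z)(1 - 0.5 z^{-1}), so R(z) = sqrt(0.8)(1 - 0.5 z).
G = np.zeros((2 * K + 1, 1, 1)); G[K + 1] = G[K - 1] = 0.4
R = factor(G)
print(R[K], R[K + 1])   # approx [[0.894]] and [[-0.447]]
```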

Corollary 4.1 Consider a transfer (function) matrix $\hat P \in \mathrm{Mat}(\hat{\mathcal{B}})$ given by $\hat P = \hat P_s + \hat P_u$, where $\hat P_s = \hat P_a + \hat P_{sa}$ is a proper stable transfer matrix in $\mathrm{Mat}(\hat{\mathcal{A}}_-)$ and $\hat P_u$ is a completely unstable strictly proper rational matrix. Let $\hat P$ have a right coprime fraction $(\hat N, \hat D)$ in $\mathrm{Mat}(\hat{\mathcal{A}}_-)$ given by

  $\hat N = \hat P_s\, \hat D_u + \hat N_u$, $\hat D = \hat D_u$,  (52)

where $(\hat N_u, \hat D_u)$ is a proper stable rational right coprime fraction of the unstable part $\hat P_u$ of $\hat P$ such that, without loss of generality,

  $\hat D(\infty) = \hat D_u(\infty) = I$.  (53)

Under these conditions, if

  $\|P_{sa}\|_{1,0} = \|P_{sa}\|_{A(0)} < 1$,  (54)

then the right coprime fraction power spectral density $\hat F$ given by (8) has a spectral factor $\hat R = \hat R_a + \hat R_{sa}$ invertible in $\mathrm{Mat}(\hat{\mathcal{A}}_-)$, whence $\hat P \in \mathrm{Mat}(\hat{\mathcal{B}})$ has normalized right coprime fractions in $\mathrm{Mat}(\hat{\mathcal{A}}_-)$, unique up to multiplication by a constant unitary matrix. □

Proof: Observe that, in view of (52)-(53), the singular atomic part $\hat F_{sa}$ of $\hat F$ is given by (44), with $k = 1$ and $\hat G = -(\hat P_{sa})_* \hat P_{sa}$, such that, by (54), (45) holds. The conclusion follows by Theorem 4.2. □

Remark 4.4 Recall that the norm $\|\cdot\|_{1,0}$ is an upper bound of the $H^\infty$-norm $\|\cdot\|_\infty$. It follows that the small-gain type sufficient condition (54) guarantees feedback stability robustness with respect to small delays in the feedback loop, see e.g. [4, Theorem 1] and [23, Theorem 1.1 and Remark 8.6]. Condition (54) is thus meaningful from the system theoretic point of view.

5 Conclusion

The solution to the spectral factorization problem has been analyzed for multivariable distributed parameter systems with an impulse response having an infinite number of delayed impulses with arbitrary delays. A coercivity criterion for the existence of a spectral factor has been derived in the general case and particularized to the important special case of equally-spaced delays. In the latter case, it has been applied to a system consisting of the parallel interconnection of two transmission lines without distortion. In addition, a small-gain type sufficient condition has been derived for the existence of spectral factors with arbitrary delays. The latter condition has also been indicated to be meaningful from the system theoretic point of view.

Finally, an interesting subject of future research would be the development of computational methods for spectral factorization, especially in order to get numerical procedures for solving the Linear-Quadratic optimal control problem for infinite-dimensional state-space semigroup systems with bounded or unbounded observation and/or control operators and for solving related Riccati equations.

Acknowledgment

This work was supported by the Human Capital and Mobility European programme (project number CHRXCT-930-402) and by a Research Grant (1997-2001) from the Facultes Universitaires Notre-Dame de la Paix at Namur. The authors thank Dr. Piotr Grabowski (Academy of Mining and Metallurgy, Institute of Automatics, Cracow, Poland) and Prof. L. Rodman and I.M. Spitkovsky (College of William & Mary, Dept. of Mathematics, Williamsburg, Virginia, USA); the former for helpful discussions concerning Example 2, the latter two for indications concerning the literature. They also thank the reviewers of this paper for their constructive remarks; they especially acknowledge that one of the reviewers mentioned references [2] and [26], which led to the proof of the sufficiency part of Theorem 2.1; these references were translated by Prof. G. Plotnikova (FUNDP, Dept. of Mathematics, Namur, Belgium), whose help is gratefully acknowledged.

Appendix: Transfer function of a transmission line. A transmission line model as in Example 2 is described by the following equations:

  $L_k\, \dfrac{\partial i_k}{\partial t}(x,t) = -\dfrac{\partial v_k}{\partial x}(x,t) - R_k\, i_k(x,t)$, $0 \le x \le 1$, $t \ge 0$;

  $C_k\, \dfrac{\partial v_k}{\partial t}(x,t) = -\dfrac{\partial i_k}{\partial x}(x,t) - G_k\, v_k(x,t)$, $0 \le x \le 1$, $t \ge 0$;

  $i_k(1,t)\, Z_k = v_k(1,t)$, $t \ge 0$;

  $v_k(0,t) = u_k(t)$, $t \ge 0$;

  $y_k(t) = v_k(1,t)$.  (55)

The d'Alembert solution of the first two equations above is

  $i_k(x,t) = e^{-\alpha_k t}\, \dfrac{\varphi_k(x - v_k t) - \psi_k(x + v_k t)}{2 z_k}$,  $v_k(x,t) = e^{-\alpha_k t}\, \dfrac{\varphi_k(x - v_k t) + \psi_k(x + v_k t)}{2}$,  (56)

where $\varphi_k$, $\psi_k$ are arbitrary smooth functions. Substituting (56) into the boundary conditions we get

  $\psi_k(1 + v_k t) = \rho_k\, \varphi_k(1 - v_k t)$,  (57)

  $\tfrac{1}{2}\, e^{-\alpha_k t}\, [\varphi_k(-v_k t) + \psi_k(v_k t)] = u_k(t)$,  (58)

  $y_k(t) = \tfrac{1}{2}\, e^{-\alpha_k t}\, [\varphi_k(1 - v_k t) + \psi_k(1 + v_k t)]$.  (59)

Since the first equation holds for all $t \in \mathbb{R}$, replacing $t$ by $t - r_k$ yields

  $\psi_k(v_k t) = \rho_k\, \varphi_k(2 - v_k t)$.  (60)

Taking (60) and (57) into account in (58) and (59) we get

  $\tfrac{1}{2}\, e^{-\alpha_k t}\, [\varphi_k(-v_k t) + \rho_k\, \varphi_k(2 - v_k t)] = u_k(t)$,  $y_k(t) = \tfrac{1 + \rho_k}{2}\, e^{-\alpha_k t}\, \varphi_k(1 - v_k t)$.  (61)

Now define the following new state variables:

  $x_k^2(t) := \tfrac{1}{2}\, e^{-\alpha_k t}\, \varphi_k(-v_k t)$,  $x_k^1(t) := x_k^2(t - r_k) = \tfrac{1}{2\lambda_k}\, e^{-\alpha_k t}\, \varphi_k(1 - v_k t)$.  (62)

It follows that

  $x_k^1(t) = x_k^2(t - r_k)$,  $x_k^2(t) + \rho_k \lambda_k^2\, x_k^1(t - r_k) = u_k(t)$,  $y_k(t) = (1 + \rho_k)\lambda_k\, x_k^2(t - r_k)$.  (63)

Indeed, the first equation of (63) clearly follows from the second equation of (62), and the third equation of (63) from the second equation of (61) and the second equation of (62). Substituting

  $\lambda_k^2\, x_k^1(t - r_k) = \lambda_k^2\, x_k^2(t - 2 r_k) = \tfrac{\lambda_k^2}{2}\, e^{-\alpha_k (t - 2 r_k)}\, \varphi_k(-v_k (t - 2 r_k)) = \tfrac{1}{2}\, e^{-\alpha_k t}\, \varphi_k(2 - v_k t)$

and the first equation of (62) into the first equation of (61) yields the second equation of (63). Finally the transmission line transfer function $\hat P_k$ is obtained by applying the Laplace transform to equations (63).
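As a quick numerical sanity check of the formulas above (our script, with arbitrary parameter values), one can verify that the closed-form transfer function agrees with the truncated series of delayed impulses used in Example 2:

```python
# Numerical check (arbitrary parameter values, ours): the closed-form line
# transfer function P_k(s) = (1+rho)*lam*exp(-s r)/(1 + rho*lam^2*exp(-2 s r))
# agrees with its geometric expansion
# sum_n (1+rho)*lam*(-rho*lam^2)^n * exp(-s (2n+1) r) on the imaginary axis.
import numpy as np

L, C, R, G, Z = 1.0, 1.0, 0.2, 0.2, 3.0      # distortionless: R/L == G/C
alpha = R / L                                 # attenuation exponent alpha_k
z0 = np.sqrt(L / C)                           # wave impedance z_k
r = np.sqrt(L * C)                            # one-way travel time r_k = 1/v_k
rho = (Z - z0) / (Z + z0)                     # reflection coefficient rho_k
lam = np.exp(-alpha * r)                      # per-traversal attenuation lambda_k

def P(s):                                     # closed form
    return (1 + rho) * lam * np.exp(-s * r) / (1 + rho * lam**2 * np.exp(-2 * s * r))

def P_series(s, N=50):                        # truncated impulse expansion
    n = np.arange(N)
    return (1 + rho) * lam * np.sum((-rho * lam**2) ** n * np.exp(-s * (2 * n + 1) * r))

w = 2.7                                       # arbitrary test frequency
print(abs(P(1j * w) - P_series(1j * w)))      # ~ 1e-16: the two expressions agree
```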

References

[1] R.G. Babadzhanyan and V.S. Rabinovich, Systems of integral-difference equations on the half-line, Dokl. Akad. Nauk ArmSSR, Vol. 81, No. 3, 1985, pp. 107-111. (in Russian)
[2] R.G. Babadzhanyan and V.S. Rabinovich, On the factorization of almost-periodic functional operators, Differential and Integral Equations and Complex Analysis, University Press, Elista, 1986, pp. 13-22. (in Russian)
[3] J.A. Ball, Yu.J. Karlovich, L. Rodman and I.M. Spitkovsky, Sarason interpolation and Toeplitz Corona theorem for almost periodic matrix functions, manuscript, 1998.

[4] J.F. Barman, F.M. Callier and C.A. Desoer, L2-stability and L2-instability of linear time-invariant distributed feedback systems perturbed by a small delay in the loop, IEEE Transactions on Automatic Control, Vol. AC-18, No. 5, 1973, pp. 479-484.
[5] F.M. Callier and C.A. Desoer, An algebra of transfer functions for distributed linear time-invariant systems, IEEE Transactions on Circuits and Systems, Vol. 25, 1978, pp. 651-662 (Ibidem, Vol. 26, 1979, p. 360).
[6] F.M. Callier and C.A. Desoer, Simplifications and clarifications on the paper "An algebra of transfer functions for distributed linear time-invariant systems", IEEE Transactions on Circuits and Systems, Vol. 27, 1980, pp. 320-323.
[7] F.M. Callier and C.A. Desoer, Stabilization, tracking and disturbance rejection in multivariable convolution systems, Annales de la Societe Scientifique de Bruxelles, T. 94, 1980, pp. 7-51.
[8] F.M. Callier and J. Winkin, The spectral factorization problem for SISO distributed systems, in "Modelling, robustness and sensitivity reduction in control systems" (R.F. Curtain, ed.), NATO ASI Series, Vol. F34, Springer-Verlag, Berlin Heidelberg, 1987, pp. 463-489.
[9] F.M. Callier and J. Winkin, On spectral factorization and LQ-optimal regulation for multivariable distributed systems, International Journal of Control, Vol. 52, No. 1, July 1990, pp. 55-75.
[10] F.M. Callier and J. Winkin, LQ-optimal control of infinite-dimensional systems by spectral factorization, Automatica, Vol. 28, No. 4, 1992, pp. 757-770.
[11] F.M. Callier and J. Winkin, Infinite dimensional system transfer functions, in Analysis and optimization of systems: state and frequency domain approaches to infinite-dimensional systems, R.F. Curtain, A. Bensoussan and J.L. Lions (eds.), Lecture Notes in Control and Information Sciences, Springer-Verlag, Berlin, New York, 1993, pp. 72-101.
[12] F.M. Callier and J. Winkin, Spectral factorization of a spectral density with arbitrary delays, in Open problems in mathematical systems and control theory, V.D. Blondel, E.D. Sontag, M. Vidyasagar and J.C. Willems (eds.), Springer-Verlag, London, 1999, Chapter 17, pp. 79-82.
[13] K. Clancey and I. Gohberg, Factorization of matrix functions and singular integral operators, Birkhauser Verlag, Basel, 1981.
[14] C. Corduneanu, Almost periodic functions, Interscience Publishers, John Wiley & Sons, NY, 1968.
[15] R.F. Curtain and H. Zwart, An introduction to infinite-dimensional linear systems theory, Springer-Verlag, New York, 1995.
[16] C.A. Desoer and M. Vidyasagar, Feedback systems: Input-Output properties, Academic Press, New York, 1975.
[17] I.C. Gohberg and I.A. Fel'dman, Convolution equations and projection methods for their solution, AMS Translations, Providence, RI, 1974.
[18] I.C. Gohberg and M.G. Krein, Systems of integral equations on a half line with kernels depending on the difference of arguments, AMS Translations, Vol. 2, 1960, pp. 217-287.

[19] P. Grabowski, The LQ-controller problem: An example, IMA Journal of Mathematical Control and Information, Vol. 11, 1994, pp. 355-368.
[20] G. Gripenberg, S-O. Londen and O.J. Staffans, Volterra integral and functional equations, Encyclopedia of Math. and its Appl., Vol. 34, Cambridge University Press, Cambridge, 1990.
[21] E. Hille, Analytic function theory, Vol. 1, Ginn and Co., Waltham, MA, 1959.
[22] M.G. Krein, Integral equations on a half line with kernels depending upon the difference of the arguments, AMS Translations, Vol. 22, 1962, pp. 163-288.
[23] H. Logemann, R. Rebarber and G. Weiss, Conditions for robustness and nonrobustness of the stability of feedback systems with respect to small delays in the feedback loop, SIAM Journal on Control and Optimization, Vol. 34, 1996, pp. 572-600.
[24] L. Rodman, I.M. Spitkovsky and H.J. Woerdeman, Caratheodory-Toeplitz and Nehari problems for matrix valued almost periodic functions, Transactions of the AMS, to appear.
[25] M. Rosenblum and J. Rovnyak, Hardy classes and operator theory, Oxford University Press, New York, 1985.
[26] I.M. Spitkovsky, Factorization of almost periodic matrix-valued functions, Mathematical Notes, Vol. 45, 1989, pp. 482-488.
[27] O.J. Staffans, Quadratic optimal control of stable systems through spectral factorization, Math. Control Signals Systems, Vol. 8, 1995, pp. 167-197.
[28] O.J. Staffans, On the discrete and continuous time infinite-dimensional algebraic Riccati equations, Systems and Control Letters, Ser. A, No. 178, 1996.
[29] O.J. Staffans, Quadratic optimal control through coprime and spectral factorizations, Abo Akademi Reports on Computer Science and Mathematics, Vol. 29, 1996, pp. 131-138.
[30] O.J. Staffans, Coprime factorizations and well-posed linear systems, submitted to SIAM Journal on Control and Optimization, 1997.
[31] O.J. Staffans, Quadratic optimal control of well-posed linear systems, submitted to SIAM Journal on Control and Optimization, 1997.
[32] M. Vidyasagar, Control system synthesis: A factorization approach, MIT Press, Cambridge, MA, 1985.
[33] M. Weiss, Riccati equations in Hilbert spaces: A Popov function approach, Doctoral Thesis, University of Groningen (NL), 1994.
[34] M. Weiss and G. Weiss, Optimal control of stable weakly regular linear systems, Math. Control Signals Systems, Vol. 10, 1997, pp. 287-330.
[35] N. Wiener and P. Masani, The prediction theory of multivariate stochastic processes, II: The linear predictor, Acta Math., Vol. 99, 1958, pp. 93-137.
[36] J.C. Willems, The analysis of feedback systems, The M.I.T. Press, Cambridge, MA, 1971.

[37] G.T. Wilson, The factorization of matricial spectral densities, SIAM Journal on Applied Mathematics, Vol. 23, 1972, pp. 420-426.
[38] G.T. Wilson, A convergence theorem for spectral factorization, Journal of Multivariate Analysis, Vol. 8, 1978, pp. 222-232.
[39] J. Winkin, Spectral factorization and feedback control for infinite-dimensional systems, Doctoral Thesis, Department of Mathematics, Facultes Universitaires Notre-Dame de la Paix, Namur (Belgium), May 1989.
[40] D.C. Youla and N.N. Kazanjian, Bauer-type factorization of positive matrices and the theory of matrix polynomials orthogonal on the unit circle, IEEE Transactions on Circuits and Systems, Vol. 25, 1978, pp. 57-69.

Facultes Universitaires Notre-Dame de la Paix, Department of Mathematics, Rempart de la Vierge 8, B-5000 Namur, Belgium; e-mail: [email protected], [email protected]

Mathematics Subject Classification: 93C05, 93C35, 93C22, 93C80, 49N10, 93D09
