Parametric approach to optimal control


Optim Lett (2012) 6:1303–1316 DOI 10.1007/s11590-011-0377-0 ORIGINAL PAPER

Parametric approach to optimal control A. Radwan · O. Vasilieva · R. Enkhbat · A. Griewank · J. Guddat

Received: 28 June 2011 / Accepted: 27 July 2011 / Published online: 23 August 2011 © Springer-Verlag 2011

Abstract We consider the optimal control problem from the viewpoint of parametric optimization. We examine two cases of parameterized problems: in the first, the objective functional and the system dynamics depend on a vector parameter; in the second, time t itself is treated as the parameter. We also show how to apply parametric optimization techniques, such as pathfollowing methods, to find a nominal optimal control path.

Keywords Optimal control · Parametric optimization · Pathfollowing method · KKT conditions

A. Radwan (B) · A. Griewank · J. Guddat
Humboldt University, Berlin, Germany
e-mail: [email protected]
A. Griewank e-mail: [email protected]
J. Guddat e-mail: [email protected]
O. Vasilieva
Universidad del Valle, Cali, Colombia
e-mail: [email protected]
R. Enkhbat
National University of Mongolia, Ulaanbaatar, Mongolia
e-mail: [email protected]

1 Introduction

Parametric optimization offers very useful techniques for solving optimization problems in finite-dimensional spaces whenever the objective function depends continuously on some unknown parameter. These techniques yield a minimizing (or maximizing) curve that depends continuously on the original parameter.


On the other hand, traditional techniques for solving optimal control problems rely on finding the nominal control trajectory that minimizes the Hamiltonian at each instant along the time segment. Such a trajectory should be continuously time-dependent as well. Based on this argument, we can establish a bond between these two types of optimization and will finally show how to apply parametric optimization techniques for solving optimal control problems. There are many works devoted to the theory and methods of optimal control (see, e.g., [1,2,7,8]).

The paper is organized as follows: Sect. 2 is devoted to the basic problem of optimal control and traditional approaches for solving it. The optimal control problem with a parameter in the objective functional is considered in Sect. 3. Section 4 examines the application of parametric optimization to optimal control problems. The implementation scheme of the proposed methods is discussed in Sect. 5.

2 Basic optimal control problem

The basic problem of optimal control can be formulated as follows: find a control that minimizes the objective functional

min J(u) = ∫_{t0}^{t1} f0(x, u, t) dt    (1)

subject to

ẋ = f(x, u, t),  x(t0) = x0    (2)

u(t) ∈ U,  t ∈ T = [t0, t1]    (3)

where T is fixed. System (2) describes the connection between the state variable x(t) ∈ Rⁿ and the control variable u(t) ∈ Rʳ at each t ∈ T, and u ∈ PCʳ(T). Here x0 ∈ Rⁿ is a given vector and U is a set in Rʳ that specifies the constraints imposed on all admissible control functions. The vector function f and the scalar function f0 are continuous together with their partial derivatives with respect to x for all admissible controls u ∈ U.

The traditional approach for solving problem (1)–(3) consists in applying the necessary condition of optimality in the form of Pontryagin's maximum principle, that is: if u*(t) is optimal in problem (1)–(3), then it must satisfy the maximum condition

H(ψ*, x*, u*, t) = max_{v∈U} H(ψ*, x*, v, t)  for almost all t ∈ T    (4)

where

H(ψ, x, u, t) = ⟨ψ(t), f(x, u, t)⟩ − f0(x, u, t)

denotes the Hamiltonian function and ψ(t) is a solution of the conjugate system

ψ̇ = −∂H(ψ, x, u, t)/∂x,  ψ(t1) = 0    (5)

while x* and ψ* are the solutions of (2) and (5) for u = u*(t), respectively.

The maximum principle (4) is a key feature for many successive approximation algorithms. These algorithms generate a sequence of admissible controls {uᵏ} that is relaxational, in the sense that J(uᵏ⁺¹) < J(uᵏ), and minimizing, that is, ‖uᵏ(t) − u*(t)‖ → 0 as k → ∞ for almost all t ∈ T. The standard technique for constructing uᵏ⁺¹ out of uᵏ relies on the so-called nominal optimal control defined by

ûᵏ(t) = arg max_{v∈U} H(ψᵏ, xᵏ, v, t)  for almost all t ∈ T.    (6)
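As an illustration (not from the paper), consider the ball constraint U = {u ∈ R² : u1² + u2² ≤ β²} and a Hamiltonian that is quadratic in u, say H = −½(u1² + u2²) + ψ1 u1 + ψ2 u2 plus terms independent of u; then the pointwise maximizer in (6) is simply the Euclidean projection of ψᵏ(t) onto U. A minimal sketch, with an assumed adjoint trajectory:

```python
import math

def nominal_control(psi, beta):
    """Pointwise maximizer of H(u) = -0.5*||u||^2 + <psi, u> over ||u|| <= beta.

    The unconstrained maximizer is u = psi; if it violates the ball
    constraint, the maximizer is the projection beta * psi / ||psi||.
    """
    norm = math.hypot(*psi)
    if norm <= beta:
        return tuple(psi)
    return tuple(beta * p / norm for p in psi)

# Hypothetical adjoint trajectory psi^k(t) sampled on a time grid:
psi_of_t = lambda t: (math.sin(t), math.cos(t))
grid = [i * math.pi / 10 for i in range(11)]
u_hat = [nominal_control(psi_of_t(t), beta=0.5) for t in grid]

# Every nominal control respects the constraint u1^2 + u2^2 <= beta^2.
assert all(u[0] ** 2 + u[1] ** 2 <= 0.5 ** 2 + 1e-12 for u in u_hat)
```

For constraint sets without such a closed-form projection, this pointwise step is exactly where a general-purpose parametric solver becomes necessary, which is the point developed in Sect. 4.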

Then uᵏ⁺¹ is constructed using the needle-shaped variation of control functions:

uᵏ⁺¹(t) = ûᵏ(t) for t ∈ Tk(ε),  uᵏ⁺¹(t) = uᵏ(t) for t ∈ T \ Tk(ε).

Here the domain of the variation Tk(ε) can be constructed in different ways [12,13]. Apparently, the majority of works dealing with successive approximation methods based on the maximum principle do not pay much attention to the solution of the auxiliary problem (6) and simply suppose that the structure of the set U admits a "clear-cut solution" of this problem. Essentially, this is only possible in very few cases, such as:

– The Hamiltonian function H(ψ, x, u, t) is linear with respect to u.
– U is a static (time-invariant) set of very simple structure; for example, it consists of isolated points or has a polyhedral structure, that is, αi ≤ ui(t) ≤ βi, i = 1, 2, ..., r.

3 Optimal control problem with parameters

Consider the optimal control problem:

min J(u) = ∫_{t0}^{t1} f0(x, u, τ) dt    (7)

ẋi = fi(x, u, τ),  i = 1, 2, ..., n    (8)

x(t0) = x0,  x(t1) = x1,  u ∈ U,  t ∈ [t0, t1]    (9)


where x0, x1 and t0 are given, U is a subset of Rʳ, x(t) ∈ Rⁿ, u(t) ∈ U ⊂ Rʳ, t ∈ [t0, t1], and τ ∈ Rˢ is a vector parameter. The functions fi, together with the partial derivatives ∂fi/∂τj, i = 0, ..., n, j = 1, ..., s, are continuous on Rⁿ × U × Rˢ. Assume that the admissible controls are piecewise continuous functions on [t0, t1].

Theorem 1 [9,10] Let u(t), t0 ≤ t ≤ t1, and τ = (τ1, τ2, ..., τs) be an admissible control and a parameter vector with corresponding path x = x(t) satisfying x(t0) = x0 and x(t1) = x1. Then in order that (u(t), x(t), τ), t0 ≤ t ≤ t1, be optimal it is necessary that there exist a nontrivial vector ψ(t) = (ψ1(t), ..., ψn(t)) and a scalar function

H(ψ, x, u, τ) = Σ_{i=1}^{n} ψi fi(x, u, τ) − f0(x, u, τ)

such that

ẋi = ∂H(ψ, x, u, τ)/∂ψi,  ψ̇i = −∂H(ψ, x, u, τ)/∂xi,  i = 1, 2, ..., n    (10)

H(ψ, x, u, τ) = max_{v∈U} H(ψ, x, v, τ),  ∀t ∈ [t0, t1]    (11)

H(ψ(t), x(t), u(t), τ) = 0,  ∀t0 ≤ t ≤ t1    (12)

Σ_{i=1}^{n} ψi(t1) ∫_{t0}^{t1} ∂fi(x, u, τ)/∂τj dt − ∫_{t0}^{t1} ∂f0(x, u, τ)/∂τj dt = 0,  j = 1, 2, ..., s    (13)

Proof In order to apply the Pontryagin maximum principle to problem (7)–(9), we introduce additional state variables x_{n+j} = τj, j = 1, ..., s, such that ẋ_{n+j} = 0. Then problem (7)–(9) can be written as

min J(u) = ∫_{t0}^{t1} f0(x, u, x_{n+1}, ..., x_{n+s}) dt    (14)

ẋi = fi(x, u, x_{n+1}, ..., x_{n+s}),  i = 1, 2, ..., n,  ẋ_{n+j} = 0,  j = 1, ..., s    (15)

x(t0) = x0,  x(t1) = x1,  u(t) ∈ U,  t ∈ [t0, t1]    (16)

−∞ < x_{n+j} < +∞,  j = 1, ..., s.    (17)

Now this problem can be treated as an optimal control problem with mixed-fixed boundary constraints, since (17) imposes no boundary constraints. The Hamiltonian of the problem is formed as

H(ψ, x, x_{n+1}, ..., x_{n+s}, u) = Σ_{i=1}^{n} ψi fi(x, u, x_{n+1}, ..., x_{n+s}) − f0(x, u, x_{n+1}, ..., x_{n+s}).


If we apply the maximum principle to the above problem, we obtain conditions (10)–(13), which completes the proof.

Example 1 Solve the problem

min J(u) = ∫_0^T [½(1 + τ²)(u1² + u2²) − τ] dt

subject to

ẋ1 = x2 + u1,  ẋ2 = x1 + u2,
x(0) = (−1, −1)ᵀ,  x(T) = (1, 1)ᵀ,
U = {u ∈ R² : u1² + u2² ≤ β²},  t ∈ [0, T].

Define the Hamiltonian as

H(ψ, x, u, τ) = −½(u1² + u2²)(1 + τ²) + τ + ψ1(x2 + u1) + ψ2(x1 + u2)

The co-state system is

ψ̇1 = −∂H/∂x1 = −ψ2,  ψ̇2 = −∂H/∂x2 = −ψ1

Maximizing the Hamiltonian with respect to u, we find

ψ1 = (1 + τ²)u1,  ψ2 = (1 + τ²)u2

Now define an optimal control using the maximum principle:

max_{u1²+u2²≤β²} H(ψ, x, u, τ) = max_{u1²+u2²≤β²} [−½(u1² + u2²)(1 + τ²) + τ + ψ1(x2 + u1) + ψ2(x1 + u2)].

This is a concave programming problem, and by applying the Lagrangian method we find the optimal control


u1 = u1(τ, t) = ψ1 / (1 + τ²),  u2 = u2(τ, t) = ψ2 / (1 + τ²)

Substituting the above control into the state system and taking into account the boundary conditions, we find x1 = x1(τ, t), that is,

x1 = (−1 + ((1 + cos T + sin T)/T) t (1 + τ²)) cos t − (−1 + ((1 + cos T − sin T)/T) t (1 + τ²)) sin t

and x2 = x2(τ, t), that is,

x2 = (−1 + ((1 + cos T + sin T)/T) t (1 + τ²)) sin t + (−1 + ((1 + cos T − sin T)/T) t (1 + τ²)) cos t

In order to apply the optimality conditions (10)–(13) we need to calculate the partial derivatives:

∂f0/∂τ = τ(u1² + u2²) − 1,  ∂f1/∂τ = 0,  ∂f2/∂τ = 0.

To find τ, we take into account the boundary conditions and the optimality conditions. Condition (13) can be written as follows:

ψ1(τ, T) ∫_0^T [τ(u1² + u2²) − 1] dt = 0    (18)

By solving (18), we find τ as

τ = T² / (4(1 + cos T))
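As a small numerical companion to this closed form (an illustration, not part of the paper), note that the expression is undefined at T = π, where the denominator 1 + cos T vanishes; for other horizons it evaluates directly:

```python
import math

def tau_star(T):
    """Closed-form parameter from Example 1: tau = T^2 / (4 (1 + cos T)).

    Undefined at T = pi, where 1 + cos T vanishes.
    """
    denom = 4.0 * (1.0 + math.cos(T))
    if abs(denom) < 1e-12:
        raise ValueError("tau is undefined at T = pi (denominator vanishes)")
    return T * T / denom

# For T = pi/2: tau = (pi/2)^2 / 4 = pi^2 / 16
print(round(tau_star(math.pi / 2), 4))  # 0.6169
```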


4 Application of parametric optimization to optimal control

Consider the optimal control problem

min J(u) = ∫_{t0}^{t1} f0(x, u, t) dt    (19)

ẋi = fi(x, u, t),  i = 1, 2, ..., n,  x(t0) = x0,  t ∈ [t0, t1]    (20)

u ∈ U = {u(t) ∈ Rʳ : gi(u) ≤ 0, i = 1, ..., s}    (21)

where t0, t1 and x0 are given, the functions fi, i = 0, ..., n, with partial derivatives ∂fi/∂xk, k = 1, ..., n, are continuous on Rⁿ × U × R, and gi : Rʳ → R, i = 1, ..., s, are twice continuously differentiable convex functions.

The Hamiltonian for problem (19)–(21) is written as follows:

H(ψ, x, u, t) = Σ_{i=1}^{n} ψi fi(x, u, t) − f0(x, u, t),  t ∈ [t0, t1]

with

ẋi = ∂H(ψ, x, u, t)/∂ψi,  ψ̇i = −∂H(ψ, x, u, t)/∂xi    (22)

and

x(t0) = x0,  ψ(t1) = 0    (23)

Furthermore, we assume that

[H1] the Hamiltonian is strictly concave with respect to u;
[H2] ũ(t) = arg max_{u∈U} H(ψ, x, u, t), t ∈ [t0, t1], is continuous on [t0, t1];
[H3] u ∈ Cʳ[t0, t1].

Note that ũ(t) is determined in a unique way for each t, since U is convex and H is strictly concave. Now we consider the problem of maximizing the Hamiltonian with respect to u:

max_{u∈U} H(ψ, x, u, t)  for each t ∈ [t0, t1].

Usually in the literature, u is found explicitly as a function u = u(ψ, x, t), and after substituting it into the system (22)–(23), the problem reduces to a boundary value problem.

Theorem 2 Assume that conditions [H1]–[H3] hold and problem (19)–(21) has an optimal solution (u*, x*). Then for a given ε > 0 there exists a finite discretization

t0 = τ0 < τ1 < ··· < τi < ··· < τN = t1


and an approximate solution ũ(t), t ∈ [t0, t1], such that

‖u*(ti) − ũ(ti)‖ < ε,  i = 1, 2, ..., N.

Proof Let u* be an optimal solution of problem (19)–(21). Then (u*, x*) satisfies the conditions

ẋi* = ∂H(ψ*, x*, u*, t)/∂ψi,  ψ̇i* = −∂H(ψ*, x*, u*, t)/∂xi,  i = 1, 2, ..., n

where

H(ψ, x, u, t) = Σ_{i=1}^{n} ψi(t) fi(x, u, t) − f0(x, u, t),  t ∈ [t0, t1]

and (u*, x*) satisfies the maximum principle:

H(ψ*, x*, u*, t) = max_{u∈U} H(ψ*, x*, u, t),  t ∈ [t0, t1]    (24)

Now we consider problem (24) as a one-parametric maximization problem. Since H(ψ*, x*, u, t) is twice differentiable in u and assumptions [H1]–[H3] hold, we can apply Theorem 3.4.1 from [3, p. 78] to problem (24). As a result, the method PATH1 [4,6] generates a discretization t0 = τ0 < τ1 < ··· < τi < ··· < τN = t1 and corresponding points ũi = ũ(ti) such that

‖u*(ti) − ũ(ti)‖ < ε,  i = 1, 2, ..., N,

which proves the assertion.

Remark 1 Parametric optimization can also be applied to finding the nominal optimal control given in (6). It is easy to see that at each iteration k, the Hamiltonian is a scalar function of u ∈ U ⊂ Rʳ and t ∈ T = [t0, t1], that is,

G_k(u, t) = H(ψᵏ(t), xᵏ(t), u, t).

The latter states that ûᵏ(t) must be a maximizer of the problem

max_{u∈U} G_k(u, t),  t ∈ T,

which is a problem of parametric optimization as formulated in [4,11], where the independent variable t is now considered as an unknown parameter, t ∈ T = [t0, t1]. We can also consider the case when the set of admissible controls is time-varying, i.e. U = U(t), t ∈ T = [t0, t1]. In this case, the general theory of parametric optimization is also applicable for finding the nominal optimal controls.
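The discretization idea behind Theorem 2 can be illustrated by a simplified stand-in for PATH1 (not the algorithm itself): march over a grid in t and maximize the Hamiltonian at each grid point, warm-starting from the solution at the previous point. The concrete objective below, H(u, t) = −(u − sin t)² over U = [−0.5, 0.5], is a hypothetical instance chosen only so that the exact maximizer clip(sin t, −0.5, 0.5) is known:

```python
import math

def project(u, lo=-0.5, hi=0.5):
    return min(max(u, lo), hi)

def maximize_H(t, u_start, step=0.2, iters=200):
    """Projected gradient ascent on H(u, t) = -(u - sin t)^2 over [-0.5, 0.5]."""
    u = u_start
    for _ in range(iters):
        grad = -2.0 * (u - math.sin(t))   # dH/du
        u = project(u + step * grad)
    return u

# Pathfollowing flavour: march over a grid in t, warm-starting each
# maximization from the solution at the previous grid point.
N = 50
grid = [i * math.pi / N for i in range(N + 1)]
u_path = []
u = 0.0
for t in grid:
    u = maximize_H(t, u_start=u)
    u_path.append(u)

# The exact maximizer is clip(sin t, -0.5, 0.5); check the grid solution.
for t, u in zip(grid, u_path):
    assert abs(u - project(math.sin(t))) < 1e-6
```

The real PATH1/PATH2 machinery replaces the naive re-maximization with predictor–corrector steps on the KKT system, as detailed in Sect. 5.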


5 Implementation scheme and algorithm

The application of the pathfollowing method can be efficiently carried out by means of the PAFO software package described in [5]. Let (x*, u*) be an optimal process in problem (19)–(21). Introduce the function f : U × R → R,

f(u, t) = −H(ψ*, x*, u, t),  t ∈ [t0, t1],

so that (24) becomes the parametric optimization problem

min_{u∈U} f(u, t),  t ∈ [t0, t1],  U = {u ∈ Rʳ : gi(u) ≤ 0, i ∈ J},  J = {1, 2, ..., s}.    (25)

The KKT conditions for problem (25) state that

Du f(u, t) + Σ_{i∈J} μi Du gi(u) = 0
gi(u) ≤ 0,  μi ≥ 0,  μi gi(u) = 0,  i ∈ J,  t ∈ [t0, t1]

where

Du f(u, t) = (∂f(u, t)/∂u1, ..., ∂f(u, t)/∂ur).
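These conditions can be checked numerically on a concrete instance. The instance below is hypothetical (not from the paper): f(u, t) = ½‖u‖² − c·u with the single constraint g(u) = u1² + u2² − β² ≤ 0. When ‖c‖ > β the constraint is active and the Lagrangian stationarity condition u(1 + 2μ) = c yields μ = (‖c‖/β − 1)/2:

```python
import math

def kkt_point(c, beta):
    """KKT point of  min 0.5*||u||^2 - c·u  s.t.  ||u||^2 - beta^2 <= 0.

    If the unconstrained minimizer u = c is feasible, the multiplier is 0;
    otherwise the constraint is active and mu = (||c||/beta - 1) / 2.
    """
    norm_c = math.hypot(*c)
    if norm_c <= beta:
        return tuple(c), 0.0
    mu = (norm_c / beta - 1.0) / 2.0
    u = tuple(beta * ci / norm_c for ci in c)
    return u, mu

c, beta = (3.0, 4.0), 1.0          # ||c|| = 5 > beta: constraint active
u, mu = kkt_point(c, beta)

# Stationarity: Du f + mu * Du g = (u - c) + mu * 2u = 0
residual = [ui - ci + 2.0 * mu * ui for ui, ci in zip(u, c)]
assert all(abs(r) < 1e-12 for r in residual)
# Complementarity: mu * g(u) = 0 with g(u) = ||u||^2 - beta^2
assert abs(mu * (u[0] ** 2 + u[1] ** 2 - beta ** 2)) < 1e-12
```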

Consider the auxiliary parametric optimization problem

min_u f(u, t),  t ∈ [t0, t1]    (26)

subject to

gi(u) = 0,  i ∈ J̃ ⊂ J.    (27)

Let v0 = (u0(t), μ0(t)) satisfy the KKT conditions for problem (26)–(27) with J̃ = J0. This system can be written in the compact notation

F(v, t) = 0,  t ∈ [t0, t1]    (28)

where v = (u(t), μ(t)). In order to apply Newton's method to system (28), we have to solve a linear system with Dv F(v(t), t) as matrix. The same matrix is used to compute v̇(t):

Dv F(v(t), t) v̇(t) = −Dt F(v(t), t)    (29)
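A single predictor–corrector cycle built from (29) and (28) can be sketched on a deliberately simple scalar instance (assumed for illustration, far from the PAFO implementation): F(v, t) = v − cos t, whose solution path is v(t) = cos t.

```python
import math

# Toy path: F(v, t) = v - cos(t) = 0, so the exact solution is v(t) = cos t.
F = lambda v, t: v - math.cos(t)
DvF = lambda v, t: 1.0
DtF = lambda v, t: math.sin(t)

def pathfollow(t0, t1, n_steps):
    """Euler predictor via (29) plus Newton corrector on (28) along the path."""
    dt = (t1 - t0) / n_steps
    t, v = t0, math.cos(t0)             # start on the path
    for _ in range(n_steps):
        v_dot = -DtF(v, t) / DvF(v, t)  # predictor: solve DvF * v_dot = -DtF
        v, t = v + dt * v_dot, t + dt
        for _ in range(5):              # corrector: Newton steps on F(v, t) = 0
            v -= F(v, t) / DvF(v, t)
    return t, v

t_end, v_end = pathfollow(0.0, math.pi, 100)
assert abs(v_end - math.cos(t_end)) < 1e-9
```

In PATH2 below, the same predictor–corrector pattern is applied to the full KKT system, with the additional bookkeeping of active index sets.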


Therefore, using Newton's method as corrector, we have

Du F(u_i^{k_i−1}, ti)(u_i^{k_i} − u_i^{k_i−1}) = −F(u_i^{k_i−1}, ti)

and

Du F(u_i^{k_i−1}, ti) u̇_i^{k_i−1} = −Δt F(u_i^{k_i−1}, ti).

Now we can adapt the algorithm PATH1 for solving problem (26) as follows.

Algorithm PATH2

Step 1. Given u0, J0, εt, ευ, ευ̇, Δtmin, Δtmax, t0, t1; set k := 1.
Step 2. Determine a step size Δtk ∈ [Δtmin, Δtmax].
Step 3. Find an approximate KKT point υᵏ = (uᵏ, μᵏ) by solving problem (26)–(27) for t = tk with ‖υᵏ − υ(tk)‖ < ευ, υ(tk) = (u(tk), μ(tk)).
Step 4. If gj(uᵏ(tk)) < −‖Du gj(uᵏ)‖ ευ and μjᵏ > ευ for all j ∈ Jk−1, then set Jk := Jk−1 and go to Step 6.
Step 5. Find t̃ by solving the system gi(uᵏ(t)) ≤ 0, i ∈ J \ Jk−1, and μiᵏ(t) ≥ 0, i ∈ Jk−1.
Step 6. Solve system (28) approximately, i.e., find υ̃ᵏ = (ũᵏ, μ̃ᵏ) with

|t̃ − t| < εt,  ‖υ̃ᵏ − υᵏ(t̃)‖ ≤ ευ,  ‖υ̇̃ᵏ − υ̇ᵏ(t̃)‖ ≤ ευ̇.

Step 7. Form the index sets J = Jk−1 ∪ J−ᵏ, where

J−ᵏ = {i ∈ Jk−1 : gi(ũᵏ, t̃) + |Du gi(ũᵏ, t̃) u̇̃ᵏ + Dt gi(ũᵏ, t̃)| εt ≥ −εt − [‖Du gi(ũᵏ, t̃)‖ + 1] ευ},
J+ = {i ∈ Jk−1 : μ̃iᵏ ≥ |μ̇̃iᵏ| εt + ευ + εt},
J̃0 = J \ J+.

Step 8. If |J̃0| = 1, construct the index set Jk as Jk = {i : gi(ũᵏ) = 0}. Otherwise, go to the next step.


Step 9. Solve problem (26)–(27) for t = tk and set Jk := {i : gi(ũ) = 0}.
Step 10. Set k := k + 1 and go to Step 2.

We note that algorithm PATH2 generates a sequence of approximate global minimizers for problem (26). The convergence of this algorithm is given in [4].

Remark 2 The search for t̃ in Step 5 is carried out by a bisection strategy [3].

Remark 3 The choice of parameters εt, ευ, ευ̇ and Δtmin, Δtmax is made according to [3,7].

Example 2 Let us consider the same example as above:

min J(u) = ½ ∫_0^T (u1² + u2²) dt

subject to the dynamic system

ẋ1 = x2 + u1,  ẋ2 = x1 + u2,  x(0) = (−1, −1)ᵀ,  x(T) = (1, 1)ᵀ

with the additional constraint

U = {u ∈ R² : u1² + u2² ≤ β²},  t ∈ [0, T].

For this problem the Hamiltonian is

H(ψ, x, u, t) = −½(u1² + u2²) + ψ1(x2 + u1) + ψ2(x1 + u2)

and the co-state ψ(t) ∈ R² is subject to

ψ̇1 = −∂H/∂x1 = −ψ2,  ψ̇2 = −∂H/∂x2 = −ψ1.

We can calculate the solution profiles of the adjoint system:

ψ1* = ((1 + cos T + sin T)/T) sin t − ((1 + cos T − sin T)/T) cos t
ψ2* = ((1 + cos T − sin T)/T) sin t + ((1 + cos T + sin T)/T) cos t

and the states are:

x1* = (−1 + ((1 + cos T + sin T)/T) t) cos t − (−1 + ((1 + cos T − sin T)/T) t) sin t
x2* = (−1 + ((1 + cos T + sin T)/T) t) sin t + (−1 + ((1 + cos T − sin T)/T) t) cos t


and the admissible controls:

u1* = −ψ1*,  u2* = −ψ2*.

Our goal now is to construct an iterative procedure for solving Pontryagin's maximum condition

H(ψ*, x*, u*, t) = max_{v∈R², v1²+v2²≤β²} H(ψ*, x*, v, t)

where u* = (u1*, u2*) is the solution (optimal control) and x*(t) = x(t, u*), ψ*(t) = ψ(t, u*) are the corresponding profiles of the state and adjoint systems, respectively. For k = 0 we must choose u0 and evaluate {x0, ψ0}; therefore,

H(ψ0, x0, v, t) = −½(v1² + v2²) + ψ1^0(t)(x2^0(t) + v1) + ψ2^0(t)(x1^0(t) + v2),

where ψ^0 and x^0 are obtained by substituting the trigonometric profiles above, and we should now solve the parametric optimization problem

G0(v, t) = H(ψ0, x0, v, t) → max

subject to v = (υ1, υ2) ∈ R², υ1² + υ2² = β², t ∈ T = [0, π].

We can obtain the numerical description of υ1^0(t) and υ2^0(t), shown in Table 1 and in Fig. 1, drawn by means of the PAFO package as parametric curves of t ∈ T = [0, π].
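On the circle υ1² + υ2² = β² the quadratic term of G0 is the constant −½β², so the pointwise maximizer simply aligns with the coefficient vector of (v1, v2). As an illustrative sanity sketch with hypothetical values β = 1 and coefficients c = (1, 1) (these particular values are assumptions, not taken from the paper), this reproduces the order of magnitude 0.7071... seen in Table 1:

```python
import math

def maximize_on_circle(c, beta):
    """Maximize -0.5*||v||^2 + c·v over the circle v1^2 + v2^2 = beta^2.

    On the circle the quadratic term is the constant -0.5*beta^2, so the
    maximizer is the point of the circle aligned with c: v = beta*c/||c||.
    """
    norm_c = math.hypot(*c)
    return tuple(beta * ci / norm_c for ci in c)

# Hypothetical coefficients c = (1, 1) with beta = 1 (for illustration only):
v1, v2 = maximize_on_circle((1.0, 1.0), beta=1.0)
print(round(v1, 10), round(v2, 10))  # 0.7071067812 0.7071067812
```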


Table 1 Values of the control functions

t (singularities)        υ1^0            υ2^0
 0.1000000000E−02        0.7071067812    0.7071067812
 0.0000000000            0.7071068000    0.7071068000
−1.353000000             0.7071067980    0.7071067644
−2.693000000             0.7071067889    0.7071067735

Fig. 1 Control variables υ1^0(t) and υ2^0(t)

6 Conclusions

We have examined parametric optimal control problems. We applied the maximum principle to an optimal control problem with a parameter in both the objective functional and the dynamical system. We have also shown that parametric optimization techniques can be applied to maximizing the Hamiltonian under some assumptions.

Acknowledgments Amr Radwan acknowledges the support of the Cultural Affairs & Missions Sector of Egypt through a Ph.D. studies grant. Olga Vasilieva expresses her gratitude to Universidad del Valle, Colombia (research project C.I. 7758) and to the DAAD Program for university staff, which allowed her to visit the Institute of Mathematics, Humboldt University, Berlin (May–July 2008) and to start working on this paper. Rentsen Enkhbat acknowledges the support of the DFG Program for a research stay at the Institute of Mathematics, Humboldt University, Berlin (June–September 2009), during which this paper was nearly completed.

References

1. Clarke, F.H., Hiriart-Urruty, J.-B., Ledyaev, Yu.S.: On global optimality conditions for nonlinear optimal control problems. J. Glob. Optim. 13(2), 109–122 (1998)
2. Dentcheva, D., Guddat, J., Rückmann, J.-J., Wendler, K.: Pathfollowing methods in nonlinear optimization. III. Lagrange multiplier embedding. ZOR—Math. Methods Oper. Res. 41(2), 127–152 (1995)
3. Guddat, J., Guerra Vazquez, F., Jongen, H.Th.: Parametric Optimization: Singularities, Pathfollowing and Jumps. B. G. Teubner, Stuttgart (1990)
4. Guddat, J., Jongen, H.Th., Kummer, B., Nožička, F. (eds.): Parametric Optimization and Related Topics. III, Approximation & Optimization, vol. 3. Peter Lang, Frankfurt am Main (1993). Papers from the Third Conference held in Güstrow, August 30–September 5 (1991)
5. Gollmer, R., Kausmann, U., Nowack, D., Wendler, K., Bacallao Estrada, J.: Program package PAFO. Software development report, Humboldt University, Berlin (1995–2007)


6. Guddat, J., Rückmann, J.-J.: One-parametric optimization: jumps in the set of generalized critical points. Control Cybern. 23(1–2), 139–151 (1994)
7. Jongen, H.Th., Jonker, P., Twilt, F.: Critical sets in parametric optimization. Math. Program. 34(3), 333–353 (1986)
8. Nakayama, H.: Trade-off analysis using parametric optimization techniques. Eur. J. Oper. Res. 60(1), 87–98 (1992)
9. Pontryagin, L.S., Boltyanskii, V.G., Gamkrelidze, R.V., Mishchenko, E.F.: The Mathematical Theory of Optimal Processes. Interscience Publishers, John Wiley & Sons, New York (1962). Translated from the Russian by K. N. Trirogoff; edited by L. W. Neustadt
10. Pinch, E.R.: Optimal Control and the Calculus of Variations. Oxford Science Publications. The Clarendon Press, Oxford University Press, New York (1993)
11. Rohde, A., Stavroulakis, G.E.: Path-following energy optimization in unilateral contact problems. J. Glob. Optim. 6(4), 347–365 (1995)
12. Vasiliev, O.V.: Optimization Methods. Advanced Series in Mathematical Science and Engineering, vol. 5. World Federation Publishers, Atlanta, GA (1996). Translated from the Russian
13. Vasilieva, O.: Successive approximations technique for optimal control problem with boundary conditions. J. Mong. Math. Soc. 5(1), 70–85 (2001)
