
IEEE TRANSACTIONS ON RELIABILITY, VOL. R-21, NO. 4, NOVEMBER 1972


Reliability Optimization of a Series-Parallel System

KRISHNA B. MISRA

Abstract-A mathematical model is formulated for optimizing the reliability of a system subject to given linear constraints; the system has several stages in series; each stage has parallel redundancy to improve the reliability. Part I shows a new way to transform the model of constrained optimization to a saddle point problem by using Lagrange multipliers. Conditions are derived for maximizing the reliability function; Newton's method is used to solve the resulting multidimensional nonlinear algebraic equations. Further modifications are provided to avoid inverting the large Jacobian matrices; therefore this method is practical for large systems. Part II shows how to transform the model of constrained optimization to a multistage decision process and uses the Maximum Principle to arrive at the optimal decision. This approach is easy to apply, formulate, and program. The solution can be obtained without fear of nonconvergence (very often experienced with earlier methods), besides providing considerable saving in computer time. Design alternatives can be easily considered.

Reader Aids:
Purpose: Widen state of the art
Special math needed for derivations: Statistics, matrices
Special math needed for results: None
Results useful to: Theoretically inclined reliability engineers

INTRODUCTION

1. General

This paper treats a system with many stages in series; descriptions and analyses of such networks are found in [1]-[4]. To build high reliability into a system, a design engineer usually resorts to redundant units for each stage, but must stay within the resources available, i.e., the constraints imposed on the design, such as cost and volume. The optimum redundancy depends on the reliability, cost, etc., of each stage. Each of the resource types (e.g., cost and volume) is an increasing function of the reliability of a unit. Therefore a design engineer must make tradeoffs between reliability and the resources. It seems reasonable to take stage-reliability (or unreliability) as a state variable and to optimize system reliability within the specified constraints. The redundancy at each stage can then be calculated using the relevant constraint and reliability data. The problem therefore is: how much reliability to assign to each stage so that the system reliability is maximized.

Manuscript received November 18, 1971; revised June 10, 1972.

The author was with the Mathematisches Institut der Technischen Universität München, 8 München 2, West Germany. He is now with the University of Roorkee, 26/3 Nitinagar, Roorkee, U.P., India.

2. System Description

We begin with the usual reliability optimization problem and modify it as necessary for Parts I and II. The following assumptions are made.

1) There are n statistically independent stages in series; there are statistically independent parallel redundant units in each stage. (In parallel redundancy, all units have the same risk of failure regardless of whether or not they are spares or active. It is sometimes called "hot reserve.")

2) There are m linear constraints on the system (such as cost, weight, and space); that is, each of the constraints can be expressed as a linear combination of the number of redundant units in each stage.

3) Good/bad is a sufficient description of each unit and each stage. Only one unit in each stage need be good for the stage to be good. No assumption is made about the hazard rates of the units, except that they are reflected in the reliability of the units.

It can easily be shown that maximizing the system reliability is equivalent to maximizing its logarithm, because the logarithm of stage-reliability is a monotonically increasing concave function of the number of redundant units in the stage. The logarithm of the system-reliability is also a concave function; for a proof, see [17].

3. Notation

R_s        System reliability, 0 < R_s < 1.
n          Total number of stages in series.
m          Total number of constraints.
i          Subscript for resource; i varies from 1 to m.
j          Subscript for stage; j varies from 1 to n.
k          Subscript; k varies from 1 to m + 1.
R_j        Reliability of the jth stage, 0 < R_j < 1.
p_j        Reliability of a unit in the jth stage, 0 < p_j < 1.
q_j        Unreliability of a unit in the jth stage; q_j = 1 - p_j.
Q_j        Unreliability of the jth stage; Q_j = 1 - R_j.
w_j        Number of parallel units in the jth stage.
a_ij       Amount of resource i consumed by one unit at stage j.
b_i        Total amount of resource i available.
c_ij       a_ij / ln q_j (they are constants); c_ij ≤ 0; these appear in the constraints. See (4).
e_i        Tolerance on b_i; e_i > 0.
d_i        b_i - Σ_j c_ij ln Q_j; these are the slacks.
ρ          Incremental factor; 0 < ρ < 1.
ν          Reduction factor; ν < 1.
λ_i        Lagrange multiplier; λ_i ≤ 0.
t          Superscript denoting the iteration number; t = 0 is the initial value assumed for the iteration.
f_j, f_{n+i}   See (7) and (8).
Σ_P, Π_P   The sum or product with regard to the index over the domain P.
Boldface   Implies the array of individual parameters; viz., y = {y_1, y_2, ..., y_n}.
*          Superscript denoting that the parameter is evaluated at the optimal point.

4. Mathematical Model

The system reliability is

R_s = \prod_j R_j = \prod_j (1 - Q_j).   (1)

The linear constraints are

\sum_j a_{ij} w_j \le b_i,  for all i.   (2)

The stage unreliability Q_j is related to the individual unreliabilities (q_j) of the units at stage j by the relationship

Q_j = q_j^{w_j}   (3a)

or

w_j = \ln Q_j / \ln q_j.   (3b)

Substituting (3b) in (2) results in (4),

\sum_j (a_{ij} / \ln q_j) \ln Q_j \le b_i,  for all i.   (4)

The problem is to assign reliability to each stage such that the system reliability is optimized, i.e.,

Maximize   S = \ln R_s = \sum_j \ln (1 - Q_j)

subject to the constraints

\sum_j c_{ij} \ln Q_j \le b_i,  for all i.   (5)

The inequalities in (5) are of the ≤ type. The w_j are calculated from the Q_j via (3b).

The objective function and the constraints are all separable concave and convex functions of Q_j, respectively. This is an important conclusion because it guarantees [14] that any local optimum found by applying the necessary conditions is also a global optimum. The conditions for which a local optimum exists are derived in the following section.
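To make the transformation concrete, the short Python/NumPy sketch below (not part of the paper; the array names and the helper score() are illustrative) builds the c_ij of (4) from the data of the example in Section 9 and evaluates the objective of (5) and the resource consumption of (2) for the no-redundancy allocation w = [1, 1, 1, 1]:

import numpy as np

a = np.array([[1.2, 2.3, 3.4, 4.5],     # resource 1 (cost) per unit, a_1j
              [5.0, 4.0, 8.0, 7.0]])    # resource 2 (weight) per unit, a_2j
b = np.array([56.0, 120.0])             # resource limits b_i
p = np.array([0.80, 0.70, 0.75, 0.85])  # unit reliabilities p_j
q = 1.0 - p                             # unit unreliabilities q_j

c = a / np.log(q)                       # transformed coefficients c_ij = a_ij / ln q_j, eq. (4)

def score(w):
    """Return (ln R_s, resource usage) for an allocation w of parallel units."""
    Q = q ** w                          # stage unreliabilities, eq. (3a)
    return np.sum(np.log(1.0 - Q)), a @ w   # objective of (5) and usage of (2)

ln_rs, usage = score(np.ones(4))
print(np.exp(ln_rs), usage)             # single units: R_s = 0.357, usage [11.4, 24.0]

Adding parallel units increases both the stage reliabilities and the resource usage, which is exactly the tradeoff the constrained problem (5) resolves.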

Part I: Lagrange Multiplier Approach

5. Constrained Optimum of a Function

An historical discussion of the Lagrange multiplier approach to reliability is given in the Appendix. The conditions for maximizing a constrained function are:

1) A necessary condition for f(ξ) to have a relative maximum at ξ* is that the partial derivatives of the Lagrangian function with respect to each of its arguments vanish.

2) If f(ξ) is strictly concave and the g_i(ξ) are convex, the conditions in 1) are both necessary and sufficient for an absolute maximum to exist.

6. Conditions for Maximizing the Reliability Function

The functions in this reliability problem satisfy all the conditions of Section 5, e.g., differentiability, concavity, and convexity. The Lagrangian function whose stationary point is to be found is

F(R, λ) = \sum_j \ln R_j + \sum_i \lambda_i \Big[ \sum_j c_{ij} \ln Q_j - b_i \Big].   (6)

Therefore the conditions can be written as

\partial F(R, λ) / \partial R_j = 1/R_j - \sum_i \lambda_i c_{ij} / Q_j = 0,  for all j,   (7a)

or, multiplying through by Q_j,

f_j \equiv Q_j / R_j - \sum_i \lambda_i c_{ij} = 0,  for all j  (represent the f_j by f_R);   (7b)

f_{n+i} \equiv \lambda_i \Big[ \sum_j c_{ij} \ln Q_j - b_i \Big] = 0,  for all i  (represent the f_{n+i} by f_λ);   (8)

together with the constraints

\sum_j c_{ij} \ln Q_j \le b_i,  for all i,   (5)

and

\lambda_i \le 0,  for all i.   (9)

Equations (5)-(9) form the basis of a solution for the optimization. These are the sufficient conditions for the type of objective function and constraints we have. The solution therefore must be found from (7) and (8) combined, and must satisfy the criteria of (5) and (9). Actually, (5) forms the stopping criterion for the iterative process. The set of n + m equations represented by (7) and (8) can be solved by Newton's method; the steps are outlined in the next section.

7. Technique for Solution

The solution to (7) and (8) is found by successive improvement of the values of R_j and λ_i through Newton's method.
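The correction step referred to below as (10) is assumed here to be the usual damped Newton update, with the reduction factor ν applied to the corrections; this is a reconstruction consistent with how (10) is used in the following text, not a quotation of it:

\Delta \xi^{t} = -\,[\mathrm{Jac}^{\,t}]^{-1}\, f(R^{t}, \lambda^{t}),
\qquad
(R^{t+1}, \lambda^{t+1}) = (R^{t}, \lambda^{t}) + \nu\, \Delta \xi^{t},
\qquad 0 < \nu \le 1,

where \Delta\xi^{t} collects the corrections \Delta R_j^{t} and \Delta\lambda_i^{t}, and f collects the n residuals of (7b) and the m residuals of (8).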


The Jacobian [Jac] of the set of n + m equations (7) and (8) can be partitioned as

[Jac] = [ J1  J2 ]
        [ J3  J4 ]

with general terms

J1 (size n x n):  \partial f_j / \partial R_j = -1/R_j^2          (diagonal)
J2 (size n x m):  \partial f_j / \partial \lambda_i = -c_{ij}
J3 (size m x n):  \partial f_{n+i} / \partial R_j = -\lambda_i c_{ij} / Q_j
J4 (size m x m):  \partial f_{n+i} / \partial \lambda_i = \sum_j c_{ij} \ln Q_j - b_i   (diagonal).

d_i is the slack at each iteration; it is also the negative of the ith diagonal element in J4. J1 and J4 are diagonal matrices; their properties are used for reducing the size of the matrix to be inverted from n + m to min{n, m}. The reduction technique is very useful in practice since usually the number of constraints (m) is much smaller than the number of constraints plus stages (n + m); it is described in Section 8.

After applying the corrections in (10), the process returns to step 1 and goes on until any one of the nonzero diagonal elements of J4 (or of J1) changes sign from negative to positive, i.e., one of the constraints is violated. Once any one of the elements of J4 changes sign, control over the accuracy in the neighborhood of the optimal solution can be maintained by providing a tolerance(1) on R_j. In other words, between the two points on either side of the constraint boundary, further refinement of the solution is possible by specifying a tolerance on the stage reliabilities. Although an element of [J4] temporarily changes sign, (8) brings the solution back within the feasible region in the next iteration. This goes on until the required accuracy is achieved within the feasible region. As a check that R_j < 1 (which usually is not violated with this approach), a reduction factor (ν < 1) in (10) can be used; one can test, in discrete steps, several values of ν until the iterative process can be advanced further. Fortunately this contingency (the need for ν) rarely arises.

(1) The tolerance is compared to the actual change in the variables. When all changes are less than the tolerance, the iteration stops.

8. Direct Building of the Inverted Matrix

The inverse of [Jac] can be built part by part and the actual inversion confined to a much smaller matrix, thereby saving computational time. Interim submatrices [C1'], [C2'], [C3'], [C4'] are first defined from J1, J2, J3, J4, and the submatrices [C1], [C2], [C3], [C4] of the inverse are then obtained from them (for example, [C2] = [C1][C2'], and the diagonal terms of [C4] are -1/d_i); the detailed terms are given in (12)-(16). The scheme of (15) and (16) is arranged so that [C1], [C2], [C3], [C4] can be stored at the original locations of [C1'], [C2'], [C3'], [C4'] while they are being calculated. Thus, without calculating the Jacobian matrix and then inverting it, the inverted matrix is calculated directly. This process cuts down, from n + m to n, the size of the maximum matrix to be inverted and can save much time and effort when there are many constraints (m is large).
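As an illustration of the kind of computation Sections 7 and 8 describe, the sketch below performs one damped Newton correction for (7b) and (8) while exploiting the diagonal J1 block, so that only an m x m system is solved. It is written in Python with NumPy; the function name and the particular elimination used (a Schur complement on J1) are illustrative assumptions, not the paper's equations (12)-(16).

import numpy as np

def newton_step(r, lam, c, b, nu=1.0):
    """One damped Newton correction for the stationarity system (7b), (8).

    r   : current stage reliabilities R_j            (length n)
    lam : current Lagrange multipliers lambda_i      (length m)
    c   : transformed constraint coefficients c_ij   (shape m x n)
    b   : resource limits b_i                        (length m)
    nu  : reduction factor applied to the corrections (0 < nu <= 1)
    """
    Q = 1.0 - r                                   # stage unreliabilities Q_j
    g = c @ np.log(Q) - b                         # sum_j c_ij ln Q_j - b_i  (= -d_i)

    f_r = Q / r - c.T @ lam                       # residuals of (7b), one per stage
    f_l = lam * g                                 # residuals of (8), one per constraint

    j1 = -1.0 / r**2                              # diagonal of J1 (n values)
    j2 = -c.T                                     # J2, n x m
    j3 = -(lam[:, None] * c) / Q[None, :]         # J3, m x n
    j4 = np.diag(g)                               # J4, m x m (diagonal)

    # Eliminate the R-corrections first; only an m x m system remains.
    s = j4 - j3 @ (j2 / j1[:, None])              # Schur complement of the diagonal J1
    rhs = -f_l + j3 @ (f_r / j1)
    d_lam = np.linalg.solve(s, rhs)
    d_r = (-f_r - j2 @ d_lam) / j1

    return r + nu * d_r, lam + nu * d_lam, g

Starting from R^0 = [0.80, 0.70, 0.75, 0.85] and λ^0 = [-0.01, -0.01], as in the example of Section 9, the step can be repeated until all corrections are smaller than the chosen tolerance.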


9. Example

Consider the following system which has four stages in series and find the configuration which has maximum reliability. The cost, weight, and reliability data of the units for each subsystem, and the system constraints, are as follows (same as System A in Part II):

Stage                  1     2     3     4     System Constraint
Resource (1) Cost      1.2   2.3   3.4   4.5    56
Resource (2) Weight    5.0   4.0   8.0   7.0   120
Reliability            0.80  0.70  0.75  0.85

The coefficient matrix [c_ij] of the constraints is

c = [ -0.7456  -1.9104  -2.4525  -2.3720 ]
    [ -3.1066  -3.3223  -5.7707  -3.6898 ].

The starting point [R^0, λ^0] was chosen as [0.80, 0.70, 0.75, 0.85; -0.01, -0.01] in the way mentioned in the text. Corrections to [R, λ] at each iteration are shown in Table 2. Table 1 shows the system reliability, the allocations, and the slacks in the resources at the end of each iteration. The optimal solution for a tolerance on ΔR_j of 10^-5 is obtained in only 6 iterations. The total time of computation on the TR440 was about 6 s, without the direct matrix-building procedure of Section 8. The allocations, when rounded off to the nearest integers, lead to an answer of w = [5, 6, 5, 4], which is also the answer provided in [17] and in Part II.

TABLE 1
VALUES OF SYSTEM RELIABILITY AND ALLOCATIONS WITH SLACKS AT THE END OF EACH ITERATION

Iteration   System Reliability    w1     w2     w3     w4     d1 (Cost)   d2 (Weight)
1           0.785890              1.95   1.97   1.94   1.88   34.05        73.65
2           0.979050              3.67   3.84   3.64   3.22   15.85        34.56
3           0.997607              4.89   6.16   5.11   3.91    0.90         2.47
4           0.997925              4.98   6.36   5.21   3.95   -0.14         0.22
5           0.997894              5.18   6.27   5.23   3.87    0.10        -0.04
6           0.997914              5.11   6.30   5.23   3.90    0.0155       0.0098

TABLE 2
CORRECTIONS TO R_j AND λ_i AT EACH ITERATION
(Columns: iteration; multiplying factor, from 10^-1 at iteration 1 down to 10^-5 at iteration 6; ΔR_1, ΔR_2, ΔR_3, ΔR_4; Δλ_1, Δλ_2. Note: the factor multiplies all the ΔR_j and Δλ_i in that row.)

The Jacobian matrices at the beginning of the first and at the end of the sixth iteration are shown in Figs. 1(a) and 1(b).

Fig. 1. (a) Jacobian matrix at the beginning of the 1st iteration. (b) Jacobian matrix at the end of the 6th iteration.

From the elements [Jac]_{5,5} and [Jac]_{6,6} of Fig. 1(b), about -0.0155 and -0.0097, it is apparent that almost all resources are consumed, since these elements are the exceedances (negative slacks) for each constraint, and the minus sign shows that the solution is within the feasible region. The stopping criterion at this step was the tolerance on ΔR_j, which was 10^-5; all ΔR_j were less than this value. The values of the corrections to R_j and λ_i at each iteration are shown in Table 2.
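A quick arithmetic check of the example data (a sketch; the NumPy code and variable names are mine, not the paper's): it recomputes c_ij = a_ij / ln q_j from the table above and confirms that the rounded allocation w = [5, 6, 5, 4] respects both resource limits.

import numpy as np

a = np.array([[1.2, 2.3, 3.4, 4.5],     # cost per unit, a_1j
              [5.0, 4.0, 8.0, 7.0]])    # weight per unit, a_2j
b = np.array([56.0, 120.0])             # system constraints
q = np.array([0.20, 0.30, 0.25, 0.15])  # unit unreliabilities q_j = 1 - p_j

c = a / np.log(q)                       # transformed coefficients c_ij = a_ij / ln q_j
print(c)                                # agrees with the 2 x 4 matrix quoted above
                                        # to within the last printed digit

w = np.array([5, 6, 5, 4])              # rounded optimal allocation
print(a @ w, b - a @ w)                 # resources used [54.8, 117.0]; both slacks positive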


TABLE 3
f EVALUATED AT THE BEGINNING OF THE ITERATIONS
(Columns: iteration count; multiplying factor (2); f_1 through f_6, i.e., the four residuals of (7b) and the two residuals of (8). Note (2): the multiplying factor applies to all the f's in that row.)

Actually, had we prescribed a tolerance on the f's rather than on the ΔR's, then we could have stopped after 3 iterations, as is clear from Table 3. The sign changes in ΔR_j or Δλ_i after 3 iterations signify that, to obtain more refinement of accuracy, we are hunting around the optimal solution. Prescribing a still finer accuracy is risky since roundoff errors can become important. We could therefore have stopped after iteration 3. A tolerance of reasonable magnitude can be specified on the f's or on the ΔR's, as desired. From Table 1 also it is obvious that the process could have been stopped after the 3rd iteration. During iterations 4 and 5, we were on the other side of the constraint boundary in an effort to get finer accuracy. Finally, after 6 iterations, we obtained a solution within the feasible region and with the accuracy desired. Equation (8) was responsible for the closed-form solution. The advantage of the method lies in its self-corrective steps.

The final optimal point (R*, λ*) is

[0.999735, 0.999494, 0.999294, 0.999388; -0.00019994, -0.00003730]

with R_s = 0.997914.

10. Conclusions for Part I

It has been amply shown that the Lagrange multiplier technique can be applied efficiently to the reliability model proposed in the Introduction. That model is well suited for formulation as a saddle point problem. The use of Newton's method requires very few iterations, and one need only invert a matrix whose size is the lesser of the number of stages in the series system and the number of constraints. Choosing a starting point is easy. The method is particularly simple to use and does not require complicated formulation as with some other methods. A control over the accuracy is easily available.
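As a numerical cross-check of the conditions of Section 6 (a sketch in Python/NumPy, not taken from the paper), the quoted optimum can be substituted into (7b): the stage values Q_j/R_j should equal Σ_i λ_i c_ij.

import numpy as np

c = np.array([[-0.7456, -1.9104, -2.4525, -2.3720],
              [-3.1066, -3.3223, -5.7707, -3.6898]])
r_opt = np.array([0.999735, 0.999494, 0.999294, 0.999388])
lam_opt = np.array([-0.00019994, -0.00003730])

q_opt = 1.0 - r_opt
print(q_opt / r_opt)          # left-hand side of (7b), stage by stage
print(c.T @ lam_opt)          # right-hand side sum_i lambda_i c_ij
print(np.prod(r_opt))         # system reliability, about 0.99791

The two printed vectors agree to about six decimal places, which is consistent with the number of digits quoted for R* and λ*.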

Part II: Maximum Principle Approach

11. General Considerations

The formulation of the Maximum Principle approach in this paper is different from earlier methods [5]-[12] in that it is very easy to use. However, this paper considers only linear constraints, while some of the references, such as [6]-[8], deal with nonlinear constraints. By transforming the problem into the form of (5), only a few algebraic calculations are required when it is solved by the Maximum Principle. The Tillman et al. [6] formulation results in a set of nonlinear transcendental equations which are solved by Newton's method. As is well known, Newton's method sometimes does not converge and therefore the method of [6] fails. Several runs were made of the same problem with the positions of two stages interchanged (mathematically, the problem remains the same) and showed failure of that method. There is no surety of getting a solution through the formulations of [5], [6], mainly due to nonconvergence or convergence to an absurd solution through Newton's method. The formulation presented here does not require Newton's method and assures a solution.

The formulation of (5) also provides an opportunity to review and consider other design possibilities. For example, by manipulating the c_ij, several design alternatives could be generated. It is possible to keep c_ij constant, yet get another design alternative, viz., a_ij / ln q_j = a*_ij / ln q*_j, where a*_ij and q*_j are new values of the constraint coefficient and of the unreliability of the same unit. Because the solution of (5) will be in terms of Q_j, it is possible (for a fixed Q_j) to adjust q_j such that a) w_j is an integer and b) c_ij is not changed within a stated tolerance. Lastly, there are the advantages of a saving in computer time, due to eliminating subroutines and the extra mathematical calculations associated with them, and of a reduction in computer memory requirements. In this way a large problem can be solved economically. A general-purpose program in Algol 60 has been written for the TR440 computer and is available from the author.

12. Equations for the Maximum Principle

The description of the following performance equations is based on the procedure described in [15, p. 96] for multidimensional processes. The state variables for the n-stage system and (m + 1)-dimensional problem (i.e., corresponding to the m resources and the system reliability) can be written as

x_{ij} = x_{i,j-1} + c_{ij} \ln Q_j,  for all i and j,   (17a)

with the obvious end-conditions

x_{i0} = 0;   x_{in} \le b_i.   (17b)

The state variable corresponding to reliability is expressed as

x_{m+1,j} = x_{m+1,j-1} + \ln (1 - Q_j)   (18a)

while

x_{m+1,0} = 0.   (18b)

The objective function to be optimized is

S = \ln R_s = \sum_k \alpha_k x_{k,n}   (19)

with all \alpha_k = 0 except \alpha_{m+1} = 1.
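A small sketch of the state recursion (17a)-(18a) in Python/NumPy (the function and its name are illustrative, not part of the paper): given trial stage unreliabilities Q_j, it accumulates the resource states x_{i,j} and the reliability state x_{m+1,j}, whose end values are checked against (17b) and used in (19).

import numpy as np

def propagate(Q, c):
    """State recursion (17a)-(18a) for trial stage unreliabilities Q_j.

    Q : stage unreliabilities Q_j                 (length n)
    c : transformed constraint coefficients c_ij  (shape m x n)

    Returns the end states x_{i,n} (to be compared with b_i per (17b))
    and x_{m+1,n} = ln R_s, the quantity maximized in (19).
    """
    m, n = c.shape
    x = np.zeros(m)        # x_{i,0} = 0
    x_rel = 0.0            # x_{m+1,0} = 0
    for j in range(n):
        x = x + c[:, j] * np.log(Q[j])        # (17a)
        x_rel = x_rel + np.log(1.0 - Q[j])    # (18a)
    return x, x_rel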


The recurrence relations for the components of a covariant vector z can be defined as

z_{i,j-1} = z_{i,j},  for each i over all j,   (20a)

z_{m+1,n} = 1.   (20b)

The Hamiltonian function is

H_j = \sum_k z_{kj} x_{kj}
    = \sum_i z_{ij} \{ x_{i,j-1} + c_{ij} \ln Q_j \} + z_{m+1,j} \{ x_{m+1,j-1} + \ln (1 - Q_j) \}.   (21)

The optimal decision at each stage maximizes H_j. Setting \partial H_j / \partial Q_j = 0 in (21), with z_{m+1,j} = 1 from (20), gives the stage decisions (22)-(25) in closed form,

1 - Q_j = 1 / \Big( \sum_i c_{ij} z_{ij} + 1 \Big),

and the corresponding redundancy follows from (3b), w_j = \ln Q_j / \ln q_j. This separable form of Q_j makes unnecessary the use of Newton's method. Had the approach of [6] been used and the decision variable w_j (the number of parallel units) been employed for the formulation, the expressions for z_{i,j-1} and w_j would involve terms in (q_j)^{w_j} and \ln q_j; see (26) and (27). Equation (27) cannot be solved without the use of Newton's method.

TABLE 4
ANALYSIS OF RESULTS OF PART II

... computation when a slack has fallen below the minimum for any constraint. Step 1. b) Choose the increment factor ρ (0 <
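A sketch of the stage decision implied by the maximality condition: the Python/NumPy code below implements the closed form obtained from \partial H_j / \partial Q_j = 0 in (21) with z_{m+1,j} = 1, which is assumed here to be what (22)-(25) compute; the names are mine, and the covariant components z_i must still be chosen (e.g., by the incremental search indicated in the computational procedure) so that the end conditions (17b) hold.

import numpy as np

def stage_decisions(z, c, q):
    """Closed-form stage decisions from the maximality condition on (21),
    taking z_{m+1,j} = 1 from (20b).

    z : covariant components z_i for the m resources (constant over j by (20a))
    c : transformed coefficients c_ij  (shape m x n)
    q : unit unreliabilities q_j       (length n)

    Returns the stage unreliabilities Q_j and the continuous redundancies
    w_j = ln Q_j / ln q_j of (3b).
    """
    s = z @ c                    # sum_i z_i c_ij, one value per stage
    Q = s / (1.0 + s)            # maximizer of H_j over Q_j
    w = np.log(Q) / np.log(q)
    return Q, w

With z_i ≤ 0 and c_ij ≤ 0, the quantity Σ_i z_i c_ij is nonnegative, so each Q_j automatically falls in [0, 1).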