Factorial experiments for sequential process


GRAZIA VICARIO (*) – DANIELE ROMANO (**)


Contents: 1. Introduction. — 2. Modellization. — 3. Variance components: two cases. 3.1. Three-phase process, one single factor per phase. — 3.2. Two-phase process with three factors (two in the first phase, one in the second). — 4. Residual error. — 5. Conclusions. Acknowledgments. References. Summary. Riassunto. Key words.

1. Introduction

The work presented here originated from a real-world problem in manufacturing. A first-tier supplier of automotive components (metal pulleys) was plagued by time-consuming adjustments of process parameters whenever a change in product specifications occurred. Viable new settings were arrived at by trial and error, with no clear-cut guidelines. Eventually it was decided to resort to a systematic approach based upon designed experiments, covering a critical stage of the manufacturing process that determines the final product's dimensions. Pulley grooves are produced by plastic deformation of a rotating blank acted upon by form tools. This stage is actually made up of a stack of four consecutive operations, each one using a different tool, with no provision for measuring critical dimensions between operations; that is, only the final outcome can be assessed. A two-level factorial experiment with eight factors was planned, with two factors at each operation, namely spindle speed and tool material. Whether the process response could be properly described by a conventional crossed-factors classification was, in the case at hand, an open question.

(*) Dipartimento di Matematica, Politecnico di Torino, e-mail: [email protected]
(**) Dipartimento di Sistemi di Produzione ed Economia dell'Azienda, Politecnico di Torino, e-mail: [email protected]

A system evolving over time might be described more properly as a stochastic process than as a single random event. However, standard techniques for stochastic processes are not suitable for modeling the problem at hand, since factors affect process evolution in a deterministic way. End performances of sequential processes are produced by modifications induced at subsequent operations and by their propagation throughout the stages, the transmission mechanism providing an inherent memory of the process. Thus experimentation on such a process can neither be broken down into a sequence of independent, individual experiments, nor be considered as a single comprehensive experiment where all the operations are pooled and the time sequence is disregarded. The problem is further compounded by the fact that pieces are available for measurement at the output of the final stage only. A statistical description was therefore built by chaining linear regression models, where the output of each stage is affected both by the factors acting on that stage and by the outcome of the preceding one, thus introducing a form of process memory. The latter defines a correlation structure among some terms of the model, providing better descriptive power at the price of added complexity in the statistical analysis. The random variability of the response, deriving from independent random errors originated within each phase and propagated towards the process output, is summarized in the model by two main components. The first covers the random effects of those control factors (from the second phase on) which act on non-homogeneous experimental units, as affected by the random errors of previous phases; the second pertains to the overall residual error, made up of variance components arising from the downstream transmission of the individual phase errors.

2. Modellization

In a multiple-phase process items progress through a time sequence of individual, ordered phases. Let φ_k, k = 1, 2, ..., K (see Fig. 1), be the k-th phase, with n_k factors x^(k) acting on the items; let Y^(k) = (y_1^(k), y_2^(k), ..., y_p^(k)), k = 1, 2, ..., K − 1, be the set of p responses at the outcome of an intermediate phase, and Y^(K) the outcome of the last phase, that is, the output of the overall process. An additive random error ε^(k), k = 1, 2, ..., K, is attached to the outcome of each phase as well. In the case at hand intermediate response measurements are not available, a situation by no means uncommon; therefore Y^(K) is the only set of responses that the experimenter can observe, and this constraint shapes the model.

Fig. 1. Block model of a multiple-phase process. A set x^(k) of n_k factors and an additive error ε^(k) are applied to each phase φ_k, with k = 1, 2, ..., K.
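To make the block model of Fig. 1 concrete, the following sketch simulates a generic multiple-phase process in which each phase output depends linearly on its own factors and on the previous output, and only the final response is recorded. It is only an illustration of the structure, not the authors' code; the number of phases, the coefficients and the error standard deviations are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_process(X_phases, coeffs, carry, sigmas):
    """Simulate one run of a K-phase sequential process.

    X_phases : list of K arrays, factor settings for each phase
    coeffs   : list of K arrays, fixed factor effects per phase
    carry    : list of K floats, weight of the previous phase output
               (carry[0] is unused because phase 1 has no predecessor)
    sigmas   : list of K floats, std. dev. of the additive phase errors
    Only the final output is returned, mirroring the fact that the
    intermediate responses cannot be observed.
    """
    y = 0.0
    for x, beta, psi, s in zip(X_phases, coeffs, carry, sigmas):
        eps = rng.normal(0.0, s)        # additive error of this phase
        y = beta @ x + psi * y + eps    # phase output, fed to the next phase
    return y                            # Y^(K), the only observable response

# Example: K = 3 phases, one two-level factor per phase (levels -1/+1).
settings = [np.array([+1.0]), np.array([-1.0]), np.array([+1.0])]
y_K = simulate_process(settings,
                       coeffs=[np.array([0.8]), np.array([0.5]), np.array([0.3])],
                       carry=[0.0, 0.9, 0.7],
                       sigmas=[0.2, 0.2, 0.2])
print(y_K)
```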

As shown in Fig. 1, the output of each phase (except for the last one) is the input of the following one, originating a stochastic link-up of responses Y which propagates throughout the process. The link-up, due to the fact that every experimental unit goes through all phases, originates a correlation between pairs of response components y_l^(h) and y_l^(k), for every l = 1, 2, ..., p, from different phases (h ≠ k, for every h, k = 1, 2, ..., K); the correlation structure among the phase outputs is translated into an analogous correlation structure among the terms of the statistical model of Y^(K). Guidelines for the derivation of the statistical model are given for the simple case of a univariate response in a two-phase process with a single factor at two levels per phase. Four homogeneous experimental units cover the four treatment combinations, the minimum number of experimental runs necessary to estimate the main effects of the factors and their interaction in an equivalent crossed-factors scheme. Complete randomization is assumed here, regarding both run order and the allotment of treatment combinations to the experimental units. The appearance of an error component dependent on the factor level, similar to the whole-plot error in a split-plot design, is thus prevented. The model's formalization in the case of restricted randomization is however attractive for practical applications; reference is made to Guseo (1997) for further considerations on this issue. In the first phase of the process a simple factorial design for a two-level factor with two replicates is used. At the output, responses y^(1) (which under the current constraints may not be observed) corresponding to pairs of units which underwent the same treatment may differ because of random variations due to the error ε^(1) occurred in that phase. Then the four experimental units are affected by the second phase factor, thus originating the four treatment combinations, according to the sequence shown in Fig. 2. The first phase factor affects the relevant output, subsequently transmitted downstream through the second phase; the second phase factor, nested in the first phase factor, has a random effect (with a non-zero mean), since it acts on non-homogeneous experimental units as altered by ε^(1).

Fig. 2. Basic factorial scheme (one factor at two levels per phase) in a two-phase process. Responses for the four experimental units U_i, i = 1, 2, 3, 4, after the first phase exhibit variations due to the effect of factor x^(1) and to random errors. Effects of factor x^(2) in the second phase therefore have a random component. Factor nesting also occurs, namely x^(2) within x^(1).
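As an illustration of the scheme of Fig. 2, the sketch below lays out the four units with the level vectors x^(1) and x^(2) used in the model that follows; the randomized run order is only a plausible reading of the complete-randomization assumption, not a prescription from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# The four experimental units of Fig. 2 and their factor levels,
# matching the vectors x(1) and x(2) used in the model below:
# within each pair sharing a level of x(1), both levels of x(2) appear,
# so x(2) is nested in x(1).
units = ["U1", "U2", "U3", "U4"]
x1 = np.array([-1, -1, +1, +1])   # first-phase factor levels
x2 = np.array([-1, +1, +1, -1])   # second-phase factor levels

# Complete randomization of the run order (illustrative only).
run_order = rng.permutation(len(units))
for run, idx in enumerate(run_order, start=1):
    print(f"run {run}: unit {units[idx]}, "
          f"x(1) = {int(x1[idx]):+d}, x(2) = {int(x2[idx]):+d}")
```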

Let us now derive a linear model for the response at the output of the second phase. A linear model for the response at the output of the first phase takes the form:

$$y^{(1)} = \mu^{(1)} 1_4 + \alpha^{(1)} x^{(1)} + \varepsilon^{(1)} \qquad (1)$$

and therefore the response at the output of the second phase may be written as:

$$y^{(2)} = \mu^{(2)} 1_4 + \beta^{(2)} x^{(2)} + \psi_1\, y^{(1)} + \psi_{12}\, x^{(2)} \circ y^{(1)} + \varepsilon^{(2)} \qquad (2)$$

where: y^(k) = (y_11^(k), y_12^(k), y_21^(k), y_22^(k))^T are the responses at the output of the first phase (not observable) for k = 1 and at the second for k = 2; 1_4 is a four-dimensional unit vector; x^(1) = (−1, −1, +1, +1)^T and x^(2) = (−1, +1, +1, −1)^T are the vectors of the levels of the factors of the first and of the second phase respectively; ε^(k) = (ε_11^(k), ε_12^(k), ε_21^(k), ε_22^(k))^T is the vector of random errors occurring in the first (k = 1) and in the second phase (k = 2); the remaining components of (2) are real constants, µ^(1) and µ^(2) expressing the overall means, α^(1) and β^(2) the factor effects in their own phase, ψ_1 and ψ_12 the effects due to transmission from one phase to the other (the notation x ∘ y indicates the Hadamard product of the vectors x and y). An additive-effect model is a logical first-approximation candidate with factorial designs. By substituting (1) into (2) we obtain:

$$y^{(2)} = (\mu^{(2)} + \psi_1\mu^{(1)}) 1_4 + \psi_1\alpha^{(1)} x^{(1)} + (\psi_{12}\mu^{(1)} + \beta^{(2)}) x^{(2)} + \psi_{12}\, x^{(2)} \circ (\alpha^{(1)} x^{(1)} + \varepsilon^{(1)}) + (\varepsilon^{(2)} + \psi_1\varepsilon^{(1)}). \qquad (3)$$

Grouping the terms on the right-hand side according to the hierarchical nature of the process, a model for the process outcome is arrived at in the form:

$$y^{(2)} = \mu 1_4 + \tilde{\alpha} x^{(1)} + b + \tilde{\varepsilon} \qquad (4)$$

where:

$$\mu = \mu^{(2)} + \psi_1\mu^{(1)}, \qquad
\tilde{\alpha} = \psi_1\alpha^{(1)}, \qquad
b = [(\beta^{(2)} + \psi_{12}\mu^{(1)}) 1_4 + \psi_{12}\alpha^{(1)} x^{(1)} + \psi_{12}\varepsilon^{(1)}] \circ x^{(2)}, \qquad
\tilde{\varepsilon} = \varepsilon^{(2)} + \psi_1\varepsilon^{(1)}. \qquad (5)$$
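A quick numerical check of the substitution leading to (3)-(5) can be done by simulating the two-phase chain directly and comparing it with the grouped form; the parameter values below are arbitrary and the snippet is only a sanity check of the algebra, not part of the original analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Arbitrary illustrative parameter values (not from the paper).
mu1, mu2 = 10.0, 5.0        # phase means
alpha1, beta2 = 1.5, 0.8    # factor effects in their own phase
psi1, psi12 = 0.9, 0.4      # transmission effects

one4 = np.ones(4)
x1 = np.array([-1., -1., +1., +1.])   # first-phase factor levels
x2 = np.array([-1., +1., +1., -1.])   # second-phase factor levels
eps1 = rng.normal(0, 0.3, 4)          # first-phase errors
eps2 = rng.normal(0, 0.3, 4)          # second-phase errors

# Chained form: eq. (1) fed into eq. (2).
y1 = mu1 * one4 + alpha1 * x1 + eps1
y2_chain = mu2 * one4 + beta2 * x2 + psi1 * y1 + psi12 * x2 * y1 + eps2

# Grouped form: eqs. (4)-(5).
mu = mu2 + psi1 * mu1
alpha_t = psi1 * alpha1
b = ((beta2 + psi12 * mu1) * one4 + psi12 * alpha1 * x1 + psi12 * eps1) * x2
eps_t = eps2 + psi1 * eps1
y2_group = mu * one4 + alpha_t * x1 + b + eps_t

assert np.allclose(y2_chain, y2_group)   # the two forms coincide
print(y2_chain)
```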

Let us now underline that the effect of the first phase factor remains deterministic, as expected; its notation α̃ is chosen to point out its propagation to the process output. The effect of the second phase factor is random (as indicated by the use of the Latin letter b), with non-zero mean E[b] = [(β^(2) + ψ_12 µ^(1)) 1_4 + ψ_12 α^(1) x^(1)] ∘ x^(2). The error term includes two sources of random variability, one due to the second phase error and the other to the first phase error propagated to the process output. The interaction effect is embodied in the second factor effect because of the nesting between the factors. To generalize these results, a multiple-phase process may be described as a one-phase process (the last one) with both fixed-effect factors, x^(1), and random-effect factors, x^(i)_rand, i = 2, ..., K (Fig. 3). All factors are sequentially nested according to a hierarchical structure corresponding to the link-up among the process phases.

Fig. 3. Block model for a multiple-phase process equivalent to that of Fig. 1, observations being taken only at the end of the last phase. Arrows with different lengths correspond to different levels of nesting among factors.

3. Variance components: two cases

This section deals with the evaluation of the expected mean squares for the factors and the experimental error in two relevant cases of the proposed model, highlighting differences from conventional models: a three-phase process and a two-phase process, both with three factors, namely A, B and C with a, b and c levels respectively, and r replications. The experiment size enables estimation of the parameters of a univariate crossed-factors model; comparison between the two analyzed models and the crossed one is therefore straightforward.

3.1. Three-phase process, one single factor per phase

Let us consider a process with three phases φ_m, m = 1, 2, 3, and three factors A, B and C acting respectively in the first, in the second and in the third phase. According to the previous considerations, factor C is nested in B and has a random effect, factor B is nested in A and has a random effect too, and factor A has a fixed effect. The following statistical model is suggested for the observed response at the exit of the last phase, consistent with the physical nature of the system:

$$y^{(3)} = \mu 1_{abcr} + \tilde{\alpha} x^{(1)} + b_{(A)} + c_{(B(A))} + \tilde{\varepsilon} \qquad (6)$$

where y^(3) = (Y_1111^(3), Y_1112^(3), ..., Y_111r^(3), Y_1121^(3), Y_1122^(3), ..., Y_ijkl^(3), ..., Y_abcr^(3))^T is the response vector at the process outcome (1); µ is the grand mean response; α̃x^(1) = α_a ⊗ (1_b ⊗ 1_c ⊗ 1_r), with ⊗ the Kronecker product and α_a = (α_1, α_2, ..., α_a)^T subject to the constraint 1_a^T α_a = 0, stands for the (fixed) effect of factor A transmitted along the subsequent stages; b_(A) and c_(B(A)) are random vectors representing the random effects of factors B and C respectively; and ε̃ is the residual error term. This last component of model (6), the error term, includes all the sources of random variability due to the three phases and can therefore be assumed normally distributed, ε̃ ~ N(0, σ² I_abcr), where I_abcr is the identity matrix of dimension abcr and σ² is the overall variance. The assumption of normal distributions with independent marginal distributions is realistic also for the two random vectors b_(A) and c_(B(A)): b_(A) ~ N(B ⊗ 1_r, σ²_(A) I_abcr) and c_(B(A)) ~ N(C ⊗ 1_r, σ²_(B(A)) I_abcr), where B and C are the expected values of the effects of the corresponding factors for each one of the abc treatments and σ²_(•) their variances. The difference between the proposed model and the classical models for nested-hierarchical designs must be underlined: in model (6) the hierarchical structure leads to a correlation among b_(A), c_(B(A)) and ε̃. In fact, denoting with ε^(ℓ)_ijkl and ε^(ℓ')_i'j'k'l' the error terms referring to two different phases ℓ and ℓ' (i.e. representative of the global random variability pooled up to the corresponding phases), it is cov[ε^(ℓ)_ijkl, ε^(ℓ')_i'j'k'l'] ≠ 0 for every i = i', j = j', k = k', l = l', with i, i' = 1, 2, ..., a, j, j' = 1, 2, ..., b, k, k' = 1, 2, ..., c, l, l' = 1, 2, ..., r, as can also be guessed from the last of (5). The correlation among the errors of sequential phases on the same experimental units entails that also cov[b_(A)ijkl, ε̃_i'j'k'l'], cov[c_(B(A))ijkl, ε̃_i'j'k'l'] and cov[b_(A)ijkl, c_(B(A))i'j'k'l'] are non-zero for the same values of the subscripts. Summarizing the results obtained under the previous assumptions, the non-zero covariances are:

$$\mathrm{cov}[b_{(A)ijkl}, \tilde{\varepsilon}_{i'j'k'l'}] = \delta_{ii'jj'kk'll'}\, K_1 \sigma_1^2$$
$$\mathrm{cov}[c_{(B(A))ijkl}, \tilde{\varepsilon}_{i'j'k'l'}] = \delta_{ii'jj'kk'll'}\, (H_1 \sigma_1^2 + H_2 \sigma_2^2)$$
$$\mathrm{cov}[b_{(A)ijkl}, c_{(B(A))i'j'k'l'}] = \delta_{ii'jj'kk'll'}\, (L_1 \sigma_1^2 - B_{ijk} C_{i'j'k'})$$

where δ_ii'jj'kk'll' is the Kronecker symbol, σ_k² is the variance of the error occurring in phase k because of the manufacturing, and K_1, H_1, H_2 and L_1 are constant terms due to the effect of manufacturing in the phase with the same subscript.

(1) The responses are ordered lexicographically because in the first phase the abcr experimental units are divided into a groups and all the units of a group receive the same treatment, corresponding to a specific level of factor A. In the following phases the design goes on with the same pattern: every group is first divided into subgroups of cr units and then of r units.
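The induced covariances can be illustrated with a small Monte Carlo experiment; for clarity the check below is run on the two-phase decomposition (4)-(5), where the same mechanism appears in its simplest form: the random effect attributed to the downstream factor and the residual error both contain the propagated first-phase error, so their covariance on the same unit is not zero. The parameter values are arbitrary, and the snippet is an illustration of the mechanism rather than the paper's derivation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Arbitrary illustrative parameters for the two-phase model (1)-(2).
mu1, mu2 = 10.0, 5.0
alpha1, beta2 = 1.5, 0.8
psi1, psi12 = 0.9, 0.4
sigma1, sigma2 = 0.5, 0.3      # phase error standard deviations

n = 200_000                    # Monte Carlo replications of one unit
x1, x2 = +1.0, -1.0            # levels seen by this unit

eps1 = rng.normal(0, sigma1, n)
eps2 = rng.normal(0, sigma2, n)

# Components of the decomposition (5) for this unit.
b = ((beta2 + psi12 * mu1) + psi12 * alpha1 * x1 + psi12 * eps1) * x2
eps_tilde = eps2 + psi1 * eps1

# Empirical vs. theoretical covariance: cov(b, eps_tilde) = psi12 * x2 * psi1 * sigma1^2.
emp = np.cov(b, eps_tilde)[0, 1]
theo = psi12 * x2 * psi1 * sigma1**2
print(f"empirical cov = {emp:.4f}, theoretical cov = {theo:.4f}")
```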

Sums of squares, and their corresponding expected values, for factors A, B and C and for the experimental error were evaluated, consistently with the features of model (6). Referring to the fixed-effect factor A, the expected sum of squares is:

$$E[SS_A] = bcr \sum_i E[(\bar{Y}_{i\cdot\cdot\cdot} - \bar{Y}_{\cdot\cdot\cdot\cdot})^2]$$

where a dot in a subscript denotes averaging over all possible values of that subscript. From (6) it follows:

$$\bar{Y}_{i\cdot\cdot\cdot} = \frac{1}{bcr}\sum_{j,k,l} Y_{ijkl} = \mu + \tilde{\alpha}_i + \bar{b}_{(A)i\cdot\cdot\cdot} + \bar{c}_{(B(A))i\cdot\cdot\cdot} + \bar{\varepsilon}_{i\cdot\cdot\cdot}$$

$$\bar{Y}_{\cdot\cdot\cdot\cdot} = \frac{1}{abcr}\sum_{i,j,k,l} Y_{ijkl} = \mu + \bar{b}_{(A)\cdot\cdot\cdot\cdot} + \bar{c}_{(B(A))\cdot\cdot\cdot\cdot} + \bar{\varepsilon}_{\cdot\cdot\cdot\cdot}$$

with

$$E[\bar{Y}_{i\cdot\cdot\cdot}] = \mu + \tilde{\alpha}_i + \bar{B}_{i\cdot\cdot} + \bar{C}_{i\cdot\cdot}
\qquad\text{and}\qquad
E[\bar{Y}_{\cdot\cdot\cdot\cdot}] = \mu + \bar{B}_{\cdot\cdot\cdot} + \bar{C}_{\cdot\cdot\cdot}$$

because of

$$E[b_{(A)ijkl}] = B_{ijk} \qquad\text{and}\qquad E[c_{(B(A))ijkl}] = C_{ijk}.$$

Therefore:

$$E[SS_A] = bcr \sum_i E[(\tilde{\alpha}_i + \bar{b}_{(A)i\cdot\cdot\cdot} + \bar{c}_{(B(A))i\cdot\cdot\cdot} + \bar{\varepsilon}_{i\cdot\cdot\cdot} - \bar{b}_{(A)\cdot\cdot\cdot\cdot} - \bar{c}_{(B(A))\cdot\cdot\cdot\cdot} - \bar{\varepsilon}_{\cdot\cdot\cdot\cdot})^2] = bcr \sum_i [\tilde{\alpha}_i + (\bar{B}_{i\cdot\cdot} - \bar{B}_{\cdot\cdot\cdot}) + (\bar{C}_{i\cdot\cdot} - \bar{C}_{\cdot\cdot\cdot})]^2 + (a-1)\,\sigma^2_{Tot}$$

with a − 1 degrees of freedom (d.f.) and σ²_Tot = σ²_{b(A)+c(B(A))+ε̃}. Proceeding along the same lines, and with the same notation, the corresponding expected values for the two random-effect factors B and C, with non-zero averages, are:

$$E[SS_{B(A)}] = cr \sum_{i,j} E[(\bar{Y}_{ij\cdot\cdot} - \bar{Y}_{i\cdot\cdot\cdot})^2] = cr \sum_{i,j} [(\bar{B}_{ij\cdot} - \bar{B}_{i\cdot\cdot}) + (\bar{C}_{ij\cdot} - \bar{C}_{i\cdot\cdot})]^2 + a(b-1)\,\sigma^2_{Tot} \qquad\text{with } a(b-1)\text{ d.f.}$$

$$E[SS_{C(B(A))}] = r \sum_{i,j,k} E[(\bar{Y}_{ijk\cdot} - \bar{Y}_{ij\cdot\cdot})^2] = r \sum_{i,j,k} (C_{ijk} - \bar{C}_{ij\cdot})^2 + ab(c-1)\,\sigma^2_{Tot} \qquad\text{with } ab(c-1)\text{ d.f.}$$

since, from (6), it is:

$$\bar{Y}_{ij\cdot\cdot} = \frac{1}{cr}\sum_{k,l} Y_{ijkl} = \mu + \tilde{\alpha}_i + \bar{b}_{(A)ij\cdot\cdot} + \bar{c}_{(B(A))ij\cdot\cdot} + \bar{\varepsilon}_{ij\cdot\cdot}
\qquad\text{with}\qquad
E[\bar{Y}_{ij\cdot\cdot}] = \mu + \tilde{\alpha}_i + \bar{B}_{ij\cdot} + \bar{C}_{ij\cdot}$$

$$\bar{Y}_{ijk\cdot} = \frac{1}{r}\sum_{l} Y_{ijkl} = \mu + \tilde{\alpha}_i + \bar{b}_{(A)ijk\cdot} + \bar{c}_{(B(A))ijk\cdot} + \bar{\varepsilon}_{ijk\cdot}
\qquad\text{with}\qquad
E[\bar{Y}_{ijk\cdot}] = \mu + \tilde{\alpha}_i + B_{ijk} + C_{ijk}.$$

An estimate of the variance σ², comprehensive of all the random variability due to the manufacturing in the sequential phases, can be obtained from:

$$E[SS_{Exp}] = \sum_{i,j,k,l} E[(Y_{ijkl} - \bar{Y}_{ijk\cdot})^2] = abc(r-1)\,\sigma^2_{b(A)+c(B(A))+\tilde{\varepsilon}} = abc(r-1)\,\sigma^2_{Tot} \qquad (7)$$

with abc(r − 1) d.f.

Eq. (7) points out that components of variability not due to chance are irretrievably mixed with the random ones in the estimate of the experimental error, a rather unwelcome feature.

3.2. Two-phase process with three factors (two in the first phase, one in the second)

Let us now consider a sequential process with two phases only, φ_1 and φ_2, two factors A and B acting in phase φ_1 and one factor C acting in phase φ_2. Accordingly, A and B are considered as fixed-effect factors, while factor C, nested in the first phase, has random effects. The statistical model proposed for the response at the process output is:

$$y^{(2)} = \mu 1_{abcr} + \tilde{\alpha} x^{(1)} + \tilde{\beta} x^{(2)} + \tilde{\eta} x^{(1,2)} + c_{(\circ)} + \tilde{\varepsilon} \qquad (8)$$

where y^(2) and the terms µ1_abcr and α̃x^(1) are defined respectively as y^(3) and the homonymous terms in 3.1; moreover β̃x^(2) = 1_a ⊗ β_b ⊗ (1_c ⊗ 1_r), with β_b = (β_1, β_2, ..., β_b)^T and the constraint 1_b^T β_b = 0, stands for the (fixed) effect of factor B, and η̃x^(1,2) = η_ab ⊗ (1_c ⊗ 1_r), with η_ab = (η_1, η_2, ..., η_ab)^T and the two constraints (1_a^T ⊗ I_b)η_ab = 0_b and (I_a ⊗ 1_b^T)η_ab = 0_a, stands for the (fixed) effect of the mutual interaction between the two factors A and B, both propagated to the process output. The term ε̃ represents the random error; in this case it pools all the sources of variability due to the manufacturing in the two phases, so the same assumptions as in 3.1 may be made for ε̃. The vector c_(∘) represents the random effect of factor C, under the usual assumptions of normality and independent marginal distributions: c_(∘) ~ N(C ⊗ 1_r, σ²_(∘) I_abcr), where C is the vector of the expected values of the factor C effects for each one of the abc treatments and σ²_(∘) their variance. The subscript in parentheses used for the C effect denotes nesting in all the relevant levels of the previous phase, whereas in model (6) all the levels of C were nested in a single factor; the nesting originates from the evolution of the system over time and from the time-dependent action of the factors. Due to the structure of the adopted model, also in this case cov[c_(∘)ijkl, ε̃_i'j'k'l'] = δ_ii'jj'kk'll' M_1 σ_1², where M_1 is a constant term due to the effect of manufacturing in the first phase. The mean squares and the corresponding expected values for the main effects of factors A, B and C, for the interaction AB and for the experimental error therefore take the form:

$$E[SS_A] = bcr \sum_i E[(\bar{Y}_{i\cdot\cdot\cdot} - \bar{Y}_{\cdot\cdot\cdot\cdot})^2] = bcr \sum_i [\alpha_i + (\bar{C}_{i\cdot\cdot} - \bar{C}_{\cdot\cdot\cdot})]^2 + (a-1)\,\sigma^2_{Tot} \qquad\text{with } a-1\text{ d.f.}$$

$$E[SS_B] = acr \sum_j E[(\bar{Y}_{\cdot j\cdot\cdot} - \bar{Y}_{\cdot\cdot\cdot\cdot})^2] = acr \sum_j [\beta_j + (\bar{C}_{\cdot j\cdot} - \bar{C}_{\cdot\cdot\cdot})]^2 + (b-1)\,\sigma^2_{Tot} \qquad\text{with } b-1\text{ d.f.}$$

$$E[SS_{AB}] = cr \sum_{i,j} E[((\bar{Y}_{ij\cdot\cdot} + \bar{Y}_{\cdot\cdot\cdot\cdot}) - (\bar{Y}_{i\cdot\cdot\cdot} + \bar{Y}_{\cdot j\cdot\cdot}))^2] = cr \sum_{i,j} [\eta_{ij} + (\bar{C}_{ij\cdot} - \bar{C}_{i\cdot\cdot}) + (\bar{C}_{\cdot\cdot\cdot} - \bar{C}_{\cdot j\cdot})]^2 + (a-1)(b-1)\,\sigma^2_{Tot} \qquad\text{with } (a-1)(b-1)\text{ d.f.}$$

with σ²_Tot = σ²_{c(∘)+ε̃}. The components of variability due to chance are present in the estimate of the experimental error σ², but they are not distinguishable from the other, systematic, components, as in 3.1.
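The following sketch, a simplified numerical illustration rather than the authors' analysis, simulates a two-phase process of the kind treated in 3.2 (factors A and B in the first phase, C in the second, the second phase acting on the unit-specific output of the first) and computes the within-cell mean square from the final output only; the point to observe is that it estimates the pooled variance of the propagated random terms rather than the pure second-phase error variance. All parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

# Arbitrary illustrative parameters (not taken from the paper).
a, b, c, r = 2, 2, 2, 10_000          # two levels per factor, many replicates
mu1, mu2 = 10.0, 5.0
aA, aB, aAB = 1.2, 0.7, 0.3           # first-phase effects of A, B and AB
gC = 0.9                              # second-phase effect of C
psi1, psi12 = 0.8, 0.4                # transmission and transmission-interaction
s1, s2 = 0.5, 0.3                     # phase error standard deviations

xa = np.array([-1.0, 1.0])            # coded levels of A, B, C
xb = np.array([-1.0, 1.0])
xc = np.array([-1.0, 1.0])

# Simulate the chained process cell by cell, unit by unit.
Y = np.empty((a, b, c, r))
for i in range(a):
    for j in range(b):
        for k in range(c):
            e1 = rng.normal(0, s1, r)                         # first-phase errors
            e2 = rng.normal(0, s2, r)                         # second-phase errors
            y1 = mu1 + aA*xa[i] + aB*xb[j] + aAB*xa[i]*xb[j] + e1
            Y[i, j, k] = mu2 + gC*xc[k] + psi1*y1 + psi12*xc[k]*y1 + e2

# Classical within-cell (error) mean square, computed on the final output only.
cell_mean = Y.mean(axis=3)
SS_exp = ((Y - cell_mean[..., None])**2).sum()
MS_exp = SS_exp / (a*b*c*(r - 1))

# Within each cell the replicates differ by (psi1 + psi12*xc[k])*e1 + e2,
# so the error mean square pools the propagated first-phase variance.
expected_MS = np.mean((psi1 + psi12*xc)**2) * s1**2 + s2**2
print(f"MS_exp = {MS_exp:.4f}, pooled theoretical value = {expected_MS:.4f}, "
      f"pure second-phase variance = {s2**2:.4f}")
```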

4. Residual error

The models presented above are heavily over-parameterized, having a number of parameters which cannot be identified on the basis of the experimental runs at hand, since the intermediate responses y^(k), k = 1, 2, ..., K − 1, are not accessible to the experimenter. These responses actually play a role somewhat similar to that of nuisance factors, as both inflate error variability. However, while nuisance factors are most detrimental if ignored altogether, and drop down to a marginal disturbance whenever they are identified and duly taken into account, in the case at hand the intermediate responses certainly exist but escape observation. Therefore the residual error at the output of a multiple-phase process will unavoidably pool together the contributions of the errors transmitted downstream by every intermediate phase, thus adding noise to the effects of all factors. The model, so to speak, issues a warning against using tout court a crossed-factors classification in such instances, as misleading conclusions might easily be arrived at, owing to the presence of variance components in the expected values of the mean squares for factors and error alike. In the applied problem which originated this study, an improved estimate of the residual error was obtained by applying bootstrap and jackknife techniques to the sample {y^(K)_ijkl}, with i = 1, 2, ..., a, j = 1, 2, ..., b, k = 1, 2, ..., c, and l = 1, 2, ..., r. The latter especially proved to be quite helpful in estimating the individual variability components present in the expected value of the mean square error (see the related equations in 3.1 and 3.2), when applied to experimental sub-groups.
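As a rough indication of how a resampling estimate of the error variability might look, the sketch below applies a delete-one-group jackknife to the pooled within-cell mean square of a simulated sample {y_ijkl}; it is a generic illustration of the jackknife on sub-groups, not a reconstruction of the authors' actual procedure, and the data-generating step is an assumed stand-in for the observed final-phase responses.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative data: a x b x c treatment cells with r replicates each.
# In practice these would be the observed final-phase responses y_ijkl.
a, b, c, r = 3, 2, 2, 6
cell_means = rng.normal(10.0, 1.0, size=(a, b, c))
Y = cell_means[..., None] + rng.normal(0.0, 0.5, size=(a, b, c, r))

def error_mean_square(data):
    """Pooled within-cell mean square over the last (replicate) axis."""
    dev = data - data.mean(axis=-1, keepdims=True)
    n_cells = np.prod(data.shape[:-1])
    return (dev**2).sum() / (n_cells * (data.shape[-1] - 1))

# Delete-one-group jackknife over the a levels of the first-phase factor:
# each pseudo-value is based on the sample with one whole sub-group removed.
theta_full = error_mean_square(Y)
theta_loo = np.array([error_mean_square(np.delete(Y, i, axis=0)) for i in range(a)])
pseudo = a * theta_full - (a - 1) * theta_loo

jack_estimate = pseudo.mean()
jack_se = np.sqrt(pseudo.var(ddof=1) / a)
print(f"full-sample MS = {theta_full:.4f}, "
      f"jackknife estimate = {jack_estimate:.4f} (s.e. {jack_se:.4f})")
```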

5. Conclusions

A model for the response at the process output has been obtained, adequate for the analysis of factorial experiments on processes made up of a sequence of distinct, independent phases. The link-up among phases is seen to induce a correlation structure among the individual phase outcomes, which, in turn, originates a correlation structure among the terms of the response model. The problem of tailoring the statistical model to the system's peculiarities and the experimental arrangement still deserves further attention. The availability of dedicated software can make the rather more involved analysis required by ad hoc models a practical option. Process capability evaluation for multiple-phase processes may turn out to be an awkward undertaking. Given the inherent error transmission mechanism described above, an interesting question is how the overall process capability depends upon the capabilities of the single sub-processes, and how to detect critical sub-processes, responsible for a large proportion of the overall process variability.

Acknowledgments

The authors are indebted to Prof. Alessandra Giovagnoli for her comments and valuable suggestions, which helped substantially to improve the paper. Acknowledgments are due to MURST (Italian Ministry for Scientific and Technical Research) for support within the framework of Progetto Cofin99, "Metodi di inferenza statistica per problemi", aimed at the development of experimental designs and their exploitation in applied work.

REFERENCES

Bates, R.A., Buck, R.J., Riccomagno, E. and Wynn, H.P. (1994) Experimental design and observation for large systems, Workshop DEINDE, Torino.

Bley, H., Jung, D. and Klär, P. (1999) Modeling Quality in Complex Processes - Optimization of a Powder Metallurgical Process using DOE, Proc. of Int. Conf. on Quality Manufacturing, Katz, Z. and du Preez, N.D. (eds.), Stellenbosch, South Africa, 1-7.

Cochran, W.G. and Cox, G.M. (1961) Experimental Designs, John Wiley & Sons, New York.

Diggle, P.J., Liang, K.Y. and Zeger, S.L. (1994) Analysis of Longitudinal Data, Oxford Science Publications, Oxford.

Guseo, R. (1997) Split-Plot Design: Extension and Robustness Aspects, Statistica Applicata, 9, 1, 61-79.

Kleijnen, J.P. (1999) Experimental Design for Sensitivity Analysis, Optimization and Validation of Simulation Models, Handbook of Simulation, J. Banks (ed.), John Wiley & Sons, New York, 173-224.

Mason, R.L., Gunst, R.F. and Hess, J.L. (1989) Statistical Design & Analysis of Experiments, John Wiley & Sons, New York.

Scheffé, H. (1959) The Analysis of Variance, John Wiley & Sons, New York.

Vicario, G. (1997) Level Dependent Heteroschedasticity in Factorial Experiments: Aliasing Patterns and Implications, Quality Engineering, 9(4), 703-710.

Summary

The paper deals with the construction of a statistical model of the response in factorial experiments on sequential processes. In such instances experimental units flow through an ordered sequence of consecutive and independent phases, each of them characterized by a set of relevant factors. Such a situation may not readily lend itself to proper description in terms of classical designs such as crossed-factors, nested-factors or split-plot designs. Peculiarities of multiple-phase processes are described in the Introduction, based upon the description of an actual industrial problem. Section two, Modellization, describes the model proposed for the simplest case of a two-phase process with one factor per phase. A chain of linear regression models is hierarchically applied to the chain of sub-processes, where the output of every sub-process is a function of the current factors as well as of the output of the previous sub-process. In the third section, Variance components, expected values of the relevant mean squares are derived for models pertaining to two- and three-phase processes, with three factors in either case. Differences between the proposed approach and the analysis pertaining to classical designs are highlighted in the results. Section four, Residual error, presents some considerations on the peculiarities of the model's residual error structure, in view of practical applications. Basic underlying concepts and implications are discussed in the Conclusions, pointing out the need for further investigations on the evaluation of the process capability of multiple-phase processes.

Sperimentazione fattoriale in processi a più fasi

Riassunto

The study of multiple-phase processes by means of designed experiments exhibits some peculiar features, which affect both the form of the statistical model and the analysis of the experiment. Paradigms of such processes are not hard to find: consider the properties of a manufactured item determined by its working stages, the characteristics of a signal after it has gone through a processing chain, or the development of a person's individual skills following an educational path. A physical phenomenon with its own temporal dimension lends itself better to being described as a stochastic process than as a single random event. However, the presence of factors at the various stages of the system makes the statistical methodologies in use for stochastic processes inapplicable, since the effect of the factors modifies the evolution of the process in a deterministic way. The performance of processes of this kind depends on the modifications induced by the factors in the single phases, on the random error components introduced in each phase, and on how both are transferred downstream towards the process output; the inherent transmission mechanism endows the process with memory. This peculiarity would be lost either by using a series of memoryless experiments applied to the single phases, or by resorting to a single all-encompassing experiment in which the concatenation between the phases would be eliminated. In this work a statistical model respecting the nature of the problem is proposed. The model is derived by applying in cascade linear regression models in which the output of a phase depends on the factors of that phase and on the output of the preceding phase, the vehicle of the process memory. The result is a hierarchical model of mixed type, i.e. with both deterministic and random effects, the latter with non-zero mean. The features of the model are not found in any of the classical models adopted in the literature, which suggests avoiding the slavish application of conventional schemes if one wishes to respect more faithfully the nature of a multi-phase process with independent phases.

Key words: Sequential processes; Factors; Nesting; Random and fixed effects; Sum of squares.

[Manuscript received December 1999; final version received November 2000.]
