Optimizing Drug Delivery Systems Using Systematic "Design of Experiments." Part I: Fundamental Aspects




Critical Reviews™ in Therapeutic Drug Carrier Systems, 22(1):27–105 (2005)

Bhupinder Singh, Rajiv Kumar, & Naveen Ahuja
Pharmaceutics Division, University Institute of Pharmaceutical Sciences, Panjab University, Chandigarh, India
Address all correspondence to Bhupinder Singh, University Institute of Pharmaceutical Sciences, Panjab University, Chandigarh 160 014, India; [email protected]
Referee: Dr. Gurvinder Singh Rekhi, Elan Holdings Inc., Gainesville, GA 30504, USA

ABSTRACT: Design of an impeccable drug delivery product normally encompasses multiple objectives. For decades, this task has been attempted through trial and error, supplemented with the previous experience, knowledge, and wisdom of the formulator. Optimization of a pharmaceutical formulation or process using this traditional approach involves changing one variable at a time. Using this methodology, the solution of a specific problematic formulation characteristic can certainly be achieved, but attainment of the true optimal composition is never guaranteed, and improvement in one characteristic must often be traded off against degeneration in another. This customary approach to developing a drug product or process has proved to be not only uneconomical in terms of time, money, and effort, but also ill-suited to fixing errors, unpredictable, and at times even unsuccessful. On the other hand, modern formulation optimization approaches, employing systematic Design of Experiments (DoE), are extensively practiced in the development of diverse kinds of drug delivery devices to overcome such irregularities. Such systematic approaches are far more advantageous, because they require fewer experiments to achieve an optimum formulation, make problem tracing and rectification much easier, reveal drug/polymer interactions, simulate product performance, and comprehend the process, thereby assisting in better formulation development and subsequent scale-up. Optimization techniques using DoE represent effective and cost-effective analytical tools to yield the "best solution" to a particular "problem." Through quantification of drug delivery systems, these approaches provide a depth of understanding as well as an ability to explore and defend ranges for formulation factors, where experimentation is completed before optimization is attempted. The key elements of a DoE optimization methodology encompass planning the study objectives, screening of influential variables, experimental designs, postulation of mathematical models for various chosen response characteristics, fitting experimental data into these model(s), mapping and generating graphic outcomes, and design validation using model-based response surface methodology. The broad topic of DoE optimization methodology is covered in two parts.

0743-4863/05$20.00 © 2005 by Begell House, Inc., www.begellhouse.com


Part I of the review attempts to provide thought-through and thorough information on diverse DoE aspects, organized in a seven-step sequence. Besides dealing with basic DoE terminology for the novice, the article covers the niceties of several important experimental designs, mathematical models, and optimum search techniques using numeric and graphical methods, with special emphasis on computer-based approaches, artificial neural networks, and the judicious selection of designs and models.

KEY WORDS: artificial neural networks, computer software, drug product development, experimental design, factor screening, response surface methodology

I. INTRODUCTION

The domain of drug delivery has enabled a newer look toward drug formulation development and subsequent patient therapy. Lately, pharmaceutical scientists have made remarkable strides in the development of diverse types of newer drug delivery systems (DDS).¹-³ Development of such DDS invariably involves handling a plethora of drugs, polymers, excipients, and processes. The traditional approach of optimizing a formulation or process essentially entails studying the influence of the corresponding composition and process variables by Changing One Single (or Separate) variable or factor at a Time (COST), while keeping the others constant.⁴-⁹ The technique, at times, is also referred to as the OVAT (One Variable at a Time), OFAT (One Factor at a Time), or "shotgun" approach.⁶,¹⁰,¹¹ During these COST studies, the first variable is fixed at a favorable value, and the next is examined until no further improvement is attained in the response variable. For decades, drug formulations have been developed by this process of trial and error.¹¹,¹² The COST approach can somehow achieve the solution of a specific problematic property, but attainment of the true optimum composition or process is never guaranteed.⁹,¹¹,¹³ This may be ascribed to the presence of interactions—i.e., the influence of one or more variable(s) on others.⁷,¹⁴ In the presence of such interactions among variables, the COST approach gets stuck, usually far from the optimum. Because there is no further improvement in the response, the experimenter may erroneously assume attainment of the optimum. The final product may be thought satisfactory but will really be suboptimal, because a better formulation still exists, although unperceived under the studied conditions.¹⁰,¹⁴-¹⁶ The prior experience, knowledge, and wisdom of the formulator have been the key factors in formulating new or customized dosage forms. Sometimes, when the developer is instinctive, skilled, and fortunate, such an unsystematic approach may yield surprisingly successful outcomes. Invariably, however, when skill, acumen, or chance is not in the developer's favor, it leads to squandering remarkable amounts of time, energy, and resources.⁴,¹⁵,¹⁶


Accordingly, the intuitive COST approach requires many experiments for little gain in information about the system under investigation. Figure 1 illustrates the case of an arbitrary DDS, depicting that the "arrived" COST optimum is quite distant from the "missed" true optimum. A drug delivery product and process design problem is normally characterized by multiple objectives.¹¹,¹²,¹⁷ In an attempt to accomplish such objectives, a pharmaceutical scientist has to fulfill various control limits for a formulation. For a controlled release bioadhesive tablet, for instance, the dissolution rate profile and bioadhesion would be the most appropriate characteristics to control.¹⁸ Because the various objectives of a formulation often conflict, accepting a suitable trade-off or compromise between one or more properties—e.g., dissolution rate at the expense of bioadhesion—usually becomes unavoidable.⁶,¹¹,¹⁷ Thus, the primary aim of the traditional formulator has been to find a suitable trade-off under the given set of constraints rather than to design the best formulation. The imposed pressures of time, cost, resources, aesthetics, and performance benchmarks further exacerbate the situation.

FIGURE 1. Pictorial representation of the COST approach to designing an archetypical transdermal gel employing the optimal values of gelling polymer and penetration enhancer.


BOX 1. Various Limitations of the Changing One Variable at a Time (COST) Approach

Shortcomings of the COST approach:
• Strenuous.
• Uneconomical.
• Time consuming.
• Unsuitable to plug errors.
• Inapt to reveal interactions.
• Isolated and unconnected studies.
• Pseudo-convergent to an untrue optimum.
• Results only in "just satisfactory" solutions.
• Detailed study of all variables is prohibitive.
• Prone to misinterpretation or faking of results.
• Futile when all variables change simultaneously.
• Unable to establish "cause and effect" relationships.
• Ineffectual, as it leads to unnecessary runs and batches.
• New product may retain defects inherent in the old one.
• Irreproducible, as it infers randomly on the basis of origin.

Therefore, the conventional COST approach to drug formulation development suffers from several pitfalls.⁴,⁶-⁸,¹²,¹⁵,¹⁹,²⁰ The most important of these are enumerated in Box 1. These drug product inconsistencies are generally due to inadequate knowledge of the underlying cause-and-effect relationship(s).⁶,⁹,¹⁹ Systematic optimization techniques, on the other hand, have been widely practiced to alleviate such inconsistencies.⁷,¹¹,¹⁹,²¹-²⁶ Development of the principles behind such optimization techniques, now known as design of experiments (DoE), dates back to 1925, with their introduction by the British statistician Sir Ronald Fisher.²⁷ The implementation of DoE optimization techniques invariably encompasses the use of experimental designs and the generation of mathematical equations and graphic outcomes, thus depicting a complete picture of the variation of the product/process response(s) as a function of the input variable(s).¹⁵,²⁶,²⁸,²⁹ Employing various rational combinations of formulation variables, DoE fits experimental data into statistical equations, uses these as models to predict formulation performance, and optimizes the critical responses. In direct contrast to the COST approach, DoE optimization offers an organized methodology that connects various experiments in a rational manner, giving more precise information from fewer experiments.⁷,³⁰ Considering multiple variables at once, DoE demonstrates how the system works as a whole.


It enables the experimenter to optimize all the critical responses and find the "triumphant" combination. DoE undertakes a simultaneous testing approach in parallel studies, which has proved to be far more effective, efficient, economical, and expedient than the "sequential" COST scheme.⁷,¹⁵ In a nutshell, these optimization techniques possess much greater benefits, because they surmount several pitfalls inherent to the traditional approaches.⁶,⁷,¹²,¹⁶,¹⁹,²²,²⁸,²⁹,³¹-³⁵ Several meritorious features of DoE vis-à-vis COST optimization are summarized in Box 2. Of late, DoE optimization techniques are becoming a regular practice globally, not only in the design and development of an assortment of new dosage forms, but also in the modification of existing ones.⁸,¹⁰ Putting such rational approaches into practice, however, usually involves a great deal of mathematical and statistical intricacy. Despite its discovery in the 1920s, DoE optimization lay virtually dormant because the manual calculations it required were extremely cumbersome; it called for the pivotal help of an apt computer interface.²⁵,³⁵-³⁷ Software that automates "designed-experiment" studies was invented in the early days of mainframe computers.³⁸ Mainframes no doubt chugged through complicated DoE equations, but they required programming skills beyond the reach of most experimenters.

BOX 2. Various Meritorious Features of Systematic DoE Optimization Techniques

Advantages of systematic optimization techniques:
• Require fewer experiments to achieve an optimum formulation.
• Can trace and rectify a "problem" in a remarkably easier manner.
• Lead to comprehensive understanding of the formulation system.
• Yield the "best solution" in the presence of competing objectives.
• Help in finding the "important" and "unimportant" input variables.
• Test and improve "robustness" across the experimental studies.
• Can change the formulation ingredients or processes independently.
• Aid in determining experimental error and detecting "bad data points."
• Can simulate the product or process behavior using model equation(s).
• Save a significant amount of resources, viz. time, effort, materials, and cost.
• Evaluate and improve the statistical significance of the proposed model(s).
• Can predict the performance of formulations even without preparing them.
• Detect and estimate the possible interactions and synergies among variables.
• Facilitate decision-making before the next experimentation by response mapping.
• Provide reasonable flexibility in experimentation to assess the product system.
• Can decouple signal from background noise, enabling inherent error estimation.
• Comprehend a process to aid in formulation development and ensuing scale-up.
• Furnish ample information on formula behavior from one simultaneous study only.


Nonetheless, it wasn't until those room-sized computers became desktop PCs that affordable DoE software first appeared to cater to nonstatistical experts. Today, with the availability of comprehensive DoE software, coupled with powerful and economical hardware, the erstwhile computational hiccups have been greatly simplified and streamlined.⁶,²⁸ Hence, computer use is considered almost indispensable in DoE optimization methods to take care of the numeric calculations entailed in their realization. Accordingly, the onerous task of systematic optimization of a DDS can be accomplished using a three-pronged strategy encompassing the vistas of drug delivery, DoE, and computer-aided computation. Figure 2 illustrates the synergy between them. The conduct of systematic DoE studies using computers undeniably obviates the need for in-depth knowledge of statistical and mathematical precepts. However, comprehension of the varied concepts behind these methodologies is certainly a must for the successful conduct of optimization studies. The information on such rational techniques, however, lies scattered in different books and journals. A complete and lucid description of the variegated facets of DoE optimization is not available from a single textual source. The current article is an earnest attempt to furnish such unambiguous and illustrated information. The vast topic of DoE optimization of drug delivery is discussed in two parts. Part I, herein, acquaints the reader with the DoE fundamentals by presenting a concise and cogent account of the vital principles and precepts of these systematic methodologies, absolutely needed to comprehend and execute the approach. Part II, appearing in a subsequent issue, will thrash out the subtler features of DoE application in designing wide-ranging products and processes, leading to the successful development of variegated DDS.

FIGURE 2. Pivotal elements for successful endeavor in optimization of drug delivery systems.


II. OPTIMIZATION: FUNDAMENTAL DoE CONCEPTS AND TERMINOLOGY

The word optimize simply means to make something as perfect, effective, or functional as possible.⁴,¹⁶ The term optimized has been used in the past to suggest that a product has been improved to accomplish the objectives of a development scientist. Today, however, the term implies that DoE and computers have been used to achieve the objective(s). With respect to drug formulations or pharmaceutical processes, optimization is the process of finding the best possible composition or operating conditions.⁴,⁶ Accordingly, optimization has been defined as the implementation of systematic approaches to achieve the best combination of product and/or process characteristics under a given set of conditions.¹⁹

II.A. Variables

Design and development of any drug formulation or pharmaceutical process invariably involves several variables.⁴,²⁵,³⁹ The input variables, which are directly under the control of the product development scientist, are known as independent variables—e.g., drug content, polymer composition, compression force, percentage of penetration enhancer, hydration volume, and agitation speed. Such variables can be either quantitative or qualitative.²⁸,⁴⁰ Quantitative variables are those that can take numeric values (e.g., time, temperature, or amount of polymer, osmogent, plasticizer, or superdisintegrant) and are continuous. Instances of qualitative variables, on the other hand, include the type of polymer, lipid, excipient, or tableting machine. These are also known as categorical variables.⁶,⁴¹ Their influence can be evaluated by assigning discrete dummy values to them. The independent variables that influence the formulation characteristics or output of the process are labeled factors.⁶,³⁴,⁴⁰ The values assigned to the factors are termed levels—e.g., 100 mg and 200 mg may be two levels of the factor "amount of release-rate-controlling polymer" in compressed matrices. Restrictions imposed on the factor levels are known as constraints.¹⁶,⁴⁰ The characteristics of the finished drug product or the in-process material are known as dependent variables—e.g., drug release profile, percent drug entrapment, pellet size distribution, and moisture uptake.⁶,²⁸,⁴² Popularly termed response variables, these are the measured properties of the system used to estimate the outcome of the experiment. Usually, these are direct function(s) of any change(s) in the independent variables. Accordingly, with respect to optimization techniques, a drug formulation (product) can be considered as a system whose output (Y) is influenced by a set of input variables via a transfer function (T).⁷,³¹ These input variables may either be controllable (X; signal factors) or uncontrollable (U; noise factors).²⁸,⁴³ Figure 3 depicts this graphically.


FIGURE 3. System with controllable input variables (X), uncontrollable input variables (U), transfer function (T), and output variables (Y).

The nomenclature of T depends upon the predictability of the output as an effect of the change of input variables. If the output is totally unpredictable from the previous studies, T is termed the black box. The term white box is used for a system with absolutely true predictability, while the term gray box is used for moderate predictability. Using optimization methods, the attempt of the formulator is to attain a white box or nearly white box status from the erstwhile black or gray box status observed in the traditional studies.¹⁹ The greater the number of variables in a given system, the more complicated becomes the job of DoE optimization.³¹ Nevertheless, regardless of the number of variables, a distinct relationship exists between a given response and the factors studied.⁶,³¹

II.B. Effect, Interaction, and Confounding

The magnitude of the change in response caused by varying the factor level(s) is termed an effect.³⁴,⁴⁰ The main effect is the effect of a factor averaged over all the levels of the other factors. An interaction, however, is said to occur when there is a "lack of additivity of factor effects." This implies that the effect is not directly proportional to the change in the factor levels.⁴⁰ In other words, the influence of a factor on the response is nonlinear.⁴,⁶,⁷,⁴⁴ In addition, an interaction may be said to take place when the effects of two or more factors are dependent on each other—e.g., the effect of factor A changes on changing factor B by one unit.


The measured property of the interacting variables depends not only on their fundamental levels, but also on the degree of interaction between them. Depending upon whether the change in the response is desired (positive) or undesired (negative), the phenomenon of interaction may be described as synergism or antagonism, respectively.⁶,⁴⁰ Figure 4 illustrates the concept of interaction graphically. An effects plot charts the magnitudes of the various coefficients for the effects and/or interactions against the response variable.⁶,³¹ The plot is drawn during the initial stages of DoE to determine the influence of each term. The term orthogonality is used if the estimated effects are due to the main factor of interest and are independent of interactions.²⁹,⁴⁰,⁴⁵,⁴⁶ Conversely, lack of orthogonality (or independence) is termed confounding or aliasing.⁴⁰,⁴⁴ When an effect is confounded (or aliased, mixed up, or equalled), one cannot assess how much of the observed effect is due to the factor under consideration. The effect is influenced by other factors in a manner that cannot easily be explored. The measure of the degree of confounding is known as resolution.⁷,⁴⁵ At times, there is confusion between confounding and interaction. Confounding, in fact, is a bias that must be controlled by suitable selection of the design and the data analysis. Interaction, on the other hand, is an inherent quality of the data, which must be explored. Confounding must be assessed qualitatively, while interaction may be tested more quantitatively.⁴⁴
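As a minimal numeric sketch of these definitions (the dissolution values below are made up for illustration, not taken from any cited study), the main effects and the two-factor interaction of a 2×2 experiment can be computed directly from the four corner responses:

```python
import numpy as np

# Hypothetical responses (% dissolved at 1 h) for a two-factor, two-level study.
# Rows: drug level (low, high); columns: polymer level (low, high).
y = np.array([[42.0, 25.0],
              [68.0, 30.0]])

# Main effect of drug: mean response at high drug minus mean at low drug,
# averaged over both polymer levels.
main_drug = y[1].mean() - y[0].mean()            # +15.5

# Main effect of polymer, averaged over both drug levels.
main_polymer = y[:, 1].mean() - y[:, 0].mean()   # -27.5

# Interaction: half the difference between the drug effect at high polymer
# and the drug effect at low polymer; zero would mean the parallel lines of
# Figure 4(a), a nonzero value the nonparallel lines of Figure 4(b).
interaction = ((y[1, 1] - y[0, 1]) - (y[1, 0] - y[0, 0])) / 2   # -10.5

print(main_drug, main_polymer, interaction)
```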

FIGURE 4. Diagrammatic depiction of interaction: (a) no interaction; (b) interaction. Nonparallel lines in (b) describe the phenomenon of interaction between the levels of drug and polymer amount affecting drug dissolution. In each panel, dissolution is plotted against drug level at low and high polymer levels. (—): linear response–factor relationship; (·····): nonlinear response–factor relationship.


II.C. Coding

The process of transforming a natural variable into a nondimensional coded variable, Xi, so that the central value of the experimental domain is zero is known as coding (or normalization).³⁴,⁴⁰,⁴⁷ Generally, the various levels of a factor are designated as –1, 0, and +1, representing the lowest, intermediate (central), and highest factor levels investigated, respectively.⁶,³¹,⁴⁰ For instance, if sodium carboxymethyl cellulose, a hydrophilic polymer, is studied as a factor in the range of 120–240 mg, then the codes –1 and +1 signify the 120 mg and 240 mg amounts, respectively. The code 0 would represent the central point at the arithmetic mean of the two extremes—i.e., 180 mg. Alternatively, for convenience, the factors and their levels have been denoted by alphabetic notation (symbols) to express the various combinations investigated in the study. For example, a factor is denoted by a capital letter (say, factor A), a combination with A at its high level by a, and a combination with all factors at their low levels by (1). Table 1 illustrates the alphabetic denotations used in the pharmaceutical literature for coding factors and their factor combinations at the respective levels. Although the terminology for factors as A and B and their levels as (1), a, b, etc. is comprehensible in text format, its translation into mathematical equation(s) is neither practical nor easy to comprehend.¹⁹ Therefore, the symbol Xk is normally used for representing the factor X, where the subscript k depicts the number of factors.²⁸,³¹ Analogously, the subscripted β values are employed to denote the coefficient values in the mathematical equations. Coding involves the orthogonality of effects and depicts effects and interaction(s) using (+) or (–) signs.¹⁶,⁴⁰ It assigns equal significance to each axis and allows not only easier calculation of coefficients and coefficient variances, but easier depiction of response surfaces as well. To circumvent any anomaly in factor sensitivity with change in levels, it is recommended that factor coding be carried out judiciously.²⁸,⁴⁰

TABLE 1. Denotation of Various Levels of Two Factors

Factor   Low level   High level
A        –1          a
B        –1          b
AB       1           ab


For instance, in the case of microsphere production, if one factor is the stirring speed (say, within the range of 1500–3000 rpm) and the other is pH (say, within the range of 1–5), then a change of 1 pH unit is far more significant than a change of 1 rpm.
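The coding convention described above corresponds to the standard transform, written here for a factor studied between a low natural level $U_{\min}$ and a high natural level $U_{\max}$:

$$X = \frac{U - (U_{\max} + U_{\min})/2}{(U_{\max} - U_{\min})/2}$$

For the sodium CMC example, $X = (120 - 180)/60 = -1$ and $X = (240 - 180)/60 = +1$. Applying the same transform to stirring speed (center 2250 rpm, half-range 750 rpm) and pH (center 3, half-range 2) places both factors on the common –1 to +1 scale, removing the rpm-versus-pH sensitivity anomaly just noted.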

II.D. Experimental Domain

The dimensional space defined by the coded variables is known as the factor space.⁶,²² Figure 5 illustrates the factor space for two factors on a bidimensional (2-D) plane during the formulation of controlled release microspheres.⁴⁸ The part of the factor space investigated experimentally for optimization is the experimental domain.⁶,⁴⁷ Also known as the region of interest, it is enclosed by the upper and lower levels of the variables. The factor space covers the entire figure area and extends even beyond it, whereas the design space of the experimental domain is the square enclosed by X₁ = ±1, X₂ = ±1.

II.E. Experimental Design

The conduct of an experiment and the subsequent interpretation of its experimental outcome are the twin essential features of the general scientific methodology.⁴,²²

FIGURE 5. Quantitative factors and factor space. The axes for the natural variables, ethyl cellulose:drug ratio and Span 80, are labeled U1 and U2, and those of the corresponding coded variables, X1 and X2.


This can be accomplished only if the experiments are carried out in a systematic way and the inferences are drawn accordingly. An experimental design is the statistical strategy for organizing the experiments in such a manner that the required information is obtained as efficiently and precisely as possible.²⁶,²⁹,³⁴,⁴⁹ Runs or trials are the experiments conducted according to the selected experimental design.⁶,²⁸ Such DoE trials are arranged in the design space so that reliable and consistent information is attainable with minimum experimentation. The layout of the experimental runs in matrix form, according to the experimental design, is known as the design matrix.⁶,³¹ The choice of design depends upon the proposed model, the shape of the domain, and the objective of the study. Primarily, the experimental (or statistical) designs are based on the principles of randomization (i.e., the manner of allocation of treatments to the experimental units), replication (i.e., the number of units employed for each treatment), and error control or local control (i.e., the grouping of specific types of experiments to increase precision).⁷,³¹,³⁴,⁴⁷ To derive maximal benefit from DoE, an experimenter invariably has to know, comprehend, and apply some or all of the following aspects.

1. Blocking in Experimental Designs

Often the estimation of effects and interactions becomes complicated as a result of variability in the results caused by some uncontrollable factors, commonly termed nuisance factors or extraneous factors.⁷ Although these nuisance factors may affect the measured result, they are not of primary interest. In such situations, blocks are generated in the experimental domain. Each block is a set of relatively homogeneous experimental conditions, wherein every level of the primary factor occurs the same number of times with each level of the nuisance factor.⁷,³¹,⁴⁶ These uncontrollable factors, therefore, are usually taken as the blocking factors. The technique of blocking is used to reduce or eliminate the variability transmitted by the nuisance factors. Accordingly, the analysis of the experiment focuses on the effect of varying levels of the primary factor "within each block" of the experiment. Runs are distributed over the blocks in such a way that any difference between the blocks does not bias the results for the factors of interest. This is accomplished by treating the blocking factor as another factor in the design. The inclusion of blocking factors as additional factors in the design results in the loss of estimation of some interaction terms, eventually lowering the resolution of the design. Nonetheless, the technique of blocking makes the design statistically more powerful.³¹ It allows simultaneous estimation and control of the variability stemming from the difference(s) between the blocks during optimization of a process or formulation. Blocking considerably improves the precision with which comparisons are made among the factors of interest.
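A minimal layout sketch of this idea, assuming a hypothetical study in which binder level is the primary factor and raw-material lot is the nuisance (blocking) factor:

```python
import random

# Randomized complete block design: every level of the primary factor occurs
# once within each block, and run order is randomized within each block.
binder_levels = ["low", "medium", "high"]    # primary factor (hypothetical)
lots = ["lot_A", "lot_B", "lot_C", "lot_D"]  # blocking (nuisance) factor

random.seed(42)  # reproducible layout
for lot in lots:
    order = random.sample(binder_levels, k=len(binder_levels))
    for level in order:
        print(lot, level)
# Lot-to-lot variability can now be separated from the binder effect during
# analysis, instead of inflating the experimental error.
```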


2. Resolution of Experimental Designs

One of the important features of experimental designs is their resolution—i.e., the degree to which the estimated main effects are aliased (or confounded) with the estimated two-, three-, or higher-order interactions.⁶,⁷,¹⁵,⁴⁵ In general, the resolution of a design is one more than the order of the smallest-order interaction with which some main effect is confounded.⁴¹ For instance, if some main effects are confounded with some two-factor interactions, the resolution is III. The most prevalent design resolutions in the pharmaceutical arena are III, IV, and V.⁶ These designs imply that:

a. Resolution III designs: the main effects are confounded (aliased) with two-factor interactions.

b. Resolution IV designs: no main effects are aliased with two-factor interactions, but two-factor interactions are aliased with each other.

c. Resolution V designs: no main effect or two-factor interaction is aliased with any other main effect or two-factor interaction, but two-factor interactions are aliased with three-factor interactions.

Orthogonal designs, in which the estimates of main effects and interactions are independent of each other, are said to possess "infinite resolution."³¹ For most practical purposes, when the number of factors in pharmaceutical product development is quite large, a resolution IV design may be adequate, while a resolution V design is an excellent choice. Resolution III designs, on the other hand, are useful in conditions where the number of factors is large and interactions among them are assumed to be negligible. The resolution of experimental designs can be improved upon by the fold-over technique.⁷,³¹,⁴⁶,⁵⁰ The procedure involves the generation or addition of a second block of experiments in which the levels of each factor are reversed from the original block. For a resolution III design, this improves the alias structure for all the factors. Fold-over designs can either be mirror-image fold-over designs (de-aliasing all main effects from the two-factor interactions) or alternative fold-over designs (breaking up specific alias patterns).
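A small sketch of these alias relationships, using a 2³⁻¹ half-fraction built with the generator C = AB (defining relation I = ABC), which is a resolution III design; a mirror-image fold over then de-aliases the main effects:

```python
import numpy as np

# Half-fraction of a 2^3 design: choose A and B freely, generate C = AB.
A = np.array([-1, +1, -1, +1])
B = np.array([-1, -1, +1, +1])
C = A * B                              # generator C = AB, so I = ABC

# Resolution III: every main-effect column equals a two-factor interaction
# column, so their effects cannot be separated.
print(np.array_equal(C, A * B))        # True -> C aliased with AB
print(np.array_equal(A, B * C))        # True -> A aliased with BC
print(np.array_equal(B, A * C))        # True -> B aliased with AC

# Mirror-image fold over: append a second block with every sign reversed.
Af, Bf, Cf = np.r_[A, -A], np.r_[B, -B], np.r_[C, -C]
print(np.array_equal(Cf, Af * Bf))     # False -> main effects de-aliased
```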

3. Design Augmentation

In the whole DoE endeavor, a situation sometimes arises in which a study conducted at some stage is found to be inadequate and needs to be investigated further, or in which a study carried out during the initial stages needs to be "reused."¹⁵ In either situation, more design points can be added systematically to the existing design. The earlier, primitive design can thus be enhanced to a more advanced design furnishing more information, better reliability, and higher resolution. This process of extending a statistical design by adding further rational design points is known as design augmentation.³¹,⁴¹ For instance, a two-level design can be augmented to a three-level design by adding suitable design points. A design can be augmented in a number of ways, such as by replicating, adding center points to two-level designs, adding axial points (i.e., design points along the axes of the experimental domain), or folding over.

II.F. Response Surfaces

Conduct of DoE trials, according to the chosen statistical design, yields a series of data on the response variables explored. Such data can be suitably modeled to generate mathematical relationships between the independent variables and the dependent variables. Graphical depiction of the mathematical relationship is known as a response surface.¹⁹,⁴⁵,⁴⁹ A response surface plot is a 3-D graphical representation of a response plotted between two independent variables and one response variable. The use of 3-D response surface plots allows us to understand the behavior of the system by demonstrating the contribution of the independent variables. The geometric illustration of a response obtained by plotting one independent variable against another, while holding the magnitude of the response and the other variables constant, is known as a contour plot.²⁸ Such contour plots represent 2-D slices of the corresponding 3-D response surfaces. The resulting curves are called contour lines. Figure 6 depicts a typical response surface and contour plot for a diffusional release exponent (proposed by Korsmeyer et al.⁵¹) as the response variable, reported with mucoadhesive compressed matrices of atenolol.⁵² For complete response depiction among k independent variables, a total of ᵏC₂ = k(k − 1)/2 response surfaces and contour plots may be required. In other words, 1, 3, 6, or 10 pairs of 3-D and 2-D plots are needed to depict each response for 2, 3, 4, or 5 variables, respectively.¹⁵,³¹
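A short plotting sketch of such a pair of graphs, using a hypothetical second-order polynomial of the kind shown later in Eq. (3), with illustrative coefficients (not those of the cited atenolol study):

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative coefficients for y = b0 + b1*x1 + b2*x2 + b12*x1*x2
#                                   + b11*x1^2 + b22*x2^2
b0, b1, b2, b12, b11, b22 = 0.70, 0.04, -0.06, 0.02, -0.03, 0.05

x1, x2 = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))
y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1**2 + b22*x2**2

fig = plt.figure(figsize=(9, 4))

ax1 = fig.add_subplot(1, 2, 1, projection="3d")   # 3-D response surface
ax1.plot_surface(x1, x2, y, cmap="viridis")
ax1.set_xlabel("X1 (coded)")
ax1.set_ylabel("X2 (coded)")
ax1.set_zlabel("response")

ax2 = fig.add_subplot(1, 2, 2)                    # 2-D slice: contour plot
cs = ax2.contour(x1, x2, y, levels=8)
ax2.clabel(cs)                                    # label the contour lines
ax2.set_xlabel("X1 (coded)")
ax2.set_ylabel("X2 (coded)")

plt.tight_layout()
plt.show()
```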

II.G. Mathematical Models

The mathematical model, simply referred to as the model, is an algebraic expression defining the dependence of a response variable on the independent variable(s).⁴⁶,⁵³ Mathematical models can be either empirical or theoretical.²⁸ An empirical model provides a way to describe the factor/response relationship.


FIGURE 6. (a) A typical response surface plotted between a response variable, the release exponent, and two factors, HPMC and sodium CMC, in the case of mucoadhesive compressed matrices; (b) the corresponding contour plot. (Axes: HPMC and sodium CMC levels, coded –1 to +1; release exponent contours span approximately 0.64–0.82.)

It is most frequently, but not invariably, a set of polynomial equations of a given order.⁷,⁴⁶ The most commonly used linear models are shown in Eqs. (1)–(3):

$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \cdots + \varepsilon \tag{1}$$

$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_{12} X_1 X_2 + \cdots + \varepsilon \tag{2}$$

$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_{12} X_1 X_2 + \beta_{11} X_1^2 + \beta_{22} X_2^2 + \cdots + \varepsilon \tag{3}$$

where Y represents the estimated response, sometimes also denoted as E(y). The symbols Xᵢ represent the values of the factors, and β₀, βᵢ, βᵢᵢ, and βᵢⱼ are constants representing the intercept, the coefficients of the first-order (first-degree) terms, the coefficients of the second-order quadratic terms, and the coefficients of the second-order interaction terms, respectively. The symbol ε implies pure error. Equations (1) and (2) are linear in their variables, representing a flat surface and a twisted plane in 3-D space, respectively. Equation (3) represents a second-order model, still linear in its coefficients, that describes a twisted plane with curvature arising from the quadratic terms. A theoretical or mechanistic model may also exist or be proposed.


It is most often a nonlinear model, in which transformation to a linear function is not usually possible.²⁸ Such theoretical relationships are, however, rarely employed in pharmaceutical product development.
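As a minimal sketch of the empirical route, the following fits the second-order model of Eq. (3) to a hypothetical 3² factorial data set by ordinary least squares (all numbers are illustrative assumptions):

```python
import numpy as np

# Coded factor levels of a 3^2 factorial study and hypothetical responses.
X1, X2 = np.meshgrid([-1, 0, 1], [-1, 0, 1])
X1, X2 = X1.ravel(), X2.ravel()
y = np.array([5.1, 5.9, 6.2, 5.6, 6.8, 7.1, 5.8, 7.2, 7.9])

# Model matrix for Eq. (3): intercept, X1, X2, X1*X2, X1^2, X2^2.
M = np.column_stack([np.ones_like(X1), X1, X2, X1*X2, X1**2, X2**2])
beta, *_ = np.linalg.lstsq(M, y, rcond=None)

for name, value in zip(["b0", "b1", "b2", "b12", "b11", "b22"], beta):
    print(f"{name} = {value:+.3f}")

# Predicted response at any coded point, e.g., (X1, X2) = (0.5, -0.5):
x = np.array([1.0, 0.5, -0.5, 0.5 * -0.5, 0.5**2, (-0.5)**2])
print("y_hat =", x @ beta)
```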

III. DRUG DELIVERY OPTIMIZATION: DoE METHODOLOGY

An experimental approach to DoE optimization of DDS comprises several phases.⁵,¹⁵,²⁸,⁵⁴ Broadly, these phases can be sequentially summed up in seven salient steps. Figure 7 delineates these steps pictographically.

FIGURE 7. Seven-step ladder for optimizing drug delivery systems.


• The optimization study begins with Step I, where an endeavor is made to ascertain the initial drug delivery objective(s) in an explicit manner. Various main response parameters, which closely and pragmatically epitomize the objective(s), are chosen for the purpose.

• In Step II, the experimenter has several potential independent product and/or process variables to choose from. By executing a set of suitable screening techniques and designs, the formulator selects the "vital few" influential factors among the possible "so many" input variables. Following selection of these factors, a factor influence study is carried out to quantitatively estimate the main effects and interactions. Before going to the more detailed study, experimental studies are undertaken to define the broad range of factor levels as well.

• During Step III, an apposite experimental design is worked out on the basis of the study objective(s) and the number and type of factors, factor levels, and responses being explored. Working details on the variegated vistas of the experimental designs, customarily required to implement DoE optimization of drug delivery, are elucidated in the subsequent section. Afterwards, response surface modeling (RSM) is characteristically employed to relate a response variable to the levels of the input variables, and a design matrix is generated to guide the drug delivery scientist in choosing optimal formulations.

• In Step IV, the drug delivery formulations are experimentally prepared according to the approved experimental design, and the chosen responses are evaluated.

• Later, in Step V, a suitable mathematical model for the objective(s) under exploration is proposed, the experimental data thus obtained are analyzed accordingly, and the statistical significance of the proposed model is discerned. Optimal formulation compositions are searched within the experimental domain, employing graphical or numerical techniques. This entire exercise is invariably executed with the help of pertinent computer software.

• Step VI is the penultimate phase of the optimization exercise, involving validation of the response prognostic ability of the model put forward. The drug delivery performance of selected formulations, taken as checkpoints, is assessed vis-à-vis that predicted using RSM, and the results are critically compared.

• Finally, during Step VII, which is carried out in the industrial milieu, the process is scaled up and set forth ultimately for the production cycle.

The niceties of the significance and execution of each of these seven steps are discussed in greater detail below.


III.A. Step I: Objective

The foremost step while executing systematic DoE methodology is to understand the deliverables of the finished product. This step is not merely confined to understanding the process performance and the product composition; it usually goes beyond, to enfold the concepts of economics, quality control, packaging, market research, etc.³¹ The term objective (also called criterion) has been used to indicate either the goal of an optimization experiment or the property of interest.¹⁶,²⁸ The objectives for an experiment should be clearly determined after discussion among project team members having sound expertise and empiricism in product development, optimization, production, and/or quality control. The group of scientists contemplates the key objectives and identifies the trivial ones. Prioritizing the objectives helps in determining the direction to proceed with regard to the selection of the factors, the responses, and the particular design.⁵,⁵⁴,⁵⁵ This step can be very time consuming and may not furnish rapid results. However, unless the objectives are accurately defined, it may be necessary to repeat the entire work that is to follow. The response variables, selected with dexterity, should be such that they provide maximal information with minimal experimental effort and time. Such response variables are usually performance objectives, such as the extent and rate of drug release, or are occasionally related to visual aesthetics, such as chipping, grittiness, or mottling.¹⁵

III.B. Step II: Factor Studies

Subsequent to ascertaining the study objectives and responses, "several possible" factors are envisioned, and a "few important" ones are screened out. The influence of the important factors—i.e., the main effects and the possible interactions—is also studied. Collectively, screening and factor influence studies are also known as factor studies.⁴ Often carried out as a prelude to finding the optimum, these are sequential stages in the development process. Screening methods are used to identify the important and critical effects.⁶,⁵⁴ Factor studies aim at quantitative determination of the effects caused by a change in the potentially critical formulation or process parameter(s). Such factor studies usually involve statistical experimental designs, and the results so obtained provide useful leads for further response optimization studies.

1. Screening of Influential Factors

As the term suggests, screening is analogous to separating "rice" from "rice husk," where rice is the group of factors with significant influence on the response, and husk is the group of the remaining, noninfluential factors.¹⁵


A product development scientist normally has numerous possible input variables to investigate for their impact on the response variables. During the initial stages of optimization, such input variables are explored for their influence on the outcome of the finished product to see if they are factors.⁴,⁶,⁵⁶ This process, called screening of influential variables, is a paramount step. An input variable correctly identified as a factor increases the chance of success, while an input variable that is not a factor has no consequence.²⁸ Furthermore, an input variable falsely identified as a factor unduly increases the effort and cost, while an unrecognized factor leads to an erroneous picture, and a true optimum may be missed. Principally, screening relies upon the phenomenon of effect sparsity—i.e., only a few of the factors among the numerous envisioned ones truly explain a larger proportion of the experimental variation.⁷,³¹,⁵⁷ The factors responsible for the variability are the active or influential variables, while the others are termed inactive or less influential variables. The entire exercise aims solely at selecting the active factors and excluding the redundant variables, not at obtaining complete and exact numerical data on the system properties. Such a reduction in the number of factors becomes necessary before the pharmaceutical scientist invests the human, financial, and industrial resources in more elaborate studies.⁴,⁵⁴ This phase may be omitted if the process is known well enough from analogous studies. Even after elimination of the noninfluential variables, the number of factors may, at times, still be too large to optimize with the available resources of time, money, manpower, equipment, etc.⁴ In such cases, the more influential variables are optimized, keeping the less influential ones constant at their best levels. The number of experiments is kept as small as possible to limit the volume of work carried out during the initial stages.

a. Screening Designs

The experimental designs employed for this purpose are commonly termed screening designs.⁵⁴,⁵⁶ Screening presumes a considerable approximation of the additivity of the different factors and the absence of interactions. Therefore, the primary purpose of a screening design is to identify significant main effects rather than interaction effects. Thus, these are usually first-order designs with low resolution.²²,³¹ These designs are also sometimes termed main effects designs, orthogonal main effect plans, or simply orthogonal arrays.⁶ The number of experiments in the screening process is kept small, but it must at least equal the number of independent coefficients (P) required to be calculated, as in Eq. (4):

$$P = 1 + \sum_{i=1}^{k} (S_i - 1) \tag{4}$$

where Si is the number of levels of the ith factor, when there are k factors in all.⁶ The estimators of the coefficients should be orthogonal and be estimated with minimum possible error. In general, in order to determine main effects independently, the number of runs should be four times the number of factors to be estimated.³¹ The experimental designs are said to be saturated if the number of runs equals the number of model terms to be estimated.³¹,⁵⁷ In cases where a larger number of factors needs to be screened, the number of runs becomes exorbitantly high. In such circumstances, supersaturated designs, which possess fewer runs than factors, are used. Supersaturated designs can be attractive for factor screening, especially when there are many factors and/or the experimental runs are expensive. A supersaturated design can examine dozens of factors using fewer than half the number of runs. This is usually at the expense of the precision and accuracy of the information. The mathematical models normally considered for screening include the linear and interaction models already described by Eqs. (1) and (2).⁴,⁵⁴,⁵⁶ A two-level screening design can be augmented to a high-level design by adding axial points along with center points.
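As a worked instance of Eq. (4), a hypothetical screening study with two factors at two levels each and one factor at three levels gives

$$P = 1 + (2-1) + (2-1) + (3-1) = 5,$$

so at least five runs are needed merely to estimate the coefficients; the four-runs-per-factor guideline quoted above would suggest about 12 runs for independent estimation of the three main effects.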

2. Factor Influence Study

Having screened the influential variables, a more comprehensive study is subsequently undertaken, with the main aim of quantifying the effects of the factors and determining the interactions, if any.⁴,⁶,⁴⁵ Herein, the studied experimental domain is less extensive, as many fewer active factors are studied. The models used for this study are neither predictive nor capable of generating a response surface. The number of levels is usually limited to two (i.e., the factors are investigated at the extreme values). However, sufficient experimentation is carried out to allow for the detection of interactions among factors.⁶,⁵³ The experimental designs used are generally of the same kind as those used for screening. The experiments conducted at this step may often be "reused" during the optimization or response modeling phase by augmenting the experimental designs with additional design points at the center or the axes.³¹ Central points (i.e., at the intermediate level), if added at this stage, are not included in the calculation of the model equations.⁴ Nevertheless, they may prove useful in identifying curvature in the response, in allowing the reuse of the experiments at various stages, and, if replicated, in validating the reproducibility of the experimental study.

III.C. Step III: Response Surface Modeling and Experimental Designs

During this crucial stage in DoE, one or more selected experimental responses are recorded for a set of experiments carried out in a systematic way to develop a mathematical model.⁸,²¹,²⁶,³³,⁴⁷,⁵⁸,⁵⁹


These approaches comprise the postulation of an empirical mathematical model for each response, which adequately represents change in the response within the zone of interest. Rather than estimating the effects of each variable directly, response surface modeling (RSM) involves fitting the coefficients into the model equation of a particular response variable and mapping the response over the whole of the experimental domain in the form of a surface.⁶,¹⁹,²³,⁴⁶,⁵⁴ Principally, RSM is a group of statistical techniques for empirical model building and model exploitation.⁴⁶,⁵⁴ By careful design and analysis of experiments, it seeks to relate a response to a number of predictors affecting it by generating a response surface, which is an area of space defined within the upper and lower limits of the independent variables, depicting the relationship of these variables to the measured response. Experimental designs that allow the estimation of main effects, interaction effects, and even quadratic effects, and hence provide an idea of the (local) shape of the response surface being investigated, are termed response surface designs.⁶,²⁸,⁴⁵,⁵⁹ Under some circumstances, a model involving only main effects and interactions may be appropriate to describe a response surface. Such circumstances arise when analysis of the results reveals no evidence of "pure quadratic" curvature in the response of interest—i.e., the response at the center approximately equals the average of the responses at the two extreme levels, +1 and –1. In each panel of Figure 8 (a, b, and c), the value of the response increases from the bottom of the figure to the top, and the factor settings increase from left to right.³¹ If a response behaves as in Figure 8a, the design matrix to quantify that behavior needs only to contain factors with two levels—low and high. This model is a basic assumption of simple two-level screening or factor-influence designs.
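The "pure quadratic" curvature check mentioned above can be written explicitly: with $\bar{y}_C$ the mean response at the center point and $\bar{y}_{+1}$, $\bar{y}_{-1}$ the mean responses at the extreme coded levels, the two-level model is tenable when

$$\bar{y}_C \approx \frac{\bar{y}_{+1} + \bar{y}_{-1}}{2},$$

whereas a substantial difference between the two sides signals quadratic curvature and the need for at least three levels.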

FIGURE 8. Different types of responses as functions of factor settings: (a) linear; (b) quadratic; (c) cubic.


If a response behaves as in Figure 8b, the minimum number of levels required for a factor to quantify that behavior is three. Addition of center points to a two-level design appears to be a logical step at this point, but the arrangement of the treatments in such a matrix may confound all the quadratic effects with each other.³¹,⁴⁵,⁴⁶ A two-level design with center points can only detect the quadratic nature of the response, not estimate the individual pure quadratic effects. Generally, quadratic models are proposed for the optimization of drug delivery devices.⁴,⁶,²² Therefore, response surface designs involving studies at three or more levels are employed for DoE optimization purposes. These response surface designs are used to find improved or optimal process settings, troubleshoot process problems and weak points, and make a formulation or process more robust (i.e., less variable) against external and noncontrollable influences.³¹,⁴⁵ Relatively more complicated cubic responses (Fig. 8c) are quite infrequent in pharmaceutical practice.⁶,²² The prediction ability of response surface designs can be determined by the prediction variance, which is a function of the experimental variance (σ²) and the variance function (d), as described by Eq. (5):

$$\operatorname{var}(\hat{y}) = d \cdot \sigma^2 \tag{5}$$

where var(ŷ) is the prediction variance. The variance function (d) further depends upon the levels of a factor and the experimental design. When the prediction variance of a response is constant in all directions at a given distance from the center point of the domain, the design is termed rotatable.⁷,⁸,³¹ Ideally, all response surface designs should possess the characteristic of rotatability—i.e., the ability of a design to be run in any direction without any change in the response prediction variance.

1. Experimental Designs

DoE is an efficient procedure for planning experiments in such a way that the data obtained can be analyzed to yield valid and unbiased conclusions.⁹,³⁰ An experimental design is a strategy for laying out a detailed experimental plan in advance of the conduct of the experimental studies.⁸,¹⁴,²²,²⁶ Before the selection of an experimental design, it is essential to demarcate the experimental domain within the factor space—i.e., the broad range of the factor studies. To accomplish this task, a pragmatic range for the experimental domain is first embarked upon, and the levels and their number are selected so that the optimum lies within its realm.¹⁹,³¹ While selecting the levels, one must ensure that the increments between them are realistic. Too-wide increments may miss useful information between the levels, while a too-narrow range may not yield accurate results.¹⁵


There are numerous types of experimental designs. Various commonly employed experimental designs for RSM, screening, and factor-influence studies in pharmaceutical product development are:

a. factorial designs
b. fractional factorial designs
c. Plackett–Burman designs
d. star designs
e. central composite designs
f. Box–Behnken designs
g. center of gravity designs
h. equiradial designs
i. mixture designs
j. Taguchi designs
k. optimal designs
l. Rechtschaffner designs
m. Cotter designs

For a three-factor study, an experimental design can invariably be envisaged as a "cube," with the possible combinations of the factor levels (low or high) represented at its respective corners.⁹ The cube thus can be the most appropriate representation of the experimental region being explored. Most design types discussed in the current article are, therefore, depicted pictorially using this cubic model, with experimental points at the corners, centers of faces, centers of edges, and so forth. Such depiction facilitates easier comprehension of the various designs and comparisons among them. For designs in which more than three factors are adjusted, the same concept is applicable, except that a hypercube represents the experimental region. Such cubic designs are popular because they are symmetrical and straightforward for conceptualizing and envisioning the model.

a. Factorial Designs

Factorial designs (FDs) are very frequently used response surface designs.⁸,⁴⁰,⁶⁰ A factorial experiment is one in which all levels of a given factor are combined with all levels of every other factor in the experiment.⁶,⁴⁰,⁶¹


These are generally based upon first-degree mathematical models. Full FDs involve studying the effect of all the factors (k) at various levels (x), including the interactions among them, with the total number of experiments being xᵏ. FDs can be investigated at either two levels (2ᵏ FD) or more than two levels. If the number of levels is the same for each factor in the optimization study, the FDs are said to be symmetric, whereas in cases of a different number of levels for different factors, FDs are termed asymmetric.⁶

• 2ᵏ factorial designs. The two-level FDs are the simplest form of orthogonal design, commonly employed for screening and factor influence studies.³¹,⁴⁰,⁴⁷ They involve the study of k factors at two levels only—i.e., at high (+) and low (–) levels. The simplest FD involves investigation of two factors at two levels only. Characteristically, these represent first-order models with linear response, as demonstrated in Figure 8a. Figure 9 portrays a 2² and a 2³ FD, in which each point represents an individual experiment. The design matrix for a two-level full factorial with k factors in the standard order can be generated in the following manner: the first column (X₁) starts with –1, followed by alternating signs for all 2ᵏ runs;³¹ the second column (X₂) starts with –1 repeated twice, then alternates in pairs of the opposite sign until all 2ᵏ places are filled; the third column (X₃) starts with –1 repeated four times, then four repeats of +1, and so on. In general, the ith column (Xᵢ) starts with 2ⁱ⁻¹ repeats of –1 followed by 2ⁱ⁻¹ repeats of +1, the pattern repeating until the column is filled. Table 2 illustrates a simple design matrix layout of a 2³ FD. The signs + or – in the columns AB, AC, BC, and ABC are generated by multiplication of the corresponding levels of the various factors.
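The standard-order construction just described can be written compactly; a minimal sketch (the function name is ours, not a library routine):

```python
import numpy as np

def two_level_full_factorial(k: int) -> np.ndarray:
    """2^k design matrix in standard order: column i (1-based) repeats
    2^(i-1) copies of -1 followed by 2^(i-1) copies of +1."""
    runs = 2 ** k
    cols = [np.tile(np.repeat([-1, +1], 2 ** i), runs // 2 ** (i + 1))
            for i in range(k)]
    return np.column_stack(cols)

X = two_level_full_factorial(3)   # the 2^3 design of Table 2
print(X)                          # rows in the order (1), a, b, ab, c, ac, bc, abc

# Interaction columns are products of the corresponding factor columns.
for idx, label in [((0, 1), "AB"), ((0, 2), "AC"),
                   ((1, 2), "BC"), ((0, 1, 2), "ABC")]:
    print(label, np.prod(X[:, list(idx)], axis=1))
```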

FIGURE 9. Diagrammatic representation of (a) a 2² factorial design; (b) a 2³ factorial design.


TABLE 2. Design Matrix for a 2³ FD in Standard Order Along with the Corresponding Interactions and Responses*

Exp. run   A (X1)   B (X2)   C (X3)   AB (X1X2)   AC (X1X3)   BC (X2X3)   ABC (X1X2X3)   dg (µm)   σg (µm)
(1)        –1       –1       –1       +           +           +           –              1150.7    224.2
a          +1       –1       –1       –           –           +           +               303.0     42.4
b          –1       +1       –1       –           +           –           +              1054.5    222.8
ab         +1       +1       –1       +           –           –           –               507.2    116.0
c          –1       –1       +1       +           –           –           +               326.7     56.0
ac         +1       –1       +1       –           +           –           –               463.5     97.6
bc         –1       +1       +1       –           –           +           –               792.9    135.6
abc        +1       +1       +1       +           +           +           +              1252.7    208.4

* Data taken from Korakianiti et al.⁶²

The signs + or – in the columns AB, AC, BC, and ABC are generated by multiplication of the corresponding levels of the various factors. The design in the given instance has been employed for optimization of pellets to study the effect of three process factors—the rotor speed (A), the amount of water sprayed (B), and the atomizing air pressure (C)—at two levels each on the geometric mean diameter (dg) and geometric size distribution (σg) of the pellets.⁶² The mathematical model associated with the design consists of the main effects of each variable plus all the possible interaction effects—i.e., interactions between pairs of variables and, in fact, among as many factors as there are in the model.⁴,⁶,⁴⁶ Equation (6) is the general mathematical relationship for FDs involving main effect and interaction terms:

Y = \beta_0 + \sum_{i=1}^{n} \beta_i X_i + \sum_{i=1}^{n} \sum_{j=i+1}^{n} \beta_{ij} X_i X_j + \sum_{i=1}^{n} \sum_{j=i+1}^{n} \sum_{k=j+1}^{n} \beta_{ijk} X_i X_j X_k      (6)

where n is the number of factors (3 in the above equation), X is +1 or –1 as per the coding, Y is the measured response, and βᵢ, βᵢⱼ, and βᵢⱼₖ represent the coefficients computed from the responses of the formulations in the design. For a 2³ FD, the above equation can be written as Eq. (7):

Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_3 + \beta_{12} X_1 X_2 + \beta_{13} X_1 X_3 + \beta_{23} X_2 X_3 + \beta_{123} X_1 X_2 X_3      (7)

FIGURE 10. Diagrammatic representation of a 2³ factorial design with added center point.
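The standard-order construction described above is easy to verify programmatically. Below is a minimal Python sketch (our illustration, not code from the original article; the function name and use of NumPy are our own) that generates the 2³ design matrix of Table 2 together with its interaction columns.

```python
import numpy as np

def full_factorial_two_level(k):
    """2**k full factorial design matrix in standard order: the ith column
    (1-indexed) repeats 2**(i-1) copies of -1 followed by 2**(i-1) copies
    of +1, tiled down all 2**k runs."""
    runs = 2 ** k
    cols = []
    for i in range(k):
        block = np.repeat([-1, 1], 2 ** i)            # -1s then +1s
        cols.append(np.tile(block, runs // len(block)))
    return np.column_stack(cols)

X = full_factorial_two_level(3)                        # columns A, B, C
A, B, C = X.T
# Interaction columns are elementwise products of the factor columns.
design = np.column_stack([A, B, C, A*B, A*C, B*C, A*B*C])
print(design)
```

Note that the printed sign pattern reproduces the factor and interaction columns of Table 2 exactly (with the row order (1), a, b, ab, c, ac, bc, abc).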

Center points can be added to 2ᵏ FDs to allow identification of curvature in the response and, upon replication, to validate the reproducibility of the experimental study.⁶⁰ Figure 10 shows the cubic model for a 2³ FD with an added center point.

• Higher-level factorial designs. FDs at three or more levels are employed mainly for response surface optimization.²⁸,⁴⁵,⁶⁰ Simple to generate, these designs can detect and estimate nonlinear or quadratic effects. The main strength of the design is orthogonality, because it allows independent estimation of the main effects and interactions.³¹,⁴⁰,⁶⁰ On the other hand, the major limitation associated with high-level FDs is the increase in the number of experiments required as the number of factors (k) rises. Even at a modest number of factors, the number of runs is quite large. For instance, the absolute minimum number of runs required to estimate all the terms present in a four-factor, three-level quadratic model is 15, involving the intercept term, four main effects, six two-factor interactions, and four quadratic terms; the corresponding 3ᵏ FD for k = 4 requires 81 runs. Another disadvantage of xᵏ FDs is the lack of rotatability.⁷,³¹ Table 3 illustrates the design matrix for a three-level FD for buccoadhesive compressed matrices of diltiazem hydrochloride involving two factors—Carbopol (X₁) and HPMC K 100 LV (X₂).¹⁸ The response parameters studied encompass bioadhesive strength (F), release up to 10 hours (Rel₁₀ₕ), and the time taken for 50% of the drug release (t₅₀%).


TABLE 3. A 3² Full Factorial Design Layout Along with the Studied Responses*

Trial no.   X1   X2   F (g)   Rel10h (%)   t50% (h)
1           –1   –1    6.66    80.66        4.99
2           –1    0   11.80    72.50        7.43
3           –1    1   14.11    67.01        8.28
4            0   –1    9.09    68.21        7.38
5            0    0   15.55    59.37        8.88
6            0    1   23.50    50.59        9.71
7            1   –1   16.32    57.10        8.35
8            1    0   19.43    47.98       10.23
9            1    1   28.16    35.94       12.03

* Data taken from Singh & Ahuja.¹⁸

b. Fractional Factorial Designs

In a full FD, as the number of factors or factor levels increases, the number of required experiments exceeds manageable levels. Also, with a large number of factors, it is possible that the highest-order interactions have no significant effect. In such cases, the number of experiments can be reduced in a systematic way, the resulting designs being called fractional factorial designs (FFDs) or, sometimes, partial factorial designs.²⁸,³⁴,⁴⁷,⁶⁰ An FFD is a finite fraction (1/xʳ) of a complete or "full" FD, where r is the degree of fractionation and xᵏ⁻ʳ is the total number of experiments required. Although these designs are economical in terms of the number of experiments, the ability to distinguish some of the factor effects is partly sacrificed by the reduction in the number of experiments. In other words, the effects in an FFD can no longer be uniquely estimated.³¹,⁶⁰ Therefore, FFDs often possess lower resolution than their full factorial counterparts, because they require fewer experiments and consequently provide fewer data. The degree of fractionation should be appropriately chosen on the basis of the resources available and the design resolution desired.⁶ It should not be too large, because this may lead to confounding of factor effects, not only with the interactions but also with other factor effects. Properly chosen FFDs for two-level experiments, however, have the desirable properties of being both balanced and orthogonal.³¹,⁶⁰


TABLE 4. One-Half Replicate of a 3-Factor, 2-Level Factorial Design

             Factors              Interactions
Experiment   A    B    C = AB     AC   BC
a            +    –    –          –    +
b            –    +    –          +    –
c            –    –    +          –    –
abc          +    +    +          +    +

For a two-level, three-factor design, a full FD requires 2³—i.e., eight—experiments, from which seven effects are determined. Of these seven effects, three are main effects, and the other four are due to the interactions among the three factors. An FFD with r = 1, on the other hand, requires only 2³⁻¹—i.e., four—experiments, and a total of only three effects can be estimated. However, these three effects are the combined effects of factors and interactions. Table 4 depicts a one-half replicate of a 2³ FD. From Table 4, the aliases (i.e., the confounded effects) can be defined. An effect is defined by the signs in the corresponding column—e.g., the effect of A is (a – b – c + abc), which is exactly equal to that of BC. Therefore, BC and A are aliases—i.e., confounded. Also, C = AB and B = AC. Thus, in this design, the main effects are confounded with the interactions of the other two factors.²⁸,⁴⁰ This implies that the main effects cannot be clearly interpreted if the interactions present are significant.

FIGURE 11. Diagrammatic representation of (a) a 2³⁻¹ fractional factorial design with design points as spheres at the corners of the cubic model; (b) a 2³⁻¹ fractional factorial design with added center point.

Figure 11 depicts an FFD graphically as a hypercube, with its corners represented by spheres depicting the experiments studied. Low-resolution FFDs (mainly resolution III or occasionally resolution IV) are routinely employed for screening purposes.⁴,⁵⁶ They are efficient in determining "main effects," where interactions are assumed to be negligible. On the other hand, high-resolution FFDs, such as resolution V or higher, are used to determine both the main effects and interactions. Hence, such high-resolution FFDs, besides factor influence studies, can be used for drug delivery product or process optimization.⁷,⁶³,⁶⁴ The resolution of FFDs can also be improved by fold-over methods.³¹ High-resolution designs, such as resolution V FFDs, can also be augmented to second-order response surface designs.
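The aliasing pattern of Table 4 can be checked in a few lines of Python (a sketch of ours, not the authors' code): multiplying the sign columns reproduces the generator C = AB and shows that each main effect is identical to, and hence confounded with, a two-factor interaction.

```python
import numpy as np

# Half replicate of a 2^3 FD (runs a, b, c, abc) built from the generator C = AB
A = np.array([+1, -1, -1, +1])
B = np.array([-1, +1, -1, +1])
C = A * B                                  # generator: C = AB

print(np.array_equal(A, B * C))            # True -> A aliased with BC
print(np.array_equal(B, A * C))            # True -> B aliased with AC
print(np.array_equal(C, A * B))            # True -> C aliased with AB
```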

c. Plackett–Burman Designs

Plackett–Burman designs (PBDs) are special two-level FFDs used generally for screening of k = N – 1 factors, where N, the number of runs, is a multiple of 4.⁶⁵ Also known as Hadamard designs or symmetrically reduced 2ᵏ⁻ʳ FDs, these designs can easily be constructed employing a minimum number of trials.⁴,⁶,⁶⁵⁻⁶⁸ For instance, a 30-factor study can be accomplished using only 32 experimental runs. In most cases, the first line is given, and the remaining lines are obtained by permutation, except for the last line, which consists entirely of minus signs. Table 5 presents the first lines of PBDs for numbers of experiments (N) ranging between 4 and 24. The second row is generated from the first by shifting each element one position to the right and placing the last element in the first position. The third row is produced from the second in an analogous manner, and the process is continued

TABLE 5. Coded Experimentation Carried Out with Plackett–Burman Designs

N    First line of the design
4    ++–
8    +++–+––
12   ++–+++–––+–
16   ++++–+–+––+–––
20   ++––++–+–+–+––––++–
24   +++++–+–++––++––+–+––––


TABLE 6. A Plackett–Burman Design Layout for Eight Experiments and Seven Factors

Experiment run   X1   X2   X3   X4   X5   X6   X7
1                +1   +1   +1   –1   +1   –1   –1
2                –1   +1   +1   +1   –1   +1   –1
3                –1   –1   +1   +1   +1   –1   +1
4                +1   –1   –1   +1   +1   +1   –1
5                –1   +1   –1   –1   +1   +1   +1
6                +1   –1   +1   –1   –1   +1   +1
7                +1   +1   –1   +1   –1   –1   +1
8                –1   –1   –1   –1   –1   –1   –1

until the k t line is reached. Because these designs cannot be represented as cubes, they are sometimes called nongeometeric designs.²⁸,³⁴ Table 6 presents the PBD layout for eight experiments. In Plackett–Burman designs, the main effects are orthogonal, and two-factor interactions are only partially confounded with main effects.³¹ This is different from resolution III FFDs, in which two-factor interactions are indistinguishable from main effects. PBDs are quite favorably employed during the screening process.⁵⁴,⁶⁷,⁶⁹

d. Star Designs

Because FDs do not allow detection of curvature unless more than two levels of a factor are chosen, a star design can be used to alleviate the problem and provide a simple way to fit a quadratic model.⁸,²⁸ The number of required experiments in a star design is given by 2k + 1. A central experimental point is located, from which the other factor combinations are generated by moving the same positive and negative distance (= step size, α). For two factors, the star design is simply a 2² FD rotated over 45° with an additional center point (Fig. 12). The design is invariably orthogonal and rotatable.

FIGURE 12. Diagrammatic representation of a star design with an additional center point, derived from the factorial by rotation over 45°.

e. Central Composite Designs

For nonlinear responses requiring second-order models, central composite designs (CCDs) are the most frequently employed.¹⁶,³³,⁵⁴ Also known as the Box–Wilson design, the "composite design" contains an embedded 2ᵏ FD or 2ᵏ⁻ʳ FFD, augmented with a group of 2k star points and a central point.⁷⁰ The star points allow estimation of curvature and establish new extremes for the low and high settings of all the factors. Hence, CCDs are second-order designs that effectively combine the advantageous features of both the FD (or FFD) and the star design. The total number of factor combinations in a CCD is given by 2ᵏ + 2k + 1. If the distance from the center of the design space to a factorial point is ±1 unit for each factor, the distance from the center of the design space to a star (axial) point is ±α with |α| > 1. The precise value of α depends on certain properties desired for the design and on the number of factors involved.⁸,²⁸,³¹,⁷⁰ The axial points for two-factor problems are (±α, 0) and (0, ±α). A two-factor CCD is identical to a 3² FD with a rectangular experimental domain at α = ±1, as shown in Figure 13a. On the other hand, the experimental domain is spherical in shape for α = √2 = 1.414, as shown in Figure 13b. The CCD is quite popular in response surface optimization during pharmaceutical product development. A face-centered cube design (FCCD) results when both factorial and star points in a CCD possess the same positive and negative distance from the center.²⁸ A rotatable CCD (RCCD) is identical to the FCCD except that the points defined for the star design are changed to (±(2ᵏ)¹/⁴, 0, …, 0), while those generated by the FD remain unchanged. In this way, the design generates information equally well in all directions—i.e., the variance of the estimated response is the same at all points on a sphere centered at the origin.³¹ Furthermore, depending upon the type of domain and the α value, the RCCD can be either circumscribed (CCC) or inscribed (CCI). Table 7 gives an account of the salient aspects of the various types of CCDs, and Figure 14 depicts the designed experiments carried out using these designs.


FIGURE 13. Diagrammatic representation of (a) a central composite design (rectangular domain) with α = 1; (b) a central composite design (spherical domain) with α = 1.414.

TABLE 7. Various Types of Composite Designs and Their Salient Features

Face centered (FCCD): In this design the star points are at the center of each face of the factorial space, i.e., α = ±1. It requires three levels of each factor. Augmenting an existing FD or FFD (resolution V) with appropriate star points can also produce this design.

Circumscribed (CCC), a rotatable CCD: These designs are the original form of the central composite design. The star points are at some distance α from the center, based on the properties desired for the design and the number of factors in the design. These designs have circular, spherical, or hyperspherical symmetry and require five levels for each factor. Augmenting an existing FD or FFD (resolution V) with star points can also produce this design.

Inscribed (CCI), a rotatable CCD: For situations in which the limits specified for the factor settings are truly limits, this design uses the factor settings as the star points and creates an FD or FFD within those limits. In other words, a CCI design is a scaled-down CCC design, with each factor level of the CCC design divided by α. This design also requires five levels of each factor.


FIGURE 14. Diagrammatic representation of (a) a face-centered central composite design; (b) a circumscribed rotatable central composite design; (c) an inscribed rotatable central composite design.

To maintain rotatability, the value of α depends on the number of experimental runs in the factorial portion (factorial or FFD design space) of the CCD, as given by Eq. (8):³¹,⁷⁰

\alpha = [\text{number of factorial runs}]^{1/4}      (8)

If the factorial space is generated from a full factorial, this simplifies to Eq. (9):

\alpha = (2^k)^{1/4}      (9)

Table 8 illustrates some typical values of α as a function of the number of factors for maintaining the rotatability of CCDs. The second-order polynomial generally used for the composite designs is given as Eq. (10):

Y = \beta_0 + \sum_{i=1}^{n} \beta_i X_i + \sum_{i=1}^{n} \sum_{j=i+1}^{n} \beta_{ij} X_i X_j + \sum_{i=1}^{n} \beta_{ii} X_i^2      (10)

The values of βᵢ, βᵢⱼ, and βᵢᵢ represent the coefficients for the main effect, interaction, and second-order terms, respectively.


TABLE 8. Determination of the α Value for Maintaining the Rotatability of a Central Composite Design for Different Numbers of Factors

Number of factors   Factorial portion   Scaled value for α relative to ±1
2                   2²                  2^(2/4) = 1.414
3                   2³                  2^(3/4) = 1.682
4                   2⁴                  2^(4/4) = 2.000
5                   2⁵⁻¹                2^(4/4) = 2.000
5                   2⁵                  2^(5/4) = 2.378
6                   2⁶⁻¹                2^(5/4) = 2.378
6                   2⁶                  2^(6/4) = 2.828

The composite designs normally involve the investigation of each factor at five levels—i.e., one central point (0 level), two factorial points (±1 levels), and two axial star points (±α levels). In the case of the FCCD, however, the number of levels is kept at three for each factor. The above second-order equation for a two-factor study, upon expansion, transforms to Eq. (3).
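As an illustration of Eqs. (8)–(9) and Table 8, the coded runs of a rotatable CCD can be enumerated as follows (a sketch of ours, assuming a full factorial core; the function name is our own):

```python
import itertools
import numpy as np

def rotatable_ccd(k):
    """Coded points of a rotatable CCD with a full 2**k factorial core:
    2**k factorial points, 2k star points at +/- alpha, and a center point."""
    alpha = (2 ** k) ** 0.25                              # Eq. (9)
    factorial = [list(p) for p in itertools.product([-1.0, 1.0], repeat=k)]
    star = []
    for i in range(k):
        for s in (-alpha, alpha):
            point = [0.0] * k
            point[i] = s
            star.append(point)
    return np.array(factorial + star + [[0.0] * k])

points = rotatable_ccd(2)
print(len(points))          # 2**k + 2k + 1 = 9 runs for k = 2
print(points.round(3))      # star points sit at +/-1.414, as in Table 8
```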

f. Box–Behnken Designs

A specially constructed design, the Box–Behnken design (BBD), requires only three levels for each factor—i.e., –1, 0, and +1.⁷¹ A BBD is an economical alternative to the CCD.⁴,⁵⁴,⁷¹⁻⁷³ It overcomes the inherent pitfall of the CCDs, wherein each factor has to be studied at five levels, consequently escalating the number of experiments as the number of factors rises. Table 9 gives a comparative account of the number of runs required by CCDs and BBDs for a given number of factors. The BBD is an independent quadratic design, in that it does not contain an embedded FD or FFD. Although the BBD is also called an orthogonal balanced incomplete block design, it has limited capability for orthogonal blocking in comparison with the CCDs. The design is rotatable (or nearly rotatable), and the treatment combinations are located at the midpoints of the edges and at the center of the experimental domain, as portrayed in Figure 15.³¹,⁷¹ BBDs are also popularly used for response surface optimization of drug delivery systems.⁴,²³,⁷²⁻⁷⁷ Table 10 lists a documented instance of a BBD layout for preparing sustained release pellets, employing 15 experiments with three factors at three levels each.⁷⁸


TABLE 9. Number of Runs Required by Central Composite and Box–Behnken Designs

Number of factors   Central composite design                          Box–Behnken design
2                   13 (5 center point runs)                          —
3                   20 (6 center point runs)                          15
4                   30 (6 center point runs)                          27
5                   33 (fractional factorial) or 52 (full factorial)  46
6                   54 (fractional factorial) or 91 (full factorial)  54

g. Center of Gravity Designs

Outlined by Podczeck,⁷⁹ these designs are modifications of CCD. Retaining the advantages of CCD, these designs further reduce the total number of experiments to 4k + 1. The experiments start with a midpoint, which usually lies in the factorial region. From this midpoint (i.e., center of gravity), at least four points are decided on each coordinate axis in such a way that the resulting geometric space becomes as large as possible. Despite the broader geometric space, the designs include only meaningful experiments. Such designs have been employed to optimize various DDS quite frequently.²¹,⁸⁰,⁸¹

h. Equiradial Designs

Equiradial designs are first-degree response surface designs, consisting of N points on a circle around the center of interest, arranged in the form of a regular polygon.⁶,⁴¹ The designs can be rotated by any angle without any loss in their properties. For six experiments, the design is of pentagonal shape, with five design points on the circumference of a circle and one at the center.

FIGURE 15. Diagrammatic representation of a Box–Behnken design for three factors.


TABLE 10. Design Layout According to a Box–Behnken Design*

Run   X1   X2    X3    Y1           Y2           Y3
1     30   6/1   500   20.0 ± 0.8   27.5 ± 1.1   38.0 ± 1.2
2     30   2/1   500   33.0 ± 1.0   45.4 ± 1.1   65.2 ± 1.1
3     10   6/1   500   42.4 ± 0.9   58.7 ± 1.5   80.5 ± 0.9
4     10   2/1   500   66.1 ± 1.3   85.6 ± 1.2   94.1 ± 2.0
5     30   4/1   700   15.4 ± 1.1   21.1 ± 1.6   29.5 ± 1.1
6     30   4/1   300   53.9 ± 1.4   71.7 ± 1.8   85.1 ± 1.0
7     10   4/1   700   32.9 ± 0.8   46.5 ± 1.3   68.5 ± 1.2
8     10   4/1   300   82.4 ± 2.0   91.0 ± 2.0   93.8 ± 2.0
9     20   6/1   700   10.8 ± 1.0   14.8 ± 1.1   20.3 ± 1.3
10    20   6/1   300   47.4 ± 1.1   62.7 ± 1.3   80.3 ± 1.5
11    20   2/1   700   22.1 ± 1.2   30.4 ± 1.2   42.8 ± 1.7
12    20   2/1   300   75.3 ± 0.9   87.1 ± 2.0   94.0 ± 1.9
13    20   4/1   500   26.5 ± 1.0   36.4 ± 1.5   50.8 ± 1.3
14    20   4/1   500   24.0 ± 1.5   32.9 ± 2.0   47.0 ± 2.0
15    20   4/1   500   25.0 ± 1.2   34.9 ± 1.5   49.3 ± 2.0

Factors (levels –1, 0, 1):
X1: Plasticizer concentration (%) (10, 20, 30)
X2: Polymer ratio (Eudragit RS/Eudragit RL) (2/1, 4/1, 6/1)
X3: Quantity of coating dispersion (g) (300, 500, 700)

Response variables:
Y1: Cumulative percent drug release after 3 h
Y2: Cumulative percent drug release after 4 h
Y3: Cumulative percent drug release after 6 h

* Data taken from Kramar et al.⁷⁸



FIGURE 16. Diagrammatic representation of two-factor equiradial designs: (a) triangular four-run design; (b) square five-run design; (c) pentagonal six-run design; (d) Doehlert hexagonal seven-run design.

The hexagonal equiradial design for two factors is popularly known as the Doehlert design. Also known as the uniform shell design, it is characterized by a uniform distribution of the experimental points on the surface of a hypersphere, thus providing a good basis for interpolation.⁶,⁸,²⁸,⁸² The total number of experiments is given by k² + k + 1. For two factors, for instance, a minimum of seven experiments is proposed, arranged as a regular hexagon with a central point. Each factor is analyzed at a different number of levels.⁸²,⁸³ The design may be extended in any direction, including the possibility of adding further factors without any adverse effect on the quality of the design. Lately, this design has been recommended by several authors for pharmaceutical formulation development.⁶,⁸,²⁸,⁸³ Figure 16 explicitly illustrates several important cases of equiradial designs.
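Equiradial design points are simply equally spaced points on a circle plus a center point; the following sketch (ours, with an assumed unit radius) generates the pentagonal six-run and Doehlert-type hexagonal seven-run layouts of Figure 16.

```python
import numpy as np

def equiradial_design(n_points, radius=1.0, rotation=0.0):
    """N equally spaced points on a circle of a given radius plus a center
    point, for a two-factor equiradial design."""
    theta = rotation + 2.0 * np.pi * np.arange(n_points) / n_points
    circle = np.column_stack([radius * np.cos(theta), radius * np.sin(theta)])
    return np.vstack([circle, [0.0, 0.0]])

print(equiradial_design(5).round(3))   # pentagonal design: 6 runs
print(equiradial_design(6).round(3))   # Doehlert-type hexagon: 7 runs
```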


i. Mixture Designs

In FDs, CCDs, BBDs, etc., all the factors under consideration can be varied and evaluated simultaneously at all levels. This may not be possible in many situations. Particularly in DDS with multiple excipients, the characteristics of the finished product usually depend not so much on the quantity of each substance present as on their proportions.⁴,¹³ Here, the sum total of the proportions of all the excipients is unity, and none of the fractions can be negative. Therefore, the levels of the different components can be varied only with the restriction that their proportions sum to unity.⁸⁴ Mixture designs are highly recommended in such cases.¹³,⁸⁵⁻⁸⁷ In a two-component mixture, only one factor level can be varied independently; in a three-component mixture, only two factor levels can be varied independently; and so on. The remaining factor level is chosen to complete the sum to unity. Hence, mixture designs have often been described as the experimental designs for formulation optimization.⁴,¹²,¹³,³² For process optimization, however, designs such as FDs and CCDs are preferentially employed. The fact that the proportions of the different factors must sum to 100% complicates both the design and the analysis of mixture experiments. There are two types of mixture designs—standard mixture designs and constrained mixture designs.⁶,³¹,⁵⁴ If the experimental region is a simplex, standard mixture designs are used. A simplex is the simplest possible n-sided figure in an (n – 1)-dimensional space.¹⁶,²⁸,³³ It is represented as a straight line for two components, as a 2-D triangle for three components, as a 3-D tetrahedron for four components, and so on. If the mixture components are subject to the constraint that they must sum to one, then standard mixture designs for fitting standard models are used. The most popular standard mixture designs are the simplex mixture designs (SMDs), also known as Scheffé's designs.²⁸,⁸⁸ They can be either centroid or lattice designs; the two are identical for first- and second-order models but differ from the third order onwards. Herein, the design points are uniformly distributed over the factor space and form the lattice. The design point layout for three factors using various models is shown in Figure 17, where each point refers to an individual experiment. Scheffé's polynomial equations are used for estimating the effects. General mathematical models for a total of three components, X₁, X₂, and X₃, are given as Eqs. (11)–(13):

Linear:  Y = \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_3      (11)

Quadratic:  Y = \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_3 + \beta_{12} X_1 X_2 + \beta_{13} X_1 X_3 + \beta_{23} X_2 X_3      (12)

Special cubic:  Y = \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_3 + \beta_{12} X_1 X_2 + \beta_{13} X_1 X_3 + \beta_{23} X_2 X_3 + \beta_{123} X_1 X_2 X_3      (13)

FIGURE 17. Diagrammatic representation of simplex mixture designs: (a) linear model; (b) quadratic model; (c) special cubic model.

where the βᵢ in each of Eqs. (11)–(13) represent the coefficients of the respective variables. Because a change in one fraction of a mixture implies a change in another fraction, there are no squared (pure quadratic) terms in Scheffé's polynomial equations. Because there are also no intercept terms in these polynomials, standard linear regression analysis of the data cannot be performed, and special regression algorithms are required for calculating the model equations.¹³,²⁸,⁸⁸ For screening purposes, first-order linear mixture models are used, involving the axial design points in the experimental domain.²⁹ Table 11 shows the design matrix for a simplex lattice design generated for optimization of the dissolution enhancement of an insoluble drug (prednisone) with physical mixtures of superdisintegrants.⁸⁹ When some or all of the mixture components are subject to additional constraints, such as a maximum (upper bound) and/or a minimum (lower bound) value for each component, constrained mixture designs are preferred to standard mixture designs.⁶,⁸⁵,⁹⁰ The extreme vertices design is the most widely used example of a constrained mixture design.⁷,³⁴,⁹¹ It is recommended when the factor space is restricted, usually on both the upper and lower limits of the factor levels. For instance, in a study involving the formulation of a controlled release tablet by direct compression, use of a lubricant in an amount less than 0.2% w/w is useless, and more than 2% w/w is meaningless.¹⁵ In such designs, the observations are made at the corners of the bounded design space, at the middles of the edges, and at the center of the design space, and the effects can be evaluated only by regression.


TABLE 11. Design Layout for a Simplex Lattice Design*

Formulation   X1      X2      X3      Percent drug dissolved in 10 min
1             1       0       0       15.2
2             0       1       0        2.8
3             0       0       1       23.1
4             0.5     0.5     0       55.3
5             0.5     0       0.5     59.5
6             0       0.5     0.5     20.6
7             0.33    0.33    0.33    82.4
8             0.667   0.167   0.167   44.7
9             0.167   0.667   0.167   45.5
10            0.167   0.167   0.667   71.6

X1: Croscarmellose Sodium; X2: Dicalcium Phosphate Dihydrate; X3: Anhydrous β-Lactose

* Data taken from Ferrari et al.⁸⁹
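Because Scheffé polynomials have no intercept, they are fitted by regression through the origin. The sketch below is our illustration, with NumPy's general least-squares routine standing in for the special-purpose algorithms mentioned above; it fits the quadratic mixture model of Eq. (12) to the Table 11 data.

```python
import numpy as np

# Simplex lattice data of Table 11: proportions X1-X3 and % dissolved in 10 min
X = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5],
              [0.33, 0.33, 0.33], [0.667, 0.167, 0.167],
              [0.167, 0.667, 0.167], [0.167, 0.167, 0.667]])
y = np.array([15.2, 2.8, 23.1, 55.3, 59.5, 20.6, 82.4, 44.7, 45.5, 71.6])

x1, x2, x3 = X.T
M = np.column_stack([x1, x2, x3, x1*x2, x1*x3, x2*x3])   # Scheffe quadratic terms
beta, *_ = np.linalg.lstsq(M, y, rcond=None)             # no intercept column
print(beta.round(1))        # b1, b2, b3, b12, b13, b23 of Eq. (12)
```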

j. Taguchi Designs

Every industrial development system is subject to natural variability over which one has little or no control. Such variability arises from a number of possible causes, such as materials, operators, processes, suppliers, and environmental changes. To develop products or processes that are robust amid such natural variability, Genichi Taguchi, a Japanese engineer and quality consultant, proposed several experimental design approaches in the mid-1980s.⁴³,⁹² These Taguchi methods have, of late, become globally popular in industrial experimentation. Taguchi refers to experimental design as "off-line quality control," because it is a method of ensuring good performance in the development of products or processes.⁹² The goal of these robust designs is to partition system variability according to its various sources and to find the control factor settings that generate acceptable responses.⁴,²⁹,³¹ The unique aspects of his approach are the use of signal (or control or design) factors and noise (or uncontrollable) factors. Signal factors are the system control inputs; noise factors are typically too difficult or too expensive to control. The design employs two orthogonal arrays—i.e., tabulated designs. The signal (or control) factors, used to fine-tune the process, form the inner array. The noise factors, associated with process or environmental variability, form the outer array.⁶,⁴¹,⁹³ Taguchi's orthogonal arrays are invariably two-level, three-level, and mixed-level FFDs. The inner design, constructed over the control factors, finds the optimum settings, while the outer design, over the noise factors, examines how the response behaves over a wide range of noise conditions. The experiment is performed at all combinations of the inner and outer design runs; a Taguchi experiment is, in effect, the cross-product of the two orthogonal arrays. Figure 18 illustrates the layout of the arrays for a two-level, three-factor Taguchi design. Pictorially, it can be seen as a conventional design in the inner array factors (compare Figure 9b for the classical 2³ FD), with the addition of a "small" outer-array factorial design at each corner of the "inner array" box. Taguchi experimental designs based on orthogonal arrays are usually labeled in the form L8, indicating an array with eight runs, whereas classical experimental designs are identified with a superscript indicating the number of variables. Thus, because a 2³ classical experimental design also has eight runs, the designs generated by the two methods are often analogous. Table 12 shows a Taguchi L8 array (in contrast to Table 2, which illustrates a classical 2³ design), which can investigate the effects of up to seven factors in eight runs. Use of linear graphs and interaction tables would select columns 1, 2, and 4 to identify the effects of three factors, and this corresponds to the same classical design (Table 2).³¹ In Taguchi arrays, each row represents a run of the experiment; here, each design has eight runs. Each column represents the settings of the factor at the top of the column. In the classical design, the levels are (–1, +1), while in the Taguchi design, the levels are (1, 2), implying (low, high) for each factor, respectively.

FIGURE 18. Diagrammatic representation of inner 2³ and outer 2² arrays for a Taguchi robust design, with "I" as the inner array and "E" as the outer array.


TABLE 12. Taguchi L8 Array for Three Variables

                     Column number
Experimental run     1    2    3    4    5    6    7
1                    1    1    1    1    1    1    1
2                    1    1    1    2    2    2    2
3                    1    2    2    1    1    2    2
4                    1    2    2    2    2    1    1
5                    2    1    2    1    2    1    2
6                    2    1    2    2    1    2    1
7                    2    2    1    1    2    2    1
8                    2    2    1    2    1    1    2

Classical 2³ FD
column no.           C    B    BC   A    AC   AB   ABC

At the bottom of each design is the corresponding column number for the alternative design—e.g., column 1 in the Taguchi design (Table 12) corresponds to column C in the classical design (Table 2), and vice versa. The Taguchi design has the same number of components as the classical design, but in a different order. However, the columns for the settings of the factors, chosen according to the interactions assumed by the investigator, may or may not be present in the process. The investigator consults an interaction table and/or linear graphs to determine which columns to choose in the design. The response variable in Taguchi data analysis is not the usual raw response or quality characteristic but the signal-to-noise (S/N) ratio.⁴,⁶,⁷,⁴³ The S/N ratio is a performance statistic, calculated across the entire outer array for each inner run, which becomes the response for a fit across the inner design runs. Its formula depends on whether the experimental goal is to maximize, minimize, or match a target value of the quality characteristic of interest. From the drug delivery perspective, when using Taguchi methods one first determines the control factors that can be set by the product development pharmacist.⁶,⁹³⁻⁹⁵ These are the factors in the experiment for which different levels are investigated. Next, decisions are made on the choice of an appropriate orthogonal array for the experiment and on the methodology to measure the quality characteristic of interest. It must be remembered that most S/N ratios require that multiple measurements be taken during each run of the experiment; the variability around the nominal value cannot otherwise be assessed. Finally, the experiment is conducted, the factors that most strongly affect the chosen S/N ratio are identified, and the production process is reset accordingly. Depending upon the situation at hand, S/N ratios are maximized, minimized, or targeted to a specific limit or range. Table 13 lists the recommended performance statistics for Taguchi's S/N ratio; the table also encompasses envisioned drug delivery applications where Taguchi arrays hold high promise and can find plausible application. Besides robustness, the Taguchi methodology emphasizes minimization of the loss function—i.e., minimizing the economic loss associated with running a process at non-optimal conditions. In fact, Taguchi's analysis begins with an operational definition of quality as a measure of loss: the greater the quality loss, the lower the quality, and the ideal point of highest quality is obviously the one representing no quality loss. Taguchi designs allow estimation of the maximum number of main effects in an unbiased (orthogonal) manner with the minimum number of experimental runs. Most analyses of robust design experiments amount to a standard ANOVA of the respective S/N ratios, ignoring two-way or higher-order interactions and sometimes using accumulation analysis.⁷,⁴³ Besides screening of influential variables, Taguchi array designs hold tremendous potential for response surface modeling, especially when the number of factors is quite large.

TABLE 13. Taguchi's Signal-to-Noise Ratios as Performance Statistics in Drug Delivery

Goal of optimization study: Maximization of the response
S/N ratio: S/N = -10 \log \left( \frac{1}{n} \sum_i \frac{1}{Y_i^2} \right)
Potential instances of drug delivery relevance: flux from a transdermal patch through skin; bioadhesive strength of a bioadhesive tablet; MDT of an oral controlled release tablet; entrapment efficiency in nanoparticles; dissolution rate of rapid release tablets; shelf-life of a drug delivery system; floating time of a hydrodynamically balanced system.

Goal of optimization study: Minimization of the response
S/N ratio: S/N = -10 \log \left( \frac{1}{n} \sum_i Y_i^2 \right)
Potential instances of drug delivery relevance: drug leakage from liposomal systems; T85% of a fast release solid dispersion.

Goal of optimization study: Target to a specific value or a range
S/N ratio: S/N = 10 \log \left( \bar{Y}^2 / s^2 \right)
Potential instances of drug delivery relevance: lag time of release of enteric coated formulations; release exponent value for zero-order kinetics; dispersion time of dispersible tablets; HLB of microemulsions; hardness of oral compressed matrices.
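The three S/N statistics of Table 13 translate directly into code. The sketch below is ours, with hypothetical replicate measurements; the nominal-the-best statistic is written in its standard form, 10 log(Ȳ²/s²).

```python
import numpy as np

def sn_larger_the_better(y):            # maximize the response
    y = np.asarray(y, float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

def sn_smaller_the_better(y):           # minimize the response
    y = np.asarray(y, float)
    return -10.0 * np.log10(np.mean(y ** 2))

def sn_nominal_the_best(y):             # target a specific value
    y = np.asarray(y, float)
    return 10.0 * np.log10(y.mean() ** 2 / y.var(ddof=1))

replicates = [92.1, 95.4, 90.8]         # hypothetical outer-array measurements
print(sn_larger_the_better(replicates))
```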


k. Optimal Designs

If the experimental domain is of a definite shape—either cubic or spherical—the standard experimental designs are normally used. However, when the domain is irregular in shape, optimal designs can be used.⁴,²⁸,⁹⁶ These are nonclassic, custom designs generated by computer-run exchange algorithms.⁷,⁹⁷ In general, such custom designs are generated on the basis of a specific optimality criterion, such as the D-, A-, G-, I-, or V-optimality criterion.⁶,⁷,³¹ These optimality criteria are based upon the minimization of various parameter and design prediction variances. The variable space in such designs consists of a candidate set of design points, comprising all the possible treatment combinations that the formulator wishes to consider in an experiment; points are selected from this candidate set according to the chosen criterion. The most popular criterion for custom designs is D-optimality. D-optimal designs are based on the principle of minimizing the variances and covariances of the parameters. The optimal design method requires that a correct model be postulated, the variable space be defined, and the number of design points be fixed, in such a way that the model coefficients are determined with the maximum possible efficiency. These powerful designs can be continued—i.e., more design points can be added subsequently, and the experimentation can be carried out in stages. In particular, when augmenting an experimental design causes the domain to lose its regular shape, a D-optimal design can be employed for the further studies. Many new terms can be added to the original model in any direction, and corresponding optimal new test runs (with respect to this expanded model) can be determined. Two sets of experiments, carried out in different blocks, can also be grouped together. Depending upon the problem, these designs can also be used along with factorial, central composite, and mixture designs. Besides formulation and process optimization, optimal designs are also successfully used for the screening of factors.⁷⁴,⁸³,⁹¹,⁹⁶,⁹⁸
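A toy version of the exchange idea is sketched below (our illustration, not a production algorithm; the model, grid, and function names are assumptions for the example): starting from a random subset of the candidate set, single points are exchanged whenever the swap increases det(XᵀX), the quantity that the D-optimality criterion seeks to maximize.

```python
import numpy as np

def model_matrix(points):
    """Columns for a two-factor interaction model: 1, X1, X2, X1X2."""
    x1, x2 = points[:, 0], points[:, 1]
    return np.column_stack([np.ones(len(points)), x1, x2, x1 * x2])

def d_optimal(candidates, n_runs, n_iter=500, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(candidates), n_runs, replace=False)
    best = np.linalg.det(model_matrix(candidates[idx]).T
                         @ model_matrix(candidates[idx]))
    for _ in range(n_iter):
        trial = idx.copy()
        trial[rng.integers(n_runs)] = rng.integers(len(candidates))  # exchange
        Xm = model_matrix(candidates[trial])
        det = np.linalg.det(Xm.T @ Xm)
        if det > best:                       # keep only improving exchanges
            idx, best = trial, det
    return candidates[idx]

grid = np.array([(a, b) for a in (-1, 0, 1) for b in (-1, 0, 1)], float)
print(d_optimal(grid, n_runs=4))             # tends toward the four corners
```

Apart from these commonly employed experimental designs, there are some relatively less popular designs, described below.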

l. Rechtschaffner Designs

These designs are of importance in situations where the model involves main effects and first-order interactions.⁶,⁹⁹ Although these designs are saturated, they are neither balanced nor orthogonal except for the five-factor design, where main effects can be independently estimated. Notwithstanding the fact that the use of the designs has seldom been reported for factor influence studies, they hold sufficient promise for the pharmaceutical formulator.⁶,¹⁰⁰


m. Cotter Designs

The Cotter design is generally used for screening purposes and is advantageous when a large number of factors must be screened with limited resources and there is a likelihood of interactions among the factors.⁵⁷

n. Other Designs

Some scientists have also employed the Latin square design¹⁰¹ and other orthogonal arrays¹⁰² for optimizing pharmaceutical formulations. Experimental designs such as distance-based designs, hybrid designs, etc., can also be employed for systematic DoE studies, though these have so far found only limited application.

2. Choosing Experimental Designs

The choice of an experimental design is customarily a compromise between the information required and the number of experimental studies to be conducted.⁹ It depends largely upon the objectives of the study and the number of factors to be investigated. If the primary purpose of the experiment is to screen out or select the few important main effects from the many less important ones, screening designs are used.⁸,¹⁵,¹⁹,³¹ By and large, low-resolution designs suffice for the simpler purpose of screening a large number of experimental parameters. These are usually FDs (full or fractional), PBDs, or Taguchi designs. Screening designs support only linear responses. Thus, if a nonlinear response is detected, or a more accurate picture of the response surface is required, a more complex design type is necessary. Hence, when the investigator is interested in estimating interaction and even quadratic effects, or intends to get an idea of the local shape of the response surface, response surface designs are used. For interaction models, resolution IV or V designs are usually preferred. However, some interaction terms in the model may be confounded with others, and further experimentation might be required to decouple these terms at a later stage.⁹,³¹ Designs such as the BBD or CCD, which support nonlinear responses, are commonly used for RSM optimization applications. When the formulator has several factors that are proportions of a mixture formulation, mixture designs are specifically favored.¹⁵ On the whole, first-order experimental designs must enable estimation of the first-order effects, preferably free from interference by the interactions among factors and other variables.⁷,⁵⁴ These designs also allow testing for "goodness of fit" of the proposed model. Even if they are able to detect the existence of curvature in the response surface, they should normally be employed only in the absence of such curvature.


TABLE 14. Application of Important Experimental Designs Depending Upon the Nature of Factors, Models, and Strategies

[The original table cross-tabulates eleven design types (2ᵏ FD, xᵏ FD, FFD, PBD, CCD, BBD, EQD, SMD, EVD, TGD, and DOD) against the factor type (formulation, process, or both), the number of factors (≤3, 4–6, >6), the factor levels (2 or ≥3), the model proposed (linear, interaction, quadratic, mixture, or custom made), and the study strategy (screening/effect study, factor influence study, or response surface mapping). The individual check-mark entries are not recoverable from this copy.]

BBD: Box–Behnken design; CCD: Central composite design; DOD: D-optimal design; EQD: Equiradial design; EVD: Extreme vertices design; FD: Factorial design; FFD: Fractional factorial design; PBD: Plackett–Burman design; SMD: Simplex mixture design; TGD: Taguchi design

If only a small number of factors are to be studied at extreme levels, then 2ᵏ FDs are acceptable. If there are more factors and levels, then perhaps an FFD or a BBD is better. If the number of product factors and processing parameters under consideration is large (i.e., ≥7), a Taguchi design may be better; in such cases, creating cause–effect (Ishikawa fishbone) diagrams can be quite useful.⁹ Computer-assisted designs such as the D-optimal design are better suited to situations wherein a large number of qualitative factors are incorporated in the design and/or the resultant experimental domain is irregular in shape.⁴,⁵⁴ The compilation in Table 14 acts as a help guide for selecting an experimental design based upon the motive of the study. To facilitate better interpretation of results, it is always worthwhile to run one or more replicate batches (especially at the central points) and test them to determine the reproducibility of the batches, the accuracy and precision of the analytical data, and the reliability of the contour plots.

III.D. Step IV: Formulation of DDS and Their Evaluation

A design matrix is generated according to the selected experimental design. The various drug delivery formulations are prepared according to the generated design matrix in a randomized manner.⁵,⁹,¹⁹,³⁹,⁵⁵ Randomization ensures that "noisy" factors are spread uniformly across all "control" factors. The factors are varied at the selected levels while all other process and formulation variables are kept constant. The raw materials and the experimental conditions should also be kept constant to avoid variability from unwanted sources. Subsequently, the prepared drug delivery formulations are suitably evaluated for the corresponding performance parameters and response variables. The analytical methods used for the purpose should yield results with maximum precision and reproducibility, because the success of any optimization study depends largely upon the accuracy and reliability of the input data.

III.E. Step V: Computer-Aided Modeling and Optimization

1. DoE Data Analysis and Modeling

The planned conduct of experimentation is succeeded by deft interpretation of the data. This is a vital phase that itself involves several steps.⁷,⁴⁶,⁵³,¹⁰³ DoE data analysis starts with an overview and examination of the data for the presence of any outliers or obvious problems. A wide array of plots is drawn to uncover anomalies or to provide insights that go beyond what most quantitative techniques are capable of discovering. Plots such as response histograms, response versus time-order scatter plots, response(s) versus factor-level plots, main effects mean plots, and normal or half-normal plots can be drawn for a better understanding of the system. After this, the polynomial equations are generated based upon the proposed mathematical model. Various statistical tests of significance, such as ANOVA or Student's t test, are applied to test the model and to simplify it further. Residual graphs and other model diagnostic plots are also plotted to confirm the correct transformation of the data. To accomplish the task, it is important that the formulation scientist carrying out the DoE data analysis be aware of the fundamental statistical principles of data transformation, normality, linearity, residual analysis, lack-of-fit tests, ANOVA, p values, etc.⁶,⁷,³¹

a. Model Selection

"All models are wrong. But some are useful." This assertion of Box and Draper⁴⁶ characterizes the situation that a formulation scientist faces while optimizing a system. Accordingly, the success of an optimization study depends substantially upon the judicious selection of the model. In general, a model has to be proposed before the start of the DoE optimization study.⁴⁵ Model selection depends upon the types of variables to be investigated and the type of study to be made—e.g., description of the system, prediction of the optima or feasible regions, or factor screening. The choice also depends on the a priori knowledge of the experimenter about possible interactions and quadratic effects.⁴,²⁸,⁴⁷,⁵⁴ If the model chosen is too simple, higher-order interactions and effects may be missed, because the relevant terms are not part of the model. If the model selected is too complicated, over-fitting of the data may occur, resulting in a large variance in the predictions and low reliability of the predicted optimum. The models mostly employed to describe the response are the first-, second-, and, very occasionally, third-order polynomials. A first-order model is initially postulated; if a simple model is found to be inadequate for describing the phenomenon, higher-order models are pursued. After hypothesizing the model, a series of computations is performed to calculate the coefficients of the polynomials and their statistical significance, enabling the estimation of the effects and interactions.

Calculation of the coefficients of polynomial equations. Regression is the most widely used method for quantitative factors.¹⁰⁴,¹⁰⁵ It cannot be used for qualitative factors, because interpolation between discrete (categorical) factor values is meaningless. In ordinary least-squares (OLS) regression, a linear model, expressed as Eq. (14), is fitted to the experimental data to estimate the values of β such that the sum of squared differences between the predicted and observed responses is minimized.

Y = \beta_0 + \beta_1 X_1   or   Y = \beta_0 + \beta_1 X_1 + \beta_{11} X_1^2      (14)


Multiple linear regression analysis (MLRA) can be performed for more factors (Xᵢ), interactions (XᵢXⱼ), and higher-order terms, as depicted in Eq. (15).

Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_{12} X_1 X_2 + \ldots      (15)
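Eq. (15) can be fitted with any least-squares routine. Below is a minimal sketch of ours for a coded 2² factorial, with assumed example responses.

```python
import numpy as np

X1 = np.array([-1.0,  1.0, -1.0,  1.0])
X2 = np.array([-1.0, -1.0,  1.0,  1.0])
Y  = np.array([12.4, 18.9, 15.1, 28.6])        # hypothetical responses

# Model matrix for Eq. (15): intercept, main effects, and interaction
M = np.column_stack([np.ones_like(X1), X1, X2, X1 * X2])
beta, *_ = np.linalg.lstsq(M, Y, rcond=None)
print(dict(zip(["b0", "b1", "b2", "b12"], beta.round(2))))
```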

In certain situations in which the factor/response relationship is nonlinear, multiple nonlinear regression analysis (MNLRA) may also be performed.¹⁰⁵ Regression analysis can be performed on the coded data or on the original values only after one or several models have been postulated, the choice being based on some expectation of the response surface. In situations in which there is a large number of variables, such as in multivariate studies, the methods of partial least squares (PLS) or principal component analysis (PCA) can also be employed for regression.²⁹,⁷³,¹⁰³ PLS is an extension of MLRA and is used when there are fewer observations than predictor variables.²⁹,⁷³,¹⁰³,¹⁰⁶ It also aids in selecting suitable predictor variables and identifying outliers before classical linear regression is carried out. The other multivariate analytical technique, PCA, aims at reducing data dimensionality while retaining as much of the variation in the data as possible.²⁹,¹⁰⁷,¹⁰⁸ It linearly transforms a large number of intercorrelated variables, often referred to as the original variables, into the same or a smaller number of uncorrelated variables.¹⁰⁹ Each of these uncorrelated variables is called a principal component (PC). The PCs are ordered hierarchically so that their variances are in descending order, with the first several PCs explaining most of the variation among the original variables. Data analyses are then performed on these leading PCs instead of on the original variables.

Estimation of the significance of coefficients and model. Significance testing of the coefficients can be carried out using ANOVA, followed by Student's t test.³³,⁵³,¹¹⁰,¹¹¹ The ANOVA computation can be performed using the Yates algorithm to find the significance of each coefficient; this ANOVA helps in determining the significance of the model as well as of the lack of fit. It is always advisable to retain only the significant coefficients in the final model equation. The values of the Pearsonian coefficient of determination (r²) and of that adjusted for degrees of freedom (r²adj) of the polynomial equation are also compared. The value of r² is the proportion of variance explained by the regression according to the model and is the ratio of the explained sum of squares to the total sum of squares:

r^2 = \frac{SS_{TOTAL} - SS_{RESIDUAL}}{SS_{TOTAL}}      (16)


The closer the value of r² is to unity, the better the fit and, apparently, the better the model.⁶,⁷,¹⁰⁴,¹⁰⁵,¹¹⁰ However, there are limitations to its use in MLRA, especially when comparing models with different numbers of coefficients fitted to the same data set. A saturated model will inevitably give a perfect fit, and a model with almost as many coefficients as data points is likely to yield a high value of r². In such cases, r²adj is preferred, which corrects the r² value for the number of degrees of freedom. The value of r²adj is calculated using the equivalent mean squares (MS) in place of the sums of squares (SS), as described in Eq. (17);⁶,¹⁰⁴ its value is usually less than r².

r^2_{adj} = \frac{MS_{TOTAL} - MS_{RESIDUAL}}{MS_{TOTAL}}      (17)

The predicted residual sum of squares (PRESS) is calculated as the sum of the squared differences between the observed values (Yᵢ) and the predicted values (Ŷᵢ) obtained using the leave-one-out method.⁶,²⁸,⁴¹ Ideally, its value should be zero or close to it.

PRESS = \sum_i (Y_i - \hat{Y}_i)^2      (18)

Eq. (19) computes the cross-validated value of r²—i.e., Q²—as the predictive power of the model.¹⁴ Relative to r², it underestimates the goodness of fit. A fit with Q² > 0.5 is considered fairly good, while Q² > 0.9 is usually taken as excellent.

Q^2 = 1 - \frac{PRESS}{SS_{TOTAL}}      (19)
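Eqs. (18) and (19) can be computed with an explicit leave-one-out loop. The sketch below is ours; it does so for a quadratic model fitted to the bioadhesive strength data F of Table 3.

```python
import numpy as np

X1 = np.array([-1, -1, -1, 0, 0, 0, 1, 1, 1], float)
X2 = np.array([-1,  0,  1, -1, 0, 1, -1, 0, 1], float)
F  = np.array([6.66, 11.80, 14.11, 9.09, 15.55, 23.50, 16.32, 19.43, 28.16])

M = np.column_stack([np.ones(9), X1, X2, X1*X2, X1**2, X2**2])  # quadratic model
press = 0.0
for i in range(9):                       # leave-one-out predictions, Eq. (18)
    keep = np.arange(9) != i
    beta, *_ = np.linalg.lstsq(M[keep], F[keep], rcond=None)
    press += (F[i] - M[i] @ beta) ** 2
q2 = 1.0 - press / ((F - F.mean()) ** 2).sum()                  # Eq. (19)
print(round(press, 2), round(q2, 3))
```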

Finally, all these parameters are assessed to help in choosing the most appropriate model for a particular response. The final polynomial equation is subsequently used to calculate the magnitudes of the effects and interactions. Table 15 is a typical ANOVA table generated during modeling of controlled release buccoadhesive compressed matrices employing HPMC and Carbopol as the factors.¹⁸,¹⁹ A statistically significant value (p < 0.001) of Fisher's ratio (F) and an insignificant "lack of fit F-value" (p > 0.001) unambiguously ratify that the proposed model fits the data well.

Model diagnostic plots. One or more diagnostic plots are usually drawn to investigate the goodness of fit of the proposed model:


TABLE 15. ANOVA Table for a Response Variable*

Source (model terms)   DF   Mean square   F          p value
Model                  7    217.25        1.330E+5
[Only the first row of the table survives in this copy; the remaining entries are truncated in the source.]
