Raise Your Batting Average

Remember the importance of sequence in experimentation

by Ronald D. Snee

Design of experiments (DoE) has been an effective tool for experimenters, statisticians and quality professionals for decades. DoE has evolved since the seminal work of Sir Ronald A. Fisher in the 1920s, and since George E.P. Box and his colleagues enhanced and popularized the approach in the process industries in the 1950s and 1960s. The utility of the method has even spread to the service industries, backed by a growing body of literature.1, 2

A key aspect of experimentation is that it is sequential. Box emphasized that experimentation and learning form an iterative process, as shown in Figure 1. Problems are rarely solved, and significant advances in knowledge are rarely made, after a single experiment. Learning is a process, not an event. With few exceptions, a series of experiments is the norm.

Figure 1

When you look at the plethora of DoE books on the market, however, you see little discussion of the sequential (iterative) nature of the endeavor. This is due, in large part, to the statistics profession’s focus on individual statistical tools rather than on how the tools are sequenced and linked to solve problems.

Some approaches

When sequential experimentation is discussed, it is addressed in a number of ways, all of which are effective under the right circumstances. A classic treatment of the subject is optimization of the product or process design via "hill climbing," using the method of steepest ascent. A response surface method3 or a simplex optimization4 is used to guide the sequence of experiments.
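As a sketch of how a steepest-ascent step works (the coefficients below are invented for illustration, not taken from any study discussed here): after fitting a first-order model in coded units, the path of steepest ascent moves away from the design center in proportion to the fitted coefficients.

```python
# Sketch of the method of steepest ascent. The fitted effects are
# hypothetical; in practice they come from a first-order design.
import math

def steepest_ascent_path(coeffs, step=1.0, n_steps=4):
    """Return points along the steepest-ascent direction in coded units.

    coeffs  -- fitted linear coefficients b1..bk (b0 does not affect direction)
    step    -- coded-unit distance between successive points
    n_steps -- number of points to generate beyond the design center
    """
    norm = math.sqrt(sum(b * b for b in coeffs))
    unit = [b / norm for b in coeffs]          # unit vector up the slope
    return [[round(step * k * u, 4) for u in unit] for k in range(1, n_steps + 1)]

# Hypothetical fitted effects for two factors (say, temperature and time):
path = steepest_ascent_path([3.0, 4.0], step=1.0, n_steps=3)
```

Each point on the path is a candidate run; experiments proceed along the path until the response stops improving, and a new design is then centered there.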

Another approach is to run a fractional-factorial design, perhaps followed by additional experiments to sort out any interactions identified by the fractional-factorial design.
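A fractional-factorial design of this kind can be sketched in a few lines. The construction below (a generic half-fraction in coded units, with hypothetical factors A through D) builds a 2^(4-1) design from the generator D = ABC, which gives the defining relation I = ABCD.

```python
# Illustrative sketch: the 8-run half-fraction 2^(4-1) design in coded
# -1/+1 units, using the generator D = ABC (defining relation I = ABCD).
from itertools import product

def half_fraction_2_4():
    """Return the 8 runs of a 2^(4-1) design as (A, B, C, D) tuples."""
    runs = []
    for a, b, c in product((-1, 1), repeat=3):  # full 2^3 in A, B, C
        d = a * b * c                           # generator D = ABC
        runs.append((a, b, c, d))
    return runs

design = half_fraction_2_4()
```

Because two-factor interactions are aliased in pairs in this design, follow-up runs (for example, the other half-fraction) are what sort out any interactions it flags.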

You can also use a design that permits the estimation of linear effects, or linear and interaction effects, and includes center points to detect response-surface curvature if it exists. If curvature is detected, additional experiments are run using designs that involve three, four or five levels to estimate quadratic effects, enabling the identification of optimal conditions.
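The center-point curvature check can be sketched numerically (the responses below are invented): compare the mean response at the factorial points with the mean at the center points. A large gap suggests quadratic curvature and the need for a follow-up design with more levels.

```python
# Minimal sketch of the center-point curvature diagnostic (data invented).
def curvature_contrast(factorial_ys, center_ys):
    """Estimate curvature as (mean of factorial runs) - (mean of center runs).

    If the surface were planar over the region, the two means would be
    close; a large contrast points to quadratic curvature.
    """
    ybar_f = sum(factorial_ys) / len(factorial_ys)
    ybar_c = sum(center_ys) / len(center_ys)
    return ybar_f - ybar_c

# Invented data: four factorial corners and three replicated center points.
curv = curvature_contrast([60.0, 64.0, 66.0, 70.0], [72.0, 71.0, 73.0])
```

In practice the contrast is compared with its standard error (estimated from the replicated center points) before declaring curvature, but the contrast itself is the heart of the check.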

We also often find situations in which, as the experimenter moves from one experiment to the next, the factor ranges may change (expand, narrow or shift), the center of the experimental region may be changed or variables may be added to or deleted from the study.

Each of these approaches is useful, but they can be made even more effective when included in an overall strategy of experimentation.

Ignoring Sequence—An Example

A lab director used the "critical test" strategy, which assumes subject matter expertise and a few tests will identify a better product or a product with less impurity. The lab director came to work each day and instructed technicians on which tests to run. When a few tests didn’t produce a better product, he identified what he said was another "few tests that will work." The result was a series of re-dos, not a planned series of experiments.

After 54 tests, a better product design was not found and the importance of the factors was not identified. The lab director failed to recognize that a critical test approach is a low-yield strategy. A different strategy was needed.

A better approach

The information gained in the previous tests was useful in defining the new strategy. It was decided to run an optimization experiment because there were only three factors involved, and the ranges were well defined. A 15-run, face-centered-cube design was used, which involved some replicated tests and controls, resulting in 23 total tests. It was learned that increasing the active ingredient of the formulation had no effect on impurity, so the minimum level tested was chosen for the product, which reduced product cost. Using response-surface optimization, a combination of the other two factors that minimized the impurity was found.

The predicted impurity of this formulation was about 50% lower than the measured impurity of the current product. An added bonus was that the developed model accurately predicted the impurity of the old product. This suggested that the model could be used in the future to identify formulations for other applications, which could accommodate higher impurity levels. —R.D.S.
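The face-centered-cube design mentioned in the sidebar has a standard construction, sketched below in generic coded units (these are not the actual factor settings from the study): for three factors it combines the 8 cube corners, 6 face-center (axial) points at ±1, and a center point, for 15 distinct runs before replication.

```python
# Sketch of the face-centered central composite ("face-centered cube")
# design for k factors in coded units. For k = 3 this gives the 15
# distinct points: 2^3 corners + 2*3 face centers + 1 center point.
from itertools import product

def face_centered_cube(k=3):
    pts = [list(p) for p in product((-1, 1), repeat=k)]  # 2^k cube corners
    for i in range(k):                                   # 2k face centers
        for sign in (-1, 1):
            axial = [0] * k
            axial[i] = sign
            pts.append(axial)
    pts.append([0] * k)                                  # center point
    return pts

design = face_centered_cube(3)   # 8 + 6 + 1 = 15 runs
```

Because all points sit at three levels (-1, 0, +1), the design supports a full quadratic model while keeping every run inside the original cube, which is convenient when factor ranges are hard limits.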

A strategy that works

Over the years, it has been recognized that experimentation is more effective when it is approached with a strategy in mind. For any strategy to be effective, it must recognize that the design (or sequence of designs) should match the experimental environment. Experimentation is sequential, and the DoE tools must be embedded in the strategy, linked and sequenced to guide the experimenter.

The sidebar, "Ignoring Sequence—An Example," describes an experiment in which the sequential nature of experimentation was not considered, resulting in an ineffective and inefficient experimental program. Poor planning is frequently the culprit. For instance, you may run out of time and money before you get to a useful answer. An important variable may be missed because a well-thought-out experimental plan was not developed—often the result of a desire to show results too quickly.

This experience leads to the following principles that can enhance experimental strategies:

  • Plan ahead. Decide on the series of experiments that may be needed to satisfy the objective of the experimental program.
  • Consider all factors. In the beginning, include (or at least consider) all factors (Xs) that may possibly be important. Recall the Pareto effect, which says the majority of the variation will be caused by a small subset of the factors. As you move through the experimentation, the important factors will be discovered and tested further in later experiments.
  • Don’t spend all your resources on a single experiment. As mentioned earlier, an issue is rarely resolved in a single experiment.

A strategy that uses these principles was developed at DuPont in the 1960s and offered in public workshops in the 1970s. This strategy identifies three experimental environments: screening, characterization and optimization (SCO). The objective of each of the three phases and the designs used are summarized in Table 1.

Table 1

Screening: This phase explores the effects of a large number of variables, with the objective of identifying a smaller number of variables to study further in characterization or optimization experiments. Additional screening experiments involving additional factors may be needed when the results of the initial screening experiments are not promising. On several occasions, I’ve seen the screening experiment solve the problem.

When very little is known about the system being studied, range-finding experiments are sometimes used, in which the candidate factors are varied one factor at a time to get an idea of which factor levels are appropriate to consider. In this limited role, varying one factor at a time can be useful.

Characterization: In this phase, you experiment to better understand the system by estimating interactions and linear (main) effects.

Optimization: In this phase, using response surface contour plots and perhaps mathematical optimization, you develop a predictive model for the system that can be used to find useful operating conditions.
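The core computation of the optimization phase can be sketched as follows (the data are synthetic, generated from a surface with a known optimum): fit a full quadratic model by least squares, then solve for the stationary point of the fitted surface.

```python
# Hedged sketch of response-surface optimization in two coded factors.
# The responses are synthetic, generated from a surface with a known
# minimum, so the recovered stationary point can be checked.
import numpy as np

def fit_quadratic(X, y):
    """Least-squares fit of y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    return np.linalg.lstsq(A, y, rcond=None)[0]

def stationary_point(b):
    """Set the gradient to zero: [[2*b11, b12], [b12, 2*b22]] @ xs = -[b1, b2]."""
    B = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
    return np.linalg.solve(B, -b[1:3])

# 3x3 grid of coded runs; true surface has its minimum at (0.5, -0.25).
X = np.array([[x1, x2] for x1 in (-1, 0, 1) for x2 in (-1, 0, 1)], float)
y = 10 + (X[:, 0] - 0.5) ** 2 + 2 * (X[:, 1] + 0.25) ** 2
xs = stationary_point(fit_quadratic(X, y))   # recovers (0.5, -0.25)
```

In real work the stationary point is checked against the experimental region (it may be a saddle or lie outside the ranges tested), and contour plots of the fitted surface guide the choice of operating conditions.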

Keep in mind Dave Bacon’s observation—particularly when working with an existing process—that there may be only enough time, money and process availability to run a single experiment.5 The strategy still applies in this situation: plan ahead and consider all factors so the one SCO-phase experiment you can run is designed to solve the problem. I have seen such a strategy work on a number of occasions.

The SCO strategy in fact embodies several strategies, which are subsets of the overall SCO strategy:

  • Screening-characterization-optimization.
  • Screening-optimization.
  • Characterization-optimization.
  • Screening-characterization.
  • Screening.
  • Characterization.
  • Optimization.

The end result of each of these sequences is a completed project. There is no guarantee of success in any instance; the SCO strategy simply raises your batting average in hitting on the right answers.

The strategy used depends on the experimental environment, which includes the objectives of the experimental program. Criteria that can be used to characterize the experimental environment are outlined in Table 2. These characteristics involve program objectives, the nature of the factors (Xs) and responses (Ys), resources available, quality of the information to be developed and the theory available to guide the experiment design and analysis. A careful diagnosis of the experimental environment along these lines can have a major effect on the success of the experimental program.

Table 2

Over the years, we have learned that experimentation can be used to improve all types of processes in manufacturing and service. As with any endeavor, it is important to have a strategy to guide your work. Recognizing that experimentation is sequential—sometimes involving several phases—the SCO strategy has proven to be a high-yield strategy to guide experimentation. The SCO strategy has stood the test of time, and it’s definitely worth your consideration. 


  1. Roger W. Hoerl and Ronald D. Snee, Statistical Thinking: Improving Business Performance, Duxbury Press, 2002.
  2. Johannes Ledolter and Arthur J. Swersey, Testing 1-2-3—Experimental Design With Marketing and Service Operations, Stanford University Press, 2007.
  3. George E.P. Box, J. Stuart Hunter and William G. Hunter, Statistics for Experimenters: Design, Innovation and Discovery, second edition, John Wiley and Sons, 2005.
  4. C.D. Hendrix, "Through the Response Surface With Test Tube and Pipe Wrench," Chemtech, August 1980, pp. 488-497.
  5. David W. Bacon, "Making the Most of a ‘One-Shot’ Experiment," Industrial and Engineering Chemistry, 1970, Vol. 62, pp. 27-34.



©Ronald D. Snee, 2009.

Ronald D. Snee is founder and president of Snee Associates LLC in Newark, DE. He earned a doctorate in applied and mathematical statistics from Rutgers University in New Brunswick, NJ. Snee has received the ASQ Shewhart and Grant medals, and is an ASQ fellow and an academician in the International Academy for Quality.
