The Birth of Specifications
Reconciling process capability and performance with specifications
by Lynne B. Hare
Unlike Sandro Botticelli’s depiction of the birth of Venus, specifications do not ride in from the sea on a half shell. No—they actually come from ceiling tiles. This revelation may come as a shock to many who think specifications are God-given. But the origin is definitely the ceiling tile.
Why the confusion? Well, sometimes specifications have been around so long that their spanning of generations gives the appearance of divine origin, hence the veneration and hesitancy to question.
Ask around. Find someone who has been in your organization a long time. This might take a while, but it can have its rewards. In moments of candor, he or she might reveal the nativity.
"Ah, yes. I remember back in ’82, they asked me about the specs. My default then was plus or minus 10% around the target. But I had a standby, just in case. I used the ceiling tile on that one. My favorite tile was just to the left, above my office door. High and mighty it was. It revealed ±8%. The engineers added a decimal, and the statisticians calculated the probability of compliance to four places. That was a proud day."
That’s a little far-fetched, but make no mistake about it. To the originator of the specification limits, those bounds serve to protect product and process. They cradle the developer’s offspring, the product of their genius, education and experience. A violation is a stabbing insult, an affront, an abuse, a personal attack.
Denizens of manufacturing facilities usually think otherwise:
"Of course, we are corporate citizens, too, so we seek high levels of quality. But we are differently motivated. We are often pressured to get product out the door, so we’d prefer to have specification limits broad enough to drive a truck through—sideways. We know we ask embarrassing questions like, ‘Where did these specification limits come from?’ and ‘What would happen if we were just a touch beyond them?’ After all, rework is expensive and time-consuming. May we release slightly out-of-spec products anyway? If not, may we assimilate them with conforming product? Will the customers really be able to tell the difference?"
Neither party is wrong. Some moderation is in order. But what can be done?
First, there should be an assessment of process capability.1 And the general rule on which all should agree is that specifications may never be set more snugly than process capability will allow. Have you ever tried to sit in a chair that was too narrow for you? No amount of squirming would prevent certain parts of your anatomy from overlapping the seat. It is the same with specifications and capability.
Second, specifications shouldn’t come from the deities or from ceiling tiles. They should be driven by performance factors and by consumer reaction. If you put too much milk in the pudding mix, it won’t pud. A nut and a bolt won’t conjoin if their mutual dimensions don’t permit. In many situations, the establishment of specifications is costly, requiring careful study and consumer research, but it is almost always value-added. Performance as defined by conformance to specifications is the key.
Third, and perhaps most importantly, there should be congruity of processes and specifications. This means that at the very least, the specification limits must be wider than ±3 capability standard deviations, as noted earlier.
But things are never quite that simple. Limits set at exactly ±3 capability standard deviations leave no wiggle room, and we all know that processes wiggle despite our best efforts to keep them from wandering off target. So there should be some allowance for departures from target. But how much?
If you have the right kind of long-term data, you can use them to estimate structural and assignable cause variation.2
Assignable cause variation is variation due to outside interventions, such as raw material changes, shift changes and changes made without benefit of supporting data.
Structural variation is like assignable cause variation, but it is internal to the process and is often due to differences among parallel process streams. It appears in the data as a result of the way the process is structured—hence the name.
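Given repeated measurements from parallel process streams, capability (within-stream) and structural (between-stream) variation can be separated with a simple one-way variance-components calculation. The following is a minimal sketch using the method of moments; it assumes a balanced design, and the stream names and measurements are hypothetical:

```python
import statistics

def variance_components(streams):
    # One-way random-effects variance components by the method of moments.
    # `streams` maps a stream id to an equal-length list of measurements.
    groups = list(streams.values())
    n = len(groups[0])  # observations per stream (balanced design assumed)
    means = [statistics.fmean(g) for g in groups]
    within = statistics.fmean(statistics.variance(g) for g in groups)
    # E[var(stream means)] = structural + within / n, so subtract and floor at 0.
    structural = max(statistics.variance(means) - within / n, 0.0)
    return within, structural

# Hypothetical data: three parallel filler heads, four fills each
data = {
    "head_A": [10.1, 9.9, 10.2, 10.0],
    "head_B": [10.6, 10.4, 10.5, 10.7],
    "head_C": [9.5, 9.6, 9.4, 9.7],
}
within_var, structural_var = variance_components(data)
```

Here the heads agree closely within themselves but sit at different levels, so the structural component dominates — exactly the pattern that points to adjusting the process structure rather than tightening control.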
Capability, structure and cause
The combination of capability, structural and assignable cause variation is performance variation. It is a measurement of what emerges from the end of the manufacturing line.
If the interval composed of the mean ±3 capability standard deviations is inside the specification limits, you have some assurance that the process can meet specifications.
If the interval composed of the mean ±3 "capability plus structural variation standard deviations" is inside specification limits, you have some assurance that the additional variation due to structure is not harmful. If not, there are adjustment opportunities.
If the interval composed of the mean ±3 "capability plus structural variation plus assignable cause variation" standard deviations is inside the specification limits, you have some assurance that the process is conforming to specifications. If not, there are some process control opportunities.
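These three checks can be sketched directly. Assuming the variance components are independent, standard deviations combine as the root sum of squares; the mean, component standard deviations and specification limits below are purely illustrative:

```python
import math

def interval(mean, sds):
    # Independent components: the combined SD is the root sum of squares.
    combined = math.sqrt(sum(sd ** 2 for sd in sds))
    return (mean - 3 * combined, mean + 3 * combined)

def inside(inner, spec):
    (lo, hi), (lsl, usl) = inner, spec
    return lsl <= lo and hi <= usl

# Hypothetical process
mean = 100.0
sd_cap, sd_struct, sd_assign = 1.0, 0.6, 1.2
spec = (95.0, 105.0)

capable    = inside(interval(mean, [sd_cap]), spec)
structured = inside(interval(mean, [sd_cap, sd_struct]), spec)
performing = inside(interval(mean, [sd_cap, sd_struct, sd_assign]), spec)
```

With these numbers, the capability and capability-plus-structure intervals fit inside the limits but the full performance interval does not — a process control opportunity rather than a capability or structure problem.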
Admittedly, Figure 1 is created without the artistry of Botticelli. Nonetheless, it is intended to show three possible specification situations. On the left of the figure are intervals representing ±3 capability, capability plus structure and performance intervals derived from process data. On the right are hypothetical specification intervals.
In situation No. 1, the process does not have the capability to conform to the specifications. Product manufactured on that line will never meet the desired specifications. Situation No. 2 shows the process can meet specifications, but there is something in the internal process structure that prevents conformance. Structural adjustments to the process are necessary for conformance. Situation No. 3 illustrates a process that is conforming to specifications.
A graphical display like this is useful because it helps R&D and manufacturing staff members figure out what must be done to sort out the specification conundrum.
To be sure, there are other approaches to this kind of comparison of process to specifications. There are tolerance intervals, which are like having confidence intervals on your intervals. Using them permits statements such as, "We can be 95% certain that 99% of the observations will lie between L and U."3
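For data that can be assumed normal, the two-sided tolerance factor k in x̄ ± ks is commonly approximated with Howe's method. This sketch uses SciPy; the sample size, mean and standard deviation are hypothetical:

```python
import math
from scipy import stats

def tolerance_factor(n, coverage=0.99, confidence=0.95):
    # Howe's approximation to the two-sided normal tolerance factor k.
    z = stats.norm.ppf((1 + coverage) / 2)
    chi2 = stats.chi2.ppf(1 - confidence, n - 1)  # lower-tail chi-square quantile
    return z * math.sqrt((n - 1) * (1 + 1 / n) / chi2)

# "95% certain that 99% of observations lie between L and U"
n, xbar, s = 50, 100.0, 1.2
k = tolerance_factor(n)
L, U = xbar - k * s, xbar + k * s
```

Note that k is well above the naive 2.576 one would use if the mean and standard deviation were known exactly; the extra width is the price of estimating them from a sample.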
Many use Cp and Cpk as taught in lean Six Sigma training.4
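The textbook definitions are easy to state in code: Cp compares the specification width to six process standard deviations, while Cpk also penalizes an off-center mean. The numbers below are illustrative, with the mean sitting one standard deviation above target:

```python
def cp_cpk(mean, sd, lsl, usl):
    # Cp: spec width over 6 sigma. Cpk: distance from the mean to the
    # nearer spec limit, over 3 sigma, so centering matters.
    cp = (usl - lsl) / (6 * sd)
    cpk = min(usl - mean, mean - lsl) / (3 * sd)
    return cp, cpk

cp, cpk = cp_cpk(mean=101.0, sd=1.0, lsl=95.0, usl=105.0)
```

The gap between Cp and Cpk is itself informative: it measures how much capability is being given away to poor centering.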
A concern with all methods is the possible loss of information resulting from the reduction of manufacturing data to a single number or only a few numbers. Excellent statistical software packages are available to calculate variance components, tolerance intervals and quality indexes. Most produce images to aid understanding and communication. These images do more than words possibly can to ease the reconciliation of processes and specifications.
References
1. Lynne B. Hare, "What’s Meant by Capability," Quality Progress, August 2014, pp. 50-51.
2. Lynne B. Hare, "Chicken Soup for Processes," Quality Progress, August 2001, pp. 76-79.
3. William Q. Meeker, Gerald J. Hahn and Luis A. Escobar, Statistical Intervals, second edition, John Wiley & Sons, 2017.
4. Lynne B. Hare, "The Ubiquitous Cpk," Quality Progress, January 2007, pp. 72-73.
Lynne B. Hare is a statistical consultant. He holds a doctorate in statistics from Rutgers University in New Brunswick, NJ. He is past chairman of the ASQ Statistics Division and a fellow of ASQ and the American Statistical Association.