
Anthony Chirico

Lean

d2: More Than Just a Control Chart Constant

We owe a debt of gratitude to Tippett and other pioneers who put ‘engineering’ into quality engineering.

Published: Wednesday, November 7, 2018 - 13:03

Perhaps the reader recognizes d2 as slang for “designated driver,” but quality professionals will recognize it as a control chart constant used to estimate the short-term variation of a process. The basic formula shown below is widely used in control charting for estimating the short-term variation using the average range (R̄) of small samples:

σ̂ = R̄ / d2

But what exactly is d2, and why should we care?

L.H.C. Tippett

To find some answers to these questions, we need to consult the 1925 work of L.H.C. Tippett.1 Leonard Henry Caleb Tippett was a student of both Professor K. Pearson and Sir Ronald A. Fisher in England. Tippett pioneered “Extreme Value Theory,” and while advancing the ideas of Pearson’s 1902 paper on Galton’s “Difference Problem,”2 he noted that prior work on the distribution of the range was deficient for large numbers of samples.

Tippett proceeded to use calculus and hand calculations to integrate and determine the first, second, third, and fourth moments of the range for samples drawn from a standard normal distribution. That is, he calculated the mean, variance, skewness, and kurtosis of the range, by hand, for sample sizes of two through 1,000.
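
For the first of those moments, there is a compact way to express what Tippett integrated: For n independent draws from a standard normal distribution with cumulative distribution function Φ, the expected range is the integral over all x of 1 − Φ(x)^n − (1 − Φ(x))^n. As a modern sketch of that calculation (not Tippett’s hand method; it assumes SciPy for the numerics), the following Python reproduces the familiar values:

```python
from scipy.integrate import quad
from scipy.stats import norm

def expected_range(n):
    """d2: the expected range of n standard-normal draws."""
    # E[R_n] = integral of 1 - Phi(x)**n - (1 - Phi(x))**n over the real line
    integrand = lambda x: 1.0 - norm.cdf(x)**n - (1.0 - norm.cdf(x))**n
    value, _ = quad(integrand, -8.0, 8.0)  # normal tails are negligible beyond ±8σ
    return value

for n in (2, 5, 10):
    print(n, round(expected_range(n), 5))  # ≈ 1.12838, 2.32593, 3.07751
```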

After completing his rigorous hand calculations, Tippett wanted to verify his results by experimentation. He manufactured 1,000 “very small cards,” which were marked proportionally so they aligned with the standard normal distribution. These small cards were placed in a bag and sampled one at a time. After each card was drawn, he replaced it and mixed all the cards in the bag before withdrawing the next. He did this 5,000 times. When he finally finished the experiment, he concluded that there was too much error in his results, most likely because he had not mixed the cards well enough between successive draws.

Tippett proceeded to repeat the entire experiment, this time manufacturing 10,000 small cards and taking more care to mix thoroughly between successive samples. There were other improvements to his experimental methods, too detailed to list here. This time his experiment was a success: Good agreement was attained between his calculated expected values of the range and the results of his experiment.
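
Tippett’s card experiment is, in modern terms, a Monte Carlo simulation, and it is easy to replay. The sketch below is a hypothetical reconstruction (with NumPy’s random generator standing in for the bag of cards): it draws standard-normal values, groups them into samples of n = 10, and averages the ranges.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Stand-in for the bag of cards: 5,000 draws grouped into
# 500 samples of size n = 10; each range is max minus min.
n, trials = 10, 500
samples = rng.standard_normal((trials, n))
ranges = samples.max(axis=1) - samples.min(axis=1)

print(ranges.mean())  # should land near Tippett's calculated 3.07751
```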

Figure 1 shows a portion of Tippett’s experiment when the samples are grouped into size n = 10. We can clearly observe the shape of the distribution of ranges, and the mean range is clearly illustrated. His calculation of the mean range for repeated samples of size n = 10 was 3.07751. He tabulated the results of his calculations for the average range for sample sizes of n = 2 through n = 1,000. These exact calculations were later adopted and became what we now know as the d2 factors.

We can clearly see in Tippett’s illustration that the average range (d2) is the “expected” value of the range for n = 10 from a standard normal distribution having σ as the unit of measure. In other words, when repeatedly selecting samples of size n = 10 from a normal universe, we would expect the average difference between the largest and smallest observations to be 3.07751σ.

Application to control charts

The control chart formula for estimating the standard deviation is derived from Tippett’s work. This is why, when sampling, we can obtain a value for the range that is in physical units—say, inches—and then divide by the expected value of the range and derive the value of the standard deviation in physical units. We actually set up an equivalency formula of physical units to units in standard deviations, as shown below.

observed average range = expected average range

R̄ = d2σ

To be explicit, suppose we calculate the average range R̄ from many samples of size n = 5. We then set this equal to the expected value, 2.326σ:

R̄ = 2.326σ

By some very simple algebra, we divide both sides of the equation by 2.326 to obtain:

σ̂ = R̄ / 2.326

The value 2.326 is now unit-less and is referred to simply as a “constant” or a “factor,” and it is now clear that σ̂ = R̄ / d2.
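
To make the arithmetic concrete, here is a minimal sketch (hypothetical numbers, assuming NumPy) that draws subgroups of size n = 5 from a process with a known σ and recovers it from the average range:

```python
import numpy as np

rng = np.random.default_rng(seed=7)
true_sigma = 0.004  # process standard deviation, in inches

# 30 subgroups of size n = 5, as on an X-bar and R chart
subgroups = rng.normal(loc=1.000, scale=true_sigma, size=(30, 5))
r_bar = (subgroups.max(axis=1) - subgroups.min(axis=1)).mean()  # inches

d2 = 2.326                    # expected range, in units of sigma, for n = 5
sigma_hat = r_bar / d2        # estimate comes back in physical units
print(sigma_hat, true_sigma)  # the two should be close
```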

Figure 1: From L.H.C. Tippett, “On the Extreme Individuals and the Range of Samples Taken From a Normal Population,” Biometrika, Vol. 17, 1925

Why quality professionals care

In addition to control charting, why should we care about the lessons from Tippett? The patience and tenacity of Tippett, who spent his lifetime studying “extreme values,” were a gift given to us almost 100 years ago. As one of his legacies, he provided tabulations of the expected value of the range (d2) for samples of size n = 2 through n = 1,000. To this day, there is no other source of d2 factors that is this comprehensive. In graphic form, some d2 factors are illustrated in figure 2 for samples of size n = 2 through n = 150.

Figure 2: Expected value of the range (d2) for sample size n = 2 through n = 150

From a practical perspective, we can see that even with a sample of size n = 150, we can only expect to observe a difference between the largest and smallest values of 5.3σ. In fact, according to Tippett’s tabulated values, a sample of size n = 444 is required to observe extreme values separated by 6σ (d2 = 6.00079). Clearly, very large sample sizes are required to consistently observe values in the tails of a distribution.

More important, we can observe diminishing returns in the inspection of successive units. An expected range of roughly 3σ can be attained with a sample size of n = 10, and some marginal improvement will occur with a sample of n = 20. Beyond n = 20, improvement in our ability to observe extreme values becomes much less cost-effective. In fact, most tables of d2 are truncated at n = 25.
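
This flattening is easy to see by tabulating the expected range as n grows. Reusing the expected_range() integration sketch from earlier (again a modern reconstruction, assuming SciPy):

```python
from scipy.integrate import quad
from scipy.stats import norm

def expected_range(n):
    """d2: the expected range of n standard-normal draws."""
    integrand = lambda x: 1.0 - norm.cdf(x)**n - (1.0 - norm.cdf(x))**n
    return quad(integrand, -8.0, 8.0)[0]

for n in (2, 5, 10, 20, 25, 150, 444):
    print(f"n = {n:4d}   d2 = {expected_range(n):.5f}")
# The gain per added unit shrinks quickly: n = 10 already gives about 3σ,
# n = 20 about 3.7σ, and it takes n = 444 to reach 6σ.
```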

Quality professionals should carefully examine the relationship shown in figure 2. We can see that for a marginal process, attribute acceptance sampling for conformance to requirements is not a winning proposition. Only by good luck will observations be made in the tails of the distribution. Usually it is prohibitive to take samples this large.

The quality professional must rely on other measures that are both effective and economic to ensure conformance to requirements. Incorporation of imaginary limit techniques is one available option. The d2 factor can be used to select the appropriate sample size, which allows observations to be made that exceed the imaginary limits.

Another quick but coarse check is to leverage this relationship by inspecting 10 units, recording the high and low values, and calculating the range. Since we know the expected value of the range is 3.07751σ (roughly 3σ), we can double the observed range to get a “feel” for the extreme values at 6σ. Of course, variables methods are more powerful than this quick check, but as John Tukey said, the practical power of a procedure is related to the probability that it will be used: “the ability of the statistician to carry the procedure everywhere, stored in a very small part of his memory.”3 In other words, sometimes the best technique is the one you happen to have with you. It is easy to remember n = 10 and to multiply the observed range by 2.
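
As a minimal sketch of this back-of-the-envelope check (hypothetical measurements, with NumPy used only for convenience):

```python
import numpy as np

# Ten measured units, in inches (hypothetical values)
values = np.array([1.002, 0.998, 1.001, 0.999, 1.003,
                   1.000, 0.997, 1.002, 1.001, 0.999])

observed_range = values.max() - values.min()  # high minus low of n = 10

# E[range] for n = 10 is 3.07751σ, roughly 3σ, so doubling the observed
# range gives a quick "feel" for the 6σ spread of the process.
six_sigma_feel = 2 * observed_range
print(observed_range, six_sigma_feel)
```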

Conclusion

Estimating the standard deviation by use of the expected value of the range does not happen by magic, and the rote use of tables such as d2 reduces the understanding and depth of knowledge of the quality professional. We owe a debt of gratitude to L.H.C. Tippett and other pioneers who put the word “engineering” into quality engineering.

These pioneers were talented visionaries who overcame obstacles and setbacks to prove what they knew intuitively to be true. We in turn must retain the value-added work in our profession and not let rote application of tables and procedures determine our future. Perhaps we will not dedicate our lives to the next mathematical breakthrough, but we can apply some imagination and creativity to the principles available to us, and not be too worried about theoretical perfection. As Dr. Edward G. Schilling once said to me, “If it works, it works.”4

Sources cited
1. Tippett, L.H.C. “On the Extreme Individuals and the Range of Samples Taken From a Normal Population,” Biometrika, Vol. 17, Issue 3–4, 1925, pp. 364–387.
2. Pearson, K. “Note on Francis Galton’s ‘Difference Problem,’” Biometrika, Vol. 1, 1902, pp. 390–399.
3. Tukey, John W. “A Quick, Compact, Two-Sample Test to Duckworth’s Specifications,” Technometrics, Vol. 1, No. 1, 1959, pp. 31–48.
4. Schilling, Edward G., professor emeritus, Rochester Institute of Technology, in private conversation with the author about imaginary limit enhancements, circa 1991.


About The Author


Anthony Chirico

Anthony Chirico is a senior executive within the aerospace sector and has more than 30 years’ experience leading international quality assurance and supply chain organizations. Chirico holds a master’s degree in applied and mathematical statistics from Rochester Institute of Technology.