
Steven Wachs

Metrology

Where Do the Typical Control Chart Signals Come From?

Back to the basics

Published: Monday, January 4, 2010 - 05:00

The purpose of using control charts is to regularly monitor a process so that significant process changes may be detected. These process changes may be a shift in the process average (X-bar) or a change in the amount of variation in the process. The variation observed when the process is operating normally is called “common cause” variation. When a process change occurs, “special cause” variation is present.

With the use of control charts, snapshots of the process average and variation are captured throughout a time frame. By first establishing the variation that we expect from the process (via control limits) when it is stable (in control), we are able to detect subsequent process changes. When specific signals are observed on the control charts, we conclude that the process is unstable (a change occurred, the process is out of control) because the probability of observing those signals if the process had not changed is very small.
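To make this concrete, X-bar chart limits are commonly computed from subgroup data gathered while the process is believed stable. The sketch below uses made-up subgroup values; the constant A2 = 0.577 is the standard tabulated X-bar chart factor for subgroups of size 5.

```python
import statistics

# Hypothetical subgroup measurements (5 readings per subgroup),
# collected while the process is believed to be stable.
subgroups = [
    [10.1, 9.8, 10.0, 10.2, 9.9],
    [9.9, 10.0, 10.3, 9.7, 10.1],
    [10.2, 10.1, 9.8, 10.0, 9.9],
    [9.8, 10.2, 10.0, 10.1, 9.7],
]

xbars = [statistics.mean(s) for s in subgroups]   # subgroup averages
ranges = [max(s) - min(s) for s in subgroups]     # subgroup ranges

xbar_bar = statistics.mean(xbars)  # centerline (grand average)
r_bar = statistics.mean(ranges)    # average range

A2 = 0.577  # standard X-bar chart constant for subgroup size n = 5
ucl = xbar_bar + A2 * r_bar        # upper control limit
lcl = xbar_bar - A2 * r_bar        # lower control limit
print(f"centerline={xbar_bar:.3f}, LCL={lcl:.3f}, UCL={ucl:.3f}")
```

Subsequent subgroup averages are then plotted against these limits to detect process changes.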

Hypothesis testing

In a statistical hypothesis test, we presume some statement, which is called the “null hypothesis” (H0). We also establish the “alternative hypothesis” (H1), which is what we are actually attempting to conclude (if the data support it). Every time a new value is plotted on a control chart, a hypothesis is evaluated. The initial assumption is that the process is stable (in control). If, after plotting a point, we have enough evidence to reject this null hypothesis (we see a signal), we conclude that the alternative hypothesis is true (the process is out of control).
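In code, the per-point decision amounts to a simple comparison against previously established limits (a minimal sketch; the limit values here are hypothetical):

```python
def out_of_control(xbar, lcl, ucl):
    """Reject H0 (process stable) if the plotted subgroup
    average falls outside the control limits."""
    return xbar < lcl or xbar > ucl

# Hypothetical limits established while the process was stable
lcl, ucl = 9.85, 10.15

print(out_of_control(10.02, lcl, ucl))  # False: inside the limits
print(out_of_control(10.31, lcl, ucl))  # True: beyond the UCL, a signal
```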

Ideally, we would correctly reject H0 every time the process is actually out of control. If the process is actually out of control and we do not detect it, we have made a Type II error. This can be a severe error, since the process has changed but we haven’t noticed it and don’t react. An appropriate sample size may be selected to minimize the occurrence of a Type II error (see determining sample sizes for X-bar charts).

We would also ideally not reject H0 if, in fact, the process has not changed. If we observe a signal even though the process has not changed, we have made a Type I error (α). This error leads to inefficiency, because we will react to a signal but not find any actual cause, the process having not actually changed. By convention, the probability of a Type I error is specified as 0.0027 (0.27%). This results in the control limits trapping 99.73 percent of the values of the statistic being plotted on the control chart.

Note: 99.73 percent equates to ±3 standard deviations from the process average, if the data being plotted is normally distributed.
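A quick check of this figure, using only the Python standard library:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# Probability of a normally distributed statistic falling
# outside +/- 3 standard deviations of its average
p_outside = 2 * (1 - phi(3))
print(round(p_outside, 4))  # 0.0027
```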

Basis of chart signals

Now it should be clear that hypothesis testing is performed to determine whether sufficient evidence exists to conclude that the process is unstable. The common rules for identifying process instability are based on the probability of observing such signals assuming the process is actually stable.

For example, suppose we observe a single point that falls outside the upper control limit on an X-bar chart, as shown in figure 1.

Figure 1

The probability that we would observe a sample average that is more than 3 standard deviations away from the process average, assuming the process is stable, is only 0.0027 (since the control limits trap 99.73 percent of the sample averages). Because this probability is so small, we conclude that the alternate hypothesis is true and we react as though the process is unstable. Of course, we might be wrong, in which case a Type I error has occurred.

Another common chart signal is a run of seven (or more) consecutive points above or below the centerline, as illustrated in figure 2.

Figure 2

So, what is the probability of actually observing seven points in a row above (or below) the centerline, assuming that the process is actually stable?

For any random sample, the probability of getting a sample average above the process average is simply ½. There is also a 50-percent chance of seeing a sample average below the process average.

To obtain seven points in a row above the centerline, we need to find the probability of the following: one point being above the centerline, and the next point being above the centerline, and the next point being above the centerline, and so on until seven is reached. Essentially, it’s like flipping a coin and getting seven consecutive heads. Because these events are independent (if the process is stable), the joint probability of getting seven consecutive heads is simply ½ × ½ × ½ × ½ × ½ × ½ × ½ = 1/128 = 0.0078.
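A quick check of that arithmetic:

```python
from fractions import Fraction

# Probability that 7 independent points all fall above (or all below)
# the centerline, each with probability 1/2
p_run = Fraction(1, 2) ** 7
print(p_run, float(p_run))  # 1/128 0.0078125
```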

Thus, the probability of seeing this pattern is very small (less than a 1% chance) if the process is stable. Therefore, we reject the null hypothesis and conclude that the process is unstable. (In this case, it appears that the process average has shifted.)

Many other rules are used by practitioners to detect trends or process shifts on X-bar charts. They include:

  • 7 points in a row trending upward or downward
  • 14 points alternating up and down
  • 2 out of 3 consecutive points more than 2 standard deviations away from the centerline on the same side of the chart
  • 4 out of 5 consecutive points more than 1 standard deviation away from the centerline on the same side of the chart
  • 15 consecutive points within ±1 standard deviation of the centerline


The probabilities of observing many of these patterns, assuming a stable process, are not extremely difficult to compute, but suffice it to say that the probabilities are low. Because it’s unlikely that the pattern comes from a stable process, we conclude that the process is unstable and react to it.
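As an illustration, the probability behind the “2 out of 3 beyond 2 standard deviations” rule can be worked out with the binomial formula (a sketch; it considers a single window of three points and assumes normally distributed, independent values):

```python
from math import erf, sqrt, comb

def upper_tail(z):
    """P(Z > z) for a standard normal Z."""
    return 0.5 * (1 - erf(z / sqrt(2)))

# Chance a single point is beyond 2 sigma on one given side (~0.0228)
p = upper_tail(2)

# At least 2 of 3 consecutive points beyond 2 sigma on one given side
one_side = comb(3, 2) * p**2 * (1 - p) + p**3

# Either side can trigger the rule (the two events cannot
# both occur in the same 3-point window)
both_sides = 2 * one_side
print(round(both_sides, 4))  # about 0.003 -- rare under a stable process
```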

Note that some people use slightly different rules. For example, some wait for a run of eight points above the centerline (rather than seven). This version of the rule will result in fewer Type I errors but a greater number of Type II errors. All rules should balance the errors that can be made in interpreting control charts.
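The arithmetic behind that trade-off is a one-liner: requiring one more point halves the false-alarm probability of the run rule.

```python
# False-alarm probability of a run entirely above (or entirely below)
# the centerline, for runs of 7 versus 8 points
p7 = 0.5 ** 7  # 1/128, about 0.0078
p8 = 0.5 ** 8  # 1/256, about 0.0039
print(p7, p8)
```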

 

This article is courtesy of DataNet Quality Systems.


About The Author


Steven Wachs

Steven Wachs has extensive experience in the development of statistical models, reliability analysis, designed experimentation, and statistical process control, including work as a statistician and Six Sigma Black Belt at Ford Motor Co. As principal statistician at Integral Concepts, Wachs works with executives, process engineers, and quality professionals on applying statistical methods to improve manufacturing processes and product quality. Wachs has a bachelor’s degree in mechanical engineering from the University of Michigan, an MBA from the University of Pittsburgh, and a master’s degree in applied statistics from the University of Michigan.

Comments

Wachs' Lack of Understanding

It is interesting to read the comments here. Seven people appreciate that the author has no understanding of control charts; three people have been sucked in by the rubbish propagated under Six Sigma and are in agreement. Perhaps there is some hope for quality after all!

Hopefully one day Six Sigma will be buried six feet under and industry will return to the teachings of Deming and Shewhart.

Comment on probability of getting any of the chart signals

You suggest the probability of getting any of the chart signals can be easily calculated. The Minitab defaults, such as one sample 3 sigma from the mean or 14 points alternating up and down, all have approximately a 3-in-a-thousand probability of happening at random.

Not just for monitoring . . .

The first paragraph of the article raised red flags for me. Control charts, if used only for "monitoring" a process for shifts and special causes, lose the real power of a control chart: the reduction of common cause variation, which is where continual improvement lives!

Charles Hannabarger

Lack of Understanding

Steven Wachs demonstrates his complete lack of understanding of control charts in this article. If he wants to "go back to basics" he should go back to Shewhart.

ABD, YOU ARE CORRECT, SIR!!!!

ABD,

YOU ARE CORRECT, SIR!!!!

Watch the assumptions

Mr. Wachs' article, using hypothesis-testing theory, assumes an underlying normal distribution for the process. Shewhart never assumed that when he developed these charts. There is also a common misconception that the normal distribution applies because of the Central Limit Theorem. However, that theorem requires a large sample size (not the 2 to 10 commonly used) and is also not applicable to all underlying distributions (see Kendall & Stuart, Vol. I).

Shewhart arrived at the three sigma limits by experimentation. He was looking for an economic way to limit two mistakes on the basis of a small sample: saying the process was good when it was in fact bad and saying that the process was bad when in fact it was good.

The nonparametric measures, such as seven points in a row, are based on the binomial probability of p = 1/2. Any number of combinations can be worked out that lead to the suspicion of a process shift occurring if the binomial probability is less than 1 in 100.

Dr. W. J. Latzko

SPC rules

Rip Stauffer is correct when he reminds us that control charts are not hypothesis tests; they use the statistics as a heuristic for when to react or not to react. And this is why I don't agree with Mark Paradies' comment regarding the requirement of independence. We know that many processes are not truly independent, but we still use the heuristic and react as if they are, in order to minimize consumer's risk (missing something we should be reacting to) at the expense of producer's risk (reacting when we should not). Unless we have a REALLY good understanding of the dependence of the process (you have done auto- and partial-autocorrelation analysis and found strong, persistent signals), I would advise a client to stick with the usual rules. And if we do have such an understanding, I would be working on breaking that dependence in order to reduce variability!

Personally, I continue to use trends, lack of variation, excessive variation, and alternating values in determining process control. While trends will eventually be caught by one of the other rules, using them will give you a bit more lead time (at the expense of Type I error, of course). Lack of variation could be bad subgroup selection, but it can also catch a real process change (for example, Bob's procedure really is less variable than Joe's, or a different raw material vendor results in less variability, or you are running a different product class that should be on a different control chart). The best way to catch bad subgroup selection for X-bar-type charts is a random effects ANOVA; it is far more sensitive to excessive variation between subgroups as compared to within, and with a ridiculously low F-value, it can give you a clue about within-subgroup variation being higher than between as well. (See my series on Why Doesn't SPC Work? for examples of that.) The others occur in real processes, and if you have a high cost of missing signals, they will also give you a little bit of an edge in reacting.

One important thing to note is that, while the criterion for each out-of-control signal is a probability of around 0.0027, the JOINT probability of all the rules you choose is much higher than that, so unless you use only one of the rules, you will be making alpha errors at a far higher rate than 0.27%. That is the price we pay to buy down Type II error.
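That inflation can be sketched under the (admittedly unrealistic) assumption that the rules trigger independently; the per-rule alpha values below are illustrative, not exact:

```python
# Approximate per-rule false-alarm probabilities (illustrative values)
alphas = [0.0027, 0.0078, 0.0031, 0.0027]

# Chance that at least one rule fires on a stable process,
# assuming (unrealistically) that the rules trigger independently
combined = 1.0
for a in alphas:
    combined *= (1 - a)
combined = 1 - combined
print(round(combined, 4))  # noticeably larger than any single alpha
```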

Control Chart Rules

A few notes:
Wheeler and Chambers did quite a bit of research on this matter, and Wheeler carried it on and amplified it after Chambers died. They found that a good trade-off between too many false signals and too many missed signals came from the use of Western Electric Zone tests:
1. One point outside the limits
2. 2 out of 3 points on the same side of, and more than 2 sigma away from, the centerline.
3. 4 out of 5 points on the same side of, and more than 1 sigma away from, the centerline.
4. A run of 8 consecutive points above or below the centerline.
This combination of tests provides excellent sensitivity with a reasonably low false signal rate.
The Western Electric Zone test for runs used 8 consecutive points above or below the centerline. Minitab and most other software packages have added one more, for 9.
Most statisticians no longer assess within-limits trends; if a trend is significant, the other rules will generally catch it. A long run within one sigma of the centerline ("hugging the centerline") in an X-bar chart generally indicates bad subgrouping.
One other important indication often seen in both transactional and manufacturing environments is the sort of "spiky" appearance seen when you have inadequate discrimination in your measurement system, and only 1-3 values within the control limits in your range chart.
A couple of other notes about control charts and hypothesis testing: although many people think of a control chart as a sort of running hypothesis test, it's not one. Deming used to thunder loudest at people who made that association. A control chart is a heuristic. We don't set alpha; we just use 3-sigma limits to help make decisions about where and how to act.
If that hypothesis testing model is useful for your understanding, use it; but understand that it's flawed (as are all other models), and take some time to learn the difference between analytic studies and enumerative studies. In enumerative studies, the population exists (at least in principle), and we are trying to extrapolate from a sample to the population. In analytic studies, the population does not, and will never, exist; we are trying to extrapolate into the future. In enumerative studies, random sampling is important, because it provides the best chance for representation. In analytic studies, rational subgrouping and judgment sampling are more important; our "sample" consists of subgroups gathered over time, and we want to sample at times when we are most likely to find signals.

Good Explanation!

I have been aware for some time that there are numerous signals to look for in control charts. Thanks for explaining their significance in terms of the null hypothesis and the probabilities involved--none of my past SPC and Six Sigma instructors have ever taught this!

Dennis, Hopefully (though

Dennis,

Hopefully (though not likely), the reason why your instructors did not teach you about probabilities as the basis for control charts is simple: CONTROL CHARTS ARE NOT BASED ON PROBABILITIES OR THE NORMAL DISTRIBUTION!!!! Therefore, it is ERRONEOUS to teach the principles of control charts with the use of probability theory or the normal distribution. I invite you to read the classic "Economic Control of Quality of Manufactured Product" by Walter Shewhart. Dr. Shewhart developed the concept of the control chart empirically, not theoretically. The control limits are based on three sigma because they worked, simple as that. It is at three sigma that the total loss from Alpha and Beta error is minimized. You cannot minimize the cost of both types of error at the same time. A balance has to be struck, and the three-sigma limits are where the total loss from the errors is minimized.

The four Western Electric Zone tests do indeed work very well, BUT they only apply to the charts for averages, not charts for ranges. For ranges, use only the rule of points outside the control limits and the rule of too many consecutive points on one side of the centerline.

Unfortunately, articles like this one lead people down the wrong paths.

You can read a great deal about this subject in the published works of Dr. Donald Wheeler.

Limitation of Rules on Control Charts

Don't forget that many rules for control chart limits have limitations.

For example, your calculation of the probability of having 7 points above the average depends on each point being "independent" (what you call "random").

In most business processes, the points are not independent (results are not like flipping a coin; they are not completely random), and the assumption of a 50/50 outcome for each result is not correct.

To prove "independence" you have to be able to prove that the outcome of the last event has no influence on the next event. In business, the last event almost always influences the next event ... therefore, no independence.

Therefore, you should not use the rule of 7 points above/below the mean in most business process control charts, because you will overestimate the number of signals (Type I errors).

Many of the other rules you mention also have limitations, many related to independence of the data, normal distributions, and so on.

An X-bar chart's 3-sigma limits do not have this same requirement for independence of data, so you can use them all the time.

Best Regards,

Mark Paradies
President
System Improvements, Inc.
http://www.taproot.com