
Davis Balestracci

Metrology

A Statistician’s Favorite Answer: ‘It Depends,’ Part 2

Stop getting sucked into the swamp of calculation minutiae

Published: Tuesday, March 22, 2011 - 07:22

When teaching the I-chart, I’m barely done describing the technique (never mind teaching it) when, as if on cue, someone will ask, “When and how often should I recalculate my limits?” I’m at the point where this triggers an internal “fingernails on the blackboard” reaction. So, I smile and once again say, “It depends.” By the way…

… Wrong question!

I made a point in Part 1 of this article that I feel is so important, I’m going to make it again: Do not bog down in calculation minutiae. If you feel the instinct to ask that question, pause and think of how you would answer these from me instead:

1. Could you please show me the data (or describe an actual situation) that are making you ask me this question?

2. Please tell me why this situation is important.

3. Please show me a run chart of these data plotted over time.

4. What ultimate actions would you like to take with these data?


And since writing Part 1, I’ve thought of a fifth question I’d like to add:

5. What “big dot” in the board room are these data and chart going to affect? Or less tactfully,

5a. Who cares whether the limits are correct or not?


When you supply me with the answers to questions 1 and 2, then we can begin a dialogue, during the course of which I would be happy to answer your question about limits.


OK, I’ll answer the question now… sort of


The purpose of the limits is to give a reasonable range of expected performance due to common cause. For the I-chart, as long as the limits are computed correctly—via the moving range between consecutive observations in time order—and “three sigma” are used, then they are “correct limits.” As Donald J. Wheeler likes to say, “Notice that the definite article is missing.” They are just “correct limits,” not “the correct limits.”
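For readers who like to see the arithmetic spelled out, here is a minimal sketch of that calculation in Python (the data values and the function name are purely illustrative, not from any particular package): the center line is the mean of the individual values, and the three-sigma limits sit 2.66 average moving ranges on either side of it, 2.66 being the standard scaling constant (3 divided by 1.128) for moving ranges of two consecutive points.

```python
# Minimal I-chart (individuals chart) limit calculation.
# The data below are purely illustrative.

def i_chart_limits(values):
    """Return (center, lower, upper) three-sigma limits for an I-chart.

    The limits use the average moving range between consecutive
    observations, scaled by the standard constant 2.66 (= 3 / 1.128).
    """
    center = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return center, center - 2.66 * mr_bar, center + 2.66 * mr_bar

data = [12.1, 11.8, 12.6, 12.0, 12.4, 11.7, 12.2, 12.3, 11.9, 12.5]
center, lcl, ucl = i_chart_limits(data)
print(f"center = {center:.2f}, limits = ({lcl:.2f}, {ucl:.2f})")
```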

Ready for a blinding flash of the obvious? The time to recompute the limits for your charts comes when, in your best judgment, they no longer adequately reflect your experience with the process. There are no hard and fast rules. It is mostly a matter of deep thought analyzing the way the process behaves, the way the data are collected, and the chart’s purpose.

If the process has shifted to a new location, and you don’t think there will be a change in its common-cause variability, then you could use the former measure of variation in conjunction with the new measure of location to obtain temporarily useful limits. Meanwhile, it would probably be a good idea to keep track of the moving range on an MR-chart to note any obvious changes. There is no denying that you will need to ponder the issue of recalculating the limits. With today’s computers, as mentioned below, it’s less of an issue; however, it still requires good judgment.
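One way to sketch that "new location, old variation" adjustment is shown below; the function and the data are my own illustration, not a prescribed recipe. The spread comes from the average moving range of the earlier, stable period, while the center comes from the points collected since the shift. The MR-chart upper limit of 3.267 times the average moving range (the usual constant for moving ranges of two) is included so that obvious changes in variability still get flagged.

```python
def recentered_limits(old_values, new_values):
    """Temporary I-chart limits after a known shift in process location.

    Spread comes from the earlier, stable period (former measure of
    variation); the center comes from data since the shift (new
    measure of location).
    """
    old_mrs = [abs(b - a) for a, b in zip(old_values, old_values[1:])]
    mr_bar_old = sum(old_mrs) / len(old_mrs)
    new_center = sum(new_values) / len(new_values)
    limits = (new_center - 2.66 * mr_bar_old, new_center + 2.66 * mr_bar_old)
    mr_upper = 3.267 * mr_bar_old  # MR-chart limit, to watch for changes in variability
    return new_center, limits, mr_upper

# Illustrative use: a process that stepped up from around 12 to around 14.
before = [12.1, 11.8, 12.6, 12.0, 12.4, 11.7, 12.2, 12.3]
after_shift = [14.2, 13.9, 14.4, 14.1, 14.3]
print(recentered_limits(before, after_shift))
```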

Wheeler wrote a column 15 years ago that is every bit as relevant today. So, let’s have him ask you three questions:

1. Do the limits need to be revised for you to take the proper action on the process?

2. Do the limits need to be revised to adequately reflect the voice of the process?

3. Were the current limits computed using the proper formulas?


Still not sure? Look at the chart and ask these additional questions Wheeler added from Perry Regier of Dow Chemical Co.:

1. Do the data display a distinctly different kind of behavior than in the past?

2. Is the reason for this change in behavior known?

3. Is the new process behavior desirable?

4. Is it intended and expected that the new behavior will continue?


If the answer to all four questions is yes, then it is appropriate to revise the limits based on data collected since the change in the process.

If the answer to question 1 is no, then there should be no need for new limits.

If the answer to question 2 is no, then you should look for the special cause instead of tinkering with the limits.

If the answer to question 3 is no, then why aren’t you working to remove the detrimental special cause instead of tinkering with the limits?

If the answer to question 4 is no, then you should again be looking for the special cause instead of tinkering with the limits.

The objective is to discover what the process can do or can be made to do.

Yes, indeed: It depends.


Wait for it…

Frustrated by my lack of a concise answer, and now trying to distract me from pressing for answers to all these questions, someone will then ask, “Well, even though I can’t think of a situation, how many data points are needed to compute accurate limits?”

I generally answer, “How much data have you got?” (It’s usually not very much.)

In my experience, useful limits may be computed with small amounts of data. Even as few as seven to 10 observations are sufficient to start computing limits, especially if, as frequently happens to me, it’s all you’ve got. What else are you going to do? I dare you to find a more accurate way to assess the situation. I chuckle when I think of how many times executives have told me, “Your way of doing things has too much uncertainty.” I’ve been so tempted to answer, “So exactly what are you going to do instead?”

The limits do begin to solidify when 15 to 20 individual values are used in the computation. To argue semantics, when fewer data are available, the limits can be considered “temporary limits,” subject to revision as additional data become available. When more than 50 data points have been used in computing the limits, there is little point in revising them further.

However, who does charts by hand anymore? Given today’s computer packages, limits are automatically updated as new data are added, so what’s the problem? You might have to make a decision about what period to aggregate for the appropriate moving range statistic, but it’s a somewhat minor point. Frankly, it’s a question I rarely consider; I generally have far too many questions regarding the process being improved. After those are settled, the calculation process always somehow seems to sort itself out. Rest assured, by focusing on the process, you will get “correct limits.”
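To illustrate why the software makes this mostly a non-issue, the short loop below (an assumption about how one might script it, reusing the i_chart_limits helper from the earlier sketch; it is not a feature of any particular SPC package) simply recomputes the limits each time a new point arrives. With only seven or eight points the limits wander a bit; by 15 to 20 they have largely settled down.

```python
# Recompute limits as each new observation arrives (illustrative data).
# Reuses the i_chart_limits() helper defined in the earlier sketch.
stream = [12.1, 11.8, 12.6, 12.0, 12.4, 11.7, 12.2, 12.3,
          11.9, 12.5, 12.0, 12.2, 11.8, 12.4, 12.1, 12.3, 12.0, 11.9]

for n in range(7, len(stream) + 1):  # start once seven points are in hand
    center, lcl, ucl = i_chart_limits(stream[:n])
    print(f"n = {n:2d}: temporary limits = ({lcl:.2f}, {ucl:.2f})")
```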

So stop getting sucked into the swamp of calculation minutiae. Instead, spend all that energy using your charts to understand and improve your processes. And the first time you say, “It depends” in answer to someone’s question, let me know, and we’ll both smile.


About The Author


Davis Balestracci

Davis Balestracci is a past chair of ASQ’s statistics division. He has synthesized W. Edwards Deming’s philosophy as Deming intended—as an approach to leadership—in the second edition of Data Sanity (Medical Group Management Association, 2015), with a foreword by Donald Berwick, M.D. Shipped free or as an ebook, Data Sanity offers a new way of thinking using a common organizational language based in process and understanding variation (data sanity), applied to everyday data and management. It also integrates Balestracci’s 20 years of studying organizational psychology into an “improvement as built in” approach as opposed to most current “quality as bolt-on” programs. Balestracci would love to wake up your conferences with his dynamic style and entertaining insights into the places where process, statistics, organizational culture, and quality meet.

Comments

Questions...questions!

Davis, your insight is appreciated. I get the same questions when I teach the use of Process Behavior Charts. I guess people are people wherever you go! For some reason people get too hung up on the computations rather than gaining insight into their processes. Also, whenever executives tell me that there is "too much uncertainty" in the predictive powers of a given Process Behavior Chart, I just show them a chart of variances between THEIR monthly forecasts and the actuals. Now THAT'S uncertainty!


sjm ;)

Auditing a large cause

I agree wholeheartedly with the lunacy of recalculating limits for the sole purpose of recalculating limits. Davis, I think if you do some investigation, you'll find that a major source of this comes through the "audit" process (that has been almost 100% of my experience). I have been involved with two different "statistically enlightened" companies for over 20 years. We never talked about frequency of recalculation of limits amongst ourselves as an opportunity for improvement, because we knew that was a foolish thing to do. But the audits that we were subjected to by our direct customers and their auditing bodies didn't have this same sense of understanding of limits. The best was when purchasing agents, with a whopping two-day course in SPC and no hands-on experience, would attempt to tell people with much more background and experience that limits must be recalculated every three months. Sigh…

Now, manufacturing companies aren’t blameless in creating this scenario. I have often seen cases where a set of limits was calculated many moons earlier and is no longer relevant, or was opened up because too many alarms were being generated by improperly set limits (and we all know there are more reasons than these). So the customer sees these examples and is thus dissatisfied with the state of control within their supplier (and who could blame them?). As a result, the easiest way to audit this is to see the date when the limits were last recalculated. Auditors love black and white – it makes their job easier. That forces the supplier to take a look at it and never let it get “out of control” again. Hence, the frequency requirement and the drive for the answer to the question.

After my first round of corporate quality, I got smarter in my second round. I wrote corporate procedures and policies that explicitly stated limits were NOT to be recalculated on a systematic basis. If a plant was audited and pushed on this, they would point to the corporate policy, and corporate would get on the horn with the auditors. That generally didn’t save the plant from receiving points off of the audit (but at least it demonstrated they followed policy!).

You want the plants to focus their limited resources on the right things that add value to the process, not on silliness that adds no value other than satisfying an uneducated auditor or manager who has been otherwise duped into believing that is part of process control.

Kevin Keller

The more things change...

...the more they remain the same.


Gents,


WONDERFUL insights, both of you, and "spot on." Kevin, I saw the same things in my manufacturing days at 3M, whose profile I'm sure compares favorably to your "statistically enlightened" companies. Is the goal of an audit to pass the audit or to improve quality? I'm more firmly convinced than ever that quality must be built into a company's cultural DNA to the point where the words "statistical" and "quality" are dropped as qualifiers because they are "givens." Otherwise, the maddening games continue as people try to outfox auditors, many of whom are as you described.


VERY few people need an advanced knowledge of statistics. It is only in the last 10 years that I have finally "got" it. As Deming himself said, "If I had to reduce my message to management to just a few words, I'd say it all has to do with understanding variation." He taught very few techniques in his seminars, and he is probably rolling over in his grave at the sub-culture of "hacks" (his term) that has been created. Only 1-2% of people need ADVANCED statistical knowledge, and all these "belts" are shooting themselves in the foot with their training, which is nothing short of legalized torture. I had access to some correspondence Deming wrote to a well-respected statistician...in 1984: "Sorry about your misunderstanding...TOTAL! When will statisticians wake up?" As David Kerridge says: "If we're actually trying to do the wrong thing, we may only be saved from disaster because we're doing it badly." And management like Steve describes above continues to be deluded as to its effectiveness. The more things change...