
Davis Balestracci

Quality Insider

Wasting Time With Vague Solutions, Part 2

Some wisdom from Joseph Juran

Published: Tuesday, September 18, 2012 - 13:17

As you all know, the influence of W. Edwards Deming on my career and thinking has been profound. A criticism always leveled at him was that he was short on specifics—but he would always growl at someone who alluded to this, “Examples without theory teach nothing!”

Enter Joseph Juran, the other quality giant of the 20th century. When I worked at 3M during the 1980s, the company had several sets of his 16-video Juran on Quality Improvement series. I studied it hard and watched several of the tapes many, many times. He had a good empirical sense (and a sense of humor) and, having been around the block once or twice, a lot of wisdom. So let’s apply some of that wisdom to the two scenarios from part one of this three-part series.

Juran always advised, as a first strategy: “Exhaust in-house data.” Before doing this, an initial control chart of your process is important. Here’s a key principle of common cause analysis that almost everyone overlooks: The data from any common cause period on a control chart can be aggregated to attempt a “stratification” and apply Juran’s beloved Pareto Principle.
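As a concrete illustration of that initial chart, here is a minimal sketch in Python of one common choice, the individuals (XmR) chart, which establishes whether a period shows only common cause before you pool its data. The monthly uptime figures are entirely hypothetical:

```python
# Minimal XmR (individuals) chart sketch. The monthly uptime
# percentages are hypothetical; 2.66 is the standard factor that
# converts the average moving range into 3-sigma natural limits.
uptime = [99.1, 99.5, 99.2, 99.6, 99.0, 99.4, 99.3, 99.5, 99.2,
          99.1, 99.6, 99.3, 99.4, 99.2, 99.5, 99.1, 99.3, 99.4,
          99.2]  # 19 hypothetical months

mean = sum(uptime) / len(uptime)
moving_ranges = [abs(b - a) for a, b in zip(uptime, uptime[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

unpl = mean + 2.66 * mr_bar  # upper natural process limit
lnpl = mean - 2.66 * mr_bar  # lower natural process limit

outside = [x for x in uptime if not (lnpl <= x <= unpl)]
print(f"Mean {mean:.2f}%, limits {lnpl:.2f}% to {unpl:.2f}%")
print("Common cause; safe to pool the period." if not outside
      else f"Special cause signals: {outside}")
```

If any point fell outside the limits, that month would need its own investigation before the remaining data could be pooled.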

For the percent computer-uptime data from part one: From the control chart, the department was averaging 99.3 percent uptime and hence (100 – 99.3 =) 0.7 percent downtime. Assuming a 30-day month:

0.007 × 30 × 24 ≈ 5 hours of downtime a month, on average (0 to 16 in any one particular month). The routine meeting previously described usually focused on each month’s figure, treating it as a special cause in terms of both its amount and its causes.

What if you considered the potential of using the data from all 19 months of the graph, i.e., approximately 95 hours of downtime? One could now ask:
1. Is it possible to go back even further to see if there is even more stable behavior to potentially aggregate?
2. Can one think of categories into which these hours can be stratified, such as time of day? (A stratification sketch follows this list.)
3. Is routine maintenance in the uptime definition?
• If so, are these maintenances executed consistently?
• At what time(s)?
• How many of these hours are involved?
• How can the other remaining hours be categorized to see what’s significant?
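To make the stratification idea concrete, here is a minimal sketch of a Pareto tally of the pooled 95 hours. The categories and hour counts are entirely hypothetical; in practice they would come from the in-house records:

```python
from collections import Counter

# Hypothetical stratification of the ~95 pooled downtime hours by
# cause category (invented for illustration).
downtime_hours = Counter({
    "routine maintenance": 41,
    "network outage": 24,
    "hardware failure": 15,
    "software patching": 9,
    "other": 6,
})

total = sum(downtime_hours.values())
cumulative = 0
print(f"{'Category':<22}{'Hours':>6}{'Cum %':>8}")
for category, hours in downtime_hours.most_common():
    cumulative += hours
    print(f"{category:<22}{hours:>6}{100 * cumulative / total:>7.1f}%")
```

In Juran’s terms, the top one or two categories in such a tally are the “vital few” worth pursuing first.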

The “different conversation”—and resulting action—has begun.

For the “never events” data from part one, which also showed common cause, 29 total events occurred during the plotted period.
• Given the trendiness of root cause analysis these days, what if 29 individual analyses had been done during this period (Joiner’s Level 2 fix)?
• What does the chart say about the effect of all this activity? No improvement.
• Given that the control chart is common cause, did you know that you can aggregate all 29 events during this time because the same process produced them? Maybe you should do a root cause analysis of your 29 aggregated root cause analyses! (Joiner’s Level 3 fix; a sketch of such an aggregation follows this list.)
• Can you go back further to get more data? If you were to plot some previous history, might it be indistinguishable (i.e., common cause) from this time period? In which case, you could aggregate any of those additional events into the mix.
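Here is a minimal sketch of what such an aggregation might look like, assuming, purely hypothetically, that each of the 29 root cause analyses recorded one primary cause category:

```python
from collections import Counter

# Hypothetical primary-cause labels from 29 individual root cause
# analyses of "never events"; aggregating them across the common
# cause period is the Level 3 move described above.
primary_causes = (
    ["communication breakdown"] * 11
    + ["skipped verification step"] * 8
    + ["staffing/handoff gap"] * 5
    + ["equipment issue"] * 3
    + ["other"] * 2
)

for cause, count in Counter(primary_causes).most_common():
    print(f"{cause:<28}{count:>3}  ({100 * count / 29:.0f}%)")
```

A Pareto pattern across the aggregated analyses, rather than 29 isolated stories, is what points toward a process-level fix.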

Key point: Are these suggestions for both scenarios something you could easily do up front before getting more people involved? As you know, one of my ongoing themes is about you “getting the respect you deserve.” Might this help?

Here’s a more thorough approach to localize the interesting “20 percent.”

Define recurring problems

If the control chart exhibits common cause, the process is stable; in this first cursory analysis, the chart most likely characterizes the reporting process and the results that process is perfectly designed to produce. A deeper issue in the initial analysis of a process, which is in many cases humdrum and routine to the work culture, is the effect of human variation, both in perception (i.e., definition) and in reporting. In some cultures, this includes a nontrivial fear factor.

Just because a reporting process exists doesn’t necessarily mean that the right events are being reported, or even that all the events are being reported. In the case of the “never events” data, based on the current chart, the reporting process is stable—no more, no less. Given that analysis, and in the context of the previous paragraph, some changes may be needed to reduce variation in the reporting process, which would improve the ongoing improvement process as a result.

Meanwhile, there is hope that at least some vital areas needing improvement might be exposed, including the reporting process, and that many human variation areas can be considered and improved by working on these “vital few” (Juran again).

Assess the impact of each problem

Do you already have any data on its impact? (A simple scoring sketch follows this list.)
• How often does this problem occur?
• How severe is it when it occurs?
• Would other data be useful to determine its impact? How can you get them?
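One simple convention for combining the frequency and severity questions, not from Juran but common practice, is a frequency-times-severity score. The problems and numbers below are invented for illustration:

```python
# Hypothetical problems scored by frequency (events per month) and
# severity (1-5 scale); impact = frequency * severity is one simple
# ranking convention, not the only one.
problems = [
    ("late lab results", 12, 2),
    ("mislabeled specimens", 3, 5),
    ("scheduling conflicts", 20, 1),
]

for name, freq, sev in sorted(problems, key=lambda p: p[1] * p[2],
                              reverse=True):
    print(f"{name:<24} impact = {freq} x {sev} = {freq * sev}")
```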

Localize each major problem

Do you have any data already? (A cross-tabulation sketch follows this list.)
• When does or doesn’t the problem occur?
• Where does it occur, or where is it first observed? Where doesn’t it occur? Where is it not observed?
• Does the problem’s occurrence correlate with any particular vendor’s product, in terms of higher or lower rates?
• Are there other problems that always or often occur together with this problem? Could these be related somehow? Are there problems that you might ordinarily expect to see but don’t?
• Who tends to have the problem most often?
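As one way to act on these questions, here is a minimal cross-tabulation sketch; the shifts, locations, and counts are invented for illustration:

```python
from collections import Counter

# Hypothetical occurrence records: (shift, location) for each event.
events = [
    ("night", "unit A"), ("night", "unit A"), ("day", "unit B"),
    ("night", "unit B"), ("day", "unit A"), ("night", "unit A"),
    ("evening", "unit C"), ("night", "unit A"), ("day", "unit B"),
]

print("By shift:   ", Counter(s for s, _ in events).most_common())
print("By location:", Counter(l for _, l in events).most_common())
```

A lopsided tally on any dimension (shift, location, vendor, person) is a candidate for the localized “20 percent.”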

I’ve used Juran's “exhaust in-house data” to introduce the first common cause strategy: stratification. There’s more to stratification than is presented here, and there are also two other common cause strategies.

So, to be continued in part three.


About The Author


Davis Balestracci

Davis Balestracci is a past chair of ASQ’s statistics division. He has synthesized W. Edwards Deming’s philosophy as Deming intended—as an approach to leadership—in the second edition of Data Sanity (Medical Group Management Association, 2015), with a foreword by Donald Berwick, M.D. Shipped free or as an ebook, Data Sanity offers a new way of thinking using a common organizational language based in process and understanding variation (data sanity), applied to everyday data and management. It also integrates Balestracci’s 20 years of studying organizational psychology into an “improvement as built in” approach as opposed to most current “quality as bolt-on” programs. Balestracci would love to wake up your conferences with his dynamic style and entertaining insights into the places where process, statistics, organizational culture, and quality meet.