Davis Balestracci

Six Sigma

Time to Lose the 10-Minute Overview

Stop the self-sabotage and help executives understand simple variation

Published: Tuesday, September 28, 2010 - 04:30

I attended a talk in 2006 given by a world leader in quality that contained a bar graph summary ranking 21 U.S. counties from best to worst (see figure 1). The counties were ranked from 1 to 21 on each of 10 different indicators, and these ranks were summed to get a total score for each county (minimum possible 21, maximum 210, average 110; smaller score = better). Data presentations such as this usually result in discussions where terms like “above average,” “below average,” and “who is in what quartile” are bandied about. As W. Edwards Deming would say, “Simple… obvious… and wrong!” Any set of numbers needs a context of variation within which to be interpreted.

Rank Sum    County
      42         1
      76         2
      84         3
      87         4
      92         5
      99         6
     101         7
     102         8
     105         9
     105        10
     107        11
     108        12
     112        13
     113        14
     114        15
     121        16
     128        17
     131        18
     145        19
     157        20
     181        21

Figure 1: Summary of 21 U.S. counties

I asked for the original data (i.e., the individual sets of rankings for each of the 10 indicators), and the presenter was kind enough to supply it. My analysis showed there was one “above average” county (No. 21) and one “below average” county (No. 1). Counties 2–20 were indistinguishable. (If you’re interested in the statistics involved, you can view them here.)
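The linked analysis isn’t reproduced here, but one common way to give rank sums the “context of variation” the opening paragraph calls for is an analysis-of-means-style chart. Assume (my assumption, for illustration only, not necessarily the presenter’s method) that the 10 indicator rankings are independent and that, absent real differences, each county is equally likely to receive any rank from 1 to 21 on each indicator. Then each rank sum has mean 110 and a computable standard deviation, and only counties falling outside three-sigma limits stand out from the pack. A minimal sketch:

```python
# Illustrative analysis-of-means-style check on the figure 1 rank sums.
# Assumes independent indicators and uniformly random ranks under the
# "no real difference" hypothesis -- a sketch, not the original analysis.

import math

# Rank sums from figure 1, listed in county order 1..21
rank_sums = [42, 76, 84, 87, 92, 99, 101, 102, 105, 105, 107,
             108, 112, 113, 114, 121, 128, 131, 145, 157, 181]

k = 21   # counties, so ranks run 1..21 on each indicator
n = 10   # indicators summed per county

mean_sum = n * (k + 1) / 2            # 10 * 11 = 110
var_one_rank = (k * k - 1) / 12       # variance of a single uniform rank 1..k
sigma = math.sqrt(n * var_one_rank)   # std. dev. of a sum of 10 such ranks

lower = mean_sum - 3 * sigma
upper = mean_sum + 3 * sigma

for county, total in enumerate(rank_sums, start=1):
    if total < lower or total > upper:
        print(f"County {county}: rank sum {total} is outside "
              f"[{lower:.1f}, {upper:.1f}] -- distinguishable")
```

Under these assumptions the limits come out near 53 and 167, so only the counties with rank sums 42 and 181 are distinguishable from the rest, which matches the conclusion above: counties 2–20 are just noise.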

I then shared my analysis with him. Our e-mail correspondence follows:

World quality leader:  A subtle issue you did not tackle is the political-managerial issue of communicating such insights to [the two special-cause counties] and the counties that thought they were “different” but, statistically, aren't. I wonder what framework one could use to approach that psychological challenge?

Balestracci: As I say to my audiences, “Hey, I’m just the statistician, man!” I think the issue is how people and leaders like you are going to facilitate these difficult conversations. This is the leadership that quality gurus keep alluding to and seems to be in very short supply.

My job is to keep you all out of the “data swamp,” but I would be a willing participant. I would love to pilot some of these analyses with you or other leaders. We need to figure out what this process should be. This is potentially very exciting and could quantum-leap the quality improvement movement.

My point is that this “language” needs to be a fundamental piece of any improvement process and led by leaders who understand it and are promoted into leadership positions only if they understand it. If this could become culturally inculcated, then the rampant shoot-from-the-hip analyses and resulting defensiveness would stop, period. The discussion would then focus, as it should, on process. We need new conversations, and this could be a key catalyst.

World quality leader:  Nope. I don’t buy it. Yes, I am a leader and need to carry the message. But I know you too well to let you off the hook. I’d love to see you try to lead these conversations and experiment with approaches. You're a leader, too.

Balestracci:  Give me an opportunity, and I will do my best to lead that conversation. Have you fathomed the potential of this?

Real root causes?

That last e-mail of mine has never been answered. I’m still waiting for the promised opportunity. I tried to remind him every once in a while but have given up. In the four years since, my further e-mails have gone unanswered. At his insistence, I even sent the analysis, with explanation, to the original executive group that collected and summarized the data. No reply.

Many of this example’s statistical principles are ones Deming demonstrated during his seminars. After more than 20 years of trying to teach similar concepts, I am still amazed at the abject cowardice of (yes, cowardice), and fierce resistance from, (alleged) leaders who abdicate the responsibility to comprehend the power of a simple understanding of variation. As many of us know, Deming had zero tolerance for such ignorance or arrogance.

Let me tie this reaction into the current hot topic of root cause analysis. An excellent article by John Dew, “The Seven Deadly Sins of Quality Management” (Quality Progress, 2003), considers the true root causes of quality problems. They are entrenched in a “quality as a bolt-on” culture, of which the correspondence above is symptomatic. These root causes include:

1. Placing budgetary considerations ahead of quality
2. Placing schedule considerations ahead of quality
3. Placing political considerations ahead of quality
4. Being arrogant
5. Lacking fundamental knowledge, research, or education about improvement
6. Pervasively believing in entitlement
7. Practicing autocratic behaviors that result in “endullment” rather than empowerment

Regarding items 4 and 5, I believe quality professionals have made huge strides in speaking the language of senior management—perhaps too well, in fact; I’m seeing an increasing emphasis on “bottom-line results.” In many organizations, senior management still does not know the fundamental lessons of quality and, frankly, shows no interest in learning them beyond insisting, “Get to the punchline and give me the 10-minute overview and bottom-line results.”

Promotions self-perpetuate the status quo. Could it be that few quality managers make it into senior management positions because senior management does not really believe in quality concepts?

Am I the only one who sees the potential implications of this simple example?

Mark Graham Brown, a balanced scorecard and measurement expert, thinks that 50 percent of executive meetings where data are involved are a waste of time—as is middle management spending an hour a day poring over useless operational data. (Put that into a dollar figure.)

Why is it that the very people who don’t seem to get it, or want to get it, tend to:

• Look at tables of raw data and draw circles around numbers they don't like
• Look at data summarized by smiley faces, bar graphs, trend lines, and traffic lights
• Compare a number to an arbitrary goal and throw a tantrum
• Brag about reading the latest airport best-seller, leadership-fad book

Sigh. Passionate lip service continues to be alive and well.

What can you do?

Herein lies the opportunity for quality professionals: Getting the respect we deserve by bringing “data sanity” to organizations, which would free up precious time to consider and make quality an organizational “build in.” People in quality must stop seeing themselves as victims or being complacent because they are “so busy.” Activity is not impact. (See my 2009 column on this subject here.)

Join me and watch like a hawk for opportunities to convert everyday executive data presentations into this “funny statistical way” of doing things. This will keep you from doing yet another self-sabotaging seminar simulating Deming’s red bead experiment. We need to stop whining that people “just don’t get it” and think more formally about how to stop boring execs to death.

Getting mad and focusing that energy wouldn’t hurt, either.


About The Author

Davis Balestracci

Davis Balestracci is a past chair of ASQ’s statistics division. He has synthesized W. Edwards Deming’s philosophy as Deming intended—as an approach to leadership—in the second edition of Data Sanity (Medical Group Management Association, 2015), with a foreword by Donald Berwick, M.D. Shipped free or as an ebook, Data Sanity offers a new way of thinking using a common organizational language based in process and understanding variation (data sanity), applied to everyday data and management. It also integrates Balestracci’s 20 years of studying organizational psychology into an “improvement as built in” approach as opposed to most current “quality as bolt-on” programs. Balestracci would love to wake up your conferences with his dynamic style and entertaining insights into the places where process, statistics, organizational culture, and quality meet.


10 minute overview & one number summaries

Davis, until financial reporting steps out of the dark ages in this area, I don't believe we'll see much widespread progress. Look at the emphasis on comparing this time period to the last period, and to the same period last quarter and last year! These are snapshots of financial health that can be, and often are, manipulated to look good. I'm reminded of a conversation I once had with a PhD chemist about statistical methods, and DOE in particular. He asked, "If these are such great methods, why aren't they being taught in chemistry PhD programs?" Until business, finance, and MBA programs start teaching a better way, we'll have a tremendous problem making a real difference. Why aren't the best business schools teaching about the impact of variation? Why don't the big consulting companies advise their clients about how variation can make so many "conclusions" erroneous? Which company's annual report is filled with trend diagrams rather than pie and bar charts? Why don't we see more sparklines?

Great Article, as usual

Davis, I'm glad you were able to do something with this!