



Published: 02/13/2017
Just curious: Do you have monthly (and/or quarterly and/or even weekly) “How’re we doin’?” meetings like the end-of-year scenario described in my November and December columns last year [1] [2]—about budgets, financials, never events, incidents, near misses, machine downtime, productivity, root cause analyses, returned shipments, rehospitalizations, complaints, customer satisfaction scores, and employee satisfaction scores?
Usually the agenda of these vague meetings is to discuss:
1. Only the past month’s overall result, e.g., “Were we red, yellow, or green?” (Special cause strategy.)
2. How overall performance seems to be “trending,” using only this month’s, last month’s, or 12-months-ago results. (Ditto.)
3. Each individual incident that occurred during the month and how each could have been fixed (Ditto.)
4. Which particular events need individual root cause analyses? (Ditto.)
What is this costing you? That’s “unknown or unknowable,” but what difference does it make? It’s a huge number.
Remember that the cancellation/no-show process described in my November column was stable at 10 percent.
So now what?
All common cause means is that—initially—each data point has temporarily lost its individual identity, i.e., no “cherry picking”—and that includes labeling the individual differences from their goal as a red, yellow, or green variance.
Common cause strategies rely on somehow aggregating the variable over a stable time period and applying a grouping strategy to expose possible hidden special causes. They try to leverage Joseph Juran’s beloved Pareto principle whenever possible: “What is the 20 percent of this process that’s causing 80 percent of the problem?”
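To make the Pareto idea concrete, here is a minimal sketch in Python with pandas; the cancellation categories and counts are entirely hypothetical, invented only to show the tabulation:

```python
import pandas as pd

# Hypothetical counts of cancellations by brainstormed category
# (illustrative numbers only, not from any real data)
counts = pd.Series(
    {"forgot": 180, "transportation": 95, "felt better": 40,
     "scheduling error": 25, "work conflict": 20, "other": 15},
    name="cancellations",
).sort_values(ascending=False)

# Cumulative percentage of all incidents, largest category first
cum_pct = (counts.cumsum() / counts.sum() * 100).round(1)
pareto = pd.DataFrame({"count": counts, "cumulative %": cum_pct})
print(pareto)

# The "vital few": the categories accounting for roughly 80 percent
vital_few = pareto[pareto["cumulative %"] <= 80].index.tolist()
print("Focus here first:", vital_few)
```

In this made-up table, two of the six categories account for nearly three-quarters of the incidents; that is where the improvement energy would go first.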
You must avoid the initial urge to act on these commonly taught misconceptions: “It’s common cause, so that means the process needs to be redesigned,” or worse yet, “Dr. Deming says that if it’s common cause, then management needs to solve the problem.”
This avoids a disruptive, cumbersome total redesign (and implementation) process that would involve too many people and be resisted at every turn by both management and the front line.
Often, a surprisingly simple answer is already contained within the current process. It is only when data show no existing hidden special causes that there may be no other choice but to redesign.
Many times, it’s as simple as answering the initial easy question in reaction to a stable, common-cause process chart: Do the points that are all high (or all low) have the same reason, e.g., same month, day of the week, time of day, holiday, or special product run?
If not, for the case of the cancellation or no-show data, you would next consider all of the individual incidents from the year en masse. These data are then stratified (“sliced and diced,” if you will) for the purpose of immediate disaggregation into distinct, possible special-cause groupings hidden by the current collection process.
A group could brainstorm various ways to stratify the data, categorizing it by various process inputs (e.g., time of day, day of week, type of therapy, site, age, forgot, specific type of appointment, therapist).
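As a sketch of what that slicing and dicing might look like, assuming an incident log whose field names are invented here for illustration, one pass in pandas tallies the incidents against every candidate stratification variable:

```python
import pandas as pd

# Hypothetical incident log; the fields mirror the brainstormed inputs
incidents = pd.DataFrame({
    "day_of_week": ["Mon", "Mon", "Fri", "Tue", "Mon", "Fri"],
    "time_of_day": ["am", "pm", "am", "am", "pm", "am"],
    "therapy_type": ["PT", "OT", "PT", "PT", "speech", "PT"],
    "site": ["A", "A", "B", "A", "C", "B"],
})

# "Slice and dice": tally incidents by each candidate grouping
# to see whether any stratification stands out
for var in incidents.columns:
    print(f"\n--- by {var} ---")
    print(incidents[var].value_counts())
```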
Isn’t this a much better use of people’s brainpower than the futility of using a tabulated report to ask: “Why did we go up? (Why did we go down?) Why weren’t we green? (Why were we green?) What’s the trend? What’s our action plan for the upcoming month? What are we going to say at the upcoming operational review?”
Frightened or even energized people will find reasons—such a waste of good people’s time, energy, and talent. Competent facilitators must know how to focus a group’s energy into a productive direction.
It is always best to start with data that are already recorded, but not necessarily used, as part of the routine data collection.
For the cancellation/no-show example, it might be a good idea to at least start with “day (of the week) of cancellation” and “time of cancelled appointment,” which should be available. There could be a particular day, particular time, or specific day/time pattern that could be exposed as a more focused opportunity. Is the pattern different between the cancellations and no-shows (sub-stratification)?
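A minimal illustration, assuming each record carries its type, day, and hour (all names and numbers here are made up): a cross-tabulation exposes any day/time pattern, and splitting by type gives the sub-stratification:

```python
import pandas as pd

# Hypothetical appointment-failure log; column names are assumptions
log = pd.DataFrame({
    "type": ["cancel", "no-show", "cancel", "cancel", "no-show", "cancel"],
    "day":  ["Mon", "Mon", "Fri", "Mon", "Tue", "Fri"],
    "hour": [8, 8, 16, 9, 8, 15],
})

# Day-by-hour counts: is there a particular day, time, or day/time pattern?
print(pd.crosstab(log["day"], log["hour"]))

# Sub-stratification: do cancellations and no-shows pattern differently?
print(pd.crosstab([log["type"], log["day"]], log["hour"]))
```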
In this scenario, if more data are needed to make any pattern clearer, part or all of the two previous years’ data can be added because the chart of all the data showed the same stable behavior as the current year, i.e., the same process.
A very common issue: If you have multiple facilities, are these patterns similar (stratify by location)? Many times, they are not. A time plot comparison of individual overall performances (separately and on the same scale) is usually quite insightful.
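One way to draw that comparison is a set of small-multiple time plots sharing one scale; this sketch uses fabricated monthly no-show rates for three hypothetical sites:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
sites = ["Site A", "Site B", "Site C"]
# Fabricated monthly no-show percentages: same noise, different levels
rates = {s: 10 + shift + rng.normal(0, 1.5, 24)
         for s, shift in zip(sites, [0, 3, -2])}

# Separate panels, same y-axis scale, so levels are directly comparable
fig, axes = plt.subplots(len(sites), 1, sharex=True, sharey=True,
                         figsize=(6, 6))
for ax, s in zip(axes, sites):
    ax.plot(range(1, 25), rates[s], marker="o")
    ax.axhline(rates[s].mean(), linestyle="--")  # each site's own average
    ax.set_title(s)
    ax.set_ylabel("% no-show")
axes[-1].set_xlabel("Month")
plt.tight_layout()
plt.show()
```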
Focus... focus... focus.... After exhausting all available in-house data, additional questions may arise that require more formal data collection to seek out deeper special causes. This involves more effort and planning, but 1) the need has been demonstrated; and 2) its only objective is diagnostic, i.e., to focus further where to invest precious improvement energy.
Using this strategy requires letting the process continue as is, but with temporary, nonroutine collection of data that are pretty much there for the taking; they just need some plan to be grabbed and recorded. Compared with using convenient in-house data, this involves slightly more inconvenience, which usually proves to be worth it.
Make this clear to the collectors: it is slightly inconvenient, but it will not become “permanently temporary.” There is no “sample size,” only “enough”—enough until the picture is clear. It may take only a couple of days or up to one to two weeks.
A simple example: Suppose a plot of “daily yield” shows common cause at about 90 percent. You have three operators, all of whose outputs are collected onto a common conveyor belt and 100-percent inspected.
What if one immediately suggested solution were, “Ninety percent is unacceptable. We need new machines to get us to at least 95 percent”? However, suppose the three operators’ yields are consistently 85, 90, and 95 percent. Unless the data are stratified by operator to expose this, it could be a missed opportunity to get the process to a consistent 95 percent yield. This knowledge might already exist in the current process.
Isn’t a result like this worth the temporary inconvenience of getting this easily collectible data—and not for very long?
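Here is that yield example as a sketch in pandas, with fabricated inspection records constructed so the three operators run at exactly 85, 90, and 95 percent:

```python
import pandas as pd

# Fabricated per-unit inspection results, tagged by operator; the operator
# tag is the temporary, nonroutine piece of data being collected
records = pd.DataFrame({
    "operator": ["A"] * 200 + ["B"] * 200 + ["C"] * 200,
    "passed": [1] * 170 + [0] * 30    # A: 85 percent
            + [1] * 180 + [0] * 20    # B: 90 percent
            + [1] * 190 + [0] * 10,   # C: 95 percent
})

# Overall, the yield looks like a single common-cause level of 90 percent...
print(f"overall yield: {records['passed'].mean():.1%}")

# ...but stratifying by operator exposes three distinct, consistent levels
print(records.groupby("operator")["passed"].mean())
```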
Important: Keep it focused only on your objective. I see it all too often: “While you’re at it, why don’t you also collect...?” or, “Wouldn’t it be neat to also know...?” No, it wouldn’t. Trust me; you’re going to have enough problems with human variation when collecting the data you need.
If you are able to isolate some focused opportunities, now would be the time to brainstorm a productive cause-and-effect diagram on that isolated 20 percent of the process.
Once again, notice: Has there been any mention of any goal?
Until next time....
Links:
[1] http://www.qualitydigest.com/inside/management-column/111416-which-deming-s-14-points-should-i-start.html
[2] http://www.qualitydigest.com/inside/health-care-column/112816-unknown-or-unknowable-yet-shocking.html