Donald J. Wheeler


Different Approaches to Process Improvement

Does your approach do what you need?

Published: Monday, June 6, 2022 - 12:03

Many different approaches to process improvement are on offer today. An appreciation of how each approach works is crucial to selecting one that will be effective. Here we look at the problem of production and consider how the different improvement approaches deal with it.

The problem of production

For the purposes of the following discussion, a cause-and-effect relationship will exist when changes in the value of a cause result in changes in the value of a product characteristic. Here we will define the effect of a cause to be the variance created in the product stream as the cause varies through its natural range of values. Of course, when the cause is constrained so that it cannot vary, then it will no longer create any variation in the stream of product values.
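
To make this definition concrete, suppose for illustration that causes $C_1, C_2, \ldots, C_k$ act independently (an assumption introduced here only for the sketch, not one the argument requires). Then their effects, expressed as variances, simply add:

    \sigma^2_{\text{product}} = \sigma^2_1 + \sigma^2_2 + \cdots + \sigma^2_k

Constraining cause $i$ to a fixed value removes its term $\sigma^2_i$ from this sum, so it no longer contributes variation to the product stream.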

Any specific product characteristic will be the result of dozens, if not hundreds, of cause-and-effect relationships. These causes can be divided into two groups: those causes that we know well enough to name, and those causes that operate without our knowledge and that therefore remain unknown to us. Before we start production, the effects of all of these causes may well be unknown, resulting in a picture like figure 1.


Figure 1: Two categories of cause-and-effect relationships

Typically R&D will study a subset of the known causes to determine their effects. These studied causes will be those that are thought to have pronounced effects upon the product characteristic. Once these effects are known, this set of studied causes can be organized into a Pareto diagram as shown in figure 2.


Figure 2: Pareto of known effects for the studied causes

Denote causes one through five as Group One causes. These causes have dominant effects and are the causes we will want to control during production. By holding these five causes constant, we will effectively remove their effects from the product stream. At the same time, the fixed values chosen for each of these five causes will collectively determine the process average.

Call causes six through 14 the Group Two causes. These causes have such small effects that we will not attempt to control them in production. (Typically these are those causes where the cost of control exceeds the benefit of control.)

If no attempt were made to control any of the 14 causes studied, the effects of the five Group One causes would make up 85 percent of the variation in the product stream, while the nine Group Two causes would contribute the remaining 15 percent. So by controlling the Group One causes, we remove 85 percent of the variation due to the studied causes.
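
As a rough numerical sketch of this bookkeeping (the individual effect variances below are hypothetical, chosen only to reproduce the 85/15 split described above):

    # Hypothetical effect variances for the 14 studied causes (arbitrary units).
    # Causes 1-5 (Group One) carry the dominant effects; 6-14 (Group Two) are small.
    group_one = [30.0, 22.0, 15.0, 10.0, 8.0]                   # sums to 85
    group_two = [3.0, 2.5, 2.0, 1.8, 1.5, 1.2, 1.0, 1.0, 1.0]   # sums to 15

    total = sum(group_one) + sum(group_two)      # 100
    removed = sum(group_one) / total             # 0.85 of the studied variance
    remaining_sd = (sum(group_two) / total) ** 0.5

    print(f"Controlling Group One removes {removed:.0%} of the studied variance")
    print(f"Remaining standard deviation: {remaining_sd:.0%} of the original")

Note that these percentages are percentages of the variance. Removing 85 percent of the variance still leaves a standard deviation about 39 percent of the original, since standard deviations go as the square root of the variance.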

Group Three causes will be those known causes that were not studied by R&D. Generally, these are causes whose effects were thought to be minimal and were therefore considered not to be worth investigation. However, as noted in figure 2, these causes will actually have unknown effects.

For convenience, let’s denote the collection of all of the unknown cause-and-effect relationships that affect our process without our knowledge as Group Four causes.


Figure 3: Process variation comes from all the uncontrolled causes.

Unfortunately, the variation in the product stream is not limited to the causes in Group Two alone. Causes in Groups Three and Four will also contribute to this variation. As causes in these three groups vary, they will each contribute incremental variation to the product characteristic, and all of these sources of variation will add up to result in the variation in the product stream. This is why the variation observed in production will generally exceed the variation predicted by R&D.

Finally, there is no guarantee that the unknown effects of the causes in Groups Three and Four will all be small. Furthermore, these effects can change over time. These changes can occur with wear and tear, with changes in personnel, with evolving operating techniques, and with changes in the supply of materials. When these changes occur, they can further complicate the question of process improvement.


Figure 4: Some uncontrolled causes may have dominant effects.

Thus, figures 3 and 4 define a framework against which we can evaluate different improvement approaches. They characterize what we know and what we do not know so that we can see how a particular approach deals with each element in the problem of process improvement.

From figure 3 we see that problems of setting the process aim will involve the choice of levels for causes in Group One. Problems of reducing the variation in the product stream will involve causes in the other three groups.

So how can we reduce the process variation? Essentially, the only way to reduce variation is to remove an effect from the product stream by holding its cause steady at some fixed value. In other words, we remove variation by moving a cause from Group Two, Three, or Four into Group One.

However, before it will be economical to move a cause into Group One, the benefit will have to exceed the cost. This means that a cause will have to have a dominant effect before it will be economical to attempt to control it.

So the problem of reducing the process variation involves finding causes with dominant effects within Groups Two, Three, or Four, and then moving these causes into Group One.

Experimental approaches

Several approaches to process improvement are based on running a series of experiments. Experiments allow us to study selected causes to quantify their effects upon a given product characteristic. These experimental approaches cover everything from simple experiments with a single cause to designed experiments involving multiple causes. Regardless of the complexity involved, experimental studies always require the manipulation of process inputs. This limits experimental approaches to the study of known causes from Groups One, Two, or Three.

When causes from Group One are studied, you will have an optimization study. Here you will be seeking to find that combination of values for the Group One causes that will result in an optimum value for the process aim.

When experimental studies are used with causes from Groups Two or Three, the objective is to identify any causes that might have a dominant effect. When such causes are found, they can then be moved to Group One to remove their variation from the product stream. Of course, experiments with causes from Group Two will be looking for large effects where R&D originally found only small effects. And experiments that study causes from Group Three will be looking for large effects where no large effects were thought to exist. However, over time, due to wear and tear and other effects of entropy, causes that formerly had a small effect can evolve into causes with a large effect.
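
As a minimal sketch of how such a screening experiment quantifies effects (the design and the response values below are invented for illustration), a two-level experiment reduces each main effect to a difference of averages:

    # Hypothetical 2x2 factorial: two Group Three causes, each run low (-1) and high (+1).
    # Each row is (cause_A, cause_B, measured product characteristic).
    runs = [
        (-1, -1, 10.2), (+1, -1, 10.4),
        (-1, +1, 14.1), (+1, +1, 14.5),
    ]

    def main_effect(runs, index):
        """Average response at the high level minus average at the low level."""
        high = [y for *x, y in runs if x[index] == +1]
        low = [y for *x, y in runs if x[index] == -1]
        return sum(high) / len(high) - sum(low) / len(low)

    print(f"Effect of cause A: {main_effect(runs, 0):+.2f}")  # small effect: stays uncontrolled
    print(f"Effect of cause B: {main_effect(runs, 1):+.2f}")  # dominant effect: candidate for Group One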

Because small effects can grow in this way, experiments with causes in Groups Two and Three can sometimes be helpful. In figure 5, causes 18 and 16 were found to have dominant effects and were moved from Group Three to Group One. This reduced the average cost of production and use (ACP&U) for this process to 75 percent of what it was in figure 4.


Figure 5: What experimental studies may achieve

But what about causes in Group Four? Although we are not able to study unknown causes in an experiment, this does not mean that our experimental results are exempt from the effects of any unknown causes in Group Four. If a dominant cause from Group Four happens to change during the course of an experiment, it can wreck the analysis and ruin the experiment. (Most statisticians can tell you stories of what happened when some extraneous variable from outside the study messed up an experiment.)

So while experimental studies are essential in setting up a process, and while they allow us to review the effects of various process inputs, they face certain limitations in the problem of process improvement. Although experiments allow us to get definite answers to specific questions, they are of limited utility when we do not know what questions to ask.

Observational approaches

Experimental studies always start by identifying a set of causes to study. Observational approaches do not do this. Rather, they seek to learn about the process by using the existing data. Since existing data will generally be obtained while the causes in Group One are being held constant, observational approaches will tend to focus on the uncontrolled causes.


Figure 6: What we need to know

The idea behind an observational approach is that we really do not need to know the sizes of all of the effects in figure 6. Rather, we only need to know which uncontrolled causes have dominant effects (causes 16, 18, 22, 24, and 30 here). Once we know which causes have effects that are large enough to change the product stream, we know which causes need to be moved to Group One. And we can make this decision without actually quantifying the size of their effects. The following sections describe the two basic types of observational studies.

Data-snooping approaches

With today’s computing power, new gee-whiz approaches to analyzing existing data are becoming popular. These approaches used to be called data snooping, but today they are known as big data, artificial intelligence, or machine-learning approaches. Regardless of the name, these approaches collect all of the available data into a database and use some mathematical method to look for patterns, groupings, or relationships within the data. As promising as this sounds, and regardless of how many variables are used, the basic problem with these approaches is that the data will always have an incomplete context.

Context is so essential to analysis that it is the first axiom of data analysis: No data have any meaning apart from their context. Yet the data-snooping approaches will never have the complete context. No matter how many variables you include in the database, you can never include variables from Group Four. The unknown cause-and-effect relationships will never be measured, and therefore cannot be part of the database. (If we knew enough to measure their effects, they would be known causes rather than unknown causes.) Consequently, although data-snooping approaches may help you discover relationships between the known causes and your product characteristic, they cannot identify any unknown causes.

The implicit assumption behind all data-snooping techniques is that there is a homogeneity of conditions behind the data—that the variables not included in the database do not have any real impact upon the outcomes studied. Yet there is no effective check on this fundamental assumption. When unknown causes have dominant effects upon the process, they can completely distort the patterns found by the data-snooping approaches. So while these approaches attempt to find patterns among the known variables, they can be undermined by variables from Group Four.
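
A small simulation (entirely hypothetical) illustrates the mechanism. The known cause below has a true effect of only 0.1, but because an unmeasured Group Four cause shifts partway through the record, and both the known cause and the hidden cause move with time, the fitted relationship is badly inflated:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    known = np.linspace(0, 4, n) + rng.normal(0, 0.3, n)  # a known cause that drifts upward over time
    hidden = np.where(np.arange(n) < n // 2, 0.0, 5.0)    # unmeasured Group Four cause: a step change
    y = 0.1 * known + hidden + rng.normal(0, 0.3, n)      # true effect of the known cause is only 0.1

    slope = np.polyfit(known, y, 1)[0]
    print(f"True effect: 0.10   Fitted effect: {slope:.2f}")  # roughly 1.9, inflated by the hidden cause

Nothing in the fit itself warns us that this has happened; the model simply absorbs the hidden shift into a spurious coefficient for the known cause.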

Process behavior charts

Process behavior charts have a proven track record as an observational approach to process improvement. They provide an operational definition of how to get the most out of any process. The running record of the product characteristic displays the actual process performance. The limits of the process behavior chart define the process potential—what the process is capable of achieving when it is operated on-target with minimum variance. By superimposing the process performance upon the process potential, a process behavior chart provides a way to judge how close a process is to operating at full potential. Moreover, it allows us to identify when a change has occurred in the process. And these changes are the key to identifying the unknown causes with dominant effects from Group Four.
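
For a chart of individual values, the computation behind the limits is short enough to sketch here (this follows the usual XmR calculation with the 2.66 and 3.268 scaling factors; the data are placeholders):

    # Natural process limits for an XmR chart (individual values and moving ranges).
    values = [10.1, 10.4, 9.8, 10.2, 10.0, 10.3, 9.9, 10.2, 10.1, 12.6, 10.0, 10.2]  # placeholder data

    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    x_bar = sum(values) / len(values)
    mr_bar = sum(moving_ranges) / len(moving_ranges)

    unpl = x_bar + 2.66 * mr_bar   # upper natural process limit
    lnpl = x_bar - 2.66 * mr_bar   # lower natural process limit
    url = 3.268 * mr_bar           # upper limit for the moving ranges

    signals = [x for x in values if x > unpl or x < lnpl]
    print(f"Natural process limits: {lnpl:.2f} to {unpl:.2f}")
    print(f"Upper range limit: {url:.2f}")
    print(f"Points signaling a process change: {signals}")

Here the point at 12.6 falls above the upper limit. That signal, not the limits themselves, is what points us toward a dominant cause worth investigating.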

Some critics who have not understood how process behavior charts work claim that they are “old hat.” But when it comes to mathematics, age does not invalidate a technique or change its applicability. Calculus is more than 300 years old, while the Pythagorean theorem is at least 2,500 years old. So although Walter Shewhart created the process behavior chart more than 90 years ago, the concept behind the chart is 2,200 years older. It was Aristotle who taught us that we should look at those points where the system changes in order to discover those causes that affect the system. And this is, in effect, what the process behavior chart allows us to do.

By identifying those points where the process changes, a process behavior chart allows us to detect causes with dominant effects that come from any of the four groups.

By waiting until the process displays a change in behavior, we also let the process prioritize the causes according to the size of their effects. In this way we discover those causes that have dominant effects without having to waste time and effort studying the many causes with trivial effects.

It is only with the approach of Shewhart and Aristotle that we can discover the unknown causes from Group Four that have dominant effects. It is both the known and unknown causes with dominant effects that create the signals found by process behavior charts. The ability to learn about causes in Group Four represents a major advantage of using process behavior charts. It allows us to learn about mistakes, bad practices, and dumb things that actually happen in production but would never be studied in any R&D program. And it allows us to discover in real time when things go wrong so they can be fixed in a timely manner. Thus, process behavior charts are both more general and more robust than the other approaches.


Figure 7: What can be achieved with process behavior charts

For the process shown in figure 7, the average cost of production and use (ACP&U) will be only 25 percent of the average cost of production and use for the process shown in figure 4. This fourfold increase in quality and productivity comes from moving causes 22, 24, 18, 16, and 30 into Group One.

Summary

Experimental approaches to process improvement can only study known cause-and-effect relationships. Although such studies are essential when setting up a process, they have limitations as a process improvement technique. Studies involving causes in Groups One and Two will replicate previous research, while studies of causes in Group Three will be searching for nuggets that were missed in previous research. So experimental studies involving causes in Groups Two and Three will spend time, money, and effort essentially reconfirming that most of these causes still have small effects. Moreover, although experiments cannot study causes in Group Four, experimental results can be undermined by Group Four causes that have dominant effects.

Data-snooping approaches to process improvement include big data, artificial intelligence, and machine-learning techniques that seek to model the data and discover relationships using causes in Groups Two and Three. Unfortunately, despite their complexity and sophistication, these approaches can also be undermined by Group Four causes with dominant effects.

Only the process behavior chart explicitly looks for upsets created by both known and unknown causes. Aristotle’s approach of studying the points where the process changes allows us to discover things that are beyond the scope of experimental studies and that cannot be discovered by data-snooping techniques. In addition, the process behavior chart approach does not waste time and effort on quantifying trivial effects. By always focusing on causes with dominant effects, process behavior charts allow us to learn how to operate our processes predictably, with minimum variance and on target.

So if you are confident that you can discover things that were overlooked by R&D, then go ahead and use an experimental approach to process improvement.

Or if, unlike Aristotle, you are confident that your process is unchanging over time, then use one of the data-snooping approaches to process improvement. The complexity and sophistication of these techniques will impress everyone. But be aware that your results may only be as durable as a house of cards.

If you simply want to improve your process, then use process behavior charts to discover how to operate your process up to its full potential. Nothing even comes close to delivering so much with so little effort.


About The Author


Donald J. Wheeler

Dr. Wheeler is a fellow of both the American Statistical Association and the American Society for Quality who has taught more than 1,000 seminars in 17 countries on six continents. He welcomes your questions; you can contact him at djwheeler@spcpress.com.


Comments

How to measure variation on the y-axis

Wonderful article, Don. Thank you.

The y-axis states "Variation in Product due to Each Cause." How do we calculate these values using a control chart?

Regards

Multi Vari

Sometimes a multi-vari study is a good compromise between purely experimental and purely observational approaches.

Different Approaches and the High-Reliability Organization

This article is another in a long line of excellent articles on SPC and process improvement. Thanks for this, Don!

I just wanted to add two cents to this, because I recently had what looked like a good opportunity: helping a healthcare organization on its journey to becoming a high-reliability organization (HRO). The concept came from studies in the late '80s of aircraft carrier group operations and airline flight operations. The idea is to work a highly complex set of operations and develop processes and systems that yield very high reliability, i.e., very low defective counts.

What evolved from the research were five basic traits or values shared by HROs:

1. Sensitivity to operations - leaders and staff are constantly aware of how processes and systems affect the organization. 

2. Reluctance to accept "simple" explanations - HROs recognize the risk of "common sense" assumptions and the failure to dig deeply enough to find the real sources of problems.

3. A preoccupation with failure - every employee at every level is encouraged to identify and (if possible) find solutions to problems.

4. High reliability organizations defer to expertise - leaders at HROs listen to people who have the most knowledge about a task. 

5. HROs are resilient - (sometimes "relentless"). This is essentially what Deming called "Constancy of Purpose," but it is also about seeing mistakes as opportunities for improvement, and relentlessly pursuing those opportunities.

The reason I bring this up is that, as we were beginning what looked like a promising journey, it struck me that one of their problems was the problem of rarity. They are looking for problems that occur rarely (the crash of an aircraft, cross-infection in a surgical suite). These fit well into Don's Group Four.

The prime contractor I was working for ended up losing the contract before we got very far, but their approach didn't strike me as effective. They immediately decided that the only way to make progress was to ignore value 2: All the healthcare professionals involved thought that complex problem-solving paradigms (including SPC or any other statistical methods) would be too confusing, so they mandated the lean 9-block A3 as their only problem-solving method.