Scott A. Hindle

Six Sigma

Use the Context in Your Data to Enable Process Improvement

To understand the signals in your data you need to know how they were collected

Published: Thursday, March 10, 2022 - 13:03

In 2010, new to the world of statistical process control (SPC), I was intrigued by Don Wheeler’s statement that “No data have meaning apart from their context” (from his book, Understanding Variation—The Key to Managing Chaos, SPC Press, 2000, available on Amazon). For a while, I didn’t really get the importance of this message.

Now, some years later, and working mainly to support manufacturing processes, data analysis for me begins in context and ends in context. Moreover, communicating the results to others is driven by context, and the simpler it is done, the better. To see an example, read on.

Time order of production

All, or practically all, manufacturing data have an essential piece of context, which is a time stamp. It’s the time stamp that allows you to put your data in a logical sequence—the time order of production—when you start the analysis.

The importance of this was once hammered home to me in a proposed case study in which a series of data values in Excel had been ordered from lowest number to highest number. As you may guess, this case study—part of an SPC training class—didn’t turn out as I’d hoped. (If you’re not sure about this point, construct a control chart of data put together in an ascending order and see what you get.)
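To illustrate that last point, here is a minimal sketch using made-up data (not the article's): the natural process limits of a chart for individual values are calculated from the average moving range, and sorting the same values into ascending order shrinks the moving ranges and, with them, the limits, so that most points falsely appear as signals.

```python
# Minimal sketch with made-up data: why time order matters for an XmR chart.
import numpy as np

rng = np.random.default_rng(1)
values = rng.normal(loc=45.0, scale=0.6, size=50)  # stand-in for hourly measurements

def xmr_limits(x):
    """Natural process limits for individual values: average +/- 2.66 * mean moving range."""
    x = np.asarray(x, dtype=float)
    mr_bar = np.mean(np.abs(np.diff(x)))  # average moving range of successive values
    return x.mean() - 2.66 * mr_bar, x.mean() + 2.66 * mr_bar

print("Time order:      ", xmr_limits(values))
print("Sorted ascending:", xmr_limits(np.sort(values)))
# Sorting collapses the moving ranges, so the limits tighten dramatically and the
# "chart" shows a trend full of false signals -- an artifact of the ordering.
```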

The example

For a routine production process, 147 hourly measurements for a key product characteristic in one product are plotted on a process behavior chart, also known as a control chart for individual values, as shown in Figure 1. The data are plotted in their time order of production. (The moving range part of the chart isn’t shown.)


Figure 1: Process behavior chart of hourly data
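For readers who want to reproduce a chart like Figure 1 on their own data, the sketch below shows one way to do it. The file name is a stand-in (the article's data aren't provided); the limits come from the average moving range, and points outside them are circled as signals.

```python
# Sketch of a Figure 1-style chart: individual values in time order, natural
# process limits, and signals (points outside the limits) circled in red.
# "hourly_measurements.txt" is a hypothetical stand-in for the 147 values.
import numpy as np
import matplotlib.pyplot as plt

values = np.loadtxt("hourly_measurements.txt")

mr_bar = np.mean(np.abs(np.diff(values)))            # average moving range
centre = values.mean()
lnpl, unpl = centre - 2.66 * mr_bar, centre + 2.66 * mr_bar
signals = (values < lnpl) | (values > unpl)

fig, ax = plt.subplots(figsize=(10, 4))
ax.plot(values, marker="o", lw=1)
ax.plot(np.flatnonzero(signals), values[signals], "o", mfc="none", mec="red", ms=12)
ax.axhline(centre, color="green")
ax.axhline(lnpl, color="red", ls="--")
ax.axhline(unpl, color="red", ls="--")
ax.set_xlabel("Hour (time order of production)")
ax.set_ylabel("Measured value")
plt.show()
```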

As those accustomed to SPC will recognize immediately, the voice in the process data is trying to tell us that we’re looking at a very unpredictable process. For the product characteristic under study, this unpredictability means that:
• Per these data, we have no reasonable means of predicting what the process will give us in upcoming production (e.g., what would we expect of the next 50 values?)
• A big reduction in variation can be achieved if the causes of unpredictability are identified and controlled so that their effect is eliminated from the process

Note that even if your software provides you with an apparently impressive process capability report, process capability has little meaning for such a set of process data. Why? Because a capable process must also be a predictable process (or “in control,” or stable in behavior over time). For a fuller discussion of the dependence of capability on predictability, see my series of Quality Digest articles “Process Capability: What It Is and How It Helps,” Parts 1, 2, 3, and 4. These articles are also discussed in a 2016 episode of Quality Digest Live.

A change in specifications

At the time the data were collected, the specifications for the characteristic under study were somewhat forgiving and none of the measurements were out of specification.

But newer, stricter specifications were due to come into effect the following year, providing another reason to justify improvement efforts. Without improvement, the best-case scenario would be a surge in rework, which would bring a lot of undesired attention to the plant in question.

Leveraging the context

Figure 1 leaves no doubt that the process is unpredictable: The many circled points outside the limits are all signals of unpredictability. To understand these signals, we can do better than Figure 1 if we leverage the context in the data.

Signals of unpredictability tell you of real changes in a process. They invite you to dig deeper with the aim of pinpointing when the changes in the process occurred so the cause(s) of these changes can be identified. With the causes identified, effective actions—to better control these causes—can be defined. The idea is to work smarter, not harder, as W. Edwards Deming urged us to do. (The graph you choose to use is important. A histogram, for example, has many virtues, but it buries the time sequence contained in the data. Often, the time sequence is tremendously helpful as an aid to process improvement efforts.)

In addition to the time stamp, a second piece of context in these data is the production run. These 147 measurements were collected over 11 different production runs—different days of production—meaning that we have two important sources of variation in the data to recognize:
• The variation within each run
• The variation between the runs 

Figure 2 is a process behavior chart built on the context of 1) time order of production; and 2) production run. Each set of limits in Figure 2 is calculated from the data from that particular production run.


Figure 2: Process behavior chart with limits per production run
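As a sketch of how the per-run limits in Figure 2 might be computed, assume the measurements sit in a CSV file with hypothetical columns "run" and "value", already in time order of production (the actual data aren't provided):

```python
# Sketch: each production run gets its own centre line and limits, calculated
# from that run's average moving range. Columns "run" and "value" are assumed.
import pandas as pd

df = pd.read_csv("hourly_measurements.csv")  # hypothetical file, in time order

for run_id, group in df.groupby("run", sort=False):
    centre = group["value"].mean()                  # green (central) line for the run
    mr_bar = group["value"].diff().abs().mean()     # average moving range within the run
    print(f"Run {run_id}: centre = {centre:.2f}, "
          f"limits = ({centre - 2.66 * mr_bar:.2f}, {centre + 2.66 * mr_bar:.2f})")
# The up-and-down movement of the centre values from run to run is the
# between-run variation; the limits describe the within-run variation.
```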

Moving across Figure 2 from left to right, we see that, for the most part, the points are within each run’s limits, but that the limits shift up and down from run to run. This observation became key in this work. (It might be easiest to follow the up-and-down movement of the green lines: Each green (central) line is the average of a production run.)

As the team interpreted Figure 2, a key focus area became what happened between production runs, since this was “when” the changes in the process occurred. The team explored the question, “What was different, or what changed, from one production run to the next?”

More often than not, different batches of a key raw material were used for different production runs, which correlated with the detected shifts between runs. The team identified an inconsistency in the content of a key component of this raw material as the likely cause of the observed shifts—the ups and downs—between the production runs.

To eliminate this effect from the process, data on the raw material were used before production to adjust the quantity of raw material introduced into the process. For example, if the concentration of the key component was higher than expected, a lower quantity than the nominal was used.
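The article doesn't give the plant's actual adjustment rule, but a simple proportional rule of this kind might look like the sketch below; the function name and all numbers are illustrative only.

```python
# Hypothetical feedforward adjustment: scale the raw-material charge so that the
# amount of the key component stays constant when its concentration varies.
NOMINAL_QUANTITY = 100.0      # assumed nominal charge of raw material per batch (kg)
NOMINAL_CONCENTRATION = 20.0  # assumed nominal concentration of the key component (%)

def adjusted_quantity(measured_concentration: float) -> float:
    """Hold quantity x concentration constant at the nominal product."""
    return NOMINAL_QUANTITY * NOMINAL_CONCENTRATION / measured_concentration

print(adjusted_quantity(22.0))  # higher concentration -> less than nominal (~90.9 kg)
print(adjusted_quantity(18.5))  # lower concentration  -> more than nominal (~108.1 kg)
```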

Although this action wasn’t the only one needed to achieve the required improvements, it was probably the most important. First, this action brought about the biggest decrease in variation in the characteristic under study, so it was the biggest contributor to the achieved improvements. Second, it created faith in the team that the lessons from process behavior charts are worth discovering, paying attention to, and acting on.

Choosing the right graph for communication

Communication is different from analysis. We may use more complicated techniques to analyze data but opt for simpler techniques to present and communicate the results to others.

For communication, one reason to consider a time-series plot rather than a process behavior chart (control chart) is that the risk of confusion around control limits is eliminated. Sometimes control limits are mistaken for specification limits, and the conversation goes, at least momentarily, in the wrong direction.

Remember that Figure 2’s process behavior chart made visible the lack of consistency between production runs. Using a time-series plot, one option is to highlight the different production runs by color and missing line segments, as seen in Figure 3. This graph was used to gain agreement that it was worth the effort to investigate and identify the causes of inconsistency between production runs.


Figure 3: Time-series plot of the hourly data. Missing line segments and different colors are used to distinguish each production run.

Figure 3 is a simple graph—easy to create and easy to interpret—that is built around the context in the data. Such a graph can be sufficient to leverage the context in the data in such a way that meaningful decisions and actions can be defined to move things forward successfully.
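A Figure 3-style graph is also easy to produce: plotting each production run as its own line segment gives both the colour change and the gap at each run boundary. The sketch below assumes the same hypothetical CSV file as above.

```python
# Sketch of a Figure 3-style time-series plot: one line segment per production
# run, so run boundaries show up as gaps and colour changes.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("hourly_measurements.csv")   # hypothetical columns "run" and "value"
df["hour"] = range(len(df))                   # time order of production

fig, ax = plt.subplots(figsize=(10, 4))
for run_id, group in df.groupby("run", sort=False):
    ax.plot(group["hour"], group["value"], marker="o", lw=1, label=f"Run {run_id}")
ax.set_xlabel("Hour (time order of production)")
ax.set_ylabel("Measured value")
ax.legend(ncol=4, fontsize=8)
plt.show()
```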

Food for thought

Another piece of context is that the specifications to be introduced the following year were 42.5 and 47.5 (lower and upper specification limits, respectively). Against these specifications, 12 of the original 147 measurements are out of specification; hence the comment above about a possible surge in rework if nothing changes.
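Checking the data against the new specifications is a small calculation; the sketch below assumes the same hypothetical data file as above (per the article, the count is 12 of 147).

```python
# Count values outside the new specification limits of 42.5 and 47.5.
import numpy as np

LSL, USL = 42.5, 47.5
values = np.loadtxt("hourly_measurements.txt")   # hypothetical stand-in file
out_of_spec = np.count_nonzero((values < LSL) | (values > USL))
print(f"{out_of_spec} of {values.size} values outside [{LSL}, {USL}]")
```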

The control limits in Figure 1—a lower limit of 43.3 and an upper limit of 46.9—can tell us something about the potential capability of the process under study. What is this? I’ll answer this question in the comments section next week. Your thoughts on this question, or on other points and ideas in this article, are welcome.


About The Author

Scott A. Hindle

Scott Hindle supports R&D and factory operations on process capability studies for new products and processes, statistical process control (SPC) for use in routine production, and the use of online measurement devices as a part of both SPC and engineering process control.

Comments

Potential capability

“Potential” capability: The limits in Figure 1 provide an approximation* of the capability to expect IF, and only IF, the assignable causes in this process are identified and effectively controlled.

The “IF” here is key: Without action on the assignable causes, this “potential” capability has no meaning. Note the elephant in the room: No predictability = no actual process capability.

In this example, because action on the assignable causes was planned, and did happen, the useful information in the potential capability was this: Achieve a predictable process and a capable process can be expected. (This is because the limits in Figure 1, 43.3–46.9, fall inside the specification range of 42.5 to 47.5.)
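One way to read the back-of-the-envelope arithmetic behind this comment, using only the numbers quoted above (and remembering it is an approximation, essentially an upper bound):

```python
# Width of the natural process limits in Figure 1 versus the new specification range.
LNPL, UNPL = 43.3, 46.9   # natural process limits from Figure 1
LSL, USL = 42.5, 47.5     # new specification limits

print((USL - LSL) / (UNPL - LNPL))   # ~1.39: the limits fit inside the specs, so a
                                     # predictable process could be expected to be capable
```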

*This approximation is essentially an upper bound, but this point is secondary in the bigger picture of what capability means for an unpredictable process.