
Steve Wise

Quality Insider

How to Determine In-Process Sampling Strategies

What should you measure or avoid?

Published: Wednesday, March 27, 2013 - 11:34

Determining an effective in-process sampling strategy can be a tricky business. What should you measure? What should your sample size be? What are the pitfalls? Your approach can be the determining factor in whether you will ever attain a true understanding of process performance or see any significant improvements in quality, uptime, or deliverability at cost.

Developing sampling plans for acceptance sampling is typically a well-documented process based on industry-accepted standards and practices designed to detect if a lot meets an acceptable quality level. Most quality managers use acceptable-quality-level tables to determine the number of parts to sample from a given lot size. However, developing in-process sampling strategies is more than referring to tables; it requires an understanding of the manufacturing process, patterns of variability, historical stability of the process, and a willingness to use data to drive improvements.

Why in-process sampling matters

In-process sampling is valuable because collecting data throughout a manufacturing run lets you verify that the process is operating as intended. Done properly, sampling provides an early detection point so operators can take corrective action before continuing a run of unacceptable product. Acceptance sampling only at the end of the run may be common practice, but it provides no real-time notification when a process starts to misbehave, and it adds to the risk that bad product won't be identified before it heads out the door.

The director of quality at a manufacturer of precision plastics for laboratory use told me how an incident that forced the company to scrap several pallets of finished product, at significant cost, became the impetus for changing his sampling approach. Originally, he performed chemical testing on batches by sampling at the end of the production process. After determining where the problems were in the molding and packaging process, he changed the work procedures and began sampling during setup. The chemical testing is time-consuming, but he now tests for the most likely contaminants first, during setup runs, to catch problems early in the process.

What to measure

What to measure typically falls into one of two categories: part measurements, such as diameter and thickness, or process parameters, such as temperature and pressure. Sampling in both categories can reveal variability and instability in the process and can be used to bring the process back on track. The goal is to detect special causes of process variability so that immediate corrective action can be taken.

Part measurement sampling uses control charts to track the process's ability to maintain a stable mean with consistent variability about that mean. Ideally, the mean of the data stream is very close to the feature's target value. Any measurement outside the upper or lower control limit indicates that the process mean or variability has deviated from historical norms. In addition, a number of patterns that occur within the control limits, such as runs and trends, act as early warnings.
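To make the mechanics concrete, here is a minimal Python sketch of how X-bar and R control limits might be computed from subgroup data. It is an illustration, not a reference to any particular SPC package; the subgroup size of five and the Shewhart constants (A2 = 0.577, D3 = 0, D4 = 2.114) come from standard SPC tables.

```python
import statistics

# Shewhart constants for subgroups of size n = 5 (standard SPC tables)
A2, D3, D4 = 0.577, 0.0, 2.114

def xbar_r_limits(subgroups):
    """Centerlines and control limits for X-bar and R charts,
    given a list of equal-size subgroups (size 5 assumed here)."""
    xbars = [statistics.mean(g) for g in subgroups]
    ranges = [max(g) - min(g) for g in subgroups]
    grand_mean = statistics.mean(xbars)  # X-bar chart centerline
    r_bar = statistics.mean(ranges)      # R chart centerline
    return {
        "xbar_cl": grand_mean,
        "xbar_ucl": grand_mean + A2 * r_bar,
        "xbar_lcl": grand_mean - A2 * r_bar,
        "r_ucl": D4 * r_bar,
        "r_lcl": D3 * r_bar,
    }

def out_of_control(subgroups, limits):
    """Indexes of subgroups whose mean falls outside the limits."""
    return [i for i, g in enumerate(subgroups)
            if not limits["xbar_lcl"] <= statistics.mean(g) <= limits["xbar_ucl"]]
```

Detecting the within-limit patterns mentioned above (runs, trends) would require additional tests, such as the Western Electric run rules, beyond this simple limit check.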

When deciding what process parameters to measure, choose those that have a direct effect on quality, and then determine what the optimum settings should be to deliver consistent quality. For example, if the temperature of an incoming fluid has no effect on the outgoing quality, but the flow rate does, then it’s better to monitor the flow rate.

Setting sampling requirements

After establishing what to measure, the next step is to determine the actual sampling requirements, such as how often to take samples and how many measurements per sample, while also factoring in the risks and costs of sampling. When determining how often to sample, it's helpful to think about how long the process can hum along and still produce good product. If the process tends to be very stable, then minimal measurements (for instance, at the beginning, middle, and end of the run) may suffice. If the process is less predictable, then more sampling is in order.

If in-process adjustments are typically needed every couple of hours, then consider taking at least two samples between adjustment periods. These samples will show what happens to the process within each adjustment period. In addition to time-based sampling intervals, samples should also be taken whenever there is a known change in the process, such as when the shift changes, during setup, at start-up, or when tooling is refreshed.
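As a rough illustration of combining time-based and event-based triggers, here is a small Python sketch. The two-hour adjustment period, the two samples per period, and the event names are all assumptions made for the example, not prescriptions from the article.

```python
from datetime import timedelta

ADJUSTMENT_PERIOD = timedelta(hours=2)  # assumed adjustment cadence
SAMPLES_PER_PERIOD = 2                  # at least two samples per period
EVENT_TRIGGERS = {"shift_change", "setup", "startup", "tooling_refresh"}

def due_for_sample(last_sample_time, now, event=None):
    """True if a sample is due, either because a known process change
    occurred or because the time-based interval has elapsed."""
    if event in EVENT_TRIGGERS:
        return True
    return now - last_sample_time >= ADJUSTMENT_PERIOD / SAMPLES_PER_PERIOD
```

An event such as `event="setup"` triggers a sample regardless of how much time has passed since the last one.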

In some cases, there is no historical process knowledge on which to base a reasonable sampling strategy. In these cases, consider sampling 100 percent for as long as it takes to expose the process variability patterns, and then, if conditions warrant, reduce sampling as you begin to understand the process behavior better.

Sample size

Generally, most textbooks use sample sizes of 1, 3, 5, and 10. In industry these sizes have become common as well. When the sample size is greater than one measurement, the assumption is that the values are consecutive. That is, if three bottle weights make up the subgroup, those three bottles were manufactured consecutively.

The purpose of a subgroup is to provide a snapshot of a process's mean and the short-term variability about that mean. If you capture five consecutive measurements, then you have a more definitive measure of the mean and short-term variability than with three measurements. But at some point, the strength of the statistic no longer improves appreciably with increasing sample size. As a rule, you'll gain more process knowledge by taking samples more frequently than by increasing the number of measurements within each sample.
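The diminishing return is easy to quantify: the standard error of a subgroup mean shrinks only with the square root of the subgroup size. A quick sketch, assuming a process standard deviation of 1.0 purely for illustration:

```python
import math

sigma = 1.0  # assumed short-term process standard deviation (illustrative)
for n in (1, 3, 5, 10, 20):
    se = sigma / math.sqrt(n)  # standard error of the subgroup mean
    print(f"n = {n:2d}: standard error of the mean = {se:.3f}")
```

Going from one measurement to five cuts the standard error by more than half, but doubling again from 10 to 20 trims it by less than a third, which is why more frequent, smaller samples usually buy more process knowledge.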

Sometimes a sample size of one is the only size that makes sense. For example, differences among three consecutive samples taken from a homogeneous product (e.g., agitated gravy in a mixing tank) would only be an indication of measurement error. A better strategy in this situation is a sample size of one. If the mixing tank were sampled again, say 30 minutes later, the difference between the two measurements would indicate how much the feature has changed since the last sample. A sample size of one is also appropriate when only one value exists, such as overtime hours for a given day or peak temperature for a given oven cycle.
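The conventional chart companion to a sample size of one (not named in the article, but standard SPC practice) is the individuals and moving-range (I-MR) chart, which estimates short-term variability from the gaps between successive measurements. A minimal sketch with illustrative data:

```python
import statistics

def individuals_limits(values):
    """Individuals chart limits for a sample size of one.
    2.66 = 3 / d2, where d2 = 1.128 for a two-point moving range."""
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    x_bar = statistics.mean(values)
    mr_bar = statistics.mean(moving_ranges)
    return {"cl": x_bar,
            "ucl": x_bar + 2.66 * mr_bar,
            "lcl": x_bar - 2.66 * mr_bar}

# e.g., tank samples taken 30 minutes apart (values are illustrative)
print(individuals_limits([4.1, 4.3, 4.0, 4.2, 4.4, 4.1]))
```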

Improving sampling strategy

There are typically three situations that call for modifying a sampling strategy. The first is when a failure occurs but is not detected until downstream in the process; this indicates a need to change what is measured upstream or to increase the sampling frequency. The second is when no failures are ever detected, indicating that less frequent sampling may be appropriate. The third is when the measured product feature shows no variation at all. This indicates either that the process produces to tighter tolerances than the measurement system can detect, or that someone is arbitrarily recording a value that he knows will fall within limits.

Common pitfalls

Data can provide more value than one might think. In in-process data collection, the useful life of a single point is short if the data are used only to provide real-time feedback. As important as real-time use is, the value of those data is far from over. Historical data become a valuable process database: all data collected for real-time decisions take on a "second life," helping quality professionals determine what to do today to make things better tomorrow. Analyzing and mining these data can yield golden nuggets of process improvement, and slicing and dicing them exposes relationships that would otherwise go undetected.
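As a small illustration of that second life, here is a sketch of slicing historical in-process data by shift with pandas. The column names and values are hypothetical, not drawn from the article.

```python
import pandas as pd

# Hypothetical slice of a historical process database
df = pd.DataFrame({
    "shift":    ["A", "A", "A", "B", "B", "B"],
    "diameter": [10.02, 10.01, 10.03, 10.08, 10.09, 10.07],
})

# Grouping by shift exposes a between-shift difference that
# point-by-point real-time monitoring would never surface.
print(df.groupby("shift")["diameter"].agg(["mean", "std", "count"]))
```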

Another common pitfall is not using software investments to their full capability. There is a tendency to configure statistical process control (SPC) software to meet current goals and then forget it, but the software can usually accommodate additional processes and sampling opportunities. For example, a worker may still use a clipboard to complete a pre-operation checklist; today this can be done on a tablet or smartphone, eliminating the paper, saving time, and improving data integrity. Having the additional data in the process database also improves process analysis capabilities.

Build a strategy that lasts

Finally, don’t let in-process sampling improvement efforts stagnate. Make sure there are always two internal personnel who really know the in-process sampling strategies and are constantly looking for new ways to use the SPC software.

At the precision plastics manufacturer, the director of quality’s next goal is to further refine his sampling plans to make them dynamically respond to inspection results. Similar to an acceptable quality level methodology, sampling plans for in-process inspection will increase or decrease sampling, based on the rejection history of a particular product line or process. The director notes, “I’m confident we can reduce the time and cost of inspections while maintaining or improving our internal product quality.”
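As a sketch of what such a dynamic plan might look like in code, here is a simplified switching rule in the spirit of acceptance-sampling schemes such as ANSI/ASQ Z1.4. The three levels and all thresholds are assumptions for illustration, not the manufacturer's actual rules.

```python
def next_sampling_level(level, recent_rejects, recent_accepts):
    """Move between reduced, normal, and tightened sampling
    based on recent rejection history (illustrative thresholds)."""
    if level == "normal":
        if recent_rejects >= 2:
            return "tightened"   # repeated failures: sample more
        if recent_accepts >= 10 and recent_rejects == 0:
            return "reduced"     # sustained clean history: sample less
    elif level == "tightened" and recent_accepts >= 5:
        return "normal"
    elif level == "reduced" and recent_rejects >= 1:
        return "normal"
    return level
```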

About The Author

Steve Wise

Steve Wise is the vice president of statistical methods for InfinityQS, helping companies across industries implement real-time statistical process control (SPC) and advanced statistical tools in production. He co-authored an industry standard, "D1-9000 Advanced Quality System," in 1991 for Boeing suppliers. Wise is co-author of the book Innovative Control Charting: Practical SPC Solutions for Today's Manufacturing Environment (ASQ Quality Press, 1997).

Comments

Textbook sample sizes

"Generally, most textbooks use sample sizes of 1, 3, 5, and 10." Can you give me a hint in which textbook I could find a rationale for these sample sizes?

Thanks!

Sample size of subgroups

It depends on:

1) What is possible to sample: Can you sample as much as you want?

2) The analysis cost: Do you have enough resources to pay for the analysis?

3) The analysis time: How fast do you get the results? The reason you do in-process sampling is that you want to react in time if a value is out of tolerance. If the analysis of 30 samples can't be completed quickly enough to take corrective action and steer your process back within tolerance, then you need to reduce the number of samples.

4) How accurate do you want your estimate to be? Look at the 1-alpha confidence interval of your point estimates for the average and the standard deviation. Remember that the average is approximately normally distributed and that the standard deviation follows a chi-square distribution. If both ranges of uncertainty are narrow enough for your application, then you can choose that specific sample size. If you look at graphs that plot sample size versus the 95-percent confidence range, for example, you can see that going higher than a sample size of 20 doesn't do much to narrow the range (see the sketch after this comment).

I hope this type of evaluation helps you better than any value written in a textbook. If somebody asks you why you have chosen that value for the sample size, you can't answer "recommended in a textbook". It is better to validate it and record it in a validation document with reasoning. If someone asks "why?" then you can refer to the validation document.

Same goes for the sampling setup where you must define the sampling interval:

1) fixed interval and fixed sample size

2) varying interval and fixed sample size

3) fixed interval and varying sample size

4) varying interval and varying sample size
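Point 4 in the comment above is easy to see with a short computation. This sketch, assuming a sample standard deviation of 1.0 for illustration, prints 95-percent confidence ranges for the mean and for the standard deviation at several sample sizes; the narrowing visibly flattens past n = 20, as the commenter notes.

```python
import math
from scipy import stats

s = 1.0  # assumed sample standard deviation (illustrative)
for n in (5, 10, 20, 30, 50):
    df = n - 1
    # 95% confidence half-width for the mean (Student's t)
    mean_hw = stats.t.ppf(0.975, df) * s / math.sqrt(n)
    # 95% confidence interval for the standard deviation (chi-square)
    sd_lo = s * math.sqrt(df / stats.chi2.ppf(0.975, df))
    sd_hi = s * math.sqrt(df / stats.chi2.ppf(0.025, df))
    print(f"n = {n:2d}: mean +/- {mean_hw:.3f}, sd in [{sd_lo:.3f}, {sd_hi:.3f}]")
```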

how to determine sampling rate and sample size

Perhaps my paper on sampling rate and sample size in SPC will prove helpful to some of the people who commented. I would be very happy to receive feedback.

You can download it here: http://www.hrpub.org/journals/article_info.php?aid=1587

Regards

Thanks Steve, Your post came

Thanks, Steve. Your post came just as I was thinking along the same lines regarding sample size and frequency.

In my opinion, if you had included the interpretation of control charts, with some examples and exactly how to derive sample size and frequency, it would have made the article even better.

Moreover, showing how these could change in different situations would enrich it further.

If you could cover this in a forthcoming post, I will look forward to it.

Thanks,

Mukundraj.

Wise-dom

Wise words, Mr. Wise, thank you. An all too common failure in quality management systems is to green-light incoming raw materials & components after an incoming sample-based inspection. When asked "why THAT sample size?" and "why THOSE controls?" the auditees' usual answer is "we have always done so". But very seldom, unfortunately, there's positive interaction, or exchange, with the processing - or production - processes. But there is where the problems arise. I don't blame quality people except for one thing: they keep looking at quality as at after-event, therefore control, instead of looking at it as before-event, therefore prevention, process. And, let's be honest, the ISO TC 176 certainly doesn't do really much to enhance prevention. Thank you.