The success run theorem is one of the most common statistical rationales for sample sizes used for attribute data.
It is typically stated in the form:
With zero failures out of 22 samples, we can be 90% confident that the process is at least 90% reliable (that is, that at least 90% of the population is conforming).
Or...
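For reference, the 22-sample figure follows from the zero-failure form of the theorem: the confidence C that reliability is at least R after n consecutive successes is C = 1 - R^n, so solving for n with C = R = 0.90 and rounding up gives 22.

```latex
C = 1 - R^{\,n}
\quad\Longrightarrow\quad
n = \frac{\ln(1 - C)}{\ln R}
  = \frac{\ln(0.10)}{\ln(0.90)}
  \approx 21.85
  \;\longrightarrow\; 22 \text{ samples}
```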
A simple approach for quantifying measurement error that has been around for over 200 years has recently been packaged as a “Type 1 repeatability study.” This column considers various questions surrounding this technique.
A Type 1 repeatability study starts with a “standard” item. This standard...
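The excerpt stops before the calculation itself, but as a minimal sketch of one common formulation (repeated readings of a single standard summarized by Cg and Cgk; the 20%-of-tolerance share and six-standard-deviation spread used here are conventional defaults assumed for illustration, not taken from this column):

```python
import statistics

def type1_gage_study(measurements, reference, tolerance, k_percent=20, sd_multiplier=6):
    """Sketch of a Type 1 repeatability study on repeated readings of one standard.

    measurements : repeated readings of the standard item
    reference    : the accepted (certified) value of the standard
    tolerance    : full tolerance width of the characteristic
    k_percent    : share of the tolerance allotted to the gage (commonly 20%)
    sd_multiplier: number of standard deviations in the study spread (commonly 6)
    """
    mean = statistics.mean(measurements)
    s = statistics.stdev(measurements)      # repeatability estimate
    bias = mean - reference

    # Cg: tolerance share divided by the spread of the repeated readings
    cg = (k_percent / 100 * tolerance) / (sd_multiplier * s)
    # Cgk: additionally penalizes bias away from the reference value
    cgk = (k_percent / 100 * tolerance / 2 - abs(bias)) / (sd_multiplier / 2 * s)
    return {"mean": mean, "s": s, "bias": bias, "Cg": cg, "Cgk": cgk}

# Illustrative data: 10 repeat readings of a standard whose reference value is 10.00
readings = [10.01, 9.99, 10.02, 10.00, 9.98, 10.01, 10.00, 10.02, 9.99, 10.01]
print(type1_gage_study(readings, reference=10.00, tolerance=0.50))
```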
Since 2010, citations for insufficient corrective and preventive action (CAPA) procedures have topped the list of the most common issues found during U.S. Food and Drug Administration (FDA) inspections, particularly in the medical device industry. Issues can occur while...
Chunky data can distort your computations and result in an erroneous interpretation of your data. This column explains the signs of chunky data, outlines the nature of the problem that causes it, and suggests what to do when it occurs.
When the measurement increments used are too large for the job...
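A small simulation (the data and the 0.1 rounding increment are illustrative assumptions, not from the column) shows the effect: recording values to too coarse an increment collapses the moving ranges to a handful of discrete values and shifts the computed XmR limits.

```python
import random
import statistics

def xmr_limits(data):
    """Individuals (X) chart limits from the average moving range (d2 = 1.128 for n = 2)."""
    moving_ranges = [round(abs(b - a), 3) for a, b in zip(data, data[1:])]
    mr_bar = statistics.mean(moving_ranges)
    x_bar = statistics.mean(data)
    sigma_hat = mr_bar / 1.128          # sigma estimated from the average moving range
    return x_bar - 3 * sigma_hat, x_bar + 3 * sigma_hat, sorted(set(moving_ranges))

random.seed(1)
fine = [round(random.gauss(10.0, 0.05), 3) for _ in range(50)]   # recorded to 0.001
chunky = [round(x, 1) for x in fine]                             # same values recorded to 0.1

for label, data in (("fine", fine), ("chunky", chunky)):
    lo, hi, mr_values = xmr_limits(data)
    print(f"{label:7s} limits: {lo:.3f} to {hi:.3f}; distinct moving-range values: {len(mr_values)}")
```

With the coarse increment, only two or three distinct moving-range values remain, which is the kind of symptom the column describes as a sign of chunky data.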