The QA Pharm


Six Methods for Verifying CAPA Effectiveness

How to choose the right method and time frame

Published: Monday, July 27, 2015 - 15:39

Verifying the effectiveness of corrective and preventive actions (CAPAs) closes the loop between identifying a problem and completing the actions to solve it. It’s reasonable to assume that if a problem is worth solving, it’s also worth verifying that the solution worked. However, given the wide range of problems that could occur, determining the best verification approach and time frame to implement it can often seem elusive.

Before we discuss CAPA effectiveness, we need to look at a few of the reasons why performing this check is often a challenge. Here are six:
1. The problem isn’t well defined. Sometimes breaking the mental logjam is as simple as asking, “What problem were we trying to solve?” That sounds like an easy question, but when the answer isn’t well defined or simply stated, success is hard to measure.
2. The root cause isn’t determined. This is a natural consequence of the first reason. It’s next to impossible to determine the root cause of a fuzzy problem, or of one that seems too complicated to explain. Those who try get an equally fuzzy root cause.
3. It’s not really a CAPA. All too often the CAPA system becomes a quality work-order system (aka a dumping ground) because commonly used data management systems, such as Trackwise, provide project management structure and visibility. But without a stated problem or a determined root cause, it’s not a CAPA; it’s just a project.
4. CAPA effectiveness verification is used for everything. CAPA effectiveness verification can be too much of a good thing when it’s expected for every possible CAPA. This usually stems from the cascading problem of a CAPA being required for every deviation, and a deviation being required for every conceivable blip. Soon you become a drowning victim of your own making.
5. We overthink it. Rather than letting reason prevail, some people tend to complicate just about everything, and determining and applying an appropriate verification method is no exception. Yes, we operate in a scientific environment, but not every method of verifying effectiveness must be labor-intensive. Major processes need not be applied to minor problems.
6. It’s not considered important. Some believe that living with an ongoing problem is the path of least resistance, especially compared to conducting the same boilerplate investigation (i.e., same problem, different day) in order to get on with the real work of production. A high tolerance for recurring problems is truly the root cause for many who are treading water in a deviation-swirling tide pool.

Assuming we have a real CAPA, where an investigation was conducted on a well-defined problem to determine the root cause and product impact, we can turn to the regulatory requirements and business obligations to evaluate how well we spent our resources to eliminate the problem permanently. This brings us to the options for verifying CAPA effectiveness. Here are six:
1. Auditing is used when the solution involves changes to a system; verification checks whether the changes are in place procedurally and in use behaviorally. An example is an audit of a new line-clearance checklist to verify its effective implementation.
2. Spot check is used for random observations of performance or reviews of records. Spot checks provide immediate but limited feedback. An example is a spot check of batch records to ensure that the pH step was performed correctly after training on the new procedure.
3. Sampling is used for observations of variables or attributes per a defined sampling plan. An example is a statistical sample randomly drawn from lot XYZ123 to confirm that a defect has been eliminated after implementing a process improvement. (A minimal sketch of the sampling and trend arithmetic follows this list.)
4. Monitoring is used for real-time observations over a defined period. An example of monitoring is real-time observation to verify that changes to operator gowning practices were implemented.
5. Trend analysis is the retrospective review of data to verify that expected results were achieved. An example of trend analysis is a review of environmental monitoring (EM) data covering the last 30 batches to show a downward trend in EM excursions resulting from process improvements.
6. Periodic product review is a retrospective review of trends across multiple parameters, done at least annually, to confirm the state of control. An example of periodic product review is a review of data after major changes were made to the facility and equipment as part of a process technology upgrade following a recall.
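For methods three and five, the quantitative claims (“per a defined sampling plan,” “downward trend”) can be pinned down before verification begins. The sketch below is illustrative only and not part of the original article: it uses the standard zero-acceptance attribute-sampling formula (accept only if no defects are found in the sample) and an ordinary least-squares slope as a simple stand-in for a formal trend analysis. The lot number and excursion counts are hypothetical.

```python
import math

def zero_acceptance_sample_size(max_defect_rate: float, confidence: float) -> int:
    """Smallest n such that finding no defects in n random units supports,
    at the given confidence, the claim that the true defect rate is below
    max_defect_rate. Derived from (1 - p)^n <= 1 - C."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - max_defect_rate))

def ols_slope(values: list[float]) -> float:
    """Ordinary least-squares slope of values against their index.
    A negative slope suggests a downward trend (e.g., fewer EM
    excursions per batch after a process improvement)."""
    n = len(values)
    x_mean = (n - 1) / 2.0
    y_mean = sum(values) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(values))
    den = sum((x - x_mean) ** 2 for x in range(n))
    return num / den

# Hypothetical plan: verify the CAPA on lot XYZ123 with 95% confidence
# that fewer than 1% of units carry the defect.
print(zero_acceptance_sample_size(0.01, 0.95))  # -> 299 units

# Hypothetical EM excursion counts for the last 10 batches.
print(ols_slope([5, 4, 4, 3, 3, 2, 2, 1, 1, 0]))  # negative -> downward trend
```

A real trend analysis would normally rest on a control chart or a hypothesis test rather than a bare slope; the point is only that both methods reduce to numbers that can be agreed on up front.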

Now that we have a real CAPA and have selected a method to verify its effectiveness, we must determine an appropriate time frame for performing the verification. Time frames are subjective, but there must be a basis for the decision. Here are points to consider when determining an appropriate time frame for CAPA effectiveness verification.

Allow relatively less time after implementing the solution when there is:
• Higher opportunity for occurrence and observation
• Higher probability of detection
• An engineered solution

In these cases, fewer observations are needed for a high degree of confidence.

Allow relatively more time after implementing the solution when there is:
• Lower opportunity for occurrence and observation
• Lower probability of detection
• A behavioral or training solution

In these cases, more observations are needed for a high degree of confidence.
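These considerations can be sketched as a rough rule of thumb. The function below is a hypothetical illustration, not a prescribed tool: the cutoffs and window lengths are assumptions that would need to be replaced with your own documented rationale.

```python
def verification_window_months(opportunities_per_month: float,
                               detection_probability: float,
                               engineered_solution: bool) -> int:
    """Suggest a CAPA effectiveness verification window in months.

    Illustrative heuristic only; the cutoffs and multipliers below are
    assumptions, not regulatory guidance. Frequent opportunity for
    occurrence, high detectability, and an engineered solution argue
    for a shorter window; rare, hard-to-detect problems and behavioral
    or training solutions argue for a longer one."""
    months = 1 if opportunities_per_month >= 20 else 3
    if detection_probability < 0.8:
        months *= 2  # low detectability: wait for more evidence to accumulate
    if not engineered_solution:
        months *= 2  # behavioral fixes decay over time: watch longer
    return months

# An engineered fix with daily opportunity and high detectability:
print(verification_window_months(100, 0.95, True))  # -> 1 month

# A training fix for a rare, hard-to-detect problem:
print(verification_window_months(10, 0.7, False))   # -> 12 months
```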

Following are several fictitious examples of CAPAs that require an effectiveness verification. Which CAPA effectiveness verification method and time frame would you recommend?

Example one
Problem: There are widespread errors in selecting an appropriate effectiveness verification and time frame in the Trackwise fields when compared to the requirement in the new procedure.

Root cause: There is a general lack of understanding of acceptable CAPA effectiveness review methods that would satisfy the procedural requirement.

CAPA: Develop and deliver targeted training on CAPA effectiveness verification methods to CAPA system users who have the responsibility to make this determination.

Example two
Problem: Transcription errors are being made when copying information from sample ID labels to laboratory notebooks.

Root cause: Labels made on the current label printer are frequently unreadable.

CAPA: Replace the current label printer with one that produces legible labels.

Example three
Problem: An incorrect number of microbiological plates, relative to the requirement in SOP XYZ123, was delivered to the lab on two separate occasions by a newly trained operator after routine sanitization of Room A.

Root cause: The instructions in SOP XYZ123 leave more room for interpretation than intended, which can mislead inexperienced operators.

CAPA: Revise SOP XYZ123 to be more specific about the required number and specific placement of micro-plates in Room A.

Example four
Problem: Increased bioburden levels were noted in the microfiltration process train A.

Root cause: The phosphate buffered saline delivery piping system upstream of the microfilter exhibited high bioburden levels.

CAPA: Revise the cleaning procedure to incorporate a water-for-injection (WFI) flush to remove residual harvest material from the process piping, and provide training on the flushing process.

Example five
Problem: A statistically significant trend was observed in assay X results for six lots of the 25 mg vial manufactured at site A, but not the 10 mg vial manufactured at site B for the same period.

Root cause: There was a difference in sample preparation techniques between the two sites.

CAPA: Revise the sample preparation of the test method to establish consistency between sites, and provide training on revised test method.

First published March 17, 2015, on The QA Pharm blog.


About The Author


The QA Pharm

The QA Pharm is a service of John Snyder & Co. Inc., a consulting firm that helps FDA-regulated companies build quality management systems, develop corrective actions that address regulatory compliance observations, and craft communication strategies to protect against enforcement action. John E. Snyder worked at the lab bench, on the management board, and as an observer of the pharmaceutical industry for more than 30 years. His posts on The QA Pharm blog are straight talk about the challenges faced by company management and internal quality professionals. Snyder is the author of Murder for Diversion (Jacob Blake Pharma Mystery Series Book 1).