David Currie


Metrics: The Good, the Bad, and the Ugly

Part three: the ugly

Published: Monday, December 17, 2018 - 12:02

This is part three of a three-part series. Read about good metrics in part one and bad metrics in part two.

Have you ever dreaded a metric reviewed month after month, one that defies logic, where no action taken ever seems to show up in the numbers? It is most likely a bad metric in so many respects that it has turned ugly. Let’s look at a sample ugly metric.

At first glance this metric seems quite reasonable and logical, as the graph below indicates. The metric is intended to track warranty expense and use it to guide improvements in the warranty area. The statistic comes from accounting data and charts the monthly warranty costs. It also compares the current year to the previous year. The expectation is that if the quality system is actively pursuing warranty issues, then the number should decrease over time. The dollar amount is derived from five factors:
1. Monthly dollars spent on materials sent out as replacement components “component returns”
2. Cost in dollars associated with unit returns for freight damage
3. Cost in dollars associated with unit returns for customer rejections
4. Dollars spent to compensate distributor expenses associated with customer quality concerns, called “tool box adjustments” (TBAs)
5. Dollars associated with freight damage claim resolution, “claims collected”

So the total dollars per month are: (component returns + box returns [freight damage + customer rejections] + TBAs) – claims collected.
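Put as a quick sketch, the monthly roll-up looks like this (the dollar figures below are hypothetical, purely to illustrate the formula; they are not data from the report):

```python
# Total monthly warranty dollars, per the formula above.
# All figures are hypothetical examples, not data from the article.
component_returns = 12_000.0   # replacement components shipped out
freight_damage    = 25_000.0   # unit returns due to freight damage
customer_rejects  = 6_000.0    # unit returns due to customer rejections
tbas              = 4_000.0    # tool box adjustments paid to distributors
claims_collected  = 9_000.0    # freight damage claims recovered

box_returns = freight_damage + customer_rejects
total_monthly_dollars = (component_returns + box_returns + tbas) - claims_collected
print(total_monthly_dollars)  # 38000.0
```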

Take a moment to review the graph. Note the large changes in total dollars month to month. If there really was any improvement, how could you tell? Wide swings in data from month to month should be the first sign that something isn’t right. Of course, a completely flat chart would also be cause for concern.

This metric suffers from the same issue described in part two, namely a conglomeration of too many processes. There are some additional significant issues, as follows:

Accounting dollars are often used in metrics because they reflect the bottom line. Again, everything appears OK, so why is it that actions taken one month don’t appear to have any impact on the next? The answer is simple once you take a closer look. Accounting accumulates and reports dollars in the month they are paid, whereas improvements take effect in production order (date code/serial number order). The two orderings are unrelated. Whenever accounting dollars are used, make sure that enough detail is available to arrange the dollar amounts in the same order as the process being measured.

A second way the data can mislead is that there is no reference to quantity produced. When speaking in terms of warranty dollars, one must consider the cost per available unit. To correct this, the first question one should ask is, “What is the warranty period that contributes to the two factors, component returns and TBAs?” The answer is that these units have a lifetime warranty. How long has the company been in business? Since 1938, or roughly 77 years.

Let’s assume we are only talking 25 years. Let’s also assume that the company produces roughly 100 units a day, using a five-day work week and a 50-week year. This implies that there are more than 625,000 units available that could contribute to the warranty expense (as of the date of the report). This total grows at the rate of 2,083 units a month. Of course, the actual number of units produced would yield a more accurate cost per unit, but this approximation will do for a quick analysis. Thus, any improvement that would impact these two areas would have at most a 0.33 percent effect on the overall number, assuming everything is produced perfectly. This means that even a very large improvement wouldn’t register.
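The unit-count arithmetic in this paragraph can be checked with a few lines (same assumptions as above: 100 units/day, five-day weeks, 50-week years, 25 years of production):

```python
units_per_day = 100
units_per_year = units_per_day * 5 * 50   # 25,000 units a year
fielded_units = units_per_year * 25       # 625,000 units under lifetime warranty
units_per_month = units_per_year // 12    # ~2,083 units added each month

# Even a perfect month of production affects at most one month's share
# of the fielded population:
max_monthly_effect = units_per_month / fielded_units
print(fielded_units, units_per_month, round(max_monthly_effect * 100, 2))
# 625000 2083 0.33
```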

For the two areas, component dollars and TBA dollars, let’s compare the cost per unit produced, using the YTD (year to date) data from the report. The units produced as of 2015 would be 600,004. The number in 2016 would be 625,000. Using these numbers, let’s look at the incurred cost per unit in the following table:


Year   Available units produced   Component $ incurred   Component $/unit   TBA $ incurred   TBA $/unit   Total $/unit
2015   600,004                    —                      —                  —                —            —
2016   625,000                    —                      —                  —                —            —

(The dollar figures in this table did not survive in this copy of the article.)

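The per-unit figures in the table above are simple division. A sketch, using the unit counts from the text and hypothetical dollar amounts (the report’s actual dollars are not reproduced here):

```python
# Unit counts come from the text; dollar amounts are hypothetical placeholders.
data = {
    2015: {"units": 600_004, "component": 50_000.0, "tba": 30_000.0},
    2016: {"units": 625_000, "component": 45_000.0, "tba": 28_000.0},
}
for year, d in data.items():
    comp_per_unit = d["component"] / d["units"]
    tba_per_unit = d["tba"] / d["units"]
    total_per_unit = comp_per_unit + tba_per_unit
    print(year, round(comp_per_unit, 4), round(tba_per_unit, 4), round(total_per_unit, 4))
```

Spread over the entire fielded population, even sizable dollar totals shrink to fractions of a cent per unit, which is the point the author is making.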
This suggests improvement during the previous year in the area of long-term warranty costs, and should be applauded. 

The last category includes returns for freight damage and quality defects, with the largest cost incurred by far coming from freight damage. Putting the YTD numbers together in the following table, however, we see improvement from 2015 to 2016. The data show an 8.5-percent cost improvement in a year.


Year   Warranty expense for freight damage   Warranty expense for defects   Total warranty expense
2015   —                                     —                              $ 292,762
2016   —                                     —                              $ 267,998

(The freight damage and defect breakdowns did not survive in this copy of the article.)
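The 8.5-percent figure checks out against the two surviving totals, reading the larger as the 2015 total warranty expense and the smaller as 2016:

```python
# YTD total warranty expense, from the table in the article.
total_2015 = 292_762.0
total_2016 = 267_998.0
improvement = (total_2015 - total_2016) / total_2015
print(f"{improvement:.1%}")  # 8.5%
```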

This metric was very misleading when presented as a multipage Excel chart. The charts were pretty, and appeared to show a very volatile process with no discernible improvement from month to month. The data were not well understood by the presenter or the reviewers, so they were shown in a very negative light. The hour-long meetings were brutal, rife with accusations and the expectation of large changes in the numbers over a one-month period. These data would have been better reviewed on a product-packaging basis to relate corrective-action implementation to product damage.

This metric failed every aspect of the definition of a good metric. The Excel file also included detail data, but these did not “total up” to the dollar data presented, so the metric did not accurately represent the processes involved. The detail data could not be reviewed in depth due to missing information, misleading detail, and a failure to include all recovery costs.

The component returns category included a list of which components were shipped as replacements and how many, but no detail as to the part number of the units returned, where they came from, or what, if anything, was wrong with them. This information was not being collected. Thus, there was truly no indication of what needed improvement, because the component-returns process was designed to process returns as quickly as possible, with no focus on improvement.

The TBA detail data included new TBAs opened that month, classified into one of five gross categories, which severely limited any analysis. This was because the purpose of the TBA process was to compensate the sales force by providing a monetary adjustment; collecting defect data was only a sideline. The TBAs included pictures of the identified defects, which needed to be reviewed to identify and classify the actual defects and the process source. These were difficult to get to and could take many hours to collect and review. The detail data did not total up because the report should have listed the TBAs closed that month.

The box returns detail data listed model numbers only, making it impossible to trace the numbers to the actual units included or the month produced. So what happened to the units that were returned? They were analyzed for damage and defects, reworked if possible, and returned to stock either as new units or as discount units. Units that were not usable were stripped of needed components and scrapped. The data did not reflect any recovery costs. Improvement would have been reflected in the recovery dollars, because the actions taken greatly reduced the amount of damage incurred during shipping.

The effort was directed toward improved packaging, since roughly half of the warranty cost incurred came from freight damage. All of the improvement work aimed to better protect the equipment during shipment, which should result in less actual damage. Without assessing the extent of damage, or the percentage of units incurring damage for each of the 159 different models, it would be impossible to determine which corrections made an improvement and which did not. More to the point, corrections were introduced in chronological production order by month, while the cost data were based on costs paid by month, so it was truly apples to oranges.

The moral of this dialogue is that if a metric does not meet the following criteria, then no one can determine if actions taken have made any impact at all on the processes.

Characteristics of a good metric


    The metric supports the goals and objectives of the quality system.
    The data contain sufficient detail to allow analysis of specific defects.
    Data are carefully collected, checked for accuracy and completeness.
    Data are combined in a way that clearly represents and follows the process.
    The data collection process is clearly understood.
    There is a clear relationship between the process and the data being used.
    The review interval of the metric matches the response time for corrections.
    The metric helps save money through improved processes.


Everyone involved in the use of metrics, from managers and data collectors to presenters and action takers, has a responsibility to make sure their metrics meet these criteria. Otherwise, we are doomed to waste valuable resources and energy senselessly.


About The Author


David Currie

David Currie is a quality professional with a broad background of experience in the nuclear (ANSI N45.2), commercial (ISO 9001), automotive (QS-9000), aerospace (AS 9100), and defense (MIL-Q-9858) quality systems.

