Eighty-four doctors treated 2,973 patients, and an undesirable incident occurred in 13 of the treatments (11 doctors with one incident and one doctor with two incidents), a rate of 0.437 percent. A p-chart analysis of means (ANOM) for these data is shown in figure 1.
This analysis is dubious. A good rule of thumb: Multiplying the overall average rate by the number of cases for an individual should yield an expected count of at least five incidents. Each doctor would need more than 1,000 cases to even begin to come close to this!
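The arithmetic behind that rule of thumb is quick to check. A minimal sketch (the variable names are mine, not the column's):

```python
# Rule-of-thumb check: each doctor's expected incident count, n * p_bar,
# should be at least 5 for the p-chart/ANOM comparison to be trustworthy.
p_bar = 13 / 2973        # overall incident rate, about 0.437 percent
threshold = 5 / p_bar    # cases needed so that n * p_bar >= 5
```

With these figures the threshold works out to roughly 1,140 cases per doctor, far more than the 199 or so cases a typical doctor in this data set has.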
The table in figure 2 uses the technique discussed in last month’s column, “A Handy Technique to Have in Your Back Pocket,” calculating both “uncorrected” and “corrected” chi-square. Similar to the philosophy of ANOM, I take each doctor’s performance out of the aggregate and compare it to those remaining to see whether they are statistically different. For example, in figure 2, the first doctor had one incident in 199 patient treatments. So I compared his rate of 1/199 to the remaining 12/2,774.
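One common form of this comparison treats each doctor-versus-the-rest split as a 2×2 table and computes the chi-square statistic with and without Yates’ continuity correction. The sketch below follows that standard formulation; whether it matches the exact “corrected” formula from last month’s column is an assumption on my part:

```python
def chisq_2x2(a, b, c, d, yates=False):
    """Chi-square statistic for a 2x2 table [[a, b], [c, d]].
    a = incidents for the doctor under study, b = that doctor's
    incident-free cases; c, d = the same for all remaining doctors."""
    n = a + b + c + d
    diff = abs(a * d - b * c)
    if yates:
        # Yates' continuity correction, clamped at zero
        diff = max(0.0, diff - n / 2)
    return n * diff ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# First doctor: 1 incident in 199 cases vs. 12 in the remaining 2,774
uncorrected = chisq_2x2(1, 198, 12, 2762)
corrected = chisq_2x2(1, 198, 12, 2762, yates=True)
```

Both values are far below any conventional significance threshold, consistent with the doctor not being detectably different from his peers.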
As Davis Balestracci frequently emphasized in his column, “RealWorldSPC,” published in Quality Digest for four years, it is fundamental to understand the context of the data before you begin to do any computations. It is the background for your data that determines how you should organize the data, how you should analyze the data, and how you should interpret the results of your analysis. Once you ignore the context, you’re like a train that has gone off the track, with the inevitable result.
One day a company sent me some data that it had spent more than a month collecting. These data represented the results of an experimental study carried out using production batches. For each of 30 batches the company recorded all sorts of production information, along with the experimental conditions that applied to that batch. At the end of the production process it took 40 items from each batch and measured the property of interest. Thus, it had a total of 1,200 values: 40 values for each of the 30 batches.
I often agree with Scott Paton’s columns bemoaning the state of customer service in America (“Customer Service?” “Quality Curmudgeon,” October 2008). Then there are times when something happens that gives you hope.

Our 10-year-old daughter has several American Girl dolls. One of these dolls had a problem--her eyelash had come off. However, for a not-inconsequential fee, you can ship your American Girl to the “doll hospital” and have eyes, limbs, and even heads replaced in about two weeks. (I guess this is actual “plastic” surgery.) The doll returns, repaired, in a hospital gown, with a certificate of health and a get-well balloon. So, we sent the doll off from our home near Chicago on Friday to the doll hospital in Middleton, Wisconsin.

We were surprised to discover a doll-shaped box on our doorstep the following Wednesday--implying a stay at the doll hospital of a mere 24 hours. When opened, my daughter found her doll as promised with a new eyelash, in the hospital gown, with balloon and a clean-bill-of-health certificate. My wife and I were shocked to discover an additional item in the box--a letter from American Girl that stated, “You expressed concern that your doll required repair. Upon examining her, we agree with you that American Girl is responsible for these problems.”
In “Is Three Sigma Good Enough?” (H. James Harrington, “Performance Improvement,” June 2008), the author makes some good points about whether the cost of getting to a Six Sigma level is really justified, but his case rests on his particular set of numbers. I could make up different numbers to prove that going to a Six Sigma level is cost-justified. The point that’s being missed, I feel, is the real cost of poor quality. I believe that Taguchi suggested the measurable cost of poor quality to the customer should be squared to get the real costs. If we square Mr. Harrington’s cost of bad widgets, it presents a very different picture. One can never capture the full cost of a poor product in the marketplace, but it is certainly much more than is presented in this example. As the late Philip Crosby would say, “There has never been a case where the cost of repairing a bad product was cheaper than doing it right the first time!”
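For readers unfamiliar with the idea the letter writer is invoking: Taguchi’s loss function is usually stated as a quadratic, L(y) = k(y − T)², where T is the target value and k is a cost constant. A minimal sketch (the function name and the constant are mine, chosen for illustration):

```python
def taguchi_loss(y, target, k):
    """Taguchi quadratic loss: cost to society grows with the SQUARE of
    the deviation from target, rather than jumping from zero to full
    cost at a specification limit."""
    return k * (y - target) ** 2

# Doubling the deviation from target quadruples the loss:
loss_small = taguchi_loss(12.0, 10.0, 1.0)   # deviation of 2
loss_large = taguchi_loss(14.0, 10.0, 1.0)   # deviation of 4
```

This quadratic growth is why simply tallying scrap and rework, as in the cited example, tends to understate the true cost of poor quality.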
The Commerce Department’s National Institute of Standards and Technology (NIST) and President Bush recently announced the recipients of the 2008 Malcolm Baldrige National Quality Award, the nation’s highest honor for organizational innovation and performance excellence. The winners are:
• Cargill Corn Milling North America, Wayzata, Minnesota, www.cargill.com (Manufacturing)
• Poudre Valley Health System, Fort Collins, Colorado, www.pvhs.org (Health care)
We own a washer and dryer combo unit made by a large home appliance manufacturer--let’s call them “Maytag.” The washer and dryer are in the company’s Neptune series. We’ve had them for about seven years and they’ve worked flawlessly, until recently.
About four months ago the washer started making an awful squealing noise when it went into the spin cycle. No problem, I thought. We have an extended warranty. I’ll just call the friendly Maytag repairman.
Instead of Gordon Jump, I got a customer service representative whose sole job, apparently, is to make sure that Maytag does not send out a repairperson. After I described the problem, she told me flat-out that they don’t make service calls based on a noise.
“But it’s a really loud noise,” I explained. “When this thing starts to squeal the dogs start howling, the cats hide, and even the fire department called to complain that no one could hear their sirens. It’s really loud.”
“I’m sorry sir,” she repeated. “It’s not our policy to send a repairperson out based on just a noise.”
“Sooo… what? I have to wait for smoke, flames, water on the floor, the drum to come spinning out onto the floor like some crazy oversize dreidel?”
After spending this summer attending several trade shows, marveling at equipment that can capture a 3-D point cloud of an entire Airbus A380 to within a few thousandths of an inch accuracy, or measure surface defects of a cylinder wall to within fractions of a micron, it’s easy to fall into the trap of regarding measurement equipment as the semi-autonomous guardians of precision. Push a button, and voilà--red light, yellow light, green light--scrap it, rework it, use it. Why, a monkey could do this job!
Unfortunately, precision measurement, even with the most advanced equipment, isn’t monkey business. It’s a highly skilled profession, and a good metrologist is worth his or her weight in gold. That word hasn’t gotten out, however, and the number of people who have the knowledge and skill to perform equipment calibration and precision measurements is dwindling. Just ask around at a measurement conference such as the Coordinate Metrology Systems Conference (CMSC) or the Measurement Science Conference (MSC), and you’ll get an earful. In recent conversations I had with Boeing metrologists, it was apparent that the shortage of skilled measurement specialists is definitely being felt by the aerospace industry.
I have been a faithful and interested reader of Quality Digest and the “Quality Curmudgeon” column for many years. As is usually the case, I breeze through the magazine and then cut out the last page so I can take my time with it later. I started doing this years ago, when I realized I was cutting them out anyway, for future reference or to forward to a colleague.
I had not taken the time to read the “Give Thanks” column (December 2008) because, as vice president of sales and marketing for my company, I was too busy working my crazy hours keeping things afloat. Ironically, I have plenty of time to read old items I have saved, such as your column, since today is my first day of unemployment. As the 88-year-old patriarch of the family-owned business told me this past Friday, “Look at this as a learning experience. We can now get three college kids to do the work of a six-figure executive such as yourself.” With that they showed me the door. Ouch!
The number of major hurricanes in the Atlantic since 1940 (as we considered in my February column, “First, Look at the Data”) is shown as a histogram in figure 1, below. Some analysts would begin their treatment of these data by considering whether they might be distributed according to a Poisson distribution.
The 68 data in figure 1 have an average of 2.60. Using this value as the mean value for a Poisson distribution, we can carry out any one of several tests collectively known as “goodness-of-fit” tests. Skipping over the details, the results show that there’s no detectable lack of fit between the data and a Poisson distribution with a mean of 2.60. Based on this, many analysts would proceed to use techniques that are appropriate for Poisson observations. For example, they might transform the data in some manner, or they might compute probability limits to use in analyzing these data. Such actions would be wrong on several levels.
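The hurricane counts themselves are not reproduced here, so the following is only a generic sketch of one common goodness-of-fit test (the chi-square form), not necessarily the test the author used. The function name is mine, and a careful analysis would also pool cells with small expected counts, which this sketch omits:

```python
import math
from collections import Counter

def poisson_pmf(k, lam):
    """Probability of observing the count k under a Poisson(lam)."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def poisson_chi2(counts):
    """Chi-square goodness-of-fit statistic comparing integer counts to
    a Poisson distribution with the sample mean as its parameter.
    Returns (chi-square statistic, estimated mean).  For a real test,
    cells with small expected counts should first be pooled."""
    n = len(counts)
    lam = sum(counts) / n
    observed = Counter(counts)
    chi2 = 0.0
    for k in range(max(counts) + 1):
        expected = n * poisson_pmf(k, lam)
        chi2 += (observed.get(k, 0) - expected) ** 2 / expected
    return chi2, lam
```

The statistic is then referred to a chi-square distribution whose degrees of freedom account for the estimated mean. But as the passage above goes on to argue, passing such a test does not by itself license the use of Poisson-based techniques.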