Sampling Bias in Business and Life

Fred Faltin
Published: Monday, May 8, 2017 - 11:03

All of us draw conclusions based on what we see happening around us. Often what we’re observing is a sample from some larger population of events, and we draw inferences based on the sample without even realizing it. If the sample we observe is not a representative one, our resulting judgments can be seriously flawed, potentially at considerable personal cost.

How not to be wrong

During World War II, the Allied air forces wanted to analyze data on the damage suffered by aircraft returning from combat missions over Europe. Those in charge had wisely decided to examine returning planes in order to assess what design upgrades might increase survivability. They observed how many hits each plane had sustained, and where on the aircraft they occurred. The brass was of the opinion that the parts of the plane that received the most hits should be up-armored in order to make them more survivable.

Fortunately, before proceeding, they turned to Abraham Wald. Wald was a brilliant statistician who, like many other mathematicians and scientists of the time, had immigrated to the United States to escape Nazi persecution. Wald’s analysis was lengthy and detailed, but the gist of it was simple enough. The only aircraft on which data were available, he pointed out, were those that had survived combat and returned. The areas in which those aircraft had sustained numerous hits were therefore areas in which a plane could take substantial damage and still survive. It was the areas in which the returning aircraft had not been hit that needed better armor, because planes that had taken hits in those areas (e.g., the engines) had not come back.

A lot has been written about this anecdote recently, perhaps because of the discussion of it in Jordan Ellenberg’s excellent book, How Not to Be Wrong (Penguin Books, 2014), which I highly recommend.

Survival bias

This is an example of what statisticians, for obvious reasons, call “survival bias,” also known as “survivorship bias”: the lack of representativeness that occurs when we gather data on a population, some portion of which is missing due to some specific cause and is therefore unobservable. Clearly, the absence of an appreciable part of the population calls into question any conclusions drawn from observing only the part that is available. Put another way, a sample drawn only from the part of the population we can see might not be representative, and conclusions drawn from that sample might therefore mischaracterize the population as a whole. In fact, the situation is even a little worse than that, but we’ll return to that in a moment.

The term “survival bias,” though understandable enough from the example above and others like it, is nonetheless an unfortunate one. That’s because survival bias occurs in many aspects of daily life that have nothing whatsoever to do with survival in any common use of the word. In fact, almost any selection procedure or criterion that is applied before we gather data results in some measure of survival bias. The only sense in which “survival” is involved in such cases is that the elements of the population available to us are those that “survived” the selection process.

There are numerous examples of survival/selection bias that we experience in everyday life, sometimes without even realizing it. We’ll look at some of them below. But first, it’s natural to ask under what circumstances such a restriction on our sample will bias the outcome of an analysis.
Suppose we’re looking at a population Ω, and are interested in some feature that defines a subset F of Ω. Imagine further that some survival or selection criterion has limited our potential observations to some subset S of Ω. For the restriction to S not to bias our observation of F, the frequency of occurrence of F in S would have to be the same as it is in the overall population Ω. That is, it would have to be the case that P(F given S) = P(F). From the definition of conditional probability, this means that P(F and S)/P(S) = P(F), and thus P(F and S) = P(F)*P(S). But this is precisely the definition of the events F and S being independent of one another. So the prior constraint imposed by a survival or selection criterion will always bias our view of a feature of the population unless the feature and the criterion are independent characteristics to start with. This simple proof pertains only to features that are either present or absent in each member of the population, but it can easily be extended to random variables in general.

Where the military brass went wrong in the case of the returning aircraft was in assuming that the planes that returned were a representative sample of all the aircraft that had flown on a given mission or day. In doing so, they tacitly assumed that the planes that did not return had the same pattern of battle damage as those that did. But the volume and location of battle damage is clearly not independent of a plane’s survival. Wald recognized, fortunately, that looking at the damage on surviving planes tells us a lot about where the aircraft didn’t need more armor, and by inference, therefore, also where they did.
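To see the independence condition at work, here is a minimal simulation sketch (my own illustration, not part of Wald’s analysis). It generates a population in which a binary feature F and a survival event S are dependent, in the spirit of hits near the engine reducing the chance of making it home, and shows that the frequency of F among the “survivors” no longer matches its frequency in the full population. Make the two independent and the bias disappears. All of the probabilities are arbitrary values chosen for illustration.

```python
# A minimal sketch (illustrative only): when a feature F and a survival
# event S are dependent, the frequency of F among survivors is a biased
# estimate of P(F); when they are independent, it is not.
import random

random.seed(1)
N = 200_000

def simulate(p_s_given_f, p_s_given_not_f, p_f=0.3):
    """Return (share of F in the full population, share of F among survivors)."""
    f_total = s_total = f_and_s = 0
    for _ in range(N):
        has_f = random.random() < p_f
        survives = random.random() < (p_s_given_f if has_f else p_s_given_not_f)
        f_total += has_f
        s_total += survives
        f_and_s += has_f and survives
    return f_total / N, f_and_s / s_total

# Dependent case: having the feature makes survival less likely
# (think of hits near the engine making it home less often).
print(simulate(p_s_given_f=0.4, p_s_given_not_f=0.8))  # roughly (0.30, 0.18)

# Independent case: survival does not depend on F, so P(F given S) = P(F),
# exactly as the algebra above requires.
print(simulate(p_s_given_f=0.6, p_s_given_not_f=0.6))  # roughly (0.30, 0.30)
```

With these made-up settings, a feature present in 30 percent of the population shows up in only about 18 percent of the survivors. That gap is exactly the distortion Wald was guarding against.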
Examples
Real estate

It seems that every couple of years I hear a discussion over happy hour about whether buyers should be cautious of a property that has been on the market an unusually long time. The answer is that of course they should.

Suppose we’re looking at a property that’s been listed for 180 days in a market where the average property sells in half that time. Think of it this way: Six months ago, there was a cohort of newly offered properties, of which the listing under discussion was one. (We assume throughout that there is a free, liquid market of properties and buyers.) In the interim, the great majority of those properties have been sold. The one we’re looking at “survived” the sales process and remains available. The mere fact that it has done so suggests that it is not typical of the population of properties among which it was a new listing.

Of course, it could just be a coincidence that this property remains on the market; even among equally meritorious properties, one of them has to be the last to sell. But beyond a certain point in time, more than likely there’s a reason that others have sold and this one hasn’t. It might be a matter of condition, location, or style, of some pending activity in the area, or simply that it’s overpriced. Most likely, though, it’s something.
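A quick back-of-the-envelope calculation, with numbers I have made up purely for illustration, shows how strongly “still unsold” conditions the odds. Suppose one listing in ten has some issue that slows its sale, and time-to-sale is roughly exponential, with a 90-day mean for ordinary listings and a 360-day mean for problem ones. Bayes’ rule then gives the chance that our 180-day survivor is a problem listing:

```python
# A back-of-the-envelope Bayes calculation with made-up numbers: how much
# does "still unsold after 180 days" shift the odds that a listing has a
# problem? Assume exponential time-to-sale, with different mean times on
# market for ordinary and problem listings.
import math

p_problem = 0.10                      # assumed prior share of problem listings
mean_ok, mean_problem = 90.0, 360.0   # assumed mean days on market
t = 180.0                             # days this listing has "survived"

# Probability of still being unsold after t days under each model
surv_ok = math.exp(-t / mean_ok)
surv_problem = math.exp(-t / mean_problem)

# Bayes' rule: P(problem | unsold after t days)
posterior = (p_problem * surv_problem) / (
    p_problem * surv_problem + (1 - p_problem) * surv_ok
)
print(f"P(problem | unsold after {t:.0f} days) = {posterior:.2f}")  # about 0.33
```

With these assumptions, a 10-percent prior roughly triples, to about a third. The particular numbers don’t matter; what matters is that surviving the selection process is itself informative.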
Sales data (or the lack thereof)

Almost every company collects data on what it sells. But in my experience, almost no company gathers data on what it could have sold, but didn’t. If a customer calls or comes into a business to buy something and it’s not available, they generally give up or go elsewhere. The sale that never happened goes unrecorded, so its existence, as well as its cause (whether a lack of inventory, short staffing, or something else entirely), is lost. The sales that happen are the planes that came back. Year-end financials show the revenues that were actually realized, but not the ghosts of opportunities lost.

When I was a manager at GE Research, one of the company’s equipment leasing businesses approached my group to address this very problem. They knew they were turning away business, and were wondering how much additional inventory, if any, could be justified by the return on investment it would earn. The problem was, of course, that they had no data on their lost sales, and therefore no real idea of how much profit was slipping away. It was a difficult problem, and it led to one of the best Six Sigma projects I’ve seen. The solution was to devise an appropriate mathematical model for demand, and then to use simulation to fit the parameters of the model to the data on what actually had been leased. We were then able to infer what transactions in the tail of the model had been turned away.
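As a generic illustration of the idea (not our actual model or data): treat each period’s leases as demand censored at the available fleet size, fit a demand distribution by maximum likelihood to the censored observations, and then read the expected lost sales off the fitted tail. The Poisson assumption, the capacity of 12 units, and the weekly lease counts below are all hypothetical.

```python
# A hedged sketch of one standard approach to the "lost sales" problem:
# treat observed leases as demand censored at capacity C, fit a Poisson
# demand rate by maximum likelihood, then estimate the demand turned away.
# The capacity, the data, and the Poisson assumption are all illustrative.
import numpy as np
from scipy.stats import poisson
from scipy.optimize import minimize_scalar

C = 12  # illustrative fleet size: weeks at exactly 12 leases may hide unmet demand
weekly_leases = np.array([9, 12, 11, 12, 8, 12, 10, 12, 12, 7, 11, 12])  # made-up data

def neg_log_lik(lam):
    """Censored-Poisson likelihood: exact pmf below C, tail mass at C."""
    uncensored = weekly_leases[weekly_leases < C]
    n_censored = np.sum(weekly_leases == C)
    ll = poisson.logpmf(uncensored, lam).sum()
    ll += n_censored * np.log(poisson.sf(C - 1, lam))  # P(demand >= C)
    return -ll

lam_hat = minimize_scalar(neg_log_lik, bounds=(1.0, 50.0), method="bounded").x

# Expected unmet demand per week under the fitted model: E[(D - C)+]
d = np.arange(C + 1, C + 200)
lost_per_week = np.sum((d - C) * poisson.pmf(d, lam_hat))
print(f"fitted demand rate ~ {lam_hat:.1f} units/week, "
      f"estimated lost leases ~ {lost_per_week:.2f}/week")
```

The point of the exercise is the same as with the aircraft: the weeks that hit the capacity ceiling are the observations telling us about the demand we never saw.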
Customer loyalty

I recall reading a case study of a hotel chain that surveyed its frequent guests about the features they most valued in selecting lodging. Not surprisingly, a large number of the surveys cited the desirability of features and services at which the chain conducting the study already excelled. Nonetheless, a decade later the chain was in decline.

Clearly, the frequent guests being surveyed were “survivors” who had experienced what the hotel had to offer and came back because they liked it. Guests who were dissatisfied and didn’t return, and those who did not find the chain’s features appealing enough to come in the first place, mostly weren’t around to be surveyed.

This effect is surprisingly common, and it still seems to be routinely overlooked, even by organizations that should know better. Whether it’s an employee satisfaction survey, a customer focus group, or a primary election, sampling only the people who choose to remain affiliated with an organization fails to capture the views of those who joined and left, or who chose never to sign on in the first place.
Corporate training

One of the things I have the great pleasure of doing now and then is presenting training on topics in business analytics, Six Sigma, finance, or statistics to groups of up-and-coming professionals. One thing I’m always struck by (but not surprised by) is the quality of the attendees at these sessions. As a former corporate manager, I can assure you that when companies decide whom to invest in, they don’t do it by drawing names out of a hat. They send the people of greatest perceived potential, those judged most likely to benefit from their training and apply it to deliver value to the organization in return. I have the easiest job on the planet, because I get to consult with and train the best people corporations have to offer: the “survivors” of the corporate selection process.

Conclusion

In business and everyday life, we tend to assume, often unconsciously, that a group or event we’re observing is characteristic of some broader set, and we draw conclusions accordingly. If, in fact, what we’re observing is not representative of the whole, our conclusions may be seriously flawed. One of the most common errors we make is to overlook selection criteria that have limited and biased the sample we observe. Subsequent articles in this series will examine still other ways in which people and organizations commonly run afoul of sound logic and good statistics.
About The Author
Fred Faltin

Fred Faltin is managing director and co-founder of The Faltin Group, providing consulting and training services in statistics, Six Sigma, economics, and operations research to companies throughout the Americas, Europe, and Asia. Previously, he founded and managed the Strategic Enterprise Technologies laboratory at GE Global Research. Fred is a Fellow of the American Statistical Association and a recipient of the American Society for Quality’s Shewell Prize. He served as co-editor in chief of the Encyclopedia of Statistics in Quality and Reliability (2007) and Statistical Methods in Healthcare (2012), both published by John Wiley & Sons. His current project is Analytic Methods in Systems and Software Testing, to appear later this year.
Comments
A Related Issue
Good article.
I have often commented on a personal rule: Thou shalt not reason from small samples. And its corollary, Thou shalt not fail to reason from large samples.
I am particularly critical of those who reason from selected anecdotes. But your article rightly points out that even reasoning from large samples may lead to erroneous conclusions if the sample is biased.
Thank you.