Mike Figliuolo

How to Avoid Mistakes When Doing Analysis and Making Recommendations

Three pitfalls on the road to success

Published: Wednesday, November 11, 2020 - 12:03

A hypothesis-driven approach to problem solving and making recommendations can be tremendously efficient. You create a hypothesis (i.e., something taken to be true for the sake of argument), conduct analysis designed to prove or disprove the hypothesis, then make your recommendation based on the results of your analysis. Typically, your hypothesis is based upon prior experiences you’ve had as well as your knowledge of the subject matter you’re evaluating.

I’ve personally used this approach for years. I refer to it as the structured thought process. The method is both efficient and effective. That said, using this approach is not without risks.

Risk No. 1: Confirmation bias
Although it’s great to have experience and prove your hypotheses are correct, that same experience carries risk with it. Confirmation bias—the tendency to look for or interpret information in a way that confirms your preconceived ideas—is the biggest risk you face when using a hypothesis-driven approach like the structured thought process.

No one wants to be wrong, so it’s easy to fall into the trap of thinking that disproving your hypothesis means you made a mistake. That fear of being “wrong” can lead you to conduct analysis like a horse wearing large blinders: You see only down a very narrow path and ignore everything outside your field of view. You might dismiss facts contrary to your hypothesis, or look only for data that prove you are “correct,” which skews your analytical results. Before you know it, you’re making a case based on incorrect information.

The result of succumbing to confirmation bias is a recommendation built on flawed supporting data. If your audience picks up on your bias, they’ll call you out on it, and you’ll have to go back and redo your work without the blinders on. If they don’t notice your bias and approve your recommendation, you’ll be implementing an idea that could harm your organization. Neither outcome is acceptable.

To avoid confirmation bias, enlist the aid of others. Get independent views of your analysis, and ask people if you’re missing anything. I know leaders who encourage their team members to try to prove them wrong with additional analysis. Their thinking is that if no one on the team can prove the recommendation wrong, the answer they’ve arrived at is right. It takes courage to put yourself out there and ask your team to prove you’re wrong, but if you focus on getting to the right answer and treat the challenge as a way to ensure you do, the approach becomes easier to take.

Another check against confirmation bias is involving your nemesis—that one person who constantly seeks to disprove or discredit any recommendation you make. If anyone is looking for flaws in your work, he is. He will spot situations where you’ve ignored data that run counter to your hypothesis. He will point out when your interpretation of the facts is skewed in favor of proving you’re “right.” Although his challenges can be frustrating, your nemesis can prevent you from making the big mistake of implementing a flawed recommendation.

Risk No. 2: Analysis paralysis
Assuming you’re able to avoid confirmation bias pitfalls, beware of another trap: analysis paralysis. Many people have the mistaken belief that if some data are good, more data are better, and excessive amounts of data are best. In reality, all that extra analysis is wasted time and effort. Once you have enough analysis to prove your hypothesis, stop doing analysis! All you’re doing is spending time on something that won’t generate incremental benefits.

Excessive analysis increases the risk of upsetting your audience when you ultimately pitch your recommendation to them. The more data you share with them, the longer the meeting ends up being. If they’re supportive of your answer after four analyses, but you insist on showing them 10 more that they understand but don’t need, they’ll grow frustrated. Those feelings won’t help you get your recommendation approved. If your audience is confused by the irrelevant or redundant analyses you’re sharing, the likelihood of getting your pitch approved during the meeting will plummet. Know when enough analysis is enough.
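
One way to see why extra analysis stops paying off is the square-root law behind most sampling-based work: The margin of error shrinks only with the square root of the sample size, so each additional batch of data buys less precision than the last. Here is a minimal sketch in Python (the sample sizes and the 95-percent confidence level are illustrative assumptions, not figures from the article):

```python
import math

# Margin of error for a survey proportion near 50% at 95% confidence.
for n in (100, 400, 1_600, 6_400):
    margin = 1.96 * math.sqrt(0.5 * 0.5 / n)  # normal-approximation half-width
    print(f"n={n}: about ±{margin:.1%}")
# Prints roughly ±9.8%, ±4.9%, ±2.5%, and ±1.2%.
```

Quadrupling the sample only halves the margin of error, so once the analysis is convincing, doing more of it rarely changes the recommendation.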

Risk No. 3: Weak analysis
The flip side of analysis paralysis is insufficient or weak analysis. Your facts must support the hypothesis enough to convince others the hypothesis is true. For example, if your hypothesis states that customers will buy 5,000 units of your new product during the first month it’s released, you’ll need rigorous analysis to make your case. If the analysis you use to support this hypothesis is based on two prospective customers you met in a store who said they would “buy the product immediately,” that’s likely insufficient evidence to prove your case. You would need deeper market projections with statistically significant data to make a convincing argument.
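
To make the example concrete, here is a minimal sketch (not from the article) of how a simple confidence interval exposes the weakness of a two-person “survey.” The reachable market of 100,000 customers and the 400-person survey are illustrative assumptions, and the Wilson score interval is just one common way to bound a purchase rate estimated from a small sample:

```python
import math

def wilson_interval(buyers, surveyed, z=1.96):
    """Wilson score 95% confidence interval for the true purchase rate.
    Unlike the plain normal approximation, it stays sensible for tiny samples."""
    p = buyers / surveyed
    denom = 1 + z**2 / surveyed
    center = (p + z**2 / (2 * surveyed)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / surveyed
                                   + z**2 / (4 * surveyed**2))
    return max(0.0, center - half), min(1.0, center + half)

MARKET = 100_000  # assumed reachable customers; purely illustrative

# Two shoppers who both said they'd "buy the product immediately":
lo, hi = wilson_interval(2, 2)
print(f"n=2:   {lo * MARKET:,.0f} to {hi * MARKET:,.0f} units")   # roughly 34,000 to 100,000

# A 400-person survey in which 24 respondents (6%) said they'd buy:
lo, hi = wilson_interval(24, 400)
print(f"n=400: {lo * MARKET:,.0f} to {hi * MARKET:,.0f} units")   # roughly 4,000 to 8,800
```

With only two respondents, the interval spans a third of the entire market and says almost nothing about whether 5,000 units is realistic; the larger survey brackets the hypothesis tightly enough to argue about.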

If you’re not sure your evidence is compelling enough to convince your audience, ask your nemesis what she thinks about the conclusions you’re drawing based on the data. If your nemesis is convinced by your analysis, it’s likely others who are less skeptical will be convinced, too.

When you’re conducting analysis with the purpose of supporting a recommendation you’re making, be sure to avoid these three pitfalls. Making your case based on rigorous (but not too rigorous) analysis that’s objective and unbiased will increase your odds of getting your recommendation approved on your first attempt at pitching it.

First published Oct. 19, 2020, on the thoughtLEADERS blog.

About The Author

Mike Figliuolo

Mike Figliuolo is the author of The Elegant Pitch and One Piece of Paper. He's the co-author of Lead Inside the Box. He's also the managing director of thoughtLEADERS, LLC—a leadership development training firm. He regularly writes about leadership on the thoughtLEADERS Blog.

Comments

Risk analysis

I agree with you. Analysis in a positive direction is important. Interpretation of the analysis and corrective action are very important, as is the relationship between production, QC, and the analyst.

anup.chandra@yahoo.com

Analysis mistakes

It seems there is more to this subject than what is presented. More data is not analysis paralysis. Traditionally, hypothesis testing means taking a sufficient sample size to make a decision from the test while minimizing false positive or false negative conclusions about the population. This article focuses on “accepting” a hypothesis, which means having sufficient data to avoid the false positive of accepting a result incorrectly. It also depends on the statistical tools one uses to handle large data sets, not analysis paralysis. (See Duarte, J., “Data Disruption,” Quality Progress, Vol. 50, Issue 9, Sept. 2017.) There are other topics in this article worth commenting on, but I chose to address only this one at this time.
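
To put the commenter’s point in numbers, here is a minimal sketch of the classical sample-size calculation for a one-sided, one-sample proportion test. The 5-percent baseline and 8-percent alternative purchase rates, the 0.05 significance level, and the 80-percent power are illustrative assumptions, not figures from the article or the comment:

```python
import math
from statistics import NormalDist

def required_sample_size(p0, p1, alpha=0.05, power=0.80):
    """Sample size for a one-sided test of H0: rate = p0 vs. H1: rate = p1,
    holding the false-positive rate to alpha and the false-negative rate
    to 1 - power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha)  # caps false positives
    z_beta = NormalDist().inv_cdf(power)       # caps false negatives
    numerator = (z_alpha * math.sqrt(p0 * (1 - p0))
                 + z_beta * math.sqrt(p1 * (1 - p1)))
    return math.ceil((numerator / (p1 - p0)) ** 2)

# Telling a 5% purchase rate apart from an 8% one takes roughly
# 383 responses, far more than a two-person conversation in a store.
print(required_sample_size(0.05, 0.08))
```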

Your parenthetical phrase

Dude, that’s an assumption, not a hypothesis.

Merriam-Webster definition

Definition of "hypothesis" from Merriam-Webster

1a: an assumption or concession made for the sake of argument
b: an interpretation of a practical situation or condition taken as the ground for action

2: a tentative assumption made in order to draw out and test its logical or empirical consequences