Why perfect research meets imperfect organizations... and what both sides can do about it
Your research was flawless. Your insights were sharp. Your report was beautiful.
And then... crickets.
TL;DR: When insights don’t drive action, the research is often not the problem. The same cognitive biases we study in consumers are alive and well inside the organizations that commission our work. This article maps seven of them and offers practical countermeasures for both agency-side and client-side researchers.
We’ve all been there. Probably too many times. Flawless methodology, sharp insights, a compelling narrative... and nothing happens. The deck gets filed. The recommendations get “taken on board.” The next quarter looks exactly like the last one.
So what went wrong? Probably not the research.
Here’s the irony: as an industry, by now we’ve fully embraced that Homo Economicus is a myth. We know consumers don’t optimize rationally. We’ve built our careers on understanding cognitive shortcuts, emotional reasoning, and the gap between intention and behavior.
But the moment our research enters an organization, we seem to forget that the people receiving it are subject to exactly the same forces. We hand our findings to a room full of humans... humans with careers to protect, budgets to justify, bosses to please, and mental bandwidth that’s already maxed out... and we expect purely rational information processing.
If we wouldn’t expect it from consumers, why would we expect it from the people sitting across the table?
My experience is that there are many biases standing between cold, hard data and the best decision or action. A lot of these biases are amplified by hierarchy and risk asymmetry: the more senior the audience, the higher the stakes of admitting a strategy needs to change, and the more ‘protected’ people are from being challenged. Which means power often ends up shaping which findings even get heard.
The good news is that once you see and acknowledge these dynamics, you can design for them. I don’t think you can fully eliminate organizational biases any more than you can eliminate consumer biases, but you can anticipate them, structure your work around them, and give your insights a much better shot at actually cutting through. And that’s what we want, no?
7 Biases that kill good research
Below are seven patterns I’ve seen derail good research, both from the agency side and from years working client-side: from how people see data, to how power shapes its interpretation, to why decisions stall even when the evidence is clear.
A note on agency vs. client-side responsibility: Whether you sit agency-side or client-side, insight impact is part of the job. The agency that delivers brilliant findings without anticipating organizational resistance has done half the work. The client-side lead who commissions research without creating the conditions for it to be acted upon has wasted much, if not all, of the investment. I see this as a mutual responsibility.
1. Confirmation bias
How we see what we want to see
What it looks like: “That finding about declining brand relevance is interesting, but look... satisfaction is stable. Let’s focus there.” Stakeholders can unconsciously zoom in on data that validates what they already believe and skim past anything inconvenient. That’s not necessarily stupidity or dishonesty... that’s just how human brains process information under cognitive load.
If you’re the agency: During the briefing, ask each stakeholder: “What result would surprise you? What finding would make you change course?” Write their answers down. Literally. Then revisit those exact words during your debrief. It’s much harder to cherry-pick when you’ve publicly committed to what would matter.
If you’re the client-side insights lead: Try this before you even open the report... write down your three strongest assumptions about what it will show. Then after reading, check whether you lingered on the slides that confirmed them. Share that exercise with your team. It’s a small thing, but it makes it a lot easier for others to question their own assumptions too.
2. Power & Authority bias
How hierarchy decides which insights survive the room
What it looks like: The CEO says “I think customers really care about X” and suddenly the research showing they care about Y gets quietly shelved. Once the most senior person in the room reacts, everyone else calibrates to that reaction. Dissenting views don’t get voiced (in some cultures, to an extreme degree). The debrief reaches a comfortable consensus in the first 10 minutes, and silence gets interpreted as agreement.
This is actually two dynamics working together. Authority bias means senior voices carry disproportionate weight regardless of evidence quality. Groupthink means the pressure toward harmony suppresses critical evaluation. Together, the most powerful person’s first impression becomes the group’s conclusion. Avinash Kaushik gave this dynamic a name that sums it up nicely: the HiPPO... the Highest Paid Person’s Opinion.
If you’re the agency: Send a one-page topline to each key stakeholder individually before the group debrief. When everyone walks into the room having already formed their own first impression from the data, the CEO’s ‘live’ reaction doesn’t become the only anchor. In workshops, you can start with a couple of minutes of silent individual reflection... sticky notes, no talking. Collect responses before anyone speaks. This surfaces views that would otherwise stay hidden.
If you’re the client-side insights lead: Structure the debrief so the most senior person speaks last. Collect written reactions before opening discussion. And assign a rotating “red team” role: one person’s explicit, formal job that day is to argue the strongest possible case for the opposite interpretation of the findings. Think of it as quality assurance rather than dissent. Rotate who does it so it’s never personal. Build it into every meeting agenda.
3. The inertia trap
How past investment masquerades as future logic
What it looks like: “We’ve spent two years developing this product direction. The research might suggest pivoting, but we can’t just abandon all that work.” Or: “The research is compelling, but implementing those changes would be really disruptive. Let’s revisit next quarter.” Next quarter becomes next year becomes never.
In practice, the sunk cost fallacy and status quo bias are two faces of the same organizational failure: the past becomes the main compass for future decisions. The bar for evidence supporting action is always higher than the bar for evidence supporting doing nothing. And inaction feels like it carries no risk... even when it absolutely does.
If you’re the agency: Never frame findings as “your strategy was wrong.” Instead: “The market has shifted since this direction was set. Here’s what the next 12 months look like under three scenarios.” Give them a bridge to a new decision, not a full U-turn that requires admitting the old one was a mistake. And wherever possible, include a cost-of-inaction slide. Make it concrete and time-bound. “If current trends continue, you’ll lose X market share points within 18 months” might move a room in ways that “there’s an opportunity to grow” simply doesn’t.
If you’re the client-side insights lead: When you hear “but we’ve already invested so much in X,” you could reframe: “If we were starting fresh today with this data and a blank slate, what would we choose?” Propose it as the literal question for the decision meeting. And reframe the decision itself: the question is not “should we change?” but “are we actively choosing to maintain our current course despite this evidence?” Make inaction a conscious, documented decision that someone has to put their name to.
4. Incentive-driven reasoning
How KPIs, bonuses, and career risk distort information processing
What it looks like: The executive whose bonus depends on a product launch finds reasons why concerning research findings “aren’t quite applicable to our situation.” The brand manager whose reputation is tied to the current campaign questions the methodology. They’re not being dishonest... their cognition is doing what cognition does under motivational pressure.
The psychological research on this is very clear: when the stakes are personal, people process information toward desired conclusions, constrained only by their ability to construct seemingly reasonable justifications. When the stakes are high enough, almost any justification feels reasonable. This is why the same person who demands “data-driven decisions” can dismiss inconvenient data with a straight face... the reasoning feels genuinely objective from the inside.
I’ve seen this too many times. And it can be a tricky one to deal with, because power play might kick in big time.
If you’re the agency: Do your homework on incentive structures before you present. Whose KPIs depend on the product launching on time? Whose reputation is tied to the current strategy? Then build your narrative knowing which slides will trigger defensive reasoning. Prepare a specific bridging statement for each: “This finding doesn’t mean the launch should stop... it means the launch succeeds better if we adjust X.” The goal is to make the difficult finding survivable.
If you’re the client-side insights lead: Build a “conflict of interest” check into your research process. Before the debrief, ask each stakeholder (including yourself) to write down: “What outcome am I hoping for from this research?” Making motivated reasoning visible doesn’t eliminate it, but it makes it dramatically harder to act on unconsciously. Keep these declarations. Revisit them when recommendations stall.
5. Salience bias
How the vivid, the recent, and the secondhand outweigh systematic evidence
What it looks like: “I know the research says one thing, but I just had dinner with a customer last week and she told me something completely different.” That single vivid anecdote outweighs your n=2,000 study, because it’s recent, personal, and emotionally resonant. Your data is abstract. Their dinner was real. We hear this one all the time.
This operates on two levels. There’s the pull of vivid personal experience over systematic evidence. And there’s the anchoring effect: the first piece of information encountered... whether it’s a dinner conversation or a hallway rumor... disproportionately shapes all subsequent interpretation. In organizations, what’s salient almost always beats what’s systematic.
If you’re the agency: Fight anecdote with better anecdote. Prepare two or three short, vivid, quotable customer stories that bring the data to life. Video clips are even better. When the CMO says “well, I spoke to a customer who said...” you need a more compelling customer in your back pocket, not another bar chart. And control the first anchor: send a brief, carefully framed topline summary 24 hours before the full debrief. If the first thing stakeholders hear about your research is a watercooler rumor from someone who glanced at one chart out of context, you’ve already lost control of how the findings get interpreted.
If you’re the client-side insights lead: Create a simple ground rule: personal anecdotes get discussed after research findings, not before. And when someone shares one (and they will), normalize this question: “That’s an interesting example... is it consistent with the broader data, or is it an outlier?” Be deliberate about information sequencing. If preliminary results are circulating, get ahead of it. The first formal communication about findings should come from you, framed the way the data deserves.
6. Not-invented-here syndrome
How ownership of the process determines ownership of the output
What it looks like: “That competitor benchmark is interesting, but our customers are different.” Or: “The global research doesn’t really apply to our market.” Research that stakeholders didn’t commission or participate in feels like someone else’s opinion, not their data. External insights face an automatic credibility discount.
This bias is structurally different from the others because it runs on identity and ownership more than cognition. Which means the remedy lies in how you design the research process itself, not in how you present the findings.
If you’re the agency: Co-create the research questions with key stakeholders during the brief. The actual decision-makers, not just the insights lead. Include their language in the discussion guide. When people see their own fingerprints on the research design, they’re far less likely to dismiss the results. Go one step further: invite a key stakeholder to observe a few interviews or sit in on analysis. The insight lands differently when they witnessed it firsthand.
If you’re the client-side insights lead: Bring cross-functional stakeholders into the process early. Have product leads review the screener. Have marketing sit in on two focus groups (even remotely). Ownership of the process creates ownership of the output. The debrief should feel like a shared reveal of something you built together, not an external delivery from a vendor.
7. Loss aversion
Why opportunity-framed findings generate nods, but threat-framed findings generate action
What it looks like: Research framed as “here’s an exciting opportunity” gets nods of appreciation and then nothing. Research framed as “here’s what you’re about to lose” triggers emergency meetings.
Here, too, the psychological evidence is clear: losses loom much larger than equivalent gains. This has a direct, practical implication for how we deliver research: the framing of a finding can matter as much as its content in determining whether it drives action.
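For readers who want the numbers behind that claim: prospect theory (Kahneman & Tversky) models this asymmetry with a value function that is steeper for losses than for gains. A minimal sketch of the standard formulation:

v(x) = x^α if x ≥ 0 (gains)
v(x) = −λ(−x)^β if x < 0 (losses)

with typical estimates of α ≈ β ≈ 0.88 and a loss-aversion coefficient λ ≈ 2.25 (Tversky & Kahneman, 1992). In plain terms: a loss of 100 feels roughly as intense as a gain of 225. The exact coefficients vary across studies, but the asymmetry itself is robust.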
If you’re the agency: Draft your key recommendations twice: once as “opportunity gained” and once as “opportunity lost.” Compare which framing will actually move the room to action. You’re applying the same behavioral understanding you bring to consumer research to the research delivery itself. Choose the framing that serves the outcome.
If you’re the client-side insights lead: When you see a finding being parked as a “nice-to-have opportunity,” flip the lens in the room: “What’s the risk if our competitor acts on this insight and we don’t?” Competitive framing activates loss aversion in your favor. It’s often the fastest way to move a finding from the “interesting” pile to the “urgent” pile.
A note for senior decision-makers (whoever you are...)
If you’re a senior decision-maker reading this, you might be thinking: “This is a useful framework for my insights team.” I hope it is. But it’s also about you.
Most of these biases are amplified by seniority. The more authority you carry, the less likely people are to challenge your interpretation of findings, the higher the stakes of admitting a strategy needs to change, and the more insulated you are from dissenting views. None of that is a character flaw... it’s just how organizations work.
The most impactful thing you can do is create the conditions for research to be heard honestly. That means speaking last in debriefs, asking your team what they think before you share your reaction, and treating “the research challenges our current direction” as valuable intelligence rather than bad news. The research budget is already spent. The only question is whether you get the full value of what it uncovered.
The real work is not (only) in the deck
Crafting brilliant research is only half the job. Maybe less than half.
The other half is navigating the messy, political, thoroughly human process of getting that research to actually influence decisions. And that requires the same behavioral sophistication we bring to understanding consumers.
Impact = Insight × Action. The best insight in the world multiplied by zero action is still zero. And the gap between insight and action is almost never about the quality of the research. Look at the humans in the room.
These seven biases follow a predictable arc: how we see the data (Confirmation Bias), who gets to interpret it (Power & Authority), why we resist what it tells us (Inertia, Incentives), what competes with it for attention (Salience), whether we feel we own it (Not-Invented-Here), and how its implications are framed (Loss Aversion). And because they’re predictable, you can design around them.
As Neil Young would sing... “it’s only castles burning.” Make sure you are the firefighter.
Keen to hear your experience: what biases have you seen derail good research? And what’s worked for you in getting insights to actually stick?