Research Debt – the hidden costs of unvalidated assumptions

“We know our users.”

If you’ve ever joined an established product team as a researcher, you’ve likely heard this phrase within your first week. It often comes with the best intentions from well-meaning colleagues who have spent years building a product. They’ve conducted some research, gathered feedback, and developed a mental model of who uses their product and how.

But what they know doesn’t quite match the usage data. Customer support keeps fielding the same unexpected issues. And that product feature everyone was certain users would love has barely been touched.

Well, that’s research debt in action.

Research debt accumulates when organizations operate on unvalidated assumptions about their users, products, and markets. Just like technical debt in software development, research debt compounds over time, creating hidden costs and potentially leading to misaligned products. But unlike technical debt, which often announces itself through crashes, bugs, and maintenance headaches, research debt can remain invisible until it suddenly manifests as a product that users don’t understand, don’t need, or simply don’t want.

In this article, I’ll introduce a framework for identifying and quantifying research debt.

What is research debt?

Research debt is what happens in the gap between evidence and assumption. It’s the slow drift from verified insight to organizational “truth” that occurs as research findings age, change hands, and lose their context. These are not knowledge gaps (something we know that we don’t know). Research debt is what we incorrectly believe we do know.

Image: ‘Research insights evolving across the org – context and nuance lost over time.’ Six circles connected by arrows show the progression: (1) Study with nuanced findings → (2) Reports and presentations → (3) Headline insights extracted for stakeholders → (4) Headline insights become shorthand references → (5) Shorthand becomes accepted knowledge → (6) Accepted knowledge informs decision-making forever.

At each step, the connection to the original evidence weakens, and important caveats and methodological limitations fall away. What remains is a simplified version of the insight that may no longer accurately represent what the research actually found, or what’s true about users today.

Types of research debt

Research debt comes in several forms, each with its own causes and consequences. Understanding these variations helps us spot them more effectively in our organizations.

Type 1 – Outdated insights

The most straightforward form of research debt is research that was valid when conducted but no longer reflects current reality. This commonly occurs because:

  • User needs and behaviors evolve naturally over time
  • User demographics transform as products reach new audiences
  • Technology and the competitive landscape shift
  • Products change, creating new interaction patterns and expectations

Type 2 – Overextended findings

Overextension happens when research conducted with a specific user segment, use case, or product area gets inappropriately applied to others. For example:

  • “Our research shows users prefer simplified navigation options”, without mentioning that the research only studied first-time users, not power users who need access to advanced features.
  • “We know our users struggle with complex forms”, forgetting that the study focused on the checkout process, not account settings where detailed information might be necessary and expected.

Type 3 – Compressed nuance

Consider how these insights transform:

  • Original finding: “Users between 25-34 showed a slight preference (23% higher engagement) for the visual search feature, particularly when shopping in unfamiliar categories, though they still relied on text search for specific known-item searches.”
  • After compression: “Millennials prefer visual search.”

The compressed version is easier to remember and share but loses critical qualifiers that should limit how the insight is applied. As decisions are made based on these simplified claims, debt accumulates in the form of misapplied research.

Type 4 – Institutional folklore

Perhaps the most detrimental form of research debt is what I call “institutional folklore”: beliefs about users that everyone “knows” to be true but that can’t be traced back to specific research. It sounds like:

  • “We’ve always known our users care most about speed.”
  • “It’s common knowledge that our enterprise customers need extensive customization options.”
  • “Our audience obviously prefers detailed specifications over lifestyle imagery.”

What makes this form of debt particularly challenging is that it masquerades as research-based knowledge, making it difficult to identify as an assumption that requires testing.

How to easily spot research debt’s red flags

How do you know if your organization has accumulated significant research debt? Look for these warning signs:

Unattributed user knowledge

Definitive statements about users that can’t be traced back to specific research. This often sounds like: “We’ve known this for years” or “This is just how our users are.”

Outdated research archives

If there is a research repository (yay!), are the most recent substantive studies on core user behaviors more than 18–24 months old? Has there been significant product evolution or market change since the research was conducted?

Contradictions between beliefs and user behavior

  • Features launched with confidence that see minimal adoption
  • User feedback that consistently contradicts internal expectations
  • Support tickets that reveal unexpected user workflows or needs
  • Analytics that show usage patterns at odds with your user models

Resistance to new research

Listen for responses like: “We don’t need to ask users about that again.” “That doesn’t match what we know about our users.” This defensive stance often indicates that assumptions have calcified into “facts” that the organization is reluctant to reconsider.

The Audit Framework for finding research debt

I put this audit framework together last year; it involves three complementary approaches: documentation analysis, stakeholder interviews, and decision forensics.

Documentation analysis

  1. Collect documentation that contains claims about users or justifies decisions based on user needs.
  2. Review these materials systematically, extracting explicit and implicit claims about users.
  3. Create a central repository for these extracted claims; a spreadsheet often works well, with columns for the claim, source, evidence reference, and initial assessment notes.
  4. For significant claims, attempt to trace their lineage through documentation.
  5. Note where context, limitations, or qualifiers have been lost.
  6. Evaluate the temporal aspects of key assumptions: how old is the evidence, and has anything changed since?
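The central repository from step 3 can be as simple as a spreadsheet, but to make the idea concrete, here is a minimal sketch of the same inventory as a small Python structure. The field names mirror the suggested spreadsheet columns; they, and the example claims, are illustrative assumptions rather than a prescribed schema:

```python
# Hypothetical sketch of the claims repository from step 3.
# Field names mirror the suggested spreadsheet columns; they are
# illustrative assumptions, not a prescribed schema.
from dataclasses import dataclass

@dataclass
class Claim:
    claim: str            # the belief about users, as stated
    source: str           # where the claim appears (doc, deck, ticket)
    evidence_ref: str     # study or data it traces back to ("" if none)
    notes: str = ""       # initial assessment: lost caveats, age, scope

def unattributed(claims: list[Claim]) -> list[Claim]:
    """Claims with no traceable evidence - candidates for institutional folklore."""
    return [c for c in claims if not c.evidence_ref]

inventory = [
    Claim("Users care most about speed", "2021 strategy deck", ""),
    Claim("Users prefer simplified navigation", "Q3 redesign brief",
          "Usability study, Mar 2022", "first-time users only"),
]

for c in unattributed(inventory):
    print(f"No evidence on file: {c.claim!r} (source: {c.source})")
```

Even this toy version makes the audit’s core move visible: filtering for claims with an empty evidence reference surfaces exactly the statements that need tracing or testing.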

Stakeholder interviews

Many assumptions live primarily in organizational culture, transmitted through conversations and informal onboarding rather than formal documentation. You want to speak to people who play roles in translating user insights and represent different functional areas (product, design, marketing, sales).

Here is a list of questions that can help surface implicit assumptions:

  • “If you were onboarding a new team member, what would you tell them about our users?”
  • “What surprises people new to our industry about how users behave?”
  • “What user behaviors do we design around without much discussion?”
  • “What user needs were you addressing with this feature?”
  • “How did you know users would want this?”
  • “What alternatives did you consider, and why did you believe users would prefer this approach?”
  • “On a scale of 1-5, how confident are you that this is true about our users?”
  • “What would make you more or less confident in this belief?”
  • “How would you expect users to react if we designed assuming the opposite?”
  • “Can you recall when we first learned this about our users?”
  • “Was there specific research that showed this, or has it been common knowledge?”

Decision forensics

Here’s a four-step process for auditing product decision assumptions:

  1. Map key product decisions (feature additions/removals, redesigns, content strategies) and document their stated rationales, particularly those referencing user needs or behaviors.
  2. Identify and diagram the network of user assumptions underlying these decisions, tracking dependencies to find “load-bearing” assumptions that affect multiple product areas.
  3. Assess decision reversibility by determining how difficult it would be to change course if underlying assumptions proved incorrect.
  4. If possible, analyze outcomes against expectations, looking for gaps between anticipated and actual user behavior that indicate which assumptions may require validation.

How to properly audit your company’s research debt

Once you recognize that research debt exists in your organization, the next step is to conduct a systematic audit. This is where you create a clear inventory of what your organization believes about users, assess the evidence behind those beliefs, and identify which assumptions carry the most risk if they’re incorrect.

Step 1: Pre-audit prep

Before diving into the audit process itself, several preparatory steps will increase your chances of success.

  • Set the right context: If you position the audit as an effort to “find all the wrong things we believe,” you’ll likely encounter resistance. Instead, present it as an opportunity to refresh and strengthen user understanding, or as a way to make more confident product decisions.
  • Create psychological safety: Research debt audits can feel threatening, so explicitly acknowledge the value of previous work and clarify the reason for the audit. Approaching the audit with empathy and a commitment to collaborative improvement will yield better results than positioning yourself as the sole arbiter of research quality.
  • Determine the audit scope: You cannot audit every assumption, so you need to define a scope. Will you focus on assumptions related to a product or feature? A user type? A journey or workflow?

Step 2: Use the audit framework 

The audit framework comprises the three complementary approaches discussed earlier, and they are designed to be conducted in parallel rather than sequentially. Each approach targets different manifestations of research debt, and together they provide a comprehensive picture of your organization’s debt.

Diagram: ‘Uncovering Research Debt’ – the audit framework sits at the center of three overlapping circles, each a method for uncovering research debt: Decision Forensics (analyzing past decisions to uncover underlying biases), Documentation Analysis (examining existing records to identify gaps and inconsistencies), and Stakeholder Interviews (gathering insights through discussions with key individuals). Signed ‘Maryam Oseni.’

Step 3: Create your research debt inventory

This inventory is the foundation for your debt reduction strategy. Classify assumptions along several dimensions: the type of assumption (what users do or want, who they are, their motivations, and so on), the quality of the supporting evidence, and the assumption’s age, i.e., whether it rests on recent or outdated research.

In the inventory, you can also map the assumptions to their business implications. This mapping helps quantify the potential impact of incorrect assumptions, providing context for prioritization decisions.

Step 4: Create a debt prioritization matrix

Finally, create a prioritization framework that considers both risk and feasibility. What works for me is plotting assumptions on a 2×2 matrix with axes for “Risk if Wrong” and “Ease of Validation,” then prioritizing those in the high-risk/easy-validation quadrant. Here is a rough sketch of what it looks like:

Research Debt Prioritization Matrix

The matrix prioritizes research assumptions along two axes: risk if the assumption is wrong (low to high) and ease of validation (difficult to easy), yielding four categories:

  • High Value Targets (Priority 1) – high risk, easy to validate. Example assumptions: “Users prefer mobile over desktop,” “Feature X is the primary driver of adoption,” “Users don’t read error messages.”
  • Major Projects (Priority 2) – high risk, difficult to validate. Example assumptions: “Enterprise users need extensive customization,” “Our product reduces customer churn by 30%,” “Users will pay a premium for feature Y.”
  • Quick Wins (Priority 3) – low risk, easy to validate. Example assumptions: “Users complete forms in a single session,” “Notifications are checked daily,” “Help documentation is rarely used.”
  • Low Priority (Priority 4) – low risk, difficult to validate. Example assumptions: “Users prefer blue over green in the UI,” “Weekend usage patterns differ from weekdays,” “Tutorial videos are too long.”

Conclusion: Taking the first step

A research debt audit can be overwhelming, so it’s okay to start small. Begin by identifying just one critical product area or feature and conducting a “mini-audit”: collect the top 3-5 assumptions your team makes about users in this area. For each assumption, simply ask:

  • What’s our evidence for this belief?
  • When was this evidence last validated?
  • What would be the impact if this assumption is wrong?

This lightweight process can be completed in a single afternoon session with key stakeholders, yet still identifies potential research debt without requiring extensive resources. The insights gained often provide immediate value and build momentum for more comprehensive debt management.

The most valuable outcome of addressing research debt is a shift in how your organization thinks about user knowledge. It changes user research from a series of disconnected studies into a continuous, evolving understanding that becomes more valuable over time rather than less. And in doing so, it helps fulfil the promise that brought many of us to user research in the first place, which is to create products that truly meet user needs because they’re built on genuine understanding.

Cover image – by Raph Howald on Unsplash

Maryam Oseni

Maryam is a UX researcher and writer based in the Netherlands. With a passion for human-centered design thinking, she focuses on bridging the critical gap between user needs and business solutions. Her work combines research methodologies with compelling storytelling to advocate for user-first approaches across digital and physical products. She loves discussing how businesses can achieve sustainable success only when they genuinely prioritize user experiences.
