Crabtree’s Framework for Evaluating Human‑Centered Research
Dr Maria Panagiotidi | 18 April 2025
Qualitative and quantitative research answer different questions—evaluating them by the same rules does everyone a disservice.
Positivism vs. Interpretivism
“How many users?” is the wrong question
Value lies not in sample size but in the depth and cultural context of participants’ experiences.
Qualitative rigour has its own criteria
Crabtree (2025) proposes five evaluation dimensions tailored to human‑centred research.
Dimension | What to Ask | UX Research Application |
---|---|---|
1. Methodological Transparency | How were participants selected and sessions conducted to mitigate bias? | Explain recruitment criteria, settings (e.g. in‑home vs. remote), and participant diversity. |
2. Evidence‑Based Insights (Apodicity) | Does the insight feel self‑evident when supported by concrete examples? | Use participant quotes, video clips, and detailed observations rather than broad statements. |
3. Conceptual Understanding (Sensitising Concepts) | Does the research surface compelling new frameworks or patterns? | Introduce analytic concepts (e.g. “capability testing”) that deepen understanding beyond metrics. |
4. Design Applicability (Analytic Reach & Utility) | Can findings directly inform design decisions? | Translate themes into actionable recommendations (e.g. redesign account linking to reduce anxiety). |
5. Business Relevance | Do insights address critical business goals or user pain‑points? | Link qualitative findings (e.g. goal‑setting abandonment) to metrics like conversion and retention. |
“Studying humans demands evaluation criteria that honour context, meaning, and interpretive depth.”
I’ve only been on one side of this fence (the qualitative side), and Crabtree’s framework is hugely helpful. It gives me the language to say, “It’s not about nailing down 72% of users; it’s about why those users behave that way.”
When a stakeholder asks, “But is this statistically significant?” redirect them to questions about transparency (“How did we recruit?”) and applicability (“How will this reduce our drop‑off rate?”). This reframing turns dismissive objections into productive conversations about research quality.
This framework is most useful when:
✅ Stakeholders demand large samples or p‑values for qualitative studies
✅ You need to defend small‑n interviews with strong rigour