Where are we now?
Most teams I meet aren’t at square one with AI anymore.
They’ve tried tools, they have licenses, maybe they’ve even run AI through a diverse list of customer research tasks.
And yet…
They still feel like they’re “dabbling” rather than “adopting” AI, because nothing works reliably every time.
<aside>
<img src="/icons/asterisk_yellow.svg" alt="/icons/asterisk_yellow.svg" width="40px" />
Here’s why: it’s the same pattern I see again and again:
- Everyone is winging it. Researcher 1 uses ChatGPT Enterprise one way, Researcher 2 another, and no one compares notes. No one knows what “best” looks like, because no one is measuring or evaluating results together.
- Findings aren’t replicable. A researcher presents insights, a stakeholder pushes back, and when they re-run the analysis in the LLM, the results change. No one knows which version to trust.
- The “AI as expert” myth. Teams assume AI will just “do the work.” But without systematic input and human oversight, it produces outputs that look polished but can’t stand up to scrutiny.
- Cross-team rollouts without guardrails. Leaders expect to “roll out” AI across disciplines, asking Design or Product teams to use AI for synthesizing customer discovery, but they provide no shared guidelines for making sure it actually works. The result: inconsistent practices and unreliable results spreading across the org.
</aside>
If this feels familiar… that’s exactly where this roadmap meets you. I want to get you from there to “this AI stuff actually DELIVERS HUGE VALUE for {key use cases}”. 🙌
〰️
Your Roadmap
Months 1–2: Build shared ground rules
Months 3–4: Test for consistency and trust
Months 5–6: Rolling out to other teams (coming soon)