<aside>
⚠️
Disclaimer: This case study is 100% AI-simulated.
Tools used: ChatGPT, Hotjar, Dovetail, Figma Make, Figma Slides
</aside>
Problem Framing
“Users struggle to confidently select the most suitable ChatGPT model due to limited knowledge, unclear guidance, and a lack of behavioral transparency. This results in suboptimal outcomes, rework, and hesitation in high-value workflows.”
Key Findings Summary (from User Interviews)
1. Confidence & Expertise Levels
- The majority of users lack confidence when choosing a model.
- Only 2 out of 5 users felt confident (rated 4+); 3 users were uncertain (rated 3 or lower).
- Some users didn’t even know that model selection was an option, let alone necessary.
- Simulated metric: ~60% of users are unaware of model selection, and incorrect choices lead to an average of 8 minutes lost per task.
Implication: Users need support, clarity, and just-in-time education to confidently select the right model.
2. Current Selection Criteria
- Most-used factors:
- Accuracy / reliability
- Response speed
- Default bias is high, however: users often stick with the default model because they don’t understand the differences between models.
- Simulated metric: 70% of users default to GPT-3.5 regardless of task type, missing opportunities to leverage model strengths.