<aside>
⚠️
Disclaimer: This insight is 100% AI-simulated.
Tools used: ChatGPT, Hotjar, Dovetail, Figma Make, Figma Slides
</aside>
Context
Users of AI tools are expected to choose from multiple models (e.g., GPT-3.5, GPT-4), but the process is unintuitive and offers no guided support. The decision, though critical, often goes unnoticed or is made on shallow assumptions.
User Behaviour
- Low confidence & awareness: Many users either didn’t realize model selection was required or didn’t feel confident doing so. In testing, 60% were unaware the feature existed, and 70% defaulted to GPT-3.5 regardless of task type.
- Default to familiar: In the absence of clarity, users defaulted to GPT-3.5 — not because it fit the task, but because it was the path of least resistance.
- Mismatched expectations: Users often noticed issues like verbosity or lack of nuance only after the interaction, when it was too late to adjust.
- Desire for guidance: Users repeatedly asked for clearer model comparisons, recommendations by task, and an easy way to switch models when results fall short.
Why It Matters
This gap in model selection fluency creates a compounding effect:
- Wasted time and effort from misaligned outputs.
- User frustration that could have been avoided with clearer guidance.
- Reduced trust in the AI system’s capabilities when the issue is actually in selection, not performance.
- Business impact: Low model selection accuracy leads to underutilization of advanced models like GPT-4o, lowers perceived AI capability, and increases rework rates by 45%, which directly impacts efficiency and adoption.
Design Implications