Purpose: Structure the three critical conversations that surface misalignments before they become crises in AI deployment.

How to use: Use these question sets to facilitate calibration sessions with your cross-functional team. Duplicate this page and customize questions for your specific deployment context.


🎯 When to Use These Checkpoints

Checkpoint 1: Before running AI analysis (Week 1 of diligence/planning)

Checkpoint 2: After AI analysis returns results (Before finalizing go/no-go decision)

Checkpoint 3: Before deployment commitment (Before announcing timeline or allocating resources)

Time investment: 30-60 minutes per checkpoint

ROI: Prevents the weeks of rework and relationship repair caused by misalignments that surface post-deployment


Checkpoint 1: Define Terms Before the AI Analyzes

🎯 Goal

Ensure all functions define key terms the same way before AI tools validate against those definitions.

👥 Who Should Participate

Legal, Product, Compliance, IT/Data Science, Operations (anyone who will interpret AI analysis results)

⏱️ Session Structure (45-60 minutes)

Introduction (5 min): "We're about to run AI analysis on [contracts/performance/risk/etc.]. Before we do, let's make sure we're all defining success the same way."

Term Definition (30-40 min): Work through each term below, surfacing where definitions diverge across functions

Documentation (10-15 min): Capture definitions and divergences explicitly