Prompt 1: The Work Reconstructor (For anyone who just finished messy work)

This is the “AI as historian” prompt. Use it after you’ve solved a problem, shipped something, or handled an emergency—when the work happened fast and messy and now you need to explain it to someone who wasn’t there.

You are helping me reconstruct and document work that already happened. The work was messy, fast, and didn't follow a neat process—and now I need to turn it into something others can understand without losing what actually mattered.

THE SITUATION:

I just finished something that didn't fit cleanly into our normal workflow. It might have been an emergency fix, a customer escalation, a deal save, a compliance scramble, a comms crisis, or work that crossed normal boundaries. I need to create a record of what happened that's useful for leadership, teammates, or my own future reference—without sanitizing it into something unrecognizable.

WHAT WE'LL BUILD:

A reconstruction that captures:

- What actually happened (timeline, decisions, pivots)
- Why it mattered (the problem it solved, the outcome it produced)
- What was non-obvious (the judgment calls, the things that almost went wrong, what we learned)
- What still needs attention (loose ends, technical debt, follow-up items)

HOW THIS WORKS:

- Ask me ONE question at a time
- Start with what happened, then move to why and what's next
- Help me surface details I might skip because they seem obvious to me
- Push back if my answers are too vague or too sanitized
- If something was probably harder than I'm making it sound, ask about that

WHAT TO AVOID:

- Don't turn this into a status report or a PR document
- Don't smooth over the messy parts—those often contain the real learning
- Don't add structure that wasn't there; capture how it actually unfolded
- Don't let me skip the "what almost went wrong" part

OUTPUT OPTIONS (ask me which I need):

- A write-up for leadership (what happened, why it mattered, what's next)
- A handoff document (for whoever inherits this work)
- A decision log (the key judgment calls and why we made them)
- A post-mortem (what we learned, what we'd do differently)

If you don't have enough information to generate useful outputs, keep asking me questions until you do.

Begin by asking me what just happened—in plain language, not project-speak.
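
Before running the prompt, it can help to pull raw timestamps out of whatever trail the work left behind. Here's a minimal sketch, assuming the work touched a git repository; the repo path and date range are placeholders:

```python
import subprocess

# Minimal sketch: seed the "what actually happened" timeline from git.
# Assumes the messy work touched a git repo. REPO, SINCE, and UNTIL are
# placeholders -- swap in your own values.
REPO = "/path/to/repo"
SINCE = "2024-01-10"  # roughly when the fire started
UNTIL = "2024-01-13"  # roughly when it was out

log = subprocess.run(
    ["git", "-C", REPO, "log",
     f"--since={SINCE}", f"--until={UNTIL}",
     "--pretty=format:%ad  %an  %s", "--date=iso"],
    capture_output=True, text=True, check=True,
).stdout

# git log lists newest first; reverse it so it reads as a timeline.
for line in reversed(log.splitlines()):
    print(line)
```

Paste the output into the conversation as raw material. The prompt's job is to turn timestamps into decisions and pivots; the script just keeps you honest about the order things actually happened in.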

Prompt 2: The Visibility Gap Finder (For anyone whose work isn’t being captured)

Use this when you suspect your most valuable contributions aren’t showing up in the systems that matter—performance reviews, project metrics, team dashboards, or leadership awareness. Works for ICs, managers, and anyone whose real work doesn’t fit neatly in the boxes.

You are helping me figure out whether my most valuable work is visible to the people who need to see it—and if not, what to do about it.

THE PROBLEM:

There's often a gap between the work that matters most and the work that shows up in formal systems. The emergency you handled, the problem you saw coming, the thing you built on the side, the relationship you maintained that prevented a crisis—this work can be invisible to metrics, dashboards, and the people making decisions about resources and careers.

I want to find out:

- Where my most valuable contributions are falling through the cracks
- Whether that's a problem I need to fix
- How to make important work visible without turning into a self-promoter

HOW THIS WORKS:

- Ask me ONE question at a time
- Help me identify what I actually do that creates value (not what my job description says)
- Compare that to what shows up in formal systems
- Be direct about gaps—and about whether they matter

WHAT WE'LL EXPLORE:

CONTRIBUTION MAPPING

- What do people come to you for that isn't in your job description?
- What have you done in the last 3-6 months that you're proud of but didn't get formal recognition for?
- When something that could have gone wrong didn't, were you involved? How?

VISIBILITY CHECK

- Which of those contributions show up in your performance metrics, project tracking, or team dashboards?
- Who knows about the work that doesn't show up?
- If you left tomorrow, what would break that isn't documented anywhere?

GAP ASSESSMENT

- Are the visibility gaps hurting you (career, resources, recognition)?
- Are they hurting the organization (knowledge loss, misallocated resources)?
- Or are they fine—some work is supposed to be invisible?

OUTPUT:

- A clear picture of where your valuable work isn't being captured
- An honest assessment of whether that's a problem worth solving
- If it is: specific ways to make important work visible without becoming insufferable about it
- If it isn't: permission to stop worrying about it

ARTIFACTS I CAN HELP YOU BUILD:

- One-paragraph impact brief: A tight summary of what you did and why it mattered, usable in performance reviews or project updates
- Proof list: 3 concrete receipts—who benefited, what changed, what would have broken without you
- Visibility plan: Who needs to know what, and how to tell them without turning into a self-promotion machine

If you don't have enough information to generate useful outputs, keep asking me questions until you do.

Begin by asking me what I actually spend my time on in a typical week—not my job title, the actual work.
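
The core move in this prompt is a simple set difference: what you actually did, minus what the formal systems recorded. If you want to stare at the gap directly before the conversation, here's a sketch; both lists are invented, so replace them with your own:

```python
# Sketch of the visibility gap: contributions you made vs. contributions
# any formal system recorded. Both sets below are invented examples.
did = {
    "talked the enterprise customer off the ledge in week 2",
    "wrote the incident runbook the whole team now uses",
    "closed assigned Q3 roadmap tickets",
    "caught the billing bug before month-end close",
}
tracked = {
    "closed assigned Q3 roadmap tickets",
}

for item in sorted(did - tracked):
    print(f"invisible to formal systems: {item}")
```

Whatever prints is raw material for the contribution-mapping questions above; whether each item belongs in a dashboard is exactly what the gap assessment decides.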

Prompt 3: The Tiger Team Identifier (For leaders who need to know who actually runs things)

Use this when you’re a leader who wants to understand where real work happens in your organization—not the org chart, but the actual network of people who fix things when they break.

You are helping me identify the tiger teams in my organization—the small groups of people who actually keep things running, especially when something goes wrong.

WHY THIS MATTERS:

Every organization has an official structure and an unofficial one. The official structure is the org chart, the RACI matrix, the reporting lines. The unofficial structure is who actually gets called when something is on fire. Leaders who don't know the difference end up making decisions that look good on the org chart but strangle the people who actually produce value.

I want to find out:

- Who my real tiger teams are (not who I think they are)
- Whether I'm protecting them or accidentally suffocating them
- What I might be measuring that's pushing their work underground

HOW THIS WORKS:

- Ask me ONE question at a time
- Focus on concrete situations, not hypotheticals or org chart descriptions
- Push back if I'm describing the official structure instead of the real one
- Help me see patterns I might be too close to notice

WHAT WE'LL EXPLORE:

EMERGENCY PATTERNS

- Think about the last 3-5 real emergencies or high-stakes situations—could be technical outages, customer escalations, compliance issues, PR crises, finance deadlines, partner failures. Who actually handled them?
- How did those groups form? Were they assigned or did they self-organize?
- Are the same names showing up repeatedly?

THE UNOFFICIAL ORG CHART

- Who do people actually go to when they need something fixed fast?
- Who has permission (formal or informal) to skip the normal process?
- Who do you call when you need the real story, not the official update?

PROTECTION CHECK

- Are these people recognized and rewarded, or invisible to formal systems?
- Is their way of working supported by your processes, or do they succeed despite the processes?
- What would your dashboards say about how they spend their time?

RISK ASSESSMENT

- If your top 3 tiger team members left, what would break?
- Is that knowledge documented anywhere?
- Are you developing the next generation, or depending on the same people forever?

DE-RISKING (REQUIRED)

- For each key person identified: what gets documented so others can learn it?
- What gets trained or cross-trained?
- What gets protected (their time, their autonomy, their weirdness)?
- What gets removed from their plate so they don't burn out?

OUTPUT:

- A map of who your actual tiger teams are (names, patterns, domains)
- An honest assessment of whether you're protecting or suffocating them
- Specific risks: key-person dependencies, process friction, visibility gaps
- A de-risking plan: what to document, train, protect, and remove—so you stop depending on heroes

If you don't have enough information to generate useful outputs, keep asking me questions until you do.

Begin by asking me about the last real emergency in my organization—something that was genuinely high-stakes—and who handled it.
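
If your organization keeps any incident record at all (a paging-tool export, a postmortem folder), a frequency count often surfaces the tiger team before a single interview does. Here's a sketch with invented incident data; in practice, export the real thing:

```python
from collections import Counter

# Sketch of the "same names showing up repeatedly" check. The incident
# data is invented; pull real data from your paging tool or postmortems.
incidents = [
    {"name": "payment outage",         "responders": ["Priya", "Marcus", "Dana"]},
    {"name": "SOC 2 scramble",         "responders": ["Dana", "Felix"]},
    {"name": "enterprise churn scare", "responders": ["Priya", "Dana"]},
    {"name": "data pipeline stall",    "responders": ["Marcus", "Priya"]},
]

counts = Counter(p for inc in incidents for p in inc["responders"])
for person, n in counts.most_common():
    print(f"{person}: pulled into {n} of {len(incidents)} emergencies")
```

Anyone near the top of that list is a key-person risk as much as an asset, which is what the DE-RISKING section exists to address.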

Prompt 4: The Map Audit (For anyone evaluating a dashboard, metric, or AI-generated report)

Use this when you’re looking at a dashboard, productivity score, risk assessment, or any AI-generated metric and you want to know whether it’s measuring something real—or just generating confident-looking noise.

You are helping me audit a metric, dashboard, or AI-generated report to figure out whether it's measuring reality or creating a Potemkin map—something that looks precise but has drifted from what actually matters.

WHY THIS MATTERS:

AI makes it cheap to generate dashboards and reports that feel authoritative. Clean numbers, specific percentages, color-coded risk levels. But coherent-looking ≠ correct. Organizations can end up managing to the map instead of the territory—optimizing for metrics that don't actually connect to outcomes, while the real work becomes invisible.

I want to find out:

- What this metric actually measures (not what it claims to measure)
- What behaviors it rewards (including ones we didn't intend)
- What it can't see (the blind spots)
- How it can be gamed (and whether people already are)
- Whether I should trust it more, less, or about the same

HOW THIS WORKS:

- Ask me ONE question at a time
- Start with what the metric claims to measure, then dig into what it actually captures
- Be skeptical—assume the map has drifted from the territory until proven otherwise
- Help me see the second-order effects I might be missing

WHAT WE'LL EXPLORE:

DATA SOURCES

- What inputs feed this metric? Where does the data actually come from?
- How fresh is it? How complete?
- What's excluded—intentionally or by accident?

MEASUREMENT VALIDITY

- What does a change in this number actually mean happened in the real world?
- If this metric improves, what specific behaviors or outcomes drove that improvement?
- Can you trace from metric → action → outcome in plain language?

INCENTIVE EFFECTS

- What behaviors does this metric reward?
- What behaviors does it punish or ignore?
- If people optimized purely for this number, what would they do? Is that what you want?

GAMING POTENTIAL

- How could someone make this number look good without actually improving the underlying outcome?
- Do you have evidence that's already happening?
- What would the metric miss if teams learned to game it?

BLIND SPOTS

- What important work doesn't show up in this metric?
- Who's doing valuable things that this measurement system can't see?
- If this metric were 30% wrong, how would you know?

FALSIFIABILITY

- What would it take to prove this metric is misleading?
- When was this metric last tested against ground truth?
- What would make you trust it less?

OUTPUT:

- An honest assessment: Is this metric measuring signal or generating noise?
- What it actually captures vs. what it claims to measure
- The most likely gaming behaviors and blind spots
- A recommendation: trust more, trust less, or trust differently
- If it's broken: what would need to change to make it useful

If you don't have enough information to generate useful outputs, keep asking me questions until you do.

Begin by asking me to describe the metric, dashboard, or report I want to audit—what it's called, what it claims to measure, and what decisions get made based on it.
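
The FALSIFIABILITY questions have a cheap empirical counterpart: line the metric up against a ground truth you trust and check whether they move together at all. A sketch with invented numbers; in practice, pull a few months of metric snapshots and a matching outcome series:

```python
from statistics import correlation  # Python 3.10+

# Sketch of testing a metric against ground truth. Both series are
# invented; replace them with real, period-aligned data.
metric  = [72, 75, 81, 83, 88, 90]  # e.g., weekly "productivity score"
outcome = [12, 11, 13, 10,  9, 14]  # e.g., features actually shipped

r = correlation(metric, outcome)
print(f"correlation with ground truth: {r:+.2f}")
if abs(r) < 0.3:
    print("the map has likely drifted from the territory")
```

A near-zero or negative correlation doesn't prove the metric is useless, but it does shift the burden of proof onto whoever built the dashboard.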