These prompts target the decision points where your stack actually determines output quality. When should you switch tools versus stick with what you have? When should you iterate versus start over? How do you chunk work to avoid hitting limits? Which tool should you use for this specific task?
These are the judgment calls that separate people who get value from their stack from people who just have a collection of tools. Each prompt is structured with multi-stage reasoning and verification loops because these decisions require thinking through trade-offs properly, not settling for quick surface-level answers. Use them when the decision matters.
<aside> 💡
Looking for the Kimi K2 Deck? Here it is: LLM Best Practices_ 201→301.pptx
</aside>
Tool Evaluation Prompt
You are helping me evaluate whether to switch from my current tool to a new tool. This is a structured evaluation, not a casual recommendation.
CONTEXT:
- Current tool: [TOOL NAME]
- Task I use it for: [SPECIFIC TASK]
- New tool being evaluated: [NEW TOOL NAME]
- Why I'm considering it: [WHAT IT CLAIMS TO BE BETTER AT]
EVALUATION FRAMEWORK:
Step 1 - Capability Assessment
Design three specific tests that would reveal meaningful performance differences between these tools. For each test:
- Describe the exact task I should try
- Explain what a "good" result looks like versus a "mediocre" result
- Identify what this test reveals about each tool's strengths
Do not give me generic tests like "try making a presentation." Give me tests that stress the specific capabilities that matter for my use case.
Step 2 - Failure Mode Analysis
Identify the three most likely ways this new tool will fail or disappoint. For each failure mode:
- Describe the specific scenario where this failure would appear
- Explain why this matters for my workflow
- Rate the likelihood (high/medium/low) and impact (critical/significant/minor)
Think adversarially. What are the ways this tool will let me down that aren't obvious from marketing materials?
Step 3 - Constraint Audit
Before I test anything, evaluate these constraints:
- Data governance: Where is data processed and stored? Does this violate corporate policies?
- Cost structure: What's the total cost including API calls, subscriptions, team seats?
- Integration requirements: What breaks in my current workflow if I switch?
- Collaboration impact: If I switch, can my team still work with me effectively?
For each constraint, give me a clear pass/fail and explain why.
Step 4 - Switching Cost Analysis
Calculate the true cost of switching:
- Learning curve: How many days of reduced productivity should I expect?
- Workflow disruption: What existing patterns need to be rebuilt?
- Context loss: What do I lose from my current tool's memory/history?
- Reversibility: If this doesn't work out, how hard is it to switch back?
Be realistic about the switching cost. Most people underestimate this.
Step 5 - Synthesis and Recommendation
Based on Steps 1-4, provide one of three recommendations:
A) SWITCH: The new tool is dramatically better (2x+ improvement) and switching costs are justified
B) ADD: The new tool is better for specific use cases, add it for specialized work while keeping current tool
C) STICK: The new tool is only marginally better or fails critical constraints, not worth switching
Support your recommendation with specific reasoning from the previous steps. Do not hedge. Pick one.
OUTPUT FORMAT:
Provide your analysis in sections matching Steps 1-5 above. Be specific, be direct, be ruthless about identifying problems. I need to make a decision, not feel good about possibilities.
This is for you. Run now.
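If you want to sanity-check the Step 5 logic outside the prompt, here is a minimal Python sketch of the same three-way rule. The 2x improvement bar comes from the prompt itself; the ten-day switching-cost ceiling and the field names are illustrative assumptions, not part of it.

```python
from dataclasses import dataclass

@dataclass
class Evaluation:
    improvement_factor: float   # Step 1: e.g. 2.0 means "twice as good" on your tests
    constraints_passed: bool    # Step 3: every hard constraint passed
    switching_cost_days: float  # Step 4: expected days of reduced productivity
    niche_wins: int             # tests where the new tool clearly won

def recommend(e: Evaluation) -> str:
    if not e.constraints_passed:
        return "STICK: fails a hard constraint, so capability is irrelevant"
    # The 10-day ceiling is an assumed stand-in for "switching costs are justified".
    if e.improvement_factor >= 2.0 and e.switching_cost_days <= 10:
        return "SWITCH: dramatic improvement and a justified switching cost"
    if e.niche_wins > 0:
        return "ADD: better for specific use cases; keep the current tool as default"
    return "STICK: marginal improvement, not worth the disruption"

print(recommend(Evaluation(2.5, True, 5, 3)))  # -> SWITCH: ...
```

Note the ordering: constraints eliminate before capability is even considered, which is exactly why Step 3 runs before you invest time in testing.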
Insight Extraction Prompt
You are helping me transition from research to writing. I've spent significant time researching a topic and now need to extract the essential insights to write about.
CONTEXT:
- Topic: [TOPIC]
- Time spent researching: [DURATION]
- What I'm writing: [article/report/presentation/brief]
- Target audience: [WHO WILL READ THIS]
- Target length: [WORD COUNT or SCOPE]
RESEARCH CONVERSATION TO ANALYZE:
[PASTE FULL RESEARCH CONVERSATION OR KEY SECTIONS]
EXTRACTION FRAMEWORK:
Step 1 - Identify Core Insights
Review the research conversation and identify the 5-10 most important insights. For each insight:
- State the finding in one clear sentence
- Explain why this matters to my target audience
- Cite the specific evidence that supports it
- Note any caveats or limitations
Do not summarize the research process. Do not include "we discussed" or "the conversation covered." Extract only the substantive findings.
Step 2 - Assess Insight Quality
For each insight you identified, evaluate:
- Is this actually new or interesting, or is it obvious/generic?
- Is the evidence strong enough to make this claim confidently?
- Does this insight connect to other insights in a meaningful way?
Flag any insights that are weak, obvious, or poorly supported. I need to know what's solid versus what needs more research.
Step 3 - Identify Gaps
Based on what I'm trying to write, what's missing from these insights?
- What questions remain unanswered?
- What evidence would strengthen weak insights?
- What counterarguments haven't been addressed?
Be specific about gaps that would make my writing incomplete or unconvincing.
Step 4 - Structure for Writing
Organize the insights into a logical sequence for writing:
- Which insights establish context or setup?
- Which insights are the main substance?
- Which insights are implications or conclusions?
- What's the narrative thread connecting them?
Don't just list insights. Show me how they build on each other.
Step 5 - Flag Verification Needs
For any insights that seem important but potentially questionable, tell me:
- What specifically needs verification before I write about it
- Where I should check for accuracy
- What the consequence would be if this insight is wrong
I'm accountable for every word I publish. Help me identify where I need to be extra careful.
OUTPUT FORMAT:
Section 1: Core Insights (numbered list, each with: insight, evidence, caveats)
Section 2: Quality Assessment (which insights are strongest, which are weakest, why)
Section 3: Gaps (what's missing, what needs more research)
Section 4: Writing Structure (the narrative sequence I should follow)
Section 5: Verification Checklist (what to double-check before publishing)
Be ruthlessly honest about quality. If an insight is weak or obvious, tell me. I'd rather know now than publish something mediocre.
This is for you. Run now.
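A minimal sketch of how the extraction output can be carried into the writing phase, assuming you (or the model) fill in the fields from Steps 1, 2, and 5. The field names and the three-level strength scale are illustrative assumptions, not part of the prompt.

```python
from dataclasses import dataclass, field

@dataclass
class Insight:
    finding: str                      # Step 1: the finding in one clear sentence
    evidence: list[str]               # the specific evidence that supports it
    caveats: list[str] = field(default_factory=list)
    strength: str = "medium"          # Step 2 verdict: "strong" / "medium" / "weak"
    needs_verification: bool = False  # Step 5 flag

def triage(insights: list[Insight]) -> dict[str, list[Insight]]:
    """Split insights into what is safe to write from and what needs work first."""
    return {
        "ready":  [i for i in insights if i.strength == "strong" and not i.needs_verification],
        "verify": [i for i in insights if i.needs_verification],
        "weak":   [i for i in insights if i.strength == "weak"],
    }
```

Keeping evidence and caveats attached to each finding is the point: when you sit down to write, you never have to re-open the research conversation to check what supported a claim.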
Iterate vs Restart Prompt
You are helping me decide whether to iterate on AI-generated output or start over. This is a structured decision, not a gut feeling.
CONTEXT:
- What I asked for: [DESCRIBE ORIGINAL REQUEST]
- Tool used: [WHICH AI TOOL]
- What I got: [PASTE THE OUTPUT]
DECISION FRAMEWORK:
Step 1 - Foundation Assessment
Analyze whether the fundamental approach is sound:
A) Core structure: Does the overall organization/architecture make sense?
B) Problem framing: Did the AI understand what I actually needed?
C) Key assumptions: Are the underlying assumptions correct?
D) Completeness: Are the essential elements present, even if poorly executed?
For each element, rate: SOUND, FIXABLE, or FUNDAMENTALLY WRONG.
If two or more elements are FUNDAMENTALLY WRONG, skip to Step 4 (Restart Protocol). Otherwise, continue to Step 2.
Step 2 - Iteration Analysis (if foundation is sound)
Identify specific problems that can be fixed through iteration:
For each problem:
- Describe exactly what's wrong
- Explain how to fix it (specific instruction, not vague "make it better")
- Estimate: minor fix (one iteration), moderate fix (2-3 iterations), or major fix (requires substantial rework)
If you identify more than 5 problems, or if any single problem requires "major fix," question whether iteration is actually more efficient than restart.
Step 3 - Iteration Path (if iterating)
Provide the exact prompts I should use to iterate:
Iteration 1: [SPECIFIC INSTRUCTION]
Iteration 2: [SPECIFIC INSTRUCTION] (if needed)
Iteration 3: [SPECIFIC INSTRUCTION] (if needed)
Each iteration should target one specific improvement. Don't try to fix everything at once.
Step 4 - Restart Protocol (if starting over)
If the foundation is wrong, explain:
A) What specifically is wrong with the approach
B) Why iteration won't fix it
C) What I should ask for instead (provide the actual revised prompt)
D) What context or constraints I failed to provide originally
Be direct about what I got wrong in my original request. I need to learn from this.
Step 5 - Decision and Reasoning
Make a clear recommendation:
DECISION: [ITERATE or RESTART]
REASONING:
- Summary of why this decision is correct
- Time estimate to get acceptable output with this approach
- Confidence level (high/medium/low) that this approach will work
If recommending ITERATE: Provide the specific iteration prompts from Step 3
If recommending RESTART: Provide the revised prompt from Step 4
OUTPUT FORMAT:
Follow the step structure above. Be direct. Don't hedge. I need a decision I can act on immediately.
CRITICAL: If you're unsure whether to iterate or restart, default to RESTART. Trying to fix fundamentally broken output wastes more time than starting over.
This is for you. Run now.
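The decision rule in Steps 1 and 2 is mechanical enough to sketch. The thresholds (two FUNDAMENTALLY WRONG elements, more than five problems, any major fix) come from the prompt; where Step 2 only says to question iteration, the sketch resolves that doubt to RESTART, in line with the CRITICAL default above. The example inputs are illustrative.

```python
def decide(ratings: dict[str, str], problems: list[str], has_major_fix: bool) -> str:
    """ratings maps foundation element -> 'SOUND' | 'FIXABLE' | 'FUNDAMENTALLY WRONG'."""
    wrong = sum(1 for r in ratings.values() if r == "FUNDAMENTALLY WRONG")
    if wrong >= 2:
        return "RESTART"  # Step 1 threshold: the foundation is broken
    if len(problems) > 5 or has_major_fix:
        return "RESTART"  # Step 2 warning, resolved per the CRITICAL default
    return "ITERATE"

print(decide(
    {"structure": "SOUND", "framing": "FIXABLE",
     "assumptions": "SOUND", "completeness": "FIXABLE"},
    problems=["intro buries the ask", "missing data source"],
    has_major_fix=False,
))  # -> ITERATE
```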
Deck Chunking Prompt
You are helping me plan how to chunk a presentation to avoid context window limits while maintaining narrative flow.
CONTEXT:
- Total slides needed: [NUMBER]
- Topic: [TOPIC]
- Audience: [WHO WILL SEE THIS]
- Complexity level: [high/medium/low — data density and depth of analysis]
- Tool I'm using: [Claude/ChatGPT/other]
CHUNKING FRAMEWORK:
Step 1 - Narrative Architecture
Before chunking, define the narrative structure:
A) What story is this presentation telling?
B) What are the 3-5 major narrative beats? (e.g., "establish problem," "explain our approach," "show results," "propose next steps")
C) How do these beats build on each other logically?
Map out the complete narrative arc. Chunking decisions flow from narrative structure, not arbitrary slide counts.
Step 2 - Natural Boundary Identification
For each narrative beat, identify:
- Which slides belong to this beat
- Why this is a natural boundary (where you'd pause if presenting)
- What context carries forward to the next beat
- What context is complete and doesn't need to continue
Good chunk boundaries are where:
- You've completed a thought
- The next section introduces new information or perspective
- A presenter would naturally take a breath or transition
Bad chunk boundaries are where:
- You're mid-explanation of a complex concept
- Data on one slide connects tightly to analysis on the next
- The narrative is building momentum toward a conclusion
Step 3 - Chunk Specification
Based on narrative boundaries, define each chunk:
CHUNK 1: Slides [X-Y]
- Narrative purpose: [one sentence]
- Key content: [3-5 bullets of what this chunk contains]
- Success criteria: [how I'll know this chunk is done well]
- Transition to next chunk: [what setup this provides for the next section]
CHUNK 2: Slides [X-Y]
[same format]
[Continue for all chunks]
Each chunk should be 5-8 slides maximum. If a narrative beat requires more than 8 slides, split it into sub-beats.
Step 4 - Context Management Strategy
For each chunk after the first, specify:
- What context from previous chunks is essential to reference
- How to provide that context efficiently (brief summary, not full detail)
- What can be assumed as "already covered" without re-explaining
This prevents you from burning tokens re-establishing context you've already built.
Step 5 - Chunk Execution Order
Recommend the order to build chunks:
Standard order: Build sequentially (1, 2, 3...)
Alternative order: Build core content first, then intro/conclusion
Explain your reasoning. Sometimes it's more efficient to build the substance first, then frame it.
Step 6 - Red Flags and Contingencies
Identify potential problems:
- Which chunk is most likely to hit context limits? Why?
- Which chunk might need further splitting if content is complex?
- If a chunk fails, what's the fallback approach?
Prepare me for where things might go wrong.
OUTPUT FORMAT:
Section 1: Narrative Architecture (the story structure)
Section 2: Chunk Specifications (detailed breakdown of each chunk following the template above)
Section 3: Context Management (what to reference across chunks)
Section 4: Execution Order (recommended building sequence)
Section 5: Contingency Planning (red flags and fallbacks)
CRITICAL CONSTRAINT: Each chunk must stand alone conversationally. I should be able to start a fresh conversation for each chunk without needing the full history of previous chunks.
This is for you. Run now.
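The sizing rule in Step 3 is mechanical enough to sketch. This naive splitter enforces the 8-slide ceiling; in practice you would split an oversized beat along its sub-beats rather than at a fixed count, as Step 3 says. Beat names and slide counts below are illustrative.

```python
def chunk_beats(beats: list[tuple[str, int]], max_slides: int = 8) -> list[tuple[str, int]]:
    """beats is a list of (beat_name, slide_count); returns chunks of <= max_slides."""
    chunks = []
    for name, count in beats:
        part = 1
        while count > max_slides:
            # Naive fixed-size split; prefer splitting at a real sub-beat boundary.
            chunks.append((f"{name} (part {part})", max_slides))
            count -= max_slides
            part += 1
        chunks.append((name if part == 1 else f"{name} (part {part})", count))
    return chunks

print(chunk_beats([("establish problem", 4), ("show results", 11), ("next steps", 3)]))
# -> [('establish problem', 4), ('show results (part 1)', 8),
#     ('show results (part 2)', 3), ('next steps', 3)]
```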
Collaborative Writing Prompt
You are my thought partner for refining a piece of writing. This is collaborative iteration, not rewriting.
CONTEXT:
- What I'm writing: [article/section/paragraph/argument]
- Target audience: [WHO WILL READ THIS]
- Purpose: [what this writing needs to accomplish]
- Voice/tone target: [professional/conversational/technical/etc.]
CURRENT DRAFT:
[PASTE THE WRITING I'M WORKING ON]
COLLABORATIVE FRAMEWORK:
Step 1 - Clarity Audit
Read the draft and identify specific clarity problems:
- Which sentences are confusing or ambiguous?
- Where does the logic jump without sufficient connection?
- What concepts need more explanation or less?
- Where do I use jargon or assume knowledge the audience might not have?
For each clarity problem, quote the specific text and explain exactly what's unclear.
Step 2 - Structure Analysis
Evaluate the structure:
- Does the opening hook effectively?
- Do ideas flow logically from one to the next?
- Are there tangents that should be cut or moved?
- Does the conclusion land effectively?
- Is anything in the wrong order?
Show me where structure breaks down and why. Suggest specific reorganization if needed.
Step 3 - Precision Check
Identify places where I'm being vague or imprecise:
- Where am I using qualifiers like "very," "really," "quite" that weaken the writing?
- Where could I be more specific with data, examples, or concrete details?
- Where am I hedging when I should be direct?
- Where am I overconfident and should add nuance?
Quote specific examples and show me how to make them more precise.
Step 4 - Voice Consistency
Assess whether voice is consistent:
- Does this sound like me, or like generic AI output?
- Are there phrases that feel off or unnatural?
- Is the tone appropriate for the audience and purpose?
- Where does the writing get stiff or overly formal when it should be conversational (or vice versa)?
Flag specific phrases that break voice consistency.
Step 5 - Strengthening Opportunities
Identify the strongest parts of this draft and explain why they work. Then show me how to bring the weaker parts up to that standard.
For 2-3 weak sections:
- Quote the weak version
- Explain why it's weak
- Show me a stronger version
- Explain what made the improvement work
Step 6 - Final Synthesis
Provide a prioritized list of improvements:
CRITICAL (must fix): [issues that make the writing unclear or undermine credibility]
IMPORTANT (should fix): [issues that noticeably weaken the writing]
OPTIONAL (nice to have): [polish that would enhance but isn't essential]
For each critical issue, give me the exact change to make.
OUTPUT FORMAT:
Follow the step structure above. Be specific with examples. Don't just say "this could be clearer" - show me exactly where and how to improve it.
CRITICAL: Do not rewrite sections for me. Show me what's not working and suggest specific improvements I can implement. I need to maintain ownership of the writing.
This is for you. Run now.
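Step 3's qualifier hunt can be pre-screened mechanically before the deeper review. A minimal sketch, assuming the word list below; the list is illustrative, so extend it to match your own verbal tics.

```python
import re

# Weakening qualifiers named in Step 3, plus a few common extras (assumed).
WEAK_QUALIFIERS = ["very", "really", "quite", "somewhat", "fairly", "just", "actually"]

def flag_qualifiers(draft: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs containing a weakening qualifier."""
    pattern = re.compile(r"\b(" + "|".join(WEAK_QUALIFIERS) + r")\b", re.IGNORECASE)
    return [(n, line) for n, line in enumerate(draft.splitlines(), 1) if pattern.search(line)]

for n, line in flag_qualifiers("This is really good.\nThe data shows a clear trend."):
    print(f"line {n}: {line}")
# -> line 1: This is really good.
```

A regex pass only finds candidates; whether a qualifier weakens or genuinely hedges is still the judgment call the prompt asks the model to make.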
Tool Selection Diagnostic Prompt
You are helping me choose the right tool from my stack for a specific task. This is constraint-based selection, not "best tool" thinking.
TASK DESCRIPTION:
[Detailed description of what I need to do]
MY CONSTRAINTS:
- Data governance: [public information/personal/corporate/regulated industry]
- Timeline: [how urgent is this]
- Collaboration needs: [solo work/need to share with team/client deliverable]
- Output requirements: [format, quality bar, any specific needs]
- Budget: [any cost constraints]
- Integration needs: [does this need to work with other tools/systems]
MY AVAILABLE STACK:
[List the tools in your stack that are potentially relevant]
SELECTION FRAMEWORK:
Step 1 - Constraint Filtering
Apply hard constraints first to eliminate tools that won't work:
For each potentially relevant tool, evaluate:
- Does it violate data governance requirements? (ELIMINATES if yes)
- Does it fail to meet output format requirements? (ELIMINATES if yes)
- Is it too expensive given budget constraints? (ELIMINATES if yes)
- Does it require integration that doesn't exist? (ELIMINATES if yes)
List which tools are eliminated and why. These are non-negotiable constraints.
Step 2 - Capability Matching
For tools that passed constraint filtering, evaluate fit:
For each remaining tool:
- What is it genuinely excellent at?
- How well does that match my specific task requirements?
- Where will it likely struggle or fail on this task?
- What's my confidence level (high/medium/low) it can deliver what I need?
Rate each tool: STRONG FIT, ACCEPTABLE FIT, or POOR FIT.
Step 3 - Failure Mode Assessment
For tools rated STRONG FIT or ACCEPTABLE FIT, identify specific failure modes:
For this specific task:
- Where is this tool most likely to break down?
- At what point in the workflow will I hit limits?
- What workaround should I have ready before I hit that failure?
Better to know the failure mode in advance than discover it mid-task.
Step 4 - Workflow Design
For the top 1-2 tool options, design the specific workflow:
Tool: [NAME]
Workflow steps:
1. [First step with this tool]
2. [Second step]
3. [Where to chunk/pause if needed]
4. [How to handle tool limitations]
5. [Final output and any necessary cleanup]
Known failure point: [where it will likely break]
Workaround: [what to do when it breaks]
Step 5 - Recommendation
Make a clear, single recommendation:
RECOMMENDED TOOL: [specific tool name]
REASONING:
- Why this tool over alternatives
- Where it will excel on this task
- Where it will struggle and how to work around it
- Estimated time to completion
- Confidence level in this recommendation
ALTERNATIVE (if primary fails): [backup option and when to switch]
OUTPUT FORMAT:
Follow the step structure above. Be direct about trade-offs. Don't give me a list of options - give me one clear recommendation with reasoning.
CRITICAL: If no tool in my stack is well-suited for this task, tell me that directly. Recommend I either add a new tool or approach the task differently. Don't force a poor fit.
This is for you. Run now.
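Steps 1 and 2 reduce to a filter-then-rank pass that is easy to sketch. Tool names, constraint keys, and the 0-2 fit scores below are illustrative assumptions, not a real catalog; the "no fit" branch mirrors the CRITICAL note above.

```python
def select_tool(tools: dict[str, dict]) -> str:
    # Step 1: any failed hard constraint eliminates the tool outright.
    survivors = {
        name: spec for name, spec in tools.items()
        if all(spec["constraints"].values())
    }
    if not survivors:
        return "NO FIT: no tool in the stack passes the hard constraints"
    # Step 2: among survivors, pick the strongest capability fit.
    # fit: 0 = POOR, 1 = ACCEPTABLE, 2 = STRONG (assumed numeric encoding)
    best = max(survivors, key=lambda name: survivors[name]["fit"])
    return f"RECOMMENDED: {best}"

print(select_tool({
    "Claude":  {"constraints": {"governance": True,  "budget": True}, "fit": 2},
    "ChatGPT": {"constraints": {"governance": False, "budget": True}, "fit": 2},
}))  # -> RECOMMENDED: Claude
```

The filter-before-rank order matters: a tool that fails governance is out no matter how strong its capability score, which is why the prompt calls those constraints non-negotiable.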