TIER 1: SELF-CORRECTION TEMPLATES

Chain-of-Verification Template

[YOUR ANALYSIS REQUEST]

After providing your initial analysis, complete these verification steps:

1. List three specific ways your analysis could be incomplete, misleading, or incorrect
2. For each potential issue, cite specific evidence from [DOCUMENT/DATA] that either confirms or refutes the concern
3. Provide a revised analysis that incorporates verified corrections

Do not skip the verification stage. I need to see your self-critique before the final answer.

Usage note: The key is forcing enumeration of specific potential errors. A generic "check your work" instruction gets ignored. Requiring evidence citation prevents the model from generating vacuous self-critique.
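
If you call the model through an API rather than a chat window, this template drops into a thin wrapper. Below is a minimal sketch, assuming a hypothetical `call_llm(prompt) -> str` helper standing in for whatever client library you actually use:

```python
# Hypothetical helper - swap in whichever LLM client you actually use.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM API of choice")

COV_TEMPLATE = """{request}

After providing your initial analysis, complete these verification steps:

1. List three specific ways your analysis could be incomplete, misleading, or incorrect
2. For each potential issue, cite specific evidence from {source} that either confirms or refutes the concern
3. Provide a revised analysis that incorporates verified corrections

Do not skip the verification stage. I need to see your self-critique before the final answer."""

def chain_of_verification(request: str, source: str) -> str:
    """Fill the template and return the model's verified analysis."""
    return call_llm(COV_TEMPLATE.format(request=request, source=source))
```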

Adversarial Stress-Test Template

[YOUR INITIAL REQUEST AND MODEL RESPONSE]

Now attack your previous answer:

1. Identify five specific ways it could be wrong, incomplete, or likely to fail under adversarial conditions
2. For each vulnerability, rate severity (Critical/High/Medium/Low) and likelihood (Likely/Possible/Unlikely)
3. Propose specific revisions to address each issue
4. Provide the hardened version incorporating all improvements

Be aggressive in finding problems - I need stress-testing, not validation.

Usage note: Deploy this for high-stakes decisions where you need the model to find problems even if it has to stretch. The severity/likelihood framework prevents the model from treating all critiques as equally important.
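
Programmatically, this is naturally a two-pass flow: one call produces the initial answer, a second call attacks it. A sketch reusing the hypothetical `call_llm` stub from the previous example:

```python
STRESS_TEST_SUFFIX = """

Now attack your previous answer:

1. Identify five specific ways it could be wrong, incomplete, or likely to fail under adversarial conditions
2. For each vulnerability, rate severity (Critical/High/Medium/Low) and likelihood (Likely/Possible/Unlikely)
3. Propose specific revisions to address each issue
4. Provide the hardened version incorporating all improvements

Be aggressive in finding problems - I need stress-testing, not validation."""

def stress_test(request: str) -> str:
    """Two-pass flow: get an initial answer, then have the model attack it."""
    initial = call_llm(request)  # call_llm: hypothetical stub from the earlier sketch
    combined = f"Original request: {request}\n\nYour previous answer: {initial}{STRESS_TEST_SUFFIX}"
    return call_llm(combined)
```

Keeping the attack in a second call means the hardened version is produced with the full initial answer in context, and you retain both versions for comparison.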

Strategic Edge Case Template

I need you to [TASK]. Here are three calibration examples:

**BASELINE EXAMPLE**
Input: [Simple case where correct approach is obvious]
Correct Output: [What good analysis looks like]
Why this is correct: [Brief reasoning]

**FAILURE MODE EXAMPLE**
Input: [Case where the naive approach produces a false positive or false negative]
Incorrect Output: [What the wrong answer looks like]
Correct Output: [Actual right answer]
Why the naive approach fails: [Specific reason the obvious method breaks]

**EDGE CASE EXAMPLE**
Input: [Complex case similar to your actual problem]
Correct Output: [Known good answer]
Why this is tricky: [What makes this boundary case difficult]

Now apply this same reasoning to: [YOUR ACTUAL PROBLEM]

Usage note: This is the most labor-intensive template because you need to construct good edge cases. The ROI comes from reusing it across similar problems - build once per problem class, deploy hundreds of times.
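
One way to capture that build-once, deploy-often structure in code is a small renderer that holds the calibration examples for a problem class. A sketch - `CalibrationExample` and `build_edge_case_prompt` are illustrative names, not from any library:

```python
from dataclasses import dataclass

@dataclass
class CalibrationExample:
    label: str  # e.g. "BASELINE EXAMPLE" or "FAILURE MODE EXAMPLE"
    body: str   # the filled-in Input / Output / reasoning lines

def build_edge_case_prompt(task: str, examples: list[CalibrationExample], problem: str) -> str:
    """Render the calibration prompt for one concrete problem.

    Construct the examples once per problem class, then call this for
    every new problem in that class.
    """
    blocks = "\n\n".join(f"**{ex.label}**\n{ex.body}" for ex in examples)
    return (
        f"I need you to {task}. Here are three calibration examples:\n\n"
        f"{blocks}\n\n"
        f"Now apply this same reasoning to: {problem}"
    )
```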


TIER 2: META-PROMPTING TEMPLATES

Reverse Prompting Template

You are an expert prompt engineer. Your task is to write the single most effective prompt that would make an LLM solve this problem with maximum accuracy:

[DESCRIBE YOUR TASK AND OBJECTIVES]

Consider:
- What specific details and constraints matter for quality output
- What reasoning steps are essential to avoid common failure modes
- What output format would be most actionable for the end user
- What examples or edge cases would improve reliability

First, write the optimal prompt. Then execute that prompt.

Usage note: This works best for unfamiliar domains where you don't know what good looks like. The model's training data includes thousands of examples of effective prompts - leverage that knowledge instead of guessing.
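
You can also run the two stages as separate API calls, which lets you inspect and cache the generated prompt before executing it. A sketch, again using the hypothetical `call_llm` stub from the first example:

```python
REVERSE_META = """You are an expert prompt engineer. Your task is to write the single most effective prompt that would make an LLM solve this problem with maximum accuracy:

{task}

Consider:
- What specific details and constraints matter for quality output
- What reasoning steps are essential to avoid common failure modes
- What output format would be most actionable for the end user
- What examples or edge cases would improve reliability

Return only the prompt text, with no commentary."""

def reverse_prompt(task: str) -> str:
    """Stage 1: the model writes the prompt. Stage 2: that prompt is executed."""
    generated = call_llm(REVERSE_META.format(task=task))  # inspect or log this if useful
    return call_llm(generated)
```

Splitting the stages costs one extra call but gives you an auditable artifact: the generated prompt itself, which you can keep and reuse directly.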

Recursive Optimization Template

You are a recursive prompt optimizer.

Current prompt: "[YOUR EXISTING PROMPT]"

Task goal: [SPECIFIC OBJECTIVE AND SUCCESS CRITERIA]

Improve this prompt through three iterations:
- Version 1: Add missing constraints, specifications, and edge case handling
- Version 2: Resolve ambiguities, clarify expectations, improve structure
- Version 3: Enhance reasoning depth and output quality while maintaining clarity

Provide only the final Version 3 prompt, ready for production deployment.

Usage note: Use this to harden prompts before scaling them. The three-iteration structure prevents over-optimization - you get systematic improvement without the prompt becoming unwieldy.
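
A variant worth considering when you automate this: run the three passes as separate calls so each intermediate version is visible for logging and comparison. A sketch with the same hypothetical `call_llm` stub:

```python
OPTIMIZE_STEP = """You are a recursive prompt optimizer.

Current prompt: "{prompt}"

Task goal: {goal}

Improve this prompt by {focus}.

Return only the improved prompt, with no commentary."""

PASSES = [
    "adding missing constraints, specifications, and edge case handling",
    "resolving ambiguities, clarifying expectations, and improving structure",
    "enhancing reasoning depth and output quality while maintaining clarity",
]

def optimize_prompt(prompt: str, goal: str) -> str:
    """Run the three improvement passes as separate calls, feeding each
    version into the next, so every intermediate prompt can be logged."""
    for focus in PASSES:
        prompt = call_llm(OPTIMIZE_STEP.format(prompt=prompt, goal=goal, focus=focus))
    return prompt
```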


TIER 3: REASONING SCAFFOLD TEMPLATES