I didn’t want another “prompt library.” I wanted a way to think with AI that produces decisions I can defend in front of peers. This page is my answer: a minimal, falsifiable method that turns a vague idea into a short brief and a next step—fast, auditable, repeatable.
The core belief
LLMs don’t replace judgment; they amplify whatever thinking pattern you bring. If you show up vague, you get polished vagueness. If you show up structured and testable, you get speed without losing rigor. So the unit of progress isn’t a witty prompt; it’s a loop that enforces clarity, evidence, and a stop rule.
The loop, not the guru
We adapted the classical sequence—Elenchus → Aporia → Dialectic → Maieutics—into a time-boxed, tool-agnostic workflow, sketched in code after the list:
- Elenchus: kill vagueness. Define terms, list testable assumptions, and search for counterexamples and circularity.
- Aporia: name the ignorance. Turn unknowns into indicators and sources; record any KNOWNs with quotes + locators.
- Dialectic: compare rival explanations. Make predictions, cite evidence, expose weaknesses, and rank with a note on what would change your mind.
- Maieutics: ship a ≤180-word decision brief and a 5-step next action with a verification check; self-critique once, then act.
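Here is a minimal sketch of the loop in Python. Everything named here is an assumption for illustration: `ask()` stands in for whatever model call you use, and the prompt templates are compressed versions of the four phases. Timeouts, retries, and the time-box itself are omitted.

```python
# Minimal sketch of the four-phase loop. ask() is a stand-in for
# whatever LLM call you use; the prompts compress each phase's rules.

def ask(prompt: str) -> str:
    """Stand-in for your model call (API, local model, whatever)."""
    raise NotImplementedError

PHASES = [
    ("elenchus",  "Define every key term in: {idea}\n"
                  "List testable assumptions. Flag counterexamples "
                  "and circular reasoning. Answer NA if unknown."),
    ("aporia",    "Given: {prior}\nList UNKNOWNs as indicator + source "
                  "to check. List KNOWNs only with a quote + locator "
                  "(page/section/URL)."),
    ("dialectic", "Given: {prior}\nCompare rival explanations: "
                  "prediction, evidence, weakness for each. Rank them "
                  "and state what would change your mind."),
    ("maieutics", "Given: {prior}\nWrite a decision brief of at most "
                  "180 words, a 5-step next action with a verification "
                  "check, and one self-critique."),
]

def run_loop(idea: str) -> dict:
    """One pass: each phase feeds the next; every output is kept for audit."""
    transcript, prior = {}, idea
    for name, template in PHASES:
        prior = ask(template.format(idea=idea, prior=prior))
        transcript[name] = prior  # audit trail: nothing is overwritten
    return transcript

# Usage: transcript = run_loop("Should we ship feature X this quarter?")
```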
This isn’t about “being right”; it’s about being decidable.
Guardrails that matter
- Allow NA. If the model doesn’t know, it should say so. Honesty beats hallucination.
- Quotes + locators. A claim without a quote and a page/section/URL is opinion.
- Falsifiability. Every brief must contain a prediction with a metric, threshold, and timeframe.
- Structured output. JSON/CSV when extracting; short bullets for human reading.
- Self-consistency. When reasoning, sample several independent paths and keep the answer they converge on; see the sketches after this list.
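To make the first four guardrails checkable rather than aspirational, here is a sketch of a claim validator. The claim shape and the example numbers are invented for illustration, not a standard:

```python
# Sketch of a validator for the guardrails above. The claim/prediction
# shapes are illustrative; the example data is made up.

def valid_claim(claim: dict) -> bool:
    """A claim is either an honest NA or carries a quote + locator."""
    if claim.get("answer") == "NA":  # Allow NA: honesty beats hallucination
        return True
    return bool(claim.get("quote")) and bool(claim.get("locator"))

def falsifiable(prediction: dict) -> bool:
    """Every brief needs a prediction with metric, threshold, timeframe."""
    return all(prediction.get(k) for k in ("metric", "threshold", "timeframe"))

brief = {
    "claim": {"answer": "Churn is driven by onboarding drop-off",
              "quote": "38% of cancellations cite setup friction",   # hypothetical
              "locator": "Q3 support report, section 2"},            # hypothetical
    "prediction": {"metric": "30-day activation rate",
                   "threshold": ">= 55%",
                   "timeframe": "6 weeks after onboarding redesign"},
}
assert valid_claim(brief["claim"]) and falsifiable(brief["prediction"])
```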
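And a sketch of the self-consistency guardrail: sample several reasoning paths at nonzero temperature and keep the majority answer. `ask()` is the same stand-in as above, and the `ANSWER:` extraction is the part you would adapt to your own output format:

```python
# Self-consistency sketch: n sampled paths, majority-vote answer.
from collections import Counter

def self_consistent(prompt: str, ask, n: int = 5) -> str:
    """Sample n reasoning paths; return the most common final answer."""
    answers = []
    for _ in range(n):
        path = ask(prompt + "\nThink step by step, then end with 'ANSWER: <x>'.")
        # Extract the final answer line; adapt to your output format.
        answers.append(path.rsplit("ANSWER:", 1)[-1].strip())
    return Counter(answers).most_common(1)[0][0]

# Usage: self_consistent("Which rival explanation best fits?", ask, n=5)
```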
The governor (how we avoid analysis paralysis)
Socratic loops can run forever. We stop when:
- there’s one falsifiable prediction,
- one next step with verification,