Use these prompts in order (1-4). Each is designed to interview you one question at a time. These are hefty prompts, so set aside dedicated time to work through them; a single prompt may take an hour, but the time you invest here will let you make progress on memory systems much more quickly.


## Prompt 1: Memory Architecture Designer

**Purpose:** Design a complete memory strategy for AI-assisted work
**Method:** Conversational interview leading to architecture document
**Output:** Complete memory architecture in markdown

---

### Question Flow:

**Context Setting**

1. "What kind of work do you do? Describe your role and primary responsibilities."

2. "Who else is involved in your work? Are you solo, part of a small team, or in a larger organization?"

3. "What AI tools are you currently using or planning to use?" 
   *(e.g., Claude, ChatGPT, Cursor, other)*

**Constraint Gathering**

4. "Are there any compliance, confidentiality, or data handling requirements I should know about?"
   *(e.g., client data restrictions, industry regulations, internal policies)*

5. "What's your technical comfort level?"
   - I'm comfortable with APIs, scripts, and technical setups
   - I prefer UI-based tools and simple workflows
   - Somewhere in between

6. "How much time can you realistically invest in setting up and maintaining this system?"
   - Minimal (needs to be nearly automatic)
   - Moderate (willing to do weekly maintenance)
   - Substantial (this is a priority project)

**Lifecycle Inventory**

7. "Let's map out the types of information you work with. For each category, tell me if it exists in your work:"

   - **Profile & Preferences:** Your work style, quality standards, recurring constraints
   - **Work Playbooks:** Repeatable processes, rubrics, definitions, methodology
   - **Reference Library:** Domain knowledge, technical specs, terminology
   - **Project Context:** Active project requirements, constraints, deliverables
   - **Session State:** Temporary context for current conversation
   - **Decision History:** Record of choices made and why
   - **Interaction Logs:** What worked/didn't work in past AI sessions

8. "For each type that exists: how often does it change?"
   - Permanent (set once, rarely changes)
   - Evergreen (updates quarterly or annually)
   - Project-scoped (lives for weeks/months, then archives)
   - Session-scoped (disposable after conversation ends)

**Storage Decisions**

9. "Where are you comfortable storing different types of information?"
   - Native AI memory/projects (if available)
   - Personal knowledge base (Notion, Obsidian, etc.)
   - Cloud storage (Google Drive, Dropbox)
   - Version control (GitHub, GitLab)
   - Local files only
   - Mix of above

10. "For high-stakes information—facts where being wrong causes real problems—do you need:"
    - Original source links for verification
    - Version history/audit trails
    - Separate verification step before use
    - All of the above

**Retrieval Patterns**

11. "Think about how you typically start working with an AI. Do you usually:"
    - Start fresh each time (no persistent context)
    - Resume ongoing projects (need project continuity)
    - Switch between multiple active projects
    - Mix of routine and novel work

12. "When you need the AI to recall something, would you rather:"
    - Have it pull automatically from memory
    - Explicitly paste context into each conversation
    - Hybrid: automatic for some, manual for others

**Portability Requirements**

13. "How important is it that your memory system works across different AI tools?"
    - Critical (I use multiple tools daily)
    - Important (I may switch tools eventually)
    - Not important (I'm committed to one tool)

14. "If you had to migrate everything to a new tool tomorrow, what format would make that easiest?"
    - Markdown files
    - Structured JSON/YAML
    - Plain text
    - Don't know / doesn't matter
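To make the trade-off concrete, here is a minimal sketch of the same profile entry rendered two ways: markdown for human readability and easy pasting into any tool, and JSON for lossless scripted migration. The field names and values are illustrative placeholders, not a required schema.

```python
import json

# A hypothetical profile entry kept as plain data; names are illustrative.
profile = {
    "writing_style": "direct, active voice",
    "quality_standards": ["cite sources", "define jargon on first use"],
    "last_reviewed": "2025-01-15",
}

# Markdown rendering: human-readable, pastes into any AI tool.
markdown = "## Profile\n" + "\n".join(
    f"- **{key.replace('_', ' ').title()}:** "
    + (", ".join(value) if isinstance(value, list) else value)
    for key, value in profile.items()
)

# JSON rendering: round-trips exactly, so a migration script can
# regenerate tool-specific formats from one canonical source.
as_json = json.dumps(profile, indent=2)

print(markdown)
print(as_json)
```

Keeping the canonical copy in plain data or markdown means any future tool migration is a rendering problem, not a rewriting problem.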

**Implementation Planning**

15. "What's your biggest pain point right now with AI context management?"

16. "If you could only fix one thing in the next two weeks, what would have the most impact?"

---

### Output Structure:

Based on your answers, I'll create a memory architecture document containing:

1. **Architecture Overview:** Your personalized memory system design
2. **Storage Map:** What goes where and why
3. **Lifecycle Rules:** When context gets created, updated, archived, deleted
4. **Retrieval Patterns:** How each context type gets queried
5. **Verification Protocol:** What requires fact-checking and how
6. **Portability Plan:** How to export/import across tools
7. **Implementation Roadmap:**
   - Phase 1 (Week 1-2): Quick wins
   - Phase 2 (Week 3-4): Core infrastructure
   - Phase 3 (Month 2+): Advanced features
8. **Maintenance Schedule:** What needs updating and when

Ready to begin? Let's start with question 1: What kind of work do you do?

THIS IS FOR YOU: BEGIN ASKING QUESTIONS ONE AT A TIME NOW.


## Prompt 2: Context Library Builder

**Purpose:** Create permanent and evergreen memory documents
**Method:** Conversational interview extracting core work knowledge
**Output:** Three formatted markdown documents ready to use across AI tools

---

### Question Flow:

**Introduction**

"I'm going to help you build three core documents that will serve as your permanent AI context library:
1. **Profile & Preferences** - Who you are and how you work
2. **Work Playbooks** - Your repeatable processes and standards
3. **References** - Domain knowledge and technical specs you use regularly

Let's build these one at a time."

---

### Part 1: Profile & Preferences

**Writing & Communication Style**

1. "How would you describe your writing style?"
   - Formal, technical, academic, conversational, direct, etc.

2. "What are your non-negotiable quality standards for written work?"
   *(e.g., "Always cite sources," "No jargon unless defined," "Active voice preferred")*

3. "Are there specific things you want AI outputs to avoid?"
   *(e.g., excessive formatting, generic phrases, certain tones)*

4. "What level of detail do you typically need in responses?"
   - High-level summaries
   - Detailed explanations with examples
   - Depends on context

**Work Constraints & Preferences**

5. "What are recurring constraints in your work?"
   *(e.g., word limits, deadline patterns, budget ranges, client requirements)*

6. "Are there tools, formats, or platforms you always use?"
   *(e.g., "All docs in Google Docs," "Code in Python," "Slides for executives")*

7. "What's your typical workflow or work rhythm?"
   *(e.g., morning strategy, afternoon execution; batch similar tasks; etc.)*

**Domain Context**

8. "What's your background? What expertise do you bring to your work?"
   *(Education, prior roles, specialized knowledge)*

9. "What shouldn't an AI assume you know?"
   *(Areas where you want explanations, not shortcuts)*

10. "What shouldn't an AI explain because you definitely already know it?"
    *(Your areas of deep expertise)*

---

### Part 2: Work Playbooks

**Repeatable Processes**

11. "What tasks do you do repeatedly where the process is generally the same?"
    *(e.g., "Weekly status reports," "Client onboarding," "Code reviews")*

12. "For each repeatable task, walk me through your standard process."
    *(I'll ask follow-up questions for each one you mention)*

**Quality Rubrics**

13. "How do you evaluate if your work is 'good enough to ship'?"
    *(What's your internal checklist?)*

14. "Are there specific rubrics, frameworks, or evaluation criteria you use?"
    *(e.g., "Use the STAR method for examples," "Apply the pyramid principle")*

**Definitions & Terminology**

15. "Are there terms, acronyms, or concepts specific to your work that an AI might misunderstand?"
    *(Internal terminology, industry jargon, role-specific definitions)*

16. "For each term you mentioned, what's the precise definition you use?"

**Decision Frameworks**

17. "When you're facing a common decision point in your work, what framework do you use?"
    *(e.g., "Prioritize by impact/effort matrix," "Always validate with client first")*

---

### Part 3: References

**Domain Knowledge**

18. "What bodies of knowledge do you draw on regularly in your work?"
    *(e.g., specific technical standards, industry frameworks, research areas)*

19. "For each knowledge area, what are the key concepts or principles I should know?"

**Technical Specifications**

20. "Are there technical specs, APIs, or standards you reference frequently?"
    *(e.g., "Always use Material Design 3 guidelines," "RESTful API standards")*

21. "Where can these be found? Do you want links included for verification?"

**Client/Project Templates**

22. "Do you have standard templates or structures you reuse?"
    *(e.g., proposal template, project brief format, report structure)*

23. "For each template, what are the required sections and what goes in each?"

**External Resources**

24. "What external resources do you reference most often?"
    *(Books, websites, documentation, style guides)*

25. "Should these be cited in a specific format?"

---

### Formatting & Maintenance

**Version Control**

26. "How do you want to track changes to these documents over time?"
    - Manual "Last updated" dates
    - Version numbers
    - Change log section
    - No tracking needed

**Usage Examples**

27. "For each playbook or reference, can you give me an example of when you'd use it?"
    *(This helps me write clearer documentation)*

**Portability Check**

28. "Let's verify: these documents need to work in:"
    *(List the AI tools you mentioned earlier)*
    "Are there any tool-specific formatting requirements I should know about?"

---

### Output Structure:

I'll create three markdown files:

**1. profile-and-preferences.md**
- Writing style guidelines
- Quality standards
- Work constraints
- Domain expertise map
- What to explain vs. what to assume
- Last reviewed: [date]

**2. work-playbooks.md**
- Repeatable processes (step-by-step)
- Quality rubrics
- Terminology definitions
- Decision frameworks
- Usage examples for each
- Last reviewed: [date]

**3. references.md**
- Domain knowledge summaries
- Technical specifications with links
- Templates and structures
- External resource citations
- Organized by category
- Last reviewed: [date]

Each file will include:
- Clear section headers
- Portable markdown formatting
- Examples of good vs. bad usage
- Cross-references where relevant
- Metadata for version tracking
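One lightweight way to handle the version-tracking metadata is a YAML-style header prepended to each markdown file. The sketch below assumes illustrative field names (`version`, `last_reviewed`); they are not a required schema, just one convention that most tools and knowledge bases tolerate.

```python
from datetime import date

def stamp_metadata(body: str, version: str, reviewed: date) -> str:
    """Prepend a YAML-style metadata header to a markdown document.

    A sketch of one way to track 'last reviewed' dates; the field
    names are illustrative, not a required schema.
    """
    header = (
        "---\n"
        f"version: {version}\n"
        f"last_reviewed: {reviewed.isoformat()}\n"
        "---\n\n"
    )
    return header + body

doc = stamp_metadata("# Work Playbooks\n...", "1.2", date(2025, 1, 15))
print(doc)
```

Because the header is plain text, it survives copy-paste into any AI tool and can be parsed later if you ever automate staleness checks.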

Ready to start? Let's begin with Profile & Preferences, question 1: How would you describe your writing style?

THIS IS FOR YOU: BEGIN ASKING QUESTIONS ONE AT A TIME NOW.


## Prompt 3: Project Brief Compiler

**Purpose:** Transform messy project information into clean, AI-optimized briefs
**Method:** Conversational extraction and verification
**Output:** Structured project brief ready to paste into AI context windows

---

### Question Flow:

**Introduction**

"I'm going to help you create a clean project brief from whatever materials you have—messy notes, meeting transcripts, scattered documents, or just your recollection. The goal is a concise, fact-checked brief that gives an AI everything it needs to help you with this project.

We'll go through this together, and I'll help you separate confirmed facts from assumptions. Let's get started."

---

### Input Gathering

1. "What project are we creating a brief for? Give me a working title or description."

2. "What materials do you have about this project?"
   - Meeting notes or transcripts
   - Existing documents or specs
   - Email threads
   - Just my memory of conversations
   - Mix of above

3. "Go ahead and paste/share whatever you have. Don't worry about it being messy—I'll help you extract what matters."
   *(Wait for user to provide materials)*

---

### Goal & Audience Extraction

4. "In one sentence: what is this project supposed to achieve?"
   *(The core outcome)*

5. "Who is this for? Who's the primary audience or beneficiary?"

6. "What does success look like? How will you know when this is done?"

7. "Is there a specific deliverable format required?"
   *(e.g., document, presentation, code, system, design)*

---

### Canonical Facts Verification

8. "Let me list the factual claims I found in your materials. For each one, tell me if it's **confirmed** or **assumed**:"
   *(I'll list things like: IDs, dates, metrics, names, technical requirements, budget/resource constraints)*

   For each fact:
   - "Is this confirmed? Where did it come from?"
   - "If assumed, do we need to verify it before proceeding?"

9. "Are there any links, reference documents, or source materials that should be attached to this brief?"

10. "Are there specific numbers, metrics, or technical details that are non-negotiable?"
    *(The facts where being wrong would break everything)*

---

### Scope Boundaries

11. "What is explicitly **in scope** for this project?"
    *(What you're definitely doing)*

12. "What is explicitly **out of scope**?"
    *(What you're definitely NOT doing, or 'not in this phase')*

13. "Are there any edge cases or boundary scenarios we should clarify now?"
    *(The gray areas that cause confusion later)*

---

### Constraints & Prior Decisions

14. "What constraints are you working within?"
    - Timeline/deadline
    - Budget/resources
    - Technical limitations
    - Policy/compliance requirements
    - Team capacity
    - Other

15. "Have any major decisions already been made that shouldn't be revisited?"
    *(e.g., "We're using Python, that's final," "Client already approved the approach")*

16. "Are there any decisions still pending that could change the direction?"

---

### Acceptance Criteria

17. "What are the specific criteria for accepting this work as complete?"
    *(Not vague goals—concrete yes/no checkpoints)*

18. "Who needs to approve or sign off on the deliverable?"

19. "What would cause you to reject the work or send it back for revision?"

---

### Risks & Unknowns

20. "What are the biggest risks or uncertainties in this project?"

21. "What information is still missing that you need to track down?"

22. "If you had to flag one thing that could derail this project, what would it be?"

---

### Context for AI Assistance

23. "What specifically do you need AI help with on this project?"
    - Research and information gathering
    - Drafting or writing
    - Technical implementation
    - Analysis or evaluation
    - Review and feedback
    - Other

24. "Is there background context an AI would need to understand your industry, domain, or technical environment?"
    *(Things that aren't project-specific but are necessary context)*

---

### Format Optimization

25. "How long should this brief be?"
    - As short as possible (just key facts)
    - Moderate (enough detail to avoid questions)
    - Comprehensive (everything someone would need)

26. "Do you want this optimized for:"
    - Pasting into a single conversation
    - Reusing across multiple AI sessions
    - Sharing with team members
    - All of the above

---

### Output Structure:

I'll create a structured project brief in markdown containing:

**Header:**
- Project title
- Last updated date
- Status (planning/active/review)

**1. Goal & Audience**
- Primary objective (1 sentence)
- Audience/beneficiary
- Success criteria

**2. Deliverable**
- Format and specifications
- Acceptance criteria
- Approval process

**3. Scope**
- In scope (bullet list)
- Out of scope (bullet list)
- Key boundaries/edge cases

**4. Confirmed Facts**
- Canonical data (IDs, dates, metrics)
- Technical requirements
- Resource constraints
- Source links where applicable

**5. Assumptions Requiring Verification**
- Unconfirmed claims
- Pending decisions
- Information gaps

**6. Prior Decisions**
- Choices already made
- Rationale (where relevant)

**7. Constraints**
- Timeline
- Budget/resources
- Technical/policy limitations

**8. Risks & Unknowns**
- Key uncertainties
- Potential blockers

**9. Context for AI**
- What you need help with
- Domain/technical background
- Relevant reference materials

**Token count estimate:** ~[X] tokens
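The token estimate can be approximated without running a real tokenizer. A common rough heuristic for English prose is about 4 characters per token; actual counts vary by tokenizer and can differ substantially for code or non-English text, so treat this as a sizing aid, not an exact figure.

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate using the common ~4-characters-per-token
    heuristic for English prose. Real tokenizers will differ,
    sometimes substantially, for code or non-English text."""
    return max(1, round(len(text) / chars_per_token))

brief = "Goal: ship the Q3 report by June 30. Audience: exec team."
print(estimate_tokens(brief))  # rough size of a one-line brief
```

If the estimate pushes past your tool's comfortable context size, that's the signal to choose the "as short as possible" option above.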

**Verification checklist:** 
- [ ] All dates confirmed
- [ ] All IDs/metrics verified
- [ ] Scope boundaries clear
- [ ] Acceptance criteria specific
- [ ] Assumptions flagged

Ready to start? Go ahead and share what you have about your project.

THIS IS FOR YOU: BEGIN ASKING QUESTIONS ONE AT A TIME NOW.


## Prompt 4: Retrieval Strategy Planner

**Purpose:** Design reusable retrieval patterns across your task spectrum
**Method:** Conversational mapping of work modes and information needs
**Output:** Retrieval strategy matrix—task type × retrieval approach × verification needs

---

### Question Flow:

**Introduction**

"I'm going to help you design a retrieval strategy that works across all the different types of work you do. Instead of figuring out what to pull each time, we'll create patterns you can reuse.

We'll map your task types, understand how your information needs shift between them, and design retrieval approaches that match each mode of work."

---

### Task Spectrum Mapping

1. "What are the main types of tasks you do regularly in your work?"
   *(e.g., client deliverables, technical troubleshooting, strategic planning, routine execution, research, review/QA)*

2. "For each task type you mentioned, roughly what percentage of your time do you spend on it?"
   *(Helps prioritize which retrieval patterns matter most)*

3. "Are there seasonal or cyclical patterns to your work?"
   *(e.g., quarterly planning, monthly reporting, project phases)*

---

### Context Needs by Task Type

4. "Let's go through each task type. For [TASK TYPE 1], what information do you typically need at hand?"
   *(Repeat for each task type mentioned)*
   
   Prompt for:
   - Background/reference material
   - Prior work examples
   - Constraints and requirements
   - Process guidelines
   - Quality standards
   - Historical decisions

5. "For [TASK TYPE 1], do you need comprehensive context or just key facts?"
   - Broad context (need to understand the full picture)
   - Targeted specifics (just the facts I need to execute)
   - Both at different stages

---

### Mode-Based Information Patterns

6. "When you're in **planning mode** (exploring options, designing an approach), what kind of information helps most?"
   - Wide-ranging examples and possibilities
   - High-level patterns and principles
   - Prior similar projects
   - Constraints to work within

7. "When you're in **execution mode** (implementing something with clear requirements), what do you need?"
   - Exact specifications and requirements
   - Step-by-step process
   - Quality checklist
   - Templates or starting points

8. "When you're in **review or debugging mode** (figuring out what went wrong, tracing decisions), what matters?"
   - Decision history and rationale
   - What was tried before
   - Event timeline
   - Original requirements vs. what was delivered

9. "Do you tend to work in one mode at a time, or do you switch modes within a single work session?"

---

### Precision Requirements

10. "Which task types require high precision—where being wrong would cause real problems?"
    *(e.g., client deliverables, compliance work, technical implementation)*

11. "For high-precision tasks, what types of facts absolutely must be verified?"
    - Client names, IDs, dates, metrics
    - Technical specifications
    - Pricing or financial data
    - Legal or compliance requirements
    - Prior commitments or decisions

12. "Which task types can tolerate approximate recall or directional guidance?"
    *(e.g., brainstorming, early planning, exploratory research)*

---

### Current Pain Points

13. "Where does retrieval currently go wrong for you? What gets missed?"
    - Can't find relevant past work
    - Get buried in too much context
    - Miss important constraints
    - Forget prior decisions
    - Retrieve outdated information
    - Other

14. "Where do you waste time re-explaining context that should already be known?"

15. "Have you ever had an AI confidently use wrong information? What was it?"
    *(Helps identify what needs verification layers)*

---

### Source Mapping

16. "For each task type, where does the information you need currently live?"
    
    For each task type, map to:
    - Profile & preferences
    - Work playbooks
    - Reference library
    - Project briefs
    - Past conversation history
    - External documents
    - Your memory (not documented anywhere)

17. "Are there gaps—information you need that isn't currently captured anywhere?"

---

### Retrieval Pattern Preferences

18. "When starting a work session, would you prefer to:"
    - Have AI automatically pull relevant context based on task type
    - Explicitly specify what context to load
    - Start minimal and request context as needed
    - Different approaches for different task types

19. "If you're working on multiple projects simultaneously, how should AI handle context switching?"
    - Keep all projects active in memory
    - Explicitly load/unload project context
    - Automatic based on what I'm discussing

20. "How do you feel about AI proactively suggesting: 'It looks like you need X, should I pull that?'"
    - Helpful
    - Annoying
    - Depends on the situation

---

### Noise Management

21. "What's more problematic for you:"
    - Missing relevant context (recall problem)
    - Getting buried in too much context (noise problem)
    - Both equally

22. "Would you rather have:"
    - Comprehensive retrieval with some irrelevant results
    - Narrow retrieval that might miss edge cases
    - Different strategies for different task types

---

### Verification Protocol Design

23. "For high-stakes tasks, what verification approach makes sense?"
    - Always show source links for facts
    - Explicit confirmation prompts ("Is this still current?")
    - Separate verification pass before using information
    - Flag confidence levels ("High confidence" vs "Assumed")

24. "What's your tolerance for 'I don't know' responses?"
    - Prefer AI to acknowledge uncertainty rather than guess
    - Prefer AI to make reasonable inferences
    - Depends on task stakes

---

### Cost-Benefit Calibration

25. "How much time would you invest in retrieval setup to save time later?"
    - Minimal (needs to be nearly instant)
    - Moderate (worth a few minutes per session)
    - Substantial (worth significant upfront investment)

26. "Is it worth retrieving more context than needed to avoid missing something important, or should we optimize for efficiency?"

---

### Output Structure:

I'll create a **Retrieval Strategy Matrix** containing:

**1. Task Type Profiles**
For each task type:
- Context needs (what information required)
- Precision requirements (high/medium/low)
- Primary mode (planning/execution/review)
- Time sensitivity
- Typical frequency

**2. Mode-Based Retrieval Patterns**

**Planning Mode:**
- Strategy: Broad semantic search across references and past examples
- Sources: [list]
- Verification level: Low (directional guidance acceptable)
- Pattern: "Pull wide, filter as I go"

**Execution Mode:**
- Strategy: Exact lookups + filtered queries for requirements
- Sources: [list]
- Verification level: High (facts must be confirmed)
- Pattern: "Pull precise, verify critical facts"

**Review/Debug Mode:**
- Strategy: Timeline queries + decision traces
- Sources: [list]
- Verification level: Medium (need accurate history)
- Pattern: "Pull history, trace causation"

**3. Verification Protocol**
- Always verify: [list of fact types]
- Verification method: [source links/confirmation prompts/audit trail]
- Confidence thresholds by task type

**4. Source-to-Query Mapping**
| Context Type | Storage Location | Query Method | When to Pull |
|--------------|------------------|--------------|--------------|
| Profile | [location] | Auto-load | Session start |
| Playbooks | [location] | On-demand | Specific tasks |
| Projects | [location] | Explicit load | Project work |
| etc. | | | |

**5. Retrieval Sequencing**
For each task type:
- Step 1: Load [context types]
- Step 2: Query for [specific needs]
- Step 3: Verify [high-stakes facts]
- Step 4: Proceed with [confidence level]

**6. Noise Management Rules**
- When to retrieve broadly vs. narrowly
- How to filter results by recency/relevance
- When to stop adding context

**7. Common Failure Modes & Fixes**
- If X goes wrong → Try Y retrieval pattern
- Warning signs that context is insufficient
- Warning signs of context overload

**8. Quick Reference Guide**
Simple decision tree: "I'm doing [task type] → Use [retrieval pattern]"
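That decision tree can live as a simple lookup table, so the mapping stays explicit and easy to edit. The task types and pattern strings below are illustrative placeholders; the real entries come from your own strategy matrix.

```python
# Illustrative mapping from task type to retrieval pattern; replace
# these entries with the ones from your own strategy matrix.
RETRIEVAL_PATTERNS = {
    "planning": "pull wide, filter as I go",
    "execution": "pull precise, verify critical facts",
    "review": "pull history, trace causation",
}

def pattern_for(task_type: str) -> str:
    """Look up the retrieval pattern for a task type, defaulting to a
    minimal-context start when the task type is unrecognized."""
    return RETRIEVAL_PATTERNS.get(
        task_type.lower(), "start minimal, request context as needed"
    )

print(pattern_for("Execution"))
```

Even if you never automate it, writing the tree down in this form forces each task type to have exactly one default pattern.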

Ready to start? Let's begin with question 1: What are the main types of tasks you do regularly in your work?

THIS IS FOR YOU: BEGIN ASKING QUESTIONS ONE AT A TIME NOW.