Just had a chat with Joe about using Claude to audit the frontend codebase for cache invalidation errors. Sharing the strategy here in case it’s useful to people:

  1. Load context
    1. Take the error report (in this case a screenshot of Chris saying what was broken in the dashboard) and ask it to diagnose one of the specific errors
  2. Create a ‘plan’ file that describes the problem
    1. Ask to create cache-error-patterns.md to capture the different classes of cache invalidation error, seeding it from the first of the examples in Chris’ message
    2. Now ask it to confirm the other errors follow the same pattern: in our case they didn’t, and it discovered some more categories, so get it to update cache-error-patterns.md with the new error classes
    3. Iterate until we think the patterns doc is complete and comprehensive (the first sketch after this list shows the kind of error class we mean)
  3. Add human context
    1. We knew that Claude had guessed which other cache keys to invalidate by inferring the behaviour of the API from the frontend code. We also knew that confirming this meant looking at the backend, so we told it to check the backend code in server and verify our assumptions were correct
    2. Having done this once, we update cache-error-patterns.md with how we found the APIs in the server codebase so we can repeat it later
  4. Run an audit
    1. Ask Claude to find all the files that are potentially broken, using our patterns doc
    2. Do this with sub-agents and run them in parallel so it’s fast
    3. Save all the results in a single cache-invalidation-errors.md file so we can eyeball it and check it’s right
  5. Apply the fix
    1. Ask Claude to apply the fix to everything it found in the audit, using the patterns doc to resolve the problems (the second sketch after the list shows the shape of a typical fix)
    2. Do this in parallel using sub-agents
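
To make “classes of cache invalidation error” concrete, here’s a minimal sketch of the kind of pattern the doc ends up describing. It is not from our codebase: it assumes a TanStack Query (React Query) style cache, and the hook, query keys, and endpoint are all hypothetical.

```ts
// One hypothetical error class: a mutation that invalidates the detail query
// but forgets the other queries that render the same data.
// Assumes TanStack Query; every key and endpoint here is made up.
import { useMutation, useQueryClient } from "@tanstack/react-query";

type Project = { id: string; name: string };

export function useRenameProject() {
  const queryClient = useQueryClient();
  return useMutation({
    mutationFn: (p: Project) =>
      fetch(`/api/projects/${p.id}`, {
        method: "PATCH",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ name: p.name }),
      }).then((r) => r.json()),
    onSuccess: (_data, p) => {
      // BROKEN: only the detail view is refreshed...
      queryClient.invalidateQueries({ queryKey: ["project", p.id] });
      // ...while the dashboard keeps rendering the old name from the list and
      // summary queries, which are never invalidated:
      // queryClient.invalidateQueries({ queryKey: ["projects"] });
      // queryClient.invalidateQueries({ queryKey: ["dashboard-summary"] });
    },
  });
}
```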

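And a sketch of what the fix can look like once the backend code has confirmed which resources each endpoint actually touches: centralise the endpoint-to-query-key mapping so every call site invalidates the same set of keys. Again, the routes, keys, and the invalidateFor helper are illustrative inventions, not something lifted from the audit.

```ts
// Hypothetical fix recipe: one table, derived from reading the handlers in
// server/, saying which query keys each backend route affects.
import { QueryClient } from "@tanstack/react-query";

const affectedKeys: Record<string, string[][]> = {
  "PATCH /api/projects/:id": [["project"], ["projects"], ["dashboard-summary"]],
  "DELETE /api/projects/:id": [["projects"], ["dashboard-summary"]],
};

export function invalidateFor(queryClient: QueryClient, route: string) {
  for (const queryKey of affectedKeys[route] ?? []) {
    // Query keys match by prefix, so ["project"] also covers ["project", id].
    queryClient.invalidateQueries({ queryKey });
  }
}

// Inside a mutation's onSuccess:
//   invalidateFor(queryClient, "PATCH /api/projects/:id");
```
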
This is a really useful pattern for large-scale refactors: you gradually refine the “hammer” you’ll use to fix a problem, then have Claude apply the hammer N times in parallel across the entire codebase. The files we write stop us suffering as the context window gets large (we can always reference the file, and new sub-agents can read it in its entirety), and we can read the instructions ourselves to verify we’re happy with the approach. I’m attaching some of the prompts I used so you can see the specific phrasing.

Load context

[prompt screenshot]

Create a plan

[prompt screenshot]

[prompt screenshot]

Use human context

[prompt screenshot]

[prompt screenshot]

Run an audit

[prompt screenshot]

[prompt screenshot]