How we ask an agent to perform a given task.
How we validate that the code does what it should and avoid regressions at scale.
How we capture our current, temporary context so that future agents can understand the work.
How we ensure code is consistent, maintainable, and debuggable by a human.
When prompting, you get out what you put in.
Good prompting means providing enough context to sufficiently reduce the amount of inference the agent has to perform to complete the task to your liking.
There is no one-size-fits-all answer for how much context is enough. It depends on the complexity of the task and the response you want. A useful mental model is to treat your agent like a really smart junior dev: share the examples, READMEs, and explanations you would give them.
Simple tasks can often be done in one prompt and don’t require much planning. More complicated tasks are worth building more robust context around for the agent.
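As a sketch, a context-rich prompt for one of those more complicated tasks might look something like this (the endpoint, file paths, and project details here are hypothetical, stand-ins for whatever your codebase actually contains):

```
Add rate limiting to the /api/upload endpoint.

Context:
- We use Express middleware; see src/middleware/auth.js for the pattern to follow.
- Limits should match the README's "Fair use" section: 10 uploads/minute per user.
- Put tests alongside the existing ones in test/middleware/.

Constraints:
- Don't touch the auth logic; only add the new middleware and wire it into src/app.js.
```

Notice what this does: it points the agent at an existing pattern to imitate, cites the source of truth for the requirement, and scopes the change explicitly, so the agent infers less and guesses less.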