LLMs are probabilistic. Good prompts make them more deterministic.
They behave like people, but they’re programs. Prompts are programming in NL, not NL.
Leave nothing to interpretation unless that is intended. Guardrail everything. Even a human teenager has 15 years of imbibing cultural norms, taste, and understanding when something is cringe. LLMs have none of that.
Lever into its training: specific pop-culture references and examples vs vague personas
“Act like a great designer” vs “Act like an understudy of Johny Ive who’s also a fanboy of Andy Warhol”
The goal is NOT to create independence, but competence.
Prompts behave like programming. There is an instruction-set hierarchy. If this is not defined and reinforced, it will be “inferred” to your detriment.
Flowchart-style instructions help
Instructions should be provided both before and after long context prompts.
This is due to the computational “weight” assigned to instructions. Recency effect (last) and primacy effect (first)
Know when to allow the LLM to “think out loud” and what it should think about.
Writing a good prompt is like writing a PRD. System design, gaurdrails, edge cases, happy paths and failure paths.
Use documentation ** heading in .md and // documentation in code to reference EXACT places to look.
Understand where the computational load is:
When “building” from scratch - load is on system design and understanding tasks. Raw-code-gen is easy.
When referencing code-base: load in on understanding code-base. Task is relatively “cheap”