Structuring Agent Prompts
How to structure agent system prompts for correct execution — role and mission, scope, workflow checklists, tool policies, output contracts, and escalation rules.
A well-structured agent prompt optimizes for correct execution. This page covers the recommended sections and how to write each one effectively.
Recommended sections
Structure your agent prompt in this order:
- Role and mission — who the agent is and what it optimizes for
- Scope and non-goals — prevents accidental overreach
- Operating principles — directness vs suggestions, decision frameworks
- Workflow checklist — copy/paste-able steps the agent can follow
- Tool-use policy — what to read, grep, run; how to keep noise down
- Output contract — exact headings, verbosity limits, evidence expectations
- Uncertainty and escalation — when to ask, when to proceed, when to defer
1. Role and mission
The role and mission section sets the agent's identity and judgment frame in 2-4 sentences. It should:
- Declare what excellence looks like for this role (not just what it does)
- Describe behaviors the best humans in this role would exhibit
- Avoid escape hatches that could license poor judgment
Write in second person ("You are...", "Do..."). Avoid first-person commitments ("I will edit files...") unless the agent is actually expected to take those actions.
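For example, a role and mission for a hypothetical code-review agent might read as follows (the role and wording are illustrative, not prescriptive):

```markdown
You are a senior code reviewer. You optimize for catching correctness
and security defects before they ship, the way the best human reviewers
do: you read the surrounding code before judging a diff, and you flag
risks even when they fall outside the lines that changed.
```

Note that it states what the agent optimizes for and what good judgment looks like, without escape hatches like "when practical."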
See the Personality and Identity page for detailed guidance on writing effective role statements and avoiding risky tradeoff language.
2. Scope and non-goals
This section prevents the agent from accidentally overreaching. It is most useful when tool permissions are broad or the task description could be interpreted widely.
Subagent isolation: Subagent prompts should not reference other agents by name. This keeps them reusable and decoupled. If a subagent needs pipeline context, the parent passes it in the handoff packet, not the permanent prompt.
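A scope section for the same hypothetical review agent might look like this (illustrative):

```markdown
## Scope
- Review only the files in the provided diff and their direct callers.

## Non-goals
- Do not refactor unrelated code.
- Do not modify files; report findings only.
```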
3. Operating principles
Use a mix of direct requirements and suggestions:
- Direct requirements (must/never) for correctness and safety
- Suggestions (prefer/consider) where multiple strategies can work
Guidance:
- Prefer "Do X" over "It's good to X"
- Provide a default path and an escape hatch
- Avoid long background explanations unless they change behavior
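Mixing the two directive strengths might look like this (illustrative wording; the parenthetical labels are annotations, not prompt text):

```markdown
## Operating principles
- Never commit directly to main. (direct requirement)
- Prefer reading the failing test before editing source. (suggestion)
- Default to the smallest change that fixes the bug; if a larger
  refactor is clearly needed, say so and stop. (default + escape hatch)
```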
4. Workflow checklist
Write steps the agent can literally follow. Checklists are more reliable than prose instructions.
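A workflow checklist for a hypothetical bug-fix agent might look like this (illustrative):

```markdown
## Workflow
1. Read the task description and restate the goal in one sentence.
2. Locate the relevant files (grep for symbols before reading whole directories).
3. Make the smallest change that addresses the goal.
4. Run the tests; paste only the failing output, if any.
5. Summarize what changed and why.
```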
5. Tool-use policy
Make tool discipline explicit: what to read first, when to grep versus read, and how to report outputs.
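A tool-use policy might look like this (illustrative):

```markdown
## Tool use
- Read the README and the entry point before anything else.
- Grep for symbols; read a full file only when a match needs context.
- Prefer quiet flags where available; report exit codes and the last
  relevant lines of output, not full logs.
```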
6. Output contract
The output contract is the most important section for integration. Define it precisely:
- Exact headings the output must include
- Severity levels or categories for findings
- Evidence expectations (line numbers, file paths, short excerpts)
- Verbosity bounds (e.g., "max 1-2 screens unless asked for more")
If an orchestrator depends on this agent's output, the output contract is a shared interface. Changing it requires updating the orchestrator's aggregation logic.
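An output contract for the hypothetical review agent might pin down headings, severities, and evidence like this (illustrative; the severity labels are an assumption, not a standard):

```markdown
## Output format
Return exactly these sections:

### Summary
Max 3 sentences.

### Findings
One bullet per finding: [CRITICAL|MAJOR|MINOR] path/to/file:line — short evidence excerpt.

### Open questions
Omit this section if there are none.
```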
7. Uncertainty and escalation
Define when the agent should ask for help versus proceed with assumptions.
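An escalation policy might distinguish the three cases like this (illustrative):

```markdown
## Uncertainty
- Proceed with a stated assumption when the choice is reversible and
  low-risk; record the assumption in your output.
- Ask before acting when the action is destructive, costly, or appears
  to contradict the task description.
- Defer (report and stop) when required inputs are missing.
```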
Certainty calibration (optional)
Help the agent match expressed confidence to actual certainty:
| Marker | Meaning |
|---|---|
| CONFIRMED | Direct evidence; verified |
| INFERRED | Logical conclusion from patterns; high confidence |
| UNCERTAIN | Partial evidence; needs validation before acting |
| NOT FOUND | Explicitly searched; not present |
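In an agent's output, the markers prefix individual findings, for example (contents and names are illustrative):

```markdown
- CONFIRMED: `parse_config` ignores the `timeout` key (verified at the call site).
- INFERRED: callers likely rely on the default value; no test covers it.
- NOT FOUND: searched for retry logic under `src/`; none present.
```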
Prompting techniques
Few-shot examples
- 2-3 well-chosen examples outperform a larger, less curated set
- Order matters: place the most representative example last (recency effect)
- One weak example degrades all examples — curate carefully
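A minimal few-shot block inside a prompt might look like this (contents illustrative):

```markdown
## Examples
Input: "Fix the failing test in test_auth.py"
Good output: "test_expiry failed because the mock clock was not advanced.
Fixed by advancing the clock before the assertion; all tests now pass."

Input: "Why is the build slow?"
Good output: "Profiled the build; most time is spent in dependency
resolution. Recommend enabling the lockfile cache."
```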
The interpretation test
Before finalizing the prompt, verify each instruction passes these checks:
- Could this be read two ways? — if yes, add a clarifying example or "do X, not Y" constraint
- Does this assume context the reader won't have? — make implicit assumptions explicit
- Would a different model interpret this the same way? — if not, make the interpretation explicit
- Is the directive strength clear? — distinguish "must" from "should" from "consider"
Don't draft loosely and fix later — tighten language as you write.
Agent brief template
Before writing a prompt, fill out a quick brief to clarify your goals:
| Field | Description |
|---|---|
| Pattern | Subagent or workflow orchestrator |
| Job-to-be-done | What should it reliably accomplish? |
| Delegation triggers | What should cause it to be used? |
| Inputs | What context/files will it need? |
| Outputs | What format, audience, and verbosity? |
| Quality bar | What makes an output "done" vs "needs revision"? |
| Constraints | Hard rules (must/never) vs soft guidance (should/could) |
| Tools and permissions | Least-privilege tool access |
| Model choice | Cost/speed vs reasoning needs |
| Failure strategy | When to ask vs proceed with assumptions |
Choosing Patterns
Decide between agents, skills, and always-on rules. Pick the right agent pattern — subagent vs workflow orchestrator — based on your task requirements.
Personality & Identity
Write effective role and mission statements for AI agents — declare what excellence looks like, avoid escape hatches, and calibrate tradeoff language.