Claude is not just another LLM: it has its own conventions for system prompts that pay off when you follow them. Anthropic publishes explicit best practices that meaningfully improve output quality and reduce cost. This guide walks through what they are, when to use them, and how the free generator produces Claude-optimized prompts automatically.
Generate a Claude-optimized system prompt now.
Open System Prompt Generator →

Unlike OpenAI, Anthropic puts the system prompt in its own dedicated top-level `system` parameter, separate from the `messages` array:
```json
{
  "model": "claude-opus-4-6",
  "system": "You are a senior tax assistant for US small businesses. Always cite the IRS publication when discussing rules.",
  "messages": [
    {"role": "user", "content": "How do I deduct a home office?"}
  ]
}
```
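For comparison, OpenAI's Chat Completions API embeds the system instruction as just another message inside the `messages` array (the model name here is illustrative):

```json
{
  "model": "gpt-4o",
  "messages": [
    {"role": "system", "content": "You are a senior tax assistant for US small businesses."},
    {"role": "user", "content": "How do I deduct a home office?"}
  ]
}
```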
This separation has practical benefits: the system prompt is automatically eligible for caching (more on that below), and the model treats it with stronger weight than user-turn instructions.
Anthropic explicitly recommends structuring system prompts with XML tags. The model was trained on XML-formatted instructions and follows tagged sections more reliably than plain prose.
```xml
<role>
You are a senior tax assistant for US small businesses.
</role>

<rules>
- Always cite the relevant IRS publication
- Never give specific dollar advice — always defer to a CPA
- Use plain English, not legalese
</rules>

<output_format>
- Start with a one-sentence summary
- Follow with 2-4 numbered steps
- End with a "Talk to a CPA if..." caveat
</output_format>
```
The XML tags do not need to be formal or validated — Claude treats them as section markers, not parsed XML. Use whatever tag names make sense for your structure.
| Tag | Purpose | Example content |
|---|---|---|
| `<role>` | Identity and persona | "You are a senior data analyst..." |
| `<context>` | Background, domain, audience | "The user is a marketing manager at a B2B SaaS..." |
| `<capabilities>` | What you can do | "You can analyze CSVs, build charts, write SQL..." |
| `<rules>` | Required behaviors | "- Always validate inputs\n- Always show your reasoning" |
| `<constraints>` | Forbidden behaviors | "- Never make up data\n- Never give financial advice" |
| `<output_format>` | Response structure | "Reply with: 1) summary, 2) data, 3) recommendation" |
| `<examples>` | Few-shot examples | `<example>...</example>` |
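The sections above can be assembled in code rather than by hand. A minimal sketch (the `build_system_prompt` helper and the section names are illustrative, not part of Anthropic's SDK):

```python
def build_system_prompt(sections: dict[str, str]) -> str:
    """Wrap each named section in an XML-style tag and join them.

    Claude treats the tags as section markers, not parsed XML,
    so any tag names work.
    """
    parts = []
    for tag, body in sections.items():
        parts.append(f"<{tag}>\n{body.strip()}\n</{tag}>")
    return "\n\n".join(parts)


prompt = build_system_prompt({
    "role": "You are a senior data analyst.",
    "rules": "- Always validate inputs\n- Always show your reasoning",
})
print(prompt)
```

The resulting string is passed straight to the `system` parameter; because sections are keyed by name, you can swap rules in and out per deployment without string surgery.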
Anthropic supports prompt caching for system prompts and other stable content. The first request writes the cache (at a ~25% premium on normal input price); subsequent requests that reuse the cached prefix pay ~10% of the normal input price for those tokens and run faster.
```json
{
  "model": "claude-opus-4-6",
  "system": [
    {
      "type": "text",
      "text": "<role>You are a customer support agent...</role>...",
      "cache_control": {"type": "ephemeral"}
    }
  ],
  "messages": [...]
}
```
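If you build requests in code, the `system` block above is easy to generate. A small sketch (the `cacheable_system` helper is an illustrative name, not an SDK function):

```python
def cacheable_system(prompt_text: str) -> list[dict]:
    """Return a `system` value whose text block is marked for prompt caching."""
    return [
        {
            "type": "text",
            "text": prompt_text,
            "cache_control": {"type": "ephemeral"},
        }
    ]


system = cacheable_system("<role>You are a customer support agent...</role>")
```

The returned list drops into the request body exactly where the JSON example puts it.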
Caching makes economic sense when:
- The system prompt is long (the cacheable prefix must meet Anthropic's minimum, currently 1024 tokens for most models)
- The same prompt is reused across many requests
- Requests arrive close together (the ephemeral cache expires after 5 minutes of inactivity, refreshed each time it is read)
For a typical chatbot with a 2000-token system prompt and 50 messages per session, prompt caching can cut your input costs by 80-90% after the first request.
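The arithmetic behind that claim, using Anthropic's published multipliers (cache writes cost 1.25x the base input price, cache reads 0.1x); the per-token dollar price cancels out, so only the multipliers matter:

```python
system_tokens = 2000   # tokens in the system prompt
requests = 50          # requests per session

# Without caching: the full system prompt is billed at base input price every time.
uncached = system_tokens * requests * 1.0

# With caching: one cache write (1.25x), then cache reads (0.1x) for the rest.
cached = system_tokens * (1.25 + (requests - 1) * 0.1)

savings = 1 - cached / uncached
print(f"{savings:.0%}")  # → 88%, the saving on the system-prompt share of input cost
```

Note this is the saving on the system-prompt tokens only; per-turn user and assistant tokens are billed normally either way.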
The free system prompt generator outputs prompts in plain prose by default, which works for any model. To convert to Claude's preferred XML format, wrap each section in tags after generating. Example transformation:
```xml
<role>
You are Ada, a customer support assistant for TechStart.
</role>

<capabilities>
You answer questions about TechStart's products, pricing, and policies. You help users troubleshoot issues and guide them to solutions.
</capabilities>

<rules>
- Stay focused on TechStart-related topics
- Admit when you don't have enough information
- Use a warm, friendly tone
- Ask a clarifying question when the request is ambiguous
</rules>
```
Build a Claude-ready system prompt in 2 minutes.
Open System Prompt Generator →