
System Prompt for JSON and Structured Output

Last updated: April 2026 · 7 min read

Table of Contents

  1. The two paths
  2. The base template
  3. Common JSON failures
  4. Show an example
  5. Validation in your code
  6. Token cost
  7. When to use XML instead

If your application reads AI output programmatically — populates a UI, writes to a database, triggers another API call — you need predictable structure. Free-form prose breaks every parser. JSON output is the standard answer, and the system prompt is where you make it happen reliably. This guide covers the patterns that actually work.

The free system prompt generator can produce a structured-output prompt with one toggle.

The Two Paths to Structured Output

You have two ways to get reliable JSON from an LLM:

  1. API-level enforcement — OpenAI's "JSON mode" (response_format: { type: "json_object" }) and "structured outputs" (response_format: { type: "json_schema", ... }) force the model to produce valid JSON. Anthropic and Google have similar features now too.
  2. Prompt-level instruction — telling the model in the system prompt to output JSON. Less reliable than API enforcement but works on any model and any provider.

For production, use both. API enforcement catches syntactic errors (invalid JSON), and the system prompt handles semantic correctness (the right fields with the right meanings).
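As a sketch of what "both" looks like in practice, here is a Chat Completions request body that combines API-level enforcement (`response_format: { type: "json_schema" }`) with a system prompt carrying the semantic rules. The model name, schema name, and fields are placeholders, not values from this article:

```javascript
// Sketch: layering API enforcement and prompt instruction in one request.
// "gpt-4o-mini" and the "ticket_triage" schema are placeholder examples.
const requestBody = {
  model: "gpt-4o-mini",
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "ticket_triage",
      strict: true, // syntactic guarantee: output must match this schema
      schema: {
        type: "object",
        properties: {
          summary: { type: "string" },
          intent: { type: "string" },
          urgency: { type: "string" },
          needs_human: { type: "boolean" },
        },
        required: ["summary", "intent", "urgency", "needs_human"],
        additionalProperties: false,
      },
    },
  },
  messages: [
    {
      // Semantic rules still live in the system prompt: what each field
      // means, and what to do when a value is unknown.
      role: "system",
      content:
        "You are a support triage assistant. Respond ONLY with valid JSON " +
        "matching the provided schema. Use null when a value cannot be determined.",
    },
    { role: "user", content: "How much does the Pro plan cost?" },
  ],
};
```

The schema catches malformed output; the system prompt is still what tells the model that `urgency` means the customer's urgency, not yours.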

The Base JSON System Prompt

"You are a [role]. Respond ONLY with valid JSON in this exact format. Do not include any text outside the JSON object. Do not include markdown code fences (```json or ```). Do not include explanations or apologies. The response must parse with JSON.parse() in JavaScript without modification.

Required format:

{ "field1": string, "field2": number, "field3": boolean, "field4": string[] }

If you cannot determine a value for a field, use null. Never omit a field. Never add fields not listed above."

Common JSON Failures and Fixes

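The failures the base prompt guards against are the ones you will actually see: the model wraps the JSON in ```json fences, or adds a preamble like "Here is the JSON:" before the object. When you cannot rely on API enforcement, a tolerant extraction step covers both. This is a sketch with a function name of our own choosing, not a library API:

```javascript
// Sketch: tolerant extraction for when a model ignores the "no fences,
// no text outside the JSON" instructions.
function extractJson(raw) {
  let text = raw.trim();
  // Failure 1: output wrapped in markdown fences (```json ... ```).
  const fenced = text.match(/```(?:json)?\s*([\s\S]*?)```/);
  if (fenced) text = fenced[1].trim();
  // Failure 2: preamble or trailing explanation around the object.
  const start = text.indexOf("{");
  const end = text.lastIndexOf("}");
  if (start === -1 || end === -1) {
    throw new Error("No JSON object found in model output");
  }
  return JSON.parse(text.slice(start, end + 1));
}
```

Note this only recovers from wrapping problems; if the JSON itself is invalid (trailing commas, single quotes), `JSON.parse` will still throw, which is what the retry strategy below is for.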

Always Show a Concrete Example

Models follow examples better than abstract rules. Always include a concrete example in the system prompt:

"Example output:

{ "summary": "User asked about pricing", "intent": "pricing", "urgency": "low", "needs_human": false }"

One example is worth ten paragraphs of schema description.

Validation in Your Application Code

Even with API enforcement and a tight system prompt, validate the output in your application code before using it. Use a schema validator (Zod, Joi, Ajv, Pydantic) to confirm the structure matches what you expect. If validation fails, you have two options: retry with a corrective prompt ("the previous response was invalid, please provide valid JSON") or fail gracefully and log the error for debugging.

Production AI apps that read JSON output without validation eventually crash. Validate every time.
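A minimal hand-rolled version of that check looks like the sketch below; in a real codebase you would reach for Zod, Joi, Ajv, or Pydantic instead. The schema here mirrors the example output from the previous section:

```javascript
// Sketch: validate structure before the parsed JSON reaches the rest
// of the app. Field names match the earlier example output.
const schema = {
  summary: "string",
  intent: "string",
  urgency: "string",
  needs_human: "boolean",
};

function validate(data) {
  const errors = [];
  for (const [field, type] of Object.entries(schema)) {
    if (!(field in data)) {
      errors.push(`missing field: ${field}`);
    } else if (data[field] !== null && typeof data[field] !== type) {
      // null is allowed: the prompt says "use null, never omit a field"
      errors.push(`wrong type for ${field}: expected ${type}`);
    }
  }
  // Reject extra fields the prompt told the model never to add.
  for (const field of Object.keys(data)) {
    if (!(field in schema)) errors.push(`unexpected field: ${field}`);
  }
  return errors; // empty array means the object is safe to use
}
```

If `validate` returns errors, that is the branch point: retry with a corrective prompt, or fail gracefully and log.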

Token Cost of Strict JSON Output

JSON output is more verbose than free-form text: field names, brackets, commas, and quotes all count as output tokens. A 100-word free-form summary comes back as a 130-150 token response once it is wrapped in JSON. This adds up at scale.

Trade-off: pay slightly more per request for structured output that does not break your parser. Almost always worth it. Use the AI cost calculator to model the difference.
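A back-of-envelope estimate makes the trade-off concrete. Every number below is a hypothetical assumption for illustration, not a real provider rate; plug in your own pricing:

```javascript
// HYPOTHETICAL figures -- substitute your provider's actual output-token
// rate and your real traffic.
const pricePerOutputToken = 10 / 1_000_000; // assume $10 per 1M output tokens
const requestsPerDay = 100_000;             // assumed traffic
const overheadTokens = 20;                  // assumed JSON overhead per response

const extraCostPerDay =
  overheadTokens * requestsPerDay * pricePerOutputToken;
// 20 tokens * 100k requests * $0.00001/token = $20/day
```

Twenty dollars a day for output that never breaks a parser is the kind of trade-off the article means by "almost always worth it."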

When to Use XML Instead of JSON

Anthropic's Claude is trained heavily on XML-style tags and often follows them more reliably than JSON instructions. For Claude-only applications, XML tags can be the better choice: <summary>User asked about pricing</summary><intent>pricing</intent>

For multi-provider or OpenAI-primary applications, JSON is the standard. Pick one and stick with it across your prompt.
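If you do go the XML route, parsing flat, single-level tags like the example above needs only a small helper; use a real XML parser for anything nested. The function name here is our own, not a library API:

```javascript
// Sketch: pull one value out of flat Claude-style XML tags.
// Regex is fine for single-level tags; do not use it for nested XML.
function extractTag(text, tag) {
  const match = text.match(new RegExp(`<${tag}>([\\s\\S]*?)</${tag}>`));
  return match ? match[1].trim() : null; // null when the tag is absent
}
```

As with JSON, validate afterward: a missing tag (a `null` return here) should trigger the same retry-or-fail-gracefully path.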

Generate a JSON Output Prompt

Toggle "use structured output" in our generator. Copy-ready output.

Open System Prompt Generator