Claude has 200,000 tokens of context — enough to fit a full novel, a 500-page legal document, or hours of conversation history in a single request. Here is exactly what fits, how to use the window well, and how to verify before you send.
| Content type | Words | Pages | Fits in 200K? |
|---|---|---|---|
| 1-page document | 250 | 1 | Yes (easy) |
| 10-page report | 2,500 | 10 | Yes (easy) |
| 50-page contract | 12,500 | 50 | Yes (easy) |
| 100-page report | 25,000 | 100 | Yes (easy) |
| Short novel (50K words) | 50,000 | 200 | Yes (easy) |
| Average novel (80K words) | 80,000 | 320 | Yes (room to spare) |
| Long novel (150K words) | 150,000 | 600 | Yes (tight fit) |
| Very long book (200K words) | 200,000 | 800 | No (≈266K tokens) |
| 1,000-page manual | 250,000 | 1,000 | No (use Gemini 1M) |
For most documents people actually need to process, Claude 200K is more than enough. The window only becomes a constraint for full books, multi-document collections, or very large codebases.
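The fit column above can be sketched as a quick pre-check. This is a rough estimate, not a tokenizer: the 1.33 tokens-per-word ratio and the 4,000-token output reserve are assumptions, and `fits_in_claude` is an illustrative helper, not part of any API.

```python
# Rough fit check against the 200K window. The 1.33 tokens-per-word
# ratio is an approximation for English prose, not an exact count.
CONTEXT_WINDOW = 200_000
TOKENS_PER_WORD = 1.33

def fits_in_claude(word_count: int, reserve_for_output: int = 4_000) -> bool:
    """Estimate whether a document of `word_count` words fits,
    leaving headroom for the prompt and Claude's reply."""
    estimated_tokens = int(word_count * TOKENS_PER_WORD)
    return estimated_tokens + reserve_for_output <= CONTEXT_WINDOW

print(fits_in_claude(80_000))   # average novel → True
print(fits_in_claude(250_000))  # 1,000-page manual → False
```

Anything the estimate flags as borderline is worth verifying with a real token count before sending.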
Check whether your document fits in Claude 200K.
Open Token Counter →

Claude charges per token, so a 200K-token input costs 200 times more than a 1K input. Real numbers:
| Input size | Sonnet 4 cost | Opus 4 cost | Haiku 3.5 cost |
|---|---|---|---|
| 1K tokens | $0.003 | $0.015 | $0.0008 |
| 10K tokens | $0.030 | $0.150 | $0.008 |
| 50K tokens | $0.150 | $0.750 | $0.040 |
| 100K tokens | $0.300 | $1.500 | $0.080 |
| 200K tokens | $0.600 | $3.000 | $0.160 |
Sending a full novel to Sonnet 4 costs about 60 cents per query. To Opus 4, $3 per query. To Haiku 3.5, 16 cents. For one-off analysis, this is trivial. For high-volume workloads, the math matters.
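The cost table reduces to one multiplication. This sketch reproduces it; the per-million-token prices come from the table above, and output tokens (billed separately) are not included.

```python
# Input-side cost only, in USD per million input tokens,
# using the prices from the table above.
PRICE_PER_MTOK = {"sonnet-4": 3.00, "opus-4": 15.00, "haiku-3.5": 0.80}

def input_cost(tokens: int, model: str) -> float:
    return tokens / 1_000_000 * PRICE_PER_MTOK[model]

print(f"${input_cost(200_000, 'sonnet-4'):.2f}")  # full novel → $0.60
print(f"${input_cost(200_000, 'opus-4'):.2f}")    # → $3.00
```

For high-volume workloads, multiply by queries per day and the tradeoff between models becomes obvious.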
Just because you can stuff 200K tokens in doesn't mean you should. Long-context performance is real but not magical. Five practices for better results:
1. Put critical info at the start and end. Like every LLM, Claude attends slightly more to the beginning and end of long inputs. If your question depends on a specific section, mention "see Section 3 below" at the start and put Section 3 near the end.
2. Use clear structure. Headers, sections, numbered lists help Claude navigate long input. A 100K token blob of unstructured text is harder for Claude than 100K tokens with clear divisions.
3. Reference specific parts in your question. "Based on the contract clauses about indemnification..." performs better than "Based on the contract..." Claude retrieves relevant sections more reliably when you cue it.
4. Cache the static parts. Anthropic's prompt caching gives 90% input discount on cached prefixes. If you're sending the same 100K document with multiple questions, cache it once and reuse.
5. Set your output expectations. Tell Claude "respond in 200 words or less" or "answer with a single paragraph." Without limits, Claude can write essays in response to a long input.
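Practices 4 and 5 can be combined in one request. This is a sketch of the payload shape for Anthropic's Messages API with prompt caching; the model id is an assumption, and the document text and question are placeholders.

```python
# Sketch: mark the large static document for prompt caching so repeat
# questions get the cached-input discount, and cap the output length.
def build_cached_request(document_text: str, question: str) -> dict:
    return {
        "model": "claude-sonnet-4-20250514",  # assumed model id
        "max_tokens": 300,                    # practice 5: bound the reply
        "system": [
            {
                "type": "text",
                "text": document_text,
                # Everything up to and including this block is cached;
                # on reuse, only the short question is billed at full price.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [{"role": "user", "content": question}],
    }

request = build_cached_request(
    "<100K-token contract here>",
    "Summarize the indemnification clauses in one paragraph.",
)
```

Reusing the same `system` prefix across questions is what makes the cache hit; changing any cached byte invalidates it.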
The temptation with long context is to dump everything into the prompt. Sometimes that's right. Often it's wasteful.
Send everything when:
- The full document fits comfortably under 200K tokens
- The task needs reasoning across the whole text: themes, cross-references, contradictions
- It's a one-off or low-volume job where the per-query cost above is acceptable

Use retrieval (RAG) when:
- The corpus exceeds the window: multiple books, a large codebase, a document archive
- You run high-volume queries, where resending everything gets expensive fast
- Each question needs only a small, identifiable slice of the material
Three steps to fit Claude 200K reliably:
1. Estimate first: multiply your word count by roughly 1.33 to approximate tokens.
2. Verify anything borderline with the token counter before sending.
3. Leave headroom for your prompt and Claude's response; don't aim for exactly 200,000.
If you're working in code, use Anthropic's count_tokens API endpoint to get exact counts before each request. The browser counter is for one-off checks.
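A pre-flight check against that endpoint can be sketched with the standard library. The endpoint path and headers follow Anthropic's public API docs; the model id and API key here are placeholders you must supply.

```python
import json
import urllib.request

# Sketch: build (without sending) a request to Anthropic's
# count_tokens endpoint for an exact pre-flight token count.
def count_tokens_request(text: str, api_key: str) -> urllib.request.Request:
    payload = {
        "model": "claude-sonnet-4-20250514",  # assumed model id
        "messages": [{"role": "user", "content": text}],
    }
    return urllib.request.Request(
        "https://api.anthropic.com/v1/messages/count_tokens",
        data=json.dumps(payload).encode(),
        headers={
            "x-api-key": api_key,
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
        method="POST",
    )

req = count_tokens_request("Full document text here...", "sk-ant-placeholder")
# urllib.request.urlopen(req) would return JSON like {"input_tokens": ...}
```

Compare the returned `input_tokens` against 200,000 (minus your output reserve) before committing to the full request.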
Legal contract review. Send the full contract. Ask for risk analysis, missing clauses, and suggested revisions. Claude handles this well because legal documents are structured and Claude attends across the full input.
Code review. Send a complete file or related files. Ask for bug analysis, architecture suggestions, or security review. Claude is widely considered the strongest LLM for code review tasks.
Long meeting transcript analysis. Send a 3-hour meeting transcript. Ask for action items, decisions made, and unresolved questions. Claude pulls information across the full transcript reliably.
Multi-document research. Send 5-10 research papers in one prompt. Ask for synthesis, contradictions, or a literature review. Claude is good at this when documents are clearly delimited.
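One way to delimit documents clearly is to wrap each in tagged blocks. This is a common convention, not a required format; the `<document>` tag name and the helper below are illustrative choices.

```python
# Sketch: wrap each paper in an indexed <document> block so Claude
# can tell where one source ends and the next begins.
def build_multidoc_prompt(papers: list[tuple[str, str]], question: str) -> str:
    parts = []
    for i, (title, text) in enumerate(papers, 1):
        parts.append(
            f'<document index="{i}" title="{title}">\n{text}\n</document>'
        )
    parts.append(question)
    return "\n\n".join(parts)

prompt = build_multidoc_prompt(
    [("Paper A", "full text of paper A"), ("Paper B", "full text of paper B")],
    "Where do these papers contradict each other?",
)
```

The index and title in each tag also give Claude something concrete to cite when you ask which source a claim came from.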
Book or long article analysis. Send the full text. Ask for theme analysis, character relationships, or chapter summaries. Works for novels, textbooks, technical manuals.
Make sure your text fits in Claude 200K before sending.
Open Token Counter →