Anthropic just made the 1M token context window generally available for Opus 4.6 and Sonnet 4.6.
The benchmark chart says it all. At 1 million tokens, Opus 4.6 holds at 78.3%. GPT-5.4 drops to 36.6%. Gemini 3.1 Pro falls to 25.9%.
Context rot isn't fully solved, but Claude just lapped the field.
How to activate it in Claude Code:
During a session: /model opus[1m] or /model sonnet[1m]
At startup: claude --model opus[1m]
Permanently: add "model": "opus[1m]" to your settings file
If you don't see the 1M option in your /model picker, restart your session.
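For the permanent option, the change is a one-line settings entry. A minimal sketch, assuming your user-level settings live at ~/.claude/settings.json (adjust the path for project-level settings):

```json
{
  "model": "opus[1m]"
}
```

Swap in "sonnet[1m]" if you want Sonnet 4.6 with the 1M window as your default instead.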
Plan breakdown (important):
Max, Team, and Enterprise: Opus 4.6 with 1M context is included automatically. No extra cost, no config needed. It's live when you open a new session.
Max, Team, and Enterprise: Sonnet 4.6 with 1M context requires extra usage (billed separately).
Pro plan: Both Opus and Sonnet 1M require extra usage.
API / pay-as-you-go: Full access to both via the claude-opus-4-6 and claude-sonnet-4-6 model IDs. It's automatic, with no extra headers, at standard pricing throughout.
No price premium past 200K tokens. Standard rates apply the whole way.
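On the API side, nothing about the request changes for long contexts: you send the same Messages API body, just with much more content in it. A sketch of that body, where the model IDs come from the announcement, the endpoint and field names are the standard Anthropic Messages API, and build_body is a hypothetical helper of mine:

```python
import json

# Endpoint for the standard Anthropic Messages API.
API_URL = "https://api.anthropic.com/v1/messages"

def build_body(model: str, user_text: str, max_tokens: int = 2048) -> str:
    """Build the JSON body for a POST to the Messages API.

    No beta header or special flag is needed for 1M context; a very
    long input is simply sent as ordinary message content.
    """
    body = {
        "model": model,  # e.g. "claude-opus-4-6" or "claude-sonnet-4-6"
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": user_text}],
    }
    return json.dumps(body)

# A huge prompt (say, an entire codebase) goes in as-is:
payload = build_body("claude-opus-4-6", "<entire codebase pasted here>")
```

The same body works for Sonnet by swapping the model ID; billing past 200K tokens stays at the standard per-token rates.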
What this means for workflows:
Stop panic-clearing at 100K. Working through a large codebase or multi-step build? Stay in the session much longer without quality falling off. Old rule: 100K was a red line. New rule: 200K is a comfortable checkpoint, and you have real room beyond it.
If you use opusplan mode, that alias also gets the 1M upgrade automatically on Max/Team/Enterprise.
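The "stop panic-clearing" advice can be turned into a tiny heuristic. A sketch, using the common chars/4 rough token estimate (a crude approximation, not Claude's actual tokenizer), treating 200K as a soft checkpoint and 1M as the hard limit:

```python
# Rough session-budget heuristic for the new 1M window.
SOFT_CHECKPOINT = 200_000   # comfortable point to consider compacting
HARD_LIMIT = 1_000_000      # the 1M context window

def estimate_tokens(text: str) -> int:
    # ~4 characters per token is a rough rule of thumb for English text.
    return len(text) // 4

def session_advice(transcript: str) -> str:
    tokens = estimate_tokens(transcript)
    if tokens >= HARD_LIMIT:
        return "over limit: trim or start a new session"
    if tokens >= SOFT_CHECKPOINT:
        return "checkpoint: fine to continue, consider compacting soon"
    return "plenty of room: keep going"

print(session_advice("x" * 4_000_000))  # ~1M tokens: over limit
```

Under the old 200K window, the "checkpoint" branch would have been the hard stop; now it is just a sanity check on a session that still has 800K tokens of headroom.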
Image capacity also jumped from 100 to 600 images or PDFs per request.

Benchmark chart attached.