Essay · AI & Technology · 04/19/'26 · 7 min read

Anthropic Shipped Claude Opus 4.7. The Model It Didn't Ship Matters More.

Claude Opus 4.7 is a solid release beat. The more interesting artifact is Claude Mythos Preview — the more powerful sibling Anthropic deliberately held back. The AI race has quietly shifted: the frontier is no longer raw capability, it's the gap between what a lab can train and what it's willing to ship.

Anthropic released Claude Opus 4.7 on April 16. The model that matters for understanding where the AI race is headed is the one it didn’t ship alongside.

Opus 4.7 is the usual release beat — better software engineering, better vision, same pricing. But the calibrated capability gap between Opus 4.7 and its more powerful sibling, Claude Mythos Preview, is the more interesting artifact. The frontier has moved. The race is no longer about training the most powerful model. It’s about the gap between what a lab can train and what it’s willing to deploy.

What actually shipped

Claude Opus 4.7 is, in Anthropic’s phrasing, its most capable generally available model. It replaces Opus 4.6 in that slot. The coding improvements are where the company is pitching hardest: users report being able to hand off hard coding work — the kind that previously needed close supervision — to 4.7 with confidence. Long-running agentic tasks get more rigorous planning and more consistent execution, and the model pays more precise attention to instructions and verifies its own outputs before reporting back.

On the platform side, there’s one genuinely new capability: high-resolution image support up to 2576px / 3.75MP, up from the 1568px / 1.15MP ceiling. Coordinates now map 1:1 with actual pixels, removing the scale-factor math that vision pipelines needed before. There’s also a new xhigh effort level and a task-budgets beta that lets the model self-pace against a token budget, finishing agentic loops gracefully as the budget draws down. The 1M token context window and 128k max output carry forward from 4.6.
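To make the vision change concrete, here is a minimal sketch of what a high-resolution request might look like through the existing Anthropic Python SDK. The image content-block format is the SDK's current one; the model identifier below is an assumption, and the new xhigh effort level and task-budget beta are omitted because the release doesn't name their request fields.

# A minimal sketch of a high-resolution vision request, assuming the current
# Anthropic Python SDK surface carries over to Opus 4.7. The model identifier
# is an assumption, not a confirmed API string.
import base64
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Images up to 2576px / 3.75MP can now be sent without downscaling, and any
# coordinates the model returns are meant to map 1:1 to these pixels.
with open("dashboard_screenshot.png", "rb") as f:
    image_b64 = base64.standard_b64encode(f.read()).decode("utf-8")

response = client.messages.create(
    model="claude-opus-4-7",  # assumed identifier for illustration
    max_tokens=2048,
    messages=[{
        "role": "user",
        "content": [
            {
                "type": "image",
                "source": {
                    "type": "base64",
                    "media_type": "image/png",
                    "data": image_b64,
                },
            },
            {
                "type": "text",
                "text": "Find the export button and report its pixel coordinates.",
            },
        ],
    }],
)
print(response.content[0].text)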

Pricing holds at $5 / $25 per million input/output tokens, with up to 90% savings via prompt caching and 50% via batch processing. Distribution matches Anthropic’s other enterprise surfaces — Claude Pro, Max, Team, and Enterprise, plus AWS Bedrock, Google Cloud’s Vertex AI, and Microsoft Foundry. If you were deploying Opus 4.6 last week, you’re using Opus 4.7 this week without contract work.
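For a rough sense of how the pricing levers compose, here is a back-of-the-envelope calculator using the quoted rates. How the prompt-caching and batch discounts stack in practice is an assumption on my part; treat the output as illustrative, not a quote.

# Back-of-the-envelope spend math at the quoted $5 / $25 per million
# input/output tokens. How the prompt-caching (up to 90%) and batch (50%)
# discounts combine is assumed here, not taken from the release notes.
INPUT_PER_MTOK = 5.00    # USD per million input tokens
OUTPUT_PER_MTOK = 25.00  # USD per million output tokens

def estimate_cost(input_mtok, output_mtok, cached_share=0.0, batched_share=0.0):
    # cached_share: fraction of input tokens served from the prompt cache
    # batched_share: fraction of overall traffic routed via batch processing
    input_cost = input_mtok * INPUT_PER_MTOK * (1 - 0.90 * cached_share)
    output_cost = output_mtok * OUTPUT_PER_MTOK
    return (input_cost + output_cost) * (1 - 0.50 * batched_share)

# Example: 500M input / 50M output tokens a month, 60% cache hit rate, no batching
print(f"${estimate_cost(500, 50, cached_share=0.6):,.2f} per month")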

What Anthropic held back

The quote that matters in the release isn’t about coding or vision. It’s this: Opus 4.7 is “less broadly capable than our most powerful model, Claude Mythos Preview.” Mythos Preview exists. It is more capable than 4.7. Anthropic is not shipping it to the open API.

Mythos was released to a select group of companies last week under Project Glasswing, Anthropic’s cybersecurity initiative. The framework is explicit: test new cyber safeguards on less-capable models before any broader release of Mythos-class capability. Opus 4.7 is the first model shipped under that framework. During its training, Anthropic deliberately worked to reduce cyber-offensive capabilities relative to Mythos. On top of that reduction, 4.7 ships with real-time safeguards that automatically detect and block requests that fall under prohibited or high-risk cybersecurity use. For legitimate security work, Anthropic opened a Cyber Verification Program — a gated channel for penetration testing, vulnerability research, and red-teaming.

Read that twice. The goal isn’t to keep Mythos internal forever. The goal is to deploy the safeguards that make Mythos-class capability shippable. Opus 4.7 is the test bed. Every piece of feedback from real-world 4.7 usage feeds into the decision about when the more powerful model can reach the rest of the market.

Three signals for the AI race

Taken together, what Anthropic shipped and what it held back tell you three things about how the frontier-lab race is evolving.

Capability is no longer the only axis

For three years, the headline metric has been raw capability — benchmark wins, context windows, reasoning depth. Anthropic is explicitly decoupling capability from deployability. A more powerful model exists. It stays behind a gated deployment until safeguards catch up. The axis competitors are being measured on is shifting from “can you train it” to “can you ship it with confidence.”

The enterprise bet is getting louder

Anthropic’s release rhythm is remarkably predictable for a frontier lab. Opus 4 in May 2025, 4.1 in August, 4.5 in November, 4.6 in February 2026, 4.7 in April — a roughly quarterly cadence of incremental, production-oriented releases. The audience isn’t consumers chasing the newest flashy demo. It’s enterprise buyers who need a model they can deploy against a P&L, trust against a compliance review, and renew against a roadmap.

The visible race isn’t the whole race

If the most powerful models at frontier labs are all sitting behind internal safety programs while less-capable siblings ship, then the public benchmark race undercounts the actual state of the frontier. Anthropic is the first lab to be explicit about the gap. It won’t be the last. Expect the question “what have you trained that you’re not shipping” to become part of how enterprise buyers evaluate AI vendors.

What this means if you’re buying

For anyone choosing an AI vendor in the next six months: the Anthropic bet reframes the question. You’re not just buying a model. You’re buying a lab’s release discipline, its safeguard program, and its view on what a model should and shouldn’t do in production. Opus 4.7’s availability on AWS Bedrock, Vertex AI, and Microsoft Foundry means you can route traffic through whatever cloud your infrastructure sits on, but the underlying bet about what you’re actually deploying is the same.

If you’re operating a marketing or analytics stack that depends on an AI vendor, the questions that matter are drifting: what is the vendor’s release cadence, what happens when a more powerful model is trained but not shipped, and what is its posture on safeguards when capability runs ahead of guardrails. Those aren’t abstract. They’re procurement questions now.