A configuration error. Nearly 3,000 leaked assets. And a model that may redefine what AI is capable of.
The Leak That Changed Everything
On March 27, 2026, Anthropic's most powerful AI model to date was accidentally revealed to the public through an internal data leak. There was no controlled announcement, no keynote moment, no product launch. A basic content-management misconfiguration exposed nearly 3,000 internal documents, and within hours the AI community was poring over details of a model that Anthropic had not yet decided to show the world.
The model is called Claude Mythos. And if the leaked documents are accurate, nothing in the current AI landscape quite compares to it.
The leaked files were available in an unsecured, publicly searchable data store prior to Thursday evening. Among the exposed content was a draft blog post announcing a new model, along with details of a planned, invite-only CEO summit in Europe as part of Anthropic's drive to sell its AI models to large corporate customers. The security flaw was identified independently by two researchers: Alexandre Pauwels, a cybersecurity researcher at the University of Cambridge, and Roy Paz, Senior AI Security Researcher at LayerX Security.
Anthropic did not deny it. An Anthropic spokesperson said the new model represents "a step change" in AI performance and is "the most capable we've built to date." The company confirmed the model is currently being trialed by early access customers.
What Is Claude Mythos?
The leak includes two versions of the same draft blog post, differing only in the model naming: "Mythos" (v1) and "Capybara" (v2). The subtitle in both reads: "We have finished training a new AI model: Claude Mythos."
So what are Mythos and Capybara, exactly, and how do they relate?
Anthropic currently offers three tiers of models: Opus, Sonnet, and Haiku, which vary in size, cost and capability. The leaked materials suggest the company is developing a new tier called "Capybara," which would be even larger and more intelligent than Opus, the company's most advanced model to date.
In plain terms: Capybara is the tier name. Mythos is the first model within that tier. In the leaked document, Anthropic says: "Capybara is a new name for a new tier of model: larger and more intelligent than our Opus models, which were, until now, our most powerful."
The name Mythos was said to be chosen to evoke "the deep connective tissue that links together knowledge and ideas." Whether the final release carries this name or shifts to Capybara remains unclear, but the capability story is the same regardless of branding.
What Can It Do?
Compared to Claude Opus 4.6, Capybara achieves dramatically higher scores on tests of software coding, academic reasoning, and cybersecurity, among others. The leaked documents do not describe incremental improvement. They describe a qualitative leap.
Three capability areas stand out from the leaked material:
1. Software Engineering
Mythos sets a new internal benchmark for code generation, debugging, and software architecture reasoning. The gap versus Opus 4.6 is described as dramatic rather than marginal, placing it in a different league for developer tooling, agentic coding workflows, and automated engineering tasks.
2. Academic and Structured Reasoning
Complex multi-step reasoning, scientific problem solving, and academic-grade analytical tasks are significantly elevated. This positions Mythos as a serious research tool with capabilities that current frontier models do not match.
3. Cybersecurity: The Double-Edged Sword
This is where the story becomes genuinely significant, and genuinely alarming. The model is described as "currently far ahead of any other AI model in cyber capabilities," but also as presaging "an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders."
A draft blog post warned that the system could pose serious cybersecurity risks, pointing to its ability to identify and exploit software vulnerabilities. In a related development, separate security researchers demonstrated that Claude could be turned into a malware factory within eight hours, highlighting the dual-use risks of the same AI systems Anthropic markets for defensive purposes.
A Cautious, Controlled Rollout
Given the magnitude of these capabilities, Anthropic is not rushing to deploy. Citing the model's power and high operational cost, the drafts state it will not be released to the general public in the short term. Instead, it will be offered only to a select group of early customers, specifically for cybersecurity defense use cases.
Cost is also a material constraint. The drafts acknowledge the model is "very expensive for us to serve, and will be very expensive for our customers to use," and state that Anthropic is working to make it "much more efficient before any general release."
This mirrors Anthropic's established pattern with dual-use capabilities: demonstrate leadership, limit access, and let safety evaluations run ahead of broad deployment.
The Competitive Landscape
This leak did not happen in a vacuum. Both Anthropic and OpenAI are racing to release flagship models ahead of planned IPOs later in 2026.
According to The Information, OpenAI finished pretraining its own model, codenamed Spud, as of March 25. OpenAI CEO Sam Altman reportedly wrote in an internal memo that "things are moving faster than many of us expected" and announced the company is shutting down its video app Sora to free computing capacity for Spud.
The departure of OpenAI VP of Research Max Schwarzer to Anthropic earlier in March intensified the talent competition between the two companies.
The cybersecurity dimension also places both companies at the frontier of national security implications. In February, OpenAI released GPT-5.3-Codex with a high cyber capability classification under its Preparedness Framework, marking the first time the company rated one of its own models at that level for cybersecurity. Mythos, if the leaked assessments hold, goes further still.
The IPO Dimension
Timing matters. On the same day as the leak, Bloomberg and The Information reported that Anthropic was considering an IPO as early as October 2026. Whether the leak was truly accidental or subtly strategic is a question the industry is already asking.
Positioning Claude Mythos as "the most powerful model ever" could help boost valuation and market confidence before an IPO. Anthropic's cautious handling of Mythos's cybersecurity risks also reinforces its "Responsible AI" brand, while the introduction of the new Capybara tier signals a more complete product matrix.
The second half of 2026 is now the most-watched window in AI for what may be the most consequential model release in the industry's history.
What This Means for Builders and Businesses
For developers and businesses currently building on Claude, the practical message is clear: do not wait. Mythos is not available and has no confirmed public release date; holding out for it means delaying work that current models already handle reliably enough to produce real business outcomes.
Claude Opus 4.6 remains a highly capable tool across agentic coding, reasoning, and automation workflows. The emergence of Mythos does, however, signal the right time to design architectures that will be able to accommodate a model tier with dramatically elevated capability and cost, particularly for applications in cybersecurity, research, and complex multi-step reasoning.
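One concrete way to prepare for a future tier is to avoid hard-coding a single model identifier and instead route requests through a small selection layer, so that adopting a new tier becomes a configuration change rather than a rewrite. The sketch below is a minimal illustration of that pattern, not a definitive implementation; the tier names, model IDs, and prices are hypothetical placeholders, not real Anthropic identifiers or published rates.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelTier:
    """One entry in the routing table: an API model ID plus rough cost info."""
    model_id: str
    cost_per_mtok: float  # illustrative input cost, USD per million tokens

# Hypothetical routing table -- IDs and prices are placeholders,
# not real Anthropic model identifiers or published pricing.
TIERS = {
    "fast": ModelTier("claude-haiku-x", 1.0),
    "balanced": ModelTier("claude-sonnet-x", 3.0),
    "frontier": ModelTier("claude-opus-x", 15.0),
    # If a Capybara-tier model ships, adopting it becomes a
    # one-line table change instead of a code rewrite:
    # "capybara": ModelTier("claude-mythos-1", 75.0),
}

def pick_model(task_complexity: str, budget_per_mtok: float) -> str:
    """Return the most capable model ID whose cost fits the budget."""
    order = ["fast", "balanced", "frontier"]
    # Only consider tiers at or above the requested complexity.
    candidates = order[order.index(task_complexity):]
    affordable = [t for t in candidates if TIERS[t].cost_per_mtok <= budget_per_mtok]
    # Most capable affordable tier, else fall back to the minimum viable tier.
    chosen = affordable[-1] if affordable else candidates[0]
    return TIERS[chosen].model_id

print(pick_model("fast", budget_per_mtok=5.0))  # -> claude-sonnet-x
```

The design choice is that callers express intent (complexity, budget) rather than naming a model, so a dramatically more capable and more expensive tier slots into the same table without touching application code.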
Conclusion: A Threshold Crossed
Claude Mythos is not simply Anthropic's next model. It represents a structural change to how the company classifies AI capability: a new tier above Opus, built for problems that existing models cannot reliably solve, and carrying risks serious enough that Anthropic itself is approaching deployment with unusual caution.
The leak was an accident. What it revealed was not.
The age of Capybara has begun.
Sources: Fortune, The Decoder, WinBuzzer, CoinDesk, The News International, KuCoin