Anthropic's most powerful model yet can find thousands of zero-day vulnerabilities across every major OS — and that's exactly why the public won't get access to it.
## Key Numbers
| Stat | Detail |
|---|---|
| 1000s | Zero-day vulnerabilities found in weeks |
| 40+ | Project Glasswing partner organizations |
| 27 years | Age of the oldest bug found, in OpenBSD |
## What is Claude Mythos?
In early 2026, Anthropic quietly trained what it describes as "by far the most powerful AI model we've ever developed." Codenamed Capybara internally and now officially called Claude Mythos Preview, the model represents an entirely new tier above Claude Opus: more capable, more expensive, and considerably more dangerous.
Its general-purpose performance is exceptional across reasoning, coding, and academic benchmarks. But it's the cybersecurity capabilities that set Mythos apart and forced Anthropic into a very unusual decision: don't release it to the public at all.
## What It Can Do
During internal testing, Anthropic's Frontier Red Team discovered that Mythos Preview can autonomously identify and exploit zero-day vulnerabilities — previously unknown flaws — across every major operating system and every major web browser.
These aren't simple, well-known attack patterns. Many of the bugs discovered were subtle flaws that had evaded detection for decades. The oldest uncovered so far was a 27-year-old bug in OpenBSD, a system renowned for its security-first design.
Notable capability: Mythos can independently chain multiple vulnerabilities, including kernel memory read/write bugs that require bypassing KASLR, to achieve complete root access on a compromised system.
The model doesn't just spot bugs. It builds working exploits, including multi-packet ROP chains for remote code execution. Anthropic's team describes its approach as an "agentic search process" — the model iterates with tools, retries hypotheses, and validates results over time. It's not a one-shot magic prompt; it's more like an extremely patient and methodical security researcher.
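Anthropic hasn't published the loop itself, but the "agentic search process" it describes can be pictured as a propose-validate-retry cycle: the agent tests one hypothesis at a time with a tool, records failures, and keeps going. A toy sketch (all names, inputs, and the stand-in "tool" are illustrative, not Anthropic's implementation):

```python
def agentic_search(candidates, test_fn, max_iterations=100):
    """Toy propose-validate-retry loop: try hypotheses one at a time,
    validate each with a tool call, and keep notes on what failed."""
    notes = []  # the agent's record of hypotheses that didn't pan out
    for _ in range(max_iterations):
        untried = [c for c in candidates if c not in notes]
        if not untried:
            return None, notes      # search space exhausted
        hypothesis = untried[0]
        if test_fn(hypothesis):     # "tool call": validate the hypothesis
            return hypothesis, notes
        notes.append(hypothesis)    # record the failure and iterate
    return None, notes

# Stand-in "tool" that flags inputs longer than 8 characters as anomalous.
flagged, tried = agentic_search(
    ["ok", "fine", "AAAAAAAAAAAA", "short"],
    test_fn=lambda s: len(s) > 8,
)
# flagged == "AAAAAAAAAAAA"; tried == ["ok", "fine"]
```

The point of the sketch is the shape, not the scale: the real system reportedly runs this kind of loop with actual analysis tools over days, not four strings in milliseconds.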
## How It Was Discovered
| Date | Event |
|---|---|
| Early 2026 | Draft blog post and model details inadvertently exposed in a publicly searchable Anthropic data cache, first reported by Fortune. |
| March 2026 | Anthropic confirms training and testing of a new model it calls "a step change" in AI capability and cybersecurity risk. |
| April 2026 | Official announcement of Claude Mythos Preview and Project Glasswing. Limited gated preview rolled out to ~40 defensive cybersecurity partners. |
| Present | Available in gated preview on AWS Bedrock and Google Cloud Vertex AI. Full public release has no announced date. |
## Project Glasswing: Defense First
Rather than a standard API launch, Anthropic debuted Mythos through Project Glasswing — a coordinated coalition of technology companies given early access specifically to use the model's capabilities for defense: finding and fixing vulnerabilities in the software the world depends on before adversaries develop comparable tools.
Partners include: AWS, Google, Microsoft, Apple, Cisco, CrowdStrike, Palo Alto Networks, and ~33 others.
> "AI capabilities have crossed a threshold that fundamentally changes the urgency required to protect critical infrastructure from cyber threats — and there is no going back."
Anthropic estimates that similar offensive capabilities will emerge from other AI labs within six to eighteen months. Project Glasswing is a bet that defenders — with early access — can use this window to patch the software the entire internet runs on before attackers catch up.
## What This Means for the Security Industry
Independent researchers at Vidoc Security Lab have already partially replicated Mythos findings using publicly available models like Claude Opus 4.6 and GPT-5.4, which signals that this isn't a narrow capability exclusive to one model — it's a frontier shift that's spreading.
The practical implication for security teams is stark: AI-assisted vulnerability discovery will soon be within reach of every enterprise. Backlogs of known vulnerabilities won't just grow — they could explode by orders of magnitude. The bottleneck shifts from finding bugs to prioritizing, validating, and remediating them at speed.
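What "prioritizing at speed" might look like in practice: a backlog ranked by a score that weights severity by exposure and boosts findings with a validated exploit. The fields and weights below are entirely illustrative assumptions, not any vendor's or standard's scheme:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    severity: float         # e.g. a CVSS-style base score, 0-10
    exploit_validated: bool # has a working exploit been confirmed?
    asset_exposure: float   # 0.0 (internal only) to 1.0 (internet-facing)

def triage_score(f: Finding) -> float:
    # Hypothetical weighting: exposed assets raise the score,
    # a validated exploit doubles it.
    score = f.severity * (1.0 + f.asset_exposure)
    if f.exploit_validated:
        score *= 2.0
    return score

backlog = [
    Finding("CVE-A", severity=9.8, exploit_validated=False, asset_exposure=0.2),
    Finding("CVE-B", severity=7.5, exploit_validated=True,  asset_exposure=1.0),
    Finding("CVE-C", severity=5.0, exploit_validated=False, asset_exposure=0.0),
]
queue = sorted(backlog, key=triage_score, reverse=True)
# queue order: CVE-B (30.0), CVE-A (11.76), CVE-C (5.0)
```

Note how the validated, internet-facing 7.5 outranks the unvalidated 9.8: when AI makes validation cheap, severity alone stops being the sort key.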
Anthropic has stated that its eventual goal is to enable safe, broad deployment of Mythos-class models. To get there, it plans to ship new safety safeguards through an upcoming Claude Opus release first, refining the guardrails before enabling the full model at scale.
## The Bottom Line
Claude Mythos is a genuinely new kind of AI model — not because it reasons better or codes faster, but because it fundamentally alters the balance of power in cybersecurity. Anthropic's choice to withhold it from public release, redirect its capabilities toward defense, and share it selectively with industry partners is one of the more consequential product decisions in the AI industry's short history.
Whether Project Glasswing succeeds in giving defenders a lasting edge remains to be seen. But the window to act — before similar capabilities become widely available — is narrow, and closing fast.
Sources: Anthropic (red.anthropic.com, anthropic.com/glasswing), Fortune, Bloomberg, Vidoc Security Lab, AWS Bedrock Docs, Google Cloud Blog
Dr. Johnson leads AI research and implementation at Kerdos Infrasoft, specializing in healthcare AI and machine learning applications with over 12 years of experience.