Eventfold

Anthropic's Claude Mythos Found Thousands of Zero-Day Vulnerabilities — Here's What That Means

Authors
  • Lucas Dow

Yesterday, Anthropic announced something that will reshape the cybersecurity landscape: Claude Mythos Preview, a new AI model so capable at finding security vulnerabilities that Anthropic has decided not to release it publicly. As I wrote in my initial reaction on LinkedIn, we are entering an era in which models can no longer simply be released to the public.

That decision alone tells you everything about where AI capabilities are heading in 2026.

What Mythos Can Do

Claude Mythos Preview is a general-purpose language model — it was not specifically trained for cybersecurity. But its reasoning capabilities have crossed a threshold where it can autonomously identify zero-day vulnerabilities and construct working exploits at a scale no previous system could match.

The numbers are striking. Over the past few weeks of testing, Mythos has:

  • Identified thousands of zero-day vulnerabilities, many of them critical, across every major operating system and every major web browser
  • Successfully reproduced vulnerabilities and created proof-of-concept exploits on the first attempt in 83.1% of cases
  • Turned 72.4% of identified vulnerabilities into successful exploits within Firefox's JavaScript engine
  • Achieved register control in an additional 11.6% of attempted attacks
  • Found bugs that had gone undetected for over a decade — including a 17-year-old remote code execution vulnerability in FreeBSD and a 27-year-old bug in OpenBSD

For context, Anthropic's previous top model, Opus 4.6, managed to create working Firefox exploits only twice out of several hundred attempts. Mythos did it 181 times.

Project Glasswing: Defense Before Offense

Instead of releasing Mythos publicly, Anthropic launched Project Glasswing, a cybersecurity initiative that gives 12 partner organizations access to the model for defensive security work. The named partners read like a who's who of the tech industry:

  • Amazon Web Services
  • Apple
  • Broadcom
  • Cisco
  • CrowdStrike
  • Google
  • JPMorgan Chase
  • Linux Foundation
  • Microsoft
  • NVIDIA
  • Palo Alto Networks

In total, 40 organizations will have access to Mythos Preview. Anthropic is backing the initiative with $100 million in usage credits and $4 million in direct donations to open-source security organizations.

The reasoning is straightforward: if Mythos can find these vulnerabilities, other AI models will eventually develop similar capabilities. Anthropic estimates competitors could reach this level within 6 to 18 months. The window to find and fix critical bugs — before offensive AI tools reach parity — is narrow.

Why This Matters Beyond Cybersecurity

The immediate cybersecurity implications are significant on their own. But the broader signal is worth paying attention to.

First, AI capabilities are not advancing linearly. The gap between Opus 4.6 and Mythos on security tasks is not an incremental improvement; it is a step change. Two successful exploits versus 181 is a roughly ninety-fold improvement, not a 10 percent gain. This kind of capability jump will happen across other domains, including the ones we work in every day.

Second, general-purpose models are developing specialist-level capabilities without specialist training. Mythos was not built as a security tool. It emerged from improvements in general reasoning. That means the next leap could come in legal analysis, financial modeling, scientific research — or event logistics and operations. The model that cracks protein folding and the model that autonomously manages your vendor communications may be the same model, running different prompts.

Third, the responsible deployment question is getting harder. Anthropic's decision to restrict Mythos is defensible — but it also highlights a growing tension. AI capabilities that are powerful enough to be transformatively useful are also powerful enough to be transformatively dangerous. Every AI company building frontier models will face this tradeoff more frequently as capabilities scale.

The Security Wake-Up Call

The vulnerabilities Mythos found are not theoretical. A 17-year-old remote code execution bug in FreeBSD means that for 17 years, every system running that software was exploitable. A 27-year-old OpenBSD bug means that even the most security-conscious operating system in existence had blind spots that human auditors missed for nearly three decades.

If AI can find these bugs, it can find the bugs in your software too. The question is whether you find them first, using AI-powered security tools on your side, or whether someone else does.

For the event technology industry, where platforms handle payment information, personal data, and identity verification at scale, this is not an abstract concern. It is a preview of the security arms race that every software company will need to engage with in 2026 and beyond.

What Happens Next

Anthropic has said it does not plan to make Mythos generally available. The goal with Project Glasswing is to learn how Mythos-class models could eventually be deployed at scale with appropriate safeguards.

But the genie is out of the bottle on capabilities. Other labs — OpenAI, Google DeepMind, Meta, Mistral — are working on models that will reach similar performance. The 6-to-18-month window Anthropic described is a best-case estimate.

The practical takeaway: if your organization handles sensitive data, now is the time to invest in AI-powered security auditing. Not because Mythos is available to you (it is not), but because the class of threats it represents — AI-discovered zero-days — is coming regardless of whether any single model is released publicly.

The organizations that survive the AI security era will be the ones that used AI for defense before it was used against them for offense.