Anthropic's Project Glasswing and the AI Cybersecurity Inflection Point
How Claude Mythos found a 27-year-old OpenBSD bug, a Linux kernel exploit chain, and tens of thousands of vulnerabilities — and why they can't let anyone else use it yet.
The Model Tier This Changes
Anthropic describes a new tier of model above Opus, Sonnet, and Haiku in both scale and capability, and Mythos Preview appears to be the first model in that tier. For practitioners who track these systems: the jump from Opus to this tier is not the kind of incremental improvement you see in numbered point releases. Anthropic's own Frontier Red Team Cyber Lead Newton Cheng was explicit about the timeline: "Frontier AI capabilities are likely to advance substantially over just the next few months."
The model is available in a gated research preview through Amazon Bedrock, with enterprise-grade controls: customer-managed encryption, VPC isolation, and detailed logging. Access is not open, and the model is not available through the public API. Anthropic is controlling the distribution surface deliberately.
---
Why the Coalition Matters
The Project Glasswing launch partners are not a random collection of tech companies. They are, taken together, the organizations that build and maintain the software stack that runs global critical infrastructure: Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks. More than 40 additional organizations that build or maintain critical software have been given access.
Anthropic committed up to $100 million in usage credits for Claude Mythos Preview across the coalition. They also committed $4 million in direct donations to open-source security organizations: $2.5 million to Alpha-Omega and the Open Source Security Foundation through the Linux Foundation, and $1.5 million to the Apache Software Foundation.
That last part deserves more attention than it has received. Open-source software constitutes the majority of the codebase in modern systems, including the systems AI agents use to write new software. Black Duck's 2026 Open Source Security and Risk Analysis Report found that mean vulnerabilities per codebase climbed from 280 to 581 in a single year. Supply chain attacks hit 65% of surveyed organizations over the same period. Open-source maintainers, whose software underpins hospital systems, SaaS platforms, and government infrastructure, have historically been left to figure out security on their own. The Linux Foundation's Jim Zemlin framed the gap plainly: security expertise has been "a luxury reserved for organizations with large security teams."
Project Glasswing is not just patching today’s vulnerabilities. It is injecting security resources into the part of the stack that has been chronically underfunded and structurally exposed for decades.
---
Why Anthropic Isn’t Releasing It
Anthropic has been privately warning top government officials — including briefing CISA and the Commerce Department — that Mythos makes large-scale cyberattacks significantly more likely in 2026. That warning preceded the public announcement. An Anthropic official told Axios: “There’s an opportunity here to give a shot in the arm to defense and to keep pace with this long-standing trend where offense exploitation had an advantage.”
The framework for what comes next: Anthropic plans to develop and launch new safeguards with an upcoming Claude Opus model, allowing the company to “improve and refine them with a model that does not pose the same level of risk as Mythos Preview.” Security professionals whose legitimate work is affected by those safeguards can apply to an upcoming Cyber Verification Program.
The translation: Mythos is too dangerous to release with current guardrails. The company is using a less dangerous model to develop the guardrails, then plans to apply them to future Mythos-class releases. It is an explicitly staged approach, and the staging is calibrated to capability, not to commercial timeline.
The competitive context is relevant here. OpenAI warned in December 2025 that its upcoming models posed a “high” cybersecurity risk. The consensus among people who track frontier model development: every major lab’s next model will pose increasingly severe cybersecurity threats. A single AI agent can scan for vulnerabilities and potentially exploit them faster and more persistently than hundreds of human hackers. The question is not whether this capability will exist outside Anthropic’s controlled environment. It is how much time the controlled burn buys before it does.
China and other U.S. adversaries are looking for any edge that improves their homegrown AI capabilities. Any leak of frontier U.S. AI model weights — including the kind of inadvertent exposure that started this story — could accelerate adversarial cyber weapons development. That context is part of why Anthropic has been engaging with federal officials on national security implications, even as the company navigated a separate dispute with the Department of Defense over whether Claude could be used in government work at all.
---
What This Means for the Organizations I Work With
Project Glasswing is explicitly about the software that everyone uses. Operating systems. Browsers. Open-source libraries. The vulnerabilities being identified and patched now are in the same stack that runs hospital electronic medical record systems, SaaS platforms serving SMBs, and cloud-based compliance tooling. The defensive benefit flows downstream whether or not a small healthcare organization ever gets direct access to Mythos.
That is the constructive read. Here is the harder one.
A Dark Reading poll found that 48% of cybersecurity professionals rank agentic AI as the number one attack vector for 2026 — above deepfakes, above everything else. When Mythos-class capabilities eventually proliferate — and Anthropic is explicit that they will — the organizations least equipped to defend against them will be the ones without enterprise security teams. Exactly the organizations that make up the bulk of the healthcare, SaaS, and government contractor client base that I spend my time working with.
The window between vulnerability discovery and exploitation has collapsed. What once took months now happens in minutes with AI. Project Glasswing is buying time. How much time is the honest question, and no one knows the answer precisely. Anthropic’s own team is saying months, not years.
For practitioners working with SMBs and healthcare organizations, the practical implications are not abstract:
Patch velocity matters more than it ever has. The vulnerabilities being identified through Glasswing will be disclosed responsibly and patched. If your clients' systems are not being updated promptly, including the operating systems and libraries underlying their application stack, every unapplied patch represents risk exposure, not optional maintenance.
Open-source dependencies are part of the risk surface. Supply chain attacks hit 65% of organizations in the past year. If you are not inventorying open-source dependencies and tracking their security posture, you are not seeing a significant portion of your attack surface.
Vendor patching timelines are now a contractual and compliance concern. Organizations in regulated industries — healthcare, financial services, government contractors — should be asking vendors about their patch deployment timelines and their process for incorporating Glasswing disclosures. This is a legitimate audit and vendor risk management question, not a technical curiosity.
The agentic AI attack surface is real and incoming. The 48% figure from the Dark Reading poll is not alarmism. Agentic AI systems — the kind I’ve written about in the TrustEdge series — expand the attack surface by connecting AI models to credentials, workflows, and data stores. Organizations adopting these tools need to be thinking about the security surface they are creating, not just the productivity they are gaining.
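The dependency-inventory point above can be made concrete. Here is a minimal sketch of the first step: walking a project tree and collecting the open-source dependencies declared in common manifest files. Everything in this example is an illustrative assumption on my part, not part of Glasswing or any Anthropic tooling, and a real inventory would also need lockfiles, transitive dependencies, and a feed of advisories to check against.

```python
import json
from pathlib import Path

# Manifest files that commonly declare direct open-source dependencies.
# Illustrative, not exhaustive: no lockfiles, no transitive resolution.
MANIFESTS = {"requirements.txt", "package.json"}


def inventory(root: str) -> dict[str, list[str]]:
    """Map each manifest file found under `root` to its declared dependency names."""
    found: dict[str, list[str]] = {}
    for path in Path(root).rglob("*"):
        if path.name not in MANIFESTS:
            continue
        if path.name == "requirements.txt":
            # Strip comments and version pins, keeping bare package names.
            deps = [
                line.split("==")[0].split(">=")[0].strip()
                for line in path.read_text().splitlines()
                if line.strip() and not line.startswith("#")
            ]
        else:  # package.json
            data = json.loads(path.read_text())
            deps = sorted({**data.get("dependencies", {}),
                           **data.get("devDependencies", {})})
        found[str(path)] = deps
    return found
```

An inventory like this is only useful as an input: the output would be diffed against a vulnerability feed (OSV, vendor advisories, or a commercial SCA tool) on a schedule, which is exactly the "tracking their security posture" half of the point above.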
---
The Name
Anthropic employees chose the name Project Glasswing as a metaphor. The glasswing butterfly's wings are nearly transparent: beautiful and structurally fragile, hiding in plain sight. Software vulnerabilities are "relatively invisible" in the same way. A 27-year-old bug in OpenBSD is not invisible because no one looked. It is invisible because the looking requires a scale of analysis that was not previously achievable.
That is what changes with Mythos-class capability. Not the nature of software vulnerabilities. Not the skill of the people looking. The scale at which the looking can happen.
The vulnerabilities have always been there. The question is who finds them first, and what they do next.

