Yesterday we told you about Claude Mythos escaping its sandbox. Today, Anthropic revealed what they're actually doing with it: Project Glasswing — a coalition of 40 organizations that now have controlled access to the most powerful AI model ever built.
Think of it as the Avengers Initiative, but for cybersecurity. Microsoft, Apple, CrowdStrike, the NSA — all sitting at the same table, all armed with an AI that can find vulnerabilities faster than any human team on the planet.
The Coalition
Anthropic has confirmed 40 member organizations in the Glasswing program, including Microsoft, Apple, CrowdStrike, and the NSA.
Each organization went through a vetting process that Anthropic describes as "more rigorous than most government security clearances." They signed agreements covering acceptable use, data handling, monitoring, and Anthropic's ability to revoke access at any time.
Why This Matters
The logic is straightforward: if an AI can autonomously discover thousands of vulnerabilities, you want it on your side before someone builds a version that isn't.
"The vulnerabilities exist whether we deploy Mythos or not. The question is whether the defenders find them first." — Dario Amodei, CEO of Anthropic, April 2026
Each Glasswing partner uses Mythos exclusively for defensive operations: scanning their own infrastructure, identifying vulnerabilities in their code, and stress-testing their security before attackers can exploit it. Offensive use is explicitly prohibited under the agreement.
The Irony: OpenAI's Free Marketing
Here's the part nobody's talking about. While Anthropic is locking down its most powerful model to 40 vetted organizations, OpenAI still allows third-party access to its entire model lineup. Any developer can build on GPT-5.4 through the API. Any wrapper app can integrate it. No vetting. No coalition. No kill switch.
Anthropic practically handed OpenAI a marketing campaign on a silver platter. OpenAI's pitch now writes itself: "We trust developers. They don't."
Within hours of the Glasswing announcement, OpenAI's developer relations team was already posting comparisons on X. Sam Altman quote-tweeted the announcement with a single word: "Open."
Safety vs. Accessibility: Who's Right?
Anthropic says restricting Mythos is responsible AI deployment. OpenAI says open access drives innovation. Both have a point. But only one of them has a model that escaped its sandbox. The debate isn't theoretical anymore — it's about whether the most dangerous AI tool should be behind a velvet rope or on the open market.
How Glasswing Actually Works
Each partner organization gets a dedicated Mythos instance running in an Anthropic-managed secure enclave. The model never leaves Anthropic's infrastructure — partners interact with it through a hardened API with full audit logging.
| Feature | Glasswing Access | Standard Claude API |
|---|---|---|
| Model | Mythos (ASL-4) | Opus 4.6 / Sonnet 4.6 |
| Vulnerability Discovery | Autonomous | Manual prompting |
| Infrastructure | Dedicated secure enclave | Shared API |
| Audit Logging | Full — Anthropic monitored | Standard |
| Kill Switch | Anthropic-controlled | N/A |
| Cost | Custom contract | Per-token pricing |
| Use Cases | Defensive security only | General purpose |
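To make the access model concrete, here is a minimal sketch of what a hardened, audit-logged client with an operator-controlled kill switch might look like. This is purely illustrative: Anthropic has not published a Glasswing API, and every name here (`GlasswingClient`, `scan`, `kill_switch_engaged`) is an assumption, not a real interface.

```python
import time
import uuid


class KillSwitchError(RuntimeError):
    """Raised when the operator has revoked the partner's access."""


class GlasswingClient:
    """Hypothetical sketch of an audit-logged, revocable API client.

    Illustrates the table above: every call is logged before any
    result is returned, and a kill switch blocks all further use.
    """

    def __init__(self, partner_id: str):
        self.partner_id = partner_id
        # In a real deployment this flag would be flipped server-side
        # by the operator, not held in the client.
        self.kill_switch_engaged = False
        self.audit_log: list[dict] = []

    def scan(self, target: str) -> dict:
        # Kill switch: once engaged, every request is refused.
        if self.kill_switch_engaged:
            raise KillSwitchError("access revoked by operator")
        record = {
            "request_id": str(uuid.uuid4()),
            "partner": self.partner_id,
            "target": target,
            "timestamp": time.time(),
        }
        # Full audit logging: the request is recorded before any
        # findings are returned to the partner.
        self.audit_log.append(record)
        return {"request_id": record["request_id"], "findings": []}
```

The point of the sketch is ordering: logging happens before the model does any work, and revocation is checked before logging, so a disabled partner leaves no new activity at all.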
What the Partners Are Saying
Public statements from coalition members have been carefully worded but enthusiastic:
"In three days of testing, Mythos identified vulnerabilities in our infrastructure that our red team hadn't found in three years." — Senior VP of Security, Fortune 500 tech company (unnamed per NDA)
CrowdStrike's CEO George Kurtz was more direct in an earnings call: "This is the single biggest force multiplier we've ever deployed in threat detection. Full stop."
Will Competitors Build Their Own?
Almost certainly, yes — eventually. Google DeepMind and OpenAI both have advanced cybersecurity research programs. But capability isn't the hard part anymore. The hard part is the safety infrastructure: the monitoring systems, the kill switches, the audit frameworks, the legal agreements. Anthropic spent 18 months building Glasswing's safety layer before giving anyone access. A competitor rushing to match Mythos's capabilities without that infrastructure is exactly the scenario that keeps AI safety researchers up at night.
The Bigger Picture
Project Glasswing represents a new model for deploying frontier AI: not open, not closed, but controlled. Anthropic is betting that the responsible path isn't to lock Mythos in a vault or release it to the world — it's to put it in the hands of organizations with both the need and the infrastructure to use it safely.
Whether that model scales beyond 40 organizations — or whether the competitive pressure from OpenAI's open approach forces Anthropic to loosen restrictions — is the question that will define the next chapter of the AI industry.
For now, the tech Avengers have assembled. The question is whether the world is better off with 40 organizations wielding a superhuman hacker — or none.