Anthropic just ran the cleanest “have your cake and don’t ship it” play we’ve seen in AI.

I’m looking at the Mythos + Opus 4.7 rollout and all I see is one thing:

We just watched a company ship a BANGER model… and still get the press to obsess over the one they didn’t release.

Bloomberg running “Too Dangerous For Release.” Axios going with “concedes it trails unreleased Mythos.” CNBC: “less risky than Mythos.”

This is exactly the narrative @AnthropicAI wanted. And they got it.

1. Genuine Capability — Do we buy the zero-day flex?

Let’s start with the red.anthropic.com Mythos preview: thousands of zero-days in every major OS and browser; capable of hacking banks if misused.

We’re all thinking the same thing: this is either the most cracked red-team engine ever built… or the most overclocked marketing copy since “self-driving is basically solved.”

Here’s where I land:

Can I 100% verify the "thousands of zero-days" claim? No. It's coming from Anthropic itself, and no third-party vuln databases or independent security orgs have dropped the receipts.

But do I think they’re bluffing? Also no.

Why? Because if you lie that big in security, someone eventually calls you on it. And because we already see LLMs creeping into offensive sec — open-source tools are already finding shallow vulns at scale. It's not a stretch that a frontier, safety-untuned internal model plus tooling can sweep codebases and fuzz attack surfaces way beyond human bandwidth.

Genuine Capability — Mythos claims:

Basically: I think the core “this thing is dangerously good at offense” is true. The exact numbers are probably marketing-smoothed. But it’s not sci‑fi.

2. Safety Posture — Project Glasswing: shield or stage prop?

This is where everyone starts posturing.

Anthropic: “Mythos is too dangerous for public release. We’re routing it through Project Glasswing to critical infra partners and defenders first.”

Bloomberg amplifies the “Too Dangerous” frame. Scientific American and SFist are like “why are experts worried?” CNBC echoes “less risky than Mythos” for Opus 4.7. Safety theater catnip.

Here’s the real question: is Glasswing actually constraining risk, or just a clean PR container?

On the "this might be real defense" side:

- Access goes to critical infra partners and defenders first. That's a real gate, not an open API anyone can hammer.

On the "this might be theater" side:

- The capability still exists and still runs internally. Glasswing constrains who gets a seat at the table, not what the model can do.

My read: Glasswing is partly real safety and partly PR container. The cap table gets to tell regulators, “See, we’re the adults,” while still running the crazy stuff internally and in restricted channels.

Safety Posture — Project Glasswing:

It’s not fake. But it is curated. And we shouldn’t pretend it’s some bulletproof “we solved misuse” framework.

3. Marketing Genius — “We have something better and you can’t have it”

This is where you have to tip the cap.

Most companies ship their best model and hope the press notices. Anthropic shipped Opus 4.7 — which, again, beats GPT‑5.4 and Gemini 3.1 Pro on real benchmarks — and still got every major outlet to define it relative to a product that doesn’t exist publicly.

Bloomberg: “Too Dangerous For Release.” Axios: “concedes it trails unreleased Mythos.” CNBC: “less risky than Mythos.”

That is textbook frame control.

They turned Opus 4.7 — already S-tier — into the “safe daily driver” and Mythos into the dragon in the basement. You know who does that? Console makers with dev kits. Weapons labs. Luxury brands with invite-only lines.

They also pulled the craziest move of all: kept Opus 4.7 pricing flat at $5 / $25 per million tokens while heavily implying "we're holding back a supercar." So devs get a model that beats GPT‑5.4/Gemini 3.1 Pro on agentic and finance tasks, no price hike, and the whole world walks away thinking Anthropic is sitting on a nuke.

Legit counterargument: this is flirting with fear-mongering. The “too dangerous for release” narrative can and will be used to justify regulation and gatekeeping that lock out open-source and small shops. And Anthropic is not sad about that.

But strictly on marketing craft?

They got the “too powerful” halo and the “safer than Mythos” trust badge in the same cycle. That’s not an accident. That’s a coordinated run with press, policy, and product all aligned.

4. Competitive Pressure — Does this corner OpenAI and Google?

This is the spicy one.

By openly saying “Opus 4.7 trails Mythos,” Anthropic basically told the world: we have internal models beyond what we sell you, and we’re responsible about not shipping them.

Axios literally framed it that way. CNBC calls Opus 4.7 “less risky than Mythos.” The implicit question lands squarely on @sama, @OpenAI, @GoogleDeepMind:

“Do you also have Mythos-class internal models you’re not talking about? If yes, why aren’t you being this transparent? If no, are you behind Anthropic on both capability and safety?”

That’s the trap.

If OpenAI or Google admits they have stronger internal models, they invite the same “too dangerous for public” scrutiny and regulator attention. Now you’re on the back foot explaining safety posture to the same reporters Anthropic just briefed.

If they don’t admit it, Anthropic gets to wear the “we’re the cautious grownups at the frontier” crown — especially with things like the cyber-blocking system in Opus 4.7 and the Cyber Verification Program. Feeds perfectly into the “Constitutional AI, safety-first” mythology they’ve been building since day one.

Realistically, OpenAI and Google also have scary internal stuff. Agents, long-horizon tool use, internal security experiments. We’re not children. But they’ve been quieter on “this is too hot to ship” specifics.

Mythos forces a choice:

- Admit you have Mythos-class internal models, and inherit the same "too dangerous for public" scrutiny and regulator attention.
- Stay quiet, and let Anthropic wear the cautious-grownup crown alone.

Competitive Pressure:

Very fair counterpoint: this can backfire. If regulators take "too dangerous for release" at face value, the rules that follow won't just lock out open source and small shops — they can land on Anthropic's own restricted channels too.

Stay sharp. — Max Signal