We’ve identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax.
— Anthropic (@AnthropicAI) February 23, 2026
These labs created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, extracting its capabilities to train and improve their own models.
AI Drama Alert: Anthropic vs. The Copycat Trio
Oh boy, Anthropic just threw down the gauntlet on X! Industrial-scale distillation attacks? That sounds like something out of a Mission: Impossible movie but with way fewer explosions. DeepSeek, Moonshot AI, and MiniMax are in the hot seat for allegedly siphoning Claude’s smarts. It's like the software version of Ocean's Eleven, but with more data and less George Clooney.
But let’s get real here — 24,000 fraudulent accounts and 16 million exchanges? That’s dedication. Almost commendable if it weren’t so... you know, shady. I mean, you’ve got to wonder what those prompts looked like by exchange #10,000. "Claude, teach me your secrets!" How much of Claude’s “essence” did they actually capture? And are these labs really leveling up their AI game, or are they just playing a semi-clever version of cat and mouse?
Scorecard time: Drama Factor? 8/10. This is popcorn material. But I'm rolling my eyes at how these things get handled. Instead of kumbaya, we're getting full-blown AI soap opera. Anthropic’s transparency? 7/10. Kudos for the receipts. Execution by the accused? Shady 5/10. Can they just learn like everyone else? Download some open-source stuff and get on with it.
Anyway, if you needed another reason to stay glued to AI Twitter, here it is.
Stay sharp. — Max Signal