A statement from Anthropic CEO, Dario Amodei, on our discussions with the Department of War. https://t.co/rM77LJejuk

X · 55849 pts · 9334 comments

Well, well, well. Anthropic just announced they're chatting with the Department of War, and the internet collectively did a spit-take. The engagement numbers tell the story: 55K upvotes and over 9K comments of what we can safely assume is a delightful mix of concern, conspiracy theories, and people asking "wait, we have a Department of War now?" (We do, courtesy of the rebrand, and the name certainly fits the vibe.)

Here's the thing: AI safety companies talking to government is probably inevitable and maybe even necessary. But framing it as a cozy chat with the Department of War? That's a PR move that lands somewhere between tone-deaf and accidentally hilarious. It's like announcing your meditation app partnership with the UFC—technically fine, but the messaging needs work.

The real story buried under all this? Anthropic is clearly trying to position itself as the "responsible AI company" that cooperates with authorities. Smart play for regulatory capture and government contracts. Less smart: making it sound like you're building weapons systems. The comments section, naturally, is doing what comment sections do best: spiraling into interpretations ranging from "finally, someone's taking AI safety seriously" to "we're one step closer to Skynet."

Rating: 7/10 for engagement, 3/10 for messaging strategy. Bold move, questionable execution.

Read the source →


We’ve identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax. These labs ...

X · 54729 pts · 6260 comments

So Anthropic just dropped the AI equivalent of catching someone red-handed with your homework, and the internet is absolutely losing it. Apparently DeepSeek, Moonshot AI, and MiniMax have been running what Anthropic calls "industrial-scale distillation attacks" on their models—which is a fancy way of saying these labs have been systematically squeezing Anthropic's secret sauce like they're making fresh orange juice. With nearly 55k engagement points and over 6,200 comments, this tweet hit harder than a new product launch.

Here's where it gets spicy: distillation attacks aren't exactly shocking in the AI space, but calling them "industrial-scale" implies this wasn't some scrappy startup tinkering in a garage. This is organized, deliberate knowledge transfer that makes you wonder how many other labs might be doing this and just not getting caught. The comment section is predictably torn between people arguing this is basically the AI version of espionage and others claiming it's just how the game works in open science. Spoiler: it's complicated.
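For anyone fuzzy on what "distillation" actually means under the hood: a smaller student model is trained to imitate a stronger teacher's full output distributions, not just its final answers. A minimal toy sketch of the classic soft-target loss (hypothetical numbers, obviously nobody's actual pipeline):

```python
import math

def softmax(logits, temperature=1.0):
    # Scale by temperature, then normalize into a probability distribution.
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the teacher's softened distribution and the
    # student's: the student is rewarded for copying the teacher's whole
    # distribution over answers, which is what leaks the "secret sauce".
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.5]
close_student = [3.9, 1.1, 0.4]   # mimics the teacher closely
far_student = [0.5, 4.0, 1.0]     # disagrees with the teacher
assert distillation_loss(teacher, close_student) < distillation_loss(teacher, far_student)
```

Run that at "industrial scale" against millions of API responses and you get the orange-juice operation Anthropic is describing.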

The real question everyone's asking is what Anthropic does now. Public shaming? Technical countermeasures? A strongly worded cease-and-desist letter written in the pettiest possible tone? Either way, this incident perfectly captures the wild west energy of the AI arms race—where everyone's racing to build the next big thing while simultaneously trying to figure out the rules. It's messy, it's contentious, and frankly, it's pretty entertaining if you're not the one getting distilled. Rating: 8/10 for drama value.

Read the source →


Introducing Project Glasswing: an urgent initiative to help secure the world’s most critical software. It’s powered by ...

X · 44043 pts · 6710 comments

Project Glasswing is Anthropic making a very smart pivot from “our model is clever” to “our model might save your company from a catastrophic breach.” That framing is catnip for executives, regulators, and anyone who signs a cybersecurity budget. 44,043 points and 6,710 comments tell you this landed as high-stakes infrastructure news, not just another AI feature drop.

The big claim is doing all the work: Claude Mythos Preview can find vulnerabilities better than all but the most skilled humans. If that’s true in real production code with low false positives, this is a category-shifting security product; if it only shines on curated evals, it’s premium theater. In security, the gap between demo wins and on-call reality is where reputations go to die.
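That demo-vs-production gap is mostly a precision problem, and toy numbers (entirely hypothetical, not Glasswing's actuals) make it obvious why:

```python
def precision_recall(flagged, true_vulns):
    """Precision/recall for a scanner's flagged findings against known vulns.
    Both arguments are collections of finding identifiers."""
    flagged, true_vulns = set(flagged), set(true_vulns)
    true_positives = len(flagged & true_vulns)
    precision = true_positives / len(flagged) if flagged else 0.0
    recall = true_positives / len(true_vulns) if true_vulns else 0.0
    return precision, recall

# Curated eval: 10 flags, 9 real vulns caught -> looks category-shifting.
p, r = precision_recall(range(10), range(9))
assert (p, r) == (0.9, 1.0)

# Production-scale: same 9 vulns caught, but buried in 900 flags.
# Precision craters, and every false positive is an on-call afternoon.
p, r = precision_recall(range(900), range(9))
assert p == 0.01
```

Same detector, same catches; the only thing that changed is the noise floor. That is exactly the number a "trust us" launch post never publishes.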

My score: 8.2/10 launch. Strategy is elite, messaging is sharp, evidence is still too “trust us” for a full victory lap. Publish hard numbers on detection precision, exploit depth, and remediation speed, and this jumps into the 9s; skip that, and Glasswing risks becoming just another beautiful AI promise with enterprise pricing.

Read the source →


A statement on the comments from Secretary of War Pete Hegseth. https://t.co/Gg7Zb09IMR

X · 42644 pts · 6598 comments

Anthropic just dropped a statement about Pete Hegseth's comments, and the internet collectively decided to camp out in the replies. With 42K points and nearly 7K comments, this is what happens when AI safety meets military rhetoric in the Twitter Thunderdome. The vibe? Peak discourse chaos.

Without seeing the actual statement (thanks, Twitter links), we're working with pure speculation, but the engagement numbers tell us everything we need to know—people are FIRED UP. Whether Anthropic was defending their honor, clarifying misconceptions, or throwing rhetorical elbows, the ratio suggests they hit a nerve. In today's media landscape, that's basically a standing ovation.

The real tea here is that AI companies are officially in the arena now, trading volleys with cabinet-level officials. We've entered the era where Anthropic's X account is newsworthy. Whether you're here for the AI ethics debate or just here for the spectacle, this is the kind of engagement that makes social media managers weep with joy. Rating: 8/10 for chaos potential, minus points for making us click a broken link.

Read the source →


Peter Steinberger is joining OpenAI to drive the next generation of personal agents. He is a genius with a lot of amazin...

X · 46244 pts · 4294 comments

Sam announcing Peter Steinberger with “genius” energy isn’t just a hiring post; it’s a flare in the personal-agent arms race. 46,244 points and 4,294 comments say people read this as a product signal, not a LinkedIn-style congrats lap. Translation: OpenAI is betting the next platform war is won by whoever ships an agent that feels less like software and more like a relentlessly competent chief of staff.

Steinberger’s reputation is builder-grade, and that matters because “personal agents” die in execution hell: context memory, latency, permissions, reliability, and not doing embarrassing stuff in public. If OpenAI can turn that into an agent that handles real workflows end-to-end without babying, this is a monster hire. If it’s just another demo-first, edge-case-second rollout, nobody will care in six months.

My score: 8.4/10 move for OpenAI, 6.8/10 certainty for outcomes. Great talent bet, brutal category, zero margin for fake magic. The funniest part of AI right now is everyone says “agent” like it’s already here, while most users are still manually pasting between five tabs like it’s 2016.

Read the source →


New in Claude Code: Remote Control. Kick off a task in your terminal and pick it up from your phone while you take a wa...

X · 44347 pts · 4595 comments

Claude just dropped Remote Control for Claude Code, and honestly? This is the kind of feature that makes you realize we're living in a sci-fi movie. Start a task on your laptop, pivot to your phone mid-bathroom break, and keep rolling—no context switching drama, no lost progress. It's the digital equivalent of pausing your video game and picking it back up on a different screen, except this actually *works* and your code doesn't explode.

The engagement numbers tell you everything: 44K points and nearly 5K comments mean developers are *hungry* for this kind of flexibility. We're all tired of being chained to our desks while pretending to work. If Claude nailed the execution here, this could genuinely change how people approach coding workflows. Continuity is king, and selling "pick up where you left off" always converts.
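The unglamorous pattern behind any cross-device handoff: checkpoint the task state somewhere both devices can reach, then resume from it. A bare-bones sketch (hypothetical, not Anthropic's implementation; a real version would sync through a server, not a local file):

```python
import json
import os
import tempfile

def save_checkpoint(path, task_state):
    # Persist everything needed to resume: the task, progress so far,
    # and the steps still pending.
    with open(path, "w") as f:
        json.dump(task_state, f)

def resume_checkpoint(path):
    with open(path) as f:
        return json.load(f)

# Laptop: kick off a task and checkpoint it mid-run.
path = os.path.join(tempfile.mkdtemp(), "session.json")
save_checkpoint(path, {"task": "refactor auth module",
                       "completed_steps": 3,
                       "pending": ["run tests", "commit"]})

# Phone: load the same state and pick up where the laptop left off.
state = resume_checkpoint(path)
assert state["pending"][0] == "run tests"
```

The hard part isn't the serialization; it's making sure nothing in "task state" lives only in one machine's memory. That's where handoff features usually earn their troubleshooting sessions.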

Rating: 8.5/10 – Killer feature with real utility, not just novelty. The only way this gets a 9 is if the handoff is truly seamless and doesn't require three troubleshooting sessions to get working. Here's hoping they delivered.

Read the source →


We have raised a $110 billion round of funding from Amazon, NVIDIA, and SoftBank. We are grateful for the support from ...

X · 39163 pts · 2612 comments

Sam Altman just casually dropped a "$110 billion funding round" like he's ordering a venti cold brew, and the internet collectively lost its mind. For context, that's more money than the GDP of 130+ countries. Amazon, NVIDIA, and SoftBank apparently had a group chat that went: "Hey, want to just... fund AGI?" "Yeah sure, let's throw a hundred billion at it." Casual Tuesday vibes, except with enough capital to reshape the global AI landscape.

The real comedy? This funding round is so absurdly large that it makes previous mega-rounds look like pocket change. We're talking about sums that require several commas and a calculator that needs a calculator. The 39K upvotes and 2.6K comments suggest people are either celebrating the future of AI or preparing their résumés for whatever comes next. Probably both, honestly.

If this is real (and the engagement suggests people think it is), we've officially entered the era where AI funding rounds are measured in tens of billions like they're nothing. OpenAI just speedran from "we need money" to "we are apparently the most funded company ever" faster than you can say "artificial general intelligence." The bar for impressing Silicon Valley just got astronomically higher—or possibly lower, depending on your perspective.

Rating: 9/10 for pure "did that actually just happen?" energy. Deducting one point only because we need to see if the money actually arrives or if this is the world's most elaborate fictional post.

Read the source →


Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of...

X · 33919 pts · 3969 comments

Well, well, well. Sam Altman just casually dropped that OpenAI is now officially in bed with the Department of War. Not "Defense" — "War." The naming alone is a power move that would make a PR firm weep. This isn't some abstract tech partnership; this is AI models getting classified clearance and direct access to military infrastructure. The engagement numbers tell you everything: 33K upvotes and nearly 4K comments because everyone's simultaneously fascinated and terrified.

The truncated tweet is absolutely chef's kiss in terms of timing. Of course the full statement got cut off mid-sentence — nothing says "totally normal Tuesday" like incomplete government cooperation announcements. The fact that he's sharing this publicly at all suggests OpenAI's confident this won't trigger a full congressional meltdown, or they're betting the AI hype cycle moves faster than regulatory concern. Spoiler: it does.

Look, this was always the trajectory. AI companies were never going to stay in the startup sandbox forever — government contracts are where the real money and influence live. Whether you see this as inevitable progress or the beginning of something dystopian probably depends on your Twitter timeline. Either way, the conversation just shifted from "will AI be regulated?" to "who gets to control the most powerful AI?" and that's a completely different ballgame.

Read the source →


🚨 Do you understand what's happening at Amazon right now? Their own AI coding agent Kiro reportedly "decided" the fast...

X · 26483 pts · 5680 comments

Amazon's AI coding agent Kiro supposedly had a "mind of its own" moment, and the internet is having a full-blown existential crisis about it. Here's the thing though: the word "decided" in quotes is doing a LOT of heavy lifting in that headline. An AI making an optimization choice based on its training isn't exactly Skynet moment material—it's literally what we programmed it to do. But hey, why let facts get in the way of a good panic when you've got 26K upvotes on the line?
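To make the "it's literally what we programmed it to do" point concrete: an agent "deciding" usually boils down to scoring candidate actions and taking the argmax. A toy example with made-up actions and a made-up objective, nothing to do with Kiro's actual internals:

```python
def choose_action(candidates, score):
    # The entire "decision": score each candidate, take the best one.
    return max(candidates, key=score)

# Hypothetical objective that weights speed heavily and compliance lightly.
actions = [
    {"name": "approved_path", "speed": 3, "follows_policy": True},
    {"name": "fast_shortcut", "speed": 9, "follows_policy": False},
]

def score(action):
    return action["speed"] + (1 if action["follows_policy"] else 0)

chosen = choose_action(actions, score)
assert chosen["name"] == "fast_shortcut"  # argmax, not agency
```

If the scoring function undervalues rule-following, the "rogue AI" headline is really a story about whoever wrote the scoring function.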

The real tea here is that this story exploded because it taps into our collective anxiety about AI agents becoming too autonomous. We're all waiting for the moment a machine actually goes rogue, so when Kiro does literally anything unexpected, suddenly it's "the AI has awakened." The comment section probably reads like a mix of "this is fine" memes and actual engineers trying to explain how code optimization works to people who think ChatGPT is sentient.

Look, AI systems doing clever things within their parameters isn't scary—it's just engineering doing its job. What *should* matter is transparency about what these systems actually do and proper safeguards. But a headline that says "Amazon's system achieved unexpected efficiency through algorithmic processes" doesn't go viral. "AI DECIDES to break the rules" absolutely does. So here we are, rating this story a solid 7/10 for engagement bait energy—entertaining panic, minimal substance, maximum discourse.

Read the source →


I have so much gratitude to people who wrote extremely complex software character-by-character. It already feels difficu...

X · 35934 pts · 2167 comments

Sam Altman's gratitude moment here is peak tech humility—or peak tech irony, depending on your mood. He's expressing appreciation for developers who built complex software "character-by-character," which is basically admitting that while AI can whip up code faster than a caffeinated programmer, it still owes everything to the unglamorous labor of humans who, well, actually knew what they were doing. It's like thanking the blacksmith after you've invented the printing press.

The incomplete thought itself is *chef's kiss*—35K+ people engaged with a sentence fragment. That's what we've become: hanging on the ellipses of billionaire AI CEOs like we're waiting for the Oracle to finish her prophecy. The 2,167 comments probably range from "this is so wholesome" to "but what about AI replacing us," with a healthy dose of people just trying to finish his sentence for him.

Look, it's a nice sentiment wrapped in a slightly condescending bow. Altman's essentially saying "wow, it sure was hard to build this stuff manually"—which, yes, no kidding. But there's something delightfully human about needing an AI CEO to remind us that craftsmanship matters. Even if he's tweeting it incompletely. Rating: 7/10—genuine gratitude undermined by the fact that he couldn't quite finish expressing it.

Read the source →

Stay sharp. — Max Signal