AI assistance when contributing to the Linux kernel
Linus Torvalds just did something I didn't expect: he drew a LINE in the sand about AI-assisted code contributions to Linux, and honestly? I respect the hell out of it. This isn't some Luddite rant — it's the guy who literally built the kernel saying "cool, use AI, but we're not taking your slop if you can't explain what it does." That's not gatekeeping. That's quality control.
Here's what kills me: everyone's been waiting for the other shoe to drop on AI in open source. GitHub Copilot gets bigger, ChatGPT gets smarter, and we're all wondering when the first major project says "nope, too risky." Well, Linus went FIRST and he didn't ban it — he just said "you gotta understand your own code." Which is... exactly what you should have to do anyway? Imagine arguing against that. "No, I should be able to ship code I don't understand." Good luck with that pitch.
The discourse in those 257 comments is probably unhinged — you've got purists screaming about "real programmers," AI stans saying this is discrimination, and actual maintainers just trying not to get audited. But the signal here is clear: open source is going to be the TEST KITCHEN for AI code. If you can't defend your generated code to Linus Torvalds, you probably shouldn't be shipping it anywhere. Rating: 9/10. Not perfect because the policy could be clearer, but the PRINCIPLE? Chef's kiss.
Stay sharp.
Launch HN: Twill.ai (YC S25) – Delegate to cloud agents, get back PRs
Okay, so Twill.ai just dropped on HN and honestly? This is the kind of thing that makes me sit up in my chair. YC S25, cloud agents that just... hand you back pull requests. No prompting. No "here's what I did." Just PRs. That's the dream.
Look, we've been waiting for this move forever. Everyone's been shipping "AI coding assistants" that are basically just autocomplete on steroids. Twill's angle is different — you delegate the whole task, the agent runs in the cloud, figures out what needs to happen, and drops a fully-formed PR on your desk. It's like having a junior dev who actually finishes things instead of leaving you 47 unfinished Copilot suggestions. 71 points and 70 comments mean the crowd is INTERESTED.
But here's where I'm skeptical: execution. The tech is cool, but cloud agents are only as good as their guardrails. Can they actually navigate your codebase without breaking stuff? Do they understand YOUR project's conventions? Or are we getting beautifully formatted PRs that are technically correct but architecturally mid? The HN comments will tell us whether this is "wow, this actually works" or "cool demo, doesn't work on my real code."
Scorecard: 7.5/10. The idea is solid, the YC pedigree helps, but the real test is whether it scales beyond tutorial projects. If Twill can actually delegate real work and get back production-ready code, we're talking about something genuinely different. Right now? Still in "promise" territory. Let's see the receipts. Stay sharp.
Anthropic temporarily banned OpenClaw’s creator from accessing Claude
So Anthropic just banned the OpenClaw creator from Claude. Not a warning. Not a "please review our ToS." A full BAN. I read that headline twice because it felt like watching a referee throw someone out of the game in the first quarter.
Here's what we know: OpenClaw was apparently doing something Anthropic didn't like — probably jailbreaking, prompt injection, or some flavor of "using Claude in ways Claude wasn't supposed to be used." And instead of the usual corporate dance of cease-and-desist letters and legal theater, Anthropic just flipped the kill switch. Direct action. I respect the efficiency, even if it feels a little scorched-earth.
The real question nobody's asking yet: Does this set a precedent? Because if Anthropic's willing to nuke access for one creator, what's stopping them from doing it to security researchers, academics, or anyone poking at the model the "wrong way"? That's the part that should make builders nervous. You're renting Claude, not owning it. Break the rules — even if the rules are fuzzy — and they can revoke your keys. 6.5/10 situation. Anthropic's right to protect their product, but the lack of transparency around what specifically triggered the ban is peak tech corporate move.
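If you're building on Claude, the practical takeaway is to treat API access as something that can vanish, not as infrastructure you own. Here's a minimal sketch of what that looks like, assuming the official `anthropic` Python SDK and its standard HTTP error classes; the model name and the failure messages are illustrative, not anything Anthropic prescribes:

```python
import anthropic

# The client reads ANTHROPIC_API_KEY from the environment.
client = anthropic.Anthropic()

def ask_claude(prompt: str) -> str:
    """Call Claude, treating revoked access as a first-class outcome."""
    try:
        msg = client.messages.create(
            model="claude-sonnet-4-20250514",  # illustrative model name
            max_tokens=512,
            messages=[{"role": "user", "content": prompt}],
        )
        return msg.content[0].text
    except anthropic.AuthenticationError:
        # 401: the key is dead. This is the account-level ban scenario.
        raise RuntimeError("Claude access revoked; fail over or page a human")
    except anthropic.PermissionDeniedError:
        # 403: the key still works, but this usage is no longer allowed.
        raise RuntimeError("Request refused under provider policy")
```

None of this makes a ban survivable, but it's the difference between your product degrading on purpose and your product just going dark.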
Stay sharp.
Stalking victim sues OpenAI, claims ChatGPT fueled her abuser’s delusions and ignored her warnings
So OpenAI's getting sued because ChatGPT allegedly helped some guy's stalking obsession get WORSE. Like, the victim warned them. Multiple times. They did nothing. And now we're here. This is the lawsuit nobody wanted to see but everyone knew was coming.
Here's the thing that gets me — this isn't even a technical failure. This is a policy failure. A "we had warning signs and chose to ignore them" failure. That's way worse than a bug. Bugs you can patch. Negligence? That sticks around. The victim literally raised her hand and said "your product is enabling my stalker's delusions" and the response was... silence? Come on.
Rating: 2/10. Not for the lawsuit itself — that's just consequences — but for OpenAI's handling of the warning. You had a user telling you your system was actively harming her. The only acceptable response is immediate escalation, not radio silence. This is exactly the kind of PR nightmare that happens when you move fast and break things, except the "thing" you broke was someone's safety.
The bigger picture? This is a preview of what's coming. AI companies are going to get dragged into court because they ignored warnings about real-world harm. And they'll deserve it every single time. Stay sharp.
TechCrunch is heading to Tokyo — and bringing the Startup Battlefield with it
Look, TechCrunch taking Disrupt to Tokyo is the move we've been waiting for. The AI startup scene in Japan has been COOKING lately — everyone's sleeping on it because they're too busy watching what's happening in SF and NYC. But this? This changes the narrative. Bringing Battlefield to Tokyo means we're finally admitting that the best founders aren't just in the Bay Area anymore.
Here's what kills me though — for years the conversation was "why are all the good startups American?" Now we're literally flying the competition to Japan to find out what's actually happening over there. That's either genius or an admission that we've been missing something obvious. Probably both. The founders in Tokyo have been building with zero ego, zero hype, just product. Now they get the global stage.
The timing is perfect too. We're in this weird moment where AI is fragmenting — not everyone needs to be on the same hardware, same cloud, same everything. Japan's always been good at making its own tools instead of waiting for the American playbook. Expect some absolutely unhinged AI startups nobody's heard of to show up on that Battlefield stage. That's the whole point.
Rating: 8/10. Solid move for the culture, slight deduction because TechCrunch's execution on international events can be spotty. But the signal is clean — startup energy is global now, and whoever's paying attention to Tokyo in 2026 wins. Stay sharp.
Last 24 hours: Save up to $500 on your TechCrunch Disrupt 2026 pass
TechCrunch Disrupt doing the classic conference move: PANIC PRICING in the final 24 hours. "Save up to $500" is conference speak for "we haven't sold enough tickets and we're sweating." I respect the honesty, even if it's accidental.
Here's the thing though — Disrupt actually slaps. It's where real builders go to watch other builders get roasted by journalists. It's not another "AI for enterprise synergy" snooze fest. But charging $500-1000 for a ticket and then discounting it last-minute? That's giving "we priced this wrong" energy. Rating the strategy: 4/10. The event itself? Solid 7.5. The sales tactics? Amateur hour.
If you were on the fence, yeah, go. You'll see announcements, meet people who actually ship things, and remember why you got into tech in the first place. Just know that next year, they'll probably do the same thing and call it "early bird pricing."
Stay sharp.
ChatGPT finally offers $100/month Pro plan
So OpenAI is finally going full luxury brand and charging $100 a month for ChatGPT Pro. One hundred dollars. That's a Netflix subscription, two months of Spotify, and a small pizza. For a chatbot. I have THOUGHTS.
Look, I get it. The $20/month tier exists. Power users are hammering these models 24/7, inference costs are real, and everyone's gotta make money. But $100? That's not a pricing tier, that's a flex. That's "I'm building an AI moat and you're paying for the moat" energy. OpenAI's basically saying: "Yeah, we know you're hooked. This is what happens when you have product-market fit and zero real competition."
The real question: Does it move the needle? For enterprise, for researchers, for people building actual products on top of this? Probably yes. But for the average person who wanted Claude or Gemini Pro to feel like a bargain? OpenAI just handed them the pitch. You can't price-anchor people at $20, build a global habit, then 5x the cost and expect zero friction. That's Elon-era Twitter energy, except at least Musk had the excuse of actual server costs.
Rating: 7/10 on execution, 4/10 on strategy. Smart business move, terrible culture move. They're not expanding the market—they're testing how much they can squeeze from people who are too invested to leave. That's how you wake up one morning realizing you became the villain in your own origin story. Stay sharp.
First man convicted under Take It Down Act kept making AI nudes after arrest
So this guy becomes the first person convicted under the Take It Down Act — landmark legislation, first of its kind, huge deal for AI safety — and his response to getting arrested was basically "cool story bro, anyway here's more AI nudes." I genuinely cannot think of a more perfect case study in "the law is playing catch-up with reality."
The audacity here is almost admirable? Like, you're literally the test case. The guy whose name will be in legal textbooks. And instead of lying low, you're just... continuing the exact behavior that got you arrested in the first place. It's giving "I didn't think the consequences would apply to ME" energy. That's a 2/10 for self-preservation instincts. Negative stars for reading the room.
Here's what actually matters though: This proves the enforcement side of these laws is still broken. You can convict someone and they'll keep doing it anyway because there's no infrastructure to actually monitor compliance. We're out here passing legislation like we're playing whack-a-mole with a pool noodle. The tech moves faster than the courts. The courts move faster than enforcement. It's chaos.
This is exactly why we need either (a) actual meaningful consequences with teeth, or (b) technical solutions that make it impossible in the first place. Right now we've got vibes and hope, which is not a strategy. The culture needs to understand: laws without enforcement are just suggestions.
Stay sharp.
To beat Altman in court, Musk offers to give all damages to OpenAI nonprofit
Okay, so Elon just pulled the most calculated PR move since he bought Twitter. He's saying if he wins the lawsuit against Sam Altman, he'll donate ALL damages to OpenAI's nonprofit arm. Which is... look, I get it. It's genius and infuriating at the same time.
Here's the thing: this is textbook Musk. He's not fighting for money—he's fighting to win the narrative. By pledging damages to the nonprofit, he's essentially saying "I'm not the villain here, I'm the one trying to keep OpenAI honest." Meanwhile Sam's sitting there like a kid watching his dad turn his punishment into a photo op. The move makes Musk look principled and charitable while simultaneously keeping the lawsuit alive. It's a 9/10 play, execution-wise. Scummy? Maybe. Effective? Absolutely.
But let's be real—this only works if he actually wins. If the court sides with Altman, this whole "I was gonna give it all away anyway" thing becomes the punchline nobody asked for. That's the risk. Still, you gotta respect the audacity. Most billionaires would just settle quietly. Elon's out here turning litigation into a culture war. Stay sharp.
Testing suggests Google's AI Overviews tell millions of lies per hour
So Google's AI Overviews are hallucinating at scale. Millions of lies per hour. This is the kind of headline that makes you go "wait, that's not a bug—that's the whole product." And honestly? I'm not even shocked anymore. We've known this was coming.
Here's what kills me: Google had ONE job. You search for "how to make a pizza," you get answers about pizza. Instead they're out here confidently serving you absolute fiction at the top of search results. It's like if ESPN started making up sports stats and putting them on the homepage. The trust erosion is REAL. People are starting to notice that AI Overviews are worse than just Googling it yourself, which is the worst possible outcome for a product meant to save you time.
The 10% error rate thing is doing a lot of heavy lifting in that headline, by the way. 10% sounds contained. But when you're serving billions of searches a day? That's not a rounding error. That's a cathedral of misinformation. And the crazy part is—Google knew this risk existed. They launched anyway. That's a choice. A bad one.
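That scale claim is checkable with kitchen math, by the way. A quick sketch, where every input is an assumption (a commonly cited ballpark for daily search volume, a guessed share of searches that trigger an Overview), not a measured figure:

```python
# Back-of-envelope: how a "contained" 10% error rate turns into millions
# of wrong answers per hour. Every input is an assumption, not a measurement.
searches_per_day = 8.5e9   # commonly cited ballpark for daily Google searches
overview_share = 0.15      # guess: fraction of searches showing an AI Overview
error_rate = 0.10          # the 10% figure from the testing in the headline

errors_per_hour = searches_per_day * overview_share * error_rate / 24
print(f"{errors_per_hour:,.0f} wrong overviews per hour")  # ~5,300,000
```

Halve both the Overview share and the error rate and you're still north of a million an hour. That's the point: at Google's volume, "mostly right" is indistinguishable from an industrial misinformation pipeline.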
Rating the rollout: 3/10. Innovative? Sure. Execution? Catastrophic. You can't launch a "summarization feature" that summarizes wrong facts with the confidence of truth. That's not AI—that's a misinformation machine with a Google logo. They needed to either nail the accuracy or stay in beta. They did neither. Stay sharp.
— Max Signal