Linus Torvalds Just Made a HUGE Call on AI in Linux

OK so here's what's actually going on: Linus Torvalds — the guy who basically invented Linux and runs the whole operation — just dropped official rules about using AI tools when writing code for the Linux kernel.

And it's... surprisingly chill? But also kind of a warning shot.

What Actually Happened

Linus added a new document to the official Linux kernel documentation that basically says: "Yeah, you can use AI coding assistants like ChatGPT or Copilot when you're writing kernel code. But here's the deal..."

The gist? You're still responsible for everything you submit. Not the AI. You. This is the key part that everyone needs to understand.

Think of it like hiring a contractor to help renovate your house. They can suggest ideas, show you designs, whatever. But if the roof leaks? That's on YOU. You signed off on it. You're liable.

Why This Matters (And It Actually Does)

Linux powers literally everything. Your phone's Android OS? Built on Linux. Most servers? Linux. Cloud infrastructure? Linux. We're talking about the foundation of the modern internet.

So when Linus says "be careful with AI," he's not messing around.

The problem: AI tools hallucinate. They make stuff up. They copy code from training data. Sometimes that code is buggy. Sometimes it's not even licensed properly — which is a NIGHTMARE legally.
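To make the hallucination problem concrete, here's a contrived sketch in kernel-style C. The "safe" allocator in the first comment is deliberately made up, exactly the kind of plausible-sounding function an assistant can dream up out of thin air; the working version below it uses the kernel's real kmalloc() from linux/slab.h and checks for failure, which is the sort of thing you're expected to have verified before hitting send.

    /*
     * What an assistant might confidently suggest. Looks plausible, but
     * kmalloc_safe() is invented for this illustration; it is not a real
     * kernel API, and this line would not compile:
     *
     *         buf = kmalloc_safe(len);
     */

    #include <linux/slab.h>

    /* What the kernel actually provides: */
    static void *alloc_buffer(size_t len)
    {
            void *buf = kmalloc(len, GFP_KERNEL);

            if (!buf)
                    return NULL;    /* kmalloc can fail; always check */

            return buf;
    }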

If someone uses ChatGPT to write a kernel patch, submits it, and it turns out the model spat out somebody else's code without the right license or attribution? That's a problem. A big one.

Here's What The Rules Actually Say

Linus basically laid down four main things:

1. You gotta understand the code. Don't just copy-paste AI output. If you can't explain what it does, don't submit it. Makes sense.

2. You verify it works. Test it. Break it. Make sure it actually does what you think it does. AI isn't magic; it's autocomplete on steroids.

3. You're responsible for licensing. If the AI regurgitates code under an incompatible license, or under no clear license at all, that's YOUR problem now. You need to know where the code came from.

4. Document that you used AI. Be transparent about it. Don't sneak it in like you wrote it yourself.
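What does "document it" look like in practice? Kernel patches already carry provenance in tags at the bottom of the commit message, so that's the natural place for it. The exact tag for AI tools is for the maintainers to settle, so treat this as a hypothetical sketch: the "Assisted-by:" tag, the tool name, and the author are all placeholders. The Signed-off-by line, though, is the real, long-standing Developer's Certificate of Origin tag, and it's the whole point: a human is still the one vouching for the patch.

    subsystem/foo: fix NULL check in example_probe()

    (Your own explanation of what the patch does and why it's
    correct, written and verified by you, not pasted from a bot.)

    Assisted-by: <AI tool name and version>
    Signed-off-by: Jane Hacker <jane@example.org>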

What This Means For Regular People

Honestly? This is good news wrapped in a warning.

The good: It means Linus isn't banning AI from Linux development. He's not a Luddite. He gets that these tools are useful for productivity.

The bad: It means kernel maintainers are about to get VERY strict about code review. They're gonna ask "Did you use AI?" They're gonna look for hallucinations. They're gonna dig into licensing.

The bigger picture: This is how guardrails actually work in tech. Not a blanket ban. Not ignoring the problem. Just "here's how we do this responsibly."

Why It Spread

This blew up on GitHub and tech Twitter because Linux is the biggest open-source project yet to say "OK, here are the actual rules for AI in our codebase."

Everyone else is gonna copy this. You can bet other big projects (PostgreSQL, Python, Mozilla) are watching to see how Linux handles it.

The Vibe Check

What I love about this is that Linus basically said: "We trust developers. But verify." It's not paranoid. It's not anti-AI. It's just... responsible.

It's the equivalent of "sure, use your phone's GPS to navigate the drive. But you're still the driver. Don't blame the map if you crash."


Now you know more than 99% of people. — Sara Plaintext