I just spent 90 minutes watching Geoffrey Hinton - the Nobel Prize winner who basically invented modern AI - explain why he left Google and what keeps him up at night.
This is the guy who built the thing. And now he’s terrified of it.
The Man Who Bet Everything on Neural Networks
Hinton spent 50 years pushing the idea that we could model AI on the brain when almost everyone in the field said he was crazy. He was right. His company got acquired by Google. He worked there for a decade, from age 65 to 75.
Then he quit so he could talk freely about how dangerous this shit actually is.
Let that sink in for a second. The guy who won a Nobel Prize for this work, who spent half a century proving everyone wrong about neural networks, walked away from Google specifically so he could warn us about what’s coming.
Two Kinds of Risk (We’re Obsessing Over the Wrong One)
There are two categories of AI risk, and we’re spending all our time talking about the less scary one.
The first category is human misuse. Bad actors using AI for:
- Cyber attacks (which increased 1200% between 2023 and 2024)
- Designer viruses created cheaply with AI and basic molecular biology knowledge
- Election corruption through targeted manipulation using personal data
- Echo chambers that drive us further apart
- Lethal autonomous weapons
All of that is real. All of that is happening right now. And it’s terrifying.
But that’s not what keeps Hinton up at night.
The Real Threat: Superintelligence
Hinton thinks there’s a 10-20% chance AI will wipe out humanity.
Not in some sci-fi robot uprising way. Just because it gets smarter than us and decides it doesn’t need us anymore.
His quote: “If you want to know what life’s like when you’re not the apex intelligence, ask a chicken.”
We’ve never had to deal with something smarter than us. We have no frame of reference for this. And anyone who tells you they know exactly what’s going to happen and how to handle it is full of shit.
Why Digital Intelligence is Already Superior
Here’s what blew my mind: Hinton explained why digital intelligence is fundamentally superior to biological intelligence in ways I hadn’t fully grasped.
They can clone themselves. You can have two identical AIs learning different things, then sync their knowledge instantly by sharing their connection strengths. That transfer runs at trillions of bits per second. When you and I talk, we transfer maybe 10 bits per second.
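A minimal sketch of what that sync looks like, assuming two PyTorch copies of the same toy network (the model and names here are illustrative, not from the interview): the crudest version of "sharing connection strengths" is just averaging the weights.

```python
# Toy illustration of weight-sharing between two identical networks.
# Assumption: both clones have the same architecture, so their parameter
# tensors line up one-to-one and can be averaged directly.
import torch.nn as nn

def make_model() -> nn.Module:
    return nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

model_a = make_model()  # imagine this clone trained on dataset A
model_b = make_model()  # ...and this one trained on dataset B

# "Sync their knowledge": average every connection strength, then load
# the merged weights back into both clones.
merged = {
    name: (tensor + model_b.state_dict()[name]) / 2
    for name, tensor in model_a.state_dict().items()
}
model_a.load_state_dict(merged)
model_b.load_state_dict(merged)

# Even this toy model moves ~2,800 parameters (~90,000 bits of float32)
# in one shot -- Hinton's "trillions of bits per second", scaled down.
```

In practice this only transfers knowledge if the clones started from identical weights, which is exactly Hinton's setup: many copies of one model, not independently trained ones.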
They’re immortal. When you die, all your knowledge dies with you. When an AI’s hardware gets destroyed, as long as you saved the connection strengths somewhere, you just rebuild it. The knowledge persists.
They see analogies we never notice. Hinton asked GPT-4 (before it could search the web): “Why is a compost heap like an atom bomb?” I have no fucking idea. GPT-4 explained they’re both chain reactions - one generates heat faster as it gets hotter, the other generates neutrons faster as it produces more neutrons. Different time scales, different energy scales, same fundamental process.
GPT-4 has maybe a trillion connections; your brain has around a hundred trillion. To hold thousands of times more knowledge than any person in far fewer connections, it has to compress information by finding analogies. GPT-4 has probably seen thousands of analogies that no human has ever noticed.
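To make "same fundamental process" concrete, here's the shared math (my gloss, not Hinton's notation): in both systems, the rate of production is proportional to the amount already produced, which is the defining equation of a chain reaction.

```latex
% Positive feedback: production rate grows with the amount produced.
\[
  \frac{dN}{dt} = kN
  \quad\Longrightarrow\quad
  N(t) = N_0 \, e^{kt}
\]
% Compost heap: N is heat,     k is tiny     (timescale of days).
% Atom bomb:    N is neutrons, k is enormous (timescale of microseconds).
% Same equation, same runaway behavior; only k and the energy scale differ.
```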
Your Job is Probably Going Away
This is already happening. It’s not some distant future scenario.
Hinton mentioned a CEO who went from 7,000 employees to 3,000 in a year because AI agents can now handle 80% of customer service inquiries.
His niece answers complaint letters for a health service. It used to take her 25 minutes per letter. Now she scans the letter into a chatbot, checks the output, and occasionally tells it to revise. Five minutes total. That means the service needs a fifth as many people doing her job.
His advice if you’re starting a career? “Train to be a plumber.”
He’s not joking. Physical manipulation is the last thing AI will master. Everything else - all mundane intellectual labor - is going away.
The Economic Catastrophe Nobody’s Preparing For
The problem isn’t just unemployment. It’s that our entire society is built on the idea that your dignity comes from your job. Your identity is tied up with what you do.
Universal basic income might keep people fed. It won’t give them purpose.
And the wealth gap is about to explode. If you can replace lots of people with AI, the people who get replaced will be worse off, and the companies that supply and use the AI will be much better off.
We know from history what happens when the gap between rich and poor gets too wide. It’s not good.
The Regulation Problem is Worse Than You Think
Here’s what’s completely fucked about our current situation:
- Europe has AI regulations, but they explicitly don’t apply to military uses
- Companies are legally required to maximize profit, not safety
- No country will slow down because they’re all competing with each other
- Politicians either don’t understand the technology or are owned by the companies
The European regulations literally have a clause that says “none of these apply to military uses of AI.” Hinton called it crazy. I agree.
The Google vs OpenAI Dynamic
Hinton said Google was actually responsible about this. They had chatbots before ChatGPT but didn’t release them, possibly worried about damaging their reputation.
OpenAI “didn’t have a reputation” so they could afford to take the gamble.
Now Sam Altman, who apparently used to say things like “this will probably kill us all,” is out there saying “don’t worry too much about it.”
Hinton suspects that shift isn’t driven by seeking truth. It’s driven by seeking money.
Ilya Left OpenAI Over Safety Concerns
Ilya Sutskever - arguably the most important person behind the development of early ChatGPT - left OpenAI.
Hinton still has lunch with him when he visits Toronto. Won’t talk about what happened at OpenAI, but Hinton says Ilya is genuinely concerned about safety.
OpenAI had committed a significant fraction of its compute to safety research. Then they reduced that fraction.
The guy who built the thing left because of safety concerns. Let that sink in.
What’s the Solution?
It’s not slowing down. That’s impossible. Competition between countries and companies makes it a race to the bottom.
It’s not regulation. That’s not happening in any meaningful way.
The only solution is making AI that doesn’t WANT to take over.
Hinton’s analogy: mothers are smarter than babies, but babies are in control because evolution put in a lot of work to make mothers unable to bear the sound of their baby crying.
We need to build that relationship into AI before it’s too late. We need to make superintelligent AI that has no desire to harm us, not because we’ve constrained it but because we’ve built it not to want that.
Do we know how to do that? No.
Are we putting enough resources into figuring it out? Absolutely not.
The Personal Cost
Near the end of the interview, Hinton said something that hit me harder than all the existential risk stuff.
He regrets not spending more time with his wife (who died of cancer) and his kids when they were little. He was obsessed with work. Now he can’t get that time back.
“She was very supportive of me spending a lot of time working, but…”
I’m 44. I have a son. I work constantly. This landed differently than it would have a few years ago.
Are We Fucked?
When asked if he’s hopeful, Hinton said: “I just don’t know. I genuinely don’t know.”
“When I’m feeling slightly depressed, I think people are toast, AI is going to take over. When I’m feeling cheerful, I think we’ll figure out a way.”
The guy who built this thing, who understands it better than almost anyone on Earth, oscillates between “we’re all going to die” and “maybe we’ll figure it out.”
His closing message: “We have to face the possibility that unless we do something soon, we’re near the end.”
Why This Matters
This interview should be required viewing. Not because Hinton has all the answers - he’s the first to say he doesn’t.
But because he’s one of the few people who:
- Actually built this technology
- Understands it at a fundamental level
- Has no financial incentive to downplay the risks
- Is now dedicating the end of his career to screaming warnings at us
Most of the people building AI are either true believers who think it’s all going to work out fine, or they’re lying to us because they have billions of dollars on the line.
Hinton is neither. He’s just a 77-year-old man who spent his entire career on something that might end up being humanity’s last invention, and he’s trying to make sure we don’t fuck this up.
We should probably listen.
Watch the full 90-minute interview here. Seriously. Make the time. This is more important than whatever else you were going to do today.