The Godfather of AI: Jensen Huang's Origin Story

December 4, 2025

I just watched Jensen Huang sit down with Joe Rogan for over two hours. The story of how NVIDIA went from making 3D graphics chips for video games to powering the entire AI revolution is wild.

This isn’t another tech CEO talking about quarterly earnings. Jensen’s story is about bet-after-bet reinvention, near-death moments, and building things nobody thought could exist.

From Reform School to Silicon Valley

The origin story nobody talks about: Jensen came to the US as a kid - born in Taiwan, by way of Thailand - and got sent to Oneida Baptist Institute in Kentucky. His family thought it was a prep school. It was a reform school. The kind of place where kids who’d been in trouble ended up.

Fast-forward two decades and that kid founded NVIDIA in 1993. Started with a small team making 3D graphics chips for PCs. Silicon Graphics was the giant then - massive workstations that cost hundreds of thousands of dollars and could render incredible 3D graphics.

Jensen’s vision was simple: bring that power to everyone. Make a personal version of Silicon Graphics regular people could afford. Build chips for video games - Quake, Doom, Unreal - make the graphics as realistic as possible.

The first decade of NVIDIA was just that. Making the best graphics chips they could. Fighting to survive. Multiple times they almost went out of business. Chips that didn’t work. Products that didn’t sell. Each time they had to reinvent themselves or die.

The Transformation That Changed Everything

Around the second decade, Jensen made what he calls “one of the most important decisions” - transform NVIDIA from a graphics company into an accelerated computing company.

Here’s what accelerated computing means: you take a regular CPU (the general-purpose processor in every computer) and add specialization to it. In NVIDIA’s case, that specialization was the GPU - the Graphics Processing Unit that started as a way to render 3D graphics but turned out to be incredibly good at parallel computing.

They created CUDA. Made it possible to use the GPU not just for graphics but for any kind of computing that could be done in parallel. Physics simulations. Scientific computing. Anything that needed massive computational horsepower.
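
To make the parallel-computing idea concrete, here’s a minimal sketch - plain Python with NumPy, not actual CUDA code. The point is the shape of the work: when the same operation applies independently to every element of a huge array, all of those operations can run at once, which is exactly what a GPU kernel exploits.

```python
import numpy as np

# Illustrative sketch of the idea CUDA generalizes.
# NumPy's vectorized operation stands in for a GPU kernel here -
# this runs on the CPU, but the structure of the work is the same.

def scale_one_at_a_time(xs, a):
    # Element by element, the way a single scalar core works.
    return [a * x for x in xs]

def scale_all_at_once(xs, a):
    # One operation over the whole array - the "kernel" view:
    # every element is independent, so all of them can run in parallel.
    return a * xs

xs = np.arange(8, dtype=np.float64)
assert np.allclose(scale_one_at_a_time(xs, 2.0), scale_all_at_once(xs, 2.0))
```

Physics simulations and scientific computing fit this mold because they apply the same update rule to millions of particles or grid cells at once.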

At the time, all of NVIDIA’s revenue came from graphics. GeForce chips. Quadro chips. They were the leading 3D graphics company in the world. Jensen looked at that and thought: if we don’t transform into something else, we’re going to get left behind.

The risk was enormous. Betting the company on this idea that GPUs could be more than graphics processors.

The Moment AI and GPUs Collided

Around 2012-2013, researchers cracked a new way of doing neural networks called deep learning. The way they made it work was by running the training on NVIDIA’s GPUs - most famously AlexNet, the 2012 image-recognition network out of Geoffrey Hinton’s lab.

Everything took off.

All the work NVIDIA had done in accelerated computing - the decade-plus of investment in CUDA, in parallel computing, in building this entire software stack - became the engine of AI.

Jensen started working on this in 2006-2007. For over a decade, they were investing in AI computing before anybody noticed. Before most people even understood what neural networks were.

Now the whole world knows GPU computing. When people hear “GPU,” they think “AI processor.” But it started as a video game thing. Making graphics for Quake look better.

The Scale of What We’re Building Now

The technical details Jensen drops in this conversation are staggering.

The chip fabs - the factories that manufacture these processors - cost tens of billions of dollars. The most expensive factories ever built. Inside them are machines from companies like ASML that cost hundreds of millions of dollars each. Printing features on silicon wafers that are just a few nanometers wide.

The supply chain is insanely complex. Design companies. EDA tools. The fabs themselves. Packaging. Substrates. Every piece has to come together perfectly or nothing works.

It takes years. Five years minimum to build a fab, sometimes longer. You have to plan it years in advance. NVIDIA is designing chips today for AI models that haven’t been invented yet.

Blackwell is their next-generation architecture - more compute, more memory, more bandwidth, all designed for large language models and generative AI. They already have several generations in development beyond that.

The Competition Question

Joe asked Jensen about competitors - AMD, Intel, others trying to build similar chips. Jensen’s answer was revealing: “I’m worried about all of them.”

But here’s the thing - even if someone builds a chip with similar performance, they don’t have the software stack. They don’t have CUDA. They don’t have the entire ecosystem NVIDIA has built over decades.

That’s the moat. Not just the hardware. The entire platform.

What AI Is (And Why GPUs Run It)

Jensen’s explanation of why GPUs and AI are so connected is the clearest I’ve heard: AI is massive amounts of matrix multiplications. GPUs are exceptional at that.

NVIDIA built GPUs that are even better at it. Built software that makes it easy for developers to use them.

Now you have these large language models with hundreds of billions of parameters. You can ask them questions, have them write code, summarize documents, translate languages. All running on NVIDIA’s GPUs.
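
Jensen’s "matrix multiplications" point can be shown in a few lines. This is a toy forward pass in NumPy with made-up layer sizes - not any real model - just to show that the heavy lifting inside a neural network is matmuls:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network. The sizes are invented for illustration;
# real LLMs chain thousands of these with billions of weights.
x  = rng.standard_normal((1, 512))      # one input embedding
W1 = rng.standard_normal((512, 2048))   # layer 1 weights
W2 = rng.standard_normal((2048, 512))   # layer 2 weights

h = np.maximum(x @ W1, 0.0)   # matrix multiply + ReLU
y = h @ W2                    # matrix multiply

print(y.shape)  # (1, 512)
```

Everything expensive above is an `@` - a matrix multiplication - which is precisely the operation GPUs were built to do in bulk.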

The scale is almost incomprehensible. Training these models requires enormous computational resources. The costs run into tens of millions of dollars for a single training run.
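
A rough back-of-envelope shows the scale, using the common rule of thumb from the scaling-laws literature that training costs about 6 FLOPs per parameter per training token. Every number below is an illustrative assumption, not a figure from the interview:

```python
# Back-of-envelope training cost. Rule of thumb: ~6 FLOPs per
# parameter per token. All inputs are hypothetical illustrations.

params = 70e9            # a 70B-parameter model (assumed)
tokens = 1e12            # 1 trillion training tokens (assumed)
flops = 6 * params * tokens          # total training compute

gpu_flops_per_s = 1e15               # ~1 PFLOP/s sustained per GPU (assumed)
n_gpus = 1000
seconds = flops / (gpu_flops_per_s * n_gpus)

print(f"{flops:.1e} FLOPs, ~{seconds / 86_400:.0f} days on {n_gpus} GPUs")
```

Even with these generous assumptions you land in the 10^23-FLOP range - weeks of wall-clock time on a thousand GPUs - which is how single training runs end up costing tens of millions of dollars.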

The Energy Problem

Jensen kept coming back to energy. Almost everything in computing is becoming energy-constrained.

This is why accelerated computing matters - you can do the same computation with a fraction of the energy if you use specialized processors instead of just general-purpose CPUs.

It’s not just about making AI faster. It’s about making it possible at all. If we tried to train these models on CPUs alone, the energy requirements would be astronomical.
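
A toy calculation shows why joules-per-operation is the metric that matters. The energy figures here are hypothetical orders of magnitude for illustration only, not measured numbers:

```python
# Same computation, very different energy bills. The joules-per-FLOP
# values are hypothetical orders of magnitude, for illustration only.

total_flops = 1e23               # size of a big training run (assumed)
cpu_j_per_flop = 100e-12         # ~100 pJ/FLOP, general-purpose CPU (assumed)
gpu_j_per_flop = 1e-12           # ~1 pJ/FLOP, specialized accelerator (assumed)

cpu_gwh = total_flops * cpu_j_per_flop / 3.6e12   # 1 GWh = 3.6e12 joules
gpu_gwh = total_flops * gpu_j_per_flop / 3.6e12

print(f"CPU: {cpu_gwh:.1f} GWh vs accelerator: {gpu_gwh:.3f} GWh "
      f"({cpu_gwh / gpu_gwh:.0f}x difference)")
```

Whatever the exact per-operation numbers are, a two-orders-of-magnitude gap is the difference between a training run a data center can power and one it can’t.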

Reinvention as Survival

What struck me most about this conversation was how many times NVIDIA almost didn’t make it. Multiple near-death moments. Multiple complete reinventions of the company.

Graphics company. Accelerated computing company. AI computing company.

Each transformation required betting everything on a new vision. Each time, most people thought they were crazy.

When they started talking about deep learning in the early days, people thought it was a science project. Why are you putting all this money into AI? Who’s going to use this?

NVIDIA believed that this way of doing computing was going to be transformative. Now everybody can see it.

What Comes Next

Here’s what keeps me up at night: we’re just getting started.

The chips being designed today are for AI models that don’t exist yet. Models that will be exponentially more powerful than what we have now. Models that will do things we can’t even imagine yet.

Jensen mentioned that OpenAI is working on things they haven’t announced. Every major tech company is in this race. The competition is global - US, China, Europe, everyone trying to build the next generation of AI.

The infrastructure being built right now - the fabs, the data centers, the energy systems - all of it is for a future we’re creating in real-time.

The Origin Story That Matters

There’s a throughline in Jensen’s story that matters more than any technical detail: a company that starts in 1993 making 3D graphics chips for video games has no business being one of the most important computing companies in the world.

The only way that happens is through relentless reinvention. Taking risk after risk after risk. Believing you can create what other people can’t.

From a little 3D graphics startup, over 30 years, they built the GPU, CUDA, AI computing, the entire stack that powers modern AI.

It’s a story of a lot of people who worked very hard and took a lot of risks. It’s also a story of vision - seeing what computing could become and building toward it for over a decade before anyone else understood what was happening.

What This Means for the Rest of Us

The implications are massive. We’re living through a fundamental transformation in how computing works. How we interact with technology. How we solve problems.

NVIDIA isn’t just building faster chips. They’re building the infrastructure for a new era of computing. Doing it by looking years into the future and building for things that don’t exist yet.

The next time you use ChatGPT or any other AI tool, think about the decades of work that made it possible. The graphics chips designed to make Quake look better. The decision to bet everything on parallel computing. The years of investment in CUDA before anyone cared.

That’s how paradigm shifts happen. Not overnight. Through decades of work by people crazy enough to believe in a vision before anyone else can see it.