Ilya Sutskever's OpenAI Deposition

November 6, 2025

A 365-page deposition transcript dropped this week that finally puts some hard evidence behind the November 2023 OpenAI board drama. Ilya Sutskever, OpenAI’s former chief scientist and one of the board members who voted to remove Sam Altman, sat for nearly 10 hours of questioning in the Musk v. Altman lawsuit.

The headline is simple: Sutskever wrote a 52-page memo to OpenAI’s independent board directors that opened with this line: “Sam exhibits a consistent pattern of lying, undermining his execs, and pitting his execs against one another.” When asked what action he thought was appropriate, his answer was one word: “Termination.”

The One-Year Plan

This wasn’t some spontaneous decision. When asked how long he’d been considering Altman’s removal, Sutskever answered: “At least a year.”

He wasn’t actively planning it, he clarified, because “it didn’t seem feasible” with the board composition at the time. But he was waiting. Specifically waiting for “a sequence of rapid departures from the board” that would change the dynamics so “the majority of the board is not obviously friendly with Sam.”

Think about that. The chief scientist of one of the most important AI companies in the world spent over a year building a case, collecting evidence, waiting for the right moment to act.

The Secret Memo

Sutskever was methodical. He collected screenshots from Mira Murati (OpenAI’s CTO), compiled evidence, and sent the memo only to the independent directors - Adam D’Angelo, Helen Toner, and Tasha McCauley. He deliberately excluded Altman because, in his words, if Altman “became aware of these discussions, he would just find a way to make them disappear.”

He used disappearing emails. He kept the whole investigation secret from the CEO he was trying to remove.

And it wasn’t just Altman. Sutskever drafted a similar critical memo about Greg Brockman too. That document still exists somewhere, though his lawyers instructed him not to reveal who has copies beyond his own legal team.

The Evidence Problem

Here’s where things get shaky: almost all of Sutskever’s evidence came from one person - Mira Murati. He never verified it with anyone else.

The claim that Sam was pushed out of Y Combinator for “creating chaos, starting lots of new projects, pitting people against each other”? Came from Mira, who heard it from Brad Lightcap. Sutskever never spoke to Brad.

The allegation that Greg Brockman was “essentially fired from Stripe”? Came from Mira. Sutskever never checked with Greg or anyone at Stripe.

The screenshots showing tensions between Sam and Jason Kwon over GPT-4 Turbo and the Deployment Safety Board? All from Mira. Sutskever never spoke to Jason.

When asked if he thought it was a mistake to rely on secondhand knowledge, Sutskever said: “I think secondhand knowledge can be very useful, but I think that secondhand knowledge is an invitation for further investigation.”

The problem? No further investigation happened.

His memo suggested the board “may want to talk to certain people” like Bob McGrew, Nick Ryder, Peter Welinder, and Diane Yoon. When asked if those suggestions were followed through on, his answer: “I don’t know.” When asked if he had any discussion with the other board members about following through: “No.”

Later in the deposition, when reflecting on the process, Sutskever admitted: “I’ve learned the critical importance of firsthand knowledge for matters like this.”

The Anthropic Merger Nobody Talks About

During that chaotic weekend after Altman’s firing, something happened that’s gotten almost no attention: Helen Toner apparently coordinated a call with Anthropic about merging the two companies - with Anthropic taking over OpenAI’s leadership.

Dario and Daniela Amodei were on the call. Sutskever testified he was “very unhappy about it” and “really did not want OpenAI to merge with Anthropic.”

But the other board members? They were “a lot more supportive.” Helen Toner struck Sutskever as “the most supportive” of the merger. “At the very least, none were unsupportive,” he said.

Think about that timing. The board fires Altman on Friday. By Saturday or Sunday, they’re already in discussions with Anthropic - OpenAI’s direct competitor, founded by former OpenAI employees - about handing over the keys.

The merger talks died quickly due to “practical obstacles” that Anthropic raised, but the fact they happened at all is stunning.

The Helen Toner Problem

The deposition explores Helen Toner’s connections to Anthropic in detail. She was associated with Open Philanthropy, which is connected to Holden Karnofsky, who’s married to Daniela Amodei - Dario’s sister and Anthropic’s co-founder.

In October 2023, just weeks before the firing, Toner published an article criticizing OpenAI and praising Anthropic. Sutskever found it “a strange thing for her to do” and thought it was “not far from obviously inappropriate” for a board member.

He even discussed with Sam whether Helen should be removed from the board at that time. “At least at one point, I expressed support” for that, he testified.

So Sutskever supported removing Toner from the board in October for publishing an article that praised Anthropic. Then in November, he worked with that same board member to remove Altman. And then that board member immediately started merger talks with Anthropic.

“Destroying OpenAI Would Be Consistent With The Mission”

In a meeting after the firing, OpenAI executives told the board that if Sam didn’t return, OpenAI would be destroyed, and that would be inconsistent with OpenAI’s mission.

Helen Toner’s response, according to Sutskever: she said something to the effect that destroying the company would be consistent with the mission. When asked to clarify, Sutskever added: “I think she said it even more directly than that.”

Sutskever’s own view at the time? “I could imagine hypothetical extreme circumstances that answer would be ‘Yes’; but at that point in time, the answer was definitely ‘No’ for me.”

Process Failures

When asked to assess the process that preceded the removal, Sutskever admitted: “One thing I can say is that the process was rushed.”

Why? “I think it was rushed because the board was inexperienced” in board matters.

Helen Toner and Tasha McCauley weren’t showing up physically to all board meetings. Toner lived in D.C., at least part of the time. Sutskever didn’t interact with them frequently before the crisis. When asked how familiar they seemed with OpenAI’s operations, his answer: “They seemed to have some familiarity, but it’s hard for me to assess.”

When asked if he viewed them as experts in AI safety, he said he couldn’t really assess that either.

This was the board that decided to fire the CEO of one of the most valuable AI companies in the world.

What Sutskever Expected

The Wall Street Journal article quoted in the deposition said Sutskever “had expected the employees of OpenAI to cheer” when Altman was fired.

Sutskever clarified that he hadn’t expected them to cheer; rather, he “ha[d] not expected them to feel strongly either way.”

For someone who’d been at OpenAI since 2015, who’d helped build the company alongside Altman, that’s a remarkable miscalculation. It shows how isolated the board had become from the actual company they were governing.

Financial Interests Nobody Can Talk About

Sutskever still has an equity stake in OpenAI. The value has increased since he left in May 2024. When Musk’s lawyers tried to get him to quantify it, his attorney blocked the questions repeatedly.

The exchange was tense:

“You’re instructing the witness who has a financial interest in the defendant in the case - and the defendant is being sued for [a] great sum of money here - not to answer the question; is that correct?”

“I’m instructing him not to answer the amount of his financial interest. He can answer whether he has an interest, but he cannot quantify it.”

More interesting: OpenAI is apparently paying his legal fees for this deposition, though Sutskever says he’s “not 100 percent sure.” He hasn’t received any bills. He didn’t discuss the deposition with anyone at OpenAI. But when asked who’s paying, his answer: “I think that’s probably the case… Because I don’t know who else it would be.”

The Investigation Nobody Trusts

After the chaos, a special committee was formed with Bret Taylor and Larry Summers to investigate. They hired WilmerHale. Sutskever was interviewed.

When asked if he had any reason to doubt the independence of Taylor and Summers, his answer: “Nothing to my knowledge.”

But when asked if he had any reason to question the integrity of the investigation itself, his answer changed: “At this point, I was too removed from those procedures” to evaluate it one way or another.

The Deposition Battle

The deposition itself became a war. It ran from 10:19 AM to 8:07 PM - nearly ten hours of testimony.

Sutskever’s attorney objected constantly, instructed him not to answer key questions about his financial interests, and repeatedly told Musk’s team they were “harassing the witness” and “wasting time” with questions about the termination process.

By 7:47 PM, the lawyers were yelling at each other. The court reporter had to note for the record that “it cannot be taken due to simultaneous cross-talk” and asked counsel to “continue with decorum in a professional manner.”

At one point:

“I’m tired of being told that I’m talking too much.”

“Well, you are.”

“Check yourself.”

The deposition ended with Musk’s lawyers declaring: “We have issues… in terms of the deposition remaining open because there’s documents that have not been produced to us. We have the issue of the Brockman report, which we learned of today, and you possess it, and it’s not been produced… We’re not done.”

Sutskever’s attorneys refused to bring him back.

Defense counsel also noted for the record that plaintiffs had pre-highlighted many of the exhibits shown to Sutskever, which they objected to as potentially leading.

What This Actually Shows

The full deposition paints a picture of a board that:

  • Relied heavily on secondhand information from a single source
  • Never verified key allegations directly
  • Was inexperienced and rushed the process
  • Had potential conflicts of interest through Anthropic connections
  • Immediately started merger talks with a competitor after firing the CEO
  • Expected employees to be indifferent to their CEO’s removal
  • Couldn’t even show up physically to board meetings consistently

And yet, despite all these problems, Sutskever still believed termination was appropriate. That conviction, held for at least a year, based on a 52-page memo of evidence he carefully compiled, is what makes this so fascinating.

When asked his opinion on who should be in charge of AGI, Sutskever said: “Right now, my view is that, with very few exceptions, most likely a person who is going to be in charge is going to be very good with the way of power. And it will be a lot like choosing between different politicians.”

He added: “I think it’s very hard for someone who would be described as a saint to make it. I think it’s worth trying. I just think it’s like choosing between different politicians. Who is going to be the head of the state?”

That’s the philosophical framework behind all of this. Sutskever didn’t think Altman was a saint. He thought the position would naturally attract people skilled in the ways of power. And he decided that person shouldn’t be Sam Altman.

The Bottom Line

We all watched the board drama play out in real time in November 2023. We saw Altman get fired, the employee revolt, the weekend of chaos, and his eventual return. But this deposition shows us what happened behind closed doors in the months leading up to it.

The independent directors didn’t just wake up one day and decide to fire the CEO. They asked Sutskever to document his concerns. They collected evidence. They deliberated.

Either Sutskever saw something real that justified all of this, or he convinced himself of a narrative that fell apart under scrutiny. The deposition doesn’t answer which. But it does show us how one of the smartest people in AI can be catastrophically wrong about how organizations actually work.

The receipts are here. The process was flawed. The board was inexperienced. The evidence was secondhand. Anthropic was circling. The outcome was chaos.

And we’re only just starting to understand what actually happened in those rooms.

Discovery is one hell of a thing.