DealBook: Musk’s “Terminator” fears
Also, Trump’s latest tariff setback.
DealBook
May 8, 2026

Good morning. Andrew here. We’ve got an inside-the-courtroom report by Mike Isaac on the Musk v. OpenAI trial. I was fascinated to hear the diary entries of Greg Brockman, an OpenAI co-founder who testified this week, read aloud.

Much of Brockman’s testimony focused on his musings about the huge financial stakes of converting OpenAI from a nonprofit to a for-profit enterprise. Brockman also seemed to know that the process would probably lead to a battle with Musk, OpenAI’s initial financial backer. “Can’t see us turning this into a for-profit without a very nasty fight,” he wrote, adding: “It’d be wrong to steal the nonprofit from him. That’d be pretty morally bankrupt.”

Brockman suggested that Musk would have a reasonable argument, writing: “He’s really not an idiot. His story will correctly be that we weren’t honest with him in the end about still wanting to do the for-profit just without him.” More below.

Outside the federal courthouse in Oakland, Calif., last week during the Elon Musk v. OpenAI trial. Josh Edelson/Agence France-Presse — Getty Images

Three takeaways from the big A.I. trial

At its core, the trial over Elon Musk’s lawsuit against OpenAI is a complicated, slightly navel-gazing legal battle over nonprofit contract law. It also happens to feature many of the main characters in the high-stakes race to develop superior artificial intelligence tools.

Two weeks in, the jury trial has provided moments of drama and taken some interesting twists, reports Mike Isaac, who is covering the proceedings for The Times at the federal courthouse in Oakland, Calif.

The context: Musk is suing OpenAI and Microsoft, the A.I. company’s biggest outside investor, for $150 billion. He claims that Sam Altman, an OpenAI co-founder and now its C.E.O., defrauded him out of millions by changing OpenAI from a nonprofit to a for-profit company.

Musk already testified. Altman and Microsoft’s C.E.O., Satya Nadella, are set to take the stand in the next two weeks.

A few big themes have emerged:

OpenAI’s internal drama was worse than we thought. The company, which is valued at $852 billion, has seen a revolving door of executives over the past three years. Altman was fired by his board but quickly reinstated in late 2023, an incident that employees refer to as “the blip.”

The legal discovery process has yielded evidence, including text messages, emails and sworn depositions, showing that top OpenAI executives had long-simmering, dysfunctional relationships that were kept private as the company grew more popular.

Mira Murati, OpenAI’s former chief technology officer, testified that she felt Altman had “undermined” her, and that she had a strained relationship with Greg Brockman, OpenAI’s president. Murati left the company in 2024.

Musk is trying to make the case that a “Terminator” future lies ahead. Tech companies are often used to arguing cases before judges, who largely focus on the evidence and how closely the arguments hew to established case law.

But in a jury trial, narrative matters. And the story line Musk is spinning is that he, not OpenAI, is primarily concerned with the future of the human race, which can be saved only if OpenAI is stopped.

“We want to be in a Gene Roddenberry outcome like ‘Star Trek,’” Musk testified last week, “not so much a James Cameron movie like ‘Terminator.’”

Judge Yvonne Gonzalez Rogers has warned Musk that he needs to knock it off with the theatrics and his attempts to make the case about “human extinction.” But it could still play well with the jury.

OpenAI is painting Musk as an unreliable narrator. In some ways, OpenAI’s lawyers — including Bill Savitt of Wachtell, Lipton, Rosen & Katz, who successfully forced Musk to follow through on the deal to buy Twitter — have a very good case against Musk. The billionaire left the company in 2018, stopped providing funding and ultimately washed his hands of his association with it.

And OpenAI’s lawyers point out that Musk waited six years to file his lawsuit, doing so around the time that OpenAI was exploding in popularity and he was developing a competing A.I. company.

To hammer home the point that this is not a case of a billionaire’s saving humanity, OpenAI pointed to emails and witness testimony that paint Musk as someone plotting to stay connected to OpenAI through intermediaries like Shivon Zilis, the mother of four of his children and a former OpenAI board member.

OpenAI’s counsel also revealed emails discussing potential equity stakes in the company for Musk and others, which would undercut Musk’s claims that he always wanted OpenAI to be a nonprofit.

Brockman, the OpenAI president, described Musk as physically menacing. During one dispute about equity, “I actually thought he was going to hit me,” Brockman testified.

HERE’S WHAT’S HAPPENING

Oil prices remain volatile amid uncertainty over the end of the war with Iran. Brent crude, the international benchmark, is trading around $100 a barrel today after the U.S. and Iran exchanged fire, though President Trump said the cease-fire remained intact.

What to expect from today’s jobs report. Economists on average predict that the latest data from the Labor Department will show that just 55,000 jobs were created in April, enough to keep the unemployment rate at 4.3 percent. Wall Street will be watching for what the numbers say about the labor market and how that could influence the Fed’s interest rate policy.

Anthropic reportedly moves closer to a fund-raising mega-round. The artificial intelligence lab is weighing a potential round that could raise $50 billion and carry a valuation of $900 billion, according to The Financial Times. Separately, here’s how Mythos, Anthropic’s new model, overturned the White House’s A.I. strategy, according to The Wall Street Journal.

Schools largely regain access to a crucial software tool after a cyberattack. Canvas, a platform widely used by colleges and universities in North America, said that most of its services were available after being taken offline for hours yesterday. But a hacking group claiming responsibility said it had retrieved data from more than 275 million people across nearly 9,000 schools.

President Trump’s protectionist trade policy just took another legal hit. Anna Rose Layden for The New York Times

An ill-timed trade defeat for Trump

The latest blow to President Trump’s tariff policy couldn’t have come at a worse time.

The Court of International Trade ruled yesterday that a 10 percent duty on most imports, imposed in February after the Supreme Court invalidated a previous round of levies, was illegal. The decision could weaken Trump’s negotiating position ahead of a summit with Xi Jinping, China’s leader.

What happened: The court rejected the administration’s justification for the new levies, which lay in a never-before-used provision from the Trade Act of 1974, known as Section 122.

The ruling gave tariff protections only to the plaintiffs: the state of Washington, a spice company and a toy business. But legal experts think it could open the door for other companies to sue, and potentially apply for reimbursements of these levies.

(Worth noting: The case was brought by the Liberty Justice Center, a legal group that represents small businesses and was a big part of the Supreme Court case that overturned the earlier tariffs.)

Trump signaled he wasn’t giving up on tariffs. “We get one ruling, and we do it a different way,” he told reporters. One possibility: imposing levies on dozens of countries that the administration says haven’t outlawed imports of goods made with forced labor.

Where does this leave Trump’s trade war? The president’s levies initially drove up the effective U.S. tariff rate to its highest point in a century.

But importers have started getting reimbursements for the $166 billion in duties that the Supreme Court overturned. FedEx and Costco were among the more than 3,000 businesses that sued for refunds.

Levies remain a sticking point in trade talks. A truce on tariffs and export controls for things like rare earths is expected to be high on the agenda for Trump’s China trip next week.

And the president is irate that the E.U. still hasn’t ratified a framework trade deal struck last year.


Talking A.I. with the C.E.O. of CrowdStrike

Every week, we’re asking a leader how he or she uses artificial intelligence. This week, George Kurtz, who leads the cybersecurity company CrowdStrike, talked with Sarah Kessler about how Anthropic’s Mythos and other new A.I. models have changed the cybersecurity landscape. The interview has been condensed and edited.

How do you personally use A.I.?

One thing that has been useful is that I created a skill for my speeches, PowerPoints and public speaking. So I essentially trained it on all my prior speeches, earnings reports and writing so that it actually creates things in my voice and my style.

CrowdStrike is one of a handful of companies that have access to Mythos and OpenAI’s competitor, Cyber. How have these models changed cybersecurity threats?

They’re able to go back through over 30 years of code and quickly identify mistakes the original programmers made. And they’re not only able to find one of those, but can also find multiples that they can chain together to create a high-risk vulnerability. You may have a few small mistakes. But when you put them together, it’s a really big mistake.

There are going to be more attacks. They’re going to come faster. And the defenders are going to have less time to react to them. The whole world is going to be in this very quick patching cycle so that the adversaries don’t take advantage of these vulnerabilities when they come out.

How many patches will be needed?

Hundreds of thousands, if not millions, of patches.

Dario Amodei, the C.E.O. of Anthropic, said recently that we have a six- to 12-month window to patch everything before Chinese A.I. models catch up. Do you think that’s realistic?

I think it’s more like six months. I think the aspiration for many is to get the patching cycle done in six months, but that’s going to be very difficult.

Why?

You have what I call the “patch sound barrier” of actually being able to roll out a patch into a production network.

Think about all the banks, the telecoms and the large infrastructure providers that have, like, 15 minutes of downtime a year. It’s very difficult to just say you’re going to patch everything all at once. There are only so many you can roll out in a production environment without breaking things.

Amodei also said that he thinks the response to Mythos could make “a better world on the other side.”

I think we could look at Mythos as a Y2K moment. We’re galvanizing governments and businesses to say that we have a window where we’re going to find a lot of security flaws and vulnerabilities. And collectively we should be prioritizing these, then racing to get the patches out.

That would dramatically reduce the level of vulnerability and the number of software flaws that are out in the environment today.


THE SPEED READ

Deals

  • Elliott Management is said to have picked Goldman Sachs and JPMorgan Chase to lead a combined I.P.O. in London of the bookstore chains Barnes & Noble and Waterstones. (Bloomberg)
  • Eric Trump and Donald Trump Jr. have reportedly invested in financial vehicles meant to invest in industries favored by the White House, including artificial intelligence and drones. (FT)


Thanks for reading! We’ll see you tomorrow.

We’d like your feedback. Please email thoughts and suggestions to dealbook@nytimes.com.

Andrew Ross Sorkin, Founder/Editor-at-Large, New York @andrewrsorkin
Brian O'Keefe, Managing Editor, New York @brianbokeefe
Bernhard Warner, Senior Editor, Rome @BernhardWarner
Sarah Kessler, Deputy Editor, Chicago @sarahfkessler
Michael J. de la Merced, Reporter, London @m_delamerced
Niko Gallogly, Reporter, New York