DealBook: My interview with Dario Amodei
The Anthropic C.E.O. acknowledged that the A.I. spending spree comes with some risks.
DealBook
December 7, 2025

Good morning. Andrew here. On Wednesday, I sat down with Dario Amodei, the C.E.O. of Anthropic, at the DealBook Summit. We talked about everything from the risk of an A.I. bubble to regulation to artificial general intelligence — and a lot more. Below, we’ve got a rundown of some of the highlights from that discussion.

Over the coming days, we’ll be sending you emails with highlights and takeaways from the interviews.

You can also watch all of my conversations from the Summit on YouTube or listen to them as podcasts.

(Was this newsletter forwarded to you? Sign up here.)

Andrew interviewing Dario Amodei, the C.E.O. of Anthropic, at the 2025 DealBook Summit. Michael M. Santiago/Getty Images

What Dario Amodei said at the DealBook Summit

Spending on A.I. now accounts for a huge portion of the United States’ G.D.P. growth. The big question on Wall Street: Does the astronomical level of investment driving that growth actually pencil out?

It depends whether you’re talking about the technology or the economics, Dario Amodei, the C.E.O. of Anthropic, said at the DealBook Summit.

“On the technological side of it, I feel really solid,” he said.

But he also acknowledged the considerable risks that have recently spooked markets: “Even if the technology is really powerful and fulfills all its promises, I think there may be players in the ecosystem who, if they just make a timing error, if they just get it off by a little bit, bad things could happen.”

Amodei, whose company is now valued at $183 billion, spoke at length on the economics of the A.I. boom. He criticized what he called a “YOLO” approach to deal making by some of his competitors and defended the circular deals being made to finance data centers.

The interview, which you can watch here, also addressed A.G.I., the battle over artificial intelligence regulations and the risks of selling advanced chips to China. Here are five highlights.

On A.I. spending

Google, Microsoft, Amazon and Meta are each spending tens of billions of dollars a quarter on infrastructure for artificial intelligence. Anthropic’s main rival, OpenAI, has said it plans to spend at least $1 trillion, and Anthropic recently announced that it plans to spend $50 billion. Amodei said big spending could backfire for some companies:

There’s a real dilemma deriving from uncertainty in how quickly the economic value is going to grow and the lag times on building the data centers that drives it. There’s a genuine dilemma, which we as a company try to manage as responsibly as we can. And then I think there are some players who, you know, who are YOLO, who pull the risk dial too far. And I’m very concerned.

Though he declined to name the companies he considered “YOLO,” the elephant in the room was OpenAI, which he never mentioned by name.

On circular deals

Analysts have questioned deals to finance data centers in which an A.I. start-up like Anthropic raises money from tech giants that then sell products and services back to the start-up. Nvidia is set to make 15 percent of its sales this year from so-called circular deals, according to an estimate by Goldman Sachs.

Amodei defended the deal structure, saying:

One player has capital and has an interest, because they’re selling the chips, and the other player is pretty confident they’ll have the revenue at the right time, but they don’t have $50 billion at hand. So I don’t think there’s anything inappropriate about that in principle. Now, if you start stacking these where they get to huge amounts of money and you’re saying, you know, by 2027 or 2028, I need to make $200 billion a year, then yeah, you can overextend yourself.

On A.G.I.

Amodei said that achieving artificial general intelligence, or A.G.I. — a machine that can do anything the human brain can do — is a matter of scale rather than innovation. He said:

I think scaling is going to get us there. Every once in a while there will be a small modification, you know, so small you may not even read about it.

When should we expect A.G.I.?

There’s no one particular point, right? This is what I’ve said over and over. I’ve never liked these terms — A.G.I., artificial superintelligence — I don’t know what it means. Just like we had an exponential with Moore’s law, chips getting faster and faster until they could do any simple calculation faster than any human, I think the models are just going to get more and more capable at everything.

On selling chips to China

Amodei reiterated his stance that the U.S. should not sell advanced A.I. chips to China. That contradicts the position of one of Anthropic’s partners, Nvidia, which has emphasized that it is essential to maintain access to the Chinese market. He said that it’s a national security issue, not an economic issue, adding:

I think we’re building a growing and singular capability that has singular national security implications, and democracies need to get there first. It is absolutely an imperative.

On A.I. regulation

While other companies in the industry oppose strong A.I. regulation, Amodei has taken the opposite position, which he reiterated at the Summit. He said:

I am concerned that there are some who see this technology as analogous to previous technological revolutions — as being like the internet, as being like telecommunications — where, yes, there are some issues, but the market will figure it out, which I think was maybe a more reasonable view in these previous technological revolutions.

I think those who are closest to A.I. don’t feel this way. If you poll the actual researchers who work on A.I. — not investors who invest in some A.I. application companies, not general tech commentators who, you know, think they know something about A.I. — but the actual people who are building the technology, they’re excited about the potential, but they’re also worried. They’re worried about the national security risks. They’re worried about alignment of the models. They’re worried about the economic impacts of the models.

Amodei argued that, despite his concerns, he’s not a “prophet of doom.”

“I’m incredibly optimistic about the technology — frankly, much more optimistic than some people who describe themselves as boosters of the technology,” he said. “But nothing that powerful doesn’t have a significant number of downsides.”

We hope you’ve enjoyed this newsletter, which is made possible through subscriber support. Subscribe to The New York Times.

Thanks for reading! We’ll see you tomorrow.

We’d like your feedback. Please email thoughts and suggestions to dealbook@nytimes.com.

Andrew Ross Sorkin, Founder/Editor-at-Large, New York @andrewrsorkin
Bernhard Warner, Senior Editor, Rome @BernhardWarner
Sarah Kessler, Deputy Editor, Chicago @sarahfkessler
Michael J. de la Merced, Reporter, London @m_delamerced
Lauren Hirsch, Reporter, New York @LaurenSHirsch
Danielle Kaye, Reporter, New York @danielledkaye
