Technology
|
- Hegseth vs. Anthropic
- Remember that Chrome bid?
- Big Tech digs deeper
- Art of the deal
- AI fear factor
A rift between the Defense Department and Anthropic, and a new AI model to help determine the toxicity of water.
If companies don’t want to sell their technology to the military, nobody is forcing them. But if they do sell, they don’t get to decide exactly how their products are used. That’s the view, more or less, of US officials when it comes to AI. And that’s the tension laid out in today’s story (detailed below) on the latest push and pull between Anthropic and the Trump administration. Anthropic, and perhaps other AI labs, wants to sell to the military while setting limits on how its technology can and can’t be used.

As the tech industry becomes more intertwined with the military, this issue will keep cropping up, especially if armed forces push the limits of AI and there are casualties as a result. But there’s one thing the tech industry isn’t acknowledging as much as it should: The military has been thinking about autonomy for a long time. It’s capable of assessing the risks of new technology. As the Pentagon says in its new AI strategy document, there are risks to not deploying AI, too.

And a heads-up: I’ll be on the ground in Davos next week, talking with tech moguls, world leaders, and business executives about how they are thinking about this year’s theme, A Spirit of Dialogue. I expect AI, much like last year, will be the talk of the town. We’re also bringing back Semafor Davos Daily, our pop-up email briefing that covers all the deal-making, gossipy, productive, and pretentious grandeur of the global village. You can sign up here. And if you’re on the promenade, feel free to check out the live journalism we have happening at Semafor Haus at the Grandhotel Belvédère. More information here.
Hegseth jabs Anthropic over safety

Evelyn Hockstein/Reuters

When Defense Secretary Pete Hegseth announced the Pentagon was adding Grok to its list of generative AI providers, he railed against AI models that “won’t allow you to fight wars.” Hegseth wasn’t just riffing, a person familiar with his thinking said: He was specifically referring to Anthropic, the startup that spun out of OpenAI in an attempt to build safer AI technology.

In recent weeks, tension has built between Anthropic and the military, according to two people with knowledge of the matter, as the Trump administration attempts to adopt new warfighting technology, including the most advanced AI models, more quickly.

From Anthropic’s perspective, the company has a responsibility to ensure its models are not pushed beyond their capabilities, particularly in military actions that could have lethal consequences. From the military’s point of view, though, Anthropic shouldn’t attempt to make the final call on how, exactly, its models are used in warfare. Those decisions should be left to the military, it says, as with any other technology or weapon the Pentagon purchases.

A Defense Department official, speaking on background, said it would only deploy AI models that are “free from ideological constraints that limit lawful military applications. Our warfighters need to have access to the models that provide decision superiority in the battlefield.” Anthropic declined to comment.
|
Behind Perplexity’s bid for Chrome

Rachyl Jones

Jim Watson/AFP via Getty

Perplexity’s $34.5 billion unsolicited bid for Google Chrome last August was widely seen as a publicity stunt. Google had just lost its antitrust lawsuit with the Justice Department, and a US federal judge was considering whether the company should be required to sell off Chrome. It turns out Perplexity never thought the deal would go through unless a judge forced it, the company told me. And there was more to the multibillion-dollar bid than a publicity play.

Perplexity, which pitches itself as a hybrid between a traditional search engine and an AI chatbot, used the bid as a defensive measure to protect its own business and, it says, the state of the internet. Perplexity figured that by bidding for Chrome, it could use the deal terms to signal to the broader market how important it was to maintain the sanctity of Chrome’s open-source software, which also underpins Perplexity’s own AI browser. In the potential war for Chrome, it wanted to set a floor for offers, ensuring the winning bid would have to exceed what it put on the table.
|
A Big Tech movement for minerals

Amazon is buying the first US-mined copper in more than a decade to support its data center buildout, The Wall Street Journal reported, part of a wider trend among hyperscalers of investing in energy and commodities to power their next-generation technologies. The AI boom is spurring demand for metals and minerals globally, and suppliers are eager to cash in: Saudi Arabia’s state-controlled mining company Maaden will invest $110 billion in production over the next 10 years, its CEO told Semafor this week.

Tech companies are looking for creative ways to build and power their data centers, as nuclear power firms, a promising source of future energy, are still ramping up production, and residents living near data centers fear the facilities will raise their electricity bills. Microsoft also pledged this week to cover energy and water costs for its facilities, Alphabet recently bought an energy infrastructure company, and Meta agreed to purchase power directly from nuclear power firms. The moves signal how hard-pressed tech companies are to fund the infrastructure supporting their AI products as energy and tech become increasingly intertwined.
|
Taiwan strikes tech deal with US

Taiwan’s $250 billion investment deal with the US marks a big win for Trump’s ongoing trade negotiations. In exchange for the US reducing tariffs on Taiwanese goods to 15% from 20%, Taiwan’s tech companies agreed to invest in building manufacturing facilities and other projects in the US, the Commerce Department announced Thursday. The US will also exempt companies like chip giant Taiwan Semiconductor Manufacturing Company from paying tariffs altogether (TSMC has already committed $165 billion to expand chipmaking in Arizona). The money will help the US, which essentially coerced a deal by hiking so-called reciprocal tariffs (the constitutionality of which the Supreme Court is expected to rule on soon), bolster a domestic supply chain for chips to help fuel the AI boom. But it also helps diversify TSMC’s production away from Taiwan and China, making it less beholden to the geopolitically shaky region.
|
Firms fear AI more than Trump

Priyanshu Singh/Reuters

Companies see greater risks to their reputation from the misuse of AI than from becoming a political target of the Trump administration, according to a new report shared exclusively with Semafor. The Reputational Risk Index for the fourth quarter of 2025 — produced by PR firm the Global Situation Room, and based on opinions from dozens of executives — found that organizations are more worried about AI-related problems like deepfakes than they are about Trump singling them out for criticism. The sentiment was a shift from the third-quarter report, when the US president was considered the biggest reputational risk for corporate brands, Semafor’s Morgan Chalfant reports. “Artificial intelligence has emerged as the single greatest driver of reputational risk, reshaping how trust is built — or lost — across sectors,” said Isabel Guzman, a former Biden official who chairs the Global Risk Advisory Council, which informs the report.
|
Sarah Rogers, the State Department’s undersecretary for public diplomacy, joins Mixed Signals for a wide-ranging conversation about free speech, tech regulation, and why she’s been testing the patience of some European governments. Max and Ben press her on confronting Europe over X, the Digital Services Act, and online speech, including accusations that she’s carrying water for Elon Musk and the far right. Rogers traces her worldview back to the early internet, Gawker comment sections, and First Amendment litigation, and explains why she sees today’s speech rules as potentially dangerous.
|
Eva Manez/Reuters

Testing the waters. Researchers at Harvard University and Stevens Institute of Technology developed an AI model to identify the toxicity levels of byproducts created when drinking water is disinfected. The idea is that AI-enabled chemical detection can help enhance global safety standards, according to a new paper published Thursday.

Chemicals commonly used to disinfect water, including chlorine and chloramine, react with organic matter naturally found in water to create harmful byproducts, which have been linked to cancer and fetal development issues. Many are regulated by the US Environmental Protection Agency, but not all, according to the researchers, who trained a predictive machine learning model on toxicity data for more than 200 chemicals. Using that data, the model predicted the dangers of more than 1,100 other byproducts, some of which had higher toxicity levels than chemicals for which the EPA has set standards. The findings don’t mean that your average glass of tap water is unsafe, but additional research can help scientists and regulators gain a deeper understanding of potential dangers.
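The paper’s actual model and chemical features aren’t detailed here, but the general recipe it describes (fit a supervised model to chemicals with known toxicity, then score unlabeled byproducts) can be sketched in a few lines. Everything below is hypothetical for illustration: the chemical names, descriptor values, toxicity scores, and the simple nearest-neighbor predictor are invented, not taken from the study.

```python
# Hypothetical sketch of the approach: learn from chemicals with known
# toxicity, then estimate scores for unmeasured disinfection byproducts.
# All names, descriptors, and scores are invented for illustration.
import math

# Training set: descriptor vectors (e.g., molecular weight / 100,
# halogen count) paired with a known toxicity score (higher = worse).
TRAIN = {
    "chloroform":      ((1.19, 3.0), 0.62),
    "bromoform":       ((2.53, 3.0), 0.71),
    "dichloroacetic":  ((1.29, 2.0), 0.55),
    "trichloroacetic": ((1.63, 3.0), 0.49),
}

def predict_toxicity(descriptor, k=2):
    """Average the toxicity of the k training chemicals whose
    descriptor vectors are closest to the query (k-nearest neighbors)."""
    dists = sorted(
        (math.dist(descriptor, vec), tox)
        for vec, tox in TRAIN.values()
    )
    nearest = dists[:k]
    return sum(tox for _, tox in nearest) / len(nearest)

# Score an unmeasured byproduct whose descriptors resemble bromoform's:
# its estimate is pulled toward the toxicity of its nearest neighbors.
score = predict_toxicity((2.4, 3.0))
```

A real pipeline would use richer molecular descriptors and a stronger model than nearest neighbors, but the workflow is the same: known toxicity data in, predicted scores for unregulated byproducts out, flagging candidates whose estimates exceed those of already-regulated chemicals.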