Dario’s Choice and Anthropic’s Future

Is Anthropic’s feud with the Pentagon similar to Tim Cook’s with the FBI, or something different altogether?

A decade ago last month, Apple CEO Tim Cook wrote a “Message to Our Customers” explaining why the company refused to build an encryption backdoor after the FBI demanded access to an iPhone during its investigation of the 2015 San Bernardino shooting. In the blog post, published in February 2016, Cook warned that forcing Apple to weaken its security could create a slippery slope, one that might someday make it easier to force the company to build surveillance software and “undermine the very freedoms and liberty our government is meant to protect.” The FBI eventually found another way into the iPhone, but the fight helped solidify Apple’s reputation as a privacy-focused company and built customer trust that has lasted far longer than the brief legal battle.

A decade later, it’s Anthropic instead of Apple on the hot seat. Today, Anthropic CEO Dario Amodei’s refusal to allow the Pentagon to use his AI for domestic surveillance and autonomous warfare (at least contractually) could add a similar halo to Anthropic’s brand. But the situation is far less straightforward for Anthropic than it was for Apple, as the rising AI lab faces consequences that are coming into focus this week.

Anthropic has, by all accounts, had a hellish few days, enduring repercussions Apple never did in its face-off with the FBI. After being publicly attacked by Pentagon officials, the company was targeted in a social media post by President Donald Trump, who deemed it a supply chain risk and ordered every federal agency to stop using its technology. Already, Treasury Secretary Scott Bessent has said the Treasury Department “is terminating all use of Anthropic products.” Within six months, government suppliers must also stop using Anthropic for their government work. Some are considering going further than that, preemptively, to stay in the U.S. government’s good graces.
Should the government go through with its Claude ban (still an open question), it would cost Anthropic hundreds of millions, if not billions, of dollars. Labeling Anthropic a supply chain risk is “attempted corporate murder,” as Trump’s former AI advisor Dean Ball put it.

Claude Rises

Anthropic will be around for a long time, though, and if it was looking to use this moment to assert its values, as Apple did a decade ago, that appears to be working. Soon after the company bucked the Pentagon, supporters chalked “God Loves Anthropic” in front of its San Francisco headquarters, PR and legal experts took its side, Katy Perry posted a screenshot of herself signing up for Claude Pro, and AI professionals broadly came to its defense. “I’ve honestly never had a high opinion of Anthropic before,” Chris Paxton, AI innovation lead at Agility Robotics, wrote on X. “But I think I’ve come around to believing that these folks actually believe what they are saying, and have gained a lot of respect for them.”

Over the weekend, Claude climbed to the top spot in the Apple App Store for the first time, up from No. 42 before its first Super Bowl ad a few weeks ago. Claude’s total free users, meanwhile, have grown more than 60% since January, while paid subscribers have more than doubled this year, according to the company. A chart from AppMagic shows Claude’s global downloads reaching 500,000 in a single day on February 28. Claude’s website also now includes a page showing users how to import memory to Claude from other AI providers, a clear indication it’s looking to capitalize on this moment.

OpenAI Swoops In

As leading models converge on capabilities, the AI sector will differentiate on brand. And on Friday, when OpenAI agreed to a deal similar to the one Anthropic had pursued but ultimately walked away from, it drew a line of delineation between the two companies.
OpenAI stands to make hundreds of millions of dollars from the deal, but protests are already forming in response. Tomorrow, “QuitGPT” organizers are holding a “No Killer Robots, No AI Surveillance” protest outside OpenAI’s headquarters. “Last Thursday, Anthropic refused to let the Department of War sidestep its basic safety guidelines,” the event page says. “A day later, OpenAI swooped in and agreed to let the Pentagon use its AI for killer robots and mass surveillance of American citizens. Protest this CORRUPT deal at OpenAI’s front door.”

Soon, we’ll learn whether Amodei’s dealings with the Pentagon will propel Anthropic forward as it competes with OpenAI, or whether the damage done by Trump’s order sets it back decades. Tim Cook’s experience may be instructive, but following the trend of most AI news lately, this situation is unlike those that came before.