OpenAI announced a policy shift this week that would allow adult users to have conversations with ChatGPT that might not be safe for work. Bloomberg Businessweek senior reporter Max Chafkin asks, Why now? Plus: Robinhood is thinking about how to appeal to Generation Z, and why the Los Angeles Clippers’ owner is causing headaches at the NBA.

As generative artificial intelligence has exploded in popularity–with executives hectoring employees about the need to “default to AI” and college students increasingly relying on ChatGPT to avoid learning anything–there have been quite a few downsides. These include reports suggesting that a small minority of users have become addicted to, fallen in love with, or been driven crazy by a chatbot. In the past, the companies behind the chatbots, including Anthropic, Google, Meta and OpenAI, have emphasized that harmful uses of their software are rare, while acknowledging that the tendency of chatbots to agree with anything a user says can lead users to terrible places, including withdrawal from human relationships or even suicide. The general message has been that these are new and mysterious technologies that must be managed with caution. “My worst fears are that we cause significant harm to the world,” OpenAI Chief Executive Officer Sam Altman told a Senate hearing in 2023. “If this technology goes wrong, it can go quite wrong.”

Good news, then: This week, Altman said his company had figured out answers to all those thorny questions about chatbots and harm. “We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues,” Altman wrote in a post on X, referring to an earlier version that had limited the software’s sycophancy. “Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.” He announced that the company would release a new version of the software that would be able to “respond in a very human-like way” and “act like a friend.” For adult users, he said, this could include “erotica.” In other words: You’ll soon be able to sext with ChatGPT. (The offering brings OpenAI closer to Grok, Elon Musk’s xAI chatbot, which has made edgy content a core part of its sales pitch.)

Responses to the announcement ranged from outrage to ridicule. “You made ChatGPT ‘pretty restrictive’?” Senator Josh Hawley asked on X. “Is that why it has been recommending kids harm and kill themselves?” Others mocked the new offering as “perv mode.” Altman later clarified that erotica was “just one example” of adult uses of ChatGPT. “But,” he added, “we are not the elected moral police of the world.”

Altman. Photographer: Kyle Grillot/Bloomberg

If that last line feels familiar, it’s because social media companies have used it repeatedly to disclaim responsibility for the harmful ways their services have been used. In 2019, amid outcry over misinformation in political ads, Facebook CEO Mark Zuckerberg staked out a position that sought to ensure his company, now called Meta Platforms Inc., wouldn’t be held responsible for the content it published.
Noting that false news stories sometimes “speak to a deeper truth” in someone’s “lived experience,” Zuckerberg said that Facebook would “need to be careful about restricting that.” He continued: “I don’t think it’s right for a private company to censor politicians or the news in a democracy.” The idea was to frame Facebook as part of society’s essential fabric rather than a privately controlled vehicle for selling digital advertising. Altman is doing something similar, arguing that ChatGPT is an essential service that can cause some unfortunate collateral damage, which we’ll all just have to live with.

Altman loves to speak about his business in the most grandiose terms possible, suggesting that ChatGPT is just a few iterations away from curing cancer, solving global warming and ushering the world into an era of leisure and abundance. Who knows, maybe it will. But at the moment, ChatGPT looks a lot like Facebook–an addictive digital service that promises an endless stream of information and entertainment of questionable value.

The hot new product from OpenAI, naturally, is a social network, Sora, which seems straight out of the worst nightmares of social media critics. The app trades the most attractive thing about social networks–the potential for, you know, social connection–for an endless stream of slop, starring users who allow their likenesses to be manipulated, as well as fictional characters and dead celebrities, who seem to have been added to the app without anyone’s consent. Want to see a video of Stephen Hawking dying in a skateboard accident? Or Martin Luther King Jr. wearing a MAGA hat and making monkey sounds? Sora’s got you. (The company announced Thursday that it had paused the King videos after a complaint from his family, noting that public figures can request to be removed from Sora videos. This is, of course, not the same as asking permission before dumping a person’s likeness into your deepfake machine.)

Why would a company that claims to be on the verge of basically unlimited profit mess around with gross-out videos and porn? There are two ways to look at this. One is that generating large quantities of dumb but clickable content is a great business. That much is undoubtedly true, as attested by the growing profits at Meta and the rising valuation of OnlyFans. The other possible explanation is that OpenAI is a lot further from curing cancer than its investors and executives have led the public to believe. According to leaked financials published by the Financial Times, the company lost $8 billion during the first half of this year, a figure that dwarfs its actual revenue over the same period. In other words, it may not be that Altman doesn’t care about the potential harms his service is causing. It may be that he can’t afford to worry too much.