Here's this week's free edition of Platformer: some thoughts on the Charlie Kirk assassination and how platforms are contributing to our ongoing polarization — and what I think we ought to do about it. Want to kick in a few bucks to support independent journalism? If so, consider upgrading your subscription today. We'll email you all our scoops first, like our recent one about how Grok's porn companion is rated for children 12+ in Apple's App Store. Plus you'll be able to discuss each edition with us in our chatty Discord server, and we'll send you a link to read subscriber-only columns in the RSS reader of your choice.
I.

On March 15, 2019, a 28-year-old Australian man committed a mass shooting at two mosques in Christchurch, New Zealand. He killed 51 people and injured 89 more. Beyond the tragedy of lost lives, the shooting is remembered for the way the shooter used platforms and internet memes in an effort to ensure his attacks had the greatest possible impact. He broadcast the murders on Facebook Live; he promoted the attack on Twitter and 8chan; he sardonically told viewers to subscribe to the popular YouTuber PewDiePie.

In the aftermath of the attack, members of Congress demanded to know how videos of the attack had spread so quickly — and why Facebook, YouTube, Twitter, and other sites had not taken all of them down. "Studies have shown that mass killings inspire copycats — and you must do everything within your power to ensure that the notoriety garnered by a viral video on your platforms does not inspire the next act of violence," Rep. Bennie Thompson, chairman of the House Homeland Security Committee, said at the time.

Pressure from New Zealand elected officials and others led to the Christchurch Call, a collaboration between governments and industry focused on building systems to remove extremist content while preserving a free and open internet. Facebook, Twitter, and Google were all among the companies that signed on to the agreement.

Six years and countless mass killings later, both Congress and platforms have largely given up on the idea that violence can be contained at the platform level. The assassination of right-wing activist Charlie Kirk last week generated millions of views across X, Instagram, Threads, YouTube and Telegram, among other sites. And while some platforms sought to remove many of those videos or place them behind warning screens, various analyses found that their efforts were patchy and easily circumvented.
There was a time not all that long ago when it seemed as if online extremism might be contained through some combination of government regulation and platform-led content moderation. But the right-wing turn against content moderation in the United States on ideological grounds successfully pressured companies to disinvest in systems designed to detect and remove hate speech, bullying, and other harms. And the sheer vastness of the internet, with its many public and private spaces for disturbed and violent people to gather, means that there is always somebody planning to harm others on any platform of any size.

The Christchurch shooting, which my eventual Hard Fork co-host Kevin Roose would call the first internet-native mass killing, introduced us to the notion that a killer's politics might be less left or right than "extremely online" — a nihilistic and largely inscrutable worldview developed in arcane forums and communicated via obscure memes.

There is evidence that the suspect in the Kirk shooting, Tyler Robinson, is aligned with the left. A charging document in his case said his mother told prosecutors that he had increasingly expressed pro-gay and pro-trans sentiments over the past year; he was in a relationship with his roommate, who is transitioning. The document cited a text message from Robinson to his roommate in which he said he wanted to kill Kirk because he "had enough of his hatred."

But the crime also carried the hallmarks of an extremely online shooting — in particular, the bullets that the shooter had etched with obscure memes and catchphrases. (For example: "NoTices Bulge OWO What’s This?", which comes from furry roleplay.) Robinson allegedly texted his roommate that these messages were "mostly a big meme," though he also noted that if he saw the bulge text on "fox new" (sic) "I might have a stroke.” Robinson's actual beliefs and motives remain murky a week later, and they may remain so permanently.
And yet I find myself fixated on the idea of a person who has decided to murder a prominent activist in public and first thinking about what meme they might get a Fox News anchor to say out loud on a broadcast, and how funny that might be, and how many views and reposts it might get. "If you Read This, You Are GAY Lmao," another of the bullets read. You can imagine the shooter hearing the Fox News anchor saying that one out loud, too, and smiling.

II.

Thinking about how many views and reposts you might get is, of course, one of the fundamental drives of the modern era, and it is one at which Charlie Kirk excelled. His debate-me-bro antics introduced highly engaging conflict into the feeds he operated in. He had tens of millions of followers across YouTube, X, TikTok, Instagram and Facebook. And his videos racked up hundreds of millions of views.

I found many of Kirk's views odious, and the tactics of his organization shameful. In 2020 it was revealed that Turning Point Action, a subsidiary of Kirk's Turning Point USA organization, was paying teenagers to promote disinformation about the 2020 election: in effect, creating a domestic troll farm aimed at sowing mistrust in mail-in voting and vaccines. (Facebook later banned the marketing firm behind the effort for violating its policies.) Kirk himself was also briefly suspended from Twitter for posting misinformation about mail-in voting.

What Kirk excelled at was posting what platform nerds call borderline content: posts that flirt with breaking a platform's rules without quite going over. "One of the biggest issues social networks face is that, when left unchecked, people will engage disproportionately with more sensationalist and provocative content," Meta CEO Mark Zuckerberg wrote in 2018. "This is not a new phenomenon. It is widespread on cable news today and has been a staple of tabloids for more than a century. At scale it can undermine the quality of public discourse and lead to polarization.
In our case, it can also degrade the quality of our services." Zuckerberg posted a helpful chart to show what happens when provocateurs post content that very nearly breaks a rule: engagement skyrockets.

The popularity of this tactic may explain, for example, why Kirk speculated (groundlessly) that vaccines may have killed more than 1 million people. It may explain why he said, in 2024, "If I see a Black pilot, I’m going to be like, ‘Boy, I hope he’s qualified.’” It may explain why he promoted the idea that white people are being replaced by nonwhite immigrants — and suggested that Jews may be behind it. In each case, when pressed, he could retreat to safer ground — just asking questions, just a little concerned about whether D.E.I. is going too far, just noting that Jewish donors often fund left-wing causes — and rack up views while waiting for it all to blow over. It almost always did.

On Aug. 27, before the Kirk shooting, researcher Nathan Witkin wrote a must-read blog post titled The Case Against Social Media is Stronger Than You Think. In it, he responds to recent commentary that reached the opposite conclusion, attempting to synthesize an explanation for how social media harms society despite limited empirical evidence for the claim in academic research. He notes that research has shown that social media amplifies the reach of highly emotional content of the sort political influencers favor. He also finds research showing that higher social media usage in a given location is correlated with more extreme political behavior, including protests and hate crimes.

Witkin offers what he calls "the elite radicalization theory" to explain how social media warps society. He notes a 2024 paper showing that political discussions on X and other platforms are increasingly dominated by a small number of accounts that spread the vast majority of information and start the most online conflicts.
Extended exposure to these conflict merchants worsens our overall sense of our political opponents, making us see them as more extreme than most of them are. Witkin writes:

While in the past, survey evidence has indeed shown Americans to be moderate and relatively uninterested in politics, it is possible, and even probable, that social media is changing that. If in response to believing their peers are more politically extreme, people in turn become more extreme in reaction, then Farrell’s ‘malformed collective understanding’ may end up a self-fulfilling prophecy. Perhaps the more extreme content we ingest, the more likely it is we internalize this style of political communication and begin posting extreme content ourselves. There is already some limited evidence to support this possibility.
Kim et. al. (2021) found, for instance, that those exposed to toxic ‘featured’ comments on Facebook posts were more likely to post toxic comments later of their own volition. Brady et. al. (2021) found that Twitter users “conform their outrage expressions to the expressive norms of their social networks,” (p. 1) and become more likely to express outrage when their surrounding social network is itself more extreme (p. 8). And in an as-yet unpublished preprint, Brady et. al. (2025) found that in a simulated Twitter-like environment, “engagement-based algorithms” changed participants’ perception of descriptive and prescriptive norms surrounding “ingroup-aligned, moral and emotional political content,” and ultimately made them more intent on posting this content themselves.
To Witkin, this dynamic explains why some family members seem to tumble down online rabbit holes, radicalizing themselves based on their participation in online forums. And it also explains the outsized roles of influencers like Kirk: "The idea is that social media has empowered a (relatively) small group of political influencers who, in response to the perverse incentives created by attentional negativity bias, have disseminated ideas that make people more angry, fearful, and extremist," he writes.

There is no justification for violence against someone based on their speech. But if Witkin's analysis holds, it suggests that there is real danger in constantly whipping up large crowds into a fearful frenzy. Spend enough time telling a mass audience that they are locked in a life-or-death battle with their political enemies and eventually the more disturbed among them are going to take action. It may not be the action you were expecting.

III.

Nowhere does Witkin's analysis hold more true than it does on X. Twitter was a famously imperfect company, but there was a time when it had teams dedicated to banning people who incite violence, removing and downranking violent videos, and otherwise seeking to lower the temperature of debate.

When Elon Musk bought Twitter and turned it into X, he turned it upside down. Content moderation teams shrank dramatically. Right-wing extremists were welcomed back onto the platform. Its recommendation algorithms began to favor conservatives' accounts (and Musk's) above others. And unlike any other platform owner, Musk loudly performs his politics on X almost every day. He promotes the idea that the right is battling the left for the future of civilization. In the immediate aftermath of Kirk's shooting, when almost nothing was known about the shooter, Musk posted "The left is the party of murder." By Tuesday, he had encouraged his followers to "fight or die" at least five times.
Meanwhile, when I've read X over the past week, I've seen countless tasteless and inflammatory posts about Kirk from the left: celebrating his death, mocking his fans, and generally presenting the most polarized picture of America possible. X remains the platform of choice for most politicos, and continues to play an outsize role in shaping elite perception of our politics. The dark picture of America that takes root there spreads to TikTok, Instagram, and cable news, continuously warping Americans' perceptions of one another.

The vast majority of Americans continue to reject political violence. Civil war is for idiots and losers. But in the dark mirror of social media, we are only ever on the brink of all-out conflict. And the longer we stare at it, the more we risk that fiction turning into reality.

On the podcast this week: Kevin and I share our thoughts on the Kirk assassination and the role that platforms are playing in polarizing the United States. Then, David Yaffe-Bellany stops by to unpack the Times' blockbuster investigation into how the UAE curried favor with the Trump administration by buying $2 billion worth of Trump family cryptocurrency — and two weeks later was granted a significant number of state-of-the-art AI chips. Finally, we take a spin through the week's AI headlines.

Apple | Spotify | Stitcher | Amazon | Google | YouTube

Sponsored

Unknown number calling? It’s not random. Here’s the disturbing truth: scammers don’t pick phone numbers at random. They buy your personal data from data brokers — your phone number, your email, even details about your lifestyle. That’s how they know exactly who to target. Once your data is out there, it’s not just spam calls. It’s phishing, impersonation, and even identity theft. That’s why we recommend Incogni:

- They delete your personal info from everywhere on the web
- They monitor and follow up automatically
- They keep erasing your data as new risks appear

You can’t stop scammers from existing.
But you can make sure they don’t have your information to work with.
Erase your data now — use code PLATFORMER to get 55% off.

Governing

- China said the framework deal for a US-controlled TikTok is a “win-win” and that it will review TikTok’s technology exports and IP licensing. A deal may be announced on Friday. (Reuters)
- A House committee asked the CEOs of Discord, Steam, Twitch and Reddit to testify in a hearing related to the assassination of Charlie Kirk, citing the “radicalization of online forum users.” And what about the owner of X? (David Shepardson / Reuters)
- Anthropic’s refusal to let federal law enforcement agencies use its models for tasks like surveillance of US citizens has drawn hostility from the Trump administration, sources said. (Reed Albergotti / Semafor)
- Scale AI struck a $100 million deal to let the Pentagon acquire AI capabilities and deploy them on top secret networks with the company’s end-to-end products. (Maria Curi / Axios)
- Amazon violated consumer protection law by getting Prime subscribers’ billing information before disclosing terms of service, a judge ruled ahead of the company’s trial in a lawsuit brought by the Federal Trade Commission. (Jody Godoy / Reuters)
- A look at the reported internal turmoil over how the @whitehouse TikTok account should be managed and the flood of comments from anti-Trump influencers after its launch. (Drew Harwell and Dylan Wells / Washington Post)
- Google is partnering with StopNCII to use hashes to identify nonconsensual intimate images in search results and remove them. (Terrence O’Brien / The Verge)
- While several frontier AI models exhibit signs of scheming, models know when they’re being tested and can change their behavior as a result, new research shows. (Radhika Rajkumar / ZDNET)
- China’s internet regulator has reportedly banned the country’s biggest tech companies, including ByteDance and Alibaba, from buying Nvidia’s AI chips. (Zijing Wu, Cheng Leng and Tim Bradshaw / Financial Times)
- China ended an antitrust probe into Google amid US-China talks about TikTok, Nvidia and trade. (Cheng Leng, Zijing Wu and Eleanor Olcott / Financial Times)
- DeepSeek often refuses to help programmers, or provides them with code containing major flaws, when they say they work for groups that the Chinese government disfavors, research shows. (Joseph Menn / Washington Post)
- The UK’s expansion of online speech oversight, attempts to weaken encryption, and experiments using AI to review asylum claims have raised concerns of surveillance overreach. (Adam Satariano and Lizzie Dearden / New York Times)
- Ireland’s Data Protection Commission named former tech industry lobbyist Niamh Sweeney as its third commissioner. (Peter Flanagan / Irish Times)