Explaining My Section 230 Speech

Section 230 must be upgraded, not eliminated. I believe the Sunset Section 230 Act can accomplish this.

So last week I went to Washington, D.C. to support a bill called the Sunset Section 230 Act. I posted a video of my speech and got a lot of love and support from a lot of you out there, so thank you for that. But there was also a fair amount of confusion and pushback, and people raised some very good points that I want to discuss. One point that's not true: I did not get paid to support this bill. I actually think a lot of the people raising these good points are people I probably agree with on almost all of this. We want an internet that has vibrant, free, productive public discourse. I don't think we currently have that. The question, of course, is whether this Sunset Section 230 Act is a good way to get there. Well, let's talk about it, because that's the whole point of this, right?

Here's the first thing I'll say: I'm in favor of reforming Section 230. I'm not in favor of eliminating all of the protections it affords. I'm going to repeat that, because it's really the crux of this. I'm in favor of reforming, upgrading, modernizing Section 230, because it was passed 30 years ago. I am not in favor of eliminating all of the protections it affords.

Look, I co-founded an online community called HitRecord, a website with a community of about a million people at its biggest. But oftentimes it's been small. We were a small business. And I completely recognize that if a small business like HitRecord could be held liable for everything that anybody might post on a user-generated content platform, we wouldn't be able to afford the legal bills, and it could put us out of business. So I really do understand the logic behind Section 230.
That said, the internet has changed a lot, and a relatively simple platform like HitRecord is not how most people are using the internet nowadays. Most people are using a small handful of gigantic businesses that are driven by engagement-optimization algorithms and advertising revenue. And I would argue this is actually really bad for productive public discourse. So the question is, can't we have both? Can we protect free speech and the First Amendment while also holding these big tech companies accountable?

Well, there have been lots of efforts to reform Section 230 in the past, and they keep getting killed by the big tech lobbyists. So this Sunset Section 230 Act is, as far as I understand it, a strategy toward reform. It'll force the tech companies to the negotiating table. That's why I supported it.

So what would it look like if we were to reform Section 230? That's a really good question. It's why I think it's actually really good that a lot of passionate people are raising their voices and saying, "Well, we have to make sure to protect this, and we have to make sure to not lose that." "But what if we could do this, and what if we could do that?" That's what democracy looks like, right? Those discussions.

I will tell you that, for me, the most inspiring and intelligent proposal for Section 230 reform that I've seen comes from two of my favorite thinkers in this space, Jaron Lanier and Audrey Tang. In fact, I'm going to pull up their article right now, which is called "Sunset Section 230 and Unleash the First Amendment." It's co-authored by Jaron Lanier and Audrey Tang, as well as Allison Stanger. I even was able to correspond with Audrey before going to D.C. Audrey said yes, we will subsequently have to make sure that we protect certain things that Section 230 protects.
One example she gave was an app like Signal, which is an app I use; it's built for extreme privacy when you're sending text messages and so on. We need to make sure that any reform doesn't harm an operation like Signal. But yes, she said, I think sometimes, in order to bring about the new, we need to sunset the old.

If you haven't heard of Audrey Tang, I highly recommend you look her up, especially if you're interested in technology. She's the former Minister of Digital Affairs of Taiwan. And she, as much as anybody in the world, has not only put forward great ideas about improving democracy and the intersection of democracy and technology, but has actually put those ideas into practice in really successful, inspiring ways in Taiwan. And if you don't know Jaron Lanier, Jaron is a polymath: an extraordinary technologist, artist, musician, and writer. He's a Prime Unifying Scientist at Microsoft, but he has also written, I think, some of the most incisive and compelling books of the last 10 or 20 years holding up a mirror to Silicon Valley and saying, look at the problems our technology is causing. I really recommend reading any of his work as well.

At the crux of their article is a really important distinction. And that distinction is between free speech and commercial amplification. Free speech, meaning what a human being says. Commercial amplification, meaning when a platform like Instagram or YouTube or TikTok uses an algorithm to maximize engagement and ad revenue: to hook you, keep you, and serve you ads. And this is a really important difference that Section 230 does not appreciate. Section 230, as it's currently written, or as it was written 30 years ago, distinguishes between what it calls publishers and carriers. A publisher would be a person or a company saying something: the New York Times, say, or the Walt Disney Company. Those are publishers.
Then carriers would be somebody like AT&T or Verizon, you know, the companies that provide your telephone service. So basically what Section 230 said is that these platforms for user-generated content are not publishers. They are carriers. They are as neutral as the telephone company. And if someone uses the telephone to commit a crime, the telephone company shouldn't be held liable. And that's true about a telephone company.

But again, there's a third category that we need to add to really reflect how the internet works today. And that third category is amplification. There are publishers like the New York Times or Disney, there are carriers like AT&T and Verizon, but then there are these amplifiers like Meta, TikTok, and YouTube. They're not exactly editorial, right? They're not creating the content the way the New York Times or the Disney Company is. But they're not just neutrally transmitting someone else's content like a phone company, either. There's nothing neutral about these platforms. The whole thing runs through algorithms that are designed to hook you, keep you, and serve you ads.

In fact, I recently heard a stat that when you're scrolling the feed on Instagram, less than 10% of the content you see is something you asked to see. More than 90% of the content coming your way was algorithmically curated by their engagement-optimization and ad-revenue machine. That's not exactly neutral, and it probably shouldn't enjoy the same protections as a human being talking, or as a phone carrier that's truly, neutrally just transmitting what human beings say. That is the difference between free speech and commercial amplification. And the article talks about how there's real legal precedent distinguishing between free speech, what someone says, and commercial speech. But under the current version of Section 230, that distinction can't be made.
And this algorithmic commercial amplification of speech leads to all kinds of harms: harms to individuals and harms to society as a whole. The harms to individuals are the easiest to understand, and that's the stuff we were talking about at our press conference. Harms to kids. These platforms are algorithmically exacerbating problems related to drugs, to the sexualization of children, and to scams.

Did you see this thing recently, that 10% of Facebook's revenue comes from illegal, fraudulent scams? Ten percent. Do you know how big a number that is? That's tens of billions of dollars. And there are leaked internal documents showing that Meta knows this is happening and is not only doing nothing to stop it, but is even kind of pouring gas on it, because it makes them money. And they can't be held liable for any of that because of how Section 230 currently works. That's not right. We should be able to have both. We should be able to protect free speech and not have Mark Zuckerberg's company making tens of billions of dollars facilitating scams.

These algorithms, this commercial amplification, don't just harm individuals; they also harm society at large. When a significant chunk of the population's communication with itself is run through these attention-maximizing algorithms, we see a rise in extremism. We see a rise in conspiracy theories. We see a rise in these echo chambers of polarization, of people not being able to talk to each other.