Why is Section 230 a Big Deal?

In the wake of the January 6 attack at the Capitol and the subsequent banning of Donald Trump from Facebook, Twitter, YouTube, and (apparently) Pinterest, there has been a renewed national focus on the power various platforms have to control the conversation. Should Twitter be able to ban the now-former president? Should we institute a new Fairness Doctrine, the regulation in force from 1949 to 1987 that required broadcasters to cover controversial issues of public importance in a fair and balanced manner? The answers are complicated. But politics aside (or at least as much to the side as they can be in a column about laws), what is Section 230, why is it important, and why are some people (from many portions of the political spectrum) calling for its repeal or modification? And what does the Fairness Doctrine have to do with anything?

Let’s start with Section 230 itself. Its full name is 47 U.S.C. § 230 (that is, Section 230 of Title 47 of the United States Code, the corpus of all federal laws currently in effect), and it was passed into law as part of the Communications Decency Act of 1996. The heart of the section is 230(c)(1), which states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Let’s break that down a bit. “Interactive computer service” is 1996 legalese for a website or app like Twitter, Amazon, etc. “Information content provider” is anyone who creates something provided through the internet — which includes making an Instagram post, a tweet, a public Google Doc, anything. Basically, Section 230 says that when someone makes a post on social media, the law treats it just as if they had yelled the same information in the middle of the street. The specific social media company they went through has nothing to do with the content, and more critically, cannot be held liable for anything stupid the poster might say.

For example, let’s say I’m competing with John Smith for a job at Google. I, being a terrible person, decide to write a letter to my local newspaper saying, “This guy John Smith stole my work and punched me in the face, and now he’s looking for a job with Google; just thought I’d let them know.” If this letter is somehow published, John Smith could sue both me and the newspaper for libel, because the newspaper, as a publisher, is legally responsible for what it prints. If I were to put the same content in a tweet, however, the only person who could be sued for libel is me — Twitter is immune.

This is a massive deal for social networks and basically any company with user-generated content. Without Section 230, Twitter would have to review about nine thousand tweets per second to make sure none of them are defamatory, inciting violence, advertising illegal activity, and so on and so forth, or risk being sued into oblivion. In other words, without Section 230, Twitter and almost every other public social media website would have to shut down. Perhaps a platform like Facebook could survive if it spent billions on the research or personnel to review that much content, but it would absolutely decimate any upstart company that relies on user-generated content — which is to say, most of the companies that make the internet a vibrant place. 

Now, one might ask: why was Donald Trump, who made Twitter and social media a vital part of his communication strategy, calling for the repeal of the very law that allows Twitter to exist? The answer may lie in a subsequent portion, 230(c)(2), which states, “No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be … objectionable, whether or not such material is constitutionally protected.” Essentially, Twitter cannot be held liable for moderation decisions it makes “in good faith,” a standard broad enough to effectively cover all content moderation.

My personal analysis is that Trump probably thinks of Section 230 only in terms of its moderation provision, and would like to repeal 230(c)(2) so that he can sue Twitter for labeling his tweets and banning him, without having realized that 230(c)(1) exists or that repealing it would cause Twitter to shut down. Populist Republicans have accused Twitter, Facebook, and the like of censoring their point of view or boosting their opponents, and occasionally introduce bills to prevent social media companies from moderating their websites except for explicitly unlawful content (an idea roughly as ridiculous as it sounds) or to ensure the moderation team is explicitly “nonpartisan” through some kind of government certification process. On the other side of the aisle, many on the left have criticized Section 230 for letting platforms get away with harboring hate speech, extremism, and misinformation, particularly in the wake of the January 6 attack. Democratic politicians have proposed a number of reforms, such as requiring hate speech enforcement and calling for platforms to be held liable for “knowingly allow[ing] content on their platforms that promotes and facilitates violence” (in the words of Senator Bernie Sanders).

As we drown in misinformation and hyper-partisan media, some have called for the re-enactment of the Fairness Doctrine, a regulation in force from 1949 to 1987 that required broadcast television networks to promote a “basic standard of fairness” in their content or face sanctions from the Federal Communications Commission (FCC). Reasonable people disagree on whether the doctrine actually helped matters, and for that matter whether the government should be granted that amount of control over speech even if it’s “for the greater good.” But regardless, a new Fairness Doctrine is legally impossible in today’s media landscape. Political speech (and more broadly a news outlet’s right to control what content it publishes) is perhaps the prototypical example of speech protected by the First Amendment. The FCC was able to get around this specifically in the case of broadcast television because the electromagnetic spectrum is a scarce resource under government purview. (That is, you can only divide the spectrum up into so many different frequency ranges before broadcasters start stepping on each other’s toes.)

The Supreme Court, in Red Lion Broadcasting Co. v. Federal Communications Commission, held that given this scarcity, broadcasters’ rights to free speech did not include the right to monopolize their portion of the airwaves (which are effectively government property), in much the same way that your right to free speech does not extend to a right to graffiti a political slogan on a government building. But such scarcity arguments don’t apply to cable television (which does not use the broadcasting spectrum) and certainly do not apply to the internet, with its effectively infinite amount of broadcasting space. As such, any legislative attempt to ensure “fairness” on the internet would be almost instantly doomed to failure, even before we question the wisdom of giving, say, the Trump administration the power to determine what “fair” means.

Section 230, and more generally the question of what obligations tech companies have to regulate the content they host, is a rather complex issue. I don’t profess to know where to draw the line between suppressing extremism and protecting freedom of speech (or how to weigh the burdens such rules place on smaller tech companies), nor whether large social networks should be explicitly neutral in their moderation (and what role, if any, the government should play in ensuring that neutrality). Hopefully this serves as an introduction to internet content moderation, an issue that has only become more salient with various pieces of misinformation flying back and forth across internet platforms in the wake of the election.

If you have any further questions, would like to see a column on a specific topic, or think that I got something wrong, feel free to email me at zrobins2@swarthmore.edu. You can also DM me on Instagram @software.dude.

Some final notes:

  • I discuss social networks for most of this article, as those are the places where Section 230 applies most explicitly, but it also applies to other online venues. For instance, the section also protects news sites with comments sections, and it provides protection for websites like Airbnb in the case of home listings that are fraudulent or that violate local laws on short-term housing.
  • Technically there are a few exemptions to Section 230’s protections, such as intellectual property law (which is further affected by the Digital Millennium Copyright Act and is its own complicated mess) and, more recently, SESTA-FOSTA, a 2018 modification that removed Section 230 protections for content related to prostitution and sex trafficking. The modification passed through Congress by margins of 388-25 and 97-2, and it had its intended effect of pushing sex workers off of online platforms; instead of eliminating the industry, however, it simply drove the trade back offline and onto the streets, increasing crime and making sex workers much less safe. It’s also the reason Craigslist no longer has a Personals section, due to the remote possibility that it might be used for prostitution.
  • Other sources for this column include this article from The Verge.

Zachary Robinson

Zack Robinson '24 is a sophomore from Portland, Oregon, studying computer science and English literature. He enjoys tinkering with technology, epeé fencing, and diving into random Wikipedia rabbit holes.


The Phoenix
