Understanding Section 230 Reform Ahead of 10/28 Big Tech Hearing

October 20, 2020

The Online Freedom and Viewpoint Diversity Act, recently introduced by Commerce Committee Chairman Roger Wicker, R-Miss., Judiciary Committee Chairman Lindsey Graham, R-S.C., and Senator Marsha Blackburn, R-Tenn., would modify Section 230 of the Communications Decency Act to clarify the original intent of the law and hold Big Tech’s content moderation practices to an updated, more transparent standard.

Section 230, enacted in 1996, needs an update to remain consistent with the intent of the statute: to protect startups from frivolous content moderation lawsuits that could bankrupt them or severely restrict their access to venture capital. In the years since, business practices and judicial interpretations have created gaps that Congress needs to address. Big Tech companies have stretched their liability shield past its limits, and civil discourse and First Amendment protections now suffer because of it.

The Commerce Committee will hold a hearing on October 28 to examine whether Section 230 of the Communications Decency Act has outlived its usefulness in today’s digital age. It will also examine legislative proposals to modernize the decades-old law, increase transparency and accountability among big technology companies for their content moderation practices, and explore the impact of large ad-tech platforms on local journalism and consumer privacy.

Chairman Wicker said:

“For too long, social media platforms have hidden behind Section 230 protections to censor content that deviates from their beliefs. These practices should not receive special protections in our society where freedom of speech is at the core of our nation’s values. Our legislation would restore power to consumers by promoting full and fair discourse online.”

Chairman Graham said:

“I’m very pleased to be working with Senators Wicker and Blackburn to bring about much-needed reform of Section 230. Social media companies are routinely censoring content that, to many, should be considered valid political speech. This reform proposal addresses the concerns of those who feel like their political views are being unfairly suppressed.”

Senator Blackburn said:

“The polished megaplatforms we associate with online research and debate exert unprecedented influence over how Americans discover new information, and what information is available for discovery. Moreover, the contentious nature of current conversations provides perverse incentive for these companies to manipulate the online experience in favor of the loudest voices in the room. There exists no meaningful alternative to these powerful platforms, which means there will be no accountability for the devastating effects of this ingrained ideological bias until Congress steps in and brings liability protections into the modern era.”

The Online Freedom and Viewpoint Diversity Act would:

  • Clarify when Section 230’s liability protections apply to instances where online platforms choose to restrict access to certain types of content;
  • Condition the content moderation liability shield on an objective reasonableness standard. To be protected from liability, a tech company may restrict access to content on its platform only where it has “an objectively reasonable belief” that the content falls within a specified category;
  • Remove “otherwise objectionable” and replace it with concrete terms, including content “promoting terrorism,” content determined to be “unlawful,” and content that promotes “self-harm”; and
  • Clarify that the definition of “information content provider” includes instances in which a person or entity editorializes or affirmatively and substantively modifies the content created or developed by another person or entity but does not include mere changes to format, layout, or basic appearance of such content.

FREQUENTLY ASKED QUESTIONS

Will this make it harder for platforms to remove objectionable content?

  • No. We’re asking companies to be more transparent about their content moderation practices and more specific about what kind of content is impermissible.

What does the law say about content moderation now, and how will this bill change it?

  • The law currently enables a platform to remove content that the provider “considers to be … obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”
  • The problem is that “otherwise objectionable” is too vague. This has allowed Big Tech platforms to remove content they simply disagree with. We’re striking that phrase and instead specifying that content promoting self-harm or terrorism, or content that is unlawful, may be removed.

Does the bill raise First Amendment concerns?

  • No. This bill was created with free speech in mind. By narrowing the scope of removable content, we ensure that Big Tech has no room to arbitrarily remove content it merely disagrees with while enjoying the privilege of Section 230’s liability shield.

Will this bill protect against election interference campaigns?

  • Foreign interference in elections is unlawful. This bill won’t prevent Big Tech companies from removing content posted by these bad actors.

Why not repeal and start over?

  • The tech industry relies on Section 230’s liability shield to protect against frivolous litigation. If we repeal the law, we risk increasing censorship online and encouraging the creation of a government body ill-equipped to act as judge and jury over speech and moderation. Repealing Section 230 in its entirety could also be detrimental to small businesses and competition.

Why not create a new cause of action?

  • Creating a new tort would only serve to enrich trial lawyers.

Why didn’t you cover medical misinformation?

  • We believe that platforms will be able to remove this content under the “self-harm” language in the bill.

Why can’t we use the courts to course-correct?

  • If we left this to the courts, they would be adjudicating content moderation disputes all day, every day. This bill creates a clear framework; it is important for companies to own their moderation practices and follow them.
  • More broadly, history doesn’t support a court-led strategy. The courts have interpreted the scope of Section 230 so broadly that tech companies are now incentivized to over-curate their platforms.

What is your position on fact checking?

  • We believe the free market will always produce better solutions for fact checking.
  • This bill provides a starting point for discussion on objectivity by updating the statutory language to include a new “objectively reasonable” standard.

Will this require companies to create more warning labels?

  • Putting a warning label on a tweet could constitute “editorializing,” which would in turn open platforms up to potential legal liability. The idea is to make companies think twice before engaging in viewpoint correction.

Will this allow hate speech/racism/misogyny to “flourish” online, as some congressional Democrats claim?

  • No, but we invite opponents of the bill to discuss their views in the Senate Commerce and Judiciary Committees all the same.

Is this legislative push motivated by the President’s social media presence or the 2020 election?

  • No. The Commerce Committee has spent the past several years working on Section 230 reform. Repeated instances of censorship targeting conservative voices have only made it more apparent that change is needed.

Click here to read the bill and here to download the fact sheet.