What is Section 230?

In 1996, Congress enacted the Communications Decency Act primarily to regulate online pornographic material. Twenty-four years later, arguments over a little-known provision of that law, Section 230, may dictate how you answer Facebook’s prompt “What’s on your mind?” or Twitter’s “What’s happening?” and shape how other heavily used social media platforms operate.

So, what is Section 230? And should the way social media platforms moderate content be reformed?

In basic terms, “interactive computer services” such as Facebook, Twitter, YouTube, Instagram, Snapchat, and other social media platforms cannot be held liable for the content posted by their users. For example, if a user posts a libelous comment on Facebook, the user who posted that comment can be sued while Facebook cannot. The question comes down to whether online content providers are treated as publishers. Traditionally, publishers of written materials have discretion over content and, in turn, are held responsible for what appears in the publication. Section 230, however, provides that as long as an online content provider does not participate in creating user content, it will not be treated as a publisher, and liability falls to the user who created the post, tweet, snap, or video.

Section 230 also allows online content providers to exercise discretion over the content on their platforms as long as they make a good faith effort to remove inappropriate or objectionable material. For example, if a user posts a lewd photograph that violates a platform’s content policy, the platform can remove the photograph without being treated as a publisher and held liable. And if moderators fail to catch another inappropriate post, the platform is not penalized for removing one post but not the other.

As a result, Section 230 provides immunity both for not regulating content and for making good faith attempts to regulate it. Unfortunately, the courts have not been consistent in determining when each of these protections applies.

Enter the First Amendment

The First Amendment protects the right to free speech. Many misinterpret this to mean they can say whatever they want, whenever they want, to whomever they want; in fact, it protects people only from government restrictions on speech. Online platforms, as private entities, therefore have a First Amendment right to regulate the speech they host.

Now there are calls to hold social media platforms liable when they moderate the content posted on their sites.

What happens now?

Section 230 is 24 years old this year. Twenty-four years ago, you accessed the Internet using a dial-up connection, the first iPhone was 11 years away, and Mark Zuckerberg was only 12 years old. The world has changed, and our laws must reflect how technology is used today rather than regulate relics of the past.

One of the biggest threats to our democracy is the spread of disinformation by foreign adversaries. While we can’t legislate away our adversaries, we can restrict their influence by working with the private sector to create a more equitable and open dialogue. This includes promoting facts while respecting the viewpoints of all Americans.

Congress should study how Section 230 has been applied across online platforms and evaluate whether its liability protections are meeting Congress’s original intent. However, we must not trample on the constitutional rights of individuals and private entities in the process. If the owners of social media companies do not agree with posts by some of their users, should they receive immunity for editorializing the content they host?

The answer to this question is not simply one of right or wrong but could fundamentally change how we interpret the First Amendment of our Constitution.