Algorithms and their Role in Content Moderation

Earlier this month, Twitter and Facebook blocked and limited distribution of a New York Post article about Hunter Biden’s dealings in Ukraine. These decisions drew significant backlash from lawmakers and other Americans, who called into question the content moderation policies of social media platforms. While the decisions surrounding the New York Post article were likely verified by a human, most initial content moderation decisions are made by algorithms rather than humans. Algorithms are human-created but, depending on their sophistication, can often learn on their own and reach outcomes their creators did not anticipate.

What is an Algorithm?

An algorithm is simply a set of steps or rules followed to achieve an outcome, whether that is reaching a conclusion or completing a task. Following a recipe or map directions, working through a math formula, and playing a game are all examples of basic algorithms, mundane ones that we all experience in our daily lives. So why do algorithms carry a negative connotation when applied to technology? Simply because they are not well understood, and the companies that employ them have not been completely transparent about how they function.
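
To make the idea concrete, here is a minimal sketch in Python (the function name and steps are invented purely for illustration) of an everyday algorithm, a tea recipe, written as a fixed series of steps:

    def brew_tea(water_temp_c, steep_minutes):
        # A mundane algorithm: a fixed series of steps that completes a task.
        steps = [
            "Heat water to {} degrees C".format(water_temp_c),
            "Place tea bag in cup",
            "Pour water over tea bag",
            "Steep for {} minutes".format(steep_minutes),
            "Remove tea bag",
        ]
        return " -> ".join(steps)

    print(brew_tea(95, 3))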

Algorithmic Content Moderation

Algorithms are the tools that make machine learning and artificial intelligence possible. Early on, automated content moderation relied on algorithms that followed a set of criteria to identify preprogrammed content, such as obscene language, and take a specific action, such as placing it in a review queue or blocking it. For example, an algorithm would evaluate a user’s post to see whether any of its words appeared on a preprogrammed list of prohibited words; if so, the entire post would be flagged or blocked, depending on the governance model of the platform. Now, more advanced programming allows algorithms to learn each time a piece of content is checked and classified, so that similar, previously unseen content can be acted upon without specific prior programming. While this advancement significantly increases platforms’ ability to moderate the content of billions of users, unintended outcomes can occur.
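
The early, rules-based approach described above can be sketched in a few lines of Python. The word list, return values, and function name below are hypothetical stand-ins, not any platform’s actual rules:

    # A minimal sketch of rules-based moderation: flag a post if any word
    # matches a preprogrammed prohibited list.
    PROHIBITED_WORDS = {"badword1", "badword2"}  # illustrative placeholder list

    def moderate(post):
        words = {w.strip(".,!?").lower() for w in post.split()}
        if words & PROHIBITED_WORDS:
            return "flagged_for_review"  # or "blocked", per the platform's governance model
        return "allowed"

    print(moderate("This post contains badword1 somewhere."))  # -> flagged_for_review
    print(moderate("A perfectly benign post."))                # -> allowed

More advanced systems effectively replace the fixed list with a model trained on labeled examples, which is what lets them act on content they have never encountered before.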

Algorithmic Bias

Algorithms are created by humans. Humans are imperfect. We can design algorithms to account for known bias, but we often overlook unconscious bias or fail to recognize influential factors underlying the data upon which algorithms will act. Machine learning algorithms categorize new content by comparing it to existing examples. In addition, platforms that provide a personalized experience will promote posts or other content to a person based on the likes and interactions of other similarly categorized users. Because humans possess inherent bias, the data upon which these algorithms are trained will reflect some level of bias.
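
As a rough illustration of how “comparing new content to existing examples” works, the toy classifier below gives a new post the label of its most similar human-labeled example (similarity here is simple word overlap, and the examples and labels are invented). Whatever judgments the human labelers made are carried straight through to the prediction:

    # Toy example: classify a new post by comparing it to human-labeled examples.
    LABELED_EXAMPLES = [
        ("win a free prize now", "spam"),
        ("limited time offer click here", "spam"),
        ("meeting moved to 3 pm", "ok"),
        ("lunch tomorrow?", "ok"),
    ]

    def word_overlap(a, b):
        return len(set(a.lower().split()) & set(b.lower().split()))

    def classify(post):
        # The prediction inherits whichever human-applied label belongs
        # to the most similar existing example.
        _, best_label = max(LABELED_EXAMPLES, key=lambda pair: word_overlap(post, pair[0]))
        return best_label

    print(classify("click here for a free prize"))   # -> spam
    print(classify("is the meeting still at 3 pm"))  # -> ok

If those human-supplied labels systematically over-flag posts from one group or viewpoint, the classifier will reproduce that pattern at scale.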

The question becomes, do we let machine learning algorithms reflect the true user population, or do we manipulate them to promote only neutral, appropriate content?

There are certain types of material that platforms automatically block because they are objectionable: illegal, pornographic, violent, or terrorism-related content, for example. But there are instances where content touching on these topics could be appropriate, such as raising awareness of human trafficking, promoting support groups for drug abuse, or highlighting domestic violence resources. Therefore, fully automated content moderation may not always be the most effective or suitable process.
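
One way platforms try to reconcile automation with the need for context, sketched below with a hypothetical scoring function and made-up thresholds, is to block only high-confidence matches automatically and route ambiguous cases to human reviewers:

    REVIEW_QUEUE = []

    def objectionable_score(post):
        # Stand-in for a trained classifier's confidence that a post violates policy.
        return 0.7

    def route(post):
        score = objectionable_score(post)
        if score >= 0.95:
            return "blocked"                  # clear-cut violations handled automatically
        if score >= 0.5:
            REVIEW_QUEUE.append(post)         # context-dependent cases get human judgment
            return "queued_for_human_review"
        return "allowed"

    print(route("a post sharing domestic violence resources"))  # -> queued_for_human_review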

Algorithmic Liability?

I have previously outlined that Section 230 of the Communications Decency Act provides immunity from liability for distributors of third-party content, as well as for good-faith efforts to remove objectionable content. When this law was written, dial-up Internet was common. It is possible that the authors of Section 230 wrote it with the expectation that content moderation decisions would be made solely by humans.

When algorithms make content decisions, albeit based on human programming, who is responsible? Keeping a human fact-checker in the process would most likely not change the current application of Section 230, but whether fully automated content decisions enjoy the same protection would likely need to be adjudicated. While Congress considers reforms to Section 230, this is a question that should be answered.

Platform Accountability

Even if algorithms become better able to identify and classify objectionable content, the underlying governance structure guiding outcomes may still be problematic. While some content is universally recognized as objectionable, other content is more ambiguous. Should there be a set of industry standards for how content is classified? If such an agreement were implemented, it could well eliminate the need to clarify algorithmic liability.

Until algorithmic content moderation standards and applications are clarified, in statute or otherwise, technology companies should be more forthcoming about how they identify and categorize user content. This increased transparency would go a long way toward establishing credibility and accountability with the public and lawmakers as content moderation policies evolve.