Public Good versus Profits: A Review of the Big Tech Hearing

Last week, the CEOs of Facebook, Twitter, and Google testified before the Energy and Commerce Committee. The CEOs faced backlash for recent content moderation decisions, criticism for how their algorithms might be harming our children, and concern over Covid-19 vaccine misinformation. Underlying all of this was the question of whether these Big Tech platforms should continue to receive the immunity granted under Section 230 of the Communications Decency Act. 
Section 230 says that “interactive computer services,” like online platforms, cannot be held liable for content posted by their users. It also allows online platforms to remove inappropriate or objectionable content as long as they make a good faith effort to do so. Many argue that without these protections, small and medium-sized businesses would not be able to enter the online marketplace and would not have the opportunity to develop into successful companies. Others argue that the First Amendment provides all the protection needed to continue operations without Section 230. 
For several years, the conversation surrounding Section 230 has centered on the need to fully evaluate its necessity given the advancement of the Internet and online business. When Section 230 came into effect in 1996, the first iPhone was 11 years away and Mark Zuckerberg was only 12 years old. The world has changed, and it’s time to ensure our laws mirror current applications of technology rather than regulate relics of the past. 
During the Energy and Commerce Committee hearing last week, I questioned Mr. Zuckerberg and Mr. Dorsey on whether exercising editorial discretion over user posts made Facebook and Twitter publishers rather than platforms. Being defined as a publisher would eliminate their liability protections. For example, if a social media platform includes a warning or other commentary to accompany a user’s post, does that count as exercising editorial discretion? Mr. Zuckerberg and Mr. Dorsey did not believe so, because their platforms did not change the initial post but merely added context to it. It is still unclear to me whether changing the distribution of a post on their platforms, or changing how other users view a post – by adding context or additional information – is indeed exercising editorial discretion that would render them a publisher rather than a neutral conduit of information. 
Last week’s hearing also sought to address misinformation that proliferates through online platforms. As a physician, one of my biggest concerns is misinformation surrounding the Covid-19 vaccine. If a person is fully informed by facts and chooses not to get the vaccine, that is their choice. But if they believe false narratives and choose not to get vaccinated based on that false information, that undermines our recovery from this pandemic. Part of the issue is that algorithms amplify this false information and, whether intentionally or not, the platforms profit. 
The double-edged sword here is that, on one hand, Section 230 allows platforms to remove false information when it is discovered. On the other hand, as private companies, these platforms could fully enforce terms and conditions disallowing this type of content and cite their First Amendment right to do so. 
So, the question becomes, at what point does the public good outweigh profits? I would argue they are not mutually exclusive goals.