Section 230 and the Future of the Internet

In 1994, the Rolling Stones wanted to do something no musical act had ever done before: broadcast a live performance on the internet. Even though these were the earliest days of cyberspace and the potential audience was small (only 22% of American homes had a computer), the band saw valuable publicity in breaking new ground. It had become clear that this new worldwide network had tremendous potential; it was even being heralded as "the biggest promotional tool for the music industry since the invention of the press release". Unfortunately, the Stones were beaten to the punch by a motley crew of engineers from Xerox, Apple, and Digital Equipment known as Severe Tire Damage.

This far lesser-known band knew the channel carrying the Stones broadcast was open to anyone and took the opportunity to stream (or, as it was called then, multicast) an impromptu live performance from the Xerox PARC offices directly before the Stones concert. And while the group did earn a record deal from the stunt, that wasn't their intention. Instead, they wanted to show that anyone could broadcast on the web with little experience or equipment. This sentiment was mirrored by a spokesman for the Stones, who called the performance a "good reminder of the democratic nature of the Internet".

The great thing about the world wide web is that users can not only access vast information resources but also act as providers, self-publishing across a variety of platforms and mediums. It's one of the Internet's most impactful characteristics, helping people across the globe find their audience, creating art, music, and revolution along the way. Unfortunately, with great access comes a host of problems, including hate speech, defamation, and disinformation. But back in the '90s, the extent of these issues had yet to be seen, so lawmakers were far more focused on preserving decency on the web, centering regulatory legislation on protecting children from obscene content.

In 1995, Senator James Exon introduced an amendment to the Telecommunications Act known as the Communications Decency Act, which aimed to make internet companies more proactive in removing objectionable content. But the courts had already shown what a difficult position that was: in Stratton Oakmont v. Prodigy (1995), a New York court set a precedent that if internet companies performed some form of moderation, they were potentially liable for all the content their users posted.

This meant that companies had to make a choice: exercise editorial discretion and make themselves vulnerable to lawsuits, or do nothing and face no legal action. This dichotomy didn't make sense to Representatives Christopher Cox and Ron Wyden, so they created Section 230, or as they called it, "the sword and the shield". Effectively, it stated that internet companies were not publishers of their users' content and could therefore moderate it if they so wished, with immunity from liability. Cox and Wyden hoped this would encourage companies to filter content in good faith, and Section 230 remained on the books even after the Decency Act's anti-indecency provisions were struck down for violating the First Amendment.

Flash forward to today, and the power of social media has made Section 230 especially contentious. Both President Donald Trump and President-elect Joe Biden have expressed a desire to see Section 230 repealed or modified, but for very different reasons.

Biden and his fellow Democrats believe that Section 230 gives companies too much protection in exchange for inadequate moderation. Biden has stated that Facebook is "not merely an internet company, it is propagating falsehoods they know to be false", reflecting Facebook's inability, or reluctance, to slow the spread of fake news. In one of his most damning critiques, Biden even suggested Facebook should lose its legal immunity. Similar criticisms have been levied against YouTube after research showed that the site's recommendation algorithm creates political rabbit holes, pushing users towards conspiratorial and radicalizing content.

Republicans, on the other hand, feel that moderation is akin to censorship and is suppressing conservative voices. Much of this criticism arose during the Trump tenure, when Twitter began flagging tweets it considered misleading, disputed, or purposely deceptive. Trump then signed an executive order in May which pushed the FCC to reconsider the scope of Section 230 and called on the FTC to step up regulation of social media companies, something the commission looked unlikely to undertake. The issue reached a boiling point when Twitter blocked a New York Post article, leading Senator Ted Cruz to ask CEO Jack Dorsey: "Who the hell elected you and who put you in charge of what the media are allowed to report and what the American people are allowed to hear?"

So both sides of the aisle have issues with Section 230, but no one can quite decide what happens next. Foundational to both complaints is a frustration with moderation, specifically what content is filtered and when. A more thorough process with defined parameters would seem to help, but it would also place more pressure on the moderators within these companies, slowing publication and limiting information on the web. In an ideal world, artificial intelligence would filter the majority of content, but that task has proven especially difficult.

During the pandemic, Facebook leaned heavily on its machine-learning systems, with varying degrees of success. When it comes to terrorist content, they can correctly identify and remove it 98% of the time, but for hate speech that figure drops to 80%. YouTube's AI has had an even more difficult time because graphic imagery also appears legitimately in newscasts, documentaries, and educational videos, resulting in such content being mistakenly removed.

For these reasons, all social media platforms rely on human moderators who can provide much-needed context to the review process. But even this system isn't perfect: many teams have cultural and linguistic blind spots that allow harmful and radical content to flourish in developing countries. Facebook's inability to filter content in Myanmar, due to a lack of Burmese-speaking moderators, has been tied to the genocide of the Rohingya there. Human moderation also means exposing employees to the darkest underbelly of the internet, with damaging long-term effects.

And herein lies the true problem, more significant than the wording of Section 230: the sheer size and speed of the internet. Gone are the days of a few engineers in a band wanting to upstage the Rolling Stones; now everyone is posting and consuming. We have entered an attention economy that fuels content production: 4,000 photos are uploaded to Facebook every second, 5,000 hours of video are uploaded to YouTube every minute, and 6,000 tweets are sent every second. Our laws, expectations, and abilities have not scaled with the internet, so even if we write concrete legal guidelines, we don't necessarily have the technology or manpower to enforce them.

Section 230 may well be reviewed under the Biden administration, but lawmakers would be wise to consult with internet companies to ensure expectations align with our current reality. These platforms are not going away, and we need to make them safe without compromising the openness that makes the internet so special. For now, that means encouraging companies to invest in new forms of moderation and establishing programs to make users more internet-literate as a way to combat fake news. Social media is still incredibly young; it has a lot of room to grow.

Anne Marie
