Where do we draw the line between protecting freedom of speech and allowing misinformation to be broadcast online?

Content moderation is crucial for social platforms to maintain a trustworthy relationship with their users. Without moderators, billions of social media users would be shown potentially harmful content every day.

Government control – trusting the system

There are many nuances to user-generated content, and there are concerns that governments will take control over the content posted on media platforms, undermining the platforms' purpose of letting people share content freely (within the guidelines).

For example, the U.S. Government issued executive orders seeking to ban the social media platform TikTok, which has over 80 million daily users in the U.S. The platform has since won a preliminary injunction that allows the app to remain available for use and download in the U.S. app store.

This precedent shows that if governments had more control, they would be quick to impose such restrictions on these platforms. A broader clampdown is unlikely, however, as political figures rely on social media to connect with their constituents, communicate their views, and advocate for political campaigns.

Free Speech vs Content Moderation?

According to a Gallup and Knight Foundation survey, “55% of Americans say that social media companies are not tough enough, with only 25% saying they get it right”.
For instance, Trump’s behaviour on Facebook, Twitter, and other social platforms allowed him to spread harmful propaganda that can influence political views and undermine election campaigns, and to provoke violence by sharing false and deceptive information with the public – as witnessed during his 2020 election campaign and the more recent events at the US Capitol involving his supporters.

The violent storming of the US Capitol led big tech companies like Twitter and Facebook to suspend Donald Trump from their platforms due to his alleged role in inciting violence and sharing misinformation, with many other players banning him permanently. Parler, which has a significant user base of Trump supporters, was removed from major service providers’ app stores, which accused the platform of failing to police violent content.

After Trump’s 12-hour ban on Twitter was lifted, he continued to violate its policies. Twitter concluded that his tweets during the incident breached its Glorification of Violence policy, leaving it with no choice but to permanently suspend his account.

Because an individual with this level of influence was given multiple chances, users continue to argue that big tech companies are being taken for a ride and are not doing enough to stop harmful content going viral. Consequently, many people no longer trust the platforms’ moderation policies and algorithms to surface authentic, unbiased content effectively.

Trusting the system

Controversially, US online intermediaries are under no legal obligation to monitor content: “social media companies are under no legal obligation to monitor harmful speech, and governments can’t really make them or compel them to offer things like counter speech without running into First Amendment roadblocks” (Forbes, 2020).

Section 230 – part of the U.S. Communications Decency Act – protects freedom of expression online. In comparison to other countries, Section 230 grants online platforms immunity from legal liability with few exceptions: “they can avoid liability, and object to regulation as they claim to be editors of speech”, as outlined in Section 230(c)(1). There are many caveats and exceptions – particularly when it comes to interpreting images and videos.

Therefore, when it comes to accountability, this legislation limits the extent to which online intermediaries can be held liable for user-generated content on their platforms. It does not establish what counts as tortious speech or harmful or misleading information. Rather, big tech companies are left to define this in their own policies and to do the right thing by their users.

Moderating content

Early last year, Twitter introduced new labels for Tweets containing “synthetic and manipulated media”; likewise, Facebook created labels that flag harmful or unverified information.
Although these companies continue to introduce new tools to highlight harmful content, it is important for moderators to have the right tools and expertise to handle sensitive content rather than relying solely on technology. Without the right guidance and principles, misinformation and propaganda will continue to fall through the cracks.

Learn more about our Digital Services, or contact us to find out more.
