
Concerns about Content Moderation (CoMo) policies and their enforcement on social media platforms have grown over the last few years, fuelled by scandals, user pressure, and politics. Today, we are dealing with how world-renowned social media platforms enforce their Content Moderation policies, as opposed to how governments or institutions would like them to (See Article). Bear in mind that in most countries, these platforms are not immediately liable for their User-Generated Content (UGC); Section 230 of the Communications Decency Act of 1996 in the United States is a great example of this “liability shield”:
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

When it comes to terrorism or other threats to users, several jurisdictions, including EU member states, Brazil, and Australia, impose a time limit within which platforms must delete content once it has been flagged as inappropriate.
With platforms not immediately liable for their UGC, why are these huge corporations enforcing stricter policies and raising censorship concerns? Why don’t they just let their communities live freely? They do so for two main reasons:
- To protect their communities from violence, hate speech, racism, harassment, and many other threats, ensuring a smooth user experience
- To protect individuals on their platforms from being influenced by other users who might spread misleading information or leverage the network to sway their decisions and behaviors
But as you will discover in this article, some platforms endure growing scrutiny from their audience because of their huge reach, whilst others benefit from looser expectations and use them to cultivate a particular brand image.
Scrutiny makes social media leaders tighten their CoMo policies
The former is especially the case for Facebook and Twitter. With billions of daily users, they can influence mass opinion far more easily than any other type of media. Following several scandals, trust between these platforms and their users has been damaged (link to WH article). In fact, when questioned in a US Senate hearing last October, the leaders of Twitter, Google, and Facebook were called “Powerful arbiters of truth”, a rather explicit label.
Content Moderation has wide political implications. Last year’s American elections were a major trial for large tech platforms, testing whether they could handle peaks of political comments, ads, and other UGC safely and considerately. Numerous examples of Content Moderation can be cited: Facebook’s ban on political ads, Donald Trump’s tweets first being flagged as misleading or partly false, and the former US president eventually being permanently banned from both platforms.
TikTok has also been questioned several times over its moderation of political content, but above all over near-live suicides, paedophilia, and the increasing presence of guns in videos posted by its users. Political aspects aside, the reasons why these types of content should be deleted before communities see them are straightforward. When it comes to firearms, however, differing local laws make it even less clear how to moderate the use and depiction of these weapons online.
Logically, the pattern rubs off on smaller players
Most Big Tech giants have now funded Safety Advisory Councils, generally “made up of external experts on child safety, mental health and extremism”, signaling to their communities that they are doing their best to protect them while avoiding both censorship and audience attrition.
Because of the attention their bigger peers face, tighter Content Moderation expectations are now reaching smaller players too. Platforms such as Parler advocate free speech and use it to promote their brand image, welcoming the most extreme far-right supporters, whose comments are heavily moderated on Twitter and other mainstream social media.
After Parler was removed from the Apple and Google app stores and dropped by its hosting provider, Amazon Web Services, due to its lack of Content Moderation, it was forced offline, and its now-former CEO, John Matze, was fired over his push for stronger moderation. Several other social media platforms claim to promote free speech (Telegram, Gab), but some have bravely chosen to take on the Content Moderation challenge to avoid Parler’s fate.
Nonetheless, similar patterns are already visible at newer, more innovative social media, including Substack (a newsletter publishing platform) and the infamous Clubhouse (live audio conferences). The former did not expect the controversy it faced when one of its newsletters linked IQ to race. The latter raises new questions on how to efficiently moderate live audio feeds.
Mastering Content Moderation policies is the key to success
The scale of emerging social media platforms, together with their innovative formats and technology, imposes new Content Moderation challenges, as the increased scrutiny from users makes clear. Unfortunately, without the benefit of years of Content Moderation experience, newcomers and smaller players must work out how to adapt their policies to their targeted communities and their content. A policy that is too permissive or too restrictive in either area endangers their longevity and brand image.
Mastering Content Moderation enforcement is a lever for the welfare of both your community and your reputation.