According to the World Economic Forum, by 2025 we’ll be creating over 400 billion gigabytes of data every day. (That’s equivalent to over 212 million DVDs, if you’ll forgive the old-school analogy.)

So there’s an unfathomably vast amount of user-generated content (UGC) out there.

Some of that content will be in some way harmful to other users. The only way to distinguish the safe from the dangerous is moderation – usually with some level of human supervision.

Human content moderators typically work in tandem with automated content moderation tools. They’re the digital first responders who make the online world safer for all of us—not least those of us who are running online platforms.

This article explores everything you need to know about using human content moderators to keep your platform, your users and your brand reputation safe in a hazardous online environment.

Contents:

  1. Who needs human content moderation?
  2. Roles and responsibilities of human content moderators
  3. The 5 main types of human content moderation
  4. Human moderation vs automation
  5. Building a content moderation team
  6. Outsourcing content moderation

Who needs human content moderation?

It’s not just big platforms like Facebook, TikTok or YouTube that need UGC moderation. Any business that hosts online content—whether it’s Airbnb, Tinder, a newspaper, a review site, an online store, or even a B2B blog with comments enabled—needs content moderation.

In other words, any business with a social media presence, online forum or metaverse space.

Content moderation is the cost of entry to participate in the user-generated content space. And it’s a price worth paying. UGC offers a huge set of commercial opportunities. Social proof can increase conversions and influence buying decisions. In fact, nearly 80% of people say UGC impacts their decision to purchase. It can also support brand loyalty and authenticity.

But UGC also exposes businesses to the risks of bad actors. Given enough content, some of it will inevitably be disturbing, misleading, hateful, inflammatory or even dangerous—such as ‘doxxing’, when users post private addresses of perceived enemies and encourage others to harm them.

This kind of content can damage your reputation, repel users and hurt advertising revenue. Content moderators are needed, therefore, to protect your users and shield your brand.

For a more comprehensive overview of the challenges involved in moderating your UGC—and the rewards that come with getting moderation right—head to our Ultimate guide to content moderation.

Roles and responsibilities of human content moderators

Content moderators review all forms of UGC across all platforms. Their aim is to ensure that only content meeting platform guidelines and regulatory requirements is published.

Examples of this type of inappropriate content include hate speech, trolling, flaming, spam, graphic content depicting sexual abuse, child abuse and other violent or upsetting acts, propaganda, misinformation and fraudulent links.

Inevitably, the ‘keeping the bad guys out’ aspect of the role draws a lot of attention. Equally important, however, is the positive commercial impact that the work of content moderators delivers.

Successful content moderation leads to the creation of vibrant user communities in which people feel safe to interact and transact. The right UGC is commercially vital as a signifier of authenticity and trust, particularly for brands seeking traction among younger audiences.

With the advent of the metaverse, human content moderation is at an inflection point. While content moderation will always be important, it will also evolve into conduct moderation, or overseeing real-time interactions between users in a virtual reality environment.

The 5 main types of human content moderation

There are five main types of content moderation, each with its own strengths and weaknesses.

1. Pre-moderation

Human moderators screen user-generated content (often flagged by automated systems) to determine whether that content is fit for publication or should be reported or blocked before it’s ever shown to other users. This approach offers a high level of control over content, but also risks creating bottlenecks when moderation teams can’t keep up with spikes in content volume.
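
As a rough illustration of the pre-moderation workflow, the sketch below holds every submission in a queue until a human moderator approves or rejects it. This is a minimal, hypothetical Python example: the names (PreModerationQueue, Submission) are illustrative assumptions, not a real moderation API.

    from collections import deque
    from dataclasses import dataclass

    @dataclass
    class Submission:
        user_id: str
        content: str

    class PreModerationQueue:
        """Pre-moderation: nothing is published until a human moderator has reviewed it."""

        def __init__(self):
            self.pending = deque()   # submissions awaiting human review
            self.published = []      # approved and visible to other users
            self.rejected = []       # blocked (and, where appropriate, reported)

        def submit(self, item: Submission) -> None:
            # Held back from the platform until a moderator makes a call.
            self.pending.append(item)

        def review_next(self, approve: bool) -> Submission:
            item = self.pending.popleft()
            (self.published if approve else self.rejected).append(item)
            return item

The trade-off is visible in the pending queue: every item waits for a human decision, so a spike in submissions means a growing backlog unless the moderation team scales with it.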

2. Post-moderation

Content is displayed on the platform in real time but also placed in a moderation queue. This removes content bottlenecks and conversational interruptions but opens up legal, ethical and commercial risks, since inappropriate or even illegal content can be viewed before you take it down.

Some major content platforms such as Facebook have introduced elements of proactivity to post-moderation. This includes banning or blocking users who post content that doesn’t meet platform community standards and tracking their on-site behaviors.

3. Real-time moderation

Some moderation has to happen in real time, such as for live-streamed video. In this case, moderators monitor some or all of the content and remove it if issues arise.

In some cases, moderators spot-check the content or just drop in for the first few minutes to make sure everything is on track.

The metaverse presents an interesting new use case: moderating person-to-person interactions as they take place in real time. This type of moderation is more labor-intensive but will be key to ensuring online experiences feel safe for users (especially in an up-close-and-personal virtual reality experience).

4. Reactive moderation

See those ‘Report this’ buttons next to content that platform users have posted? That’s what reactive moderation looks like.

Reactive moderation relies on community members to flag undesirable content that runs contrary to platform rules. A useful safety net alongside pre-moderation or post-moderation, it gives the content moderation team an army of deputies across your platforms.

This system is, however, open to abuse. If a user wants to target another user, they can report everything that user posts and flag it for moderation. Some users may even use the system to target your content moderators, reporting inoffensive content simply to waste their time.
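
As a rough sketch of how reactive moderation feeds the review queue, the hypothetical Python snippet below counts distinct reporters per piece of content and only escalates once several different users have flagged it. The threshold and the names are illustrative assumptions; requiring multiple distinct reporters is one simple way to blunt the impact of a single user mass-reporting content they dislike.

    from collections import defaultdict

    REPORT_THRESHOLD = 3   # assumed: number of distinct reporters needed to escalate

    # content_id -> set of user_ids who have reported it
    reports: dict[str, set[str]] = defaultdict(set)

    def handle_report(content_id: str, reporter_id: str, review_queue: list) -> None:
        reports[content_id].add(reporter_id)      # repeat reports from one user count once
        if len(reports[content_id]) == REPORT_THRESHOLD:
            review_queue.append(content_id)       # escalate to human moderators exactly once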

5. Distributed moderation

Distributed moderation assigns responsibility for scrutinizing every piece of user-generated content to a large number of people (other community members and/or employees of the platform).

This is a democratic but infrequently used approach. Most businesses seek greater levels of content control than user moderation allows, and putting moderation decisions to a vote risks internal divisions and disagreements.

Human moderation vs automation

Due to the high volume of UGC they have to deal with, many platforms use AI and machine learning (ML) tools to help their human content moderators out. Some platforms have even removed the human element altogether.

Relying entirely on either human or automated content moderation is risky, because both approaches have their strengths and weaknesses.

Let’s take a closer look…

Human content moderation

Advantages

  • Humans have a far greater ability to interpret the nuance of how content is being used than an AI. A human content moderator would, for example, distinguish between a proud parent showing off a child swimming on holiday and child pornography. An algorithm might not be able to differentiate them.
  • Human moderators possess a better grasp of language subtleties like slang, context and irony—and can keep up better with the fast evolution of language. This reduces unnecessary penalization of legitimate content.
  • Human content moderators are more culturally attuned to the material they’re reviewing. What is suitable in one region, country or user group may not be in another, so you should consider hiring moderators from that culture or a similar one.

Disadvantages

  • It’s difficult and expensive to scale up human content moderation—particularly at speed. Recruiting, training and retention all take time and money.
  • The sheer volume of UGC makes it impractical to assign all content to human moderators.
  • Any headcount shortfall leaves platform performance and UX compromised.

Automated content moderation

Advantages

  • Content moderation powered by AI/ML can screen vast quantities of UGC in a fraction of the time a human would take.
  • It is entirely scalable without prohibitive cost implications.
  • An AI solution isn’t subject to the psychological stresses that moderating disturbing content can cause.

Disadvantages

  • Lack of qualitative judgment / nuance: an AI can’t judge user intent to the same degree as a human. Profanity filters can be fooled. Fake news or misleading information isn’t always distinguished from legitimate sources. The cultural or personal context of content can get missed and misclassified.
  • Blanket application of algorithmic guidelines risks a lot of content being penalized unnecessarily, which can annoy your user community.

With evident weaknesses on both sides, it’s no surprise that most businesses adopt a hybrid approach: AI/ML does the volume heavy-lifting, while more ambiguous decisions are flagged for human moderators. Your platform is likely to be best served by following the majority of online platforms and drawing on the best of both worlds.
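
To make that hybrid split concrete, here is a minimal, hypothetical Python sketch of how a classifier’s confidence score might route content: clear-cut cases are handled automatically, while ambiguous ones are queued for a human moderator. The classify placeholder and the threshold values are assumptions for illustration, not any specific vendor’s model or API.

    AUTO_REMOVE_THRESHOLD = 0.95    # assumed values; in practice tuned per platform and policy
    AUTO_APPROVE_THRESHOLD = 0.05

    def classify(content: str) -> float:
        """Placeholder for an ML model estimating the probability that content violates policy.
        A trivial keyword check stands in for the real classifier here."""
        banned = {"scam-link", "hate-term"}               # hypothetical examples
        hits = sum(term in content.lower() for term in banned)
        return min(1.0, 0.5 * hits)

    def route(content: str) -> str:
        score = classify(content)
        if score >= AUTO_REMOVE_THRESHOLD:
            return "auto-remove"      # high-confidence violation: no human needed
        if score <= AUTO_APPROVE_THRESHOLD:
            return "auto-approve"     # high-confidence safe: publish immediately
        return "human-review"         # ambiguous: flag for a human moderator

    print(route("nice photo!"))                    # auto-approve
    print(route("click this scam-link"))           # human-review
    print(route("scam-link full of hate-term"))    # auto-remove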

And since you’ll probably be using some form of automation in your content moderation efforts, we recommend reading our article on How to get automated content moderation right, containing all the tools, tactics and metrics you need to know.

Building a content moderation team

Developing an in-house content moderation team can deliver a number of benefits—most obviously a direct line of communication and control with your content moderators. But it also brings substantial costs and risks.

Recruiting, onboarding, training, developing, managing and looking after the wellbeing of a team of content moderators is a complicated and potentially budget-breaking business.

We’ve written a guide to building your own content moderation team to help you decide whether it’s right for you—and, if it is, how to do it effectively.

We’ve also written an article focusing specifically on a particularly important and challenging facet of employing in-house moderators: protecting their wellbeing.

If building in-house is something you don’t feel you have the expertise or resources to carry out yourself, there is always another way: outsourcing.

Outsourcing content moderation

Given the complexity and cost involved in building your own in-house team, outsourcing is a hugely attractive option for many companies thanks to its savings in time, effort and money.

Business process outsourcing (BPO) providers have the experience and process expertise to run programs efficiently, and are often able to tap into more cost-effective talent pools. A good external partner can also bring best practices to this ever-evolving area.

You need to be careful in selecting a partner. There’s a lot at stake here: your brand’s reputation, your customers’ trust, the patience of your advertisers and the strength of the communities you’ve worked so hard to build online.

Look for a partner that invests seriously in training and supporting its content moderators. They should have clear processes for selecting the right personnel with the right skills, traits and cultural attunement for your market.

They should also invest in mental health support, such as ongoing access to counseling and supervisors trained to look for signs of distress.

While humans are essential to moderating content, you don’t want costs to scale endlessly as content volume increases. A moderation partner that focuses on delivering value—for instance, using moderator decisions to better train AI tools to take on more complex decisions—may be a better long-term play than simply going for the lowest-cost option.

When you rely on a moderation provider that has engineered its processes so that humans and AI work alongside each other as effectively as possible, the value you receive should continue to improve the longer they work with you.

For more insights into the key considerations you need to make (and steps you need to take) to outsource your content moderation, take a look at our article: Outsourcing content moderation: how to get it right.

Recognize the challenge of content moderation—and rise to it

No matter how you use people to moderate content—whether in-house or outsourced, augmented by AI or not—it’s important to recognize the crucial role they play.

Human moderators aren’t just censors arbitrarily applying standards; they’re guardians making critical judgment calls and helping maintain vibrant online spaces. These are challenging, specialist, serious roles requiring both empathy and expertise.

And if you think you’ll need help filling those roles, you should consider partnering with Webhelp.

We have thousands of expertly recruited, trained and managed content moderators on hand to meet your needs, operating all across the globe and in over 25 languages.

If you’d like to learn how our expert content moderation services can positively impact your platform’s customer experiences, drop us a line.

Let's talk