Protecting the wellbeing of content moderators

Content moderators commonly endure levels of workplace trauma similar to those suffered by first responders.

Every day, they risk serious harm to their wellbeing as they safeguard digital experiences that contain user-generated content (UGC).

Fortunately, employers can take effective steps to protect moderators’ mental health and wellbeing—ranging from using AI-powered tools to blur pictures of abuse to more traditional workplace wellness support.

Read on to find out about the dangers moderators face, and the measures industry leaders are taking to ensure they are protected and supported.

Risks to content moderators’ wellbeing

While most flagged content isn’t egregious—or even in violation of your standards—moderators may be exposed to hate speech, trolling and graphic content depicting sexual abuse, child abuse and other violent or upsetting acts on a daily basis.

Prolonged and repeated exposure can be damaging. Moderators can develop vicarious trauma, with symptoms that include insomnia, anxiety, depression, panic attacks and post-traumatic stress disorder (PTSD). These can lead to other symptoms, like substance abuse and family problems.

Studies have shown that these symptoms are consistent with those reported in other high-stress workplaces, such as front-line healthcare and counseling.

Content moderators on the front line

Hiding from traumatic content isn’t an option for moderators. They have to engage with the material to make effective judgment calls.

What’s more, they’re often battling accuracy and speed targets, which means they need to engage with content both in detail and at high volume.

Content moderators need different types of support from employers to mitigate mental health repercussions. For simplicity, let’s break down this support into two areas: technological and organizational.

Guarding the guardians: automated moderation

In recent years, some pretty smart tech has been developed to do some of the heavy lifting for content moderators.

AI tools are increasingly used to eliminate large volumes of the most offensive content—before it ever reaches a human moderator.

Using a combination of natural language processing (NLP), keyword analysis, sentiment analysis, object detection and deep-learning algorithms, solutions can independently detect and remove much of the grimmest online content—whatever the format.
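To make the idea concrete, here is a minimal sketch of how such a pipeline might combine a couple of text signals into a triage decision. The scoring functions, blocklist and thresholds are illustrative stand-ins, not a description of any particular vendor's models.

```python
# Minimal, illustrative sketch of a pre-moderation triage step that combines
# several signals before content reaches a human. The scoring functions and
# thresholds are placeholders, not any vendor's real models.

from dataclasses import dataclass, field
from typing import List

AUTO_REMOVE_THRESHOLD = 0.90   # hypothetical: remove without human review
HUMAN_REVIEW_THRESHOLD = 0.40  # hypothetical: route to a moderator, with context

@dataclass
class TriageResult:
    action: str                 # "auto_remove", "human_review" or "allow"
    severity: float             # combined risk score in [0, 1]
    reasons: List[str] = field(default_factory=list)  # forewarns the moderator

def keyword_score(text: str) -> float:
    """Stand-in for keyword analysis against a policy blocklist."""
    blocklist = {"abusive_term_a", "abusive_term_b"}  # illustrative only
    hits = sum(1 for word in text.lower().split() if word in blocklist)
    return min(1.0, hits / 3)

def sentiment_score(text: str) -> float:
    """Stand-in for an NLP sentiment/toxicity model."""
    hostile_markers = ("hate", "kill yourself", "die")
    return 0.95 if any(m in text.lower() for m in hostile_markers) else 0.1

def triage(text: str) -> TriageResult:
    """Combine signals; only borderline content goes to a human, with context."""
    scores = {"keywords": keyword_score(text), "sentiment": sentiment_score(text)}
    severity = max(scores.values())
    reasons = [name for name, s in scores.items() if s >= HUMAN_REVIEW_THRESHOLD]

    if severity >= AUTO_REMOVE_THRESHOLD:
        return TriageResult("auto_remove", severity, reasons)
    if severity >= HUMAN_REVIEW_THRESHOLD:
        return TriageResult("human_review", severity, reasons)
    return TriageResult("allow", severity, reasons)

print(triage("I hate everything about this post"))
```

In a real deployment the individual scorers would be NLP, sentiment and object-detection models, but the triage logic stays the same: act automatically where confidence is high, and route only the borderline cases to a person.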

When these systems do need to refer content to a human moderator, you can employ tactics to soften its impact. For example, muting audio, blurring images or removing color have all been found to lessen the psychological and emotional toll on human moderators.
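For instance, here is a minimal sketch of that kind of pre-processing using the Pillow imaging library; the file paths are hypothetical, and a real review tool would apply this automatically before the item is displayed.

```python
# Minimal sketch: blur and desaturate a flagged image before a human sees it.
# File paths are hypothetical; a real system would do this inside the review tool.

from PIL import Image, ImageFilter, ImageOps

def soften_for_review(path: str, blur_radius: int = 12) -> Image.Image:
    """Return a blurred, grayscale copy of the flagged image."""
    image = Image.open(path)
    blurred = image.filter(ImageFilter.GaussianBlur(radius=blur_radius))
    return ImageOps.grayscale(blurred)

# Moderators see the softened copy by default and reduce the blur only if
# they genuinely need more detail to make a judgment call.
soften_for_review("flagged/item_001.jpg").save("review_queue/item_001_softened.jpg")
```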

Beyond that, AI-driven systems can forewarn their human counterparts of the nature and severity of the content by categorizing, prioritizing and triaging. Some automated systems even offer Visual Question Answering (VQA), enabling the moderator to interrogate the AI about the content without having to actually see it. For example: “Is the child clothed?” Clever stuff.
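As a rough illustration of how that interaction could work, the snippet below uses an off-the-shelf visual question answering model from the Hugging Face transformers library; the specific model and workflow are assumptions made for the example, not a description of any moderation vendor's tooling.

```python
# Illustrative sketch of "ask the AI instead of looking", built on an
# off-the-shelf VQA model. The model choice and file path are assumptions.

from transformers import pipeline

vqa = pipeline("visual-question-answering",
               model="dandelin/vilt-b32-finetuned-vqa")

def ask_about_content(image_path: str, question: str) -> str:
    """Let the moderator query the flagged image without viewing it."""
    answers = vqa(image=image_path, question=question, top_k=1)
    return answers[0]["answer"]

# The moderator never opens the image; they interrogate it instead.
print(ask_about_content("flagged/item_002.jpg", "Is the child clothed?"))
```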

Wellbeing programs for content moderators

As advanced as the AI is, even Mark Zuckerberg acknowledges nuanced content moderation essentially relies on people. And people need more than the tech can offer. More specifically, they need well-planned and structured wellbeing support programs.

Leaving aside for a moment the ethical rationale, it’s worth noting that effective wellbeing programs reduce attrition and enable content moderators to do their jobs better.

Mental resilience fosters better focus and judgment and improves stamina for repetitive work.

Gold-standard programs often include:

  • 24/7 counseling (Webhelp runs ours through an external EAP)
  • On-site psychologists
  • AI-powered analytics tools that cross-analyze different types of data to pick up weak signals indicating mental health issues (see the sketch after this list)
  • A bot that pushes different mental health content based on which tools the moderator uses
  • A dedicated management structure, with team supervisors trained to spot early warning signs, alongside other dedicated support staff
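To give a flavor of the "weak signals" idea from the list above, here is a purely illustrative sketch that cross-references a few kinds of data a wellbeing team might already hold; every signal, weight and threshold in it is hypothetical.

```python
# Purely illustrative: combine a few weekly data points into one "check in
# early" score. Signal names, weights and thresholds are all hypothetical.

from dataclasses import dataclass

@dataclass
class ModeratorWeek:
    severe_items_reviewed: int   # volume from the most challenging queues
    breaks_taken: int            # wellbeing breaks actually used
    mood_check_in: float         # self-reported score, 1 (low) to 5 (high)

def weak_signal_score(week: ModeratorWeek) -> float:
    """Higher score = more reason for a supervisor to check in early."""
    exposure = min(1.0, week.severe_items_reviewed / 200)      # hypothetical cap
    skipped_breaks = max(0.0, 1.0 - week.breaks_taken / 10)    # 10 expected breaks
    low_mood = (5 - week.mood_check_in) / 4
    return round(0.4 * exposure + 0.3 * skipped_breaks + 0.3 * low_mood, 2)

week = ModeratorWeek(severe_items_reviewed=180, breaks_taken=3, mood_check_in=2.5)
if weak_signal_score(week) > 0.6:   # illustrative threshold
    print("Flag for an early, supportive check-in")
```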

Industry-leading measures for hiring, onboarding and managing moderators

Whether you’re outsourcing content moderation or building it in-house, these tactics can protect the mental health of your moderators.

Hiring moderators

Hiring the right people is critical. Your job description should be upfront about the challenges moderators will face, so that poorly suited candidates can opt out.

You should screen for mental resilience, possibly using psychological testing to augment interviewers’ perceptions, and assign the most resilient people to the more challenging content queues.

Also look to hire people excited by the thought of making the internet safer for their fellow humans. Finding meaning in work can make the challenges easier to bear.

Onboarding & training

Normalize wellness maintenance right off the bat in onboarding.

Content moderators should receive upfront training to support resiliency. This promotes healthier and more robust psychological coping skills to help armor individuals against the impact of disturbing content.

Offering wellbeing services in the workplace

In addition to having a wellbeing team, you should provide moderators with easy access to psychological support both inside and outside the workplace, such as through an employee assistance benefit. Employers should also provide content moderators with access to mental-health services for at least 6 to 12 months after they leave the role.

Setting boundaries

Moderators need psychological respite and time to decompress. This can be achieved by setting a defined limit to the amount of time any individual spends reviewing the most upsetting categories of content.

Similarly, provide moderators with regular wellbeing breaks to de-stress, seek counseling or simply reset. Where possible, offer a dedicated space away from the work area for these breaks, and always provide private areas for counseling.

When moderators are having an off day or are exposed to something upsetting, they can be assigned to a less challenging content queue to minimize the chance of encountering further distressing material.

Stress can chip away at team member performance, especially in the more challenging queues. Think about what stressors you can remove, such as dashboard notifications about content piling up.

Checking in and reviewing

Managers play a key role—the connection between team members and their supervisor is critical to managing mental health.

Most people aren’t comfortable sharing their feelings, even when distressed, and won’t raise their hand unless they feel truly supported by their manager. Supervisors also need training to identify early signs of trauma and support the wellbeing of their employees.

Regular employee touchpoints are essential to check in on moderators’ health and help them work through issues.

It’s also worth linking the frequency of these conversations to the individual’s needs; touchpoints can take different forms, from weekly group therapy sessions to individual, ad hoc sessions. If a moderator seems particularly affected by the content they’ve been reviewing, increase the frequency of catch-ups and let them feel the support.

Environment, status and pay

Given the stressful nature of the work, the physical workplace should be a welcoming and soothing space.

Think: well laid-out, well-lit, well-appointed. Beyond that, work environments can be shaped by atmosphere and camaraderie. Activities for workmates that promote teamwork and fun are important. Just don’t make them compulsory: no one likes forced fun.

Also think about elevating this role. Many moderators enter the profession because they want to make a difference and protect people, and you should honor that.

Until recently, content moderation has been a largely invisible job. In many cases, neither the pay nor the job status has reflected its value and demands. Think about how you compensate the role, provide development opportunities and show your employees you respect the work.

The outsourced moderation option

Not every business has behavioral scientists on staff to manage the human element of content moderation. It’s nuanced work, and many companies, including the biggest social media brands in the world, outsource their content moderation.

If you’re considering this option, keep in mind that you still bear some responsibility for ensuring the wellbeing of moderators. Moral obligations aside, your business and brand reputation are on the line. At minimum, do your due diligence in selecting a reputable business process outsourcing (BPO) partner.

Choosing the right moderation provider

As the age of the metaverse approaches, choosing the right content moderation provider becomes an increasingly important decision. Here are some tips:

  • Look for a partner that invests serious resources into training and supporting its content moderators, prioritizing their mental health
  • Seek out evidence of clear, structured and documented wellbeing policies with a dedicated team to implement them
  • Ask about the technology being used to help the moderators
  • Insist on clear reporting and KPIs on the effectiveness of both wellbeing initiatives and moderation performance

These simple criteria will put you in a great position to make the right decision for your business and your online audience. And you can feel good about how you treat the people who weed out the worst of the internet and make it safe for everyone.

If you’d like to learn more about how Webhelp can provide content moderation services that positively impact your customer experiences, have a look at our digital content services.