Online platforms must do more to safeguard the people protecting us from harmful content

Social media has transformed the lives of billions of people around the world through new connections and shared experiences. But sadly, it’s also proved to be alarmingly effective for spreading dangerous content like scams, child pornography, extremism, terrorism, online abuse, and cyber bullying. These risks to the public are real, serious, and well-documented, with a multitude of initiatives in place to crack down on breaches.

The impact on content moderators – the heroes holding the line – is gradually getting the attention it deserves, but the initiatives in place are not always up to the task. These are the people who, day in, day out, have the responsibility of constantly monitoring, analyzing, and responding to distressing, disturbing and suspect material.


Humans enhanced by AI

The sheer size and scope of social media platforms – many of which rely on user generated content (UGC) – means it’s unrealistic to expect all of that content to be moderated solely by people. For example, Meta (formerly known as Facebook) receives three million reports on content every day, flagged by AI or users. And even this brand – one of the biggest on the planet – has just 15,000 directly or indirectly employed content moderators globally to manage reviews, posts, pictures and videos. Meanwhile, a 2020 report by business school NYU Stern suggested Twitter had only 1,500 human moderators to deal with 199 million daily users across the globe.

With billions of users across both platforms, those ratios sound like a recipe for stress and overwhelm. The ideal solution is for human moderators and AI to work in synergy. But for now, human moderators must bear the brunt on behalf of the online community – because the hard truth is that AI isn’t able to take over the whole job. At least, not yet.

Documents leaked from Facebook in September 2021 revealed that its automated systems struggle to deal with hate speech and terrorism content. One main stumbling block was that although the AI systems operate in 50+ languages, the platform is used in more than 100. Platform owner Meta is now developing its own AI system, dubbed Few-Shot Learner (FSL), which has been rolled out on Facebook and Instagram. Its long-term vision is “to achieve human-like learning flexibility and efficiency.”

Creating these AI systems is extremely complex and tedious, as thousands of items need to be accurately annotated before the AI can independently recognize them and act. Meta’s system is already making progress on this front: it needs to see far fewer examples to identify troublesome posts and works in more than 100 languages.
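To make the few-shot idea concrete, here is a minimal, hypothetical sketch of prototype-based classification – not Meta’s FSL, just an illustration of the principle: a handful of labeled examples per policy category are embedded and averaged, and each new post is routed to the nearest prototype. The embed() function here is a toy stand-in for a real multilingual encoder.

```python
import hashlib
import numpy as np

def embed(texts, dim=256):
    """Toy stand-in for a multilingual sentence encoder: hashes character
    trigrams into a fixed-size vector. A production system would use a
    pretrained model, but the few-shot logic below stays the same."""
    vectors = np.zeros((len(texts), dim))
    for i, text in enumerate(texts):
        t = text.lower()
        for j in range(len(t) - 2):
            h = int(hashlib.md5(t[j:j + 3].encode("utf-8")).hexdigest(), 16)
            vectors[i, h % dim] += 1.0
    norms = np.linalg.norm(vectors, axis=1, keepdims=True)
    return vectors / np.maximum(norms, 1e-9)

def build_prototypes(labeled_examples):
    """Average the embeddings of the handful of examples per policy label."""
    return {label: embed(texts).mean(axis=0)
            for label, texts in labeled_examples.items()}

def classify(post, prototypes):
    """Route a new post to the policy label with the most similar prototype."""
    v = embed([post])[0]
    return max(prototypes, key=lambda label: float(
        v @ prototypes[label] / (np.linalg.norm(prototypes[label]) + 1e-9)))

# A handful of labeled examples per category is all the "training data" needed.
few_shot_examples = {
    "scam": ["Send a small release fee to claim your prize",
             "Your account is locked, pay here to restore access"],
    "benign": ["Happy birthday, hope you have a great day",
               "Lovely photos from the trip, thanks for sharing"],
}
prototypes = build_prototypes(few_shot_examples)
print(classify("Pay a processing fee now to unlock your winnings", prototypes))
```

The appeal of this style of approach is that covering a new policy category only requires a few curated examples rather than thousands of freshly annotated items.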

But even Meta admits these are “early days” of what it describes as intelligent, generalized AI models. Tellingly, it also points out: “There’s a long road ahead before AI can comprehend dozens of pages of policy text and immediately know exactly how to enforce it.”

Elsewhere in the market, we see further positive signs of real progress by independent industry providers. These solutions understand context to a certain degree, work in any language, handle informal language, slang or dialect, and learn from human moderators as they work.
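One way such tools can “learn from human moderators as they work” is a simple human-in-the-loop feedback loop: every moderator decision becomes a fresh labeled example, and the model is periodically refreshed. The sketch below is a hypothetical continuation of the prototype classifier above, not any vendor’s actual product.

```python
def moderate_with_feedback(queue, labeled_examples, review_fn):
    """Hypothetical human-in-the-loop loop, reusing build_prototypes() and
    classify() from the sketch above: suggest a label for each post, let a
    human moderator (review_fn) confirm or correct it, then fold the decision
    back into the few-shot pool so the model keeps learning from reviewers."""
    prototypes = build_prototypes(labeled_examples)
    for post in queue:
        suggested = classify(post, prototypes)
        confirmed = review_fn(post, suggested)            # moderator's final call
        labeled_examples.setdefault(confirmed, []).append(post)
        prototypes = build_prototypes(labeled_examples)   # refresh with feedback
    return labeled_examples

# Example: a moderator who simply accepts every suggestion.
moderate_with_feedback(
    ["Pay here to restore your locked account"],
    few_shot_examples,
    review_fn=lambda post, suggested: suggested,
)
```

In practice the refresh would be batched, audited and quality-controlled, but the principle – reviewers continuously teaching the model as they work – is the same.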

Recognizing employers’ responsibility

The current reality is that machines can’t suffer distress from scanning content – but people can. And as global employers, online platforms have a responsibility to safeguard people’s well-being, and BPOs need to support them in that effort. Content moderators navigate complex legislation on the removal of offensive content, working to legal deadlines for taking down posts as well as to brands’ SLAs – not to mention acting on a moral imperative to protect users, particularly vulnerable groups like children.

But some BPOs have got it badly wrong. It was widely reported that several content moderators at a NASDAQ-listed BPO had allegedly suffered from secondary traumatic stress – the result of repeated exposure to trauma experienced by others – which tends to manifest as anxiety, sleep loss, loneliness, and dissociation.

Similarities can be drawn between moderators and journalists, sex-trafficking detectives, and emergency dispatchers: all of these professions are prone to developing PTSD-like symptoms, anxiety and depression.

Many moderators face a daily onslaught of disturbing posts filled with hate speech, violent attacks and graphic content, yet at some large companies they are offered little to no support or counselling. Some go on to develop mental health disorders and, even after leaving, still receive no support.

Setting the standard

At Webhelp, we’ve invested heavily in what we think is a leading approach to moderating online content for clients while prioritizing the mental health of our people. That means fully recognizing the unique pressures content moderators are under, and putting in place a raft of services and support mechanisms to proactively monitor and address them.

As a people-first company, it’s our stated mission to make sure every team member feels happy, valued, and recognized. It’s a philosophy that underpins everything we do. And because we understand that wellness is such a key factor in enabling our employees to give their best, we’ve designed our own custom-built program comprising wellbeing, technology and psychotherapy.

Well-being as a way of working

We learned that well-being was a concern for our employees, so in early 2021 we implemented more than 80 new initiatives – all aimed at protecting our content moderators’ physical and mental health.

A key part of that is being proactive: being able to recognize when things aren’t quite right or when one of our team members needs help.

We’ve introduced wellness centers where advisors can access psychological care and support onsite throughout the day, with 24/7 external helplines available outside working hours. This is complemented by our WebHEALTH program, which offers fitness workouts, massages and meditation sessions for all our teams. We have also put in place a tranche of preventative mental health programs.

We’re already seeing positive results, including a boost in loyalty and productivity. For example, since launching a scheme to encourage employees to share experiences, we’ve seen a 50% reduction in mental health-related absenteeism. Now, as part of our intention to expand these services, we’re in the process of enhancing our in-house solutions with an external workplace well-being partner.

Psychological solutions

Our state of mind and conscious thoughts have a huge bearing on how we feel physically. That’s why we’ve implemented several carefully interlinked facilities and services based around psychological well-being initiatives and counselling.

Most importantly, these services act as a crucial tool that helps us identify anyone who might be suffering from poor mental health and address any issues as quickly as possible. Whenever needed, we can offer follow-up support ranging from informal meetings with team leaders through to appointments with external psychologists.

Technology for good

Combining human expertise with technology is core to Webhelp’s value proposition.

On top of managing the amount of sensitive content each individual moderator sees daily, our AI-driven People Analytics tool serves as a sophisticated early-warning system that monitors moderators’ well-being in real time. The system watches for signs of potential difficulty, such as absence and dips in accuracy, and combines these with insights from daily questionnaires to identify even barely perceptible patterns of behavior that could be red flags. It assigns a ‘wellness score’ to each team member and can alert human team leaders when the score becomes too low, allowing them to be ready and well-prepared – if or when they need to step in.
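To illustrate how such an early-warning system can work, here is a minimal, hypothetical sketch: a few daily signals are blended into a single score, and an alert fires when the score drops below a threshold. The signal names, weights and threshold are assumptions made for illustration, not the actual People Analytics model.

```python
from dataclasses import dataclass

@dataclass
class DailySignals:
    # All fields and weights below are illustrative, not real metrics.
    absence_rate: float         # share of scheduled hours missed, 0..1
    accuracy_drop: float        # drop vs. personal baseline accuracy, 0..1
    questionnaire_score: float  # self-reported well-being, 0 (low) .. 1 (high)
    exposure_load: float        # share of queue that was graphic or sensitive, 0..1

def wellness_score(s: DailySignals) -> float:
    """Blend the daily signals into a single 0..1 score (higher = doing better).
    The weights are placeholders chosen only for the example."""
    risk = (0.30 * s.absence_rate
            + 0.30 * s.accuracy_drop
            + 0.25 * (1.0 - s.questionnaire_score)
            + 0.15 * s.exposure_load)
    return max(0.0, 1.0 - risk)

def maybe_alert(moderator_id: str, s: DailySignals, threshold: float = 0.6) -> None:
    """Prompt a team leader to check in when the score falls below the threshold."""
    score = wellness_score(s)
    if score < threshold:
        # A real system would notify the team leader through their usual tools;
        # here we simply print the alert.
        print(f"ALERT: check in with {moderator_id} (wellness score {score:.2f})")

maybe_alert("moderator-042",
            DailySignals(absence_rate=0.2, accuracy_drop=0.4,
                         questionnaire_score=0.3, exposure_load=0.7))
```

As in the description above, the output is simply a prompt for a human team leader to check in and step in if needed, not an automated decision about the moderator.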

Our number one job

We’re continually developing our technologies, but we can’t foresee a time when machines could completely replace the human touch and expertise of our people. So, we’ll continue to support our content moderators in doing an incredibly tough job.

 

Because protecting them means protecting the whole community.
