Trust and safety: Why it matters and how to get it right

The internet can be a risky place to be, both for consumers and for your brand—and you may need trust and safety policies and a dedicated team to protect both.

In addition to mitigating brand damage and customer churn, getting trust and safety right can help you create a better user experience, increase customer loyalty and boost your bottom line.

But it’s more complicated than it sounds. We’ve created this page to give you a crash course in trust and safety and share best practices for building an effective team.

Table of contents:

  1. What is trust and safety and why does it matter?
    • The evolution of trust and safety
  2. Building your trust and safety team
    • Roles and responsibilities
  3. Growing your trust and safety team
    • Securing buy-in at senior level
    • Securing the support of other departments
  4. How to embed trust and safety into your business processes
  5. Metrics and KPIs to measure success
  6. Case studies in trust and safety

What is trust and safety and why does it matter?

Consumers care about trust. A lot. In a 2019 survey, 81% of consumers said they need to be able to trust a brand before they buy from it. A 2021 survey of UK consumers, meanwhile, found that 71% will stop purchasing from a company altogether if their trust is broken.

If consumers don’t trust your platform to protect them from fraud and inappropriate and/or misleading content, they will spend their time and money elsewhere.

Conversely, if consumers feel your platform is a safe place to be, they’re more likely to feel positive about your brand, spend money and spread good word of mouth.

A trust and safety team will build and preserve consumer trust in your brand by ensuring that your online platform is a trustworthy and safe place for them to visit and interact with. They do this by drawing up and enforcing policies that regulate the behavior of your platform’s users and by investigating the root causes of major policy violations.

In the context of content moderation, trust and safety is a set of principles (typically drawn up, enforced and updated by the trust and safety team) to regulate the behavior of your platform users and prevent them from uploading content that violates the platform’s guidelines.

While automated content moderation systems can flag content that violates policy, human content moderators are still needed to evaluate the more ambiguous potential violations. Analysts are then needed to investigate the deeper causes behind violations and follow up on them, for example by reporting illegal activity to law enforcement.
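
To make this division of labor concrete, here is a minimal sketch of how flagged content might be routed. The thresholds, category names and routing logic below are illustrative assumptions, not a description of any particular platform’s system:

    # A minimal routing sketch (Python). Thresholds, categories and queue names
    # are illustrative assumptions, not a real platform's API.
    from dataclasses import dataclass

    AUTO_REMOVE_THRESHOLD = 0.95   # assumed: confident enough to act automatically
    HUMAN_REVIEW_THRESHOLD = 0.60  # assumed: ambiguous scores go to a human moderator
    ANALYST_CATEGORIES = {"credible_threat", "illegal_activity"}  # assumed escalation categories

    @dataclass
    class Flag:
        content_id: str
        category: str      # e.g. "spam", "hate_speech", "credible_threat"
        confidence: float  # classifier score between 0 and 1

    def route(flag: Flag) -> str:
        """Decide what happens to a piece of automatically flagged content."""
        if flag.confidence >= AUTO_REMOVE_THRESHOLD:
            action = "auto_remove"
        elif flag.confidence >= HUMAN_REVIEW_THRESHOLD:
            action = "human_review_queue"  # ambiguous: a moderator makes the call
        else:
            action = "no_action"           # too uncertain to act on automatically
        if flag.category in ANALYST_CATEGORIES and action != "no_action":
            action += "+analyst_escalation"  # analysts investigate root causes, may involve law enforcement
        return action

    print(route(Flag("post-123", "spam", 0.98)))             # auto_remove
    print(route(Flag("post-456", "hate_speech", 0.72)))      # human_review_queue
    print(route(Flag("post-789", "credible_threat", 0.97)))  # auto_remove+analyst_escalation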

Trust and safety teams also protect organizations from a range of security threats and scams. They verify customers’ identities and continuously evaluate customer actions and intentions.

These activities enable organizations to trust their customers—and customers to trust that other platform users are who they say they are and acting in good faith.

The evolution of trust and safety

Trust and safety as a business function evolved out of fraud prevention.

Fraud prevention teams focus on preventing financial fraud on their brand’s ecommerce sites, for example, bad actors using stolen credit cards to make purchases.

Trust and safety emerged when ecommerce companies expanded into services that involved providers and experiences not entirely under their control, such as third-party sellers. The function grew more with the explosion of user-generated content on online platforms.

User-generated content has helped many platforms thrive, but has also laid them (and their users) open to numerous abuses, including:

  • Fake user accounts giving fake product reviews that erode consumer trust
  • Websites that imitate a brand to trick users into giving up their credentials
  • Fake ‘special deal’ links containing malware, posted by bad actors
  • Inappropriate or illegal user-generated content that harms other users of your platform

As these sorts of threats proliferate, the growth of trust and safety as a business function is inevitable—as is its expansion into broader areas of focus, including the engineering of new products and the design of customer journeys.

The rise of the metaverse brings new challenges for trust and safety teams. For example, they may have to regulate conduct in real time as opposed to queued content, and must balance the need for creating a safe environment for metaverse users with the danger of turning off users and creators by over-regulating their experience.

Get your ebook: Content moderation & CX in the age of the metaverse

Considering the steadily rising level of threats online, most, if not all, online platforms need a trust and safety team to regulate user behavior sooner rather than later.

That’s easier said than done, of course. But don’t worry: we’re here to offer you advice on building the right team for your organization’s needs …

Building your trust and safety team

First things first: you need to figure out your organization’s trust and safety needs.

To do this, you may need to collect data or review data that’s already been gathered, conduct interviews with employees, survey certain subsets of your platform’s users, and speak to peers at other companies to understand universal issues—and which solutions will help.

Some users will react poorly when you remove content or ban users—or when you don’t. You can mitigate this by making your content standards and processes as transparent as possible. If your policies aren’t transparent, many users will decide it isn’t worth their time contributing content to your platform.

The fastest path to transparency: Publish a community policy that defines what’s acceptable and unacceptable behavior on your online platform, what is and isn’t a cause for action, which actions will be taken and when—and provide a clear mechanism for appeal.

Roles and responsibilities

What roles will you need to fill to run an effective trust and safety team?

Every team will be different according to the organization’s requirements—but some roles are common to all trust and safety teams.

Here’s a list of functions that might need to exist within your team, plus roles within each that you may need to fill.

Oversight

  • VP of trust and safety: Responsible for creating and implementing policies for user safety, setting broad guidelines for user behavior and establishing protocol in the event of crises

Operations team members

  • Director of operations: Oversees the day-to-day operations of the team, including quality assurance, training, capacity and workflow management, change delivery, developing review protocol, crisis and incident response and management
  • Project manager: Manages content moderators (or, if content moderation is outsourced, the vendor relationship) and works with product, engineering, communications and legal to develop scalable processes to support content moderation

Content moderation team

  • Content policy manager: Develops detailed policies to determine what is and is not allowed on your platform in accordance with your organization’s values and legal and regulatory requirements
  • Product policy manager: Develops and refines principles and policies specific to your organization’s various products, e.g., ads and sponsored content
  • Content moderators: The front-line moderators responsible for dealing with user-generated content that violates your platform’s guidelines, often flagged by an automated content moderation system

Public policy and communications

  • Public policy manager: Builds and maintains partnerships with external stakeholders (such as NGOs, governments and regulatory bodies), advises internal teams to guide the development of products, services and policies, and shapes public and political opinion about your platform

Engineering team

  • Data scientist: Responsible for building measurement methods to understand the extent of policy violations on the site and the impact of content moderation; may also analyze data to help proactively predict, detect and curb violations
  • Software engineer: Responsible for developing all the technical aspects of content moderation and enforcement, such as machine learning models to scale and automate enforcement, and the systems that support user-facing reporting

Legal team

  • General counsel: Responsible for dealing with requests from law enforcement, regulatory bodies and government agencies—as well as identifying potential issues and advising on legal risks

Unless you have an unusually supportive C-suite, you’ll probably have to prove the value of what you’re doing before you can start to build a really comprehensive team.

Our next section looks at how you might go about proving that value, securing that buy-in—and getting your organization on the path to trust and safety success.

Growing your trust and safety team

To support the level of recruitment needed to build and grow your team, you’ll need to secure buy-in from key stakeholders, both at the executive level and across other departments.

Securing buy-in at senior level

Here are a few ideas for proving the value of trust and safety to the C-suite:

  • Share your findings regarding your organization’s trust and safety needs. If you’ve done a good job researching this, it should be easy to communicate the value of your initiatives to your organization.
  • Obtain metrics as early as possible showing that your team’s activities are saving your organization money by reducing fraud.
  • Create demand for trust and safety by showing product managers how it can improve their user experiences without them having to further burden their engineering teams or hire additional ops.

It should be noted that many organizations already recognize the importance of trust and safety, and we’re increasingly seeing trust and safety officers appointed at the executive level.

Trust and safety officers are responsible for the safety of the organization’s users. Depending on the scope of your team’s activities, the officer’s remit might cover a wide range of departments including product, engineering, marketing, legal, human resources, privacy and security.

Having a single individual oversee this range of activities helps ensure safety and platform integrity don’t get lost in organizational silos.

But whether or not your organization appoints an executive-level trust and safety specialist, it’s vital to invest in gaining buy-in from a range of stakeholders within the business.

Securing the support of other departments

To bring people throughout the business on board, you need to frame the objectives of trust and safety in terms that others can easily understand.

Make sure you’ve got people on your team who can liaise with other departments and products within your organization—and spread the word that this isn’t just about preventing credit card fraud and spam. (As important as that is.)

No: trust and safety programs are about creating a better experience for your platform users, improving and maintaining your brand’s reputation, increasing user loyalty and customer lifetime value, growing your platform’s audience and reducing resource-consuming complaints.

Everybody needs to understand this—and how your team’s work relates specifically to their roles and responsibilities—in order for principles to be consistently enforced.

Here are a few examples of functions and teams that will benefit from your work or have trust and safety responsibilities of their own.

Product and engineering teams are key to the success of your initiatives. They’re responsible for developing automated content moderation algorithms as well as embedding mechanisms into the products they develop that meet trust and safety guidelines and protect the users of those products. If their product isn’t trusted by users, it will fail.

Customer/user experience teams can sometimes view the mechanisms required by trust and safety with suspicion, fearing that they add needless friction to customer journeys. Your team should reassure them that you share these concerns; after all, a primary objective of trust and safety is to create better customer experiences.

Making those experiences more trustworthy doesn’t have to mean adding friction. Customer journeys can be tailored according to trust and risk assessments of individual customers, so that, for example, only highly trusted customers can access one-click checkout.
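
To make the idea concrete, here is a minimal sketch of how a checkout journey might be tiered by a customer’s trust score. The tiers, thresholds and journey steps are illustrative assumptions rather than a recommended design:

    # Illustrative only: trust tiers, thresholds and journey steps are assumptions.
    def checkout_steps(trust_score: float) -> list[str]:
        """Return the checkout journey for a customer, given a 0-1 trust score."""
        if trust_score >= 0.9:
            # Highly trusted customers (long history, verified identity) skip friction
            return ["one_click_checkout"]
        if trust_score >= 0.5:
            # Moderately trusted customers see the standard journey
            return ["cart_review", "payment", "confirmation"]
        # New or higher-risk customers get extra verification steps
        return ["cart_review", "payment", "identity_verification", "manual_review", "confirmation"]

    print(checkout_steps(0.95))  # ['one_click_checkout']
    print(checkout_steps(0.30))  # standard journey plus extra verification steps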

Your marketing function is, of course, responsible for managing public perception of the brand—and maintaining trust and safety for your platform users is essential for keeping that perception positive.

Similarly, your sales teams should be aware that securing ad revenue depends on advertisers being happy for their campaigns to appear on your platform alongside your user-generated content; they won’t want their promotions to appear next to anything problematic. So for them, too, trust and safety should be seen as a priority.

Both marketing and sales need to contribute to trust and safety by protecting the privacy of customers and using their data responsibly.

Ultimately, your online platform lives or dies on the trust its users have in it. Trust and safety therefore potentially touches every part of your business.

How to embed trust and safety into your business processes

Every business is different, and it’s impossible to generalize about which of your organization’s business processes will need to have trust and safety embedded into them.

Anywhere user-generated content (including comments, reviews and product listings) touches your business processes is a place where trust and safety principles need to be applied. Trust and safety must also be a consideration when concepting and designing new processes, capabilities or applications.

Designers should give serious thought to where things could go wrong and harm your platform’s users. Think about where there might be vulnerabilities to bad actors, what measures can be introduced to reduce these vulnerabilities and what needs to happen in the event of a crisis.

And if your organization operates internationally, don’t forget there’s an entire world of potential threats out there. Your trust and safety policies may change from country to country, depending on what’s seen as acceptable or appropriate in a given culture.

Your policies need to be supported in all the languages you operate in (your community guidelines, for example, should be translated). You will also need content moderators in every country and language you operate in who can catch the linguistic and cultural nuances that mark user-generated content as appropriate or inappropriate.

Metrics and KPIs

Measuring the success of your team’s strategies and activities is critical, both to optimize team performance and to secure ongoing buy-in from other stakeholders.

Here are some metrics that you should track to determine the impact of your team and initiatives:

  • Proportion of your platform users exposed to violations
  • Proportion of users who violate guidelines
  • Percentage of content flagged as inappropriate
  • How accurately your automated content moderation systems are categorizing content (and how this changes over time as these systems are fed more data)
  • Your human content moderators’ response times between activity/content being flagged by automated systems and the ticket being closed
  • Tickets responded to by individual moderators per hour
  • Average time to mitigate security threats
  • The impact of implementing your trust and safety initiatives on customer satisfaction (tracked via metrics such as NPS and average customer review scores)

This list should give you a good idea of how to measure the success of your trust and safety initiatives.
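
As an illustration of how a couple of these metrics might be computed from moderation records, here is a minimal sketch. The ticket fields and sample values are assumptions for demonstration only:

    # Assumed ticket structure: when content was flagged, when the ticket was closed,
    # the automated system's label and the human moderator's final label.
    from datetime import datetime

    tickets = [
        {"flagged_at": datetime(2024, 1, 1, 9, 0),  "closed_at": datetime(2024, 1, 1, 9, 45),
         "auto_label": "violation", "moderator_label": "violation"},
        {"flagged_at": datetime(2024, 1, 1, 10, 0), "closed_at": datetime(2024, 1, 1, 11, 30),
         "auto_label": "violation", "moderator_label": "no_violation"},
    ]

    # Automated moderation accuracy, measured against moderator decisions
    agreed = sum(t["auto_label"] == t["moderator_label"] for t in tickets)
    accuracy = agreed / len(tickets)

    # Average response time from flag to ticket closure, in minutes
    avg_minutes = sum(
        (t["closed_at"] - t["flagged_at"]).total_seconds() / 60 for t in tickets
    ) / len(tickets)

    print(f"Automated moderation accuracy: {accuracy:.0%}")      # 50%
    print(f"Average flag-to-close time: {avg_minutes:.0f} min")  # 68 min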

But what does success actually look like in practice? Let’s look at some shining examples.

Case studies in trust and safety

Let’s look at a few examples of how—with our help—our clients have enforced trust and safety policies across their global platforms.

Moderating multilingual content for a leading video social platform

As the top platform for short music videos online, this brand needed to deliver accurate and timely content moderation in a fast-growing, multilingual Indian market.

That’s where we came in. By combining global delivery models with local regulatory and cultural knowledge (all underpinned by AI), we delivered tailored moderation for local users throughout our client’s Indian territories.

We recruited a team of 1,800 moderators from our pool of content moderators in India and Jordan, selecting for language capability and cultural awareness as well as resilience, speed, accuracy and the mindset needed to apply our client’s trust and safety policies.

Our moderators were continually trained and retrained in our client’s policies, values, culture and platform guidelines. We also looked after moderators’ wellbeing to ensure they remained optimistic and dialed in, even after viewing large volumes of often-disturbing content. We offered psychologists on stand-by, a sports trainer and yoga coach, day trips, a chat group and even a Zen relaxation room.

The result? In the first four months of production, we decreased our client’s moderation team size by 44% and achieved 99.96% accuracy of moderation decisions. And what’s more, we made our project fun and stress-free to work on, with 74% of our team saying they’d recommend this work to friends and family.

Helping The Sandbox manage interaction in the metaverse

The Sandbox is the largest seller of virtual real estate, and it provides space and tools to create virtual experiences for users. Brands and other creators can buy land and design their own experiences, such as parties, concerts and interactive clothing stores. The Sandbox knows that creating a safe and comfortable experience is essential to attracting users, so we’ve partnered to provide teams of ambassadors to serve as guides and security within the experience, as well as traditional moderation services.

Make trust and safety the firm foundation of your customer experience

We hope this guide has helped you think about how to apply trust and safety principles to your online platform and organization’s activities—and hopefully replicate the success of the organizations we’ve spotlighted.

There’s a lot to consider here. The world is changing fast, and the largest and most innovative organizations often turn to partners who are a step ahead.

Webhelp is the first BPO to partner with the Trust & Safety Professional Association (TSPA), which supports the global community of trust and safety professionals who develop and enforce principles of acceptable behavior and content online.

Our content moderators, all TSPA members, provide moderation services to more than 250 clients across the globe in over 20 languages, making their online platforms trusted and safe for billions of users.

Get in Touch