social media

Webhelp Ranked Highly Across All Aspects of Social Media by Leading Analyst NelsonHall


Firm announces host of analyst accolades 

Paris, France, 11 February 2021

Webhelp, the leading global customer experience (CX) and business solutions provider, has been recognized by top-ranking industry analyst NelsonHall for its social media capabilities.

The firm was recognized across three core areas: customer care and sales capability; online reputation management capability; and content moderation, trust and safety capability.  

NelsonHall’s Evaluation & Assessment Tool (NEAT), part of a “speed-to-source” initiative, enables strategic sourcing managers to assess vendors’ capabilities to identify the industry’s best performers during the sourcing selection process. The methodology specifically evaluates the quality of players’ abilities in several categories, such as technology and tools, service innovation, geographic footprint, and scalability, amongst others. 

“We are thrilled that NelsonHall has recognized our social media capabilities. Now more than ever, and in an increasingly digital world, businesses need to deliver high-quality and trustworthy customer experience interactions. Webhelp has a diverse range of digitally enabled services, which allow us to support global brands with their social media interactions and reputation and work with social media platforms and marketplaces themselves to support a safer online environment for users. We are very proud of our achievements in this space,” said Webhelp Co-Founder Olivier Duha.  

Ivan Kotzev, NelsonHall CX Services analyst, said:

“Webhelp’s strong performance in social media support and sales is built on a fundament of proprietary technology, channel management experience, and CX consulting capability. Notable is the company’s expertise in lead generation and sales activities on social channels, an increasing priority for brands looking to meet their customers on these channels.” 

Webhelp’s extensive capabilities and growing global footprint continue to be validated by the analyst community, with esteemed U.S.-based analyst Gartner naming Webhelp as a Niche Player. This builds on the analyst’s recognition of Webhelp as a Rising Star in 2019/20, as the business further establishes its reputation as an industry disruptor and a credible alternative to the more traditional players in the North American market.

These recent accolades amplify Webhelp’s current positioning by global analyst Everest Group as a Leader in Customer Experience Management (CXM) in its PEAK Matrix® Assessment 2020, as well as a Leader in its CXM in Europe, Middle East, and Africa (EMEA) Services PEAK Matrix, recognizing Webhelp as being particularly strong in terms of both vision and capability. The Everest Group positioning extends to a new report where Webhelp is recognized as a Major Contender in work-from-home solutions amongst other global players.  

Everest Group wrote in its WAHA (Work-at-Home Agent) CXM Services PEAK Matrix Assessment:

“Webhelp is driving digital transformation through cloud adoption, CX consulting, and automation by partnering with technology vendors such as Amazon Connect, MS Azure, and UiPath, utilizing their platforms as per clients’ requirements.”

 


Content Management

Americans distrust tech companies to moderate content online

Where do we draw the line between freedom of speech and allowing misinformation to be broadcasted online?

Content moderation is crucial for social platforms to ensure a trustworthy relationship with their users. Without moderators, billions of social media users would be exposed to potentially harmful content every day.

Government control – trusting the system

User-generated content has many nuances, and there are concerns that governments will take control over the content posted on media platforms, undermining the platforms’ purpose of sharing content freely (within the guidelines).

For example, the U.S. Government moved to ban the social media platform TikTok, which has over 80 million daily users in the U.S. The platform has since won a preliminary injunction that allows the app to be used and downloaded from the U.S. app store.

This precedent shows that if the government had more control, it would be quick to impose such regulations on these platforms. However, this is unlikely to happen, as political figures use social media platforms to connect with their constituents, communicate their views, and advocate for political campaigns.

Free Speech vs Content Moderation?

According to a Gallup and Knight Foundation survey, “55% of Americans say that social media companies are not tough enough, with only 25% saying they get it right”.
For instance, Trump’s behavior on Facebook, Twitter, and other social platforms allowed him to spread harmful propaganda that can influence political views and undermine election campaigns, as well as provoke or incite violence by sharing false and deceptive information with the public, as witnessed during his 2020 election campaign and the more recent events at the US Capitol involving Trump supporters.

The violent storming of the US Capitol led big tech companies like Twitter and Facebook to suspend Donald Trump from their platforms due to his alleged role in inciting violence and sharing misinformation, with many other players banning him permanently. The platform Parler, which has a significant user base of Donald Trump supporters, was removed from major providers’ app stores after they accused it of failing to police violent content.

After Trump’s 12-hour ban on Twitter was lifted, he continued to violate its policy. Twitter concluded that his tweets during the incident were against its Glorification of Violence policy and left it with no choice but to permanently suspend his account.

Because an individual with this level of influence was given multiple chances, users continue to express the view that big tech companies are being taken for a ride and are not doing enough to stop the virality of harmful content. Consequently, people no longer trust the platforms’ moderation policies and algorithms to display authentic, unbiased content.

Trusting the system

Controversially, US online intermediaries are under no legal obligation to monitor content: “social media companies are under no legal obligation to monitor harmful speech, and governments can’t really make them or compel them to offer things like counter speech without running into First Amendment roadblocks” (Forbes, 2020).

Section 230, part of the Communications Decency Act, protects freedom of expression online. In comparison to other countries, the U.S. provides online platforms with immunity from legal liability, with few exceptions: under Section 230(c)(1), “they can avoid liability, and object to regulation as they claim to be editors of speech”. There are many caveats and exceptions, particularly when it comes to interpreting images and videos.

When it comes to accountability, therefore, this legislation limits the extent to which online intermediaries can be held liable for user-generated content on their platforms. It does not define what counts as tortious speech or harmful or misleading information. Rather, big tech companies are left to outline this in their policies and to do the right thing by their users.

Moderating content

Early last year, Twitter introduced new labels on Tweets containing “synthetic and manipulated media”; likewise, Facebook created labels that flag harmful or unverified information.
Although these companies continue to introduce new tools to highlight harmful content, it is important for moderators to have the correct tools and expertise to moderate sensitive content, and not to rely solely on technology. Without the right guidance and principles, misinformation and propaganda will fall through the cracks.

Learn more about our Digital Services, or contact us to find out more.


Read the 6th edition of our OneShot magazine on Social Engagement

Our 6th edition of the OneShot is here!

Download your OneShot Magazine

Tick tock tick tock…

Time is ticking away – now is the time to start focusing on social engagement.

Social commitment means becoming aware but, above all, taking action and standing up against inequalities.

Taking action can be as simple as following these recipes to be more human, more green, and more equal. Not only are they good for you, but for others too.

Compelling your company to pledge and commit to the fight for social and environmental change, such as the global warming crisis or social justice and equality, is a vital step to take now for a brighter future.

And it all starts with knowledge. So, here’s to your learning with the latest edition of the OneShot.

Dare to be ‘woke’ and be a driving force for change?


legal framework

Legal frameworks of content moderation around the world (Part 3)


With an initial goal of curbing fake news and online hate, the NetzDG has, unfortunately, created a blueprint for internet censorship around the globe.

Turkey
For many years now, freedom of speech and press freedom have been heavily suppressed in Turkey, which is ranked 154th out of 180 countries in the RSF 2020 World Press Freedom Index (Source: www.rsf.org). In 2018, Turkish courts blocked access to around 3,000 articles highlighting political corruption and human rights violations, adding to a track record of frequently blocking social media platforms such as Facebook, Twitter, and YouTube.

On 29th July, the Turkish parliament enacted a new law that was hastily ushered in without considering input from the opposition or other stakeholders. Once approved by President Erdogan, the law mandated social media platforms to appoint a local representative in Turkey. However, activists are severely concerned that the law is designed to further government censorship and surveillance.

Australia
Following the gruesome terror attack on two mosques in Christchurch (New Zealand), which was carried out by an Australian in 2019, a bill amending the Australian criminal code was passed. The amendments hold service providers criminally liable for failure to instantly remove violent content that is shared on their platforms.

Despite similarities with the NetzDG, the main differences are the take-down timeframe and the subject matter of the illegal content. The amendment faced criticism from media companies, who stated it could lead to censorship of legitimate content due to the incentive it creates to over-screen users. Others called for the government to address the problem at its root, violence and anti-Muslim hatred, rather than holding social media platforms accountable for the manifestation of such problems.

Nigeria
On 5th November 2019, an Anti-Social Media Bill was proposed by the senate of the Federal Republic of Nigeria to punish the peddling of malicious information. The campaign has been backed by the Northern States Governors’ Forum (NSGF), held with traditional rulers, government officials, and leaders of the National Assembly.

Following the events at Lekki Toll Gate on the night of 20th October 2020, which turned fatal when police broke up peaceful #EndSARS protests with live ammunition, the infringement of freedom of speech amid media censorship continues to oppress fundamental human rights and has been condemned by Amnesty International. Nigerian police have since denied the events, despite evidence from people who streamed them live on their social media platforms. (Source: amnesty.org)

China
With a more sophisticated censorship approach, China’s government blocks websites, IP addresses, and URLs while monitoring internet access. Online service providers are expected to authenticate the real names of online users according to the Cyber Security Law (CSL), in effect since 1st June 2017. Additionally, the CSL mandates all network operators to closely screen user-generated content and filter out information that existing laws or administrative regulations prohibit from being published or relayed.

Other countries with heavy internet censorship through political media and social media restrictions include Iran, North Korea, Somalia, Ethiopia amidst its political unrest, and several Eastern European countries such as Moldova.

Following the recently concluded U.S. elections against a highly controversial and polarizing incumbent, President Trump is yet to concede. Instead, he has been making widespread allegations of voter fraud and raising concerns about the integrity of the process. Social media platforms like Twitter and Facebook continue to struggle to screen fake and misleading content.

Because the line between permitted and prohibited speech is so thin, enacting a universal solution to govern content moderation globally is a daunting task. When relying on automated decision-making tools, moderation systems are prone to errors. Online platforms are hence forced to weigh the amount of collateral damage that would be deemed “legitimate” against the amount of harmful content that would slip through the cracks. Stronger enforcement means less hate and fake news will be shared, but it also means a greater probability of flagging, for example, activists protesting police brutality or journalists exposing injustice and corruption in those particular governments.

This article is the final part of a series. If you missed the first part, read it here.

Want to discuss the specificities in your country? Get in touch with our experts to find out more.

Talk to us today

legal framework

Legal frameworks of content moderation around the world (Part 2)


Internationally, two documents provide protection for freedom of expression. The first is Article 19 of the Universal Declaration of Human Rights (UDHR), and the second is Article 19 of the International Covenant on Civil and Political Rights (ICCPR). Both recognize free speech and free expression as fundamental human rights and caution against unjustly infringing on them.

By obliging social media platforms to delete illegal content within 24 hours or face exorbitant fines, the NetzDG triggered fierce debates and concerns regarding its ramifications for freedom of expression, including:

  • The Streisand effect (detrimental outcomes of censorship)
  • Accidental removal of legal content
  • Privatized law enforcement
  • Unnecessary sanctions
  • Global Internet censorship through authoritarian regimes

At least 13 different countries have enacted or outlined laws similar to the NetzDG model. According to Freedom House’s Freedom on the Net, five of them (Honduras, Venezuela, Vietnam, Russia, and Belarus) are ranked as “not free”, five are ranked as “partly free” (Singapore, Malaysia, Philippines, Kenya, and India), and the remaining three are categorized as “free” (France, UK, and Australia) (Source: freedomhouse.org). More recently, Turkey was added to the list, having passed the worst version of the NetzDG yet, according to the Electronic Frontier Foundation. (Source: eff.org)

United States
According to a study conducted last year, 85% of daily active Facebook users live outside of the U.S. and Canada, and 80% of YouTube users and 79% of Twitter accounts are mainly from emerging markets such as Brazil, India, and Indonesia. (Source: www.omnicoreagency.com)

While the majority of their users are based outside the country, most of these big tech companies are headquartered in the United States. As a result, they are essentially governed by U.S. law. The First Amendment of the U.S. Constitution and Section 230 are the two principal legal frameworks that regulate online freedom of expression.

In the U.S., the First Amendment prevents the government from infringing on the right to free speech. However, tech companies are not similarly bound by it. Consequently, they can enact codes of conduct and policies that often restrict speech the government could not prohibit under the First Amendment. For instance, Tumblr and Facebook prohibit the publication of graphic nudity on their platforms.

Yet under First Amendment law, such a prohibition by the government would be unconstitutional. And because Section 230 of the Communications Decency Act protects social media networks, website operators, and other intermediaries, they are not held liable for the user-generated content on their platforms and have been able to thrive.

United Kingdom
To combat detrimental content, the U.K. released a White Paper last year outlining multiple requirements: internet companies must keep their platforms safe, can be held accountable for the content published on their platforms, and are liable to pay fines if they fail to do so. (Source: assets.publishing.service.gov.uk)

This article is the second part of a series. If you missed the first part, read it here.

Want to discuss the specificities in your country? Get in touch with our experts to find out more.

Talk to us today

legal framework

Legal frameworks of content moderation around the world (Part 1)


Following increased pressure to protect the audience from harmful content, both large and small online platforms that mainly host User Generated Content have come under intense scrutiny from governments around the globe.
Depending on their size and capacity, online platforms deploy one of two content moderation models to tackle this issue:

  1. Centralized content moderation – using this approach, companies establish a wide range of content policies they apply on a global scale with exceptions carved out to safeguard their compliance with laws in different jurisdictions. These content policies are implemented by centralized moderators who are trained, managed, and directed as such. Facebook and YouTube are examples of big internet platform companies using this model.
  2. Decentralized content moderation – this model tasks the users with the responsibility of enforcing the policies themselves. Diverse by nature, this approach enables platforms like Reddit to give their users a set of global policies that serve as a guiding framework.

Centralized models help companies promote consistency in the adoption of content policies, while decentralized models allow more localized, context-specific, and culture-specific moderation to take place, encouraging a diversity of opinions on a platform.
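The contrast between the two models can be sketched in a few lines of code. This is a purely illustrative toy model, not any platform's actual implementation; the policy names, rules, and flag threshold are all hypothetical.

```python
def centralized_moderate(post, policies):
    """Centralized model: one global policy set, applied uniformly
    to every post by trained central moderators."""
    return [name for name, check in policies.items() if check(post)]

def decentralized_moderate(post, community_rules, user_flags, threshold=3):
    """Decentralized model: each community enforces its own rules,
    and enough user flags mark a post for action."""
    hits = [name for name, check in community_rules.items() if check(post)]
    if user_flags >= threshold:
        hits.append("user_flagged")
    return hits

# Hypothetical global policies (crude keyword checks for illustration only)
POLICIES = {
    "hate_speech": lambda p: "hate" in p.lower(),
    "spam": lambda p: p.count("http") > 3,
}

# Centralized: the global policy set decides
print(centralized_moderate("click http http http http", POLICIES))

# Decentralized: a community subset of rules plus user flagging decides
print(decentralized_moderate("borderline post", {"hate_speech": POLICIES["hate_speech"]}, user_flags=5))
```

The trade-off described above shows up directly: the centralized function guarantees every post meets the same bar, while the decentralized one lets each community pick its own rule subset and lean on its users' judgment.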
After failed attempts to push social media platforms to self-regulate, the German parliament approved the Network Enforcement Act (NetzDG) on 30th June 2017. Also known as the “hate speech law”, the NetzDG took full effect from 1st January 2018. It directs platforms to delete terrorist content, hate speech, and other illegal content within 24 hours of being flagged, or risk hefty fines.

While the NetzDG encourages transparency and accountability from social media platforms, it also raises concerns regarding violation of the e-Commerce Directive and fundamental human rights such as freedom of expression. In a statement sent to the German parliament in 2017, Facebook considered the NetzDG draft incompatible with the German constitution, stating: “It would have the effect of transferring responsibility for complex legal decisions from public authorities to private companies”. (Source: businessinsider.com)

Following criticism from a wide array of activists, social networks, politicians, the EU Commission, the UN, and scholars, the NetzDG remains a controversial law that should be approached with caution. Unintentionally, Germany created a prototype for global online censorship: highly authoritarian states have adapted the NetzDG to restrict freedom of speech on the internet, pushing their illiberal agendas camouflaged as moderation policies.

Find out more about this topic

This article is part of a series looking at legal frameworks around the world. The series focuses on legal amendments to moderate user-generated content in the following countries: U.S., U.K., Turkey, Australia, Nigeria, and China.

Want to discuss the specificities in your country? Get in touch with our experts to find out more.

Talk to us today

OneShot – Win back trust in the era of fake news

We take a look at how the social media landscape is overshadowed by scandals with François-Bernard Huyghe, a specialist in geopolitics, director of research at Iris, expert in influence and disinformation.

Fake news, fake followers, fake influencers, deep fakes, etc. Political currents, companies, and private individuals fight to spread their representation of reality and the courses of action to take. The craziest points of view – conspiracy theories, flat-earthers, anti-vaxxers, and other trolls – bring together highly active small communities whose impact is often destructive. Digital technology brings with it an arsenal of highly sophisticated disinformation tools that are constantly improving and increasingly easy to access. Is there a place for trust among all this?

Fake news, fake followers, fake influencers, deep fake… How did we end up here?
François-Bernard Huyghe: These Anglicisms are recent and numerous: I listed 60 in my essay on fake news (1). They can be found in journalism, politics, geopolitics, and even everyday conversation; they are now part of our reality. Of course, lies and deception go back a long way, but it was in 2016 that concern became widespread, with the election of Trump, Brexit, the Facebook-Cambridge Analytica scandal, the elections in Catalonia, in Italy, etc. We have thus granted great political power to the spreading of fake news – and other ‘alternative facts’ – on social media, to the point that it is a threat to democracies, the media, and ultimately to trust as a common socio-economic foundation. We have moved into the era of post-truth. And the context of Covid-19 confirms this point of view; the WHO even talks of an ‘infodemic’, with harmful consequences.

Where is trust in social networks and media?
F.-B. H.: Trust in social media has flipped; we’ve gone from a concept, or from a meme, “social networks will establish democracy everywhere”, to “social networks are bringing down democracies”. We started with the idea that social networks provided a freedom of speech that would trouble the powers that be – those of governments and brands, in particular. And this would in turn lead to more lucid citizen-consumers, saner politicians and better-quality products and services. Ultimately it is the opposite that has become widespread. In the case of brands, other negative factors also arose, such as Dieselgate, the leak of personal data, its commercial exploitation, the opaque role of artificial intelligence, fake customer reviews, click farms, etc.

What are the consequences of these disinformation practices for the public?
F.-B. H.: Gafam and social media regularly report on the thousands of harmful messages or fake news that they delete. There is also corrective intervention from fact-checking experts or bodies, such as AFP Fact Check, partly financed by Facebook, whose new role is “to refute anything that did not happen”. However, despite this refutation, those who manipulate opinions are well aware that there is still some doubt. As Hannah Arendt already said, “When everyone lies to you constantly, the result is not that you believe these lies but no one believes anything anymore… And with such a people, you can do whatever you want.” Ultimately, the most serious aspect is not any particular fake news article; it is the torrent of them that has had a toxic impact on our minds. Citizen-consumers find themselves overwhelmed with doubt, with an inability to learn and act, which leads to frustration or even anger. Take a look at the USA, where Trump has attacked Twitter, while the social network was doing its job of moderating; it is like the start of a soap opera about freedom to express anything and everything, in other words, to misinform with impunity.

What kind of influence is legitimate in the eyes of the public?
F.-B. H.: We have gone from a time when mass media would publish a message in line with that of esteemed opinion leaders, and we have now arrived – through this crisis of general trust – at a strong legitimacy of nano- and micro-influencers. Over prestige and authority, we now prefer proximity; people who talk to me should be people like me, and we should find ourselves on a level playing field. Hence, also, a form of insularity. The citizen-consumer ends up stuck between individualism and tribalism, because a tribe is still necessary in order to feel valued in one's choices and identity. Consequently, the speeches that get through are often not those of experts or established authorities; instead, they are the simple opinions or raw emotions of ‘real’ people.

How can we rebuild trust?
F.-B. H.: On the part of companies and brands, it seems wiser to establish horizontal and genuine links with consumers, rather than trying to push down top-down messages in ‘the old style’. This probably happens through the human dimension, proximity, localness, transparency, proof, the personalisation of relationships, and through approaches that are more micro than macro. But, in a context of economic revival, they will have to ask questions about a shift in production, about real needs versus luxuries and ostentatiousness, about meaning and values, about the company’s social and ecological role, etc. Is it time for certain brands to make an ethical change and become companies with a mission? It is worth thinking about.

(1) The term fake news, ‘infox’ or ‘fausses nouvelles’ in French, refers to untruthful information that is spread in order to manipulate or mislead the public.

“Over prestige and authority, we now prefer proximity.”

François-Bernard Huyghe

 

Read the full OneShot article here


Content Management

The DSA – a newfound content moderator

Strengthening the responsibility towards online platforms, the DSA could be the newfound content moderator.

As the digital economy continues to grow and evolve rapidly, it becomes more imperative for platforms to manage the content they have on their websites.

The Digital Services Act (DSA) is part of the EU’s digital strategy to regulate the online ecosystem, clarifying rules that propose a new liability framework for online platforms and the content hosted on their sites.

We might wonder how this differs from GDPR. GDPR puts the protection of customers’ personal data at the forefront of every business: it is the EU legislation that regulates how organizations use personal data, but it does not regulate the content shown online to customers. This is where the DSA comes into action.

The European Commission announced the DSA as being a package formed of two pillars proposing the following new frameworks:

  1. New rules framing the responsibilities for digital services – protecting customers in the digital ecosystem when it comes to user-generated content and new risks that may arise on these platforms
  2. Ex-ante rules for large online platforms that act as gatekeepers, to ensure platforms act fairly and can be challenged by new entrants – so the market stays competitive and innovative, and customers get the widest choice.

This is not to say it does not come with its own limitations and challenges. These new provisions can help users identify issues and risks that are indistinct under the current regulations, and they draw more attention to platforms’ guidelines and safety measures.

It is crucial these online intermediaries take responsibility and introduce trained content moderators to avoid these potential faults.

Growing liability for online platforms and digital gatekeepers

Online intermediaries have been protected by the e-Commerce Directive against content liability, enabling these providers to establish their own content moderation regulations.

Social media is one of the most popular ways for users to spend their time and engage with people. It has become an integrated communication tool for people to connect with others and express public opinions, from personal views on politics to products they recommend (49% of consumers depend on influencer recommendations on social media, according to Oberlo). Statista states that Facebook has 2.7 billion monthly active users vs Instagram with 1 billion monthly active users.

Daily social media user-generated content statistics show:

  • Every 60 seconds there are more than 317,000 status updates and 54,000 links shared on Facebook
  • 94 million photos and videos are shared on Instagram

The virality of content can be constructive as well as destructive, but the current regulation of these large, interdependent platforms does not allow for legal reprisals and liability.

According to the DSA, a new standard for large platforms that act as digital gatekeepers will give tech regulators the power to enforce rules where content is deemed illegal or inflammatory, creating a fairer and more competitive market for online platforms in the EU.

Implementing these new standards requires content management services that support focusing on the right content for your business. Poorly handled owned content can be pernicious and potentially discriminatory.

Adapting the DSA on a global scale

Online platforms are key drivers of digital trade, innovation, and globalization.

The EU is an attractive market, which motivated the GDPR’s scope to become transnational: compliance is required whenever companies handle EU citizens’ personal data, forcing international firms to adapt to these regulations.

As with the DSA, the intention is to improve the supervision of digital services and to help protect EU citizens across the single market.

The framework offers benefits to sellers and consumers, and appeals to different gatekeepers in the market as the digital ecosystem continues to grow and broaden its reach. The DSA introduces broad derogations at members’ discretion – the UK is not obliged to follow these regulations due to Brexit, as the UK’s transition period ends at the end of 2020. Nonetheless, this package requires harmonization between the UK, the EU, and even international platforms to strike the balance of legal responsibilities to protect customers.

Our services

The DSA invites more regulation for online platforms, but this cannot be achieved with the way content is currently moderated. It requires dexterity and vigour.

Putting our people and our clients at the heart of what we do, to build trust and a safe user experience, is part of our think-human approach – 74% of our operators recommend Webhelp as an employer (NPS). Our content moderators are trusted to detect and assess issues in user-generated content, while our content management service finds the right content for your brand. We manage 1 billion pieces of content in 25 languages every year, with flexible operations onsite and homeworking. This role is demanding and requires attentiveness, so it is important for us to provide our content moderators with mental health support.

We focus on our robust processes and in-house technological solutions to ensure a smooth delivery of outcomes and a high productivity rate to deliver on objectives.

Are you interested in how the DSA may affect your organization? Talk to us today about how Webhelp’s Content Management services can help you.


OneShot - Three opinions

Hervé Rigault, Director General for France of Netino by Webhelp


The notion of the key opinion leader is coming back into fashion. Previously, this role was held by journalists, speakers, analysts, and so on. Yet journalists no longer have the time to do research, and many experts lack neutrality. Influencers, meanwhile, have learned to build solid audiences, mainly through blogs and curation, but also through social media. This phenomenon is seen in both B2C and B2B. LinkedIn's recent and considerable growth, for example, is the result of its transformation: the network has become a very influential social media platform. So it is no longer enough to be an expert to become an influencer; you need a vision, a certain talent for expression, a taste for sharing, a dynamic network, and so on. Brands can benefit from this through attentive listening.

 

Jérémy Rodney, Head of Digital Content & Social Media Bouygues Telecom

At Bouygues Telecom, influencer marketing started in 2013, with 4G. We had to spread the word about its high data speeds, relying on the power of recommendations from a few influencers. First we targeted gamers, heavy bandwidth consumers, and their followers. Today, the use of influencers is ingrained in our media campaigns. We don't use nano-influencers; they are too complex to manage for our services and products. When we have a reach objective, we look for macro-influencers. And to drive more engagement and produce original content, we work more and more with mid-tier or micro-influencers. Adults, parents, seniors: all age ranges are represented, and the palette of influencers has become very large and diverse.

 

Jeroen Dijkema, CEC Cluster Lead Europe Unilever (Rotterdam)

Unilever has a vast galaxy of internationally renowned food brands, some of which have strong local ties. At international or local level, we reach out to influencers with three goals in mind: developing brand reputation, delivering messages about specific brands, and testing certain new products. The authenticity of these influencers is a selection criterion, since our products are built on data reflecting consumer needs but also carry a societal purpose. We reach out to macro- and micro-influencers, mainly on Instagram and Facebook.

Read the full article

OneShot – Hashtag #TrustYourInfluencer

Your brand? Your products? Influencers talk about them best; in any case, they are better understood by your target market. Here are three tips for working well with them.
1. Consider the influencer to be a true partner.

Everything starts with a good collaboration. A good partnership isn't simply asking an influencer to showcase your product to their followers. That view of the influencer, as a walking billboard for the brand, is inefficient, even counter-productive. Today, influencers ask to embody the spirit of the brand. The influencer should therefore be seen as a consultant for communicating on social media, not as a simple megaphone. So the whole challenge lies first in identifying which influencers best suit the brand's objectives. The usual error is to always work with the same pool of influencers and to reason quantitatively, based on the number of followers accumulated. It is better to tailor the partnership together, that is, to take a very qualitative, individualised approach based on legitimacy.

2. Let yourself be influenced by your influencers.

In general, brands take a risk when they express themselves on social media: trolls will always find something to vent about. The goal of collaborating with an influencer is to create a message their community will appreciate, by relying on the influencer's legitimacy and expertise. This opens new doors for the brand, which finds new playing fields and new forums in which to express itself. In a nutshell, influence lets a brand speak with a voice accepted by a community, rather than from the top down. The influencer knows their community perfectly well: they are the only one who knows whether it will be on board. So it is better to listen to them and trust them, particularly as many of them are born communicators.

3. To generate engagement, favour micro-influencers.

On social networks, to add a human dimension to the relationship with the brand, it is wise to work with micro-influencers rather than a single 'face of the brand'. Admittedly, the latter option is historically ingrained and helps establish brand legitimacy. But today, engagement has become the main challenge; moreover, platforms are constantly being improved to favour it. Once you have set an engagement or ROI goal, it is better to work closely with involved, relevant micro-influencers, even those with 'only' 3,000 to 5,000 followers. Legitimacy is key: a macro-influencer like Bixente Lizarazu, for example, could also be considered a micro-influencer for cycling, of which he is a huge fan!

An article by Ludovic Chevallier, Head of Havas Paris Social.

 

“Be a fan of your fans by making them heroes of your story.”

Mark Schaefer, author of Marketing Rebellion: The Most Human Company Wins