Bots, Bias & Bigotry: safe scaling of AI

In the first of our Risk & Innovation series, James Allen examines the barriers to overcome when scaling AI.

Now that we’re well into the fourth Industrial Revolution (also known as Industry 4.0), we expect to see some fundamental shifts in how businesses operate and serve their customers.

Here’s what we see as the three big pillars of Industry 4.0:

  1. Digitisation of product and service offerings
  2. Digitisation and integration of supply / value chains
  3. Digital business models and customer access

The shift toward Industry 4.0 has become more important to many brands, and has accelerated during the Covid crisis as a result of significant changes in supply chain and consumer behaviour.

In fact, a recent McKinsey survey highlighted that 65% of respondents see Industry 4.0 as being more valuable since the pandemic, with the same survey revealing that the top 3 strategic objectives for Industry 4.0 are:

  1. Agility to scale operations up or down in response to market-demand changes (18.4%)
  2. Flexibility to customise products to consumer needs (17.2%)
  3. Increased operational productivity and performance to minimise costs (17.2%)

Yet when the same respondents were asked if they had successfully scaled Industry 4.0 initiatives, only 26% had managed to do so.

According to Rothschild & Co, the market for Industry 4.0 is expected to top 300 billion dollars, and with AI and connectivity projected to reduce manufacturing costs by 20% (or 400 billion dollars), it’s essential that companies find a way to scale safely, at pace.

Artificial Intelligence evolution

AI has been in development for decades, starting with the first computers in the 1940s, when scientists and mathematicians began to explore the potential for building an electronic brain. In 1950, Alan Turing proposed what became known as the “Turing Test”: if a machine could carry on a conversation indistinguishable from one with a human being, then it was reasonable to say that the machine was “thinking”. This simplified version of the problem allowed Turing to argue convincingly that a “thinking machine” was at least plausible, and his paper answered the most common objections to the proposition.

Fast forward several decades, and many millions of pounds of research investment, and 1997 brought perhaps the first publicly recognised AI milestone: IBM’s Deep Blue, a chess-playing computer that beat the reigning world chess champion Garry Kasparov.

But machines like Deep Blue were incredibly complex, extremely expensive, and inaccessible to all but a few large technology companies. In the past few years, however, the interest and opportunity presented by AI within Industry 4.0 has exploded.

This is due to a number of factors:

  • Wider availability of computing and access to cloud environments with large processing power
  • Development of deep learning algorithms
  • Big Data platforms
  • Research progress towards more general-purpose AI capabilities

AI – learnings and barriers to scale

Whilst many companies see the potential presented by AI, they are also rightly concerned by the risks it presents, as well as the barriers they need to overcome when scaling.

The most common challenges we tend to come across are:

  • Access to specialist skills
  • Cost of processing in cloud environments
  • Difficulty demonstrating the fairness, integrity and freedom from bias of AI algorithms
  • Risk of unintended consequences
  • Regulatory understanding
  • Ability to seamlessly switch between AI-powered processes and regular business processes in the event the AI fails

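The last of these challenges, reverting to the regular business process when the AI fails or is unsure, can be sketched as a simple wrapper. This is an illustrative pattern in Python, not part of any specific product; the model, queue and threshold here are stand-ins:

```python
import logging

def classify_with_fallback(item, ai_model, manual_queue, confidence_threshold=0.8):
    """Route an item through the AI model, falling back to the regular
    (human or rule-based) process when the model errors or is unsure."""
    try:
        label, confidence = ai_model(item)
    except Exception:
        # The AI process failed outright: log it and revert to the
        # existing business process rather than stopping the operation.
        logging.exception("AI process failed; reverting to regular process")
        manual_queue.append(item)
        return None
    if confidence < confidence_threshold:
        # Low confidence: hand the item over to the regular process.
        manual_queue.append(item)
        return None
    return label
```

The key design choice is that the business process keeps running whether the model crashes or merely hesitates; the AI is an accelerator, not a single point of failure.
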
This presents organisations with a real conundrum. AI use raises questions over ethics, safeguards, interpretability and more. It’s only right that organisations probe these issues and take the learnings from those that have gone before them.

Here are a few public examples of where AI has gone wrong:

Footballer or felon

A facial-recognition system identified almost thirty professional American footballers as criminals, including New England Patriots three-time Super Bowl champion Duron Harmon. The software incorrectly matched the athletes to a database of mugshots in a test organised by the Massachusetts chapter of the American Civil Liberties Union (ACLU). Nearly one in six athletes was falsely identified.

CEO gets spoofed

In 2019 the CEO of a UK-based energy firm got a call from his boss at their German parent company, instructing him to transfer €220,000 to a Hungarian supplier. The ‘boss’ said the request was urgent and directed the UK CEO to transfer the money promptly. It turned out the phone call was made by criminals who used AI-based software to mimic the boss’ voice, including the “slight German accent and the melody of his voice,” as reported in the Wall Street Journal. Such AI-powered cyberattacks are a new challenge for companies, as traditional cybersecurity tools designed for keeping hackers off corporate networks can’t identify spoofed voices.

Get me out of here!

US airlines were subject to widespread criticism after their AI-powered pricing systems charged customers up to 10 times the price of a regular ticket as those customers desperately tried to escape Florida ahead of the arrival of Hurricane Irma. The systems had no kill switch. “There are no ethics valves built into the system that prevent an airline from overcharging during a hurricane,” said Christopher Elliott, a consumer advocate and journalist.
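
A guardrail of this kind can be a few lines of code. The sketch below is purely illustrative (the cap values and the emergency flag are assumptions for the example, not any airline's actual policy); the point is that the override sits outside the pricing algorithm:

```python
def surge_price(base_fare, demand_multiplier, emergency_declared=False,
                normal_cap=3.0, emergency_cap=1.0):
    """Apply dynamic pricing with a hard cap: an 'ethics valve' that
    overrides the algorithm when an emergency has been declared."""
    cap = emergency_cap if emergency_declared else normal_cap
    return base_fare * min(demand_multiplier, cap)
```

For example, a $100 base fare with the algorithm demanding a 10x multiplier would be capped at $300 normally, and held at $100 once an emergency is declared.
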

Navigating the risks and enabling safe scaling of AI

Webhelp and Gobeyond Partners have developed a comprehensive framework to support the safe scaling of AI, including assessment of risk, key controls, human-centred ethics principles, algorithm management and data handling. This framework includes open source methods that can be used to demonstrate the integrity and explainability of AI algorithms.
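
Open-source fairness checks typically start from comparing error rates across groups. As a minimal illustration of the idea (not the framework itself), a false-positive-rate gap can be computed in plain Python:

```python
def false_positive_rate(predictions, labels):
    """Share of true negatives (label 0) that the model wrongly flags as 1."""
    negatives = [p for p, y in zip(predictions, labels) if y == 0]
    return sum(negatives) / len(negatives) if negatives else 0.0

def fpr_gap(predictions, labels, groups):
    """Largest difference in false-positive rate between any two groups.
    A basic fairness signal: 0.0 means equal error rates across groups."""
    rates = []
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        rates.append(false_positive_rate([predictions[i] for i in idx],
                                         [labels[i] for i in idx]))
    return max(rates) - min(rates)
```

In the ACLU test described above, a metric like this would have flagged the mismatch immediately: a system that falsely identifies one group far more often than another produces a large gap.
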

Safe scaling of AI

Questions your organisation should consider

Although AI presents a huge opportunity to transform both business operations and customer experience, this is not without risk. Here are some of the long-term strategic questions we recommend you consider for your organisation:

  • What role does AI have in the working environment and is there such a thing as a post-labour economy? If so, how do we make it fair?
  • How do we eliminate bias in AI?
  • How do we keep AI safe from threats?
  • Is it right to use AI in cyber defence? If so, where is the line?
  • As AI capabilities become more integrated, how do we stay in control of such a complex system?
  • How do we define the humane treatment of AI?


Feel free to get in touch to see how we can help you safely fulfil your Industry 4.0 ambitions, at pace and at scale.


Innovation is necessary, safety is crucial

James Allen, Webhelp’s Chief Risk & Technology Officer, introduces our new series taking a deep dive into risk and innovation.

Risk and Innovation don’t tend to appear in the same sentence very often. Innovation is, of course, essential for businesses aiming to survive and thrive in the fourth Industrial Revolution. But with an increasing weight of regulation, and with data becoming more valuable than oil, how can companies simultaneously innovate while staying ahead of emerging threats?

Here at Webhelp and Gobeyond Partners, our mission is to be leaders and experts in delivering low risk solutions that help our clients to innovate and stay safe.

In this new series, we’ll be providing insight and perspective on some of the key questions that we work to solve in partnership with our clients:

  • AI has huge potential to transform customer experience in my business. But how do I safely move from small scale experimentation to deployment at scale?
  • In a world of increasing regulatory burden, how can I use digital technology to automate compliance activities?
  • My firm is part of a critical national infrastructure, but I have a large amount of legacy applications that provide critical economic functions. How can I accelerate transformation of my business without putting these at risk?
  • My business has made massive investments in digitisation, but my control environment is still analogue. How do I pivot to the new without losing control?
  • How do I attract new skills in digital, analytics and cyber into my Risk function?
  • Now that home working is part of the new norm, how do I deliver leading edge cyber security with world-class colleague experience?

End-to-end collaboration

Encouraging and facilitating collaboration between different teams helps them reach outside their own comfort zones, think differently, and better consider the customer and colleague experience from beginning to end.

Understanding new technology

New developments in technology are the cornerstone of enabling businesses to innovate, operate effectively, and react quickly. But with the adoption of new technology inevitably come new risks and new challenges. The foremost priority should be understanding these implications and ensuring the safe deployment of the new technology.

At Webhelp, we work every day to drive innovation, with control by design. We adapt and innovate – in our technology, our mindset and our operational practices – and continue to push the boundaries of what we can do for our clients. But at the heart of everything we do is a focus on delivering low risk solutions, helping our clients to innovate while remaining safe, now and in the future.


In Risk & Innovation, our new series of articles and white papers, we’ll be exploring how emerging technologies can be used to help companies stay safe while maintaining necessary growth and agility. The first in the series, Bots, Bias & Bigotry: safe scaling of AI, will arrive on Monday March 22, and will address the fairness, privacy and operational safeguards you need to consider when incorporating AI into your operational model.

Interested in more content from our thought-leader James Allen? Check out his earlier pieces on Taking a human centred approach to cyber security and How AI and data analytics can support vulnerable customers.