In the first of our Risk & Innovation series, James Allen examines the barriers to overcome when scaling AI.

Now that we’re well into the fourth Industrial Revolution (also known as Industry 4.0), we expect to see some fundamental shifts in how businesses operate and serve their customers.

Here’s what we see as the three big pillars of Industry 4.0:

  1. Digitisation of product and service offerings
  2. Digitisation and integration of supply / value chains
  3. Digital business models and customer access


The shift toward Industry 4.0 has grown in importance for many brands and has accelerated during the Covid crisis, driven by significant changes in supply chains and consumer behaviour.

In fact, a recent McKinsey survey found that 65% of respondents see Industry 4.0 as more valuable since the pandemic. The same survey revealed that the top three strategic objectives for Industry 4.0 are:

  1. Agility to scale operations up or down in response to market-demand changes (18.4%)
  2. Flexibility to customise products to consumer needs (17.2%)
  3. Increased operational productivity and performance to minimise costs (17.2%)

Yet when the same respondents were asked whether they had successfully scaled Industry 4.0 initiatives, only 26% said they had managed to do so.


According to Rothschild & Co, the market for Industry 4.0 is expected to top 300 billion dollars, and with AI and connectivity projected to reduce manufacturing costs by 20% (or 400 billion dollars), it’s essential that companies find a way to scale safely, at pace.

Artificial Intelligence evolution

AI has been in development for decades, starting with the first computers in the 1940s, when scientists and mathematicians began to explore the potential of building an electronic brain. In 1950, Alan Turing proposed the “Turing Test”: if a machine could carry on a conversation indistinguishable from a conversation with a human being, then it was reasonable to say that the machine was “thinking”. This simplified version of the problem allowed Turing to argue convincingly that a “thinking machine” was at least plausible, and his paper answered the most common objections to the proposition.

Fast forward many years, and many millions of pounds of research investment, to 1997 and perhaps the first publicly recognised AI computer: IBM’s Deep Blue, a chess-playing machine that beat the reigning world chess champion, Garry Kasparov.

But machines like Deep Blue were incredibly complex, extremely expensive, and inaccessible to all but a few large technology companies. In the past few years, however, the interest and opportunity presented by AI within Industry 4.0 has exploded.

This is due to a number of factors:

  • Wider availability of computing and access to cloud environments with large processing power
  • Development of deep learning algorithms
  • Big Data platforms
  • Research progress toward artificial general intelligence

AI – learnings and barriers to scale

Whilst many companies see the potential presented by AI, they are also rightly concerned by the risks it presents, as well as the barriers they need to overcome when scaling.

The most common challenges we tend to come across are:

  • Access to specialist skills
  • Cost of processing in cloud environments
  • Inability to demonstrate fairness, lack of bias and integrity of AI algorithms
  • Risk of unintended consequences
  • Regulatory understanding
  • Ability to switch seamlessly between AI-powered processes and regular business processes in the event the AI fails

This presents organisations with a real conundrum. AI use raises questions over ethics, safeguards, interpretability and more. It’s only right that organisations probe these issues and take the learnings from those that have gone before them.
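The last barrier in the list above, falling back to a regular business process when the AI fails, is often handled with a simple fallback wrapper. Here is a minimal sketch; all names and thresholds are illustrative, not part of any specific product:

```python
# Hypothetical sketch: wrap an AI-powered step so the business can
# fall back to a regular (rule-based or manual) process when the model
# errors out or returns a low-confidence answer.

def classify_with_fallback(request, ai_model, rule_based_process,
                           confidence_threshold=0.8):
    """Try the AI model first; fall back to the regular process on
    error or low confidence."""
    try:
        label, confidence = ai_model(request)
        if confidence >= confidence_threshold:
            return label, "ai"
    except Exception:
        pass  # model unavailable or crashed: fall through to fallback
    return rule_based_process(request), "fallback"

# Example usage with stub implementations:
def stub_model(request):
    return "approve", 0.55       # low confidence, below threshold

def stub_rules(request):
    return "refer_to_human"      # safe, conservative default

result, route = classify_with_fallback({"id": 1}, stub_model, stub_rules)
```

The key design choice is that the fallback path must not depend on the AI at all, so a model outage never halts the underlying business process.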

Here are a few public examples of where AI has gone wrong:

Footballer or felon

A facial-recognition system identified almost thirty professional American footballers as criminals, including three-time Super Bowl champion Duron Harmon of the New England Patriots. The software incorrectly matched the athletes to a database of mugshots in a test organised by the Massachusetts chapter of the American Civil Liberties Union (ACLU). Nearly one in six athletes were falsely identified.

CEO gets spoofed

In 2019 the CEO of a UK-based energy firm got a call from his boss at their German parent company, instructing him to transfer €220,000 to a Hungarian supplier. The ‘boss’ said the request was urgent and directed the UK CEO to transfer the money promptly. It turned out the phone call was made by criminals who used AI-based software to mimic the boss’ voice, including the “slight German accent and the melody of his voice,” as reported in the Wall Street Journal. Such AI-powered cyberattacks are a new challenge for companies, as traditional cybersecurity tools designed for keeping hackers off corporate networks can’t identify spoofed voices.

Get me out of here!

US airlines came under widespread criticism after their AI-powered pricing systems charged customers up to 10 times the price of a regular ticket as they desperately tried to escape Florida ahead of Hurricane Irma. The systems had no kill switch. “There are no ethics valves built into the system that prevent an airline from overcharging during a hurricane,” said Christopher Elliott, a consumer advocate and journalist.
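An “ethics valve” of the kind Elliott describes can be as simple as a hard cap applied after the pricing model runs. A hypothetical sketch, with purely illustrative caps and figures:

```python
# Hypothetical "ethics valve" for a dynamic-pricing system: a hard cap
# on the AI-suggested surge multiplier, tightened further when an
# emergency (e.g. a hurricane evacuation) is declared.

def capped_price(base_fare, ai_multiplier, emergency=False,
                 normal_cap=3.0, emergency_cap=1.0):
    """Apply the AI-suggested multiplier, but never above the cap."""
    cap = emergency_cap if emergency else normal_cap
    return round(base_fare * min(ai_multiplier, cap), 2)

# The AI suggests a 10x multiplier; the valve holds the fare at the
# base price during a declared emergency, and at 3x otherwise.
normal = capped_price(200.0, 10.0)                      # 600.0
evacuation = capped_price(200.0, 10.0, emergency=True)  # 200.0
```

Because the cap sits outside the model, it keeps working even if the pricing algorithm behaves unexpectedly.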


Navigating the risks and enabling safe scaling of AI

Webhelp and Gobeyond Partners have developed a comprehensive framework to support the safe scaling of AI, including assessment of risk, key controls, human-centred ethics principles, algorithm management and data handling. This framework includes open source methods that can be used to demonstrate the integrity and explainability of AI algorithms.
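One widely used open-source explainability technique of the kind such methods draw on is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops, with large drops indicating heavy reliance on that feature. A self-contained sketch, with a toy model and data that are entirely illustrative:

```python
# Sketch of permutation importance, a common open-source
# explainability technique. Model and data are illustrative only.
import random

def accuracy(model, X, y):
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    rng = random.Random(seed)
    baseline = accuracy(model, X, y)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)                      # break feature j only
            X_perm = [row[:j] + [v] + row[j+1:] for row, v in zip(X, col)]
            drops.append(baseline - accuracy(model, X_perm, y))
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy model that only looks at feature 0, so shuffling feature 1
# should change nothing and its importance should be zero.
model = lambda row: 1 if row[0] > 0.5 else 0
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]
scores = permutation_importance(model, X, y)
```

Because the method treats the model as a black box, it can be applied to any AI system regardless of the underlying algorithm, which is what makes it useful for demonstrating integrity to regulators and customers.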


Questions your organisation should consider

Although AI presents a huge opportunity to transform both business operations and customer experience, it is not without risk. Here are some of the long-term strategic questions we recommend you consider for your organisation:

  • What role does AI have in the working environment and is there such a thing as a post-labour economy? If so, how do we make it fair?
  • How do we eliminate bias in AI?
  • How do we keep AI safe from threats?
  • Is it right to use AI in cyber defence? If so, where is the line?
  • As AI capabilities become more integrated, how do we stay in control of such a complex system?
  • How do we define the humane treatment of AI?


Feel free to get in touch to see how we can help you safely fulfil your Industry 4.0 ambitions at pace and at scale.