Many experts have called for a pause in the ongoing development of artificial intelligence (AI). They argue that we are moving too fast and don’t understand the implications of what is being unleashed on the world. The theory is that if the AI becomes smart enough to improve itself then it will create a system that can grow exponentially more intelligent – far outpacing the humans that created it.

This is not an idle fear or science fiction. Elon Musk has warned that AI has the potential to destroy civilization. Geoffrey Hinton, known in technology circles as the ‘godfather of AI’, quit his leadership role at Google in May 2023 so he could speak freely about the dangers of his own research. Before his death, Professor Stephen Hawking warned that the development of AI could lead to the end of the human race.

Let’s be clear. There is AI that performs a very specific role, such as searching through data and summarizing it. Then there is what these experts are warning about, better known as Artificial General Intelligence. AGI is the ability to learn any intellectual task that a human can perform. Essentially, it’s the replication of a human brain.

AGI remains hypothetical, so we are not really staring at the end of the world. Of course, if the AI researchers stay focused and keep improving the systems they are creating, then they may eventually achieve AGI. This is why it makes sense to create sensible regulation around the use of AI before that happens.

But what about more immediate regulation for the AI we already have?

If a business sends a personalized email to a customer, should it openly declare that ‘this message was created by AI’? Or what about a customer service chatbot that displays a name on screen? Should a bot have a human name at all, or should we simply state openly that it is a bot?

The use of Generative AI is advancing much faster than new regulation can be created. This is quite normal, as regulation usually proceeds at a thoughtful and considered pace – which can feel glacial to those who believe consumers need more protection.

These are fairly simple examples, but they do demonstrate the need for companies to think carefully about human or automated interactions with customers. This is not least because automated fraud will grow alongside genuine automated customer interactions. Customers really need to know who they are talking to.


Why businesses can’t afford to wait to embrace Generative AI

However, as a business community, one thing is certainly clear. Waiting is not an option. Any customer service organization that announces it is going to wait for regulatory guidelines around the use of AI will soon be a legacy company, remembered only as an entry in the history section of Wikipedia.

There may be some business sectors where progressing at a cautious pace is essential. Financial services companies almost certainly cannot implement AI-driven financial advice without regulators understanding how the AI system works and the parameters used to generate that advice.

In customer service, though, the focus is on the competition to deliver a fantastic customer experience. Therefore, where AI can augment the tools available to agents, or even directly improve the customer experience, it should be deployed – with caution.


How to manage the risks of Generative AI solutions

The caution should focus on control and risk. If you deploy an AI solution within your customer service process, can you actively reduce the risk of a negative outcome? Can you anticipate the ways a problem might arise before using AI? And can you control how the AI works? This can be an issue for generative systems such as ChatGPT, which function as a black box – you can’t see into the algorithm to control how it works.

A combination of these control and risk variables will determine whether an AI deployment is safe for your business. If you are using an intelligent chatbot to tell customers where the pizza they ordered is, then the potential for a dangerously wrong response is low. If the bot is being asked to summarize financial investment advice, then the risk of giving incorrect advice is very high.
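To make the idea concrete, the control-and-risk trade-off described above could be sketched as a simple decision rule. This is purely an illustration: the function name, the risk and control categories, and the rule itself are assumptions for the sketch, not part of any real framework or product.

```python
# Hypothetical sketch of the control-and-risk check described above.
# The categories and the decision rule are illustrative assumptions.

def deployment_is_safe(risk_of_bad_outcome: str, control_over_model: str) -> bool:
    """Return True if an AI use case looks acceptable to deploy.

    risk_of_bad_outcome: "low", "medium", or "high" - the potential harm
    of a wrong answer.
    control_over_model: "full", "partial", or "black_box" - how much you
    can inspect and constrain the model's behaviour.
    """
    risk_rank = {"low": 0, "medium": 1, "high": 2}[risk_of_bad_outcome]
    control_rank = {"full": 2, "partial": 1, "black_box": 0}[control_over_model]
    # Higher risk demands more control: deploy only when control is at
    # least equal to the risk.
    return control_rank >= risk_rank

# A chatbot reporting where a customer's pizza is: low risk, safe even
# as a black box.
print(deployment_is_safe("low", "black_box"))   # True
# A bot summarizing financial investment advice from a black-box model:
# high risk, insufficient control.
print(deployment_is_safe("high", "black_box"))  # False
```

The point of the sketch is simply that risk and control must be weighed together: a black-box model can be fine for low-stakes questions, while high-stakes use cases demand visibility into, and control over, how the system works.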

Innovation is essential, especially in the brand-to-customer interface. We create the impression that customers have of your brand. AI can play a key role in improving automated customer service and supporting agents inside the contact center, but brands always need to ask this question: “Are we still in control?” And regulation needs to exist so that we remain in control.

It’s important to note that the current application of Generative AI relies on large models that are “oversized” for most use cases. The future will undoubtedly see the emergence of smaller models targeted at specific use cases or verticals. They will be easier to build and train, will require less computing power, and will become much more competitive from an economic perspective. As in the past, Open Source will play a key role for the major vendors. Last but not least, it is highly likely that ecological and financial considerations will also come into the spotlight.

For more information on why Webhelp could be the CX specialist you are looking for to get the best out of human and Generative AI tech, please check our Generative AI solutions page or get in touch.