In the age of artificial intelligence (AI), every customer experience executive knows they need to implement solutions to remain competitive, but many rush the decision without first establishing whether their organisations are ready.
Businesses are starting to invest heavily in AI, only for the solutions to underperform, not because the technology failed, but because the foundation wasn’t there to support it.
The problem isn’t the technology itself, but rather a lack of AI readiness – fragmented data landscapes, disconnected legacy systems and siloed information that starve these solutions of the context they need to deliver exceptional customer experience (CX).
It’s like the invention of the car: it took decades before cars were widely used because they initially lacked the ecosystem to support travel, including licensing to operate the vehicle, road infrastructure to support traffic flow and rules of the road to give order to the chaos.
Similarly, AI cannot function effectively in data silos or without the interconnected systems that provide access to a customer’s full interaction history. Successful AI-enabled CX hinges on robust data integration, interconnected systems that surface both structured and unstructured information, and the operational maturity to manage it all.
Get your data house in order
In any contact centre context, AI aims to support or replace, in whole or in part, tasks performed by agents.
However, both agents and AI solutions need access to the same relevant information to support everyday operations and perform their functions effectively.
Just like agents, AI requires customers’ contextual data to understand who is calling, which products or services they have and their previous interactions with the company. If agents do not have access to FAQ answers and knowledge articles before an AI solution is implemented, operators cannot effectively train AI engines to augment or automate the same engagements or tasks.
As such, optimising the agent is a good first step before progressing to AI-driven automation.
Operators must consider what data they need to provide to the AI engine from CRM, CSM, CCaaS and workforce management (WFM) systems, and how this data needs to come together in a customer journey to provide exceptional customer experiences.
AI is also adept at ensuring this information remains current and relevant by updating it automatically when new data becomes available. Once these steps are done, it is easier to implement a digital agent trained on the same data set.
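As a minimal illustration of how this customer data might come together, the sketch below merges records from hypothetical CRM, CCaaS and WFM sources into a single, chronological journey that both agents and an AI engine could consume. The class, field and method names are assumptions for illustration, not any specific vendor’s schema.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical interaction records pulled from CRM, CCaaS and WFM systems.
@dataclass
class Interaction:
    timestamp: datetime
    channel: str   # e.g. "voice", "chat", "email"
    source: str    # e.g. "CRM", "CCaaS", "WFM"
    summary: str

@dataclass
class CustomerJourney:
    customer_id: str
    products: list[str] = field(default_factory=list)
    interactions: list[Interaction] = field(default_factory=list)

def build_journey(customer_id: str, crm, ccaas, wfm) -> CustomerJourney:
    """Merge records from each system into one chronological journey.
    `crm`, `ccaas` and `wfm` stand in for whatever connectors the operator uses."""
    journey = CustomerJourney(customer_id=customer_id)
    journey.products = crm.get_products(customer_id)
    for system in (crm, ccaas, wfm):
        journey.interactions.extend(system.get_interactions(customer_id))
    journey.interactions.sort(key=lambda i: i.timestamp)
    return journey
```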
This entails implementing the tools, integrating data, connecting systems and using AI to surface relevant information in real time and present it to agents in consumable chunks. Deploying AI on the back end first gives operators the ability to formulate data into answers, identify potential information gaps and author new knowledge articles.
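A simple sketch of that back-end step, under stated assumptions: knowledge articles are broken into chunks, scored against the live query and flagged as a knowledge gap when nothing relevant is found. The keyword-overlap scoring is a stand-in for a production embedding-based search, and all names here are illustrative.

```python
def chunk_article(text: str, size: int = 400) -> list[str]:
    # Split an article into consumable chunks for agents or an AI engine.
    return [text[i:i + size] for i in range(0, len(text), size)]

def score(query: str, chunk: str) -> float:
    # Crude relevance score: overlap between query words and chunk words.
    q, c = set(query.lower().split()), set(chunk.lower().split())
    return len(q & c) / max(len(q), 1)

def surface_answers(query: str, articles: list[str], top_k: int = 3) -> dict:
    chunks = [c for a in articles for c in chunk_article(a)]
    ranked = sorted(chunks, key=lambda c: score(query, c), reverse=True)[:top_k]
    if not ranked or score(query, ranked[0]) < 0.2:
        # Low relevance suggests a gap: a candidate for a new knowledge article.
        return {"answer_chunks": [], "knowledge_gap": query}
    return {"answer_chunks": ranked, "knowledge_gap": None}
```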
The next step could be to use AI to listen to agent calls and produce automated summaries. The quality of these transcriptions and summaries will give you a good understanding of AI’s accuracy rates. Once you have proven this, start to supplement human agents with digital agents.
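The sketch below shows what this step might look like, assuming a speech-to-text service has already produced a transcript and that `llm_complete` and `judge` are placeholders for whichever language-model call and review process (human or automated) the operator uses.

```python
def summarise_call(transcript: str, llm_complete) -> str:
    """Ask a language model for a short, structured summary of one call."""
    prompt = (
        "Summarise this contact-centre call in three bullet points, covering "
        "the reason for contact, actions taken and next steps:\n\n" + transcript
    )
    return llm_complete(prompt)

def sample_accuracy(auto_summaries: list[str], agent_summaries: list[str], judge) -> float:
    """Estimate accuracy by comparing a sample of automated summaries with
    agent-written ones; `judge` returns True when the two match."""
    matches = sum(judge(auto, agent) for auto, agent in zip(auto_summaries, agent_summaries))
    return matches / max(len(auto_summaries), 1)
```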
Implementing AI in an agent-assist role to index and surface information is often more challenging for CX professionals to adopt, due to the real-time consumption nature of agent assist and the often infrequent flow of data. This capability therefore tends to be deployed in a later phase, use case by use case.
AI guardrails
However, before implementing customer-facing AI solutions, it is critical to address the governance challenges.
When training AI to generate answers to questions and engage with customers, operators must ensure the outputs align with governance structures and compliance frameworks, which starts with asking how to govern AI usage.
Getting this step right requires partnering with a provider like Connect that can build enterprise toolsets around the company’s governance framework and the brand’s persona, tone and language.
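A highly simplified sketch of such a guardrail, applied before a generated answer reaches a customer: the blocked topics, disclosure rule and escalation message are illustrative placeholders for whatever a real governance framework specifies.

```python
# Illustrative policy values; a real framework would define these centrally.
BLOCKED_TOPICS = {"legal advice", "medical advice"}
REQUIRED_DISCLOSURE = "You are chatting with a digital assistant."

def apply_guardrails(answer: str, conversation_start: bool) -> tuple[bool, str]:
    """Return (allowed, text): block and escalate on banned topics,
    and prepend the required disclosure at the start of a conversation."""
    if any(topic in answer.lower() for topic in BLOCKED_TOPICS):
        return False, "Escalate to a human agent."
    if conversation_start and REQUIRED_DISCLOSURE not in answer:
        answer = REQUIRED_DISCLOSURE + " " + answer
    return True, answer
```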
Additional considerations in this regard include whether to train the interaction engine on a large language model (LLM) that uses anonymised public data, or parts thereof, or whether small language models (SLMs) or micro language models (MLMs) are better suited to specific tasks.
Further governance considerations becoming topical relate to the ethics of resource consumption. Generic LLMs consume vast amounts of compute power from local data centres, yet countries like South Africa and the UK are not energy independent and have some of the highest power costs in the world. What impact does the choice of model have on the cost of electricity for the local consumer and on the environment?
SLMs and MLMs are more energy efficient because they are built for narrower, more specific tasks. They are also more agile and able to reside behind firewalls with very tightly controlled data sets.
These are all important environmental, social and governance (ESG) considerations for enterprises that care about their impact on their communities and surroundings.
Monitor, manage and scale
Once deployed, businesses require the capabilities to monitor, manage and scale AI, which stem from enterprise operational AI frameworks. The right framework and solution set give businesses the ability to pull AI into workflows and integrate it with other systems and data points.
However, while many enterprise application and contact centre vendors are trying to integrate their own AI into their solutions to meet this requirement, the reality is that no vendor can currently provide an enterprise-wide view of AI workflows across voice and data, the contact centre and the wider enterprise.
CX operators need to partner with a vendor-independent solutions integrator that can facilitate a single view and provide transparency and control over the associated costs. Numerous businesses have received billing surprises at the end of the month because of AI utilisation, as some vendors do not expose the raw data used for AI utilisation billing.
Measuring impact
Whatever the reason for leveraging AI in the contact centre environment, whether it is cost savings, streamlined operational efficiency or enhanced customer service, operators must monitor performance against these goals to determine whether it is having its intended impact and delivering the forecasted return on investment.
This process starts with A/B testing to monitor effectiveness, and once live, it is crucial to monitor success and containment against metrics such as time to completion, customer effort, CX feedback and, for voice-based interactions, sentiment data.
All this data must be brought together to be analysed and monitored to identify what to tweak and improve, and where to focus next to make continual improvements and track business performance.
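As an illustration of how these measures might be brought together, the sketch below assumes each interaction record notes its A/B variant, whether the AI contained the query without a human agent, the handling time and a customer-effort score; the field names and variant labels are assumptions for illustration only.

```python
from statistics import mean

def summarise_variant(records: list[dict]) -> dict:
    """Roll one variant's interaction records up into the headline metrics."""
    if not records:
        return {"interactions": 0}
    return {
        "interactions": len(records),
        "containment_rate": mean(r["contained"] for r in records),
        "avg_time_to_completion_s": mean(r["handle_time_s"] for r in records),
        "avg_customer_effort": mean(r["effort_score"] for r in records),
    }

def compare_ab(records: list[dict]) -> dict:
    """Report each group side by side: 'A' as the baseline, 'B' as AI-assisted."""
    return {
        variant: summarise_variant([r for r in records if r["variant"] == variant])
        for variant in ("A", "B")
    }
```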
Ultimately, these insights help determine whether the AI solution is having a measurable impact against the organisation’s strategic objectives – is the automated process improving customer experience, saving money or driving revenue?
Developing clear metrics that measure the impact of the AI against the strategic objectives can help determine whether the technology is holding you back in its current form or driving you forward, as intended.
This is, however, a journey in which you will need to continually optimise AI and your CX. It will be an ongoing investment, and your CX teams need to be invested and ready to make AI work for you and your customers.