This article has been authored by Achal Kothari, AVP-Customer Success at Haptik
Enterprises across countries and industries are deploying Intelligent Virtual Assistant (IVA) solutions to automate key business functions. At Haptik, we’ve been pioneers in this space since 2013, having successfully implemented IVA solutions for a number of global consumer brands.
Year after year, customers continue to face the same issues while engaging with businesses – inefficient customer support operations, long wait times to get help, and a lack of personalized human attention. AI-powered chatbots and virtual assistants emerged over the past decade to address precisely these age-old customer experience pain points. So it would be fair to say that delivering great customer experience is one of the barometers by which the success of an IVA should be determined.
Indeed, IVA performance has typically been measured using various CX metrics – such as time to resolution (TTR), customer effort in finding product information (CES), customer satisfaction (CSAT), and Net Promoter Score (NPS).
These are undoubtedly some of the best metrics for measuring CX. But when it comes to measuring the performance of IVAs, they are limited not only in scale but also in the fact that they focus solely on customer experience, which in reality is influenced by a number of factors. Thus they ignore the core performance indicator of an IVA – its intelligence.
The graphic below summarizes the shortcomings of some of the metrics currently used to measure IVA performance.
Effective measurement of IVA performance requires detecting and analyzing the behavior of a customer at a granular level, often instantaneously, during a conversation – something that isn’t feasible with existing CX measurement frameworks. This reveals the clear need for an additional framework that provides a more accurate and comprehensive picture of the success of IVAs.
In an endeavor to close that very gap, Haptik’s Customer Success Team developed the Intelligence Satisfaction Score (ISAT) – a new industry-first framework for measuring the performance of an Intelligent Virtual Assistant.
The ISAT Methodology
The success of an IVA is mostly dependent on its ability to understand the user's message (Natural Language Understanding), recognize the user's intent (Intent Detection), and process the responses appropriately (Natural Language Processing). Only an IVA that does all three will be able to deliver the right customer experience and generate ROI for the business.
Haptik’s ISAT framework aims to measure the effectiveness of an IVA, based on its ability to understand and help the end-user.
Every conversation a user has with an IVA can potentially be categorized as positive, negative, or neutral.
The key idea behind ISAT is to separate the conversations where a user had a positive experience from the ones where a user had a negative experience – and to gain that visibility at scale.
The ISAT score is obtained by subtracting the percentage of negative conversations from the percentage of positive conversations, after setting aside the irrelevant (neutral) conversations. This difference represents the net performance of the IVA.
ISAT = Positive Conversations (%) – Negative Conversations (%)
The ISAT score thus obtained takes into account three key factors – user intent (using the number of messages to segregate low-intent from high-intent users), query resolution (whether or not a particular task or flow was completed), and bad experience (measuring user drop-offs caused by a negative experience).
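The calculation itself is straightforward. The sketch below illustrates it with a hypothetical labeling function: it assumes each conversation has already been bucketed as positive, negative, or neutral (the labeling heuristics themselves – message counts, flow completion, drop-off detection – are Haptik's and are not specified here), and that the percentages are taken over all conversations, including neutral ones.

```python
from collections import Counter

def isat_score(labels):
    """Compute ISAT = Positive% - Negative% from per-conversation labels.

    `labels` is a list of strings, one per conversation: "positive",
    "negative", or "neutral". Neutral conversations count toward the
    total but contribute to neither term (an assumption about how the
    percentages are taken; the whitepaper may define this differently).
    """
    if not labels:
        return 0.0
    counts = Counter(labels)
    total = len(labels)
    positive_pct = 100.0 * counts["positive"] / total
    negative_pct = 100.0 * counts["negative"] / total
    return positive_pct - negative_pct

# Illustrative data: 12 positive, 5 neutral, 3 negative conversations,
# i.e. 60% positive and 15% negative, giving an ISAT of 45.
labels = ["positive"] * 12 + ["neutral"] * 5 + ["negative"] * 3
print(isat_score(labels))  # 45.0
```

Note that a high neutral share depresses neither term directly, but it caps how high the score can go – which is why, as discussed below, a large neutral bucket is itself a diagnostic signal.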
ISAT is non-intrusive, independent, works at scale, and measures every conversation in the system. It focuses on query resolution, thus giving an accurate count of users who benefited. It effectively separates serious users from non-serious ones. And feedback is instantaneous and can be collected through any channel.
How ISAT Can Help Brands
At Haptik, our Customer Success Managers have been using ISAT for some time now, to measure and optimize the performance of the IVA solutions we’ve implemented for our partners.
Broadly speaking, a good ISAT score indicates a positive use case and a well-designed conversational flow.
A bad ISAT score indicates that a lot of conversations are in the ‘Negative’ or ‘Neutral’ buckets. A greater number of conversations in the ‘Neutral’ bucket could indicate that the IVA is receiving a lot of low-intent users, which might be due to wrong targeting. A greater number of conversations in the ‘Negative’ bucket, on the other hand, indicates poor IVA design (in terms of unclear bot copy, poor conversational flow, etc.), a bad or irrelevant use case, or other issues such as API failures.
By conducting a detailed analysis of the findings from ISAT, a Customer Success Manager can focus on moving more conversations from the ‘Negative’ or ‘Neutral’ buckets to the ‘Positive’ bucket – thus improving the overall performance of the IVA solution.
ISAT Helps Drive Customer Success
Our Customer Success Team studied over 50 IVA solutions implemented by Haptik in order to develop the ISAT framework – analyzing more than 5000 conversations to validate the principles that we have used to define the success of IVAs.
Leveraging ISAT, businesses can derive actionable insights that will substantially enhance the performance of their virtual assistant solutions, and significantly elevate their overall customer experience.
At Haptik, we understand, perhaps more than anyone else, that building a great product is only the start of our work. Once a solution has been implemented, our Customer Success Team works closely with our partners to improve the end-user experience on the IVA and enhance its overall performance – with the ultimate aim of delivering greater ROI.
It is our hope that the ISAT framework will be a significant contribution to the still relatively young Conversational AI space.
I have barely scratched the surface of the science behind ISAT, and its business impact, in this article. To take a deep-dive into ISAT, do read our whitepaper, now available for download.