Chatbots are the future, no doubt about it. They allow us to engage with companies digitally, without having to endure that mind-numbing ‘please hold’ music or the heavily accented yet lightly trained contact centre agent who finally answers, yet cannot resolve anything.
Oh, to simply click on an app and have everything resolved via a friendly and highly competent chatbot called Bert.
Unfortunately, there are few Berts in production. In most cases, the customer experience a chatbot offers is even more infuriating. Chatbots, it seems, are still a little dim-witted. When they finally understand what you are actually asking, they tend to offer a generic answer that fails to consider your specific situation or context. And no matter how hard you try to be specific, they prefer to stick to the general.
This experience recently sent a colleague of mine, who banks with a fully online bank in the Middle East, over the edge. To save costs, the bank has gone 100% digital. Except that none of its digital robo-advisors can answer a targeted, specific question. So my colleague has decided to leave them for a bank with real people he can actually talk to.
The question, then, is this: with so much progress in AI, why are we still struggling to deliver a chatbot capable of acting like an expert advisor rather than a dim-witted assistant? To act like an expert, it needs to first diagnose my situation or context. It then needs to diagnose the root cause of my problem or need, and only then identify, from all possible options, a solution that resolves that specific problem or need.
The answer to this question is complicated, but let me give it a shot. The first challenge chatbot developers face is ensuring that a person using free text or voice is correctly understood by the technology, typically via Natural Language Understanding. To ensure customer intent is understood, a great deal of effort must be expended mapping all possible customer inputs to their relevant intents.
Once this is achieved, the next challenge is ensuring that the chatbot has the logic required to respond effectively to the person's request. In many cases, this logic is coded as decision trees, so that when a specific request is made, the chatbot has a clear logic pathway to follow in order to resolve it. The problem with coded logic is that you need to have pre-determined all possible pathways, and given the combinatorial explosion of contextual variables, this is often very difficult to achieve. Products, policies and processes also change frequently, making the maintenance of this logic very challenging. As a result, customers tend to find the answer to their question is either wrong or too generic, failing to address all the variables they needed considered.
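To see why coded pathways explode, consider a minimal sketch of decision-tree logic in Python. The product lines, policy windows and responses here are invented for illustration; the point is structural: every new contextual variable multiplies the branches a developer must write and maintain.

```python
def refund_response(product, days_since_purchase, has_receipt):
    """Hard-coded decision-tree logic: every combination of
    contextual variables needs its own explicit branch."""
    if product == "electronics":
        if days_since_purchase <= 30:
            if has_receipt:
                return "Full refund available in-store or online."
            return "Exchange only without a receipt."
        return "Outside the 30-day electronics window."
    if product == "clothing":
        if days_since_purchase <= 60:
            return "Full refund with or without receipt."
        return "Outside the 60-day clothing window."
    # Every new product line, policy change, or contextual variable
    # (loyalty tier, region, payment method...) multiplies the
    # branches that must be coded, tested and maintained.
    return "Please contact an agent."
```

With three variables this is already awkward; add a few more and the tree becomes unmanageable, which is exactly the maintenance problem described above.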
A way around this limitation is to have the chatbot learn from training data, drawn from existing knowledge bases and/or large sets of unstructured data. The aim here is to allow the chatbot to learn from what is already available and then improve the accuracy of each response using machine learning. In other words, start with what exists and then, through repeated user or customer interactions, refine the responses. While this makes a whole lot of sense in theory, customers are seldom keen to be chatbot teachers, and tend to demand a certain level of engagement accuracy upfront. Getting the logic accuracy to an acceptable level becomes a major hurdle – one that keeps many chatbots frustratingly trapped within innovation hubs and pilot projects.
Fortunately, a third option for building chatbot logic is now available: capturing expert logic in data tables rather than decision trees. This method allows highly complex prescriptive logic to be captured as structured data, offering chatbots contextually relevant, adaptive logic that helps drive relevant customer engagements.
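As an illustration of the general idea (the field names and rule contents below are invented for this sketch, not any vendor's actual format), the same policy logic can live as rows in a data table, with one generic matcher that selects the row fitting the customer's context. Changing a policy then means editing a row, not rewriting branching code.

```python
# Prescriptive logic captured as data rows rather than coded branches.
# Each rule lists the conditions it applies to; None means "any value".
RULES = [
    {"product": "electronics", "max_days": 30, "receipt": True,
     "answer": "Full refund available in-store or online."},
    {"product": "electronics", "max_days": 30, "receipt": None,
     "answer": "Exchange only without a receipt."},
    {"product": "clothing", "max_days": 60, "receipt": None,
     "answer": "Full refund with or without receipt."},
]

def resolve(context):
    """Return the answer from the first rule matching the context."""
    for rule in RULES:
        if rule["product"] != context["product"]:
            continue
        if context["days_since_purchase"] > rule["max_days"]:
            continue
        if rule["receipt"] is not None and rule["receipt"] != context["receipt"]:
            continue
        return rule["answer"]
    return "Please contact an agent."
```

Because the matcher is generic, adding a new product line or a new contextual variable is a data change rather than a code change – which is what makes this form of logic easier to maintain than a hand-coded tree.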
Platforms such as CLEVVA have specialized in this form of logic capture and deployment, allowing chatbots to leverage this logic via APIs. As a result, customer interactions can now be guided via this dynamic form of data-driven prescriptive logic.
This means organizations can now deploy digital experts that not only replicate the prescribed logic they need applied to all regulated customer engagements (and produce a detailed record to prove it), but also offer customers a far more contextually rich sales or service experience.
By overcoming the limitations of decision-tree coded logic, technologies like CLEVVA offer digital teams a rapid way of getting their chatbots production ready. Rather than deploying limited logic, or waiting for accuracies to reach acceptable levels, they can deploy with the confidence that customer context will be handled in a structured, consistent and compliant way – in line with a defined engagement formula.