For many companies, artificial intelligence remains elusive – the realm of a few data scientists hired to develop working prototypes that showcase to the executive what the digital future could look like. And while these prototypes, demonstrated from funky-looking Innovation Hubs and Digital Labs, are often amazing, few translate into fully fledged operational deployments that change the course of the business.
The reason is that many prototypes are built on a small, narrow set of training data that simply illustrates the potential of AI. However, as soon as an executive asks the team to veer off script, the limitations of the logic become self-evident.
So why have so few companies succeeded in deploying a fully operational AI capability when so much investment is pouring into the world of AI? The short answer is that it has less to do with the technology, and more to do with the existing environment in which the technology needs to operate.
AI in a perfectly connected, data-rich world can perform incredible things. The problem is few companies operate within a perfectly connected, data-rich reality. Most of them continue to be held back by one or more of the following.
- Insufficient quality and quantity of training data. Few companies have the quantity and quality of data required for their AI algorithms to develop the predictive accuracy needed to make them production ready. This is often exacerbated by limited integration of different legacy systems, resulting in data being stored in different parts of the organization, and often in different formats.
- Risk of logic error. In companies that operate within strict regulatory and compliance frameworks, 90% predictive accuracy is not good enough to deploy live. It needs to be 100%, and that is very difficult to achieve given the dependency on training data.
- The black box problem. Many companies, such as financial institutions, need to prove that decisions made by AI solutions were in line with prescribed rules and regulations. This means they require a detailed audit trail of how every decision was reached. For many AI platforms, this is difficult to get right.
How then do companies get their AI solutions production ready when these dependencies may take them years to resolve?
One answer is to shift the focus away from AI that depends on unstructured data, and focus initial efforts on AI that thrives on structured data. This means focusing on the capture and perfection of prescriptive, not predictive, logic. Prescriptive logic is the logic that must be applied when making specific decisions or taking specific actions, based on a set of predetermined rules that govern the required outcomes. It is the logic that resides in a few experts' heads, and which historically has been captured to limited effect using knowledge bases or scripted expert systems. It is the logic known to the organization, and therefore not dependent on AI technology to work it out.
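To make the idea concrete, prescriptive logic can be captured as an ordered set of predetermined rules mapped to required outcomes. The sketch below is a minimal, hypothetical illustration – the customer fields, rules, and product names are invented for this example, not drawn from any particular platform:

```python
# A minimal sketch of prescriptive logic captured as predetermined rules.
# All field names, thresholds, and actions below are hypothetical.

RULES = [
    # (condition on the customer profile, prescribed action)
    (lambda c: c["age"] < 25 and c["income"] < 30_000, "offer starter account"),
    (lambda c: c["has_mortgage"] and not c["has_home_insurance"], "offer home insurance"),
]

def prescribe(customer: dict) -> str:
    """Return the first prescribed action whose rule matches the customer."""
    for condition, action in RULES:
        if condition(customer):
            return action
    return "no action"  # default when no predetermined rule applies

print(prescribe({"age": 22, "income": 25_000,
                 "has_mortgage": False, "has_home_insurance": False}))
# → offer starter account
```

The point is that every outcome is known in advance and traceable to a rule the organization itself wrote down – nothing has to be inferred from training data.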
Examples include the logic sales experts apply when unpacking customer needs, recommending products and identifying possible cross-sales and leads. It is the logic service experts apply when diagnosing customer queries and identifying the right solutions, no matter what the context. And it is the logic operational experts apply to diagnosing situations and taking appropriate actions, based on clear policy and procedure rules.
The objective of converting this logic into digital expertise is less about improving predictive outcomes, and more about ensuring the consistent, compliant replication of the logic to all customer and operational engagements and challenges. It means that phase 1 of your AI journey looks at digitizing known and existing expertise, and validating this logic by asking existing staff to use it as they work i.e. allow it to augment them.
And once the known logic is validated, you can then look to optimize this by adding the insights derived from the resulting training data. And then once this is accomplished, you can move to unlock the full power of AI by applying additional cognitive computing and machine learning capabilities.
So why then have so few organisations focused their efforts on prescriptive logic?
The answer is based on our historic approach to knowledge capture, and the tools used to capture prescriptive logic. These include documentation and decision tree or process mapping tools.
From a documentation perspective, we have always been limited by the one-dimensional nature of the medium. This means that to reflect the many possible scenarios or variables that can influence a specific decision, you need to try to write out each possibility in ever-deepening levels of description. Not only does this make these documents overly complicated and practically unusable, it also makes maintaining the logic extremely difficult.
Process mapping tools and expert systems then tried to overcome this limitation by applying two-dimensional decision tree or scripted logic. While this approach initially gave the mapping team a sense of cognitive comfort that the logic made sense, the explosive nature of this format makes it very difficult to capture every possibility, let alone maintain the logic, especially as variables increase and change.
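The "explosive nature" of decision-tree formats is easy to quantify: each extra yes/no variable can double the number of branches that must be written out and maintained. A quick back-of-the-envelope calculation:

```python
# Why scripted decision trees explode: with n independent binary
# variables, a fully written-out tree can require up to 2**n paths.

def tree_paths(num_binary_variables: int) -> int:
    """Upper bound on distinct paths in a full binary decision tree."""
    return 2 ** num_binary_variables

for n in (5, 10, 20):
    print(f"{n} variables -> up to {tree_paths(n):,} distinct paths")
# 5 variables -> up to 32 distinct paths
# 10 variables -> up to 1,024 distinct paths
# 20 variables -> up to 1,048,576 distinct paths
```

At twenty variables the tree already has over a million possible paths, which is why mapping teams struggle to capture every possibility, let alone keep the branches current as rules change.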
As a result, many organisations have placed their hope in AI, hearing that it can read through every knowledge article and miraculously work out the missing detail. The truth is that AI can work out hugely complex data relationships and patterns, but it needs to be given a fighting chance to start with. And given the poor quality of documented logic in most company knowledge bases, achieving a workable predictive accuracy is proving to be harder than initially hoped.
An alternative starting point
What many organisations should consider as their first step in the AI journey are platforms that specialize in the capture and replication of multi-dimensional prescriptive logic based on structured, not unstructured, data. These platforms are designed to overcome the limitations of decision-tree logic, and use data tables and relationships to reflect existing expert logic in ways that can drive consistent, compliant decision-making.
Most of these technologies play in the space of augmented intelligence: existing staff offer the customer a human interface while the technology acts as the user's virtual expert, helping them navigate every known situation so they ask the right questions, offer the right answers, and take the right actions, with a detailed record to prove it.
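A hedged sketch of what such table-driven guidance with a built-in audit record might look like. The decision table, query types, and field names are hypothetical illustrations; a real platform would hold this in a database and a proper audit store, not in source code:

```python
# Hypothetical table-driven prescriptive logic with an audit trail.
from datetime import datetime, timezone

# Each row prescribes the next step for a known situation.
DECISION_TABLE = {
    ("billing query", "amount disputed"): "ask for the disputed invoice number",
    ("billing query", "payment failed"): "check the payment method on file",
    ("service outage", "single user"): "run the line diagnostic script",
}

audit_log = []  # every decision is recorded so it can be audited later

def next_step(query_type: str, detail: str, agent_id: str) -> str:
    """Look up the prescribed step and record how the decision was reached."""
    step = DECISION_TABLE.get((query_type, detail), "escalate to a human expert")
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "inputs": (query_type, detail),
        "prescribed_step": step,
    })
    return step

print(next_step("billing query", "payment failed", agent_id="agent-007"))
# → check the payment method on file
```

Because every lookup appends its inputs, output, and agent to the log, the audit-trail requirement noted earlier is satisfied by construction rather than reverse-engineered from a black box. Unknown situations fall through to a human expert, which is the augmentation safety net described below.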
The benefit of augmented intelligence is that it works with existing staff, and so reduces the risk of decision error. If the prescribed logic ever fails, staff can be taught to minimize the impact by handling the customer engagement themselves, and then feeding the issue back to the AI team to adjust and improve. This ensures that the organization, in partnership with its staff, continues to test and enhance the prescribed logic while benefiting from the performance support. As a result, staff performance improves, dependency on training reduces, and the business is gifted with rich training data that can be used to learn and improve predictive accuracy.
Augmented intelligence makes AI practical and applicable to today’s business reality, and is not dependent on a future digital business model that may take many years to materialise. It also allows staff to benefit from digital intelligence and to migrate gradually to more value adding roles. This lowers the shock impact on the business system, and allows organisations to embark on a more managed digital migration without demanding a dramatic and disruptive digital transformation.