Article published by ITWeb 19th February 2019
We still need to be hyper-critical of the impact of AI-powered decisions on people and society.
Everyone is talking about artificial intelligence (AI). The fourth industrial revolution is upon us, and it is challenging our thinking about technology, people, jobs, politics, business, economies and society – to name a few.
What is so enticing about AI is its ability to scale: to leverage enormous sets of data and processing power and, in so doing, to see things we struggle to see.
AI algorithms are increasingly outperforming highly skilled and experienced humans in specific areas, from games such as chess and Go (where DeepMind’s AlphaGo beat the world’s best players), to share trading, fault finding and risk assessment. These algorithms can learn by ‘playing’ against themselves and, in a matter of hours, become capable of seeing patterns and predicting outcomes that normally take people a lifetime to perfect.
It feels a little unfair. As non-networked human beings who need to master so many skills in life, we have little chance of outcompeting a networked algorithm whose very existence is defined by perfecting a narrow set of decisions or tasks, and which can be trained to see patterns from huge volumes of data; volumes our brains simply cannot hope to consume in many lifetimes.
The implication of this is that, given enough relevant data, algorithms will come to the most probable decisions more quickly, more consistently and more often than people can.
As a result, I would far rather use AI than a team of highly trained professionals to decide who in a huge crowd is the most probable bomb suspect.
The same applies to online stores such as Amazon. AI is far more effective at predicting the buying patterns of millions of customers than a thousand different store assistants. A store assistant may be really good at one-on-one engagements but can’t cope with high volumes of shoppers all at once. Assistants also vary in their skills, and can’t easily transfer what they learn to one another, even if they want to.
Algorithms can, and as they process enormous volumes of consumer data, their predictions become more accurate. Every time those predictions are validated by customer behaviour and choices, they improve further.
Does this mean people have no place alongside these algorithms? No. People should specialise in areas where decision accuracy within a data-rich context is not the differentiator, such as customer experience and service. Nor does it mean we should simply hand ever more of our decisions over to predictive algorithms.
One reason is that in many cases, AI-driven decisions remain problematic, largely because of inherent bias in the data they are trained on. This leads to racial, gender and other biases being reinforced rather than eliminated. Until we can be comfortable that the data and patterns feeding AI decisions produce decisions we support, we still need to be hyper-critical of the impact of AI-powered decisions on people and society.
Another reason is that in certain cases it’s less about the accuracy of the decision and more about the pathway taken to that decision. This is typically where decisions are bound by very clear regulatory and compliance rules, and where the ‘right’ decision rests more on proving that the correct decision-making pathway was followed than on proving that the ‘best’ decision was made.
With predictive decision-making, this is not so easy. The ‘black box’ often prevents us from seeing why a certain decision was made, and the nature of the decision-making logic means the focus is more on getting to the most probable answer than on following a prescribed decision-making pathway.
This is why heavily regulated environments, like the legal, insurance and banking industries, currently struggle with AI adoption. Most start their AI journeys by applying decision-tree mapping tools, in the hope that these will build the data required for machine learning. The contextual limitations of decision-tree logic soon prove fatal, however, with many ‘digital experts’ being shelved before production.
Fortunately, new technologies are emerging that overcome this challenge. They capture contextually rich decision-making pathways using data-driven prescriptive logic, which lets you ‘digitise’ regulated decision-making flows with confidence that, while context can be considered, a prescribed logic pathway will always be followed, and that a detailed record of that pathway is kept for compliance.
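To make this concrete, here is a minimal, illustrative sketch of what a prescribed decision pathway with a built-in compliance record can look like. The article does not name a specific tool, and the loan-style steps, field names and thresholds below are hypothetical; the point is simply that every case passes through the same fixed sequence of checks and leaves a detailed audit trail.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable, List

@dataclass
class AuditEntry:
    step: str
    outcome: str
    reason: str
    timestamp: str

@dataclass
class Case:
    data: dict
    trail: List[AuditEntry] = field(default_factory=list)

def record(case: Case, step: str, outcome: str, reason: str) -> None:
    """Append a compliance record for every step, whatever the outcome."""
    case.trail.append(AuditEntry(step, outcome, reason,
                                 datetime.now(timezone.utc).isoformat()))

# The prescribed pathway: every case goes through these steps, in this order.
def verify_identity(case: Case) -> bool:
    ok = case.data.get("id_verified", False)
    record(case, "verify_identity", "pass" if ok else "fail",
           "identity document checked against registry")
    return ok

def check_affordability(case: Case) -> bool:
    ok = case.data.get("monthly_income", 0) >= 3 * case.data.get("instalment", 0)
    record(case, "check_affordability", "pass" if ok else "fail",
           "income must cover at least three times the instalment")
    return ok

PATHWAY: List[Callable[[Case], bool]] = [verify_identity, check_affordability]

def decide(case: Case) -> str:
    for step in PATHWAY:          # the order is fixed; no step may be skipped
        if not step(case):
            return "declined"
    return "approved"

if __name__ == "__main__":
    case = Case({"id_verified": True, "monthly_income": 30000, "instalment": 5000})
    print(decide(case))           # approved
    for entry in case.trail:      # the detailed record kept for compliance
        print(entry)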
Data-driven prescriptive logic is increasingly enabling digital decision-making where the prescribed pathway matters as much as the accuracy of the decision outcome. Within that pathway, predictive algorithms can then be used to optimise the routing of cases and to help improve the accuracy of the resulting decisions.
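Continuing the same hypothetical sketch, one way this combination can look is for a predictive score to choose only the route a case takes (a fast track versus senior review, say), while the prescribed checks still make the decision itself. The stand-in score and threshold below are purely illustrative.

```python
def predicted_risk(case_data: dict) -> float:
    """Stand-in for a trained model's risk score between 0 and 1."""
    # A real score would be learned from historical decision data.
    return 0.9 if case_data.get("prior_defaults", 0) > 0 else 0.2

def route(case_data: dict) -> str:
    """The prediction only chooses the route; the prescribed pathway decides."""
    return "senior_review" if predicted_risk(case_data) > 0.7 else "fast_track"
```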
This also applies to ethical decisions. AI may reach the statistically ‘right’ decision (harm this person rather than that one, because the logic says so), but ethical rules may make that decision highly problematic. As in cases where compliance is key, you may want to enforce the decision pathway to ensure adherence to ethical guidelines, and not just legal ones.