To enable organisations to thrive within the knowledge economy, much attention has focused on technologies and approaches that improve the capture, maintenance, and distribution of organisational information across a diverse workforce. Vast improvements have been made in areas such as information search, content management, quality control workflows and collaboration. Yet, staff still struggle to take full advantage of all the ‘helpful’ information that is increasingly available to them.
One possible reason is that it is not more information they require. In tackling jobs within a rapidly emerging digital economy, the pressure on staff is to be able to perform without necessarily possessing the knowledge. It is in the doing, not the knowing. It is all about speed and accuracy of execution, and if staff prove too slow to react and adapt, they run the risk of being replaced by automation.
This requires a change in our thinking. It requires shifting our focus from the effective capture and distribution of information or single-dimensional logic to the effective capture and distribution of expertise or multi-dimensional contextual logic. It requires us becoming less interested in offering people decision-making maps, and more focused on offering them decision-making GPSs. It means shifting our focus from simple knowledge capture to the capture of expertise, in formats that help people perform more accurately and effectively than ever before.
This article explores some of the limitations of our current information-centred approach, and details how the capture and distribution of multi-dimensional, contextual logic can help scale organisational expertise, lower training costs, enable greater consistency in how staff make decisions in line with required policies and procedures, and empower people to focus more on value-adding behaviours within their existing jobs.
The Changing Performance Context
For organisations to survive and thrive in today’s digital economy, they need to be able to effectively bottle their unique business formula or decision-making algorithms in a way that ensures customers receive a consistent experience, irrespective of the chosen channel for engagement. Before automation, many of these formulae were contained in specific people’s heads, built up over many years of experience. In an attempt to capture these formulae for all staff to apply, companies have invested heavily in the documentation of core process and decision-making logic.
Where feasible, this mapped logic is coded into core operational systems. This enables the execution of operational decisions to be driven by technology and removes the decision-making risk of inexperienced people.
While operational systems help automate many core operational decisions and actions, the decisions and actions that precede operational execution are still largely dependent on human knowledge and skill. As an example, while customer relationship management (CRM) systems automate many operational decisions, and help sales and service teams manage their activities efficiently, they still require users to be responsible for the correct application of the organisational formulae when executing assigned activities. Within a sales context, this could include facilitating a detailed need-based conversation that looks to identify relevant products that match a client’s specific needs. Within a service context, it could involve diagnosing a specific client’s query, and then applying the relevant policy and procedural rules to resolve the query effectively.
Either way, operational systems still rely heavily on people’s ability to do their part in the integrated execution of the total organisational formula, and to report back to the operating system the outcome of any external engagement so the system can then trigger the next action.
Human beings, therefore, continue to play a key role in acting as the primary interface between clients, the environment, and the operating systems. Whilst many companies are looking to artificial intelligence to remove human involvement, most are still looking for ways to enhance existing staff performance. To help ensure human relevance, we need to find effective ways of lowering the risk of inexperienced or poor human decision-making and to better unlock the unique value that humans can bring to organisations. This value is not in the replication of known repetitive formulae, but in the creation of new ideas and the engagement with non-repetitive logic.
The impact of an information-centric training paradigm
To limit variation in the execution of desired formulae, many organisations still apply an information-centred approach to learning. This involves capturing the formulae in training manuals and e-courses, and then asking staff to learn this information in the hope that this knowledge will ultimately translate into consistent, compliant performance. In general, few organisations experience this desired outcome. And there are a few key reasons why this is so.
The first is our apparent desire to continue transferring vast quantities of information into human brains. This information usually includes the logic required to make prescribed organisational decisions and actions. These are largely based on product rules, process rules, policy rules, technical rules and system rules. These rules are typically captured in informational objects such as documents, e-modules, graphics and process maps.
And then to speed up the information upload times during training, we encourage learners to use their short-term memory. We also tend to test learner recall within 24 hours of the training session, so as not to penalise the inevitable ‘forgetting’. The result is that within a few weeks much of the information is no longer remembered. Only the information that is applied within a few days of the training tends to migrate across to longer-term memory.
This means that trained and assessed staff often end up with varied versions of the desired organisational formula in their heads, inevitably resulting in inconsistent application back in the workplace.
And the problem is not limited to memory recall. It is compounded by the increasing rate of information change. Little remains static: products, processes, policies and system rules change frequently. Given that humans are ‘non-networked’ and cannot be version controlled off a central server, the information version control needs to be done manually. This is done either via constant retraining or via requests for staff to read e-mails and updated documents and self-version control their brains.
The reality is that this version control is seldom successful. The personal benefits to staff are usually minimal (the benefit tends to rest entirely with the organisation), and so staff tend to avoid investing the cognitive energy required to constantly update their brains to the latest version. As a consequence, change is often experienced as negative and exhausting, and organisations are increasingly forced to invest heavily in change management programmes that focus much of their energy on motivating people to embrace the version-control effort willingly. This seldom materialises.
The resulting information-centric knowledge management paradigm
To overcome the issue of memory and information change, many organisations then look to knowledge management for the answer. Companies remain hopeful that when people forget, they will proactively access an online library of information to plug their self-confessed knowledge gap. This usually begins with a determined effort to capture known organisational information, with a focus on documenting all the details around every organisational product, policy, and procedure. This information is often initially saved on a central server that acts as the general location for all organisational reference material.
As more information is documented, staff tend to become overwhelmed and increasingly frustrated by the non-intuitive folder structures and confusing file naming conventions. Many actually stop referring to this central repository for organisational knowledge, and prefer to rely on informal knowledge networks that reinforce inconsistent execution based on varying versions of truth.
To mitigate this unintended consequence, organisations often look to invest in more advanced knowledge management systems with the aim of improving the quality of information capture, maintenance, distribution, and search. The commonly held belief is that if we can just make it easier to find the right reference information, people will make use of this resource more effectively.
As a result, with improved meta-tagging and search algorithms, improved workflows and version-control mechanisms, better integration with social media platforms, and greater integration into operational systems, staff do tend to find it a lot easier to locate and maintain relevant information. The problem is that, even with these advances, they still tend to avoid making active use of the available information to guide their operational decisions and actions.
A possible cause is the underlying logic structure of information. Information is by its nature linear. Decision-making logic is typically multi-dimensional. A decision usually depends on a combination of many factors. With most decision-making, context matters, and so the answer a person should come to will often depend on the combination of contextual factors they are dealing with, as well as the combination of decision rules they need to apply. This is why standard operating procedure manuals and frequently asked questions are so limited. They usually apply to only a few generic situations, and leave the non-generic interpretation to the end user.
The problem with optimising a model that needs a major overhaul
To overcome this issue of contextual relevance, information-centred technologies are increasingly trying to improve the ability of authors to capture and maintain context-specific information objects that can provide answers to very specific use cases. Authors are enabled to capture ‘generic’ paragraphs that can be maintained centrally, and inserted into many other sub-documents as required. The theory is that this allows you to build many versions of the same document, with just the information that varies per scenario being captured and maintained at the sub-document level.
While this sounds workable in theory, contextual factor combinations are exponential by nature, and the addition of just a few factors can lead to thousands of possible scenario outcomes. The sheer number of ‘exception cases’ means that building and maintaining separate information objects to answer each use case is usually impractical. Even if you enhance your search algorithms to find context-specific information within documents, the application of a single-dimensional model to a multi-dimensional challenge still ultimately leaves most of the interpretation risk with the individual.
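The exponential growth described above is easy to demonstrate. In this hypothetical sketch (the factors and values are purely illustrative), just five contextual factors with three or four values each already produce hundreds of distinct scenarios, each of which would need its own information object under a document-centred model:

```python
from itertools import product

# Hypothetical contextual factors, each with a handful of possible values.
factors = {
    "customer_segment": ["retail", "business", "premium"],
    "product_line": ["savings", "loan", "insurance", "investment"],
    "channel": ["branch", "call_centre", "online", "mobile"],
    "region": ["north", "south", "east", "west"],
    "account_status": ["new", "active", "dormant", "closed"],
}

# Every combination of factor values is a distinct scenario an author
# would have to cover with a separately maintained information object.
scenarios = list(product(*factors.values()))
print(len(scenarios))  # 3 * 4 * 4 * 4 * 4 = 768 scenarios from just 5 factors
```

Adding a sixth factor with five values would multiply this to 3,840 scenarios, which is why maintaining one document per use case quickly becomes impractical.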
The reality is that you cannot fine-tune a single-dimensional model in the hope that it can mirror a multi-dimensional model. Fine-tuning won’t work. You unfortunately need a major overhaul.
Moving from an information-centred model towards a data-centred model
The reality is that when users request help to find a relevant answer to a specific question, and especially when that answer depends on many factors and is influenced by many rules, they are usually not looking for information to help them work out the answer. They are looking for the answer.
To provide the answer to a very specific question, one needs to tap into the logic of an expert. This logic reflects the mental algorithm they typically apply; an algorithm that they have improved and fine-tuned over multiple applications and failures. And it is often applied across a series of decision-making steps that include (a) diagnosing the context (situational factors); (b) diagnosing the root cause of the problem or challenge; (c) analysing which of the available solutions they should apply to the identified context and root cause; and finally, (d) applying the identified solution in line with known decision-making rules (e.g. policy and procedural rules). Each stage in this process involves navigation through an exponential set of possibilities. And each outcome compounds the possibilities that follow.
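The four stages above can be sketched in code. This is a minimal illustration under assumed rule tables; the function names, factor values, and rules are invented for the example and do not reflect any particular system:

```python
def diagnose_context(answers):
    # (a) identify the situational factors from the user's answers
    return {"segment": answers["segment"], "urgency": answers["urgency"]}

def diagnose_root_cause(context, symptoms):
    # (b) map observed symptoms to a root cause, conditioned on the context
    if "billing" in symptoms and context["segment"] == "business":
        return "invoice_dispute"
    return "general_query"

def select_solution(root_cause):
    # (c) choose from the available solutions for this root cause
    solutions = {"invoice_dispute": "escalate_to_billing",
                 "general_query": "standard_response"}
    return solutions[root_cause]

def apply_rules(solution, context):
    # (d) apply policy rules: urgent cases receive priority handling
    priority = "high" if context["urgency"] == "urgent" else "normal"
    return {"action": solution, "priority": priority}

# Walk the four stages for one hypothetical service query.
answers = {"segment": "business", "urgency": "urgent"}
context = diagnose_context(answers)
root_cause = diagnose_root_cause(context, symptoms=["billing"])
solution = select_solution(root_cause)
outcome = apply_rules(solution, context)
print(outcome)  # {'action': 'escalate_to_billing', 'priority': 'high'}
```

Note how each stage's output narrows the input to the next, which is what makes the possibilities compound: a different context at stage (a) can change every decision downstream.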
This is why offering people decision-making maps (information objects), especially when decision rules are complex and where there are many possible factors or outcomes to consider, has limited decision support value. It simply leaves too much interpretation to the user. We need instead to offer people more useful and effective decision-making GPSs. In effect, it’s like offering them a navigational app capable of guiding them through all the known decision-making paths, and adapting to changing situations and locations; a little like an online expert that takes into account the roadblock, or the new shortcut that gets you to your desired destination more quickly. By taking the interpretation guesswork out of it, people can stop trying to interpret all the complicated decision-making maps they are being offered within their knowledge systems, and can leverage their decision-making GPS to get them to the prescribed destination while they focus more of their energy on other value-adding aspects of the trip. In the vehicle metaphor, this would include avoiding poor drivers, creating a fun social environment for the occupants, and maintaining a safe driving distance. In an organisational context, it could include focusing on offering the client an exceptional customer experience, or looking at ways to innovate a standard process or come up with new thinking on a known problem.
Unlocking human value rather than mitigating human risk
In many jobs today, work primarily involves people executing required decision-making formulae. Whether this involves selling products, answering queries, applying procedures, or executing system actions, the role is predominantly to get it right, according to a set of prescribed rules. There is little room left for people to add new value to their jobs, other than the interpersonal client engagement that is difficult to formularise. As a result, most conventional training is still more focused on preventing human decision error than on developing people in areas where they can potentially add new organisational value. Training is therefore typically more a risk-mitigation spend than a human development investment.
But what if this could change? What if people’s energy could be targeted at areas where there is currently no clear formula; areas where staff can be asked to discover new ways of doing things, not simply to repeat tried and tested formulae? Where learning is more focused on unlocking new insights than on teaching old ones?
For this to be a possibility, organisations need to find a way to remove the need for people to learn and constantly update their knowledge of required decision-making formulae in order to execute required decisions and actions correctly. Only then will organisations be willing to allow people to focus their efforts elsewhere; on the area of the unknown and the uncharted.
Capturing and scaling organisational expertise, not simply knowledge
To do this, organisations need to find a way to capture expert logic, not simply knowledge. This entails capturing all the components that make up a decision and ensuring that no matter what the context, people can be guided in real-time to ask the right questions, find the right answers, and take the right actions based on prescribed decision rules. It means capturing multi-dimensional logic; building a GPS that is aware of all the possible routes, and is capable of adapting to different situations as and when they arise.
To capture multi-dimensional logic in a way that can take the guesswork out of operational decision-making, organisations require tools that are capable of reflecting the logic models used by experts in making known decisions. Early attempts with decision tree tools highlighted the intrinsic limitations of this hard coded ‘catch all’ approach, especially when contextual factors increase and the resulting replication of decision trees becomes unmanageable.
Fortunately, things have changed. Commercially available authoring tools such as CLEVVA now enable non-coders to capture a wide range of multi-dimensional logic forms in searchable, shareable logic objects. This includes the ability to capture the diagnostic logic required to analyse which item or product one should choose, based on any combination of attributes or factors; to capture the logic that can cascade down diagnostic levels when the answer to a question depends on the answer to a previous question; to capture the logic that can consider any combination of possibilities and trigger a relevant action; and to capture the logic that looks to identify related possibilities based on prior selections.
Each logic object can then be linked to other logic objects, which in turn link to others, and so on. The result is a dynamic web of logic that can offer relevant answers in any given context. In effect, a decision-making GPS that constantly adapts to your ever-changing environment.
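One way to picture such linked logic objects is the sketch below. This is a simplified assumption of the general idea, not a representation of how CLEVVA or any specific product actually models logic: each object maps combinations of factor values to an outcome, and an outcome can cascade into another linked object for the next decision level.

```python
class LogicObject:
    """Maps combinations of factor values to an outcome, and can hand off
    to another linked logic object for the next decision level."""

    def __init__(self, name, rules, next_objects=None):
        self.name = name
        self.rules = rules                      # list of (conditions_dict, outcome)
        self.next_objects = next_objects or {}  # outcome -> follow-up LogicObject

    def decide(self, facts):
        for conditions, outcome in self.rules:
            if all(facts.get(k) == v for k, v in conditions.items()):
                # cascade into a linked logic object if one exists for this outcome
                follow_up = self.next_objects.get(outcome)
                return follow_up.decide(facts) if follow_up else outcome
        return "refer_to_specialist"            # no rule matched this combination

# Second-level object: which refund procedure applies (illustrative rules).
refund_logic = LogicObject("refund_procedure", [
    ({"account_age": "over_1_year"}, "full_refund"),
    ({"account_age": "under_1_year"}, "partial_refund"),
])

# First-level object: diagnose the query type, cascading into the refund logic.
query_logic = LogicObject("query_type", [
    ({"query": "overcharge"}, "refund"),
    ({"query": "password"}, "reset_password"),
], next_objects={"refund": refund_logic})

print(query_logic.decide({"query": "overcharge", "account_age": "over_1_year"}))
# full_refund
```

Because each object is maintained independently and only linked at the outcome level, a rule change in one object (say, the refund thresholds) propagates everywhere that object is reached, without rebuilding the surrounding decision trees.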
This means that when a user searches for specific help to solve a specific challenge (e.g. ‘Which product should I offer my client?’), they no longer need to rely solely on an information object (e.g. a product brochure) for decision support. Instead, they can access a Virtual Advisor app that operates like a digital expert or real-time GPS, helping the user diagnose their context, identify the root cause of their challenge, identify the right solutions based on defined possibilities, and then guiding them through the execution of that solution in line with the latest policy and procedure rules. At the end of the process, the user can also be offered a detailed record of how they came to that outcome, for compliance and analytical purposes; in much the same way as a GPS offers you a record of the route that was taken.
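The compliance record mentioned above can be as simple as logging every question asked and answer given along the route, plus the final outcome. The sketch below assumes a plain JSON structure with invented field names, purely to illustrate the idea:

```python
import json

def record_guided_session(questions_and_answers, final_action):
    """Capture the route taken through the logic (each question and answer)
    together with the outcome, so it can be replayed or audited later."""
    trail = {
        "steps": [{"question": q, "answer": a} for q, a in questions_and_answers],
        "outcome": final_action,
    }
    return json.dumps(trail, indent=2)

record = record_guided_session(
    [("What is the client's segment?", "business"),
     ("What is the query about?", "overcharge"),
     ("How old is the account?", "over 1 year")],
    final_action="full_refund",
)
print(record)
```

Such a trail answers the compliance question "why was this decision made?" with the exact sequence of inputs, in the same way a GPS log shows the route that was actually driven.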
Maintaining human relevance
As new ways of capturing and scaling expert logic emerge, so the pressure on human relevance increases. Artificial intelligence is fast becoming a daily reality, and some argue that the tipping point towards the singularity has already been reached. This ability not only to replicate known decision-making formulae but to enable technology to self-learn and create new decision-making algorithms places enormous pressure on the majority of staff whose roles are defined by prescribed decisions and actions based on prescribed rules.
One way to mitigate the mass redundancy of ‘formula-driven’ staff is to embrace this move towards logic capture, yet to focus efforts on building decision navigation apps that help people perform better, rather than replacing them entirely. This means building performance support that is capable of enabling people to make the required decisions and take the required actions without any prerequisite knowledge. This would then free them up to focus their efforts on the behaviours and decisions that can add increasing value to those prescribed by a formula, and where pure automation is less able to compete. It means arming people with guided decision support so they get the basics right, while becoming stronger at adding the non-prescribed value computers still struggle to add.
Trying to improve the impact of training and knowledge management by optimising the current information-centric paradigm still offers organisations incremental benefits. Yet for the realisation of exponential gains in human performance, organisations need to optimise their ability to capture, maintain and distribute expert decision-making logic. This means building fewer maps and more GPSs. It means ensuring users can access the right decision navigation app as and when needed.
If we can achieve this, we can potentially offer people enough time to adapt and to change how and what they learn, so that they can equip themselves with the capabilities required to thrive in a world where technology will ultimately drive standardised decision-making algorithms. Important and differentiating capabilities such as innovation, creativity, problem-solving, and the ability to operate effectively within a diverse cultural reality will in future become key to our ability to unlock new thinking and enhance new value.
This means the task of replicating known decision formulae should in future be left to technology. Humans should rather be enabled to focus their efforts on adding new value.