Although the terms artificial intelligence (AI), machine learning and cognitive computing are sometimes used interchangeably, they are different concepts. And when it comes to the whole field of AI, it’s worth understanding where big data and data science fit in too.
Wikipedia tells us that artificial intelligence (AI) is the intelligence exhibited by machines and is applied when a machine mimics cognitive functions that humans associate with other human minds, such
as learning and problem-solving. Among the capabilities we classify as AI are the successful understanding of human speech, competing at a high level in strategic game systems, self-driving cars, intelligent routing in content delivery networks, and interpreting complex data.
Ryan Falkenberg, co-CEO of Clevva, explains that there are two different types of AI. “Artificial intelligence is more than just cognitive computing and machine learning,” he says.
“Machine learning/cognitive computing can be likened to the Mars Explorer, which is designed to operate without human intervention. It gathers data from the environment and then looks to make decisions based on the data available. The more data available, the better the decisions. And, as a result of the decisions taken, the results inform future decisions and allow it to learn and adapt, much like a baby learns by exploring its new world.
“The more it tries things, the more it learns, the better it gets. This form of cognitive computing can be seen in things like driverless cars and robots.”
In short, says Falkenberg, this is wherever human decision-making can be removed. AI is loosely split into artificial narrow intelligence – AI focused on a very specific area or field – and artificial general intelligence, the type of AI most people think of when they hear the term, of which he cites IBM’s Watson as an example.
“Then there is the kind of AI that captures known intelligence and expertise and uses three- or four-dimensional logic to assist humans in their decision-making. Called decision navigators, these work to augment humans rather than replace them.”
AI is being used by organisations like Apple and Google to power virtual assistants like Siri and Google Now, Falkenberg adds. “It is used by Facebook and Google Photos to help identify and tag people, places and things. It is used in gaming to make characters more real and give them much-needed personality. Other businesses are using it to detect fraud, predict customer behaviour and target their activities based on those predictions. For example, online shopping sites are using it to try to predict what you will buy before you know it, and sending promotions and coupons based on those predictions,” he says.
“Decision navigators are assisting local banks, insurers, petroleum companies, electronic distributors and telcos to guide staff in their decision-making, within their product, policy and procedural realities. This is particularly useful from a corporate governance point of view, as decision navigators leave an audit trail.”
These navigators are also helping companies to get people on board and trained in substantially less time than was previously required. “Instead of having to teach people everything they need to know by rote learning – most of which they will forget – we can teach them what they need to know and augment that with a decision navigator that will guide them through whatever it is they do on a daily basis, whether it is a sales call, a help desk query or a technical task.”
Wikipedia describes machine learning as the subfield of computer science that gives computers the ability to learn without being explicitly programmed. It evolved from the study of pattern recognition and computational learning theory in artificial intelligence, and explores the study and construction of algorithms that can learn from and make predictions on data. Rather than following strictly static program instructions, these algorithms make data-driven predictions or decisions by building a model from sample inputs.
Machine learning is closely related to (and often overlaps with) computational statistics, which also focuses on prediction-making through the use of computers.
It has strong ties to mathematical optimisation, which delivers methods, theory and application domains to the field.
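The idea of “building a model from sample inputs” can be made concrete with a toy sketch. The data and variable names below are hypothetical, and real systems use far richer models, but the principle is the same: instead of hard-coding a rule, the program estimates it from examples using least squares.

```python
# A toy illustration of learning from data: the program is never told the
# rule y = 2x + 1; it estimates slope and intercept from sample inputs.

def fit_line(xs, ys):
    """Return (slope, intercept) minimising the squared prediction error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Sample inputs generated by a rule unknown to the program (y = 2x + 1).
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
slope, intercept = fit_line(xs, ys)
print(slope, intercept)  # the learned parameters recover the hidden rule
```

Feed the fitted model new inputs and it makes data-driven predictions – the “more data available, the better the decisions” point Falkenberg makes above.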
There is apparently no widely agreed-upon definition of cognitive computing in either academia or industry, according to Wikipedia. However, the term has been used to refer to new hardware and/or software that mimics the functioning of the human brain and helps to improve human decision-making. It is considered to be a new type of computing with the goal of building more accurate models of how the human brain/mind senses, reasons and responds to stimuli.
Some features that cognitive systems may express are: adaptive – they may learn as information changes, and as goals and requirements evolve; interactive – they may interact easily with users so that those users can define their needs comfortably; and contextual – they may understand, identify, and extract contextual elements such as meaning, syntax, time, location, appropriate domain, regulations, user’s profile, process, task and goal.
The foundation of all AI or cognitive machines is big data, which refers to data sets that
are so large or complex that traditional data processing application software is inadequate to deal with them. According to Wikipedia’s definition, challenges associated with big data include capture, storage, analysis, data curation, search, sharing, transfer, visualisation, querying, updating and information privacy. The term big data sometimes refers to the use of predictive analytics, user behaviour analytics or certain other advanced data analytics methods that extract value from data, and seldom to a particular size of data set.
Making sense of the floods of data now available, in both structured and unstructured formats, and developing systems that use the concepts of AI, machine learning and cognitive computing requires the application of data science.
Wikipedia describes data science – also known as data-driven science – as an interdisciplinary field using scientific methods, processes and systems to extract knowledge or insights from data in various forms. It is a “concept to unify statistics, data analysis and their related methods” in order to “understand and analyse actual phenomena” with data, employing techniques and theories drawn from many fields within the broad areas of mathematics, statistics, information science, and computer science, in particular from the subdomains of machine learning, classification, cluster analysis, data mining, databases, and visualisation.
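One of the data-science subdomains named above, cluster analysis, can be sketched in a few lines. The example below is a hand-rolled one-dimensional k-means on made-up readings; a practitioner would normally reach for a library such as scikit-learn, but the sketch shows how structure is extracted from raw numbers.

```python
# Minimal 1-D k-means sketch: repeatedly assign each point to its nearest
# centre, then move each centre to the mean of the points assigned to it.

def kmeans_1d(points, centres, iterations=10):
    """Return the final cluster centres, sorted, after a fixed number of passes."""
    for _ in range(iterations):
        clusters = {c: [] for c in centres}
        for p in points:
            nearest = min(centres, key=lambda c: abs(c - p))
            clusters[nearest].append(p)
        # Recompute each centre as the mean of its members (skip empty clusters).
        centres = [sum(members) / len(members)
                   for members in clusters.values() if members]
    return sorted(centres)

# Hypothetical readings with two obvious groups, e.g. daily transaction counts.
data = [1.0, 1.2, 0.8, 9.8, 10.1, 10.4]
print(kmeans_1d(data, centres=[0.0, 5.0]))
```

The two returned centres summarise the two groups in the data – the kind of pattern a fraud-detection or customer-segmentation system builds on at far larger scale.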
© Copyright 2017. Futurewave Business IT