The dream of creating systems that can simulate human thought and behaviour is not new. Now that this dream appears to be coming closer to reality, there is both excitement and alarm. Famously, in 2014 Prof. Stephen Hawking told the BBC: “The development of full artificial intelligence could spell the end of the human race”. Should we be alarmed by these developments, and what do they mean in practice today?

The origins of today’s AI (Artificial Intelligence) can be traced back to the seminal work on computers by Dr Alan Turing. He proposed an experiment that became known as the “Turing Test” to define the standard for a machine to be called “intelligent”: a computer could only be said to “think” if a human conversing with it could not distinguish it from another human being.

The theoretical work that underpins today’s AI and ML (Machine Learning) was developed in the 1940s and 1950s. The early computers of that era were slow and could only store limited amounts of data, which restricted what could practically be implemented. This has now changed – the cloud provides the storage for vast amounts of data and the computing power needed for ML.

The theoretical basis for ML stems from work published in 1943 by Warren McCulloch and Walter Pitts on a computational model for neural networks, based on mathematics and algorithms, called threshold logic. Artificial neural networks provide a framework for machine learning algorithms to learn from examples without being explicitly programmed. This learning needs large amounts of data and the significant computing power which the cloud can provide.
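To make the idea concrete, here is a minimal sketch of a McCulloch-Pitts threshold unit in Python. It is illustrative only: the weights and threshold are chosen by hand rather than learned.

```python
# A McCulloch-Pitts threshold unit: the "neuron" fires (outputs 1) when the
# weighted sum of its binary inputs reaches a fixed threshold.
def threshold_unit(inputs, weights, threshold):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# With weights (1, 1) and threshold 2 the unit computes logical AND.
print(threshold_unit((1, 1), (1, 1), 2))  # -> 1
print(threshold_unit((1, 0), (1, 1), 2))  # -> 0
```

Modern neural networks chain many such units together and, crucially, learn the weights from data instead of having them set by hand.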

Analysing the vast amount of data now available in the cloud creates its own challenges, and ML provides a potential solution to these. Conventional statistical approaches may not be capable of spotting patterns that a human would see, and programming individual analyses is laborious and slow. ML provides a way to supercharge the human ability to analyse data, and self-learning systems may provide a way around the programming bottleneck. However, this changes the development cycle from programming to training based on curated examples overseen by human trainers, which creates new challenges around testing, auditing and assurance.
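As a toy illustration of that shift from programming to training, the sketch below uses scikit-learn; the feature names and data are invented for the example.

```python
# Instead of hand-coding decision rules, we supply curated, labelled
# examples and let the model infer the decision boundary from them.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: [hours_of_use, error_count] -> needs_maintenance
X = [[10, 0], [200, 1], [500, 4], [800, 9], [120, 0], [950, 12]]
y = [0, 0, 1, 1, 0, 1]

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[600, 5]]))  # the learned "rule" applied to unseen data
```

Testing such a system means assessing its behaviour statistically across representative inputs rather than stepping through explicit program logic – hence the new assurance challenges.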

ML has also provided a way to enhance algorithmic approaches to understanding visual and auditory data. It has, for example, enabled facial recognition systems as well as chatbots for voice-based user interactions. However, ML is only as good as its training and is not able to provide explanations for the conclusions that it reaches. This also creates the risk of adversarial attacks – where a third party spots a weakness in the trained model and exploits it to subvert the system. That said, ML has been applied very successfully to visual component inspection in manufacturing, where it is faster and more accurate than a human.
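The sketch below illustrates the idea behind one well-known class of adversarial attacks (gradient-based perturbation, in the style of FGSM) against a deliberately simple logistic classifier; the weights and input are made up for the demonstration.

```python
import numpy as np

# A tiny logistic classifier with hand-set weights (illustrative only).
w = np.array([2.0, -3.0, 1.5])
b = 0.5

def predict(x):
    return 1 / (1 + np.exp(-(np.dot(w, x) + b)))  # probability of class 1

x = np.array([0.2, 0.4, 0.1])        # original input, classified as class 0
p = predict(x)
# Gradient of the cross-entropy loss w.r.t. x for true label y = 0 is p * w.
grad = p * w
epsilon = 0.5                        # perturbation budget
x_adv = x + epsilon * np.sign(grad)  # step in the direction that raises the loss

print(predict(x))      # ~0.46 -> class 0
print(predict(x_adv))  # ~0.96 -> flipped to class 1
```

In real image attacks the perturbation is spread across thousands of pixels and can be imperceptible to a human; here a larger step is used so the effect is visible in only three dimensions.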

One significant challenge is how to avoid bias – there are several reported examples of bias in facial recognition systems. Bias can come from several sources. There may be insufficient data to provide a representative sample, or the data may have been consciously or unconsciously chosen in a way that introduces bias. The latter is difficult to avoid, since every human is part of a culture which is inherently founded on a set of shared beliefs and behaviours that may not be the same as in other cultures.
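A simple numerical sketch shows how an unrepresentative sample skews whatever is learned from it; the groups and rates are invented.

```python
import numpy as np

# The population is split 50/50 between two groups with different trait
# rates, but the sample over-represents group A nine to one.
rng = np.random.default_rng(0)
group_a = rng.binomial(1, 0.8, size=900)  # 90% of the sample, trait rate 0.8
group_b = rng.binomial(1, 0.2, size=100)  # 10% of the sample, trait rate 0.2

sample = np.concatenate([group_a, group_b])
print(round(sample.mean(), 2))  # ~0.74, far from the true population rate of 0.5
```

Any model trained on such a sample inherits the skew, however sophisticated the algorithm.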

Another problem is one of explanation – ML systems are not usually capable of providing an explanation for their conclusions. This makes training doubly difficult: when the system being trained gets the wrong answer, it is hard to figure out why, yet the trainer needs to know this to correct the error. In use, an explanation may be required to justify a life-changing decision to the person whom it affects, to provide the confidence needed to invest in a project based on a projection, or to justify in a court of law why a decision was taken.
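For contrast, the simplest models are explainable almost by construction: for a linear scoring model, each feature's contribution can be read off directly, as the hypothetical example below shows. Deep neural networks offer no comparably direct reading, which is why explainability remains an open research area.

```python
# For a linear model, an "explanation" is each feature's weight times its
# value. Feature names and weights here are made up for illustration.
weights = {"income": 0.6, "debt": -1.2, "years_employed": 0.3}
applicant = {"income": 3.5, "debt": 2.0, "years_employed": 4.0}

score = sum(weights[f] * applicant[f] for f in weights)
contributions = {f: weights[f] * applicant[f] for f in weights}

print(f"score = {score:.2f}")  # 0.90
for feature, contribution in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature}: {contribution:+.2f}")
```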

A third problem is that ML systems do not have what most people would call “common sense”, because each is currently narrowly focussed on one specialized problem. Common sense comes from a much wider understanding of the world and allows a human to recognize and discard what may appear to be a logical conclusion because, in the wider context, it is clearly stupid. This was apparent when Microsoft released a chatbot that was supposed to train itself from its interactions but did not recognize mischievous behaviour.

Figure 1: AI, Myths, Reality and Challenges

In conclusion, AI systems are evolving, but they have not yet reached the state portrayed in popular science fiction. ML is ready for practical application, and major vendors offer tools to support this. The problems to which AI can usefully be applied today can be described in two dimensions – the scope of knowledge required and the need for explanation. Note that the need for explanation is greatest where decisions require legal justification or where the potential consequences of mistakes are high.

Organizations are recommended to look for applications that fall in the green area of the diagram and to use caution when considering those that lie in the amber areas. The red area is still experimental and should only be considered for research.

For more information on this subject attend the AI track at EIC in Munich in May 2019.
