Before we get ahead of ourselves, we should perhaps pause and ask whether the general use of the term ‘AI’ reflects the stage of development we have actually reached. Have we truly begun developing artificial intelligence, or merely a precursor to it: systems that, at present, are just very advanced algorithms?
As a child growing up through the ’70s and ’80s, I watched many films and TV shows glamorising intelligent robots and cyborgs of the future. I loved Twiki from Buck Rogers, Metal Mickey, and R2-D2, and was obsessed with KITT from Knight Rider. As I grew older, some of my favourite films were dystopian science fiction movies such as RoboCop and The Terminator. The idea of artificial intelligence and autonomous robots has been a popular theme for nearly a century.
Like most Gen Xers, I have lived with the fantasy of AI as a constant in fiction, from childhood through to adulthood. It’s understandable that the technologists and visionaries of my generation and the next are fascinated by AI.
As we continue to develop ‘AI’, there is much excitement within technical communities and, conversely, some fear and trepidation among the general public, fuelled mostly by media looking for the next great apocalypse story. Whilst there are some genuine concerns and potential societal issues arising from the development of ‘AI’, I would argue we are a considerable way off ‘autonomous thinking machines’.
Defining Intelligence
The definition of intelligence found in the OED can be summarised as…
“The faculty of understanding, intellect… a mental manifestation of this faculty, a capacity to understand.”
On the surface this would seem a reasonable definition to apply to the fictional interpretation of Artificial Intelligence, but when assessed against current AI capabilities, describing them as intelligent looks somewhat premature.
Interestingly, and perhaps a little ironically, when I put the question ‘what is intelligence?’ to ChatGPT, it certainly provided a more discerning answer…
Intelligence is the ability to acquire and apply knowledge and skills. It involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience. It can also refer to the ability to manipulate one’s environment or to think creatively.
Commercial AI systems built around Machine Learning (ML) and, more recently, Deep Learning (DL) call into question some of the key criteria in this statement, such as the ability to think abstractly or to comprehend complex ideas.
AI platforms are typically developed to perform a single task, or a very constrained set of tasks. Broadly, these types of AI sit within the classification of ‘Limited Memory’: they are trained to make decisions by analysing historical data sets, which form a reference model for solving future problems. The decisions arrived at, however, depend entirely on the data the systems are exposed to and the ‘learning’ derived from it. What we are discovering from this approach is that some of those decisions stand in stark contrast to what our intelligent human minds would conclude from the same information.
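To make that concrete, here is a minimal sketch in Python of what a ‘Limited Memory’ decision-maker amounts to. The features, historical records, and outcome labels are invented purely for illustration, and scikit-learn is assumed to be available.

```python
# A minimal sketch of a 'Limited Memory' system: every decision it makes
# is a function of the historical data it was trained on.
# NOTE: features, data, and labels are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical reference data: two invented features per record, with a
# label recording the past outcome the model will learn to imitate.
X_history = np.array([[0.9, 0.1], [0.8, 0.3], [0.2, 0.9], [0.1, 0.7]])
y_history = np.array([1, 1, 0, 0])

model = LogisticRegression().fit(X_history, y_history)

# Any future 'decision' is derived entirely from patterns in X_history;
# the model has no comprehension beyond what that data encodes.
print(model.predict([[0.85, 0.2]]))  # -> [1]
```

If the historical records are skewed or incomplete, so is everything the model ‘knows’; there is no faculty of understanding to fall back on.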
A Wolf in Husky’s Clothing
One famous case in the field concerns a group of researchers who set up a test of their latest image-classification system. The task was to distinguish between images of wolves and huskies, and a team of human testers was assembled to oversee the results.
The system appeared to return a very high success rate in the initial stages. As the project progressed, however, the testers raised concerns over the system’s accuracy in distinguishing wolves from huskies.
Following an investigation, it was discovered that the model had associated a key ‘characteristic’ of a wolf with the environment rather than the animal: if an image included snow, the AI classified it as a wolf, completely overlooking the features of the animal itself.
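This failure mode is easy to reproduce in miniature. The following toy sketch is not the original study’s code or data; it simply shows how a perfect correlation between snow and ‘wolf’ in the training set lets a model ignore the animal entirely.

```python
# A toy reconstruction of the wolf/husky failure mode (invented data,
# not the original study's): every training wolf is photographed in
# snow, every husky is not, so 'snow' becomes the deciding feature.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Invented features: [snow_in_background, animal_eye_shape_score]
X_train = np.array([[1, 0.9], [1, 0.8], [1, 0.7],    # wolves
                    [0, 0.85], [0, 0.75], [0, 0.9]])  # huskies
y_train = np.array([1, 1, 1, 0, 0, 0])  # 1 = wolf, 0 = husky

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# A husky photographed in snow: the animal's own features say 'husky',
# but the shortcut the model learned says 'wolf'.
print(model.predict([[1, 0.85]]))  # -> [1], misclassified as a wolf
```

The model is perfectly ‘accurate’ on its own training data, yet wrong in exactly the way the testers observed.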
This example is often cited as evidence of AI being untrustworthy or exhibiting systematic bias. The true story is slightly more nuanced: the focus of the research was, in fact, the human testers themselves and their responses to the technology. The researchers identified that the testers were happy to accept the results from the AI until it demonstrated it was making mistakes. Only then was an investigation called for and the misaligned ‘understanding’ identified.
In summary, the authors Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin argued that…
“Despite widespread adoption, machine learning models remain mostly black boxes” … “Understanding the reasons behind predictions is, however, quite important in assessing trust.”
This argument suggests that the reasoning applied by a machine learning model cannot simply be assumed; a level of human understanding, abstract thinking, and comprehension is required to assess the effectiveness, and therefore the reliability, of the decisions an artificially ‘intelligent’ system arrives at.
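The quote above comes from the paper that introduced LIME (Local Interpretable Model-agnostic Explanations), a technique for inspecting the reasons behind individual predictions. As a sketch of the idea, here is how LIME might be asked to explain the toy wolf/husky model from the earlier example; the data and feature names are the same invented ones, and the ‘lime’ package is assumed to be installed.

```python
# A sketch of inspecting a single prediction with LIME.
# Data, features, and model are the invented wolf/husky toy example;
# assumes the 'lime' and 'scikit-learn' packages are installed.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from lime.lime_tabular import LimeTabularExplainer

X_train = np.array([[1, 0.9], [1, 0.8], [1, 0.7],
                    [0, 0.85], [0, 0.75], [0, 0.9]])
y_train = np.array([1, 1, 1, 0, 0, 0])  # 1 = wolf, 0 = husky
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=["snow_in_background", "animal_eye_shape_score"],
    class_names=["husky", "wolf"],
)

# Explain the husky-in-snow instance the model misclassifies.
explanation = explainer.explain_instance(
    np.array([1, 0.85]), model.predict_proba, num_features=2
)

# Each pair is (feature condition, weight); a dominant weight on
# 'snow_in_background' exposes the shortcut the model has learned.
print(explanation.as_list())
```

An explanation of this kind is precisely the sort of human-readable check the authors argue is needed before placing trust in a black-box decision.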
Similar issues have been found in many commercial AI systems. Automated Video Interview (AVI) platforms, for example, have been reported to exhibit racial and gender prejudices. The outcomes of these AI decisions typically centre on how the systems interpret the data sets they are fed or, conversely, on the data that is overlooked.
Evolution from an Illusory Truth
It is the ability to think abstractly that distinguishes human intelligence from current AI. An AI system’s knowledge is only as good as the reference material it is exposed to and the conclusions it arrives at. The true intelligence lies with the programmers who develop the algorithms for these Limited Memory systems; the technology simply follows the procedures it has been programmed with to compute the results of any such ‘learning’.
Despite our childhood sci-fi fantasies of artificial intelligence becoming a reality, the current stage of AI development should not be labelled as such. By doing so, we run the risk of adopting an illusory truth and instilling a misguided trust in AI decisions that could result in unfavourable and even harmful outcomes. For the moment, we should continue to test these systems with appropriate rigour and challenge, to aid our learning and understanding of this complex field of technology.
May the AI journey continue to evolve towards truly intelligent systems, but for the time being, let’s be realistic and call these systems by a more accurate name: Intelligent Algorithms (IA).