So, just how smart is artificial intelligence?

Despite great (human) minds presenting many clever hypotheses through the years, artificial general intelligence is still a considerable distance away from being fully realised.
By Johan Steyn, Founder, AIforBusiness.net.
Johannesburg, 25 Oct 2021

Artificial intelligence (AI), unlike most other new technologies, has gone through several ‘hype cycles’: an initial period of exuberance about the possibilities, followed by a reality check and disappointment (the so-called ‘AI winter’) before the next major breakthrough.

Human learning, according to psychologist Edward Thorndike, results from the strengthening of connections between neurons in the brain, a property of the nervous system that was then unknown. Thorndike, who worked at Columbia University, published his theory in 1932.

This idea was expanded upon by another psychologist, Donald Hebb of McGill University, who proposed in 1949 that learning works by increasing the strength (or weight) of the connections between neurons that repeatedly fire together in particular patterns of brain activity. Researchers working on an artificial brain believed they now had a blueprint for building a hardware model of the human brain.
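
Hebb's idea is often summarised as ‘neurons that fire together wire together’, and it translates into a very simple weight-update rule. The sketch below, in Python, is only an illustration: the learning rate, array shapes and toy input pattern are assumptions chosen for the example, not anything taken from Hebb's work.

```python
import numpy as np

def hebbian_update(weights, pre, post, learning_rate=0.1):
    """Strengthen connections in proportion to correlated activity.

    weights: matrix of connection strengths (one row per receiving neuron)
    pre:     activity of the sending neurons
    post:    activity of the receiving neurons
    """
    # Hebb's rule: delta_w = learning_rate * post * pre (outer product)
    return weights + learning_rate * np.outer(post, pre)

# Toy example: two input neurons driving one output neuron.
w = np.zeros((1, 2))
for _ in range(5):
    x = np.array([1.0, 0.0])          # only the first input is active
    y = w @ x + np.array([1.0])       # induced firing plus external drive
    w = hebbian_update(w, x, y)

print(w)  # the weight from the active input grows; the silent one stays at zero
```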

These ideas fed into what is known as the Computational Theory of Mind (CTM), which holds that the human mind is a computational system whose thought processes resemble what we currently recognise as software running on a digital computer.

In 1936, Alan Turing introduced his Turing machine, a mathematical model of a device capable of carrying out any computation. Many later saw it as a path towards AI, while Turing himself saw it as the foundation of natural intelligence.
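
In essence, a Turing machine is nothing more than a tape, a read/write head and a table of state-transition rules. The toy simulator below, in Python, is a sketch for illustration only; the bit-flipping rule table is invented for the example and is not one of Turing's own machines.

```python
def run_turing_machine(tape, rules, state="start", blank="_"):
    """Simulate a one-tape Turing machine.

    rules maps (state, symbol) -> (new_state, symbol_to_write, move),
    where move is -1 (left), +1 (right) or 0 (stay). Stops in state 'halt'.
    """
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if 0 <= head < len(tape) else blank
        state, write, move = rules[(state, symbol)]
        if 0 <= head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += move
    return "".join(tape)

# A tiny machine that flips every bit and halts when it reaches a blank cell.
rules = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}

print(run_turing_machine("10110", rules))  # -> 01001_
```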

The development of digital computers that could run the first ‘intelligent’ programs led to a great deal of work on CTM in the 1950s. The Ferranti Mark 1 at the University of Manchester ran one of the first AI programs in 1951: given enough time, it could hold its own against a human opponent in a game of draughts.

To demonstrate advances in machine intelligence to the public, researchers have repeatedly turned to games against human opponents, from Deep Blue's first win over Garry Kasparov at chess in 1996 to AlphaGo's defeat of a professional Go player in 2015.

From the 1950s through to the early 1970s, government-supported AI research focused primarily on language processing. Perceptron networks were considered the best option for automatic language translation. Large sums of money were spent without ever producing a system that could handle the complexities of natural language. Interest in the connectionist approach to AI waned during the first AI winter, which lasted into the 1980s.

Afterwards, the excitement revolved around techniques for turning a standard computer into an expert system, capable of simulating, for example, the diagnostic powers of a human medical doctor. An inference engine drew conclusions by applying a knowledge base to the patient's data. The knowledge base contained all the facts, assertions and rules relating to diseases and other medical conditions, such as their symptoms. These systems were built in a fundamentally different way from normal procedural code, using the AI-oriented programming languages LISP and Prolog.
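
In outline, such a system keeps what it knows (the knowledge base) apart from how it reasons (the inference engine). The sketch below, in Python rather than LISP or Prolog, shows the idea with a tiny forward-chaining engine; the rules and symptom names are invented for illustration and bear no resemblance to the scale of a real medical knowledge base.

```python
# Knowledge base: rules of the form "if all these findings hold, conclude this".
KNOWLEDGE_BASE = [
    ({"fever", "cough", "loss_of_smell"}, "suspected_viral_infection"),
    ({"suspected_viral_infection", "shortness_of_breath"}, "refer_to_specialist"),
]

def infer(findings):
    """A tiny forward-chaining inference engine.

    Repeatedly applies rules whose conditions are satisfied by the
    current facts until no new conclusions can be drawn.
    """
    facts = set(findings)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in KNOWLEDGE_BASE:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts - set(findings)

# Patient data goes in; conclusions come out.
print(infer({"fever", "cough", "loss_of_smell", "shortness_of_breath"}))
```

Everything the system ‘knows’ sits in that rule list, entirely separate from the reasoning code.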

Expert or knowledge-based systems continued to be developed until the 1990s, but it became evident they did not represent true AI, and commercial use dwindled due to the difficulty and time involved in transferring human expertise into the knowledge base.

Despite taking up enormous amounts of memory, the knowledge bases were restricted to a small number of narrow topics. Maintenance was another issue: meticulous audits were needed to weed out false information and questionable rules the systems had ‘learned’ on their own.

As early as the 1970s, scientists recognised that a single layer of simulated neurons could detect only a small number of well-specified patterns, each corresponding to a single neuron whose output was passed through an activation function. Classifications could become far more sophisticated with the addition of a second, ‘hidden’, layer of neurons.
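
The classic illustration is the XOR function, which no single-layer perceptron can compute but a network with one hidden layer can. In the Python sketch below the weights are hand-picked assumptions chosen to make the point, not values learned from data.

```python
import numpy as np

def step(z):
    """Threshold activation: fire (1) if the input exceeds zero, else stay silent (0)."""
    return (z > 0).astype(int)

# Hand-picked weights: the hidden layer computes OR-like and AND-like units,
# and the output layer combines them into XOR, a classification no
# single-layer perceptron can make.
W_hidden = np.array([[1.0, 1.0],    # OR-like unit
                     [1.0, 1.0]])   # AND-like unit
b_hidden = np.array([-0.5, -1.5])
W_out = np.array([1.0, -1.0])
b_out = -0.5

def forward(x):
    hidden = step(W_hidden @ x + b_hidden)   # first (hidden) layer
    return step(W_out @ hidden + b_out)      # output layer

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, "->", forward(np.array(x)))     # reproduces XOR: 0, 1, 1, 0
```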

Multiple-layer neural networks provide the foundation for what is now called deep learning. At last, many believed, machines as complex as the human brain, and even sentient robots, could be built. Networks continue to grow in size thanks to the latest generation of multi-core processors and the resurgence of connectionism.

By the 2000s, despite all the advances in hardware, the familiar AI disillusionment had set in, as it became clear that deep learning was still incapable of producing an intelligent robot. AGI (artificial general intelligence) remains a considerable distance away.

What is in store for AI as we go into the new year? Technological trends can change fast. As an example, the COVID-19 outbreak prompted many businesses to refocus their technology efforts on enabling and supporting remote work.

Nonetheless, industry observers have a basic sense of what is likely to happen in the future. Many identify cyber security and the internet of things as key trends to watch in 2022.

We can expect more mature smart automation platforms, where autonomous automation (with no human in the loop) becomes a real prospect. We are also likely to see an avalanche of innovation in voice technology, which may lead to call centres staffed entirely by conversational AI bots rather than people.

Bias in algorithms, especially as it relates to facial recognition technology, will continue to plague us. Deepfakes will become widespread and will pose unique challenges for the legal industry in particular.

Extended reality will reach maturity. It has the potential to profoundly alter how businesses use smart media platforms, as it enables seamless interaction between the real and virtual worlds, offering users an immersive experience. This technology has numerous applications, ranging from healthcare to education, but most notably in the business world.

Will we head for another AI winter? For now the sun is shining brightly on the AI landscape, and I doubt anything will halt its rapid advance.
