
Rethinking machine intelligence

By Lezette Engelbrecht, ITWeb online features editor
Johannesburg, 24 Feb 2010

Creating artificial forms of intelligence has long been a scientific goal, but while some computers are beginning to understand information the way people do, manufacturing intelligence remains an elusive concept.

“Intelligence is measured in many different ways,” notes artificial intelligence (AI) researcher Dr Dion Forster. “For example, IQ tests measure one's ability to perform linear processes with accuracy and speed, and in this regard machines are already faster and more reliable than humans. So a well-designed machine could have a much higher IQ than a person (chess supercomputers beating grandmasters is a case in point).”

He adds, however, that there are many other forms of intelligence, such as emotional intelligence, which are much more challenging to emulate. “I can tell a great deal about a person's state of mind by looking at their facial expressions and posture, for example. These are not linear processes, but rather inter-subjective processes that have to do with subtle stimuli, some visual, some auditory.”

“Everyone's got their own take on what 'intelligence' is,” notes Steve Kroon, computer science lecturer at the University of Stellenbosch. He adds that people exhibit all sorts of behaviour, which isn't necessarily intelligent, just human, with emotions being the most obvious example.

“So, while some researchers don't think intelligence needs to mimic humans, most feel that to really call a machine intelligent, it needs to exhibit what we informally call 'common sense',” says Kroon.

“This is probably the biggest barrier in AI. In specialised domains, computing power can vastly outstrip human reasoning; but the computer lacks intuition, and the bridge-building capability between domains, which common sense provides us with.”

Many researchers have singled out the decisive, goal-oriented nature of human thinking as an indicator of intelligence. Stanford University computer scientist John McCarthy describes intelligence as “the computational part of the ability to achieve goals in the world”.

This brings in the “strong” and “weak” aspects of AI: strong AI aims to match the human mind's capacity to make decisions and solve problems, while weak AI simulates some, but not all, of the mind's capabilities.

“Domain-specific 'intelligence' usually uses a very specific form for storing information about its knowledge and its environment,” explains Kroon, “and those forms are not usually very flexible; they've been designed to be good at solving the specific problem under consideration. Storing more general knowledge is a much more difficult problem.”

He says the issue is that computers can store all the information they receive (given enough disk space), but the way they store it makes retrieving some information fast and other information much slower.

“I think a part of the difference is that humans' common sense helps them to arrange their information, so the information they need more often is easily available, and irrelevant information is discarded. AI can never know what information it might need later.

“Similarly, once you've stored all the information, there are so many ways of processing the data. Specialist systems are optimised for certain operations, and can only perform other operations slowly, if at all,” says Kroon.
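Kroon's point about specialised storage can be illustrated with a toy example (the data and function names here are invented for illustration, not from the article): a structure designed for one kind of query answers it instantly, while the query it was not designed for forces a slow scan of everything.

```python
# A phone book stored as a dict is optimised for one query:
# looking up a number by name is a fast hash lookup.
phone_book = {"alice": "555-0100", "bob": "555-0101", "carol": "555-0102"}

def number_for(name):
    return phone_book.get(name)  # O(1) on average

# The reverse query was not designed for: finding a name by
# number requires scanning every entry.
def name_for(number):
    for name, num in phone_book.items():  # O(n) linear scan
        if num == number:
            return name
    return None

# Supporting both queries quickly means building and maintaining
# a second, inverted index alongside the first.
reverse_book = {num: name for name, num in phone_book.items()}

print(number_for("bob"))         # fast lookup by name
print(name_for("555-0102"))      # slow scan by number
print(reverse_book["555-0100"])  # fast once the inverse exists
```

The general-knowledge problem Kroon describes is this trade-off at scale: without knowing in advance which queries matter, a system cannot know which indexes to build.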

Elementary, my dear

Clifford Foster, CTO at IBM SA, says: “From early on, AI has focused on computers comprehending and using information; and the next generation of machines will be able to understand information in the same way that humans do.”

In line with this, IBM has developed a Q&A machine it hopes will be able to compete in the popular US quiz show Jeopardy! The BlueGene parallel supercomputer, nicknamed Watson, has to use its huge bank of knowledge to understand the clues given, calculate the relative certainty that its answer is correct, and buzz in, all within three seconds and without being connected to the Internet.

What makes the game trickier is that contestants are given clues in the form of answers, to which they must respond with a question. So the machine has to both “understand” the clue in context and formulate a question to which it would be the answer. For example, given the prompt “This downtown boy met and married an uptown girl”, Watson was able to come up with “Who is Billy Joel?”, a feat Foster says marks a profound change in the way machines traditionally understand data.

“The difference with this multiple-machine system is its ability to understand information, as well as the meaning behind it.” Foster adds that Watson also needs to play strategically, as the game involves picking certain categories, and making decisions about monetary values and risk.

While it has not yet been announced when Watson will put its electronic wits to the test on the show, Foster says it has already beaten a number of previous Jeopardy contestants in trial runs.

Language remains one of the greatest challenges for machines, owing to the contextual, nuanced nature of words and phrases, which often deviate from their dictionary definitions. Communication that depends on background knowledge, colloquialisms and figures of speech is understood almost immediately by people, but this kind of experiential interpretation is difficult to program into computers.

This is where Watson is different, claims Foster, as it actually requests further information to enrich its understanding, rather than merely responding in a set way to a given input. “Watson understands both ontology (the relationships between concepts) and semantics (the meaning of concepts). This goes beyond anything we've seen before, where understanding meaning is critical.”

According to IBM, useful business applications are the ultimate goal of the Watson project. These applications would be able to determine the meaning behind words and answer complex questions that require the identification of relevant and irrelevant content. They would also be capable of interpreting expressive language, and making logical inferences to deliver precise answers and clear justifications, says IBM.

Great pretender

During the 1950s, interest grew in creating artificially intelligent systems, and mathematician Alan Turing put forward the Turing test, in which a machine passes if an observer, conversing by text with both the computer and an actual person, cannot reliably tell which is which.

Kroon is sceptical about the chances of a machine passing the Turing test in the next decade. “I don't believe it will. In my opinion, machine learning has made great strides in the last 20 years or so, while progress in AI (in the Turing test sense), has slowed to a crawl.

“Machines are still not able to adapt, and make connections between concepts in the way humans can and do, and this is exactly the sort of thing a careful judge would look for when performing a Turing test.”

Professor Tshilidzi Marwala, executive dean of the Faculty of Engineering and the Built Environment, at the University of Johannesburg, says this might be possible someday, but not in the next 10 years. “It took human beings millions of years to achieve the level of intelligence we see today. We will not reproduce this complex evolutionary process in such a short time.”

Kroon believes it could be possible to create a system that mimics the human brain, “but, I wonder if we can do it in any other way than nature does it: expose the system to inputs from the environment and let it learn from experience.

“The base challenge after all these years is still the same: store sensory inputs appropriately, and describe how to adapt current knowledge based on new perceptions.”

