
Sci-fi meets society

By Lezette Engelbrecht, ITWeb online features editor
Johannesburg, 26 Feb 2010

As artificially intelligent systems and machines advance, their interaction with society raises issues of ethics and responsibility.

While advances in genetic engineering, nanotechnology and robotics have brought improvements in fields from construction to healthcare, industry players have warned of the future implications of increasingly “intelligent” machines.

Professor Tshilidzi Marwala, executive dean of the Faculty of Engineering and the Built Environment at the University of Johannesburg, says ethics have to be considered in developing machine intelligence. “When you have autonomous machines that can evolve independent of their creators, who is responsible for their actions?”

In February last year, the Association for the Advancement of Artificial Intelligence (AAAI) held a series of discussions under the theme “long-term AI futures”, and reflected on the societal aspects of increased machine intelligence.

The AAAI is yet to issue a final report, but in an interim release, a subgroup highlighted the ethical and legal complexities involved if autonomous or semi-autonomous systems were one day charged with making high-level decisions, such as in medical therapy or the targeting of weapons.

The group also noted the potential psychological issues accompanying people's interaction with robotic systems that increasingly look and act like humans.

Just six months after the AAAI meeting, scientists at the Laboratory of Intelligent Systems, at the Ecole Polytechnique Fédérale de Lausanne, in Switzerland, conducted an experiment in which robots learned to “lie” to each other in an attempt to hoard a valuable resource.

The robots were programmed to seek out a beneficial resource and avoid a harmful one, and alert one another via light signals once they had found the good item. But they soon “evolved” to keep their lights off when they found the good resource - in direct contradiction of their original instruction.
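The dynamic is easy to illustrate with a toy evolutionary simulation (a minimal sketch only; the population size, rewards and mutation rate below are assumptions for illustration, not the EPFL team's actual setup): robots that keep their lights off keep more of the resource to themselves, so the silent trait spreads over generations.

```python
import random

# Toy evolutionary simulation of the "lying" robots (illustrative only, not
# the EPFL setup). Each robot has one gene: the probability of switching its
# light on when it finds the good resource. Signalling attracts competitors,
# who share the reward, so selection can favour staying dark.

POP_SIZE = 100
GENERATIONS = 50
MUTATION = 0.05

def fitness(signal_prob):
    # Finding the resource is worth 10 points; if the robot signals,
    # nearby robots crowd in and the reward is split between them.
    reward = 10.0
    if random.random() < signal_prob:
        competitors = 1 + sum(1 for _ in range(5) if random.random() < 0.8)
        reward /= competitors
    return reward

def evolve():
    population = [random.random() for _ in range(POP_SIZE)]  # mixed signalling to start
    for gen in range(GENERATIONS):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: POP_SIZE // 2]  # keep the fitter half as parents
        population = [
            min(1.0, max(0.0, random.choice(parents) + random.gauss(0, MUTATION)))
            for _ in range(POP_SIZE)
        ]
        if gen % 10 == 0:
            print(f"generation {gen}: mean signalling rate {sum(population) / POP_SIZE:.2f}")

evolve()
```

Run repeatedly, the mean signalling rate tends to fall towards zero: no robot is told to deceive, but the selection pressure rewards those that do.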

According to AI researcher Dion Forster, the problem, as suggested by Ray Kurzweil, is that once people design self-aggregating machines, such systems could go on to produce stronger, more intricate and more effective machines.

“When this is linked to evolution, humans may no longer be the strongest and most sentient beings. For example, we already know machines are generally better at mathematics than humans are, so we have evolved to rely on machines to do complex calculations for us.

“What will happen when other functions of human activity, such as knowledge or wisdom, are superseded in the same manner?”

Sum of parts

According to Steve Kroon, computer science lecturer at Stellenbosch University, if people - or other, non-sentient robots - ever develop sentient robots, we'll need to decide what rights they should have. “And the lines will be blurred with electronic implants: what are your rights if you were almost killed in an accident, but have been given a second chance with a mechanical leg? A heart? A brain? When do you stop being human and become a robot?”

Healthcare is one area where “intelligent” machines have come to be used extensively, involved in everything from surgery to recovery therapy. Robotic prosthetics aid people's physical functioning, enabling amputees to regain a semblance of their former mobility. The i-Limb bionic hand, for example, uses muscle signals in the remaining portion of the limb to control individual prosthetic fingers.
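The general idea behind such myoelectric control can be sketched in a few lines (a simplified illustration only; the thresholds and function names below are assumptions, not the i-Limb's actual firmware): the muscle signal is rectified and smoothed, then compared against thresholds to decide whether a finger should close, open or hold.

```python
# Minimal sketch of myoelectric control: rectify and smooth an EMG signal,
# then map the activation level to a simple finger command. All values here
# are illustrative assumptions, not i-Limb internals.

def envelope(samples, window=10):
    """Rectify and average the most recent samples into one activation level."""
    rectified = [abs(s) for s in samples[-window:]]
    return sum(rectified) / len(rectified)

def finger_command(activation, close_threshold=0.6, open_threshold=0.2):
    """Map a smoothed activation level to a finger command."""
    if activation > close_threshold:
        return "close"
    if activation < open_threshold:
        return "open"
    return "hold"

# Example: sustained muscle activity in the residual limb closes the finger.
signal = [0.7, -0.8, 0.9, -0.7, 0.8, -0.9, 0.7, -0.8, 0.9, -0.7]
print(finger_command(envelope(signal)))  # strong, sustained burst -> "close"
```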

More “behavioural” forms of robotic medical assistance, such as home care robots, are also emerging. Gecko Systems' CareBot acts as a companion to the frail or elderly, “speaking” to them, reminding them to take medication, alerting patients about unexpected visitors, and responding to calls for help by notifying designated caregivers.
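The reminder-and-alert logic such a companion relies on can be sketched as a simple event handler (an illustrative assumption of how this kind of behaviour might be structured, not GeckoSystems' actual CareBot software):

```python
import datetime

# Toy sketch of home-care-robot behaviour: scheduled medication reminders,
# announcements about visitors, and caregiver alerts for help calls.

MEDICATION_TIMES = [datetime.time(8, 0), datetime.time(20, 0)]

def notify_caregiver(message):
    # Stand-in for whatever channel a real system would use (SMS, call, app).
    print(f"[caregiver alert] {message}")

def handle_event(event, now):
    if event == "help_button":
        notify_caregiver(f"Patient pressed the help button at {now:%H:%M}")
    elif event == "unexpected_visitor":
        print("Announcing: there is someone at the door.")
    elif event == "tick":
        for t in MEDICATION_TIMES:
            if now.hour == t.hour and now.minute == t.minute:
                print("Reminder: it is time to take your medication.")

# Example: simulate a few events over a day.
handle_event("tick", datetime.datetime(2010, 2, 26, 8, 0))
handle_event("unexpected_visitor", datetime.datetime(2010, 2, 26, 14, 30))
handle_event("help_button", datetime.datetime(2010, 2, 26, 16, 45))
```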

According to Forster, daily interaction with forms of “intelligent” machines is nothing new. “We are already intertwined with complex technologies (bank cards, cellphones, computers), and all of these simple things are connected to intelligent machines designed to make our lives easier and more productive.

“The question is not 'should' we do this, we already do, but how far should we go?” He adds this question is most frequently asked when one crosses the line of giving over control to a machine or technology that could cause harm.

While certain technologies, such as pacemakers, artificial heart valves, or steel pins inserted to support limbs, are generally beneficial, explains Forster, their progression could bring complexities.

“These are all technologies that make life better, and are designed to respond to environmental changes in order to aid the person in question. But, if my legs, arms, eyes, ears and memory are replaced by technologies, the question is when do I cross the line and stop being human and become a machine?”

Sun Microsystems co-founder William Joy is one of the industry's more outspoken critics of people's increasing dependence on technology, and warns in a 2000 Wired article: “The 21st-century technologies - genetics, nanotechnology, and robotics - are so powerful that they can spawn whole new classes of accidents and abuses.

“Most dangerously, for the first time, these accidents and abuses are widely within the reach of individuals or small groups. They will not require large facilities or rare raw materials. Knowledge alone will enable the use of them.”

Drones to decision-makers

Another area of contention regarding the increasing involvement of “intelligent” machines is in military applications. A recent US mandate requires that a third of its military forces be unmanned - remote-controlled or autonomous - in future. While this could significantly reduce the number of human casualties in fighting, it also raises fears around autonomous machines' power to drop bombs and launch missiles.

Some argue that giving robots the ability to use weapons without human intervention is a dangerous move, while others say they will behave more ethically than their human counterparts. While these realities are still a way off, they have raised concerns over what is considered ethical behaviour.

The AAAI writes in a 2007 issue of AI Magazine that there's a considerable difference between a machine making ethical decisions by itself and merely gathering the information needed to make such decisions and incorporating it into its general behaviour. “Having all the information and facility in the world won't, by itself, generate ethical behaviour in a machine.”

Clifford Foster, CTO of IBM SA, says ethics is something that has to be tackled across industry, with various people, including the public, collaborating to make sure policies are in place. “You can't abdicate responsibility to machines. There are certain cases that present opportunities for technology to assist, such as telemedicine, but then it may be necessary to limit this to certain categories, with final decisions resting with professionals.”

In addition, as machine capabilities extend from automated mechanical tasks to more high-level, skilled ones, questions arise about machines competing with humans in the workforce. “I think we've been staring one of the simplest ethical issues in the face for a few centuries already, and we still haven't reached consensus on it,” says Kroon.

“How do we balance the need of unskilled people to be employed and earn a reasonable living with the benefits of cheaper industrialisation and automation?” He adds the issue will only get more glaring in the next few decades, as the skill level needed to contribute meaningfully beyond what automated systems can do increases.

“Things like search engines have already radically changed how children learn in developed countries. If we simply dumb down education, that would be a pity,” notes Kroon. “We need a generation of people who can utilise the new capabilities of tomorrow's machines, rather than a generation of people who can contribute nothing meaningful to society, since any skills they possess have been usurped by machines.”
