Why ethics must be at the heart of AI

AI and machine learning are more than trendy or futuristic topics; they are real computing advances playing out in homes and businesses, says Zoaib Hoosen, MD of Microsoft SA.


Johannesburg, 26 Jul 2018

Artificial intelligence (AI) creates the potential for exponential gains in analytical and automated processes. Because of this, we stand at a critical juncture in the AI journey: a moment where we get to define not just what AI can do, but how it does it.

So says Zoaib Hoosen, managing director of Microsoft SA: "As it goes mainstream, thanks to the cloud, deep learning and big data, AI will boost productivity and unlock economic growth. It will transform the workplace and change the shape, look and feel of many industries, including health, transport, manufacturing and more."

But, for some, the rise of AI conjures images from the "Terminator" films or the "Westworld" TV series. "In these stories, humans are at the mercy of these faster, stronger, smarter systems with no ethical hang-ups. These narratives are clear on the problem with AI as they imagine it: no humanity, no heart."

Exploring ethics within capabilities

For Hoosen, the ethics of AI go beyond regulation and legislation; fundamentally, it is about creating an operating framework that limits and directs the priorities of an AI system.

"A real-world example is how one might program a driverless motor vehicle to treat an imminent crash. Should the system act to save its own passenger or should it prioritise the life or safety of a pedestrian? We need to know where we stand on these kinds of issues, to tell learning, thinking machines how they should handle them," he adds.

"If AI can give us natural language interaction, what are the rules we put in place to manage its responses, or to ensure it doesn't discriminate against non-native English speakers, for example? If an AI business analytics system can unlock new sales techniques or customer journeys, are these ethical and fair for customers? What does the system do with the private and personal data it collects before, during and after these interactions?"

There are myriad concerns at play once you scratch beneath the surface. "At Microsoft, we take this responsibility extremely seriously. In fact, one of our three core pillars in this field is 'developing a trusted approach so that AI is developed and deployed in a responsible manner'. This relates directly to the principles of fairness, accountability, transparency and ethics that guide us in ensuring our AI systems are fair, reliable and safe, inclusive, transparent and accountable, and private and secure."

Of course, Hoosen says, principles are only as good as the processes that flow from them. "Take inclusivity, for example. We believe that to achieve AI that is inclusive, we must nurture inclusivity and diversity in the teams creating the systems, and ensure the output is just as inclusive. These are the kinds of concerns our internal advisory committee examines to help ensure our products adhere to these principles."

The bigger picture

Hoosen believes enterprises must be aware they are not the only player in the game, and that AI advances will happen across companies, NGOs and countries. This is where the role of leadership, and the guidance of community, will be critical. "We are an active participant in AI-related forums and organisations, such as the Partnership on AI, for this exact reason, and we encourage all AI players to get involved and help us develop the best practices for AI.

"Our approach to AI is grounded in, and consistent with, our company mission to help every person and organisation on the planet to achieve more. If we remain true to this, as we always strive to be, then we must also consider how to mitigate any of the potential downsides that might result from technological advancement," he adds.

One source of fear for many is the idea that AI will change our workplaces and, in certain cases, eliminate jobs. "Mitigating this will necessitate nurturing new skills and preparing the workforce (and those who will soon join it) for the future of work."

Based on this, the transformative power of AI will also mean more regulation from governments across the globe, and across the progressive-conservative spectrum, Hoosen says.

"This will bring private and public sectors into closer collaboration, so AI providers must be prepared to engage, to train, to advocate, and to listen, as we move towards a consensus on the values that we inculcate into AI systems."

Sweet spot

Some people will always fear the unknown, and others will always stride forward in pursuit of progress, he says. "The sweet spot lies between them: in the power of AI to unlock creativity, potential and insight, while still behaving in an ethical and responsible manner."

Put aside the scary chapters of a science fiction future for a moment. There is another icon of pop culture that applies: Mary Shelley's classic tale of Dr Frankenstein and his monster. In the novel, the doctor is driven by ambition and ego to create a being assembled from parts and reanimated into life. But he is horrified by the creature he creates and abandons it, rather than guiding it into its new existence, with ultimately deadly consequences.

"The spectre of that ghoulish creature looms large in our minds, but, as the novel so wonderfully conveys, the real monster in Frankenstein is the doctor, the flawed man who creates a life without consideration of the chain of events he has set in motion. Similarly, those of us working in AI today need to be sure that we give our own 'creation' firm rules and guidelines for operating in the world."

Hoosen concludes by saying that to avoid becoming the doctor-monster of Shelley's nightmare, enterprises need to put the heart into the machine.
