
The philosophical data scientist

By Tiana Cline, Contributor
Johannesburg, 06 Mar 2020

While teaching part of the University of the Witwatersrand's data science degree, Dr Helen Robertson, whose doctoral research in philosophy focused primarily on Immanuel Kant's epistemology, recently found herself drawn to questions of contemporary epistemology. "Some of the old philosophical questions apply to data science, to machine learning and artificial intelligence," says Robertson.

"In terms of ethics, this is a relatively new field, both in a practical sense and in research centres. It's been fun to be involved in these new fields of philosophical research, especially given that many philosophical questions are very old."

Where the two fields meet sits somewhere between asking old questions in new forms and asking entirely new ones. These questions, says Robertson, are current and widely discussed, but they can be difficult to formulate.

“If we look at the topic very broadly, we’re still dealing with ethical questions that fall under the ethical theories that try to take into account all ethical questions. It’s just that, more recently, with certain technologies being developed quite quickly, the related ethical questions have become more pressing. It’s in this sphere that you get questions about automated driving and so on,” adds Robertson.

Two main ethical theories are seen as the most useful and the most plausible: consequentialism and deontology. When considering where technology fits in, ethical questions can be answered in one of two ways: “You can look at the consequences of some action or you can look at what moral claims seem to be involved in the action itself,” says Robertson. “So in the case of autonomous vehicles, some of the questions are questions of consequences. If we adopt this technology, given how well it’s developed now, what are the consequences going to be? Are we going to have fewer deaths or more deaths on the road? In a deontological sense, we may, for example, wonder if we should allow technologies that make these moral decisions at all. The same sorts of questions arise for all cases of emerging technology.”

Data privacy

While the ethical concerns around autonomous vehicles are becoming more widely discussed, data science ethics is something Robertson examines with her postgraduate students. With data science, one of the main philosophical research areas is privacy.

“With companies having access to data (or wanting to get hold of a lot of data), there may be very good consequences. Data makes everything more efficient. There are good consequences for the companies because they can make more profit by making use of the data in various ways that can profile their customers nicely. They can sell the data. Data also provides good consequences for customers themselves as they can derive all sorts of benefits from a company having access to their preferences and habits. But then one of the main concerns that’s raised is people’s privacy,” says Robertson.

As more and more information gets collected, there’s a clear deontological concern: what rights does a person have to control what information is known about them?

Ethics in real life

“Out of a digital context, we believe we have this sort of right. We build houses around ourselves. We have various ways in which we control what is known about us, what is seen about us…we think this is carried over to the digital case,” she says.

Robertson adds that there has been a shift in how much of their personal data people are willing to give up.

“People are more conscious of it. In popular forums and policy forums, people are asking what these big tech companies are going to do with all this personal data.”

Robertson’s students are encouraged to research real-world cases of data use in data science and work out the ethical concerns they raise. In the case of privacy concerns, there are various computational techniques. Computer scientists, for example, are working on making datasets anonymous and, therefore, safe to share.
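
To make the idea concrete (this sketch is not from the article), one widely used anonymisation criterion is k-anonymity: a dataset is only considered safe to share if every combination of quasi-identifying attributes, such as an age band and a postcode, is shared by at least k records, so no individual stands out. A minimal Python illustration, with hypothetical field names:

from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k=3):
    # True if every quasi-identifier combination occurs at least k times.
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

# Hypothetical records: ages are generalised into bands before sharing.
people = [
    {"age_band": "30-39", "postcode": "2000", "diagnosis": "A"},
    {"age_band": "30-39", "postcode": "2000", "diagnosis": "B"},
    {"age_band": "30-39", "postcode": "2000", "diagnosis": "A"},
    {"age_band": "40-49", "postcode": "2001", "diagnosis": "C"},
]

print(is_k_anonymous(people, ["age_band", "postcode"]))  # False: the lone 40-49 record stands out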

“We’re at the beginnings of a project where I’m doing some of the philosophical work, looking at what precisely the concern is when talking about privacy – what is the ethical concern? – and then once we’ve got that, to develop actual technical solutions within computer science,” she explains.

What do philosophers and computer scientists have in common?

“We both have formal methods. We’re both keen on logical systems. But computer scientists tend to want solutions. For any given problem, they’ll give you a solution. Whereas philosophy is, in a sense, the reverse – for any given solution, we’re going to find some further problem with it, further distinctions, further objections,” laughs Robertson.

“One interesting thing about being on the programme now is the possibility of collaborative projects. I think there’s a lot of potential for good and proper philosophy to collaborate with many other disciplines,” she concludes.

Understanding the terms

Just in case you’re not as well-versed in the work of Immanuel Kant as Dr Robertson, here’s the skinny on three of the terms used:

  • Epistemology is concerned with the theory of knowledge.
  • Consequentialism holds that the consequences of one’s conduct are the ultimate basis for any judgment about the rightness or wrongness of that conduct.
  • Deontology uses rules to distinguish right from wrong. Unlike consequentialism, deontology doesn’t require weighing the costs and benefits of a situation, and avoids subjectivity and uncertainty because it follows set rules.

This article was originally published in the March 2020 issue of ITWeb Brainstorm magazine.
