Privacy has become a constant part of data conversations, and for good reason. The digital era delivers incredible advances and opportunities, often powered by data. What was once touted as the big data era has become our modern world's standard operating model.
Exploiting data feeds progress, efficiency and insight. But data practices also impact people's rights, leading to a heightened awareness of privacy and its importance to an inclusive and equitable digital society.
For Jasmine Asandze Sikhosana, Junior Legal Advisor at Nexio, the privacy issue underpins a more fundamental principle: "Privacy is about the protection of human rights. Threats to a person's privacy and identity are increasing as we have these technological advancements."
Privacy and AI black boxes
Linking privacy to human rights elevates the topic as a legal and business priority, and there has been considerable headway in protecting personal information, exemplified in laws such as the Protection of Personal Information Act (POPIA) and General Data Protection Regulation (GDPR).
Yet, new technologies add new dimensions to these considerations. Artificial intelligence is a prime example. Most forms of personal information storage and processing are straightforward to manage. Whether the information sits in a document, an e-mail or a database, it's directly tangible rather than obscured or highly abstracted, and deleting or anonymising it is a relatively simple exercise.
But AI models don't operate in that fashion. They incorporate information into a nebulous digital brain where private data is dispersed, abstracted and even duplicated. Often called the "black box" effect of AI, this abstraction poses a significant challenge for privacy policy.
It's not a hypothetical problem. Companies already prohibit employees from feeding sensitive information into public AI services. Those models can absorb the information, making it part of their logic, and external users can uncover it deliberately or unintentionally. In a recent study of a major dataset used to train AI models, researchers found keys to no fewer than 12 000 APIs, which outsiders could retrieve with the right prompts.
Yet, inhibiting AI is not a winning strategy. Organisations derive massive benefits when they combine AI tools with personal information. The question is: how do they respect that privacy while still enjoying AI's advantages?
The same rules still apply
Fortunately, this situation is not as tricky as it seems at first. AI is a tool; humans decide what data it uses and how to control it.
"AI isn't a new threat to privacy. We've managed other privacy threats with sufficient controls. AI is another layer. But this layer is a new animal altogether because it's all over the place. And yet, it still comes down to how a user directs it to the right place," says Fhatuwani Rasivhaga, Executive Head of Nexio's Legal Department.
The fundamentals of data privacy haven't changed. They fall into three domains: securing personally identifiable information (PII), using and controlling it within its intended context, and not exposing it to outside parties. Through policy frameworks, risk management practices, security diligence and visibility measures such as continuous monitoring, companies can diminish the risks AI poses to privacy.
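To make the guardrail idea concrete, the sketch below shows one way an organisation might screen prompts for obvious PII before they leave its boundary. It is a minimal illustration only: the article prescribes no specific tooling, and the patterns, the redact_pii helper and the send_to_ai_service call are hypothetical placeholders, not references to any Nexio product or particular AI provider.

```python
import re

# Hypothetical patterns for obvious PII; a real deployment would rely on a
# vetted detection library and policy-driven rules, not ad hoc regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "sa_id_number": re.compile(r"\b\d{13}\b"),      # 13-digit South African ID number
    "phone": re.compile(r"\b(?:\+27|0)\d{9}\b"),    # simple South African phone format
}

def redact_pii(text: str) -> str:
    """Replace recognised PII with labelled placeholders before the text
    leaves the organisation's boundary."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

def send_to_ai_service(prompt: str) -> str:
    # Placeholder: in practice this would call an approved, contracted provider.
    return f"(model response to: {prompt})"

def safe_prompt(user_text: str) -> str:
    """Guardrail wrapper: redact first, then hand off to the AI service."""
    sanitised = redact_pii(user_text)
    return send_to_ai_service(sanitised)

if __name__ == "__main__":
    print(safe_prompt("Client Jane Doe, ID 8001015009087, email jane@example.com, needs advice."))
```

In practice, a redaction layer of this kind would sit alongside the policy frameworks, risk management and continuous monitoring described above, rather than replace them.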
There is disagreement about whether legislating AI will inhibit its development. Ultimately, though, the risks of data leakage, whether of private, business or confidential information, are far greater and more damaging than any drag on innovation. By focusing on fundamental data privacy practices, organisations can address privacy issues before legislation compels them to.
Courts and official bodies will have the final say. By that point, though, competitive companies will be reaping the rewards of their AI investments. A sudden shift in legislation could undermine their progress. Yet, organisations can stay ahead of legislative winds by implementing data policies, adding guardrails to AI, determining valid uses for personal data and, crucially, using service providers with appropriate standard certifications and skills to address privacy challenges.
"It is invaluable that you conserve and prioritise the right levels of privacy, governance and standards, especially when implementing AI tools for your day-to-day operational responsibilities," says Sikhosana. "Maintaining standards and compliance, and adjusting policies, are important in ensuring that the company maintains its reputation and competitive advantage."