AI and regulating the unknown: Lessons from the EU and UK

Recent developments in the UK and EU on regulating artificial intelligence may help to provide guidelines for South African lawmakers in drawing up much-needed regulation for the sector, by Fatima Ismail, Associate.

Johannesburg, 14 Sep 2021
Fatima Ismail, Associate, Webber Wentzel.

Anyone who has watched an episode of Black Mirror (a sci-fi series based on the impact of technology on humans) has probably questioned the limits of technology while staring back at their reflection on the blank screen at the end of the episode. Among other tech-related subjects, the series has tackled artificial intelligence or AI. 

AI regulation has recently become the focus of discussion globally as AI is rapidly deployed across various sectors. A recent example is Moderna's use of AI to speed up the development of its COVID-19 vaccine. While many countries are gearing up to roll out regulations on the use of AI, SA lags behind. The closest South Africa has come to regulating AI is the creation of the Presidential Commission on the Fourth Industrial Revolution (4IR) in 2019, which recommended, among other things, establishing the National Artificial Intelligence Institute.

Since a legally binding set of rules to govern AI is yet to be passed in South Africa, we have considered recent developments in the regulation of AI in the UK and the EU to determine whether South African regulators should follow their example.

Regulatory developments on AI in the UK

In June 2021, the UK's Information Commissioner's Office (ICO), the UK's watchdog in charge of upholding information rights and protecting personal information, published an opinion on the use of live facial recognition technology (LFRT). Unlike biometric technologies that collect fingerprints or iris scans, LFRT captures individuals' facial features in real time.

The ICO's opinion focuses on the applicability of data protection law to the use of LFRT. Data protection law applies because LFRT involves the processing of several types of personal data under the GDPR (the EU's equivalent of POPIA, South Africa's data protection law), including personal, biometric and criminal offence data. LFRT can be used for many purposes. At the time of publishing the opinion, the ICO had completed investigations into six examples of planned or actual use of LFRT in public places, including the use of LFRT for surveillance, marketing and advertising.

The opinion emphasises the ICO's data protection concerns about the use of LFRT. These include:

  • The automatic collection of biometric data at speed and scale without justification (in many examples, the organisation processing the data could not show that the processing was necessary and proportionate to the purpose for which the data was collected);
  • The lack of choice or control for the people whose data was being collected;
  • The lack of transparency afforded to those people on how, when and why their data was being processed; and
  • The potential for bias and discrimination (LFRT works with less precision for some demographic groups, including women, minority ethnic groups and disabled people, owing to design flaws or deficiencies in training data).

To address these concerns, the ICO emphasised that organisations must ensure their use of LFRT meets the requirements of lawfulness, fairness and transparency, and that a robust evaluation of necessity and proportionality is conducted. These requirements are based on GDPR principles, which are also present in POPIA. While the ICO set out these obligatory requirements for the lawful use of LFRT, it concluded that any investigation into the use of LFRT should be circumstance- and fact-specific.

Regulatory developments on AI in the EU

In April 2021, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) published a joint opinion on the European Commission's proposal for harmonised rules on AI (the proposal). The purpose of the proposal is to achieve legal certainty within the AI industry and to ensure that AI systems are safe and respect EU laws and values.

However, the board and supervisor highlighted various concerns with the proposal. These include:

  • The exclusion of international law enforcement co-operation from the scope of the proposal, which risks circumvention by third countries and international organisations using AI systems within the EU; and
  • The need for clarification that the GDPR and the Privacy and Electronic Communications Directive (an EU directive on data protection and privacy in the digital age) apply to the use of AI systems.

While the board and supervisor welcomed the risk-based approach to AI adopted by the European Commission, they highlighted the need to ensure that AI systems, including those that do not involve the processing of personal data but still impact the interests or fundamental rights and freedoms of individuals, also fall within the scope of the proposal.

A further concern highlighted by the board and supervisor is that the proposal requires providers of AI systems to perform a risk assessment. In most cases, however, it is the users of AI systems, rather than their providers, who will be the controllers (as defined in the GDPR) or responsible parties (as defined in POPIA).

While it is clear from these two opinions that the UK and EU are still grappling with how to regulate AI, they may provide South Africa's regulators with guidelines on what may be important in a local data protection context when regulating AI. With the rapid growth in SA's tech sector, the country is in desperate need of AI regulation. After all, a Black Mirror-esque future may not be too far off.
