South Africa can build upon existing regulations and watchdog guidelines to help control generative artificial intelligence (AI) in the healthcare system.
This is according to experts participating in the “Future of work in healthcare: Harnessing generative AI” event last week, hosted by the BroadReach Group.
The panel began by analysing how generative AI can shape the future of healthcare and make it accessible to everyone, especially those in third world countries, but cautioned that the technology must be integrated carefully into healthcare systems.
Jaya Padmanabhan, chief data officer at Cellivate Technologies, said there are several important factors to keep in mind when implementing generative AI in healthcare systems.
“Firstly, generative AI relies on data and that is why it is important to have rigorous data collection practices in place. Secondly, health organisations must adhere to strict regulations and ethical guidelines to protect patient data. Lastly, human oversight and collaboration because generative AI should augment human decision-making and not override it entirely.
“Generative AI has the potential to significantly benefit healthcare outcomes, while upholding patient privacy, fairness and trust,” said Padmanabhan.
BroadReach CTO Ruan Viljoen touched on the unsolved problems in the healthcare sector and how generative AI can help address them.
“One practical example of how generative AI can be used is to work with healthcare workers to identify repetitive tasks, such as administrative tasks, and use AI to do that work so the healthcare worker has more time to attend to the patient.
“We can also do better automation of data capturing. If we can automate that process, that can relieve a lot of burden for healthcare workers,” said Viljoen.
The speakers agreed that AI can help solve problems in the healthcare sector, but it comes with ethical risks.
During the event, Vedantha Singh, PhD candidate at the Graduate School of Business at the University of Cape Town, spoke about how SA can build upon existing policies to regulate AI.
“There is a perception that there are no regulations for AI at the moment, but in reality, we are not starting from scratch. A number of regulatory authorities and even watchdog organisations have initiated policies and guidelines that govern the use of AI,” said Singh.
South Africa has existing guidelines in place, such as those from the Policy Action Network on the use of AI and data in healthcare. There are also key policies embedded in the National Digital Health Strategy, which seeks to use digital health technologies to augment, and not replace, existing systems.
“This policy [National Digital Health Strategy] acknowledges the need to skill and upskill healthcare workers and to empower them to use digital technologies,” explained Singh.
The National Health Act stipulates the protection of patient confidentiality and health information, and is supported by the Protection of Personal Information Act (POPIA).
The discussion also touched on how, at an international level, the United Nations and Organisation for Economic Co-operation and Development have called on member states to re-evaluate their existing strategies on AI and take generative AI into account.
John Sargent, co-founder of BroadReach and moderator of the event, emphasised that AI should be seen as a means of addressing the problems that have plagued third world countries and their healthcare systems.
With the many challenges in SA’s healthcare sector − such as insufficient patient monitoring, a shortage of healthcare workers and limited access to healthcare for the poor − the experts said AI can enable a more robust healthcare system that caters for everyone.
The Information Regulator, which regulates POPIA, stated earlier this year that it is holding internal discussions on how to approach the regulation of ChatGPT and other AI technologies, to ensure they don’t violate data privacy laws.
Singh emphasised SA has a unique opportunity to develop its own policies to ensure these ethical issues are taken into account from inception to deployment.
She added that, in the absence of specific policy regulations for companies in the private sector, organisations will need to put their own mechanisms in place to deal with data breaches that may occur when integrating AI into their systems.
“We call that having guardrails by design. Guardrails mean companies can develop generative AI systems that have ethics embedded into the system. That includes how information of the system is portrayed in the media and how limitations are communicated to the public,” concluded Singh.