Generative AI can revolutionise healthcare but experts advise caution

By Staff Writer, ITWeb
Johannesburg, 17 Jul 2023
Dr John Sargent, co-founder of the BroadReach Group.

The next generation of ethical generative artificial intelligence (GenAI) provides new hope for more equitable healthcare, but advances in technology must never come at the cost of patient rights.

This is according to Dr John Sargent, co-founder of the BroadReach Group, a social impact business that supports private and public health sectors, NGOs and governments in 30 countries to deliver better healthcare.

“The fundamental issue in healthcare, whether you are in Sub-Saharan Africa, Western Europe, or the USA, is that demand outstrips supply in terms of health services, doctors, nurses, and medications. In Sub-Saharan Africa, for instance, there are 0.2 doctors per 1000 people,” says Dr Sargent.

He says the healthcare sector is trying to deliver on an antiquated model of ‘sick care’, where there is a certain ratio of doctors to patients. “We need to change this paradigm to be more effective by matching the supply and demand sides of our health systems in new digital ways.”

Ruan Viljoen, chief technology officer, BroadReach Group, adds, “I believe the biggest challenge is still health inequity – healthcare access can vary depending on race, location, or age.”

According to Viljoen, GenAI technology can help to solve practical problems, such as overburdened staff and insufficient time.

“What are the repetitive, administrative tasks that are stealing their time? For instance, GenAI can help nurses with automated note-taking in patient interviews, relieving an administrative burden. The goal is not to replace the role but to free up their time for value-added work,” he says.
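As a rough illustration of how such automated note-taking might work - a minimal sketch, not BroadReach’s implementation - a transcript of a nurse-patient interview can be passed to a hosted large language model for summarisation. The example below assumes the OpenAI Python SDK; the model name, prompt and transcript are placeholders.

```python
# Minimal sketch: summarising a nurse-patient interview transcript into a draft
# clinical note with a hosted large language model. Illustrative only; the model
# name, prompt wording and transcript are placeholders.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

transcript = """
Nurse: Good morning, how have you been feeling since your last visit?
Patient: The cough is better, but I still get headaches in the evening.
Nurse: Are you taking the medication every day?
Patient: I missed two doses last week.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You are a clinical scribe. Summarise the interview into a "
                    "short SOAP-style note. Do not invent findings."},
        {"role": "user", "content": transcript},
    ],
)

print(response.choices[0].message.content)  # draft note for the nurse to review
```

In any real deployment the output would remain a draft for the nurse to verify, keeping the human in the loop.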

Viljoen says one of the greatest uses of AI in health is to help healthcare workers focus on the next best action. “We can use large datasets and extract insights to help healthcare workers, delivered via easy-to-digest and secure messaging like emails or text messages. What I’m most excited about is how we can augment the quality of the interactions to bring together human and artificial intelligence.”
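A simplified sketch of this “next best action” idea - not BroadReach’s actual system - is shown below: rules or a model score each patient record and the highest-priority action becomes a short message for the healthcare worker. All field names and thresholds are illustrative.

```python
# Illustrative "next best action" sketch: score simple, made-up patient records
# and turn the highest-priority action into a short message for a health worker.
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    days_since_last_visit: int
    missed_doses_last_month: int

def next_best_action(p: Patient) -> str:
    # Toy prioritisation rules; a real system would rely on validated clinical
    # logic or a model trained on much richer data.
    if p.missed_doses_last_month >= 3:
        return f"Call {p.name} today about adherence support."
    if p.days_since_last_visit > 90:
        return f"Schedule a follow-up visit for {p.name} this week."
    return f"No action needed for {p.name} right now."

patients = [
    Patient("Patient A", days_since_last_visit=120, missed_doses_last_month=1),
    Patient("Patient B", days_since_last_visit=30, missed_doses_last_month=4),
]

for p in patients:
    print(next_best_action(p))  # could be delivered as an email or text message
```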

Combating diseases like HIV and AIDS

Jaya Plmanabhan, chief scientist at innovation consultancy Newfire Global, who trains health AI models for a living, says he is particularly excited about how large language models could be trained to revolutionise virtual expertise on diseases such as HIV and AIDS.

“We call these ‘Role Specific Domain Models’ and they have the potential to be programmed to know everything about a particular disease, to better guide healthcare professionals on how to treat patients. This is a tremendously exciting prospect in the mission to end new HIV infections by 2030.”

According to Newfire Global, these Private Language Models (PLMs) become oracles on a subject and are especially useful in helping solve hard problems in HIV management, such as loss to follow-up - a term for patients who drop off treatment.
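One way such a role-specific or private model could be approximated - a sketch only, not Newfire’s method - is to restrict a general model’s answers to a small, curated corpus of guideline text retrieved at question time. The toy keyword retriever and documents below are placeholders.

```python
# Sketch of retrieval over a curated guideline corpus, as one way to ground a
# domain-specific assistant. The documents and scoring are deliberately toy.
GUIDELINE_SNIPPETS = [
    "Loss to follow-up: patients who have not returned for treatment refills "
    "should be traced and offered re-engagement counselling.",
    "Adherence: missed doses increase the risk of drug resistance; assess "
    "barriers to adherence at every visit.",
    "Viral load monitoring should be repeated after re-initiation of therapy.",
]

def retrieve(question: str, docs: list[str], top_k: int = 2) -> list[str]:
    # Rank documents by crude keyword overlap with the question.
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

question = "How should we handle a patient lost to follow-up?"
context = "\n".join(retrieve(question, GUIDELINE_SNIPPETS))

# In a full system this prompt would be sent to a language model instructed to
# answer only from the retrieved guideline text.
prompt = f"Answer using only the guidance below.\n\n{context}\n\nQuestion: {question}"
print(prompt)
```

The point of grounding the model this way is that its answers stay within vetted clinical guidance rather than the open internet.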

Viljoen says, “Trying to find patients is critical to ensure that they don’t become resistant to drugs due to skipping doses. We can make our outreach much more engaging through conversational messages in their mother tongue and this can help us get people back into the clinic and back into care.”

Heeding the risks

Vedantha Singh, an AI ethics in healthcare researcher and virologist from the University of Cape Town, says the top ethical considerations for AI in healthcare are privacy, accuracy, and fairness. She urges that all AI systems should start with guardrails and ethics within their foundational design.

“There is a perception that there are no regulations for the use of AI in healthcare, but to assume we are operating in the wild west is not true. International bodies are sharing guidelines and regulation is slowly evolving - including in Africa. Egypt, Rwanda, and Mauritius already have strong AI policies,” says Singh.

These guidelines emphasise that human labour should not be completely replaced and that patients should have agency over how their data is used.

Singh says that companies must embed ethical guardrails - so-called ‘guardrails by design’ - in their health products from the start.

Plmanabhan adds that GenAI can reduce costs and personalise care, but it must be used carefully. “For example, if the data is biased, the model will be biased. GenAI can also be used to create fake patient profiles to commit fraud.”

Unbiased, quality data that complies with regulations such as HIPAA, POPIA or GDPR must be prioritised.

Plmanabhan emphasises the importance of patients giving informed consent, knowing how GenAI is being used on their data. “We need to stay committed to immovable core principles - we cannot compromise on the human in the middle of it all.”
