Will ChatGPT transform insurance?

With AI, there is the opportunity to get the same level of expertise across the entire insurance industry, from a single place.
By Giulio di Giannatale, CTO, Sanlam Indie.
Johannesburg, 02 May 2023

The hype around ChatGPT, an artificial intelligence (AI)-powered chatbot that can respond to questions and essentially engage in conversation with curious internet users, has everyone talking about AI. And with good reason.

If you take a moment to Google what ChatGPT can do – or perhaps even plug this exact question into ChatGPT itself – you’ll find that this natural language processing tool can write a simple e-mail or draft an entire Master’s thesis thousands of words in length.

As someone who has been following the progress of AI for a while, I find ChatGPT particularly exciting because it serves as one of the first practical and tangible examples of this emerging technology’s potential. It also makes the technology more accessible.

In the past, there was a belief that you needed an entire team of data scientists to make the most of AI, but now we can all use AI as our own personal assistant that handles some of the more tedious and time-consuming work we’d rather not do.

Within the insurance space, the applications for AI relate mostly to fraud reduction, risk assessment and the streamlining of claims processes. But looking beyond these, I believe AI can make digital advice possible, enabling clients to get advice from a bot in the same way they would from a broker.

This, for me, will be a real game-changer and is probably what has the industry most interested in AI.

Dawn of the digital broker

In the past, a client would approach an insurance business with a query and the broker would have to do quite a bit of research to find the best solution for their particular needs and preferences.

But with AI, clients can interact with a digital broker who will analyse their needs and then package something specifically for them, pulling different products from a whole ecosystem of offerings. This digital broker won’t have a preference for a particular brand or product; it will take your request at face value and give you extremely tailored advice.
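
To make the idea a little more concrete, here is a minimal sketch of what that kind of needs-based matching could look like under the hood. The product catalogue, profile fields and budget rule are hypothetical placeholders, not how any particular insurer's advice engine works; in reality, conversational AI would sit in front of this to turn a client's own words into the structured needs.

```python
from dataclasses import dataclass


@dataclass
class Product:
    name: str
    covers: set[str]        # needs the product addresses, e.g. {"life", "disability"}
    monthly_premium: float


@dataclass
class ClientProfile:
    needs: set[str]         # needs expressed during the conversation
    monthly_budget: float


def recommend(catalogue: list[Product], client: ClientProfile) -> list[Product]:
    """Pick products from the wider ecosystem that match the client's stated
    needs, with no preference for any brand, while staying within budget."""
    # Rank products purely on how many of the client's needs they address.
    ranked = sorted(catalogue, key=lambda p: len(p.covers & client.needs), reverse=True)
    bundle, spend = [], 0.0
    for product in ranked:
        if not (product.covers & client.needs):
            continue  # skip products that address none of the stated needs
        if spend + product.monthly_premium <= client.monthly_budget:
            bundle.append(product)
            spend += product.monthly_premium
    return bundle


catalogue = [
    Product("Life cover", {"life"}, 250.0),
    Product("Disability cover", {"disability"}, 180.0),
    Product("Funeral cover", {"funeral"}, 90.0),
]
print(recommend(catalogue, ClientProfile(needs={"life", "funeral"}, monthly_budget=400.0)))
```

The point is simply that the "digital broker" is a pipeline: conversation in, structured needs out, and an unbiased match across the whole ecosystem of products.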

From an underwriting perspective, AI will enable clients to essentially have a conversation with a computer. You can explain who you are, how old you are, what you do and what stage of life you’re at, and it will give you more targeted pricing and cover.
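
As a rough illustration of that flow, the sketch below asks a language model to turn a client’s free-text description of themselves into structured underwriting fields and then applies a toy pricing rule. It assumes the OpenAI Python SDK and an API key in the environment; the model name, base premium and loadings are hypothetical placeholders, not real actuarial pricing.

```python
import json

from openai import OpenAI  # assumes the official OpenAI Python SDK is installed

ai = OpenAI()  # reads OPENAI_API_KEY from the environment


def extract_profile(message: str) -> dict:
    """Ask the model to turn a free-text self-description into structured fields."""
    response = ai.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model would do
        messages=[
            {"role": "system",
             "content": "Extract age (int), occupation (str) and smoker (bool) "
                        "from the user's message. Reply with JSON only."},
            {"role": "user", "content": message},
        ],
    )
    # In production you would validate this rather than trusting the model's output.
    return json.loads(response.choices[0].message.content)


def indicative_premium(profile: dict, base: float = 150.0) -> float:
    """Hypothetical loadings, purely to show structured fields feeding pricing."""
    premium = base
    premium *= 1.0 + max(profile["age"] - 30, 0) * 0.02  # simple age loading
    if profile.get("smoker"):
        premium *= 1.5                                    # smoker loading
    return round(premium, 2)


profile = extract_profile("I'm a 42-year-old electrician and I don't smoke.")
print(profile, indicative_premium(profile))
```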

If a client is dealing with a human broker who has a better understanding of one particular component of finance than another, they might not be equipped to offer the best advice in that area.

But with AI, you've got the opportunity to get the same level of expertise across the entire industry, from a single place. We still need to apply this practically, and without bias, so we have a fair amount of progress to make on this front, but the potential is there.

Does AI spell mass job losses?

While some fear what AI means for the workforce, I am confident there is still a space for brokers, and for the developers who are building the digital solutions used across the industry today.

Some of my more curious team members are already experimenting with AI and have found it to be a good assistant. It’s not going to replace them, but it can boost their overall productivity by as much as 70%. And for our more junior developers, ChatGPT is a bit like having a senior staff member on hand at all times to answer their questions or give them guidance when they are struggling to solve an issue.

From a business perspective, if a team member can’t get their job done because they have to wait to speak to somebody who has more technical knowledge, you’re wasting time. With AI, we are essentially enabling people who don't have a specific skill set to dabble and learn more in areas where they lack the expertise, which, if you ask me, is pretty cool.

But as is the case with any innovation, there is fear and scepticism around the potential of artificial intelligence. We saw this recently when Elon Musk and various industry experts called for AI development to be paused immediately.

While I believe they cite valid reasons for a pause, and I share the view that we must develop the right foundational rules for the long-term safety of AI use, I worry that this approach is not very practical given that the cat is well and truly out of the bag.

I think in all of this, we need to be cognisant of the fact that while the results produced by ChatGPT might seem like magic, it is just a machine. It can’t do everything and it won’t always get the answers right.

What we need to do, as we start exploring this technology further in insurance, is figure out how to check that the AI is doing what it’s supposed to do.
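
One practical pattern, sketched below, is to never let an AI-generated answer reach a client unchecked: run it through deterministic business-rule checks first and route anything that fails to a human. The quote fields, thresholds and rules here are hypothetical; the point is that the checking layer can be ordinary, testable code rather than another opaque model.

```python
def validate_quote(quote: dict) -> list:
    """Deterministic sanity checks applied to an AI-generated quote before it
    reaches a client. Each failed rule adds a human-readable reason."""
    problems = []
    if not 50 <= quote.get("monthly_premium", 0) <= 10_000:
        problems.append("premium outside the allowed band")
    if quote.get("cover_amount", 0) > 20 * quote.get("declared_annual_income", 0):
        problems.append("cover amount implausibly high relative to income")
    if quote.get("exclusions") is None:
        problems.append("exclusions missing from the quote")
    return problems


ai_quote = {
    "monthly_premium": 320.0,
    "cover_amount": 1_500_000,
    "declared_annual_income": 480_000,
    "exclusions": [],
}

issues = validate_quote(ai_quote)
if issues:
    # Fail closed: anything suspicious goes to a human underwriter instead.
    print("Route to human review:", issues)
else:
    print("Quote passes automated checks")
```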

It’s inevitable that there will be someone who comes up with a way to defraud the system. If this person is smart enough to build an AI that interacts with an insurer’s AI, they can do a lot of damage in a very short space of time.

In practice, a smart hacker could easily commit insurance fraud on a grand scale by taking advantage of things like complimentary cover, which costs the client nothing, and then committing further crimes to collect as a beneficiary.

There have been cases where lives were insured and the people covered were murdered shortly after the cover became active, or where short-term insured items were stolen, with AI then used to generate a slip for the purchase and complete the claims process. These risks demand that we put various checks and balances in place.

There’s no denying that the race is on to get the right components in the right places, working on AI. Whoever does so first will enjoy some pretty big wins.

But we need to proceed with care. When we allow a computer to make decisions, we must be certain that it operates within the right ethical frameworks: frameworks that align with our own, so that the results it produces align with who we are as a business.
