
Implement guardrails to support safer GenAI use

By Tracy Burrows, ITWeb contributor.
Johannesburg, 06 Jun 2025
Paul Beyleveld, senior solutions engineer at Netskope, speaking at the ITWeb Security Summit 2025.

Generative AI use is soaring, and companies must take steps now to ensure it can be used safely and productively.

This is according to Paul Beyleveld, senior solutions engineer at Netskope, who was speaking at the ITWeb Security Summit in Sandton this week.

“AI is at the peak of its hype cycle, changing everything. But there are concerns,” Beyleveld said. “For example, the fine print on GenAI tools says that they will collect your conversations and use that data to train the AI model. If you use the tool and don’t opt out, your conversations become training data. Nearly 10% of employee prompts now include sensitive data – for example, sensitive information from board meetings.”

Malicious attacks on DeepSeek illustrated the risks associated with using GenAI for sensitive data and code, Beyleveld said.

“This problem isn’t going to get better in future,” he noted.

He highlighted an IDC study indicating that organisations were achieving up to ten times return on their GenAI investments. In addition, a recent Ipsos survey for Google, published in the report ‘Our Life With AI: From innovation to application’, indicated that AI usage had increased in every region surveyed, with emerging markets seeing the highest growth. In every region, excitement about AI’s potential benefits (57%) outweighed concerns about AI (43%).

“South Africans are open to AI, showing high optimism about its possibilities and about adopting it in the next five years,” he said. “But having more sensitive data move out of our control is risky – there is the potential for data leaks and some hefty fines as a result.”

Beyleveld highlighted efforts to address this problem, such as the NIST AI Risk Management Framework and the new EU AI Act, which offer guardrails and best-practice frameworks for dealing with AI.

“We have to make sure we manage the risk around the data we use and how it is exposed,” he said. “We cannot outright block GenAI, but there are ways to securely enable it.”

Beyleveld said key measures include detecting GenAI use, educating users and defending where necessary.

“We must start with visibility: think about ‘how do I know what GenAI platform is being used, what it’s being used for and whether sensitive data is involved?’ We also need instance detection and controls. Visibility is key because it allows us to answer those questions, and real-time coaching and control – for example, pop-up warning notices – help reinforce policies.”
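Beyleveld did not describe an implementation, but the pattern he outlines – inspect the prompt, detect sensitive data, and coach the user before anything leaves the device – can be sketched roughly as follows. This is a minimal illustration in Python: the regex detectors, the coach_user helper and the policy wording are all hypothetical, and a real deployment would rely on a proper DLP engine rather than a handful of patterns.

```python
import re

# Hypothetical detectors for sensitive data in outbound GenAI prompts.
# A production deployment would use a full DLP engine; these regexes
# are illustrative only.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "za_id_number": re.compile(r"\b\d{13}\b"),  # South African ID format
    "confidential_marker": re.compile(
        r"\b(confidential|board minutes|internal only)\b", re.I),
}

def inspect_prompt(prompt: str) -> list[str]:
    """Return the names of the sensitive-data detectors that fire on a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def coach_user(prompt: str) -> bool:
    """Real-time coaching: warn before the prompt leaves the device.

    Returns True if the prompt may be sent, False if it should be held back.
    """
    findings = inspect_prompt(prompt)
    if not findings:
        return True  # nothing sensitive detected; allow the prompt
    # In a real agent this would be a pop-up notice; here we print it.
    print(f"Warning: prompt appears to contain {', '.join(findings)}. "
          "Company policy prohibits sharing sensitive data with GenAI tools.")
    return False

if __name__ == "__main__":
    coach_user("Summarise the confidential board minutes from last week.")
```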

“Education is a big part of this. We also need to reduce risk by understanding the users, their devices, the destination, the activity and the risks associated with these. You may want to take a zero trust approach to ensure GenAI is used productively and securely,” he said.
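The zero trust approach he refers to weighs the user, the device, the destination and the activity together before deciding whether to allow, coach or block a GenAI request. A rough, hypothetical sketch of that decision logic follows; the AccessContext fields, risk scores and thresholds are invented for illustration and are not Netskope's actual policy engine.

```python
from dataclasses import dataclass

@dataclass
class AccessContext:
    """The signals a zero trust policy weighs for each GenAI request."""
    user_risk: float        # e.g. from behavioural analytics, 0.0-1.0
    device_managed: bool    # corporate-managed device?
    app_risk: float         # destination GenAI app's risk rating, 0.0-1.0
    activity: str           # "prompt", "upload", "download", ...
    has_sensitive_data: bool

def decide(ctx: AccessContext) -> str:
    """Return 'block', 'coach' or 'allow' for a GenAI request.

    Thresholds and weights are hypothetical; a real policy engine would
    be driven by organisational policy, not hard-coded numbers.
    """
    # Sensitive data headed to a risky app, or uploaded anywhere, is blocked.
    if ctx.has_sensitive_data and (ctx.app_risk > 0.7 or ctx.activity == "upload"):
        return "block"
    # Unmanaged devices or risky users get a coaching prompt first.
    if not ctx.device_managed or ctx.user_risk > 0.5:
        return "coach"
    return "allow"

print(decide(AccessContext(user_risk=0.2, device_managed=False,
                           app_risk=0.3, activity="prompt",
                           has_sensitive_data=False)))  # -> "coach"
```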
