
Apply caution and critical thinking against GenAI social engineering, says KnowBe4


Johannesburg, 11 Dec 2023
Anna Collard, SVP of content strategy and evangelist, KnowBe4 Africa.

Generative AI gives cyber criminals a powerful new tool for manipulation, so people need to be more cautious than ever before.

So says Anna Collard, SVP of content strategy and evangelist for KnowBe4 Africa, who was speaking during a webinar on embracing and securing generative AI in South Africa.

Collard said that while generative AI was "amazing and could offer multiple benefits", people tended to trust it even in areas where it wasn't competent. "Despite warnings, they don't challenge generative AI outputs. This highlights the importance of critical thinking," she said.

She added: “Because generative AI has access to vast amounts of data, it can create fake, nuanced new versions of existing pictures – for example of celebrities. It can create completely virtual people and use them as influencers and spokesmen, for example recreating a young version of ABBA to perform in concerts. Anyone and everyone can create deepfakes. One of the most worrying uses of deepfakes is political fraud, while another concern is vishing, or deepfake voice calls.”

“Social engineering is still one of the most effective ways to manipulate people, and generative AI is making social engineering even more effective,” she said.

Collard said that trust in, and over-reliance on, generative AI made people susceptible to deepfakes and manipulation, and that cyber security training should take this into consideration.

She said: “The recent ITWeb – KnowBe4 AI survey found that 36% of organisations aren’t dealing with potential misuse of generative AI at all. 42% don’t yet provide specific training around the risks of generative AI, but at the same time most people are comfortable sharing their personal information with generative AI tools. 58% don’t provide training around deepfakes, yet 48% of end users are not aware of what deepfakes are.”

“Cyber criminals, propagandists and other bad actors tap into people’s own vulnerabilities and will continue doing so using generative AI. People need to apply mindfulness and critical thinking to ‘patch’ themselves, embracing new technology while maintaining a healthy level of scepticism,” she said.

She also highlighted the importance of not uploading sensitive information to ChatGPT, and of using the privacy functions available in AI tools.

“There’s no silver bullet in security to mitigate the risk of generative AI-enabled cyber threats. Organisations need multiple layers of security and a zero-trust mindset,” Collard said.

She recommended implementing strong security measures, conducting regular audits and testing of AI systems, focusing on transparency and accountability, developing and enforcing ethical AI policies, fostering a culture of cyber security and staying informed about AI advancements.

Collard said all users should adopt a culture of continuous learning, while IT and cyber security teams should make an effort to understand the capabilities and risks associated with AI.
