Guardrails and good governance are key to mitigating the risks associated with generative AI, according to speakers from Liquid C2 and Scytale during a recent webinar on AI security.
Flip Erasmus, Client Engagement Manager at Liquid C2, said: “Since the launch of ChatGPT in 2022, generative AI has permeated all areas of business and continues to grow exponentially. Through our engagements with clients, we see AI being used largely in operations and cyber security.”
He noted, however, that McKinsey research identifies risks associated with generative AI, including inaccuracy, intellectual property infringement, personal privacy concerns and regulatory issues.
Erasmus said: “Many organisations have resorted to blocking the use of generative AI to mitigate these risks, but we believe good governance and solid guardrails offer a better alternative. Good areas to start with include addressing data security, vulnerable models, new attack vectors, access control, misuse and compliance with laws and regulations.”
Conceding that implementing guardrails could be easier said than done, he said tools and governance frameworks were available to enable organisations to mitigate risk more effectively.
Hadir Labib, Blue Team Manager at Liquid C2, outlined the company’s framework for secure and responsible AI governance. Spanning seven domains, it covers understanding the organisation and its AI use cases, establishing appropriate policies and procedures, creating an AI risk management framework, defining the AI supply chain, aligning with laws and regulations, safeguarding data, and integrating with security assurance activities.
She recommended that organisations set the right tone for the acceptable use of AI and update policies and procedures around AI.
“Many organisations already have risk management frameworks, and an AI risk management framework should be similar,” Labib said.
She said organisations should also define their AI supply chains and their associated risks, and introduce AI-specific security controls within third-party processes, contractual agreements and other components.
“Organisations must also consider data and its security throughout the data life cycle within the AI environment, and ensure that the focus on AI security does not come at the expense of fundamental controls,” she said.
On vulnerability assessments, she said AI-specific assessments should include regular penetration tests, code reviews, vulnerability testing and tabletop exercises.
The speakers noted that Liquid C2’s SecureAI governance, risk and compliance services offer a holistic approach to AI security, ensuring compliance, risk mitigation and governance excellence.
Victor Lange, Partnerships Manager at Scytale, outlined how Liquid C2, in partnership with Scytale, simplifies compliance with Scytale’s AI-powered security and compliance hub.
Lange said: “In the South African market, we see many organisations still using legacy, manual systems to manage compliance, but this is risky, expensive and time-consuming. We implement advanced AI security and compliance hubs, with risk management, trust centres and training to help organisations get compliant and stay compliant. We integrate into all the organisation’s IT systems to monitor all the tools and pull the evidence needed to prove compliance.”
He noted: “There are many AI-related regulations in place, and these are constantly being updated. The Scytale and Liquid C2 partnership supports all of them. We support over 45 frameworks, including SOX ITGC, POPIA, the EU AI Act and ISO 42001 (AI management systems). Preparing for audits can be a scramble when organisations use legacy approaches, but Scytale and Liquid C2 make compliance happen fast.”