
Navigating AI risk with confidence: TrueMark’s approach to the NIST AI Risk Management Framework

Johannesburg, 29 Jul 2025
Managing the risks associated with AI adoption.

As artificial intelligence (AI) continues to transform industries, the need for structured, responsible and secure implementation is more critical than ever. TrueMark, a leader in cloud-based risk and compliance solutions, is helping enterprises navigate this evolving landscape. Central to their approach is the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework (AI RMF), a global standard designed to help organisations manage the risks associated with AI adoption.

In a recent interview, Will Hasty, Chief Risk, Governance, and Compliance Officer at TrueMark, shared how the company is guiding businesses through AI risk. With a background in global data centre operations and enterprise consulting, Hasty leads TrueMark’s risk and compliance division, which focuses on enterprise assessments and building compliant cloud environments, primarily on AWS, but also across Microsoft Azure, Google Cloud and on-premises.

Understanding the NIST AI RMF

TrueMark leverages the NIST AI RMF to ensure trustworthy AI, aligning with South Africa’s regulatory and societal needs. “AI is everywhere now,” Hasty explained. “It’s reshaping core business functions, but it’s also introducing a new class of risk.” The AI RMF is a voluntary, flexible guide to managing AI risk, structured around four functions: map, measure, manage and govern.

  1. Map: Identifies AI risks, such as bias in credit scoring systems used in South African microfinance, ensuring compliance with POPIA’s fairness principles.
  2. Measure: Quantifies risks, like assessing data quality to prevent errors in AI-driven healthcare diagnostics.
  3. Manage: Mitigates risks, ensuring AI systems respect South Africa’s data sovereignty by prioritising local data storage.
  4. Govern: Establishes policies and accountability, aligning with POPIA’s requirement for an information officer to oversee data protection.
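The four functions above can be pictured as stages in a simple risk-register workflow. The sketch below is a minimal illustration only; the class and method names are hypothetical and do not represent a TrueMark product or API.

```python
from dataclasses import dataclass, field

@dataclass
class AIRisk:
    system: str           # AI system the risk belongs to
    description: str      # identified risk (Map)
    score: float = 0.0    # quantified severity, 0-1 (Measure)
    mitigation: str = ""  # corrective action taken (Manage)
    owner: str = ""       # accountable role, e.g. information officer (Govern)

@dataclass
class RiskRegister:
    risks: list = field(default_factory=list)

    def map_risk(self, system, description):
        """Map: record a newly identified AI risk."""
        risk = AIRisk(system, description)
        self.risks.append(risk)
        return risk

    def measure(self, risk, score):
        """Measure: attach a quantified severity score."""
        risk.score = score

    def manage(self, risk, mitigation):
        """Manage: record the mitigation applied."""
        risk.mitigation = mitigation

    def govern(self, risk, owner):
        """Govern: assign an accountable owner."""
        risk.owner = owner

    def open_items(self, threshold=0.5):
        """High-severity risks still lacking a mitigation or an owner."""
        return [r for r in self.risks
                if r.score >= threshold and (not r.mitigation or not r.owner)]
```

For example, a mapped bias risk in a credit-scoring system stays on the open-items list until it has both a mitigation and an accountable owner, mirroring the framework’s insistence that measurement alone is not governance.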


For South African firms, NIST RMF offers a practical path to AI governance, complementing POPIA and addressing local challenges like resource constraints and diverse populations.

TrueMark’s role in AI risk management

TrueMark offers a suite of services to help businesses implement the NIST AI RMF. “We start with an assessment,” said Hasty. “We evaluate where a company stands, identify gaps and provide corrective action plans.” TrueMark also supports the development of policies, procedures and automation tools to help clients mature their AI governance capabilities.

This approach is especially valuable for companies that have already adopted AI but lack a structured risk strategy. “A lot of organisations don’t even know where their AI is running or what data it’s using,” Hasty noted. “We help them discover, assess and secure those environments.”

Addressing common AI risks

Among the most pressing risks in AI are bias and model drift. Bias can lead to unfair or discriminatory outcomes, while model drift, where an AI model’s performance degrades over time, can compromise accuracy and reliability. These risks have serious implications for compliance, reputation and operational integrity.

The NIST framework helps organisations identify these issues early and take corrective action. “It’s about building trust in AI systems,” Hasty emphasised. “And that trust starts with transparency, accountability and continuous evaluation.”

Integrating AI risk into broader strategies

A key strength of the NIST AI RMF is its compatibility with existing cyber security and enterprise risk frameworks such as ISO 27001 and NIST CSF. “You don’t have to start from scratch,” Hasty explained. “This framework extends what you already have into the AI space.”

TrueMark encourages clients to avoid siloing AI initiatives. Instead, they advocate for integrating AI governance into existing risk registers, reporting structures and change management processes. “AI shouldn’t live in a separate world,” Hasty said. “It needs to be part of your enterprise-wide strategy.”

The human element and cross-functional collaboration

AI risk management isn’t just a technical challenge; it’s a human one. TrueMark emphasises the importance of cross-functional collaboration and company-wide training. “AI touches everything: legal, compliance, HR,” Hasty said. “Everyone needs to understand the implications of using AI, especially when it comes to handling sensitive data.”

He also stressed the importance of ongoing education and periodic evaluations to ensure AI systems behave ethically and securely over time.

AI governance at the board level

Given the high stakes, Hasty believes AI governance should be a boardroom-level concern. “AI affects operations, compliance and reputation,” he said. “It should be treated like any other enterprise risk.” Some organisations are even appointing dedicated AI risk officers and expanding governance committees to include AI oversight.

Looking ahead

As AI continues to evolve, so too will the frameworks that govern it. Hasty anticipates updates to the NIST AI RMF in the near future. “This is version one,” he said. “We expect more guidance as the technology and its risks evolve, but now is the time to ensure that your organisation is compliant.”
