AI adoption is exploding across Africa, but without proper readiness assessments and a controlled rollout, it can expose organisations to new risks.
This is according to Kejendree Pillay, Microsoft Portfolio Manager at First Distribution, one of the leading Microsoft CSP distributors in Africa.
Speaking during a webinar presented by First Distribution in partnership with ITWeb, Pillay said: “AI is embedded in our daily workflows, helping organisations make decisions faster and be more productive.”
However, the rise of generative AI tools has introduced new risks, and organisations now face the critical question of how to use AI without introducing risk into their environments, she said.
Pillay noted that recent reports showed 84% of AI tools have experienced data breaches, and up to 58% of employees admit to pasting sensitive data into AI tools.
“In many cases, they use their own personal AI – in fact, 45% of sensitive prompts were submitted to personal AI accounts. Users may not be doing it maliciously, but they might use their personal AI – or shadow AI tools – to enhance productivity,” she said.
Data misuse and privacy violations are not the only risks associated with generative AI, Pillay warned. Other common AI threats are prompt injection attacks, model hallucinations and AI-generated phishing. She noted that recent reports found 82% of phishing e-mails contain some form of AI-generated content.
Pillay highlighted the advantages of Microsoft Copilot, the AI assistant built into Microsoft 365.
“In local businesses, we find that most employees want Copilot to take meeting notes and summarise e-mails,” she said. “But Copilot is a lot more than that. Special use cases I have seen include putting proposals together based on historical data, and summarising tender documents to highlight key points. Many lawyers use Copilot to help them search and analyse previous cases in their archives. We also have agricultural customers who utilise the agent side of things – for example, analysing crop outputs based on the weather, pests, irrigation and the amount of fertiliser used.”
Pillay said Microsoft had set the bar high in terms of tools for closing security gaps around AI. These include Microsoft Purview, Microsoft Defender for Cloud and Microsoft Entra.
“Microsoft Purview helps you govern how data flows in and out of AI tools, and supports compliance. Microsoft Defender for Cloud protects cloud-based AI workloads and can pick up threats before they enter your organisation. Microsoft Entra delivers identity and access governance, supporting conditional access and role-based access control. It also ensures that AI tools only work within secure environments,” she said. “Microsoft security helps organisations build cyber resilience across identify, protect, detect, respond and recover.
“If your security posture is correct and you configure Copilot correctly, you will enjoy enterprise data protection, making it safe to use for all employees.”
She also highlighted First Distribution’s role in helping organisations optimise and secure Copilot.
“We come with a wealth of experience to help organisations understand the features within the Microsoft stack, including Copilot. Because employees don't always know the risks of using certain tools, First Distribution can help with training and change management,” Pillay said. “We offer security assessments and pre-sales and post-sales workshops to our CSP partners’ customers, to help them get AI into their environments in the correct way, and optimise its value for them. We offer training that ranges from basic Copilot use and the art of prompting, through to tailored technical training, which includes managing policies and mitigating security risks.”