In a global study undertaken by the Ponemon Institute and OpenText, only half of the companies surveyed had adopted AI as part of their overall IT and business strategy, and only 19% were planning to adopt the technology over the next six months. The challenge lies in the friction points: 57% of companies rated AI adoption as very to extremely difficult, while 53% said the same about reducing security and legal risks.[1]
There’s a constant tension between the potential of AI and the complexities of implementing it within the business. Thabiso Hlatshwayo, Solution Consulting Senior Manager for Africa at OpenText EMEA: Emerging Markets, points out that success lies in AI readiness.
“AI readiness should be treated as an information discipline before it’s treated as a technology programme,” he explains. “Yes, AI is critical because it does deliver measurable benefits, but the first step is to uncover what areas of your business aren’t quite ready for AI, and right now, the most common friction point is data.”
The data that resides within the organisation has become critical to building an agile AI ecosystem. Public data is no longer as valuable because it isn’t clean, lacks governance and isn’t contextual to the organisation. AI that relies on unstructured or generic data won’t deliver outputs relevant to the business’s reality, and a competitive advantage can’t be built on non-contextual information.
“You need clean data at the source,” says Hlatshwayo. “And this translates into managing, structuring, categorising and understanding it within your business context.”
The Ponemon report also underscores the value of prioritising AI readiness below the application layer. When respondents were asked what steps they were taking to reduce AI-related risk, the most common was data security practices. Other frequently cited steps included the verification of AI prompts and responses (39%), training teams to spot AI-generated behaviour patterns or threat actors (39%) and using data cleansing and governance (38%).
However, Hlatshwayo cautions that improving AI readiness requires that companies confront their structural weaknesses in how their information is stored and secured, while also prioritising the development of AI-focused governance frameworks. One of the biggest misconceptions, he says, is that you can simply switch AI on and the value will follow.
“If the underlying data is fragmented or duplicated or poorly classified, then AI will simply amplify those weaknesses,” he continues.
Another challenge is clarity. Information sits across multiple systems, often in silos and with no unified structure, so integration becomes complicated and risk management inconsistent. “Companies use multiple systems, and if those systems aren’t anchored to a central repository, then integrating information becomes stressful and unreliable,” says Hlatshwayo. “Reducing risk comes down to creating a trusted internal data environment, protecting information both in transit and at rest, and ensuring stakeholders can rely on the integrity of what they’re seeing. Poorly governed data is also challenging as it increases the likelihood of inconsistent outputs and unreliable insights.”
OpenText prioritises AI readiness while reducing risk, with intentional approaches built around how companies live within their data and systems. The company ensures that information is protected throughout its life cycle, turning security from a constant worry into a managed discipline. Importantly, by elevating readiness through incremental and intelligent processes, companies can focus on controlled and repeatable use cases that deliver value consistently.
“The idea of a big bang approach to AI is very difficult,” Hlatshwayo concludes. “Connecting automated processes through agent-to-agent connections, giving users the power to experiment in contained areas, and winning within small niches within the organisation reduces operational and reputational risk. This approach allows companies to refine governance while strengthening security and validating outputs before scaling AI broadly.”
Ultimately, Hlatshwayo believes that AI readiness is a steady posture around information maturity, embedding trust as a foundational layer within AI and then building innovation on top of that trust. It is this preparedness that allows companies to step into AI and extract value instead of frustration.