Phila May, Executive GTM at inq. Digital.
As organisations move beyond experimentation with generative AI (GenAI), the focus is shifting from curiosity to commercial value. The question many technology leaders are now asking is not whether AI works, but how it can deliver measurable business outcomes.
According to Phila May, Executive GTM at inq. Digital, the companies seeing real returns from GenAI are those that treat it as an engineering challenge rather than a standalone technology.
“Over the past two years, many organisations experimented with GenAI tools. The real shift we are seeing now is from experimentation to production. Businesses want to know how AI improves revenue, reduces costs or accelerates delivery. That requires the right architecture behind the models,” says May.
In practice, this means integrating AI into existing data environments, software development pipelines and operational systems rather than deploying isolated tools.
Across sectors such as financial services, retail and telecommunications, GenAI is already influencing day-to-day operations. In customer service environments, organisations are deploying AI assistants to help agents respond faster and resolve issues more efficiently. In software development, AI coding tools are helping teams modernise legacy applications and reduce time spent on routine tasks.
“AI is increasingly being used in places where it can influence the bottom line. Customer service operations, software engineering teams and infrastructure management are three areas where we are seeing the strongest impact.”
However, many organisations discover that scaling AI is far more complex than running a pilot project. The challenge is rarely the AI model itself but the surrounding systems required to support it.
“Most AI projects stall between pilot and production. The barrier is usually fragmented data, unclear governance or a lack of operational processes around how AI applications are built and monitored.”
For this reason, organisations are increasingly focusing on building stronger data foundations and governance frameworks before expanding AI adoption. Unified data environments, clear access controls and monitoring systems are becoming essential components of enterprise AI strategies.
Responsible AI practices are also gaining prominence as companies deploy generative systems that interact directly with customers and employees. Issues such as data privacy, bias and transparency must be addressed within the architecture rather than added later as compliance requirements.
“Responsible AI cannot be treated as an afterthought. If organisations want to scale AI safely, governance has to be built into the platform from the beginning,” adds May.
Another area receiving growing attention is how organisations measure the return on their AI investments. Early discussions often centred on productivity gains, but executives are increasingly demanding clearer financial outcomes.
“Boards and CFOs are asking a much tougher question now. Where is the revenue impact? Where are we saving costs? AI projects need to connect directly to business performance.”
This shift is pushing organisations to track metrics such as customer service efficiency, software development velocity and operational uptime when evaluating AI deployments.
Despite the excitement surrounding GenAI, May cautions that meaningful transformation takes time.
“AI adoption is a journey,” he says. “The organisations that succeed will not be those deploying the most AI tools, but those building the strongest foundations for data, governance and operations.”
As GenAI continues to mature, the ability to translate experimentation into reliable, scalable systems will increasingly determine which organisations capture lasting value.
“The real opportunity lies in engineering AI into the business. When that foundation is in place, GenAI moves from hype to measurable advantage,” concludes May.