The challenge: Bringing order to a fragmented platform
The engagement began not with a contract, but with a conversation. Inspired Testing was initially brought in to conduct a structured assessment, designed to understand exactly what a leading global fintech needed and what Inspired Testing could offer.
What Viresh Nandkumar, Test Automation Architect at Inspired Testing, and his team discovered when they looked under the proverbial bonnet was a quality assurance environment in a precarious state.
"Their regression pack was, number one, insufficient," says Nandkumar. "And number two, a lot of their test cases were living in an isolated repository that was not part of the main codebase. The code had become stale and out of date with the current application."
Compounding the problem, some test code hadn't even made it into that isolated branch, meaning it was effectively invisible to the team. The client, which powers digital and AI transformations for financial institutions worldwide across the retail, commercial and wealth management sectors, was releasing software with significant gaps in regression coverage, test asset visibility and codebase alignment.
The picture that emerged was one familiar to many fast-growing technology organisations: a testing estate that had struggled to keep pace with the speed of product development.
The opportunity: AI skills meet deep testing expertise
Inspired Testing's proposition going into the engagement was grounded in two things: a structured methodology for building robust, scalable test architectures and practical, hands-on experience applying AI to accelerate the delivery of those architectures.
But Nandkumar is candid about the fact that deploying AI effectively on a platform of this complexity required more than arriving with existing skills; it required a willingness to evolve them and to apply critical thinking.
"I came into the project understanding how to prompt AI in terms of context and what I was trying to achieve," he explains. “The complexity of the financial institution environment made effective use of AI far more demanding, but it also enabled us to learn how to guide and empower AI in a way that delivers real value.”
That levelling up centred on a core principle: AI will always find the quickest path to an output, and if it doesn't have enough context, it will take shortcuts.
For a platform serving financial institutions across multiple markets, with distinct rules and behaviours for US and European operations, shortcuts are not an option. The team had to ensure AI was fully briefed on the market context it was operating in, the specific configuration of the application it was testing and the boundaries of what constituted a valid result.
Alongside this human expertise, Inspired Testing's own VeloAI platform, a copilot for testers, played a meaningful role. VeloAI generates positive, negative and boundary test cases, together with automation code, from structured requirements. Rather than allowing AI to decide what to automate, testers stay in control by selecting the scenarios themselves, which makes them more efficient and effective. As Nandkumar puts it, this is about guiding AI through the process instead of simply saying: "Here are all my feature files, go and run them."
The process: Two work streams, running in parallel
The remediation programme ran across two concurrent work streams, each targeting a distinct layer of the quality problem.
Work stream one was dedicated entirely to resolving the technical debt. Over a 2.5-month period beginning in October, the team executed what internally became known as the "great migration": consolidating all siloed test assets from isolated branches directly into the main codebase. It proved more involved than anticipated.
"There was code that was also not put into that isolated branch," Nandkumar notes. "So we had to get that done as well."
Rather than routing changes through the holding branch, the team adopted a "main first" approach, bypassing the isolated branch altogether and taking code directly into the primary repository. The migration ran from October through to December, with further integration work continuing into February. By the end of January, nearly 200 tests had been stabilised across two variants, totalling approximately 400 scenarios, and the entire suite was live inside the client's CI/CD pipeline.
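The client's actual pipeline configuration isn't published in this case study, but as a rough, hypothetical sketch, a regression suite wired in as a gate between environments might look something like the following GitHub Actions-style fragment. The job name, tag and commands are all invented for illustration; only the Playwright tooling is taken from the engagement itself.

```yaml
# Hypothetical illustration only: the stabilised regression suite
# acting as a gate before code is promoted to the next environment.
jobs:
  regression-gate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install dependencies
        run: npm ci
      - name: Run the regression suite (both variants)
        run: npx playwright test --grep "@regression"
```

In a setup like this, a failing regression run blocks promotion automatically, which is what makes the suite usable "on a release basis" rather than as an occasional manual exercise.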
Work stream two ran concurrently and focused on building net-new automated test coverage, structured around the test pyramid.
The first department tackled the core functionality of the platform in its unconfigured, pre-production state, the fundamental engine beneath the client-specific customisation. Using a combination of business analysis, AI-driven discovery (ingesting screenshots and user flows to reverse-engineer a reliable source of truth) and agent-based automation, the team had brought their tests into alignment with the pyramid by month two. Then came the acceleration.
"Seventy-five percent of those 168 automated test cases were done in the last two weeks and maybe the first week of January," notes Nandkumar. "That gives you the idea of how we used AI to ramp up the production of our tests."
The critical turning point was giving AI full access to the code repository, not just the documentation. Once the system had genuine context, output accelerated dramatically.
The second department, covering transaction processing, presented a step up in complexity. With less documentation to work from, a complete UI overhaul under way and significant market-specific customisation to account for, the team spent January upskilling on a new repository structure focused on state management and transactional verification.
Requirements were completed and signed off, and over 250 new test scenarios were authored and submitted for review, with more still being identified and added. A separate proof of concept also demonstrated that AI could successfully translate existing test logic from the original Playwright framework into a new, experimental automation framework, a task that would have been prohibitively time-consuming if attempted manually.
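The POC's code isn't shown in this case study, but the general shape of such a cross-framework translation can be sketched. Below is a minimal, hypothetical Python illustration: a framework-agnostic description of a test step is rendered into code for two different automation frameworks. The step schema, the emitter functions and the "new framework" API are all invented for illustration; only Playwright is named in the engagement itself.

```python
# Hypothetical sketch: the same abstract test step rendered into two
# automation frameworks. All names here are illustrative, not the
# client's actual frameworks or the POC's real code.

STEP = {"action": "click", "selector": "#submit", "expect_url": "/confirmation"}

def to_playwright(step: dict) -> str:
    """Render an abstract step as Playwright (TypeScript) code."""
    return (
        f"await page.click('{step['selector']}');\n"
        f"await expect(page).toHaveURL('{step['expect_url']}');"
    )

def to_new_framework(step: dict) -> str:
    """Render the same step for a hypothetical target framework."""
    return (
        f"driver.element('{step['selector']}').tap()\n"
        f"driver.assert_url('{step['expect_url']}')"
    )

print(to_playwright(STEP))
print(to_new_framework(STEP))
```

The point of the POC was that AI can perform this kind of mapping at scale across years of accumulated test logic, where a mechanical translator like the sketch above would need hand-written rules for every pattern.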
The benefits: Pipeline-ready, pyramid-structured quality
The value delivered across both streams is tangible and measurable. "If you go back to our original stream one, they now can use their automation on a release basis since it's part of their testing pipeline as they go through their various gates of environments," says Nandkumar. "And if you go to stream two, the value is that at the lowest level, they're able to test components, and at the highest level, they're able to test a full integration."
For an organisation operating at the scale and complexity of a global financial institution, that shift from fragmented, disconnected tests to a fully integrated, pyramid-structured regression suite represents a fundamental change in how quality is governed.
Defects can now be identified at the component level, where they are fastest and cheapest to resolve, rather than surfacing only at the end of a release cycle, when the cost of remediation is at its highest.
The productivity proof point is equally compelling. Two engineers produced 168 automated test cases in a single month, a pace far beyond the team's previous output. The cross-framework POC demonstrated that years of embedded test logic could be repurposed into a new architecture without months of manual rework.
Moreover, the 250-plus transactional scenarios now moving through review represent a depth of coverage the client simply did not have before.
Looking ahead, the team is already investigating back-end API and service-layer testing, moving into the engine room of the platform to validate the transactional services that underpin everything above them, with performance engineering and API security testing on the horizon.
Security and governance, often cited as a concern when AI is introduced into a regulated financial services environment, were addressed by using the client's own isolated AI environment, one with built-in guardrails and full context of their codebase, ensuring that data remained contained and protected throughout.
The learnings: AI-assisted, not AI-driven
Perhaps the most important output of this engagement is the clarity it provides on a question the industry is still working through: what does responsible, effective AI adoption in software testing actually look like?
Nandkumar draws a clear and deliberate line between two distinct modes of operation. "AI-driven is where AI is doing everything on your behalf, and you're just taking its results and moving with it," he explains.
"AI-assisted is:
- I'm trying to achieve a goal.
- I prompt the AI for what I want to achieve.
- AI writes my test case.
- I validate the output against my reasoning and that becomes my test case going forward.
"Who's in control becomes the central question. Is the AI in control, or is the AI only the assistant to the tester?"
The distinction is not merely philosophical. It has direct, practical consequences in a financial services context.
"You don't know whether AI is right or wrong if you don't know the system you're testing," Nandkumar says. "You're then making an assumption of what the AI is supposed to produce, and you cannot validate AI's results."
This same clarity of thinking extends to the team's exploration of MCP (Model Context Protocol) servers as an automation mechanism. Nandkumar sees real value in the approach for rapid, lightweight validation: "It's fast, it's quick and the technical debt and maintenance is low, but it provides an insufficient level of validation detail for comprehensive regression coverage."
The lesson Inspired Testing carries from this engagement into every client conversation is straightforward: AI is transformative when it amplifies human expertise. It becomes a liability when it replaces it.
Inspired Testing
Inspired Testing is a global software testing company backed by software and technology group Dynamic Technologies. With an unwavering commitment to delivering tailored expertise, Inspired Testing offers talent augmentation to strengthen teams, strategic consulting to advance test environments and a comprehensive suite of software testing services to ensure software excellence. Their expertise includes AI-assisted Testing, Test Automation, Performance Testing, Manual Functional Testing, Accessibility Testing, and Security Testing, along with specialised services such as Test Data Management, Data Testing, and Test Environment Management.
With over 300 employees, Inspired Testing has evolved into a thriving organisation, serving clients from its head office in the UK and delivery centres worldwide. Dedicated to staying at the forefront of industry trends and technologies, the Inspired Academy offers leading training programmes to upskill both internal and external testing professionals and keep their skills relevant.
Inspired Testing is part of the Dynamic Technologies group. For more information, visit https://www.inspiredtesting.com/