Stephen Platten, Principal Consultant, Inspired Testing.
As Inspired Testing outlined in the press release titled “Why financial services platforms fail after customisation – and what to do about it”, modern banking platforms are among the most heavily engineered software products in existence.
Vendors invest significantly in regression automation, regulatory compliance, static analysis and controlled release processes, which is why the vendor core is typically stable, predictable and well understood.
However, financial services organisations typically build their own business-specific customisations on top of this core, and it is in these customisation layers that defects, incidents and test complexity tend to emerge.
This is where using AI in testing comes in: to help us accelerate how we test these expanded, customised systems, especially in industries like banking, where (down)time is literally money.
But while AI is now an essential tool for speeding up complex testing processes, AI alone can’t be expected to solve the complexity issues. As a company, Inspired Testing differentiates itself with experienced testers who not only know how to use AI optimally for testing, but also how to interrogate AI-generated information to ensure the best possible quality outcome for the client.
The three complexity problems AI helps to solve
It is now exceedingly difficult to test complex financial services systems to the level required, in the timeframe that the client requires them tested, without using AI.
To understand why, here are the three primary problems that AI helps us resolve more quickly and reliably than ever before:
- Interaction-driven defects. The majority of defects no longer originate from inside isolated features. Instead, they arise from the new interactions created by the customisation framework. These are not bugs in the code – they are emergent behaviours that surface only when specific combinations of configurations, rules and integrations converge. They are also difficult and time-consuming to spot with traditional test techniques, so AI helps us identify and isolate these defects far more efficiently.
- Scenario explosion. As integrations and amendments to the framework grow, exhaustive testing becomes impractical, for the same reason given above. In an attempt to compensate, traditional tests become shallow and broad, a coverage illusion that trades depth for volume and leaves the highest-risk areas under-tested. AI gets us back on track by sorting through this complexity far faster than traditional approaches can, and presenting us with the information we need to make better decisions.
- False confidence from coverage. This happens as the system drifts further from the original framework, so the team’s understanding no longer matches how it actually behaves. High coverage percentages reinforce the illusion of control, while obscuring where risk actually concentrates: around those interaction points. The system appears tested, except it’s not tested where it matters. AI gives us a more comprehensive and, importantly, more accurate view of the entire system, without the gaps.
When you take all three of these scenarios into account, it becomes clear that testing with AI is no longer a nice-to-have; it’s a must-have.
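To make the scenario-explosion point concrete: combinatorial test design is one widely used way to tame it. The sketch below is illustrative only (it is not Inspired Testing's or VeloAI's actual tooling, and every parameter name and value is hypothetical); it uses a greedy all-pairs strategy to cover every two-way interaction between customisation options with far fewer cases than exhaustive testing would need.

```python
from itertools import combinations, product

# Hypothetical customisation parameters, for illustration only.
params = {
    "channel":  ["branch", "mobile", "api"],
    "currency": ["ZAR", "USD", "EUR"],
    "rule_set": ["base", "custom_v1", "custom_v2"],
    "region":   ["ZA", "UK"],
}
names = list(params)

exhaustive = 1
for values in params.values():
    exhaustive *= len(values)  # every full combination: 3 * 3 * 3 * 2 = 54

# Every (parameter, value) pair combination that two-way testing must cover.
uncovered = set()
for a, b in combinations(names, 2):
    for va, vb in product(params[a], params[b]):
        uncovered.add(((a, va), (b, vb)))

def pairs_of(case):
    """All two-way interactions exercised by one full test case."""
    return {((a, case[a]), (b, case[b])) for a, b in combinations(names, 2)}

suite = []
while uncovered:
    # Greedily pick the full combination that covers the most uncovered pairs.
    best = max(product(*params.values()),
               key=lambda combo: len(pairs_of(dict(zip(names, combo))) & uncovered))
    case = dict(zip(names, best))
    suite.append(case)
    uncovered -= pairs_of(case)

print(f"{exhaustive} exhaustive cases reduced to {len(suite)} pairwise cases")
```

With only four parameters the saving is already large; as customisation layers add parameters, the exhaustive count grows multiplicatively while the pairwise suite grows far more slowly, which is exactly the gap that risk-based, AI-assisted selection then has to prioritise within.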
AI helps Inspired Testing overcome the limitations of traditional testing methods when applied to banking and other complex financial systems. It does this by analysing large volumes of artefacts – rules, configurations, mappings and test results – and rapidly identifying patterns across releases, environments and data states.
In doing so, AI surfaces those issues where behaviour diverges from the intent of the design or requirements – in other words, where the customised and organically expanded system diverges from the stable core platform.
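The underlying idea of surfacing divergence between a customised system and the stable vendor core can be sketched as a structured diff over configuration artefacts. This is a minimal illustration under assumed data shapes, not VeloAI's implementation; the rule names and values are invented for the example.

```python
# Hypothetical vendor-core rule settings (baseline behaviour).
baseline = {
    "overdraft.limit_check": "strict",
    "fx.rounding": "half_even",
    "payments.retry_count": 3,
}

# Hypothetical client customisation layer applied on top of the core.
customised = {
    "overdraft.limit_check": "relaxed",    # changed from the core default
    "fx.rounding": "half_even",            # unchanged
    "payments.retry_count": 5,             # changed from the core default
    "loyalty.cashback_rule": "custom_v2",  # added by the customisation
}

def divergences(core, custom):
    """Return (key, core_value, custom_value) for every setting that
    differs from, or does not exist in, the vendor core."""
    out = []
    for key, value in sorted(custom.items()):
        if key not in core:
            out.append((key, None, value))       # new behaviour
        elif core[key] != value:
            out.append((key, core[key], value))  # changed behaviour
    return out

for key, core_val, custom_val in divergences(baseline, customised):
    print(f"{key}: core={core_val!r} -> customised={custom_val!r}")
```

In practice the artefacts compared are far richer than flat key-value settings, but the principle is the same: make the delta between core and customisation explicit, so testing effort can concentrate on exactly those divergence points.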
AI helps identify which processes come under tension, and where interconnections and changes have created risks that no one intended or foresaw. It gives QA teams the ability to gather richer information about systemic risk, so the stakeholders who make release and governance decisions are genuinely informed, not merely reassured.
Testing with AI is no longer a future ambition or a headline Inspired Testing is chasing; it’s now a core part of the company's everyday skillset. As such, Inspired Testing uses it across client environments to solve all the issues described above, along with many others that fall outside the scope of this discussion.
But there’s a big difference between merely adopting AI and knowing how to use it correctly for the best possible outcomes, and that difference is what distinguishes Inspired Testing from many other testing organisations.
Not only is Inspired Testing comfortable with using AI in practice, but the company understands both the power and the limits of AI. In fact, Inspired Testing has developed its very own purpose-built AI testing platform, VeloAI, specifically to help it operate where complexity accumulates, and to give its human testers the necessary oversight to validate the AI-generated results.
Inspired Testing experts are – and will always remain – responsible for interpreting risk, applying context, making release decisions and owning outcomes. AI gives them the information they need to do this well, and at a speed and scale that was previously impossible without it.
But AI cannot be, and is not, left to its own devices. It can and does make mistakes, write incomplete test cases and miss important nuances. AI is fundamentally poorer without expert oversight, and that expertise is what makes Inspired Testing’s use of AI in testing a competitive advantage instead of a potential liability.
Vendor platforms are rarely fragile, but the complexity we build on top of them can easily outpace the ability of traditional testing methodologies to keep up.
That’s because, over time, issues emerge from well-intentioned customisation, and we end up creating systems that no longer behave predictably. Trying to test unpredictable systems manually, even with advanced test automation techniques and large test teams, simply takes too long in today’s competitive landscape.
Using VeloAI and other AI-assisted testing platforms gives us new and better ways to observe, prioritise and manage complexity at a scale beyond the capability of human testers using traditional methodologies alone.
They quickly and effectively identify tension points, facilitating better and more accurate human decisions, but only if the testers making those decisions have the skill and experience to review and assess the AI-generated content at their disposal.
As Inspired Testing's AI capability matures, the company is already gearing up for what comes next. Its AI platforms – and human skills – are not static; they’re constantly evolving, and the company is committed to staying relevant as the complexity of financial services environments continues to grow.