
Why financial services platforms fail after customisation – and what to do about it

By Tristan Brown, Senior Solutions Consultant, Inspired Testing
Johannesburg, 27 Jan 2026
Tristan Brown, Senior Solutions Consultant, Inspired Testing. (Image: Inspired Testing)

There's a long-standing tradition in IT: when something breaks, blame the vendor. It's practically muscle memory at this point. A screen hangs, a workflow sulks, a report refuses to load, and somebody mutters "vendor bug" before anyone even checks the logs.

But here's the awkward truth: the vendor core is usually the most predictable, least offensive and frankly most boring part of your estate. Modern financial services platforms are hardened, regulated, endlessly regression-tested and engineered by teams whose full-time job is to prevent drama.

The chaos doesn't come from the core. It comes from everything organisations add to it. This press release explores that reality and outlines exactly how to test the creature you've built, not the platform you bought.

The vendor core isn't the monster

People love painting the vendor as the villain because it's convenient, tidy and helps everyone avoid looking too closely at their own estate. But the idea that the vendor core is the unstable part simply doesn't hold up anymore.

Modern financial services platforms have been engineered to within an inch of their lives. Vendors run regression farms that execute more tests in a single night than some internal teams run in a quarter. Static analysis tools catch issues long before human code reviews, while regulators examine the architecture until even the edge cases behave themselves.

The core is dull, which is exactly what you want. Dull is predictable and doesn't accidentally corrupt your billing file at 2am.

The real trouble starts when the clean, polished product arrives in your organisation and people get creative with small rule tweaks. Each change looks harmless on its own, but stitched together over time, the platform stops looking like the vendor's product and starts looking like something built in a dimly lit laboratory.

The vendor gives you a clean, stable body and your team adds wings, teeth and occasionally a tail. When it finally lurches into production and knocks over an integration or two, the reaction is always the same: "It must be the vendor." It rarely is. The monster is the collection of stitched-on limbs you added yourselves.

The customisation layer: Where the bodies are buried

This is the attic of your estate, where teams store things they don't want to deal with and then act surprised when something growls back.

This layer contains business rules copied from systems retired years ago; glue code patched repeatedly; logic written "temporarily" three years ago; data models bent out of shape to protect legacy reports; undocumented UI tweaks; integration mappings that have drifted quietly for a decade; and workflows that survived four restructures and now behave like folklore.

It's a wilderness that nobody owns, that nobody remembers designing, and where 70% to 80% of your defects originate. In financial services especially, this layer can turn an elegant vendor platform into a sprawling ecosystem where no one is entirely sure what talks to what, or why.

The vendor core stays calm while your edges behave like a house party getting out of hand.

Why teams still treat custom code as safe

Despite all evidence to the contrary, teams often act as if the custom layer is the least risky part of the platform, mistaking familiarity for stability.

Here are the myths that keep the chaos alive:

"It's only a small customisation." There is no such thing as a small customisation, only customisations whose consequences haven't yet introduced themselves.

"We copied it from the old system, so it's proven." Bad logic doesn't improve with age; it just finds new places to break.

"It passed UAT before, so it must be stable." UAT proves one thing only: someone can complete a happy path without crying.

"The vendor must have broken something." This usually translates to inadequate regression testing of your changes. The logs, the vendor's architects and reality usually disagree.

"It should just work." Custom code doesn't "just work". It works exactly as written, not as intended.

The defect snowball

The defect snowball doesn't start with a bang but with something tiny: a mapping mistake, a misaligned rule, a workaround that quietly rots. Then it spreads.

Regression cycles expand as confidence drops and teams widen scope "just to be safe", turning regression into an endurance event. Upgrades become rescue missions involving late-night debugging of logic written by someone who left the company two years ago.

Integrations start behaving like estranged relatives that don't communicate well or agree on anything. Incidents multiply as a small bad mapping becomes wrong data, which triggers a failed workflow, breaks an integration, corrupts a payload and leaves the service desk drowning in alerts.

And everyone repeats the ritual phrase: "Why is the vendor platform so unstable?" Because the custom layer is quietly fraying at the edges and nobody wants to admit it.

Where this hits hardest

Some platforms attract customisation like wasps to cake. Bank-in-a-box platforms like Temenos, Mambu, nCino and Thought Machine arrive clean and modular but leave covered in legacy rules and bespoke products. Investment platforms such as FNZ, Avaloq and Bravura have stable engines until someone adds custom pricing scripts and bespoke tax logic that melts during upgrades. CRM ecosystems including Salesforce FS Cloud and Dynamics get heavily customised under the illusion that "it's only configuration".

Across all of them, the pattern holds: the vendor core stays calm while the custom layer starts a rebellion.

How to test for customisation-driven defects

Testing needs to move to where the risk actually lives: the stitched-on logic, not the vendor baseline.

Build a customisation risk map. Catalogue every custom element and score it by complexity, criticality and likelihood of causing problems. Prioritise accordingly.
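
To make that concrete, here is a minimal sketch of what such a risk map could look like, assuming a simple multiplicative score over three one-to-five ratings. The element names, axes and weights below are illustrative, not a prescribed standard.

from dataclasses import dataclass

# Illustrative scoring model: each custom element is rated 1-5 on three
# axes and ranked so the riskiest stitched-on logic gets tested first.
@dataclass
class CustomElement:
    name: str
    complexity: int      # 1 = trivial config, 5 = bespoke code across modules
    criticality: int     # 1 = cosmetic, 5 = money movement or regulatory
    defect_history: int  # 1 = never failed, 5 = breaks on every upgrade

    @property
    def risk_score(self) -> int:
        # Multiplicative, so anything high on all three axes dominates.
        return self.complexity * self.criticality * self.defect_history

estate = [
    CustomElement("legacy fee-rule port", complexity=4, criticality=5, defect_history=4),
    CustomElement("custom onboarding screen", complexity=2, criticality=2, defect_history=1),
    CustomElement("GL interface mapping", complexity=3, criticality=5, defect_history=3),
]

for element in sorted(estate, key=lambda e: e.risk_score, reverse=True):
    print(f"{element.risk_score:>3}  {element.name}")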

Test the seams, not the centre. Most defects don't live in systems; they live between them.
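
A seam test exercises the handoff rather than either system in isolation. In this sketch, the payload builder and field names are hypothetical stand-ins for your own integration mapping; the point is asserting that what your custom export produces still matches what the downstream consumer expects.

# Contract check at a seam between two systems.
EXPECTED_CONTRACT = {
    "account_id": str,
    "amount_minor_units": int,  # integers avoid float rounding at the seam
    "currency": str,
}

def build_payment_payload(account_id: str, amount: int, currency: str) -> dict:
    # Stand-in for the custom mapping layer under test.
    return {"account_id": account_id, "amount_minor_units": amount, "currency": currency}

def test_payload_honours_contract():
    payload = build_payment_payload("ACC-001", 12500, "ZAR")
    assert set(payload) == set(EXPECTED_CONTRACT), "fields drifted at the seam"
    for field_name, field_type in EXPECTED_CONTRACT.items():
        assert isinstance(payload[field_name], field_type)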

Follow the data. Features don't break; mappings, transformations and reconciliations do.
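
In practice that means reconciling what went into a custom mapping against what came out, rather than just confirming the feature ran. The transformation and records in this sketch are hypothetical.

# Reconciliation check: follow the values across the boundary.
def map_to_downstream(record: dict) -> dict:
    # Stand-in for a custom transformation layer.
    return {"acct": record["account_id"], "bal_cents": round(record["balance"] * 100)}

source_records = [
    {"account_id": "ACC-001", "balance": 1250.50},
    {"account_id": "ACC-002", "balance": 0.10},
]

source_total = round(sum(r["balance"] for r in source_records) * 100)
downstream_total = sum(map_to_downstream(r)["bal_cents"] for r in source_records)
assert source_total == downstream_total, "value leaked or rounded away in the mapping"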

Make upgrade testing non-negotiable. Upgrades aren't side quests; they're boss battles that require proper preparation.
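
One way to prepare is to snapshot the outputs of your most critical custom flows before the upgrade and diff them afterwards. In the sketch below, the flow names and the run_flow stand-in are hypothetical; in reality it would execute each flow against a fixed test dataset.

import hashlib
import json
import pathlib

CRITICAL_FLOWS = ["interest-accrual", "fee-calculation", "statement-export"]

def run_flow(name: str) -> dict:
    # Stand-in: reduce a flow's output to a comparable fingerprint.
    fingerprint = hashlib.sha256(name.encode()).hexdigest()[:12]
    return {"flow": name, "fingerprint": fingerprint}

def snapshot(path: str) -> None:
    results = {name: run_flow(name) for name in CRITICAL_FLOWS}
    pathlib.Path(path).write_text(json.dumps(results, indent=2))

def diff_against(path: str) -> list[str]:
    before = json.loads(pathlib.Path(path).read_text())
    return [name for name in CRITICAL_FLOWS if run_flow(name) != before.get(name)]

# Before the upgrade: snapshot("pre_upgrade.json")
# After the upgrade:  print(diff_against("pre_upgrade.json"))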

Automate where it hurts. Focus automation on the brittle areas, not just the easy stuff.

Publish a custom code defect density metric. This changes behaviour faster than any governance framework.
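
The metric itself can be as simple as defects attributed to the custom layer divided by thousands of lines of custom code per component. This sketch assumes your defect tracker can tag the layer a defect originated in; the data shapes are illustrative.

from collections import Counter

# (component, layer) pairs pulled from the defect tracker.
defects = [
    ("fee-engine-ext", "custom"), ("fee-engine-ext", "custom"),
    ("core-ledger", "vendor"), ("gl-mapping", "custom"),
]
custom_kloc = {"fee-engine-ext": 12.0, "gl-mapping": 4.5}  # thousands of lines of custom code

custom_defects = Counter(c for c, layer in defects if layer == "custom")
for component, kloc in custom_kloc.items():
    density = custom_defects[component] / kloc
    print(f"{component}: {density:.2f} defects per KLOC of custom code")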

Enforce ownership of custom logic. If it runs in production, someone must own it.

Treat custom work as first-class engineering. If it can break the platform, test it like the platform.

Own the creature

The vendor core is fine. It's your customisation footprint that's causing the drama.

Once you start seeing your estate honestly, the testing strategy becomes obvious: test the creature you built, not the stable engine underneath. Stop blaming the vendor, start owning the custom layer, and test the reality you created rather than the fiction everyone wishes were true.

Do that and the whole estate calms down. Upgrades stop feeling like Russian roulette, regression shrinks, incidents drop and the creature finally behaves like it should.


Inspired Testing

Inspired Testing is a global software testing company backed by the software and technology group, Dynamic Technologies. With an unwavering commitment to delivering tailored expertise, Inspired Testing offers talent augmentation to strengthen teams, strategic consulting to advance test environments and a comprehensive suite of software testing services to ensure software excellence. Their expertise includes AI-assisted Testing, Test Automation, Performance Testing, Manual Functional Testing, Accessibility Testing, and Security Testing, along with specialised services such as Test Data Management, Data Testing, and Test Environment Management.

With over 300 employees, Inspired Testing has evolved into a thriving organisation, serving clients from its head office in the UK and delivery centres worldwide. Dedicated to staying at the forefront of industry trends and technologies, the Inspired Academy offers leading training programmes that upskill both internal and external testing professionals and keep their skills current.

Inspired Testing is part of the Dynamic Technologies group. For more information, visit https://www.inspiredtesting.com/

Editorial contacts

Karin van Blerk
Marketing & PR Manager
kvanblerk@inspiredtesting.com