
Mainframe rehosting doesn't have to be a big bang approach

Mainframe rehosting is more effective when done in stages, application by application and user by user, leveraging bidirectional data replication capabilities, says Sam Selmer-Olsen, MD of Bateleur Software.


Johannesburg, 29 Nov 2018
Sam Selmer-Olsen, MD, Bateleur Software.

Traditionally, when an enterprise wanted to move some or all of its business applications off a mainframe computer and into the cloud or onto another database platform, it had to be an all-or-nothing, big bang approach. Every user, system and application had to be migrated in one go.

This required extensive planning, testing and management to ensure the project was delivered on time and within budget. There was also the risk of business downtime, a massive user change management effort, and potential performance and stability issues.

From big bang to little sparks

A staggered, application-by-application, user-by-user migration that leverages bidirectional data replication capabilities is a more effective and less risky approach to mainframe rehosting, says Sam Selmer-Olsen, MD of Bateleur Software.

"In the past, it was difficult to move a single application onto a different technology platform because the databases used within an organisation's individual business units were so tightly integrated and it couldn't be done piecemeal. The big bang approach was often the only option. While some got it right, for many, when switch-over date arrived after months of preparation, the business was immediately at risk of data loss or downtime of business-critical applications. It was a scary exercise," says Selmer-Olsen.

To complicate matters, the fact that mainframe workloads typically used proprietary technology made it even more difficult to move applications to open platforms, like Unix and Linux. Organisations had to use dedicated solutions for each technology to ensure they maintained the application's capabilities, availability and performance, without impacting the user experience. This made migration expensive, risky and complicated.

Real-time replication

While data replication in migrations is nothing new, bidirectional synchronisation can introduce loop-back conditions: a change made to the source database is replicated to the target database, which then sends the same change back to the source, triggering a never-ending loop of re-extracting and re-replicating that some call the 'ping-pong effect'.

The smarter bidirectional data replication tools solve this problem and also allow for all manner of migration possibilities, says Selmer-Olsen.
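How do the smarter tools break the loop? One long-standing technique is to tag each replicated change with the database where it originated, so that neither side ever forwards a change back towards its source. The minimal sketch below illustrates the idea in Python; the names and structure are invented for illustration and are not taken from any particular replication product.

from dataclasses import dataclass

@dataclass
class Change:
    key: str
    value: str
    origin: str  # the database where the change was first made

class Node:
    """One side of a bidirectional replication pair (illustrative)."""
    def __init__(self, name):
        self.name = name
        self.data = {}
        self.peer = None  # the other side of the pair

    def local_update(self, key, value):
        """A change made by a user or application directly on this node."""
        self.data[key] = value
        self.peer.receive(Change(key, value, origin=self.name))

    def receive(self, change):
        """A change arriving over the replication link."""
        self.data[change.key] = change.value
        # The origin tag is the loop-breaker: never forward a change
        # back towards the database it came from, or the same update
        # would bounce between the two sides forever (the ping-pong
        # effect described above).
        if change.origin != self.peer.name:
            self.peer.receive(change)

mainframe, cloud = Node("mainframe"), Node("cloud")
mainframe.peer, cloud.peer = cloud, mainframe

mainframe.local_update("acct:1001", "balance=500")  # replicated to cloud once
cloud.local_update("acct:1001", "balance=450")      # replicated back once
assert mainframe.data == cloud.data                 # in sync, no loop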

The four fundamental methods for replicating data are:

1. Unidirectional en masse replication entails copying (and perhaps transforming) the entire source database to the target database on a scheduled basis.
2. Unidirectional batch replication involves using the database's archive update logging capability to replicate only the changes made to the source database to the target database on a scheduled basis.
3. Unidirectional real-time replication uses the database's active update logging capability to replicate, in real time, only the changes made to the source database to the target database.
4. Bidirectional real-time replication also uses the database's active update logging capability to replicate, in real time, only the changes made to the source database to the target database, but includes an option for replicating the target database's changes back to the source database.

With real-time replication, a change made to one database immediately propagates to the other, ensuring the two are always in sync. This is unlike the big bang approach, he says, where the target database was out of date almost as soon as the data was moved over, because changes were still being made to the source database.
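In outline, the log-based methods (2 to 4 above) boil down to reading each new entry from the source database's update log and applying it to the target, so the target never degrades into a stale snapshot. Below is a simplified sketch of that pattern, with a plain Python list standing in for the update log; that is an assumption for illustration, not how any specific product exposes its log.

def apply_pending(source_log, position, target_db):
    """Apply every change logged since `position` to the target and
    return the new log position. Run continuously, this keeps the
    target in step with the source instead of leaving it a stale,
    one-off copy while the source keeps changing."""
    while position < len(source_log):
        op, key, value = source_log[position]
        if op == "upsert":
            target_db[key] = value
        else:  # "delete"
            target_db.pop(key, None)
        position += 1
    return position

# The source keeps changing while replication runs:
source_log = [("upsert", "acct:1001", "500")]
target, pos = {}, 0
pos = apply_pending(source_log, pos, target)        # first change lands
source_log.append(("upsert", "acct:1001", "450"))   # new source activity
pos = apply_pending(source_log, pos, target)        # target stays current
assert target == {"acct:1001": "450"}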

Keeping the databases in sync is just one benefit of this approach, Selmer-Olsen adds. "It's common for enterprises to use a mix of technologies in their environments but, when it comes to the mainframe, they're typically locked into a single vendor. Using a vendor-neutral replication solution allows businesses to move their data from multiple sources in phases and at their own pace. For example, they could move one application at a time, or test the migration with a few users at first rather than the entire department or organisation. And because the databases are always in sync, they could even run parallel environments with some users on the new system and others on the old system."
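One way to picture the parallel environments Selmer-Olsen describes is a simple cohort router: users flagged as migrated are served by the new platform, everyone else stays on the mainframe, and replication keeps the two databases consistent underneath. The sketch below is hypothetical; the names are invented for illustration and do not come from Bateleur's tooling.

# Hypothetical phased cutover: route each user by cohort, while
# bidirectional replication (assumed, per the discussion above)
# keeps the two databases consistent.

MIGRATED_USERS = {"alice", "bob"}   # the cohort moved in this phase

def database_for(user):
    """Pick which platform serves this user's requests."""
    return "new_platform_db" if user in MIGRATED_USERS else "mainframe_db"

# Because the databases stay in sync, widening the cohort (or shrinking
# it to roll back a phase) involves no separate data migration step:
MIGRATED_USERS.add("carol")         # next phase: move another user over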

Not only is this a safer option than the big bang approach, but real-time replication also brings benefits for reporting, business intelligence and data modelling.

Should I stay or should I go?

There are many reasons why an enterprise would want to move some or all of its data off the mainframe, and retiring the technology is generally not one of them.

Selmer-Olsen says there's a misperception in the market that the mainframe is an obsolete and expensive technology associated with centralised computing. In fact, it has evolved to successfully support distributed environments and compares favourably with other distributed on-premises and cloud systems on cost, performance and reliability. Just last year, IBM released IBM Z, its new mainframe system capable of running more than 12 billion encrypted transactions a day.

A recent survey found many organisations are actually modernising their mainframe operations and technologies, which still form the core of many IT environments, especially those of banks, insurers and retailers. The survey found 91% of respondents expect their mainframe workloads to continue growing and view the mainframe as a viable long-term platform, while 67% are looking to increase mainframe capacity to meet business priorities.

So, why move data to another platform?

Mainframe technology has evolved to work side by side with other technologies and remains relevant even as enterprises increasingly adopt multi-cloud infrastructures to support their digitisation efforts, says Selmer-Olsen.

By moving some data to another database, enterprises free up capacity on the mainframe to run mission-critical workloads that have specific performance and security requirements, without incurring additional maintenance and licensing costs. Lifting and shifting some applications to the cloud lets enterprises take advantage of the benefits offered by different platforms without the risk and expense of completely rewriting those applications.

"Spreading information across a number of databases is not only cheaper than storing everything on the mainframe, but it also allows enterprises to use solutions from many different vendors for BI queries and reporting capabilities and to conserve mainframe capacity for high-speed data extraction for analysis," says Selmer-Olsen.

Real-time replication also supports business scalability and changing business conditions, allowing enterprises to process growing data volumes faster and more effectively.
