Who said the mainframe is dead?

Dean Maier, Cloud Project lead at Synthesis.

The mainframe has been declared dead so many times over the years that it’s sometimes hard to believe it still has a role to play in the enterprise world. But it does. IBM recently reported that 75% of the top 20 global banks are running the newest z15 mainframe. Enterprises use mainframes for applications that depend on scalability and reliability, for large-scale transaction processing, and to support thousands of users. More importantly, mainframes have become the foundation on which businesses are building their DevOps and digital initiatives.

But it’s not all rosy. Larger environments such as the mainframe are by their nature more difficult to manage, with interoperability implications and a lot of complexity that needs addressing.

Gerard King, CA Southern Africa’s mainframe pre-sales and support engineer, says the marriage between the mainframe and the distributed environment is often an uneasy union. “Achieving harmony in this hybrid IT world can seem like an unworkable dream. However, it is possible to achieve not just a peaceful coexistence between these two systems, but a true integrated partnership. Doing so requires zeroing in on the areas where the differences between mainframe and distributed systems are most acute.

“‘Keeping the lights on’ has been the top priority for developers since the mainframe first came into existence, and, in contrast, the distributed DevOps culture is known for its agility. Distributed programmers, often younger than their mainframe counterparts, are accustomed to continuous integration and continuous delivery to meet the needs of a rapidly changing marketplace.”

He says building a unified culture across mainframe and distributed environments is essential to optimise outcomes for the business. The foundation for a unified culture is communication, and information needs to be made transparent so mainframe developers and distributed developers can understand the challenges that exist in both halves of the hybrid IT world. Communication can then lead to increased collaboration as the teams begin to share goals and the responsibility for success.

“It’s usually a good idea to make sure that you catalogue your business processes too,” says Dean Maier, Cloud Project lead at Synthesis. “In most organisations running more than one system, it’s safe to assume there will be more than one team contributing to the platform, and by nature, software developers tend to prefer autonomy. In practice, you will find multiple technologies, frameworks and protocols in use. This can cause difficulty when maintaining and integrating with other systems. To achieve some form of alignment, it helps when teams agree on at least the system or systems of reference, the patterns for integration, and in what format and how often they will meet to align. Another important practice is following the KISS principle, or ‘Keep It Simple, Stupid’. By keeping processes isolated and single-purpose, you ensure that no additional logic creeps in and that, given the same input, you will always get the same output.”
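Maier’s point about isolated, single-purpose processes that always produce the same output for the same input is essentially a description of pure functions. A minimal Python sketch of the idea (the function names and the account-number format here are purely illustrative, not drawn from any system discussed in this article):

```python
# A single-purpose, deterministic step: given the same input, it always
# returns the same output, with no hidden state or side effects.
def normalise_account_number(raw: str) -> str:
    """Strip whitespace and zero-pad to a fixed 10-digit width."""
    return raw.strip().zfill(10)

# Composing small steps keeps each one easy to test and reason about,
# rather than burying extra logic inside one multi-purpose routine.
def build_lookup_key(branch: str, account: str) -> str:
    return f"{branch.strip()}-{normalise_account_number(account)}"

print(build_lookup_key(" 632005 ", " 4711 "))  # → 632005-0000004711
```

Because each step is deterministic and side-effect-free, it can be unit-tested in isolation and composed without surprises, which is exactly what makes the KISS approach easier to maintain and integrate.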


The main attraction of the mainframe is that it’s a singular environment, and not complex in its own right as long as you are working solely on the mainframe, comments Gary Allemann, MD at Master Data Management. “The challenge comes when people want to take data from the mainframe and integrate it into the rest of the data architecture, particularly into advanced analytics platforms. This creates complexities; one needs to look to specialist vendors that understand the mainframe and how its data is stored, can handle the mainframe’s performance and reliability requirements, and also understand advanced analytics capabilities. Such vendors can provide simple tools, developed once, as an integration platform. This platform should be able to pull data off the mainframe, convert it into a format that the PC world understands, and do so quickly without affecting the mainframe’s performance or real-time stats. That can be considered best practice.”
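The conversion Allemann describes, in its simplest form, means translating EBCDIC-encoded mainframe bytes into the ASCII/Unicode representation distributed systems expect. Real integration tooling also has to unpack packed decimals and honour copybook record layouts, but the character-set half can be sketched in a few lines of Python (cp037 is one common EBCDIC code page; the sample record is purely illustrative):

```python
import codecs

# The word "Hello" encoded in EBCDIC code page cp037 (EBCDIC US/Canada).
EBCDIC_RECORD = b"\xc8\x85\x93\x93\x96"

def ebcdic_to_text(raw: bytes, codepage: str = "cp037") -> str:
    """Translate an EBCDIC byte string into a Python (Unicode) string."""
    return codecs.decode(raw, codepage)

print(ebcdic_to_text(EBCDIC_RECORD))  # → Hello
```

Python ships the main EBCDIC code pages (cp037, cp500 and others) in its standard codecs, so round-tripping character data is straightforward; the harder parts of a real integration platform are record layouts, numeric formats and doing the transfer without loading the mainframe.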

The role of cloud

Over and above interoperability, what else is needed to optimise and maintain availability for critical workloads? Rishi Nirghin, business unit executive, IBM Systems at IBM South Africa, says a secured hybrid cloud is the future of business. Every business has unique workloads with industry-specific compliance and data residency requirements, so they need choices about where to run those workloads, especially as they pivot and scale. “On our journeys with customers, we’ve seen that a ‘one-cloud-fits-all’ approach doesn’t work. Some 94% of enterprise customers globally are already using multiple clouds, and 80% of our clients and partners want solutions that support hybrid cloud, including containers and orchestration, giving them the best of all worlds, namely an integrated, flexible approach that gives our customers the control they want over their business.

“Many organisations on a digital transformation journey are navigating challenges in the hybrid cloud. How do they continue to innovate while maintaining the highest standards in data privacy and security? And how can their enterprise storage system be the centre point of a secured hybrid cloud strategy that directly addresses the challenges they are facing, as well as anticipates the innovations that will drive growth? Organisations must be able to harness scalable and secure enterprise storage systems alongside the flexibility to run, build, manage and modernise cloud-native workloads on their choice of architecture. The best way to optimise availability is to provide a solution that can run large volumes of transactions and provide digital asset custody, while accelerating the transformation to greater portability and agility through integrated tooling and a feature-rich ecosystem for cloud-native development of secure hybrid cloud workloads. The system has to be built for reliability and recovery and designed for business resiliency – providing instant recovery to reduce the impact of both planned and unplanned downtime.”

King says it’s also essential to deploy dynamic capping to protect critical workloads – moving resources dynamically to the most critical systems when they are needed, for example, taking resources from a development partition and moving them across to production. “When it comes to mission-critical workloads, unplanned downtime – however brief – negatively impacts clients, employees, reputation and, of course, profits. Availability must be looked at in light of requirements, which differ from customer to customer – one customer might accept a planned outage in order to achieve maximum uptime elsewhere. If availability is to be optimised, strategies must be carefully planned that not only help to improve availability, but also minimise system downtime.”


Maier adds: “When considering availability of workloads, there are multiple areas that would need attention. This could range from resource availability to data quality, but the most important would be the business processes they serve and resilience to failures.

“It’s crucial that all these areas have appropriate monitoring in place. Where possible, this should feed automation, which can handle events such as scaling, recovery, and alerting DevOps members should things go wrong and require manual intervention. A combination of monitoring, capacity planning and a tested disaster recovery plan will most certainly play a key role in ensuring availability of any system, small or large. It helps to simulate failures and disrupt your environments regularly to learn and understand the failure points.”
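The loop Maier describes – monitoring that feeds automation, with automated recovery attempted before a human is alerted – can be sketched as a simple control loop. Everything here (the health check, the restart hook, the alert channel) is a hypothetical stand-in for whatever monitoring stack an organisation actually runs:

```python
from typing import Callable

def supervise(check: Callable[[], bool],
              restart: Callable[[], None],
              alert: Callable[[str], None],
              max_restarts: int = 3) -> bool:
    """Run a health check and attempt automated recovery before paging a human.

    Returns True if the service is healthy (possibly after restarts),
    False if manual intervention was requested.
    """
    for _ in range(max_restarts):
        if check():
            return True
        restart()  # automated recovery: e.g. scale out or bounce the service
    if check():
        return True
    alert("automated recovery exhausted - manual intervention required")
    return False

# Simulated failure-then-recovery, in the spirit of deliberately disrupting
# environments to learn where the failure points are.
state = {"healthy": False, "alerts": []}
recovered = supervise(
    check=lambda: state["healthy"],
    restart=lambda: state.update(healthy=True),  # the first restart "fixes" it
    alert=state["alerts"].append,
)
print(recovered)  # → True
```

The design choice worth noting is the ordering: automation absorbs the routine failures, and the alert fires only when the automated path is exhausted, which keeps manual intervention as the exception rather than the rule.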

Allemann believes that rather than focusing only on optimising the critical workload, the focus should also be on shifting as many non-critical workloads as possible off the mainframe onto other CPU power, so the mainframe can be used for core processing. “For example, we have customers offloading sort, copy and join workloads to IBM zIIP engines to slash CPU costs and free up resources for critical workloads. Capacity management solutions can then focus on the critical workloads, ensuring that you plan your future processing needs effectively.”

Depleting skills

Mainframe skills are another area that needs future planning, as the skills gap is widening while a generation of mainframers retires. King says a commitment to creating a culture of mainframe vitality is one that all mainframe organisations and providers must share if they are to see essential gains in the short term and sustainable benefits in the long term. “We have the tools and technologies at our command to encourage and deliver excellence on the mainframe. Let’s continue to make this culture of mainframe vitality a reality. In this environment, younger and older programmers will thrive, seeing no distinction between interactions with different platforms. In fact, enthusiasm for the mainframe will grow as the next generation of programmers is able to explore its potential readily and seamlessly. The mainframe will no longer have any connotations of being a legacy or difficult technology – it will reclaim its rightful reputation as a platform unequalled in its potential to support business growth and customer-centric innovation.”

King believes that several factors are contributing to the narrowing of the skills gap. “One is operational intelligence; another is that the mainframe is being opened up through initiatives such as Zowe, an open-source framework launched by the Open Mainframe Project with foundational technologies contributed by Broadcom, IBM and Rocket. Zowe enables development and operations teams to securely manage, control, script and develop on the mainframe like any other cloud platform. Opening up the mainframe gives users something very valuable – a choice of tools beyond the traditional mainframe set. New and experienced mainframe users alike can employ modern tools and frameworks such as Jenkins, GitHub and more.”


Maier says that research from mainframe migration specialist LzLabs in 2019 highlighted that organisations had, on average, three years before retiring staff would significantly impact their mainframe workforce. “Considering we are already in the second half of 2020, it does stress the point that strategic plans need to be developed in organisations that are at risk. Unfortunately, the skills for maintaining mainframes are becoming hard to find, and fostering them will prove to be even harder. A recommended approach would be to consider a migration plan, which could replace the current mainframe system with an off-the-shelf solution, modernise the system by re-architecting it, or take the more time-consuming approach of re-building from the ground up.

“With any of these approaches, you should at least have vendor support or be able to source the skills required to resolve the immediate concern. It’s worth remembering that mainframes, too, were once advanced technology capable of handling massive transaction volumes and solving all kinds of problems. Ensuring that you have an experienced architect leveraging modern technologies and guarding against anti-patterns can help ensure that you don’t find yourself in this position again in a few years’ time,” he adds.

“What we are seeing is that the hybrid cloud calls for a re-orientation of how we think about team structure,” says Nirghin. “At its heart, digital transformation requires a move away from technology-oriented teams (ones that are focused on a particular type of technology, be it public cloud or on-prem) to a structure that’s flexible, technology-agnostic and designed to deliver specific solutions and results.

“Many large enterprises that have been running mission-critical workloads already possess the skills and resources they need to build the foundation of their hybrid cloud strategy. As business complexity continues to increase at breakneck speed, these enterprises have adapted, while still being able to deliver an end solution. And the systems, apps and databases they’ve created are proving key to enabling rapid innovation and reduced time to market. With a mission-critical enterprise storage system that works when customers want, wherever they want, in any cloud of their choice, open, industry-standard cloud capabilities are essential. These allow organisations to build with flexibility, using open cloud-native developer tools and integrations via Kubernetes, APIs and DevOps into hybrid multi-cloud environments. This enables organisations to use the skills they already have in-house, fostering change internally while embracing shifts toward new ideas in how they approach their people, organisation, culture and processes.”

Beyond your borders

There’s also the option of outsourcing the management of the mainframe, but finding the right vendor isn't always easy, particularly for companies unfamiliar with the mainframe terrain and outsourcing models, adds King. “Issues to compare between vendors include the obvious, such as their pricing structures, experience in the market, core competencies and certifications, outsourcing track record, financial stability, business culture and customer service reputation. You must also scrutinise their datacentres in terms of strength, number and location. These are just some of the considerations to weigh before diving in. Others include a vendor’s ability to respond quickly in emergencies, and whether they subcontract some aspects of their mainframe services – if they do, you would be wise to look at that provider too. Finally, you need to ask whether there is a ‘fit’ between your business culture and the vendor’s. At the end of the day, it’s all founded on trust.”


For Maier, outsourcing management of the mainframe takes away the immediate risk of losing staff with the skills required to maintain the system, and a good vendor will assist in creating a roadmap whereby a migration plan can be developed and a long-term, sustainable maintenance lifecycle adopted. “This will leave you in a position where, should you wish to either return management to your own staff or change vendors, you won’t find yourself in a similar position, trying to manage the risk of finding those hard-to-acquire skills.”

With today’s rapidly changing environment, organisations are looking to accelerate their digital transformation efforts – and for this to be successful, they need a hyper-secure, agile and continuously available platform: the foundational infrastructure for delivering competitive advantage in digital business, says Nirghin. “In anticipating how business will be done tomorrow in order to meet new customer expectations, from digital transformation to evolving computing models, their chosen partner must provide availability, performance and security for critical workloads. These are the must-haves in today’s market to give customers control over their business, in the way they want – an infrastructure that allows multiple approaches to managing business, with features like blockchain to create new business value.”

14 Aug