Mainframe imperative across entire customer journey
It is important to understand that customer experience goes far beyond the contact centre and into the heart of IT operations.
Ultimately, platform and process changes mean nothing if the customer experience doesn’t improve. The mainframe plays a big role across the entire customer journey: from storing customer data to serving information through mobile and Web applications.
Organisations must identify ways to leverage and integrate mainframe data, transaction history and new transactions if they are to engage customers. What appears to be a simple piece of customer service, such as presenting the right offer at the right time while a customer is shopping, cuts to the heart of the matter.
To achieve tasks like this, modern DevOps and Agile practices must come into play. Agility ties directly to customer experience by helping to create personalised experiences. Accessing data, leveraging better analytics and improving service levels all come from being able to fully utilise mainframe processes and data for customer- and employee-facing applications.
Despite the rise of cloud technologies, the fact remains that the mainframe is a constant that has long powered enterprise computing and will continue to do so for years into the future. Many of the world’s largest organisations use mainframe computing for their heavy lifting needs and storage of massive amounts of data.
Therefore, now that I have made it clear why and how the mainframe should be modernised, it’s time to discuss protection of mainframe investment.
Taking into account the strategic importance and growth of mainframe workloads while also recognising the platform’s unique technical attributes and the cost of change, it is important to take steps to protect mainframe investment.
In the context of a multi-cloud world where digital transformation is the order of the day and modern development approaches are a must, the path forward becomes even clearer.
Strategic investment protection plans should include integration of the mainframe into an organisation’s cloud ecosystems as well as moving towards a self-driven data centre via automation and machine learning. As already stated, businesses must also leverage the mainframe to improve customer experiences. All of the foregoing requires modernisation and revitalisation with cutting-edge development tools.
Leading organisations are integrating their mainframes with cloud-based tools in an agile way. This requires solutions with key attributes, including the means to work across all environments using the latest technologies and architectures. These tools must also be frictionless in that they facilitate rapid adoption and consumption for all skill levels, and finally, they must be optimised with advanced analytics, machine learning and automation to amplify resources.
Making the mainframe part of a holistic approach to IT operations and automation is another key step towards protecting the investment. Machine learning and AI provide cross-domain, cross-platform visibility and insights, which help optimise resources and improve service delivery from the mainframe across mobile and cloud platforms.
An API-first strategy can provide intuitive user interfaces, easy access to operational data, and leverage machine learning and operational intelligence to create a self-healing system. This reduces reactive maintenance and assists with enterprise-wide security and compliance.
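The self-healing pattern described above can be sketched, in highly simplified form, as a loop that reads operational metrics through an API and decides on remediation actions when thresholds are crossed. This is only an illustration of the decision logic: the function names, metric names and thresholds below are hypothetical assumptions, not any specific product's API.

```python
# Illustrative sketch of self-healing logic behind an API-first
# operations layer. All names and thresholds are hypothetical;
# a real system would call the platform's REST management APIs.

def fetch_operational_metrics():
    """Stand-in for a GET against an operational-data endpoint."""
    return {"cpu_utilisation": 0.97, "queue_depth": 4200}

def remediation_for(metrics, cpu_threshold=0.90, queue_threshold=1000):
    """Map out-of-range metrics to (hypothetical) remediation actions."""
    actions = []
    if metrics["cpu_utilisation"] > cpu_threshold:
        actions.append("rebalance-workload")
    if metrics["queue_depth"] > queue_threshold:
        actions.append("scale-queue-consumers")
    return actions

if __name__ == "__main__":
    metrics = fetch_operational_metrics()
    for action in remediation_for(metrics):
        # In a real deployment this would POST to an automation API
        # instead of printing; here we only show the decision step.
        print(action)
```

The point of the sketch is the shift it represents: remediation is triggered proactively from operational data rather than after a user-reported failure, which is what reduces reactive maintenance.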
So, where to from here for the mainframe?
The first mainframe was invented in 1951. The big iron has seen the introduction of many sexy technologies in that time, allegedly threatening its very existence. The new kids on the block are said to be leaner, trendy, user-friendly and deliver better customer experiences. Predictions abound about how the bulky mainframe will never be able to keep up.
But the simple truth of the matter is summed up in this quote from John Mertic, director of program management at The Linux Foundation: “If you want the ability to scale up to millions of transactions per second without breaking a sweat, you want a mainframe. If you want to hot swap out every piece of server architecture, that’s what a mainframe does.”
Over six decades, the mainframe has adapted to meet the challenges of each new technology wave and in so doing has maintained its position as the central nervous system of major industry sectors, including finance and health, but also in the public sector arena.
But where does the mainframe fit into the latest disruptive technology revolution – cloud? More and more businesses are shifting their work to cloud-based infrastructures due to the promise of increased collaboration and access to data practically anywhere. How can the mainframe compete with this?
The response is – with ease. Mainframes offer all of the components necessary to run a private cloud environment: memory and masses of it, huge storage capacity and the ability to virtualise workloads.
But what makes the mainframe truly invaluable is its superior computing abilities – in this regard it is in a class of its own and this is why the cloud will not replace it.
Gerard King, CA Southern Africa mainframe pre-sales and support engineer, commenced his career in mainframe operations in the financial sector in 1982. In a career that has spanned almost 40 years, he completed training in mainframe assembler programming and moved from operations to systems programming.
King has spent the past 32 years as a z/OS systems programmer working at major financial and insurance corporations. He joined CA Southern Africa over a decade ago and is responsible for mainframe support and pre-sales.