
Finding value in simplifying multi-cloud challenges

Avoid driving blind by instrumenting PaaS services through APIs.

Johannesburg, 09 Mar 2020

According to RightScale’s 2019 State of the Cloud Report from Flexera, enterprises are prioritising a balance of public and private clouds: 28% prioritise hybrid cloud, and a further 17% prioritise public and private cloud equally.

However, the complexity increases significantly when you move to the cloud, particularly in a multi-cloud or hybrid scenario.

How can one ensure a consistent operational management approach between the on-premises and cloud environment? Grant Morgan, general manager for Cloud at Dimension Data, says one needs to understand limitations and capabilities, contain cloud spend and know that governance is not a once-off activity.

Understand your limitations

The big three hyper-scalers are Microsoft Azure, Amazon Web Services (AWS) and Google Cloud Platform – and in Asia, Alibaba Cloud. Their purpose is to give companies the ability to scale up and down as and when needed.

It sounds as easy as ticking a box to ask for an extra few terabytes of storage or computing capacity; however, operating across multiple clouds is very complex, particularly when one lacks the expertise or tool-set needed to handle it.

One of the things hyper-scalers all have in common – besides their massive investment in data centres – is that they all do things differently. "Just because you are an expert on one platform doesn’t necessarily mean your skill-set can translate seamlessly to the other. Making the move to one of them is a big step; going multi-cloud or hybrid makes things infinitely more complex."

Most people are finding they don't have the skills necessary to operate in the public cloud environment. You cannot take existing on-premises tool-sets and expertise and hope they will work in the cloud. You will need to change your operations processes and procedures entirely to operate successfully in the cloud.

Your operational environment needs to scale up and down just as your cloud resources do as demand for services grows, and you cannot achieve that in a static environment.

What to expect when moving to the cloud

When it comes to the normal disciplines of the operations environment, certain processes and rules need to be followed. Consistency is important for your configuration management database (CMDB), for example, but how do you achieve and maintain consistency in your CMDB when the environment is flexing and inducing change as it does in the cloud? To maintain consistency, you will need to build automation that handles the dynamic scaling up and down inherent in the cloud’s nature.
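As a rough illustration of that automation, the sketch below reconciles a CMDB against what a cloud discovery run actually reports, so records track the environment as it scales. The resource shapes and the idea of treating discovery as the source of truth are assumptions for the example, not any particular CMDB product's behaviour.

```python
# Hypothetical sketch: keep a CMDB consistent with a dynamically scaling
# cloud estate by reconciling on every discovery run. Resource ids and
# attributes are illustrative, not a real CMDB product's schema.

def reconcile_cmdb(cmdb, discovered):
    """Diff the CMDB against resources discovered via the cloud API.

    cmdb:       dict mapping resource id -> attributes currently on record
    discovered: dict mapping resource id -> attributes the cloud reports
    Returns the updated CMDB plus what was added, removed and changed.
    """
    added = {rid: attrs for rid, attrs in discovered.items() if rid not in cmdb}
    removed = [rid for rid in cmdb if rid not in discovered]
    changed = {rid: attrs for rid, attrs in discovered.items()
               if rid in cmdb and cmdb[rid] != attrs}
    updated = dict(discovered)  # discovery is treated as the source of truth
    return updated, added, removed, changed

# Example: a scale-out event added vm-3 and a resize changed vm-1.
cmdb = {"vm-1": {"size": "Standard_B2s"}, "vm-2": {"size": "Standard_B2s"}}
seen = {"vm-1": {"size": "Standard_B4ms"}, "vm-2": {"size": "Standard_B2s"},
        "vm-3": {"size": "Standard_B2s"}}
updated, added, removed, changed = reconcile_cmdb(cmdb, seen)
```

Run on a schedule, a reconciliation like this keeps the CMDB honest without anyone raising manual change records for every scaling event.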

Another area where the cloud differs from on-premises operations is best practice: the processes that govern a typical on-premises environment often clash with cloud-native mode 2 application development, which works in a completely different way.

Cloud developers want to continuously release code through a DevOps process, and in some cases this is totally in conflict with traditional on-premises ITIL processes. It results in a clash of operational cultures, with one side wanting to act fast and the other saying: wait, slow down, we have operational change processes to follow.

Monitoring is another challenging area when it comes to the cloud. Most people are going well beyond infrastructure as a service (IaaS) and up to platform as a service (PaaS). They want to move to the cloud because they want to use the PaaS options that exist, like machine learning, IOT or database as a service.

These services need to be constantly monitored. The problem is that most people's monitoring tools have been set up to manage a virtual machine running a conventional operating system, such as Windows, where you can load a monitoring agent, draw statistics out of that server and manage it.

With PaaS services, there is no operating system you control, so there is nowhere to load a monitoring agent. The only way to instrument these services in a cloud environment is through APIs: your monitoring system needs to call the cloud vendor’s API, and in a multi-cloud environment, it will need to call multiple vendors' APIs.

The thing is, all of the cloud vendors' APIs are different, as are all their PaaS services, so you will need to set up unique monitoring for each cloud platform. Without automation, this becomes extremely time-consuming and difficult, therefore many clients are turning to managed service providers (MSPs) to handle this for them.
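One common way to tame those differences is to hide each vendor's metrics API behind a single adapter interface. The sketch below shows that shape; the adapter classes return canned numbers so the example runs on its own, whereas real implementations would call the vendor APIs (Azure Monitor, Amazon CloudWatch) in their place. All names here are illustrative assumptions.

```python
# Hypothetical sketch: per-vendor monitoring adapters behind one interface,
# so thresholds can be checked uniformly across clouds. The canned CPU
# figures stand in for real metrics-API calls.

from abc import ABC, abstractmethod

class MetricsAdapter(ABC):
    @abstractmethod
    def cpu_percent(self, resource_id: str) -> float:
        """Fetch current CPU utilisation for a PaaS resource."""

class AzureAdapter(MetricsAdapter):
    def cpu_percent(self, resource_id):
        # A real adapter would query the Azure Monitor metrics API here.
        return 72.0

class AwsAdapter(MetricsAdapter):
    def cpu_percent(self, resource_id):
        # A real adapter would call Amazon CloudWatch here.
        return 41.0

def check_thresholds(adapters, resources, limit=80.0):
    """Poll every resource on every cloud; flag any over the CPU limit."""
    alerts = []
    for cloud, resource_id in resources:
        cpu = adapters[cloud].cpu_percent(resource_id)
        if cpu > limit:
            alerts.append((cloud, resource_id, cpu))
    return alerts

adapters = {"azure": AzureAdapter(), "aws": AwsAdapter()}
resources = [("azure", "sqldb-prod"), ("aws", "rds-prod")]
alerts = check_thresholds(adapters, resources, limit=60.0)
```

The point of the design is that adding a third cloud means writing one new adapter, not rebuilding the monitoring logic – which is exactly the automation burden the article says MSPs take on.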

Disaster recovery (DR) management operates completely differently in the cloud than it does on-premises. To contain costs in the cloud environment, you don't want to pay for disaster recovery services you don't use.

The key is to create that environment only on the day you have a disaster, but rebuilding your entire production environment as the DR environment is a big job.

It will need to be done through the DevOps pipeline: it will be near impossible to meet a two- or four-hour recovery time if you have to rebuild the DR environment through the graphical user interface on Azure, for example, and configure the entire environment all over again without automation. So, if you want the most cost-effective DR – which is what the cloud vendors promise – it places a huge automation burden on your operational team to execute it effectively and reliably. Disaster recovery management is considerably more complex because the expectation is much higher.
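In practice that pipeline usually means describing the estate declaratively and provisioning it in dependency order when a disaster is declared. The sketch below shows the ordering logic only; the resource names and the `apply` callback are illustrative assumptions, and a real pipeline would hand each step to Terraform, ARM/Bicep templates or CloudFormation rather than a Python function.

```python
# Hypothetical sketch of DR-on-demand: the production environment is a
# declarative map of resources and their dependencies, and a rebuild
# provisions them in dependency order. Names are illustrative.

def rebuild_plan(resources):
    """Order resources so every dependency is created before its dependants."""
    ordered, seen = [], set()

    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for dep in resources[name]:
            visit(dep)          # create dependencies first
        ordered.append(name)

    for name in resources:
        visit(name)
    return ordered

def run_dr_rebuild(resources, apply):
    """Execute the rebuild; `apply` provisions one resource (e.g. via IaC)."""
    for name in rebuild_plan(resources):
        apply(name)

# Example estate: the app tier depends on the network and the database.
estate = {"network": [], "database": ["network"], "app": ["network", "database"]}
created = []
run_dr_rebuild(estate, created.append)
```

Because the whole environment is rebuilt from the same definitions every time, the DR exercise becomes repeatable and testable instead of a one-off manual scramble.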

Contain your cloud spend

RightScale’s report states that 13% of enterprises spend more than $12 million a year on public cloud, while 50% spend more than $1.2 million annually. If you're not careful, if you don't put limits in place, and if you don't cost-optimise, runaway cloud costs will likely be one of the biggest issues you face.

Azure came to South Africa in March 2019, and AWS will likely launch its cloud in South Africa in the first half of 2020. The more options a consumer has, the greater the chance to keep a provider honest, as you can then benchmark their rates. Managing costs becomes critical, particularly if you want to achieve the cost reductions that everyone says cloud can bring.

You’ll need automation in place if you want to spin down your development environment from 8pm to 7am the next morning, and over weekends, switching things off when the developers are not using them. The more dynamic your releases, and the more often you spin up and spin down, the more closely costs need to be monitored. Add the complexity of two cloud vendors in a multi-cloud environment that do things totally differently, and you can see you've got an operational challenge on your hands.
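The scheduling rule behind that spin-down is simple enough to sketch: the dev environment runs only on weekdays between 07:00 and 20:00. The window boundaries below come from the example in the text; the function name and the idea of polling it from an automation loop are assumptions for illustration.

```python
# Hypothetical sketch of the spin-down schedule described above: dev
# environments are on only on weekdays between start_hour and stop_hour.

from datetime import datetime

def dev_env_should_run(now: datetime, start_hour=7, stop_hour=20) -> bool:
    """True if the development environment should be powered on right now."""
    if now.weekday() >= 5:              # Saturday (5) or Sunday (6): stay off
        return False
    return start_hour <= now.hour < stop_hour

# An automation loop would call this periodically and start or deallocate
# the VMs through the cloud vendor's API whenever the answer changes.
```

The cost saving comes from the cloud's consumption pricing: a deallocated VM stops accruing compute charges, so thirteen idle hours a night and two idle days a week add up quickly across a fleet.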

Managing the complexity of hyper-scalers

Cloud is meant to benefit the operational environment, and it certainly brings value, but it can be a major operational nightmare. Because the pace of change has accelerated so much globally, organisations are battling to keep up with the operational requirements, find the expertise and allocate resources at the scale necessary to take advantage of the benefits promised by the hyper-scalers.

To ensure you’re achieving the efficiencies you’re looking for in such a dynamic environment, your organisation would be well served by partnering with a managed services provider that is certified and qualified to manage the environment for you.

Choose a managed service provider that already has the automation and the toolsets to continuously itemise costs for you. They should be able to continuously re-evaluate whether you're on the right-sized virtual machine for the workload and whether you're buying your cloud consumption in the most efficient way.
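That continuous right-sizing can be reduced to a simple rule of thumb: if average utilisation over the evaluation window stays below a floor, step down a size; if it exceeds a ceiling, step up. The size ladder and thresholds below are illustrative assumptions, not vendor recommendations.

```python
# Hypothetical right-sizing check: recommend a VM size from observed
# average CPU utilisation. The ladder and thresholds are illustrative.

SIZE_LADDER = ["Standard_B1s", "Standard_B2s", "Standard_B4ms", "Standard_B8ms"]

def rightsize(current_size, avg_cpu_percent,
              downsize_below=20.0, upsize_above=80.0):
    """Return the recommended VM size for the observed utilisation."""
    i = SIZE_LADDER.index(current_size)
    if avg_cpu_percent < downsize_below and i > 0:
        return SIZE_LADDER[i - 1]           # under-used: step down a size
    if avg_cpu_percent > upsize_above and i < len(SIZE_LADDER) - 1:
        return SIZE_LADDER[i + 1]           # saturated: step up a size
    return current_size                     # utilisation is in the healthy band
```

An MSP's tooling would feed this kind of rule with weeks of utilisation history per workload, and fold in pricing options such as reserved capacity when making the recommendation.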

Look for an MSP that can give you the information needed to make informed commitment decisions, so you know whether flexible consumption or a monthly fixed commitment is the better option for you. Your MSP should also be able to implement traditional change control disciplines, switch to continuous deployment of new code releases when necessary, and manage compliance for you too.

You know that a managed services provider can handle these next-generation requirements when it holds specific MSP certifications from the hyper-scale cloud providers, such as Azure Expert MSP or an official AWS MSP Partner Certification.

Ideally, you want an MSP that is hooked into the cloud vendors' APIs; that knows how to set thresholds and monitor whether a PaaS service is performing well; that understands the monitoring complexities and has visibility of the necessary PaaS services.

They should be able to make sure the services are neither bottlenecking the end-user experience nor causing an outage in the environment.

If you are not instrumenting these PaaS services through these APIs – and most people don't have the ability or the existing on-premises tools to do so – you're going to have a big portion of your application estate where you are driving blind.
