Should you invest in data centre infrastructure?
Dan Crowe, managing consultant at ShapeBlue South Africa, will be presenting at the upcoming ITWeb Data Centre Confex 2017, at the Focus Rooms, Sunninghill, on 25 July.
He will provide a thought-provoking look into the future of the data centre and why enterprises need to start thinking differently about the way they manage, maintain and upgrade their infrastructure. At the same time, he says, enterprises must keep their eye on what the next generation data centre will look like and the impact this will have on the way they do business.
ITWeb Events: Tell me more about ShapeBlue and how it is involved in data centres.
Dan Crowe: ShapeBlue have been designing, building and supporting IaaS clouds, predominantly for service provider customers, since 2011. We operate from the UK, US, India, Brazil and South Africa. We're principally a consultancy practice and integrator, specialising in the technology stacks up and down from the orchestration layer: billing, metering, language and currency above, as well as hypervisors, networking, compute and storage below.
We have an in-depth understanding of, and expertise in, these technology components and, critically, their integration points. Additionally, we spend much of our time working with clients on strategy and go-to-market plans. This involves identification of customer workloads, locations and market competitive analyses, as well as service catalogue design, customisations, training and feature development via our software engineering team. All of this is backed by a 24/7, service level agreement-driven support function.
In essence, we operate in data centres, but deliver and support services that are consumed by end users around the world. At the orchestration layer we focus on CloudStack, an Apache project that is entirely run by active users, independent of any vendor agenda, and governed by a clear, open model.
ITWeb Events: Why do you believe that there is a business case for data centres?
Dan Crowe: This is a great question, and the answer depends on your perspective and on what your organisation does. Let's start with the enterprise space. We are seeing a massive shift from 'build' to 'consume' models for IT in SMBs and enterprises, with the enterprise market (not necessarily defined by size as such, but by the typically regulated industries such as banking, insurance and pharmaceuticals) seeing the biggest shift from build to consume. This trend includes traditional workloads as well as those currently virtualised.
Between 2015 and 2018, the proportion of large enterprises 'building' infrastructure as the primary environment for traditional workloads falls from 80% to 37%. This comes with corresponding moves to 'consume': dedicated private cloud as a primary environment rises from 32% in 2015 to 63% in 2018, virtual private cloud from 24% to 71%, and public IaaS as a primary environment for at least one workload type from 10% to 51%.*
The result: we are not seeing a wholesale, immediate shift to 'consume' models, but rather a measured approach, one application and workload at a time, as legacy infrastructure is retired and service delivery models evolve. The trajectory, however, is indisputable.
Enterprises will need to retain a level of internal resource, if only, in time, to manage multiple dedicated external, virtual private and public external services. The design parameters for what remains 'on prem' are moving from 'robust' to 'resilient', from 'best of breed' to 'best fit', and from 'locked in' to 'open'. It is this openness that we believe is essential. As the new standards for AI (including machine learning), analytics and IoT are established over the next three years, operating an environment that provides the agility both to change service provider direction and to respond to markets will prove vital. We are seeing much anecdotal acceptance and adoption of 'open source first' strategies within regulated enterprises, as acknowledgement of this shift in standards becomes apparent.
ITWeb Events: Where do next generation technology platforms fit in?
Dan Crowe: Next-generation platforms, by which I take to mean the hyper-scale players such as Amazon, Google and Microsoft, will clearly take on much of the shift in workload direction we looked at earlier, but never on an exclusive basis, and with massive geographical splits. This leaves plenty of whitespace for niche players to add value in local markets and address country-specific challenges, as we are seeing here in SA.
If, however, you are a service provider, the business case for data centres is self-explanatory! We must conclude that if the 'build' to 'consume' trends continue on their current trajectories, we will see data centres managed and designed for true multi-tenancy and distributed worldwide. Providers with specialisations and localisations will become the new service delivery arms for the majority of workloads and the majority of users. The battle for the next dominant standards in AI and IoT will be played out by vendors and providers; the key for 'consumers' is to remain open and adaptive now, so as to emerge as advantageously positioned as possible when the dust settles!
ITWeb Events: Is it possible that these next-generation technology platforms are more agile, faster and more competitive?
Dan Crowe: Off-premises service adoption is driven by a number of factors, and inhibited by others. Speed to market and quality are drivers, while concerns over security and compliance are inhibitors. What we are already seeing is that the hyper-scale providers have proved their value in what were traditionally considered 'non-core' applications and workloads. So far, so good. Next, we are seeing a shift in trust patterns. With every outage, we see dual forces at play. The thinking goes: 'Well, I can't trust (insert hyper-scale vendor name here) because they have outages... but hold on, they do this for a living and spend billions on it; how much would I need to spend to eliminate outages myself?' This inhibitor, reinforced by every public outage, serves to slowly evolve design parameters towards fault tolerance, and applications towards becoming fundamentally contained, automated and replicated. As this design thinking shifts, we start to see the trust built for core applications and ultra-sensitive workloads to make that migration.
*Source: "IT as a service: From build to consume" Elumalai, Starikova & Tandon, McKinsey & Company, September 2016.