Taming container complexity
Why container monitoring is such an essential ingredient in the rapid innovation of new services.
Containerised applications have introduced significant complexity into modern application architecture and into cloud-based application management and monitoring.
I will explain how organisations can tame that complexity and why container monitoring is such an essential ingredient in the rapid innovation of new services.
Containerisation has been propelled into the development limelight in recent years, with the promise of upending traditional development processes and delivering what today's digital businesses so urgently demand: the rapid development of scalable, high-quality apps.
Research indicates that 94% of executives are under increased pressure to release apps more quickly.
Organisations are fast embracing container technologies like Docker to improve the speed and agility of software development in support of this goal.
Containers offer a solution to the persistent problem of running software reliably when it moves from one computing environment to another: from a physical machine to a virtual machine, for instance, or when separating personal data from corporate data.
By containerising the application platform and its dependencies, teams smooth over differences in infrastructure, enabling developers to work with identical development environments and stacks.
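As a concrete illustration, a minimal Dockerfile pins the runtime and dependencies so that every build of the image behaves the same way, whatever the host. This is a hypothetical sketch for a small Python service; the base image, file names and entry point are illustrative, not taken from any specific project.

```dockerfile
# Illustrative Dockerfile: fixes the interpreter version, the dependency
# set and the entry point, so all developers build an identical stack.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```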
The modular nature of containers also makes them ideally suited to microservices, where complex apps can be split into smaller units.
Application ecosystems are now expanding beyond the typical data centre, with microservices and containerised applications offered as consumables by various cloud and business service providers. The result is no longer an organised multi-tier architecture but something much closer to a multi-linked, ever-changing neural network of interdependent services.
The example of an online retail application illustrates this point. Rather than tying components such as search, inventory, shopping cart or payments together, each functional element is developed in a separate modular way, then connected as part of the entire application. This way, developers can make changes to any of these loosely coupled components without impacting any other service. The separate elements can then also scale, spin up and shut down as demand varies.
Developers are fast catching on to the power of containers too. In a recent report, IDC forecasts that more than 95% of new microservices will be deployed in containers by 2021.
There's a price to pay for this promise: containerised application environments are creating exponential complexity in cloud-based application management and monitoring.
For instance, containerisation leads to a significant increase in components, dependencies and communication flows, and developers need to understand what those flows involve and where they lead. Moreover, containers have short lifespans: they are classed as ephemeral workloads.
Monitoring these workloads is tricky, primarily because their identities keep changing. This affects the downstream processing of monitoring data for operational reports and other tasks that rely on reasonably stable identities.
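One common way around the identity problem is to roll metrics up from transient container IDs to a stable label, such as the service name. The sketch below is illustrative Python with invented field names, assuming each metric sample carries both the ephemeral container ID and a stable service label:

```python
from collections import defaultdict

# Hypothetical metric samples: each short-lived container reports under its
# own transient ID but also carries a stable "service" label. The data and
# field names are invented for this example.
samples = [
    {"container_id": "a1f3", "service": "cart", "cpu_ms": 120},
    {"container_id": "b7c9", "service": "cart", "cpu_ms": 95},  # replaced a1f3
    {"container_id": "d2e8", "service": "payments", "cpu_ms": 60},
]

def aggregate_by_service(samples):
    """Roll per-container samples up to the stable service identity,
    so downstream reports survive container churn."""
    totals = defaultdict(int)
    for s in samples:
        totals[s["service"]] += s["cpu_ms"]
    return dict(totals)

print(aggregate_by_service(samples))  # {'cart': 215, 'payments': 60}
```

Keying reports on the service label rather than the container ID is what gives downstream processing the "reasonably stable identities" it needs.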
The other challenge stems from the dynamic nature of containers: DevOps teams need more reliable ways to determine what is affecting application performance, and when and how.
So how can your team examine performance across a multitude of short-lived complex containerised apps?
The obvious choice would be to use a traditional application monitoring tool. However, these monitoring tools typically lack the dedicated functionality to cope with the containers' complexity.
For example, if your team relies on multiple different tools and silos of data, it won't have the drill-down insight needed to tackle container problems when they occur. Moreover, traditional monitoring tools lack scale. A large environment may have as many as 50 000 agents reporting data across thousands of containers: manual monitoring simply cannot cope with processing this volume of metrics.
Traditional tools leave you with a multitude of data points and masses of data without the required context and meaning.
The answer lies in a context-aware application monitoring solution.
There are three parts to successfully monitoring your containerised applications:
* Monitor non-transactional statistics of your physical, virtual and container platforms.
* Understand the dependencies that exist between application and infrastructure components relative to a specific point in time.
* Perform transactional monitoring that follows and understands the application end-to-end.
You need a solution that analyses data from multiple sources across application and infrastructure, automatically discovers dependencies, correlates applications with the infrastructure they run on, and delivers analytics-driven insights.
Container monitoring tracks clusters of containers and services, enabling you to have an aggregated view across microservices, apps and containers.
And beyond visualisations and alerts, it's also possible to add custom data sources and extract captured data later.
With container monitoring, you can distil containerised environments into easy-to-understand, sharable views of application performance. Performance metrics can be correlated across hosts, containers, microservices and applications, enabling you to contextualise issues immediately.
In other words, there's no lengthy data capture, manual analysis, problem hand-offs, or multiple tools required.
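As a rough sketch of what such correlation looks like in practice, the illustrative Python below joins a slow request to the container and host it ran on via shared tags; all data and field names are invented for the example:

```python
# Hypothetical monitoring data from three layers, linked by shared tags.
request = {"trace_id": "t-42", "service": "search",
           "container_id": "c9", "latency_ms": 850}
container_metrics = {"c9": {"host": "node-3", "cpu_pct": 97}}
host_metrics = {"node-3": {"load_avg": 12.4}}

def contextualise(request):
    """Walk request -> container -> host so a single view carries the
    application symptom alongside its infrastructure context."""
    c = container_metrics[request["container_id"]]
    h = host_metrics[c["host"]]
    return {**request,
            "host": c["host"],
            "container_cpu_pct": c["cpu_pct"],
            "host_load_avg": h["load_avg"]}

ctx = contextualise(request)
print(ctx["host"], ctx["container_cpu_pct"], ctx["host_load_avg"])
```

The point is the join: once application, container and host metrics share tags, a slow transaction can be traced to a saturated container on an overloaded host in one step, with no manual cross-referencing of tools.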
Your monitoring solution should be able to identify seemingly unrelated events as significant contributors to application experience issues within a much larger, more complex ecosystem, and drive root-cause identification.
When you can harness all the data in context, simplify it and understand its impact across modern application architectures, you will have complete visibility and can deliver a quality customer experience and improved business outcomes.
Principal consultant, CA Southern Africa.
André Esterhuysen is principal consultant at CA Southern Africa. He is a seasoned ICT consultant with over 20 years' experience in the Southern African ICT market. His passion for technology is exemplified by the complex challenges he has tackled in his career and by his quest to devise real-world solutions that make a difference. Esterhuysen focuses on AIOps: the application of artificial intelligence, machine learning, analytics and automation to improve the IT operations of organisations. He is driven to leverage technology in a way that unlocks business value and advances digital transformation journeys.