The APM approach

'New look' application performance management monitors data from all the layers, including hardware and software.

By Jaco Greyling, Chief technical officer, DevOps, at CA Southern Africa.
Johannesburg, 28 Feb 2012

Application performance management (APM) is not a new concept. In fact, it stems from the early days of IT operations, as far back as the mid-1990s, when client-server architectures gave way to multi-tier, Web-oriented applications. The introduction of platforms like Java and .NET opened the door to new ways of monitoring application response times, including bytecode instrumentation and synthetic transaction monitoring.

APM has come a long way since the days of client-server. Because application code was developed, deployed and executed as a single construct, it was fairly easy to monitor overall application performance using only a few threshold indicators. It was also well understood that this small set of indicators could cover both application availability and end-user experience monitoring. Where external variables like the Internet played a role in application delivery, monitoring could be supplemented by synthetic transactions.
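For readers unfamiliar with the technique, the snippet below is a minimal, illustrative synthetic transaction check; the URL, threshold and single-request scope are assumptions, and commercial tools script entire user journeys rather than one request. The idea is simply to issue a scripted request against the application on a schedule and judge it on success and response time.

    # A minimal, illustrative synthetic transaction check. The URL and
    # threshold are assumptions; real tools script entire user journeys.
    import time
    import urllib.request

    URL = "https://example.com/login"   # hypothetical transaction endpoint
    THRESHOLD_MS = 2000                 # assumed acceptable response time

    def run_synthetic_check(url, threshold_ms):
        """Issue a scripted request and judge it on success and response time."""
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                ok = 200 <= response.status < 300
        except Exception:
            ok = False
        elapsed_ms = (time.monotonic() - start) * 1000
        verdict = "PASS" if ok and elapsed_ms <= threshold_ms else "FAIL"
        print(f"{verdict}: {url} answered in {elapsed_ms:.0f} ms")

    run_synthetic_check(URL, THRESHOLD_MS)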

With the proliferation of Web-based applications, multi-tier architectures became the norm. This was further accelerated by the broad industry acceptance of service-oriented architecture (SOA), following recommendations by the World Wide Web Consortium (W3C) in 2003. Applications were no longer seen as a single construct, but as a series of heterogeneous layers working together to deliver a single service.

Disconnected

Typically, an enterprise application would comprise load balancers, Web servers, a middleware farm, databases and, in large organisations, the mainframe. Despite this evolution from client-server to a multi-tier, multi-layer architecture, operations teams were left with traditional system monitoring tools to track application availability and performance. The infrastructure was therefore observed independently of the application or service delivered to the end-user. This is commonly referred to as the silo approach: separate IT infrastructure layers are monitored in isolation from the overall IT service delivered to the end-user.

With the 2007 financial crisis affecting global markets, it became apparent to CIOs that the prevailing IT model was not sustainable. Organisations also realised they would have to align IT much closer to core business in order to provide value to their customers and stakeholders alike. Amazon realised this and started to modernise its data centres.

In 2006, it had already released Amazon Web Services (AWS) on a utility computing basis. IT was no longer seen merely as a means to an end, but as an integral part of business innovation and strategy, tightly integrated with service delivery. Application architectures became more modular, distributed and dynamic, rendering the traditional support model obsolete. This gave birth to business service management (BSM), which aligns IT services, and the IT infrastructure supporting them, with business processes. A few innovative companies saw this gap in the service operations space and started delivering BSM solutions.

Adapt or die

It didn't take long for infrastructure and operations managers to realise that the traditional silo approach was no longer sufficient, and that APM had to adapt to the new service-oriented model. To put it in context: who is to blame if a company has a typical multi-tier architecture, with monitoring tools reporting 99.999% availability at each layer, yet the customer continues to experience service interruptions? Now, more than ever, it is important for IT to deliver continued service excellence, end-to-end, knowing exactly what effect an interruption in one layer has on the overall service to the business.
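To put some rough numbers to that question: even when every silo genuinely hits its target, availability compounds along the transaction path. The sketch below is illustrative arithmetic only; the five-layer stack and its 99.999% figures are assumptions, not measurements, and it ignores failures between the layers, which is precisely what silo monitoring cannot see.

    # Illustrative arithmetic only: a hypothetical five-layer transaction path,
    # with each layer reporting "five nines" availability in its own silo.
    MINUTES_PER_YEAR = 365 * 24 * 60

    layer_availability = {
        "load balancer": 0.99999,
        "web server":    0.99999,
        "middleware":    0.99999,
        "database":      0.99999,
        "mainframe":     0.99999,
    }

    # For a serial path, end-to-end availability is at best the product of
    # the layers, before counting any faults between the layers.
    end_to_end = 1.0
    for availability in layer_availability.values():
        end_to_end *= availability

    print(f"End-to-end availability: {end_to_end:.5%}")  # roughly 99.995%
    print(f"Downtime per year: {(1 - end_to_end) * MINUTES_PER_YEAR:.1f} minutes "
          f"versus {(1 - 0.99999) * MINUTES_PER_YEAR:.1f} for any single layer")

In other words, every layer can legitimately report green while the end-to-end service is measurably worse, and the gap widens as soon as inter-layer faults such as timeouts or misconfigured hand-offs enter the picture.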

BSM is by no means a silver bullet, and APM has had to adapt or risk becoming obsolete. While BSM is a step in the right direction, it looks only at overall service delivery, which means users still don't have a complete picture.

That complete picture is essential if a company wants to perform successful root-cause analysis, one of the key attributes of APM. Because of the paradigm shift from system monitoring to service monitoring, infrastructure management has become a key component of APM, aligning application performance with the underlying infrastructure that supports it. This has given birth to the new APM model: a comprehensive tool that monitors data from all the layers, including hardware and software.

APM has now acquired all the attributes of infrastructure management and all the features that made BSM such a great idea.

In the next three parts of this Industry Insight series, I'm going to talk about the five distinct elements of APM, each complementary to all of the others.

In summary, they are:

* End-user experience monitoring - how overall application availability, response time and successful execution affect the end-user experience.
* Application runtime architecture discovery and modelling - visualising the software and hardware layers that make up the transaction execution path.
* User-defined transaction profiling - the ability to define logical units of work (business transactions) in order to track events.
* Application component deep-dive monitoring - the ability to track slow-running transactions to the offending component/code construct.
* Analytics and alerting - the ability to establish what constitutes normal behaviour, flag any deviation from that norm, and alert IT operations so deviations can be addressed (a simple sketch follows this list).
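As a rough illustration of the last element, the sketch below is a minimal example, not a description of any particular APM product; the transaction name, sample data and tolerance are assumptions. It builds a simple statistical baseline for a business transaction's response times and flags anything that deviates from it by more than a chosen number of standard deviations.

    # A minimal baseline-and-deviation sketch; the transaction name, sample
    # data and tolerance are assumptions for illustration only.
    import statistics

    def build_baseline(samples_ms):
        """Summarise historical response times (ms) as mean and standard deviation."""
        return statistics.mean(samples_ms), statistics.stdev(samples_ms)

    def check_transaction(name, response_ms, baseline, tolerance=3.0):
        """Flag a response time that deviates from the baseline by more than
        `tolerance` standard deviations."""
        mean, stdev = baseline
        if stdev > 0 and abs(response_ms - mean) > tolerance * stdev:
            print(f"ALERT: {name} took {response_ms:.0f} ms "
                  f"(baseline {mean:.0f} ms, deviation limit {tolerance * stdev:.0f} ms)")
        else:
            print(f"OK: {name} took {response_ms:.0f} ms")

    # Hypothetical history for a 'checkout' business transaction, in milliseconds.
    history = [220, 240, 210, 235, 225, 230, 215, 245]
    baseline = build_baseline(history)

    check_transaction("checkout", 228, baseline)  # within the norm
    check_transaction("checkout", 900, baseline)  # flagged as a deviation

Production tools typically learn rolling, time-aware baselines rather than using a static sample, but the principle of learning the norm and alerting on deviations is the same.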

Each of these elements will be discussed in detail throughout this series. I will then end with “what's next” in the evolutionary life cycle of APM, and what to expect in the future.
