
Igniting continuous delivery

The move to agile methodology, DevOps adoption and API proliferation is forcing a disruption in the testing market.

By Jaco Greyling, Chief technical officer, DevOps, at CA Southern Africa.
Johannesburg, 15 Nov 2016

Let's talk about continuous delivery, because this is where the rubber really hits the road in DevOps.

Why lean, agile and DevOps? Lean ensures the elimination of waste and disciplined problem-solving; agile ensures working software and services that meet customer expectations; and DevOps ensures the benefits of agile are realised quickly, safely and sustainably.

Many companies are struggling to deliver more innovative, higher quality applications, faster and more frequently. In most cases, their application delivery systems and processes are designed only to push out one or two releases a year. So the traditional 'software factory' for transforming an idea into a customer experience becomes a chaotic and complex process with countless obstacles.

Different development teams typically work on different, interdependent parts of the app, so teams often sit idle waiting for other components to be completed.

Free access

Developers need unconstrained access to the systems their applications will be using - be it a mainframe, APIs or some third-party vendor's system - to make sure it all works together. But, most of the time, they don't have unlimited access to everything they need. That means they either wait for access, pay fees for full or partial environments, or make do by building brittle stubs and mocks that simulate the behaviour of the mainframe. A stub lets a developer code and test as if receiving 'responses' from the mainframe without ever connecting to it, avoiding the delays a real connection introduces.

These stubs can't scale, however, and often break when something changes. Nor can they easily be shared with other teams.
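To make this concrete, a hand-rolled stub might look something like the following - a minimal Python sketch in which the account data and the get_account_balance call are invented purely for illustration:

```python
# Minimal sketch of a hand-rolled stub (all names hypothetical).
# The real client would call the mainframe; the stub returns canned
# responses so a developer can code and test while disconnected.

CANNED_RESPONSES = {
    "ACC-1001": {"balance": 2500.00, "currency": "ZAR"},
    "ACC-1002": {"balance": 130.75, "currency": "ZAR"},
}

def get_account_balance(account_id: str) -> dict:
    """Stand-in for a mainframe transaction; no network call is made."""
    try:
        return CANNED_RESPONSES[account_id]
    except KeyError:
        # The brittleness in action: any account the stub doesn't know
        # about fails, and every mainframe-side change means updating
        # this dictionary by hand.
        raise ValueError(f"Stub has no canned response for {account_id}")

print(get_account_balance("ACC-1001"))  # {'balance': 2500.0, 'currency': 'ZAR'}
```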

As development teams crank out multiple continuous integration (CI) builds every day, these builds also need to be tested and validated so they can move through the pipeline. Manually creating and configuring development and test environments for each build is expensive and time-consuming.

The move to agile methodology, DevOps adoption and API proliferation is forcing a disruption in the testing market. Legacy testing methods can't keep pace with continuous application delivery models, often inhibiting companies from accelerating development speed and delivering high-quality releases.

Testing is frequently postponed to keep the project moving forward, so it doesn't happen until the end of the cycle, when errors and defects require significantly more rework than if they had been found early. Manually creating test cases, generating test data and executing tests creates a bottleneck, so test plans may be cut short, compromising quality.
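Automating even a handful of those checks moves them to the start of the cycle, because they run on every build instead of at the end. The sketch below shows the idea with pytest; the discount function merely stands in for real application code and is invented for illustration:

```python
# Minimal pytest sketch: checks like these run on every CI build,
# so defects surface minutes after a commit rather than at the end
# of the cycle. The function under test is hypothetical.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Toy function standing in for real application code."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_discount_applied():
    assert apply_discount(200.0, 25) == 150.0

def test_invalid_percent_rejected():
    with pytest.raises(ValueError):
        apply_discount(200.0, 150)
```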


Accessing the right data to conduct the tests is another big bottleneck. Requesting this data from the database administrators can take days or weeks. Even then, unless the data is created manually in a spreadsheet, teams typically take production data and try to mask it - and referential integrity often gets lost in the process. Sub-setting production data to get exactly the type that is wanted also takes time. And for new applications, production simply doesn't contain fit-for-purpose data; developers are left with whatever exists, which often doesn't work for what they are trying to build.
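One common remedy is deterministic masking: hashing the same real value always produces the same masked value, so foreign keys still line up across tables. A rough Python sketch, with the table layout and salt invented for illustration:

```python
# Rough sketch of deterministic masking: hashing the same customer ID
# always yields the same pseudonym, so the link between the customers
# and orders tables (invented here for illustration) is preserved.
import hashlib

def mask_id(real_id: str, salt: str = "per-project-secret") -> str:
    digest = hashlib.sha256((salt + real_id).encode()).hexdigest()
    return f"CUST-{digest[:8]}"

customers = [{"id": "93001", "name": "T. Naidoo"}]
orders = [{"order_no": "A-77", "customer_id": "93001"}]

masked_customers = [{"id": mask_id(c["id"]), "name": "MASKED"}
                    for c in customers]
masked_orders = [{"order_no": o["order_no"],
                  "customer_id": mask_id(o["customer_id"])}
                 for o in orders]

# The masked order still points at the masked customer record.
assert masked_orders[0]["customer_id"] == masked_customers[0]["id"]
```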

Imaginary data

Test data management offers an automated solution to one of the most time-consuming and resource-intensive problems in continuous delivery - creating, maintaining and provisioning the test data needed to rigorously test evolving applications. Data privacy laws prohibit the use of live, actual data - and an ingenious solution is to create test data on the fly, which looks and behaves like actual data but is completely fictional.
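Generating such data can be surprisingly lightweight. The sketch below uses only the Python standard library; the record shape and value pools are invented for illustration, and a seeded generator keeps test runs repeatable:

```python
# Minimal sketch of on-the-fly synthetic test data: records that look
# and behave like customer data but are entirely fictional, so no
# privacy law is touched. Field names and value pools are invented.
import random

FIRST = ["Sipho", "Anna", "Pieter", "Lerato", "James"]
LAST = ["Dlamini", "Botha", "Khumalo", "Smith", "van Wyk"]

def fake_customer(seed: int) -> dict:
    rng = random.Random(seed)  # seeded, so every test run is repeatable
    first, last = rng.choice(FIRST), rng.choice(LAST)
    return {
        "name": f"{first} {last}",
        "email": f"{first.lower()}.{last.lower().replace(' ', '')}@example.com",
        "balance": round(rng.uniform(0, 10_000), 2),
    }

test_customers = [fake_customer(i) for i in range(1_000)]
print(test_customers[0])
```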

Multiple teams contribute to the delivery process, with dev, test, release management, and operations teams each having their own set of tools that are constantly changing, with new ones emerging on a regular basis.

Herein lies the crux - version control, also called source control or revision control. It prevents large projects from spinning out of control by letting individual programmers, writers or project managers tackle a project from different angles without getting in each other's way, and without doing damage that can't be undone. In short: collaboration.

Quality assurance (QA) test teams use a variety of testing tools to manage test data or to automate aspects of testing.

Release management and ops teams use configuration tools like Chef and Puppet, and many use cloud management and/or provisioning tools. These tools don't automatically work together, so orchestrating all of the interdependent functions and writing manual scripts to manage how the tools interact may actually take more time than the tools were meant to save.

Time-to-market isn't the only thing that is impacted by all this inefficiency - when errors or defects aren't discovered until after an app is deployed to production, customer experience suffers, potentially damaging the brand.

An open, integrated framework allows DevOps teams to leverage current investments and their tools of choice while moving forward on the continuous delivery journey.

Continuous delivery adoption involves maturing culture, application content and frequency (the make-up of the application as well as release cadence), release management processes and tooling.

This maturity allows the company to focus on innovation and customer experience, so it can become the next great digital disruptor.
