Principles of flow: How to move fast and not break things

By automating the software build and deployment process, a team can truly begin to move more quickly, reducing the number of catastrophic errors in their applications.

Dr Eliyahu Goldratt developed the theory of constraints in his 1984 book “The Goal”. The goal, in terms of this theory, is to identify and eliminate the process(es) in a system that hold the overall system back.

Such a process is often called a bottleneck or constraint. When it is identified and corrected, the overall process becomes more streamlined and effective, and value starts reaching customers more quickly.

How does this relate to DevOps and deploying software? Surely a discussion of constraints hardly seems relevant. Can software development be viewed as a factory floor in which different processes take place in a specific order to deliver a product to our end customer?

While there is no physical product to show, the modern development process could resemble a factory’s assembly line.

If so, what are the bottlenecks and constraints in the software development lifecycle? Anyone who has been involved in a software project will most likely answer that moving code from a developer’s computer to a usable system delivering value to end-users is the longest part of a project. This process is known by many names: deployment, going live, promoting to production and others. However, to paraphrase Shakespeare: that which we call installing software, by any other name would be just as painful and dangerous.

To understand how this process became the bottleneck in most projects, it would be useful to understand the traditional approach to building software.

Historically, a developer would simply write code. Once completed, they would hand that code over to a tester to validate functionality. Once the tester had concluded their suite of tests, the results would be handed back to the developer(s) to correct.

This back and forth would continue until all bugs had been fixed and all improvements made. Next, the code would be given to the operations team to install and configure on the correct set of servers. Again, more issues would be discovered that needed fixing, and the cycle would repeat until the system was finally stable and could be released to users.

Most would reasonably feel that the process above seems time-consuming, laborious and difficult to follow. That is exactly the point – and what is wrong with this approach to developing software. If the theory of constraints is applied – as it is in The DevOps Handbook – the following causes of this bottleneck can be identified:

  • Multiple handovers: Code is passed between various personnel (e.g. testers, operations).
  • Large amounts of re-work and correction: Whenever a problem is found, the code needs to be handed back to the developer for them to fix it. This happens often and repeatedly.
  • Large amounts of work in progress: Since a new feature cannot truly be said to be ‘done’ until all the handoffs and rework are complete, development teams are constantly returning to work done weeks ago. Consequently, the number of open tasks keeps growing, and developers are forced to jump between them.
  • Large batch sizes: The result of all these painful experiences is that software teams avoid deployments until they are absolutely necessary. Consequently, more and more features are included in each deployment. The upshot is that deployments affect larger sections of a system and have a larger chance of unexpected problems. As time goes on, deployments become less and less frequent because of these problems, which only makes each batch bigger – a vicious cycle that amplifies the pain of every deployment.
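The batch-size risk can be made concrete with some back-of-the-envelope arithmetic. The 2% per-change failure rate below is purely illustrative, but the shape of the result holds for any rate:

```python
# Illustrative arithmetic only: assume each change in a deployment
# independently has a 2% chance of breaking something (a made-up rate).
P_BREAK = 0.02

def deploy_failure_chance(batch_size: int) -> float:
    """Probability that at least one change in the batch causes a failure."""
    return 1 - (1 - P_BREAK) ** batch_size

# A batch of 2 changes fails roughly 4% of the time, while a
# big-bang batch of 50 changes fails roughly 64% of the time.
small_batch = deploy_failure_chance(2)
big_bang = deploy_failure_chance(50)
```

Small, frequent batches not only keep the odds of a failed deployment low; when something does break, there are only a handful of changes to suspect.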

This all sounds laborious, but can it be changed? The answer is the “First way of DevOps – the principles of flow”.

It may seem optimistic, but by automating the software build and deployment process, a team can truly begin to move more quickly, reducing the number of catastrophic errors in their applications and increasing the availability of their systems. This can be achieved in the following ways:

  • Automate testing: When developers take personal responsibility for ensuring their code is correct, they write their own tests and include them in their automated build pipelines, no longer relying solely on the work of dedicated testers. When this happens, bugs and other errors are discovered and fixed sooner.
  • Practise automated deployments: With the advent of tools such as Jenkins, CircleCI, Travis CI and others, it has become practical to rebuild and redeploy applications far more often. No longer are developers beholden to operations teams to deploy new code to testing environments. Deploying a fresh version of the application gives quick, immediate feedback if anything is wrong. While re-work is still required, it happens at a much quicker pace and with smaller amounts of code.
  • Enable and architect for low-risk releases: As developers embrace more and more automation, the team at large can deploy code to production more often. Additionally, since the code has been extensively tested at every point, there is much less risk involved in releasing the code to production.
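The practices above share one shape: stages run in a fixed order, and the run stops at the first failure so feedback arrives as early as possible. A toy sketch of that idea (the stage names and functions are hypothetical, not any particular CI tool’s API):

```python
# A toy build pipeline: each stage is a function returning True on success.
# Stopping at the first failure is what gives developers fast feedback.

def run_pipeline(stages):
    """Run (name, step) pairs in order; report the first failing stage."""
    for name, step in stages:
        if not step():
            return f"failed at: {name}"
    return "deployed"

# Hypothetical stages standing in for a real build/test/deploy setup.
pipeline = [
    ("build", lambda: True),               # compile/package the application
    ("unit tests", lambda: True),          # the developers' own automated tests
    ("deploy to staging", lambda: True),
    ("integration tests", lambda: False),  # a failure here stops the release
    ("deploy to production", lambda: True),
]
```

Because the broken stage is named immediately, the team knows exactly where to look – production is never touched by a build that failed earlier in the chain.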

When the above is achieved, features, fixes and new applications flow to customers and users with much less resistance and much less pain. This can be attributed to two key reasons. Firstly, rigorous, automated testing makes it far less likely that the code is broken. Secondly – and more significantly – the deployment process has been run so many times that the entire team understands how it works, which allows them to identify and resolve any (now unlikely) problems quickly.

At the time of writing, South Africa is once again struggling through a period of load-shedding and blackouts caused by a lack of maintenance of ageing legacy systems. It is infuriating when fixes, repairs and updates take longer than expected.

Yet we forget that we create the exact same experience for the users of our systems. We ignore our bottlenecks, create large batches of work, and our systems become brittle and difficult to fix. Our systems are often as sensitive and fragile as the South African power grid.

If we – the entire software development community – embrace the first way of DevOps, changes and updates to our system will no longer be fraught with danger. With the power of continuous builds, testing and deployments, teams can be confident that their systems can easily be updated and fixed.

Jonty Sidney

Senior cloud and DevOps engineer at Synthesis
Jonty Sidney is a senior cloud and DevOps engineer at Synthesis with five Amazon Web Services certifications, including certified DevOps Professional. Over the last four years, he has built complex cloud environments for highly regulated financial and retail customers. He believes DevOps practices and principles have huge potential for delivering value to customers and creating greater efficiencies, as well as high morale in teams that are constantly innovating and pushing the technical envelope.
