Software architecture plays a big role in the costs and impacts of change across all four of the lenses we’ve described. But another part of the story is the way that changes are applied. Microservices architectures, cloud infrastructure, and DevOps practices have enabled deployment approaches that are a huge leap forward. Let’s take a look at two modern deployment patterns as well as an older one that has managed to stick around.
Three Deployment Patterns
There are lots of different ways to apply changes and deploy software components. Before we dive into the changeability of the architecture we’ve built, it’s worth reviewing three deployment patterns that we’ll use when we make changes in our system: blue-green, canary, and multiple versions. We’ll start by looking at blue-green deployments.
In a blue-green deployment, two parallel environments are maintained. One is live and accepts traffic while the other is idle. Changes are applied to the idle environment, and when they’re ready, traffic is routed over to it. The two environments then switch roles, with idle becoming live and live becoming idle, ready for the next change. This is a useful deployment pattern because it allows you to make changes in a production environment safely. Switching the traffic over means that you don’t have to worry about applying the change to a system that’s actively serving requests. The actual colors of the environments are unimportant; the key to this pattern is that the two environments interchange roles between live and idle.
A benefit of this pattern is that it can vastly reduce downtime, all the way down to a zero-downtime model. However, maintaining two environments requires the careful handling of persistent systems like databases. Persistent, changing data needs to be synchronized, replicated, or maintained entirely outside of the blue-green model.
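The role switch at the heart of blue-green can be sketched as a tiny router that always sends traffic to the live environment and swaps roles once a deployment to the idle one is ready. This is an illustrative sketch, not a real load-balancer API; the class and the example URLs are assumptions.

```python
class BlueGreenRouter:
    """Minimal sketch of blue-green traffic switching (hypothetical)."""

    def __init__(self, blue_url: str, green_url: str):
        self.envs = {"blue": blue_url, "green": green_url}
        self.live = "blue"    # currently accepts all traffic
        self.idle = "green"   # receives the next deployment

    def route(self) -> str:
        """Every request goes to the live environment."""
        return self.envs[self.live]

    def switch(self) -> None:
        """After changes are applied to the idle environment,
        swap roles: idle becomes live, live becomes idle."""
        self.live, self.idle = self.idle, self.live


router = BlueGreenRouter("https://blue.example.com", "https://green.example.com")
router.route()   # traffic flows to the blue environment
router.switch()  # green is now live; blue is idle, ready for the next change
```

Note that, as the text points out, nothing in this sketch addresses the hard part: keeping persistent data consistent across the two environments.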
A canary deployment is similar to a blue-green deployment, but instead of maintaining two complete environments, you release two versions of a component in parallel. The “canary” in this pattern is the new version, which acts as a “canary in a coal mine,” alerting you to danger early. For example, to perform a canary deployment of a web application, you’d release a new canary version of the web application alongside the original version, which continues to run.
Just like the blue-green pattern, canary deployments require traffic management and routing logic in order to work. After deploying the new version of an application, some traffic is routed to it. The traffic that hits the canary version could be a percentage of the total load, or it could be selected based on a unique header or special identifier. However it’s done, over time more traffic is routed to the canary version until it is eventually promoted to a full-fledged production state.
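The routing decision described above can be sketched as a function that sends a request to the canary either because it carries an opt-in header or by random selection proportional to a weight. The header name `X-Canary` and the weight values are illustrative assumptions, not anything specified by the pattern itself.

```python
import random


def choose_version(headers: dict, canary_weight: float) -> str:
    """Decide whether a request goes to the 'canary' or 'stable' version.

    A request is routed to the canary if it carries an explicit opt-in
    header, or at random with probability canary_weight (0.0 to 1.0).
    Both the header name and the weighting scheme are assumptions.
    """
    if headers.get("X-Canary") == "always":
        return "canary"
    return "canary" if random.random() < canary_weight else "stable"


# Promotion is gradual: start with a small weight, then raise it
# as confidence in the canary grows.
choose_version({}, 0.01)  # roughly 1% of traffic hits the canary
```

In practice this logic usually lives in a load balancer, API gateway, or service mesh rather than in application code, but the decision it makes is the same.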
Although the canary pattern is similar to blue-green, it has the added advantage of being finer-grained. Instead of maintaining an entire duplicate environment, we can focus on a smaller, bounded change and make that change within a running system. This can cause problems if the canary we are deploying impacts other parts of our system. For example, if our canary deployment alters a shared system resource in a new way, even handling 1% of traffic in the canary could have catastrophic effects.
But in a system that’s designed for independent deployment, the canary pattern can work quite well. When changes are made to components that are well-bounded and own their own resources, the blast radius of damage is limited. So it’s a good pattern to have in your tool belt if you are working with the right type of architecture.
The last pattern to cover is one that considers users and clients as part of the change process: running multiple versions in parallel. The blue-green and canary deployment patterns we’ve covered already use a mechanism of temporarily running parallel instances (sometimes called the expand and contract pattern). But in both of those cases, you’d typically run your new and old instances privately, not sharing details of the new functionality until it’s safe to use. The routing decision is implicit and hidden from users of the system.
The multiple versions pattern makes changes more transparent to the users and clients of the system. In this deployment pattern, we explicitly version a component or interface and allow clients to choose which version of the component they want to use. In this way, we can support the use of multiple versions at the same time.
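Explicit versioning can be sketched as a dispatcher that routes a request to a handler based on the version named in the path, so clients choose which behavior they get. The endpoint shape, handler names, and response fields here are all hypothetical; they just illustrate a breaking change (splitting a `name` field) that both versions can serve at once.

```python
def get_customer_v1(customer_id: str) -> dict:
    """Old response shape: a single combined name field."""
    return {"id": customer_id, "name": "Ada Lovelace"}


def get_customer_v2(customer_id: str) -> dict:
    """New, breaking response shape: separate name fields."""
    return {"id": customer_id, "first_name": "Ada", "last_name": "Lovelace"}


# Both versions stay registered until old clients have migrated.
HANDLERS = {"v1": get_customer_v1, "v2": get_customer_v2}


def handle(path: str) -> dict:
    """Dispatch a path like '/v1/customers/42' to the matching version."""
    version, _resource, customer_id = path.strip("/").split("/")
    return HANDLERS[version](customer_id)


handle("/v1/customers/42")  # old clients keep the shape they expect
handle("/v2/customers/42")  # new clients opt into the breaking change
```

Each entry in `HANDLERS` is a version you now have to maintain, which is exactly the overhead the next paragraphs warn about.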
The main reason to employ this technique is if we’re making a change that will require a dependent system to make a change as well. We use this pattern when we know people we don’t coordinate with will need to do work for the change to be completed. A classic example of this situation is when you want to change an API in a way that will break client code. In this scenario, managing migration for all parties would require significant coordination effort. Instead, we can keep older versions running so that we don’t need to wait for every client to change.
There are some significant challenges to using this approach. Every version of a component we introduce adds maintenance and complexity costs to our system. Versions need to be able to run safely together, and parallel versions need to be continually maintained, supported, documented, and kept secure. That overhead can become an operational headache and can slow down the changeability of the system over time. Eventually, you’ll need to migrate users off old versions and do some contraction of versions.
There are some systems that almost never contract their versions. For example, at the time of this writing, the Salesforce SaaS API is on version 49 and supporting 19 previous versions in parallel!
We now have a decent framework for assessing the impact of change and a set of typical deployment patterns we can use to describe how change might be handled. Now we can dive into an evaluation of the architecture we’ve built from a change perspective across infrastructure, microservices, and data.
These notes are based on my reading of the book “Microservices: Up and Running.”