Deployment Styles: Blue/Green, Canary, and A/B

These days, we seem to have an overwhelming number of deployment options. DevOps, continuous delivery, and similar practices have encouraged introspection on how to release valuable software to customers. You've probably heard of three popular options: blue/green deployments, canary releases, and A/B testing. And maybe you've wondered, aren't these all the same? Or, when should I use blue/green instead of A/B? They both have slashes in them, so they're probably equivalent, right?

This post will help you differentiate between these three deployment options and understand why they're valuable. And, as you'll see, each one is indeed valuable depending on the situation.


Deployment Vs. Release

Before we delve into the different styles, I want to make a couple of terms clear. Often the ideas of "release" and "deployment" are used interchangeably. But don't be deceived. These are related but different, and some strategies hinge on understanding the difference between them.

Deployment

Deployment is likely the term you understand best. It refers to putting a new version of your executable code into a specific environment and running it. Strong practices like continuous delivery and continuous deployment focus on how to package this code and get it to the appropriate environment, and they often encourage automation and eliminating disruption risks for your customers. Often, as soon as we deploy code, customers can see it. However, deploying code need not be the factor that determines whether or not your customers see it. The goal of deployment is to ensure your code can run properly in the appropriate environment.

Release

Releasing is all about making the output of new code visible to your customers. The simplest way to do this is to deploy that code and let it immediately become visible, which is why this concept is often confused with deployment. However, there are many ways to hide code from customers even while it's deployed. The goal of releasing is to ensure a feature meets customer needs, and to be able to turn it off when it's defective.
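To make the distinction concrete, here's a minimal sketch of one way to keep deployed code hidden until you decide to release it, using a feature flag. The flag name, the environment variable, and the checkout functions are all hypothetical; real systems usually read flags from a configuration service or a flag-management tool rather than an environment variable.

```python
import os

def new_checkout_flow(cart: list) -> str:
    return f"new checkout: {len(cart)} items"

def legacy_checkout_flow(cart: list) -> str:
    return f"legacy checkout: {len(cart)} items"

def new_checkout_enabled() -> bool:
    # The flag lives in an environment variable purely for illustration.
    return os.getenv("NEW_CHECKOUT_ENABLED", "false").lower() == "true"

def checkout(cart: list) -> str:
    # The new code is deployed either way; flipping the flag is the "release."
    if new_checkout_enabled():
        return new_checkout_flow(cart)
    return legacy_checkout_flow(cart)

print(checkout(["book", "coffee"]))  # stays on the legacy flow until the flag is on
```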

Blue/Green

Blue/green deploys are focused purely on deployment as a way of eliminating downtime and disruption for customers.

How Does It Work?

Let's say you're writing a blog post (go figure) and your audience is expecting to see something in an hour. But you don't like the post and want to rewrite it from scratch. Well, you're not sure you can finish in time, so you publish the old one. When you do have time, you can rewrite the post completely and publish it after the due date, replacing the older version. Depending on when customers visit the blog, they will either see the old post (and you'll get credit for meeting your deadline), or they'll see the new one and enjoy some quality content. Either way, no one will ever see an empty space where they were expecting a post.

Instead of blog posts, picture running software. When you need to deploy a new software version, instead of replacing the existing version you run the new one side by side with the old one. Once everything looks good, you can switch traffic to the new service. That way, customers are always able to get to your system.
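If it helps to see the switch itself, here's a rough sketch, assuming both versions are already running side by side and a simple health check gates the cutover. The ports, URLs, and /health endpoint are made up; in practice the "switch" is usually a load balancer or router update rather than a Python variable.

```python
import urllib.request

# Hypothetical setup: "blue" is the current version, "green" is the new one,
# both deployed and running side by side on different ports.
BACKENDS = {"blue": "http://localhost:8000", "green": "http://localhost:8001"}
active = "blue"

def healthy(url: str) -> bool:
    """Very rough health check: does the service answer at all?"""
    try:
        with urllib.request.urlopen(f"{url}/health", timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

def switch_to(color: str) -> None:
    """Flip traffic to the other color only if it looks healthy."""
    global active
    if healthy(BACKENDS[color]):
        active = color  # in real life: update the load balancer or router
        print(f"Traffic now goes to {color}")
    else:
        print(f"{color} failed its health check; staying on {active}")

switch_to("green")
```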

What's It Good For?

Blue/green deployment drastically reduces risk in many situations. If you run a site where even a few minutes of downtime costs you significantly, this option can save your bacon.

Canary

Canary deployments, also known as canary releasing, are usually release-focused. But sometimes they can also be deployment-focused. They're deployment-focused when you use your deployment scripts to update the code on only specific containers or servers. They're release-focused when you can change which users see the new functionality without redeploying.

How Does It Work?

Canary releasing can work many ways, but essentially you release a new version or new functionality to only a small set of customers to start. You then monitor your system and the customers' responses to see if anything...weird...happens. The odd name comes from coal miners, who lowered canaries into mine shafts to detect noxious gases and see whether it was safe before descending themselves.
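One common way to pick that small set of customers is deterministic, hash-based bucketing, sketched below. The 5% rollout figure and the user IDs are made up, and real systems typically do this inside a router, proxy, or feature-flag service rather than in application code.

```python
import hashlib

CANARY_PERCENT = 5  # start by exposing the new version to roughly 5% of users

def bucket(user_id: str) -> int:
    """Deterministically map a user to a bucket from 0 to 99."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % 100

def serve_canary(user_id: str) -> bool:
    """The same user always gets the same answer, so their experience stays stable."""
    return bucket(user_id) < CANARY_PERCENT

for user in ["alice", "bob", "carol", "dave"]:
    version = "canary" if serve_canary(user) else "stable"
    print(f"{user} -> {version}")
```

Widening the rollout is then just a matter of raising the percentage while you watch your monitoring.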

Think of canary releasing like a bottle of pop. You accidentally drop it on the ground. You're not sure if it will start spraying everywhere when you open the bottle, so instead of just turning the cap quickly and risking that the pop will explode, you turn the cap slowly to eke out a little air with every rotation. Eventually you can safely open the bottle and drink a refreshing beverage. Canary releasing is eking that software out to the world, user by user.

What's It Good For?

Canary is fantastic for lowering the risk of changes to production. The "no defects in production" mentality is a little overrated, and canary lets you mitigate the cost of defects without spending an enormous amount on preventive testing. (You should absolutely spend some effort on preventive testing, though.)

A/B Testing

A/B testing is a release strategy. It's focused on experimenting with features.

How Does It Work?

With A/B testing, implementation is similar to the process of canary releasing. You have a baseline feature, the "A" feature, and then you deploy or release a "B" feature. You then use monitoring or instrumentation to observe the results of the feature release. Hopefully, this will reveal whether or not you achieved what you wanted.
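As a rough illustration of that mechanic, here's a minimal sketch that deterministically assigns users to an A or B variant and tallies a conversion metric per variant. The experiment name, users, and conversion outcomes are invented; a real setup would feed these events into your analytics or experimentation platform.

```python
import hashlib
from collections import defaultdict

def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministically split users 50/50 between the A and B variants."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "B" if int(digest, 16) % 2 else "A"

# Tally a hypothetical conversion metric per variant so the two can be compared.
conversions = defaultdict(int)
exposures = defaultdict(int)

def record(user_id: str, converted: bool, experiment: str = "signup-button") -> None:
    variant = assign_variant(user_id, experiment)
    exposures[variant] += 1
    conversions[variant] += int(converted)

for user, converted in [("alice", True), ("bob", False), ("carol", True), ("dave", False)]:
    record(user, converted)

for variant in ("A", "B"):
    rate = conversions[variant] / exposures[variant] if exposures[variant] else 0.0
    print(f"{variant}: {conversions[variant]}/{exposures[variant]} conversions ({rate:.0%})")
```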

You're not limited to only two versions in a test. Netflix, for example, displays different cover art for the same show depending on which version it predicts a given user will respond to and want to see. But be careful: it's healthy to run only one experiment at a time.

What's It Good For?

A/B testing from the 1,000-foot view may look a lot like canary releasing: You have a set of users seeing the new stuff, and a set with the old stuff. However, A/B has a much different intent. While the focus of canary releasing is risk mitigation, A/B testing's focus is on how useful a feature is for customers. It's the old argument of "build the thing right" vs. "build the right thing."


Working Together

Though these three options all tackle different goals, by no means are they mutually exclusive. You can have a system that's backed by blue/green deploys that set up features you can canary-release by region or customer last name. And you can set up certain key features as A/B experiments with your highest-paying customers. These all can work in harmony.

Get Past the Buzz, Choose a Style

We love throwing out buzzwords in this industry, and it can get confusing fast. This is especially true when these jargon-filled concepts operate similarly on the surface. I hope this article helped clarify the differences between these three deployment strategies. Pick one or more that work for you.


Author

This post was written by Mark Henke. Mark has spent over 10 years architecting systems that talk to other systems, doing DevOps before it was cool, and matching software to its business function.

Shakeout Testing With Synthetics: A Test Environment Management Best Practice

Testing software is pretty easy, right?  You build it to do a thing.  Then you run a test to see if it does that thing.  From there, it's just an uneventful push-button deploy and the only unanswered question is where you're going to spend your performance bonus for finishing ahead of schedule.

Wait.  Is that not how it normally goes in the enterprise?  You mean testing software is actually sort of complicated?

Enterprise-Grade Software Necessarily Fragments the Testing Strategy

I've just described software testing reduced to its most basic and broad mandate.  This is how anyone first encounters software testing, perhaps in a university CS program or a coding bootcamp.  You write "Hello World" and then execute the world's most basic QA script without even realizing that's what you're doing.

  1. Run HelloWorld.
  2. If it prints "Hello World," then pass.
  3. Else, fail.
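Written out as actual code, that world's-most-basic QA script might look something like this minimal sketch (the hello_world.py file name and the python command are just placeholders):

```python
import subprocess

# Run the (hypothetical) program and capture what it prints.
result = subprocess.run(
    ["python", "hello_world.py"], capture_output=True, text=True
)

# Pass if it printed the expected greeting, fail otherwise.
if result.stdout.strip() == "Hello World":
    print("pass")
else:
    print("fail")
```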

That's the simplest it ever gets, however.  Even a week later in your education, this will be harder.   By the time you're a seasoned veteran of the enterprise, your actual testing strategy looks nothing like this.

How could it?

Your application is the work of dozens of people on millions of lines of code for tens of thousands of users.  So if someone asked you "does it do what you programmed it to do," you'd start asking questions.  For which users?  In which timezone and on what hardware?  In what language and under which configuration?  You get the idea.

To address this complexity, the testing strategy becomes specialized.

  1. Developers write automated unit tests and integration tests.
  2. The QA department executes regression tests according to a script, and performs exploratory testing.
  3. The group has automated load tests, stress tests, and other performance evaluations.
  4. You've collaborated with the business on a series of sign-off or acceptance tests.
  5. The security folks do threat modeling and pen testing.

In the end, you have an awful lot of people doing an awful lot of stuff ahead of a release to see not only whether the software does what you want it to, but also how it responds to adversity.

Sometimes the Most Obvious Part Gets Lost in the Shuffle

But somehow, in spite of all of this sophistication, application development organizations and programs can develop a blind spot.  Let me come back to that in a moment, though.  First, I want to talk about ships.

A massive ship is a notoriously hard thing to test.  Unlike your family car, it seems unlikely that someone is going to put it up on a lift and run it through some simulated paces.  And, while you could test all of its various components outside of the ship or while the ship is docked, that seems insufficient as well.

So you do something profound and, in retrospect, obvious.

You take it out on what's known as a shakedown cruise.  This involves taking an actual ship out into the actual sea with an actual crew, and seeing how it actually performs in an abbreviated simulation of its normal duties.  You test whether the ship is seaworthy by trying it out and seeing.  Does it do what it was built to do?

In the world of software, we have a similar style of test.  All of the other testing that I mentioned above is specialized, specific, and predictive of performance.  But a shakeout test is observational and designed to answer the question, "How will this software behave when asked to do what it's supposed to do?"

And it's amazing how often organizations overlook this technique.

Shakeout Testing Is Important

Shakeout testing serves some critical functions for any environment to which you deploy it.  First and foremost, it offers a sanity test.  You've just pushed a new version of your site.  Is the normal stuff working, or have you deployed something that breaks critical path functionality?  It answers the question, "do you need to do an emergency roll-back?"

But beyond that, it also helps you prioritize behavioral components of your system.  If your shakeout testing all passes, but users report intermittent problems or lower priority cosmetic defects, you can make more informed decisions about prioritizing remediation (or not).  The shakeout test, done right, tells you what's important and what isn't.

And, finally, it provides a baseline against which you can continuously evaluate performance.  Do things seem to be slowing down?  Is runtime behavior getting wonky?  Re-run your shakeout testing and see if the results look a lot different.

Shakeout testing is your window into an environment's current, actual behavior.

Shakeout Testing Is Labor-Intensive, Especially With a Sophisticated Deployment Pipeline

Now, all of this sounds great, but understand that it comes with a cost.  Of course it does -- as the saying goes, "there is no free lunch."

Shakeout testing is generally labor intensive, especially if you're going to be comprehensive about it.  Imagine it for even a relatively simple and straightforward scenario like managing a bank account.  Sure, you need to know if you can log in, check your balance, and such.  But you're probably going to need to check this across all different sorts of bank accounts, each of which might have different features or workflows.

It quickly goes from needing to log in and poke around with an account to needing dozens of people to log in and poke around with their accounts.  Oh, and in an environment like prod, you probably want to do as much of this in parallel as possible, so maybe that's hundreds or even thousands of man-hours.

This becomes time-consuming and expensive, with a lot of potential ROI for making it more efficient.

Low-Code, No-Code, and Synthetics: Helping Yourself

As detailed in the article above, a natural next step is to automate the shakeout testing.  In fact, that's pretty much table stakes for implementing the practice these days.  The standard way to do this would involve writing a bunch of scripts or application code to put your system through the paces.
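To give a sense of what those scripts tend to look like, here's a minimal sketch that walks a few critical-path URLs and fails loudly if any of them don't respond. The base URL and paths are placeholders, and a real shakeout suite would log in, exercise workflows, and check response content, not just status codes.

```python
import urllib.request

# A few critical-path checks you might script against a deployed environment.
# The base URL and paths are placeholders, not a real application.
BASE_URL = "https://staging.example.com"
CHECKS = ["/health", "/login", "/accounts/summary"]

def check(path: str) -> bool:
    try:
        with urllib.request.urlopen(f"{BASE_URL}{path}", timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

results = {path: check(path) for path in CHECKS}
for path, ok in results.items():
    print(f"{'OK ' if ok else 'FAIL'} {path}")

# A single failing critical-path check is a strong hint to roll back.
if not all(results.values()):
    raise SystemExit("Shakeout failed")
```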

This is certainly an improvement.  You go from the impractical situation of needing an army of data entry people for each shakeout test run to needing a platoon of scripters who can work prior to deployment.  This makes the effort more effective and more affordable.

But there's still a lot of cost associated with this approach.  As you may have noticed, it isn't cheap to pay people to write and maintain code.

This is where the idea of low-code/no-code synthetics solutions comes into play.  It's actually possible to automate health checks for your system's underlying components in a way that eliminates the need for much of your shakeout testing's end-to-end scripting.

You can have your sanity checks and your fit-for-purpose tests in any environment without brute-force labor or brittle automation.

Shakeout Testing Has a Maturity Spectrum

If you don't yet do any shakeout testing, then you should start.  If you haven't yet automated it, then you should automate it.  And if you haven't yet moved away from code-intensive approaches to automation, you should do that.

But wherever you are on this spectrum, you should actively seek to move along it.

It is critically important to have an entire arsenal of tests that you run against your software as you develop and deploy it.  It's irresponsible not to have these be both specialized and extensive.  But as you do this, it's all too easy to lose track of the most basic kind of testing that I mentioned in the lead-in to this post.  Does the software do what we built it to do?  And the more frequently and efficiently you can answer that, the better.

Contributor: Erik Dietrich

Erik Dietrich is a veteran of the software world and has occupied just about every position in it: developer, architect, manager, CIO, and, eventually, independent management and strategy consultant.  This breadth of experience has allowed him to speak to all industry personas and to write several books and countless blog posts on dozens of sites.

Test Data Management

5 Steps to Better Test Data Management

Foreword

I always say that it's important to test in production because nothing compares to a production environment. But it wouldn't be very professional of you to test only, and directly, in production. Testing in production usually gives the impression that you didn't care enough to test before you reached the production stage.

But I'd say that in order for you to even dare to test something in production, you need to have run a set of tests previously in a similar environment—including all the data you need for testing.

That's where test data management (TDM) comes in.

TDM is the process of creating test data that's similar to the real data being used in a production environment. Developers and testers use this non-production data to be more confident that the new software changes aren't going to break anything during the release.


A good testing strategy is usually accompanied by testing data taken from production. Developers sometimes use funny, made-up dummy data as well. But what are the steps you need to follow in order to create good TDM data?

Top 5 Considerations

#1 Only Use the Data You Really Need

If you don't know where to go for your next vacation, just book the next flight and start packing. You might have the best experience of your life...or the worst. Likewise, if you don't know what data you really need for testing, you might just use it all. That approach has pros and cons: when you test software without knowing which scenarios you need to cover, an exact copy of the production database is tempting because it's the easiest way to start testing with real data. But you'll end up spending too much time and money waiting to get your copy of the data for testing.

When you start creating your testing process by building the list of test cases you'll need, it becomes pretty obvious how much data you're going to need, and of what type. More importantly, think about your testing process as an iterative one. If you start with the login page, you don't need all of a user's information for that test case, such as their birthdate or home address.

As you keep iterating, you're going to need more testing data. And as you find more bugs, you're going to need more real data. Unless you need to run stress tests, subsetting data is going to be enough for the majority of test cases. And even if you still need to validate that the system can handle high waves of traffic, you can generate synthetic data for that purpose. More on this later.

Taking small sets of your production database should be enough for most of the tests you'll run to validate the software. You'll also reduce costs and complexity by building only the test data you really need at the moment.
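As a minimal sketch of what subsetting can look like, here's one way to copy a small, recent slice of a single table from a production snapshot into a test database. The file names, table, and columns are made up, and most teams would do this with their database's own tooling rather than a hand-rolled script.

```python
import sqlite3

# Copy a small, recent slice of a (hypothetical) accounts table
# from a production snapshot into a test database, instead of cloning everything.
prod = sqlite3.connect("prod_snapshot.db")
test = sqlite3.connect("test_data.db")

rows = prod.execute(
    "SELECT id, account_type, balance, created_at "
    "FROM accounts ORDER BY created_at DESC LIMIT 500"
).fetchall()

test.execute(
    "CREATE TABLE IF NOT EXISTS accounts "
    "(id INTEGER PRIMARY KEY, account_type TEXT, balance REAL, created_at TEXT)"
)
test.executemany("INSERT OR REPLACE INTO accounts VALUES (?, ?, ?, ?)", rows)
test.commit()
print(f"Copied {len(rows)} rows into the test database")
```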

#2 Avoid Having Sensitive Data for Testing

We've seen a lot of recent GDPR-related lawsuits involving big companies in Europe. Europe is taking data protection more seriously than other regions, and pretty soon regulations like GDPR will be implemented on other continents too. If GDPR is already affecting companies like these, we'd better avoid having unprotected sensitive data in our testing environments.

SOX compliance regulation fosters a separation of duties within an organization. I've worked with these types of regulations, and in my experience, auditors mainly want to see that only certain people have access to the production environments. These people with privileged permissions are legally responsible for what happens with customer data.

Even with regulations in place, data is still leaked. We have to be prepared for that, so you should operate as if you expect the information you're storing will be stolen someday. Mask any data that could identify a person, or what's also called personally identifiable information (PII).

Use irreversible methods to mask data so that it's difficult to unmask it. And make sure you're constantly checking that PII is protected. Managing test data will be simpler and easier if you create subsets of the data to fulfill your different testing needs. And you won't have to worry about giving sensitive data to developers or whoever needs production data for testing.
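Here's a minimal sketch of that kind of irreversible masking, using a salted one-way hash so the same input always maps to the same token but can't be recovered. The customer record and the salt are made up; in practice the salt would live in a secrets manager, and the masking would run as part of building the test dataset.

```python
import hashlib

SALT = "rotate-and-store-this-secret-elsewhere"  # placeholder salt

def mask(value: str) -> str:
    """One-way mask: the same input always maps to the same token,
    so relationships between rows survive, but the original can't be recovered."""
    digest = hashlib.sha256(f"{SALT}:{value}".encode()).hexdigest()
    return digest[:12]

customer = {"name": "Joe Smith", "email": "joe@example.com", "plan": "premium"}
masked = {
    "name": mask(customer["name"]),
    "email": mask(customer["email"]),
    "plan": customer["plan"],  # non-PII fields can stay as-is
}
print(masked)
```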

Ideally, try to avoid having sensitive data. But since sometimes you can't avoid it, try to keep PII data at a minimum, and securely mask the data you need to have.

#3 Build Synthetic Data for Better Efficiency

Even when you've decided to mask sensitive data, you want to make the security gap as small as possible by not including sensitive data in your tests at all, even masked, especially if the data is going to be used only for testing purposes. One way to improve security is to replace real data (like credit card numbers) with autogenerated dummy data. That's synthetic data, and it will help you get more efficient results in testing.
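Here's a minimal sketch of generating that kind of synthetic data, using only the standard library so the idea stays visible. The names and the card-number format are invented; many teams reach for a dedicated data-generation library instead.

```python
import random
import string

random.seed(7)  # reproducible fake data for repeatable tests

def fake_name() -> str:
    first = random.choice(["Jeremy", "Dana", "Luis", "Priya", "Mara"])
    last = random.choice(["Lopez", "Kim", "Okafor", "Novak", "Reyes"])
    return f"{first} {last}"

def fake_card_number() -> str:
    # Clearly non-real: a fixed test prefix plus random digits.
    return "4000-0000-" + "-".join(
        "".join(random.choices(string.digits, k=4)) for _ in range(2)
    )

records = [{"name": fake_name(), "card": fake_card_number()} for _ in range(3)]
for r in records:
    print(r)
```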

You can take advantage of the synthetic data approach by using more realistic data than just dummy data. For example, you might have a user called Joe in your records, but for testing purposes, you decided Joe will be called Jeremy. This gives you a chance to run machine learning experiments where you can learn more about "Jeremy's" preferences without knowing that Jeremy is actually Joe. You're protecting Joe, even if the data is leaked or misused.

Synthetic data makes real data more shareable because you only have the data you need. Why would you need to know a person's name if you're just trying to replicate a bug in production? You're only interested in knowing which paths through the system's workflow the user took. What matters is why the data ended up in a certain state that caused the software to break. You can then decide either to ignore the person's name or replace it with other "real" data.

If you need to have large amounts of data for performance testing, you can use synthetic data to double the size of the database. Along with the previous benefits we discussed, synthetic data makes your tests more efficient by only using the data you need to cover specific test scenarios.

#4 Create Test Data As a Self-Service Model

DBAs are in charge of generating testing data. They know the best ways to do it and what data is sharable among teams (as I explained in the previous section), and sometimes they're the only ones who have access to production databases. When this happens, the DBAs become a bottleneck, and the time spent in the testing stage increases.

That's why you should create test data as a self-service model. It's not just so you don't constantly interrupt DBAs whenever a developer or tester needs data. Being able to get testing data automatically lets you parallelize what used to be the boring, manual task of generating data for testing. Do you need to reduce the testing time? Fine. Create more subsets of testing data in parallel and distribute the test cases.

Another benefit of having a self-service model is that you can easily drop and re-create environments on demand. By doing this, you ensure repetition and predictable results when preparing testing data. It's also easier to include TDM in your CI/CD pipeline, which brings you closer each day to one-click deployments.
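What "self-service" can look like in practice is a small command that any developer, tester, or CI job can run to get a fresh, masked subset without filing a ticket. This sketch only prints the steps, because the real work depends on your own snapshot, subsetting, and masking tooling; the command name and arguments are hypothetical.

```python
import argparse

def provision(dataset: str, rows: int) -> None:
    """Placeholder steps; each one would call the team's own tooling."""
    print(f"1. Creating an empty test schema for '{dataset}'")
    print(f"2. Copying a {rows}-row subset from the latest sanitized snapshot")
    print("3. Applying PII masking rules")
    print("4. Printing the connection string for the requesting developer")

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Self-service test data provisioning")
    parser.add_argument("dataset", help="which dataset to provision, e.g. accounts")
    parser.add_argument("--rows", type=int, default=500)
    args = parser.parse_args()
    provision(args.dataset, args.rows)
```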

Creating a self-service model is far from an easy task, so it's important that DBAs, developers, and testers work together to build it. Not all of them have the same needs and objectives. Combine their experience, knowledge, and skills to create a better model for test data.

#5 Keep Testing Data up to Date

Last but not least, keep your testing data up to date. Your software will continue evolving, so the test scenarios and the data they need will keep changing over time too. Some test scenarios will become obsolete, and so will their data. Try to keep the house clean by making sure you're generating only the testing data that's still relevant.

This process takes discipline, and good communication within the team always helps. Developers need to inform everyone which tests are no longer needed and when it's OK to remove them. And either DBAs or testers need to keep confirming that the data they're using for tests is still valid and relevant.

Keeping data fresh might seem like common sense. But I've seen delivery pipelines where tests continue to grow, even though some of the features no longer exist. Sometimes we get too extreme about trying to have a high percentage of test coverage, which isn't efficient.

Having up-to-date testing data will help you have higher quality TDM.

Benefits: Better Test Results With Better Testing Data

I'd say that testing is the most important stage of any software release life cycle. The more quickly you can verify that everything is still working, the better. Always keep the mindset that parallelizing testing will help you to speed up the process. For that, you need to have better test data quality, and it's not always necessary to have an exact replica of what you have in production. In fact, if you don't, it may help you in the cost, security, or speed departments.

It's important that you start by defining what you truly need and iterate from there. Automation helps with repetitive and boring tasks, but you still need to account for the human side of the equation when generating data for testing.

TDM helps you provide only the data you need, on time and securely.

Author: Christian Melendez

Further reading suggestions: Holistic Test Data Management