Azure Architecture Blog

Blue/Green Deployment with Azure Front Door

gamullen
Microsoft
Aug 24, 2020

Azure Front Door is a powerful global routing solution that provides high availability and performance acceleration, among other features, for HTTP/HTTPS applications. End users of the application are automatically routed to the nearest Microsoft “Point of Presence” (POP), and from there across Microsoft’s global fiber network to either a backend in Azure, or to the best “exit” POP for external (“Custom Host”) backends.

Blue/Green (or Canary) Deployment is a methodology to introduce application enhancements to a small subset of end users, and if all goes well, slowly increase the ratio until all users are on the new deployment. In case things do not go perfectly, it’s simple to just stop routing requests to the new buggy backend. This is a much safer way to introduce code changes than just suddenly pointing all users at the new enhancements.

Azure Front Door makes Blue/Green simple. This article will explain how.

Azure Front Door provides a nice configuration tool called “Designer.” It allows you to easily connect your frontend(s), backend pool(s) and routing rule(s) together. You essentially go through the Designer when you create your Front Door, but then you can use it later for configuration changes. Here is what it might look like after completion:

Azure Front Door Designer

The first step is to configure the frontend.

Azure Front Door Frontend Definition

The main item here as it relates to blue/green is “Session Affinity.” This determines whether the end user always gets routed to the same backend after first accessing the Front Door.

Whether or not you enable this depends on your application, and the type of enhancements being rolled out. If it’s a major revision you will likely want to enable Session Affinity, so that if the user is initially routed to the new codebase she will continue to use it. If the enhancement is relatively minor, for example involving a single page with no dependencies on other parts of the application, you could potentially leave this disabled. If in doubt, enable Session Affinity.
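To make the idea concrete, here is a minimal sketch of cookie-based affinity in Python. It is purely illustrative: Azure Front Door manages its affinity cookie for you at the POP, and the host names and cookie name below are made up for the example.

```python
import random

# Illustrative only: Azure Front Door issues and honors its own affinity
# cookie; this sketch just shows the "pin the user to whichever backend
# served the first request" behavior. Host names and cookie name are made up.
BACKENDS = ["blue.example.com", "green.example.com"]

def route(request_cookies: dict) -> tuple[str, dict]:
    """Return (backend_to_use, cookies_to_set) for an incoming request."""
    pinned = request_cookies.get("affinity")
    if pinned in BACKENDS:
        return pinned, {}                   # returning user stays on the same backend
    chosen = random.choice(BACKENDS)        # first visit: pick a backend
    return chosen, {"affinity": chosen}     # pin all later requests to it

backend, cookies_to_set = route({})         # first request gets pinned
backend_again, _ = route(cookies_to_set)    # later requests reuse the pin
assert backend == backend_again
```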

Next comes the backend pool. This is where you configure the host(s) that will end up satisfying your users’ web requests.

In this example, I just chose two relatively random public endpoints. You would configure your existing backend website and the new one under test. I divided requests between the two with a 75%/25% split. You would likely send more traffic to the current website than to the new one, and then direct more to the new one as it proves itself to be bug free.
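Conceptually, the split is just proportional to the configured weights. Here is a small Python sketch, with made-up host names and an arbitrary ramp-up schedule, of how a 75/25 split behaves and how you might widen it as confidence grows; Front Door performs this selection for you, so treat this only as a mental model.

```python
import random

def pick_backend(weights: dict[str, int]) -> str:
    """Weighted random choice: probability = weight / sum of all weights."""
    hosts = list(weights)
    return random.choices(hosts, weights=[weights[h] for h in hosts], k=1)[0]

# Hypothetical ramp-up: start at 75/25, then shift traffic toward "green"
# as the new deployment proves itself.
rollout_stages = [
    {"blue.example.com": 75, "green.example.com": 25},
    {"blue.example.com": 50, "green.example.com": 50},
    {"blue.example.com": 25, "green.example.com": 75},
]

for stage in rollout_stages:
    sample = [pick_backend(stage) for _ in range(10_000)]
    green_share = sample.count("green.example.com") / len(sample)
    print(f"{stage} -> observed green share ~ {green_share:.0%}")
```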

Here is a screenshot of the definition of one of the two backends. Note the three circled parameters, which, along with one other, are discussed below in relation to how Azure Front Door chooses a configured backend for a given request.

Azure Front Door Backend Definition

The following screenshot shows the two backends I configured as part of my backend pool. Note that the total of the “Weights” does not have to add up to 100 — I did that for clarity.

Azure Front Door Backend Pool Definition
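Because each backend’s share of traffic is its weight divided by the total of all weights, any pair of numbers with the same ratio behaves identically. A tiny illustrative example (labels are made up):

```python
def traffic_shares(weights: dict[str, int]) -> dict[str, float]:
    """Each backend's share of traffic is its weight divided by the total."""
    total = sum(weights.values())
    return {host: w / total for host, w in weights.items()}

print(traffic_shares({"blue": 75, "green": 25}))  # {'blue': 0.75, 'green': 0.25}
print(traffic_shares({"blue": 3,  "green": 1}))   # {'blue': 0.75, 'green': 0.25}
```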

Note that how much traffic gets routed to a given backend also depends on other parameters, which are discussed in detail in the Azure Front Door routing methods documentation.

For the purposes of a simple Blue/Green configuration, here is a summary of how you might configure these parameters.

  • Keep the backend under test disabled to start. Once you get all other parameters configured, you can enable it to start your Blue/Green testing.
  • The Priority of both backends should be the same. I recommend just leaving them at “1.”
  • As part of the Backend Pool configuration there is a section called “Load Balancing.” For the purposes of our discussion, the most important parameter here is “Latency sensitivity.” This value determines how many milliseconds slower than the fastest backend (as measured by the health probes) a backend can be and still be considered for selection; setting it to zero routes traffic only to the lowest-latency backend. I recommend setting it to a reasonably high value such as 500 (half a second), meaning any backend within half a second of the fastest one will be considered. The goal is to make sure both backends get used, as they may be in different data centers and present different latencies to the end user.

Backend Pool Latency Sensitivity

  • Any backends making it this far are finally selected based on weight, as illustrated in the sketch below.
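Putting those rules together, here is a conceptual Python sketch of the selection order summarized above: enabled backends only, then the best priority group, then the latency-sensitivity window measured against the fastest backend’s health-probe latency, and finally a weighted pick. The class, field names, and latency figures are invented for illustration; this is not Front Door’s actual implementation.

```python
import random
from dataclasses import dataclass

@dataclass
class Backend:
    host: str
    enabled: bool
    priority: int            # lower number = higher priority
    weight: int
    probe_latency_ms: float  # as measured by health probes (illustrative values)

def select_backend(backends: list[Backend], latency_sensitivity_ms: int) -> str:
    """Conceptual model of the selection order summarized above."""
    # 1. Disabled backends are never candidates.
    candidates = [b for b in backends if b.enabled]
    # 2. Only the best (lowest-numbered) priority group is considered.
    best_priority = min(b.priority for b in candidates)
    candidates = [b for b in candidates if b.priority == best_priority]
    # 3. Latency sensitivity: keep backends within the window above the fastest.
    #    With a value of 0, only the lowest-latency backend would survive.
    fastest = min(b.probe_latency_ms for b in candidates)
    candidates = [b for b in candidates
                  if b.probe_latency_ms - fastest <= latency_sensitivity_ms]
    # 4. Finally, pick among the survivors in proportion to their weights.
    return random.choices(candidates,
                          weights=[b.weight for b in candidates], k=1)[0].host

pool = [
    Backend("blue.example.com",  enabled=True, priority=1, weight=75, probe_latency_ms=40),
    Backend("green.example.com", enabled=True, priority=1, weight=25, probe_latency_ms=310),
]
# With latency sensitivity at 500 ms, both backends stay in play and the
# 75/25 weights decide; at 0 ms only blue.example.com would receive traffic.
print(select_backend(pool, latency_sensitivity_ms=500))
```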

For the purposes of Blue/Green, the routing rules are relatively unimportant. What you select here depends more on how your web application works.

That said, consider enabling “Caching,” which in essence turns your Azure Front Door into a Content Delivery Network (CDN). This will allow web content to be cached and delivered from the Front Door POP rather than the backend. The POP is closer, sometimes significantly closer, to the end user than the backend, so this can dramatically improve application performance.
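The benefit is easiest to see as a simple cache-aside model at the POP: a hit is served locally, while a miss pays the round trip to the backend once and then populates the cache. The sketch below is only a mental model with made-up timings, not a description of Front Door’s actual caching behavior.

```python
import time

pop_cache: dict[str, str] = {}          # content cached at the POP near the user

def fetch_from_backend(path: str) -> str:
    time.sleep(0.150)                   # pretend the origin round trip costs ~150 ms
    return f"<html>content for {path}</html>"

def handle_request(path: str) -> str:
    if path in pop_cache:               # cache hit: served from the nearby POP
        return pop_cache[path]
    body = fetch_from_backend(path)     # cache miss: go all the way to the backend
    pop_cache[path] = body              # subsequent requests are served locally
    return body

handle_request("/index.html")           # first request: slow (backend round trip)
handle_request("/index.html")           # repeat request: fast (served from the POP)
```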

The Azure Front Door routing architecture documentation contains full details if your configuration starts to get more complex.

Configuring Blue/Green with Azure Front Door is relatively simple once you understand a few vital concepts. Give it a try!

Published Aug 24, 2020
Version 1.0
  • colinkershaw
    Copper Contributor

    Just a minor terminology clarification: this article conflates Blue-Green deployment and Canary deployment and suggests they are identical (emphasis added): "Blue/Green (or Canary) Deployment is a methodology to introduce application enhancements to a small subset of end users, and if all goes well, slowly increase the ratio until all users are on the new deployment. In case things do not go perfectly, it’s simple to just stop routing requests to the new buggy backend. This is a much safer way to introduce code changes than just suddenly pointing all users at the new enhancements."

     

    The techniques actually differ in that detail of all users versus a subset of users:

     

    • All Users: Blue-Green deployment is a wholesale redirection of all requests from one complete environment instance (eg, Blue) to a second complete instance (eg, Green).
      • Per your cited reference to Martin Fowler's Bliki post (emphasis added): "The blue-green deployment approach does this by ensuring you have two production environments, as identical as possible. At any time one of them, let's say blue for the example, is live. As you prepare a new release of your software you do your final stage of testing in the green environment. Once the software is working in the green environment, you switch the router so that all incoming requests go to the green environment - the blue one is now idle."
    • Subset of users: Canary deployment is a redirection of some requests (for specific users only) to a second environment. These specific users effectively become "canaries in a coalmine" testing out new features in this second environment. This is similar to A/B testing although with different goals in mind.
      • Per a post on Canary Deployment from Martin Fowler's site: "Canary release is a technique to reduce the risk of introducing a new software version in production by slowly rolling out the change to a small subset of users before rolling it out to the entire infrastructure and making it available to everybody."

     

    So while there are similarities - both involve redirecting requests to an environment running new features, and both allow a quick rollback to a previous version by undoing the request redirection - Blue-Green redirects all requests whereas Canary redirects only a subset of requests (from specific users who are "canaries" - or guinea pigs).