This blog shares a few considerations from FastTrack for Azure engineers for customers migrating on-premises .NET applications to App Service or Azure Container Instances (ACI). This includes both non-container and container scenarios where a customer has evaluated various compute services for their applications and chosen the “non-orchestrator” App Service or ACI route instead of the orchestrator Azure Kubernetes Service (AKS) route. You can find our candidate service flowchart here or here (there is some overlap, but the information is also presented in different ways, so both are highly useful). Describing this as the “non-orchestrator” route does not mean there will be no orchestration. Rather, you will do much of the orchestration yourself with the support of the tooling, making conscious decisions that an orchestrator would make on your behalf (with some configuration, of course). There will also be guidance in another post for the orchestrator route.
Rationalization and Remediation
The process of evaluating your applications to determine how best to migrate or modernize them is called cloud rationalization. As you are making these decisions, we suggest that you find out more about “the five Rs” of rationalization here. The most familiar of these is rehosting (“lift and shift”), where apps are simply rehosted in the cloud on virtual machines with no changes. At some later time, the app can be rearchitected/refactored to take advantage of both the cost savings and greater interoperability of cloud-native services. There are also cases where apps are candidates to be migrated directly into PaaS services with little more than configuration changes, such as a database connection.
What we think needs to be added to this vocabulary is the category of remediation, where customers don’t want a simple lift and shift but instead want to remediate existing apps so they can be rehosted in a PaaS service without yet taking full advantage of PaaS capabilities (some might still call this refactoring, but refactoring is more a case of making changes specifically to take advantage of the PaaS model on a cloud platform). An example is the migration of an ASP.NET Framework application that uses GAC assemblies or COM/COM+ components, or that uses port bindings other than 80/443. None of these will migrate to the multi-tenant App Service because they are not supported, even though the first two could simply be containerized and migrated to App Service Web App for Containers as is (but that wouldn’t be PaaS in the sense of taking advantage of its specific capabilities). If the customer indeed wants to migrate to App Service PaaS services, then the apps must be remediated, which takes some work but falls far short of a complete re-architecture/rewrite.
Containerization as Remediation
Containerization as a means of remediating an application that needs to be migrated to Azure provides a number of benefits. The first is that the application can be moved “as is” into Azure, immediately taking advantage of the PaaS platform, even if not fully. The existing Basic, Standard, and Premium App Service plans include container support for both Linux and Windows workloads at no additional cost.
Even though Web App for Containers is more costly than Virtual Machines, there is no OS maintenance to worry about, and over time, as the containerized applications are modernized, there can be a stepwise refactoring within the same App Service plan until the value of PaaS is fully realized (i.e., cloud-native) and the container is no longer needed.
File Share Oriented Remediation
Applications often need access to a file system for storage of documents of various types, so when moving to the cloud it is important to maintain this capability in an application that is being migrated. Since there is currently no native access to SMB/NFS file shares in App Service, this becomes problematic for applications with this requirement. In this case, containerization is also recommended since, as of this writing, Azure File Shares are only accessible from containerized applications (which should change in the future). This is one manner of remediation.
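If you go the containerization route, one low-code option is to mount the file share into the container with App Service’s “bring your own storage” feature. Here is a minimal Azure CLI sketch (all resource names are hypothetical; the feature was in preview at the time of writing):

```bash
# Fetch a storage key and mount the Azure Files share into the container at /app/files:
STORAGE_KEY=$(az storage account keys list \
  --resource-group my-rg --account-name mystorage \
  --query "[0].value" --output tsv)
az webapp config storage-account add \
  --resource-group my-rg --name myapp \
  --custom-id appfiles-mount \
  --storage-type AzureFiles \
  --account-name mystorage \
  --share-name appfiles \
  --access-key "$STORAGE_KEY" \
  --mount-path /app/files
```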
The other manner of remediation is to make the necessary code updates in an application to access the Azure File Share APIs. This is most practical for apps that could run natively in App Service, but it would of course require more work than simply containerizing the application.
User Orchestration: Single Container vs. Multi-Container
This discussion focuses on deployment to App Service in what we will call the user orchestration path. Rather than relying on orchestration by a system such as Kubernetes, the user initiates orchestration manually instead of expecting the system to relieve much of that burden. Of course, there are constructs in App Service that handle what are generally considered orchestration steps, such as scale out, and, to some degree, deployment slots that allow for fast rollouts and rollbacks, though these are still not as sophisticated as those of an orchestrator.
Now, when migrating on-premises applications to Azure, customers have the option to containerize apps prior to migration, particularly those that won’t migrate directly into the standard multi-tenant App Service. Another reason customers containerize apps is that they’re pursuing a microservices strategy and want to begin the process of containerization with an eye towards breaking complex, containerized apps into smaller, composable services. In this article, we will look at single container vs. multi-container scenarios in what we will call “user orchestrator” scenarios (Azure Container Instances (ACI) or App Service Web App for Containers) and look at cases where you may need to step up to an orchestrator (specifically, Azure Kubernetes Service).
User orchestrator single-container scenarios generally involve applications that are self-contained (such as nightly scheduled background jobs) and don’t require any scale-out features. For example, these might be .NET Framework console apps with GAC dependencies or that make calls to COM/COM+ components. These types of apps can be containerized on-premises with Docker and deployed either into ACI or App Service Web App for Containers. If you do not need to scale up or scale out your container, then ACI is the best candidate, since it does not provide either out of the box (an ACI container may, however, be redeployed to increase CPU or memory). If you do need autoscale capability, then Web App for Containers is the better choice.
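To make this concrete, here is a rough Azure CLI sketch of both deployment targets (registry, image, and resource names are hypothetical; Windows container images would additionally need --os-type Windows on ACI or a Windows App Service plan):

```bash
# Deploy a prebuilt image to Azure Container Instances (no scale up/out):
az container create \
  --resource-group my-rg \
  --name nightly-job \
  --image myregistry.azurecr.io/nightly-job:v1 \
  --cpu 1 --memory 1.5 \
  --restart-policy OnFailure

# Or deploy the same image to Web App for Containers when autoscale is needed:
az appservice plan create --resource-group my-rg --name my-plan --sku P1V2 --is-linux
az webapp create \
  --resource-group my-rg --plan my-plan --name my-containerized-app \
  --deployment-container-image-name myregistry.azurecr.io/nightly-job:v1
```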
Even though Web App for Containers provides scale-up and scale-out capability, the general design is that all apps deployed into a single App Service plan scale together, with the same number of instances of each web app within the plan. For example, suppose you have containerized an app that has a front end and web APIs API1, API2, and API3. By default, as you scale out from one instance to more, you will add an instance of each of these web apps. In other words, if you scale the App Service plan out to three instances, you will have three front-end web apps, three instances of API1, three instances of API2, and three instances of API3. This may not be optimal, as you may only require, for example, two instances of API1 and API2 and one instance of API3.
You can find general App Service scaling considerations here. However, you should know that App Service does allow for lower-level per-app scaling, so you can scale an app independently of the App Service plan, but you will have to perform this manually. Alternatively, you could place each app that needs to scale with the same number of instances in a different App Service plan. In the case of App Service and App Service plans, you are effectively the “orchestrator” with respect to the ability to independently scale apps. This functionality is a basic capability of a Kubernetes orchestrator.
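As a sketch of per-app scaling with the Azure CLI (names are hypothetical; the generic --set path shown here is one way to set siteConfig.numberOfWorkers, so verify it against the current documentation):

```bash
# Create a plan with per-app scaling enabled, scaled out to three workers:
az appservice plan create \
  --resource-group my-rg --name my-plan \
  --sku P1V2 --number-of-workers 3 --per-site-scaling

# Pin API3 to a single instance even as the plan scales out:
az webapp update --resource-group my-rg --name api3 \
  --set siteConfig.numberOfWorkers=1
```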
With respect to multi-container scenarios, ACI and Web App for Containers are possible choices in user orchestrator scenarios. There are some distinct differences, however, that must be considered. Docker containers essentially map to pods in Kubernetes. Most often, a pod consists of either a single container or multiple containers with a primary container and tightly coupled secondary, supporting containers. For example, an application container might have a supporting or “sidecar” container that performs functions such as logging for the application container or monitoring the application container. These containers act as a unit and are not intended to operate independently.
In ACI, the equivalent of a Kubernetes pod is a container group, which has one or more containers in the group. Docker Compose allows you to specify a multi-container group that can be deployed into ACI. Two independent containers, for example a WordPress front end and a MySQL backend, can be deployed in the group, but they will not have the ability to, again, scale up or out. Generally, however, we do not recommend this other than as a temporary rehost that will eventually be refactored to access a PaaS database service.
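As an illustration, here is a hedged sketch of a two-container group deployed from a YAML file (image versions, names, and environment variables are hypothetical placeholders; a real WordPress deployment needs proper database credentials and connection settings):

```bash
# Write a two-container group definition and deploy it to ACI.
cat > wordpress-group.yaml <<'EOF'
apiVersion: '2019-12-01'
location: eastus
name: wordpress-group
type: Microsoft.ContainerInstance/containerGroups
properties:
  osType: Linux
  containers:
  - name: wordpress
    properties:
      image: wordpress:latest
      ports:
      - port: 80
      environmentVariables:
      - name: WORDPRESS_DB_HOST
        value: 127.0.0.1:3306   # containers in a group share localhost
      - name: WORDPRESS_DB_PASSWORD
        value: changeme         # placeholder only
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.5
  - name: mysql
    properties:
      image: mysql:5.7
      environmentVariables:
      - name: MYSQL_ROOT_PASSWORD
        value: changeme         # placeholder only
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.5
  ipAddress:
    type: Public
    ports:
    - protocol: tcp
      port: 80
EOF
az container create --resource-group my-rg --file wordpress-group.yaml
```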
To scale up or out, you would need Web App for Containers instead of ACI. The WordPress container could scale independently of the backend MySQL container since you wouldn’t want two MySQL backends, but as discussed earlier, once deployed that would require per-app scaling within the same App Service plan to achieve multiple front-end WordPress instances with a single backend instance. Of course, you could redeploy the MySQL database in its own App Service plan and then scale the load balanced WordPress container in the original plan. Again, we do not recommend backend database services in containers, but would instead recommend migrating the backend database to a PaaS database service.
Rollback
For App Service, there is no notion of rolling a deployment back, but it does provide multiple “deployment slots” in Standard and above App Service plans. Deployment slot functionality allows you to deploy your app to a staging slot, which can then be used to validate app changes before swapping the deployment into the default production slot. After the swap, the previous production deployment has been swapped into the staging slot. If there are any problems with the current production deployment, the old production app can be swapped back into the production slot. There is only one production slot, but there can be up to twenty staging slots in an App Service plan, depending on the plan tier. In the case of Kubernetes, an updated deployment replaces the previous deployment, with a rolling update as the default deployment model. To roll back, Kubernetes can go back to the previous deployment with a single command, as shown below.
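For reference, the Kubernetes rollback looks like this (deployment name is hypothetical):

```bash
# Roll a Kubernetes deployment back to its previous revision:
kubectl rollout undo deployment/my-app

# Inspect revision history and watch the rollback complete:
kubectl rollout history deployment/my-app
kubectl rollout status deployment/my-app
```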
Generally, we recommend using three slots for your production App Service plan: Production, Staging, and “last-known-good.” Swap Production with Staging, and then swap Staging with the last-known-good slot. Now you have an exact copy of production in the last-known-good slot in case you want to roll back. This best practice is explained at the Microsoft Azure Architecture Center under the “Deployment” section: Basic web application - Azure Reference Architectures | Microsoft Docs.
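A sketch of that slot arrangement with the Azure CLI (app and slot names are hypothetical; verify that slot-to-slot swaps behave as expected in your tier):

```bash
# Create the two non-production slots:
az webapp deployment slot create --resource-group my-rg --name myapp --slot staging
az webapp deployment slot create --resource-group my-rg --name myapp --slot last-known-good

# Promote the validated staging build into production...
az webapp deployment slot swap --resource-group my-rg --name myapp \
  --slot staging --target-slot production
# ...then park the previous production build (now in staging) in last-known-good:
az webapp deployment slot swap --resource-group my-rg --name myapp \
  --slot staging --target-slot last-known-good
```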
Self-healing
App Service has the ability to auto-heal applications leveraging App Service diagnostics, but this has to be configured by the user through the setting of rules: “request count, slow request, memory limit, and HTTP status code to trigger mitigation actions.” There is also a proactive healing version of auto-healing for Windows-only workloads that is turned on automatically, with the ability to opt out.
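There is no dedicated CLI flag for these rules that we are aware of, but autoHealEnabled and autoHealRules are siteConfig properties, so one hedged sketch is to set them through generic configuration (names are hypothetical, and the exact JSON shape should be verified against the App Service REST API reference):

```bash
# Sketch: recycle the worker process when 50+ requests return HTTP 500
# within five minutes.
az webapp config set --resource-group my-rg --name myapp \
  --generic-configurations '{
    "autoHealEnabled": true,
    "autoHealRules": {
      "triggers": {
        "statusCodes": [
          { "status": 500, "count": 50, "timeInterval": "00:05:00" }
        ]
      },
      "actions": { "actionType": "Recycle" }
    }
  }'
```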
Declarative vs. Imperative
In a PaaS service such as App Service, all actions are imperative: to get a deployment to a specific state, a user must take a specific action to direct the system to that state. In the declarative case, the user tells (declares to) the system its desired state through a specification, and then trusts the system to take that desired specification and do everything it can to get to that desired state. An orchestrator such as Kubernetes, while it may accept imperative commands, typically accepts a declarative specification (manifest) and makes it so with no further intervention from the user.
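A side-by-side sketch makes the contrast clearer (all names are hypothetical):

```bash
# Imperative (App Service): explicitly command each state change.
az appservice plan update --resource-group my-rg --name my-plan --number-of-workers 3

# Declarative (Kubernetes): describe the desired state and apply it;
# the orchestrator reconciles the cluster to match.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: myregistry.azurecr.io/my-app:v1
EOF
```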
Load Balancing and Protecting Your Apps
If you simply want to load balance your application, with global failover to another region, the standard method for some time has been to use the layer 4 Azure Load Balancer with Traffic Manager, the DNS-based traffic load balancer. If you need additional features such as SSL offload, application-layer processing, or performance acceleration, then you will want Azure Front Door. Application Gateway provides similar features to Azure Front Door, except for performance acceleration. While both Front Door and Application Gateway are layer 7 (HTTP/HTTPS) load balancers, the primary difference is that Azure Front Door is a global service whereas Application Gateway is a regional service. And while Azure Front Door can load balance between your different scale units/clusters/stamp units across regions, Application Gateway allows you to load balance between the VMs, containers, etc. within a scale unit.
Below are further comparisons of Application Gateway to Azure Front Door:
- Azure Front Door can perform path-based load balancing only at the global level; if you want to load balance traffic further within your virtual network (VNet), use Application Gateway.
- Azure Front Door doesn't work at a VM/container level, so it cannot do connection draining. However, Application Gateway allows you to do connection draining.
- Application Gateway can listen on both internal and public endpoints. Azure Front Door can listen only on public endpoints.
- Azure Front Door is richer for web apps exposed publicly, as it has automatic CDN integration for static content caching performance.
- Azure Front Door and Application Gateway both support session affinity. While Front Door can direct subsequent traffic from a user session to the same cluster or backend within a region, Application Gateway can affinitize the traffic to the same server within the cluster.
- For Application Gateway, health probes are used to check backend health and take servers out of rotation when they are unhealthy. In Azure Front Door, health probes are not only used for tracking backend health and taking unhealthy servers out of rotation, but also to route traffic to servers based on latency, priority, and weights.
- Unlike Application Gateway, you cannot configure a custom probe status on Azure Front Door. Only responses with 200 OK will be accepted.
- The Azure Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities; SQL injection and cross-site scripting are among the most common attacks. WAF coupled with either Application Gateway or AFD stops denial-of-service and targeted application attacks at the Azure network edge, close to attack sources, before they enter your virtual network.
Below are some of our recommended best practices:
- Don’t rely on your web application to handle SQL Injection/Cross-site scripting, etc. Leverage WAF with Application Gateway or Azure Front Door.
- Integrate Application Gateway with Key Vault to store Application Gateway certificates in Key Vault. You can also upload certificates directly to Application Gateway, but it's recommended to keep/centralize the certificates in Key Vault and reference them from Application Gateway.
- Deploy Application Gateway across multiple Availability Zones if the rest of the solution is also spread across multiple Availability Zones. Application Gateway v2 can be deployed across multiple Availability Zones and also supports autoscaling.
- Enable WAF in detection mode first to ensure that the WAF doesn't block requests; then, once you’re confident there are no false positives, enable prevention mode (see the sketch after this list).
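Here is a hedged sketch of that detection-then-prevention flow for a WAF v1-style configuration (names are hypothetical; Application Gateway v2 WAF policies use az network application-gateway waf-policy instead):

```bash
# Start in detection mode and review the WAF logs for false positives:
az network application-gateway waf-config set \
  --resource-group my-rg --gateway-name my-appgw \
  --enabled true --firewall-mode Detection \
  --rule-set-type OWASP --rule-set-version 3.1

# Once confident, switch to prevention mode:
az network application-gateway waf-config set \
  --resource-group my-rg --gateway-name my-appgw \
  --enabled true --firewall-mode Prevention \
  --rule-set-type OWASP --rule-set-version 3.1
```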
In some scenarios, it could also make sense to use both Azure Front Door and Application Gateway for an application. You could use Azure Front Door to globally load balance application traffic and then use Application Gateway to provide fine-tuned load balancing within a region. It is important to understand the application requirements and then select the right load balancing service. You can consult our decision tree for load balancing options here. There is also a great video you can view here: Picking the right Azure Load Balancing Solution - YouTube.
Monitoring
Below is a conceptual architecture of monitoring to assist in understanding how the many components work together.
So how does Azure Monitor work? The Azure Monitor documentation is a good place to start.
It all starts with collecting telemetry. Azure Monitor can collect data from applications, networks, and infrastructure, and you can also ingest your own custom data. All the data is stored in centralized, fully managed logs and metrics stores. So, what can you do with all this data?
- Typically, you start with insights, which are end-to-end experiences we have put together for canonical scenarios or resources such as applications, containers, VMs, network, storage, etc. Insights provide guidance and help you troubleshoot and root-cause issues quickly with a drill-down experience.
- You may, in certain cases, just want to visualize the data. For that we provide Azure dashboards, Power BI integration, and a more native experience called Workbooks.
- After visualizing the data, you may form some hypotheses and want to test them out. For that you need access to the raw data stores, so we provide metrics explorer and a powerful big data platform called Log Analytics that is capable of querying petabytes of data within seconds.
- If you want to be proactive and take corrective actions, you can create alerts and runbooks to automatically remediate an issue. Or, if it’s a capacity issue, you can choose to scale in or scale out (see the autoscale sketch after this list).
- Finally, we know that monitoring isn’t done in silos. Azure Monitor provides out-of-the-box integration with popular ITSM and DevOps tools. You can also use APIs, Event Hubs, and Logic Apps to build custom integrations.
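As an example of the scaling point above, here is a hedged autoscale sketch for an App Service plan (names and thresholds are hypothetical):

```bash
# Autoscale the App Service plan between 1 and 5 instances based on CPU:
PLAN_ID=$(az appservice plan show --resource-group my-rg --name my-plan \
  --query id --output tsv)
az monitor autoscale create --resource-group my-rg --resource "$PLAN_ID" \
  --name my-plan-autoscale --min-count 1 --max-count 5 --count 2
az monitor autoscale rule create --resource-group my-rg \
  --autoscale-name my-plan-autoscale \
  --condition "CpuPercentage > 70 avg 10m" --scale out 1
az monitor autoscale rule create --resource-group my-rg \
  --autoscale-name my-plan-autoscale \
  --condition "CpuPercentage < 30 avg 10m" --scale in 1
```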
Application Insights
Application Insights, a feature of Azure Monitor, is an extensible Application Performance Management (APM) service for developers and DevOps professionals. Use it to monitor your live applications. It will automatically detect performance anomalies, and includes powerful analytics tools to help you diagnose issues and to understand what users actually do with your app. It's designed to help you continuously improve performance and usability. It works for apps on a wide variety of platforms including .NET, .NET Core, Node.js, Java, and Python hosted on-premises, hybrid, or any public cloud. It integrates with your DevOps process and has connection points to a variety of development tools. It can monitor and analyze telemetry from mobile apps by integrating with Visual Studio App Center.
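To give a flavor of the wiring, here is a hedged sketch that creates an Application Insights component and points a web app at it (names are hypothetical; the az monitor app-insights commands live in the application-insights CLI extension):

```bash
# Create the component and hand its instrumentation key to the web app:
az extension add --name application-insights
IKEY=$(az monitor app-insights component create \
  --app myapp-insights --location eastus --resource-group my-rg \
  --query instrumentationKey --output tsv)
az webapp config appsettings set --resource-group my-rg --name myapp \
  --settings APPINSIGHTS_INSTRUMENTATIONKEY="$IKEY"
```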
Application Insights monitors:
- Request rates, dependency rates, response times, and failure rates for pages and external services.
- Exceptions - Both server and browser exceptions are reported.
- Page views and load performance - reported by your users' browsers.
- AJAX calls from web pages - rates, response times, and failure rates.
- User and session counts.
- Performance counters.
- Host diagnostics from Docker or Azure.
- Diagnostic trace logs.
- Custom events and metrics that you write yourself in the client or server code, to track business events such as items sold or games won.
Log Analytics
Log Analytics is a tool in the Azure portal used to edit and run log queries with data in Azure Monitor Logs. You may write a simple query that returns a set of records and then use features of Log Analytics to sort, filter, and analyze them. Or you may write a more advanced query to perform statistical analysis and visualize the results in a chart to identify a particular trend. Whether you work with the results of your queries interactively or use them with other Azure Monitor features such as log query alerts or workbooks, Log Analytics is the tool that you're going to use to write and test them.
Kusto Query Language
A Kusto query is a read-only request to process data and return results. The request is stated in plain text, using a data-flow model designed to make the syntax easy to read, author, and automate. The query uses schema entities that are organized in a hierarchy similar to SQL's: databases, tables, and columns. Along with this Kusto Query Reference, if you have a SQL background, you will find the SQL to Kusto Query cheat sheet useful.
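As a small example, here is a hedged sketch that runs a KQL query from the CLI against App Service HTTP logs in a Log Analytics workspace (the workspace GUID is a placeholder; this assumes diagnostic logs are flowing to the workspace, and the command may require the log-analytics CLI extension):

```bash
# Count server errors by URL path over the last hour:
az monitor log-analytics query \
  --workspace "00000000-0000-0000-0000-000000000000" \
  --analytics-query 'AppServiceHTTPLogs
    | where TimeGenerated > ago(1h) and ScStatus >= 500
    | summarize errors = count() by CsUriStem
    | order by errors desc' \
  --output table
```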
Networking Your App
Networking your applications can be challenging as there are so many options from connectivity and security perspectives. We will try to take some of the mystery out of the various networking options by presenting concepts and best practices as you are considering networking for your applications.
VNet Integration
VNet Integration is a feature that allows outbound connectivity from web applications to a selected subnet within a VNet. This allows the web application to access resources within the VNet. If the VNet is further connected to an on-premises data center via ExpressRoute or VPN connectivity, then access to those on-premises resources is also possible.
VNet Integration gives your web application access to resources in your virtual network but doesn't grant inbound private access to your web application from the virtual network. Private site access refers to making your app only accessible from a private network such as from within an Azure virtual network. VNet Integration is only for making outbound calls from your app into your VNet.
There are two forms of the VNet Integration feature:
- Regional VNet Integration: When you connect to Azure Resource Manager virtual networks in the same region, you must have a dedicated subnet in the VNet you're integrating with.
- Gateway-required VNet Integration: When you connect directly to VNet in other regions or to a classic virtual network in the same region, you need an Azure Virtual Network gateway provisioned in the target VNet.
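A minimal regional VNet Integration sketch with the Azure CLI (all names and address ranges are hypothetical):

```bash
# Dedicated subnet for the integration, then wire up the web app:
az network vnet subnet create --resource-group my-rg \
  --vnet-name my-vnet --name appsvc-integration \
  --address-prefixes 10.0.2.0/24
az webapp vnet-integration add --resource-group my-rg --name myapp \
  --vnet my-vnet --subnet appsvc-integration
```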
Network Security Groups (NSGs)
When dealing with solutions that integrate with VNets, there may be cases where Network Security Group restrictions have been applied to your subnets. If there are issues accessing applications that have been integrated into a VNet, this should be one of the first places you look when diagnosing connectivity issues. Where NSGs are involved, it is important to review the inbound and outbound security rules to ensure the intended traffic filtering is in place. Do remember that currently, when Private Link is used, NSG rules will not be evaluated (however, NSG and UDR support for private endpoints is in public preview in select regions). Please see the later section on Private Link and Private Endpoints.
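When you do need to review those rules, a quick starting point is to list them, including the default rules (names are hypothetical):

```bash
# List inbound/outbound rules on the subnet's NSG, including defaults:
az network nsg rule list --resource-group my-rg \
  --nsg-name my-subnet-nsg --include-default --output table
```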
Azure Firewall
Based on your solution requirements, all inbound traffic to the web application should go through Web Application Firewall enabled with either Azure Front Door or Application Gateway. However, there are scenarios where one needs to send outbound traffic from the web application. For such a scenario, Azure Firewall may be configured to route egress traffic. The web application will need to have VNet integration set up with the VNet that has the Azure Firewall deployed.
There may be a use case where one needs a dedicated static outbound IP shared by web applications hosted on an App Service plan. When a web application is deployed on an App Service plan, the platform provides a list of possible outbound IPs associated with the web application, any of which may be assigned. To advertise the web application via a dedicated static outbound IP, traffic can be routed via Azure Firewall, and the assigned public IP of the firewall can then be advertised as the dedicated static outbound IP. Of course, VNet Integration will need to be enabled to the VNet that is hosting the Azure Firewall.
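A hedged sketch of the routing setup (names are hypothetical; the az network firewall commands live in the azure-firewall CLI extension):

```bash
# Discover the firewall's private IP for the route's next hop:
FW_PRIVATE_IP=$(az network firewall show --resource-group my-rg --name my-fw \
  --query "ipConfigurations[0].privateIpAddress" --output tsv)

# Send all outbound traffic from the integration subnet through the firewall:
az network route-table create --resource-group my-rg --name egress-rt
az network route-table route create --resource-group my-rg \
  --route-table-name egress-rt --name default-via-firewall \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance --next-hop-ip-address "$FW_PRIVATE_IP"
az network vnet subnet update --resource-group my-rg \
  --vnet-name my-vnet --name appsvc-integration --route-table egress-rt

# Route all of the app's outbound traffic into the VNet so the route applies:
az webapp config appsettings set --resource-group my-rg --name myapp \
  --settings WEBSITE_VNET_ROUTE_ALL=1
```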
Data exfiltration is indeed a threat where unauthorized data transfer can occur, so do use Azure Firewall to protect against data exfiltration concerns. Azure Firewall can be leveraged as a reverse proxy to restrict access to only authorized PaaS services where Private Link is not yet supported. Please do note that network controls alone are not sufficient to block data exfiltration, so further hardening with proper identity controls, key protection, and encryption is also needed.
Private Endpoints and Private Link
Private Endpoints can be used to disable all public access and thus secure your App Service web application when you want access only from within the same VNet, other peered VNets, or from on-premises through a VPN or ExpressRoute connection. A private endpoint is a network interface that uses a private IP address from your virtual network, effectively bringing the web application into your virtual network. The Private Endpoint for your web application allows clients located in your private network to securely access the app over Private Link, which is created automatically when you configure the Private Endpoint for your App Service web application. Private Endpoint is unidirectional and can be used only to secure inbound traffic. For restricting outbound traffic, refer to the Azure Firewall and VNet Integration sections.
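A minimal Private Endpoint sketch with the Azure CLI (names are hypothetical; you will also typically need a privatelink.azurewebsites.net private DNS zone so clients resolve the app to its private IP):

```bash
# Point a private endpoint in the pe-subnet at the web app:
WEBAPP_ID=$(az webapp show --resource-group my-rg --name myapp \
  --query id --output tsv)
az network private-endpoint create --resource-group my-rg \
  --name myapp-pe \
  --vnet-name my-vnet --subnet pe-subnet \
  --private-connection-resource-id "$WEBAPP_ID" \
  --group-id sites \
  --connection-name myapp-pe-conn
```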
Using Private Endpoint for your web application enables you to:
- Secure your web application by configuring the Private Endpoint, eliminating public exposure.
- Securely connect to the web application from on-premises networks that connect to the VNet using a VPN or ExpressRoute private peering.
- Avoid any data exfiltration from your VNet.
Monitoring avenues to leverage:
- Bytes In / Out on Portal ‘Monitoring’ Blade
- Network Watcher Connection Monitoring
- Firewall Logs and Metrics
- Routing traffic through Network Virtual Appliances (NVAs)
- Private Link service: Configuring Proxy Protocol to identify source Private Endpoints.
- Private Link service: NAT Port Availability
- NSG Flow Logs (Roadmap)
Private Link for App Service web applications does have some limitations:
- Private Link requires a premium SKU web application
- Currently NSGs do not apply to Private Link (NSG and UDR support for private endpoints is in public preview in select regions).
- When Private Link is enabled, the access restrictions on the web application are not evaluated.
- Private Endpoint does not support FTP access to the web application.
- Currently you cannot monitor traffic between the Private Endpoint and Private Link services. However, you can monitor the data before and after it arrives.
If you just need a secure connection between your VNet and your web application, a Service Endpoint is the simplest solution (explained in the next section). But if you also need to expose your app over a private IP or reach the web application from on-premises through an Azure Gateway, a regionally peered VNet, or a globally peered VNet, Private Endpoint is the best solution.
App Service Access Restrictions
App Service access restrictions are for the most part straightforward when it comes to blocking a set of IPv4 or IPv6 address blocks, but just remember that it expects CIDR notation to establish the range of IP addresses to be restricted. You will need to add a restriction for each CIDR block.
With respect to the Virtual Network option, essentially what you are specifying is that you only want certain VNets to have access to your App Service application. This is also called a service endpoint-based rule, where you will have to configure a Microsoft.Web service endpoint on each of the subnets that should access your App Service application. This provides an optimized route over the Azure network backbone, allowing private IP addresses in your VNet to access your App Service web application without needing a public IP address.
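A sketch of a service endpoint-based rule (names are hypothetical):

```bash
# Service endpoint on the calling subnet, then an allow rule on the app:
az network vnet subnet update --resource-group my-rg \
  --vnet-name my-vnet --name frontend-subnet \
  --service-endpoints Microsoft.Web
az webapp config access-restriction add --resource-group my-rg --name myapp \
  --rule-name allow-frontend-subnet --action Allow --priority 100 \
  --vnet-name my-vnet --subnet frontend-subnet
```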
A service tag is similar to a service endpoint, but in this case what you are specifying is that you only want certain Azure services, such as Application Gateway or Azure Front Door, to be able to access your App Service. For public internet-based App Services, you will most likely use this setting so your App Service web application is protected behind Application Gateway or Azure Front Door. A service tag represents a list of IP address prefixes, so the beauty of service tags is that as the address list changes you don’t have to worry about managing that IP address list. Microsoft will manage this list for specific Azure services that provide a service tag.
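And a sketch of a service tag-based rule (names are hypothetical; verify that your CLI version supports --service-tag):

```bash
# Allow only traffic that originates from Azure Front Door's backends:
az webapp config access-restriction add --resource-group my-rg --name myapp \
  --rule-name allow-frontdoor --action Allow --priority 100 \
  --service-tag AzureFrontDoor.Backend
```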