The Application Gateway for Containers is a cutting-edge Azure service that offers load balancing and dynamic traffic management for applications running in a Kubernetes cluster. As part of Azure's Application Load Balancing portfolio, this innovative product provides an enhanced experience for developers and administrators. The Application Gateway for Containers represents the evolution of the Application Gateway Ingress Controller (AGIC) and enables Azure Kubernetes Service (AKS) customers to leverage Azure's native Application Gateway load balancer. In this article, we will guide you through the process of deploying an Azure Kubernetes Service (AKS) cluster with an Application Gateway for Containers in a fully automated fashion, using either a bring your own (BYO) or managed by ALB Controller deployment.
For more information, see:
- What is Application Gateway for Containers?
- Application Gateway for Containers components
- Quickstart: Deploy Application Gateway for Containers ALB Controller
- Quickstart: Create Application Gateway for Containers - Bring your own deployment
- Quickstart: Create Application Gateway for Containers managed by ALB Controller
- Advanced load balancing scenarios with the new Azure Application Gateway for Containers
Bicep templates, companion code, Grafana dashboards, and Visio diagrams are in this GitHub repository.
Prerequisites
- An active Azure subscription. If you don't have one, create a free Azure account before you begin.
- Visual Studio Code installed on one of the supported platforms along with the Bicep extension.
- Azure CLI version 2.50.0 or later installed. To install or upgrade, see Install Azure CLI.
- The `aks-preview` Azure CLI extension version 0.5.145 or later installed.

You can run `az --version` to verify the versions above.

To install the `aks-preview` extension, run the following command:

```bash
az extension add --name aks-preview
```

Run the following command to update to the latest version of the extension:

```bash
az extension update --name aks-preview
```
Architecture
This sample provides a comprehensive set of Bicep modules that facilitate the deployment of an Azure Kubernetes Service (AKS) cluster with an integrated Application Gateway for Containers. Additionally, it offers modules for the optional deployment of other essential Azure services, including the Azure Monitor managed service for Prometheus resource and an Azure Managed Grafana instance for efficient monitoring of the cluster's performance and overall health status.
The following diagram illustrates the architecture and network topology implemented by this sample.
The Bicep modules are parametric, so you can choose any network plugin; however, Application Gateway for Containers currently supports only Azure CNI with static IP allocation and Azure CNI with dynamic IP allocation. In addition, this sample shows how to deploy an Azure Kubernetes Service cluster with the following extensions and features:
- Istio-based service mesh add-on for Azure Kubernetes Service provides an officially supported and tested Istio integration for Azure Kubernetes Service (AKS).
- API Server VNET Integration allows you to enable network communication between the API server and the cluster nodes without requiring a private link or tunnel. AKS clusters with API Server VNET integration provide a series of advantages, for example, they can have public network access or private cluster mode enabled or disabled without redeploying the cluster. For more information, see Create an Azure Kubernetes Service cluster with API Server VNet Integration.
- Azure NAT Gateway to manage outbound connections initiated by AKS-hosted workloads.
- Event-driven Autoscaling (KEDA) add-on is a single-purpose and lightweight component that strives to make application autoscaling simple and is a CNCF Incubation project.
- Dapr extension for Azure Kubernetes Service (AKS) allows you to install Dapr, a portable, event-driven runtime that simplifies building resilient, stateless, and stateful applications that run on the cloud and edge and embrace the diversity of languages and developer frameworks. With its sidecar architecture, Dapr helps you tackle the challenges that come with building microservices and keeps your code platform agnostic.
- Flux V2 extension allows you to deploy workloads to an Azure Kubernetes Service (AKS) cluster via GitOps. For more information, see GitOps Flux v2 configurations with AKS and Azure Arc-enabled Kubernetes.
- Vertical Pod Autoscaling automatically sets resource requests and limits on containers per workload based on past usage, ensuring pods are scheduled onto nodes that have the required CPU and memory resources. For more information, see Kubernetes Vertical Pod Autoscaling.
- Azure Key Vault Provider for Secrets Store CSI Driver provides a variety of methods of identity-based access to your Azure Key Vault.
- Image Cleaner to clean up stale images on your Azure Kubernetes Service cluster.
- Azure Kubernetes Service (AKS) Network Observability is an important part of maintaining a healthy and performant Kubernetes cluster. By collecting and analyzing data about network traffic, you can gain insights into how your cluster is operating and identify potential problems before they cause outages or performance degradation.
- Windows Server node pool allows running Windows Server containers on an Azure Kubernetes Service (AKS) cluster. You can disable the deployment of a Windows node pool.
In a production environment, we strongly recommend deploying a private AKS cluster with Uptime SLA. For more information, see private AKS cluster with a Public DNS address. Alternatively, you can deploy a public AKS cluster and secure access to the API server using authorized IP address ranges.
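For example, with a public cluster you can restrict access to the API server to a known set of CIDR ranges. A minimal sketch, assuming an existing cluster and placeholder names:

```bash
# Restrict API server access to specific CIDR ranges (all values are placeholders).
az aks update \
  --name <cluster-name> \
  --resource-group <resource-group> \
  --api-server-authorized-ip-ranges 73.140.245.0/24
```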
The Bicep modules deploy the following Azure resources:
- Microsoft.Network/virtualNetworks: a new virtual network with seven subnets:
  - `SystemSubnet`: this subnet is used for the agent nodes of the `system` node pool.
  - `UserSubnet`: this subnet is used for the agent nodes of the `user` node pool.
  - `PodSubnet`: this subnet is used to allocate private IP addresses to pods dynamically.
  - `ApiServerSubnet`: API Server VNET Integration projects the API server endpoint directly into this delegated subnet in the virtual network where the AKS cluster is deployed.
  - `AzureBastionSubnet`: a subnet for the Azure Bastion Host.
  - `VmSubnet`: a subnet for a jump-box virtual machine used to connect to the (private) AKS cluster and for the private endpoints.
  - `AppGwForConSubnet`: this subnet contains the proxies created by the Application Load Balancer control plane to handle and distribute the ingress traffic to the AKS-hosted pods.
- Microsoft.ServiceNetworking/trafficControllers: an Application Gateway for Containers used as a service proxy to handle load balancing, routing, and TLS termination for AKS-hosted workloads. There are two deployment strategies for management of Application Gateway for Containers. You can specify the deployment strategy using the `applicationGatewayForContainersType` parameter in the `main.bicep` module:
  - Bring your own (BYO) deployment: If you choose this strategy, the Bicep module creates the Application Gateway for Containers resource in the target deployment resource group. In this case, you are responsible for creating the Association and Frontend child resources for the Application Gateway for Containers using the Azure Portal, Bicep, Azure CLI, Terraform, or Azure REST API. Every time you want to create a new Gateway or an Ingress object in your Azure Kubernetes Service (AKS) cluster, it's your responsibility to provision a Frontend child resource for the Application Gateway for Containers upfront and reference it in the annotations of the Gateway or Ingress object. You are also responsible for deleting any Frontend child resource after deleting a Gateway or Ingress object in Kubernetes.
  - Managed by ALB Controller: In this deployment strategy, the Azure Load Balancer (ALB) Controller, deployed in AKS via a Helm chart by the deployment script, is responsible for the lifecycle of the Application Gateway for Containers resource and its child resources. The ALB Controller creates the Application Gateway for Containers resource in the AKS node resource group when an `ApplicationLoadBalancer` Kubernetes object is defined on the cluster. Every time you want to create a new Gateway or an Ingress object that references the `ApplicationLoadBalancer` Kubernetes object in the annotations, the ALB Controller provisions a new Frontend resource and manages its lifecycle based on the lifecycle of the Gateway or Ingress object.
- Microsoft.ContainerService/managedClusters: a public or private Azure Kubernetes Service (AKS) cluster composed of:
  - A `system` node pool in a dedicated subnet. The default node pool hosts only critical system pods and services. The worker nodes have a node taint that prevents application pods from being scheduled on this node pool.
  - A `user` node pool hosting user workloads and artifacts in a dedicated subnet.
  - A `windows` node pool hosting Windows Server containers. This node pool is created only when the value of the `windowsAgentPoolEnabled` parameter equals `true`.
- Microsoft.ManagedIdentity/userAssignedIdentities: a user-defined managed identity used by the AKS cluster to create additional resources like load balancers and managed disks in Azure.
- Microsoft.Compute/virtualMachines: Bicep modules can optionally create a jump-box virtual machine to manage the private AKS cluster.
- Microsoft.Network/bastionHosts: a separate Azure Bastion is deployed in the AKS cluster virtual network to provide SSH connectivity to both agent nodes and virtual machines.
- Microsoft.Network/natGateways: a bring-your-own (BYO) Azure NAT Gateway to manage outbound connections initiated by AKS-hosted workloads. The NAT Gateway is associated with the `SystemSubnet`, `UserSubnet`, and `PodSubnet` subnets. The `outboundType` property of the cluster is set to `userAssignedNatGateway` to specify that a BYO NAT Gateway is used for outbound connections. NOTE: you can update the `outboundType` after cluster creation, and this will deploy or remove resources as required to put the cluster into the new egress configuration. For more information, see Updating outboundType after cluster creation.
- Microsoft.Storage/storageAccounts: this storage account is used to store the boot diagnostics logs of both the service provider and service consumer virtual machines. Boot Diagnostics is a debugging feature that allows you to view console output and screenshots to diagnose virtual machine status.
- Microsoft.ContainerRegistry/registries: an Azure Container Registry (ACR) to build, store, and manage container images and artifacts in a private registry for all container deployments.
- Microsoft.KeyVault/vaults: an Azure Key Vault used to store secrets, certificates, and keys that can be mounted as files by pods using Azure Key Vault Provider for Secrets Store CSI Driver. For more information, see Use the Azure Key Vault Provider for Secrets Store CSI Driver in an AKS cluster and Provide an identity to access the Azure Key Vault Provider for Secrets Store CSI Driver.
- Microsoft.Network/privateEndpoints: an Azure Private Endpoint is created for each of the following resources:
- Azure Container Registry
- Azure Key Vault
- Azure Storage Account
- API Server when deploying a private AKS cluster.
- Microsoft.Network/privateDnsZones: an Azure Private DNS Zone is created for each of the following resources:
- Azure Container Registry
- Azure Key Vault
- Azure Storage Account
- API Server when deploying a private AKS cluster.
- Microsoft.Network/networkSecurityGroups: subnets hosting virtual machines and Azure Bastion Hosts are protected by Azure Network Security Groups that are used to filter inbound and outbound traffic.
- Microsoft.Monitor/accounts: An Azure Monitor workspace is a unique environment for data collected by Azure Monitor. Each workspace has its own data repository, configuration, and permissions. Log Analytics workspaces contain logs and metrics data from multiple Azure resources, whereas Azure Monitor workspaces currently contain only metrics related to Prometheus. Azure Monitor managed service for Prometheus allows you to collect and analyze metrics at scale using a Prometheus-compatible monitoring solution based on Prometheus. This fully managed service allows you to use the Prometheus query language (PromQL) to analyze and alert on the performance of monitored infrastructure and workloads without having to operate the underlying infrastructure. The primary method for visualizing Prometheus metrics is Azure Managed Grafana. You can connect your Azure Monitor workspace to an Azure Managed Grafana instance to visualize Prometheus metrics using a set of built-in and custom Grafana dashboards.
- Microsoft.Dashboard/grafana: an Azure Managed Grafana instance used to visualize the Prometheus metrics generated by the Azure Kubernetes Service (AKS) cluster deployed by the Bicep modules. Azure Managed Grafana is a fully managed service for analytics and monitoring solutions. It's supported by Grafana Enterprise, which provides extensible data visualizations. This managed service allows you to quickly and easily deploy Grafana dashboards with built-in high availability and to control access with Azure security.
- Microsoft.OperationalInsights/workspaces: a centralized Azure Log Analytics workspace is used to collect the diagnostics logs and metrics from all the Azure resources:
- Azure Kubernetes Service cluster
- Application Gateway for Containers
- Azure Key Vault
- Azure Network Security Group
- Azure Container Registry
- Azure Storage Account
- Azure jump-box virtual machine
- Microsoft.Resources/deploymentScripts: a deployment script is used to run the `install-alb-controller.sh` Bash script that creates the Application Load Balancer (ALB) Controller via Helm along with Cert-Manager. For more information on deployment scripts, see Use deployment scripts in Bicep.
- Microsoft.Insights/actionGroups: an Azure Action Group to send emails and SMS notifications to system administrators when alerts are triggered.
The Bicep modules provide the flexibility to selectively deploy the following Azure resources based on your requirements:
- Microsoft.CognitiveServices/accounts: an Azure OpenAI Service with a GPT-3.5 model used by an AI application like a chatbot. Azure OpenAI Service gives customers advanced language AI with OpenAI GPT-4, GPT-3, Codex, and DALL-E models with Azure's security and enterprise promise. Azure OpenAI co-develops the APIs with OpenAI, ensuring compatibility and a smooth transition from one to the other.
- Microsoft.ManagedIdentity/userAssignedIdentities: a user-defined managed identity used by the chatbot application to acquire a security token via Azure AD workload identity to call the Chat Completion API of the ChatGPT model provided by the Azure OpenAI Service.
NOTE
You can find the `architecture.vsdx` file used for the diagram under the `visio` folder.
What is Bicep?
Bicep is a domain-specific language (DSL) that uses a declarative syntax to deploy Azure resources. It provides concise syntax, reliable type safety, and support for code reuse. Bicep offers the best authoring experience for your infrastructure-as-code solutions in Azure.
What is Gateway API?
Kubernetes Ingress resources have evolved into the more comprehensive and powerful Kubernetes Gateway API. Ingress controllers and the Gateway API are both Kubernetes mechanisms for managing traffic routing and load balancing. While Ingress controllers serve as entry points for external traffic, they have limitations in terms of flexibility and extensibility. The Kubernetes Gateway API emerged as a solution to address these limitations. Designed to be generic, expressive, extensible, and role-oriented, the Gateway API is a modern set of APIs for defining L4 and L7 routing rules in Kubernetes.
The Gateway API offers superior functionality compared to Ingress controllers, as it separates listeners and routes into distinct Kubernetes objects, `Gateway` and `HTTPRoute`. This separation allows different individuals with distinct roles and permissions to deploy them in separate namespaces. Additionally, the Gateway API provides advanced traffic management capabilities, including layer 7 HTTP/HTTPS request forwarding based on criteria such as hostname, path, headers, query string, methods, and ports. It also offers SSL termination and TLS policies for secure traffic management. These features grant better control and customization of traffic routing. The design of the Gateway API was driven by the following goals to address and resolve issues and limitations of ingress controllers:
- Role-oriented: The Gateway API comprises API resources that model organizational roles involved in using and configuring Kubernetes service networking.
- Portable: Similar to Ingress, the Gateway API is designed to be a portable specification supported by multiple implementations.
- Expressive: The Gateway API resources support core functionality such as header-based matching, traffic weighting, and other capabilities that were previously only possible through custom annotations in Ingress.
- Extensible: The Gateway API allows for the linking of custom resources at different layers of the API, enabling granular customization within the API structure.
Additional notable capabilities of the Gateway API include:
- GatewayClasses: Formalizes types of load-balancing implementations, making it easier for users to understand available capabilities through the Kubernetes resource model.
- Shared Gateways and cross-Namespace support: Allows multiple Route resources to attach to the same Gateway, enabling load balancer and VIP sharing among teams and across Namespaces without direct coordination.
- Typed Routes and typed backends: The Gateway API supports typed Route resources and different types of backends, providing flexibility in supporting various protocols (HTTP, gRPC) and backend targets (Kubernetes Services, storage buckets, functions).
- Experimental Service mesh support with the GAMMA initiative: The Gateway API enables the association of routing resources with Service resources, allowing the configuration of service meshes and ingress controllers.
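To make the role separation concrete, the following sketch pairs a Gateway deployed by a cluster operator with an HTTPRoute deployed by an application team in another namespace. All names and the gateway class are hypothetical, not taken from this sample:

```bash
# Illustrative Gateway and HTTPRoute pair (hypothetical names and gateway class).
kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: example-gateway
  namespace: infra
spec:
  gatewayClassName: example-gateway-class
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: example-route
  namespace: app
spec:
  parentRefs:
  - name: example-gateway
    namespace: infra
  hostnames:
  - "shop.contoso.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: shop-service
      port: 80
EOF
```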
When to Choose Ingress Controllers or Gateway API
Ingress resources are suitable for specific use cases:
- Ingress controllers are straightforward to set up and are well-suited for smaller, less complex Kubernetes deployments that prioritize easy configuration.
- If you currently have Ingress controllers configured in your Kubernetes cluster and they meet your requirements effectively, there may be no immediate need to transition to the Kubernetes Gateway API.
Gateway API is the recommended option in the following situations:
- When dealing with complex routing configurations, traffic splitting, and advanced traffic management strategies, the flexibility provided by Kubernetes Gateway API's Route resources is essential.
- In cases where networking requirements call for custom solutions or the integration of third-party plugins, the Kubernetes Gateway API's CRD-based approach offers enhanced extensibility.
What is Application Gateway for Containers?
The Application Gateway for Containers is a cutting-edge Azure service that offers load balancing and dynamic traffic management for applications running in a Kubernetes cluster. As part of Azure's Application Load Balancing portfolio, this innovative product provides an enhanced experience for developers and administrators. The Application Gateway for Containers represents the evolution of the Application Gateway Ingress Controller (AGIC) and enables Azure Kubernetes Service (AKS) customers to leverage Azure's native Application Gateway load balancer. Azure Application Gateway for Containers enables you to host multiple web applications on the same port, utilizing unique backend services. This allows for efficient multi-site hosting and simplifies the management of your containerized applications. The Application Gateway for Containers fully supports both the Gateway API and Ingress API Kubernetes objects for traffic load balancing. For more information, see:
- What is Application Gateway for Containers?
- Application Gateway for Containers components
- Quickstart: Deploy Application Gateway for Containers ALB Controller
- Quickstart: Create Application Gateway for Containers - Bring your own deployment
- Quickstart: Create Application Gateway for Containers managed by ALB Controller
- Advanced load balancing scenarios with the new Azure Application Gateway for Containers
Deployment Strategies
Azure Application Gateway for Containers supports two main deployment strategies:
- Bring your own (BYO) deployment: If you choose this strategy, the Bicep module creates the Application Gateway for Containers resource in the target deployment resource group. In this case, you are responsible for creating the Association and Frontend child resources for the Application Gateway for Containers using the Azure Portal, Bicep, Azure CLI, Terraform, or Azure REST API (see the sketch after this list). Every time you want to create a new Gateway or an Ingress object in your Azure Kubernetes Service (AKS) cluster, it's your responsibility to provision a Frontend child resource for the Application Gateway for Containers upfront and reference it in the annotations of the Gateway or Ingress object. After deleting a Gateway or Ingress object in Kubernetes, you are also responsible for deleting any corresponding Frontend child resource.
- Managed by the Application Load Balancer (ALB) Controller: In this deployment strategy, the Azure Load Balancer (ALB) Controller, deployed in AKS via a Helm chart by the deployment script, is responsible for the lifecycle of the Application Gateway for Containers resource and its sub-resources. The ALB Controller creates an Application Gateway for Containers resource in the AKS node resource group when an `ApplicationLoadBalancer` Kubernetes object is defined on the cluster. Every time you want to create a new Gateway or an Ingress object that references the `ApplicationLoadBalancer` Kubernetes object in the annotations, the ALB Controller provisions a new Frontend resource and manages its lifecycle based on the lifecycle of the Gateway or Ingress object.
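As a sketch of the BYO flow, the frontend and association child resources can be created with the Azure CLI `alb` extension before any Gateway or Ingress object references them. The resource names below are placeholders:

```bash
# BYO sketch: create frontend and association child resources for an existing
# Application Gateway for Containers (requires the 'alb' Azure CLI extension).
az extension add --name alb

az network alb frontend create \
  --resource-group <resource-group> \
  --alb-name <agc-name> \
  -n test-frontend

az network alb association create \
  --resource-group <resource-group> \
  --alb-name <agc-name> \
  -n association-1 \
  --subnet <subnet-resource-id>
```

A Gateway object then references the Application Gateway for Containers resource id via the `alb.networking.azure.io/alb-id` annotation and the frontend by name, whereas in the managed strategy the Gateway references the `ApplicationLoadBalancer` Kubernetes object via the `alb.networking.azure.io/alb-namespace` and `alb.networking.azure.io/alb-name` annotations.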
Application Gateway for Containers Components
The components of Azure Application Gateway for Containers include:
- Core Components: Application Gateway for Containers is a parent Azure resource that manages the control plane, which handles the configuration and orchestration of proxies based on customer requirements. It serves as the parent resource for two important child resources: associations and frontends. These child resources are unique to each Application Gateway for Containers and cannot be shared with other instances of Application Gateway for Containers.
- Frontend: An Application Gateway for Containers frontend is a sub-resource of the parent Application Gateway for Containers in Azure. It acts as the entry point for client traffic directed towards a specific Application Gateway for Containers. Each frontend is unique and cannot be associated with multiple Application Gateway for Containers. It provides a unique fully qualified domain name (FQDN) that can be assigned to a customer's CNAME record. Currently, private IP addresses are not supported for frontends. Additionally, a single Application Gateway for Containers has the ability to support multiple frontends.
- Association: An Application Gateway for Containers association is a connection point into a virtual network and is a child resource of the Application Gateway for Containers. Application Gateway for Containers is designed to allow for multiple associations, but currently only one association is allowed. During the creation of an association, the necessary data plane is provisioned and connected to a subnet within the defined virtual network. Each association should have at least 256 available addresses in the subnet. If multiple Application Gateway for Containers are provisioned and each contains one association, the required number of available addresses should be n*256. It is important that all association resources match the same region as the parent Application Gateway for Containers resource. The subnet referenced by the association will contain the proxy components used to handle the ingress traffic to the Azure Kubernetes Service (AKS) cluster.
- Managed identity: A user-defined managed identity with appropriate permissions must be provided for the ALB controller to update the control plane.
- Application Load Balancer (ALB) Controller: The Application Gateway for Containers ALB Controller is a vital Kubernetes deployment that facilitates the seamless configuration and deployment of Application Gateway for Containers. By actively monitoring and responding to various Kubernetes Custom Resources and Resource configurations, such as Ingress, Gateway, and ApplicationLoadBalancer, the ALB Controller ensures efficient management of Application Gateway for Containers. Deployed using Helm, the ALB Controller comprises two essential pods. The first is the alb-controller pod, which takes charge of load balancing configuration for Application Gateway for Containers based on customer preferences and intent. The second is the alb-controller-bootstrap pod, responsible for effectively managing Custom Resource Definitions (CRDs) to further optimize the deployment process.
- Managed proxies: These proxies route traffic directly to pods within your Azure Kubernetes Service (AKS) cluster. To ensure direct addressability, the cluster and the proxies must belong to the same virtual network and be configured with Azure CNI. The Application Load Balancer (ALB) control plane creates the proxies inside the subnet referenced by the association. For this reason, the user-defined managed identity used by the Application Load Balancer (ALB) Controller needs to be assigned the Network Contributor role on this subnet, and the subnet needs at least a /24 address space.
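The Bicep module shown later automates the identity setup; done by hand, federating the ALB Controller service account with the managed identity would look roughly like the following sketch, assuming the default `azure-alb-system` namespace and `alb-controller-sa` service account used by this sample:

```bash
# Sketch: federate the ALB Controller service account with the user-assigned
# managed identity (identity, cluster, and resource group names are placeholders).
az identity federated-credential create \
  --name azure-alb-identity \
  --identity-name <alb-identity-name> \
  --resource-group <resource-group> \
  --issuer "$(az aks show --name <cluster-name> --resource-group <resource-group> \
    --query oidcIssuerProfile.issuerUrl --output tsv)" \
  --subject "system:serviceaccount:azure-alb-system:alb-controller-sa"
```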
Features and Benefits
Azure Application Gateway for Containers offers a range of features and benefits, including:
- Load Balancing: The service efficiently distributes incoming traffic across multiple containers, ensuring optimal performance and scalability. For more information, see Load balancing features.
- Implementation of Gateway API: Application Gateway for Containers supports the Gateway API, which allows for the definition of routing rules and policies in a Kubernetes-native way. For more information, see Implementation of Gateway API.
- Custom Health Probe: You can define custom health probes to monitor the health of your containers and automatically route traffic away from unhealthy instances. For more information, see Custom health probe for Application Gateway for Containers.
- Session Affinity: The service provides session affinity, allowing you to maintain a consistent user experience by routing subsequent requests from the same client to the same container. For more information, see Application Gateway for Containers session affinity overview.
- TLS Policy: Application Gateway for Containers supports TLS termination, allowing you to offload the SSL/TLS encryption and decryption process to the gateway. For more information, see Application Gateway for Containers TLS policy overview.
- Header Rewrites: Application Gateway for Containers offers the capability to rewrite HTTP headers of client requests and responses from backend targets. Header rewrites utilize the `IngressExtension` custom resource definition (CRD) of the Application Gateway for Containers. For more details, refer to the documentation on Header Rewrites for the Ingress API and Gateway API.
- URL Rewrites: Application Gateway for Containers allows you to modify the URL of a client request, including the hostname and/or path. When Application Gateway for Containers initiates the request to the backend target, it includes the newly rewritten URL. Additional information on URL Rewrites can be found in the documentation for the Ingress API and Gateway API.
Advanced Load Balancing
The Application Gateway for Containers offers an impressive array of traffic management features to enhance your application deployment:
- Layer 7 HTTP/HTTPS request forwarding capabilities based on prefix/exact match criteria, including hostname, path, headers, query strings, methods, and ports (80/443).
- Robust support for HTTPS traffic management, including SSL termination and end-to-end SSL encryption.
- Seamless integration with Ingress and Gateway API for streamlined configuration and management.
- Flexible traffic splitting and weighted round-robin functionality to distribute traffic efficiently.
- Mutual Authentication (mTLS) support for establishing secure connections to backend targets.
- Robust health checks to ensure backends are healthy and capable of handling traffic before they are registered.
- Automatic retries to optimize delivery of requests and handle potential failures gracefully.
- TLS Policies that allow for granular control over the encryption protocols and ciphers used for secure communication.
- Autoscaling capabilities to dynamically adjust resources based on workload demands.
- Built-in resilience to handle availability zone failures and ensure continuous operation of your applications.
With these comprehensive features, the Application Gateway for Containers empowers you to efficiently manage and optimize your traffic flow. For more information, see Load balancing features.
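As an example of the traffic-splitting capability, weighted `backendRefs` on an `HTTPRoute` distribute requests between two versions of a service. The names below are illustrative:

```bash
# Sketch: weighted round-robin between two backend versions (hypothetical names).
kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: traffic-split-route
  namespace: app
spec:
  parentRefs:
  - name: example-gateway
  rules:
  - backendRefs:
    - name: store-v1
      port: 8080
      weight: 90
    - name: store-v2
      port: 8080
      weight: 10
EOF
```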
Tutorials and Samples
You can use the following tutorials to begin your journey with the Application Gateway for Containers:
- Gateway API
- Ingress API
You can find scripts and YAML manifests for the above tutorials under the `tutorials` folder.
Deploy the Bicep modules
You can deploy the Bicep modules in the `bicep` folder using the `deploy.sh` Bash script located in the same folder. Specify a value for the following parameters in the `deploy.sh` script and `main.parameters.json` parameters file before deploying the Bicep modules.
- `prefix`: specifies a prefix for all the Azure resources.
- `authenticationType`: specifies the type of authentication when accessing the virtual machine. `sshPublicKey` is the recommended value. Allowed values: `sshPublicKey` and `password`.
- `applicationGatewayForContainersType`: specifies the deployment type for the Application Gateway for Containers:
  - `managed`: the Application Gateway for Containers resource and its child resources, association and frontends, are created and handled by the Azure Load Balancer Controller in the node resource group of the AKS cluster.
  - `byo`: the Application Gateway for Containers resource and its child resources are created in the target resource group. You are responsible for the provisioning and deletion of the association and frontend child resources.
- `vmAdminUsername`: specifies the name of the administrator account of the virtual machine.
- `vmAdminPasswordOrKey`: specifies the SSH key or password for the virtual machine.
- `aksClusterSshPublicKey`: specifies the SSH key or password for the AKS cluster agent nodes.
- `aadProfileAdminGroupObjectIDs`: when deploying an AKS cluster with Azure AD and Azure RBAC integration, this array parameter contains the list of Azure AD group object IDs that will have the admin role of the cluster.
- `keyVaultObjectIds`: specifies the object IDs of the service principals to configure in Key Vault access policies.
- `windowsAgentPoolEnabled`: specifies whether to create a Windows Server agent pool.
We suggest reading sensitive configuration data such as passwords or SSH keys from a pre-existing Azure Key Vault resource. For more information, see Use Azure Key Vault to pass secure parameter value during Bicep deployment.
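If you prefer to invoke the deployment directly rather than through the `deploy.sh` script, the call reduces to something like the following sketch; the resource group name and location are assumptions:

```bash
# Sketch: deploy the Bicep modules at resource group scope (placeholder values).
az group create \
  --name <resource-group> \
  --location <location>

az deployment group create \
  --resource-group <resource-group> \
  --template-file main.bicep \
  --parameters main.parameters.json
```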
Application Gateway for Containers Bicep Module
The following code from the `applicationGatewayForContainers.bicep` Bicep module is used to deploy an Application Gateway for Containers.
```bicep
// Parameters
@description('Specifies the name of the Application Gateway for Containers.')
param name string = 'dummy'
@description('Specifies whether the Application Gateway for Containers is managed or bring your own (BYO).')
@allowed([
'managed'
'byo'
])
param type string = 'managed'
@description('Specifies the workspace id of the Log Analytics used to monitor the Application Gateway for Containers.')
param workspaceId string
@description('Specifies the location of the Application Gateway for Containers.')
param location string
@description('Specifies the name of the existing AKS cluster.')
param aksClusterName string
@description('Specifies the name of the AKS cluster node resource group. This needs to be passed as a parameter and cannot be calculated inside this module.')
param nodeResourceGroupName string
@description('Specifies the name of the existing virtual network.')
param virtualNetworkName string
@description('Specifies the name of the subnet which contains the Application Gateway for Containers.')
param subnetName string
@description('Specifies the namespace for the Application Load Balancer Controller of the Application Gateway for Containers.')
param namespace string = 'azure-alb-system'
@description('Specifies the name of the service account for the Application Load Balancer Controller of the Application Gateway for Containers.')
param serviceAccountName string = 'alb-controller-sa'
@description('Specifies the resource tags for the Application Gateway for Containers.')
param tags object
// Variables
var diagnosticSettingsName = 'diagnosticSettings'
var logCategories = [
'TrafficControllerAccessLog'
]
var metricCategories = [
'AllMetrics'
]
var logs = [for category in logCategories: {
category: category
enabled: true
}]
var metrics = [for category in metricCategories: {
category: category
enabled: true
}]
// Resources
resource aksCluster 'Microsoft.ContainerService/managedClusters@2024-01-02-preview' existing = {
name: aksClusterName
}
resource virtualNetwork 'Microsoft.Network/virtualNetworks@2021-08-01' existing = {
name: virtualNetworkName
}
resource subnet 'Microsoft.Network/virtualNetworks/subnets@2021-08-01' existing = {
parent: virtualNetwork
name: subnetName
}
resource readerRole 'Microsoft.Authorization/roleDefinitions@2022-04-01' existing = {
name: 'acdd72a7-3385-48ef-bd42-f606fba81ae7'
scope: subscription()
}
resource networkContributorRole 'Microsoft.Authorization/roleDefinitions@2022-04-01' existing = {
name: '4d97b98b-1d4f-4787-a291-c67834d212e7'
scope: subscription()
}
resource appGwForContainersConfigurationManagerRole 'Microsoft.Authorization/roleDefinitions@2022-04-01' existing = {
name: 'fbc52c3f-28ad-4303-a892-8a056630b8f1'
scope: subscription()
}
resource applicationLoadBalancerManagedIdentity 'Microsoft.ManagedIdentity/userAssignedIdentities@2018-11-30' = {
name: '${name}ManagedIdentity'
location: location
tags: tags
}
// Assign the Network Contributor role to the Application Load Balancer user-assigned managed identity with the association subnet as a scope
resource subnetNetworkContributorRoleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
name: guid(name, applicationLoadBalancerManagedIdentity.name, networkContributorRole.id)
scope: subnet
properties: {
roleDefinitionId: networkContributorRole.id
principalId: applicationLoadBalancerManagedIdentity.properties.principalId
principalType: 'ServicePrincipal'
}
}
// Assign the AppGw for Containers Configuration Manager role to the Application Load Balancer user-assigned managed identity with the resource group as a scope
resource appGwForContainersConfigurationManagerRoleAssignmenOnResourceGroup 'Microsoft.Authorization/roleAssignments@2022-04-01' = if (type == 'byo') {
name: guid(name, applicationLoadBalancerManagedIdentity.name, appGwForContainersConfigurationManagerRole.id)
scope: resourceGroup()
properties: {
roleDefinitionId: appGwForContainersConfigurationManagerRole.id
principalId: applicationLoadBalancerManagedIdentity.properties.principalId
principalType: 'ServicePrincipal'
}
}
// Assign the AppGw for Containers Configuration Manager role to the Application Load Balancer user-assigned managed identity with the AKS cluster node resource group as a scope
module appGwForContainersConfigurationManagerRoleAssignmenOnnodeResourceGroupName 'resourceGroupRoleAssignment.bicep' = if (type == 'managed') {
name: guid(nodeResourceGroupName, applicationLoadBalancerManagedIdentity.name, appGwForContainersConfigurationManagerRole.id)
scope: resourceGroup(nodeResourceGroupName)
params: {
principalId: applicationLoadBalancerManagedIdentity.properties.principalId
roleName: appGwForContainersConfigurationManagerRole.name
}
}
// Assign the Reader role to the Application Load Balancer user-assigned managed identity with the AKS cluster node resource group as a scope
module nodeResourceGroupReaderRoleAssignment 'resourceGroupRoleAssignment.bicep' = {
name: guid(nodeResourceGroupName, applicationLoadBalancerManagedIdentity.name, readerRole.id)
scope: resourceGroup(nodeResourceGroupName)
params: {
principalId: applicationLoadBalancerManagedIdentity.properties.principalId
roleName: readerRole.name
}
}
// Create federated identity for the Application Load Balancer user-assigned managed identity
resource federatedIdentityCredentials 'Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials@2023-01-31' = if (!empty(namespace) && !empty(serviceAccountName)) {
name: 'azure-alb-identity'
parent: applicationLoadBalancerManagedIdentity
properties: {
issuer: aksCluster.properties.oidcIssuerProfile.issuerURL
subject: 'system:serviceaccount:${namespace}:${serviceAccountName}'
audiences: [
'api://AzureADTokenExchange'
]
}
}
resource applicationGatewayForContainers 'Microsoft.ServiceNetworking/trafficControllers@2023-11-01' = if (type == 'byo') {
name: name
location: location
tags: tags
}
resource applicationGatewayDiagnosticSettings 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = if (type == 'byo') {
name: diagnosticSettingsName
scope: applicationGatewayForContainers
properties: {
workspaceId: workspaceId
logs: logs
metrics: metrics
}
}
// Outputs
output id string = applicationGatewayForContainers.id
output name string = applicationGatewayForContainers.name
output type string = applicationGatewayForContainers.type
output principalId string = applicationLoadBalancerManagedIdentity.properties.principalId
output clientId string = applicationLoadBalancerManagedIdentity.properties.clientId
```
The provided Bicep module performs the following steps:
- Accepts several parameters, such as the `name`, `type`, `location`, `tags`, and more.
- Defines variables for diagnostic settings, such as `diagnosticSettingsName`, `logCategories`, `metricCategories`, `logs`, and `metrics`.
- References existing resources like the AKS cluster, virtual network, association subnet, Reader role, Network Contributor role, and AppGw for Containers Configuration Manager role.
- Creates a user-defined managed identity for the Application Load Balancer (ALB) Controller.
- When the `type` parameter is set to `byo`, creates an Application Gateway for Containers resource in the target resource group and sets up a diagnostic settings resource to collect logs and metrics from the Application Gateway for Containers in the specified Log Analytics workspace.
- Assigns the Network Contributor role to the Application Load Balancer user-assigned managed identity, scoped to the subnet.
- When the `type` parameter is set to `byo`, assigns the AppGw for Containers Configuration Manager role to the Application Load Balancer user-assigned managed identity, scoped to the resource group. This role enables the ALB Controller to access and configure the Application Gateway for Containers resource.
- When the `type` parameter is set to `managed`, assigns the AppGw for Containers Configuration Manager role to the Application Load Balancer user-assigned managed identity, scoped to the AKS cluster node resource group. In this case, the Application Gateway for Containers is created and managed by the ALB Controller in the AKS node resource group.
- Assigns the Reader role to the Application Load Balancer user-assigned managed identity, scoped to the AKS cluster node resource group.
- Creates a federated identity credentials resource to establish a federated identity for the Application Load Balancer user-assigned managed identity. This is required by the ALB Controller and uses the name `azure-alb-identity` for the federated credential.
- Creates an `applicationGatewayForContainers` resource using the `Microsoft.ServiceNetworking/trafficControllers` resource type to create the Application Gateway for Containers based on the provided parameters.
- Creates module outputs: the `id`, `name`, and `type` of the Application Gateway for Containers, and the `principalId` and `clientId` of the ALB Controller user-defined managed identity.
When the value of the `type` parameter is set to `byo`, the Bicep module creates an Application Gateway for Containers resource in the specified target resource group. Alternatively, when the `type` parameter is set to `managed`, the ALB Controller installed via Helm in the deployment script handles the creation and management of the Application Gateway for Containers in the AKS node resource group.
Deployment Script
The following deployment script is used to run the `install-alb-controller-sa.sh` Bash script stored in a public container of a storage account. This script installs the necessary dependencies, retrieves the cluster credentials, checks whether the cluster is public or private, installs Helm and the required Helm charts, creates namespaces and service accounts, and deploys the Application Load Balancer (ALB) Controller.
```bash
# Install kubectl
az aks install-cli --only-show-errors
# Get AKS credentials
az aks get-credentials \
--admin \
--name $clusterName \
--resource-group $resourceGroupName \
--subscription $subscriptionId \
--only-show-errors
# Check if the cluster is private or not
private=$(az aks show --name $clusterName \
--resource-group $resourceGroupName \
--subscription $subscriptionId \
--query apiServerAccessProfile.enablePrivateCluster \
--output tsv)
# Install Helm
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 -o get_helm.sh -s
chmod 700 get_helm.sh
./get_helm.sh &>/dev/null
# Add Helm repos
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo add jetstack https://charts.jetstack.io
# Update Helm repos
helm repo update
# initialize variables
applicationGatewayForContainersName=''
diagnosticSettingName="DefaultDiagnosticSettings"
if [[ $private == 'true' ]]; then
# Log whether the cluster is public or private
echo "$clusterName AKS cluster is private"
# Install Prometheus
command="helm upgrade prometheus prometheus-community/kube-prometheus-stack \
--install \
--create-namespace \
--namespace prometheus \
--set prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues=false \
--set prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false"
az aks command invoke \
--name $clusterName \
--resource-group $resourceGroupName \
--subscription $subscriptionId \
--command "$command"
# Install NGINX ingress controller using the internal load balancer
command="helm upgrade nginx-ingress ingress-nginx/ingress-nginx \
--install \
--create-namespace \
--namespace ingress-basic \
--set controller.replicaCount=3 \
--set controller.nodeSelector.\"kubernetes\.io/os\"=linux \
--set defaultBackend.nodeSelector.\"kubernetes\.io/os\"=linux \
--set controller.metrics.enabled=true \
--set controller.metrics.serviceMonitor.enabled=true \
--set controller.metrics.serviceMonitor.additionalLabels.release=\"prometheus\" \
--set controller.service.annotations.\"service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path\"=/healthz"
az aks command invoke \
--name $clusterName \
--resource-group $resourceGroupName \
--subscription $subscriptionId \
--command "$command"
# Install certificate manager
command="helm upgrade cert-manager jetstack/cert-manager \
--install \
--create-namespace \
--namespace cert-manager \
--version v1.14.0 \
--set installCRDs=true \
--set nodeSelector.\"kubernetes\.io/os\"=linux \
--set \"extraArgs={--feature-gates=ExperimentalGatewayAPISupport=true}\""
az aks command invoke \
--name $clusterName \
--resource-group $resourceGroupName \
--subscription $subscriptionId \
--command "$command"
# Create cluster issuer
command="cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-nginx
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: $email
privateKeySecretRef:
name: letsencrypt
solvers:
- http01:
ingress:
class: nginx
podTemplate:
spec:
nodeSelector:
"kubernetes.io/os": linux
EOF"
az aks command invoke \
--name $clusterName \
--resource-group $resourceGroupName \
--subscription $subscriptionId \
--command "$command"
if [[ -n "$namespace" && \
-n "$serviceAccountName" ]]; then
# Create workload namespace
command="kubectl create namespace $namespace"
az aks command invoke \
--name $clusterName \
--resource-group $resourceGroupName \
--subscription $subscriptionId \
--command "$command"
# Create service account
command="cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
annotations:
azure.workload.identity/client-id: $workloadManagedIdentityClientId
azure.workload.identity/tenant-id: $tenantId
labels:
azure.workload.identity/use: "true"
name: $serviceAccountName
namespace: $namespace
EOF"
az aks command invoke \
--name $clusterName \
--resource-group $resourceGroupName \
--subscription $subscriptionId \
--command "$command"
fi
if [[ "$applicationGatewayForContainersEnabled" == "true" \
&& -n "$applicationGatewayForContainersManagedIdentityClientId" \
&& -n "$applicationGatewayForContainersSubnetId" ]]; then
# Install the Application Load Balancer Controller
command="helm upgrade alb-controller oci://mcr.microsoft.com/application-lb/charts/alb-controller \
--install \
--create-namespace \
--namespace $applicationGatewayForContainersNamespace \
--version 1.0.0 \
--set albController.podIdentity.clientID=$applicationGatewayForContainersManagedIdentityClientId"
az aks command invoke \
--name $clusterName \
--resource-group $resourceGroupName \
--subscription $subscriptionId \
--command "$command"
# Create workload namespace
command="kubectl create namespace alb-infra"
az aks command invoke \
--name $clusterName \
--resource-group $resourceGroupName \
--subscription $subscriptionId \
--command "$command"
if [[ "$applicationGatewayForContainersType" == "managed" ]]; then
# Define the ApplicationLoadBalancer resource, specifying the subnet ID the Application Gateway for Containers association resource should deploy into.
# The association establishes connectivity from Application Gateway for Containers to the defined subnet (and connected networks where applicable) to
# be able to proxy traffic to a defined backend.
command="kubectl apply -f - <<EOF
apiVersion: alb.networking.azure.io/v1
kind: ApplicationLoadBalancer
metadata:
name: alb
namespace: alb-infra
spec:
associations:
- $applicationGatewayForContainersSubnetId
EOF"
az aks command invoke \
--name $clusterName \
--resource-group $resourceGroupName \
--subscription $subscriptionId \
--command "$command"
if [[ -n $nodeResourceGroupName ]]; then \
echo -n "Retrieving the resource id of the Application Gateway for Containers..."
counter=1
while [ $counter -le 600 ]
do
# Retrieve the resource id of the managed Application Gateway for Containers resource
applicationGatewayForContainersId=$(az resource list \
--resource-type "Microsoft.ServiceNetworking/TrafficControllers" \
--resource-group $nodeResourceGroupName \
--query [0].id \
--output tsv)
if [[ -n $applicationGatewayForContainersId ]]; then
echo
break
else
echo -n '.'
counter=$((counter + 1))
sleep 1
fi
done
if [[ -n $applicationGatewayForContainersId ]]; then
applicationGatewayForContainersName=$(basename $applicationGatewayForContainersId)
echo "[$applicationGatewayForContainersId] resource id of the [$applicationGatewayForContainersName] Application Gateway for Containers successfully retrieved"
else
echo "Failed to retrieve the resource id of the Application Gateway for Containers"
exit -1
fi
# Check if the diagnostic setting already exists for the Application Gateway for Containers
echo "Checking if the [$diagnosticSettingName] diagnostic setting for the [$applicationGatewayForContainersName] Application Gateway for Containers actually exists..."
result=$(az monitor diagnostic-settings show \
--name $diagnosticSettingName \
--resource $applicationGatewayForContainersId \
--query name \
--output tsv 2>/dev/null)
if [[ -z $result ]]; then
echo "[$diagnosticSettingName] diagnostic setting for the [$applicationGatewayForContainersName] Application Gateway for Containers does not exist"
echo "Creating [$diagnosticSettingName] diagnostic setting for the [$applicationGatewayForContainersName] Application Gateway for Containers..."
# Create the diagnostic setting for the Application Gateway for Containers
az monitor diagnostic-settings create \
--name $diagnosticSettingName \
--resource $applicationGatewayForContainersId \
--logs '[{"categoryGroup": "allLogs", "enabled": true}]' \
--metrics '[{"category": "AllMetrics", "enabled": true}]' \
--workspace $workspaceId \
--only-show-errors 1>/dev/null
if [[ $? == 0 ]]; then
echo "[$diagnosticSettingName] diagnostic setting for the [$applicationGatewayForContainersName] Application Gateway for Containers successfully created"
else
echo "Failed to create [$diagnosticSettingName] diagnostic setting for the [$applicationGatewayForContainersName] Application Gateway for Containers"
exit -1
fi
else
echo "[$diagnosticSettingName] diagnostic setting for the [$applicationGatewayForContainersName] Application Gateway for Containers already exists"
fi
fi
fi
fi
else
# Log whether the cluster is public or private
echo "$clusterName AKS cluster is public"
# Install Prometheus
echo "Installing Prometheus..."
helm upgrade prometheus prometheus-community/kube-prometheus-stack \
--install \
--create-namespace \
--namespace prometheus \
--set prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues=false \
--set prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false
if [[ $? == 0 ]]; then
echo "Prometheus successfully installed"
else
echo "Failed to install Prometheus"
exit -1
fi
# Install NGINX ingress controller using the internal load balancer
echo "Installing NGINX ingress controller..."
helm upgrade nginx-ingress ingress-nginx/ingress-nginx \
--install \
--create-namespace \
--namespace ingress-basic \
--set controller.replicaCount=3 \
--set controller.nodeSelector."kubernetes\.io/os"=linux \
--set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
--set controller.metrics.enabled=true \
--set controller.metrics.serviceMonitor.enabled=true \
--set controller.metrics.serviceMonitor.additionalLabels.release="prometheus" \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
if [[ $? == 0 ]]; then
echo "NGINX ingress controller successfully installed"
else
echo "Failed to install NGINX ingress controller"
exit -1
fi
# Install certificate manager
echo "Installing certificate manager..."
helm upgrade cert-manager jetstack/cert-manager \
--install \
--create-namespace \
--namespace cert-manager \
--version v1.14.0 \
--set installCRDs=true \
--set nodeSelector."kubernetes\.io/os"=linux \
--set "extraArgs={--feature-gates=ExperimentalGatewayAPISupport=true}"
if [[ $? == 0 ]]; then
echo "Certificate manager successfully installed"
else
echo "Failed to install certificate manager"
exit -1
fi
# Create cluster issuer
echo "Creating cluster issuer..."
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-nginx
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: $email
    privateKeySecretRef:
      name: letsencrypt
    solvers:
    - http01:
        ingress:
          class: nginx
          podTemplate:
            spec:
              nodeSelector:
                "kubernetes.io/os": linux
EOF
if [[ -n "$namespace" && \
-n "$serviceAccountName" ]]; then
# Create workload namespace
result=$(kubectl get namespace -o 'jsonpath={.items[?(@.metadata.name=="'$namespace'")].metadata.name'})
if [[ -n $result ]]; then
echo "$namespace namespace already exists in the cluster"
else
echo "$namespace namespace does not exist in the cluster"
echo "Creating $namespace namespace in the cluster..."
kubectl create namespace $namespace
fi
# Create service account
echo "Creating $serviceAccountName service account..."
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    azure.workload.identity/client-id: $workloadManagedIdentityClientId
    azure.workload.identity/tenant-id: $tenantId
  labels:
    azure.workload.identity/use: "true"
  name: $serviceAccountName
  namespace: $namespace
EOF
fi
if [[ "$applicationGatewayForContainersEnabled" == "true" \
&& -n "$applicationGatewayForContainersManagedIdentityClientId" \
&& -n "$applicationGatewayForContainersSubnetId" ]]; then
# Install the Application Load Balancer
echo "Installing Application Load Balancer Controller in $applicationGatewayForContainersNamespace namespace using $applicationGatewayForContainersManagedIdentityClientId managed identity..."
helm upgrade alb-controller oci://mcr.microsoft.com/application-lb/charts/alb-controller \
--install \
--create-namespace \
--namespace $applicationGatewayForContainersNamespace \
--version 1.0.0 \
--set albController.namespace=$applicationGatewayForContainersNamespace \
--set albController.podIdentity.clientID=$applicationGatewayForContainersManagedIdentityClientId
if [[ $? == 0 ]]; then
echo "Application Load Balancer Controller successfully installed"
else
echo "Failed to install Application Load Balancer Controller"
exit -1
fi
if [[ "$applicationGatewayForContainersType" == "managed" ]]; then
# Create alb-infra namespace
albInfraNamespace='alb-infra'
result=$(kubectl get namespace -o 'jsonpath={.items[?(@.metadata.name=="'$albInfraNamespace'")].metadata.name'})
if [[ -n $result ]]; then
echo "$albInfraNamespace namespace already exists in the cluster"
else
echo "$albInfraNamespace namespace does not exist in the cluster"
echo "Creating $albInfraNamespace namespace in the cluster..."
kubectl create namespace $albInfraNamespace
fi
# Define the ApplicationLoadBalancer resource, specifying the subnet ID the Application Gateway for Containers association resource should deploy into.
# The association establishes connectivity from Application Gateway for Containers to the defined subnet (and connected networks where applicable) to
# be able to proxy traffic to a defined backend.
echo "Creating ApplicationLoadBalancer resource..."
kubectl apply -f - <<EOF
apiVersion: alb.networking.azure.io/v1
kind: ApplicationLoadBalancer
metadata:
  name: alb
  namespace: alb-infra
spec:
  associations:
  - $applicationGatewayForContainersSubnetId
EOF
if [[ -n $nodeResourceGroupName ]]; then \
echo -n "Retrieving the resource id of the Application Gateway for Containers..."
counter=1
while [ $counter -le 20 ]
do
# Retrieve the resource id of the managed Application Gateway for Containers resource
applicationGatewayForContainersId=$(az resource list \
--resource-type "Microsoft.ServiceNetworking/TrafficControllers" \
--resource-group $nodeResourceGroupName \
--query [0].id \
--output tsv)
if [[ -n $applicationGatewayForContainersId ]]; then
echo
break
else
echo -n '.'
counter=$((counter + 1))
sleep 1
fi
done
if [[ -n $applicationGatewayForContainersId ]]; then
applicationGatewayForContainersName=$(basename $applicationGatewayForContainersId)
echo "[$applicationGatewayForContainersId] resource id of the [$applicationGatewayForContainersName] Application Gateway for Containers successfully retrieved"
else
echo "Failed to retrieve the resource id of the Application Gateway for Containers"
exit -1
fi
# Check if the diagnostic setting already exists for the Application Gateway for Containers
echo "Checking if the [$diagnosticSettingName] diagnostic setting for the [$applicationGatewayForContainersName] Application Gateway for Containers actually exists..."
result=$(az monitor diagnostic-settings show \
--name $diagnosticSettingName \
--resource $applicationGatewayForContainersId \
--query name \
--output tsv 2>/dev/null)
if [[ -z $result ]]; then
echo "[$diagnosticSettingName] diagnostic setting for the [$applicationGatewayForContainersName] Application Gateway for Containers does not exist"
echo "Creating [$diagnosticSettingName] diagnostic setting for the [$applicationGatewayForContainersName] Application Gateway for Containers..."
# Create the diagnostic setting for the Application Gateway for Containers
az monitor diagnostic-settings create \
--name $diagnosticSettingName \
--resource $applicationGatewayForContainersId \
--logs '[{"categoryGroup": "allLogs", "enabled": true}]' \
--metrics '[{"category": "AllMetrics", "enabled": true}]' \
--workspace $workspaceId \
--only-show-errors 1>/dev/null
if [[ $? == 0 ]]; then
echo "[$diagnosticSettingName] diagnostic setting for the [$applicationGatewayForContainersName] Application Gateway for Containers successfully created"
else
echo "Failed to create [$diagnosticSettingName] diagnostic setting for the [$applicationGatewayForContainersName] Application Gateway for Containers"
exit 1
fi
else
echo "[$diagnosticSettingName] diagnostic setting for the [$applicationGatewayForContainersName] Application Gateway for Containers already exists"
fi
fi
fi
fi
fi
# Create output as JSON file
echo '{}' |
jq --arg x "$applicationGatewayForContainersName" '.applicationGatewayForContainersName=$x' |
jq --arg x "$namespace" '.namespace=$x' |
jq --arg x "$serviceAccountName" '.serviceAccountName=$x' |
jq --arg x 'prometheus' '.prometheus=$x' |
jq --arg x 'cert-manager' '.certManager=$x' |
jq --arg x 'ingress-basic' '.nginxIngressController=$x' >$AZ_SCRIPTS_OUTPUT_PATH
The script performs the following steps:
- Installs `kubectl` using the Azure CLI command `az aks install-cli`.
- Retrieves the AKS cluster credentials using the Azure CLI command `az aks get-credentials`.
- Checks whether the AKS cluster is private or public by querying the `enablePrivateCluster` attribute of the cluster's API server access profile using the Azure CLI command `az aks show`.
- Installs Helm by downloading and executing the `get_helm.sh` script.
- Adds the Helm repositories for the Kube Prometheus Stack and Cert-Manager using the `helm repo add` command.
- Updates the Helm repositories using the `helm repo update` command.
- Initializes the variables related to the Application Gateway for Containers.
- Performs the subsequent steps differently depending on whether the cluster is public or private. When the cluster is private, the script uses `az aks command invoke` to execute commands, as shown in the sketch after this list.
- Installs Prometheus using the `helm upgrade --install` command.
- Installs the certificate manager using the `helm upgrade --install` command.
- If a `namespace` and a `serviceAccountName` are provided, creates the namespace and service account using `kubectl`. This information is optional and can be used to create the namespace and service account for a workload.
- If the Application Gateway for Containers is enabled and the necessary information is provided, installs the Application Load Balancer Controller using the `helm upgrade --install` command. The command reads the client ID of the ALB Controller managed identity from the `applicationGatewayForContainersManagedIdentityClientId` environment variable and the target namespace from the `applicationGatewayForContainersNamespace` environment variable. For more information on the installation of the ALB Controller via Helm, see Quickstart: Deploy Application Gateway for Containers ALB Controller.
- When the `applicationGatewayForContainersType` environment variable is set to `managed`, creates the `alb-infra` namespace using `kubectl` and deploys the `ApplicationLoadBalancer` resource in the newly created namespace. The YAML manifest reads the resource ID of the subnet used by the association from the `applicationGatewayForContainersSubnetId` environment variable.
- Retrieves the resource ID of the Application Gateway for Containers and checks whether the diagnostic settings exist. If they don't, it creates them using `az monitor diagnostic-settings create`.
- Creates an output JSON file containing the Application Gateway for Containers name, the workload namespace and service account name, if any, and the namespaces used by Prometheus and the certificate manager.
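On a private cluster, the API server is not reachable from the deployment script's container, so each kubectl or Helm command is executed through the cluster's run-command endpoint. A minimal sketch of the pattern, with placeholder resource names:
# Run a kubectl command inside a private AKS cluster via the run-command API.
# Replace <resource-group-name> and <aks-name> with your own values.
az aks command invoke \
  --resource-group <resource-group-name> \
  --name <aks-name> \
  --command "kubectl get namespaces"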
Review deployed resources
You can use the Azure portal to list the deployed resources in the resource group. If you chose to deploy an Application Gateway for Containers managed by the ALB Controller, you will find the resource under the node resource group of the AKS cluster.
You can also use Azure CLI to list the deployed resources in the resource group and AKS node resource group:
az resource list --resource-group <resource-group-name>
nodeResourceGroupName=$(az aks show --name <aks-name> --resource-group <resource-group-name> --query "nodeResourceGroup" -o tsv)
az resource list --resource-group $nodeResourceGroupName
You can also use the following PowerShell cmdlet to list the deployed resources in the resource group and AKS node resource group:
Get-AzResource -ResourceGroupName <resource-group-name>
$NodeResourceGroup = (Get-AzAksCluster -Name <aks-name> -ResourceGroupName <resource-group-name>).NodeResourceGroup
Get-AzResource -ResourceGroupName $NodeResourceGroup
Deploy Sample
After confirming the successful deployment, you can easily deploy your workloads and configure them to have a public endpoint through the newly created Application Gateway for Containers. To achieve this, you have two options: either a Gateway or an Ingress in Kubernetes. These options allow you to expose your application to the public internet using the Application Gateway for Containers resource. The documentation provides various tutorials for both the Gateway API and the Ingress API, including:
- Gateway API:
- Ingress API:
You can find the scripts and YAML manifests for these tutorials in the `tutorials` folder. Additionally, the `app` folder contains two samples: `byo`, for a bring-your-own installation of the Application Gateway for Containers, and `managed`, which works with an Application Gateway for Containers managed by the ALB Controller.
For simplicity, let's focus on the `managed` sample and leave the `byo` sample for the reader to review. Let's start by reviewing the YAML manifests.
Deployment
The `deployment.yaml` file contains the YAML definition for the deployment, the service, and a secret that holds a temporary certificate for the Gateway listener; this certificate will later be replaced by the one issued by Let's Encrypt via the certificate manager. For more information on how to use the certificate manager to issue a new certificate to a Gateway using HTTP01 challenges, see Configuring the HTTP-01 Gateway API solver.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
spec:
  replicas: 3
  selector:
    matchLabels:
      app: httpbin
  template:
    metadata:
      labels:
        app: httpbin
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: httpbin
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: httpbin
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
      - image: docker.io/kennethreitz/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        resources:
          requests:
            memory: "64Mi"
            cpu: "125m"
          limits:
            memory: "128Mi"
            cpu: "250m"
        ports:
        - containerPort: 80
        env:
        - name: PORT
          value: "80"
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: ClusterIP
  selector:
    app: httpbin
---
apiVersion: v1
kind: Secret
metadata:
  name: listener-tls-secret
data:
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURLakNDQWhJQ0FRRXdEUVlKS29aSWh2Y05BUUVMQlFBd1d6RUxNQWtHQTFVRUJoTUNRVlV4RXpBUkJnTlYKQkFnTUNsTnZiV1V0VTNSaGRHVXhJVEFmQmdOVkJBb01HRWx1ZEdWeWJtVjBJRmRwWkdkcGRITWdVSFI1SUV4MApaREVVTUJJR0ExVUVBd3dMWlhoaGJYQnNaUzVqYjIwd0hoY05Nakl4TVRFMk1EVXhPREV6V2hjTk1qVXdPREV5Ck1EVXhPREV6V2pCYk1Rc3dDUVlEVlFRR0V3SkJWVEVUTUJFR0ExVUVDQXdLVTI5dFpTMVRkR0YwWlRFaE1COEcKQTFVRUNnd1lTVzUwWlhKdVpYUWdWMmxrWjJsMGN5QlFkSGtnVEhSa01SUXdFZ1lEVlFRRERBdGxlR0Z0Y0d4bApMbU52YlRDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTm4vcHNCYjh0RVVlV3lLCkd4UHZmaFdSaHl6Qm9veFQraTBGUzBXYlNJdEtGUFl3ZmhKay9TYmgzZ09mL1NzVVE0MU1kVkJDb25meEF1OHAKdkhrdlk5cjIrRlEwcXBqb3RuNVJadm1QVlhnTVU0MHZhVzdJSkVzUEIyTTk4UDlrL2VkZXhFOUNEbVhRRUgySApYYXFoaFVpRnh1Q0NIeThLWHJOb0JMVGZ1VWRsM2lycTFJMFAxSkVJaXQ2WC9DeVFWQmU3SVI5ZGZlVXc5UFlsClRKVVhBRGdRTzBCVGRYb3RRc1VUZjI1dktFRWcyUjVHQXIwVC9FcThjS3BNcWFiYzhydCtZTjlQYTVLcUFyWS8KR2M0UkdpTVNBSWlTclhtMHFYQzU2cjhEVFk0T2VhV292ZW9TcXp1Ymxzc0lZNHd4alF4OUdBSC9GTWpxU0ltTgozREQ0RElFQ0F3RUFBVEFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBSTBuMTc5VU8xQVFiRmdqMGEvdHBpTDBLCkFPS0U4UTlvSzBleS80VlYzREdQM3duR2FOMW52d2xCVFNKWGFVK1JHejZQZTUxN2RoRklGR3BYblEzemxZV1UKVE0zU0V1NXd4eWpVZUtWSVlvOGQ3dTN2UXdDMnhHK1IrbStSZ0Jxcm5ib003cVhwYjR0dkNRRi82TXl6TzZDNwpNM0RKZmNqdWQxSEszcmlXQy9CYlB3ZjBlN1dtWW95eGZoaTZBUWRZNmZJU3RRZVhVbWJ1aWtPTDE1VjdETEFtCkxHOSt5cExOdHFsa2VXTXBVcU45R0d6ZjdpSTNVMlJKWTlpUjdrcHUzMXdDWGY4VUhPcUxva2prU1JTTTV0dzcKWXRDNHdjN2dNS1FmSi9GaS9JVXRKdmx6djk1V0lGSU4rSURtbHBPdFVZQTBwMmVFeERtRFFJc2xZV1YwMVE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
  tls.key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2Z0lCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktnd2dnU2tBZ0VBQW9JQkFRRFovNmJBVy9MUkZIbHMKaWhzVDczNFZrWWNzd2FLTVUvb3RCVXRGbTBpTFNoVDJNSDRTWlAwbTRkNERuLzByRkVPTlRIVlFRcUozOFFMdgpLYng1TDJQYTl2aFVOS3FZNkxaK1VXYjVqMVY0REZPTkwybHV5Q1JMRHdkalBmRC9aUDNuWHNSUFFnNWwwQkI5CmgxMnFvWVZJaGNiZ2doOHZDbDZ6YUFTMDM3bEhaZDRxNnRTTkQ5U1JDSXJlbC93c2tGUVh1eUVmWFgzbE1QVDIKSlV5VkZ3QTRFRHRBVTNWNkxVTEZFMzl1YnloQklOa2VSZ0s5RS94S3ZIQ3FUS21tM1BLN2ZtRGZUMnVTcWdLMgpQeG5PRVJvakVnQ0lrcTE1dEtsd3VlcS9BMDJPRG5tbHFMM3FFcXM3bTViTENHT01NWTBNZlJnQi94VEk2a2lKCmpkd3crQXlCQWdNQkFBRUNnZ0VCQUozalpYaW9uK01DZXpjN2g0VVd6akQ4NS9Sb2dqdzBqbHVSSEFWY0JGeXQKMlNTOTFuR29KeG5FT1RKUzYrQUpteXQ1bHZYOGJRT0YwV1E2ekVEUksvZHBMRTZBbnBhRTViZnphU3VTdm9wbQpFeFdNbzBZVE93WUo2b1hjVlBJRXlVaU1BSTZPL3pLS1VZYzVSWVBSM0dDOFUyQkRuaVpKMG5FS0EyNmxJdUlyCjlVcWtkSk9wRzJtK09iTnc5a0paZVRJblN2TkJKQ0NXQlRwcmY3TS9IRUprbE5aQU5mV0F0YXptUFp3QXI2cFIKOEpHbzV1ZUl2NXI3S1FJbkpldEF3YStpQ3VTUHZvUlZNOUdrSmZxSHVtVmNJbjU5Z0ZzcXR6dzVGNUlocWQ5eQo3dHNxUTdxNUYxb1BLeGxPOXl4TVQxaUlnWmRNaDZqODFuM1kwaWFlN2lrQ2dZRUE4UG9tVmQxSXh4c3ZYbmRIClM5MkVQUENkQmYybzM2SmczNEJYc3QwV3BaN1ZNWFhvVjFPeFhGeWpsNm1vNVBjMTRUSXpjd2NVdWJJMGVhbWEKVWxVbnR1bDFPMkdhYlh4eDJrR1l6ZmVvalZBVUh5OGNjeWxoTkpXRDl5Ykx0TCttNTBBTFI3V1JSdG5LSUxaSApJc3NjTGRTcGYyMUNUYWU3REk3Q2NNQ3RSbmNDZ1lFQTU1YkhTRFBaNmFUZWp4cDNwdHBDNitrN1duMVdlYnBmCkdDL1Rlb0pIaHVteDB6K3lPNitPcng0YlRZSFhjcU1Fa2pwRWxqN0xwb3ZxMktjYUN6SUxvVHdQTWVjNncxSVQKZTRld01JM3Nid2FKMFFhZXgvWHdVR1J0R3RuNkVka25qK2VaWSsxYUpscEJBcjlZZ0VKaTFUci9wZW9VdEtJUwpYSGNsbzY3dmFzY0NnWUJwQ2pFaHBuWnR5OHpIR2FrclNhQzF5NUEycDA0d1JTQ0M2L2ZPVUk3cG5LV0RqTWk5CklBOGttb0Q0d0F5TjJiQlR2RVV1ODd3MkFaYmNIWERXU0tZcUZmTnk4ZVdWcWZRYTFoTWNYTUxNN2tZSEhjc0IKNjl5aVJqWWl5bmRyRDB0YWE5RSs3Y2Nvb2hCNFY5d0VMNUxWNjJnQzBvWmZRU2pJbllYbURpYTVtd0tCZ0ZwbworWm1OYklnVExqT3R3SUpwK1BCQ1dFS0daZWtWd2lRZUg3QlhCZmQ4YWtpdk9EU20zOHdyczdyNWNwTzFZb1ozCnF1a0EwTjVQQnpyWFdZcC9XaHp5NW5lejdyUHI2ZUV5NHF6QjYwaVl3OXJQZTlOU2h5UExZUEMzb2pHdmxndE8KL2dvTjBrRGd3VHFDV3RtUGtTZnZaWGh2UHZBWnlaTkJqSGN2UnhabkFvR0JBS2hnZnlUNTVUVTUxR3hJRks2YwpqNkM5cEdveHJ5Qk0wSzVTb3FqWk5ud2J5UEwzL2Yybmcwb2tSek5iNEorTVJrOVk1RXlIZkw5WlNTdUNKMHdnCkNOMlRZSnZZQWRETWJiOThZSXB3cTdqdkp4VG15cHFEK2lxM1BBVU9RQ3hrVy9FMnVyOXZMbmZlcFcvVFVaVEMKOWdnOFFQL3Y2Q1owamRpeVBYZEJpb1ZOCi0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K
type: kubernetes.io/tls
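The `listener-tls-secret` above embeds a placeholder self-signed certificate. If you prefer to generate your own placeholder rather than reuse the sample values, here is a minimal sketch, assuming openssl and kubectl are available; the subject name is arbitrary, since cert-manager replaces the secret once the ACME challenge completes:
# Create a throwaway self-signed certificate and private key
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt -subj "/CN=example.com"
# Store them in the TLS secret referenced by the HTTPS listener
kubectl create secret tls listener-tls-secret \
  --cert=tls.crt --key=tls.key --namespace gateway-demo \
  --dry-run=client -o yaml | kubectl apply -f -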
Gateway
The `gateway.yaml` file contains the definition of the Gateway used by the application. When using an Application Gateway for Containers managed by the ALB Controller, the frontend is automatically created for you by the ALB Controller.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: httpbin-gateway
  namespace: gateway-demo
  annotations:
    cert-manager.io/issuer: letsencrypt
    alb.networking.azure.io/alb-name: alb
    alb.networking.azure.io/alb-namespace: alb-infra
spec:
  gatewayClassName: azure-alb-external
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All
  - hostname: dummy.babosbird.com
    name: https
    port: 443
    protocol: HTTPS
    allowedRoutes:
      namespaces:
        from: All
    tls:
      mode: Terminate
      certificateRefs:
      - name: listener-tls-secret
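After you apply the manifest, the ALB Controller provisions a frontend for the Gateway. A quick way to verify that the Gateway has been programmed, using the names from the manifest above:
# The ADDRESS column is populated once the frontend FQDN has been assigned
kubectl get gateway httpbin-gateway -n gateway-demo
# Read the frontend FQDN directly from the Gateway status
kubectl get gateway httpbin-gateway -n gateway-demo -o jsonpath='{.status.addresses[0].value}'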
Issuer
The Gateway references a certificate issuer in the `cert-manager.io/issuer` annotation, so we need to create an issuer. In the issuer, we define the CA root (Let's Encrypt in this case) for the certificate chain used to issue our certificate, and the challenge type our client handles to prove control over the domain (in our case, the HTTP01 challenge).
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt
  namespace: gateway-demo
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: 'admin@contoso.com'
    privateKeySecretRef:
      name: letsencrypt
    solvers:
    - http01:
        gatewayHTTPRoute:
          parentRefs:
          - name: httpbin-gateway
            namespace: gateway-demo
            kind: Gateway
The certificate manager follows a series of steps to issue a certificate to the Gateway listener (you can watch the objects involved with the commands shown after this list):
- A Certificate object is created for the Gateway listener with a nil revision flag, initially pointing to the self-signed TLS secret. Since the CA of that secret is not valid, the Certificate controller detects this and proceeds to the next step.
- A CertificateRequest (CR) object is created with the revision flag set to 1, indicating the need to issue a valid certificate. The CR contains all the information needed to send a CSR in PKCS #10 format to the CA.
- The CR object creates an Order object to monitor the request process.
- The issuer registers itself with the ACME server (CA) using our public key, which is included in the CR. The CA server generates a unique token and key for the request and associates them with our public key, so it can verify the signature of future requests from our client.
- The CA server returns to our client (the issuer) the unique token and key stored for our public key for each supported challenge.
- The issuer updates the Order object with the server-supported challenges, along with their unique token and key.
- Based on the supported challenges and our Issuer configuration, the Order object decides to solve the HTTP01 challenge, using the parameters provided by the ACME server.
- A new pod is created in the default namespace to run the challenge. This pod contains an HTTP server that serves the challenge token on a specific path and expects the request's Host header to match our domain name; if it doesn't match, a 404 error is returned.
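To follow this process on the cluster, a quick sketch using kubectl (the generated object names vary from run to run):
# List the cert-manager objects involved in the issuance
kubectl get certificate,certificaterequest,order,challenge -n gateway-demo
# Inspect the HTTP01 challenge while its solver pod is running
kubectl describe challenge -n gateway-demo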
HTTPRoute
The `httproute.yaml` file contains the definition of the `HTTPRoute` object used to route requests to the service:
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: httpbin-route
  namespace: gateway-demo
spec:
  parentRefs:
  - name: httpbin-gateway
    namespace: gateway-demo
    kind: Gateway
  rules:
  - backendRefs:
    - name: httpbin
      port: 80
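Once the route is applied, you can confirm that the Gateway accepted it by reading the status reported for each parent reference; for example:
# The Accepted condition indicates the Gateway is routing traffic to the service
kubectl get httproute httpbin-route -n gateway-demo -o jsonpath='{.status.parents[0].conditions}'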
Scripts
You are now ready to deploy the `managed` sample to your AKS cluster. As a first step, enter a correct value for each variable in the `00-variables.sh` file:
# Certificate Manager
cmNamespace="cert-manager"
cmRepoName="jetstack"
cmRepoUrl="https://charts.jetstack.io"
cmChartName="cert-manager"
cmReleaseName="cert-manager"
cmVersion="v1.14.0"
# Application Load Balancer
applicationLoadBalancerName="alb"
applicationLoadBalancerNamespace="alb-infra"
# Demo
namespace="agc-demo"
gatewayName="echo-gateway"
issuerName="letsencrypt"
httpRouteName="echo-route"
# Ingress and DNS
dnsZoneName="babosbird.com"
dnsZoneResourceGroupName="DnsResourceGroup"
subdomain="shogunagc"
hostname="$subdomain.$dnsZoneName"
Run the `01-install-cert-manager.sh` script if you need to install the certificate manager in your AKS cluster.
#!/bin/bash
# Variables
source ./00-variables.sh
# Check if the cert-manager repository is not already added
result=$(helm repo list | grep $cmRepoName | awk '{print $1}')
if [[ -n $result ]]; then
echo "[$cmRepoName] Helm repo already exists"
else
# Add the Jetstack Helm repository
echo "Adding [$cmRepoName] Helm repo..."
helm repo add $cmRepoName $cmRepoUrl
fi
# Update your local Helm chart repository cache
echo 'Updating Helm repos...'
helm repo update
# Install cert-manager Helm chart
result=$(helm list -n $cmNamespace | grep $cmReleaseName | awk '{print $1}')
if [[ -n $result ]]; then
echo "[$cmReleaseName] cert-manager already exists in the $cmNamespace namespace"
echo "Upgrading [$cmReleaseName] cert-manager to the $cmNamespace namespace..."
else
# Install the cert-manager Helm chart
echo "Deploying [$cmReleaseName] cert-manager to the $cmNamespace namespace..."
fi
helm upgrade $cmReleaseName $cmRepoName/$cmChartName \
--install \
--create-namespace \
--namespace $cmNamespace \
--version $cmVersion \
--set installCRDs=true \
--set nodeSelector."kubernetes\.io/os"=linux \
--set "extraArgs={--feature-gates=ExperimentalGatewayAPISupport=true}"
Then run the `02-create-sample.sh` script to deploy the application to the specified namespace. The script makes use of the `yq` tool.
#!/bin/bash
# Variables
source ./00-variables.sh
# Check if namespace exists in the cluster
result=$(kubectl get namespace -o jsonpath="{.items[?(@.metadata.name=='$namespace')].metadata.name}")
if [[ -n $result ]]; then
echo "$namespace namespace already exists in the cluster"
else
echo "$namespace namespace does not exist in the cluster"
echo "creating $namespace namespace in the cluster..."
kubectl create namespace $namespace
fi
# Create a sample web application
kubectl apply -n $namespace -f ./deployment.yaml
# Create Gateway
cat gateway.yaml |
yq "(.metadata.name)|="\""$gatewayName"\" |
yq "(.metadata.namespace)|="\""$namespace"\" |
yq "(.metadata.annotations."\""cert-manager.io/issuer"\"")|="\""$issuerName"\" |
yq "(.metadata.annotations."\""alb.networking.azure.io/alb-name"\"")|="\""$applicationLoadBalancerName"\" |
yq "(.metadata.annotations."\""alb.networking.azure.io/alb-namespace"\"")|="\""$applicationLoadBalancerNamespace"\" |
yq "(.spec.listeners[1].hostname)|="\""$hostname"\" |
kubectl apply -f -
# Create Issuer
cat issuer.yaml |
yq "(.metadata.name)|="\""$issuerName"\" |
yq "(.metadata.namespace)|="\""$namespace"\" |
yq "(.spec.acme.solvers[0].http01.gatewayHTTPRoute.parentRefs[0].name)|="\""$gatewayName"\" |
yq "(.spec.acme.solvers[0].http01.gatewayHTTPRoute.parentRefs[0].namespace)|="\""$namespace"\" |
kubectl apply -f -
# Create HTTPRoute
cat httproute.yaml |
yq "(.metadata.name)|="\""$httpRouteName"\" |
yq "(.metadata.namespace)|="\""$namespace"\" |
yq "(.spec.parentRefs[0].name)|="\""$gatewayName"\" |
yq "(.spec.parentRefs[0].namespace)|="\""$namespace"\" |
kubectl apply -f -
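When the script completes, you can verify that all the objects were created in the target namespace (`agc-demo` with the variables above):
# List the sample workload and the Gateway API objects created by the script
kubectl get deployment,service,gateway,issuer,httproute -n agc-demo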
If you delegated the management of your public DNS zone to Azure DNS, you can use the `03-configure-dns.sh` script to create a CNAME for the FQDN assigned to the frontend used by the Gateway.
#!/bin/bash
# Variables
source ./00-variables.sh
# Get the FQDN of the gateway
echo -n "Retrieving the FQDN of the [$gatewayName] gateway..."
while true
do
fqdn=$(kubectl get gateway $gatewayName -n $namespace -o jsonpath='{.status.addresses[0].value}')
if [[ -n $fqdn ]]; then
echo
break
else
echo -n '.'
sleep 1
fi
done
if [[ -n $fqdn ]]; then
echo "[$fqdn] FQDN successfully retrieved from the [$gatewayName] gateway"
else
echo "Failed to retrieve the FQDN from the [$gatewayName] gateway"
exit
fi
# Check if a CNAME record for the subdomain exists in the DNS zone
echo "Retrieving the CNAME for the [$subdomain] subdomain from the [$dnsZoneName] DNS zone..."
cname=$(az network dns record-set cname list \
--zone-name $dnsZoneName \
--resource-group $dnsZoneResourceGroupName \
--query "[?name=='$subdomain'].CNAMERecord.cname" \
--output tsv \
--only-show-errors)
if [[ -n $cname ]]; then
echo "A CNAME already exists in [$dnsZoneName] DNS zone for the [$subdomain]"
if [[ $cname == $fqdn ]]; then
echo "The [$cname] CNAME equals the FQDN of the [$gatewayName] gateway. No additional step is required."
exit
else
echo "The [$cname] CNAME is different than the [$fqdn] FQDN of the [$gatewayName] gateway"
fi
# Delete the CNAME record
echo "Deleting the [$subdomain] CNAME from the [$dnsZoneName] zone..."
az network dns record-set cname delete \
--name $subdomain \
--zone-name $dnsZoneName \
--resource-group $dnsZoneResourceGroupName \
--only-show-errors \
--yes
if [[ $? == 0 ]]; then
echo "[$subdomain] CNAME successfully deleted from the [$dnsZoneName] zone"
else
echo "Failed to delete the [$subdomain] CNAME from the [$dnsZoneName] zone"
exit
fi
else
echo "No CNAME exists in [$dnsZoneName] DNS zone for the [$subdomain] subdomain"
fi
# Create a CNAME record
echo "Creating a CNAME in the [$dnsZoneName] DNS zone for the [$fqdn] FQDN of the [$gatewayName] gateway..."
az network dns record-set cname set-record \
--cname $fqdn \
--zone-name $dnsZoneName \
--resource-group $dnsZoneResourceGroupName \
--record-set-name $subdomain \
--only-show-errors 1>/dev/null
if [[ $? == 0 ]]; then
echo "[$subdomain] CNAME successfully created in the [$dnsZoneName] DNS zone for the [$fqdn] FQDN of the [$gatewayName] gateway"
else
echo "Failed to create a CNAME in the [$dnsZoneName] DNS zone for the [$fqdn] FQDN of the [$gatewayName] gateway"
fi
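Once the record propagates, you can verify it with a DNS lookup; for example, with the default variable values the hostname is shogunagc.babosbird.com:
# The answer should be a CNAME chain ending at the frontend FQDN of the gateway
nslookup shogunagc.babosbird.com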
Finally, you can test the sample by running the `04-test-application.sh` script.
#!/bin/bash
# Variables
source ./00-variables.sh
# Curling this FQDN should return responses from the backend as configured in the HTTPRoute
curl https://$hostname
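To confirm that the listener now serves the Let's Encrypt certificate instead of the placeholder, you can inspect the TLS handshake; a sketch (run after sourcing 00-variables.sh):
# Print the subject and issuer of the certificate presented by the listener
curl -vI "https://$hostname" 2>&1 | grep -iE 'subject:|issuer:'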
You can also open the application using a web browser.
Clean up resources
You can delete the resource group using the following Azure CLI command when you no longer need the resources you created. This will remove all the Azure resources.
az group delete --name <resource-group-name>
Alternatively, you can use the following PowerShell cmdlet to delete the resource group and all the Azure resources.
Remove-AzResourceGroup -Name <resource-group-name>