The Application Gateway for Containers is a new Azure service that offers load balancing and dynamic traffic management for applications running in a Kubernetes cluster. As part of Azure's application load balancing portfolio, it provides an enhanced experience for developers and administrators. The Application Gateway for Containers represents the evolution of the Application Gateway Ingress Controller (AGIC) and enables Azure Kubernetes Service (AKS) customers to leverage Azure's native Application Gateway load balancer. This article guides you through deploying an Azure Kubernetes Service (AKS) cluster with an Application Gateway for Containers in a fully automated fashion, using either a bring your own (BYO) or managed-by-ALB deployment.
For more information, see:
Bicep templates, companion code, Grafana dashboards, and Visio diagrams are in this GitHub repository.
This sample requires the `aks-preview` Azure CLI extension, version 0.5.145 or later. You can run `az --version` to verify the versions you have installed.

To install the `aks-preview` extension, run the following command:

az extension add --name aks-preview

Run the following command to update to the latest released version of the extension:

az extension update --name aks-preview
This sample provides a comprehensive set of Bicep modules that facilitate the deployment of an Azure Kubernetes Service (AKS) cluster with an integrated Application Gateway for Containers. Additionally, it offers modules for the optional deployment of other essential Azure services, including the Azure Monitor managed service for Prometheus resource and an Azure Managed Grafana instance for efficient monitoring of the cluster's performance and overall health status.
The following diagram illustrates the architecture and network topology implemented by this sample.
The Bicep modules are parametric, so you can choose any network plugin. However, Application Gateway for Containers currently supports only Azure CNI with static IP allocation and Azure CNI with dynamic IP allocation. In addition, this sample shows how to deploy an Azure Kubernetes Service cluster with the following extensions and features:
In a production environment, we strongly recommend deploying a private AKS cluster with Uptime SLA. For more information, see private AKS cluster with a Public DNS address. Alternatively, you can deploy a public AKS cluster and secure access to the API server using authorized IP address ranges.
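As a sketch of the authorized IP ranges approach for a public cluster, assuming hypothetical resource names (`myResourceGroup`, `myAKSCluster`) and placeholder CIDR ranges:

```shell
# Hypothetical names and ranges; replace with your own values.
# Restricts Kubernetes API server access to the listed networks only.
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --api-server-authorized-ip-ranges 203.0.113.0/24,198.51.100.10/32
```

Remember to include the egress IP of any automation (CI/CD agents, jump boxes) that needs to reach the API server.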
The Bicep modules deploy the following Azure resources:
- `SystemSubnet`: this subnet is used for the agent nodes of the `system` node pool.
- `UserSubnet`: this subnet is used for the agent nodes of the `user` node pool.
- `PodSubnet`: this subnet is used to allocate private IP addresses to pods dynamically.
- `ApiServerSubnet`: API Server VNET Integration projects the API server endpoint directly into this delegated subnet in the virtual network where the AKS cluster is deployed.
- `AzureBastionSubnet`: a subnet for the Azure Bastion Host.
- `VmSubnet`: a subnet for a jump-box virtual machine used to connect to the (private) AKS cluster and for the private endpoints.
- `AppGwForConSubnet`: this subnet contains the proxies created by the Application Load Balancer control plane to handle and distribute the ingress traffic to the AKS-hosted pods.
- An Application Gateway for Containers, deployed in a managed or bring your own (BYO) fashion depending on the value of the `applicationGatewayForContainersType` parameter in the `main.bicep` module. With the managed option, an `ApplicationLoadBalancer` Kubernetes object is defined on the cluster. Every time you create a new Gateway or an Ingress object that references the `ApplicationLoadBalancer` Kubernetes object in its annotations, the ALB Controller provisions a new Frontend resource and manages its lifecycle based on the lifecycle of the Gateway or Ingress object.
- A `system` node pool in a dedicated subnet. The default node pool hosts only critical system pods and services. The worker nodes have a node taint that prevents application pods from being scheduled on this node pool.
- A `user` node pool hosting user workloads and artifacts in a dedicated subnet.
- A `windows` node pool hosting Windows Server containers. This node pool is optionally created when the value of the `windowsAgentPoolEnabled` parameter equals `true`.
- A BYO NAT Gateway associated with the `SystemSubnet`, `UserSubnet`, and `PodSubnet` subnets. The `outboundType` property of the cluster is set to `userAssignedNatGateway` to specify that a BYO NAT Gateway is used for outbound connections. NOTE: you can update the `outboundType` after cluster creation, and this will deploy or remove resources as required to put the cluster into the new egress configuration. For more information, see Updating outboundType after cluster creation.
- A deployment script that runs the `install-alb-controller.sh` Bash script, which creates the Application Load Balancer (ALB) Controller via Helm along with Cert-Manager. For more information on deployment scripts, see Use deployment scripts in Bicep.

The Bicep modules provide the flexibility to selectively deploy the following Azure resources based on your requirements:

NOTE: You can find the `architecture.vsdx` file used for the diagram under the `visio` folder.
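As a hedged sketch of updating the `outboundType` of an existing cluster (hypothetical resource names; the change may deploy or remove egress resources, so plan for a brief disruption):

```shell
# Hypothetical names; switches egress from the BYO NAT Gateway to a standard load balancer.
# Run `az aks show` afterwards to confirm the new egress configuration.
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --outbound-type loadBalancer
```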
Bicep is a domain-specific language (DSL) that uses a declarative syntax to deploy Azure resources. It provides concise syntax, reliable type safety, and support for code reuse. Bicep offers the best authoring experience for your infrastructure-as-code solutions in Azure.
The Kubernetes Ingress resource has evolved into the more comprehensive and powerful Kubernetes Gateway API. Ingress controllers and the Gateway API are both Kubernetes mechanisms for managing traffic routing and load balancing. While ingress controllers serve as entry points for external traffic, they have limitations in terms of flexibility and extensibility. The Kubernetes Gateway API emerged as a solution to address these limitations. Designed to be generic, expressive, extensible, and role-oriented, the Gateway API is a modern set of APIs for defining L4 and L7 routing rules in Kubernetes.
Gateway API offers superior functionality compared to ingress controllers because it separates listeners and routes into distinct Kubernetes objects, `Gateway` and `HTTPRoute`. This separation allows different individuals with distinct roles and permissions to deploy them in separate namespaces. Additionally, the Gateway API provides advanced traffic management capabilities, including layer 7 HTTP/HTTPS request forwarding based on criteria such as hostname, path, headers, query string, methods, and ports. It also offers SSL termination and TLS policies for secure traffic management. These features grant better control and customization of traffic routing. The design of the Gateway API was driven by the following goals to address and resolve issues and limitations of ingress controllers:
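The listener/route split can be sketched with a minimal pair of manifests: a `Gateway` owned by a cluster operator and an `HTTPRoute` owned by an application team, living in different namespaces. All names and the `gatewayClassName` below are hypothetical placeholders.

```shell
# Write a sample Gateway + HTTPRoute manifest (hypothetical names throughout).
cat > gateway-sample.yaml <<'EOF'
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
  namespace: infra
spec:
  gatewayClassName: example-gateway-class
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: store-route
  namespace: store
spec:
  parentRefs:
  - name: shared-gateway
    namespace: infra
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /store
    backendRefs:
    - name: store-service
      port: 8080
EOF
# Review the manifest, then apply it with: kubectl apply -f gateway-sample.yaml
```

Because the `HTTPRoute` only references the `Gateway` through `parentRefs`, application teams can manage their own routes without permissions on the shared listener configuration.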
Additional notable capabilities of the Gateway API include:
Ingress resources are suitable for specific use cases:
Gateway API is the recommended option in the following situations:
Azure Application Gateway for Containers enables you to host multiple web applications on the same port, utilizing unique backend services. This allows for efficient multi-site hosting and simplifies the management of your containerized applications. The Application Gateway for Containers fully supports both the Gateway API and Ingress API Kubernetes objects for traffic load balancing. For more information, see:
Azure Application Gateway for Containers supports two main deployment strategies:
Under the managed deployment strategy, an `ApplicationLoadBalancer` Kubernetes object is defined on the cluster. Every time you create a new Gateway or an Ingress object that references the `ApplicationLoadBalancer` Kubernetes object in its annotations, the ALB Controller provisions a new Frontend resource and manages its lifecycle based on the lifecycle of the Gateway or Ingress object.
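For the BYO strategy, the Application Gateway for Containers resource and its association and frontend child resources can be created with the Azure CLI `alb` extension. A minimal sketch with hypothetical names:

```shell
# Hypothetical names (myResourceGroup, myAppGwForContainers, myFrontend, myAssociation);
# <subnet-resource-id> must be the resource ID of the delegated association subnet.
az extension add --name alb

# Create the Application Gateway for Containers resource
az network alb create \
  --resource-group myResourceGroup \
  --name myAppGwForContainers

# Create a frontend child resource; Gateway/Ingress objects reference it by name
az network alb frontend create \
  --resource-group myResourceGroup \
  --alb-name myAppGwForContainers \
  -n myFrontend

# Create the association that connects the gateway to the subnet
az network alb association create \
  --resource-group myResourceGroup \
  --alb-name myAppGwForContainers \
  -n myAssociation \
  --subnet <subnet-resource-id>
```

With BYO, you are responsible for deleting these child resources when they are no longer needed; with the managed strategy, the ALB Controller handles them for you.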
The components of Azure Application Gateway for Containers include:
Azure Application Gateway for Containers offers a range of features and benefits, including:
`IngressExtension` custom resource definition (CRD) of the Application Gateway for Containers. For more details, refer to the documentation on Header Rewrites for Ingress API and Gateway API.
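With the Gateway API, header rewrites use the standard `RequestHeaderModifier` filter on an `HTTPRoute` rule. A minimal sketch, with hypothetical gateway, route, and backend names:

```shell
# Hypothetical names; sets one request header and removes another before
# the request reaches the backend service.
cat > header-rewrite-route.yaml <<'EOF'
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: header-rewrite-route
  namespace: test-infra
spec:
  parentRefs:
  - name: gateway-01
  rules:
  - filters:
    - type: RequestHeaderModifier
      requestHeaderModifier:
        set:
        - name: X-Forwarded-Client
          value: alb
        remove:
        - X-Internal-Debug
    backendRefs:
    - name: backend-v1
      port: 8080
EOF
# Apply with: kubectl apply -f header-rewrite-route.yaml
```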
The Application Gateway for Containers offers an impressive array of traffic management features to enhance your application deployment:
With these comprehensive features, the Application Gateway for Containers empowers you to efficiently manage and optimize your traffic flow. For more information, see Load balancing features.
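One of those traffic management features, weighted traffic splitting, can be sketched with `HTTPRoute` backend weights; the service names below are hypothetical:

```shell
# Hypothetical names; roughly 90% of requests go to backend-v1 and 10% to backend-v2,
# a common shape for canary or blue/green rollouts.
cat > traffic-split-route.yaml <<'EOF'
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: traffic-split-route
  namespace: test-infra
spec:
  parentRefs:
  - name: gateway-01
  rules:
  - backendRefs:
    - name: backend-v1
      port: 8080
      weight: 90
    - name: backend-v2
      port: 8080
      weight: 10
EOF
# Apply with: kubectl apply -f traffic-split-route.yaml
```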
You can use the following tutorials to begin your journey with the Application Gateway for Containers:
You can find scripts and YAML manifests for the above tutorials under the `tutorials` folder.
You can deploy the Bicep modules in the `bicep` folder using the `deploy.sh` Bash script located in the same folder. Specify a value for the following parameters in the `deploy.sh` script and `main.parameters.json` parameters file before deploying the Bicep modules.
- `prefix`: specifies a prefix for all the Azure resources.
- `authenticationType`: specifies the type of authentication when accessing the virtual machine. `sshPublicKey` is the recommended value. Allowed values: `sshPublicKey` and `password`.
- `applicationGatewayForContainersType`: specifies the deployment type for the Application Gateway for Containers:
  - `managed`: the Application Gateway for Containers resource and its child resources, association and frontends, are created and handled by the Azure Load Balancer Controller in the node resource group of the AKS cluster.
  - `byo`: the Application Gateway for Containers resource and its child resources are created in the target resource group. You are responsible for the provisioning and deletion of the association and frontend child resources.
- `vmAdminUsername`: specifies the name of the administrator account of the virtual machine.
- `vmAdminPasswordOrKey`: specifies the SSH key or password for the virtual machine.
- `aksClusterSshPublicKey`: specifies the SSH key or password for the AKS cluster agent nodes.
- `aadProfileAdminGroupObjectIDs`: when deploying an AKS cluster with Azure AD and Azure RBAC integration, this array parameter contains the list of Azure AD group object IDs that will have the admin role of the cluster.
- `keyVaultObjectIds`: specifies the object IDs of the service principals to configure in Key Vault access policies.
- `windowsAgentPoolEnabled`: specifies whether to create a Windows Server agent pool.
We suggest reading sensitive configuration data such as passwords or SSH keys from a pre-existing Azure Key Vault resource. For more information, see Use Azure Key Vault to pass secure parameter value during Bicep deployment.
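A minimal sketch of that approach, assuming a pre-existing Key Vault named `myKeyVault` holding a `vmAdminPassword` secret (hypothetical names throughout):

```shell
# Read the sensitive value from Key Vault at deployment time instead of hardcoding it.
vmAdminPasswordOrKey=$(az keyvault secret show \
  --vault-name myKeyVault \
  --name vmAdminPassword \
  --query value \
  --output tsv)

# Deploy the Bicep modules, overriding the secure parameter on the command line.
az deployment group create \
  --resource-group myResourceGroup \
  --template-file main.bicep \
  --parameters @main.parameters.json \
  --parameters vmAdminPasswordOrKey="$vmAdminPasswordOrKey"
```

This keeps the secret out of the parameters file and out of source control.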
The following code from the `applicationGatewayForContainers.bicep` Bicep module is used to deploy an Application Gateway for Containers.
// Parameters
@description('Specifies the name of the Application Gateway for Containers.')
param name string = 'dummy'
@description('Specifies whether the Application Gateway for Containers is managed or bring your own (BYO).')
@allowed([
'managed'
'byo'
])
param type string = 'managed'
@description('Specifies the workspace id of the Log Analytics used to monitor the Application Gateway for Containers.')
param workspaceId string
@description('Specifies the location of the Application Gateway for Containers.')
param location string
@description('Specifies the name of the existing AKS cluster.')
param aksClusterName string
@description('Specifies the name of the AKS cluster node resource group. This needs to be passed as a parameter and cannot be calculated inside this module.')
param nodeResourceGroupName string
@description('Specifies the name of the existing virtual network.')
param virtualNetworkName string
@description('Specifies the name of the subnet which contains the Application Gateway for Containers.')
param subnetName string
@description('Specifies the namespace for the Application Load Balancer Controller of the Application Gateway for Containers.')
param namespace string = 'azure-alb-system'
@description('Specifies the name of the service account for the Application Load Balancer Controller of the Application Gateway for Containers.')
param serviceAccountName string = 'alb-controller-sa'
@description('Specifies the resource tags for the Application Gateway for Containers.')
param tags object
// Variables
var diagnosticSettingsName = 'diagnosticSettings'
var logCategories = [
'TrafficControllerAccessLog'
]
var metricCategories = [
'AllMetrics'
]
var logs = [for category in logCategories: {
category: category
enabled: true
}]
var metrics = [for category in metricCategories: {
category: category
enabled: true
}]
// Resources
resource aksCluster 'Microsoft.ContainerService/managedClusters@2024-01-02-preview' existing = {
name: aksClusterName
}
resource virtualNetwork 'Microsoft.Network/virtualNetworks@2021-08-01' existing = {
name: virtualNetworkName
}
resource subnet 'Microsoft.Network/virtualNetworks/subnets@2021-08-01' existing = {
parent: virtualNetwork
name: subnetName
}
resource readerRole 'Microsoft.Authorization/roleDefinitions@2022-04-01' existing = {
name: 'acdd72a7-3385-48ef-bd42-f606fba81ae7'
scope: subscription()
}
resource networkContributorRole 'Microsoft.Authorization/roleDefinitions@2022-04-01' existing = {
name: '4d97b98b-1d4f-4787-a291-c67834d212e7'
scope: subscription()
}
resource appGwForContainersConfigurationManagerRole 'Microsoft.Authorization/roleDefinitions@2022-04-01' existing = {
name: 'fbc52c3f-28ad-4303-a892-8a056630b8f1'
scope: subscription()
}
resource applicationLoadBalancerManagedIdentity 'Microsoft.ManagedIdentity/userAssignedIdentities@2018-11-30' = {
name: '${name}ManagedIdentity'
location: location
tags: tags
}
// Assign the Network Contributor role to the Application Load Balancer user-assigned managed identity with the association subnet as a scope
resource subnetNetworkContributorRoleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
name: guid(name, applicationLoadBalancerManagedIdentity.name, networkContributorRole.id)
scope: subnet
properties: {
roleDefinitionId: networkContributorRole.id
principalId: applicationLoadBalancerManagedIdentity.properties.principalId
principalType: 'ServicePrincipal'
}
}
// Assign the AppGw for Containers Configuration Manager role to the Application Load Balancer user-assigned managed identity with the resource group as a scope
resource appGwForContainersConfigurationManagerRoleAssignmenOnResourceGroup 'Microsoft.Authorization/roleAssignments@2022-04-01' = if (type == 'byo') {
name: guid(name, applicationLoadBalancerManagedIdentity.name, appGwForContainersConfigurationManagerRole.id)
scope: resourceGroup()
properties: {
roleDefinitionId: appGwForContainersConfigurationManagerRole.id
principalId: applicationLoadBalancerManagedIdentity.properties.principalId
principalType: 'ServicePrincipal'
}
}
// Assign the AppGw for Containers Configuration Manager role to the Application Load Balancer user-assigned managed identity with the AKS cluster node resource group as a scope
module appGwForContainersConfigurationManagerRoleAssignmenOnnodeResourceGroupName 'resourceGroupRoleAssignment.bicep' = if (type == 'managed') {
name: guid(nodeResourceGroupName, applicationLoadBalancerManagedIdentity.name, appGwForContainersConfigurationManagerRole.id)
scope: resourceGroup(nodeResourceGroupName)
params: {
principalId: applicationLoadBalancerManagedIdentity.properties.principalId
roleName: appGwForContainersConfigurationManagerRole.name
}
}
// Assign the Reader role to the Application Load Balancer user-assigned managed identity with the AKS cluster node resource group as a scope
module nodeResourceGroupReaderRoleAssignment 'resourceGroupRoleAssignment.bicep' = {
name: guid(nodeResourceGroupName, applicationLoadBalancerManagedIdentity.name, readerRole.id)
scope: resourceGroup(nodeResourceGroupName)
params: {
principalId: applicationLoadBalancerManagedIdentity.properties.principalId
roleName: readerRole.name
}
}
// Create federated identity for the Application Load Balancer user-assigned managed identity
resource federatedIdentityCredentials 'Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials@2023-01-31' = if (!empty(namespace) && !empty(serviceAccountName)) {
name: 'azure-alb-identity'
parent: applicationLoadBalancerManagedIdentity
properties: {
issuer: aksCluster.properties.oidcIssuerProfile.issuerURL
subject: 'system:serviceaccount:${namespace}:${serviceAccountName}'
audiences: [
'api://AzureADTokenExchange'
]
}
}
resource applicationGatewayForContainers 'Microsoft.ServiceNetworking/trafficControllers@2023-05-01-preview' = if (type == 'byo') {
name: name
location: location
tags: tags
}
resource applicationGatewayDiagnosticSettings 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = if (type == 'byo') {
name: diagnosticSettingsName
scope: applicationGatewayForContainers
properties: {
workspaceId: workspaceId
logs: logs
metrics: metrics
}
}
// Outputs
output id string = applicationGatewayForContainers.id
output name string = applicationGatewayForContainers.name
output type string = applicationGatewayForContainers.type
output principalId string = applicationLoadBalancerManagedIdentity.properties.principalId
output clientId string = applicationLoadBalancerManagedIdentity.properties.clientId
The provided Bicep module performs the following steps:

- Accepts a set of parameters, such as `name`, `type`, `location`, `tags`, and more.
- Defines the `diagnosticSettingsName`, `logCategories`, `metricCategories`, `logs`, and `metrics` variables.
- When the `type` parameter is set to `byo`, creates an Application Gateway for Containers resource in the target resource group and sets up a diagnostic settings resource to collect logs and metrics from the Application Gateway for Containers in the specified Log Analytics workspace.
- When the `type` parameter is set to `byo`, assigns the AppGw for Containers Configuration Manager role to the Application Load Balancer user-assigned managed identity, scoped to the resource group. This role enables the ALB Controller to access and configure the Application Gateway for Containers resource.
- When the `type` parameter is set to `managed`, assigns the AppGw for Containers Configuration Manager role to the Application Load Balancer user-assigned managed identity, scoped to the AKS cluster node resource group. In this case, the Application Gateway for Containers is created and managed by the ALB Controller in the AKS node resource group.
- Creates a federated identity credential named `azure-alb-identity` for the Application Load Balancer user-assigned managed identity.
- Defines the `applicationGatewayForContainers` resource using the Microsoft.ServiceNetworking/trafficControllers resource type to create the Application Gateway for Containers based on the provided parameters.
- Outputs the `id`, `name`, and `type` of the Application Gateway for Containers, along with the `principalId` and `clientId` of the ALB Controller user-assigned managed identity.

When the value of the `type` parameter is set to `byo`, the Bicep module creates an Application Gateway for Containers resource in the specified target resource group. Alternatively, when the `type` parameter is set to `managed`, the ALB Controller installed via Helm in the deployment script handles the creation and management of the Application Gateway for Containers in the AKS node resource group.
The following deployment script is used to run the `install-alb-controller-sa.sh` Bash script stored in a public container of a storage account. This script installs the necessary dependencies, retrieves cluster credentials, checks the cluster's type, installs Helm and Helm charts, creates namespaces and service accounts, and deploys the Application Load Balancer Controller.
# Install kubectl
az aks install-cli --only-show-errors
# Get AKS credentials
az aks get-credentials \
--admin \
--name $clusterName \
--resource-group $resourceGroupName \
--subscription $subscriptionId \
--only-show-errors
# Check if the cluster is private or not
private=$(az aks show --name $clusterName \
--resource-group $resourceGroupName \
--subscription $subscriptionId \
--query apiServerAccessProfile.enablePrivateCluster \
--output tsv)
# Install Helm
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 -o get_helm.sh -s
chmod 700 get_helm.sh
./get_helm.sh &>/dev/null
# Add Helm repos
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo add jetstack https://charts.jetstack.io
# Update Helm repos
helm repo update
# initialize variables
applicationGatewayForContainersName=''
diagnosticSettingName="DefaultDiagnosticSettings"
if [[ $private == 'true' ]]; then
# Log whether the cluster is public or private
echo "$clusterName AKS cluster is private"
# Install Prometheus
command="helm upgrade prometheus prometheus-community/kube-prometheus-stack \
--install \
--create-namespace \
--namespace prometheus \
--set prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues=false \
--set prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false"
az aks command invoke \
--name $clusterName \
--resource-group $resourceGroupName \
--subscription $subscriptionId \
--command "$command"
# Install NGINX ingress controller using the internal load balancer
command="helm upgrade nginx-ingress ingress-nginx/ingress-nginx \
--install \
--create-namespace \
--namespace ingress-basic \
--set controller.replicaCount=3 \
--set controller.nodeSelector.\"kubernetes\.io/os\"=linux \
--set defaultBackend.nodeSelector.\"kubernetes\.io/os\"=linux \
--set controller.metrics.enabled=true \
--set controller.metrics.serviceMonitor.enabled=true \
--set controller.metrics.serviceMonitor.additionalLabels.release=\"prometheus\" \
--set controller.service.annotations.\"service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path\"=/healthz"
az aks command invoke \
--name $clusterName \
--resource-group $resourceGroupName \
--subscription $subscriptionId \
--command "$command"
# Install certificate manager
command="helm upgrade cert-manager jetstack/cert-manager \
--install \
--create-namespace \
--namespace cert-manager \
--version v1.14.0 \
--set installCRDs=true \
--set nodeSelector.\"kubernetes\.io/os\"=linux \
--set \"extraArgs={--feature-gates=ExperimentalGatewayAPISupport=true}\""
az aks command invoke \
--name $clusterName \
--resource-group $resourceGroupName \
--subscription $subscriptionId \
--command "$command"
# Create cluster issuer
command="cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-nginx
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: $email
privateKeySecretRef:
name: letsencrypt
solvers:
- http01:
ingress:
class: nginx
podTemplate:
spec:
nodeSelector:
"kubernetes.io/os": linux
EOF"
az aks command invoke \
--name $clusterName \
--resource-group $resourceGroupName \
--subscription $subscriptionId \
--command "$command"
if [[ -n "$namespace" && \
-n "$serviceAccountName" ]]; then
# Create workload namespace
command="kubectl create namespace $namespace"
az aks command invoke \
--name $clusterName \
--resource-group $resourceGroupName \
--subscription $subscriptionId \
--command "$command"
# Create service account
command="cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
annotations:
azure.workload.identity/client-id: $workloadManagedIdentityClientId
azure.workload.identity/tenant-id: $tenantId
labels:
azure.workload.identity/use: "true"
name: $serviceAccountName
namespace: $namespace
EOF"
az aks command invoke \
--name $clusterName \
--resource-group $resourceGroupName \
--subscription $subscriptionId \
--command "$command"
fi
if [[ "$applicationGatewayForContainersEnabled" == "true" \
&& -n "$applicationGatewayForContainersManagedIdentityClientId" \
&& -n "$applicationGatewayForContainersSubnetId" ]]; then
# Install the Application Load Balancer Controller
command="helm upgrade alb-controller oci://mcr.microsoft.com/application-lb/charts/alb-controller \
--install \
--create-namespace \
--namespace $applicationGatewayForContainersNamespace \
--version 1.0.0 \
--set albController.podIdentity.clientID=$applicationGatewayForContainersManagedIdentityClientId"
az aks command invoke \
--name $clusterName \
--resource-group $resourceGroupName \
--subscription $subscriptionId \
--command "$command"
# Create workload namespace
command="kubectl create namespace alb-infra"
az aks command invoke \
--name $clusterName \
--resource-group $resourceGroupName \
--subscription $subscriptionId \
--command "$command"
if [[ "$applicationGatewayForContainersType" == "managed" ]]; then
# Define the ApplicationLoadBalancer resource, specifying the subnet ID the Application Gateway for Containers association resource should deploy into.
# The association establishes connectivity from Application Gateway for Containers to the defined subnet (and connected networks where applicable) to
# be able to proxy traffic to a defined backend.
command="kubectl apply -f - <<EOF
apiVersion: alb.networking.azure.io/v1
kind: ApplicationLoadBalancer
metadata:
name: alb
namespace: alb-infra
spec:
associations:
- $applicationGatewayForContainersSubnetId
EOF"
az aks command invoke \
--name $clusterName \
--resource-group $resourceGroupName \
--subscription $subscriptionId \
--command "$command"
if [[ -n $nodeResourceGroupName ]]; then \
echo -n "Retrieving the resource id of the Application Gateway for Containers..."
counter=1
while [ $counter -le 600 ]
do
# Retrieve the resource id of the managed Application Gateway for Containers resource
applicationGatewayForContainersId=$(az resource list \
--resource-type "Microsoft.ServiceNetworking/TrafficControllers" \
--resource-group $nodeResourceGroupName \
--query [0].id \
--output tsv)
if [[ -n $applicationGatewayForContainersId ]]; then
echo
break
else
echo -n '.'
counter=$((counter + 1))
sleep 1
fi
done
if [[ -n $applicationGatewayForContainersId ]]; then
applicationGatewayForContainersName=$(basename $applicationGatewayForContainersId)
echo "[$applicationGatewayForContainersId] resource id of the [$applicationGatewayForContainersName] Application Gateway for Containers successfully retrieved"
else
echo "Failed to retrieve the resource id of the Application Gateway for Containers"
exit -1
fi
# Check if the diagnostic setting already exists for the Application Gateway for Containers
echo "Checking if the [$diagnosticSettingName] diagnostic setting for the [$applicationGatewayForContainersName] Application Gateway for Containers actually exists..."
result=$(az monitor diagnostic-settings show \
--name $diagnosticSettingName \
--resource $applicationGatewayForContainersId \
--query name \
--output tsv 2>/dev/null)
if [[ -z $result ]]; then
echo "[$diagnosticSettingName] diagnostic setting for the [$applicationGatewayForContainersName] Application Gateway for Containers does not exist"
echo "Creating [$diagnosticSettingName] diagnostic setting for the [$applicationGatewayForContainersName] Application Gateway for Containers..."
# Create the diagnostic setting for the Application Gateway for Containers
az monitor diagnostic-settings create \
--name $diagnosticSettingName \
--resource $applicationGatewayForContainersId \
--logs '[{"categoryGroup": "allLogs", "enabled": true}]' \
--metrics '[{"category": "AllMetrics", "enabled": true}]' \
--workspace $workspaceId \
--only-show-errors 1>/dev/null
if [[ $? == 0 ]]; then
echo "[$diagnosticSettingName] diagnostic setting for the [$applicationGatewayForContainersName] Application Gateway for Containers successfully created"
else
echo "Failed to create [$diagnosticSettingName] diagnostic setting for the [$applicationGatewayForContainersName] Application Gateway for Containers"
exit -1
fi
else
echo "[$diagnosticSettingName] diagnostic setting for the [$applicationGatewayForContainersName] Application Gateway for Containers already exists"
fi
fi
fi
fi
else
# Log whether the cluster is public or private
echo "$clusterName AKS cluster is public"
# Install Prometheus
echo "Installing Prometheus..."
helm upgrade prometheus prometheus-community/kube-prometheus-stack \
--install \
--create-namespace \
--namespace prometheus \
--set prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues=false \
--set prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false
if [[ $? == 0 ]]; then
echo "Prometheus successfully installed"
else
echo "Failed to install Prometheus"
exit -1
fi
# Install NGINX ingress controller using the internal load balancer
echo "Installing NGINX ingress controller..."
helm upgrade nginx-ingress ingress-nginx/ingress-nginx \
--install \
--create-namespace \
--namespace ingress-basic \
--set controller.replicaCount=3 \
--set controller.nodeSelector."kubernetes\.io/os"=linux \
--set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
--set controller.metrics.enabled=true \
--set controller.metrics.serviceMonitor.enabled=true \
--set controller.metrics.serviceMonitor.additionalLabels.release="prometheus" \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
if [[ $? == 0 ]]; then
echo "NGINX ingress controller successfully installed"
else
echo "Failed to install NGINX ingress controller"
exit -1
fi
# Install certificate manager
echo "Installing certificate manager..."
helm upgrade cert-manager jetstack/cert-manager \
--install \
--create-namespace \
--namespace cert-manager \
--version v1.14.0 \
--set installCRDs=true \
--set nodeSelector."kubernetes\.io/os"=linux \
--set "extraArgs={--feature-gates=ExperimentalGatewayAPISupport=true}"
if [[ $? == 0 ]]; then
echo "Certificate manager successfully installed"
else
echo "Failed to install certificate manager"
exit -1
fi
# Create cluster issuer
echo "Creating cluster issuer..."
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-nginx
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: $email
privateKeySecretRef:
name: letsencrypt
solvers:
- http01:
ingress:
class: nginx
podTemplate:
spec:
nodeSelector:
"kubernetes.io/os": linux
EOF
if [[ -n "$namespace" && \
-n "$serviceAccountName" ]]; then
# Create workload namespace
result=$(kubectl get namespace -o jsonpath='{.items[?(@.metadata.name=="'$namespace'")].metadata.name}')
if [[ -n $result ]]; then
echo "$namespace namespace already exists in the cluster"
else
echo "$namespace namespace does not exist in the cluster"
echo "Creating $namespace namespace in the cluster..."
kubectl create namespace $namespace
fi
# Create service account
echo "Creating $serviceAccountName service account..."
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
annotations:
azure.workload.identity/client-id: $workloadManagedIdentityClientId
azure.workload.identity/tenant-id: $tenantId
labels:
azure.workload.identity/use: "true"
name: $serviceAccountName
namespace: $namespace
EOF
fi
if [[ "$applicationGatewayForContainersEnabled" == "true" \
&& -n "$applicationGatewayForContainersManagedIdentityClientId" \
&& -n "$applicationGatewayForContainersSubnetId" ]]; then
# Install the Application Load Balancer
echo "Installing Application Load Balancer Controller in $applicationGatewayForContainersNamespace namespace using $applicationGatewayForContainersManagedIdentityClientId managed identity..."
helm upgrade alb-controller oci://mcr.microsoft.com/application-lb/charts/alb-controller \
--install \
--create-namespace \
--namespace $applicationGatewayForContainersNamespace \
--version 1.0.0 \
--set albController.namespace=$applicationGatewayForContainersNamespace \
--set albController.podIdentity.clientID=$applicationGatewayForContainersManagedIdentityClientId
if [[ $? == 0 ]]; then
echo "Application Load Balancer Controller successfully installed"
else
echo "Failed to install Application Load Balancer Controller"
exit 1
fi
if [[ "$applicationGatewayForContainersType" == "managed" ]]; then
# Create alb-infra namespace
albInfraNamespace='alb-infra'
result=$(kubectl get namespace -o 'jsonpath={.items[?(@.metadata.name=="'$albInfraNamespace'")].metadata.name}')
if [[ -n $result ]]; then
echo "$albInfraNamespace namespace already exists in the cluster"
else
echo "$albInfraNamespace namespace does not exist in the cluster"
echo "Creating $albInfraNamespace namespace in the cluster..."
kubectl create namespace $albInfraNamespace
fi
# Define the ApplicationLoadBalancer resource, specifying the subnet ID the Application Gateway for Containers association resource should deploy into.
# The association establishes connectivity from Application Gateway for Containers to the defined subnet (and connected networks where applicable) to
# be able to proxy traffic to a defined backend.
echo "Creating ApplicationLoadBalancer resource..."
kubectl apply -f - <<EOF
apiVersion: alb.networking.azure.io/v1
kind: ApplicationLoadBalancer
metadata:
  name: alb
  namespace: alb-infra
spec:
  associations:
  - $applicationGatewayForContainersSubnetId
EOF
if [[ -n $nodeResourceGroupName ]]; then
echo -n "Retrieving the resource id of the Application Gateway for Containers..."
counter=1
while [ $counter -le 20 ]
do
# Retrieve the resource id of the managed Application Gateway for Containers resource
applicationGatewayForContainersId=$(az resource list \
--resource-type "Microsoft.ServiceNetworking/TrafficControllers" \
--resource-group $nodeResourceGroupName \
--query [0].id \
--output tsv)
if [[ -n $applicationGatewayForContainersId ]]; then
echo
break
else
echo -n '.'
counter=$((counter + 1))
sleep 1
fi
done
if [[ -n $applicationGatewayForContainersId ]]; then
applicationGatewayForContainersName=$(basename $applicationGatewayForContainersId)
echo "[$applicationGatewayForContainersId] resource id of the [$applicationGatewayForContainersName] Application Gateway for Containers successfully retrieved"
else
echo "Failed to retrieve the resource id of the Application Gateway for Containers"
exit 1
fi
# Check if the diagnostic setting already exists for the Application Gateway for Containers
echo "Checking if the [$diagnosticSettingName] diagnostic setting for the [$applicationGatewayForContainersName] Application Gateway for Containers actually exists..."
result=$(az monitor diagnostic-settings show \
--name $diagnosticSettingName \
--resource $applicationGatewayForContainersId \
--query name \
--output tsv 2>/dev/null)
if [[ -z $result ]]; then
echo "[$diagnosticSettingName] diagnostic setting for the [$applicationGatewayForContainersName] Application Gateway for Containers does not exist"
echo "Creating [$diagnosticSettingName] diagnostic setting for the [$applicationGatewayForContainersName] Application Gateway for Containers..."
# Create the diagnostic setting for the Application Gateway for Containers
az monitor diagnostic-settings create \
--name $diagnosticSettingName \
--resource $applicationGatewayForContainersId \
--logs '[{"categoryGroup": "allLogs", "enabled": true}]' \
--metrics '[{"category": "AllMetrics", "enabled": true}]' \
--workspace $workspaceId \
--only-show-errors 1>/dev/null
if [[ $? == 0 ]]; then
echo "[$diagnosticSettingName] diagnostic setting for the [$applicationGatewayForContainersName] Application Gateway for Containers successfully created"
else
echo "Failed to create [$diagnosticSettingName] diagnostic setting for the [$applicationGatewayForContainersName] Application Gateway for Containers"
exit 1
fi
else
echo "[$diagnosticSettingName] diagnostic setting for the [$applicationGatewayForContainersName] Application Gateway for Containers already exists"
fi
fi
fi
fi
fi
# Create output as JSON file
echo '{}' |
jq --arg x "$applicationGatewayForContainersName" '.applicationGatewayForContainersName=$x' |
jq --arg x "$namespace" '.namespace=$x' |
jq --arg x "$serviceAccountName" '.serviceAccountName=$x' |
jq --arg x 'prometheus' '.prometheus=$x' |
jq --arg x 'cert-manager' '.certManager=$x' |
jq --arg x 'ingress-basic' '.nginxIngressController=$x' > "$AZ_SCRIPTS_OUTPUT_PATH"
The script performs the following steps:

- Installs kubectl using the Azure CLI command az aks install-cli.
- Retrieves the AKS cluster credentials using the Azure CLI command az aks get-credentials.
- Checks whether the cluster is private by reading the enablePrivateCluster attribute of the cluster's API server access profile using the Azure CLI command az aks show.
- Installs Helm by downloading and executing the get_helm.sh script.
- Adds the Helm repositories for the Kube Prometheus Stack and Cert-Manager using the helm repo add command.
- Refreshes the local chart cache using the helm repo update command.
- Installs the Kube Prometheus Stack using the helm upgrade --install command.
- Installs the Certificate Manager using the helm upgrade --install command.
- If namespace and serviceAccountName are provided, creates the namespace and service account using kubectl. This information is optional and can be used to create the namespace and service account for a workload.
- Installs the ALB Controller using the helm upgrade --install command. The command specifies the client id of the ALB Controller managed identity from the applicationGatewayForContainersManagedIdentityClientId environment variable and the target namespace from the applicationGatewayForContainersNamespace environment variable. For more information on the installation of the ALB Controller via Helm, see Quickstart: Deploy Application Gateway for Containers ALB Controller.
- If the applicationGatewayForContainersType environment variable is set to managed, creates the alb-infra namespace using kubectl and deploys the ApplicationLoadBalancer resource in the newly created namespace. The YAML manifest specifies the resource id of the subnet used by the association from the applicationGatewayForContainersSubnetId environment variable.
- Creates a diagnostic setting for the Application Gateway for Containers using the Azure CLI command az monitor diagnostic-settings create.
- Creates the output JSON file containing the Application Gateway for Containers name, the workload namespace and service account name, if any, and the namespaces used for Prometheus and the Certificate Manager.
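The JSON produced by the last step can be parsed back with jq, for example in a follow-up automation step that consumes the deployment script outputs. A minimal sketch with sample values (a temporary file stands in for the real path held by the AZ_SCRIPTS_OUTPUT_PATH environment variable):

```shell
#!/bin/bash
# Build a sample output file the same way the deployment script does
outputFile=$(mktemp)
echo '{}' |
  jq --arg x 'alb' '.applicationGatewayForContainersName=$x' |
  jq --arg x 'gateway-demo' '.namespace=$x' |
  jq --arg x 'cert-manager' '.certManager=$x' > "$outputFile"

# Read individual values back with jq -r (raw output, no surrounding quotes)
albName=$(jq -r '.applicationGatewayForContainersName' "$outputFile")
echo "Application Gateway for Containers name: $albName"
```

Quoting the variables passed to --arg, as done above, keeps the pipeline from failing when one of the optional values is empty.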
You can use the Azure portal to list the deployed resources in the resource group. If you chose to deploy an Application Gateway for Containers managed by the ALB Controller, you will find the resource under the node resource group of the AKS cluster.
You can also use Azure CLI to list the deployed resources in the resource group and AKS node resource group:
az resource list --resource-group <resource-group-name>
nodeResourceGroupName=$(az aks show --name <aks-name> --resource-group <resource-group-name> --query "nodeResourceGroup" -o tsv)
az resource list --resource-group $nodeResourceGroupName
You can also use the following PowerShell cmdlet to list the deployed resources in the resource group and AKS node resource group:
Get-AzResource -ResourceGroupName <resource-group-name>
$NodeResourceGroup = (Get-AzAksCluster -Name <aks-name> -ResourceGroupName <resource-group-name>).NodeResourceGroup
Get-AzResource -ResourceGroupName $NodeResourceGroup
After confirming the successful deployment, you can easily deploy your workloads and configure them to have a public endpoint through the newly created Application Gateway for Containers. To achieve this, you have two options: either a Gateway or an Ingress in Kubernetes. These options allow you to expose your application to the public internet using the Application Gateway for Containers resource. The documentation provides various tutorials for both the Gateway API and the Ingress API, including:
You can find the scripts and YAML manifests for these tutorials in the tutorials folder. Additionally, the app folder contains two samples: byo, for a bring-your-own installation of the Application Gateway for Containers, and managed, which works with an Application Gateway for Containers managed by the ALB Controller.
For simplicity, let's focus on the managed sample and leave the byo sample for the reader to review. Let's start by reviewing the YAML manifests.
The deployment.yaml file contains the YAML definition for the deployment, the service, and a secret that holds a temporary certificate for the Gateway listener; the certificate manager will later replace it with a certificate issued by Let's Encrypt. For more information on how to use the certificate manager to issue a new certificate to a Gateway using HTTP01 challenges, see Configuring the HTTP-01 Gateway API solver.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
spec:
  replicas: 3
  selector:
    matchLabels:
      app: httpbin
  template:
    metadata:
      labels:
        app: httpbin
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: httpbin
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: httpbin
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
      - image: docker.io/kennethreitz/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        resources:
          requests:
            memory: "64Mi"
            cpu: "125m"
          limits:
            memory: "128Mi"
            cpu: "250m"
        ports:
        - containerPort: 80
        env:
        - name: PORT
          value: "80"
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: ClusterIP
  selector:
    app: httpbin
---
apiVersion: v1
kind: Secret
metadata:
  name: listener-tls-secret
data:
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURLakNDQWhJQ0FRRXdEUVlKS29aSWh2Y05BUUVMQlFBd1d6RUxNQWtHQTFVRUJoTUNRVlV4RXpBUkJnTlYKQkFnTUNsTnZiV1V0VTNSaGRHVXhJVEFmQmdOVkJBb01HRWx1ZEdWeWJtVjBJRmRwWkdkcGRITWdVSFI1SUV4MApaREVVTUJJR0ExVUVBd3dMWlhoaGJYQnNaUzVqYjIwd0hoY05Nakl4TVRFMk1EVXhPREV6V2hjTk1qVXdPREV5Ck1EVXhPREV6V2pCYk1Rc3dDUVlEVlFRR0V3SkJWVEVUTUJFR0ExVUVDQXdLVTI5dFpTMVRkR0YwWlRFaE1COEcKQTFVRUNnd1lTVzUwWlhKdVpYUWdWMmxrWjJsMGN5QlFkSGtnVEhSa01SUXdFZ1lEVlFRRERBdGxlR0Z0Y0d4bApMbU52YlRDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTm4vcHNCYjh0RVVlV3lLCkd4UHZmaFdSaHl6Qm9veFQraTBGUzBXYlNJdEtGUFl3ZmhKay9TYmgzZ09mL1NzVVE0MU1kVkJDb25meEF1OHAKdkhrdlk5cjIrRlEwcXBqb3RuNVJadm1QVlhnTVU0MHZhVzdJSkVzUEIyTTk4UDlrL2VkZXhFOUNEbVhRRUgySApYYXFoaFVpRnh1Q0NIeThLWHJOb0JMVGZ1VWRsM2lycTFJMFAxSkVJaXQ2WC9DeVFWQmU3SVI5ZGZlVXc5UFlsClRKVVhBRGdRTzBCVGRYb3RRc1VUZjI1dktFRWcyUjVHQXIwVC9FcThjS3BNcWFiYzhydCtZTjlQYTVLcUFyWS8KR2M0UkdpTVNBSWlTclhtMHFYQzU2cjhEVFk0T2VhV292ZW9TcXp1Ymxzc0lZNHd4alF4OUdBSC9GTWpxU0ltTgozREQ0RElFQ0F3RUFBVEFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBSTBuMTc5VU8xQVFiRmdqMGEvdHBpTDBLCkFPS0U4UTlvSzBleS80VlYzREdQM3duR2FOMW52d2xCVFNKWGFVK1JHejZQZTUxN2RoRklGR3BYblEzemxZV1UKVE0zU0V1NXd4eWpVZUtWSVlvOGQ3dTN2UXdDMnhHK1IrbStSZ0Jxcm5ib003cVhwYjR0dkNRRi82TXl6TzZDNwpNM0RKZmNqdWQxSEszcmlXQy9CYlB3ZjBlN1dtWW95eGZoaTZBUWRZNmZJU3RRZVhVbWJ1aWtPTDE1VjdETEFtCkxHOSt5cExOdHFsa2VXTXBVcU45R0d6ZjdpSTNVMlJKWTlpUjdrcHUzMXdDWGY4VUhPcUxva2prU1JTTTV0dzcKWXRDNHdjN2dNS1FmSi9GaS9JVXRKdmx6djk1V0lGSU4rSURtbHBPdFVZQTBwMmVFeERtRFFJc2xZV1YwMVE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
  tls.key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2Z0lCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktnd2dnU2tBZ0VBQW9JQkFRRFovNmJBVy9MUkZIbHMKaWhzVDczNFZrWWNzd2FLTVUvb3RCVXRGbTBpTFNoVDJNSDRTWlAwbTRkNERuLzByRkVPTlRIVlFRcUozOFFMdgpLYng1TDJQYTl2aFVOS3FZNkxaK1VXYjVqMVY0REZPTkwybHV5Q1JMRHdkalBmRC9aUDNuWHNSUFFnNWwwQkI5CmgxMnFvWVZJaGNiZ2doOHZDbDZ6YUFTMDM3bEhaZDRxNnRTTkQ5U1JDSXJlbC93c2tGUVh1eUVmWFgzbE1QVDIKSlV5VkZ3QTRFRHRBVTNWNkxVTEZFMzl1YnloQklOa2VSZ0s5RS94S3ZIQ3FUS21tM1BLN2ZtRGZUMnVTcWdLMgpQeG5PRVJvakVnQ0lrcTE1dEtsd3VlcS9BMDJPRG5tbHFMM3FFcXM3bTViTENHT01NWTBNZlJnQi94VEk2a2lKCmpkd3crQXlCQWdNQkFBRUNnZ0VCQUozalpYaW9uK01DZXpjN2g0VVd6akQ4NS9Sb2dqdzBqbHVSSEFWY0JGeXQKMlNTOTFuR29KeG5FT1RKUzYrQUpteXQ1bHZYOGJRT0YwV1E2ekVEUksvZHBMRTZBbnBhRTViZnphU3VTdm9wbQpFeFdNbzBZVE93WUo2b1hjVlBJRXlVaU1BSTZPL3pLS1VZYzVSWVBSM0dDOFUyQkRuaVpKMG5FS0EyNmxJdUlyCjlVcWtkSk9wRzJtK09iTnc5a0paZVRJblN2TkJKQ0NXQlRwcmY3TS9IRUprbE5aQU5mV0F0YXptUFp3QXI2cFIKOEpHbzV1ZUl2NXI3S1FJbkpldEF3YStpQ3VTUHZvUlZNOUdrSmZxSHVtVmNJbjU5Z0ZzcXR6dzVGNUlocWQ5eQo3dHNxUTdxNUYxb1BLeGxPOXl4TVQxaUlnWmRNaDZqODFuM1kwaWFlN2lrQ2dZRUE4UG9tVmQxSXh4c3ZYbmRIClM5MkVQUENkQmYybzM2SmczNEJYc3QwV3BaN1ZNWFhvVjFPeFhGeWpsNm1vNVBjMTRUSXpjd2NVdWJJMGVhbWEKVWxVbnR1bDFPMkdhYlh4eDJrR1l6ZmVvalZBVUh5OGNjeWxoTkpXRDl5Ykx0TCttNTBBTFI3V1JSdG5LSUxaSApJc3NjTGRTcGYyMUNUYWU3REk3Q2NNQ3RSbmNDZ1lFQTU1YkhTRFBaNmFUZWp4cDNwdHBDNitrN1duMVdlYnBmCkdDL1Rlb0pIaHVteDB6K3lPNitPcng0YlRZSFhjcU1Fa2pwRWxqN0xwb3ZxMktjYUN6SUxvVHdQTWVjNncxSVQKZTRld01JM3Nid2FKMFFhZXgvWHdVR1J0R3RuNkVka25qK2VaWSsxYUpscEJBcjlZZ0VKaTFUci9wZW9VdEtJUwpYSGNsbzY3dmFzY0NnWUJwQ2pFaHBuWnR5OHpIR2FrclNhQzF5NUEycDA0d1JTQ0M2L2ZPVUk3cG5LV0RqTWk5CklBOGttb0Q0d0F5TjJiQlR2RVV1ODd3MkFaYmNIWERXU0tZcUZmTnk4ZVdWcWZRYTFoTWNYTUxNN2tZSEhjc0IKNjl5aVJqWWl5bmRyRDB0YWE5RSs3Y2Nvb2hCNFY5d0VMNUxWNjJnQzBvWmZRU2pJbllYbURpYTVtd0tCZ0ZwbworWm1OYklnVExqT3R3SUpwK1BCQ1dFS0daZWtWd2lRZUg3QlhCZmQ4YWtpdk9EU20zOHdyczdyNWNwTzFZb1ozCnF1a0EwTjVQQnpyWFdZcC9XaHp5NW5lejdyUHI2ZUV5NHF6QjYwaVl3OXJQZTlOU2h5UExZUEMzb2pHdmxndE8KL2dvTjBrRGd3VHFDV3RtUGtTZnZaWGh2UHZBWnlaTkJqSGM2UnhabkFvR0JBS2hnZnlUNTVUVTUxR3hJRks2YwpqNkM5cEdveHJ5Qk0wSzVTb3FqWk5ud2J5UEwzL2Yybmcwb2tSek5iNEorTVJrOVk1RXlIZkw5WlNTdUNKMHdnCkNOMlRZSnZZQWRETWJiOThZSXB3cTdqdkp4VG15cHFEK2lxM1BBVU9RQ3hrVy9FMnVyOXZMbmZlcFcvVFVaVEMKOWdnOFFQL3Y2Q1owamRpeVBYZEJpb1ZOCi0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K
type: kubernetes.io/tls
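The certificate in listener-tls-secret is just a throwaway placeholder. If you want to regenerate one yourself, the following sketch uses openssl (the file names and subject are assumptions of this sketch; in practice kubectl create secret tls performs the base64 encoding for you):

```shell
#!/bin/bash
# Generate a throwaway self-signed certificate and key, valid for 365 days
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout listener-tls.key \
  -out listener-tls.crt \
  -subj "/CN=dummy.babosbird.com" 2>/dev/null

# Option 1: let kubectl build the secret (run against your cluster):
#   kubectl create secret tls listener-tls-secret \
#     --cert=listener-tls.crt --key=listener-tls.key -n gateway-demo

# Option 2: base64-encode the files yourself for a YAML manifest like the one above
tlsCrt=$(base64 -w0 listener-tls.crt)
echo "tls.crt is ${#tlsCrt} characters long"
```

Since the certificate manager replaces this certificate after the first successful ACME challenge, its subject and validity period do not matter much.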
The gateway.yaml file contains the definition of the Gateway used by the application. When using an Application Gateway for Containers managed by the ALB Controller, the frontend is automatically created for you by the ALB Controller.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: httpbin-gateway
  namespace: gateway-demo
  annotations:
    cert-manager.io/issuer: letsencrypt
    alb.networking.azure.io/alb-name: alb
    alb.networking.azure.io/alb-namespace: alb-infra
spec:
  gatewayClassName: azure-alb-external
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All
  - hostname: dummy.babosbird.com
    name: https
    port: 443
    protocol: HTTPS
    allowedRoutes:
      namespaces:
        from: All
    tls:
      mode: Terminate
      certificateRefs:
      - name: listener-tls-secret
The Gateway references a certificate issuer in the cert-manager.io/issuer annotation, so we need to create an issuer. In the issuer, we define the CA root (Let's Encrypt in this case) for the certificate chain used to issue our certificate, and the challenge type that our client wants to handle to prove control over the domain (in our case, the HTTP01 challenge).
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt
  namespace: gateway-demo
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: 'admin@contoso.com'
    privateKeySecretRef:
      name: letsencrypt
    solvers:
    - http01:
        gatewayHTTPRoute:
          parentRefs:
          - name: httpbin-gateway
            namespace: gateway-demo
            kind: Gateway
The certificate manager follows a series of steps to issue a certificate to the Gateway listener: it watches the Gateway for the cert-manager.io/issuer annotation, creates a Certificate resource for each HTTPS listener that references a TLS secret, creates the ACME Order and Challenge resources, temporarily attaches an HTTPRoute to the Gateway to serve the HTTP01 challenge token, and, once Let's Encrypt validates the challenge, stores the issued certificate in the secret referenced by the listener (listener-tls-secret in this sample).
The httproute.yaml file contains the definition of the HTTPRoute object used to route requests to the service:
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: httpbin-route
  namespace: gateway-demo
spec:
  parentRefs:
  - name: httpbin-gateway
    namespace: gateway-demo
    kind: Gateway
  rules:
  - backendRefs:
    - name: httpbin
      port: 80
You are now ready to deploy the managed sample to your AKS cluster. As a first step, enter a correct value for each variable in the 00-variables.sh file:
# Certificate Manager
cmNamespace="cert-manager"
cmRepoName="jetstack"
cmRepoUrl="https://charts.jetstack.io"
cmChartName="cert-manager"
cmReleaseName="cert-manager"
cmVersion="v1.14.0"
# Application Load Balancer
applicationLoadBalancerName="alb"
applicationLoadBalancerNamespace="alb-infra"
# Demo
namespace="agc-demo"
gatewayName="echo-gateway"
issuerName="letsencrypt"
httpRouteName="echo-route"
# Ingress and DNS
dnsZoneName="babosbird.com"
dnsZoneResourceGroupName="DnsResourceGroup"
subdomain="shogunagc"
hostname="$subdomain.$dnsZoneName"
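The scripts assume every variable has a value, so a quick sanity check before running them can save a debugging round-trip. A minimal sketch using bash indirect expansion (this helper is not part of the sample; inline values stand in for sourcing 00-variables.sh):

```shell
#!/bin/bash
# Inline sample values; in practice: source ./00-variables.sh
namespace="agc-demo"
gatewayName="echo-gateway"
issuerName=""            # deliberately left empty to show the check firing

missing=()
for var in namespace gatewayName issuerName; do
  # ${!var} is bash indirect expansion: the value of the variable named by $var
  if [[ -z "${!var}" ]]; then
    missing+=("$var")
  fi
done

if [[ ${#missing[@]} -gt 0 ]]; then
  echo "Unset variables: ${missing[*]}"
fi
```

Running the sketch as shown prints the one deliberately empty variable, issuerName.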
Run the 01-install-cert-manager.sh script if you need to install the certificate manager in your AKS cluster.
#!/bin/bash
# Variables
source ./00-variables.sh
# Check if the cert-manager repository is not already added
result=$(helm repo list | grep $cmRepoName | awk '{print $1}')
if [[ -n $result ]]; then
echo "[$cmRepoName] Helm repo already exists"
else
# Add the Jetstack Helm repository
echo "Adding [$cmRepoName] Helm repo..."
helm repo add $cmRepoName $cmRepoUrl
fi
# Update your local Helm chart repository cache
echo 'Updating Helm repos...'
helm repo update
# Install cert-manager Helm chart
result=$(helm list -n $cmNamespace | grep $cmReleaseName | awk '{print $1}')
if [[ -n $result ]]; then
echo "[$cmReleaseName] cert-manager already exists in the $cmNamespace namespace"
echo "Upgrading [$cmReleaseName] cert-manager to the $cmNamespace namespace..."
else
# Install the cert-manager Helm chart
echo "Deploying [$cmReleaseName] cert-manager to the $cmNamespace namespace..."
fi
helm upgrade $cmReleaseName $cmRepoName/$cmChartName \
--install \
--create-namespace \
--namespace $cmNamespace \
--version $cmVersion \
--set installCRDs=true \
--set nodeSelector."kubernetes\.io/os"=linux \
--set "extraArgs={--feature-gates=ExperimentalGatewayAPISupport=true}"
Then run the 02-create-sample.sh script to deploy the application to the specified namespace. The script makes use of the yq tool.
#!/bin/bash
# Variables
source ./00-variables.sh
# Check if namespace exists in the cluster
result=$(kubectl get namespace -o jsonpath="{.items[?(@.metadata.name=='$namespace')].metadata.name}")
if [[ -n $result ]]; then
echo "$namespace namespace already exists in the cluster"
else
echo "$namespace namespace does not exist in the cluster"
echo "creating $namespace namespace in the cluster..."
kubectl create namespace $namespace
fi
# Create a sample web application
kubectl apply -n $namespace -f ./deployment.yaml
# Create Gateway
cat gateway.yaml |
yq "(.metadata.name)|="\""$gatewayName"\" |
yq "(.metadata.namespace)|="\""$namespace"\" |
yq "(.metadata.annotations."\""cert-manager.io/issuer"\"")|="\""$issuerName"\" |
yq "(.metadata.annotations."\""alb.networking.azure.io/alb-name"\"")|="\""$applicationLoadBalancerName"\" |
yq "(.metadata.annotations."\""alb.networking.azure.io/alb-namespace"\"")|="\""$applicationLoadBalancerNamespace"\" |
yq "(.spec.listeners[1].hostname)|="\""$hostname"\" |
kubectl apply -f -
# Create Issuer
cat issuer.yaml |
yq "(.metadata.name)|="\""$issuerName"\" |
yq "(.metadata.namespace)|="\""$namespace"\" |
yq "(.spec.acme.solvers[0].http01.gatewayHTTPRoute.parentRefs[0].name)|="\""$gatewayName"\" |
yq "(.spec.acme.solvers[0].http01.gatewayHTTPRoute.parentRefs[0].namespace)|="\""$namespace"\" |
kubectl apply -f -
# Create HTTPRoute
cat httproute.yaml |
yq "(.metadata.name)|="\""$httpRouteName"\" |
yq "(.metadata.namespace)|="\""$namespace"\" |
yq "(.spec.parentRefs[0].name)|="\""$gatewayName"\" |
yq "(.spec.parentRefs[0].namespace)|="\""$namespace"\" |
kubectl apply -f -
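The yq expressions above rely on nested quoting that is easy to get wrong. As an alternative, the same parameterization pattern can be sketched with sed over a placeholder template (plain text substitution, not YAML-aware; the placeholder names and template file are assumptions of this sketch):

```shell
#!/bin/bash
gatewayName="echo-gateway"
namespace="agc-demo"

# A template fragment with explicit placeholders instead of literal values
cat <<'EOF' > gateway-template.yaml
metadata:
  name: __GATEWAY_NAME__
  namespace: __NAMESPACE__
EOF

# Substitute the placeholders; pipe the result to kubectl apply -f - in practice
sed -e "s/__GATEWAY_NAME__/$gatewayName/" \
    -e "s/__NAMESPACE__/$namespace/" \
    gateway-template.yaml > gateway-rendered.yaml

cat gateway-rendered.yaml
```

The yq approach in the sample has the advantage of editing real YAML structurally, while the sed sketch trades that safety for simpler quoting.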
If you delegated the management of your public DNS to Azure DNS, you can use the 03-configure-dns.sh script to create a CNAME for the FQDN assigned to the frontend used by the Gateway.
#!/bin/bash
# Variables
source ./00-variables.sh
# Get the FQDN of the gateway
echo -n "Retrieving the FQDN of the [$gatewayName] gateway..."
while true
do
fqdn=$(kubectl get gateway $gatewayName -n $namespace -o jsonpath='{.status.addresses[0].value}')
if [[ -n $fqdn ]]; then
echo
break
else
echo -n '.'
sleep 1
fi
done
if [[ -n $fqdn ]]; then
echo "[$fqdn] FQDN successfully retrieved from the [$gatewayName] gateway"
else
echo "Failed to retrieve the FQDN from the [$gatewayName] gateway"
exit 1
fi
# Check if a CNAME record for the subdomain exists in the DNS zone
echo "Retrieving the CNAME for the [$subdomain] subdomain from the [$dnsZoneName] DNS zone..."
cname=$(az network dns record-set cname list \
--zone-name $dnsZoneName \
--resource-group $dnsZoneResourceGroupName \
--query "[?name=='$subdomain'].CNAMERecord.cname" \
--output tsv \
--only-show-errors)
if [[ -n $cname ]]; then
echo "A CNAME already exists in [$dnsZoneName] DNS zone for the [$subdomain]"
if [[ $cname == $fqdn ]]; then
echo "The [$cname] CNAME equals the FQDN of the [$gatewayName] gateway. No additional step is required."
exit
else
echo "The [$cname] CNAME is different than the [$fqdn] FQDN of the [$gatewayName] gateway"
fi
# Delete the CNAME record
echo "Deleting the [$subdomain] CNAME from the [$dnsZoneName] zone..."
az network dns record-set cname delete \
--name $subdomain \
--zone-name $dnsZoneName \
--resource-group $dnsZoneResourceGroupName \
--only-show-errors \
--yes
if [[ $? == 0 ]]; then
echo "[$subdomain] CNAME successfully deleted from the [$dnsZoneName] zone"
else
echo "Failed to delete the [$subdomain] CNAME from the [$dnsZoneName] zone"
exit 1
fi
else
echo "No CNAME exists in [$dnsZoneName] DNS zone for the [$subdomain] subdomain"
fi
# Create a CNAME record
echo "Creating a CNAME in the [$dnsZoneName] DNS zone for the [$fqdn] FQDN of the [$gatewayName] gateway..."
az network dns record-set cname set-record \
--cname $fqdn \
--zone-name $dnsZoneName \
--resource-group $dnsZoneResourceGroupName \
--record-set-name $subdomain \
--only-show-errors 1>/dev/null
if [[ $? == 0 ]]; then
echo "[$subdomain] CNAME successfully created in the [$dnsZoneName] DNS zone for the [$fqdn] FQDN of the [$gatewayName] gateway"
else
echo "Failed to create a CNAME in the [$dnsZoneName] DNS zone for the [$fqdn] FQDN of the [$gatewayName] gateway"
fi
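The FQDN polling loop used in both the deployment script and this one follows a simple bounded-retry pattern. A self-contained sketch with a stubbed lookup function standing in for the kubectl call (the stub and the sample FQDN are assumptions of this sketch):

```shell
#!/bin/bash
attempts=0
fqdn=""

lookup_fqdn() {
  # Stub for: kubectl get gateway $gatewayName -n $namespace \
  #             -o jsonpath='{.status.addresses[0].value}'
  # Succeeds only on the third call to simulate a slowly provisioning frontend
  attempts=$((attempts + 1))
  if [[ $attempts -ge 3 ]]; then
    fqdn="sample.alb.azure.com"
  fi
}

# Retry up to 10 times, stopping as soon as a value comes back;
# against a real cluster you would also sleep between attempts
counter=1
while [[ $counter -le 10 && -z $fqdn ]]; do
  lookup_fqdn
  counter=$((counter + 1))
done

if [[ -n $fqdn ]]; then
  echo "FQDN retrieved after $attempts attempts: $fqdn"
else
  echo "Failed to retrieve the FQDN" >&2
fi
```

Bounding the loop, as the scripts in the sample do, ensures automation fails fast instead of hanging when the frontend never becomes available.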
Finally, you can test the sample by running the 04-test-application.sh script.
#!/bin/bash
# Variables
source ./00-variables.sh
# Curling this FQDN should return responses from the backend as configured in the HTTPRoute
curl https://$hostname
You can also open the application using a web browser.
You can delete the resource group using the following Azure CLI command when you no longer need the resources you created. This will remove all the Azure resources.
az group delete --name <resource-group-name>
Alternatively, you can use the following PowerShell cmdlet to delete the resource group and all the Azure resources.
Remove-AzResourceGroup -Name <resource-group-name>