containers
Announcing the Public Preview of Azure Container Apps Azure Monitor Dashboards with Grafana
We're thrilled to announce the public preview of Azure Container Apps Azure Monitor Dashboards with Grafana, a major step forward in simplifying observability for your apps and environments. With this new integration, you can view Grafana dashboards directly within your app or environment in the Azure portal, with no extra setup or cost required.

What's new?

Azure Monitor Dashboards with Grafana bring the power of Grafana's visualization capabilities to your Azure resources. You can create and edit Grafana dashboards directly in the Azure portal, at no additional cost and with less administrative overhead than self-hosting Grafana or using a managed Grafana service. For Azure Container Apps, this means you can access two new pre-built dashboards:

Container App View: view key metrics like CPU usage, memory usage, request rates, replica restarts, and more.
Environment View: see all your apps in one view with details like latest revision name, minimum and maximum replicas, CPU and memory allocations, and more for each app.

These dashboards are designed to help you quickly identify issues, optimize performance, and ensure your applications are running smoothly.

Benefits

Drill into key metrics: stop switching between multiple tools or building dashboards from scratch. Start from the environment dashboard to get a high-level view of all of your apps, then drill into individual app dashboards.
Customize your views: tailor the dashboards to your team's needs using Grafana's flexible visualization options.
Full compatibility with open-source Grafana: dashboards created in Azure Monitor are portable across any Grafana instance.
Share dashboards across your team with Azure Role-Based Access Control (RBAC): dashboards are native Azure resources, so you can securely share them using RBAC.

Get started today

For Azure Container Apps, you can experience these dashboards directly from either your environment or an individual app:

1. Navigate to your Azure Container Apps environment or a specific container app in the Azure portal.
2. Open the Monitoring section and select the "Dashboards with Grafana (Preview)" blade.
3. View your metrics or customize the dashboard to meet your needs.

For detailed guidance, see aka.ms/aca/grafana.

Want more? Explore the Grafana Gallery

Looking for additional customization or inspiration? Visit the Grafana Dashboard Gallery to explore thousands of community dashboards. If you prefer to use Azure Managed Grafana, here are direct links to the Azure Container Apps templates:

Azure / Container Apps / Container App View
Azure / Container Apps / Aggregate View

You can also view other published Azure dashboards here.
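The dashboards surface standard Azure Monitor metrics, so if you also want the raw numbers outside the portal you can query the same data from the CLI. The sketch below is not part of the announcement; the resource group, app name, and the "Requests" metric name are placeholders and assumptions, so list the available metric definitions first.

# Look up the container app's resource ID (names are placeholders)
APP_ID=$(az containerapp show \
  --resource-group myResourceGroup \
  --name my-container-app \
  --query id -o tsv)

# Discover which metrics the resource actually emits
az monitor metrics list-definitions --resource "$APP_ID" -o table

# Pull one metric over the last hour in 5-minute buckets (metric name assumed)
az monitor metrics list \
  --resource "$APP_ID" \
  --metric "Requests" \
  --interval PT5M \
  -o table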
Simplify Image Signing and Verification with Notary Project and Trusted Signing (Public Preview)

Supply chain security has become one of the most pressing challenges for modern cloud-native applications. Every container image, Helm chart, SBOM, or AI model that flows through your CI/CD pipeline carries risk if its integrity or authenticity cannot be guaranteed. Attackers may attempt to tamper with artifacts, replace trusted images with malicious ones, or inject unverified base images into builds. Today, we're excited to highlight how Notary Project and Trusted Signing (Public Preview) make it easier than ever to secure your container image supply chain with strong, standards-based signing and verification.

Why image signing matters

Image signing addresses two fundamental questions in the software supply chain:

Integrity: Is this artifact exactly the same one that was originally published?
Authenticity: Did this artifact really come from the expected publisher?

Without clear answers, organizations risk deploying compromised images into production environments. With signing and verification in place, you can block untrusted artifacts at build time or deployment, ensuring only approved content runs in your clusters.

Notary Project: A standards-based solution

Notary Project is a CNCF open-source initiative that defines standards for signing and verifying OCI artifacts, including container images, SBOMs, Helm charts, and AI models. It provides a consistent, interoperable framework for ensuring artifact integrity and authenticity across different registries, platforms, and tools. Notary Project includes two key sub-projects that address different stages of the supply chain:

Notation: a CLI tool designed for developers and CI/CD pipelines. It enables publishers to sign artifacts after they are built and consumers to verify signatures before artifacts are used in builds.
Ratify: a verification engine that integrates with Azure Policy and Azure Kubernetes Service (AKS). It enforces signature verification at deployment time, ensuring only trusted artifacts are admitted to run in the cluster.

Together, Notation and Ratify extend supply chain security from the build pipeline all the way to runtime, closing critical gaps and reducing the risk of running unverified content.

Trusted Signing: Simplifying certificate management

Traditionally, signing workflows required managing certificates: issuing, rotating, and renewing them through services like Azure Key Vault. While this provides control, it also adds operational overhead. Trusted Signing changes the game. It offers:

Zero-touch certificate lifecycle management: no manual issuance or rotation.
Short-lived certificates: reducing the attack surface.
Built-in timestamping support: ensuring signatures remain valid even after certificates expire.

With Trusted Signing, developers focus on delivering software, not managing certificates.

End-to-end scenarios

Here's how organizations can use Notary Project and Trusted Signing together (a minimal Notation CLI flow is sketched after this list):

Sign in CI/CD: an image publisher signs images as part of a GitHub Actions or Azure DevOps pipeline, ensuring every artifact carries a verifiable signature.
Verify in AKS: an image consumer configures Ratify and Azure Policy on an AKS cluster to enforce that only signed images can be deployed.
Verify in build pipelines: developers ensure base images and dependencies are verified before they're used in application builds, blocking untrusted upstream components.
Extend to all OCI artifacts: beyond container images, SBOMs, Helm charts, and even AI models can be signed and verified with the same workflow.
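To make the sign-then-verify flow concrete, here is a minimal sketch using the Notation CLI. It assumes the image has already been pushed, that Notation and the Trusted Signing plugin are installed and configured, and that a trust policy is in place for verification; the registry reference is a placeholder, and the plugin configuration itself is covered in the tutorials linked below.

# Sign by digest rather than by mutable tag (reference is a placeholder)
IMAGE="myregistry.azurecr.io/myapp@sha256:<digest>"

# Sign the artifact using the key/plugin configured for Trusted Signing
notation sign "$IMAGE"

# List the signatures attached to the artifact
notation ls "$IMAGE"

# Verify against your trust policy before deploying or building on the image
notation verify "$IMAGE"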
Get started

To help you get started, we've published new documentation and step-by-step tutorials:

Overview: Ensuring integrity and authenticity of container images and OCI artifacts
Sign and verify images with Notation CLI and Trusted Signing
Sign container images in GitHub Actions with Trusted Signing
Verify signatures in GitHub Actions
Verify signatures on AKS with Ratify

Try it now

Supply chain security is no longer optional. By combining Notary Project with the streamlined certificate management experience of Trusted Signing, you can strengthen the integrity and authenticity of every artifact in your pipeline without slowing down your teams. Start signing today and take the next step toward a trusted software supply chain.

Public preview: Confidential containers on AKS
We are proud to announce the preview of confidential containers on AKS, which brings confidential computing capabilities to containerized workloads on AKS. This offering provides strong pod-level isolation, memory encryption, and AMD SEV-SNP hardware-based attestation for containerized application code and data while in use, building upon the existing security, scalability, and resiliency benefits offered by AKS.
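As a rough sketch of what enabling this looks like in practice (not part of the announcement above): confidential containers run on a node pool that uses confidential-capable VM sizes and the Kata confidential isolation runtime. The VM size and flag values below are assumptions taken from the AKS preview documentation, so verify them against the current docs before use.

# Sketch only: add a confidential containers node pool to an existing cluster
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name confpool \
  --node-count 1 \
  --os-sku AzureLinux \
  --node-vm-size Standard_DC4as_cc_v5 \
  --workload-runtime KataCcIsolation

# Pods then opt in by setting spec.runtimeClassName to the confidential runtime class
# (commonly "kata-cc-isolation" in the preview docs - verify for your cluster).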
Securing Cloud Shell Access to AKS

Azure Cloud Shell is an online shell hosted by Microsoft that provides instant access to a command-line interface, enabling users to manage Azure resources without needing local installations. Cloud Shell comes equipped with popular tools and programming languages, including Azure CLI, PowerShell, and the Kubernetes command-line tool (kubectl). Using Cloud Shell can provide several benefits for administrators who need to work with AKS, especially if they need quick access from anywhere, or are in locked-down environments:

Immediate access: there's no need for local setup; you can start managing Azure resources directly from your web browser.
Persistent storage: Cloud Shell offers a file share in Azure, keeping your scripts and files accessible across multiple sessions.
Pre-configured environment: it includes built-in tools, saving time on installation and configuration.

The Challenge of Connecting to AKS

By default, Cloud Shell traffic to AKS originates from a random Microsoft-managed IP address, rather than from within your network. As a result, the AKS API server must be publicly accessible with no IP restrictions, which poses a security risk, as anyone on the internet can attempt to reach it. While credentials are still required, restricting access to the API server significantly enhances security. Fortunately, there are ways to lock down the API server while still enabling access via Cloud Shell, which we'll explore in the rest of this article.

Options for Securing Cloud Shell Access to AKS

Several approaches can be taken to secure access to your AKS cluster while using Cloud Shell.

IP Allow Listing

On AKS clusters with a public API server, it is possible to lock down access to the API server with an IP allow list. Each Cloud Shell instance has a randomly selected outbound IP from the Azure address space whenever a new session is deployed. This means we cannot allow access to these IPs in advance, but we can apply them once our session is running, and this will work for the duration of our session. Below is an example script that you could run from Cloud Shell to check the current outbound IP address and allow it on your AKS cluster's authorised IP list.

#!/usr/bin/env bash
set -euo pipefail

RG="$1"; AKS="$2"

# Get the current outbound public IP of this Cloud Shell session
IP="$(curl -fsS https://api.ipify.org)"
echo "Adding ${IP} to allow list"

# Read the existing authorised IP ranges from the cluster
CUR="$(az aks show -g "$RG" -n "$AKS" --query "apiServerAccessProfile.authorizedIpRanges" -o tsv | tr '\t' '\n' | awk 'NF')"

# Append this session's IP as a /32 and de-duplicate
NEW="$(printf "%s\n%s/32\n" "$CUR" "$IP" | sort -u | paste -sd, -)"

# Apply the updated allow list
if az aks update -g "$RG" -n "$AKS" --api-server-authorized-ip-ranges "$NEW" >/dev/null; then
  echo "IP ${IP} applied successfully"
else
  echo "Failed to apply IP ${IP}" >&2
  exit 1
fi

This method comes with some caveats:

The users running the script would need to be granted permissions to update the authorised IP ranges in AKS; this permission could be used to add any IP address.
This script needs to be run each time a Cloud Shell session is created, and can take a few minutes to run.
The script only deals with adding IPs to the allow list; you would also need to implement a process to remove these IPs on a regular basis to avoid building up a long list of IPs that are no longer needed.

Adding Cloud Shell IPs in bulk, through Service Tags or similar, will result in your API server being accessible to a much larger range of IP addresses, and should be avoided.
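For example, if the script above is saved in your Cloud Shell file share as allow-cloudshell-ip.sh (a file name chosen here for illustration), a new session would start along these lines:

# Run once per Cloud Shell session, before using kubectl against the cluster
chmod +x ./allow-cloudshell-ip.sh
./allow-cloudshell-ip.sh myResourceGroup myAKSCluster

# The update can take a few minutes to apply; after that, kubectl works as normal
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
kubectl get nodes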
Command Invoke

Azure provides a feature known as Command Invoke that allows you to send commands to be run in AKS without the need for direct network connectivity. This method executes a container within AKS to run your command and then return the result, and works well from within Cloud Shell. This is probably the simplest approach that works with a locked-down API server, and the quickest to implement. However, there are some downsides:

Commands take longer to run: when you execute the command, it needs to run a container in AKS, execute the command, and then return the result.
You only get the exit code and text output, and you lose API-level details.
All commands must be run within the context of the az aks command invoke CLI command, making commands much longer and more complex to execute compared to direct access with kubectl.

Command Invoke can be a practical solution for occasional access to AKS, especially when the cost or complexity of alternative methods isn't justified. However, its user experience may fall short if relied upon as a daily tool.

Further details: Access a private Azure Kubernetes Service (AKS) cluster using the command invoke or Run command feature - Azure Kubernetes Service | Microsoft Learn

Cloud Shell vNet Integration

It is possible to deploy Cloud Shell into a virtual network (vNet), allowing it to route traffic via the vNet and so access resources using private networking, Private Endpoints, or even public resources, while using a NAT Gateway or firewall for a consistent outbound IP address. This approach uses Azure Relay to provide secure access to the vNet from Cloud Shell, without the need to open additional ports. Using Cloud Shell in this way does introduce additional cost for the Azure Relay service. This solution requires one of two approaches, depending on whether you are using a private or public API server:

When using a private API server, which is either directly connected to the vNet or configured with Private Endpoints, Cloud Shell will be able to connect directly to the private IP of this service over the vNet.
When using a public API server, with a public IP, traffic will still leave the vNet and go to the internet. The benefit is that we can control the public IP used for the outbound traffic using a NAT Gateway or Azure Firewall. Once this is configured, we can then allow-list this fixed IP in the AKS API server authorised IP ranges.

Further details: Use Cloud Shell in an Azure virtual network | Microsoft Learn

Azure Bastion

Azure Bastion provides secure and seamless RDP and SSH connectivity to your virtual machines (VMs) directly from the Azure portal, without exposing them to the public internet. Recently, Bastion has also added support for direct connection to AKS with SSH, rather than needing to connect to a jump box and then use kubectl from there. This greatly simplifies connecting to AKS and also reduces the cost. Using this approach, we can deploy a Bastion into the vNet hosting AKS. From Cloud Shell, we can then use the following command to create a tunnel to AKS:

az aks bastion --name <aks name> --resource-group <resource group name> --bastion <bastion resource ID>

Once this tunnel is connected, we can run kubectl commands without any need for further configuration.
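For illustration, a session from Cloud Shell might then look like the following; the cluster, resource group, and Bastion resource ID are placeholders, and the point is simply that ordinary kubectl commands work once the tunnel is up.

# Open the tunnel to the AKS API server through Bastion (values are placeholders)
az aks bastion \
  --name myAKSCluster \
  --resource-group myResourceGroup \
  --bastion "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/bastionHosts/myBastion"

# In the connected session, run kubectl as normal
kubectl get nodes
kubectl get pods --all-namespaces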
As with Cloud Shell network integration, we take two slightly different approaches depending on whether the API server is public or private:

When using a private API server, which is either directly connected to the vNet or configured with Private Endpoints, Cloud Shell sessions connected via Bastion will be able to connect directly to the private IP of this service over the vNet.
When using a public API server, with a public IP, traffic will still leave the vNet and go to the internet. As with Cloud Shell vNet integration, we can configure this to use a static outbound IP and allow-list this on the API server. Using Bastion, we can still use a NAT Gateway or Azure Firewall to achieve this; however, you can also allow-list the public IP assigned to the Bastion, removing the cost of a NAT Gateway or Azure Firewall if these are not required for anything else.

Connecting to AKS directly from Bastion requires the Standard or Premium SKU of Bastion, which does have additional cost over the Developer or Basic SKU. This feature also requires that you enable native client support.

Further details: Connect to AKS Private Cluster Using Azure Bastion (Preview) - Azure Bastion | Microsoft Learn

Summary of Options

IP Allow Listing
The outbound IP addresses for Cloud Shell instances can be added to the authorised IP list for your API server. As these IPs are dynamically assigned to sessions, they need to be added at runtime to avoid allowing a large list of IPs and reducing security. This can be achieved with a script. While easy to implement, this requires additional time to run the script with every new session, and increases the overhead of managing the authorised IP list to remove unused IPs.

Command Invoke
Command Invoke allows you to run commands against AKS without requiring direct network access or any setup. This is a convenient option for occasional tasks or troubleshooting, but it's not designed for regular use due to its limited user experience and flexibility.

Cloud Shell vNet Integration
This approach connects Cloud Shell directly to your virtual network, enabling secure access to AKS resources. It's well suited for environments where Cloud Shell is the primary access method, and offers a more secure and consistent experience than default configurations. It does involve additional cost for Azure Relay.

Azure Bastion
Azure Bastion provides a secure tunnel to AKS that can be used from Cloud Shell or by users running the CLI locally. It offers strong security by eliminating public exposure of the API server and supports flexible access for different user scenarios, though it does require setup and may incur additional cost.

Cloud Shell is a great tool for providing pre-configured, easily accessible CLI instances, but in its default configuration it can require some security compromises. With a little work, it is possible to make Cloud Shell work with a more secure configuration that limits how much exposure is needed for your AKS API server.

Private Pod Subnets in AKS Without Overlay Networking
When deploying AKS clusters, a common concern is the amount of IP address space required. If you are deploying your AKS cluster into your corporate network, the IP address space you can obtain may be quite small, which can cause problems with the number of pods you are able to deploy. The simplest and most common solution to this is to use an overlay network, which is fully supported in AKS. In an overlay network, pods are deployed to a private, non-routed address space that can be as large as you want. Translation between the routable and non-routed networks is handled by AKS. For most people, this is the best option for dealing with IP addressing in AKS, and there is no need to complicate things further.

However, there are some limitations with overlay networking, primarily that you cannot address the pods directly from the rest of the network: all inbound communication must go via services. There are also some advanced features that are not supported, such as Virtual Nodes. If you are in a scenario where you need some of these features and overlay networking will not work for you, it is possible to use the more traditional vNet-based deployment method, with some tweaks.

Azure CNI Pod Subnet

The alternative to using the Azure CNI Overlay is to use Azure CNI Pod Subnet. In this setup, you deploy a vNet with two subnets: one for your nodes and one for your pods. You are in control of the IP address configuration for these subnets. To conserve IP addresses, you can create your pod subnet using an IP range that is not routable to the rest of your corporate network, allowing you to make it as large as you like. The node subnet remains routable from your corporate network. In this setup, if you want to talk to the pods directly, you would need to do so from within the AKS vNet, or peer another network to your pod subnet. Even though you are not using overlay networking, the pods are still not addressable from the rest of your corporate network.

The Routing Problem

When you deploy a setup using Azure CNI Pod Subnet, all the subnets in the vNet are configured with routes and can talk to each other. You may wish to connect this vNet to other Azure vNets via peering, or to your corporate network using ExpressRoute or VPN. Where you will encounter an issue is if your pods try to connect to resources outside of your AKS vNet but inside your corporate network, or in any peered Azure vNets (which are not peered to this isolated subnet). In this scenario, the pods will route their traffic directly out of the vNet using their private IP address. This private IP is not a valid, routable IP, so the resources on the other network will not be able to reply, and the request will fail.

IP Masquerading

To resolve this issue, we need a way to have traffic going to other networks present a private IP that is routable within the network. This can be achieved through several methods. One method would be to introduce a separate solution for routing this traffic, such as Azure Firewall or another Network Virtual Appliance (NVA). Traffic is routable between the pod and node subnets, so the pod can send its requests to the firewall, and the requests to the remote network then come from the IP of the firewall, which is routable. This solution will work, but it does require another resource to be deployed, with additional costs. If you are already using an Azure Firewall for outbound traffic then this may be something you could use, but we are looking for a simpler and more cost-effective solution.
Rather than implementing another device to present a routable IP, we can use the nodes of our AKS cluster. The AKS nodes are in the routable node subnet, so ideally we want outbound traffic from the pods to use the node IP when it needs to leave the vNet for the rest of the private network. There are several different ways you could achieve this goal. You could look at using egress gateway services through tools like Istio, or you could look at making changes to the iptables configuration on the nodes using a DaemonSet. In this article, we will focus on using a tool called ip-masq-agent-v2.

This tool provides a means for traffic to "masquerade" as coming from the IP address of the node it is running on, and have the node perform Network Address Translation (NAT). If you deploy a cluster with an overlay network, this tool is already deployed and configured on your cluster; it is the tool that Microsoft uses to configure NAT for traffic leaving the overlay network. When using pod subnet clusters, this tool is not deployed, but you can deploy it yourself to provide the same functionality. Under the hood, this tool makes changes to iptables using a DaemonSet that runs on each node, so you could replicate this behaviour yourself, but it provides a simpler process that has been tested with AKS through overlay networking. The Microsoft v2 version is based on the original Kubernetes contribution, aiming to solve more specific networking cases, allow for more configuration options, and improve observability.

Deploy ip-masq-agent-v2

There are two parts to deploying the agent. First, we deploy the agent itself, which runs as a DaemonSet, spawning a pod on each node in the cluster. This is important, as each node needs to have its iptables altered by the tool, and it needs to run any time a new node is created. To deploy the agent, we need to create the DaemonSet in our cluster. The ip-masq-agent-v2 repo includes several examples, including an example of deploying the DaemonSet. The example is slightly out of date on the version of ip-masq-agent-v2 to use, so make sure you update this to the latest version. If you would prefer to build and manage your own containers for this, the repository also includes a Dockerfile to allow you to do so.

Below is an example deployment using the Microsoft-hosted images. It references the ConfigMap we will create in the next step, and it is important that the same name is used as is referenced here.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ip-masq-agent
  namespace: kube-system
  labels:
    component: ip-masq-agent
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: ip-masq-agent
  template:
    metadata:
      labels:
        k8s-app: ip-masq-agent
    spec:
      hostNetwork: true
      containers:
        - name: ip-masq-agent
          image: mcr.microsoft.com/aks/ip-masq-agent-v2:v0.1.15
          imagePullPolicy: Always
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN", "NET_RAW"]
          volumeMounts:
            - name: ip-masq-agent-volume
              mountPath: /etc/config
              readOnly: true
      volumes:
        - name: ip-masq-agent-volume
          projected:
            sources:
              - configMap:
                  name: ip-masq-agent-config
                  optional: true
                  items:
                    - key: ip-masq-agent
                      path: ip-masq-agent
                      mode: 0444

Once you deploy this DaemonSet, you should see instances of the agent running on each node in your cluster.
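As a quick check (an addition, not from the original example), you can confirm the DaemonSet has rolled out one pod per node before moving on; the names below match the manifest above.

# Confirm one agent pod is running per node
kubectl get daemonset ip-masq-agent -n kube-system
kubectl get pods -n kube-system -l k8s-app=ip-masq-agent -o wide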
Create Configuration

Next, we need to create a ConfigMap that contains any configuration data we need to vary from the defaults deployed with the agent. The main thing we need to configure is the IP ranges that will be masqueraded behind the node IP. The default deployment of ip-masq-agent-v2 disables masquerading for all three private IP ranges specified by RFC 1918 (10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16). In our example above, this will therefore not masquerade traffic to the 10.1.64.0/18 subnet in the app network, and our routing problem will still exist. We need to amend the configuration so that these private IPs are masqueraded. However, we do want to avoid masquerading within our AKS network, as this traffic needs to come from the pod IPs. Therefore, we need to ensure we do not masquerade traffic going from the pods to:

The pod subnet
The node subnet
The AKS service CIDR range, for internal networking in AKS

To do this, we need to add these IP ranges to the nonMasqueradeCIDRs array in the configuration. This is the list of IP addresses which, when traffic is sent to them, will continue to come from the pod IP and not the node IP. In addition, the configuration also allows us to define whether we masquerade the link-local IPs, which we do not want to do. Below is an example ConfigMap that works for the setup detailed above.

apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent-config
  namespace: kube-system
  labels:
    component: ip-masq-agent
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  ip-masq-agent: |-
    nonMasqueradeCIDRs:
      - 10.0.0.0/16    # Entire vNet and service CIDR
      - 192.168.0.0/16 # Pod subnet
    masqLinkLocal: false
    masqLinkLocalIPv6: false

There are a couple of things to be aware of here:

The node subnet and AKS service CIDR are two contiguous address spaces in my setup, so both are covered by 10.0.0.0/16. I could have called them out separately.
192.168.0.0/16 covers the whole of my pod subnet.
I do not enable masquerading on link-local.
The ConfigMap needs to be created in the same namespace as the DaemonSet.
The ConfigMap name needs to match what is used in the mount in the DaemonSet manifest.

Once you apply this configuration, the agent will pick up the changes within around 60 seconds. Once these are applied, you should find that traffic going to private addresses outside of the list of nonMasqueradeCIDRs will now present from the node IP.

Summary

If you're deploying AKS into an IP-constrained environment, overlay networking is generally the best and simplest option. It allows you to use non-routed pod IP ranges, conserve address space, and avoid complex routing considerations without additional configuration. If you can use it, then this should be your default approach.

However, there are cases where overlay networking will not meet your needs. You might require features only available with pod subnet mode, such as the ability to send traffic directly to pods and nodes without tunnelling, or support for features like Virtual Nodes. In these situations, you can still keep your pod subnet private and non-routed by carefully controlling IP masquerading. With ip-masq-agent-v2, you can configure which destinations should (and should not) be NAT'd, ensuring isolated subnets while maintaining the functionality you need.

Simplifying Outbound Connectivity Troubleshooting in AKS with Connectivity Analysis (Preview)
We are announcing the Connectivity Analysis feature for AKS, now available in public preview through the AKS portal experience. You can use the Connectivity Analysis (Preview) feature to quickly verify whether outbound traffic from your AKS nodes is being blocked by Azure network resources such as Azure Firewall, Network Security Groups (NSGs), route tables, and more.

Announcing Native Azure Functions Support in Azure Container Apps
Azure Container Apps is introducing a new, streamlined method for running Azure Functions directly in Azure Container Apps (ACA). This integration allows you to leverage the full features and capabilities of Azure Container Apps while benefiting from the simplicity of auto-scaling provided by Azure Functions. With the new native hosting model, you can deploy Azure Functions directly onto Azure Container Apps using the Microsoft.App resource provider by setting the "kind=functionapp" property on the container app resource. You can deploy Azure Functions using ARM templates, Bicep, the Azure CLI, and the Azure portal. Get started today and explore the complete feature set of Azure Container Apps, including multi-revision management, easy authentication, metrics and alerting, health probes, and more. To learn more, visit: https://aka.ms/fnonacav2
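As a rough sketch of the CLI path: the announcement above states that the native hosting model is enabled by setting kind=functionapp on the container app resource and that the Azure CLI is a supported deployment option, but the exact flag surface below (in particular --kind) and all resource names and image values are assumptions for illustration, so check the linked documentation before use.

# Sketch only: deploy a Functions container image as a "functionapp" kind container app
az containerapp create \
  --resource-group myResourceGroup \
  --name my-functions-on-aca \
  --environment my-aca-environment \
  --kind functionapp \
  --image myregistry.azurecr.io/my-functions-app:latest \
  --ingress external \
  --target-port 80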
Customising Node-Level Configuration in AKS

When you deploy AKS, you deploy the control plane, which is managed by Microsoft, and one or more node pools, which contain the worker nodes used to run your Kubernetes workloads. These node pools are usually deployed as Virtual Machine Scale Sets. The scale sets are visible in your subscription, but generally you would not make changes to them directly, as they are managed by AKS and all of their configuration and management is done through AKS. However, there are some scenarios where you do need to make changes to the underlying node configuration to be able to handle the workloads you need to run. Whilst you can make some changes to these nodes, you need to make sure you do it in a supported manner that is applied consistently to all your nodes.

An example of this requirement is a recent issue I saw with deploying Elasticsearch onto AKS. Let's take a look at this issue and see how it can be resolved, both for this specific case and for any other scenario where you need to make changes on the nodes.

The Issue

For the rest of this article, we will use a specific scenario to illustrate the requirement to make node changes, but the same approach can be applied to any requirement to change the nodes. Elasticsearch has a requirement to increase the limit on mmap count, due to the way it uses "mmapfs" for storing indices. The docs state you can resolve this by running:

sysctl -w vm.max_map_count=262144

This command needs to be run on the machine that is running the container, not inside the container. In our case, this is the AKS nodes. Whilst this is fairly easy to do on my laptop, it isn't really feasible to run manually on all of our AKS nodes, especially because nodes could be destroyed and recreated during updates or downtime. We need to make the changes consistently on all nodes, and automate the process so it is applied to all nodes, even new ones.

Changes to Avoid

Whilst we want to make changes to our nodes, we want to do so in a way that doesn't leave our nodes in an unsupported state. One key example of this is making changes directly to the scale set. Using the IaaS/ARM APIs to make changes directly to the scale set, outside of Kubernetes, will result in your nodes being unsupported and should be avoided. This includes making changes to the CustomScriptExtension configured on the scale set. Similarly, we want to avoid SSH'ing into the node's operating system and making the changes manually. Whilst this will apply the change you want, as soon as that node is destroyed and recreated, your change will be gone. Similarly, if you want to use the node autoscaler, any new nodes won't have your changes.

Solutions

There are a few different options that we could use to solve this issue and customise our node configuration. Let's take a look at them in order of ease of use.

1. Customised Node Configuration

The simplest method to customise node configuration is through the use of node configuration files that can be applied at the creation of a cluster or a node pool. Using these configuration files, you are able to customise a specific set of configuration settings for both the node operating system and the kubelet configuration.
Below is an example of a Linux OS configuration:

{
  "transparentHugePageEnabled": "madvise",
  "transparentHugePageDefrag": "defer+madvise",
  "swapFileSizeMB": 1500,
  "sysctls": {
    "netCoreSomaxconn": 163849,
    "netIpv4TcpTwReuse": true,
    "netIpv4IpLocalPortRange": "32000 60000"
  }
}

We would then apply this at the time of creating a cluster or node pool by providing the file to the CLI command. For example, creating a cluster:

az aks create --name myAKSCluster --resource-group myResourceGroup --linux-os-config ./linuxosconfig.json

Creating a node pool:

az aks nodepool add --name mynodepool1 --cluster-name myAKSCluster --resource-group myResourceGroup --kubelet-config ./linuxkubeletconfig.json

There are lots of different configuration settings that can be changed for both the OS and kubelet, for both Linux and Windows nodes. The full list can be found here. For our scenario, where we want to change the vm.max_map_count setting, this is available as one of the configuration options in the virtual memory section. Our OS configuration would look like this:

{
  "vmMaxMapCount": 262144
}

Note that the value used in the JSON is a camel-case version of the property name, so vm.max_map_count becomes vmMaxMapCount.

2. DaemonSets

Another way we can make these changes, using a Kubernetes-native method, is through the use of DaemonSets. As you may know, DaemonSets provide a mechanism to run a pod on every node in your cluster. We can use a DaemonSet to execute a script that sets the appropriate settings on the nodes when run, and the DaemonSet will ensure that this is done on every node, including any new nodes created by the autoscaler or during updates. To be able to make changes to the node, we need to run the DaemonSet with some elevated privileges, so you may want to consider whether the node customisation file option listed above works for your scenario before using this option.

For this to work, we need two things: a container to run, and a DaemonSet configuration.

Container

All our DaemonSet does is run a container; it is the container that defines what is done. There are two options that we can use for our scenario:

Create our own container that has the script to run defined in the Dockerfile.
Use a pre-built container, like BusyBox, which accepts parameters defining what commands to run.

The first option is more secure, as the container is fixed to running only the command you want, and any malicious change would require someone to re-build and publish a new image and update the DaemonSet configuration to run it. The image we create is very basic: it just needs to have the tools your script requires installed, and then run your script. The only caveat is that DaemonSets need to have their restart policy set to Always, so we can't just run our script and stop, as the container would simply be restarted. To avoid this, we can have our container sleep once it is done. If the node is ever restarted or replaced, the container will still run again.
Here is the simplest Dockerfile we can use to solve our Elasticsearch issue:

FROM alpine
CMD sysctl -w vm.max_map_count=262144; sleep 365d

DaemonSet Configuration

To run our DaemonSet, we need to configure our YAML to do the following:

Run our custom container, or use a pre-built container with the right parameters
Grant the DaemonSet the required privileges to be able to make changes to the node
Set the restart policy to Always

If we want, we can also restrict our DaemonSet to only run on nodes that we know are going to run this workload. For example, we can restrict it to only run on a specific node pool in AKS using a node selector.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-config
spec:
  selector:
    matchLabels:
      name: node-config
  template:
    metadata:
      labels:
        name: node-config
    spec:
      containers:
        - name: node-config
          image: scdemo.azurecr.io/node-config:1
          securityContext:
            privileged: true
      restartPolicy: Always
      nodeSelector:
        agentpool: elasticsearch

Once we deploy this to our cluster, the DaemonSet will run and make the changes we require. When using a DaemonSet or init container approach, pay special attention to security. This container runs in privileged mode, which gives it a high level of permissions, not just the ability to change the specific configuration setting you are interested in. Ensure that access to these containers and their configuration is restricted. Consider using init containers if possible, as their runtime is more limited.

3. Init Containers

This is a similar approach to DaemonSets, but instead of running on every node, we use an init container in our application so it only runs on the nodes where our application is present. An init container allows us to specify that a specific container must run, and complete successfully, prior to our main application being run. We can take our container that runs our custom script, as with the DaemonSet option, and run it as an init container instead. The benefit of this approach is that the init container only runs once when the application is started, and then stops. This avoids needing the sleep command that keeps the process running at all times. The downside is that using an init container requires editing the YAML for the application you are deploying, which may be difficult or impossible if you are using a third-party application. Some third-party applications have Helm charts or similar that allow passing in custom init containers, but many do not. If you are creating your own applications then this is easier.

Below is an example using this approach; in this example we use a pre-built container (BusyBox) for running our script, rather than a custom container. Either approach can be used.

apiVersion: v1
kind: Pod
metadata:
  name: app-pod
  labels:
    app.kubernetes.io/name: MyApp
spec:
  containers:
    - name: main-app
      image: scdemo.azurecr.io/main-app:1
  initContainers:
    - name: init-sysctl
      image: busybox
      command:
        - sysctl
        - -w
        - vm.max_map_count=262144
      imagePullPolicy: IfNotPresent
      securityContext:
        privileged: true
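As a quick check (an addition, not from the original article): because vm.max_map_count is a kernel setting that is not namespaced per container, reading it from inside the running container reflects the node's value, so you can confirm the init container did its job. The pod and container names match the example above; the manifest file name is assumed.

# Deploy the pod (assuming the manifest above is saved as app-pod.yaml)
kubectl apply -f app-pod.yaml
kubectl wait --for=condition=Ready pod/app-pod

# Read the setting from inside the main container; expect 262144
kubectl exec app-pod -c main-app -- cat /proc/sys/vm/max_map_count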
Conclusions

Making changes to underlying AKS nodes is something that most people won't need to do, most of the time. However, there are some scenarios where this is important. AKS comes with functionality to do this in a controlled and supported manner via the use of configuration files. This approach is recommended if the configuration you need to change is supported, as it is simpler to implement, doesn't require creating custom containers, and is the most secure approach. If the change you need is not supported, then you still have a way to deal with this via DaemonSets or init containers, but special attention should be paid to security when using this solution.

Enhancing Performance in Azure Container Apps
Azure Container Apps is a fully managed serverless container service that enables you to deploy and run applications without having to manage the infrastructure. The Azure Container Apps team has recently made improvements to the load balancing algorithm and scaling behavior to better align with customer expectations and meet their performance needs.