Azure Linux: Driving Security in the Era of AI Innovation
Microsoft is advancing cloud and AI innovation with a clear focus on security, quality, and responsible practices. At Ignite 2025, Azure Linux reflects that commitment. As Microsoft’s ubiquitous Linux OS, it powers critical services and serves as the hub for security innovation. This year’s announcements, the public preview of Azure Linux with OS Guard and the general availability of Pod Sandboxing, reinforce security as one of our core priorities, helping customers build and run workloads with confidence in an increasingly complex threat landscape.

Announcing OS Guard Public Preview

We’re excited to announce the public preview of Azure Linux with OS Guard at Ignite 2025! OS Guard delivers a hardened, immutable container host built on the FedRAMP-certified Azure Linux base image. It introduces a significantly streamlined footprint with approximately 100 fewer packages than the standard Azure Linux image, reducing the attack surface and improving performance. FIPS mode is enforced by default, ensuring compliance for regulated workloads right out of the box. Additional security features include dm-verity for filesystem immutability, Trusted Launch backed by vTPM-secured keys, and seamless integration with AKS for container workloads. Built with upstream transparency and active Microsoft contributions, OS Guard provides a secure foundation for containerized applications while maintaining operational simplicity. During the preview period, code integrity and mandatory access control (SELinux) are enabled in audit mode, allowing customers to validate policies and prepare for enforcement without impacting workloads.

General Availability: Pod Sandboxing for stronger isolation on AKS

We’re also announcing the GA of Pod Sandboxing on AKS, delivering stronger workload isolation for multi-tenant and regulated environments.
Based on the open source Kata Containers project, Pod Sandboxing introduces VM-level isolation for containerized workloads by running each pod inside its own lightweight virtual machine, providing a stronger security boundary than traditional containers.

Connect with us at Ignite

Meet the Azure Linux team and see these innovations in action. Join us at our breakout session (https://ignite.microsoft.com/en-US/sessions/BRK144) and visit the Linux on Azure booth for live demos and deep dives.

| Session Type | Session Code | Session Name | Date/Time (PST) |
| --- | --- | --- | --- |
| Breakout | BRK 143 | Optimizing performance, deployments, and security for Linux on Azure | Thu, Nov 20, 1:00 PM – 1:45 PM |
| Breakout | BRK 144 | Build, modernize, and secure AKS workloads with Azure Linux | Wed, Nov 19, 1:30 PM – 2:15 PM |
| Breakout | BRK 104 | From VMs and containers to AI apps with Azure Red Hat OpenShift | Thu, Nov 20, 8:30 AM – 9:15 AM |
| Theatre | THR 712 | Hybrid workload compliance from policy to practice on Azure | Tue, Nov 18, 3:15 PM – 3:45 PM |
| Theatre | THR 701 | From Container to Node: Building Minimal-CVE Solutions with Azure Linux | Wed, Nov 19, 3:30 PM – 4:00 PM |
| Lab | Lab 505 | Fast track your Linux and PostgreSQL migration with Azure Migrate | Tue, Nov 18, 4:30 PM – 5:45 PM; Wed, Nov 19, 3:45 PM – 5:00 PM; Thu, Nov 20, 9:00 AM – 10:15 AM |

Whether you’re migrating workloads, exploring security features, or looking to engage with our engineering team, we’re eager to connect and help you succeed with Azure Linux.

Resources to get started

- Azure Linux OS Guard Overview & QuickStart: https://aka.ms/osguard
- Pod Sandboxing Overview & QuickStart: https://aka.ms/podsandboxing
- Azure Linux Documentation: https://learn.microsoft.com/en-us/azure/azure-linux/

Azure Linux 3.0 now in preview on Azure Kubernetes Service v1.31
We are excited to announce that Azure Linux 3.0, the next major version release of the Azure Linux container host for Azure Kubernetes Service (AKS), is now available in preview on AKS version 1.31. Approximately every three years Azure Linux releases a new version of its operating system with upgrades to major components. Azure Linux 3.0 offers increased package availability and versions, an updated kernel, and improvements to performance, security, tooling, and the developer experience. Some of the major components upgraded from Azure Linux 2.0 to 3.0 include:

| Component | Azure Linux 3.0 | Azure Linux 2.0 | Release Notes |
| --- | --- | --- | --- |
| Linux kernel | v6.6 (latest LTS) | v5.15 (previous LTS) | Linux 6.6 |
| containerd | v1.7.13 (v2.0 will also be offered once it becomes stable) | v1.6.26 | containerd releases |
| systemd | v255 | v250 | systemd releases |
| OpenSSL | v3.3.0 | v1.1.1k | OpenSSL 3.3 |

For more details on the key features and updates in Azure Linux 3.0, see the 3.0 GitHub release notes.

Using Azure Linux 3.0 in Preview:

To get started with Azure Linux 3.0 (Preview) on AKS version 1.31, you simply need to register the Azure Linux 3.0 preview feature flag in your Azure subscription. To do so, run the following az cli command:

```shell
az feature register --namespace Microsoft.ContainerService --name AzureLinuxV3Preview
```

Verify the registration status using the following az cli command (please note it will take a few minutes for the status to show Registered):

```shell
az feature show --namespace Microsoft.ContainerService --name AzureLinuxV3Preview
```

Once registered, any new AKS version 1.31 clusters or node pools created with the ‘--os-sku=AzureLinux’ option will default to using Azure Linux 3.0. You can deploy Azure Linux 3.0 clusters or node pools using the method of your choice: CLI, PowerShell, Terraform, or ARM. Follow this documentation for further instructions on getting started with Azure Linux 3.0 in preview.
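To sketch what the create step could look like once the flag shows Registered, the snippet below composes (without executing) a hypothetical `az aks create` call. The resource group, cluster name, and patch version are placeholders, so review and adapt the printed command before running it by hand:

```shell
#!/usr/bin/env sh
# Sketch: build an `az aks create` command string for an Azure Linux 3.0
# preview cluster on AKS 1.31. Nothing is executed against Azure here;
# the resource group and cluster names below are placeholders.
build_aks_create_cmd() {
  rg="$1"; cluster="$2"; k8s_version="$3"
  printf 'az aks create -g %s -n %s --kubernetes-version %s --os-sku AzureLinux --node-count 2\n' \
    "$rg" "$cluster" "$k8s_version"
}

# Print the command for review, then run it manually once it looks right.
build_aks_create_cmd rg-azl3-demo aks-azl3-demo 1.31.1
```

Composing the string first keeps the example safe to run anywhere; the actual deployment still happens only when you paste the reviewed command into a logged-in Azure CLI session.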
Considerations:

- Azure Linux 3.0 Preview is only supported on AKS version 1.31, and hence not supported on AKS versions 1.30 and below. If you register the Azure Linux 3.0 preview feature flag on AKS versions 1.30 and below, new AKS clusters and node pools will still default to using Azure Linux 2.0.
- During the preview, existing clusters or node pools running Azure Linux 2.0 cannot be upgraded to 3.0. New node pools or clusters need to be created for Azure Linux 3.0 Preview.
- Azure Linux 3.0 support is in preview as part of the v20241025 release. Visit the AKS Release Tracker for the latest on which regions are on this release.

How to Keep in Touch with the Azure Linux Team:

Your insight and feedback during the preview is incredibly valuable for the Azure Linux team and helps shape Azure Linux 3.0 to ensure it’s ready for production workloads. For updates, feedback, and feature requests related to Azure Linux, there are a few ways to stay connected to the team:

- Ask questions and submit feedback via Azure Linux GitHub Issues
- We have a public community call every other month for Azure Linux users to come together to ask questions, share learnings, and get updates. Join the next community call on November 21st at 8AM PST: here
- Partners with support questions can reach out to AzureLinuxISV@microsoft.com

What’s Next:

We will incorporate the feedback gathered during the preview period to prepare for the GA of Azure Linux 3.0 on AKS version 1.32.

What's new in Azure Linux: Containers, Azure Portal, Security and more
Azure Linux is Microsoft's proven Linux distribution that has been used for years by internal Microsoft services such as Minecraft, Xbox, HDInsight, Defender, Azure Kubernetes Service, and Azure Nexus. At Build in May 2023, Microsoft made this distribution available to external customers as a container host on Azure Kubernetes Service (AKS). In the six months since we announced general availability of Azure Linux as a container host for AKS, we have seen an incredible reception from our customers and partners. The team has been hard at work, in partnership with the AKS team, to bring many updates to our customers. This blog captures the Azure Linux updates the team has been working on, customer and partner testimonials, the upcoming roadmap, and how to keep in touch with the team! Note that for Azure Kubernetes Service updates, please visit the public AKS Roadmap.

Azure Linux Now Supports AKS Long-Term Support (LTS) Starting with Kubernetes v1.28+
What’s New

Managing Kubernetes upgrades can be a challenge for many organizations. The fast-paced release cycle requires frequent cluster updates, which can be time-consuming, carry operational risks, and require repeated validation of workloads and infrastructure. To address this, in April of this year, Azure Kubernetes Service (AKS) introduced Long-Term Support (LTS) on every AKS version, beginning with Kubernetes version 1.28. With AKS LTS, every community-released version of Kubernetes receives an extended support window of an additional year, giving customers more time to test, validate, and adopt new versions at a pace that suits their business needs. The Azure Linux team is excited to announce that Azure Linux now also supports AKS LTS starting with Kubernetes version 1.28 and above. This means you can now pair a stable, enterprise-grade node operating system with the extended lifecycle benefits of AKS LTS, providing a consistent, secure, and well-maintained platform for your container workloads.

Benefits of Azure Linux with your AKS LTS Clusters

- Secure by Design: Azure Linux is built from source using Microsoft’s trusted pipelines, with a minimal package set that reduces the attack surface. It is FIPS-compliant and meets CIS Level 1 benchmarks.
- Operational Stability: With AKS LTS, each version is supported for two years, reducing upgrade frequency and providing a predictable, stable platform for mission-critical workloads.
- Reliable Updates: Every package update is validated by both the Azure Linux and AKS teams, running through a full suite of tests to prevent regressions and minimize disruptions.
- Broad Compatibility: Azure Linux supports AKS extensions, add-ons, and open-source projects. It works seamlessly with existing Linux-based containers and includes the upstream containerd runtime.
- Advanced Isolation: It is the only OS on AKS that supports pod sandboxing, enabling compute isolation between pods for enhanced security.
- Seamless Migration: Customers can migrate from other distributions to Azure Linux node pools in place without recreating clusters, simplifying the process.

Getting Started

Getting started with Azure Linux on AKS LTS is simple and can be done with a single command. See the full documentation on getting started with AKS Long-Term Support here. Please note that when enabling LTS on a new Azure Linux cluster you will need to specify --os-sku AzureLinux.

Considerations

- LTS is available on the Premium tier. Refer to the Premium tier pricing for more information.
- Some add-ons and features might not support Kubernetes versions outside upstream community support windows. View unsupported add-ons and features here.
- Azure Linux 2.0 is the default node OS for AKS versions 1.27 to 1.31 during both Standard and Long-Term Support. However, Azure Linux 2.0 will reach end of life during the LTS period of AKS v1.28–v1.31. To maintain support and security updates, customers running Azure Linux 2.0 on AKS v1.28–v1.31 LTS are requested to migrate to Azure Linux 3.0 by November 2025. Azure Linux 3.0 has been validated to support AKS Kubernetes v1.28–v1.31. Before Azure Linux 2.0 goes EoL, AKS will offer a feature to facilitate an in-place migration from Azure Linux 2.0 to 3.0 via a node pool update command; for feature availability and updates, see the GitHub issue. After November 2025, Azure Linux 2.0 will no longer receive updates, security patches, or support, which may put your systems at risk.

| AKS version | Azure Linux version during AKS Standard Support | Azure Linux version during AKS Long-Term Support |
| --- | --- | --- |
| 1.27 | Azure Linux 2.0 | Azure Linux 2.0 |
| 1.28 – 1.31 | Azure Linux 2.0 | Azure Linux 2.0 (migrate to 3.0 by Nov 2025) |
| 1.32+ | Azure Linux 3.0 | Azure Linux 3.0 |

For more information on the Azure Linux Container Host support lifecycle, see here.
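As a sketch of the single-command setup described above, the create call can be composed as a reviewable string first. The resource group and cluster names are hypothetical placeholders, and the `--tier premium` and `--k8s-support-plan AKSLongTermSupport` flags follow the AKS LTS documentation:

```shell
#!/usr/bin/env sh
# Sketch: compose (but do not execute) an `az aks create` command that
# enables AKS Long-Term Support with Azure Linux nodes. The resource
# group and cluster names are placeholders.
build_lts_create_cmd() {
  rg="$1"; cluster="$2"
  printf 'az aks create -g %s -n %s --tier premium --k8s-support-plan AKSLongTermSupport --os-sku AzureLinux\n' \
    "$rg" "$cluster"
}

# Review the printed command, then run it by hand in your subscription.
build_lts_create_cmd rg-lts-demo aks-lts-demo
```

Note that the Premium tier listed under Considerations is what `--tier premium` selects; without it, the LTS support plan cannot be enabled.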
How to Keep in Touch with the Azure Linux Team:

For updates, feedback, and feature requests related to Azure Linux, there are a few ways to stay connected to the team:

- We have a public community call every other month for Azure Linux users to come together to ask questions, share learnings, and get updates. Join the next community call on July 24th at 8AM PST: here
- Partners with support questions can reach out to AzureLinuxISV@microsoft.com

Run OpenClaw Agents on Azure Linux VMs (with Secure Defaults)
Many teams want an enterprise-ready personal AI assistant, but they need it on infrastructure they control, with security boundaries they can explain to IT. That is exactly where OpenClaw fits on Azure. OpenClaw is a self-hosted, always-on personal agent runtime you run in your enterprise environment and Azure infrastructure. Instead of relying only on a hosted chat app from a third-party provider, you can deploy, operate, and experiment with an agent on an Azure Linux VM you control, using your existing GitHub Copilot licenses, Azure OpenAI deployments, or API plans from OpenAI, Anthropic Claude, Google Gemini, and other model providers you already subscribe to. Once deployed on Azure, you can interact with an OpenClaw agent through familiar channels like Microsoft Teams, Slack, Telegram, WhatsApp, and many more! For Azure users, this gives you a practical middle ground: modern personal-agent workflows on familiar Azure infrastructure.

What is OpenClaw, and how is it different from ChatGPT/Claude/chat apps?

OpenClaw is a self-hosted personal agent runtime that can be hosted on Azure compute infrastructure. How it differs:

- ChatGPT/Claude apps are primarily hosted chat experiences tied to one provider's models
- OpenClaw is an always-on runtime you operate yourself, backed by your choice of model provider: GitHub Copilot, Azure OpenAI, OpenAI, Anthropic Claude, Google Gemini, and others
- OpenClaw lets you keep the runtime boundary in your own Azure VM environment within your Azure enterprise subscription

In practice, OpenClaw is useful when you want a persistent assistant for operational and workflow tasks, with your own infrastructure as the control point. You bring whatever model provider and API plan you already have; OpenClaw connects to it.

Why Azure Linux VMs?
Azure Linux VMs are a strong fit because they provide:

- A suitable host machine for the OpenClaw agent to run on
- Enterprise-friendly infrastructure and identity workflows
- Repeatable provisioning via the Azure CLI
- Network hardening with NSG rules
- Managed SSH access through Azure Bastion instead of public SSH exposure

How to Set Up OpenClaw on an Azure Linux VM

This guide sets up an Azure Linux VM, applies NSG (Network Security Group) hardening, configures Azure Bastion for managed SSH access, and installs an always-on OpenClaw agent within the VM that you can interact with through various messaging channels.

What you'll do

- Create Azure networking (VNet, subnets, NSG) and compute resources with the Azure CLI
- Apply Network Security Group rules so VM SSH is allowed only from Azure Bastion
- Use Azure Bastion for SSH access (no public IP on the VM)
- Install OpenClaw on the Azure VM
- Verify OpenClaw installation and configuration on the VM

What you need

- An Azure subscription with permission to create compute and network resources
- Azure CLI installed (install steps)
- An SSH key pair (the guide covers generating one if needed)
- ~20–30 minutes

Configure deployment

Step 1: Sign in to Azure CLI

```shell
az login                 # select a suitable Azure subscription during login
az extension add -n ssh  # the ssh extension is required for Azure Bastion SSH
```

The ssh extension is required for Azure Bastion native SSH tunneling.

Step 2: Register required resource providers (one-time)

Register the required Azure resource providers (a one-time registration):

```shell
az provider register --namespace Microsoft.Compute
az provider register --namespace Microsoft.Network
```

Verify registration and wait until both show Registered:

```shell
az provider show --namespace Microsoft.Compute --query registrationState -o tsv
az provider show --namespace Microsoft.Network --query registrationState -o tsv
```

Step 3: Set deployment variables

Set the deployment environment variables that will be needed throughout this guide.
```shell
RG="rg-openclaw"
LOCATION="westus2"
VNET_NAME="vnet-openclaw"
VNET_PREFIX="10.40.0.0/16"
VM_SUBNET_NAME="snet-openclaw-vm"
VM_SUBNET_PREFIX="10.40.2.0/24"
BASTION_SUBNET_PREFIX="10.40.1.0/26"
NSG_NAME="nsg-openclaw-vm"
VM_NAME="vm-openclaw"
ADMIN_USERNAME="openclaw"
BASTION_NAME="bas-openclaw"
BASTION_PIP_NAME="pip-openclaw-bastion"
```

Adjust names and CIDR ranges to fit your environment. The Bastion subnet must be at least /26.

Step 4: Select SSH key

Use your existing public key if you have one:

```shell
SSH_PUB_KEY="$(cat ~/.ssh/id_ed25519.pub)"
```

If you don't have an SSH key yet, generate one:

```shell
ssh-keygen -t ed25519 -a 100 -f ~/.ssh/id_ed25519 -C "you@example.com"
SSH_PUB_KEY="$(cat ~/.ssh/id_ed25519.pub)"
```

Step 5: Select VM size and OS disk size

```shell
VM_SIZE="Standard_B2as_v2"
OS_DISK_SIZE_GB=64
```

Choose a VM size and OS disk size available in your subscription and region:

- Start smaller for light usage and scale up later
- Use more vCPU/RAM/disk for heavier automation, more channels, or larger model/tool workloads
- If a VM size is unavailable in your region or subscription quota, pick the closest available SKU

List VM sizes available in your target region:

```shell
az vm list-skus --location "${LOCATION}" --resource-type virtualMachines -o table
```

Check your current vCPU and disk usage/quota:

```shell
az vm list-usage --location "${LOCATION}" -o table
```

Deploy Azure resources

Step 1: Create the resource group

The Azure resource group will contain all of the Azure resources that the OpenClaw agent needs.

```shell
az group create -n "${RG}" -l "${LOCATION}"
```

Step 2: Create the network security group

Create the NSG and add rules so only the Bastion subnet can SSH into the VM.
```shell
az network nsg create \
  -g "${RG}" -n "${NSG_NAME}" -l "${LOCATION}"

# Allow SSH from the Bastion subnet only
az network nsg rule create \
  -g "${RG}" --nsg-name "${NSG_NAME}" \
  -n AllowSshFromBastionSubnet --priority 100 \
  --access Allow --direction Inbound --protocol Tcp \
  --source-address-prefixes "${BASTION_SUBNET_PREFIX}" \
  --destination-port-ranges 22

# Deny SSH from the public internet
az network nsg rule create \
  -g "${RG}" --nsg-name "${NSG_NAME}" \
  -n DenyInternetSsh --priority 110 \
  --access Deny --direction Inbound --protocol Tcp \
  --source-address-prefixes Internet \
  --destination-port-ranges 22

# Deny SSH from other VNet sources
az network nsg rule create \
  -g "${RG}" --nsg-name "${NSG_NAME}" \
  -n DenyVnetSsh --priority 120 \
  --access Deny --direction Inbound --protocol Tcp \
  --source-address-prefixes VirtualNetwork \
  --destination-port-ranges 22
```

The rules are evaluated by priority (lowest number first): Bastion traffic is allowed at 100, then all other SSH is blocked at 110 and 120.

Step 3: Create the virtual network and subnets

Create the VNet with the VM subnet (NSG attached), then add the Bastion subnet.

```shell
az network vnet create \
  -g "${RG}" -n "${VNET_NAME}" -l "${LOCATION}" \
  --address-prefixes "${VNET_PREFIX}" \
  --subnet-name "${VM_SUBNET_NAME}" \
  --subnet-prefixes "${VM_SUBNET_PREFIX}"

# Attach the NSG to the VM subnet
az network vnet subnet update \
  -g "${RG}" --vnet-name "${VNET_NAME}" \
  -n "${VM_SUBNET_NAME}" --nsg "${NSG_NAME}"

# AzureBastionSubnet: this exact name is required by Azure
az network vnet subnet create \
  -g "${RG}" --vnet-name "${VNET_NAME}" \
  -n AzureBastionSubnet \
  --address-prefixes "${BASTION_SUBNET_PREFIX}"
```

Step 4: Create the Virtual Machine

Create the VM with no public IP. SSH access for OpenClaw configuration will be exclusively through Azure Bastion.
```shell
az vm create \
  -g "${RG}" -n "${VM_NAME}" -l "${LOCATION}" \
  --image "Canonical:ubuntu-24_04-lts:server:latest" \
  --size "${VM_SIZE}" \
  --os-disk-size-gb "${OS_DISK_SIZE_GB}" \
  --storage-sku StandardSSD_LRS \
  --admin-username "${ADMIN_USERNAME}" \
  --ssh-key-values "${SSH_PUB_KEY}" \
  --vnet-name "${VNET_NAME}" \
  --subnet "${VM_SUBNET_NAME}" \
  --public-ip-address "" \
  --nsg ""
```

--public-ip-address "" prevents a public IP from being assigned. --nsg "" skips creating a per-NIC NSG (the subnet-level NSG created earlier handles security).

Reproducibility: the command above uses latest for the Ubuntu image. To pin a specific version, list available versions and replace latest:

```shell
az vm image list \
  --publisher Canonical --offer ubuntu-24_04-lts \
  --sku server --all -o table
```

Step 5: Create Azure Bastion

Azure Bastion provides secure, managed SSH access to the VM without exposing a public IP. The Bastion Standard SKU with tunneling is required for the CLI-based "az network bastion ssh" command.

```shell
az network public-ip create \
  -g "${RG}" -n "${BASTION_PIP_NAME}" -l "${LOCATION}" \
  --sku Standard --allocation-method Static

az network bastion create \
  -g "${RG}" -n "${BASTION_NAME}" -l "${LOCATION}" \
  --vnet-name "${VNET_NAME}" \
  --public-ip-address "${BASTION_PIP_NAME}" \
  --sku Standard --enable-tunneling true
```

Bastion provisioning typically takes 5–10 minutes but can take up to 15–30 minutes in some regions.
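If Bastion creation fails with a subnet-size error, the usual culprit is an AzureBastionSubnet smaller than /26. The prefix length can be sanity-checked offline before redeploying; this is a minimal sketch in pure shell (no Azure calls), and the variable value mirrors the deployment variables set earlier:

```shell
#!/usr/bin/env sh
# Sketch: check that the Bastion subnet CIDR is /26 or larger before
# (re)deploying. Pure string handling; nothing is sent to Azure.
BASTION_SUBNET_PREFIX="10.40.1.0/26"

prefix_len() {
  # Strip everything up to and including the slash: 10.40.1.0/26 -> 26
  echo "${1##*/}"
}

len="$(prefix_len "$BASTION_SUBNET_PREFIX")"
if [ "$len" -le 26 ]; then
  echo "Bastion subnet OK (/$len)"
else
  echo "Bastion subnet too small (/$len); Azure requires /26 or larger" >&2
fi
```

Remember that a larger prefix number means a smaller subnet, so /27 or /28 would be rejected while /26 and /25 are fine.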
Step 6: Verify Deployments

After all resources are deployed, the resource group should contain the VM, the VNet, the NSG, the Bastion host, and the Bastion public IP.

Install OpenClaw

Step 1: SSH into the VM through Azure Bastion

```shell
VM_ID="$(az vm show -g "${RG}" -n "${VM_NAME}" --query id -o tsv)"

az network bastion ssh \
  --name "${BASTION_NAME}" \
  --resource-group "${RG}" \
  --target-resource-id "${VM_ID}" \
  --auth-type ssh-key \
  --username "${ADMIN_USERNAME}" \
  --ssh-key ~/.ssh/id_ed25519
```

Step 2: Install OpenClaw (in the Bastion SSH shell)

```shell
curl -fsSL https://openclaw.ai/install.sh | bash
```

The installer installs Node LTS and dependencies if not already present, installs OpenClaw, and launches the OpenClaw onboarding wizard. For more information, see the open source OpenClaw install docs.

OpenClaw Onboarding: Choosing an AI Model Provider

During OpenClaw onboarding, you'll choose the AI model provider for the OpenClaw agent. This can be GitHub Copilot, Azure OpenAI, OpenAI, Anthropic Claude, Google Gemini, or another supported provider. See the open source OpenClaw install docs for details on choosing an AI model provider when going through the onboarding wizard. Most enterprise Azure teams already have GitHub Copilot licenses; if that is your case, we recommend choosing the GitHub Copilot provider in the OpenClaw onboarding wizard. See the open source OpenClaw docs on configuring GitHub Copilot as the AI model provider.

OpenClaw Onboarding: Setting up Messaging Channels

During OpenClaw onboarding, there is an optional step where you can set up various messaging channels to interact with your OpenClaw agent. For first-time users, we recommend setting up Telegram due to its ease of setup. Other messaging channels such as Microsoft Teams, Slack, and WhatsApp can also be set up. To configure OpenClaw for messaging through chat channels, see the open source OpenClaw chat channels docs.
Step 3: Verify OpenClaw Configuration

To validate that everything was set up correctly, run the following commands within the same Bastion SSH session:

```shell
openclaw status
openclaw gateway status
```

If any issues are reported, you can run the onboarding wizard again with the steps above. Alternatively, you can run the following command:

```shell
openclaw doctor
```

Message OpenClaw

Once you have configured the OpenClaw agent to be reachable via various messaging channels, you can verify that it is responsive by messaging it.

Enhancing OpenClaw for Use Cases

There you go! You now have a 24/7, always-on personal AI agent living in its own Azure VM environment. For awesome OpenClaw use cases, check out the awesome-openclaw-usecases repository. To enhance your OpenClaw agent with additional AI skills so that it can autonomously perform multi-step operations in any domain, check out the awesome-openclaw-skills repository. You can also check out ClawHub and ClawSkills, two popular open source skills directories that can enhance your OpenClaw agent.

Cleanup

To delete all resources created by this guide:

```shell
az group delete -n "${RG}" --yes --no-wait
```

This removes the resource group and everything inside it (VM, VNet, NSG, Bastion, public IP). It also deletes the OpenClaw agent running within the VM. If you'd like to dive deeper into deploying OpenClaw on Azure, please check out the open source OpenClaw on Azure docs.