Linux on Azure

Boost your network performance on Linux VMs on Azure
Imagine running a marathon while wearing ankle weights—that’s what it feels like when your Linux VM struggles with network performance issues. Slow data transfers and bottlenecks can keep your systems from reaching their full potential. But here’s the good news: you don’t need to settle for suboptimal speeds anymore.

Power of collaboration

Just like a relay race requires teamwork to achieve the best result, optimizing your Linux VM's network performance on Azure works best when you combine tools, techniques, and community insights. By pooling knowledge from system administrators, developers, and Azure experts, you can explore a range of solutions designed to tackle common network challenges. Consider experimenting with adjustments to network buffer settings, congestion control algorithms, and queue disciplines to enhance data throughput and resource efficiency. These suggestions provide a foundation for transforming the way your systems handle data traffic, and everything is open for customization to suit your needs.

Why It Matters

In today’s fast-paced world, every second counts. Delays in network performance can mean missed opportunities and wasted resources. By implementing these optimizations, you can achieve:

- Faster data transfers: Move large files between regions without the headache of constant delays.
- Improved resource utilization: Handle data traffic more efficiently, reducing wasted bandwidth.
- Scalability: Make your systems ready to grow with your business needs.

Take Action Today

Don’t let your systems run with ankle weights any longer. Dive into the technical details and explore the solutions by visiting this link. Whether you’re a seasoned system administrator or a business leader looking for better results, this is your opportunity to collaborate, innovate, and maximize performance.
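The buffer, congestion-control, and queue-discipline adjustments mentioned above can be sketched as a sysctl fragment. This is a hedged example: the parameter names are standard Linux kernel tunables, but the values are illustrative starting points, not Azure-validated recommendations, so benchmark your workload before adopting them.

```shell
# Illustrative kernel tunables for network throughput; values are starting
# points to benchmark, not validated recommendations.
cat > ./99-net-tuning.conf <<'EOF'
# Larger socket buffers for high-bandwidth, high-latency paths
net.core.rmem_max = 134217728
net.core.wmem_max = 134217728
net.ipv4.tcp_rmem = 4096 87380 134217728
net.ipv4.tcp_wmem = 4096 65536 134217728
# fq queue discipline paired with BBR congestion control
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
EOF
# To apply on a VM (requires root):
#   sudo cp ./99-net-tuning.conf /etc/sysctl.d/ && sudo sysctl --system
grep -c '^net\.' ./99-net-tuning.conf
```

Note that BBR requires the tcp_bbr kernel module; if it is unavailable on your image, the default cubic congestion control is a safe fallback.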
To learn more, read the documentation.

Azure Linux Now Supports AKS Long-Term Support (LTS) Starting with Kubernetes v1.28+
What’s New

Managing Kubernetes upgrades can be a challenge for many organizations. The fast-paced release cycle requires frequent cluster updates, which can be time-consuming, carry operational risks, and require repeated validation of workloads and infrastructure. To address this, in April of this year, Azure Kubernetes Service (AKS) introduced Long-Term Support (LTS) on every AKS version — beginning with Kubernetes version 1.28. With AKS LTS, every community-released version of Kubernetes receives an extended support window of an additional year, giving customers more time to test, validate, and adopt new versions at a pace that suits their business needs.

The Azure Linux team is excited to announce that Azure Linux now also supports AKS LTS starting with Kubernetes version 1.28 and above. This means you can now pair a stable, enterprise-grade node operating system with the extended lifecycle benefits of AKS LTS — providing a consistent, secure, and well-maintained platform for your container workloads.

Benefits of Azure Linux with your AKS LTS Clusters

- Secure by Design: Azure Linux is built from source using Microsoft’s trusted pipelines, with a minimal package set that reduces the attack surface. It is FIPS-compliant and meets CIS Level 1 benchmarks.
- Operational Stability: With AKS LTS, each version is supported for two years, reducing upgrade frequency and providing a predictable, stable platform for mission-critical workloads.
- Reliable Updates: Every package update is validated by both the Azure Linux and AKS teams, running through a full suite of tests to prevent regressions and minimize disruptions.
- Broad Compatibility: Azure Linux supports AKS extensions, add-ons, and open-source projects. It works seamlessly with existing Linux-based containers and includes the upstream containerd runtime.
- Advanced Isolation: It is the only OS on AKS that supports pod sandboxing, enabling compute isolation between pods for enhanced security.
- Seamless Migration: Customers can migrate from other distributions to Azure Linux node pools in place without recreating clusters, simplifying the process.

Getting Started

Getting started with Azure Linux on AKS LTS is simple and can be done with a single command. See the full documentation on getting started with AKS Long-Term Support here. Please note that when enabling LTS on a new Azure Linux cluster you will need to specify --os-sku AzureLinux.

Considerations

- LTS is available on the Premium tier. Refer to the Premium tier pricing for more information.
- Some add-ons and features might not support Kubernetes versions outside upstream community support windows. View unsupported add-ons and features here.
- Azure Linux 2.0 is the default node OS for AKS versions v1.27 to v1.31 during both Standard and Long-Term Support. However, Azure Linux 2.0 will reach End of Life during the LTS period of AKS v1.28–v1.31. To maintain support and security updates, customers running Azure Linux 2.0 on AKS v1.28–v1.31 LTS are requested to migrate to Azure Linux 3.0 by November 2025. Azure Linux 3.0 has been validated to support AKS Kubernetes v1.28–v1.31. Before Azure Linux 2.0 goes EoL, AKS will offer a feature to facilitate an in-place migration from Azure Linux 2.0 to 3.0 via a node pool update command. For feature availability and updates, see the GitHub issue. After November 2025, Azure Linux 2.0 will no longer receive updates, security patches, or support, which may put your systems at risk.

| AKS version | Azure Linux version during AKS Standard Support | Azure Linux version during AKS Long-Term Support |
|---|---|---|
| 1.27 | Azure Linux 2.0 | Azure Linux 2.0 |
| 1.28 - 1.31 | Azure Linux 2.0 | Azure Linux 2.0 (migrate to 3.0 by Nov 2025) |
| 1.32+ | Azure Linux 3.0 | Azure Linux 3.0 |

For more information on the Azure Linux Container Host support lifecycle, see here.
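As a hedged illustration of the single-command setup described above, the sketch below composes an `az aks create` invocation for an LTS cluster with Azure Linux nodes. The resource names are placeholders, the flag names are taken from the AKS LTS documentation, and the command is printed rather than executed because it requires an authenticated Azure subscription.

```shell
# Placeholders: RG and CLUSTER are hypothetical names. Verify the flags
# against the AKS LTS docs before running; this sketch only prints the command.
RG=myResourceGroup
CLUSTER=myLtsCluster
CMD="az aks create --resource-group $RG --name $CLUSTER \
  --tier premium \
  --k8s-support-plan AKSLongTermSupport \
  --kubernetes-version 1.28 \
  --os-sku AzureLinux"
echo "$CMD"
```

The `--tier premium` and `--k8s-support-plan AKSLongTermSupport` pair reflects the Considerations above: LTS is only available on the Premium tier.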
How to Keep in Touch with the Azure Linux Team

For updates, feedback, and feature requests related to Azure Linux, there are a few ways to stay connected to the team:

- We have a public community call every other month for Azure Linux users to come together to ask questions, share learnings, and get updates. Join the next community call on July 24th at 8AM PST: here
- Partners with support questions can reach out to AzureLinuxISV@microsoft.com

From Kafka to Ray: Deploying AI and Stateful Workloads on AKS with Confidence
Building on our initial announcement for Deploying Open Source Software on Azure, the AKS and Customer Experience teams are excited to announce our expanded library of technical best-practice deployment guides for stateful and AI workloads on AKS. These guides are designed to help you accelerate the integration of some of the most critical and heavily adopted open source projects onto Azure, utilizing best practices and optimizations for AKS. For your convenience, you can jump to our collection of stateful and AI guides at the bottom. To receive updates and read about other improvements, please follow us at the AKS Engineering Blog. There, we discuss our updated Postgres guidance with additional storage considerations for data resiliency, performance, or cost using Azure Container Storage. We also highlight our Terraform additions for the MongoDB and Valkey guides, and the newly developed Azure Verified Module for deploying a production-grade AKS cluster.

Introducing New Guidance

Below is a brief overview of each new guide available today. In addition to basic installation steps, the guides detail best-practice recommendations on networking, monitoring, and security. The guidance spans initial setup through advanced AKS cluster and node configurations, ensuring applications are successfully deployed with best practices enabled.

Deploy Kafka on AKS with Strimzi for Distributed Streaming

Apache Kafka is an open source distributed event streaming platform designed to handle high-volume, high-throughput, real-time streaming data. It is widely used by thousands of companies for mission-critical applications, but managing and scaling Kafka clusters on Kubernetes can be challenging. Strimzi simplifies the deployment and management of Kafka on Kubernetes by providing a set of Kubernetes Operators and container images that automate complex Kafka operational tasks.
The Kafka on Azure Kubernetes Service (AKS) guide covers essential storage and compute considerations, ensuring your Kafka deployment meets your needs. Additionally, we provide guidance for tuning the Java Virtual Machine (JVM), which is critical for optimal Kafka broker and controller performance.

Deploy Airflow on AKS for Workflow Orchestration

Apache Airflow is a widely adopted open source workflow orchestrator often used for ETL processes, data pipeline management, and task automation. Despite its interoperability and importance, deploying Airflow on Kubernetes is complex. As a best practice, we recommend installing Airflow using the community-maintained Helm chart. The Deploying Apache Airflow on Azure Kubernetes Service (AKS) guide provides step-by-step instructions for installing the Helm chart, configuring the Azure Container Registry, and setting up user-assigned managed identities. Additionally, we provide recommendations on adjusting default configurations to optimize Airflow on AKS.

Deploy Ray on AKS for Distributed Computing

Ray is an open source framework for distributed computing, allowing both Python applications and common machine learning workloads to be parallelized and distributed efficiently. The Ray on Azure Kubernetes Service (AKS) guide details how to configure the underlying infrastructure and deploy KubeRay, the Ray Kubernetes Operator. KubeRay is the best way to deploy and manage Ray on AKS, as it provides a Kubernetes-native way to manage Ray clusters. Additionally, this guide includes two illustrative samples: one for distributed training and another for fine-tuning. The distributed training example demonstrates a PyTorch model trained on Fashion MNIST using Ray Train on AKS. The fine-tuning example showcases how to tune the GPT-2 large model using KubeRay in combination with Azure Blob Storage and BlobFuse.
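A KubeRay install of the kind the Ray guide describes is typically done with Helm. The following is a minimal sketch, not the guide's exact steps: the repository URL and chart name are the KubeRay project's published Helm artifacts, the release name and namespace are placeholders, and the commands assume helm is installed with kubectl pointed at an existing AKS cluster.

```shell
# Assumes: helm installed, kubectl context set to an AKS cluster.
# Repo URL and chart name come from the KubeRay project; verify versions in the guide.
helm repo add kuberay https://ray-project.github.io/kuberay-helm/
helm repo update
# Install the KubeRay operator, which reconciles RayCluster/RayJob resources
helm install kuberay-operator kuberay/kuberay-operator \
  --namespace kuberay --create-namespace
# Confirm the operator pod is running before creating Ray clusters
kubectl get pods -n kuberay
```

Once the operator is healthy, Ray clusters are created declaratively by applying RayCluster manifests, which is what the guide's training and fine-tuning samples build on.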
Deploy open source technologies on Azure

- Apache Airflow - Create the infrastructure for deploying Apache Airflow on Azure Kubernetes Service (AKS)
- Apache Kafka - Prepare the infrastructure for deploying Kafka on Azure Kubernetes Service (AKS)
- Ray - Deploy a Ray cluster on Azure Kubernetes Service (AKS)
- Valkey - Create the infrastructure for running a Valkey cluster on Azure Kubernetes Service (AKS)
- MongoDB - Create the infrastructure for running a MongoDB cluster on Azure Kubernetes Service (AKS)
- Postgres - Create infrastructure for deploying a highly available PostgreSQL database on AKS
- Kubernetes AI Toolchain Operator (KAITO) - Deploy KAITO on AKS using Terraform

How Pulp powers Microsoft's Linux Software Repositories
PMC, the backbone of Microsoft’s Linux Software Delivery

Microsoft's mission is "to empower every person and organization on the planet to achieve more" - across all platforms. To support this, Microsoft provides PMC (packages.microsoft.com), a key service for distributing Microsoft software products for Linux and a critical service for Microsoft's Linux distributions and secure software. PMC is a critical part of Azure Core’s Linux infrastructure. It handles the ingestion and distribution of Linux packages from Microsoft, serving both internal Microsoft teams and external customers. For example, PMC hosts packages that support Azure Linux, Azure Security Pack, .NET, VS Code, and the Edge browser, to name a few. PMC delivers packages via the standard Linux package managers, apt (Debian/Ubuntu), dnf/yum (RHEL, Fedora, CentOS), and zypper (SUSE), along with the necessary config files and keys.

PMC’s Evolution with Pulp

The PMC team is responsible for operating and maintaining the infrastructure that replicates and serves published packages, manages repositories, and controls publishing access. At the heart of PMC’s infrastructure is Pulp, an enterprise-grade, open-source platform for managing software repositories. Before adopting Pulp, PMC relied on a custom-built, in-house application deployed across virtual machines (VMs) in multiple regions. Due to the entirely custom and aging nature of the infrastructure, it was becoming increasingly difficult to maintain and prone to outages. Transitioning to Pulp also enabled PMC to modernize its architecture by leveraging best-in-class Azure services: Azure Blob Storage for scalable package and metadata storage, Azure Kubernetes Service for improved application scalability and availability, and Azure Front Door (AFD) for global content delivery. This shift has significantly enhanced service reliability and operational efficiency.
Pulp was initially developed by Red Hat to manage RPM packages - handling tasks like fetching, uploading, organizing, and distributing them. Over time, it evolved into a plugin-based system that now supports additional package types, including Debian packages. By integrating Pulp with Azure’s capabilities, PMC achieved greater service stability, reduced dependency on legacy tooling, minimized rate-limiting issues, and gained a more scalable, maintainable solution. Its flexibility and robustness made Pulp the clear choice, allowing the team to focus on delivering packages to users in hundreds of countries rather than maintaining a bespoke system.

Better Together: How PMC and Pulp Serve Microsoft Customers

Pulp manages uploaded artifacts, repositories of content, and metadata for the apt/dnf clients. PMC is best thought of as two tightly integrated components working with a shared database and blob store:

- Content Ingress: an API that handles CRUD operations on repos and packages
- Content Egress: delivering packages and associated metadata to customers

The PMC API manages user authentication and security enforcement in front of the Pulp backend, while AFD caches and serves content stored by Pulp in blob storage. Everything in between is handled by Pulp. Together, these components form a secure, scalable, and globally accessible platform for distributing Debian and RPM packages to both internal Microsoft teams and external users.

Figure 1: Content Ingress Architectural Diagram
Figure 2: Content Egress Architectural Diagram

Package Quality Assurance at Scale

Before any package is published, PMC performs a signature check. PMC also provides publishers with a package quality container to scan their packages. This scanning check is currently being integrated directly into PMC to streamline quality assurance.
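The integrity guarantees described here can also be spot-checked client-side with standard tooling. A minimal sketch follows; the file name and payload are illustrative, and in a real apt/dnf flow the expected digest would come from the repository's signed metadata rather than from the file itself.

```shell
# Illustrative only: fabricate a "package", then verify its SHA-256 digest.
# In a real apt/dnf flow the expected digest comes from signed repo metadata.
printf 'example package payload' > pkg.deb
expected=$(sha256sum pkg.deb | awk '{print $1}')
# sha256sum --check reads "digest  filename" pairs and reports OK or FAILED
echo "$expected  pkg.deb" | sha256sum --check -   # prints "pkg.deb: OK"
```

A mismatch here is exactly the "checksum mismatch" condition PMC helps diagnose: either the download was corrupted or the metadata and payload are out of sync.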
While Pulp handles core metadata curation and repository management, PMC extends this with additional quality gates and validations tailored to Microsoft’s ecosystem.

High Availability

Pulp, as the origin, provides a container-based solution that we run on Azure Kubernetes Service (AKS). PMC then leverages AFD to replicate content across a global mirror fleet, ensuring fast, reliable access to packages regardless of customer location.

Monitoring and Observability

Pulp provides a tasking system that PMC monitors for failures. PMC supplements this by monitoring our own logs and synthetics to ensure service reliability. We monitor the edge because it reflects the customer experience, whereas origin monitoring focuses on the internal health of PMC components. As a result, PMC has comprehensive monitoring, both at the edge and internally, to gain deeper insight into the overall health and performance of its service.

Secure and Reliable Package Delivery

PMC ensures that packages are delivered with integrity. If a package fails to download or has a checksum mismatch, PMC helps diagnose and resolve the issue. Using Pulp's Checkpoints feature, PMC enables snapshotting for a limited set of distros, allowing customers to lock into known-good states for safe deployments.

Partnering with the Pulp Community

In addition to leveraging Pulp, the PMC team actively contributes to and engages with the broader Pulp community. We regularly contribute to upstream Pulp by submitting bug fixes, reporting issues to Pulp's GitHub issue tracker, and developing new features that benefit both PMC and the entire Pulp ecosystem. Below are a few features and smaller improvements the PMC team has contributed back to Pulp:

- Checkpoint support: Checkpoints allow you to manage and access historical versions of repositories, enhancing your content management capabilities.
  (Checkpoint Support - A Journey Towards Predictable and Consistent Deployments - Pulp Project)
- Pulp-deb apt-by-hash support: Implemented apt-by-hash (also known as Acquire-By-Hash) support in pulp-deb, eliminating errors where clients retrieve inconsistent Debian metadata during repository updates (AptByHash - Ubuntu Wiki).
- Source Package Support for the Debian Plug-in: Collaborated with community contributors to add source package support to the pulp_deb plugin, expanding its capabilities for Debian-based workflows (add source package support to pulp_deb).
- Redis Caching Improvements: Enhanced Redis caching mechanisms in Pulp to improve performance and reduce latency (Ensure the Redis cache is actually respecting the TTL).
- Signing Service Enhancements: Delivered improvements to Pulp’s signing service, including better key management and signing workflows (Pass correlation id to signing script through ENV variable).

You can learn more about the PMC service (packages.microsoft.com), file issues, pull requests, or report a security vulnerability on the affiliated GitHub repo: Microsoft Linux Package Repositories. Access the most recent packages at https://packages.microsoft.com.

Ubuntu Pro FIPS 22.04 LTS on Azure: Secure, compliant, and optimized for regulated industries
Organizations across government (including local and federal agencies and their contractors), finance, healthcare, and other regulated industries running workloads on Microsoft Azure now have a streamlined path to meet rigorous FIPS 140-3 compliance requirements. Canonical is pleased to announce the availability of Ubuntu Pro FIPS 22.04 LTS on the Azure Marketplace, featuring newly certified cryptographic modules. This offering extends the stability and comprehensive security features of Ubuntu Pro, tailored for state agencies, federal contractors, and industries requiring a FIPS-validated foundation on Azure. It provides the enterprise-grade Ubuntu experience, optimized for performance on Azure in collaboration with Microsoft, and enhanced with critical compliance capabilities. For instance, if you are building a Software as a Service (SaaS) application on Azure that requires FedRAMP authorization, utilizing Ubuntu Pro FIPS 22.04 LTS can help you meet specific controls like SC-13 (Cryptographic Protection), as FIPS 140-3 validated modules are a foundational requirement. This significantly streamlines your path to achieving FedRAMP compliance.

What is FIPS 140-3 and why does it matter?

FIPS 140-3 is the latest iteration of the benchmark U.S. government standard for validating cryptographic module implementations, superseding FIPS 140-2. Managed by NIST, it's essential for federal agencies and contractors and is a recognized best practice in many regulated industries like finance and healthcare. Using FIPS-validated components helps ensure cryptography is implemented correctly, protecting sensitive data in transit and at rest. Ubuntu Pro FIPS 22.04 LTS includes FIPS 140-3 certified versions of the Linux kernel and key cryptographic libraries (like OpenSSL, Libgcrypt, GnuTLS) pre-enabled, which are drop-in replacements for the standard packages, greatly simplifying deployment for compliance needs.
The importance of security updates (fips-updates)

A FIPS certificate applies to a specific module version at its validation time. Over time, new vulnerabilities (CVEs) are discovered in these certified modules. Running code with known vulnerabilities poses a significant security risk. This creates a tension between strict certification adherence and maintaining real-world security. Recognizing this, Canonical provides security fixes for the FIPS modules via the fips-updates stream, available through Ubuntu Pro. We ensure these security patches do not alter the validated cryptographic functions. This approach aligns with modern security thinking, including recent FedRAMP guidance, which acknowledges the greater risk posed by unpatched vulnerabilities compared to solely relying on the original certified binaries. Canonical strongly recommends all users enable the fips-updates repository to ensure their systems are both compliant and secure against the latest threats.

FIPS 140-3 vs FIPS 140-2

The new FIPS 140-3 standard adds support for modern protocols such as TLS v1.3, as well as deprecating older algorithms like MD5. If you are upgrading systems and workloads to FIPS 140-3, it will be necessary to perform rigorous testing to ensure that applications continue to work correctly.

Compliance tooling included

Ubuntu Pro FIPS also includes access to Canonical's Ubuntu Security Guide (USG) tooling, which assists with automated hardening and compliance checks against benchmarks like CIS and DISA-STIG, a key requirement for FedRAMP deployments.

How to get Ubuntu Pro FIPS on Azure

You can leverage Ubuntu Pro FIPS 22.04 LTS on Azure in two main ways:

- Deploy the Marketplace Image: Launch a new VM directly from the dedicated Ubuntu Pro FIPS 22.04 LTS listing on the Azure Marketplace. This image comes with the FIPS modules pre-enabled for immediate use.
- Enable on an Existing Ubuntu Pro VM: If you already have an Ubuntu Pro 22.04 LTS VM running on Azure, you can enable the FIPS modules using the Ubuntu Pro Client (pro enable fips-updates).
- Upgrading standard Ubuntu: If you have a standard Ubuntu 22.04 LTS VM on Azure, you first need to attach Ubuntu Pro to it. This is a straightforward process detailed in the Azure documentation for getting Ubuntu Pro. Once Pro is attached, you can enable FIPS as described above.

Learn More

Ubuntu Pro FIPS provides a robust, maintained, and compliant foundation for your sensitive workloads on Azure.

- Watch Joel Sisko from Microsoft speak with Ubuntu experts in this webinar
- Explore all features of Ubuntu Pro on Azure
- Read details on the FIPS 140-3 certification for Ubuntu 22.04 LTS
- Official NIST certification link

Azure Image Testing for Linux (AITL)
As cloud and AI evolve at an unprecedented pace, the need to deliver high-quality, secure, and reliable Linux VM images has never been more essential. Azure Image Testing for Linux (AITL) is a self-service validation tool designed to help developers, ISVs, and Linux distribution partners ensure their images meet Azure’s standards before deployment. With AITL, partners can streamline testing, reduce engineering overhead, and ensure compliance with Azure’s best practices, all in a scalable and automated manner. Let’s explore how AITL is redefining image validation and why it’s proving to be a valuable asset for both developers and enterprises.

Before AITL, image validation was largely a manual and repetitive process; engineers were often required to perform frequent checks, resulting in several key challenges:

- Time-Consuming: Manual validation processes delayed image releases.
- Inconsistent Validation: Each distro had different methods for testing, leading to varying quality levels.
- Limited Scalability: Resource constraints restricted the ability to validate a broad set of images.

AITL addresses these challenges by enabling partners to seamlessly integrate image validation into their existing pipelines through APIs. By executing tests within their own Azure subscriptions prior to publishing, partners can ensure that only fully validated, high-quality Linux images are promoted to production in the Azure environment.

How AITL Works

AITL is powered by LISA, an open-source test framework with a comprehensive suite of 400+ test cases. AITL provides a simple yet powerful workflow to run LISA test cases:

- Registration: Partners register their images in AITL’s validation framework.
- Automated Testing: AITL runs a suite of predefined validation tests using LISA.
- Detailed Reporting: Developers receive comprehensive results highlighting compliance, performance, and security areas. All test logs are available to access.
- Self-Service Fixes: Any detected issues can be addressed by the partner before submission, eliminating delays and back-and-forth communication.
- Final Sign-Off: Once tests pass, partners can confidently publish their images, knowing they meet Azure’s quality standards.

Benefits of AITL

AITL is a transformative tool that delivers significant benefits across the Linux and cloud ecosystem:

- Self-Service Capability: Enables developers and ISVs to independently validate their images without requiring direct support from Microsoft.
- Scalable by Design: Supports concurrent testing of multiple images, driving greater operational efficiency.
- Consistent and Standardized Testing: Offers a unified validation framework to ensure quality and consistency across all endorsed Linux distributions.
- Proactive Issue Detection: Identifies potential issues early in the development cycle, helping prevent costly post-deployment fixes.
- Seamless Pipeline Integration: Easily integrates with existing CI/CD workflows to enable fully automated image validation.

Use Cases for AITL

AITL is designed to support a diverse set of users across the Linux ecosystem:

- Linux Distribution Partners: Organizations such as Canonical, Red Hat, and SUSE can validate their images prior to publishing on the Azure Marketplace, ensuring they meet Azure’s quality and compliance standards.
- Independent Software Vendors (ISVs): Companies providing custom Linux images can verify that their Linux-based solutions are optimized for performance and reliability on Azure.
- Enterprise IT Teams: Businesses managing their own Linux images on Azure can use AITL to validate updates proactively, reducing risk and ensuring smooth production deployments.

Current Status and Future Roadmap

AITL is currently in private preview, with five major Linux distros and select ISVs actively integrating it into their validation workflows.
Microsoft plans to expand AITL’s capabilities by adding:

- Support for Private Test Cases: Allowing partners to run custom tests within AITL securely.
- Kernel CI Integration: Enhancing low-level kernel validation for more robust testing and results for the community.
- DPDK and Specialized Validation: Ensuring network and hardware performance for specialized SKUs (CVM, HPC) and workloads.

How to Get Started

For developers and partners interested in AITL, follow these steps to onboard.

Register for the Private Preview

AITL is currently hidden behind a preview feature flag. You must first register the AITL preview feature with your subscription so that you can then access the AITL Resource Provider (RP). These are one-time steps done for each subscription.

Sign up for the private preview by contacting Microsoft's Linux Systems Group to request access: Private Preview Sign Up

Run the "az feature register" command to register the feature:

az feature register --namespace Microsoft.AzureImageTestingForLinux --name JobandJobTemplateCrud

To confirm that your subscription is registered, run the above command again and check that properties.state = "Registered".

Register the Resource Provider

Once the feature registration has been approved, the AITL Resource Provider can be registered by running the "az provider register" command:

az provider register --namespace Microsoft.AzureImageTestingForLinux

*If your subscription is not registered to Microsoft.Compute/Network/Storage, please register those as well; they are prerequisites for using the service. This can be done for each namespace (Microsoft.Compute, Microsoft.Network, Microsoft.Storage) through this command:

az provider register --namespace Microsoft.Compute

Set Up Permissions

The AITL RP requires a permission set to create test resources, such as the VM and storage account. The permissions are provided through a custom role that is assigned to the AITL service principal named AzureImageTestingForLinux. We provide a script, setup_aitl.py, to make this simple.
It will create a role and grant it to the service principal. Make sure the active subscription is the one you expect, then download the script and run it in a Python environment:

https://raw.githubusercontent.com/microsoft/lisa/main/microsoft/utils/setup_aitl.py

python setup_aitl.py -s "/subscriptions/xxxx"

Before running this script, you should check that you have permission to create role definitions in your subscription.

*Note: it may take up to 20 minutes for the permission to be propagated.

Assign an AITL Jobs Access Role

If you want to use a service principal or app registration to call AITL APIs, it should be assigned a role to access AITL jobs. This role should include the following permissions:

az role definition create --role-definition '{
  "Name": "AITL Jobs Access Role",
  "Description": "Delegation role is to read and write AITL jobs and job templates",
  "Actions": [
    "Microsoft.AzureImageTestingForLinux/jobTemplates/read",
    "Microsoft.AzureImageTestingForLinux/jobTemplates/write",
    "Microsoft.AzureImageTestingForLinux/jobTemplates/delete",
    "Microsoft.AzureImageTestingForLinux/jobs/read",
    "Microsoft.AzureImageTestingForLinux/jobs/write",
    "Microsoft.AzureImageTestingForLinux/jobs/delete",
    "Microsoft.AzureImageTestingForLinux/operations/read",
    "Microsoft.Resources/subscriptions/read",
    "Microsoft.Resources/subscriptions/operationresults/read",
    "Microsoft.Resources/subscriptions/resourcegroups/write",
    "Microsoft.Resources/subscriptions/resourcegroups/read",
    "Microsoft.Resources/subscriptions/resourcegroups/delete"
  ],
  "IsCustom": true,
  "AssignableScopes": [
    "/subscriptions/01d22e3d-ec1d-41a4-930a-f40cd90eaeb2"
  ]
}'

You can create a custom role using the above command in Cloud Shell and assign this role to the service principal or the app. All set! Please go through the quick start to try the AITL APIs.

Download the AITL Wrapper

AITL is served by the Azure management API, so you can use any REST API tool to access it.
We provide a Python wrapper for a better experience. The AITL wrapper is composed of a Python script and input files. It calls "az login" and "az rest" to provide an experience similar to the az CLI. The input files are used for creating test jobs. Make sure the az CLI and Python 3 are installed. Clone the LISA code, or download only the files in the folder lisa/microsoft/utils/aitl at main · microsoft/lisa (github.com). Use the commands below to check the help text:

python -m aitl job --help
python -m aitl job create --help

Create a Job

Job creation consists of two entities: a job template and an image. The quickest way to get started with the AITL service is to create a job instance with your job template properties in the request body. Replace the placeholders with the real subscription ID, resource group, and job name to start a test job. This example runs one test case with a marketplace image using the tier0.json template. You can create a new JSON file to customize the test job. The name is optional; if it's not provided, the AITL wrapper will generate one.

python -m aitl job create -s {subscription_id} -r {resource_group} -n {job_name} -b '@./tier0.json'

The default request body is:

{
  "location": "westus3",
  "properties": {
    "jobTemplateInstance": {
      "selections": [
        {
          "casePriority": [
            0
          ]
        }
      ]
    }
  }
}

This example runs the P0 test cases with the default image. You can add fields to the request, such as the image to test. All possible fields are described in the API Specification – Jobs section. The "location" property is a required field that represents the location where the test job should be created; it doesn't affect the location of the VMs. AITL supports "westus", "westus2", or "westus3". The image object in the request body JSON is where the image type to be used for testing is detailed, as well as the CPU architecture and VHD generation. If the image object is not included, LISA will pick a Linux marketplace image that meets the requirements for running the specified tests.
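By way of example, a request body that selects a marketplace image might look like the sketch below. This is a hypothetical illustration: the "type" string and the field names are assumptions extrapolated from the image types AITL documents, and the publisher/offer/SKU values are placeholders, so validate the shape against the API Specification before use.

```shell
# Hypothetical request body for a marketplace image. The "type" value and
# field names are assumptions; check the API Specification - Jobs section.
cat > marketplace-job.json <<'EOF'
{
  "location": "westus3",
  "properties": {
    "image": {
      "type": "marketplace",
      "architecture": "x64",
      "vhdGeneration": 2,
      "publisher": "Canonical",
      "offer": "ubuntu-24_04-lts",
      "sku": "server",
      "version": "latest"
    }
  }
}
EOF
# Sanity-check the JSON before passing it to the wrapper with -b '@./marketplace-job.json'
python3 -m json.tool marketplace-job.json > /dev/null && echo "valid JSON"
```

Validating the file locally first avoids a round trip to the service for a malformed request body.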
When an image type is specified, additional information is required based on the type. Supported image types are VHD, Azure Marketplace image, and Shared Image Gallery:

- VHD requires the SAS URL.
- Marketplace image requires the publisher, offer, SKU, and version.
- Shared Image Gallery requires the gallery name, image definition, and version.

Example of how to include the image object for a Shared Image Gallery image (<> denotes a placeholder):

```json
{
    "location": "westus3",
    "properties": {
        <...other properties from default request body here>,
        "image": {
            "type": "shared_gallery",
            "architecture": "x64",
            "vhdGeneration": 2,
            "gallery": "<Example: myAzureComputeGallery>",
            "definition": "<Example: myImage1>",
            "version": "<Example: 1.0.1>"
        }
    }
}
```

Check Job Status & Test Results

A job is an asynchronous operation that is updated throughout its lifecycle with its operation and ongoing test status. A job has 7 provisioning states – 5 are non-terminal states and 2 are terminal states. Non-terminal states represent ongoing operation stages and terminal states represent the status at completion. The job's current state is reflected in the `properties.provisioningState` property located in the response body. The states are described below:

| State | Type | Description |
| --- | --- | --- |
| Accepted | Non-terminal state | Initial ARM state describing the resource creation is being initialized. |
| Queued | Non-terminal state | The job has been queued by AITL to run LISA using the provided job template parameters. |
| Scheduling | Non-terminal state | The job has been taken off the queue and AITL is preparing to launch LISA. |
| Provisioning | Non-terminal state | LISA is creating your VM within your subscription using the default or provided image. |
| Running | Non-terminal state | LISA is running the specified tests on your image and VM configuration. |
| Succeeded | Terminal state | LISA completed the job run and has uploaded the final test results to the job. There may be failed test cases. |
| Failed | Terminal state | There was a failure during the job's execution. Test results may be present and reflect the latest status for each listed test. |

Test results are updated in near real time and can be seen in the `properties.results` property in the response body. Results begin updating during the "Running" state, and the final set of result updates happens before the job reaches a terminal state ("Succeeded" or "Failed"). For a complete list of possible test result properties, go to the API Specification – Test Results section.

Run the command below to get detailed test results:

```shell
python -m aitl job get -s {subscription_id} -r {resource_group} -n {job_name}
```

The query argument can format or filter results with a JMESPath query; refer to the help text for more information. For example:

List test results and error messages:

```shell
python -m aitl job get -s {subscription_id} -r {resource_group} -n {job_name} -o table -q 'properties.results[].{name:testName,status:status,message:message}'
```

Summarize test results:

```shell
python -m aitl job get -s {subscription_id} -r {resource_group} -n {job_name} -q 'properties.results[].status|{TOTAL:length(@),PASSED:length([?@==`"PASSED"`]),FAILED:length([?@==`"FAILED"`]),SKIPPED:length([?@==`"SKIPPED"`]),ATTEMPTED:length([?@==`"ATTEMPTED"`]),RUNNING:length([?@==`"RUNNING"`]),ASSIGNED:length([?@==`"ASSIGNED"`]),QUEUED:length([?@==`"QUEUED"`])}'
```

Access Job Logs

To access logs and read from Azure Storage, the AITL user must have the "Storage Blob Data Owner" role. Check with your administrator whether you have permission to create role definitions in your subscription. For information on this role and instructions on how to add this permission, see this Azure documentation.

To access job logs, send a GET request with the job name and use the logUrl in the response body to retrieve the logs, which are stored in an Azure Storage container. For more details on interpreting logs, refer to the LISA documentation on troubleshooting test failures.
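If you would rather post-process the job JSON in Python than craft JMESPath queries, the status summary and terminal-state check above reduce to a few lines. This is an illustrative sketch (the helper names are ours; the `properties.provisioningState` and `properties.results[].status` shapes follow the response format described above):

```python
from collections import Counter

# Terminal provisioning states, per the state table above.
TERMINAL_STATES = {"Succeeded", "Failed"}

def is_terminal(provisioning_state):
    """True once the job has finished, successfully or not."""
    return provisioning_state in TERMINAL_STATES

def summarize_results(job):
    """Mirror the JMESPath summary query: count test results by status."""
    statuses = [r["status"] for r in job["properties"]["results"]]
    counts = Counter(statuses)
    return {"TOTAL": len(statuses), **counts}
```

You would feed this the parsed output of `python -m aitl job get ...`, for example via `json.loads`, and poll until `is_terminal` returns true.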
To quickly view logs online (note that file size limitations may apply), select a .log blob file and click "Edit" in the top toolbar of the blob menu. To download a log, click the download button in the toolbar.

Conclusion

AITL represents a forward-looking approach to Linux image validation, bringing automation, scalability, and consistency to the forefront. By shifting validation earlier in the development cycle, AITL helps reduce risk, accelerate time to market, and ensure a reliable, high-quality Linux experience on Azure. Whether you're a developer, a Linux distribution partner, or an enterprise managing Linux workloads on Azure, AITL offers a powerful way to modernize and streamline your validation workflows. To learn more or to get started with AITL, reach out to the Microsoft Linux Systems Group.
We’re announcing the upcoming end of standard support for Ubuntu 20.04 LTS (Focal Fossa) on 31 May 2025, as we focus on delivering a more secure and optimized Linux experience. Originally released in April 2020, Ubuntu 20.04 LTS introduced key enhancements like improved UEFI Secure Boot and broader Kernel Livepatch coverage, strengthening security on Azure. You can continue using your existing virtual machines, but after this date Canonical will no longer provide security, feature, or maintenance updates, which may impact system security and reliability.

Recommended action: It’s important to act before 31 May 2025 to ensure you’re on a supported operating system. Microsoft recommends either migrating to a newer Ubuntu LTS release or upgrading to Ubuntu Pro to gain access to expanded security and maintenance from Canonical.

Upgrading to Ubuntu 22.04 LTS or Ubuntu 24.04 LTS

Transitioning to the latest operating system, such as Ubuntu 24.04 LTS, is important for performance, hardware enablement, and access to new technology, and is recommended for new instances. It can be a complex process for existing deployments and should be properly scoped and tested with your workloads. While there’s no direct upgrade path from Ubuntu 20.04 LTS to Ubuntu 24.04 LTS, you can upgrade to Ubuntu 22.04 LTS first and then to Ubuntu 24.04 LTS, or directly install Ubuntu 24.04 LTS. See the Ubuntu Server upgrade guide for more information.

Ubuntu Pro – Expanded Security Maintenance to 2030

Ubuntu Pro includes security patching for all Ubuntu packages through Expanded Security Maintenance (ESM) for Infrastructure and Applications, plus optional 24/7 phone and ticket support. Ubuntu Pro 20.04 LTS will remain fully supported until May 2030. New virtual machines can be deployed with Ubuntu Pro from the Azure Marketplace. You can also upgrade existing virtual machines to Ubuntu Pro with an in-place upgrade via the Azure CLI.
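For existing VMs, the in-place conversion to Ubuntu Pro is a single CLI call. A sketch (the resource group and VM names are placeholders, and the command is printed as a dry run — remove the leading `echo` to execute it):

```shell
# Placeholder names – substitute your own resource group and VM name.
RG="myResourceGroup"
VM="myUbuntuVm"

# Convert an existing Ubuntu 20.04 LTS VM to Ubuntu Pro in place.
# Printed as a dry run; drop the leading 'echo' to execute.
echo az vm update -g "$RG" -n "$VM" --license-type UBUNTU_PRO
```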
More Information More information covering Ubuntu 20.04 LTS End of Standard Support can be found here. Refer to the documentation to learn more about handling Ubuntu 20.04 LTS on Azure. You can also check out Canonical’s blog post and watch the webinar here.3.3KViews1like1CommentAzure Linux 3.0 now Generally Available with Azure Kubernetes Service v1.32
We are excited to announce that Azure Linux 3.0, the next major version release of the Azure Linux container host for Azure Kubernetes Service (AKS), is now Generally Available on AKS version 1.32. After extensive testing and valuable feedback from our early adopters, 3.0 is the highest quality release of Azure Linux for broad Azure usage. Azure Linux 3.0 offers increased package availability and versions, an updated kernel, and improvements to performance, security, and tooling and developer experience. Azure Linux 3.0 supports both x86_64 and ARM64 architectures. With this 3.0 release, we’re committed to supporting new platforms like Azure’s Cobalt architecture for the best performance.

Some of the major components upgraded from Azure Linux 2.0 to 3.0 include:

| Component | Azure Linux 3.0 | Azure Linux 2.0 | Release Notes |
| --- | --- | --- | --- |
| Linux Kernel | v6.6 (latest LTS) | v5.15 (previous LTS) | Linux 6.6 |
| Containerd | v2.0 | v1.6.26 | Containerd Releases |
| systemd | v255 | v250 | Systemd Releases |
| OpenSSL | v3.3.0 | v1.1.1k | OpenSSL 3.3 |

For more details on the key features and updates in Azure Linux 3.0, see the 3.0 GitHub release notes.

New features since Azure Linux 3.0 Preview

- Azure Linux 3.0 now defaults to containerd 2.0.
- Azure Linux 3.0 node pools now support Trusted Launch on AKS.
- Azure Linux 3.0 now supports a FIPS-enabled ARM64 image, making it the only distribution on AKS to do so.

Using Azure Linux 3.0

Creating New Azure Linux 3.0 Clusters and Nodepools

Any new AKS clusters or node pools created using the --os-sku=AzureLinux flag and that run AKS version 1.32 default to Azure Linux 3.0. You can deploy clusters or node pools using the method of your choice to use Azure Linux 3.0 as the node OS:

- CLI
- PowerShell
- Terraform
- ARM

Upgrading Existing Azure Linux 2.0 Clusters and Nodepools to Azure Linux 3.0

To upgrade existing Azure Linux 2.0 clusters and node pools to Azure Linux 3.0, upgrade them to AKS version 1.32. For more information about AKS cluster upgrades, see Upgrade an AKS cluster.
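As a concrete example of the CLI path, creating a cluster that defaults to Azure Linux 3.0 only requires the `--os-sku` flag plus an AKS 1.32 version. A sketch (names and the patch version are placeholders; the command is printed as a dry run — remove the leading `echo` to run it):

```shell
# Placeholder values – substitute your own, and confirm the exact 1.32
# patch version available in your region with 'az aks get-versions'.
RG="myResourceGroup"
CLUSTER="myAKSCluster"
K8S_VERSION="1.32.0"

# Create a cluster whose nodes run Azure Linux (3.0 on AKS 1.32+).
# Printed as a dry run; drop the leading 'echo' to execute.
echo az aks create \
  --resource-group "$RG" \
  --name "$CLUSTER" \
  --os-sku AzureLinux \
  --kubernetes-version "$K8S_VERSION" \
  --node-count 2 \
  --generate-ssh-keys
```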
Considerations

- Azure Linux 3.0 is not supported on Kubernetes version 1.30 and below.
- Azure Linux 3.0 Preview is supported on Kubernetes version 1.31.
- The AKS Kubernetes version 1.32 rollout has been delayed and is now expected to reach all regions on or before the end of April. Please use the `az aks get-versions` command to check whether Kubernetes version 1.32 is available in your region.
- Kubernetes version 1.31 will be the last AKS version to support Azure Linux 2.0.

Growing the Partner Ecosystem

We want to express our gratitude to all the partners who participated in the Azure Linux 3.0 preview. The following partners have successfully completed their validation of Azure Linux 3.0:

You can find the entire list of Azure Linux AKS Container Host partner solutions here.

Upcoming Events

- KubeCon EU: The Azure Linux team will be available at the Microsoft booth at KubeCon EU from April 2-4, ready to chat with customers and address inquiries. The team is looking forward to connecting at KubeCon!
- LinuxFest Northwest: Another opportunity to connect with the Azure Linux team will be at LinuxFest Northwest, a local Linux conference in Bellingham, WA, taking place from April 24-25. The Azure Linux team will present a session on their learnings and challenges in building a Linux distribution at Microsoft, as well as showcasing features and benefits of Azure Linux.

How to Keep in Touch with the Azure Linux Team

For updates, feedback, and feature requests related to Azure Linux, there are a few ways to stay connected to the team:

- Ask questions and submit feedback via Azure Linux GitHub Issues.
- We have a public community call every other month for Azure Linux users to come together to ask questions, share learnings, and get updates. Join the next community call on May 22nd at 8 AM PST: here.
- Partners with support questions can reach out to AzureLinuxISV@microsoft.com.
Discover how Exec Docs is revolutionizing Azure documentation by transforming standard markdown into interactive, executable content. Powered by the open source Innovation Engine, Exec Docs enables users to seamlessly run CLI commands to deploy and manage Azure resources, while integrating with CI/CD pipelines to keep documentation accurate and up-to-date. Demonstrated at Microsoft Ignite 2024, the solution showcased guided workflows for discovering services, customizing workloads, testing outputs in real time, and even creating lab environments. This innovative approach simplifies the process of learning and deploying on Azure, reducing setup times from hours or days to just minutes.294Views0likes0CommentsMicrosoft at SUSECON 2025: Join us!
Microsoft Azure is thrilled to participate in SUSECON 2025, where we look forward to sharing insights, learning, and collaborating with everyone attending. Discover why Microsoft Azure is a trusted and proven cloud platform and explore the benefits of Azure-optimized solutions co-developed by Microsoft and SUSE for your business-critical Linux workloads. Here's what you can look forward to:

Engaging Sessions: Microsoft experts will be delivering both in-person and virtual in-depth sessions on a variety of topics, including SAP, SQL Server, Kubernetes, AI, and more.

- Keynote session | Wednesday, March 12th @ 9:00 AM
- Technical breakout session | Tuesday, March 11th @ 1:30 PM EDT: Unlocking Synergies: SUSE & Microsoft High Availability Innovations for SAP solutions
- Technical breakout session | Thursday, March 13th @ 11:00 AM EDT: Harnessing SQL Server 2025 and SUSE Rancher for a Unified Data Platform Across Containers, Physical, and Virtual Machines with DH2i's DxOperator
- Virtual session: How to Build a Secure and Resilient Production Environment for Your AI Applications
- Virtual session: Streamlining SAP Workloads on Azure using openQA

Meet the Experts: Take advantage of the opportunity to interact with Azure experts at our booth. Discuss your unique challenges and explore the best practices for maximizing the potential of your SUSE workloads in Azure.

Interactive Demos: Visit our booth (#9-12) to see live demos of the latest technologies and solutions for running business-critical SUSE workloads on Azure.

Xbox Raffle and Swag: Visit our booth or attend one of our in-person sessions to enter our raffle for a chance to win an Xbox Series. Plus, don't miss out on the opportunity to grab some great swag at the booth!

Also, there's still time to take advantage of the Linux promotional offer! With the offer, you can save up to 15% in addition to the existing one-year Azure Reserved VM Instances discount for select Linux VMs for a limited period.
Read the blog to learn more. We look forward to seeing you at SUSECON!

Learn more

- Open Source at Microsoft
- Linux on Azure
- SUSE on Azure