Ubuntu Pro FIPS 22.04 LTS on Azure: Secure, compliant, and optimized for regulated industries
Organizations across government (including local and federal agencies and their contractors), finance, healthcare, and other regulated industries running workloads on Microsoft Azure now have a streamlined path to meet rigorous FIPS 140-3 compliance requirements. Canonical is pleased to announce the availability of Ubuntu Pro FIPS 22.04 LTS on the Azure Marketplace, featuring newly certified cryptographic modules. This offering extends the stability and comprehensive security features of Ubuntu Pro, tailored for state agencies, federal contractors, and industries requiring a FIPS-validated foundation on Azure. It provides the enterprise-grade Ubuntu experience, optimized for performance on Azure in collaboration with Microsoft, and enhanced with critical compliance capabilities.

For instance, if you are building a Software as a Service (SaaS) application on Azure that requires FedRAMP authorization, using Ubuntu Pro FIPS 22.04 LTS can help you meet specific controls like SC-13 (Cryptographic Protection), since FIPS 140-3 validated modules are a foundational requirement. This significantly streamlines your path to achieving FedRAMP compliance.

What is FIPS 140-3 and why does it matter?

FIPS 140-3 is the latest iteration of the benchmark U.S. government standard for validating cryptographic module implementations, superseding FIPS 140-2. Managed by NIST, it is essential for federal agencies and contractors and is a recognized best practice in many regulated industries like finance and healthcare. Using FIPS-validated components helps ensure cryptography is implemented correctly, protecting sensitive data in transit and at rest. Ubuntu Pro FIPS 22.04 LTS ships with FIPS 140-3 certified versions of the Linux kernel and key cryptographic libraries (such as OpenSSL, Libgcrypt, and GnuTLS) pre-enabled. These are drop-in replacements for the standard packages, greatly simplifying deployment for compliance needs.

The importance of security updates (fips-updates)

A FIPS certificate applies to a specific module version at its validation time. Over time, new vulnerabilities (CVEs) are discovered in these certified modules, and running code with known vulnerabilities poses a significant security risk. This creates a tension between strict certification adherence and maintaining real-world security. Recognizing this, Canonical provides security fixes for the FIPS modules via the fips-updates stream, available through Ubuntu Pro, and ensures these security patches do not alter the validated cryptographic functions. This approach aligns with modern security thinking, including recent FedRAMP guidance, which acknowledges that unpatched vulnerabilities pose a greater risk than relying solely on the original certified binaries. Canonical strongly recommends that all users enable the fips-updates repository so their systems are both compliant and secure against the latest threats.

FIPS 140-3 vs 140-2

The new FIPS 140-3 standard adds support for modern protocols and algorithms such as TLS v1.3, and deprecates older algorithms like MD5. If you are upgrading systems and workloads to FIPS 140-3, you will need to test rigorously to ensure that applications continue to work correctly.

Compliance tooling included

Ubuntu Pro FIPS also includes access to Canonical's Ubuntu Security Guide (USG) tooling, which assists with automated hardening and compliance checks against benchmarks like CIS and DISA-STIG, a key requirement for FedRAMP deployments.
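As a rough illustration of how the USG tooling is typically used, here is a minimal sketch; it assumes the usg service is available on your Ubuntu Pro subscription, and profile names may vary by release:

```
# Enable the Ubuntu Security Guide entitlement and install the tooling
sudo pro enable usg
sudo apt update && sudo apt install -y usg

# Audit the system against a CIS profile (produces a report you can review)
sudo usg audit cis_level1_server

# Optionally apply the recommended remediations for the same profile
sudo usg fix cis_level1_server
```

Other profiles (for example, CIS level 2 or the DISA-STIG profile) follow the same audit/fix workflow.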
How to get Ubuntu Pro FIPS on Azure

You can use Ubuntu Pro FIPS 22.04 LTS on Azure in the following ways:

- Deploy the Marketplace image: Launch a new VM directly from the dedicated Ubuntu Pro FIPS 22.04 LTS listing on the Azure Marketplace. This image comes with the FIPS modules pre-enabled for immediate use.
- Enable on an existing Ubuntu Pro VM: If you already have an Ubuntu Pro 22.04 LTS VM running on Azure, you can enable the FIPS modules using the Ubuntu Pro Client (pro enable fips-updates); see the sketch at the end of this article.
- Upgrading standard Ubuntu: If you have a standard Ubuntu 22.04 LTS VM on Azure, you first need to attach Ubuntu Pro to it. This is a straightforward process detailed in the Azure documentation for getting Ubuntu Pro. Once Pro is attached, you can enable FIPS as described above.

Learn More

Ubuntu Pro FIPS provides a robust, maintained, and compliant foundation for your sensitive workloads on Azure.

- Watch Joel Sisko from Microsoft speak with Ubuntu experts in this webinar
- Explore all features of Ubuntu Pro on Azure
- Read details on the FIPS 140-3 certification for Ubuntu 22.04 LTS
- Official NIST certification link
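To make the "enable on an existing Ubuntu Pro VM" path concrete, here is a minimal sketch; it assumes an attached Ubuntu Pro 22.04 LTS VM and that a reboot into the FIPS kernel is acceptable:

```
# Confirm the machine is attached to Ubuntu Pro and see available services
sudo pro status

# Enable the FIPS modules together with their security updates
sudo pro enable fips-updates

# Reboot into the FIPS-enabled kernel, then verify FIPS mode is active
sudo reboot
cat /proc/sys/crypto/fips_enabled   # prints 1 when FIPS mode is enabled
```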
Optimizing Change-Driven Architectures with Drasi

By: Allen Jones, Principal Software Engineer, Azure Incubations, Microsoft

The need to detect and react to data changes is pervasive across modern software systems. Whether it's Kubernetes adapting to resource updates, building management systems responding to sensor shifts, or enterprise applications tracking business state, the ability to act precisely when data in databases or stateful systems changes is critical. Event-driven architectures, underpinned by technologies like Apache Kafka, Azure Event Hubs, and serverless functions, are the go-to solution. Examples include processing IoT telemetry to trigger alerts or handling user activity streams in real-time analytics. Yet, while these systems excel at generic event processing, they demand significant additional effort when the goal is to isolate and respond to specific data changes, a specialized subset of this paradigm we call change-driven architecture.

Drasi, the open-source Data Change Processing platform, is an exciting new option for building change-driven solutions easily. This article explores how Drasi's unique capabilities, centered around clearly defined change semantics, the continuous query pattern, and the reaction pattern, provide a more efficient and effective alternative to traditional event-driven implementations. By codifying consistent patterns and reducing complexity, Drasi empowers solution architects and developers to build responsive systems faster and with greater precision, which makes them less brittle and easier to update as solution requirements change.

The Limits of Generic Event Processing

Event-driven architectures are powerful options for handling high-volume data streams to support data transfer, aggregation, and analytics use cases, but they fall short when the task is to react to specific, often complex and contextual, data transitions. Developers must bridge the gap between generic events and actionable changes, often by:

- Parsing general-purpose event payloads to infer state changes, e.g., decoding Kubernetes pod events to detect a "Running" to "Failed" transition. Because the implementor of the event-generating system does not know all possible consumer use cases, they often produce very generic events (e.g., xxxAdded, xxxUpdated, xxxDeleted) or large, all-encompassing event documents to accommodate as many use cases as possible.
- Filtering irrelevant events to isolate meaningful updates, e.g., sifting through audit logs to identify compliance violations. Most event-generating systems don't produce just the events consumers need; they produce a firehose of events covering everything in the system, leaving it to the consumer to filter this down to what they need.
- Maintaining state to track context across events, e.g., aggregating inventory transactions to detect threshold breaches. Most events relate to a single thing occurring. In the case of updates, they sometimes provide before and after versions of the changed data, but they lack any ability to relate the change to other elements of the system. The consumer must create and maintain an accurate set of state.

Polling, a common alternative, wastes resources querying unchanged data and risks delaying, or even missing entirely, important changes due to the polling intervals, while custom change log processing requires intricate logic to map low-level updates to high-level conditions. For instance, detecting when a database replication lag exceeds 30 seconds might involve correlating logs from multiple systems.
These options may be tolerable occasionally but become costly to build, support, and maintain when repeated organization-wide many times for many scenarios.

Changes vs. Events

Drasi distinguishes changes from events. Events signal that something happened, but their producer-defined structure and ambiguous semantics require consumers to interpret intent; e.g., a "UserUpdated" event might not clarify what changed. In contrast, a change in Drasi has precise semantics: a set of additions, updates, and/or deletions to a continuous query result set occurring because of changes to the source data. The structure of the change is defined by the author of the query (i.e., the consumer), not the designer of the source system. This consumer-driven data model reduces dependency on producer assumptions and simplifies engineering and downstream processing.

Continuous Query Pattern

Central to Drasi is the continuous query pattern, implemented using the openCypher graph query language. A continuous query runs perpetually, fed by change logs from one or more data sources, maintaining the current set of query results and generating notifications when those results change. Unlike producer-defined event streams, this pattern empowers consumers to specify the relevant properties and their relationships using a familiar database-style query. Consider this example monitoring infrastructure health:

```
MATCH (s:Server)-[:HOSTS]->(a:Application)-[:DEPENDS_ON]->(db:Database)
WHERE s.status = 'active' AND db.replicationLag > 30
RETURN s.id AS serverId, s.location AS datacenter,
       a.name AS appName, db.replicationLag AS lagSeconds
```

At any time, this continuous query result contains the answer to the question "Which applications are experiencing a DB replication lag of more than 30 seconds?". The MATCH clause spans servers, applications, and databases, potentially from distinct systems like an inventory database and a monitoring tool. The WHERE clause specifies the condition (active servers with lagging databases), and the RETURN clause shapes the result set. Drasi ensures this result set stays current, notifying subscribers only when it changes. With strict temporal semantics, historical result sets can also be retrieved, aiding audits or debugging.

Reaction Pattern

The reaction pattern leverages the consistency of change notifications. Because a query defines inclusion semantics, state transitions are clear. For example, in a building management system:

```
MATCH (r:Room)-[:HAS_SENSOR]->(t:TemperatureSensor),
      (r:Room)-[:LOCATED_IN]->(b:Building)
WHERE b.id = 'HQ1' AND t.currentReading < 18 AND r.occupied = true
RETURN r.id AS roomId, t.currentReading AS temp, b.id AS buildingId
```

A room enters the result set when its temperature drops below 18°C while occupied, triggering heating. It exits when conditions change, stopping the action. The standard structure of change notifications, either room additions or removals, ensures straightforward reaction logic.
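To illustrate the reaction side, here is a schematic consumer in Python. This is not Drasi's actual SDK or notification schema; the field names (addedResults, deletedResults, roomId) are illustrative assumptions that show how result-set additions and removals could be mapped to actions:

```python
from typing import Any


def handle_comfort_query_change(change: dict[str, Any]) -> None:
    """React to additions and removals in the 'occupied and too cold' result set."""
    # Rows that newly satisfy the query: occupied rooms now below 18 degrees C.
    for row in change.get("addedResults", []):
        set_heating(room_id=row["roomId"], on=True)

    # Rows that no longer satisfy the query: warm enough, or no longer occupied.
    for row in change.get("deletedResults", []):
        set_heating(room_id=row["roomId"], on=False)


def set_heating(room_id: str, on: bool) -> None:
    # Placeholder for the building-management call that would actuate heating.
    print(f"Heating {'ON' if on else 'OFF'} for room {room_id}")
```

Because the query itself defines what "needs heating" means, the handler stays trivially simple: it only translates result-set membership changes into on/off actions.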
Real-World Applications

Drasi's change-driven approach shines across many domains. For resource optimization, this continuous query identifies underutilized VMs to trigger cost-saving reactions:

```
MATCH (vm:VirtualMachine)-[:DEPLOYED_IN]->(s:Subscription)
WHERE vm.status = 'running' AND vm.utilizationPercent < 10
  AND duration.between(vm.lastHighUtilization, datetime()) > duration('P7D')
RETURN vm.id AS vmId, vm.utilizationPercent AS usage, s.owner AS owner
```

For business exceptions, this continuous query flags orders delayed by stock shortages:

```
MATCH (o:Order)-[:HAS_ITEM]->(i:OrderItem)-[:REQUIRES]->(p:Product)
WHERE o.status = 'processing' AND p.stockLevel < i.quantity
  AND duration.between(o.placementDate, datetime()) > duration('P2D')
RETURN o.id AS orderId, p.id AS productId, p.stockLevel AS stock
```

In both cases, Drasi simplifies detection and reaction, avoiding the custom logic required in event-based systems.

Addressing Misconceptions and Scaling Efficiently

Because Drasi is fundamentally a new type of data processing service, its value is easy to underestimate, whether by underestimating the complexity of change detection or by assuming that event-driven tools suffice. While building a solution with events or polling might work as a one-off, repeating the same approach many times across an organization is inefficient and expensive. The resulting solutions are also complex and brittle, making them difficult to maintain and update as requirements change. Drasi's standardized approach, rooted in its change-driven architecture and theoretical underpinnings, streamlines change-driven systems, enabling teams to focus on defining queries and reactions rather than rebuilding infrastructure.

Conclusion

For change-driven architectures, Drasi is superior to generic event-based technologies because it targets the specific challenge of reacting to data changes. Its continuous query pattern, precise change semantics, and streamlined reaction pattern deliver cleaner, more efficient solutions than the traditional approaches. Available on GitHub under the Apache 2.0 License, Drasi offers solution architects and developers a robust, scalable foundation to transform complex change detection into declarative, cost-effective implementations, making it the optimal choice for responsive, data-centric systems.
eBPF-Powered Observability Beyond Azure: A Multi-Cloud Perspective with Retina

Kubernetes simplifies container orchestration but introduces observability challenges due to dynamic pod lifecycles and complex inter-service communication. eBPF technology addresses these issues by providing deep system insights and efficient monitoring. The open-source Retina project leverages eBPF for comprehensive, cloud-agnostic network observability across AKS, GKE, and EKS, enhancing troubleshooting and optimization through real-world demo scenarios.
Azure Image Testing for Linux (AITL)

As cloud and AI evolve at an unprecedented pace, the need to deliver high-quality, secure, and reliable Linux VM images has never been more essential. Azure Image Testing for Linux (AITL) is a self-service validation tool designed to help developers, ISVs, and Linux distribution partners ensure their images meet Azure's standards before deployment. With AITL, partners can streamline testing, reduce engineering overhead, and ensure compliance with Azure's best practices, all in a scalable and automated manner. Let's explore how AITL is redefining image validation and why it's proving to be a valuable asset for both developers and enterprises.

Before AITL, image validation was largely a manual and repetitive process: engineers were often required to perform frequent checks, which led to several key challenges:

- Time-Consuming: Manual validation processes delayed image releases.
- Inconsistent Validation: Each distro had different methods for testing, leading to varying quality levels.
- Limited Scalability: Resource constraints restricted the ability to validate a broad set of images.

AITL addresses these challenges by enabling partners to seamlessly integrate image validation into their existing pipelines through APIs. By executing tests within their own Azure subscriptions prior to publishing, partners can ensure that only fully validated, high-quality Linux images are promoted to production in the Azure environment.

How AITL Works?

AITL is powered by LISA, a comprehensive open-source test framework containing more than 400 test cases. AITL provides a simple yet powerful workflow to run LISA test cases:

- Registration: Partners register their images in AITL's validation framework.
- Automated Testing: AITL runs a suite of predefined validation tests using LISA.
- Detailed Reporting: Developers receive comprehensive results highlighting compliance, performance, and security areas. All test logs are available to access.
- Self-Service Fixes: Any detected issues can be addressed by the partner before submission, eliminating delays and back-and-forth communication.
- Final Sign-Off: Once tests pass, partners can confidently publish their images, knowing they meet Azure's quality standards.

Benefits of AITL

AITL is a transformative tool that delivers significant benefits across the Linux and cloud ecosystem:

- Self-Service Capability: Enables developers and ISVs to independently validate their images without requiring direct support from Microsoft.
- Scalable by Design: Supports concurrent testing of multiple images, driving greater operational efficiency.
- Consistent and Standardized Testing: Offers a unified validation framework to ensure quality and consistency across all endorsed Linux distributions.
- Proactive Issue Detection: Identifies potential issues early in the development cycle, helping prevent costly post-deployment fixes.
- Seamless Pipeline Integration: Easily integrates with existing CI/CD workflows to enable fully automated image validation.

Use Cases for AITL

AITL is designed to support a diverse set of users across the Linux ecosystem:

- Linux Distribution Partners: Organizations such as Canonical, Red Hat, and SUSE can validate their images prior to publishing on the Azure Marketplace, ensuring they meet Azure's quality and compliance standards.
- Independent Software Vendors (ISVs): Companies providing custom Linux images can verify that their Linux-based solutions are optimized for performance and reliability on Azure.
- Enterprise IT Teams: Businesses managing their own Linux images on Azure can use AITL to validate updates proactively, reducing risk and ensuring smooth production deployments.

Current Status and Future Roadmap

AITL is currently in private preview, with five major Linux distros and select ISVs actively integrating it into their validation workflows. Microsoft plans to expand AITL's capabilities by adding:

- Support for Private Test Cases: Allowing partners to run custom tests within AITL securely.
- Kernel CI Integration: Enhancing low-level kernel validation for more robust testing and results for the community.
- DPDK and Specialized Validation: Ensuring network and hardware performance for specialized SKUs (CVM, HPC) and workloads.

How to Get Started?

For developers and partners interested in AITL, follow the steps below to onboard.

Register for Private Preview

AITL is currently hidden behind a preview feature flag. You must first register the AITL preview feature with your subscription so that you can then access the AITL Resource Provider (RP). These are one-time steps done for each subscription. Run the "az feature register" command to register the feature:

```
az feature register --namespace Microsoft.AzureImageTestingForLinux --name JobandJobTemplateCrud
```

Sign up for the private preview by contacting Microsoft's Linux Systems Group to request access (Private Preview Sign Up). To confirm that your subscription is registered, run the above command again and check that properties.state = "Registered".

Register the Resource Provider

Once the feature registration has been approved, the AITL Resource Provider can be registered by running the "az provider register" command:

```
az provider register --namespace Microsoft.AzureImageTestingForLinux
```

If your subscription is not registered to Microsoft.Compute, Microsoft.Network, and Microsoft.Storage, please register them as well; these are also prerequisites for using the service. This can be done for each namespace with the same command, for example:

```
az provider register --namespace Microsoft.Compute
```

Setup Permissions

The AITL RP requires a permission set to create test resources, such as the VM and storage account. The permissions are provided through a custom role that is assigned to the AITL service principal named AzureImageTestingForLinux. We provide a script, setup_aitl.py, to make this simple: it creates the role and grants it to the service principal. Make sure the active subscription is the one you expect, then download the script and run it in a Python environment:

https://raw.githubusercontent.com/microsoft/lisa/main/microsoft/utils/setup_aitl.py

```
python setup_aitl.py -s "/subscriptions/xxxx"
```

Before running this script, check that you have permission to create role definitions in your subscription. Note that it may take up to 20 minutes for the permission to propagate.

Assign an AITL jobs access role

If you want to use a service principal or an app registration to call the AITL APIs, it should be assigned a role that grants access to AITL jobs.
This role should include the following permissions:

```
az role definition create --role-definition '{
  "Name": "AITL Jobs Access Role",
  "Description": "Delegation role is to read and write AITL jobs and job templates",
  "Actions": [
    "Microsoft.AzureImageTestingForLinux/jobTemplates/read",
    "Microsoft.AzureImageTestingForLinux/jobTemplates/write",
    "Microsoft.AzureImageTestingForLinux/jobTemplates/delete",
    "Microsoft.AzureImageTestingForLinux/jobs/read",
    "Microsoft.AzureImageTestingForLinux/jobs/write",
    "Microsoft.AzureImageTestingForLinux/jobs/delete",
    "Microsoft.AzureImageTestingForLinux/operations/read",
    "Microsoft.Resources/subscriptions/read",
    "Microsoft.Resources/subscriptions/operationresults/read",
    "Microsoft.Resources/subscriptions/resourcegroups/write",
    "Microsoft.Resources/subscriptions/resourcegroups/read",
    "Microsoft.Resources/subscriptions/resourcegroups/delete"
  ],
  "IsCustom": true,
  "AssignableScopes": [
    "/subscriptions/01d22e3d-ec1d-41a4-930a-f40cd90eaeb2"
  ]
}'
```

You can create this custom role with the above command in Cloud Shell, then assign the role to the service principal or the app. All set! Please go through the quick start below to try the AITL APIs.

Download the AITL wrapper

AITL is served by the Azure management API, so you can use any REST API tool to access it. We provide a Python wrapper for a better experience. The AITL wrapper is composed of a Python script and input files; it calls "az login" and "az rest" to provide an experience similar to the az CLI. The input files are used for creating test jobs.

Make sure the az CLI and Python 3 are installed. Clone the LISA code, or download only the files in the folder lisa/microsoft/utils/aitl at main · microsoft/lisa (github.com). Use the commands below to check the help text:

```
python -m aitl job --help
python -m aitl job create --help
```

Create a job

Job creation consists of two entities: a job template and an image. The quickest way to get started with the AITL service is to create a Job instance with your job template properties in the request body. Replace the placeholders with your real subscription ID, resource group, and job name to start a test job. This example runs one test case with a marketplace image using the tier0.json template; you can create a new JSON file to customize the test job. The name is optional: if it's not provided, the AITL wrapper will generate one.

```
python -m aitl job create -s {subscription_id} -r {resource_group} -n {job_name} -b '@./tier0.json'
```

The default request body is:

```
{
  "location": "westus3",
  "properties": {
    "jobTemplateInstance": {
      "selections": [
        {
          "casePriority": [ 0 ]
        }
      ]
    }
  }
}
```

This example runs the P0 test cases with the default image. You can add fields to the request, such as the image to test; all possible fields are described in the API Specification – Jobs section. The "location" property is required and represents the location where the test job is created; it doesn't affect the location of the VMs. AITL supports "westus", "westus2", or "westus3".

The image object in the request body JSON is where the image type to be used for testing is detailed, as well as the CPU architecture and VHD generation. If the image object is not included, LISA will pick a Linux marketplace image that meets the requirements for running the specified tests. When an image type is specified, additional information is required based on the type. Supported image types are VHD, Azure Marketplace image, and Shared Image Gallery:

- VHD requires the SAS URL.
- Marketplace image requires the publisher, offer, SKU, and version.
- Shared Image Gallery requires the gallery name, image definition, and version.

Here is an example of how to include the image object for a shared image gallery (<> denotes a placeholder):

```
{
  "location": "westus3",
  "properties": {
    <...other properties from the default request body here>,
    "image": {
      "type": "shared_gallery",
      "architecture": "x64",
      "vhdGeneration": 2,
      "gallery": "<Example: myAzureComputeGallery>",
      "definition": "<Example: myImage1>",
      "version": "<Example: 1.0.1>"
    }
  }
}
```

Check Job Status & Test Results

A job is an asynchronous operation that is updated throughout its lifecycle with its operation status and ongoing test status. A job has seven provisioning states, five non-terminal and two terminal. Non-terminal states represent ongoing operation stages and terminal states represent the status at completion. The job's current state is reflected in the `properties.provisioningState` property in the response body. The states are described below:

- Accepted (non-terminal): Initial ARM state indicating the resource creation is being initialized.
- Queued (non-terminal): The job has been queued by AITL to run LISA using the provided job template parameters.
- Scheduling (non-terminal): The job has been taken off the queue and AITL is preparing to launch LISA.
- Provisioning (non-terminal): LISA is creating your VM within your subscription using the default or provided image.
- Running (non-terminal): LISA is running the specified tests on your image and VM configuration.
- Succeeded (terminal): LISA completed the job run and has uploaded the final test results to the job. There may be failed test cases.
- Failed (terminal): There was a failure during the job's execution. Test results may be present and reflect the latest status for each listed test.

Test results are updated in near real time and can be seen in the `properties.results` property in the response body. Results start to be updated during the "Running" state, and the final set of result updates happens before the job reaches a terminal state ("Succeeded" or "Failed"). For a complete list of possible test result properties, see the API Specification – Test Results section.

Run the command below to get detailed test results:

```
python -m aitl job get -s {subscription_id} -r {resource_group} -n {job_name}
```

The query argument can format or filter results with a JMESPath query; refer to the help text for more information. For example, to list test results and error messages:

```
python -m aitl job get -s {subscription_id} -r {resource_group} -n {job_name} -o table -q 'properties.results[].{name:testName,status:status,message:message}'
```

To summarize test results:

```
python -m aitl job get -s {subscription_id} -r {resource_group} -n {job_name} -q 'properties.results[].status|{TOTAL:length(@),PASSED:length([?@==`"PASSED"`]),FAILED:length([?@==`"FAILED"`]),SKIPPED:length([?@==`"SKIPPED"`]),ATTEMPTED:length([?@==`"ATTEMPTED"`]),RUNNING:length([?@==`"RUNNING"`]),ASSIGNED:length([?@==`"ASSIGNED"`]),QUEUED:length([?@==`"QUEUED"`])}'
```

Access Job Logs

To access logs and read from Azure Storage, the AITL user must have the "Storage Blob Data Owner" role. Check, likely with your administrator, whether you have the permissions required to assign this role in your subscription. For information on this role and instructions on how to add this permission, see this Azure documentation.
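As an illustration, the role can be granted with a standard role assignment; the principal, resource group, and storage account names below are placeholders:

```
az role assignment create \
  --assignee "<user-or-service-principal-object-id>" \
  --role "Storage Blob Data Owner" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```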
To access job logs, send a GET request with the job name and use the logUrl in the response body to retrieve the logs, which are stored in an Azure storage container. For more details on interpreting logs, refer to the LISA documentation on troubleshooting test failures. To quickly view logs online (note that file size limitations may apply), select a .log blob file and click "edit" in the top toolbar of the blob menu. To download the log, click the download button in the toolbar.

Conclusion

AITL represents a forward-looking approach to Linux image validation, bringing automation, scalability, and consistency to the forefront. By shifting validation earlier in the development cycle, AITL helps reduce risk, accelerate time to market, and ensure a reliable, high-quality Linux experience on Azure. Whether you're a developer, a Linux distribution partner, or an enterprise managing Linux workloads on Azure, AITL offers a powerful way to modernize and streamline your validation workflows. To learn more about AITL or to request access, reach out to the Microsoft Linux Systems Group.
Automating the Linux Quality Assurance with LISA on Azure

Introduction

Building on the insights from our previous blog on how Microsoft ensures the quality of Linux images, this article elaborates on the open-source tools that are instrumental in securing exceptional performance, reliability, and overall excellence of virtual machines on Azure. While numerous testing tools are available for validating Linux kernels, guest OS images, and user-space packages across various cloud platforms, finding a comprehensive testing framework that addresses the entire platform stack remains a significant challenge. A robust framework is essential: one that seamlessly integrates with Azure's environment while providing coverage for major testing tools such as LTP and kselftest, and that covers critical areas like networking, storage, and specialized workloads, including Confidential VMs, HPC, and GPU scenarios. Such a unified testing framework is invaluable for developers, Linux distribution providers, and customers who build custom kernels and images.

This is where LISA (Linux Integration Services Automation) comes into play. LISA is an open-source tool specifically designed to automate and enhance the testing and validation processes for Linux kernels and guest OS images on Azure. In this blog, we cover the history of LISA, its key advantages, the wide range of test cases it supports, and why it is an indispensable resource for the open-source community. Moreover, LISA is available under the MIT License, making it free to use, modify, and contribute to.

History of LISA

LISA was initially developed as an internal tool by Microsoft to streamline the testing of Linux images and kernel validations on Azure. Recognizing the value it could bring to the broader community, Microsoft open-sourced LISA, inviting developers and organizations worldwide to leverage and enhance its capabilities. This move aligned with Microsoft's growing commitment to open-source collaboration, fostering innovation and shared growth within the industry. LISA serves as a robust solution to validate and certify that Linux images meet the stringent requirements of modern cloud environments. By integrating LISA into the development and deployment pipeline, teams can:

- Enhance Quality Assurance: Catch and resolve issues early in the development cycle.
- Reduce Time to Market: Accelerate deployment by automating repetitive testing tasks.
- Build Trust with Users: Deliver stable and secure applications, bolstering user confidence.
- Collaborate and Innovate: Leverage community-driven improvements and share insights.

Benefits of Using LISA

- Scalability: Designed to run large-scale test passes, from a single test case to 10,000 test cases in one command (see the example after this list).
- Multiple platform orchestration: LISA has a modular design that supports running the same test cases on various platforms, including Microsoft Azure, Windows Hyper-V, bare metal, and other cloud-based platforms.
- Customization: Users can customize test cases, workflows, and other components to fit specific needs, allowing for targeted testing strategies, for example building kernels on the fly or sending results to a custom database.
- Community Collaboration: Being open source under the MIT License, LISA encourages community contributions, fostering continuous improvement and shared expertise.
- Extensive Test Coverage: It offers a rich suite of test cases covering many aspects of compatibility between Azure and Linux VMs, from kernel, storage, and networking to middleware.
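As a rough illustration of that one-command workflow, a minimal Azure run using LISA's published runbook might look like the following; this is a sketch based on the LISA documentation, and the runbook path and variable names may change between releases:

```
# After installing LISA from the repository, launch the Azure runbook
lisa -r ./microsoft/runbook/azure.yml \
     -v subscription_id:<your-subscription-id> \
     -v admin_private_key_file:<path-to-ssh-private-key>
```

LISA then deploys the required VMs in the target subscription, runs the selected test cases, and cleans up the environment when the run completes.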
How it works

Infrastructure

LISA is designed to be componentized and to maximize compatibility with different distros, so test cases can focus only on test logic. Once test requirements (machines, CPU, memory, etc.) are defined, you just write the test logic without worrying about environment setup or about stopping services on different distributions.

- Orchestration: LISA uses platform APIs to create, modify, and delete VMs. For example, LISA uses the Azure API to create VMs, run test cases, and delete VMs. While a test case is running, LISA uses the Azure API to collect the serial log and can hot add/remove data disks. If other platforms implement the same serial log and data disk APIs, the test cases can run on those platforms seamlessly.
- Distro compatibility: LISA abstracts over 100 commands used in test cases, allowing authors to focus on validation logic rather than distro differences.
- Pre-processing workflow: Assists in building the kernel on the fly, installing the kernel from package repositories, or modifying all test environments.
- Test matrix: One run can test everything. For example, a single run can test different VM sizes on Azure, different images, or even different VM sizes and different images together. Anything that is parameterizable can be tested in a matrix.
- Customizable notifiers: Enable saving test results and files to any type of storage or database.

Agentless and low dependency

LISA operates test systems via SSH without requiring additional dependencies, ensuring compatibility with any system that supports SSH. Although some test cases require installing extra dependencies, LISA itself does not. This allows LISA to perform tests on systems with limited resources or even on different operating systems; for instance, it can work with Linux, FreeBSD, Windows, and ESXi.

Getting Started with LISA

Ready to dive in? Visit the LISA project at aka.ms/lisa to access the documentation.

- Install: Follow the installation guide provided in the repository to set up LISA in your testing environment.
- Run: Follow the instructions to run LISA on a local machine, on Azure, or against existing systems.
- Extend: Follow the documentation to extend LISA with test cases, data sources, tools, platforms, workflows, and more (see the sketch at the end of this article).
- Join the Community: Engage with other users and contributors through forums and discussions to share experiences and best practices.
- Contribute: Modify existing test cases or create new ones to suit your needs. Share your contributions with the community to enhance LISA's capabilities.

Conclusion

LISA offers open-source, collaborative testing solutions designed to operate across diverse environments and scenarios, effectively narrowing the gap between enterprise demands and community-led innovation. By leveraging LISA, customers can ensure their Linux deployments are reliable and optimized for performance. Its comprehensive testing capabilities, combined with the flexibility and support of an active community, make LISA an indispensable tool for anyone involved in Linux quality assurance and testing. Your feedback is invaluable, and we would greatly appreciate your insights.
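To give a feel for the "Extend" step above, here is a minimal custom test-suite sketch modeled on LISA's documented hello-world example; treat it as an approximation and check the current LISA documentation for the exact APIs:

```python
from lisa import Logger, Node, TestCaseMetadata, TestSuite, TestSuiteMetadata


@TestSuiteMetadata(
    area="demo",
    category="functional",
    description="A minimal suite that checks basic guest behavior over SSH.",
)
class HelloAzure(TestSuite):
    @TestCaseMetadata(
        description="Runs a trivial command on the test node and logs the output.",
        priority=1,
    )
    def verify_kernel_version(self, node: Node, log: Logger) -> None:
        # node.execute runs the command on the target system over SSH.
        result = node.execute("uname -r")
        log.info(f"kernel under test: {result.stdout}")
```

Once the suite is placed where your runbook discovers test cases, it can be selected by name or priority like any built-in LISA test.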
Enhancing Observability with Inspektor Gadget

Thorough observability is essential to a pain-free cloud experience. Azure provides many general-purpose observability tools, but you may want to create custom tooling. Inspektor Gadget is an open-source framework that makes customizable data collection easy. Microsoft recently contributed new features to Inspektor Gadget that further enhance its modular framework, making it even easier to meet your specific systems-inspection needs. Of course, we also made it easy for Azure Kubernetes Service (AKS) users to use.
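As a quick taste, here is a sketch of getting started on a Kubernetes cluster; gadget names and CLI syntax vary between Inspektor Gadget releases, so consult the project documentation for your version:

```
# Install the kubectl plugin (for example via krew) and deploy Inspektor Gadget
kubectl krew install gadget
kubectl gadget deploy

# Trace process executions in a namespace to see what the containers are running
kubectl gadget trace exec --namespace default
```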