Introducing Azure Local: cloud infrastructure for distributed locations enabled by Azure Arc
Today at Microsoft Ignite 2024 we're introducing Azure Local, cloud-connected infrastructure that can be deployed at your physical locations and under your operational control. With Azure Local, you can run the foundational Azure compute, networking, storage, and application services locally on hardware from your preferred vendor, providing flexibility to meet your requirements and budget.

Evolving Stretch Clustering for Azure Local
Stretched clusters in Azure Local, version 22H2 (formerly Azure Stack HCI, version 22H2) entail a specific technical implementation of storage replication that spans a cluster across two sites. Azure Local, version 23H2 has evolved from a cloud-connected operating system to an Arc-enabled solution with Arc Resource Bridge, Arc VMs, and AKS enabled by Azure Arc. Azure Local, version 23H2 expands the requirements for multi-site scenarios beyond the OS layer, and Stretched clusters do not encompass the entire solution stack. Based on customer feedback, the new Azure Local release will replace the Stretched clusters defined in version 22H2 with new high availability and disaster recovery options.

For Short Distance

Rack Aware Cluster is a new cluster option that spans two separate racks or rooms within the same Layer-2 network at a single location, such as a manufacturing plant or a campus. Each rack functions as a local availability zone across layers from the OS to Arc management, including Arc VMs and AKS enabled by Azure Arc, providing fault isolation and workload placement within the cluster. The solution is configured with one storage pool to avoid additional storage replication and improve storage efficiency, and it delivers the same Azure deployment and management experience as a standard cluster. This setup is suitable for edge locations and can scale up to 8 nodes, with 4 nodes in each rack. Rack Aware Cluster is currently in private preview and is slated for public preview and general availability in 2025.

For Long Distance

Azure Site Recovery can be used to replicate on-premises Azure Local virtual machines into Azure and protect business-critical workloads. This allows the Azure cloud to serve as a disaster recovery site, enabling critical VMs to be failed over to Azure in case of a local cluster disaster, and then failed back to the on-premises cluster when it becomes operational again.
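To make the fault-isolation idea behind Rack Aware Cluster concrete, here is a deliberately simplified sketch (plain Python, not Azure Local code; the node and rack names are hypothetical) in which each rack acts as a local availability zone and a workload's instances are spread across racks:

```python
# Conceptual sketch of rack-aware placement: each rack acts as a local
# availability zone, so instances of a workload are spread across racks.
# Illustrative only; this is not the Azure Local placement implementation.

def spread_across_racks(replicas, racks):
    """Assign replicas to racks round-robin so that losing one rack
    still leaves instances running in the surviving rack."""
    rack_names = sorted(racks)
    return {
        replica: rack_names[i % len(rack_names)]
        for i, replica in enumerate(replicas)
    }

# An 8-node layout: 4 nodes in each of two racks (the documented maximum).
racks = {
    "rack1": ["node1", "node2", "node3", "node4"],
    "rack2": ["node5", "node6", "node7", "node8"],
}

placement = spread_across_racks(["vm-a", "vm-b"], racks)
```

With two instances, the sketch places one per rack, which is the property that gives a rack-level failure a surviving copy.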
If you cannot fail over certain workloads to the cloud and require long-distance disaster recovery, such as between two different cities, you can leverage Hyper-V Replica to replicate Arc VMs to a secondary site. Those VMs run as Hyper-V VMs on the secondary site and become Arc VMs again once they fail back to the original cluster on the first site.

Additional Options beyond Azure Local

If the above solutions in Azure Local do not cover your needs, you can fully customize your solution with Windows Server 2025, which introduces several advanced hybrid cloud capabilities designed to enhance operational flexibility and connectivity across various environments. Additionally, it offers various replication technologies, such as Hyper-V Replica, Storage Replica, and external SAN replication, that enable the development of tailored datacenter disaster recovery solutions. Learn more from Windows Server 2025 now generally available, with advanced security, improved performance, and cloud agility - Microsoft Windows Server Blog.

What to do with existing Stretched clusters on version 22H2

Stretched clusters and Storage Replica are not supported in Azure Local, version 23H2 and beyond. However, version 22H2 stretched clusters can remain in a supported state by performing the first step shown in the following diagram: the operating system upgrade to the 23H2 OS. The second step, the solution upgrade to Azure Local, is not applicable to stretched clusters. This provides extra time to assess the most suitable future solution for your needs. Refer to About Azure Local upgrade to version 23H2 - Azure Local | Microsoft Learn for more information on the 23H2 upgrade, and to the blog Upgrade from Azure Stack HCI, version 22H2 to Azure Local | Microsoft Community Hub.

Conclusion

We are excited to be bringing Rack Aware Clusters and Azure Site Recovery to Azure Local.
These high availability and disaster recovery options allow customers to address various scenarios with a modern cloud experience and simplified management.

Announcing General Availability: Windows Server Management enabled by Azure Arc
Windows Server Management enabled by Azure Arc offers the following key benefits to customers whose Windows Server licenses have active Software Assurance or are active subscription licenses:

- Azure Update Manager
- Azure Change Tracking and Inventory
- Azure Machine Configuration
- Windows Admin Center in Azure for Arc
- Remote Support
- Network HUD
- Best Practices Assessment
- Azure Site Recovery (Configuration Only)

Upon attestation, customers receive access to these capabilities at no additional cost beyond associated networking, compute, storage, and log ingestion charges. The same capabilities are also available for customers enrolled in Windows Server 2025 Pay-as-you-Go licensing enabled by Azure Arc. Learn more at Windows Server Management enabled by Azure Arc - Azure Arc | Microsoft Learn or watch Video: Free Azure Services for Non-Azure Windows Servers Covered by SA Powered by Azure Arc! To get started, connect your servers to Azure Arc, attest for these benefits, and deploy management services as you modernize to Azure's AI-enabled set of server management capabilities across your hybrid, multi-cloud, and edge infrastructure!

Cloud infrastructure for disconnected environments enabled by Azure Arc
Organizations in highly regulated industries such as government, defense, financial services, healthcare, and energy often operate under strict security and compliance requirements and across distributed locations, some with limited or no connectivity to public cloud. Leveraging advanced capabilities, including AI, in the face of this complexity can be time-consuming and resource intensive. Azure Local, enabled by Azure Arc, offers simplicity. Azure Local's distributed infrastructure extends cloud services and security across distributed locations, including customer-owned on-premises environments. Through Azure Arc, customers benefit from a single management experience and full operational control that is consistent from cloud to edge. Available in preview to pre-qualified customers, Azure Local with disconnected operations extends these capabilities even further, enabling organizations to deploy, manage, and operate cloud-native infrastructure and services in completely disconnected or air-gapped networks.

What is disconnected operations?

Disconnected operations is an add-on capability of Azure Local, delivered as a virtual appliance, that enables the deployment and lifecycle management of your Azure Local infrastructure and Arc-enabled services without any dependency on a continuous cloud connection.

Key Benefits

- Consistent Azure Experience: You can operate your disconnected environment using the same tools you already know - Azure Portal, Azure CLI, and ARM templates - extended through a local control plane.
- Built-in Azure Services: Through Azure Arc, you can deploy, update, and manage Azure services such as Azure Local VMs, Azure Kubernetes Service (AKS), etc.
- Data Residency and Control: You can govern and keep data within your organization's physical and legal jurisdiction to meet data residency, operational autonomy, and technological isolation requirements.
Key Use Cases

Azure Local with disconnected operations unlocks a range of impactful use cases for regulated industries:

- Government and Defense: Running sensitive government workloads and classified data more securely in air-gapped and tactical environments with familiar Azure management and operations.
- Manufacturing: Deploying and managing mission-critical applications like industrial process automation and control systems for real-time optimizations in more highly secure environments with zero connectivity.
- Financial Services: Enhanced protection of sensitive financial data with real-time data analytics and decision making, while ensuring compliance with strict regulations in isolated networks.
- Healthcare: Running critical workloads that need real-time processing, storing and managing sensitive patient data with increased levels of privacy and security in disconnected environments.
- Energy: Operating critical infrastructure in isolated environments, such as electrical production and distribution facilities, oil rigs, or remote pipelines.

Here is an example of how disconnected operations for Azure Local can support mission-critical emergency response and recovery efforts by providing essential services when critical infrastructure and networks are unavailable.

Core Features and Capabilities

Simplified Deployment and Management

Download and deploy the disconnected operations virtual appliance on Azure Local Premier Solutions through a streamlined user interface. Create and manage Azure Local instances using the local control plane, with the same tooling experience as Azure.

Offline Updates

The monthly update package includes all the essential components: the appliance, Azure Local software, AKS, and Arc-enabled service agents. You can update and manage the entire Azure Local instance using the local control plane without an internet connection.
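As a conceptual illustration of how such an offline bundle could be checked for completeness before being applied (the component names mirror the list above, but the data structure and function are hypothetical, not the actual update format):

```python
# Illustrative completeness check for an offline update bundle. The four
# component names come from the post; the bundle structure is hypothetical.

REQUIRED_COMPONENTS = {
    "appliance",            # the disconnected operations virtual appliance
    "azure_local_software", # Azure Local platform software
    "aks",                  # Azure Kubernetes Service components
    "arc_agents",           # Arc-enabled service agents
}

def missing_components(bundle):
    """Return the set of required components absent from the bundle;
    an empty set means it can be applied without internet access."""
    return REQUIRED_COMPONENTS - set(bundle)

complete = {"appliance": "2505", "azure_local_software": "2505",
            "aks": "2505", "arc_agents": "2505"}
partial = {"appliance": "2505", "aks": "2505"}
```

A complete bundle reports nothing missing, while a partial one names exactly what must still be downloaded before going offline.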
Monitoring Integration

You can monitor your Azure Local instances and VMs using external monitoring solutions like SCOM by installing custom management packs, and monitor AKS clusters through third-party open-source solutions like Prometheus and Grafana.

Run Mission-Critical Workloads - Anytime, Anywhere

Azure Local VMs

You can run VMs with flexible sizing, support for custom VM images, and high availability through storage replication and automatic failover - all managed through the local Azure interface.

AI & Containers with AKS

You can use disconnected AI containers with Azure Kubernetes Service (AKS) on Azure Local to deploy and manage AI applications in disconnected scenarios where data residency and operational autonomy are required. AKS enables the deployment and management of containerized applications such as AI agents and models, deep learning frameworks, and related tools, which can be leveraged for inferencing, fine-tuning, and training in isolated networks. AKS also automates resource scaling, allowing for the dynamic addition and removal of container instances to more efficiently utilize hardware resources, including GPUs, which are critical for AI workloads. This provides a consistent Azure experience for managing Kubernetes clusters and AI workloads with the same tooling and processes as in connected environments.

Get Started: Resources and Next Steps

Microsoft is excited to announce the upcoming preview of Disconnected Operations for Azure Local in Q3 CY25 for both Commercial and Government Cloud customers. To learn more, please visit Disconnected operations for Azure Local overview (preview) - Azure Local. Ready to participate? Get Qualified! or contact your Microsoft account team. Please also check out this session at Microsoft Build, https://build.microsoft.com/en-US/sessions/BRK195, by Mark Russinovich, one of the most influential minds in cloud computing.
His insights into the latest Azure innovations and the future of cloud architecture and computing make it a must-watch session!

Extending Azure's AI Platform with an adaptive cloud approach
Authored by Derek Bogardus and Sanjana Mohan, Azure Edge AI Product Management

Ignite 2024 is here, and nothing is more top of mind for customers than the potential to transform their businesses with AI wherever they operate. Today, we are excited to announce the preview of two new Arc-enabled services that extend the power of Azure's AI platform to on-premises and edge environments. Sign up to join the previews here!

An adaptive cloud approach to AI

The goal of Azure's adaptive cloud approach is to extend just enough Azure to customers' distributed environments. For many of these customers, valuable data is generated and stored locally, outside of the hyperscale cloud, whether due to regulation, latency, business continuity, or simply the large volume of data being generated in real time. AI inferencing can only occur where the data exists. So, while the cloud has become the environment of choice for training models, we see a tremendous need to extend inferencing services beyond the cloud to enable complete cloud-to-edge AI scenarios.

Search on-premises data with generative AI

Over the past couple of years, generative AI has come to the forefront of AI innovation. Language models give any user the ability to interact with large, complex data sets in natural language. Public tools like ChatGPT are great for queries about general knowledge, but they can't answer questions about private enterprise data on which they were not trained. Retrieval Augmented Generation, or "RAG", helps address this need by augmenting language models with private data. Cloud services like Azure AI Search and Azure AI Foundry simplify how customers can use RAG to ground language models in their enterprise data. Today, we are announcing the preview of a new service that brings generative AI and RAG to your data at the edge.
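Conceptually, RAG retrieves relevant private documents and grounds the model's prompt in them. A deliberately simplified sketch (keyword overlap standing in for the embedding-based vector search a real pipeline would use; all names and documents here are hypothetical):

```python
# Minimal RAG sketch: retrieve the most relevant local documents, then
# build a prompt grounded in them. A production pipeline would use
# embeddings and a vector index instead of word overlap; this is only
# meant to show the shape of the flow.

def retrieve(query, documents, top_k=1):
    """Rank documents by how many query words they share."""
    words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, documents):
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "The assembly line in plant 7 runs three shifts per day.",
    "Our travel policy allows economy class for flights under six hours.",
]
prompt = build_prompt("How many shifts does plant 7 run?", docs)
```

The point is that the model only ever sees the retrieved local context plus the question, which is what keeps private data grounding the answer without being part of the model's training set.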
Within minutes, customers can deploy an Arc extension that contains everything needed to start asking questions about their on-premises data, including:

- Popular small and large language models running locally with support for both CPU and GPU hardware
- A turnkey data ingestion and RAG pipeline that keeps all data completely local, with RBAC controls to prevent unauthorized access
- An out-of-the-box prompt engineering and evaluation tool to find the best settings for a particular dataset
- Azure-consistent APIs to integrate into business applications, as well as a pre-packaged UI to get started quickly

This service is available now in gated private preview for customers running Azure Local infrastructure, and we plan to make it available on other Arc-enabled infrastructure platforms in the near future. Sign up here!

Deploy curated open-source AI models via Azure Arc

Another great thing about Azure's AI platform is that it provides a catalog of curated AI models that are ready to deploy and provide consistent inferencing endpoints that can be integrated directly into customer applications. This not only makes deployment easy, but customers can also be confident that the models are secure and validated. These same needs exist on the edge as well, which is why we are now making a set of curated models deployable directly from the Azure Portal. These models have been selected, packaged, and tested specifically for edge deployments, and are currently available on Azure Local infrastructure:

- Phi-3.5 Mini (3.8 billion parameter language model)
- Mistral 7B (7.3 billion parameter language model)
- MMDetection YOLO (object detection)
- OpenAI Whisper Large (speech to text)
- Google T5 Base (translation)

Models can be deployed from a familiar Azure Portal wizard to an Arc AKS cluster running on premises. All available models today can be run on just a CPU. Phi-3.5 and Mistral 7B also have GPU versions available for better performance.
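As a toy illustration of that CPU/GPU packaging choice (the catalog set, identifiers, and function below are hypothetical, not the portal's actual logic): every curated model has a CPU build, and the two language models additionally ship GPU builds, so a deployment tool would simply prefer the GPU variant when the cluster has one.

```python
# Illustrative variant selection: all curated models run on CPU; the two
# language models also have GPU builds. Hypothetical names and logic.

GPU_VARIANTS = {"phi-3.5-mini", "mistral-7b"}  # GPU builds available

def pick_variant(model, gpu_available):
    """Prefer the GPU build when the cluster has a GPU and one exists."""
    if gpu_available and model in GPU_VARIANTS:
        return f"{model}-gpu"
    return f"{model}-cpu"
```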
Once complete, the deployment can be managed directly in Azure ML Studio, and an inferencing endpoint is available on your local network.

Wrap up

Sign up now to join either of the previews at the link below, or stop by and visit us in person at the Azure Arc and Azure Local Expert Meet Up station in the Azure Infrastructure neighborhood at Ignite. We're excited to get these new capabilities into our customers' hands and to hear from you how it's going. Sign up to join the previews here.

Announcing the preview of Software Defined Networking (SDN) on Azure Local
Big news for Azure Local customers! Starting in Azure Local version 2506, we're excited to announce the Public Preview of Software Defined Networking (SDN) on Azure Local using the Azure Arc resource bridge. This release introduces cloud-native networking capabilities for access control at the network layer, utilizing Network Security Groups (NSGs) on Azure Local. Key highlights in this release are:

1. Centralized network management: Manage logical networks, network interfaces, and NSGs through the Azure control plane - whether your preference is the Azure Portal, Azure Command-Line Interface (CLI), or Azure Resource Manager templates.
2. Fine-grained traffic control: Safeguard your edge workloads with policy-driven access controls by applying inbound and outbound allow/deny rules on NSGs, just as you would in Azure.
3. Seamless hybrid consistency: Reduce operational friction and accelerate your IT staff's ramp-up on advanced networking skills by using the same familiar tools and constructs across both Azure public cloud and Azure Local.

Software Defined Networking (SDN) forms the backbone of delivering Azure-style networking on-premises. Whether you're securing enterprise applications or extending cloud-scale agility to your on-premises infrastructure, Azure Local, combined with SDN enabled by Azure Arc, offers a unified and scalable solution. Try this feature today and let us know how it transforms your networking operations!

What's New in this Preview?

Here's what you can do today with SDN enabled by Azure Arc:

✅ Run SDN Network Controller as a Failover Cluster service - no VMs required!
✅ Deploy logical networks - use VLAN-backed networks in your datacenter that integrate with SDN enabled by Azure Arc.
✅ Attach VM network interfaces - assign static or DHCP IPs to VMs from logical networks.
✅ Apply NSGs - create, attach, and manage NSGs directly from Azure on your logical networks (VLANs in your datacenter) and/or on the VM network interface.
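To make the NSG model concrete, here is a minimal illustrative evaluator (plain Python with a hypothetical rule structure, not an Azure API or the SDN implementation) showing how priority-ordered allow/deny rules decide a connection's fate, with the first match winning and an implicit deny when nothing matches:

```python
# Illustrative NSG-style evaluation over a connection 5-tuple. Rules are
# checked in priority order (lower number wins); the first matching rule
# decides, and traffic is denied when no rule matches. Hypothetical
# structure, not the actual SDN code.

FIELDS = ("src_ip", "src_port", "dst_ip", "dst_port", "protocol")

def evaluate(rules, flow):
    """Return 'allow' or 'deny' for a flow dict keyed by FIELDS."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if all(rule[f] in ("*", flow[f]) for f in FIELDS):
            return rule["action"]
    return "deny"  # implicit default deny

rules = [
    {"priority": 100, "src_ip": "*", "src_port": "*", "dst_ip": "10.0.0.4",
     "dst_port": 443, "protocol": "tcp", "action": "allow"},
    {"priority": 4096, "src_ip": "*", "src_port": "*", "dst_ip": "*",
     "dst_port": "*", "protocol": "*", "action": "deny"},
]
```

In the feature itself, rules like these are authored through the Azure control plane rather than in code; the sketch only shows the matching semantics.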
This enables a generic rule set for VLANs, with more granular rules for individual Azure Local VM network interfaces using complete 5-tuple control: source and destination IP, port, and protocol.

✅ Use Default Network Policies - apply baseline security policies during VM creation for your primary NIC. Select well-known inbound ports such as HTTP (while we block everything else for you), while still allowing outbound traffic. Or select an existing NSG you already have!

SDN enabled by Azure Arc (Preview) vs. SDN managed by on-premises tools

Choosing Your Path: Some SDN features like virtual networks (vNETs), Software Load Balancers (SLBs), and Gateways are not yet supported in SDN enabled by Azure Arc (Preview). But good news: you've still got options. If your workloads need those features today, you can leverage SDN managed by on-premises tools:

- SDN Express (PowerShell)
- Windows Admin Center (WAC)

SDN managed by on-premises tools continues to provide full-stack SDN capabilities, including SLBs, Gateways, and VNET peering, while we actively work on bringing this additional value to the complete SDN enabled by Azure Arc feature set. You must choose one of the two modes of SDN management; you cannot run in a hybrid management mode that mixes the two. Please read this important considerations section before getting started!

Thank You to Our Community

This milestone was only possible because of your input, your use cases, and your edge innovation. We're beyond excited to see what you build next with SDN enabled by Azure Arc. To try it out, head to the Azure Local documentation. Let's keep pushing the edge forward. Together!

Ignite 2024: AKS enabled by Azure Arc - New Capabilities and Expanded Workload Support
Microsoft Ignite 2024 has been a showcase of innovation across the Azure ecosystem, bringing forward major advancements in AI, cloud-native applications, and hybrid cloud solutions. This year's event featured key updates, including enhancements to AKS enabled by Azure Arc, which introduced new capabilities and expanded workload support. These updates reinforce the value and versatility that AKS enabled by Azure Arc brings to organizations looking to scale and optimize their operations. With these advancements, AKS Arc continues to support seamless management, increased scalability, and enhanced workload performance across diverse infrastructures.

AKS Enabled by Azure Arc

AKS enabled by Azure Arc brings the power of Azure's managed Kubernetes service to any environment, providing consistent management and security across on-premises, edge, and multi-cloud deployments. It encompasses:

- AKS on Azure Local: A full-featured Kubernetes platform integrated with Azure Local for comprehensive container orchestration in hybrid setups. Notably, AKS on Azure Local has earned recognition as a leader in the 2024 Gartner Magic Quadrant for Distributed Hybrid Infrastructure, underscoring Microsoft's dedication to delivering comprehensive, enterprise-ready solutions for hybrid cloud deployments.
- AKS Edge Essentials: A lightweight version designed for edge computing, ensuring operational consistency on constrained hardware.
- AKS on Azure Local Disconnected Operations: Now available on Azure Local Disconnected Operations, this latest addition to the AKS enabled by Azure Arc portfolio supports fully disconnected scenarios. It allows AKS enabled by Azure Arc to operate in air-gapped, isolated environments without the need for continuous Azure connectivity, which is crucial for organizations that require secure, self-sufficient Kubernetes operations in highly controlled or remote locations.
With this support, businesses can maintain robust Kubernetes functionality while meeting stringent compliance and security standards.

Key Features and Expanded Workload Support

This year's Ignite announcements unveiled a series of public preview and GA features that enhance the capabilities of AKS enabled by Azure Arc. These advancements reflect our commitment to delivering robust, scalable solutions that meet the evolving needs of our customers. Below are the key highlights that showcase the enhanced capabilities of AKS enabled by Azure Arc:

Edge Workload

- Azure IoT Operations - enabled by Azure Arc: Available on AKS Edge Essentials (AKS-EE) and AKS on Azure Local with public preview support. Azure IoT Operations aids in the management and scaling of IoT solutions. It provides robust support for deploying and overseeing IoT applications within Kubernetes environments, enhancing operational control and scalability. Organizations can leverage this tool to maintain seamless management of distributed IoT workloads, ensuring consistent performance and simplified scaling across diverse deployment scenarios.
- Azure Container Storage - enabled by Azure Arc: Available on both AKS Edge Essentials (AKS-EE) and AKS on Azure Local, this support enables seamless integration for persistent storage needs in Kubernetes environments. It provides scalable, reliable, and high-performance storage solutions that enhance data management and support stateful applications running in hybrid and edge deployments. This addition ensures that organizations can efficiently manage their containerized workloads with robust storage capabilities.
- Azure Key Vault Secret Store extension for Kubernetes: Now available as public preview on AKS Edge Essentials and AKS on Azure Local, this extension automatically synchronizes secrets from an Azure Key Vault to an AKS enabled by Azure Arc cluster for offline access. It offers advanced security and compliance capabilities tailored for robust governance and regulatory adherence, ensuring that organizations can maintain compliance with industry standards and best practices while safeguarding their infrastructure.
- Azure Monitor Pipeline: The Azure Monitor pipeline is a data ingestion solution designed to provide consistent, centralized data collection for Azure Monitor. Once deployed for Azure IoT Operations on an AKS cluster enabled by Azure Arc, it enables at-scale telemetry data collection and routing at the edge. The pipeline can cache data locally, syncing with the cloud when connectivity is restored, and supports segmented networks where direct data transfer to the cloud isn't possible. Built on the OpenTelemetry Collector, the pipeline's configuration includes data flows, cache properties, and destination rules defined in the data collection rule (DCR) to ensure seamless data processing and transmission to the cloud.
- Arc Workload Identity Federation: Now available as public preview on AKS Edge Essentials and AKS on Azure Local, providing secure federated identity management to enhance security for customer workloads.
- Arc Gateway: Now available as public preview for AKS Edge Essentials and AKS on Azure Local. Arc Gateway support on AKS enabled by Azure Arc enhances secure connectivity across hybrid environments, reducing the required firewall rules and improving security for customer deployments.
- Azure AI Video Indexer - enabled by Azure Arc: Supported on AKS Edge Essentials and AKS on Azure Local. Arc-enabled Video Indexer enables comprehensive AI-powered video analysis, including transcription, facial recognition, and object detection. It allows organizations to deploy sophisticated video processing solutions within hybrid and edge environments, ensuring efficient local data processing with improved security and minimal latency.
- MetalLB - Azure Arc Extension: Now supported on AKS Edge Essentials and AKS on Azure Local, MetalLB ensures efficient load balancing capabilities. This addition enhances network resilience and optimizes traffic distribution within Kubernetes environments.

Comprehensive AI and Machine Learning Capabilities

- GPUs for AI Workloads: AKS enabled by Azure Arc now supports a range of GPUs tailored for AI and machine learning workloads, with GPU Partitioning (GPU-P) and GPU Passthrough virtualization support. These options enable robust performance for resource-intensive AI and machine learning workloads, allowing for efficient use of GPU resources to run complex models and data processing tasks.
- Arc-enabled Azure Machine Learning: Supported on AKS on Azure Local, bringing AML capabilities for running sophisticated AI models. Businesses can leverage Azure's powerful machine learning tools seamlessly across different environments, enabling them to develop, deploy, and manage machine learning models effectively on-premises and at the edge.
- Arc-enabled Video Indexer: Extends Azure's advanced video analytics capabilities to AKS enabled by Azure Arc. Organizations can now process and analyze video content in real time, harnessing Azure's robust video AI tools to enhance video-based insights and operations. This support provides businesses with greater flexibility to conduct video analysis seamlessly in remote or hybrid environments.
- Kubernetes AI Toolchain Orchestrator (Kaito + LoRA + QLoRA): Fully validated and supported for fine-tuning and optimizing AI models, Kaito, LoRA, and QLoRA are designed for edge deployments such as AKS on Azure Local. This combination enhances the ability to run and refine AI applications effectively in edge environments, ensuring performance and flexibility.
- Flyte Integration: Now supported on AKS on Azure Local, Flyte offers a scalable orchestration platform for managing machine learning workflows. This capability enables teams to build, execute, and manage complex AI pipelines efficiently, enhancing productivity and simplifying the workflow management process.
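To illustrate the difference between the two GPU virtualization options mentioned above, here is a conceptual accounting sketch (hypothetical model, not the Hyper-V scheduler): passthrough dedicates a whole GPU to one VM, while GPU partitioning (GPU-P) splits one device into fixed fractions that several VMs can share.

```python
# Conceptual GPU assignment sketch: passthrough gives one VM the whole
# device, while GPU-P shares fixed partitions among several VMs.
# Illustrative only; not the Hyper-V GPU virtualization implementation.

class Gpu:
    """A GPU divided into a fixed number of GPU-P partitions."""

    def __init__(self, partitions):
        self.total = partitions
        self.assigned = {}  # vm name -> partitions held

    def attach_partition(self, vm):
        """GPU-P: hand one partition to a VM, if any remain free."""
        if sum(self.assigned.values()) >= self.total:
            raise RuntimeError("no free GPU partitions")
        self.assigned[vm] = self.assigned.get(vm, 0) + 1

    def attach_passthrough(self, vm):
        """Passthrough: the VM gets the entire device, so it must be idle."""
        if self.assigned:
            raise RuntimeError("passthrough needs the whole, unused device")
        self.assigned[vm] = self.total
```

The trade-off the sketch captures is the one that matters for AI workloads: GPU-P raises utilization by packing several inference VMs onto one card, while passthrough maximizes a single VM's performance.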
Enhanced Infrastructure and Operations Management

- Infrastructure as Code (IaC) with Terraform: Now supported on AKS on Azure Local for both connected and air-gapped scenarios, providing streamlined deployment capabilities through code. This support enables teams to automate and manage their Kubernetes infrastructure at scale more efficiently with Terraform.
- Anti-affinity, Pod CIDR, Taints/Labels: Available on AKS on Azure Local, these features provide enhanced infrastructure capabilities by allowing refined workload placement and advanced network configuration. Anti-affinity rules help distribute pods across different nodes to avoid single points of failure, while Pod CIDR simplifies network management by allocating IP ranges to pods. Taints and labels offer greater control over node selection, ensuring that specific workloads run on designated nodes and enhancing the overall efficiency and reliability of Kubernetes operations.
- Optimized Windows Node Pool Management: AKS enabled by Azure Arc now includes the capability to enable and disable Windows node pools for clusters. This enhancement helps prevent unnecessary binary downloads, benefiting customers with low-speed or limited internet connections. It optimizes resource usage, reduces bandwidth consumption, and enhances overall deployment efficiency, making it ideal for environments with network constraints.

Kubernetes Development

- AKS-WSL: With AKS-WSL, developers can set up a local environment that mimics the experience of working with AKS. This makes it easier for developers to write, debug, and test Kubernetes applications locally before deploying them to a full AKS cluster.
- AKS-WSL VSCode Extension: The Visual Studio Code extension for AKS-WSL allows developers to write, debug, and deploy Kubernetes applications locally, streamlining development workflows.
This setup improves productivity by providing efficient tools and capabilities, making it easier to develop, test, and refine Kubernetes workloads directly from a local machine.

- Arc Jumpstart: Supported on AKS Edge Essentials and AKS on Azure Local, Arc Jumpstart simplifies deployment initiation, providing developers with a streamlined way to set up and start working with Kubernetes environments quickly. It makes it easier for teams to evaluate and experiment with AKS enabled by Azure Arc, offering pre-configured scenarios and comprehensive guidance. By reducing complexity and setup time, Arc Jumpstart enhances the developer experience, facilitating faster prototyping and smoother onboarding for new projects in hybrid and edge settings.

Conclusion

Microsoft Ignite 2024 has underscored the continued evolution of AKS enabled by Azure Arc, bringing more comprehensive, scalable, and secure solutions to diverse environments. These advancements support organizations in running cloud-native applications anywhere, enhancing operational efficiency and innovation. We welcome your feedback (aksarcfeedback@microsoft.com) and look forward to ongoing collaboration as we continue to evolve AKS enabled by Azure Arc.

Upgrade Azure Local operating system to new version
Today, we're sharing more details about the end of support for Azure Local with OS version 25398.xxxx (23H2) on October 31, 2025. After this date, monthly security and quality updates stop, and Microsoft Support remains available only for upgrade assistance. Your billing continues, and your systems keep working, including registration and repair. There are several options to upgrade to Azure Local with OS version 26100.xxxx (24H2), depending on which scenario applies to you.

Scenario #1: You are on the Azure Local solution, with OS version 25398.xxxx

If you're already running the Azure Local solution with OS version 25398.xxxx, no action is required. You will automatically receive the upgrade to OS version 26100.xxxx via a solution update to 2509. See Azure Local, version 23H2 and 24H2 release information - Azure Local | Microsoft Learn for the latest version of the diagram. If you are interested in upgrading to OS version 26100.xxxx before the 2509 release, there will be an opt-in process available in the future with production support.

Scenario #2: You are on Azure Stack HCI and haven't performed the solution upgrade yet

Scenario #2a: You are still on Azure Stack HCI, version 22H2

With the 2505 release, a direct upgrade path from the version 22H2 OS (20349.xxxx) to the 24H2 OS (26100.xxxx) has been made available. To ensure a validated, consistent experience, we have reduced the process to using the downloadable media and PowerShell to install the upgrade. If you're running the Azure Stack HCI, version 22H2 OS, we recommend taking this direct upgrade path to the version 24H2 OS. Skipping the upgrade to the version 23H2 OS will be one less upgrade hop and will help reduce reboots and maintenance planning prior to the solution upgrade. After that, perform the post-OS upgrade tasks and validate the solution upgrade readiness. Consult with your hardware vendor to determine whether the version 24H2 OS is supported before performing the direct upgrade.
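The scenarios above can be summarized as a small decision sketch (the build-number prefixes come from this post; the function itself is illustrative, not a Microsoft tool, and systems on 25398 that have not yet taken the solution upgrade should follow Scenario #2b instead):

```python
# Illustrative mapping from the running OS build line to the documented
# upgrade path. Build prefixes are from the post; the rest is a sketch.

def upgrade_path(os_build):
    major = os_build.split(".")[0]
    if major == "26100":  # 24H2 OS
        return "already on the 24H2 OS"
    if major == "25398":  # 23H2 OS on the Azure Local solution (Scenario #1)
        return "upgraded automatically via solution update 2509"
    if major == "20349":  # Azure Stack HCI, version 22H2 OS (Scenario #2a)
        return "direct media + PowerShell upgrade to the 24H2 OS"
    return "unsupported build; see the upgrade documentation"
```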
The solution upgrade for systems on the 24H2 OS is not yet supported but will be available soon.

Scenario #2b: You are on the Azure Stack HCI, version 23H2 OS

If you performed the upgrade from the Azure Stack HCI, version 22H2 OS to the version 23H2 OS (25398.xxxx) but haven't applied the solution upgrade, we recommend that you perform the post-OS upgrade tasks, validate solution upgrade readiness, and apply the solution upgrade.

Diagram of Upgrade Paths

Conclusion

We invite you to identify which scenario applies to you and take action to upgrade your systems. On behalf of the Azure Local team, we thank you for your continuous trust and feedback!

Learn more

To learn more, refer to the upgrade documentation. For known issues and remediation guidance, see the Azure Local Supportability GitHub repository.

Jumpstart LocalBox 25H2 Update
LocalBox delivers a streamlined, one-click sandbox experience for exploring the full power of Azure Local. With this 25H2 release, we are introducing support for Azure Spot pricing for the LocalBox Client VM, removing the service principal dependency in favor of Managed Identity, adding support for deploying the LocalBox Client VM and the Azure Local instance in separate regions, adding dedicated PowerShell modules, and updating LocalBox to the Azure Local 2505 release, making it possible for you to evaluate a range of new features and enhancements that elevate the functionality, performance, and user experience. Following our LocalBox rebranding last month, today we are thrilled to announce our second major update: LocalBox 25H2!

Key Azure Local Updates

Azure Local 2505 Solution Version

In this release, we have updated the base image for the Azure Local nodes to the 2505 solution version. Starting with the previous Azure Local 2504 release, a new operating system was introduced for Azure Local deployments. For 2505, all new deployments of Azure Local run the new OS version 26100.4061. This unlocks several new features:

Registration and Deployment Updates

Starting with this release, you can download a specific version of the Azure Local software instead of only the latest version. For each upcoming release, you will be able to choose from up to the last six supported versions.

Security Updates

Dynamic Root of Trust for Measurement (DRTM) is enabled by default for all new 2504 deployments running OS version 26100.3775.

Azure Local VM Updates

Data disk expansion - You can now expand the size of a data disk attached to an Azure Local VM. For more information, see Expand the size of a data disk attached to an Azure Local VM.
Live VM migration with GPU partitioning (GPU-P) - You can now live migrate VMs with GPU-P.

You can read more about what is new in Azure Local 2505 in the documentation.
Jumpstart LocalBox 25H2 Updates

Features

Cost Optimizations with Azure Spot VM Support

LocalBox now supports enabling Azure Spot pricing for the Client VM, allowing users to take advantage of cost savings on unused Azure capacity. This feature is ideal for workloads that can tolerate interruptions, providing an economical option for test and dev environments. By leveraging Spot pricing, users can significantly reduce their operational costs while maintaining the flexibility and scalability offered by Azure. You can use the advisor on the Azure Spot Virtual Machines pricing page to estimate costs for your selected region. Here is an example for running the LocalBox Client VM in the East US region:

The new deployment parameter enableAzureSpotPricing is disabled by default, so users who want to take advantage of this capability need to opt in. Visit the LocalBox FAQ to see the updated price estimates for running LocalBox in your environment.

Deploying the LocalBox Client VM and Azure Local Instance in Separate Regions

Our users have shared feedback about the Azure capacity requirements of deploying LocalBox, specifically finding regions with sufficient compute capacity (vCPU quota) for the VM SKU (Standard E32s v5/v6) used by LocalBox. To address this, we have introduced a new parameter for specifying the region the Azure Local instance resources are deployed to. In the following example, LocalBox is deployed into Norway East while the Azure Local instance is deployed into Australia East. In practice, this makes it possible to deploy LocalBox into any region where you have sufficient vCPU quota for the LocalBox VM SKU (Standard E32s v5/v6).

Enhanced Security - Support for Azure Managed Identity

We have introduced an enhanced security posture by removing the service principal (SPN) requirement in favor of Azure Managed Identity at deployment time.
This follows the same pattern we introduced in Jumpstart ArcBox last year, and now that Azure Local fully supports deployments without an SPN, we are excited to bring this update to LocalBox.

Dedicated PowerShell Modules

Arc Jumpstart has evolved and grown significantly since its beginning more than five years ago. As the code base grows, we see the opportunity to consolidate common code and split large scripts into modules. Our first PowerShell module, Azure.Arc.Jumpstart.Common, was moved into its own repository and published to the PowerShell Gallery via a GitHub Actions workflow last month. 💥

With this LocalBox release, we have also separated LocalBox functions into the newly dedicated Azure.Arc.Jumpstart.LocalBox module. Both modules are now installed during provisioning and leveraged in the automation scripts. While these modules are targeted at automation, they also make the scripts readable for those who want to understand the logic and potentially contribute bug fixes or new functionality.

What we've achieved:

✅ New repo structure for PowerShell modules
✅ CI/CD pipeline using GitHub Actions
✅ Cross-platform testing on Linux, macOS, Windows PowerShell 5.1 & 7
✅ Published modules to the PowerShell Gallery
✅ Sampler module used to scaffold and streamline the module structure

🎯 This is a big step toward better reusability and scalability for PowerShell in Jumpstart scenarios. As we continue building out new use cases, this modular foundation will keep things clean, maintainable, and extensible. Check out our SDK repository on GitHub and the modules on the PowerShell Gallery.

Other Quality of Life Improvements

We appreciate the continued feedback from the Jumpstart community and have incorporated several smaller changes to make it easier for users to deploy LocalBox.
These include, but are not limited to:

Added configuration of start/shutdown settings for the nested VMs to ensure they are shut down properly and started in the correct order when the LocalBox Client VM is stopped and started.
Moved the manual deployment steps to a separate page for clarity.
Added information about the Pester tests to the Troubleshooting section, including how to open the log file to see which tests failed.
Added a shortcut to Hyper-V Manager on the desktop of the LocalBox Client VM.

Getting started!

The latest update to LocalBox focuses not only on new features but also on improving the overall cost and deployment experience. We invite our community to explore these new features and take full advantage of the enhanced capabilities of LocalBox. Your feedback is invaluable to us, and we look forward to hearing about your experiences and insights as you explore these enhancements. Get started today by visiting aka.ms/JumpstartLocalBox!

Public Preview: Deploy OSS Large Language Models with KAITO on AKS on Azure Local
Announcement

Along with the Kubernetes AI Toolchain Operator (KAITO) on AKS GA release, we are thrilled to announce a Public Preview refresh for KAITO on AKS on Azure Local. Customers can now enable KAITO as a cluster extension on AKS enabled by Azure Arc, either as part of cluster creation or on day 2 using the Az CLI. The seamless enablement experience makes it easy to get started with LLM deployment and is fully consistent with AKS in the cloud. We are also investing heavily in reducing friction in LLM deployment, such as recommending the right GPU SKU, validating preset models against GPUs, and avoiding out-of-memory errors.

KAITO Use Cases

Many of our lighthouse customers are exploring exciting opportunities to build, deploy, and run AI apps at the edge. We have seen many interesting scenarios, such as pipeline leak detection, shrinkage detection, factory line optimization, and GenAI assistants, across many industry verticals. All these scenarios need a local AI model with edge data to satisfy low-latency or regulatory requirements. With one simple command, customers can quickly get started with LLMs in an edge-located Kubernetes cluster and be ready to deploy OSS models with OpenAI-compatible endpoints.

Deploy & fine-tune LLMs declaratively

With the KAITO extension, customers can author a simple YAML file for an inference workspace in Visual Studio Code or any text editor and deploy a variety of preset models, ranging from Phi-4 and Mistral to Qwen, with kubectl on any supported GPUs. In addition, customers can deploy any vLLM-compatible text generation model from Hugging Face, or even private-weights models, by following the custom integration instructions. You can also customize base LLMs in the edge Kubernetes cluster with Parameter-Efficient Fine-Tuning (PEFT) using the qLoRA or LoRA method, just like the inference workspace deployment with a YAML file. For more details, please visit the product documentation and the KAITO Jumpstart Drops.
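As a sketch of this declarative flow, a minimal inference workspace manifest might look like the following. The shape follows the open-source KAITO Workspace CRD, but the API version, GPU SKU, and preset name below are illustrative assumptions; check the product documentation for the values supported in your environment.

```yaml
# Illustrative sketch only: apiVersion, instanceType, and preset name may differ
# by KAITO release and hardware; consult the documentation before applying.
apiVersion: kaito.sh/v1alpha1
kind: Workspace
metadata:
  name: workspace-phi-4-mini
resource:
  instanceType: "Standard_NC24ads_A100_v4"   # assumed GPU SKU available to the cluster
  labelSelector:
    matchLabels:
      apps: phi-4-mini
inference:
  preset:
    name: phi-4-mini-instruct                # assumed preset model name
```

Applying the manifest with kubectl (for example, `kubectl apply -f workspace.yaml`) asks the operator to place the workload on matching GPU nodes and stand up an OpenAI-compatible inference endpoint.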
Compare and evaluate LLMs in AI Toolkit

Customers can now use AI Toolkit, a popular Visual Studio Code extension, to compare and evaluate LLMs, whether behind a local or a remote endpoint. With the AI Toolkit playground and Bulk Run features, you can test and compare LLMs side by side and find out which model fits your edge scenario best. In addition, built-in LLM evaluators such as Coherence, Fluency, and Relevance can be used to analyze model performance and generate numeric scores. For more details, please visit the AI Toolkit overview document.

Monitor inference metrics in Managed Grafana

The KAITO extension defaults to the vLLM inference runtime. With the vLLM runtime, customers can now monitor and visualize inference metrics with Azure Managed Prometheus and Azure Managed Grafana. After a few configuration steps, e.g., enabling the extensions, labeling the inference workspace, and creating a Service Monitor, the vLLM metrics show up in the Azure Monitor workspace. To visualize them, customers can link a Grafana dashboard to the Azure Monitor workspace and view the metrics using the community dashboard. Please see the product documentation and the vLLM metric reference for more details.

Get started today

The landscape of LLM deployment and application is evolving at lightning speed, especially in the world of Kubernetes. With the KAITO extension, we're aiming to supercharge innovation around LLMs and streamline the journey from ideation to model endpoints to real-world impact. Dive into this blog as well as the KAITO Jumpstart Drops to explore how KAITO can help you get up and running quickly on your own edge Kubernetes cluster. We'd love to hear your thoughts: drop your feedback or suggestions in the KAITO OSS Repo!
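For reference, the Service Monitor mentioned in the Managed Grafana section above is a standard prometheus-operator resource. A minimal sketch follows; the selector labels and port name are assumptions and must be aligned with the labels on your own KAITO inference workspace service.

```yaml
# Illustrative sketch only: match the selector labels and port name to the
# Kubernetes Service created for your inference workspace.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kaito-vllm-metrics
spec:
  selector:
    matchLabels:
      app: workspace-phi-4-mini   # assumed label on the inference service
  endpoints:
    - port: http                  # assumed name of the port serving /metrics
      path: /metrics
      interval: 30s
```

Once Azure Managed Prometheus scrapes these endpoints, the vLLM metrics become available in the linked Azure Monitor workspace for Grafana dashboards.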