Announcing general availability of workload orchestration: simplifying edge deployments at scale
We’re excited to announce the general availability of workload orchestration, a new Azure Arc capability that simplifies how enterprises deploy and manage Kubernetes-based applications across distributed edge environments.

Organizations across industries such as manufacturing, retail, and healthcare face challenges in managing varied site-specific configurations. Traditional methods often require duplicating app variants, an approach that is error-prone, costly, and hard to scale. Workload orchestration solves this with a centralized, template-driven model: define configurations once, deploy them across all sites, and allow local teams to adjust within guardrails. This ensures consistency, improves speed, reduces errors, and scales with your CI/CD workflows, whether you’re supporting 200+ factories, offline retail clusters, or regionally compliant hospital apps.

Fig 1.0: Workload orchestration – Key features

Key benefits of workload orchestration include:

Solution Configuration & Template Reuse: Define solutions, environments, and multiple hierarchy levels using reusable templates. Key-value stores and schema-driven inputs allow flexible configurations and validations, with role-based access to maintain control.

Context-Aware Deployments: Automatically generate deployable artifacts based on the selected environment (Dev, QA, Prod) and push changes safely through a GitOps flow, enabling controlled rollouts and staged testing across multiple environments.

Deploying at Scale in Constrained Environments: Deploy workloads across edge and cloud environments with built-in dependency management and preloading of container images (also known as staging) to minimize downtime during narrow maintenance windows.

Bulk Deployment and GitOps-Based Rollouts: Execute large-scale deployments, including shared or dependent applications, across multiple sites using Git-based CI/CD pipelines that validate configurations and enforce policy compliance before rollout.
End-to-End Observability: Kubernetes diagnostics in workload orchestration provide full-stack observability by capturing container logs, Kubernetes events, system logs, and deployment errors, integrated with Azure Monitor and OpenTelemetry pipelines for proactive troubleshooting across edge and cloud environments.

Who Is It For?

Workload orchestration supports two primary user personas:

IT Admins and DevOps Engineers: Responsible for initial setup and application configuration via CLI.

OT Operators: Use the portal for day-to-day activities like monitoring deployments and adjusting configurations.

Resources for You to Get Started

You can start using workload orchestration by visiting the Azure Arc portal and following the documentation. We encourage you to try it with a small application deployed to a few edge sites. Create a template, define parameters like site name or configuration toggles, and run a deployment. As you grow more comfortable, expand to more sites or complex applications.

Empowering the Physical World with AI
Unlocking AI at the Edge with Azure Arc

The integration of AI into the physical environment is revolutionizing the ways we interact with and navigate the world around us. By embedding intelligence into edge devices, AI is not just processing data; it is defining how machines perceive, reason, and act autonomously in real-world scenarios. AI at the edge is transforming how we interact with our environment, driven by critical factors such as data sensitivity, local regulations, compliance, low-latency requirements, limited network connectivity, and cost considerations. Added to this, the emergence of new, powerful agentic AI capabilities enables autonomous and adaptive real-time operations, making AI an indispensable tool in reshaping the physical world.

Customers’ Use Cases

By embedding AI into edge operations, industries are unlocking transformative efficiencies and innovations. In manufacturing, edge-powered AI enables real-time quality control and predictive maintenance, minimizing downtime and maximizing productivity. In retail, AI enhances customer experiences with personalized recommendations and streamlined inventory management. Similarly, finance leverages AI’s capabilities for robust fraud detection and advanced risk management. Moreover, sectors like government and defense are increasingly adopting edge AI for safety-critical applications, enabling autonomous, real-time surveillance and response solutions that are both efficient and resilient. These advancements are paving the way for scalable, adaptive solutions that meet the unique demands of diverse operational environments.

Azure’s Adaptive Cloud Approach: Enabling AI from Cloud to Edge

Building on the promise to unify cloud and edge, Azure’s adaptive cloud approach is empowering teams to develop and scale AI workloads seamlessly across diverse environments.
By enabling a unified suite of services tailored for modern AI applications, whether deployed in public clouds or distributed locations, Azure Arc enables streamlined operations with enhanced security and resilience. Central to extending AI services to the edge is our commitment to adaptive, scalable, and efficient solutions tailored to diverse operational needs. Azure Arc plays a key role in this vision by facilitating seamless deployment and management of AI workloads across various environments. This week, we’re excited to share that a subset of Microsoft Azure AI Foundry models, such as Phi and Mistral, have been rigorously validated to run on Azure Local, enabled by Azure Arc.

Our investments are reflected in two primary areas:

Foundational tools for MLOps and developer frameworks, which empower teams to build robust AI applications.

Intuitive, end-to-end low-code experiences designed for data analysts and solution developers. These low-code tools prioritize user-friendly interfaces and rapid deployment, enabling the creation of solutions with just a few clicks.

This dual focus ensures enterprises can fully harness the potential of edge AI while maintaining flexibility and operational efficiency.

Image 1: This high-level diagram illustrates our vision for cloud-to-edge AI workloads, enabled by Azure Arc. Some components (agents and integration with AI Foundry and Foundry Local) are still under development, while others are more advanced and have been released to the market.

Build 2025: New Capabilities and Releases

This strategic vision is now being realized through a wave of new capabilities unveiled at Build 2025. These innovations are designed to accelerate edge AI adoption and simplify the developer experience, making it easier than ever to build, deploy, and manage intelligent applications across hybrid environments.
Announcements related to developer building blocks:

Kubernetes AI Toolchain Operator (KAITO), enabled by Azure Arc (public preview)

Foundry Local (public preview) for Windows apps to be deployed on any client device; read more here.

Workload orchestration (public preview)

Application development tools for Kubernetes enabled by Arc (public preview)

Refer to this blog to read more: https://aka.ms/AdaptiveCloudBuild2025

Announcements related to end-to-end experiences:

Edge RAG, enabled by Azure Arc, is now available in public preview.

Azure AI Video Indexer for recorded files, enabled by Arc, is generally available since April 2025.

Azure AI Video Indexer for live video analysis, enabled by Arc, is available in private preview for a limited set of customers.

Customer scenarios: enabling search and retrieval for on-premises data on Azure Local

Edge RAG targets customers who have data that needs to stay on premises due to data gravity, security and compliance, or latency requirements. We have observed significant and consistent interest from highly regulated sectors. These entities are exploring the use of RAG capabilities in disconnected environments through Azure Local.

DataON is a hybrid cloud computing company for enterprises of all sizes, with a focus on educational institutions and local government agencies. Recently, they have worked with their customers to successfully deploy our RAG solution on CPU and GPU clusters and begin testing with sample end-customer data.

“DataON has been actively exploring how Edge RAG can enhance our Microsoft Azure Local solutions by providing more efficient data retrieval and decision-making capabilities. It’s exciting to be part of the private preview program and see firsthand how Edge RAG is shaping the future of data-driven insights.” Howard Lo | VP, Sales & Marketing | DataON

This capability brings generative AI and RAG to on-premises data. Edge RAG was validated on AKS running on Azure Local.
Based on DataON and other customer feedback, we have expanded the release to include new features:

Model Updates: Ability to use any model compatible with OpenAI inferencing standard APIs

Multilingual Support: 100+ common languages for document ingestion and question-answer sessions

Multimodal Support: Support for image ingestion and retrieval during question-answer sessions

Search Types: Support for text, vector, hybrid text, and hybrid text+image searches

Ingestion Scale-out: Integration with KEDA for a fully parallelized, high-throughput ingestion pipeline

Evaluation Workflow with RAG Metrics: Integrated workflow with built-in or customer-provided sample datasets

Read more about Edge RAG in this blog: https://aka.ms/AzureEdgeAISearchenabledbyArc

AI Workloads for Disconnected Operations

In fully disconnected (air-gapped or non-internet) environments, such as those often found in government and defense sectors, technologies like RAG can be deployed on-premises or in secure private clouds. This is currently available with limited access.

Use cases:

Video analysis: Automatically analyzes video and audio content to extract metadata such as objects and scenes. Use cases include live video analysis, mission debriefing and training, and modern safety.

Model consumption: A central repository for securely managing, sharing, and deploying AI/ML models. Use cases include model governance, rapid deployment of mission-specific models, and inter-agency collaboration.

Retrieval-Augmented Generation (RAG): Combines LLMs with a document retrieval system to generate accurate, context-aware responses based on internal knowledge bases. Use cases include field briefings, legal and policy compliance, and cybersecurity incident response.

Transforming Industries with AI: Real-World Stories from the Edge

Across industries, organizations are embracing AI to solve complex challenges, enhance operations, and deliver better outcomes.
From healthcare to manufacturing, retail to energy, and even national security, Azure AI solutions are powering innovation at scale.

In the manufacturing sector, a global company sought to optimize production and reduce costly downtime. Azure AI Video Indexer monitored video feeds from production lines to catch defects early, while custom predictive maintenance models from the Model Catalog helped prevent equipment failures. RAG provided real-time insights into operations, empowering managers to make smarter decisions by asking questions. These tools collectively boosted efficiency, minimized downtime, and improved product quality.

At airports, Azure AI helped enhance passenger experience and safety. From monitoring queue lengths and tracking vehicles to detecting falls and identifying restricted-area breaches, the combination of Azure Local, Video Indexer, Azure IoT Operations, and custom AI created a smarter, safer airport environment.

Retailers, too, are reaping the benefits. A major retail chain used Azure AI to understand in-store customer behavior through video analytics, optimize inventory with demand forecasting models, and personalize shopping experiences using RAG. These innovations led to better customer engagement, streamlined inventory management, and increased sales.

In healthcare, a leading provider operating multiple hospitals and clinics nationwide faced the daunting task of analyzing massive volumes of patient data, from medical records and imaging to real-time feeds from wearable devices. With strict privacy regulations in play, they turned to Azure AI. Using Azure AI Video Indexer, they analyzed imaging data like X-rays and MRIs to detect anomalies. The Model Catalog enabled predictive analytics to identify high-risk patients and forecast readmissions. Meanwhile, Retrieval-Augmented Generation (RAG) gave doctors instant access to patient histories and relevant medical literature. The result?
More accurate diagnoses, better patient care, and full regulatory compliance.

These stories highlight how Azure Arc-enabled AI workloads are not just a set of tools; they are a catalyst for transformation. Whether it’s saving lives, improving safety, or driving business growth, the impact is real, measurable, and growing every day.

Learn More

Whether you are tuning in online or joining us in person, we wish you a fun and exciting Build 2025! The advancements in AI at the edge are set to revolutionize how we build, deploy, and manage applications, providing greater speed, agility, and security for businesses around the world.

Recommended Build sessions:

Breakout session (BRK188): Power your AI apps across cloud and edge with Azure Arc

Breakout session (BRK183): Improving App Health with Health Modeling and Chaos Engineering

Breakout session (BRK195): Inside Azure innovations with Mark Russinovich

Breakout session (BRK168): AI and Agent Observability in Azure AI Foundry and Azure Monitor

Jumpstart LocalBox 25H2 Update
LocalBox delivers a streamlined, one-click sandbox experience for exploring the full power of Azure Local. With this 25H2 release, we are introducing support for Azure Spot VM pricing for the LocalBox Client VM, removing the service principal dependency in favor of Managed Identity, adding support for deploying the LocalBox Client VM and the Azure Local instance in separate regions, adding dedicated PowerShell modules, and updating LocalBox to the Azure Local 2505 release - making it possible for you to evaluate a range of new features and enhancements that elevate functionality, performance, and user experience.

Following our LocalBox rebranding last month, today we are thrilled to announce our second major update: LocalBox 25H2!

Key Azure Local Updates

Azure Local 2505 Solution Version

In this release, we have updated the base image for the Azure Local nodes to the 2505 solution version. Starting with the previous Azure Local 2504 release, a new operating system was introduced for Azure Local deployments. For 2505, all new deployments of Azure Local run the new OS version 26100.4061. This unlocks several new features:

Registration and Deployment Updates

Starting with this release, you can now download a specific version of the Azure Local software instead of just the latest version. For each upcoming release, you will be able to choose from up to the last six supported versions.

Security Updates

The Dynamic Root of Trust for Measurement (DRTM) is enabled by default for all new 2504 deployments running OS version 26100.3775.

Azure Local VM Updates

Data disk expansion: You can now expand the size of a data disk attached to an Azure Local VM. For more information, see Expand the size of a data disk attached to an Azure Local VM.

Live VM migration with GPU partitioning (GPU-P): You can now live migrate VMs with GPU-P.

You can read more about what is new in Azure Local 2505 in the documentation.
Jumpstart LocalBox 25H2 Updates

Features

Cost Optimizations with Azure Spot VM Support

LocalBox now supports enabling Azure Spot VM pricing for the Client VM, allowing users to take advantage of cost savings on unused Azure capacity. This feature is ideal for workloads that can tolerate interruptions, providing an economical option for testing and dev environments. By leveraging Spot pricing, users can significantly reduce their operational costs while maintaining the flexibility and scalability offered by Azure. You can use the advisor on the Azure Spot Virtual Machines pricing page to estimate costs for your selected region. Here is an example for running the LocalBox Client VM in the East US region:

The new deployment parameter enableAzureSpotPricing is disabled by default, so users who want to take advantage of this capability will need to opt in. Visit the LocalBox FAQ to see the updated price estimates for running LocalBox in your environment.

Deploying the LocalBox Client VM and Azure Local Instance in Separate Regions

Our users have been sharing feedback about the Azure capacity requirements of deploying LocalBox, specifically when it comes to finding regions with sufficient compute capacity (vCPU quotas) for the VM SKU (Standard E32s v5/v6) used in LocalBox. To address this, we have introduced a new parameter for specifying the region the Azure Local instance resources will be deployed to. In the following example, LocalBox is deployed into Norway East while the Azure Local instance is deployed into Australia East. In practice, this makes it possible to deploy LocalBox into any region where users have sufficient vCPU quotas available for the LocalBox VM SKU (Standard E32s v5/v6).

Enhanced Security: Support for Azure Managed Identity

We have introduced an enhanced security posture by removing the service principal name (SPN) user requirement in favor of Azure Managed Identity at deployment time.
This follows the same pattern we introduced in Jumpstart ArcBox last year, and now, with Azure Local fully supporting deployments without an SPN, we are excited to bring this update to LocalBox.

Dedicated PowerShell modules

Arc Jumpstart has been evolving and growing significantly since its beginning more than five years ago. As the code base grows, we see the opportunity to consolidate common code and separate large scripts into modules. Our first PowerShell module, Azure.Arc.Jumpstart.Common, was moved into its own repository and published to the PowerShell Gallery via a GitHub Actions workflow last month. 💥

With this LocalBox release, we have also separated functions in LocalBox into the newly dedicated Azure.Arc.Jumpstart.LocalBox module. Both modules are now installed during provisioning and leveraged in automation scripts. While these modules are targeted for use in automation, they also make the scripts more readable for those who want to understand the logic and potentially contribute bug fixes or new functionality.

What we’ve achieved:

✅ New repo structure for PowerShell modules
✅ CI/CD pipeline using GitHub Actions
✅ Cross-platform testing on Linux, macOS, Windows PowerShell 5.1 & 7
✅ Published module to PowerShell Gallery
✅ Sampler module used to scaffold and streamline the module structure

🎯 This is a big step toward better reusability and scalability for PowerShell in Jumpstart scenarios. As we continue building out new use cases, having this modular foundation will keep things clean, maintainable, and extensible. Check out our SDK repository on GitHub and the modules on PowerShell Gallery.

Other Quality of Life Improvements

We appreciate the continued feedback from the Jumpstart community and have incorporated several smaller changes to make it easier for users to deploy LocalBox.
These include, but are not limited to:

Added configuration of start/shutdown settings for the nested VMs to make sure that they are shut down properly and started in the correct order when the LocalBox Client VM is stopped and started

Moved the manual deployment steps to a separate page for clarity

Added information about the Pester tests in the Troubleshooting section, including how to open the log file to see which tests have failed

Added a shortcut to Hyper-V Manager on the desktop in the LocalBox Client VM

Getting started!

The latest update to LocalBox focuses not only on new features but also on enhancing the overall cost and deployment experience. We invite our community to explore these new features and take full advantage of the enhanced capabilities of LocalBox. Your feedback is invaluable to us, and we look forward to hearing about your experiences and insights as you navigate these new enhancements. Get started today by visiting aka.ms/JumpstartLocalBox!

Public Preview: Deploy OSS Large Language Models with KAITO on AKS on Azure Local
Announcement

Along with the Kubernetes AI Toolchain Operator (KAITO) on AKS GA release, we are thrilled to announce a public preview refresh for KAITO on AKS on Azure Local. Customers can now enable KAITO as a cluster extension on AKS enabled by Azure Arc, either as part of cluster creation or on day 2 using the Azure CLI. The seamless enablement experience makes it easy to get started with LLM deployment and is fully consistent with AKS in the cloud. We have also invested heavily in reducing friction in LLM deployment, such as recommending the right GPU SKU, validating preset models with GPUs, and avoiding out-of-memory errors.

KAITO Use Cases

Many of our lighthouse customers are exploring exciting opportunities to build, deploy, and run AI apps at the edge. We’ve seen many interesting scenarios like pipeline leak detection, shrinkage detection, factory line optimization, or GenAI assistants across many industry verticals. All these scenarios need a local AI model with edge data to satisfy low-latency or regulatory requirements. With one simple command, customers can quickly get started with LLMs in an edge-located Kubernetes cluster, ready to deploy OSS models with OpenAI-compatible endpoints.

Deploy & fine-tune LLMs declaratively

With the KAITO extension, customers can author a simple YAML for an inference workspace in Visual Studio Code or any text editor and deploy a variety of preset models, ranging from Phi-4 and Mistral to Qwen, with kubectl on any supported GPUs. In addition, customers can deploy any vLLM-compatible text generation model from Hugging Face, or even private-weights models by following the custom integration instructions. You can also customize base LLMs in edge Kubernetes with Parameter-Efficient Fine-Tuning (PEFT) using the qLoRA or LoRA method, just like the inference workspace deployment with a YAML file. For more details, please visit the product documentation and the KAITO Jumpstart Drops.
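For illustration, a minimal inference workspace manifest looks roughly like the sketch below. Treat it as a sketch rather than a canonical spec: the API version, GPU instance type, and preset name are assumptions based on the public KAITO examples and will vary by KAITO release and by the hardware available to your cluster.

```yaml
# Hypothetical KAITO inference workspace; verify apiVersion, instanceType,
# and the preset name against your installed KAITO version before applying.
apiVersion: kaito.sh/v1alpha1
kind: Workspace
metadata:
  name: workspace-phi
resource:
  instanceType: "Standard_NC24ads_A100_v4"  # assumption: a GPU SKU available to your cluster
  labelSelector:
    matchLabels:
      apps: phi
inference:
  preset:
    name: phi-3.5-mini-instruct  # assumption: one of the validated preset models
```

Applied with `kubectl apply -f workspace.yaml`, the operator provisions the workspace, pulls the preset model, and exposes an OpenAI-compatible inference endpoint once the workspace reports ready.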
Compare and evaluate LLMs in AI Toolkit

Customers can now use AI Toolkit, a popular extension for Visual Studio Code, to compare and evaluate LLMs, whether behind a local or a remote endpoint. With the AI Toolkit playground and Bulk Run features, you can test and compare LLMs side by side and find out which model fits your edge scenario best. In addition, there are many built-in LLM evaluators, such as Coherence, Fluency, or Relevance, that can be used to analyze model performance and generate numeric scores. For more details, please visit the AI Toolkit overview document.

Monitor inference metrics in Managed Grafana

The KAITO extension defaults to the vLLM inference runtime. With the vLLM runtime, customers can now monitor and visualize inference metrics with Azure Managed Prometheus and Azure Managed Grafana. Within a few configuration steps (enabling the extensions, labeling the inference workspace, and creating a Service Monitor), the vLLM metrics will show up in the Azure Monitor workspace. To visualize them, customers can link a Grafana dashboard to the Azure Monitor workspace and view the metrics using the community dashboard. Please see the product documentation and the vLLM metric reference for more details.

Get started today

The landscape of LLM deployment and application is evolving at lightning speed, especially in the world of Kubernetes. With the KAITO extension, we're aiming to supercharge innovation around LLMs and streamline the journey from ideation to model endpoints to real-world impact. Dive into this blog as well as the KAITO Jumpstart Drops to explore how KAITO can help you get up and running quickly on your own edge Kubernetes cluster. We’d love to hear your thoughts: drop your feedback or suggestions in the KAITO OSS repo!

Unlocking AI Apps Across Boundaries with Azure
As we open the doors to Microsoft Build 2025, I’m thrilled to share the newest releases in our effort to enable teams to more rapidly develop and scale applications across boundaries: app development tools for Kubernetes (public preview), Kubernetes AI Toolchain Operator (KAITO) (public preview), Foundry Local (public preview), workload orchestration (public preview), and Retrieval-Augmented Generation (RAG) capabilities on Azure Local (public preview).

With our adaptive cloud approach, we offer a unified set of capabilities to enable your AI applications, whether they’re deployed to the public cloud, in hybrid environments, or at distributed edge locations. These capabilities include tools developers use every day, such as Visual Studio Code, to help build AI applications faster, better, and with greater security and resilience than ever before.

Microsoft's adaptive cloud approach to more rapidly developing and scaling applications across boundaries

These new additions complement existing capabilities from Azure Arc for Kubernetes and Azure Kubernetes Service (AKS) enabled by Azure Arc that support the hosting of containerized workloads, now with key capabilities designed to help expedite the creation of AI applications, from model selection to edge-ready cluster provisioning (with GPU nodes), automated model deployment, lifecycle management, and more. By combining KAITO with Azure Arc and Foundry Local in your workflow, Microsoft provides you with a more unified, flexible platform for building and running intelligent applications across boundaries. Learn more about our Arc-enabled AI story here.

To help accelerate your adoption of cloud-native capabilities in distributed environments, Kubernetes-based app development tools extend essential services, such as container storage and secrets synchronization, to edge-located clusters. And we plan to expand this set of services in the future.
This integration simplifies the deployment and management of applications across hybrid and multi-cloud environments. By unifying infrastructure and application lifecycle management, it empowers teams to move faster while maintaining consistency, security, and visibility. More details on each of these releases are below. Here’s a glimpse of what they can mean for you, your workflow, and your company.

Many of these services are already making a difference for application teams at customers like Domino’s, Coles, Chevron, and Dick’s Sporting Goods, providing them with greater speed and agility as they build the solutions their customers and teams need.

As customers continue to modernize their applications across hybrid, multi-cloud, and distributed environments, many rely on trusted solutions from independent software vendors (ISVs). The Azure Arc ISV Partner Program is designed to help accelerate this journey, enabling partners to build, validate, and publish Arc-enabled Kubernetes applications directly to the Azure Marketplace. Building on the momentum from our initial launch at last year's Ignite, I'm excited to introduce a new wave of partner solutions to the program. This latest expansion brings not only new partners, but also entirely new solution categories to the Azure Marketplace, including Security, Networking & Service Mesh, API Infrastructure & Management, and Monitoring & Observability. With just a few clicks, customers can now deploy enterprise-grade tools like HashiCorp Vault Enterprise, Istio by Solo.io, Traefik’s API stack, and Dynatrace Operator directly onto their Arc-enabled Kubernetes clusters. These additions to the Azure Arc ISV Partner Program reflect our commitment to supporting the full spectrum of cloud-native application needs. Explore the growing ecosystem of Arc-enabled solutions in the Azure Marketplace.

RELEASES

Here’s a recap of some of our newest feature releases that support our adaptive cloud approach.
App development tools for Kubernetes | Public Preview

Kubernetes clusters enabled by Azure Arc help power our adaptive cloud strategy. We are extending a set of fundamental services that are fully validated, managed, and deployed by Arc. The initial set of these services includes Azure Container Storage enabled by Azure Arc and Azure Key Vault. In the future we will be expanding and adding more of these foundational services. In addition, a Visual Studio Code extension is available for developers to kick-start Kubernetes application development and turn their Kubernetes apps into Arc-enabled applications. This toolkit provides code samples and an environment to build, test, and deploy Kubernetes applications.

Figure 1: App development tools for Kubernetes in Azure Arc

Retrieval-Augmented Generation (RAG) capabilities on Azure Local | Public Preview

Edge RAG on Azure Local is a turnkey, Azure Arc-enabled solution that brings Retrieval-Augmented Generation (RAG) capabilities to on-premises environments. It helps customers build, evaluate, and deploy generative AI applications, like custom chat assistants, directly on their local data, without sending it to the cloud. This release is especially valuable for industries like manufacturing and healthcare, where data sovereignty, low latency, and IP protection are important. By supporting local deployment of language models, more secure data ingestion, and built-in tools for prompt engineering and evaluation, these capabilities help empower organizations to unlock AI insights while maintaining more control over their data.

KAITO extension for AKS on Azure Local | Public Preview

Kubernetes AI Toolchain Operator (KAITO) enabled by Azure Arc is designed to help simplify and scale AI model deployment across hybrid and edge environments.
It enables developers to declaratively deploy AI models, whether from Microsoft’s AI Foundry, third-party hubs like Hugging Face, or customer-provided sources, on Arc-enabled Kubernetes clusters. It helps customers bring cloud-native AI capabilities to the edge, enabling low-latency inference, more consistent lifecycle management, and operational control across diverse infrastructure. Try it out today using the “KAITO & AKS Arc” Jumpstart Drop!

Figure 2: Deploy AI models on AKS in hybrid and edge environments using KAITO

Workload orchestration | Public Preview

Workload orchestration provides a centralized, template-driven platform for managing application configurations across distributed edge environments. It enables IT teams to define reusable templates, manage interdependent applications, and enforce custom validation rules, both built-in and external. It also includes integrated monitoring, a no-code authoring portal with RBAC, and CLI support for automation and CI/CD integration. Workload orchestration simplifies complex edge deployments by unifying configuration management and governance, empowering teams to scale faster with consistency, security, and flexibility.

Foundry Local | Public Preview

Foundry Local is a high-performance local AI runtime stack that helps bring Azure AI Foundry’s power to client devices. It includes a CLI, an SDK, and a local REST API for model inference, and integrates with the Azure AI Foundry catalog for model access and deployment. It provides performance optimizations for Windows and Apple silicon, and the SDK enables code portability between local and cloud environments. Foundry Local, now available in preview on Windows and macOS, enables the creation and deployment of cross-platform AI applications that run models, tools, and agents directly on-device. This reduces reliance on cloud connectivity and offers more control and flexibility.
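Because the local REST API is OpenAI-compatible, existing OpenAI-style client code can be repointed at the on-device endpoint with minimal changes. The sketch below only constructs a chat-completion request payload; the base URL and model alias are assumptions (Foundry Local assigns the service port at startup, and available model aliases vary by device), so check your local install before actually sending the request.

```python
import json

# Assumptions for illustration only: Foundry Local chooses the real port at
# service start, and the model alias depends on what is present on the device.
BASE_URL = "http://localhost:5273/v1"  # hypothetical local endpoint
MODEL = "phi-3.5-mini"                 # hypothetical model alias

def build_chat_request(prompt: str) -> dict:
    """Build an OpenAI-compatible chat-completion payload."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

payload = build_chat_request("Summarize today's maintenance log.")
# POST json.dumps(payload) to f"{BASE_URL}/chat/completions" with any HTTP
# client; the same payload shape works against a cloud OpenAI-style endpoint,
# which is what enables code portability between local and cloud.
print(json.dumps(payload, indent=2))
```

The design point worth noting is that portability comes from the payload shape, not the transport: swapping `BASE_URL` between the device and the cloud is the only change the client needs.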
FIND US AT BUILD

Breakout session (BRK188): Build and Scale your AI apps with Kubernetes and Azure Arc

Breakout session (BRK183): Improving App Health with Health Modeling and Chaos Engineering

Breakout session (BRK195): Inside Azure innovations with Mark Russinovich

Breakout session (BRK168): AI and Agent Observability in Azure AI Foundry and Azure Monitor

You can also come talk to us about building, deploying, and managing applications for the adaptive cloud at the Expert Meet-Up Area. Whether you are tuning in online or joining us in person, I wish you a fun and exciting Build 2025!

Troubleshoot the Azure Arc Agent in Azure using Azure Monitor & Log Analytics Workspace
This article explores how to centralize logging from on-premises servers—both physical and virtual—into a single Log Analytics Workspace. The goal is to enhance monitoring capabilities for the Azure Arc Connected Machine Agent running on these servers. Rather than relying on scattered and unstructured .log files on individual machines, this approach enables customers to collect, analyze, and gain insights from multiple agents in one centralized location. This not only simplifies troubleshooting but also unlocks richer observability across the hybrid environment.

Arc Jumpstart Newsletter: April 2025 Edition
We’re thrilled to bring you the latest updates from the Arc Jumpstart team in this month’s newsletter. Whether you are new to the community or a regular Jumpstart contributor, this newsletter will keep you informed about new releases, key events, and opportunities to get involved within the Azure Adaptive Cloud ecosystem. Check back each month for new ways to connect, share your experiences, and learn from others in the Adaptive Cloud community.

Jumpstart LocalBox - New name, still awesome!
We’re thrilled to announce that Jumpstart HCIBox has been rebranded to Jumpstart LocalBox, aligning with the broader Azure Local rebranding introduced at Microsoft Ignite 2024.

🎁 Why the Rebrand?

The transition from HCIBox to LocalBox reflects our evolving mission: to support a broader spectrum of edge and on-premises deployments. While HCIBox was originally focused on Azure Stack HCI, Jumpstart LocalBox embraces the expanding needs of hybrid and edge solutions under the Azure Local umbrella. LocalBox delivers a streamlined, one-click sandbox experience for exploring the full power of Azure Arc in localized environments. It’s tailored for IT pros, DevOps engineers, solution architects, and anyone looking to get hands-on with Azure Local and Arc-enabled infrastructure and services, whether in a datacenter, remote branch office, or at the edge.

🛠️ Use cases

LocalBox is a turnkey solution that provides a complete sandbox for exploring Azure Local capabilities and hybrid cloud integration in a virtualized environment. It is completely self-contained within a single Azure subscription and resource group, making it easy to get hands-on with Azure Local and Azure Arc without the need for physical hardware. We have seen our users put LocalBox to work in several ways:

Sandbox environment for getting hands-on with Azure Local and Azure Arc technologies
Accelerator for proof-of-concepts or pilots
Training tool for skills development
Demo environment for customer presentations or events
Rapid integration testing platform

🤝 Most Deployed, Most Engaged

Jumpstart LocalBox holds a special place in our community. It’s our most deployed solution across Jumpstart, with hundreds of automated deployments every month and the highest engagement levels across our GitHub repositories. It’s trusted by users worldwide to explore and validate hybrid cloud and edge architectures in a real-world-like environment.

What’s 🆕?
Updated Documentation and Assets: All references to HCIBox have been updated to LocalBox across our documentation, ensuring consistency and clarity.
Autologon by default: LocalBox deployments now initialize without requiring the user to log into the VM.
Support for Standard E32s v6 VM Size (Default): We've added support for the Standard_E32s_v6 VM size and set it as the new default for LocalBox deployments, offering improved performance and value. For those who prefer it, Standard_E32s_v5 remains fully supported.
Enhanced User Experience: We’ve refreshed the user interface and experience to align with the new branding, making it more intuitive and user-friendly.

🛣️ Roadmap

Looking ahead, the LocalBox roadmap includes some exciting features and capabilities:

Support for LocalBox Client VM and Azure Local cluster in separate regions 🌏
Azure Spot VM support 💵 ⬇️
Remove service principal dependency and transition to Managed Identity (coming soon!) 🔑
netsh NAT mappings for easier access to nested VMs on the Azure Local cluster 🛜
Graceful shutdown 🛑
Dedicated PowerShell modules 💾

As always, enjoy Jumpstart and let us know if you have a question or an issue; we will get right on it!

Arc Jumpstart Training Video Series
Enter the Arc Jumpstart Training video series, now available on YouTube! This series has been crafted with care to equip users of Arc Jumpstart with the comprehensive knowledge and practical skills needed to unlock the full potential of various Arc Jumpstart solutions. Whether you're a newcomer starting your journey or an experienced user looking to refine your expertise, this series promises to be your ultimate guide.

What Awaits You in the Arc Jumpstart Training Series?

Designed to provide a structured and in-depth exploration of Arc Jumpstart's offerings, the series consists of five modules, each focusing on a specific aspect of the Arc Jumpstart ecosystem. These modules delve into everything from foundational concepts to advanced functionalities, ensuring that users have all the tools they need to succeed. Let’s take a closer look at what each module has to offer.

Module 1: Introduction to Arc Jumpstart

Every great journey begins with a solid introduction, and Module 1 delivers exactly that. In this module, we explore the fundamental question: What is Arc Jumpstart? Azure Arc Jumpstart is a comprehensive, automated, and open-source platform designed to help users quickly set up and explore Azure Arc environments. It provides a variety of scenarios and tools to get started with Azure Arc, including:

Jumpstart Scenarios: Automated, zero-to-hero scenarios for Arc-enabled servers, Kubernetes, and more.
Jumpstart ArcBox: A virtual, hybrid sandbox that allows you to explore all major capabilities of Azure Arc with just one click.
Jumpstart HCIBox: A dedicated Azure Local sandbox for trying out Azure Local services.
Jumpstart Drops: Community-contributed artifacts, deployment guides, and code snippets.
Jumpstart Gems: Detailed technical diagrams and end-to-end cloud scenarios.
Jumpstart Agora: Comprehensive cloud-to-edge scenarios designed for specific industry needs.
These resources are designed to help users deploy quickly, test easily, and evaluate confidently, leveraging the full power of the adaptive cloud.

Module 2: Jumpstart ArcBox

Arc Jumpstart ArcBox is a virtual, hybrid sandbox environment that allows users to explore and utilize the major capabilities of Azure Arc with ease. Here are some key features:

One-Click Deployment: You can set up a complete Azure Arc environment with just one click, requiring only an Azure subscription.
Curated Experiences: ArcBox offers tailored environments for different roles, such as IT professionals, DevOps engineers, and data professionals.
Comprehensive Capabilities: It includes all major Azure Arc functionalities, enabling users to test, deploy, and evaluate various scenarios in a controlled setting.

ArcBox is designed to simplify the process of getting started with Azure Arc, making it accessible and efficient for users to explore its full potential.

Module 3: Jumpstart HCIBox (for Azure Local)

Arc Jumpstart HCIBox is a turnkey solution that provides a complete sandbox for exploring Azure Local capabilities and hybrid cloud integration in a virtualized environment. Here are some key features:

Dedicated Azure Local Sandbox: You can set up an Azure Local environment with just one click, requiring only an Azure subscription.
Hybrid Cloud Integration: HCIBox allows you to explore the integration of Azure Local with hybrid cloud scenarios.
Automated Deployment: It simplifies the process of deploying and testing Azure Local capabilities.

HCIBox is designed to help users quickly get up and running with Azure Local, making it easier to evaluate and leverage its full potential.

Module 4: Jumpstart Drops

Arc Jumpstart Drops is a curated collection of scripts, tools, tutorials, and other resources contributed by the community for the community.
These "Drops" are designed to make life easier for developers, IT professionals, and operations teams by providing small, self-contained pieces of code and artifacts that can be easily integrated into various projects. Here are some key features:

Community Contributions: Anyone can contribute their own scripts, tools, and tutorials to the collection.
Curated Content: The Drops are carefully selected to ensure quality and relevance.
Diverse Resources: The collection includes a wide range of resources, from automation scripts to detailed tutorials.

Arc Jumpstart Drops is a great way to share knowledge and tools, helping others to streamline their workflows and solve common challenges.

Module 5: More Jumpstart and Next Steps

The journey doesn’t end with the earlier modules. Module 5 explores what lies ahead, including:

Next Steps: Guidance on how to continue your learning journey and leverage Jumpstart to its fullest potential.
Jumpstart Lightning: A sneak peek into this exciting feature and how it can accelerate your workflows.
Jumpstart Badges: Earn recognition for your expertise and showcase your achievements in the Jumpstart ecosystem.

This module serves as a bridge to advanced learning opportunities and provides a roadmap for continued success.

Get Started Today

So, are you ready to enhance your skills and unlock the full potential of Arc Jumpstart? Head over to YouTube and dive into the Arc Jumpstart Training video series. Whether you’re deploying ArcBox for the first time, experimenting with HCIBox, or creating your first Jumpstart Drop, these videos are your ultimate resource. Don’t wait—your journey with Arc Jumpstart begins now!

Arc Jumpstart Newsletter: March 2025 Edition
We’re thrilled to bring you the latest updates from the Arc Jumpstart team in this month’s newsletter. Whether you are new to the community or a regular Jumpstart contributor, this newsletter will keep you informed about new releases, key events, and opportunities to get involved within the Azure Adaptive Cloud ecosystem. Check back each month for new ways to connect, share your experiences, and learn from others in the Adaptive Cloud community.