AKS enabled by Azure Arc: Powering AI Applications from Cloud to Edge [Ignite 2025]
A New Era for Hybrid Kubernetes and AI

Microsoft Ignite 2025 continues to accelerate Azure’s hybrid vision, extending cloud-native innovation into datacenters, factories, retail sites, and remote, fully disconnected environments. This year’s announcements expand the capabilities of AKS enabled by Azure Arc, making it the most versatile and secure platform for deploying modern applications and AI workloads across any environment. AKS Arc now underpins Azure’s hybrid and edge strategy — and increasingly its hybrid AI strategy — by delivering consistent operations, strong security, and flexible deployment models for distributed applications.

TL;DR: New AKS Arc offerings and features in 2025

- AKS on Azure Local Disconnected Operations Public Preview
- AKS on Azure Local Small Form Factor Bare-Metal Private Preview
- Improvements to AKS on Azure Local Medium, including lifecycle, portability, additional GPU support, and hardware support expansion
- Improvements to AKS on Windows Server: improved platform reliability, security, and consistency through fixes to image packaging, dependency handling, node/agent synchronization, certificate and key management, error detection, telemetry, and cleanup of stale resources
- 2-Node High Availability for AKS Arc at the edge Private Preview
- AI Foundry Local integration for offline/hybrid AI development
- KAITO on AKS Arc Public Preview for hybrid/edge model deployment
- Edge RAG on Azure Local Medium
- Arc Gateway for AKS Arc Public Preview
- KMS v2 for secrets encryption on AKS on Azure Local Medium
- Expanded GPU support for AKS Arc on Azure Local (RTX 6000 Ada GA, NVIDIA L-series Preview)
- AKS Container Apps on Azure Local Medium Public Preview
- AKS Edge Essentials release for improved stability and offline operations
- Arc-enabled Azure Monitor Pipeline, Workload Identity Federation, and Azure Container Storage enhancements
- Azure Linux 3.0 support, Key Vault Secret Store extension

AKS on Azure Local: Evolving the Hybrid Managed Kubernetes Platform

This
year, AKS on Azure Local introduces several major enhancements that broaden where and how customers can deploy AKS as their managed Kubernetes platform at the edge.

Disconnected Operations Public Preview

AKS on Azure Local can now operate entirely offline, supporting customers in sovereign, regulated, or isolated environments. Clusters can be deployed, managed, and updated without continuous Azure connectivity, syncing only when connectivity is temporarily restored.

Small Form Factor Bare-Metal Preview

The new SFF edition brings AKS to compact industrial PCs and constrained retail or factory environments. It delivers bare-metal performance in a much smaller footprint, including optional GPU support for edge inferencing.

Improvements to Azure Local Medium

Azure Local Medium continues to mature with expanded hardware compatibility, improved lifecycle reliability, and better workload portability across cloud and local deployments — enabling enterprises to standardize on AKS across all tiers of infrastructure.

2-Node High Availability for the Edge

For space- and cost-constrained environments, AKS Arc can support HA clusters with only two nodes, enabling robust production workloads in places where traditional 3-node clusters are not feasible.

Operational Excellence with AKS Arc

Enterprises operating distributed Kubernetes fleets will benefit from new governance and connectivity capabilities.

AKS Arc Gateway Public Preview

Arc Gateway simplifies hybrid connectivity by streamlining cluster onboarding and reducing required firewall rules. This creates a more secure and operationally efficient pattern for managing large fleets of Arc-enabled clusters.

KMS v2 for Kubernetes Secrets Encryption at Rest in etcd

KMS v2 enhances Kubernetes secret encryption for hybrid and on-prem clusters, delivering improved reliability, stronger security boundaries, and consistency with Azure’s cloud-native cryptography approach.
AKS as the Hybrid AI Application Platform

AI is the defining theme of Ignite 2025, and AKS enabled by Azure Arc is now the foundation for deploying AI where the data resides. Organizations increasingly need to run AI models in datacenters, factories, field environments, and sovereign locations, and this year’s updates establish AKS Arc as Azure’s platform for distributed and offline AI workloads.

AI Foundry Local: Build and Fine-Tune AI Models Anywhere

AI Foundry Local brings Azure AI Foundry’s core capabilities into customer environments: the curated model catalog, development tools, templates, and fine-tuning support. It allows developers to run foundation models locally using optimized execution paths for GPUs, NPUs, and CPUs; fine-tune models with LoRA/QLoRA in regulated or offline scenarios; and package model artifacts for deployment on AKS clusters. This enables a complete hybrid AI development loop that works both online and fully disconnected.

KAITO Public Preview on AKS Arc

KAITO automates model serving across cloud, datacenter, and edge. Now available on AKS Arc, it provides one-click packaging, optimization, and deployment of models built in AI Foundry Local. Customers can run ONNX, Hugging Face, or custom models with edge-aware performance optimization across diverse hardware, including CPU-only and GPU-accelerated nodes.

Expanded GPU Capabilities

Hybrid AI workloads benefit from expanded GPU options, including general availability of the NVIDIA RTX 6000 Ada, preview support for NVIDIA L-series GPUs, and new GPU Partitioning (GPU-PV) support for efficient resource utilization. These capabilities make it possible to run high-performance inferencing and training workloads across a wide range of hybrid deployment scenarios.

RAG on Azure Local: Bring Generative AI to On-Premises Data

RAG (Retrieval-Augmented Generation) on Azure Local enables organizations to ground AI in their own on-premises data without moving information to the cloud.
Delivered as a first-party Azure Arc extension, it provides an integrated retrieval pipeline for ingesting, indexing, and querying enterprise content stored in datacenters or edge locations. With support for hybrid search, multi-modal data, evaluation tooling, and responsible AI controls, organizations can build RAG applications that remain fully compliant with data sovereignty requirements while reducing latency and improving accuracy. By running the full RAG workflow locally — from retrieval to generation — customers can create intelligent applications that leverage proprietary documents, images, and other unstructured data directly within their secure environments.

Expanding Application Capabilities at the Edge

AKS Container Apps on the Edge

A major milestone this year is the public preview of ACA on the edge, enabling teams to bring the simplicity of Azure Container Apps to Azure Local Medium. Developers can deploy AI-powered microservices, inference endpoints, and event-driven applications at the edge using the same ACA programming model used in Azure.

AKS Edge Essentials

The latest release improves cluster stability, enhances offline lifecycle operations, and strengthens both Linux and Windows support, making it easier to operate AKS at scale in constrained or intermittently connected environments.

Enhanced Storage, Telemetry, and Security for Hybrid AI

Distributed AI workloads require robust identity, storage, and observability patterns, and Ignite brings major updates in all three areas. The Arc-enabled Azure Monitor Pipeline improves telemetry ingestion across disconnected or segmented networks, caching data locally and syncing to Azure when connectivity is available. Workload Identity Federation for Arc enables secure, secret-less identity for workloads running at the edge.
And Azure Container Storage enabled by Arc, now expanded for AKS Arc clusters, provides a high-performance persistent storage layer suited for vector stores, embedding caches, cloud ingest, and mirroring.

Conclusion

Ignite 2025 represents a major step forward for AKS enabled by Azure Arc as both a hybrid Kubernetes platform and a hybrid AI application platform. With disconnected operations, edge-native Container Apps, improved GPU acceleration, KAITO for unified model serving, AI Foundry Local for offline model development, and a fully consistent operational model across cloud, datacenter, and edge, AKS Arc now enables organizations to run their most critical cloud-native and AI workloads anywhere they operate. We look forward to continuing to support customers as they build the next generation of hybrid and edge AI applications.

Workload Identity support for Azure Arc-enabled Kubernetes clusters now Generally Available!
We’re excited to announce that Workload Identity support for Azure Arc-enabled Kubernetes is now Generally Available (GA)! This milestone brings a secure way for applications on Arc-connected clusters running outside of Azure to authenticate to Azure services without managing secrets.

Traditionally, workloads outside Azure relied on static credentials or certificates to access Azure resources like Event Hubs, Azure Key Vault, and Azure Storage. Managing these secrets introduces operational overhead and security risks. With Microsoft Entra Workload ID federation, your Kubernetes workloads can now:

- Authenticate securely using OpenID Connect (OIDC) without storing secrets.
- Exchange trusted tokens for Azure access tokens to interact with services securely.

This means no more manual secret rotation and a reduced attack surface, all while maintaining compliance and governance.

How It Works

The integration uses the Kubernetes-native construct of Service Account Token Volume Projection and aligns with Kubernetes best practices for identity federation. The process involves a few concise steps:

1. Enable the OIDC issuer and workload identity on your Arc-enabled cluster using Azure CLI:

az connectedk8s connect --name "${CLUSTER_NAME}" --resource-group "${RESOURCE_GROUP}" --enable-oidc-issuer --enable-workload-identity

2. Configure a user-assigned managed identity in Azure to trust tokens from your Azure Arc-enabled Kubernetes cluster's OIDC issuer URL. This involves creating a federated identity credential that links the Azure identity with the Kubernetes service account.

3. Applications running in pods, using the annotated Kubernetes service account, can then request Azure tokens via Microsoft Entra ID and access the resources they’re authorized for (e.g., Azure Storage, Azure Key Vault).
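To make the "annotated Kubernetes service account" concrete, a minimal sketch might look like the following. The names, namespace, and image are placeholders, and the annotation and label shown follow the Microsoft Entra Workload ID conventions; check the workload identity documentation for the requirements of your cluster version.

```yaml
# Sketch only - names, namespace, and image are placeholders.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo-sa
  namespace: demo
  annotations:
    # Client ID of the user-assigned managed identity federated in step 2.
    azure.workload.identity/client-id: "<managed-identity-client-id>"
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
  namespace: demo
  labels:
    # Opts the pod into projected service account token volumes.
    azure.workload.identity/use: "true"
spec:
  serviceAccountName: demo-sa
  containers:
    - name: app
      image: <your-image>
```

With this in place, the Azure Identity client libraries inside the pod can exchange the projected token for an Azure access token without any stored secret.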
Supported platforms

We support a broad ecosystem of distributions, including:

- Red Hat OpenShift
- Rancher K3s
- AKS Arc (in preview)
- VMware Tanzu Kubernetes Grid (TKGm)

So, whether you’re running clusters in retail stores, manufacturing plants, or remote edge sites, you can connect them to Azure Arc and enable secure identity federation for your workloads to access Azure services.

Ready to get started? Follow our step-by-step guide on Deploying and Configuring Workload Identity Federation in Azure Arc-enabled Kubernetes to secure your edge workloads today!

Public Preview: Multicloud connector support for Google Cloud
We are excited to announce that the Multicloud connector is now in preview for GCP environments. With the Multicloud connector, you can easily connect your GCP projects and AWS accounts to Azure with the following capabilities:

- Inventory: Get an up-to-date, comprehensive view of your cloud assets across different cloud providers. With support for GCP services (Compute VM, GKE, Storage, Functions, and more), you can now gain insights into your Azure, AWS, and GCP environments in a single pane of glass. The agentless inventory solution periodically scans your GCP environment and projects the discovered resources in GCP as Azure resources, including all of the GCP metadata such as GCP labels. You can easily view, query, and tag these resources from a centralized location.
- Azure Arc onboarding: Automatically Arc-enable your existing and future GCP VMs so you can leverage Azure and Microsoft services, like Azure Monitor and Microsoft Defender for Cloud. Through the multicloud connector, the Azure Arc agent is automatically installed on machines that meet the prerequisites.

How do I get started?

You can easily set up the multicloud connector by following our getting started guide, which provides step-by-step instructions on creating the connector and setting up the permissions in GCP, which leverages OIDC federation.

What can I do after my connector is set up?

With the inventory offering, you can see and query all of your GCP and Azure resources via Azure Resource Graph. For Azure Arc onboarding, you can apply the Azure management services on your GCP VMs that are Arc-enabled. Learn more here. We are very excited about the expanded support for Google Cloud. Set up your multicloud connector now for free! Please let us know if you have any questions by posting on the Azure Arc forum or via Microsoft support. Here is the multicloud capabilities technical documentation.
Check out the Ignite session here!

Transforming City Operations: How Villa Park and DataON Deliver Real-Time Decisions with Edge RAG
In today’s connected world, customers expect instant, context-rich interactions, even in environments where cloud connectivity isn’t guaranteed. That’s where Retrieval-Augmented Generation (RAG) at the edge comes in. Edge RAG, enabled by Azure Arc, combines local data retrieval with intelligent reasoning to power conversational experiences that are fast, secure, and deeply personalized. Together with our Edge Infrastructure partners, we’re applying this technology to transform customer engagement, enabling real-time insights, autonomous workflows, and resilient operations across industries. Edge RAG is a core part of our Adaptive Cloud pillar for Edge AI, ensuring flexibility, resilience, and intelligence wherever customers operate. It uses Foundry language models and, together with Foundry Local, shapes Microsoft’s Foundry Anywhere commitment.

Today we’re excited to announce a public preview refresh of Edge RAG at Ignite 2025, bringing new capabilities to accelerate adoption and unlock even more value at the edge:

- Production-class LazyGraph RAG with industry-leading RAG inferencing quality
- High-fidelity parsing: OCR-enabled support for documents, tables, and images
- SharePoint Server integration (limited access; to register, click here)
- Multimodal search with image retrieval and image-rich outputs
- Chat UI upgrades and performance improvements
- Fully disconnected scenarios enabled by Azure Local for Disconnected Operations

The new features in this release are informed by our engagement with the City of Villa Park, in partnership with DataON, where we’ve applied Edge RAG to improve operational efficiency and deliver smarter, real-time services for urban environments. Together, we piloted a compliance assistant agentic workflow with OCR and LLM integration.
Villa Park: A Blueprint for Smart Cities

The City of Villa Park, California, faced challenges common to many municipalities: complex zoning regulations that slowed approvals, lengthy CEQA compliance processes requiring deep environmental analysis, and backlogs in accessory dwelling unit (ADU) permit reviews. Working with DataON, a Microsoft partner, and Microsoft, Villa Park deployed Edge RAG on Azure Local, creating a resilient, intelligent planning system that operates seamlessly, even offline. Environmental assessments that once required days are now completed in minutes.

The partnership between the City of Villa Park and DataON is a standout example of how municipalities and technology providers can co-innovate to solve real-world challenges. Ray Pascua, Villa Park’s Planning Manager, has led this transformation:

“Having the opportunity to utilize AI to perform research and retrieve large datasets specifically from the California Environmental Quality Act (CEQA) Guidelines (Statutory/Categorical Exemptions), and State law relative to Accessory Dwelling Units (ADUs), has been an overall positive experience. AI algorithm is a revolutionary medium that can streamline and improve workflow efficiencies by automating routine and repetitive planning-related tasks and analysis, and would be of particular value and benefit to local government agencies that have limited personnel and resources. While this cutting-edge technological tool is still evolving and has room to improve accuracy and speed, it certainly has a place in the realm of City Planning, as well as other land use development fields and disciplines.”

Howard Lo, VP of Sales & Marketing at DataON, shares:

“Our collaboration with Microsoft and the City of Villa Park showcases Azure Local's transformative potential for municipal government AI.
As a leading Azure Local partner, DataON has optimized our infrastructure to run Microsoft's Edge RAG solution, enabling Villa Park to address real planning challenges while maintaining data control and security. Working directly with Microsoft's engineering team and a forward-thinking city partner, we've proven that Azure Local delivers practical AI value for government operations. We're excited to help other municipalities achieve similar results on our Azure Local platform.”

Villa Park’s deployment leverages DataON’s Azure Local-certified hardware, Microsoft’s Arc-enabled AI stack, and the expertise of city planners to deliver:

- End-to-end digital workflows for CEQA, zoning, and ADU permitting
- Conversational AI interfaces that empower staff to ask questions and get cited, regulatory-compliant answers instantly
- Operational resilience with full offline support, ensuring continuity even during network outages
- A replicable model for other municipalities seeking to modernize planning and compliance

About DataON

DataON’s edge infrastructure, combined with Azure Local and Edge RAG, forms the core of this transformation. DataON provides robust hardware and delivers deployment, integration, and training services, ensuring a seamless Azure Local experience. Their close support helps organizations quickly adopt and confidently manage edge solutions, resulting in secure, high-performance, and scalable deployments for multi-site environments.

Let’s take a closer look at the features we’re announcing today:

Deep Search for Complex Reasoning with LazyGraph RAG

With the Ignite release, Edge RAG introduces Deep Search powered by LazyGraph RAG, a dynamic graph-based retrieval method that enables advanced, multi-document reasoning. This means Villa Park planners can now ask complex, multi-part questions that span zoning, CEQA, and ADU regulations, and Edge RAG will synthesize answers by connecting information from multiple sources in real time.
Image 1: Deep Search capabilities on Edge RAG

The system incrementally explores only the most relevant document chunks, reducing compute cost and latency while delivering comprehensive, cited responses. For Villa Park, this translates to resolving intricate regulatory scenarios, such as “What are the environmental constraints for ADUs in zones X, Y, and Z?”, with answers that reference and link multiple regulatory documents and historical decisions, all in a single query.

Advanced Document Parsing for Structured Data

Edge RAG’s advanced document parsing, introduced in this release, transforms how Villa Park’s planning documents are utilized. During data ingestion, the system now extracts not only free-form text but also tables, images, headings, and rich metadata. This includes full indexing of multi-page tables, column headers, and section context, with each chunk annotated by page number, section heading, and table index. As a result, planners can search for specific permit statistics, environmental impact scores, or compliance tables and retrieve results directly from structured data within city documents, enabling precise, source-attributed answers that were previously difficult or impossible to obtain.

Image 2: Advanced document parsing on Edge RAG

Enhanced Chat Experience

The new model-only chat mode allows staff to interact directly with the language model, bypassing contextual data for general queries or troubleshooting. This flexibility enables Villa Park staff to quickly switch between knowledge-based chat, grounded in city data, and model-only chat for training, testing, or handling ambiguous queries, streamlining both day-to-day operations and the onboarding of new team members.

Additional Edge RAG Preview Refresh Updates

We also improved Edge RAG based on customer feedback, adding these features:

- Agentic RAG for autonomous workflows: Systems can reason and act at the edge with less manual work.
- Full offline support: Operates and accesses data even without a network.
- SharePoint integration (private preview): Users will also be able to query Edge RAG directly over SharePoint, enabling enhanced information retrieval and analysis within their workflows.

Image 3: SharePoint as a data source on Edge RAG

- Performance optimizations: Query responses for every search type, excluding Deep Search, are now delivered in under 15 seconds on legacy A2 and A16 GPUs, a fivefold speed boost. Additionally, streaming image processing is one hundred times faster, handling 600 images continuously in just 36 seconds.

Since late May, Edge RAG has supported “bring your own model” (BYOM), allowing organizations to deploy their preferred language models, such as OpenAI GPT-4o or other advanced models, directly on their own infrastructure. This capability enables advanced features like deep search and hybrid multimodal search, while ensuring that sensitive data remains on-premises. BYOM empowers organizations to tailor Edge RAG’s AI capabilities to their unique compliance, performance, or customization requirements, maintaining full control over both data and model selection.

Security, Compliance, and Sustainability

Edge RAG is built for trust: data sovereignty ensures sensitive data remains on-premises, zero-trust architecture integrates with the Microsoft security stack, and compliance-ready design supports municipal, state, and industry regulations. Sustainability is also a priority, with energy-efficient edge hardware reducing carbon footprint.

Looking Ahead: The Future of Edge Intelligence

Edge RAG enables flexible edge intelligence deployment in various environments. Its adaptable design handles dynamic workloads, supporting frontline teams as operations evolve.
Instead of just speeding up processes or boosting connectivity, Edge RAG fosters innovative applications and smarter decision-making, helping organizations stay agile amid changing technology and business needs.

Resources

Explore these resources to learn more about Edge RAG, deployment best practices, customer stories, and technical documentation:

- Product documentation: Edge RAG Preview, enabled by Azure Arc Documentation | Microsoft Learn
- Get started: Quickstart: Install Edge RAG Preview enabled by Azure Arc
- Release notes: What's New in Edge RAG – Azure Arc
- Tech Talk distribution list: EdgeRAGTalk@microsoft.com (join the conversation, ask questions, and connect with the Edge RAG team)

Recommended Ignite sessions:

- BRK147: What’s new in Azure Local
- ODSP1467: Unlock your IT potential with Azure Local & DataON Plus Solutions
- BRK199: From cloud to edge: Building and shipping Edge AI apps with Foundry

Announcing General Availability of the Azure Key Vault Secret Store Extension
We are thrilled to announce that the Azure Key Vault Secret Store Extension (SSE) is now generally available for Arc-enabled on-premises Kubernetes, including clusters that you connect yourself and AKS Arc managed clusters. SSE automatically fetches secrets from an Azure Key Vault to the on-premises cluster for offline access. This means you can use Azure Key Vault to store, maintain, and rotate your secrets, even when running your Kubernetes cluster in a semi-disconnected state.

Key Benefits

- Offline Access: It’s important that workloads continue even when there are temporary disruptions to connectivity. This includes regular operation as well as the ability to restart pods while there’s a connectivity interruption. With SSE, workloads can access secrets from the local Kubernetes secrets store regardless of connectivity interruptions.
- Standard K8s Secret Access: Secrets can be accessed via volume mounting, environment variables, or the Kubernetes API. Workloads and ingress controllers do not need to be customized to access Azure Key Vault, and developers have options for how to access secrets.
- Security: SSE has limited permissions and leverages the latest Kubernetes security features so that cluster admins do not need to configure and limit permissions themselves. Secrets are critical business assets, so the Secret Store helps to secure them through isolated namespaces, role-based access control (RBAC) policies, federated identities for accessing AKV, and limited permissions for the secrets synchronizer.
- Scalability: SSE helps very large distributed deployments with hundreds or thousands of clusters to work with Azure Key Vault by spreading demand over time. By effectively caching secrets in the Kubernetes secrets store, SSE can also help to lower overall demand on Azure Key Vault instances.
- Low maintenance: Auto-updates can keep your SSE up to date with security and performance improvements as they are released.
Additionally, changes to configured secrets are even easier now with the new simplified configuration experience (in preview). With the simplified configuration style, a single custom resource is all that’s needed, reducing the effort and surface area for misconfigurations.

How to Use the Secret Store Extension

1. Install the Secret Store Extension to an Arc-enabled cluster or AKS managed on-premises cluster with configuration parameters such as sync intervals.
2. Configure an Azure managed identity that has permission to read secrets from AKV and federate it with a Kubernetes service account.
3. Configure a secret provider class custom resource (CR) in the cluster with connection details for the Key Vault.
4. Configure a secret sync custom resource (CR) in the cluster for each secret to be synchronized. (Steps 3 and 4 are now even easier with the new simplified configuration!)
5. Apply the CRs in the cluster and secrets will automatically begin syncing to the cluster.
6. Relax knowing that your configured secrets will be kept up to date on the cluster, as frequently as you want.

Try out the Secret Store Extension Today!

- Get started by visiting the SSE documentation
- Get hands-on even faster with the SSE Jumpstart Drop

Operate everywhere with AI-enhanced management and security
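As a rough illustration of the two custom resources involved, the pair might look something like the following. This is a sketch only: the vault name, tenant ID, client ID, secret names, and namespace are all placeholders, and the exact API versions and field names can vary by extension version, so treat the SSE documentation as the authoritative reference.

```yaml
# Illustrative sketch only - all names and IDs are placeholders;
# verify the CRD schemas against the SSE documentation.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: my-keyvault-spc
  namespace: my-namespace
spec:
  provider: azure
  parameters:
    clientID: "<managed-identity-client-id>"   # identity federated in step 2
    tenantID: "<tenant-id>"
    keyvaultName: "<key-vault-name>"
    objects: |
      array:
        - |
          objectName: my-secret
          objectType: secret
---
apiVersion: secret-sync.x-k8s.io/v1alpha1
kind: SecretSync
metadata:
  name: my-secret-sync
  namespace: my-namespace
spec:
  serviceAccountName: my-service-account
  secretProviderClassName: my-keyvault-spc
  secretObject:
    type: Opaque
    data:
      - sourcePath: my-secret   # secret name fetched via the provider class
        targetKey: my-secret    # key in the resulting Kubernetes secret
```

Applying both resources (step 5) causes the named Key Vault secret to be synchronized into a local Kubernetes secret that workloads can mount or read offline.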
Farzana Rahman and Dushyant Gill from Microsoft discuss new AI-enhanced features in Azure that make it simpler to acquire, connect, and operate with Azure's management offerings across multiple clouds, on-premises, and at the edge. Key updates include enhanced management for Windows servers and virtual machines with Windows Software Assurance, Windows Server 2025 hotpatching support in Azure Update Manager, simplified hybrid environment connectivity with Azure Arc gateway, a multicloud connector for AWS, and Log Analytics Simple Mode. Additionally, Azure Migrate Business Case helps compare the total cost of ownership, and new Copilot in Azure capabilities simplify cloud management and provide intelligent recommendations.

EOL of Azure Linux 2.0 on Azure Kubernetes Service enabled by Azure Arc
Azure Linux 2.0 will reach its End of Life (EOL) in July 2025

Azure Linux 2.0 (formerly CBL-Mariner) will reach its official End of Life (EOL) on July 31, 2025. After this date, it will no longer receive updates, security patches, or support from the Azure Linux team. Starting with the Azure Local 2507 release, Azure Kubernetes Service enabled by Azure Arc will ship Azure Linux 3.0 images for all supported Kubernetes versions. This change applies to all AKS enabled by Azure Arc deployments, as we have used Azure Linux 2.0 as the base image in the past. To maintain security compliance and ensure continued support, all AKS Arc customers must plan on migrating to Azure Linux 3.0 by upgrading their Azure Local instances to the 2507 release as soon as it is available.

What's new in Azure Linux 3.0

Approximately every three years, Azure Linux releases a new version of its operating system with upgrades to major components. Azure Linux 3.0 offers increased package availability and versions, an updated kernel, and improvements to performance, security, tooling, and developer experience. Some of the major components upgraded from Azure Linux 2.0 to 3.0 include:

Component      Azure Linux 3.0                                        Azure Linux 2.0        Release Notes
Linux Kernel   v6.6 (latest LTS)                                      v5.15 (previous LTS)   Linux 6.6
Containerd     v1.7.13 (v2.0 will also be offered once it is stable)  v1.6.26                Containerd Releases
SystemD        v255                                                   v250                   Systemd Releases
OpenSSL        v3.3.0                                                 v1.1.1k                OpenSSL 3.3

For more details on the key features and updates in Azure Linux 3.0, see the 3.0 GitHub release notes.

Upgrading to Azure Linux 3.0

1. Once the Azure Local 2507 release is available, update to 2507.
2. Once your Azure Local instance has upgraded, you can then upgrade your Kubernetes clusters. You can choose to remain on the same Kubernetes version and provide the same version number in the aksarc upgrade command.
3. Once the upgrade is completed, you can check the kernel version on your Linux nodes.
Kernel version v6.6 is the latest Azure Linux 3.0 version.

Sample command:

kubectl --kubeconfig /path/to/aks-cluster-kubeconfig get nodes -o wide

Sample output:

NAME              STATUS   ROLES           AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE            KERNEL-VERSION   CONTAINER-RUNTIME
moc-lsbe393il9d   Ready    control-plane   3h14m   v1.30.4   100.72.248.133   <none>        CBL-Mariner/Linux   6.6.92.2         containerd://1.6.26
moc-lzwagtkjah5   Ready    control-plane   3h12m   v1.30.4   100.72.248.134   <none>        CBL-Mariner/Linux   6.6.92.2         containerd://1.6.26

FAQs

Is Azure Linux the same as Mariner? Yes, Mariner was rebranded to Azure Linux. We will slowly update our documentation and VM/container image tags to reflect this name change.

When did Azure Linux 3.0 GA? Azure Linux 3.0 became generally available in August 2024.

When will Azure Linux 3.0 reach End of Life (EOL)? We currently support each major version for 3 years after it becomes generally available. Azure Linux 3.0 will reach EOL in Summer 2027.

How to keep in touch with the AKS Arc team

For updates, feedback, and feature requests related to AKS Arc:

- Ask questions and submit feedback via AKS Arc GitHub Issues
- Partners with support questions can reach out to aks-hci-talk@microsoft.com

Deploy a Kubernetes Application Programmatically Using Terraform and CLI
In our previous blog post, we explored the benefits of Kubernetes apps along with an introduction to deploying them programmatically. Today we will cover deploying a Kubernetes application programmatically using Terraform and the CLI. These deployment methods can streamline your workflow and automate repetitive tasks.

Deploying your Kubernetes Application using Terraform

This walkthrough assumes you have previous knowledge of Terraform. For additional information and guidance on using Terraform to provision a cluster, please refer here.

Prerequisites

Before we begin, ensure you have the following:

- Terraform
- Azure CLI

Sample Location

You can find the Terraform sample we will be using at this location: Terraform Sample

Prepare the Environment

First, initialize Terraform in the current directory where you have copied the k8s-extension-install sample by running the following command:

terraform init

In the directory, you will find two example tfvars files. These files can be used to deploy the application with different configurations:

- azure-vote-without-config.tfvars - Deploy the application with the default configuration for azure-vote.
- azure-vote-with-config.tfvars - Deploy/update the application with a custom configuration for azure-vote.

Before you test run the sample tfvars files, update the following in the tfvars files:

- cluster_name - The name of the AKS cluster.
- resource_group_name - The name of the resource group where the AKS cluster is located.
- subscription_id - The subscription ID where the AKS cluster is located.

Deploy the Application

To deploy the application with the default configuration for azure-vote, run:

terraform apply -var-file="azure-vote-without-config.tfvars"

To deploy or update the application with a custom configuration for azure-vote, use:

terraform apply -var-file="azure-vote-with-config.tfvars"

Conclusion

And that's it! You've successfully deployed your Kubernetes application programmatically using Terraform.
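For reference, a filled-in tfvars file using the three variables called out in the walkthrough above might look like this; the values shown are placeholders to substitute with your own environment's details.

```hcl
# Placeholder values - replace with your cluster name, resource group,
# and subscription ID before running terraform apply.
cluster_name        = "my-aks-cluster"
resource_group_name = "my-resource-group"
subscription_id     = "00000000-0000-0000-0000-000000000000"
```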
This process can drastically reduce the time and effort involved in managing and scaling your applications. By using Terraform, you can ensure that your deployment is consistent and repeatable, making it easier to maintain your infrastructure as code.

Deploying a Kubernetes Application from Azure CLI

Deploying a Kubernetes application using Azure CLI can seem daunting, but we’re here to make it simple and accessible. Follow these steps, and you’ll have your azure-vote application up and running in no time!

Prerequisites

Before we get started, ensure you have the following:
Azure CLI installed on your machine

Deploying the Sample Azure-Vote Application from the Marketplace

Step 1: Log in to Azure

Open your terminal and log in to your Azure account by running:

az login

Step 2: Set Your Subscription

Specify the subscription you want to use with:

az account set --subscription <subscriptionId>

Step 3: Deploy the Azure-Vote Application

Now, deploy the azure-vote application to your Kubernetes cluster with the following command:

az k8s-extension create --name azure-vote --scope cluster `
  --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters `
  --extension-type commercialMarketplaceServices.AzureVote `
  --plan-name azure-vote-paid `
  --plan-product azure-vote-final-1 `
  --plan-publisher microsoft_commercial_marketplace_services `
  --configuration-settings title=VoteAnimal value1=Cats value2=Dogs

Updating Configuration Settings

If you want to update the configuration settings of the azure-vote application, you can do so easily. Use the following command to change the configuration settings:

az k8s-extension update --name azure-vote `
  --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters `
  --configuration-settings value1=Elephant value2=Horse

And there you have it! By following these steps, you can deploy and update the azure-vote application on your Kubernetes cluster using Azure CLI.
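After the deployment completes, you may want to confirm the extension reached a healthy state. A sketch of one way to do this, using the same placeholder names as above, is to query the provisioning state with `az k8s-extension show`:

```shell
# Check the provisioning state of the azure-vote extension; "Succeeded" indicates a healthy install.
az k8s-extension show --name azure-vote `
  --cluster-name <clusterName> --resource-group <resourceGroupName> `
  --cluster-type managedClusters --query provisioningState
```

The `--query` flag is the standard Azure CLI JMESPath filter, so you can drop it to inspect the full extension resource instead.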
Conclusion

Deploying Kubernetes applications using Azure CLI is a powerful way to manage and scale your applications. The process described above helps ensure your deployments are consistent and repeatable, simplifying the maintenance of your infrastructure as code.
We’re excited to announce the General Availability of workload orchestration, a new Azure Arc capability that simplifies how enterprises deploy and manage Kubernetes-based applications across distributed edge environments.

Organizations across industries such as manufacturing, retail, and healthcare face challenges in managing varied site-specific configurations. Traditional methods often require duplicating app variants, an error-prone, costly, and hard-to-scale approach. Workload orchestration solves this with a centralized, template-driven model: define configurations once, deploy them across all sites, and allow local teams to adjust within guardrails. This ensures consistency, improves speed, reduces errors, and scales with your CI/CD workflows, whether you’re supporting 200+ factories, offline retail clusters, or regionally compliant hospital apps.

Fig 1.0: Workload orchestration – Key features

Key benefits of workload orchestration include:

Solution Configuration & Template Reuse
Define solutions, environments, and multiple hierarchy levels using reusable templates. Key-value stores and schema-driven inputs allow flexible configurations and validations, with role-based access to maintain control.

Context-Aware Deployments
Automatically generate deployable artifacts based on selected environments (Dev, QA, Prod) and push changes safely through a GitOps flow, enabling controlled rollouts and staged testing across multiple environments.

Deploying at Scale in Constrained Environments
Deploy workloads across edge and cloud environments with built-in dependency management and preloading of container images (a.k.a. staging) to minimize downtime during narrow maintenance windows.

Bulk Deployment and GitOps-Based Rollouts
Execute large-scale deployments, including shared or dependent applications, across multiple sites using Git-based CI/CD pipelines that validate configurations and enforce policy compliance before rollout.
End-to-End Observability
Kubernetes diagnostics in workload orchestration provide full-stack observability by capturing container logs, Kubernetes events, system logs, and deployment errors, integrated with Azure Monitor and OpenTelemetry pipelines for proactive troubleshooting across edge and cloud environments.

Who Is It For?

Workload orchestration supports two primary user personas:

IT Admins and DevOps Engineers: Responsible for initial setup and application configuration via CLI.
OT Operators: Use the portal for day-to-day activities like monitoring deployments and adjusting configurations.

Resources for You to Get Started

You can start using workload orchestration by visiting the Azure Arc portal and following the documentation. We encourage you to try it with a small application deployed to a few edge sites. Create a template, define parameters like site name or configuration toggles, and run a deployment. As you grow more comfortable, expand to more sites or complex applications.