- How do AKS and AKS on Azure Stack HCI compare? This post updates the original blog comparing AKS in Azure and AKS on Azure Stack HCI, published a year ago. Since then, we've released multiple features and fixes aimed at improving AKS consistency between Azure and on-premises, which warranted a fresh blog 😊 Features in preview are marked with an asterisk (*).

| Feature | AKS on Azure Stack HCI & AKS on Windows Server | AKS |
|---|---|---|
| **Kubernetes management cluster / AKS host** | A Cluster API based hosted Kubernetes offering. A management Kubernetes cluster, running in your datacenter and managed by the infrastructure administrator, manages the Kubernetes workload clusters. | A managed Kubernetes offering. The AKS control plane is hosted and managed by Microsoft; worker nodes are created in customer subscriptions. |
| **Kubernetes target cluster (lifecycle operations)** | | |
| Cloud Native Computing Foundation (CNCF) certification | Yes | Yes |
| Who manages the cluster? | Managed by you | Managed by you |
| Where is the cluster located? | In your datacenter alongside your AKS hybrid management cluster, on Azure Stack HCI 21H2, Windows Server 2019 Datacenter, Windows Server 2022 Datacenter, Windows 10/11 IoT Enterprise*, Windows 10/11 Enterprise*, or Windows 10/11 Pro* | Azure cloud |
| Cluster lifecycle management tools (create, scale, update, and delete clusters) | PowerShell (PS); Windows Admin Center (WAC); Az CLI*; Azure Portal*; ARM templates* | Az CLI; Az PowerShell; Azure Portal; Bicep; ARM templates |
| Can you use kubectl and other open-source Kubernetes tools? | Yes | Yes |
| Workload cluster updates | Kubernetes version upgrades through PowerShell or WAC, initiated by you. Node OS image updates initiated by you. Updates happen at the cluster level: control plane nodes and node pools together. | Azure CLI, Azure PS, Portal, ARM templates, GitHub Actions; OS image patch upgrades; automatic upgrades; planned maintenance windows |
| Kubernetes versions | Continuous updates to supported Kubernetes versions. For the latest version support, visit AKS hybrid releases on GitHub. | Continuous updates to supported Kubernetes versions. For the latest version support, run az aks get-versions. |
| Can you start/stop clusters to save costs? | Yes, by stopping the underlying failover cluster | Yes |
| Azure Fleet Manager integration | Not yet | Yes* |
| Terraform support | Not yet | Yes |
| **Node pools** | | |
| Linux and Windows node pools in the same cluster? | Yes. Linux nodes: CBL-Mariner. Windows nodes: Windows Server 2019 Datacenter, Windows Server 2022 Datacenter. | Yes. Linux nodes: Ubuntu 18.04, CBL-Mariner. Windows nodes: Windows Server 2019 Datacenter, Windows Server 2022 Datacenter. |
| Container runtime | containerd on both Linux and Windows nodes | containerd on both Linux and Windows nodes |
| Can you scale node pools? | Manually; cluster autoscaler; vertical pod autoscaler | Manually; cluster autoscaler; vertical pod autoscaler |
| Horizontal pod autoscaler | Yes | Yes |
| Virtual nodes (Azure Container Instances) | No | Yes |
| Can you upgrade a node pool? | No. Upgrading individual node pools is not supported; all upgrades happen at the cluster level. | Yes, you can perform node pool specific upgrades in an AKS cluster. |
| GPU-enabled node pools | Yes* | Yes |
| Azure Container Registry | Yes | Yes |
| KEDA support | Not yet | Yes* |
| **Networking** | | |
| Who creates and manages the networks? | All networks (for both the management cluster and target clusters) are created and managed by you | By default, Azure creates the virtual network and subnet for you. You can also choose an existing virtual network for your AKS clusters. |
| Supported network options | DHCP networks with/without VLAN ID; static IP networks with/without VLAN ID; SDN support for AKS on Azure Stack HCI | Bring your own Azure virtual network for AKS clusters |
| Load balancers | HAProxy (default) runs in a separate VM in the target cluster; kubeVIP runs as a Kubernetes service on the control plane node; or bring your own load balancer. Load balancers are always given static IP addresses from a customer VIP pool to ensure application and cluster availability. You can create multiple load balancer instances (active-passive) for high availability. | Azure Load Balancer, Basic or Standard SKU; an internal load balancer can also be used. By default, the load balancer IP address is tied to the load balancer ARM resource; you can also assign a static public IP address directly to your Kubernetes service. |
| CNI/network plugin | Calico (default). Network policies are covered in the Security and authentication section. | Azure CNI; Calico; Azure CNI Overlay; bring your own CNI. Network policies are covered in the Security and authentication section. |
| Ingress controllers | None built in, but you can use 3rd-party add-ons such as NGINX. 3rd-party add-ons are not covered by Microsoft's support policy. | Support for NGINX with the web application routing add-on |
| Egress controls | Egress is controlled by network policies; by default, all outbound traffic from pods is blocked. You can deploy additional egress controls and policies. | You can use Azure Policy and NSGs to control network flow, or use Calico policies. You can also use Azure Firewall and Azure security groups. |
| Egress types | Egress types and options depend on your network architecture. | Azure Load Balancer, managed NAT gateway, and user-defined routes are the supported egress types. |
| Customize CoreDNS | Allowed | Allowed |
| Service mesh | Yes, Open Service Mesh (OSM) through Azure Arc-enabled Kubernetes. 3rd-party add-ons (Istio, etc.) are not covered by Microsoft's support policy. | Open Service Mesh; Marketplace offering available for Istio |
| **Storage** | | |
| Where is the storage provisioned? | On-premises | Azure Storage. Azure Files and Azure Disk premium CSI drivers are deployed by default; you can also deploy any custom storage class. |
| Supported persistent volume types | ReadWriteOnce; ReadWriteMany | ReadWriteOnce; ReadWriteMany |
| Do the storage drivers support Container Storage Interface (CSI)? | Yes | Yes |
| Dynamic provisioning | Yes | Yes |
| Volume resizing | Yes | Yes |
| Volume snapshots | No | Yes |
| **Security and authentication** | | |
| How do you access your Kubernetes cluster? | Certificate-based kubeconfig (default); AD-based kubeconfig; Azure AD and Kubernetes RBAC; Azure AD and Azure RBAC* | Certificate-based kubeconfig (default); Azure AD and Kubernetes RBAC; Azure AD and Azure RBAC |
| Network policies | Yes, Calico network policies are supported | Yes, Calico and Azure CNI network policies are supported |
| Limit source networks that can access the API server | Yes, by using VIP pools | Yes, by using the --api-server-authorized-ip-ranges parameter and private clusters |
| Certificate rotation and secrets encryption | Yes | Yes |
| Private clusters | Not supported yet | Yes, you can create private AKS clusters |
| Secrets Store CSI driver | Yes | Yes |
| Disk encryption | Yes, via BitLocker | Disks are encrypted on the storage side with platform-managed keys, with support for customer-provided keys. Hosts and locally attached disks can also be encrypted with encryption at host. |
| gMSA v2 support for Windows containers | Yes | Yes |
| Azure Policy | Yes, through Azure Arc-enabled Kubernetes | Yes |
| Azure Defender | Yes, through Azure Arc-enabled Kubernetes* | Yes |
| **Monitoring and logging** | | |
| Collect logs | Yes, through PS and WAC. All logs are collected: management cluster, control plane nodes, and target clusters. | Yes, through the Azure Portal, Az CLI, etc. |
| Support for Azure Monitor | Yes, through Azure Arc-enabled Kubernetes | Yes |
| 3rd-party add-ons for monitoring and logging | | AKS works with Azure managed Prometheus* and Azure managed Grafana* |
| Subscribe to Azure Event Grid events | Yes, via Azure Arc-enabled Kubernetes* | Yes |
| **Develop and run applications** | | |
| Azure App Service | Yes, through Azure Arc-enabled Kubernetes* | Yes |
| Azure Functions | Yes, through Azure Arc-enabled Kubernetes* | Yes |
| Azure Logic Apps | Yes, through Azure Arc-enabled Kubernetes* | You can create App Service, Functions, and Logic Apps directly on Azure instead of on AKS |
| Develop applications using Helm | Yes | Yes |
| Develop applications using Dapr | Yes, through Azure Arc-enabled Kubernetes* | Yes |
| DevOps | Azure DevOps, GitHub Actions, and GitOps Flux v2, all via Azure Arc-enabled Kubernetes; GitOps Flux v2 through Azure Arc-enabled Kubernetes is free for AKS-HCI customers. 3rd-party add-on: Argo CD (not covered by Microsoft's support policy). | Azure DevOps; GitHub Actions; GitOps Flux v2 |
| Product pricing | With Azure Hybrid Benefit, you can use AKS-HCI at no additional cost. Without Azure Hybrid Benefit, pricing is based on the number of workload cluster vCPUs; the management cluster, control plane nodes, and load balancers are free. | Unlimited free clusters; pay for on-demand compute of the worker nodes. A paid tier is available with an uptime SLA and support for 5,000 nodes. |
| Azure support | AKS-HCI is supported by the Windows Server support organization, aligned with Arc-enabled Kubernetes and Azure Stack HCI. You can open support requests through the Azure portal and other channels such as Premier Support. | AKS is supported through enterprise-class support in the Azure team. You can open support requests in the Azure portal. |
| SLA | No SLA is offered, since AKS-HCI runs in your environment. | Paid uptime SLA clusters for production, with a fixed cost for the API plus worker node compute, storage, and networking costs. |
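Both platforms above support Calico network policies, and on the hybrid side all outbound pod traffic is blocked by default until a policy opens it. As a hedged sketch of what such a policy looks like (the namespace `demo` and label `app=web` are invented for illustration; this is generic Kubernetes NetworkPolicy syntax, not a platform-specific feature), the following writes a policy that re-opens DNS and HTTPS egress for the selected pods:

```shell
# Illustrative only: an egress NetworkPolicy of the kind Calico enforces.
# Namespace and labels are hypothetical placeholders.
cat <<'EOF' > allow-web-egress.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-egress
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
    - Egress
  egress:
    - ports:                 # no "to" peer given, so any destination
        - protocol: UDP
          port: 53           # DNS
        - protocol: TCP
          port: 443          # HTTPS
EOF
echo "wrote allow-web-egress.yaml"
```

Because both offerings support kubectl and open-source tooling, applying it is the same on either: `kubectl apply -f allow-web-egress.yaml`.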
- Announcing landing zone accelerator for Azure Arc-enabled Kubernetes. Following our release a few months back of the new landing zone accelerator for Azure Arc-enabled servers, today we're launching the Azure Arc-enabled Kubernetes landing zone accelerator within the Azure Cloud Adoption Framework.
- Azure Best Practices delivered to machines anywhere with new Azure Arc and Automanage integration. Tired of manually onboarding and configuring Azure services for your Arc-enabled servers? With Azure Automanage Machine Best Practices, you can point, click, set, and forget to extend Azure security, monitoring, and governance services to servers anywhere.
- Announcing Landing Zone Accelerator for Azure Arc-enabled SQL Managed Instance. With both the Azure Arc-enabled servers and Kubernetes landing zone accelerators already generally available, today we're launching the Azure Arc-enabled SQL Managed Instance landing zone accelerator within the Azure Cloud Adoption Framework.
- Give us your thoughts on running Kubernetes anywhere for a chance to win a $300 USD gift card! Do you work with Kubernetes? Are you interested in improving the Kubernetes experience across hybrid and multicloud environments? Take the survey for a chance to win a $300 USD virtual gift card! Must be 18 or older. The survey ends on May 10, 2023.
- Challenges of Containerized App Portability in Kubernetes

Introduction

As organizations embrace containerization and Kubernetes for their applications, the need for seamless portability across the Kubernetes ecosystem, coupled with cloud object storage and local persistence, has become a pressing concern. In this blog post, we will dive into the core problem and dissect the complex challenges that customers face in achieving containerized app portability.

Challenges

Local Persistence and High Availability

Local persistence is crucial, but ensuring highly available Kubernetes volumes that can tolerate hardware failures presents a challenge. Organizations need a robust solution to maintain continuous operation and data integrity.

Coordinating Consistency Across Apps

Coordinating data consistency across all edge applications sharing data is imperative. Ensuring that data changes are propagated uniformly and reliably is a significant challenge in a distributed and dynamic Kubernetes environment. Once cloud storage is involved in your data management strategy, consistency handling between edge data bound for cloud processing becomes even more challenging.

Data Upload at the Edge

A suite of containerized apps deployed at the edge needs to upload data to cloud storage, introducing challenges related to data transfer, synchronization, and efficient utilization of bandwidth.

Avoiding Cloud Storage API Coding for Every App

It is not feasible for every app in the suite to code directly to the cloud storage API. Organizations need solutions that abstract this complexity, providing a unified interface for different applications without compromising on functionality.

Disconnect/Reconnect Logic

The need for disconnect/reconnect logic to handle network disconnections introduces an additional layer of complexity. Applications must seamlessly adapt to network disruptions, ensuring uninterrupted operation and data flow.
Shared Filesystem Capability

Implementing shared filesystem capability on top of high-availability volumes is essential. Achieving this requires careful orchestration to avoid data inconsistencies and conflicts in a distributed environment.

Addressing the Challenges

Robust High-Availability Strategies

Implement robust strategies for local persistence and high availability within Kubernetes clusters, minimizing the impact of compute hardware failures and maintaining continuous operations.

Unified Filesystem Abstraction

A unified filesystem abstraction ensures consistency across applications without compromising the benefits of distributed storage.

Edge-Focused Data Solutions

Explore solutions tailored for edge computing that efficiently manage data upload, synchronization, and bandwidth utilization, ensuring optimal performance in edge environments.

Smart Network Handling

Implement intelligent disconnect/reconnect logic that enables applications to handle network disruptions gracefully. This ensures uninterrupted operation and minimizes the impact of transient network issues. If you choose to cloud-enable your application, you must also account for cloud unavailability.

Infrastructure Capability Differences between Kubernetes Environments

Application developers must be aware of the advertised capabilities of differing cloud and edge environments, which are often not homogeneous. Taking an application from a dev/test environment to a different production environment typically requires additional deployment customization.

Conclusion

In the landscape of containerized applications across Kubernetes, achieving portability across the ecosystem while leveraging cloud object storage and local persistence is a multifaceted challenge. By understanding and addressing the specific challenges related to high availability, shared filesystems, data upload, and network handling, organizations can pave the way for a more efficient and resilient containerized app deployment.
As the industry continues to evolve, staying up to date on emerging solutions and best practices is essential for navigating the complexities of Kubernetes and ensuring a portable and robust application ecosystem. Check back shortly for a follow-on blog post talking about how you can build deployments that address some of these challenges.
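In Kubernetes terms, the persistence and shared-filesystem challenges above usually surface as a request for a ReadWriteMany volume. A minimal sketch, assuming a CSI driver that offers an RWX-capable storage class (the class name `shared-rwx` and the claim name are hypothetical):

```shell
# Conceptual sketch of "shared filesystem on top of HA volumes": a
# PersistentVolumeClaim requesting ReadWriteMany so several pods in the
# app suite can mount the same data. Names are placeholders.
cat <<'EOF' > shared-data-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany          # shared across pods and nodes
  storageClassName: shared-rwx
  resources:
    requests:
      storage: 10Gi
EOF
echo "wrote shared-data-pvc.yaml"
```

Pods across the suite can then mount the same claim; whether the volume actually tolerates node failures depends entirely on the backing storage driver, which is exactly the high-availability challenge described above.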
- Launching the Arc Jumpstart Newsletter: October 2024 Edition. We are excited to kick off this monthly newsletter, where you can get the latest updates on everything happening in the Arc Jumpstart realm. Whether you are new to the community or a regular Jumpstart contributor, this newsletter will keep you informed about new releases, key events, and opportunities to get involved within the Azure Adaptive Cloud ecosystem. Check back each month for new ways to connect, share your experiences, and learn from others in the Adaptive Cloud community.
- Announcing the Azure Arc ISV Partner Program at Ignite

Empowering Partners and Enhancing Customer Experience

We are thrilled to introduce the newly launched Azure Arc ISV Partner Program at Ignite! This innovative and growing ecosystem partner program allows ISVs to publish offers on the Azure Marketplace that can be deployed to Arc-enabled Kubernetes clusters. Customers can now access validated, enterprise-grade applications and tools to enhance their Azure Arc development, while ISVs benefit from deeper integration with Azure Arc services and access to the Arc-enabled customer base. All marketplace images have been validated across the Azure Arc platform with the support of both Microsoft and partner teams. With the solutions each partner has made available on the Azure Marketplace, the integration with Azure Arc offers central governance to build robust applications with consistent security and reliability for any hybrid deployment.

What is Azure Arc?

Azure Arc is a platform that extends Azure to datacenters, on-premises, edge, or even multi-cloud environments. It simplifies governance and management by delivering the consistency of the Azure platform. The ability to create offerings for Azure Arc in the marketplace is a significant benefit to our partners, allowing them to integrate with Azure services and tools and access a large and diverse customer base. Azure Arc also provides validated applications for customers to manage their Kubernetes clusters on our platform. Edge developers leverage the open-source community to build their enterprise applications, and we aim to provide them with a one-stop shop in the Azure Marketplace, offering a choice of Kubernetes-based building blocks needed to develop their applications.
Meet our partners

With our Ignite launch, we have built the foundation of an ecosystem designed to bring the best capabilities and innovations to our marketplace, focused on leading building-block categories: databases, big data/analytics, and messaging. We are excited to introduce our esteemed partners (CloudCasa, MongoDB, Redis, MinIO, DataStax), who have Arc-enabled their applications and are now available on the Azure Marketplace. Here's a closer look at their offerings:

CloudCasa

CloudCasa is a leading provider of Kubernetes backup and recovery solutions. By Arc-enabling their application, CloudCasa offers robust, secure, and easy-to-use backup services for Kubernetes, ensuring the protection and availability of critical data. With CloudCasa, your Arc-enabled Kubernetes deployments across hybrid environments are fully protected, ensuring that your data is safe and recoverable, no matter the scenario. CloudCasa's integration with Azure Arc offers three key components: handling persistent volumes with or without CSI snapshots, unified management and monitoring across environments, and disaster recovery and migration for AKS hybrid. One way CloudCasa manages persistent storage is by natively integrating with Container Storage Interface snapshots, ensuring that all your persistent volumes can be captured and protected without interrupting your workloads. CloudCasa also provides a powerful disaster recovery and migration solution. For AKS on Azure Stack HCI, this means you can confidently deploy hybrid and edge clusters, knowing that you have a trusted solution to recover from any disaster, or even perform seamless migrations from edge to cloud or vice versa. To explore CloudCasa's full capabilities for Azure Arc-enabled Kubernetes clusters, visit the CloudCasa Marketplace listing for Azure Arc or find out more at cloudcasa.io. For personalized assistance, feel free to contact casa@cloudcasa.io.
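For context, the CSI snapshots that CloudCasa builds on are ordinary Kubernetes VolumeSnapshot objects. A hedged example (the snapshot class `csi-snapclass` and the claim name `app-data` are placeholders; in practice CloudCasa's own tooling drives this API for you):

```shell
# Illustrative only: a CSI VolumeSnapshot capturing a claim named app-data.
# Class and claim names are hypothetical.
cat <<'EOF' > app-data-snapshot.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: app-data-snapshot
spec:
  volumeSnapshotClassName: csi-snapclass
  source:
    persistentVolumeClaimName: app-data
EOF
echo "wrote app-data-snapshot.yaml"
```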
DataStax

DataStax is a leading provider of Gen AI solutions for AI developers. With DataStax HCD (Hyper-Converged Database), businesses can harness the power of Apache Cassandra, the highly scalable and resilient NoSQL database, to manage large volumes of structured and vector data with ease. By Arc-enabling their applications, DataStax HCD offers users a single pane of glass for streamlined deployment, monitoring, and lifecycle management of their entire infrastructure. Ensuring consistent operations across on-premises, Azure, and multi-cloud environments makes Azure with HCD an ideal choice for mission-critical applications. The combination of Azure Arc central governance and Mission Control, DataStax's operations platform, on HCD allows provisioning of resources both on-premises and in the cloud. HCD brings database management to Azure Arc, along with the ability to support workloads and AI systems at scale with no single point of failure. DataStax HCD brings three key benefits to Azure Arc: data replication and distribution, node repair, and vector search capabilities to enhance your enterprise data workloads. To learn more about the full capabilities of DataStax HCD, please visit DataStax HCD for Azure Arc or the HCD product page.

MongoDB

MongoDB Enterprise Advanced (EA) empowers customers to securely self-manage their MongoDB deployments on-premises or in hybrid environments, driving operational efficiency, performance, and control to meet specific infrastructure needs. Now with Arc enablement, MongoDB EA allows developers to build, scale, and innovate faster by providing a robust and dynamic database solution across a multitude of environments. MongoDB's document data model is intuitive and powerful, and it can easily handle a variety of data types and use cases efficiently. MongoDB EA includes advanced automation, reliable backups, monitoring capabilities, deployment updates, and integration with various Kubernetes services.
The MongoDB integration with Azure Arc provides three key benefits: support for multi-Kubernetes-cluster deployments, centralized provisioning through the Azure portal, and leveraging the resilience of Kubernetes deployments. As Azure Arc provides centralized management of Kubernetes across many environments, MongoDB EA adds value with databases that can run across multiple Kubernetes clusters. To explore MongoDB EA on the Azure Marketplace for Azure Arc, and to learn more about the full potential of this offering, please visit MongoDB Enterprise Advanced for Azure Arc. For licensing inquiries and to learn more about MongoDB Enterprise Advanced, please visit MongoDB's website.

Redis

Redis Software, an enterprise-grade, real-time data platform, offers an in-memory data structure store used as a cache, vector database, document database, streaming engine, and message broker. With its Arc-enabled application, Redis Software provides ultra-fast data access, real-time analytics, and seamless scalability, making it ideal for applications requiring high performance and low latency. Integrating with Azure Arc allows users to deploy Redis workloads across on-premises, cloud, and hybrid infrastructure. The benefits Redis Software brings to Azure Arc include support for multi-core deployments, Active-Active geo-distribution, data tiering, high availability with seamless failover, and multiple levels of on-disk persistence. Because they are integrated with Arc, these Redis instances can be located on-premises or in the cloud and managed centrally in the Azure portal. To explore Redis Software on the Azure Marketplace for Azure Arc, please visit Redis Software for Kubernetes for Azure Arc. You can learn more about licensing at Redis Software.

MinIO

MinIO AIStor is the standard for building large-scale AI data infrastructure.
It is a software-defined, S3-compatible object store that is optimized for the private cloud but will run anywhere, from the public cloud to the edge. Enterprises use AIStor to deliver against artificial intelligence, machine learning, analytics, application, backup, and archival workloads, all from a single platform. It was built for the cloud operating model, so it is native to the technologies and architectures that define the cloud: containerization, orchestration with Kubernetes, microservices, and multi-tenancy. By Arc-enabling their application, MinIO ensures that Azure users can experience the unmatched scalability, robust security, and lightning-fast storage performance that has made MinIO the most widely integrated object store in the market today. Users can now run these hybrid or multi-cloud deployments on Azure Arc and manage them in a single pane of glass in the Azure portal. To deploy and learn more about MinIO AIStor on Azure Arc, please visit MinIO AIStor for Azure Arc. For further information, please visit MinIO | AI Storage is Object Storage.

Become an Arc-Enabled Partner

These partners have collaborated with Microsoft to join our ISV ecosystem, providing resilient and scalable applications readily accessible to our Azure Arc customers via the Azure Marketplace. Joining forces with Microsoft enables partners to stay ahead of the technological curve, strengthen customer relationships, and contribute to transformative digital changes across industries. We look forward to expanding this program to include more ISVs, enhancing the experience for customers using Arc-enabled Kubernetes clusters. As we continue to expand the Azure Arc ISV Partner Program, stay tuned for more blogs on new partners being published to the Azure Marketplace. To learn more about the Azure Arc ISV Partner Program, please reach out to us at https://aka.ms/AzureArcISV.
- Public Preview: Workload orchestration simplifying edge deployment at scale

Introduction

As enterprises continue to scale their edge infrastructure, IT teams face growing complexity in deploying, managing, and monitoring workloads across distributed environments. Today, we are excited to announce the public preview of workload orchestration, a purpose-built platform that redefines configuration and deployment management across enterprise environments. Workload orchestration is designed to help you centrally manage configurations for applications deployed in diverse locations (from factories and retail stores to restaurants and hospitals) while empowering on-site teams with flexibility.

Modern enterprises increasingly deploy Kubernetes-based applications at the edge, where infrastructure diversity and operational constraints are the norm. Managing these with site-specific configurations traditionally requires creating and maintaining multiple variants of the same application for different sites, a process that is costly, error-prone, and hard to scale. Workload orchestration addresses this challenge by introducing a centralized, template-driven approach to configuration. With this platform, central IT can define application configurations once and reuse them across many deployments, ensuring consistency and compliance, while still allowing site owners to adjust parameters for their local needs within controlled guardrails. The result is a significantly simplified deployment experience that maintains both central governance and localized flexibility.

Key features of workload orchestration

The public preview release includes several key innovations and capabilities designed to simplify how IT manages complex workload deployments:

Powerful Template Framework & Schema Inheritance: Define application configurations and schemas one time and reuse or extend them for multiple deployments. Workload orchestration introduces a templating framework that lets central IT teams create a single source of truth for app configurations, which can then be inherited and customized by different sites as needed. This ensures consistency across deployments and streamlines the authoring process by eliminating duplicate work.

Dependent Application Management: Manage and deploy interdependent applications seamlessly using orchestrated workflows. The platform supports configuring and deploying apps with dependencies via a guided CLI or an intuitive portal experience, reducing deployment friction and minimizing errors when rolling out complex, multi-tier applications.

Custom Validation Rules: Ensure every configuration is right before it's applied. Administrators can define pre-deployment validation expressions (rules) that automatically check parameter inputs and settings. When site owners customize configurations, all inputs are validated against predefined rules to prevent misconfigurations, helping to reduce rollout failures.

External Validation Rules: External validation enables you to verify the solution template through an external service, such as an Azure Function or a webhook. The external validation service receives events from the workload orchestration service and can execute custom validation logic. This design pattern is commonly used when customers require complex validation rules that exceed data type and expression-based checks. It allows the implementation of business-specific validation logic, thereby minimizing runtime errors.

Integrated Monitoring & Unified Control: Track and manage deployments from a single pane of glass. Workload orchestration includes an integrated monitoring dashboard that provides near real-time visibility into deployment progress and the health of orchestrated workloads. From this centralized interface, you can pause, retry, or roll back deployments as needed, with full logging and compliance visibility for all actions.

Enhanced Authoring Experience (No-Code UI with RBAC): We've built a web-based orchestration portal that offers a no-code configuration authoring experience. Configuration managers can easily define or update application settings via an intuitive UI, comparing previous configuration revisions side by side, copying values between versions, and defining hierarchical parameters with just a few clicks. The portal is secured with role-based access control (RBAC) and full audit logging, so non-developers and local operators can safely make approved adjustments without risking security or compliance.

CLI and Automation Support: For IT admins and DevOps engineers, workload orchestration provides a command-line interface (CLI) optimized for automation. This enables scripted deployments and environment bootstrapping. Power users can integrate the orchestration into CI/CD pipelines or use it to programmatically manage application lifecycles across sites, using familiar CLI commands to deploy or update configurations in bulk.

Fast Onboarding and Setup: Getting started with orchestrating your edge environments is quick. The platform offers guided setup workflows to configure your organizational hierarchy of edge sites, define user roles, and set up access policies in minutes. This means you can onboard your team and prepare your edge infrastructure for orchestration without lengthy configuration processes.

Architecture & Workflow

Workload orchestration is a service built with cloud and edge components. At a high level, the cloud control plane gives customers a dedicated resource provider to define templates centrally, which workload orchestration edge agents consume and contextualize based on the customization required at each edge location. The overall object model is embedded in Azure Resource Manager, providing customers fine-grained role-based access control (RBAC) for all workload orchestration resources. The key management actions are governed by an intuitive CLI and portal experience. There is also a simplified no-code experience for non-technical on-site staff for authoring, monitoring, and deploying solutions with contextualized configurations.

Important Details & Limitations

Preview scope: During public preview, workload orchestration supports Kubernetes-based workloads at the edge (e.g., AKS edge deployments or Arc-enabled Kubernetes clusters). Support for other types of workloads or cloud VMs is coming soon.

Regions and availability: The service is available in the East US and East US 2 regions during preview.

Integration requirements: Using workload orchestration with your edge Kubernetes clusters requires them to be connected (e.g., via Azure Arc) for full functionality.

Getting Started with workload orchestration

Availability: Workload orchestration is available in public preview starting May 19, 2025. For access to the public preview, please complete the form to get access for your subscription, or share your subscription details over email at configmanager@service.microsoft.com. Once you have shared the details, the team will get back to you with an update on your request!

Try it out: We encourage you to try workload orchestration with one of your real-world scenarios. A great way to start is to pick a small application that you typically deploy to a few edge sites and use the orchestration to deploy it. Create a template for that app, define a couple of parameters (like a site name or a configuration toggle), and run a deployment to two or three test sites. This hands-on trial will let you experience first-hand how the process works and the value it provides. As you grow more comfortable, you can expand to more sites or more complex applications. Because this is a preview, feel free to experiment: you can deploy to non-production clusters or test environments to see how the orchestration fits your workflow.

Feedback and Engagement

We'd love to hear your feedback! As you try out workload orchestration, please share your experiences, questions, and suggestions. You can leave a comment below this blog post; our team will be actively monitoring and responding to comments throughout the preview. Let us know what worked well, what could be improved, and any features you'd love to see in the future. Your insights are incredibly valuable to us and will help shape the product as we progress toward general availability.

If you encounter any issues or have urgent feedback, you can also engage with us through the following channels:

Email configmanager@service.microsoft.com or fill out the form at WOfeedback to share feedback.
Email configmanager@service.microsoft.com or fill out the form at WOReportIssuees to report issues.
Contact your Microsoft account representative or support channel and mention "workload orchestration Public Preview"; they can route your feedback to us as well.

Occasionally, we may reach out to select preview customers for deeper feedback sessions or to participate in user research. If you're interested, please mention it in your comment or forum post. We truly consider our preview users co-creators of the product. Many of the features and improvements in workload orchestration have been influenced by early customer input. So, thank you in advance for sharing your thoughts and helping us ensure that this platform meets your needs!

(Reminder: since this is a public preview, it is not meant for production use yet. If you do decide to use it in a production scenario, do so with caution and be aware of the preview limitations. We will do our best to assist with any issues during the preview.)
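The template-plus-guardrails model described above can be reduced to a toy sketch. This is not the workload orchestration CLI: it only illustrates the idea of one shared template, per-site parameters, and a pre-deployment validation rule, and every name in it is invented:

```shell
# Conceptual sketch only: one template, rendered per site, with a guardrail.
cat <<'EOF' > app-template.yaml
site: __SITE_NAME__
replicas: __REPLICAS__
EOF

render_site() {
  SITE_NAME="$1"
  REPLICAS="$2"
  # Guardrail: reject replica counts outside 1..5 before rendering anything.
  if [ "$REPLICAS" -lt 1 ] || [ "$REPLICAS" -gt 5 ]; then
    echo "validation failed for $SITE_NAME: replicas=$REPLICAS" >&2
    return 1
  fi
  # Contextualize the shared template for this site.
  sed -e "s/__SITE_NAME__/$SITE_NAME/" -e "s/__REPLICAS__/$REPLICAS/" \
    app-template.yaml > "rendered-$SITE_NAME.yaml"
}

render_site factory-01 3                                   # passes the guardrail
render_site factory-02 9 || echo "factory-02 rejected, nothing rendered"
cat rendered-factory-01.yaml
```

Workload orchestration layers schema inheritance, external validation, RBAC, and monitoring on top of this basic define-once, customize-within-limits pattern.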
Learn More

To help you get started and dive deeper into workload orchestration, we've prepared a set of resources:

Workload orchestration documentation (overview and how-to guides): Learn about the architecture, concepts, and step-by-step instructions for using workload orchestration in our official docs. [WO documentation]
Quick start, deploy your first application (tutorial): Follow a guided tutorial to create a template and deploy a sample application to a simulated edge cluster using workload orchestration. [Quickstart]
CLI reference (command reference): Detailed documentation of all workload orchestration CLI commands with examples. [CLI reference]

Conclusion

We're thrilled for you to explore workload orchestration and see how it can transform your edge deployment strategy. This public preview is a major step towards simplifying distributed workload management, and your participation and feedback are key to its success.