Azure Arc
Announcing Jumpstart ArcBox 25Q1 general availability
We are thrilled to announce the first major update to ArcBox following our release of ArcBox 3.0 in August 2024. ArcBox has been an invaluable resource for IT professionals, DataOps teams, and DevOps practitioners, providing comprehensive solutions to evaluate how to deploy, manage, and operate Arc-enabled environments. With this release, we have introduced Windows Server 2025 on both the ArcBox-Client VM and in a nested VM, making it possible for you to evaluate a range of new features and enhancements that elevate functionality, performance, and the user experience.

WinGet and Windows Terminal Integration

One of the standout enhancements in Windows Server 2025 is the inclusion of WinGet and Windows Terminal. These tools are now built-in components of Windows Server 2025 and no longer require bootstrapping in our automation processes.

Advanced Management Capabilities for Arc-enabled servers

Windows Server 2025 introduces new management capabilities specifically designed for Arc-enabled servers. These capabilities enhance the control and oversight of server environments, providing more robust tools for monitoring, configuration, and maintenance. The enhancements are now available in ArcBox to be evaluated.

SSH Included and Enabled

Another significant update in Windows Server 2025 is the inclusion of SSH as a native component. This addition is a major step forward, as it eliminates the need for external SSH installations. However, it is important to note that while SSH is included, it needs to be enabled manually. This feature enhances secure access to servers, facilitating more efficient remote management and operations. In ArcBox, SSH is enabled by the automated setup and ready for evaluation.

SSH for Arc-enabled servers enables SSH-based connections to Arc-enabled servers without requiring a public IP address or additional open ports. This functionality can be used interactively, automated, or with existing SSH-based tooling, allowing existing management tools to have a greater impact on Azure Arc-enabled servers. You can use Azure CLI or Azure PowerShell to connect to one of the Azure Arc-enabled servers using SSH. In addition, you can connect to the Azure Arc-enabled Windows Server virtual machines using Remote Desktop tunneled via SSH. Remote PowerShell over SSH is also available for Windows and Linux machines.

SSH for Arc-enabled servers also enables SSH-based PowerShell Remoting connections to Arc-enabled servers without requiring a public IP address or additional open ports. After setting up the configuration, we can use native PowerShell Remoting commands.
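As a quick illustration of the Azure CLI path, connecting to an Arc-enabled server over SSH looks roughly like the sketch below. This is not taken from the ArcBox automation; the server name, resource group, and local user are placeholders, and the Azure CLI ssh extension is assumed to be available.

# Sketch: connect to an Arc-enabled server over SSH (values in angle brackets are placeholders)
az extension add --name ssh
az ssh arc --resource-group <resourceGroupName> --name <arcServerName> --local-user <localUserName>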
Configurable SQL Server Edition to support Performance Dashboards

ArcBox now provides the flexibility to deploy SQL Server Standard or Enterprise editions on the ArcBox-SQL guest VM, replacing the previously default Developer edition. This enhancement empowers users to experience advanced Arc-enabled SQL Server monitoring through Performance Dashboard reports. Available in both the ITPro and DataOps configurations, this feature ensures tailored performance monitoring capabilities for diverse use cases. To configure the SQL Server edition during deployment:

- Portal Deployment: Specify the desired SQL Server edition during setup.
- Bicep Deployment: Use the sqlServerEdition parameter to define the edition.
- ARM Template Deployment: Set the edition via the sqlServerEdition parameter.

Below is an example Performance Dashboard report from an Arc-enabled SQL Server using the Standard or Enterprise editions, highlighting comprehensive insights and monitoring capabilities.

Cost Optimizations

We optimized storage costs significantly by changing the ArcBox Client VM data disk from Premium SSD to Premium SSD v2. This change allows for better performance at a lower cost, making ArcBox even more economical for various use cases. With this optimization, users can enjoy faster data access speeds and increased storage efficiency.

We also introduced support for enabling Azure VM Spot pricing for the ArcBox Client VM, allowing users to take advantage of cost savings on unused Azure capacity. This feature is ideal for workloads that can tolerate interruptions, providing an economical option for testing and development environments. By leveraging Spot pricing, users can significantly reduce their operational costs while maintaining the flexibility and scalability offered by Azure. You may leverage the advisor on the Azure Spot Virtual Machine pricing page to estimate costs for your selected region; here is an example for running the ArcBox Client virtual machine in the East US region. Visit the ArcBox FAQ to see the updated price estimates for running ArcBox in your environment.

The new deployment parameter enableAzureSpotPricing is disabled by default, so users who want to take advantage of this capability will need to opt in. Along with the option to opt in for Azure Spot pricing, we also added new parameters for enabling Auto Shutdown. Auto Shutdown is enabled by default and configures the built-in Auto-shutdown feature for Azure VMs. A deployment sketch using these parameters is shown after the summary below.

Summary

The latest update to ArcBox not only focuses on new features but also on enhancing overall cost and performance. The integration of new operating system versions and management capabilities ensures a smoother, more efficient workflow for IT professionals, DataOps teams, and DevOps practitioners to evaluate Azure Arc services. We invite our community to explore these new features and take full advantage of the enhanced capabilities of ArcBox with Windows Server 2025 support. Your feedback is invaluable to us, and we look forward to hearing about your experiences and insights as you navigate these new enhancements. Watch our release announcement episode of Jumpstart Lightning and get started today by visiting aka.ms/JumpstartArcBox!
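To pull the deployment parameters mentioned in this post together, a resource-group-scoped Bicep deployment of ArcBox could pass them roughly as sketched below. This is an illustrative sketch rather than the exact ArcBox command: the resource group and template file names are placeholders, only sqlServerEdition and enableAzureSpotPricing come from this post, and the ArcBox documentation remains the reference for the full parameter list.

# Sketch: deploy the ArcBox Bicep template with the SQL Server edition and Spot pricing options
# (resource group and template file names are placeholders)
az deployment group create --resource-group <resourceGroupName> --template-file main.bicep --parameters sqlServerEdition='Standard' enableAzureSpotPricing=true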
Introducing Azure Local: cloud infrastructure for distributed locations enabled by Azure Arc

Today at Microsoft Ignite 2024 we're introducing Azure Local, cloud-connected infrastructure that can be deployed at your physical locations and under your operational control. With Azure Local, you can run the foundational Azure compute, networking, storage, and application services locally on hardware from your preferred vendor, providing flexibility to meet your requirements and budget.
AKS Arc - Optimized for AI Workloads

Overview

Azure is the world's AI supercomputer, providing the most comprehensive AI capabilities ranging from infrastructure and platform services to frontier models. We've seen emerging needs among Azure customers to use the same Azure-based solution for AI/ML on the edge with minimized latencies while staying compliant with industry regulations or government requirements. Azure Kubernetes Service enabled by Azure Arc (AKS Arc) is a managed Kubernetes service that empowers customers to deploy and manage containerized workloads whether they are in data centers or at edge locations. We want to ensure AKS Arc provides an optimal experience for AI/ML workloads on the edge, throughout the whole development lifecycle, from AI infrastructure, model deployment, inference, and fine-tuning to the application.

AI infrastructure

AKS Arc supports Nvidia A2, A16, and T4 GPUs for compute-intensive workloads such as machine learning, deep learning, and model training. When GPUs are enabled in Azure Local, AKS Arc customers can provision GPU node pools from Azure and host AI/ML workloads in the Kubernetes cluster on the edge. For more details, please visit the instructions from GPU Nodepool in AKS Arc.

Model deployment and fine tuning

Use KAITO for language model deployment, inference and fine tuning

Kubernetes AI Toolchain Operator (KAITO) is an open-source operator that automates and simplifies the management of model deployments on a Kubernetes cluster. With KAITO, you can deploy popular open-source language models such as Phi-3 and Falcon, and host them in the cloud or on the edge. Along with the currently supported models from KAITO, you can also onboard and deploy custom language models following this guidance in just a few steps. AKS Arc has been validated with the latest KAITO operator via helm-based installation, and customers can now use KAITO on the edge to:

- Deploy language models such as Falcon, Phi-3, or their custom models
- Automate and optimize AI/ML model inferencing for cost-effective deployments
- Fine-tune a model directly in a Kubernetes cluster
- Perform parameter efficient fine tuning using low-rank adaptation (LoRA)
- Perform parameter efficient fine tuning using quantized adaptation (QLoRA)

You can get started by installing KAITO and deploying a model for inference on your edge GPU nodes with the KAITO Quickstart Guidance. You may also refer to the KAITO experience in AKS in the cloud: Deploy an AI model with the AI toolchain operator (Preview).

Use Arc-enabled Machine Learning to train and deploy models in the edge

For customers who are already familiar with Azure Machine Learning (AML), Azure Arc-enabled ML extends AML in Azure and enables customers to target any Arc-enabled Kubernetes cluster for model training, evaluation, and inferencing. With the Arc ML extension running in AKS Arc, customers can meet data-residency requirements by storing data on premises during model training and deploy models in the cloud for global service access. To get started with the Arc ML extension, please view the instructions from the Azure Machine Learning documentation. In addition, the AML extension can now be used for a fully automated deployment of a curated list of pre-validated language and traditional AI models to AKS clusters, perform CPU and GPU-based inferencing, and subsequently manage them via Azure ML Studio. This experience is currently in gated preview; please view another Ignite blog for more details.
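For a rough sense of what installing the Arc ML extension on a connected cluster looks like from the Azure CLI, see the sketch below. The extension name, cluster, and resource group are placeholders, and the configuration flags shown are illustrative of a test setup; consult the Azure Machine Learning documentation referenced above for the current, supported options.

# Sketch: install the Azure Machine Learning extension on an Arc-enabled Kubernetes cluster
# (names in angle brackets are placeholders; configuration flags are illustrative of a test setup)
az k8s-extension create --name azureml-extension --extension-type Microsoft.AzureML.Kubernetes --cluster-type connectedClusters --cluster-name <clusterName> --resource-group <resourceGroupName> --scope cluster --config enableTraining=True enableInference=True inferenceRouterServiceType=LoadBalancer allowInsecureConnections=True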
Use Azure AI Services with disconnected containers in the edge

Azure AI services enable customers to rapidly create cutting-edge AI applications with out-of-the-box and customizable APIs and models. They simplify the developer experience of using APIs and embedding the ability to see, hear, speak, search, understand, and accelerate decision-making into the application. With disconnected Azure AI service containers, customers can now download the container to an offline environment such as AKS Arc and use the same APIs available from Azure. Containers enable you to run Azure AI services APIs in your own environment and are great for your specific security and data governance requirements. Disconnected containers enable you to use several of these APIs disconnected from the internet. Currently, the following containers can be run in this manner:

- Speech to text
- Custom Speech to text
- Neural Text to speech
- Text Translation (Standard)
- Azure AI Vision - Read
- Document Intelligence
- Azure AI Language
  - Sentiment Analysis
  - Key Phrase Extraction
  - Language Detection
  - Summarization
  - Named Entity Recognition
  - Personally Identifiable Information (PII) detection

To get started with disconnected containers, please view the instructions at Use Docker containers in disconnected environments.

Build and deploy data and machine learning pipelines with Flyte

Flyte is an open-source orchestrator that facilitates building production-grade data and ML pipelines. It is a Kubernetes-native workflow automation tool. Customers can focus on experimentation and providing business value without being experts in infrastructure and resource management. Data scientists and ML engineers can use Flyte to create data pipelines for processing petabyte-scale data, building analytics workflows for business or finance, or leveraging it as an ML pipeline for industry applications. AKS Arc has been validated with the latest Flyte operator via helm-based installation, and customers are welcome to use Flyte for building data or ML pipelines. For more information, please view the instructions from Introduction to Flyte - Flyte and Build and deploy data and machine learning pipelines with Flyte on Azure Kubernetes Service (AKS).
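Since the validation above is helm-based, a minimal sketch of installing Flyte's single-binary chart on a cluster is shown below. The repository URL, chart, release, and namespace names reflect the upstream Flyte documentation at the time of writing and should be treated as assumptions; follow the Flyte links above for current guidance.

# Sketch: install Flyte via Helm (repo and chart names per upstream Flyte docs; verify before use)
helm repo add flyteorg https://flyteorg.github.io/flyte
helm repo update
helm install flyte-backend flyteorg/flyte-binary --namespace flyte --create-namespace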
AI-powered edge applications with cloud-connected control plane

Azure AI Video Indexer, enabled by Azure Arc

Azure AI Video Indexer enabled by Arc enables video and audio analysis and generative AI on edge devices. It runs as an Azure Arc extension on AKS Arc and supports many video formats, including MP4 and other common formats. It also supports several languages in all basic audio-related models. The Phi-3 language model is included and automatically connected with your Video Indexer extension. With Arc-enabled Video Indexer, you can bring AI to the content for cases when indexed content can't move to the cloud due to regulation or the data store being too large. Other use cases include using an on-premises workflow to lower indexing latency, or pre-indexing before uploading to the cloud. You can find more details from What is Azure AI Video Indexer enabled by Arc (Preview).

Search on-premises data with a language model via Arc extension

Retrieval Augmented Generation (RAG) is emerging to augment language models with private data, and this is especially important for enterprise use cases. Cloud services like Azure AI Search and Azure AI Studio simplify how customers can use RAG to ground language models in their enterprise data in the cloud. The same experience is coming to the edge, and now customers can deploy an Arc extension and ask questions about on-premises data within a few clicks. Please note this experience is currently in gated preview; please see another Ignite blog for more details.

Conclusion

Developing and running AI workloads at distributed edges brings clear benefits such as using the cloud as a universal control plane, data residency, reduced network bandwidth, and low latency. We hope the products and features described above can benefit and enable new scenarios in Retail, Manufacturing, Logistics, Energy, and more. As Microsoft-managed Kubernetes on the edge, AKS Arc not only can host critical edge applications but is also optimized for AI workloads, from hardware and runtime to the application. Please share your valuable feedback with us (aksarcfeedback@microsoft.com) and we would love to hear from you regarding your scenarios and business impact.
Speed Innovation with Arc-enabled Kubernetes Applications

As our annual Ignite conference begins in Chicago, I am delighted to share the latest in our effort to empower our customers to rapidly build and scale applications across boundaries: Azure Container Storage, Azure Key Vault Secret Store, Arc Gateway, Azure Monitor Pipeline, Workload Identity Federation, new options for AI workloads with AKS Arc, and the launch of our Azure Arc ISV partner program. In addition, we just published a white paper with more details.

In today's quickly evolving business environment, speed and agility in software innovation are crucial for companies to compete. Organizations of all shapes and sizes need to rapidly build (or buy), deploy, and operate secure, resilient applications to stay competitive. Cloud computing has revolutionized how companies do this with modern, cloud native practices. But many applications don't just run in the cloud; they run across the vast, distributed landscape that defines customer environments today.

Coles, an Australian supermarket retailer, needed to streamline their development and update process for the applications their customers depend on, whether they are in-store, online, or engaged in a hybrid experience using their mobile app. Emirates Global Aluminium needed to optimize production, support advanced AI and automation solutions, enhance cost savings by applying intelligence at the edge, and optimize processing for massive amounts of real-time readings from sensors, machinery, and production lines. Delivering on the needs of organizations like Coles and Emirates Global Aluminium requires specific technologies that help teams reduce complexity and increase release velocity across the application development lifecycle. I like to think of these in three groups, representing areas of investment for us today and moving forward.

As customers invest in applications to fuel their business, many of these solutions come from the broad ecosystem of independent software vendors (ISVs). We are taking an ecosystem approach, helping ISVs to develop and market modern, Arc-enabled applications. This is why I am very excited to announce our Azure Arc ISV partner program and our first set of Arc-enabled applications in the Azure Marketplace. Below is a full list of the announcements we are making for this space at Ignite.

Announcements

New capabilities for the development of enterprise-class Kubernetes applications

Azure Container Storage: At the edge, customers experience multiple challenges with data: sharing, resiliency, storage capacity, space management, and cloud connection, among others. We are proud to announce Azure Container Storage enabled by Azure Arc (ACSA), a first-party Kubernetes-native Arc extension designed to solve these customer edge storage needs. ACSA offers high availability and fault tolerance for Kubernetes clusters through ReadWriteMany persistent volumes that can be provisioned as Kubernetes-native Persistent Volume Claims (PVCs). Available configuration options include keeping data local or transferring it to Azure storage services, such as Blob, ADLSgen2, and OneLake Fabric. ACSA is suitable for production workloads and is available as a standard component of the Azure IoT Operations GA release.
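To make the PVC point above concrete, a generic ReadWriteMany claim is sketched below. The storage class name is a placeholder and depends on how ACSA is configured in your cluster; refer to the ACSA documentation for the storage classes it actually exposes.

# Sketch: a ReadWriteMany PersistentVolumeClaim; the storageClassName is a placeholder
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: acsa-demo-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: <acsa-storage-class>
EOF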
Azure Key Vault Secret Store: Customers need the confidence and scalability that comes with unified secrets management in the cloud, while maintaining disconnection-resilience for operational activities at the edge. To help them with this, the Azure Key Vault Secret Store Extension for Arc-enabled Kubernetes automatically synchronizes secrets from an Azure Key Vault to a Kubernetes cluster for offline access. This means customers can use Azure Key Vault to store, maintain, and rotate secrets, even when running a Kubernetes cluster in a semi-disconnected state. Synchronized secrets are stored in the cluster secret store, making them available as Kubernetes secrets to be used in all the usual ways: mounted as data volumes or exposed as environment variables to a container in a Pod.

Azure Arc Gateway: Customers face challenges with complex network configurations and multiple endpoints, which can be difficult to manage and secure. The Azure Arc Gateway for Arc-enabled Kubernetes alleviates these issues by reducing the number of required endpoints for using Azure Arc, thereby streamlining the enterprise proxy configuration. This simplification makes it significantly easier for customers to set up their networks and leverage the full capabilities of Azure Arc. By centralizing network traffic through a single, unique endpoint, the Azure Arc Gateway not only enhances security by minimizing the attack surface but also improves operational efficiency by reducing the time and effort needed for network setup and maintenance. This centralized approach ensures that customers can manage their Kubernetes clusters more effectively, providing a seamless and consistent experience across diverse environments.

Azure Monitor Pipeline: As enterprises scale their infrastructure and applications, the volume of observability data naturally increases, and it is challenging to collect telemetry from certain restricted environments. We are extending our Azure Monitor pipeline to the edge to enable customers to collect telemetry at scale from their edge environment and route it to Azure Monitor for observability. With the Azure Monitor pipeline at the edge, customers can collect telemetry from resources in segmented networks that do not have a line of sight to the cloud. Additionally, the pipeline prevents data loss by caching the telemetry locally during intermittent connectivity periods and backfilling to the cloud, improving reliability and resiliency.

Workload Identity Federation: Customers need both simplicity and strong security from their workload identity management, especially when their solutions run in or across distributed environments. Workload Identity Federation delivers this by allowing software workloads running on Kubernetes clusters to access Azure resources without using traditional application credentials like secrets or certificates, which pose security risks. Instead, you can configure a user-assigned managed identity or app registration in Microsoft Entra ID to trust tokens from an external identity provider (IdP) like Kubernetes. This authentication option eliminates the need for manual credential management and reduces the risk of credential leaks or expirations.
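For a sense of what the trust configuration described above involves, the sketch below creates a federated credential on a user-assigned managed identity with the Azure CLI. The identity, cluster issuer URL, namespace, and service account names are placeholders, and the full Arc-enabled workload identity setup includes additional steps covered in the product documentation.

# Sketch: federate a user-assigned managed identity with a Kubernetes service account
# (all values in angle brackets are placeholders)
az identity federated-credential create --name k8s-federated-credential --identity-name <managedIdentityName> --resource-group <resourceGroupName> --issuer <clusterOidcIssuerUrl> --subject system:serviceaccount:<namespace>:<serviceAccountName> --audiences api://AzureADTokenExchange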
Creating an ecosystem of Arc-enabled Kubernetes applications

Azure Arc ISV partner program: Customers want the ability to utilize third-party (3P) software to build their enterprise applications on Kubernetes. Currently, customers have to run multiple scripts to install any third-party application on an Arc-enabled Kubernetes cluster. We are excited to announce the launch of our Azure Arc ISV ecosystem, which enables Azure to be a one-stop shop. Now customers can install an application that has been validated on Arc and enabled onto their cluster through the Azure portal. With the click of a button in the Azure portal, users can install MongoDB, Redis, CloudCasa, MinIO, and DataStax on their Arc-enabled Kubernetes cluster. This enables customers to develop using enterprise-grade tools on top of Azure Arc. This program will enhance the developer ecosystem as we onboard more and more partners.

Exciting new ways to engage and get started

Join the Adaptive cloud community: Connect with professionals passionate about hybrid, multi-cloud, and edge technologies. This space is designed for those looking to engage with peers and Microsoft experts, explore the latest in Azure Arc, Azure Local, AKS, and IoT, and expand their knowledge through valuable resources and discussions. Whether you are just starting out or an industry professional, this community is the perfect platform to share insights, ask questions, and grow your skills in the evolving Adaptive cloud ecosystem. Learn more about ways to get involved on our Adaptive cloud GitHub.

- Join the Adaptive cloud Community LinkedIn Group
- Join the Adaptive cloud Community Teams Channel

Visit Arc Jumpstart: Explore the resources available to help you learn what Azure Arc can do for you and your business. Recent additions include Jumpstart Drops, an opportunity to contribute to and use community contributions, and Jumpstart Agora Hypermarket, an industry scenario bringing the power of the Adaptive cloud approach for retail to life.

I hope you enjoy the week visiting or tuning into Microsoft Ignite. You can find a full listing of opportunities to learn more about our Adaptive cloud approach at Ignite here: aka.ms/AdaptiveCloudIgnite.
Announcing Preview of Azure AI Video Indexer Cloud-to-Edge Version

Introducing the Public Preview of Azure Video Indexer enabled by Arc: harness the full capabilities of video and audio content anywhere. We are excited to unveil that the Azure Video Indexer cloud offering is expanding, as part of Microsoft's adaptive cloud approach. The newly released Video Indexer extension can run within an Azure Arc-enabled Kubernetes cluster. This tool allows you to harness the full capabilities of video and audio content from any location. Register for the public preview by submitting this form.
Deploy a Kubernetes Application Programmatically Using Terraform and CLI

In our previous blog post, we explored the benefits of Kubernetes apps along with an introduction to deploying them programmatically. Today we will cover deploying a Kubernetes application programmatically using Terraform and the CLI. These deployment methods can streamline your workflow and automate repetitive tasks.

Deploying your Kubernetes Application using Terraform

This walkthrough assumes you have previous knowledge of Terraform. For additional information and guidance on using Terraform to provision a cluster, please refer here.

Prerequisites

Before we begin, ensure you have the following:

- Terraform
- Azure CLI

Sample Location

You can find the Terraform sample we will be using at this location: Terraform Sample

Prepare the Environment

First, initialize Terraform in the current directory where you have copied the k8s-extension-install sample by running the following command:

terraform init

In the directory, you will find two example tfvars files. These files can be used to deploy the application with different configurations:

- azure-vote-without-config.tfvars - Deploy the application with the default configuration for azure-vote.
- azure-vote-with-config.tfvars - Deploy/update the application with a custom configuration for azure-vote.

Before you test run the sample tfvars files, update the following in the tfvars files:

- cluster_name - The name of the AKS cluster.
- resource_group_name - The name of the resource group where the AKS cluster is located.
- subscription_id - The subscription ID where the AKS cluster is located.

Deploy the Application

To deploy the application with the default configuration for azure-vote, run:

terraform apply -var-file="azure-vote-without-config.tfvars"

To deploy or update the application with a custom configuration for azure-vote, use:

terraform apply -var-file="azure-vote-with-config.tfvars"

Conclusion

And that's it! You've successfully deployed your Kubernetes application programmatically using Terraform. This process can drastically reduce the time and effort involved in managing and scaling your applications. By using Terraform, you can ensure that your deployment is consistent and repeatable, making it easier to maintain your infrastructure as code.

Deploying a Kubernetes Application from Azure CLI

Deploying a Kubernetes application using Azure CLI can seem daunting, but we're here to make it simple and accessible. Follow these steps, and you'll have your azure-vote application up and running in no time!

Prerequisites

Before we get started, ensure you have the following:

- Azure CLI installed on your machine

Deploying the Sample Azure-Vote Application from the Marketplace

Step 1: Log in to Azure

Open your terminal and log in to your Azure account by running:

az login

Step 2: Set Your Subscription

Specify the subscription you want to use with:

az account set --subscription <subscriptionId>

Step 3: Deploy the Azure-Vote Application

Now, deploy the azure-vote application to your Kubernetes cluster with the following command:

az k8s-extension create --name azure-vote --scope cluster `
  --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters `
  --extension-type commercialMarketplaceServices.AzureVote `
  --plan-name azure-vote-paid `
  --plan-product azure-vote-final-1 `
  --plan-publisher microsoft_commercial_marketplace_services `
  --configuration-settings title=VoteAnimal value1=Cats value2=Dogs
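If you want to confirm the extension reached a successful provisioning state before moving on, one way to check from the CLI is sketched below; the cluster and resource group names are placeholders.

# Sketch: check the status of the installed extension
az k8s-extension show --name azure-vote --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters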
Updating Configuration Settings

If you want to update the configuration settings of the azure-vote application, you can do so easily. Use the following command to change the configuration settings:

az k8s-extension update --name azure-vote `
  --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters `
  --configuration-settings value1=Elephant value2=Horse

And there you have it! By following these steps, you can deploy and update the azure-vote application on your Kubernetes cluster using Azure CLI.

Conclusion

Deploying Kubernetes applications using Azure CLI is a powerful way to manage and scale your applications. The process described above helps ensure your deployments are consistent and repeatable, simplifying the maintenance of your infrastructure as code.
Arc Jumpstart Newsletter: January 2025 Edition

We're thrilled to bring you the latest updates from the Arc Jumpstart team in this month's newsletter. Whether you are new to the community or a regular Jumpstart contributor, this newsletter will keep you informed about new releases, key events, and opportunities to get involved within the Azure Adaptive Cloud ecosystem. Check back each month for new ways to connect, share your experiences, and learn from others in the Adaptive Cloud community.
Evolving Stretch Clustering for Azure Local

Stretched clusters in Azure Local, version 22H2 (formerly Azure Stack HCI, version 22H2) entail a specific technical implementation of storage replication that spans a cluster across two sites. Azure Local, version 23H2 has evolved from a cloud-connected operating system to an Arc-enabled solution with Arc Resource Bridge, Arc VMs, and AKS enabled by Azure Arc. Azure Local, version 23H2 expands the requirements for multi-site scenarios beyond the OS layer, while stretched clusters do not encompass the entire solution stack. Based on customer feedback, the new Azure Local release will replace the stretched clusters defined in version 22H2 with new high availability and disaster recovery options.

For Short Distance

Rack Aware Cluster is a new cluster option which spans two separate racks or rooms within the same Layer-2 network at a single location, such as a manufacturing plant or a campus. Each rack functions as a local availability zone across layers from the OS to Arc management, including Arc VMs and AKS enabled by Azure Arc, providing fault isolation and workload placement within the cluster. The solution is configured with one storage pool to reduce additional storage replication and enhance storage efficiency. This solution delivers the same Azure deployment and management experience as a standard cluster. This setup is suitable for edge locations and can scale up to 8 nodes, with 4 nodes in each rack. Rack Aware Cluster is currently in private preview and is slated for public preview and general release in 2025.

For Long Distance

Azure Site Recovery can be used to replicate on-premises Azure Local virtual machines into Azure and protect business-critical workloads. This allows the Azure cloud to serve as a disaster recovery site, enabling critical VMs to be failed over to Azure in case of a local cluster disaster, and then failed back to the on-premises cluster when it becomes operational again. If you cannot fail over certain workloads to the cloud and require long-distance disaster recovery, such as between two different cities, you can leverage Hyper-V Replica to replicate Arc VMs to the secondary site. Those VMs will become Hyper-V VMs on the secondary site, and they will become Arc VMs again once they fail back to the original cluster on the first site.

Additional Options beyond Azure Local

If the above solutions in Azure Local do not cover your needs, you can fully customize your solution with Windows Server 2025, which introduces several advanced hybrid cloud capabilities designed to enhance operational flexibility and connectivity across various environments. Additionally, it offers various replication technologies like Hyper-V Replica, Storage Replica, and external SAN replication that enable the development of tailored datacenter disaster recovery solutions. Learn more from Windows Server 2025 now generally available, with advanced security, improved performance, and cloud agility - Microsoft Windows Server Blog.

What to do with existing Stretched clusters on version 22H2

Stretched clusters and Storage Replica are not supported in Azure Local, version 23H2 and beyond. However, version 22H2 stretched clusters can stay in a supported state by performing the first of the two upgrade steps, the operating system upgrade to the 23H2 OS. The second step, the solution upgrade to Azure Local, is not applicable to stretched clusters. This provides extra time to assess the most suitable future solution for your needs.
Please refer to About Azure Local upgrade to version 23H2 - Azure Local | Microsoft Learn for more information on the 23H2 upgrade, and refer to the blog Upgrade from Azure Stack HCI, version 22H2 to Azure Local | Microsoft Community Hub.

Conclusion

We are excited to be bringing Rack Aware Clusters and Azure Site Recovery to Azure Local. These high availability and disaster recovery options allow customers to address various scenarios with a modern cloud experience and simplified management.
Unlocking the Power of Azure Arc-enabled Kubernetes: Simplifying Hybrid and Multi-Cloud Management

In today's rapidly evolving technological landscape, managing applications across hybrid and multi-cloud environments has emerged as a complex challenge. Enter Azure Arc-enabled Kubernetes, a groundbreaking solution designed to simplify and streamline these operations. Let's delve into the myriad offerings of Azure Arc-enabled Kubernetes and illustrate how it can transform hybrid and multi-cloud management for you.

What is Azure Arc-enabled Kubernetes?

Azure Arc-enabled Kubernetes extends Azure management capabilities to Kubernetes clusters running on-premises, at the edge, or in other cloud environments. By integrating with the Azure ecosystem, it provides a unified management experience, enabling you to manage, govern, and secure your Kubernetes clusters from a single control plane.

Key Features and Offerings

Unified Management

Azure Arc-enabled Kubernetes brings all your Kubernetes clusters, whether on-premises or in the cloud, under one management umbrella. This unified approach simplifies operations, such as monitoring, reducing the complexity and overhead associated with managing disparate environments.

Consistent Deployment

One of the standout features of Azure Arc-enabled Kubernetes is its ability to deliver consistent deployment across various environments. By using GitOps-based configuration, you can ensure that your applications and infrastructure are deployed consistently, regardless of the underlying infrastructure. This consistency enhances reliability and reduces the risk of misconfigurations.

Enhanced Security and Compliance

Security and compliance are paramount in today's IT landscape. Azure Arc-enabled Kubernetes leverages Azure Security Center and Azure Policy to provide robust security and compliance capabilities. With policy enforcement and threat detection, you can ensure that your Kubernetes clusters meet stringent security standards.

Seamless Integration with Azure Services

Azure Arc-enabled Kubernetes integrates seamlessly with a wide array of Azure services. Whether it's Azure Monitor for observability, Azure DevOps for CI/CD pipelines and GitOps, or Azure Machine Learning for AI workloads, you can leverage Azure's rich ecosystem to enhance your Kubernetes environments.

Flexibility and Scalability

Azure Arc-enabled Kubernetes offers unparalleled flexibility and scalability. It allows you to run your applications where it makes the most sense, on-premises, at the edge, or in the cloud, without compromising on management capabilities. This flexibility ensures that you can scale your operations seamlessly as your business grows.

Simplifying Hybrid and Multi-Cloud Management

Streamlined Operations

Managing hybrid and multi-cloud environments traditionally involves dealing with multiple management tools and interfaces. Azure Arc-enabled Kubernetes streamlines these operations by providing a single management platform. This simplification reduces operational overhead and allows your IT team to focus on strategic initiatives rather than mundane management tasks.

Centralized Governance

Governance is a critical aspect of managing hybrid and multi-cloud environments. With Azure Arc-enabled Kubernetes, you can apply policies consistently across all your Kubernetes clusters. This centralized governance ensures that your environments comply with corporate and regulatory standards, regardless of where they are hosted.

Improved Visibility

Visibility is key to effective management.
Azure Arc-enabled Kubernetes provides comprehensive visibility into your Kubernetes clusters through Azure Monitor and Azure Security Center. This enhanced visibility allows you to monitor the health, performance, and security of your clusters in real time, enabling proactive management and quicker issue resolution.

Reduced Total Cost of Ownership (TCO)

By consolidating management operations and leveraging Azure's integrated services, Azure Arc-enabled Kubernetes can significantly reduce the total cost of ownership. This reduction in TCO is achieved through decreased operational complexity, improved resource utilization, and the elimination of the need for multiple management tools.

Real-World Use Cases

To better understand the impact of Azure Arc-enabled Kubernetes, let's explore some real-world use cases:

Financial Services

In the financial services industry, data privacy and compliance are of utmost importance. Azure Arc-enabled Kubernetes allows financial institutions to manage their Kubernetes clusters across on-premises data centers and public clouds, ensuring consistent security policies and compliance with regulatory requirements. Take a look at one of our customer case studies: Microsoft Customer Story - World Bank invests in greater efficiency and security with Microsoft Azure Arc

Healthcare

Healthcare organizations can leverage Azure Arc-enabled Kubernetes to manage their applications across hybrid environments. This capability is crucial for maintaining data sovereignty and complying with health regulations while enabling efficient application deployment and management.

Retail

Retail businesses often operate in a hybrid environment, with applications running in on-premises data centers and public clouds. Azure Arc-enabled Kubernetes provides a unified management platform, allowing retailers to manage their applications consistently and efficiently, enhancing customer experiences and operational efficiency. Take a look at one of our customer case studies: Microsoft Customer Story - DICK'S Sporting Goods creates an omnichannel athlete experience using Azure Arc and AKS

Getting Started with Azure Arc-enabled Kubernetes

Prerequisites

Before you start, ensure that you have the following prerequisites in place:

- An Azure subscription
- Kubernetes clusters (on-premises, at the edge, or in other cloud environments)
- Azure CLI installed
- Basic knowledge of Kubernetes and Azure

For additional prerequisites please refer here.

How to Discover and Deploy Kubernetes applications that support Azure Arc-enabled clusters

Discover Kubernetes Applications:

1. In the Azure portal, search for Marketplace on the top search bar. In the results, under Services, select Marketplace.
2. You can search for an offer or publisher directly by name, or you can browse all offers. To find Kubernetes application offers, on the left side under Categories select Containers.
3. You'll see several Kubernetes application offers displayed on the page. To view all of the Kubernetes application offers, select See more.
4. Search for the applications using the 'publisherId' that was identified earlier as part of discovering applications that support connected clusters.
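Both the discovery steps above and the deployment steps below assume your Kubernetes cluster is already connected to Azure Arc. If yours is not yet connected, onboarding it from the Azure CLI looks roughly like the sketch below; the cluster and resource group names are placeholders, and your kubeconfig context must point at the target cluster.

# Sketch: connect an existing Kubernetes cluster to Azure Arc (values in angle brackets are placeholders)
az extension add --name connectedk8s
az connectedk8s connect --name <clusterName> --resource-group <resourceGroupName>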
Deploying a Kubernetes application using the Azure Portal:

1. On the Plans + Pricing tab, review the options. If there are multiple plans available, find the one that meets your needs. Review the terms on the page to make sure they're acceptable, and then select Create.
2. Select the resource group and Arc-enabled cluster to which you want to deploy the application.
3. Complete all pages of the deployment wizard to specify all configuration options that the application requires.
4. When you're finished, select Review + Create, then select Create to deploy the offer.
5. When the application is deployed, the portal shows Your deployment is complete, along with details of the deployment.
6. Lastly, verify the deployment by navigating to the cluster you recently installed the extension on, then navigate to Extensions, where you'll see the extension status. If the deployment was successful, the Status will be Succeeded. If the status is Creating, the deployment is still in progress. Wait a few minutes then check again.

Conclusion

Azure Arc-enabled Kubernetes is a powerful solution that simplifies hybrid and multi-cloud management for tech enthusiasts and enterprises alike. With its unified management capabilities, consistent deployment, enhanced security, and seamless integration with Azure services, it transforms the way you manage your Kubernetes clusters. By adopting Azure Arc-enabled Kubernetes, you can streamline operations, improve visibility, and reduce costs, all while leveraging the flexibility and scalability of hybrid and multi-cloud environments. Embrace the future of Kubernetes management with Azure Arc and unlock the full potential of your hybrid and multi-cloud strategy.