kubernetes
Technology & Services partners are jumping on the bandwagon of Azure Arc
The Azure Arc partner ecosystem offers customers validated, enterprise-grade solutions to run Azure on-premises and at the edge. The ecosystem launched at Microsoft Ignite 2021 with support from industry-leading OEMs, hardware providers, platform providers, and ISVs, and we are happy to announce the expansion of the Azure Arc network of trusted partners and validated platforms to data services.

Realizing Machine Learning anywhere with Azure Kubernetes Service and Arc-enabled Machine Learning
We are thrilled to announce the general availability of Azure Machine Learning (Azure ML) Kubernetes compute, including seamless Azure Kubernetes Service (AKS) integration and Azure Arc-enabled Machine Learning. With a simple cluster extension deployment on an AKS or Azure Arc-enabled Kubernetes (Arc Kubernetes) cluster, the Kubernetes cluster is seamlessly supported in Azure ML for running training or inference workloads. In addition, Azure ML capabilities for streamlining the full ML lifecycle and automation with MLOps become instantly available to enterprise teams of professionals. Azure ML Kubernetes compute enables enterprises to operationalize ML at scale across different infrastructures and addresses different needs with a seamless experience across the Azure ML CLI v2, Python SDK v2 (preview), and Studio UI. Here are some of the ways customers can benefit:

- Deploy ML workloads on a customer-managed AKS cluster to gain more security and control and meet compliance requirements.
- Run Azure ML workloads on an Arc Kubernetes cluster right where the data lives to meet data residency, security, and privacy requirements, or to harness existing IT investments.
- Use Arc Kubernetes clusters to deploy ML workloads, or aspects of the ML lifecycle, across multiple public clouds.
- Fully automate hybrid workloads in the cloud and on-premises to leverage the advantages of different infrastructures and IT investments.

How it works

The IT-operations team and the data-science team are both integral parts of the broader ML team. By letting the IT-operations team manage the Kubernetes compute setup, Azure ML creates a seamless compute experience for the data-science team, which does not need to learn or use Kubernetes directly. The design of Azure ML Kubernetes compute also helps the IT-operations team leverage native Kubernetes concepts such as namespaces, node selectors, and resource requests/limits for ML compute utilization and optimization.
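As a concrete sketch of the compute setup the IT-operations team performs, the two main steps can be driven from the Azure CLI. All cluster, resource-group, workspace, and compute names below are placeholders, not from this post; treat this as a hedged sketch rather than a definitive walkthrough.

```shell
# Deploy the Azure ML cluster extension onto an existing AKS cluster
# (for an Arc Kubernetes cluster, use --cluster-type connectedClusters).
az k8s-extension create \
  --name azureml-extension \
  --extension-type Microsoft.AzureML.Kubernetes \
  --cluster-type managedClusters \
  --cluster-name demo-aks \
  --resource-group demo-rg \
  --config enableTraining=True enableInference=True

# Attach the cluster to an Azure ML workspace as a Kubernetes compute target,
# optionally scoping workloads to a Kubernetes namespace.
az ml compute attach \
  --resource-group demo-rg \
  --workspace-name demo-ws \
  --type Kubernetes \
  --name k8s-compute \
  --resource-id "/subscriptions/<subscription-id>/resourceGroups/demo-rg/providers/Microsoft.ContainerService/managedClusters/demo-aks" \
  --namespace ml-workloads
```

Once attached, the data-science team can reference the compute target by name (here, k8s-compute) from the CLI v2, SDK, or Studio UI without ever touching the cluster itself.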
The data-science team can now focus on models and work with productivity tools such as the Azure ML CLI v2, Python SDK v2, Studio UI, and Jupyter notebooks. It is easy to enable and use an existing Kubernetes cluster for Azure ML workloads with a few simple steps:

IT-operations team. The IT-operations team is responsible for the first three steps: prepare an AKS or Arc Kubernetes cluster, deploy the Azure ML cluster extension, and attach the Kubernetes cluster to an Azure ML workspace. In addition to these essential compute-setup steps, the IT-operations team also uses familiar tools such as the Azure CLI or kubectl to take care of the following tasks for the data-science team:

- Network and security configuration, such as outbound proxy server connections or Azure Firewall configuration, Azure ML inference router (azureml-fe) setup, SSL/TLS termination, and no-public-IP with VNET.
- Creating and managing instance types for different ML workload scenarios to achieve efficient compute resource utilization.
- Troubleshooting workload issues related to the Kubernetes cluster.

Data-science team. Once the IT-operations team finishes compute setup and compute target creation, the data-science team can discover the list of available compute targets and instance types in the Azure ML workspace to use for training or inference workloads. Data scientists specify the compute target name and instance type name using their preferred tools or APIs, such as the Azure ML CLI v2, Python SDK v2, or Studio UI.

Recommended best practices

- Separate responsibilities between the IT-operations team and the data-science team. As mentioned above, managing your own compute and infrastructure for ML workloads is a complicated task; it is best done by the IT-operations team so that the data-science team can focus on ML models, for organizational efficiency.
- Create and manage instance types for different ML workload scenarios. Each ML workload uses different amounts of compute resources, such as CPU/GPU and memory.
Azure ML implements instance types as a Kubernetes custom resource definition (CRD) with nodeSelector and resource request/limit properties. With a carefully curated list of instance types, the IT-operations team can target ML workloads to specific nodes and manage compute resource utilization efficiently.

- Share the same Kubernetes cluster across multiple Azure ML workspaces. You can attach a Kubernetes cluster multiple times to the same Azure ML workspace or to different workspaces, creating multiple compute targets in one workspace or across workspaces. Since many customers organize data-science projects around Azure ML workspaces, multiple projects can now share the same Kubernetes cluster. This significantly reduces ML infrastructure management overhead as well as IT cost.
- Isolate team or project workloads using Kubernetes namespaces. When you attach a Kubernetes cluster to an Azure ML workspace, you can specify a Kubernetes namespace for the compute target, and all workloads run by the compute target will be placed under the specified namespace.

New Azure ML use patterns enabled

Azure Arc-enabled ML enables teams of ML professionals to build, train, and deploy models on any infrastructure on-premises and across multiple clouds using Kubernetes. This opens a variety of new use patterns previously impractical in a cloud-only setting. These patterns differ in where the training data resides, the motivation driving each pattern, and how the pattern is realized using Azure ML and infrastructure setup.

Get started today

To get started with Azure Machine Learning Kubernetes compute, please visit the Azure ML documentation and GitHub repo, where you can find detailed instructions for setting up a Kubernetes cluster for Azure Machine Learning and for training or deploying models with a variety of Azure ML examples.
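To illustrate the instance-type mechanism described above, a minimal sketch of the CRD shape (following the structure described in the Azure ML Kubernetes compute documentation) is shown below; the instance-type name, node label, and resource sizes are illustrative assumptions, not values from this post.

```shell
# Create an illustrative instance type that pins ML jobs to GPU nodes
# and caps their resource usage.
kubectl apply -f - <<EOF
apiVersion: amlarc.azureml.com/v1alpha1
kind: InstanceType
metadata:
  name: gpu-small            # illustrative name
spec:
  nodeSelector:
    accelerator: nvidia      # target only nodes carrying this label
  resources:
    requests:
      cpu: "2"
      memory: 4Gi
    limits:
      cpu: "2"
      memory: 4Gi
      nvidia.com/gpu: 1
EOF
```

A data scientist could then select this instance type by name, for example in a CLI v2 job specification with `compute: azureml:<compute-target-name>` and `resources: instance_type: gpu-small`.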
Lastly, visit Azure Hybrid, Multicloud, and Edge Day and watch “Real time insights from edge to cloud”, where we announced the GA.

How do AKS and AKS on Azure Stack HCI compare?
This blog is an update to the original blog comparing AKS in Azure and on Azure Stack HCI, published a year ago. Since then, we’ve released multiple features and fixes aimed at improving AKS consistency between Azure and on-premises that warranted a fresh blog 😊 Features in preview are marked with (*). In the comparison below, “AKS hybrid” covers AKS on Azure Stack HCI and AKS on Windows Server; “AKS” is the Azure-hosted service.

Kubernetes management cluster / AKS host
- AKS hybrid: A Cluster API-based hosted Kubernetes offering. A management Kubernetes cluster, which runs in customer datacenters and is managed by the infrastructure administrator, is used to manage Kubernetes workload clusters.
- AKS: A managed Kubernetes offering. The AKS control plane is hosted and managed by Microsoft; AKS worker nodes are created in customer subscriptions.

Kubernetes target cluster (lifecycle operations)

Cloud Native Computing Foundation (CNCF) certification
- AKS hybrid: Yes
- AKS: Yes

Who manages the cluster?
- AKS hybrid: Managed by you
- AKS: Managed by you

Where is the cluster located?
- AKS hybrid: In your datacenter alongside your AKS hybrid management cluster, on Azure Stack HCI 21H2, Windows Server 2019 Datacenter, Windows Server 2022 Datacenter, Windows 10/11 IoT Enterprise*, Windows 10/11 Enterprise*, or Windows 10/11 Pro*
- AKS: Azure cloud

K8s cluster lifecycle management tools (create, scale, update, and delete clusters)
- AKS hybrid: PowerShell (PS), Windows Admin Center (WAC), Az CLI*, Azure Portal*, ARM templates*
- AKS: Az CLI, Az PowerShell, Azure Portal, Bicep, ARM templates

Can you use kubectl and other open-source Kubernetes tools?
- AKS hybrid: Yes
- AKS: Yes

Workload cluster updates
- AKS hybrid: K8s version upgrades through PowerShell or WAC, initiated by you; node OS image updates initiated by you. Updates in a target cluster happen at the cluster level (control plane nodes plus node pools).
- AKS: Azure CLI, Azure PS, Portal, ARM templates, or GitHub Actions; OS image patch upgrades; automatic upgrades; planned maintenance windows

Kubernetes versions
- AKS hybrid: Continuous updates to supported Kubernetes versions. For the latest version support, visit AKS hybrid releases on GitHub.
- AKS: Continuous updates to supported Kubernetes versions. For the latest version support, run az aks get-versions.

Can you start/stop K8s clusters to save costs?
- AKS hybrid: Yes, by stopping the underlying failover cluster
- AKS: Yes

Azure Fleet Manager integration
- AKS hybrid: Not yet
- AKS: Yes*

Terraform support
- AKS hybrid: Not yet
- AKS: Yes

Node pools

Do you support running Linux and Windows node pools in the same cluster?
- AKS hybrid: Yes! Linux nodes: CBL-Mariner. Windows nodes: Windows Server 2019 Datacenter, Windows Server 2022 Datacenter
- AKS: Yes. Linux nodes: Ubuntu 18.04, CBL-Mariner. Windows nodes: Windows Server 2019 Datacenter, Windows Server 2022 Datacenter

What’s your container runtime?
- AKS hybrid: containerd on both Linux and Windows nodes
- AKS: containerd on both Linux and Windows nodes

Can you scale node pools?
- AKS hybrid: Manually; cluster autoscaler; vertical pod autoscaler
- AKS: Manually; cluster autoscaler; vertical pod autoscaler

Horizontal pod autoscaler
- AKS hybrid: Yes
- AKS: Yes

What about virtual nodes (Azure Container Instances)?
- AKS hybrid: No
- AKS: Yes

Can you upgrade a node pool?
- AKS hybrid: Upgrading individual node pools is not supported; all upgrades happen at the K8s cluster level.
- AKS: You can perform node pool-specific upgrades in an AKS cluster.

GPU-enabled node pools
- AKS hybrid: Yes*
- AKS: Yes

Azure Container Registry
- AKS hybrid: Yes
- AKS: Yes

KEDA support
- AKS hybrid: Not yet
- AKS: Yes*

Networking

Who creates and manages the networks?
- AKS hybrid: All networks (for both the management cluster and target K8s clusters) are created and managed by you.
- AKS: By default, Azure creates the virtual network and subnet for you. You can also choose an existing virtual network for your AKS clusters.

What types of network options are supported?
- AKS hybrid: DHCP networks with/without VLAN ID; static IP networks with/without VLAN ID; SDN support for AKS on Azure Stack HCI
- AKS: Bring your own Azure virtual network for AKS clusters
Load balancers
- AKS hybrid: HAProxy (default), which runs in a separate VM in the target K8s cluster; kubeVIP, which runs as a K8s service on the control plane K8s node; or bring your own load balancer. Load balancers are always given IP addresses from a customer VIP pool to ensure application and K8s cluster availability, and you can create multiple instances of a load balancer (active-passive) for high availability.
- AKS: Azure Load Balancer (Basic or Standard SKU); can also use an internal load balancer. By default, the load balancer IP address is tied to the load balancer ARM resource; you can also assign a static public IP address directly to your Kubernetes service.

CNI/network plugin
- AKS hybrid: Calico (default). Note: network policies are covered in the Security and Authentication section.
- AKS: Azure CNI, Calico, Azure CNI Overlay, or bring your own CNI. Note: network policies are covered in the Security and Authentication section.

Ingress controllers
- AKS hybrid: No, but you can use 3rd-party add-ons such as Nginx. 3rd-party add-ons are not covered by Microsoft’s support policy.
- AKS: Support for Nginx with the web app routing add-on.

Egress controls
- AKS hybrid: Egress is controlled by network policies; by default, all outbound traffic from pods is blocked. You can deploy additional egress controls and policies.
- AKS: You can use Azure Policy and NSGs to control network flow, or use Calico policies. You can also use Azure Firewall and Azure security groups.

Egress types
- AKS hybrid: Egress types and options depend on your network architecture.
- AKS: Azure Load Balancer, managed NAT gateway, and user-defined routes are the supported egress types.

Customize CoreDNS
- AKS hybrid: Allowed
- AKS: Allowed

Service mesh
- AKS hybrid: Yes, Open Service Mesh (OSM) through Azure Arc-enabled Kubernetes; 3rd-party add-ons such as Istio, which are not covered by Microsoft’s support policy.
- AKS: Open Service Mesh; Marketplace offering available for Istio

Storage

Where is the storage provisioned?
- AKS hybrid: On-premises
- AKS: Azure Storage. Azure Files and Azure Disk premium CSI drivers are deployed by default; you can also deploy any custom storage class.
What types of persistent volumes are supported?
- AKS hybrid: ReadWriteOnce, ReadWriteMany
- AKS: ReadWriteOnce, ReadWriteMany

Do the storage drivers support the Container Storage Interface (CSI)?
- AKS hybrid: Yes
- AKS: Yes

Is dynamic provisioning supported?
- AKS hybrid: Yes
- AKS: Yes

Is volume resizing supported?
- AKS hybrid: Yes
- AKS: Yes

Are volume snapshots supported?
- AKS hybrid: No
- AKS: Yes

Security and authentication

How do you access your Kubernetes cluster?
- AKS hybrid: Certificate-based kubeconfig (default); AD-based kubeconfig; Azure AD and Kubernetes RBAC; Azure AD and Azure RBAC*
- AKS: Certificate-based kubeconfig (default); Azure AD and Kubernetes RBAC; Azure AD and Azure RBAC

Network policies
- AKS hybrid: Yes, we support Calico network policies.
- AKS: Yes, we support Calico and Azure CNI network policies.

Limit source networks that can access the API server
- AKS hybrid: Yes, by using VIP pools.
- AKS: Yes, by using the “--api-server-authorized-ip-ranges” parameter and private clusters.

Certificate rotation and secrets encryption
- AKS hybrid: Yes
- AKS: Yes

Support for private clusters
- AKS hybrid: Not supported yet
- AKS: Yes! You can create private AKS clusters.

Secrets Store CSI driver
- AKS hybrid: Yes
- AKS: Yes

Support for disk encryption
- AKS hybrid: Yes, via BitLocker
- AKS: Disks are encrypted on the storage side with platform-managed keys, with support for customer-provided keys. Hosts and locally attached disks can also be encrypted with encryption at host.

gMSA v2 support for Windows containers
- AKS hybrid: Yes
- AKS: Yes

Azure Policy
- AKS hybrid: Yes, through Azure Arc-enabled K8s
- AKS: Yes

Azure Defender
- AKS hybrid: Yes, through Azure Arc-enabled K8s*
- AKS: Yes

Monitoring and logging

Collect logs
- AKS hybrid: Yes, through PS and WAC. All logs (management cluster, control plane nodes, target K8s clusters) are collected.
- AKS: Yes, through the Azure Portal, Az CLI, etc.

Support for Azure Monitor
- AKS hybrid: Yes, through Azure Arc-enabled K8s.
- AKS: Yes

3rd-party add-ons for monitoring and logging
- AKS: Works with Azure managed Prometheus* and Azure managed Grafana*

Subscribe to Azure Event Grid events
- AKS hybrid: Yes, via Azure Arc-enabled Kubernetes*
- AKS: Yes

Develop and run applications

Azure App Service
- AKS hybrid: Yes, through Azure Arc-enabled K8s*
- AKS: Yes

Azure Functions
- AKS hybrid: Yes, through Azure Arc-enabled K8s*
- AKS: Yes

Azure Logic Apps
- AKS hybrid: Yes, through Azure Arc-enabled K8s*
- AKS: You can directly create App Service, Functions, and Logic Apps on Azure instead of creating them on AKS.

Develop applications using Helm
- AKS hybrid: Yes
- AKS: Yes

Develop applications using Dapr
- AKS hybrid: Yes, through Azure Arc-enabled K8s*
- AKS: Yes

DevOps
- AKS hybrid: Azure DevOps via Azure Arc-enabled K8s; GitHub Actions via Azure Arc-enabled K8s; GitOps Flux v2 via Azure Arc-enabled K8s (free for AKS-HCI customers); 3rd-party add-on ArgoCD, which is not covered by Microsoft’s support policy.
- AKS: Azure DevOps; GitHub Actions; GitOps Flux v2

Product pricing
- AKS hybrid: If you have Azure Hybrid Benefit, you can use AKS-HCI at no additional cost. Without Azure Hybrid Benefit, pricing is based on the number of workload cluster vCPUs; the management cluster, control plane nodes, and load balancers are free.
- AKS: Unlimited free clusters; pay for on-demand compute of the worker nodes. A paid tier is available with an uptime SLA and support for 5,000 nodes.

Azure support
- AKS hybrid: AKS-HCI is supported by the Windows Server support organization, aligned with Arc for Kubernetes and Azure Stack HCI. You can open support requests through the Azure portal and other support channels such as Premier Support.
- AKS: AKS in Azure is supported through enterprise-class support from the Azure team. You can open support requests in the Azure portal.

SLA
- AKS hybrid: We do not offer SLAs, since AKS-HCI runs in your environment.
- AKS: Paid uptime SLA for production clusters, with a fixed cost on the API plus worker node compute, storage, and networking costs.

Azure Arc enabled Kubernetes is now Generally Available!
We are excited to bring Azure Arc enabled Kubernetes to general availability at our Spring Ignite event. Azure Arc enabled Kubernetes enables you to attach any CNCF-conformant Kubernetes cluster to Azure for management. Your clusters can run anywhere, and if they have connectivity to Azure, onboarding is as easy as deploying the Azure Arc cluster agents using the Azure CLI extension.

Run Azure Machine Learning anywhere - on hybrid and in multi-cloud with Azure Arc
Over the last couple of years, Azure customers have leaned towards Kubernetes for their on-premises needs. Kubernetes lets them leverage cloud-native technologies to innovate faster and take advantage of portability across the cloud and at the edge. We listened and launched Azure Arc-enabled Kubernetes to integrate customers’ Kubernetes assets into Azure and centrally govern and manage Kubernetes clusters, including Azure Kubernetes Service (AKS). We have now taken it one step further, leveraging Kubernetes to enable training ML (Machine Learning) models using Azure Machine Learning.

Run machine learning seamlessly across on-premises, multi-cloud, and the edge

Customers can now run their ML training on any Kubernetes target cluster in the Azure cloud, GCP, AWS, edge devices, or on-premises through Azure Arc-enabled Kubernetes. This lets customers use excess capacity either in the cloud or on-premises, increasing operational efficiency. With a few clicks, they can enable the Azure Machine Learning agent to run on any OSS Kubernetes cluster that Azure Arc supports. This, along with other key design patterns, ensures a seamless setup of the agent on any OSS Kubernetes cluster, such as AKS, Red Hat OpenShift, or managed Kubernetes services from other cloud providers. There are multiple benefits to this design, including using core Kubernetes concepts to set up and configure a cluster and running cloud-native tools such as GitOps. Once the agent is successfully deployed, IT operators can grant data scientists access either to the entire cluster or to a slice of it, using native concepts such as namespaces, node selectors, and taints/tolerations. The configuration and lifecycle management of the cluster (setting up autoscaling, upgrading to newer Kubernetes versions) is transparent, flexible, and the responsibility of the customer’s IT-operations team.
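To make the flow above concrete, a sketch of connecting an existing cluster to Azure Arc and then carving out a slice of it for the data-science team might look like the following; the cluster, resource-group, namespace, and node names are placeholders, not from this post.

```shell
# Connect the CNCF-conformant cluster behind the current kubeconfig
# context to Azure Arc.
az extension add --name connectedk8s
az connectedk8s connect \
  --name my-arc-cluster \
  --resource-group arc-demo-rg \
  --location eastus

# Carve out a slice for ML: a dedicated namespace plus labeled, tainted
# nodes that only workloads with matching tolerations will schedule onto.
kubectl create namespace ml-team
kubectl label nodes worker-3 worker-4 workload=ml
kubectl taint nodes worker-3 worker-4 dedicated=ml:NoSchedule
```

The namespace gives the IT operator a natural boundary for quotas and RBAC, while the taint keeps non-ML workloads off the reserved nodes.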
Built using familiar Kubernetes and cloud-native concepts

The core of the offering is an agent that extends the Kubernetes API. Once it is set up with a single command, the IT operator can view the resulting Kubernetes objects (operators for TensorFlow, PyTorch, MPI, etc.) using familiar tools such as kubectl.

Data scientists can continue to use familiar tools to run training jobs

One of the core principles we adhered to was splitting the IT-operator persona from the data-scientist one, with separate roles and responsibilities. Data scientists do not need to know anything about, or learn, Kubernetes. To them, it is yet another compute target to which they can submit their training jobs. They use familiar tools such as the Azure Machine Learning studio, the Azure Machine Learning Python SDK (Software Development Kit), or OSS tools (Jupyter notebooks, TensorFlow, PyTorch, etc.), spending their time solving machine learning problems rather than worrying about the infrastructure they are running on.

Ensure consistency across workloads with unified operations, management, and security

Kubernetes comes with its own set of challenges around security, management, and governance. The Azure Machine Learning team and the Azure Arc-enabled Kubernetes team have worked together to ensure not only that an IT operator can centrally monitor and apply policies to your workloads on Arc infrastructure, but also that the interaction with the Azure Machine Learning service is secure and compliant. This, along with the consistent experience across cloud and on-premises clusters, means you no longer need to lift and shift machine learning workloads; you can operate them seamlessly across both. You can choose to run only in the cloud to take advantage of the scale, or run on excess on-premises capacity, while leveraging the single pane of glass Azure Arc provides to manage all your on-premises infrastructure. We welcome you to take advantage of Arc-enabled machine learning.
Please sign up to access the preview here. We look forward to your feedback so that we can continue to build a solution that meets your organizational needs.

Resources

Learn more about Azure Machine Learning. Try Azure Machine Learning today.

New hybrid deployment options for AKS clusters from cloud to edge, enabled by Azure Arc
This Ignite, we're announcing new hybrid deployment options for AKS clusters from cloud to edge, enabled by Azure Arc. Read along to find out how managing AKS on-premises is even easier and more cost-effective!

Digital transformation at SKF through data driven manufacturing approach using Azure Arc enabled SQL
Read more about how leading manufacturer SKF delivered data-driven, unified control and management of all IT infrastructure and data across all factory IT/OT staff needs, using Azure Data Services in a hybrid cloud environment that will be deployed in over 90 factories globally.

Announcing landing zone accelerator for Azure Arc-enabled Kubernetes
Following our release a few months back of the new landing zone accelerator for Azure Arc-enabled servers, today we’re launching the Azure Arc-enabled Kubernetes landing zone accelerator within the Azure Cloud Adoption Framework.