multicloud
Technology & Services partners are jumping on the bandwagon of Azure Arc
The Azure Arc partner ecosystem offers customers validated, enterprise-grade solutions to run Azure on-premises and at the edge. Launched at Microsoft Ignite 2021 with support from industry-leading OEMs, hardware providers, platform providers, and ISVs, the ecosystem continues to grow: we are happy to announce the expansion of the Azure Arc network of trusted partners and validated platforms to data services.

Realizing Machine Learning anywhere with Azure Kubernetes Service and Arc-enabled Machine Learning
We are thrilled to announce the general availability of Azure Machine Learning (Azure ML) Kubernetes compute, including support for seamless Azure Kubernetes Service (AKS) integration and Azure Arc-enabled Machine Learning. With a simple cluster extension deployment on an AKS or Azure Arc-enabled Kubernetes (Arc Kubernetes) cluster, the Kubernetes cluster is seamlessly supported in Azure ML for running training or inference workloads. In addition, Azure ML capabilities for streamlining the full ML lifecycle and for automation with MLOps become instantly available to enterprise teams of professionals. Azure ML Kubernetes compute empowers enterprises to operationalize ML at scale across different infrastructures, addressing different needs with the seamless experience of Azure ML CLI v2, Python SDK v2 (preview), and the Studio UI. Here are some of the capabilities customers can benefit from:

- Deploy ML workloads on a customer-managed AKS cluster to gain more security and control to meet compliance requirements.
- Run Azure ML workloads on an Arc Kubernetes cluster right where the data lives to meet data-residency, security, and privacy requirements, or to harness existing IT investments.
- Use Arc Kubernetes clusters to deploy ML workloads, or aspects of the ML lifecycle, across multiple public clouds.
- Fully automate hybrid workloads in the cloud and on-premises to leverage the advantages of different infrastructures and IT investments.

How it works

The IT-operations team and the data-science team are both integral parts of the broader ML team. By letting the IT-operations team manage the Kubernetes compute setup, Azure ML creates a seamless compute experience for the data-science team, which does not need to learn or use Kubernetes directly. The design of Azure ML Kubernetes compute also helps the IT-operations team leverage native Kubernetes concepts such as namespaces, node selectors, and resource requests/limits for ML compute utilization and optimization.
The data-science team can now focus on models and work with productivity tools such as Azure ML CLI v2, Python SDK v2, the Studio UI, and Jupyter notebooks. It is easy to enable and use an existing Kubernetes cluster for Azure ML workloads with a few simple steps:

IT-operations team. The IT-operations team is responsible for the first three steps: prepare an AKS or Arc Kubernetes cluster, deploy the Azure ML cluster extension, and attach the Kubernetes cluster to an Azure ML workspace. In addition to these essential compute setup steps, the IT-operations team also uses familiar tools, such as the Azure CLI or kubectl, to take care of the following tasks for the data-science team:

- Network and security configuration, such as outbound proxy server connections, Azure Firewall configuration, Azure ML inference router (azureml-fe) setup, SSL/TLS termination, and no-public-IP with VNet.
- Creating and managing instance types for different ML workload scenarios to achieve efficient compute resource utilization.
- Troubleshooting workload issues related to the Kubernetes cluster.

Data-science team. Once the IT-operations team finishes the compute setup and creates the compute target(s), the data-science team can discover the available compute targets and instance types in the Azure ML workspace and use them for training or inference workloads. Data scientists specify a compute target name and instance type name using their preferred tools or APIs, such as Azure ML CLI v2, Python SDK v2, or the Studio UI.

Recommended best practices

Separation of responsibilities between the IT-operations team and the data-science team. As mentioned above, managing your own compute and infrastructure for ML workloads is a complicated task, and it is best done by the IT-operations team so that the data-science team can focus on ML models for organizational efficiency.

Create and manage instance types for different ML workload scenarios. Each ML workload uses different amounts of compute resources, such as CPU/GPU and memory.
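The essential setup steps above (prepare a cluster, deploy the extension, attach to a workspace) can be sketched with the Azure CLI. This is a hedged sketch, not a definitive walkthrough: the resource group, cluster, workspace, and compute names are placeholders, and the exact flags may vary by CLI and extension version.

```shell
# 1. Connect an existing Kubernetes cluster to Azure Arc
#    (skip this step for an AKS cluster, which already lives in Azure).
az connectedk8s connect --name my-cluster --resource-group my-rg

# 2. Deploy the Azure ML cluster extension onto the connected cluster.
az k8s-extension create --name azureml \
  --extension-type Microsoft.AzureML.Kubernetes \
  --cluster-type connectedClusters \
  --cluster-name my-cluster --resource-group my-rg \
  --scope cluster \
  --config enableTraining=True enableInference=True

# 3. Attach the cluster to an Azure ML workspace as a compute target,
#    optionally scoping all workloads to one Kubernetes namespace.
az ml compute attach --resource-group my-rg --workspace-name my-ws \
  --type Kubernetes --name k8s-compute \
  --resource-id "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Kubernetes/connectedClusters/my-cluster" \
  --namespace azureml --identity-type SystemAssigned
```

After step 3, "k8s-compute" shows up in the workspace as an ordinary compute target that data scientists can select by name.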
Azure ML implements the instance type as a Kubernetes custom resource definition (CRD) with nodeSelector and resource request/limit properties. With a carefully curated list of instance types, the IT-operations team can target ML workloads at specific nodes and manage compute resource utilization efficiently.

Multiple Azure ML workspaces share the same Kubernetes cluster. You can attach a Kubernetes cluster multiple times to the same Azure ML workspace or to different workspaces, creating multiple compute targets in one workspace or across workspaces. Since many customers organize data-science projects around Azure ML workspaces, multiple projects can now share the same Kubernetes cluster. This significantly reduces ML infrastructure management overhead and saves IT cost.

Team/project workload isolation using Kubernetes namespaces. When you attach a Kubernetes cluster to an Azure ML workspace, you can specify a Kubernetes namespace for the compute target, and all workloads run by that compute target will be placed under the specified namespace.

New Azure ML use patterns enabled

Azure Arc-enabled ML enables teams of ML professionals to build, train, and deploy models on any infrastructure on-premises and across multiple clouds using Kubernetes. This opens a variety of new use patterns previously not possible in a cloud-only setting. The table below summarizes the new use patterns enabled by Azure ML Kubernetes compute, including where the training data resides in each pattern, the motivation driving it, and how it is realized with Azure ML and the infrastructure setup.

Get started today

To get started with Azure Machine Learning Kubernetes compute, please visit the Azure ML documentation and GitHub repo, where you can find detailed instructions for setting up a Kubernetes cluster for Azure Machine Learning and for training or deploying models with a variety of Azure ML examples.
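As an illustrative sketch of the instance type CRD described above: the name, node selector, and resource figures below are placeholders, and the exact apiVersion may differ between extension releases.

```yaml
apiVersion: amlarc.azureml.com/v1alpha1
kind: InstanceType
metadata:
  name: gpu-small
spec:
  nodeSelector:
    agentpool: gpupool        # steer workloads onto a specific node pool
  resources:
    requests:
      cpu: "1"
      memory: 4Gi
    limits:
      cpu: "2"
      memory: 8Gi
      nvidia.com/gpu: 1       # cap each workload at a single GPU
```

Data scientists then reference the instance type by name (here, gpu-small) when they submit a job, and the workload inherits the node selector and resource requests/limits without touching Kubernetes directly.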
Lastly, visit Azure Hybrid, Multicloud, and Edge Day and watch "Real-time insights from edge to cloud," where we announced the general availability.

Azure Arc-enabled SQL Managed Instance Business Critical now generally available!
Today at the Microsoft Build 2022 conference, we announced the general availability of the Azure Arc-enabled SQL Managed Instance Business Critical tier. This is the second major release of Arc-enabled SQL MI and SQL Server on Arc-enabled servers, which will help SQL Server customers globally achieve business continuity while running their data workloads at the edge connected to Azure. In this blog, we will tour the new improvements and the benefits of the Business Critical and General Purpose tiers.

Run Azure Machine Learning anywhere - on hybrid and in multi-cloud with Azure Arc
Over the last couple of years, Azure customers have leaned towards Kubernetes for their on-premises needs. Kubernetes allows them to leverage cloud-native technologies to innovate faster and take advantage of portability across the cloud and at the edge. We listened and launched Azure Arc-enabled Kubernetes to integrate customers' Kubernetes assets into Azure and to centrally govern and manage Kubernetes clusters, including Azure Kubernetes Service (AKS). We have now taken it one step further, leveraging Kubernetes to enable training ML (machine learning) models with Azure Machine Learning.

Run machine learning seamlessly on-premises, across multiple clouds, and at the edge

Customers can now run their ML training on any target Kubernetes cluster in the Azure cloud, GCP, AWS, on edge devices, or on-premises through Azure Arc-enabled Kubernetes. This allows customers to use excess capacity either in the cloud or on-premises, increasing operational efficiency. With a few clicks, they can enable the Azure Machine Learning agent to run on any OSS Kubernetes cluster that Azure Arc supports. This, along with other key design patterns, ensures a seamless setup of the agent on any OSS Kubernetes cluster such as AKS, Red Hat OpenShift, managed Kubernetes services from other cloud providers, etc. There are multiple benefits of this design, including using core Kubernetes concepts to set up and configure a cluster and adopting cloud-native practices such as GitOps. Once the agent is successfully deployed, IT operators can grant data scientists access to either the entire cluster or a slice of it, using native concepts such as namespaces, node selectors, and taints/tolerations. The configuration and lifecycle management of the cluster (setting up autoscaling, upgrading to newer Kubernetes versions) is transparent, flexible, and the responsibility of the customer's IT-operations team.
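Granting a team a slice of the cluster with the native Kubernetes concepts mentioned above might look like the following sketch; the namespace, node, and label names are placeholders, not prescribed values.

```shell
# Create a namespace to hold one team's ML workloads.
kubectl create namespace ml-team-a

# Label a pool of worker nodes so ML jobs can be steered to them
# via node selectors.
kubectl label nodes worker-1 worker-2 workload=ml

# Taint GPU nodes so only workloads that explicitly tolerate the
# taint are scheduled there.
kubectl taint nodes gpu-node-1 sku=gpu:NoSchedule
```

The IT operator keeps full control of scheduling and capacity, while data scientists simply submit jobs against the compute target scoped to that namespace.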
Built using familiar Kubernetes and cloud-native concepts

The core of the offering is an agent that extends the Kubernetes API. Once it is set up with a single command, the IT operator can view the resulting Kubernetes objects (operators for TensorFlow, PyTorch, MPI, etc.) using familiar tools such as kubectl.

Data scientists can continue to use familiar tools to run training jobs

One of the core principles we adhered to was separating the IT-operator persona and the data-scientist persona, with distinct roles and responsibilities. Data scientists do not need to know anything about, or learn, Kubernetes. To them, it is just another compute target to which they can submit their training jobs. They use familiar tools such as the Azure Machine Learning studio, the Azure Machine Learning Python SDK (Software Development Kit), or OSS tools (Jupyter notebooks, TensorFlow, PyTorch, etc.), spending their time solving machine learning problems rather than worrying about the infrastructure they are running on.

Ensure consistency across workloads with unified operations, management, and security

Kubernetes comes with its own set of challenges around security, management, and governance. The Azure Machine Learning team and the Azure Arc-enabled Kubernetes team have worked together to ensure that an IT operator is not only able to centrally monitor and apply policies to workloads on Arc infrastructure, but also that the interaction with the Azure Machine Learning service is secure and compliant. This, together with a consistent experience across cloud and on-premises clusters, means you no longer need to lift and shift machine learning workloads; you can operate them seamlessly across both. You can choose to run only in the cloud to take advantage of its scale, or run on excess on-premises capacity, while leveraging the single pane of glass Azure Arc provides to manage all your on-premises infrastructure. We welcome you to take advantage of Arc-enabled machine learning.
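From the data scientist's side, submitting a training run to an Arc-attached cluster looks like any other compute target. A minimal sketch with the Azure ML Python SDK, assuming a local workspace config.json, a training script under ./src, and an attached compute target named "arc-k8s-compute" (all of these names are placeholders):

```python
from azureml.core import Environment, Experiment, ScriptRunConfig, Workspace

# Load the workspace from a local config.json downloaded from the portal.
ws = Workspace.from_config()

# The Arc-attached cluster appears as just another compute target.
compute_target = ws.compute_targets["arc-k8s-compute"]

# Define the training environment and the script to run.
env = Environment.from_conda_specification("train-env", "environment.yml")
src = ScriptRunConfig(
    source_directory="./src",
    script="train.py",
    compute_target=compute_target,
    environment=env,
)

# Submit and stream logs, exactly as with cloud-hosted compute.
run = Experiment(ws, "arc-training-demo").submit(src)
run.wait_for_completion(show_output=True)
```

Nothing in the script refers to Kubernetes; swapping the compute target name is enough to move the same job between cloud and on-premises clusters.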
Please sign up to access the preview here. We look forward to your feedback so that we can continue to build a solution that meets your organizational needs.

Resources

Learn more about Azure Machine Learning. Try Azure Machine Learning today.

Evaluate SQL Server configuration using best practices assessment for Azure Arc-enabled SQL Server
Azure Arc-enabled SQL Server extends Azure services to SQL Server instances hosted outside of Azure: in your datacenter, at the edge, or in a multi-cloud environment. Azure Arc-enabled SQL Server provides a single pane of glass for all SQL deployments, irrespective of their location. This article provides instructions for using best practices assessment on an instance of Azure Arc-enabled SQL Server.

Announcing landing zone accelerator for Azure Arc-enabled Kubernetes
Following our release a few months back of the new landing zone accelerator for Azure Arc-enabled servers, today we're launching the Azure Arc-enabled Kubernetes landing zone accelerator within the Azure Cloud Adoption Framework.

In preview: SSH access to Azure Arc-enabled servers
Remote server management is a critical tool for server administrators. Whether you are running automation or working interactively, SSH-based remoting is a standard way to connect to your remote server. Starting today, you can securely SSH into your Arc-enabled servers without a public IP address or additional open ports on an external network!

Directly connected mode for Azure Arc-enabled data services is generally available
Manage and operate your data centrally from Azure using Azure Arc-enabled data services in directly connected mode, with the benefits of Azure services and the control of an on-premises cloud.

Announcing Public Preview of Viewing SQL Server Databases - Azure Arc
We are excited to announce the public preview of viewing databases for Azure Arc-enabled SQL Server. The feature surfaces all the active databases and their configurations for each Arc-enabled SQL Server in Azure.

Azure Arc-enabled Data Services Overview
Arc-enabled data services bring Azure benefits outside of Azure, to your infrastructure wherever it is. Arc-enabled data services provide PaaS-like capabilities such as built-in scalability, automated backups, monitoring, high availability (HA), and disaster recovery (DR).
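Once an Arc data controller is in place, deploying a managed instance with those built-in capabilities is a short CLI call. A hedged sketch, assuming the arcdata CLI extension and a placeholder namespace (arc-data) and instance name; exact flags may vary by extension version:

```shell
# Deploy an Azure Arc-enabled SQL Managed Instance into the namespace
# where an Arc data controller is already running, using the
# Kubernetes-native (indirect) deployment path.
az sql mi-arc create --name sql-mi-1 \
  --k8s-namespace arc-data \
  --tier GeneralPurpose \
  --cores-request 2 --memory-request 4Gi \
  --use-k8s
```

The service then handles backups, monitoring, and HA for the instance, per the PaaS-like capabilities described above.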