
AI Platform Blog

New controls for model governance and secure access to on-premises or custom VNET resources

meera-kurup
Microsoft
Oct 31, 2024

New enterprise security and governance features in Azure AI for October 2024

At Microsoft, we’re focused on helping customers build and use AI that is trustworthy, meaning AI that is secure, safe, and private. This month, we’re pleased to highlight new security capabilities that support enterprise readiness, so organizations can build and scale GenAI solutions with confidence:

  • Enhanced model governance: Control which GenAI models are available for deployment from the Azure AI Foundry model catalog with new built-in and custom policies
  • Secure access to hybrid resources: Securely access on-premises and custom VNET resources from your managed VNET with Application Gateway for your training, fine-tuning, and inferencing needs

Below, we share more information about these enterprise features and guidance to help you get started.

 

Control which GenAI models are available for deployment from the Azure AI Foundry model catalog with new built-in and custom policies (public preview)

The Azure AI Foundry model catalog offers over 1,700 models for developers to explore, evaluate, customize, and deploy. While this vast selection empowers innovation and flexibility, it can also present significant challenges for enterprises that want to ensure all deployed models align with their internal policies, security standards, and compliance requirements. Now, Azure AI administrators can use new Azure policies to restrict which models can be deployed from the Azure AI Foundry model catalog, for greater control and compliance.

 

With this update, organizations can use pre-built policies for Model as a Service (MaaS) and Model as a Platform (MaaP) deployments, or create custom policies for Azure OpenAI Service and other AI services:

 

1) Apply a built-in policy for MaaS and MaaP

Admins can now use the "[Preview] Azure Machine Learning Deployments should only use approved Registry Models" built-in policy in the Azure portal. This policy enables admins to specify which MaaS and MaaP models are approved for deployment. When developers access the model catalog from Azure AI Foundry or Azure Machine Learning, they will only be able to deploy approved models. See the documentation here: Control AI model deployment with built-in policies - Azure AI Foundry.
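Conceptually, the built-in policy enforces a deny-by-default allow-list keyed on the registry model asset ID. The sketch below simulates that behavior in plain Python; the model IDs are illustrative placeholders, not real catalog entries, and the actual policy evaluation happens inside Azure Policy, not in user code.

```python
# Hypothetical sketch of the allow-list behavior enforced by the
# "approved Registry Models" built-in policy. Model IDs are made up.

APPROVED_MODEL_IDS = {
    "azureml://registries/azureml/models/example-model-a",
    "azureml://registries/azureml/models/example-model-b",
}

def deployment_allowed(model_asset_id: str) -> bool:
    """Allow a deployment only when the requested model is on the
    approved list; anything else is denied by default."""
    return model_asset_id in APPROVED_MODEL_IDS

print(deployment_allowed("azureml://registries/azureml/models/example-model-a"))  # True
print(deployment_allowed("azureml://registries/azureml/models/other-model"))      # False
```

The key property to notice is deny-by-default: any model not explicitly listed is blocked, which is what gives admins confidence that only vetted models reach developers.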

 

2) Build a custom policy for AI Services and Azure OpenAI Service

Admins can now create custom policies for Azure AI Services and for models in Azure OpenAI Service. With custom policies, admins can tailor which services and models are accessible to their development teams, helping align deployments with their organization's compliance requirements. See the documentation here: Control AI model deployment with custom policies - Azure AI Foundry.
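A custom policy of this kind is an Azure Policy definition with a deny effect on deployments whose model is not in an approved list. The sketch below builds such a definition as a Python dict; the resource type and alias follow the common Azure Policy shape for Cognitive Services deployments, but verify the exact alias and parameter names against the linked guidance before using them.

```python
import json

# Sketch of a custom Azure Policy definition (assumed shape) that denies
# Azure OpenAI deployments whose model is not on an approved list.
policy_definition = {
    "mode": "All",
    "parameters": {
        "allowedModels": {
            "type": "Array",
            "metadata": {"description": "Model names approved for deployment."},
        }
    },
    "policyRule": {
        "if": {
            "allOf": [
                # Target Azure OpenAI / AI Services model deployments.
                {"field": "type",
                 "equals": "Microsoft.CognitiveServices/accounts/deployments"},
                # Deny when the deployed model is not in the allow-list.
                {"not": {
                    "field": "Microsoft.CognitiveServices/accounts/deployments/model.name",
                    "in": "[parameters('allowedModels')]",
                }},
            ]
        },
        "then": {"effect": "deny"},
    },
}

print(json.dumps(policy_definition, indent=2))
```

Once defined, a policy like this would be assigned at a subscription or resource-group scope, with the allow-list supplied as the `allowedModels` parameter at assignment time.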

 

Together, these policies provide comprehensive coverage for creating an allowed model list and enforcing it across Azure Machine Learning and Azure AI Foundry.

 

Securely access on-premises and custom VNET resources from your managed VNET with Application Gateway (public preview)

Virtual networks keep your network traffic securely isolated in your own tenant, even when other customers use the same physical servers. Previously, Azure AI customers could only access Azure resources from their managed virtual network (VNET) that were supported by private endpoints (see a list of supported private endpoints here). This meant hybrid cloud customers using a managed VNET could not access machine learning resources outside their Azure subscription, such as resources located on-premises, or resources in their custom Azure VNET that are not supported by a private endpoint.

 

Now, Azure Machine Learning and Azure AI Foundry customers can securely access on-premises or custom VNET resources for their training, fine-tuning, and inferencing scenarios from their managed VNET using Application Gateway. Application Gateway is a load balancer that makes routing decisions based on the URL of an HTTPS request. Application Gateway supports a private connection from a managed VNET to any resource using the HTTP or HTTPS protocol. With this capability, customers can access the machine learning resources they need from outside their Azure subscription without compromising their security posture.
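The URL-based routing that Application Gateway performs can be sketched as a simple layer-7 dispatch: match the request path against configured prefixes and forward to the corresponding backend pool. The path prefixes and backend names below are purely illustrative, not real Application Gateway configuration.

```python
# Conceptual sketch of layer-7 (URL path based) routing, the mechanism
# Application Gateway uses to direct HTTPS requests to backend pools.
# All prefixes and pool names here are hypothetical examples.

ROUTES = [
    ("/artifactory/", "jfrog-backend-pool"),
    ("/data/",        "snowflake-backend-pool"),
]
DEFAULT_BACKEND = "default-backend-pool"

def route(path: str) -> str:
    """Return the backend pool for the first matching path prefix,
    falling back to the default pool when nothing matches."""
    for prefix, backend in ROUTES:
        if path.startswith(prefix):
            return backend
    return DEFAULT_BACKEND
```

In the real service this mapping lives in the gateway's routing rules, and the backend pools point at private IPs of on-premises or custom VNET resources, so traffic never traverses the public Internet.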

 

Supported scenarios for Azure AI customers using hybrid cloud

Today, Application Gateway is verified to support private connections to JFrog Artifactory, Snowflake databases, and private APIs, supporting critical enterprise use cases:

 

  1. JFrog Artifactory is used to store custom Docker images for training and inferencing pipelines, store trained models ready to deploy, and for security and compliance of machine learning models and dependencies used in production. JFrog Artifactory may be in another Azure VNET, separate from the VNET used to access the Azure Machine Learning workspace or Azure AI Foundry project. Thus, a private connection is necessary to secure the data transferred from a managed VNET to the JFrog Artifactory resource.

  2. Snowflake is a cloud data platform where users may store their data for training and fine-tuning models on managed compute. To securely send and receive data, a connection to a Snowflake database should be entirely private and never exposed to the Internet.

  3. Private APIs are used for managed online endpoints, which deploy machine learning models for real-time inferencing. Certain private APIs may be required to deploy managed online endpoints and must be secured through a private network.

Get started with Application Gateway

To get started with Application Gateway in Azure Machine Learning, see How to access on-premises resources - Azure Machine Learning. To get started with Application Gateway in Azure AI Foundry, see How to access on-premises resources - Azure AI Foundry.

 

Use Application Gateway to securely access on-premises and custom VNET resources

 

How to use Microsoft Cost Management to analyze and optimize your Azure OpenAI Service costs

One more thing... As organizations increasingly rely on AI for core operations, it has become essential to closely track and manage AI spend. In this month's blog, the Microsoft Cost Management team does a great job highlighting tools to help you analyze, monitor, and optimize your costs with Azure OpenAI Service. Read it here: Microsoft Cost Management updates.

 

Build secure, production-ready GenAI apps with Azure AI Foundry

Ready to go deeper? Check out these top resources:

 

Whether you’re joining in person or online, we can’t wait to see you at Microsoft Ignite 2024! We’ll share the latest from Azure AI and go deeper into enterprise-grade security capabilities with these sessions:

Please note: This article was edited on Dec 27, 2024 to reflect updated naming for Azure AI Foundry (formerly Azure AI Studio). No other content has been changed. Learn more about Azure AI Foundry.

Updated Dec 27, 2024
Version 3.0