New Blog Post: Securing Multi-Cloud Gen AI workloads using Azure Native Solutions

Microsoft

Note: This post is part of the “Security using Azure Native services” series and assumes that you are leveraging, or planning to leverage, Defender for Cloud, the Defender XDR portal, and Azure Sentinel.

Introduction

 

AI-based technology introduces a new set of security risks that may not be comprehensively covered by existing risk management frameworks. In our experience, customers often consider only the risks related to the Gen AI models themselves, such as OpenAI or Anthropic, rather than taking a holistic approach that covers all aspects of the workload.

 

This article will help you:

  1. Understand a typical multi-cloud Gen AI workload pattern
  2. Articulate the technical risks that exist in the AI workload
  3. Recommend security controls leveraging Azure Native services

 

We will not cover data security (cryptography, regulatory implications, etc.), model-specific issues such as hallucinations, privacy, toxicity, and societal bias, supply chain security, or attacks that leverage Gen AI capabilities, such as disinformation, deepfakes, and financial fraud. Instead, we aim to provide guidance on architectural security controls that enable secure:

  • Configuration of the AI workload
  • Operation of the workload

 

This is a two-part series:

  1. Part 1: Provides a framework to understand the threats related to Gen AI workloads holistically, along with an easy reference to the native security solutions that help mitigate them. We also provide sample controls using leading industry frameworks.
  2. Part 2: Dives deeper into the AI shared responsibility model and how it overlaps with your design choices

Threat Landscape

 

Let’s discuss some common threats:

  1. Insider abuse: An insider (human or machine) sending sensitive or proprietary information to a third-party Gen AI model
  2. Supply chain poisoning: Compromise of a third-party Gen AI model (whether a SaaS offering or binary LLM models developed by a third party and downloaded by your organization)
  3. System abuse: Manipulating the model prompts to mislead the end user of the model
  4. Over-privilege: Granting unrestricted permissions and capabilities to the model, thereby allowing it to perform unintended actions
  5. Data theft/exfiltration: Intentional or unintentional exfiltration of proprietary models, prompts, and model outputs
  6. Insecure configuration: Not following leading practices when architecting and operating your AI workload
  7. Model poisoning: Tampering with the model itself to alter its intended behavior
  8. Denial of Service: Degrading the performance of the model with resource-intensive operations
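As a concrete illustration of the last threat, the sketch below shows one generic mitigation pattern: a token-bucket rate limiter placed in front of model calls so that resource-intensive requests cannot exhaust the service. This is a simplified, hypothetical example, not an Azure-specific control (in practice you would typically rely on platform features such as API Management rate-limit policies or service quotas); all names here are illustrative.

```python
import time


class TokenBucket:
    """Minimal token-bucket limiter to throttle Gen AI model requests.

    capacity: maximum burst of requests allowed at once.
    refill_rate: tokens (requests) replenished per second.
    now: clock function, injectable for testing.
    """

    def __init__(self, capacity: int, refill_rate: float, now=time.monotonic):
        self.capacity = capacity
        self.refill_rate = refill_rate
        self.tokens = float(capacity)
        self.now = now
        self.last = now()

    def allow(self) -> bool:
        """Return True if a request may proceed, consuming one token."""
        current = self.now()
        # Replenish tokens for the elapsed time, capped at capacity.
        self.tokens = min(
            self.capacity,
            self.tokens + (current - self.last) * self.refill_rate,
        )
        self.last = current
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A caller would check `bucket.allow()` before forwarding each prompt to the model and return an HTTP 429 (or queue the request) when it is denied.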

 

We will discuss how these threats apply in a common architecture.

 

Reference architecture

 


 

 

 

                                Fig. Gen-AI cloud native workload

 

Let’s discuss each step so we can construct a layered defense:

  1. Assuming you are following cloud native architecture patterns, your developers publish all application and infrastructure code to an Azure DevOps repo
  2. The DevOps pipeline then creates a container image
  3. The pipeline also sets up the respective API endpoints in Azure API Management
  4. The pipeline deploys the image with Kubernetes manifests (note that the secrets are stored out of band in Azure Key Vault)
  5. A user accesses an application that leverages Gen AI (Azure OpenAI in Azure and Anthropic in AWS)
  6. Depending on the API endpoint requested, APIM directs the request to the containerized application running on a cloud native Kubernetes platform (AKS or EKS)
  7. The application uses API credentials stored in Azure Key Vault
  8. The application makes requests to the appropriate Gen AI service
  9. The results are stored in a storage service and reported back to the user who initiated step 5 above
  10. Each cloud native service stores its diagnostic logs in a centralized Log Analytics Workspace (LAW)
  11. Azure Sentinel is enabled on the LAW
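Steps 7 and 8 can be sketched as follows. This is a minimal illustration, not the article's implementation: the endpoint, deployment name, and API version are hypothetical placeholders, and the API key is assumed to have been retrieved from Azure Key Vault at runtime (for example via the azure-keyvault-secrets SDK) rather than baked into the container image or Kubernetes manifest.

```python
import json
import urllib.request

# Hypothetical values -- replace with your own endpoint and deployment.
OPENAI_ENDPOINT = "https://example-openai.openai.azure.com"
DEPLOYMENT = "gpt-4o"
API_VERSION = "2024-02-01"


def build_chat_request(api_key: str, user_prompt: str) -> urllib.request.Request:
    """Build (but do not send) an Azure OpenAI chat-completion request.

    api_key is assumed to come from Azure Key Vault at runtime (step 7),
    so no credential ever lives in source control or the image.
    """
    url = (
        f"{OPENAI_ENDPOINT}/openai/deployments/{DEPLOYMENT}"
        f"/chat/completions?api-version={API_VERSION}"
    )
    body = json.dumps(
        {
            "messages": [{"role": "user", "content": user_prompt}],
            "max_tokens": 256,
        }
    ).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json", "api-key": api_key},
        method="POST",
    )


def call_gen_ai(api_key: str, user_prompt: str) -> dict:
    """Send the request (step 8). Requires network access and a live endpoint."""
    with urllib.request.urlopen(build_chat_request(api_key, user_prompt)) as resp:
        return json.loads(resp.read())
```

Keeping the request construction separate from the network call also makes it easy to unit-test the application without touching the live Gen AI service.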

For the full post click here: Securing Multi-Cloud Gen AI workloads using Azure Native Solutions - Microsoft Community Hub 
