Blog Post

Microsoft Purview Blog
8 MIN READ

AI Security Ideogram: Practical Controls and Accelerated Response with Microsoft

shrutiailani's avatar
shrutiailani
Icon for Microsoft rankMicrosoft
Oct 02, 2025

Shruti Ailani and Hesham Abdelaal hereby cover Microsoft's two-pronged approach on Security for AI and AI for Security, focusing more comprehensively on the former, at data, identity (agents), MaaS, SaaS, PaaS, network and other layers, in order to safeguard your AI journey.

Overview 

As organizations scale generative AI, two motions must advance in lockstep: hardening the AI stack (“Security for AI”) and using AI to supercharge SecOps (“AI for Security”). This post is a practical map—covering assets, common attacks, scope, solutions, SKUs, and ownership—to help you ship AI safely and investigate faster. 

 

 

Why both motions matter, at the same time 

  • Security for AI (hereafter ‘ Secure AI’ ) guards prompts, models, apps, data, identities, keys, and networks; it adds governance and monitoring around GenAI workloads (including indirect prompt injection from retrieved documents and tools). Agents add complexity because one prompt can trigger multiple actions, increasing the blast radius if not constrained.  
  • AI for Security uses Security Copilot with Defender XDR, Microsoft Sentinel, Purview, Entra, and threat intelligence to summarize incidents, generate KQL, correlate signals, and recommend fixtures and betterments. Promptbooks make automations easier, while plugins provide the opportunity to use out of the box as well as custom integrations. 
    SKU: Security Compute Units (SCU).  
    Responsibility: Shared (customer uses; Microsoft operates).  

    The intent of this blog is to cover Secure AI stack and approaches through matrices and mind map. This blog is not intended to cover AI for Security in detail. For AI for Security, refer Microsoft Security Copilot.

 

The Secure AI stack at a glance 

 

 

At a high level, the controls align to the following three layers: 

  1. AI Usage (SaaS Copilots & prompts) — Purview sensitivity labels/DLP for Copilot and Zero Trust access hardening prevent oversharing and inadvertent data leakage when users interact with GenAI.  
  2. AI Application (GenAI apps, tools, connectors) — Azure AI Content Safety (Prompt Shields, cross prompt injection detection), policy mediation via API Management, and Defender for Cloud’s AI alerts reduce jailbreaks, XPIA/UPIA, and tool based exfiltration.  This layer also includes GenAI agents. 
  3. AI Platform & Model (foundation models, data, MLOps) — Private Link, Key Vault/Managed HSM, RBAC controlled workspaces and registries (Azure AI Foundry/AML), GitHub Advanced Security, and platform guardrails (Firewall/WAF/DDoS) harden data paths and the software supply chain end-to-end.  

 

Let’s understand the potential attacks, vulnerabilities and threats at each layer in more detail:

1) Prompt/Model protection (jailbreak, UPIA/system prompt override, leakage) 

Scope: GenAI applications (LLM, apps, data) → Azure AI Content Safety (Prompt Shields, content filters), grounded-ness detection, safety evaluations in Azure AI Foundry, and Defender for Cloud AI threat protection.
Responsibility: Shared (Customer/Microsoft).
SKU: Content Safety & Azure OpenAI consumption; Defender for Cloud – AI Threat Protection.  

2) Cross-prompt Injection (XPIA) via documents & tools 

Strict allow-lists for tools/connectors, Content Safety XPIA detection, API Management policies, and Defender for Cloud contextual alerts reduce indirect prompt injection and data exfiltration.
Responsibility: Customer (config) & Microsoft (platform signals).
SKU: Content Safety, API Management, Defender for Cloud – AI Threat Protection.  

3) Sensitive data loss prevention for Copilots (M365) 

Use Microsoft Purview (sensitivity labels, auto-labeling, DLP for Copilot) with enterprise data protection and Zero Trust access hardening to prevent PII/IP exfiltration via prompts or Graph grounding.
Responsibility: Customer.
SKU: M365 E5 Compliance (Purview), Copilot for Microsoft 365.   

4) Identity & access for AI services 

Entra Conditional Access (MFA/device), ID Protection, PIM, managed identities, role based access to Azure AI Foundry/AML, and access reviews mitigate over privilege, token replay, and unauthorized finetuning.
Responsibility: Customer.
SKU: Entra ID P2.  

5) Secrets & keys 

Protect against key leakage and secrets in code using Azure Key Vault/Managed HSM, rotation policies, Defender for DevOps and GitHub Advanced Security secret scanning.
Responsibility: Customer.
SKU: Key Vault (Std/Premium), Defender for Cloud – Defender for DevOps, GitHub Advanced Security.  

6) Network isolation & egress control 

Use Private Link for Azure OpenAI and data stores, Azure Firewall Premium (TLS inspection, FQDN allow-lists), WAF, and DDoS Protection to prevent endpoint enumeration, SSRF via plugins, and exfiltration.
Responsibility: Customer.
SKU: Private Link, Firewall Premium, WAF, DDoS Protection.  

7) Training data pipeline hardening 

Combine Purview classification/lineage, private storage endpoints & encryption, human-in-the-loop review, dataset validation, and safety evaluations pre/post finetuning.
Responsibility: Customer.
SKU: Purview (E5 Compliance / Purview), Azure Storage (consumption).  

8) Model registry & artifacts 

Use Azure AI Foundry/AML workspaces with RBAC, approval gates, versioning, private registries, and signed inferencing images to prevent tampering and unauthorized promotion.
Responsibility: Customer.
SKU: AML; Azure AI Foundry (consumption).  

9) Supply chain & CI/CD for AI apps 

GitHub Advanced Security (CodeQL, Dependabot, secret scanning), Defender for DevOps, branch protection, environment approvals, and policy-as-code guardrails protect pipelines and prompt flows.
Responsibility: Customer.
SKU: GitHub Advanced Security; Defender for Cloud – Defender for DevOps.  

10) Governance & risk management 

Microsoft Purview AI Hub, Compliance Manager assessments, Purview DSPM for AI, usage discovery and policy enforcement govern “shadow AI” and ensure compliant data use.
Responsibility: Customer.
SKU: Purview (E5 Compliance/addons); Compliance Manager.  

11) Monitoring, detection & incident

Defender for Cloud ingests Content Safety signals for AI alerts; Defender XDR and Microsoft Sentinel consolidate incidents and enable KQL hunting and automation.
Responsibility: Shared.
SKU: Defender for Cloud; Sentinel (consumption); Defender XDR (E5/E5 Security).  

12) Existing landing zone baseline 

Adopt Azure Landing Zones with AI-ready design, Microsoft Cloud Security Benchmark policies, Azure Policy guardrails, and platform automation.
Responsibility: Customer (with Microsoft guidance).
SKU: Guidance + Azure Policy (included); Defender for Cloud CSPM. 

 Mapping attacks to controls  

 

 

 

This heatmap ties common attack themes (prompt injection, cross-prompt injection, sensitive data loss, identity & keys, network egress, training data, registries, supply chain, governance, monitoring, and landing zone) to the primary Microsoft controls you’ll deploy. Use it to drive backlog prioritization. 

 

Quick decision table (assets → attacks → scope → solution) 

Use this as a guide during design reviews and backlog planning. The rows below are a condensed extract of the broader map in your workbook.  

Asset Class 

Possible Attack 

Scope 

Solution 

Data 

 

Sensitive info disclosure / Risky AI usage 

Microsoft AI 

Purview DSPM for AI; Purview DSPM for AI + IRM 

Unknown interactions for enterprise AI apps 

Microsoft AI 

Purview DSPM for AI 

Unethical behavior in AI apps 

Microsoft AI 

Purview DSPM for AI + Comms Compliance 

Sensitive info disclosure / Risky AI usage 

Non-Microsoft AI 

Purview DSPM for AI + IRM 

Unknown interactions for enterprise AI apps 

Non-Microsoft AI 

Purview DSPM for AI 

Unethical behavior in AI apps 

Non-Microsoft AI 

Purview DSPM for AI + Comms Compliance 

Models (MaaS) 

 

Supply-chain attacks (ML registry / DevOps of AI) 

OpenAI LLM 

OOTB built-in; Azure AI Foundry – Content Safety / Prompt Shield; Defender for AI – Run-time 

Secure registries/workspaces compromise 

OpenAI LLM 

OOTB built-in 

Secure models running inside containers 

OpenAI LLM 

OOTB built-in 

Training data poisoning 

OpenAI LLM 

OOTB built-in 

Model theft 

OpenAI LLM 

OOTB built-in 

Prompt injection (XPIA) 

OpenAI LLM 

OOTB built-in; Azure AI Foundry – Content Safety / Prompt Shield 

Crescendo 

OpenAI LLM 

OOTB built-in 

Jailbreak 

OpenAI LLM 

OOTB built-in 

Supply-chain attacks (ML registry / DevOps of AI) 

Non-OpenAI LLM 

Azure AI Foundry – Content Safety / Prompt Shield; Defender for AI – Run-time 

Secure registries/workspaces compromise 

Non-OpenAI LLM 

Azure AI Foundry – Content Safety / Prompt Shield; Defender for AI – Run-time 

Secure models running inside containers 

Non-OpenAI LLM 

Azure AI Foundry – Content Safety / Prompt Shield; Defender for AI – Run-time 

Training data poisoning 

Non-OpenAI LLM 

Azure AI Foundry – Content Safety / Prompt Shield; Defender for AI – Run-time 

Model theft 

Non-OpenAI LLM 

Azure AI Foundry – Content Safety / Prompt Shield; Defender for AI – Run-time 

Prompt injection (XPIA) 

Non-OpenAI LLM 

Azure AI Foundry – Content Safety / Prompt Shield; Defender for AI – Run-time 

Crescendo 

Non-OpenAI LLM 

Azure AI Foundry – Content Safety / Prompt Shield; Defender for AI – Run-time 

Jailbreak 

Non-OpenAI LLM 

Azure AI Foundry – Content Safety / Prompt Shield; Defender for AI – Run-time 

GenAI Applications (SaaS) 

 

Jailbreak 

Microsoft Copilot SaaS 

OOTB built-in 

Prompt injection (XPIA) 

Microsoft Copilot SaaS 

OOTB built-in 

Wallet abuse 

Microsoft Copilot SaaS 

OOTB built-in 

Credential theft 

Microsoft Copilot SaaS 

OOTB built-in 

Data leak / exfiltration 

Microsoft Copilot SaaS 

OOTB built-in 

Insecure plugin design 

Microsoft Copilot SaaS 

Responsibility: Provider/Creator
Example 1: Microsoft plugin: responsibility to secure lies with Microsoft

Example 2: 3rd party custom plugin: responsibility to secure lies with the 3rd party provider.

Example 3: customer-created plugin: responsibility to secure lies with the plugin creator.

Shadow AI 

Microsoft Copilot SaaS or non-Microsoft SaaS gen AI

APPS: Purview DSPM for AI (endpoints where browser extension is installed)

+ Defender for Cloud Apps 

AGENTS: Entra agent ID (preview) + Purview DSPM for AI

Jailbreak 

Non-Microsoft GenAI SaaS 

SaaS provider

Prompt injection (XPIA) 

Non-Microsoft GenAI SaaS 

SaaS provider

Wallet abuse 

Non-Microsoft GenAI SaaS 

SaaS provider

Credential theft 

Non-Microsoft GenAI SaaS 

SaaS provider

Data leak / exfiltration 

Non-Microsoft GenAI SaaS 

Purview DSPM for AI

Insecure plugin design 

Non-Microsoft GenAI SaaS 

SaaS provider

Shadow AI 

Microsoft Copilot SaaS or non-Microsoft SaaS GenAI

APPS: Purview DSPM for AI (endpoints where browser extension is installed)

+ Defender for Cloud Apps 

AGENTS: Entra agent ID (preview) + Purview DSPM for AI

Agents (Memory) 

Memory injection 

Microsoft PaaS (Azure AI Foundry) agents 

Defender for AI – Run-time* 

Memory exfiltration 

Microsoft PaaS (Azure AI Foundry) agents 

Defender for AI – Run-time* 

Memory injection 

Microsoft Copilot Studio agents 

Defender for AI – Run-time* 

Memory exfiltration 

Microsoft Copilot Studio agents 

Defender for AI – Run-time* 

Memory injection 

Non-Microsoft PaaS agents 

Defender for AI – Run-time* 

Memory exfiltration 

Non-Microsoft PaaS agents 

Defender for AI – Run-time* 

Identity 

 

Tool misuse / Privilege escalation 

Enterprise 

Entra for AI / Entra Agent ID – GSA Gateway 

Token theft & replay attacks 

Enterprise 

Entra for AI / Entra Agent ID – GSA Gateway 

Agent sprawl & orphaned agents 

Enterprise 

Entra for AI / Entra Agent ID – GSA Gateway 

AI agent autonomy 

Enterprise 

Entra for AI / Entra Agent ID – GSA Gateway 

Credential exposure 

Enterprise 

Entra for AI / Entra Agent ID – GSA Gateway 

PaaS 

 

General AI platform attacks 

Azure AI Foundry (Private Preview) 

Defender for AI 

General AI platform attacks 

Amazon Bedrock 

Defender for AI*
(AI-SPM GA, Workload protection is on roadmap)

General AI platform attacks 

Google Vertex AI 

Defender for AI*
(AI-SPM GA, Workload protection is on roadmap)

Network / Protocols (MCP) 

Protocol-level exploits (unspecified) 

Custom / Enterprise 

Defender for AI *

*roadmap 
OOTB = Out of the box (built-in) 

This table consolidates the mind map into a concise reference showing each asset class, the threats/attacks, whether they are scoped to Microsoft or non-Microsoft ecosystems, and the recommended solutions mentioned in the diagram. 

Here is a mind map corresponding to the table above, for easier visualization:

Mind map as of 30 Sep 2025 (to be updated in case there are technology enhancements or changes by Microsoft)

 

OWASP-style risks in SaaS & custom GenAI apps—what’s covered 

Your map calls out seven high frequency risks in LLM apps (e.g., jailbreaks, cross prompt injection, wallet abuse, credential theft, data exfiltration, insecure plugin design, and shadow LLM apps/plugins). For Security Copilot (SaaS), mitigations are built-in/OOTB; for non-Microsoft AI apps, pair Azure AI Foundry (Content Safety, Prompt Shields) with Defender for AI (runtime), AISPM via MDCSPM (build-time), and Defender for Cloud Apps to govern unsanctioned use. 

What to deploy first (a pragmatic order of operations) 

  1. Land the platform: Existing landing zone with Private Link to models/data, Azure Policy guardrails, and Defender for Cloud CSPM.   
  2. Lock down identity & secrets: Entra Conditional Access/PIM and Key Vault + secret scanning in code and pipelines.  
  3. Protect usage: Purview labels/DLP for Copilot; Content Safety shields and XPIA detection for custom apps; APIM policy mediation.  
  4. Govern & monitor: Purview AI Hub and Compliance Manager assessments; Defender for Cloud AI alerts into Defender XDR/Sentinel with KQL hunting & playbooks.  
  5. Scale SecOps with AI: Light up Copilot for Security across XDR/Sentinel workflows and Threat Intelligence/EASM.  

The below table shows the different AI Apps and the respective pricing SKU.

 

 

There exists a calculator to estimate costs for your different AI Apps, Pricing - Microsoft Purview | Microsoft Azure. Contact your respective Microsoft Account teams to understand the mapping of the above SKUs to dollar value.

 

Conclusion:

Microsoft’s two-pronged strategy—Security for AI and AI for Security—empowers organizations to safely scale generative AI while strengthening incident response and governance across the stack.

By deploying layered controls and leveraging integrated solutions, enterprises can confidently innovate with AI while minimizing risk and ensuring compliance.

 

 

 

Updated Oct 03, 2025
Version 3.0
No CommentsBe the first to comment