Marketplace blog

AI data governance made easy: How Microsoft Purview tackles GenAI risks and builds trust

vicperdana
Microsoft
Aug 04, 2025

Discover how Microsoft Purview helps software developers secure AI applications, protect data, and comply with regulations—without slowing innovation

As AI transforms software development, the opportunities are vast - but so are the risks. AI promises faster innovation, smarter experiences, and new business models. But behind the excitement, leaders across industries are grappling with a core question:

“How do I unlock the benefits of AI while protecting my data, complying with regulations, and maintaining customer trust?”

In our 7th episode of the Security for Software Development Companies webinar series - Safeguard Data Security and Privacy in AI-Driven Applications - we addressed this challenge directly. Featuring Microsoft experts Kyle Marsh and Vic Perdana, this session revealed how Microsoft Purview delivers practical, built-in security for AI applications, helping software development companies and enterprise developers meet security expectations from day one.

AI security is now a top concern for business leaders

The shift toward AI-driven applications has heightened concern among CISOs and decision makers. Recent research from the ISMG First Annual Generative AI Study revealed that:

Figure 1. CISOs and decision-makers cite data leakage (82%), hallucinations and ethics (73%), and regulatory confusion (55%) as top concerns. Nearly half expect to ban AI unless risks are mitigated.

Microsoft Purview for AI: Visibility, control, and compliance by design

To address these risks without slowing innovation, Microsoft has extended Purview, our enterprise data governance platform, into the world of AI.

From Microsoft Copilot to custom GPT-based assistants, Purview now governs AI interactions, offering:
- Data Loss Prevention (DLP) on prompts and responses
- Real-time blocking of sensitive content
- Audit trails and reporting for AI activity
- Seamless integration via Microsoft Graph APIs

This means software developers can plug into enterprise-grade governance - with minimal code and no need to reinvent compliance infrastructure.

What it looks like: Data Security Posture Management for AI in Microsoft Purview

Purview’s Data Security Posture Management (DSPM) for AI offers centralized visibility into all AI interactions across Microsoft Copilot, Azure OpenAI, and even third-party models like Google Gemini or ChatGPT.

Figure 2. Microsoft Purview DSPM for AI: Real-time visibility into AI activity, DLP violations, insider risk, and sensitive data shared across AI models like Copilot, ChatGPT, and Gemini.

A developer’s guide: How to integrate AI security using Microsoft Graph APIs

Microsoft Purview offers a lightweight, developer-friendly integration path. As Kyle Marsh demonstrated during the webinar, just two core APIs are required:

protectionScopes/compute
This API lets you determine when and why to submit prompts and responses for review. It returns the execution mode:
- evaluateInline: Wait for Purview's verdict before sending the prompt to the AI model, or the model's response to the user (response evaluation is future functionality)
- evaluateOffline: Submit in parallel for audit only
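Conceptually, an app checks the returned scopes and routes each user activity accordingly. The sketch below is illustrative only: the response shape and activity names are assumptions, not the exact Microsoft Graph schema, so consult the Purview developer documentation for the real contract.

```python
# Hypothetical sketch: route a user activity based on a
# protectionScopes/compute-style response. The response shape below is an
# illustrative assumption, not the exact Microsoft Graph schema.

SAMPLE_SCOPES_RESPONSE = {
    "value": [
        {"activities": "uploadText", "executionMode": "evaluateInline"},
        {"activities": "downloadText", "executionMode": "evaluateOffline"},
    ]
}

def execution_mode_for(activity, scopes_response):
    """Return the execution mode Purview asks for, or None if out of scope."""
    for scope in scopes_response.get("value", []):
        if activity in scope.get("activities", ""):
            return scope.get("executionMode")
    return None  # activity not governed: no processContent call needed

mode = execution_mode_for("uploadText", SAMPLE_SCOPES_RESPONSE)
if mode == "evaluateInline":
    print("Hold the request until Purview returns a verdict")  # synchronous gate
elif mode == "evaluateOffline":
    print("Submit in parallel for audit only")                 # fire-and-forget
```

The key design point is that the app never interprets policy itself; it only asks Purview how strictly a given activity must be evaluated.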

processContent
Use this API to send prompts/responses along with metadata. If a DLP rule is triggered (e.g., presence of a credit card number), the app receives a block instruction before continuing.
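To make the block/allow decision concrete, here is a minimal local simulation of that flow. In production the DLP evaluation happens server-side through the processContent Graph API; the regex below merely stands in for a "credit card number" sensitive info type, and the verdict shape is an assumption for this sketch.

```python
import re

# Simulated stand-in for a "credit card number" sensitive info type.
# Real policy evaluation is done server-side by Microsoft Purview.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def process_content(prompt):
    """Return a processContent-style verdict (shape is an assumption)."""
    if CARD_PATTERN.search(prompt):
        return {"action": "block", "reason": "Credit card number detected"}
    return {"action": "allow"}

def send_to_model(prompt):
    """Gate the AI call on the verdict before any content reaches the model."""
    verdict = process_content(prompt)
    if verdict["action"] == "block":
        # Surface the policy decision instead of calling the AI model.
        return f"Request blocked: {verdict['reason']}"
    return "...model response..."
```

The important property is ordering: the verdict arrives before the prompt ever reaches the model, which is what "inline" protection means in practice.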

For less intrusive monitoring, you can use contentActivity, which logs metadata only - ideal for auditing AI usage patterns without exposing user content.
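A metadata-only audit record in the spirit of contentActivity might capture who did what and when, plus a content hash for correlation, while never storing the content itself. The field names here are hypothetical, chosen for illustration.

```python
from datetime import datetime, timezone
import hashlib

def audit_record(user_id, activity, content):
    """Build a metadata-only audit entry; field names are hypothetical."""
    return {
        "userId": user_id,
        "activity": activity,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "contentSha256": hashlib.sha256(content.encode()).hexdigest(),
        # Deliberately no "content" field: usage is auditable,
        # but the user's text is never exposed to the log.
    }

record = audit_record("alice@contoso.com", "uploadText", "quarterly forecast draft")
```

This is why metadata-only logging is the least intrusive option: auditors can reconstruct usage patterns without ever reading prompts.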

Example in action: Blocking confidential data in Microsoft Copilot

The power of Purview’s inline protection is demonstrated in Microsoft Copilot. Below, we see how a user’s query surfaced confidential documents, but sharing was blocked by policy enforcement.

Figure 3. Microsoft Copilot detects and blocks the sharing of confidential content (e.g. 'Project Obsidian') - enforced by Microsoft Purview’s DLP policy engine.

Built-in support for Microsoft tooling

Developers using Copilot Studio, Azure AI Studio, or Azure AI Foundry benefit from built-in or automatic integration:
- Copilot Studio: Purview integration is fully automatic - developers don’t need to write a single line of security code.
- Azure AI Foundry: Supports evaluateOffline by default; advanced controls can be added via APIs.

Custom apps - like a chatbot built with OpenAI APIs - can integrate directly using Microsoft Graph, ensuring enterprise-readiness with minimal effort.

Powerful enterprise controls with zero developer overhead

Enterprise customers can define and manage AI security policies through the familiar Microsoft Purview interface:
- Create custom sensitive info types
- Apply role-based access and location targeting
- Build blocking or allow-list policies
- Conduct audits, investigations, and eDiscovery

As a software development company, you don’t need to manage any of these rules. Your app simply calls the API and responds to the decision returned - block, allow, or log.
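In code, that responsibility reduces to a small dispatch on whatever decision comes back. This skeleton is illustrative; the decision strings ("block", "allow", "log") are assumptions for the sketch, not documented API values.

```python
def handle_decision(decision, prompt):
    """Act on a Purview verdict; decision strings are assumed for this sketch."""
    if decision == "block":
        # The enterprise's policy engine said no; the app just relays that.
        return "Sorry, this request violates your organization's data policy."
    if decision == "log":
        # The interaction proceeds, but metadata is recorded for audit.
        print(f"audit: prompt of {len(prompt)} chars recorded")
    # "allow" and "log" both fall through to the normal model call.
    return f"[model response to: {prompt}]"
```

The policy logic (sensitive info types, role targeting, allow lists) lives entirely on the enterprise side, so the app code never changes when policies do.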

Resources to help you get started

Microsoft provides comprehensive tools and docs to help developers integrate AI governance:
- Purview Developer Samples: samples
- Microsoft Graph APIs for Purview: docs
- Web App Security Assessment: aka.ms/wafsecurity
- Cloud Adoption Framework: aka.ms/caf
- Zero Trust for AI: aka.ms/zero-trust
- SaaS Workload Design Principles: docs

Final takeaway: Secure AI is smart AI

“Securing AI isn’t optional - it’s a competitive advantage. If you want your solution in the hands of enterprises, you must build trust from day one.”

With Microsoft Purview and Microsoft Graph, software developers can build AI experiences that are not only intelligent but trustworthy, compliant, and ready for scale.

🎥 Watch the full episode of “Safeguard Data Security and Privacy in AI-Driven Applications” at aka.ms/asiasdcsecurity/recording

Updated Aug 04, 2025
Version 2.0

1 Comment

  • MavenCollective (Copper Contributor)

    AI governance and security go hand in hand. Microsoft Purview's safeguards help ensure GenAI innovation doesn't compromise compliance or data protection. 

    Incidents like the Ingram Micro ransomware attack show why proactive security is critical for every partner. Here are our key takeaways: https://mavencollectivemarketing.com/insights/what-microsoft-partners-can-learn-from-the-ingram-micro-ransomware-attack/