securing ai
46 TopicsMicrosoft Copilot Studio vs. Microsoft Foundry: Building AI Agents and Apps
Microsoft Copilot Studio and Microsoft Foundry (often referred to as Azure AI Foundry) are two key platforms in Microsoft’s AI ecosystem that allow organizations to create custom AI agents and AI-enabled applications. While both share the goal of enabling businesses to build intelligent, task-oriented “copilot” solutions, they are designed for different audiences and use cases. To help you decide which path suits your organization, this blog provides an educational comparison of Copilot Studio vs. Azure AI Foundry, focusing on their unique strengths, feature parity and differences, and key criteria like control requirements, preferences, and integration needs. By understanding these factors, technical decision-makers, developers, IT admins, and business leaders can confidently select the right platform or even a hybrid approach for their AI agent projects. Copilot Studio and Azure AI Foundry: At a Glance Copilot Studio is designed for business teams, pro‑makers, and IT admins who want a managed, low‑code SaaS environment with plug‑and‑play integrations. Microsoft Foundry is built for professional developers who need fine‑grained control, customization, and integration into their existing application and cloud infrastructure. And the good news? Organizations often use both and they work together beautifully. Feature Parity and Key Differences While both platforms can achieve similar outcomes, they do so via different means. Here’s a high-level comparison of Copilot Studio and Azure AI Foundry: Factor Copilot Studio (SaaS, Low-Code) Microsoft (Azure) AI Foundry (PaaS, Pro-Code) Target Users & Skills Business domain experts, IT pros, and “pro-makers” comfortable with low-code tools. Little to no coding is required for building agents. Ideal for quick solutions within business units. Professional developers, software engineers, and data scientists with coding/DevOps expertise. Deep programming skills needed for custom code, DevOps, and advanced AI scenarios. Suited for complex, large-scale AI projects. Platform Model Software-as-a-Service – fully managed by Microsoft. Agents and tools are built and run in Microsoft’s cloud (M365/Copilot service) with no infrastructure to manage. Simplified provisioning, automatic updates, and built-in compliance with Microsoft 365 environment. Platform-as-a-Service, runs in your Azure subscription. You deploy and manage the agent’s infrastructure (e.g. Azure compute, networking, storage) in your cloud. Offers full control over environment, updates, and data residency. Integration & Data Out-of-box connectors & data integrations for Microsoft 365 (SharePoint, Outlook, Teams) and 3rd-party SaaS via Power Platform connectors. Easy integration with business systems without coding, ideal for leveraging existing M365 and Power Platform assets. Data remains in Microsoft’s cloud (with M365 compliance and Purview governance) by default. Deep custom integration with any system or data source via code. Natively works with Azure services (Azure SQL, Cosmos DB, Functions, Kubernetes, Service Bus, etc.) and can connect to on-prem or multi-cloud resources via custom connectors. Suitable when data/code must stay in your network or cloud for compliance or performance reasons. Development Experience Low-code, UI-driven development. Build agents with visual designers and prompt editors. No-code orchestration through Topics (conversational flows) and Agent Flows (Power Automate). Rich library of pre-built components (tools/capabilities) that are auto-managed and continuously improved by Microsoft (e.g. Copilot connectors for M365, built-in tool evaluations). Emphasizes speed and simplicity over granular control. Code-first development. Offers web-based studio plus extensive SDKs, CLI, and VS Code integration for coding agents and custom tools. Supports full DevOps: you can use GitHub/Azure DevOps for CI/CD, custom testing, version control, and integrate with your existing software development toolchain. Provides maximum flexibility to define bespoke logic, but requires more time and skill, sacrificing immediate simplicity for long-term extensibility. Control & Governance Managed environment – minimal configuration needed. Governance is handled via Microsoft’s standard M365 admin centers: e.g. Admin Center, Entra ID, Microsoft Purview, Defender for identity, access, auditing, and compliance across copilots. Updates and performance optimizations (e.g. tool improvements) are applied automatically by Microsoft. Limited need (or ability) to tweak infrastructure or model behavior under the hood – fits organizations that want Microsoft to manage the heavy lifting. Microsoft Foundry provides a pro‑code, Azure‑native environment for teams that need full control over the agent runtime, integrations, and development workflow. Full stack control – you manage how and where agents run. Customizable governance using Azure’s security & monitoring tools: Azure AD (identity/RBAC), Key Vault, network security (private endpoints, VNETs), plus integrated logging and telemetry via Azure Monitor, App Insights, etc. Foundry includes a developer control plane for observing, debugging, and evaluating agents during development and runtime. This is ideal for organizations requiring fine-grained control, custom compliance configurations, and rigorous LLMOps practices. Deployment Channels One-click publishing to Microsoft 365 experiences (Teams, Outlook), web chat, SharePoint, email, and more – thanks to native support for multiple channels in Copilot Studio. Everything runs in the cloud; you don’t worry about hosting the bot. Flexible deployment options. Foundry agents can be exposed via APIs or the Activity Protocol, and integrated into apps or custom channels using the M365 Agents SDK. Foundry also supports deploying agents as web apps, containers, Azure Functions, or even private endpoints for internal use, giving teams freedom to run agents wherever needed (with more setup). Control and customization Copilot Studio trades off fine-grained control for simplicity and speed. It abstracts away infrastructure and handles many optimizations for you, which accelerates development but limits how deeply you can tweak the agent’s behavior. Azure Foundry, by contrast, gives you extensive control over the agent’s architecture, tools and environment – at the cost of more complex setup and effort. Consider your project’s needs: Does it demand custom code, specialized model tuning or on-premises data? If yes, Foundry provides the necessary flexibility. Common Scenarios · HR or Finance teams building departmental AI assistants · Sales operations automating workflows and knowledge retrieval · Fusion teams starting quickly without developer-heavy resources Copilot Studio gives teams a powerful way to build agents quickly without needing to set up compute, networking, identity or DevOps pipeline · Embedding agents into production SaaS apps · If team uses professional developer frameworks (Semantic Kernel, LangChain, AutoGen, etc.) · Building multi‑agent architectures with complex toolchains · You require integration with existing app code or multi-cloud architecture. · You need full observability, versioning, instrumentation or custom DevOps. Foundry is ideal for software engineering teams who need configurability, extensibility and industrial-grade DevOps. Benefits of Combined Use: Embracing Hybrid approach One important insight is that Copilot Studio and Foundry are not mutually exclusive. In fact, Microsoft designed them to be interoperable so that organizations can use both in tandem for different parts of a solution. This is especially relevant for large projects or “fusion teams” that include both low-code creators and pro developers. The pattern many enterprises land on: Developers build specialized tools / agents in Foundry Makers assemble user-facing workflow experience in Copilot Studio Agents can collaborate via agent-to-agent patterns (including A2A, where applicable) Using both platforms together unlocks the best of both worlds: Seamless User Experience: Copilot Studio provides a polished, user-friendly interface for end-users, while Azure AI Foundry handles complex backend logic and data processing. Advanced AI Capabilities: Leverage Azure AI Foundry’s extensive model library and orchestration features to build sophisticated agents that can reason, learn, and adapt. Scalability & Flexibility: Azure AI Foundry’s cloud-native architecture ensures scalability for high-demand scenarios, while Copilot Studio’s low-code approach accelerates development cycles. For the customers who don’t want to decide up front, Microsoft introduced a unified approach for scaling agent initiatives: Microsoft Agent Pre-Purchase Plan (P3) as part of the broader Agent Factory story, designed to reduce procurement friction across both platforms. Security & Compliance using Microsoft Purview Microsoft Copilot Studio: Microsoft Purview extends enterprise-grade security and compliance to agents built with Microsoft Copilot Studio by bringing AI interaction governance into the same control plane you use for the rest of Microsoft 365. With Purview, you can apply DSPM for AI insights, auditing, and data classification to Copilot Studio prompts and responses, and use familiar compliance capabilities like sensitivity labels, DLP, Insider Risk Management, Communication Compliance, eDiscovery, and Data Lifecycle Management to reduce oversharing risk and support investigations. For agents published to non-Microsoft channels, Purview management can require pay-as-you-go billing, while still using the same Purview policies and reporting workflows teams already rely on. Microsoft Foundry: Microsoft Purview integrates with Microsoft Foundry to help organizations secure and govern AI interactions (prompts, responses, and related metadata) using Microsoft’s unified data security and compliance capabilities. Once enabled through the Foundry Control Plane or through Microsoft Defender for Cloud in Microsoft Azure Portal, Purview can provide DSPM for AI posture insights plus auditing, data classification, sensitivity labels, and enforcement-oriented controls like DLP, along with downstream compliance workflows such as Insider Risk, Communication Compliance, eDiscovery, and Data Lifecycle Management. This lets security and compliance teams apply consistent policies across AI apps and agents in Foundry, while gaining visibility and governance through the same Purview portal and reports used across the enterprise. Conclusion When it comes to Copilot Studio vs. Azure AI Foundry, there is no universally “best” choice – the ideal platform depends on your team’s composition and project requirements. Copilot Studio excels at enabling functional business teams and IT pros to build AI assistants quickly in a managed, compliant environment with minimal coding. Azure AI Foundry shines for developer-centric projects that need maximal flexibility, custom code, and deep integration with enterprise systems. The key is to identify what level of control, speed, and skill your scenario calls for. Use both together to build end-to-end intelligent systems that combine ease of use with powerful backend intelligence. By thoughtfully aligning the platform to your team’s strengths and needs, you can minimize friction and maximize momentum on your AI agent journey delivering custom copilot solutions that are both quick to market and built for the long haul Resources to explore Copilot Studio Overview Microsoft Foundry Use Microsoft Purview to manage data security & compliance for Microsoft Copilot Studio Use Microsoft Purview to manage data security & compliance for Microsoft Foundry Optimize Microsoft Foundry and Copilot Credit costs with Microsoft Agent pre-purchase plan Accelerate Innovation with Microsoft Agent FactorySecuring the AI Pipeline – From Data to Deployment
In our first post, we established why securing AI workloads is mission-critical for the enterprise. Now, we turn to the AI pipeline—the end-to-end journey from raw data to deployed models—and explore why every stage must be fortified against evolving threats. As organizations accelerate AI adoption, this pipeline becomes a prime target for adversaries seeking to poison data, compromise models, or exploit deployment endpoints. Enterprises don’t operate a single “AI system”; they run interconnected pipelines that transform data into decisions across a web of services, models, and applications. Protecting this chain demands a holistic security strategy anchored in Zero Trust for AI, supply chain integrity, and continuous monitoring. In this post, we map the pipeline, identify key attack vectors at each stage, and outline practical defenses using Microsoft’s security controls—spanning data governance with Purview, confidential training environments in Azure, and runtime threat detection with Defender for Cloud. Our guidance aligns with leading frameworks, including the NIST AI Risk Management Framework and MITRE ATLAS, ensuring your AI security program meets recognized standards while enabling innovation at scale. A Security View of the AI Pipeline Securing AI isn’t just about protecting a single model—it’s about safeguarding the entire pipeline that transforms raw data into actionable intelligence. This pipeline spans multiple stages, from data collection and preparation to model training, validation, and deployment, each introducing unique risks that adversaries can exploit. Data poisoning, model tampering, and supply chain attacks are no longer theoretical—they’re real threats that can undermine trust and compliance. By viewing the pipeline through a security lens, organizations can identify these vulnerabilities early and apply layered defenses such as Zero Trust principles, data lineage tracking, and runtime monitoring. This holistic approach ensures that AI systems remain resilient, auditable, and aligned with enterprise risk and regulatory requirements. Stages & Primary Risks Data Collection & Ingestion Sources: enterprise apps, data lakes, web, partners. Key risks: poisoning, PII leakage, weak lineage, and shadow datasets. Frameworks call for explicit governance and provenance at this earliest stage. [nist.gov] Data Prep & Feature Engineering Risks: backdoored features, bias injection, and transformation tampering that evades standard validation. ATLAS catalogs techniques that target data, features, and preprocessing. [atlas.mitre.org] Model Training / Fine‑Tuning Risks: model theft, inversion, poisoning, and compromised compute. Confidential computing and isolated training domains are recommended. [learn.microsoft.com] Validation & Red‑Team Testing Risks: tainted validation sets, overlooked LLM‑specific risks (prompt injection, unbounded consumption), and fairness drift. OWASP’s LLM Top 10 highlights the unique classes of generative threats. [owasp.org] Registry & Release Management Risks: supply chain tampering (malicious models, dependency confusion), unsigned artifacts, and missing SBOM/AIBOM. [codesecure.com], [github.com] Deployment & Inference Risks: adversarial inputs, API abuse, prompt injection (direct & indirect), data exfiltration, and model abuse at runtime. Microsoft has documented multi‑layer mitigations and integrated threat protection for AI workloads. [techcommun…rosoft.com], [learn.microsoft.com] Reference Architecture (Zero Trust for AI) The Reference Architecture for Zero Trust in AI establishes a security-first blueprint for the entire AI pipeline—from raw data ingestion to model deployment and continuous monitoring. Its importance lies in addressing the unique risks of AI systems, such as data poisoning, model tampering, and adversarial attacks, which traditional security models often overlook. By embedding Zero Trust principles at every stage—governance with Microsoft Purview, isolated training environments, signed model artifacts, and runtime threat detection—organizations gain verifiable integrity, regulatory compliance, and resilience against evolving threats. Adopting this architecture ensures that AI innovations remain trustworthy, auditable, and aligned with business and compliance objectives, ultimately accelerating adoption while reducing risk and safeguarding enterprise reputation. Below is a visual of what this architecture looks like: Why this matters: Microsoft Purview establishes provenance, labels, and lineage Azure ML enforces network isolation Confidential Computing protects data-in-use Responsible AI tooling addresses safety & fairness Defender for Cloud adds runtime AI‑specific threat detection Azure ML Model Monitoring closes the loop with drift and anomaly detection. [microsoft.com], [azure.microsoft.com], [learn.microsoft.com], [learn.microsoft.com], [learn.microsoft.com], [learn.microsoft.com], [learn.microsoft.com], [learn.microsoft.com] Stage‑by‑Stage Threats & Concrete Mitigations (with Microsoft Controls) Data Collection & Ingestion - Attack Scenarios Data poisoning via partner feed or web‑scraped corpus; undetected changes skew downstream models. Research shows Differential Privacy (DP) can reduce impact but is not a silver bullet. Differential Privacy introduces controlled noise into training data or model outputs, making it harder for attackers to infer individual data points and limiting the influence of any single poisoned record. This helps reduce the impact of targeted poisoning attacks because malicious entries cannot disproportionately affect the model’s parameters. However, DP is not sufficient on its own for several reasons: Aggregate poisoning still works: DP protects individual records, but if an attacker injects a large volume of poisoned data, the cumulative effect can still skew the model. Utility trade-offs: Adding noise to achieve strong privacy guarantees often degrades model accuracy, creating tension between security and performance. Doesn’t detect malicious intent: DP doesn’t validate data quality or provenance—it only limits exposure. Poisoned data can still enter the pipeline undetected. Vulnerable to sophisticated attacks: Techniques like backdoor poisoning or gradient manipulation can bypass DP protections because they exploit model behavior rather than individual record influence. Bottom line, DP is a valuable layer for privacy and resilience, but it must be combined with data validation, anomaly detection, and provenance checks to effectively mitigate poisoning risks. [arxiv.org], [dp-ml.github.io] Sensitive data drift into training corpus (PII/PHI), later leaking through model inversion. NIST RMF calls for privacy‑enhanced design and provenance from the outset. When personally identifiable information (PII) or protected health information (PHI) unintentionally enters the training dataset—often through partner feeds, logs, or web-scraped sources—it creates a latent risk. If the model memorizes these sensitive records, adversaries can exploit model inversion attacks to reconstruct or infer private details from outputs or embeddings. [nvlpubs.nist.gov] Mitigations & Integrations Classify & label sensitive fields with Microsoft Purview Use Purview’s automated scanning and classification to detect PII, PHI, financial data, and other regulated fields across your data estate. Apply sensitivity labels and tags to enforce consistent governance policies. [microsoft.com] Enable lineage across Microsoft Fabric/Synapse/SQL Implement Data Loss Prevention (DLP) rules to block unauthorized movement of sensitive data and prevent accidental leaks. Combine this with role-based access control (RBAC) and attribute-based access control (ABAC) to restrict who can view, modify, or export sensitive datasets. Integrate with SOC and DevSecOps Pipelines Feed Purview alerts and lineage events into your SIEM/XDR workflows for real-time monitoring. Automate policy enforcement in CI/CD pipelines to ensure models only train on approved, sanitized datasets. Continuous Compliance Monitoring Schedule recurring scans and leverage Purview’s compliance dashboards to validate adherence to regulatory frameworks like GDPR, HIPAA, and NIST RMF. Maintain dataset hashes and signatures; store lineage metadata and approvals before a dataset can enter training (Purview + Fabric). [azure.microsoft.com] For externally sourced data, sandbox ingestion and run poisoning heuristics; if using Data Privacy (DP)‑training, document tradeoffs (utility vs. robustness). [aclanthology.org], [dp-ml.github.io] 3.2 Data Preparation & Feature Engineering Attack Scenarios Feature backdoors: crafted tokens in a free‑text field activate hidden behaviors only under specific conditions. MITRE ATLAS lists techniques that target features/preprocessing. [atlas.mitre.org] Mitigations & Integrations Version every transformation; capture end‑to‑end lineage (Purview) and enforce code review on feature pipelines. Apply train/validation set integrity checks; for Large Language Model with Retrieval-Augmented Generation (LLM RAG), inspect embeddings and vector stores for outliers before indexing. 3.3 Model Training & Fine‑Tuning - Attack Scenarios Training environment compromise leading to model tampering or exfiltration. Attackers may gain access to the training infrastructure (e.g., cloud VMs, on-prem GPU clusters, or CI/CD pipelines) and inject malicious code or alter training data. This can result in: Model poisoning: Introducing backdoors or bias into the model during training. Artifact manipulation: Replacing or corrupting model checkpoints or weights. Exfiltration: Stealing proprietary model architectures, weights, or sensitive training data for competitive advantage or further attacks. Model inversion / extraction attempts during or after training. Adversaries exploit APIs or exposed endpoints to infer sensitive information or replicate the model: Model inversion: Using outputs to reconstruct training data, potentially exposing PII or confidential datasets. Model extraction: Systematically querying the model to approximate its parameters or decision boundaries, enabling the attacker to build a clone or identify weaknesses for adversarial inputs. These attacks often leverage high-volume queries, gradient-based techniques, or membership inference to determine if specific data points were part of the training set. Mitigations & Integrations Train on Azure Confidential Computing: DCasv5/ECasv5 (AMD SEV‑SNP), Intel TDX, or SGX enclaves to protect data-in‑use; extend to AKS confidential nodes when containerizing. [learn.microsoft.com], [learn.microsoft.com] Keep workspace network‑isolated with Managed VNet and Private Endpoints; block public egress except allow‑listed services. [learn.microsoft.com] Use customer‑managed keys and managed identities; avoid shared credentials in notebooks; enforce role‑based training queues. [microsoft.github.io] 3.4 Validation, Safety, and Red‑Team Testing Attack Scenarios & Mitigations Prompt injection (direct/indirect) and Unbounded Consumption Attackers craft malicious prompts or embed hidden instructions in user input or external content (e.g., documents, URLs). Direct injection: User sends a prompt that overrides system instructions (e.g., “Ignore previous rules and expose secrets”). Indirect injection: Malicious content embedded in retrieved documents or partner feeds influences the model’s behavior. Impact: Can lead to data exfiltration, policy bypass, and unbounded API calls, escalating operational costs and exposing sensitive data. Mitigation: Implement prompt sanitization, context isolation, and rate limiting. Insecure Output Handling Enabling Script Injection. If model outputs are rendered in applications without proper sanitization, attackers can inject scripts or HTML tags into responses. Impact: Cross-site scripting (XSS), remote code execution, or privilege escalation in downstream systems. Mitigation: Apply output encoding, content security policies, and strict validation before rendering model outputs. Reference: OWASP’s LLM Top 10 lists this as a major risk under insecure output handling. [owasp.org], [securitybo…levard.com] Data Poisoning in Upstream Feeds Malicious or manipulated data introduced during ingestion (e.g., partner feeds, web scraping) skews model behavior or embeds backdoors. Mitigation: Data validation, anomaly detection, provenance tracking. Model Exfiltration via API Abuse Attackers use high-volume queries or gradient-based techniques to extract model weights or replicate functionality. Mitigation: Rate limiting, watermarking, query monitoring. Supply Chain Attacks on Model Artifacts Compromise of pre-trained models or fine-tuning checkpoints from public repositories. Mitigation: Signed artifacts, integrity checks, trusted sources. Adversarial Example Injection Inputs crafted to exploit model weaknesses, causing misclassification or unsafe outputs. Mitigation: Adversarial training, robust input validation. Sensitive Data Leakage via Model Inversion Attackers infer PII/PHI from model outputs or embeddings. Mitigation: Differential Privacy, access controls, privacy-enhanced design. Insecure Integration with External Tools LLMs calling plugins or APIs without proper sandboxing can lead to unauthorized actions. Mitigation: Strict permissioning, allowlists, and isolation. Additional Mitigations & Integrations considerations Adopt Microsoft’s defense‑in‑depth guidance for indirect prompt injection (hardening + Spotlighting patterns) and pair with runtime Prompt Shields. [techcommun…rosoft.com] Evaluate models with Responsible AI Dashboard (fairness, explainability, error analysis) and export RAI Scorecards for release gates. [learn.microsoft.com] Build security gates referencing MITRE ATLAS techniques and OWASP GenAI controls into your MLOps pipeline. [atlas.mitre.org], [owasp.org] 3.5 Registry, Signing & Supply Chain Integrity - Attack Scenarios Model supply chain risk: backdoored pre‑trained weights Attackers compromise publicly available or third-party pre-trained models by embedding hidden behaviors (e.g., triggers that activate under specific inputs). Impact: Silent backdoors can cause targeted misclassification or data leakage during inference. Mitigation: Use trusted registries and verified sources for model downloads. Perform model scanning for anomalies and backdoor detection before deployment. [raykhira.com] Dependency Confusion Malicious actors publish packages with the same name as internal dependencies to public repositories. If build pipelines pull these packages, attackers gain code execution. Impact: Compromised training or deployment environments, leading to model tampering or data exfiltration. Mitigation: Enforce private package registries and pin versions. Validate dependencies against allowlists. Unsigned Artifacts Swapped in the Registry If model artifacts (weights, configs, containers) are not cryptographically signed, attackers can replace them with malicious versions. Impact: Deployment of compromised models or containers without detection. Mitigation: Implement artifact signing and integrity verification (e.g., SHA256 checksums). Require signature validation in CI/CD pipelines before promotion to production. Registry Compromise Attackers gain access to the model registry and alter metadata or inject malicious artifacts. Mitigation: RBAC, MFA, audit logging, and registry isolation. Tampered Build Pipeline CI/CD pipeline compromised to inject malicious code during model packaging or containerization. Mitigation: Secure build environments, signed commits, and pipeline integrity checks. Poisoned Container Images Malicious base images used for model deployment introduce vulnerabilities or malware. Mitigation: Use trusted container registries, scan images for CVEs, and enforce image signing. Shadow Artifacts Attackers upload artifacts with similar names or versions to confuse operators and bypass validation. Mitigation: Strict naming conventions, artifact fingerprinting, and automated validation. Additional Mitigations & Integrations considerations Store models in Azure ML Registry with version pinning; sign artifacts and publish SBOM/AI‑BOM metadata for downstream verifiers. [microsoft.github.io], [github.com], [codesecure.com] Maintain verifiable lineage and attestations (policy says: no signature, no deploy). Emerging work on attestable pipelines reinforces this approach. [arxiv.org] 3.6 Secure Deployment & Runtime Protection - Attack Scenarios Adversarial inputs and prompt injections targeting your inference APIs or agents Attackers craft malicious queries or embed hidden instructions in user input or retrieved content to manipulate model behavior. Impact: Policy bypass, sensitive data leakage, or execution of unintended actions via connected tools. Mitigation: Prompt sanitization and isolation (strip unsafe instructions). Context segmentation for multi-turn conversations. Rate limiting and anomaly detection on inference endpoints. Jailbreaks that bypass safety filters Attackers exploit weaknesses in safety guardrails by chaining prompts or using obfuscation techniques to override restrictions. Impact: Generation of harmful, disallowed, or confidential content; reputational and compliance risks. Mitigation: Layered safety filters (input + output). Continuous red-teaming and adversarial testing. Dynamic policy enforcement based on risk scoring. API abuse and model extraction. High-volume or structured queries designed to infer model parameters or replicate its functionality. Impact: Intellectual property theft, exposure of proprietary model logic, and enabling downstream attacks. Mitigation: Rate limiting and throttling. Watermarking responses to detect stolen outputs. Query pattern monitoring for extraction attempts. [atlas.mitre.org] Insecure Integration with External Tools or Plugins LLM agents calling APIs without sandboxing can trigger unauthorized actions. Mitigation: Strict allowlists, permission gating, and isolated execution environments. Model Output Injection into Downstream Systems Unsanitized outputs rendered in apps or dashboards can lead to XSS or command injection. Mitigation: Output encoding, validation, and secure rendering practices. Runtime Environment Compromise Attackers exploit container or VM vulnerabilities hosting inference services. Mitigation: Harden runtime environments, apply OS-level security patches, and enforce network isolation. Side-Channel Attacks Observing timing, resource usage, or error messages to infer sensitive details about the model or data. Mitigation: Noise injection, uniform response timing, and error sanitization. Unbounded Consumption Leading to Cost Escalation Attackers flood inference endpoints with requests, driving up compute costs. Mitigation: Quotas, usage monitoring, and auto-scaling with cost controls. Additional Mitigations & Integrations considerations Deploy Managed Online Endpoints behind Private Link; enforce mTLS, rate limits, and token‑based auth; restrict egress in managed VNet. [learn.microsoft.com] Turn on Microsoft Defender for Cloud – AI threat protection to detect jailbreaks, data leakage, prompt hacking, and poisoning attempts; incidents flow into Defender XDR. [learn.microsoft.com] For Azure OpenAI / Direct Models, enterprise data is tenant‑isolated and not used to train foundation models; configure Abuse Monitoring and Risks & Safety dashboards, with clear data‑handling stance. [learn.microsoft.com], [learn.microsoft.com], [learn.microsoft.com] 3.7 Post‑Deployment Monitoring & Response - Attack Scenarios Data/Prediction Drift silently degrades performance Over time, input data distributions change (e.g., new slang, market shifts), causing the model to make less accurate predictions without obvious alerts. Impact: Reduced accuracy, operational risk, and potential compliance violations if decisions become unreliable. Mitigation: Continuous drift detection using statistical tests (KL divergence, PSI). Scheduled model retraining and validation pipelines. Alerting thresholds for performance degradation. Fairness Drift Shifts Outcomes Across Cohorts Model performance or decision bias changes for specific demographic or business segments due to evolving data or retraining. Impact: Regulatory risk (GDPR, EEOC), reputational damage, and ethical concerns. Mitigation: Implement bias monitoring dashboards. Apply fairness metrics (equal opportunity, demographic parity) in post-deployment checks. Trigger remediation workflows when drift exceeds thresholds. Emergent Jailbreak Patterns evolve over time Attackers discover new prompt injection or jailbreak techniques that bypass safety filters after deployment. Impact: Generation of harmful or disallowed content, policy violations, and security breaches. Mitigation: Behavioral anomaly detection on prompts and outputs. Continuous red-teaming and adversarial testing. Dynamic policy updates integrated into inference pipelines. Shadow Model Deployment Unauthorized or outdated models running in production environments without governance. Mitigation: Registry enforcement, signed artifacts, and deployment audits. Silent Backdoor Activation Backdoors introduced during training activate under rare conditions post-deployment. Mitigation: Runtime scanning for anomalous triggers and adversarial input detection. Telemetry Tampering Attackers manipulate monitoring logs or metrics to hide drift or anomalies. Mitigation: Immutable logging, cryptographic integrity checks, and SIEM integration. Cost Abuse via Automated Bots Bots continuously hit inference endpoints, driving up operational costs unnoticed. Mitigation: Rate limiting, usage analytics, and anomaly-based throttling. Model Extraction Over Time Slow, distributed queries across months to replicate model behavior without triggering rate limits. Mitigation: Long-term query pattern analysis and watermarking. Additional Mitigations & Integrations considerations Enable Azure ML Model Monitoring for data drift, prediction drift, data quality, and custom signals; route alerts to Event Grid to auto‑trigger retraining and change control. [learn.microsoft.com], [learn.microsoft.com] Correlate runtime AI threat alerts (Defender for Cloud) with broader incidents in Defender XDR for a complete kill‑chain view. [learn.microsoft.com] Real‑World Scenarios & Playbooks Scenario A — “Clean” Model, Poisoned Validation Symptom: Model looks great in CI, fails catastrophically on a subset in production. Likely cause: Attacker tainted validation data so unsafe behavior was never detected. ATLAS documents validation‑stage attacks. [atlas.mitre.org] Playbook: Require dual‑source validation sets with hashes in Purview lineage; incorporate RAI dashboard probes for subgroup performance; block release if variance exceeds policy. [microsoft.com], [learn.microsoft.com] Scenario B — Indirect Prompt Injection in Retrieval-Augmented Generation (RAG) Symptom: The assistant “quotes” an external PDF that quietly exfiltrates secrets via instructions in hidden text. Playbook: Apply Microsoft Spotlighting patterns (delimiting/datamarking/encoding) and Prompt Shields; enable Defender for Cloud AI alerts and remediate via Defender XDR. [techcommun…rosoft.com], [learn.microsoft.com] Scenario C — Model Extraction via API Abuse Symptom: Spiky usage, long prompts, and systematic probing. Playbook: Enforce rate/shape limits; throttle token windows; monitor with Defender for Cloud and block high‑risk consumers; for OpenAI endpoints, validate Abuse Monitoring telemetry and adjust content filters. [learn.microsoft.com], [learn.microsoft.com] Product‑by‑Product Implementation Guide (Quick Start) Data Governance & Provenance Microsoft Purview Data Governance GA: unify cataloging, lineage, and policy; integrate with Fabric; use embedded Copilot to accelerate stewardship. [microsoft.com], [azure.microsoft.com] Secure Training Azure ML with Managed VNet + Private Endpoints; use Confidential VMs (DCasv5/ECasv5) or SGX/TDX where enclave isolation is required; extend to AKS confidential nodes for containerized training. [learn.microsoft.com], [learn.microsoft.com] Responsible AI Responsible AI Dashboard & Scorecards for fairness/interpretability/error analysis—use as release artifacts at change control. [learn.microsoft.com] Runtime Safety & Threat Detection Azure AI Content Safety (Prompt Shields, groundedness, protected material detection) + Defender for Cloud AI Threat Protection (alerts for leakage/poisoning/jailbreak/credential theft) integrated to Defender XDR. [ai.azure.com], [learn.microsoft.com] Enterprise‑grade LLM Access Azure OpenAI / Direct Models: data isolation, residency (Data Zones), and clear privacy commitments for commercial & public sector customers. [learn.microsoft.com], [azure.microsoft.com], [blogs.microsoft.com] Monitoring & Continuous Improvement Azure ML Model Monitoring (drift/quality) + Event Grid triggers for auto‑retrain; instrument with Application Insights for latency/reliability. [learn.microsoft.com] Policy & Governance: Map → Measure → Manage (NIST AI RMF) Align your controls to NIST’s four functions: Govern: Define AI security policies: dataset admission, cryptographic signing, registry controls, and red‑team requirements. [nvlpubs.nist.gov] Map: Inventory models, data, and dependencies (Purview catalog + SBOM/AIBOM). [microsoft.com], [github.com] Measure: RAI metrics (fairness, explainability), drift thresholds, and runtime attack rates (Defender/Content Safety). [learn.microsoft.com], [learn.microsoft.com] Manage: Automate mitigations: block unsigned artifacts, quarantine suspect datasets, rotate keys, and retrain on alerts. [nist.gov] What “Good” Looks Like: A 90‑Day Hardening Plan Days 0–30: Establish Foundations Turn on Purview scans across Fabric/SQL/Storage; define sensitivity labels + DLP. [microsoft.com] Lock Azure ML workspaces into Managed VNet, Private Endpoints, and Managed Identity. [learn.microsoft.com], [microsoft.github.io] Move training to Confidential VMs for sensitive projects. [learn.microsoft.com] Days 31–60: Shift‑Left & Gate Releases Integrate RAI Dashboard/Scorecards into CI; add ATLAS + OWASP LLM checks to release gates. [learn.microsoft.com], [atlas.mitre.org], [owasp.org] Require SBOM/AIBOM and artifact signing for models. [codesecure.com], [github.com] Days 61–90: Runtime Defense & Observability Enable Defender for Cloud – AI Threat Protection and Azure AI Content Safety; wire alerts to Defender XDR. [learn.microsoft.com], [ai.azure.com] Roll out Model Monitoring (drift/quality) with auto‑retrain triggers via Event Grid. [learn.microsoft.com] FAQ: Common Leadership Questions Q: Do differential privacy and adversarial training “solve” poisoning? A: They reduce risk envelopes but do not eliminate attacks—plan for layered defenses and continuous validation. [arxiv.org], [dp-ml.github.io] Q: How do we prevent indirect prompt injection in agentic apps? A: Combine Spotlighting patterns, Prompt Shields, least‑privilege tool access, explicit consent for sensitive actions, and Defender for Cloud runtime alerts. [techcommun…rosoft.com], [learn.microsoft.com] Q: Can we use Azure OpenAI without contributing our data to model training? A: Yes—Azure Direct Models keep your prompts/completions private, not used to train foundation models without your permission; with Data Zones, you can align residency. [learn.microsoft.com], [azure.microsoft.com] Closing As your organization scales AI, the pipeline is the perimeter. Treat every stage—from data capture to model deployment—as a control point with verifiable lineage, signed artifacts, network isolation, runtime detection, and continuous risk measurement. But securing the pipeline is only part of the story—what about the models themselves? In our next post, we’ll dive into hardening AI models against adversarial attacks, exploring techniques to detect, mitigate, and build resilience against threats that target the very core of your AI systems. Key Takeaway Securing AI requires protecting the entire pipeline—from data collection to deployment and monitoring—not just individual models. Zero Trust for AI: Embed security controls at every stage (data governance, isolated training, signed artifacts, runtime threat detection) for integrity and compliance. Main threats and mitigations by stage: Data Collection: Risks include poisoning and PII leakage; mitigate with data classification, lineage tracking, and DLP. Data Preparation: Watch for feature backdoors and tampering; use versioning, code review, and integrity checks. Model Training: Risks are environment compromise and model theft; mitigate with confidential computing, network isolation, and managed identities. Validation & Red Teaming: Prompt injection and unbounded consumption are key risks; address with prompt sanitization, output encoding, and adversarial testing. Supply Chain & Registry: Backdoored models and dependency confusion; use trusted registries, artifact signing, and strict pipeline controls. Deployment & Runtime: Adversarial inputs and API abuse; mitigate with rate limiting, context segmentation, and Defender for Cloud AI threat protection. Monitoring: Watch for data/prediction drift and cost abuse; enable continuous monitoring, drift detection, and automated retraining. References NIST AI RMF (Core + Generative AI Profile) – governance lens for pipeline risks. [nist.gov], [nist.gov] MITRE ATLAS – adversary tactics & techniques against AI systems. [atlas.mitre.org] OWASP Top 10 for LLM Applications / GenAI Project – practical guidance for LLM‑specific risks. [owasp.org] Azure Confidential Computing – protect data‑in‑use with SEV‑SNP/TDX/SGX and confidential GPUs. [learn.microsoft.com] Microsoft Purview Data Governance – GA feature set for unified data governance & lineage. [microsoft.com] Defender for Cloud – AI Threat Protection – runtime detections and XDR integration. [learn.microsoft.com] Responsible AI Dashboard / Scorecards – fairness & explainability in Azure ML. [learn.microsoft.com] Azure AI Content Safety – Prompt Shields, groundedness detection, protected material checks. [ai.azure.com] Azure ML Model Monitoring – drift/quality monitoring & automated retraining flows. [learn.microsoft.com] #AIPipelineSecurity; #AITrustAndSafety; #SecureAI; #AIModelSecurity; #AIThreatModeling; #SupplyChainSecurity; #DataSecurityArtificial Intelligence & Security
Understanding Artificial Intelligence Artificial intelligence (AI) is a computational system that perform human‑intelligence tasks, learning, reasoning, problem‑solving, perception, and language understanding by leveraging algorithmic and statistical methods to analyse data and make informed decisions. Artificial Intelligence (AI) can also be abbreviated as is the simulation of human intelligence through machines programmed to learn, reason, and act. It blends statistics, machine learning, and robotics to deliver following outcomes: Prediction: The application of statistical modelling and machine learning techniques to anticipate future outcomes, such as detecting fraudulent transactions. Automation: The utilisation of robotics and artificial intelligence to streamline and execute routine processes, exemplified by automated invoice processing. Augmentation: The enhancement of human decision-making and operational capabilities through AI-driven tools, for instance, AI-assisted sales enablement. Artificial Intelligence: Core Capabilities and Market Outlook Key capabilities of AI include: Data-driven decision-making: Analysing large datasets to generate actionable insights and optimise outcomes. Anomaly detection: Identifying irregular patterns or deviations in data for risk mitigation and quality assurance. Visual interpretation: Processing and understanding visual inputs such as images and videos for applications like computer vision. Natural language understanding: Comprehending and interpreting human language to enable accurate information extraction and contextual responses. Conversational engagement: Facilitating human-like interactions through chatbots, virtual assistants, and dialogue systems. With the exponential growth of data, ML learning models and computing power. AI is advancing much faster and as According to industry analyst reports breakthroughs in deep learning and neural network architectures have enabled highly sophisticated applications across diverse sectors, including healthcare, finance, manufacturing, and retail. The global AI market is on a trajectory of significant expansion, projected to increase nearly 5X by 2030, from $391 billion in 2025 to $1.81 trillion. This growth corresponds to a compound annual growth rate (CAGR) of 35.9% during the forecast period. These projections are estimates and subject to change as per rapid growth and advancement in the AI Era. AI and Cloud Synergy AI, and cloud computing form a powerful technological mixture. Digital assistants are offering scalable, cloud-powered intelligence. Cloud platforms such as Azure provide pre-trained models and services, enabling businesses to deploy AI solutions efficiently. Core AI Workloads Capabilities Machine Learning Machine learning (ML) underpins most AI systems by enabling models to learn from historical and real-time data to make predictions, classifications, and recommendations. These models adapt over time as they are exposed to new data, improving accuracy and robustness. Example use cases: Credit risk scoring in banking, demand forecasting in retail, and predictive maintenance in manufacturing. Anomaly Detection Anomaly detection techniques identify deviations from expected patterns in data, systems, or processes. This capability is critical for risk management and operational resilience, as it enables early detection of fraud, security breaches, or equipment failures. Example use cases: Fraud detection in financial transactions, network intrusion monitoring in cybersecurity, and quality control in industrial production. Natural Language Processing (NLP) NLP focuses on enabling machines to understand, interpret, and generate human language in both text and speech formats. This capability powers a wide range of applications that require contextual comprehension and semantic accuracy. Example use cases: Sentiment analysis for customer feedback, document summarisation for legal and compliance teams, and multilingual translation for global operations. Principles of Responsible AI To ensure ethical and trustworthy AI, organisations must embrace: Reliability & Safety Privacy & Security Inclusiveness Fairness Transparency Accountability These principles are embedded in frameworks like the Responsible-AI-Standard and reinforced by governance models such as Microsoft AI Governance Framework. Responsible AI Principles and Approach | Microsoft AI AI and Security AI introduces both opportunities and risks. A responsible approach to AI security involves three dimensions: Risk Mitigation: It Is addressing threats from immature or malicious AI applications. Security Applications: These are used to enhance AI security and public safety. Governance Systems: Establishing frameworks to manage AI risks and ensure safe development. Security Risks and Opportunities Due to AI Transformation AI’s transformative nature brings new challenges: Cybersecurity: This brings the opportunities and advancement to track, detect and act against Vulnerabilities in infrastructure and learning models. Data Security: This helps the tool and solutions such as Microsoft Purview to prevent data security by performing assessments, creating Data loss prevention policies applying sensitivity labels. Information Security: The biggest risk is securing the information and due to the AI era of transformation securing IS using various AI security frameworks. These concerns are echoed in The Crucial Role of Data Security Posture Management in the AI Era, which highlights insider threats, generative AI risks, and the need for robust data governance. AI in Security Applications AI’s capabilities in data analysis and decision-making enable innovative security solutions: Network Protection: applications include use of AI algorithms for intrusion detection, malware detection, security situational awareness, and threat early warning, etc. Data Management: applications refer to the use of AI technologies to achieve data protection objectives such as hierarchical classification, leak prevention, and leak traceability. Intelligent Security: applications refer to the use of AI technology to upgrade the security field from passive defence toward the intelligent direction, developing of active judgment and timely early warning. Financial Risk Control: applications use AI technology to improve the efficiency and accuracy of credit assessment, risk management, etc., and assisting governments in the regulation of financial transactions. AI Security Management Effective AI security requires: Regulations & Policies: Establish and safety management laws specifically designed to for governance by regulatory authorities and management policies for key application domains of AI and prominent security risks. Standards & Specifications: Industry-wide benchmarks, along with international and domestic standards can be used to support AI safety. Technological Methods: Early detection with Modern set of tools such as Defender for AI can be used to support to detect and mitigate and remediate AI threats. Security Assessments: Organization should use proper tools and platforms for evaluating AI risks and perform assessments regularly using automated tools approach Conclusion AI is transforming how organizations operate, innovate, and secure their environments. As AI capabilities evolve, integrating security and governance considerations from the outset remains critical. By combining responsible AI principles, effective governance, and appropriate security measures, organizations can work toward deploying AI technologies in a manner that supports both innovation and trust. Industry projections suggest continued growth in AI‑related security investments over the coming years, reflecting increased focus on managing AI risks alongside its benefits. These estimates are subject to change and should be interpreted in the context of evolving technologies and regulatory developments. Disclaimer References to Microsoft products and frameworks are for informational purposes only and do not imply endorsement, guarantee, or contractual commitment. Market projections referenced are based on publicly available industry analyses and are subject to change.Security as the core primitive - Securing AI agents and apps
This week at Microsoft Ignite, we shared our vision for Microsoft security -- In the agentic era, security must be ambient and autonomous, like the AI it protects. It must be woven into and around everything we build—from silicon to OS, to agents, apps, data, platforms, and clouds—and throughout everything we do. In this blog, we are going to dive deeper into many of the new innovations we are introducing this week to secure AI agents and apps. As I spend time with our customers and partners, there are four consistent themes that have emerged as core security challenges to secure AI workloads. These are: preventing agent sprawl and access to resources, protecting against data oversharing and data leaks, defending against new AI threats and vulnerabilities, and adhering to evolving regulations. Addressing these challenges holistically requires a coordinated effort across IT, developers, and security leaders, not just within security teams and to enable this, we are introducing several new innovations: Microsoft Agent 365 for IT, Foundry Control Plane in Microsoft Foundry for developers, and the Security Dashboard for AI for security leaders. In addition, we are releasing several new purpose-built capabilities to protect and govern AI apps and agents across Microsoft Defender, Microsoft Entra, and Microsoft Purview. Observability at every layer of the stack To facilitate the organization-wide effort that it takes to secure and govern AI agents and apps – IT, developers, and security leaders need observability (security, management, and monitoring) at every level. IT teams need to enable the development and deployment of any agent in their environment. To ensure the responsible and secure deployment of agents into an organization, IT needs a unified agent registry, the ability to assign an identity to every agent, manage the agent’s access to data and resources, and manage the agent’s entire lifecycle. In addition, IT needs to be able to assign access to common productivity and collaboration tools, such as email and file storage, and be able to observe their entire agent estate for risks such as over-permissioned agents. Development teams need to build and test agents, apply security and compliance controls by default, and ensure AI models are evaluated for safety guardrails and security vulnerabilities. Post deployment, development teams must observe agents to ensure they are staying on task, accessing applications and data sources appropriately, and operating within their cost and performance expectations. Security & compliance teams must ensure overall security of their AI estate, including their AI infrastructure, platforms, data, apps, and agents. They need comprehensive visibility into all their security risks- including agent sprawl and resource access, data oversharing and leaks, AI threats and vulnerabilities, and complying with global regulations. They want to address these risks by extending their existing security investments that they are already invested in and familiar with, rather than using siloed or bolt-on tools. These teams can be most effective in delivering trustworthy AI to their organizations if security is natively integrated into the tools and platforms that they use every day, and if those tools and platforms share consistent security primitives such as agent identities from Entra; data security and compliance controls from Purview; and security posture, detections, and protections from Defender. With the new capabilities being released today, we are delivering observability at every layer of the AI stack, meeting IT, developers, and security teams where they are in the tools they already use to innovate with confidence. For IT Teams - Introducing Microsoft Agent 365, the control plane for agents, now in preview The best infrastructure for managing your agents is the one you already use to manage your users. With Agent 365, organizations can extend familiar tools and policies to confidently deploy and secure agents, without reinventing the wheel. By using the same trusted Microsoft 365 infrastructure, productivity apps, and protections, organizations can now apply consistent and familiar governance and security controls that are purpose-built to protect against agent-specific threats and risks. gement and governance of agents across organizations Microsoft Agent 365 delivers a unified agent Registry, Access Control, Visualization, Interoperability, and Security capabilities for your organization. These capabilities work together to help organizations manage agents and drive business value. The Registry powered by the Entra provides a complete and unified inventory of all the agents deployed and used in your organization including both Microsoft and third-party agents. Access Control allows you to limit the access privileges of your agents to only the resources that they need and protect their access to resources in real time. Visualization gives organizations the ability to see what matters most and gain insights through a unified dashboard, advanced analytics, and role-based reporting. Interop allows agents to access organizational data through Work IQ for added context, and to integrate with Microsoft 365 apps such as Outlook, Word, and Excel so they can create and collaborate alongside users. Security enables the proactive detection of vulnerabilities and misconfigurations, protects against common attacks such as prompt injections, prevents agents from processing or leaking sensitive data, and gives organizations the ability to audit agent interactions, assess compliance readiness and policy violations, and recommend controls for evolving regulatory requirements. Microsoft Agent 365 also includes the Agent 365 SDK, part of Microsoft Agent Framework, which empowers developers and ISVs to build agents on their own AI stack. The SDK enables agents to automatically inherit Microsoft's security and governance protections, such as identity controls, data security policies, and compliance capabilities, without the need for custom integration. For more details on Agent 365, read the blog here. For Developers - Introducing Microsoft Foundry Control Plane to observe, secure and manage agents, now in preview Developers are moving fast to bring agents into production, but operating them at scale introduces new challenges and responsibilities. Agents can access tools, take actions, and make decisions in real time, which means development teams must ensure that every agent behaves safely, securely, and consistently. Today, developers need to work across multiple disparate tools to get a holistic picture of the cybersecurity and safety risks that their agents may have. Once they understand the risk, they then need a unified and simplified way to monitor and manage their entire agent fleet and apply controls and guardrails as needed. Microsoft Foundry provides a unified platform for developers to build, evaluate and deploy AI apps and agents in a responsible way. Today we are excited to announce that Foundry Control Plane is available in preview. This enables developers to observe, secure, and manage their agent fleets with built-in security, and centralized governance controls. With this unified approach, developers can now identify risks and correlate disparate signals across their models, agents, and tools; enforce consistent policies and quality gates; and continuously monitor task adherence and runtime risks. Foundry Control Plane is deeply integrated with Microsoft’s security portfolio to provide a ‘secure by design’ foundation for developers. With Microsoft Entra, developers can ensure an agent identity (Agent ID) and access controls are built into every agent, mitigating the risk of unmanaged agents and over permissioned resources. With Microsoft Defender built in, developers gain contextualized alerts and posture recommendations for agents directly within the Foundry Control Plane. This integration proactively prevents configuration and access risks, while also defending agents from runtime threats in real time. Microsoft Purview’s native integration into Foundry Control Plane makes it easy to enable data security and compliance for every Foundry-built application or agent. This allows Purview to discover data security and compliance risks and apply policies to prevent user prompts and AI responses from safety and policy violations. In addition, agent interactions can be logged and searched for compliance and legal audits. This integration of the shared security capabilities, including identity and access, data security and compliance, and threat protection and posture ensures that security is not an afterthought; it’s embedded at every stage of the agent lifecycle, enabling you to start secure and stay secure. For more details, read the blog. For Security Teams - Introducing Security Dashboard for AI - unified risk visibility for CISOs and AI risk leaders, coming soon AI proliferation in the enterprise, combined with the emergence of AI governance committees and evolving AI regulations, leaves CISOs and AI risk leaders needing a clear view of their AI risks, such as data leaks, model vulnerabilities, misconfigurations, and unethical agent actions across their entire AI estate, spanning AI platforms, apps, and agents. 90% of security professionals, including CISOs, report that their responsibilities have expanded to include data governance and AI oversight within the past year. 1 At the same time, 86% of risk managers say disconnected data and systems lead to duplicated efforts and gaps in risk coverage. 2 To address these needs, we are excited to introduce the Security Dashboard for AI. This serves as a unified dashboard that aggregates posture and real-time risk signals from Microsoft Defender, Microsoft Entra, and Microsoft Purview. This unified dashboard allows CISOs and AI risk leaders to discover agents and AI apps, track AI posture and drift, and correlate risk signals to investigate and act across their entire AI ecosystem. For example, you can see your full AI inventory and get visibility into a quarantined agent, flagged for high data risk due to oversharing sensitive information in Purview. The dashboard then correlates that signal with identity insights from Entra and threat protection alerts from Defender to provide a complete picture of exposure. From there, you can delegate tasks to the appropriate teams to enforce policies and remediate issues quickly. With the Security Dashboard for AI, CISOs and risk leaders gain a clear, consolidated view of AI risks across agents, apps, and platforms—eliminating fragmented visibility, disconnected posture insights, and governance gaps as AI adoption scales. Best of all, there’s nothing new to buy. If you’re already using Microsoft security products to secure AI, you’re already a Security Dashboard for AI customer. Figure 5: Security Dashboard for AI provides CISOs and AI risk leaders with a unified view of their AI risk by bringing together their AI inventory, AI risk, and security recommendations to strengthen overall posture Together, these innovations deliver observability and security across IT, development, and security teams, powered by Microsoft’s shared security capabilities. With Microsoft Agent 365, IT teams can manage and secure agents alongside users. Foundry Control Plane gives developers unified governance and lifecycle controls for agent fleets. Security Dashboard for AI provides CISOs and AI risk leaders with a consolidated view of AI risks across platforms, apps, and agents. Added innovation to secure and govern your AI workloads In addition to the IT, developer, and security leader-focused innovations outlined above, we continue to accelerate our pace of innovation in Microsoft Entra, Microsoft Purview, and Microsoft Defender to address the most pressing needs for securing and governing your AI workloads. These needs are: Manage agent sprawl and resource access e.g. managing agent identity, access to resources, and permissions lifecycle at scale Prevent data oversharing and leaks e.g. protecting sensitive information shared in prompts, responses, and agent interactions Defend against shadow AI, new threats, and vulnerabilities e.g. managing unsanctioned applications, preventing prompt injection attacks, and detecting AI supply chain vulnerabilities Enable AI governance for regulatory compliance e.g. ensuring AI development, operations, and usage comply with evolving global regulations and frameworks Manage agent sprawl and resource access 76% of business leaders expect employees to manage agents within the next 2–3 years. 3 Widespread adoption of agents is driving the need for visibility and control, which includes the need for a unified registry, agent identities, lifecycle governance, and secure access to resources. Today, Microsoft Entra provides robust identity protection and secure access for applications and users. However, organizations lack a unified way to manage, govern, and protect agents in the same way they manage their users. Organizations need a purpose-built identity and access framework for agents. Introducing Microsoft Entra Agent ID, now in preview Microsoft Entra Agent ID offers enterprise-grade capabilities that enable organizations to prevent agent sprawl and protect agent identities and their access to resources. These new purpose-built capabilities enable organizations to: Register and manage agents: Get a complete inventory of the agent fleet and ensure all new agents are created with an identity built-in and are automatically protected by organization policies to accelerate adoption. Govern agent identities and lifecycle: Keep the agent fleet under control with lifecycle management and IT-defined guardrails for both agents and people who create and manage them. Protect agent access to resources: Reduce risk of breaches, block risky agents, and prevent agent access to malicious resources with conditional access and traffic inspection. Agents built in Microsoft Copilot Studio, Microsoft Foundry, and Security Copilot get an Entra Agent ID built-in at creation. Developers can also adopt Entra Agent ID for agents they build through Microsoft Agent Framework, Microsoft Agent 365 SDK, or Microsoft Entra Agent ID SDK. Read the Microsoft Entra blog to learn more. Prevent data oversharing and leaks Data security is more complex than ever. Information Security Media Group (ISMG) reports that 80% of leaders cite leakage of sensitive data as their top concern. 4 In addition to data security and compliance risks of generative AI (GenAI) apps, agents introduces new data risks such as unsupervised data access, highlighting the need to protect all types of corporate data, whether it is accessed by employees or agents. To mitigate these risks, we are introducing new Microsoft Purview data security and compliance capabilities for Microsoft 365 Copilot and for agents and AI apps built with Copilot Studio and Microsoft Foundry, providing unified protection, visibility, and control for users, AI Apps, and Agents. New Microsoft Purview controls safeguard Microsoft 365 Copilot with real-time protection and bulk remediation of oversharing risks Microsoft Purview and Microsoft 365 Copilot deliver a fully integrated solution for protecting sensitive data in AI workflows. Based on ongoing customer feedback, we’re introducing new capabilities to deliver real-time protection for sensitive data in M365 Copilot and accelerated remediation of oversharing risks: Data risk assessments: Previously, admins could monitor oversharing risks such as SharePoint sites with unprotected sensitive data. Now, they can perform item-level investigations and bulk remediation for overshared files in SharePoint and OneDrive to quickly reduce oversharing exposure. Data Loss Prevention (DLP) for M365 Copilot: DLP previously excluded files with sensitivity labels from Copilot processing. Now in preview, DLP also prevents prompts that include sensitive data from being processed in M365 Copilot, Copilot Chat, and Copilot agents, and prevents Copilot from using sensitive data in prompts for web grounding. Priority cleanup for M365 Copilot assets: Many organizations have org-wide policies to retain or delete data. Priority cleanup, now generally available, lets admins delete assets that are frequently processed by Copilot, such as meeting transcripts and recordings, on an independent schedule from the org-wide policies while maintaining regulatory compliance. On-demand classification for meeting transcripts: Purview can now detect sensitive information in meeting transcripts on-demand. This enables data security admins to apply DLP policies and enforce Priority cleanup based on the sensitive information detected. & bulk remediation Read the full Data Security blog to learn more. Introducing new Microsoft Purview data security capabilities for agents and apps built with Copilot Studio and Microsoft Foundry, now in preview Microsoft Purview now extends the same data security and compliance for users and Copilots to agents and apps. These new capabilities are: Enhanced Data Security Posture Management: A centralized DSPM dashboard that provides observability, risk assessment, and guided remediation across users, AI apps, and agents. Insider Risk Management (IRM) for Agents: Uniquely designed for agents, using dedicated behavioral analytics, Purview dynamically assigns risk levels to agents based on their risky handing of sensitive data and enables admins to apply conditional policies based on that risk level. Sensitive data protection with Azure AI Search: Azure AI Search enables fast, AI-driven retrieval across large document collections, essential for building AI Apps. When apps or agents use Azure AI Search to index or retrieve data, Purview sensitivity labels are preserved in the search index, ensuring that any sensitive information remains protected under the organization’s data security & compliance policies. For more information on preventing data oversharing and data leaks - Learn how Purview protects and governs agents in the Data Security and Compliance for Agents blog. Defend against shadow AI, new threats, and vulnerabilities AI workloads are subject to new AI-specific threats like prompt injections attacks, model poisoning, and data exfiltration of AI generated content. Although security admins and SOC analysts have similar tasks when securing agents, the attack methods and surfaces differ significantly. To help customers defend against these novel attacks, we are introducing new capabilities in Microsoft Defender that deliver end-to-end protection, from security posture management to runtime defense. Introducing Security Posture Management for agents, now in preview As organizations adopt AI agents to automate critical workflows, they become high-value targets and potential points of compromise, creating a critical need to ensure agents are hardened, compliant, and resilient by preventing misconfigurations and safeguarding against adversarial manipulation. Security Posture Management for agents in Microsoft Defender now provides an agent inventory for security teams across Microsoft Foundry and Copilot Studio agents. Here, analysts can assess the overall security posture of an agent, easily implement security recommendations, and identify vulnerabilities such as misconfigurations and excessive permissions, all aligned to the MITRE ATT&CK framework. Additionally, the new agent attack path analysis visualizes how an agent’s weak security posture can create broader organizational risk, so you can quickly limit exposure and prevent lateral movement. Introducing Threat Protection for agents, now in preview Attack techniques and attack surfaces for agents are fundamentally different from other assets in your environment. That’s why Defender is delivering purpose-built protections and detections to help defend against them. Defender is introducing runtime protection for Copilot Studio agents that automatically block prompt injection attacks in real time. In addition, we are announcing agent-specific threat detections for Copilot Studio and Microsoft Foundry agents coming soon. Defender automatically correlates these alerts with Microsoft’s industry-leading threat intelligence and cross-domain security signals to deliver richer, contextualized alerts and security incident views for the SOC analyst. Defender’s risk and threat signals are natively integrated into the new Microsoft Foundry Control Plane, giving development teams full observability and the ability to act directly from within their familiar environment. Finally, security analysts will be able to hunt across all agent telemetry in the Advanced Hunting experience in Defender, and the new Agent 365 SDK extends Defender’s visibility and hunting capabilities to third-party agents, starting with Genspark and Kasisto, giving security teams even more coverage across their AI landscape. To learn more about how you can harden the security posture of your agents and defend against threats, read the Microsoft Defender blog. Enable AI governance for regulatory compliance Global AI regulations like the EU AI Act and NIST AI RMF are evolving rapidly; yet, according to ISMG, 55% of leaders report lacking clarity on current and future AI regulatory requirements. 5 As enterprises adopt AI, they must ensure that their AI innovation aligns with global regulations and standards to avoid costly compliance gaps. Introducing new Microsoft Purview Compliance Manager capabilities to stay ahead of evolving AI regulations, now in preview Today, Purview Compliance Manager provides over 300 pre-built assessments for common industry, regional, and global standards and regulations. However, the pace of change for new AI regulations requires controls to be continuously re-evaluated and updated so that organizations can adapt to ongoing changes in regulations and stay compliant. To address this need, Compliance Manager now includes AI-powered regulatory templates. AI-powered regulatory templates enable real-time ingestion and analysis of global regulatory documents, allowing compliance teams to quickly adapt to changes as they happen. As regulations evolve, the updated regulatory documents can be uploaded to Compliance Manager, and the new requirements are automatically mapped to applicable recommended actions to implement controls across Microsoft Defender, Microsoft Entra, Microsoft Purview, Microsoft 365, and Microsoft Foundry. Automated actions by Compliance Manager further streamline governance, reduce manual workload, and strengthen regulatory accountability. Introducing expanded Microsoft Purview compliance capabilities for agents and AI apps now in preview Microsoft Purview now extends its compliance capabilities across agent-generated interactions, ensuring responsible use and regulatory alignment as AI becomes deeply embedded across business processes. New capabilities include expanded coverage for: Audit: Surface agent interactions, lifecycle events, and data usage with Purview Audit. Unified audit logs across user and agent activities, paired with traceability for every agent using an Entra Agent ID, support investigation, anomaly detection, and regulatory reporting. Communication Compliance: Detect prompts sent to agents and agent-generated responses containing inappropriate, unethical, or risky language, including attempts to manipulate agents into bypassing policies, generating risky content, or producing noncompliant outputs. When issues arise, data security admins get full context, including the prompt, the agent’s output, and relevant metadata, so they can investigate and take corrective action Data Lifecycle Management: Apply retention and deletion policies to agent-generated content and communication flows to automate lifecycle controls and reduce regulatory risk. Read about Microsoft Purview data security for agents to learn more. Finally, we are extending our data security, threat protection, and identity access capabilities to third-party apps and agents via the network. Advancing Microsoft Entra Internet Access Secure Web + AI Gateway - extend runtime protections to the network, now in preview Microsoft Entra Internet Access, part of the Microsoft Entra Suite, has new capabilities to secure access to and usage of GenAI at the network level, marking a transition from Secure Web Gateway to Secure Web and AI Gateway. Enterprises can accelerate GenAI adoption while maintaining compliance and reducing risk, empowering employees to experiment with new AI tools safely. The new capabilities include: Prompt injection protection which blocks malicious prompts in real time by extending Azure AI Prompt Shields to the network layer. Network file filtering which extends Microsoft Purview to inspect files in transit and prevents regulated or confidential data from being uploaded to unsanctioned AI services. Shadow AI Detection that provides visibility into unsanctioned AI applications through Cloud Application Analytics and Defender for Cloud Apps risk scoring, empowering security teams to monitor usage trends, apply Conditional Access, or block high-risk apps instantly. Unsanctioned MCP server blocking prevents access to MCP servers from unauthorized agents. With these controls, you can accelerate GenAI adoption while maintaining compliance and reducing risk, so employees can experiment with new AI tools safely. Read the Microsoft Entra blog to learn more. As AI transforms the enterprise, security must evolve to meet new challenges—spanning agent sprawl, data protection, emerging threats, and regulatory compliance. Our approach is to empower IT, developers, and security leaders with purpose-built innovations like Agent 365, Foundry Control Plane, and the Security Dashboard for AI. These solutions bring observability, governance, and protection to every layer of the AI stack, leveraging familiar tools and integrated controls across Microsoft Defender, Microsoft Entra, and Microsoft Purview. The future of security is ambient, autonomous, and deeply woven into the fabric of how we build, deploy, and govern AI systems. Explore additional resources Learn more about Security for AI solutions on our webpage Learn more about Microsoft Agent 365 Learn more about Microsoft Entra Agent ID Get started with Microsoft 365 Copilot Get started with Microsoft Copilot Studio Get started with Microsoft Foundry Get started with Microsoft Defender for Cloud Get started with Microsoft Entra Get started with Microsoft Purview Get started with Microsoft Purview Compliance Manager Sign up for a free Microsoft 365 E5 Security Trial and Microsoft Purview Trial 1 Bedrock Security, 2025 Data Security Confidence Index, published Mar 17, 2025. 2 AuditBoard & Ascend2, Connected Risk Report 2024; as cited by MIT Sloan Management Review, Spring 2025. 3 KPMG AI Quarterly Pulse Survey | Q3 2025. September 2025. n= 130 U.S.-based C-suite and business leaders representing organizations with annual revenue of $1 billion or more 4 First Annual Generative AI study: Business Rewards vs. Security Risks, , Q3 2023, ISMG, N=400 5 First Annual Generative AI study: Business Rewards vs. Security Risks, Q3 2023, ISMG, N=400Secure and govern AI apps and agents with Microsoft Purview
The Microsoft Purview family is here to help you secure and govern data across third party IaaS and Saas, multi-platform data environment, while helping you meet compliance requirements you may be subject to. Purview brings simplicity with a comprehensive set of solutions built on a platform of shared capabilities, that helps keep your most important asset, data, safe. With the introduction of AI technology, Purview also expanded its data coverage to include discovering, protecting, and governing the interactions of AI apps and agents, such as Microsoft Copilots like Microsoft 365 Copilot and Security Copilot, Enterprise built AI apps like Chat GPT enterprise, and other consumer AI apps like DeepSeek, accessed through the browser. To help you view, investigate interactions with all those AI apps, and to create and manage policies to secure and govern them in one centralized place, we have launched Purview Data Security Posture Management (DSPM) for AI. You can learn more about DSPM for AI here with short video walkthroughs: Learn how Microsoft Purview Data Security Posture Management (DSPM) for AI provides data security and compliance protections for Copilots and other generative AI apps | Microsoft Learn Purview capabilities for AI apps and agents To understand our current set of capabilities within Purview to discover, protect, and govern various AI apps and agents, please refer to our Learn doc here: Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn Here is a quick reference guide for the capabilities available today: Note that currently, DLP for Copilot and adhering to sensitivity label are currently designed to protect content in Microsoft 365. Thus, Security Copilot and Copilot in Fabric, along with Copilot studio custom agents that do not use Microsoft 365 as a content source, do not have these features available. Please see list of AI sites supported by Microsoft Purview DSPM for AI here Conclusion Microsoft Purview can help you discover, protect, and govern the prompts and responses from AI applications in Microsoft Copilot experiences, Enterprise AI apps, and other AI apps through its data security and data compliance solutions, while allowing you to view, investigate, and manage interactions in one centralized place in DSPM for AI. Follow up reading Check out the deployment guides for DSPM for AI How to deploy DSPM for AI - https://aka.ms/DSPMforAI/deploy How to use DSPM for AI data risk assessment to address oversharing - https://aka.ms/dspmforai/oversharing Address oversharing concerns with Microsoft 365 blueprint - aka.ms/Copilot/Oversharing Explore the Purview SDK Microsoft Purview SDK Public Preview | Microsoft Community Hub (blog) Microsoft Purview documentation - purview-sdk | Microsoft Learn Build secure and compliant AI applications with Microsoft Purview (video) References for DSPM for AI Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn Considerations for deploying Microsoft Purview AI Hub and data security and compliance protections for Microsoft 365 Copilot and Microsoft Copilot | Microsoft Learn Block Users From Sharing Sensitive Information to Unmanaged AI Apps Via Edge on Managed Devices (preview) | Microsoft Learn as part of Scenario 7 of Create and deploy a data loss prevention policy | Microsoft Learn Commonly used properties in Copilot audit logs - Audit logs for Copilot and AI activities | Microsoft Learn Supported AI sites by Microsoft Purview for data security and compliance protections | Microsoft Learn Where Copilot usage data is stored and how you can audit it - Microsoft 365 Copilot data protection and auditing architecture | Microsoft Learn Downloadable whitepaper: Data Security for AI Adoption | Microsoft Explore the roadmap for DSPM for AI Public roadmap for DSPM for AI - Microsoft 365 Roadmap | Microsoft 365PMPurMicrosoft named an overall leader in KuppingerCole Leadership Compass for Generative AI Defense
Today, we are proud to share that Microsoft has been recognized as an overall leader in the KuppingerCole Leadership Compass for Generative AI Defense (GAD), an independent report from a leading European analyst firm. This recognition reinforces the work we’ve been doing to deliver enterprise-ready Security and Governance capabilities for AI, and reflects our commitment to helping customers secure AI at scale. Figure 1: KuppingerCole Generative AI Defense Leadership Compass chart highlighting Microsoft as the top Overall Leader, with other vendors including Palo Alto Networks, Cisco, F5, NeuralTrust, IBM, and others positioned as challengers or followers. At Microsoft, our approach to Generative AI Defense is grounded in a simple principle: security is a core primitive which must be embedded everywhere – across AI apps, agents, platforms, and infrastructure. Microsoft delivers this through a comprehensive and integrated approach that provides visibility, protection, and governance across the full AI stack. Our capabilities and controls help organizations address the most pressing challenges CISOs and security leaders face as AI adoption accelerates. We protect against agent sprawl and resource access with identity-first controls like Entra Agent ID and lifecycle governance, alongside network-layer controls that surface hidden shadow AI risks. We prevent sensitive data leaks with Microsoft Purview’s real-time data loss prevention, classification, and inference safeguards. We defend against new AI threats and vulnerabilities with Microsoft Defender’s runtime protection, posture management, and AI-driven red teaming. Finally, we help organizations stay in compliance with evolving AI regulations with built-in support for frameworks like the EU AI Act, NIST AI RMF, and ISO 42001, so teams can confidently innovate while meeting governance requirements. Foundational security is also built into Microsoft 365 Copilot and Microsoft Foundry, with identity controls, data safeguards, threat protection, and compliance integrated from the start. Guidance for Security Leaders and CISOs For CISOs enabling their organizations to accelerate their AI transformation journeys, the following priorities are essential to building a secure, governed, and scalable AI foundation. This guidance reflects a combination of key recommendations from KuppingerCole and Microsoft’s perspective on how we deliver on those recommendations: CISO Guidance What It Means How Microsoft Delivers Map AI usage across the enterprise Establish full visibility into every AI tool, agent, and model in use to understand risk exposure and security requirements. Agent365 provides a unified registry for AI agents with full lifecycle governance. Foundry Control Plane gives developers full observability and governance of their entire AI fleet across clouds. And with integrated security signals and controls from signals from Microsoft Entra, Purview, and Defender, Security Dashboard for AI brings posture, configuration, and risk insights together into a single, comprehensive view of your AI estate. Adopt identity-first controls Manage agents and other identities with the same rigor as privileged accounts, enforcing strong authentication, least privilege, and continuous monitoring. Microsoft Entra Agent ID assigns secure, unique identities to agents, applies conditional access policies, and enforces lifecycle controls to prevent agent sprawl and eliminate over-permissioned access. Enforce data governance and DLP for AI interactions Protect sensitive information to both inputs and outputs, applying consistent policies that align with evolving regulatory and compliance requirements. Microsoft Purview delivers real-time DLP for AI prompts and outputs, preserves sensitivity label, applies insider risk controls for agents, and provides compliance templates aligned with the EU AI Act, NIST AI RMF, ISO 42001, and more. Build a layered GAD architecture Combine prompt security, model integrity monitoring, output filtering, and runtime protection instead of relying on any single control. Microsoft Defender provides runtime protection for agents, correlates threat signals, including those from Microsoft Foundry’s Prompt Shields, with threat intelligence, and strengthens security through posture management and attack path analysis for AI workloads. Prioritize integrated, enterprise-ready solutions Choose platforms that unify policy enforcement, monitoring, and compliance across environments to reduce operational complexity and improve security outcomes. Microsoft Security integrates capabilities across Microsoft Entra, Purview, and Defender, deeply integrated with Microsoft 365, Copilot Studio, and Foundry, providing centralized governance, consistent policy enforcement, and operationalized oversight across your AI ecosystem. What differentiates Microsoft is the comprehensive set of security capabilities woven into the Microsoft AI agents, apps, and platform. Shared capabilities across Microsoft Entra, Purview, and Defender deliver consistent protection for IT, developers, and security teams, while tools such as Microsoft Agent 365, Foundry Control Plane, and Security Dashboard for AI integrate security and observability directly where AI applications and agents are built, deployed, and governed. Together, these capabilities, including our latest capabilities from Ignite, help organizations deploy AI securely, reduce operational complexity, and strengthen trust across their environment. Closing Thoughts Agentic AI is transforming how organizations work, and with that shift comes a new security frontier. As AI becomes embedded across business processes, taking a proactive approach to defense-in-depth, governance, and integrated AI security is essential. Organizations that act early will be better positioned to innovate confidently and maintain trust. At Microsoft, we recognize that securing AI requires purpose-built, enterprise-ready protection. With Microsoft Security for AI, organizations can safeguard sensitive data, protect against emerging AI threats, detect and remediate vulnerabilities, maintain compliance with evolving regulations, and strengthen trust as AI adoption accelerates. In this rapidly evolving landscape, AI defense is not optional, it is foundational to protecting innovation and ensuring enterprise readiness. Explore more Read the full KuppingerCole Leadership Compass on Generative AI Defense (GAD) Learn more about Security for AI Read our latest Security for AI blog to learn more about our latest capabilities Visit the Microsoft Security site for the latest innovations.Transforming Security Analysis into a Repeatable, Auditable, and Agentic Workflow
Author(s): Animesh Jain, Vinay Yadav Shaped by investigations into the strategic question of what it takes for Windows to achieve world-leading security—and the practical engineering needed to explore agentic workflows at scale and their interfaces. Our work in Windows Servicing & Delivery (WSD) is shaped by two guiding prompts from leadership: "what does it take for Windows to achieve world-leading security", and "how do we responsibly integrate AI into systems as large and high-churn as Windows?". Reasoning models open new possibilities on both fronts. As we continue experimenting, one issue repeatedly surfaces as the bottleneck for scalable security assurance: variant vulnerabilities. They are subtle, recurring, and easy to miss—making them an ideal proving ground for the enterprise-grade workflow we present here. Security Analysis at Windows Scale Security analysis shouldn’t be an afterthought—it should be a continuous, auditable, and intelligence-driven process built directly into the engineering workflow. This work introduces an agentic security analysis pipeline that uses reasoning models and tool-based agents to detect variant vulnerabilities across large, fast-changing codebases. By combining automation with explainability, it transforms security validation from a manual, point-in-time task into a repeatable and trustworthy part of every build. Why are variants the hard part? Security flaws rarely occur in isolation. Once a vulnerability is fixed, its logical or structural pattern often reappears elsewhere in the codebase—hidden behind different variables, layers, or call paths. These recurring patterns are variants—the quiet echoes of known issues that can persist across millions of lines of code. Finding them manually is slow, repetitive, and incomplete. As engineering velocity increases, so does the likelihood of variant drift—the same vulnerability class re-emerging in a slightly altered form. Each missed variant carries a downstream cost: regression, re-servicing, or, in the worst cases, re-exploitation. Modern large systems like Windows are too large, too interconnected, and ship too frequently for manual vulnerability discovery to keep pace. Traditional static analyzers and deterministic class-based scanners struggle to generalize these patterns or create too much noise, while targeted fuzzing campaigns often fail to trigger the nuanced runtime conditions that expose them. To stay ahead, automation must evolve. We need systems that reason—not just scan—systems capable of understanding relationships between code regions and applying logical analogies instead of brute-force enumeration. Reasoning Models: A Turning Point in Security Research Recent advances in AI reasoning have demonstrated that large language models can uncover vulnerabilities previously missed by deterministic tools. For example, Google’s Big Sleep agent surfaced an exploitable SQLite flaw (CVE-2025-6965) that bypassed traditional fuzzers due to configuration-sensitive logic. Similarly, an o-series reasoning model helped identify a critical Linux SMB logoff use-after-free (CVE-2025-37899), proving that reasoning-driven automation can detect complex, context-dependent flaws in mature kernel code. These breakthroughs show what’s possible when systems can form, test, and refine hypotheses about software behavior. The challenge now is scaling that intelligence into repeatable, auditable, enterprise-grade workflows—where every result is traceable, reviewable, and integrated into the developer’s daily workflow. A Framework for Agentic Security Analysis To address this challenge, we’ve developed an agentic security analysis framework that applies reasoning models within structured, enterprise grade workflow pattern. It combines large language model agents, specialized analysis tools, and structured artifact generation to make vulnerability discovery continuous, explainable, and auditable. It is interfaced as a first-class Azure DevOps (ADO) pipeline and can be integrated natively into enterprise CI/CD processes. For security analysis, it continuously reasons over large, evolving codebases to identify and validate variant vulnerabilities earlier in the release cycle. Together, these components form a repeatable workflow that helps surface variant patterns with greater consistency and clarity. Core Technical Pillars Scale – Autonomous Code Reasoning Long-context models extend analysis across massive, evolving codebases. They infer analogies, relationships, and behavioral patterns between code regions, enabling scalable reasoning that adapts as systems grow. Tool–Agent Collaboration Specialized agents coordinate to perform semantic search, graph traversal, and both static and dynamic interpretation. This distributed reasoning approach ensures resilience and precision across diverse enterprise environments. Structured Artifact Generation Every step produces versioned, auditable artifacts that document the reasoning process. These artifacts help provide reproducibility, compliance, and transparency—critical for enterprise governance and regulated industries. Together, these pillars enable scalable, explainable, and repeatable vulnerability discovery across large software ecosystems such as Windows. Every stage—from reasoning to validation—is logged and traceable, designed to make each discovery reproducible and reviewable. Inside the framework Agent-Led, Human-Reviewed The system is agent-led from start to finish and human-reviewed only at decision boundaries. Agents form hypotheses from recent fixes or vulnerability classes, test them against context, perform validation passes, and generate evidence-backed reports for reviewer confirmation. The workflow mirrors how seasoned security engineers operate—only faster and continuously. n tasks based on templatized prompts. Tool Specialists as Agents Each analytical tool functions as a domain-specific agent—performing semantic search, file inspection, or function-graph traversal. These agents collaborate through structured orchestration, maintaining specialization without sacrificing coherence. Agentic Patterns and Orchestration The framework employs reusable reasoning patterns—reflective reasoning, actor–validator loops, and parallel tool dialogues—for accuracy and scale. A central conductor agent governs task coordination, context flow, and artifact persistence across runs. Auditability Through Artifacts Every investigation yields a transparent chain of artifacts: Analysis Notes – summarize candidate issues Critique Notes – document reasoning and counter-evidence Synthesis Reports – provide developer-ready summaries, diffs, call graphs, and exploitability insights Agentic Conversation Logs - provides conversation logs so developers can backtrack on reasoning and get more context This structure makes each discovery fully traceable and auditable. CI/CD-Native Integration The interface operates as a first-class Azure DevOps pipeline, attachable to pull requests, nightly builds, or release triggers. Each run publishes versioned artifacts and validation notes directly into the developer workflow—making reasoning-driven security a seamless part of software delivery. What It Can Do Today Seeded Variant Hunts: Start from a recent fix or known pattern to enumerate analogous cases, analyze helper functions, and test reachability. Evidence-First Reporting: Every finding includes reproducible evidence—code snippets, diffs, and caller graphs—delivered within the PR or work item. Scalable Coverage: Runs across servicing branches, producing consistent and auditable validation artifacts. Improved Precision: A reasoning-based validation pass has significantly reduced false positives in internal testing. Case Study: CVE-2025-55325 During a sweep of “*_DEFAULTS” deserializers, the agentic pipeline independently identified GetPoolDefaults trusting a user-controlled size field and copying that many bytes from a caller buffer. The missing runtime bounds check—guarded only by an assertion in debug builds—enabled a potential read access violation and information disclosure. The mitigation mirrored a hardened sibling helper: enforcing runtime bounds on Size versus BytesAvailable/Version before allocation and copy. The finding was later validated by the servicing teams, confirming it matched an issue already under active investigation—illustrating how the automated reasoning process can independently surface real-world vulnerabilities that align with expert analysis. Beyond Variant Analysis The underlying architecture of this framework extends naturally beyond variant detection: Net-new vulnerability discovery through cross-binary pattern matching Model-assisted fuzzing & static analysis orchestrated through CI/CD integration Regression detection via historical code comparisons Security Development Lifecycle (SDL) enforcement and reproducibility checks The agentic patterns and tooling can support net-new vulnerability discovery through cross-binary pattern matching, regression detection using historical code comparisons, reproducibility checks aligned with SDL requirements, and model-assisted fuzzing orchestrated through CI/CD processes. These capabilities open the door to applying reasoning-driven workflows across a broader range of security & validation tasks. The Road Ahead Looking ahead, this trajectory naturally leads toward autonomous cybersecurity pipelines powered by reasoning agents that apply reflective analysis, validation loops, and structured tool interactions to complex codebases. By structuring each step as an auditable artifact, the approach supports security & validation analysis that is both explainable and repeatable. These agents could help validate security posture, analyze historical and real-time signals, and detect anomalous patterns early in the lifecycle. References Google Cloud Blog – Big Sleep and AI-Assisted Vulnerability Discovery “A summer of security: empowering cyber defenders with AI.” https://blog.google/technology/safety-security/cybersecurity-updates-summer-2025 The Hacker News – Google AI ‘Big Sleep’ Stops Exploitation of Critical SQLite Flaw https://thehackernews.com/2025/07/google-ai-big-sleep-stops-exploitation.html NIST National Vulnerability Database – CVE-2025-6965 (SQLite) https://nvd.nist.gov/vuln/detail/CVE-2025-6965 Sean Heelan – “Reasoning Models and the ksmbd Use-After-Free” https://simonwillison.net/2025/May/24/sean-heelan The Cyber Express – AI Finds CVE-2025-37899 Zero-Day in Linux SMB Kernel https://thecyberexpress.com/cve-2025-37899-zero-day-in-linux-smb-kernel NIST National Vulnerability Database – CVE-2025-37899 (Linux SMB Use-After-Free) https://nvd.nist.gov/vuln/detail/CVE-2025-37899 NIST National Vulnerability Database – CVE-2025-55325 (Windows Storage Management Provider Buffer Over-read) https://nvd.nist.gov/vuln/detail/CVE-2025-55325 NVD Microsoft Security Response Center – Vulnerability Details for CVE-2025-55325 https://msrc.microsoft.com/update-guide/vulnerability/CVE-2025-55325Announcing New Microsoft Purview Capabilities to Protect GenAI Agents
As organizations accelerate their adoption of agentic AI, a new and urgent challenge is emerging: how to protect the rapidly growing number of agents—first-party, third-party, and custom-built—created and deployed across the enterprise. At Microsoft Ignite, we’re introducing major advancements in Microsoft Purview to support our customers in securing all their agents, wherever they operate. The New Reality: Agents Everywhere, Data Risks Amplified For many organizations, the pace of agent creation is outstripping traditional oversight. Developers, business units, and other information workers can spin up agents to automate tasks, analyze data, or interact with enterprise systems. This proliferation brings tremendous opportunity, but also a new level of risk. New agents can access sensitive information, trigger cascading actions of other agents, and operate outside direct human supervision. The anxiety is real: how do you protect every agent, even those you didn’t know existed? Data risks are especially critical in this new landscape. Agents can process and share sensitive information at scale, interact with external systems, and invoke other agents or large language models, multiplying the complexity and potential of data exposure. Unlike traditional apps, agents are dynamic, autonomous, and currently often invisible to standard security controls. The risk surface expands with every new agent, making comprehensive data protection not just a technical requirement, but a business imperative. Purview for Agent 365: Protections for a more complex agent world Therefore, this week, we are announcing Agent 365 (A365) as the control plane for agents, enabling organizations to confidently manage, secure, and govern AI agents—first-party, third-party, and custom-built—across the enterprise. With A365, teams can extend familiar Microsoft 365 tools and policies to agents, ensuring unified inventory, robust access controls, lifecycle management, and deep integration with productivity apps. That’s why we’re extending Microsoft Purview protections to A365, bringing enterprise-grade security, compliance, and risk management to every agent. Here’s what we’re introducing to make this possible: AI Observability in Data Security Posture Management: Organizations gain visibility, risk assessment, and guided remediation for agents across Microsoft environments. Note: While third-party agents are included in the inventory, assigned risk levels, risk patterns, and guided remediation currently apply to M365 Copilot agents, Copilot Studio, and Microsoft Foundry agents. AI Observability in Data Security Posture Management Agentic Risk in Insider Risk Management: New agent-specific risk indicators and behavioral analytics enable precise detection and policy enforcement. For example, organizations can now identify risky agent behaviors, such as unauthorized data access or unusual activity patterns, and take targeted action to mitigate threats. Data Loss Prevention (DLP) and Information Protection controls extended to agent actions: Purview DLP and Information Protection policies now extend to agents that operate autonomously, allowing these agents to inherit the same protections and organizational policies as users. For example, these built-in controls ensure AI agents don’t access or share sensitive data when accessing M365 data within apps, whether that means blocking access for agents to labeled files or preventing agents from sending external emails and Teams messages that contain sensitive data. Expanded governance via Communication Compliance, Audit, Data Lifecycle Management and eDiscovery: Organizations benefit from expanded proactive detection, secure retention, and policy-based governance for interactions between humans and agents. By including these protections in A365, organizations can apply Purview’s enterprise-grade security, compliance, and risk controls to every agent—making it simpler and safer for customers to deploy agents at scale. Learn more about the Agent 365 announcement. Extending Purview Controls for All Agents Not all agents in an organization will run under an A365 license, yet every agent still requires strong data security and compliance controls. For that reason, we are also adding the following Purview capabilities: Inline data protection for prompts and responses: Expanded DLP capabilities prevent the sharing of sensitive data or files between users and third-party agents (such as ChatGPT or Google Gemini in Agent mode) through inline data protection for the browser and network. Purview SDK embedded in Agent Framework SDK: Purview SDK embedded in Agent Framework SDK enables developers to seamlessly integrate enterprise-grade security, compliance, and governance into the AI agents they build. This integration enables automatic classification and protection of sensitive data, prevents data leaks and oversharing, and provides visibility and control for regulatory compliance—empowering organizations to confidently and securely adopt AI agents in complex environments. Embedding Security into the Foundry Development Pipeline We are also adding several Purview capabilities specifically available in Foundry: Purview integration with Foundry: Purview is now enabled within Foundry, allowing Foundry admins to activate Microsoft Purview on their subscription. Once enabled, interaction data from all apps and agents flows into Purview for centralized compliance, governance, and posture management of AI data. Azure AI Search honors Purview labels and policies: Azure AI Search now ingests Microsoft Purview sensitivity labels and enforces corresponding protection policies through built-in indexers (SharePoint, OneLake, Azure Blob, ADLS Gen2). This enables secure, policy-aligned search over enterprise data, enabling agentic RAG scenarios where only authorized documents are returned or sent to LLMs, preventing data oversharing and aligning with enterprise data protection standards. Communication Compliance for Foundry: New policies extend Communication Compliance capabilities to Foundry, allowing security admins to set organization-wide Communication Compliance policies for acceptable communication for interactions with Foundry-built apps and agents, supported by Microsoft’s Responsible AI Standard. In Foundry Control Plane, Foundry admins will be able to view any deviations from this policy. In addition, Purview admins will be able to review potentially risky AI interactions in Communication Compliance, enabling them to decide on appropriate next steps. Communication Compliance provides visibility to potential unethical or harmful agent interactions Automated AI Compliance Assessments: The new integration between Microsoft Purview Compliance Manager and Foundry delivers automated, real-time compliance for AI solutions. Organizations can quickly assess agents against global standards like the EU AI Act, NIST AI RMF, and ISO/IEC, with one-click assessments and live monitoring of critical controls such as fairness, safety, and transparency. This streamlined approach eliminates manual mapping, surfaces actionable insights, and helps AI systems remain audit-ready as they evolve. Strengthening Trust in Microsoft 365 Copilot And we’re not stopping there. We’re continuing to expand Purview’s protections for Microsoft 365 Copilot to help organizations provide real-time protection for sensitive data in and accelerate remediation of oversharing risks. New enhancements include: Item-level oversharing investigation and remediation: Data security admins can now use data risk assessments in DSPM to analyze user sharing links in SharePoint and OneDrive and take bulk actions such as applying sensitivity labels to shared files, requesting the site owner to review sharing links, or disabling the links entirely. These enhancements streamline risk management, reduce exposure, and give organizations greater control over sensitive data at scale. Expanding DLP for Microsoft 365 Copilot to safeguard sensitive prompts and prevent data leakage: This new real-time control applicable to M365 Copilot, Copilot Chat and Copilot agents, helps prevent data leaks and oversharing by detecting and blocking sensitive data based on SITs in prompts. By blocking the prompt, this also prevents sensitive data from being used for grounding in Microsoft 365 or the web. This expands on the existing capability to prevent sensitive files & emails from being accessed by Copilot based on sensitivity label. Data security and compliance admins need stronger controls for Copilot-related assets like Teams meeting recordings and transcripts. They want to identify recordings with sensitive data and delete them to reduce risk and free up storage. We are announcing two new capabilities to help: Priority cleanup for M365 Copilot assets: Enables admins to override existing retention policies and compliantly delete files, such as meetings recordings and transcripts created to support Copilot use. Priority cleanup is now generally available in Purview Data Lifecycle Management. On-demand classification now extends to meeting transcripts: Information Protection automatically classifies files when they’re created, accessed, or modified, identifying sensitive information in real time. On-demand classification brings the same discovery and classification to data-at-rest without requiring user interactions. We’ve now added meeting transcripts to that coverage. Once the sensitive data in meeting transcripts is discovered and classified, admins can apply DLP or Data Lifecycle Management (DLM) to protect sensitive data from being shared or exposed unintentionally. Honoring Purview data security controls when using Copilot Mode in Edge for Business: Microsoft Edge for Business now features Copilot Mode to empower users to accelerate their productivity through AI-assisted browsing. Copilot Mode honors existing Purview data protections, such as preventing summarization of sensitive content open in the browser. Additionally, Agent Mode can be enabled for multi-step agent workflows in the browser. These agentic workflows will honor the user’s existing DLP protections, such as endpoint DLP policies that prevent pasting of sensitive data to sensitive service domains. Collectively, these capabilities reinforce Purview as the enterprise standard for securing AI-powered productivity. They give organizations the protections they need to scale Copilot usage with confidence and control. Empowering Secure Agentic AI Adoption As agents become integral to enterprise operations, Purview’s expanded protections empower organizations to safely embrace agentic AI—maintaining control, trust, and accountability at every step. With unified data security and compliance, organizations can observe and assess agent risk, prevent oversharing and leakage, detect risky agent behavior, and take decisive control to turn agentic AI into a trusted engine for growth. To learn more about Agent 365, visit the Agent 365 website.Unlocking the Power of Microsoft Purview for ChatGPT Enterprise
In today's rapidly evolving technology landscape, data security and compliance are key. Microsoft Purview offers a robust solution for managing and securing interactions with AI based solutions. This integration not only enhances data governance but also ensures that sensitive information is handled with the appropriate controls. Let's dive into the benefits of this integration and outline the steps to integrate with ChatGPT Enterprise in specific. The integration works for Entra connected users on the ChatGPT workspace, if you have needs that goes beyond this, please tell us why and how it impacts you. Important update 1: Effective May 1, these capabilities require you to enable pay-as-you-go billing in your organization. Important update 2: From May 19, you are required to create a collection policy to ingest ChatGPT Enterprise information. In DSPM for AI you will find this one click process. Benefits of Integrating ChatGPT Enterprise with Microsoft Purview Enhanced Data Security: By integrating ChatGPT Enterprise with Microsoft Purview, organizations can ensure that interactions are securely captured and stored within their Microsoft 365 tenant. This includes user text prompts and AI app text responses, providing a comprehensive record of communications. Compliance and Governance: Microsoft Purview offers a range of compliance solutions, including Insider Risk Management, eDiscovery, Communication Compliance, and Data Lifecycle & Records Management. These tools help organizations meet regulatory requirements and manage data effectively. Customizable Detection: The integration allows for the detection of built in can custom classifiers for sensitive information, which can be customized to meet the specific needs of the organization. To help ensures that sensitive data is identified and protected. The audit data streams into Advanced Hunting and the Unified Audit events that can generate visualisations of trends and other insights. Seamless Integration: The ChatGPT Enterprise integration uses the Purview API to push data into Compliant Storage, ensuring that external data sources cannot access and push data directly. This provides an additional layer of security and control. Step-by-Step Guide to Setting Up the Integration 1. Get Object ID for the Purview account in Your Tenant: Go to portal.azure.com and search for "Microsoft Purview" in the search bar. Click on "Microsoft Purview accounts" from the search results. Select the Purview account you are using and copy the account name. Go to portal.azure.com and search for “Enterprise" in the search bar. Click on Enterprise applications. Remove the filter for Enterprise Applications Select All applications under manage, search for the name and copy the Object ID. 2. Assign Graph API Roles to Your Managed Identity Application: Assign Purview API roles to your managed identity application by connecting to MS Graph utilizing Cloud Shell in the Azure portal. Open a PowerShell window in portal.azure.com and run the command Connect-MgGraph. Authenticate and sign in to your account. Run the following cmdlet to get the ServicePrincipal ID for your organization for the Purview API app. (Get-MgServicePrincipal -Filter "AppId eq '9ec59623-ce40-4dc8-a635-ed0275b5d58a'").id This command provides the permission of Purview.ProcessConversationMessages.All to the Microsoft Purview Account allowing classification processing. Update the ObjectId to the one retrieved in step 1 for command and body parameter. Update the ResourceId to the ServicePrincipal ID retrieved in the last step. $bodyParam= @{ "PrincipalId"= "{ObjectID}" "ResourceId" = "{ResourceId}" "AppRoleId" = "{a4543e1f-6e5d-4ec9-a54a-f3b8c156163f}" } New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId '{ObjectId}' -BodyParameter $bodyParam It will look something like this from the command line We also need to add the permission for the application to read the user accounts to correctly map the ChatGPT Enterprise user with Entra accounts. First run the following command to get the ServicePrincipal ID for your organization for the GRAPH app. (Get-MgServicePrincipal -Filter "AppId eq '00000003-0000-0000-c000-000000000000'").id The following step adds the permission User.Read.All to the Purview application. Update the ObjectId with the one retrieved in step 1. Update the ResourceId with the ServicePrincipal ID retrieved in the last step. $bodyParam= @{ "PrincipalId"= "{ObjectID}" "ResourceId" = "{ResourceId}" "AppRoleId" = "{df021288-bdef-4463-88db-98f22de89214}" } New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId '{ObjectId}' -BodyParameter $bodyParam 3. Store the ChatGPT Enterprise API Key in Key Vault The steps for setting up Key vault integration for Data Map can be found here Create and manage credentials for scans in the Microsoft Purview Data Map | Microsoft Learn When setup you will see something like this in Key vault. 4. Integrate ChatGPT Enterprise Workspace to Purview: Create a new data source in Purview Data Map that connects to the ChatGPT Enterprise workspace. Go to purview.microsoft.com and select Data Map, search if you do not see it on the first screen. Select Data sources Select Register Search for ChatGPT Enterprise and select Provide your ChatGPT Enterprise ID Create the first scan by selecting Table view and filter on ChatGPT Add your key vault credentials to the scan Test the connection and once complete click continue When you click continue the following screen will show up, if everything is ok click Save and run. Validate the progress by clicking on the name, completion of the first full scan may take an extended period of time. Depending on size it may take more than 24h to complete. If you click on the scan name you expand to all the runs for that scan. When the scan completes you can start to make use of the DSPM for AI experience to review interactions with ChatGPT Enterprise. The mapping to the users is based on the ChatGPT Enterprise connection to Entra, with prompts and responses stored in the user's mailbox. 5. Review and Monitor Data: Please see this article for required permissions and guidance around Microsoft Purview Data Security Posture Management (DSPM) for AI, Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn Use Purview DSPM for AI analytics and Activity Explorer to review interactions and classifications. You can expand on prompts and responses in ChatGPT Enterprise 6. Microsoft Purview Communication Compliance Communication Compliance (here after CC) is a feature of Microsoft Purview that allows you to monitor and detect inappropriate or risky interactions with ChatGPT Enterprise. You can monitor and detect requests and responses that are inappropriate based on ML models, regular Sensitive Information Types, and other classifiers in Purview. This can help you identify Jailbreak and Prompt injection attacks and flag them to IRM and for case management. Detailed steps to configure CC policies and supported configurations can be found here. 7. Microsoft Purview Insider Risk Management We believe that Microsoft Purview Insider Risk Management (here after IRM) can serve a key role in protecting your AI workloads long term. With its adaptive protection capabilities, IRM dynamically adjusts user access based on evolving risk levels. In the event of heightened risk, IRM can enforce Data Loss Prevention (DLP) policies on sensitive content, apply tailored Entra Conditional Access policies, and initiate other necessary actions to effectively mitigate potential risks. This strategic approach will help you to apply more stringent policies where it matters avoiding a boil the ocean approach to allow your team to get started using AI. To get started use the signals that are available to you including CC signals to raise IRM tickets and enforce adaptive protection. You should create your own custom IRM policy for this. Do include Defender signals as well. Based on elevated risk you may select to block users from accessing certain assets such as ChatGPT Enterprise. Please see this article for more detail Block access for users with elevated insider risk - Microsoft Entra ID | Microsoft Learn. 8. eDiscovery eDiscovery of AI interactions is crucial for legal compliance, transparency, accountability, risk management, and data privacy protection. Many industries must preserve and discover electronic communications and interactions to meet regulatory requirements. Including AI interactions in eDiscovery ensures organizations comply with these obligations and preserves relevant evidence for litigation. This process also helps maintain trust by enabling the review of AI decisions and actions, demonstrating due diligence to regulators. Microsoft Purview eDiscovery solutions | Microsoft Learn 9. Data Lifecycle Management Microsoft Purview offers robust solutions to manage AI data from creation to deletion, including classification, retention, and secure disposal. This ensures that AI interactions are preserved and retrievable for audits, litigation, and compliance purposes. Please see this article for more information Automatically retain or delete content by using retention policies | Microsoft Learn. Closing By following these steps, organizations can leverage the full potential of Microsoft Purview to enhance the security and compliance of their ChatGPT Enterprise interactions. This integration not only provides peace of mind but also empowers organizations to manage their data more effectively. We are still in preview some of the features listed are not fully integrated, please reach out to us if you have any questions or if you have additional requirements.Secure Model Context Protocol (MCP) Implementation with Azure and Local Servers
Introduction The Model Context Protocol (MCP) enables AI systems to interact with external data sources and tools through a standardized interface. While powerful, MCP can introduce security risks in enterprise environments. This tutorial shows you how to implement MCP securely using local servers, Azure OpenAI with APIM, and proper authentication. Understanding MCP's Security Risks There are a couple of key security concerns to consider before implementing MCP: Data Exfiltration: External MCP servers could expose sensitive data. Unauthorized Access: Third-party services become potential security risks. Loss of Control: Unknown how external services handle your data. Compliance Issues: Difficulty meeting regulatory requirements with external dependencies. The solution? Keep everything local and controlled. Secure Architecture Before we dive into implementation, let's take a look at the overall architecture of our secure MCP solution: This architecture consists of three key components working together: Local MCP Server - Your custom tools run entirely within your local environment, reducing external exposure risks. Azure OpenAI + APIM Gateway - All AI requests are routed through Azure API Management with Microsoft Entra ID authentication, providing enterprise-grade security controls and compliance. Authenticated Proxy - A lightweight proxy service handles token management and request forwarding, ensuring seamless integration. One of the key benefits of this architecture is that no API key is required. Traditional implementations often require storing OpenAI API keys in configuration files, environment variables, or secrets management systems, creating potential security vulnerabilities. This approach uses Azure Managed Identity for backend authentication and Azure CLI credentials for client authentication, meaning no sensitive API keys are ever stored, logged, or exposed in your codebase. For more security, APIM and Azure OpenAI resources can be configured with IP restrictions or network rules to only accept traffic from certain sources. These configurations are available for most Azure resources and provide an additional layer of network-level security. This security-forward approach gives you the full power of MCP's tool integration capabilities while keeping your implementation completely under your control. How to Implement MCP Securely 1. Local MCP Server Implementation Building the MCP Server Let's start by creating a simple MCP server in .NET Core. 1. Create a web application dotnet new web -n McpServer 2.Add MCP packages dotnet add package ModelContextProtocol --prerelease dotnet add package ModelContextProtocol.AspNetCore --prerelease 3. Configure Program.cs var builder = WebApplication.CreateBuilder(args); builder.Services.AddMcpServer() .WithHttpTransport() .WithToolsFromAssembly(); var app = builder.Build(); app.MapMcp(); app.Run(); WithToolsFromAssembly() automatically discovers and registers tools from the current assembly. Look into the C# SDK for other ways to register tools for your use case. 4. Define Tools Now, we can define some tools that our MCP server can expose. here is a simple example for tools that echo input back to the client: using ModelContextProtocol.Server; using System.ComponentModel; namespace Tools; [McpServerToolType] public static class EchoTool { [McpServerTool] [Description("Echoes the input text back to the client in all capital letters.")] public static string EchoCaps(string input) { return new string(input.ToUpperInvariant()); } [McpServerTool] [Description("Echoes the input text back to the client in reverse.")] public static string ReverseEcho(string input) { return new string(input.Reverse().ToArray()); } } Key components of MCP tools are the McpServerToolType class decorator indicating that this class contains MCP tools, and the McpServerTool method decorator with a description that explains what the tool does. Alternative: STDIO Transport If you want to use STDIO transport instead of SSE (implemented here), check out this guide: Build a Model Context Protocol (MCP) Server in C# 2. Create a MCP Client with Cline Now that we have our MCP server set up with tools, we need a client that can discover and invoke these tools. For this implementation, we'll use Cline as our MCP client, configured to work through our secure Azure infrastructure. 1. Install Cline VS Code Extension Install the Cline extension in VS Code. 2. Deploy secure Azure OpenAI Endpoint with APIM Instead of connecting Cline directly to external AI services (which could expose the secure implementation to external bad actors), we will route through Azure API Management (APIM) for enterprise security. With this implementation, all requests go through Microsoft Entra ID and we use managed identity for all authentications. Quick Setup: Deploy the Azure OpenAI with APIM solution. Ensure your Azure OpenAI resources are configured to allow your APIM's managed identity to make calls. The APIM policy below uses managed identity authentication to connect to Azure OpenAI backends. Refer to the Azure OpenAI documentation on managed identity authentication for detailed setup instructions. 3. Configure APIM Policy After deploying APIM, configure the following policy to enable Azure AD token validation, managed identity authentication, and load balancing across multiple OpenAI backends: <!-- Azure API Management Policy for OpenAI Endpoint --> <!-- Implements Azure AD Token validation, managed identity authentication --> <!-- Supports round-robin load balancing across multiple OpenAI backends --> <!-- Requests with 'gpt-5' in the URL are routed to a single backend --> <!-- The client application ID '04b07795-8ddb-461a-bbee-02f9e1bf7b46' is the official Azure CLI app registration --> <!-- This policy allows requests authenticated by Azure CLI (az login) when the required claims are present --> <policies> <inbound> <!-- IP Allow List Fragment (external fragment for client IP restrictions) --> <include-fragment fragment-id="YourCompany-IPAllowList" /> <!-- Azure AD Token Validation for Azure CLI app ID --> <validate-azure-ad-token tenant-id="YOUR-TENANT-ID-HERE" header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized. Access token is missing or invalid."> <client-application-ids> <application-id>04b07795-8ddb-461a-bbee-02f9e1bf7b46</application-id> </client-application-ids> <audiences> <audience>api://YOUR-API-AUDIENCE-ID-HERE</audience> </audiences> <required-claims> <claim name="roles" match="any"> <value>YourApp.User</value> </claim> </required-claims> </validate-azure-ad-token> <!-- Acquire Managed Identity access token for backend authentication --> <authentication-managed-identity resource="https://cognitiveservices.azure.com" output-token-variable-name="managed-id-access-token" ignore-error="false" /> <!-- Set Authorization header for backend using the managed identity token --> <set-header name="Authorization" exists-action="override"> <value>@("Bearer " + (string)context.Variables["managed-id-access-token"])</value> </set-header> <!-- Check if URL contains 'gpt-5' and set backend accordingly --> <choose> <when condition="@(context.Request.Url.Path.ToLower().Contains("gpt-5"))"> <set-variable name="selected-backend-url" value="https://your-region1-oai.openai.azure.com/openai" /> </when> <otherwise> <cache-lookup-value key="backend-counter" variable-name="backend-counter" /> <choose> <when condition="@(context.Variables.ContainsKey("backend-counter") == false)"> <set-variable name="backend-counter" value="@(0)" /> </when> </choose> <set-variable name="current-backend-index" value="@((int)context.Variables["backend-counter"] % 7)" /> <choose> <when condition="@((int)context.Variables["current-backend-index"] == 0)"> <set-variable name="selected-backend-url" value="https://your-region1-oai.openai.azure.com/openai" /> </when> <when condition="@((int)context.Variables["current-backend-index"] == 1)"> <set-variable name="selected-backend-url" value="https://your-region2-oai.openai.azure.com/openai" /> </when> <when condition="@((int)context.Variables["current-backend-index"] == 2)"> <set-variable name="selected-backend-url" value="https://your-region3-oai.openai.azure.com/openai" /> </when> <when condition="@((int)context.Variables["current-backend-index"] == 3)"> <set-variable name="selected-backend-url" value="https://your-region4-oai.openai.azure.com/openai" /> </when> <when condition="@((int)context.Variables["current-backend-index"] == 4)"> <set-variable name="selected-backend-url" value="https://your-region5-oai.openai.azure.com/openai" /> </when> <when condition="@((int)context.Variables["current-backend-index"] == 5)"> <set-variable name="selected-backend-url" value="https://your-region6-oai.openai.azure.com/openai" /> </when> <when condition="@((int)context.Variables["current-backend-index"] == 6)"> <set-variable name="selected-backend-url" value="https://your-region7-oai.openai.azure.com/openai" /> </when> </choose> <set-variable name="next-counter" value="@(((int)context.Variables["backend-counter"] + 1) % 1000)" /> <cache-store-value key="backend-counter" value="@((int)context.Variables["next-counter"])" duration="300" /> </otherwise> </choose> <!-- Always set backend service using selected-backend-url variable --> <set-backend-service base-url="@((string)context.Variables["selected-backend-url"])" /> <!-- Inherit any base policies defined outside this section --> <base /> </inbound> <backend> <base /> </backend> <outbound> <base /> </outbound> <on-error> <base /> </on-error> </policies> This policy creates a secure gateway that validates Azure AD tokens from your local Azure CLI session, then uses APIM's managed identity to authenticate with Azure OpenAI backends, eliminating the need for API keys. It automatically load-balances requests across multiple Azure OpenAI regions using round-robin distribution for optimal performance. 4. Create Azure APIM proxy for Cline This FastAPI-based proxy forwards OpenAI-compatible API requests from Cline through APIM using Azure AD authentication via Azure CLI credentials, eliminating the need to store or manage OpenAI API keys. Prerequisites: Python 3.8 or higher Azure CLI (ensure az login has been run at least once) Ensure the user running the proxy script has appropriate Azure AD roles and permissions. This script uses Azure CLI credentials to obtain bearer tokens. Your user account must have the correct roles assigned and access to the target API audience configured in the APIM policy above. Quick setup for the proxy: Create this requirements.txt: fastapi uvicorn requests azure-identity Create this Python script for the proxy source code azure_proxy.py: import os import requests from fastapi import FastAPI, Request from fastapi.responses import StreamingResponse import uvicorn from azure.identity import AzureCliCredential # CONFIGURATION APIM_BASE_URL = <APIM BASE URL HERE> AZURE_SCOPE = <AZURE SCOPE HERE> PORT = int(os.environ.get("PORT", 8080)) app = FastAPI() credential = AzureCliCredential() # Use a single requests.Session for connection pooling from requests.adapters import HTTPAdapter session = requests.Session() session.mount("https://", HTTPAdapter(pool_connections=100, pool_maxsize=100)) import time _cached_token = None _cached_expiry = 0 def get_bearer_token(scope: str) -> str: """Get an access token using AzureCliCredential, caching until expiry is within 30 seconds.""" global _cached_token, _cached_expiry now = int(time.time()) if _cached_token and (_cached_expiry - now > 30): return _cached_token try: token_obj = credential.get_token(scope) _cached_token = token_obj.token _cached_expiry = token_obj.expires_on return _cached_token except Exception as e: raise RuntimeError(f"Could not get Azure access token: {e}") @app.api_route("/{path:path}", methods=["GET", "POST", "PUT", "PATCH", "DELETE", "OPTIONS"]) async def proxy(request: Request, path: str): # Assemble the destination URL (preserve trailing slash logic) dest_url = f"{APIM_BASE_URL.rstrip('/')}/{path}".rstrip("/") if request.url.query: dest_url += "?" + request.url.query # Get the Bearer token bearer_token = get_bearer_token(AZURE_SCOPE) # Prepare headers (copy all, overwrite Authorization) headers = dict(request.headers) headers["Authorization"] = f"Bearer {bearer_token}" headers.pop("host", None) # Read body body = await request.body() # Send the request to APIM using the pooled session resp = session.request( method=request.method, url=dest_url, headers=headers, data=body if body else None, stream=True, ) # Stream the response back to the client return StreamingResponse( resp.raw, status_code=resp.status_code, headers={k: v for k, v in resp.headers.items() if k.lower() != "transfer-encoding"}, ) if __name__ == "__main__": # Bind the app to 127.0.0.1 to avoid any Firewall updates uvicorn.run(app, host="127.0.0.1", port=PORT) Run the setup: pip install -r requirements.txt az login # Authenticate with Azure python azure_proxy.py Configure Cline to use the proxy: Using the OpenAI Compatible API Provider: Base URL: http://localhost:8080 API Key: <any random string> Model ID: <your Azure OpenAI deployment name> API Version: <your Azure OpenAI deployment version> The API key field is required by Cline but unused in our implementation - any random string works since authentication happens via Azure AD. 5. Configure Cline to listen to your MCP Server Now that we have both our MCP server running and Cline configured with secure OpenAI access, the final step is connecting them together. To enable Cline to discover and use your custom tools, navigate to your installed MCP servers on Cline, select Configure MCP Servers, and add in the configuration for your server: { "mcpServers": { "mcp-tools": { "autoApprove": [ "EchoCaps", "ReverseEcho", ], "disabled": false, "timeout": 60, "type": "sse", "url": "http://<your localhost url>/sse" } } } Now, you can use Cline's chat interface to interact with your secure MCP tools. Try asking Cline to use your custom tools - for example, "Can you echo 'Hello World' in capital letters?" and watch as it calls your local MCP server through the infrastructure you've built. Conclusion There you have it: A secure implementation of MCP that can be tailored to your specific use case. This approach gives you the power of MCP while maintaining enterprise security. You get: AI capabilities through secure Azure infrastructure. Custom tools that never leave your environment. Standard MCP interface for easy integration. Complete control over your data and tools. The key is keeping MCP servers local while routing AI requests through your secure Azure infrastructure. This way, you gain MCP's benefits without compromising security. Disclaimer While this tutorial provides a secure foundation for MCP implementation, organizations are responsible for configuring their Azure resources according to their specific security requirements and compliance standards. Ensure proper review of network rules, access policies, and authentication configurations before deploying to production environments. Resources MCP SDKs and Tools: MCP C# SDK MCP Python SDK Cline SDK Cline User Guide Azure OpenAI with APIM Azure API Management Network Security: Azure API Management - restrict caller IPs Azure API Management with an Azure virtual network Set up inbound private endpoint for Azure API Management Azure OpenAI and AI Services Network Security: Configure Virtual Networks for Azure AI services Securing Azure OpenAI inside a virtual network with private endpoints Add an Azure OpenAI network security perimeter az cognitiveservices account network-rule2.1KViews3likes2Comments