Microsoft Purview Data Risk Assessments: M365 vs Fabric
Why Data Risk Assessments matter more in the AI era

Generative AI changes the oversharing equation. It can surface data faster, to more people, with less friction, which means existing permission mistakes become more visible, more quickly. Microsoft Purview Data Risk Assessments are designed to identify and help you remediate oversharing risks before (or while) AI experiences like Copilot and analytics copilots accelerate access patterns.

Quick Decision Guide: When to Use Which?

Use Microsoft 365 Data Risk Assessments when:
- You're rolling out Microsoft 365 Copilot (or Copilot Chat/agents grounded in SharePoint/OneDrive).
- Your biggest exposure risk is oversharing: SharePoint sites, broad internal access, anonymous links, or unlabeled sensitive files.

Use Fabric Data Risk Assessments when:
- You're rolling out Copilot in Fabric and want visibility into sensitive data exposure in workspaces and Fabric items (Dashboard, Report, DataExploration, DataPipeline, KQLQuerySet, Lakehouse, Notebook, SQLAnalyticsEndpoint, and Warehouse).
- Your governance teams need to reduce risk in analytics estates without waiting for a full data governance program to mature.

Use both when (most enterprises):
- Data is spread across collaboration + analytics + AI interactions, and you want a unified posture and remediation motion under DSPM objectives.

At a high level:
- Microsoft 365 Data Risk Assessments focus on oversharing risk in SharePoint and OneDrive content, a primary readiness step for Microsoft 365 Copilot rollouts.
- Fabric Data Risk Assessments focus on oversharing risk in Microsoft Fabric workspaces and items, especially relevant for Copilot in Fabric and Power BI / Fabric artifacts.
- Both experiences show up under the newer DSPM (preview) guided objectives (and also remain available via DSPM for AI (classic) paths depending on your tenant rollout).

The Differences That Matter

NOTE: Assessments are a posture snapshot, not a live feed. Default assessments automatically re-scan every week, while custom assessments need to be manually recreated or duplicated to get a new snapshot. Use them to prioritize remediation and then re-run on a cadence to measure improvement.

Default scope & frequency
- M365: Surfaces the top 100 most active SharePoint sites (and OneDrives) weekly. Surfaces org-wide oversharing issues in M365 content.
- Fabric: The default data risk assessment automatically runs weekly for the top 100 Fabric workspaces based on usage in your organization. Focuses on oversharing in Fabric items.

Supported item types
- M365: SharePoint Online sites (incl. Teams files) and OneDrive documents. Focus on files/folders and their sharing links or permissions.
- Fabric: Fabric content: Dashboards, Power BI Reports, Data Explorations, Data Pipelines, KQL Querysets, Lakehouses, Notebooks, SQL Analytics Endpoints, Warehouses (as of preview).

Oversharing signals
- M365: Sensitive files that lack a sensitivity label and have broad or external access (e.g., "everyone" or public link sharing). Also flags sites with many active users (high exposure).
- Fabric: Unprotected sensitive data in Fabric workspaces. For example, reports or datasets with SITs but no sensitivity label, or Fabric items accessible to many users (or shared externally via link, if applicable).

Freshness and re-run behavior
- M365: Custom assessments can be rerun by duplicating the assessment to create a new run; results expire after a defined window (30 days), with "duplicate" used to re-run.
- Fabric: Custom assessments are also active for 30 days and can be duplicated to continue scanning the scoped list of workspaces. To rerun and see results after the 30-day expiration, use the duplicate option to create a new assessment with the same selections.

Setup Requirements
- M365: Deeper capabilities such as item-level scanning in custom assessments require an Entra app registration with specific Microsoft Graph application permissions plus admin consent for your tenant (a minimal token-acquisition sketch follows this comparison).
- Fabric: The Fabric default assessment requires one-time service principal setup and enabling service principal authentication for Fabric admin APIs in the Fabric admin portal.

Remediation options
- M365: Advanced: item-level scanning identifies potentially overshared files with direct remediation actions (e.g., remove sharing link, apply sensitivity label, notify owner) for SharePoint sites. Site-level actions are also available, such as enabling default sensitivity labels or configuring DLP policies.
- Fabric: Workspace-level Purview controls: e.g., create DLP policies, apply default sensitivity labels to new items, or review user access in Entra. (The actual permission changes in Fabric must be done by admins or owners in Fabric.)

User experience
- M365: Default assessment: site-level insights such as the site's label, how many times the site was accessed, and how many sensitive items were found in the site. Custom assessments: site-level and item-level insights if using item-level scan. Shows counts of sensitive files, share links (anyone links, externals), label coverage, last accessed info, and possible remediation actions.
- Fabric: Results organized by Fabric workspace. Shows counts of sensitive items found and how many are labeled vs unlabeled, broad access indicators (e.g., large viewer counts or "anyone link" usage for data in that workspace), plus recommended mitigations (DLP/sensitivity label policies).

Licensing
- M365: DSPM/DSPM for AI M365 features typically require Microsoft 365 E5 / E5 Compliance entitlements for relevant user-based scenarios.
- Fabric: Copilot in Fabric governance (starting with Copilot for Power BI) requires E5 licensing, enabling the Fabric tenant option "Allow Microsoft Purview to secure AI interactions" (default enabled), and pay-as-you-go billing for Purview management of those AI interactions.
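For the M365 item-level scanning prerequisite above, the moving parts are an app registration, Graph application permissions, and admin consent. The sketch below, a minimal example using the public MSAL for Python library, only verifies that plumbing by acquiring an app-only Microsoft Graph token and making a trivial call; it is not the assessment scan itself, and the tenant/app identifiers and the example Graph query are placeholders rather than values from this article.

```python
# pip install msal requests
import msal
import requests

TENANT_ID = "<tenant-guid>"          # placeholder
CLIENT_ID = "<app-registration-id>"  # placeholder
CLIENT_SECRET = "<client-secret>"    # prefer a certificate or managed identity in production

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)

# App-only token for Microsoft Graph; the effective permissions are whatever
# application permissions were granted admin consent on the app registration.
result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

if "access_token" in result:
    # Sanity check only: list a few SharePoint sites the app can see.
    resp = requests.get(
        "https://graph.microsoft.com/v1.0/sites?search=*",
        headers={"Authorization": f"Bearer {result['access_token']}"},
        timeout=30,
    )
    print(resp.status_code, resp.json().get("value", [])[:3])
else:
    print("Token request failed:", result.get("error_description"))
```

If the call above succeeds, the consent and permission setup is in place; the specific Graph permissions required for item-level scanning are listed in the Purview prerequisites documentation.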
Common pitfalls organizations face when securing Copilot in Fabric (and how to avoid them)

Pitfall: Not completing prerequisites for Purview to access the Fabric environment. Example: skipping the Entra app setup for Fabric assessment scanning.
How to avoid or mitigate: Follow the setup checklist and enable the Purview-Fabric integration. Without it, your Fabric workspaces won't be scanned for oversharing risk. Dedicate time pre-deployment to configure required roles, app registrations, and settings.

Pitfall: Fabric setup stalls due to missing admin ownership.
How to avoid or mitigate: Fabric assessments require Entra app admin + Fabric admin collaboration (service principal + tenant settings).

Pitfall: Skipping the "label strategy" and jumping straight to DLP.
How to avoid or mitigate: DLP is strongest when paired with a clear sensitivity labeling strategy; labels provide durable semantics across M365 + Fabric.

Pitfall: Fragmented labeling strategy. Example: Fabric assets have a different labeling schema than M365, or none at all.
How to avoid or mitigate: Align on a unified sensitivity label taxonomy across M365 and Fabric. Re-use labels in Fabric via Purview publishing so that classifications mean the same thing everywhere. This ensures consistent DLP and retention behavior across all data locations.

Pitfall: Overly broad DLP blocks disrupt users. Example: blocking every sharing action involving any internal data causes frustration.
How to avoid or mitigate: Take a risk-based approach. Start with monitor-only DLP rules to gather data, then refine. Focus on high-impact scenarios (for instance, blocking external sharing of highly sensitive data) rather than blanket rules. Use user education (via policy tips) to drive awareness alongside enforcement.

Pitfall: Ignoring the human element (owners & users). Example: IT implements controls but doesn't inform data owners or train users.
How to avoid or mitigate: Involve workspace owners and end-users early. For each high-risk workspace, engage the owner to verify whether data is truly sensitive and to help implement least-privilege access. Provide training to users about how Copilot uses data and why labeling and proper sharing are important. This fosters a culture of "shared responsibility" for AI security.

Pitfall: No ongoing plan after initial fixes. Example: a one-time scan-and-label effort with no follow-up, so new issues emerge unchecked.
How to avoid or mitigate: Operationalize the process and treat it as a continuous cycle. Leverage the "Operate" phase: schedule regular re-assessments (e.g., monthly Fabric risk scans) and quarterly reviews of Copilot-related incidents. Consider appointing a Copilot Governance Board or expanding an existing data governance committee to include AI oversight, so there's accountability for long-term upkeep.

Resources
- Prevent oversharing with data risk assessments (DSPM)
- Prerequisites for Fabric Data risk assessments
- Fabric: enable service principal authentication for admin APIs
- Beyond Visibility: new Purview DSPM experience
- Microsoft Purview licensing guidance
- New Purview pricing options for protecting AI apps and agents (PAYG context)

Acknowledgements
Sincere regards to Sunil Kadam, Principal Squad Leader, and Jenny Li, Product Lead, for their review and feedback.
Learn more about Microsoft Security Communities

In the last five years, Microsoft has increased the emphasis on community programs, specifically within the security, compliance, and management space. These communities fall into two categories: Public and Private (or NDA only). In this blog, we will share a breakdown of each community and how to join.

Introducing new security and compliance add-ons for Microsoft 365 Business Premium
Small and medium businesses (SMBs) are under pressure like never before. Cyber threats are evolving rapidly, and regulatory requirements are becoming increasingly complex. Microsoft 365 Business Premium is our productivity and security solution designed for SMBs (1–300 users). It includes Office apps, Teams, advanced security such as Microsoft Defender for Business, and device management — all in one cost-effective package.

Today, we're taking that a step further. We're excited to announce three new Microsoft 365 Business Premium add-ons designed to supercharge security and compliance. Tailored for medium-sized organizations, these add-ons bring enterprise-grade security, compliance, and identity protection to the Business Premium experience without the enterprise price tag.

Microsoft Defender Suite for Business Premium: $10/user/month

Cyberattacks are becoming more complex. Attackers are getting smarter. Microsoft Defender Suite provides end-to-end security to safeguard your business from identity attacks, device threats, email phishing, and risky cloud apps. It enables SMBs to reduce risks, respond faster, and maintain a strong security posture without adding complexity. It includes:

Protect your business from identity threats: Microsoft Entra ID P2 offers advanced security and governance features, including Microsoft Entra ID Protection and Microsoft Entra ID Governance. Microsoft Entra ID Protection offers risk-based conditional access that helps block identity attacks in real time using behavioral analytics and signals from both user risk and sign-in risk. It also enables SMBs to detect, investigate, and remediate potential identity-based risks using sophisticated machine learning and anomaly detection capabilities. With detailed reports and alerts, your business is notified of suspicious user activities and sign-in attempts, including scenarios like a password spray, where attackers try to gain unauthorized access to company employee accounts by trying a small number of commonly used passwords across many different accounts. ID Governance capabilities are also included to help automate workflows and processes that give users access to resources. For example, IT admins historically manage the onboarding process manually and generate repetitive user access requests for managers to review, which is time-consuming and inefficient. With ID Governance capabilities, pre-configured workflows automate employee onboarding, user access, and lifecycle management throughout their employment, streamlining the process and reducing onboarding time. Microsoft Defender for Identity includes dedicated sensors and connectors for common identity elements that offer visibility into your unique identity landscape and provide detailed posture recommendations, robust detections, and response actions. These powerful detections are then automatically enriched and correlated with data from other domains across Defender XDR for true incident-level visibility.

Keep your devices safe: Microsoft Defender for Endpoint Plan 2 offers industry-leading antimalware, device-based conditional access, comprehensive endpoint detection and response (EDR), advanced hunting with support for custom detections, and attack surface reduction capabilities powered by Secure Score.
Secure email and collaboration: With Microsoft Defender for Office 365 P2, you gain access to cyber-attack simulation training, which provides SMBs with a safe and controlled environment to simulate real-world cyber-attacks, helping to train employees to recognize phishing attempts. Additionally, automated response capabilities and post-breach investigations help reduce the time and resources required to identify and remediate potential security breaches. Detailed reports are also available that capture information on employees' URL clicks, internal and external email distribution, and more.

Protect your cloud apps: Microsoft Defender for Cloud Apps is a comprehensive, AI-powered software-as-a-service (SaaS) security solution that enables IT teams to identify and manage shadow IT and ensure that only approved applications are used. It protects against sophisticated SaaS-based attacks, OAuth attacks, and risky interactions with generative AI apps by combining SaaS app discovery, security posture management, app-to-app protection, and integrated threat protection. IT teams gain full visibility into their SaaS app landscape, understand the risks, and set up controls to manage the apps. SaaS security posture management quickly identifies app misconfigurations and provides remediation actions to reduce the attack surface.

Microsoft Purview Suite for Business Premium: $10/user/month

Protect against insider threats: Microsoft Purview Insider Risk Management uses behavioral analytics to detect risky activities, like an employee downloading large volumes of files before leaving the company. Privacy is built in, so you can act early without breaking employee trust.

Protect sensitive data wherever it goes: Microsoft Purview Information Protection classifies and labels sensitive data, so the right protections follow the data wherever it goes. Think of it as a 'security tag' that stays attached to a document whether it's stored in OneDrive, shared in Teams, or emailed outside the company. Policies can be set based on the 'tag' to prevent data oversharing, ensuring sensitive files are only accessible to the right people. Microsoft Purview Data Loss Prevention (DLP) works in the background to stop sensitive information, like credit card numbers or health data, from being accidentally shared with unauthorized people. Microsoft Purview Message Encryption adds another layer by making sure email content stays private, even when sent outside the organization. Microsoft Purview Customer Key gives organizations control of their own encryption keys, helping meet strict regulatory requirements.

Ensure data privacy and compliant communications: Microsoft Purview Communication Compliance monitors and flags inappropriate or risky communications to protect against policy and compliance violations.

Protect AI interactions: Microsoft Purview Data Security Posture Management (DSPM) for AI provides visibility into how AI interacts with sensitive data, helping detect oversharing, risky prompts, and unethical behavior. It monitors Copilot and third-party AI usage with real-time alerts, policy enforcement, and risk scoring.

Manage information through its lifecycle: Microsoft Purview Records and Data Lifecycle Management helps businesses meet compliance obligations by applying policies that enable automatic retention or deletion of data.

Stay investigation-ready: Microsoft Purview eDiscovery (Premium) makes it easier to respond to internal investigations, legal holds, or compliance reviews.
Instead of juggling multiple systems, you can search, place holds, and export information in one place — ensuring legal and compliance teams work efficiently. Microsoft Purview Audit (Premium) provides deeper audit logs and analytics to trace activity like file access, email reads, or user actions. This level of detail is critical for incident response and forensic investigations, helping SMBs maintain regulatory readiness and customer trust.

Simplify compliance management: Microsoft Purview Compliance Manager helps track regulatory requirements, assess risk, and manage improvement actions, all in one dashboard tailored for SMBs. Together, these capabilities help SMBs operate with the same level of compliance and data protection as large enterprises, but simplified for smaller teams and tighter budgets.

Microsoft Defender and Purview Suites for Business Premium: $15/user/month

The new Microsoft Defender and Purview Suites unite the full capabilities of Microsoft Defender and Purview into a single, cost-effective package. This all-in-one solution delivers comprehensive security, compliance, and data protection, while helping SMB customers unlock up to 68% savings compared to buying the products separately, making it easier than ever to safeguard your organization without compromising on features or budget.

FAQ

Q: When will these new add-ons be available for purchase?
A: They will be available for purchase as add-ons to Business Premium in September 2025.

Q: How can I purchase?
A: You can purchase these as add-ons to your Business Premium subscription through the Microsoft Security for SMBs website or through your Partner.

Q: Are there any seat limits for the add-on offers?
A: Yes. Customers can purchase a mix of add-on offers, but the total number of seats across all add-ons is limited to 300 per customer.

Q: Does Microsoft 365 Business Premium plus Microsoft Defender Suite allow mixed licensing for endpoint security solutions?
A: Microsoft Defender for Business does not support mixed licensing, so a tenant with Defender for Business (included in Microsoft 365 Business Premium) along with Defender for Endpoint Plan 2 (included in Microsoft 365 Security) will default to Defender for Business. For example, if you have 80 users licensed for Microsoft 365 Business Premium and you've added Microsoft Defender Suite for 30 of those users, the experience for all users will default to Defender for Business. If you would like to change that to the Defender for Endpoint Plan 2 experience, you should license all users for Defender for Endpoint Plan 2 (either standalone or through Microsoft Defender Suite) and then contact Microsoft Support to request the switch for your tenant. You can learn more here.

Q: Can customers who purchased the E5 Security Suite as an add-on to Microsoft 365 Business Premium transition to the new Defender Suite starting from the October billing cycle?
A: Yes. Customers currently using the Microsoft 365 E5 Security add-on with Microsoft 365 Business Premium are eligible to transition to the new Defender Suite beginning with the October billing cycle. For detailed guidance, please refer to the guidelines here.

Q: As a Partner, how do I build Managed Detection and Response (MDR) services with MDB?
A: For partners or customers looking to build their own security operations center (SOC) with MDR, Defender for Business supports streaming device events (device file, registry, network, logon events, and more) to Azure Event Hubs, Azure Storage, and Microsoft Sentinel to support advanced hunting and attack detection. If you are using the streaming API for the first time, you can find step-by-step instructions in the Microsoft 365 Streaming API Guide on configuring the Microsoft 365 Streaming API to stream events to your Azure Event Hubs or to your Azure Storage account.

To learn more about Microsoft Security solutions for SMBs, you can visit our website.
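To illustrate the Event Hubs side of that streaming setup, here is a minimal sketch using the public azure-eventhub Python SDK to read events from an Event Hub. The connection string and hub name are placeholders, and the schema of the streamed Defender events depends on the event types you select in the streaming configuration; a production consumer would also add a checkpoint store.

```python
# pip install azure-eventhub
from azure.eventhub import EventHubConsumerClient

CONNECTION_STR = "<event-hubs-namespace-connection-string>"  # placeholder
EVENTHUB_NAME = "<event-hub-name>"                           # placeholder

def on_event(partition_context, event):
    # Each streamed record arrives as a JSON payload; print a short preview.
    print(partition_context.partition_id, event.body_as_str()[:200])

client = EventHubConsumerClient.from_connection_string(
    CONNECTION_STR,
    consumer_group="$Default",
    eventhub_name=EVENTHUB_NAME,
)

with client:
    # Blocks and invokes on_event for every event streamed to the hub.
    client.receive(on_event=on_event, starting_position="-1")  # "-1": read from the beginning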
Simplifying Code Signing for Windows Apps: Artifact Signing (GA)

Trusted Signing is now Artifact Signing—and it's officially Generally Available! Artifact Signing is a fully managed, end-to-end code signing service that makes it easier than ever for Windows application developers to sign their apps securely and efficiently. As Artifact Signing rebrands, customers will see changes over the coming weeks. Please refer to our Learn docs for the most updated information.

What is Artifact Signing?

Code signing has traditionally been a complex and manual process. Managing certificates, securing keys, and integrating signing into build pipelines can slow teams down and introduce risk. Artifact Signing changes that by offering a fully managed, end-to-end solution that automates certificate management, enforces strong security controls, and integrates seamlessly with your existing developer tools. With zero-touch certificate management, verified identity, role-based access control, and support for multiple trust models, Artifact Signing makes it easier than ever to build and distribute secure Windows applications. Whether you're shipping consumer apps or internal tools, Artifact Signing helps you deliver software that's secure.

Security Made Simple

Zero-Touch Certificate Management
No more manual certificate handling. The service provides "zero-touch" certificate management, meaning it handles the creation, protection, and even automatic rotation of code signing certificates on your behalf. These certificates are short-lived and auto-renewed behind the scenes, giving you tighter control, faster revocation when needed, and eliminating the risks associated with long-lived certs. Your signing reputation isn't tied to a single certificate. Instead, it's anchored to your verified identity in Azure, and every signature reflects that verified identity.

Verified Identity
Identity validation with Artifact Signing ensures your app's digital signature displays accurate and verified publisher information. Once validated, your identity details, such as your individual or organization name, are included in the certificate. This means your signed apps will show a verified publisher name, not the dreaded "Unknown Publisher" warning. The entire validation process happens in the Azure portal. You simply submit your individual or organization details and, in some cases, upload supporting documents like business registration papers. Most validations are completed within a few business days, and once approved, you're ready to start signing your apps immediately.

(Screenshot: organization validation page)

Secure and Controlled Signing (RBAC)
Artifact Signing enforces Azure's Role-Based Access Control (RBAC) to secure signing activities. You can assign specific Azure roles to accounts or CI agents that use your Artifact Signing resource, ensuring only authorized developers or build pipelines can initiate signing operations. This tight access control helps prevent unauthorized or rogue signatures.

Full Telemetry and Audit Logs
Every signing request is tracked. You can see what was signed, when, and by whom in the Azure portal. This logging not only helps with compliance and auditing needs but also enables fast remediation if an issue arises. For example, if you discover a particular signing certificate was used in error or compromised, you can quickly revoke it directly from the portal. The short-lived nature of certificates in Artifact Signing further limits the window of any potential misuse.
Artifact Signing gives you enterprise-grade security controls out of the box: strong protection of keys, fine-grained access control, and visibility. For developers and companies concerned about supply chain security, this dramatically reduces risk compared to handling signing keys manually.

Built for Developers

Artifact Signing was built to slot directly into developers' existing workflows. You don't need to overhaul how you build or release software, just plug Artifact Signing into your toolchain:

- GitHub Actions & Azure DevOps: The service includes first-class support for modern CI/CD. An official GitHub Action is available for easy integration into your workflow YAML, and Azure DevOps has tasks for pipelines. With these tools, every Windows app build can automatically sign binaries or installers—no manual steps required. Since signing credentials are managed in Azure, you avoid storing secrets in your repository.
- Visual Studio & MSBuild: Use the Artifact Signing client with SignTool to integrate signing into publish profiles or post-build steps. Once the Artifact Signing client is installed, Visual Studio or MSBuild can invoke SignTool as usual, with signatures routed through the Artifact Signing service.
- SignTool / CLI: Developers using scripts or custom build systems can continue using the familiar signtool.exe command. After a one-time setup, your existing SignTool commands will sign via the cloud service. The actual file signing on your build machine uses a digest signing approach: SignTool computes a hash of your file and sends that to the Artifact Signing service, which returns a signature. The file itself isn't uploaded, preserving confidentiality and speed. This way, integrating Artifact Signing can be as simple as adding a couple of lines to your build script to point SignTool at Azure (a sketch of such an invocation follows the trust model section below).
- PowerShell & SDK: For advanced automation or custom scenarios, Artifact Signing supports PowerShell modules and an SDK. These tools allow you to script signing operations, bulk-sign files, or integrate signing into specialized build systems.

The Right Trust for the Right Audience

Artifact Signing supports multiple trust models to suit different distribution scenarios. You can choose between Public Trust and Private Trust for your code signing, depending on your app's audience:

- Public Trust: This is the standard model for software intended for consumers. When you use Public Trust signing, the certificates come from a Microsoft CA that's part of the Microsoft Trusted Root Program. Apps signed under Public Trust are recognized by Windows as coming from a known publisher, enabling a smooth installation experience when security features such as Smart App Control and SmartScreen are enabled.
- Private Trust: This model is for internal or enterprise apps. These certificates aren't publicly trusted but are instead meant to work with Windows Defender Application Control (App Control for Business) policies. This is ideal for line-of-business applications, internal tools, or scenarios where you want to tightly control who trusts the app. Artifact Signing's Private Trust model is the modern, expanded evolution of Microsoft's older Device Guard Signing Service (DGSS), delivering the same ability to sign internal apps but with ease of access and expanded capabilities.
- Test Signing: Useful for development and testing. These certificates mimic real signatures but aren't publicly trusted, allowing you to validate your signing setup in non-production environments before releasing your app.
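As referenced in the SignTool / CLI item above, the sketch below shows what a scripted digest-signing invocation can look like, wrapped in Python for portability into a build script. The SignTool path, dlib path and name, timestamp URL, and metadata file contents are assumptions based on the Trusted Signing-era client and may differ under the Artifact Signing branding; verify every value against the current Learn docs before use.

```python
# Assumption-laden sketch: routes SignTool's file digest to the cloud signing
# service via the client dlib; no private key ever touches the build machine.
import subprocess

SIGNTOOL = r"C:\Program Files (x86)\Windows Kits\10\bin\10.0.22621.0\x64\signtool.exe"  # assumed path
DLIB = r"C:\tools\Microsoft.Trusted.Signing.Client\bin\x64\Azure.CodeSigning.Dlib.dll"  # assumed client dlib
METADATA = r"C:\tools\metadata.json"  # JSON naming the signing endpoint, account, and certificate profile

cmd = [
    SIGNTOOL, "sign",
    "/v",
    "/fd", "SHA256",                              # file digest algorithm
    "/tr", "http://timestamp.acs.microsoft.com",  # timestamp service (assumed URL)
    "/td", "SHA256",                              # timestamp digest algorithm
    "/dlib", DLIB,                                # hands the digest to the cloud service
    "/dmdf", METADATA,
    r".\dist\MyApp.exe",                          # artifact to sign (placeholder)
]
subprocess.run(cmd, check=True)
```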
Note on Expanded Scenario Support: Artifact Signing supports additional certificate profiles, including those for VBS enclaves and Private Trust CI Policies. In addition, there is a new preview feature for signing container images using the Notary v2 standard from the CNCF Notary project. This enables developers to sign Docker/OCI container images stored in Azure Container Registry using tools like the notation CLI, backed by Artifact Signing. Having all trust models in one service means you can manage all your signing needs in one place. Whether your code is destined for the world or just your organization, Artifact Signing makes it easy to ensure it is signed with an appropriate level of trust.

Misuse and Abuse Management

Artifact Signing is engineered with robust safeguards to counter certificate misuse and abuse. The signing platform employs active threat intelligence monitoring to continuously detect suspicious signing activity in real time. The service also emphasizes prevention: certificates are short-lived (renewed daily and valid for only 72 hours), which means any certificate used maliciously can be swiftly revoked without impacting software signed outside its brief lifetime. When misuse is confirmed, Artifact Signing quickly revokes the certificate and suspends the subscriber's account, removing trust from the malicious code's signature and stopping further abuse. These measures adhere to strict industry standards for responsible certificate governance. By combining real-time threat detection, built-in preventive controls, and rapid response policies, Artifact Signing gives Windows app developers confidence that any attempt to abuse the platform will be quickly identified and contained, helping protect the broader software ecosystem from emerging threats.

Availability and What's Next

Check out the upcoming "What's New" section in the Artifact Signing Learn Docs for updates on supported file types, new region availability, and more. Microsoft will continue evolving the service to meet developer needs.

Conclusion: Enhancing Trust and Security for All Windows Apps

Artifact Signing empowers Windows developers to sign their applications with ease and confidence. It integrates effortlessly into your development tools, automates the heavy lifting of certificate management, and ensures every app carries a verified digital signature backed by Microsoft's Certificate Authorities. For users, it means peace of mind. For developers and organizations, it means fewer headaches, stronger protection against supply chain threats, and complete control over who signs what and when. Now that Artifact Signing is generally available, it's a must-have for building trustworthy Windows software. It reflects Microsoft's commitment to a secure, inclusive ecosystem and brings modern security features like Smart App Control and App Control for Business within reach, simply by signing your code. Whether you're shipping consumer apps or internal tools, Artifact Signing helps you deliver software that's both easy to install and tough to compromise.

Trusted Signing Public Preview Update
Nearly a year ago we announced the Public Preview of Trusted Signing, with availability for organizations with 3 years or more of verifiable history to onboard to the service and get a fully managed code signing experience that simplifies the effort for Windows app developers. Over the past year, we've announced new features, including preview support for Individual Developers, and we highlighted how the service contributes to the Windows security story at Microsoft BUILD 2024 in the Unleash Windows App Security & Reputation with Trusted Signing session.

During the Public Preview, we have gained valuable insights into the service features from our customers, and into the developer experience as well as the experience for Windows users. As we incorporate this feedback and learning into our General Availability (GA) release, we are limiting new customer subscriptions for the remainder of the public preview. This approach will allow us to focus on refining the service based on the feedback and data collected during the preview phase. The limit on new customer subscriptions for Trusted Signing will take effect Wednesday, April 2, 2025, and make the service only available to US and Canada-based organizations with 3 years or more of verifiable history. Onboarding for individual developers and all other organizations will not be directly available for the remainder of the preview, and we look forward to expanding the service availability as we approach GA. Note that this announcement does not impact any existing subscribers of Trusted Signing, and the service will continue to be available for these subscribers as it has been throughout the Public Preview. For additional information about Trusted Signing please refer to Trusted Signing documentation | Microsoft Learn and Trusted Signing FAQ | Microsoft Learn.

Announcing AI Entity Analyzer in Microsoft Sentinel MCP Server - Public Preview
What is the Entity Analyzer?

Assessing the risk of entities is a core task for SOC teams - whether triaging incidents, investigating threats, or automating response workflows. Traditionally, this has required building complex playbooks or custom logic to gather and analyze fragmented security data from multiple sources. With Entity Analyzer, this complexity starts to fade away. The tool leverages your organization's security data in Sentinel to deliver comprehensive, reasoned risk assessments for any entity you encounter - starting with users and URLs. By providing this unified, out-of-the-box solution for entity analysis, Entity Analyzer also enables the AI agents you build to make smarter decisions and automate more tasks - without the need to manually engineer risk evaluation logic for each entity type. And for those building SOAR workflows, Entity Analyzer is natively integrated with Logic Apps, making it easy to enrich incidents and automate verdicts within your playbooks.

*Entity Analyzer is rolling out in Public Preview to Sentinel MCP server and within Logic Apps starting today. Learn more here.
**Leave feedback on the Entity Analyzer here.

Deep Dive: How the User Analyzer is already solving problems for security teams

Problem: Drowning in identity alerts
Security operations centers (SOCs) are inundated with identity-based threats and alert noise. Triaging these alerts requires analyzing numerous data sources across sign-in logs, cloud app events, identity info, behavior analytics, threat intel, and more, all in tandem with each other to reach a verdict - something very challenging to do without a human in the loop today. So, we introduced the User Analyzer, a specialized analyzer that unifies, correlates, and analyzes user activity across all these security data sources.

Government of Nunavut: solving identity alert overload with User Analyzer
Hear from Arshad Sheikh, Security Expert at Government of Nunavut, on how they're using the User Analyzer today:

How it's making a difference
"Before the User Analyzer, when we received identity alerts we had to check a large amount of data related to users' activity (user agents, anomalies, IP reputation, etc.). We had to write queries, wait for them to run, and then manually reason over the results. We attempted to automate some of this, but maintaining and updating that retrieval, parsing, and reasoning automation was difficult and we didn't have the resources to support it. With the User Analyzer, we now have a plug-and-play solution that represents a step toward the AI-driven automation of the future. It gathers all the context such as what the anomalies are and presents it to our analysts so they can make quick, confident decisions, eliminating the time previously spent manually gathering this data from portals."

Solving a real problem
"For example, every 24 hours we create a low-severity incident of our users who successfully sign in to our network non-interactively from outside of our geo-fence. This type of activity is not high-enough fidelity to auto-disable, requiring us to manually analyze the flagged users each time. But with User Analyzer, this analysis is performed automatically. The User Analyzer has also significantly reduced the time required to determine whether identity-based incidents like these are false positives or true positives. Instead of spending around 20 minutes investigating each incident, our analysts can now reach a conclusion in about 5 minutes using the automatically generated summary."
Looking ahead "Looking ahead, we see even more potential. In the future, the User Analyzer could be integrated directly with Microsoft Sentinel playbooks to take automated, definitive action such as blocking user or device access based on the analyzer’s results. This would further streamline our incident response and move us closer to fully automated security operations." Want similar benefits in your SOC? Get started with our Entity Analyzer Logic Apps template here. User Analyzer architecture: how does it work? Let’s take a look at how the User Analyzer works. The User Analyzer aggregates and correlates signals from multiple data sources to deliver a comprehensive analysis, enabling informed actions based on user activity. The diagram below gives an overview of this architecture: Step 1: Retrieve Data The analyzer starts by retrieving relevant data from the following sources: Sign-In Logs (Interactive & Non-Interactive): Tracks authentication and login activity. Security Alerts: Alerts from Microsoft Defender solutions. Behavior Analytics: Surfaces behavioral anomalies through advanced analytics. Cloud App Events: Captures activity from Microsoft Defender for Cloud Apps. Identity Information: Enriches user context with identity records. Microsoft Threat Intelligence: Enriches IP addresses with Microsoft Threat Intelligence. Steps 2: Correlate signals Signals are correlated using identifiers such as user IDs, IP addresses, and threat intelligence. Rather than treating each alert or behavior in isolation, the User Analyzer fuses signals to build a holistic risk profile. Step 3: AI-based reasoning In the User Analyzer, multiple AI-powered agents collaborate to evaluate the evidence and reach consensus. This architecture not only improves accuracy and reduces bias in verdicts, but also provides transparent, justifiable decisions. Leveraging AI within the User Analyzer introduces a new dimension of intelligence to threat detection. Instead of relying on static signatures or rigid regex rules, AI-based reasoning can uncover subtle anomalies that traditional detection methods and automation playbooks often miss. For example, an attacker might try to evade detection by slightly altering a user-agent string or by targeting and exfiltrating only a few files of specific types. While these changes could bypass conventional pattern matching, an AI-powered analyzer understands the semantic context and behavioral patterns behind these artifacts, allowing it to flag suspicious deviations even when the syntax looks benign. Step 4: Verdict & analysis Each user is given a verdict. The analyzer outputs any of the following verdicts based on the analysis: Compromised Suspicious activity found No evidence of compromise Based on the verdict, a corresponding recommendation is given. This helps teams make an informed decision whether action should be taken against the user. *AI-generated content from the User Analyzer may be incorrect - check it for accuracy. User Analyzer Example Output See the following example output from the user analyzer within an incident comment: *IP addresses have been redacted for this blog* &CK techniques, a list of malicious IP addresses the user signed in from (redacted for this blog), and a few suspicious user agents the user's activity originated from. typically have to query and analyze these themselves, feel more comfortable trusting its classification. The analyzer also gives recommendations to remediate the account compromise, and a list of data sources it used during analysis. 
Conclusion

Entity Analyzer in Microsoft Sentinel MCP server represents a leap forward in alert triage and analysis. By correlating signals and harnessing AI-based reasoning, it empowers SOC teams to act on investigations with greater speed, precision, and confidence.

*Leave feedback on the Entity Analyzer here.

Always-on Diagnostics for Purview Endpoint DLP: Effortless, Zero-Friction Troubleshooting for Admins
Historically, some security teams have struggled with the challenge of troubleshooting issues with endpoint DLP. Investigations can often slow down because reproducing issues, collecting traces, and aligning on context can be tedious. With always-on diagnostics in Purview endpoint data loss prevention (DLP), our goal has been simple: make troubleshooting seamless and effortless—without ever disrupting the information worker. Today, we're excited to share new enhancements to always-on diagnostics for Purview endpoint DLP. This is the next step in our journey to modernize supportability in Microsoft Purview and dramatically reduce admin friction during investigations.

Where We Started: Introduction of continuous diagnostic collection

Earlier this year, we introduced continuous diagnostic trace collection on Windows endpoints (support for macOS endpoints coming soon). This eliminated the single largest source of friction: the need to reproduce issues. With this capability:
- Logs are captured persistently for up to 90 days
- Information workers no longer need admin permissions to retrieve traces
- Admins can submit complete logs on the first attempt
- Support teams can diagnose transient or rare issues with high accuracy
In just a few months, we saw resolution times drop dramatically. The message was clear: always-on diagnostics is becoming a new troubleshooting standard.

Our Newest Enhancements: Built for Admins. Designed for Zero Friction.

The newest enhancements to always-on diagnostics unlock the most requested capability from our IT and security administrators: the ability to retrieve and upload always-on diagnostic traces directly from devices using the Purview portal — with no user interaction required. This means:
- Admins can now initiate trace uploads on demand
- No interruption to information workers and their productivity
- No issue reproduction sessions, minimizing unnecessary disruption and coordination
- Every investigation starts with complete context
Because the traces are already captured on-device, these improvements complete the loop by giving admins a seamless, portal-integrated workflow to deliver logs to Microsoft when needed. This experience is now fully available for customers using endpoint DLP on Windows.

Why This Matters

As a product team, our success is measured not just by usage, but by how effectively we eliminate friction for customers. Always-on diagnostics minimizes the friction and frustration that has historically affected some customers.
- No more asking your information workers "can you reproduce that?" and to share logs
- No more lost context
- No more delays while logs are collected after the fact

How it Works

Local trace capture
Devices continuously capture endpoint DLP diagnostic data in a compressed, proprietary format, and this data stays solely on the respective device based on the retention period and storage limits configured by the admin. Users no longer need to reproduce issues during retrieval—everything the investigation requires is already captured on the endpoint.

Admin-triggered upload
Admins can now request diagnostic uploads directly from the Purview portal, eliminating the need to disrupt users.
Upload requests can be initiated from multiple entry points, including:
- Alerts (Data Loss Prevention → Alerts → Events)
- Activity Explorer (Data Loss Prevention → Explorers → Activity explorer)
- Device Policy Status Page (Settings → Device onboarding → Devices)
From any of these locations, admins can simply choose Request device log, select the date range, add a brief description, and submit the request. Once processed, the device's always-on diagnostic logs are securely uploaded to Microsoft telemetry as per customer-approved settings. Admins can include the upload request number in their ticket with Microsoft Support, and sharing this number removes the need for the support engineer to ask for logs again during the investigation. This workflow ensures investigations start with complete diagnostic context.

Privacy & compliance considerations
- Data is only uploaded during admin-initiated investigations
- Data adheres to our published diagnostic data retention policies
- Logs are only accessible to the Microsoft support team, not any other parties

We Want to Hear From You

Are you using always-on diagnostics? We'd love to hear about your experience. Share your feedback, questions, or success stories in the Microsoft Tech Community, or reach out to our engineering team directly. Making troubleshooting effortless—so you can focus on what matters, not on chasing logs.

Microsoft Copilot Studio vs. Microsoft Foundry: Building AI Agents and Apps
Microsoft Copilot Studio and Microsoft Foundry (often referred to as Azure AI Foundry) are two key platforms in Microsoft's AI ecosystem that allow organizations to create custom AI agents and AI-enabled applications. While both share the goal of enabling businesses to build intelligent, task-oriented "copilot" solutions, they are designed for different audiences and use cases. To help you decide which path suits your organization, this blog provides an educational comparison of Copilot Studio vs. Azure AI Foundry, focusing on their unique strengths, feature parity and differences, and key criteria like control requirements, preferences, and integration needs. By understanding these factors, technical decision-makers, developers, IT admins, and business leaders can confidently select the right platform or even a hybrid approach for their AI agent projects.

Copilot Studio and Azure AI Foundry: At a Glance

Copilot Studio is designed for business teams, pro-makers, and IT admins who want a managed, low-code SaaS environment with plug-and-play integrations. Microsoft Foundry is built for professional developers who need fine-grained control, customization, and integration into their existing application and cloud infrastructure. And the good news? Organizations often use both, and they work together beautifully.

Feature Parity and Key Differences

While both platforms can achieve similar outcomes, they do so via different means. Here's a high-level comparison of Copilot Studio (SaaS, low-code) and Microsoft (Azure) AI Foundry (PaaS, pro-code), factor by factor:

Target Users & Skills
- Copilot Studio: Business domain experts, IT pros, and "pro-makers" comfortable with low-code tools. Little to no coding is required for building agents. Ideal for quick solutions within business units.
- Foundry: Professional developers, software engineers, and data scientists with coding/DevOps expertise. Deep programming skills are needed for custom code, DevOps, and advanced AI scenarios. Suited for complex, large-scale AI projects.

Platform Model
- Copilot Studio: Software-as-a-Service, fully managed by Microsoft. Agents and tools are built and run in Microsoft's cloud (M365/Copilot service) with no infrastructure to manage. Simplified provisioning, automatic updates, and built-in compliance with the Microsoft 365 environment.
- Foundry: Platform-as-a-Service, running in your Azure subscription. You deploy and manage the agent's infrastructure (e.g. Azure compute, networking, storage) in your cloud. Offers full control over environment, updates, and data residency.

Integration & Data
- Copilot Studio: Out-of-box connectors and data integrations for Microsoft 365 (SharePoint, Outlook, Teams) and third-party SaaS via Power Platform connectors. Easy integration with business systems without coding, ideal for leveraging existing M365 and Power Platform assets. Data remains in Microsoft's cloud (with M365 compliance and Purview governance) by default.
- Foundry: Deep custom integration with any system or data source via code. Natively works with Azure services (Azure SQL, Cosmos DB, Functions, Kubernetes, Service Bus, etc.) and can connect to on-prem or multi-cloud resources via custom connectors. Suitable when data/code must stay in your network or cloud for compliance or performance reasons.

Development Experience
- Copilot Studio: Low-code, UI-driven development. Build agents with visual designers and prompt editors. No-code orchestration through Topics (conversational flows) and Agent Flows (Power Automate).
A rich library of pre-built components (tools/capabilities) is auto-managed and continuously improved by Microsoft (e.g. Copilot connectors for M365, built-in tool evaluations). The experience emphasizes speed and simplicity over granular control.
- Foundry: Code-first development. Offers a web-based studio plus extensive SDKs, CLI, and VS Code integration for coding agents and custom tools. Supports full DevOps: you can use GitHub/Azure DevOps for CI/CD, custom testing, version control, and integrate with your existing software development toolchain. Provides maximum flexibility to define bespoke logic, but requires more time and skill, sacrificing immediate simplicity for long-term extensibility.

Control & Governance
- Copilot Studio: Managed environment with minimal configuration needed. Governance is handled via Microsoft's standard M365 admin centers: e.g. Admin Center, Entra ID, Microsoft Purview, Defender for identity, access, auditing, and compliance across copilots. Updates and performance optimizations (e.g. tool improvements) are applied automatically by Microsoft. Limited need (or ability) to tweak infrastructure or model behavior under the hood; fits organizations that want Microsoft to manage the heavy lifting.
- Foundry: Microsoft Foundry provides a pro-code, Azure-native environment for teams that need full control over the agent runtime, integrations, and development workflow. Full stack control: you manage how and where agents run. Customizable governance using Azure's security and monitoring tools: Azure AD (identity/RBAC), Key Vault, network security (private endpoints, VNETs), plus integrated logging and telemetry via Azure Monitor, App Insights, etc. Foundry includes a developer control plane for observing, debugging, and evaluating agents during development and runtime. This is ideal for organizations requiring fine-grained control, custom compliance configurations, and rigorous LLMOps practices.

Deployment Channels
- Copilot Studio: One-click publishing to Microsoft 365 experiences (Teams, Outlook), web chat, SharePoint, email, and more, thanks to native support for multiple channels in Copilot Studio. Everything runs in the cloud; you don't worry about hosting the bot.
- Foundry: Flexible deployment options. Foundry agents can be exposed via APIs or the Activity Protocol, and integrated into apps or custom channels using the M365 Agents SDK. Foundry also supports deploying agents as web apps, containers, Azure Functions, or even private endpoints for internal use, giving teams freedom to run agents wherever needed (with more setup).

Control and customization

Copilot Studio trades off fine-grained control for simplicity and speed. It abstracts away infrastructure and handles many optimizations for you, which accelerates development but limits how deeply you can tweak the agent's behavior. Azure Foundry, by contrast, gives you extensive control over the agent's architecture, tools and environment, at the cost of more complex setup and effort. Consider your project's needs: does it demand custom code, specialized model tuning or on-premises data? If yes, Foundry provides the necessary flexibility.
Common Scenarios

Copilot Studio:
· HR or Finance teams building departmental AI assistants
· Sales operations automating workflows and knowledge retrieval
· Fusion teams starting quickly without developer-heavy resources
Copilot Studio gives teams a powerful way to build agents quickly without needing to set up compute, networking, identity or a DevOps pipeline.

Microsoft Foundry:
· Embedding agents into production SaaS apps
· Teams using professional developer frameworks (Semantic Kernel, LangChain, AutoGen, etc.)
· Building multi-agent architectures with complex toolchains
· Integration with existing app code or multi-cloud architecture is required
· Full observability, versioning, instrumentation or custom DevOps is needed
Foundry is ideal for software engineering teams who need configurability, extensibility and industrial-grade DevOps.

Benefits of Combined Use: Embracing a Hybrid Approach

One important insight is that Copilot Studio and Foundry are not mutually exclusive. In fact, Microsoft designed them to be interoperable so that organizations can use both in tandem for different parts of a solution. This is especially relevant for large projects or "fusion teams" that include both low-code creators and pro developers. The pattern many enterprises land on:
- Developers build specialized tools / agents in Foundry
- Makers assemble the user-facing workflow experience in Copilot Studio
- Agents collaborate via agent-to-agent patterns (including A2A, where applicable)

Using both platforms together unlocks the best of both worlds:
- Seamless user experience: Copilot Studio provides a polished, user-friendly interface for end-users, while Azure AI Foundry handles complex backend logic and data processing.
- Advanced AI capabilities: Leverage Azure AI Foundry's extensive model library and orchestration features to build sophisticated agents that can reason, learn, and adapt.
- Scalability & flexibility: Azure AI Foundry's cloud-native architecture ensures scalability for high-demand scenarios, while Copilot Studio's low-code approach accelerates development cycles.

For customers who don't want to decide up front, Microsoft introduced a unified approach for scaling agent initiatives: the Microsoft Agent Pre-Purchase Plan (P3), part of the broader Agent Factory story, designed to reduce procurement friction across both platforms.

Security & Compliance using Microsoft Purview

Microsoft Copilot Studio: Microsoft Purview extends enterprise-grade security and compliance to agents built with Microsoft Copilot Studio by bringing AI interaction governance into the same control plane you use for the rest of Microsoft 365. With Purview, you can apply DSPM for AI insights, auditing, and data classification to Copilot Studio prompts and responses, and use familiar compliance capabilities like sensitivity labels, DLP, Insider Risk Management, Communication Compliance, eDiscovery, and Data Lifecycle Management to reduce oversharing risk and support investigations. For agents published to non-Microsoft channels, Purview management can require pay-as-you-go billing, while still using the same Purview policies and reporting workflows teams already rely on.

Microsoft Foundry: Microsoft Purview integrates with Microsoft Foundry to help organizations secure and govern AI interactions (prompts, responses, and related metadata) using Microsoft's unified data security and compliance capabilities.
Once enabled through the Foundry Control Plane or through Microsoft Defender for Cloud in the Microsoft Azure portal, Purview can provide DSPM for AI posture insights plus auditing, data classification, sensitivity labels, and enforcement-oriented controls like DLP, along with downstream compliance workflows such as Insider Risk, Communication Compliance, eDiscovery, and Data Lifecycle Management. This lets security and compliance teams apply consistent policies across AI apps and agents in Foundry, while gaining visibility and governance through the same Purview portal and reports used across the enterprise.

Conclusion

When it comes to Copilot Studio vs. Azure AI Foundry, there is no universally "best" choice; the ideal platform depends on your team's composition and project requirements. Copilot Studio excels at enabling functional business teams and IT pros to build AI assistants quickly in a managed, compliant environment with minimal coding. Azure AI Foundry shines for developer-centric projects that need maximal flexibility, custom code, and deep integration with enterprise systems. The key is to identify what level of control, speed, and skill your scenario calls for. Use both together to build end-to-end intelligent systems that combine ease of use with powerful backend intelligence. By thoughtfully aligning the platform to your team's strengths and needs, you can minimize friction and maximize momentum on your AI agent journey, delivering custom copilot solutions that are both quick to market and built for the long haul.

Resources to explore
- Copilot Studio Overview
- Microsoft Foundry
- Use Microsoft Purview to manage data security & compliance for Microsoft Copilot Studio
- Use Microsoft Purview to manage data security & compliance for Microsoft Foundry
- Optimize Microsoft Foundry and Copilot Credit costs with Microsoft Agent pre-purchase plan
- Accelerate Innovation with Microsoft Agent Factory

Securing the AI Pipeline – From Data to Deployment
In our first post, we established why securing AI workloads is mission-critical for the enterprise. Now, we turn to the AI pipeline—the end-to-end journey from raw data to deployed models—and explore why every stage must be fortified against evolving threats. As organizations accelerate AI adoption, this pipeline becomes a prime target for adversaries seeking to poison data, compromise models, or exploit deployment endpoints. Enterprises don't operate a single "AI system"; they run interconnected pipelines that transform data into decisions across a web of services, models, and applications. Protecting this chain demands a holistic security strategy anchored in Zero Trust for AI, supply chain integrity, and continuous monitoring. In this post, we map the pipeline, identify key attack vectors at each stage, and outline practical defenses using Microsoft's security controls—spanning data governance with Purview, confidential training environments in Azure, and runtime threat detection with Defender for Cloud. Our guidance aligns with leading frameworks, including the NIST AI Risk Management Framework and MITRE ATLAS, ensuring your AI security program meets recognized standards while enabling innovation at scale.

A Security View of the AI Pipeline

Securing AI isn't just about protecting a single model—it's about safeguarding the entire pipeline that transforms raw data into actionable intelligence. This pipeline spans multiple stages, from data collection and preparation to model training, validation, and deployment, each introducing unique risks that adversaries can exploit. Data poisoning, model tampering, and supply chain attacks are no longer theoretical—they're real threats that can undermine trust and compliance. By viewing the pipeline through a security lens, organizations can identify these vulnerabilities early and apply layered defenses such as Zero Trust principles, data lineage tracking, and runtime monitoring. This holistic approach ensures that AI systems remain resilient, auditable, and aligned with enterprise risk and regulatory requirements.

Stages & Primary Risks

Data Collection & Ingestion
Sources: enterprise apps, data lakes, web, partners. Key risks: poisoning, PII leakage, weak lineage, and shadow datasets. Frameworks call for explicit governance and provenance at this earliest stage. [nist.gov]

Data Prep & Feature Engineering
Risks: backdoored features, bias injection, and transformation tampering that evades standard validation. ATLAS catalogs techniques that target data, features, and preprocessing. [atlas.mitre.org]

Model Training / Fine-Tuning
Risks: model theft, inversion, poisoning, and compromised compute. Confidential computing and isolated training domains are recommended. [learn.microsoft.com]

Validation & Red-Team Testing
Risks: tainted validation sets, overlooked LLM-specific risks (prompt injection, unbounded consumption), and fairness drift. OWASP's LLM Top 10 highlights the unique classes of generative threats. [owasp.org]

Registry & Release Management
Risks: supply chain tampering (malicious models, dependency confusion), unsigned artifacts, and missing SBOM/AIBOM. [codesecure.com], [github.com]

Deployment & Inference
Risks: adversarial inputs, API abuse, prompt injection (direct & indirect), data exfiltration, and model abuse at runtime. Microsoft has documented multi-layer mitigations and integrated threat protection for AI workloads.
[techcommun…rosoft.com], [learn.microsoft.com] Reference Architecture (Zero Trust for AI) The Reference Architecture for Zero Trust in AI establishes a security-first blueprint for the entire AI pipeline—from raw data ingestion to model deployment and continuous monitoring. Its importance lies in addressing the unique risks of AI systems, such as data poisoning, model tampering, and adversarial attacks, which traditional security models often overlook. By embedding Zero Trust principles at every stage—governance with Microsoft Purview, isolated training environments, signed model artifacts, and runtime threat detection—organizations gain verifiable integrity, regulatory compliance, and resilience against evolving threats. Adopting this architecture ensures that AI innovations remain trustworthy, auditable, and aligned with business and compliance objectives, ultimately accelerating adoption while reducing risk and safeguarding enterprise reputation. Below is a visual of what this architecture looks like. Why this matters: Microsoft Purview establishes provenance, labels, and lineage; Azure ML enforces network isolation; Confidential Computing protects data-in-use; Responsible AI tooling addresses safety & fairness; Defender for Cloud adds runtime AI‑specific threat detection; and Azure ML Model Monitoring closes the loop with drift and anomaly detection. [microsoft.com], [azure.microsoft.com], [learn.microsoft.com], [learn.microsoft.com], [learn.microsoft.com], [learn.microsoft.com], [learn.microsoft.com], [learn.microsoft.com] Stage‑by‑Stage Threats & Concrete Mitigations (with Microsoft Controls) Data Collection & Ingestion - Attack Scenarios Data poisoning via partner feed or web‑scraped corpus; undetected changes skew downstream models. Research shows Differential Privacy (DP) can reduce impact but is not a silver bullet. Differential Privacy introduces controlled noise into training data or model outputs, making it harder for attackers to infer individual data points and limiting the influence of any single poisoned record. This helps reduce the impact of targeted poisoning attacks because malicious entries cannot disproportionately affect the model’s parameters. However, DP is not sufficient on its own for several reasons: Aggregate poisoning still works: DP protects individual records, but if an attacker injects a large volume of poisoned data, the cumulative effect can still skew the model. Utility trade-offs: Adding noise to achieve strong privacy guarantees often degrades model accuracy, creating tension between security and performance. Doesn’t detect malicious intent: DP doesn’t validate data quality or provenance—it only limits exposure. Poisoned data can still enter the pipeline undetected. Vulnerable to sophisticated attacks: Techniques like backdoor poisoning or gradient manipulation can bypass DP protections because they exploit model behavior rather than individual record influence. Bottom line: DP is a valuable layer for privacy and resilience, but it must be combined with data validation, anomaly detection, and provenance checks to effectively mitigate poisoning risks. [arxiv.org], [dp-ml.github.io] Sensitive data drift into training corpus (PII/PHI), later leaking through model inversion. NIST RMF calls for privacy‑enhanced design and provenance from the outset. When personally identifiable information (PII) or protected health information (PHI) unintentionally enters the training dataset—often through partner feeds, logs, or web-scraped sources—it creates a latent risk. If the model memorizes these sensitive records, adversaries can exploit model inversion attacks to reconstruct or infer private details from outputs or embeddings. [nvlpubs.nist.gov]
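To make the DP discussion above concrete, here is a minimal illustrative sketch of the Gaussian mechanism, the noise-calibration idea behind many DP implementations; the epsilon, delta, and sensitivity values are assumptions chosen for the example, not recommendations.

```python
# Illustrative only: the Gaussian mechanism behind many differential privacy (DP)
# implementations, shown on a single aggregate statistic. The epsilon, delta, and
# sensitivity values are assumptions chosen for the demo, not recommendations.
import numpy as np

def gaussian_mechanism(value: float, sensitivity: float, epsilon: float, delta: float) -> float:
    """Return `value` plus Gaussian noise calibrated for (epsilon, delta)-DP at the given L2 sensitivity."""
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return value + np.random.normal(0.0, sigma)

# Example: privatize the mean of a bounded feature. Values are clipped to [0, 1],
# so one record can change the mean of n records by at most 1/n (the sensitivity).
records = np.clip(np.random.rand(10_000), 0.0, 1.0)
true_mean = float(records.mean())
dp_mean = gaussian_mechanism(true_mean, sensitivity=1.0 / len(records), epsilon=1.0, delta=1e-5)
print(f"true mean={true_mean:.4f}  dp mean={dp_mean:.4f}")
```

In practice, teams usually rely on established DP-SGD-style libraries rather than hand-rolled noise, and, as noted above, DP complements rather than replaces validation, anomaly detection, and provenance checks.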
Mitigations & Integrations Classify & label sensitive fields with Microsoft Purview Use Purview’s automated scanning and classification to detect PII, PHI, financial data, and other regulated fields across your data estate. Apply sensitivity labels and tags to enforce consistent governance policies. [microsoft.com] Enable lineage across Microsoft Fabric/Synapse/SQL Implement Data Loss Prevention (DLP) rules to block unauthorized movement of sensitive data and prevent accidental leaks. Combine this with role-based access control (RBAC) and attribute-based access control (ABAC) to restrict who can view, modify, or export sensitive datasets. Integrate with SOC and DevSecOps Pipelines Feed Purview alerts and lineage events into your SIEM/XDR workflows for real-time monitoring. Automate policy enforcement in CI/CD pipelines to ensure models only train on approved, sanitized datasets. Continuous Compliance Monitoring Schedule recurring scans and leverage Purview’s compliance dashboards to validate adherence to regulatory frameworks like GDPR, HIPAA, and NIST RMF. Maintain dataset hashes and signatures; store lineage metadata and approvals before a dataset can enter training (Purview + Fabric). [azure.microsoft.com] For externally sourced data, sandbox ingestion and run poisoning heuristics; if using Differential Privacy (DP) in training, document the tradeoffs (utility vs. robustness). [aclanthology.org], [dp-ml.github.io] 3.2 Data Preparation & Feature Engineering Attack Scenarios Feature backdoors: crafted tokens in a free‑text field activate hidden behaviors only under specific conditions. MITRE ATLAS lists techniques that target features/preprocessing. [atlas.mitre.org] Mitigations & Integrations Version every transformation; capture end‑to‑end lineage (Purview) and enforce code review on feature pipelines. Apply train/validation set integrity checks; for Large Language Models with Retrieval-Augmented Generation (LLM RAG), inspect embeddings and vector stores for outliers before indexing. 3.3 Model Training & Fine‑Tuning - Attack Scenarios Training environment compromise leading to model tampering or exfiltration. Attackers may gain access to the training infrastructure (e.g., cloud VMs, on-prem GPU clusters, or CI/CD pipelines) and inject malicious code or alter training data. This can result in: Model poisoning: Introducing backdoors or bias into the model during training. Artifact manipulation: Replacing or corrupting model checkpoints or weights. Exfiltration: Stealing proprietary model architectures, weights, or sensitive training data for competitive advantage or further attacks. Model inversion / extraction attempts during or after training. Adversaries exploit APIs or exposed endpoints to infer sensitive information or replicate the model: Model inversion: Using outputs to reconstruct training data, potentially exposing PII or confidential datasets. Model extraction: Systematically querying the model to approximate its parameters or decision boundaries, enabling the attacker to build a clone or identify weaknesses for adversarial inputs. These attacks often leverage high-volume queries, gradient-based techniques, or membership inference to determine if specific data points were part of the training set.
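Because extraction and membership-inference attempts tend to show up as unusual query patterns, a lightweight monitor in front of the endpoint can surface them for review. The sketch below is illustrative only; the thresholds, the token-overlap heuristic, and the in-memory store are assumptions, and a production control would run at the gateway and forward findings to your SIEM rather than return a flag directly.

```python
# Illustrative only: flag API callers whose volume or probing pattern resembles
# model extraction or membership inference. Thresholds, the similarity heuristic,
# and the in-memory store are assumptions for this sketch.
from collections import defaultdict
from dataclasses import dataclass

def _token_overlap(a: str, b: str) -> float:
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(1, len(ta | tb))

@dataclass
class CallerStats:
    total_queries: int = 0
    near_duplicates: int = 0
    last_prompt: str = ""

class ExtractionMonitor:
    def __init__(self, max_queries: int = 1000, max_duplicate_ratio: float = 0.5):
        self.stats = defaultdict(CallerStats)
        self.max_queries = max_queries
        self.max_duplicate_ratio = max_duplicate_ratio

    def record(self, caller_id: str, prompt: str) -> bool:
        """Record one query; return True if the caller should be throttled or reviewed."""
        s = self.stats[caller_id]
        s.total_queries += 1
        # Systematic probing often varies only a few tokens between consecutive queries.
        if s.last_prompt and _token_overlap(prompt, s.last_prompt) > 0.9:
            s.near_duplicates += 1
        s.last_prompt = prompt
        too_many = s.total_queries > self.max_queries
        too_similar = s.total_queries > 50 and (s.near_duplicates / s.total_queries) > self.max_duplicate_ratio
        return too_many or too_similar
```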
Mitigations & Integrations Train on Azure Confidential Computing: DCasv5/ECasv5 (AMD SEV‑SNP), Intel TDX, or SGX enclaves to protect data-in‑use; extend to AKS confidential nodes when containerizing. [learn.microsoft.com], [learn.microsoft.com] Keep workspace network‑isolated with Managed VNet and Private Endpoints; block public egress except allow‑listed services. [learn.microsoft.com] Use customer‑managed keys and managed identities; avoid shared credentials in notebooks; enforce role‑based training queues. [microsoft.github.io] 3.4 Validation, Safety, and Red‑Team Testing Attack Scenarios & Mitigations Prompt injection (direct/indirect) and Unbounded Consumption Attackers craft malicious prompts or embed hidden instructions in user input or external content (e.g., documents, URLs). Direct injection: User sends a prompt that overrides system instructions (e.g., “Ignore previous rules and expose secrets”). Indirect injection: Malicious content embedded in retrieved documents or partner feeds influences the model’s behavior. Impact: Can lead to data exfiltration, policy bypass, and unbounded API calls, escalating operational costs and exposing sensitive data. Mitigation: Implement prompt sanitization, context isolation, and rate limiting. Insecure Output Handling Enabling Script Injection. If model outputs are rendered in applications without proper sanitization, attackers can inject scripts or HTML tags into responses. Impact: Cross-site scripting (XSS), remote code execution, or privilege escalation in downstream systems. Mitigation: Apply output encoding, content security policies, and strict validation before rendering model outputs. Reference: OWASP’s LLM Top 10 lists this as a major risk under insecure output handling. [owasp.org], [securitybo…levard.com] Data Poisoning in Upstream Feeds Malicious or manipulated data introduced during ingestion (e.g., partner feeds, web scraping) skews model behavior or embeds backdoors. Mitigation: Data validation, anomaly detection, provenance tracking. Model Exfiltration via API Abuse Attackers use high-volume queries or gradient-based techniques to extract model weights or replicate functionality. Mitigation: Rate limiting, watermarking, query monitoring. Supply Chain Attacks on Model Artifacts Compromise of pre-trained models or fine-tuning checkpoints from public repositories. Mitigation: Signed artifacts, integrity checks, trusted sources. Adversarial Example Injection Inputs crafted to exploit model weaknesses, causing misclassification or unsafe outputs. Mitigation: Adversarial training, robust input validation. Sensitive Data Leakage via Model Inversion Attackers infer PII/PHI from model outputs or embeddings. Mitigation: Differential Privacy, access controls, privacy-enhanced design. Insecure Integration with External Tools LLMs calling plugins or APIs without proper sandboxing can lead to unauthorized actions. Mitigation: Strict permissioning, allowlists, and isolation. Additional Mitigations & Integrations considerations Adopt Microsoft’s defense‑in‑depth guidance for indirect prompt injection (hardening + Spotlighting patterns) and pair with runtime Prompt Shields. [techcommun…rosoft.com] Evaluate models with Responsible AI Dashboard (fairness, explainability, error analysis) and export RAI Scorecards for release gates. [learn.microsoft.com] Build security gates referencing MITRE ATLAS techniques and OWASP GenAI controls into your MLOps pipeline. [atlas.mitre.org], [owasp.org]
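One simple way to express such a gate is a script that runs in the release pipeline and fails the build when evaluation results exceed agreed thresholds or the model artifact does not match an approved manifest. This is a minimal sketch under stated assumptions: the eval_results.json schema, threshold values, file names, and manifest format below are illustrative, not a documented Azure ML or CI interface.

```python
# Illustrative only: a release-gate script that fails the pipeline when red-team or
# evaluation metrics exceed agreed thresholds, or when the model artifact does not
# match an approved manifest. File names, schema, and thresholds are assumptions.
import hashlib
import json
import sys
from pathlib import Path

THRESHOLDS = {
    "prompt_injection_success_rate": 0.02,  # share of red-team injection attempts that succeeded
    "jailbreak_success_rate": 0.01,
    "fairness_gap": 0.05,                   # max allowed subgroup performance gap
}

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def gate(eval_results: Path, model_artifact: Path, approved_manifest: Path) -> int:
    results = json.loads(eval_results.read_text())
    failures = []
    for metric, limit in THRESHOLDS.items():
        observed = results.get(metric, 1.0)  # a missing metric is treated as failing
        if observed > limit:
            failures.append(f"{metric}={observed} exceeds limit {limit}")
    approved = json.loads(approved_manifest.read_text())  # e.g. {"artifact_sha256": "..."}
    if sha256_of(model_artifact) != approved.get("artifact_sha256"):
        failures.append("model artifact hash does not match the approved manifest")
    for failure in failures:
        print(f"GATE FAILURE: {failure}", file=sys.stderr)
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(gate(Path("eval_results.json"), Path("model.pkl"), Path("approved_manifest.json")))
```

Wired into CI, a non-zero exit code blocks promotion automatically, which keeps the gate enforceable rather than advisory.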
3.5 Registry, Signing & Supply Chain Integrity - Attack Scenarios Model supply chain risk: backdoored pre‑trained weights Attackers compromise publicly available or third-party pre-trained models by embedding hidden behaviors (e.g., triggers that activate under specific inputs). Impact: Silent backdoors can cause targeted misclassification or data leakage during inference. Mitigation: Use trusted registries and verified sources for model downloads. Perform model scanning for anomalies and backdoor detection before deployment. [raykhira.com] Dependency Confusion Malicious actors publish packages with the same name as internal dependencies to public repositories. If build pipelines pull these packages, attackers gain code execution. Impact: Compromised training or deployment environments, leading to model tampering or data exfiltration. Mitigation: Enforce private package registries and pin versions. Validate dependencies against allowlists. Unsigned Artifacts Swapped in the Registry If model artifacts (weights, configs, containers) are not cryptographically signed, attackers can replace them with malicious versions. Impact: Deployment of compromised models or containers without detection. Mitigation: Implement artifact signing and integrity verification (e.g., SHA256 checksums). Require signature validation in CI/CD pipelines before promotion to production. Registry Compromise Attackers gain access to the model registry and alter metadata or inject malicious artifacts. Mitigation: RBAC, MFA, audit logging, and registry isolation. Tampered Build Pipeline CI/CD pipeline compromised to inject malicious code during model packaging or containerization. Mitigation: Secure build environments, signed commits, and pipeline integrity checks. Poisoned Container Images Malicious base images used for model deployment introduce vulnerabilities or malware. Mitigation: Use trusted container registries, scan images for CVEs, and enforce image signing. Shadow Artifacts Attackers upload artifacts with similar names or versions to confuse operators and bypass validation. Mitigation: Strict naming conventions, artifact fingerprinting, and automated validation. Additional Mitigations & Integrations considerations Store models in Azure ML Registry with version pinning; sign artifacts and publish SBOM/AI‑BOM metadata for downstream verifiers. [microsoft.github.io], [github.com], [codesecure.com] Maintain verifiable lineage and attestations (policy says: no signature, no deploy). Emerging work on attestable pipelines reinforces this approach. [arxiv.org] 3.6 Secure Deployment & Runtime Protection - Attack Scenarios Adversarial inputs and prompt injections targeting your inference APIs or agents Attackers craft malicious queries or embed hidden instructions in user input or retrieved content to manipulate model behavior. Impact: Policy bypass, sensitive data leakage, or execution of unintended actions via connected tools. Mitigation: Prompt sanitization and isolation (strip unsafe instructions). Context segmentation for multi-turn conversations. Rate limiting and anomaly detection on inference endpoints. Jailbreaks that bypass safety filters Attackers exploit weaknesses in safety guardrails by chaining prompts or using obfuscation techniques to override restrictions. Impact: Generation of harmful, disallowed, or confidential content; reputational and compliance risks. Mitigation: Layered safety filters (input + output). Continuous red-teaming and adversarial testing.
Dynamic policy enforcement based on risk scoring. API abuse and model extraction. High-volume or structured queries designed to infer model parameters or replicate its functionality. Impact: Intellectual property theft, exposure of proprietary model logic, and enabling downstream attacks. Mitigation: Rate limiting and throttling. Watermarking responses to detect stolen outputs. Query pattern monitoring for extraction attempts. [atlas.mitre.org] Insecure Integration with External Tools or Plugins LLM agents calling APIs without sandboxing can trigger unauthorized actions. Mitigation: Strict allowlists, permission gating, and isolated execution environments. Model Output Injection into Downstream Systems Unsanitized outputs rendered in apps or dashboards can lead to XSS or command injection. Mitigation: Output encoding, validation, and secure rendering practices. Runtime Environment Compromise Attackers exploit container or VM vulnerabilities hosting inference services. Mitigation: Harden runtime environments, apply OS-level security patches, and enforce network isolation. Side-Channel Attacks Observing timing, resource usage, or error messages to infer sensitive details about the model or data. Mitigation: Noise injection, uniform response timing, and error sanitization. Unbounded Consumption Leading to Cost Escalation Attackers flood inference endpoints with requests, driving up compute costs. Mitigation: Quotas, usage monitoring, and auto-scaling with cost controls. Additional Mitigations & Integrations considerations Deploy Managed Online Endpoints behind Private Link; enforce mTLS, rate limits, and token‑based auth; restrict egress in managed VNet. [learn.microsoft.com] Turn on Microsoft Defender for Cloud – AI threat protection to detect jailbreaks, data leakage, prompt hacking, and poisoning attempts; incidents flow into Defender XDR. [learn.microsoft.com] For Azure OpenAI / Direct Models, enterprise data is tenant‑isolated and not used to train foundation models; configure Abuse Monitoring and Risks & Safety dashboards, with clear data‑handling stance. [learn.microsoft.com], [learn.microsoft.com], [learn.microsoft.com] 3.7 Post‑Deployment Monitoring & Response - Attack Scenarios Data/Prediction Drift silently degrades performance Over time, input data distributions change (e.g., new slang, market shifts), causing the model to make less accurate predictions without obvious alerts. Impact: Reduced accuracy, operational risk, and potential compliance violations if decisions become unreliable. Mitigation: Continuous drift detection using statistical tests (KL divergence, PSI). Scheduled model retraining and validation pipelines. Alerting thresholds for performance degradation. Fairness Drift Shifts Outcomes Across Cohorts Model performance or decision bias changes for specific demographic or business segments due to evolving data or retraining. Impact: Regulatory risk (GDPR, EEOC), reputational damage, and ethical concerns. Mitigation: Implement bias monitoring dashboards. Apply fairness metrics (equal opportunity, demographic parity) in post-deployment checks. Trigger remediation workflows when drift exceeds thresholds. Emergent Jailbreak Patterns evolve over time Attackers discover new prompt injection or jailbreak techniques that bypass safety filters after deployment. Impact: Generation of harmful or disallowed content, policy violations, and security breaches. Mitigation: Behavioral anomaly detection on prompts and outputs. Continuous red-teaming and adversarial testing. 
Dynamic policy updates integrated into inference pipelines. Shadow Model Deployment Unauthorized or outdated models running in production environments without governance. Mitigation: Registry enforcement, signed artifacts, and deployment audits. Silent Backdoor Activation Backdoors introduced during training activate under rare conditions post-deployment. Mitigation: Runtime scanning for anomalous triggers and adversarial input detection. Telemetry Tampering Attackers manipulate monitoring logs or metrics to hide drift or anomalies. Mitigation: Immutable logging, cryptographic integrity checks, and SIEM integration. Cost Abuse via Automated Bots Bots continuously hit inference endpoints, driving up operational costs unnoticed. Mitigation: Rate limiting, usage analytics, and anomaly-based throttling. Model Extraction Over Time Slow, distributed queries across months to replicate model behavior without triggering rate limits. Mitigation: Long-term query pattern analysis and watermarking. Additional Mitigations & Integrations considerations Enable Azure ML Model Monitoring for data drift, prediction drift, data quality, and custom signals; route alerts to Event Grid to auto‑trigger retraining and change control. [learn.microsoft.com], [learn.microsoft.com] Correlate runtime AI threat alerts (Defender for Cloud) with broader incidents in Defender XDR for a complete kill‑chain view. [learn.microsoft.com] Real‑World Scenarios & Playbooks Scenario A — “Clean” Model, Poisoned Validation Symptom: Model looks great in CI, fails catastrophically on a subset in production. Likely cause: Attacker tainted validation data so unsafe behavior was never detected. ATLAS documents validation‑stage attacks. [atlas.mitre.org] Playbook: Require dual‑source validation sets with hashes in Purview lineage; incorporate RAI dashboard probes for subgroup performance; block release if variance exceeds policy. [microsoft.com], [learn.microsoft.com] Scenario B — Indirect Prompt Injection in Retrieval-Augmented Generation (RAG) Symptom: The assistant “quotes” an external PDF that quietly exfiltrates secrets via instructions in hidden text. Playbook: Apply Microsoft Spotlighting patterns (delimiting/datamarking/encoding) and Prompt Shields; enable Defender for Cloud AI alerts and remediate via Defender XDR. [techcommun…rosoft.com], [learn.microsoft.com] Scenario C — Model Extraction via API Abuse Symptom: Spiky usage, long prompts, and systematic probing. Playbook: Enforce rate/shape limits; throttle token windows; monitor with Defender for Cloud and block high‑risk consumers; for OpenAI endpoints, validate Abuse Monitoring telemetry and adjust content filters. [learn.microsoft.com], [learn.microsoft.com] Product‑by‑Product Implementation Guide (Quick Start) Data Governance & Provenance Microsoft Purview Data Governance GA: unify cataloging, lineage, and policy; integrate with Fabric; use embedded Copilot to accelerate stewardship. [microsoft.com], [azure.microsoft.com] Secure Training Azure ML with Managed VNet + Private Endpoints; use Confidential VMs (DCasv5/ECasv5) or SGX/TDX where enclave isolation is required; extend to AKS confidential nodes for containerized training. [learn.microsoft.com], [learn.microsoft.com] Responsible AI Responsible AI Dashboard & Scorecards for fairness/interpretability/error analysis—use as release artifacts at change control. 
[learn.microsoft.com] Runtime Safety & Threat Detection Azure AI Content Safety (Prompt Shields, groundedness, protected material detection) + Defender for Cloud AI Threat Protection (alerts for leakage/poisoning/jailbreak/credential theft) integrated to Defender XDR. [ai.azure.com], [learn.microsoft.com] Enterprise‑grade LLM Access Azure OpenAI / Direct Models: data isolation, residency (Data Zones), and clear privacy commitments for commercial & public sector customers. [learn.microsoft.com], [azure.microsoft.com], [blogs.microsoft.com] Monitoring & Continuous Improvement Azure ML Model Monitoring (drift/quality) + Event Grid triggers for auto‑retrain; instrument with Application Insights for latency/reliability. [learn.microsoft.com] Policy & Governance: Map → Measure → Manage (NIST AI RMF) Align your controls to NIST’s four functions: Govern: Define AI security policies: dataset admission, cryptographic signing, registry controls, and red‑team requirements. [nvlpubs.nist.gov] Map: Inventory models, data, and dependencies (Purview catalog + SBOM/AIBOM). [microsoft.com], [github.com] Measure: RAI metrics (fairness, explainability), drift thresholds, and runtime attack rates (Defender/Content Safety). [learn.microsoft.com], [learn.microsoft.com] Manage: Automate mitigations: block unsigned artifacts, quarantine suspect datasets, rotate keys, and retrain on alerts. [nist.gov] What “Good” Looks Like: A 90‑Day Hardening Plan Days 0–30: Establish Foundations Turn on Purview scans across Fabric/SQL/Storage; define sensitivity labels + DLP. [microsoft.com] Lock Azure ML workspaces into Managed VNet, Private Endpoints, and Managed Identity. [learn.microsoft.com], [microsoft.github.io] Move training to Confidential VMs for sensitive projects. [learn.microsoft.com] Days 31–60: Shift‑Left & Gate Releases Integrate RAI Dashboard/Scorecards into CI; add ATLAS + OWASP LLM checks to release gates. [learn.microsoft.com], [atlas.mitre.org], [owasp.org] Require SBOM/AIBOM and artifact signing for models. [codesecure.com], [github.com] Days 61–90: Runtime Defense & Observability Enable Defender for Cloud – AI Threat Protection and Azure AI Content Safety; wire alerts to Defender XDR. [learn.microsoft.com], [ai.azure.com] Roll out Model Monitoring (drift/quality) with auto‑retrain triggers via Event Grid. [learn.microsoft.com] FAQ: Common Leadership Questions Q: Do differential privacy and adversarial training “solve” poisoning? A: They reduce risk envelopes but do not eliminate attacks—plan for layered defenses and continuous validation. [arxiv.org], [dp-ml.github.io] Q: How do we prevent indirect prompt injection in agentic apps? A: Combine Spotlighting patterns, Prompt Shields, least‑privilege tool access, explicit consent for sensitive actions, and Defender for Cloud runtime alerts. [techcommun…rosoft.com], [learn.microsoft.com] Q: Can we use Azure OpenAI without contributing our data to model training? A: Yes—Azure Direct Models keep your prompts/completions private, not used to train foundation models without your permission; with Data Zones, you can align residency. [learn.microsoft.com], [azure.microsoft.com] Closing As your organization scales AI, the pipeline is the perimeter. Treat every stage—from data capture to model deployment—as a control point with verifiable lineage, signed artifacts, network isolation, runtime detection, and continuous risk measurement. But securing the pipeline is only part of the story—what about the models themselves? 
In our next post, we’ll dive into hardening AI models against adversarial attacks, exploring techniques to detect, mitigate, and build resilience against threats that target the very core of your AI systems. Key Takeaway Securing AI requires protecting the entire pipeline—from data collection to deployment and monitoring—not just individual models. Zero Trust for AI: Embed security controls at every stage (data governance, isolated training, signed artifacts, runtime threat detection) for integrity and compliance. Main threats and mitigations by stage: Data Collection: Risks include poisoning and PII leakage; mitigate with data classification, lineage tracking, and DLP. Data Preparation: Watch for feature backdoors and tampering; use versioning, code review, and integrity checks. Model Training: Risks are environment compromise and model theft; mitigate with confidential computing, network isolation, and managed identities. Validation & Red Teaming: Prompt injection and unbounded consumption are key risks; address with prompt sanitization, output encoding, and adversarial testing. Supply Chain & Registry: Backdoored models and dependency confusion; use trusted registries, artifact signing, and strict pipeline controls. Deployment & Runtime: Adversarial inputs and API abuse; mitigate with rate limiting, context segmentation, and Defender for Cloud AI threat protection. Monitoring: Watch for data/prediction drift and cost abuse; enable continuous monitoring, drift detection, and automated retraining. References NIST AI RMF (Core + Generative AI Profile) – governance lens for pipeline risks. [nist.gov], [nist.gov] MITRE ATLAS – adversary tactics & techniques against AI systems. [atlas.mitre.org] OWASP Top 10 for LLM Applications / GenAI Project – practical guidance for LLM‑specific risks. [owasp.org] Azure Confidential Computing – protect data‑in‑use with SEV‑SNP/TDX/SGX and confidential GPUs. [learn.microsoft.com] Microsoft Purview Data Governance – GA feature set for unified data governance & lineage. [microsoft.com] Defender for Cloud – AI Threat Protection – runtime detections and XDR integration. [learn.microsoft.com] Responsible AI Dashboard / Scorecards – fairness & explainability in Azure ML. [learn.microsoft.com] Azure AI Content Safety – Prompt Shields, groundedness detection, protected material checks. [ai.azure.com] Azure ML Model Monitoring – drift/quality monitoring & automated retraining flows. [learn.microsoft.com] #AIPipelineSecurity; #AITrustAndSafety; #SecureAI; #AIModelSecurity; #AIThreatModeling; #SupplyChainSecurity; #DataSecurity
Artificial Intelligence & Security
Understanding Artificial Intelligence Artificial intelligence (AI) refers to computational systems that perform tasks associated with human intelligence, such as learning, reasoning, problem‑solving, perception, and language understanding, by leveraging algorithmic and statistical methods to analyse data and make informed decisions. AI can also be described as the simulation of human intelligence by machines programmed to learn, reason, and act. It blends statistics, machine learning, and robotics to deliver the following outcomes: Prediction: The application of statistical modelling and machine learning techniques to anticipate future outcomes, such as detecting fraudulent transactions. Automation: The utilisation of robotics and artificial intelligence to streamline and execute routine processes, exemplified by automated invoice processing. Augmentation: The enhancement of human decision-making and operational capabilities through AI-driven tools, for instance, AI-assisted sales enablement. Artificial Intelligence: Core Capabilities and Market Outlook Key capabilities of AI include: Data-driven decision-making: Analysing large datasets to generate actionable insights and optimise outcomes. Anomaly detection: Identifying irregular patterns or deviations in data for risk mitigation and quality assurance. Visual interpretation: Processing and understanding visual inputs such as images and videos for applications like computer vision. Natural language understanding: Comprehending and interpreting human language to enable accurate information extraction and contextual responses. Conversational engagement: Facilitating human-like interactions through chatbots, virtual assistants, and dialogue systems. With the exponential growth of data, machine learning models, and computing power, AI is advancing rapidly. According to industry analyst reports, breakthroughs in deep learning and neural network architectures have enabled highly sophisticated applications across diverse sectors, including healthcare, finance, manufacturing, and retail. The global AI market is on a trajectory of significant expansion, projected to increase nearly 5X by 2030, from $391 billion in 2025 to $1.81 trillion. This growth corresponds to a compound annual growth rate (CAGR) of 35.9% during the forecast period. These projections are estimates and subject to change given the pace of growth and advancement in the AI era. AI and Cloud Synergy AI and cloud computing form a powerful technological combination. Digital assistants offer scalable, cloud-powered intelligence. Cloud platforms such as Azure provide pre-trained models and services, enabling businesses to deploy AI solutions efficiently. Core AI Workload Capabilities Machine Learning Machine learning (ML) underpins most AI systems by enabling models to learn from historical and real-time data to make predictions, classifications, and recommendations. These models adapt over time as they are exposed to new data, improving accuracy and robustness. Example use cases: Credit risk scoring in banking, demand forecasting in retail, and predictive maintenance in manufacturing. Anomaly Detection Anomaly detection techniques identify deviations from expected patterns in data, systems, or processes. This capability is critical for risk management and operational resilience, as it enables early detection of fraud, security breaches, or equipment failures. Example use cases: Fraud detection in financial transactions, network intrusion monitoring in cybersecurity, and quality control in industrial production.
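As a simple illustration of the anomaly detection idea, the sketch below flags transaction amounts that deviate strongly from the historical mean using a z-score rule; real fraud and intrusion systems use far richer features and models, and the threshold and synthetic data here are assumptions made for the example.

```python
# Illustrative only: flag transaction amounts that deviate strongly from the
# historical mean using a z-score rule. The threshold and data are assumptions.
import numpy as np

def flag_anomalies(amounts: np.ndarray, z_threshold: float = 3.0) -> np.ndarray:
    mean, std = amounts.mean(), amounts.std()
    z_scores = np.abs(amounts - mean) / (std + 1e-9)
    return z_scores > z_threshold

history = np.random.normal(loc=80.0, scale=15.0, size=5_000)  # typical card transactions
observed = np.append(history, [900.0, 1250.0])                # two unusually large ones
flags = flag_anomalies(observed)
print(f"{int(flags.sum())} of {len(observed)} transactions flagged for review")
```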
Natural Language Processing (NLP) NLP focuses on enabling machines to understand, interpret, and generate human language in both text and speech formats. This capability powers a wide range of applications that require contextual comprehension and semantic accuracy. Example use cases: Sentiment analysis for customer feedback, document summarisation for legal and compliance teams, and multilingual translation for global operations. Principles of Responsible AI To ensure ethical and trustworthy AI, organisations must embrace: Reliability & Safety Privacy & Security Inclusiveness Fairness Transparency Accountability These principles are embedded in frameworks like the Responsible-AI-Standard and reinforced by governance models such as the Microsoft AI Governance Framework. Responsible AI Principles and Approach | Microsoft AI AI and Security AI introduces both opportunities and risks. A responsible approach to AI security involves three dimensions: Risk Mitigation: Addressing threats from immature or malicious AI applications. Security Applications: Applying AI to enhance security and public safety. Governance Systems: Establishing frameworks to manage AI risks and ensure safe development. Security Risks and Opportunities Due to AI Transformation AI’s transformative nature brings new challenges: Cybersecurity: AI brings new opportunities to track, detect, and act against vulnerabilities in infrastructure and learning models. Data Security: Tools and solutions such as Microsoft Purview help strengthen data security by performing assessments, creating data loss prevention policies, and applying sensitivity labels. Information Security: Securing information remains a major challenge in the AI era, and organisations increasingly rely on AI security frameworks to address it. These concerns are echoed in The Crucial Role of Data Security Posture Management in the AI Era, which highlights insider threats, generative AI risks, and the need for robust data governance. AI in Security Applications AI’s capabilities in data analysis and decision-making enable innovative security solutions: Network Protection: Applications include the use of AI algorithms for intrusion detection, malware detection, security situational awareness, and threat early warning. Data Management: Applications use AI technologies to achieve data protection objectives such as hierarchical classification, leak prevention, and leak traceability. Intelligent Security: Applications use AI to move the security field from passive defence toward intelligent, proactive operation, developing active judgment and timely early warning. Financial Risk Control: Applications use AI to improve the efficiency and accuracy of credit assessment and risk management, and to assist governments in regulating financial transactions. AI Security Management Effective AI security requires: Regulations & Policies: Establish safety-management laws designed for governance by regulatory authorities, along with management policies for key AI application domains and prominent security risks. Standards & Specifications: Industry-wide benchmarks, along with international and domestic standards, can be used to support AI safety.
Technological Methods: Early detection using modern tools such as Defender for AI can help detect, mitigate, and remediate AI threats. Security Assessments: Organizations should use appropriate tools and platforms to evaluate AI risks and perform assessments regularly using an automated approach. Conclusion AI is transforming how organizations operate, innovate, and secure their environments. As AI capabilities evolve, integrating security and governance considerations from the outset remains critical. By combining responsible AI principles, effective governance, and appropriate security measures, organizations can work toward deploying AI technologies in a manner that supports both innovation and trust. Industry projections suggest continued growth in AI‑related security investments over the coming years, reflecting increased focus on managing AI risks alongside its benefits. These estimates are subject to change and should be interpreted in the context of evolving technologies and regulatory developments. Disclaimer References to Microsoft products and frameworks are for informational purposes only and do not imply endorsement, guarantee, or contractual commitment. Market projections referenced are based on publicly available industry analyses and are subject to change.