Microsoft Information Governance
Microsoft Purview Data Governance - Authoring Custom Data Quality rules using expression languages
The cost of poor-quality data runs into millions of dollars in direct losses. When indirect costs—such as missed opportunities—are included, the total impact is many times higher. Poor data quality also creates significant societal costs. It can lead customers to pay higher prices for goods and services and force citizens to bear higher taxes due to inefficiencies and errors. In critical domains, the consequences can be severe. Defective or inaccurate data can result in injury or loss of life, for example due to medication errors or incorrect medical procedures, especially as healthcare increasingly relies on data- and AI-driven decision-making. Students may be unfairly denied admission to universities because of errors in entrance exam scoring. Consumers may purchase unsafe or harmful food products if nutritional labels are inaccurate or misleading. Research and industry measurements show that 20–35 percent of an organization's operating revenue is often wasted on recovering from process failures, data defects, information scrap, and rework caused by poor data quality (Larry P. English, Information Quality Applied).

Data Quality Rules
To maintain high-quality data, organizations must continuously measure and monitor data quality and understand the negative impact of poor-quality data on their specific use cases. Data quality rules play a critical role in objectively measuring, enforcing, and quantifying data quality, enabling organizations to improve trust, reduce risk, and maximize the value of their data assets. Data Quality (DQ) rules define how data should be structured, related, constrained, and validated so it can be trusted for operational, analytical, and AI use cases. Data quality rules are essential guidelines that organizations establish to ensure the accuracy, consistency, and completeness of their data. These rules fall into four major categories: Business Entity rules, Business Attribute rules, Data Dependency rules, and Data Validity rules (Ref: Informit.com/articles).

Business Entity Rules
These rules ensure that core business objects (such as Customer, Order, Account, or Product) are well-defined and correctly related. Business entity rules prevent duplicate records, broken relationships, and incomplete business processes.
- Uniqueness: Every entity instance must be uniquely identifiable. Example: each customer must have a unique Customer ID that is never NULL; duplicate customer records indicate poor data quality.
- Cardinality: Defines how many instances of one entity can relate to another. Example: one customer can place many orders (one-to-many), but an order belongs to exactly one customer.
- Optionality: Defines whether a relationship is mandatory or optional. Example: an order must be linked to a customer (mandatory), but a customer may exist without having placed any orders (optional).

Business Attribute Rules
These rules focus on individual data elements (columns/fields) within business entities. Attribute rules ensure consistency and interpretability and prevent invalid or meaningless values.
- Data Inheritance: Attributes defined in a supertype must be consistent across subtypes. Example: an Account Number remains the same whether the account is Checking or Savings.
- Data Domains: Attribute values must conform to allowed formats or ranges. Examples: State Code must be one of the 50 U.S. state abbreviations; Age must be between 0 and 120; Date must follow the CCYY/MM/DD format.

Data Dependency Rules
These rules define logical and conditional relationships between entities and attributes. Data dependency rules enforce business logic and prevent contradictory or illogical data states.
- Entity Relationship Dependency: The existence of one relationship depends on another condition. Example: orders cannot be placed for customers with a "Delinquent" status.
- Attribute Dependency: The value of one attribute depends on others. Examples: if Loan Status = "Funded", then Loan Amount > 0 and Funding Date is required; Pay Amount = Hours Worked × Hourly Rate; if Monthly Salary > 0, then Commission Rate must be NULL.

Data Validity Rules
These rules ensure that actual data values are complete, correct, accurate, precise, unique, and consistent. Validity rules ensure data is trustworthy for reporting, regulatory compliance, and AI/ML models.
- Completeness: Required records, relationships, attributes, and values must exist. Example: no NULLs in mandatory fields like Customer ID or Order Date.
- Correctness & Accuracy: Values must reflect real-world truth and business rules. Example: a customer's credit limit must align with approved financial records.
- Precision: Data must be stored with the required level of detail. Example: interest rates stored to four decimal places if required for calculations.
- Uniqueness: No duplicate records, keys, definitions, or overloaded columns. Example: a "Customer Type Code" column should not mix customer types and shipping methods.
- Consistency: Duplicate or redundant data must match everywhere it appears. Example: a customer address stored in multiple systems must be identical.
- Compliance: PII and sensitive data must be checked and validated. Example: personal information such as credit card numbers, passport numbers, national IDs, and bank account numbers.

System Rules
Microsoft Purview Data Quality provides both system (out-of-the-box) rules and custom rules, along with an AI-enabled data quality rule recommendation feature. Together, these capabilities help organizations effectively measure, monitor, and improve data quality by applying the right set of data quality rules. System (out-of-the-box) rules cover the majority of business attribute and data validity scenarios; the list of system rules is illustrated in the screenshot below.

Custom Rules
Custom rules allow you to define validations that evaluate one or more values within a row, enabling complex, context-aware data quality checks tailored to specific business requirements. Custom rules support all four major categories of data quality rules: Business Entity rules, Business Attribute rules, Data Dependency rules, and Data Validity rules. You can use regular expression language, the Azure Data Factory expression language, and SQL expression language to create custom rules. A Purview Data Quality custom rule has three parts:

Row expression: This Boolean expression applies to each row that the filter expression approves. If this expression returns true, the row passes. If it returns false, the row fails.

Filter expression: This optional condition narrows down the dataset on which the row condition is evaluated. You activate it by selecting the Use filter expression checkbox. This expression returns a Boolean value. The filter expression applies to a row: if it returns true, that row is considered for the rule; if it returns false, the row is ignored for the purposes of this rule.
The default behavior of the filter expression is to pass all rows, so if you don't specify a filter expression, all rows are considered.

Null expression: Defines how NULL values should be handled. This expression returns a Boolean that handles cases where data is missing. If the expression returns true, the row expression isn't applied.

Each part of the rule works similarly to existing Microsoft Purview Data Quality conditions. A rule only passes if the row expression evaluates to TRUE for the dataset that matches the filter expression and handles missing values as specified in the null expression.

Examples:
1. Ensure that the location of the salesperson is correct. The Azure Data Factory expression language is used to author this rule.
2. Ensure "fare Amount" is positive and "trip Distance" is valid. The SQL expression language is used to author this rule.
3. For each trip, check whether the fare is above the average for its payment type. The SQL expression language is used to author this rule.
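For illustration, the first two examples might be expressed roughly as shown below. This is only a sketch: the column names (salesperson_location, fare_amount, trip_distance, trip_status) and the allowed location codes are hypothetical and would need to match your own dataset, and the exact syntax should be checked against the expression-language references listed below.

```
-- Example 1, row expression in Azure Data Factory expression language
-- (passes when the salesperson's location is one of the approved codes)
salesperson_location == 'NY' || salesperson_location == 'CA' || salesperson_location == 'WA'

-- Example 2, row expression in SQL expression language
-- (fare must be positive and trip distance must fall in a plausible range)
fare_amount > 0 AND trip_distance >= 0 AND trip_distance <= 500

-- Example 2, optional filter expression: only evaluate completed trips
trip_status = 'Completed'

-- Example 2, optional null expression: skip the row check when either value is missing
fare_amount IS NULL OR trip_distance IS NULL
```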
Together, the four categories of data quality rules listed above:
- Prevent errors at the source
- Enforce business logic
- Improve trust in analytics and AI
- Reduce remediation costs downstream

In short, high-quality data is not accidental—it is enforced through well-defined data quality rules across entities, attributes, relationships, and values.

References
Create Data Quality Rules in Unified Catalog | Microsoft Learn
Expression builder in mapping data flows - Azure Data Factory & Azure Synapse | Microsoft Learn
Expression Functions in the Mapping Data Flow - Azure Data Factory & Azure Synapse | Microsoft Learn
http://www.informit.com/articles/article.aspx?p=399325&seqNum=3
Information Quality Applied, Larry P. English

Secure and govern AI apps and agents with Microsoft Purview
The Microsoft Purview family is here to help you secure and govern data across third-party IaaS and SaaS and multi-platform data environments, while helping you meet the compliance requirements you may be subject to. Purview brings simplicity with a comprehensive set of solutions built on a platform of shared capabilities that helps keep your most important asset, your data, safe. With the introduction of AI technology, Purview has also expanded its data coverage to include discovering, protecting, and governing the interactions of AI apps and agents, such as Microsoft Copilots like Microsoft 365 Copilot and Security Copilot, enterprise-built AI apps like ChatGPT Enterprise, and other consumer AI apps like DeepSeek accessed through the browser. To help you view and investigate interactions with all those AI apps, and to create and manage policies to secure and govern them in one centralized place, we have launched Purview Data Security Posture Management (DSPM) for AI. You can learn more about DSPM for AI here, with short video walkthroughs: Learn how Microsoft Purview Data Security Posture Management (DSPM) for AI provides data security and compliance protections for Copilots and other generative AI apps | Microsoft Learn

Purview capabilities for AI apps and agents
To understand our current set of capabilities within Purview to discover, protect, and govern various AI apps and agents, please refer to our Learn doc here: Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn. Here is a quick reference guide for the capabilities available today. Note that DLP for Copilot and adherence to sensitivity labels are currently designed to protect content in Microsoft 365. Thus, Security Copilot and Copilot in Fabric, along with Copilot Studio custom agents that do not use Microsoft 365 as a content source, do not have these features available. Please see the list of AI sites supported by Microsoft Purview DSPM for AI here.

Conclusion
Microsoft Purview can help you discover, protect, and govern the prompts and responses from AI applications in Microsoft Copilot experiences, enterprise AI apps, and other AI apps through its data security and data compliance solutions, while allowing you to view, investigate, and manage interactions in one centralized place in DSPM for AI.
Follow up reading
Check out the deployment guides for DSPM for AI:
How to deploy DSPM for AI - https://aka.ms/DSPMforAI/deploy
How to use DSPM for AI data risk assessment to address oversharing - https://aka.ms/dspmforai/oversharing
Address oversharing concerns with Microsoft 365 blueprint - aka.ms/Copilot/Oversharing

Explore the Purview SDK:
Microsoft Purview SDK Public Preview | Microsoft Community Hub (blog)
Microsoft Purview documentation - purview-sdk | Microsoft Learn
Build secure and compliant AI applications with Microsoft Purview (video)

References for DSPM for AI:
Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn
Considerations for deploying Microsoft Purview AI Hub and data security and compliance protections for Microsoft 365 Copilot and Microsoft Copilot | Microsoft Learn
Block Users From Sharing Sensitive Information to Unmanaged AI Apps Via Edge on Managed Devices (preview) | Microsoft Learn, as part of Scenario 7 of Create and deploy a data loss prevention policy | Microsoft Learn
Commonly used properties in Copilot audit logs - Audit logs for Copilot and AI activities | Microsoft Learn
Supported AI sites by Microsoft Purview for data security and compliance protections | Microsoft Learn
Where Copilot usage data is stored and how you can audit it - Microsoft 365 Copilot data protection and auditing architecture | Microsoft Learn
Downloadable whitepaper: Data Security for AI Adoption | Microsoft

Explore the roadmap for DSPM for AI:
Public roadmap for DSPM for AI - Microsoft 365 Roadmap | Microsoft 365

Unlocking the Power of Microsoft Purview for ChatGPT Enterprise
In today's rapidly evolving technology landscape, data security and compliance are key. Microsoft Purview offers a robust solution for managing and securing interactions with AI-based solutions. This integration not only enhances data governance but also ensures that sensitive information is handled with the appropriate controls. Let's dive into the benefits of this integration and outline the steps to integrate with ChatGPT Enterprise specifically. The integration works for Entra-connected users in the ChatGPT Enterprise workspace; if you have needs that go beyond this, please tell us why and how it impacts you.

Important update 1: Effective May 1, these capabilities require you to enable pay-as-you-go billing in your organization.
Important update 2: From May 19, you are required to create a collection policy to ingest ChatGPT Enterprise information. In DSPM for AI you will find this one-click process.

Benefits of Integrating ChatGPT Enterprise with Microsoft Purview
Enhanced Data Security: By integrating ChatGPT Enterprise with Microsoft Purview, organizations can ensure that interactions are securely captured and stored within their Microsoft 365 tenant. This includes user text prompts and AI app text responses, providing a comprehensive record of communications.
Compliance and Governance: Microsoft Purview offers a range of compliance solutions, including Insider Risk Management, eDiscovery, Communication Compliance, and Data Lifecycle & Records Management. These tools help organizations meet regulatory requirements and manage data effectively.
Customizable Detection: The integration allows detection using built-in and custom classifiers for sensitive information, which can be tailored to meet the specific needs of the organization. This helps ensure that sensitive data is identified and protected. The audit data streams into Advanced Hunting and the Unified Audit log, where it can drive visualisations of trends and other insights.
Seamless Integration: The ChatGPT Enterprise integration uses the Purview API to push data into compliant storage, ensuring that external data sources cannot access and push data directly. This provides an additional layer of security and control.

Step-by-Step Guide to Setting Up the Integration
1. Get the Object ID for the Purview account in your tenant:
Go to portal.azure.com and search for "Microsoft Purview" in the search bar. Click on "Microsoft Purview accounts" from the search results. Select the Purview account you are using and copy the account name.
Go to portal.azure.com and search for "Enterprise" in the search bar. Click on Enterprise applications. Remove the filter for Enterprise Applications. Select All applications under Manage, search for the account name, and copy the Object ID.

2. Assign Graph API roles to your managed identity application:
Assign Purview API roles to your managed identity application by connecting to Microsoft Graph using Cloud Shell in the Azure portal. Open a PowerShell window in portal.azure.com and run the command Connect-MgGraph. Authenticate and sign in to your account.
Run the following cmdlet to get the ServicePrincipal ID for your organization for the Purview API app:
(Get-MgServicePrincipal -Filter "AppId eq '9ec59623-ce40-4dc8-a635-ed0275b5d58a'").id
The app role assignment below grants the permission Purview.ProcessConversationMessages.All to the Microsoft Purview account, allowing classification processing. Update the ObjectId placeholders to the Object ID retrieved in step 1 (for both the command and the body parameter), and update the ResourceId to the ServicePrincipal ID retrieved in the last step.
$bodyParam= @{
"PrincipalId"= "{ObjectID}"
"ResourceId" = "{ResourceId}"
"AppRoleId" = "{a4543e1f-6e5d-4ec9-a54a-f3b8c156163f}"
}
New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId '{ObjectId}' -BodyParameter $bodyParam
It will look something like this from the command line.
We also need to add a permission for the application to read user accounts, so that ChatGPT Enterprise users are correctly mapped to their Entra accounts. First run the following command to get the ServicePrincipal ID for your organization for the Graph app:
(Get-MgServicePrincipal -Filter "AppId eq '00000003-0000-0000-c000-000000000000'").id
The following step adds the permission User.Read.All to the Purview application. Update the ObjectId placeholders with the Object ID retrieved in step 1, and update the ResourceId with the ServicePrincipal ID retrieved in the last step.
$bodyParam= @{
"PrincipalId"= "{ObjectID}"
"ResourceId" = "{ResourceId}"
"AppRoleId" = "{df021288-bdef-4463-88db-98f22de89214}"
}
New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId '{ObjectId}' -BodyParameter $bodyParam
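To sanity-check the result before moving on, you can list the app roles now assigned to the Purview service principal. This is a minimal sketch; $purviewObjectId is a placeholder for the Object ID captured in step 1, and you should see two assignments whose AppRoleId values match the GUIDs used above.

```powershell
# Placeholder: the Object ID of the Purview account's service principal (from step 1)
$purviewObjectId = "<ObjectID>"

# List the app role assignments granted to the Purview service principal
Get-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $purviewObjectId |
    Select-Object AppRoleId, ResourceDisplayName, CreatedDateTime
```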
3. Store the ChatGPT Enterprise API key in Key Vault:
The steps for setting up the Key Vault integration for the Data Map can be found here: Create and manage credentials for scans in the Microsoft Purview Data Map | Microsoft Learn. When set up, you will see something like this in Key Vault.

4. Integrate the ChatGPT Enterprise workspace with Purview:
Create a new data source in the Purview Data Map that connects to the ChatGPT Enterprise workspace.
Go to purview.microsoft.com and select Data Map (search for it if you do not see it on the first screen).
Select Data sources.
Select Register.
Search for ChatGPT Enterprise and select it.
Provide your ChatGPT Enterprise ID.
Create the first scan by selecting Table view and filtering on ChatGPT.
Add your Key Vault credentials to the scan.
Test the connection and, once complete, click Continue.
When you click Continue the following screen will show up; if everything is OK, click Save and run.
Validate the progress by clicking on the scan name. Completion of the first full scan may take an extended period of time; depending on size, it may take more than 24 hours to complete. If you click on the scan name, you can expand all the runs for that scan. When the scan completes, you can start to make use of the DSPM for AI experience to review interactions with ChatGPT Enterprise. The mapping to users is based on the ChatGPT Enterprise connection to Entra, with prompts and responses stored in the user's mailbox.

5. Review and monitor data:
Please see this article for the required permissions and guidance around Microsoft Purview Data Security Posture Management (DSPM) for AI: Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn. Use Purview DSPM for AI analytics and Activity Explorer to review interactions and classifications. You can expand on prompts and responses in ChatGPT Enterprise.

6. Microsoft Purview Communication Compliance
Communication Compliance (hereafter CC) is a feature of Microsoft Purview that allows you to monitor and detect inappropriate or risky interactions with ChatGPT Enterprise. You can monitor and detect requests and responses that are inappropriate based on ML models, regular Sensitive Information Types, and other classifiers in Purview. This can help you identify jailbreak and prompt injection attacks and flag them to IRM and for case management.
Detailed steps to configure CC policies and supported configurations can be found here.

7. Microsoft Purview Insider Risk Management
We believe that Microsoft Purview Insider Risk Management (hereafter IRM) can serve a key role in protecting your AI workloads long term. With its adaptive protection capabilities, IRM dynamically adjusts user access based on evolving risk levels. In the event of heightened risk, IRM can enforce Data Loss Prevention (DLP) policies on sensitive content, apply tailored Entra Conditional Access policies, and initiate other necessary actions to effectively mitigate potential risks. This strategic approach helps you apply more stringent policies where they matter, avoiding a boil-the-ocean approach, so your team can get started using AI. To get started, use the signals that are available to you, including CC signals, to raise IRM tickets and enforce adaptive protection. You should create your own custom IRM policy for this, and include Defender signals as well. Based on elevated risk, you may choose to block users from accessing certain assets such as ChatGPT Enterprise. Please see this article for more detail: Block access for users with elevated insider risk - Microsoft Entra ID | Microsoft Learn.

8. eDiscovery
eDiscovery of AI interactions is crucial for legal compliance, transparency, accountability, risk management, and data privacy protection. Many industries must preserve and discover electronic communications and interactions to meet regulatory requirements. Including AI interactions in eDiscovery ensures organizations comply with these obligations and preserves relevant evidence for litigation. This process also helps maintain trust by enabling the review of AI decisions and actions, demonstrating due diligence to regulators. Microsoft Purview eDiscovery solutions | Microsoft Learn

9. Data Lifecycle Management
Microsoft Purview offers robust solutions to manage AI data from creation to deletion, including classification, retention, and secure disposal. This ensures that AI interactions are preserved and retrievable for audits, litigation, and compliance purposes. Please see this article for more information: Automatically retain or delete content by using retention policies | Microsoft Learn.

Closing
By following these steps, organizations can leverage the full potential of Microsoft Purview to enhance the security and compliance of their ChatGPT Enterprise interactions. This integration not only provides peace of mind but also empowers organizations to manage their data more effectively. The integration is still in preview and some of the features listed are not yet fully integrated; please reach out to us if you have any questions or if you have additional requirements.

Introducing Microsoft Security Store
Security is being reengineered for the AI era—moving beyond static, rulebound controls and after-the-fact response toward platform-led, machine-speed defense. We recognize that defending against modern threats requires the full strength of an ecosystem, combining our unique expertise and shared threat intelligence. But with so many options out there, it’s tough for security professionals to cut through the noise, and even tougher to navigate long procurement cycles and stitch together tools and data before seeing meaningful improvements. That’s why we built Microsoft Security Store - a storefront designed for security professionals to discover, buy, and deploy security SaaS solutions and AI agents from our ecosystem partners such as Darktrace, Illumio, and BlueVoyant. Security SaaS solutions and AI agents on Security Store integrate with Microsoft Security products, including Sentinel platform, to enhance end-to-end protection. These integrated solutions and agents collaborate intelligently, sharing insights and leveraging AI to enhance critical security tasks like triage, threat hunting, and access management. In Security Store, you can: Buy with confidence – Explore solutions and agents that are validated to integrate with Microsoft Security products, so you know they’ll work in your environment. Listings are organized to make it easy for security professionals to find what’s relevant to their needs. For example, you can filter solutions based on how they integrate with your existing Microsoft Security products. You can also browse listings based on their NIST Cybersecurity Framework functions, covering everything from network security to compliance automation — helping you quickly identify which solutions strengthen the areas that matter most to your security posture. Simplify purchasing – Buy solutions and agents with your existing Microsoft billing account without any additional payment setup. For Azure benefit-eligible offers, eligible purchases contribute to your cloud consumption commitments. You can also purchase negotiated deals through private offers. Accelerate time to value – Deploy agents and their dependencies in just a few steps and start getting value from AI in minutes. Partners offer ready-to-use AI agents that can triage alerts at scale, analyze and retrieve investigation insights in real time, and surface posture and detection gaps with actionable recommendations. A rich ecosystem of solutions and AI agents to elevate security posture In Security Store, you’ll find solutions covering every corner of cybersecurity—threat protection, data security and governance, identity and device management, and more. To give you a flavor of what is available, here are some of the exciting solutions on the store: Darktrace’s ActiveAI Security SaaS solution integrates with Microsoft Security to extend self-learning AI across a customer's entire digital estate, helping detect anomalies and stop novel attacks before they spread. The Darktrace Email Analysis Agent helps SOC teams triage and threat hunt suspicious emails by automating detection of risky attachments, links, and user behaviors using Darktrace Self-Learning AI, integrated with Microsoft Defender and Security Copilot. This unified approach highlights anomalous properties and indicators of compromise, enabling proactive threat hunting and faster, more accurate response. 
Illumio for Microsoft Sentinel combines Illumio Insights with Microsoft Sentinel data lake and Security Copilot to enhance detection and response to cyber threats. It fuses data from Illumio and all the other sources feeding into Sentinel to deliver a unified view of threats across millions of workloads. AI-driven breach containment from Illumio gives SOC analysts, incident responders, and threat hunters unified visibility into lateral traffic threats and attack paths across hybrid and multi-cloud environments, to reduce alert fatigue, prioritize threat investigation, and instantly isolate workloads. Netskope’s Security Service Edge (SSE) platform integrates with Microsoft M365, Defender, Sentinel, Entra and Purview for identity-driven, label-aware protection across cloud, web, and private apps. Netskope's inline controls (SWG, CASB, ZTNA) and advanced DLP, with Entra signals and Conditional Access, provide real-time, context-rich policies based on user, device, and risk. Telemetry and incidents flow into Defender and Sentinel for automated enrichment and response, ensuring unified visibility, faster investigations, and consistent Zero Trust protection for cloud, data, and AI everywhere. PERFORMANTA Email Analysis Agent automates deep investigations into email threats, analyzing metadata (headers, indicators, attachments) against threat intelligence to expose phishing attempts. Complementing this, the IAM Supervisor Agent triages identity risks by scrutinizing user activity for signs of credential theft, privilege misuse, or unusual behavior. These agents deliver unified, evidence-backed reports directly to you, providing instant clarity and slashing incident response time. Tanium Autonomous Endpoint Management (AEM) pairs realtime endpoint visibility with AI-driven automation to keep IT environments healthy and secure at scale. Tanium is integrated with the Microsoft Security suite—including Microsoft Sentinel, Defender for Endpoint, Entra ID, Intune, and Security Copilot. Tanium streams current state telemetry into Microsoft’s security and AI platforms and lets analysts pivot from investigation to remediation without tool switching. Tanium even executes remediation actions from the Sentinel console. The Tanium Security Triage Agent accelerates alert triage, enabling security teams to make swift, informed decisions using Tanium Threat Response alerts and real-time endpoint data. Walkthrough of Microsoft Security Store Now that you’ve seen the types of solutions available in Security Store, let’s walk through how to find the right one for your organization. You can get started by going to the Microsoft Security Store portal. From there, you can search and browse solutions that integrate with Microsoft Security products, including a dedicated section for AI agents—all in one place. If you are using Microsoft Security Copilot, you can also open the store from within Security Copilot to find AI agents - read more here. Solutions are grouped by how they align with industry frameworks like NIST CSF 2.0, making it easier to see which areas of security each one supports. You can also filter by integration type—e.g., Defender, Sentinel, Entra, or Purview—and by compliance certifications to narrow results to what fits your environment. To explore a solution, click into its detail page to view descriptions, screenshots, integration details, and pricing. 
For AI agents, you’ll also see the tasks they perform, the inputs they require, and the outputs they produce —so you know what to expect before you deploy. Every listing goes through a review process that includes partner verification, security scans on code packages stored in a secure registry to protect against malware, and validation that integrations with Microsoft Security products work as intended. Customers with the right permissions can purchase agents and SaaS solutions directly through Security Store. The process is simple: choose a partner solution or AI agent and complete the purchase in just a few clicks using your existing Microsoft billing account—no new payment setup required. Qualifying SaaS purchases also count toward your Microsoft Azure Consumption Commitment (MACC), helping accelerate budget approvals while adding the security capabilities your organization needs. Security and IT admins can deploy solutions directly from Security Store in just a few steps through a guided experience. The deployment process automatically provisions the resources each solution needs—such as Security Copilot agents and Microsoft Sentinel data lake notebook jobs—so you don’t have to do so manually. Agents are deployed into Security Copilot, which is built with security in mind, providing controls like granular agent permissions and audit trails, giving admins visibility and governance. Once deployment is complete, your agent is ready to configure and use so you can start applying AI to expand detection coverage, respond faster, and improve operational efficiency. Security and IT admins can view and manage all purchased solutions from the “My Solutions” page and easily navigate to Microsoft Cost Management tools to track spending and manage subscriptions. Partners: grow your business with Microsoft For security partners, Security Store opens a powerful new channel to reach customers, monetize differentiated solutions, and grow with Microsoft. We will showcase select solutions across relevant Microsoft Security experiences, starting with Security Copilot, so your offerings appear in the right context for the right audience. You can monetize both SaaS solutions and AI agents through built-in commerce capabilities, while tapping into Microsoft’s go-to-market incentives. For agent builders, it’s even simpler—we handle the entire commerce lifecycle, including billing and entitlement, so you don’t have to build any infrastructure. You focus on embedding your security expertise into the agent, and we take care of the rest to deliver a seamless purchase experience for customers. Security Store is built on top of Microsoft Marketplace, which means partners publish their solution or agent through the Microsoft Partner Center - the central hub for managing all marketplace offers. From there, create or update your offer with details about how your solution integrates with Microsoft Security so customers can easily discover it in Security Store. Next, upload your deployable package to the Security Store registry, which is encrypted for protection. Then define your license model, terms, and pricing so customers know exactly what to expect. Before your offer goes live, it goes through certification checks that include malware and virus scans, schema validation, and solution validation. These steps help give customers confidence that your solutions meet Microsoft’s integration standards. 
Get started today
By creating a storefront optimized for security professionals, we are making it simple to find, buy, and deploy solutions and AI agents that work together. Microsoft Security Store helps you put the right AI‑powered tools in place so your team can focus on what matters most—defending against attackers with speed and confidence. Get started today by visiting Microsoft Security Store. If you’re a partner looking to grow your business with Microsoft, start by visiting Microsoft Security Store - Partner with Microsoft to become a partner. Partners can list their solution or agent if their solution has a qualifying integration with Microsoft Security products, such as a Sentinel connector or Security Copilot agent, or another qualifying MISA solution integration. You can learn more about qualifying integrations and the listing process in our documentation here.

Cybersecurity: What Every Business Leader Needs to Know Now
As a Senior Cybersecurity Solution Architect, I’ve had the privilege of supporting organisations across the United Kingdom, Europe, and the United States—spanning sectors from finance to healthcare—in strengthening their security posture. One thing has become abundantly clear: cybersecurity is no longer the sole domain of IT departments. It is a strategic imperative that demands attention at board-level. This guide distils five key lessons drawn from real-world engagements to help executive leaders navigate today’s evolving threat landscape. These insights are not merely technical—they are cultural, operational, and strategic. If you’re a C-level executive, this article is a call to action: reassess how your organisation approaches cybersecurity before the next breach forces the conversation. In this article, I share five lessons (and quotes) from the field that help demystify how to enhance an organisation’s security posture. 1. Shift the Mindset “This has always been our approach, and we’ve never experienced a breach—so why should we change it?” A significant barrier to effective cybersecurity lies not in the sophistication of attackers, but in the predictability of human behaviour. If you’ve never experienced a breach, it’s tempting to maintain the status quo. However, as threats evolve, so too must your defences. Many cyber threats exploit well-known vulnerabilities that remain unpatched or rely on individuals performing routine tasks in familiar ways. Human nature tends to favour comfort and habit—traits that adversaries are adept at exploiting. Unlike many organisations, attackers readily adopt new technologies to advance their objectives, including AI-powered ransomware to execute increasingly sophisticated attacks. It is therefore imperative to recognise—without delay—that the advent of AI has dramatically reduced both the effort and time required to compromise systems. As the UK’s National Cyber Security Centre (NCSC) has stated: “AI lowers the barrier for novice cyber criminals, hackers-for-hire and hacktivists to carry out effective access and information gathering operations. This enhanced access will likely contribute to the global ransomware threat over the next two years.” Similarly, McKinsey & Company observed: “As AI quickly advances cyber threats, organisations seem to be taking a more cautious approach, balancing the benefits and risks of the new technology while trying to keep pace with attackers’ increasing sophistication.” To counter this evolving threat landscape, organisations must proactively leverage AI in their cyber defence strategies. Examples include: Identity and Access Management (IAM): AI enhances IAM by analysing real-time signals across systems to detect risky sign-ins and enforce adaptive access controls. Example: Microsoft Entra Agents for Conditional Access use AI to automate policy recommendations, streamlining access decisions with minimal manual input. Figure 1: Microsoft Entra Agents Threat Detection: AI accelerates detection, response, and recovery, helping organisations stay ahead of sophisticated threats. Example: Microsoft Defender for Cloud’s AI threat protection identifies prompt injection, data poisoning, and wallet attacks in real time. Incident Response: AI facilitates real-time decision-making, removing emotional bias and accelerating containment and recovery during security incidents. Example: Automatic Attack Disruption in Defender XDR, which can automatically contain a breach in progress. 
AI Security Posture Management AI workloads require continuous discovery, classification, and protection across multi-cloud environments. Example: Microsoft Defender for Cloud’s AI Security Posture Management secures custom AI apps across Azure, AWS, and GCP by detecting misconfigurations, vulnerabilities, and compliance gaps. Data Security Posture Management (DSPM) for AI AI interactions must be governed to ensure privacy, compliance, and insider risk mitigation. Example: Microsoft Purview DSPM for AI enables prompt auditing, applies Data Loss Prevention (DLP) policies to third-party AI apps like ChatGPT, and supports eDiscovery and lifecycle management. AI Threat Protection Organisations must address emerging AI threat vectors, including prompt injection, data leakage, and model exploitation. Example: Defender for AI (private preview) provides model-level security, including governance, anomaly detection, and lifecycle protection. Embracing innovation, automation, and intelligent defence is the secret sauce for cyber resilience in 2026. 2. Avoid One-Off Purchases – Invest with a Strategy “One MDE and one Sentinel to go, please.” Organisations often approach me intending to purchase a specific cybersecurity product—such as Microsoft Defender for Endpoint (MDE)—without a clearly articulated strategic rationale. My immediate question is: what is the broader objective behind this purchase? Is it driven by perceived value or popularity, or does it form part of a well-considered strategy to enhance endpoint security? Cybersecurity investments should be guided by a long-term, holistic strategy that spans multiple years and is periodically reassessed to reflect evolving threats. Strengthening endpoint protection must be integrated into a wider effort to improve the organisation’s overall security posture. This includes ensuring seamless integration between security solutions and avoiding operational silos. For example, deploying robust endpoint protection is of limited value if identities are not safeguarded with multi-factor authentication (MFA), or if storage accounts remain publicly accessible. A cohesive and forward-looking approach ensures that all components of the security architecture work in concert to mitigate risk effectively. Security Adoption Journey (Based on Zero Trust Framework) Assess – Evaluate the threat landscape, attack surface, vulnerabilities, compliance obligations, and critical assets. Align – Link security objectives to broader business goals to ensure strategic coherence. Architect – Design integrated and scalable security solutions, addressing gaps and eliminating operational silos. Activate – Implement tools with robust governance and automation to ensure consistent policy enforcement. Advance – Continuously monitor, test, and refine the security posture to stay ahead of evolving threats. Security tools are not fast food—they work best as part of a long-term plan, not a one-off order. This piecemeal approach runs counter to the modern Zero Trust security model, which assumes no single tool will prevent every breach and instead implements layered defences and integration. 3. 
Legacy Systems Are Holding You Back “Unfortunately, we are unable to implement phishing-resistant MFA, as our legacy app does not support integration with the required protocols.” A common challenge faced by many organisations I have worked with is the constraint on innovation within their cybersecurity architecture, primarily due to continued reliance on legacy applications—often driven by budgetary or operational necessity. These outdated systems frequently lack compatibility with modern security technologies and may introduce significant vulnerabilities. A notable example is the deployment of phishing-resistant multi-factor authentication (MFA)—such as FIDO2 security keys or certificate-based authentication—which requires advanced identity protocols and conditional access policies. These capabilities are available exclusively through Microsoft Entra ID. To address this issue effectively, it is essential to design security frameworks based on the organisation’s future aspirations rather than its current limitations. By adopting a forward-thinking approach, organisations can remain receptive to emerging technologies that align with their strategic cybersecurity objectives. Moreover, this perspective encourages investment in acquiring the necessary talent, thereby reducing reliance on extensive change management and staff retraining. I advise designing for where you want to be in the next 1–3 years—ideally cloud-first and identity-driven—essentially adopting a Zero Trust architecture, rather than being constrained by the limitations of legacy systems. 4. Collaboration Is a Security Imperative “This item will need to be added to the dev team's backlog. Given their current workload, they will do their best to implement GitHub Security in Q3, subject to capacity.” Cybersecurity threats may originate from various parts of an organisation, and one of the principal challenges many face is the fragmented nature of their defence strategies. To effectively mitigate such risks, cybersecurity must be embedded across all departments and functions, rather than being confined to a single team or role. In many organisations, the Chief Information Security Officer (CISO) operates in isolation from other C-level executives, which can limit their influence and complicate the implementation of security measures across the enterprise. Furthermore, some teams may lack the requisite expertise to execute essential security practices. For instance, an R&D lead responsible for managing developers may not possess the necessary skills in DevSecOps. To address these challenges, it is vital to ensure that the CISO is empowered to act without political or organisational barriers and is supported in implementing security measures across all business units. When the CISO has backing from the COO and HR, initiatives such as MFA rollout happen faster and more thoroughly. 
Cross-Functional Security Responsibilities
R&D: Adopt DevSecOps practices; identify vulnerabilities early; manage code dependencies; detect exposed secrets; embed security in CI/CD pipelines.
CIO: Ensure visibility over organizational data; implement Data Loss Prevention (DLP); safeguard the sensitive data lifecycle; ensure regulatory compliance.
CTO: Secure cloud environments (CSPM); manage SaaS security posture (SSPM); ensure hardware and endpoint protection.
COO: Protect digital assets; secure domain management; mitigate impersonation threats; safeguard digital marketing channels and customer PII.
Support & Vendors: Deliver targeted training; prevent social engineering attacks; improve awareness of threat vectors.
HR: Train employees on AI-related threats; manage insider risks; secure employee data; oversee cybersecurity across the employee lifecycle.
Empowering the CISO to act across departments helps organisations shift towards a security-first culture—embedding cybersecurity into every function, not just IT.

5. Compliance Is Not Security
“We’re compliant, so we must be secure.”
Many organisations mistakenly equate passing audits—such as ISO 27001 or SOC 2—with being secure. While compliance frameworks help establish a baseline for security, they are not a guarantee of protection. Determined attackers are not deterred by audit checklists; they exploit gaps, misconfigurations, and human error regardless of whether an organisation is certified. Moreover, due to the rapidly evolving nature of the cyber threat landscape, compliance frameworks often struggle to keep pace. By the time a standard is updated, attackers may already be exploiting new techniques that fall outside its scope. This lag creates a false sense of security for organisations that rely solely on regulatory checkboxes. Security is a continuous risk management process—not a one-time certification. It must be embedded into every layer of the enterprise and treated with the same urgency as other core business priorities. Compliance may be the starting line, not the finish line. Effective security goes beyond meeting regulatory requirements—it demands ongoing vigilance, adaptability, and a proactive mindset.

Conclusion: Cybersecurity Is a Continuous Discipline
Cybersecurity is not a destination—it is a continuous journey. By embracing strategic thinking, cross-functional collaboration, and emerging technologies, organisations can build resilience against today’s threats and tomorrow’s unknowns. The lessons shared throughout this article are not merely technical—they are cultural, operational, and strategic. If there is one key takeaway, it is this: avoid piecemeal fixes and instead adopt an integrated, future-ready security strategy. Due to the rapidly evolving nature of the cyber threat landscape, compliance frameworks alone cannot keep pace. Security must be treated as a dynamic, ongoing process—one that is embedded into every layer of the enterprise and reviewed regularly. Organisations should conduct periodic security posture reviews, leveraging tools such as Microsoft Secure Score or monthly risk reports, and stay informed about emerging threats through threat intelligence feeds and resources like the Microsoft Digital Defence Report, CISA (Cybersecurity and Infrastructure Security Agency), NCSC (UK National Cyber Security Centre), and other open-source intelligence platforms.
As Ann Johnson aptly stated in her blog: “The most prepared organisations are those that keep asking the right questions and refining their approach together.” Cyber resilience demands ongoing investment—in people (through training and simulation drills), in processes (via playbooks and frameworks), and in technology (through updates and adoption of AI-driven defences). To reduce cybersecurity risk over time, resilient organisations must continually refine their approach and treat cybersecurity as an ongoing discipline. The time to act is now.

Resources:
https://www.ncsc.gov.uk/report/impact-of-ai-on-cyber-threat
Defend against cyber threats with AI solutions from Microsoft - Microsoft Industry Blogs
Generative AI Cybersecurity Solutions | Microsoft Security
Require phishing-resistant multifactor authentication for Microsoft Entra administrator roles - Microsoft Entra ID | Microsoft Learn
AI is the greatest threat—and defense—in cybersecurity today. Here’s why.
Microsoft Entra Agents - Microsoft Entra | Microsoft Learn
Smarter identity security starts with AI
https://www.microsoft.com/en-us/security/blog/2025/06/12/cyber-resilience-begins-before-the-crisis/
https://www.microsoft.com/en-us/security/security-insider/threat-landscape/microsoft-digital-defense-report-2023-critical-cybersecurity-challenges

Elevating Trust in Data through Data Quality in the AI Era
Data collection and utilization are growing rapidly, and organizations are increasingly relying on data as they transition into the era of AI. However, many face significant challenges in effectively managing investments across cloud, data, and AI. This is largely due to a lack of visibility across their entire data estate—which is often fragmented across silos, heterogeneous systems, on-premises environments, and the cloud. Concerns about data trustworthiness, AI-readiness, and uncertainty around security and compliance further complicate the ability to drive timely business insights. Elevating data quality means going beyond merely identifying issues—it's about equipping data stewards, analysts, and engineers with the right tools to proactively improve trust, consistency, and readiness of data for AI, analytics, and operations.

Despite the critical importance of data quality, many organizations struggle to activate the full value of their data estate. Research shows that 75% of companies do not have a formal data quality program. This is alarming, especially as data quality has become a cornerstone of successful AI initiatives. Simply put: your AI is only as good as your data. In the past, when humans were always in the loop, minor data quality issues could be corrected manually. But in today’s world—where AI interprets not just the structure but the content of the data—any inconsistencies, inaccuracies, or noncompliance in the data can directly lead to flawed insights, unreliable AI outputs, and poor business decisions. Bad data can make your AI wrong. It can make your BI reports misleading. And it can impact your organization's credibility—as well as your reputation as a data professional. After all, there’s nothing worse than building something that no one trusts or uses.

That’s why defining and deploying a robust data quality framework is more critical now than ever before. Organizations must establish a data quality maturity model, track quality across their data estate, and take continuous improvement and remediation actions. Key steps to maintaining data quality and ensuring the health of your data estate include:
- Define the scope – Identify which data is needed for specific business use cases.
- Measure quality – Assess whether data meets expected standards.
- Analyze findings – Understand patterns, gaps, and root causes.
- Improve quality – Take corrective actions to meet business needs.
- Control quality – Continuously monitor and govern data to maintain high standards.
To ensure your data is fit for purpose, it's essential to establish strong data quality practices within your organization, supported by the right tools, roles, and governance structures.

Profile your data to understand the distribution
Data profiling is the process of analyzing data to understand its structure, content, and quality. It helps uncover patterns, anomalies, missing values, duplicates, and data types—providing valuable insight into the trustworthiness and usability of data for analytics, decision-making, and quality improvement. By examining datasets for structure, relationships, and inconsistencies, data practitioners can identify potential issues and define validation rules for ongoing data quality assurance. Microsoft Purview Unified Data Catalog provides an integrated data quality experience that supports profiling as a foundational step. With Purview, users can profile data to understand its distribution, patterns, and data types—helping inform data quality programs and define rules for continuous monitoring and improvement. Purview's data profiling leverages AI to recommend which columns in a dataset are most critical and should be profiled. Users remain in control and can adjust these recommendations by deselecting suggested columns or adding others. Additionally, all profiling history is preserved, allowing users to compare current profiles with historical patterns to detect changes over time and continuously assess data health.
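To make the idea concrete, the kind of statistics a profile captures can be approximated with a simple query. This is only an illustrative sketch, not the Purview profiling engine; the customers table and its columns are hypothetical.

```sql
-- Approximate a basic profile for a hypothetical customers table:
-- row counts, completeness, uniqueness, and value ranges
SELECT
    COUNT(*)                                              AS row_count,
    COUNT(DISTINCT customer_id)                           AS distinct_customer_ids,
    SUM(CASE WHEN customer_id IS NULL THEN 1 ELSE 0 END)  AS null_customer_ids,
    SUM(CASE WHEN email IS NULL THEN 1 ELSE 0 END)        AS null_emails,
    MIN(age)                                              AS min_age,
    MAX(age)                                              AS max_age
FROM customers;
```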
Define and apply rules to validate your data
Applying rules and constraints is essential to ensure that data conforms to predefined requirements or business logic—such as data type, completeness, uniqueness, and consistency. Data profiling results can be leveraged to define these rules and validate data continuously, helping ensure that it is trustworthy and ready for both business and AI use cases. To achieve this, data quality should be measured across all stages: data at creation, data in motion, and data at rest. While many CRM and web-based applications perform UI-level validations to check user inputs before submission, a significant amount of poor-quality data still enters systems through bulk upload processes. These low-quality records often bypass front-end checks and propagate downstream through the data supply chain, leading to broader data integrity issues. In a Medallion architecture, you can validate and correct data directly in the pipeline. Bad-quality data detected in the bronze layer can trigger notifications to upstream systems to fix issues at the source. The Purview Unified Catalog Data Quality capability provides a user-friendly UI for managing data quality rules. You can configure rules for any supported data source in the cloud or on-premises, or for datasets in the Fabric Lakehouse bronze layer, schedule DQ jobs, and send notifications to data engineers and stewards when issues arise. This proactive monitoring ensures data quality is addressed early—before data progresses through the silver and gold layers of your architecture.
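As an illustration of what such a bronze-layer check might do, the sketch below quarantines rows that violate a completeness and a domain rule so they can be reported back upstream. The table and column names, and the 0–120 age domain, are hypothetical; in Purview these checks would typically be configured as data quality rules rather than hand-written pipeline SQL.

```sql
-- Route rows that fail basic completeness/domain rules into a quarantine table
-- so upstream owners can be notified before data is promoted to the silver layer.
INSERT INTO bronze.customers_quarantine
SELECT *
FROM bronze.customers
WHERE customer_id IS NULL          -- completeness rule
   OR age NOT BETWEEN 0 AND 120;   -- domain rule
```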
Visualizing Data Quality Metrics and Driving Action
Visualizing data quality metrics and trends provides critical insights into the overall health of your data and supports informed, data-driven decision-making. Microsoft Purview Unified Catalog publishes all metadata—including data quality rules and scores—into Fabric OneLake. Data analysts can link Purview metadata with raw data to generate actionable insights. They can also leverage Fabric AI skills to enhance intelligence and integrate with Data Activator to trigger notifications for upstream data publishers and downstream consumers. Alerts and notifications can be configured directly in the Purview Unified Catalog. Data stewards can set thresholds for one or many data assets to automatically notify upstream and downstream contacts if those thresholds are breached. Notifications can be directed to specific individuals or groups (e.g., a support team). These alerts empower data providers to resolve issues at the source, and data engineers can address problems within the bronze layer of their analytical storage—such as in Microsoft Fabric. Additionally, alerts can be used to pause data movement from the bronze to the silver and gold layers, ensuring only high-quality data flows downstream. Users can configure the storage location in the Purview Data Quality solution to publish failed or error records. This allows data stewards and data engineers to review and fix issues, improving data quality before using it for analytics or as input for ML model training.

Integrated Data Observability with Data Quality Scores
Data observability in Microsoft Purview Unified Catalog offers a comprehensive, bird’s-eye view into the health of the data estate as data flows across various sources. Data stewards, domain experts, and those responsible for data health can monitor their entire data landscape from a single unified interface. This centralized view provides visibility into data lineage—from source to consumption—and reveals how data assets map to governance domains. It enables users to understand where data originates and terminates, pinpoint data quality issues, and assess the impact on reporting and compliance obligations. By consolidating metadata into a single, accessible location, users can explore how data quality is evolving, track usage patterns, and understand who is interacting with the data. With full visibility across the data estate, both central and federated data teams can efficiently identify opportunities to improve metadata quality, clarify ownership, enhance data quality, and optimize data architecture.

Summary
Defining and implementing a data quality framework has become more critical than ever. Organizations must establish a data quality maturity model and continuously monitor the health of their data estate to enable ongoing improvement and remediation. Microsoft Purview Unified Catalog empowers governance domain owners and data stewards to assess and manage data quality across the ecosystem, driving informed and targeted improvements. In today’s AI-driven world, the trustworthiness of data is directly linked to the effectiveness of AI insights and recommendations. Unreliable data not only weakens AI outcomes but can also diminish trust in the systems themselves and slow their adoption. Poor data quality or inconsistent data structures can disrupt business operations and impair decision-making. Microsoft Purview addresses these challenges with a no-code/low-code approach to defining data quality rules—including out-of-the-box (OOB) and AI-generated rules. These rules are applied at the column level and rolled up into scores at the data asset, data product, and governance domain levels, offering full visibility into data quality across the enterprise. Purview’s AI-powered data profiling capabilities intelligently recommend which columns to profile, while still allowing human review and refinement. This human-in-the-loop process not only improves the relevance and accuracy of profiling but also feeds back into improving AI model performance over time. Elevating data quality is more than identifying problems—it's about equipping stewards, analysts, and engineers with the tools to proactively build trust, ensure consistency, and prepare data for AI, analytics, and business success.

Hacking Made Easy, Patching Made Optional: A Modern Cyber Tragedy
In today’s cyber threat landscape, the tools and techniques required to compromise enterprise environments are no longer confined to highly skilled adversaries or state-sponsored actors. While artificial intelligence is increasingly being used to enhance the sophistication of attacks, the majority of breaches still rely on simple, publicly accessible tools and well-established social engineering tactics. Another major issue is the persistent failure of enterprises to patch common vulnerabilities in a timely manner—despite the availability of fixes and public warnings. This negligence continues to be a key enabler of large-scale breaches, as demonstrated in several recent incidents. The Rise of AI-Enhanced Attacks Attackers are now leveraging AI to increase the credibility and effectiveness of their campaigns. One notable example is the use of deepfake technology—synthetic media generated using AI—to impersonate individuals in video or voice calls. North Korean threat actors, for instance, have been observed using deepfake videos and AI-generated personas to conduct fraudulent job interviews with HR departments at Western technology companies. These scams are designed to gain insider access to corporate systems or to exfiltrate sensitive intellectual property under the guise of legitimate employment. Social Engineering: Still the Most Effective Entry Point Despite these AI-enhanced tactics, many recent breaches have begun with classic social engineering techniques. In the cases of Coinbase and Marks & Spencer, attackers impersonated employees through phishing or fraudulent communications. Once they had gathered sufficient personal information, they contacted support desks or mobile carriers, convincingly posing as the victims to request password resets or SIM swaps. This impersonation enabled attackers to bypass authentication controls and gain initial access to sensitive systems, which they then leveraged to escalate privileges and move laterally within the network. Threat groups such as Scattered Spider have demonstrated mastery of these techniques, often combining phishing with SIM swap attacks and MFA bypass to infiltrate telecom and cloud infrastructure. Similarly, Solt Thypoon (formerly DEV-0343), linked to North Korean operations, has used AI-generated personas and deepfake content to conduct fraudulent job interviews—gaining insider access under the guise of legitimate employment. These examples underscore the evolving sophistication of social engineering and the need for robust identity verification protocols. Built for Defense, Used for Breach Despite the emergence of AI-driven threats, many of the most successful attacks continue to rely on simple, freely available tools that require minimal technical expertise. These tools are widely used by security professionals for legitimate purposes such as penetration testing, red teaming, and vulnerability assessments. However, they are also routinely abused by attackers to compromise systems. Well-documented case studies exist for tools such as Nmap, Metasploit, Mimikatz, BloodHound, and Cobalt Strike. The dual-use nature of these tools underscores the importance of not only detecting their presence but also understanding the context in which they are being used. From CVE to Compromise While social engineering remains a common entry point, many breaches are ultimately enabled by known vulnerabilities that remain unpatched for extended periods.
For example, the MOVEit Transfer vulnerability (CVE-2023-34362) was exploited by the Cl0p ransomware group to compromise hundreds of organizations, despite a patch being available. Similarly, the OpenMetadata vulnerability (CVE-2024-28255, CVE-2024-28847) allowed attackers to gain access to Kubernetes workloads and leverage them for cryptomining activity days after a fix had been issued. Advanced persistent threat groups such as APT29 (also known as Cozy Bear) have historically exploited unpatched systems to maintain long-term access and conduct stealthy operations. Their use of credential harvesting tools like Mimikatz and lateral movement frameworks such as Cobalt Strike highlights the critical importance of timely patch management—not just for ransomware defense, but also for countering nation-state actors. Recommendations To reduce the risk of enterprise breaches stemming from tool misuse, social engineering, and unpatched vulnerabilities, organizations should adopt the following practices: 1. Patch Promptly and Systematically Ensure that software updates and security patches are applied in a timely and consistent manner. This involves automating patch management processes to reduce human error and delay, while prioritizing vulnerabilities based on their exploitability and exposure. Microsoft Intune can be used to enforce update policies across devices, while Windows Autopatch simplifies the deployment of updates for Windows and Microsoft 365 applications. To identify and rank vulnerabilities, Microsoft Defender Vulnerability Management offers risk-based insights that help focus remediation efforts where they matter most. 2. Implement Multi-Factor Authentication (MFA) To mitigate credential-based attacks, MFA should be enforced across all user accounts. Conditional access policies should be configured to adapt authentication requirements based on contextual risk factors such as user behavior, device health, and location. Microsoft Entra Conditional Access allows for dynamic policy enforcement, while Microsoft Entra ID Protection identifies and responds to risky sign-ins. Organizations should also adopt phishing-resistant MFA methods, including FIDO2 security keys and certificate-based authentication, to further reduce exposure. 3. Identity Protection Access Reviews and Least Privilege Enforcement Conducting regular access reviews ensures that users retain only the permissions necessary for their roles. Applying least privilege principles and adopting Microsoft Zero Trust Architecture limits the potential for lateral movement in the event of a compromise. Microsoft Entra Access Reviews automates these processes, while Privileged Identity Management (PIM) provides just-in-time access and approval workflows for elevated roles. Just-in-Time Access and Risk-Based Controls Standing privileges should be minimized to reduce the attack surface. Risk-based conditional access policies can block high-risk sign-ins and enforce additional verification steps. Microsoft Entra ID Protection identifies risky behaviors and applies automated controls, while Conditional Access ensures access decisions are based on real-time risk assessments to block or challenge high-risk authentication attempts. Password Hygiene and Secure Authentication Promoting strong password practices and transitioning to passwordless authentication enhances security and user experience. 
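As one way to target that transition, the sketch below lists users who are not yet passwordless-capable by calling the Microsoft Graph authentication methods registration report. It is a minimal illustration under assumptions: the tenant ID, client ID, and secret are placeholders, the call requires an app registration with suitable Graph permissions (for example, AuditLog.Read.All), and report availability depends on tenant licensing.

```python
import msal
import requests

# Placeholder app registration values; a real deployment would use its own tenant and
# client IDs and, preferably, a certificate or managed identity instead of a secret.
TENANT_ID = "00000000-0000-0000-0000-000000000000"
CLIENT_ID = "11111111-1111-1111-1111-111111111111"
CLIENT_SECRET = "<client-secret>"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
headers = {"Authorization": f"Bearer {token['access_token']}"}

# Authentication methods registration report; requires appropriate Graph permissions
# and eligible licensing in the tenant.
url = "https://graph.microsoft.com/v1.0/reports/authenticationMethods/userRegistrationDetails"

not_passwordless = []
while url:
    page = requests.get(url, headers=headers, timeout=30).json()
    for user in page.get("value", []):
        if not user.get("isPasswordlessCapable"):
            not_passwordless.append(user.get("userPrincipalName"))
    url = page.get("@odata.nextLink")  # follow paging until the report is exhausted

print(f"{len(not_passwordless)} users are not yet passwordless-capable")
```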
Microsoft Authenticator supports multi-factor and passwordless sign-ins, while Windows Hello for Business enables biometric authentication using secure hardware-backed credentials. 4. Deploy SIEM and XDR for Detection and Response A robust detection and response capability is vital for identifying and mitigating threats across endpoints, identities, and cloud environments. Microsoft Sentinel serves as a cloud-native SIEM that aggregates and analyzes security data, while Microsoft Defender XDR integrates signals from multiple sources to provide a unified view of threats and automate response actions. 5. Map and Harden Attack Paths Organizations should regularly assess their environments for attack paths such as privilege escalation and lateral movement. Tools like Microsoft Defender for Identity help uncover Lateral Movement Paths, while Microsoft Identity Threat Detection and Response (ITDR) integrates identity signals with threat intelligence to automate response. These capabilities are accessible via the Microsoft Defender portal, which includes an attack path analysis feature for prioritizing multicloud risks. 6. Stay Current with Threat Actor TTPs Monitor the evolving tactics, techniques, and procedures (TTPs) employed by sophisticated threat actors. Understanding these behaviors enables organizations to anticipate attacks and strengthen defenses proactively. Microsoft Defender Threat Intelligence provides detailed profiles of threat actors and maps their activities to the MITRE ATT&CK framework. Complementing this, Microsoft Sentinel allows security teams to hunt for these TTPs across enterprise telemetry and correlate signals to detect emerging threats. 7. Build Organizational Awareness Organizations should train staff to identify phishing, impersonation, and deepfake threats. Simulated attacks help improve response readiness and reduce human error. Use Attack Simulation Training in Microsoft Defender for Office 365 to run realistic phishing scenarios and assess user vulnerability. Additionally, educate users about consent phishing, where attackers trick individuals into granting access to malicious apps. Conclusion The democratization of offensive security tooling, combined with the persistent failure to patch known vulnerabilities, has significantly lowered the barrier to entry for cyber attackers. Organizations must recognize that the tools used against them are often the same ones available to their own security teams. The key to resilience lies not in avoiding these tools, but in mastering them—using them to simulate attacks, identify weaknesses, and build a proactive defense. Cybersecurity is no longer a matter of if, but when. The question is: will you detect the attacker before they achieve their objective? Will you be able to stop them before reaching your most sensitive data?
Additional read:
Gartner Predicts 30% of Enterprises Will Consider Identity Verification and Authentication Solutions Unreliable in Isolation Due to AI-Generated Deepfakes by 2026
Cyber security breaches survey 2025 - GOV.UK
Jasper Sleet: North Korean remote IT workers’ evolving tactics to infiltrate organizations | Microsoft Security Blog
MOVEit Transfer vulnerability
Solt Thypoon
Scattered Spider
SIM swaps
Attackers exploiting new critical OpenMetadata vulnerabilities on Kubernetes clusters | Microsoft Security Blog
Microsoft Defender Vulnerability Management - Microsoft Defender Vulnerability Management | Microsoft Learn
Zero Trust Architecture | NIST
tactics, techniques, and procedures (TTP) - Glossary | CSRC
https://learn.microsoft.com/en-us/security/zero-trust/deploy/overview
Help! Sensitivity label applied to whole tenant mistakenly with Watermark
We created a sensitivity label that applies a watermark to the files it is assigned to, but accidentally, or due to a misconfiguration, the label and its watermark were applied across the whole tenant and its files. We need a solution to automatically remove these watermarks from the files wherever they have been applied. Please assist, TIA...
How to deploy Microsoft Purview DSPM for AI to secure your AI apps
Microsoft Purview Data Security Posture Management (DSPM for AI) is designed to enhance data security for the following AI applications: Microsoft Copilot experiences, including Microsoft 365 Copilot. Enterprise AI apps, including ChatGPT enterprise integration. Other AI apps, including all other AI applications like ChatGPT consumer, Microsoft Copilot, DeepSeek, and Google Gemini, accessed through the browser. In this blog, we will dive into the different policies and reporting we have to discover, protect, and govern these three types of AI applications. Prerequisites Please refer to the prerequisites for DSPM for AI in the Microsoft Learn Docs. Login to the Purview portal To begin, start by logging into the Microsoft Purview portal with your admin credentials: In the Microsoft Purview portal, go to the Home page. Find DSPM for AI under solutions. 1. Securing Microsoft 365 Copilot Be sure to check out our blog on How to use the DSPM for AI data assessment report to help you address oversharing concerns when you deploy Microsoft 365 Copilot. Discover potential data security risks in Microsoft 365 Copilot interactions In the Overview tab of DSPM for AI, start with the tasks in “Get Started” and Activate Purview Audit if you have not yet activated it in your tenant to get insights into user interactions with Microsoft Copilot experiences. In the Recommendations tab, review the recommendations that are under “Not Started”. Create the following data discovery policy to discover sensitive information in AI interactions by clicking into it. Detect risky interactions in AI apps - This public preview Purview Insider Risk Management policy helps calculate user risk by detecting risky prompts and responses in Microsoft 365 Copilot experiences. Click here to learn more about the Risky AI usage policy. With the policies to discover sensitive information in Microsoft Copilot experiences in place, head back to the Reports tab of DSPM for AI to discover any AI interactions that may be risky, with the option to filter to Microsoft Copilot Experiences, and review the following for Microsoft Copilot experiences: Total interactions over time (Microsoft Copilot) Sensitive interactions per AI app Top unethical AI interactions Top sensitivity labels referenced in Microsoft 365 Copilot Insider Risk severity Insider risk severity per AI app Potential risky AI usage Protect sensitive data in Microsoft 365 Copilot interactions From the Reports tab, click on “View details” for each of the report graphs to view detailed activities in the Activity Explorer. Using available filters, filter the results to view activities from Microsoft Copilot experiences based on different Activity type, AI app category and App type, Scope, which supports administrative units for DSPM for AI, and more. Then drill down to each activity to view details including the capability to view prompts and responses with the right permissions. To protect the sensitive data in interactions for Microsoft 365 Copilot, review the Not Started policies in the Recommendations tab and create these policies: Information Protection Policy for Sensitivity Labels - This option creates default sensitivity labels and sensitivity label policies. If you've already configured sensitivity labels and their policies, this configuration is skipped.
Protect sensitive data referenced in Microsoft 365 Copilot - This guides you through the process of creating a Purview Data Loss Prevention (DLP) policy to restrict the processing of content with specific sensitivity labels in Copilot interactions. Click here to learn more about Data Loss Prevention for Microsoft 365 Copilot. Protect sensitive data referenced in Copilot responses - Sensitivity labels help protect files by controlling user access to data. Microsoft 365 Copilot honors sensitivity labels on files and only shows users files they already have access to in prompts and responses. Use Data assessments to identify potential oversharing risks, including unlabeled files. Stay tuned for an upcoming blog post on using DSPM for AI data assessments! Use Copilot to improve your data security posture - Data Security Posture Management combines deep insights with Security Copilot capabilities to help you identify and address security risks in your org. Once you have created policies from the Recommendations tab, you can go to the Policies tab to review and manage all the policies you have created across your organization to discover and safeguard AI activity in one centralized place, as well as edit the policies or investigate alerts associated with those policies in solution. Note that additional policies not from the Recommendations tab will also appear in the Policies tab when DSPM for AI identifies them as policies to Secure and govern all AI apps. Govern the prompts and responses in Microsoft 365 Copilot interactions Understand and comply with AI regulations by selecting “Guided assistance to AI regulations” in the Recommendations tab and walking through the “Actions to take”. From the Recommendations tab, create a Control unethical behavior in AI Purview Communications Compliance policy to detect sensitive information in prompts and responses and address potentially unethical behavior in Microsoft Copilot experiences and ChatGPT for Enterprise. This policy covers all users and groups in your organization. To retain and/or delete Microsoft 365 Copilot prompts and responses, setup a Data Lifecycle policy by navigating to Microsoft Purview Data Lifecycle Management and find Retention Policies under the Policies header. You can also preserve, collect, analyze, review, and export Microsoft 365 Copilot interactions by creating an eDiscovery case. 2. Securing Enterprise AI apps Please refer to this amazing blog on Unlocking the Power of Microsoft Purview for ChatGPT Enterprise | Microsoft Community Hub for detailed information on how to integrate with ChatGPT for enterprise, the Purview solutions it currently supports through Purview Communication Compliance, Insider Risk Management, eDiscovery, and Data Lifecycle Management. Learn more about the feature also through our public documentation. 3. Securing other AI Microsoft Purview DSPM for AI currently supports the following list of AI sites. Be sure to also check out our blog on the new Microsoft Purview data security controls for the browser & network to secure other AI apps. Discover potential data security risks in prompts sent to other AI apps In the Overview tab of DSPM for AI, go through these three steps in “Get Started” to discover potential data security risk in other AI interactions: Install Microsoft Purview browser extension For Windows users: The Purview extension is not necessary for the enforcement of data loss prevention on the Edge browser but required for Chrome to detect sensitive info pasted or uploaded to AI sites. 
The extension is also required to detect browsing to other AI sites through an Insider Risk Management policy for both the Edge and Chrome browsers. Therefore, the Purview browser extension is required for both Edge and Chrome in Windows. For macOS users: The Purview extension is not necessary for the enforcement of data loss prevention on macOS devices, and currently, browsing to other AI sites through Purview Insider Risk Management is not supported on macOS; therefore, no Purview browser extension is required for macOS. Extend your insights for data discovery – this one-click collection policy will set up three separate Purview detection policies for other AI apps: Detect sensitive info shared in AI prompts in Edge – a Purview collection policy that detects prompts sent to ChatGPT consumer, Microsoft Copilot, DeepSeek, and Google Gemini in Microsoft Edge and discovers sensitive information shared in prompt contents. This policy covers all users and groups in your organization in audit mode only. Detect when users visit AI sites – a Purview Insider Risk Management policy that detects when users use a browser to visit AI sites. Detect sensitive info pasted or uploaded to AI sites – a Purview Endpoint Data loss prevention (eDLP) policy that discovers sensitive content pasted or uploaded in Microsoft Edge, Chrome, and Firefox to AI sites. This policy covers all users and groups in your org in audit mode only. With the policies to discover sensitive information in other AI apps in place, head back to the Reports tab of DSPM for AI to discover any AI interactions that may be risky, with the option to filter by Other AI Apps, and review the following for other AI apps: Total interactions over time (other AI apps) Total visits (other AI apps) Sensitive interactions per AI app Insider Risk severity Insider risk severity per AI app Protect sensitive info shared with other AI apps From the Reports tab, click on “View details” for each of the report graphs to view detailed activities in the Activity Explorer. Using available filters, filter the results to view activities based on different Activity type, AI app category and App type, Scope, which supports administrative units for DSPM for AI, and more. To protect the sensitive data in interactions for other AI apps, review the Not Started policies in the Recommendations tab and create these policies: Fortify your data security – This will create three policies to manage your data security risks with other AI apps: 1) Block elevated risk users from pasting or uploading sensitive info on AI sites – this will create a Microsoft Purview endpoint data loss prevention (eDLP) policy that uses adaptive protection to give a warn-with-override to elevated risk users attempting to paste or upload sensitive information to other AI apps in Edge, Chrome, and Firefox. This policy covers all users and groups in your org in test mode. Learn more about adaptive protection in Data loss prevention. 2) Block elevated risk users from submitting prompts to AI apps in Microsoft Edge – this will create a Microsoft Purview browser data loss prevention (DLP) policy, and using adaptive protection, this policy will block elevated, moderate, and minor risk users attempting to put information in other AI apps using Microsoft Edge. This integration is built into Microsoft Edge. Learn more about adaptive protection in Data loss prevention.
3) Block sensitive info from being sent to AI apps in Microsoft Edge - this will create a Microsoft Purview browser data loss prevention (DLP) policy that detects a selection of common sensitive information types inline and blocks prompts being sent to AI apps while using Microsoft Edge. This integration is built into Microsoft Edge. Once you have created policies from the Recommendations tab, you can go to the Policies tab to review and manage all the policies you have created across your organization to discover and safeguard AI activity in one centralized place, as well as edit the policies or investigate alerts associated with those policies in solution. Note that additional policies not from the Recommendations tab will also appear in the Policies tab when DSPM for AI identifies them as policies to Secure and govern all AI apps. Conclusion Microsoft Purview DSPM for AI can help you discover, protect, and govern the interactions from AI applications in Microsoft Copilot experiences, Enterprise AI apps, and other AI apps. We recommend you review the Reports in DSPM for AI routinely to discover any new interactions that may be of concern, and to create policies to secure and govern those interactions as necessary. We also recommend you utilize the Activity Explorer in DSPM for AI to review Activity Explorer events while users interact with AI, including the capability to view prompts and responses with the right permissions. We will continue to update this blog with new features that become available in DSPM for AI, so be sure to bookmark this page! Follow-up Reading Check out this blog on the details of each recommended policy in DSPM for AI: Microsoft Purview – Data Security Posture Management (DSPM) for AI | Microsoft Community Hub
Address oversharing concerns with Microsoft 365 blueprint - aka.ms/Copilot/Oversharing
Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn
Considerations for deploying Microsoft Purview AI Hub and data security and compliance protections for Microsoft 365 Copilot and Microsoft Copilot | Microsoft Learn
Commonly used properties in Copilot audit logs - Audit logs for Copilot and AI activities | Microsoft Learn
Supported AI sites by Microsoft Purview for data security and compliance protections | Microsoft Learn
Where Copilot usage data is stored and how you can audit it - Microsoft 365 Copilot data protection and auditing architecture | Microsoft Learn
Downloadable whitepaper: Data Security for AI Adoption | Microsoft
Public roadmap for DSPM for AI - Microsoft 365 Roadmap | Microsoft 365
Rethinking Data Security and Governance in the Era of AI
The era of AI is reshaping industries, enabling unprecedented innovations, and presenting new opportunities for organizations worldwide. But as organizations accelerate AI adoption, many are focused on a growing concern: their current data security and governance practices are not effectively built for the fast-paced AI innovation and ever-evolving regulatory landscape. At Microsoft, we recognize the critical need for an integrated approach to address these risks. In our latest findings, Top 3 Challenges in Securing and Governing Data for the Era of AI, we uncovered critical gaps in how organizations manage data risk. The findings exemplify the current challenges: 91% of leaders are not prepared to manage risks posed by AI 1 and 85% feel unprepared to comply with AI regulations 2 . These gaps not only increase non-compliance but also put innovation at risk. Microsoft Purview has the tools to tackle these challenges head on, helping organizations move to an approach that protects data, meets compliance regulations, and enables trusted AI transformation. We invite you to take this opportunity to evaluate your current practices, platforms, and responsibilities, and to understand how to best secure and govern your organization for growing data risks in the era of AI. Platform fragmentation continues to weaken security outcomes Organizations often rely on fragmented tools across security, compliance, and data teams, leading to a lack of unified visibility and insufficient data hygiene. Our findings reveal the effects of fragmented platforms, leading to duplicated data, inconsistent classification, redundant alerts, and siloed investigations, which ultimately is causing data exposure incidents related to AI to be on the rise 3 . Microsoft Purview offers centralized visibility across your organization’s data estate. This allows teams to break down silos, streamline workflows, and mitigate data leakage and oversharing. With Microsoft Purview, capabilities like data health management and data security posture management are designed to enhance collaboration and deliver enriched insights across your organization to help further protect your data and mitigate risks faster. Microsoft Purview offers the following: Unified insights across your data estate, breaking down silos between security, compliance, and data teams. Microsoft Purview Data Security Posture Management (DSPM) for AI helps organizations gain unified visibility into GenAI usage across users, data, and apps to address the heightened risk of sensitive data exposure from AI. Built-in capabilities like classification, labeling, data loss prevention, and insider risk insights in one platform. In addition, newly launched solutions like Microsoft Purview Data Security Investigations accelerate investigations with AI-powered deep content analysis, which helps data security teams quickly identify and mitigate sensitive data and security risks within impacted data. Organizations like Kern County historically relied on many fragmented systems but adopted Microsoft Purview to unify their organization’s approach to data protection in preparation for increasing risks associated with deploying GenAI. “We have reduced risk exposure, [Microsoft] Purview helped us go from reaction to readiness. 
We are catching issues proactively instead of retroactively scrambling to contain them.” – Aaron Nance, Deputy Chief Information Security Officer, Kern County Evolving regulations require continuous compliance AI-driven innovation is creating a surge in regulations, resulting in over 200 daily updates across more than 900 regulatory agencies 4 , as highlighted in our research. Compliance has become increasingly difficult, with organizations struggling to avoid fines and comply with varying requirements across regions. To navigate these challenges effectively, security leaders’ responsibilities are expanding to include oversight across governance and compliance, including oversight of traditional data catalog and governance solutions led by the central data office. Leaders also cite the need for regulation and audit readiness. Microsoft Purview enables compliance and governance by: Streamlining compliance with Microsoft Purview Compliance Manager templates, step-by-step guidance, and insights for region and industry-specific regulations, including GDPR, HIPAA, and AI-specific regulation like the EU AI Act. Supporting legal matters such as forensic and internal investigations with audit trail records in Microsoft Purview eDiscovery and Audit. Activating and governing data for trustworthy analytics and AI with Microsoft Purview Unified Catalog, which enables visibility across your data estate and data confidence via data quality, data lineage, and curation capabilities for federated governance. Microsoft Purview’s suite of capabilities provides visibility and accountability, enabling security leaders to meet stringent compliance demands while advancing AI initiatives with confidence. Organizations need a unified approach to secure and govern data Organizations are calling for an integrated platform to address data security, governance, and compliance collectively. Our research shows that 95% of leaders agree that unifying teams and tools is a top priority 5 and 90% plan to adopt a unified solution to mitigate data related risks and maximize impact 6 . Integration isn't just about convenience, it’s about enabling innovation with trusted data protection. Microsoft Purview enables a shared responsibility model, allowing individual business units to own their data while giving central teams oversight and policy control. As organizations adopt a unified platform approach, our findings reveal the upside potential not only being reduced risk but also cost savings. With AI-powered copilots such as Security Copilot in Microsoft Purview, data protection tasks are simplified with natural-language guidance, especially for under resourced teams. Accelerating AI transformation with Microsoft Purview Microsoft Purview helps security, compliance, and governance teams navigate the complexities of AI innovation while implementing effective data protection and governance strategies. Microsoft partner EY highlights the results they are seeing: “We are seeing 25%–30% time savings when we build secure features using [Microsoft] Purview SDK. What was once fragmented is now centralized. With [Microsoft] Purview, everything comes together on one platform, giving a unified foundation to innovate and move forward with confidence.” – Prashant Garg, Partner of Data and AI, EY We invite you to explore how you can propel your organization toward a more secure future by reading the full research paper at https://aka.ms/SecureAndGovernPaper. Visit our website to learn more about Microsoft Purview. 
1 Forbes, Only 9% Of Surveyed Companies Are Ready To Manage Risks Posed By AI, 2023
2 SAP LeanIX, AI Survey Results, 2024
3 Microsoft, Data Security Index Report, 2024
4 Forbes, Cost of Compliance, Thomson Reuters, 2021
5 Microsoft, Audience Research, 2024
6 Microsoft, Customer Requirements Research, 2024