Purview Data Security

Data Security Posture Reports (Custom Workspace and Charts)
For more insights on the out-of-the-box (OOB) reports, check out this article.

Overview

NOW IN PUBLIC PREVIEW. Microsoft Purview Posture Reports provide a clear, outcome-based view of how effectively data protection controls, such as sensitivity labels and Data Loss Prevention (DLP) policies, are working across Microsoft 365. Rather than focusing on individual alerts or isolated events, Posture Reports help organizations answer a higher-level, executive-ready question: Are our data protection controls consistently applied and actually reducing risk at scale?

Posture Reports transform complex telemetry from audit logs, Activity Explorer, and policy enforcement into measurable, defensible insights that security, compliance, and business leaders can act on with confidence. Building on the out-of-the-box experience, custom Posture Reports enable teams to create scenario-specific views tailored to their organization's risk priorities.

Key capabilities include:
- Custom dashboards with drag-and-drop sections and cards
- Built-in and custom metric or chart cards powered by Activity Explorer data
- Flexible filtering to support focused investigations and reporting

Tips:
- Start with clear questions, then choose cards that answer them
- Avoid overcrowding reports; fewer, well-chosen cards are more effective
- Use metric cards for status and analytics cards for understanding
- Treat custom reports as living assets and iterate as needs evolve

This allows security teams to move beyond one-size-fits-all reporting and build views aligned to their unique data protection strategy.

Preview note: As this feature is in preview, capabilities, terminology, and UX may change, and not all scenarios are fully documented yet.

Key Concepts

What is a Custom Report?

A Custom Report is a user-created report container where you assemble one or more cards to visualize Information Protection-related data (for example, labeling, classification, or protection activity).
Unlike the built-in reports, custom reports are designed to be adaptable to different audiences and questions. Typical use cases include:
- Tracking adoption of sensitivity labels over time
- Monitoring where sensitive data is most concentrated
- Creating executive-friendly, KPI-style summaries
- Building analyst views for deeper investigation

Core Actions in the Custom Reports Experience

Add report creates a new, empty report canvas. This is the starting point where you define the report name and purpose, then build the report with your preferred cards and analytics.

Add section creates a logical grouping within a custom report. A section acts as a container that organizes cards on the report canvas into meaningful groupings based on purpose, audience, or storyline.

What a section does:
- Provides structure to a report by grouping related cards together
- Improves readability and navigation, especially in reports with multiple cards
- Helps separate different analytical themes within the same report

How sections are used:
- A report can contain one or more sections
- Each section can include multiple cards (metric cards, chart cards, analytics cards, or custom cards)
- Sections are added before cards, serving as the layout framework for the report

Add card lets you place a visualization or metric onto the report canvas. Each card answers a specific question, such as "How much data is labeled Confidential?" or "Where is sensitive content growing fastest?" Cards are the building blocks of custom reports and can be mixed and matched within the same report.

Permissions: to create these reports, you must have permissions to create labels and DLP policies.
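The report, section, and card hierarchy described above can be sketched as a simple data model. This is a conceptual illustration only; the class and field names are hypothetical and do not correspond to any Purview API:

```python
from dataclasses import dataclass, field

@dataclass
class Card:
    # A card answers one question, e.g. "How much data is labeled Confidential?"
    title: str
    card_type: str  # "metric", "chart", "analytics", or "custom"

@dataclass
class Section:
    # A section groups related cards by purpose, audience, or storyline
    title: str
    cards: list = field(default_factory=list)

@dataclass
class Report:
    # A report contains one or more sections; sections are added before cards
    name: str
    purpose: str
    sections: list = field(default_factory=list)

# Mirroring the workflow: add report, then section, then cards
report = Report("Label adoption", "Track Confidential label rollout")
outcomes = Section("Key Outcomes")
outcomes.cards.append(Card("Items labeled Confidential", "metric"))
report.sections.append(outcomes)
```

The nesting mirrors the UI workflow: sections are created first and act as the layout framework, then cards are placed inside them.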
Built-in (OOB, out-of-the-box) cards: custom reports include two built-in card types that can be added to sections:
- Metric cards – predefined cards used to display key metrics and trends
- Analytics cards – predefined cards that provide deeper analytical insights

Note: In addition to built-in cards, you can add custom cards (such as metric-based or chart-based custom cards) to tailor the report to your scenario.

What is a metric card? Metric cards are designed to highlight a single, high-level value or KPI and are also the foundation for building custom cards that combine metrics with trend context. What they do:
- Pair a primary metric with its historical trend, answering not just "What is the value?" but also "Is it improving or declining?"
- Serve as common indicators for adoption, growth, and compliance health
- Focus on showing trends over time

What is an analytics card? Analytics cards provide richer visualizations that help users explore patterns and trends in the data. What they do:
- Show distributions, breakdowns, or trends over time using charts such as bars, lines, or other visual formats
- Enable comparison across locations, labels, or workloads
- Support investigation and analysis rather than just reporting

Analytics cards are useful when you need a visual representation rather than a single metric.

Custom cards allow you to define tailored views aligned to your organization's unique questions. What they do:
- Focus on specific scenarios not covered by default cards
- Combine dimensions or filters relevant to your business context
- Adapt reporting to regulatory, regional, or operational needs

When to use them:
- Organization-specific KPIs
- Regulatory or audit-driven reporting
- Advanced scenarios that go beyond standard dashboards

Custom cards are especially useful for mature programs where built-in reports are no longer sufficient on their own.
Custom Card Configuration

The following example illustrates how a metric-based custom card can be configured to track adoption trends.

Scenario: Track adoption of the Confidential sensitivity label over the last 30 days.
Card type: Custom card (built from a metric card)

Metric configuration:
- Metric: Number of items labeled Confidential
- Time range: Last 30 days (custom)
- Display format: Compound (shows total count with trend direction)

Filters applied:
- Sensitivity label: Confidential
- Workload: SharePoint

What this card shows:
- The current total number of items labeled Confidential
- Whether labeling activity is increasing or decreasing over the last 30 days
- A focused view of adoption for a specific label and workload

This type of custom card is well suited for adoption tracking, executive summaries, and ongoing compliance health monitoring.

Metric card configuration: metric cards currently surface up to 7 days of data, providing recent context for the selected metric; custom cards surface up to the last 30 days. You can choose different display formats, such as:
- Number – a raw count or value
- Percentage – a proportional view of the metric
- Compound – a combination of value and trend for quick interpretation

You can apply filters to limit the data set to specific criteria (for example, a particular label, location, or workload), allowing the metric to reflect a targeted scenario rather than all data.

Chart cards are used to visualize data as a graphical chart and can be created as custom cards when you need a visual representation rather than a single metric.
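Outside the portal, a compound metric (total count plus trend direction) like the one configured above can be approximated from exported activity data. This is a hedged sketch only: the column names (Happened, SensitivityLabel, Workload) and the export file name are hypothetical, not a documented Purview export schema:

```python
import csv
from datetime import datetime, timedelta

def compound_metric(rows, label, workload, days=30):
    """Count label events for one label/workload over the window, and report
    the trend by comparing the second half of the window to the first half."""
    now = datetime.now()
    start = now - timedelta(days=days)
    midpoint = now - timedelta(days=days // 2)
    first_half = second_half = 0
    for row in rows:
        when = datetime.fromisoformat(row["Happened"])
        if (row["SensitivityLabel"] == label
                and row["Workload"] == workload
                and when >= start):
            if when >= midpoint:
                second_half += 1
            else:
                first_half += 1
    total = first_half + second_half
    if second_half > first_half:
        trend = "increasing"
    elif second_half < first_half:
        trend = "decreasing"
    else:
        trend = "flat"
    return total, trend

# Usage with a hypothetical activity export:
# with open("ActivityExplorerExport.csv", newline="") as f:
#     rows = list(csv.DictReader(f))
# total, trend = compound_metric(rows, "Confidential", "SharePoint")
```

The half-window comparison is one simple way to derive a trend direction; the portal's own trend calculation may differ.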
Click Chart card and, under Chart card configuration, select the primary activity: Sensitivity label. Then define the chart type. Based on the configuration options shown in the UI, the following chart types are available:
- Vertical bar – compares values across categories using vertical bars; commonly used for side-by-side comparisons
- Horizontal bar – compares values across categories using horizontal bars; useful when category labels are long
- Pie – shows proportional distribution of values across categories
- Donut – similar to a pie chart, with a central area that improves readability
- Line chart – visualizes trends or changes over time

Selecting the appropriate chart type helps ensure the custom card clearly communicates the intended insight and improves overall report readability. Chart cards are commonly used for trend analysis, distribution views, and comparative reporting. Both card types make patterns easier to understand.

Real World Example

The business goal this report addresses is to prove security value and risk reduction, especially to leadership and stakeholders, by tying data protection investments to measurable outcomes.

Primary business goal: demonstrate that the organization's data protection controls are effective in reducing financial data risk. The report shows that sensitive financial data is not only being found, but consistently labeled and enforced through DLP, validating that controls are working as intended.

Supporting business objectives:
- Executive assurance and trust – provide leadership with evidence that compliance and security controls are actively protecting financial data, not just configured
- Risk reduction validation – show that financial sensitive information types (SITs) are being systematically identified and governed, reducing exposure and improper data handling
- Value justification for security investments – correlate auto-labeling and DLP outcomes to demonstrate ROI on Purview, labeling, and policy investments
- Operational confidence – confirm that auto-labeling policies are accurately detecting sensitive data at scale and triggering appropriate DLP enforcement
- Audit and compliance readiness – establish defensible proof that sensitive financial data is discovered, classified, and protected consistently across the environment

Step 1: Create a report and add a name and description.
Step 2: Add a section called Key Outcomes (title and description) and add metric cards to show the data at a glance.
Step 3: Add another section and include the following two out-of-the-box charts.
Step 4: Add another section with the out-of-the-box charts.
Step 5: Add the last section that ties everything together: one out-of-the-box chart and one custom chart.
Step 6: For the custom chart above, choose a vertical bar, set the pivot (the groupings at the bottom of the chart) to Activity, then add filters (Sensitive info type: the financial SITs; Activity: DLPRuleMatch).

The report highlights key outcomes, label adoption, application areas, and auto-labeling policies. It identifies the main SITs used in labeling and connects them to DLP, demonstrating that the admin's data security measures are effective, particularly for financial information.

Using AI to simplify insights

This AI integration builds on Microsoft Purview's existing reporting stack (Posture Reports, Activity Explorer, and Audit) and introduces AI-assisted interpretation, summarization, and report composition to reduce manual analysis and accelerate decision-making.

To access the report's AI summary, open the report and click "View Details"; AI will prepare and summarize the report.

AI Report Components

Executive Summary

Delivers a high-level, leadership-friendly narrative of the most important insights. Highlights overall posture, major risks, and notable improvements or regressions.
- Summarizes overall activity (for example, total labeled items and dominant platforms)
- Calls out major observations and limitations (such as lack of trend comparison due to retention)
- Provides a concise interpretation of what the data means at a point in time

This section answers: "What happened, and what should I know without reading the full report?"

Key Metrics

This section provides the essential quantitative data that forms the foundation of the report.
- Establishes a baseline that can be tracked over time
- Includes quantitative measures such as the number of policy triggers or label adoption rates
- Lists the primary counts, categories, and time range used for analysis
- Clarifies which measurements are available and which are not (such as trends)

This section answers: "What are the exact numbers this report is based on?"

Distribution Breakdown

This section shows how activity is distributed across categories or dimensions.
- Breaks total activity into meaningful segments (for example, Mac vs. web browser)
- Displays proportional impact using counts and percentages
- Helps identify concentration areas or imbalances across platforms

This section answers: "Where is activity happening the most?"

Trend Analysis

Evaluates changes over time when historical data is available.
- Compares current activity to prior periods
- Highlights increases, decreases, or stability in behavior
- Clearly calls out when trend analysis is not possible due to data limitations

This section answers: "Is behavior improving, worsening, or staying the same over time?"

Key Findings

Synthesizes insights derived from metrics, distributions, and trends.
- Interprets the data rather than restating it
- Identifies notable patterns, gaps, or risks (for example, platform skew or low adoption)
- Connects observations to possible operational or policy implications
This section answers: "What stands out as important or concerning?"

Assessment

Provides an overall evaluation of the security or compliance posture.
- Combines findings into a holistic judgment
- Assesses scope, coverage, and effectiveness of current practices
- Describes whether the posture is sufficient or limited

This section answers: "How healthy is our current posture?"

Status

Summarizes the assessment into a simple outcome indicator.

Recommendations

Guides next steps based on observed gaps and risks.
- Suggests practical actions to improve coverage or effectiveness
- Aligns recommendations to best practices and product capabilities
- Prioritizes changes that reduce risk and improve consistency

This section answers: "What should we do next?"

References

Provides traceability and supporting documentation.
- Links to authoritative Microsoft documentation used to inform recommendations
- Allows readers to validate guidance or explore implementation details

This section answers: "Where can I verify or learn more?"

Full AI Report Summary

Summary

Posture Reports represent a shift from security configuration to security outcomes. They empower organizations to confidently answer critical questions about risk, readiness, and return on security investment, especially in an AI-driven world. As reporting continues to evolve, Posture Reports will play a foundational role in how customers prove, improve, and communicate their data security posture.

Data Security Posture Reports
Proving Your Data Security Posture with Confidence

Microsoft Purview Posture Reports help organizations prove (not just assume) that their data security controls are working. They provide a clear, outcome-based view of how effectively sensitivity labels and Data Loss Prevention (DLP) policies are protecting sensitive data across Microsoft 365. Rather than focusing on individual events or alerts, Posture Reports answer a higher-level question: Are our data protection controls consistently applied and enforced across the organization?

We designed Posture Reports to give security, compliance, and business leaders a defensible, measurable view of data security posture, which is especially critical as organizations adopt Copilot and other AI tools. Purview reporting offers unified data security insights, helping teams identify and address top risks quickly. By consolidating intelligence, it highlights vulnerabilities so you can take prompt action. With contextual information and measurable results, Purview streamlines responses to threats, improves resilience, and supports a proactive security strategy.

Microsoft Purview reporting dashboards drive security decisions because they convert massive, fragmented security telemetry into decision-ready insights: what's happening, where the risk is, whether controls are effective, and what to do next.

Posture Reports Basics

The out-of-the-box (OOB) reports are built with a combination of metric and analytics cards. Note: these reports are refreshed hourly.

What is a metric card? Metric cards are designed to highlight a single, high-level value or KPI and are also the foundation for building custom cards that combine metrics with trend context.

What is an analytics card? Analytics cards provide richer visualizations that help users explore patterns and trends in the data.
What metric cards do:
- Pair a primary metric with its historical trend, answering not just "What is the value?" but also "Is it improving or declining?"
- Serve as common indicators for adoption, growth, and compliance health
- Focus on showing trends over time

What analytics cards do:
- Show distributions, breakdowns, or trends over time using charts such as bars, lines, or other visual formats
- Enable comparison across locations, labels, or workloads
- Support investigation and analysis rather than just reporting

Analytics cards are useful when you need a visual representation rather than a single metric. They are commonly used for trend analysis, distribution views, and comparative reporting. Both card types make patterns easier to understand.

Report Insights

The following breakdown covers each OOB report from different viewpoints to help you understand how to use them.

Report: Label distribution and adoption in Microsoft 365
- Where it shows: DSPM reports; Information Protection reports
- Data security decision intent: expand auto-labeling to high-volume unlabeled areas; simplify or consolidate confusing labels; look for high label coverage areas as additional enforcement opportunities; prioritize training and auto-labeling in areas with low label adoption
- Why: label coverage is the foundational signal for downstream controls
- What it shows: label activities by workload; sensitivity labels by platform for endpoint devices; sensitivity label usage; label activities by application method
- Key metrics: total labeled items; auto-labeled items; manually labeled items; labeled by default
- Filter by: how applied; activity; location; platform; sensitivity label; sensitive info type; policy; rule; how applied detail; sensitive info type confidence; user

Report: Auto-labeling coverage
- Where it shows: DSPM reports; Information Protection reports
- Data security decision intent: which auto-labeling policies to promote from audit to enforce; where false positives need tuning before enforcement; which sensitive data types are under-protected; whether auto-labeling can safely scale further
- Why: can we trust our classification signal enough to automate protection?
- What it shows: auto-labeling by enforcement (which policies are in simulation mode vs. enforcement mode); auto-labeled items by policy; top auto-labeling policies (the most active auto-labeling policies by number of items labeled); auto-labeling policies by platform for endpoint devices
- Key metrics: total labeled items; auto-labeled items; auto-labeled emails; auto-labeled files
- Filter by: how applied; activity; location; platform; sensitivity label; sensitive info type; policy; rule; how applied detail; sensitive info type confidence; user

Report: Sensitivity label changes
- Where it shows: DSPM reports; Information Protection reports
- Data security decision intent: whether to restrict or justify label downgrades; where insider risk controls may be needed (users downgrading heavily); which labels need stronger default enforcement; whether user behavior is increasing data exposure
- Why: label changes are often an early warning signal of oversharing or misuse
- What it shows: sensitivity label transition trends (timelines for labels upgraded, downgraded, or removed over time); sensitivity labels removed across workloads (where labels have been removed); types of sensitivity labels downgraded (to which labels items were most often downgraded); sensitivity label downgrade methods (analyzes downgrades by application method and workload; the dual chart helps identify whether downgrades are manual or automatic); sensitivity label downgrades by user (which users are most frequently downgrading)
- Key metrics: labels upgraded; labels removed; labels downgraded; labels downgraded manually
- Filter by: how applied; activity; location; platform; sensitivity label; sensitive info type; policy; rule; how applied detail; sensitive info type confidence; user

Report: Top users triggering DLP policies
- Where it shows: DSPM reports; Data Loss Prevention posture reports
- Data security decision intent: whether activity reflects risky behavior or broken workflows; which users or roles need targeted controls or guidance; whether DLP policies are too broad or too noisy; whether insider risk investigations should be considered
- Why: distinguish real risk vs. policy misalignment vs. normal business activity
- What it shows: DLP policies triggered by users (DLP rule matches per rule)
- Key metrics: unique users involved in triggers; total users with repeated triggers
- Filter by: policy; location (workload); endpoint device activity

Report: Most triggered DLP rules or activities
- Where it shows: DSPM reports; Data Loss Prevention posture reports
- Data security decision intent: which policies need tuning or scoping; where enforcement can be strengthened safely; which risks are systemic vs. isolated; whether DLP is actually aligned to sensitive data
- Why: high-volume DLP rules should drive prioritization, not alert fatigue
- What it shows: top DLP rules triggered; DLP rules triggered by device activity (the most common endpoint activities triggered)
- Key metrics: total rules triggered; unique users involved in triggers; total protective actions taken
- Filter by: policy; location (workload); endpoint device activity

Report: Most triggered DLP policies
- Where it shows: DSPM reports; Data Loss Prevention posture reports
- Data security decision intent: are my highest-priority policies aligned to real user behavior? This shows whether your most critical policies are actively protecting data or rarely triggered (possibly mis-scoped or irrelevant), and which DLP policies are most actively protecting sensitive data; is this the highest risk?
- What it shows: DLP policies triggered by workload
- Key metrics: total policy trigger volume; unique users involved in triggers; total rules triggered
- Filter by: policy; location (workload); endpoint device activity

Customer Use Cases

What are some customer concerns Posture Reports address out of the box?

Use case: Labeling and auto-labeling program rollout – "Are we increasing coverage and preventing drift?"
- Customer situation: A customer is rolling out sensitivity labels and auto-labeling. Leadership asks, "Are we labeling more content?" Security asks, "Are sensitive items still unprotected?" And compliance asks, "Are users downgrading labels?"
- Guidance: In posture reports, Information Protection coverage includes label distribution and adoption, auto-labeling posture, and posture drift through label transitions (for example, label downgrades). This maps directly to "coverage + drift + enforcement" conversations. The built-in IP posture set also calls out label distribution and adoption, auto-labeling policy coverage, and sensitivity label activity as core reports. For "active data" posture, the design intent explicitly includes questions like "What percentage of my active data estate is labeled vs. not labeled?", "What percentage or count of unlabeled data has sensitive info?", and "How is labeling protection trending over 30 days?" These are perfect for proving program progress (or identifying gaps).

Use case: DLP tuning and noise reduction – "Which policies and rules are actually firing, and who's tripping them?"
- Customer situation: The DLP admin is overwhelmed: policies exist, but they don't know which ones are actually driving volume (or pain), and which users are repeatedly triggering violations. They need to prioritize tuning based on real-world triggers.
- Guidance: Surfaces the most triggered DLP rules, most triggered DLP policies, and top users triggering DLP policies.
This is directly aligned to the operational question "Are our policies effective?" The service description explicitly frames DLP posture reports as highlighting the most triggered rules, highest-volume policies, and top policy violators, which is exactly what admins use to decide what to tune first. It helps teams move from an anecdotal "DLP is noisy" to a ranked view of where to focus (policy, rule, or user).

Use case: CISO reports – an "Are we safer this quarter?" posture readout
- Customer situation: A CISO (or compliance leader) needs a repeatable, executive-ready snapshot of how the organization is protecting sensitive data, without stitching together audit logs, Activity Explorer screenshots, and spreadsheets.
- Guidance: Posture Reports are explicitly positioned as executive-ready visibility across Information Protection and DLP. They provide OOB visibility into data protection posture so the CISO can quickly answer "Is Purview doing what we intend it to do?" and "Where are the gaps?" They enable a consistent monthly or quarterly narrative from built-in metrics and trends, with hourly refresh called out as a customer and partner value driver (great for "freshness" credibility in leadership reviews). They use a rolling-window approach; the guidance is to save or export what you want to retain for future reference (great for recurring readouts).

Frequently Asked Questions (FAQs)

Q: What is the least permission required to see the Posture Reports section for DLP?
A: Information Protection Reader.

Q: We can see Activity Explorer details inside the reports in a non-simplified view, where all confidential information is visible. If someone has the Security Reader role, will they be able to see these things?
A: Security Reader can see Activity Explorer content surfaced inside Posture Reports, including user- and activity-level details that may expose sensitive metadata.
If you want a role that can view posture reports but not see confidential item-level signals, Security Reader is not the safe minimum; Information Protection Reader is.

Q: Why are our DLP device posture reports not in the Posture Reports section, and only on the DLP overview page?
A: They will move. Right now, traffic on the home page is high, so we launched there. There will eventually be a deep clone into the Posture Reports section, but it will take some time before it shows up.

Q: Can I get reports going back longer than 30 days?
A: We're working on increasing this number, but at this time the reports go back a maximum of 30 days.

Q: Is there any impact on tenant performance when enabling new reporting features? How quickly will reports populate after enabling the feature?
A: No significant impact is expected. If labeling, scanning, and/or DLP policies are already active, reports populate instantly when the feature is enabled (assuming E5 is in place). No additional intrusive operations are performed on the tenant.

Q: Can we customize these reports?
A: We have a public preview in place for posture report customization. Stay tuned for more updates as we continue to build out Microsoft Purview Reporting.

Co-Authors: Kevin Kirkpatrick and Jane Switzer

Microsoft Purview Referential Architecture Diagrams
Microsoft Purview architecture diagrams provide a reference view of how classification, sensitivity labeling, Data Loss Prevention (DLP), Insider Risk Management, and Microsoft 365 Copilot protections work together across Microsoft 365 workloads. They illustrate how organizations can consistently identify, label, and protect sensitive data across endpoints, email, collaboration services, browsers, and AI-assisted workflows, without prescribing a single deployment model. Classification generates sensitivity signals, labels express organizational protection intent, and DLP enforces that intent in real time across devices, apps, and services. Together, these patterns show how Copilot inherits existing security controls so AI-generated content remains governed within the same compliance boundaries as organizational data.
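The classify, label, and enforce flow described above can be illustrated with a minimal conceptual sketch. This is purely illustrative: the detectors, label names, and blocking rule are hypothetical stand-ins, not Purview behavior or API:

```python
def classify(content: str) -> list:
    # Classification generates sensitivity signals (hypothetical detectors)
    signals = []
    if "credit card" in content.lower():
        signals.append("Credit Card Number")
    if "routing number" in content.lower():
        signals.append("ABA Routing Number")
    return signals

def auto_label(signals: list) -> str:
    # Labels express organizational protection intent based on the signals
    return "Confidential" if signals else "General"

def dlp_enforce(label: str, action: str) -> str:
    # DLP enforces the label's intent in real time; here a hypothetical rule
    # blocks external sharing of Confidential items
    if label == "Confidential" and action == "share_external":
        return "Blocked"
    return "Allowed"

doc = "Customer credit card statement"
label = auto_label(classify(doc))
decision = dlp_enforce(label, "share_external")
# label is "Confidential" and decision is "Blocked"
```

The point of the sketch is the ordering: classification feeds labeling, and labeling feeds enforcement, which is why label coverage is described elsewhere in this post as the foundational signal for downstream controls.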