data loss prevention
Purview DLP Policy Scope - Shared Mailbox
I have created a block policy in Purview DLP and scoped it to a security group. The policy triggers when a scoped user sends an email that matches the policy criteria, but it doesn't detect when the same user sends the same email from a shared mailbox. Is that expected behavior in Purview DLP? I had expected the policy to still trigger, since the email is sent by the scoped user 'on behalf of' the shared mailbox and the outbound email appears in the Exchange admin center as coming from the scoped user.

Question: behavior of the same malware
Do two malware detections with the same detection name, but on different PCs and files, behave the same or differently? Example: two detections of Trojan:Win32/Wacatac.C!ml:
1) It remains latent in standby mode, awaiting commands.
2) It modifies, deletes, or corrupts files.

Welcome to the Microsoft Security Community!
Protect it all with Microsoft Security
Eliminate gaps and get the simplified, comprehensive protection, expertise, and AI-powered solutions you need to innovate and grow in a changing world. The Microsoft Security Community is your gateway to connect, learn, and collaborate with peers, experts, and product teams. Gain access to technical discussions and webinars, and help shape Microsoft's security products.

Get there fast
- To stay up to date on upcoming opportunities and the latest Microsoft Security Community news, make sure to subscribe to our email list.
- Find the latest skilling content and on-demand videos – subscribe to the Microsoft Security Community YouTube channel.
- Catch the latest announcements and connect with us on LinkedIn – Microsoft Security Community and Microsoft Entra Community.

Index
Community Calls: January 2026 | February 2026 | March 2026

Upcoming Community Calls

February 2026

Feb. 11 | 8:00am | Microsoft Sentinel graph | Unlocking Graph-based Security and Analysis
Join us in this session where we will dive into the Microsoft Hunting graph and blast radius experiences, going deeper into the details of the new custom graph capabilities, why they matter, and some of their use cases. We will also cover the differences between ephemeral and materialized custom graphs and how to create each through Visual Studio Code and notebooks.

Feb. 12 | 8:00am | Microsoft Purview | Data Security Investigations (DSI)
Introducing Microsoft Purview Data Security Investigations (DSI):
- Identify: Efficiently search your Microsoft 365 data estate to locate incident-relevant documents, emails, Copilot prompts and responses, and Teams messages.
- Investigate: Use AI-powered deep content analysis enriched with activity insights to quickly find key sensitive data and security risks within impacted data.
- Mitigate: Collaborate with partner teams securely to mitigate identified risks and use investigation learnings to strengthen security practices.
Launch DSI from its home page, Microsoft Defender XDR, Microsoft Purview Insider Risk Management, or Microsoft Purview Data Security Posture Management.

Feb. 17 | 8:00am | Microsoft Sentinel | Introducing the UEBA Behaviors Layer in Microsoft Sentinel
Join us as we explore the new UEBA Behaviors layer in Microsoft Sentinel. See how AI-powered behaviors turn raw telemetry into clear, human-readable security insights, and hear directly from the product team on use cases, coverage, and what's coming next.

Feb. 19 | 8:00am | Security Copilot Skilling Series | Agents That Actually Work: From an MVP
Microsoft MVP Ugur Koc will share a real-world workflow for building agents in Security Copilot, showing how to move from an initial idea to a consistently performing agent. The session highlights how to iterate on objectives, tighten instructions, select the right tools, and diagnose where agents break or drift from expected behavior. Attendees will see practical testing and validation techniques, including how to review agent decisions and fine-tune based on evidence rather than intuition to help determine whether an agent is production ready.

Feb. 23 | 8:00am | Microsoft Defender for Identity | Identity Control Plane Under Attack: Consent Abuse and Hybrid Sync Risks
A new wave of identity attacks abuses legitimate authentication flows, allowing attackers to gain access without stealing passwords or breaking MFA.
In this session, we'll break down how attackers trick users into approving malicious apps, how this leads to silent account takeover, and why traditional phishing defenses often miss it.

RESCHEDULED FROM FEB 10 | Feb. 25 | 8:00am | Microsoft Security Store | From Alert to Resolution: Using Security Agents to Power Real‑World SOC Workflows
In this webinar, we'll show how SOC analysts can harness security agents from Microsoft Security Store to strengthen every stage of the incident lifecycle. Through realistic SOC workflows based on everyday analyst tasks, we will follow each scenario end to end, beginning with the initial alert and moving through triage, investigation, and remediation. Along the way, we'll demonstrate how agents in Security Store streamline signal correlation, reduce manual investigation steps, and accelerate decision‑making when dealing with three of the most common incident types: phishing attacks, credential compromise, and business email compromise (BEC). The goal is to help analysts work faster and more confidently by automating key tasks, surfacing relevant insights, and improving consistency in response actions.

Feb. 26 | 9:00am | Azure Network Security | Azure Firewall Integration with Microsoft Sentinel
Learn how Azure Firewall integrates with Microsoft Sentinel to enhance threat visibility and streamline security investigations. This webinar will demonstrate how firewall logs and insights can be ingested into Sentinel to correlate network activity with broader security signals, enabling faster detection, deeper context, and more effective incident response.

March 2026

Mar. 5 | 8:00am | Security Copilot Skilling Series | Conditional Access Optimization Agent: What It Is & Why It Matters
Get a clear, practical look at the Conditional Access Optimization Agent—how it automates policy upkeep, simplifies operations, and uses new post‑Ignite updates like Agent Identity and dashboards to deliver smarter, standards‑aligned recommendations.

Mar. 12 | 12:00pm (BRT) | Microsoft Intune | What's New in Microsoft Intune: Latest Releases
Join us to explore what's new in Microsoft Intune, including the latest releases announced at Microsoft Ignite and the integration of Microsoft Security Copilot in Intune. The session will feature live demos and an interactive Q&A where you can bring your questions to the experts.

Mar. 18 | 1:00pm (AEDT) | Microsoft Entra | From Lockouts to Logins: Modern Account Recovery and Passkeys
Lost phone, no backup? In a passwordless world, users can face total lockouts and risky helpdesk recovery. This session shows how Entra ID Account Recovery uses strong identity verification and passkey profiles to help users safely regain access.

Mar. 19 | 8:00am | Microsoft Purview | Insider Risk Data Risk Graph
We're excited to share a new capability that brings Microsoft Purview Insider Risk Management (IRM) together with Microsoft Sentinel through the data risk graph (public preview).
What it is: The data risk graph gives you an interactive, visual map of user activity, data movement, and risk signals—all in one place.
Why it matters: Quickly investigate insider risk alerts with clear context, understand the impact of risky activities on sensitive data, and accelerate response with intuitive, graph-based insights.
Getting started:
- Requires onboarding to the Sentinel data lake & graph.
- Needs appropriate admin/security roles and at least one IRM policy configured.
This session will provide practical guidance on onboarding, setup requirements, and best practices for the data risk graph.

Mar. 26 | 8:00am | Azure Network Security | What's New in Azure Web Application Firewall
Azure Web Application Firewall (WAF) continues to evolve to help you protect your web applications against ever-changing threats. In this session, we'll explore the latest enhancements across Azure WAF, including improvements in ruleset accuracy, threat detection, and configuration flexibility. Whether you use Application Gateway WAF or Azure Front Door WAF, this session will help you understand what's new, what's improved, and how to get the most from your WAF deployments.

Looking for more? Join the Security Advisors!
As a Security Advisor, you'll gain early visibility into product roadmaps, participate in focus groups, and access private preview features before public release. You'll have a direct channel to share feedback with engineering teams, influencing the direction of Microsoft Security products. The program also offers opportunities to collaborate and network with fellow end users and Microsoft product teams. Join the Security Advisors program that best fits your interests: www.aka.ms/joincommunity.

Additional resources
- Microsoft Security Hub on Tech Community
- Virtual Ninja Training Courses
- Microsoft Security Documentation
- Azure Network Security GitHub
- Microsoft Defender for Cloud GitHub
- Microsoft Sentinel GitHub
- Microsoft Defender XDR GitHub
- Microsoft Defender for Cloud Apps GitHub
- Microsoft Defender for Identity GitHub
- Microsoft Purview GitHub

Building Secure, Enterprise Ready AI Agents with Purview SDK and Agent Framework
At Microsoft Ignite, we announced the public preview of Purview integration with the Agent Framework SDK—making it easier to build AI agents that are secure, compliant, and enterprise‑ready from day one. AI agents are quickly moving from demos to production. They reason over enterprise data, collaborate with other agents, and take real actions. As that happens, one thing becomes non‑negotiable: governance has to be built in. That's where the Purview SDK comes in.

Agentic AI Changes the Security Model
Traditional apps expose risks at the UI or API layer. AI agents are different. Agents can:
- Process sensitive enterprise data in prompts and responses
- Collaborate with other agents across workflows
- Act autonomously on behalf of users
Without built‑in controls, even a well‑designed agent can create compliance gaps. The Purview SDK brings Microsoft's enterprise data security and compliance directly into the agent runtime, so governance travels with the agent—not after it.

What You Get with Purview SDK + Agent Framework
This integration delivers a few key things developers and enterprises care about most:
- Inline data protection: Evaluate prompts and responses against Data Loss Prevention (DLP) policies in real time. Content can be allowed or blocked automatically.
- Built‑in governance: Send AI interactions to Purview for audit, eDiscovery, communication compliance, and lifecycle management—without custom plumbing.
- Enterprise‑ready by design: Ship agents that meet enterprise security expectations from the start, not as a follow‑up project.
All of this is done natively through Agent Framework middleware, so governance feels like part of the platform—not an add‑on.

How Enforcement Works (Quickly)
When an agent runs:
- Prompts and responses flow through the Agent Framework pipeline
- Purview SDK evaluates content against configured policies
- A decision is returned: allow, redact, or block
- Governance signals are logged for audit and compliance
This same model works for user‑to‑agent interactions, agent‑to‑agent communication, and multi‑agent workflows.

Try It: Add Purview SDK in Minutes
Adding Purview enforcement takes only a few lines of Python with Agent Framework middleware (a conceptual sketch appears at the end of this post). From that point on:
- Prompts and responses are evaluated against the Purview policies set up in the enterprise tenant
- Sensitive data can be automatically blocked
- Interactions are logged for governance and audit

Designed for Real Agent Systems
Most production AI apps aren't single‑agent systems. The Purview SDK supports:
- Agent‑level enforcement for fine‑grained control
- Workflow‑level enforcement across orchestration steps
- Agent‑to‑agent governance to protect data as agents collaborate
This makes it a natural fit for enterprise‑scale, multi‑agent architectures.

Get Started Today
You can start experimenting right away:
- Try the Purview SDK with Agent Framework: follow the Microsoft Learn docs to configure the Purview SDK with Agent Framework.
- Explore the GitHub samples: see examples of policy‑enforced agents in Python and .NET.

Secure AI, Without Slowing It Down
AI agents are quickly becoming production systems—not experiments. By integrating the Purview SDK directly into the Agent Framework, Microsoft is making governance a default capability, not a deployment blocker. Build intelligent agents. Protect sensitive data. Scale with confidence.
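Below is a minimal, self-contained Python sketch of the enforcement flow described above (evaluate the prompt, run the agent, evaluate the response, then allow, redact, or block, and log a governance signal). It deliberately does not import the real Purview SDK or Agent Framework packages; the policy check is a local stub, and the function and class names here are illustrative assumptions. Refer to the Microsoft Learn docs and GitHub samples linked in this post for the actual middleware API.

```python
# Conceptual stand-in for Purview-enforced Agent Framework middleware.
# The evaluation logic is a local stub; the real integration delegates the
# decision to Purview against the tenant's configured DLP policies.
from dataclasses import dataclass
from enum import Enum
import re


class Decision(Enum):
    ALLOW = "allow"
    REDACT = "redact"
    BLOCK = "block"


@dataclass
class PolicyResult:
    decision: Decision
    reason: str


def evaluate_against_policy(text: str) -> PolicyResult:
    """Stub for a Purview DLP evaluation of a prompt or a response."""
    if re.search(r"\b\d{3}-\d{2}-\d{4}\b", text):  # looks like a U.S. SSN
        return PolicyResult(Decision.BLOCK, "SSN pattern matched")
    if "confidential" in text.lower():
        return PolicyResult(Decision.REDACT, "keyword 'confidential' matched")
    return PolicyResult(Decision.ALLOW, "no policy match")


def governed_agent_call(prompt: str, run_agent) -> str:
    """Middleware-style wrapper: check the prompt, run the agent, check the response."""
    verdict = evaluate_against_policy(prompt)
    print(f"[audit] prompt: {verdict.decision.value} ({verdict.reason})")  # governance signal
    if verdict.decision is Decision.BLOCK:
        return "This request was blocked by your organization's data policy."

    response = run_agent(prompt)

    verdict = evaluate_against_policy(response)
    print(f"[audit] response: {verdict.decision.value} ({verdict.reason})")
    if verdict.decision is Decision.BLOCK:
        return "The response was blocked by your organization's data policy."
    if verdict.decision is Decision.REDACT:
        return re.sub(r"(?i)confidential", "[REDACTED]", response)
    return response


if __name__ == "__main__":
    # A trivial "agent" used in place of a real Agent Framework agent.
    echo_agent = lambda p: f"Echoing your confidential note: {p}"
    print(governed_agent_call("Summarize the Q3 plan", echo_agent))
```

In the real integration, the evaluation, redaction, and audit records are handled by the Purview service and middleware rather than local code, and the signals flow into Purview for audit and eDiscovery instead of being printed to the console.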
Comprehensive Guide to DLP Policy Tips

Feature Support and Compatibility

Q: Which Outlook clients support DLP policy tips?
A: DLP policy tips are supported across several Outlook clients, but the experience and capabilities vary depending on the end user's client version and the Microsoft 365 license (E3 vs. E5). For detailed guidance on policy tip support across Microsoft apps, read more here. Below is a breakdown of policy tip support across Outlook clients.

Glossary:
- Basic Policy Tip Support: Display of simple warnings or notifications based on DLP rules.
- Top 10 Predicates: The most commonly used conditions in DLP rules: Content is shared from M365; Content contains SITs; Content contains sensitivity label; Subject or Body contains words or phrases; Sender is; Sender is a member of; Sender domain is; Recipient is; Recipient domain is; Recipient is a member of.
- Default Oversharing Dialog: A built-in popup warning users about potential data oversharing.
- Custom Oversharing Dialog: A tailored version of the oversharing warning.
- Wait on Send: A delay mechanism that gives users time to review sensitive content before sending.
- Out-of-box SITs: Out-of-box sensitive information types (SITs), like SSNs or credit card numbers.
- Custom SITs: User-defined sensitive data patterns.
- Exact Data Match: Used for precise detection of structured sensitive data.

Important considerations:
- Client version matters: Even within the same client (e.g., Outlook Win32), the version must be recent enough to support the latest DLP features. Older builds may lack support for newer DLP features.
- Policy tip visibility: Policy tips may not appear if the DLP rule uses unsupported predicates or if the client is offline.
- Licensing: E5 licenses unlock advanced features like oversharing dialogs and support for custom sensitive information types (SITs).

Q: Why don't policy tips appear for some users or rules?
A: While the underlying DLP rules are always enforced, policy tips may not appear for some users due to several factors:
- Outlook client version: Policy tips are only supported in specific versions of Outlook. For example, older builds of Outlook Win32 may not support the latest DLP capabilities. To ensure the Outlook client version you're using supports the latest capabilities, read more.
- Licensing: Users with E3 licenses may only see basic policy tips, and some features may not be available at all, while E5 licenses unlock advanced DLP capabilities such as the custom oversharing dialog. For more information on licensing, read more.
- Unsupported conditions or predicates: If a DLP rule uses unsupported predicates, the policy tip will not be displayed even though the rule is enforced. To ensure compatibility, refer to our documentation for a list of supported conditions by client version.
- Offline mode: Policy tips rely on real-time evaluation of message content against Data Loss Prevention (DLP) rules by Microsoft 365 services. When a user is offline, their Outlook client cannot communicate with these services, which affects the visibility of policy tips. What about offline E5 users? Even if a user has an E5 license, which includes advanced DLP features, the client must be online to evaluate and display these advanced policy tips. While the message may still be blocked or logged according to the DLP rule, the user won't see any tip or warning until they reconnect.

Q: Are trainable classifiers supported in policy tips?
A: Yes, but with specific limitations.
Trainable classifiers are supported in DLP policy tips, but only under specific conditions related to licensing, client version, and connectivity:
- Licensing: The user must have a Microsoft 365 E5 license. Trainable classifiers are part of Microsoft Purview's advanced classification capabilities, which are only available with E5 or equivalent add-ons.
- Client support: Only certain Outlook clients support policy tips triggered by trainable classifiers. These include Outlook Classic (Win32) and the new Outlook for Windows (Monarch). Other clients (such as Outlook Web App (OWA), Outlook for Mac, and Outlook Mobile) do not currently support this feature.
- Connectivity: The Outlook client must be online. Trainable classifiers rely on the Microsoft 365 Data Classification Service (DCS), which performs real-time content evaluation in the cloud. If the client is offline, policy tips based on trainable classifiers will not appear, even though the DLP rule may still be enforced when the message is sent.

Q: Is OCR supported in policy tips?
A: No, there is currently no support for OCR in policy tips. However, our goal is to support OCR in policy tips in the future.

Setup & Configuration

Q: What are the prerequisites for enabling DLP policy tips?
A: DLP policy tips notify users in real time when their actions may violate data protection policies. To enable and use them effectively, the following prerequisites must be met.

Licensing considerations: Microsoft 365 E5 is required for full feature access, including real-time policy tips, trainable classifiers, and connected experiences. Connected experiences must be enabled in the tenant for real-time tips to appear.
- Microsoft 365 E5: Required for full feature support, including trainable classifiers, advanced predicates, and connected experiences.
- Microsoft 365 E3: Limited support; some advanced features may not be available.

Client compatibility: DLP policy tips are supported across several Outlook clients, but the experience and capabilities vary depending on the client version, licensing, and configuration. Refer to the comprehensive compatibility matrix (provided at the beginning of this guide) to learn about policy tip support across Outlook clients.

Permissions: To configure and manage DLP policy tips in Microsoft Purview, specific roles and permissions are required. These permissions ensure that only authorized personnel can create, deploy, and monitor DLP policies and their associated tips. Required roles:
- Compliance Administrator: Full access to create, configure, and deploy DLP policies and tips.
- Compliance Data Administrator: Manage DLP policies and view alerts.
- Information Protection Admin: Configure sensitivity labels and integrate with DLP.
- Security Administrator: View and investigate DLP alerts and incidents.

Q: How do I configure a custom policy tip message using JSON?
A: You can configure a custom policy tip dialog in DLP policies using a JSON file. This allows you to tailor the message shown to users when a policy is triggered, such as for oversharing or sensitive content detection. The JSON must follow the schema outlined in Microsoft's documentation and internal engineering guidance.
Applies to: Microsoft 365 online E5 users with connected experiences enabled. This feature is supported in Outlook Classic (Win32) and Monarch. JSON-based dialogs are not supported in Outlook on the Web (OWA), Mac, or Mobile clients.

Q: Can I localize policy tips for different languages?
A: Localization of DLP policy tips allows users to see messages in their preferred language, improving clarity and compliance across global teams. Microsoft Purview supports localization through JSON-based configuration, but support varies by client.
Supported clients: Outlook Classic (Win32).
How to configure:
- Use the LocalizationData block in your custom policy tip JSON (an example appears after the key table at the end of this guide).
- Upload this JSON using PowerShell with the NotifyPolicyTipCustomDialog parameter.

Q: What roles and permissions are required to manage DLP policy tips?
A: To manage Data Loss Prevention (DLP) policies and policy tips in Microsoft Purview, you only need to be assigned one of the following roles. Each role provides a different level of access depending on your responsibilities.
- Compliance Administrator: Full access to create, configure, and deploy DLP policies and policy tips.
- Compliance Data Administrator: Manage DLP policies and access compliance data.
- Information Protection Admin: Configure sensitivity labels and integrate with DLP policies.
- Security Administrator: View and investigate DLP alerts and incidents.
Note: Microsoft recommends assigning the least privileged role necessary to perform the required tasks to enhance security. These roles are assigned in the Microsoft Purview portal under Roles and Scopes. Administrative Unit–scoped roles are also supported for organizations that segment access by department or geography.

Troubleshooting & Known Issues

Q: Why are policy tips delayed or not appearing at all?
A: If you're not seeing policy tips, follow this checklist to find out why.
Outlook client compatibility and licensing:
- Check whether your Outlook client supports policy tips. Policy tips are not supported on all Outlook clients. Refer to "Q: Which Outlook clients support DLP policy tips?"
- Confirm your license. Advanced policy tips (e.g., those using trainable classifiers or oversharing dialogs) require a Microsoft 365 E5 license. Refer to "Q: What are the prerequisites for enabling DLP policy tips?"
Policy configuration issues:
- Review your DLP policy configuration and check for unsupported conditions. Refer to "Q: What predicates are supported across different Outlook clients?"
- Watch for message size limits. Only the first 4 MB of the email body and subject, and 2 MB per attachment, are scanned for real-time tips.
- Use Microsoft's diagnostic tool. Run the built-in diagnostic to test your DLP policy setup.

Q: What logs or data should I collect for support escalation?
A: To ensure a smooth and complete escalation to Microsoft support or engineering, collect the following logs and metadata, depending on the client type. This helps accelerate triage and resolution.
- Fiddler trace, which must include: the timestamp of the issue, the correlation ID (found as updateGuid in the DLP response), the tenant ID, and the user ID / SMTP address.
- Tenant DLP policies and rules.
- Expected rule match conditions and rule IDs.
- (Optional) Draft email or data input (sender, recipient, subject, message body).
- ETL logs from %temp%\Outlook Logging.
- PSR logs (Problem Steps Recorder) or screenshots.

Q: Are there known limitations with policy tips?
A:
- Unable to detect sensitivity labels in compressed files.
- Unable to detect CCSI (SITs/trainable SITs) in encrypted files.

Q: What are the limitations of the custom dialog?
A: The title, the body, and the override justification options can be customized using the JSON file.
Basic text formatting is allowed: bold, underline, italic, and line break. Up to three justification options can be offered, plus an option for free-text input. The text for false positive and acknowledgment is not customizable.

Below is the required structure of the JSON files that admins create to customize the dialog for matched rules. The keys are all case-sensitive. Formatting and dynamic tokens for matched conditions can only be used in the Body key.
- {} (mandatory): Container.
- LocalizationData (mandatory): Array that contains all the language options.
- Language (mandatory): Specify the language code: "en", "es", "fr", "de".
- Title (mandatory): Specify the title for the dialog. Limited to 80 characters.
- Body (mandatory): Specify the body for the dialog. Limited to 1000 characters. Dynamic tokens for matched conditions can be added in the body.
- Options (optional): Up to three options can be included. One more can be added by setting HasFreeTextOption = true.
- HasFreeTextOption (optional): Can be true or false; true displays a text box below the last option added to the JSON file.
- DefaultLanguage (mandatory): Must be one of the languages defined within the LocalizationData key; the user must include at least one language.
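To make the key table concrete, here is a small sketch that assembles a single-language dialog following the keys above and writes it to a file. The nesting shown (Title, Body, Options, and HasFreeTextOption inside the LocalizationData entry) and all text values are assumptions for illustration, not a copy of the published schema; validate the file against Microsoft's documented schema before uploading it with the NotifyPolicyTipCustomDialog parameter mentioned earlier in this guide.

```python
import json

# Sketch of a custom policy tip dialog file based on the key table above.
# Keys are case-sensitive; the exact nesting and value types (e.g., whether
# HasFreeTextOption is a boolean or a string) should be confirmed against
# the published schema before use.
dialog = {
    "LocalizationData": [
        {
            "Language": "en",
            "Title": "Possible oversharing detected",        # limited to 80 characters
            "Body": "This message appears to contain sensitive content. "
                    "Review the recipients before sending.",  # limited to 1000 characters
            "Options": [                                       # up to three options
                "I have a business justification",
                "The recipients are approved to receive this content",
                "This is required for a regulatory filing",
            ],
            "HasFreeTextOption": True,                         # adds a free-text box below the options
        },
        # Additional LocalizationData entries can be added for "es", "fr", "de", etc.
    ],
    "DefaultLanguage": "en",                                   # must match a Language defined above
}

with open("policy_tip_dialog.json", "w", encoding="utf-8") as f:
    json.dump(dialog, f, ensure_ascii=False, indent=2)
```

As noted in the localization answer above, the resulting JSON file is then uploaded with PowerShell using the NotifyPolicyTipCustomDialog parameter.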
Test DLP Policy: On-Prem

We have DLP policies based on SITs, and they are working well for various locations such as SharePoint, Exchange, and endpoint devices. But the DLP policy for on-prem NAS shares is not matching when used with the Microsoft Information Protection scanner.

DLP rule conditions - content contains any of these sensitive info types:
- Credit Card Number
- U.S. Bank Account Number
- U.S. Driver's License Number
- U.S. Individual Taxpayer Identification Number (ITIN)
- U.S. Social Security Number (SSN)

The policy is visible to the scanner and is logged as being executed:
MSIP.Lib MSIP.Scanner (30548) Executing policy: Data Discovery On-Prem, policyId: 85........................
and the MIP reports are listing files with these SITs. The results:
Information Type Name - Credit Card Number, U.S. Social Security Number (SSN), U.S. Bank Account Number
Action - Classified
Dlp Mode - Test
Dlp Status - Skipped
Dlp Comment - No match
There is no other information in the logs. Why is the DLP policy not matching, and how can I test the policy? Thanks

Teams Private Channels Reengineered: Compliance & Data Security Actions Needed by Sept 20, 2025
You may have missed this critical update, as it was published only on the Microsoft Teams blog and flagged as a Teams change in the Message Center under MC1134737. However, it represents a complete reengineering of how private channel data is stored and managed, with direct implications for Microsoft Purview compliance policies, including eDiscovery, Legal Hold, Data Loss Prevention (DLP), and Retention.
🔗 Read the official blog post here: New enhancements in Private Channels in Microsoft Teams unlock their full potential | Microsoft Community Hub

What's Changing? A Shift from User to Group Mailboxes
Historically, private channel data was stored in individual user mailboxes, requiring compliance and security policies to be scoped at the user level. Starting September 20, 2025, Microsoft is reengineering this model:
- Private channels will now use dedicated group mailboxes tied to the team's Microsoft 365 group.
- Compliance and security policies must be applied to the team's Microsoft 365 group, not just individual users.
- Existing user-level policies will not govern new private channel data post-migration.
This change aligns private channels with how shared channels are managed, streamlining policy enforcement but requiring manual updates to ensure coverage.

Why This Matters for Data Security and Compliance Admins
If your organization uses Microsoft Purview for eDiscovery, Legal Hold, Data Loss Prevention (DLP), or Retention policies, you must review and update your Purview eDiscovery and legal holds, DLP, and retention policies. Without action, new private channel data may fall outside existing policy coverage, especially if your current policies are not already scoped to the team's group. This could lead to significant data security, governance, and legal risks.

Action Required by September 20, 2025
Before migration begins:
- Review all Purview policies related to private channels.
- Apply policies to the team's Microsoft 365 group to ensure continuity.
- Update eDiscovery searches to include both user and group mailboxes.
- Modify DLP scopes to include the team's group.
- Align retention policies with the team's group settings.

Migration Timeline
Migration begins September 20, 2025, and continues through December 2025. Migration timing may vary by tenant. A PowerShell command will be released to help track migration status per tenant. I recommend keeping track of any additional announcements in the Message Center.

Always‑on Diagnostics for Purview Endpoint DLP: Effortless, Zero‑Friction Troubleshooting for Admins
Historically, some security teams have struggled to troubleshoot endpoint DLP issues. Investigations often slow down because reproducing issues, collecting traces, and aligning on context can be tedious. With always-on diagnostics in Purview endpoint data loss prevention (DLP), our goal has been simple: make troubleshooting seamless and effortless—without ever disrupting the information worker. Today, we're excited to share new enhancements to always-on diagnostics for Purview endpoint DLP. This is the next step in our journey to modernize supportability in Microsoft Purview and dramatically reduce admin friction during investigations.

Where We Started: Introduction of Continuous Diagnostic Collection
Earlier this year, we introduced continuous diagnostic trace collection on Windows endpoints (support for macOS endpoints is coming soon). This eliminated the single largest source of friction: the need to reproduce issues. With this capability:
- Logs are captured persistently for up to 90 days
- Information workers no longer need admin permissions to retrieve traces
- Admins can submit complete logs on the first attempt
- Support teams can diagnose transient or rare issues with high accuracy
In just a few months, we saw resolution times drop dramatically. The message was clear: always-on diagnostics is becoming a new troubleshooting standard.

Our Newest Enhancements: Built for Admins. Designed for Zero Friction.
The newest enhancements to always-on diagnostics unlock the most requested capability from our IT and security administrators: the ability to retrieve and upload always-on diagnostic traces directly from devices using the Purview portal — with no user interaction required. This means:
- Admins can now initiate trace uploads on demand
- No interruption to information workers and their productivity
- No issue reproduction sessions, minimizing unnecessary disruption and coordination
- Every investigation starts with complete context
Because the traces are already captured on-device, these improvements now complete the loop by giving admins a seamless, portal-integrated workflow to deliver logs to Microsoft when needed. This experience is now fully available for customers using endpoint DLP on Windows.

Why This Matters
As a product team, our success is measured not just by usage, but by how effectively we eliminate friction for customers. Always-on diagnostics minimizes the friction and frustration that has historically affected some customers.
- No more asking your employees or information workers, "Can you reproduce that?", and waiting for them to share logs
- No more lost context
- No more delays while logs are collected after the fact

How It Works
Local trace capture: Devices continuously capture endpoint DLP diagnostic data in a compressed, proprietary format, and this data stays solely on the respective device based on the retention period and storage limits configured by the admin. Users no longer need to reproduce issues during retrieval—everything the investigation requires is already captured on the endpoint.
Admin-triggered upload: Admins can now request diagnostic uploads directly from the Purview portal, eliminating the need to disrupt users.
Upload requests can be initiated from multiple entry points, including:
- Alerts (Data Loss Prevention → Alerts → Events)
- Activity Explorer (Data Loss Prevention → Explorers → Activity explorer)
- Device Policy Status page (Settings → Device onboarding → Devices)
From any of these locations, admins can simply choose Request device log, select the date range, add a brief description, and submit the request. Once processed, the device's always-on diagnostic logs are securely uploaded to Microsoft telemetry per customer-approved settings. Admins can include the upload request number in their ticket with Microsoft Support, and sharing this number removes the need for the support engineer to ask for logs again during the investigation. This workflow ensures investigations start with complete diagnostic context.

Privacy & compliance considerations
- Data is only uploaded during admin-initiated investigations
- Data adheres to our published diagnostic data retention policies
- Logs are only accessible to the Microsoft support team, not any other parties

We Want to Hear From You
Are you using always-on diagnostics? We'd love to hear about your experience. Share your feedback, questions, or success stories in the Microsoft Tech Community, or reach out to our engineering team directly. Making troubleshooting effortless—so you can focus on what matters, not on chasing logs.

Artificial Intelligence & Security
Understanding Artificial Intelligence
Artificial intelligence (AI) is a computational system that performs human‑intelligence tasks such as learning, reasoning, problem‑solving, perception, and language understanding, leveraging algorithmic and statistical methods to analyse data and make informed decisions. AI can also be described as the simulation of human intelligence through machines programmed to learn, reason, and act. It blends statistics, machine learning, and robotics to deliver the following outcomes:
- Prediction: The application of statistical modelling and machine learning techniques to anticipate future outcomes, such as detecting fraudulent transactions.
- Automation: The utilisation of robotics and artificial intelligence to streamline and execute routine processes, exemplified by automated invoice processing.
- Augmentation: The enhancement of human decision-making and operational capabilities through AI-driven tools, for instance, AI-assisted sales enablement.

Artificial Intelligence: Core Capabilities and Market Outlook
Key capabilities of AI include:
- Data-driven decision-making: Analysing large datasets to generate actionable insights and optimise outcomes.
- Anomaly detection: Identifying irregular patterns or deviations in data for risk mitigation and quality assurance.
- Visual interpretation: Processing and understanding visual inputs such as images and videos for applications like computer vision.
- Natural language understanding: Comprehending and interpreting human language to enable accurate information extraction and contextual responses.
- Conversational engagement: Facilitating human-like interactions through chatbots, virtual assistants, and dialogue systems.
With the exponential growth of data, machine learning models, and computing power, AI is advancing faster than ever. According to industry analyst reports, breakthroughs in deep learning and neural network architectures have enabled highly sophisticated applications across diverse sectors, including healthcare, finance, manufacturing, and retail. The global AI market is on a trajectory of significant expansion, projected to increase nearly 5X by 2030, from $391 billion in 2025 to $1.81 trillion. This growth corresponds to a compound annual growth rate (CAGR) of 35.9% during the forecast period. These projections are estimates and subject to change given the rapid growth and advancement of the AI era.

AI and Cloud Synergy
AI and cloud computing form a powerful technological combination. Digital assistants offer scalable, cloud-powered intelligence. Cloud platforms such as Azure provide pre-trained models and services, enabling businesses to deploy AI solutions efficiently.

Core AI Workload Capabilities
Machine Learning
Machine learning (ML) underpins most AI systems by enabling models to learn from historical and real-time data to make predictions, classifications, and recommendations. These models adapt over time as they are exposed to new data, improving accuracy and robustness. Example use cases: credit risk scoring in banking, demand forecasting in retail, and predictive maintenance in manufacturing.
Anomaly Detection
Anomaly detection techniques identify deviations from expected patterns in data, systems, or processes. This capability is critical for risk management and operational resilience, as it enables early detection of fraud, security breaches, or equipment failures. Example use cases: fraud detection in financial transactions, network intrusion monitoring in cybersecurity, and quality control in industrial production.
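As a small illustration of the anomaly detection workload, the sketch below trains a scikit-learn Isolation Forest on simulated transaction amounts and flags the outliers. The data and the contamination setting are assumptions for demonstration only; real fraud detection systems use many more features and carefully tuned thresholds.

```python
# Illustrative anomaly detection: flag unusual transaction amounts with an
# Isolation Forest. The data is simulated; production systems would use richer
# features (merchant, geography, velocity, device signals, etc.).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulate mostly "normal" transaction amounts plus a few extreme outliers.
normal = rng.normal(loc=50, scale=15, size=(500, 1))
outliers = np.array([[950.0], [1200.0], [3000.0]])
amounts = np.vstack([normal, outliers])

# contamination ~ expected fraction of anomalies; tune to your own data.
model = IsolationForest(contamination=0.01, random_state=42)
labels = model.fit_predict(amounts)  # -1 = anomaly, 1 = normal

flagged = amounts[labels == -1].ravel()
print(f"Flagged {len(flagged)} of {len(amounts)} transactions as anomalous:")
print(np.sort(flagged))
```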
Natural Language Processing (NLP)
NLP focuses on enabling machines to understand, interpret, and generate human language in both text and speech formats. This capability powers a wide range of applications that require contextual comprehension and semantic accuracy. Example use cases: sentiment analysis for customer feedback, document summarisation for legal and compliance teams, and multilingual translation for global operations.

Principles of Responsible AI
To ensure ethical and trustworthy AI, organisations must embrace:
- Reliability & Safety
- Privacy & Security
- Inclusiveness
- Fairness
- Transparency
- Accountability
These principles are embedded in frameworks like the Responsible-AI-Standard and reinforced by governance models such as the Microsoft AI Governance Framework. See Responsible AI Principles and Approach | Microsoft AI.

AI and Security
AI introduces both opportunities and risks. A responsible approach to AI security involves three dimensions:
- Risk Mitigation: Addressing threats from immature or malicious AI applications.
- Security Applications: Using AI to enhance security and public safety.
- Governance Systems: Establishing frameworks to manage AI risks and ensure safe development.

Security Risks and Opportunities Due to AI Transformation
AI's transformative nature brings new challenges:
- Cybersecurity: AI brings new opportunities and advancements to track, detect, and act against vulnerabilities in infrastructure and in learning models themselves.
- Data Security: Tools and solutions such as Microsoft Purview help strengthen data security by performing assessments, creating data loss prevention policies, and applying sensitivity labels.
- Information Security: Securing information remains the biggest risk, and in this era of AI-driven transformation, information security should be reinforced using established AI security frameworks.
These concerns are echoed in The Crucial Role of Data Security Posture Management in the AI Era, which highlights insider threats, generative AI risks, and the need for robust data governance.

AI in Security Applications
AI's capabilities in data analysis and decision-making enable innovative security solutions:
- Network Protection: Applications include the use of AI algorithms for intrusion detection, malware detection, security situational awareness, and threat early warning.
- Data Management: Applications use AI technologies to achieve data protection objectives such as hierarchical classification, leak prevention, and leak traceability.
- Intelligent Security: Applications use AI technology to move the security field from passive defence toward intelligent, proactive operation, developing active judgment and timely early warning.
- Financial Risk Control: Applications use AI technology to improve the efficiency and accuracy of credit assessment and risk management, and to assist governments in regulating financial transactions.

AI Security Management
Effective AI security requires:
- Regulations & Policies: Establish AI safety and management laws designed for governance by regulatory authorities, along with management policies for key AI application domains and prominent security risks.
- Standards & Specifications: Industry-wide benchmarks, along with international and domestic standards, can be used to support AI safety.
- Technological Methods: Modern tools such as Defender for AI can be used for early detection, mitigation, and remediation of AI threats.
- Security Assessments: Organisations should use appropriate tools and platforms to evaluate AI risks and perform assessments regularly using an automated approach.

Conclusion
AI is transforming how organizations operate, innovate, and secure their environments. As AI capabilities evolve, integrating security and governance considerations from the outset remains critical. By combining responsible AI principles, effective governance, and appropriate security measures, organizations can work toward deploying AI technologies in a manner that supports both innovation and trust. Industry projections suggest continued growth in AI‑related security investments over the coming years, reflecting increased focus on managing AI risks alongside its benefits. These estimates are subject to change and should be interpreted in the context of evolving technologies and regulatory developments.

Disclaimer
References to Microsoft products and frameworks are for informational purposes only and do not imply endorsement, guarantee, or contractual commitment. Market projections referenced are based on publicly available industry analyses and are subject to change.

From "No" to "Now": A 7-Layer Strategy for Enterprise AI Safety
The "block" posture on Generative AI has failed. In a global enterprise, banning these tools doesn't stop usage; it simply pushes intellectual property into unmanaged channels and creates a massive visibility gap in corporate telemetry. The priority has now shifted from stopping AI to hardening the environment so that innovation can run at velocity without compromising data sovereignty. Traditional security perimeters are ineffective against the "slow bleed" of AI leakage, where data moves through prompts, clipboards, and autonomous agents rather than bulk file transfers. To secure this environment, a 7-layer defense-in-depth model is required, one that treats the conversation itself as the new perimeter.

1. Identity: The Only Verifiable Perimeter
Identity is the primary control plane. Access to AI services must be treated with the same rigor as administrative access to core infrastructure. The strategy centers on enforcing device-bound Conditional Access, where access is strictly contingent on device health. To solve the "Account Leak" problem, deployment of Tenant Restrictions v2 (TRv2) is essential to prevent users from signing into personal tenants using corporate-managed devices. For enhanced coverage, Universal Tenant Restrictions (UTR) via Global Secure Access (GSA) allows for consistent enforcement at the cloud edge. While TRv2 authentication-plane protection is GA, data-plane protection is GA for the Microsoft 365 admin center and remains in preview for other workloads such as SharePoint and Teams.

2. Eliminating the Visibility Gap (Shadow AI)
You can't secure what you can't see. Microsoft Defender for Cloud Apps (MDCA) serves to discover and govern the enterprise AI footprint, while Purview DSPM for AI (formerly AI Hub) monitors Copilot and third-party interactions. By categorizing tools using MDCA risk scores and compliance attributes, organizations can apply automated sanctioning decisions and enforce session controls for high-risk endpoints.

3. Data Hygiene: Hardening the "Work IQ"
AI acts as a mirror of internal permissions. In a "flat" environment, AI acts like a search engine for your over-shared data. Hardening the foundation requires automated sensitivity labeling in Purview Information Protection. Identifying PII and proprietary code before assigning AI licenses ensures that labels travel with the data, preventing labeled content from being exfiltrated via prompts or unauthorized sharing.

4. Session Governance: Solving the "Clipboard Leak"
The most common leak in 2025 is not a file upload; it's a simple copy-paste action or a USB transfer. Deploying Conditional Access App Control (CAAC) via MDCA session policies allows sanctioned apps to function while specifically blocking cut/copy/paste. This is complemented by Endpoint DLP, which extends governance to the physical device level, preventing sensitive data from being moved to unmanaged USB storage or printers during an AI-assisted workflow. Purview Information Protection with IRM rounds this out by enforcing encryption and usage rights on the files themselves. When a user tries to print a "Do Not Print" document, Purview triggers an alert that flows into Microsoft Sentinel. This gives the SOC visibility into actual policy violations instead of having to hunt through generic activity logs.

5. The "Agentic" Era: Agent 365 & Sharing Controls
Now that we're moving from "Chat" to "Agents", Agent 365 and Entra Agent ID provide the necessary identity and control plane for autonomous entities.
A quick tip: in large-scale tenants, default settings often present a governance risk. A critical first step is navigating to the Microsoft 365 admin center (Copilot > Agents) to disable the default "Anyone in organization" sharing option. Restricting agent creation and sharing to a validated security group is essential to prevent unvetted agent sprawl and ensure that only compliant agents are discoverable.

6. The Human Layer: "Safe Harbors" over Bans
Security fails when it creates more friction than the risk it seeks to mitigate. Instead of an outright ban, investment in AI skilling, teaching users context minimization (redacting specifics before interacting with a model), is the better path. Providing a sanctioned, enterprise-grade "Safe Harbor" like M365 Copilot offers a superior tool that naturally cuts down the use of Shadow AI.

7. Continuous Ops: Monitoring & Regulatory Audit
Security is not a "set and forget" project, particularly with the EU AI Act on the horizon. Correlating AI interactions and DLP alerts in Microsoft Sentinel using Purview Audit data (specifically the CopilotInteraction logs) allows for real-time responses. Automated SOAR playbooks can then trigger protective actions, such as revoking an Agent ID, if an entity attempts to access sensitive HR or financial data.

Final Thoughts
Securing AI at scale is an architectural shift. By layering Identity, Session Governance, and Agentic Identity, AI moves from being a fragmented risk to a governed tool that actually works for the modern workplace.