Security and AI Essentials
Protect your organization with AI-powered, end-to-end security.
Defend Against Threats
Get ahead of threat actors with integrated solutions.
Secure All Your Clouds
Protection from code to runtime.
Secure All Access
Secure access for any identity, anywhere, to any resource.
Protect Your Data
Comprehensive data security across your entire estate.
Recent Blogs
Happy New Year and welcome to our first newsletter of 2026!
This year, we’re doubling down on something that matters to every one of us: keeping data safe without slowing innovation. Security isn’t...
Jan 07, 2026 · 66 Views · 1 Like · 0 Comments
In our first post, we established why securing AI workloads is mission-critical for the enterprise. Now, we turn to the AI pipeline—the end-to-end journey from raw data to deployed models—and explore...
Jan 06, 2026 · 153 Views · 0 Likes · 0 Comments
6 MIN READ
The cost of poor-quality data runs into millions of dollars in direct losses. When indirect costs—such as missed opportunities—are included, the total impact is many times higher. Poor data quality a...
Jan 06, 2026 · 105 Views · 0 Likes · 0 Comments
5 MIN READ
Understanding Artificial Intelligence
Artificial intelligence (AI) refers to computational systems that perform tasks associated with human intelligence: learning, reasoning, problem-solving, perception, and language un...
Jan 06, 2026 · 158 Views · 0 Likes · 0 Comments
Recent Discussions
Aggregate alerts not showing up for Email DLP
Hi, I'm unable to see the "Aggregate alerts" option while configuring an Email DLP policy, although the same option is visible for Endpoint DLP. The available license is Microsoft 365 E5 Information Protection and DLP (add-on). If this is a licensing limitation, why am I still able to see the option for Endpoint DLP but not for Email DLP? Screenshot showing the option for Endpoint DLP alerts.
Block transfer of labelled data through CLI Apps - PowerShell
I have had a ticket open with Microsoft since mid-November, and to date it is not fixed; I am still chasing it. We have labelled data, using a custom label, Intellectual Property. We block and alert using it: uploads to a list of URLs, prompt-to-override, and so on. So the label works. The next step is to prevent exfiltration via CLI apps, and this is where the issue is: it is not working. Would you have any idea if this actually works? Has anyone set it up? In Settings, under Restricted apps and app groups, I have set up the following. Then I created a policy, applied to my machine and my user, to block the move and upload of data labelled as Intellectual Property (sensitivity label). It should block when I am using WinSCP or PowerShell. It does not. I tried with the restricted app group and with access by restricted apps; neither works. My machine is in sync.
47 Views · 0 Likes · 1 Comment
Test DLP Policy: On-Prem
We have DLP policies based on SITs, and they are working well for various locations such as SharePoint, Exchange, and endpoint devices. But the DLP policy for on-prem NAS shares is not matching when used with the Microsoft Information Protection scanner. DLP rule conditions: content contains any of these sensitive info types: Credit Card Number, U.S. Bank Account Number, U.S. Driver's License Number, U.S. Individual Taxpayer Identification Number (ITIN), U.S. Social Security Number (SSN). The policy is visible to the scanner and is logged as being executed: MSIP.Lib MSIP.Scanner (30548) Executing policy: Data Discovery On-Prem, policyId: 85........................ The MIP reports list files with these SITs. The results: Information Type Name - Credit Card Number, U.S. Social Security Number (SSN), U.S. Bank Account Number; Action - Classified; Dlp Mode - Test; Dlp Status - Skipped; Dlp Comment - No match. There is no other information in the logs. Why is the DLP policy not matching, and how can I test the policy? Thanks
Retention policy for Teams chat not working
Hello All, I have created a Teams chat retention policy under Data Lifecycle Management; it is set for 1-month retention. However, it is not working: messages older than 1 month are still appearing in chat. Please let me know if I have missed any specific setting in the policy or any prerequisite. I have waited for more than 7 days after the 30 days of retention.
37 Views · 0 Likes · 1 Comment
eDiscovery KeyQL
I am hoping someone might be able to help me with some KeyQL syntax. I want to find documents that contain a combination of SITs with a minimum occurrence of 1 and a confidence level of between 85 and 100%. I have used the following syntax, which shows no errors before I run the query. I have tested the first sensitive type using the condition builder and it returns matches, but even if I try the first line of KeyQL on its own, nothing is returned. Could anyone help please?
SensitiveType:"50b8b56b-4ef8-44c2-a924-03374f5831ce" |1..|85..100 (Microsoft built-in SIT "All Full Names")
AND SensitiveType:"accaf4c2-fb54-40f7-aea8-db0e36a2e9eb" |1..|85..100 (custom SIT "DOB")
AND SensitiveType:"8B9E5FBC-4AA9-4017-8256-BE3E8241AEB5" |1..|85..100 (Microsoft built-in SIT "U.K. Physical Address")
Thanks, Chris
33 Views · 0 Likes · 1 Comment
Data Quality Error (Internal Service Error)
I am facing an issue while running the DQ scan. With both manual and scheduled scans, I get an Internal Service Error (DataQualityInternalError: Internal service error occurred. Please retry or contact Microsoft support). Data profiling runs successfully, but DQ is not working for any asset. After the lineage patch that MS shipped, they introduced a Custom SQL option for creating rules, and only since then have I been facing this issue. Is anyone else seeing the same? I tried with different data sources (ADLS and Synapse), and it is the same for both. If anyone has an idea, please share it here; it would be helpful.
How to offboard an endpoint from Purview
Hi, I'm a fresh user of Purview, and after creating policies linked to Exchange, I enabled the onboarding of computers. Unfortunately, all Defender endpoints were onboarded, and I was not able to define which ones were concerned. Now I would like to offboard all those devices from Purview and keep them only in Defender, without any DLP protection. I tried to remove them with the offboarding script, but my endpoints are still present in Purview. How can I completely remove them? Thanks for your help, Yohann
218 Views · 0 Likes · 3 Comments
Entra Risky Users Custom Role
My customer implemented unified RBAC (Defender portal) and removed the Entra Security Operator role. They lost the ability to manage risky users in Entra. Two options explored by the customer: the Protected Identity Administrator role (licensing unclear), or creating a custom role with microsoft.directory/identityProtection/riskyUsers/update, which they couldn't find among the custom role permissions. Do you know if there are other options to manage risky users without using the Security Operator role?
How to Prevent Workspace Details from Appearing in LAQueryLogs During Cross-Workspace Queries
I've onboarded multiple workspaces using Azure Lighthouse, and I'm running cross-workspace KQL queries using the workspace() function. However, I've noticed that LAQueryLogs records the query in every referenced workspace, and the RequestContext field includes details about all the other workspaces involved in the query. Is there any way to run cross-workspace queries without having all the workspace details logged in LAQueryLogs for each referenced workspace?
9 Views · 0 Likes · 0 Comments
Cannot set up phone sign-in with Microsoft Authenticator
Hi All, my new Redmi Turbo 4 was working with Microsoft Authenticator, but in the past month it started malfunctioning, so I decided to reset the Authenticator app and sign back into it. Now I can't set up the app for phone sign-in, and the sign-in request notifications do not arrive on the new phone (the old phone is currently still operational). Is there something like a shadow ban on Chinese Android phones?
6 Views · 0 Likes · 0 Comments
Issue when ingesting Defender XDR tables in Sentinel
Hello, we are migrating our on-premises SIEM solution to Microsoft Sentinel, since we have E5 licences for all our users. The integration between Defender XDR and Sentinel convinced us to make the move. We have a limited budget for Sentinel, and we found that the Auxiliary/Data Lake tier is sufficient for verbose log sources such as network logs. We would like to retain Defender XDR data for more than 30 days (the default retention period). We implemented the solution described in this blog post: https://jeffreyappel.nl/how-to-store-defender-xdr-data-for-years-in-sentinel-data-lake-without-expensive-ingestion-cost/ However, we are facing an issue with two tables: DeviceImageLoadEvents and DeviceFileCertificateInfo. The tables forwarded by Defender to Sentinel are empty, like this row: We created a support ticket, but so far we haven't received any solution. If anyone has experienced this issue, we would appreciate your feedback. Lucas
I have absolutely no idea what Microsoft Defender 365 wants me to do here
The process starts with an email: There's more below in the email - an offer for credit monitoring, an option to add another device, an option to download the mobile app - but I don't want to do any of them, so I click the "Open Defender" button, which results in this: OK, so my laptop is the bad boy here; there's that status of "Action recommended", with no "recommendations", and the only live link here is "Add device", something I don't need to do. The only potential "problem" I can even guess at here is that Microsoft is telling me the laptop needs updating. Since I seldom use the laptop, only when traveling, I'd guess the update will occur the next time I fire it up, but of course I don't really know that's the recommended action it's warning me about, do I? You'd expect that if something is warning you "ACTION NEEDED!!!" it would be a little more explicit, wouldn't you?
Microsoft Purview Data Map Approach to scan
I plan to scan Purview data assets owner by owner, rather than scanning entire databases in one go, because this approach aligns with data governance and RBAC (role-based access control) principles. By segmenting scans by asset ownership, we ensure that only the designated data asset owners can edit or update metadata for their respective assets in Purview. This prevents broad, unrestricted access and maintains accountability, as each owner manages the metadata for the tables and datasets they are responsible for. Scanning everything at once would make it harder to enforce these permissions and could lead to unnecessary exposure of metadata management rights. This owner-based scanning strategy keeps governance tight, supports compliance, and ensures that metadata stewardship remains with the right people.
This approach also aligns with Microsoft Purview best practices and the RBAC model: Microsoft recommends scoping scans to specific collections or assets rather than ingesting everything at once, allowing different teams or owners to manage their own domains securely and efficiently. Purview supports metadata curation via roles such as Data Owner and Data Curator, ensuring that only users assigned as owners (those with write or owner permissions on specific assets) can edit metadata like descriptions, contacts, or column details. The system adheres to the principle of least privilege: users with owner/write permissions can manage metadata for their assets, while broader curation roles apply only where explicitly granted. Therefore, scanning owner by owner not only enforces governance boundaries but also ensures each data asset owner retains exclusive editing rights over their metadata, supporting accountability, security, and compliance. After scanning by ownership, we can aggregate those assets into a logical data product representing the full database without breaking governance boundaries.
Is this considered best practice for managing metadata in Microsoft Purview, and does it confirm that my approach is correct?
Solved · 79 Views · 0 Likes · 2 Comments
Request for Advice on Managing Shared Glossary Terms in Microsoft Purview
Hi everyone, I'm looking for guidance from others who are working with Microsoft Purview (Unified Catalog), especially around glossary and governance domain design. Scenario: I have multiple governance domains, all within the same Purview tenant. We have some core business concepts from our conceptual data models (for example, a term like "PARTY") that are needed in every governance domain. However, based on Microsoft's documentation: glossary terms can only be created inside a specific governance domain, not at a tenant-wide or global level. The Enterprise Glossary is only a consolidated view, not a place to create global terms or maintain a single shared version; it simply displays all terms from all domains in one list. If the same term is needed across domains, Purview requires separate term objects in each domain. Consistency must therefore be managed manually (by re-creating the term in each domain) or by importing/exporting via CSV or automation (API/PyApacheAtlas). This leads to questions about maintainability, especially when we want one consistent definition across all domains. What I'm hoping to understand from others: How are you handling shared enterprise concepts (enterprise and conceptual data models) that need to appear in multiple governance domains? Are you duplicating terms in each domain and synchronising them manually or via automation? Have you adopted a "central domain" for hosting enterprise-standard terms and then linking or referencing them in other domains? Is there a better pattern you've found to avoid fragmentation and to ensure consistent definitions across domains? Any advice, lessons learned, or examples of how you've structured glossary governance in Purview would be really helpful.
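On the automation route mentioned above (re-creating the same term in each domain via the API), a rough sketch of what that synchronisation could look like follows. This is illustrative only: the endpoint path follows the Apache Atlas v2 glossary API that the classic Purview Data Map exposes, the account name and glossary GUIDs are hypothetical, and the newer Unified Catalog governance-domain APIs may differ.

```python
# Sketch: push one shared term definition into several domains, one term
# object per domain, since Purview has no tenant-wide glossary term.
# Endpoint path per the Apache Atlas v2 glossary API; account name,
# GUIDs, and the authenticated session are hypothetical placeholders.

PURVIEW_CATALOG = "https://<account>.purview.azure.com/catalog/api/atlas/v2"

def term_payload(name: str, definition: str, glossary_guid: str) -> dict:
    """Build an AtlasGlossaryTerm body anchored to one domain's glossary."""
    return {
        "name": name,
        "longDescription": definition,
        "anchor": {"glossaryGuid": glossary_guid},
    }

def sync_term(session, name: str, definition: str, domain_glossaries: dict) -> None:
    """POST the same definition into every domain's glossary."""
    for domain, glossary_guid in domain_glossaries.items():
        resp = session.post(
            f"{PURVIEW_CATALOG}/glossary/term",
            json=term_payload(name, definition, glossary_guid),
        )
        resp.raise_for_status()
        print(f"created '{name}' in {domain}")
```

Here `session` would be something like a `requests.Session` carrying an Azure AD bearer token (e.g. obtained via azure-identity); running a job like this whenever the enterprise model changes keeps the duplicated term objects in step, at the cost of owning the sync logic yourself.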
This would be a primary ORKS: establish a unified method to consistently link individual entities (e.g., PARTY) to their associated PII-classified, column-level data assets in Microsoft Purview, ensuring sensitive data is accurately identified, governed, and monitored across all domains (i.e., CDEs to glossary terms). Thanks in advance!
25 Views · 0 Likes · 0 Comments
From “No” to “Now”: A 7-Layer Strategy for Enterprise AI Safety
The "block" posture on generative AI has failed. In a global enterprise, banning these tools doesn't stop usage; it simply pushes intellectual property into unmanaged channels and creates a massive visibility gap in corporate telemetry. The priority has now shifted from stopping AI to hardening the environment so that innovation can run at velocity without compromising data sovereignty. Traditional security perimeters are ineffective against the "slow bleed" of AI leakage, where data moves through prompts, clipboards, and autonomous agents rather than bulk file transfers. To secure this environment, a 7-layer defense-in-depth model is required, one that treats the conversation itself as the new perimeter.

1. Identity: The Only Verifiable Perimeter
Identity is the primary control plane. Access to AI services must be treated with the same rigor as administrative access to core infrastructure. The strategy centers on enforcing device-bound Conditional Access, where access is strictly contingent on device health. To solve the "account leak" problem, deploying Tenant Restrictions v2 (TRv2) is essential to prevent users from signing into personal tenants on corporate-managed devices. For broader coverage, Universal Tenant Restrictions (UTR) via Global Secure Access (GSA) allows consistent enforcement at the cloud edge. While TRv2 authentication-plane protection is GA, data-plane protection is GA for the Microsoft 365 admin center and remains in preview for other workloads such as SharePoint and Teams.

2. Eliminating the Visibility Gap (Shadow AI)
You can't secure what you can't see. Microsoft Defender for Cloud Apps (MDCA) serves to discover and govern the enterprise AI footprint, while Purview DSPM for AI (formerly AI Hub) monitors Copilot and third-party interactions. By categorizing tools using MDCA risk scores and compliance attributes, organizations can apply automated sanctioning decisions and enforce session controls for high-risk endpoints.

3. Data Hygiene: Hardening the "Work IQ"
AI acts as a mirror of internal permissions. In a "flat" environment, AI behaves like a search engine for your over-shared data. Hardening the foundation requires automated sensitivity labeling in Purview Information Protection. Identifying PII and proprietary code before assigning AI licenses ensures that labels travel with the data, preventing labeled content from being exfiltrated via prompts or unauthorized sharing.

4. Session Governance: Solving the "Clipboard Leak"
The most common leak in 2025 is not a file upload; it's a simple copy-paste action or a USB transfer. Deploying Conditional Access App Control (CAAC) via MDCA session policies allows sanctioned apps to function while specifically blocking cut/copy/paste. This is complemented by Endpoint DLP, which extends governance to the physical device level, preventing sensitive data from being moved to unmanaged USB storage or printers during an AI-assisted workflow. Purview Information Protection with IRM rounds this out by enforcing encryption and usage rights on the files themselves. When a user tries to print a "Do Not Print" document, Purview triggers an alert that flows into Microsoft Sentinel. This gives the SOC visibility into actual policy violations instead of having to hunt through generic activity logs.

5. The "Agentic" Era: Agent 365 and Sharing Controls
Now that we're moving from "chat" to "agents", Agent 365 and Entra Agent ID provide the necessary identity and control plane for autonomous entities. A quick tip: in large-scale tenants, default settings often present a governance risk. A critical first step is navigating to the Microsoft 365 admin center (Copilot > Agents) to disable the default "Anyone in organization" sharing option. Restricting agent creation and sharing to a validated security group is essential to prevent unvetted agent sprawl and ensure that only compliant agents are discoverable.

6. The Human Layer: "Safe Harbors" over Bans
Security fails when it creates more friction than the risk it seeks to mitigate. Instead of an outright ban, investment in AI skilling (teaching users context minimization: redacting specifics before interacting with a model) is the better path. Providing a sanctioned, enterprise-grade "safe harbor" like M365 Copilot offers a superior tool that naturally cuts down the use of shadow AI.

7. Continuous Ops: Monitoring and Regulatory Audit
Security is not a "set and forget" project, particularly with the EU AI Act on the horizon. Correlating AI interactions and DLP alerts in Microsoft Sentinel using Purview Audit data (specifically the CopilotInteraction logs) allows for real-time responses. Automated SOAR playbooks can then trigger protective actions, such as revoking an Agent ID, if an entity attempts to access sensitive HR or financial data.

Final Thoughts
Securing AI at scale is an architectural shift. By layering identity, session governance, and agentic identity, AI moves from being a fragmented risk to a governed tool that actually works for the modern workplace.

Updating SDK for Java used by Defender for Server/CSPM in AWS
Hi, I have a customer who is running Defender for Cloud/CSPM in AWS. Last week, the AWS Health Dashboard lit up with a recommendation about the use of AWS SDK for Java 1.x in their organization; that version reaches end of support on December 31, 2025, and the recommendation is to migrate to AWS SDK for Java 2.x. The issue is present in all of the AWS workload accounts. They found that a large share of these alerts is caused by the Defender CSPM service, running remotely and using AWS SDK for Java 1.x. The customer attached a couple of sample events gathered from the CloudTrail logs. Note that in both cases: assumed-role: DefenderForCloud-Ciem; sourceIP: 20.237.136.191 (MS Azure range); userAgent: aws-sdk-java/1.12.742 Linux/6.6.112.1-2.azl3 OpenJDK_64-Bit_Server_VM/21.0.9+10-LTS java/21.0.9 kotlin/1.6.20 vendor/Microsoft cfg/retry-mode/legacy cfg/auth-source#unknown. Can someone provide guidance on this? How can we find out whether DfC is going to move to AWS SDK for Java 2.x after Dec 31, 2025? Thanks, Terru
Lineage in Purview - Fabric
Dears, I was checking the lineage collected from Microsoft Fabric by Purview, and I saw these limitations on the Microsoft page: I have some sources like: Dataverse (I am doing a Dataverse direct shortcut to bring the information from Dataverse into the Fabric raw layer); SharePoint (I am doing a Fabric shortcut into SharePoint to bring the files into the Fabric raw layer); SQL Server MI (I am doing Fabric mirroring into the Fabric raw layer). I did not understand the lineage exceptions mentioned above very well, so let me ask these questions please: 1) Suppose I have a table which came from SQL Server MI. This table was first ingested into the Fabric raw layer, then into the bronze layer, then the silver layer, and finally the gold layer, with each layer in its own Fabric workspace. When I bring the metadata via Purview into the Purview Unified Catalog, do I get automatic lineage between the silver layer and the gold layer for that table (suppose the table is named table1)? Or is this not possible because: 1.1) they are different workspaces, or 1.2) the table originated outside Fabric, in SQL Server MI? 2) Suppose I have a table in the silver layer (in the silver-layer workspace); then, in the gold layer, I create a shortcut to that table and use a notebook to transform it into another table in the same gold layer. Will I have lineage between the silver table and the new transformed table in the gold layer? 3) To create the lineage relationships between tables that are not created automatically, I was thinking of using the Purview REST APIs. Which API should I use specifically for this? Thanks, Pedro
53 Views · 0 Likes · 1 Comment
Ingesting Windows Security Events into Custom Data Lake Tables Without Using Microsoft-Prefixed Tables
Hi everyone, I'm looking to see whether there is a supported method to ingest Windows Security Events into custom Microsoft Sentinel Data Lake-tiered tables (for example, SecurityEvents_CL) without writing to or modifying the Microsoft-prefixed analytical tables. Essentially, I want to route these events directly into custom tables only, bypassing the default Microsoft-managed tables entirely. Has anyone implemented this, or is there a recommended approach? Thanks in advance for any guidance. Best Regards, Prabhu Kiran
Latest Threat Intelligence (December 2025)
Microsoft Defender for IoT has released the December 2025 Threat Intelligence package. The package is available for download from the Microsoft Defender for IoT portal (click Updates, then Download file). Threat Intelligence updates reflect the combined impact of proprietary research and threat intelligence carried out by Microsoft security teams. Each package contains the latest CVEs (Common Vulnerabilities and Exposures), IOCs (Indicators of Compromise), and other indicators applicable to IoT/ICS/OT networks (published during the past month), researched and implemented by Microsoft Threat Intelligence Research - CPS. The CVE scores are aligned with the National Vulnerability Database (NVD). Starting with the August 2023 threat intelligence updates, CVSSv3 scores are shown where relevant; otherwise, CVSSv2 scores are shown. Guidance: customers are advised to update their systems with the latest TI package in order to detect potential exposure risks and vulnerabilities in their networks and on their devices. Threat Intelligence packages are updated every month with the most up-to-date security information available, ensuring that Microsoft Defender for IoT can identify malicious actors and behaviors on devices. Update your system with the latest TI package: the package is available for download from the Microsoft Defender for IoT portal (click Updates, then Download file). For more information, please review Update threat intelligence data | Microsoft Docs. MD5 Hash: 5c642a16bf56cb6d98ef8b12fdc89939. For cloud-connected sensors, Microsoft Defender for IoT can automatically update new threat intelligence packages following their release; see the documentation for more information.