Security and AI Essentials
Protect your organization with AI-powered, end-to-end security.
Defend Against Threats
Get ahead of threat actors with integrated solutions.
Secure All Your Clouds
Protection from code to runtime.
Secure All Access
Secure access for any identity, anywhere, to any resource.
Protect Your Data
Comprehensive data security across your entire estate.
Recent Blogs
Architecting Trust: A NIST-Based Security Governance Framework for AI Agents
The "Agentic Era" has arrived. We are moving from chatbots that simply talk to agents that act—triggering APIs, querying...
Jan 30, 2026 · 577 Views · 0 likes · 0 Comments · 4 MIN READ
This article describes a simple yet effective solution to the problem of segregating Microsoft Defender XDR and Entra ID log ingestion into Sentinel in a single-tenant, multiple-company scenario,...
Jan 30, 2026 · 859 Views · 2 likes · 0 Comments
Going to RSA? We’re recording short, 15-minute Microsoft Security podcast conversations live at the conference, focused on real-world practitioner experience. No pitches, no slides, no marketing. Jus...
Jan 29, 2026 · 93 Views · 0 likes · 0 Comments
Onboard new tenants and maintain a consistent security baseline
We’re excited to announce a set of new content types that are now supported by the multi-tenant content distribution capability in th...
Jan 29, 2026 · 180 Views · 0 likes · 0 Comments
Recent Discussions
Allow Uniqueness of Glossary Terms across Governance Domains
When glossary terms are created and published, there is no check for the same term name in another governance domain. Some organizations do want to enforce term uniqueness across all domains. Would it be feasible to provide an optional switch in Unified Catalog settings to turn this on?

15 Views · 0 likes · 0 Comments

Workaround Enabling Purview Data Quality & Profiling for Cross-Tenant Microsoft Fabric Assets
The Challenge: Cross-Tenant Data Quality Blockers

Like many of you, I have been managing a complex architecture where Microsoft Purview sits in Tenant A and Microsoft Fabric resides in Tenant B. While we can achieve basic metadata scanning (with some configuration), I hit a hard wall when trying to enable Data Quality (DQ) scanning. Purview's native Data Quality scan for Fabric currently faces limitations in cross-tenant scenarios, preventing us from running Profiling or applying DQ Rules directly on the remote Delta tables.

The Experiment: "Governance Staging" Architecture

Rather than waiting for a native API fix, I conducted an experiment to bridge this gap using a "Data Staging" approach. The goal was to bring the data's "physicality" into the same tenant as Purview to unlock the full DQ engine.

The Solution Steps:

1. Data Movement (Tenant B to Tenant A): Inside the Fabric Workspace (Tenant B), I created a Fabric Data Pipeline and used it to export the critical Delta Tables as Parquet files to an ADLS Gen2 account located in Tenant A (the same tenant as Purview). Note: you can schedule this to run daily to keep the "Governance Copy" fresh.
2. Native Scanning (Tenant A): I registered this ADLS Gen2 account as a source in Purview. Because both Purview and the ADLS account are in the same tenant, the scan was seamless, instantaneous, and required no complex authentication hurdles.
3. Activating Data Quality: Once the Parquet files were scanned, I attached these assets to a Data Product in the Purview Data Governance portal.

The Results:

The results were immediate and successful. Because the data now resides on a fully supported, same-tenant ADLS Gen2 surface:

✅ Data Profiling: I could instantly see column statistics, null distributions, and value patterns.
✅ DQ Rules: I was able to apply custom logic and business rules to the data.
✅ Scans: The DQ scan ran successfully, generating a Data Quality Score for our Fabric data.

Conclusion:

While we await native cross-tenant "Live View" support for DQ in Fabric, this workaround works today. It allows you to leverage the full power of Microsoft Purview's Data Quality engine immediately. If you are blocked by tenant boundaries, I highly recommend setting up a lightweight "Governance Staging" container in your primary tenant. Has anyone else experimented with similar staging patterns for Governance? Let's discuss below.

Solved

Data Product Owner and Contacts should be separate fields
Currently, the 'contacts' field under a data product has a one-to-one relationship with the 'data product owner' field. It is not possible to add 'contacts' separately. I believe this does not make sense for most organizations. For example, our data products have one owner and multiple contacts (e.g. data stewards, data experts). That's how our governance works. We are not going to add people to the 'data product owner' field who are not data owners, just to show them in contacts. Also, why would you have two fields that basically do the same thing? Clicking on 'data product owner' already gives me the information for 'contacts'. Please let us add contacts here that are not the data product owner.

390 Views · 4 likes · 8 Comments

Fido passkeys blocked by policy
Hi all,

I'm helping a customer deploy physical passkeys and I'm running into a weird error. I've activated the sign-in method, selected the two AAGuids for the Authenticator app, and added the right AAGuid for the brand and model of passkey we are using. We can select the authentication method and enroll the security key correctly, but when trying to sign in with it we get the error displayed in the attached picture. When checking the sign-in logs I get this error message: "FIDO sign-in is disabled via policy", with error code 135016. I've not been able to track down any policy that would be blocking passkeys. Anyone got any ideas?

Encryption disappears in Outlook - Sensitivity Label not working
Hello everyone, we implemented Sensitivity Labels at our client and are seeing inconsistent and unexpected behavior we cannot explain. Maybe some of you can help or have ideas on what's going on.

Scenario / Use Case

A customer is using Sensitivity Labels to encrypt emails in Exchange Online. Label configuration:

- The sensitivity label applies encryption
- The label is scoped (published) to a Microsoft 365 group
- User A and User B are members of this Microsoft 365 group and therefore can apply the label
- Users are licensed with M365 Business Premium
- The label is published and available to User A and User B (members of the above M365 group)
- User C is an external recipient and not included in the label's publishing scope

Observed Behaviors

Scenario 1 – Encryption Lost When Forwarded Externally

User A (internal) sends an email to User B (internal) using a sensitivity label that applies encryption. User B receives the email correctly: the lock icon in Outlook is displayed and the message is encrypted as expected. User B forwards the email to User C (external). User C receives the forwarded email unencrypted: no lock icon is shown, and User C can read the entire conversation history, including content that was previously encrypted.

Scenario 2 – Encryption Disappears Within an Internal Email Conversation

In addition to the external forwarding scenario, we are also observing the following behavior within an internal email thread: User A sends an encrypted email to User B using the sensitivity label. User B replies to User A: the reply remains encrypted. User A replies again within the same conversation. Suddenly, the encryption disappears: the lock icon is no longer shown, and the message and the full conversation history are no longer protected. This happens without any user action to remove or change the sensitivity label.

Key Observation

Both scenarios occur intermittently: sometimes encryption behaves as expected, sometimes it disappears out of nowhere. The behavior is not reliably reproducible, which makes troubleshooting very difficult. Any help is appreciated!

How do you work around the client restrictions for opening encrypted documents?
We want to roll out Purview sensitivity labels, specifically encrypted labels, so we can implement controls such as preventing printing, copy/paste, etc. The issue we have run into is that once an Office doc is encrypted, there appear to be only two ways to open it:

1. In a licensed Office desktop client
2. Sharing a link to the document in SharePoint so it can be opened in a web browser

We share documents with a large variety of third parties that do not use Office. Many are small businesses who seem to prefer Google Workspace, so no Office clients. The SharePoint web browser option also does not work for us, as we require users to have an Entra ID account to access our SharePoint, and it would not be feasible to onboard the number of external users we share documents with (nor to purchase O365 licenses for all of them). We considered using both encrypted and non-encrypted labels, applying encryption only when the recipient uses Office. However, there is no way for our internal users to know whether the person they are sending a document to is using Office. So now we are left not really knowing what to do. I would love to hear suggestions for how other organizations have handled this.

39 Views · 1 like · 1 Comment

Cross workspace lineage for Fabric Lakehouse tables in Purview
I’m currently exploring lineage capture in Microsoft Purview for Fabric Lakehouse tables that are spread across multiple workspaces, following a medallion architecture (Bronze, Silver, Gold in separate Fabric workspaces). While reviewing the documentation, I noticed the stated limitation around cross-workspace lineage for non-Power BI assets, as mentioned here: https://learn.microsoft.com/en-us/purview/data-map-lineage-fabric

- Is there any update or workaround planned to support cross-workspace Fabric lineage in Purview?
- Is this limitation on the product roadmap or actively being worked on?
- Until native support is available, are there any recommended design patterns to handle lineage in this scenario?

Cloud only Entra ID Domain Services and Seamless SSO from Entra ID Joined machines
Hello,

I am currently implementing Entra ID Domain Services with a customer (they have no on-premises Active Directory). We now face the issue that an Entra ID joined client is not able to access resources on machines that are joined to Entra ID Domain Services without entering a username and password. The authentication fails with an incorrect-username-and-password message (event ID 200), and the Security-Kerberos event log reports that it was not able to contact a domain controller for the AzureAd domain (so the client is not using the domain name of the target domain). Has someone already tried this? Is there something I am overlooking, or is this something that simply cannot work? Thank you very much in advance for any ideas.

MS Defender setting
Hello, I have a question. I'm not from an English-speaking country, so please understand any shortcomings. I'm trying to block or alert on specific URLs in Microsoft Defender > Settings > Endpoints > Rules > Indicators. I've completed the setup, but I'd like to customize the screen that appears on the webpage when an alert is triggered. Is there a way to do this? Thank you in advance for your help.

88 Views · 0 likes · 2 Comments

Defender for Business - No alert after process lock out?
Hello all,

A few days ago, I set up Defender for Business server on a Windows Server 2019 machine. I can see that server in the Microsoft security portal device list. I also tested the "suspicious" PowerShell command provided by Microsoft and it all went well: PowerShell blocked, alert escalated to an incident in the security portal, email received, and so on. But the next day, I tried to install a service on that server that got blocked by Virus & Threat Protection because it was attempting to modify a lot of files. That was a good point for Defender (it was not a real threat and was later added as an exception). My worry is that it was never escalated to the security portal and I didn't receive an alert email. The system blocked that "threat" multiple times during my attempts to deploy it, and no incident was raised. What could be wrong? Thank you.

URL rewriting does not apply during Attack Simulation (Credential Harvesting)
I’m running a credential-harvesting attack simulation in Microsoft Defender for Office 365, but the URL rewriting does not work as expected. In the final confirmation screen, the phishing link is shown as rewritten to something like: https://security.microsoft.com/attacksimulator/redirect?... However, during the actual simulation, the link is NOT rewritten. It stays as the original domain (e.g., www.officentry.com), which causes the simulation to fail with an error. I’m not sure whether this behavior is related to Safe Links or something else within Defender. Why is the URL not rewritten at runtime, and how can I ensure that the redirect link is applied correctly in the actual simulation?

16 Views · 0 likes · 0 Comments

Understand New Sentinel Pricing Model with Sentinel Data Lake Tier
Introduction: Sentinel and its New Pricing Model

Microsoft Sentinel is a cloud-native Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) platform that collects, analyzes, and correlates security data from across your environment to detect threats and automate response. Traditionally, Sentinel stored all ingested data in the Analytics tier (Log Analytics workspace), which is powerful but expensive for high-volume logs. To reduce cost and enable customers to retain all security data without compromise, Microsoft introduced a new dual-tier pricing model consisting of the Analytics tier and the Data Lake tier. The Analytics tier continues to support fast, real-time querying and analytics for core security scenarios, while the new Data Lake tier provides very low-cost storage for long-term retention and high-volume datasets. Customers can now choose where each data type lands: the Analytics tier for high-value detections and investigations, and the Data Lake tier for large or archival types. This allows organizations to significantly lower cost while still retaining all their security data for analytics, compliance, and hunting. The flow diagram below depicts the new Sentinel pricing model.

Now let's understand this new pricing model with the scenarios below:

- Scenario 1A (Pay-As-You-Go)
- Scenario 1B (Usage Commitment)
- Scenario 2 (Data Lake Tier Only)

Scenario 1A (Pay-As-You-Go)

Requirement: Suppose you need to ingest 10 GB of data per day, and you must retain that data for 2 years. However, you will only frequently use, query, and analyze the data for the first 6 months.

Solution: To optimize cost, you can ingest the data into the Analytics tier and retain it there for the first 6 months, where active querying and investigation happen. After that period, the remaining 18 months of retention can be shifted to the Data Lake tier, which provides low-cost storage for compliance and auditing needs. Note that you will be charged separately for Data Lake tier querying and analytics, depicted as Compute (D) in the pricing flow diagram.

Pricing Flow / Notes:

- The first 10 GB/day ingested into the Analytics tier is free for 31 days under the Analytics logs plan.
- All data ingested into the Analytics tier is automatically mirrored to the Data Lake tier at no additional ingestion or retention cost.
- For the first 6 months, you pay only for Analytics tier ingestion and retention, excluding any free capacity.
- For the next 18 months, you pay only for Data Lake tier retention, which is significantly cheaper.

Azure Pricing Calculator Equivalent: Assuming no data is queried or analyzed during the 18-month Data Lake tier retention period: although the Analytics tier retention is set to 6 months, the first 3 months of retention fall under the free retention limit, so retention charges apply only for the remaining 3 months of the analytics retention window. The Azure pricing calculator will adjust accordingly.

Scenario 1B (Usage Commitment)

Now, suppose you are ingesting 100 GB per day. If you follow the same pay-as-you-go pricing model described above, your estimated cost would be approximately $15,204 per month. However, you can reduce this cost by choosing a Commitment Tier, where Analytics tier ingestion is billed at a discounted rate. (To size a commitment tier, you first need to know your actual daily billable ingestion; see the KQL sketch at the end of this post.) Note that the discount applies only to Analytics tier ingestion; it does not apply to Analytics tier retention costs or to any Data Lake tier-related charges. Please refer to the pricing flow and the equivalent pricing calculator results shown below.
Monthly cost savings: $15,204 – $11,184 = $4,020 per month

Now the question is: what happens if your usage reaches 150 GB per day? Will the additional 50 GB be billed at the pay-as-you-go rate? No. The entire 150 GB/day will still be billed at the discounted rate associated with the 100 GB/day commitment tier bucket.

Azure Pricing Calculator Equivalent (100 GB/day)
Azure Pricing Calculator Equivalent (150 GB/day)

Scenario 2 (Data Lake Tier Only)

Requirement: Suppose you need to store certain audit or compliance logs amounting to 10 GB per day. These logs are not used for querying, analytics, or investigations on a regular basis, but must be retained for 2 years per your organization's compliance or forensic policies.

Solution: Since these logs are not actively analyzed, you should avoid ingesting them into the Analytics tier, which is more expensive and optimized for active querying. Instead, send them directly to the Data Lake tier, where they can be retained cost-effectively for future audit, compliance, or forensic needs.

Pricing Flow: Because the data is ingested directly into the Data Lake tier, you pay both ingestion and retention costs there for the entire 2-year period. If, at any point in the future, you need to perform advanced analytics, querying, or search, you will incur additional compute charges based on actual usage. Even with occasional compute charges, the cost remains significantly lower than storing the same data in the Analytics tier.

Realized Savings

| Scenario | Cost per month |
| --- | --- |
| Scenario 1: 10 GB/day in Analytics tier | $1,520.40 |
| Scenario 2: 10 GB/day directly into Data Lake tier | $202.20 (without compute); $257.20 (with sample compute price) |

Savings with no compute activity: $1,520.40 – $202.20 = $1,318.20 per month
Savings with some compute activity (sample value): $1,520.40 – $257.20 = $1,263.20 per month

Azure calculator equivalent without compute
Azure calculator equivalent with sample compute

Conclusion

The combination of the Analytics tier and the Data Lake tier in Microsoft Sentinel enables organizations to optimize cost based on how their security data is used. High-value logs that require frequent querying, real-time analytics, and investigation can be stored in the Analytics tier, which provides powerful search performance and built-in detection capabilities. At the same time, large-volume or infrequently accessed logs, such as audit, compliance, or long-term retention data, can be directed to the Data Lake tier, which offers dramatically lower storage and ingestion costs. Because all Analytics tier data is automatically mirrored to the Data Lake tier at no extra cost, customers can use the Analytics tier only for the period they actively query data, and rely on the Data Lake tier for the remaining retention. This tiered model allows different scenarios (active investigation, archival storage, compliance retention, or large-scale telemetry ingestion) to be handled at the most cost-effective layer, ultimately delivering substantial savings without sacrificing visibility, retention, or future analytical capabilities.
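Before committing to a tier, it helps to measure what you actually ingest. Here is a minimal KQL sketch against the standard Usage table of a Log Analytics workspace; the 30-day window is an illustrative choice, and the MB-to-GB conversion is approximate:

```kusto
// Billable ingestion per day over the last 30 days.
// Quantity in the Usage table is reported in MB; divide by 1000 for an approximate GB figure.
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize IngestedGB = sum(Quantity) / 1000.0 by bin(TimeGenerated, 1d)
| order by TimeGenerated asc
```

Averaging the IngestedGB column over the month gives the GB/day figure to compare against commitment tier brackets such as the 100 GB/day bucket discussed above.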
External (guest) users can't access my registered application

We have a FileMaker application registered with Entra ID, using OAuth, for internal and external (guest) users in my organization. Since January 19th, external users have been encountering a different authentication process, which results in a 404 error (see images below). No changes were made to Entra ID or the application configuration before this change in behaviour. It seems that logging in with a personal account results in an incorrect token for the redirect URL, which does not happen when logging in with organizational accounts.

Integrating Proofpoint and Mimecast Email Security with Microsoft Sentinel
Microsoft Sentinel can ingest rich email security telemetry from Proofpoint and Mimecast to power advanced phishing detection. The Proofpoint On Demand (POD) Email Security and Proofpoint Targeted Attack Protection (TAP) connectors pull threat logs (quarantines, spam, phishing attempts) and user click data into Sentinel. Similarly, the Mimecast Secure Email Gateway connector ingests detailed mail flow and targeted-threat logs (attachment/URL scans, impersonation events). These integrations use Azure-hosted ingestion (via Logic Apps or Azure Functions) and the new Codeless Connector framework to call vendor APIs on a schedule. The result is a consolidated dataset in Sentinel's Log Analytics, enabling correlated alerting and hunting across email, identity, and endpoint signals.

Figure: Phishing emails are processed by Mimecast's gateway and Proofpoint POD/TAP services. Security logs (delivery/quarantine events, malicious attachments/links, user clicks) flow into Microsoft Sentinel. In Sentinel, these mail signals are correlated with identity (Azure AD), endpoint (Defender) and network telemetry for end-to-end phishing detection.

Proofpoint POD (Email Protection) Connector

The Proofpoint POD connector ingests core email protection logs. It creates two tables, ProofpointPODMailLog_CL and ProofpointPODMessage_CL. These logs include per-message metadata (senders, recipients, subject, message size, timestamps), threat scores (spamScore, phishScore, malwareScore, impostorScore), and attachment details (number of attachments, names, hash values and sandbox verdicts). Quarantine actions are recorded (quarantine folder/rule), and malicious indicators (URL or file hash) and campaign IDs are tagged in the threatsInfoMap field. For example, each ProofpointPODMessage_CL record may carry a sender_s (sender email domain hashed), recipient list, subject, and any detected threat type (Phish/Malware/Spam/Impostor) with an associated threat hash or URL.

Deployment: Proofpoint POD uses Sentinel's codeless connector (an Azure Function behind the scenes). You must provide Proofpoint API credentials (Cluster ID and API token) in the connector UI. The connector periodically calls the Proofpoint SIEM API to fetch new log events (typically in 1–2 hour batches). The data lands in the above tables. (Older custom Logic App approaches similarly parse JSON output from the /v2/siem/messages endpoints.)

Proofpoint TAP (Targeted Attack Protection) Connector

Proofpoint TAP provides user-click and message-delivery events. Its connector creates four tables: ProofPointTAPMessagesDeliveredV2_CL, ProofPointTAPMessagesBlockedV2_CL, ProofPointTAPClicksPermittedV2_CL, and ProofPointTAPClicksBlockedV2_CL. The message tables report emails with detected threats (URL or attachment defense) that were delivered or blocked by TAP. They include the same fields as POD (message GUID, sender, recipients, subject, threat campaign ID, scores, attachment info). The click tables log when users click on URLs: each record has the URL, click timestamp (clickTime), the user's IP (clickIP), user-agent, the message GUID, and the threat ID/category. These fields let you see who clicked which malicious link and when. As the connector description notes, these logs give "visibility into Message and Click events in Microsoft Sentinel" for hunting.

Deployment: The TAP connector also uses the codeless framework. You supply a TAP API service principal and secret (Proofpoint SIEM API credentials) in the Sentinel content connector.
The function app calls TAP's /v2/siem/clicks/blocked, /v2/siem/clicks/permitted, /v2/siem/messages/blocked, and /v2/siem/messages/delivered endpoints. The Proofpoint SIEM API limits queries to 1-hour windows and 7-day history, with no paging (all events in the interval are returned). (A Logic App approach could also be used, as shown in the Tech Community blog: one HTTP GET per event type and a JSON parse before sending to Log Analytics.)

Mimecast Secure Email Gateway Connector

The Mimecast connector ingests the Secure Email Gateway (SEG) logs and targeted-threat protection (TTP) logs. Inbound, outbound, and internal mail events from the Mimecast MTA (receipt, processing, delivery stages) are pulled via the API. Typical fields include the unique message ID (aCode), sender, recipient, subject, attachment count/names, and the policy actions or holds (e.g. spam quarantine). For example, the Mimecast "Process" log shows AttCnt, AttNames, and whether the message was held (Hld) for review. Delivery logs include the success/failure and TLS details. In addition, Mimecast TTP logs are collected: URL Protect logs (when a user clicks a blocked URL) include the clicked URL (url), category (urlCategory), sender/recipient, and block reason. Impersonation Protect logs capture spoofing detections (e.g. if an internal name is impersonated), with fields like Sender, Recipient, Definition and Action (hold/quarantine). Attachment Protect logs record malicious file detections (filename, hash, threat type).

Deployment: Like Proofpoint, Mimecast's connector uses Azure Functions via the Sentinel content hub. You install the Mimecast solution, open the connector page, then enter Azure app credentials and Mimecast API keys (API Application ID/Key and Access/Secret for the service account). As shown in the deployment guide, you must provide the Azure Subscription, Resource Group, Log Analytics Workspace and the Azure Client (App) ID, Tenant ID and Object ID of the admin performing the setup. On the Mimecast side, you supply the API Base URL (regional), App ID/Secret and user Access/Secret. The connector creates a Function App that polls Mimecast's SIEM APIs on a cron schedule (default every 30 minutes). You can optionally specify a start date for backfilling up to 7 days of logs. The default tables are MimecastSIEM_CL (for email flow logs) and MimecastDLP_CL (for DLP/TTP events), though custom names can be set.

Ingestion Considerations

- Data Latency: All these connectors are pull-based and typically run on a schedule (often 30–60 minutes). For example, the Proofpoint POD docs note hourly log increments, and Mimecast logs are aggregated every 30 minutes. Expect a delay of up to an hour or more from event occurrence to Sentinel ingestion.
- Schema Nuances: The APIs often return nested arrays and optional fields. For instance, the Proofpoint blog warns that some JSON fields can be null or vary in type, so the parse schema should account for all possibilities. Similarly, Mimecast logs come in pipe-delimited or JSON format, with values sometimes empty (e.g. no attachments). In KQL, use tostring() or parse_json() on the raw _CL columns, and mv-expand on any multivalue fields (like message parts or threat lists); see the sketch after this list.
- Table Names: Use the connector's tables as listed. For Proofpoint POD: ProofpointPODMailLog_CL and ProofpointPODMessage_CL; for TAP: ProofPointTAPMessagesDeliveredV2_CL, ProofPointTAPMessagesBlockedV2_CL, ProofPointTAPClicksPermittedV2_CL, ProofPointTAPClicksBlockedV2_CL. For Mimecast SEG/TTP: MimecastSIEM_CL (SEG logs) and MimecastDLP_CL (TTP logs).
- API Behavior: The Proofpoint TAP API has no paging. Be aware of timezones (Proofpoint uses UTC) and use the Sentinel ingestion TimeGenerated or event timestamp fields for binning.
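To make the schema nuances concrete, here is a minimal KQL sketch of normalizing a nested threat list; the exact column names (threatsInfoMap_s and the JSON keys inside it) are assumptions that vary by connector version, so verify them against your workspace schema:

```kusto
// Hedged sketch: coerce a raw custom-log column to dynamic, then expand it.
// threatsInfoMap_s and the JSON keys below are assumed names; check your schema.
ProofpointPODMessage_CL
| extend Threats = parse_json(tostring(threatsInfoMap_s))   // string -> dynamic array
| mv-expand Threat = Threats                                // one row per threat entry
| project TimeGenerated,
          Classification = tostring(Threat.classification),  // e.g. Phish / Malware / Spam
          ThreatValue    = tostring(Threat.threat),           // URL or file hash indicator
          CampaignID     = tostring(Threat.campaignId)
```

The same tostring()/parse_json()/mv-expand pattern applies to Mimecast's multivalue fields, such as attachment name lists.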
Detection Engineering and Correlation

To detect phishing effectively, correlate these email logs with identity, endpoint, and intel data:

- Identity (Azure AD): Mail logs contain recipient addresses and (hashed) sender user parts. A common tactic is to correlate SMTP recipients or sender domains with Azure AD user records. For example, join TAP clicks by recipient to the user's UPN. The Proofpoint logs also include the clicker's IP (clickIP); you can match that to Azure AD sign-in logs or VPN logs to find which device/location clicked a malicious link (see the sign-in correlation sketch at the end of this post). Likewise, anomalous Azure AD sign-ins (impossible travel, MFA failure) after a suspicious email can strengthen the case.
- Endpoints (Defender): Once a user clicks a bad link or opens a malicious attachment (captured in TAP or Mimecast logs), watch for follow-on behaviors. For instance, use Sentinel's DeviceSecurityEvents or DeviceProcessEvents to see if that user's machine launched unusual processes. The threatID or URL hash from email events can be looked up in Defender's file data. Correlate by username (if available) or IP: if the email log shows a link click from IP X, see if any endpoint alerts or logon events occurred from X around the same time. As the Mimecast integration touts, this enables "correlation across Mimecast events, cloud, endpoint, and network data".
- Threat Intelligence: Use Sentinel's ThreatIntelligenceIndicator table or Microsoft's TI feeds to tag known bad URLs/domains in the email logs. For example, join ProofPointTAPClicksBlockedV2_CL on the clicked url against ThreatIntelligenceIndicator URL indicators to automatically flag hits. Proofpoint's logs already classify threats (malware/phish) and provide a threatID; you can enrich that with external intel (e.g. check whether the hash appears in TI feeds). Mimecast's URL logs include a urlCategory field, which can be mapped to known malicious categories. Automated playbooks can also pull intel: e.g. use Sentinel's TI REST API or watchlists containing phishing domains to annotate events.

In summary, a robust detection strategy might look like this: (1) identify malicious email events (high phish scores, quarantines, URL clicks); (2) correlate these events by user with Azure AD logs (did the user log in from a new IP after a phish click?); (3) correlate with endpoint alerts (Defender found malware on that device); (4) augment with threat intelligence lookups on URLs and attachments from the email logs. By linking the Proofpoint/Mimecast signals to identity and endpoint events, one can detect the full attack chain from email compromise to endpoint breach.

KQL Queries

Here are representative Kusto queries for common phishing scenarios (adapt table/field names as needed):

Malicious URL Click Detection: Identify users who clicked known-malicious URLs by joining TAP click logs to TI indicators:
```kusto
// Permitted clicks whose URL matches an active threat intelligence indicator.
let TI = ThreatIntelligenceIndicator
    | where Active == true and isnotempty(Url)
    | project Url, Description;
ProofPointTAPClicksPermittedV2_CL
| where url_s != ""
| project ClickTime = TimeGenerated, Recipient = recipient_s, URL = url_s, SenderIP = senderIP_s
| join kind=inner TI on $left.URL == $right.Url
| project ClickTime, Recipient, URL, Description
```

This flags any permitted click where the URL matches a known threat indicator. Alternatively, aggregate by domain:

```kusto
ProofPointTAPClicksPermittedV2_CL
| extend clickedDomain = extract(@"https?://([^/]+)", 1, url_s)
| summarize ClickCount = count() by clickedDomain
| where clickedDomain has "maliciousdomain.com" or clickedDomain has "phish.example.com"
```

Quarantine Spike (Burst) Detection: Detect sudden spikes in quarantined messages, for example using the POD mail log:

```kusto
ProofpointPODMailLog_CL
| where action_s == "Held"
| summarize HeldCount = count() by bin(TimeGenerated, 1h)
| order by TimeGenerated desc
| where HeldCount > 100
```

This finds hours with an unusually high number of held (quarantined) emails, which may indicate a phishing campaign. You could similarly use ProofPointTAPMessagesBlockedV2_CL.

Targeted User Phishing: Find whether a specific user received multiple malicious emails. E.g., for user email address removed for privacy reasons:

```kusto
// Column names vary by connector version; adjust the suffixes to your schema.
ProofpointPODMessage_CL
| where recipient_s has "email address removed for privacy reasons"
| where threatsInfoMap_classification_s == "Phish"
| project TimeGenerated, Sender = sender_s, Subject = subject_s, Threat = threatsInfoMap_threat_s
```

This lists recent phish attempts targeting the user. You might also join with TAP click logs to see whether they clicked anything.

Campaign-Level Analysis: Group emails by Proofpoint campaign ID to see the scope of each campaign:

```kusto
// threatsInfoMap_s is the raw nested threat column; expand it to group by campaign.
ProofpointPODMessage_CL
| extend Threats = parse_json(tostring(threatsInfoMap_s))
| mv-expand Threat = Threats
| summarize Recipients = make_set(recipient_s),
            RecipientCount = dcount(recipient_s),
            SampleSubject = take_any(subject_s)
          by CampaignID = tostring(Threat.campaignId)
```

This shows each campaign ID with how many unique recipients were hit and one example subject. Combining TAP and POD tables on GUID_s or QID_s can further link click events back to the originating message/campaign.

Each query can be refined (for instance, filtering only within a recent time window) and embedded in Sentinel analytics rules or hunting. The key is using the connectors' fields (URLs, sender/recipient addresses, campaign IDs) to pivot between email data and other security signals.
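As a concrete instance of the identity-correlation step above, here is a hedged sketch joining permitted phishing clicks to Entra ID sign-ins by IP address. The TAP column names (recipient_s, clickIP_s) are assumptions from the connector schema and may differ in your workspace; the 2-hour window is illustrative:

```kusto
// Hedged sketch: did any Entra ID sign-in come from the same IP as a permitted phishing click?
// clickIP_s and recipient_s are assumed TAP column names; verify against your workspace.
let Clicks = ProofPointTAPClicksPermittedV2_CL
    | where TimeGenerated > ago(1d)
    | project ClickTime = TimeGenerated, Recipient = recipient_s, ClickIP = clickIP_s;
SigninLogs
| where TimeGenerated > ago(1d)
| project SigninTime = TimeGenerated, UserPrincipalName, IPAddress, ResultType
| join kind=inner Clicks on $left.IPAddress == $right.ClickIP
| where SigninTime between (ClickTime .. (ClickTime + 2h))  // sign-ins within 2 hours of the click
| project ClickTime, Recipient, UserPrincipalName, IPAddress, SigninTime, ResultType
```

A matching sign-in shortly after a click, especially with a failure ResultType, is a useful pivot into the user's device via the Defender tables mentioned above.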
Objects in a Retention Policy populated by Adaptive Scopes

I need a way to get all users in a retention policy that is populated by an adaptive scope. I can get all the members of the scope, and I can show that the policy uses that adaptive scope. But I know my audience. They will want to see that the users are actually in the policy. They will probably even want to see that it matches the users in the adaptive scope. In the GUI, I can click on an adaptive retention policy and click on "policy details". This shows all the users that the policy applies to, the date/time they were added, whether they were removed from the policy, etc., and I can even export it. How can I get this same information via PowerShell? It's going to be important because, as you can see, there's a big difference in the date/time added: the users were all in the adaptive scope before this policy was created, but it still took nearly 24 hours for all of them to be added. Which is fine, and typical, but if a user gets added to the adaptive scope and does not have the policy applied to them within 24 hours, we need to know. The goal is as much automation as possible, with checks and balances in place. Checks and balances require gathering information, and that's going to require getting this information via PowerShell.

Recovering Quarantined File without Restoring
Hello Microsoft Community,

I have been exploring the Defender for Endpoint API and noticed that it mentions the ability to fetch copies of files associated with alerts using a Live Response request (GetFile). However, I've observed that for some alerts, Microsoft Defender quarantines the associated files. Is there a way to obtain a copy of a quarantined file, or the file itself, without restoring it? Additionally, is there a way to determine whether a file associated with an alert has been quarantined through the API, rather than manually logging into the Microsoft Defender for Endpoint portal? I understand there are two common methods for restoring a file from quarantine: through the Microsoft Defender for Endpoint portal or via the command line. Both methods are detailed here: https://learn.microsoft.com/en-us/defender-endpoint/respond-file-alerts#restore-file-from-quarantine. My concern is that restoring the file will cause Defender to quarantine it again, resulting in a new alert for the same file. In summary, is there a way to retrieve a copy of a quarantined file, or the file itself, without restoring it? And how can I know whether or not a file has been quarantined, using the Microsoft Defender for Endpoint API or another Microsoft API? Thank you!

3.6K Views · 0 likes · 7 Comments

Microsoft Defender for Cloud
For security operations teams managing Microsoft 365 and Azure environments, knowing which event logs to monitor in the Defender portal is fundamental. The right logs give you visibility into identity threats, device compromise, and policy violations before they escalate. Here are the most critical event log categories:

## 1. Sign-In Logs (Entra ID)

**Location:** Microsoft Entra ID > Sign-in logs

Monitor failed sign-ins, unfamiliar locations, Conditional Access failures, and risky sign-ins flagged by Identity Protection. Identity is the primary attack surface; these logs detect credential compromise and lateral movement.

## 2. Audit Logs (Entra ID)

**Location:** Microsoft Entra ID > Audit logs

Track changes to user accounts, privilege escalations, Conditional Access modifications, and application consent grants. Unauthorized administrative changes can bypass security controls.

## 3. Device Compliance Logs (Intune)

**Location:** Microsoft Intune > Devices > Monitor

Monitor non-compliant devices, enrollment failures, and policy errors. Non-compliant endpoints represent unmanaged risk.

## 4. Threat & Vulnerability Management

**Location:** Microsoft Defender > Endpoints > TVM

Track critical vulnerabilities, missing updates, and exposed credentials. Proactive vulnerability management prevents exploitation.

## 5. Alerts and Incidents (Defender XDR)

**Location:** Microsoft Defender > Incidents & Alerts

Your central SOC dashboard: monitor high-severity alerts for ransomware, credential theft, and lateral movement across endpoints, identities, email, and apps.

## 6. Cloud App Activity Logs

**Location:** Defender for Cloud Apps > Activity log

Detect unusual file downloads, admin activity from unmanaged devices, and OAuth app permissions. These logs reveal unauthorized data exfiltration and risky SaaS behavior.

## 7. Email Threat Logs

**Location:** Microsoft Defender > Email & Collaboration > Threat Explorer

Monitor phishing attempts, malware attachments, and spoofed emails. Email remains the most common attack vector.

## 8. Cloud Security Alerts

**Location:** Microsoft Defender for Cloud > Security alerts

Track misconfigurations, policy violations, and threats across Azure subscriptions and hybrid workloads. Essential for cloud infrastructure protection and compliance monitoring.

## How to Use These Logs Effectively

1. Set up automated alerts in Sentinel (see the sketch at the end of this post)
2. Establish baselines to detect anomalies
3. Correlate across sources for full attack context
4. Automate response with AIR features
5. Review high-severity logs weekly

**Microsoft Defender XDR Documentation:** https://learn.microsoft.com/en-us/microsoft-365/security/defender/
**Entra ID Monitoring:** https://learn.microsoft.com/en-us/entra/identity/monitoring-health/
**Microsoft Defender for Cloud:** https://learn.microsoft.com/en-us/azure/defender-for-cloud/

Monitoring the right logs is the foundation of a strong security posture. Start here, tune your alerts, and build the visibility your SOC needs.
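To make item 1 above concrete, here is a minimal hedged sketch of a scheduled analytics-rule query over the built-in SigninLogs table; the 20-failure threshold and 1-hour bin are illustrative assumptions to tune against your own baseline:

```kusto
// Flag accounts with a burst of failed sign-ins in any 1-hour window over the last day.
// In SigninLogs, ResultType "0" means success; any other code is a failure.
SigninLogs
| where TimeGenerated > ago(1d)
| where ResultType != "0"
| summarize FailedCount = count(), DistinctIPs = dcount(IPAddress)
          by UserPrincipalName, bin(TimeGenerated, 1h)
| where FailedCount > 20  // illustrative threshold; tune to your baseline
| order by FailedCount desc
```

Wired into a Sentinel scheduled rule, this gives a simple baseline-deviation alert that can then be correlated with the audit and endpoint logs listed above.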
#MicrosoftDefender #CyberSecurity #SOC #DefenderXDR #ThreatHunting #SecurityOperations #EntraID #Microsoft365 #ZeroTrust #DefenderForCloud

103 Views · 0 likes · 0 Comments

Can't register for private microsoft account

A few years ago, I registered my domain becker-mainflingen.de with Microsoft and had my emails there. This account has been canceled for some time now. In order to use various Microsoft services now, I need to create a private Microsoft account. When I register with an email address from the above domain, I get an error message saying that I am not allowed to use a business account. How can I activate my domain here?

53 Views · 0 likes · 2 Comments
Events
Microsoft Purview Data Security Investigations is now generally available!
Data Security Investigations enables customers to quickly uncover and mitigate data security and sensitive data risks bur...
Thursday, Feb 05, 2026, 10:00 AM PST · Online

6 likes · 96 Attendees · 17 Comments