Security and AI Essentials
Protect your organization with AI-powered, end-to-end security.
Defend Against Threats
Get ahead of threat actors with integrated solutions.
Secure All Your Clouds
Protection from code to runtime.
Secure All Access
Secure access for any identity, anywhere, to any resource.
Protect Your Data
Comprehensive data security across your entire estate.
Recent Blogs
12 MIN READ
Reimagining AI at scale: NVIDIA GB300 NVL72 on Azure
Team Blog: Azure Infrastructure
Author: gwaqar
Published: 10/28/2025
Summary: Microsof...
Nov 06, 2025
5 MIN READ
Introduction
As a Microsoft MVP (Most Valuable Professional) specializing in SIEM, XDR, and Cloud Security, I have witnessed the rapid evolution of cybersecurity technologies, especially those de...
Nov 06, 2025
When it comes to securing your multicloud environment, Microsoft Defender Cloud Security Posture Management offers a powerful suite of agentless capabilities. This blog post walks through a fast-star...
Nov 06, 2025
This article is part of The Sentinel data lake Practitioner Series. Part 1 of the series focuses on operationalizing the Sentinel data lake and our strategic vision for the customers. This series is ...
Nov 06, 2025
Recent Discussions
MDI AD CS sensor not switching from removed DC
We are in the process of replacing our Domain Controllers. What I found is that the MDI sensor on our PKI server is still stuck with a domain controller which has been demoted and removed from the domain. (Sensor version: 2.250.18972.18405) I guess that if I reinstall the sensor, it will find a new domain controller - but what if it finds a DC that is to be decommissioned? Should I reinstall the sensor until it chooses a "new" DC? Thank you in advance, Daniel

Cannot update Case number in Microsoft Purview eDiscovery
I can no longer update the Case number under case settings in the new eDiscovery UI. I used to be able to update it via the externalId Graph endpoint, but that appears to be deprecated. The error simply reads "update failed" - there is no additional information. Is anyone else having this problem?

XDR advanced hunting region specific endpoints
Hi, I am exploring the XDR advanced hunting API to fetch data specific to Microsoft Defender for Endpoint tenants. The official documentation (https://learn.microsoft.com/en-us/defender-xdr/api-advanced-hunting) recommends switching to the Microsoft Graph advanced hunting API. I have the following questions: 1. To fetch the region-specific (US, China, Global) token and Microsoft Graph service root endpoints (https://learn.microsoft.com/en-us/graph/deployments#app-registration-and-token-service-root-endpoints), is the recommended way to fetch the OpenID configuration document (https://learn.microsoft.com/en-us/entra/identity-platform/v2-protocols-oidc#fetch-the-openid-configuration-document) for a tenant ID and, based on the response, derive the region-specific SERVICE/TOKEN endpoints? That way there is no need to maintain different endpoints for tenants in different regions. And do we use the global service URL https://login.microsoftonline.com to fetch the OpenID configuration document for a tenant ID in any region? 2. As per the documentation, the Microsoft Graph advanced hunting API is not supported in the China region (https://learn.microsoft.com/en-us/graph/api/security-security-runhuntingquery?view=graph-rest-1.0&tabs=http). In this case, is it recommended to use the Microsoft XDR advanced hunting APIs (https://learn.microsoft.com/en-us/defender-xdr/api-advanced-hunting) to support tenants in all regions (China, US, Global)?
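On question 1, a minimal stdlib-only Python sketch of that approach (the tenant values are placeholders; whether the global login.microsoftonline.com authority returns the correct document for every national-cloud tenant is exactly the open question, so treat this as an illustration rather than confirmation):

```python
import json
import urllib.request

# The well-known OpenID configuration path is standard across Entra clouds.
OIDC_PATH = "/.well-known/openid-configuration"

def openid_config_url(tenant_id: str,
                      authority: str = "https://login.microsoftonline.com") -> str:
    """Build the v2.0 OpenID configuration document URL for a tenant."""
    return f"{authority}/{tenant_id}/v2.0{OIDC_PATH}"

def fetch_token_endpoint(tenant_id: str) -> str:
    """Fetch the OpenID configuration and return the tenant's token endpoint.

    The token_endpoint in the response reflects the cloud the tenant lives
    in, which is what lets you avoid hard-coding per-region endpoints.
    """
    with urllib.request.urlopen(openid_config_url(tenant_id)) as resp:
        config = json.load(resp)
    return config["token_endpoint"]
```

The idea is to resolve `token_endpoint` per tenant at runtime instead of maintaining a static region-to-endpoint table; for the Graph service root you would still need the deployments table linked above, since the OpenID document describes the identity platform, not Graph itself.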
Defender for Endpoint - macOS scan takes 1 second
Hello, We use Defender for Endpoint on macOS deployed by Mosyle MDM. However, we noticed that when a user runs a quick or full scan, the action takes 1 second and that is it - 0 files scanned. This used to work before; I happen to have a screenshot. If I run the scan from the command line, the result is the same. We use the config profiles from here: https://github.com/microsoft/mdatp-xplat/tree/master/macos/mobileconfig/profiles (screenshots and mdatp health output attached). Did anyone have this issue? Thanks!

Some Fabric Lakehouse tables not appearing in Microsoft Purview after scan
Hi everyone, I’m running into an issue where several tables from a Fabric Lakehouse aren’t appearing in Microsoft Purview after a workspace scan.

Here’s the situation: I scanned a Fabric workspace that contains multiple Lakehouses. For most Lakehouses, the tables appear correctly in Purview after the scan. However, for one specific Lakehouse, several tables that I know exist aren’t showing up in the scanned assets — even after adding the Lakehouse as an asset to a data product in the Unified Catalog.

What I’ve tried: I rescanned the workspace and the specific Lakehouses. I verified that the tables are persistent (not temporary) and appear under the Tables section in Fabric, not only as files. I confirmed permissions for the Purview connection account.

Scan results and errors: After the rescan, the tables still didn’t appear. The scan logs show several ingestion errors with messages like: "Failed to ingest asset with type fabric_lakehouse and qualified name [qualified name] due to invalid data payload to data map". I checked the error entries to see which assets they point to, and none of them are related to the tables in the Lakehouse in question. There were four of these errors in the last run.

Additional context: Some older Lakehouses that had been archived months ago in Fabric still appeared as active in Purview before the rescan, so there may be stale metadata being retained.

Notes: I’m aware Fabric scanning in Purview currently has sub-item scanning limitations where item-level metadata is prioritised and individual tables aren’t always picked up. But given that tables from other Lakehouses appear as expected, and given the ingestion errors (even though they do not point to the missing tables), it feels like there may be a metadata sync or processing issue rather than a simple coverage limitation.

Question: Has anyone encountered this behaviour or the “invalid data payload to data map” error before? Any guidance on further troubleshooting steps would be appreciated. Thanks in advance!

need to create monitoring queries to track the health status of data connectors
I'm working with Microsoft Sentinel and need to create monitoring queries to track the health status of data connectors. Specifically, I want to: identify unhealthy or disconnected data connectors, determine when a data connector last lost connection, and get historical connection status information.

What I'm looking for: a KQL query that can be run in the Sentinel workspace to check connector status, OR a PowerShell script/command that can retrieve this information - ideally something that can be automated for regular monitoring. I've been looking at the SentinelHealth table, but I'm unsure about the exact schema, connector names, etc.; checking whether there are specific tables that track connector status changes; and trying Azure Resource Graph or the management APIs.

I've tried multiple approaches (KQL, PowerShell, Resource Graph), however I somehow cannot get the information I'm looking to obtain. For example, I see this Microsoft docs page: https://learn.microsoft.com/en-us/azure/sentinel/monitor-data-connector-health#supported-data-connectors. However, I would like my query to surface data such as: the last ingestion time per table, how much data has been ingested by specific tables and connectors, which connectors are currently connected, and the overall health of my connectors. Please help
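Not a full answer, but a starting point I'd sketch, assuming the SentinelHealth feature is enabled for the workspace (it is off by default and has to be turned on under the workspace health/auditing settings); the column names follow the SentinelHealth schema as I understand it, so verify them against your own tenant. The second query uses the standard Usage table and works even without SentinelHealth:

```kusto
// Latest reported health status per data connector
// (requires SentinelHealth to be enabled; run each query separately)
SentinelHealth
| where TimeGenerated > ago(7d)
| where SentinelResourceType == "Data connector"
| summarize arg_max(TimeGenerated, Status, Description) by SentinelResourceName
| order by TimeGenerated desc

// Last ingestion time and approximate volume (MB) per table
Usage
| where TimeGenerated > ago(7d)
| summarize LastIngestion = max(TimeGenerated), TotalMB = sum(Quantity) by DataType
| order by LastIngestion desc
```

Pairing the two gives roughly what is asked for above: the first answers "which connectors are healthy and when did they last change state", the second answers "when did each table last ingest and how much".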
Update content package Metadata
Hello Sentinel community and Microsoft. I've been working on a script where I use this command: https://learn.microsoft.com/en-us/rest/api/securityinsights/content-package/install?view=rest-securityinsights-2024-09-01&tabs=HTTP. I've managed to successfully script everything from retrieving what's installed, to uninstalling, reinstalling and, lastly, updating (updating had to be "list, delete, install", however :') - there was no flag for "update available"). However, now to my issue. While this works like a charm through PowerShell, the metadata and hyperlinking are not being deployed - at all. So I have my 40 content packages successfully installed through the REST API, but then I have to visit the content hub in the Sentinel GUI, filter for "Installed", mark them all, then press "Install". When I do this, the metadata and hyperlinking are created. (It is most noticeable that the analytic rules for the content packages are not available under Analytics -> Rule templates after installing through the REST API. But once you press the Install button in the GUI, they appear.) So I looked into the request that is made when pressing the button. It uses another API version - fine, I can add that to my script. But it also uses two variables that are not documented and are encrypted; they are called c and t. I'm also located in the EU, and it makes a request to SentinelUS; I'm OK with that. Also, as mentioned, it uses another API version (2020-06-01), while the REST API to install content packages above has 2024-09-01. NP. But I cannot simulate this last request, as the variables are encrypted and not available through the install REST API. They are also not possible to simulate; it ONLY works in the GUI when pressing Install. Lastly, I get yet another API version back when it successfully runs through Install in the GUI, so in total there are 3 API versions.
Here is the code snippet I tried (it is basically a mimic of the POST request in the network tab of the browser when pressing "Install" on the package in the content hub, after I successfully installed it through the official REST API):

function Refresh-WorkspaceMetadata {
    param (
        [Parameter(Mandatory = $true)] [string]$SubscriptionId,
        [Parameter(Mandatory = $true)] [string]$ResourceGroup,
        [Parameter(Mandatory = $true)] [string]$WorkspaceName,
        [Parameter(Mandatory = $true)] [string]$AccessToken
    )

    # Use the API version from the portal sample
    $apiVeri = "?api-version="
    $RefreshapiVersion = "2020-06-01"

    # Build the batch endpoint URL with the query string on the batch URI
    # (backtick escapes the $ so PowerShell does not interpolate a $batch variable)
    $batchUri = "https://management.azure.com/`$batch$apiVeri$RefreshapiVersion"

    # Construct a relative URL for the workspace resource.
    # Append dummy t and c parameters to mimic the portal's request.
    $workspaceUrl = "/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroup/providers/Microsoft.OperationalInsights/workspaces/$WorkspaceName$apiVeri$RefreshapiVersion&t=123456789&c=dummy"

    # Create a batch payload with several GET requests
    $requests = @()
    for ($i = 0; $i -lt 5; $i++) {
        $requests += @{
            httpMethod           = "GET"
            name                 = [guid]::NewGuid().ToString()
            requestHeaderDetails = @{
                commandName = "Microsoft_Azure_SentinelUS.ContenthubWorkspaceClient/get"
            }
            url                  = $workspaceUrl
        }
    }
    $body = @{ requests = $requests } | ConvertTo-Json -Depth 5

    try {
        $response = Invoke-RestMethod -Uri $batchUri -Method Post -Headers @{
            "Authorization" = "Bearer $AccessToken"
            "Content-Type"  = "application/json"
        } -Body $body
        Write-Host "[+] Workspace metadata refresh triggered successfully." -ForegroundColor Green
    } catch {
        Write-Host "[!] Failed to trigger workspace metadata refresh. Error: $_" -ForegroundColor Red
    }
}

Refresh-WorkspaceMetadata -SubscriptionId $subscriptionId -ResourceGroup $resourceGroup -WorkspaceName $workspaceName -AccessToken $accessToken

(Note: I have variables higher up in my script for subscription ID, resource group, workspace name, token, etc.) I've tried with and without mimicking the t and c variables; neither works. So for me, currently, installing content hub packages for Sentinel is always: 1. Install through the script to get all 40 packages. 2. Visit the webpage, filter for 'Installed', mark them all and press 'Install'. 3. You now have all metadata and hyperlinking available in your Sentinel (such as hunting rules, analytic rules, workbooks, playbooks - templates). Has anyone else managed to get around this, or is it "GUI" gated? Greatly appreciated.
Solved

Duplicate file detection
Hi Community, I need to scan multiple Windows file servers using Microsoft Purview, and one of the asks is to detect and identify duplicate files on those. Can someone please guide me on how that can be accomplished? What functionality needs to be used, and how does one go about duplicate detection? Note that this is primarily a duplicate-finding assignment for files, as in Office documents and PDFs. Thanks.

Purview Connector Status
I have set up an Instant Bloomberg connector in Microsoft Purview. Data is now flowing daily from Bloomberg to Microsoft Purview. How can I retrieve the status of the connector? My preferred option would be to have a PowerShell script to extract the "Connection status with source", the "last import at" and the latest log. And if that PS script cannot be done, an email sent by Purview with about the same info would work.

Explorer permission to download an email
Global Admin is allegedly not sufficient access to download an email. So I have a user asking for a copy of her email, and I'm telling her "sorry, I don't have that permission, I'm only a Global Admin". What? The documentation basically forces you to use the new terrible "role group" system. I see various "roles" that you need to add to a "role group" in order to do this. Some mention Preview, some mention Security Administrator, some mention Security Operator. I've asked Copilot 100 different times, and it keeps giving me made-up roles, and then linking to the made-up role. How is such basic functionality broken? It makes zero sense. I don't want to submit this email - it's not malware or anything. I just want to download the **bleep** thing, and I don't want to have to go through the whole poorview process. This is really basic stuff. I can do this on about 10% of my GA accounts. There's no difference in the permissions - it just seems inconsistent.

How to offboard an endpoint from Purview
Hi, I'm a fresh user of Purview, and after creating policies linked to Exchange, I enabled the onboarding of computers. Unfortunately, all Defender endpoints were onboarded, and I was not able to define which ones were concerned. Now, I would like to offboard all those devices from Purview and only keep them in Defender without any DLP protection. I tried to remove them with the offboarding script, but my endpoints are still present in Purview. How can I completely remove them? Thanks for your help, Yohann

Unified detection rule management
Hi, I attended the webinar yesterday regarding the new unified custom detection rules in Defender XDR. I was wondering about the management of a library of rules. As with any SOC, our solution has a library of custom rules which we manage in a release cycle for a number of clients in different tenants. To avoid having to manage rules individually, we use the JSON approach, importing the library so it will update rules that we need to tune. Currently I'm not seeing an option to import unified detection rules in Defender XDR via JSON. Is that a feature that will be added? Thanks, Ziv

Token Protection Conditional access policy is blocking access to PowerShell Modules
Hi Everyone, Recently we started implementing Microsoft token protection via Conditional Access policy. We created the policy based on the Microsoft documentation: https://learn.microsoft.com/en-us/entra/identity/conditional-access/concept-token-protection. Everything is working fine for regular users, but our admin accounts that require access to PowerShell modules get an error when trying to connect (policy configuration and error screenshots attached). I've confirmed this is linked to the token protection policy, and no other policy is causing this behavior. My question here is: how can I keep our admin accounts included in this policy without affecting PowerShell access? Thank you for your help.

Does Rights Management Service currently support MFA claims from EAM?
We've been testing EAM (external authentication methods) for a few months now as we try to move our Duo configuration away from CA custom controls. I noticed today that my Outlook (classic) client would not correctly authenticate to Rights Management Service to decrypt OME-protected emails from another org. It tries to open the message, fails to connect to RMS, and opens a copy of the email with the "click here to read the message" spiel. It then throws a "something is wrong with your account" warning in the Outlook client's top right corner. If I try to manually authenticate and let it redirect to Duo's EAM endpoint, it simply fails with an HTTP 400 error. When you close that error, it then presents another error: "No Network Connection. Please check your network settings and try again. [2603]". I can close/reopen Outlook and that warning message in the top right stays suppressed unless I attempt signing into RMS all over again. However, if I do the same thing and instead use an alternate MFA method (MS Authenticator, for example), it signs in perfectly fine and will decrypt those OME-protected emails on the fly in the Outlook client, as expected. I verified that we excluded "aadrm.com" from SSL inspection and that we're not breaking certificate pinning. So all I can assume at the moment is that Rights Management Service isn't honoring MFA claims from EAM. Any experience/thoughts on this? Thanks in advance!

Microsoft Sentinel device log destination roadmap
I just attended the 11/5/2025 Microsoft webinar "Adopting Unified Custom Detections in Microsoft Sentinel via the Defender Portal: Now Better Than Ever" and my question posted to Q&A was not answered by the team delivering the session. The moderator told us that if our question was not answered we were to post the question in this forum. Here is the question again: "Will firewall and other device logs continue to go to Azure Log Analytics indefinitely? By indefinitely I mean not changing in the roadmap to something else like Data Lake or Event Grid/Service Bus, etc." Thank you, John

High CPU Usage by Microsoft Defender Antivirus on Windows Server 2019 Azure VMs
Hello, I’m running into a recurring issue on Windows Server 2019 Datacenter VMs running in Azure where MsMpEng.exe (Antimalware Service Executable) consistently spikes CPU usage every day. Here’s what I’ve observed so far: Microsoft Defender pulls threat intelligence from the cloud continuously in real time, in addition to multiple scheduled updates per day. Despite this continuous checking, I’ve noticed a consistent CPU spike only between 4:40 PM and 4:55 PM daily. During this time, Defender consumes 100% CPU. I’ve checked Task Scheduler and Defender scan settings - there are no scans or tasks scheduled during this period. Limiting CPU usage using Set-MpPreference -ScanAvgCPULoadFactor 30 has had no effect on these background maintenance routines. Automatic provisioning via Defender for Cloud is enabled on these Azure VMs, so the MDE agent installs and updates automatically.

Logs from Microsoft-Windows-Windows Defender/Operational during the high CPU window:
10/2/2025 4:41:57 PM  2010  Microsoft Defender Antivirus used cloud protection to get additional security intelligence...
10/2/2025 4:41:57 PM  2010  Microsoft Defender Antivirus used cloud protection to get additional security intelligence...
10/2/2025 4:49:41 PM  1150  Endpoint Protection client is up and running in a healthy state...

These logs confirm that Defender’s cloud intelligence updates and endpoint checks run exactly during the CPU spike window. Even though Defender continuously checks for cloud protection updates throughout the day, the CPU spike occurs only during this particular window. The pattern is consistent across multiple Azure VMs, suggesting this is part of Defender’s automated behavior.

Questions for the community: Is this behavior expected for Azure VMs, or could it indicate a bug in Defender on Windows Server 2019? Is there a supported way to throttle, defer, or better manage CPU usage during these maintenance and cloud intelligence routines? Are there recommended best practices for always-on production environments in Azure to avoid performance degradation caused by Defender? Any guidance or advice would be really appreciated. Thanks, Nikunj
Solved

MS Purview InformationProtectionPolicy - Extract Sensitivity Labels - Permissions Granted
Hello community, I'm currently facing an issue trying to extract sensitivity labels from our Microsoft 365 tenant and could use some assistance. I have already ensured that the necessary permissions and application are in place. I initially attempted to retrieve the labels via Microsoft Graph Explorer using the endpoint https://graph.microsoft.com/beta/security/informationProtection/sensitivityLabels. As you can see in the attached image, I encountered a "Forbidden - 403" error, suggesting a problem with permissions or consent, even though InformationProtectionPolicy.Read is listed under the "Modify permissions" tab as "Unconsented". The only way I found to work around it was using https://graph.microsoft.com/beta/me/security/informationProtection/sensitivityLabels, but I need to do this in Python code, without interactive user credential validation. Next, I tried to achieve the same using Python and the Microsoft Graph API directly. I obtained an access token using a client ID and secret, authenticating against https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token. The application associated with this client ID and secret has been granted the InformationProtectionPolicy.Read permission. However, when making a GET request to https://graph.microsoft.com/beta/security/informationProtection/sensitivityLabels in Python, I receive the same error. I have already granted what I believe are the relevant permissions, including InformationProtectionPolicy.Read.All, InformationProtectionPolicy.Read, Application.Read.All, and User.Read. Has anyone successfully retrieved sensitivity labels using the Microsoft Graph API? If so, could you please share any insights or potential solutions? I'm wondering if there are other specific permissions required or if there's a particular nuance I might be missing. Any help would be greatly appreciated! Thank you in advance, Leonardo Canal
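For reference, a minimal stdlib-only sketch of the app-only (client credentials) flow described above; tenant_id, client_id and client_secret are placeholders. One thing worth double-checking: app-only calls need an application (not delegated) permission with admin consent - for this API I believe that is InformationProtectionPolicy.Read.All, though verify against the endpoint's current documentation since it is still in beta, and a delegated-only grant would explain the 403 on the /security path while /me works.

```python
import json
import urllib.parse
import urllib.request

TOKEN_URL = "https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
LABELS_URL = "https://graph.microsoft.com/beta/security/informationProtection/sensitivityLabels"

def build_token_request(tenant_id: str, client_id: str,
                        client_secret: str) -> urllib.request.Request:
    """Build the client-credentials token request (app-only, no user sign-in)."""
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        # .default requests whatever application permissions were consented to
        "scope": "https://graph.microsoft.com/.default",
    }).encode()
    return urllib.request.Request(
        TOKEN_URL.format(tenant_id=tenant_id), data=body, method="POST")

def fetch_labels(access_token: str) -> dict:
    """Call the beta sensitivityLabels endpoint with a bearer token."""
    req = urllib.request.Request(
        LABELS_URL, headers={"Authorization": f"Bearer {access_token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The access token would come from posting `build_token_request(...)` and reading `access_token` out of the JSON response; if the token is issued but `fetch_labels` still returns 403, the consent type (delegated vs. application) is the first thing I would inspect in the token's `roles` claim.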