Security and AI Essentials
Protect your organization with AI-powered, end-to-end security.
Defend Against Threats
Get ahead of threat actors with integrated solutions.
Secure All Your Clouds
Protection from code to runtime.
Secure All Access
Secure access for any identity, anywhere, to any resource.
Protect Your Data
Comprehensive data security across your entire estate.
Recent Blogs
Azure Firewall helps secure your network by filtering traffic and enforcing policies for your workloads and applications. DNS Proxy, a key capability in Azure Firewall, enables the firew...
Nov 12, 2025 · 127 views · 0 likes · 0 comments
Microsoft Sentinel continues to set the pace for innovation in cloud-native SIEMs, empowering security teams to meet today’s challenges with scalable analytics, built-in AI, and a cost-effective data...
Nov 12, 2025 · 570 views · 0 likes · 0 comments
We are simplifying the process of making your threat intelligence actionable while keeping costs in check. With Microsoft Sentinel SIEM and Defender XDR, you can now ingest threat intelligence feeds ...
Nov 12, 2025 · 156 views · 0 likes · 0 comments
With more data and intelligence than ever, it’s often a challenge to manage it all while making sure you’re maximizing its value for security investigations. We’ve made it easier for customers levera...
Nov 12, 2025 · 162 views · 0 likes · 0 comments
Recent Discussions
Content blocked by IT Admin
I am the IT admin, and I keep seeing a Windows Security pop-up notification on my system about blocking mtalk.google.com. I do not have this installed, nor can I find anything about it in the registry. How can I find and remove this completely to stop these notifications? It's driving me crazy.
Solved · 29K views · 0 likes · 17 comments
Disable Defender for Cloud Apps alerts
Hi all, we just enabled Defender for Cloud Apps in our environment (about 500 clients). We started by setting about 300 apps to "Unsanctioned" and are now getting flooded with alerts, mainly "Connection to a custom network indicator on one endpoint" and "Multi-stage incident on multiple endpoints" whenever a URL is blocked on several clients. Is there a way to disable alerts for this kind of block? I tried creating a suppression rule but didn't manage to get it working; I don't know whether it isn't possible or whether I made a mistake. Since Defender for Cloud Apps creates an indicator for every app I want to block, I could open every single indicator and disable the alert there, but that's a few hundred indicators and we plan to extend the usage. Can I centrally disable alerts for custom indicators? Thanks & cheers
3.6K views · 0 likes · 4 comments
XDR RBAC missing Endpoint & Vulnerability Management
I've been looking at ways to provide a user with access to the Vulnerability Dashboard and associated reports without giving them access to anything else within Defender (Email, Cloud Apps, etc.). The article https://learn.microsoft.com/en-us/defender-xdr/activate-defender-rbac shows a slider for Endpoint Management which I don't appear to have. I have Business Premium licences, which give me GA access to see the data, so I know I'm licensed for it and it works, but I can't figure out how to assign permissions. When looking at creating a custom permission here, https://learn.microsoft.com/en-us/defender-xdr/custom-permissions-details#security-posture--posture-management, it mentions that Security Posture Management would give them read-level Vulnerability Management access, which is what I'm after, but that doesn't appear to be working. The test account I'm using to try this out just gets the error "Error getting device data". I'm assuming it's because it doesn't have permissions to the device details?
Blocking email in the Outlook mobile application via conditional access and Intune
Hello, all. We're currently experiencing an issue where corporate email remains accessible in the Outlook mobile app on personally owned iOS devices, even after the device either falls out of compliance or undergoes an enterprise wipe. These devices are managed through Intune. Additionally, some users may already have personal email accounts configured within the Outlook mobile app. Below is the conditional access policy currently applied to mobile devices. Any assistance would be appreciated.
27 views · 0 likes · 1 comment
Cannot update Case number in Microsoft Purview eDiscovery
I can no longer update the case number under case settings in the new eDiscovery UI. I used to be able to update it via the externalId Graph endpoint, but that appears to be deprecated. The error simply reads "update failed"; there is no additional information. Is anyone else having this problem?
Solved
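Not a confirmed fix, but a sketch worth trying, assuming the case is an eDiscovery (Premium) case and the calling app has eDiscovery.ReadWrite.All: the Graph ediscoveryCase resource under /security/cases also exposes an externalId property, which may still accept updates where the older /compliance/ediscovery route is deprecated. The case ID and access token below are placeholders.

    # Hypothetical sketch: update externalId on an eDiscovery (Premium) case via Microsoft Graph.
    # $caseId and $token are placeholders; verify the endpoint against your tenant before relying on it.
    $caseId = "<your-case-id>"
    $token  = "<access-token>"

    $uri  = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseId"
    $body = @{ externalId = "CASE-2024-001" } | ConvertTo-Json

    Invoke-RestMethod -Method Patch -Uri $uri -Body $body -ContentType "application/json" -Headers @{ Authorization = "Bearer $token" }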
Some Fabric Lakehouse tables not appearing in Microsoft Purview after scan
Hi everyone, I'm running into an issue where several tables from a Fabric Lakehouse aren't appearing in Microsoft Purview after a workspace scan.
Here's the situation: I scanned a Fabric workspace that contains multiple Lakehouses. For most Lakehouses, the tables appear correctly in Purview after the scan. However, for one specific Lakehouse, several tables that I know exist aren't showing up in the scanned assets, even after adding the Lakehouse as an asset to a data product in the Unified Catalog.
What I've tried: I rescanned the workspace and the specific Lakehouses. I verified that the tables are persistent (not temporary) and appear under the Tables section in Fabric, not only as files. I confirmed permissions for the Purview connection account.
Scan results and errors: After the rescan, the tables still didn't appear. The scan logs show several ingestion errors with messages like "Failed to ingest asset with type fabric_lakehouse and qualified name [qualified name] due to invalid data payload to data map". I checked the error entries to see which assets they point to, and none of them are related to the tables in the Lakehouse in question. There were four of these errors in the last run.
Additional context: Some older Lakehouses that had been archived months ago in Fabric still appeared as active in Purview before the rescan, so there may be stale metadata being retained.
Notes: I'm aware Fabric scanning in Purview currently has sub-item scanning limitations where item-level metadata is prioritised and individual tables aren't always picked up. But given that tables from other Lakehouses appear as expected, and given the ingestion errors (even though the errors do not point to the missing tables), it feels like there may be a metadata sync or processing issue rather than a simple coverage limitation.
Question: Has anyone encountered this behaviour or the "invalid data payload to data map" error before? Any guidance on further troubleshooting steps would be appreciated. Thanks in advance!
NPS Extension for Azure MFA and multiple tenants?
Hi, is it possible to set up one NPS server with the NPS Extension for Azure MFA to authenticate against multiple tenants? The on-prem AD has an Azure AD connector for each domain, and the users are in sync with their tenants. It's an RDS setup with one RD Gateway, one NPS server and multiple RD servers. I need [email address removed for privacy reasons] and [email address removed for privacy reasons] etc. to authenticate with MFA, but I can only get the users in the tenant that's linked in the NPS Extension for Azure MFA to work. I don't think it's possible to set up more than one tenant on one NPS server (NPS Extension for Azure MFA). I get this error in the NPS log:
NPS Extension for Azure MFA: CID:xxxxxxxxxxxxxxx : Access Rejected for user [email address removed for privacy reasons] with Azure MFA response: AccessDenied and message: Caller tenant:'xxxxxxxxxxxxxxxxxxxxxxx' does not have access permissions to do authentication for the user in tenant:'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',,,xxxxxxxxxxxxxxxxxx
The IDs for the caller tenant and the user tenant in the error are correct, so something must be partially working. I can't find a way to allow the caller tenant to access users in the user tenant.
22 views · 0 likes · 0 comments
MDI AD CS sensor not switching from removed DC
We are in the process of replacing our Domain Controllers. What I found is that the MDI sensor on our PKI server is still stuck on a domain controller which has been demoted and removed from the domain (sensor version: 2.250.18972.18405). I guess that if I reinstall the sensor it will find a new domain controller, but what if it finds a DC that is also due to be decommissioned? Should I keep reinstalling the sensor until it chooses a "new" DC? Thank you in advance, Daniel
Solved
Microsoft Compliance Assessment issues - ASD L1
Hi, we are using Microsoft Compliance Assessments in Microsoft Purview. In Microsoft Compliance Manager we have enabled the ASD Essentials Level 1 assessment. Under Microsoft Actions there are two actions; one is Malicious Code Protection - Periodic and Real-Time Scans (SI-0116). The issue is that the testing status is currently 'failed low risk', but the date tested shows Monday, Sep 30 2024, well before we opened the assessment, and the notes are completely irrelevant to this client and certainly not something we put in. The information in there is quite long; I can provide a txt file with it. I have checked the documentation and we have implemented the required security configuration. With these items set the way they are, we have no way to complete the assessment.
109 views · 0 likes · 2 comments
Need PowerShell script for consolidated report of Active Directory users
Dear Experts, I need a consolidated report covering the following for Active Directory users:
1) All live AD users, with a "Created on" column
2) Inactive users (no login in 90+ days)
3) Users with "Password never expires" set
4) Users who never logged in
5) Users with old passwords (not changed in 90+ days)
6) Disabled user accounts, with a "Disabled on" column
7) Inactive computers (no logon in 60+ days)
8) Disabled computer accounts
9) Last user logged in on each computer
10) All users with their last password change date
Kindly share a PowerShell script for the same ASAP. ..Ajit
28 views · 0 likes · 1 comment
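Not a full consolidated report, but a minimal sketch covering a few of the items above, assuming the RSAT ActiveDirectory PowerShell module and read access to the domain. The output paths and the 90/60-day thresholds are examples only, and LastLogonDate is the replicated attribute, so it is only accurate to within about two weeks.

    # Sketch only: partial coverage of the requested report, assuming the ActiveDirectory module.
    Import-Module ActiveDirectory

    $inactiveDays = 90
    $cutoff       = (Get-Date).AddDays(-$inactiveDays)

    # 1) All enabled users with creation date
    Get-ADUser -Filter 'Enabled -eq $true' -Properties whenCreated |
        Select-Object Name, SamAccountName, whenCreated |
        Export-Csv .\LiveUsers.csv -NoTypeInformation

    # 2) Users inactive for 90+ days
    Search-ADAccount -AccountInactive -TimeSpan (New-TimeSpan -Days $inactiveDays) -UsersOnly |
        Select-Object Name, SamAccountName, LastLogonDate |
        Export-Csv .\InactiveUsers.csv -NoTypeInformation

    # 3) Password never expires
    Get-ADUser -Filter 'PasswordNeverExpires -eq $true' -Properties PasswordNeverExpires |
        Select-Object Name, SamAccountName |
        Export-Csv .\PwdNeverExpires.csv -NoTypeInformation

    # 5/10) Last password change date, flagged when older than the cutoff
    Get-ADUser -Filter * -Properties PasswordLastSet |
        Select-Object Name, SamAccountName, PasswordLastSet,
            @{n='PasswordOlderThan90d';e={ $_.PasswordLastSet -lt $cutoff }} |
        Export-Csv .\PasswordAge.csv -NoTypeInformation

    # 7) Computers with no logon in 60+ days
    Search-ADAccount -AccountInactive -TimeSpan (New-TimeSpan -Days 60) -ComputersOnly |
        Select-Object Name, LastLogonDate |
        Export-Csv .\InactiveComputers.csv -NoTypeInformation

The remaining items (disabled accounts, never-logged-in users, last user per computer) follow the same pattern with Search-ADAccount -AccountDisabled, a LastLogonDate of $null, and per-machine event log or profile queries respectively.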
Purview - Retention policy for private channels
I have a retention policy for standard and shared channels together, with a 2-year retention period to keep posts for 2 years and remove them after that period. I don't have any policy for private channel posts/messages, so those posts are retained indefinitely. With this roadmap item from Microsoft, https://www.microsoft.com/en-in/microsoft-365/roadmap?id=500380, my private channels will also become part of the same policy that is applied to standard and shared channels. In that case, how can I retain the posts from private channels indefinitely? Please suggest.
XDR advanced hunting region specific endpoints
Hi, I am exploring the XDR advanced hunting API to fetch data for Microsoft Defender for Endpoint tenants. The official documentation (https://learn.microsoft.com/en-us/defender-xdr/api-advanced-hunting) says to switch to the Microsoft Graph advanced hunting API. I have the questions below related to it:
1. To fetch the region-specific (US, China, Global) token and Microsoft Graph service root endpoints (https://learn.microsoft.com/en-us/graph/deployments#app-registration-and-token-service-root-endpoints), is the recommended way to fetch the OpenID configuration document (https://learn.microsoft.com/en-us/entra/identity-platform/v2-protocols-oidc#fetch-the-openid-configuration-document) for a tenant ID and, based on the response, derive the region-specific service and token endpoints? That way there would be no need to maintain different endpoints for tenants in different regions. And do we use the global service URL https://login.microsoftonline.com to fetch the OpenID configuration document for a tenant ID in any region?
2. As per the documentation, the Microsoft Graph advanced hunting API is not supported in the China region (https://learn.microsoft.com/en-us/graph/api/security-security-runhuntingquery?view=graph-rest-1.0&tabs=http). In this case, is it recommended to use the Microsoft XDR advanced hunting APIs (https://learn.microsoft.com/en-us/defender-xdr/api-advanced-hunting) to support tenants in all regions (China, US, Global)?
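A rough sketch of the flow asked about in question 1, under the assumption that a client-credentials app registration is used: the tenant's OpenID configuration document includes token_endpoint and, in my understanding, an msgraph_host field, which can then drive the Graph advanced hunting call. The tenant ID, app ID, secret and the hunting query are placeholders, and the msgraph_host field should be verified against your cloud before relying on it.

    # Sketch only: discover tenant endpoints from the OpenID configuration document,
    # then run a Graph advanced hunting query. $tenantId/$clientId/$clientSecret are placeholders,
    # and msgraph_host is assumed to be present in the metadata response.
    $tenantId     = "<tenant-guid>"
    $clientId     = "<app-id>"
    $clientSecret = "<secret>"

    # 1) Fetch the OpenID configuration document from the global authority
    $oidc          = Invoke-RestMethod -Uri "https://login.microsoftonline.com/$tenantId/v2.0/.well-known/openid-configuration"
    $tokenEndpoint = $oidc.token_endpoint    # region/cloud-specific token endpoint
    $graphHost     = $oidc.msgraph_host      # e.g. graph.microsoft.com; differs in national clouds

    # 2) Get a token for the discovered Graph host (client credentials flow)
    $tokenResponse = Invoke-RestMethod -Method Post -Uri $tokenEndpoint -Body @{
        client_id     = $clientId
        client_secret = $clientSecret
        grant_type    = "client_credentials"
        scope         = "https://$graphHost/.default"
    }

    # 3) Run an advanced hunting query against the discovered Graph host
    $headers = @{ Authorization = "Bearer $($tokenResponse.access_token)" }
    $query   = @{ Query = "DeviceInfo | take 10" } | ConvertTo-Json
    Invoke-RestMethod -Method Post -Uri "https://$graphHost/v1.0/security/runHuntingQuery" -Headers $headers -ContentType "application/json" -Body $query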
Defender for Endpoint - macOS scan takes 1 second
Hello, we use Defender for Endpoint on macOS, deployed by Mosyle MDM. However, we noticed that when a user runs a quick or full scan, the action takes 1 second and that is it: 0 files scanned. This used to work before; I happen to have a screenshot of it working. Now, if I run a scan from the command line, the result is the same. We use the config profiles from here: https://github.com/microsoft/mdatp-xplat/tree/master/macos/mobileconfig/profiles. The mdatp health output is attached. Did anyone have this issue? Thanks!
Need to create monitoring queries to track the health status of data connectors
I'm working with Microsoft Sentinel and need to create monitoring queries to track the health status of data connectors. Specifically, I want to: identify unhealthy or disconnected data connectors, determine when a data connector last lost connection, and get historical connection status information.
What I'm looking for: a KQL query that can be run in the Sentinel workspace to check connector status, or a PowerShell script/command that can retrieve this information; ideally, something that can be automated for regular monitoring.
What I've tried: looking at the SentinelHealth table, but I'm unsure about the exact schema, connector names, etc.; checking whether there are specific tables that track connector status changes; and using Azure Resource Graph or the management APIs. I've tried multiple approaches (KQL, PowerShell, Resource Graph) but somehow cannot get the information I'm looking for. For example, I see this Microsoft docs page, https://learn.microsoft.com/en-us/azure/sentinel/monitor-data-connector-health#supported-data-connectors, but I would like my query to report data such as: the last ingestion time per table, how much data has been ingested by specific tables and connectors, which connectors are currently connected, and the overall health of my connectors. Please help.
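A possible starting point, assuming Sentinel health monitoring is turned on so the SentinelHealth table is populated, and that the Az.Accounts and Az.OperationalInsights modules are installed. The column names come from my reading of the documented SentinelHealth schema and should be checked against your workspace; $workspaceId is a placeholder. The Usage query covers per-table ingestion volume and the most recent record seen per table.

    # Sketch: data connector health from SentinelHealth, per-table ingestion from Usage.
    # Assumes health monitoring is enabled for the workspace; verify column names before automating.
    Connect-AzAccount | Out-Null
    $workspaceId = "<log-analytics-workspace-guid>"

    # Latest reported status per data connector over the last 7 days
    $healthKql = 'SentinelHealth
    | where TimeGenerated > ago(7d)
    | where SentinelResourceType == "Data connector"
    | summarize arg_max(TimeGenerated, Status, Description) by SentinelResourceName
    | order by TimeGenerated desc'

    (Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $healthKql).Results |
        Format-Table SentinelResourceName, TimeGenerated, Status, Description

    # Ingestion volume and most recent record per table over the last 24 hours
    # (Usage is a standard Log Analytics table; Quantity is in MB)
    $usageKql = 'Usage
    | where TimeGenerated > ago(24h)
    | summarize TotalMB = sum(Quantity), LastRecord = max(TimeGenerated) by DataType
    | order by TotalMB desc'

    (Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $usageKql).Results | Format-Table

The same two KQL queries can be pasted directly into the Logs blade if you prefer to stay in the portal, or scheduled via an analytics rule or Azure Automation runbook for regular monitoring.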
Update content package Metadata
Hello Sentinel community and Microsoft. I've been working on a script where I use this command: https://learn.microsoft.com/en-us/rest/api/securityinsights/content-package/install?view=rest-securityinsights-2024-09-01&tabs=HTTP
I've managed to successfully build everything from retrieving what's installed, to uninstalling, reinstalling and lastly updating (updating needed to be "list, delete, install", however :'), since there is no flag for "update available").
However, now to my issue. While this works like a charm through PowerShell, the metadata and hyperlinking are not being deployed at all. So I have my 40 content packages successfully installed through the REST API, but then I have to visit the content hub in the Sentinel GUI, filter for "Installed", mark them all and press "Install". When I do this, the metadata and hyperlinking are created. (It's most noticeable that the analytic rules for the content packages are not available under Analytics -> Rule templates after installing through the REST API; once you press the Install button in the GUI, they appear.)
So I looked into the request that is made when pressing the button. It uses another API version; fine, I can add that to my script. But it also uses two variables that are not documented and are encrypted; they are called c and t. I'm also located in the EU and it makes a request to SentinelUS; I'm OK with that. Also, as mentioned, it is another API version (2020-06-01), while the REST API to install content packages above uses 2024-09-01. No problem. But I cannot simulate this last request, as the variables are encrypted and not available through the install REST API. They are also not possible to simulate; it ONLY works in the GUI when pressing Install. Lastly, I get yet another API version back when it runs successfully through Install in the GUI, so in total there are three API versions.
Here is the code snippet I tried (it is basically a mimic of the POST request in the network tab of the browser when pressing "Install" on the package in the content hub, after I had successfully installed it through the official REST API):

    function Refresh-WorkspaceMetadata {
        param (
            [Parameter(Mandatory = $true)] [string]$SubscriptionId,
            [Parameter(Mandatory = $true)] [string]$ResourceGroup,
            [Parameter(Mandatory = $true)] [string]$WorkspaceName,
            [Parameter(Mandatory = $true)] [string]$AccessToken
        )

        # Use the API version from the portal sample
        $apiVeri           = "?api-version="
        $RefreshapiVersion = "2020-06-01"

        # Build the ARM batch endpoint URL; backtick-escape $batch so PowerShell
        # keeps it literal instead of expanding an (empty) $batch variable
        $batchUri = "https://management.azure.com/`$batch$apiVeri$RefreshapiVersion"

        # Relative URL for the workspace resource, with dummy t and c parameters
        # appended to mimic the portal's request
        $workspaceUrl = "/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroup/providers/Microsoft.OperationalInsights/workspaces/$WorkspaceName$apiVeri$RefreshapiVersion&t=123456789&c=dummy"

        # Create a batch payload with several GET requests
        $requests = @()
        for ($i = 0; $i -lt 5; $i++) {
            $requests += @{
                httpMethod           = "GET"
                name                 = [guid]::NewGuid().ToString()
                requestHeaderDetails = @{ commandName = "Microsoft_Azure_SentinelUS.ContenthubWorkspaceClient/get" }
                url                  = $workspaceUrl
            }
        }
        $body = @{ requests = $requests } | ConvertTo-Json -Depth 5

        try {
            $response = Invoke-RestMethod -Uri $batchUri -Method Post -Headers @{
                "Authorization" = "Bearer $AccessToken"
                "Content-Type"  = "application/json"
            } -Body $body
            Write-Host "[+] Workspace metadata refresh triggered successfully." -ForegroundColor Green
        } catch {
            Write-Host "[!] Failed to trigger workspace metadata refresh. Error: $_" -ForegroundColor Red
        }
    }

    Refresh-WorkspaceMetadata -SubscriptionId $subscriptionId -ResourceGroup $resourceGroup -WorkspaceName $workspaceName -AccessToken $accessToken

(Note: I have variables higher up in my script for subscription ID, resource group, workspace name, token, etc.) I've tried with and without mimicking the t and c variables; neither works. So for me, currently, installing content hub packages for Sentinel is always:
1. Install through the script to get all 40 packages.
2. Visit the webpage, filter for 'Installed', mark them and press 'Install'.
3. You now have all metadata and hyperlinking available in your Sentinel (such as hunting rules, analytic rules, workbooks and playbook templates).
Has anyone else managed to get around this, or is it GUI-gated? Greatly appreciated.
Solved
Duplicate file detection
Hi Community, I need to scan multiple Windows file servers using Microsoft Purview, and one of the asks is to detect and identify duplicate files on them. Can someone please guide me on how that can be accomplished? What functionality needs to be used, and how do I go about duplicate detection? Note that this is primarily a duplicate-finding assignment for files such as Office documents and PDFs. Thanks.
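As far as I know, the Purview scanner does not report duplicates directly, so the following is a workaround outside Purview rather than a Purview feature: hashing files on the shares and grouping by hash surfaces byte-identical copies. The share paths, extension list and output file are placeholders, and large shares will take a while to hash.

    # Sketch (not a Purview feature): find byte-identical Office/PDF files across shares by hashing.
    # $shares is a placeholder list of UNC paths.
    $shares     = @('\\fileserver01\dept', '\\fileserver02\projects')
    $extensions = '*.docx','*.xlsx','*.pptx','*.pdf','*.doc','*.xls','*.ppt'

    $duplicates = Get-ChildItem -Path $shares -Recurse -File -Include $extensions -ErrorAction SilentlyContinue |
        Get-FileHash -Algorithm SHA256 |
        Group-Object Hash |
        Where-Object Count -gt 1

    # One row per duplicate copy, grouped by hash
    $duplicates | ForEach-Object { $_.Group | Select-Object Hash, Path } |
        Export-Csv .\DuplicateFiles.csv -NoTypeInformation

Note that this only catches exact binary duplicates; near-duplicates (same document saved twice with different metadata) would need a content-similarity approach instead.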
Purview Connector Status
I have set up an Instant Bloomberg connector in Microsoft Purview. Data is now flowing daily from Bloomberg to Microsoft Purview. How can I retrieve the status of the connector? My preferred option would be a PowerShell script to extract the "Connection status with source", the "Last import at" timestamp and the latest log. If that PowerShell script cannot be done, an email sent by Purview with roughly the same information would work.
140 views · 0 likes · 3 comments