Recent Discussions
How to connect Varonis as a data connector in Sentinel
Hi, I would like to know how to connect Varonis as a data connector in Sentinel. From the article "Azure Sentinel: The connectors grand (CEF, Syslog, Direct, Agent, Custom and more) - Microsoft Tech Community", I understand that we can connect Varonis through the CEF connector, and I have read the instructions in the Varonis guide (https://info.varonis.com/hubfs/docs/splunk-app/Varonis-App-for-Splunk-User-Guide.pdf). This is what I understood; can anyone correct me if anything is wrong?
1. In the DatAlert configuration (in Varonis), set up Syslog message forwarding by providing the Syslog server IP and port number.
2. Create an alert template with the Syslog message alert method. (By this, Varonis alerts/data will be sent to the Syslog server.)
3. Now we can connect from the Syslog server to Sentinel by executing a few commands, which I'm aware of.
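
Once forwarding is in place, a quick way to confirm events are arriving is to query the CommonSecurityLog table, where CEF events land. A minimal sketch; the DeviceVendor value is an assumption, so verify it against a sample event:

CommonSecurityLog
| where TimeGenerated > ago(1h) // look at the last hour of CEF events
| where DeviceVendor has "Varonis" // assumed vendor string; check your actual events
| summarize EventCount = count() by DeviceProduct, Activity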

Device tables are not ingesting data for an org's workspace
I can confirm that all devices are enrolled and onboarded to MDE (Microsoft Defender for Endpoint). I placed an EICAR file on one of the machines, which brought an alert through to Sentinel; however, this did not populate any of the device-related tables.
[Screenshot: the workspace I am targeting]
[Screenshot: a workspace from another org with the tables enabled and ingesting data]
The Microsoft Defender XDR connector shows as connected, but the tables do not seem to be ingesting data. I run the following:
DeviceEvents | where TimeGenerated > ago(15m) | top 20 by TimeGenerated
DeviceProcessEvents | where TimeGenerated > ago(15m) | top 20 by TimeGenerated
I receive no results: "No results found from the specified time range. Try selecting another time range." Please assist, as I cannot think where this is failing.
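
One way to see at a glance which Defender device tables exist in the workspace and when they last received data is a fuzzy union across them; isfuzzy=true lets the query run even when some of the tables have never been created. A minimal sketch:

union withsource=SourceTable isfuzzy=true Device*
| summarize LastRecord = max(TimeGenerated), Rows = count() by SourceTable // one row per device table that has ever ingested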

Sentinel Data Connector: Google Workspace (G Suite) (using Azure Functions)
I'm encountering a problem when attempting to run the GWorkspace_Report workbook in Azure Sentinel. The query is throwing this error related to the union operator:
'union' operator: Failed to resolve table expression named 'GWorkspace_ReportsAPI_gcp_CL'
I've double-checked, and the GoogleWorkspaceReports connector is installed and updated to version 3.0.2. Has anyone seen this, or does anyone know what might be causing the table GWorkspace_ReportsAPI_gcp_CL to be unresolved? Thanks!
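
In Log Analytics, a plain union fails to resolve any table that has never received data, because custom (_CL) tables are only created on first ingestion; the error usually means that report type has never ingested rather than that the connector is broken. As a workaround sketch, isfuzzy=true makes the union tolerate missing tables (the second table name here is an assumption based on the connector's usual naming; adjust to the workbook's actual list):

union isfuzzy=true GWorkspace_ReportsAPI_gcp_CL, GWorkspace_ReportsAPI_login_CL // isfuzzy skips tables that don't resolve
| where TimeGenerated > ago(24h)
| summarize Rows = count() by Type // Type carries the source table name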

Data Connectors Storage Account and Function App
Several data connectors downloaded via the Content Hub have ARM deployment templates, which is the default out-of-the-box experience. We could customize them if needed, but I wanted to ask the community: how do you go about addressing the infrastructure issues where these connectors deploy Storage Accounts and Function Apps with insecure configurations, lacking hardening such as the infrastructure encryption requirement, VNet integration, CMK, Front Door, etc.? The default configuration basically provisions all the services required to get streams going, but the resulting posture seems to dismiss security standards around hardening these services.
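
Azure Resource Graph queries (which also use KQL, run from Resource Graph Explorer) can help audit what the templates actually deployed. A rough sketch flagging storage accounts with common soft spots; the property paths are standard Storage Account ARM properties, but treat the specific checks as a starting point, not a full baseline:

resources
| where type == "microsoft.storage/storageaccounts"
| extend minTls = tostring(properties.minimumTlsVersion),
         publicBlobAccess = tobool(properties.allowBlobPublicAccess),
         infraEncryption = tobool(properties.encryption.requireInfrastructureEncryption)
| where minTls != "TLS1_2" or publicBlobAccess == true or infraEncryption != true
| project name, resourceGroup, minTls, publicBlobAccess, infraEncryption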

Need to create monitoring queries to track the health status of data connectors
I'm working with Microsoft Sentinel and need to create monitoring queries to track the health status of data connectors. Specifically, I want to:
- Identify unhealthy or disconnected data connectors
- Determine when a data connector last lost connection
- Get historical connection status information
What I'm looking for: a KQL query that can be run in the Sentinel workspace to check connector status, OR a PowerShell script/command that can retrieve this information; ideally, something that can be automated for regular monitoring. I'm looking at the SentinelHealth table, but I'm unsure about the exact schema, connector names, etc. I'm also checking whether there are specific tables that track connector status changes, and whether Azure Resource Graph or the management APIs could be used. I've tried multiple approaches (KQL, PowerShell, Resource Graph), but I somehow cannot get the information I'm looking for. For example, I see this Microsoft Docs page: https://learn.microsoft.com/en-us/azure/sentinel/monitor-data-connector-health#supported-data-connectors. However, I would like my query to report data such as: the last ingestion time of tables, how much data has been ingested by specific tables and connectors, which connectors are currently connected, and the health of my connectors. Please help.
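
A hedged sketch combining the two usual approaches, run as two separate queries: SentinelHealth for connector status (the table must first be enabled under Settings > Auditing and health monitoring, and column values should be verified against your workspace), and the standard Usage table for per-table ingestion recency and volume:

// Latest health status per connector (assumes SentinelHealth is enabled)
SentinelHealth
| where SentinelResourceType == "Data connector"
| summarize arg_max(TimeGenerated, Status, Description) by SentinelResourceName
| where Status != "Success" // keep only connectors whose latest status is unhealthy

// Last record time and ingested volume per table over the past day (Quantity is in MB)
Usage
| where TimeGenerated > ago(1d)
| summarize LastRecord = max(TimeGenerated), VolumeMB = sum(Quantity) by DataType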

Microsoft 365 Defender alerts not capturing fields (entities) in Azure Sentinel
We got an alert from 365 Defender in Azure Sentinel ("A potentially malicious URL click was detected"). To investigate this alert, we have to check in the 365 Defender portal, and we noticed that the entities (user, host, IP) are not being captured. How can we resolve this issue? Note: this is not a custom rule.
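
To see what (if anything) arrived with the alert, the Entities column of SecurityAlert can be inspected directly; it holds a JSON array of the mapped entities. A minimal sketch:

SecurityAlert
| where AlertName == "A potentially malicious URL click was detected"
| extend EntityList = parse_json(Entities) // JSON array; empty here means nothing was mapped upstream
| project TimeGenerated, ProviderName, AlertName, EntityList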

Unified SecOps XDR
Hi, I am reaching out to the community to seek understanding regarding the unified SecOps XDR portal for multi-tenant, multi-workspace scenarios. Our organization already has an Azure Lighthouse setup. My question: is an M365 Lighthouse license also required for multi-tenant, multi-workspace use in the unified SecOps XDR portal?

Issue while deploying Sentinel rules
I know that when deleting a Sentinel rule, you need to wait a specific amount of time before it can be redeployed. However, in this tenant, we've been waiting for almost a month and are still getting the same deployment error ('was recently deleted. You need to allow some time before re-using the same ID. Please try again later. Click here for details'). I still want to use the same ID, etc. Does anyone have any idea, or has anyone had a similar issue, why it's still not possible after waiting for about a month?

[DevOps] dps.sentinel.azure.com no longer responds
Hello, I've been using repository connections in Sentinel to a central DevOps instance for almost two years now. Today I got my first automated error email for a webhook related to my last commit from the central repo to my Sentinel instances. It's a webhook that is automatically created in connections made within the last year (the ones from two years ago don't have this webhook automatically created). The hook is found in DevOps -> Service hooks -> Webhooks, "run state change", for each connected Sentinel. However, after today's run (which was successful; all content deployed), this hook generates alerts. It says it can't reach (EU in my case): eu.prod.dps.sentinel.azure.com. Full URL: https://eu.prod.dps.sentinel.azure.com/webhooks/ado/workspaces/[REDACTED]/sourceControls/[REDACTED]. So, what happened to this domain? Why is it no longer responding, and when did it go offline? I THINK this is the hook that sets the status under Sentinel -> Repositories in the GUI. The success status in the screenshot is from 2025/02/06; no new success has been registered in the receiving Sentinel instance. For the Sentinel that is two years old and doesn't have a hook in my DevOps, the last deployment status says "Unknown", so I'm fairly sure that's what the webhook is doing. A second question would be: how can I set up a new webhook? It wants the ID and password of the "Azure Sentinel Content Deployment App", and I will never know that password, so I can't manually add one either (if the URL ever comes back online, or if a new one exists). Please let me know.

Single Rule for 'No Logs Received' (Global + Per-device Thresholds)
Hi everyone, I currently maintain one analytics rule per table to detect when logs stop coming in. Some tables receive data from multiple sources, each with a different expected interval (for example, some sources send every 10 minutes, others every 30 minutes). In other SIEM platforms there's usually:
- A global threshold (e.g., 60 minutes) for all sources.
- Optional per-device/per-table thresholds that override the global value.
Is there a recommended way to implement one global rule that uses a default threshold but allows per-source overrides when a particular device or log table has a different expected frequency? Also, if there are other approaches you use to manage "logs not received" detection, I'd love to hear your suggestions as well. This is a sample of my current rule:
let threshold = 1h;
AzureActivity
| summarize LastHeartBeat = max(TimeGenerated)
| where LastHeartBeat < ago(threshold)
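
One pattern that fits the ask is to keep a single rule and drive the per-source overrides from a watchlist. A hedged sketch, assuming a hypothetical watchlist named 'IngestionThresholds' with columns TableName and ThresholdMinutes; any table without an entry falls back to the global 60 minutes:

let GlobalThresholdMinutes = 60;
let Overrides = _GetWatchlist('IngestionThresholds') // hypothetical watchlist
    | project SourceTable = tostring(TableName), ThresholdMinutes = toint(ThresholdMinutes);
union withsource=SourceTable AzureActivity, Syslog, CommonSecurityLog // extend with the tables you monitor
| summarize LastHeartBeat = max(TimeGenerated) by SourceTable
| join kind=leftouter Overrides on SourceTable
| extend EffectiveThreshold = coalesce(ThresholdMinutes, GlobalThresholdMinutes)
| where now() - LastHeartBeat > EffectiveThreshold * 1m // int * 1m yields a timespan

With this layout, adding a new source only requires a watchlist row (or nothing at all, if the global default is acceptable).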

Unable to Delete Threat Intelligence Indicator (Solved)
Hi, for testing purposes I added a TI indicator in Sentinel via the UI. When I deleted it, the indicator disappeared from the UI, but the record still exists in the ThreatIntelIndicators table. From what I've observed, every modification to a TI indicator leaves a record in the table, almost like an audit trail. So now I see two records: one for the original creation, and one for the deletion action. The issue is that I'm building a rule based on this table, and it still matches the "created" record even though the indicator was deleted. I've already tried both the az sentinel threat-indicator delete command and the REST API, but I got server errors. Is there any way to completely delete a TI record from the ThreatIntelIndicators table? Thanks in advance.
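
Since the table behaves like an append-only audit trail, rules built on it usually need to reduce to the latest record per indicator rather than match every row. A minimal sketch; whether a deletion flag is exposed, and under what name, is an assumption to verify against your table's schema:

ThreatIntelIndicators
| summarize arg_max(TimeGenerated, *) by Id // keep only the most recent version of each indicator
| where IsDeleted != true // assumed column name; check the schema in your workspace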

Log Ingestion Delay in all Data connectors
Hi, I have integrated multiple log sources in Sentinel, and all the log sources are ingesting logs between 7:00 pm and 2:00 am. I want log ingestion in real time. I have integrated Azure WAF, Syslog, Fortinet, and Windows servers. As evidence, I am attaching screenshots. I am totally clueless; if anyone can help, I will be very thankful!
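
To tell whether this is source-side batching or a pipeline delay, it helps to compare each record's event time with its arrival time; the built-in ingestion_time() function returns when the record actually landed in the workspace. A minimal sketch against Syslog (swap in any affected table):

Syslog
| where TimeGenerated > ago(1d)
| extend IngestionLatency = ingestion_time() - TimeGenerated // large values mean delayed delivery, not delayed generation
| summarize avg(IngestionLatency), max(IngestionLatency) by bin(TimeGenerated, 1h)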

Codeless Connector Framework (CCF) Template Help
As the title suggests, I'm trying to finalize the template for a Sentinel data connector that utilizes the CCF. Unfortunately, I'm getting hung up on some parameter-related issues with the polling config. The API endpoint I need to call uses a date range to determine the events to return, and then pages within that result set. The issue is around the requirements for that date range and how CCF is processing my config. The API expects an HTTP GET verb, and the query string should contain two instances of a parameter called EventDates, among other params. For example, a valid query string may look something like:
../path/to/api/myEndpoint?EventDates=2025-08-25T15%3A46%3A36.091Z&EventDates=2025-08-25T16%3A46%3A36.091Z&PageSize=200&PageNumber=1
I've tried a few approaches in the polling config to accomplish this, but none have worked. The current config is as follows; it has a bunch of extra stuff and names that aren't recognized by my API endpoint but are there simply to demonstrate different things:
"queryParameters": {
    "EventDates.Array": [
        "{_QueryWindowStartTime}",
        "{_QueryWindowEndTime}"
    ],
    "EventDates.Start": "{_QueryWindowStartTime}",
    "EventDates.End": "{_QueryWindowEndTime}",
    "EventDates.Same": "{_QueryWindowStartTime}",
    "EventDates.Same": "{_QueryWindowEndTime}",
    "Pagination.PageSize": 200
}
This yields the following URL / query string:
../path/to/api/myEndpoint?EventDates.Array=%7B_QueryWindowStartTime%7D&EventDates.Array=%7B_QueryWindowEndTime%7D&EventDates.Start=2025-08-25T15%3A46%3A36.091Z&EventDates.End=2025-08-25T16%3A46%3A36.091Z&EventDates.Same=2025-08-25T16%3A46%3A36.091Z&Pagination.PageSize=200
There are a few things to note here:
1. The query param configured as an array (EventDates.Array) does indeed show up twice in the query string, and with distinct values. The issue is, of course, that CCF doesn't seem to do the variable substitution for values nested in an array the way it does for standard string attributes/values.
2. The query params with distinct names (EventDates.Start and .End) both show up AND both have the actual timestamps substituted properly. Unfortunately, this doesn't match the API's expectations, since the names differ.
3. The query params repeated with the same name (EventDates.Same) only show up once, and CCF seems to use whichever value comes last in the config (so the last one overwrites the rest). Again, this doesn't meet the API's requirements, since we need both.
I also tried a few other things:
- Sticking the query params and placeholders directly in the request.apiEndpoint polling config attribute. No surprise: it doesn't do the variable substitution there.
- Utilizing queryParametersTemplate instead of queryParameters. https://learn.microsoft.com/en-us/azure/sentinel/data-connector-connection-rules-reference indicates this is a string parameter that expects a JSON string. I tried this with various approaches to the structure of the JSON. In ALL instances, the values here seemed to be completely ignored. All other examples in the Azure-Sentinel repository utilize the POST verb; perhaps that attribute isn't even interpreted on a GET request?
- And because some AI agents suggested it and... sure, why not?... I tried queryParametersTemplate as an actual query string template, i.e., "EventDates={_QueryWindowStartTime}&EventDates={_QueryWindowEndTime}". Just as with previous attempts to use this attribute, it was completely ignored.
I'm willing to try anything at this point, so if you have suggestions, I'll give it a shot!
Thanks for any input you may have!

Microsoft Sentinel Query History not updating
Hello, apologies if this isn't the correct place for this, but I know I will likely retire before I get any traction with Microsoft support. Has anyone experienced issues with their Sentinel query history not updating with the latest queries? I run a lot of queries each day, and any time I open a new browser window and go to the Logs tab, the latest query it shows in my history is from 7/29/2025. If I run any new queries in that browser tab, they show in my query history, but the moment I open a new browser tab and access Sentinel Logs, they are gone, and it shows the latest query as 7/29/2025. My colleague has the exact same issue, except their latest query date is 8/7/2025. Yes, I do have the "Save query history" setting set to On; I have toggled it off and back on just to see if it would do anything, but no luck. Does anyone know what could be causing this?

I have no Microsoft Office 365 logs.
First of all, thanks in advance. In one tenant, I've configured Sentinel with several data sources, but the Microsoft 365 connector isn't logging events. I've done this about 20 times for different clients. The connector appears connected. I disconnected the connector and deleted the resource from the content hub. Of course, I've waited; it's been a month since I did it the first time. I've tried checking Exchange, Teams, etc., to test combinations. Does anyone know of any way to troubleshoot why the logs aren't arriving? Do I need to do something in Microsoft 365? Auditing is enabled, because when I go to Purview audit, I can search for logs. I can't think of anything else. Thanks!!!
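
When the connector shows as connected, the first thing worth confirming is whether the OfficeActivity table has ever received anything, and for which workloads. A minimal sketch:

OfficeActivity
| where TimeGenerated > ago(7d)
| summarize LastEvent = max(TimeGenerated), Events = count() by OfficeWorkload // e.g. Exchange, SharePoint, Teams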

Sentinel Playbook help required
Hi there, I am trying to create a Logic App that runs when a new Sentinel incident is triggered: it will check the entities in the incident, compare them with the members of a defined Entra ID group, and if they match, it will change the status to close the incident; if they do not match, it will send an email. Is this something someone in the forum has already built? Or is there someone who could help me achieve this logic? Thank you.

Standard Ontology and SIEM Field Mapping
Hello Community, we are working on a Microsoft Sentinel → Google Chronicle integration and need to automate the SIEM field mapping process between the two platforms. Schema differences: Sentinel and Chronicle use different naming conventions and field hierarchies. Analytics portability: without mapping, a Chronicle rule expecting principal user email won't understand Sentinel's User Principal Name. Questions: Is there an API, PowerShell cmdlet, or Logic App method for mapping Sentinel's fields to Google Chronicle fields? Is there any possibility via automation?

How to exclude IPs & accounts from an analytics rule with a watchlist?
We are trying to filter out some false positives from an analytics rule called "Service accounts performing RemotePS". Using automation rules still gives a lot of false mail notifications we don't want, so we would like to try using a watchlist with the service-account and IP combinations we want to exclude. Does anyone know where, and with what syntax, we would need to exclude the items on the specific watchlist? Query:
let InteractiveTypes = pack_array( // Declare interactive logon type names
    'Interactive',
    'CachedInteractive',
    'Unlock',
    'RemoteInteractive',
    'CachedRemoteInteractive',
    'CachedUnlock'
);
let WhitelistedCmdlets = pack_array( // List of whitelisted commands that don't provide a lot of value
    'prompt',
    'Out-Default',
    'out-lineoutput',
    'format-default',
    'Set-StrictMode',
    'TabExpansion2'
);
let WhitelistedAccounts = pack_array('FakeWhitelistedAccount'); // Accounts known to perform this activity in the environment; can be ignored
DeviceLogonEvents // Get all logon events...
| where AccountName !in~ (WhitelistedAccounts) // ...where it is not a whitelisted account...
| where ActionType == "LogonSuccess" // ...and the logon was successful...
| where AccountName !contains "$" // ...and not a machine logon.
| where AccountName !has "winrm va_" // WinRM will have pseudo account names that match this if there is an explicit permission for an admin to run the cmdlet, so assume it is good.
| extend IsInteractive = (LogonType in (InteractiveTypes)) // Determine if the logon is interactive (True=1, False=0)...
| summarize HasInteractiveLogon = max(IsInteractive) // ...then bucket and get the maximum interactive value (0 or 1)...
    by AccountName // ...by the AccountNames.
| where HasInteractiveLogon == 0 // ...and filter out all accounts that had an interactive logon.
// At this point, we have a list of accounts that we believe to be service accounts.
// Now we need to find RemotePS sessions that were spawned by those accounts.
// Note that we look at all PowerShell cmdlets executed to form a 29-day baseline to evaluate the data on today.
| join kind=rightsemi ( // Start by dropping the account name and only tracking the...
    DeviceEvents
    | where ActionType == 'PowerShellCommand' // ...PowerShell commands seen...
    | where InitiatingProcessFileName =~ 'wsmprovhost.exe' // ...whose parent was wsmprovhost.exe (RemotePS server)...
    | extend AccountName = InitiatingProcessAccountName // ...and add an AccountName field so the join is easier.
) on AccountName // At this point, we have all of the commands that were run by service accounts.
| extend Command = tostring(extractjson('$.Command', tostring(AdditionalFields))) // Extract the actual PowerShell command that was executed.
| where Command !in (WhitelistedCmdlets) // Remove any values that match the whitelisted cmdlets.
| summarize (Timestamp, ReportId) = arg_max(TimeGenerated, ReportId), // Then group all of the cmdlets and calculate the min/max times of execution...
    make_set(Command, 100000), count(), min(TimeGenerated) by // ...as well as creating a list of cmdlets run and the count...
    AccountName, AccountDomain, DeviceName, DeviceId // ...and have the commonality be the account, DeviceName, and DeviceId.
// At this point, we have machine-account pairs along with the list of commands run, as well as the first/last time the commands were run.
| order by AccountName asc // Order the final list by AccountName just to make it easier to go through.
| extend HostName = iff(DeviceName has '.', substring(DeviceName, 0, indexof(DeviceName, '.')), DeviceName)
| extend DnsDomain = iff(DeviceName has '.', substring(DeviceName, indexof(DeviceName, '.') + 1), "")
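
One way to apply the watchlist exclusion is to load it at the top of the query and anti-join the logon events before the rest of the logic runs. A hedged sketch, assuming a hypothetical watchlist named 'RemotePS_Exclusions' with columns Account and IPAddress (adjust both names to your actual watchlist schema):

let Exclusions = _GetWatchlist('RemotePS_Exclusions') // hypothetical watchlist name
    | project AccountName = tostring(Account), // assumed watchlist column: Account
              RemoteIP = tostring(IPAddress); // assumed watchlist column: IPAddress
DeviceLogonEvents
| join kind=leftanti Exclusions on AccountName, RemoteIP // drop logons that match an excluded account+IP pair
// ...the rest of the original query continues unchanged from "| where AccountName !in~ (WhitelistedAccounts)"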
Recent Blogs
- On September 30, 2025, Microsoft announced the general availability of the Microsoft Sentinel data lake, designed to centralize and retain massive volumes of security data in open formats like delta ... (12 min read, Sep 30, 2025)
- Unlocking Seamless Security Data Integration (Sep 30, 2025)