alerts
Advanced Hunting Query Help
Hey y'all, I'm trying to write a query that can be used to determine the number of times each IOC generated an alert (file hash, URL, IP, etc.). I'm using the query builder tool within Defender, and I'm looking into the AlertInfo and AlertEvidence tables, but I'm not seeing where the link exists between each of these alert records and the corresponding IOC. For instance, if I submit a custom indicator to block a file identified by a SHA256 hash, and that file gets correctly blocked, I want to see a count of the number of times that IOC value (the hash in this instance) triggered an alert. I'm hoping the community can help me determine whether I'm missing something glaringly obvious or if there's some documentation I haven't read yet. Thanks for reading!
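One way to approximate this in advanced hunting, sketched below under the assumption that the indicators are file hashes, URLs or IPs, is to take the entity values from AlertEvidence, join to AlertInfo on AlertId, and count distinct alerts per value. There is no column in these tables that points back at the custom indicator itself, so the result has to be filtered to the values you actually submitted:

// Sketch: count alerts per evidence value (hash, URL, or IP) in advanced hunting.
// There is no direct indicator-to-alert link here, so filter afterwards to the
// values you submitted as custom indicators.
AlertEvidence
| where Timestamp > ago(30d)
| extend IOCValue = coalesce(SHA256, RemoteUrl, RemoteIP)
| where isnotempty(IOCValue)
| join kind=inner (AlertInfo | project AlertId, Title, DetectionSource) on AlertId
// Optionally restrict to indicator-generated alerts; verify the exact
// DetectionSource string in your tenant before relying on it.
// | where DetectionSource == "Custom TI"
| summarize AlertCount = dcount(AlertId) by IOCValue
| order by AlertCount desc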
Lack of alerts in Sentinel
Hello, I am troubleshooting a lack of alerts and incidents in my Sentinel deployment. When I look at the Microsoft Defender XDR connector, I see plenty of events like DeviceEvents, DeviceInfo, IdentityLogonEvents, etc. However, the entries for SecurityIncident, SecurityAlert, AlertInfo, and AlertEvidence all show grey, with a disconnected connector showing. I've been over the onboarding documentation several times and can't find what I'm missing. Has anyone else experienced this who can point me in the right direction of what to check? Thanks!
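A quick way to see whether those tables have received anything at all is a single union across them. The sketch below (run in the Sentinel workspace) uses isfuzzy=true so the query does not fail if one of the tables has never been created:

union isfuzzy=true withsource=SourceTable SecurityIncident, SecurityAlert, AlertInfo, AlertEvidence
| where TimeGenerated > ago(7d)
| summarize Rows = count(), Latest = max(TimeGenerated) by SourceTable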
MSSP Multi-Tenant Handling with Lighthouse and Defender XDR
Hello, As far as I know, an MSSP leverages Azure Lighthouse to access multiple customer workspaces, which allows it to manage analytics across tenants. My questions are:

In the case of moving to Defender XDR, how would this be possible in a multi-tenant MSSP scenario?
Even with Lighthouse, how does Defender XDR avoid merging incidents/alerts across different customers when the same entities are involved?
How does Defender XDR differentiate identical IOCs (same IP, hash, etc.) that appear in multiple customers?
Can MSSPs customize correlation logic to prevent false cross-tenant merges?

Content Ownership & Sharing
Most MSSPs do not want to share their proprietary content (custom rules, detections, playbooks, analytics, etc.) with customers. How is Defender XDR approaching this requirement to ensure MSSPs can operate without exposing their intellectual property?

Example: Customer Test 1 has a port scan incident from IP 10.10.10.10. Customer Test 2 also has a port scan incident from the same IP 10.10.10.10. In Sentinel today, these would remain separate. But in Defender XDR, would these two alerts risk being merged into a single incident because the same entity is detected across tenants?

Thanks in advance for any clarification.
Device tables are not ingesting data for an org's workspace
Device tables are not ingesting data for an org's workspace. I can confirm that all devices are enrolled and onboarded to MDE (Microsoft Defender for Endpoint). I placed an EICAR file on one of the machines, which brought an alert through to Sentinel; however, this did not invoke any of the device-related tables.

Workspace I am targeting
Workspace from another org with tables enabled and ingesting data

The Microsoft Defender XDR connector shows as connected, however the tables do not seem to be ingesting data. I run the following:

DeviceEvents
| where TimeGenerated > ago(15m)
| top 20 by TimeGenerated

DeviceProcessEvents
| where TimeGenerated > ago(15m)
| top 20 by TimeGenerated

I receive no results: "No results found from the specified time range. Try selecting another time range."

Please assist, as I cannot think of where this is failing.
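One check that can help narrow this down is a single query across the common MDE device tables, reporting which ones (if any) have rows. This is a sketch: the table list is just the usual set, and isfuzzy=true keeps the union from failing if some of them do not exist in the workspace:

union isfuzzy=true withsource=SourceTable DeviceEvents, DeviceInfo, DeviceProcessEvents, DeviceNetworkEvents, DeviceFileEvents, DeviceLogonEvents
| where TimeGenerated > ago(24h)
| summarize Rows = count(), LastRecord = max(TimeGenerated) by SourceTable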
Deep Dive into Preview Features in Microsoft Defender Console
Background for Discussion
Microsoft Defender XDR (Extended Detection and Response) is evolving rapidly, offering enhanced security capabilities through preview features that can be enabled in the MDE console. These preview features are accessible via:

Path: Settings > Microsoft Defender XDR > General > Preview features

Under this section, users can opt into three distinct integrations:

Microsoft Defender XDR + Microsoft Defender for Identity
Microsoft Defender for Endpoint
Microsoft Defender for Cloud Apps

Each of these options unlocks advanced functionality that improves threat detection, incident correlation, and response automation across identity, endpoint, and cloud environments. However, enabling these features is optional and may depend on organizational readiness or policy. This raises important questions:

What specific technical capabilities are introduced by each preview feature?
Where exactly are these feature parameters reflected in the MDE console?
What happens if an organization chooses not to enable these preview features?
Are there alternative ways to access similar functionality through public preview or general availability?
How to set up alerts for deadlocks using Log Analytics
Managed Instance diagnostic events do not support sending deadlock information to Log Analytics. However, through auditing, it's possible to query failed queries along with their reported error messages, though this does not include the deadlock XML. We will see how we can send this information to Log Analytics and set up an alert for when a deadlock occurs.

Step 1 - Deploy Log Analytics
Create a Log Analytics workspace if you currently don't have one (see Create a Log Analytics workspace).

Step 2 - Add diagnostic setting
On the Azure portal, open the Diagnostic settings of your Azure SQL Managed Instance and choose Add diagnostic setting. Select SQL Security Audit Event and choose your Log Analytics workspace as the destination.

Step 3 - Create a server audit on the Azure SQL Managed Instance
Run the query below on the Managed Instance. Rename the server audit and server audit specification to a name of your choice.

CREATE SERVER AUDIT [audittest] TO EXTERNAL_MONITOR
GO

-- we are adding login auditing, but only BATCH_COMPLETED_GROUP is necessary for query execution
CREATE SERVER AUDIT SPECIFICATION audit_server
FOR SERVER AUDIT audittest
ADD (SUCCESSFUL_LOGIN_GROUP),
ADD (BATCH_COMPLETED_GROUP),
ADD (FAILED_LOGIN_GROUP)
WITH (STATE = ON)
GO

ALTER SERVER AUDIT [audittest] WITH (STATE = ON)
GO

Step 4 - Check events on Log Analytics
It may take some time for records to begin appearing in Log Analytics. Open your Log Analytics workspace and choose Logs. To verify that data is being ingested, run the following query in Log Analytics and wait until you start getting the first results. Make sure that you replace servername with your Azure SQL Managed Instance name.

AzureDiagnostics
| where LogicalServerName_s == "servername"
| where Category == "SQLSecurityAuditEvents"
| take 10

Step 5 - (Optional) Create a deadlock event for testing
Create a deadlock scenario so you can see a record in Log Analytics.

Example: Open SSMS and a new query window under the context of a user database (you can create a test database just for this test). Create a table in the user database and insert 10 records:

create table tb1 (id int identity(1,1) primary key clustered, col1 varchar(30))
go
insert into tb1 values ('aaaaaaa')
go 10

You can close the query window or reuse it for the next step.

Open a new query window (or reuse the first query window) and run (leave the query window open after executing):

begin transaction
update tb1 set col1 = 'bbbb' where id = 1

Open a second query window and run (leave the query window open after executing):

begin transaction
update tb1 set col1 = 'bbbb' where id = 2

Go back to the first query window and run (the query will be blocked and will stay executing):

update tb1 set col1 = 'bbbb' where id = 2

Go back to the second query window and run (this transaction will be the victim of the deadlock):

update tb1 set col1 = 'bbbb' where id = 1

You can roll back and close all windows after the deadlock exception.
Step 6 - (Optional) Check the deadlock exception on Log Analytics
Note: the record can take some minutes to appear in Log Analytics.

Use the query below to obtain the deadlock events for the last hour (we are looking for error 1205). Make sure that you replace servername with your Azure SQL Managed Instance name.

AzureDiagnostics
| where TimeGenerated > ago(1h)
| where LogicalServerName_s == "servername"
| where Category == "SQLSecurityAuditEvents"
| where succeeded_s == "false"
| where additional_information_s contains "Err 1205, Level 13"

Step 7 - Use the query to create an alert
Use the query below to create an alert in Azure Log Analytics. Make sure that you replace servername with your Azure SQL Managed Instance name. The query checks for deadlocks that occurred in the previous hour.

AzureDiagnostics
| where TimeGenerated > ago(1h)
| where LogicalServerName_s == "servername"
| where Category == "SQLSecurityAuditEvents"
| where succeeded_s == "false"
| where additional_information_s contains "Err 1205, Level 13"

Run the query and click on New alert rule. Create the alert with the desired settings.
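If more context in the alert is useful, the same filter can be summarized by database and login. This is only a sketch: the columns database_name_s, server_principal_name_s and statement_s are the usual SQL audit fields in AzureDiagnostics, but verify them against your own records before wiring the query into an alert rule.

AzureDiagnostics
| where TimeGenerated > ago(1h)
| where LogicalServerName_s == "servername"
| where Category == "SQLSecurityAuditEvents"
| where succeeded_s == "false"
| where additional_information_s contains "Err 1205, Level 13"
| summarize Deadlocks = count(), SampleStatement = take_any(statement_s) by database_name_s, server_principal_name_s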
Single Rule for No logs receiving (Global + Per-device Thresholds)
Hi everyone, I currently maintain one Analytics rule per table to detect when logs stop coming in. Some tables receive data from multiple sources, each with a different expected interval (for example, some sources send every 10 minutes, others every 30 minutes). In other SIEM platforms there's usually:

A global threshold (e.g., 60 minutes) for all sources.
Optional per-device/per-table thresholds that override the global value.

Is there a recommended way to implement one global rule that uses a default threshold but allows per-source overrides when a particular device or log table has a different expected frequency? Also, if there are other approaches you use to manage "logs not received" detection, I'd love to hear your suggestions as well.

This is a sample of my current rule:

let threshold = 1h;
AzureActivity
| summarize LastHeartBeat = max(TimeGenerated)
| where LastHeartBeat < ago(threshold)
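One way to collapse this into a single rule with a global default and per-table overrides is to key off the Usage table, which records ingestion per table (DataType). The sketch below makes a few assumptions: Usage is aggregated and can lag by up to an hour or so, so it suits thresholds of roughly an hour or more, and the table names and values in the override list are purely illustrative.

// Global default plus per-table overrides; anything not listed falls back to the default.
let DefaultThreshold = 1h;
let Overrides = datatable(TableName:string, Threshold:timespan)
[
    "AzureActivity", 30m,      // illustrative override
    "CommonSecurityLog", 2h    // illustrative override
];
Usage
| where TimeGenerated > ago(2d)
| summarize LastHeartBeat = max(TimeGenerated) by TableName = DataType
| lookup kind=leftouter Overrides on TableName
| extend Threshold = iff(isnull(Threshold), DefaultThreshold, Threshold)
| where LastHeartBeat < now() - Threshold
| project TableName, LastHeartBeat, Threshold

A watchlist (read via _GetWatchlist) could hold the override list instead of the inline datatable if it needs to be maintained outside the rule.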
How to: Ingest Splunk alert data to Microsoft Sentinel SIEM
Thanks to Javier Soriano, Principal Product Manager - OneSOC Customer Experience Engineering, for the peer review.

Introduction
Although the recommended approach is not to have multiple SIEM solutions in place, many organizations are still running dual-SIEM setups, sometimes even introducing additional ones into the mix. The combination most often seen is a legacy solution paired with a modern SIEM like Microsoft Sentinel SIEM. A side-by-side architecture is recommended for just long enough to complete the migration, train people and update processes, but organizations might stay with the side-by-side configuration longer when they are not ready to move away from legacy solutions. In such situations, organizations usually opt for sending alerts from their legacy SIEM to Sentinel SIEM:

Cloud data is ingested and analyzed in Sentinel SIEM
On-premises data is ingested and analyzed in the legacy SIEM, which generates alerts
Alerts are forwarded from the legacy SIEM to Sentinel SIEM

With this approach, SOC teams have a single interface where they are able to cross-correlate and investigate alerts from their organizations while benefiting from the full value of unified security operations in Microsoft Defender.

Send Splunk alerts to Sentinel SIEM

Splunk side
When an alert is raised in Splunk, organizations have the option to set up the following alert actions:

Email notification action
Webhook alert action
Output results to a CSV lookup
Log events
Monitor triggered alerts
Send alerts and dashboards to Splunk Mobile Users

Interestingly, it is possible to work with webhooks to make an HTTP POST request to a URL. The data is in JSON format, which makes it easily consumable by Sentinel SIEM. For this to work, Splunk needs the hook URL from the target source (in this case, Sentinel SIEM).

{
  "result": {
    "sourcetype": "mongod",
    "count": "8"
  },
  "sid": "scheduler_admin_search_W2_at_14232356_132",
  "results_link": "http://web.example.local:8000/app/search/@go?sid=scheduler_admin_search_W2_at_14232356_132",
  "search_name": null,
  "owner": "admin",
  "app": "search"
}

Example: Splunk alert JSON payload

Microsoft side
From the Microsoft perspective, organizations can take advantage of the Logs Ingestion API, which allows any application that can make a REST API call to send data to Sentinel SIEM. To configure the Logs Ingestion API, a couple of supporting resources are needed:

A Microsoft Entra application which will authenticate against the API
A custom table in the Log Analytics workspace, where the data will be sent and made accessible from Sentinel SIEM
A Data Collection Rule (DCR) which will direct data to the target custom table
The Entra application from the first step needs to have RBAC assigned on the DCR resource
A solution to call the Logs Ingestion API so the data can be sent to Sentinel SIEM
To make this process streamlined and easy to deploy, a solution has been developed which automates the creation of all of these supporting resources and gives you a webhook URL ready to be placed in your Splunk solution: https://github.com/Azure/Azure-Sentinel/tree/master/Tools/SplunkAlertIngestion

Picture: Content of the solution

The script with supporting ARM templates can be run directly from the Azure Cloud Shell and configured with a couple of parameters:

./SplunkAlertIngestion.ps1 -ServicePrincipalName "" -tableName "" -workspaceResourceId "" -dataCollectionRuleName "" -location ""

ServicePrincipalName - mandatory, define a name for the service principal
tableName - mandatory, define a name for the custom table with the suffix _CL (example: SplunkAlertInfo_CL)
workspaceResourceId - mandatory, the resourceId can be fetched from the Log Analytics workspace Properties blade (/subscriptions/xxx-xxx/resourceGroups/xxx/providers/Microsoft.OperationalInsights/workspaces/xxx)
dataCollectionRuleName - mandatory, define the DCR name
location - mandatory, define the Azure location for the resources (example: northeurope, eastus2)
LogicAppName - not mandatory, define the name for the Logic App, otherwise it will be named SplunkAlertAutomationLogicApp

Result
The script will create all the supporting resources that are needed and will provide the webhook URL as an output. Use this URL to configure the trigger action in Splunk:

Picture: Instructions for configuring webhook alert action in Splunk

Once the webhook is configured on the Splunk side, any time an alert is raised it will trigger the webhook, which will initiate the Logic App on the Azure side responsible for parsing the data and sending it through the Logs Ingestion API to the destination table in Sentinel SIEM.

Picture: Workflow of the Logic App

Conclusion
Ingesting alert data from other solutions in your organization into Sentinel SIEM allows security teams to take advantage of unified security operations in Microsoft Defender: easier cross-correlation between various data sources, comprehensive threat intelligence and the case management experience.
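As a quick check once the first alert has fired, the destination table can be queried directly in the workspace. This is just a sketch using the example table name SplunkAlertInfo_CL mentioned in the parameters above; substitute whatever name was passed to the script.

SplunkAlertInfo_CL
| where TimeGenerated > ago(1d)
| sort by TimeGenerated desc
| take 20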
Change default recipient for default security alerts
In the Office 365 Security admin center, under Alerts > Alert policies, all of Microsoft's default alert policies are configured to send to "TenantAdmins". I would like to change this default recipient address for all existing and future default alerts, without having to manually change it on each alert. I have found that you cannot modify these alerts using PowerShell. But is there a way to set the default recipient for default alerts? I haven't been able to find one.
Advanced Hunting Custom detection rule notification cannot be customized
Hello, We have a case open with both Microsoft and US cloud about a custom detection rule created from a query. The problem is that I want to send the rule's notification to an email group. However, after about two months of investigation, I was advised the following: "We can go one of two routes. Either the alerts from Defender can be ingested into Sentinel based on the custom detection rule you created, or the Entra sign-in logs can be ingested, allowing Sentinel to check the logs itself." Could you please help us find an easier solution for the notification, or create a feature request so that we could configure notifications for custom detection rules when creating the alert?