I'm stuck!
Logically, I'm not sure how/if I can do this. I want to monitor for Entra ID group additions. I can get this to work for a single group using:

AuditLogs
| where TimeGenerated > ago(7d)
| where OperationName == "Add member to group"
| where TargetResources[0].type == "User"
| extend GroupName = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[1].newValue)))
| where GroupName == "NameOfGroup" // <-- this returns the single entry
| extend User = tostring(TargetResources[0].userPrincipalName)
| summarize ['Count of Users Added']=dcount(User), ['List of Users Added']=make_set(User) by GroupName
| sort by GroupName asc

However, I have a list of 20 privileged groups that I need to monitor. I can do this using:

let PrivGroups = dynamic(['name1','name2','name3']);

and then calling that like this:

blahblah
| where TargetResources[0].type == "User"
| extend GroupName = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[1].newValue)))
| where GroupName has_any (PrivGroups)

But that's a bit dirty to update - I wanted to call a watchlist. I've tried defining it with:

let PrivGroup = (_GetWatchlist('TestList'));

and calling it like:

blahblah
| where TargetResources[0].type == "User"
| extend GroupName = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[1].newValue)))
| where GroupName has_any ('PrivGroup')

I've also tried dropping the let and looking up the watchlist directly:

| where GroupName has_any (_GetWatchlist('TestList'))

The query runs but doesn't return any results (obviously, I know the result exists). How do I look that extracted value up against a watchlist? Any ideas or pointers on where I'm going wrong would be appreciated! Many thanks
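For reference, _GetWatchlist() returns a table rather than a list of strings, so has_any ('PrivGroup') only ever matches the literal text PrivGroup. One pattern that should work is to project the watchlist down to a single column and test membership with in~, which accepts a single-column tabular expression. A minimal sketch, assuming the watchlist stores the group names in a column called GroupName - adjust to your watchlist's actual column name, or use the built-in SearchKey column:

let PrivGroups = _GetWatchlist('TestList') | project GroupName = tostring(GroupName);
AuditLogs
| where TimeGenerated > ago(7d)
| where OperationName == "Add member to group"
| where TargetResources[0].type == "User"
| extend GroupName = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[1].newValue)))
| where GroupName in~ (PrivGroups) // case-insensitive membership test against the watchlist column
| extend User = tostring(TargetResources[0].userPrincipalName)
| summarize ['Count of Users Added']=dcount(User), ['List of Users Added']=make_set(User) by GroupName
| sort by GroupName asc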
Understand New Sentinel Pricing Model with Sentinel Data Lake Tier

Introduction to Sentinel and its New Pricing Model

Microsoft Sentinel is a cloud-native Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) platform that collects, analyzes, and correlates security data from across your environment to detect threats and automate response. Traditionally, Sentinel stored all ingested data in the Analytics tier (Log Analytics workspace), which is powerful but expensive for high-volume logs. To reduce cost and enable customers to retain all security data without compromise, Microsoft introduced a new dual-tier pricing model consisting of the Analytics tier and the Data Lake tier.

The Analytics tier continues to support fast, real-time querying and analytics for core security scenarios, while the new Data Lake tier provides very low-cost storage for long-term retention and high-volume datasets. Customers can now choose where each data type lands - Analytics for high-value detections and investigations, Data Lake for large or archival types - allowing organizations to significantly lower cost while still retaining all their security data for analytics, compliance, and hunting. The flow diagram below depicts the new Sentinel pricing model.

Now let's walk through the new pricing model with these scenarios:

- Scenario 1A (Pay-As-You-Go)
- Scenario 1B (Usage Commitment)
- Scenario 2 (Data Lake Tier Only)

Scenario 1A (Pay-As-You-Go)

Requirement: Suppose you need to ingest 10 GB of data per day, and you must retain that data for 2 years. However, you will only frequently use, query, and analyze the data for the first 6 months.

Solution: To optimize cost, you can ingest the data into the Analytics tier and retain it there for the first 6 months, where active querying and investigation happen. After that period, the remaining 18 months of retention can be shifted to the Data Lake tier, which provides low-cost storage for compliance and auditing needs. Note that you will be charged separately for Data Lake tier querying and analytics, depicted as Compute (D) in the pricing flow diagram.

Pricing Flow / Notes: The first 10 GB/day ingested into the Analytics tier is free for 31 days under the Analytics logs plan. All data ingested into the Analytics tier is automatically mirrored to the Data Lake tier at no additional ingestion or retention cost. For the first 6 months, you pay only for Analytics tier ingestion and retention, excluding any free capacity. For the next 18 months, you pay only for Data Lake tier retention, which is significantly cheaper.

Azure Pricing Calculator Equivalent: Assuming no data is queried or analyzed during the 18-month Data Lake tier retention period: although the Analytics tier retention is set to 6 months, the first 3 months fall under the free retention limit, so retention charges apply only for the remaining 3 months of the analytics retention window. The Azure pricing calculator will adjust accordingly.

Scenario 1B (Usage Commitment)

Now suppose you are ingesting 100 GB per day. If you follow the same pay-as-you-go pricing model described above, your estimated cost would be approximately $15,204 per month. However, you can reduce this cost by choosing a Commitment Tier, where Analytics tier ingestion is billed at a discounted rate. Note that the discount applies only to Analytics tier ingestion - it does not apply to Analytics tier retention costs or to any Data Lake tier-related charges. Please refer to the pricing flow and the equivalent pricing calculator results shown below.
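As a rough sanity check on where these monthly figures sit (derived only from the numbers quoted in this article; actual per-GB rates vary by region and change over time):

    100 GB/day x 30.42 days/month ≈ 3,042 GB ingested per month
    PAYG:       $15,204 / 3,042 GB ≈ $5.00 per GB effective
    Commitment: $11,184 / 3,042 GB ≈ $3.68 per GB effective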
Monthly cost savings: $15,204 – $11,184 = $4,020 per month.

Now the question is: what happens if your usage reaches 150 GB per day? Will the additional 50 GB be billed at the pay-as-you-go rate? No. The entire 150 GB/day will still be billed at the discounted rate associated with the 100 GB/day commitment tier bucket.

Azure Pricing Calculator Equivalent (100 GB/day)
Azure Pricing Calculator Equivalent (150 GB/day)

Scenario 2 (Data Lake Tier Only)

Requirement: Suppose you need to store certain audit or compliance logs amounting to 10 GB per day. These logs are not used for querying, analytics, or investigations on a regular basis, but must be retained for 2 years per your organization's compliance or forensic policies.

Solution: Since these logs are not actively analyzed, you should avoid ingesting them into the Analytics tier, which is more expensive and optimized for active querying. Instead, send them directly to the Data Lake tier, where they can be retained cost-effectively for future audit, compliance, or forensic needs.

Pricing Flow: Because the data is ingested directly into the Data Lake tier, you pay both ingestion and retention costs there for the entire 2-year period. If, at any point in the future, you need to perform advanced analytics, querying, or search, you will incur additional compute charges based on actual usage. Even with occasional compute charges, the cost remains significantly lower than storing the same data in the Analytics tier.

Realized Savings

Scenario                                             Cost per Month
Scenario 1: 10 GB/day in Analytics tier              $1,520.40
Scenario 2: 10 GB/day directly into Data Lake tier   $202.20 (without compute)
                                                     $257.20 (with sample compute price)

Savings with no compute activity: $1,520.40 – $202.20 = $1,318.20 per month
Savings with some compute activity (sample value): $1,520.40 – $257.20 = $1,263.20 per month

Azure calculator equivalent without compute
Azure calculator equivalent with sample compute

Conclusion

The combination of the Analytics tier and the Data Lake tier in Microsoft Sentinel enables organizations to optimize cost based on how their security data is used. High-value logs that require frequent querying, real-time analytics, and investigation can be stored in the Analytics tier, which provides powerful search performance and built-in detection capabilities. At the same time, large-volume or infrequently accessed logs - such as audit, compliance, or long-term retention data - can be directed to the Data Lake tier, which offers dramatically lower storage and ingestion costs. Because all Analytics tier data is automatically mirrored to the Data Lake tier at no extra cost, customers can use the Analytics tier only for the period they actively query data, and rely on the Data Lake tier for the remaining retention. This tiered model allows different scenarios - active investigation, archival storage, compliance retention, or large-scale telemetry ingestion - to be handled at the most cost-effective layer, ultimately delivering substantial savings without sacrificing visibility, retention, or future analytical capabilities.
Need to create monitoring queries to track the health status of data connectors

I'm working with Microsoft Sentinel and need to create monitoring queries to track the health status of data connectors. Specifically, I want to:

- Identify unhealthy or disconnected data connectors
- Determine when a data connector last lost connection
- Get historical connection status information

What I'm looking for: a KQL query that can be run in the Sentinel workspace to check connector status, OR a PowerShell script/command that can retrieve this information - ideally something that can be automated for regular monitoring.

What I've been trying:

- Looking at the SentinelHealth table, but I'm unsure about the exact schema, connector names, etc.
- Checking if there are specific tables that track connector status changes
- Using Azure Resource Graph or the management APIs

I've tried multiple approaches (KQL, PowerShell, Resource Graph), but I somehow cannot get the information I'm looking for. For example, I see this Microsoft docs page: https://learn.microsoft.com/en-us/azure/sentinel/monitor-data-connector-health#supported-data-connectors. However, I would like my query to surface data such as:

- The last ingestion time of each table
- How much data has been ingested by specific tables and connectors
- Which connectors are currently connected
- The health of my connectors

Please help
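For what it's worth, here is a starting point that covers most of these asks, using the SentinelHealth and Usage tables. Treat it as a sketch to adapt: SentinelHealth only populates after you enable health monitoring in the workspace settings, and only the connectors listed on that docs page report into it. The two queries are meant to be run separately.

// Latest reported health status per data connector (requires health monitoring to be enabled)
SentinelHealth
| where TimeGenerated > ago(7d)
| where SentinelResourceType == "Data connector"
| summarize arg_max(TimeGenerated, Status, Description) by SentinelResourceName
| project SentinelResourceName, LastReported = TimeGenerated, Status, Description

// Volume ingested and last ingestion time per table over the last 7 days
// (Quantity is reported in MB, hence the division to GB)
Usage
| where TimeGenerated > ago(7d)
| summarize IngestedGB = sum(Quantity) / 1024., LastIngestion = max(TimeGenerated) by DataType
| order by IngestedGB desc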
Sentinel Data Connector: Google Workspace (G Suite) (using Azure Functions)

I'm encountering a problem when attempting to run the GWorkspace_Report workbook in Azure Sentinel. The query is throwing this error related to the union operator:

'union' operator: Failed to resolve table expression named 'GWorkspace_ReportsAPI_gcp_CL'

I've double-checked, and the GoogleWorkspaceReports connector is installed and updated to version 3.0.2. Has anyone seen this, or does anyone know what might be causing the table GWorkspace_ReportsAPI_gcp_CL to be unresolved? Thanks!
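A likely explanation, offered as a pointer: custom tables such as GWorkspace_ReportsAPI_gcp_CL are only created in the workspace once the function app has actually ingested at least one record of that report type, and a plain union fails hard on any table that doesn't exist yet. If editing a copy of the workbook query is an option, union isfuzzy=true tolerates unresolved tables. A sketch - the exact list of GWorkspace_ReportsAPI_* tables to union depends on which report types your connector is collecting:

// isfuzzy=true lets the union run even when some of the listed tables
// have not been created in this workspace yet
union isfuzzy=true
    GWorkspace_ReportsAPI_gcp_CL,
    GWorkspace_ReportsAPI_admin_CL,
    GWorkspace_ReportsAPI_login_CL
| take 10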
Device Tables are not ingesting for an org's workspace

Device tables are not ingesting data for an org's workspace. I can confirm that all devices are enrolled and onboarded to MDE (Microsoft Defender for Endpoint). I placed an EICAR file on one of the machines, which brought an alert through to Sentinel; however, this did not populate any of the device-related tables.

(Screenshot: the workspace I am targeting)
(Screenshot: a workspace from another org with the tables enabled and ingesting data)

The Microsoft Defender XDR connector shows as connected, but the tables do not seem to be ingesting data. I run the following:

DeviceEvents
| where TimeGenerated > ago(15m)
| top 20 by TimeGenerated

DeviceProcessEvents
| where TimeGenerated > ago(15m)
| top 20 by TimeGenerated

I receive no results: "No results found from the specified time range. Try selecting another time range." Please assist, as I cannot think where this is failing.
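One thing worth checking, as a hypothesis: an EICAR alert reaching Sentinel proves the alerts path works, but the raw Device* event tables only flow if the corresponding event types are ticked under the Microsoft Defender XDR connector page (the events selection is separate from the incidents/alerts toggle). A quick way to see which device tables have landed anything recently - the built-in Type column carries the source table name, and isfuzzy=true keeps the union alive even for tables that were never created:

union isfuzzy=true DeviceEvents, DeviceProcessEvents, DeviceLogonEvents, DeviceNetworkEvents, DeviceFileEvents, DeviceRegistryEvents
| where TimeGenerated > ago(7d)
| summarize LastRecord = max(TimeGenerated), Rows = count() by Type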
How to exclude IPs & accounts from Analytic Rule, with Watchlist?

We are trying to filter out some false positives from an analytics rule called "Service accounts performing RemotePS". Using automation rules still gives a lot of false mail notifications we don't want, so we would like to try using a watchlist with the service-account and IP combinations we want to exclude. Does anyone know where, and with what syntax, we would need to exclude the items on that specific watchlist? A sketch of one approach follows the query. Query:

let InteractiveTypes = pack_array( // Declare Interactive logon type names
    'Interactive',
    'CachedInteractive',
    'Unlock',
    'RemoteInteractive',
    'CachedRemoteInteractive',
    'CachedUnlock'
);
let WhitelistedCmdlets = pack_array( // List of whitelisted commands that don't provide a lot of value
    'prompt',
    'Out-Default',
    'out-lineoutput',
    'format-default',
    'Set-StrictMode',
    'TabExpansion2'
);
let WhitelistedAccounts = pack_array('FakeWhitelistedAccount'); // List of accounts that are known to perform this activity in the environment and can be ignored
DeviceLogonEvents // Get all logon events...
| where AccountName !in~ (WhitelistedAccounts) // ...where it is not a whitelisted account...
| where ActionType == "LogonSuccess" // ...and the logon was successful...
| where AccountName !contains "$" // ...and not a machine logon.
| where AccountName !has "winrm va_" // WinRM will have pseudo account names that match this if there is an explicit permission for an admin to run the cmdlet, so assume it is good.
| extend IsInteractive=(LogonType in (InteractiveTypes)) // Determine if the logon is interactive (True=1,False=0)...
| summarize HasInteractiveLogon=max(IsInteractive) // ...then bucket and get the maximum interactive value (0 or 1)...
    by AccountName // ...by the AccountNames
| where HasInteractiveLogon == 0 // ...and filter out all accounts that had an interactive logon.
// At this point, we have a list of accounts that we believe to be service accounts.
// Now we need to find RemotePS sessions that were spawned by those accounts.
// Note that we look at all PowerShell cmdlets executed to form a 29-day baseline to evaluate the data on today.
| join kind=rightsemi ( // Start by dropping the account name and only tracking the...
    DeviceEvents
    | where ActionType == 'PowerShellCommand' // ...PowerShell commands seen...
    | where InitiatingProcessFileName =~ 'wsmprovhost.exe' // ...whose parent was wsmprovhost.exe (RemotePS server)...
    | extend AccountName = InitiatingProcessAccountName // ...and add an AccountName field so the join is easier
) on AccountName
// At this point, we have all of the commands that were run by service accounts.
| extend Command = tostring(extractjson('$.Command', tostring(AdditionalFields))) // Extract the actual PowerShell command that was executed
| where Command !in (WhitelistedCmdlets) // Remove any values that match the whitelisted cmdlets
| summarize (Timestamp, ReportId)=arg_max(TimeGenerated, ReportId), // Then group all of the cmdlets and calculate the min/max times of execution...
    make_set(Command, 100000), count(), min(TimeGenerated) by // ...as well as creating a list of cmdlets run and the count...
    AccountName, AccountDomain, DeviceName, DeviceId // ...and have the commonality be the account, DeviceName and DeviceId
// At this point, we have machine-account pairs along with the list of commands run as well as the first/last time the commands were run.
| order by AccountName asc // Order the final list by AccountName just to make it easier to go through
| extend HostName = iff(DeviceName has '.', substring(DeviceName, 0, indexof(DeviceName, '.')), DeviceName)
| extend DnsDomain = iff(DeviceName has '.', substring(DeviceName, indexof(DeviceName, '.') + 1), "")
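One way to do the watchlist exclusion, sketched under the assumption of a watchlist named 'RemotePS_Exclusions' with columns ServiceAccount and IPAddress (rename everything to match your actual schema): project the watchlist into the column names used by DeviceLogonEvents, then drop matching rows with a leftanti join. In the rule above, the natural place for this is immediately after the initial DeviceLogonEvents filters, so excluded account/IP pairs never reach the summarize and join stages:

let Exclusions = _GetWatchlist('RemotePS_Exclusions') // hypothetical watchlist name
    | project AccountName = tostring(ServiceAccount), RemoteIP = tostring(IPAddress);
DeviceLogonEvents
| where ActionType == "LogonSuccess"
// leftanti keeps only rows whose AccountName + RemoteIP combination
// is NOT present in the watchlist
| join kind=leftanti Exclusions on AccountName, RemoteIP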
Playbook when incident trigger is not working

Hi, I want to create a playbook to automatically revoke a user's sessions when an incident with a specific title or severity is created. But after some testing, the playbook isn't running automatically; it works when I run it manually. I can't find what I'm doing wrong. See the image and the code below. Thanks in advance!

{
  "definition": {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "contentVersion": "1.0.0.0",
    "triggers": {
      "Microsoft_Sentinel_incident": {
        "type": "ApiConnectionWebhook",
        "inputs": {
          "host": {
            "connection": {
              "name": "@parameters('$connections')['azuresentinel']['connectionId']"
            }
          },
          "body": {
            "callback_url": "@{listCallbackUrl()}"
          },
          "path": "/incident-creation"
        }
      }
    },
    "actions": {
      "Get_incident": {
        "type": "ApiConnection",
        "inputs": {
          "host": {
            "connection": {
              "name": "@parameters('$connections')['azuresentinel-1']['connectionId']"
            }
          },
          "method": "post",
          "body": {
            "incidentArmId": "@triggerBody()?['object']?['id']"
          },
          "path": "/Incidents"
        },
        "runAfter": {}
      },
      "Send_e-mail_(V2)": {
        "type": "ApiConnection",
        "inputs": {
          "host": {
            "connection": {
              "name": "@parameters('$connections')['office365']['connectionId']"
            }
          },
          "method": "post",
          "body": {
            "To": "email address removed for privacy reasons",
            "Subject": "Ceci est un test",
            "Body": "<p>@{body('Get_incident')?['id']}</p><p>@{body('Get_incident')?['properties']?['description']}</p><p>@{body('Get_incident')?['properties']?['incidentNumber']}</p>",
            "Importance": "Normal"
          },
          "path": "/v2/Mail"
        },
        "runAfter": {
          "Get_incident": [
            "Succeeded"
          ]
        }
      }
    },
    "outputs": {},
    "parameters": {
      "$connections": {
        "type": "Object",
        "defaultValue": {}
      }
    }
  },
  "parameters": {
    "$connections": {
      "type": "Object",
      "value": {
        "azuresentinel": {
          "id": "/subscriptions/xxxx/providers/Microsoft.Web/locations/xxxxx/managedApis/xxxxxxx",
          "connectionId": "/subscriptions/xxxxxxx/resourceGroups/xxxxxx/providers/Microsoft.Web/connections/azuresentinel-Revoke-RiskySessions1",
          "connectionName": "azuresentinel-Revoke-RiskySessions1",
          "connectionProperties": {
            "authentication": {
              "type": "ManagedServiceIdentity"
            }
          }
        },
        "azuresentinel-1": {
          "id": "/subscriptions/xxxxxx/providers/Microsoft.Web/locations/xxxx/managedApis/xxx",
          "connectionId": "/subscriptions/xxxxxxx/resourceGroups/xxxxx/providers/Microsoft.Web/connections/xxxx",
          "connectionName": "xxxxxx",
          "connectionProperties": {
            "authentication": {
              "type": "ManagedServiceIdentity"
            }
          }
        },
        "office365": {
          "id": "/subscriptions/xxxxxx/providers/Microsoft.Web/locations/xxxxx/managedApis/office365",
          "connectionId": "/subscriptions/xxxxx/resourceGroups/xxxxxx/providers/Microsoft.Web/connections/o365-Test_Send-email-incident-to-xxxx",
          "connectionName": "o365-Test_Send-email-incident-to-xxxxx"
        }
      }
    }
  }
}
Logic app - Escaped Characters and Formatting Problems in KQL Run query and list results V2 action

I'm building a Logic App to detect sign-ins from suspicious IP addresses. The logic includes:

1. Retrieving IPs from incident entities in Microsoft Sentinel.
2. Enriching each IP using an external API.
3. Filtering malicious IPs based on their score and risk level.
4. Storing those IPs in an array variable (MaliciousIPs).
5. Creating a dynamic KQL query to check if any of the malicious IPs were used in sign-ins, using the in~ operator.

Problem: when I use a Select and Join action to build the list of IPs (e.g., "ip1", "ip2"), the Logic App automatically escapes the quotes. As a result, the KQL query is built like this:

IPAddress in~ ([{"body":"{\"\":\"\\\"X.X.X.X\\\"\"}"}])

instead of the expected format:

IPAddress in~ ("X.X.X.X", "another.ip")

This causes a parsing error when the Run query and list results V2 action is executed against Log Analytics.

Here's the For Each loop that contains the issue: a dynamic Compose formulates the KQL query in a concat, since it contains the dynamic value above:

concat('SigninLogs | where TimeGenerated > ago(3d) | where UserPrincipalName == \"', variables('CurrentUPN'), '\" | where IPAddress in~ (', outputs('Join_MaliciousIPs_KQL'), ') | project TimeGenerated, IPAddress, DeviceDetail, AppDisplayName, Status')

CurrentUPN works as expected, using the same format in an Initialize/Set variable above (Array/String (for IPs)). The rest of the loop: (Note: even though a "failed to retrieve" error appears in the picture, don't bother with that - it's just about the dynamic value for the Subscription; I've entered it manually and it's working fine.)

What I've tried:

- Using concat('\"', item()?['ip'], '\"') inside Select (causes extra escaping).
- Removing quotes and relying on Logic App formatting (resulted in object wrapping).
- Flattening the array using a secondary Select to extract only values.
- Using Compose to debug outputs.

Despite these attempts, the query string is always malformed due to extra escaping or a nested JSON structure. Has anyone encountered this, or found a solution to this annoying problem? Best regards
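For anyone hitting the same wall: the nesting comes from the Select action emitting an array of objects (key/value mode wraps each value in its own JSON object), which then gets serialized into the query string. The usual fix is to switch the Select action's Map to text mode so it returns a flat array of plain strings, then add the surrounding quotes exactly once when joining. A sketch, where Select_MaliciousIPs and Compose_IP_List are hypothetical action names:

// Select_MaliciousIPs - "Map" switched to text mode, each item maps to the bare value:
//   @item()?['ip']
// Compose_IP_List - joins the flat array and adds the quotes exactly once:
//   @{concat('"', join(body('Select_MaliciousIPs'), '","'), '"')}
// Query Compose - the IP list is then inserted without any further quoting:
//   @{concat('SigninLogs | where TimeGenerated > ago(3d) | where UserPrincipalName == "',
//       variables('CurrentUPN'),
//       '" | where IPAddress in~ (', outputs('Compose_IP_List'),
//       ') | project TimeGenerated, IPAddress, DeviceDetail, AppDisplayName, Status')}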
Researching a rule template "FailedLogonToAzurePortal"

Hello, I am looking at the template rule "FailedLogonToAzurePortal" (https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/FailedLogonToAzurePortal.yaml) and there is a column of data that I don't understand. The column is "FailedLogonCount", and it appears to show inconclusive data because it reports more failures than I can find. Here is an example: the incident states that 38 login failures were detected, but when I investigate the non-interactive sign-in logs I only see one failure matching error code "50173". I don't understand where the remaining 37 failures come from... Can you help me? I am a beginner in KQL and I don't think I understand the context of the alert. Regards.
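A hedged pointer rather than a definitive answer: the template counts every failed sign-in record (interactive and non-interactive) over the rule's lookback, while the Entra portal's non-interactive sign-in blade groups similar attempts into a single row - so one visible row there can represent dozens of underlying log records. A diagnostic query along these lines should show where the 38 comes from:

// Break the failure count down by source table, error code, and hour.
union isfuzzy=true SigninLogs, AADNonInteractiveUserSignInLogs
| where TimeGenerated > ago(1d)
| where UserPrincipalName == "user@example.com" // hypothetical placeholder - substitute the flagged account
| where ResultType != "0" // failed attempts only
| summarize FailedLogonCount = count() by Type, ResultType, bin(TimeGenerated, 1h)
| order by TimeGenerated asc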
Scheduled Analytics Rule not triggering...

Hello, I am trying to trigger the following KQL query in a custom scheduled analytics rule. It is to identify ANY Global Administrator and verify whether they have committed any sign-in activity over the last 24 hours. A simple test is to get a Global Administrator to sign in within the last 24 hours. The query triggers and returns records when run in the Logs pane, but I have noticed that when activated in a custom scheduled analytics rule, it fails to return records! The time range set for the analytics rule (query frequency and lookback duration) aligns properly with the query logic and any log ingestion delay:

Query scheduling
- Run query every: 1 day
- Lookup data from the last: 1 day

The funny thing is, when testing the KQL query in the analytics rule's "Set rule logic" / "View query results", if the FIRST ATTEMPT returns no results (in the simulation), after repeatedly testing (clicking the test link), it DOES return records! Why is there a time lag? How can I ensure the query triggers correctly, returning records and the related incidents? This is the KQL query:

let PrivilegedRoles = dynamic(["Global Administrator"]);
let PrivilegedIdentities = IdentityInfo
    | summarize arg_max(TimeGenerated, *) by AccountObjectId
    | mv-expand AssignedRoles
    | where AssignedRoles in~ (PrivilegedRoles)
    | extend lc_AccountUPN = tolower(AccountUPN)
    | summarize AssignedRoles=make_set(AssignedRoles) by AccountObjectId, AccountSID, lc_AccountUPN, AccountDisplayName, JobTitle, Department;
SigninLogs
| where TimeGenerated > ago(1d)
| extend lc_UserPrincipalName = tolower(UserPrincipalName)
| join kind=inner PrivilegedIdentities on $left.lc_UserPrincipalName == $right.lc_AccountUPN
| project TimeGenerated, AccountDisplayName, AccountObjectId, lc_AccountUPN, lc_UserPrincipalName, AppDisplayName, ResultType, ResultDescription, IPAddress, LocationDetails
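One likely cause, offered as a hypothesis rather than a confirmed diagnosis: in a scheduled rule, the "Lookup data from the last" window bounds every table the query reads, not just SigninLogs. IdentityInfo is a periodically synchronized snapshot table (full re-syncs happen roughly every 14 days), so with a 1-day lookback the PrivilegedIdentities branch can come up empty depending on when the last snapshot was written - which would also explain why repeated runs in the test pane behave inconsistently. A sketch of a workaround: raise the rule's lookback to 14 days, window each table explicitly, and keep the sign-in detection at 1 day:

let PrivilegedRoles = dynamic(["Global Administrator"]);
let PrivilegedIdentities = IdentityInfo
    | where TimeGenerated > ago(14d) // explicit window for the identity snapshots
    | summarize arg_max(TimeGenerated, *) by AccountObjectId
    | mv-expand AssignedRoles
    | where AssignedRoles in~ (PrivilegedRoles)
    | extend lc_AccountUPN = tolower(AccountUPN)
    | summarize AssignedRoles=make_set(AssignedRoles) by AccountObjectId, lc_AccountUPN, AccountDisplayName;
SigninLogs
| where TimeGenerated > ago(1d) // detection still keys off the last 24 hours of sign-ins
| extend lc_UserPrincipalName = tolower(UserPrincipalName)
| join kind=inner PrivilegedIdentities on $left.lc_UserPrincipalName == $right.lc_AccountUPN
| project TimeGenerated, AccountDisplayName, lc_AccountUPN, AppDisplayName, ResultType, IPAddress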