Latest Discussions
Dependency Agent Alternatives
Hello. The retirement notice for the Azure Dependency Agent (https://learn.microsoft.com/en-us/azure/azure-monitor/vm/vminsights-maps-retirement) recommends selecting an Azure Marketplace product as a replacement but is not specific about what product(s) offer similar functionality. Would appreciate more specific guidance and experiences from the wider community. Thanks.

Cory_Matieyshen, Jul 30, 2025
Recent Logic Apps Failures with Defender ATP Steps – "TimeGenerated" No Longer Recognized
Hi everyone, I've recently encountered an issue with Logic Apps failing on Defender ATP steps. Requests containing the TimeGenerated parameter no longer work; the column seems to be unrecognized. My code hasn't changed at all, and the same queries run successfully in Defender 365's Advanced Hunting. For example, this basic KQL query:

DeviceLogonEvents
| where TimeGenerated >= ago(30d)
| where LogonType != "Local"
| where DeviceName !contains ".fr"
| where DeviceName !contains "shared-"
| where DeviceName !contains "gdc-"
| where DeviceName !contains "mon-"
| distinct DeviceName

now throws the error: "Failed to resolve column or scalar expression named 'TimeGenerated'. Fix semantic errors in your query." Removing TimeGenerated makes the query work again, but this isn't a viable solution. Notably, the identical query still functions in Defender 365's Advanced Hunting UI. This issue started affecting a Logic App that runs weekly: it worked on May 11th but failed on May 18th.

Questions:
- Has there been a recent schema change or deprecation of TimeGenerated in Defender ATP's KQL for Logic Apps?
- Is there an alternative column or syntax we should use now?
- Are others experiencing this?

Any insights or workarounds would be greatly appreciated!
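One workaround worth trying, sketched here under the assumption that the Logic App connector now enforces the native advanced hunting schema (where the event time column is named Timestamp, with TimeGenerated being the Log Analytics/Sentinel name for the same field), is to swap the column name:

DeviceLogonEvents
| where Timestamp >= ago(30d)   // Timestamp is the advanced hunting name for the event time column
| where LogonType != "Local"
| where DeviceName !contains ".fr"
| where DeviceName !contains "shared-"
| where DeviceName !contains "gdc-"
| where DeviceName !contains "mon-"
| distinct DeviceName

If that resolves the error, it would point at a schema-enforcement change in the connector rather than anything in the Logic App itself.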
AKS Pod resource utilization (CPU/Memory) alert
Hi All, I am trying to set up an alert for AKS pod CPU/Memory utilization that fires when utilization exceeds a certain threshold (let's say >95%). Sample query for CPU utilization:

let cpuusage = materialize(Perf
| where ObjectName == 'K8SContainer'
| where CounterName has 'cpuUsageNanoCores'
| extend ContainerNameParts = split(InstanceName, '/')
| extend ContainerNamePartCount = array_length(ContainerNameParts)
| extend PodUIDIndex = ContainerNamePartCount - 2, ContainerNameIndex = ContainerNamePartCount - 1
| extend ContainerName = strcat(ContainerNameParts[PodUIDIndex], '/', ContainerNameParts[ContainerNameIndex])
| summarize AggregatedValue = max(CounterValue) by bin(TimeGenerated, 15m), ContainerName
| project TimeGenerated, ContainerName, AggregatedValue
| join kind = inner (
    KubePodInventory
    | summarize arg_max(TimeGenerated, *) by ContainerName
    | project Name, ContainerName, Namespace, ServiceName
) on ContainerName
| project TimeGenerated, Name, ServiceName, ContainerName, Namespace, CPU_mCores_Usage = AggregatedValue / 1000000);
let cpurequest = materialize(Perf
| where ObjectName == 'K8SContainer'
| where CounterName == 'cpuRequestNanoCores'
| extend ContainerNameParts = split(InstanceName, '/')
| extend ContainerNamePartCount = array_length(ContainerNameParts)
| extend PodUIDIndex = ContainerNamePartCount - 2, ContainerNameIndex = ContainerNamePartCount - 1
| extend ContainerName = strcat(ContainerNameParts[PodUIDIndex], '/', ContainerNameParts[ContainerNameIndex])
| project ContainerName, CounterValue
| join kind = inner (
    KubePodInventory
    //| summarize arg_max(TimeGenerated, 24h) by ContainerName, Name, Namespace
    | project Name, ContainerName, Namespace
) on ContainerName
| project Name, Namespace, ContainerName, CpuReq_in_mcores = (CounterValue / 1000000));
let cpulimits = materialize(Perf
| where ObjectName == 'K8SContainer'
| where CounterName == 'cpuLimitNanoCores'
| extend ContainerNameParts = split(InstanceName, '/')
| extend ContainerNamePartCount = array_length(ContainerNameParts)
| extend PodUIDIndex = ContainerNamePartCount - 2, ContainerNameIndex = ContainerNamePartCount - 1
| extend ContainerName = strcat(ContainerNameParts[PodUIDIndex], '/', ContainerNameParts[ContainerNameIndex])
| extend CpuNanoCoreLimit = CounterValue
| project ContainerName, CpuNanoCoreLimit
| join kind = inner (
    KubePodInventory
    | summarize arg_max(TimeGenerated, *) by ContainerName
    | project Name, ContainerName, Namespace, ServiceName
) on ContainerName
| project Name, ServiceName, Namespace, ContainerName, CPU_mCores_Limit = CpuNanoCoreLimit / 1000000);
cpulimits
| join cpurequest on ContainerName
| join cpuusage on ContainerName
| order by Namespace asc, ContainerName asc
| extend CName = split(ContainerName, '/')
| extend PodName = Name
| extend Cpu_Perct_utilization = round((CPU_mCores_Usage / CPU_mCores_Limit) * 100, 2)
| project TimeGenerated, Namespace, ServiceName, PodName, CPU_mCores_Usage, CPU_mCores_Limit, CpuReq_in_mcores, Cpu_Perct_utilization
| sort by TimeGenerated desc

I would like to modify the query slightly so that the alert only fires when utilization stays above the threshold for three consecutive evaluations within 30 minutes (with the evaluation frequency set to 10 minutes). Please advise.

Ashok42470, May 13, 2025
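For the "only alert after three consecutive breaches within 30 minutes" part, one hedged approach is to wrap the utilization query above in a saved workspace function (given the hypothetical name PodCpuUtilization here) and have the scheduled alert rule, set to a 10-minute frequency and 30-minute lookback, count how many 10-minute bins exceeded the threshold:

// Sketch only. PodCpuUtilization is a hypothetical saved function wrapping the full query above;
// it is assumed to return TimeGenerated, Namespace, PodName and Cpu_Perct_utilization.
PodCpuUtilization
| where TimeGenerated > ago(30m)
| summarize BreachedBins = dcountif(bin(TimeGenerated, 10m), Cpu_Perct_utilization > 95)
    by Namespace, PodName
| where BreachedBins >= 3   // over the threshold in all three 10-minute evaluation windows

Note that the usage block above bins at 15 minutes; for this pattern the bin in the cpuusage query would need to drop to 10 minutes or smaller so that every evaluation window actually contains a sample, and the alert rule should fire whenever the query returns rows.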
Getting empty response while running a kql query using rest api
Hello All, I am trying to run a KQL query from PowerShell via the REST API by passing an Azure Entra app ID and secret key, but we are getting an empty response. The Log Analytics Reader role is assigned on the LA workspace and I am able to retrieve an access token. When I run the KQL query manually, I see results. Below is the sample snippet I used; not sure what is wrong with it. Any help would be highly appreciated.

$tenantId = <Tenant id>
$clientId = <azure entra application app id>
$clientSecret = <app secret key>

# Log Analytics Workspace details
$workspaceId = <workspace ID>

# Acquire a token
$body = @{
    client_id     = $clientId
    scope         = "https://api.loganalytics.io/.default"
    client_secret = $clientSecret
    grant_type    = "client_credentials"
}
$query = "AppRequests | limit 10"
$uri = "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token"
$response = Invoke-RestMethod -Uri $uri -Method Post -ContentType "application/x-www-form-urlencoded" -Body $body
$accessToken = $response.access_token

# Define the Log Analytics REST API endpoint
$baseUri = "https://api.loganalytics.io/v1/workspaces/$workspaceId/query"

# Set headers for the query
$headers = @{
    Authorization  = "Bearer $accessToken"
    "Content-Type" = "application/json"
}

# Prepare the request body
$requestbody = @{ query = $query } | ConvertTo-Json

# Send the request
$response = Invoke-RestMethod -Uri $baseUri -Method Post -Headers $headers -Body $requestbody -Debug

# Display the results
# (The query API returns results nested under 'tables', each with 'columns' and 'rows',
#  so the raw object can look empty in the console; the actual rows are one level down.)
$response
$response.tables[0].rows

Ashok42470, Mar 07, 2025
How to Monitor New Management Group Creation and Deletion
I am writing this post to show how to monitor new management group creation and deletion using Azure Activity Logs and trigger an incident in Microsoft Sentinel. You can also use the same steps to monitor subscription creation. By default, diagnostic settings at the management group level are not enabled, and they cannot be enabled using Azure Policy or from the portal interface. Use the article below to enable the management group diagnostic settings:

Management Group Diagnostic Settings - Create Or Update - REST API (Azure Monitor) | Microsoft Learn

Below is the request body to use if you would like to forward the logs only to the Log Analytics workspace where Sentinel is enabled. Also make sure you enable the diagnostic settings at the tenant management group level to track all changes in your tenant.

{
  "properties": {
    "workspaceId": "<< replace with workspace resource ID>>",
    "logs": [
      { "category": "Administrative", "enabled": true },
      { "category": "Policy", "enabled": true }
    ]
  }
}

Once you have enabled the diagnostic settings, you can use the KQL queries below to monitor management group creation and deletion from the Azure Activity logs.

// KQL query to identify when a management group is deleted
AzureActivity
| where OperationNameValue == "MICROSOFT.MANAGEMENT/MANAGEMENTGROUPS/DELETE"
| where ActivityStatusValue == "Success"
| extend mg = split(tostring(Properties_d.entity), "/")
| project TimeGenerated,
    activityStatusValue_ = tostring(Properties_d.activityStatusValue),
    Managementgroup = mg[4],
    message_ = tostring(parse_json(Properties).message),
    caller_ = tostring(Properties_d.caller)

// KQL query to identify when a management group is created
AzureActivity
| where OperationNameValue == "MICROSOFT.MANAGEMENT/MANAGEMENTGROUPS/WRITE"
| where ActivityStatusValue == "Success"
| extend mg = split(tostring(Properties_d.entity), "/")
| project TimeGenerated,
    activityStatusValue_ = tostring(Properties_d.activityStatusValue),
    Managementgroup = mg[4],
    message_ = tostring(parse_json(Properties).message),
    caller_ = tostring(Properties_d.caller)

This log can also be used to monitor new subscription creation, using the query below:

AzureActivity
| where OperationNameValue == "Microsoft.Management" and ActivityStatusValue == "Succeeded" and isnotempty(SubscriptionId)

If you need to trigger an incident in Sentinel, use the queries above in a custom scheduled analytics rule and create an alert.

Note: Enabling this diagnostic setting at the management group level will also be inherited by the subscriptions downstream for the selected categories.

hemanthselva, Feb 20, 2025
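If you prefer a single scheduled analytics rule covering both events, the two queries above can be folded into one; this is only a sketch derived from those queries, with the Action label added for readability:

// Combined sketch: management group create and delete events in one query
AzureActivity
| where OperationNameValue in (
    "MICROSOFT.MANAGEMENT/MANAGEMENTGROUPS/WRITE",
    "MICROSOFT.MANAGEMENT/MANAGEMENTGROUPS/DELETE")
| where ActivityStatusValue == "Success"
| extend mg = split(tostring(Properties_d.entity), "/")
| extend Action = iff(OperationNameValue endswith "DELETE", "Deleted", "Created")
| project TimeGenerated, Action, Managementgroup = mg[4],
    caller_ = tostring(Properties_d.caller),
    message_ = tostring(parse_json(Properties).message)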
Need assistance on KQL query for pulling AKS Pod logs
I am trying to pull historical pod logs using the KQL query below. It looks like the join between the ContainerLog and KubePodInventory tables didn't go well, as I see a lot of duplicates in the output.

ContainerLog
//| project TimeGenerated, ContainerID, LogEntry
| join kind = inner (
    KubePodInventory
    | where ServiceName == "<<servicename>>"
) on ContainerID
| project TimeGenerated, Namespace, ContainerID, ServiceName, LogEntrySource, LogEntry, Name1
| sort by TimeGenerated asc

Can someone suggest a better query?

Ashok42470, Feb 20, 2025
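Assuming the duplicates come from KubePodInventory holding many inventory snapshots per container (so each log line matches several inventory rows), one hedged fix is to collapse the inventory side to a single row per ContainerID before joining:

ContainerLog
| join kind = inner (
    KubePodInventory
    | where ServiceName == "<<servicename>>"
    // Keep only the latest inventory snapshot per container so each log line matches once
    | summarize arg_max(TimeGenerated, *) by ContainerID
    | project ContainerID, Namespace, ServiceName, PodName = Name
) on ContainerID
| project TimeGenerated, Namespace, ContainerID, ServiceName, LogEntrySource, LogEntry, PodName
| sort by TimeGenerated asc

The arg_max pattern mirrors the one used in the AKS utilization query earlier in this list; renaming Name to PodName in the inner query also avoids the Name1 column suffix from the join.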
Sentinel Incident Priority Mapping to SIR
Hi, we are working on implementing the SIR module within our ServiceNow platform. We have five levels of priority within SIR (Critical, High, Moderate, Low, Planning), whereas Sentinel has only four severity levels (Informational, Low, Medium, High). Interested to know how other organizations have handled and mapped these priorities. Thanks in advance.

AmiShinu, Feb 05, 2025
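One possible mapping, purely as an illustration and not a recommendation: High to High (reserving Critical for selected high-impact incidents), Medium to Moderate, Low to Low, Informational to Planning. A KQL sketch over the SecurityIncident table, with the mapping values as assumptions to adjust before any export to ServiceNow:

SecurityIncident
| summarize arg_max(LastModifiedTime, *) by IncidentNumber   // latest state of each incident
| extend SIR_Priority = case(
    Severity == "High", "High",            // promote to "Critical" for chosen high-impact cases
    Severity == "Medium", "Moderate",
    Severity == "Low", "Low",
    Severity == "Informational", "Planning",
    "Moderate")                            // fallback for anything unexpected
| project IncidentNumber, Title, Severity, SIR_Priority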
Multiple Failed SignIn Events
I have a user for whom, within the last week, I am consistently seeing more than 100 failed login events from an authorized device. Something seems to be running in the background, as the log entries arrive at 2-3 minute intervals. The error message reads: "Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access the resource." The applications that show as interrupted are:
- Office Online Maker SSO
- Office Online Core SSO
- Office365 Shell WCSS-Client
- SharePoint Online Web Client Extensibility -> Microsoft Graph

Does anyone have any insights into addressing this issue? Thanks.

AmiShinu, Jan 30, 2025
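To confirm whether a background client is retrying on a cadence rather than the user signing in manually, a hedged SigninLogs sketch can chart the pattern per application. The result code 50076 is assumed here to correspond to the MFA-interrupt message quoted above, and the UPN is a hypothetical placeholder; verify both against your own logs first:

SigninLogs
| where TimeGenerated > ago(7d)
| where UserPrincipalName == "user@contoso.com"   // hypothetical UPN, replace with the affected user
| where ResultType == "50076"                     // assumed code for the MFA-interrupt message; adjust if your logs show a different value
| summarize Attempts = count() by bin(TimeGenerated, 1h), AppDisplayName, ClientAppUsed
| order by TimeGenerated asc

A very regular cadence tied to one ClientAppUsed value would point at a cached session or background sync client rather than interactive sign-ins.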
Sentinel Threat Intelligence Detection Rule
I'm working on connecting various threat intelligence TAXII feeds to our Sentinel platform. Does anyone have suggestions on the kinds of KQL detection rules we can build around these TAXII feeds? Most of them come with IPs, URLs, domains, and hash values. Thanks in advance.

AmiShinu, Jan 29, 2025
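A common starting point is a scheduled rule that matches active indicators from the ThreatIntelligenceIndicator table against a traffic or sign-in source. The sketch below matches IP indicators against SigninLogs and is only illustrative; the source table, lookback windows, and confidence filtering will depend on what you actually ingest:

ThreatIntelligenceIndicator
| where TimeGenerated > ago(14d)
| where Active == true and ExpirationDateTime > now()
| where isnotempty(NetworkIP)
| summarize arg_max(TimeGenerated, *) by IndicatorId   // latest version of each indicator
| join kind = inner (
    SigninLogs
    | where TimeGenerated > ago(1d)
    | extend NetworkIP = IPAddress                      // align column names for the join
) on NetworkIP
| project SigninTime = TimeGenerated1, UserPrincipalName, IPAddress, Description, ConfidenceScore, ThreatType

Similar rules can be built for the other indicator types by swapping NetworkIP for DomainName, Url, or FileHashValue and joining against DNS, proxy, or device event tables.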
Are you getting the most out of your Azure Log Analytics Workspace (LAW) investment?
Using a LAW is a great way to consolidate various types of data (performance, events, security, etc.) and signals from multiple sources. That's the easy part; mining this data for actionable insights is often the real challenge. One way we did this was by surfacing events related to disks across our physical server estate. We were already sending event data to our LAW; it was just a matter of parsing it with KQL and adding it to a Power BI dashboard for additional visibility. The snippet from the Power BI dashboard shows when the alert was first triggered and when the disk was eventually replaced. Here's the KQL query we came up with:

let start_time = ago(30d);
let end_time = now();
Event
| where TimeGenerated > start_time and TimeGenerated < end_time
| where EventLog contains 'System'
| where Source contains 'Storage Agents'
| where RenderedDescription contains 'Drive Array Physical Drive Status Change'
| parse kind=relaxed RenderedDescription with * 'Drive Array Physical Drive Status Change. The ' Drive ' with serial number ""' Serial '"", has a new status of ' Status '. (Drive status values:' *
| project Computer, Drive, Serial, Status, TimeGenerated, EventLevelName

You can of course set up alerting with Alerts for Azure Monitor. I hope this example helps you get more value from your LAW.

Adeelaziz, Jan 17, 2025
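As a follow-on to the alerting suggestion, a minimal log-alert sketch based on the same parse, assuming any status other than "OK" should raise an alert (adjust that label to whatever your hardware actually reports):

let start_time = ago(1h);
Event
| where TimeGenerated > start_time
| where EventLog contains 'System'
| where Source contains 'Storage Agents'
| where RenderedDescription contains 'Drive Array Physical Drive Status Change'
| parse kind=relaxed RenderedDescription with * 'Drive Array Physical Drive Status Change. The ' Drive ' with serial number ""' Serial '"", has a new status of ' Status '. (Drive status values:' *
| where Status !has "OK"                        // assumed healthy status label; adjust to your hardware's wording
| summarize FailingDrives = dcount(Serial) by Computer

Wired into a scheduled alert rule that fires whenever the query returns rows, this covers the Azure Monitor alerting step mentioned above.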