Log Analytics
Recent Logic Apps Failures with Defender ATP Steps – "TimeGenerated" No Longer Recognized
Hi everyone, I've recently encountered an issue with Logic Apps failing on Defender ATP steps. Requests containing the TimeGenerated column no longer work; the column seems to be unrecognized. My code hasn't changed at all, and the same queries run successfully in Defender 365's Advanced Hunting. For example, this basic KQL query:

DeviceLogonEvents
| where TimeGenerated >= ago(30d)
| where LogonType != "Local"
| where DeviceName !contains ".fr"
| where DeviceName !contains "shared-"
| where DeviceName !contains "gdc-"
| where DeviceName !contains "mon-"
| distinct DeviceName

now throws the error "Failed to resolve column or scalar expression named 'TimeGenerated'. Fix semantic errors in your query." Removing TimeGenerated makes the query work again, but this isn't a viable solution. Notably, the identical query still functions in Defender 365's Advanced Hunting UI. The issue started affecting a Logic App that runs weekly: it worked on May 11th but failed on May 18th.

Questions:
- Has there been a recent schema change or deprecation of TimeGenerated in Defender ATP's KQL for Logic Apps?
- Is there an alternative column or syntax we should use now?
- Are others experiencing this?

Any insights or workarounds would be greatly appreciated!
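One workaround worth trying, assuming the Logic Apps Defender connector now resolves columns against the native Advanced Hunting schema (where the event-time column is named Timestamp, TimeGenerated being the Log Analytics/Sentinel name), is to filter on Timestamp instead. A minimal sketch:

DeviceLogonEvents
// Timestamp is the native Advanced Hunting event-time column; switch back to TimeGenerated if the connector accepts it again
| where Timestamp >= ago(30d)
| where LogonType != "Local"
| where DeviceName !contains ".fr"
| where DeviceName !contains "shared-"
| where DeviceName !contains "gdc-"
| where DeviceName !contains "mon-"
| distinct DeviceName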
Need assistance on KQL query for pulling AKS Pod logs
I am trying to pull historical pod logs using the KQL query below. It looks like joining the ContainerLog and KubePodInventory tables didn't go well, as I see a lot of duplicates in the output.

ContainerLog
//| project TimeGenerated, ContainerID, LogEntry
| join kind=inner (
    KubePodInventory
    | where ServiceName == "<<servicename>>"
) on ContainerID
| project TimeGenerated, Namespace, ContainerID, ServiceName, LogEntrySource, LogEntry, Name1
| sort by TimeGenerated asc

Can someone suggest a better query?
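The duplicates usually come from KubePodInventory recording a snapshot row per container at regular intervals, so each log line matches many inventory rows in the join. A minimal sketch of one way to avoid that fan-out, keeping the <<servicename>> placeholder from the original query, is to deduplicate the inventory side before joining:

ContainerLog
| join kind=inner (
    KubePodInventory
    | where ServiceName == "<<servicename>>"
    // keep one row per container so the join no longer multiplies log lines
    | distinct ContainerID, Namespace, ServiceName
) on ContainerID
| project TimeGenerated, Namespace, ContainerID, ServiceName, LogEntrySource, LogEntry
| sort by TimeGenerated asc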
How to Monitor New Management Group Creation and Deletion.
I am writing this post to show how to monitor new management group creation and deletion using Azure Activity Logs and trigger an incident in Microsoft Sentinel. You can also use the same approach to monitor subscription creation. By default, diagnostic settings at the management group level are not enabled, and they cannot be enabled using Azure Policy or from the portal interface. Use the article below to enable management group diagnostic settings:

Management Group Diagnostic Settings - Create Or Update - REST API (Azure Monitor) | Microsoft Learn

Below is the request body to use if you would like to forward the logs only to the Log Analytics workspace where Sentinel is enabled. Also make sure you enable the diagnostic settings at the tenant root management group level to track all changes in your tenant.

{
  "properties": {
    "workspaceId": "<< replace with workspace resource ID>>",
    "logs": [
      {
        "category": "Administrative",
        "enabled": true
      },
      {
        "category": "Policy",
        "enabled": true
      }
    ]
  }
}

Once you have enabled the diagnostic settings, you can use the KQL queries below to monitor management group creation and deletion from the Azure Activity Logs.

// KQL query to identify whether a management group was deleted
AzureActivity
| where OperationNameValue == "MICROSOFT.MANAGEMENT/MANAGEMENTGROUPS/DELETE"
| where ActivityStatusValue == "Success"
| extend mg = split(tostring(Properties_d.entity), "/")
| project TimeGenerated, activityStatusValue_ = tostring(Properties_d.activityStatusValue), Managementgroup = mg[4], message_ = tostring(parse_json(Properties).message), caller_ = tostring(Properties_d.caller)

// KQL query to identify whether a management group was created
AzureActivity
| where OperationNameValue == "MICROSOFT.MANAGEMENT/MANAGEMENTGROUPS/WRITE"
| where ActivityStatusValue == "Success"
| extend mg = split(tostring(Properties_d.entity), "/")
| project TimeGenerated, activityStatusValue_ = tostring(Properties_d.activityStatusValue), Managementgroup = mg[4], message_ = tostring(parse_json(Properties).message), caller_ = tostring(Properties_d.caller)

The same log can also be used to monitor new subscription creation, using the query below:

AzureActivity
| where OperationNameValue == "Microsoft.Management" and ActivityStatusValue == "Succeeded" and isnotempty(SubscriptionId)

If you need to trigger an incident in Sentinel, use the queries above in a custom scheduled analytics rule and create the alert.

Note: Enabling this on the management group diagnostic settings will also be inherited by the subscriptions downstream for the selected categories.
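If you would rather run a single scheduled analytics rule for both events, a minimal sketch that merges the two queries above (reusing the same Properties_d fields) could look like this:

// Combined query: surface both management group create and delete operations
AzureActivity
| where OperationNameValue in ("MICROSOFT.MANAGEMENT/MANAGEMENTGROUPS/WRITE", "MICROSOFT.MANAGEMENT/MANAGEMENTGROUPS/DELETE")
| where ActivityStatusValue == "Success"
| extend mg = split(tostring(Properties_d.entity), "/")
| project TimeGenerated, Operation = OperationNameValue, Managementgroup = mg[4], caller_ = tostring(Properties_d.caller), message_ = tostring(parse_json(Properties).message)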
Are you getting the most out of your Azure Log Analytics Workspace (LAW) investment?
Using a LAW is a great way to consolidate various types of data (performance, events, security, etc.) and signals from multiple sources. That's the easy part - mining this data for actionable insights is often the real challenge. One way we did this was by surfacing events related to disks across our physical server estate. We were already sending event data to our LAW; it was just a matter of parsing it with KQL and adding it to a Power BI dashboard for additional visibility. The snippet from the Power BI dashboard shows when the alert was first triggered and when the disk was eventually replaced. Here's the KQL query we came up with:

let start_time=ago(30d);
let end_time=now();
Event
| where TimeGenerated > start_time and TimeGenerated < end_time
| where EventLog contains 'System'
| where Source contains 'Storage Agents'
| where RenderedDescription contains 'Drive Array Physical Drive Status Change'
| parse kind=relaxed RenderedDescription with * 'Drive Array Physical Drive Status Change. The ' Drive ' with serial number ""' Serial '"", has a new status of ' Status '. (Drive status values:'*
| project Computer, Drive, Serial, Status, TimeGenerated, EventLevelName

You can of course set up alerting with Alerts for Azure Monitor. I hope this example helps you get more value from your LAW.
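Building on the alerting note, here is a minimal sketch of a query that could back a log search alert rule, reusing the same parse logic; the Status !contains "OK" filter is an assumption about how healthy drives report their status and may need adjusting for your agents.

// Possible alert query: raise when a drive reports a non-OK status in the last hour
Event
| where TimeGenerated > ago(1h)
| where EventLog contains 'System'
| where Source contains 'Storage Agents'
| where RenderedDescription contains 'Drive Array Physical Drive Status Change'
| parse kind=relaxed RenderedDescription with * 'Drive Array Physical Drive Status Change. The ' Drive ' with serial number ""' Serial '"", has a new status of ' Status '. (Drive status values:'*
| where Status !contains "OK"
| summarize FailingDrives = count() by Computer, Drive, Serial, Status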
Effective Cloud Governance: Leveraging Azure Activity Logs with Power BI
We all generally accept that governance in the cloud is a continuous journey, not a destination. There's no one-size-fits-all solution and, depending on the size of your Azure cloud estate, staying on top of things can be challenging even at the best of times. One way of keeping your finger on the pulse is to closely monitor your Azure Activity Log. This log contains a wealth of information ranging from noise to interesting to actionable data. One could set up alerts for delete and update signals; however, that can result in a flood of notifications. To address this challenge, you could develop a Power BI report, similar to this one, that pulls in the Azure Activity Log and allows you to group and summarize data by various dimensions. You still need someone to review the report regularly; however, consuming the data this way makes it a whole lot easier. This by no means replaces the need for setting up alerts for key signals, but it does give you a great view of what has happened in your environment. If you're interested, this is the KQL query I'm using in Power BI:

let start_time = ago(24h);
let end_time = now();
AzureActivity
| where TimeGenerated > start_time and TimeGenerated < end_time
| where OperationNameValue contains 'WRITE' or OperationNameValue contains 'DELETE'
| project TimeGenerated, Properties_d.resource, ResourceGroup, OperationNameValue, Authorization_d.scope, Authorization_d.action, Caller, CallerIpAddress, ActivityStatusValue
| order by TimeGenerated asc
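As one example of the grouping mentioned above, a minimal sketch that summarizes the same write and delete operations by caller and resource group, which tends to map well onto a Power BI visual:

// Summarize write/delete activity by caller, operation and resource group over the same window
let start_time = ago(24h);
let end_time = now();
AzureActivity
| where TimeGenerated > start_time and TimeGenerated < end_time
| where OperationNameValue contains 'WRITE' or OperationNameValue contains 'DELETE'
| summarize Operations = count() by Caller, OperationNameValue, ResourceGroup
| order by Operations desc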
Can I use regex in a DCR custom text logfile filepath?
Hi, I have about 50 servers attached to a DCR to collect a custom text log into a Log Analytics workspace custom table. Is it possible, or does anyone have experience with using a regex filepath in this DCR situation? The logs are in the same format, but the paths differ slightly on each server. There are two structures, and they include the server names, so we have 50 different filepaths:

App Server: c:\appserver\logs\<server Fully Qualified Name>\server\*.log
App Portal: c:\appportal\logs\<server Fully Qualified Name>\portal\*.log

When I use static paths it works (there's a limit of 20, by the way). I have tried using the following regex filepath, and nothing comes in:

c:\app(server|portal)\logs\SYS[a-zA-Z0-9]{4}wm[0-9]{2}.domain.net\(server|portal)\*.log

Can someone confirm whether I can use a regex in the filepath pattern in the DCR Data Source Text log setup? If so, how do I get it to work, please? Am I missing some escapes somewhere? Many thanks in advance.
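For what it's worth, a fallback sketch in case regex isn't honored there: to the best of my knowledge the text log data source accepts the * wildcard in file patterns, so two wildcard patterns derived from the paths above, one per structure, might cover all servers without listing each name (whether a wildcard is allowed in that directory position is an assumption to verify):

c:\appserver\logs\*\server\*.log
c:\appportal\logs\*\portal\*.log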
SNMP Polling of Network Devices Using Azure Platform
Hi All, I am looking for SNMP polling capability in the Azure platform so that I can do network device fault monitoring. We are currently using a third-party application for fault monitoring. Any suggestions would be highly appreciated. The alerts we currently monitor with the third-party application include threshold utilization, high error rate, device down/unresponsive, BGP session status, etc. I haven't found an option for SNMP polling in Azure Monitor. Thanks, Neeraj Mohan
Important Update: Azure Automation Update Management and Log Analytics Agent Retirement
Attention Azure users! This is a critical notice regarding the retirement of two key services: Azure Automation Update Management and the Log Analytics agent. Both will be discontinued on August 31, 2024. To ensure uninterrupted update management for your virtual machines, migrating to Azure Update Manager is essential before the retirement date.

Why the Change?
Microsoft is streamlining its update management offerings by focusing on Azure Update Manager, a robust solution with several advantages. These include:
- Simplified onboarding: Azure Update Manager leverages existing Azure features for effortless integration.
- Enhanced control: Granular access controls allow for precise management of update deployment.
- Flexible automation: Automatic patching capabilities streamline the update process.

Taking Action: Migrate to Azure Update Manager
To avoid disruptions after August 31st, migrating to Azure Update Manager is necessary. Microsoft provides a comprehensive guide to facilitate this transition:

Move from Automation Update Management to Azure Update Manager
https://learn.microsoft.com/en-us/azure/automation/update-management/overview

This guide details the migration process, ensuring a smooth transfer to the new platform. Don't wait! Begin the migration process today to ensure your virtual machines receive updates seamlessly after the retirement of Azure Automation Update Management and the Log Analytics agent.
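As a starting point for the migration, here is a minimal sketch of a query to inventory which machines are still heartbeating from the legacy Log Analytics agent versus the Azure Monitor agent; it assumes the Heartbeat table's Category column distinguishes the two (for example "Direct Agent" versus "Azure Monitor Agent").

// List each machine's most recent heartbeat, grouped by agent category
Heartbeat
| where TimeGenerated > ago(24h)
| summarize LastHeartbeat = max(TimeGenerated) by Computer, Category
| order by Category asc, Computer asc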
Not receiving Error, only Warnings from 1 out of 6 VM
Hi. I have 6 VMs on Azure, all with the Azure Monitor Agent installed. In the Data Collection Rule all 6 machines are selected, and I've set it up to collect Errors and Warnings from the Event Viewer into my Azure Log Analytics workspace. This works, and I get Errors and Warnings from all machines except one. From that machine I for some reason only get Warnings but not Errors, despite there being errors if I log into the machine and check the event log. I've checked the various machines and they all seem to have exactly the same settings. Can anyone give some hints as to why this might be, or ideas on how to troubleshoot?
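As a first troubleshooting step, a minimal sketch of a query against the Event table to compare which event levels are actually arriving from each machine over the last day, which should confirm whether the gap is on the ingestion side:

// Count collected events per computer and severity level
Event
| where TimeGenerated > ago(1d)
| summarize Events = count() by Computer, EventLevelName
| order by Computer asc, EventLevelName asc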