azure log analytics
Azure Diagnostic data cannot be processed by Azure Stream Analytics due to InputDeserializerError
Planning to stream Azure resource (Front Door) diagnostic logs to Azure Stream Analytics. However, I'm having trouble with this one, as data specifically from AzureDiagnostics fails to be deserialized as input for the Stream Analytics job. Error:

Error while deserializing input message Id: Partition: [0], Offset: [3663944], SequenceNumber: [285]. Hit following error: Column name: ErrorInfo is already being used. Please ensure that column names are unique (case insensitive) and do not differ only by whitespaces.

It's caused by duplicate columns, errorInfo and ErrorInfo, on the AzureDiagnostics table, and I'm unsure what distinguishes them when observing their values. Any thoughts or solutions on how we could simplify or transform these diagnostic logs to remove this duplicate column before it gets ingested by the Stream Analytics job? I initially thought of the following solutions, but they aren't straightforward and would probably cost more, so I'd like to hear others' thoughts as well.

1. Transformation using a DCR. I believe this is ideal for sending diagnostic logs to a Log Analytics workspace, but it would mean the diagnostic logs have to pass through the workspace and then be exported to Stream Analytics, which may require adding more components to the data pipeline (a rough sketch of such a transformation follows below).
2. Logic App. I saw somewhere that a scheduled Logic App is used to export data from a Log Analytics workspace using a KQL query and send it to storage; the destination would have to be changed to an event hub instead. Yet again, too many layers just to pass the data on to ASA.

Any other solution you can suggest for refining the incoming data to ASA while minimizing the use of compute resources?
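For option 1, the transformation itself would likely be small. A minimal sketch of the transformKql expression for a workspace transformation DCR, assuming one of the colliding columns can simply be discarded (column names are taken from the error message above; verify the casing against the actual stream, since KQL column names are case sensitive):

source
| project-away ErrorInfo

If both columns sometimes carry distinct values, coalescing them first (for example, extend ErrorDetail = coalesce(ErrorInfo, errorInfo) before the project-away) would preserve the data instead of dropping one copy.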
Azure VMs host (platform) metrics (not guest metrics) to the log analytics workspace?

Hi Team, can someone help me with how to send Azure VM host (platform) metrics (not guest metrics) to a Log Analytics workspace? Some years ago I used to do this by clicking on "Diagnostic Settings", but now when I go to the "Diagnostic Settings" tab it asks me to enable guest-level monitoring (I don't want guest-level metrics) and points to a storage account. I don't see the option to send these metrics to a Log Analytics workspace. I have around 500 Azure VMs whose host (platform) metrics (not guest metrics) I want to send to the workspace.
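For reference, when platform metrics are routed to a workspace via a diagnostic setting, they typically land in the AzureMetrics table. A minimal sketch for checking what arrived, assuming that table and a standard host counter such as Percentage CPU:

AzureMetrics
| where ResourceProvider == "MICROSOFT.COMPUTE"
| where MetricName == "Percentage CPU"
| summarize avg(Average) by Resource, bin(TimeGenerated, 5m)
| order by TimeGenerated desc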
Logic Flow name in Azure Log Analytics

dependencies
| where type == "Ajax"
| where success == "False"
| where name has "logicflows"
| project timestamp, name, resultCode, duration, type, target, data, operation_Name, appName
| order by timestamp desc

This KQL query in Azure Application Insights / Azure Log Analytics is used to get errors for logic flows. It returns the data, but I cannot see the logic flow name or ID anywhere. Is there any way to fetch the logic flow ID? The Application Insights resource is registered for a Power App, where we are using Power Automate flows to call APIs. We need the flow's name in analytics. I tried looking through the data; there is no field for the logic flow's name or ID, though when viewed under Users > Sessions, the name shows up in requestHeaders.
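If the flow ID is embedded in the request URL (as the "logicflows" segment suggests), it may be possible to pull it out with extract. A rough sketch, assuming the target column contains a path like .../logicflows/<guid>/...:

dependencies
| where type == "Ajax" and success == "False" and name has "logicflows"
| extend flowId = extract(@"logicflows/([0-9a-fA-F-]{36})", 1, target)
| project timestamp, flowId, name, resultCode, duration, target
| order by timestamp desc

Mapping that GUID back to a display name would still require a lookup against the flow definitions, since the telemetry itself apparently doesn't carry the name.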
Dependency Agent Alternatives

Hello. The retirement notice for the Azure Dependency Agent (https://learn.microsoft.com/en-us/azure/azure-monitor/vm/vminsights-maps-retirement) recommends selecting an Azure Marketplace product as a replacement, but is not specific about which products offer similar functionality. I would appreciate more specific guidance and experiences from the wider community. Thanks.
Recent Logic Apps Failures with Defender ATP Steps – "TimeGenerated" No Longer Recognized

Hi everyone, I've recently encountered an issue with Logic Apps failing on Defender ATP steps. Requests containing the TimeGenerated parameter no longer work; the column seems to be unrecognized. My code hasn't changed at all, and the same queries run successfully in Defender 365's Advanced Hunting. For example, this basic KQL query:

DeviceLogonEvents
| where TimeGenerated >= ago(30d)
| where LogonType != "Local"
| where DeviceName !contains ".fr"
| where DeviceName !contains "shared-"
| where DeviceName !contains "gdc-"
| where DeviceName !contains "mon-"
| distinct DeviceName

now throws the error: Failed to resolve column or scalar expression named 'TimeGenerated'. Fix semantic errors in your query. Removing TimeGenerated makes the query work again, but this isn't a viable solution. Notably, the identical query still functions in Defender 365's Advanced Hunting UI. This issue started affecting a Logic App that runs weekly; it worked on May 11th but failed on May 18th. Questions:

Has there been a recent schema change or deprecation of TimeGenerated in Defender ATP's KQL for Logic Apps?
Is there an alternative column or syntax we should use now?
Are others experiencing this?

Any insights or workarounds would be greatly appreciated!
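One workaround worth trying: the native Advanced Hunting schema names its event-time column Timestamp rather than TimeGenerated (the latter comes from the Log Analytics / Sentinel side), so the connector may now be resolving queries against the native schema only. A sketch of the same query with the column swapped, on that assumption:

DeviceLogonEvents
| where Timestamp >= ago(30d)
| where LogonType != "Local"
| where DeviceName !contains ".fr"
| where DeviceName !contains "shared-"
| where DeviceName !contains "gdc-"
| where DeviceName !contains "mon-"
| distinct DeviceName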
Need assistance on KQL query for pulling AKS Pod logs

I am trying to pull historical pod logs using the KQL query below. It looks like the join between the ContainerLog and KubePodInventory tables didn't go well, as I see a lot of duplicates in the output.

ContainerLog
//| project TimeGenerated, ContainerID, LogEntry
| join kind=inner (
    KubePodInventory
    | where ServiceName == "<<servicename>>"
) on ContainerID
| project TimeGenerated, Namespace, ContainerID, ServiceName, LogEntrySource, LogEntry, Name1
| sort by TimeGenerated asc

Can someone suggest a better query?
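The duplicates most likely come from KubePodInventory holding one inventory row per pod per collection interval, so each log line matches many inventory rows. Reducing the right side of the join to a single row per ContainerID should fix it; a sketch under that assumption:

ContainerLog
| join kind=inner (
    KubePodInventory
    | where ServiceName == "<<servicename>>"
    | summarize arg_max(TimeGenerated, *) by ContainerID
    | project ContainerID, Namespace, ServiceName, PodName = Name
) on ContainerID
| project TimeGenerated, Namespace, ContainerID, ServiceName, LogEntrySource, LogEntry, PodName
| sort by TimeGenerated asc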
Are you getting the most out of your Azure Log Analytics Workspace (LAW) investment?

Using a LAW is a great way to consolidate various types of data (performance, events, security, etc.) and signals from multiple sources. That's the easy part; mining this data for actionable insights is often the real challenge. One way we did this was by surfacing events related to disks across our physical server estate. We were already sending event data to our LAW; it was just a matter of parsing it with KQL and adding it to a Power BI dashboard for additional visibility. The snippet from the Power BI dashboard shows when the alert was first triggered and when the disk was eventually replaced. Here's the KQL query we came up with.

let start_time = ago(30d);
let end_time = now();
Event
| where TimeGenerated > start_time and TimeGenerated < end_time
| where EventLog contains 'System'
| where Source contains 'Storage Agents'
| where RenderedDescription contains 'Drive Array Physical Drive Status Change'
| parse kind=relaxed RenderedDescription with * 'Drive Array Physical Drive Status Change. The ' Drive ' with serial number ""' Serial '"", has a new status of ' Status '. (Drive status values:'*
| project Computer, Drive, Serial, Status, TimeGenerated, EventLevelName

You can of course set up alerting with Alerts for Azure Monitor. I hope this example helps you get more value from your LAW.
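If you want to alert on this directly, a sketch of a log alert query built on the same parse (assuming healthy drives report a status of OK in these events):

Event
| where EventLog contains 'System' and Source contains 'Storage Agents'
| where RenderedDescription contains 'Drive Array Physical Drive Status Change'
| parse kind=relaxed RenderedDescription with * 'has a new status of ' Status '. (Drive status values:'*
| where Status !~ 'OK'
| summarize FailingDrives = count() by Computer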
Azure Monitor AMA Migration helper workbook question for subscriptions with AKS clusters

Hi, in an ongoing project I've been looking into helping a customer update their agents from the Microsoft Monitoring Agent (MMA) to the new Azure Monitor Agent (AMA), which consolidates the previous Log Analytics agent, Telegraf agent, and diagnostics extension (sending to Azure Event Hubs, Storage, etc.) into a single installation, and then configure Data Collection Rules (DCRs) to collect data using the new agent. One of the first steps is of course to identify which resources are affected and need to be migrated. There are multiple tools for identifying them, such as this PowerShell script as well as the built-in AMA Migration workbook in Azure Monitor, which is what I used as the initial option at the start of the AMA migration process. When run, the workbook lists all VMs, VMSSs, etc. in the subscription that do not yet have the AMA agent installed (e.g., through an Azure Policy or automatically by having configured a DCR), or that still have the old MMA installed and thus need to be migrated.

Azure Kubernetes Service (AKS) is a rather specific hosting service, almost a mini-ecosystem of its own in regard to networking, storage, scaling, etc. It exposes access to and control of the underlying infrastructure composing the cluster, created by AKS and its control plane, giving IT administrators, power users, etc. potentially fine-grained control of these resources. In most typical use cases, however, the underlying AKS infrastructure resources should not be modified, as that could break configured SLOs.

When running the built-in AMA migration workbook, it by default includes all resources that do not already have AMA installed, no matter the resource type, including the cluster infrastructure resources created by AKS in the "MC_" resource group(s), such as the virtual machine scale sets handling the creation and scaling of nodes and node pools. Perhaps the underlying AKS infrastructure resources could be excluded from the workbook's results by default, or, where non-migrated AKS infrastructure resources are found, accompanied by text describing potential remediation steps for AMA agent migration on AKS cluster infrastructure.

Has anyone encountered the same issue, and if so, how did you work around it? Would be great to hear some input, and whether there are already solutions or workarounds out there (if not, I've been thinking of making a PR with a filter and exclusion added to the default workbook, e.g., here: https://github.com/microsoft/AzureMonitorCommunity/tree/master/Azure%20Services/Azure%20Monitor/Agents/Migration%20Tools/Migration%20Helper%20Workbook). Thanks!
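As a stopgap outside the workbook, Azure Resource Graph (which also uses KQL) can produce the candidate list with the AKS-managed resource groups filtered out. A minimal sketch, assuming the default "MC_" naming convention for node resource groups:

resources
| where type in~ ("microsoft.compute/virtualmachines", "microsoft.compute/virtualmachinescalesets")
| where resourceGroup !startswith "mc_"
| project name, type, resourceGroup, subscriptionId

Note that clusters created with a custom node resource group name would slip through this filter.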
Behavior when Batch Send Failed

Hi all, I am looking to send messages in batches to both Log Analytics and Event Hubs. My solution requires that the sent batches be all-or-none, meaning either all messages are sent successfully, or all messages are dropped in case of failure. Could you please clarify how Log Analytics and Event Hubs handle failures during batch sends?