Metrics
15 Topics

Azure Monitor AMA Migration helper workbook question for subscriptions with AKS clusters
Hi, in an ongoing project I've been helping a customer update their agents from the Microsoft Monitoring Agent (MMA) to the new Azure Monitor Agent (AMA), which consolidates the installation of the previous Log Analytics agent, Telegraf agent, diagnostics extension, etc. (for Azure Event Hubs, Storage, and so on), and then configure Data Collection Rules (DCRs) to collect data using the new agent. One of the first steps is of course to identify which resources are affected and need to be migrated. There are multiple tools for this, such as this PowerShell script as well as the built-in AMA Migration workbook in Azure Monitor, which is what I used as the initial option at the start of the AMA migration process. When run, the workbook lists all VMs, VMSSs, etc. in the subscription that do not yet have the AMA installed (e.g., through an Azure Policy or automatically by having configured a DCR), or that still have the old MMA installed, and thus need to be migrated.

Azure Kubernetes Service (AKS) is a rather specific hosting service, almost its own mini-ecosystem with regard to networking, storage, scaling, etc. It gives access to, and control over, the underlying infrastructure composing the cluster created by AKS and its master node, offering fine-grained control of these resources for IT administrators, power users, etc. In most typical use cases, however, the underlying AKS infrastructure resources should not be modified, as that could break configured SLOs. When running the built-in AMA migration workbook, it includes by default all resources that do not already have the AMA installed, no matter the resource type, including the underlying cluster infrastructure resources created by AKS in the "MC_" resource group(s), such as the virtual machine scale sets handling the creation and scaling of the nodes and node pools of an AKS cluster.
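In the meantime, one way I've been considering to triage the workbook output is an Azure Resource Graph query along these lines — a rough sketch only: the extension-name filter and the join are my own assumptions and would need validating, and VMSS agent state lives in the scale set's extension profile, so scale sets would need a separate check:

```kusto
// Sketch: VMs without an AMA extension, excluding AKS-managed "MC_" groups.
resources
| where type =~ "microsoft.compute/virtualmachines"
| where not(resourceGroup startswith "mc_")   // skip AKS-managed infrastructure
| extend vmId = tolower(id)
| join kind=leftouter (
    resources
    | where type =~ "microsoft.compute/virtualmachines/extensions"
    | where name has "AzureMonitor"           // AzureMonitorWindows/LinuxAgent
    | extend vmId = tolower(substring(id, 0, indexof(id, "/extensions/")))
    | project vmId, amaExtension = name
) on vmId
| where isempty(amaExtension)                 // keep only VMs missing AMA
| project name, resourceGroup, subscriptionId
```

A similar filter on resourceGroup could presumably also be added to the workbook's own queries.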
Perhaps the underlying AKS infrastructure resources could be excluded from the workbook's AMA migration results by default, or, when non-migrated AKS infrastructure resources are found, they could be accompanied by a text describing potential remediation steps for AMA migration of AKS cluster infrastructure resources. Has anyone encountered the same issue, and if so, how did you work around it? It would be great to hear some input, and whether there are readily available solutions/workarounds out there already (if not, I've been thinking of proposing a PR with a filter and exclusion added to the default workbook, e.g., here: https://github.com/microsoft/AzureMonitorCommunity/tree/master/Azure%20Services/Azure%20Monitor/Agents/Migration%20Tools/Migration%20Helper%20Workbook). Thanks!

Azure alert on multiple subscriptions
Hi all, I am not sure if this is the right place, but here goes. I am working on creating a monitoring solution and am trying to create some dynamic alert rules. I need them to look at a lot of subscriptions, but when you choose the scope, you can only choose one subscription. So I have exported the template and added another subscription in the scopes section, but will it work? This is what the properties section looks like in the template; it watches CPU usage over time:

```json
"properties": {
  "description": "Dynamic warning on CPU usage",
  "severity": 2,
  "enabled": true,
  "scopes": [
    "/subscriptions/Sub1",
    "/subscriptions/Sub2"
  ],
  "evaluationFrequency": "PT15M",
  "windowSize": "PT1H",
  "criteria": {
    "allOf": [
      {
        "alertSensitivity": "High",
        "failingPeriods": {
          "numberOfEvaluationPeriods": 4,
          "minFailingPeriodsToAlert": 4
        },
        "name": "Metric1",
        "metricNamespace": "microsoft.compute/virtualmachines",
        "metricName": "Percentage CPU",
        "operator": "GreaterOrLessThan",
        "timeAggregation": "Average",
        "criterionType": "DynamicThresholdCriterion"
      }
    ],
    "odata.type": "Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria"
  },
```

Any input is more than welcome 🙂 Regards, Jan L Dam

How to correctly measure Bytes Received/sec & Bytes Sent/sec
I would like to correctly measure, through Log Analytics and then in Grafana, the network traffic generated by one or more VMs. For test VMs I have enabled a data collection rule collecting data every 60s for the network interface counters "Bytes Received/sec" and "Bytes Sent/sec". VM insights metrics are also enabled. The queries I use in Log Analytics are:

```kusto
Perf
| where TimeGenerated between (datetime(2024-03-19) .. datetime(2024-03-20))
| where Computer == "***********"
| where ObjectName == "Network Interface" and CounterName == "Bytes Received/sec" and InstanceName == "Microsoft Hyper-V Network Adapter _2"
| summarize BytsSent = sum(CounterValue)/1073741824 by bin(TimeGenerated, 24h), CounterName
```

```kusto
InsightsMetrics
| where TimeGenerated between (datetime(2024-03-19) .. datetime(2024-03-20))
| where Origin == "vm.azm.ms"
| where Namespace == "Network" and Name == "ReadBytesPerSecond"
| where Computer == "******"
| extend NetworkInterface = tostring(todynamic(Tags)["vm.azm.ms/networkDeviceId"])
| summarize AggregatedValue = sum(Val) by bin(TimeGenerated, 1d), Computer, _ResourceId, NetworkInterface
```

The result from Perf is 0.32339 GB/day, while from InsightsMetrics it is 14.7931 GB/day. If I go to the network interface resource and view its metrics there, the data matches what the query returns in Log Analytics, so the Metric data appears correct. I have now shortened the data collection rule's sample period to 15s, hoping this will give more accurate results. Am I doing something wrong, or am I collecting the data the wrong way? I don't want to activate insights metrics for every VM; I only want to collect the data I'm interested in.

Seeking an understanding about migrating Application Insights to Log Analytics Workspace
Concerning the following: https://docs.microsoft.com/en-us/azure/azure-monitor/app/convert-classic-resource

I'm looking to understand more about the impacts of migrating existing Application Insights based logs to an Azure Monitor Log Analytics workspace resource. At my company we already have quite a few dashboards, queries, and alerts built against the existing App Insights solution, and I fear that the migration of logs will break all of these. For example, "customEvents" will become "AppEvents", which may break many things according to this: https://docs.microsoft.com/en-us/azure/azure-monitor/app/apm-tables. Am I understanding correctly? Can anyone advise an upgrade path that won't break things?

When you try to migrate an App Insights resource you get a message that says "Telemetry stored in both locations will be merged when making queries." Can anyone clarify this? Does it mean we can continue using the old saved queries and dashboards within App Insights, and that they will continue to function after the migration (but reading from the new workspace)?

Is it possible to start feeding logs to the workspace in parallel while still feeding them into the old App Insights solution? This would allow us to gradually convert our queries and dashboards to the new workspace without breaking them all in an instant, and then perform the full migration only once the updates are done.

Is there a way to identify all saved log queries and log usages in Azure? We have many teams and resources in use and may have unknown log usages we don't want to miss.

Thank you, Jonathan

How to redirect performance logs to another Azure Log Analytics workspace
Dear members, I am new to Azure Monitor / Log Analytics workspaces and I'm in the process of configuring them. Initially, we believed that having two LAWs would suffice for our business requirements, and we put significant effort into adjusting Azure Policy exclusions to make it work. However, we didn't succeed with that approach. After gaining a deeper understanding of LAWs, we decided to use a single LAW and have most of our resources report to it. To achieve this, we cleaned up the Azure policies and pointed the DCRs at this unified LAW.

The issue we currently face is that a specific group of VMs continues to send performance data to the outdated LAW, and we can't identify where to make the necessary changes. We have triple-checked all levels of the management groups, the Azure policies, and the remaining active DCRs, yet still no luck. All of these VMs have the AMA installed. Is there a need to update the AMA that we may be unaware of? We are running out of ideas on where to adjust the settings so that we can consolidate all the logs into this single LAW. We would greatly appreciate any suggestions or recommendations from forum members! Thank you in advance for any help! Sally

Added support for deep links to Azure Portal for Metrics from Grafana
We have enabled easy exploration of Azure Monitor metrics from Grafana into the Azure portal. With this feature, when a user clicks on the query result of metrics in Grafana, they will see a context menu with a "View in Azure Portal" link. Selecting it takes them to the corresponding chart in the Azure portal Metrics Explorer.

Azure Diagnostic data cannot be processed by Azure Stream Analytics due to InputDeserializerError
Planning to stream Azure resource (Front Door) diagnostic logs to Azure Stream Analytics. However, I'm having trouble with this one, as data specifically from AzureDiagnostics fails to get deserialized as input for the Stream Analytics job. Error:

Error while deserializing input message Id: Partition: [0], Offset: [3663944], SequenceNumber: [285]. Hit following error: Column name: ErrorInfo is already being used. Please ensure that column names are unique (case insensitive) and do not differ only by whitespaces.

It's caused by duplicate columns, errorInfo and ErrorInfo, on the AzureDiagnostics table; I am unsure what distinguishes them when observing their values. Any thoughts or solutions on how we could simplify or transform these diagnostic logs to remove this duplicate column before it is ingested by the Stream Analytics job? I have initially thought of the following solutions, but they aren't so straightforward and would probably cost more, and I would like to hear others' thoughts as well.

1. Transformation using a DCR. I believe this is ideal for sending diagnostic logs to a Log Analytics workspace, but it would mean the diagnostic logs have to pass through the workspace and then get exported to Stream Analytics, which may require adding more components into the data pipeline.

2. Logic App. I saw somewhere that a scheduled Logic App is used to export data from the Log Analytics workspace with a KQL query and send it to storage; the destination would have to be modified to an event hub instead. Yet again, too many layers just to pass the data on to ASA.

Any other solution you can suggest for refining the incoming data to ASA while minimizing the use of compute resources?

Alert Suppression
Hey there, I think what I'm looking for is alert suppression, but as available in Azure Monitor now it doesn't seem to do what I want. I have an event that shows up in the log and, once it gets started, it repeats a lot. What I want is to look for an event in the logs and send an alert when it first occurs. After that I only want an alert every hour, or every 4 hours, or something. I've always thought of this as a form of suppression, but I don't see a way to do it. Thoughts? TIA ~DGM~

Managing your data with Azure Managed Grafana integrations with Azure Monitor
Along with the announcement of Azure Managed Grafana, learn about new Grafana integrations with #Azure Monitor, including the ability to pin Azure Monitor visualizations from the Azure portal to Grafana dashboards, and new out-of-the-box Azure Monitor dashboards! https://techcommunity.microsoft.com/t5/azure-observability-blog/enabling-full-stack-observability-with-azure-monitor-and-grafana/ba-p/3287145

How have you been using Azure Managed Grafana to manage your app data and infrastructure? How do you plan to use this new integration?

Insights of Virtual Machine not showing all mount points
Hi, I have enabled Insights (in the Monitoring section of the VM) on the virtual machine, but the `Max Logical Disk Used %` graph in the Insights tab doesn't show all the mount points present in the VM. Can someone please help? Even the following query doesn't show all the mount points present in the VM:

```kusto
InsightsMetrics
| where Name == "FreeSpacePercentage"
| summarize arg_max(TimeGenerated, *) by Tags
| project TimeGenerated, Computer, Val, Tags
```

Thanks.
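For reference, one variation I have been experimenting with splits the mount point out of the Tags column so that each mount point shows up as its own row, rather than grouping by the raw Tags string. This is only a sketch; the "vm.azm.ms/mountId" tag name is taken from other InsightsMetrics examples and may need checking for your agent version:

```kusto
// Sketch: latest free-space sample per computer and mount point.
InsightsMetrics
| where Name == "FreeSpacePercentage"
| extend MountPoint = tostring(todynamic(Tags)["vm.azm.ms/mountId"])
| summarize arg_max(TimeGenerated, Val) by Computer, MountPoint
| project TimeGenerated, Computer, MountPoint, FreePercent = Val
```

If a mount point is missing even from the raw InsightsMetrics rows, the data is likely not being collected at all rather than being hidden by the query.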