Latest Discussions
Effective Cloud Governance: Leveraging Azure Activity Logs with Power BI
We all generally accept that governance in the cloud is a continuous journey, not a destination. There's no one-size-fits-all solution, and depending on the size of your Azure cloud estate, staying on top of things can be challenging even at the best of times. One way of keeping your finger on the pulse is to closely monitor your Azure Activity Log. This log contains a wealth of information, ranging from noise to interesting to actionable data. One could set up alerts for delete and update signals; however, that can result in a flood of notifications. To address this challenge, you could develop a Power BI report, similar to this one, that pulls in the Azure Activity Log and lets you group and summarize data by various dimensions. You still need someone to review the report regularly, but consuming the data this way makes it a whole lot easier. This by no means replaces the need to set up alerts for key signals; it does, however, give you a great view of what has happened in your environment. If you're interested, this is the KQL query I'm using in Power BI:

```kql
let start_time = ago(24h);
let end_time = now();
AzureActivity
| where TimeGenerated > start_time and TimeGenerated < end_time
| where OperationNameValue contains 'WRITE' or OperationNameValue contains 'DELETE'
| project TimeGenerated, Properties_d.resource, ResourceGroup, OperationNameValue,
    Authorization_d.scope, Authorization_d.action, Caller, CallerIpAddress, ActivityStatusValue
| order by TimeGenerated asc
```

Adeelaziz, Jan 17, 2025
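For the grouping and summarizing the post describes, a small variation on the same query can roll operations up by caller and resource group. This is an editorial sketch, not part of the original post:

```kql
AzureActivity
| where TimeGenerated > ago(24h)
| where OperationNameValue contains 'WRITE' or OperationNameValue contains 'DELETE'
// Count operations per caller and resource group as report dimensions
| summarize Operations = count() by Caller, OperationNameValue, ResourceGroup
| order by Operations desc
```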
How to correctly measure Bytes Received/sec & Bytes Sent/sec

I would like to correctly measure, through Log Analytics and then in Grafana, the network traffic generated by one or more VMs. For test VMs I have enabled a data collection rule that collects "Bytes Received/sec" and "Bytes Sent/sec" for the network interface every 60 s. Insights metrics are also enabled. The queries I use in Log Analytics are:

```kql
Perf
| where TimeGenerated between (datetime(2024-03-19) .. datetime(2024-03-20))
| where Computer == "***********"
| where ObjectName == "Network Interface" and CounterName == "Bytes Received/sec"
    and InstanceName == "Microsoft Hyper-V Network Adapter _2"
| summarize BytesReceived = sum(CounterValue) / 1073741824 by bin(TimeGenerated, 24h), CounterName
```

```kql
InsightsMetrics
| where TimeGenerated between (datetime(2024-03-19) .. datetime(2024-03-20))
| where Origin == "vm.azm.ms"
| where Namespace == "Network" and Name == "ReadBytesPerSecond"
| where Computer == "******"
| extend NetworkInterface = tostring(todynamic(Tags)["vm.azm.ms/networkDeviceId"])
| summarize AggregatedValue = sum(Val) by bin(TimeGenerated, 1d), Computer, _ResourceId, NetworkInterface
```

The result from Perf is 0.32339 GB/day, while InsightsMetrics gives 14.7931 GB/day. If I open the network interface in the portal and look at its metric data, it matches what the Insights metrics query returns. I have now shortened the sampling period of the data collection rule to 15 s, hoping this will give more accurate results. Am I doing something wrong, or am I collecting the data the wrong way? I don't want to activate Insights metrics for every VM; I only want to collect the data I'm interested in.

BlatniBPMCP, Mar 25, 2024
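A possible reason for the discrepancy, offered as an editorial note rather than something from the original post: "Bytes Received/sec" is a rate counter, so summing raw samples adds up per-second rates rather than bytes. Scaling the average rate by the number of seconds in the bin gives an estimate of the actual volume. A minimal sketch, assuming the 60 s samples cover the interval evenly:

```kql
Perf
| where TimeGenerated > ago(1d)
| where ObjectName == "Network Interface" and CounterName == "Bytes Received/sec"
// Average the per-second rate, then scale by 86,400 seconds per day
| summarize AvgBytesPerSec = avg(CounterValue) by bin(TimeGenerated, 1d), Computer, InstanceName
| extend EstimatedGBPerDay = AvgBytesPerSec * 86400 / 1073741824.0
```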
Azure Monitoring Agent virtual machines not connecting to Log Analytics workspace

Hey there, I tried to roll out monitoring for Azure virtual machines. For testing, I created a basic DCR to collect general performance counters from the associated VMs. The DCR is defined in Terraform as follows:

```hcl
resource "azurerm_monitor_data_collection_rule" "log" {
  name                = "test_rule"
  location            = azurerm_resource_group.test_group.location
  resource_group_name = azurerm_resource_group.test_group.name
  kind                = "Windows"

  destinations {
    log_analytics {
      workspace_resource_id = azurerm_log_analytics_workspace.default_workspace.id
      name                  = azurerm_log_analytics_workspace.default_workspace.name
    }
  }

  data_flow {
    streams      = ["Microsoft-Perf"]
    destinations = [azurerm_log_analytics_workspace.default_workspace.name]
  }

  data_sources {
    performance_counter {
      streams                       = ["Microsoft-Perf"]
      sampling_frequency_in_seconds = 60
      counter_specifiers = [
        "\\Processor Information(_Total)\\% Processor Time",
        "\\Processor Information(_Total)\\% Privileged Time",
        "\\Processor Information(_Total)\\% User Time",
        "\\Processor Information(_Total)\\Processor Frequency",
        "\\System\\Processes",
        "\\Process(_Total)\\Thread Count",
        "\\Process(_Total)\\Handle Count",
        "\\System\\System Up Time",
        "\\System\\Context Switches/sec",
        "\\System\\Processor Queue Length",
        "\\Memory\\% Committed Bytes In Use",
        "\\Memory\\Available Bytes",
        "\\Memory\\Committed Bytes",
        "\\Memory\\Cache Bytes",
        "\\Memory\\Pool Paged Bytes",
        "\\Memory\\Pool Nonpaged Bytes",
        "\\Memory\\Pages/sec",
        "\\Memory\\Page Faults/sec",
        "\\Process(_Total)\\Working Set",
        "\\Process(_Total)\\Working Set - Private",
        "\\LogicalDisk(_Total)\\% Disk Time",
        "\\LogicalDisk(_Total)\\% Disk Read Time",
        "\\LogicalDisk(_Total)\\% Disk Write Time",
        "\\LogicalDisk(_Total)\\% Idle Time",
        "\\LogicalDisk(_Total)\\Disk Bytes/sec",
        "\\LogicalDisk(_Total)\\Disk Read Bytes/sec",
        "\\LogicalDisk(_Total)\\Disk Write Bytes/sec",
        "\\LogicalDisk(_Total)\\Disk Transfers/sec",
        "\\LogicalDisk(_Total)\\Disk Reads/sec",
        "\\LogicalDisk(_Total)\\Disk Writes/sec",
        "\\LogicalDisk(_Total)\\Avg. Disk sec/Transfer",
        "\\LogicalDisk(_Total)\\Avg. Disk sec/Read",
        "\\LogicalDisk(_Total)\\Avg. Disk sec/Write",
        "\\LogicalDisk(_Total)\\Avg. Disk Queue Length",
        "\\LogicalDisk(_Total)\\Avg. Disk Read Queue Length",
        "\\LogicalDisk(_Total)\\Avg. Disk Write Queue Length",
        "\\LogicalDisk(_Total)\\% Free Space",
        "\\LogicalDisk(_Total)\\Free Megabytes",
        "\\Network Interface(*)\\Bytes Total/sec",
        "\\Network Interface(*)\\Bytes Sent/sec",
        "\\Network Interface(*)\\Bytes Received/sec",
        "\\Network Interface(*)\\Packets/sec",
        "\\Network Interface(*)\\Packets Sent/sec",
        "\\Network Interface(*)\\Packets Received/sec",
        "\\Network Interface(*)\\Packets Outbound Errors",
        "\\Network Interface(*)\\Packets Received Errors"
      ]
      name = "datasourceperfcounter"
    }
  }

  description = "General data collection rule for collecting Windows performance counters"
}
```

I also created the association between the DCR and my virtual machine using Terraform, Policy, and the portal. The Monitoring Agent and the identity are assigned properly in all cases, but the DCR association doesn't seem to work when enrolled via Terraform or Policy. For some reason, the Log Analytics workspace neither receives a heartbeat from the agent nor creates the tables for the performance counters. If I recreate the association between the DCR and the VM in those cases, it works again.
Is there an additional step required when setting up the data collection rule via Policy or Terraform, or is this a bug where some required event is not raised properly?

Hauke_l, Nov 13, 2023
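A quick way to confirm whether the agent is reporting at all after the association is created is to query the workspace for recent heartbeats. An editorial sketch; the Category value for the Azure Monitor Agent is an assumption:

```kql
Heartbeat
| where TimeGenerated > ago(1h)
// AMA heartbeats are expected to carry this category; legacy agents report differently
| where Category == "Azure Monitor Agent"
| summarize LastHeartbeat = max(TimeGenerated) by Computer
| order by LastHeartbeat desc
```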
How to organize workspace-based Application Insights resources

With the announcement that classic Application Insights resources must be migrated to workspace-based resources by February 29, 2024, I want to understand how to organize our instances. Right now we have 100+ Application Insights instances that need to be moved to workspaces. The three current proposals are:
1. Make a single Log Analytics workspace and simply move everything.
2. Make a Log Analytics workspace per environment (Dev, QA, Staging, Production) and move accordingly.
3. Make a Log Analytics workspace per environment and move accordingly.
Has anyone had experience with this effort? What would you suggest? What are the pros and cons of putting all of the instances into a single workspace? Thanks, Jake

jdriscoll1755, Sep 19, 2023
Legacy Linux MMA agent still sending data after primary key rotation

We have performed a primary/secondary key rotation for those servers not yet ready to move to AMA. We have noticed a few "on-prem" Linux syslog nodes still sending data even after the key was rotated, even though the key was not updated on the agent. Is this expected behaviour? Linux distro: Ubuntu; agent v1.14.23-0.

mikebaker26, Sep 06, 2023
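To inventory which nodes are still reporting through the legacy agent, a workspace query along these lines might help. An editorial sketch; the "Direct Agent" category is an assumption about how MMA heartbeats are labelled:

```kql
Heartbeat
| where TimeGenerated > ago(24h)
// Legacy MMA/OMS agents are assumed to report under this category
| where Category == "Direct Agent" and OSType == "Linux"
| summarize LastHeartbeat = max(TimeGenerated) by Computer, Version
```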
Azure Diagnostic data cannot be processed by Azure Stream Analytics due to InputDeserializerError

Planning to stream Azure resource (Front Door) diagnostic logs to Azure Stream Analytics. However, I'm having trouble: data from AzureDiagnostics fails to be deserialized as input for the Stream Analytics job. Error: "Error while deserializing input message Id: Partition: [0], Offset: [3663944], SequenceNumber: [285]. Hit following error: Column name: ErrorInfo is already being used. Please ensure that column names are unique (case insensitive) and do not differ only by whitespaces." It's caused by a pair of duplicate columns, errorInfo and ErrorInfo, on the AzureDiagnostics table; I'm unsure what distinguishes them when looking at their values. Do you have any thoughts on how we could simplify or transform these diagnostic logs, possibly removing the duplicate column before they're ingested by the Stream Analytics job? I initially thought of the following solutions, but they aren't straightforward and would probably cost more, so I'd like to hear other thoughts as well:
1. Transformation using a DCR. I believe this is ideal for sending diagnostic logs to a Log Analytics workspace, but it means the diagnostic logs would have to pass through the workspace and then be exported to Stream Analytics, which may require adding more components to the data pipeline.
2. Logic App. I saw somewhere that a scheduled Logic App can export data from a Log Analytics workspace using a KQL query and send it to storage; the destination would have to be changed to an event hub instead. Yet again, too many layers just to pass the data on to ASA.
Any other solution you can suggest for refining the incoming data to ASA while minimizing the use of compute resources?

AizaBC, Aug 02, 2023
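If the DCR route (option 1) were pursued, an ingestion-time transformation could drop one of the colliding columns before the data reaches the rest of the pipeline. A minimal sketch, assuming transformations are supported for the table and using the column name from the error message:

```kql
// DCR transformation: 'source' is the incoming record stream
source
| project-away ErrorInfo
```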
Azure Alert ITSM ServiceNow connector payload not appearing in ticket description

Hello, I'm trying to create ServiceNow tickets from an Azure alert rule in a Log Analytics workspace for machine learning job failures, using an ITSM-connector-based action group. Tickets are being generated in ServiceNow, but the payload passed is not appearing in the ticket description under the section <-- Log Entry -->, as shown in the screenshot below. I have gone through the documentation but couldn't find an exact reference for addressing this issue. It would be great if you could provide any suggestions or exact references in the documentation. Please let me know if any additional input is needed. Thanks & Regards, Siva Kumar

Siva_Kumar_menta, Jun 02, 2023
Hi, I have been following this guide: https://learn.microsoft.com/en-us/azure/iot-hub/iot-hub-distributed-tracing and have done everything. Messages are being sent with tracestates, but I am not receiving any logs in my container or Log Analytics workspace. I get logs for other things, like connections, but not distributed tracing logs. What could the issue be? Thanks
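To confirm whether any tracing records are reaching the workspace at all, a quick check against AzureDiagnostics might help. An editorial sketch; the provider and category values are assumptions based on the linked guide:

```kql
AzureDiagnostics
// IoT Hub distributed tracing logs are assumed to arrive under this category
| where ResourceProvider == "MICROSOFT.DEVICES" and Category == "DistributedTracing"
| take 10
```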
Log Analytics dropping logs if the application shuts down immediately after

In my Micronaut application, there is an error condition where I write the error to the logs and then force the application to shut down. The shutdown happens as expected, but I do not see the error log that was fired just before the shutdown. Is it possible it is lost? Is it related to latency? My understanding is that even with the latency, I should still see the error log.

JohnOldman, Apr 05, 2023
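One way to separate loss from latency is to measure how long records take to arrive: if the entry eventually shows up with a large delay, it's latency; if it never appears, it was likely dropped before the exporter flushed its buffer on shutdown. A sketch, assuming a workspace-based Application Insights resource where trace logs land in AppTraces:

```kql
AppTraces
| where TimeGenerated > ago(1d)
// ingestion_time() records when the row actually landed in the workspace
| extend IngestionDelay = ingestion_time() - TimeGenerated
| summarize AvgDelay = avg(IngestionDelay), MaxDelay = max(IngestionDelay)
```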
KQL Policy Definition ID to displayName and Description

I'm new to KQL, and I have a KQL query (CIS Benchmark). Among other things, the query returns the policyDefinitionId. Unfortunately, this is not readable. How do I do a join so I can retrieve the policy definition's displayName and description? Here is the query:

```kql
PolicyResources
| where type =~ 'Microsoft.PolicyInsights/PolicyStates'
    and properties.policyAssignmentId =~ '/providers/microsoft.management/managementgroups/xxx/providers/microsoft.authorization/policyassignments/8e0161c630a04095a6f38306'
| project subscriptionId, properties, id, resource_id = tolower(tostring(properties.resourceId))
| join kind=leftouter (
    resources
    | project resource_id = tolower(tostring(id)), resource_name = name
) on resource_id
| join kind=inner (
    resourcecontainers
    | where type == 'microsoft.resources/subscriptions'
    | project subscriptionId, subscription_contact = tostring(tags.resourcecontact), sbg = tostring(tags.sbg),
        management_group = tostring(properties.managementGroupAncestorsChain[0].displayName), subscription_name = name
) on subscriptionId
| project management_group, subscription_name, subscriptionId, subscription_contact,
    properties.complianceState, properties.policyDefinitionReferenceId, AssignmentID = tostring(id),
    properties.resourceType, InstanceID = tostring(properties.resourceId), resource_name
```

copperleaf, Feb 13, 2023
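Policy definitions are themselves exposed through Azure Resource Graph, so one possible approach is to build a lookup sub-query over the microsoft.authorization/policydefinitions type and join it on the lowercased definition ID. An editorial sketch, not a verified solution:

```kql
PolicyResources
| where type =~ 'Microsoft.PolicyInsights/PolicyStates'
| extend policy_definition_id = tolower(tostring(properties.policyDefinitionId))
| join kind=leftouter (
    PolicyResources
    | where type =~ 'microsoft.authorization/policydefinitions'
    // Display name and description live on the definition's properties bag
    | project policy_definition_id = tolower(id),
        policy_display_name = tostring(properties.displayName),
        policy_description = tostring(properties.description)
) on policy_definition_id
| project policy_definition_id, policy_display_name, policy_description,
    complianceState = tostring(properties.complianceState)
```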