Latest Discussions
How to organize workspace-based Application Insights resources
With the announcement that classic Application Insights resources must be migrated to workspace-based resources by February 29, 2024, I want to understand how to organize our instances. Right now we have 100+ Application Insights instances that need to be moved to workspaces. The current proposals are:

1. Make a single Log Analytics workspace and simply move everything.
2. Make a Log Analytics workspace per environment (Dev, QA, Staging, Production) and move accordingly.

Has anyone had experience with this effort? What would you suggest? What are the pros and cons of putting all of the instances into a single workspace? Thanks, Jake

jdriscoll1755 · Mar 28, 2026 · Copper Contributor · 654 Views · 0 likes · 1 Comment

How to correctly measure Bytes Received/sec & Bytes Sent/sec
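On the workspace-organization question above: the trade-off between the proposals can be made concrete with a toy routing function (all workspace and app names below are hypothetical, not from the original post):

```python
# Sketch of proposal 2: route each Application Insights instance to a
# per-environment Log Analytics workspace. Names are illustrative only.
ENVIRONMENTS = {"dev", "qa", "staging", "prod"}

def target_workspace(app_name: str, environment: str) -> str:
    """Pick the destination workspace for one App Insights instance."""
    if environment not in ENVIRONMENTS:
        raise ValueError(f"unknown environment: {environment}")
    return f"law-{environment}"   # proposal 2: one workspace per environment
    # return "law-shared"         # proposal 1: everything in a single workspace

apps = [("webshop", "prod"), ("webshop", "dev"), ("billing", "prod")]
plan = {(app, env): target_workspace(app, env) for app, env in apps}
```

The general trade-off: a single workspace simplifies cross-resource queries and management but mixes access control, retention, and cost attribution for all environments; per-environment workspaces keep RBAC and retention settings separate at the cost of more resources to manage and cross-workspace queries when you need a global view.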
I would like to correctly measure, through Log Analytics and then in Grafana, the network traffic generated by one or more VMs. For the test VMs I have enabled a data collection rule that collects the network interface counters "Bytes Received/sec" and "Bytes Sent/sec" every 60 s. Insights metrics are also enabled. The queries I use in Log Analytics are:

```kusto
Perf
| where TimeGenerated between (datetime(2024-03-19) .. datetime(2024-03-20))
| where Computer == "***********"
| where ObjectName == "Network Interface" and CounterName == "Bytes Received/sec" and InstanceName == "Microsoft Hyper-V Network Adapter _2"
| summarize BytesReceived = sum(CounterValue)/1073741824 by bin(TimeGenerated, 24h), CounterName
```

```kusto
InsightsMetrics
| where TimeGenerated between (datetime(2024-03-19) .. datetime(2024-03-20))
| where Origin == "vm.azm.ms"
| where Namespace == "Network" and Name == "ReadBytesPerSecond"
| where Computer == "******"
| extend NetworkInterface = tostring(todynamic(Tags)["vm.azm.ms/networkDeviceId"])
| summarize AggregatedValue = sum(Val) by bin(TimeGenerated, 1d), Computer, _ResourceId, NetworkInterface
```

The result for Perf is 0.32339 GB/day and for InsightsMetrics it is 14.7931 GB/day. If I open the network interface resource and look at its metrics, the data matches what the InsightsMetrics query returns, so those values appear correct. I have now shortened the sampling period of the data collection rule to 15 s in the hope of getting more accurate results. Am I doing something wrong, or am I collecting the data the wrong way? I don't want to enable Insights metrics on every VM; I only want to collect the data I'm interested in.

BlatniBPMCP · Mar 25, 2026 · Copper Contributor · 459 Views · 0 likes · 1 Comment

Azure Monitoring Agent Virtual Machines not connecting to log analytics workspace
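On the Bytes Received/sec question above: one likely cause of the gap is that the Perf query sums rate samples ("Bytes Received/sec") without weighting them by the sampling interval. With 60 s sampling, each sample represents roughly 60 seconds of traffic, so the plain sum understates the daily volume by about a factor of 60 (and indeed 0.32339 × 60 ≈ 19.4 GB/day, the same order of magnitude as the InsightsMetrics figure). A sketch with illustrative numbers:

```python
# Why sum(CounterValue) understates traffic: "Bytes Received/sec" is a rate,
# so each sample must be weighted by the sampling interval to approximate a
# volume. All numbers below are illustrative, not from the original post.
GIB = 1024 ** 3

def estimated_gib_per_day(rate_samples_bps: list[float], interval_s: float) -> float:
    """Approximate total volume as sum(rate * interval)."""
    return sum(r * interval_s for r in rate_samples_bps) / GIB

# A flat 4 KiB/s rate sampled every 60 s for one day (1440 samples):
samples = [4096.0] * 1440
naive = sum(samples) / GIB                      # what the Perf query computes
weighted = estimated_gib_per_day(samples, 60)   # rate * interval
# At a 60 s interval, 'weighted' is exactly 60x 'naive'.
```

Multiplying `sum(CounterValue)` by the DCR's `sampling_frequency_in_seconds` inside the KQL `summarize` should bring the Perf result into line with the platform metrics; this is an interpretation of the numbers in the post, not a confirmed diagnosis.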
Hey there, I tried to roll out monitoring for Azure virtual machines. For testing I created a basic DCR to collect general performance counters from the associated VMs. The DCR is defined in Terraform as follows:

```hcl
resource "azurerm_monitor_data_collection_rule" "log" {
  name                = "test_rule"
  location            = azurerm_resource_group.test_group.location
  resource_group_name = azurerm_resource_group.test_group.name
  kind                = "Windows"

  destinations {
    log_analytics {
      workspace_resource_id = azurerm_log_analytics_workspace.default_workspace.id
      name                  = azurerm_log_analytics_workspace.default_workspace.name
    }
  }

  data_flow {
    streams      = ["Microsoft-Perf"]
    destinations = [azurerm_log_analytics_workspace.default_workspace.name]
  }

  data_sources {
    performance_counter {
      streams                       = ["Microsoft-Perf"]
      sampling_frequency_in_seconds = 60
      counter_specifiers = [
        "\\Processor Information(_Total)\\% Processor Time", "\\Processor Information(_Total)\\% Privileged Time",
        "\\Processor Information(_Total)\\% User Time", "\\Processor Information(_Total)\\Processor Frequency",
        "\\System\\Processes", "\\Process(_Total)\\Thread Count", "\\Process(_Total)\\Handle Count",
        "\\System\\System Up Time", "\\System\\Context Switches/sec", "\\System\\Processor Queue Length",
        "\\Memory\\% Committed Bytes In Use", "\\Memory\\Available Bytes", "\\Memory\\Committed Bytes",
        "\\Memory\\Cache Bytes", "\\Memory\\Pool Paged Bytes", "\\Memory\\Pool Nonpaged Bytes",
        "\\Memory\\Pages/sec", "\\Memory\\Page Faults/sec",
        "\\Process(_Total)\\Working Set", "\\Process(_Total)\\Working Set - Private",
        "\\LogicalDisk(_Total)\\% Disk Time", "\\LogicalDisk(_Total)\\% Disk Read Time",
        "\\LogicalDisk(_Total)\\% Disk Write Time", "\\LogicalDisk(_Total)\\% Idle Time",
        "\\LogicalDisk(_Total)\\Disk Bytes/sec", "\\LogicalDisk(_Total)\\Disk Read Bytes/sec",
        "\\LogicalDisk(_Total)\\Disk Write Bytes/sec", "\\LogicalDisk(_Total)\\Disk Transfers/sec",
        "\\LogicalDisk(_Total)\\Disk Reads/sec", "\\LogicalDisk(_Total)\\Disk Writes/sec",
        "\\LogicalDisk(_Total)\\Avg. Disk sec/Transfer", "\\LogicalDisk(_Total)\\Avg. Disk sec/Read",
        "\\LogicalDisk(_Total)\\Avg. Disk sec/Write", "\\LogicalDisk(_Total)\\Avg. Disk Queue Length",
        "\\LogicalDisk(_Total)\\Avg. Disk Read Queue Length", "\\LogicalDisk(_Total)\\Avg. Disk Write Queue Length",
        "\\LogicalDisk(_Total)\\% Free Space", "\\LogicalDisk(_Total)\\Free Megabytes",
        "\\Network Interface(*)\\Bytes Total/sec", "\\Network Interface(*)\\Bytes Sent/sec",
        "\\Network Interface(*)\\Bytes Received/sec", "\\Network Interface(*)\\Packets/sec",
        "\\Network Interface(*)\\Packets Sent/sec", "\\Network Interface(*)\\Packets Received/sec",
        "\\Network Interface(*)\\Packets Outbound Errors", "\\Network Interface(*)\\Packets Received Errors"
      ]
      name = "datasourceperfcounter"
    }
  }

  description = "General data collection rule for collecting Windows performance counters"
}
```

I also created the association between the DCR and my virtual machine, using either Terraform, policies, or the portal. The Monitor agent and identity are assigned properly in all cases. But the DCR association doesn't seem to work when enrolled via Terraform or policy: the Log Analytics workspace neither receives a heartbeat from the agent nor creates the tables for the performance counters. If I recreate the association between the DCR and the VM in those cases, it works again. Is there any additional step required when setting up the data collection rule via policies or Terraform, or is this a bug where some required event is not raised properly?

Hauke_l · Jan 28, 2026 · Copper Contributor · 948 Views · 0 likes · 1 Comment

Ingest Azure Monitor logs in Elasticsearch and visualize it on Kibana
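On the DCR association question above, a quick first diagnostic is to query the `Heartbeat` table to see whether the agent is reporting at all after each enrollment method. A sketch that only builds the KQL (running it requires the `azure-monitor-query` SDK, shown commented out; the workspace ID and VM name are placeholders):

```python
def heartbeat_query(computer: str, minutes: int = 30) -> str:
    """KQL that lists recent Azure Monitor Agent heartbeats for one VM."""
    return (
        "Heartbeat\n"
        f"| where TimeGenerated > ago({minutes}m)\n"
        f'| where Computer == "{computer}"\n'
        "| summarize LastBeat = max(TimeGenerated) by Computer, Category"
    )

# Running it (pip install azure-monitor-query azure-identity):
# from datetime import timedelta
# from azure.identity import DefaultAzureCredential
# from azure.monitor.query import LogsQueryClient
# client = LogsQueryClient(DefaultAzureCredential())
# client.query_workspace("<workspace-id>", heartbeat_query("my-vm"),
#                        timespan=timedelta(minutes=30))
print(heartbeat_query("my-vm"))
```

If Terraform- or policy-enrolled VMs show no heartbeat at all while portal-enrolled ones do, the problem is upstream of the DCR (agent/identity); if heartbeats arrive but Perf tables stay empty, the association itself is the suspect.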
Dear fellows, has anyone done this? If so, please share how. Thanks & regards

irshad · Jan 07, 2026 · Brass Contributor · 680 Views · 0 likes · 1 Comment

Failed to send, wrong host address or cannot reach address due to network issues
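On the Elasticsearch/Kibana question above: a common pattern is to export Azure Monitor logs to an Event Hub via diagnostic settings and ship them to Elasticsearch (for example with Logstash's `azure_event_hubs` input, or a small custom consumer). As a sketch of the last hop, here is how records could be formatted for the Elasticsearch `_bulk` API; the index name and fields are hypothetical:

```python
import json

def to_bulk_ndjson(records: list[dict], index: str) -> str:
    """Render records as an Elasticsearch _bulk payload:
    one action line followed by one source line per record."""
    lines = []
    for rec in records:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(rec))
    return "\n".join(lines) + "\n"   # _bulk requires a trailing newline

payload = to_bulk_ndjson(
    [{"TimeGenerated": "2026-01-01T00:00:00Z", "Level": "Info"}],
    index="azure-monitor-logs",
)
```

The payload would then be POSTed to `/_bulk` with content type `application/x-ndjson`; Kibana dashboards sit on top of the resulting index.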
Hi, we are using Application Insights agent 3.0.2. We created a custom availability test using Application Insights and PowerShell (ref: https://swimburger.net/blog/azure/run-availability-tests-using-powershell-and-azure-application-insights). It worked successfully for almost 1.5 years, but since 31st March data is no longer sent to Azure on the scheduled interval (5 m); data arrives only once every hour or two. The following entries appear in the Application Insights log file:

```
ERROR c.m.a.i.c.c.TransmissionNetworkOutput - Failed to send, wrong host address or cannot reach address due to network issues
java.net.UnknownHostException: dc.services.visualstudio.com
    at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
    at java.net.InetAddress$2.lookupAllHostAddr(Unknown Source)
    at java.net.InetAddress.getAddressesFromNameService(Unknown Source)
    at java.net.InetAddress.getAllByName0(Unknown Source)
    at java.net.InetAddress.getAllByName(Unknown Source)
    at java.net.InetAddress.getAllByName(Unknown Source)
    at org.apache.http.impl.conn.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:45)
    at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:112)
    at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:359)
    at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:381)
    at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:237)
    at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
    at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
    at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:111)
    at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108)
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
    at com.microsoft.applicationinsights.internal.channel.common.ApacheSender43.sendPostRequest(ApacheSender43.java:79)
    at com.microsoft.applicationinsights.internal.channel.common.TransmissionNetworkOutput.sendSync(TransmissionNetworkOutput.java:162)
    at com.microsoft.applicationinsights.internal.channel.common.ActiveTransmissionNetworkOutput$1.run(ActiveTransmissionNetworkOutput.java:80)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
2023-04-25 12:11:38.606+03 ERROR c.m.a.i.c.c.TransmissionFileSystemOutput - Persistent storage max capacity has been reached; currently at 10486324 bytes. Telemetry will be lost, please consider increasing the value of MaxTransmissionStorageFilesCapacityInMB property in the configuration file.
```

What needs to be done to fix this issue? Other than the availability tests, the dependency and exception logs are not affected.

Racheal2k · Jan 05, 2026 · Copper Contributor · 922 Views · 0 likes · 1 Comment

Openmetrics on Azure observability
Team, I am trying to connect a couple of OpenMetrics endpoints to Azure Monitor, although it does not seem to be supported out of the box. Am I overlooking something in the documentation? Is there an easy way to connect an OpenMetrics endpoint? It seems it might be possible if I deploy Azure Monitor managed service for Prometheus, but that is not clear.

jcandido345 · Jan 04, 2026 · Copper Contributor · 445 Views · 0 likes · 1 Comment

Pull out data from Event Hub to a Linux platform
We have a project to read some API responses and feed them into our database on a Linux platform. The aim of the whole project is to interact with an IoT device. The REST API has been developed on a third-party Azure platform. We can query the API from the Linux shell with a `curl -X POST ...` command, but to read the answer we need to go through an Azure Event Hub. This is not the most convenient way for us to get the info and feed it into our system. Is there any way to fetch this info directly on our Linux server?

Fabio-linux · Jan 03, 2026 · Copper Contributor · 568 Views · 0 likes · 1 Comment

How to setup a Log Analytics Workspace in a Fabric Notebook
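On the Event Hub question above: events can be read directly on a Linux box with the `azure-eventhub` Python SDK instead of going through the portal. A sketch, with the SDK calls commented out so the parsing helper stands alone; the connection string and hub name are placeholders, and the `{"records": [...]}` envelope is the shape Azure diagnostic exports typically use:

```python
import json

def parse_event_body(body_bytes: bytes) -> list[dict]:
    """Decode one Event Hub message body; Azure diagnostic exports wrap
    payloads in a {"records": [...]} envelope, other senders may not."""
    payload = json.loads(body_bytes.decode("utf-8"))
    return payload.get("records", [payload])

# Consuming with the SDK (pip install azure-eventhub):
# from azure.eventhub import EventHubConsumerClient
# client = EventHubConsumerClient.from_connection_string(
#     "<connection-string>", consumer_group="$Default", eventhub_name="<hub>")
# def on_event(partition_context, event):
#     for record in parse_event_body(event.body_as_str().encode("utf-8")):
#         print(record)                      # feed into your database here
#     partition_context.update_checkpoint(event)
# with client:
#     client.receive(on_event=on_event, starting_position="-1")

sample = b'{"records": [{"category": "JobLogs"}]}'
```

This polls the hub directly from your server, so no extra Azure-side component is needed beyond the Event Hub the third party already writes to.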
I am using azure-monitor-query to automate some logs we need, but I can't get the workspace query to work in the notebook with service principal credentials. The service principal has read access to the Log Analytics workspace.

```python
from datetime import datetime, timezone

import pandas as pd
from azure.core.exceptions import HttpResponseError
from azure.identity import ClientSecretCredential
from azure.monitor.query import LogsQueryClient, LogsQueryStatus

credential = ClientSecretCredential(
    tenant_id="tenant_id",
    client_id="client_id",
    client_secret="client_secret",
)
client = LogsQueryClient(credential)

log_workspace_id = "workspace_id"
query = """AppRequests | take 5"""
start_time = datetime(2021, 7, 2, tzinfo=timezone.utc)
end_time = datetime(2021, 7, 4, tzinfo=timezone.utc)

try:
    response = client.query_workspace(
        workspace_id=log_workspace_id,
        query=query,
        timespan=(start_time, end_time),
    )
    if response.status == LogsQueryStatus.PARTIAL:
        error = response.partial_error
        data = response.partial_data
        print(error)
    elif response.status == LogsQueryStatus.SUCCESS:
        data = response.tables
        for table in data:
            df = pd.DataFrame(data=table.rows, columns=table.columns)
            print(df)
except HttpResponseError as err:
    print("something fatal happened")
    print(err)
```

This prints:

```
something fatal happened
(PathNotFoundError) The requested path does not exist
Code: PathNotFoundError
Message: The requested path does not exist
```

cdreeetz · Jan 01, 2026 · Copper Contributor · 2K Views · 0 likes · 1 Comment

Azure Devops to Workbook visualization
Hi team, please help us integrate Azure DevOps pipeline job status, logs, and the relevant dashboard into an Azure Workbook view, rather than checking them in the Azure DevOps portal. Please share the procedure or a documentation link for performing this activity. Thanks!

Seshadrr · Dec 31, 2025 · Iron Contributor · 380 Views · 1 like · 1 Comment
Tags
- azure monitor (1,092 Topics)
- Azure Log Analytics (400 Topics)
- Query Language (247 Topics)
- Log Analytics (63 Topics)
- Custom Logs and Custom Fields (18 Topics)
- solutions (17 Topics)
- Metrics (15 Topics)
- workbooks (14 Topics)
- alerts (14 Topics)
- application insights (13 Topics)