performance
Calculating performance counters: memory and network usage as a percentage
Hi everyone,

Calculating memory and network usage as a percentage of the total resource is a classic problem for Windows performance counters. The issue is that the available counters are absolute numbers, but you don't necessarily know the total for these resources on a given machine. Has anyone come up with a clever way to do this through OMS and Log Analytics? For reference, here is the query I'm using to get the average of Bytes Total/sec:

Perf
| where ObjectName == "Network Interface" and CounterName == "Bytes Total/sec"
| where TimeGenerated > startofday(now()-22d) and TimeGenerated < endofday(now()-1d)
| summarize AggregatedValue = avg(CounterValue) by Computer, InstanceName, bin(TimeGenerated, 1hour)

(Solved)

Min and Max memory setup on SQL server
We are running a BizTalk 2010 environment which processes over 150,000 messages per day during peak business hours. We have a SQL Server instance running on a separate Windows Server 2008 R2 machine. For the past few weeks we have been experiencing intermittent outages, and the error logged on the BizTalk app server says "Login is from an untrusted domain...". Upon checking the logs on the database server, we see NETLOGON errors (unable to contact AD for credential validation) due to "Unavailable Memory". CPU and memory utilization are at 95-100% on the SQL server most of the time. The current settings on the SQL server are: 16 GB of RAM, with SQL Server min and max memory both set to 12 GB. ENTSSO is also running on the same SQL server. Please suggest any change in settings on the SQL server which could reduce the high utilization and free up some memory for background Windows processes and ENTSSO.

WVD Gateway Performance Issues?
Hey everyone,

We have been chugging along just fine with our WVD installation. All of our users are remote at this point, due to COVID-19, and everything had been fine. This morning, around 9:30, we started seeing performance issues with connections to the desktops. By performance issues I mean slow mouse movements, slow scrolling through a website or email, and the like. There are no indicators of performance issues on the WVD hosts themselves; disk, CPU, and memory are all fine. I have used Horizon View and XenDesktop pretty extensively in the past, and this feels like a scenario where bandwidth to the gateway/connection broker is saturated or is seeing high latency or packet loss. I would blame an individual's home internet connection or coffee-shop wifi, but it is affecting all of my users (including myself). I wish there were some way to see or manage the performance of the WVD connection broker, but I am not aware of any. Is anyone else seeing this? I did see the thread about connection drops, but I do not believe this is related.

Azure app service - http performance
Hi, as part of a project we have multiple app services running on Azure. These services communicate with each other via HTTP. When we started running load tests, we noticed surprisingly slow results even with what we consider a small number of requests. To isolate the performance issues, I created two small web apps, one using .NET and one using Node.js. Both apps expose a REST API that is called from our load tests running in VS Online. Each app calls a single Azure Function that does a simple Thread.Sleep(1000) and returns; no other processing is done. Trying both vertical and horizontal scaling, I got the following results (average response time in seconds):

Single B1 instance

  Requests   .NET   Node.js
  100        1.5    1.7
  300        3.1    6.6
  500        7.1    23.4

Single B3 instance

  Requests   .NET   Node.js
  100        1.7    1.5
  300        5.3    4.0
  500        6.5    10.2

3 B1 instances

  Requests   .NET   Node.js
  100        1.4    3.5
  300        4.6    4.9
  500        6.7    15.1

3 B3 instances

  Requests   .NET   Node.js
  100        1.4    3.5
  300        4.6    4.9
  500        7.5    7.2

Looking at these results, the choice of technology (.NET or Node.js) does not seem to have a big impact on performance. In general, however, the average response times seem very long, especially since there is no real processing or business logic in the applications themselves. Of course, 500 concurrent requests corresponds to a much larger number of real users on the platform, but it still seems like this should take far less time than the test results show. The configuration of both applications is completely default, so I am wondering whether there is anything we should tune to improve performance. This is an important issue for us because we have a hard requirement of response times under 3 seconds.

Thanks and best regards,
Juergen Ziegler
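One way to interpret numbers like the ones above: a synchronous Thread.Sleep(1000) pins a worker thread for a full second, so throughput is capped by the number of threads available, not by CPU. The sketch below is a back-of-the-envelope queueing model, not the actual App Service scheduler; the worker count passed in is a pure assumption, since the real thread-pool size isn't given in the post. It estimates the average response time when C concurrent requests compete for W workers that each take one second per request:

```python
import math


def expected_avg_response(concurrent_requests, workers, service_time_s=1.0):
    """Rough queueing estimate: with W workers each taking service_time_s
    per request, C concurrent requests are served in ceil(C / W) sequential
    waves; a request in wave i (1-based) sees a response time of
    i * service_time_s. Returns the average response time over all requests."""
    waves = math.ceil(concurrent_requests / workers)
    total = 0.0
    remaining = concurrent_requests
    for wave in range(1, waves + 1):
        batch = min(workers, remaining)  # requests served in this wave
        total += batch * wave * service_time_s
        remaining -= batch
    return total / concurrent_requests
```

Under this model, 500 one-second requests against 100 workers would average 3.0 s, and the observed ~7 s averages would correspond to only a few dozen effective workers, which would point at a small default thread pool (or the blocking sleep itself) as the bottleneck rather than the B1/B3 hosting tier. This is only a sketch for reasoning about the shape of the numbers, not a claim about the actual configuration.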