First published on MSDN on Oct 02, 2017
I've heard some customers complain that database performance degraded after moving from an on-premises server to Azure SQL, and they suspect network latency is the cause. Here is some advice for troubleshooting this scenario.
First of all, ensure that update statistics with fullscan has been run and that the query plans used on-premises and in Azure SQL are the same. If we are dealing with different query plans, then we are not comparing apples to apples.
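As a sketch of this step (dbo.YourTable is a placeholder for the table involved in the slow query):

```sql
-- Update statistics with a full scan on the table used by the slow query
UPDATE STATISTICS dbo.YourTable WITH FULLSCAN;

-- Capture the actual execution plan so it can be compared between
-- the on-premises server and Azure SQL
SET STATISTICS XML ON;
-- run the slow query here, then compare the returned showplan XML
SET STATISTICS XML OFF;
```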
Run select wait_type, * from sys.dm_exec_requests to double-check which resource the query is waiting for while it is running. If the slow performance is caused by network latency, you should see the query suspended on the ASYNC_NETWORK_IO wait type. To confirm it is stuck waiting on the network, you will need to run this DMV query repeatedly while the query executes.
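For example, you can poll the DMV from a second session every few seconds while the slow query runs:

```sql
-- Run repeatedly while the slow query is executing; if wait_type stays
-- on ASYNC_NETWORK_IO, the server is waiting for the client/network
SELECT session_id, status, wait_type, wait_time, last_wait_type
FROM sys.dm_exec_requests
WHERE session_id > 50;  -- skip system sessions
```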
Ensure your DTU consumption is not maxed out. You can run SELECT * FROM sys.dm_db_resource_stats or check the Azure portal.
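sys.dm_db_resource_stats keeps roughly the last hour of data, one row per 15-second interval, so a quick check might look like this; values close to 100 suggest the database is hitting its resource cap:

```sql
-- Peak resource usage over the retained window (approximately one hour)
SELECT MAX(avg_cpu_percent)       AS max_cpu,
       MAX(avg_data_io_percent)   AS max_data_io,
       MAX(avg_log_write_percent) AS max_log_write
FROM sys.dm_db_resource_stats;
```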
Use bcp to test whether it is a pure network issue or an application issue. You can find a bcp example below.
Change the network packet size in bcp, if possible, to see whether you benefit from it. In bcp, the parameter is -a. In the example below, I used a packet size of 32768.
Check whether it is possible to send concurrent requests at the same time to increase throughput.
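One way to sketch this with bcp on a Windows client (server name, credentials, table, and range predicates below are all placeholders) is to export non-overlapping ranges of the table in parallel:

```bat
REM Launch two bcp exports in parallel, each pulling a different id range
start bcp "select * from dbo.[tablename] where id between 1 and 1000000" queryout C:\BCP\part1.bcp -d dbname -U username -P password -S tcp:yourserver.database.windows.net -n -a 32768
start bcp "select * from dbo.[tablename] where id between 1000001 and 2000000" queryout C:\BCP\part2.bcp -d dbname -U username -P password -S tcp:yourserver.database.windows.net -n -a 32768
```

If the combined throughput of the parallel exports is noticeably higher than a single export, a single connection is not saturating the network, and the application may benefit from issuing concurrent requests.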
Test whether moving to a different data center helps, if that is an option.
Test whether moving the application server to Azure helps, if that is an option.
bcp example:
Sample syntax is: bcp "select top 10000000 * from dbo.[tablename]" queryout C:\BCP\test.bcp -d dbname -U username@servername -n -S tcp:dwsql01eastus2.database.windows.net -P password -a 32768 -o D:\BCP\Output.log
I have seen cases where bcp performs well but the application does not. In one such case, we eventually found that the slow transfer rate was caused by the application making many unnecessary connections to the database.
Hope it helps. Good luck!