Forum Discussion
Procradminator
Feb 12, 2025 · Copper Contributor
DNS lookup performance
Hello all
I've got this to do what I want but thought I'd run it past people who know more than me in the hope someone would be kind enough to advise on the following.
The intention is to run this every few minutes using task scheduler, I'll push to one or more machines with an RMM.
Questions.
- Is this an efficient and accurate way to do this?
- Are there any improvements anyone wants to suggest for the code?
- Am I re-inventing a wheel that I can get somewhere for free or low cost?
I'm waiting for the new version of GRC's DNS testing tool so this is a stopgap unless it works well enough.
TIA
# Define an array to store the DNS servers to be queried, with their FQDN and IP address
$dnsServers = @()
# Add 5 hosts with their FQDN and IP addresses
$dnsServers += [PSCustomObject]@{ FQDN = "OurDNS1"; IPAddress = "14.15.16.17" }
$dnsServers += [PSCustomObject]@{ FQDN = "OurDNS2"; IPAddress = "11.12.13.14" }
$dnsServers += [PSCustomObject]@{ FQDN = "Cloudflare"; IPAddress = "1.1.1.1" }
$dnsServers += [PSCustomObject]@{ FQDN = "Quad9"; IPAddress = "9.9.9.9" }
$dnsServers += [PSCustomObject]@{ FQDN = "Google"; IPAddress = "8.8.4.4" }
# Define an array to store target FQDNs
$targetFqdns = @(
"bbc.co.uk",
"www.porsche.com",
"www.amazon.co.uk"
)
# Get the current date in yyyy-MM-dd format
$currentDate = Get-Date -Format "yyyy-MM-dd"
# Define the path to the CSV file with the current date in the filename
$filePath = "$PSScriptRoot\DNSResults_$currentDate.csv"
# Initialize the CSV file with headers if it doesn't exist
if (-not (Test-Path $filePath)) {
"Timestamp,Milliseconds,TargetURL,DNSServerIP,DNSServer" | Out-File -FilePath $filePath
}
# Loop through each target host and then each DNS server
foreach ($targetFqdn in $targetFqdns) {
foreach ($dnsServer in $dnsServers) {
# Measure the time taken to run the command
$measure = Measure-Command -Expression { nslookup $targetFqdn $($dnsServer.IPAddress) > $null 2>&1 }
# Get the current date and time in ISO 8601 format
$timestamp = Get-Date -Format "yyyy-MM-ddTHH:mm:ss"
# Get the total milliseconds and round up to a whole number
$milliseconds = [math]::Ceiling($measure.TotalMilliseconds)
# Append the timestamp, milliseconds, domain, server, and name to the CSV file
$result = "$timestamp,$milliseconds,$targetFqdn,"
$dnsServerUsed = "$($dnsServer.IPAddress),$($dnsServer.FQDN)"
$output = $result + $dnsServerUsed
$output | Out-File -FilePath $filePath -Append
}
}
- LainRobertson · Silver Contributor
Hi Procradminator,
While your script will work, there are a number of areas that could be improved upon, such as:
- Strategy: Using PowerShell concepts ahead of DOS concepts;
- Performance: Includes parallel processing and code optimisations;
- Exception handling (specifically, the impact of it).
I'm not going to get into parallel processing as that would make for a larger, more complex script, but I'll speak briefly to the other points.
First, here's a slightly altered version of your script:
Example script
# Add 5 hosts with their service description and IP address.
$dnsServers = @(
    [PSCustomObject]@{ Description = "OurDNS1"; IPAddress = "14.15.16.17" }
    , [PSCustomObject]@{ Description = "OurDNS2"; IPAddress = "11.12.13.14" }
    , [PSCustomObject]@{ Description = "Cloudflare"; IPAddress = "1.1.1.1" }
    , [PSCustomObject]@{ Description = "Quad9"; IPAddress = "9.9.9.9" }
    , [PSCustomObject]@{ Description = "Google"; IPAddress = "8.8.4.4" }
);

# Define an array to store target FQDNs.
$targetFqdns = @(
    "bbc.co.uk."
    , "www.porsche.com."
    , "www.amazon.co.uk."
)

# Define the path to the CSV file with the current date in the filename.
$filePath = "$PSScriptRoot\DNSResults_$([datetime]::Today.ToString("yyyy-MM-dd")).csv";

# Loop through each target host and then each DNS server.
foreach ($targetFqdn in $targetFqdns) {
    foreach ($dnsServer in $dnsServers) {
        # Prepare the output object.
        $Output = [PSCustomObject] [ordered] @{
            Successful = $true;
            Timestamp = [datetime]::Now.ToString("s");
            Name = $targetFqdn.ToLowerInvariant();
            Server = $dnsServer.IPAddress;
            ServerDescription = $dnsServer.Description;
            Milliseconds = 0;
            Error = $null;
        };

        try {
            # Measure the time taken to run the command.
            $measure = Measure-Command -Expression {
                Resolve-DnsName -DnsOnly -Name $targetFqdn -Server $dnsServer.IPAddress -ErrorAction:Stop *> $null;
            }

            # Set the total milliseconds and round up to a whole number.
            $Output.Milliseconds = [math]::Ceiling($measure.TotalMilliseconds);
        }
        catch {
            $Output.Successful = $false;
            $Output.Error = $_.Exception.Message;
        }

        # Output the results.
        $Output | Export-Csv -NoTypeInformation -Path $filePath -Append;
    }
}
Example output
Strategy
There are only two points I want to make here:
- PowerShell works with objects and it pays to frame your thinking that way;
- Try to keep your object data and your stream output separate.
What these two points primarily relate to in your code is how you're going about the construction of your final CSV output file.
As an FYI, PowerShell sits on top of .NET - .NET Framework for Windows PowerShell (which ships with Windows) and .NET Core for PowerShell (which does not ship with Windows), meaning it's inherently object-oriented - a point I'll keep coming back to.
In plain English, what this translates to is you shouldn't need to be constructing the CSV line-by-line - including headers and formatting.
Instead, you construct the object containing the data you wish to see in the results. You then can pipe those results to something else, where that could be almost anything from another piece of code/commandlet, to a web API, or even - as you are doing - a humble file.
If you haven't already heard of pipelines, at some point you likely will, and this object approach of PowerShell's is critical to successfully leveraging the pipeline to achieve far more complex things than this.
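As a minimal illustration of this object-first approach (the property names and the temp-file path here are just examples, not anything from the scripts above):

```powershell
# Build structured objects instead of hand-assembling CSV strings.
$results = foreach ($name in "bbc.co.uk", "www.amazon.co.uk") {
    [PSCustomObject]@{
        Timestamp    = [datetime]::Now.ToString("s")
        Name         = $name
        Milliseconds = 0
    }
}

# The same objects can then be handed to almost any consumer without
# re-formatting: a CSV file, the console, JSON for a web API, and so on.
$results | Export-Csv -NoTypeInformation -Path "$env:TEMP\demo.csv"
$results | Format-Table
$results | ConvertTo-Json
```

Note how the headers, quoting and escaping in the CSV are all handled for you by Export-Csv; the loop only ever deals in data, never in presentation.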
Where you see this in effect in my example is in the block that prepares the $Output object inside the inner loop.
What this block is doing is preparing an object containing all the data I wish to see in response to the DNS test. I've added a couple of additional columns to support post-execution processing and analysis. But the general gist is that this object can be handed off to anything at all later on.
What this replaces from your script is the process where you're constantly rebuilding the final CSV string you're outputting to the CSV file.
Next - and this is quite specific to your scenario, you would be better off using the native Resolve-DnsName commandlet rather than nslookup.
In the context of what I wrote above, it automatically provides object data as output rather than flat, unstructured string output, meaning it automatically fits into the PowerShell way of doing things. But there are other reasons to use it.
nslookup and Resolve-DnsName are quite different beasts under the hood, which can impact the validity of your test results.
The Windows operating system has a built-in DNS client (stating the obvious). Resolve-DnsName - as with most of your running services/applications - uses this client.
nslookup does not use the Windows DNS client as it contains its own DNS client implementation. This means it will not honour certain artefacts that Windows itself respects, such as the settings found in the name resolution policy table and a few others - both of which are typically delivered using group policy (or an MDM alternative such as Intune).
So, you have to be aware of the scope of your testing, as that will influence whether you choose nslookup.exe or Resolve-DnsName. (For what it's worth, I can't think of a good reason to use nslookup.exe, but I won't claim there isn't one.)
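You can see the difference in output shape for yourself with a quick comparison (the server IP here is illustrative; any reachable resolver will do):

```powershell
# nslookup emits flat text that you would have to parse by hand:
nslookup bbc.co.uk 1.1.1.1

# Resolve-DnsName emits objects with typed properties you can use directly,
# filter, sort or export without any string parsing:
$answer = Resolve-DnsName -DnsOnly -Name "bbc.co.uk" -Server "1.1.1.1"
$answer | Select-Object Name, Type, TTL, IPAddress
```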
Next - and this is an easy one to overlook - while considering the scope of your scenario, you want to be aware of how a trailing period can impact the DNS resolution process (for both Windows and nslookup, as this is an RFC topic).
If we take the Internet hostname of www.abc.net.au, it can be expressed two ways:
- www.abc.net.au: This is the "normal" way where because there is no trailing period, the client query will iterate through each and every specified (via DHCP and/or DNS client group policy) domain search suffix;
- www.abc.net.au.: Which is the strict format for a fully-qualified domain name (FQDN) and features the trailing period. This tells the client DNS resolver not to search any domain search suffixes - i.e. authoritatively resolve what was entered or fail the query.
If you're troubleshooting DNS query timeout issues, scenario 1 can often be the cause (i.e. too many and/or unresponsive DNS servers in the suffix search list). It's also misleading as to the performance of the externally-located DNS servers, since suffix search domains usually correspond to internal DNS namespaces, which an external DNS server will almost never be able to resolve.
The second strict format ensures you avoid all these issues, which is appropriate if you're only looking to measure raw DNS server performance.
Within the example script I've provided, you'll notice I've gone for the strict format, but again, they are both valid and it's up to you to decide which best suits your scenario.
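To observe the two behaviours side by side (the suffix behaviour only shows up if your client actually has a search list configured, e.g. via DHCP or group policy):

```powershell
# Without the trailing period, the stub resolver may first try the name with
# each configured DNS suffix appended (e.g. www.abc.net.au.corp.example.com)
# before falling back to the name as entered:
Resolve-DnsName -Name "www.abc.net.au"

# With the trailing period, the name is treated as fully qualified and is
# resolved exactly as entered - no suffix iteration, so timings only ever
# reflect the single authoritative lookup:
Resolve-DnsName -Name "www.abc.net.au."
```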
Performance
In a script this small, performance tips are pointless, but it's not about improving this particular script. It's about developing habits that will benefit you as you author more performance-sensitive scripts. Eventually, you'll probably find yourself writing all your scripts in a more performant manner by default (except maybe the tactical, once-off, scenario-specific ones where you just want a quick, throwaway result).
As I said, I'm not going to cover parallel processing, but I'll give you some pointers that you can chase up if you're interested:
- You'll want to leverage Start-ThreadJob (ThreadJob) - PowerShell | Microsoft Learn;
- You could look at Start-Job (Microsoft.PowerShell.Core) - PowerShell | Microsoft Learn instead, but it's considerably slower for this kind of scenario. I wouldn't recommend this pathway.
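As a rough sketch only - untested scaffolding rather than a drop-in replacement - the nested loops could be fanned out with Start-ThreadJob like this (it reuses the $targetFqdns, $dnsServers and $filePath variables from the example script above; Start-ThreadJob ships with PowerShell 7 and is installable on Windows PowerShell 5.1 via the ThreadJob module):

```powershell
# Launch one thread job per (name, server) pair instead of testing serially.
$jobs = foreach ($targetFqdn in $targetFqdns) {
    foreach ($dnsServer in $dnsServers) {
        Start-ThreadJob -ScriptBlock {
            param($name, $server)
            $measure = Measure-Command {
                Resolve-DnsName -DnsOnly -Name $name -Server $server -ErrorAction Stop *> $null
            }
            [PSCustomObject]@{
                Timestamp    = [datetime]::Now.ToString("s")
                Name         = $name
                Server       = $server
                Milliseconds = [math]::Ceiling($measure.TotalMilliseconds)
            }
        } -ArgumentList $targetFqdn, $dnsServer.IPAddress
    }
}

# Wait for every job, collect the result objects, then write the CSV once.
$jobs | Receive-Job -Wait -AutoRemoveJob | Export-Csv -NoTypeInformation -Path $filePath
```

Bear in mind that running all the lookups concurrently changes what you're measuring - queries now compete for the same network interface - so for a latency benchmark you may deliberately prefer the serial version.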
I've already talked about suffix search orders and how they can impact performance, so I won't repeat that here.
In relation to the code optimisations, the main issue is that there's a lot of unnecessary string manipulation going on inside the foreach loops.
When you manipulate a string, you're actually creating a new copy of it entirely, so most of the time, you want to strike a balance between readability and performance by reducing/consolidating string manipulations.
Remembering what I mentioned about PowerShell sitting on top of .NET, there's even faster classes available like StringBuilder Class (System.Text) | Microsoft Learn.
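For contrast, this is what the StringBuilder pattern looks like on the rare occasion you genuinely do need to build a large string piece by piece (a generic illustration, not something this DNS script needs):

```powershell
# Each "+=" on a [string] allocates an entirely new string; StringBuilder
# appends into a single growable buffer instead, which matters at scale.
$builder = [System.Text.StringBuilder]::new()

foreach ($i in 1..1000) {
    # AppendLine returns the builder for chaining; cast to [void] to keep
    # the intermediate results out of the pipeline.
    [void]$builder.AppendLine("line $i")
}

$text = $builder.ToString()
```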
I haven't gone that far with my example - it was enough just to remove the various string manipulations and let them be handled once through the call to Export-Csv.
Exception handling
Of the three topics, this is the most important one to me since it relates directly to how usable the resulting CSV is.
Currently, your script is sending all the nslookup output (including errors) to the null device. That keeps your CSV clear of unwanted text, but in doing so it also ensures you remain unaware of the nature of any errors.
I've kept the errors in scope while ensuring they don't mess up the format of the CSV file - which is easily achieved if you leverage PowerShell's innate object- and pipeline-oriented architecture.
Anyhow, as I opened with, your script will work, but there are concepts you can improve upon that will dramatically improve more complex scripts that operate at significant scale.
Cheers,
Lain
I suggest running Clear-DnsClientCache before each measurement (https://learn.microsoft.com/en-us/powershell/module/dnsclient/clear-dnsclientcache?view=windowsserver2025-ps) because after the first query, your client will cache the record being queried :)
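In the context of the example script's inner loop, that suggestion would look something like this ($targetFqdn and $dnsServer are the loop variables from the script above):

```powershell
# Flush the local resolver cache so the measurement reflects a real
# round-trip to the DNS server rather than a locally cached answer.
Clear-DnsClientCache

$measure = Measure-Command {
    Resolve-DnsName -DnsOnly -Name $targetFqdn -Server $dnsServer.IPAddress -ErrorAction Stop *> $null
}
```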