Written by Jason Yi, PM on the Azure Edge & Platform team at Microsoft.
Many performance tools eventually reach the end of their lifespan. Fortunately, Microsoft still loves Azure Stack HCI performance, which is why DiskSpd and VMFleet are here to stay. With that said, today we are excited to bring you a long-awaited update – VMFleet 2.0.
This is the biggest update in over 3 years, and you can look forward to three major improvements: new DiskSpd functionality with more granular control, VMFleet re-released as a PowerShell module, and a set of pre-defined core workloads for easier benchmarking.
To improve VMFleet, we first needed to improve DiskSpd. We were also fully aware that DiskSpd lagged behind other tools in certain aspects. With this most recent update, DiskSpd/VMFleet takes a leap forward by offering more functionality and more granular control. Let’s walk through a few of the major changes.
Previously, 100% of the I/O generated by DiskSpd was uniformly distributed across the target file. If using VMFleet, the I/O would be sent to the 10 GiB target file on the virtual machine’s data disk. This means that I/O was distributed evenly across the target and fit in the cache. The result would be a boost in performance compared to “realistic” workloads where data may be split between hot and cold working sets.
In order to offer more flexibility and simulate more realistic workloads, DiskSpd now allows you to specify a non-uniform distribution for random IO in the target file. There are two methods to specify the distribution: by percentage or by absolute byte offsets.
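As a sketch, the percentage-based method pairs a share of the IO with a share of the target. The flag is written here as -rdpct per the DiskSpd 2.1 release; the file name and other parameters are illustrative, and the exact syntax is best confirmed on the DiskSpd wiki:

```
# Send 90% of the random IO to the first 10% of the target (a "hot" set),
# leaving the remaining 10% of IO for the rest of the file (the "cold" set).
diskspd -t2 -o8 -b4k -r -w30 -d60 -rdpct90/10 C:\test\target.dat
```

This makes it easy to approximate workloads where a small hot working set absorbs most of the IO while the bulk of the data stays cold.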
Previously, DiskSpd only ran 100% random or 100% sequential workloads. Today, you can specify the percentage of IO requests that will be issued randomly with respect to the last issued IO, allowing you to run mixed random/sequential workloads. If a mixed read/write ratio is also specified (-wN, where 0 < N < 100), sequential IO runs will be homogenous (a single IO type). The lengths of these runs follow a geometric distribution based on the specified probability split.
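For example, assuming the new percentage flag is -rs (as introduced in DiskSpd 2.1; the file name and other parameters here are illustrative):

```
# Issue roughly 30% of IOs at random offsets; the remaining IOs continue
# sequentially from the last issued IO. Because -w20 mixes reads and writes,
# each sequential run is homogenous (all reads or all writes).
diskspd -t4 -o16 -b8k -rs30 -w20 -d60 C:\test\target.dat
```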
Previously, the only way to throttle the IOPS per thread was the throughput limit flag (-g). This works, but you had to calculate and reason about the relationship between throughput and IOPS to throttle IOPS to a desired value. Those days are now gone: a new variant of the flag lets you control the IOPS limit directly, with no manual calculation.
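To see why the old calculation was tedious: -g throttles throughput in bytes per millisecond, so capping a 4 KiB-block workload at 5,000 IOPS per thread meant computing 5,000 × 4,096 B = 20,480,000 B/s, i.e. 20,480 bytes/ms. The sketch below writes the new IOPS form as an i suffix on -g, per the DiskSpd 2.1 release; treat the exact spelling as an assumption to verify on the wiki, and the file name as illustrative:

```
# Old way: throttle via throughput, in bytes per millisecond.
# 5,000 IOPS x 4,096 bytes = 20,480,000 B/s = 20,480 bytes/ms.
diskspd -t1 -o32 -b4k -r -g20480 C:\test\target.dat

# New way: throttle directly to 5,000 IOPS per thread.
diskspd -t1 -o32 -b4k -r -g5000i C:\test\target.dat
```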
DiskSpd has always had the -X parameter to pipe in an XML profile containing your desired parameters or workload. Doing so allowed you to automate and run DiskSpd tests via the XML file. However, in order to even get that XML profile section, you would need to manually run the test with the parameters, open the output file, and copy the “profile” section.
Today, you can now easily extract the parameter profile without running the DiskSpd load. Alternatively, you can also generate a text description of the input XML profile.
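For instance, using the new profile-only mode (written here as -Rp with an xml or text form, per the DiskSpd 2.1 release; confirm the exact spelling on the wiki, and note the file names are illustrative):

```
# Emit the XML profile for this parameter set without running any load.
diskspd -t2 -o8 -b4k -r -w30 -d60 -Rpxml C:\test\target.dat > profile.xml

# Or produce a human-readable text description of an existing XML profile.
diskspd -X profile.xml -Rptext
```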
Previously, for each virtual machine, VMFleet deployed one data disk (by default, the OS disk) containing the 10 GiB target file that DiskSpd used.
Today, you have the option to attach an additional data disk to the virtual machines.
New Module Command:
Previously, when creating a CSV, it would be up to the user to determine the volume size. Today, this command performs the calculations for you and outputs an appropriate CSV size for your desired resiliency type. This avoids running into an insufficient CSV size. More information on creating volumes in Storage Spaces Direct can be found here.
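A sketch of what this looks like in practice. The cmdlet name Get-FleetVolumeEstimate is an assumption about the VMFleet 2.0 module; run Get-Command -Module VMFleet to confirm the exact name and its parameters:

```
# Ask VMFleet for an appropriately sized CSV given the cluster's
# storage pool and the desired resiliency type (hypothetical cmdlet name).
Get-FleetVolumeEstimate
```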
New Module Command:
Previously, VMFleet deployed an equal number of VMs per node, and by default each VM was collocated with its owner node. However, this is often not the case in a real-world scenario.
Today, in order to simulate a more realistic workload, VMFleet offers the ability to specify a percentage of VMs that will be misaligned from the owner node – in other words, rotated to other nodes. This is denoted by the new module command:
New Module Command:
Once you familiarize yourself with VMFleet, it’s almost like second nature. However, if you are a first-time user, or even if you take a break from using the tool, it’s extremely easy to forget how to deploy, utilize, and measure results using VMFleet. Not to mention, it has always been a bit tedious to deploy the tool in the first place.
As a result, we’ve now converted VMFleet into a PowerShell module, hosted on the PowerShell Gallery. You no longer need to navigate to the GitHub repository, install the directory, manually tweak the scripts, move files around, etc. Instead, you can simply run “Install-Module -Name VMFleet” and get started with deploying your virtual machines – it even downloads DiskSpd automatically!
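Getting started is now a couple of lines of standard PowerShell (the module name comes from this post; Get-Command is the built-in way to discover what a module exposes):

```
# Install VMFleet from the PowerShell Gallery (DiskSpd is fetched automatically).
Install-Module -Name VMFleet

# List the commands the module exposes.
Get-Command -Module VMFleet
```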
Despite these changes, you can rest assured that VMFleet maintains its familiarity. For example, the directory structure remains the same, and the scripts (now commands) maintain an intuitive nomenclature.
And in case you were worried about backwards compatibility, no worries! As always, DiskSpd remains backwards compatible all the way back to Windows 7 and Windows Server 2008 R2.
Previously, VMFleet provided relatively little guidance as to which parameters or flags one should use to mimic certain workloads.
Today, we introduce a new command (Measure-FleetCoreWorkload) that runs 4 pre-defined workloads: General, Peak (maxed-out IOPS), VDI, and SQL. These workloads are defined using DiskSpd flags and stored as an XML profile. For the General, Peak, and VDI workloads, we deploy 1 VM per physical core, each with 1 vCPU. SQL uses one fourth of that VM count, but each VM contains 4 vCPUs. After the tests complete, the command generates a ZIP file that contains the IOPS results for each workload.
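To make the sizing concrete: on a hypothetical 2-node cluster with 32 physical cores per node (64 cores total), General, Peak, and VDI would each run 64 single-vCPU VMs, while SQL would run 16 VMs with 4 vCPUs each. Running the sweep itself is a single command (any additional parameters are omitted here; see the wiki for the full set):

```
# Run the four pre-defined workloads and collect the results into a ZIP file.
Measure-FleetCoreWorkload
```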
Today we are excited to announce the newly refreshed tools, but there are many more features that we did not highlight here. For more details, please stay tuned for future articles or refer to the updated DiskSpd/VMFleet GitHub wiki.
As always, please continue to provide your valuable feedback, and we will do our best to listen! Thank you!