SQLIOStress did not go away; it was replaced with SQLIOSim. I developed SQLIOStress in response to various cases involving stale reads, checksum failures, and other unwanted I/O behavior. SQLIOStress continued to gain test cases, but it was being developed on an issue-by-issue basis and was becoming difficult to maintain.
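To illustrate the core idea behind this class of tool (a hedged sketch, not the actual SQLIOStress code): stamp each page with a write sequence number and a checksum, force it to disk, then read it back and verify every stamp. A stale read surfaces as an old sequence number; a torn or corrupted write surfaces as a checksum mismatch.

```python
import os
import struct
import tempfile
import zlib

PAGE_SIZE = 8192               # SQL Server-style 8 KB page
HEADER = struct.Struct("<IQ")  # CRC32, then write sequence number

def stamp_page(seq: int) -> bytes:
    """Build a page whose body is protected by a CRC and tagged with seq."""
    body = os.urandom(PAGE_SIZE - HEADER.size)
    return HEADER.pack(zlib.crc32(body), seq) + body

def verify_page(page: bytes, expected_seq: int) -> bool:
    """A stale read returns an old sequence number; a torn or corrupt
    write shows up as a CRC mismatch."""
    crc, seq = HEADER.unpack(page[:HEADER.size])
    return seq == expected_seq and zlib.crc32(page[HEADER.size:]) == crc

def run_pass(path: str, pages: int = 64) -> int:
    """Write stamped pages, flush them through the cache, read back and
    verify. Returns the number of failed pages (0 on healthy storage)."""
    with open(path, "wb") as f:
        for seq in range(pages):
            f.write(stamp_page(seq))
        f.flush()
        os.fsync(f.fileno())  # push the data through the OS cache
    failures = 0
    with open(path, "rb") as f:
        for seq in range(pages):
            if not verify_page(f.read(PAGE_SIZE), seq):
                failures += 1
    return failures

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        print(run_pass(os.path.join(d, "stress.dat")))
```

The file name and page layout here are illustrative assumptions; the real tools use many more patterns, overlapping I/O, and repeated rewrite cycles against the same offsets.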
SQLIOStress has been part of WHQL testing for several years and is widely used by many I/O manufacturers. I have had various conference calls with these vendors in an effort to help all parties improve I/O quality and the overall SQL Server experience.
Several years ago I started a re-write, called SQLIOSim, designed to do I/O stress testing while being easier to maintain. It easily handles multiple files, grow and shrink operations, files larger than 4GB, and dozens of other configuration options.
All the test patterns from SQLIOStress were ported into the SQLIOSim code base (I tested them all on known hardware configurations that had caused failures), and we have added dozens more test patterns.
Two years prior to the SQL 2005 release, the SQL Server development team (SQLOS specifically) took over the code for SQLIOSim, making it an official part of the SQL 2005 product. This was a big step because SQLIOSim, too, joined the WHQL testing ranks, is widely adopted by I/O vendor testing suites, and is maintained in conjunction with the SQL Server source code.
Along with strict code review, the SQL Server development team ran detailed I/O pattern testing from our customer replay labs and internal ITG servers to make sure the patterns simulated by SQLIOSim would accurately reflect the SQL Server I/O patterns of various loads.
SQLIOSim, like SQLIOStress, is designed to run on a server without requiring an install of SQL Server. This allows new hardware to be tested before SQL Server is even installed.
SQLIOStress and SQLIOSim are NOT performance testing tools. They run both defined and random patterns, which makes it impossible to use them for strict performance timing efforts. In fact, some of the common patterns are designed to burst 10,000+ I/Os at the subsystem. With these bursts we found several drivers that did not handle low non-paged pool situations gracefully. Low non-paged pool is not an ideal situation for a server to be in, but graceful recovery, not a blue screen, would be the preferred outcome.
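The burst idea can be sketched as follows (an assumption-laden illustration, not the tools' actual implementation): queue a large number of outstanding writes at once, so the subsystem must absorb a spike rather than a steady trickle. The real tools use overlapped/asynchronous OS I/O; this sketch approximates the effect with a thread pool and positional writes on POSIX systems.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def burst_writes(path: str, count: int = 1000, size: int = 8192) -> int:
    """Issue `count` writes of `size` bytes concurrently against one file,
    each thread targeting a distinct offset. Returns total bytes written."""
    data = os.urandom(size)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        def one(i: int) -> int:
            # os.pwrite (POSIX) lets many threads share one fd safely,
            # each writing at its own offset
            return os.pwrite(fd, data, i * size)
        # 64 workers keep a deep queue of outstanding writes in flight
        with ThreadPoolExecutor(max_workers=64) as pool:
            written = sum(pool.map(one, range(count)))
    finally:
        os.close(fd)
    return written
```

Thread and worker counts here are arbitrary illustration values; the point is simply that a burst, unlike a paced workload, can expose drivers that misbehave under resource pressure such as low non-paged pool.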