Mar 06 2023 11:51 AM - edited Mar 09 2023 10:15 AM
I have an application that performs high-speed (>3000 MB/s) sequential unbuffered writes to large (256 GB) pre-allocated files on an NTFS volume. When needed, I also perform random unbuffered reads from the same volume at a much lower rate, using a block size of around 2 MB.
Reading and writing at high speed works perfectly almost all of the time. However, when I read from (a different part of) the same file that is currently being written, write speeds suffer dramatically and the queue depth shrinks below 1. The moment writing moves on to another file, the speeds pick back up to where they belong. I have confirmed this on both Windows Server 2016 and 2019.
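For anyone wanting to reproduce the access pattern, here is a minimal sketch in Python. It is only an illustration: it uses ordinary buffered I/O (the real application uses unbuffered writes, presumably via `FILE_FLAG_NO_BUFFERING` in Win32, which Python does not expose portably), and the file size is scaled far down from 256 GB. The file name and block counts are placeholders.

```python
import os
import random
import tempfile
import threading
import time

BLOCK = 2 * 1024 * 1024       # 2 MB blocks, as in the real workload
FILE_SIZE = 64 * BLOCK        # small stand-in for the 256 GB files

path = os.path.join(tempfile.mkdtemp(), "victim.dat")

# Pre-allocate the file, mirroring the real application's setup.
with open(path, "wb") as f:
    f.truncate(FILE_SIZE)

stop = threading.Event()

def random_reader():
    # Random 2 MB reads from an earlier part of the same file while it
    # is being written -- the pattern that triggers the slowdown.
    with open(path, "rb") as f:
        while not stop.is_set():
            f.seek(random.randrange(0, FILE_SIZE // 2, BLOCK))
            f.read(BLOCK)
            time.sleep(0.01)

reader = threading.Thread(target=random_reader, daemon=True)
reader.start()

# Sequential writer: time the 2 MB writes while the reader is active.
buf = bytes(BLOCK)
t0 = time.perf_counter()
with open(path, "r+b") as f:
    for i in range(FILE_SIZE // BLOCK):
        f.seek(i * BLOCK)
        f.write(buf)
elapsed = time.perf_counter() - t0

stop.set()
reader.join(timeout=1)

mb_per_s = (FILE_SIZE / (1024 * 1024)) / elapsed
print(f"wrote {FILE_SIZE // (1024 * 1024)} MB in {elapsed:.2f}s "
      f"({mb_per_s:.0f} MB/s)")
```

On Windows, replacing the buffered opens with handles opened via `FILE_FLAG_NO_BUFFERING` (sector-aligned buffers and offsets required) should show the effect described above; the buffered version here only demonstrates the concurrent read-during-write pattern, not the throughput collapse itself.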
This could be a difference in I/O scheduling, caching behavior, or something else that depends on the file system, but I can't find any documentation on it. What could explain this behavior, and is there a workaround?