So, funny story...
I had a project at work once to build a resilient file storage system for hosting our most sensitive production data and services (we hosted VMs on it too).
I was pressured to use Linux, as our sysadmin was heavily biased against Microsoft despite my attempts to dissuade him (licensing and the lack of beta testing broke that camel's back). So I tried to immerse myself in Linux despite my anxiety about the vastly unfamiliar command-line syntax (my IT knowledge is completely self-taught), and I was curious to understand 'clustering' concepts. It was suggested that we try either Gluster or CEPH; we brainstormed and settled on CEPH.
For those unfamiliar, CEPH is basically Storage Spaces Direct (S2D) for Linux: distributed block storage across multiple hosts with heavy resiliency.
CEPH, by design I believe, uses this exact write-through method; it's called a sync write in Linux land, I think?
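For anyone curious what that looks like at the syscall level, here's a rough C sketch of a sync/write-through write on Linux using O_SYNC. Just an illustration of the concept (CEPH's actual I/O path is more involved), and the filename is made up for the example:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* O_SYNC: each write() returns only once the device reports the
     * data (and its metadata) durable, not merely buffered in the
     * kernel's page cache. That's the write-through behavior. */
    int fd = open("testfile.bin", O_WRONLY | O_CREAT | O_SYNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    char buf[4096];
    memset(buf, 0xAB, sizeof(buf));

    if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
        perror("write");
        return 1;
    }

    /* Equivalent approach: open without O_SYNC and call fsync(fd)
     * after the write(s) you need to be durable. */
    close(fd);
    return 0;
}
```

On a consumer drive with a volatile cache, every one of those writes forces a real flush to flash and latency tanks; that's where the drives below come in.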
Anywho, my real passion is actually hardware, and I'm a speed demon. So, I know the type of drive you're looking for to get the best performance with this write method.
What you need are SSDs with dedicated power-loss protection, where the firmware guarantees buffer/cache flushes during write operations. These are typically in the M.2 22110 format and carry several bulk capacitors that act as an uninterruptible power source to flush the buffer in the event of power failure. The drive makes a best effort to get the buffered data into the flash media gracefully, typically with a very high success rate.
These drives tend to have write acceleration built into the SSD's design to handle this write method, which is common among enterprise SSDs (but not all!). You typically can't adjust the write cache settings for these drives in Device Manager, as (I believe) they handle write operations in a protected path to ensure data resiliency. They provide very low write latency and are beasts with software-defined block storage.
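If you want to check whether a given NVMe drive even reports a volatile write cache, you can ask the controller directly. Here's a rough C sketch using Linux's NVMe admin passthrough ioctl (needs root, and /dev/nvme0 is just an example path); per the NVMe spec, byte 525 of the Identify Controller data is the VWC field:

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/nvme_ioctl.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s /dev/nvme0\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Identify Controller: admin opcode 0x06, CNS 1, 4 KiB buffer. */
    unsigned char id[4096] = {0};
    struct nvme_admin_cmd cmd = {
        .opcode   = 0x06,
        .addr     = (uint64_t)(uintptr_t)id,
        .data_len = sizeof(id),
        .cdw10    = 1,
    };

    if (ioctl(fd, NVME_IOCTL_ADMIN_CMD, &cmd) < 0) {
        perror("NVME_IOCTL_ADMIN_CMD");
        return 1;
    }

    /* Byte 525 (VWC): bit 0 set means the drive has a volatile
     * write cache the host is expected to flush. */
    printf("volatile write cache: %s\n",
           (id[525] & 1) ? "present" : "none reported");

    close(fd);
    return 0;
}
```

PLP drives often report no volatile write cache at all, since their cache is power-protected, which lines up with the OS not offering you a cache toggle for them.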
I actually invested in a 960GB Samsung PM963 M.2 SSD after working on that storage project (2018) and discovering these niche use-case drives, and it shines at just about any workload you throw at it; the higher the queue depth, the better!
Edit: fixed the numerous typos