BLOG: Determine and modernize Filesystem Deduplication
Version history
- 1.6 Added references / links
- 1.5 Added insights from Steven Ekren. Many thanks! / Added ReFS Docs link and added clarification about drawbacks.
- 1.4 Revised script so ReFS volumes with classic dedup will be identified, added more eligibility checks and error handling
- 1.3 Added point #4 in migration guidance
- 1.2 Revised script
- 1.1 Formatting

This blog explains the two Windows deduplication modes: classic Windows Data Deduplication (NTFS or ReFS) and ReFS Deduplication (ReFS only). It covers how they differ, why you should consider upgrading to Windows Server 2025 to leverage the new ReFS dedup engine, and clear warnings about scenarios where ReFS is not recommended. Practical migration guidance and detection commands are included.

Differences between classic dedup and ReFS dedup
File system: Classic dedup runs on NTFS or ReFS; ReFS dedup runs only on ReFS, on Windows Server 2025 or later.
Implementation: They are separate engines with different metadata formats and management cmdlets.
Management: Classic dedup uses the Dedup PowerShell module (Get‑DedupVolume, Start‑DedupJob, Disable‑DedupVolume). ReFS dedup uses its own ReFS dedup cmdlets (Get‑ReFSDedupStatus, Enable‑ReFSDedup).
Conversion: There is no in‑place conversion between the two; metadata and chunk formats are incompatible.
Improvements: The new inline ReFS Deduplication leverages the advantages of the ReFS file system, making deduplication more efficient and less CPU intensive. The new ReFS Deduplication can also compress data inline using the LZ4 algorithm, bringing it up to par with enterprise solutions often found in SAN storage or Linux appliances. Compression is optional and configured per volume.

Edit: Steven Ekren, a former Senior Product Manager for Hyper-V, shared valuable insights on how both engines operate in a comment on LinkedIn: [...]
the basic conceptual difference between WS Deduplication and ReFS deduplication is that the Windows Server [dedup] version takes the duplicate file data and moves it to a repository and puts a reparse point in the file system at each point that references the data. This involves data movement and is therefore not recommended for workloads that change their data often, but best for more static data like documents and pictures/videos. ReFS is a file system that uses links natively for all its objects, so leaving the data in place and managing the links is much more efficient and doesn't involve the data copy and managing a repository. Effectively it's built into the file system. As the blog notes, there are some situations not recommended for this version of dedupe, but generally it has lower performance and storage I/O impact.

Why upgrade to Windows Server 2025
- Improved version of the ReFS file system.
- Improved ReFS inline deduplication + optional LZ4 compression: Server 2025 includes enhancements to ReFS dedup performance, scalability, and integration with modern storage features.
- Support and fixes: Windows Server 2016 and 2019 are past mainstream support, increasing the likelihood of costly support cases and delayed fixes; upgrading reduces operational risk and ensures access to ongoing improvements.
- Future compatibility: Newer OS releases receive optimizations and bug fixes for ReFS and dedup scenarios that older releases will not.
- SMB compression: noticeably faster data transfer at minimal CPU cost when transferring data across the network.
- For feature- and security-related improvements, refer to the available Microsoft Windows Server 2025 Summit content on techcommunity.microsoft.com.

Scenarios where ReFS is not recommended
ReFS on SAN in clustered CSV environments: Avoid placing ReFS dedup on top of SAN‑backed Cluster Shared Volumes (CSVFS) in production clusters; clustered SAN/CSV scenarios have caused severe performance issues in practice.
Please refer to the ReFS documentation.

Many small, fast‑changing files (personal opinion and experience, not endorsed by Microsoft): Workloads with frequent small writes, such as user profiles, folder redirection of AppData folders, or applications that churn small config files (for example, Lotus Notes config files), can cause locks, performance degradation, or unexpected behavior on ReFS. Exclude these disks from dedup or keep them on NTFS. Note: Restrictions on high churn rates, such as lockups, high RAM consumption, deadlocks, or BSODs, might have been addressed in Windows Server 2025 and ReFS Dedup; see the comment from Steven Ekren above. Improving reliability and performance is a top goal for ReFS, to improve adoption and feature parity with NTFS. For information about feature parity, please refer to the ReFS documentation.

Migration guidance
The following instructions describe a high-level, supported migration path from Windows deduplication on the NTFS file system to native ReFS Deduplication.
Note: Step #3, data migration, is not required when already using ReFS with Data Deduplication. In that case it's enough to execute steps #1 and #2.
Note: Validate on non‑production data first. Plan for rehydration time and network/storage throughput. Ensure backups are current before starting. Make sure to have a full backup before upgrading the server OS or making changes.
1. Disable classic dedup on the NTFS source: Disable-DedupVolume -Volume YourDriveLetter:
2. Rehydrate (un‑deduplicate) the data: Start-DedupJob -Volume YourDriveLetter: -Type Unoptimization
3. Copy or move data to a ReFS volume (new target): For straightforward NTFS→ReFS copies, robocopy is recommended. A GUI- and job-based alternative is the File Server Migration feature (which uses robocopy) in Windows Admin Center.
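As an illustrative end-to-end sketch of the steps above, plus enabling the new engine on the target afterwards: the drive letters D: (NTFS source) and E: (ReFS target), the robocopy switches, and the log path are placeholder assumptions, not part of the original guidance — validate them for your environment first.

```powershell
# Assumed layout: D: = NTFS source with classic dedup, E: = new ReFS target volume

# 1. Disable classic dedup on the source
Disable-DedupVolume -Volume D:

# 2. Rehydrate the data (can take a long time and needs enough free space)
Start-DedupJob -Volume D: -Type Unoptimization -Wait

# 3. Copy the rehydrated data to the ReFS target
#    /MIR mirrors the tree, /COPYALL preserves security/attributes, /MT:32 uses 32 copy threads
robocopy D:\ E:\ /MIR /COPYALL /DCOPY:DAT /R:2 /W:5 /MT:32 /LOG:C:\Logs\ntfs-to-refs.log

# Afterwards, enable the new ReFS dedup engine on the target (Windows Server 2025 or later);
# DedupAndCompress also turns on the optional inline compression discussed above
Enable-ReFSDedup -Volume E: -Type DedupAndCompress
Get-ReFSDedupStatus -Volume E:
```

If you want deduplication without inline compression, the cmdlet also accepts a dedup-only type; see the ReFS deduplication documentation for the exact parameter values.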
For complex scenarios (open files, long path names, very large datasets (> 5 TB), many small files, restructuring, a GUI (including on Windows Server Core), automation, improved logging, or cloud/hybrid migrations), I recommend GS RichCopy Enterprise by GuruSquad for higher speed (up to 40%) and reliability compared to robocopy.

4. Optionally remove the Windows Server feature: When the old deduplication is no longer in use, consider removing the feature. Your advantages of doing so:
- Removes an unnecessary service.
- Removes the file system filter driver for dedup, which impacts performance even when not in use.
- Removes the PowerShell cmdlets for the old dedup, so they cannot be mistakenly used by existing scripts, unaware admins, etc.

When migrating files over the network:
SMB compression: Ensure both source and target run Windows Server 2025 and leverage SMB compression. SMB compression is available in Microsoft xcopy, Microsoft robocopy, and GuruSquad GS RichCopy Enterprise.
Balancing and teaming with SMB: SMB does not require LBFO or SET teaming. It automatically detects network links and actively balances on its own on Windows Server 2016 and later. Using teaming can, depending on the configuration, negatively affect transfer speed.

Quick detection and diagnostic commands

Check file systems:
Get-Volume | Select DriveLetter, FileSystem

Check classic dedup feature:
Get-WindowsFeature -Name FS-Data-Deduplication
Get-DedupVolume
Get-DedupStatus

Check ReFS dedup:
Get-Command -Module Microsoft.ReFsDedup.Commands
Get-ReFSDedupStatus -Volume YourDriveLetter:

Diagnostic script to detect both:

<#
.SYNOPSIS
Detects classic NTFS Data Deduplication and ReFS Deduplication across local volumes.
.DESCRIPTION
- Reports NTFS volumes with classic Data Dedup enabled.
- Lists ReFS volumes present on the host.
- If the ReFS dedup cmdlet exists AND OS build >= 26100, checks ReFS dedup status per ReFS volume.
- Color coding:
  * Classic dedup enabled → Yellow
  * Classic dedup not enabled → Cyan
  * ReFS dedup enabled → Green
  * ReFS dedup not enabled → Cyan
.NOTES
Version: 1.7
Author: Karl Wester-Ebbinghaus + Copilot
Requirements: Elevated PowerShell session, PowerShell 5.1 or newer
Supported OS: Windows Server 2025, Azure Stack HCI 24H2 or newer
Unsupported OS: Windows 10, Windows 11 (script terminates)
#>

#region Initialization
Write-Verbose "Initializing variables and environment..."
$Volumes = $null
$Volume = $null
$DedupVolumesList = $null
$DedupReFSVolumesList = $null
$DedupReFSVolumesListLetters = $null
$DedupReFSStatus = $null
$refsCmd = $null
$OSBuild = $null
$runReFSDedupChecks = $null
#endregion Initialization

#region Volume Discovery
Clear-Host
Write-Verbose "Querying NTFS and ReFS volumes..."
$Volumes = Get-Volume | Where-Object FileSystem -in 'NTFS','ReFS'
#endregion Volume Discovery

#region ReFS Dedup Cmdlet, OS Build and OS SKU Detection
Write-Verbose "Checking for ReFS deduplication cmdlet..."
$refsCmd = Get-Command -Name Get-ReFSDedupStatus -ErrorAction SilentlyContinue

Write-Verbose "Reading OS build number..."
try {
    $OSBuild = [int](Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion' -Name CurrentBuildNumber).CurrentBuildNumber
} catch {
    Write-Verbose "Registry read for OS build failed. Falling back to Environment OSVersion."
    $OSBuild = [int][Environment]::OSVersion.Version.Build
} # end try/catch for OS build detection

Write-Verbose "Checking OS InstallationType and EditionID..."
$CurrentVersionKey = Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion'
$InstallationType = $CurrentVersionKey.InstallationType # "Client" or "Server"
$EditionID = $CurrentVersionKey.EditionID # e.g. "AzureStackHCI", "ServerStandard", etc.
Write-Verbose "Detected InstallationType: $InstallationType"
Write-Verbose "Detected EditionID: $EditionID"
Write-Verbose "Detected OSBuild: $OSBuild"

# Block Windows 10/11 (Client OS)
if ($InstallationType -eq 'Client') {
    Write-Error "Unsupported OS detected: Windows Client (Windows 10/11). Only Windows Server or Azure Stack HCI are supported. Script will terminate."
    exit
}

# Allow Azure Stack HCI explicitly
if ($EditionID -eq 'AzureStackHCI') {
    Write-Verbose "Azure Stack HCI detected. Supported platform."
} else {
    # Must be Windows Server
    if ($InstallationType -ne 'Server') {
        Write-Error "Unsupported OS detected. Only Windows Server or Azure Stack HCI are supported. Script will terminate."
        exit
    }
    Write-Verbose "Windows Server detected (EditionID: $EditionID). Supported platform."
}

Write-Verbose "Evaluating ReFS dedup eligibility based on cmdlet presence and build >= 26100..."
$runReFSDedupChecks = $false
if ($refsCmd -and ($OSBuild -ge 26100)) {
    $runReFSDedupChecks = $true
    Write-Verbose "ReFS dedup checks ENABLED (cmdlet present and OS build >= 26100)."
} else {
    Write-Verbose "ReFS dedup checks DISABLED (cmdlet missing or OS build < 26100)."
}
#endregion ReFS Dedup Cmdlet, OS Build and OS SKU Detection

#region Main Loop
foreach ($Volume in $Volumes) { # begin foreach volume loop
    Write-Host "Volume $($Volume.DriveLetter): ($($Volume.FileSystem))"
    Write-Verbose "Processing volume $($Volume.DriveLetter)..."

    #region Classic Dedup + ReFS Volume Listing
    if ($Volume.FileSystem -eq 'NTFS' -or $Volume.FileSystem -eq 'ReFS') {
        Write-Verbose "Checking classic deduplication status for volume $($Volume.DriveLetter)..."
        # Append ':' so the value matches the "X:" volume format the Dedup cmdlets expect
        $DedupVolumesList = Get-DedupVolume -Volume "$($Volume.DriveLetter):" -ErrorAction SilentlyContinue
        if ($DedupVolumesList) {
            Write-Host " → Classic Data Dedup ENABLED on $($Volume.DriveLetter), $($Volume.FileSystem)" -ForegroundColor Yellow
        } else {
            Write-Host " → Classic Data Dedup NOT enabled on $($Volume.DriveLetter), $($Volume.FileSystem)" -ForegroundColor Cyan
        } # end if classic dedup enabled

        Write-Verbose "Listing ReFS volumes on host..."
        $DedupReFSVolumesList = Get-Volume | Where-Object FileSystem -eq 'ReFS'
        if ($DedupReFSVolumesList) {
            $DedupReFSVolumesListLetters = ($DedupReFSVolumesList | ForEach-Object { $_.DriveLetter }) -join ','
            Write-Host " → ReFS volumes present on host: $DedupReFSVolumesListLetters"
        } else {
            Write-Host " → No ReFS volumes detected on host"
        } # end if ReFS volumes present
    } # end NTFS/ReFS block
    #endregion Classic Dedup + ReFS Volume Listing

    #region ReFS Dedup Status
    if ($Volume.FileSystem -eq 'ReFS') {
        if ($runReFSDedupChecks) {
            Write-Verbose "Checking ReFS deduplication status for volume $($Volume.DriveLetter)..."
            $DedupReFSStatus = Get-ReFSDedupStatus -Volume "$($Volume.DriveLetter):" -ErrorAction SilentlyContinue
            if ($DedupReFSStatus) {
                Write-Host " → ReFS Dedup ENABLED on $($Volume.DriveLetter), $($Volume.FileSystem)" -ForegroundColor Green
            } else {
                Write-Host " → ReFS Dedup NOT enabled on $($Volume.DriveLetter), $($Volume.FileSystem)" -ForegroundColor Cyan
            } # end if ReFS dedup enabled
        } else {
            # Informational skip notices (Write-Host supports -ForegroundColor; Write-Error does not)
            if (-not $refsCmd) {
                Write-Host " → Skipping ReFS dedup check: Get-ReFSDedupStatus cmdlet not present" -ForegroundColor Cyan
            } else {
                Write-Host " → Skipping ReFS dedup check: OS build $OSBuild < required 26100" -ForegroundColor Cyan
            } # end reason for skipping ReFS dedup check
        } # end if runReFSDedupChecks
    } # end if ReFS filesystem block
    #endregion ReFS Dedup Status

    Write-Host ""
} # end foreach volume loop
#endregion Main Loop

#region End
Write-Verbose "Script completed."
#endregion End

Recommendations and next steps
- Inventory: Identify volumes using NTFS dedup and ReFS dedup, and map workloads that create many small or rapidly changing files.
- Plan: Schedule rehydration and migration windows; test ReFS dedup on representative datasets.
- Upgrade: Prioritize upgrading servers still on 2016/2019 (past end of mainstream support) to reduce support risk and gain the latest ReFS dedup improvements. Kindly consider reading my Windows Server Installation Guidance and Windows Server Upgrade Guidance.
- Exclude: Keep user profiles, AppData, and other high‑churn small‑file paths off ReFS dedup or on NTFS.
- Consider ReFS Dedup with Compression: Enable compression optionally. Mind that ReFS dedup compression is not the same as the classic compressed-files option in File Explorer properties; it is transparent to the application.
- Make smart decisions: Avoid using dedup when the dataset changes fast or your dedup + compression rate is below 20%. Usually you can expect 40% or more savings, and up to 80% in specific use cases like VDI VHDX with ReFS Dedup + Compression.
- Plan your dedup jobs: Make use of the scheduling features for dedup jobs through PowerShell or Windows Admin Center (WAC) when using ReFS dedup on more than one volume per server. Otherwise the jobs might all run at the same time and impact your storage performance (especially on spinning rust) and consume extra RAM and CPU.
- Share and educate: Inform your infrastructure team about the changes so they avoid using the traditional dedup on ReFS.

Related blog posts:
- https://splitbrain.com/windows-data-deduplication-vs-refs-deduplication/ , thanks Darryl van der Peijl and team.
- https://www.veeam.com/kb2023 : Veeam best practices about Windows Deduplication and ReFS Deduplication.

Introducing the Windows NVMe-oF Initiator Preview in Windows Server Insiders Builds
What Is NVMe-over-Fabrics?
NVMe-over-Fabrics (NVMe-oF) extends the NVMe protocol—originally designed for local PCIe-attached SSDs—across a network fabric. Instead of using legacy SCSI-based protocols such as iSCSI or Fibre Channel, NVMe-oF allows a host to communicate directly with remote NVMe controllers using the same NVMe command set used for local devices.

In this Insider build, Windows Server supports:
- NVMe-oF over TCP (NVMe/TCP), allowing NVMe-oF to run over standard Ethernet networks without specialized hardware.
- NVMe-oF over RDMA (NVMe/RDMA), enabling low-latency, high-throughput NVMe access over RDMA-capable networks (for example, RoCE or iWARP) using supported RDMA NICs.

Why NVMe-oF on Windows Server?
For Windows Server deployments, NVMe-oF builds on the same principles as Native NVMe support: helping you reduce protocol overhead, improve scalability, and better align your storage stack with modern hardware. For Windows Server customers, NVMe-oF offers:
- Lower overhead networked storage access — NVMe-oF has less protocol overhead than iSCSI, helping extract the performance of modern NVMe devices while preserving the parallelism and efficiency of NVMe.
- Flexible infrastructure choices — NVMe-oF supports both TCP and RDMA transports, allowing customers to choose between standard Ethernet-based deployments or low-latency RDMA-capable networks based on their infrastructure and performance goals.
- A forward-looking storage foundation — NVMe-oF is designed to scale across multiple controllers, namespaces, and queues, making it a strong foundation for future disaggregated and software-defined storage architectures.

This Insider release represents the first step in bringing NVMe-oF capabilities natively to Windows Server.
What’s Included in This Insider Release
In this Windows Server Insider build, you can evaluate the following NVMe-oF capabilities:
- An inbox NVMe-oF initiator with NVMe/TCP and NVMe/RDMA support
- A new command-line utility, nvmeofutil.exe, for configuration and management
- Manual configuration of discovery and I/O connections
- Automatic exposure of NVMe namespaces as Windows disks once connected

Note: PowerShell cmdlets are not available yet. All configuration is performed using nvmeofutil.exe.

Getting Started with nvmeofutil.exe
To start evaluating NVMe-oF in this build, you’ll use nvmeofutil.exe, the command-line utility included with supported Windows Server Insider builds.

1. Install the Latest Windows Server Insiders Build
Ensure you are running a Windows Server Insiders build that includes:
- The inbox NVMe-oF initiator with NVMe/TCP and NVMe/RDMA support
- The nvmeofutil.exe utility

2. Open an Elevated Command Prompt
All NVMe-oF commands must be run from an administrator command prompt.

3. List Available NVMe-oF Initiator Adapters

nvmeofutil.exe list -t ia

This command displays the available NVMe-oF initiator adapters on the system.

4. Enumerate Host Gateways

nvmeofutil.exe list -t hg -ia <AdapterNumber>

Host gateways represent transport-specific endpoints, such as NVMe/TCP over IPv4.

5. Configure an I/O Subsystem Port
Tip: You’ll need three values from your target configuration: the Subsystem NQN, the target IP/DNS, and the TCP port. If you haven’t set up a target yet, see the Target Setup section below for a quick Linux-based configuration and where to find these values.

nvmeofutil.exe add -t sp -ia <Adapter> -hg <HostGateway> -dy true -pi <PortNumber> -nq <SubsystemNQN> -ta <TargetAddress> -ts <ServiceId>

This defines the connection parameters to the remote NVMe-oF target.

6. Connect and Use the Namespace

nvmeofutil.exe connect -ia <Adapter> -sp <SubsystemPort>

Once connected, the NVMe namespace appears as a disk in Windows and can be partitioned and formatted using standard Windows tools.

Target Setup (Recommendations for Early Evaluation)
If you plan to evaluate NVMe-oF with an existing storage array, check with your SAN vendor to confirm support and get configuration guidance. Where possible, we also encourage you to validate interoperability using your production storage platform. For early evaluation and lab testing, the simplest and most interoperable option is to use a Linux-based NVMe-oF target, as described below.

To evaluate the inbox Windows NVMe-oF initiator in this Insider release, you’ll need an NVMe-oF target that can export a block device as an NVMe namespace over TCP.

Recommended: Linux kernel NVMe-oF target (nvmet) over TCP
For early testing, the simplest and most interoperable option is the Linux kernel NVMe target (“nvmet”). It’s straightforward to stand up in a lab and is widely used for basic NVMe-oF interoperability validation.

Lab note: The example below uses “allow any host” to reduce friction during evaluation. In production environments, you should restrict access to specific host NQNs instead.

What You’ll Need
- A Linux system (physical or VM)
- A block device to export (an NVMe SSD, SATA SSD, a virtual disk, etc.)
- IP connectivity to your Windows Server Insider machine
- A TCP port opened between initiator and target (you’ll choose a port below)

VMs are fine for functional evaluation. For performance testing, you’ll want to move to physical hosts and realistic networking later.

Option A — Configure nvmet Directly via configfs (Minimal, Copy/Paste Friendly)
On the Linux target, run the following as root (or with sudo). This configures one NVMe-oF subsystem exporting one namespace over NVMe/TCP.
1) Load kernel modules and mount configfs

sudo modprobe nvmet
sudo modprobe nvmet-tcp   # Required for nvmet configuration
sudo mount -t configfs none /sys/kernel/config

2) Create a subsystem (choose an NQN) and allow host access
Pick a subsystem name/NQN. Use a proper NQN format to avoid collisions on shared networks (example shown).

SUBSYS="nqn.2026-02.com.contoso:win-nvmeof-test"
sudo mkdir -p /sys/kernel/config/nvmet/subsystems/$SUBSYS

# Lab-only: allow any host to connect
echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/$SUBSYS/attr_allow_any_host > /dev/null

3) Add a namespace (export a local block device)
Choose a block device on the target (example: /dev/nvme0n1). Be careful: you are exporting the raw block device.

DEV="/dev/nvme0n1"   # <-- replace with your device (e.g., /dev/sdb)
sudo mkdir -p /sys/kernel/config/nvmet/subsystems/$SUBSYS/namespaces/1
echo -n $DEV | sudo tee /sys/kernel/config/nvmet/subsystems/$SUBSYS/namespaces/1/device_path > /dev/null
echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/$SUBSYS/namespaces/1/enable > /dev/null

4) Create a TCP port (listener) and bind the subsystem
Choose:
- TRADDR = the Linux target’s IP address on the test network
- TRSVCID = the TCP port (commonly 4420, but you can use any free TCP port)

PORTID=1
TRADDR="192.168.1.92"   # <-- replace with target IP
TRSVCID="4420"          # <-- TCP port
sudo mkdir -p /sys/kernel/config/nvmet/ports/$PORTID
echo -n $TRADDR | sudo tee /sys/kernel/config/nvmet/ports/$PORTID/addr_traddr > /dev/null
echo -n tcp | sudo tee /sys/kernel/config/nvmet/ports/$PORTID/addr_trtype > /dev/null
echo -n $TRSVCID | sudo tee /sys/kernel/config/nvmet/ports/$PORTID/addr_trsvcid > /dev/null
echo -n ipv4 | sudo tee /sys/kernel/config/nvmet/ports/$PORTID/addr_adrfam > /dev/null

# Bind subsystem to port
sudo ln -s /sys/kernel/config/nvmet/subsystems/$SUBSYS \
  /sys/kernel/config/nvmet/ports/$PORTID/subsystems/$SUBSYS

5) Quick validation (optional, from any Linux host with nvme-cli)
If you have a Linux host
handy, nvme discover will confirm the target is advertising the subsystem and will show the subnqn value you’ll use from Windows.

sudo nvme discover -t tcp -a 192.168.1.92 -s 4420

Mapping the Target Values to Your Windows nvmeofutil.exe Steps
In your Windows steps, you already define the key connection parameters in the Subsystem Port add/connect flow. Use these mappings:
- SubsystemNQN (-nq) → the subsystem name/NQN you created (example: nqn.2026-02.com.contoso:win-nvmeof-test)
- TargetAddress (-ta) → the Linux target IP address (example: 192.168.1.92)
- ServiceId (-ts) → the TCP port you used (example: 4420)

Option B — If You Prefer a Tool-Based Setup: nvmetcli
If you’d rather not manipulate configfs directly, nvmetcli provides an interactive shell and can save/restore configurations from JSON (useful for repeating the setup across reboots in a lab). At a high level, nvmetcli can:
- Create subsystems and namespaces
- Configure ports (including TCP)
- Manage allowed hosts (or allow any host in controlled environments)
- Save/restore configs (for example, /etc/nvmet/config.json)

Optional (Advanced): SPDK NVMe-oF Target
If you already use SPDK or want to explore higher-performance user-space targets, SPDK’s NVMe-oF target supports TCP and RDMA and is configured via JSON-RPC. For early evaluation, the Linux kernel target above is usually the quickest path.

Known Limitations
As you evaluate this early Insider release, keep the following limitations in mind:
- Configuration is CLI-only (no GUI or PowerShell cmdlets yet)
- No multipathing
- Limited recovery behavior in some network failure scenarios
These areas are under active development.

Try It and Share Feedback
We encourage you to try NVMe-oF in your lab or test environment and share your experience on Windows Server Insiders Discussions so the engineering team can review public feedback in one place. For private feedback or questions that can’t be shared publicly, you can also reach us at nvmeofpreview@microsoft.com.
We look forward to your feedback as we take the next steps in modernizing remote storage on Windows Server.

— Yash Shekar (and the Windows Server team)

Announcing ReFS Boot for Windows Server Insiders
We’re excited to announce that Resilient File System (ReFS) boot support is now available for Windows Server Insiders in Insider Preview builds. For the first time, you can install and boot Windows Server on an ReFS-formatted boot volume directly through the setup UI. With ReFS boot, you can finally bring modern resilience, scalability, and performance to your server’s most critical volume — the OS boot volume.

Why ReFS Boot?
Modern workloads demand more from the boot volume than NTFS can provide. ReFS was designed from the ground up to protect data integrity at scale. By enabling ReFS for the OS boot volume, we ensure that even the most critical system data benefits from advanced resilience, future-proof scalability, and improved performance. In short, ReFS boot means a more robust server right from startup, with several benefits:
- Resilient OS disk: ReFS improves boot‑volume reliability by detecting corruption early and handling many file‑system issues online without requiring chkdsk. Its integrity‑first, copy‑on‑write design reduces the risk of crash‑induced corruption to help keep your system running smoothly.
- Massive scalability: ReFS supports volumes up to 35 petabytes (35,000 TB) — vastly beyond NTFS’s typical limit of 256 TB. That means your boot volume can grow with future hardware, eliminating capacity ceilings.
- Performance optimizations: ReFS uses block cloning and sparse provisioning to accelerate I/O‑heavy scenarios — enabling dramatically faster creation or expansion of large fixed‑size VHD(X) files and speeding up large file copy operations by copying data via metadata references rather than full data movement.

Maximum Boot Volume Size: NTFS vs.
ReFS

Resiliency Enhancements with ReFS Boot

Feature                              | ReFS Boot Volume | NTFS Boot Volume
Metadata checksums                   | ✅ Yes           | ❌ No
Integrity streams (optional)         | ✅ Yes           | ❌ No
Proactive error detection (scrubber) | ✅ Yes           | ❌ No
Online integrity (no chkdsk)         | ✅ Yes           | ❌ No

Check out Microsoft Learn for more information on ReFS resiliency enhancements.

Performance Enhancements with ReFS Boot

Operation                  | ReFS Boot Volume                                | NTFS Boot Volume
Fixed-size VHD creation    | Seconds                                         | Minutes
Large file copy operations | Milliseconds-seconds (independent of file size) | Seconds-minutes (linear with file size)
Sparse provisioning        | ✅                                              | ❌

Check out Microsoft Learn for more information on ReFS performance enhancements.

Getting Started with ReFS Boot
Ready to try it out? Here’s how to get started with ReFS boot on Windows Server Insider Preview:
1. Update to the latest Insider build: Ensure you’re running the most recent Windows Server vNext Insider Preview (join Windows Server Insiders if you haven’t already). Builds from 2/11/26 or later (minimum build number 29531.1000.260206-1841) include ReFS boot in setup.
2. Choose ReFS during setup: When installing Windows Server, format the system (C:) partition as ReFS in the installation UI. Note: ReFS boot requires UEFI firmware and does not support legacy BIOS boot; as a result, ReFS boot is not supported on Generation 1 VMs.
3. Complete installation & verify: Finish the Windows Server installation as usual. Once it boots, confirm that your C: drive is using ReFS (for example, by running fsutil fsinfo volumeInfo C: or checking the drive properties).

That’s it – your server is now running with an ReFS boot volume.

A step-by-step demo video showing how to install Windows Server on an ReFS-formatted boot volume, including UEFI setup, disk formatting, and post-install verification. If the player doesn’t load, open the video in a new window: Open video.
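The post-install check in step 3 can also be scripted. A minimal sketch from an elevated PowerShell prompt (exact property names and output layout may vary by build):

```powershell
# Confirm the boot volume's file system after installation
Get-Volume -DriveLetter C | Select-Object DriveLetter, FileSystemType, HealthStatus

# fsutil includes the file system name among the volume details it prints
fsutil fsinfo volumeInfo C:
```

If the commands report ReFS for C:, the server is booting from an ReFS-formatted volume.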
Call to Action
In summary, ReFS boot brings future-proof resiliency, scalability, and performance improvements to the Windows Server boot volume — reducing downtime, removing scalability limits, and accelerating large storage operations from day one. We encourage you to try ReFS boot on your servers and experience the difference for yourself. As always, we value your feedback. Please share your feedback and questions on the Windows Server Insiders Forum.

— Christina Curlette (and the Windows Server team)

Announcing Windows Server vNext Preview Build 29558
Hello Windows Server Insiders!

Today we are pleased to release a new build of the next Windows Server Long-Term Servicing Channel (LTSC) Preview that contains both the Desktop Experience and Server Core installation options for Datacenter and Standard editions and Azure Edition (for VM evaluation only). Branding remains Windows Server 2025 in this preview - when reporting issues please refer to Windows Server vNext preview.

Build 29531 established a new Server preview baseline build. Please perform a clean install of Build 29531 (or later) using the installation media linked below.

Please note: Upgrades from earlier Windows Server vNext preview builds older than 29531 are not supported. We encourage all Windows Server vNext preview users to perform a clean install using 29531 or later to successfully upgrade to future Windows Server vNext preview builds. While upgrades from earlier Windows Server previews (Build 26525 and older) are not technically blocked by setup.exe, a number of known issues have been identified related to upgrades, necessitating the establishment of a new baseline build for our Server vNext Preview Program. The new baseline build (29531) will not be Flighted due to upgrade issues. Flighting support resumed with preview build 29550 or later.

What's New
NVMe-over-Fabrics (NVMe-oF) extends the NVMe protocol—originally designed for local PCIe-attached SSDs—across a network fabric. Instead of using legacy SCSI-based protocols such as iSCSI or Fibre Channel, NVMe-oF allows a host to communicate directly with remote NVMe controllers using the same NVMe command set used for local devices. In this Insider build, Windows Server supports:
- NVMe-oF over TCP (NVMe/TCP), allowing NVMe-oF to run over standard Ethernet networks without specialized hardware.
- NVMe-oF over RDMA (NVMe/RDMA), enabling low-latency, high-throughput NVMe access over RDMA-capable networks (for example, RoCE or iWARP) using supported RDMA NICs.
For more information, please visit: Introducing the Windows NVMe-oF Initiator Preview in Windows Server Insiders Builds | Microsoft Community Hub

ReFS Boot is enabled for Windows Server vNext preview builds.
Known Limitations: ReFS Boot systems create a minimum 2GB WinRE partition. When WinRE cannot be updated due to space constraints, the system may disable WinRE. Disabling WinRE does not remove the partition. If the WinRE partition is deleted and the boot volume is extended over it, this operation is unrecoverable without a clean install. For more information, please visit: Resilient File System (ReFS) overview | Microsoft Learn

Feedback Hub app is available for Server Desktop users! The app should automatically update with the latest version, but if it does not, simply Check for updates in the app’s settings tab.

Known Issues
- [NEW] Thin Provisioning fails on clean installs in this build (29558). Users selecting Thin Provisioning when attempting clean installs of this build may experience failures. This issue is understood and will be fixed in a future release.
- Upgrading from earlier builds of Windows Server vNext previews (26525 or older) is not supported. Please perform a clean install of build 29531 or later. Users may experience failures when attempting to upgrade from earlier previews (build 26525 and older). VMs may fail to upgrade or start after upgrade from older preview builds, impacting live migration and failover cluster scenarios.

Download Windows Server Insider Preview (microsoft.com)

- Flighting: The label for this flight may incorrectly reference Windows 11. However, when selected, the package installed is the Windows Server vNext update. Please ignore the label and proceed with installing your flight. This issue will be addressed in a future release.

Available Downloads
Downloads to certain countries may not be available. See Microsoft suspends new sales in Russia - Microsoft On the Issues.
- Windows Server Long-Term Servicing Channel Preview in ISO format in 18 languages, and in VHDX format in English only.
- Windows Server Datacenter Azure Edition Preview in ISO and VHDX format, English only.
- Microsoft Server Languages and Optional Features Preview

Keys (valid for preview builds only):
- Server Standard: MFY9F-XBN2F-TYFMP-CCV49-RMYVH
- Datacenter: 2KNJJ-33Y9H-2GXGX-KMQWH-G6H67
- Azure Edition does not accept a key.

Symbols: Available on the public symbol server; see Using the Microsoft Symbol Server.

Expiration: This Windows Server Preview will expire September 15, 2026.

How to Download

Registered Insiders may navigate directly to the Windows Server Insider Preview download page. If you have not yet registered as an Insider, see GETTING STARTED WITH SERVER on the Windows Insiders for Business portal.

We value your feedback!

The most important part of the release cycle is hearing what is working and what needs to be improved, so your feedback is extremely valuable. Please use the new Feedback Hub app for Windows Server if you are running a Desktop version of Server. If you are using a Core edition, or if you are unable to use the Feedback Hub app, you can use a registered Windows 10 or Windows 11 Insider device and the Feedback Hub application there. In the app, choose the Windows Server category and then the appropriate subcategory for your feedback. In the title of the feedback, please indicate the build number you are providing feedback on, as shown below, to ensure that your issue is attributed to the right version:

[Server #####] Title of my feedback

See Give Feedback on Windows Server via Feedback Hub for specifics. The Windows Server Insiders space on the Microsoft Tech Communities supports preview builds of the next version of Windows Server. Use the forum to collaborate, share, and learn from experts. For versions that have been released to general availability, try the Windows Server for IT Pro forum or contact Support for Business.
Diagnostic and Usage Information

Microsoft collects this information over the internet to help keep Windows secure and up to date, troubleshoot problems, and make product improvements. Microsoft server operating systems can be configured to turn diagnostic data off, send Required diagnostic data, or send Optional diagnostic data. During previews, Microsoft asks that you change the default setting to Optional to provide the best automatic feedback and help us improve the final product. Administrators can change the level of information collection through Settings. For details, see http://aka.ms/winserverdata. Also see the Microsoft Privacy Statement.

Terms of Use

This is pre-release software; it is provided for use "as-is" and is not supported in production environments. Users are responsible for installing any updates that may be made available from Windows Update. All pre-release software made available to you via the Windows Server Insider program is governed by the Insider Terms of Use.

Windows NVMe-oF Initiator connect error
Hi everyone, I am trying to test the new Windows NVMe-oF Initiator in build 29550.1000, but I am not able to connect to a remote storage array; I get the error "IOCTL_NVMEOF_CONNECT_CONTROLLER failed with error code 0x1f" (see screenshot). Any idea what this error means or what might be wrong?

Capabilities of NVMe-oF so far?
Started playing around with NVMe-oF last night and got a successful connection to TrueNAS. I'm wondering what the current capabilities and limitations are. As far as I can tell, you can't cluster the disk, which is what I was primarily interested in. I was able to connect the same NQN endpoint to two servers, but the WSFC cluster wouldn't detect it as a cluster-available disk. Also, the disk seems to disappear on every reboot and has to be set up again, so is there no auto-mount yet? I understand this is an early preview, so I don't have many expectations; I'd just like some clarity. Thank you.

Updated Failover Clustering API Documentation
Does anyone know when the Failover Cluster API documentation will be updated? Currently, Microsoft Learn only shows Server 2008 and 2012: https://learn.microsoft.com/en-us/previous-versions/windows/desktop/mscs/failover-cluster-apis-portal The same goes for Storage Spaces. Is there any updated documentation?

Announcing Native NVMe in Windows Server 2025: Ushering in a New Era of Storage Performance
We're thrilled to announce the arrival of Native NVMe support in Windows Server 2025, a leap forward in storage innovation that will redefine what's possible for your most demanding workloads.

Modern NVMe (Non-Volatile Memory Express) SSDs now operate more efficiently with Windows Server. This improvement comes from a redesigned Windows storage stack that no longer treats all storage devices as SCSI (Small Computer System Interface) devices, a method traditionally used for older, slower drives. By eliminating the need to translate NVMe commands into SCSI commands, Windows Server reduces processing overhead and latency. Additionally, the entire I/O processing workflow has been redesigned for extreme performance.

This release is the result of close collaboration between our engineering teams and hardware partners, and it serves as a cornerstone in modernizing our storage stack. Native NVMe is now generally available (GA) with an opt-in model (disabled by default as of October's latest cumulative update for WS2025). Switch on Native NVMe as soon as possible, or you are leaving performance gains on the table! Stay tuned for more updates from our team as we transition to a dramatically faster, more efficient storage future.

Why Native NVMe, and why now?

Modern NVMe devices, like PCIe Gen5 enterprise SSDs capable of 3.3 million IOPS, or HBAs delivering over 10 million IOPS on a single disk, are pushing the boundaries of what storage can do. SCSI-based I/O processing can't keep up because it uses a single-queue model, originally designed for rotational disks, in which protocols like SATA support just one queue with up to 32 commands. In contrast, NVMe was designed from the ground up for flash storage and supports up to 64,000 queues, with each queue capable of handling up to 64,000 commands simultaneously. With Native NVMe in Windows Server 2025, the storage stack is purpose-built for modern hardware, eliminating translation layers and legacy constraints.
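To put those queue numbers in perspective, here is a back-of-the-envelope comparison sketched in Python. It uses only the limits quoted above (a single 32-command SATA/NCQ queue versus the NVMe specification's maximums); real devices typically implement far fewer queues than the spec allows.

```python
# Illustrative arithmetic only, using the limits quoted in the text above.

SATA_QUEUES = 1            # single-queue model (NCQ)
SATA_QUEUE_DEPTH = 32      # up to 32 outstanding commands

NVME_MAX_QUEUES = 64_000       # up to 64,000 I/O queues (spec maximum)
NVME_MAX_QUEUE_DEPTH = 64_000  # up to 64,000 commands per queue (spec maximum)

sata_outstanding = SATA_QUEUES * SATA_QUEUE_DEPTH
nvme_outstanding = NVME_MAX_QUEUES * NVME_MAX_QUEUE_DEPTH

print(sata_outstanding)                      # 32
print(nvme_outstanding)                      # 4096000000
print(nvme_outstanding // sata_outstanding)  # 128000000 (times more command slots)
```

Even allowing for much smaller real-world queue counts, the gap explains why a SCSI translation layer built around a single queue becomes the bottleneck long before the device does.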
Here's what that means for you:

- Massive IOPS gains: Direct, multi-queue access to NVMe devices means you can finally reach the true limits of your hardware.
- Lower latency: Traditional SCSI-based stacks rely on shared locks and synchronization mechanisms in the kernel I/O path to manage resources. Native NVMe enables streamlined, lock-free I/O paths that slash round-trip times for every operation.
- CPU efficiency: A leaner, optimized stack frees up compute for your workloads instead of storage overhead.
- Future-ready features: Native support for advanced NVMe capabilities such as multi-queue and direct submission ensures you're ready for next-generation storage innovation.

Performance Data

Using DiskSpd.exe, basic performance testing shows that with Native NVMe enabled, WS2025 systems can deliver up to ~80% more IOPS and ~45% savings in CPU cycles per I/O on 4K random read workloads on NTFS volumes, compared to WS2022. This test ran on a host with a dual-socket Intel CPU (208 logical processors, 128 GB RAM) and a Solidigm SB5PH27X038T 3.5 TB NVMe device. The test can be recreated by running "diskspd.exe -b4k -r -Su -t8 -L -o32 -W10 -d30 testfile1.dat > output.dat" and modifying the parameters as desired. Results may vary.

Top Use Cases: Where You'll See the Difference

Try Native NVMe on servers running your enterprise applications. These gains are not just for synthetic benchmarks; they translate directly into faster database transactions, quicker VM operations, and more responsive file and analytics workloads.

- SQL Server and OLTP: Shorter transaction times, higher IOPS, and lower tail latency under mixed read/write workloads.
- Hyper-V and virtualization: Faster VM boot, checkpoint operations, and live migration with reduced storage contention.
- High-performance file servers: Faster large-file reads/writes and quicker metadata operations (copy, backup, restore).
- AI/ML and analytics: Low-latency access to large datasets and faster ETL, shuffle, and cache/scratch I/O.
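The headline figures combine in an interesting way: more IOPS delivered while spending fewer CPU cycles per I/O multiplies into a much larger gain in I/O per unit of CPU. The sketch below unpacks the quoted DiskSpd parameters (`-t8 -o32` means 8 threads with 32 outstanding I/Os each) and the ~80%/~45% numbers; the baseline IOPS value is an invented placeholder used only to show the arithmetic.

```python
# diskspd.exe -b4k -r -Su -t8 -L -o32 ...: 8 threads x 32 outstanding I/Os each
threads, outstanding_per_thread = 8, 32
total_outstanding = threads * outstanding_per_thread
print(total_outstanding)  # 256 concurrent 4K random reads in flight

baseline_iops = 1_000_000      # hypothetical WS2022 result (placeholder)
iops_gain = 0.80               # "up to ~80% more IOPS"
cycles_saving = 0.45           # "~45% savings in CPU cycles per I/O"

ws2025_iops = int(baseline_iops * (1 + iops_gain))
relative_cycles_per_io = 1 - cycles_saving

# I/O delivered per unit of CPU improves multiplicatively:
iops_per_cpu_gain = (1 + iops_gain) / relative_cycles_per_io
print(ws2025_iops)                   # 1800000
print(round(iops_per_cpu_gain, 2))   # 3.27 (roughly 3.3x more I/O per CPU cycle)
```

In other words, at the quoted upper bounds a server could push nearly twice the I/O while leaving substantially more CPU for the workload itself.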
How to Get Started

1. Check your hardware: Ensure you have NVMe-capable devices that are currently using the in-box Windows NVMe driver (StorNVMe.sys). Some NVMe device vendors provide their own drivers; unless you are using the in-box Windows NVMe driver, you will not notice any difference.

2. Enable Native NVMe: After applying the 2510-B Latest Cumulative Update (or the most recent one), add the registry key with the following command:

reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Policies\Microsoft\FeatureManagement\Overrides /v 1176759950 /t REG_DWORD /d 1 /f

Alternatively, use the Group Policy MSI to add the policy that controls the feature, then run the local Group Policy Editor to enable the policy (found under Local Computer Policy > Computer Configuration > Administrative Templates > KB5066835 251014_21251 Feature Preview > Windows 11, version 24H2, 25H2). Once Native NVMe is enabled, open Device Manager and ensure that all attached NVMe devices are displayed under the "Storage disks" section.

3. Monitor and validate: Use Performance Monitor and Windows Admin Center to see the gains for yourself, or try DiskSpd.exe to measure microbenchmarks in your own environment. A quick way to measure IOPS in Performance Monitor is to set up a histogram chart and add a counter for Physical Disk > Disk Transfers/sec (where the selected instance is a drive that corresponds to one of your attached NVMe devices), then run a synthetic workload with DiskSpd. Compare the numbers before and after enabling Native NVMe to see the realized difference in your environment.

Join the Storage Revolution

This is more than just a feature; it's a new foundation for Windows Server storage, built for the future. We can't wait for you to experience the difference. Share your feedback, ask questions, and join the conversation. Let's build the future of high-performance Windows Server storage together. Send us your feedback or questions at nativenvme@microsoft.com!
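For the monitoring step, it helps to know what a rate counter like Disk Transfers/sec actually reports: the difference between two samples of a cumulative transfer count, divided by the sampling interval. This small sketch shows that derivation; the sample values are invented for illustration.

```python
# Illustrative sketch of how an IOPS figure is derived from two samples of a
# cumulative transfer counter (the idea behind "Disk Transfers/sec").

def iops_from_samples(transfers_t0: int, transfers_t1: int, seconds: float) -> float:
    """Average I/O operations per second between two counter samples."""
    return (transfers_t1 - transfers_t0) / seconds

# e.g. the cumulative transfer count grew by 12,000,000 over a 10-second window:
print(iops_from_samples(5_000_000, 17_000_000, 10.0))  # 1200000.0 IOPS
```

Capture the same counter over the same workload before and after enabling Native NVMe, and the before/after ratio is your realized gain.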
— Yash Shekar (and the Windows Server team)

Windows NVMe-oF Initiator
Hi everyone, I am trying to test the new Windows NVMe-oF Initiator in build 29550.1000, but I am not able to configure an I/O Subsystem Port; I keep getting the message "A Host Gateway with the specified identifier was not found for the Initiator Adapter." (see screenshot). Any idea what I am doing wrong? [Solved]

NTFS permissions are partially not working.
Participant A is sometimes unable to see Participant B's files. The issue can be resolved by selecting the option "Replace all child object permission entries with inheritable permission entries from this object", but the problem keeps reappearing. Environment: Windows Server 2022 Datacenter (VMware 7.1), volume formatted as NTFS.