Windows Server

Announcing ReFS Boot for Windows Server Insiders
We’re excited to announce that Resilient File System (ReFS) boot support is now available for Windows Server Insiders in Insider Preview builds. For the first time, you can install and boot Windows Server on a ReFS-formatted boot volume directly through the setup UI. With ReFS boot, you can finally bring modern resilience, scalability, and performance to your server’s most critical volume: the OS boot volume.

Why ReFS Boot?

Modern workloads demand more from the boot volume than NTFS can provide. ReFS was designed from the ground up to protect data integrity at scale. By enabling ReFS for the OS boot volume, we ensure that even the most critical system data benefits from advanced resilience, future-proof scalability, and improved performance. In short, ReFS boot means a more robust server right from startup, with several benefits:

- Resilient OS disk: ReFS improves boot-volume reliability by detecting corruption early and handling many file-system issues online without requiring chkdsk. Its integrity-first, copy-on-write design reduces the risk of crash-induced corruption to help keep your system running smoothly.
- Massive scalability: ReFS supports volumes up to 35 petabytes (35,000 TB), vastly beyond NTFS’s typical limit of 256 TB. That means your boot volume can grow with future hardware, eliminating capacity ceilings.
- Performance optimizations: ReFS uses block cloning and sparse provisioning to accelerate I/O-heavy scenarios, enabling dramatically faster creation or expansion of large fixed-size VHD(X) files and speeding up large file copy operations by copying data via metadata references rather than full data movement.

[Chart: Maximum Boot Volume Size, NTFS vs. ReFS]

Resiliency Enhancements with ReFS Boot

Feature | ReFS Boot Volume | NTFS Boot Volume
Metadata checksums | ✅ Yes | ❌ No
Integrity streams (optional) | ✅ Yes | ❌ No
Proactive error detection (scrubber) | ✅ Yes | ❌ No
Online integrity (no chkdsk) | ✅ Yes | ❌ No

Check out Microsoft Learn for more information on ReFS resiliency enhancements.

Performance Enhancements with ReFS Boot

Operation | ReFS Boot Volume | NTFS Boot Volume
Fixed-size VHD creation | Seconds | Minutes
Large file copy operations | Milliseconds to seconds (independent of file size) | Seconds to minutes (linear with file size)
Sparse provisioning | ✅ Yes | ❌ No

Check out Microsoft Learn for more information on ReFS performance enhancements.

Getting Started with ReFS Boot

Ready to try it out? Here’s how to get started with ReFS boot on Windows Server Insider Preview:

1. Update to the latest Insider build: Ensure you’re running the most recent Windows Server vNext Insider Preview (join Windows Server Insiders if you haven’t already). Builds from 2/11/26 or later (minimum build number 29531.1000.260206-1841) include ReFS boot in setup.
2. Choose ReFS during setup: When installing Windows Server, format the system (C:) partition as ReFS in the installation UI. Note: ReFS boot requires UEFI firmware and does not support legacy BIOS boot; as a result, ReFS boot is not supported on Generation 1 VMs.
3. Complete installation and verify: Finish the Windows Server installation as usual. Once it boots, confirm that your C: drive is using ReFS (for example, by running fsutil fsinfo volumeInfo C: or checking the drive properties).

That’s it: your server is now running with a ReFS boot volume.

A step-by-step demo video shows how to install Windows Server on a ReFS-formatted boot volume, including UEFI setup, disk formatting, and post-install verification.
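The verification in step 3 can be scripted. Below is a minimal sketch (in Python, for illustration) that extracts the file system name from the text that fsutil fsinfo volumeInfo prints; the sample output here is an assumed approximation of fsutil's format, not captured from a real server, so treat the field layout as an assumption:

```python
# Sketch: pull the "File System Name" field out of fsutil output.
# On a real server you would capture the text with subprocess.run(
# ["fsutil", "fsinfo", "volumeInfo", "C:"], ...); the sample below
# is a hypothetical stand-in for that output.
def file_system_name(volume_info: str) -> str:
    """Return the value of the 'File System Name' field, or '' if absent."""
    for line in volume_info.splitlines():
        key, sep, value = line.partition(":")
        if sep and key.strip().lower() == "file system name":
            return value.strip()
    return ""

sample = """Volume Name : System
Volume Serial Number : 0x1a2b3c4d
Max Component Length : 255
File System Name : ReFS
"""

print(file_system_name(sample))  # ReFS
```

A check like this could gate post-install automation on the boot volume actually being ReFS.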
Call to Action

In summary, ReFS boot brings future-proof resiliency, scalability, and performance improvements to the Windows Server boot volume: reducing downtime, removing scalability limits, and accelerating large storage operations from day one. We encourage you to try ReFS boot on your servers and experience the difference for yourself. As always, we value your feedback. Please share your feedback and questions on the Windows Server Insiders Forum.

— Christina Curlette (and the Windows Server team)

Announcing Native NVMe in Windows Server 2025: Ushering in a New Era of Storage Performance
We’re thrilled to announce the arrival of Native NVMe support in Windows Server 2025, a leap forward in storage innovation that will redefine what’s possible for your most demanding workloads.

Modern NVMe (Non-Volatile Memory Express) SSDs now operate more efficiently with Windows Server. This improvement comes from a redesigned Windows storage stack that no longer treats all storage devices as SCSI (Small Computer System Interface) devices, a method traditionally used for older, slower drives. By eliminating the need to convert NVMe commands into SCSI commands, Windows Server reduces processing overhead and latency. Additionally, the whole I/O processing workflow is redesigned for extreme performance. This release is the result of close collaboration between our engineering teams and hardware partners, and it serves as a cornerstone in modernizing our storage stack.

Native NVMe is now generally available (GA) with an opt-in model (disabled by default as of October’s latest cumulative update for WS2025). Switch on Native NVMe as soon as possible, or you are leaving performance gains on the table! Stay tuned for more updates from our team as we transition to a dramatically faster, more efficient storage future.

Why Native NVMe, and why now?

Modern NVMe devices, like PCIe Gen5 enterprise SSDs capable of 3.3 million IOPS, or HBAs delivering over 10 million IOPS on a single disk, are pushing the boundaries of what storage can do. SCSI-based I/O processing can’t keep up because it uses a single-queue model, originally designed for rotational disks, where protocols like SATA support just one queue with up to 32 commands. In contrast, NVMe was designed from the ground up for flash storage and supports up to 64,000 queues, with each queue capable of handling up to 64,000 commands simultaneously. With Native NVMe in Windows Server 2025, the storage stack is purpose-built for modern hardware, eliminating translation layers and legacy constraints.
Here’s what that means for you:

- Massive IOPS gains: Direct, multi-queue access to NVMe devices means you can finally reach the true limits of your hardware.
- Lower latency: Traditional SCSI-based stacks rely on shared locks and synchronization mechanisms in the kernel I/O path to manage resources. Native NVMe enables streamlined, lock-free I/O paths that slash round-trip times for every operation.
- CPU efficiency: A leaner, optimized stack frees up compute for your workloads instead of storage overhead.
- Future-ready features: Native support for advanced NVMe capabilities like multi-queue and direct submission ensures you’re ready for next-gen storage innovation.

Performance Data

Using DiskSpd.exe, basic performance testing shows that with Native NVMe enabled, WS2025 systems can deliver up to ~80% more IOPS and ~45% savings in CPU cycles per I/O on 4K random read workloads on NTFS volumes when compared to WS2022. This test ran on a host with a dual-socket Intel CPU (208 logical processors, 128 GB RAM) and a Solidigm SB5PH27X038T 3.5 TB NVMe device. The test can be recreated by running "diskspd.exe -b4k -r -Su -t8 -L -o32 -W10 -d30 testfile1.dat > output.dat" and modifying the parameters as desired. Results may vary.

Top Use Cases: Where You’ll See the Difference

Try Native NVMe on servers running your enterprise applications. These gains are not just for synthetic benchmarks; they translate directly to faster database transactions, quicker VM operations, and more responsive file and analytics workloads.

- SQL Server and OLTP: Shorter transaction times, higher IOPS, and lower tail latency under mixed read/write workloads.
- Hyper-V and virtualization: Faster VM boot, checkpoint operations, and live migration with reduced storage contention.
- High-performance file servers: Faster large-file reads/writes and quicker metadata operations (copy, backup, restore).
- AI/ML and analytics: Low-latency access to large datasets and faster ETL, shuffle, and cache/scratch I/O.
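The queue-depth gap described earlier (one SATA queue of up to 32 commands versus NVMe's 64,000 queues of 64,000 commands each) is easy to make concrete. A back-of-the-envelope sketch, using only the figures quoted in this article:

```python
# Outstanding-command capacity, using the figures quoted above.
sata_queues, sata_depth = 1, 32            # single-queue SATA/SCSI model
nvme_queues, nvme_depth = 64_000, 64_000   # NVMe multi-queue design

sata_outstanding = sata_queues * sata_depth    # 32 commands in flight
nvme_outstanding = nvme_queues * nvme_depth    # 4,096,000,000 commands in flight

print(f"SATA outstanding commands: {sata_outstanding}")
print(f"NVMe outstanding commands: {nvme_outstanding:,}")
print(f"Ratio: {nvme_outstanding // sata_outstanding:,}x")  # 128,000,000x
```

Real devices expose far fewer queues than the protocol maximum, but the arithmetic shows why a translation layer built around a single 32-deep queue becomes the bottleneck.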
How to Get Started

1. Check your hardware: Ensure you have NVMe-capable devices that are currently using the in-box Windows NVMe driver (StorNVMe.sys). Note that some NVMe device vendors provide their own drivers; unless your devices use the in-box Windows NVMe driver, you will not notice any difference.

2. Enable Native NVMe: After applying the 2510-B Latest Cumulative Update (or a more recent one), add the registry key with the following command:

reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Policies\Microsoft\FeatureManagement\Overrides /v 1176759950 /t REG_DWORD /d 1 /f

Alternatively, use the Group Policy MSI to add the policy that controls the feature, then run the local Group Policy Editor to enable the policy (found under Local Computer Policy > Computer Configuration > Administrative Templates > KB5066835 251014_21251 Feature Preview > Windows 11, version 24H2, 25H2). Once Native NVMe is enabled, open Device Manager and ensure that all attached NVMe devices are displayed under the “Storage disks” section.

3. Monitor and validate: Use Performance Monitor and Windows Admin Center to see the gains for yourself, or try DiskSpd.exe to measure microbenchmarks in your own environment. A quick way to measure IOPS in Performance Monitor is to set up a histogram chart and add a counter for Physical Disk > Disk Transfers/sec (where the selected instance is a drive that corresponds to one of your attached NVMe devices), then run a synthetic workload with DiskSpd. Compare the numbers before and after enabling Native NVMe to see the realized difference in your real environment!

Join the Storage Revolution

This is more than just a feature; it’s a new foundation for Windows Server storage, built for the future. We can’t wait for you to experience the difference. Share your feedback, ask questions, and join the conversation. Let’s build the future of high-performance Windows Server storage together. Send us your feedback or questions at nativenvme@microsoft.com!
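The before/after comparison suggested in step 3 reduces to simple arithmetic over Disk Transfers/sec samples. A small illustrative sketch (in Python; the counter values below are hypothetical, not measured):

```python
# Sketch: compare Disk Transfers/sec samples collected in Performance
# Monitor before and after enabling Native NVMe. The sample values are
# hypothetical placeholders; substitute readings from your own counters.
def mean(samples):
    return sum(samples) / len(samples)

def iops_gain_pct(before, after):
    """Percent IOPS improvement of 'after' over the 'before' baseline."""
    return (mean(after) - mean(before)) / mean(before) * 100

before = [510_000, 495_000, 505_000]   # hypothetical baseline samples
after  = [900_000, 910_000, 890_000]   # hypothetical Native NVMe samples

print(f"IOPS gain: {iops_gain_pct(before, after):.1f}%")  # IOPS gain: 78.8%
```

Averaging several samples smooths out per-second counter jitter; the same helper works for any pair of before/after counter runs.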
— Yash Shekar (and the Windows Server team)

Windows Server 2025 Remote Desktop Session Host Capacity Planning Whitepaper
The Remote Desktop Session Host (RDSH) is a role service available on Windows Server 2025 that allows multiple users to access desktops and applications hosted on a single machine simultaneously. This document serves as a guide for capacity planning of Remote Desktop Session Host servers running Windows Server 2025.

In a server-based computing environment, all application execution and data processing occur on the server. Consequently, the server is one of the systems most likely to experience resource depletion during peak loads, which can lead to disruptions throughout the deployment. A multi-session computing environment experiences significantly higher peak loads than single-session environments. An RDSH server with a specific hardware capacity has a maximum workload limit that it can support before its resources are exhausted. RDSH customers need to estimate the type and quantity of hardware required for their user base. The process of doing this type of evaluation is referred to as capacity planning.

Multi-session capacity planning depends on the specific application usage pattern of the user base. Based on the user scenario, an estimate can be made of the hardware required to support the targeted user capacity. This white paper presents guidelines and a general methodology for assessing a server’s capacity using a sample user scenario. It outlines the methodology employed for capacity planning using Microsoft's internal tools, includes various test cases, and provides an analysis of the results. The document also provides guidance on the hardware and other parameters that can have a significant impact on the number of users a server can support effectively. You can read the rest of the whitepaper by downloading it here.

CSV Auto-Pause on Windows Server 2025 Hyper-V Cluster
Hi everyone, I'm facing very strange behavior with a newly created Hyper-V cluster running on Windows Server 2025. One of the two nodes keeps calling for auto-pause on the CSV during the I/O peak. Has anyone experienced this? Here are the details:

Environment

- Cluster: 2-node Failover Cluster
- Nodes: HV1 & HV2 (HPE ProLiant DL360 Gen11)
- OS: Windows Server 2025 Datacenter, Build 26100.32370 (KB5075899 installed Feb 21, 2026)
- Storage: HPE MSA 2070 full SSD, iSCSI point-to-point (4×25 Gbps per node, 4 MPIO paths)
- CSV: Single volume "Cluster Disk 2" (~14 TB, NTFS, CSVFS_NTFS)
- Quorum: Disk Witness (Node and Disk Majority)
- Networking: 4×10 Gbps NIC teaming for management/cluster/VM traffic, dedicated iSCSI NICs

Problem Description

The cluster experiences CSV auto-pause events daily during a peak I/O period (~10:00-11:30), caused by database VMs generating ~600-800 MB/s (not that much). The auto-pause is triggered by HV2's CsvFs driver, even though HV2 hosts no VMs. All VMs run on HV1, which is the CSV coordinator/owner.

Comparative Testing (Feb 23-26, 2026)

Date | HV2 Status | Event 5120 | SMB Slowdowns (1054) | Auto-pause Cycles | VM Impact
Feb 23 | Active | 1 | 44 | 1 cycle (237 ms recovery) | None
Feb 24 | Active | 0 | 8 | 0 | None
Feb 25 | Drained (still in cluster) | 4 | ~60 (86,400,000 ms max!) | 3 cascade cycles | Severe: all VMs affected
Feb 26 | Powered off | 0 | 0 | 0 | None

Key finding: Draining HV2 does NOT prevent the issue. Only fully powering off HV2 eliminates all auto-pause events and SMB slowdowns during the I/O peak.

Root Cause Analysis

1. CsvFs driver on HV2 maintains persistent SMB sessions to the CSV

The SMB Client Connectivity log (Event 30833) on HV2 shows ~130 new SMB connections per hour to the CSV share, continuously, constant since boot:

- Share: \\xxxx::xxx:xxx:xxx:xxx\xxxxxxxx-...-xxxxxxx$ (HV1 cluster virtual adapter)
- All connections from PID 4 (System/kernel), i.e. the CsvFs driver
- 5,649 connections in 43.6 hours = ~130/hour
- Each connection has a different session ID (not persistent)
- This behavior continues even when HV2 is drained

2. HV2 opens handles on ALL VM files

During the I/O peak on Feb 25, the SMB Server Operational log (Event 1054) on HV1 showed HV2 blocking on files from every VM directory, including powered-off VMs and templates:

- .vmgs, .VMRS, .vmcx, .xml: VM configuration and state files
- .rct, .mrt: RCT/CBT tracking files
- Affected VMs: almost all, including powered-off VMs and the winsrv2025-template template

3. Catastrophic block durations

On Feb 25 (HV2 drained but still in cluster):
- Operations blocked for 86,400,000 ms (exactly 24 hours): handles had accumulated since the previous day
- These all expired simultaneously at 10:13:52, triggering the cascade auto-pause
- Post-auto-pause: major VM freeze/lag for an additional 2,324 seconds (39 minutes)

On Feb 24 (HV2 active):
- Operations blocked for 1,150,968 ms (19 minutes) on one of the VM files
- Despite this extreme duration, no auto-pause was triggered that day

4. Auto-pause trigger mechanism

HV2 Diagnostic log at auto-pause time:
- CsvFs Listener: CsvFsVolumeStateChangeFromIO->CsvFsVolumeStateDraining, status 0xc0000001
- OnVolumeEventFromCsvFs: reported VolumeEventAutopause to node 1
- Error status 0xc0000001 (STATUS_UNSUCCESSFUL) on an I/O operation from HV2
- CsvFsVolumeStateChangeFromIO = an I/O failure triggered the auto-pause
- HV2 has no VMs running; this is purely CsvFs metadata/redirected access

5. SMB connection loss during auto-pause

SMB Client Connectivity on HV2 at auto-pause time:
- Event 30807: Share connection lost: "Le nom réseau a été supprimé" ("The network name was deleted")
- Event 30808: Share connection re-established

What Has Been Done

- KB5075899 installed (Feb 21): may have improved recovery slightly, from a multi-cycle loop to a single cycle, but did not prevent the auto-pause
- Disabled the ms_server binding on the iSCSI NICs (both nodes)
- Tuned MPIO: PathVerification enabled, PDORemovePeriod 120, RetryCount 6, DiskTimeout 100
- Drained HV2: no effect
- Powered off HV2: completely eliminated the problem

I'm going mad with this problem. I've deployed a lot of Hyper-V clusters and it's the first time I'm experiencing such strange behavior; the only workaround I've found is to take the second node offline to be sure it isn't putting locks on CSV files. The cluster only runs well with one node turned on.

- Why does the CsvFs driver on a non-coordinator node (HV2) maintain ~130 new SMB connections per hour to the CSV, even when it hosts no VMs and is drained?
- Why do these connections block for up to 24 hours during I/O peaks on the coordinator node?
- Why does draining the node not prevent CsvFs from accessing the CSV?
- Is this a known issue with the CsvFs driver in Windows Server 2025 Build 26100.32370?
- Are there any registry parameters to limit or disable CsvFs metadata scanning on non-coordinator nodes?

If someone sees something I am missing, I would be so grateful! Have a great day.

Users "Status" fields blank on RDS with Windows Server 2025
Hi, we have two RDS servers with Windows Server 2025 installed (in-place upgrade from Server 2019). In Task Manager, under the "Users" tab, all fields of the "Status" column are blank, so we can't see whether a user is connected or disconnected. In cmd, "query user" works. Has anyone else discovered this problem?

Server 2025 not accepting Ricoh scans
The scanner has stopped scanning to the server since I upgraded the server OS from Windows Server 2022 to 2025. What I have tried:

- Installed the Ricoh drivers for both the scanner and printer (from Ricoh’s web site)
- Created a new simple share/filepath for the scanner to send to (\\SERVER2022\Scans)
- Used the IP address (10.1.10.2) instead of the server name in the file (UNC) path
- Entered admin credentials with and without the server name (it is a workgroup server, not a DC)
- Created another user and tried all of the above with that new admin
- With either server share and/or user, tried different permissions on the shared folder
- Tried disabling/enabling inherited permissions on the shared folder
- Disabled the Advanced Firewall entirely for testing; no change either way
- Double-checked incoming ports/programs on the firewall; all required ones were open
- Activated SMB1 on the server, and tried with and without SMB2/SMB3 disabled
- I was able to create a share on two other computers, one running Windows 10 and one running Windows 11; they both worked.

Migrating from VMware to Hyper-V
Hi, I've recently deployed a new 3-node Hyper-V cluster running Windows Server 2025, and I have an existing VMware cluster running ESXi 7.x. What tools or approach have you used to migrate from VMware to Hyper-V? I can see there are many third-party tools available, and Windows Admin Center now appears to support this as well. Having never done a VMware-to-Hyper-V migration before, I'm not sure what the best method is. Does anyone here have experience and recommendations, please?

Encrypted vhdx moved to new host, boots without PIN or recovery key
Hyper-V environment. I enabled vTPM on a guest with a Server 2022 OS and encrypted the OS drive (C:) with BitLocker; the Server 2022 host has a physical TPM. I shut down the guest OS and copied the vhdx file to another Hyper-V host server that is completely off the network (also Server 2022 with a physical TPM), then created a new VM based on the "encrypted" vhdx. I was able to start the VM without needing a PIN or a recovery key. Doesn't this defeat the whole point of encrypting VHDs? Searching says this should not be possible, but I replicated it twice on two different off-network Hyper-V host servers. Another odd thing: when the guest boots on the new host and you log in, the drive is NOT encrypted. So, where's the security in that? Does anyone have any ideas on this, or am I missing something completely? Or have I just made Microsoft angry for pointing out this glaring flaw?

Issues with Group Policy Update (gpupdate)
I am getting an error when I attempt to perform a gpupdate /force on workstations. I have checked the health of the DCs and found no issues. I am including a screenshot of the error, hoping someone can guide me on how to resolve it. The system says to reboot, but the policy never seems to run; it just keeps prompting for a reboot.

Did Microsoft make a mistake? WinServer 2022 Standard and up.
Microsoft removed functionality from Windows Deployment Services. I know there are ways to get around this, but as far as I know at the time of writing, they are either hack jobs or deploying your own Windows with PE. I know I could go Linux; they have a simple CD to follow. And Mac has its own version for Macs. But not Microsoft; they threw it away for some stupid reason. Do I really have to use a VM, or worse, ditch DNS & DHCP?