Windows Server
Introducing the Windows NVMe-oF Initiator Preview in Windows Server Insiders Builds
What Is NVMe-over-Fabrics?

NVMe-over-Fabrics (NVMe-oF) extends the NVMe protocol—originally designed for local PCIe-attached SSDs—across a network fabric. Instead of using legacy SCSI-based protocols such as iSCSI or Fibre Channel, NVMe-oF allows a host to communicate directly with remote NVMe controllers using the same NVMe command set used for local devices.

In this Insider build, Windows Server supports:
- NVMe-oF over TCP (NVMe/TCP), allowing NVMe-oF to run over standard Ethernet networks without specialized hardware.
- NVMe-oF over RDMA (NVMe/RDMA), enabling low-latency, high-throughput NVMe access over RDMA-capable networks (for example, RoCE or iWARP) using supported RDMA NICs.

Why NVMe-oF on Windows Server?

For Windows Server deployments, NVMe-oF builds on the same principles as Native NVMe support: helping you reduce protocol overhead, improve scalability, and better align your storage stack with modern hardware. For Windows Server customers, NVMe-oF offers:
- Lower-overhead networked storage access — NVMe-oF has less protocol overhead than iSCSI, helping extract the performance of modern NVMe devices while preserving the parallelism and efficiency of NVMe.
- Flexible infrastructure choices — NVMe-oF supports both TCP and RDMA transports, allowing customers to choose between standard Ethernet-based deployments or low-latency RDMA-capable networks based on their infrastructure and performance goals.
- A forward-looking storage foundation — NVMe-oF is designed to scale across multiple controllers, namespaces, and queues, making it a strong foundation for future disaggregated and software-defined storage architectures.

This Insider release represents the first step in bringing NVMe-oF capabilities natively to Windows Server.
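For orientation, this is the same discover/connect concept as exercised on a Linux initiator with nvme-cli (a sketch only; the address, port, and NQN are placeholders, and these commands are not part of the Windows workflow described in this post):

```shell
# Ask the target what subsystems it advertises over NVMe/TCP (placeholder endpoint)
sudo nvme discover -t tcp -a 192.168.1.92 -s 4420

# Connect to one subsystem by NQN; its namespaces then appear as /dev/nvmeXnY block devices
sudo nvme connect -t tcp -a 192.168.1.92 -s 4420 \
    -n nqn.2026-02.com.contoso:win-nvmeof-test

# Tear the association down when finished
sudo nvme disconnect -n nqn.2026-02.com.contoso:win-nvmeof-test
```

The Windows initiator described below follows the same discover-then-connect model, with nvmeofutil.exe in place of nvme-cli.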
What’s Included in This Insider Release

In this Windows Server Insider build, you can evaluate the following NVMe-oF capabilities:
- An inbox NVMe-oF initiator with NVMe/TCP and NVMe/RDMA support
- A new command-line utility, nvmeofutil.exe, for configuration and management
- Manual configuration of discovery and I/O connections
- Automatic exposure of NVMe namespaces as Windows disks once connected

Note: PowerShell cmdlets are not available yet. All configuration is performed using nvmeofutil.exe.

Getting Started with nvmeofutil.exe

To start evaluating NVMe-oF in this build, you’ll use nvmeofutil.exe, the command-line utility included with supported Windows Server Insider builds.

1. Install the Latest Windows Server Insiders Build

Ensure you are running a Windows Server Insiders build that includes:
- The inbox NVMe-oF initiator with NVMe/TCP and NVMe/RDMA support
- The nvmeofutil.exe utility

2. Open an Elevated Command Prompt

All NVMe-oF commands must be run from an administrator command prompt.

3. List Available NVMe-oF Initiator Adapters

nvmeofutil.exe list -t ia

This command displays the available NVMe-oF initiator adapters on the system.

4. Enumerate Host Gateways

nvmeofutil.exe list -t hg -ia <AdapterNumber>

Host gateways represent transport-specific endpoints, such as NVMe/TCP over IPv4.

5. Configure an I/O Subsystem Port

Tip: You’ll need three values from your target configuration: the Subsystem NQN, the target IP/DNS, and the TCP port. If you haven’t set up a target yet, see the Target Setup section below for a quick Linux-based configuration and where to find these values.

nvmeofutil.exe add -t sp -ia <Adapter> -hg <HostGateway> -dy true -pi <PortNumber> -nq <SubsystemNQN> -ta <TargetAddress> -ts <ServiceId>

This defines the connection parameters to the remote NVMe-oF target.
6. Connect and Use the Namespace

nvmeofutil.exe connect -ia <Adapter> -sp <SubsystemPort>

Once connected, the NVMe namespace appears as a disk in Windows and can be partitioned and formatted using standard Windows tools.

Target Setup (Recommendations for Early Evaluation)

If you plan to evaluate NVMe-oF with an existing storage array, check with your SAN vendor to confirm support and get configuration guidance. Where possible, we also encourage you to validate interoperability using your production storage platform. For early evaluation and lab testing, the simplest and most interoperable option is to use a Linux-based NVMe-oF target, as described below.

To evaluate the inbox Windows NVMe-oF initiator in this Insider release, you’ll need an NVMe-oF target that can export a block device as an NVMe namespace over TCP.

Recommended: Linux kernel NVMe-oF target (nvmet) over TCP

For early testing, the simplest and most interoperable option is the Linux kernel NVMe target (“nvmet”). It’s straightforward to stand up in a lab and is widely used for basic NVMe-oF interoperability validation.

Lab note: The example below uses “allow any host” to reduce friction during evaluation. In production environments, you should restrict access to specific host NQNs instead.

What You’ll Need
- A Linux system (physical or VM)
- A block device to export (an NVMe SSD, SATA SSD, a virtual disk, etc.)
- IP connectivity to your Windows Server Insider machine
- A TCP port opened between initiator and target (you’ll choose a port below)

VMs are fine for functional evaluation. For performance testing, you’ll want to move to physical hosts and realistic networking later.

Option A — Configure nvmet Directly via configfs (Minimal, Copy/Paste Friendly)

On the Linux target, run the following as root (or with sudo). This configures one NVMe-oF subsystem exporting one namespace over NVMe/TCP.
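One note before running Option A: step 2 asks you to pick a subsystem NQN, and a malformed NQN is an easy thing to lose time on later during discovery. The sketch below is a rough shell sanity check of the common nqn.YYYY-MM.<reverse-domain>:<identifier> shape; the regex is a simplification, not the full grammar from the NVMe specification.

```shell
#!/bin/sh
# Rough sanity check for the common NQN shape: nqn.YYYY-MM.<reverse-domain>:<identifier>
# (simplified; see the NVMe base specification for the full grammar)
is_nqn() {
    printf '%s\n' "$1" | grep -Eq '^nqn\.[0-9]{4}-(0[1-9]|1[0-2])\.[a-z0-9.-]+:.+$'
}

for candidate in "nqn.2026-02.com.contoso:win-nvmeof-test" "my-target"; do
    if is_nqn "$candidate"; then
        echo "OK:  $candidate"
    else
        echo "BAD: $candidate"
    fi
done
```

This is only a format check; it does not guarantee the NQN is unique on your network.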
1) Load kernel modules and mount configfs

sudo modprobe nvmet
sudo modprobe nvmet-tcp
sudo mount -t configfs none /sys/kernel/config   # Required for nvmet configuration

2) Create a subsystem (choose an NQN) and allow host access

Pick a subsystem name/NQN. Use a proper NQN format to avoid collisions on shared networks (example shown).

SUBSYS="nqn.2026-02.com.contoso:win-nvmeof-test"
sudo mkdir -p /sys/kernel/config/nvmet/subsystems/$SUBSYS
# Lab-only: allow any host to connect
echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/$SUBSYS/attr_allow_any_host > /dev/null

3) Add a namespace (export a local block device)

Choose a block device on the target (example: /dev/nvme0n1). Be careful: you are exporting the raw block device.

DEV="/dev/nvme0n1"   # <-- replace with your device (e.g., /dev/sdb)
sudo mkdir -p /sys/kernel/config/nvmet/subsystems/$SUBSYS/namespaces/1
echo -n $DEV | sudo tee /sys/kernel/config/nvmet/subsystems/$SUBSYS/namespaces/1/device_path > /dev/null
echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/$SUBSYS/namespaces/1/enable > /dev/null

4) Create a TCP port (listener) and bind the subsystem

Choose:
- TRADDR = the Linux target’s IP address on the test network
- TRSVCID = the TCP port (commonly 4420, but you can use any free TCP port)

PORTID=1
TRADDR="192.168.1.92"   # <-- replace with target IP
TRSVCID="4420"          # <-- TCP port
sudo mkdir -p /sys/kernel/config/nvmet/ports/$PORTID
echo -n $TRADDR | sudo tee /sys/kernel/config/nvmet/ports/$PORTID/addr_traddr > /dev/null
echo -n tcp | sudo tee /sys/kernel/config/nvmet/ports/$PORTID/addr_trtype > /dev/null
echo -n $TRSVCID | sudo tee /sys/kernel/config/nvmet/ports/$PORTID/addr_trsvcid > /dev/null
echo -n ipv4 | sudo tee /sys/kernel/config/nvmet/ports/$PORTID/addr_adrfam > /dev/null
# Bind subsystem to port
sudo ln -s /sys/kernel/config/nvmet/subsystems/$SUBSYS \
    /sys/kernel/config/nvmet/ports/$PORTID/subsystems/$SUBSYS

5) Quick validation (optional, from any Linux host with nvme-cli)

If you have a Linux host
handy, nvme discover will confirm the target is advertising the subsystem and will show the subnqn value you’ll use from Windows.

sudo nvme discover -t tcp -a 192.168.1.92 -s 4420

Mapping the Target Values to Your Windows nvmeofutil.exe Steps

In your Windows steps, you already define the key connection parameters in the Subsystem Port add/connect flow. Use these mappings:
- SubsystemNQN (-nq) → the subsystem name/NQN you created (example: nqn.2026-02.com.contoso:win-nvmeof-test)
- TargetAddress (-ta) → the Linux target IP address (example: 192.168.1.92)
- ServiceId (-ts) → the TCP port you used (example: 4420)

Option B — If You Prefer a Tool-Based Setup: nvmetcli

If you’d rather not manipulate configfs directly, nvmetcli provides an interactive shell and can save/restore configurations from JSON (useful for repeating the setup across reboots in a lab). At a high level, nvmetcli can:
- Create subsystems and namespaces
- Configure ports (including TCP)
- Manage allowed hosts (or allow any host in controlled environments)
- Save/restore configs (for example, /etc/nvmet/config.json)

Optional (Advanced): SPDK NVMe-oF Target

If you already use SPDK or want to explore higher-performance user-space targets, SPDK’s NVMe-oF target supports TCP and RDMA and is configured via JSON-RPC. For early evaluation, the Linux kernel target above is usually the quickest path.

Known Limitations

As you evaluate this early Insider release, keep the following limitations in mind:
- Configuration is CLI-only (no GUI or PowerShell cmdlets yet)
- No multipathing
- Limited recovery behavior in some network failure scenarios

These areas are under active development.

Try It and Share Feedback

We encourage you to try NVMe-oF in your lab or test environment and share your experience on Windows Server Insiders Discussions so the engineering team can review public feedback in one place. For private feedback or questions that can’t be shared publicly, you can also reach us at nvmeofpreview@microsoft.com.
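As a convenience for the value mapping above, the three values can also be pulled out of saved discovery output with a short script. The sample log below is simulated to resemble nvme-cli's discovery output (field layout can vary between nvme-cli versions, so treat the parsing as a sketch):

```shell
#!/bin/sh
# Simulated `nvme discover` output; in practice, capture it with: nvme discover ... > discovery.log
log=$(cat <<'EOF'
Discovery Log Number of Records 1, Generation counter 1
=====Discovery Log Entry 0======
trtype:  tcp
adrfam:  ipv4
subtype: nvme subsystem
portid:  1
trsvcid: 4420
subnqn:  nqn.2026-02.com.contoso:win-nvmeof-test
traddr:  192.168.1.92
EOF
)

# Print the first value recorded for a given discovery-log field
field() { printf '%s\n' "$log" | awk -v k="$1:" '$1 == k { print $2; exit }'; }

echo "SubsystemNQN (-nq):  $(field subnqn)"
echo "TargetAddress (-ta): $(field traddr)"
echo "ServiceId (-ts):     $(field trsvcid)"
```

The three printed values drop straight into the nvmeofutil.exe add command shown earlier.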
We look forward to your feedback as we take the next steps in modernizing remote storage on Windows Server.

— Yash Shekar (and the Windows Server team)

CSV Auto-Pause on Windows Server 2025 Hyper-V Cluster
Hi everyone, I'm facing very strange behavior with a newly created Hyper-V cluster running on Windows Server 2025. One of the two nodes keeps calling for auto-pause on the CSV during the I/O peak. Has anyone experienced this? Here are the details:

Environment
- Cluster: 2-node Failover Cluster
- Nodes: HV1 & HV2 (HPE ProLiant DL360 Gen11)
- OS: Windows Server 2025 Datacenter, Build 26100.32370 (KB5075899 installed Feb 21, 2026)
- Storage: HPE MSA 2070 full SSD, iSCSI point-to-point (4×25 Gbps per node, 4 MPIO paths)
- CSV: Single volume "Cluster Disk 2" (~14 TB, NTFS, CSVFS_NTFS)
- Quorum: Disk Witness (Node and Disk Majority)
- Networking: 4×10 Gbps NIC teaming for management/cluster/VM traffic, dedicated iSCSI NICs

Problem Description
The cluster experiences CSV auto-pause events daily during a peak I/O period (~10:00-11:30), caused by database VMs generating ~600-800 MB/s (not that much). The auto-pause is triggered by HV2's CsvFs driver, even though HV2 hosts no VMs. All VMs run on HV1, which is the CSV coordinator/owner.

Comparative Testing (Feb 23-26, 2026)

Date   | HV2 Status                 | Event 5120 | SMB Slowdowns (1054)      | Auto-pause Cycles         | VM Impact
-------|----------------------------|------------|---------------------------|---------------------------|-------------------------
Feb 23 | Active                     | 1          | 44                        | 1 cycle (237 ms recovery) | None
Feb 24 | Active                     | 0          | 8                         | 0                         | None
Feb 25 | Drained (still in cluster) | 4          | ~60 (86,400,000 ms max!)  | 3 cascade cycles          | Severe - all VMs affected
Feb 26 | Powered off                | 0          | 0                         | 0                         | None

Key finding: Draining HV2 does NOT prevent the issue. Only fully powering off HV2 eliminates all auto-pause events and SMB slowdowns during the I/O peak.

Root Cause Analysis
1. CsvFs Driver on HV2 Maintains Persistent SMB Sessions to CSV
The SMB Client Connectivity log (Event 30833) on HV2 shows ~130 new SMB connections per hour to the CSV share, continuously since boot:
- Share: \\xxxx::xxx:xxx:xxx:xxx\xxxxxxxx-...-xxxxxxx$ (HV1 cluster virtual adapter)
- All connections from PID 4 (System/kernel) — the CsvFs driver
- 5,649 connections in 43.6 hours = ~130/hour
- Each connection has a different session ID (not persistent)
- This behavior continues even when HV2 is drained

2. HV2 Opens Handles on ALL VM Files
During the I/O peak on Feb 25, the SMB Server Operational log (Event 1054) on HV1 showed HV2 blocking on files from every VM directory, including powered-off VMs and templates:
- .vmgs, .VMRS, .vmcx, .xml — VM configuration and state files
- .rct, .mrt — RCT/CBT tracking files
- Affected VMs: almost all, including powered-off VMs and templates (winsrv2025-template)

3. Catastrophic Block Durations
On Feb 25 (HV2 drained but still in cluster):
- Operations blocked for 86,400,000 ms (exactly 24 hours) — handles had accumulated since the previous day
- These all expired simultaneously at 10:13:52, triggering cascade auto-pause
- Post-auto-pause: severe VM freeze/lag for an additional 2,324 seconds (39 minutes)
On Feb 24 (HV2 active):
- Operations blocked for 1,150,968 ms (19 minutes) on one of the VM files
- Despite this extreme duration, no auto-pause was triggered that day

4. Auto-pause Trigger Mechanism
The HV2 Diagnostic log at auto-pause time shows:
- CsvFs Listener: CsvFsVolumeStateChangeFromIO -> CsvFsVolumeStateDraining, status 0xc0000001
- OnVolumeEventFromCsvFs: reported VolumeEventAutopause to node 1
- Error status 0xc0000001 (STATUS_UNSUCCESSFUL) on an I/O operation from HV2
- CsvFsVolumeStateChangeFromIO = an I/O failure triggered the auto-pause
- HV2 has no VMs running — this is purely CsvFs metadata/redirected access
5. SMB Connection Loss During Auto-pause
SMB Client Connectivity on HV2 at auto-pause time:
- Event 30807: Share connection lost - "Le nom réseau a été supprimé" ("The network name was deleted")
- Event 30808: Share connection re-established

What Has Been Done
- KB5075899 installed (Feb 21) — may have improved recovery slightly (from a multi-cycle loop to a single cycle) but did not prevent the auto-pause
- Disabled the ms_server binding on the iSCSI NICs (both nodes)
- Tuned MPIO: PathVerification Enabled, PDORemovePeriod 120, RetryCount 6, DiskTimeout 100
- Drained HV2 — no effect
- Powered off HV2 — completely eliminated the problem

I'm going mad with this problem. I've deployed a lot of Hyper-V clusters and it's the first time I'm experiencing such strange behavior; the only workaround I've found is to take the second node offline to be sure it is not putting locks on CSV files. The cluster only runs well with one node turned on.

Questions:
- Why does the CsvFs driver on a non-coordinator node (HV2) maintain ~130 new SMB connections per hour to the CSV, even when it hosts no VMs and is drained?
- Why do these connections block for up to 24 hours during I/O peaks on the coordinator node?
- Why does draining the node not prevent CsvFs from accessing the CSV?
- Is this a known issue with the CsvFs driver in Windows Server 2025 Build 26100.32370?
- Are there any registry parameters to limit or disable CsvFs metadata scanning on non-coordinator nodes?

If someone sees something I am missing I would be so grateful! Have a great day.

Bookmark the Secure Boot playbook for Windows Server
Secure Boot is a long-standing security capability that works in conjunction with the Unified Extensible Firmware Interface (UEFI) to confirm that firmware and boot components are trusted before they are allowed to run. Microsoft is updating the Secure Boot certificates originally issued in 2011 to ensure Windows devices continue to verify trusted boot software. These older certificates begin expiring in June 2026.

Windows Server 2025 certified server platforms already include the 2023 certificates in firmware; for servers that do not, you will need to update the certificates manually. Unlike Windows PCs, which may receive the 2023 Secure Boot certificates through Controlled Feature Rollout (CFR) as part of the monthly update process, Windows Server requires manual action. Luckily, there is a step-by-step guide to help! With the Secure Boot Playbook for Windows Server, you'll find information on the tools and options available to help you update Secure Boot certificates on Windows Server. Check it out today!

Migrating from VMware to Hyper-V
Hi, I've recently deployed a new 3-node Hyper-V cluster running Windows Server 2025. I have an existing VMware cluster running ESXi 7.x. What tools or approaches have you used to migrate from VMware to Hyper-V? I can see there are many third-party tools available, and Windows Admin Center now appears to support this as well. Having never done this before (VMware to Hyper-V), I'm not sure what the best method is. Does anyone here have experience and recommendations, please?

Announcing ReFS Boot for Windows Server Insiders
We’re excited to announce that Resilient File System (ReFS) boot support is now available for Windows Server Insiders in Insider Preview builds. For the first time, you can install and boot Windows Server on an ReFS-formatted boot volume directly through the setup UI. With ReFS boot, you can finally bring modern resilience, scalability, and performance to your server’s most critical volume — the OS boot volume.

Why ReFS Boot?

Modern workloads demand more from the boot volume than NTFS can provide. ReFS was designed from the ground up to protect data integrity at scale. By enabling ReFS for the OS boot volume, we ensure that even the most critical system data benefits from advanced resilience, future-proof scalability, and improved performance. In short, ReFS boot means a more robust server right from startup, with several benefits:
- Resilient OS disk: ReFS improves boot-volume reliability by detecting corruption early and handling many file-system issues online without requiring chkdsk. Its integrity-first, copy-on-write design reduces the risk of crash-induced corruption to help keep your system running smoothly.
- Massive scalability: ReFS supports volumes up to 35 petabytes (35,000 TB) — vastly beyond NTFS’s typical limit of 256 TB. That means your boot volume can grow with future hardware, eliminating capacity ceilings.
- Performance optimizations: ReFS uses block cloning and sparse provisioning to accelerate I/O-heavy scenarios — enabling dramatically faster creation or expansion of large fixed-size VHD(X) files and speeding up large file copy operations by copying data via metadata references rather than full data movement.

Maximum Boot Volume Size: NTFS vs.
ReFS

Resiliency Enhancements with ReFS Boot

Feature                              | ReFS Boot Volume | NTFS Boot Volume
Metadata checksums                   | ✅ Yes           | ❌ No
Integrity streams (optional)         | ✅ Yes           | ❌ No
Proactive error detection (scrubber) | ✅ Yes           | ❌ No
Online integrity (no chkdsk)         | ✅ Yes           | ❌ No

Check out Microsoft Learn for more information on ReFS resiliency enhancements.

Performance Enhancements with ReFS Boot

Operation                  | ReFS Boot Volume                                | NTFS Boot Volume
Fixed-size VHD creation    | Seconds                                         | Minutes
Large file copy operations | Milliseconds-seconds (independent of file size) | Seconds-minutes (linear with file size)
Sparse provisioning        | ✅                                              | ❌

Check out Microsoft Learn for more information on ReFS performance enhancements.

Getting Started with ReFS Boot

Ready to try it out? Here’s how to get started with ReFS boot on Windows Server Insider Preview:

1. Update to the latest Insider build: Ensure you’re running the most recent Windows Server vNext Insider Preview (join Windows Server Insiders if you haven’t already). Builds from 2/11/26 or later (minimum build number 29531.1000.260206-1841) include ReFS boot in setup.

2. Choose ReFS during setup: When installing Windows Server, format the system (C:) partition as ReFS in the installation UI. Note: ReFS boot requires UEFI firmware and does not support legacy BIOS boot; as a result, ReFS boot is not supported on Generation 1 VMs.

3. Complete installation & verify: Finish the Windows Server installation as usual. Once it boots, confirm that your C: drive is using ReFS (for example, by running fsutil fsinfo volumeInfo C: or checking the drive properties).

That’s it – your server is now running with an ReFS boot volume.

A step-by-step demo video shows how to install Windows Server on an ReFS-formatted boot volume, including UEFI setup, disk formatting, and post-install verification. If the player doesn’t load, open the video in a new window: Open video.
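To double-check step 3 from the command line, either of the following works; this is a sketch assuming an elevated Command Prompt, and the exact output text may differ by build:

```shell
:: Look for "ReFS" in the file system name reported for C:
fsutil fsinfo volumeinfo C: | findstr /i /c:"File System Name"

:: Or via PowerShell, which reports the FileSystemType property directly
powershell -Command "Get-Volume -DriveLetter C | Select-Object DriveLetter, FileSystemType"
```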
Call to Action

In summary, ReFS boot brings future-proof resiliency, scalability, and performance improvements to the Windows Server boot volume — reducing downtime, removing scalability limits, and accelerating large storage operations from day one. We encourage you to try ReFS boot on your servers and experience the difference for yourself. As always, we value your feedback. Please share your feedback and questions on the Windows Server Insiders Forum.

— Christina Curlette (and the Windows Server team)

CrowdStrike Secure Boot Lifecycle Management Content Pack
CrowdStrike has recently released the Secure Boot Lifecycle Management Content Pack. This new feature helps Falcon for IT module users manage Windows Secure Boot certificate updates ahead of these certificates’ expiration beginning in late June 2026.

The dashboard provides an at-a-glance view of Secure Boot–enabled devices, showing which systems are already compliant with the updated 2023 Secure Boot certificate, which are in progress, and which are blocked or require opt-in to a managed rollout. It also highlights certificate update failures that may require investigation. In addition, overall readiness is summarized through a compliance gauge, while a 30-day trend shows how pass and fail counts change as remediation progresses. Filters by operating system, server edition, hostname, and update status help administrators quickly identify devices that need action to help ensure systems remain secure after the certificates expire.

The feature also provides management options to opt devices into Microsoft's managed rollout for gradual, tested deployment, and to block updates on hardware with known compatibility issues to prevent boot failures.

Note that this feature is available as part of CrowdStrike's Falcon for IT module. CrowdStrike Endpoint Detection and Response (EDR) customers who are not licensed for this module can enable a free trial from the CrowdStrike Store. To learn more about this feature, please see the content pack tutorial video.

Announcing Native NVMe in Windows Server 2025: Ushering in a New Era of Storage Performance
We’re thrilled to announce the arrival of Native NVMe support in Windows Server 2025—a leap forward in storage innovation that will redefine what’s possible for your most demanding workloads.

Modern NVMe (Non-Volatile Memory Express) SSDs now operate more efficiently with Windows Server. This improvement comes from a redesigned Windows storage stack that no longer treats all storage devices as SCSI (Small Computer System Interface) devices—a method traditionally used for older, slower drives. By eliminating the need to convert NVMe commands into SCSI commands, Windows Server reduces processing overhead and latency. Additionally, the whole I/O processing workflow is redesigned for extreme performance.

This release is the result of close collaboration between our engineering teams and hardware partners, and it serves as a cornerstone in modernizing our storage stack. Native NVMe is now generally available (GA) with an opt-in model (disabled by default as of October’s latest cumulative update for WS2025). Switch to Native NVMe as soon as possible or you are leaving performance gains on the table! Stay tuned for more updates from our team as we transition to a dramatically faster, more efficient storage future.

Why Native NVMe and why now?

Modern NVMe devices—like PCIe Gen5 enterprise SSDs capable of 3.3 million IOPS, or HBAs delivering over 10 million IOPS on a single disk—are pushing the boundaries of what storage can do. SCSI-based I/O processing can’t keep up because it uses a single-queue model, originally designed for rotational disks, where protocols like SATA support just one queue with up to 32 commands. In contrast, NVMe was designed from the ground up for flash storage and supports up to 64,000 queues, with each queue capable of handling up to 64,000 commands simultaneously. With Native NVMe in Windows Server 2025, the storage stack is purpose-built for modern hardware—eliminating translation layers and legacy constraints.
Here’s what that means for you:
- Massive IOPS Gains: Direct, multi-queue access to NVMe devices means you can finally reach the true limits of your hardware.
- Lower Latency: Traditional SCSI-based stacks rely on shared locks and synchronization mechanisms in the kernel I/O path to manage resources. Native NVMe enables streamlined, lock-free I/O paths that slash round-trip times for every operation.
- CPU Efficiency: A leaner, optimized stack frees up compute for your workloads instead of storage overhead.
- Future-Ready Features: Native support for advanced NVMe capabilities like multi-queue and direct submission ensures you’re ready for next-gen storage innovation.

Performance Data

Using DiskSpd.exe, basic performance testing shows that with Native NVMe enabled, WS2025 systems can deliver up to ~80% more IOPS and ~45% savings in CPU cycles per I/O on 4K random read workloads on NTFS volumes when compared to WS2022. This test ran on a host with a dual-socket Intel CPU (208 logical processors, 128 GB RAM) and a Solidigm SB5PH27X038T 3.5 TB NVMe device. The test can be recreated by running the following command and modifying the parameters as desired (results may vary):

diskspd.exe -b4k -r -Su -t8 -L -o32 -W10 -d30 testfile1.dat > output.dat

Top Use Cases: Where You’ll See the Difference

Try Native NVMe on servers running your enterprise applications. These gains are not just for synthetic benchmarks—they translate directly to faster database transactions, quicker VM operations, and more responsive file and analytics workloads.
- SQL Server and OLTP: Shorter transaction times, higher IOPS, and lower tail latency under mixed read/write workloads.
- Hyper-V and virtualization: Faster VM boot, checkpoint operations, and live migration with reduced storage contention.
- High-performance file servers: Faster large-file reads/writes and quicker metadata operations (copy, backup, restore).
- AI/ML and analytics: Low-latency access to large datasets and faster ETL, shuffle, and cache/scratch I/O.
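For reference, the flags in that DiskSpd command line break down as follows (meanings per DiskSpd's documented parameters; thread count and queue depth are tuning knobs you should adjust to your hardware):

```shell
:: -b4k : 4 KiB I/O size              -r   : random access pattern
:: -Su  : disable software caching    -t8  : 8 worker threads per target
:: -o32 : 32 outstanding I/Os per thread per target
:: -W10 : 10 s warm-up                -d30 : 30 s measured duration
:: -L   : capture per-I/O latency statistics
diskspd.exe -b4k -r -Su -t8 -L -o32 -W10 -d30 testfile1.dat > output.dat
```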
How to Get Started

1. Check your hardware: Ensure you have NVMe-capable devices that are currently using the Windows NVMe driver (StorNVMe.sys). Note that some NVMe device vendors provide their own drivers; unless you are using the in-box Windows NVMe driver, you will not notice any differences.

2. Enable Native NVMe: After applying the 2510-B Latest Cumulative Update (or most recent), add the registry key with the following command:

reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Policies\Microsoft\FeatureManagement\Overrides /v 1176759950 /t REG_DWORD /d 1 /f

Alternatively, use this Group Policy MSI to add the policy that controls the feature, then run the local Group Policy Editor to enable the policy (found under Local Computer Policy > Computer Configuration > Administrative Templates > KB5066835 251014_21251 Feature Preview > Windows 11, version 24H2, 25H2). Once Native NVMe is enabled, open Device Manager and ensure that all attached NVMe devices are displayed under the “Storage disks” section.

3. Monitor and Validate: Use Performance Monitor and Windows Admin Center to see the gains for yourself, or try DiskSpd.exe to measure microbenchmarks in your own environment! A quick way to measure IOPS in Performance Monitor is to set up a histogram chart and add a counter for Physical Disk > Disk Transfers/sec (where the selected instance is a drive that corresponds to one of your attached NVMe devices), then run a synthetic workload with DiskSpd. Compare the numbers before and after enabling Native NVMe to see the realized difference in your real environment!

Join the Storage Revolution

This is more than just a feature—it’s a new foundation for Windows Server storage, built for the future. We can’t wait for you to experience the difference. Share your feedback, ask questions, and join the conversation. Let’s build the future of high-performance Windows Server storage together. Send us your feedback or questions at nativenvme@microsoft.com!
— Yash Shekar (and the Windows Server team)

PS script for moving clustered VMs to another node
Windows Server 2022, Hyper-V, Failover Cluster

We have a Hyper-V cluster where the hosts reboot once a month. If the host being rebooted has any number of VMs running on it, the reboot can take hours. I've proven this by manually moving VM roles off of the host prior to reboot; the host then reboots in less than an hour, usually around 15 minutes. Does anyone know of a PowerShell script that will detect clustered VMs running on the host and move them to another host within the cluster? I'd rather not reinvent this if someone's already done it.
Getting Started with Windows Admin Center Virtualization Mode

Windows Admin Center (WAC) Virtualization Mode is a new, preview experience for managing large Hyper-V virtualization fabrics—compute, networking, and storage—from a single, web-based console. It’s designed to scale from a handful of hosts up to thousands, centralizing configuration and day-to-day operations.

This post walks through:
• What Virtualization Mode is and its constraints
• How to install it on a Windows Server host
• How to add an existing Hyper-V host into a resource group

Prerequisites and Constraints

Before you begin, note the current preview limitations:
• The WAC Virtualization Mode server and the Hyper-V hosts it manages must be in the same Active Directory domain.
• You cannot install Virtualization Mode side by side with a traditional WAC deployment on the same server.
• Do not install Virtualization Mode directly on a Hyper-V host you plan to manage. You can install it on a VM running on that host.
• Plan for at least 8 GB RAM on the WAC Virtualization Mode server.

For TLS, the walkthrough assumes you have an Enterprise CA and are deploying domain-trusted certificates to servers, so browsers automatically trust the HTTPS endpoint. You can use a self-signed certificate, but you’ll end up with all the fun that entails when you use WAC-V from a host on which the self-signed cert isn’t installed. Given the domain requirements of WAC-V and the hosts it manages, going the Enterprise CA route seemed the path of least resistance.

Step 1 – Install the C++ Redistributable

On your Windows Server 2025 host that will run WAC Virtualization Mode:
1. Open Windows Terminal or PowerShell.
2. Use winget to search for the VC++ redistributable:

   winget search "VC Redist"

3. Identify the package corresponding to “Microsoft Visual C++ 2015–2022 Redistributable” (or equivalent).
4. Install it with winget, for example:

   winget install "Microsoft.VC++2015-2022Redist-x64"

This fulfills the runtime dependency for the WAC Virtualization Mode installer.

Step 2 – Install Windows Admin Center Virtualization Mode

1. Download the installer: Download the Windows Admin Center Virtualization Mode installer from the Windows Insider Preview location provided in the official documentation. Save it to a local folder on the WAC host.
2. Run the setup wizard: Double-click the downloaded binary, approve the UAC prompt, and on the Welcome page proceed as with traditional WAC setup.
3. Accept the license and choose setup type: Accept the license agreement, then choose Express setup (suitable for most lab and PoC deployments).
4. Select a TLS certificate: When prompted for a TLS certificate, select a certificate issued by your Enterprise CA that matches the server name. Using CA-issued certs ensures all domain-joined clients will trust the site without manual certificate import.
5. Configure PostgreSQL for WAC: Virtualization Mode uses PostgreSQL as its configuration and state database. When prompted, provide a strong password for the database account WAC will use, and record it securely if required by your org standards.
6. Configure update and diagnostic settings: Choose how WAC should be updated (manual/automatic) and set diagnostic data preferences according to your policy.
7. Complete the installation: Click Install to deploy the WAC Virtualization Mode web service and the PostgreSQL database instance. When installation completes, click Finish.

Step 3 – Sign In to Virtualization Mode

1. Open a browser on a domain-joined machine and browse to the WAC URL (for example, https://wac-vmode01.contoso.internal).
2. Sign in with your domain credentials that have appropriate rights to manage Hyper-V hosts (for example, DOMAIN\adminuser).
You’ll see the new Virtualization Mode UI, which differs significantly from traditional WAC and is optimized for fabric-wide management.

Step 4 – Create a Resource Group

Resource groups help you logically organize the Hyper-V servers you’ll manage (for example, by site, function, or cluster membership).

1. In the Virtualization Mode UI, select Resource groups.
2. Click Create resource group.
3. Provide a name, such as Zava-Nested-Vert.
4. Save the resource group.

You now have a logical container ready for one or more Hyper-V hosts.

Step 5 – Prepare the Hyper-V Host

Before adding an existing Hyper-V host:

1. Ensure the host is:
   • Running Hyper-V and reachable by FQDN (for example, zava-hvA.zavaops.internal).
   • In the same AD domain as the WAC Virtualization Mode server.
2. Temporarily open File and Printer Sharing from the Hyper-V host’s firewall to the WAC Virtualization Mode server:
   • This is required for initial onboarding.
   • After onboarding, you can re-lock firewall rules according to your security baseline.

Step 6 – Add a Hyper-V Host to the Resource Group

1. In the WAC Virtualization Mode UI, go to your resource group.
2. Click the ellipsis (…) and choose Add resource.
3. On the Add resource page, select Compute (you’re adding a Hyper-V server, not a storage fabric resource).
4. Enter the Hyper-V host’s FQDN (for example, zava-hvA.zavaops.internal).
5. Confirm the host resolves correctly and proceed.

Configure Networking Template

1. On the Networking page, assign fabric roles to NICs using the network template model. Each NIC can be tagged for one or more roles: Compute, Management, and Storage.
2. In a simple, single-NIC lab scenario, you may assign Compute, Management, and Storage all to Ethernet0.
3. All three roles must be fully assigned across the available adapters before you can proceed.

Configure Storage

1. On the Storage page, specify the storage model. For an existing host using local disks, choose Use existing storage.
In the future, you can select SAN or file server storage when those options are available and configured in your environment.

Configure Compute Properties

1. On the Compute page, configure host-level defaults:
   • Enable or disable Enhanced Session Mode.
   • Set the maximum concurrent live migrations.
   • Confirm or update the default VM storage path.
2. Review the configuration, click Next, then Submit.
3. The Hyper-V host is registered into the resource group and becomes manageable via Virtualization Mode.

Step 7 – Verify Host and VM Management

With the host onboarded:

1. Open the resource group and select the Hyper-V host.
2. You’ll see a streamlined view similar to traditional WAC, with nodes for:
   • Event logs
   • Files
   • Networks
   • Storage
   • Windows Update
   • Virtual Machines
3. To validate functionality, create a test VM:
   • Go to Virtual Machines → Add.
   • Provide a VM name (for example, WS25-temp).
   • Set vCPUs (for example, 2).
   • Optionally enable nested virtualization.
   • Select the appropriate virtual switch.
   • Click Create, then attach an ISO or existing VHDX and complete OS setup.

▶️ Public Preview: https://aka.ms/WACDownloadvMode
▶️ Documentation: https://aka.ms/WACvModeDocs

PowerShell runs fine manually but not in Task Scheduler
I have a strange problem, and I am hoping someone will be able to point me to a solution. Below you will see the PowerShell script that I am running. The script works fine when it is run manually in PowerShell, but when I put it in the Windows Task Scheduler and it runs at the appointed time (5 am), the output is garbage. You can see the script running below at 5 am. If I right-click the task in the scheduler and tell it to run, the script runs normally and the output is fine. I am guessing there is some switch or setting that I am missing, but I am hoping someone has an idea and can help me because I am lost. Thanks, Gary
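Since the script itself isn’t shown, one can’t diagnose this definitively, but garbled output from a scheduled run that is clean interactively often comes down to the non-interactive environment Task Scheduler provides: the working directory defaults to C:\Windows\System32, the user profile may not be loaded, and there is no console, so output encoding can differ. A hedged sketch of a task registration that pins the working directory and skips the profile (the script path, output path, and task name below are hypothetical placeholders):

```powershell
# Hypothetical paths and task name; adjust for your environment.
# -NoProfile makes the scheduled run match a clean interactive session.
$action = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-NoProfile -ExecutionPolicy Bypass -File "C:\Scripts\MyDailyScript.ps1"' `
    -WorkingDirectory 'C:\Scripts'
$trigger = New-ScheduledTaskTrigger -Daily -At 5am
Register-ScheduledTask -TaskName 'MyDailyScript' -Action $action -Trigger $trigger

# Inside the script, pinning the output encoding explicitly avoids
# garbled redirected output when no interactive console is attached:
#   $OutputEncoding = [Console]::OutputEncoding = [System.Text.UTF8Encoding]::new()
#   ... | Out-File 'C:\Scripts\output.txt' -Encoding utf8
```

If the scheduled run and the right-click “Run” behave differently, also compare the account and “Run whether user is logged on or not” settings on the task, since those change which profile and environment the script sees.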