We’re excited to announce that a basic NVMe-over-Fabrics (NVMe-oF) initiator is now available in the latest Windows Server Insiders build. This release introduces an inbox Windows initiator for NVMe/TCP and NVMe/RDMA, enabling early evaluation of networked NVMe storage using native Windows Server components. It is an early, evaluation-focused release designed to help you explore NVMe-oF scenarios in your environment and share feedback as development continues.
What Is NVMe-over-Fabrics?
NVMe-over-Fabrics (NVMe-oF) extends the NVMe protocol—originally designed for local PCIe-attached SSDs—across a network fabric. Instead of using legacy SCSI-based protocols such as iSCSI or Fibre Channel, NVMe-oF allows a host to communicate directly with remote NVMe controllers using the same NVMe command set used for local devices. In this Insider build, Windows Server supports:
- NVMe-oF over TCP (NVMe/TCP), allowing NVMe-oF to run over standard Ethernet networks without specialized hardware.
- NVMe-oF over RDMA (NVMe/RDMA), enabling low-latency, high-throughput NVMe access over RDMA-capable networks (for example, RoCE or iWARP) using supported RDMA NICs.
Why NVMe-oF on Windows Server?
NVMe-oF builds on the same principles as Native NVMe support: reducing protocol overhead, improving scalability, and aligning the storage stack with modern hardware. For Windows Server customers, NVMe-oF offers:
- Lower-overhead networked storage access — NVMe-oF carries less protocol overhead than iSCSI, helping you extract the full performance of modern NVMe devices while preserving the parallelism and efficiency of NVMe.
- Flexible infrastructure choices — NVMe-oF supports both TCP and RDMA transports, allowing customers to choose between standard Ethernet-based deployments or low-latency RDMA-capable networks based on their infrastructure and performance goals.
- A forward-looking storage foundation — NVMe-oF is designed to scale across multiple controllers, namespaces, and queues, making it a strong foundation for future disaggregated and software-defined storage architectures.
This Insider release represents the first step in bringing NVMe-oF capabilities natively to Windows Server.
What’s Included in This Insider Release
In this Windows Server Insider build, you can evaluate the following NVMe-oF capabilities:
- An inbox NVMe-oF initiator with NVMe/TCP and NVMe/RDMA support
- A new command-line utility, nvmeofutil.exe, for configuration and management
- Manual configuration of discovery and I/O connections
- Automatic exposure of NVMe namespaces as Windows disks once connected
Note: PowerShell cmdlets are not available yet. All configuration is performed using nvmeofutil.exe.
Getting Started with nvmeofutil.exe
To start evaluating NVMe-oF in this build, you’ll use nvmeofutil.exe, the command-line utility included with supported Windows Server Insider builds.
1. Install the Latest Windows Server Insiders Build
Ensure you are running a Windows Server Insiders build that includes:
- The inbox NVMe-oF initiator with NVMe/TCP and NVMe/RDMA support
- The nvmeofutil.exe utility
2. Open an Elevated Command Prompt
All NVMe-oF commands must be run from an administrator command prompt.
3. List Available NVMe-oF Initiator Adapters
nvmeofutil.exe list -t ia
This command displays the available NVMe-oF initiator adapters on the system.
4. Enumerate Host Gateways
nvmeofutil.exe list -t hg -ia <AdapterNumber>
Host gateways represent transport-specific endpoints, such as NVMe/TCP over IPv4.
5. Configure an I/O Subsystem Port
Tip: You’ll need three values from your target configuration: the Subsystem NQN, the target IP/DNS, and the TCP port. If you haven’t set up a target yet, see the Target Setup section below for a quick Linux-based configuration and where to find these values.
nvmeofutil.exe add -t sp -ia <Adapter> -hg <HostGateway> -dy true ^
    -pi <PortNumber> -nq <SubsystemNQN> -ta <TargetAddress> -ts <ServiceId>
This defines the connection parameters to the remote NVMe-oF target.
6. Connect and Use the Namespace
nvmeofutil.exe connect -ia <Adapter> -sp <SubsystemPort>
Once connected, the NVMe namespace appears as a disk in Windows and can be partitioned and formatted using standard Windows tools.
Target Setup (Recommendations for Early Evaluation)
To evaluate the inbox Windows NVMe-oF initiator in this Insider release, you’ll need an NVMe-oF target that can export a block device as an NVMe namespace over TCP. If you plan to evaluate against an existing storage array, check with your SAN vendor to confirm NVMe-oF support and get configuration guidance; where possible, we also encourage you to validate interoperability against the storage platform you intend to run in production. For early evaluation and lab testing, the simplest option is a Linux-based NVMe-oF target, as described below.
Recommended: Linux kernel NVMe-oF target (nvmet) over TCP
For early testing, the Linux kernel NVMe target (“nvmet”) is straightforward to stand up in a lab and is widely used for basic NVMe-oF interoperability validation.
Lab note: The example below uses “allow any host” to reduce friction during evaluation. In production environments, you should restrict access to specific host NQNs instead.
What You’ll Need
- A Linux system (physical or VM)
- A block device to export (an NVMe SSD, a SATA SSD, a virtual disk, etc.)
- IP connectivity to your Windows Server Insider machine
- A TCP port opened between initiator and target (you’ll choose a port below)
VMs are fine for functional evaluation. For performance testing, you’ll want to move to physical hosts and realistic networking later.
Option A — Configure nvmet Directly via configfs (Minimal, Copy/Paste Friendly)
On the Linux target, run the following as root (or with sudo). This configures one NVMe-oF subsystem exporting one namespace over NVMe/TCP.
1) Load kernel modules and mount configfs
sudo modprobe nvmet
sudo modprobe nvmet-tcp
# configfs is usually mounted already; mount it only if /sys/kernel/config is empty
mountpoint -q /sys/kernel/config || sudo mount -t configfs none /sys/kernel/config
2) Create a subsystem (choose an NQN) and allow host access
Pick a subsystem name/NQN. Use a proper NQN format to avoid collisions on shared networks (example shown).
SUBSYS="nqn.2026-02.com.contoso:win-nvmeof-test"
sudo mkdir -p /sys/kernel/config/nvmet/subsystems/$SUBSYS
# Lab-only: allow any host to connect
echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/$SUBSYS/attr_allow_any_host > /dev/null
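Before reusing the subsystem NQN elsewhere, it can help to sanity-check its shape. The snippet below is a rough lab helper (the regex is a simplified approximation of the NQN format, not a full spec-compliant validator):

```shell
# Rough format check for the NQN chosen above.
# Expected shape: "nqn." + yyyy-mm date + "." + reverse-domain, optionally ":" + a user string.
SUBSYS="nqn.2026-02.com.contoso:win-nvmeof-test"
if echo "$SUBSYS" | grep -Eq '^nqn\.[0-9]{4}-(0[1-9]|1[0-2])\.[a-z0-9.-]+(:.+)?$'; then
    echo "NQN format looks OK: $SUBSYS"
else
    echo "NQN format looks wrong: $SUBSYS" >&2
fi
```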
3) Add a namespace (export a local block device)
Choose a block device on the target (example: /dev/nvme0n1). Be careful: you are exporting the raw block device, so any data on it becomes visible to, and overwritable by, the initiator.
DEV="/dev/nvme0n1" # <-- replace with your device (e.g., /dev/sdb)
sudo mkdir -p /sys/kernel/config/nvmet/subsystems/$SUBSYS/namespaces/1
echo -n $DEV | sudo tee /sys/kernel/config/nvmet/subsystems/$SUBSYS/namespaces/1/device_path > /dev/null
echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/$SUBSYS/namespaces/1/enable > /dev/null
4) Create a TCP port (listener) and bind the subsystem
Choose:
- TRADDR = the Linux target’s IP address on the test network
- TRSVCID = the TCP port (4420 is the IANA-assigned NVMe-oF port, but any free TCP port works)
PORTID=1
TRADDR="192.168.1.92" # <-- replace with target IP
TRSVCID="4420" # <-- TCP port
sudo mkdir -p /sys/kernel/config/nvmet/ports/$PORTID
echo -n $TRADDR | sudo tee /sys/kernel/config/nvmet/ports/$PORTID/addr_traddr > /dev/null
echo -n tcp | sudo tee /sys/kernel/config/nvmet/ports/$PORTID/addr_trtype > /dev/null
echo -n $TRSVCID | sudo tee /sys/kernel/config/nvmet/ports/$PORTID/addr_trsvcid > /dev/null
echo -n ipv4 | sudo tee /sys/kernel/config/nvmet/ports/$PORTID/addr_adrfam > /dev/null
# Bind subsystem to port
sudo ln -s /sys/kernel/config/nvmet/subsystems/$SUBSYS \
/sys/kernel/config/nvmet/ports/$PORTID/subsystems/$SUBSYS
5) Quick validation (optional, from any Linux host with nvme-cli)
If you have a Linux host handy, nvme discover will confirm the target is advertising the subsystem and will show the subnqn value you’ll use from Windows.
sudo nvme discover -t tcp -a 192.168.1.92 -s 4420
Mapping the Target Values to Your Windows nvmeofutil.exe Steps
The Subsystem Port add and connect steps above (steps 5 and 6) define the key connection parameters. Use these mappings:
- SubsystemNQN (-nq) → the subsystem name/NQN you created (example: nqn.2026-02.com.contoso:win-nvmeof-test)
- TargetAddress (-ta) → the Linux target IP address (example: 192.168.1.92)
- ServiceId (-ts) → the TCP port you used (example: 4420)
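Putting the mappings together, the Windows-side sequence from steps 3–6 would look like the sketch below. The adapter number, host gateway number, and subsystem port number (all shown as 1) are hypothetical placeholders; substitute the values reported by the list commands on your system.

```
:: 3) List initiator adapters; note the adapter number you want to use
nvmeofutil.exe list -t ia
:: 4) Enumerate host gateways on that adapter
nvmeofutil.exe list -t hg -ia 1
:: 5) Define the subsystem port using the target values from above
nvmeofutil.exe add -t sp -ia 1 -hg 1 -dy true -pi 1 -nq nqn.2026-02.com.contoso:win-nvmeof-test -ta 192.168.1.92 -ts 4420
:: 6) Connect; the namespace should then appear as a disk in Windows
nvmeofutil.exe connect -ia 1 -sp 1
```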
Option B — If You Prefer a Tool-Based Setup: nvmetcli
If you’d rather not manipulate configfs directly, nvmetcli provides an interactive shell and can save/restore configurations from JSON (useful for repeating the setup across reboots in a lab). At a high level, nvmetcli can:
- Create subsystems and namespaces
- Configure ports (including TCP)
- Manage allowed hosts (or allow any host in controlled environments)
- Save/restore configs (for example, /etc/nvmet/config.json)
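For reference, a saved nvmetcli configuration mirroring the Option A setup might look roughly like the JSON below. This is a hand-written sketch of the config.json shape, not output captured from a specific nvmetcli version, so field names and defaults may vary with your distribution:

```json
{
  "hosts": [],
  "ports": [
    {
      "addr": {
        "adrfam": "ipv4",
        "traddr": "192.168.1.92",
        "trsvcid": "4420",
        "trtype": "tcp"
      },
      "portid": 1,
      "referrals": [],
      "subsystems": ["nqn.2026-02.com.contoso:win-nvmeof-test"]
    }
  ],
  "subsystems": [
    {
      "attr": { "allow_any_host": "1" },
      "namespaces": [
        {
          "device": { "path": "/dev/nvme0n1" },
          "enable": 1,
          "nsid": 1
        }
      ],
      "nqn": "nqn.2026-02.com.contoso:win-nvmeof-test"
    }
  ]
}
```

Restoring a saved file (for example with nvmetcli restore /etc/nvmet/config.json) recreates the setup after a reboot.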
Optional (Advanced): SPDK NVMe-oF Target
If you already use SPDK or want to explore higher-performance user-space targets, SPDK’s NVMe-oF target supports TCP and RDMA and is configured via JSON-RPC. For early evaluation, the Linux kernel target above is usually the quickest path.
Known Limitations
As you evaluate this early Insider release, keep the following limitations in mind:
- Configuration is CLI-only (no GUI or PowerShell cmdlets yet)
- No multipathing
- Limited recovery behavior in some network failure scenarios
These areas are under active development.
Try It and Share Feedback
We encourage you to try NVMe-oF in your lab or test environment and share your experience on Windows Server Insiders Discussions so the engineering team can review public feedback in one place.
For private feedback or questions that can’t be shared publicly, you can also reach us at nvmeofpreview@microsoft.com.
We look forward to your feedback as we take the next steps in modernizing remote storage on Windows Server.
—
Yash Shekar (and the Windows Server team)