Storage
Why Windows Should Adopt ReFS as a Bootable Filesystem
ReFS could become a bootable filesystem — it only needs a few missing layers. No need to copy NTFS, just implement what the Windows boot process requires.

Key missing pieces:
- System-level journaling (not only metadata)
- Full hardlink + extended attribute support
- EFS, ACLs, USN Journal for security + Windows Update
- Boot-critical atomicity for safe system file updates
- Bootloader-compatible APIs (BCD, BitLocker pre-boot, WinRE, Secure Boot)

Goals: Use NTFS as a reference map, add the missing capabilities to ReFS, and optimize them using ReFS features (copy-on-write, integrity streams, block cloning).

Result: A modern, resilient filesystem that can finally boot Windows, without losing its benefits.
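For readers who have not used the ReFS mechanisms named above, here is a minimal sketch of inspecting and enabling integrity streams on an existing ReFS data volume from PowerShell. It assumes D: is already formatted as ReFS and that D:\data\example.bin exists (both are hypothetical); the cmdlets are from the in-box Storage module.

# Check whether integrity streams (data checksums) are currently enabled for a file
Get-FileIntegrity -FileName "D:\data\example.bin"

# Enable integrity streams for that file so data, not just metadata, is checksummed
Set-FileIntegrity -FileName "D:\data\example.bin" -Enable $true

# Enable integrity on a folder so files created in it inherit the setting
Set-FileIntegrity -FileName "D:\data" -Enable $true

This is the kind of per-file and per-folder control the post argues a bootable ReFS could build on, rather than re-implementing NTFS behavior wholesale.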
Join us at Microsoft Azure Infra Summit 2026 for deep technical Azure infrastructure content

Microsoft Azure Infra Summit 2026 is a free, engineering-led virtual event created for IT professionals, platform engineers, SREs, and infrastructure teams who want to go deeper on how Azure really works in production. It will take place May 19-21, 2026. This event is built for the people responsible for keeping systems running, making sound architecture decisions, and dealing with the operational realities that show up long after deployment day.

Over the past year, one message has come through clearly from the community: infrastructure and operations audiences want more in-depth technical content. They want fewer surface-level overviews and more practical guidance from the engineers and experts who build, run, and support these systems every day. That is exactly what Azure Infra Summit aims to deliver. All content is created AND delivered by engineering, targeting folks working with Azure infrastructure and operating production environments.

Who is this for: IT professionals, platform engineers, SREs, and infrastructure teams
When: May 19-21, 2026 - 8:00 AM–1:00 PM Pacific Time, all 3 days
Where: Online Virtual
Cost: Free
Level: Most sessions are advanced (L300-400)
Register here: https://aka.ms/MAIS-Reg

Built for the people who run workloads on Azure

Azure Infra Summit is for the people who do more than deploy to Azure. It is for the people who run it. If your day involves uptime, patching, governance, monitoring, reliability, networking, identity, storage, or hybrid infrastructure, this event is for you. Whether you are an IT professional managing enterprise environments, a platform engineer designing landing zones, an Azure administrator, an architect, or an SRE responsible for resilience and operational excellence, you will find content built with your needs in mind.

We are intentionally shaping this event around peer-to-peer technical learning. That means engineering-led sessions, practical examples, and candid discussion about architecture, failure modes, operational tradeoffs, and what breaks in production. The promise here is straightforward: less fluff, more infrastructure.

What to expect

Azure Infra Summit will feature deep technical content in the 300 to 400 level range, with sessions designed by engineering to help you build, operate, and optimize Azure infrastructure more effectively. The event will include a mix of live and pre-recorded sessions and live Q&A. Throughout the three days, we will dig into topics such as:
- Hybrid operations and management
- Networking at scale
- Storage, backup, and disaster recovery
- Observability, SLOs, and day-2 operations
- Confidential compute
- Architecture, automation, governance, and optimization in Azure Core environments
- And more…

The goal is simple: to give you practical guidance you can take back to your environment and apply right away. We want attendees to leave with stronger mental models, a better understanding of how Azure behaves in the real world, and clearer patterns for designing and operating infrastructure with confidence.

Why this event matters

Infrastructure decisions have a long tail. The choices we make around architecture, operations, governance, and resilience show up later in the form of performance issues, outages, cost, complexity, and recovery challenges. That is why deep technical learning matters, and why events like this matter.

Join us

I hope you will join us for Microsoft Azure Infra Summit 2026, happening May 19-21, 2026.
If you care about how Azure infrastructure behaves in the real world, and you want practical, engineering-led guidance on how to build, operate, and optimize it, this event was built for you.

Register here: https://aka.ms/MAIS-Reg

Cheers!
Pierre Roman
Premium SSD v2 Is Now Generally Available for Azure Database for PostgreSQL

We are excited to announce the General Availability (GA) of Premium SSD v2 for Azure Database for PostgreSQL flexible server. With Premium SSD v2, you can achieve up to 4× higher IOPS, significantly lower latency, and better price-performance for I/O-intensive PostgreSQL workloads. With independent scaling of storage and performance, you can now eliminate overprovisioning and unlock predictable, high-performance PostgreSQL at scale. This release is especially impactful for OLTP, SaaS, and high-concurrency applications that require consistent performance and reliable scaling under load.

In this post, we will cover:
- Why Premium SSD v2: Core capabilities such as flexible disk sizing, higher performance, and independent scaling of capacity and I/O.
- Premium SSD v2 vs. Premium SSD: A side-by-side overview of what's new and what's improved.
- Pricing: Pricing estimates.
- Performance: Benchmarking results across two workload scenarios.
- Migration options: How to move from Premium SSD to Premium SSD v2 using restore and read-replica approaches.
- Availability and support: Regional availability, supported features, current limitations, and how to get started.

Why Premium SSD v2?

- Flexible Disk Size: Storage can be provisioned from 32 GiB to 64 TiB in 1 GiB increments, allowing you to pay only for required capacity without scaling disk size for performance.
- High Performance: Achieve up to 80,000 IOPS and 1,200 MiB/s throughput on a single disk, enabling high-throughput OLTP and mixed workloads.
- Adapt instantly to workload changes: With Premium SSD v2, performance is no longer tied to disk size. Independently tune IOPS and throughput without downtime, ensuring your database keeps up with real-time demand.
- Free baseline performance: Premium SSD v2 includes built-in baseline performance at no additional cost. Disks up to 399 GiB automatically include 3,000 IOPS and 125 MiB/s, while disks sized 400 GiB and larger include up to 12,000 IOPS and 500 MiB/s.

Premium SSD v2 vs. Premium SSD: What's new?

Pricing

Pricing for Premium SSD v2 is similar to Premium SSD, but will vary depending on the storage, IOPS, and bandwidth configuration set for a Premium SSD v2 disk. Pricing information is available on the pricing page or pricing calculator.

Performance

Premium SSD v2 is designed for I/O-intensive workloads that require sub-millisecond disk latencies, high IOPS, and high throughput at a lower cost. To demonstrate the performance impact, we ran pgbench on Azure Database for PostgreSQL using the test profile below.

Test Setup

To minimize external variability and ensure a fair comparison:
- Client virtual machines and the database server were deployed in the same availability zone in the East US region.
- Compute, region, and availability zones were kept identical. The only variable changed was the storage tier.
- TPC-B benchmark using pgbench with a database size of 350 GiB.

A sketch of this kind of pgbench invocation is shown below.

Test Scenario 1: Breaking the IOPS Ceiling with Premium SSD v2

Premium SSD v2 eliminates the traditional storage bottleneck by scaling linearly up to 80,000 IOPS, while Premium SSD plateaus early due to fixed performance limits. To demonstrate this, we configured each storage tier with its maximum supported IOPS and throughput while keeping all other variables constant. Premium SSD v2 achieves up to 4x higher IOPS at nearly half the cost, without requiring large disk sizes.
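To make the test profile concrete, here is a minimal sketch of how a TPC-B style pgbench run of this shape can be reproduced against a flexible server. The scale factor, server name, user, and duration are illustrative assumptions, not the exact values used in the benchmarks described here.

# Initialize a TPC-B style dataset; scale ~23000 yields roughly 350 GiB (about 16 MB per scale unit)
pgbench -i -s 23000 -h <server-name>.postgres.database.azure.com -U <admin-user> <database>

# One 20-minute workload profile: 32 clients, 32 threads, progress reported every 60 seconds
pgbench -c 32 -j 32 -T 1200 -P 60 -h <server-name>.postgres.database.azure.com -U <admin-user> <database>

Varying -c and -j across runs (for example 32, 64, 128, 256) reproduces the concurrency sweep described in the scenarios that follow.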
Note: Premium SSD requires a 32 TiB disk to reach 20K IOPS, while Premium SSD v2 achieves 80K IOPS even on a 160 GiB disk. We used a 1 TiB disk in this test to get a larger scaling factor for the pgbench dataset.

We ran pgbench across five workload profiles, ranging from 32 to 256 concurrent clients, with each test running for 20 minutes. The results go beyond incremental improvements and highlight a material shift in how applications scale with Premium SSD v2.

Throughput Scaling

As concurrency increases, Premium SSD quickly reaches its IOPS limits while Premium SSD v2 continues to scale.
- At 32 clients: Premium SSD v2 achieved 10,562 TPS vs. 4,123 TPS on Premium SSD, a 156% performance improvement.
- At 256 clients: Premium SSD v2 achieved over 43,000 TPS, a 279% improvement compared to the 11,465 TPS observed on Premium SSD.

Latency Stability

Throughput indicates how much work is done, while latency reflects how quickly users experience it. Premium SSD v2 maintains consistently low latency even as the workload increases.
- Reduced wait times: 61–74% lower latency across all test phases.
- Consistency under load: Premium SSD latency increased to 22.3 ms, while Premium SSD v2 maintained a latency of 5.8 ms, remaining stable even under peak load.

IOPS Behavior

The comparison below summarizes the IOPS behavior observed during benchmarking for both storage tiers.
- IOPS: Premium SSD has a lower baseline and hits its limits early; Premium SSD v2 delivers ~2× higher IOPS at low concurrency and up to 4× higher IOPS at peak load.
- IOPS plateau: Premium SSD throughput stalls at ~20K IOPS from 64 through 256 clients; Premium SSD v2 scales from ~29K IOPS (32 clients) to ~80K IOPS (256 clients).
- Additional clients: On Premium SSD, adding clients does not increase throughput; on Premium SSD v2, additional clients continue to drive higher throughput.
- Primary bottleneck: Premium SSD makes storage the bottleneck early; on Premium SSD v2, no single bottleneck was observed.
- Scaling behavior: Premium SSD stops scaling early; Premium SSD v2 shows true linear scaling with workload demand.
- Resource utilization: With Premium SSD, disk saturation leaves CPU and memory underutilized; Premium SSD v2 shows balanced utilization across IOPS, CPU, and memory.
- Key takeaway: With Premium SSD, storage limits performance before compute is fully used; Premium SSD v2 unlocks higher throughput and lower latency by fully utilizing compute resources.

Test Scenario 2: Better Performance at the Same Price

At the same price point, Premium SSD v2 delivers higher throughput and lower latency than Premium SSD without requiring any application changes. To demonstrate this, we ran multiple pgbench tests using two workload configurations, 8 clients / 8 threads and 32 clients / 32 threads, with each run lasting 20 minutes. Results were consistent across all runs, with Premium SSD v2 consistently outperforming Premium SSD. Both configurations cost $578/month; the only difference is storage performance.

Results:
- Moderate concurrency (8 clients): Premium SSD v2 delivered approximately 154% higher throughput (transactions per second) than Premium SSD (1,813 TPS vs. 715 TPS), while average latency decreased by about 60% (from ~11.1 ms to ~4.4 ms).
- High concurrency (32 clients): The performance gap widens as concurrency grows. Premium SSD v2 delivered about 169% higher throughput than Premium SSD (3,643 TPS vs. ~1,352 TPS) and reduced average latency by around 67% (from ~26.3 ms to ~8.7 ms).

IOPS Behavior

In the 8-client, 8-thread test, Premium SSD reached its IOPS ceiling early, operating at 100% utilization, while Premium SSD v2 retained approximately 30% headroom under the same workload, delivering 8,037 IOPS vs. 3,761 IOPS with Premium SSD.
When the workload increased to 32 clients and 32 threads, both tiers approached their IOPS limits; however, Premium SSD v2 sustained a significantly higher performance ceiling, delivering approximately 2.75x higher IOPS (13,620 vs. 4,968) under load.

Key takeaway: With Premium SSD v2, you do not need to choose between cost and performance; you get both. At the same price, applications run faster, scale further, and maintain lower latency without any code changes.

Migrate from Premium SSD to Premium SSD v2

Migrating is simple and fast. You can migrate from Premium SSD to Premium SSD v2 using the two strategies below with minimal downtime. These methods are generally quicker than logical migration strategies, such as exporting and restoring data using pg_dump and pg_restore.
- Restore from Premium SSD to Premium SSD v2
- Migrate using read replicas

When migrating from Premium SSD to Premium SSD v2, using a virtual endpoint helps keep downtime to a minimum and allows applications to continue operating without requiring configuration changes after the migration. After the migration completes, you can stop the original server until your backup requirements are met. Once the required backup retention period has elapsed and all new backups are available on the new server, the original server can be safely deleted.

Region Availability & Features Supported

Premium SSD v2 is available in 48 regions worldwide for Azure Database for PostgreSQL – Flexible Server. For the most up-to-date information on regional availability, supported features, and current limitations, refer to the official Premium SSD v2 documentation.

Getting Started: To learn more, review the official documentation for storage configuration available with Azure Database for PostgreSQL.

Your feedback is important to us. Have suggestions, ideas, or questions? We would love to hear from you: https://aka.ms/pgfeedback.
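As an illustration of the independent capacity and performance scaling discussed above, the following sketch shows how a Premium SSD v2 server might be provisioned and later retuned with the Azure CLI. Treat the parameter names (--storage-type, --iops, --throughput) and values as assumptions to verify against az postgres flexible-server create --help for your CLI version; the resource group and server name are hypothetical.

# Create a flexible server on Premium SSD v2 with explicitly provisioned performance (illustrative values)
az postgres flexible-server create --resource-group my-rg --name my-pg-server --storage-type PremiumV2_LRS --storage-size 1024 --iops 20000 --throughput 750

# Later, scale IOPS and throughput independently of capacity as the workload grows
az postgres flexible-server update --resource-group my-rg --name my-pg-server --iops 40000 --throughput 1000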
OneDrive on MacOS Eating Up Huge Amounts of Storage

Hardware: M3 Max MacBook Pro 14" (14 Core CPU/30 Core GPU, 36GB, 1TB)
Software: MacOS 26.4
OneDrive Version: 26.002.0105 (Mac App Store version)

I use OneDrive to sync files between my PC (primarily used for gaming) and my MacBook Pro (pretty much everything else), mostly because I'm paying for Office 365. I've noticed in the MacOS storage section of Settings that there's something taking up ~170 GB of storage. Digging around, I found the culprit: Library > Cloud Storage > OneDrive is currently sucking up ~143 GB of storage. It seems like OneDrive is keeping large numbers of files stored locally. I legitimately have no idea why this is the way it is, or how to solve the issue (or if there is even a resolution). Like, why bother with Files On-Demand when the application is going to store half the files locally anyway?
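Not a fix, but one way to see which synced folders are actually being kept local is to measure the on-disk size of the OneDrive folder under ~/Library/CloudStorage from Terminal (the exact folder name varies by account, so the wildcard below is an assumption):

# Show how much local disk each top-level OneDrive folder is really using
du -sh ~/Library/CloudStorage/OneDrive-*/* | sort -h

Folders that report close to their full size there are fully downloaded; right-clicking them in Finder and choosing Free Up Space should convert them back to online-only files and reclaim the space.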
Announcing ReFS Boot for Windows Server Insiders

We're excited to announce that Resilient File System (ReFS) boot support is now available for Windows Server Insiders in Insider Preview builds. For the first time, you can install and boot Windows Server on an ReFS-formatted boot volume directly through the setup UI. With ReFS boot, you can finally bring modern resilience, scalability, and performance to your server's most critical volume — the OS boot volume.

Why ReFS Boot?

Modern workloads demand more from the boot volume than NTFS can provide. ReFS was designed from the ground up to protect data integrity at scale. By enabling ReFS for the OS boot volume, we ensure that even the most critical system data benefits from advanced resilience, future-proof scalability, and improved performance. In short, ReFS boot means a more robust server right from startup, with several benefits:
- Resilient OS disk: ReFS improves boot-volume reliability by detecting corruption early and handling many file-system issues online without requiring chkdsk. Its integrity-first, copy-on-write design reduces the risk of crash-induced corruption to help keep your system running smoothly.
- Massive scalability: ReFS supports volumes up to 35 petabytes (35,000 TB) — vastly beyond NTFS's typical limit of 256 TB. That means your boot volume can grow with future hardware, eliminating capacity ceilings.
- Performance optimizations: ReFS uses block cloning and sparse provisioning to accelerate I/O-heavy scenarios — enabling dramatically faster creation or expansion of large fixed-size VHD(X) files and speeding up large file copy operations by copying data via metadata references rather than full data movement.

Maximum Boot Volume Size: NTFS vs. ReFS
- NTFS boot volume: typically up to 256 TB
- ReFS boot volume: up to 35 PB

Resiliency Enhancements with ReFS Boot (ReFS boot volume vs. NTFS boot volume)
- Metadata checksums: Yes vs. No
- Integrity streams (optional): Yes vs. No
- Proactive error detection (scrubber): Yes vs. No
- Online integrity (no chkdsk): Yes vs. No

Check out Microsoft Learn for more information on ReFS resiliency enhancements.

Performance Enhancements with ReFS Boot (ReFS boot volume vs. NTFS boot volume)
- Fixed-size VHD creation: seconds vs. minutes
- Large file copy operations: milliseconds to seconds (independent of file size) vs. seconds to minutes (linear with file size)
- Sparse provisioning: supported vs. not supported

Check out Microsoft Learn for more information on ReFS performance enhancements.

Getting Started with ReFS Boot

Ready to try it out? Here's how to get started with ReFS boot on Windows Server Insider Preview:

1. Update to the latest Insider build: Ensure you're running the most recent Windows Server vNext Insider Preview (join Windows Server Insiders if you haven't already). Builds from 2/11/26 or later (minimum build number 29531.1000.260206-1841) include ReFS boot in setup.

2. Choose ReFS during setup: When installing Windows Server, format the system (C:) partition as ReFS in the installation UI. Note: ReFS boot requires UEFI firmware and does not support legacy BIOS boot; as a result, ReFS boot is not supported on Generation 1 VMs.

3. Complete installation & verify: Finish the Windows Server installation as usual. Once it boots, confirm that your C: drive is using ReFS (for example, by running fsutil fsinfo volumeInfo C: or checking the drive properties; a short verification sketch follows below). That's it – your server is now running with an ReFS boot volume.

A step-by-step demo video shows how to install Windows Server on an ReFS-formatted boot volume, including UEFI setup, disk formatting, and post-install verification.
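To expand on the verification step above, here is a minimal sketch of confirming the boot volume's file system from an elevated PowerShell prompt; the properties shown are standard Storage-module output, and the fsutil command is the same one mentioned in step 3.

# Confirm the OS volume is formatted with ReFS
Get-Volume -DriveLetter C | Select-Object DriveLetter, FileSystemType, HealthStatus, Size, SizeRemaining

# The in-box fsutil tool reports similar details, including the file system name
fsutil fsinfo volumeInfo C: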
Call to Action

In summary, ReFS boot brings future-proof resiliency, scalability, and performance improvements to the Windows Server boot volume — reducing downtime, removing scalability limits, and accelerating large storage operations from day one. We encourage you to try ReFS boot on your servers and experience the difference for yourself. As always, we value your feedback. Please share your feedback and questions on the Windows Server Insiders Forum.

— Christina Curlette (and the Windows Server team)

Announcing Windows Server vNext Preview Build 29558
Hello Windows Server Insiders!

Today we are pleased to release a new build of the next Windows Server Long-Term Servicing Channel (LTSC) Preview that contains both the Desktop Experience and Server Core installation options for Datacenter and Standard editions, and Azure Edition (for VM evaluation only). Branding remains Windows Server 2025 in this preview; when reporting issues, please refer to Windows Server vNext preview.

Build 29531 established a new Server preview baseline build. Please perform a clean install of Build 29531 (or later) using the installation media linked below.

Please note: Upgrades from earlier Windows Server vNext preview builds older than 29531 are not supported. We encourage all Windows Server vNext preview users to perform a clean install using 29531 or later to successfully upgrade to future Windows Server vNext preview builds. While upgrades from earlier Windows Server previews (Build 26525 and older) are not technically blocked by setup.exe, a number of known issues have been identified related to upgrades, necessitating the establishment of a new baseline build for our Server vNext Preview Program. The new baseline build (29531) will not be Flighted due to upgrade issues. Flighting support resumed with preview build 29550 or later.

What's New

NVMe-over-Fabrics (NVMe-oF) extends the NVMe protocol—originally designed for local PCIe-attached SSDs—across a network fabric. Instead of using legacy SCSI-based protocols such as iSCSI or Fibre Channel, NVMe-oF allows a host to communicate directly with remote NVMe controllers using the same NVMe command set used for local devices. In this Insider build, Windows Server supports:
- NVMe-oF over TCP (NVMe/TCP), allowing NVMe-oF to run over standard Ethernet networks without specialized hardware.
- NVMe-oF over RDMA (NVMe/RDMA), enabling low-latency, high-throughput NVMe access over RDMA-capable networks (for example, RoCE or iWARP) using supported RDMA NICs.

For more information, please visit: Introducing the Windows NVMe-oF Initiator Preview in Windows Server Insiders Builds | Microsoft Community Hub

ReFS Boot is enabled for Windows Server vNext preview builds. Known Limitations: ReFS Boot systems create a minimum 2 GB WinRE partition. When WinRE cannot be updated due to space constraints, the system may disable WinRE. Disabling WinRE does not remove the partition. If the WinRE partition is deleted and the boot volume is extended over it, this operation is unrecoverable without a clean install. For more information, please visit: Resilient File System (ReFS) overview | Microsoft Learn

Feedback Hub app is available for Server Desktop users! The app should automatically update with the latest version, but if it does not, simply Check for updates in the app's settings tab.

Known Issues

- [NEW] Thin Provisioning fails on clean installs in this build (29558). Users selecting Thin Provisioning when attempting clean installs of this build may experience failures. This issue is understood and will be fixed in a future release.
- Upgrading from earlier builds of Windows Server vNext previews (26525 or older) is not supported. Please perform a clean install of build 29531 or later: Download Windows Server Insider Preview (microsoft.com). Users may experience failures when attempting to upgrade from earlier previews (build 26525 and older). VMs may fail to upgrade or start after upgrade from older preview builds, impacting live migration and failover cluster scenarios.
- Flighting: The label for this flight may incorrectly reference Windows 11.
However, when selected, the package installed is the Windows Server vNext update. Please ignore the label and proceed with installing your flight. This issue will be addressed in a future release.

Available Downloads

Downloads to certain countries may not be available. See Microsoft suspends new sales in Russia - Microsoft On the Issues.
- Windows Server Long-Term Servicing Channel Preview in ISO format in 18 languages, and in VHDX format in English only.
- Windows Server Datacenter Azure Edition Preview in ISO and VHDX format, English only.
- Microsoft Server Languages and Optional Features Preview

Keys (valid for preview builds only):
- Server Standard: MFY9F-XBN2F-TYFMP-CCV49-RMYVH
- Datacenter: 2KNJJ-33Y9H-2GXGX-KMQWH-G6H67
- Azure Edition does not accept a key.

Symbols: Available on the public symbol server – see Using the Microsoft Symbol Server.

Expiration: This Windows Server Preview will expire September 15, 2026.

How to Download

Registered Insiders may navigate directly to the Windows Server Insider Preview download page. If you have not yet registered as an Insider, see GETTING STARTED WITH SERVER on the Windows Insiders for Business portal.

We value your feedback!

The most important part of the release cycle is to hear what's working and what needs to be improved, so your feedback is extremely valued. Please use the new Feedback Hub app for Windows Server if you are running a Desktop version of Server. If you are using a Core edition, or if you are unable to use the Feedback Hub app, you can use your registered Windows 10 or Windows 11 Insider device and use the Feedback Hub application. In the app, choose the Windows Server category and then the appropriate subcategory for your feedback. In the title of the Feedback, please indicate the build number you are providing feedback on as shown below to ensure that your issue is attributed to the right version:

[Server #####] Title of my feedback

See Give Feedback on Windows Server via Feedback Hub for specifics. The Windows Server Insiders space on the Microsoft Tech Communities supports preview builds of the next version of Windows Server. Use the forum to collaborate, share and learn from experts. For versions that have been released to general availability in market, try the Windows Server for IT Pro forum or contact Support for Business.

Diagnostic and Usage Information

Microsoft collects this information over the internet to help keep Windows secure and up to date, troubleshoot problems, and make product improvements. Microsoft server operating systems can be configured to turn diagnostic data off, send Required diagnostic data, or send Optional diagnostic data. During previews, Microsoft asks that you change the default setting to Optional to provide the best automatic feedback and help us improve the final product. Administrators can change the level of information collection through Settings. For details, see http://aka.ms/winserverdata. Also see the Microsoft Privacy Statement.

Terms of Use

This is pre-release software - it is provided for use "as-is" and is not supported in production environments. Users are responsible for installing any updates that may be made available from Windows Update. All pre-release software made available to you via the Windows Server Insider program is governed by the Insider Terms of Use.

Windows NVMe-oF Initiator connect error
Hi everyone, I am trying to test the new Windows NVMe-oF Initiator with build 29550.1000, but I am not able to connect to a remote storage array. I get the error "IOCTL_NVMEOF_CONNECT_CONTROLLER failed with error code 0x1f" (see screenshot). Any idea what this error means or what could be wrong?
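Not an answer to the underlying initiator issue, but one general troubleshooting step: if 0x1f is a standard Win32 error code (an assumption, since the initiator may use its own code space), it can be decoded from a command prompt or PowerShell like this:

# 0x1f is 31 in decimal; net helpmsg prints the Win32 error text for a decimal code
net helpmsg 31

# certutil accepts the hex form directly and prints the matching error name and string
certutil -error 0x1f

Win32 error 31 is ERROR_GEN_FAILURE ("A device attached to the system is not functioning"), which is a generic failure rather than a parameter or syntax error.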
Introducing the Windows NVMe-oF Initiator Preview in Windows Server Insiders Builds

What Is NVMe-over-Fabrics?

NVMe-over-Fabrics (NVMe-oF) extends the NVMe protocol—originally designed for local PCIe-attached SSDs—across a network fabric. Instead of using legacy SCSI-based protocols such as iSCSI or Fibre Channel, NVMe-oF allows a host to communicate directly with remote NVMe controllers using the same NVMe command set used for local devices. In this Insider build, Windows Server supports:
- NVMe-oF over TCP (NVMe/TCP), allowing NVMe-oF to run over standard Ethernet networks without specialized hardware.
- NVMe-oF over RDMA (NVMe/RDMA), enabling low-latency, high-throughput NVMe access over RDMA-capable networks (for example, RoCE or iWARP) using supported RDMA NICs.

Why NVMe-oF on Windows Server?

For Windows Server deployments, NVMe-oF builds on the same principles as Native NVMe support: helping you reduce protocol overhead, improve scalability, and better align your storage stack with modern hardware. For Windows Server customers, NVMe-oF offers:
- Lower overhead networked storage access — NVMe-oF has less protocol overhead than iSCSI, helping extract the performance of modern NVMe devices while preserving the parallelism and efficiency of NVMe.
- Flexible infrastructure choices — NVMe-oF supports both TCP and RDMA transports, allowing customers to choose between standard Ethernet-based deployments or low-latency RDMA-capable networks based on their infrastructure and performance goals.
- A forward-looking storage foundation — NVMe-oF is designed to scale across multiple controllers, namespaces, and queues, making it a strong foundation for future disaggregated and software-defined storage architectures.

This Insider release represents the first step in bringing NVMe-oF capabilities natively to Windows Server.

What's Included in This Insider Release

In this Windows Server Insider build, you can evaluate the following NVMe-oF capabilities:
- An inbox NVMe-oF initiator with NVMe/TCP and NVMe/RDMA support
- A new command-line utility, nvmeofutil.exe, for configuration and management
- Manual configuration of discovery and I/O connections
- Automatic exposure of NVMe namespaces as Windows disks once connected

Note: PowerShell cmdlets are not available yet. All configuration is performed using nvmeofutil.exe.

Getting Started with nvmeofutil.exe

To start evaluating NVMe-oF in this build, you'll use nvmeofutil.exe, the command-line utility included with supported Windows Server Insider builds.

1. Install the Latest Windows Server Insiders Build
Ensure you are running a Windows Server Insiders build that includes:
- The inbox NVMe-oF initiator with NVMe/TCP and NVMe/RDMA support
- The nvmeofutil.exe utility

2. Open an Elevated Command Prompt
All NVMe-oF commands must be run from an administrator command prompt.

3. List Available NVMe-oF Initiator Adapters

nvmeofutil.exe list -t ia

This command displays the available NVMe-oF initiator adapters on the system.

4. Enumerate Host Gateways

nvmeofutil.exe list -t hg -ia <AdapterNumber>

Host gateways represent transport-specific endpoints, such as NVMe/TCP over IPv4.

5. Configure an I/O Subsystem Port
Tip: You'll need three values from your target configuration: the Subsystem NQN, the target IP/DNS, and the TCP port. If you haven't set up a target yet, see the Target Setup section below for a quick Linux-based configuration and where to find these values.
nvmeofutil.exe add -t sp -ia <Adapter> -hg <HostGateway> -dy true -pi <PortNumber> -nq <SubsystemNQN> -ta <TargetAddress> -ts <ServiceId>

This defines the connection parameters to the remote NVMe-oF target.

6. Connect and Use the Namespace

nvmeofutil.exe connect -ia <Adapter> -sp <SubsystemPort>

Once connected, the NVMe namespace appears as a disk in Windows and can be partitioned and formatted using standard Windows tools.

Target Setup (Recommendations for Early Evaluation)

If you plan to evaluate NVMe-oF with an existing storage array, check with your SAN vendor to confirm support and get configuration guidance. Where possible, we also encourage you to validate interoperability using your production storage platform. For early evaluation and lab testing, the simplest and most interoperable option is to use a Linux-based NVMe-oF target, as described below.

To evaluate the inbox Windows NVMe-oF initiator in this Insider release, you'll need an NVMe-oF target that can export a block device as an NVMe namespace over TCP.

Recommended: Linux kernel NVMe-oF target (nvmet) over TCP

For early testing, the simplest and most interoperable option is the Linux kernel NVMe target ("nvmet"). It's straightforward to stand up in a lab and is widely used for basic NVMe-oF interoperability validation.

Lab note: The example below uses "allow any host" to reduce friction during evaluation. In production environments, you should restrict access to specific host NQNs instead.

What You'll Need
- A Linux system (physical or VM)
- A block device to export (an NVMe SSD, SATA SSD, a virtual disk, etc.)
- IP connectivity to your Windows Server Insider machine
- A TCP port opened between initiator and target (you'll choose a port below)

VMs are fine for functional evaluation. For performance testing, you'll want to move to physical hosts and realistic networking later.

Option A — Configure nvmet Directly via configfs (Minimal, Copy/Paste Friendly)

On the Linux target, run the following as root (or with sudo). This configures one NVMe-oF subsystem exporting one namespace over NVMe/TCP.

1) Load kernel modules and mount configfs

sudo modprobe nvmet
sudo modprobe nvmet-tcp
# Required for nvmet configuration
sudo mount -t configfs none /sys/kernel/config

2) Create a subsystem (choose an NQN) and allow host access

Pick a subsystem name/NQN. Use a proper NQN format to avoid collisions on shared networks (example shown).

SUBSYS="nqn.2026-02.com.contoso:win-nvmeof-test"
sudo mkdir -p /sys/kernel/config/nvmet/subsystems/$SUBSYS

# Lab-only: allow any host to connect
echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/$SUBSYS/attr_allow_any_host > /dev/null

3) Add a namespace (export a local block device)

Choose a block device on the target (example: /dev/nvme0n1). Be careful: you are exporting the raw block device.
DEV="/dev/nvme0n1"   # <-- replace with your device (e.g., /dev/sdb)
sudo mkdir -p /sys/kernel/config/nvmet/subsystems/$SUBSYS/namespaces/1
echo -n $DEV | sudo tee /sys/kernel/config/nvmet/subsystems/$SUBSYS/namespaces/1/device_path > /dev/null
echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/$SUBSYS/namespaces/1/enable > /dev/null

4) Create a TCP port (listener) and bind the subsystem

Choose:
- TRADDR = the Linux target's IP address on the test network
- TRSVCID = the TCP port (commonly 4420, but you can use any free TCP port)

PORTID=1
TRADDR="192.168.1.92"   # <-- replace with target IP
TRSVCID="4420"          # <-- TCP port
sudo mkdir -p /sys/kernel/config/nvmet/ports/$PORTID
echo -n $TRADDR | sudo tee /sys/kernel/config/nvmet/ports/$PORTID/addr_traddr > /dev/null
echo -n tcp | sudo tee /sys/kernel/config/nvmet/ports/$PORTID/addr_trtype > /dev/null
echo -n $TRSVCID | sudo tee /sys/kernel/config/nvmet/ports/$PORTID/addr_trsvcid > /dev/null
echo -n ipv4 | sudo tee /sys/kernel/config/nvmet/ports/$PORTID/addr_adrfam > /dev/null

# Bind subsystem to port
sudo ln -s /sys/kernel/config/nvmet/subsystems/$SUBSYS \
  /sys/kernel/config/nvmet/ports/$PORTID/subsystems/$SUBSYS

5) Quick validation (optional, from any Linux host with nvme-cli)

If you have a Linux host handy, nvme discover will confirm the target is advertising the subsystem and will show the subnqn value you'll use from Windows.

sudo nvme discover -t tcp -a 192.168.1.92 -s 4420

Mapping the Target Values to Your Windows nvmeofutil.exe Steps

In your Windows steps, you already define the key connection parameters in the Subsystem Port add/connect flow. Use these mappings:
- SubsystemNQN (-nq) → the subsystem name/NQN you created (example: nqn.2026-02.com.contoso:win-nvmeof-test)
- TargetAddress (-ta) → the Linux target IP address (example: 192.168.1.92)
- ServiceId (-ts) → the TCP port you used (example: 4420)

Option B — If You Prefer a Tool-Based Setup: nvmetcli

If you'd rather not manipulate configfs directly, nvmetcli provides an interactive shell and can save/restore configurations from JSON (useful for repeating the setup across reboots in a lab). At a high level, nvmetcli can:
- Create subsystems and namespaces
- Configure ports (including TCP)
- Manage allowed hosts (or allow any host in controlled environments)
- Save/restore configs (for example, /etc/nvmet/config.json)

Optional (Advanced): SPDK NVMe-oF Target

If you already use SPDK or want to explore higher-performance user-space targets, SPDK's NVMe-oF target supports TCP and RDMA and is configured via JSON-RPC. For early evaluation, the Linux kernel target above is usually the quickest path.

Known Limitations

As you evaluate this early Insider release, keep the following limitations in mind:
- Configuration is CLI-only (no GUI or PowerShell cmdlets yet)
- No multipathing
- Limited recovery behavior in some network failure scenarios

These areas are under active development.

Try It and Share Feedback

We encourage you to try NVMe-oF in your lab or test environment and share your experience on Windows Server Insiders Discussions so the engineering team can review public feedback in one place. For private feedback or questions that can't be shared publicly, you can also reach us at nvmeofpreview@microsoft.com. We look forward to your feedback as we take the next steps in modernizing remote storage on Windows Server.

— Yash Shekar (and the Windows Server team)
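As a follow-up to step 6 in the Windows walkthrough above, here is a minimal sketch of locating and formatting the newly exposed namespace from PowerShell once the connect command succeeds. The cmdlets are standard in-box Storage cmdlets; the disk number, volume label, and the exact bus type reported for an NVMe-oF namespace are assumptions to confirm on your own system.

# Identify the newly attached disk (match it by size or friendly name)
Get-Disk | Sort-Object Number | Format-Table Number, FriendlyName, BusType, Size, PartitionStyle

# Bring it into service like any other data disk (disk number 2 is illustrative)
# If the disk reports Offline, run: Set-Disk -Number 2 -IsOffline $false
Initialize-Disk -Number 2 -PartitionStyle GPT
New-Partition -DiskNumber 2 -UseMaximumSize -AssignDriveLetter | Format-Volume -FileSystem NTFS -NewFileSystemLabel "nvmeof-ns1"

Piping New-Partition into Format-Volume is the usual one-liner for provisioning a fresh data disk; nothing about it is specific to NVMe-oF.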
Capabilities of NVMe-oF so far?

Started playing around with NVMe-oF last night and got a successful connection to TrueNAS. I'm wondering what the current capabilities and limitations are. As far as I can tell, you can't cluster the disk, which is what I was primarily interested in. I was able to connect the same NQN endpoint to two servers, but the WSFC cluster wouldn't detect it as a cluster-available disk. Also, the disk seems to disappear on every reboot and has to be set up again, so is there no auto-mount yet? I understand this is an early preview, so I don't have many expectations; I would just like some clarity. Thank you.

anyone else keeping files on two clouds just in case onedrive goes down?
Hey everyone. I've seen a few posts here about files going missing or sync just stopping randomly, and it got me thinking about how much we all just trust one cloud completely.

My situation: I have OneDrive for work, Google Drive for stuff I share with clients, and Dropbox still has old project files from years ago that I never got around to moving. So I'm constantly opening three different tabs just to find one file. Pretty annoying, honestly.

A couple of months back OneDrive just didn't sync a folder for a few days. I only found out when I needed something on another laptop. Nothing was actually lost, but it made me a bit nervous about having everything in one place with no backup plan.

So I started looking for something that just shows all three drives together. I found a tool called All Cloud Hub (allcloudhub.com) and have been using it since. It puts all your connected drives into one dashboard so you can search across all of them at once instead of checking each one separately. You can also move files directly from OneDrive to Google Drive without downloading anything to your computer first, which saves a lot of time.

It connects through OAuth so your login details never go to them, and your files stay in your own accounts. Nothing gets copied to their servers. The free plan covers up to 3 cloud accounts, which was enough for me.

It won't fix Microsoft's sync issues obviously, but at least if something breaks on one side your files are still somewhere else.

Does anyone else do something like this, or is it just me being paranoid lol