Announcing Native NVMe in Windows Server 2025: Ushering in a New Era of Storage Performance
We’re thrilled to announce the arrival of Native NVMe support in Windows Server 2025—a leap forward in storage innovation that will redefine what’s possible for your most demanding workloads. Modern NVMe (Non-Volatile Memory Express) SSDs now operate more efficiently with Windows Server. This improvement comes from a redesigned Windows storage stack that no longer treats all storage devices as SCSI (Small Computer System Interface) devices—a model originally built for older, slower drives. By eliminating the need to translate NVMe commands into SCSI commands, Windows Server reduces processing overhead and latency, and the entire I/O processing workflow has been redesigned for extreme performance.

This release is the result of close collaboration between our engineering teams and hardware partners, and it serves as a cornerstone in modernizing our storage stack. Native NVMe is now generally available (GA) with an opt-in model (disabled by default as of October’s latest cumulative update for WS2025). Switch to Native NVMe as soon as possible, or you’re leaving performance gains on the table! Stay tuned for more updates from our team as we transition to a dramatically faster, more efficient storage future.

Why Native NVMe and why now?

Modern NVMe devices—like PCIe Gen5 enterprise SSDs capable of 3.3 million IOPS, or HBAs delivering over 10 million IOPS on a single disk—are pushing the boundaries of what storage can do. SCSI-based I/O processing can’t keep up because it uses a single-queue model, originally designed for rotational disks, where protocols like SATA support just one queue with up to 32 commands. In contrast, NVMe was designed from the ground up for flash storage and supports up to 64,000 queues, each capable of handling up to 64,000 commands simultaneously. With Native NVMe in Windows Server 2025, the storage stack is purpose-built for modern hardware—eliminating translation layers and legacy constraints.
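The queue arithmetic above is easy to make concrete. A quick illustrative sketch (the helper function is our own; the 1×32 and 64,000×64,000 figures come from the text):

```python
# Upper bound on commands a host can keep in flight, per the queue models
# described above: SATA exposes one queue of up to 32 commands, while NVMe
# allows up to 64,000 queues of up to 64,000 commands each.

def max_outstanding(queues: int, queue_depth: int) -> int:
    """Theoretical ceiling on simultaneously outstanding commands."""
    return queues * queue_depth

sata_limit = max_outstanding(queues=1, queue_depth=32)
nvme_limit = max_outstanding(queues=64_000, queue_depth=64_000)

print(f"SATA: {sata_limit} outstanding commands")
print(f"NVMe: {nvme_limit:,} outstanding commands "
      f"({nvme_limit // sata_limit:,}x more parallelism)")
```

That parallelism headroom is exactly what a single-queue SCSI translation layer cannot exploit, and what the native stack is built to use.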
Here’s what that means for you:

- Massive IOPS Gains: Direct, multi-queue access to NVMe devices means you can finally reach the true limits of your hardware.
- Lower Latency: Traditional SCSI-based stacks rely on shared locks and synchronization mechanisms in the kernel I/O path to manage resources. Native NVMe enables streamlined, lock-free I/O paths that slash round-trip times for every operation.
- CPU Efficiency: A leaner, optimized stack frees up compute for your workloads instead of storage overhead.
- Future-Ready Features: Native support for advanced NVMe capabilities like multi-queue and direct submission ensures you’re ready for next-gen storage innovation.

Performance Data

Using DiskSpd.exe, basic performance testing shows that with Native NVMe enabled, WS2025 systems can deliver up to ~80% more IOPS and a ~45% savings in CPU cycles per I/O on 4K random read workloads on NTFS volumes when compared to WS2022. This test ran on a host with a dual-socket Intel CPU (208 logical processors, 128 GB RAM) and a Solidigm SB5PH27X038T 3.5 TB NVMe device. The test can be recreated by running "diskspd.exe -b4k -r -Su -t8 -L -o32 -W10 -d30 testfile1.dat > output.dat" and modifying the parameters as desired. Results may vary.

Top Use Cases: Where You’ll See the Difference

Try Native NVMe on servers running your enterprise applications. These gains are not just for synthetic benchmarks—they translate directly to faster database transactions, quicker VM operations, and more responsive file and analytics workloads.

- SQL Server and OLTP: Shorter transaction times, higher IOPS, and lower tail latency under mixed read/write workloads.
- Hyper‑V and virtualization: Faster VM boot, checkpoint operations, and live migration with reduced storage contention.
- High‑performance file servers: Faster large‑file reads/writes and quicker metadata operations (copy, backup, restore).
- AI/ML and analytics: Low‑latency access to large datasets and faster ETL, shuffle, and cache/scratch I/O.
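To quantify a before/after comparison like the one above in your own environment, the IOPS and percentage-gain arithmetic is simple. This helper is our own sketch (the run totals below are hypothetical placeholders, not measured results):

```python
# Turn two measured runs (total I/Os over a fixed duration, e.g. read from
# DiskSpd output) into IOPS figures and a relative gain percentage.

def iops(total_ios: int, seconds: float) -> float:
    """I/O operations per second for one run."""
    return total_ios / seconds

def gain_pct(baseline_iops: float, new_iops: float) -> float:
    """Relative improvement of new_iops over baseline_iops, in percent."""
    return (new_iops / baseline_iops - 1.0) * 100.0

# Hypothetical 30-second runs before and after enabling Native NVMe:
before = iops(total_ios=9_000_000, seconds=30.0)
after = iops(total_ios=16_200_000, seconds=30.0)
print(f"before: {before:,.0f} IOPS, after: {after:,.0f} IOPS, "
      f"gain: {gain_pct(before, after):.0f}%")
```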
How to Get Started

1. Check your hardware: Ensure you have NVMe-capable devices that are currently using the in-box Windows NVMe driver (StorNVMe.sys). Note that some NVMe device vendors provide their own drivers; if a vendor driver is in use instead of the in-box driver, you will not see any difference.
2. Enable Native NVMe: After applying the 2510-B Latest Cumulative Update (or most recent), add the registry key with the following command:

reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Policies\Microsoft\FeatureManagement\Overrides /v 1176759950 /t REG_DWORD /d 1 /f

Alternatively, use this Group Policy MSI to add the policy that controls the feature, then run the local Group Policy Editor to enable the policy (found under Local Computer Policy > Computer Configuration > Administrative Templates > KB5066835 251014_21251 Feature Preview > Windows 11, version 24H2, 25H2). Once Native NVMe is enabled, open Device Manager and confirm that all attached NVMe devices are displayed under the “Storage disks” section.
3. Monitor and Validate: Use Performance Monitor and Windows Admin Center to see the gains for yourself, or try DiskSpd.exe to measure microbenchmarks in your own environment. A quick way to measure IOPS in Performance Monitor is to set up a histogram chart and add a counter for Physical Disk > Disk Transfers/sec (where the selected instance is a drive that corresponds to one of your attached NVMe devices), then run a synthetic workload with DiskSpd. Compare the numbers before and after enabling Native NVMe to see the realized difference in your own environment.

Join the Storage Revolution

This is more than just a feature—it’s a new foundation for Windows Server storage, built for the future. We can’t wait for you to experience the difference. Share your feedback, ask questions, and join the conversation. Let’s build the future of high-performance Windows Server storage together. Send us your feedback or questions at nativenvme@microsoft.com!
— Yash Shekar (and the Windows Server team)

Announcing ReFS Boot for Windows Server Insiders
We’re excited to announce that Resilient File System (ReFS) boot support is now available for Windows Server Insiders in Insider Preview builds. For the first time, you can install and boot Windows Server on an ReFS-formatted boot volume directly through the setup UI. With ReFS boot, you can finally bring modern resilience, scalability, and performance to your server’s most critical volume — the OS boot volume.

Why ReFS Boot?

Modern workloads demand more from the boot volume than NTFS can provide. ReFS was designed from the ground up to protect data integrity at scale. By enabling ReFS for the OS boot volume, we ensure that even the most critical system data benefits from advanced resilience, future-proof scalability, and improved performance. In short, ReFS boot means a more robust server right from startup, with several benefits:

- Resilient OS disk: ReFS improves boot‑volume reliability by detecting corruption early and handling many file‑system issues online without requiring chkdsk. Its integrity‑first, copy‑on‑write design reduces the risk of crash‑induced corruption to help keep your system running smoothly.
- Massive scalability: ReFS supports volumes up to 35 petabytes (35,000 TB) — vastly beyond NTFS’s typical limit of 256 TB. That means your boot volume can grow with future hardware, eliminating capacity ceilings.
- Performance optimizations: ReFS uses block cloning and sparse provisioning to accelerate I/O‑heavy scenarios — enabling dramatically faster creation or expansion of large fixed‑size VHD(X) files and speeding up large file copy operations by copying data via metadata references rather than full data movement.

Maximum Boot Volume Size: NTFS vs. ReFS (chart: 256 TB vs. 35 PB)

Resiliency Enhancements with ReFS Boot

| Feature | ReFS Boot Volume | NTFS Boot Volume |
| --- | --- | --- |
| Metadata checksums | ✅ Yes | ❌ No |
| Integrity streams (optional) | ✅ Yes | ❌ No |
| Proactive error detection (scrubber) | ✅ Yes | ❌ No |
| Online integrity (no chkdsk) | ✅ Yes | ❌ No |

Check out Microsoft Learn for more information on ReFS resiliency enhancements.

Performance Enhancements with ReFS Boot

| Operation | ReFS Boot Volume | NTFS Boot Volume |
| --- | --- | --- |
| Fixed-size VHD creation | Seconds | Minutes |
| Large file copy operations | Milliseconds to seconds (independent of file size) | Seconds to minutes (linear with file size) |
| Sparse provisioning | ✅ | ❌ |

Check out Microsoft Learn for more information on ReFS performance enhancements.

Getting Started with ReFS Boot

Ready to try it out? Here’s how to get started with ReFS boot on Windows Server Insider Preview:

1. Update to the latest Insider build: Ensure you’re running the most recent Windows Server vNext Insider Preview (join Windows Server Insiders if you haven’t already). Builds from 2/11/26 or later (minimum build number 29531.1000.260206-1841) include ReFS boot in setup.
2. Choose ReFS during setup: When installing Windows Server, format the system (C:) partition as ReFS in the installation UI. Note: ReFS boot requires UEFI firmware and does not support legacy BIOS boot; as a result, ReFS boot is not supported on Generation 1 VMs.
3. Complete installation & verify: Finish the Windows Server installation as usual. Once it boots, confirm that your C: drive is using ReFS (for example, by running fsutil fsinfo volumeInfo C: or checking the drive properties).

That’s it – your server is now running with an ReFS boot volume. A step-by-step demo video shows how to install Windows Server on an ReFS-formatted boot volume, including UEFI setup, disk formatting, and post-install verification.
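The block cloning behind the fast VHD-creation and large-file-copy numbers above copies by referencing existing extents instead of moving bytes. The same idea exists on Linux filesystems such as Btrfs and XFS via reflinks; as a rough cross-platform illustration only (this is not the ReFS or Windows API), Python’s os.copy_file_range lets the kernel share extents when the filesystem supports it:

```python
import os
import shutil

def clone_copy(src: str, dst: str, chunk: int = 1 << 20) -> None:
    """Copy src to dst with os.copy_file_range. On reflink-capable
    filesystems the kernel can duplicate extent references instead of
    copying data (the same idea as ReFS block cloning); elsewhere it
    performs a normal in-kernel copy, and we fall back to shutil if
    the syscall is unavailable on this platform."""
    try:
        with open(src, "rb") as fsrc, open(dst, "wb") as fdst:
            remaining = os.stat(src).st_size
            while remaining > 0:
                copied = os.copy_file_range(
                    fsrc.fileno(), fdst.fileno(), min(chunk, remaining))
                if copied == 0:
                    break  # unexpected EOF on the source
                remaining -= copied
    except (OSError, AttributeError):
        shutil.copyfile(src, dst)  # plain byte-for-byte fallback
```

Either way the destination ends up byte-identical to the source; the difference is only in how much data actually moves, which is why a metadata-reference copy finishes in near-constant time regardless of file size.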
Call to Action

In summary, ReFS boot brings future-proof resiliency, scalability, and performance improvements to the Windows Server boot volume — reducing downtime, removing scalability limits, and accelerating large storage operations from day one. We encourage you to try ReFS boot on your servers and experience the difference for yourself. As always, we value your feedback. Please share your feedback and questions on the Windows Server Insiders Forum.

— Christina Curlette (and the Windows Server team)

How the M: Drive came about
In Exchange 2000, we introduced a new feature called IFS, which stands for “Installable File System”. It uses a little-known and even less used feature of NT that allows the OS’s file system (like NTFS or FAT) to be replaced. The initial reason for doing that was as an optimization: it would allow protocols, such as NNTP and SMTP, to transfer MIME messages directly as files. In Exchange 5.5, MIME messages are broken down into MAPI properties and stored in database tables; when they need to be accessed as MIME, they are put back together. In E2K, MIME messages are stored as MIME files in IFS and only converted into MAPI if a MAPI client (such as Outlook) accesses them.

The other perceived benefit of IFS was that the Exchange storage objects could then be made visible through the file system. So you could go to a drive letter (M: was chosen for two reasons: first, “M” for “Mail”, and second, because it was in the middle of the alphabet and least likely to collide either with actual storage drives, which start at A and move up, or mapped network drives, which start at Z and move down), get a list of mailboxes, navigate to mail folders via cmd or Windows Explorer, and look at actual messages. This was considered pretty neat at the time, and since it didn’t seem to be much more work to allow that access, it was thrown in (there may have been other, better reasons, but I’m not aware of them). This ended up causing some challenges down the line, related to the intricacies of how email objects need to be handled and how file access behavior maps onto them.

One of the biggest problems encountered was around security descriptors. This is difficult to explain without a detailed understanding of NT security descriptors, so I will simplify the explanation for the purpose of this discussion. The main part of an NT security descriptor (NTSD) is called a DACL (discretionary access control list). It contains a list of users and groups and what they can do to that object.
There are two main types of entries: allows, which say what an entity can do, and denies, which say what they can’t. The order of this list is very important. A standard sequence of entry types is called “canonical”. NT canonical form calls for a particular sequence; because of legacy issues, MAPI canonical form requires a different sequence of entry types. Applications that modify security expect a particular sequence and will behave erratically if the sequence is wrong. Creating or modifying objects through the M: drive changes the canonical format of the DACLs and results in unexpected security behavior. This is bad.

A related issue has to do with item-level security. E2K also introduced this feature: items in a folder can be secured independently of each other and of the folder. While this has some great uses, for many email systems this level of security is not needed. When a message has the folder default security, it simply references that property on the folder. When a message has its own security, there is an additional property that needs to be stored (this also has an effect on how folder aggregate properties, such as unread count, are computed). Having lots of individual security descriptors can result in both increased storage size and poor performance. When a message is created or modified through the M: drive, it always gets an individual security descriptor stamped on it, even if it is exactly the same as the folder default. This can also lead to unexpected behavior. For instance, if you change the default security on the folder, it will not change the security on any messages in it that have their own security descriptors; they have to be re-secured individually.

Another challenge is in relation to virus scanners. Virus scanners typically look for valid storage drives, spin through all the files on those drives, and check them against virus signatures.
The M: drive appears as a normal drive, so virus scanners were picking it up and processing it. This can have very detrimental effects on system performance and may also result in message corruption in some cases.

Finally, IFS runs in kernel mode. This is a privileged execution path, which means that problems in this area can have much more severe effects (and be harder to track down) than in other areas of Exchange, which all run in user mode. Blue screens are one possibility if something goes wrong.

IFS has given Exchange 2000 and Exchange 2003 a lot of advantages: we maintain content parity for MIME, make MIME message handling faster and more efficient, and increase the performance of such messages when retrieved via internet protocols. But as I described above, there can be problems if IFS is misused via the M: drive. In Exchange 2003 we have disabled the M: drive by default, to help reduce the likelihood that customers will encounter any of the issues described above. I encourage every system administrator to keep this disabled on E2K3 and to disable it on all E2K servers as well.

Jon Avner

Automating Large‑Scale Data Management with Azure Storage Actions
Azure Storage customers increasingly operate at massive scale, with millions or even billions of items distributed across multiple storage accounts. As the scale of the data increases, managing it introduces a different set of challenges. In a recent episode of Azure Storage Talk, I sat down with Shashank, a Product Manager on the Azure Storage Actions team, to discuss how Azure Storage Actions helps customers automate common data management tasks without writing custom code or managing infrastructure. This post summarizes the key concepts, scenarios, and learnings from that conversation. Listen to the full conversation below.

The Problem: Data Management at Scale Is Hard

As storage estates grow, customers often need to:

- Apply retention or immutability policies for compliance
- Protect sensitive or important data from modification
- Optimize storage costs by tiering infrequently accessed data
- Add or clean up metadata (blob index tags) for discovery and downstream processing

Today, many customers handle these needs by writing custom scripts or maintaining internal tooling. This approach requires significant engineering effort, ongoing maintenance, careful credential handling, and extensive testing, especially when operating across millions of items in multiple storage accounts. These challenges become more pronounced as data estates sprawl across regions and subscriptions.

What Is Azure Storage Actions?

Azure Storage Actions is a fully managed, serverless automation platform designed to perform routine data management operations at scale for:

- Azure Blob Storage
- Azure Data Lake Storage

It allows customers to define condition-based logic and apply native storage operations such as tagging, tiering, deletion, or immutability across large datasets without deploying or managing servers.
Azure Storage Actions is built around two main concepts:

Storage Tasks

A storage task is an Azure Resource Manager (ARM) resource that defines:

- The conditions used to evaluate blobs (for example, file name, size, timestamps, or index tags)
- The actions to take when conditions are met (such as changing tiers, adding immutability, or modifying tags)

The task definition is created once and centrally managed.

Task Assignments

A task assignment applies a storage task to one or more storage accounts. This allows the same logic to be reused without redefining it for each account. Each assignment can:

- Run once (for cleanup or one-off processing)
- Run on a recurring schedule
- Be scoped using container filters or excluded prefixes

Walkthrough Scenario: Compliance and Cost Optimization

During the episode, Shashank demonstrated a real-world scenario involving a storage account used by a legal team.

The Goal

- Identify PDF files tagged as important
- Apply a time-based immutability policy to prevent tampering
- Move those files from the Hot tier to the Archive tier to reduce storage costs
- Add a new tag indicating the data is protected
- Move all other blobs to the Cool tier for cost efficiency

The Traditional Approach

Without Storage Actions, this would typically require:

- Writing scripts to iterate through blobs
- Handling credentials and permissions
- Testing logic on sample data
- Scaling execution safely across large datasets
- Maintaining and rerunning the scripts over time

Using Azure Storage Actions

With Storage Actions, the administrator:

- Defines conditions based on file extension and index tags
- Chains multiple actions (immutability, tiering, tagging)
- Uses a built-in preview capability to validate which blobs match the conditions
- Executes the task without provisioning infrastructure

The entire workflow is authored declaratively in the Azure portal and executed by the platform.
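The condition/action pairing in the walkthrough is essentially a pure decision function over blob metadata. A minimal sketch of that logic in script form, roughly what the traditional custom scripts would implement by hand (the field names and action labels here are illustrative, not Storage Actions’ actual schema):

```python
from typing import Dict, List

def plan_actions(blob: Dict) -> List[str]:
    """Mirror the legal-team scenario: important PDFs are made immutable,
    archived, and tagged as protected; everything else moves to Cool."""
    name = str(blob.get("name", ""))
    tags = blob.get("tags", {})
    if name.lower().endswith(".pdf") and tags.get("importance") == "important":
        return [
            "apply-immutability-policy",
            "set-tier:Archive",
            "set-tag:protected=true",
        ]
    return ["set-tier:Cool"]

inventory = [
    {"name": "contract.pdf", "tags": {"importance": "important"}},
    {"name": "meeting-notes.txt", "tags": {}},
]
for blob in inventory:
    print(blob["name"], "->", plan_actions(blob))
```

Storage Actions lets you express this declaratively and run it serverlessly across accounts, with preview before execution, instead of owning a script like this plus its credentials, scaling, and retries.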
Visibility, Monitoring, and Auditability

Azure Storage Actions provides built-in observability:

- Preview conditions allow customers to validate logic against a subset of blobs before execution
- Azure Monitor metrics track task runs, targeted objects, and successful operations
- Execution reports are generated as CSV files for each run, detailing blobs processed, actions performed, and execution status for audit purposes

This makes Storage Actions suitable for scenarios where traceability and review are important.

Common Customer Use Cases

Shashank shared several examples of how customers are using Azure Storage Actions today:

- Financial services: applying immutability and retention policies to call recordings for compliance
- Airlines: cost optimization by tiering or cleaning up blobs based on creation time or size
- Manufacturing: one-time processing to reset or remove blob index tags on IoT-generated data

These scenarios range from recurring automation to one-off operational tasks.

Getting Started and Sharing Feedback

Azure Storage Actions is available in over 40 public Azure regions. To learn more, check out:

- Azure Storage Actions product page: https://azure.microsoft.com/en-us/products/storage-actions
- Azure Storage Actions public documentation: https://learn.microsoft.com/en-us/azure/storage-actions/storage-tasks/storage-task-quickstart-portal
- Azure Storage Actions pricing page: https://azure.microsoft.com/en-us/pricing/details/storage-actions/

For questions or feedback, the team can be reached at storageactions@microsoft.com.

Built an Android app for OneDrive duplicate detection (something I wish Microsoft offered natively)
After years of dealing with duplicate photos and videos in OneDrive, I built a solution and wanted to share it with the community.

**The problem:**
- Samsung Gallery sync creates duplicates in both "Camera Roll" and "Samsung Gallery" folders
- WhatsApp media gets backed up twice (original + shared copy)
- App resets/reinstalls trigger re-uploads with "(1)" suffixes
- No native duplicate detection in OneDrive

**What I built:** OneDrive MediaOps - an Android app that scans for duplicates directly in the cloud. No need to download files to a desktop first.

Key features:
- Cloud-based scanning (no downloads required)
- Algorithm detects visually identical photos even with different filenames
- Preview before deleting
- Batch deletion

**Why cloud-based matters:** With 50GB+ of photos, downloading everything to run a desktop duplicate finder wasn't practical. This scans media directly via the Graph API.

Available on Google Play: https://play.google.com/store/apps/details?id=com.onedrive.mediaops&pcampaignid=web_share

Would love feedback from the community - especially if you've been dealing with the Samsung/OneDrive sync duplicate issue.

NTFS permissions are partially not working
Participant A is sometimes unable to see Participant B’s files. The issue can be resolved by clicking the option "Replace all child object permission entries with inheritable permission entries from this object", but the problem keeps reappearing. Windows Server 2022 Datacenter (VMware 7.1), formatted as NTFS.

Shadow Copy with 6 days delay
Shadow Copy, Windows Server 2022 Datacenter (VMware 7.1):

- The MaxShadowCopies registry value has been increased to 256.
- Shadow copies have been redirected to a different hard disk; FS=NTFS.
- The schedule is set to run three times per day.
- The maximum size is set to “Limit” with 1,700,000 MB.

Current issue: shadow copies were available normally for several months. Now, suddenly, the shadow copies are displayed with a 6‑day delay.

MacOS 14.8.1 OneDrive - Timestamped Sync Root directory?
Hello community, I've attempted a plethora of web searches to find someone else who may have seen this. I use Shared Libraries with my macOS OneDrive, and it seems that within ~/Library/CloudStorage/ there are two versions of the Sync Root: OneDrive-SharedLibraries-*Company*/Directories and OneDrive-SharedLibraries-*Company* (Timestamp)/Directories. The timestamped Sync Root has over 140 GB of reported storage space consumption. Some of the folders already exist in OneDrive-SharedLibraries-*Company*/Directories, but others do not. Has anyone else seen this before, and does anyone know whether the related storage can be freed?

Corrupted VT+ transaction files
We are a small accounting company using VT+ Transaction on a local drive synchronized with OneDrive for backup and file storage. A few days ago, when we tried to open the application, we suddenly started receiving the following error messages: Run Time Error 0 and Run Time Error 440, and the program does not start. According to VT+ support, the program files are corrupted and the data can only be restored up to the year 2022, as the more recent backups are also affected. Somehow the system is overwriting our backups, which makes the latest ones unusable. Any advice on what could cause this and how to resolve the issue? Thanks.