Forum Discussion

UdayKumarDevarapalli
Copper Contributor
Jan 02, 2026

Delivery Optimization breaking Windows 11 update downloads?

We started seeing Delivery Optimization–related issues with Windows updates after upgrading devices to Windows 11 24H2.

In our SCCM environment, Windows updates begin downloading but consistently fail or stall partway through the download. In many cases, the download restarts multiple times and eventually errors out. This behavior is consistent across multiple devices and different boundaries.

These same devices were patching normally prior to the 24H2 upgrade. Since moving to 24H2, patching has become unreliable, especially for larger updates.

From what we’re seeing, this doesn’t look like a traditional content or boundary issue. It feels like Delivery Optimization is failing mid-transfer or not resuming downloads correctly after the OS upgrade.

So far we’ve checked the following:
- Boundaries and boundary groups are unchanged
- Content is available and distributed correctly on DPs
- No recent SCCM site or infrastructure changes
- Network connectivity looks normal

On the client side, we’ve been reviewing:
- DataTransferService.log (downloads start but fail or restart mid-way)
- DeliveryOptimization logs (showing repeated retries / stalled transfers)
- CAS.log and LocationServices.log (content location looks normal)
- WUAHandler.log (update detection looks fine)

Overall, detection and policy seem healthy — the issue appears during the actual download phase.
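For anyone comparing notes, this is the kind of read-only snapshot the DO service itself can give on an affected client (built-in DeliveryOptimization cmdlets; the output path is just an example):

```powershell
# Read-only snapshot of what the DO service itself reports on an affected client.
# These are the built-in DeliveryOptimization cmdlets on Windows 10/11; the output
# file path below is just an example location.

Get-DeliveryOptimizationStatus | Format-List *   # per-file state, bytes transferred, sources
Get-DeliveryOptimizationPerfSnap                 # totals since the DO service last started

# Decoded DO ETL log, useful for matching timestamps against DataTransferService.log
Get-DeliveryOptimizationLog | Out-File "$env:TEMP\dosvc-log.txt"
```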

Has anyone else seen Delivery Optimization downloads stall or fail during Windows patching after upgrading to Windows 11 24H2?
If so, did you find a specific DO setting, policy change, or workaround that stabilized patching?

1 Reply

  • Hi UdayKumarDevarapalli,

    Yes, we’ve seen this exact pattern on Win11 24H2, especially with the newer (much larger) LCU / feature-update payloads: everything looks healthy (scan, policy, content location), then the download phase keeps restarting and eventually throws 0x80D02002 (“no progress within the defined period”).

    A few things to focus on, because they do change the behavior even in ConfigMgr-managed patching:

    1) Check your Delivery Optimization “Download Mode” first (this one bites hard on newer builds)

    If you (or an old baseline) have DownloadMode = 100 (Bypass) set anywhere, be aware that Bypass is deprecated starting in Windows 11 and Microsoft explicitly warns it can cause some content to fail to download. They also note that Bypass isn’t needed for Configuration Manager anyway.

    What typically stabilizes downloads is testing with:

    • DownloadMode = 0 (HTTP only, no peering) to keep things simple and stop P2P behavior

      or, if your DO cloud access is restricted:

    • DownloadMode = 99 (Simple, no DO cloud services)

    This is the quickest “prove it’s DO” test: set mode 0 on a pilot group, retry the same LCU download, and see if the mid-transfer stalls disappear.
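    If it helps, here’s a minimal sketch of what that pilot test can look like on one client. It assumes the mode is coming from GPO/baseline via the usual policy registry key; if Intune or a GPO refresh owns the setting, change it at the source or it’ll just get re-applied:

    ```powershell
    # Pilot sketch: force DownloadMode = 0 (HTTP only, no peering) on a test client, then retry
    # the same deployment. Assumes the value is delivered to the GPO-backed policy key below;
    # if Intune (DeliveryOptimization CSP) or a GPO refresh owns it, change it at the source
    # instead or this will be overwritten at the next policy sync. Run elevated.

    $policyKey = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\DeliveryOptimization'

    # What is configured today? (No key/value = OS default or MDM-delivered policy)
    Get-ItemProperty -Path $policyKey -ErrorAction SilentlyContinue | Select-Object DODownloadMode

    # Set mode 0 for the test
    New-Item -Path $policyKey -Force | Out-Null
    New-ItemProperty -Path $policyKey -Name 'DODownloadMode' -Value 0 -PropertyType DWord -Force | Out-Null

    # Restart the DO service so the new mode takes effect, then retry the update deployment
    Restart-Service -Name DoSvc -Force
    ```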

    2) Strict egress/proxy rules can make DO stall mid-transfer

    DO relies on cloud coordination/metadata and uses byte-range downloads. If something in the network path breaks range requests or blocks DO cloud endpoints, you’ll see retries and “no progress” timeouts. Microsoft lists the DO cloud/metadata hostnames you may need to allow.

    So either:

    • allow the DO endpoints per the doc, or
    • avoid the dependency by using mode 99 for affected networks.
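    To sanity-check the range-request side specifically, something like this quick sketch works; the URL is a placeholder, so pull a real content URL from the failing client’s logs first:

    ```powershell
    # Rough check of whether the egress path honors HTTP byte-range requests (which DO relies on).
    # $url is a placeholder: take a real content URL from DataTransferService.log or from
    # Get-DeliveryOptimizationLog on a failing client. curl.exe (shipped with Windows 10 1803+)
    # is called explicitly so PowerShell's 'curl' alias for Invoke-WebRequest doesn't interfere.

    $url = 'https://example.invalid/payload.cab'   # <-- replace with a URL from the client logs

    # Request only the first 1 KB: 206 = range honored, 200 = range ignored (full body returned).
    # If clients go out via an explicit proxy, add --proxy <host:port> so the test takes the same path.
    curl.exe -sS -r 0-1023 -o NUL -w '%{http_code} %{size_download}\n' $url

    # Raw TCP reachability to the content host on 443 (note: this bypasses any HTTP proxy)
    Test-NetConnection -ComputerName ([uri]$url).Host -Port 443
    ```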

    3) If you’re using delta/express behavior, try disabling it as a workaround

    Even when it “looks like” a normal DP download in CAS/LocationServices, certain update payload types can invoke delta/DO-style behavior. As a workaround, test turning off the “Allow clients to download delta content when available” client setting (the Express/delta content option under Software Updates) for a small collection and compare results. (This has been a recurring fix for “DO-ish” download failures in ConfigMgr scenarios.)

    4) Quick client-side cleanup that often helps (especially right after an OS upgrade)

    On a couple of broken clients, try clearing the DO cache and restarting the service (the DO cache can get funky after upgrades). Microsoft also documents clearing the DO cache as part of DO troubleshooting.
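    Roughly, on one of the broken clients (elevated prompt):

    ```powershell
    # Quick DO reset on a broken client: flush the cache and restart the service,
    # then watch the next retry of the same deployment.

    Delete-DeliveryOptimizationCache -Force
    Restart-Service -Name DoSvc -Force

    # Confirm the counters look clean and watch the next download attempt
    Get-DeliveryOptimizationPerfSnap
    Get-DeliveryOptimizationStatus
    ```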

    5) What I’d bet on as the most likely fix

    Given it started right after 24H2 and it’s worse on larger updates, my top bet is:

    • You have a DO policy (often legacy) set to Bypass (100), or a DO configuration that isn’t happy post-24H2.
    • Pilot DownloadMode = 0 (or 99 if you block DO cloud), and patching usually becomes boring again.

    If you share your current DO DownloadMode setting (GPO/Intune/registry) and whether you have any outbound filtering/proxy that inspects range requests, people can tell you pretty quickly which branch you’re in.
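    And if you want a quick, read-only way to collect that before replying, something like this works; the non-Policies paths are assumptions about where Intune CSP and local config usually land and can vary by build:

    ```powershell
    # Read-only check of where DownloadMode is coming from, handy for posting back here.
    # The Policies path is the GPO-backed one; the PolicyManager and ...\DeliveryOptimization\Config
    # paths are the usual landing spots for Intune CSP and local config, but treat them as
    # assumptions (they can vary by build).

    $paths = @(
        'HKLM:\SOFTWARE\Policies\Microsoft\Windows\DeliveryOptimization'               # GPO / baseline
        'HKLM:\SOFTWARE\Microsoft\PolicyManager\current\device\DeliveryOptimization'   # Intune CSP (assumption)
        'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\DeliveryOptimization\Config'  # local config (assumption)
    )

    foreach ($p in $paths) {
        $p
        Get-ItemProperty -Path $p -ErrorAction SilentlyContinue |
            Select-Object DODownloadMode, DownloadMode   # value name is usually DODownloadMode
    }
    ```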
