Direct Server Return (DSR) in a nutshell
Published Jun 19 2019 06:00 AM


A question we get asked quite a bit is: "What is Microsoft doing to improve Networking performance?".

To answer that we have today’s blog post, which is about load balancing performance enhancements in SDN networks for Windows Containers and Kubernetes Networks, especially for services.

One of these enhancements is Direct Server Return (DSR) routing for overlay and l2bridge networks.

DSR is available in Windows Server 19H1 or later.

What it is

DSR is an implementation of asymmetric network load distribution in load balanced systems, meaning that the request and response traffic use a different network path.

Using different network paths avoids extra hops and reduces latency, which not only speeds up the response time between the client and the service but also removes some load from the load balancer.
Using DSR is a transparent way to achieve increased network performance for your applications with little to no infrastructure changes.

While DSR will improve your application’s network performance, there are a couple of things to keep in mind when enabling DSR:

  • Persistence is limited to source IP or destination IP (no cookie persistence)
  • SSL offloading on the load balancer will not work, as the load balancer needs to see both inbound and outbound traffic
  • There might be ARP issues with some operating systems
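To see why the asymmetric path matters, the toy model below counts forwarding hops for a full request/response exchange. The hop names are simplified from the diagrams in this post; they are illustrative labels, not real component APIs.

```python
# Toy model contrasting non-DSR and DSR packet paths (illustration only).
NON_DSR_REQUEST = ["POD1", "VMSwitch:Port3", "ROOT namespace (SNAT)",
                   "VMSwitch:Port4", "POD2"]
NON_DSR_RESPONSE = list(reversed(NON_DSR_REQUEST))  # response retraces every hop

DSR_REQUEST = ["POD1", "VMSwitch:Port3", "LB/VFP layer",
               "VMSwitch:Port4", "POD2"]
DSR_RESPONSE = ["POD2", "VMSwitch:Port4", "VMSwitch:Port3", "POD1"]  # skips the LB layer

def round_trip_hops(request, response):
    """Count forwarding hops (edges between components) for a round trip."""
    return (len(request) - 1) + (len(response) - 1)

print("non-DSR round trip:", round_trip_hops(NON_DSR_REQUEST, NON_DSR_RESPONSE))  # 8
print("DSR round trip:    ", round_trip_hops(DSR_REQUEST, DSR_RESPONSE))          # 7
```

Every hop skipped on the return path is processing time saved, which is where the latency win comes from.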

Without DSR, both the outbound and inbound packets pass through the filtering/load balancing layer and the root namespace to reach the Service VIP, resulting in delays from the time it takes to process the packet in each layer, in both directions.

The diagram below shows the typical non-DSR flow. Both the inbound and the outbound packets between POD1 and POD2 traverse all components of the network.

  1. Packet leaves POD1, addressed to the Service VIP address
  2. Packet enters the VMSwitch at Port 3, and a SNAT rule is applied changing the destination address to route the packet through the ROOT namespace.
  3. The packet is then routed back through VMSwitch Port 4, carrying the source address of the ROOT namespace.
  4. The return path follows the same steps in reverse order, so the packet has to travel through the ROOT namespace again to get back to the original source.
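The steps above can be sketched as a packet-rewrite pipeline. The addresses below are invented placeholders (only 13.0.0.13 appears elsewhere in this post, in the sample rule name), and the functions are a stand-in for the real VFP processing, not an actual API.

```python
# Sketch of the non-DSR request flow (steps 1-3 above) with placeholder IPs.
ROOT_NS = "10.0.0.1"       # assumed ROOT namespace address
SERVICE_VIP = "13.0.0.13"  # VIP taken from the sample rule name later in the post
POD2_IP = "10.244.1.5"     # assumed backend pod address

def non_dsr_request(pkt):
    path = ["POD1", "VMSwitch:Port3"]
    # Step 2: SNAT rule steers the packet through the ROOT namespace.
    pkt = {**pkt, "dst": ROOT_NS}
    path.append("ROOT namespace")
    # Step 3: the ROOT namespace picks POD2 and forwards via Port 4,
    # stamping its own address as the source.
    pkt = {**pkt, "src": ROOT_NS, "dst": POD2_IP}
    path += ["VMSwitch:Port4", "POD2"]
    return pkt, path

pkt, path = non_dsr_request({"src": "POD1", "dst": SERVICE_VIP})
print(path)  # every component is traversed; the response retraces it in reverse
```

Because the source is rewritten to the ROOT namespace, POD2 can only reply via the ROOT namespace, which is exactly what forces the symmetric (and slower) return path.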

Non DSR enabled flow

In DSR-enabled configurations, only the outbound packets from the “client” to the server pass through the load balancing layer. These packets are rewritten to convey the “real” address:port of the client to the server, which in turn uses that on the return path to forward the packet directly to the “client”, bypassing the ROOT namespace steps and saving time along the way.

The diagram below illustrates the flow of packets in a DSR enabled network.

  1. Packet leaves POD1, addressed to the load balancer address
  2. Packet enters the VMSwitch at Port 3, and an LBNAT rule is applied changing the destination address to that of POD2.
    An example of an LBNAT rule is below:


        Friendly name : LB_DSR_E5B0F_10.231.111.97_13.0.0.13_8000_80_6
        Priority : 100
        Flags : 1 terminating
        Type : lbnat
            Protocols : 6
            Destination IP :
            Destination ports : 8000
        Flow TTL: 240
        Rule Data:
        Decrementing TTL
        Fixing MAC
        Modifying destination IP
        Modifying destination port
        Creating a flow pair
        Map space : 82F1AFA2-1B42-4A38-81C5-B414B7541171
        Count of DIP Ranges: 2
        DIP Range(s) :
                { : 80 }
                { : 80 }
        FlagsEx : 0
  3. The packet drops to the forwarding layer in VFP, where the MAC address is updated
  4. Forwarding rules look up the destination MAC address in the DSR cache
  5. Packet is forwarded to Port 4 of the VMSwitch with the POD1 IP address as the source address and POD2 as the destination address
  6. Packet reaches the service in POD2
  7. On the return path the packet bypasses the ROOT namespace and mux, and is routed directly from Port 4 to Port 3 of the VMSwitch
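The DSR flow above can be sketched the same way. The LBNAT rewrite and the DSR cache below are simplified stand-ins for the real VFP structures, and all IPs and MACs are invented placeholders.

```python
# Sketch of the DSR flow: LBNAT rewrites the VIP to the backend on the way in,
# and the reply is switched Port 4 -> Port 3 without touching the ROOT namespace.
DSR_MAC_CACHE = {"10.244.1.2": "00-15-5D-00-00-01"}  # assumed client IP -> MAC

def lbnat_inbound(pkt, backend_ip, backend_port):
    # LBNAT rule: rewrite VIP:port to the real backend, keeping the real source.
    return {**pkt, "dst": backend_ip, "dport": backend_port}

def dsr_return(pkt):
    # Return path: look up the client's MAC in the DSR cache and switch the
    # frame directly from Port 4 to Port 3, bypassing the ROOT namespace.
    mac = DSR_MAC_CACHE[pkt["dst"]]
    return {**pkt, "dst_mac": mac, "path": ["Port4", "Port3"]}

inbound = lbnat_inbound({"src": "10.244.1.2", "dst": "13.0.0.13", "dport": 8000},
                        backend_ip="10.244.1.5", backend_port=80)
reply = dsr_return({"src": "10.244.1.5", "dst": inbound["src"], "sport": 80})
print(reply["path"])  # ['Port4', 'Port3'] -- no ROOT namespace hop
```

The key difference from the non-DSR sketch is that the client's real source address survives the inbound rewrite, so the backend can address its reply directly to the client.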

DSR enabled flow

How to try it out

To enable DSR in Windows Container networking, be aware that the feature is in preview: you will need to run Windows Server 19H1 or later, including the latest Insider builds.

When starting kube-proxy, provide the --enable-dsr switch set to true and an additional --feature-gates entry, WinDSR, set to true.

To do that you will have to use overlay networking as outlined in this blog post by David Schott.

To enable DSR, follow the instructions here in the manual approach section, with two changes:

PS C:\k> nssm install kube-proxy C:\k\kube-proxy.exe
PS C:\k> nssm set kube-proxy AppDirectory c:\k
PS C:\k> nssm set kube-proxy AppParameters --v=4 --proxy-mode=kernelspace --feature-gates="WinOverlay=true,WinDSR=true" --hostname-override= --kubeconfig=c:\k\config --network-name=vxlan0 --source-vip= --enable-dsr=true --log-dir= --logtostderr=false
PS C:\k> nssm set kube-proxy DependOnService Kubelet
PS C:\k> nssm start kube-proxy
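A quick way to catch a missed flag is to scan the configured arguments for the two DSR settings. The helper below is hypothetical (not part of kube-proxy or nssm); it checks an AppParameters string, such as one retrieved with `nssm get kube-proxy AppParameters`:

```python
# Hypothetical sanity check for the two DSR-related kube-proxy settings.
def dsr_flags_present(app_parameters: str) -> bool:
    has_enable = "--enable-dsr=true" in app_parameters
    # --feature-gates is a comma-separated list; WinDSR must be true inside it.
    gates = ""
    for token in app_parameters.split():
        if token.startswith("--feature-gates="):
            gates = token.split("=", 1)[1].strip('"')
    has_gate = "WinDSR=true" in gates.split(",")
    return has_enable and has_gate

params = ('--v=4 --proxy-mode=kernelspace '
          '--feature-gates="WinOverlay=true,WinDSR=true" --enable-dsr=true')
print(dsr_flags_present(params))  # True
```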

Verify that DSR is working as expected

Verification involves a few steps. These steps can also help in general troubleshooting of the Windows networking components.

  • On the Windows worker node, download collectlogs.ps1 from here
  • Run the script in an elevated PowerShell session
  • The script will generate a few output .txt files
  • Open policy.txt
  • Below is part of a policy entry; each policy will have a block like this.
    Check that the IsDSR setting is set to TRUE
 "Tag":"VFP ELB Policy",

If you see that on every policy, then DSR has been enabled successfully.
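Since the policy output is JSON-like, a small script can do this check instead of eyeballing every entry. The snippet below is a hypothetical helper: the field names mirror the fragment above, but the exact schema of policy.txt may differ between builds.

```python
# Hypothetical check that every ELB policy in a policy dump carries IsDSR=true.
import json

def all_policies_dsr(policies) -> bool:
    elb = [p for p in policies if p.get("Tag") == "VFP ELB Policy"]
    return bool(elb) and all(p.get("IsDSR") is True for p in elb)

sample = json.loads('[{"Tag": "VFP ELB Policy", "IsDSR": true}]')
print(all_policies_dsr(sample))  # True
```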

In Closing

Since DSR load balancing for Kubernetes on Windows is brand new, we’d love to hear any feedback in trying it out at SIG-Windows or in the comments below!


Thanks for reading this far

Mike Kostersitz


*Special thanks to Kalya Subramanian, Pradip Dhara, Madhan Raj Mookkandy, and Buck Buckley in our engineering team for designing and implementing DSR in overlay networking for Windows containers, as well as providing materials to help create content for this blog!


Editor's notes:

8/26/19: a few fixes to content and typo corrections.

Last updated: Apr 03 2020 01:33 PM