ITOps Talk Blog

Docker Host network alternatives for Windows containers

ViniciusApolinario
May 18, 2022

One of the things I like to do in my spare time is browse forums such as Reddit and Stack Overflow to check on questions people have about Windows containers that aren't showing up in comments or here on Tech Community. Recently, a few topics came up more than once, so I thought I should blog about them in more detail.

The first one is about Docker Host Network on Windows containers.

 

Is Host network available on Windows containers?

The short answer is no. The longer version of this is more complex.

On Linux containers, host network is an option that gives you performance on one hand and low flexibility on the other. Host network means a standalone container doesn't get its own IP address; instead, it shares the networking namespace of the container host. For scenarios in which you have a large range of ports that need to be accessible, host network comes in handy because the ports are automatically mapped, with no need for Network Address Translation (NAT). On a Linux container, processes in the container bind directly to ports on the host. For example, a website serving HTTP and HTTPS will bind its process to ports 80 and 443, respectively.

The other side of that is that you can't have multiple container instances running and trying to bind the same port on the same node.
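
To make the comparison concrete, here is a minimal sketch of what that looks like on a Linux container host (nginx is used purely as an example image):

 

docker run -d --network host nginx

 

No -p mapping is needed because nginx binds straight to port 80 on the host – and a second container started the same way would fail to bind port 80, which is exactly the limitation described above.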

 

What can I use instead on Windows containers?

Since Host network is not available, the alternative will depend on what you are trying to achieve. In terms of networking, Windows containers use a NAT network by default. With this network driver, the container gets an IP address from the NAT network (which is not exposed outside of that network) and ports are mapped from the host to the container. For example, you can have multiple containers listening on port 80, and from the host you expose a different port for each one, such as 8081 for container 1 and 8082 for container 2. This option is great because you don't have to change the container itself. On the other hand, you do need to keep track of the ports. The other benefit of this option is that you can easily publish a range of ports using something like:

 

docker run -p 8000-9000:8000-9000 <image>
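
 

To illustrate the port tracking mentioned above, here is a quick sketch of the two-container example – the same container port published behind different host ports (the image and container names are placeholders):

 

docker run -d -p 8081:80 --name web1 <image>
docker run -d -p 8082:80 --name web2 <image>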

 

The main caveat of the NAT network is that addresses still need to be translated between the host and the container. When you think about scale and performance, this option might not be ideal.

Alternatively, Windows containers offer multiple other networking options. If you are familiar with Hyper-V, the transparent network is the simplest one. A transparent network works pretty much like an External Virtual Switch in Hyper-V. With this network driver, containers attached to the network get IP addresses from the physical network – assuming a DHCP server is available, of course – or can be statically configured per container. You can spin up a new transparent network using:

 

docker network create -d "transparent" --subnet 10.244.0.0/24 --gateway 10.244.0.1 -o com.docker.network.windowsshim.dnsservers="10.244.0.7" my_transparent
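
 

If you want to double-check what was created, the usual Docker CLI commands work the same way on Windows:

 

docker network ls
docker network inspect my_transparent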

 

And to assign this network to new containers, you can use:

 

docker run --network=my_transparent <image>
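
 

If you go with the static configuration mentioned earlier instead of DHCP, the address can be passed when the container is created. A minimal sketch, using an address from the subnet created above (the image name is a placeholder):

 

docker run -d --network=my_transparent --ip 10.244.0.10 <image>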

 

Keep in mind this mode is not supported on Azure VMs. This is because you are creating an External Virtual Switch, which requires MAC address spoofing – something Azure does not allow.

The transparent network is a great alternative for small production scenarios and cases in which you want the containers to behave pretty much like any other compute resource on your network. However, as I mentioned, there are other options. Our documentation has a great summary of the available network drivers, their pros and cons, and how to configure each of them:

 

NAT (Default)
  • Typical uses: Good for developers
  • Container-to-container (single node): Same subnet: bridged connection through Hyper-V virtual switch. Cross subnet: not supported (only one NAT internal prefix)
  • Container-to-external (single node + multi-node): Routed through the Management vNIC (bound to WinNAT)
  • Container-to-container (multi-node): Not directly supported; requires exposing ports through the host

Transparent
  • Typical uses: Good for developers or small deployments
  • Container-to-container (single node): Same subnet: bridged connection through Hyper-V virtual switch. Cross subnet: routed through the container host
  • Container-to-external (single node + multi-node): Routed through the container host with direct access to the (physical) network adapter
  • Container-to-container (multi-node): Routed through the container host with direct access to the (physical) network adapter

Overlay
  • Typical uses: Good for multi-node; required for Docker Swarm, available in Kubernetes
  • Container-to-container (single node): Same subnet: bridged connection through Hyper-V virtual switch. Cross subnet: network traffic is encapsulated and routed through the Mgmt vNIC
  • Container-to-external (single node + multi-node): Not directly supported; requires a second container endpoint attached to the NAT network on Windows Server 2016 or a VFP NAT rule on Windows Server 2019
  • Container-to-container (multi-node): Same/cross subnet: network traffic is encapsulated using VXLAN and routed through the Mgmt vNIC

L2Bridge
  • Typical uses: Used for Kubernetes and Microsoft SDN
  • Container-to-container (single node): Same subnet: bridged connection through Hyper-V virtual switch. Cross subnet: container MAC address rewritten on ingress and egress, then routed
  • Container-to-external (single node + multi-node): Container MAC address rewritten on ingress and egress
  • Container-to-container (multi-node): Same subnet: bridged connection. Cross subnet: routed through the Mgmt vNIC on Windows Server 1809 and above

L2Tunnel
  • Typical uses: Azure only
  • Container-to-container (single node): Same/cross subnet: hair-pinned to the physical host's Hyper-V virtual switch, where policy is applied
  • Container-to-external (single node + multi-node): Traffic must go through the Azure virtual network gateway
  • Container-to-container (multi-node): Same/cross subnet: hair-pinned to the physical host's Hyper-V virtual switch, where policy is applied
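
The other drivers in the table are created with the same docker network create syntax shown earlier for the transparent network. As an illustration only – the subnet, gateway, VLAN ID, and network name below are placeholder values, not something from this article – an l2bridge network could look like this:

 

docker network create -d l2bridge --subnet 10.244.3.0/24 --gateway 10.244.3.1 -o com.docker.network.windowsshim.vlanid=7 my_l2bridge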

 

Hopefully this gives you an idea of the alternatives for networking on Windows containers. If you still have a scenario you'd like to see covered, let us know in the comments section below.

Updated May 17, 2022
Version 1.0
  • ViniciusApolinario

    Got it. Again, none of the above in this blog post applies in that case, since WSL 2 is actually a full Linux kernel. In that case, you don't need an alternative for Host network; you should be able to use it.

     

    I have not tried what you are asking myself, but have you tried configuring the Host network on the WSL 2 utility VM? Honestly, I think it should work, as the utility VM gets its own IP address with WSL 2.

  • DerChristoph
    Copper Contributor

    ViniciusApolinario Thank you for your response, but that did not bring me any further. My goal is to start up a Linux container running on WSL 2 on a Windows node that is, network-wise, located on the host's network, so that my DHCP server will give the Linux container an IP from my network and I can access the Linux container from anywhere on my host's network.

     

    I don't see how I can do that with the portproxy...

  • DerChristoph
    Copper Contributor

    The Transparent network thing does not work with Docker Desktop for Windows and WSL 2...

    ViniciusApolinario Do you have any solution for this setup to use Docker Host network?