L2bridge Container Networking
Published Mar 05 2020

Overview

Containers attached to an l2bridge network are directly connected to the physical network through an external Hyper-V switch. L2bridge networks can be configured with the same IP subnet as the container host, with IPs from the physical network assigned statically. L2bridge networks can also be configured using a custom IP subnet through an HNS host endpoint that is configured as a gateway.

 

In l2bridge, all container frames carry the same MAC address as the host due to a Layer-2 address translation (MAC rewrite) operation on ingress and egress. For larger, cross-host container deployments, this reduces the load on switches that would otherwise have to learn the MAC addresses of sometimes short-lived containers. When the container hosts are themselves virtualized, this has the additional advantage that MAC address spoofing does not need to be enabled on the VM NICs of the container hosts for container traffic to reach destinations outside of their host.

Reference l2bridge network

There are several networking scenarios that are essential to successfully containerize and connect a distributed set of services, such as:

  1. Outbound connectivity (Internet access)
  2. DNS resolution
  3. Container name resolution
  4. Host to container connectivity (and vice versa)
  5. Container to container connectivity (local)
  6. Container to container connectivity (remote)
  7. Binding container ports to host ports

We will show all of the above on l2bridge and briefly touch on some more advanced use cases:

  1. Creating an HNS container load balancer
  2. Defining and applying network access control lists (ACLs) to container endpoints
  3. Attaching multiple NICs to a single container
 

Pre-requisites

To follow along, two Windows Server machines (Windows Server, version 1809 or later) are required with:

  • Containers feature and container runtime (e.g. Docker) installed
  • HNS PowerShell helper module (hns.psm1)

To achieve this, run the following commands on the machines:

 

# Install the Containers feature (the machine will restart)
Install-WindowsFeature -Name Containers -Restart
# Install the Docker runtime via the DockerMsftProvider package provider
Install-PackageProvider -Name NuGet -RequiredVersion 2.8.5.201 -Force
Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
Install-Package -Name Docker -ProviderName DockerMsftProvider -Force
Start-Service Docker
# Download the HNS PowerShell helper module to the current directory
Start-BitsTransfer https://raw.githubusercontent.com/microsoft/SDN/master/Kubernetes/windows/hns.psm1

 

 

Creating an L2bridge network

Many of the policies needed to set up l2bridge are conveniently exposed through Docker’s libnetwork driver on Windows.

For example, an l2bridge network named “winl2bridge” with subnet 10.244.3.0/24 can be created as follows:

 

docker network create -d l2bridge --subnet=10.244.3.0/24 -o com.docker.network.windowsshim.dnsservers=10.127.130.7,10.127.130.8 --gateway=10.244.3.1 -o com.docker.network.windowsshim.enable_outboundnat=true -o com.docker.network.windowsshim.outboundnat_exceptions=10.244.0.0/16,10.10.0.0/24,10.127.130.36/30 winl2bridge

 

 

The available options for network creation are documented in two locations (see #1 here and #2 here), but here is a table breaking down the arguments used:

Name | Description
-d | Type of driver to use for network creation
--subnet | Subnet range to use for the network, in CIDR notation
-o com.docker.network.windowsshim.dnsservers | List of DNS servers to assign to containers
--gateway | IPv4 gateway of the assigned subnet
-o com.docker.network.windowsshim.enable_outboundnat | Applies the outbound NAT HNS policy to container vNICs/endpoints. All traffic from the container will be SNAT’ed to the host IP. If the container subnet is not routable, this policy is needed for containers to reach destinations outside of their own respective subnet.
-o com.docker.network.windowsshim.outboundnat_exceptions | List of destination IP ranges in CIDR notation where NAT operations will be skipped. This will typically include the container subnet (e.g. 10.244.0.0/16), the load balancer subnet (e.g. 10.10.0.0/24), and a range for the container hosts (e.g. 10.127.130.36/30).

IMPORTANT: Usually, l2bridge requires that the specified gateway (“10.244.3.1”) exists somewhere in the network infrastructure and that the gateway provides proper routing for our designated prefix. We will be showing an alternative approach where we will create an HNS endpoint on the host from scratch and configure it so that it acts as a gateway.

NOTE: You may see a network blip for a few seconds while the vSwitch is being created for the first l2bridge network.

TIP: You can create multiple l2bridge networks on top of a single vSwitch, “consuming” only one NIC. It is even possible to isolate the networks by VLAN using the -o com.docker.network.windowsshim.vlanid flag.

 

Next, we will enable forwarding on the host vNIC and set up a host endpoint to act as a quasi-gateway for the containers to use.

 

# Import HNS Powershell module
ipmo .\hns.psm1
# Enable forwarding
netsh int ipv4 set int "vEthernet (Ethernet)" for=en
$network = get-hnsnetwork | ? Name -Like $(docker network inspect --format='{{.ID}}' winl2bridge)
# Create default gateway (need to use x.x.x.2 as x.x.x.1 is already reserved)
$hnsEndpoint = New-HnsEndpoint -NetworkId $network.ID -Name cbr0_ep -IPAddress 10.244.3.2 -Verbose 
# Attach gateway endpoint to host network compartment
Attach-HnsHostEndpoint -EndpointID $hnsEndpoint.Id -CompartmentID 1 
# Enable forwarding for default gateway
netsh int ipv4 set int "vEthernet (cbr0_ep)" for=en 
netsh int ipv4 add neighbors "vEthernet (cbr0_ep)" "10.244.3.1" "00-01-e8-8b-2e-4b"

 

 

NOTE: The last netsh command above would not be needed if we supplied a proper gateway that exists in the network infrastructure at network creation. Since we created a host endpoint to use in place of a gateway, we need to add a static ARP entry with a dummy MAC so that traffic is able to leave our host without being stuck waiting for an ARP probe to resolve this gateway IP.

 

This is all that is needed to set up a local l2bridge container network with working outbound connectivity, DNS resolution, and of course container-to-container and container-to-host connectivity.
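As a quick sanity check (not part of the original walkthrough, and assuming hns.psm1 is still imported from the previous step), the pieces created so far can be inspected from the host; the interface alias and endpoint name below match the examples above:

# List the l2bridge network and the gateway endpoint created above
Get-HnsNetwork | Where-Object { $_.Type -eq "l2bridge" } | Select-Object Name, Type, Subnets
Get-HnsEndpoint | Where-Object { $_.Name -eq "cbr0_ep" } | Select-Object Name, IPAddress
# Confirm forwarding is enabled on both host vNICs
Get-NetIPInterface -InterfaceAlias "vEthernet (Ethernet)", "vEthernet (cbr0_ep)" -AddressFamily IPv4 | Select-Object InterfaceAlias, Forwarding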

 

Multi-host Deployment

One of the most compelling reasons for using l2bridge is the ability to connect containers not only on the local machine, but also with remote machines to form a network. For communication across container hosts, one needs to plumb static routes so that each host knows where a given container lives.

 

For demonstration, assume there are two container host machines, host A and host B, with IPs 10.127.130.36 and 10.127.130.38 and container subnets 10.244.3.0/24 and 10.244.2.0/24, respectively.

Static routes for cross-node l2bridge container connectivity

 

To connect containers across the two hosts, the following command needs to be executed on host A:

 

New-NetRoute -InterfaceAlias "vEthernet (Ethernet)" -DestinationPrefix 10.244.2.0/24 -NextHop 10.127.130.38

 

 

Similarly, on host B the following also needs to be executed:

 

New-NetRoute -InterfaceAlias "vEthernet (Ethernet)" -DestinationPrefix 10.244.3.0/24 -NextHop 10.127.130.36

 

 

Now l2bridge containers running both locally and on remote hosts can communicate with each other.
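To double-check that a route is in place (shown here for host A; the destination prefix and next hop come from the example above), the host routing table can be queried:

# On host A: confirm the static route to host B's container subnet
Get-NetRoute -DestinationPrefix 10.244.2.0/24 | Select-Object DestinationPrefix, NextHop, InterfaceAlias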

 

TIP: On public cloud platforms, one also needs to add these routes to the cloud network’s route table so that the underlying cloud fabric knows how to forward packets with container IPs to the correct destination. For instance, on Azure, user-defined routes of type “virtual appliance” need to be added to the Azure virtual network. If host A and host B were VMs provisioned in an Azure resource group “$Rg”, this could be done by issuing the following az commands:

 

az network route-table create --resource-group $Rg --name BridgeRoute 
az network route-table route create --resource-group $Rg --address-prefix 10.244.3.0/24 --route-table-name BridgeRoute  --name HostARoute --next-hop-type VirtualAppliance --next-hop-ip-address 10.127.130.36 
az network route-table route create --resource-group $Rg --address-prefix 10.244.2.0/24 --route-table-name BridgeRoute  --name HostBRoute --next-hop-type VirtualAppliance --next-hop-ip-address 10.127.130.38
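
NOTE: For these routes to take effect, the route table also has to be associated with the subnet the container host VMs are attached to, and IP forwarding typically needs to be enabled on the VM NICs so they may forward traffic destined to the container subnets. A sketch (the $VnetName, $SubnetName, and $HostANicName values below are placeholders for your environment):

# Associate the route table with the hosts' subnet (placeholder VNet/subnet names)
az network vnet subnet update --resource-group $Rg --vnet-name $VnetName --name $SubnetName --route-table BridgeRoute
# Enable IP forwarding on each container host's NIC (placeholder NIC name, repeat per host)
az network nic update --resource-group $Rg --name $HostANicName --ip-forwarding true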

 

 

Starting l2bridge containers

Once the static routes have been added and an l2bridge network has been created on each host, it is simple to spin up containers and attach them to the l2bridge network.

 

For example, to spin up two IIS containers named “c1” and “c2” on the container subnet with gateway “10.244.3.1”:

 

$array = @("c1", "c2")
$array | foreach {
    docker run -d --rm --name $_ --hostname $_ --network winl2bridge mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019
    docker exec $_ cmd /c netsh int ipv4 add neighbors "Ethernet" "10.244.3.1" "00-01-e8-8b-2e-4b"
}

 

 

NOTE: The last netsh command above would not be needed if we had supplied a proper gateway that exists in the network infrastructure at network creation. Since we created a host endpoint to use in place of a gateway, we need to add a static ARP entry with a dummy MAC inside each container so that traffic is able to leave the containers without being stuck waiting for an ARP probe to resolve this gateway IP.
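
With the containers up, the basic connectivity paths can be spot-checked. This is a sketch: 10.244.2.10 is a placeholder for the IP of a container running on host B, and the DNS name is only an example.

# Local container-to-container connectivity
$c2ip = docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' c2
docker exec c1 ping $c2ip
# Outbound connectivity and DNS resolution
docker exec c1 nslookup microsoft.com
# Cross-host container connectivity (placeholder remote container IP on host B)
docker exec c1 ping 10.244.2.10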

 

Here is a video demonstrating all the connectivity paths available after launching the containers:

 

Publishing container ports to host ports

One way to expose containerized applications and make them more accessible is to map container ports to external ports on the host.
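The endpoint ID referenced in the example below can be looked up on the host. One way to do this (a sketch, assuming a container named “c1” attached to the winl2bridge network and the hns.psm1 module from earlier) is to match the container’s IP address against the HNS endpoints:

ipmo .\hns.psm1
# Find the HNS endpoint backing container "c1" by matching its IP address
$ip = docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' c1
(Get-HnsEndpoint | Where-Object { $_.IPAddress -eq $ip }).Id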

For example, to map TCP container port 80 to host port 8080, and assuming the container’s endpoint has ID “448c0e22-a413-4882-95b5-2d59091c11b8”, this can be achieved using an ELB policy as follows:

 

ipmo .\hns.psm1
$publish_json = '{
    "References": [
        "/endpoints/448c0e22-a413-4882-95b5-2d59091c11b8"
    ],
    "Policies": [
        {
            "Type": "ELB",
            "InternalPort": 80,
            "ExternalPort": 8080,
            "Protocol": 6
        }
    ]
}'
Invoke-HNSRequest -Method POST -Type policylists -Data $publish_json
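
To confirm that the policy was created, or to clean it up later, the helper module’s policy-list cmdlets can be used (assuming the hns.psm1 version downloaded above exposes them):

# List existing HNS policy lists (the new ELB entry should appear here)
Get-HnsPolicyList
# Remove all policy lists to start over (destructive; commented out on purpose)
# Get-HnsPolicyList | Remove-HnsPolicyList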

 

 

Here is a video demonstrating how to apply the policy to bind a TCP container port to a host port and access it:

 

Advanced: Setting up Load Balancers

The ability to distribute traffic across multiple containerized backends using a load balancer leads to higher scalability and reliability of applications.

 

For example, creating a load balancer with frontend virtual IP (VIP) 10.10.0.10:8090 on host A (IP 10.127.130.36) and backend DIPs of all local containers can be achieved as follows:

 

ipmo .\hns.psm1
[GUID[]] $endpoints = (Get-HNSEndpoint |? Name -Like "Ethernet" | Select ID).ID
New-HNSLoadBalancer -Endpoints $endpoints -InternalPort 80 -ExternalPort 8090 -Vip "10.10.0.10"

 

 

Finally, for the load balancer to be accessible from inside the containers, we also need to add two encapsulation rules for every endpoint that needs to access the load balancer:

 

$endpoints | foreach {
$encap_lb = '{
    "References": [
        "/endpoints/' + $_ +'"
    ],
    "Policies": [
        {
            "Type": "ROUTE",
            "DestinationPrefix": "10.10.0.0/24",
            "NeedEncap": true
        }
    ]
}'
$encap_mgmt = '{
    "References": [
        "/endpoints/' + $_ +'"
    ],
    "Policies": [
        {
            "Type": "ROUTE",
            "DestinationPrefix": "10.127.130.36/32",
            "NeedEncap": true
        }
    ]
}'
Invoke-HNSRequest -Method POST -Type policylists -Data $encap_lb
Invoke-HNSRequest -Method POST -Type policylists -Data $encap_mgmt
}
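
A simple way to exercise the load balancer (a sketch, using curl.exe, which is included in Windows Server 2019 and the servercore:ltsc2019 image; “c1” is an illustrative container attached to the network, and the VIP and port match the example above):

# From host A: request the frontend VIP
curl.exe http://10.10.0.10:8090
# From inside a container (requires the encapsulation rules applied above)
docker exec c1 curl.exe http://10.10.0.10:8090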

 

 

Here is a video showing how to create the load balancer and access it using its frontend VIP "10.10.0.10" from host and container:

 

Advanced: Setting up ACLs

What if instead of making applications more available, one needs to restrict traffic between containers? l2bridge networks are ideally suited for network access control lists (ACLs) that define policies which limit network access to only those workloads that are explicitly permitted.

For example, to allow inbound network access to TCP port 80 from IP 10.244.3.75 and block all other inbound traffic to container with endpoint “448c0e22-a413-4882-95b5-2d59091c11b8”:

 

ipmo .\hns.psm1 
$acl_json = '{
    "Policies": [
        {
            "Type": "ACL",
            "Action": "Allow",
            "Direction": "In",
            "LocalAddresses": "",
            "RemoteAddresses": "10.244.3.75",
            "LocalPorts": "80",
            "Protocol": 6,
            "Priority": 200
        },
        {
            "Type": "ACL",
            "Action": "Block",
            "Direction": "In",
            "Priority": 300
        },
        {
            "Type": "ACL",
            "Action": "Allow",
            "Direction": "Out",
            "Priority": 300
        }
    ]
}' 
Invoke-HNSRequest -Method POST -Type endpoints -Id "448c0e22-a413-4882-95b5-2d59091c11b8" -Data $acl_json
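
To verify the ACLs landed on the endpoint, its applied policies can be inspected afterwards (a quick check using the same endpoint ID as above):

# Inspect the policies now applied to the endpoint
(Get-HnsEndpoint | Where-Object { $_.ID -eq "448c0e22-a413-4882-95b5-2d59091c11b8" }).Policies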

 

 

Here is a video showing the ACL policy in action and how to apply it:

 

Access control lists and Windows firewalling are a very deep and complex topic. HNS supports more granular capabilities for implementing network micro-segmentation and governing traffic flows than shown above. Most of these enhancements are available via Tigera’s Calico for Windows product and will be incrementally documented here and here.

 

Advanced: Multi-NIC containers

Attaching multiple vNICs to a single container addresses various traffic-segregation and operational concerns. For example, assume there are two VLAN-isolated l2bridge networks called “winl2bridge_4096” and “winl2bridge_4097”:

 

# Create “winl2bridge_4096” with VLAN tag 4096
docker network create -d l2bridge --subnet=10.244.4.0/24 -o com.docker.network.windowsshim.dnsservers=10.127.130.7 --gateway=10.244.4.1 -o com.docker.network.windowsshim.enable_outboundnat=true -o com.docker.network.windowsshim.outboundnat_exceptions=10.244.0.0/16,11.96.0.0/24,10.127.130.36/30 -o com.docker.network.windowsshim.vlanid=4096 winl2bridge_4096
# Create “winl2bridge_4097” with VLAN tag 4097
docker network create -d l2bridge --subnet=10.244.5.0/24 -o com.docker.network.windowsshim.dnsservers=10.127.130.7 --gateway=10.244.5.1 -o com.docker.network.windowsshim.enable_outboundnat=true -o com.docker.network.windowsshim.outboundnat_exceptions=10.244.0.0/16,11.96.0.0/24,10.127.130.36/30 -o com.docker.network.windowsshim.vlanid=4097 winl2bridge_4097

 

 

Attaching a container to both networks can be done as follows:

 

# Create container and attach to “winl2bridge_4096”
docker run -d --rm --name "multi_nic_container" --network "winl2bridge_4096" mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019 
# Attach to “winl2bridge_4097”
docker network connect "winl2bridge_4097" "multi_nic_container"

 

To add more vNICs, we can create HNS endpoints under a given network and attach them to the container’s network compartment. For example, to add another NIC in network “winl2bridge_4096”:

 

# Get compartment ID
$compartmentId = docker exec "multi_nic_container" powershell.exe "Get-NetCompartment | Select -ExpandProperty CompartmentId"
# Get HNS network ID
$network = get-hnsnetwork | ? Name -Like $(docker network inspect --format='{{.ID}}' winl2bridge_4096)
# Create HNS endpoint under network “winl2bridge_4096”
$hnsEndpoint = New-HnsEndpoint -NetworkId $network.ID -Name my_ep -IPAddress 10.244.4.10 -Verbose
# Attach endpoint to target container’s network compartment
Attach-HnsHostEndpoint -EndpointID $hnsEndpoint.Id -CompartmentID $compartmentId

 

 

After executing all of the above, the single container has three vNICs ready to use (two in “winl2bridge_4096” and one in “winl2bridge_4097”). Every endpoint may have different policies and configurations specifically tailored to the needs of the application and business.

Container with multiple endpoints belonging to two different networks
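
A quick way to confirm the three adapters from the host (the container name matches the example above):

# Each attached endpoint shows up as a separate "Ethernet adapter" section inside the container
docker exec multi_nic_container ipconfig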

 

Summary

We have covered several supported capabilities of l2bridge container networking, including:

  • Cross-host container communication (not possible via WinNAT)
  • Logical separation of networks by VLANs
  • Micro-segmentation using ACLs
  • Load balancers
  • Binding container ports to host ports
  • Attaching multiple network adapters to containers

L2bridge networks require upfront configuration to install correctly, but they offer many useful features as well as enhanced performance and control of the container network. It is always recommended to leverage orchestrators such as Kubernetes, which use CNI plugins to streamline and automate many of these configuration tasks while still giving advanced users a similar level of configurability. All of the HNS APIs used above, and much more, are also open source in a Golang shim (see hcsshim).

 

As always, thanks for reading and please let us know about your scenarios or questions in the comments section below!

7 Comments

Thanks for sharing this awesome blog post with the community :cool:

Brass Contributor

It's somehow similar to our work in 2014 on early namespaces implementation in ReactOS kernel

https://fr.slideshare.net/interfaceULG-innovationManagement/virtualisationlgredurseaudansreactos-140...

Copper Contributor

Hi, thanks for this great post. I am currently setting up a Docker dev environment which contains seven containers. Unfortunately, we have a "special" corporate intranet: neither NAT nor transparent networking works on it. If I use NAT, I can't access any outside resource from the container. For a transparent network, the company network has port security which limits each port to at most two MAC addresses, so only one of the seven containers can get an IP address from the network at a time. So I came to L2Bridge.

My goal is: Outbound connectivity (Internet access)

I did setup follow the guide. This is the host network info.

PS C:\Users\212616592> ipconfig /all

Windows IP Configuration

   Host Name . . . . . . . . . . . . : G6CR726W911E
   Primary Dns Suffix  . . . . . . . : "company sensitive"
   Node Type . . . . . . . . . . . . : Hybrid
   IP Routing Enabled. . . . . . . . : No
   WINS Proxy Enabled. . . . . . . . : No
   DNS Suffix Search List. . . . . . : "company sensitive"

Ethernet adapter vEthernet (NATSwitch):

   Connection-specific DNS Suffix  . :
   Description . . . . . . . . . . . : Hyper-V Virtual Ethernet Adapter
   Physical Address. . . . . . . . . : 00-15-5D-B5-1A-07
   DHCP Enabled. . . . . . . . . . . : No
   Autoconfiguration Enabled . . . . : Yes
   IPv4 Address. . . . . . . . . . . : 172.21.21.1(Preferred)
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . :
   NetBIOS over Tcpip. . . . . . . . : Enabled

Ethernet adapter vEthernet (Ethernet):

   Connection-specific DNS Suffix  . : "company sensitive"
   Description . . . . . . . . . . . : Hyper-V Virtual Ethernet Adapter #4
   Physical Address. . . . . . . . . : C8-D3-FF-BE-B1-D9
   DHCP Enabled. . . . . . . . . . . : Yes
   Autoconfiguration Enabled . . . . : Yes
   IPv4 Address. . . . . . . . . . . : 10.189.181.26(Preferred)
   Subnet Mask . . . . . . . . . . . : 255.255.252.0
   Lease Obtained. . . . . . . . . . : Wednesday, April 29, 2020 7:34:25 PM
   Lease Expires . . . . . . . . . . : Monday, May 4, 2020 7:34:21 AM
   Default Gateway . . . . . . . . . : 10.189.180.1
   DHCP Server . . . . . . . . . . . : 10.69.64.200
   DNS Servers . . . . . . . . . . . : 10.220.220.220
                                       10.220.220.221
   NetBIOS over Tcpip. . . . . . . . : Enabled

Ethernet adapter vEthernet (cbr0_ep):

   Connection-specific DNS Suffix  . : "company sensitive"
   Description . . . . . . . . . . . : Hyper-V Virtual Ethernet Adapter #5
   Physical Address. . . . . . . . . : 00-15-5D-F8-BD-3F
   DHCP Enabled. . . . . . . . . . . : No
   Autoconfiguration Enabled . . . . : Yes
   IPv4 Address. . . . . . . . . . . : 10.189.181.2(Preferred)
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . : 10.189.181.1
   DNS Servers . . . . . . . . . . . : 10.220.220.220
                                       10.220.220.221
   NetBIOS over Tcpip. . . . . . . . : Enabled
   Connection-specific DNS Suffix Search List :
                                       logon.ds.ge.com

Ethernet adapter vEthernet (Default Switch):

   Connection-specific DNS Suffix  . :
   Description . . . . . . . . . . . : Hyper-V Virtual Ethernet Adapter #2
   Physical Address. . . . . . . . . : 00-15-5D-5B-08-C3
   DHCP Enabled. . . . . . . . . . . : No
   Autoconfiguration Enabled . . . . : Yes
   IPv4 Address. . . . . . . . . . . : 172.17.102.177(Preferred)
   Subnet Mask . . . . . . . . . . . : 255.255.255.240
   Default Gateway . . . . . . . . . :
   NetBIOS over Tcpip. . . . . . . . : Enabled

Ethernet adapter vEthernet (nat):

   Connection-specific DNS Suffix  . :
   Description . . . . . . . . . . . : Hyper-V Virtual Ethernet Adapter #3
   Physical Address. . . . . . . . . : 00-15-5D-93-BA-2C
   DHCP Enabled. . . . . . . . . . . : No
   Autoconfiguration Enabled . . . . : Yes
   IPv4 Address. . . . . . . . . . . : 172.19.192.1(Preferred)
   Subnet Mask . . . . . . . . . . . : 255.255.240.0
   Default Gateway . . . . . . . . . :
   NetBIOS over Tcpip. . . . . . . . : Enabled

This is the Docker network info:

Microsoft Windows [Version 10.0.18362.778]
(c) 2019 Microsoft Corporation. All rights reserved.

C:\>ipconfig /all

Windows IP Configuration

   Host Name . . . . . . . . . . . . : 886fafbe55b5
   Primary Dns Suffix  . . . . . . . :
   Node Type . . . . . . . . . . . . : Hybrid
   IP Routing Enabled. . . . . . . . : No
   WINS Proxy Enabled. . . . . . . . : No
   DNS Suffix Search List. . . . . . : company sensitive

Ethernet adapter Ethernet:

   Connection-specific DNS Suffix  . : company sensitive
   Description . . . . . . . . . . . : Microsoft Hyper-V Network Adapter
   Physical Address. . . . . . . . . : 00-15-5D-F8-B9-E3
   DHCP Enabled. . . . . . . . . . . : Yes
   Autoconfiguration Enabled . . . . : Yes
   Link-local IPv6 Address . . . . . : fe80::7113:574e:2920:63d3%4(Preferred)
   IPv4 Address. . . . . . . . . . . : 10.189.181.62(Preferred)
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . : 10.189.181.1
   DHCPv6 IAID . . . . . . . . . . . : 67114333
   DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-26-3D-C8-A3-00-15-5D-F8-B9-E3
   DNS Servers . . . . . . . . . . . : 10.220.220.220
                                       10.220.220.221
   NetBIOS over Tcpip. . . . . . . . : Disabled

This is the result when trying to access an outside resource from the container:

C:\>curl https://platform.cloud.coveo.com
curl: (6) Could not resolve host: platform.cloud.coveo.com

One hint which I found: my host IP is 192.168.181.26 and the default gateway is 192.168.180.1. It's a different pattern. Could you please explain how I should change the settings?

 

PS: This setup did work on my home network, which has the same IP/gateway pattern.

 

 

 

 

Microsoft

@jamessxxoo This appears to be the case where you are supplying the same subnet as the host and a gateway that actually exists. You can pursue the much simpler option instead of following this guide. You don't need the host endpoint and workarounds, and should be able to get it to work using:

$localContainerSubnet="10.189.181.0/24"
$containerGw="10.189.181.1"
$dnsServer="10.220.220.220"
docker network create -d l2bridge --subnet=$localContainerSubnet -o com.docker.network.windowsshim.dnsservers=$dnsServer --gateway=$containerGw -o com.docker.network.windowsshim.enable_outboundnat=true winl2bridge
 
This should create an l2bridge network that can connect to other VMs, with working outbound connectivity and DNS resolution.
 
Then, to create a container (e.g. "c1") and attach it to the winl2bridge network:
docker run -d --rm --name c1 --hostname c1 --network winl2bridge mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019



@Kurt Schenk thank you, the link has been updated.

Copper Contributor

The solution as shown with the host as gateway does not seem to work at all.  No packets are routed out of the container network.  If I change the dummy ARP entry in the container to use the MAC address of the host endpoint instead of the "00-01-e8-eb-2e-4b" in the example, I see bidirectional forwarding start to work.  Is this how it's meant to be implemented?  Is there no way to just have the gateway IP address configured on the host?  Another problem with this approach is it adds an invalid default gateway to the host routing table when the endpoint is created.

Copper Contributor

Hi all,

 

how can I programmatically create an external virtual switch of type "L2Bridge" that works in a business environment where the "eapol" protocol (Standard 802.1X) is used?

 

Background

Attaching the tool "Wireshark" to a physical network adapter, it can be seen that the "eapol" protocol is triggered when the Ethernet cable is plugged into the adapter. Triggering means that an "eapol" start packet is sent. Another way to trigger it is by restarting the Windows service "dot3svc".

 

Is there an example of an external virtual switch working with the "eapol" protocol?

Yes, when creating an external virtual switch using the Hyper-V UI. In that case the "eapol" protocol can be seen in action by using "Wireshark" as described above.

 

Problem:

Using PowerShell, an external virtual switch of type "L2Bridge" can be created by issuing the following command:

New-HnsNetwork -Type "L2Bridge" -Name "myVirtualSwitch" -AddressPrefix "172.30.1.0/24" -Gateway "172.30.1.1" -DNSServer "192.168.178.1"

Using "Wireshark" as described above can be seen that the "eapol" protocol is not triggered.

 

Any idea which settings are missing here?

 

Regards

Gabriel

 

 
