Update Rollup 2 for System Center Virtual Machine Manager 2019 is here with exciting new features!
Published Aug 06 2020

 

The System Center team at Microsoft is committed to partnering with customers on their journey to modernize their data centers. We are excited to announce that Update Rollup 2 (UR2) for System Center Virtual Machine Manager (VMM) 2019 has just been released, and it is packed with exciting new features!

 

The VMM 2019 UR2 release includes new features and bug fixes. In this blog, we will introduce you to all the new features in this release. For a quick summary of all the bugs fixed in this release, please refer to the KB article here. The System Center team is also working on the next Update Rollup (UR3) for SCVMM 2019.

 

Following are the new features released in SCVMM 2019 UR2: 

 

  • Support for managing Windows Server 2012 R2 hosts 
  • Support for managing VMware vSphere 6.7 ESXi hosts 
  • Support for new Linux versions as guest OS
  • Ability to set affinity between vNICs and pNICs 
  • IPv6 Support for SDN 
  • Simplifying Logical Switch Creation 

 

Support for managing Windows Server 2012 R2 hosts  

 

We understand that some enterprises might be running hardware that cannot be upgraded to the latest Windows Server OS, and that these enterprises therefore face the challenge of managing host servers on various Windows Server OS versions. To make Windows Server management easier for such enterprises, VMM 2019 UR2 now supports managing Windows Server 2012 R2 hosts.  

 

Enterprises can now manage Windows Server 2012 R2 servers as hosts, Scale-Out File Servers (SOFS), and remote library shares, in addition to the already supported Windows Server 2016 and Windows Server 2019 servers. For details of all the supported host OS versions, please refer to the documentation here. A scripted example follows the table below.  

 

Windows Servers in the VMM fabric 

| Operating System | Hyper-V Host | SOFS | Remote Library Server | Update Server | PXE Server |
| --- | --- | --- | --- | --- | --- |
| Windows Server 2012 R2 (Standard and Data Center) | Y | Y | Y | N | N |
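If you prefer to script this, adding a Windows Server 2012 R2 server works the same way as adding any other Hyper-V host. The following is a minimal sketch using the VMM PowerShell cmdlets; the host name, host group, and Run As account names are placeholders for this example.

```powershell
# Minimal sketch: add an existing Windows Server 2012 R2 Hyper-V server as a managed host.
# "Legacy Hosts" and "HostAdminAccount" are placeholder names.
$runAsAccount = Get-SCRunAsAccount -Name "HostAdminAccount"
$hostGroup    = Get-SCVMHostGroup -Name "Legacy Hosts"

Add-SCVMHost -ComputerName "hv2012r2-01.contoso.com" `
             -VMHostGroup $hostGroup `
             -Credential $runAsAccount
```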

 

 

Support for managing VMware vSphere 6.7 ESXi hosts 

 

We are glad to announce that SCVMM 2019 UR2 now extends management support to ESXi 6.7 hosts and vCenter 6.7. Enterprises that run both VMware and Hyper-V environments can use this feature so that both VMware and Hyper-V hosts are managed within the same fabric management tool. Customers looking to migrate their VMware environments to Hyper-V can also take advantage of vSphere 6.7 support for a seamless migration. For more details on ESXi server support, please refer here. A scripted example follows the table below.

 

VMware servers in the VMM 2019 fabric 

| VMware | Versions Supported |
| --- | --- |
| ESX/ESXi | 5.1, 5.5, 6.0, 6.5, 6.7 |
| vCenter | 5.1, 5.5, 5.8, 6.0, 6.5, 6.7 |
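If you script the onboarding, the vCenter server is added to VMM in the usual way; supporting 6.7 does not change the workflow. The following is a minimal sketch; the vCenter FQDN and Run As account name are placeholders, and you should check the cmdlet help in your environment for additional options such as port and certificate handling.

```powershell
# Minimal sketch: bring a vCenter 6.7 server under VMM management.
# "vcenter67.contoso.com" and "vCenterAdminAccount" are placeholder names.
$vCenterCredential = Get-SCRunAsAccount -Name "vCenterAdminAccount"

Add-SCVirtualizationManager -ComputerName "vcenter67.contoso.com" `
                            -Credential $vCenterCredential

# After the vCenter server is added, its ESXi 6.7 hosts can be added to a VMM host group
# and managed alongside Hyper-V hosts.
```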

 

 

Support for new Linux versions as guest OS

 

In SCVMM 2019 UR2, we have added support for the following Linux versions as guest OS: 

 

  • Red Hat 8.0 
  • CentOS 8 
  • Debian 10 
  • Ubuntu 20.04 

 

 

Ability to set affinity between vNICs and pNICs  

 

VMM 2019 UR2 now supports setting affinity between vNICs and pNICs. Affinity between vNICs and pNICs brings flexibility in routing network traffic across teamed pNICs. With this feature, customers can increase throughput by mapping an RDMA-capable physical adapter to a vNIC with RDMA settings enabled. 

 

Use Cases  

  • Customers can route a specific type of traffic (for example, live migration) to a higher-bandwidth physical adapter. 
  • In HCI deployment scenarios, by specifying affinity, customers can leverage SMB Multichannel to achieve high throughput for SMB traffic. 

Prerequisites to set affinity between vNICs and pNICs  

  • A logical switch is deployed on the host. 
  • The SET (Switch Embedded Teaming) property is enabled on the logical switch. 

Configuration  

  • Open Fabric > Servers > All Hosts > Host group > Hosts > Host. Right-click the host, select Properties, and navigate to the Virtual Switches tab. 
  • Verify that the physical adapters to be teamed are added here. Affinity can be mapped only for physical adapters that are added here.


  • Click New virtual network adapter to add a new vNIC to the virtual switch. 
  • By default, the affinity value is set to None. This setting corresponds to the existing behavior, where the operating system distributes traffic from the vNIC to any of the teamed physical NICs. 
  • Set the affinity between a vNIC and a physical NIC by selecting a physical adapter from the drop-down menu. Once the affinity is defined, traffic from the vNIC is routed to the mapped physical adapter (the host-side equivalent is sketched after this list). 


 

  • For more information on vNIC to pNIC mapping, please refer to the documentation here.
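VMM applies the affinity for you through the logical switch, but it can help to know what the setting corresponds to on the host: a SET team mapping between the host vNIC and a physical adapter. The sketch below uses the built-in Hyper-V cmdlets to show the equivalent mapping directly on a host; the adapter names are placeholders, and in a VMM-managed environment you would normally configure this from the console as described above.

```powershell
# Illustrative only: the host-side equivalent of the vNIC-to-pNIC affinity that VMM configures.
# Adapter names are placeholders for this example.
Set-VMNetworkAdapterTeamMapping -ManagementOS `
    -VMNetworkAdapterName "SMB_1" `
    -PhysicalNetworkAdapterName "Ethernet 2"

# Verify the mappings currently in place on the host.
Get-VMNetworkAdapterTeamMapping -ManagementOS
```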

 

IPv6 Support for SDN 

 

VMM 2019 UR2 now supports IPv6 for SDN deployments. IPv6 support is another exciting feature that helps our customers in their journey to modernize their data centers.  

 

Advantages of IPv6 over IPv4 

IPv6 was created mainly to overcome the IP address space limitation of IPv4. Apart from increasing the number of available IP addresses, IPv6 also provides other advantages, such as improved security and auto-configuration. Customers can now enable IPv6 in their SDN deployments using VMM.

 

Regulatory and Compliance Requirements 

IPv6 support for SDN not only helps SCVMM customers who want to take advantage of IPv6 features, but also helps customers who need IPv6 support to meet regulatory and compliance requirements.  

 

Configuration

To enable IPv6 for SDN deployments, the required changes to set up the Network Controller (NC), Gateway, MUX, and Software Load Balancer (SLB) are highlighted below. In this blog, we cover the key configuration changes needed for IPv6 support at a high level. For a more detailed explanation of the various IPv6 SDN configuration options, please refer to the documentation here.

 

Create the HNV Provider network and the IPv6 address pool

  • Start the Create Logical Network Wizard. Type a name and optional description for this network.
  • In Settings, select the network type as Virtualized Network and choose Microsoft Network Controller managed Network virtualization (SDN v2).


 

 

  • Right-click the HNV Provider logical network > Create IP Pool.
  • Provide a name and optional description, and ensure that the HNV Provider logical network is selected as the logical network.
  • In Network Site, select the subnet that this IP address pool will service. 
    Note: To enable IPv6 support, add an IPv6 subnet and create an IPv6 address pool. To use the IPv6 address space, both IPv4 and IPv6 subnets should be added to the network site (a scripted equivalent is sketched below). 
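Equivalently, the network site and the IPv6 pool can be created with the VMM cmdlets. The following is a minimal sketch; the logical network name, host group, subnets, and address ranges are placeholders, so adjust them to your environment.

```powershell
# Minimal sketch: add IPv4 and IPv6 subnets to a network site and create an IPv6 address pool.
# All names, prefixes, and address ranges are placeholders for this example.
$logicalNetwork = Get-SCLogicalNetwork -Name "HNV Provider"
$hostGroup      = Get-SCVMHostGroup -Name "All Hosts"

# Both IPv4 and IPv6 subnets are added to the network site, as noted above.
$subnetV4 = New-SCSubnetVLan -Subnet "10.10.0.0/24" -VLanID 0
$subnetV6 = New-SCSubnetVLan -Subnet "fd00:10:10::/64" -VLanID 0

$site = New-SCLogicalNetworkDefinition -Name "HNV Provider_Site0" `
            -LogicalNetwork $logicalNetwork `
            -VMHostGroup $hostGroup `
            -SubnetVLan $subnetV4, $subnetV6

# IPv6 address pool on that network site.
New-SCStaticIPAddressPool -Name "HNV Provider IPv6 Pool" `
    -LogicalNetworkDefinition $site `
    -Subnet "fd00:10:10::/64" `
    -IPAddressRangeStart "fd00:10:10::4" `
    -IPAddressRangeEnd "fd00:10:10::1ff"
```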

 

 

VM Network

  • When you create a VM network, to enable IPv6 support, select IPv6 from the ‘IP address protocol for the VM network’ dropdown. Please note that dual-stack (IPv4 + IPv6) support is not available for VM networks in the current release. A scripted sketch follows below.
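If you script VM network creation, the IP address protocol can be specified at creation time. The following is a minimal sketch under the assumption that the CA/PA IP address pool type parameters of New-SCVMNetwork accept IPv6 values; the names are placeholders, so verify the exact parameters with Get-Help New-SCVMNetwork in your VMM build.

```powershell
# Minimal sketch: create an SDN VM network that uses the IPv6 address space.
# "Tenant Network" and the VM network name are placeholders; the pool type values are
# assumptions to verify against the cmdlet help in your environment.
$logicalNetwork = Get-SCLogicalNetwork -Name "Tenant Network"

New-SCVMNetwork -Name "Tenant VM Network (IPv6)" `
    -LogicalNetwork $logicalNetwork `
    -IsolationType "WindowsNetworkVirtualization" `
    -CAIPAddressPoolType "IPV6" `
    -PAIPAddressPoolType "IPV6"
```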

 


 

 

SLB (Load Balancer) 

  • To use the IPv6 address space, add an IPv6 subnet to the network site and create IPv6 address pools for the private and public VIP networks.  
  • Add the IPv6 address pools when you onboard an SLB service. 


 

Gateway 

  • While creating the GRE VIP logical network, add an IPv6 subnet to the network site and create an IPv6 address pool. 
  • While onboarding the Gateway service, select the ‘Enable IPv6’ checkbox and select the IPv6 GRE VIP subnet that you created previously. 
  • Also select the public IPv6 pool and provide the public IPv6 address. 


 

 

Site to Site VPN connection 

  • To enable IPv6 for a site-to-site VPN connection, the routing subnet must include both IPv4 and IPv6. For the gateway to work with IPv6, provide the IPv4 and IPv6 addresses separated by ‘;’. 


 

 

 

Simplifying Logical Switch Creation 

 

Simplifying logical switch creation is the second step in our journey to simplify VMM networking for our customers. In the 2019 UR1 release, we simplified the process of creating logical networks. In the 2019 UR2 release, we have made it easier for customers to configure logical switches. 

 

Based on the feedback we received from customers about VMM networking, we understood that we need to provide the following to make it easier for customers to configure VMM networking:

 

  • Smart Defaults and Visual Representations
  • Clear explanation of different options 
  • Topology View 

 

Smart Defaults and Visual Representations

General Screen

On the General screen, the default uplink mode is now shown as Embedded Team. There is now a clear explanation suggesting that users choose "Embedded Team" as the uplink mode for Windows Server 2016 and above, and a similar explanation suggesting "Team" as the uplink mode for Windows Server 2012. There are also visual representations of the "Embedded Team" and "Team" options to make the choice clearer for customers.

 


 

Extensions Screen

On the Extensions screen, no extensions are pre-selected by default. 

 


 

Uplink Screen

On the Uplink screen, we now show only the load balancing algorithms relevant to the selected uplink mode. If customers choose Embedded Team as the uplink mode, the only supported load balancing algorithms are Hyper-V Port and Dynamic, and the default is Hyper-V Port. When the user hovers over the Hyper-V Port and Dynamic algorithms, a friendly informational message explains that Hyper-V Port is the recommended algorithm. 
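For reference, the same choice can be expressed when scripting the uplink side of a logical switch. The following is a minimal sketch of an uplink port profile for a switch that uses Switch Embedded Teaming; the profile and network site names are placeholders, and Hyper-V Port is the algorithm you would normally pick for an embedded team.

```powershell
# Minimal sketch: an uplink port profile for a logical switch that uses Switch Embedded Teaming.
# "SET Uplink" and "Datacenter_0" are placeholder names for this example.
$site = Get-SCLogicalNetworkDefinition -Name "Datacenter_0"

New-SCNativeUplinkPortProfile -Name "SET Uplink" `
    -LBFOLoadBalancingAlgorithm "HyperVPort" `
    -LBFOTeamMode "SwitchIndependent" `
    -LogicalNetworkDefinition $site
```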

 

 

Clear explanation of different options 

Settings Screen

On the Settings screen, we have now provided explanations of the various options related to Minimum Bandwidth mode. As the customer changes the minimum bandwidth mode, the corresponding explanation changes as well.   

 


 

 

Virtual Port Screen

On the Virtual Port screen, we now show the mapping of port classification to port profile. 

 


 

 

We have also simplified the text and layout of the screen where customers add a port classification and port profile. 

 

 


 

 

 

We have also added a check and a user-friendly error message when the customer tries to proceed to the next screen after adding only a port classification and not a port profile.   

 


 

 

 

 Topology View

Once the logical switch is created, the customer can right-click the logical switch name and then click the View Topology option to view the topology.

 
 


 

 

The topology diagram shows the uplink port profiles and virtual network adapters for this logical switch.  

 

 


 

 

Uplink Port Profiles - Information regarding Load Balancing Algorithm, Teaming Mode and Network Sites is shown in the topology diagram.

 

 


 

Virtual Network Adapters - Information regarding VM Networks, VLANs and Port Classifications is shown in the topology diagram. 

 


 

 

We also want to let our customers know that logical network simplification (introduced in 2019 UR1) and logical switch simplification (introduced in 2019 UR2) are intermediate steps in simplifying VMM networking; our end goal is a revamped UX/UI for the networking section so that users can easily and intuitively configure VMM networking settings. 

 

 

 

 

 
