SCVMM 2012 Networking Best Practices for Hyper-V deployments

Published Feb 15 2019 05:40 AM
First published on TECHNET on Apr 06, 2011
I was recently at MMS, where I delivered sessions on fabric management (network and storage) and service modeling in SCVMM 2012. One of the questions we were asked in the birds of a feather session was about networking best practices for Hyper-V deployments, specifically how many NICs should be used and how VM networking should be configured.

This blog does not cover partner implementations. For a comprehensive understanding of a particular OEM's implementation, please refer to that OEM's documentation on its networking strategies and recommendations.

Dedicating a management network is a common feature of data center solutions. Ideally, hosts should be managed over a dedicated network so that management traffic does not compete with guest traffic, and to provide a degree of separation for security and ease of management. This implies dedicating one NIC per host and one port per network device to the management network.

When you perform VLAN-based network segmentation, it is important to ensure that the servers, clusters, network switches, and SCVMM settings are all configured consistently to enable rapid provisioning and correct segmentation. If you want to support a scenario where a VM can fail over to any node and maintain network connectivity, define identically named virtual networks on all nodes.
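As a rough sketch of that guidance, the Hyper-V PowerShell module (Windows Server 2012 or later) can create an identically named external virtual switch on every cluster node and tag a VM's adapter with a VLAN ID. The node names, switch name, adapter name, and VLAN ID below are illustrative assumptions, not values from this post; in practice SCVMM's logical networks can drive this for you.

```powershell
# Create the same external virtual switch name on every cluster node so a
# VM keeps connectivity after failing over to any node. Names are examples.
$nodes = 'HV-Node01', 'HV-Node02', 'HV-Node03'
foreach ($node in $nodes) {
    Invoke-Command -ComputerName $node -ScriptBlock {
        New-VMSwitch -Name 'GuestNetwork' -NetAdapterName 'Ethernet 2' -AllowManagementOS $false
    }
}

# Put a VM's network adapter on VLAN 100 to keep its traffic segmented.
Set-VMNetworkAdapterVlan -VMName 'AppVM01' -Access -VlanId 100
```

Because the switch name matches on every node, the cluster can live migrate or fail over the VM without reconfiguring its network adapter.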

For recommendations on NIC configuration by quantity and type, please see the live migration network configuration guide.

Use multiple network adapters, multi-port network adapters, or both on each host server. For converged designs, network technologies that provide teaming or virtual network interface cards (NICs) can be utilized, provided that two or more physical adapters can be teamed for redundancy and multiple virtual NICs and/or VLANs can be presented to the hosts for traffic segmentation and bandwidth control.

The following network connections are required:

· One network dedicated to management purposes on the host machine

· One network dedicated to the clustered shared volumes (CSV) and cluster communication network

· One network dedicated to the live migration network

· One or more networks dedicated to the guest VMs (use 10-gigabit-per-second [Gbps] network adapters for the highest consolidation)

· If using Internet Small Computer System Interface (iSCSI), one network dedicated to iSCSI with multipath I/O (MPIO)
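For the converged design mentioned above, one way to sketch this separation on Windows Server 2012 or later is to team two physical NICs, bind a virtual switch to the team, and carve out a host virtual NIC per traffic class. The team, switch, NIC, and VLAN names below are illustrative assumptions, not prescriptions from this post.

```powershell
# Team two physical adapters for redundancy, then bind a virtual switch
# to the team. Adapter and team names are examples.
New-NetLbfoTeam -Name 'ConvergedTeam' -TeamMembers 'NIC1', 'NIC2'
New-VMSwitch -Name 'ConvergedSwitch' -NetAdapterName 'ConvergedTeam' -AllowManagementOS $false

# One host virtual NIC per required traffic class from the list above.
foreach ($net in 'Management', 'CSV', 'LiveMigration') {
    Add-VMNetworkAdapter -ManagementOS -Name $net -SwitchName 'ConvergedSwitch'
}

# Segment one of the host vNICs onto its own VLAN (VLAN 20 is an example).
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'LiveMigration' -Access -VlanId 20
```

With this layout, the physical team provides redundancy while VLANs and per-vNIC policies provide the traffic segmentation and bandwidth control the converged design calls for.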

Virtual Machine Networking

Hyper-V guests support two types of virtual network adapters: synthetic and emulated. The faster performing of the two, synthetic, makes use of the Hyper-V VMBus architecture and is the high-performance, native device in the VM. Synthetic devices require that the Hyper-V integration components be installed within the guest. Emulated adapters are available to all guests even if integration components are not available.

Always use synthetic virtual network adapters when possible. Because there are integration services for all supported Hyper-V guest operating systems, the primary reason to use the emulated network adapter is for pre-boot execution environment (PXE) booting.
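To illustrate the PXE case, a legacy (emulated) adapter can be added to a Generation 1 VM for deployment and removed afterward, leaving only the synthetic adapter. The VM and switch names here are illustrative assumptions.

```powershell
# Add an emulated (legacy) adapter so the VM can PXE boot; the default
# synthetic adapter remains for normal operation. Names are examples.
Add-VMNetworkAdapter -VMName 'DeployVM01' -SwitchName 'GuestNetwork' -IsLegacy $true

# After OS deployment, drop the legacy adapter and keep the synthetic one.
Get-VMNetworkAdapter -VMName 'DeployVM01' |
    Where-Object IsLegacy |
    Remove-VMNetworkAdapter
```

This keeps the slower emulated path in use only for the brief PXE window, consistent with the guidance to prefer synthetic adapters whenever possible.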

You can create many virtual networks on the server running Hyper-V to provide a variety of communications channels.

For example, you can create networks to provide the following:

· Communications between VMs only. This type of virtual network is called a private network.

· Communications between the host server and VMs. This type of virtual network is called an internal network.

· Communications between a VM and a physical network, created by associating the virtual network with a physical network adapter on the host server. This type of virtual network is called an external network.

For the private cloud scenario, use one or more external networks per VM, and segregate the networks with VLANs and other network security infrastructure as needed.
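The three virtual network types above map directly onto virtual switch creation in the Hyper-V PowerShell module (Windows Server 2012 or later). The switch names and the physical adapter name below are illustrative assumptions.

```powershell
# Private: VM-to-VM communication only, no host or physical access.
New-VMSwitch -Name 'PrivateNet' -SwitchType Private

# Internal: VMs plus the host server, still no physical network access.
New-VMSwitch -Name 'InternalNet' -SwitchType Internal

# External: VMs reach the physical network via a bound physical adapter.
New-VMSwitch -Name 'ExternalNet' -NetAdapterName 'Ethernet 3'
```

Note that binding to a physical adapter (the external case) is what makes the VLAN-based segregation described above meaningful end to end, since the tagged traffic must traverse the physical switches.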

Nitin Bhat, Program Manager,  SCVMM
