Windows Server 2019 Preview Build 17623 Hyper-V Issue

Occasional Contributor

Hello everyone. I just downloaded the Windows Server 2019 Preview Build 17623 and started building an environment using Hyper-V. The Hyper-V role installed successfully after a restart and I created my first VM in the Hyper-V management console, yet when I try to start the VM I get the error "Virtual machine <VMname> could not be started because the hypervisor is not running".


Virtualization is enabled in the BIOS on the server and I have removed and reinstalled the Hyper-V role as well. To confirm that it wasn't the physical hardware, I installed Server 2016 with the Hyper-V role and the VM I created started without any issues. Has anyone else come across this Hyper-V issue in the Windows Server 2019 Preview Build 17623?
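One quick check worth running before removing and reinstalling roles is whether the boot configuration actually tells Windows to launch the hypervisor. A sketch, run from an elevated prompt on the host:

```powershell
# Bottom of the output, under "Hyper-V Requirements", shows whether
# virtualization extensions are detected (or that a hypervisor is
# already running).
systeminfo

# Inspect the current boot entry for the hypervisorlaunchtype value.
bcdedit /enum {current}

# If hypervisorlaunchtype is "Off", set it to Auto and reboot.
bcdedit /set hypervisorlaunchtype auto
```

If the launch type is already Auto and the hypervisor still won't start, the problem is more likely firmware or driver level, as it turned out below.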

12 Replies
I have it running on an HPE MicroServer and I am able to boot up VMs with no issues. I am using the Datacenter version with GUI. Which one are you using?

Thanks for your reply.


I have tried it on both the Datacenter and Standard versions with no success. I have downloaded the Windows Server vNext LTSC Preview - Build 17623 ISO again and will retry with this latest download.

Let us know how you get on.

What's your guest OS?  Maybe one of us can reproduce the error.

OK. Here is an update.


Installed the Windows Server vNext LTSC Preview - Datacenter version with no problems on my Dell PowerEdge R710 server. Then had no problems installing the Hyper-V role or creating a VM. I started up the VM and no luck: same error message that the hypervisor is not running and to check that virtualization is enabled in the BIOS. Jumped into the BIOS and confirmed that it was enabled.


I had a look in the Hyper-V event logs and discovered "ID=15350, Severity=Error. The virtualization infrastructure driver (VID) is not running". After seeing this I thought I should check the Dell website for any firmware, driver or BIOS updates, and sure enough there was a BIOS update released on 20 March 2018. Updated the BIOS and the VM is now working.
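For anyone else chasing this, the Hyper-V event logs can be queried from PowerShell as well as from Event Viewer. A sketch (the exact log names can vary slightly between builds, so list them first):

```powershell
# List the Hyper-V related event logs available on this host.
Get-WinEvent -ListLog *Hyper-V* | Format-Table LogName, RecordCount

# Pull recent errors from the VMMS admin log, where VM start
# failures like the VID error above are typically recorded.
Get-WinEvent -LogName "Microsoft-Windows-Hyper-V-VMMS-Admin" -MaxEvents 50 |
    Where-Object LevelDisplayName -eq "Error" |
    Format-List TimeCreated, Id, Message
```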

'Sure enough' is because of Spectre/Meltdown BIOS patching. You might be one of the few people on the planet to get an incidental benefit from that mess.



I installed Server Core on an HP ProLiant DL380 G6, enabled the Hyper-V role, created a VM with Windows 10 and started it without any issue. The only problem I found is with the network in the guest VM: in fact, I created an external virtual switch bound to the NIC used on the host, but the guest doesn't connect to the internet and doesn't ping the network gateway.



Hey Marco,


Are there IP Address settings on the virtual switch adapter on the Hyper-V Host?


By default the adapter may be called something like "vEthernet (Microsoft Network Adapter Multiplexor Driver - Virtual Switch)". Log into your Server Core console, launch PowerShell from the command prompt, then run this command to display what adapters you have:

Get-NetAdapter


Note the ifIndex of the vEthernet (Microsoft Network Adapter Multiplexor Driver - Virtual Switch) adapter, for example 22, then check your IP settings with this command using that ifIndex:

Get-NetIPConfiguration -InterfaceIndex 22


Let me know how you go





Hello Matt,


the strange thing is that the NIC has the address that a network usually gets when it shows up as an unidentified network.


Get-NetAdapter gives this:


Name                      InterfaceDescription                    ifIndex Status       MacAddress             LinkSpeed
----                      --------------------                    ------- ------       ----------             ---------
vEthernet (External)      Hyper-V Virtual Ethernet Adapter             16 Up           00-25-B3-E3-69-E8        10 Gbps
Ethernet                  QLogic BCM5709C Gigabit Ethernet ...#49       8 Disconnected 00-25-B3-E3-69-E8          0 bps
Ethernet 3                QLogic BCM5709C Gigabit Ethernet ...#50       6 Disconnected 00-25-B3-E3-69-EA          0 bps
Ethernet 4                QLogic BCM5709C Gigabit Ethernet ...#48       5 Up           00-25-B3-E3-69-E6         1 Gbps
vEthernet (InternalVne... Hyper-V Virtual Ethernet Adapter #2          28 Up           00-15-5D-00-28-01        10 Gbps
Ethernet 2                QLogic BCM5709C Gigabit Ethernet ...#51       4 Up           00-25-B3-E3-69-EC         1 Gbps


and if I run the command to see the IP, this is the result:


InterfaceAlias       : vEthernet (External)
InterfaceIndex       : 16
InterfaceDescription : Hyper-V Virtual Ethernet Adapter
NetProfile.Name      : Unidentified network
IPv4Address          :
IPv6DefaultGateway   :
IPv4DefaultGateway   :
DNSServer            : fec0:0:0:ffff::1


I also saw this with Get-VMNetworkAdapter -All (but I have never used this command before, so I don't know whether it is normal that the IP address doesn't appear for the External switch):


Name            IsManagementOs VMName       SwitchName     MacAddress   Status IPAddresses
----            -------------- ------       ----------     ----------   ------ -----------
InternalVnet40  True                        InternalVnet40 00155D002801 {Ok}
External        True                        External       0025B3E369E8 {Ok}
Scheda di rete  False          vm-w10-mgt   External       00155D002805 {Ok}   {, fe80::5012:480e:323...
Scheda di rete  False          vm-w10-n1    InternalVnet40 00155D002802 {Ok}   {}
Scheda di rete  False          vm-w10-n2    InternalVnet40 00155D002803 {Ok}   {}
Network Adapter False          vm-w2k19-pre External       00155D002806        {}


It seems like the virtual NIC that Hyper-V creates was not bound to the physical NIC.
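One way to verify that suspicion is to ask Hyper-V directly which physical adapter each switch is bound to. A sketch (the switch name "External" and adapter name "Ethernet 4" match the output above; substitute your own):

```powershell
# An External switch should list the physical NIC's description
# and show whether it is shared with the management OS.
Get-VMSwitch |
    Format-List Name, SwitchType, NetAdapterInterfaceDescription, AllowManagementOS

# If the binding points at the wrong (or a disconnected) NIC,
# re-point the External switch at the physical adapter that is Up.
Set-VMSwitch -Name "External" -NetAdapterName "Ethernet 4"
```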


Any suggestion?


I'm also checking whether the same behavior occurs with the 17623 Semi-Annual build. I'll let you know.






Hello Matt,


I think I found the solution to the issue and posted the reply in the thread created about it on this same forum:


Thanks for your support.



Hi Marco,


Good to hear that you might have found a solution.


I am curious about your previous post, though: you have both a vEthernet (External) adapter and a vEthernet (Internal) adapter on your VM host. Are you using the vEthernet (Internal) adapter?


In my environment, if my VM host has more than one physical NIC I tend to team them for redundancy and performance. I do this before I install the Hyper-V role.


I can post the PowerShell commands that I use to create a teamed NIC. Then, after installing the Hyper-V role, I disable DHCP and tcpip6 and set a static IP address and DNS on the Hyper-V Virtual Ethernet Adapter.
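A sketch of those steps; the team name, adapter names, and addresses here are examples, so check yours with Get-NetAdapter and adjust before running:

```powershell
# Create an LBFO team from two physical NICs (names are examples).
New-NetLbfoTeam -Name "HVTeam" -TeamMembers "Ethernet","Ethernet 2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# After installing the Hyper-V role, bind the external virtual
# switch to the team and share it with the management OS.
New-VMSwitch -Name "External" -NetAdapterName "HVTeam" -AllowManagementOS $true

# Give the host's vEthernet adapter a static address and DNS,
# then disable IPv6 on it (addresses are placeholders).
New-NetIPAddress -InterfaceAlias "vEthernet (External)" -IPAddress 192.168.1.10 `
    -PrefixLength 24 -DefaultGateway 192.168.1.1
Set-DnsClientServerAddress -InterfaceAlias "vEthernet (External)" -ServerAddresses 192.168.1.1
Disable-NetAdapterBinding -InterfaceAlias "vEthernet (External)" -ComponentID ms_tcpip6
```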


Also, Project Honolulu is a good way to manage Server Core versions in conjunction with PowerShell and RSAT. Below is the link.




Hello Matt,


I created the internal switch to test the communication between two virtual machines that have no need to access other networks; I created the external switch to assign it to a management virtual machine where I want to install Project Honolulu and RSAT to test them.


Because this is a test server, I haven't teamed the NICs (so far), and usually, as you do, I disable DHCP and assign a static address. Previously, even on my Windows 10 machine, I've seen that when you install the Hyper-V role it creates a virtual adapter, binds it to the physical one, and the former takes over the static IP address of the latter. Is that still true, or is it like that only on Windows 10 and on the standalone Hyper-V server, while if I install Windows Server and then the Hyper-V role I can assign different static IP addresses to both the virtual and the physical NIC?

I thought that creating an external switch and checking the box that allows the management OS to use the adapter would do the binding, leaving me with a single IP address.
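That checkbox setting can also be inspected and changed from PowerShell, which is an easy way to confirm what the switch is actually doing. A sketch, using the switch name "External" from the output above:

```powershell
# Check whether the External switch is shared with the management OS
# and which physical adapter it is bound to.
Get-VMSwitch -Name "External" |
    Format-List Name, AllowManagementOS, NetAdapterInterfaceDescription

# Enable sharing if it is off; this (re)creates the host's
# vEthernet adapter, which then needs its own IP configuration.
Set-VMSwitch -Name "External" -AllowManagementOS $true
```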


What am I missing?