Forum Discussion
Windows Server 2022 Hyper-V guest state not saved on host reboot
Windows Server 2022 with latest updates.
All VMs are set to "Automatic Stop Action" = Save
But on every reboot the VMs lose their state and the Windows VM reports "The system has rebooted without cleanly shutting down first."
On the host machine there is an error in the system event log
Service Control Manager event id 7043 "The Hyper-V Virtual Machine Management service did not shut down properly after receiving a preshutdown control."
This error occurs around 48 seconds after the reboot is initiated.
Setting WaitToKillServiceTimeout in the registry to a higher timeout didn't change anything.
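(For reference, the value lives under HKLM\SYSTEM\CurrentControlSet\Control; the 300000 ms below is only an example value, not necessarily what I used:)
Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Control' -Name WaitToKillServiceTimeout
Set-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Control' -Name WaitToKillServiceTimeout -Value '300000'   # milliseconds, example value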
Manually stopping the Hyper-V Virtual Machine Management service often shows error 1053 ("The service did not respond to the start or control request in a timely manner"), but after 3 to 5 minutes the service reaches the stopped state while the virtual machines keep running. So even when this service is killed, it shouldn't be the cause of the problem.
There are 3 VMs running and manually setting them to the save state only takes a few seconds.
Two of the VMs are Linux VMs, the third is Windows Server 2022
There are no other errors in the system or application event log when rebooting the host.
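For reference, this is roughly how I check the stop action and save the VMs by hand, using the standard Hyper-V cmdlets:
Get-VM | Select-Object Name, State, AutomaticStopAction    # all three VMs show AutomaticStopAction = Save
Get-VM | Where-Object State -eq 'Running' | Save-VM        # saving them manually completes within a few seconds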
Thanks,
Reinhard
13 Replies
- benxenamduclongcom (Copper Contributor)
Oki
- Mikael (Brass Contributor)
Hi again!
Are you by any chance running a guest with FreeBSD or another OS that doesn't handle the VSS service?
I found this in the comments; it solved my guest shutdown problem. It looks like this single guest messes things up for the whole Hyper-V shutdown procedure...
https://www.altaro.com/hyper-v/extending-hyper-vs-guest-grace-period-host-shutdown/
Check the comment from "Jon W September 2, 2018 at 9:29 am"
Ok i found a way to replicate the issue. If you install anything based on freebsd 11.2 on hyper-v it will log an error "freebsd kernel: hvvss0: Unknown opt from host: 4" when a shutdown or reboot of the host is initiated. It looks like windows attempts to do something with VSS before issuing a shutdown command and it is failing on the freebsd VM, causing the whole vss system to fail on Hyper-V, causing all the vm's power to be pulled, even Windows VM's. A fix for me was to disable VSS guest services and the vm's all reboot as normal now.
(I think I've solved this once before but totally forgot about it... it was way back in 2018 also...)
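If you want to do it from PowerShell on the host, something like this should work. The VM name below is just a placeholder, and the integration service's display name varies a bit between Windows versions (e.g. "Backup (volume checkpoint)" or "Backup (volume shadow copy)"), so list them first:
Get-VMIntegrationService -VMName 'freebsd-guest'                                            # placeholder name; shows the exact service names
Disable-VMIntegrationService -VMName 'freebsd-guest' -Name 'Backup (volume checkpoint)'     # use the backup/VSS name shown above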
- BoBandy (Copper Contributor)
This was driving me insane until I found your post. I spent hours checking my settings and doubting my abilities, but in the end I feel vindicated that it wasn't "my issue".
On my Hyper-V host I have three Windows Server 2022 VMs, one pfSense VM (FreeBSD-based), and one OpenVPN Access Server (Ubuntu). pfSense was the culprit. After removing the integration service named "Backup (volume shadow copy)", the automatic stop actions started working.
The weird part for me is that it was only happening on 1 out of the 3 Windows Server 2022 VMs. I suspect it had to do with the order in which the VMs were being processed.
Thank you!
- Mikael (Brass Contributor)
Hi!
Happy to hear you resolved the issue and regained your peace of mind!🙂
I've experienced similar behavior; in my case it was OPNsense. I've had servers where it worked too, and maybe you're onto something here: if FreeBSD is the last one "out the door," the others may have time to shut down properly. I put a lot of time into this but, like you, could not solve it.
I went as far as creating my own solution before I found out that it was FreeBSD causing this issue. I totally customized the shutdown and startup PowerShell procedure to handle what Hyper-V should do.
As it turned out, a much easier solution is just to disable the VSS service.
Recently, I tested again with the latest FreeBSD 14.2 (and Server 2022 and 2025), and unfortunately, the behavior is not fixed. So until it is, we'd better disable that integrated service.
- Paul Christel (Copper Contributor)
Thanks for the info. We do have an Ubuntu 22.04.02 client on this machine. Did you disable Guest Services or the Backup Service on the non-Windows VM to resolve the issue?
Paul
- Mikael (Brass Contributor)
I unchecked the VSS service under the VM settings (Integration Services), and the shutdown/save-state of all guests worked as intended when the host is shut down or restarted.
For Linux I haven't seen this problem; I'm running a couple of Debian guests on several hosts.
But maybe it's worth a try in your case. If not, you'll have to "save" and "restore" them with a script.
- Paul Christel (Copper Contributor)
Were you able to get this resolved? We have a 2022 server that is exhibiting this issue as well. We ran Windows Update on the host yesterday and upon reboot, it pulled the plug on all VMs instead of them going to the configured Saved state. Manually putting them in the Saved state via the GUI seems to work fine. We were previously on Windows Server 2012 R2 and never had this issue.
Paul
- Mikael (Brass Contributor)
Hi!
No, I didn't have the opportunity or time to investigate the issue further. As I'm not running Windows Server on server-grade hardware (it's a somewhat older Intel NUC), I didn't pursue it. I'm still a bit curious why it just kills the VMs the way it does, and how to troubleshoot it. When I looked at it, I couldn't isolate the cause.
The attached PowerShell scripts above (vm-up-down...), though, work like a charm. They do pretty much the same thing as the built-in function does.
- Paul Christel (Copper Contributor)
Thanks for the info. It's pretty irritating, to say the least. I tried setting them to Shutdown instead of Save and it pulled the plug just the same. Manually saving or shutting down seems to work properly for me, but shutting down and/or restarting the host just kills everything.
I'll take a look at the scripts when I get a chance.
- Mikael (Brass Contributor)
Maybe related. https://learn.microsoft.com/en-us/answers/questions/1161178/dirty-shutdown-of-guest-on-squeaky-clean-2022-stan
A question I posted a moment ago.
This install drives me up the wall; I can't figure out why guests are killed off the way they are.
I worked around it temporarily with GPO startup/shutdown scripts. The scripts replicate the behavior of the built-in function that should handle this. The code is messy but it does the job.
I've attached them. Check the paths in use or use your own and modify the script.
C:\prg\script (put the scripts here and link them to powershell startup/shutdown with gpedit.msc)
C:\prg\script\vm-log (log and state file goes here, they are created from the script)
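The attached scripts themselves aren't reproduced here, but a minimal sketch of the same idea, assuming the paths above and the standard Hyper-V cmdlets (script/file names are placeholders), looks roughly like this:
# "vm-down" sketch: link as a GPO shutdown script. Saves every running VM and records which ones were running.
$logDir    = 'C:\prg\script\vm-log'
$stateFile = Join-Path $logDir 'running-vms.txt'
New-Item -ItemType Directory -Path $logDir -Force | Out-Null
$running = Get-VM | Where-Object State -eq 'Running'
Set-Content -Path $stateFile -Value @($running.Name)
$running | Save-VM                                     # or Stop-VM for a clean guest shutdown instead of a saved state
# "vm-up" sketch: link as a GPO startup script. Resumes whatever was running before the host went down.
if (Test-Path $stateFile) {
    Get-Content $stateFile | ForEach-Object { Start-VM -Name $_ }
}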
Brgs,
- Austin_M (Brass Contributor)
Hello Reinhard Schuerer,
When a Hyper-V VM is stuck, trying to start or stop it typically fails with an error along the lines of "<VM-name> failed to change state."
Common reasons why VMs lose their state on every reboot include incorrect network configuration, storage failure, VM power options, Routing and Remote Access configuration, and insufficient permissions to access the VM files.
You can address the problem using the following methods:
Method 1: By using native GUI tools
You can use the Hyper-V and Windows GUI to identify the relevant vmwp.exe process and end it. Hyper-V Manager is the built-in tool for managing virtual machines in a Hyper-V environment.
Get the VM GUID in Hyper-V Manager.
Right-click the name of your Hyper-V host in Hyper-V Manager.
In the context menu, select Hyper-V Settings.
In the Hyper-V Settings window, click Virtual Machines in the left pane to find the default path where the VM files are located on the Hyper-V host.
In Windows Explorer, browse to the subfolder where the files of the frozen VM are located.
Open the VM folder; you should see a subfolder and files with a long name consisting of digits and letters. This is the VM GUID.
From there, you can identify and end the vmwp.exe task related to the problematic Hyper-V VM.
Open Windows Task Manager on the host operating system: click Start > Run, type taskmgr, and press Enter.
Find the vmwp.exe process associated with that GUID, right-click it, and in the context menu select End task to power off the VM and get it into a proper stopped state.
Method 2: By using Process Explorer Tool
You can identify the right instance of the vmwp.exe process for the problematic VM with the help of Process Explorer.
Unzip the Process Explorer files to a folder of your choice.
From Hyper-V Manager, open the settings of the problematic VM, select its virtual hard disk, and copy the full path to the VHD file.
Start Process Explorer by running the appropriate executable.
Click the binoculars icon, paste the path to the stuck VM's virtual disk file into the handle/DLL substring field, and click Search.
Right-click the vmwp.exe process that is found, and in the context menu click Kill Process.
Method 3: By using PowerShell to kill the VM process
Open PowerShell and first try the Stop-VM "VM-name" -Force command. If the VM stays stuck, get its unique ID (GUID):
$VMGUID = (Get-VM "VM-name").Id     # the GUID is also visible in the output of: Get-VM "VM-name" | fl *
Now, run these commands to find the VM's worker process (vmwp.exe) and kill it, which stops the VM:
$VMWMProc = Get-WmiObject Win32_Process | Where-Object { $_.Name -match 'vmwp' -and $_.CommandLine -match $VMGUID }
Stop-Process -Id $VMWMProc.ProcessId -Force
Now the problematic VM worker process is killed and the VM is stopped. You can then adjust the VM settings if needed and start the VM again.
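A quick way to confirm afterwards, using the same placeholder VM name:
Get-VM "VM-name" | Select-Object Name, State    # the VM should now report Off
Start-VM "VM-name"                              # start it again once the settings look right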
Hope the information above helps you recover the VMs' state.
------------------------
Regards,
Austin_M