Hyper-V
Feedback on ansible hyperv collection
Hi all, I'm looking for feedback of any sort on an Ansible collection I've started to manage VMs on Hyper-V. The repo is at gocallag/hyperv (github.com), and the collection can be installed via ansible-galaxy collection install gocallag.hyperv. You can also find it on Ansible Galaxy as gocallag.hyperv. There are probably still quite a few bugs in it, but I'm happy for any feedback.
No SET-Switch Team possible on Intel X710 NICs?

Hello, we have a lot of servers from different vendors using Intel X710 DA2 network cards. They work fine standalone, and they work fine if we create switch-independent teams using Server Manager, regardless of whether we use the Dynamic or Hyper-V Port load-balancing mode. But we can't use these teams in Server 2025, because we have to create SET switch teams instead. As soon as we create a Hyper-V SET switch team with X710 cards, they have limited to no network communication: they can still reach some servers, are slow to some others, and can't communicate with some at all. In particular, communication to other servers that also use X710 cards with SET switches is completely broken. SET teams with other cards such as the E810 work just fine. I've read several times that the X710 cards simply won't work with SET, even since Server 2016, but I can't really give up on this, since we would have to replace a lot of them. We have tried disabling features such as VMQ, RSS, and RSC, but couldn't make it work. Firmware and drivers are the most recent, but it happens with older versions too. Does anyone have a solution? Thank you!
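For context, this is roughly how we create and inspect the SET switch in PowerShell; the adapter and switch names below are placeholders, and the VMQ step is one of the workarounds we tried:

# Create a SET (Switch Embedded Teaming) switch over two X710 ports
New-VMSwitch -Name "SET-vSwitch" -NetAdapterName "X710-Port1", "X710-Port2" -EnableEmbeddedTeaming $true -AllowManagementOS $true

# Check team members and the load-balancing algorithm
Get-VMSwitchTeam -Name "SET-vSwitch" | Format-List

# Optionally change the load-balancing mode
Set-VMSwitchTeam -Name "SET-vSwitch" -LoadBalancingAlgorithm HyperVPort

# One of the workarounds we tried: disable VMQ on the physical team members
Disable-NetAdapterVmq -Name "X710-Port1", "X710-Port2"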
Untagged VLAN - Server 2025 Hyper-V

Hi, I have a strange issue and can't find a solution. We are using Server 2025 with a two-node Hyper-V cluster. Most of the machines use VLANs, which works fine. Some machines use no VLAN configuration, which, given our switch configuration, effectively means access VLAN 1. With Server 2019 this worked fine. With Server 2025, on the same NIC port and the same server/NIC hardware, "untagged" VMs don't get any network connection. If I add a second untagged NIC to the VM, that NIC immediately gets an IP address and has a proper connection. If I then remove the first NIC, the second NIC stops working. It looks like something has changed with Server 2025 (or maybe already with Server 2022). Do you have any idea what kind of problem I have found? Thanks, Jack
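For reference, this is roughly how I check and set the untagged state on the affected adapters; the VM name is a placeholder:

# Show the current VLAN mode of every adapter on the VM
Get-VMNetworkAdapterVlan -VMName "VM01"

# Explicitly force the first adapter into untagged mode (access VLAN is handled by the physical switch)
Get-VMNetworkAdapter -VMName "VM01" | Select-Object -First 1 | Set-VMNetworkAdapterVlan -Untagged

# Confirm the adapter state and the switch it is connected to
Get-VMNetworkAdapter -VMName "VM01" | Format-List Name, SwitchName, MacAddress, Status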
PowerShell counterpart for Failover Cluster Manager "Live Migration Settings"

In Failover Cluster Manager, there's "Live Migration Settings" where I can define which cluster networks I want to carry live migration traffic. Even after some research, I cannot find a PowerShell cmdlet that lets me do the same.
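The closest thing I have found so far is the MigrationExcludeNetworks parameter on the "Virtual Machine" cluster resource type, but I am not certain it is the exact equivalent of that dialog. A sketch, where the live migration network name is a placeholder:

# List cluster networks and their IDs
Get-ClusterNetwork | Format-Table Name, Id, Role

# Exclude every cluster network except the one intended for live migration
$lmNet = Get-ClusterNetwork -Name "LiveMigration"
$exclude = (Get-ClusterNetwork | Where-Object { $_.Id -ne $lmNet.Id }).Id -join ";"
Get-ClusterResourceType -Name "Virtual Machine" | Set-ClusterParameter -Name MigrationExcludeNetworks -Value $exclude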
Hyper-V SMB UNC LocalIP/name avoiding loopback

Hello, I have the following scenario: two standalone servers joined to a domain (I can't set up a failover cluster at the moment). What I want to achieve is to access the storage resources via SMB UNC paths from both nodes, in order to perform a storage migration: \\svhpv1\VMs and \\svhpv2\VMs. But when I try to have svhpv1 access \\svhpv1\VMs, I get an "Access Denied" error. I believe this error occurs because the access is attempted through the loopback adapter. I'm experiencing the same issue with svhpv2 when trying to access \\svhpv2\VMs. How can I avoid the loopback so I can properly configure my scenario? Any help is appreciated.
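For completeness, a minimal sketch of the checks that can help rule out share-level permissions before blaming the loopback path (using the share and host names above):

# On svhpv1: confirm the share exists and which accounts it grants access to
Get-SmbShare -Name "VMs" | Format-List Name, Path, FolderEnumerationMode
Get-SmbShareAccess -Name "VMs"

# From svhpv1 itself: test whether the local UNC path resolves at all
Test-Path "\\svhpv1\VMs"

# Check who is connected to the share and with which credentials
Get-SmbSession | Format-Table ClientComputerName, ClientUserName, NumOpens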
NUMA Problems after In-Place Upgrade 2022 to 2025

We have upgraded several Hyper-V hosts with AMD Milan processors from Windows Server 2022 to Windows Server 2025 using the in-place upgrade method. We are encountering an issue where, after starting about half of the virtual machines, the remaining ones fail to start due to a resource shortage error, even though about 70% of the host's RAM is free. We can only get them to start by enabling the "Allow Spanning" setting, but this reduces performance, and with so many free resources this shouldn't be happening. Has anyone else experienced something similar? What has changed in 2025 to cause this issue? The error is:

Virtual machine 'R*****2' cannot be started on this server. The virtual machine NUMA topology requirements cannot be satisfied by the server NUMA topology. Try to use the server NUMA topology, or enable NUMA spanning. (Virtual machine ID CA*****3-ED0E-4***4-A****C-E01F*********C4).

Event ID: 10002
<EventRecordID>41</EventRecordID>
<Correlation />
<Execution ProcessID="5524" ThreadID="8744" />
<Channel>Microsoft-Windows-Hyper-V-Compute-Admin</Channel>
<Computer>HOST-JLL</Computer>
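For comparison across hosts, this is roughly how we inspect the host NUMA topology, the spanning setting, and the per-VM NUMA limits; the VM name is a placeholder:

# Host NUMA nodes: processors and memory available per node
Get-VMHostNumaNode

# Current NUMA spanning setting on the host
Get-VMHost | Select-Object NumaSpanningEnabled

# NUMA limits as seen by one of the affected VMs
Get-VMProcessor -VMName "R2" | Select-Object Count, MaximumCountPerNumaNode, MaximumNumaNodesPerSocket

# The workaround we currently rely on (may require restarting the Hyper-V management service to take effect)
Set-VMHost -NumaSpanningEnabled $true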
Bypass LBFO Teaming deprecation on Hyper-V and Windows Server 2022

Starting with Windows Server 1903 and 1909, Hyper-V virtual switches on an LBFO network adapter team are deprecated (see documentation). The technology remains supported, but it will not evolve; the recommendation is to create a SET (Switch Embedded Teaming) team instead.

In practice

SET is a very interesting technology, but it comes with constraints: the interfaces used must have identical characteristics (manufacturer, model, link speed, and configuration). Even if these constraints do not seem huge, we are far from the flexibility of LBFO teaming, which, as a reminder, has practically no constraints at all. In practice, SET is recommended with network interfaces of 10 Gb or more, which is far from the typical LBFO use case (using all the onboard NICs of a motherboard, a home lab, refurbished hardware).

If SET cannot be used

As of Windows Server 2022, it is not possible to use the Hyper-V Manager console to create a virtual switch on an LBFO team: it prompts an error saying that LBFO has been deprecated. However, it is still possible to create this virtual switch with PowerShell. First, create the team from your network cards using Server Manager; in my case the team uses the LACP teaming mode and the Dynamic load-balancing mode. Then run the PowerShell command below to create the virtual switch on top of the team created in the previous step:

New-VMSwitch -Name "LAN" -NetAdapterName "LINK-AGGREGATION" -AllowNetLbfoTeams $true -AllowManagementOS $true

In detail: the virtual switch is named "LAN", the LBFO team is named "LINK-AGGREGATION", and the team remains usable to access the Hyper-V host. You will now see your teamed network up and running on the Hyper-V host. That's it!
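As an optional addition, the teaming step itself can also be scripted instead of using Server Manager. This is a sketch only; the physical adapter names are placeholders, and it assumes the built-in NetLbfo module:

# Create the LBFO team (LACP + Dynamic, as in the example above)
New-NetLbfoTeam -Name "LINK-AGGREGATION" -TeamMembers "NIC1", "NIC2" -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic

# Then create the virtual switch on top of it, allowing the deprecated LBFO binding
New-VMSwitch -Name "LAN" -NetAdapterName "LINK-AGGREGATION" -AllowNetLbfoTeams $true -AllowManagementOS $true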
PowerShell, Hyper-V: Examine network object relationships.

Is it possible in PowerShell to do things like:

- Get all VMNetworkAdapters connected to a given VMSwitch
- Get all VMNetworkAdapters provided by a hypervisor (connected to either a VM or the management OS)
- When I have the name of a VMNetworkAdapter only, determine whether it's connected to the management OS or to a VM, and if connected to a VM, which VM that is

without examining every single endpoint (VM, management OS) and building a database of objects and their relationships that would give me the desired information? That would certainly not be just a couple of lines of PowerShell, and depending on the size and type (remote, local) of the virtualization environment, I can imagine that time is a factor too.
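To make the question concrete, this is the direction I have been experimenting with; a sketch only, and I am not sure the -All switch and the IsManagementOs property cover every case. Names are placeholders:

# All adapters connected to a given switch, both VM-side and management OS
Get-VMNetworkAdapter -All | Where-Object SwitchName -eq "vSwitch-External"

# All adapters the hypervisor provides, regardless of what they are attached to
Get-VMNetworkAdapter -All

# Given only an adapter name, find out where it lives
Get-VMNetworkAdapter -All | Where-Object Name -eq "Network Adapter" | Select-Object Name, IsManagementOs, VMName, SwitchName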
Increase the size of user profile disk in my remote desktop server

Hi all experts. I have a server for Remote Desktop Services, Windows Server 2016 Standard, domain joined. It is configured with User Profile Disks, and the maximum size is set to 5 GB. I want to increase the maximum size, but I can't do it under the collection's properties because that field is grayed out. My questions: How do I increase the maximum size? Can I increase the maximum size for one single user only, and if so, how? I found some information on the web saying this can be done with the Diskpart command; is that true? If I follow the Diskpart method, will the user profiles suffer any data loss? I need your guidance and input, I appreciate it.
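For the per-user question, one commonly described approach is to expand that user's .vhdx directly and then extend the volume inside it. This is an untested sketch: the UNC path and target size are placeholders, and it assumes the Hyper-V and Storage PowerShell modules are available and that the user is logged off:

# Grow the virtual disk file itself
$vhd = "\\fileserver\UPD\UVHD-S-1-5-21-xxxxxxxxxx.vhdx"   # placeholder path to the user's profile disk
Resize-VHD -Path $vhd -SizeBytes 10GB

# Mount it, extend the largest partition to fill the new space, then dismount
$disk = Mount-VHD -Path $vhd -Passthru | Get-Disk
$part = Get-Partition -DiskNumber $disk.Number | Sort-Object Size -Descending | Select-Object -First 1
$max  = (Get-PartitionSupportedSize -DiskNumber $disk.Number -PartitionNumber $part.PartitionNumber).SizeMax
Resize-Partition -DiskNumber $disk.Number -PartitionNumber $part.PartitionNumber -Size $max
Dismount-VHD -Path $vhd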
Hyper-V: How do VMs communicate with external?

Simple scenario: VM --> vNIC --> vSwitch (external) --> physNIC --> physSwitch. The vNIC assigned to the VM has MAC address aa:aa:aa:aa:aa:aa; the physical NIC (physNIC, which the external vSwitch is bound to) has bb:bb:bb:bb:bb:bb. What mechanism ensures that when the VM sends a network packet to the external network (the physical network behind physSwitch), the MAC address of its vNIC (aa:aa:aa:aa:aa:aa) is used, and not the MAC address of the physNIC (bb:bb:bb:bb:bb:bb)? In other words: what makes physSwitch "see" aa:aa:aa:aa:aa:aa when the VM communicates with an external endpoint?
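A quick way to put the two MAC addresses side by side on a real host; the VM and switch names are placeholders:

# MAC address the VM presents on the wire
Get-VMNetworkAdapter -VMName "VM01" | Select-Object VMName, SwitchName, MacAddress

# Physical NIC the external vSwitch is bound to, and its own MAC address
$desc = (Get-VMSwitch -Name "vSwitch-External").NetAdapterInterfaceDescription
Get-NetAdapter -InterfaceDescription $desc | Select-Object Name, InterfaceDescription, MacAddress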