Forum Discussion

VEROChad
Copper Contributor
Oct 03, 2025

Issues Testing Azure Stack HCI on Hyper-V (Single Node) – Onboarding and Azure Arc Integration

I am currently testing Azure Stack HCI in our company environment. I have downloaded the latest version of Azure Stack HCI and deployed it on a single-node Hyper-V VM setup for evaluation. While I can access the local Azure Stack HCI portal, I am facing several issues when trying to onboard the host into the Azure portal.

Challenges I am facing:

I am running Azure Stack HCI on Hyper-V Manager with only one node (lab environment). Could this be a limitation for Azure portal onboarding?
Even though I downloaded the latest Azure Stack HCI build, the local portal shows the host as "not eligible." I am not sure why this is happening.
I tried to push two simple VMs to Azure Arc/local using scripts, but I received an error that Hyper-V components are not running, even though Hyper-V is active.
I also ran into an issue with Windows Admin Center because my subscription is not pay-as-you-go. Is this a blocker for testing scenarios?

My questions:

Is it possible to push VMs into Azure Arc (or manage them locally) in this single-node Hyper-V test environment?
Can this be done using infrastructure as code with Terraform and PowerShell, even without the original Azure Stack HCI hardware?
What are the best practices for testing Azure Stack HCI in a lab or non-production setup without purchasing physical nodes?

Any guidance, examples, or workarounds would be greatly appreciated. Our company is exploring Azure Stack HCI, but we want to validate the setup in a lab before considering hardware purchases.

2 Replies

  • The challenges you are facing are common in virtualised, single-node Azure Stack HCI lab environments. The good news is that you can indeed push VMs to Azure Arc and manage them locally, using PowerShell and Terraform, even without the original hardware.
    Here is a breakdown of your issues and the best practices for setting up your lab.
    Troubleshooting Your Deployment Issues
    The challenges you've encountered are likely due to missing prerequisites or the nature of running a production-focused OS in a nested virtual environment.
    Single-Node Hyper-V Setup for Onboarding
    Is a single-node setup a limitation for Azure portal onboarding?
    No, but it's complex: Azure Stack HCI (version 23H2 and later) supports single-node deployments for production. For a lab, running it as a nested VM inside Hyper-V is the supported way to evaluate the system without physical hardware.
    The Problem: The issue is less about the single node itself and more about the configuration of the Hyper-V VM hosting it. The deployment process is highly dependent on all hardware and network prerequisites being met, even in a virtual setting.
    "Not Eligible" Status: The "Not Eligible" status in the local portal almost always means that one or more required hardware or configuration prerequisites are not met on the host VM. This is a critical error that blocks deployment to Azure.
    Common reasons for "Not Eligible" in a virtual lab:
Nested Virtualisation/TPM: If you are using Windows Admin Center or the local portal for deployment, the check may fail if nested virtualisation is not enabled on the VM running the HCI OS, or if a virtual Trusted Platform Module (TPM) 2.0 device is not added and enabled on that VM (required for Secured-core server features). A sketch covering both fixes follows this list.
    Network Misconfiguration: Azure Stack HCI requires specific network adapter roles, even if you consolidate traffic on one adapter in a single-node configuration. If the virtual NICs aren't properly configured or renamed within the guest OS to match the deployment script's expectations (a common step in virtual deployments), the validation can fail.
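    A minimal PowerShell sketch covering both prerequisites, assuming a nested VM named "AzSHCI-Node1" (the VM and adapter names are purely illustrative):
    PowerShell
    # On the physical Hyper-V host: expose virtualisation extensions and add a vTPM (VM must be off)
    Stop-VM -Name "AzSHCI-Node1"
    Set-VMProcessor -VMName "AzSHCI-Node1" -ExposeVirtualizationExtensions $true
    Set-VMKeyProtector -VMName "AzSHCI-Node1" -NewLocalKeyProtector  # a key protector is required before the vTPM can be enabled
    Enable-VMTPM -VMName "AzSHCI-Node1"
    Start-VM -Name "AzSHCI-Node1"

    # Inside the guest HCI OS: rename the virtual NICs to whatever the deployment script expects
    Get-NetAdapter | Format-Table Name, InterfaceDescription, MacAddress
    Rename-NetAdapter -Name "Ethernet" -NewName "MGMT1"  # target name is illustrative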
    Hyper-V Components Not Running Error
    The error "Hyper-V components are not running" while attempting to push VMs to Azure Arc is misleading.
    Probable Cause: Missing Managed Identity or Extension: This usually points to a problem with the Arc VM Management components not being able to interact with the host Hyper-V layer via the Azure Resource Bridge. Possible causes include:
    The Managed Identity (System-assigned or User-assigned) is not correctly assigned to the target VM for Guest Management.
    Mandatory Arc extensions (like the Azure Connected Machine Agent or host agents) are not installed, not up to date, or are stuck. You can try restarting the host's cloud management group (Stop-ClusterGroup "Cloud Management" followed by Start-ClusterGroup "Cloud Management") or the mochostagent service.
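    A hedged restart sequence, run on the HCI host (group and service names as given above):
    PowerShell
    # Restart the cluster group that hosts the cloud management components
    Stop-ClusterGroup -Name "Cloud Management"
    Start-ClusterGroup -Name "Cloud Management"

    # Restart the MOC host agent service used by Arc VM management
    Restart-Service -Name mochostagent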
    Non-Pay-As-You-Go Subscription Blocker
    Is a non-Pay-As-You-Go subscription a blocker for testing scenarios?
Potentially for Windows Admin Center (WAC): While Azure Stack HCI billing is consumption-based and a Pay-As-You-Go subscription is often recommended for the initial setup, other subscription types (e.g., Enterprise Agreement, Free Trial) can carry administrative or billing limitations that prevent the Resource Providers from registering or the Azure Stack HCI deployment flow from initiating.
Workaround: Ensure the four required Azure Resource Providers are registered on your subscription: Microsoft.HybridCompute, Microsoft.GuestConfiguration, Microsoft.HybridConnectivity, and Microsoft.AzureStackHCI.
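    A quick way to register them, assuming the Az PowerShell module is installed and you are signed in via Connect-AzAccount:
    PowerShell
    # Register the resource providers required for Azure Stack HCI / Arc onboarding
    $providers = @(
        "Microsoft.HybridCompute",
        "Microsoft.GuestConfiguration",
        "Microsoft.HybridConnectivity",
        "Microsoft.AzureStackHCI"
    )
    foreach ($p in $providers) {
        Register-AzResourceProvider -ProviderNamespace $p
    }

    # Registration is asynchronous; verify it has reached "Registered"
    Get-AzResourceProvider -ProviderNamespace Microsoft.AzureStackHCI |
        Select-Object ProviderNamespace, RegistrationState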
    Path Forward: Lab Setup and Automation
    Pushing VMs to Arc in a Single-Node Environment
    Yes, you can manage VMs via Azure Arc and locally.
    Once your single-node host is successfully registered with Azure Arc as an Azure Stack HCI cluster (a critical initial step), you create a Custom Location in Azure. This custom location acts as a target for deploying and managing VMs from the Azure portal, treating your on-premises Hyper-V hosts as an Azure-managed resource pool.
    Using IaC with Terraform and PowerShell
    Can this be done using infrastructure as code with Terraform and PowerShell?
    Absolutely. Using IaC is a fantastic way to ensure a repeatable, consistent lab environment, especially when the deployment process can be sensitive to prerequisites.
    PowerShell for Host Setup: You'll use PowerShell scripts (often from Microsoft's public GitHub repos for virtual labs) to deploy the host VM, configure its virtual NICs, and perform the initial Active Directory and Azure Arc registration for the host.
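    As a rough illustration of that host-side step (names, sizes, and paths are illustrative; Microsoft's lab scripts handle far more, including the OS installation):
    PowerShell
    # Create the nested Generation 2 VM that will run the Azure Stack HCI OS
    New-VM -Name "AzSHCI-Node1" -Generation 2 -MemoryStartupBytes 32GB `
        -NewVHDPath "D:\VMs\AzSHCI-Node1\os.vhdx" -NewVHDSizeBytes 127GB `
        -SwitchName "LabSwitch"
    Set-VM -Name "AzSHCI-Node1" -ProcessorCount 8
    Add-VMNetworkAdapter -VMName "AzSHCI-Node1" -SwitchName "LabSwitch"  # extra NIC for the consolidated network model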
    Terraform for VM Lifecycle: Once the Azure Stack HCI host is Arc-enabled and the Custom Location is created, you can use Terraform to manage the virtual machine lifecycle (creation, deletion, and updates).
    The Terraform azurerm provider has an azurerm_stack_hci_deployment_setting resource, and the Azure CLI (az stack-hci-vm create) provides a simplified way to manage the Arc-enabled VMs using IaC principles.
    This is the best way to bypass any GUI limitations and ensure your VM deployments are standardised.
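    For instance, a hedged sketch with the Azure CLI (resource names are placeholders, and flags can differ between extension versions; confirm with az stack-hci-vm create --help):
    PowerShell
    # Look up the custom location fronting the HCI host (names are illustrative)
    $clId = az customlocation show --name "hci-lab-location" `
        --resource-group "rg-hci-lab" --query id -o tsv

    # Create an Arc-managed VM on the HCI host
    az stack-hci-vm create `
        --name "lab-vm-01" `
        --resource-group "rg-hci-lab" `
        --custom-location $clId `
        --image "win2022-image" `
        --admin-username "labadmin"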
    Best Practices for a Virtual Lab Setup
    The best way to evaluate Azure Stack HCI without purchasing physical nodes is to use a nested virtual machine environment on a powerful workstation or server.
    Use Microsoft's Official Lab Scripts: Do not try to manually install and configure all components. Microsoft provides official scripts for deploying a full Azure Stack HCI environment within Hyper-V, often requiring 2 to 4 nested VMs for a complete cluster simulation.
    Hardware Requirements for the Host PC: Your physical host needs significant resources to run a nested lab effectively.
    CPU: Must support Nested Virtualisation (Intel VT-x with EPT or AMD-V with RVI).
    RAM: Minimum 32GB, but 64GB is strongly recommended to run 2-4 nodes with guest VMs.
    Storage: Fast SSD/NVMe is essential.
Validate Nested Virtualisation: Run the following on your physical Hyper-V host to confirm that your Azure Stack HCI VM has nested virtualisation enabled (the setting lives on the VM's processor object, not on the VM itself):
    PowerShell
    # Check whether virtualisation extensions are exposed to the guest
    Get-VMProcessor -VMName "YourHCI_VM_Name" |
        Select-Object VMName, ExposeVirtualizationExtensions
    If it returns False, enable it while the VM is powered off: Set-VMProcessor -VMName "YourHCI_VM_Name" -ExposeVirtualizationExtensions $true
    Network Simplicity: For a lab, follow the documented single-NIC or consolidated network model. Ensure the virtual network adapters are created and configured correctly on the physical host and mapped/renamed correctly within the guest HCI OS to align with the deployment script.
    You can save a lot of time by familiarising yourself with the automated deployment approach for virtual machines.

    DarkVesper
    Copper Contributor

    Hey VEROChad,

    You’re right to test this in a single-node lab before committing hardware. I hope this helps.

    This is what I've come across:

1. Single-node limitations: Earlier Azure Stack HCI releases required a minimum of two physical nodes for onboarding to Azure and full Arc integration. Recent builds (22H2 and later) do support single-node registration, but the eligibility checks still expect the hardware and network prerequisites to be met, which is why a generic Hyper-V VM often shows as "not eligible."


2. Hyper-V component error: Even if Hyper-V is active, Azure Stack HCI expects specific feature dependencies tied to the validated HCI build and drivers. Running it on a general Hyper-V VM often triggers this mismatch.


    3. Windows Admin Center and Subscription: A Pay-As-You-Go subscription isn’t required to test WAC, but you’ll hit feature limitations when connecting to Azure services. For full hybrid testing, link WAC to a valid Azure subscription (even a trial one).


Workarounds for lab validation:

    You can simulate parts of the environment using Azure Arc–enabled servers directly on Windows Server 2022/2025 (bypasses HCI hardware dependency).
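    If you go that route, onboarding a plain Windows Server VM is quick. A hedged sketch of the Connected Machine agent flow (the GUIDs and names are placeholders; the Azure portal's "Add server with Azure Arc" wizard generates the exact script for you):
    PowerShell
    # Download and install the Azure Connected Machine agent
    Invoke-WebRequest -Uri "https://aka.ms/AzureConnectedMachineAgent" `
        -OutFile "$env:TEMP\AzureConnectedMachineAgent.msi"
    Start-Process msiexec.exe -Wait `
        -ArgumentList "/i `"$env:TEMP\AzureConnectedMachineAgent.msi`" /qn"

    # Connect the server to Azure Arc (all values are placeholders)
    & "$env:ProgramFiles\AzureConnectedMachineAgent\azcmagent.exe" connect `
        --resource-group "rg-arc-lab" `
        --tenant-id "<tenant-guid>" `
        --subscription-id "<subscription-guid>" `
        --location "westeurope"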

    For IaC, Terraform + PowerShell will still work for provisioning, but Arc registration will remain limited.

    Alternatively, consider nested virtualization inside a 2-node Hyper-V cluster if your system resources allow—it’s closer to a true HCI test setup.

    If your goal is to validate management and automation concepts rather than production HA performance, the Arc-enabled servers route is usually the most practical.

    DarkVesper
