I was recently challenged with having to create technical demos for a Microsoft Ignite The Tour session called “Migrating to Windows Server 2019” (watch the recording here - awesome session). The demos for this talk centered around taking a bunch of Windows Server 2008 R2 systems running “core infrastructure services” (Active Directory, DHCP and File Servers) and moving those workloads over to systems running Windows Server 2019. This was simulating an on-prem environment that was going through a modernization process. I’ve been making technical demos for a long time now – so it should have been easy to pull off. Fire up my Hyper-V lab server and make some VMs.
Then it hit me. I no longer have a Hyper-V lab server kicking around that I can use as my demo platform. I recycled my old home lab when I changed jobs close to 5 years ago to go work in Azure Compute. My daily driver laptop was a little anemic on the RAM side to run 6-8 VMs, and I needed to take snapshots / checkpoints at various points in the build process in order to test and optimize the demo flow. What’s an IT Pro to do?
If you check the lower left side of my laptop stickers – you’ll see a quote “my other computer is an Azure datacenter”. Why not harness this on demand power and only pay for what I use? Let me share with you a recipe for what I built, how I securely access it and what I did to minimize costs.
- Azure Subscription. Be forewarned: this Hyper-V host machine will NOT fit inside a trial subscription. You might need to request a quota increase for the machine type and core count in the region you choose.
- Resource group: located in a region close to where you are. This will contain all the resources required for this setup and can be easily deleted if you so choose once complete.
- Suitable VM size that has enough cores, RAM and disk bandwidth to deliver the performance you need to host the guest machines. Go check out the Azure VM Sizes documentation for a complete list – I opted for a machine in the General Purpose Dsv3 series based on an estimated 64 GB of RAM (8 machines at 8 GB each) and at least 2 vCPUs per guest (16 cores) just for the VM load – plus the needed headroom for the host itself.
Note: this has to be one of the hyperthreaded “v3” machine sizes in order to do nested virtualization.
- A set of ISO files for Windows Server 2008 R2 and Windows Server 2019.
- To keep this simple – I created a VM with the interactive Azure Portal experience. You can follow the basic steps by using the “Create a Windows VM in the Azure Portal” document – with the following CHANGES.
- Create your own unique Resource Group name and choose a convenient location
- Change the SIZE of the vm to meet your performance needs. I chose a Standard D64s v3 (64 vcpus, 256 GiB memory)
- Choose your own Administrator account and password.
- DO NOT configure any inbound networking ports at this time
- On the Disk page, I created and attached a new managed disk of 1 TB in size based on premium SSD.
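If you would rather script the build than click through the portal, the same VM can be stood up with a couple of Azure CLI commands. This is just a rough sketch of the equivalent build – the resource group name, VM name, admin name and region here are placeholders for your own values:

```shell
# Resource group in a region close to you (names are placeholders)
az group create --name my-lab-rg --location westus2

# Windows Server 2019 host: D64s v3, a 1 TB premium SSD data disk,
# and NO inbound NSG rules opened at creation time
az vm create \
  --resource-group my-lab-rg \
  --name hvlab \
  --image Win2019Datacenter \
  --size Standard_D64s_v3 \
  --admin-username labadmin \
  --data-disk-sizes-gb 1024 \
  --storage-sku Premium_LRS \
  --nsg-rule NONE
```

The `--nsg-rule NONE` flag is the scripted version of not opening any inbound ports during creation.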
- Once provisioned and running – in the Azure portal I went to the overview page of the VM and clicked on the Connect button.
- Remember – there are NO open ports to the internet because I asked for none to be opened when the VM was created. To make accessing this VM more secure, I am going to enable Just-In-Time (JIT) access by clicking on the hyperlink above and clicking the button to enable it.
- Once enabled – the default policy is for port 3389 (RDP access) to be opened to a restricted IP address and only remain open for 3 hrs.
- Click on the Request Access button, once granted, download the RDP file, open it and logon to the remote Azure based VM with the credentials you specified during VM creation.
NOTE: I am using RDP with JIT because I will be copying files between my workstation at work/home into the VM in Azure using the RDP client.
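If you end up requesting access often, the JIT request itself can be scripted too. Here is a sketch using the Az.Security module’s Start-AzJitNetworkAccessPolicy cmdlet – the resource group, VM name and source IP are placeholders, and the parameter shape is worth double-checking against Get-Help before relying on it:

```powershell
# Ask the JIT policy to open 3389 from one source IP for 3 hours.
# Assumes the Az and Az.Security modules; names and IPs are placeholders.
$vm = Get-AzVM -ResourceGroupName "my-lab-rg" -Name "hvlab"
Start-AzJitNetworkAccessPolicy `
    -ResourceGroupName "my-lab-rg" `
    -Location $vm.Location `
    -Name "default" `
    -VirtualMachine @(@{
        id    = $vm.Id
        ports = @(@{
            number                     = 3389
            endTimeUtc                 = (Get-Date).ToUniversalTime().AddHours(3)
            allowedSourceAddressPrefix = @("203.0.113.25")  # your public IP
        })
    })
```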
- Once logged on to my remote “Hyper-V Lab Server” in Azure – I needed to enable the Hyper-V role and start doing some nested virtualization. From a PS prompt on the machine:
- Install-WindowsFeature -Name Hyper-V, RSAT-Hyper-V-Tools -Restart (the role install requires a reboot)
- Create a new Hyper-v switch for the demo environment
- New-VMSwitch -SwitchName "NatSwitch" -SwitchType Internal
- Get the ifIndex value of the host adapter for the new switch, named “vEthernet (NatSwitch)” (make note of it – mine was 14)
- Get-NetAdapter -Name "vEthernet (NatSwitch)"
- Set a static IP address on the new Hyper-V Switch interface (use the ifIndex value for InterfaceIndex)
- New-NetIPAddress -IPAddress 192.168.0.1 -prefixLength 24 -InterfaceIndex 14
- Setup NAT for the hyper-v switch interface
- New-NetNat -name MyDemoNAT -InternalIPInterfaceAddressPrefix 192.168.0.0/24
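Pulled together, the whole host configuration is a short script. This is just the same commands from the steps above in one place, run from an elevated PowerShell prompt – note the feature install reboots the box, so the networking half runs after the restart:

```powershell
# Enable Hyper-V plus the management tools - this reboots the host
Install-WindowsFeature -Name Hyper-V, RSAT-Hyper-V-Tools -Restart

# --- after the reboot ---

# Internal switch for the demo network
New-VMSwitch -SwitchName "NatSwitch" -SwitchType Internal

# The switch creates a host-side adapter; give it the gateway address
$ifIndex = (Get-NetAdapter -Name "vEthernet (NatSwitch)").ifIndex
New-NetIPAddress -IPAddress 192.168.0.1 -PrefixLength 24 -InterfaceIndex $ifIndex

# NAT the whole demo subnet out to the internet
New-NetNat -Name "MyDemoNAT" -InternalIPInterfaceAddressPrefix 192.168.0.0/24
```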
That basically does it.
You now have a box that does Hyper-V with management tools installed. It is the appropriate size to handle quick demo workloads and, since it’s Hyper-V, it supports multiple checkpoints! You made it more secure to access by enabling Just-In-Time (JIT) requests for remote VM connectivity. This JIT connectivity is audited and enabled for 3 hrs, after which time the JIT-controlled Network Security Group rule allowing RDP access will be deleted. You have also created a new Hyper-V switch on which you can now create nested VMs with NAT connectivity to the internet, allowing guest VMs to download updates, tools and other things. Oh yeah – the internet speeds from a datacenter are WONDERFUL.
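With the switch and NAT in place, standing up a demo guest and snapshotting it looks something like this – the VM name, disk paths and ISO location are placeholders for your own lab layout:

```powershell
# Create a guest wired to the NAT switch and boot it from the 2008 R2 ISO
New-VM -Name "DC01" -MemoryStartupBytes 8GB -Generation 1 `
       -NewVHDPath "F:\VMs\DC01.vhdx" -NewVHDSizeBytes 60GB `
       -SwitchName "NatSwitch"
Set-VMProcessor -VMName "DC01" -Count 2
Set-VMDvdDrive  -VMName "DC01" -Path "F:\ISO\WS2008R2.iso"
Start-VM -Name "DC01"

# Inside the guest, use a static IP in 192.168.0.0/24 with
# 192.168.0.1 (the switch address) as the default gateway.

# Checkpoint at a known-good point so a demo can be rewound in seconds
Checkpoint-VM -Name "DC01" -SnapshotName "FreshInstall"
```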
Bonus Section: Save Money and prevent accidental VM billing
Trust me – I have been there. I forget to spin down a VM in Azure after a demo and go home for the weekend. I then remember the following week that the system has been running all this time and not being used! Keep that from happening by setting up Auto-Shutdown.
I have configured auto-shutdown for 7 PM Pacific time, and I have it send a webhook to one of my Microsoft Teams channels as well as email me that it’s about to shut down.
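Auto-shutdown can also be configured from the Azure CLI instead of the portal blade. A sketch – note the time is specified in UTC (7 PM Pacific is 0200 or 0300 UTC depending on daylight saving), and the resource names, email and webhook URL are placeholders:

```shell
az vm auto-shutdown \
  --resource-group my-lab-rg \
  --name hvlab \
  --time 0300 \
  --email "me@example.com" \
  --webhook "https://example.com/my-teams-webhook"
```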
If you want to know how to configure Microsoft Teams to receive this WebHook (it’s on a per-channel basis) check out this article about it.
This box has served me well and for the most part has been inexpensive for me to run. It gets about 5-10 hrs of use on days when it is required – and besides minimal storage costs – I am only charged for VM running time.
Not a bad little solution to replace that land-locked Hyper-V Home server when you need something in a pinch, eh?