Azure Extended Zones bring Azure services closer to users to reduce latency for edge-adjacent workloads. In Azure Virtual Desktop (AVD), only the session host VMs are placed in the Extended Zone, while the AVD control plane/metadata remains in the parent region. The Perth Extended Zone was announced in Dec 2024, entered Public Preview in mid‑2025, and reached General Availability (GA) in Dec 2025. This article documents the first production AVD deployment on the Perth Extended Zone. What makes this deployment “beyond the docs” isn’t just AVD in an Extended Zone. It’s how we delivered an enterprise-grade solution: image engineering via Azure Image Builder + Azure Compute Gallery, Extended Zone replication using Managed Identity + REST, a private-only hub-and-spoke with custom DNS forwarders, and user-driven cost control without Azure portal access.
Introduction
Perth is one of the most geographically isolated major cities globally. For performance‑sensitive, graphics‑heavy engineering workloads, user experience can degrade significantly when desktops are hosted far from the user.
For workloads such as subsurface modelling and GPU‑intensive analysis, reducing latency is critical not just for user experience but also for productivity. This makes Perth an ideal candidate for Azure Extended Zone deployments, where compute resources can operate closer to the workload users while still integrating with Azure’s regional services.
Architecture at a Glance
- Host pool type: Personal (persistent, one VM per user, 'Automatic' assignment)
- Session hosts: NVadsA10 v5 VMs in the Perth Extended Zone — GPU-backed for graphically intensive applications
- Identity: Hybrid — domain-joined to on-premises Active Directory, Entra ID synced
- Access model: Private-only — no public IPs; AVD Private Link + Private Endpoints across all PaaS services
- Network topology: Hub-and-spoke with custom DNS forwarders for FQDN resolution
- Image lifecycle: AIB → Azure Compute Gallery (Australia East) → Replication to Perth Extended Zone
- Deployment automation: GitHub Actions for VM provisioning, domain join, and host pool registration
- Cost control: User-initiated VM deallocation via a "Stop My VDI" desktop shortcut — powered by IMDS and Azure Automation
- Role assignments: Governed via Saviynt identity governance workflows (RBAC requests/approvals)
Why Personal Host Pools — and Why No FSLogix
GPU‑accelerated subsurface applications are often per-seat licensed, stateful, and latency sensitive. A pooled host pool model can introduce variability (contention, session density, application behavior).
Personal host pools eliminate that: each engineer owns one VM/VDI. Applications are installed on that VM, and user data is handled separately on a mounted share. When the user finishes their work, they deallocate the VM; when they reconnect, Start VM on Connect brings the VM back automatically.
FSLogix was intentionally not part of this design. In a persistent personal host pool, profile containers add complexity without delivering meaningful benefits for the workload patterns involved.
Image Engineering
During the early phase of the deployment, a simpler snapshot‑based workflow was briefly used to create interim images while the Extended Zone environment and deployment pipeline were being validated. Once the approach stabilized, the deployment aligned with the organization's existing Azure Image Builder (AIB) image engineering process: images are produced with AIB and published to an Azure Compute Gallery in Australia East. From there, each image version is replicated to the Perth Extended Zone, allowing session hosts deployed in the zone to consume the same centrally governed image while benefiting from low‑latency proximity to users.
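For pipelines that drive the replication step directly, the Extended Zone target can also be expressed in the image version's publishing profile. Below is a hedged Python sketch that only builds the ARM request body; the `targetExtendedLocations` field name and the `perth` edge-zone identifier are assumptions to confirm against the Compute REST API version enrolled in your subscription.

```python
import json

# Illustrative helper (not the production pipeline): builds the request body
# that adds an Extended Zone replica target to a gallery image version via
# publishingProfile.targetExtendedLocations. "perth" is a placeholder for the
# actual edge-zone name in your Extended Zone enrollment.
def build_replication_body(parent_region: str, edge_zone: str, replicas: int = 1) -> dict:
    return {
        "properties": {
            "publishingProfile": {
                "targetRegions": [
                    {"name": parent_region, "regionalReplicaCount": 1}
                ],
                "targetExtendedLocations": [
                    {
                        "name": parent_region,
                        "extendedLocation": {"name": edge_zone, "type": "EdgeZone"},
                        "extendedLocationReplicaCount": replicas,
                    }
                ],
            }
        }
    }

print(json.dumps(build_replication_body("australiaeast", "perth"), indent=2))
```

In practice this body would be sent as part of a PUT/PATCH on the gallery image version resource, authenticated the same way as the gallery identity call shown later in this article.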
Azure Image Builder template (example)
Azure Image Builder templates are maintained as infrastructure as code (Bicep/ARM). These templates define:
- the base platform image
- image customizations (PowerShell or file operations)
- the target Azure Compute Gallery where the resulting image version will be published
Below is a simplified representation of the AIB template structure used in this environment. This is intentionally a skeleton — the real template includes additional customization steps and application packaging.
param imageTemplateName string
param location string = 'australiaeast'
param subnetId string
param uamiId string
param galleryImageId string
param artifactsBaseUri string // e.g., https://<storage>.blob.core.windows.net/<container>
resource imageTemplate 'Microsoft.VirtualMachineImages/imageTemplates@2022-02-14' = {
  name: imageTemplateName
  location: location
  identity: {
    type: 'UserAssigned'
    userAssignedIdentities: {
      '${uamiId}': {}
    }
  }
  properties: {
    vmProfile: {
      vmSize: 'Standard_D4s_v5'
      vnetConfig: {
        subnetId: subnetId
      }
      // Optional: identities available inside the build VM
      userAssignedIdentities: [
        uamiId
      ]
    }
    source: {
      type: 'PlatformImage'
      publisher: 'MicrosoftWindowsDesktop'
      offer: 'windows-11'
      sku: 'win11-24h2-ent'
      version: 'latest'
    }
    customize: [
      // 1) Baseline prerequisites / org compliance
      {
        type: 'PowerShell'
        name: 'BaselinePrereqs'
        runElevated: true
        runAsSystem: true
        inline: [
          'Write-Output "Apply baseline compliance settings and prerequisites"'
          'Enable-WindowsOptionalFeature -Online -FeatureName NetFx3 -All -NoRestart'
          'Disable-WindowsOptionalFeature -Online -FeatureName SMB1Protocol -NoRestart'
        ]
      }
      // 2) Removes selected built-in AppX packages from the image build VM and deprovisions them so they won't be installed for new user profiles
      {
        type: 'File'
        name: 'DownloadAppxRemovalScript'
        sourceUri: '${artifactsBaseUri}/scripts/Remove-Appx_Packages.zip'
        destination: 'C:\\Temp\\Remove-Appx_Packages.zip'
      }
      {
        type: 'PowerShell'
        name: 'RemoveAppxPackages'
        runElevated: true
        runAsSystem: true
        inline: [
          'Expand-Archive C:\\Temp\\Remove-Appx_Packages.zip -DestinationPath C:\\Temp\\RemoveAppx -Force'
          'PowerShell.exe -ExecutionPolicy Bypass -File C:\\Temp\\RemoveAppx\\RemoveAppx.ps1'
        ]
      }
      // 3) Install core enterprise apps (representative)
      {
        type: 'PowerShell'
        name: 'InstallCoreApps'
        runElevated: true
        runAsSystem: true
        inline: [
          'Write-Output "Install core enterprise applications (Office, Teams, etc.)"'
          // In practice this calls your internal packaging method
        ]
      }
      // 4) Install workload-specific apps (subsurface)
      {
        type: 'PowerShell'
        name: 'InstallWorkloadApps'
        runElevated: true
        runAsSystem: true
        inline: [
          'Write-Output "Install subsurface workload applications and configurations"'
        ]
      }
      // 5) Cleanup / finalize (optional tattooing/versioning)
      {
        type: 'PowerShell'
        name: 'Cleanup'
        runElevated: true
        runAsSystem: true
        inline: [
          'Write-Output "Cleanup temp files and finalize image"'
          'Remove-Item C:\\Temp\\* -Recurse -Force -ErrorAction SilentlyContinue'
        ]
      }
    ]
    distribute: [
      {
        type: 'SharedImage'
        runOutputName: 'sigOutput'
        galleryImageId: galleryImageId
        replicationRegions: [
          'australiaeast'
        ]
      }
    ]
  }
}
Build execution
Once the template is deployed, the image build is triggered using Azure PowerShell. A new image version is produced inside the Azure Compute Gallery.
Example:
New-AzResourceGroupDeployment `
    -ResourceGroupName <imageBuilderResourceGroup> `
    -TemplateFile <imageTemplateFile> `
    -ImageTemplateName <templateName> `
    -Location australiaeast

Start-AzImageBuilderTemplate `
    -ResourceGroupName <imageBuilderResourceGroup> `
    -ImageTemplateName <templateName> `
    -NoWait
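Because `-NoWait` returns before the build finishes, pipelines typically poll the template's last run status until it reports success. The Python sketch below is illustrative only: it builds the polling URL and reads `lastRunStatus.runState` from a template resource (the real GET would carry a bearer token; the subscription, resource group, and template names are placeholders).

```python
# Illustrative polling helpers for an Azure Image Builder template's build
# status. The actual HTTP GET (with Authorization header) is omitted.
API_VERSION = "2022-02-14"

def build_status_url(subscription_id: str, resource_group: str, template_name: str) -> str:
    # ARM resource URL for the image template; GET returns the resource,
    # including properties.lastRunStatus
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.VirtualMachineImages"
        f"/imageTemplates/{template_name}"
        f"?api-version={API_VERSION}"
    )

def run_state(template_resource: dict) -> str:
    # Build status lives under properties.lastRunStatus.runState
    # (e.g. Running, Succeeded, Failed)
    return template_resource["properties"]["lastRunStatus"]["runState"]

print(build_status_url("<sub-id>", "<rg-name>", "<template-name>"))
```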
When the build completes successfully, a new image version is available in the gallery, ready to be consumed or replicated. The deployment pipeline then uses that gallery image as the source for the Perth Extended Zone.
Key Extended Zone constraint: image builds occur in the parent region
For AVD in the Perth Extended Zone, the practical and supported deployment pattern is:
Build + publish image in Australia East → replicate to Perth Extended Zone → deploy session host VMs in Perth
This works cleanly because AVD Extended Zone deployments keep control‑plane components in the parent region, and image distribution and replication are naturally anchored there as well. In practice, the image engineering and governance layer remains in the parent region, while the execution layer (the session hosts) runs in the Extended Zone.
Replicating to Perth Extended Zone via Managed Identity + REST
Once the image version exists in Azure Compute Gallery in Australia East, the next step is replicating that image to the Perth Extended Zone so that session hosts deployed there can use it.
This step requires a one‑time prerequisite configuration on the gallery.
Pre-Requisites (One-Time Setup)
Azure Compute Galleries must have a User-Assigned Managed Identity associated before images can replicate to Extended Zones. This is because the gallery needs to verify that the target subscription is enrolled for the Extended Zone — and this verification is performed via the managed identity.
Because this configuration cannot currently be completed through the Azure portal, the association is performed using Azure REST APIs. In this deployment, Postman was used to execute the request and validate the configuration.
⚠️ This is a one-time setup per gallery. Once completed, all future image versions created via the pipeline will automatically replicate to Perth without repeating these steps.
Step 1 — Setup Managed Identity
az identity create \
--name "mi-gallery-perth-replication" \
--resource-group "<RG_NAME>" \
--location "australiaeast"
az role assignment create \
--assignee <IDENTITY_PRINCIPAL_ID> \
--role "Reader" \
--scope "/subscriptions/<Subscription_ID>"
Step 2 — Generate Access Token
Before making the REST API call, generate an Azure access token via CLI:
az account get-access-token --resource https://management.azure.com/
Copy the accessToken value from the JSON output — this will be used in Postman.
Step 3 — Configure Postman Request
Method: PUT
URL:
https://australiaeast.management.azure.com/subscriptions/<Subscription_ID>/resourceGroups/<RG_NAME>/providers/Microsoft.Compute/galleries/<Gallery_name>?api-version=2023-07-03
Authorization — Option A (OAuth 2.0):
| Field | Value |
|---|---|
| Auth Type | OAuth 2.0 |
| Grant Type | Authorization Code |
| Auth URL | https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/authorize |
| Access Token URL | https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token |
| Client ID | Your Azure App Registration Client ID |
| Client Secret | Your Azure App Registration Client Secret |
| Scope | https://management.azure.com/.default |
| Header Prefix | Bearer |
Authorization — Option B (Simpler — Bearer Token):
- Auth Type: Bearer Token
- Token: Paste the accessToken value from the CLI output above
Headers:
| Key | Value |
|---|---|
| Content-Type | application/json |
| Authorization | Bearer {access_token} (auto-added if using OAuth 2.0) |
Request Body (raw → JSON):
{
"location": "australiaeast",
"identity": {
"type": "UserAssigned",
"userAssignedIdentities": {
"/subscriptions/<Subscription_ID>/resourceGroups/<RG_NAME>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<managed-identity-name>": {}
}
}
}
Step 4 — Execute and Verify
Click Send in Postman. A successful response will return 200 OK or 201 Created with "provisioningState": "Succeeded" in the response body.
Verify via Azure Portal: Navigate to Azure Compute Gallery → Identity → confirm the managed identity is listed.
Verify via CLI:
az sig show --resource-group <RG_NAME> --gallery-name <Gallery_name> --query identity
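For teams that prefer scripting over Postman, the same PUT can be issued with a short Python script. This is a sketch under assumptions: it uses the global ARM endpoint rather than the regional URL shown above, and the subscription, resource group, gallery, identity name, and token are placeholders to fill in from `az account get-access-token` and your environment.

```python
import json
import urllib.request

def build_identity_body(sub_id: str, rg: str, mi_name: str) -> dict:
    # Same request body as the Postman example: associate the user-assigned
    # managed identity with the gallery
    mi_id = (
        f"/subscriptions/{sub_id}/resourceGroups/{rg}"
        f"/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{mi_name}"
    )
    return {
        "location": "australiaeast",
        "identity": {"type": "UserAssigned", "userAssignedIdentities": {mi_id: {}}},
    }

def put_gallery_identity(sub_id: str, rg: str, gallery: str, mi_name: str, token: str) -> dict:
    # PUT on the gallery resource; expects 200/201 with
    # provisioningState: Succeeded on success
    url = (
        f"https://management.azure.com/subscriptions/{sub_id}"
        f"/resourceGroups/{rg}/providers/Microsoft.Compute/galleries/{gallery}"
        "?api-version=2023-07-03"
    )
    req = urllib.request.Request(
        url,
        data=json.dumps(build_identity_body(sub_id, rg, mi_name)).encode(),
        method="PUT",
        headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # network call; not executed here
        return json.load(resp)

print(json.dumps(build_identity_body("<sub-id>", "<rg-name>", "<mi-name>"), indent=2))
```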
Cost Control: User-initiated VM Deallocation via Desktop Shortcut
With personal host pools and no autoscale, idle GPU VMs are a real cost risk. NVadsA10 v5 instances are not inexpensive, and a user who disconnects without logging off leaves a running VM accumulating charges.
The end users in this environment are subsurface engineers — domain specialists who are not expected to be familiar with the Azure Portal. Asking them to manually deallocate their VM via the portal after each session is not a realistic expectation. To remove this dependency entirely, a desktop shortcut labelled "Stop My VDI" is placed on each session host. When clicked, it executes a PowerShell script that triggers VM deallocation automatically — no portal access required. This is configured as part of the VM setup at deployment time.
How It Works
- User finishes their session and clicks the "Stop My VDI" shortcut on the desktop
- The shortcut executes a PowerShell script (not visible to the end users)
- The script queries IMDS to dynamically retrieve the VM's own name and resource group — no hardcoded values
- It calls an Azure Automation Account webhook, passing the VM name and resource group as the payload
- The Automation Account runbook receives the webhook call, connects using its managed identity, and deallocates the VM
- When the user reconnects next time, Start VM on Connect automatically powers the VM back on
Script on VDI to initiate deallocation
$WebhookURI = "WEBHOOK_URL_PLACEHOLDER"

# Pull VM name and RG directly from Azure Instance Metadata Service (IMDS)
$metadata = Invoke-RestMethod -Uri "http://169.254.169.254/metadata/instance?api-version=2021-02-01" -Headers @{Metadata="true"}
$vmName = $metadata.compute.name
$rgName = $metadata.compute.resourceGroupName

$body = @{
    VMName        = $vmName
    ResourceGroup = $rgName
} | ConvertTo-Json -Compress

try {
    Invoke-RestMethod -Uri $WebhookURI -Method Post -Body $body -ContentType "application/json"
    Write-Host "Deallocate request submitted for $vmName. You will be disconnected shortly."
}
catch {
    Write-Host "Webhook call failed: $($_.Exception.Message)"
    if ($_.Exception.Response -and $_.Exception.Response.StatusCode) {
        Write-Host "HTTP Status: $($_.Exception.Response.StatusCode.value__)"
    }
}
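The payload contract between the shortcut script and the runbook can be exercised off-VM with a small Python mock; the IMDS-shaped dictionary and VM/resource group names below are made up purely for illustration.

```python
import json

# Mock of the script's payload logic: given an IMDS-shaped response,
# produce the compact JSON body the webhook expects. Field names mirror
# the real IMDS compute object (name, resourceGroupName).
def build_payload(imds_response: dict) -> str:
    compute = imds_response["compute"]
    return json.dumps(
        {"VMName": compute["name"], "ResourceGroup": compute["resourceGroupName"]},
        separators=(",", ":"),
    )

sample = {"compute": {"name": "vdi-perth-001", "resourceGroupName": "rg-avd-perth"}}
print(build_payload(sample))
# prints {"VMName":"vdi-perth-001","ResourceGroup":"rg-avd-perth"}
```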
💡 The IMDS endpoint (169.254.169.254) is a non-routable, link-local address accessible only from within the VM, so no firewall rules or outbound network paths need to be opened. This makes it ideal in a private-only architecture.
💡 The webhook URI is stored as a secret in Key Vault and retrieved during deployment.
Automation Account Runbook
The runbook receives the webhook payload, extracts the VM name and resource group, connects using the Automation Account's managed identity, and deallocates the VM:
param (
    [object] $WebhookData
)

# The webhook payload arrives as a JSON string in RequestBody
$body = ConvertFrom-Json $WebhookData.RequestBody
$vmName = $body.VMName
$rgName = $body.ResourceGroup

# Authenticate with the Automation Account's system-assigned managed identity
Connect-AzAccount -Identity | Out-Null

Write-Output "Deallocating VM: $vmName in RG: $rgName"
Stop-AzVM -Name $vmName -ResourceGroupName $rgName -Force
📌 Connect-AzAccount -Identity uses the Automation Account's system-assigned managed identity — no credentials, no service principals, no secrets to manage.
📌 Start VM on Connect is enabled on the host pool, so when the user reconnects after deallocation, Azure automatically powers the VM back on before establishing the session. (Note: This requires the service principal with the name "Azure Virtual Desktop" to be assigned "Desktop Virtualization Power On Contributor" Role)
Alternative: Logoff-Based Trigger
For environments where a desktop shortcut is not suitable, the same script can be deployed as a logoff script via Group Policy, so deallocation is triggered automatically when the user logs off, without requiring any manual action.
The high-level steps to configure this:
- Save the script (Deallocate-MyVM.ps1) to a network share or local path accessible from all session hosts
- Open Group Policy Management Console (GPMC) and create or edit a GPO linked to the OU containing the session host computer objects
- Navigate to User Configuration → Windows Settings → Scripts → Logoff (for machine-level triggers, Computer Configuration → Windows Settings → Scripts → Shutdown is the equivalent)
- Add the PowerShell script as a logoff script, pointing to the saved script path
- Ensure the GPO is applied to the correct Organizational Unit (OU) containing the AVD session hosts
- Test by logging off a session and confirming the Automation Account runbook is triggered and the VM deallocates successfully
📌 The script logic and Automation Account runbook remain identical to the desktop shortcut approach — only the trigger mechanism changes. No changes to the webhook or runbook are needed.
Closing Thoughts
Azure Extended Zones are a strong fit for geographically isolated, latency-sensitive workloads — and Perth is at the frontier of that story. The patterns here (AIB + Compute Gallery publishing, the Extended Zone replication prerequisite via Managed Identity + Postman/REST, private-only hub-spoke DNS, and user-driven cost control) represent the practical engineering behind making an Extended Zone deployment production-ready.