Azure Infrastructure Blog

Synchronizing Azure Storage Across Isolated Private Endpoint Networks

deepakkumarv, Microsoft
Mar 24, 2026

A repatriation‑oriented approach for synchronizing Azure Storage Accounts across isolated private endpoint networks without public access, shared DNS, or credential‑based authentication.

Introduction 

In Azure repatriation programs, enterprises often need to migrate large volumes of blob data from a source Azure environment to a target Azure environment under strict security and network isolation constraints. 

A typical repatriation setup includes fully isolated source and target environments, each protected by private endpoints and independent DNS configurations. 

Typical Repatriation Architecture 

Source Environment 

  • Azure Storage Account in Region A 
  • Source subscription in Region A 
  • Public network access disabled 
  • Private Endpoint configured in a source hub-spoke network 
  • Private DNS zone scoped only to the source network 

Target Environment 

  • Azure Storage Account in Region B 
  • Target subscription in Region B 
  • Public network access disabled 
  • Private Endpoint configured in a target hub-spoke network 
  • Independent Private DNS zone (no DNS sharing with source) 

 There is no shared VNet, no shared Private DNS zone, and no direct private connectivity between the two environments. 

Problem Statement

Azure Storage supports server‑side copy operations under many conditions. However, when both the source and destination storage accounts are protected by private endpoints and deployed in isolated virtual networks without shared DNS resolution or network connectivity, server‑side copy operations are not supported.

In such cases, copy attempts commonly fail with the following error:

403 – CannotVerifyCopySource 
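For illustration, a direct server-side copy attempt in this topology might look like the following (account and container names are placeholders). Because the destination storage service cannot reach the source account over its private endpoint, the operation is rejected:

```powershell
# Illustration only: a server-side copy between two private-endpoint-protected
# accounts in isolated networks. Account and container names are placeholders.
azcopy copy `
    "https://sourceaccount.blob.core.windows.net/container1" `
    "https://targetaccount.blob.core.windows.net/container1" `
    --recursive

# Fails with: 403 - CannotVerifyCopySource
# The destination service cannot verify or reach the locked-down source account.
```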

This presents a challenge for organizations that need to migrate data securely without:

  • Enabling public network access 
  • Using Shared Access Signatures (SAS) or storage account keys
  • Re-architecting or modifying the source environment 
  • Relaxing established enterprise network isolation boundaries 

Repatriation Optimized Solution Pattern (Recommended) 

Core Design Principle: Anchor the data movement in the target environment. 

Rather than attempting direct storage‑to‑storage copy across isolated networks, this pattern executes the data transfer from a controlled Azure Virtual Machine (VM) deployed in the target environment, which acts as the authorized client for both the source and target storage accounts.

 

Figure: Secure Azure Blob data synchronization across isolated regions and private endpoint networks

Execution Flow

  1. An Azure VM is deployed in the target subscription.
  2. The VM resides in either:
    • The same virtual network as the target storage private endpoint, or
    • A peered virtual network with access to the target private endpoint
  3. Private DNS A records are created in the target private DNS zone for:
    • The source storage account blob endpoint
    • The target storage account blob endpoint
  4. AzCopy runs on the VM using Microsoft Entra ID authentication via Managed Identity.
  5. The VM reads data from the source storage account.
  6. The VM writes data to the target storage account.
  7. All data transfer occurs over private networking, without traversing public endpoints.

DNS Configuration (Critical for Success) 

Because source and target environments use separate private DNS zones, DNS resolution must be explicitly aligned. 

Required configuration within the target private DNS zone: 

  • Create A records for:  
    • sourceaccount.blob.core.windows.net 
    • targetaccount.blob.core.windows.net 
  • Map each record to the private IP address of its corresponding private endpoint
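The records above can be created with the Az.PrivateDns cmdlets. A minimal sketch, assuming the target environment hosts a `privatelink.blob.core.windows.net` private DNS zone linked to the VM's virtual network; the resource group name, account names, and IP addresses are placeholders for your own values:

```powershell
# Sketch: create A records in the target private DNS zone so both blob
# endpoints resolve to private IPs. Zone RG, record names, and IPs are placeholders.
$zoneRG   = "target-dns-rg"
$zoneName = "privatelink.blob.core.windows.net"

# Source storage account -> private IP of its corresponding private endpoint
New-AzPrivateDnsRecordSet -ResourceGroupName $zoneRG -ZoneName $zoneName `
    -Name "sourceaccount" -RecordType A -Ttl 3600 `
    -PrivateDnsRecords (New-AzPrivateDnsRecordConfig -Ipv4Address "10.10.1.4")

# Target storage account -> private IP of its private endpoint
New-AzPrivateDnsRecordSet -ResourceGroupName $zoneRG -ZoneName $zoneName `
    -Name "targetaccount" -RecordType A -Ttl 3600 `
    -PrivateDnsRecords (New-AzPrivateDnsRecordConfig -Ipv4Address "10.10.1.5")
```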

This configuration ensures that:

  • AzCopy resolves both storage endpoints to private IP addresses 
  • Public endpoint resolution is avoided
  • Data transfer remains compliant with network isolation and security policies 

⚠️ Without this DNS alignment, AzCopy authentication and transfer will fail, even when network connectivity and role assignments are correctly configured. 
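Before running AzCopy, the alignment can be sanity-checked from the target VM with built-in Windows cmdlets. A sketch; account names are placeholders, and a public IP in the output indicates the private DNS records are missing or the zone is not linked to the VNet:

```powershell
# Sketch: confirm both blob endpoints resolve to private IPs and that
# port 443 is reachable from the target VM. Account names are placeholders.
foreach ($account in @("sourceaccount", "targetaccount")) {
    $fqdn = "$account.blob.core.windows.net"

    # A-record resolution; should return the private endpoint IP
    $ip = (Resolve-DnsName -Name $fqdn -Type A).IPAddress | Select-Object -First 1
    Write-Host "$fqdn resolves to $ip"

    # TCP reachability over HTTPS
    $tcp = Test-NetConnection -ComputerName $fqdn -Port 443
    Write-Host "Port 443 reachable: $($tcp.TcpTestSucceeded)"
}
```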

 

Identity & Access Configuration 

The Azure VM uses a Managed Identity for authentication. 

Required Role Assignments 

On Source Storage Account 

  • Storage Blob Data Reader 

On Target Storage Account 

  • Storage Blob Data Contributor 

These assignments provide: 

  • Read-only access to source data
  • Write access to the destination
  • Authentication without embedding credentials, keys, or secrets in scripts

AzCopy performs data plane operations using Microsoft Entra ID–based RBAC, without accessing storage account keys.
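The role assignments can be scripted with Az PowerShell. A sketch, assuming a system-assigned managed identity on the VM; the VM name, resource groups, subscription IDs, and account names are placeholders matching the sample pairs used in the scripts below:

```powershell
# Sketch: grant the VM's system-assigned identity the required data-plane roles.
# All resource names and subscription IDs are placeholders.
$principalId = (Get-AzVM -ResourceGroupName "target-vm-rg" -Name "repatriation-vm").Identity.PrincipalId

# Read-only access on the source storage account
New-AzRoleAssignment -ObjectId $principalId `
    -RoleDefinitionName "Storage Blob Data Reader" `
    -Scope "/subscriptions/<SourceSubscriptionId>/resourceGroups/SourceResourceGroup/providers/Microsoft.Storage/storageAccounts/SourceStorageAccountName"

# Write access on the destination storage account
New-AzRoleAssignment -ObjectId $principalId `
    -RoleDefinitionName "Storage Blob Data Contributor" `
    -Scope "/subscriptions/<DestinationSubscriptionId>/resourceGroups/DestinationResourceGroup/providers/Microsoft.Storage/storageAccounts/DestinationStorageAccountName"
```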

Storage Sync Script for Repatriation 

Script Overview  

The PowerShell script performs the following actions:  

  1. Authenticates using the VM's Managed Identity 
  2. Iterates through defined source–destination storage account pairs  
  3. Enumerates all containers in the source storage account  
  4. Creates missing containers in the destination storage account  
  5. Executes azcopy sync for each container  
  6. Logs execution results and handles errors without terminating the entire process

Prerequisites

Ensure the following prerequisites are met before execution:

  • Azure VM with either system-assigned or user-assigned Managed Identity
  • AzCopy installed on the VM
  • Azure PowerShell module installed on the VM
  • VM connected to the same VNet, or a peered VNet, as the storage private endpoints
  • DNS resolution set up for private blob endpoints

PowerShell Scripts

Note: The following code snippets are provided as examples only and may need to be adapted to match your environment, subscriptions, and naming standards.

Script 1 - Package Installations - Az and AzCopy Modules 

# Ensure script runs with admin privileges
if (-not ([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()
    ).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator"))
{
    Write-Error "Please run PowerShell as Administrator."
    exit
}

# Enforce TLS 1.2 (recommended for PSGallery)
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

# -------------------------------
# Check and Install Az module
# -------------------------------
if (-not (Get-Module -ListAvailable -Name Az)) {
    Write-Output "Az module not found. Installing Az PowerShell module..."
    Install-Module -Name Az -Repository PSGallery -Force -AllowClobber
}
else {
    Write-Output "Az module already installed. Skipping installation."
}

# Verify Az module
Write-Output "Verifying Az module installation..."
Import-Module Az
Get-Module Az -ListAvailable | Select-Object Name, Version

# -------------------------------
# Check & Install AzCopy module
# -------------------------------
Write-Host "Checking AzCopy installation..." -ForegroundColor Cyan

# 1. Check if azcopy is already available in PATH
if (Get-Command azcopy -ErrorAction SilentlyContinue) {
    Write-Host "AzCopy already installed and available in PATH." -ForegroundColor Green
    azcopy --version
    return
}

# 2. Check standard install location
$installPath = Join-Path $env:ProgramFiles "AzCopy"
$targetExe   = Join-Path $installPath "azcopy.exe"

if (Test-Path $targetExe) {
    Write-Host "AzCopy found at $targetExe" -ForegroundColor Green
}
else {
    Write-Host "AzCopy not found. Downloading and installing..." -ForegroundColor Yellow

    # Download
    $azCopyUrl    = "https://aka.ms/downloadazcopy-v10-windows"
    $zipPath      = Join-Path $env:TEMP "azcopy.zip"
    $extractPath  = Join-Path $env:TEMP "azcopy_extract"

    Invoke-WebRequest -Uri $azCopyUrl -OutFile $zipPath

    if (Test-Path $extractPath) {
        Remove-Item $extractPath -Recurse -Force
    }

    Expand-Archive -Path $zipPath -DestinationPath $extractPath -Force

    # Find azcopy.exe inside extracted folder
    $foundExe = Get-ChildItem -Path $extractPath -Recurse -Filter "azcopy.exe" |
                Select-Object -First 1 -ExpandProperty FullName

    if (-not $foundExe) {
        throw "azcopy.exe not found after extraction."
    }

    # Copy to Program Files\AzCopy
    New-Item -ItemType Directory -Path $installPath -Force | Out-Null
    Copy-Item -Path $foundExe -Destination $targetExe -Force
}

# 3. Add AzCopy to Machine PATH (only if missing)
Write-Host "Ensuring AzCopy is added to PATH..." -ForegroundColor Cyan

$machinePath = [Environment]::GetEnvironmentVariable("Path", "Machine")
if ($machinePath -notlike "*$installPath*") {
    [Environment]::SetEnvironmentVariable(
        "Path",
        "$machinePath;$installPath",
        [EnvironmentVariableTarget]::Machine
    )
}

# Update current session PATH so azcopy works immediately
$env:Path = "$env:Path;$installPath"

# 4. Verify installation
Write-Host "Verifying AzCopy installation..." -ForegroundColor Cyan
azcopy --version

Script 2 - Azure Storage Account Blob Synchronization

# Login using the VM's system-assigned managed identity
Connect-AzAccount -Identity

# Use the following instead if a user-assigned managed identity is attached to the VM
# $clientId = "<UserAssignedIdentityClientId>"
# Connect-AzAccount -Identity -AccountId $clientId
 
# Define source and destination account pairs with subscription IDs

$storagePairs = @(

    @{ SourceAccount = "SourceStorageAccountName"; SourceRG = "SourceResourceGroup"; SourceSub = "SourceSubscriptionId";

       DestAccount = "DestinationStorageAccountName"; DestRG = "DestinationResourceGroup"; DestSub = "DestinationSubscriptionId" }
 
    # Add more pairs as needed

)
 
foreach ($pair in $storagePairs) {
    $sourceAccount = $pair.SourceAccount
    $sourceRG = $pair.SourceRG
    $sourceSub = $pair.SourceSub

    $destAccount = $pair.DestAccount
    $destRG = $pair.DestRG
    $destSub = $pair.DestSub

    Write-Host "`nProcessing pair: $sourceAccount ($sourceSub) -> $destAccount ($destSub)"

    try {
        # Set context to the source subscription and create an OAuth storage context
        Set-AzContext -SubscriptionId $sourceSub
        $sourceContext = New-AzStorageContext -StorageAccountName $sourceAccount -UseConnectedAccount

        # Set context to the destination subscription and create an OAuth storage context
        Set-AzContext -SubscriptionId $destSub
        $destContext = New-AzStorageContext -StorageAccountName $destAccount -UseConnectedAccount

        # Get all containers from the source account
        $containers = Get-AzStorageContainer -Context $sourceContext

        foreach ($container in $containers) {
            $containerName = $container.Name
            Write-Host "`nSyncing container: $containerName"

            try {
                # Check if destination container exists
                $destContainer = Get-AzStorageContainer -Name $containerName -Context $destContext -ErrorAction SilentlyContinue

                if (-not $destContainer) {
                    New-AzStorageContainer -Name $containerName -Context $destContext | Out-Null
                    Write-Host "Created destination container: $containerName"
                }

                # Build source and destination URLs
                $sourceUrl = "https://$sourceAccount.blob.core.windows.net/$containerName"
                $destUrl = "https://$destAccount.blob.core.windows.net/$containerName"

                # Authenticate AzCopy with the VM's managed identity
                azcopy login --identity

                Write-Host "Login successful"
                Write-Host "Sync started"
                azcopy sync $sourceUrl $destUrl --recursive=true --compare-hash=MD5 --include-directory-stub=true
                Write-Host "Sync completed for container: $containerName"
            } catch {
                Write-Error "Error syncing container '$containerName': $_"
            }
        }
    } catch {
        Write-Error "Error processing storage pair $sourceAccount -> ${destAccount}: $_"
    }
}

Script 3 - Post Validation of Azure Storage Account Blob Synchronization

# =====================================================================
# POST-AZSYNC VALIDATION (PowerShell-only, NO Azure CLI)
# - Uses VM Managed Identity (Connect-AzAccount -Identity)
# - Validates ALL containers: folders, blob count, total bytes
# - Includes 0-byte files and 0-byte folder stubs
# - Exports CSV
# =====================================================================

# ------------------ CONFIG ------------------
$SrcStorageAccount = "SourceStorageAccountName"
$DstStorageAccount = "DestinationStorageAccountName"

# Output folder
$OutDir = ".\PostAzSync-Outputfile"
New-Item -ItemType Directory -Path $OutDir -Force | Out-Null

# Optional: restrict to specific container names
# Example: $OnlyTheseContainers = @("container1","container2")
$OnlyTheseContainers = @()

# Blob listing page size (max allowed depends on service; 5000 is safe)
$PageSize = 5000

# ------------------ AUTH (VM Managed Identity) ------------------
# Uses VM system-assigned identity
Connect-AzAccount -Identity | Out-Null

# ------------------ CONTEXTS (OAuth) ------------------

$SrcCtx = New-AzStorageContext -StorageAccountName $SrcStorageAccount -UseConnectedAccount
$DstCtx = New-AzStorageContext -StorageAccountName $DstStorageAccount -UseConnectedAccount

# ------------------ HELPERS ------------------
function Get-Containers {
    param([Parameter(Mandatory=$true)]$Ctx)

    return (Get-AzStorageContainer -Context $Ctx | Select-Object -ExpandProperty Name | Sort-Object -Unique)
}

function Get-AllBlobsPaged {
    param(
        [Parameter(Mandatory=$true)][string]$Container,
        [Parameter(Mandatory=$true)]$Ctx,
        [int]$MaxCount = 5000
    )

    $all = @()
    $token = $null

    do {
        # Get-AzStorageBlob supports pagination with -MaxCount and -ContinuationToken
        $page = Get-AzStorageBlob -Container $Container -Context $Ctx -MaxCount $MaxCount -ContinuationToken $token

        if ($page) {
            $all += $page
            $token = $page[$page.Count - 1].ContinuationToken
        } else {
            $token = $null
        }
    } while ($token -ne $null)

    return $all
}

function Get-AllPrefixesForBlob {
    param([Parameter(Mandatory=$true)][string]$BlobName)

    $parts = $BlobName -split "/"
    if ($parts.Count -le 1) { return @() }

    $prefixes = @()
    for ($i = 1; $i -le ($parts.Count - 1); $i++) {
        $prefixes += (($parts[0..($i-1)] -join "/") + "/")
    }
    return $prefixes
}

function Get-FolderSummary {
    param(
        [Parameter(Mandatory=$true)]$BlobObjects
    )

    # Includes 0-byte files and 0-byte folder stubs naturally (Length can be 0)
    $expanded = foreach ($b in $BlobObjects) {
        $name = [string]$b.Name
        $size = [int64]$b.Length

        if ([string]::IsNullOrWhiteSpace($name)) { continue }

        foreach ($p in (Get-AllPrefixesForBlob -BlobName $name)) {
            $lvl = ($p.TrimEnd("/") -split "/").Count
            [pscustomobject]@{
                Level     = $lvl
                Folder    = $p
                SizeBytes = $size
            }
        }
    }

    $summary = $expanded |
        Group-Object Level, Folder |
        ForEach-Object {
            $lvl    = $_.Group[0].Level
            $folder = $_.Group[0].Folder
            $total  = ($_.Group | Measure-Object SizeBytes -Sum).Sum
            [pscustomobject]@{
                Level      = $lvl
                Folder     = $folder
                BlobCount  = $_.Count
                TotalBytes = [int64]$total
            }
        } |
        Sort-Object Level, Folder

    return $summary
}

function Compare-Summaries {
    param(
        [Parameter(Mandatory=$true)]$SrcSummary,
        [Parameter(Mandatory=$true)]$DstSummary
    )

    $srcIndex = @{}
    foreach ($r in $SrcSummary) { $srcIndex["$($r.Level)|$($r.Folder)"] = $r }

    $dstIndex = @{}
    foreach ($r in $DstSummary) { $dstIndex["$($r.Level)|$($r.Folder)"] = $r }

    $keys = ($srcIndex.Keys + $dstIndex.Keys) | Sort-Object -Unique

    $compare = foreach ($k in $keys) {
        $s = $srcIndex[$k]
        $d = $dstIndex[$k]
        $parts = $k -split "\|", 2

        $srcCount = if ($s) { $s.BlobCount } else { 0 }
        $dstCount = if ($d) { $d.BlobCount } else { 0 }
        $srcBytes = if ($s) { $s.TotalBytes } else { 0 }
        $dstBytes = if ($d) { $d.TotalBytes } else { 0 }

        [pscustomobject]@{
            Level            = [int]$parts[0]
            Folder           = $parts[1]
            SrcBlobCount     = $srcCount
            DstBlobCount     = $dstCount
            SrcTotalBytes    = $srcBytes
            DstTotalBytes    = $dstBytes
            BlobCountDelta   = ($dstCount - $srcCount)
            TotalBytesDelta  = ($dstBytes - $srcBytes)
        }
    }

    return ($compare | Sort-Object Level, Folder)
}

function Get-ZeroByteObjects {
    param([Parameter(Mandatory=$true)] $BlobObjects)

    if ($null -eq $BlobObjects) { return @() }

    return @(
        $BlobObjects |
        Where-Object { [int64]$_.Length -eq 0 } |
        Select-Object -ExpandProperty Name |
        Sort-Object
    )
}

function Get-ZeroByteFolderStubs {
    param([Parameter(Mandatory=$true)] $BlobObjects)

    if ($null -eq $BlobObjects) { return @() }

    return @(
        $BlobObjects |
        Where-Object { ([int64]$_.Length -eq 0) -and ([string]$_.Name).EndsWith("/") } |
        Select-Object -ExpandProperty Name |
        Sort-Object
    )
}

function Diff-List {
    param(
        [Parameter(Mandatory=$true)] $SrcList,
        [Parameter(Mandatory=$true)] $DstList
    )

    if ($null -eq $SrcList) { $SrcList = @() }
    if ($null -eq $DstList) { $DstList = @() }

    $diff = Compare-Object -ReferenceObject $SrcList -DifferenceObject $DstList
    if (-not $diff) { return @() }

    return @(
        $diff | ForEach-Object {
            $present =
                if ($_.SideIndicator -eq "<=") { "SOURCE" }
                elseif ($_.SideIndicator -eq "=>") { "DESTINATION" }
                else { "UNKNOWN" }

            [pscustomobject]@{
                Name          = $_.InputObject
                PresentOnlyIn = $present
            }
        }
    )
}

function Export-CsvSafe {
    param(
        [Parameter(Mandatory=$true)] $Data,
        [Parameter(Mandatory=$true)] [string] $Path,
        [switch] $NoTypeInformation
    )

    if ($null -eq $Data) { $Data = @() }

    if ($NoTypeInformation) {
        $Data | Export-Csv -Path $Path -NoTypeInformation
    } else {
        $Data | Export-Csv -Path $Path
    }
}

# ------------------ GET CONTAINER SETS ------------------
$srcContainers = Get-Containers -Ctx $SrcCtx
$dstContainers = Get-Containers -Ctx $DstCtx

$allContainers = ($srcContainers + $dstContainers) | Sort-Object -Unique

if ($OnlyTheseContainers.Count -gt 0) {
    $allContainers = $allContainers | Where-Object { $OnlyTheseContainers -contains $_ }
}

if ($allContainers.Count -eq 0) {
    throw "No containers returned. Ensure the managed identity has Storage Blob Data Reader (or higher) on both accounts."
}

# ------------------ PROCESS ALL CONTAINERS ------------------
$master = @()

foreach ($c in $allContainers) {

    Write-Host "`n==============================" 
    Write-Host "CONTAINER: $c"
    Write-Host "==============================" 

    # Pull blobs from both sides (paged)
    $srcBlobs = @()
    $dstBlobs = @()

    try {
        $srcBlobs = Get-AllBlobsPaged -Container $c -Ctx $SrcCtx -MaxCount $PageSize -ErrorAction Stop
    }
    catch {
        Write-Warning ("SRC error on container {0}: {1}" -f $c, $_.Exception.Message)
        $srcBlobs = @()
    }
    try {
    $dstBlobs = Get-AllBlobsPaged -Container $c -Ctx $DstCtx -MaxCount $PageSize -ErrorAction Stop
    }
    catch {
        Write-Warning ("DST error on container {0}: {1}" -f $c, $_.Exception.Message)
        $dstBlobs = @()
    }

    # Folder summaries
    $srcSummary = Get-FolderSummary -BlobObjects $srcBlobs
    $dstSummary = Get-FolderSummary -BlobObjects $dstBlobs
    $cmp        = Compare-Summaries -SrcSummary $srcSummary -DstSummary $dstSummary

    # 0-byte checks
    $srcZero  = @(Get-ZeroByteObjects -BlobObjects $srcBlobs)
    $dstZero  = @(Get-ZeroByteObjects -BlobObjects $dstBlobs)

    if ($null -eq $srcZero) { $srcZero = @() }
    if ($null -eq $dstZero) { $dstZero = @() }

    $zeroDiff = Diff-List -SrcList $srcZero -DstList $dstZero


    $srcStub  = @(Get-ZeroByteFolderStubs -BlobObjects $srcBlobs)
    $dstStub  = @(Get-ZeroByteFolderStubs -BlobObjects $dstBlobs)

    if ($null -eq $srcStub) { $srcStub = @() }
    if ($null -eq $dstStub) { $dstStub = @() }

    $stubDiff = Diff-List -SrcList $srcStub -DstList $dstStub
    

    # Display 
    $cmp | Format-Table Level, Folder, SrcBlobCount, DstBlobCount, SrcTotalBytes, DstTotalBytes, BlobCountDelta, TotalBytesDelta -AutoSize

    # Export per-container CSVs
    $safe = $c -replace '[^a-zA-Z0-9\-]', '_'
    if ($null -eq $srcSummary) { $srcSummary = @() }
    Export-CsvSafe -Data $srcSummary -Path (Join-Path $OutDir "SRC_${safe}_folder_summary.csv") 
    if ($null -eq $dstSummary) { $dstSummary = @() }
    Export-CsvSafe -Data $dstSummary -Path (Join-Path $OutDir "DST_${safe}_folder_summary.csv") 
    if ($null -eq $cmp) { $cmp = @() }
    Export-CsvSafe -Data $cmp -Path (Join-Path $OutDir "CMP_${safe}_Folder_Diff.csv") 
    if ($null -eq $zeroDiff) { $zeroDiff = @() }
    Export-CsvSafe -Data $zeroDiff -Path (Join-Path $OutDir "CMP_${safe}_ZeroByteFiles_Diff.csv") 
    if ($null -eq $stubDiff) { $stubDiff = @() }
    Export-CsvSafe -Data $stubDiff -Path (Join-Path $OutDir "CMP_${safe}_ZeroByteFolderStubs_Diff.csv")

    # Add to master
    foreach ($row in $cmp) {
        $master += [pscustomobject]@{
            Container       = $c
            Level           = $row.Level
            Folder          = $row.Folder
            SrcBlobCount    = $row.SrcBlobCount
            DstBlobCount    = $row.DstBlobCount
            SrcTotalBytes   = $row.SrcTotalBytes
            DstTotalBytes   = $row.DstTotalBytes
            BlobCountDelta  = $row.BlobCountDelta
            TotalBytesDelta = $row.TotalBytesDelta
        }
    }
}

# Export master compare
Export-CsvSafe -Data $master -Path (Join-Path $OutDir "CMP_MASTER_allcontainers_folder.csv") -NoTypeInformation
Write-Host "All Containers Exported to: $OutDir"

Output of Script - Validation of Azure Storage Account Blob Synchronization

| Container         | Level | Folder   | SrcBlobCount | DstBlobCount | SrcTotalBytes | DstTotalBytes | BlobCountDelta | TotalBytesDelta |
|-------------------|-------|----------|--------------|--------------|---------------|---------------|----------------|-----------------|
| sample-container1 | 1     | folder1/ | 3            | 3            | 165           | 165           | 0              | 0               |
| sample-container2 | 1     | folder2/ | 10           | 10           | 0             | 0             | 0              | 0               |

Key Features 

  • No SAS tokens, no storage account keys, and no environment‑specific secrets
  • Executes from the target VM using Managed Identity
  • Uses azcopy sync to support resumability and large datasets
  • Enables logging for audit and troubleshooting
  • Prevents full script termination due to partial failures: skips failed containers and continues processing remaining pairs

When to Use This Pattern

Recommended Scenarios

  • Azure repatriation or region-to-region migration
  • Source environment is locked down and cannot be modified
  • Separate hub-spoke network architectures per region
  • One-time or phased migration efforts.
  • Environments with strong compliance and security requirements

After successful repatriation and source environment decommissioning, any temporary private endpoint or DNS records created in the target environment can be safely removed.
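For example, the temporary A record created for the source account could be removed with Az.PrivateDns once the migration is verified (resource names below are placeholders):

```powershell
# Sketch: remove the temporary source-account A record from the target
# private DNS zone after repatriation completes. Names are placeholders.
Remove-AzPrivateDnsRecordSet -ResourceGroupName "target-dns-rg" `
    -ZoneName "privatelink.blob.core.windows.net" `
    -Name "sourceaccount" -RecordType A
```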

Less Suitable Scenarios

  • Continuous or near real-time replication requirements
  • Architectures with shared networking between environments
  • Disaster recovery scenarios requiring bi‑directional synchronization

Decision Matrix: Choosing the Right Pattern 

| Requirement      | Target-Anchored Repatriation VM |
|------------------|---------------------------------|
| Use Case         | Repatriation / migration        |
| Network Model    | Fully isolated                  |
| DNS Complexity   | Explicit A record management    |
| Source Changes   | None                            |
| Automation Scope | Scoped per migration            |
| Recommended For  | One-time or phased moves        |

 

Key Takeaway 

For Azure repatriation scenarios involving different regions, subscriptions, and isolated private endpoint networks, a reliable and secure approach is to: Execute AzCopy from a VM in the target environment, explicitly align Private DNS resolution, and authenticate using Managed Identity.

This pattern maintains:

  • Network isolation
  • Zero credential exposure
  • Alignment with enterprise security and compliance controls

while still enabling large‑scale, high‑throughput data migration.

