Modernizing On‑Prem File Servers: Azure Storage Mover and File Sync
Azure File Sync is a hybrid cloud storage service that centralizes on-premises file shares into Azure Files while preserving the experience of a local Windows file server. It installs an agent on one or more Windows Servers to cache and sync files to an Azure file share (the cloud backend). Key features include cloud tiering (keeping only hot files on-prem and tiering cold data to Azure) and multi-site synchronization, so changes propagate across servers via the cloud. In essence, Azure File Sync turns your file server into a cache for Azure, providing continuous two-way sync between on-premises servers and the Azure file share.

Azure Storage Mover is a fully managed migration service used to transfer file data into Azure Storage (Azure Blob containers or Azure file shares) with minimal downtime. It works by deploying a migration agent near the source storage, which then copies data directly to Azure. The cloud-based Storage Mover resource orchestrates migrations (including the initial bulk transfer and optional delta syncs for changes) across multiple shares from a central interface. Unlike File Sync, Storage Mover is not a continuous sync service but a one-directional data mover for scenarios like one-time migrations or periodic updates.

Side-by-Side Technical Comparison

Primary Purpose
Azure File Sync: Hybrid file service. Ongoing two-way sync between on-prem file servers and Azure Files, enabling local caching and multi-site data sharing.
Azure Storage Mover: Migration tool. One-way transfers of file data from on-prem (or other storage) to Azure Storage, optimized for lift-and-shift migrations with minimal downtime.

Architecture
Azure File Sync: An agent on Windows Server connects local NTFS volumes to an Azure file share (the cloud endpoint). The Storage Sync Service coordinates sync across servers in a sync group. Supports cloud tiering to offload cold files to the cloud. Sync is continuous and multi-directional (all endpoints stay in sync).
Azure Storage Mover: Cloud service plus on-prem agent. An Azure Storage Mover resource manages migration jobs. Lightweight migration agents (VMs or containers) run near your sources, sending data directly to the Azure target (Blob container or file share). The service orchestrates project-based migrations but does not keep sources in sync after completion.

Supported Sources
Azure File Sync: Windows file servers (NTFS), accessed via SMB/NFS protocols (the agent requires Windows Server). Ideal for Windows-based file shares.
Azure Storage Mover: SMB shares, NFS exports, and similar file systems on any platform (Windows or Linux NAS). Also supports migrating from other clouds' storage (e.g., S3 buckets) into Azure. Broad support for heterogeneous sources.

Supported Targets
Azure File Sync: Azure Files only (the cloud endpoint is an Azure file share in a storage account). Supports SMB (and NFS 4.1 Azure file shares in preview) as target share types.
Azure Storage Mover: Azure Storage (Blob containers or Azure file shares). For example, it can migrate into an Azure Blob (ADLS Gen2) container or an Azure file share, depending on the scenario.

Sync vs. Migration
Azure File Sync: Continuous sync. Bi-directional; changes on-prem or in Azure propagate to all endpoints. Designed for long-term hybrid operation, not just a one-time move.
Azure Storage Mover: Batch migration. One-time or repeated transfer; not a live sync. Typically used to move data entirely to Azure (cut over once done). Supports incremental (delta) syncs to capture changes between migration runs.

Performance & Scale
Azure File Sync: Scales to large datasets (tested up to 100 million files per sync group). Throughput can reach hundreds of files per second for upload/download given sufficient resources. Performance depends on server hardware, network, and Azure Files limits.
Azure Storage Mover: Built to handle high-volume migrations (100M+ files). Can scale out by deploying multiple agents or using bigger VMs to increase throughput. Performance is mainly limited by network bandwidth and source/target IOPS, and can be optimized with parallel jobs and delta-sync workflows.

Integration
Azure File Sync: Deep integration with Azure Files (the back-end store) and Windows Server. Works with Azure Backup for centralized backups or with file share snapshots. Leverages Azure's redundancy options (LRS/ZRS/GRS) for durability; supports Azure AD DS identity integration to maintain ACLs in the cloud.
Azure Storage Mover: Integrated with Azure Arc (for agent management) and can be combined with Azure Data Box for hybrid migrations (offline + online phases). Managed through the Azure portal and CLI; provides logging/monitoring through Azure Monitor for migration jobs. No ongoing infrastructure after migration completes.

Advantages of Azure File Sync:
- Hybrid Cloud Caching: Retains low-latency on-premises file access (via local Windows servers) while using Azure as central storage. End users and apps continue using a local file server interface.
- Cloud Tiering & Storage Efficiency: Frees up local storage by tiering infrequently used files to Azure. This reduces on-prem disk usage without sacrificing access to the full dataset (files are pulled from the cloud on demand).
- Multi-site Sync & Collaboration: Enables near-real-time sync across multiple servers/sites via the Azure hub. Great for distributed teams sharing a common file set, replacing the need for complex DFS-R setups or manual transfers.
- Minimal Disruption Migration: Can serve as a no-downtime migration path to Azure Files: sync in the background, then cut clients over to the Azure share with identical structure and ACLs.

Advantages of Azure Storage Mover:
- Purpose-Built for Migration: Optimized for transferring data at scale into Azure. Can handle large one-time migrations or scheduled recurring syncs without continuous agent overhead post-migration. Simplifies multi-terabyte or multi-site migration projects.
- Heterogeneous Source Support: Works with a variety of source types (SMB and NFS shares on any OS) and can migrate into both Azure Files and Azure Blob, providing flexibility that Azure File Sync can't (e.g., migrating Linux NFS servers or third-party storage to Azure).
- Centralized Orchestration: Cloud architects can manage all migrations via a single Azure Storage Mover resource, with projects and jobs tracking progress per share. Logging, error handling, and coordination are unified, unlike scripting copy operations per server.
- Incremental & Low Downtime: Supports delta synchronization to bring the target up to date after an initial bulk copy. This reduces cutover downtime, since only last-minute changes need transferring. Also integrates with offline seeding (Data Box) plus online catch-up to minimize network strain and downtime.

Common Use Cases and When to Choose Each

Azure File Sync – Use Cases: Ideal when you need to maintain on-premises file server access while leveraging cloud storage. Some common scenarios:
- Branch Office File Sharing: Multiple offices each have a local file server, all synced to a central Azure Files share. Users get fast local access, and the cloud keeps each site's data consistent.
- File Server Augmentation: You want to extend an existing Windows file server with virtually unlimited cloud capacity (via tiering) rather than fully moving to the cloud. Azure File Sync offloads old data and provides cloud backup, but users and apps continue as normal with the local server.
- Gradual Migration / Testing: You plan to eventually migrate to Azure Files but want a seamless, no-downtime transition. Deploy Azure File Sync on the server, let it sync all data to the cloud, then optionally retire the on-prem server later. Users never stop accessing their files during the migration.
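The gradual-migration scenario above can be sketched with the Azure CLI's `storagesync` extension: create the sync service and sync group, attach the Azure file share as the cloud endpoint, then add the registered server as a server endpoint with cloud tiering on. All resource names below are hypothetical, and exact parameter names may vary between extension versions.

```shell
# Assumptions: the 'storagesync' extension is installed (az extension add -n storagesync)
# and the File Sync agent is already installed and registered on the Windows Server.
rg=files-rg
sync_service=afs-sync-service

# 1. Create the Storage Sync Service and a sync group
az storagesync create -n $sync_service -g $rg -l eastus2
az storagesync sync-group create --storage-sync-service $sync_service -g $rg -n corp-files

# 2. Attach the Azure file share as the sync group's cloud endpoint
az storagesync sync-group cloud-endpoint create \
  --storage-sync-service $sync_service -g $rg --sync-group-name corp-files \
  -n cloud-ep --storage-account mystorageacct --azure-file-share-name corp-share

# 3. Add the registered on-prem server as a server endpoint, with cloud tiering
#    keeping at least 60% of the volume free locally
az storagesync sync-group server-endpoint create \
  --storage-sync-service $sync_service -g $rg --sync-group-name corp-files \
  -n server-ep --server-id <registered-server-guid> \
  --server-local-path "D:\\Shares\\Corp" \
  --cloud-tiering on --volume-free-space-percent 60
```

Once the initial upload completes, clients can be cut over to the Azure file share (or kept on the local server indefinitely in hybrid mode).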
Azure Storage Mover – Use Cases: Best when you have a defined migration project to move file shares to Azure, especially for heterogeneous environments or large volumes:
- Data Center Exit / Large File Share Migration: Moving tens or hundreds of TBs of data from on-prem NAS or file servers to Azure Storage as a one-time project. Storage Mover's scalability and delta-sync capabilities help ensure a smooth transfer with minimal final cutover downtime.
- Consolidating Cross-Platform Data to Azure: If you need to migrate non-Windows file systems (Linux NFS, etc.) or even data from other clouds into Azure, Storage Mover supports those sources out of the box. For example, migrating a Linux file repository into Azure Blob for big data analytics.
- Recurring Scheduled Migrations: Where you periodically copy data from on-prem to Azure (e.g., monthly exports from a local system to the cloud for archiving), Storage Mover can be run as needed and centrally monitored, without maintaining a constant sync infrastructure in between.

Conclusion

When to choose which: Use Azure File Sync if your goal is to keep using on-prem servers and you need a hybrid solution for the foreseeable future, or if you require distributed caching and continuous sync for collaboration. It is essentially part of your production architecture for hybrid cloud file storage. Conversely, choose Azure Storage Mover when you want to permanently migrate data into Azure (and possibly decommission on-prem storage), or when dealing with a one-off bulk transfer task. In summary, Azure File Sync is an ongoing hybrid file service, and Azure Storage Mover is a one-time (or scheduled) migration service. Both can complement your cloud strategy, but they address distinct scenarios in a cloud architect's toolkit.

How Azure Monitor's Implementation of Private Link Differs from Other Services
Azure Monitor implements Private Link DNS resolution differently from other Azure services. This article details the more common pattern for DNS resolution to support Private Endpoints, walks through an example from the client perspective using a Storage Account, then describes how Azure Monitor differs and how this may impact your Private Link rollout.

Azure Private Endpoint vs. Service Endpoint: A Comprehensive Guide
When building secure and scalable applications on Microsoft Azure, network connectivity becomes a critical factor. Azure provides two primary methods for enhancing security and connectivity: Private Endpoints and Service Endpoints. While both serve to establish secure connections to Azure resources, they function in distinct ways and cater to different networking needs. This blog explains the differences between the two, their use cases, and when you should use each.

Understanding Service Endpoints

Azure Service Endpoints allow you to securely connect to Azure services over an optimized route through the Azure backbone network. When you enable service endpoints on a virtual network, they extend the identity of that virtual network to the service. Essentially, they provide a direct, secure connection to Azure services like Azure Storage, Azure SQL Database, and Azure Key Vault without requiring the traffic to traverse the public internet.

Key Characteristics of Service Endpoints:
- Public Services, Private Routing: Traffic travels over the Azure backbone but still reaches services at their public IP addresses. However, the traffic is not exposed to the internet.
- Network Security Group (NSG) Integration: Service endpoints can be secured using NSGs, which control access based on source IP addresses and subnet configurations.
- Public DNS Resolution: Service endpoints use public DNS names to route traffic. The service endpoint routes network traffic privately but relies on public DNS resolution.

Use Cases for Service Endpoints:
- Simplified Security: Service endpoints are ideal for connecting to Azure services in a straightforward manner without complex configuration.
- Lower Latency: Since traffic is routed through the Azure backbone network, there is less congestion compared to public internet traffic.
- Integration with NSGs: Service endpoints allow tighter security control with Network Security Groups, ensuring only approved subnets and virtual networks can access specific services.

Understanding Private Endpoints

Private Endpoints, on the other hand, provide a direct, private connection to Azure resources by assigning a private IP address from your virtual network (VNet) to the service. Unlike service endpoints, which rely on public IPs, private endpoints fully encapsulate the service in a private address space. When a service is accessed via a private endpoint, the connection stays within the Azure network, preventing exposure to the public internet.

Key Characteristics of Private Endpoints:
- Private IP Connectivity: Private endpoints map Azure resources to a private IP in your VNet, ensuring all traffic remains private and is not exposed to the internet.
- DNS Resolution: Private endpoints require DNS configuration so that the service's name resolves to the private IP address. Azure offers automatic DNS resolution for private endpoints, but custom DNS configurations can also be used.
- End-to-End Security: Since the connection is over a private IP, it adds an additional layer of security by preventing egress or ingress over public networks.

Use Cases for Private Endpoints:
- Critical Security: Private endpoints are well suited to applications requiring high security, such as those handling sensitive data, financial transactions, or proprietary business logic.
- Strict Regulatory Compliance: If you operate in a highly regulated industry, private endpoints ensure your data is not exposed to the public internet.
- Network Isolation: Private endpoints suit scenarios where you want to fully isolate your Azure resources from the internet and only allow access from within your VNet.

Key Differences: Private Endpoint vs. Service Endpoint

Connection Type
Private Endpoint: Uses a private IP address from your VNet.
Service Endpoint: Uses the service's public IP address, but traffic stays on the Azure backbone.

DNS Resolution
Private Endpoint: Requires DNS configuration to resolve names to private IPs.
Service Endpoint: Relies on public DNS for resolution.

Use Case
Private Endpoint: Ideal for critical security and isolated traffic.
Service Endpoint: Best for connecting to Azure services with basic security requirements.

Supported Services
Private Endpoint: Limited to resources that support private endpoints.
Service Endpoint: Supports a broad range of Azure services like Storage, SQL, etc.

Note: when configured appropriately, service endpoints are as secure as private endpoints.

When to Use Each Option

Choose Service Endpoints if:
- You want to connect to Azure services like Storage, SQL, or Key Vault over the Azure backbone network.
- Your security requirements do not mandate complete isolation from the public internet.
- You need to leverage Network Security Groups (NSGs) to limit access from specific subnets or VNets.

Choose Private Endpoints if:
- Your application requires full isolation from the public internet, such as for sensitive workloads or highly regulated data.
- You want traffic to flow entirely within the private network, ensuring complete confidentiality.
- You need to maintain strict security standards for applications that interact with services like databases, storage accounts, or other critical infrastructure.

Conclusion

Both Private Endpoints and Service Endpoints play vital roles in securing connectivity to Azure services, but they cater to different security needs. Service Endpoints offer an easier, simpler way to secure access over the Azure backbone, while Private Endpoints provide complete isolation and enhanced security by assigning a private IP address.
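The two options compared above can be sketched with the Azure CLI against a storage account. This is a sketch under assumed, hypothetical resource names (`net-rg`, `app-vnet`, `mystorageacct`), not a full deployment; a private endpoint also needs DNS wired up (typically the `privatelink.blob.core.windows.net` private DNS zone) before clients resolve the private IP.

```shell
# Hypothetical names throughout.
rg=net-rg; vnet=app-vnet; subnet=app-subnet; sa=mystorageacct

# Option 1: Service endpoint. Enable Microsoft.Storage on the subnet, then
# restrict the storage account's firewall to that subnet.
az network vnet subnet update -g $rg --vnet-name $vnet -n $subnet \
  --service-endpoints Microsoft.Storage
az storage account network-rule add -g $rg --account-name $sa \
  --vnet-name $vnet --subnet $subnet

# Option 2: Private endpoint. Give the account's blob service a private IP
# inside the subnet; traffic then never leaves the private address space.
sa_id=$(az storage account show -g $rg -n $sa --query id -o tsv)
az network private-endpoint create -g $rg -n ${sa}-pe \
  --vnet-name $vnet --subnet $subnet \
  --private-connection-resource-id $sa_id \
  --group-id blob --connection-name ${sa}-conn
```

Note the asymmetry: the service endpoint is configured on the subnet plus the service's firewall, while the private endpoint is a resource of its own that injects a NIC-like private IP into your VNet.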
By carefully assessing your application's security needs and performance requirements, you can choose the appropriate method to ensure optimal connectivity and compliance with Azure services.

Azure Firewall and Service Endpoints
In my recent blog series Private Link reality bites I briefly mentioned the possibility of inspecting Service Endpoints with Azure Firewall, and many have asked for more details on that configuration. Here we go!

First things first: what the heck am I talking about? Most Azure services, such as Azure Storage, Azure SQL and many others, can be accessed directly over the public Internet. However, there are two alternatives for accessing those services over Microsoft's backbone: Private Link and VNet Service Endpoints. Microsoft's overall recommendation is to use Private Link, but some organizations prefer leveraging service endpoints. Feel free to read this post on a comparison of the two.

You might want to inspect traffic to Azure services with network firewalls, even if that traffic is leveraging service endpoints. Before doing so, please consider that sending high-bandwidth traffic through a firewall might have cost implications and an impact on overall application latency. If you still want to go ahead, this post explains how to do it.

The Design

Service endpoints have two configuration parts:
- Source subnet configuration to tunnel traffic to the destination service.
- Destination service configuration to accept traffic from the source subnet.

The key concept to understand is that if traffic from the client is going to be inspected by a firewall before reaching the Azure service, then the source subnet is actually the Azure Firewall's subnet, not the original client's subnet. You can configure service endpoints for a specific Azure service on a subnet using the portal, Terraform, Bicep, PowerShell or the Azure CLI.
In the portal this is what it looks like:

For the Azure CLI, enabling service endpoints for Azure Storage accounts in all regions would look like this:

❯ subnet_name=AzureFirewallSubnet
❯ az network vnet subnet update -n $subnet_name --vnet-name $vnet_name -g $rg --service-endpoints Microsoft.Storage.Global -o none --only-show-errors

You would then configure your Azure services to accept traffic coming from the Azure Firewall subnet. For example, for Azure Storage Accounts this is what you would see in the portal:

Network Rules or Application Rules?

Ideally you should use Application Rules in your firewall to make sure that your workloads are accessing the right Azure services, and not exfiltrating data to rogue data services that might be owned by somebody else and could still have the same IP address. This is an example of a star rule granting access to all Azure Storage Accounts, but you should specify your own:

I tested with two storage accounts, one in the same region and one in a different region than the client (the client being, in this case, the Azure Firewall). Access to both storage accounts works, and as you can see, both accesses are logged in the Storage Account as well as in the Azure Firewall (I have removed some characters from the storage account names to obfuscate them):

❯ ssh $vm_pip "curl -s4 $storage_blob_fqdn1"
Hello world!
❯ ssh $vm_pip "curl -s4 $storage_blob_fqdn2"
Hello world!
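The star rule shown in the portal screenshot, and the storage account's acceptance of traffic from the firewall subnet, could be created with the Azure CLI along these lines. This is a sketch with hypothetical resource names (`fw-rg`, `azfw`, `mystorageacct`), assuming a classic-rules firewall; in production you would replace the wildcard FQDN with your own storage account names.

```shell
# Hypothetical names; assumes a classic-rules Azure Firewall named azfw in $rg.
rg=fw-rg; fw=azfw; vnet=azfw-vnet

# Application rule: allow HTTPS from the workload subnet to storage blob FQDNs.
# Replace the wildcard with your specific accounts to prevent exfiltration.
az network firewall application-rule create -g $rg -f $fw \
  --collection-name storage-rules -n allow-blob --priority 100 --action Allow \
  --protocols Https=443 --source-addresses 10.13.76.0/26 \
  --target-fqdns "*.blob.core.windows.net"

# Storage account side: accept traffic only from the AzureFirewallSubnet
# (the subnet where the service endpoint was enabled) and deny everything else.
az storage account network-rule add -g $rg --account-name mystorageacct \
  --vnet-name $vnet --subnet AzureFirewallSubnet
az storage account update -g $rg -n mystorageacct --default-action Deny
```

Because the service endpoint sits on the firewall subnet, the storage account sees the firewall as the caller, which is why the network rule targets AzureFirewallSubnet rather than the workload subnet.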
❯ query='StorageBlobLogs | where TimeGenerated > ago(15m) | project AccountName, StatusCode, CallerIpAddress'
❯ az monitor log-analytics query -w $logws_customerid --analytics-query $query -o table
AccountName             CallerIpAddress    StatusCode
----------------------  -----------------  ----------
storagetest????eastus2  10.13.76.72:10066  200
storagetest????westus2  10.13.76.72:11880  200

❯ query='AzureDiagnostics | where TimeGenerated > ago(15m) | where Category == "AZFWApplicationRule" | project SourceIP, Fqdn_s, Protocol_s, Action_s'
❯ az monitor log-analytics query -w $logws_customerid --analytics-query $query -o table
Action_s  Fqdn_s                                        Protocol_s  SourceIP
--------  --------------------------------------------  ----------  ----------
Allow     storagetest????eastus2.blob.core.windows.net  HTTPS       10.13.76.4
Allow     storagetest????westus2.blob.core.windows.net  HTTPS       10.13.76.4

The storage account sees the Azure Firewall's IP as the client IP (in the subnet 10.13.76.64/26), while the Azure Firewall logs show the actual client IP (in the workload subnet 10.13.76.0/26).

If you used network rules instead, you would lose much of the Azure Firewall's flexibility, even with FQDN-based rules. The reason is that if two storage accounts with different FQDNs shared the same IP address, and the same client resolved both FQDNs, the Azure Firewall would not be able to tell which storage account a specific packet belongs to. From a routing perspective it would still work, though, as long as you keep the default SNAT settings of Azure Firewall, which translate the source IP address when public IP addresses are involved.

Conclusion

There are some reasons why you might want to pick VNet service endpoints over Private Link (cost would probably be one of them). If so, there are advantages and disadvantages to sending traffic to Azure services via a firewall.
If you decide to inspect traffic to VNet service endpoints with Azure Firewall, hopefully this post has shown you how to do that. What are your thoughts about this?