VPN Gateway
A Guide to Azure Data Transfer Pricing
Understanding Azure networking charges is essential for businesses aiming to manage their budgets effectively. Given the complexity of Azure networking pricing, which involves various influencing factors, the goal here is to bring a clearer understanding of the associated data transfer costs by breaking down the pricing models into the following use cases:

- VM to VM
- VM to Private Endpoint
- VM to Internal Standard Load Balancer (ILB)
- VM to Internet
- Hybrid connectivity

Please note this is a first version; a second version will follow with additional scenarios.

Disclaimer: Pricing may change over time, so check the public Azure pricing calculator for up-to-date pricing information. Actual pricing may vary depending on agreements, purchase dates, and currency exchange rates. Sign in to the Azure pricing calculator to see pricing based on your current program/offer with Microsoft.

1. VM to VM

1.1. VM to VM, same VNet
Data transfer within the same virtual network (VNet) is free of charge. This means that traffic between VMs within the same VNet will not incur any additional costs. Doc. Data transfer across Availability Zones (AZ) is also free. Doc.

1.2. VM to VM, across VNet peering
Azure VNet peering enables seamless connectivity between two virtual networks, allowing resources in different VNets to communicate with each other as if they were within the same network. When data is transferred between peered VNets, charges apply for both ingress and egress data. Doc:
- VM to VM, across VNet peering, same region
- VM to VM, across Global VNet peering

Azure regions are grouped into 3 Zones (distinct from Availability Zones within a specific Azure region). The pricing for Global VNet Peering is based on that geographic structure. Data transfer between VNets in different zones incurs outbound and inbound data transfer rates for the respective zones. When data is transferred from a VNet in Zone 1 to a VNet in Zone 2, outbound data transfer rates for Zone 1 and inbound data transfer rates for Zone 2 will be applicable. Doc.

1.3. VM to VM, through Network Virtual Appliance (NVA)
Data transfer through an NVA involves charges for both ingress and egress data, depending on the volume of data processed. When an NVA is in the path, such as for spoke VNet to spoke VNet connectivity via an NVA (e.g., a firewall) in the hub VNet, VM to VM pricing is incurred twice. The table above reflects only data transfer charges and does not include NVA/Azure Firewall processing costs.

2. VM to Private Endpoint (PE)
Private Endpoint pricing includes charges for the provisioned resource and data transfer costs based on traffic direction. For instance, writing to a Storage Account through a Private Endpoint incurs outbound data charges, while reading incurs inbound data charges. Doc:

2.1. VM to PE, same VNet
Since data transfer within a VNet is free, charges are only applied for data processing through the Private Endpoint. Cross-region traffic will incur additional costs if the Storage Account and the Private Endpoint are located in different regions.

2.2. VM to PE, across VNet peering
Accessing Private Endpoints from a peered network incurs only Private Link premium charges, with no peering fees. Doc.
- VM to PE, across VNet peering, same region
- VM to PE, across VNet peering, PE region != SA region

2.3. VM to PE, through NVA
When an NVA is in the path, such as for spoke VNet to spoke VNet connectivity via a firewall in the hub VNet, VM to VM charges apply between the VM and the NVA.
However, as per the PE pricing model, there are no charges between the NVA and the PE. The table above reflects only data transfer charges and does not include NVA/Azure Firewall processing costs.

3. VM to Internal Load Balancer (ILB)
Azure Standard Load Balancer pricing is based on the number of load balancing rules as well as the volume of data processed. Doc:

3.1. VM to ILB, same VNet
Data transfer within the same virtual network (VNet) is free. However, the data processed by the ILB is charged based on its volume and on the number of load balancing rules implemented. Only the inbound traffic is processed by the ILB (and charged); the return traffic goes directly from the backend to the source VM (free of charge).

3.2. VM to ILB, across VNet peering
In addition to the Load Balancer costs, data transfer charges between VNets apply for both ingress and egress.

3.3. VM to ILB, through NVA
When an NVA is in the path, such as for spoke VNet to spoke VNet connectivity via a firewall in the hub VNet, VM to VM charges apply between the VM and the NVA, and VM to ILB charges apply between the NVA and the ILB/backend resource. The table above reflects only data transfer charges and does not include NVA/Azure Firewall processing costs.

4. VM to internet

4.1. Data transfer and inter-region pricing model
Bandwidth refers to data moving in and out of Azure data centers, as well as data moving between Azure data centers; other transfers are explicitly covered by the Content Delivery Network, ExpressRoute pricing, or Peering. Doc:

4.2. Routing Preference in Azure and internet egress pricing model
When creating a public IP in Azure, Azure Routing Preference allows you to choose how your traffic routes between Azure and the Internet. You can select either the Microsoft Global Network or the public internet for routing your traffic. Doc:

See how this choice can impact the performance and reliability of network traffic: with the Routing Preference set to Microsoft network, ingress traffic enters the Microsoft network closest to the user, and egress traffic exits the network closest to the user, minimizing travel on the public internet ("Cold Potato" routing). By contrast, with the Routing Preference set to Internet, ingress traffic enters the Microsoft network closest to the hosted service region; transit ISP networks are used to route traffic, and travel on the Microsoft Global Network is minimized ("Hot Potato" routing). Bandwidth pricing for internet egress, Doc:

4.3. VM to internet, direct
Data transferred out of Azure to the internet incurs charges, while data transferred into Azure is free of charge. Doc. It is important to note that default outbound access for VMs in Azure will be retired on September 30, 2025; migration to an explicit outbound internet connectivity method is recommended. Doc.

4.4. VM to internet, with a public IP
Here, a standard public IP is explicitly associated with the VM NIC, which incurs additional costs. As in the previous scenario, data transferred out of Azure to the internet incurs charges, while data transferred into Azure is free of charge. Doc.

4.5. VM to internet, with NAT Gateway
In addition to the previous costs, data transfer through a NAT Gateway involves charges for both the data processed and the NAT Gateway itself. Doc:

5. Hybrid connectivity
Hybrid connectivity involves connecting on-premises networks to Azure VNets.
The pricing model includes charges for data transfer between the on-premises network and Azure, as well as any additional costs for using Network Virtual Appliances (NVAs) or Azure Firewalls in the hub VNet.

5.1. H&S Hybrid connectivity without firewall inspection in the hub
For an inbound flow, from the ExpressRoute Gateway to a spoke VNet, VNet peering charges are applied once, on the spoke inbound. There are no charges on the hub outbound. For an outbound flow, from a spoke VNet to an ER branch, VNet peering charges are applied once, on the spoke outbound only. There are no charges on the hub inbound. Doc. The table above does not include ExpressRoute connectivity-related costs.

5.2. H&S Hybrid connectivity with firewall inspection in the hub
Since traffic transits and is inspected via a firewall in the hub VNet (Azure Firewall or a third-party firewall NVA), the previous concepts do not apply. "Standard" inter-VNet VM-to-VM charges apply between the firewall and the destination VM: inbound and outbound, in both directions. Charges apply once outbound from the source VNet (Hub or Spoke) and once inbound on the destination VNet (Spoke or Hub). The table above reflects only data transfer charges within Azure and does not include NVA/Azure Firewall processing costs nor the costs related to ExpressRoute connectivity.

5.3. H&S Hybrid connectivity via a 3rd-party connectivity NVA (SD-WAN or IPsec)
Standard inter-VNet VM-to-VM charges apply between the NVA and the destination VM: inbound and outbound, in both directions, both in the Hub VNet and in the Spoke VNet.

5.4. vWAN scenarios
VNet peering is charged only from the point of view of the spoke – see examples and vWAN pricing components.

Next steps with cost management
To optimize cost management, Azure offers tools for monitoring and analyzing network charges. Azure Cost Management and Billing allows you to track and allocate costs across various services and resources, ensuring transparency and control over your expenses. By leveraging these tools, businesses can gain a deeper understanding of their network costs and make informed decisions to optimize their Azure spending.

Network Redundancy Between AVS, On-Premises, and Virtual Networks in a Multi-Region Design
By Mays_Algebary and shruthi_nair

Establishing redundant network connectivity is vital to ensuring the availability, reliability, and performance of workloads operating in hybrid and cloud environments. Proper planning and implementation of network redundancy are key to achieving high availability and sustaining operational continuity. This article focuses on network redundancy in a multi-region architecture. For details on the single-region design, refer to this blog.

The diagram below illustrates a common network design pattern for multi-region deployments, using either a Hub-and-Spoke or Azure Virtual WAN (vWAN) topology, and serves as the baseline for establishing redundant connectivity throughout this article. In each region, the Hub or Virtual Hub (VHub) extends Azure connectivity to Azure VMware Solution (AVS) via an ExpressRoute circuit. The regional Hub/VHub is connected to on-premises environments by cross-connecting (bowtie) both local and remote ExpressRoute circuits, ensuring redundancy. The concept of weight, used to influence traffic routing preferences, will be discussed in the next section. The diagram below illustrates the traffic flow when both circuits are up and running.

Design Considerations
If a region loses its local ExpressRoute connection, AVS in that region will lose connectivity to the on-premises environment. However, VNets will still retain connectivity to on-premises via the remote region's ExpressRoute circuit. The solutions discussed in this article aim to ensure redundancy for both AVS and VNets.

Looking at the diagram above, you might wonder: why do we need to set weights at all, and why do the AVS-ER connections (1b/2b) use the same weight as the primary on-premises connections (1a/2a)? Weight is used to influence routing decisions and ensure optimal traffic flow. In this scenario, both ExpressRoute circuits, ER1-EastUS and ER2-WestUS, advertise the same prefixes to the Azure ExpressRoute gateway. As a result, traffic from the VNet to on-premises would be load-balanced (ECMP) across both circuits. To avoid suboptimal routing and ensure that traffic from the VNets prefers the local ExpressRoute circuit, a higher weight is assigned to the local path.

It is also critical that the ExpressRoute gateway connections to on-premises (1a/2a) and to AVS (1b/2b) are assigned the same weight. Otherwise, traffic from the VNet to AVS will follow a less efficient route, as AVS routes are also learned over ER1-EastUS via Global Reach. For instance, VNets in EastUS would connect to AVS EUS through the ER1-EastUS circuit via Global Reach (as shown by the blue dotted line), instead of using the direct local path (orange line). This suboptimal routing is illustrated in the diagram below.

Now let us look at the solutions that can achieve redundant connectivity. The following solutions apply to both Hub-and-Spoke and vWAN topologies unless noted otherwise.

Note: The diagrams in the upcoming solutions illustrate only the failover traffic flow.

Solution 1: Network Redundancy via ExpressRoute in a Different Peering Location
In this solution, deploy an additional ExpressRoute circuit in a different peering location within the same metro area (e.g., ER2-PeeringLocation2), and enable Global Reach between this new circuit and the existing AVS ExpressRoute (e.g., AVS-ER1). If you intend to use this second circuit as a failover path, apply AS-path prepends to the on-premises prefixes advertised over it (a minimal sketch of the effect of prepending follows below).
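Prepending is easiest to see with a toy comparison. The sketch below is purely illustrative: the prefix, circuit names, and ASN (64496, a documentation-reserved ASN) are placeholders, and the function only mimics the shortest-AS-path comparison BGP performs; it is not how Azure implements route selection.

```python
# Illustrative only: why AS-path prepending steers failover traffic.
from dataclasses import dataclass

@dataclass
class Route:
    prefix: str          # on-premises prefix advertised toward Azure/AVS
    circuit: str         # ExpressRoute circuit the route was learned over
    as_path: list[int]   # AS path as received; prepending lengthens it

def best_paths(routes: list[Route]) -> list[Route]:
    """Return the route(s) with the shortest AS path.

    If several circuits tie, all of them are returned and traffic is
    ECMP-load-balanced across them (the active-active case with no prepend)."""
    shortest = min(len(r.as_path) for r in routes)
    return [r for r in routes if len(r.as_path) == shortest]

# Primary circuit advertises the prefix with a plain AS path.
primary = Route("10.10.0.0/16", "ER1-EastUS", [64496])
# Backup circuit advertises the same prefix prepended with the on-premises
# *public* ASN (a private ASN would be stripped toward AVS, defeating the prepend).
backup = Route("10.10.0.0/16", "ER2-PeeringLocation2", [64496, 64496, 64496])

print([r.circuit for r in best_paths([primary, backup])])  # ['ER1-EastUS']
print([r.circuit for r in best_paths([backup])])           # backup carries traffic after a failure
```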
Alternatively, if you want to use it as an active-active redundant path, do not prepend routes; in this case, both AVS and Azure VNets will use ECMP to distribute traffic across both circuits (e.g., ER1-EastUS and ER2-PeeringLocation2) when both are available.

Note: Compared to the Standard Topology, this design removes both the ExpressRoute cross-connect (bowtie) and the weight settings. When adding a second circuit in the same metro, there is no benefit in keeping them; otherwise, traffic from the Azure VNet will prefer the local AVS circuit (AVS-ER1/AVS-ER2) to reach on-premises due to the higher weight, as on-premises routes are also learned over the AVS circuit (AVS-ER1/AVS-ER2) via Global Reach. Also, when connecting the new circuit (e.g., ER2-PeeringLocation2), remove all weight settings across the connections. Traffic will follow the optimal path based on BGP prepending on the new circuit, or load-balance (ECMP) if no prepend is applied.

Note: Use a public ASN to prepend the on-premises prefixes, as the AVS circuit (e.g., AVS-ER) strips private ASNs toward AVS.

Solution Insights
- Ideal for mission-critical applications, providing predictable throughput and bandwidth for backup.
- Could be cost-prohibitive depending on the bandwidth of the second circuit.

Solution 2: Network Redundancy via ExpressRoute Direct
In this solution, ExpressRoute Direct is used to provision multiple circuits from a single port pair in each region; for example, ER2-WestUS and ER4-WestUS are created from the same port pair. This allows you to dedicate one circuit to local traffic and another to failover to a remote region. To ensure optimal routing, prepend the on-premises prefixes using a public ASN on the newly created circuits (e.g., ER3-EastUS and ER4-WestUS). Remove all weight settings across the connections; traffic will follow the optimal path based on BGP prepending on the new circuits. For instance, if ER1-EastUS becomes unavailable, traffic from AVS and VNets in the EastUS region will automatically route through the ER4-WestUS circuit, ensuring continuity.

Note: Compared to the Standard Topology, this design connects the newly created ExpressRoute circuits (e.g., ER3-EastUS/ER4-WestUS) to the remote region's ExpressRoute gateway (black dotted lines) instead of having the bowtie to the primary circuits (e.g., ER1-EastUS/ER2-WestUS).

Solution Insights
- Easy to implement if you already have ExpressRoute Direct.
- ExpressRoute Direct supports over-provisioning: you can create logical ExpressRoute circuits on top of your existing 10-Gbps or 100-Gbps ExpressRoute Direct resource, up to a subscribed bandwidth of 20 Gbps or 200 Gbps. For example, you can create two 10-Gbps ExpressRoute circuits within a single 10-Gbps ExpressRoute Direct resource (port pair); a small bookkeeping sketch of this allowance follows the Solution 3 insights below.
- Ideal for mission-critical applications, providing predictable throughput and bandwidth for backup.

Solution 3: Network Redundancy via ExpressRoute Metro
Metro ExpressRoute is a new configuration that enables dual-homed connectivity to two different peering locations within the same city. This setup enhances resiliency by allowing traffic to continue flowing even if one peering location goes down, using the same circuit.

Solution Insights
- Higher resiliency: Provides increased reliability with a single circuit.
- Limited regional availability: Currently available in select regions, with more being added over time.
- Cost-effective: Offers redundancy without significantly increasing costs.
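To make the over-provisioning allowance mentioned in the Solution 2 insights concrete, here is a small, hedged bookkeeping sketch. The 2× factor simply mirrors the "10-Gbps port pair, up to 20 Gbps of circuits" figure quoted above; the function and names are illustrative and are not part of any Azure SDK.

```python
# Conceptual bookkeeping for ExpressRoute Direct over-provisioning.
# The 2x allowance mirrors the figure quoted above; this is not an Azure API call.

def can_add_circuit(port_gbps: int, existing_circuits_gbps: list[int],
                    new_circuit_gbps: int, oversub_factor: int = 2) -> bool:
    """Return True if a new logical circuit fits within the subscribed
    bandwidth of the ExpressRoute Direct port pair times the allowance."""
    provisioned = sum(existing_circuits_gbps) + new_circuit_gbps
    return provisioned <= port_gbps * oversub_factor

# A 10 Gbps port pair already carrying one 10 Gbps circuit (e.g., ER2-WestUS)
# can still host a second 10 Gbps circuit (e.g., ER4-WestUS) for failover...
print(can_add_circuit(10, [10], 10))      # True
# ...but a third one would exceed the 20 Gbps allowance.
print(can_add_circuit(10, [10, 10], 10))  # False
```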
Solution 4: Deploy VPN as a Backup to ExpressRoute
This solution mirrors Solution 1 of the single-region design but extends it to multiple regions. In this approach, a VPN serves as the backup path for each region in the event of an ExpressRoute failure. In a Hub-and-Spoke topology, a backup path to and from AVS can be established by deploying Azure Route Server (ARS) in the hub VNet; ARS enables seamless transit routing between ExpressRoute and the VPN gateway. In a vWAN topology, ARS is not required; the vHub's built-in routing service automatically provides transitive routing between the VPN gateway and ExpressRoute.

In this design, you should not cross-connect the ExpressRoute circuits (e.g., ER1-EastUS and ER2-WestUS) to the ExpressRoute gateways in the hub VNets (e.g., Hub-EUS or Hub-WUS). Doing so will lead to routing issues, where the hub VNet programs only the on-premises routes learned via ExpressRoute. For instance, in the EastUS region, if the primary circuit (ER1-EastUS) goes down, Hub-EUS will receive on-premises routes from both the VPN tunnel and the remote ER2-WestUS circuit. However, it will prefer and program only the ExpressRoute-learned routes from the ER2-WestUS circuit. Since ExpressRoute gateways do not support route transitivity between circuits, AVS connected via AVS-ER will not receive the on-premises prefixes, resulting in routing failures.

Note: In a vWAN topology, to ensure optimal route convergence when failing back to ExpressRoute, you should prepend the prefixes advertised from on-premises over the VPN. Without route prepending, VNets may continue to use the VPN as the primary path to on-premises. If prepending is not an option, you can trigger the failover manually by bouncing the VPN tunnel.

Solution Insights
- Cost-effective and straightforward to deploy.
- Increased latency: The VPN tunnel over the internet adds latency due to encryption overhead.
- Bandwidth considerations: Multiple VPN tunnels might be needed to achieve bandwidth comparable to a high-capacity ExpressRoute circuit (e.g., over 1G). For details on VPN gateway SKUs and tunnel throughput, refer to this link.
- Because you can't cross-connect the ExpressRoute circuits, VNets will use the VPN for failover instead of leveraging the remote region's ExpressRoute circuit.

Solution 5: Network Redundancy with Multiple On-Premises Sites (split-prefix)
In many scenarios, customers advertise the same prefix from multiple on-premises locations to Azure. However, if the customer can split prefixes across different on-premises sites, it simplifies the implementation of a failover strategy using the existing ExpressRoute circuits. In this design, each on-premises site advertises region-specific prefixes (e.g., 10.10.0.0/16 for EastUS and 10.70.0.0/16 for WestUS), along with a common supernet (e.g., 10.0.0.0/8). Under normal conditions, AVS and VNets in each region use longest prefix match to route traffic efficiently to the appropriate on-premises location. For instance, if ER1-EastUS becomes unavailable, AVS and VNets in EastUS will automatically fail over to ER2-WestUS, routing traffic via the supernet prefix to maintain connectivity (a short longest-prefix-match sketch follows the solution insights below).

Solution Insights
- Cost-effective: No additional deployment; uses the existing ExpressRoute circuits.
- Advertising specific prefixes over each region might need additional planning.
- Ideal for mission-critical applications, providing predictable throughput and bandwidth for backup.
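Solution 5's failover behavior is just longest-prefix-match routing at work: while the regional circuit is up, the more specific /16 wins; when it is withdrawn, the /8 supernet takes over. The snippet below is a simplified lookup built on Python's standard ipaddress module; the prefixes mirror the example above, and the circuit labels are illustrative only.

```python
# Longest-prefix-match behavior behind the split-prefix failover (Solution 5).
import ipaddress

def pick_route(dest: str, advertised: dict[str, str]) -> str:
    """Return the circuit advertising the most specific prefix covering dest."""
    ip = ipaddress.ip_address(dest)
    matches = [(ipaddress.ip_network(p), via) for p, via in advertised.items()
               if ip in ipaddress.ip_network(p)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

routes = {
    "10.10.0.0/16": "ER1-EastUS",   # EastUS-specific prefix
    "10.70.0.0/16": "ER2-WestUS",   # WestUS-specific prefix
    "10.0.0.0/8":   "ER2-WestUS",   # common supernet (shown via one circuit for simplicity)
}
print(pick_route("10.10.1.4", routes))   # ER1-EastUS: the /16 is the longest match

# If ER1-EastUS fails, its /16 is withdrawn and the supernet takes over:
del routes["10.10.0.0/16"]
print(pick_route("10.10.1.4", routes))   # ER2-WestUS via the /8 supernet
```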
Solution 6: Prioritize Network Redundancy for One Region Over Another
If you're operating under budget constraints, can prioritize one region (for example, because critical workloads are hosted in a single location), and want to continue using your existing ExpressRoute setup, this solution could be an ideal fit. In this design, assume AVS in EastUS (AVS-EUS) hosts the critical workloads. To ensure high availability, AVS-ER1 is configured with Global Reach connections to both the local ExpressRoute circuit (ER1-EastUS) and the remote circuit (ER2-WestUS). Make sure to prepend the on-premises prefixes advertised to ER2-WestUS using a public ASN to ensure optimal routing (no ECMP) from AVS-EUS over both circuits (ER1-EastUS and ER2-WestUS).

On the other hand, AVS in WestUS (AVS-WUS) is connected via Global Reach only to its local region's ExpressRoute circuit (ER2-WestUS). If that circuit becomes unavailable, you can establish an on-demand Global Reach connection to ER1-EastUS, either manually or through automation (e.g., a triggered script). This approach introduces temporary downtime until the Global Reach link is established.

You might be thinking: why not set up Global Reach between the AVS-WUS circuit and the remote region's circuits (like connecting AVS-ER2 to ER1-EastUS), just as we did for AVS-EUS? Because it would lead to suboptimal routing. Due to AS-path prepending on ER2-WestUS, if both ER1-EastUS and ER2-WestUS were linked to AVS-ER2, traffic would favor the remote ER1-EastUS circuit since it presents a shorter AS path. As a result, traffic would bypass the local ER2-WestUS circuit, causing inefficient routing. That is why, for AVS-WUS, it is better to use on-demand Global Reach to ER1-EastUS as a backup path, enabled manually or via automation, only when ER2-WestUS becomes unavailable.

Note: VNets will fail over via the local AVS circuit; e.g., Hub-EUS will route to on-premises through AVS-ER1 and ER2-WestUS via the secondary Global Reach connection (purple line).

Solution Insights
- Cost-effective.
- Workloads hosted in AVS within the non-critical region will experience downtime if the local region's ExpressRoute circuit becomes unavailable, until the on-demand Global Reach connection is established.

Conclusion
Each solution has its own advantages and considerations, such as cost-effectiveness, ease of implementation, and increased resiliency. By carefully planning and implementing these solutions, organizations can ensure operational continuity and optimal traffic routing in multi-region deployments.

Network Redundancy from On-Premises to Azure VMware and VNETs in a Single-Region Design
By shruthi_nair and Mays_Algebary

Establishing redundant network connectivity is vital to ensuring the availability, reliability, and performance of workloads operating in hybrid and cloud environments. Proper planning and implementation of network redundancy are key to achieving high availability and sustaining operational continuity. This guide presents common architectural patterns for building redundant connectivity between on-premises datacenters, Azure Virtual Networks (VNets), and Azure VMware Solution (AVS) within a single-region deployment. AVS allows organizations to run VMware-based workloads directly on Azure infrastructure, offering a streamlined path for migrating existing VMware environments to the cloud without the need for significant re-architecture or modification.

Connectivity Between AVS, On-Premises, and Virtual Networks
The diagram below illustrates a common network design pattern using either a Hub-and-Spoke or Virtual WAN (vWAN) topology, deployed within a single Azure region. ExpressRoute is used to establish connectivity between on-premises environments and VNets. The same ExpressRoute circuit is extended to connect AVS to the on-premises infrastructure through ExpressRoute Global Reach.

Consideration: This design presents a single point of failure. If the ExpressRoute circuit (ER1-EastUS) experiences an outage, connectivity between the on-premises environment, VNets, and AVS will be disrupted. Let's examine some solutions for establishing redundant connectivity in case ER1-EastUS experiences an outage.

Solution 1: Network Redundancy via VPN
In this solution, one or more VPN tunnels are deployed as a backup to ExpressRoute. If ExpressRoute becomes unavailable, the VPN provides an alternative connectivity path from the on-premises environment to VNets and AVS. In a Hub-and-Spoke topology, a backup path to and from AVS can be established by deploying Azure Route Server (ARS) in the hub VNet; ARS enables seamless transit routing between ExpressRoute and the VPN gateway. In a vWAN topology, ARS is not required: the vHub's built-in routing service automatically provides transitive routing between the VPN gateway and ExpressRoute.

Note: In a vWAN topology, to ensure optimal route convergence when failing back to ExpressRoute, you should prepend the prefixes advertised from on-premises over the VPN. Without route prepending, VNets may continue to use the VPN as the primary path to on-premises. If prepending is not an option, you can trigger the failover manually by bouncing the VPN tunnel.

Solution Insights:
- Cost-effective and straightforward to deploy.
- Latency: The VPN tunnel introduces additional latency due to its reliance on the public internet and the overhead associated with encryption.
- Bandwidth considerations: Multiple VPN tunnels might be needed to achieve bandwidth comparable to a high-capacity ExpressRoute circuit (e.g., over 10G). For details on VPN gateway SKUs and tunnel throughput, refer to this link.

Solution 2: Network Redundancy via SD-WAN
In this solution, SD-WAN tunnels are deployed as a backup to ExpressRoute. If ExpressRoute becomes unavailable, the SD-WAN provides an alternative connectivity path from the on-premises environment to VNets and AVS. In a Hub-and-Spoke topology, a backup path to and from AVS can be established by deploying ARS in the hub VNet; ARS enables seamless transit routing between ExpressRoute and the SD-WAN appliance. In a vWAN topology, ARS is not required.
The vHub's built-in routing service automatically provides transitive routing between the SD-WAN and ExpressRoute.

Note: In a vWAN topology, to ensure optimal convergence from the VNets when failing back to ExpressRoute, prepend the prefixes advertised from on-premises over the SD-WAN so that their AS path is longer than that of the ExpressRoute-learned routes. If you don't prepend, VNets will continue to use the SD-WAN path to on-premises as the primary path. In this design using Azure vWAN, the SD-WAN can be deployed either within the vHub or in a spoke VNet connected to the vHub; the same principle applies in both cases.

Solution Insights:
- If you have an existing SD-WAN, no additional deployment is needed.
- Bandwidth considerations: Vendor-specific.
- Management considerations: Third-party dependency and the need to manage the HA deployment, except for SD-WAN SaaS solutions.

Solution 3: Network Redundancy via ExpressRoute in Different Peering Locations
Deploy an additional ExpressRoute circuit at a different peering location and enable Global Reach between ER2-PeeringLocation2 and AVS-ER. To use this circuit as a backup path, prepend the on-premises prefixes on the second circuit; otherwise, AVS and the VNets will perform Equal-Cost Multi-Path (ECMP) routing across both circuits to on-premises.

Note: Use a public ASN to prepend the on-premises prefixes, as AVS-ER will strip private ASNs toward AVS. Refer to this link for more details.

Solution Insights:
- Ideal for mission-critical applications, providing predictable throughput and bandwidth for backup.
- Could be cost-prohibitive depending on the bandwidth of the second circuit.

Solution 4: Network Redundancy via Metro ExpressRoute
Metro ExpressRoute is a new ExpressRoute configuration that provides a dual-homed setup with diverse connections to two distinct ExpressRoute peering locations within a city. This configuration offers enhanced resiliency if one peering location experiences an outage.

Solution Insights
- Higher resiliency: Provides increased reliability with a single circuit.
- Limited regional availability: Availability is restricted to specific regions within the metro area.
- Cost-effective.

Conclusion: The choice of failover connectivity should be guided by the specific latency and bandwidth requirements of your workloads. Ultimately, achieving high availability and ensuring continuous operations depend on careful planning and effective implementation of network redundancy strategies.

Azure Networking Portfolio Consolidation
Overview
Over the past decade, Azure Networking has expanded rapidly, bringing incredible tools and capabilities to help customers build, connect, and secure their cloud infrastructure. But we've also heard strong feedback: with over 40 different products, it hasn't always been easy to navigate and find the right solution. The complexity often led to confusion, slower onboarding, and missed capabilities. That's why we're excited to introduce a more focused, streamlined, and intuitive experience across Azure.com, the Azure portal, and our documentation, pivoting around four core networking scenarios:

- Network foundations: Network foundations provide the core connectivity for your resources, using Virtual Network, Private Link, and DNS to build the foundation for your Azure network. Try it with this link: Network foundations
- Hybrid connectivity: Hybrid connectivity securely connects on-premises, private, and public cloud environments, enabling seamless integration, global availability, and end-to-end visibility, presenting major opportunities as organizations advance their cloud transformation. Try it with this link: Hybrid connectivity
- Load balancing and content delivery: Load balancing and content delivery helps you choose the right option to ensure your applications are fast, reliable, and tailored to your business needs. Try it with this link: Load balancing and content delivery
- Network security: Securing your environment is just as essential as building and connecting it. The Network Security hub brings together Azure Firewall, DDoS Protection, and Web Application Firewall (WAF) to provide a centralized, unified approach to cloud protection. With unified controls, it helps you manage security more efficiently and strengthen your security posture. Try it with this link: Network security

This new structure makes it easier to discover the right networking services and get started with just a few clicks, so you can focus more on building and less on searching.

What you'll notice:
- Clearer starting points: Azure Networking is now organized around four core scenarios and twelve essential services, reflecting the most common customer needs. Additional services are presented within the context of these scenarios, helping you stay focused and find the right solution without feeling overwhelmed.
- Simplified choices: We've merged overlapping or closely related services to reduce redundancy. That means fewer, more meaningful options that are easier to evaluate and act on.
- Sunsetting outdated services: To reduce clutter and improve clarity, we're sunsetting underused offerings such as white-label CDN services and China CDN. These capabilities have been rolled into newer, more robust services, so you can focus on what's current and supported.

What this means for you
- Faster decision-making: With clearer guidance and fewer overlapping products, it's easier to discover what you need and move forward confidently.
- More productive sales conversations: With this simplified approach, you'll get more focused recommendations and less confusion among sellers.
- Better product experience: This update makes the Azure Networking portfolio more cohesive and consistent, helping you get started quickly, stay aligned with best practices, and unlock more value from day one.

The portfolio consolidation initiative is a strategic effort to simplify and enhance the Azure Networking portfolio, ensuring better alignment with customer needs and industry best practices.
By focusing on top-line services, combining related products, and retiring outdated offerings, Azure Networking aims to provide a more cohesive and efficient product experience.

Azure.com
Before: Our original solution page on Azure.com was disorganized and static, displaying a small portion of services in no discernible order.
After: The revised solution page is now dynamic, allowing customers to click deeper into each networking and network security category and displaying the top-line services, simplifying the customer experience.

Azure Portal
Before: With over 40 networking services available, we know it can feel overwhelming to figure out what's right for you and where to get started.
After: To make it easier, we've introduced four streamlined networking hubs, each built around a specific scenario, to help you quickly identify the services that match your needs. Each offers an overview to set the stage, key services to help you get started, guidance to support decision-making, and a streamlined left-hand navigation for easy access to all services and features.

Documentation
For documentation, we looked at our current assets as well as created new assets aligned with the changes in the portal experience. As with Azure.com, we found the old experiences were disorganized and not well aligned. We updated our assets to focus on our top-line networking services and to call out the pillars. Our belief is that these changes will allow our customers to more easily find the relevant and important information they need for their Azure infrastructure.

Azure Network Hub
Before the updates, we had a hub page organized around different categories and not well laid out. In the updated hub page, we provide relevant links for top-line services within all of the Azure networking scenarios, as well as a section linking to each scenario's hub page.

Scenario Hub pages
We added scenario hub pages for each of the scenarios. This provides our customers with a central hub for information about the top-line services for each scenario and how to get started. We also included common scenarios and use cases for each scenario, along with references for deeper learning across the Azure Architecture Center, Well-Architected Framework, and Cloud Adoption Framework libraries.

Scenario Overview articles
We created new overview articles for each scenario. These articles were designed to provide customers with an introduction to the services included in each scenario, guidance on choosing the right solutions, and an introduction to the new portal experience. Here's the Load balancing and content delivery overview:

Documentation links
- Azure Networking hub page: Azure networking documentation | Microsoft Learn
- Scenario Hub pages:
  - Azure load balancing and content delivery | Microsoft Learn
  - Azure network foundation documentation | Microsoft Learn
  - Azure hybrid connectivity documentation | Microsoft Learn
  - Azure network security documentation | Microsoft Learn
- Scenario Overview pages:
  - What is load balancing and content delivery? | Microsoft Learn
  - Azure Network Foundation Services Overview | Microsoft Learn
  - What is hybrid connectivity? | Microsoft Learn
  - What is Azure network security? | Microsoft Learn

Improving the user experience is a journey, and in the coming months we plan to do more. Watch for more blogs over the next few months covering further improvements.

Introducing Copilot in Azure for Networking: Your AI-Powered Azure Networking Assistant
As cloud networking grows in complexity, managing and operating these services efficiently can be tedious and time consuming. That's where Copilot in Azure for Networking steps in: a generative AI tool that simplifies every aspect of network management, making it easier for network administrators to stay on top of their Azure infrastructure. With Copilot, network professionals can design, deploy, and troubleshoot Azure Networking services using a streamlined, AI-powered approach.

A Comprehensive Networking Assistant for Azure
We've designed Copilot to feel like an intuitive assistant you can talk to just like a colleague. Copilot understands networking-related questions in simple terms and responds with actionable solutions, drawing from Microsoft's expansive networking knowledge base and the specifics of your unique Azure environment. Think of Copilot as an all-encompassing AI-powered Azure networking assistant. It acts as:
- Your Cloud Networking Specialist, by quickly answering questions about Azure networking services and providing product guidance and configuration suggestions.
- Your Cloud Network Architect, by helping you select the right network services, architectures, and patterns to connect, secure, and scale your workloads in Azure.
- Your Cloud Network Engineer, by helping you diagnose and troubleshoot network connectivity issues with step-by-step guidance.

One of the most powerful features of Copilot in Azure is its ability to automatically diagnose common networking issues. Misconfigurations, connectivity failures, or degraded performance? Copilot can help with step-by-step guidance to resolve these issues quickly, with minimal input and assistance from the user; simply ask questions like "Why can't my VM connect to the internet?". As seen above, once the user identifies the source and destination, Copilot can automatically discover the connectivity path and analyze the state and status of all the network elements in the path to pinpoint issues such as blocked ports, unhealthy network devices, or misconfigured Network Security Groups (NSGs).

Technical Deep Dive: Contextualized Responses with Real-Time Insights
When a user asks a question in the Azure portal, it is sent to the Orchestrator. This step is crucial to generating a deep semantic understanding of the user's question, reasoning over all Azure resources, and determining that the question requires network-specific capabilities to be answered. Copilot then collects contextual information based on what the user is looking at and what they have access to before dispatching the question to the relevant domain-specific plugins. Those plugins then use their service-specific capabilities to answer the user's question. Copilot may even combine information from multiple plugins to provide responses to complex questions. For questions relevant to Azure Networking services, Copilot uses real-time data from sources such as diagnostic APIs, user logs, Azure metrics, and Azure Resource Graph, all while maintaining complete privacy and security and accessing only what the user can access as defined in Azure role-based access control (RBAC), to help generate data-driven insights that keep your network operating smoothly and securely. This information is then used by Copilot to help answer the user's question via a variety of techniques, including but not limited to Retrieval-Augmented Generation (RAG) and grounding.
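The orchestrate-then-ground flow described above can be pictured with a deliberately simplified sketch. Everything in it, the intent check, the plugin function, and the resource data, is invented for illustration; it is not Copilot's actual implementation nor any Azure API, only a way to visualize how a question is routed to a domain plugin and answered from data the user is allowed to see.

```python
# Purely conceptual sketch of an orchestrate -> ground -> answer flow.
# Plugin names, the intent check, and the data are invented for illustration.

def classify_intent(question: str) -> str:
    """Toy intent classifier: route networking questions to the network plugin."""
    keywords = ("vm", "vnet", "expressroute", "firewall", "connect", "nsg")
    return "networking" if any(k in question.lower() for k in keywords) else "general"

def networking_plugin(question: str, user_resources: dict) -> str:
    """Ground the answer in data the signed-in user is allowed to see (RBAC)."""
    context = {k: v for k, v in user_resources.items() if v.get("readable")}
    # A real system would retrieve docs, metrics, and topology and pass them to
    # an LLM (RAG); here we only show that the response is built from the
    # grounded, access-checked context.
    return (f"Answering '{question}' using {len(context)} accessible resource(s): "
            f"{', '.join(context)}")

user_resources = {
    "vm-app01":  {"readable": True,  "type": "virtualMachine"},
    "nsg-app01": {"readable": True,  "type": "networkSecurityGroup"},
    "hub-fw":    {"readable": False, "type": "azureFirewall"},  # no RBAC access
}

question = "Why can't my VM connect to the internet?"
if classify_intent(question) == "networking":
    print(networking_plugin(question, user_resources))
```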
To learn more about how Copilot works, including our Responsible AI commitments, see Copilot in Azure Technical Deep Dive | Microsoft Community Hub.

Summary: Key Benefits, Capabilities, and Sample Prompts
Copilot boosts efficiency by automating routine tasks and offering targeted answers, which saves network administrators time when troubleshooting, configuring, and architecting their environments. Copilot also helps organizations reduce costs by minimizing manual work and catching errors, while empowering customers to resolve networking issues on their own with AI-powered insights backed by Azure expertise. Copilot is equipped with powerful skills to assist users with network product information and selection, resource inventory and topology, and troubleshooting.

For product information, Copilot can answer questions about Azure Networking products by leveraging published documentation, helping users with questions like "What type of Firewall is best suited for my environment?". It offers tailored guidance for selecting and planning network architectures, including specific services like Azure Load Balancer and Azure Firewall. This guidance also extends to resilience-related questions like "What more can I do to ensure my app gateway is resilient?", involving services such as Azure Application Gateway and Azure Traffic Manager, among others.

When it comes to inventory and topology, Copilot can help with questions like "What is the data path between my VM and the internet?" by mapping network resources, visualizing topologies, and tracking traffic paths, providing users with clear topology maps and connectivity graphs. For troubleshooting questions like "Why can't I connect to my VM from on prem?", Copilot analyzes both the control plane and data plane, offering diagnostics at the network and individual service levels. By using on-behalf-of RBAC, Copilot maintains secure, authorized access, ensuring users interact only with resources permitted by their access level.

Looking Forward: Future Enhancements
This is only the first step we are taking toward bringing interactive, generative-AI-powered capabilities to Azure Networking services, and as it evolves over time, future releases will introduce more advanced capabilities. We also acknowledge that today Copilot in preview works better with certain Azure Networking services, and we will continue to onboard more services to the capabilities we are launching today. Some of the more advanced capabilities we are working on include predictive troubleshooting, where Copilot will anticipate potential issues before they impact network performance; network optimization capabilities that suggest ways to optimize your network for better performance, resilience, and reliability; and enhanced security capabilities that provide insights into network security and compliance, helping organizations meet regulatory requirements, starting with the integration of Security Copilot attack investigation capabilities for Azure Firewall.

Conclusion
Copilot in Azure for Networking is intended to enhance the overall Azure experience and help network administrators easily manage their Azure Networking services. By combining AI-driven insights with user-friendly interfaces, it empowers networking professionals and users to plan, deploy, and operate their Azure network.
These capabilities are now in preview; see Azure networking capabilities using Microsoft Copilot in Azure (preview) | Microsoft Learn to learn more and get started.

Controlling Data Egress in Azure
Regulated companies impose stringent requirements on data governance to prevent data exfiltration. For a cloud architect, ensuring the safe and efficient exit of data from the network to external destinations is paramount. This document aims to provide a comprehensive guide to the strategies, best practices, and tools we employ at various customers to maintain robust security measures.

Azure Networking 2025: Powering cloud innovation and AI at global scale
In 2025, Azure's networking platform proved itself as the invisible engine driving the cloud's most transformative innovations. Consider the construction of Microsoft's new Fairwater AI datacenter in Wisconsin – a 315-acre campus housing hundreds of thousands of GPUs. To operate as one giant AI supercomputer, Fairwater required a single flat, ultra-fast network interconnecting every GPU. Azure's networking team delivered: the facility's network fabric links GPUs at 800 Gbps speeds in a non-blocking architecture, enabling 10× the performance of the world's fastest supercomputer. This feat showcases how fundamental networking is to cloud innovation. Whether it's uniting massive AI clusters or connecting millions of everyday users, Azure's globally distributed network is the foundation upon which new breakthroughs are built. In 2025, the surge of AI workloads, data-driven applications, and hybrid cloud adoption put unprecedented demands on this foundation. We responded with bold network investments and innovations. Each new networking feature delivered in 2025, from smarter routing to faster gateways, was not just a technical upgrade but an innovation enabling customers to achieve more. This post recaps the year's major releases across Azure Networking services and highlights how AI both drives and benefits from these advancements.

Unprecedented connectivity for a hybrid and AI era
Hybrid connectivity at scale: Azure's network enhancements in 2025 focused on making global and hybrid connectivity faster, simpler, and ready for the next wave of AI-driven traffic. For enterprises extending on-premises infrastructure to Azure, Azure ExpressRoute private connectivity saw a major leap in capacity: Microsoft announced support for 400 Gbps ExpressRoute Direct ports (available in 2026) to meet the needs of AI supercomputing and massive data volumes. These high-speed ports – which can be aggregated into multi-terabit links – ensure that even the largest enterprises or HPC clusters can transfer data to Azure over dedicated, low-latency links. In parallel, Azure VPN Gateway performance reached new highs, with a generally available upgrade that delivers up to 20 Gbps aggregate throughput per gateway and 5 Gbps per individual tunnel. This is a 3× increase over previous limits, enabling branch offices and remote sites to connect to Azure even more seamlessly, without bandwidth bottlenecks. Together, the ExpressRoute and VPN improvements give customers a spectrum of high-performance options for hybrid networking – from offices and datacenters to the cloud – supporting scenarios like large-scale data migrations, resilient multi-site architectures, and hybrid AI processing.

Simplified global networking: Azure Virtual WAN (vWAN) continued to mature as the one-stop solution for managing global connectivity. Virtual WAN introduced forced tunneling for Secure Virtual Hubs (now in preview), which allows organizations to route all Internet-bound traffic from branch offices or virtual networks back to a central hub for inspection. This capability simplifies the implementation of a "backhaul to hub" security model – for example, forcing branches to use a central firewall or security appliance – without complex user-defined routing.

Empowering multicloud and NVA integration: Azure recognizes that enterprise networks are diverse. Azure Route Server improvements enhanced interoperability with customer equipment and third-party network virtual appliances (NVAs).
Notably, Azure Route Server now supports up to 500 virtual network connections (spokes) per route server, a significant scale boost that enables larger hub-and-spoke topologies and simplified Border Gateway Protocol (BGP) route exchange even in very large environments. This helps customers using SD-WAN appliances or custom firewalls in Azure seamlessly learn routes from hundreds of VNet spokes – maintaining central routing control without manual configuration. Additionally, Azure Route Server introduced a preview of hub routing preference, giving admins the ability to influence BGP route selection (for example, preferring ExpressRoute over a VPN path, or vice versa). This fine-grained control means hybrid networks can be tuned for optimal performance and cost.

Resilience and reliability by design
Azure's growth has been underpinned by making the network "resilient by default." We shipped tools to help validate and improve network resiliency. ExpressRoute Resiliency Insights was released to general availability, delivering an intelligent assessment of an enterprise's ExpressRoute setup. This feature evaluates how well your ExpressRoute circuits and gateways are architected for high availability (for example, using dual circuits in diverse locations, zone-redundant gateways, and so on) and assigns a resiliency index score as a percentage. It highlights suboptimal configurations – such as routes advertised on only one circuit, or a gateway that isn't zone-redundant – and provides recommendations for improvement. Moreover, Resiliency Insights includes a failover simulation tool that can test circuit redundancy by mimicking failures, so you can verify that your connections will survive real-world incidents. By proactively monitoring and testing resilience, Azure is helping customers achieve "always-on" connectivity even in the face of fiber cuts, hardware faults, or other disruptions.

Security, governance, and trust in the network
As enterprises entrust more core business to Azure, the platform's networking services advanced on security and governance – helping customers achieve Zero Trust networks and high compliance with minimal complexity. Azure DNS now offers DNS Security Policies with Threat Intelligence feeds (GA). This capability allows organizations to protect their DNS queries from known malicious domains by leveraging continuously updated threat intelligence. For example, if a known phishing domain or C2 (command-and-control) hostname appears in DNS queries from your environment, Azure DNS can automatically block or redirect those requests. Because DNS is often the first line of detection for malware and phishing activity, this built-in filtering provides a powerful layer of defense that's fully managed by Azure. It's essentially a cloud-delivered DNS firewall using Microsoft's vast threat intelligence – enabling all Azure customers to benefit from enterprise-grade security without deploying additional appliances.

Network traffic governance was another focus. The introduction of forced tunneling in Azure Virtual WAN hubs (preview), mentioned above, is a prime example of where networking meets security compliance.

Optimizing cloud-native and edge networks
We previewed DNS intelligent traffic control features – such as filtering DNS queries to prevent data exfiltration and applying flexible recursion policies – which complement the DNS Security offering in safeguarding name resolution; a toy sketch of the filtering idea follows below.
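The idea behind threat-intelligence-driven DNS filtering can be illustrated in a few lines of code. This is a conceptual toy, not the Azure DNS security policy API: the feed entries, actions, and function are invented, and a real policy engine would also handle wildcards, response handling, and logging.

```python
# Conceptual illustration of DNS filtering against a threat-intelligence feed.
# Domain entries and the policy function are invented; this is not the Azure DNS API.

THREAT_INTEL_FEED = {"malware-c2.example", "phish-login.example"}  # placeholder entries

def resolve_policy(query_name: str, default_action: str = "allow") -> str:
    """Return 'block' for domains on the threat-intel list, else the default action."""
    domain = query_name.rstrip(".").lower()
    labels = domain.split(".")
    # Match the exact domain or any parent zone present on the feed.
    candidates = {".".join(labels[i:]) for i in range(len(labels))}
    return "block" if candidates & THREAT_INTEL_FEED else default_action

print(resolve_policy("phish-login.example."))     # block
print(resolve_policy("portal.azure.com"))         # allow
print(resolve_policy("cdn.malware-c2.example"))   # block (parent zone match)
```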
Meanwhile, for load balancing across regions, Azure Traffic Manager's behind-the-scenes upgrades (as noted earlier) improved reliability, and it is evolving to integrate with modern container-based apps and edge scenarios.

AI-powered networking: Both enabling and enabled by AI
We are infusing AI into networking to make management and troubleshooting more intelligent. Networking functionality in Azure Copilot accelerates tasks like never before: it outlines best practices instantly, and troubleshooting that once required combing through docs and logs can now be conversational. It effectively democratizes networking expertise, helping even smaller IT teams manage sophisticated networks by leveraging AI recommendations.

The future of cloud networking in an AI world
As we close out 2025, one message is clear: networking is strategic. The network is no longer a static utility – it is the adaptive circulatory system of the cloud, determining how far and fast customers can go. By delivering higher speeds, greater reliability, tighter security, and easier management, Azure Networking has empowered businesses to connect everything to anything, anywhere – securely and at scale. These advances unlock new scenarios: global supply chains running in real time over a trusted network, multiplayer AR/VR and gaming experiences delivered without lag, and AI models trained across continents. Looking ahead, AI-powered networking will become the norm. The convergence of AI and network technology means we will see more self-optimizing networks that can heal, defend, and tune themselves with minimal human intervention.