vpn gateway
Can only remote into Azure VM from DC
Hi all, I have set up a site-to-site connection from on-premises to Azure. I can remote into the Azure VM from the main DC on-premises, but not from any other server, and no other server can ping the Azure side either. Why can I only remote into the Azure VM from the server that runs Routing and Remote Access? Any ideas on how I can fix this?

Route-metrics in Azure P2S VPN
We have the following setup in our environment: an Azure VPN Gateway, a S2S VPN between the gateway and our on-premises datacentre, and a P2S VPN between the gateway and clients. The P2S VPN is configured with AAD authentication, and the VPN profile is assigned to clients via Intune using an XML configuration. I have attached a stripped-down version of our .xml containing only non-sensitive information (azurevpn.xml, in the zipped file). Overall this setup works fine; we add some routes to direct the traffic to the right place.

We also have a management VPN (Cisco AnyConnect) deployed that some of our employees use to access our network equipment and other administrative devices. When connected to both this profile and the Azure VPN, they can traverse both the management net and the "customer" net, and can query DNS in both nets. The AnyConnect VPN, just like the Azure VPN, has routes assigned to it; when it connects, one of those routes is assigned a metric of 35. When the P2S VPN is then connected, it assigns a metric of 311 to the same route. 311 seems to be the default metric for the routes specified in our .xml. This causes our issue: we need to assign the P2S route a metric lower than 35.

Is there any way to assign a metric to a route that we push with the .xml? The Microsoft Docs at https://docs.microsoft.com/en-us/azure/vpn-gateway/vpn-profile-intune, which link to https://docs.microsoft.com/en-us/windows/client-management/mdm/vpnv2-csp, say this should be possible. However, if we add, for example, <metric>25</metric> to the XML, it is ignored on the client. I have attached a section of the AzureVpnCxn.log, stripped of sensitive information, where this can be seen. Please advise.
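For reference, the VPNv2 CSP that the second of those docs describes models a per-route metric as a Metric element on each RouteList entry. A minimal sketch of that shape is below; element names follow the VPNv2 CSP documentation, the prefix values are hypothetical, and whether the Azure VPN Client honors Metric from its own profile XML is exactly what is in question here:

  <Route>
    <Address>10.10.0.0</Address>
    <PrefixSize>16</PrefixSize>
    <!-- Metric is documented in the VPNv2 CSP RouteList; per the log above,
         the Azure VPN Client appears to ignore it and applies its default of 311 -->
    <Metric>25</Metric>
  </Route>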
Please clarify the required certificates for a P2S connection in Azure

Hi, for a Point-to-Site connection in Azure, certificates are exported from Windows. Depending on the Windows system, I have seen different situations in certmgr.msc, as below:

1st Windows system
2nd Windows system
3rd Windows system

Please let me know: which certificates do we need to export in certmgr.msc? And if we need to export a Personal certificate, what should I do when no certificates are shown under Personal, or only unrelated ones (such as Adobe certificates) appear there? Please clarify, with any additional required information. We'll be thankful for your assistance.

With Regards
NndnG

Azure Networking Portfolio Consolidation
Overview

Over the past decade, Azure Networking has expanded rapidly, bringing incredible tools and capabilities to help customers build, connect, and secure their cloud infrastructure. But we've also heard strong feedback: with over 40 different products, it hasn't always been easy to navigate and find the right solution. The complexity often led to confusion, slower onboarding, and missed capabilities. That's why we're excited to introduce a more focused, streamlined, and intuitive experience across Azure.com, the Azure portal, and our documentation, pivoting around four core networking scenarios:

Network foundations: Network foundations provide the core connectivity for your resources, using Virtual Network, Private Link, and DNS to build the foundation for your Azure network. Try it with this link: Network foundations

Hybrid connectivity: Hybrid connectivity securely connects on-premises, private, and public cloud environments, enabling seamless integration, global availability, and end-to-end visibility, presenting major opportunities as organizations advance their cloud transformation. Try it with this link: Hybrid connectivity

Load balancing and content delivery: Load balancing and content delivery helps you choose the right option to ensure your applications are fast, reliable, and tailored to your business needs. Try it with this link: Load balancing and content delivery

Network security: Securing your environment is just as essential as building and connecting it. The Network Security hub brings together Azure Firewall, DDoS Protection, and Web Application Firewall (WAF) to provide a centralized, unified approach to cloud protection. With unified controls, it helps you manage security more efficiently and strengthen your security posture. Try it with this link: Network security

This new structure makes it easier to discover the right networking services and get started with just a few clicks, so you can focus more on building and less on searching.

What you'll notice:

Clearer starting points: Azure Networking is now organized around four core scenarios and twelve essential services, reflecting the most common customer needs. Additional services are presented within the context of these scenarios, helping you stay focused and find the right solution without feeling overwhelmed.

Simplified choices: We've merged overlapping or closely related services to reduce redundancy. That means fewer, more meaningful options that are easier to evaluate and act on.

Sunsetting outdated services: To reduce clutter and improve clarity, we're sunsetting underused offerings such as white-label CDN services and China CDN. These capabilities have been rolled into newer, more robust services, so you can focus on what's current and supported.

What this means for you

Faster decision-making: With clearer guidance and fewer overlapping products, it's easier to discover what you need and move forward confidently.

More productive sales conversations: With this simplified approach, you'll get more focused recommendations and less confusion among sellers.

Better product experience: This update makes the Azure Networking portfolio more cohesive and consistent, helping you get started quickly, stay aligned with best practices, and unlock more value from day one.

The portfolio consolidation initiative is a strategic effort to simplify and enhance the Azure Networking portfolio, ensuring better alignment with customer needs and industry best practices.
By focusing on top-line services, combining related products, and retiring outdated offerings, Azure Networking aims to provide a more cohesive and efficient product experience.

Azure.com

Before: Our original solution page on Azure.com was disorganized and static, displaying a small portion of services in no discernible order.

After: The revised solution page is now dynamic, allowing customers to click deeper into each networking and network security category and displaying the top-line services, simplifying the customer experience.

Azure Portal

Before: With over 40 networking services available, we know it can feel overwhelming to figure out what's right for you and where to get started.

After: To make it easier, we've introduced four streamlined networking hubs, each built around a specific scenario, to help you quickly identify the services that match your needs. Each offers an overview to set the stage, key services to help you get started, guidance to support decision-making, and a streamlined left-hand navigation for easy access to all services and features.

Documentation

For documentation, we reviewed our current assets and created new ones aligned with the changes in the portal experience. As on Azure.com, we found the old experiences were disorganized and poorly aligned. We updated our assets to focus on our top-line networking services and to call out the pillars. We believe these changes will let our customers more easily find the relevant and important information they need for their Azure infrastructure.

Azure Network Hub

Before the updates, the hub page was organized around different categories and was not well laid out. The updated hub page provides relevant links for top-line services within all of the Azure networking scenarios, as well as a section linking to each scenario's hub page.

Scenario Hub pages

We added scenario hub pages for each of the scenarios. These give customers a central hub for information about the top-line services in each scenario and how to get started. We also included common scenarios and use cases, along with references for deeper learning across the Azure Architecture Center, Well-Architected Framework, and Cloud Adoption Framework libraries.

Scenario Overview articles

We created new overview articles for each scenario. These articles introduce the services included in each scenario, offer guidance on choosing the right solutions, and introduce the new portal experience. Here's the Load balancing and content delivery overview:

Documentation links

Azure Networking hub page: Azure networking documentation | Microsoft Learn

Scenario Hub pages:
Azure load balancing and content delivery | Microsoft Learn
Azure network foundation documentation | Microsoft Learn
Azure hybrid connectivity documentation | Microsoft Learn
Azure network security documentation | Microsoft Learn

Scenario Overview pages:
What is load balancing and content delivery? | Microsoft Learn
Azure Network Foundation Services Overview | Microsoft Learn
What is hybrid connectivity? | Microsoft Learn
What is Azure network security? | Microsoft Learn

Improving the user experience is a journey, and in the coming months we plan to do more. Watch for more blogs with further improvements.

Network Redundancy Between AVS, On-Premises, and Virtual Networks in a Multi-Region Design
By Mays_Algebary and shruthi_nair

Establishing redundant network connectivity is vital to ensuring the availability, reliability, and performance of workloads operating in hybrid and cloud environments. Proper planning and implementation of network redundancy are key to achieving high availability and sustaining operational continuity. This article focuses on network redundancy in a multi-region architecture. For details on the single-region design, refer to this blog.

The diagram below illustrates a common network design pattern for multi-region deployments, using either a Hub-and-Spoke or Azure Virtual WAN (vWAN) topology, and serves as the baseline for establishing redundant connectivity throughout this article. In each region, the Hub or Virtual Hub (vHub) extends Azure connectivity to Azure VMware Solution (AVS) via an ExpressRoute circuit. The regional Hub/vHub is connected to on-premises environments by cross-connecting (bowtie) both local and remote ExpressRoute circuits, ensuring redundancy. The concept of weight, used to influence traffic routing preferences, is discussed in the next section. The diagram below illustrates the traffic flow when both circuits are up and running.

Design Considerations

If a region loses its local ExpressRoute connection, AVS in that region will lose connectivity to the on-premises environment. However, VNets will still retain connectivity to on-premises via the remote region's ExpressRoute circuit. The solutions discussed in this article aim to ensure redundancy for both AVS and VNets.

Looking at the diagram above, you might wonder: why do we need to set weights at all, and why do the AVS-ER connections (1b/2b) use the same weight as the primary on-premises connections (1a/2a)? Weight is used to influence routing decisions and ensure optimal traffic flow. In this scenario, both ExpressRoute circuits, ER1-EastUS and ER2-WestUS, advertise the same prefixes to the Azure ExpressRoute gateway. As a result, traffic from the VNet to on-premises would be ECMPed across both circuits. To avoid suboptimal routing and ensure that traffic from the VNets prefers the local ExpressRoute circuit, a higher weight is assigned to the local path. (A minimal CLI sketch of this weight setup follows the note below.)

It is also critical that the ExpressRoute gateway connections to on-premises (1a/2a) and to AVS (1b/2b) are assigned the same weight. Otherwise, traffic from the VNet to AVS will follow a less efficient route, because AVS routes are also learned over ER1-EastUS via Global Reach. For instance, VNets in EastUS would connect to AVS EUS through the ER1-EastUS circuit via Global Reach (as shown by the blue dotted line), instead of using the direct local path (orange line). This suboptimal routing is illustrated in the diagram below.

Now let us look at the solutions that can achieve redundant connectivity. The following solutions apply to both Hub-and-Spoke and vWAN topologies unless noted otherwise.

Note: The diagrams in the upcoming solutions illustrate only the failover traffic flow.
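As referenced above, here is a minimal Azure CLI sketch of the weight setup, using hypothetical resource and circuit names (circuits outside the gateway's resource group need their full resource IDs, and a connection to an AVS circuit normally also needs an authorization key from that circuit):

  # Local on-premises circuit (1a) and local AVS circuit (1b): same, higher weight
% az network vpn-connection create --resource-group rg-eastus --name conn-er1-local \
    --vnet-gateway1 ergw-eastus --express-route-circuit2 ER1-EastUS --routing-weight 100
% az network vpn-connection create --resource-group rg-eastus --name conn-avs-local \
    --vnet-gateway1 ergw-eastus --express-route-circuit2 AVS-ER1 --routing-weight 100
  # Bowtie to the remote region's circuit: lower weight, used only if the local path fails
% az network vpn-connection create --resource-group rg-eastus --name conn-er2-remote \
    --vnet-gateway1 ergw-eastus --express-route-circuit2 ER2-WestUS --routing-weight 10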
Solution 1: Network Redundancy via ExpressRoute in a Different Peering Location

In this solution, deploy an additional ExpressRoute circuit in a different peering location within the same metro area (e.g., ER2-PeeringLocation2) and enable Global Reach between this new circuit and the existing AVS ExpressRoute circuit (e.g., AVS-ER1). If you intend to use this second circuit as a failover path, apply prepends to the on-premises prefixes advertised over it. Alternatively, if you want to use it as an active-active redundant path, do not prepend routes; in that case, both AVS and Azure VNets will ECMP traffic across both circuits (e.g., ER1-EastUS and ER-PeeringLocation2) when both are available.

Note: Compared to the standard topology, this design removes both the ExpressRoute cross-connect (bowtie) and the weight settings. When adding a second circuit in the same metro, there is no benefit in keeping them; otherwise, traffic from the Azure VNet will prefer the local AVS circuit (AVS-ER1/AVS-ER2) to reach on-premises due to the higher weight, because on-premises routes are also learned over the AVS circuit (AVS-ER1/AVS-ER2) via Global Reach. Also, when connecting the new circuit (e.g., ER-PeeringLocation2), remove all weight settings across the connections. Traffic will then follow the optimal path based on BGP prepending on the new circuit, or load-balance (ECMP) if no prepend is applied.

Note: Use a public ASN to prepend the on-premises prefixes, as the AVS circuit (e.g., AVS-ER) will strip private ASNs toward AVS.

Solution Insights
Ideal for mission-critical applications, providing predictable throughput and bandwidth for backup.
Could be cost-prohibitive depending on the bandwidth of the second circuit.
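Enabling Global Reach between the new circuit and the AVS circuit comes down to linking their private peerings. A minimal Azure CLI sketch with hypothetical names (the /29 is a required unused address range for the Global Reach link, and a peer circuit in another subscription or resource group needs its full resource ID, plus an authorization key where applicable). The same command, run on demand, is also the scripted failover mentioned in Solution 6 below:

% az network express-route peering connection create \
    --resource-group rg-eastus --circuit-name AVS-ER1 \
    --peering-name AzurePrivatePeering --name gr-avs-to-er2 \
    --peer-circuit ER2-PeeringLocation2 --address-prefix 192.168.8.0/29
  # Tear the link down again with: az network express-route peering connection delete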
Solution 2: Network Redundancy via ExpressRoute Direct

In this solution, ExpressRoute Direct is used to provision multiple circuits from a single port pair in each region; for example, ER2-WestUS and ER4-WestUS are created from the same port pair. This allows you to dedicate one circuit to local traffic and another to failover from the remote region. To ensure optimal routing, prepend the on-premises prefixes using a public ASN on the newly created circuits (e.g., ER3-EastUS and ER4-WestUS), and remove all weight settings across the connections; traffic will follow the optimal path based on the BGP prepend on the new circuits. For instance, if ER1-EastUS becomes unavailable, traffic from AVS and VNets in the EastUS region will automatically route through the ER4-WestUS circuit, ensuring continuity.

Note: Compared to the standard topology, this design connects the newly created ExpressRoute circuits (e.g., ER3-EastUS/ER4-WestUS) to the remote region's ExpressRoute gateway (black dotted lines) instead of cross-connecting (bowtie) the primary circuits (e.g., ER1-EastUS/ER2-WestUS).

Solution Insights
Easy to implement if you already have ExpressRoute Direct.
ExpressRoute Direct supports over-provisioning: you can create logical ExpressRoute circuits on top of your existing 10-Gbps or 100-Gbps ExpressRoute Direct resource, up to double the subscribed bandwidth (20 Gbps or 200 Gbps). For example, you can create two 10-Gbps ExpressRoute circuits within a single 10-Gbps ExpressRoute Direct resource (port pair).
Ideal for mission-critical applications, providing predictable throughput and bandwidth for backup.

Solution 3: Network Redundancy via ExpressRoute Metro

Metro ExpressRoute is a new configuration that enables dual-homed connectivity to two different peering locations within the same city. This setup enhances resiliency by allowing traffic to keep flowing, on the same circuit, even if one peering location goes down.

Solution Insights
Higher resiliency: provides increased reliability with a single circuit.
Limited regional availability: currently available in select regions, with more being added over time.
Cost-effective: offers redundancy without significantly increasing costs.

Solution 4: Deploy VPN as a Backup to ExpressRoute

This solution mirrors Solution 1 of the single-region design, extended to multiple regions. In this approach, a VPN serves as the backup path for each region in the event of an ExpressRoute failure. In a Hub-and-Spoke topology, a backup path to and from AVS can be established by deploying Azure Route Server (ARS) in the hub VNet; ARS enables seamless transit routing between ExpressRoute and the VPN gateway. In a vWAN topology, ARS is not required; the vHub's built-in routing service automatically provides transitive routing between the VPN gateway and ExpressRoute.

In this design, you should not cross-connect the ExpressRoute circuits (e.g., ER1-EastUS and ER2-WestUS) to the ExpressRoute gateways in the hub VNets (e.g., Hub-EUS or Hub-WUS). Doing so leads to routing issues, where the hub VNet programs only the on-premises routes learned via ExpressRoute. For instance, in the EastUS region, if the primary circuit (ER1-EastUS) goes down, Hub-EUS will receive on-premises routes from both the VPN tunnel and the remote ER2-WestUS circuit, but it will prefer and program only the ExpressRoute-learned routes from ER2-WestUS. Since ExpressRoute gateways do not provide route transitivity between circuits, AVS connected via AVS-ER will not receive the on-premises prefixes, resulting in routing failures.

Note: In a vWAN topology, to ensure optimal route convergence when failing back to ExpressRoute, prepend the prefixes advertised from on-premises over the VPN. Without route prepending, VNets may continue to use the VPN as the primary path to on-premises. If prepending is not an option, you can trigger the failover manually by bouncing the VPN tunnel.

Solution Insights
Cost-effective and straightforward to deploy.
Increased latency: the VPN tunnel over the internet adds latency due to encryption overhead.
Bandwidth considerations: multiple VPN tunnels might be needed to achieve bandwidth comparable to a high-capacity ExpressRoute circuit (e.g., over 1 Gbps). For details on VPN gateway SKUs and tunnel throughput, refer to this link.
Because you cannot cross-connect the ExpressRoute circuits, VNets will fail over via the VPN rather than the remote region's ExpressRoute circuit.
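For the Hub-and-Spoke variant, standing up ARS is a small amount of CLI. A sketch with hypothetical names; the subnet must be named RouteServerSubnet (at least a /27), and enabling branch-to-branch traffic is what allows routes to transit between the VPN and ExpressRoute gateways:

% az network routeserver create --resource-group rg-hub-eus --name ars-hub-eus \
    --hosted-subnet <RouteServerSubnet-resource-id> --public-ip-address ars-pip
  # Enable transit between the VPN gateway and the ExpressRoute gateway
% az network routeserver update --resource-group rg-hub-eus --name ars-hub-eus \
    --allow-b2b-traffic true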
Solution 5: Network Redundancy via Multiple On-Premises Locations (Split Prefix)

In many scenarios, customers advertise the same prefix from multiple on-premises locations to Azure. If the customer can instead split prefixes across the different on-premises sites, it simplifies the implementation of a failover strategy using the existing ExpressRoute circuits. In this design, each on-premises site advertises region-specific prefixes (e.g., 10.10.0.0/16 for EastUS and 10.70.0.0/16 for WestUS), along with a common supernet (e.g., 10.0.0.0/8). Under normal conditions, AVS and VNets in each region use longest-prefix match to route traffic efficiently to the appropriate on-premises location. If ER1-EastUS becomes unavailable, AVS and VNets in EastUS automatically fail over to ER2-WestUS, routing traffic via the supernet prefix to maintain connectivity.

Solution Insights
Cost-effective: no additional deployment; uses the existing ExpressRoute circuits.
Advertising specific prefixes per region may require additional planning.
Ideal for mission-critical applications, providing predictable throughput and bandwidth for backup.

Solution 6: Prioritize Network Redundancy for One Region Over Another

If you are operating under budget constraints, can prioritize one region (for example, because it hosts your critical workloads), and want to continue using your existing ExpressRoute setup, this solution could be an ideal fit. In this design, assume AVS in EastUS (AVS-EUS) hosts the critical workloads. To ensure high availability, AVS-ER1 is configured with Global Reach connections to both the local ExpressRoute circuit (ER1-EastUS) and the remote circuit (ER2-WestUS). Make sure to prepend the on-premises prefixes advertised to ER2-WestUS using a public ASN, so that routing from AVS-EUS stays optimal (no ECMP) across the two circuits.

AVS in WestUS (AVS-WUS), on the other hand, is connected via Global Reach only to its local region's ExpressRoute circuit (ER2-WestUS). If that circuit becomes unavailable, you can establish an on-demand Global Reach connection to ER1-EastUS, either manually or through automation (e.g., a triggered script running the peering connection command sketched under Solution 1). This approach introduces temporary downtime until the Global Reach link is established.

You might be wondering: why not set up Global Reach between the AVS-WUS circuit and the remote region's circuit (connecting AVS-ER2 to ER1-EastUS), just as we did for AVS-EUS? Because it would lead to suboptimal routing. Due to the AS-path prepending on ER2-WestUS, if both ER1-EastUS and ER2-WestUS were linked to AVS-ER2, traffic would favor the remote ER1-EastUS circuit, since it presents a shorter AS path, bypassing the local ER2-WestUS circuit. That is why, for AVS-WUS, it is better to use on-demand Global Reach to ER1-EastUS as a backup path, enabled manually or via automation only when ER2-WestUS becomes unavailable.

Note: VNets will fail over via the local AVS circuit. For example, Hub-EUS will route to on-premises through AVS-ER1 and ER2-WestUS via the secondary Global Reach link (purple line).

Solution Insights
Cost-effective.
Workloads hosted in AVS in the non-critical region will experience downtime if the local region's ExpressRoute circuit becomes unavailable, until the on-demand Global Reach connection is established.
Conclusion

Each solution has its own advantages and considerations, such as cost-effectiveness, ease of implementation, and increased resiliency. By carefully planning and implementing these solutions, organizations can ensure operational continuity and optimal traffic routing in multi-region deployments.

Migrating Basic SKU Public IPs on Azure VPN Gateway to Standard SKU

Background

The Basic SKU public IP addresses associated with Azure VPN Gateway are scheduled for retirement in September 2025, so migration to the Standard SKU is essential. This document compares three migration methods, with detailed steps, advantages, disadvantages, and considerations for each.

1. Using Microsoft's migration tool (recommended)

With Microsoft's migration tool, the gateway's IP address does not change: there is no need to update the configuration on the on-premises side, and the current configuration can be used as-is. The migration tool is currently available in preview for active-passive VPN gateways with VpnGw1-5 SKUs. For more details, refer to the documentation on Microsoft Learn: About migrating a Basic SKU public IP address to Standard SKU.

Steps:
1. Check the availability of the migration tool: Confirm the release date of the migration tool for your VPN gateway configuration through Azure service announcements or the VPN Gateway documentation (What's new in Azure VPN Gateway?; Migrating a Basic SKU public IP address to Standard SKU | VPN Gateway FAQ).
2. Prepare for migration: Verify the gateway subnet: ensure it is /27 or larger; if it is /28 or smaller, the migration tool will fail (see the subnet check sketched below). Test: it is advisable to evaluate the migration tool in a non-production environment beforehand.
3. Plan the migration: Schedule maintenance windows and inform stakeholders.
4. Start the migration: Run the migration tool from the Azure portal, following the documentation provided with the tool (ref: How to migrate a Basic SKU public IP address to Standard SKU - Preview).
5. Monitor the migration: Watch the gateway status in the Azure portal during the migration process.
6. Verify after migration: Confirm that the VPN connection functions correctly once the migration is complete.

Advantages:
Downtime is estimated at up to 10 minutes.
The migration steps are straightforward.

Considerations:
The release date of the tool varies by configuration (active-passive: April-May 2025; active-active: July-August 2025).
Gateway subnet size restrictions (/27 or larger required).

Cautions:
Regularly check the release date of the tool.
Verify, and if necessary adjust, the gateway subnet size before migration.
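The gateway subnet prerequisite is easy to verify from the CLI before relying on the tool; a sketch with hypothetical resource names (the subnet must be /27 or larger, so a /28 or /29 here means the tool will fail):

% az network vnet subnet show --resource-group rg-network --vnet-name hub-vnet \
    --name GatewaySubnet --query addressPrefix -o tsv
  # Example output: 10.0.255.0/27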
2. Deleting and recreating the VPN Gateway within the existing virtual network

Manual migration without Microsoft's tool is another option, though it causes downtime and may change the gateway's IP address. It becomes a viable alternative when the GatewaySubnet is smaller than /27 and the migration tool is unavailable.

Steps:
1. Collect the current VPN Gateway configuration:
   Connection types (site-to-site, VNet-to-VNet, etc.)
   Connection details (IP address of the on-premises VPN device, shared key, gateway IP address of the Azure VNet, etc.)
   IPsec/IKE policies (proposals, hash algorithms, SA lifetime, etc.)
   BGP configuration (ASN, peer IP address, if used)
   Routing configuration (custom routes, route tables, etc.)
   VPN Gateway SKU (record for reference)
   Resource ID of the public IP address (confirm during deletion)
   You can fetch the VPN Gateway configuration with the Azure CLI command below:
   % az network vnet-gateway show --resource-group <your-resource-group-name> --name <your-vpn-gateway-name>
2. Delete the existing VPN Gateway: Use the Azure portal, Azure CLI, or PowerShell to delete the existing VPN Gateway.
3. Upgrade the public IP address to Standard SKU: Use the Azure portal, Azure CLI, or PowerShell to upgrade the disassociated public IP. For a detailed walkthrough, consult the Microsoft Learn documentation: Upgrade Basic Public IP Address to Standard SKU in Azure. Be aware that the IP address may change if the original public IP was dynamic or if a new public IP address is created. See also: Azure Public IPs are now zone-redundant by default.
4. Create a new VPN Gateway (Standard SKU): Use the Azure portal, Azure CLI, or PowerShell to create a new VPN Gateway with the following settings:
   Virtual network: select the existing virtual network.
   Gateway subnet: select the existing gateway subnet. If it is smaller than /27, it is advisable to expand it to prevent potential future limitations.
   Public IP address: select the Standard SKU public IP address upgraded or created in step 3.
   VPN type: choose policy-based or route-based to match the existing configuration.
   SKU: select a Standard SKU (e.g., VpnGw1, VpnGw2). If zone redundancy is required, select the corresponding zone-redundant SKU (e.g., VpnGw1AZ, VpnGw2AZ).
   Other settings (routing options, active-active configuration, etc.) should match the existing configuration.
5. Reconfigure connections: Using the collected configuration information, re-establish the VPN connections (site-to-site, VNet-to-VNet, etc.) on the new VPN Gateway, and reset IPsec/IKE policies, shared keys, BGP peering, and so on.
6. Reconfigure routing: If necessary, adjust custom routes and route tables to point to the new VPN Gateway.
7. Test and verify connections: Confirm all connections are established correctly and traffic flows as expected.

(A CLI sketch of steps 2 through 4 follows this list.)

Advantages:
Immediate start: no need to wait for a migration tool.
Stays within the existing virtual network: no need to create a new one.

Considerations:
Downtime: all VPN connections are down between the deletion and recreation of the VPN Gateway. The duration depends on the gateway creation time and the connection reconfiguration time.
Manual re-entry of configuration: the existing VPN Gateway configuration must be collected and entered into the new gateway by hand, which can introduce input errors.

Cautions:
Consider this approach only if downtime is acceptable.
Record the current configuration details before deletion.
The IP address may change depending on the situation.
All VPN tunnels need to be re-established.
If there are firewalls in place, the new public IP must be whitelisted.
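As referenced above, a minimal Azure CLI sketch of steps 2 through 4, with hypothetical names. The in-place SKU upgrade assumes the public IP is disassociated and uses static allocation, per the walkthrough linked in step 3:

% az network vnet-gateway delete --resource-group rg-network --name vpngw-old
  # Upgrade the now-disassociated Basic public IP (set static allocation first)
% az network public-ip update --resource-group rg-network --name vpngw-pip \
    --allocation-method Static
% az network public-ip update --resource-group rg-network --name vpngw-pip --sku Standard
  # Recreate the gateway on the same GatewaySubnet with the upgraded IP
% az network vnet-gateway create --resource-group rg-network --name vpngw-new \
    --vnet hub-vnet --public-ip-addresses vpngw-pip \
    --gateway-type Vpn --vpn-type RouteBased --sku VpnGw2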
3. Setting up a Standard SKU VPN Gateway in a new virtual network and migrating gradually

Another approach is to set up a Standard SKU VPN Gateway in a separate virtual network and transition to it gradually. This minimizes downtime by keeping the current VPN Gateway operational while the new environment is established. Detailed planning and testing are essential to prevent routing-switch errors and connection configuration issues.

Steps:
1. Create a new virtual network and VPN Gateway: Create a new virtual network to host a new VPN Gateway with a Standard SKU public IP address. Create a gateway subnet (/27 or larger recommended) within the new virtual network. Assign a Standard SKU public IP address and create a new VPN Gateway (Standard SKU), selecting the necessary SKU (e.g., VpnGw1 through VpnGw5) and zone redundancy if needed (e.g., VpnGw1AZ through VpnGw5AZ).
2. Configure connections between the new VPN Gateway and the on-premises VPN device: Configure the IPsec/IKE site-to-site connections using the new VPN Gateway's public IP address and the on-premises VPN device information, and configure BGP if necessary.
3. Adjust routing: Adjust routing so that traffic from the on-premises network to Azure goes through the new VPN Gateway. This involves changing the settings of the on-premises VPN device and updating the routing policies of network equipment. Adjust Azure-side routing (user-defined routes, or UDRs, etc.) to go through the new VPN Gateway if necessary. In a hub-and-spoke architecture, establish peering between the spoke virtual networks and the newly created virtual network, and ensure that the "Enable 'Spoke-xxx' to use 'Hub-yyy's' remote gateway or route server" option is configured appropriately. (A peering sketch follows this list.)
4. Switch and monitor traffic: Gradually switch traffic to the new VPN Gateway, monitoring the stability and performance of the VPN connections during the switch.
5. Stop and delete the old VPN Gateway: Once all traffic is confirmed to flow through the new VPN Gateway, stop and delete the old VPN Gateway, then delete the Basic SKU public IP address that was associated with it.

Advantages:
Minimizes downtime: maintains existing VPN connections while the new environment is built, significantly reducing service interruption.
Easy rollback: you can revert to the old environment if issues arise.
Flexible configuration: the new virtual network allows you to consider more flexible network designs.

Considerations:
Additional cost: temporarily running a second VPN Gateway incurs additional costs.
Configuration complexity: managing multiple VPN Gateways and connections can complicate the configuration.
IP address change: the new VPN Gateway is assigned a new public IP address, requiring changes to the on-premises VPN device settings.

Cautions:
Detailed migration planning and testing are essential.
New VPN tunnels must be established to the newly created Standard SKU public IP address.
If there are firewalls in place, the new public IP must be whitelisted.
Beware of routing-switch errors.

Recommended scenarios:
When minimizing downtime is a priority.
When network configuration changes are involved.
When preparing for rollback.
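As referenced in step 3, a minimal sketch of the hub-spoke peering toward the new hub, with hypothetical names; the --use-remote-gateways flag on the spoke side corresponds to the portal's remote gateway option quoted above, and it can only be enabled once the hub's gateway exists:

  # New hub side: allow spokes to use this VNet's gateway
% az network vnet peering create --resource-group rg-newhub --name newhub-to-spoke1 \
    --vnet-name newhub-vnet --remote-vnet <spoke1-vnet-resource-id> \
    --allow-vnet-access --allow-gateway-transit
  # Spoke side: send on-premises traffic via the new hub's VPN gateway
% az network vnet peering create --resource-group rg-spoke1 --name spoke1-to-newhub \
    --vnet-name spoke1-vnet --remote-vnet <newhub-vnet-resource-id> \
    --allow-vnet-access --use-remote-gateways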
Comparison of migration methods

Migration method | Downtime | IP address change | Rollback | Configuration complexity
Using Microsoft's migration tool | Short (up to 10 minutes) | None (IP maintained) | Possible until the final stage | Low
Deleting and recreating within the existing virtual network | Long | Conditional | Not possible | Medium
Gradual migration to a new virtual network | Very short | Yes (new IP) | Possible | High

Conclusion

If minimizing downtime is essential, use Microsoft's migration tool or migrate gradually to a new virtual network. Deleting and recreating within the existing virtual network involves downtime and should be evaluated thoroughly. Choose the migration method based on your requirements, acceptable downtime, network configuration complexity, and available resources.

Important notes (common to all methods)

Basic SKU public IP addresses are planned to be retired in September 2025, and migration to the Standard SKU must be completed by this deadline. After migration, the VPN Gateway SKU may be automatically updated to a zone-redundant SKU. Please refer to the article on Gateway SKU migration for detailed information on the implications of these SKU changes. To learn more about Gateway SKU consolidation and migration, see About VPN Gateway SKU consolidation and migration.

Network Redundancy from On-Premises to Azure VMware and VNETs in a Single-Region Design
By shruthi_nair and Mays_Algebary

Establishing redundant network connectivity is vital to ensuring the availability, reliability, and performance of workloads operating in hybrid and cloud environments. Proper planning and implementation of network redundancy are key to achieving high availability and sustaining operational continuity. This guide presents common architectural patterns for building redundant connectivity between on-premises datacenters, Azure Virtual Networks (VNets), and Azure VMware Solution (AVS) within a single-region deployment. AVS allows organizations to run VMware-based workloads directly on Azure infrastructure, offering a streamlined path for migrating existing VMware environments to the cloud without significant re-architecture or modification.

Connectivity Between AVS, On-Premises, and Virtual Networks

The diagram below illustrates a common network design pattern using either a Hub-and-Spoke or Virtual WAN (vWAN) topology, deployed within a single Azure region. ExpressRoute establishes connectivity between the on-premises environment and the VNets, and the same ExpressRoute circuit is extended to connect AVS to the on-premises infrastructure through ExpressRoute Global Reach.

Consideration: This design presents a single point of failure. If the ExpressRoute circuit (ER1-EastUS) experiences an outage, connectivity between the on-premises environment, VNets, and AVS is disrupted.

Let's examine some solutions for establishing redundant connectivity in case ER1-EastUS experiences an outage.

Solution 1: Network Redundancy via VPN

In this solution, one or more VPN tunnels are deployed as a backup to ExpressRoute. If ExpressRoute becomes unavailable, the VPN provides an alternative connectivity path from the on-premises environment to the VNets and AVS. In a Hub-and-Spoke topology, a backup path to and from AVS can be established by deploying Azure Route Server (ARS) in the hub VNet; ARS enables seamless transit routing between ExpressRoute and the VPN gateway. In a vWAN topology, ARS is not required: the vHub's built-in routing service automatically provides transitive routing between the VPN gateway and ExpressRoute.

Note: In a vWAN topology, to ensure optimal route convergence when failing back to ExpressRoute, prepend the prefixes advertised from on-premises over the VPN. Without route prepending, VNets may continue to use the VPN as the primary path to on-premises. If prepending is not an option, you can trigger the failover manually by bouncing the VPN tunnel.

Solution Insights:
Cost-effective and straightforward to deploy.
Latency: the VPN tunnel introduces additional latency due to its reliance on the public internet and the overhead associated with encryption.
Bandwidth considerations: multiple VPN tunnels might be needed to achieve bandwidth comparable to a high-capacity ExpressRoute circuit (e.g., over 10 Gbps). For details on VPN gateway SKUs and tunnel throughput, refer to this link.
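If the hub does not already have a VPN gateway for this backup path, a minimal sketch of deploying one and its on-premises connection, with hypothetical names. The ASN matters here: a VPN gateway that coexists with Azure Route Server must keep ASN 65515 (also its default):

% az network vnet-gateway create --resource-group rg-hub --name vpngw-hub \
    --vnet hub-vnet --public-ip-addresses vpngw-pip \
    --gateway-type Vpn --vpn-type RouteBased --sku VpnGw2 --asn 65515
  # BGP-enabled S2S tunnel to the on-premises device (modeled as a local network gateway)
% az network vpn-connection create --resource-group rg-hub --name conn-onprem-backup \
    --vnet-gateway1 vpngw-hub --local-gateway2 lgw-onprem \
    --shared-key "<pre-shared-key>" --enable-bgp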
Solution 2: Network Redundancy via SD-WAN

In this solution, SD-WAN tunnels are deployed as a backup to ExpressRoute. If ExpressRoute becomes unavailable, the SD-WAN provides an alternative connectivity path from the on-premises environment to the VNets and AVS. In a Hub-and-Spoke topology, a backup path to and from AVS can be established by deploying ARS in the hub VNet; ARS enables seamless transit routing between ExpressRoute and the SD-WAN appliance. In a vWAN topology, ARS is not required: the vHub's built-in routing service automatically provides transitive routing between the SD-WAN and ExpressRoute.

Note: In a vWAN topology, to ensure optimal convergence from the VNets when failing back to ExpressRoute, prepend the prefixes advertised from on-premises over the SD-WAN so that they are longer than the ExpressRoute-learned routes. If you do not prepend, VNets will continue to use the SD-WAN path to on-premises as the primary path. In a vWAN design, the SD-WAN appliance can be deployed either within the vHub or in a spoke VNet connected to the vHub; the same principle applies in both cases.

Solution Insights:
No additional deployment needed if you have an existing SD-WAN.
Bandwidth considerations: vendor-specific.
Management considerations: third-party dependency, and you must manage the HA deployment yourself (except for SD-WAN SaaS solutions).

Solution 3: Network Redundancy via ExpressRoute in a Different Peering Location

Deploy an additional ExpressRoute circuit in a different peering location and enable Global Reach between ER2-PeeringLocation2 and AVS-ER. To use this circuit as a backup path, prepend the on-premises prefixes on the second circuit; otherwise, AVS and the VNets will perform equal-cost multi-path (ECMP) routing across both circuits to on-premises.

Note: Use a public ASN to prepend the on-premises prefixes, as AVS-ER will strip private ASNs toward AVS. Refer to this link for more details.

Solution Insights:
Ideal for mission-critical applications, providing predictable throughput and bandwidth for backup.
Could be cost-prohibitive depending on the bandwidth of the second circuit.

Solution 4: Network Redundancy via Metro ExpressRoute

Metro ExpressRoute is the new ExpressRoute configuration that provides a dual-homed setup, with diverse connections to two distinct ExpressRoute peering locations within a city. This configuration offers enhanced resiliency if one peering location experiences an outage.

Solution Insights:
Higher resiliency: provides increased reliability with a single circuit.
Limited regional availability: restricted to specific metro areas.
Cost-effective.

Conclusion

The choice of failover connectivity should be guided by the specific latency and bandwidth requirements of your workloads. Ultimately, achieving high availability and ensuring continuous operations depend on careful planning and effective implementation of network redundancy strategies.

Accelerate designing, troubleshooting & securing your network with Gen-AI powered tools, now GA.
We are thrilled to announce the general availability of Azure Networking skills in Copilot, an extension of Copilot in Azure and Security Copilot designed to enhance the cloud networking experience. Azure Networking Copilot is set to transform how organizations design, operate, and optimize their Azure networks by providing responses tailored to networking-specific scenarios and contextualized with your network topology.