virtual network
IKEv2 and Windows 10/11 drops connectivity but stays connected in Windows
I’ve seen this with two different customers using IKEv2 User VPN (Virtual WAN) and Point-to-Site gateways in hub-and-spoke topologies. When the VPN is used in an Always On configuration (device and user tunnel), after a specific amount of time (56 minutes) the IKEv2 tunnel drops but the connection still shows as connected in Windows. To restore connectivity, you just reconnect. Has anyone else had a similar experience? I’ve seen the issue with ExpressRoute and with/without Azure Firewalls in the topology too.

Inter-Hub Connectivity Using Azure Route Server
By Mays_Algebary shruthi_nair

As your Azure footprint grows with a hub-and-spoke topology, managing User-Defined Routes (UDRs) for inter-hub connectivity can quickly become complex and error-prone. In this article, we’ll explore how Azure Route Server (ARS) can help streamline inter-hub routing by dynamically learning and advertising routes between hubs, reducing manual overhead and improving scalability.

Baseline Architecture

The baseline architecture includes two Hub VNets, each peered with their respective local spoke VNets as well as with the other Hub VNet for inter-hub connectivity. Both hubs are connected to the local and remote ExpressRoute circuits in a bowtie configuration to ensure high availability and redundancy, with connection Weight used to prefer the local ExpressRoute circuit over the remote one. To maintain predictable routing behavior, the VNet-to-VNet configuration on the ExpressRoute Gateway should be disabled.

Note: Adding ARS to an existing hub that already contains a Virtual Network Gateway causes downtime, expected to last about 10 minutes.

Scenario 1: ARS and NVA Coexist in the Hub

Option A: Full Traffic Inspection

In this scenario, ARS is deployed in each Hub VNet alongside the Network Virtual Appliances (NVAs). NVA1 in Region1 establishes BGP peering with both the local ARS (ARS1) and the remote ARS (ARS2). Similarly, NVA2 in Region2 peers with both ARS2 (local) and ARS1 (remote). Let’s break down what each BGP peering relationship accomplishes. For clarity, we’ll focus on Region1, though the same logic applies to Region2:

NVA1 peering with local ARS1: Through BGP peering with ARS1, NVA1 dynamically learns the prefixes of Spoke1 and Spoke2 at the OS level, eliminating the need to manually configure these routes. The same applies to NVA2 learning the Spoke3 and Spoke4 prefixes via its BGP peering with ARS2.

NVA1 peering with remote ARS2: When NVA1 peers with ARS2, the Spoke1 and Spoke2 prefixes are propagated to ARS2. ARS2 then injects these prefixes into NVA2 at both the NIC level, with NVA1 as the next hop, and at the OS level. This mechanism removes the need for UDRs on the NVA subnets to enable inter-hub routing. Additionally, ARS2 advertises the Spoke1 and Spoke2 prefixes to both ExpressRoute circuits (EXR2 and, due to the bowtie configuration, EXR1) via GW2, making them reachable from on-premises through either EXR1 or EXR2.

👉Important: To ensure that ARS2 accepts and propagates the Spoke1/Spoke2 prefixes received via NVA1, AS-Override must be enabled. Without AS-Override, BGP loop prevention will block these routes at ARS2: both ARS1 and ARS2 use the default ASN 65515, so ARS2 would consider the routes as originated locally. The same principle applies in reverse for the Spoke3 and Spoke4 prefixes advertised from NVA2 to ARS1.

Traffic Flow

Inter-Hub Traffic: Spoke VNets are configured with UDRs that contain only a default route (0.0.0.0/0) pointing to the local NVA as the next hop. Additionally, the “Propagate Gateway Routes” setting should be set to False so that all traffic, whether East-West (intra-hub/inter-hub) or North-South (to/from the internet), is forced through the local NVA for inspection; a minimal example of such a route table follows below. Local NVAs will have the next hop to the other region’s spokes injected at the NIC level by the local ARS, pointing to the other region’s NVA. For example, NVA2 will have the next hop for Spoke1 and Spoke2 set to NVA1 (10.0.1.4), and vice versa.
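The spoke route table described above can be created with the Azure SDK for Python. This is a minimal sketch rather than the article’s own deployment code: the subscription ID, resource group, route table name, and location are hypothetical, the NVA address 10.0.1.4 is taken from the example above, and the table still has to be associated with each spoke subnet.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"  # placeholder
client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Route table for the Region1 spokes: default route to NVA1,
# gateway route propagation disabled ("Propagate gateway routes = No").
client.route_tables.begin_create_or_update(
    "rg-region1",          # hypothetical resource group
    "rt-spoke-region1",    # hypothetical route table name
    {
        "location": "eastus",
        "disable_bgp_route_propagation": True,
        "routes": [
            {
                "name": "default-via-nva1",
                "address_prefix": "0.0.0.0/0",
                "next_hop_type": "VirtualAppliance",
                "next_hop_ip_address": "10.0.1.4",  # NVA1, per the example above
            }
        ],
    },
).result()

# The route table must then be associated with the Spoke1/Spoke2 subnets
# (e.g., via client.subnets.begin_create_or_update).
```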
Why are UDRs still needed on spokes if ARS handles dynamic routing?

Even with ARS in place, UDRs are required to maintain control of the next hop for traffic inspection. For instance, if Spoke1 and Spoke2 do not have UDRs, they will learn the remote spoke prefixes (e.g., Spoke3/Spoke4) injected via ARS1, which received them from NVA2. This results in Spoke1/Spoke2 attempting to route traffic directly to NVA2, a path that does not work, since the spokes have no path to NVA2. The UDR ensures traffic correctly routes through NVA1 instead.

On-Premises Traffic: To explain the on-premises traffic flow, we'll break it down into two directions: Azure to on-premises, and on-premises to Azure.

Azure to On-Premises Traffic Flow: As previously noted, spokes send all traffic, including traffic to on-premises, via NVA1 due to the default route in the UDR. NVA1 then routes traffic to the local ExpressRoute circuit, using Weight to prefer the local path over the remote one.

Note: While NVA1 learns on-premises prefixes from both the local and remote ARSs at the OS level, this doesn’t affect routing decisions. The NIC-level route injection determines the next hop, ensuring traffic is sent via the correct path even if the OS selects a different “best” route internally.

The screenshot below from NVA1 shows four next hops to the on-premises network 10.2.0.0/16: the local ARS (ARS1: 10.0.2.5 and 10.0.2.4) and the remote ARS (ARS2: 10.1.2.5 and 10.1.2.4).

On-Premises to Azure Traffic Flow

In a bowtie ExpressRoute configuration, Azure VNet prefixes are advertised to on-premises through both the local and remote ExpressRoute circuits. Because of this dual advertisement, the on-premises network must ensure optimal path selection when routing traffic to Azure. From the Azure side, to maintain traffic symmetry, add UDRs at the GatewaySubnet (GW1 and GW2) with specific routes to the local Spoke VNets, using the local NVA as the next hop (a sketch of such a route table follows below). This ensures return traffic flows back through the same path it entered.

👉How Does the ExpressRoute Edge Router Select the Optimal Path?

You might ask: if the Spoke prefixes are advertised by both GW1 and GW2, how does the ExpressRoute edge router choose the best path? (The diagram below shows EXR1 learning the Region1 prefixes from both GW1 and GW2.)

Here’s how: edge routers (like EXR1) receive the same Spoke prefixes from both gateways, but the routes have different AS-Path lengths:
- Routes from the local gateway (GW1) have a shorter AS-Path.
- Routes from the remote gateway (GW2) have a longer AS-Path, because NVA1’s ASN (e.g., 65001) is prepended twice as part of the AS-Override mechanism.

As a result, the edge router (EXR1) prefers the local path from GW1, ensuring efficient and predictable routing. For example, EXR1 receives the Spoke1, Spoke2, and Hub1-VNet prefixes from both GW1 and GW2, but because the path via GW1 has a shorter AS-Path, EXR1 selects that as the best route. (Refer to the diagram below for a visual of the AS-Path difference.)

Final Traffic Flow:
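As a concrete illustration of the GatewaySubnet return routes mentioned above, here is a minimal sketch using the Azure SDK for Python. The spoke prefixes 10.10.1.0/24 and 10.10.2.0/24 are hypothetical (the article does not list the Region1 spoke address spaces), the NVA1 address 10.0.1.4 comes from the example above, and the resource names are placeholders. Note that a route table attached to the GatewaySubnet must keep gateway route propagation enabled.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Return-path routes on Hub1's GatewaySubnet: local spoke prefixes via NVA1.
client.route_tables.begin_create_or_update(
    "rg-region1",             # hypothetical resource group
    "rt-gatewaysubnet-hub1",  # hypothetical route table name
    {
        "location": "eastus",
        "disable_bgp_route_propagation": False,  # must stay enabled on the GatewaySubnet
        "routes": [
            {"name": "spoke1-via-nva1", "address_prefix": "10.10.1.0/24",
             "next_hop_type": "VirtualAppliance", "next_hop_ip_address": "10.0.1.4"},
            {"name": "spoke2-via-nva1", "address_prefix": "10.10.2.0/24",
             "next_hop_type": "VirtualAppliance", "next_hop_ip_address": "10.0.1.4"},
        ],
    },
).result()

# One entry per local spoke prefix is required, which is why this table
# grows quickly as more spokes are added.
```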
Option-A Insights:

This design simplifies UDR configuration for inter-hub routing, which is especially useful when dealing with non-contiguous prefixes or operating across multiple hubs. For simplicity, we used a single NVA in each Hub-VNet while explaining the setup and traffic flow throughout this article; however, a highly available (HA) NVA deployment is recommended. To maintain traffic symmetry in an HA setup, you’ll need to enable the next-hop IP feature when peering with Azure Route Server (ARS).

When on-premises traffic inspection is required, the UDR setup in the GatewaySubnet becomes more complex as the number of spokes increases. Additionally, each route table is currently limited to 600 UDR entries.

As your Azure network scales, keep in mind that Azure Route Server supports a maximum of 8 BGP peers per instance (as of the time of writing). This limit can impact architectures involving multiple NVAs or hubs.

Option B: Bypass On-Premises Inspection

If on-premises traffic inspection is not required, the NVAs can advertise a supernet prefix summarizing the local Spoke VNets to the remote ARS. This approach provides granular control over which traffic is routed through the NVA and eliminates the need for BGP peering between the local NVA and the local ARS. All other aspects of the architecture remain the same as described in Option A.

For example, NVA2 can advertise the supernet 192.168.2.0/23 (the supernet of Spoke3 and Spoke4) to ARS1. As a result, Spoke1 and Spoke2 will learn this route with NVA2 as the next hop. To ensure proper routing (as discussed earlier) and inter-hub inspection, you need to apply a UDR in Spoke1 and Spoke2 that overrides this exact supernet prefix, redirecting traffic to NVA1 as the next hop (a minimal sketch of this override route follows the Option-B insights below). At the same time, traffic destined for on-premises follows the system route through the local ExpressRoute gateway, bypassing NVA1 altogether. In this setup, UDRs on the spokes should have “Propagate Gateway Routes” set to True, and no UDRs are needed in the GatewaySubnet.

👉Can NVA2 Still Advertise Specific Spoke Prefixes?

You might wonder: can NVA2 still advertise the specific prefixes (e.g., Spoke3 and Spoke4) learned from ARS2 to ARS1 instead of a supernet? Yes, this is technically possible, but it requires maintaining BGP peering between NVA2 and ARS2. It also introduces UDR complexity in Spoke1 and Spoke2, as you’d need to manually override each specific prefix, which defeats the purpose of using ARS for simplified route propagation and undermines the efficiency and scalability of the design.

Bypass On-Premises Inspection Final Traffic Flow:

Option B: Bypass on-premises inspection traffic flow

Option-B Insights:

This approach reduces the number of BGP peerings per ARS. Instead of maintaining two BGP sessions (local NVA and remote NVA) per hub, you can limit it to just one, preserving capacity within ARS’s 8-peer limit for additional inter-hub NVA peerings.

Each NVA should advertise a supernet prefix to the remote ARS. This can be challenging if your spokes don’t use contiguous IP address spaces.
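To illustrate the Option B spoke UDR described above, here is a minimal sketch using the Azure SDK for Python. It overrides only the remote supernet (192.168.2.0/23 from the example) toward NVA1 (10.0.1.4) and leaves gateway route propagation enabled so on-premises routes still arrive via the ExpressRoute gateway; the subscription ID and resource names are hypothetical.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Option B: override only the remote spokes' supernet; keep gateway routes propagating.
client.route_tables.begin_create_or_update(
    "rg-region1",              # hypothetical resource group
    "rt-spoke-region1-optb",   # hypothetical route table name
    {
        "location": "eastus",
        "disable_bgp_route_propagation": False,  # "Propagate gateway routes = Yes"
        "routes": [
            {
                "name": "region2-supernet-via-nva1",
                "address_prefix": "192.168.2.0/23",  # Spoke3/Spoke4 supernet from the example
                "next_hop_type": "VirtualAppliance",
                "next_hop_ip_address": "10.0.1.4",   # NVA1
            }
        ],
    },
).result()
```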
Scenario 2: ARS in the Hub and the NVA in a Transit VNet

In Scenario 1, we highlighted that when on-premises inspection is required, managing UDRs at the GatewaySubnet becomes increasingly complex as the number of Spoke VNets grows, because the UDRs must include specific prefixes for each Spoke VNet. In this scenario, we eliminate the need to apply UDRs at the GatewaySubnet altogether. In this design, the NVA is deployed in a Transit VNet, where:

The Transit-VNet is peered with the local Spoke VNets and with the local Hub-VNet to enable intra-hub and on-premises connectivity. The Transit-VNet is also peered with the remote Transit VNets (e.g., Transit-VNet1 peered with Transit-VNet2) to handle inter-hub connectivity through the NVAs. Additionally, the Transit-VNets are peered with the remote Hub-VNets to establish BGP peering with the remote ARS.

The NVA OS needs static routes for the local Spoke VNet prefixes (either specific prefixes or a supernet), which are then advertised to the ARSs over BGP peering; ARS in turn advertises them to on-premises via ExpressRoute. The NVAs BGP-peer with the local ARS and also with the remote ARS (a sketch of creating such a peering on the Route Server side appears later in this section).

To understand the reasoning behind this design, let’s take a closer look at the setup in Region1, focusing on how ARS and the NVA are configured to connect to Region2. This illustrates both inter-hub and on-premises connectivity; the same concept applies in reverse from Region2 to Region1.

Inter-Hub: To enable NVA1 in Region1 to learn prefixes from Region2, NVA2 configures static routes at the OS level for Spoke3 and Spoke4 (or their supernet prefix) and advertises them to ARS1 via the remote BGP peering. As a result, these prefixes are received by NVA1, both at the NIC level, with NVA2 as the next hop, and at the OS level for proper routing. Spoke1 and Spoke2 have a UDR with a default route pointing to NVA1 as the next hop. For instance, when Spoke1 needs to communicate with Spoke3, the traffic first routes through NVA1, which then forwards it to NVA2 over the VNet peering between the two Transit VNets. A similar configuration is applied in the reverse direction: NVA1 configures static routes at the OS level for Spoke1 and Spoke2 (or their supernet prefix) and advertises them to ARS2 via the remote BGP peering. As a result, these prefixes are received by NVA2, both at the NIC level (injected by ARS2), with NVA1 as the next hop, and at the OS level for proper routing.

Note: At the OS level, NVA1 learns the Spoke3 and Spoke4 prefixes from both the local and remote ARSs. However, the NIC-level route injection determines the actual next hop, so even if the OS selects a different best route, it won’t affect forwarding behavior. The same applies to NVA2.

On-Premises Traffic: To explain the on-premises traffic flow, we'll break it down into two directions: Azure to on-premises, and on-premises to Azure.

Azure to On-Premises Traffic Flow: Spokes in Region1 route all traffic through NVA1 via a default route defined in their UDRs. Because of the BGP peering between NVA1 and ARS1, ARS1 advertises Spoke1 and Spoke2 (or their supernet prefix) to on-premises through ExpressRoute (EXR1). Transit-VNet1 (hosting NVA1) is peered with Hub1-VNet with “Use Remote Gateway” enabled. This allows NVA1 to learn on-premises prefixes from the local ExpressRoute gateway (GW1), and traffic to on-premises is routed through the local ExpressRoute circuit (EXR1) due to the higher BGP Weight configuration.

Note: At the OS level, NVA1 learns on-premises prefixes from both the local and remote ARSs. However, the NIC-level route injection determines the actual next hop, so even if the OS selects a different best route, it won’t affect forwarding behavior. The same applies to NVA2.
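The NVA-to-ARS BGP sessions used in both scenarios are created on the Route Server as BGP peer connections. The following is a rough sketch using the Azure SDK for Python; Azure Route Server is surfaced as a virtual hub resource, so the operation group and parameter names shown here (virtual_hub_bgp_connection, peer_asn, peer_ip) reflect my understanding of the SDK and may differ by version. The resource names are hypothetical, while the NVA ASN 65001 and address 10.0.1.4 come from the article’s examples.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Peer ARS1 (a Route Server, exposed as a virtual hub resource) with NVA1.
# Repeat against the remote Route Server (ARS2) to create the cross-region peering.
client.virtual_hub_bgp_connection.begin_create_or_update(
    "rg-region1",   # hypothetical resource group
    "ars1",         # hypothetical Route Server (virtual hub) name
    "peer-nva1",    # hypothetical BGP connection name
    {
        "peer_asn": 65001,      # NVA ASN from the article's example
        "peer_ip": "10.0.1.4",  # NVA1 address from the article's example
    },
).result()
```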
On-Premises to Azure Traffic Flow: Through its BGP peering with ARS1, NVA1 enables ARS1 to advertise Spoke1 and Spoke2 (or their supernet prefix) to both the EXR1 and EXR2 circuits (due to the ExpressRoute bowtie setup). Additionally, because of the BGP peering between NVA1 and ARS2, ARS2 also advertises Spoke1 and Spoke2 (or their supernet prefix) to the EXR2 and EXR1 circuits. As a result, the ExpressRoute edge routers in both Region1 and Region2 learn the same Spoke prefixes (or their supernet prefix) from both GW1 and GW2, with identical AS-Path lengths, as shown below.

EXR1 learns Region1 Spokes' supernet prefixes from GW1 and GW2

This causes non-optimal inbound routing, where traffic from on-premises destined for the Region1 spokes may first land in Region2’s Hub2-VNet before traversing to NVA1 in Region1. Return traffic from Spoke1 and Spoke2, however, will always exit through Hub1-VNet. To prevent this suboptimal routing, configure NVA1 to prepend the AS path for Spoke1 and Spoke2 (or their supernet prefix) when advertising them to the remote ARS2. Likewise, ensure NVA2 prepends the AS path for Spoke3 and Spoke4 (or their supernet prefix) when advertising to ARS1. This approach helps maintain optimal routing under normal conditions and during ExpressRoute failover scenarios. The diagram below shows NVA1 prepending the AS path for the Spoke1 and Spoke2 supernet prefix when BGP-peering with the remote ARS (ARS2); the same applies to NVA2 when advertising the Spoke3 and Spoke4 prefixes to ARS1. You can confirm the resulting AS paths by listing the routes ARS has learned from each NVA peering, as sketched at the end of this section.

Final Traffic Flow:

Full Inspection: Traffic flow when NVA in Transit-VNet

Insights:

This solution is ideal when full traffic inspection is required. Unlike Scenario 1 - Option A, it eliminates the need for UDRs in the GatewaySubnet.

When ARS is deployed in a VNet (typically a Hub VNet), that VNet is limited to 500 VNet peerings (as of the time of writing). In this design, however, the spokes peer with the Transit-VNet instead of directly with the ARS VNet, allowing you to scale beyond the 500-peer limit by leveraging Azure Virtual Network Manager (AVNM) or submitting a support request.

Some enterprise customers may encounter the 1,000-route advertisement limit on the ExpressRoute circuit from the ExpressRoute gateway. In traditional hub-and-spoke designs, there's no native control over what is advertised to ExpressRoute. With this architecture, the NVAs provide greater control over route advertisement to the circuit.

For simplicity, we used a single NVA in each region while explaining the setup and traffic flow throughout this article; however, a highly available (HA) NVA deployment is recommended. To maintain traffic symmetry in an HA setup, you’ll need to enable the next-hop IP feature when peering with Azure Route Server (ARS).

This design does require additional VNet peerings: between Transit-VNets (inter-region), between Transit-VNets and the local spokes, and between Transit-VNets and both the local and remote Hub-VNets.
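As a closing check, you can list the routes a Route Server has learned over a given NVA peering and inspect their AS paths to confirm the prepending behavior described above. This is a hedged sketch using the Azure SDK for Python; the operation name (begin_list_learned_routes) and the shape of the result reflect my understanding of the SDK surface for Route Server BGP connections and may differ by version, and the resource names are hypothetical.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Ask ARS2 (the Route Server in Region2) what it has learned from its peering with NVA1.
poller = client.virtual_hub_bgp_connections.begin_list_learned_routes(
    "rg-region2",  # hypothetical resource group
    "ars2",        # hypothetical Route Server (virtual hub) name
    "peer-nva1",   # hypothetical BGP connection name for the NVA1 peering
)
learned = poller.result()

# The result shape varies by API/SDK version (routes are grouped per Route Server instance),
# so dump it and look for the asPath values on the Region1 spoke prefixes learned from NVA1
# (e.g., 65001 repeated when prepending is in effect).
print(learned.as_dict() if hasattr(learned, "as_dict") else learned)
```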
Can only remote into azure vm from DC

Hi all, I have set up a site-to-site connection from on-premises to Azure. I can remote in via the main DC on-premises, but not from any other server, nor can I ping Azure from any other server. Why can I only remote into the Azure VM from the server that has Routing and Remote Access? Any ideas on how I can fix this?

Azure Firewall query
Hi Community, our customer has a security-layer subscription through which they want to route and control all other subscriptions' traffic. Basically, they want to remove the direct VNet peerings between subscriptions and configure Azure Firewall to control and route all other subscriptions' traffic. All internet traffic would then be routed down our S2S VPN to our Palo Altos in Greenwich for internet access (both ways). However, there may be some machines they would assign Azure public IPs to for inbound web server connectivity, but all other access from external clients would be routed inbound via the Palo Altos. Questions: which option (Azure Firewall or Azure Virtual WAN) would be best? What are the pros and cons? Any reference would be of great help.

What is impact of Azure Firewall update from default to custom DNS on other Vnets routing to FW
I have four Azure VNets: the first is Prod (VMs and AKS), the second is Dev (VMs and AKS), the third hosts the Domain Controllers, and the fourth hosts Azure Firewall and Application Gateway. External traffic only comes from the fourth VNet's resources. VNet peering is set from 1 to 4, 2 to 4, and 3 to 4, and route tables on the first, second, and third VNets point to the Azure Firewall private IP. All VNets have the Domain Controllers' private IPs configured as DNS servers. The Azure Firewall DNS setting is currently disabled. I am going to enable the Firewall DNS settings, add the Domain Controllers as DNS servers, and enable DNS proxy. For testing, I am going to set the Firewall private IP as the DNS server on the Dev VNet and restart the VMs, but I have not added this to the Prod VNet. What will be the impact on Prod VNet apps if they try to resolve IPs from the domain controller? What will be the impact on Prod apps if they try to access Azure resources (SQL, storage account)?

routing table
Hello, I have a virtual network with 192.168.0.0/24. In the virtual network there is a firewall at 192.168.0.5. Now I want to route all outgoing traffic from the virtual network through the firewall. If I create a route 0.0.0.0/0 to 192.168.0.5, the internal devices can no longer reach each other. What is the best way to set up the routing rules here? Greetings and thanks, Stefan

Azure traffic to storage account
Hello, I’ve set up a storage account in Tenant A, located in the AUEast region, with public access. I also created a VM in Tenant B, in the same region (AUEast). I’m able to use IP whitelisting on the storage account in Tenant A to allow traffic only from the VM in Tenant B. However, in the App Insights logs, the traffic appears as 10.X.X.X, likely because the VM is in the same region. I'm unsure why the public IP isn't reflected in the logs. Moreover, I am not sure about this part https://learn.microsoft.com/en-us/azure/storage/common/storage-network-security-limitations#:~:text=You%20can%27t%20use%20IP%20network%20rules%20to%20restrict%20access%20to%20clients%20in%20the%20same%20Azure%20region%20as%20the%20storage%20account.%20IP%20network%20rules%20have%20no%20effect%20on%20requests%20that%20originate%20from%20the%20same%20Azure%20region%20as%20the%20storage%20account.%20Use%20Virtual%20network%20rules%20to%20allow%20same%2Dregion%20requests. This seems contradictory, as IP whitelisting is working on the storage account. I assume the explanation above applies only when the client is hosted in the same tenant and region as the storage account, and not when the client is in a different tenant, even if it's in the same region. I’d appreciate it if someone could shed some light on this. Thanks, Mohsen

Azure Networking Portfolio Consolidation
Overview

Over the past decade, Azure Networking has expanded rapidly, bringing incredible tools and capabilities to help customers build, connect, and secure their cloud infrastructure. But we've also heard strong feedback: with over 40 different products, it hasn't always been easy to navigate and find the right solution. The complexity often led to confusion, slower onboarding, and missed capabilities. That's why we're excited to introduce a more focused, streamlined, and intuitive experience across Azure.com, the Azure portal, and our documentation, pivoting around four core networking scenarios:

Network foundations: Network foundations provide the core connectivity for your resources, using Virtual Network, Private Link, and DNS to build the foundation for your Azure network. Try it with this link: Network foundations

Hybrid connectivity: Hybrid connectivity securely connects on-premises, private, and public cloud environments, enabling seamless integration, global availability, and end-to-end visibility, presenting major opportunities as organizations advance their cloud transformation. Try it with this link: Hybrid connectivity

Load balancing and content delivery: Load balancing and content delivery helps you choose the right option to ensure your applications are fast, reliable, and tailored to your business needs. Try it with this link: Load balancing and content delivery

Network security: Securing your environment is just as essential as building and connecting it. The Network Security hub brings together Azure Firewall, DDoS Protection, and Web Application Firewall (WAF) to provide a centralized, unified approach to cloud protection. With unified controls, it helps you manage security more efficiently and strengthen your security posture. Try it with this link: Network security

This new structure makes it easier to discover the right networking services and get started with just a few clicks, so you can focus more on building and less on searching.

What you’ll notice:

Clearer starting points: Azure Networking is now organized around four core scenarios and twelve essential services, reflecting the most common customer needs. Additional services are presented within the context of these scenarios, helping you stay focused and find the right solution without feeling overwhelmed.

Simplified choices: We’ve merged overlapping or closely related services to reduce redundancy. That means fewer, more meaningful options that are easier to evaluate and act on.

Sunsetting outdated services: To reduce clutter and improve clarity, we’re sunsetting underused offerings such as white-label CDN services and China CDN. These capabilities have been rolled into newer, more robust services, so you can focus on what’s current and supported.

What this means for you

Faster decision-making: With clearer guidance and fewer overlapping products, it's easier to discover what you need and move forward confidently.

More productive sales conversations: With this simplified approach, you’ll get more focused recommendations and less confusion among sellers.

Better product experience: This update makes the Azure Networking portfolio more cohesive and consistent, helping you get started quickly, stay aligned with best practices, and unlock more value from day one.

The portfolio consolidation initiative is a strategic effort to simplify and enhance the Azure Networking portfolio, ensuring better alignment with customer needs and industry best practices.
By focusing on top-line services, combining related products, and retiring outdated offerings, Azure Networking aims to provide a more cohesive and efficient product experience.

Azure.com

Before: Our original Solution page on Azure.com was disorganized and static, displaying a small portion of services in no discernible order.

After: The revised solution page is now dynamic, allowing customers to click deeper into each networking and network security category and displaying the top-line services, simplifying the customer experience.

Azure Portal

Before: With over 40 networking services available, we know it can feel overwhelming to figure out what’s right for you and where to get started.

After: To make it easier, we've introduced four streamlined networking hubs, each built around a specific scenario, to help you quickly identify the services that match your needs. Each offers an overview to set the stage, key services to help you get started, guidance to support decision-making, and a streamlined left-hand navigation for easy access to all services and features.

Documentation

For documentation, we looked at our current assets as well as created new assets that aligned with the changes in the portal experience. As with Azure.com, we found the old experiences were disorganized and not well aligned. We updated our assets to focus on our top-line networking services and to call out the pillars. Our belief is these changes will allow our customers to more easily find the relevant and important information they need for their Azure infrastructure.

Azure Network Hub

Before the updates, we had a hub page organized around different categories and not well laid out. In the updated hub page, we provide relevant links for top-line services within all of the Azure networking scenarios, as well as a section linking to each scenario's hub page.

Scenario Hub pages

We added scenario hub pages for each of the scenarios. This provides our customers with a central hub for information about the top-line services for each scenario and how to get started. We also included common scenarios and use cases for each scenario, along with references for deeper learning across the Azure Architecture Center, Well-Architected Framework, and Cloud Adoption Framework libraries.

Scenario Overview articles

We created new overview articles for each scenario. These articles were designed to provide customers with an introduction to the services included in each scenario, guidance on choosing the right solutions, and an introduction to the new portal experience. Here's the Load balancing and content delivery overview:

Documentation links

Azure Networking hub page: Azure networking documentation | Microsoft Learn

Scenario Hub pages:
Azure load balancing and content delivery | Microsoft Learn
Azure network foundation documentation | Microsoft Learn
Azure hybrid connectivity documentation | Microsoft Learn
Azure network security documentation | Microsoft Learn

Scenario Overview pages:
What is load balancing and content delivery? | Microsoft Learn
Azure Network Foundation Services Overview | Microsoft Learn
What is hybrid connectivity? | Microsoft Learn
What is Azure network security? | Microsoft Learn

Improving the user experience is a journey, and in the coming months we plan to do more on this. Watch for more blogs over the next few months covering further improvements.

Hub spoke design with NVA firewall
I have my Azure landing zone set up, but it isn't working as I expected. I have a VNet named vnet-lz-fw-001 with two subnets, External and Trusted, and an NVA WatchGuard firewall with an interface on each subnet. I then have two further VNets, vnet-lz-prod-001 and vnet-lz-id-001. Each of these VNets has peering to vnet-lz-fw-001 but no peering between each other. vnet-lz-prod-001 and vnet-lz-id-001 have user-defined routes pointing to each other via the trusted interface on the WatchGuard NVA, and the WatchGuard firewall has static routes pointing to each subnet in the VNets via the Trusted interface gateway address. Virtual machines in vnet-lz-prod-001 and vnet-lz-id-001 can ping each other, but when they do, the traffic is not routed via the WatchGuard firewall. Is this expected behavior? Virtual machines in both vnet-lz-prod-001 and vnet-lz-id-001 can ping the trusted interface on the WatchGuard firewall OK.