Latest Discussions
Azure ExpressRoute - Cisco Meraki MX or directly into LAN?
We are in the process of deploying Azure ExpressRoute across multiple sites via a provider Layer 2 VPLS circuit and are evaluating our CPE options. Our provider is delivering a Layer 2 handoff to each site, meaning we are responsible for all Layer 3 BGP configuration on the customer edge. We currently run a full Cisco Meraki environment (Meraki MX appliances as our edge firewalls and Meraki MS switches on the LAN side) and are wondering if anyone has successfully terminated an ExpressRoute BGP session directly on a Meraki MX, or alternatively terminated it directly into the LAN without a dedicated edge router in between. The options we're weighing:

1. Terminating ExpressRoute BGP directly on a Meraki MX appliance. Is this even possible given Meraki's limited BGP support?
2. Connecting the Layer 2 provider handoff (dot1Q or QinQ) directly into a Meraki MS LAN switch and routing from there. Has anyone made this work, and what were the caveats?
3. Running a dedicated CPE router in front of the Meraki MX. If so, how did you handle the integration between the CPE router and the Meraki SD-WAN fabric, particularly around route advertisement and traffic steering?

Our provider model uses QinQ VLAN tagging with a provider-assigned S-tag and customer-defined C-tags for private and Microsoft peering. Since the provider is only delivering Layer 2, all BGP session establishment, prefix advertisement, and routing policy must be handled entirely on our CPE. Our understanding is that the Meraki MX does not support QinQ subinterfaces or the level of BGP policy control needed for ExpressRoute, but we wanted to see if anyone has found a creative workaround before we commit to dedicated CPE hardware at each site.

Device recommendations welcome: if a dedicated CPE router is the only viable path, we'd also love to hear what devices others have used successfully for this use case.
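For readers less familiar with the tagging model mentioned above: a QinQ frame carries an outer provider S-tag (802.1ad, TPID 0x88A8) stacked on an inner customer C-tag (802.1Q, TPID 0x8100), and the CPE has to be able to match on both — which is exactly the capability in question for the MX. A minimal pure-Python sketch of how the two tags stack in the header (the VLAN IDs are made-up placeholders, not real provider values):

```python
import struct

TPID_STAG = 0x88A8  # IEEE 802.1ad service tag (provider S-tag)
TPID_CTAG = 0x8100  # IEEE 802.1Q customer tag (C-tag)

def qinq_header(s_vlan: int, c_vlan: int, ethertype: int = 0x0800) -> bytes:
    """Build the stacked-tag portion of an Ethernet header:
    [S-TPID][S-TCI][C-TPID][C-TCI][inner EtherType]."""
    # TCI = 3-bit PCP | 1-bit DEI | 12-bit VLAN ID; PCP/DEI left at 0 here
    return struct.pack("!HHHHH", TPID_STAG, s_vlan & 0x0FFF,
                       TPID_CTAG, c_vlan & 0x0FFF, ethertype)

def parse_qinq(header: bytes) -> tuple:
    """Return (s_vlan, c_vlan) from a stacked-tag header."""
    s_tpid, s_tci, c_tpid, c_tci, _ = struct.unpack("!HHHHH", header)
    assert s_tpid == TPID_STAG and c_tpid == TPID_CTAG, "not a QinQ header"
    return (s_tci & 0x0FFF, c_tci & 0x0FFF)

# Example: hypothetical provider-assigned S-tag 100, customer C-tag 200
hdr = qinq_header(s_vlan=100, c_vlan=200)
print(parse_qinq(hdr))  # (100, 200)
```

This is only a wire-format illustration, but it makes the requirement concrete: a device that can only pop a single 802.1Q tag cannot terminate this handoff directly.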
Our circuit is 1Gbps, so we need something that can handle that throughput comfortably with BGP active, but we're a mid-size enterprise and are looking for cost-effective options rather than carrier-grade platforms. What has worked well for you without breaking the budget? Any real-world experience, gotchas, or recommended architectures would be greatly appreciated, especially from anyone running a Meraki-only environment who has tackled this!

GS419, Apr 23, 2026

My First TechCommunity Post: Azure VPN Gateway BGP Timer Mismatches
This is my first post on the Microsoft TechCommunity. Today is my seven-year anniversary at Microsoft. In my current role as a Senior Cloud Solution Architect supporting Infrastructure in Cloud & AI Platforms, I want to start by sharing a real-world lesson learned from customer engagements rather than a purely theoretical walkthrough. This work, and the update of the official documentation on Microsoft Learn, is the culmination of nearly two years of support for a very large global SD-WAN deployment with hundreds of site-to-site VPN connections into Azure VPN Gateway.

The topic is deceptively simple (BGP timers), but mismatched expectations can cause significant instability when connecting on-premises environments to Azure. If you've ever seen seemingly random BGP session resets, intermittent route loss, or confusing failover behavior, there's a good chance that a timer mismatch between Azure and your customer premises equipment (CPE) was a contributing factor.

Customer Expectation: BGP Timer Negotiation

Many enterprise routers and firewalls support aggressive BGP timers and expect them to be negotiated during session establishment. A common configuration I see in customer environments looks like:

- Keepalive: 10 seconds
- Hold time: 30 seconds

This configuration is not inherently wrong. In fact, it is often used intentionally to speed up failure detection and convergence in conventional network environments. My own past experience with short timers was at a national cellular carrier, between core switching routers in adjacent racks; all other connections used the default timer values. The challenge appears when that expectation is carried into Azure VPN Gateway.

Azure VPN Gateway Reality: Fixed BGP Timers

Azure VPN Gateway supports BGP but uses fixed timers (60/180) and won't negotiate down. The timers are documented: the BGP keepalive timer is 60 seconds, and the hold timer is 180 seconds.
Azure VPN Gateways use fixed timer values and do not support configurable keepalive or hold timers. This behavior is consistent across supported VPN Gateway SKUs that offer BGP support. Unlike some on-premises devices, Azure will not adapt its timers downward during session establishment.

What Happens During a Timer Mismatch

When a CPE is configured with a 30-second hold timer, it expects to receive BGP keepalives well within that window. Azure, however, sends BGP keepalives every 60 seconds. From the CPE's point of view:

- No keepalive is received within 30 seconds
- The BGP hold timer expires
- The session is declared dead and torn down

Azure may not declare the peer down on the same timeline as the CPE. This mismatch leads to repeated session flaps.

The Hidden Side Effect: BGP State and Stability Controls

During these rapid teardown and re-establishment cycles, many CPE platforms rebuild their BGP tables and may increment internal routing metadata. When this occurs repeatedly:

- Azure observes unexpected and rapid route updates
- The BGP finite state machine is forced to continually reset and re-converge
- BGP session stability is compromised
- CPE equipment logging may trigger alerts and internal support tickets

The resulting behavior is often described by customers as "Azure randomly drops routes" or "BGP is unstable", when the instability originates from mismatched BGP timer expectations between the CPE and Azure VPN Gateway.

Why This Is More Noticeable on VPN (Not ExpressRoute)

This issue is far more common with VPN Gateway than with ExpressRoute. ExpressRoute supports BFD and allows faster failure detection without relying solely on aggressive BGP timers. VPN Gateway does not support BFD, so customers sometimes compensate by lowering BGP timers on the CPE, unintentionally creating this mismatch. The VPN path traverses Internet/WAN-like transport where delay, loss, and jitter are normal, so conservative timer choices are stability-focused.
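The failure sequence above reduces to a one-line check: a BGP session can only stay up if the peer's keepalive interval fits inside the local hold time. A small sketch, using the numbers from this post:

```python
def session_survives(peer_keepalive_s: int, local_hold_s: int) -> bool:
    """A BGP session stays up only if at least one peer keepalive
    arrives before the local hold timer expires."""
    return peer_keepalive_s < local_hold_s

# Azure VPN Gateway sends keepalives every 60 seconds (fixed).
AZURE_KEEPALIVE = 60

# CPE tuned aggressively (10/30): the 30s hold timer expires before
# Azure's first keepalive ever arrives, so the session flaps.
print(session_survives(AZURE_KEEPALIVE, 30))   # False

# CPE aligned to Azure's defaults (60/180): stable.
print(session_survives(AZURE_KEEPALIVE, 180))  # True
```

In practice you want the hold timer at roughly three times the peer's keepalive interval (Azure's own 60/180 ratio); a hold time that merely exceeds the keepalive interval leaves no margin for delay or loss on the VPN path.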
Updated Azure Documentation

The good news is that the official Azure documentation has been updated to clearly state the fixed BGP timer values for VPN Gateway:

- Keepalive: 60 seconds
- Hold time: 180 seconds
- Timer negotiation: Azure uses fixed timers

Azure VPN Gateway FAQ | Microsoft Learn

This clarification helps set the right expectations and prevents customers from assuming Azure behaves like conventional CPE routers.

Practical Guidance

If you are connecting a CPE to Azure VPN Gateway using BGP:

- Do not configure BGP timers lower than Azure's defaults
- Align CPE timers to 60/180 or higher
- Avoid using aggressive timers as a substitute for BFD

For further resilience:

- Consider active-active VPN Gateways for better resiliency
- Use four tunnels, commonly implemented in a bowtie configuration, for even better resiliency and traffic stability

Closing Thoughts

This is a great example of how cloud networking often behaves correctly, but differently than conventional on-premises networking environments. Understanding those differences, and documenting them clearly, can save hours of troubleshooting and frustration. If this post helps even one engineer avoid a late-night or multi-month BGP debugging session, then it has done its job. I did use AI (M365 Copilot) to aid in formatting and to validate technical accuracy. Otherwise, these are my thoughts. Thanks for reading my first TechCommunity post.

Azure VM Persistent Route Setup
Hi, I hope to get some advice on a routing issue from Azure to an on-premises system. A little background first, please bear with me.

We have an on-premises VM that connects to an isolated third-party network via an on-prem Cisco ASA firewall dedicated to this purpose:

- On-prem VM's IP: 10.100.10.23/24
- On-prem dedicated FW, local inside interface IP: 10.100.10.190
- On-prem dedicated FW, third-party interface IP: 10.110.255.137
- Third-party router IP: 10.110.255.138 (this routes to additional devices on 10.10.227.10 and 20.10.227.10)

There are static routes configured on the firewall's third-party interface:

- 10.10.227.10 255.255.255.255 via 10.110.255.138 (gateway IP)
- 20.10.227.10 255.255.255.255 via 10.110.255.138 (gateway IP)

The on-premises VM (10.100.10.23) has persistent routes added to allow connectivity:

Network Address   Netmask          Gateway Address  Metric
10.10.227.10      255.255.255.255  10.100.10.190    1
20.10.227.10      255.255.255.255  10.100.10.190    1
10.110.255.136    255.255.255.252  10.100.10.190    1

The above works fine on-prem, but I now need to migrate the on-prem VM service into Azure.

Azure Side

I have created a test Azure VM with a static IP in an isolated subnet (no other devices using it) in the Production subscription of our LZ (hub-and-spoke topology). We have a site-to-site VPN connected to our on-premises FW using a VPN Gateway configured in the Connectivity subscription of our LZ (as expected).
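As an aside, the persistent-route behavior described above is plain longest-prefix matching, which can be restated as a short sketch using Python's ipaddress module (the 0.0.0.0/0 entry and its gateway are assumed placeholders, not from the original post):

```python
from ipaddress import ip_address, ip_network

# Persistent routes from the on-prem VM above: (prefix, next hop)
ROUTES = [
    ("10.10.227.10/32",   "10.100.10.190"),
    ("20.10.227.10/32",   "10.100.10.190"),
    ("10.110.255.136/30", "10.100.10.190"),
    ("0.0.0.0/0",         "10.100.10.1"),  # assumed default gateway
]

def next_hop(dest: str) -> str:
    """Resolve the next hop for a destination by longest-prefix match,
    the way a host route table does."""
    matches = [(ip_network(p), gw) for p, gw in ROUTES
               if ip_address(dest) in ip_network(p)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.10.227.10"))  # 10.100.10.190 (the /32 wins)
print(next_hop("8.8.8.8"))       # 10.100.10.1   (falls to the default)
```

Worth keeping in mind when this moves to Azure: the guest OS route table is only half the story there, since the Azure fabric routes on the prefixes it knows about (VNet address space, Local Network Gateway prefixes, effective routes on the NIC), so a host-level persistent route alone does not control what the platform does with the packet.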
We have defined subnets for on-premises address spaces in the Local Network Gateway: 10.100.10.0/24, 10.100.11.0/24, 10.100.13.0/24, 10.100.14.0/24 (local subnets) and 172.16.50.0 (VPN client subnet).

Main problem that I'm requesting advice for: when I add the defined persistent routes on the Azure VM (IP address: 10.150.1.10/24), exactly as on the on-prem VM:

Network Address   Netmask          Gateway Address  Metric
10.10.227.10      255.255.255.255  10.100.10.190    1
20.10.227.10      255.255.255.255  10.100.10.190    1
10.110.255.136    255.255.255.252  10.100.10.190    1

I'm unable to ping the 10.10.227.10 and 20.10.227.10 addresses, even though the routes have been added by the third party on their network side. All network objects, static routes, groups, and rules are duplicated on the ASA FW for the Azure VM as for the on-prem VM, and I can access/ping the ASA FW inside interface no problem. Is there a specific way I need to configure the persistent routes from the Azure side, or have I missed something in the configuration above to get the connectivity I require? All advice is welcomed! Thank you, Nitrox

nitrox2000, Mar 13, 2026

Traffic processing BGP Azure VPN gateway A/A
Hello, can someone explain how Azure processes traffic when a VPN gateway is deployed in active-active mode? Azure Firewall Premium is also configured, and BGP runs without preferences. A user-defined route (UDR) is set up with the Azure Firewall as next hop. Is it possible that asymmetric routing occurs in this scenario, with traffic dropped by the Azure firewall? My understanding is that we need to configure a UDR on the gateway subnet to have traffic to the peered subnets inspected; otherwise the firewall does not see traffic passing through the VPN gateway. Traffic arriving over the IPsec tunnels can take different paths, and the firewall does not interfere unless everything is routed to it by a UDR.

Lechu, Feb 22, 2026

Help! - How is VNet traffic reaching vWAN/on‑prem when the VNet isn’t connected to the vWAN hub
Hello, I need some clarity on how the following is working. Attached is a network diagram of our current setup. The function apps (in VNet-1) initiate connections to a specific IP:port or FQDN:port in the on-premises network(s). A private DNS zone ensures that any FQDN is resolved to the correct internal IP address of the on-prem endpoint. In our setup, both the function app and the external firewall reside in the same VNet. This firewall is described as "unattached" because it is not the built-in firewall of a secured vWAN hub, but rather an independent Azure Firewall deployed in that VNet. The VNet has a user-defined default route (0.0.0.0/0) directing all outbound traffic to the firewall's IP. The firewall then filters the traffic, allowing only traffic destined to whitelisted on-premises IP:port or FQDN:port combinations (using IP Groups), and blocking everything else.

The critical question, and the part that I am unable to figure out, is: once the firewall permits a packet, how does Azure know to route it to the vWAN hub and on to the site-to-site VPN? VNet-1 truly has no connection at all to the vWAN hub (no direct attachment, no peering, no VPN from the NVA), yet the traffic is still reaching the on-prem sites. Am I missing something obvious? Any help on this would be appreciated. Thank you!

YuktiVerma2025, Feb 17, 2026

Spoke-Hub-Hub Traffic with VPN Gateway BGP and Firewall Issue
Hello, I'm facing a situation where I'm trying to have Azure Firewall inspection on the VPN Gateway VNet-to-VNet connectivity. It seems to work if I go Spoke A > Hub A Firewall > Hub A VPN > Hub B VPN > Spoke B, but if I try to go Spoke A > Hub A Firewall > Hub A VPN > Hub B VM (or the inbound resolver), it fails to route correctly. According to Connectivity Troubleshooter it stops at the Hub A VPN with a local error (RouteMissing), but then reaches the destination health check, which makes me believe the traffic is getting there but not following the route I want it to take, and that might be causing routing issues. What am I missing here? This connectivity was working before introducing the Azure Firewall for inspection with the UDR. Is what I'm trying to accomplish not possible? I've tried different types of UDR rules on the gateway subnet, and this is my most recent configuration. The reason I'm trying to accomplish this is that I'm seeing a similar error in our hub-spoke hybrid environment and I'm trying to replicate the issue.

Current configuration: 2x hubs with spoke networks attached.

Hub-Spoke-A configuration. Hub-A contains the following subnets and resources:

- VPN Gateway - GatewaySubnet
- Azure Firewall - AzureFirewallSubnet
- Inbound private resolver - PrivateResolverSubnet
- Virtual machine - VM subnet

The gateway subnet has an attached UDR with the following routes (propagation: true):

- Destination prefix: Hub-B; next hop type: Virtual Appliance; next hop IP: Hub-A firewall
- Destination prefix: Spoke-B; next hop type: Virtual Appliance; next hop IP: Hub-A firewall

Hub-Spoke-B configuration. Hub-B contains the following subnets and resources:

- VPN Gateway - GatewaySubnet
- Azure Firewall - AzureFirewallSubnet
- Inbound private resolver - PrivateResolverSubnet
- Virtual machine - VM subnet

The gateway subnet has an attached UDR with the following routes (propagation: true):

- Destination prefix: Hub-A; next hop type: Virtual Appliance; next hop IP: Hub-B firewall
- Destination prefix: Spoke-A; next hop type: Virtual Appliance; next hop IP: Hub-B firewall

The spoke subnets have an attached UDR with the following route (propagation: true):

- Destination prefix: 0.0.0.0/0; next hop type: Virtual Appliance; next hop IP: Hub-A/Hub-B firewall (depending on which hub it's peered to)

The VPN gateways run HA VNet-to-VNet with BGP enabled. I can see that the gateways know the routes, and like I said, this was working prior to introducing the UDRs to force traffic through the Azure firewall.

CUrti300, Nov 20, 2025

What would be the expected behavior for an NSP?
I'm using a network security perimeter (NSP) in Azure. In the perimeter there are two resources assigned: a storage account and an Azure SQL database. I'm using BULK INSERT dbo.YourTable FROM 'sample_data.csv' to pull data from the storage account. The NSP is enforced for both resources, so public connectivity is denied for resources outside the perimeter. I have experienced this behavior: the Azure SQL database CANNOT access the storage account when I run the command. I resolved it as follows:

1. I added an outbound rule in the NSP to reach the storage FQDN
2. I added an inbound rule in the NSP to allow the public IP of the Azure SQL database

When I do 1 and 2, Azure SQL is able to pump data from the storage. IMHO this is not the expected behavior for two resources in the same NSP; I expect that, as they are in the same NSP, they can communicate with each other. I have experienced a different behavior when using Key Vault in the same NSP. I'm using the Key Vault to get the encryption keys for the same storage account, and I didn't have to create any rule to make it able to communicate with the storage, as they are in the same NSP. I know Azure SQL support is in preview for NSP and Key Vault support is GA, but I want to ask whether the experienced behavior (the SQL database CANNOT connect to the storage even in the same NSP) is due to an unstable or unimplemented feature, or whether I'm missing something. What is the expected behavior? Thank you community!!

Antonio Buonaiuto, Nov 19, 2025

Azure traffic to storage account
Hello, I've set up a storage account in Tenant A, located in the AUEast region, with public access. I also created a VM in Tenant B, in the same region (AUEast). I'm able to use IP whitelisting on the storage account in Tenant A to allow traffic only from the VM in Tenant B. However, in the App Insights logs, the traffic appears as 10.X.X.X, likely because the VM is in the same region. I'm unsure why the public IP isn't reflected in the logs. Moreover, I am not sure about this part: https://learn.microsoft.com/en-us/azure/storage/common/storage-network-security-limitations#:~:text=You%20can%27t%20use%20IP%20network%20rules%20to%20restrict%20access%20to%20clients%20in%20the%20same%20Azure%20region%20as%20the%20storage%20account.%20IP%20network%20rules%20have%20no%20effect%20on%20requests%20that%20originate%20from%20the%20same%20Azure%20region%20as%20the%20storage%20account.%20Use%20Virtual%20network%20rules%20to%20allow%20same%2Dregion%20requests. This seems contradictory, as IP whitelisting is working on the storage account. I assume the explanation above applies only when the client is hosted in the same tenant and region as the storage account, and not when the client is in a different tenant, even if it's in the same region. I'd appreciate it if someone could shed some light on this. Thanks, Mohsen

Azure Express Route Peering with on Prem Firewall
Is there any way we can have ExpressRoute peer BGP directly with an on-prem firewall via a /29 subnet? The firewall is active/standby with a VIP. ExpressRoute peering requires two /30s; if I have active/standby and a VIP on the firewall, how is that going to work?

ahmedaljawad, Sep 23, 2025

Storage not reachable from network using service endpoint.
Hello, here is the situation. The storage account (file share) had assigned networks to allow access. We refreshed some changes in the NSG from the network using Bicep code (outbound was permit-all, no change; inbound, we updated the name of a rule). What happened: no more access to the storage, and no more connection on the SMB port; the port was reported as closed. We removed the storage's allowed-networks configuration (the status was still green), added it back, and magically it started to work. Any hints on what could have gone wrong? Thank you
Tags
- virtual network (51 topics)
- vpn gateway (27 topics)
- azure firewall (25 topics)
- virtual wan (18 topics)
- application gateway (13 topics)
- load balancer (12 topics)
- azure private link (10 topics)
- azure dns (10 topics)
- azure expressroute (10 topics)
- azure front door (8 topics)