Supercharging NVAs in Azure with Accelerated Connections
Hello folks,

If you run firewalls, routers, or SD-WAN NVAs in Azure and your pain is connection scale rather than raw Mbps, there is a feature you should look at: Accelerated Connections. It shifts connection processing to dedicated hardware in the Azure fleet and lets you size connection capacity per NIC, which translates into higher connections per second and more total active sessions for your virtual appliances and VMs.

This article distills a recent E2E chat I hosted with the Technical Product Manager working on Accelerated Connections and shows you how to enable and operate it safely in production. The demo and guidance below are based on that conversation and the current public documentation.

What Accelerated Connections is (and what it is not)

Accelerated Connections is configured at the NIC level of your NVAs or VMs. You can choose which NICs participate. That means you might enable it only on your high-throughput ingress and egress NICs and leave the management NIC alone. It improves two things that matter to infrastructure workloads:

- Connections per second (CPS). New flows are established much faster.
- Total active connections. Each NIC can hold far more simultaneous sessions before you hit limits.

It does not increase your nominal throughput number. The benefit is stability under high connection pressure, which helps reduce drops and flapping during surges. There is a small latency bump because you introduce another "bump in the wire," but in application terms it is typically negligible compared to the stability you gain.

How it works under the hood

In the traditional path, host CPUs evaluate SDN policies for flows that traverse your virtual network, and that becomes a bottleneck for connection scale. Accelerated Connections offloads that policy work onto specialized data processing hardware in the Azure fleet, so your NVAs and VMs are not capped by host CPU and flow-table memory constraints. Industry partners have described this as decoupling the SDN stack from the server and shifting the fast path onto DPUs residing in purpose-built appliances, delivered to you as a capability you attach at the vNIC. The result is much higher CPS and active connection scale for virtual firewalls, load balancers, and switches.

Sizing the feature per NIC with Auxiliary SKUs

You pick a performance tier per NIC using Auxiliary SKU values. Today the tiers are A1, A2, A4, and A8. These map to increasing capacity for total simultaneous connections and CPS, so you can right-size cost and performance to the NIC's role. As discussed in my chat with Yusef, the mnemonic is simple: A1 ≈ 1 million connections, A2 ≈ 2 million, A4 ≈ 4 million, A8 ≈ 8 million per NIC, along with increasing CPS ceilings. Choose the smallest tier that clears your peak, then monitor and adjust. Pricing is per hour for the auxiliary capability.

Tip: Start with A1 or A2 on the ingress and egress NICs of your NVAs, observe CPS and active session counters during peak events, then scale up only if needed.

Where to enable it

You can enable Accelerated Connections through the Azure portal, CLI, PowerShell, Terraform, or templates. The setting is applied on the network interface. In the portal, export the NIC's template and you will see two properties you care about: auxiliaryMode and auxiliarySku. Set auxiliaryMode to AcceleratedConnections and choose an auxiliarySku tier (A1, A2, A4, A8).

Note: Accelerated Connections is currently a limited GA capability. You may need to sign up before you can configure it in your subscription.
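For illustration, here is how those two properties might be set from the Azure CLI when creating a data-path NIC. This is a sketch only: the --auxiliary-mode and --auxiliary-sku parameters assume a recent Azure CLI version and a subscription enrolled in the limited GA, and all resource names are placeholders. Verify the exact syntax against the Limited GA documentation linked in the Resources section below.

```bash
# Sketch: create a data-path NIC with Accelerated Connections enabled.
# Resource names are placeholders; requires a subscription enrolled in the limited GA.
az network nic create \
  --resource-group rg-nva-prod \
  --name nva-untrust-nic \
  --vnet-name vnet-hub \
  --subnet snet-untrust \
  --accelerated-networking true \
  --auxiliary-mode AcceleratedConnections \
  --auxiliary-sku A1        # A1, A2, A4, or A8 per the sizing guidance above
```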
Enablement and change windows

- Standalone VMs. You can enable Accelerated Connections with a stop then start of the VM after updating the NIC properties. Plan a short outage.
- Virtual Machine Scale Sets. As of now, moving existing scale sets onto Accelerated Connections requires re-deployment. Parity with the standalone flow is planned, but do not bank on it for current rollouts.
- Changing SKUs later. Moving from A1 to A2 or similar also implies a downtime window. Treat it as an in-place maintenance event.

Operationally, approach this iteratively. Update a lower-traffic region first, validate, then roll out broadly. Use active-active NVAs behind a load balancer so one instance can drain while you update the other.

Operating guidance for IT Pros

- Pick the right NICs. Do not enable on the management NIC. Focus on the interfaces carrying high connection volume.
- Baseline and monitor. Before enabling, capture CPS and active session metrics from your NVAs. After enabling, verify reductions in connection drops at peak. The point is stability under pressure.
- Capacity planning. Start at A1 or A2. Move up only if you see sustained saturation at peak. The tiers are designed so you do not pay for headroom you do not need.
- Expect a tiny latency increase. There is another hop in the path. In real application flows the benefit of fewer drops and higher CPS outweighs the added microseconds. Validate with your own A/B tests.
- Plan change windows. Enabling on existing VMs and resizing the Auxiliary SKU both involve downtime. Use active-active pairs behind a load balancer and drain one side while you flip the other.

Why this matters

Customers in regulated and high-traffic industries like health care often found that connection scale forced them to horizontally expand NVAs, which inflated both cloud spend and licensing, and complicated operations. Offloading the SDN policy work to dedicated hardware allows you to process many more connections on fewer instances, and to do so more predictably.

Resources

- Azure Accelerated Networking overview: https://learn.microsoft.com/azure/virtual-network/accelerated-networking-overview
- Accelerated connections on NVAs or other VMs (Limited GA): https://learn.microsoft.com/azure/networking/nva-accelerated-connections
- Manage accelerated networking for Azure Virtual Machines: https://learn.microsoft.com/azure/virtual-network/manage-accelerated-networking
- Network optimized virtual machine connection acceleration (Preview): https://learn.microsoft.com/azure/virtual-network/network-optimized-vm-network-connection-acceleration
- Create an Azure Virtual Machine with Accelerated Networking: https://docs.azure.cn/virtual-network/create-virtual-machine-accelerated-networking

Next steps

1. Validate eligibility. Confirm your subscription is enabled for Accelerated Connections and that your target regions and VM families are supported (see the Learn article linked in Resources).
2. Select candidate workloads. Prioritize NVAs or VMs that hit CPS or flow-table limits at peak. Use existing telemetry to pick the first region and appliance pair.
3. Pilot on one NIC per appliance. Enable on the data-path NIC, start with A1 or A2, then stop/start the VM during a short maintenance window. Measure before and after (see the sketch after this list).
4. Roll out iteratively. Expand to additional regions and appliances using active-active patterns behind a load balancer to minimize downtime.
5. Right-size the SKU. If you observe sustained headroom, stay put. If you approach limits, step up a tier during a planned window.
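For the pilot in step 3, the standalone enablement flow might look like the following from the Azure CLI. This is a minimal sketch, assuming a subscription enrolled in the limited GA and a CLI version that exposes the auxiliary NIC parameters; all resource names are placeholders.

```bash
# Sketch: enable Accelerated Connections on an existing NVA's data-path NIC.
# Per the guidance above: stop (deallocate) the VM, update the NIC, then start it.
# Drain this node behind your load balancer before the change window.
az vm deallocate --resource-group rg-nva-prod --name nva-fw-01

# Set the auxiliary properties on the data-path NIC (not the management NIC).
az network nic update \
  --resource-group rg-nva-prod \
  --name nva-untrust-nic \
  --auxiliary-mode AcceleratedConnections \
  --auxiliary-sku A2        # resizing later (e.g. A2 to A4) repeats this same flow

az vm start --resource-group rg-nva-prod --name nva-fw-01
```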
Unlocking Private IP for Azure Application Gateway: Security, Compliance, and Practical Deployment
If you’re responsible for securing, scaling, and optimizing cloud infrastructure, this update is for you. Based on my recent conversation with Vyshnavi Namani, Product Manager on the Azure Networking team, I’ll break down what private IP means for your environment, why it matters, and how to get started.

Why Private IP for Application Gateway?

Application Gateway has long been the go-to Layer 7 load balancer for web traffic in Azure. It manages, routes, and secures requests to your backend resources, offering SSL offloading and integrated Web Application Firewall (WAF) capabilities. But until now, public IPs were the norm, meaning exposure to the internet and the need for extra security layers.

With Private IP, your Application Gateway can be deployed entirely within your virtual network (VNet), isolated from public internet access. This is a huge win for organizations with strict security, compliance, or policy requirements. Now, your traffic stays internal, protected by Azure’s security layers, and only accessible to authorized entities within your ecosystem.

Key Benefits for IT Pros

🔒 No Public Exposure
With a private-only Application Gateway, no public IP is assigned. The gateway is accessible only via internal networks, eliminating any direct exposure to the public internet. This removes a major attack vector by keeping traffic entirely within your trusted network boundaries.

📌 Granular Network Control
Private IP mode grants full control over network policies. Strict NSG rules can be applied (no special exceptions needed for Azure management traffic) and custom route tables can be used (including a 0.0.0.0/0 route to force outbound traffic through on-premises or appliance-based security checkpoints).

☑️ Compliance Alignment
Internal-only gateways help meet enterprise compliance and data governance requirements. Sensitive applications remain isolated within private networks, aiding data residency and preventing unintended data exfiltration. Organizations with “no internet exposure” policies can now include Application Gateway without exception.

Architectural Considerations and Deployment Prerequisites

To deploy Azure Application Gateway with Private IP, you should plan for the following (a CLI sketch of the feature registration and NSG posture follows this list):

- SKU & Feature Enablement: Use the v2 SKU (Standard_v2 or WAF_v2). The Private IP feature is GA but may require opt-in via the EnableApplicationGatewayNetworkIsolation flag in the Azure portal, CLI, or PowerShell.
- Dedicated Subnet: Deploy the gateway in a dedicated subnet (no other resources allowed). Recommended size: /24 for v2. This enables clean NSG and route table configurations.
- NSG Configuration: Inbound, allow AzureLoadBalancer for health probes and internal client IPs on the required ports. Outbound, allow only necessary internal destinations and apply a DenyAll rule to block internet egress.
- User-Defined Routes (UDRs): Optional but recommended for forced tunneling. Set 0.0.0.0/0 to route traffic through an NVA, Azure Firewall, or ExpressRoute gateway.
- Client Connectivity: Ensure internal clients (VMs, App Services, on-prem users via VPN/ExpressRoute) can reach the gateway’s private IP. Use Private DNS or custom DNS zones for name resolution.
- Outbound Dependencies: For services like Key Vault or telemetry, use Private Link or NAT Gateway if internet access is required. Plan NSGs and UDRs accordingly.
- Management Access: Admins must be on the VNet or a connected network to test or manage the gateway. Azure handles control-plane traffic internally via a management NIC.
- Migration Notes: Existing gateways may require redeployment to switch to private-only mode. Feature registration must be active before provisioning.
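To make the prerequisites concrete, here is a hedged Azure CLI sketch covering the opt-in registration for the flag named above and the NSG posture described in the list. Resource, NSG, and rule names are placeholders, the source prefix and ports should match your own clients and listeners, and this is a starting point rather than a definitive configuration.

```bash
# Sketch: register the opt-in flag mentioned above (one-time, per subscription).
az feature register --namespace Microsoft.Network \
  --name EnableApplicationGatewayNetworkIsolation
az feature show --namespace Microsoft.Network \
  --name EnableApplicationGatewayNetworkIsolation --query properties.state
az provider register --namespace Microsoft.Network   # re-propagate once state is "Registered"

# Sketch: NSG posture for the dedicated Application Gateway subnet.
# Allow internal clients and Azure load balancer health probes in; deny internet egress out.
az network nsg rule create -g rg-appgw --nsg-name nsg-appgw -n AllowInternalClients \
  --priority 100 --direction Inbound --access Allow --protocol Tcp \
  --source-address-prefixes 10.0.0.0/8 --destination-port-ranges 443
az network nsg rule create -g rg-appgw --nsg-name nsg-appgw -n AllowAzureLoadBalancer \
  --priority 110 --direction Inbound --access Allow --protocol '*' \
  --source-address-prefixes AzureLoadBalancer --destination-port-ranges '*'
az network nsg rule create -g rg-appgw --nsg-name nsg-appgw -n DenyInternetOutbound \
  --priority 4000 --direction Outbound --access Deny --protocol '*' \
  --destination-address-prefixes Internet --destination-port-ranges '*'
```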
Practical Scenarios

Here are several practical scenarios where deploying Azure Application Gateway with Private IP is especially beneficial:

🔐 Internal-Only Web Applications
Organizations hosting intranet portals, HR systems, or internal dashboards can use Private IP to ensure these apps are only accessible from within the corporate network, via VPN, ExpressRoute, or peered VNets.

🏥 Regulated Industries (Healthcare, Finance, Government)
Workloads that handle sensitive data (e.g., patient records, financial transactions) often require strict network isolation. Private IP ensures traffic never touches the public internet, supporting compliance with HIPAA, PCI-DSS, or government data residency mandates.

🧪 Dev/Test Environments
Development teams can deploy isolated environments for testing without exposing them externally. This reduces risk and avoids accidental data leaks during early-stage development.

🌐 Hybrid Network Architectures
In hybrid setups where on-prem systems interact with Azure-hosted services, Private IP gateways can route traffic securely through ExpressRoute or VPN, maintaining internal-only access and enabling centralized inspection via NVAs.

🛡️ Zero Trust Architectures
Private IP supports zero trust principles by enforcing least-privilege access, denying internet egress, and requiring explicit NSG rules for all traffic—ideal for organizations implementing segmented, policy-driven networks.

Resources

- https://docs.microsoft.com/azure/application-gateway/
- https://learn.microsoft.com/azure/application-gateway/configuration-overview
- https://learn.microsoft.com/azure/virtual-network/network-security-groups-overview
- https://learn.microsoft.com/azure/virtual-network/virtual-network-peering-overview

Next Steps

1. Evaluate Your Workloads: Identify apps and services that require internal-only access.
2. Plan Migration: Map out your VNets, subnets, and NSGs for a smooth transition.
3. Enable Private IP Feature: Register and deploy in your Azure subscription.
4. Test Security: Validate that only intended traffic flows through your gateway.

Final Thoughts

Private IP for Azure Application Gateway is an improvement for secure, compliant, and efficient cloud networking. If you’re an IT pro managing infrastructure, now’s the time to check out this feature and level up your Azure architecture.

Have questions or want to share your experience? Drop a comment below.

Cheers!

Pierre
🔐 Fortifying Your Cloud: Mastering Azure Networking Services to Protect Your Applications

In today’s cloud-first world, securing applications is not optional—it’s mission-critical. Join us for an in-depth exploration of Azure network security services and learn how to:

✅ Review and compare networking application delivery services, their features, and limitations
✅ Identify the right security layers for hosting applications in Azure
✅ See a hands-on demo with Azure Front Door and other powerful services in action

📅 Don’t miss this opportunity to strengthen your Azure skills and boost your cloud security expertise.
👉 Register now and take your cloud knowledge from Zero to Hero!

🗓️ Date: 4 September 2025
⏰ Time: 19:00 (AEST)
🎙️ Speaker: Santhosh Anandakrishnan
📌 Topic: Fortifying Your Cloud: Mastering Azure Networking Services to Protect Your Applications

Deploy Dynamic Routing (BGP) between Azure VPN and Third-Party Firewall (Palo Alto)
Overview

This blog explains how to deploy dynamic routing (BGP) between Azure VPN and a third-party firewall. You can refer to this topology and deployment guide in scenarios where you need VPN connectivity between an on-premises third-party VPN device and Azure VPN, or any cloud environment.

What is BGP?

Border Gateway Protocol (BGP) is a standardized exterior gateway protocol used to exchange routing information across the internet and between different autonomous systems (AS). It is the protocol that makes the internet work by enabling data routing between different networks. Here are some key points about BGP:

- Routing Between Autonomous Systems: BGP is used for routing between large networks that are under different administrative control, known as autonomous systems (AS). Each AS is assigned a unique number.
- Path Vector Protocol: BGP is a path vector protocol, meaning it maintains path information that gets updated dynamically as routes are added or removed. This helps in making routing decisions based on path attributes.
- Scalability: BGP is designed to handle a large number of routes, making it highly scalable for use on the internet.
- Policy-Based Routing: BGP allows network administrators to set policies that can influence routing decisions. For example, administrators can prefer certain routes over others based on specific criteria such as path length or AS path.
- Peering: BGP peers are routers that establish a connection to exchange routing information. Peering can be either internal (within the same AS) or external (between different AS).
- Route Advertisement: BGP advertises routes along with various attributes such as AS path, next hop, and network prefix. This helps in making informed decisions on the best route to take.
- Convergence: BGP can take some time to converge, meaning to stabilize its routing tables after a network change. However, it is designed to be very stable once converged.
- Use in Azure: In Azure, BGP is used to facilitate dynamic routing in scenarios like connecting Azure VNets to on-premises networks via VPN gateways. This dynamic routing allows for more resilient and flexible network designs. Switching from static routing to BGP on your Azure VPN gateway enables the Azure network and your on-premises network to exchange routing information automatically, leading to potentially better failover and redundancy.

Why BGP?

BGP is the standard routing protocol commonly used on the internet to exchange routing and reachability information between two or more networks. When used in the context of Azure Virtual Networks, BGP enables the Azure VPN gateways and your on-premises VPN devices, called BGP peers or neighbors, to exchange "routes" that inform both gateways about the availability and reachability of those prefixes through the gateways or routers involved. BGP can also enable transit routing among multiple networks by propagating routes a BGP gateway learns from one BGP peer to all other BGP peers.

Diagram

(Topology diagram from the original post: a firewall VNet with a VM-Series Palo Alto, an Azure VPN VNet with a gateway subnet, and a peered test VNet.)

Prerequisites

- Firewall Network: a firewall with three interfaces (Public, Private, Management). This lab uses a VM-Series Palo Alto firewall.
- Azure VPN Network: a test VM and a Gateway Subnet.
- Test Network Connected to the Firewall Network: an Azure VM with a UDR pointing to the firewall's internal interface. The test network should be peered with the firewall network (see the CLI sketch after this list).
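For the third prerequisite, here is a minimal Azure CLI sketch of the UDR and VNet peering setup, assuming both VNets live in the same resource group. Resource names, subnet names, and the firewall's internal (trust) interface IP are placeholders for your own values.

```bash
# Sketch: route the test VNet's outbound traffic via the firewall's internal interface
# and peer the test VNet with the firewall VNet. All names and IPs are placeholders.
FW_INTERNAL_IP="<firewall-trust-interface-ip>"

az network route-table create -g rg-bgp-lab -n rt-test-vnet
az network route-table route create -g rg-bgp-lab \
  --route-table-name rt-test-vnet -n default-via-firewall \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address "$FW_INTERNAL_IP"
az network vnet subnet update -g rg-bgp-lab \
  --vnet-name vnet-test -n snet-workload \
  --route-table rt-test-vnet

# Peer the test VNet with the firewall VNet (both directions).
az network vnet peering create -g rg-bgp-lab -n test-to-firewall \
  --vnet-name vnet-test --remote-vnet vnet-firewall --allow-forwarded-traffic
az network vnet peering create -g rg-bgp-lab -n firewall-to-test \
  --vnet-name vnet-firewall --remote-vnet vnet-test --allow-forwarded-traffic
```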
Configuration

Part 1: Configure Azure VPN with BGP enabled

Create the virtual network gateway:
- Create a Virtual Network Gateway from the marketplace.
- Provide the name, gateway type (VPN), VPN SKU, VNet (with a dedicated Gateway Subnet), and public IP.
- Enable BGP and provide the AS number.
- Create.

Note: Azure will auto-provision a local BGP peer with an IP address from the Gateway Subnet. After deployment, the configuration will look similar to the screenshot below. Make a note of the public IP and BGP peer IP generated; we need these while configuring the VPN at the remote end.

Create the local network gateway:
The Local Network Gateway represents the firewall VPN network configuration, where you provide the remote configuration parameters.
- Provide the name and the remote peer's public IP.
- In the address space, specify the remote BGP peer IP (/32) (the Router ID in the case of Palo Alto). Note that if you were configuring static routes instead of dynamic routing, you would have to advertise every remote network range you want to reach through the VPN; BGP makes this process much simpler.
- In the Advanced tab, enable BGP and provide the remote ASN and BGP peer IP.
- Create.

Create the connection with the default crypto profile:
Once the VPN gateway and local network gateway are provisioned, you can build the connection, which carries the IPsec and IKE configuration.
- Go to the VPN gateway and, under Settings, add a connection.
- Provide the name, VPN gateway, local network gateway, and pre-shared key.
- Enable BGP.
- If required, modify the IPsec and IKE crypto settings; otherwise leave them as default.
- Create.

That completes the Azure end of the configuration; now we can move to the firewall side.

Part 2: Configure the Palo Alto firewall VPN with BGP enabled

Create the IKE Gateway with the default IKE crypto profile:
- Provide the IKE version, local VPN interface, peer IP, and pre-shared key.

Create the IPsec tunnel with the default IPsec crypto profile:
- Create a tunnel interface.
- Create the IPsec tunnel: provide the tunnel interface, IPsec crypto profile, and IKE gateway.
- Since we are configuring a route-based VPN, the tunnel interface is essential for routing the traffic that needs to be encrypted.

With this configuration, your tunnel should be up. Now finish the remaining BGP configuration:
- Configure a loopback interface to represent the BGP virtual router. We have assigned 10.0.17.5 to the interface, which is a free IP from the public subnet.
- Configure the virtual router.
- Configure the redistribution profile as shown below; this controls which routes are redistributed to BGP peer routers.
- Enable BGP and configure the local and peer BGP parameters: provide the Router ID and AS number, and make sure to enable the Install Route option.
- Configure the EBGP peer group and peer with the local BGP peer IP, the remote (Azure) BGP peer IP, and the remote (Azure) ASN. Also specify the redistribution profile, and make sure to enable Allow Redistribute Default Route if you need to propagate the default route to the BGP peer router.
- Create a static route for the Azure BGP peer, 10.0.1.254/32.
- Commit the changes.

Test Results

Now we can test the connectivity; the necessary NAT rules and default route are already configured on the firewall. You can see the propagated routes in both the Azure VPN gateway and the Palo Alto firewall.
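On the Azure side, the same information shown in the screenshots below can also be pulled with the CLI. A quick sketch, with placeholder resource, gateway, and NIC names:

```bash
# Sketch: check BGP peering and route exchange from the Azure side.
az network vnet-gateway list-bgp-peer-status -g rg-bgp-lab -n vpngw-bgp -o table
az network vnet-gateway list-learned-routes -g rg-bgp-lab -n vpngw-bgp -o table
az network vnet-gateway list-advertised-routes -g rg-bgp-lab -n vpngw-bgp \
  --peer <palo-alto-bgp-peer-ip> -o table

# Effective routes on the test VM's NIC (should include the prefixes learned over the tunnel).
az network nic show-effective-route-table -g rg-bgp-lab -n testvm1-nic -o table
```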
Firewall NAT rules:

| Name | Src Zone | Dst Zone | Destination Interface | Destination Address | Service | NAT Action |
|---|---|---|---|---|---|---|
| nattovm1 | any | Untrust | any | untrust_interface_pub_ip | 3389 | DNAT to VM1 IP |
| nattovm2 | any | Untrust | any | untrust_interface_pub_ip | 3000 | DNAT to VM2 IP |
| nattointernet | any | Untrust | ethernet1/1 | default (0.0.0.0/0) | | SNAT to Eth1/1 |

Static route configured: (screenshot)

The following screenshots from the lab show the results:
- Azure VPN gateway connection status and propagated routes
- Azure test VM1 (10.0.0.4) effective routes
- Palo Alto BGP summary
- Palo Alto BGP connection status
- Palo Alto BGP received routes
- Palo Alto BGP propagated routes
- Final forwarding table
- Ping and traceroute results from test VM1 to test VM2

Conclusion

BGP simplifies the route advertisement process. There are many more configuration options in BGP that we can use to keep routing running smoothly, and BGP also enables automatic redundancy and high availability. Hence, it is always recommended to configure BGP when it comes to production-grade, complex networking.
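As a closing reference, the Azure-side steps from Part 1 can also be scripted for repeatability. This is a hedged Azure CLI sketch only: names, IPs, ASNs, and the pre-shared key are placeholders, and the Palo Alto side is still configured in the firewall UI as described in Part 2.

```bash
# Sketch: script the Part 1 (Azure side) configuration end to end.
# All names, IPs, ASNs, and the pre-shared key are placeholders.
az network public-ip create -g rg-bgp-lab -n vpngw-pip --sku Standard

# VPN gateway with BGP enabled (deployment typically takes 30-45 minutes).
az network vnet-gateway create -g rg-bgp-lab -n vpngw-bgp \
  --vnet vnet-azurevpn --public-ip-address vpngw-pip \
  --gateway-type Vpn --vpn-type RouteBased --sku VpnGw1 \
  --asn 65515

# Local network gateway representing the Palo Alto firewall:
# its public IP, its BGP peer IP (/32), and its ASN.
az network local-gateway create -g rg-bgp-lab -n lng-paloalto \
  --gateway-ip-address <firewall-public-ip> \
  --local-address-prefixes <firewall-bgp-peer-ip>/32 \
  --asn 65010 --bgp-peering-address <firewall-bgp-peer-ip>

# IPsec connection with BGP enabled, using the default crypto profile.
az network vpn-connection create -g rg-bgp-lab -n cn-to-paloalto \
  --vnet-gateway1 vpngw-bgp --local-gateway2 lng-paloalto \
  --shared-key <pre-shared-key> --enable-bgp
```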