Delivering web applications over IPv6
The IPv4 address space pool has been exhausted for some time now, meaning there is no new public address space available for allocation from Internet Registries. The internet continues to run on IPv4 through technical measures such as Network Address Translation (NAT) and Carrier Grade NAT, and through reallocation of address space via IPv4 address space trading. IPv6 will ultimately be the dominant network protocol on the internet, as the IPv4 life-support mechanisms used by network operators, hosting providers and ISPs will eventually reach the limits of their scalability. Mobile networks are already changing to IPv6-only APNs; reachability of IPv4-only destinations from these mobile networks is through NAT64 gateways, which sometimes causes problems.

Client uptake of IPv6 is progressing steadily. Google reports 49% of clients connecting to its services over IPv6 globally, with France leading at 80%.

IPv6 client access measured by Google:

Meanwhile, countries around the world are requiring IPv6 reachability for public web services. Examples include the United States, European Union member states such as the Netherlands, as well as Norway, India and Japan.

IPv6 adoption per country measured by Google:

Entities needing to comply with these mandates are looking at Azure's networking capabilities for solutions. Azure supports IPv6 for both private and public networking, and its capabilities have developed and expanded over time. This article discusses strategies to build and deploy IPv6-enabled public, internet-facing applications that are reachable from IPv6(-only) clients.

Azure Networking IPv6 capabilities

Azure's private networking capabilities center on Virtual Networks (VNETs) and the components that are deployed within them. Azure VNETs are IPv4/IPv6 dual stack capable: a VNET must always have IPv4 address space allocated, and can also have IPv6 address space. Virtual machines in a dual stack VNET have both an IPv4 and an IPv6 address from the VNET range, and can be placed behind IPv6-capable External and Internal Load Balancers. VNETs can be connected through VNET peering, which effectively turns the peered VNETs into a single routing domain. It is now possible to peer only the IPv6 address spaces of VNETs, so that the IPv4 space assigned to the VNETs can overlap and communication across the peering is over IPv6. The same is true for connectivity to on-premises networks over ExpressRoute: the Private Peering can be enabled for IPv6 only, so that VNETs in Azure do not have to have unique IPv4 address space assigned, which may be in short supply in an enterprise.

Not all internal networking components are IPv6 capable yet. The most notable exceptions are VPN Gateway, Azure Firewall and Virtual WAN; IPv6 compatibility is on the roadmap for these services, but target availability dates have not been communicated.

But now let's focus on Azure's externally facing, public network services. Azure is ready to let customers publish their web applications over IPv6. IPv6-capable externally facing network services include:

- Azure Front Door
- Application Gateway
- External Load Balancer
- Public IP addresses and Public IP address prefixes
- Azure DNS
- Azure DDoS Protection
- Traffic Manager
- App Service (IPv6 support is in public preview)

IPv6 Application Delivery

IPv6 Application Delivery refers to the architectures and services that enable your web application to be accessible via IPv6. The goal is to provide an IPv6 address and connectivity for clients, while often continuing to run your application on IPv4 internally.
Key benefits of adopting IPv6 in Azure include:

✅ Expanded Client Reach: IPv4-only websites risk being unreachable to IPv6-only networks. By enabling IPv6, you expand your reach into growing mobile and IoT markets that use IPv6 by default. Governments and enterprises increasingly mandate IPv6 support for public-facing services.

✅ Address Abundance & No NAT: IPv6 provides a virtually unlimited address pool, mitigating IPv4 exhaustion concerns. This abundance means each service can have its own public IPv6 address, often removing the need for complex NAT schemes. End-to-end addressing can simplify connectivity and troubleshooting.

✅ Dual-Stack Compatibility: Azure supports dual-stack deployments where services listen on both IPv4 and IPv6. This allows a single application instance or endpoint to serve both types of clients seamlessly. Dual-stack ensures you don’t lose any existing IPv4 users while adding IPv6 capability.

✅ Performance and Future Services: Some networks and clients might experience better performance over IPv6. Also, being IPv6-ready prepares your architecture for future Azure features and services as IPv6 integration deepens across the platform.

The general steps to enable IPv6 connectivity for a web application in Azure are:

Plan and Enable IPv6 Addressing in Azure: Define an IPv6 address space in your Azure Virtual Network. Azure allows adding IPv6 address space to existing VNETs, making them dual-stack. A /56 segment is recommended for the VNET; Azure requires a /64 prefix for each subnet. If you have existing infrastructure, you might need to create new subnets or migrate resources, especially since older Application Gateway v1 instances cannot simply be “upgraded” to dual-stack.

Deploy or Update Frontend Services with IPv6: Choose a suitable Azure service (Application Gateway, External / Global Load Balancer, etc.) and configure it with a public IPv6 address on the frontend. This usually means selecting *Dual Stack* configuration so the service gets both an IPv4 and an IPv6 public IP. For instance, when creating an Application Gateway v2, you would specify IP address type: DualStack (IPv4 & IPv6). Azure Front Door provides dual-stack capabilities with its global endpoints by default.

Configure Backends and Routing: Usually your backend servers or services will remain on IPv4. At the time of writing (October 2025), Azure Application Gateway does not support IPv6 for backend pool addresses. This is fine because the frontend terminates the IPv6 network connection from the client and then initiates an IPv4 connection to the backend pool or origin. Ensure that your load balancing rules, listener configurations, and health probes are all set up to route traffic to these backends. Both IPv4 and IPv6 frontend listeners can share the same backend pool. Azure Front Door does support IPv6 origins.

Update DNS Records: Publish a DNS AAAA record for your application’s host name, pointing to the new IPv6 address. This step is critical so that IPv6-only clients can discover the IPv6 address of your service. If your service also has an IPv4 address, you will have both A (IPv4) and AAAA (IPv6) records for the same host name. DNS will thus allow clients of either IP family to connect. (In multi-region scenarios using Traffic Manager or Front Door, DNS configuration might be handled through those services, as discussed later.)

Test IPv6 Connectivity: Once set up, test from an IPv6-enabled network or use online tools to ensure the site is reachable via IPv6.
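For the last step, reachability can be verified from any IPv6-enabled host with standard tooling. The host name below is a placeholder for your own application's FQDN, not one of the example deployments used later in this article.

```bash
# Confirm an AAAA record is published for the application's host name
dig AAAA ipv6webapp.example.com +short

# Force the request over IPv6 only (-6) and show connection details (-v)
curl -6 -v https://ipv6webapp.example.com/
```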
Azure’s services like Application Gateway and Front Door will handle the dual-stack routing, but it’s good to verify that content loads on an IPv6-only connection and that SSL certificates, etc., work over IPv6 as they do for IPv4.

Next, we explore specific Azure services and architectures for IPv6 web delivery in detail.

External Load Balancer - single region

Azure External Load Balancer (also known as Public Load Balancer) can be deployed in a single region to provide IPv6 access to applications running on virtual machines or VM scale sets. External Load Balancer acts as a Layer 4 entry point for IPv6 traffic, distributing connections across backend instances. This scenario is ideal when you have stateless applications or services that do not require Layer 7 features like SSL termination or path-based routing.

Key IPv6 Features of External Load Balancer:

- Dual-Stack Frontend: Standard Load Balancer supports both IPv4 and IPv6 frontends simultaneously. When configured as dual-stack, the load balancer gets two public IP addresses – one IPv4 and one IPv6 – and can distribute traffic from both IP families to the same backend pool.
- Zone-Redundant by Default: Standard Load Balancer is zone-redundant by default, providing high availability across Azure Availability Zones within a region without additional configuration.
- IPv6 Frontend Availability: IPv6 support in Standard Load Balancer is available in all Azure regions. Basic Load Balancer does not support IPv6, so you must use the Standard SKU.
- IPv6 Backend Pool Support: While the frontend accepts IPv6 traffic, the load balancer will not translate IPv6 to IPv4. Backend pool members (VMs) must have private IPv6 addresses. You will need to add private IPv6 addressing to your existing IPv4-only VM infrastructure. This is in contrast to Application Gateway, discussed below, which terminates inbound IPv6 network sessions and connects to the backend over IPv4.
- Protocol Support: Supports TCP and UDP load balancing over IPv6, making it suitable for web applications and APIs, but also for non-web TCP- or UDP-based services accessed by IPv6-only clients.

To set up an IPv6-capable External Load Balancer in one region, follow this high-level process (a CLI sketch of the first steps is shown below; the NSG and DNS steps continue after it):

Enable IPv6 on the Virtual Network: Ensure the VNET where your backend VMs reside has an IPv6 address space. Add a dual-stack address space to the VNET (e.g., add an IPv6 space like 2001:db8:1234::/56 to complement your existing IPv4 space). Configure subnets that are dual-stack, containing both IPv4 and IPv6 prefixes (/64 for IPv6).

Create Standard Load Balancer with IPv6 Frontend: In the Azure Portal, create a new Standard Load Balancer. During creation, configure the frontend IP with both IPv4 and IPv6 public IP addresses. Create or select existing Standard SKU public IP resources – one for IPv4 and one for IPv6.

Configure Backend Pool: Add your virtual machines or VM scale set instances to the backend pool. Note that your backend instances will need to have private IPv6 addresses, in addition to IPv4 addresses, to receive inbound IPv6 traffic via the load balancer.

Set Up Load Balancing Rules: Create load balancing rules that map frontend ports to backend ports. For web applications, typically map port 80 (HTTP) and 443 (HTTPS) from both the IPv4 and IPv6 frontends to the corresponding backend ports. Configure health probes to ensure only healthy instances receive traffic.
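The steps above might look as follows with the Azure CLI. This is a minimal sketch with assumed resource names and the IPv6 documentation prefix; backend NIC IPv6 configurations, the NSG, and DNS records are covered in the steps that follow.

```bash
RG=rg-ipv6web; LOC=swedencentral

# Dual-stack VNET and subnet
az network vnet create -g $RG -n vnet-web -l $LOC \
  --address-prefixes 10.1.0.0/16 2001:db8:1234::/56 \
  --subnet-name snet-backend --subnet-prefixes 10.1.1.0/24 2001:db8:1234:1::/64

# Standard SKU public IPs, one per address family
az network public-ip create -g $RG -n pip-v4 -l $LOC --sku Standard --version IPv4
az network public-ip create -g $RG -n pip-v6 -l $LOC --sku Standard --version IPv6

# Standard Load Balancer with an IPv4 frontend, plus an additional IPv6 frontend and pool
az network lb create -g $RG -n lb-web -l $LOC --sku Standard \
  --frontend-ip-name fe-v4 --public-ip-address pip-v4 --backend-pool-name bp-v4
az network lb frontend-ip create -g $RG --lb-name lb-web -n fe-v6 --public-ip-address pip-v6
az network lb address-pool create -g $RG --lb-name lb-web -n bp-v6

# Health probe and one rule per frontend for port 80 (repeat for port 443)
az network lb probe create -g $RG --lb-name lb-web -n probe-http --protocol Tcp --port 80
az network lb rule create -g $RG --lb-name lb-web -n rule-http-v4 --protocol Tcp \
  --frontend-port 80 --backend-port 80 --frontend-ip-name fe-v4 \
  --backend-pool-name bp-v4 --probe-name probe-http
az network lb rule create -g $RG --lb-name lb-web -n rule-http-v6 --protocol Tcp \
  --frontend-port 80 --backend-port 80 --frontend-ip-name fe-v6 \
  --backend-pool-name bp-v6 --probe-name probe-http
```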
Configure Network Security Groups: Ensure an NSG is present on the backend VM's subnet, allowing inbound traffic from the internet to the port(s) of the web application. Inbound traffic is "secure by default" meaning that inbound connectivity from internet is blocked unless there is an NSG present that explicitly allows it. DNS Configuration: Create DNS records for your application: an A record pointing to the IPv4 address and an AAAA record pointing to the IPv6 address of the load balancer frontend. Outcome: In this single-region scenario, IPv6-only clients will resolve your application's hostname to an IPv6 address and connect to the External Load Balancer over IPv6. Example: Consider a web application running on a VM (or a VM scale set) behind an External Load Balancer in Sweden Central. The VM runs the Azure Region and Client IP Viewer containerized application exposed on port 80, which displays the region the VM is deployed in and the calling client's IP address. The load balancer's front-end IPv6 address has a DNS name of ipv6webapp-elb-swedencentral.swedencentral.cloudapp.azure.com. When called from a client with an IPv6 address, the application shows its region and the client's address. Limitations & Considerations: - Standard SKU Required: Basic Load Balancer does not support IPv6. You must use Standard Load Balancer. - Layer 4 Only: Unlike Application Gateway, External Load Balancer operates at Layer 4 (transport layer). It cannot perform SSL termination, cookie-based session affinity, or path-based routing. If you need these features, consider Application Gateway instead. - Dual stack IPv4/IPv6 Backend required: Backend pool members must have private IPv6 addresses to receive inbound IPv6 traffic via the load balancer. The load balancer does not translate between the IPv6 frontend and an IPv4 backend. - Outbound Connectivity: If your backend VMs need outbound internet access over IPv6, you need to configure an IPv6 outbound rule. Global Load Balancer - multi-region Azure Global Load Balancer (aka Cross-Region Load Balancer) provides a cloud-native global network load balancing solution for distributing traffic across multiple Azure regions. Unlike DNS-based solutions, Global Load Balancer uses anycast IP addressing to automatically route clients to the nearest healthy regional deployment through Microsoft's global network. Key Features of Global Load Balancer: - Static Anycast Global IP: Global Load Balancer provides a single static public IP address (both IPv4 and IPv6 supported) that is advertised from all Microsoft WAN edge nodes globally. This anycast address ensures clients always connect to the nearest available Microsoft edge node without requiring DNS resolution. - Geo-Proximity Routing: The geo-proximity load-balancing algorithm minimizes latency by directing traffic to the nearest region where the backend is deployed. Unlike DNS-based routing, there's no DNS lookup delay - clients connect directly to the anycast IP and are immediately routed to the best region. - Layer 4 Pass-Through: Global Load Balancer operates as a Layer 4 pass-through network load balancer, preserving the original client IP address (including IPv6 addresses) for backend applications to use in their logic. - Regional Redundancy: If one region fails, traffic is automatically routed to the next closest healthy regional load balancer within seconds, providing instant global failover without DNS propagation delays. 
Architecture Overview: Global Load Balancer sits in front of multiple regional Standard Load Balancers, each deployed in different Azure regions. Each regional load balancer serves a local deployment of your application with IPv6 frontends. The global load balancer provides a single anycast IP address that clients worldwide can use to access your application, with automatic routing to the nearest healthy region.

Multi-Region Deployment Steps:

Deploy Regional Load Balancers: Create Standard External Load Balancers in multiple Azure regions (e.g., Sweden Central and East US 2). Configure each with dual-stack frontends (IPv4 and IPv6 public IPs) and connect them to regional VM deployments or VM scale sets running your application.

Configure Global Frontend IP address: Create a Global tier public IPv6 address for the frontend, in one of the supported Global Load Balancer home regions. This becomes your application's global anycast address.

Create Global Load Balancer: Deploy the Global Load Balancer in the same home region. The home region is where the global load balancer resource is deployed - it doesn't affect traffic routing.

Add Regional Backends: Configure the backend pool of the Global Load Balancer to include your regional Standard Load Balancers. Each regional load balancer becomes an endpoint in the global backend pool. The global load balancer automatically monitors the health of each regional endpoint.

Set Up Load Balancing Rules: Create load balancing rules mapping frontend ports to backend ports. For web applications, typically map port 80 (HTTP) and 443 (HTTPS). The backend port on the global load balancer must match the frontend port of the regional load balancers.

Configure Health Probes: Global Load Balancer automatically monitors the health of regional load balancers every 5 seconds. If a regional load balancer's availability drops to 0, it is automatically removed from rotation, and traffic is redirected to other healthy regions.

DNS Configuration: Create DNS records pointing to the global load balancer's anycast IP addresses. Create both A (IPv4) and AAAA (IPv6) records for your application's hostname pointing to the global load balancer's static IPs.

Outcome: IPv6 clients connecting to your application's hostname will resolve to the global load balancer's anycast IPv6 address. When they connect to this address, the Microsoft global network infrastructure automatically routes their connection to the nearest participating Azure region. The regional load balancer then distributes the traffic across local backend instances. If that region becomes unavailable, subsequent connections are automatically routed to the next nearest healthy region.

Example: Our web application, which displays the region it is in and the calling client's IP address, now runs on VMs behind External Load Balancers in Sweden Central and East US 2. The External Load Balancers' front-ends are in the backend pool of a Global Load Balancer, which has a Global tier front-end IPv6 address. The front-end has an FQDN of `ipv6webapp-glb.eastus2.cloudapp.azure.com` (the region designation `eastus2` in the FQDN refers to the Global Load Balancer's "home region", into which the Global tier public IP must be deployed). When called from a client in Europe, Global Load Balancer directs the request to the instance deployed in Sweden Central. When called from a client in the US, Global Load Balancer directs the request to the instance deployed in East US 2.
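Assuming the regional load balancers are already in place, the global frontend and DNS records might be created along these lines with the Azure CLI. This is a sketch only: resource names are illustrative, the DNS zone is assumed to be hosted in Azure DNS, and the cross-region load balancer commands mirror the regional `az network lb` commands (exact flags may vary by CLI version).

```bash
RG=rg-ipv6web-global; HOME_REGION=eastus2

# Global tier Standard public IPv6 address used as the anycast frontend
az network public-ip create -g $RG -n pip-glb-v6 -l $HOME_REGION \
  --sku Standard --tier Global --version IPv6

# Cross-region (global) load balancer in the home region, fronted by the global IP
az network cross-region-lb create -g $RG -n glb-web -l $HOME_REGION \
  --frontend-ip-name fe-glb-v6 --public-ip-address pip-glb-v6 --backend-pool-name bp-regional
# The regional Standard Load Balancer frontends are then added to bp-regional and
# load balancing rules are created for ports 80/443 (not shown here).

# Publish A and AAAA records for the application's host name (addresses are placeholders)
az network dns record-set aaaa add-record -g $RG -z contoso.com -n www --ipv6-address "<global IPv6 address>"
az network dns record-set a add-record -g $RG -z contoso.com -n www --ipv4-address "<global IPv4 address>"
```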
Features: - Client IP Preservation: The original IPv6 client address is preserved and available to backend applications, enabling IP-based logic and compliance requirements. - Floating IP Support: Configure floating IP at the global level for advanced networking scenarios requiring direct server return or high availability clustering. - Instant Scaling: Add or remove regional deployments behind the global endpoint without service interruption, enabling dynamic scaling for traffic events. - Multiple Protocol Support: Supports both TCP and UDP traffic distribution across regions, suitable for various application types beyond web services. Limitations & Considerations: - Home Region Requirement: Global Load Balancer can only be deployed in specific home regions, though this doesn't affect traffic routing performance. - Public Frontend Only: Global Load Balancer currently supports only public frontends - internal/private global load balancing is not available. - Standard Load Balancer Backends: Backend pool can only contain Standard Load Balancers, not Basic Load Balancers or other resource types. - Same IP Version Requirement: NAT64 translation isn't supported - frontend and backend must use the same IP version (IPv4 or IPv6). - Port Consistency: Backend port on global load balancer must match the frontend port of regional load balancers for proper traffic flow. - Health Probe Dependencies: Regional load balancers must have proper health probes configured for the global load balancer to accurately assess regional health. Comparison with DNS-Based Solutions: Unlike Traffic Manager or other DNS-based global load balancing solutions, Global Load Balancer provides: - Instant Failover: No DNS TTL delays - failover happens within seconds at the network level. - True Anycast: Single IP address that works globally without client-side DNS resolution. - Consistent Performance: Geo-proximity routing through Microsoft's backbone network ensures optimal paths. - Simplified Management: No DNS record management or TTL considerations. This architecture delivers global high availability and optimal performance for IPv6 applications through anycast routing, making it a good solution for latency-sensitive applications requiring worldwide accessibility with near-instant regional failover. Application Gateway - single region Azure Application Gateway can be deployed in a single region to provide IPv6 access to applications in that region. Application Gateway acts as the entry point for IPv6 traffic, terminating HTTP/S from IPv6 clients and forwarding to backend servers over IPv4. This scenario works well when your web application is served from one Azure region and you want to enable IPv6 connectivity for it. Key IPv6 Features of Application Gateway (v2 SKU): - Dual-Stack Frontend: Application Gateway v2 supports both [IPv4 and IPv6 frontends](https://learn.microsoft.com/en-us/azure/application-gateway/application-gateway-faq). When configured as dual-stack, the gateway gets two IP addresses – one IPv4 and one IPv6 – and can listen on both. (IPv6-only is not supported; IPv4 is always paired). IPv6 support requires Application Gateway v2, v1 does not support IPv6. - No IPv6 on Backends: The backend pool must use IPv4 addresses. IPv6 addresses for backend servers are currently not supported. This means your web servers can remain on IPv4 internal addresses, simplifying adoption because you only enable IPv6 on the frontend. 
- WAF Support: The Application Gateway Web Application Firewall (WAF) will inspect IPv6 client traffic just as it does IPv4. Single Region Deployment Steps: To set up an IPv6-capable Application Gateway in one region, consider the following high-level process: Enable IPv6 on the Virtual Network: Ensure the region’s VNET where the Application Gateway will reside has an IPv6 address space. Configure a subnet for the Application Gateway that is dual-stack (contains both an IPv4 subnet prefix and an IPv6 /64 prefix). Deploy Application Gateway (v2) with Dual Stack Frontend: Create a new Application Gateway using the Standard_v2 or WAF_v2 SKU. Populate Backend Pool: Ensure your backend pool (the target application servers or service) contains (DNS names pointing to) IPv4 addresses of your actual web servers. IPv6 addresses are not supported for backends. Configure Listeners and Rules: Set up listeners on the Application Gateway for your site. When creating an HTTP(S) listener, you choose which frontend IP to use – you would create one listener for IPv4 address and one for IPv6. Both listeners can use the same domain name (hostname) and the same underlying routing rule to your backend pool. Testing and DNS: After the gateway is deployed and configured, note the IPv6 address of the frontend (you can find it in the Gateway’s overview or in the associated Public IP resource). Update your application’s DNS records: create an AAAA record pointing to this IPv6 address (and update the A record to point to the IPv4 if it changed). With DNS in place, test the application by accessing it from an IPv6-enabled client or tool. Outcome: In this single-region scenario, IPv6-only clients will resolve your website’s hostname to an IPv6 address and connect to the Application Gateway over IPv6. The Application Gateway then handles the traffic and forwards it to your application over IPv4 internally. From the user perspective, the service now appears natively on IPv6. Importantly, this does not require any changes to the web servers, which can continue using IPv4. Application Gateway will include the source IPv6 address in an X-Forwarded-For header, so that the backend application has visibility of the originating client's address. Example: Our web application, which displays the region it is deployed in and the calling client's IP address, now runs on a VM behind Application Gateway in Sweden Central. The front-end has an FQDN of `ipv6webapp-appgw-swedencentral.swedencentral.cloudapp.azure.com`. Application Gateway terminates the IPv6 connection from the client and proxies the traffic to the application over IPv4. The client's IPv6 address is passed in the X-Forwarded-For header, which is read and displayed by the application. Calling the application's API endpoint at `/api/region` shows additional detail, including the IPv4 address of the Application Gateway instance that initiates the connection to the backend, and the original client IPv6 address (with the source port number appended) preserved in the X-Forwarded-For header. 
{ "region": "SwedenCentral", "clientIp": "2001:1c04:3404:9500:fd9b:58f4:1fb2:db21:60769", "xForwardedFor": "2001:1c04:3404:9500:fd9b:58f4:1fb2:db21:60769", "remoteAddress": "::ffff:10.1.0.4", "isPrivateIP": false, "expressIp": "2001:1c04:3404:9500:fd9b:58f4:1fb2:db21:60769", "connectionInfo": { "remoteAddress": "::ffff:10.1.0.4", "remoteFamily": "IPv6", "localAddress": "::ffff:10.1.1.68", "localPort": 80 }, "allHeaders": { "x-forwarded-for": "2001:1c04:3404:9500:fd9b:58f4:1fb2:db21:60769" }, "deploymentAdvice": "Public IP detected successfully" } Limitations & Considerations: - Application Gateway v1 SKUs are not supported for IPv6. If you have an older deployment on v1, you’ll need to migrate to v2. - IPv6-only Application Gateway is not allowed. You must have IPv4 alongside IPv6 (the service must be dual-stack). This is usually fine, as dual-stack ensures all clients are covered. - No IPv6 backend addresses: The backend pool must have IPv4 addresses. - Management and Monitoring: Application Gateway logs traffic from IPv6 clients to Log Analytics (the client IP field will show IPv6 addresses). - Security: Azure’s infrastructure provides basic DDoS protection for IPv6 endpoints just as for IPv4. However, it is highly recommended to deploy Azure DDoS Protection Standard: this provides enhanced mitigation tailored to your specific deployment. Consider using the Web Application Firewall function for protection against application layer attacks. Application Gateway - multi-region Mission-critical web applications should be deploy in multiple Azure regions, achieving higher availability and lower latency for users worldwide. In a multi-region scenario, you need a mechanism to direct IPv6 client traffic to the “nearest” or healthiest region. Azure Application Gateway by itself is a regional service, so to use it in multiple regions, we use Azure Traffic Manager for global DNS load balancing, or use Azure Front Door (covered in the next section) as an alternative. This section focuses on the Traffic Manager + Application Gateway approach to multi-region IPv6 delivery. Azure Traffic Manager is a DNS-based load balancer that can distribute traffic across endpoints in different regions. It works by responding to DNS queries with the appropriate endpoint FQDN or IP, based on the routing method (Performance, Priority, Geographic) configured. Traffic Manager is agnostic to the IP version: it either returns CNAMEs, or AAAA records for IPv6 endpoints and A records for IPv4. This makes it suitable for routing IPv6 traffic globally. Architecture Overview: Each region has its own dual-stack Application Gateway. Traffic Manager is configured with an endpoint entry for each region’s gateway. The application’s FQDN is now a domain name hosted by Traffic Manager such as ipv6webapp.traffimanager.net, or a CNAME that ultimately points to it. DNS resolution will go through Traffic Manager, which decides which regional gateway’s FQDN to return. The client then connects directly to that Application Gateway’s IPv6 address, as follows: 1. DNS query: Client asks for ipv6webapp.trafficmanager.net, which is hosted in a Traffic Manager profile. 2. Traffic Manager decision: Traffic Manager sees an incoming DNS request and chooses the best endpoint (say, Sweden Central) based on routing rules (e.g., geographic proximity or lowest latency). 3. Traffic Manager response: Traffic Manager returns the FQDN of the Sweden Central Application Gateway to the client. 4. 
4. DNS Resolution: The client resolves the regional FQDN and receives a AAAA response containing the IPv6 address.
5. Client connects: The client’s browser connects to the Sweden Central Application Gateway's IPv6 address directly. The HTTP/S session is established via IPv6 to that regional gateway, which then handles the request.
6. Failover: If that region becomes unavailable, Traffic Manager’s health checks will detect it and subsequent DNS queries will be answered with the FQDN of the secondary region’s gateway.

Deployment Steps for Multi-Region with Traffic Manager:

Set up Dual-Stack Application Gateways in each region: Similar to the single-region case, deploy an Azure Application Gateway v2 in each desired region (e.g., one in North America, one in Europe). Configure the web application in each region; these should be parallel deployments serving the same content.

Configure a Traffic Manager Profile: In Azure Traffic Manager, create a profile and choose a routing method (such as Performance for nearest-region routing, or Priority for primary/backup failover). Add endpoints for each region. Since our endpoints are Azure services with IPs, we can either use Azure endpoints (if the Application Gateways have Azure-provided DNS names) or External endpoints using the IP addresses. The simplest way is to use the Public IP resource of each Application Gateway as an Azure endpoint – ensure each App Gateway’s public IP has a DNS label (so it has a FQDN). Traffic Manager will detect those and also be aware of their IPs. Alternatively, use the IPv6 address as an External endpoint directly. Traffic Manager allows IPv6 addresses and will return AAAA records for them.

DNS Setup: Traffic Manager profiles have a FQDN (like ipv6webapp.trafficmanager.net). You can either use that as your service’s CNAME, or you can configure your custom domain to CNAME to the Traffic Manager profile.

Health Probing: Traffic Manager continuously checks the health of endpoints. When endpoints are Azure App Gateways, it uses HTTP/S probes to a specified URI path, to each gateway’s address. Make sure each App Gateway has a listener on the probing endpoint (e.g., a health check page) and that health probes are enabled.

Testing Failover and Distribution: Test the setup by querying DNS from different geographical locations (to see if you get the nearest region’s IP). Also simulate a region down (stop the App Gateway or backend) and observe whether Traffic Manager directs traffic to the other region. Because DNS TTLs are involved, failover isn’t instant but typically happens within a couple of minutes depending on TTL and probe interval.

Considerations in this Architecture:

- Latency vs Failover: Traffic Manager as a DNS load balancer directs users at connect time, but once a client has an answer (IP address), it keeps sending to that address until the DNS record TTL expires and it re-resolves. This is fine for most web apps. Ensure the TTL in the Traffic Manager profile is not too high (the default is 60 seconds).
- IPv6 DNS and Connectivity: Confirm that each region’s IPv6 address is correctly configured and reachable globally. Azure’s public IPv6 addresses are globally routable. Traffic Manager itself is a global service and fully supports IPv6 in its decision-making.
- Cost: Using multiple Application Gateways and Traffic Manager incurs costs for each component (App Gateway is per hour + capacity unit, Traffic Manager per million DNS queries). This is a trade-off for high availability.
- Alternative: Azure Front Door: Azure Front Door is an alternative to the Traffic Manager + Application Gateway combination. Front Door can automatically handle global routing and failover at layer 7 without DNS-based limitations, offering potentially faster failover. Azure Front Door is discussed in the next section.
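For reference, the Traffic Manager profile and endpoints described above might be created like this with the Azure CLI. The profile name, probe path and public IP resource IDs are assumptions for illustration; the probe path reuses the example application's /api/region endpoint.

```bash
RG=rg-ipv6web-global

# Performance routing, probing the application's health endpoint over HTTPS
az network traffic-manager profile create -g $RG -n ipv6webapp \
  --unique-dns-name ipv6webapp --routing-method Performance \
  --protocol HTTPS --port 443 --path "/api/region" --ttl 60

# One Azure endpoint per regional Application Gateway public IP (resource IDs assumed)
az network traffic-manager endpoint create -g $RG --profile-name ipv6webapp \
  -n swedencentral --type azureEndpoints \
  --target-resource-id "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Network/publicIPAddresses/pip-appgw-swc"
az network traffic-manager endpoint create -g $RG --profile-name ipv6webapp \
  -n eastus2 --type azureEndpoints \
  --target-resource-id "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Network/publicIPAddresses/pip-appgw-eus2"
```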
In summary, multi-region IPv6 web delivery with Application Gateways uses Traffic Manager for global DNS load balancing. Traffic Manager will seamlessly return IPv6 addresses for IPv6 clients, ensuring that no matter where an IPv6-only client is, it gets pointed to the nearest available regional deployment of your app. This design achieves global resiliency (withstanding a regional outage) and low-latency access, leveraging IPv6 connectivity on each regional endpoint.

Example: The global FQDN of our application is now ipv6webapp.trafficmanager.net and clients will use this FQDN to access the application regardless of their geographical location. Traffic Manager will return the FQDN of one of the regional deployments, `ipv6webapp-appgw-swedencentral.swedencentral.cloudapp.azure.com` or `ipv6webappr2-appgw-eastus2.eastus2.cloudapp.azure.com`, depending on the routing method configured, the health state of the regional endpoints and the client's location. The client then resolves the regional FQDN through its local DNS server and connects to the regional instance of the application.

DNS resolution from a client in Europe:

```
Resolve-DnsName ipv6webapp.trafficmanager.net

Name                          Type  TTL Section NameHost
----                          ----  --- ------- --------
ipv6webapp.trafficmanager.net CNAME 59  Answer  ipv6webapp-appgw-swedencentral.swedencentral.cloudapp.azure.com

Name       : ipv6webapp-appgw-swedencentral.swedencentral.cloudapp.azure.com
QueryType  : AAAA
TTL        : 10
Section    : Answer
IP6Address : 2603:1020:1001:25::168
```

And from a client in the US:

```
Resolve-DnsName ipv6webapp.trafficmanager.net

Name                          Type  TTL Section NameHost
----                          ----  --- ------- --------
ipv6webapp.trafficmanager.net CNAME 60  Answer  ipv6webappr2-appgw-eastus2.eastus2.cloudapp.azure.com

Name       : ipv6webappr2-appgw-eastus2.eastus2.cloudapp.azure.com
QueryType  : AAAA
TTL        : 10
Section    : Answer
IP6Address : 2603:1030:403:17::5b0
```

Azure Front Door

Azure Front Door is an application delivery network with built-in CDN, SSL offload, WAF, and routing capabilities. It provides a single, unified frontend distributed across Microsoft’s edge network. Azure Front Door natively supports IPv6 connectivity. For applications that have users worldwide, Front Door offers advantages:

- Global Anycast Endpoint: Provides anycast IPv4 and IPv6 addresses, advertised out of all edge locations, with automatic A and AAAA DNS record support.
- IPv4 and IPv6 origin support: Azure Front Door supports both IPv4 and IPv6 origins (i.e., backends), both within Azure and externally (i.e., accessible over the internet).
- Simplified DNS: Custom domains can be mapped using CNAME records.
- Layer-7 Routing: Supports path-based routing and automatic backend health detection.
- Edge Security: Includes DDoS protection and optional WAF integration.

Front Door enables "cross-IP version" scenarios: a client can connect to the Front Door front-end over IPv6, and then Front Door can connect to an IPv4 origin. Conversely, an IPv4-only client can retrieve content from an IPv6 backend via Front Door. Front Door preserves the client's source IP address in the X-Forwarded-For header.

Note: Front Door provides managed IPv6 addresses that are not customer-owned resources. Custom domains should use CNAME records pointing to the Front Door hostname rather than direct IP address references.
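A minimal sketch of standing up a Front Door profile and endpoint and mapping a custom domain by CNAME with the Azure CLI. Names are assumptions, the *.azurefd.net host name shown is the generated endpoint name from the example that follows, and origin groups, routes and custom-domain validation on the Front Door side are additional steps not shown here.

```bash
RG=rg-ipv6web-afd

# Front Door Standard profile and an endpoint (the *.azurefd.net host name is generated at creation)
az afd profile create -g $RG --profile-name afd-ipv6webapp --sku Standard_AzureFrontDoor
az afd endpoint create -g $RG --profile-name afd-ipv6webapp --endpoint-name ipv6webapp --enabled-state Enabled

# Point a custom domain at the generated endpoint host name (Azure DNS zone assumed)
az network dns record-set cname set-record -g $RG -z contoso.com -n www \
  --cname ipv6webapp-d4f4euhnb8fge4ce.b01.azurefd.net
```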
Private Link Integration

Azure Front Door Premium introduces Private Link integration, enabling secure, private connectivity between Front Door and backend resources without exposing them to the public internet. When Private Link is enabled, Azure Front Door establishes a private endpoint within a Microsoft-managed virtual network. This endpoint acts as a secure bridge between Front Door’s global edge network and your origin resources, such as Azure App Service, Azure Storage, Application Gateway, or workloads behind an internal load balancer.

Traffic from end users still enters through Front Door’s globally distributed POPs, benefiting from features like SSL offload, caching, and WAF protection. However, instead of routing to your origin over public, internet-facing endpoints, Front Door uses the private Microsoft backbone to reach the private endpoint. This ensures that all traffic between Front Door and your origin remains isolated from external networks. The private endpoint connection requires approval from the origin resource owner, adding an extra layer of control. Once approved, the origin can restrict public access entirely, enforcing that all traffic flows through Private Link.

Private Link integration brings the following benefits:

- Enhanced Security: By removing public exposure of backend services, Private Link significantly reduces the risk of DDoS attacks, data exfiltration, and unauthorized access.
- Compliance and Governance: Many regulatory frameworks mandate private connectivity for sensitive workloads. Private Link helps meet these requirements without sacrificing global availability.
- Performance and Reliability: Traffic between Front Door and your origin travels over Microsoft’s high-speed backbone network, delivering low latency and consistent performance compared to public internet paths.
- Defense in Depth: Combined with Web Application Firewall (WAF), TLS encryption, and DDoS protection, Private Link strengthens your security posture across multiple layers.
- Isolation and Control: Resource owners maintain control over connection approvals, ensuring that only authorized Front Door profiles can access the origin.
- Integration with Hybrid Architectures: For scenarios involving AKS clusters, custom APIs, or workloads behind internal load balancers, Private Link enables secure connectivity without requiring public IPs or complex VPN setups.

Private Link transforms Azure Front Door from a global entry point into a fully private delivery mechanism for your applications, aligning with modern security principles and enterprise compliance needs.

Example: Our application is now placed behind Azure Front Door. We are combining a public backend endpoint and Private Link integration, to show both in action in a single example. The Sweden Central origin endpoint is the public IPv6 endpoint of the regional External Load Balancer, and the origin in East US 2 is connected via Private Link integration. The global FQDN is `ipv6webapp-d4f4euhnb8fge4ce.b01.azurefd.net`, and clients will use this FQDN to access the application regardless of their geographical location. The FQDN resolves to Front Door's global anycast address, and the internet will route client requests to the nearest Microsoft edge location from which this address is advertised. Front Door will then transparently route the request to the nearest origin deployment in Azure.
Although public endpoints are used in this example, that traffic will be routed over the Microsoft network.

From a client in Europe: Calling the application's API endpoint on `ipv6webapp-d4f4euhnb8fge4ce.b01.azurefd.net/api/region` shows some more detail.

```json
{
  "region": "SwedenCentral",
  "clientIp": "2001:1c04:3404:9500:fd9b:58f4:1fb2:db21",
  "xForwardedFor": "2001:1c04:3404:9500:fd9b:58f4:1fb2:db21",
  "remoteAddress": "2a01:111:2053:d801:0:afd:ad4:1b28",
  "isPrivateIP": false,
  "expressIp": "2001:1c04:3404:9500:fd9b:58f4:1fb2:db21",
  "connectionInfo": {
    "remoteAddress": "2a01:111:2053:d801:0:afd:ad4:1b28",
    "remoteFamily": "IPv6",
    "localAddress": "2001:db8:1:1::4",
    "localPort": 80
  },
  "allHeaders": {
    "x-forwarded-for": "2001:1c04:3404:9500:fd9b:58f4:1fb2:db21",
    "x-azure-clientip": "2001:1c04:3404:9500:fd9b:58f4:1fb2:db21"
  },
  "deploymentAdvice": "Public IP detected successfully"
}
```

"remoteAddress": "2a01:111:2053:d801:0:afd:ad4:1b28" is the address from which Front Door sources its request to the origin.

From a client in the US: The detailed view shows that the IP address calling the backend instance is now a local VNET address. Private Link sources traffic from a local address taken from the VNET it is in. The original client IP address is again preserved in the X-Forwarded-For header.

```json
{
  "region": "eastus2",
  "clientIp": "2603:1030:501:23::68:55658",
  "xForwardedFor": "2603:1030:501:23::68:55658",
  "remoteAddress": "::ffff:10.2.1.5",
  "isPrivateIP": false,
  "expressIp": "2603:1030:501:23::68:55658",
  "connectionInfo": {
    "remoteAddress": "::ffff:10.2.1.5",
    "remoteFamily": "IPv6",
    "localAddress": "::ffff:10.2.2.68",
    "localPort": 80
  },
  "allHeaders": {
    "x-forwarded-for": "2603:1030:501:23::68:55658"
  },
  "deploymentAdvice": "Public IP detected successfully"
}
```

Conclusion

IPv6 adoption for web applications is no longer optional. It is essential as public IPv4 address space is depleted, mobile networks increasingly use IPv6 only, and governments mandate IPv6 reachability for public services. Azure's comprehensive dual-stack networking capabilities provide a clear path forward, enabling organizations to leverage IPv6 externally without sacrificing IPv4 compatibility or requiring complete infrastructure overhauls. Azure's externally facing services — including Application Gateway, External Load Balancer, Global Load Balancer, and Front Door — support IPv6 frontends, while Application Gateway and Front Door maintain IPv4 backend connectivity. This architecture allows applications to remain unchanged while instantly becoming accessible to IPv6-only clients.

For single-region deployments, Application Gateway offers layer-7 features like SSL termination and WAF protection. External Load Balancer provides high-performance layer-4 distribution. Multi-region scenarios benefit from Traffic Manager's DNS-based routing combined with regional Application Gateways, or the superior performance and failover capabilities of Global Load Balancer's anycast addressing. Azure Front Door provides global IPv6 delivery with edge optimization, built-in security, and seamless failover across Microsoft's network. Private Link integration allows secure global IPv6 distribution while maintaining backend isolation.

The transition to IPv6 application delivery on Azure is straightforward: enable dual-stack addressing on virtual networks, configure IPv6 frontends on load balancing services, and update DNS records. With Application Gateway or Front Door, backend applications require no modifications.
These Azure services handle the IPv4-to-IPv6 translation seamlessly. This approach ensures both immediate IPv6 accessibility and long-term architectural flexibility as IPv6 adoption accelerates globally.

Deploying Third-Party Firewalls in Azure Landing Zones: Design, Configuration, and Best Practices
As enterprises adopt Microsoft Azure for large-scale workloads, securing network traffic becomes a critical part of the platform foundation. Microsoft's Cloud Adoption Framework provides the blueprint for enterprise-scale Landing Zone design and deployments, and while Azure Firewall is a built-in PaaS option, some organizations prefer third-party firewall appliances for familiarity, feature depth, and vendor alignment. This blog explains the basic design patterns, key configurations, and best practices when deploying third-party firewalls (Palo Alto, Fortinet, Check Point, etc.) as part of an Azure Landing Zone.

1. Landing Zone Architecture and Firewall Role

The Azure Landing Zone is Microsoft’s recommended enterprise-scale architecture for adopting cloud at scale. It provides a standardized, modular design that organizations can use to deploy and govern workloads consistently across subscriptions and regions. At its core, the Landing Zone follows a hub-and-spoke topology:

- Hub (Connectivity Subscription): The central place for shared services like DNS, private endpoints, VPN/ExpressRoute gateways, Azure Firewall (or third-party firewall appliances), Bastion, and monitoring agents. It provides consistent security controls and connectivity for all workloads. Firewalls are deployed here to act as the traffic inspection and enforcement point.
- Spokes (Workload Subscriptions): Application workloads (e.g., SAP, web apps, data platforms) are placed in spoke VNets. Additional specialized spokes may exist for Identity, Shared Services, Security, or Management. These are isolated for governance and compliance, but all connectivity back to other workloads or on-premises routes through the hub.

Traffic Flows Through Firewalls

- North-South Traffic: Inbound connections from the Internet (e.g., customer access to applications), outbound connections from Azure workloads to Internet services, and hybrid connectivity to on-premises datacenters or other clouds. This traffic is routed through the external firewall set for inspection and policy enforcement.
- East-West Traffic: Lateral traffic between spokes (e.g., Application VNet to Database VNet) and communication across environments like Dev → Test → Prod (if allowed). This traffic is routed through an internal firewall set to apply segmentation and zero-trust principles, and to prevent lateral movement of threats.

Why Firewalls Matter in the Landing Zone

While Azure provides NSGs (Network Security Groups) and Route Tables for basic packet filtering and routing, these are not sufficient for advanced security scenarios. Firewalls add:

- Deep packet inspection (DPI) and application-level filtering.
- Intrusion Detection/Prevention (IDS/IPS) capabilities.
- Centralized policy management across multiple spokes.
- Segmentation of workloads to reduce the blast radius of potential attacks.
- Consistent enforcement of enterprise security baselines across hybrid and multi-cloud.

Organizations May Choose

Depending on security needs, cost tolerance, and operational complexity, organizations typically adopt one of two models for third-party firewalls:

Two sets of firewalls: One set dedicated to north-south traffic (external to Azure) and another set for east-west traffic (between VNets and spokes). This provides the highest security granularity, but comes with higher cost and management overhead.

Single set of firewalls: A consolidated deployment where the same firewall cluster handles both east-west and north-south traffic. This is simpler and more cost-effective, but may introduce complexity in routing and policy segregation.
This design choice is usually made during Landing Zone design, balancing security requirements, budget, and operational maturity. 2. Why Choose Third-Party Firewalls Over Azure Firewall? While Azure Firewall provides simplicity as a managed service, customers often choose third-party solutions due to: Advanced features – Deep packet inspection, IDS/IPS, SSL decryption, threat feeds. Vendor familiarity – Network teams trained on Palo Alto, Fortinet, or Check Point. Existing contracts – Enterprise license agreements and support channels. Hybrid alignment – Same vendor firewalls across on-premises and Azure. Azure Firewall is a fully managed PaaS service, ideal for customers who want a simple, cloud-native solution without worrying about underlying infrastructure. However, many enterprises continue to choose third-party firewall appliances (Palo Alto, Fortinet, Check Point, etc.) when implementing their Landing Zones. The decision usually depends on capabilities, familiarity, and enterprise strategy. Key Reasons to Choose Third-Party Firewalls Feature Depth and Advanced Security Third-party vendors offer advanced capabilities such as: Deep Packet Inspection (DPI) for application-aware filtering. Intrusion Detection and Prevention (IDS/IPS). SSL/TLS decryption and inspection. Advanced threat feeds, malware protection, sandboxing, and botnet detection. While Azure Firewall continues to evolve, these vendors have a longer track record in advanced threat protection. Operational Familiarity and Skills Network and security teams often have years of experience managing Palo Alto, Fortinet, or Check Point appliances on-premises. Adopting the same technology in Azure reduces the learning curve and ensures faster troubleshooting, smoother operations, and reuse of existing playbooks. Integration with Existing Security Ecosystem Many organizations already use vendor-specific management platforms (e.g., Panorama for Palo Alto, FortiManager for Fortinet, or SmartConsole for Check Point). Extending the same tools into Azure allows centralized management of policies across on-premises and cloud, ensuring consistent enforcement. Compliance and Regulatory Requirements Certain industries (finance, healthcare, government) require proven, certified firewall vendors for security compliance. Customers may already have third-party solutions validated by auditors and prefer extending those to Azure for consistency. Hybrid and Multi-Cloud Alignment Many enterprises run a hybrid model, with workloads split across on-premises, Azure, AWS, or GCP. Third-party firewalls provide a common security layer across environments, simplifying multi-cloud operations and governance. Customization and Flexibility Unlike Azure Firewall, which is a managed service with limited backend visibility, third-party firewalls give admins full control over operating systems, patching, advanced routing, and custom integrations. This flexibility can be essential when supporting complex or non-standard workloads. Licensing Leverage (BYOL) Enterprises with existing enterprise agreements or volume discounts can bring their own firewall licenses (BYOL) to Azure. This often reduces cost compared to pay-as-you-go Azure Firewall pricing. When Azure Firewall Might Still Be Enough Organizations with simple security needs (basic north-south inspection, FQDN filtering). Cloud-first teams that prefer managed services with minimal infrastructure overhead. Customers who want to avoid manual scaling and VM patching that comes with IaaS appliances. 
In practice, many large organizations use a hybrid approach: Azure Firewall for lightweight scenarios or specific environments, and third-party firewalls for enterprise workloads that require advanced inspection, vendor alignment, and compliance certifications.

3. Deployment Models in Azure

Third-party firewalls in Azure are primarily IaaS-based appliances deployed as virtual machines (VMs). Leading vendors publish Azure Marketplace images and ARM/Bicep templates, enabling rapid, repeatable deployments across multiple environments. These firewalls allow organizations to enforce advanced network security policies, perform deep packet inspection, and integrate with Azure-native services such as Virtual Network (VNet) peering, Azure Monitor, and Azure Sentinel.

Note: Some vendors now also release PaaS versions of their firewalls, offering managed firewall services with simplified operations. However, for the purposes of this blog, we will focus mainly on IaaS-based firewall deployments.

Common Deployment Modes

Active-Active

- Description: In this mode, multiple firewall VMs operate simultaneously, sharing the traffic load. An Azure Load Balancer distributes inbound and outbound traffic across all active firewall instances.
- Use Cases: Ideal for environments requiring high throughput, resilience, and near-zero downtime, such as enterprise data centers, multi-region deployments, or mission-critical applications.
- Considerations: Requires careful route and policy synchronization between firewall instances to ensure consistent traffic handling. Typically involves BGP or user-defined routes (UDRs) for optimal traffic steering. Scaling is easier: additional firewall VMs can be added behind the load balancer to handle traffic spikes.

Active-Passive

- Description: One firewall VM handles all traffic (active), while the secondary VM (passive) stands by for failover. When the active node fails, Azure service principals manage IP reassignment and traffic rerouting.
- Use Cases: Suitable for environments where simpler management and lower operational complexity are preferred over continuous load balancing.
- Considerations: Failover may result in brief downtime, typically measured in seconds to a few minutes. Synchronization between the active and passive nodes ensures firewall policies, sessions, and configurations are mirrored. Recommended for smaller deployments or those with predictable traffic patterns.

Network Interfaces (NICs)

Third-party firewall VMs often include multiple NICs, each dedicated to a specific type of traffic:

- Untrust/Public NIC: Connects to the Internet or external networks. Handles inbound/outbound public traffic and enforces perimeter security policies.
- Trust/Internal NIC: Connects to private VNets or subnets. Manages internal traffic between application tiers and enforces internal segmentation.
- Management NIC: Dedicated to firewall management traffic. Keeps administration separate from data plane traffic, improving security and reducing performance interference.
- HA NIC (Active-Passive setups): Facilitates synchronization between active and passive firewall nodes, ensuring session and configuration state is maintained across failovers.

Figure: NICs of Palo Alto external firewalls and FortiGate internal firewalls in the two-sets-of-firewalls scenario.
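As an illustration of this multi-NIC layout, a firewall VM and its interfaces might be stood up along the following lines with the Azure CLI. This is a generic, hedged sketch: the subnet names, VM size and image placeholder are assumptions, and in practice you would follow the vendor's Marketplace listing and reference architecture (which typically also adds the HA NIC, zone placement and accelerated networking).

```bash
RG=rg-fw-hub; LOC=westeurope; VNET=vnet-hub

# One NIC per traffic role, each in its own dedicated subnet; IP forwarding is
# required on the data-plane NICs so the appliance can route transit traffic
az network nic create -g $RG -n nic-fw-untrust --vnet-name $VNET --subnet snet-untrust --ip-forwarding true
az network nic create -g $RG -n nic-fw-trust --vnet-name $VNET --subnet snet-trust --ip-forwarding true
az network nic create -g $RG -n nic-fw-mgmt --vnet-name $VNET --subnet snet-mgmt

# Firewall VM attached to all three NICs; the first NIC listed becomes the primary interface
az vm create -g $RG -n vm-fw-01 -l $LOC --size Standard_D8s_v5 \
  --nics nic-fw-untrust nic-fw-trust nic-fw-mgmt \
  --image <vendor-marketplace-image-urn> \
  --admin-username azureuser --generate-ssh-keys
```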
4. Key Configuration Considerations

When deploying third-party firewalls in Azure, several design and configuration elements play a critical role in ensuring security, performance, and high availability. These considerations should be carefully aligned with organizational security policies, compliance requirements, and operational practices.

Routing

- User-Defined Routes (UDRs): Define UDRs in spoke Virtual Networks to ensure all outbound traffic flows through the firewall, enforcing inspection and security policies before reaching the Internet or other Virtual Networks. Centralized routing helps standardize controls across multiple application Virtual Networks. Depending on the architecture's traffic flow design, use the appropriate Load Balancer IP as the Next Hop on UDRs of spoke Virtual Networks.
- Symmetric Routing: Ensure traffic follows symmetric paths (i.e., outbound and inbound flows pass through the same firewall instance). Avoid asymmetric routing, which can cause stateful firewalls to drop return traffic. Leverage BGP with Azure Route Server, where supported, to simplify route propagation across hub-and-spoke topologies.

Figure: Azure UDR directing all traffic from a spoke VNET to the firewall IP address.

Policies

- NAT Rules: Configure DNAT (Destination NAT) rules to publish applications securely to the Internet. Use SNAT (Source NAT) to mask private IPs when workloads access external resources.
- Security Rules: Define granular allow/deny rules for both north-south traffic (Internet to VNet) and east-west traffic (between Virtual Networks or subnets). Ensure least privilege by only allowing required ports, protocols, and destinations.
- Segmentation: Apply firewall policies to separate workloads, environments, and tenants (e.g., Production vs. Development). Enforce compliance by isolating workloads subject to regulatory standards (PCI-DSS, HIPAA, GDPR).
- Application-Aware Policies: Many vendors support Layer 7 inspection, enabling controls based on applications, users, and content (not just IP/port). Integrate with identity providers (Azure AD, LDAP, etc.) for user-based firewall rules.

Figure: Example configuration of NAT rules on a Palo Alto external firewall.

Load Balancers

- Internal Load Balancer (ILB): Use ILBs for east-west traffic inspection between Virtual Networks or subnets. This ensures that traffic between applications always passes through the firewall, regardless of origin.
- External Load Balancer (ELB): Use ELBs for north-south traffic, handling Internet ingress and egress. Required in Active-Active firewall clusters to distribute traffic evenly across firewall nodes.
- Other Configurations: Configure health probes for firewall instances to ensure faulty nodes are automatically bypassed. Validate Floating IP configuration on load balancing rules according to the respective vendor recommendations.

Identity Integration

- Azure Service Principals: In Active-Passive deployments, configure service principals to enable automated IP reassignment during failover. This ensures continuous service availability without manual intervention.
- Role-Based Access Control (RBAC): Integrate firewall management with Azure RBAC to control who can deploy, manage, or modify firewall configurations.
- SIEM Integration: Stream logs to Azure Monitor, Sentinel, or third-party SIEMs for auditing, monitoring, and incident response.

Licensing

Pay-As-You-Go (PAYG): Licenses are bundled into the VM cost when deploying from the Azure Marketplace. Best for short-term projects, PoCs, or variable workloads.
Bring Your Own License (BYOL): Enterprises can apply existing contracts and licenses with vendors to Azure deployments. Often more cost-effective for large-scale, long-term deployments. Hybrid Licensing Models: Some vendors support license mobility from on-premises to Azure, reducing duplication of costs. 5. Common Challenges Third-party firewalls in Azure provide strong security controls, but organizations often face practical challenges in day-to-day operations: Misconfiguration Incorrect UDRs, route tables, or NAT rules can cause dropped traffic or bypassed inspection. Asymmetric routing is a frequent issue in hub-and-spoke topologies, leading to session drops in stateful firewalls. Performance Bottlenecks Firewall throughput depends on the VM SKU (CPU, memory, NIC limits). Under-sizing causes latency and packet loss, while over-sizing adds unnecessary cost. Continuous monitoring and vendor sizing guides are essential. Failover Downtime Active-Passive models introduce brief service interruptions while IPs and routes are reassigned. Some sessions may be lost even with state sync, making Active-Active more attractive for mission-critical workloads. Backup & Recovery Azure Backup doesn’t support vendor firewall OS. Configurations must be exported and stored externally (e.g., storage accounts, repos, or vendor management tools). Without proper backups, recovery from failures or misconfigurations can be slow. Azure Platform Limits on Connections Azure imposes a per-VM cap of 250,000 active connections, regardless of what the firewall vendor appliance supports. This means even if an appliance is designed for millions of sessions, it will be constrained by Azure’s networking fabric. Hitting this cap can lead to unexplained traffic drops despite available CPU/memory. The workaround is to scale out horizontally (multiple firewall VMs behind a load balancer) and carefully monitor connection distribution. 6. Best Practices for Third-Party Firewall Deployments To maximize security, reliability, and performance of third-party firewalls in Azure, organizations should follow these best practices: Deploy in Availability Zones: Place firewall instances across different Availability Zones to ensure regional resilience and minimize downtime in case of zone-level failures. Prefer Active-Active for Critical Workloads: Where zero downtime is a requirement, use Active-Active clusters behind an Azure Load Balancer. Active-Passive can be simpler but introduces failover delays. Use Dedicated Subnets for Interfaces: Separate trust, untrust, HA, and management NICs into their own subnets. This enforces segmentation, simplifies route management, and reduces misconfiguration risk. Apply Least Privilege Policies: Always start with a deny-all baseline, then allow only necessary applications, ports, and protocols. Regularly review rules to avoid policy sprawl. Standardize Naming & Tagging: Adopt consistent naming conventions and resource tags for firewalls, subnets, route tables, and policies. This aids troubleshooting, automation, and compliance reporting. Validate End-to-End Traffic Flows: Test both north-south (Internet ↔ VNet) and east-west (VNet ↔ VNet/subnet) flows after deployment. Use tools like Azure Network Watcher and vendor traffic logs to confirm inspection. Plan for Scalability: Monitor throughput, CPU, memory, and session counts to anticipate when scale-out or higher VM SKUs are needed. Some vendors support autoscaling clusters for bursty workloads. 
- Maintain Firmware & Threat Signatures: Regularly update the firewall's software, patches, and threat intelligence feeds to ensure protection against emerging vulnerabilities and attacks. Automate updates where possible.

Conclusion

Third-party firewalls remain a core building block in many enterprise Azure Landing Zones. They provide the deep security controls and operational familiarity enterprises need, while Azure provides the scalable infrastructure to host them. By following the hub-and-spoke architecture, carefully planning deployment models, and enforcing best practices for routing, redundancy, monitoring, and backup, organizations can ensure a secure and reliable network foundation in Azure.
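To make the routing guidance in this article concrete, here is a minimal Azure CLI sketch of a spoke route table whose default route points at a firewall's internal load balancer, plus a TCP health probe on that load balancer so unhealthy firewall nodes are bypassed. The resource group, VNet, subnet, and load balancer names, the frontend IP 10.0.1.4, and the probe port are illustrative placeholders rather than values from this article; adapt them to your environment and your vendor's recommended probe target.

```bash
# Route table for the spoke VNet: send all outbound traffic to the firewall ILB frontend IP.
az network route-table create \
  --resource-group rg-firewall-demo \
  --name rt-spoke-workload \
  --location eastus

az network route-table route create \
  --resource-group rg-firewall-demo \
  --route-table-name rt-spoke-workload \
  --name default-via-firewall \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.1.4   # frontend IP of the firewall internal load balancer

# Associate the route table with the spoke workload subnet.
az network vnet subnet update \
  --resource-group rg-firewall-demo \
  --vnet-name vnet-spoke-app1 \
  --name snet-workload \
  --route-table rt-spoke-workload

# Health probe on the internal load balancer so faulty firewall instances are taken out of rotation.
az network lb probe create \
  --resource-group rg-firewall-demo \
  --lb-name lb-firewall-internal \
  --name probe-firewall \
  --protocol Tcp \
  --port 22
```

The probe port here is arbitrary; most firewall vendors document a dedicated health-check port or HTTP path, which should be used instead of SSH in production.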
Azure SDK Python client to Azure IoT Hub over HAProxy (SSL handshake failure)

I am trying to fix an IP address for Azure IoT Hub via a Load Balancer and HAProxy, as suggested in this article: https://medium.com/cloudzone/azure-iot-hub-how-to-expose-it-using-fixed-ip-and-create-a-more-secure-environment-along-the-way-988661a8f67a

Architecture diagram: https://i.stack.imgur.com/gyQ9j.png

I have configured HAProxy as suggested to pass the SSL handshake through to the server:

```
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL). This list is from:
    # https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
    # An alternative list with additional directives can be obtained from
    # https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy
    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
    ssl-default-bind-options no-sslv3

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend haproxy_iothub
    bind *:8883
    bind *:443
    bind *:5671
    mode tcp
    default_backend iothub

backend iothub
    mode tcp
    server iothub [Server URL]:8883 check
    server iothub [Server URL]:443 check
    server iothub [Server URL]:5671 check
```

To simulate the device, I used the Azure V2 SDK (azure-iot-device), defined a proxy option, and created a client from a connection string:

```python
proxy_opts = ProxyOptions(proxy_type=socks.HTTP,
                          proxy_addr="Proxy_IP",
                          proxy_port=8883)
device_client = IoTHubDeviceClient.create_from_connection_string(
    "IOTHUB_DEVICE_CONNECTION_STRING",
    websockets=True,
    proxy_options=proxy_opts)
```

I was not able to reach the IoT Hub. I tried debugging the library to get more information, and it turned out that the blocking occurs due to a general proxy error ("connection closed unexpectedly") in _negotiate_HTTP:

    socks.HTTPError: 504: Gateway Time-out (in socks.py)

HAProxy logging shows:

    Oct 18 08:48:37 vmss2xigg000000 haproxy[27470]: *..:59000 [18/Oct/2021:08:48:37.451] haproxy_iothub iothub/iothub1 1/1/38 0 -- 1/1/0/0/0 0/0

Any help much appreciated.

HA-Proxy version 1.8.8-1ubuntu0.11
azure-iot-device version 2.8.0
Hello! We have encountered a problem when using Azure virtual servers. When measuring throughput with iperf, the speed does not exceed 30 Mbps. Why is the speed so low? Are there restrictions on Azure servers?

Can only remote into Azure VM from DC
Hi all, I have set up a site-to-site connection from on-premises to Azure. I can remote in via the main DC on-premises, but not from any other server, and I cannot ping the Azure VM from any other server either. Why can I only remote into the Azure VM from the server that has Routing and Remote Access? Any ideas on how I can fix this?

Not able to set up an Azure private endpoint URL as web service/backend for Azure API Management service
Hi all,

I have integrated a private endpoint connected to a Private Link service. The Private Link service is created by an Azure Standard Load Balancer, which in turn is created by a Kubernetes load balancer service using the annotations below:

```yaml
annotations:
  service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  service.beta.kubernetes.io/azure-pls-create: "true"
  service.beta.kubernetes.io/azure-pls-name: myPLS
  service.beta.kubernetes.io/azure-pls-ip-configuration-subnet: YOUR SUBNET
  service.beta.kubernetes.io/azure-pls-ip-configuration-ip-address-count: "1"
  service.beta.kubernetes.io/azure-pls-ip-configuration-ip-address: SUBNET_IP
  service.beta.kubernetes.io/azure-pls-proxy-protocol: "false"
  service.beta.kubernetes.io/azure-pls-visibility: "*" # does not apply here because we will use Front Door later
  service.beta.kubernetes.io/azure-pls-auto-approval: "YOUR SUBSCRIPTION ID"
```

I am getting the expected response (i.e., the response from the Kubernetes service) when calling the private endpoint IP, which confirms that the Private Link and private endpoint integration is working fine.

We now want to integrate this private endpoint with Azure API Management, so we tried adding the private endpoint URL as the web service URL for the API Management service, but API Management returns a 500 error:

```json
{
  "statusCode": 500,
  "message": "Internal server error",
  "activityId": "76261291-7121-4814-b0e4-66b52284d76c"
}
```

I also checked the API Management Troubleshoot & analysis page for the exact error, and it shows:

    BackendConnectionFailure
    An attempt was made to access a socket in a way forbidden by its access permissions <private_endpoint_url>:80

Please help me understand what I am doing wrong in this implementation. Our requirement is to have a private Kubernetes load balancer and integrate it with Azure API Management, so that users can access the API only through API Management, and only API Management can access the load balancer service.

Thanks in advance

Azure Networking Portfolio Consolidation
Overview

Over the past decade, Azure Networking has expanded rapidly, bringing incredible tools and capabilities to help customers build, connect, and secure their cloud infrastructure. But we've also heard strong feedback: with over 40 different products, it hasn't always been easy to navigate and find the right solution. The complexity often led to confusion, slower onboarding, and missed capabilities.

That's why we're excited to introduce a more focused, streamlined, and intuitive experience across Azure.com, the Azure portal, and our documentation, pivoting around four core networking scenarios:

- Network foundations: Network foundations provide the core connectivity for your resources, using Virtual Network, Private Link, and DNS to build the foundation for your Azure network. Try it with this link: Network foundations
- Hybrid connectivity: Hybrid connectivity securely connects on-premises, private, and public cloud environments, enabling seamless integration, global availability, and end-to-end visibility, presenting major opportunities as organizations advance their cloud transformation. Try it with this link: Hybrid connectivity
- Load balancing and content delivery: Load balancing and content delivery helps you choose the right option to ensure your applications are fast, reliable, and tailored to your business needs. Try it with this link: Load balancing and content delivery
- Network security: Securing your environment is just as essential as building and connecting it. The Network Security hub brings together Azure Firewall, DDoS Protection, and Web Application Firewall (WAF) to provide a centralized, unified approach to cloud protection. With unified controls, it helps you manage security more efficiently and strengthen your security posture. Try it with this link: Network security

This new structure makes it easier to discover the right networking services and get started with just a few clicks, so you can focus more on building and less on searching.

What you'll notice:
- Clearer starting points: Azure Networking is now organized around four core scenarios and twelve essential services, reflecting the most common customer needs. Additional services are presented within the context of these scenarios, helping you stay focused and find the right solution without feeling overwhelmed.
- Simplified choices: We've merged overlapping or closely related services to reduce redundancy. That means fewer, more meaningful options that are easier to evaluate and act on.
- Sunsetting outdated services: To reduce clutter and improve clarity, we're sunsetting underused offerings such as white-label CDN services and China CDN. These capabilities have been rolled into newer, more robust services, so you can focus on what's current and supported.

What this means for you
- Faster decision-making: With clearer guidance and fewer overlapping products, it's easier to discover what you need and move forward confidently.
- More productive sales conversations: With this simplified approach, you'll get more focused recommendations and less confusion among sellers.
- Better product experience: This update makes the Azure Networking portfolio more cohesive and consistent, helping you get started quickly, stay aligned with best practices, and unlock more value from day one.

The portfolio consolidation initiative is a strategic effort to simplify and enhance the Azure Networking portfolio, ensuring better alignment with customer needs and industry best practices.
By focusing on top-line services, combining related products, and retiring outdated offerings, Azure Networking aims to provide a more cohesive and efficient product experience.

Azure.com

Before: Our original Solution page on Azure.com was disorganized and static, displaying a small portion of services in no discernible order.

After: The revised solution page is now dynamic, allowing customers to click deeper into each networking and network security category and displaying the top-line services, simplifying the customer experience.

Azure Portal

Before: With over 40 networking services available, we know it can feel overwhelming to figure out what's right for you and where to get started.

After: To make it easier, we've introduced four streamlined networking hubs, each built around a specific scenario, to help you quickly identify the services that match your needs. Each offers an overview to set the stage, key services to help you get started, guidance to support decision-making, and a streamlined left-hand navigation for easy access to all services and features.

Documentation

For documentation, we reviewed our current assets and created new assets aligned with the changes in the portal experience. As with Azure.com, we found the old experiences were disorganized and not well aligned. We updated our assets to focus on our top-line networking services and to call out the pillars. Our belief is these changes will allow our customers to more easily find the relevant and important information they need for their Azure infrastructure.

Azure Network Hub

Before the updates, we had a hub page organized around different categories and not well laid out. In the updated hub page, we provide relevant links for top-line services within all of the Azure networking scenarios, as well as a section linking to each scenario's hub page.

Scenario Hub pages

We added scenario hub pages for each of the scenarios. These provide our customers with a central hub for information about the top-line services in each scenario and how to get started. We also included common scenarios and use cases, along with references for deeper learning across the Azure Architecture Center, Well-Architected Framework, and Cloud Adoption Framework libraries.

Scenario Overview articles

We created new overview articles for each scenario. These articles were designed to provide customers with an introduction to the services included in each scenario, guidance on choosing the right solutions, and an introduction to the new portal experience. Here's the Load balancing and content delivery overview:

Documentation links

Azure Networking hub page:
- Azure networking documentation | Microsoft Learn

Scenario Hub pages:
- Azure load balancing and content delivery | Microsoft Learn
- Azure network foundation documentation | Microsoft Learn
- Azure hybrid connectivity documentation | Microsoft Learn
- Azure network security documentation | Microsoft Learn

Scenario Overview pages:
- What is load balancing and content delivery? | Microsoft Learn
- Azure Network Foundation Services Overview | Microsoft Learn
- What is hybrid connectivity? | Microsoft Learn
- What is Azure network security? | Microsoft Learn

Improving user experience is a journey, and in the coming months we plan to do more on this. Watch out for more blogs over the next few months for further improvements.

Unlock enterprise AI/ML with confidence: Azure Application Gateway as your scalable AI access layer
As enterprises accelerate their adoption of generative AI and machine learning to transform operations, enhance productivity, and deliver smarter customer experiences, Microsoft Azure has emerged as a leading platform for hosting and scaling intelligent applications. With offerings like Azure OpenAI, Azure Machine Learning, and Cognitive Services, organizations are building copilots, virtual agents, recommendation engines, and advanced analytics platforms that push the boundaries of what is possible.

However, scaling these applications to serve global users introduces new complexities: latency, traffic bursts, backend rate limits, quota distribution, and regional failovers must all be managed effectively to ensure seamless user experiences and resilient architectures.

Azure Application Gateway: The AI access layer

Azure Application Gateway plays a foundational role in enabling AI/ML at scale by acting as a high-performance Layer 7 reverse proxy, built to intelligently route, protect, and optimize traffic between clients and AI services. Hundreds of enterprise customers are already using Azure Application Gateway to efficiently manage traffic across diverse Azure-hosted AI/ML models, ensuring uptime, performance, and security at global scale.

The AI delivery challenge

Inferencing against AI/ML backends is more than connecting to a service. It is about doing so:
- Reliably: across regions, regardless of load conditions
- Securely: protecting access from bad actors and abusive patterns
- Efficiently: minimizing latency and request cost
- Scalably: handling bursts and high concurrency without errors
- Observably: with real-time insights, diagnostics, and feedback loops for proactive tuning

Key features of Azure Application Gateway for AI traffic

- Smart request distribution: Path-based and round-robin routing across OpenAI and ML endpoints.
- Built-in health probes: Automatically bypass unhealthy endpoints.
- Security enforcement: WAF, TLS offload, and mTLS to protect sensitive AI/ML workloads.
- Unified endpoint: Expose a single endpoint for clients; manage complexity internally.
- Observability: Full diagnostics, logs, and metrics for traffic and routing visibility.
- Smart rewrite rules: Append paths or rewrite headers per policy.
- Horizontal scalability: Easily scale to handle surges in demand by distributing load across multiple regions, instances, or models.
- SSE and real-time streaming: Optimize connection handling and buffering to enable seamless AI response streaming.

Azure Web Application Firewall (WAF) protections for AI/ML workloads

When deploying AI/ML workloads, especially those exposed via APIs, model endpoints, or interactive web apps, security is as critical as performance. A modern WAF helps protect not just the application, but also the sensitive models, training data, and inference pipelines behind it.

Core protections:
- SQL injection – Prevents malicious database queries targeting training datasets, metadata stores, or experiment tracking systems.
- Cross-site scripting (XSS) – Blocks injected scripts that could compromise AI dashboards, model monitoring tools, or annotation platforms.
- Malformed payloads – Stops corrupted or adversarially crafted inputs designed to break parsing logic or exploit model pre/post-processing pipelines.
- Bot protection – The Bot Protection rule set detects and blocks known malicious bot patterns (credential stuffing, password spraying).
Custom rule capabilities:
- Block traffic based on request body size, HTTP headers, IP addresses, or geolocation to prevent oversized payloads or region-specific attacks on model APIs.
- Enforce header requirements to ensure only authorized clients can access model inference or fine-tuning endpoints.
- Rate limiting based on IP, headers, or user agent to prevent inference overloads, cost spikes, or denial of service against AI models.

By integrating these WAF protections, AI/ML workloads can be shielded from both conventional web threats and emerging AI-specific attack vectors, ensuring models remain accurate, reliable, and secure. (A hedged CLI sketch of a custom WAF rule appears at the end of this article.)

Architecture

Real-world architectures with Azure Application Gateway

Industries across sectors rely on Azure Application Gateway to securely expose AI and ML workloads:
- Healthcare → Protecting patient-facing copilots and clinical decision support tools with HIPAA-compliant routing, private inference endpoints, and strict access control.
- Finance → Safeguarding trading assistants, fraud-detection APIs, and customer chatbots with enterprise WAF rules, rate limiting, and region-specific compliance.
- Retail & eCommerce → Defending product recommendation engines, conversational shopping copilots, and personalization APIs from scraping and automated abuse.
- Manufacturing & industrial IoT → Securing AI-driven quality control, predictive maintenance APIs, and digital twin interfaces with private routing and bot protection.
- Education → Hosting learning copilots and tutoring assistants safely behind WAF, preventing misuse while scaling access for students and researchers.
- Public sector & government → Enforcing FIPS-compliant TLS, private routing, and zero-trust controls for citizen services and AI-powered case management.
- Telecommunications & media → Protecting inference endpoints powering real-time translation, content moderation, and media recommendations at scale.
- Energy & utilities → Safeguarding smart grid analytics, sustainability dashboards, and AI-powered forecasting models through secure gateway routing.

Advanced integrations

Position Azure Application Gateway as the secure, scalable network entry point to your AI infrastructure:
- Private-only Azure Application Gateway: Host AI endpoints entirely within virtual networks for secure internal access.
- SSE support: Configure HTTP settings for streaming completions via Server-Sent Events.
- Azure Application Gateway + Azure Functions: Build adaptive policies that reroute traffic based on usage, cost, or time of day.
- Azure Application Gateway + API Management to protect OpenAI workloads.

What's next: Adaptive AI gateways

Microsoft is evolving Azure Application Gateway into a more intelligent, AI-aware platform with capabilities such as:
- Auto-rerouting to healthy endpoints or more cost-efficient models.
- Dynamic token management directly within Azure Application Gateway to optimize AI inference usage.
- Integrated feedback loops with Azure Monitor and Log Analytics for real-time performance tuning.

The goal is to transform Azure Application Gateway from a traditional traffic manager into an adaptive inference orchestrator, one that predicts failures, optimizes operational costs, and safeguards AI workloads from misuse.

Conclusion

Azure Application Gateway is not just a load balancer; it is becoming a critical enabler for enterprise-grade AI delivery. Today, it delivers smart routing, security enforcement, adaptive observability, and a compliance-ready architecture, enabling organizations to scale AI confidently while safeguarding performance and cost.
Looking ahead, Microsoft's vision includes future capabilities such as quota resiliency to intelligently manage and balance AI usage limits, auto-rerouting to healthy endpoints or more cost-efficient models, dynamic token management within Azure Application Gateway to optimize inference usage, and integrated feedback loops with Azure Monitor and Log Analytics for real-time performance tuning. Together, these advancements will transform Azure Application Gateway from a traditional traffic manager into an adaptive inference orchestrator capable of anticipating failures, optimizing costs, and protecting AI workloads from misuse.

If you're building with Azure OpenAI, Machine Learning, or Cognitive Services, let Azure Application Gateway be your intelligent command center, anticipating needs, adapting in real time, and orchestrating every interaction so your AI can deliver with precision, security, and limitless scale.

For more information, please visit:
- What is Azure Application Gateway v2? | Microsoft Learn
- What Is Azure Web Application Firewall on Azure Application Gateway? | Microsoft Learn
- Azure Application Gateway URL-based content routing overview | Microsoft Learn
- Using Server-sent events with Application Gateway (Preview) | Microsoft Learn
- AI Architecture Design - Azure Architecture Center | Microsoft Learn
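To ground the custom-rule guidance above, here is a minimal, hedged Azure CLI sketch of a WAF policy with a custom rule that blocks requests from a given source IP range before they reach AI endpoints. The resource group, policy and rule names, priority, and the 203.0.113.0/24 range are illustrative placeholders, not values from this article; verify the exact parameter names against the current az CLI reference for your version before relying on them.

```bash
# Create a WAF policy to associate with the Application Gateway (or a specific listener).
az network application-gateway waf-policy create \
  --resource-group rg-ai-gateway \
  --name wafpol-ai-frontend \
  --location eastus

# Custom rule: block traffic from an untrusted IP range.
az network application-gateway waf-policy custom-rule create \
  --resource-group rg-ai-gateway \
  --policy-name wafpol-ai-frontend \
  --name BlockUntrustedRange \
  --priority 10 \
  --rule-type MatchRule \
  --action Block

# Match condition: source address falls inside the blocked CIDR.
az network application-gateway waf-policy custom-rule match-condition add \
  --resource-group rg-ai-gateway \
  --policy-name wafpol-ai-frontend \
  --name BlockUntrustedRange \
  --match-variables RemoteAddr \
  --operator IPMatch \
  --values "203.0.113.0/24"
```

Similar custom rules can express geo-blocking (GeoMatch on RemoteAddr) or header requirements (RequestHeaders match variables), and WAF v2 also supports rate-limit custom rules for the per-client throttling scenario described above.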
Decoding On-Premises ADC Rules: Migration to Azure Application Gateway

Overview

As Azure Application Gateway evolves, many organizations are considering how their existing on-premises solutions, such as F5, NetScaler, and Radware, can transition to leverage Azure's native services. During this shift to cloud-native architecture, a frequent question arises: "Can Application Gateway support my current load balancing configurations?"

The short answer: it depends on your use case. With the right approach, the transition can be smooth, scalable, and secure. Azure Application Gateway, especially when used with Azure-native services like Web Application Firewall (WAF), Azure Front Door, and Azure Firewall, can support common use cases. This guide provides a functional comparison, outlines what's supported, and offers a blueprint for successful migration.

Key Capabilities of Application Gateway

Azure Application Gateway v2 brings a host of enhancements that align with the needs of modern, cloud-first organizations:
- Autoscaling & zone redundancy
- Native WAF + Azure DDoS Protection
- Native support for header rewrites, URL-based routing, and SSL termination
- Integration with Azure Monitor, Log Analytics, and Defender for Cloud
- Azure-native deployment: ARM/Bicep, CLI, GitOps, Terraform, CI/CD

These features make Application Gateway a strong option, especially for cloud-first and hybrid scenarios, where customers benefit from simplified operations, improved agility, and enhanced security.

What are ADC Rules?

On-premises ADCs (Application Delivery Controllers) often include advanced traffic management features, such as iRules and Citrix policy expressions. These Layer 4–7 devices go beyond basic load balancing, enabling traffic manipulation at various stages of the connection lifecycle. ADCs are powerful, flexible, and often deeply embedded in enterprise traffic logic. If you rely on these features, migration is still possible; Azure Application Gateway supports many commonly used functionalities out of the box.

Common ADC scenarios:
- Redirects and rewrites
- IP filtering and geo-blocking
- Custom error handling
- Event-driven logic like HTTP_REQUEST, CLIENT_ACCEPTED

Application Gateway Feature Patterns

ADC traffic management features are powerful and flexible, and often deeply embedded in enterprise traffic flows. Application Gateway provides native support for many common scenarios. In this guide, we'll show you how to translate typical advanced-rule patterns into Application Gateway configurations.

[Note]: When migrating WAF rules, enable detection mode first to identify false positives before enforcing blocks.

| Citrix Feature | iRule Feature | App Gateway v2 Equivalent | Supported for App Gateway? |
| --- | --- | --- | --- |
| Responder Policies | Redirects (301/302) | Native redirect rules | ✅ |
| Rewrite Policies | Header rewrites | Rewrite Set rules | ✅ |
| GSLB + Responder Policies | Geo-based routing | Combining with Azure Front Door | ✅ |
| Content Switching Policies | URL-based routing | Path-based routing rules | ✅ |
| Responder/ACLs | IP filtering | WAF custom rules or NSGs | ✅ |
| GSLB + Policy Expressions | Geo-blocking | WAF rules | ✅ |
| Content Switching Policies | Path-based routing | URL path maps | ✅ |
| Content Switching / Rewrite Policies | Header-based routing | Limited with parameter-based path selection | ➗ |
| Advanced Policy Expressions (regex supported) | Regex-based routing | Limited regex support via path parameters | ➗ |
| Priority Queues / Rate Control | Real-time traffic shaping | Limited with Azure Front Door | ➗ |
| AppExpert with TCP expressions | TCP payload inspection | Not supported | ❌ |
| Not supported | Event-driven hooks (HTTP_REQUEST, etc.) | Not supported | ❌ |
| Not supported | Query pool | Not supported | ❌ |
| Not supported | Per-request scripting | Not supported | ❌ |
| Deep packet inspection + Policies (limited) | Payload-based routing | Not supported | ❌ |
| Not supported | Full scripting (TCL) | Not supported | ❌ |

Translating Advanced Rules

Migrating features such as iRules and Citrix policy expressions from ADCs is less about line-by-line translation and more about recognizing patterns. Think of it as translating a language: not word-for-word, but intent-for-intent.

How to get started:
- Tool-assisted translation: Use Copilot or GPT-based tools to translate common ADC rule patterns.
- Inventory & analyze: Break complex rules into modular Application Gateway functions (redirects, rewrites).
- Document: Record the original goal of each rule and its translated equivalent.

Where to Configure in Azure

You can implement routing and rewrite logic via:
- Azure portal UI
- Azure CLI / PowerShell (az network application-gateway)
- ARM templates / Bicep (for infrastructure-as-code deployments)
- REST API (for automation/CI-CD pipelines)

Example: Configure header rewrite in the portal
1. Open your Application Gateway in the Azure portal.
2. Navigate to Rewrites on the sidebar.
3. Click + Add Rewrite Set, then apply it to your routing rule.
4. Define your rewrite conditions and actions.

[NOTE]: Not sure what rewrites are? Learn more here about Rewrite HTTP Headers. An equivalent Azure CLI sketch appears at the end of this article.

Figure: Rewrite configuration (click + Add Rewrite set to apply a new rewrite to your routing rule)

Resources
- Application Gateway v1 to v2: Migrate from App Gateway v1 to v2
- Best practices: Architecture Best Practices for Azure Application Gateway v2 - Microsoft Azure Well-Architected Framework | Microsoft Learn
- Rewrites: https://learn.microsoft.com/en-us/azure/application-gateway/rewrite-http-headers-url
- Header-based routing: https://learn.microsoft.com/en-us/azure/application-gateway/parameter-based-path-selection-portal
- Tuning WAF rules: Tune Azure Web Application Firewall for Azure Front Door | Microsoft Learn

Conclusion

While AI-powered assistants can help interpret and translate common ADC traffic management patterns, manual recreation and validation of rules are still necessary to ensure accuracy and alignment with your specific requirements. Nevertheless, migrating to Application Gateway v2 is not only feasible, it represents a strategic move toward a modern, cloud-native infrastructure. With thoughtful planning and the right mindset, organizations can maintain traffic flexibility while gaining the agility, scalability, and operational efficiency of the Azure ecosystem.
If you are unsure whether your current on-premises configuration can be supported in Azure Application Gateway, please consult the official Azure documentation or reach out to Microsoft support for guidance.
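As referenced in the portal walkthrough above, here is a minimal Azure CLI sketch that creates a rewrite rule set, adds a rewrite rule that sets a response header, and attaches the set to an existing routing rule. The gateway, resource group, rule, and header names are illustrative placeholders rather than values from this article; confirm parameter names against the current az network application-gateway reference before use.

```bash
# Create a rewrite rule set on an existing Application Gateway.
az network application-gateway rewrite-rule set create \
  --resource-group rg-appgw-demo \
  --gateway-name appgw-demo \
  --name rrs-security-headers

# Add a rewrite rule that sets a response header.
az network application-gateway rewrite-rule create \
  --resource-group rg-appgw-demo \
  --gateway-name appgw-demo \
  --rule-set-name rrs-security-headers \
  --name set-content-type-options \
  --sequence 100 \
  --response-headers X-Content-Type-Options=nosniff

# Attach the rewrite rule set to an existing request routing rule.
az network application-gateway rule update \
  --resource-group rg-appgw-demo \
  --gateway-name appgw-demo \
  --name rule1 \
  --rewrite-rule-set rrs-security-headers
```

Conditions that scope a rewrite to particular paths or header values can be added with the rewrite-rule condition subcommands, mirroring the "rewrite conditions and actions" step in the portal flow.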
App Connectivity issue

I have come across an issue reported by one of our users, who states that he is unable to connect to an application on port 5672 hosted behind an Azure internal load balancer. From my observation in the Azure portal after logging in, the Azure front-end load balancer is marking the front-end port as unresponsive/down for service 5672, while the back-end port 2009 on the Azure internal load balancer is seen as up on the back-end pool (virtual F5). Port mapping is done properly on Azure.

The error as seen on Azure is: "TCP probe out, unhealthy backend instances or unhealthy app listening on port".

However, when I check on the virtual F5, the backend server is responding on port 5672 normally and the health checks look OK, so the VIP is marked as up.

Is this abnormal behaviour on the application side for the 5672 service, or is there something more to check on the Azure side that is resulting in the TCP probe-out error? Please suggest.