application gateway
Networking out a private VNET in Azure with a third-party app such as a payment gateway?

I need to set up networking so that my VNET in Azure connects to third-party applications, such as payment gateways or messaging apps, which are on the public internet. Please let me know the options and why we should prefer one over the other.

# Integrating Azure Application Gateway v2 with Azure API Management for secure and scalable API
Why Application Gateway v2 + Azure API Management?

- Layer-7 routing with path-based rules, host headers, URL rewrites, and WAF protection (OWASP Core Rule Set).
- Azure API Management provides API abstraction, versioning, throttling, caching, JWT validation, and per-API policies.
- Combined, App Gateway becomes the internet-facing secure entry point and Azure API Management the control plane for API governance.

Scenario 1: Internet → App Gateway (WAF) → Azure API Management (External) → Backends

Best when Azure API Management needs to be publicly reachable but protected by WAF and central routing.

[Client] ─HTTPS──> [App Gateway v2 (WAF)] ─HTTPS──> [Azure API Management (External)] ─> [Private/On-prem/Azure Backends]

Pros: Simple, fast to implement, WAF in front, supports CDN/Front Door chaining.
Cons: Azure API Management is public; additional steps are required for IP allow-lists and mTLS.

Scenario 2: Internet → App Gateway (WAF) → Azure API Management (Internal) via Private Endpoint

Azure API Management is internal; only App Gateway is public. Zero-trust friendly.

[Client] ─HTTPS──> [App Gateway v2 (WAF, Public)] ─HTTPS──> [Private Endpoint] ─> [Azure API Management (Internal)] ─> [Backends]

Pros: Azure API Management is not exposed to the internet; traffic flows through App Gateway + Private Link.
Cons: Requires vNet planning, DNS, and App Gateway-to-Private Link name resolution.

Scenario 3: Azure API Management (External) → App Gateway (Internal) → Private Backends

Azure API Management is the public front door; App Gateway does L7 routing to internal services.

[Client] ─HTTPS──> [Azure API Management (External)] ─HTTPS──> [App Gateway (Internal/WAF)] ─> [Backends]

Pros: Azure API Management security & governance at your internet front door.
Cons: More Azure API Management policy work; App Gateway must be reachable from Azure API Management.
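In all three scenarios, Azure API Management enforces throttling, and a throttled client receives HTTP 429. As a hedged illustration (not from the original article, and the helper name is ours), a caller can honor a `Retry-After` header and fall back to capped exponential backoff:

```python
import random
from typing import Optional

def next_backoff_seconds(attempt: int, retry_after: Optional[str] = None,
                         base: float = 1.0, cap: float = 60.0) -> float:
    """Decide how long to wait before retrying a 429 from the gateway.

    Prefer the server-supplied Retry-After value (whole seconds) when
    present; otherwise use capped exponential backoff with jitter.
    """
    if retry_after is not None and retry_after.isdigit():
        return float(retry_after)
    # Exponential backoff: base * 2^attempt, capped, plus up to 25% jitter.
    delay = min(cap, base * (2 ** attempt))
    return delay + random.uniform(0, delay / 4)
```

For example, with a policy of `rate-limit calls="100" renewal-period="60"`, a sensible client retries after the number of seconds the gateway reports, rather than hammering the endpoint.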
## Network & DNS design checklist

- Virtual networks & subnets:
  - App Gateway subnet (required dedicated subnet)
  - Azure API Management subnet (for the internal tier)
  - Shared-services subnet for Bastion/jumpbox/logging
- Private Link: Enable an Azure API Management private endpoint in the Azure API Management subnet.
- Private DNS zones: `privatelink.azure-api.net` for Azure API Management; custom zones for backends.
- Name resolution: App Gateway must resolve the Azure API Management private FQDN via vNet DNS or Azure Private DNS.
- Firewall & NSGs: Restrict inbound/outbound; allow only required ports to Azure API Management, Key Vault, and Log Analytics.
- Hybrid connectivity: Site-to-site VPN or ExpressRoute for on-prem backends; consider Azure Firewall or an NVA.

## Certificates & TLS

- Custom domains:
  - App Gateway: api.contoso.com
  - Azure API Management: gateway.contoso.com (with a custom hostname on Azure API Management)
- TLS ports: HTTPS 443 end-to-end; disable TLS 1.0/1.1.
- Cert storage: Use Azure Key Vault for SSL certs; integrate App Gateway & Azure API Management with Key Vault (managed identity).
- mTLS (client certs): Enforce on Azure API Management with policies; optionally on App Gateway via mutual auth for selected listeners.

## WAF (Web Application Firewall) on App Gateway v2

- Modes: Detection vs. Prevention (recommend Prevention once tuned).
- CRS: Start with 3.2; add baseline exclusions for APIs (JSON payloads).
- Managed rules: Enable bot protection, set anomaly scoring, create exclusions for headers like Authorization.
- Logging: Send WAF logs to Log Analytics; build alerts for spikes in blocked requests.

## Azure API Management policies – common patterns

- Inbound: `validate-jwt`, `check-header`, `rate-limit`, `ip-filter`, `set-backend-service`
- Backend: `retry`, `forward-request` with mTLS
- Outbound: `set-header`, `find-and-replace`, `cache`
- Global vs. API vs. operation scope: Keep global minimal; override at the API/operation level for precision.
- Dev, Test, Prod: Parameterize via named values and Key Vault references.

Example – JWT validation and rate limit:

```xml
<policies>
  <inbound>
    <base />
    <validate-jwt header-name="Authorization" failed-validation-httpcode="401">
      <openid-config url="https://login.microsoftonline.com/<tenant-id>/v2.0/.well-known/openid-configuration" />
      <audiences>
        <audience>api://contoso-app-id</audience>
      </audiences>
      <issuers>
        <issuer>https://sts.windows.net/<tenant-id>/</issuer>
      </issuers>
    </validate-jwt>
    <rate-limit calls="100" renewal-period="60" />
  </inbound>
  <backend>
    <forward-request />
  </backend>
  <outbound>
    <base />
  </outbound>
</policies>
```

## Terraform – Core Resources (App Gateway v2 + Azure API Management)

Note: Simplified example; parameterize for prod, add Key Vault integrations, diagnostics, and role assignments.

```hcl
# Variables (example)
variable "location"          { default = "eastus" }
variable "rg_name"           { default = "rg-appgw-apim" }
variable "vnet_name"         { default = "vnet-core" }
variable "appgw_subnet"      { default = "snet-appgw" }
variable "apim_subnet"       { default = "snet-apim" }
variable "agw_pfx_password"  { sensitive = true }

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "rg" {
  name     = var.rg_name
  location = var.location
}

resource "azurerm_virtual_network" "vnet" {
  name                = var.vnet_name
  location            = var.location
  resource_group_name = azurerm_resource_group.rg.name
  address_space       = ["10.10.0.0/16"]
}

resource "azurerm_subnet" "snet_appgw" {
  name                 = var.appgw_subnet
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = ["10.10.1.0/24"]
}

resource "azurerm_subnet" "snet_apim" {
  name                 = var.apim_subnet
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = ["10.10.2.0/24"]
}

# Public IP for App Gateway
resource "azurerm_public_ip" "appgw_pip" {
  name                = "pip-appgw"
  resource_group_name = azurerm_resource_group.rg.name
  location            = var.location
  allocation_method   = "Static"
  sku                 = "Standard"
}

# Application Gateway v2 (WAF)
resource "azurerm_application_gateway" "appgw" {
  name                = "agw-v2-waf"
  resource_group_name = azurerm_resource_group.rg.name
  location            = var.location

  sku {
    name     = "WAF_v2"
    tier     = "WAF_v2"
    capacity = 2
  }

  gateway_ip_configuration {
    name      = "appgw-ipcfg"
    subnet_id = azurerm_subnet.snet_appgw.id
  }

  frontend_port {
    name = "https-port"
    port = 443
  }

  frontend_ip_configuration {
    name                 = "appgw-feip"
    public_ip_address_id = azurerm_public_ip.appgw_pip.id
  }

  ssl_certificate {
    name     = "ssl-agw"
    data     = filebase64("certs/agw.pfx")
    password = var.agw_pfx_password
  }

  http_listener {
    name                           = "listener-https"
    frontend_ip_configuration_name = "appgw-feip"
    frontend_port_name             = "https-port"
    protocol                       = "Https"
    ssl_certificate_name           = "ssl-agw"
    host_name                      = "api.contoso.com"
  }

  backend_address_pool {
    name = "apim-bepool"
    # For the Azure API Management private endpoint, use an FQDN via custom probe, or an IP when static
    fqdns = ["gateway.contoso.internal"]
  }

  backend_http_settings {
    name                                = "https-settings"
    cookie_based_affinity               = "Disabled"
    port                                = 443
    protocol                            = "Https"
    pick_host_name_from_backend_address = true
    request_timeout                     = 30
    probe_name                          = "apim-probe"
  }

  probe {
    name                                      = "apim-probe"
    protocol                                  = "Https"
    path                                      = "/status-0123456789abcdef"
    pick_host_name_from_backend_http_settings = true
    interval                                  = 30
    timeout                                   = 30
    unhealthy_threshold                       = 3
  }

  request_routing_rule {
    name                       = "route-to-apim"
    rule_type                  = "Basic"
    priority                   = 100 # required for v2 SKUs
    http_listener_name         = "listener-https"
    backend_address_pool_name  = "apim-bepool"
    backend_http_settings_name = "https-settings"
  }

  waf_configuration {
    enabled          = true
    firewall_mode    = "Prevention"
    rule_set_type    = "OWASP"
    rule_set_version = "3.2"
  }
}

# Azure API Management (Developer SKU for demo – use Premium for prod & VNET integration)
resource "azurerm_api_management" "apim" {
  name                 = "apim-contoso"
  location             = var.location
  resource_group_name  = azurerm_resource_group.rg.name
  publisher_name       = "Contoso"
  publisher_email      = "admin@contoso.com"
  sku_name             = "Developer_1"
  virtual_network_type = "None" # Use "Internal" for vNet, then add a private endpoint
}
```

## Azure DevOps – CI/CD YAML (App Gateway + Azure API Management via Terraform)

```yaml
trigger:
  branches:
    include: [ main ]

pool:
  vmImage: 'ubuntu-latest'

variables:
  TF_VERSION: '1.8.5'
  ARM_USE_MSI: true

stages:
  - stage: Validate
    jobs:
      - job: tf_validate
        steps:
          - task: Bash@3
            displayName: 'Install Terraform'
            inputs:
              targetType: 'inline'
              script: |
                sudo apt-get update && sudo apt-get install -y unzip
                curl -L -o tf.zip https://releases.hashicorp.com/terraform/${TF_VERSION}/terraform_${TF_VERSION}_linux_amd64.zip
                unzip tf.zip && sudo mv terraform /usr/local/bin/
                terraform -version
          - task: AzureCLI@2
            displayName: 'Terraform init & validate'
            inputs:
              azureSubscription: '$(AZURE_SERVICE_CONNECTION)'
              scriptType: 'bash'
              scriptLocation: 'inlineScript'
              inlineScript: |
                terraform init -backend-config=backend.hcl
                terraform fmt -check
                terraform validate
  - stage: Plan
    jobs:
      - job: tf_plan
        steps:
          - task: AzureCLI@2
            inputs:
              azureSubscription: '$(AZURE_SERVICE_CONNECTION)'
              scriptType: 'bash'
              scriptLocation: 'inlineScript'
              inlineScript: |
                terraform plan -out=tfplan
          - publish: tfplan
            artifact: tfplan
  - stage: Apply
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
    jobs:
      - job: tf_apply
        steps:
          - download: current
            artifact: tfplan
          - task: AzureCLI@2
            inputs:
              azureSubscription: '$(AZURE_SERVICE_CONNECTION)'
              scriptType: 'bash'
              scriptLocation: 'inlineScript'
              inlineScript: |
                terraform apply -auto-approve tfplan
```

## Observability & diagnostics

- Access logs: App Gateway & WAF logs to Log Analytics; query with KQL.
- Azure API Management metrics: requests, backend duration, cache hits; enable diagnostic settings to Log Analytics/Storage/Event Hub.
- End-to-end tracing: Correlate `x-correlation-id` across App Gateway, Azure API Management, and backend logs.
- Alerts: 4xx/5xx thresholds, spikes in WAF blocks, Azure API Management throttling events, TLS certificate expiry.
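End-to-end tracing hinges on one `x-correlation-id` value flowing through App Gateway, Azure API Management, and the backend. A minimal, hypothetical sketch (the helper name is ours, not from the article) of how a backend might reuse the id forwarded by the gateway, or mint one when it is missing:

```python
import uuid

CORRELATION_HEADER = "x-correlation-id"

def ensure_correlation_id(headers: dict) -> dict:
    """Return a copy of the request headers with a correlation id present.

    Reuse the id forwarded by the gateway when it exists (case-insensitive
    lookup); otherwise mint a new UUID so downstream logs can still be joined.
    """
    found = next(
        (v for k, v in headers.items() if k.lower() == CORRELATION_HEADER and v),
        None,
    )
    # Normalize to a single lowercase header key.
    out = {k: v for k, v in headers.items() if k.lower() != CORRELATION_HEADER}
    out[CORRELATION_HEADER] = found or str(uuid.uuid4())
    return out
```

The backend then echoes this header into its own log lines, so a single KQL join across App Gateway, Azure API Management, and application logs reconstructs the request path.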
## Security hardening

- Enforce TLS 1.2+; disable weak ciphers.
- Tune WAF exclusions minimally; review rules regularly.
- Azure API Management IP allow-lists for admin endpoints; use RBAC and separate admin vs. gateway hostnames.
- Private endpoints for Azure API Management & backends; deny public network access where possible.
- mTLS from App Gateway → Azure API Management, or client → Azure API Management when required (Key Vault for client certs).
- DDoS Protection on vNets with public exposure; consider Azure Front Door WAF for the global edge.

## Cost & performance

- Right-size App Gateway v2 capacity; enable autoscaling for variable traffic.
- Use Azure API Management Premium only if you need vNet, multi-region, or zone redundancy; otherwise consider Standard/Developer for non-prod.
- Caching policies in Azure API Management reduce backend load; use response compression.
- Optimize health probes for backend responsiveness (avoid tight intervals).

## Troubleshooting

- App Gateway 502/504: Check backend health, probe path, SNI/host header, TLS ciphers, and DNS resolution to Azure API Management.
- Azure API Management 401/403: Validate JWT audience/issuer; check clock skew, named values, and policy order.
- Private endpoint: Verify the DNS record in `privatelink.azure-api.net` exists, the App Gateway subnet can resolve it, and NSGs are not blocking traffic.
- Cert issues: PFX password correct; full chain present; key usage supports server auth.
- Performance: Turn on App Gateway autoscaling; review Azure API Management throttling; check backend rate limits.

## Production checklist

- Custom domains & cert rotation via Key Vault
- WAF in Prevention mode with tuned exclusions
- Azure API Management policies for auth, rate limiting, cache, and headers
- Private endpoints + DNS validated end-to-end
- Autoscaling & health probes tuned
- Diagnostics & alerts configured
- CI/CD gated approvals; Terraform state secured
- Runbooks for failover & certificate renewal

# Delivering web applications over IPv6
The IPv4 address space pool has been exhausted for some time now, meaning there is no new public address space available for allocation from Internet Registries. The internet continues to run on IPv4 through technical measures such as Network Address Translation (NAT) and Carrier-Grade NAT, and through reallocation of address space via IPv4 address space trading.

IPv6 will ultimately be the dominant network protocol on the internet, as the IPv4 life-support mechanisms used by network operators, hosting providers, and ISPs will eventually reach the limits of their scalability. Mobile networks are already changing to IPv6-only APNs; reachability of IPv4-only destinations from these mobile networks is through NAT64 gateways, which sometimes causes problems.

Client uptake of IPv6 is progressing steadily. Google reports 49% of clients connecting to its services over IPv6 globally, with France leading at 80%.

IPv6 client access measured by Google:

Meanwhile, countries around the world are requiring IPv6 reachability for public web services. Examples include the United States, European Union member states such as the Netherlands, as well as Norway, India, and Japan.

IPv6 adoption per country measured by Google:

Entities needing to comply with these mandates are looking at Azure's networking capabilities for solutions. Azure supports IPv6 for both private and public networking, and its capabilities have developed and expanded over time. This article discusses strategies to build and deploy IPv6-enabled public, internet-facing applications that are reachable from IPv6(-only) clients.

## Azure Networking IPv6 capabilities

Azure's private networking capabilities center on Virtual Networks (VNETs) and the components deployed within them. Azure VNETs are IPv4/IPv6 dual-stack capable: a VNET must always have IPv4 address space allocated, and can also have IPv6 address space.
Virtual machines in a dual-stack VNET have both an IPv4 and an IPv6 address from the VNET range, and can sit behind IPv6-capable External and Internal Load Balancers.

VNETs can be connected through VNET peering, which effectively turns the peered VNETs into a single routing domain. It is now possible to peer only the IPv6 address spaces of VNETs, so that the IPv4 space assigned to the VNETs can overlap and communication across the peering is over IPv6. The same is true for connectivity to on-premises networks over ExpressRoute: the Private Peering can be enabled for IPv6 only, so that VNETs in Azure do not have to have unique IPv4 address space assigned, which may be in short supply in an enterprise.

Not all internal networking components are IPv6 capable yet. The most notable exceptions are VPN Gateway, Azure Firewall, and Virtual WAN; IPv6 compatibility is on the roadmap for these services, but target availability dates have not been communicated.

But now let's focus on Azure's externally facing, public network services. Azure is ready to let customers publish their web applications over IPv6. IPv6-capable externally facing network services include:

- Azure Front Door
- Application Gateway
- External Load Balancer
- Public IP addresses and Public IP address prefixes
- Azure DNS
- Azure DDoS Protection
- Traffic Manager
- App Service (IPv6 support is in public preview)

## IPv6 Application Delivery

IPv6 Application Delivery refers to the architectures and services that enable your web application to be accessible via IPv6. The goal is to provide an IPv6 address and connectivity for clients, while often continuing to run your application on IPv4 internally. Key benefits of adopting IPv6 in Azure include:

✅ Expanded Client Reach: IPv4-only websites risk being unreachable to IPv6-only networks. By enabling IPv6, you expand your reach into growing mobile and IoT markets that use IPv6 by default. Governments and enterprises increasingly mandate IPv6 support for public-facing services.
✅ Address Abundance & No NAT: IPv6 provides a virtually unlimited address pool, mitigating IPv4 exhaustion concerns. This abundance means each service can have its own public IPv6 address, often removing the need for complex NAT schemes. End-to-end addressing can simplify connectivity and troubleshooting.

✅ Dual-Stack Compatibility: Azure supports dual-stack deployments where services listen on both IPv4 and IPv6. This allows a single application instance or endpoint to serve both types of clients seamlessly. Dual-stack ensures you don't lose any existing IPv4 users while adding IPv6 capability.

✅ Performance and Future Services: Some networks and clients might experience better performance over IPv6. Also, being IPv6-ready prepares your architecture for future Azure features and services as IPv6 integration deepens across the platform.

General steps to enable IPv6 connectivity for a web application in Azure are:

Plan and Enable IPv6 Addressing in Azure: Define an IPv6 address space in your Azure Virtual Network. Azure allows adding IPv6 address space to existing VNETs, making them dual-stack. A /56 block is recommended for the VNET; Azure requires a /64 prefix for each subnet. If you have existing infrastructure, you might need to create new subnets or migrate resources, especially since older Application Gateway v1 instances cannot simply be "upgraded" to dual-stack.

Deploy or Update Frontend Services with IPv6: Choose a suitable Azure service (Application Gateway, External / Global Load Balancer, etc.) and configure it with a public IPv6 address on the frontend. This usually means selecting a *Dual Stack* configuration so the service gets both an IPv4 and an IPv6 public IP. For instance, when creating an Application Gateway v2, you would specify IP address type: DualStack (IPv4 & IPv6). Azure Front Door provides dual-stack capabilities by default with its global endpoints.
Configure Backends and Routing: Usually your backend servers or services will remain on IPv4. At the time of writing (October 2025), Azure Application Gateway does not support IPv6 for backend pool addresses. This is fine because the frontend terminates the IPv6 network connection from the client and then initiates an IPv4 connection to the backend pool or origin. Ensure that your load balancing rules, listener configurations, and health probes are all set up to route traffic to these backends. Both IPv4 and IPv6 frontend listeners can share the same backend pool. Azure Front Door does support IPv6 origins.

Update DNS Records: Publish a DNS AAAA record for your application's host name, pointing to the new IPv6 address. This step is critical so that IPv6-only clients can discover the IPv6 address of your service. If your service also has an IPv4 address, you will have both A (IPv4) and AAAA (IPv6) records for the same host name. DNS will thus allow clients of either IP family to connect. (In multi-region scenarios using Traffic Manager or Front Door, DNS configuration might be handled through those services, as discussed later.)

Test IPv6 Connectivity: Once set up, test from an IPv6-enabled network or use online tools to ensure the site is reachable via IPv6. Azure's services like Application Gateway and Front Door will handle the dual-stack routing, but it's good to verify that content loads on an IPv6-only connection and that SSL certificates, etc., work over IPv6 as they do for IPv4.

Next, we explore specific Azure services and architectures for IPv6 web delivery in detail.

## External Load Balancer - single region

Azure External Load Balancer (also known as Public Load Balancer) can be deployed in a single region to provide IPv6 access to applications running on virtual machines or VM scale sets. External Load Balancer acts as a Layer 4 entry point for IPv6 traffic, distributing connections across backend instances.
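The address-planning guidance above (a /56 block for the VNET, a /64 prefix per subnet) can be sanity-checked with Python's standard `ipaddress` module. The prefix lengths follow the article's recommendation; the documentation-range address is our own stand-in for a real allocation:

```python
import ipaddress

# RFC 3849 documentation prefix standing in for a real IPv6 allocation.
vnet_space = ipaddress.ip_network("2001:db8:1234::/56")

# Azure requires /64 subnets; a /56 VNET yields 2^(64-56) = 256 of them.
subnets = list(vnet_space.subnets(new_prefix=64))
print(len(subnets))   # 256
print(subnets[0])     # 2001:db8:1234::/64
print(subnets[1])     # 2001:db8:1234:1::/64
```

Carving the space up front like this makes it easy to reserve distinct /64s for the gateway subnet, backend subnets, and shared services before anything is deployed.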
This scenario is ideal when you have stateless applications or services that do not require Layer 7 features like SSL termination or path-based routing.

Key IPv6 Features of External Load Balancer:

- Dual-Stack Frontend: Standard Load Balancer supports both IPv4 and IPv6 frontends simultaneously. When configured as dual-stack, the load balancer gets two public IP addresses – one IPv4 and one IPv6 – and can distribute traffic from both IP families to the same backend pool.
- Zone-Redundant by Default: Standard Load Balancer is zone-redundant by default, providing high availability across Azure Availability Zones within a region without additional configuration.
- IPv6 Frontend Availability: IPv6 support in Standard Load Balancer is available in all Azure regions. Basic Load Balancer does not support IPv6, so you must use the Standard SKU.
- IPv6 Backend Pool Support: While the frontend accepts IPv6 traffic, the load balancer will not translate IPv6 to IPv4. Backend pool members (VMs) must have private IPv6 addresses, so you will need to add private IPv6 addressing to your existing IPv4-only VM infrastructure. This is in contrast to Application Gateway, discussed below, which terminates inbound IPv6 network sessions and connects to the backend over IPv4.
- Protocol Support: Supports TCP and UDP load balancing over IPv6, making it suitable for web applications and APIs, but also for non-web TCP- or UDP-based services accessed by IPv6-only clients.

To set up an IPv6-capable External Load Balancer in one region, follow this high-level process:

Enable IPv6 on the Virtual Network: Ensure the VNET where your backend VMs reside has an IPv6 address space. Add a dual-stack address space to the VNET (e.g., add an IPv6 space like 2001:db8:1234::/56 to complement your existing IPv4 space). Configure subnets that are dual-stack, containing both IPv4 and IPv6 prefixes (/64 for IPv6).
Create Standard Load Balancer with IPv6 Frontend: In the Azure Portal, create a new Standard Load Balancer. During creation, configure the frontend IP with both IPv4 and IPv6 public IP addresses. Create or select existing Standard SKU public IP resources – one for IPv4 and one for IPv6.

Configure Backend Pool: Add your virtual machines or VM scale set instances to the backend pool. Note that your backend instances will need private IPv6 addresses, in addition to IPv4 addresses, to receive inbound IPv6 traffic via the load balancer.

Set Up Load Balancing Rules: Create load balancing rules that map frontend ports to backend ports. For web applications, typically map port 80 (HTTP) and 443 (HTTPS) from both the IPv4 and IPv6 frontends to the corresponding backend ports. Configure health probes to ensure only healthy instances receive traffic.

Configure Network Security Groups: Ensure an NSG is present on the backend VMs' subnet, allowing inbound traffic from the internet to the port(s) of the web application. Inbound traffic is "secure by default", meaning that inbound connectivity from the internet is blocked unless an NSG is present that explicitly allows it.

DNS Configuration: Create DNS records for your application: an A record pointing to the IPv4 address and an AAAA record pointing to the IPv6 address of the load balancer frontend.

Outcome: In this single-region scenario, IPv6-only clients will resolve your application's hostname to an IPv6 address and connect to the External Load Balancer over IPv6.

Example: Consider a web application running on a VM (or a VM scale set) behind an External Load Balancer in Sweden Central. The VM runs the Azure Region and Client IP Viewer containerized application exposed on port 80, which displays the region the VM is deployed in and the calling client's IP address. The load balancer's front-end IPv6 address has a DNS name of ipv6webapp-elb-swedencentral.swedencentral.cloudapp.azure.com.
When called from a client with an IPv6 address, the application shows its region and the client's address.

Limitations & Considerations:

- Standard SKU Required: Basic Load Balancer does not support IPv6. You must use Standard Load Balancer.
- Layer 4 Only: Unlike Application Gateway, External Load Balancer operates at Layer 4 (transport layer). It cannot perform SSL termination, cookie-based session affinity, or path-based routing. If you need these features, consider Application Gateway instead.
- Dual-Stack IPv4/IPv6 Backend Required: Backend pool members must have private IPv6 addresses to receive inbound IPv6 traffic via the load balancer. The load balancer does not translate between the IPv6 frontend and an IPv4 backend.
- Outbound Connectivity: If your backend VMs need outbound internet access over IPv6, you need to configure an IPv6 outbound rule.

## Global Load Balancer - multi-region

Azure Global Load Balancer (also known as Cross-Region Load Balancer) provides a cloud-native global network load balancing solution for distributing traffic across multiple Azure regions. Unlike DNS-based solutions, Global Load Balancer uses anycast IP addressing to automatically route clients to the nearest healthy regional deployment through Microsoft's global network.

Key Features of Global Load Balancer:

- Static Anycast Global IP: Global Load Balancer provides a single static public IP address (both IPv4 and IPv6 are supported) that is advertised from all Microsoft WAN edge nodes globally. This anycast address ensures clients always connect to the nearest available Microsoft edge node without requiring DNS resolution.
- Geo-Proximity Routing: The geo-proximity load-balancing algorithm minimizes latency by directing traffic to the nearest region where the backend is deployed. Unlike DNS-based routing, there is no DNS lookup delay – clients connect directly to the anycast IP and are immediately routed to the best region.
- Layer 4 Pass-Through: Global Load Balancer operates as a Layer 4 pass-through network load balancer, preserving the original client IP address (including IPv6 addresses) for backend applications to use in their logic.
- Regional Redundancy: If one region fails, traffic is automatically routed to the next closest healthy regional load balancer within seconds, providing instant global failover without DNS propagation delays.

Architecture Overview: Global Load Balancer sits in front of multiple regional Standard Load Balancers, each deployed in a different Azure region. Each regional load balancer serves a local deployment of your application with IPv6 frontends. The global load balancer provides a single anycast IP address that clients worldwide can use to access your application, with automatic routing to the nearest healthy region.

Multi-Region Deployment Steps:

Deploy Regional Load Balancers: Create Standard External Load Balancers in multiple Azure regions (e.g., Sweden Central, East US 2). Configure each with dual-stack frontends (IPv4 and IPv6 public IPs) and connect them to regional VM deployments or VM scale sets running your application.

Configure Global Frontend IP Address: Create a Global tier public IPv6 address for the frontend, in one of the supported Global Load Balancer home regions. This becomes your application's global anycast address.

Create Global Load Balancer: Deploy the Global Load Balancer in the same home region. The home region is where the global load balancer resource is deployed – it doesn't affect traffic routing.

Add Regional Backends: Configure the backend pool of the Global Load Balancer to include your regional Standard Load Balancers. Each regional load balancer becomes an endpoint in the global backend pool. The global load balancer automatically monitors the health of each regional endpoint.

Set Up Load Balancing Rules: Create load balancing rules mapping frontend ports to backend ports.
For web applications, typically map port 80 (HTTP) and 443 (HTTPS). The backend port on the global load balancer must match the frontend port of the regional load balancers.

Configure Health Probes: Global Load Balancer automatically monitors the health of regional load balancers every 5 seconds. If a regional load balancer's availability drops to 0, it is automatically removed from rotation and traffic is redirected to other healthy regions.

DNS Configuration: Create DNS records pointing to the global load balancer's anycast IP addresses. Create both A (IPv4) and AAAA (IPv6) records for your application's hostname pointing to the global load balancer's static IPs.

Outcome: IPv6 clients connecting to your application's hostname will resolve to the global load balancer's anycast IPv6 address. When they connect to this address, the Microsoft global network infrastructure automatically routes their connection to the nearest participating Azure region. The regional load balancer then distributes the traffic across local backend instances. If that region becomes unavailable, subsequent connections are automatically routed to the next nearest healthy region.

Example: Our web application, which displays the region it is in and the calling client's IP address, now runs on VMs behind External Load Balancers in Sweden Central and East US 2. The External Load Balancers' front-ends are in the backend pool of a Global Load Balancer, which has a Global tier front-end IPv6 address. The front-end has an FQDN of `ipv6webapp-glb.eastus2.cloudapp.azure.com` (the region designation `eastus2` in the FQDN refers to the Global Load Balancer's "home region", into which the Global tier public IP must be deployed). When called from a client in Europe, Global Load Balancer directs the request to the instance deployed in Sweden Central. When called from a client in the US, it directs the request to the instance deployed in East US 2.
Features:

- Client IP Preservation: The original IPv6 client address is preserved and available to backend applications, enabling IP-based logic and compliance requirements.
- Floating IP Support: Configure floating IP at the global level for advanced networking scenarios requiring direct server return or high-availability clustering.
- Instant Scaling: Add or remove regional deployments behind the global endpoint without service interruption, enabling dynamic scaling for traffic events.
- Multiple Protocol Support: Supports both TCP and UDP traffic distribution across regions, suitable for various application types beyond web services.

Limitations & Considerations:

- Home Region Requirement: Global Load Balancer can only be deployed in specific home regions, though this doesn't affect traffic routing performance.
- Public Frontend Only: Global Load Balancer currently supports only public frontends – internal/private global load balancing is not available.
- Standard Load Balancer Backends: The backend pool can only contain Standard Load Balancers, not Basic Load Balancers or other resource types.
- Same IP Version Requirement: NAT64 translation isn't supported – the frontend and backend must use the same IP version (IPv4 or IPv6).
- Port Consistency: The backend port on the global load balancer must match the frontend port of the regional load balancers for proper traffic flow.
- Health Probe Dependencies: Regional load balancers must have proper health probes configured for the global load balancer to accurately assess regional health.

Comparison with DNS-Based Solutions: Unlike Traffic Manager or other DNS-based global load balancing solutions, Global Load Balancer provides:

- Instant Failover: No DNS TTL delays – failover happens within seconds at the network level.
- True Anycast: A single IP address that works globally without client-side DNS resolution.
- Consistent Performance: Geo-proximity routing through Microsoft's backbone network ensures optimal paths.
- Simplified Management: No DNS record management or TTL considerations.

This architecture delivers global high availability and optimal performance for IPv6 applications through anycast routing, making it a good solution for latency-sensitive applications requiring worldwide accessibility with near-instant regional failover.

## Application Gateway - single region

Azure Application Gateway can be deployed in a single region to provide IPv6 access to applications in that region. Application Gateway acts as the entry point for IPv6 traffic, terminating HTTP/S connections from IPv6 clients and forwarding them to backend servers over IPv4. This scenario works well when your web application is served from one Azure region and you want to enable IPv6 connectivity for it.

Key IPv6 Features of Application Gateway (v2 SKU):

- Dual-Stack Frontend: Application Gateway v2 supports both [IPv4 and IPv6 frontends](https://learn.microsoft.com/en-us/azure/application-gateway/application-gateway-faq). When configured as dual-stack, the gateway gets two IP addresses – one IPv4 and one IPv6 – and can listen on both. (IPv6-only is not supported; IPv4 is always paired.) IPv6 support requires Application Gateway v2; v1 does not support IPv6.
- No IPv6 on Backends: The backend pool must use IPv4 addresses. IPv6 addresses for backend servers are currently not supported. This means your web servers can remain on IPv4 internal addresses, simplifying adoption because you only enable IPv6 on the frontend.
- WAF Support: The Application Gateway Web Application Firewall (WAF) inspects IPv6 client traffic just as it does IPv4.

Single Region Deployment Steps: To set up an IPv6-capable Application Gateway in one region, consider the following high-level process:

Enable IPv6 on the Virtual Network: Ensure the region's VNET where the Application Gateway will reside has an IPv6 address space.
Configure a subnet for the Application Gateway that is dual-stack (contains both an IPv4 subnet prefix and an IPv6 /64 prefix).

Deploy Application Gateway (v2) with Dual-Stack Frontend: Create a new Application Gateway using the Standard_v2 or WAF_v2 SKU.

Populate Backend Pool: Ensure your backend pool (the target application servers or service) contains IPv4 addresses of your actual web servers, or DNS names pointing to them. IPv6 addresses are not supported for backends.

Configure Listeners and Rules: Set up listeners on the Application Gateway for your site. When creating an HTTP(S) listener, you choose which frontend IP to use – you would create one listener for the IPv4 address and one for the IPv6 address. Both listeners can use the same domain name (hostname) and the same underlying routing rule to your backend pool.

Testing and DNS: After the gateway is deployed and configured, note the IPv6 address of the frontend (you can find it in the gateway's overview or in the associated Public IP resource). Update your application's DNS records: create an AAAA record pointing to this IPv6 address (and update the A record to point to the IPv4 address if it changed). With DNS in place, test the application by accessing it from an IPv6-enabled client or tool.

Outcome: In this single-region scenario, IPv6-only clients will resolve your website's hostname to an IPv6 address and connect to the Application Gateway over IPv6. The Application Gateway then handles the traffic and forwards it to your application over IPv4 internally. From the user perspective, the service now appears natively on IPv6. Importantly, this does not require any changes to the web servers, which can continue using IPv4. Application Gateway will include the source IPv6 address in an X-Forwarded-For header, so that the backend application has visibility of the originating client's address.
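On the backend, a handler only needs to read the X-Forwarded-For header to recover the original client address. Here is a minimal sketch (the helper name and behaviour are illustrative, not an Azure API; note that Application Gateway may append the client's source port, and an unbracketed IPv6 value that uses `::` compression cannot always be split unambiguously):

```python
import ipaddress

def split_client_addr(xff_value):
    """Split an X-Forwarded-For entry into (address, port).

    Application Gateway can append the client's source port, e.g.
    '203.0.113.7:54321' or an unbracketed IPv6 form. Try the whole
    string as an address first, then fall back to splitting on the
    last colon. Compressed IPv6 values with an appended port can be
    ambiguous, so this is best-effort only.
    """
    try:
        ipaddress.ip_address(xff_value)
        return xff_value, None          # plain address, no port appended
    except ValueError:
        pass
    addr, _, port = xff_value.rpartition(":")
    ipaddress.ip_address(addr)          # raises ValueError if still invalid
    return addr, int(port)
```

For instance, a value like `2001:1c04:3404:9500:fd9b:58f4:1fb2:db21:60769` splits into the full IPv6 client address and source port 60769.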
Example: Our web application, which displays the region it is deployed in and the calling client's IP address, now runs on a VM behind Application Gateway in Sweden Central. The front-end has an FQDN of `ipv6webapp-appgw-swedencentral.swedencentral.cloudapp.azure.com`. Application Gateway terminates the IPv6 connection from the client and proxies the traffic to the application over IPv4. The client's IPv6 address is passed in the X-Forwarded-For header, which is read and displayed by the application.

Calling the application's API endpoint at `/api/region` shows additional detail, including the IPv4 address of the Application Gateway instance that initiates the connection to the backend, and the original client IPv6 address (with the source port number appended) preserved in the X-Forwarded-For header.

```json
{
  "region": "SwedenCentral",
  "clientIp": "2001:1c04:3404:9500:fd9b:58f4:1fb2:db21:60769",
  "xForwardedFor": "2001:1c04:3404:9500:fd9b:58f4:1fb2:db21:60769",
  "remoteAddress": "::ffff:10.1.0.4",
  "isPrivateIP": false,
  "expressIp": "2001:1c04:3404:9500:fd9b:58f4:1fb2:db21:60769",
  "connectionInfo": {
    "remoteAddress": "::ffff:10.1.0.4",
    "remoteFamily": "IPv6",
    "localAddress": "::ffff:10.1.1.68",
    "localPort": 80
  },
  "allHeaders": {
    "x-forwarded-for": "2001:1c04:3404:9500:fd9b:58f4:1fb2:db21:60769"
  },
  "deploymentAdvice": "Public IP detected successfully"
}
```

Limitations & Considerations:

- Application Gateway v1 SKUs are not supported for IPv6. If you have an older deployment on v1, you'll need to migrate to v2.
- An IPv6-only Application Gateway is not allowed. You must have IPv4 alongside IPv6 (the service must be dual-stack). This is usually fine, as dual-stack ensures all clients are covered.
- No IPv6 backend addresses: The backend pool must have IPv4 addresses.
- Management and Monitoring: Application Gateway logs traffic from IPv6 clients to Log Analytics (the client IP field will show IPv6 addresses).
- Security: Azure's infrastructure provides basic DDoS protection for IPv6 endpoints just as for IPv4. However, it is highly recommended to deploy Azure DDoS Protection Standard: this provides enhanced mitigation tailored to your specific deployment. Consider using the Web Application Firewall function for protection against application layer attacks.

Application Gateway - multi-region

Mission-critical web applications should be deployed in multiple Azure regions, achieving higher availability and lower latency for users worldwide. In a multi-region scenario, you need a mechanism to direct IPv6 client traffic to the "nearest" or healthiest region. Azure Application Gateway by itself is a regional service, so to use it in multiple regions, we use Azure Traffic Manager for global DNS load balancing, or use Azure Front Door (covered in the next section) as an alternative. This section focuses on the Traffic Manager + Application Gateway approach to multi-region IPv6 delivery.

Azure Traffic Manager is a DNS-based load balancer that can distribute traffic across endpoints in different regions. It works by responding to DNS queries with the appropriate endpoint FQDN or IP, based on the routing method (Performance, Priority, Geographic) configured. Traffic Manager is agnostic to the IP version: it either returns CNAMEs, or AAAA records for IPv6 endpoints and A records for IPv4. This makes it suitable for routing IPv6 traffic globally.

Architecture Overview: Each region has its own dual-stack Application Gateway. Traffic Manager is configured with an endpoint entry for each region's gateway. The application's FQDN is now a domain name hosted by Traffic Manager such as ipv6webapp.trafficmanager.net, or a CNAME that ultimately points to it. DNS resolution will go through Traffic Manager, which decides which regional gateway's FQDN to return. The client then connects directly to that Application Gateway's IPv6 address, as follows: 1.
DNS query: The client asks for ipv6webapp.trafficmanager.net, which is hosted in a Traffic Manager profile.
2. Traffic Manager decision: Traffic Manager sees an incoming DNS request and chooses the best endpoint (say, Sweden Central) based on routing rules (e.g., geographic proximity or lowest latency).
3. Traffic Manager response: Traffic Manager returns the FQDN of the Sweden Central Application Gateway to the client.
4. DNS resolution: The client resolves the regional FQDN and receives a AAAA response containing the IPv6 address.
5. Client connects: The client's browser connects to the Sweden Central Application Gateway's IPv6 address directly. The HTTP/S session is established via IPv6 to that regional gateway, which then handles the request.
6. Failover: If that region becomes unavailable, Traffic Manager's health checks will detect it and subsequent DNS queries will be answered with the FQDN of the secondary region's gateway.

Deployment Steps for Multi-Region with Traffic Manager:

Set up dual-stack Application Gateways in each region: Similar to the single-region case, deploy an Azure Application Gateway v2 in each desired region (e.g., one in North America, one in Europe). Configure the web application in each region; these should be parallel deployments serving the same content.

Configure a Traffic Manager profile: In Azure Traffic Manager, create a profile and choose a routing method (such as Performance for nearest-region routing, or Priority for primary/backup failover). Add endpoints for each region. Since our endpoints are Azure services with IPs, we can either use Azure endpoints (if the Application Gateways have Azure-provided DNS names) or External endpoints using the IP addresses. The simplest way is to use the Public IP resource of each Application Gateway as an Azure endpoint – ensure each App Gateway's public IP has a DNS label (so it has an FQDN). Traffic Manager will detect those and also be aware of their IPs.
Alternatively, use the IPv6 address as an External endpoint directly. Traffic Manager allows IPv6 addresses and will return AAAA records for them.

DNS setup: Traffic Manager profiles have an FQDN (like ipv6webapp.trafficmanager.net). You can either use that as your service's hostname, or configure your custom domain to CNAME to the Traffic Manager profile.

Health probing: Traffic Manager continuously checks the health of endpoints. When the endpoints are Azure App Gateways, it sends HTTP/S probes to a specified URI path on each gateway's address. Make sure each App Gateway has a listener on the probing endpoint (e.g., a health check page) and that health probes are enabled.

Testing failover and distribution: Test the setup by querying DNS from different geographical locations (to see if you get the nearest region's IP). Also simulate a region going down (stop the App Gateway or backend) and observe whether Traffic Manager directs traffic to the other region. Because DNS TTLs are involved, failover isn't instant, but it typically completes within a couple of minutes depending on TTL and probe interval.

Considerations in this Architecture:

- Latency vs Failover: Traffic Manager as a DNS load balancer directs users at connect time, but once a client has an answer (IP address), it keeps sending to that address until the DNS record TTL expires and it re-resolves. This is fine for most web apps. Ensure the TTL in the Traffic Manager profile is not too high (the default is 30 seconds).
- IPv6 DNS and Connectivity: Confirm that each region's IPv6 address is correctly configured and reachable globally. Azure's public IPv6 addresses are globally routable. Traffic Manager itself is a global service and fully supports IPv6 in its decision-making.
- Cost: Using multiple Application Gateways and Traffic Manager incurs costs for each component (App Gateway is billed per hour plus capacity units, Traffic Manager per million DNS queries). This is a trade-off for high availability.
- Alternative: Azure Front Door: Azure Front Door is an alternative to the Traffic Manager + Application Gateway combination. Front Door can automatically handle global routing and failover at layer 7 without DNS-based limitations, offering potentially faster failover. Azure Front Door is discussed in the next section.

In summary, multi-region IPv6 web delivery with Application Gateways uses Traffic Manager for global DNS load balancing. Traffic Manager will seamlessly return IPv6 addresses for IPv6 clients, ensuring that no matter where an IPv6-only client is, they get pointed to the nearest available regional deployment of your app. This design achieves global resiliency (withstanding a regional outage) and low latency access, leveraging IPv6 connectivity on each regional endpoint.

Example: The global FQDN of our application is now ipv6webapp.trafficmanager.net and clients will use this FQDN to access the application regardless of their geographical location. Traffic Manager will return the FQDN of one of the regional deployments, `ipv6webapp-appgw-swedencentral.swedencentral.cloudapp.azure.com` or `ipv6webappr2-appgw-eastus2.eastus2.cloudapp.azure.com`, depending on the routing method configured, the health state of the regional endpoints and the client's location. Then the client resolves the regional FQDN through its local DNS server and connects to the regional instance of the application.
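The routing and failover behaviour described above can be mimicked with a few lines of code. This is an illustrative simulation only (`pick_endpoint` is a hypothetical helper; the real decision is made by Traffic Manager's name servers at resolution time):

```python
def pick_endpoint(endpoints, method="Priority"):
    """Return the FQDN a Traffic Manager query would be answered with.

    endpoints: dicts with 'fqdn', 'healthy', and 'priority' (for
    Priority routing) or 'latency_ms' (for Performance routing).
    Unhealthy endpoints are never returned; if all endpoints are
    unhealthy, there is nothing to answer with.
    """
    healthy = [e for e in endpoints if e["healthy"]]
    if not healthy:
        return None
    if method == "Priority":
        return min(healthy, key=lambda e: e["priority"])["fqdn"]
    # Performance routing: the lowest measured latency wins
    return min(healthy, key=lambda e: e["latency_ms"])["fqdn"]

regions = [
    {"fqdn": "ipv6webapp-appgw-swedencentral.swedencentral.cloudapp.azure.com",
     "healthy": True, "priority": 1, "latency_ms": 20},
    {"fqdn": "ipv6webappr2-appgw-eastus2.eastus2.cloudapp.azure.com",
     "healthy": True, "priority": 2, "latency_ms": 95},
]
```

Marking the Sweden Central endpoint unhealthy makes the next "query" return the East US 2 FQDN, which is the failover behaviour a client observes once the DNS TTL expires.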
DNS resolution from a client in Europe:

```
Resolve-DnsName ipv6webapp.trafficmanager.net

Name                          Type  TTL Section NameHost
----                          ----  --- ------- --------
ipv6webapp.trafficmanager.net CNAME 59  Answer  ipv6webapp-appgw-swedencentral.swedencentral.cloudapp.azure.com

Name       : ipv6webapp-appgw-swedencentral.swedencentral.cloudapp.azure.com
QueryType  : AAAA
TTL        : 10
Section    : Answer
IP6Address : 2603:1020:1001:25::168
```

And from a client in the US:

```
Resolve-DnsName ipv6webapp.trafficmanager.net

Name                          Type  TTL Section NameHost
----                          ----  --- ------- --------
ipv6webapp.trafficmanager.net CNAME 60  Answer  ipv6webappr2-appgw-eastus2.eastus2.cloudapp.azure.com

Name       : ipv6webappr2-appgw-eastus2.eastus2.cloudapp.azure.com
QueryType  : AAAA
TTL        : 10
Section    : Answer
IP6Address : 2603:1030:403:17::5b0
```

Azure Front Door

Azure Front Door is an application delivery network with built-in CDN, SSL offload, WAF, and routing capabilities. It provides a single, unified frontend distributed across Microsoft's edge network. Azure Front Door natively supports IPv6 connectivity. For applications that have users worldwide, Front Door offers advantages:

- Global Anycast Endpoint: Provides anycast IPv4 and IPv6 addresses, advertised out of all edge locations, with automatic A and AAAA DNS record support.
- IPv4 and IPv6 origin support: Azure Front Door supports both IPv4 and IPv6 origins (i.e. backends), both within Azure and externally (i.e. accessible over the internet).
- Simplified DNS: Custom domains can be mapped using CNAME records.
- Layer-7 Routing: Supports path-based routing and automatic backend health detection.
- Edge Security: Includes DDoS protection and optional WAF integration.

Front Door enables "cross-IP version" scenarios: a client can connect to the Front Door frontend over IPv6, and then Front Door can connect to an IPv4 origin. Conversely, an IPv4-only client can retrieve content from an IPv6 backend via Front Door.
Front Door preserves the client's source IP address in the X-Forwarded-For header.

Note: Front Door provides managed IPv6 addresses that are not customer-owned resources. Custom domains should use CNAME records pointing to the Front Door hostname rather than direct IP address references.

Private Link Integration

Azure Front Door Premium introduces Private Link integration, enabling secure, private connectivity between Front Door and backend resources, without exposing them to the public internet. When Private Link is enabled, Azure Front Door establishes a private endpoint within a Microsoft-managed virtual network. This endpoint acts as a secure bridge between Front Door's global edge network and your origin resources, such as Azure App Service, Azure Storage, Application Gateway, or workloads behind an internal load balancer.

Traffic from end users still enters through Front Door's globally distributed POPs, benefiting from features like SSL offload, caching, and WAF protection. However, instead of routing to your origin over public, internet-facing endpoints, Front Door uses the private Microsoft backbone to reach the private endpoint. This ensures that all traffic between Front Door and your origin remains isolated from external networks. The private endpoint connection requires approval from the origin resource owner, adding an extra layer of control. Once approved, the origin can restrict public access entirely, enforcing that all traffic flows through Private Link.

Private Link integration brings the following benefits:

- Enhanced Security: By removing public exposure of backend services, Private Link significantly reduces the risk of DDoS attacks, data exfiltration, and unauthorized access.
- Compliance and Governance: Many regulatory frameworks mandate private connectivity for sensitive workloads. Private Link helps meet these requirements without sacrificing global availability.
- Performance and Reliability: Traffic between Front Door and your origin travels over Microsoft's high-speed backbone network, delivering low latency and consistent performance compared to public internet paths.
- Defense in Depth: Combined with Web Application Firewall (WAF), TLS encryption, and DDoS protection, Private Link strengthens your security posture across multiple layers.
- Isolation and Control: Resource owners maintain control over connection approvals, ensuring that only authorized Front Door profiles can access the origin.
- Integration with Hybrid Architectures: For scenarios involving AKS clusters, custom APIs, or workloads behind internal load balancers, Private Link enables secure connectivity without requiring public IPs or complex VPN setups.

Private Link transforms Azure Front Door from a global entry point into a fully private delivery mechanism for your applications, aligning with modern security principles and enterprise compliance needs.

Example: Our application is now placed behind Azure Front Door. We are combining a public backend endpoint and Private Link integration, to show both in action in a single example. The Sweden Central origin endpoint is the public IPv6 endpoint of the regional External Load Balancer, and the origin in US East 2 is connected via Private Link integration.

The global FQDN is `ipv6webapp-d4f4euhnb8fge4ce.b01.azurefd.net` and clients will use this FQDN to access the application regardless of their geographical location. The FQDN resolves to Front Door's global anycast address, and the internet will route client requests to the nearest Microsoft edge from which this address is advertised. Front Door will then transparently route the request to the nearest origin deployment in Azure. Although public endpoints are used in this example, that traffic will be routed over the Microsoft network.
From a client in Europe: Calling the application's API endpoint on `ipv6webapp-d4f4euhnb8fge4ce.b01.azurefd.net/api/region` shows some more detail.

```json
{
  "region": "SwedenCentral",
  "clientIp": "2001:1c04:3404:9500:fd9b:58f4:1fb2:db21",
  "xForwardedFor": "2001:1c04:3404:9500:fd9b:58f4:1fb2:db21",
  "remoteAddress": "2a01:111:2053:d801:0:afd:ad4:1b28",
  "isPrivateIP": false,
  "expressIp": "2001:1c04:3404:9500:fd9b:58f4:1fb2:db21",
  "connectionInfo": {
    "remoteAddress": "2a01:111:2053:d801:0:afd:ad4:1b28",
    "remoteFamily": "IPv6",
    "localAddress": "2001:db8:1:1::4",
    "localPort": 80
  },
  "allHeaders": {
    "x-forwarded-for": "2001:1c04:3404:9500:fd9b:58f4:1fb2:db21",
    "x-azure-clientip": "2001:1c04:3404:9500:fd9b:58f4:1fb2:db21"
  },
  "deploymentAdvice": "Public IP detected successfully"
}
```

`"remoteAddress": "2a01:111:2053:d801:0:afd:ad4:1b28"` is the address from which Front Door sources its request to the origin.

From a client in the US: The detailed view shows that the IP address calling the backend instance is now a local VNET address. Private Link sources traffic from a local address taken from the VNET it is in. The original client IP address is again preserved in the X-Forwarded-For header.

```json
{
  "region": "eastus2",
  "clientIp": "2603:1030:501:23::68:55658",
  "xForwardedFor": "2603:1030:501:23::68:55658",
  "remoteAddress": "::ffff:10.2.1.5",
  "isPrivateIP": false,
  "expressIp": "2603:1030:501:23::68:55658",
  "connectionInfo": {
    "remoteAddress": "::ffff:10.2.1.5",
    "remoteFamily": "IPv6",
    "localAddress": "::ffff:10.2.2.68",
    "localPort": 80
  },
  "allHeaders": {
    "x-forwarded-for": "2603:1030:501:23::68:55658"
  },
  "deploymentAdvice": "Public IP detected successfully"
}
```

Conclusion

IPv6 adoption for web applications is no longer optional. It is essential as public IPv4 address space is depleted, mobile networks increasingly use IPv6 only, and governments mandate IPv6 reachability for public services.
Azure's comprehensive dual-stack networking capabilities provide a clear path forward, enabling organizations to leverage IPv6 externally without sacrificing IPv4 compatibility or requiring complete infrastructure overhauls. Azure's externally facing services — including Application Gateway, External Load Balancer, Global Load Balancer, and Front Door — support IPv6 frontends, while Application Gateway and Front Door maintain IPv4 backend connectivity. This architecture allows applications to remain unchanged while instantly becoming accessible to IPv6-only clients.

For single-region deployments, Application Gateway offers layer-7 features like SSL termination and WAF protection, while External Load Balancer provides high-performance layer-4 distribution. Multi-region scenarios benefit from Traffic Manager's DNS-based routing combined with regional Application Gateways, or from the superior performance and failover capabilities of Global Load Balancer's anycast addressing. Azure Front Door provides global IPv6 delivery with edge optimization, built-in security, and seamless failover across Microsoft's network. Private Link integration allows secure global IPv6 distribution while maintaining backend isolation.

The transition to IPv6 application delivery on Azure is straightforward: enable dual-stack addressing on virtual networks, configure IPv6 frontends on load balancing services, and update DNS records. With Application Gateway or Front Door, backend applications require no modifications; these Azure services handle the IPv4-to-IPv6 translation seamlessly. This approach ensures both immediate IPv6 accessibility and long-term architectural flexibility as IPv6 adoption accelerates globally.

QUIC based HTTP/3 with Application Gateway: Feature information Private Preview
Azure Application Gateway now supports HTTP/3 over QUIC. As part of the private preview, Application Gateway users can create HTTP/3-enabled listeners which can support either HTTP/1.1 or HTTP/2 along with HTTP/3.

Note: HTTP/3, if enabled on one listener, will be available on that listener only. If some of your clients do not support HTTP/3, there's no need to worry: they will still be able to communicate with HTTP/3-enabled listeners using previous HTTP versions.

Why should HTTP/3 with Application Gateway be used?

HTTP/3 is the latest version of the Hypertext Transfer Protocol, built on top of QUIC, which operates over UDP. It represents a significant leap forward in terms of user experience, efficiency, and security. Here are some compelling reasons why migrating to HTTP/3 could greatly benefit your organization:

Faster Web Page Loading (~200ms advantage): If you run a website or web application, implementing HTTP/3 can lead to faster page load times and improved user experiences. HTTP/3's reduced connection establishment latency and multiplexing capabilities help deliver resources more efficiently. The table below shows latency numbers for the different HTTP versions.

|                       | HTTPS (TCP+TLS) | QUIC 1-RTT | QUIC 0-RTT* |
|-----------------------|-----------------|------------|-------------|
| First time connection | 300ms           | 100ms      | 100ms       |
| Repeat connection     | 200ms           | 50ms       | 0ms         |

*0-RTT comes with its share of security risks and is not part of the private preview

Enhanced Web Application Performance: Applications that make use of multiple resources, like images, scripts, and stylesheets, can benefit from HTTP/3's multiplexing and concurrent stream support.

Mobile Applications: If you develop mobile apps, integrating HTTP/3 can enhance data transfer speed and responsiveness, which is especially important on mobile networks where latency can be higher.

Reducing HOL Blocking: HTTP/3's use of QUIC helps mitigate head-of-line blocking, where the delay of one resource can block the delivery of others. This is especially advantageous for applications that require efficient resource loading.
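To relate the latency table above to page loads, here is a small back-of-the-envelope calculator. The figures are taken directly from the table; the function and its parameters are illustrative, not measured data or an Azure API:

```python
# Connection-setup latency from the table above (ms).
SETUP_MS = {
    "https":     {"first": 300, "repeat": 200},   # TCP + TLS
    "quic_1rtt": {"first": 100, "repeat": 50},
    "quic_0rtt": {"first": 100, "repeat": 0},     # not in the private preview
}

def setup_saving_ms(connections, repeat_ratio, old="https", new="quic_1rtt"):
    """Connection-setup time saved per page load, given how many
    connections a page opens and what fraction of them are repeats."""
    def cost(proto):
        t = SETUP_MS[proto]
        return connections * ((1 - repeat_ratio) * t["first"]
                              + repeat_ratio * t["repeat"])
    return cost(old) - cost(new)
```

A single fresh connection saves 200ms, matching the "~200ms advantage" above; a page opening several connections saves proportionally more.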
Security: HTTP/3's integration with QUIC provides improved security features by design, reducing the risk of certain types of attacks compared to previous versions of HTTP.

Presently, 26.5% of internet traffic is on HTTP/3, and there has been a steady increase in adoption compared to HTTP/2, which has seen a decreasing trend (by ~10% in the last 12 months) owing to some of its demerits (explained in the sections below).

How should HTTP/3 with Application Gateway be enabled?

Prerequisite: You have an existing Application Gateway resource on the Standard_v2 SKU only. Please reach out to us at quicforappgw@microsoft.com with the Resource URI on which you want the HTTP/3 feature enabled and we'll take you through the next steps.

Which HTTP/3 features are supported in private preview?

- HTTP/3 will be supported only on the front leg of the connection; backends will continue to use HTTP/1.1.
- Application Gateway will support client-initiated connection migration (explained below).
- Application Gateway will support PMTU discovery.
- Application Gateway can advertise support for HTTP/3 via the alt-svc header as part of an HTTP/1.1 or HTTP/2 response. (The image below explains the flow.)

What is HTTP/3 & QUIC?

TCP (Transmission Control Protocol) (RFC 793) has been the most widely used transport layer protocol since its inception. But with the advent of more real-time applications, the evolution of the edge, and an ever increasing need to reduce latency and congestion, using TCP is becoming untenable. UDP (User Datagram Protocol) (RFC 768) was always seen as an alternative to TCP, especially in instances where connectionless, less-reliable transmission was okey-dokey! But UDP struggled when it came to implementing congestion control. TLS (Transport Layer Security) (RFC 8446) adds another layer over TCP after the 3-way handshake, for TLS negotiation to establish the session key and encrypt session data.
Though the combination provides reliability and security, the increased connection establishment time has made application developers smirk rather than smile.

QUIC (Quick UDP Internet Connections) (RFC 9000) attempts to bridge these UDP gaps by bringing in the TCP niceties, and attempts to reduce TCP ossification in the network. Put briefly, QUIC is TCP encapsulated and encrypted in a UDP payload. To the external network, it appears as a bidirectional, concealed UDP packet sequence. To the endpoints, it provides an advantage over TCP by deliberately concealing the transport parameters from the network and by shifting the responsibility for flow control and the encryption service from the transport layer to the application layer.

Pre-HTTP/3 protocols: HTTP/1.1 and HTTP/2 run over TCP. HTTP/1.x versions have slow response times and never satisfy webpages hungry for faster load times. HTTP/1.1, being a textual protocol, does a below-average job of resource prioritization, transmitting request and response headers as plain text. Without multiplexing capabilities, network requests are served in an ordered and blocking manner. With this approach, HTTP/1.1 suffers from HTTP head-of-line (HOL) blocking, where the client waits for previous requests to be serviced before sending another, resulting in subsequent requests being blocked on a single TCP connection. Imagine a webpage needing multiple resources (images, CSS, HTML files, JS files etc.) to load the complete page!

To overcome all these HTTP/1.1 limitations, HTTP/2 was brought in. It introduced header field compression and a binary framing layer, creating streams for communication and reducing the amount of data in the headers; concurrent exchanges on the same connection by interleaving request and response messages; and efficient coding of HTTP header fields. Prioritization of requests allowed more important requests to complete quicker, improving performance.
HTTP/2 protocol communication involves binary-encoded frames that carry data mapped to messages (request/response) in a stream, which contains identifiers and priority information, multiplexed over a single TCP connection. Figure 1 shows the flow of protocol communication in HTTP/2. All these enhancements mean fewer TCP connections, longer-lived connections, and less competition with other flows, leading to better network utilization.

By allowing multiple HTTP requests over a single TCP connection, HTTP/2 resolved the HTTP HOL blocking issue but created a TCP HOL blocking issue. A network blip - network congestion, unavailability of the network, or a cell change in a mobile network - can lead to the loss of a packet, throwing a TCP connection into a tizzy, as TCP ensures that the order of packets transmitted and received is the same. A loss of one packet means everything stops until the lost packet is retransmitted. With multiple requests multiplexed onto a single TCP connection, all the requests are blocked, although the "lost packet" really impacts only one request. With an increasing number of mobile-friendly apps, increasing usage of cellular networks, and, in countries with less reliable networks, high chances of network blips, such an issue can cause interruption to services.

Enter QUIC-based HTTP/3: HTTP/3 is based on QUIC. It is designed to be faster than TCP, with lower latency, less overhead during connection establishment, and quicker data transfer over the established connection. QUIC is based on UDP and offers 0-RTT and 1-RTT handshakes compared to the 3-way handshake of TCP. This is possible as it supports additional streams. HTTP/3 retains all the niceties of HTTP/2, like the server push mechanism, multiplexing of requests over a single connection via streams, and resource prioritization. It ensures the issue of TCP HOL blocking is resolved: "lost packets" along the way will not interrupt the data transfer.
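The difference between the two failure modes can be shown with a toy model (purely illustrative; real retransmission behaviour is far more involved):

```python
def finish_times(stream_ms, lost_stream, retransmit_ms, single_tcp_connection):
    """Toy model of head-of-line blocking.

    stream_ms: nominal delivery time for each parallel stream; one
    stream (lost_stream) loses a packet that takes retransmit_ms to
    recover. Over one TCP connection, in-order delivery stalls every
    stream until the retransmission arrives; with QUIC's independent
    streams, only the affected stream pays the penalty.
    """
    return [
        t + (retransmit_ms if (single_tcp_connection or i == lost_stream) else 0)
        for i, t in enumerate(stream_ms)
    ]
```

Three multiplexed streams with one lost packet all finish late over a single TCP connection (`[110, 110, 110]`), while with QUIC-style independent streams only the affected one does (`[110, 10, 10]`).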
QUIC sees to it that transferring other data is uninterrupted while the issue of the "lost packet" is being resolved.

QUIC-based HTTP/3 features and use cases:

Faster connection establishment

The regular 3-way handshake gives way to QUIC's 1-RTT and 0-RTT handshakes, which can cut connection establishment time by 66%-95%. The 1-RTT and 0-RTT connection establishment helps immensely in improving page load times for web browsing. Instant messaging applications, voice assistants, and transactional systems (financial transactions, online purchases) benefit from quick connection establishment. In these scenarios, 1-RTT connection establishment can make a noticeable difference in reducing initial delays and enhancing overall user satisfaction. Financial institutions will find a wide range of benefits due to the low latency - in their mobile apps, online banking portals, real-time customer notifications, effective API integration and many similar use cases.

Independent HTTP streams (no TCP HOL blocking)

TCP HOL blocking occurs when a single delayed or lost packet holds up the delivery of subsequent packets, impacting overall communication efficiency. Avoiding TCP HOL blocking offers significant advantages in real-life scenarios where minimizing latency, improving responsiveness, and optimizing data transmission are crucial. Removing unnecessary bottlenecks and making communication smoother results in happy customers. Web browsing without HOL blocking helps fetch the multiple resources in a page faster, leading to quicker page load times and a richer browsing experience. Without HOL blocking, messages in an instant messaging application are delivered promptly without being held up, providing the end user a fluid experience.
IoT devices that transmit sensor data and updates will be able to deliver all the necessary data without being delayed by a single lost or slow packet, ensuring timely and accurate reporting. Avoiding HOL blocking in financial transactions ensures that transaction data is transmitted without unnecessary delays, contributing to real-time processing and confirmations, without which CSAT is impacted vastly.

Connection Migration

Customers are always on the move. Especially with ever-improving cellular networks, they are seldom stuck to a single network or a cell in the network. Being constantly on the move means constant registration with the network, establishing connections frequently, and pulling data from different servers. In the traditional HTTP-over-TCP method, this would lead to several drops in connectivity. That is a thing of the past with QUIC and HTTP/3: the QUIC-HTTP/3 combination provides users with a connection migration feature. During QUIC connection establishment, the server provides the client with a set of Connection IDs (CIDs) as part of the QUIC header. Using a CID, the client can retain an existing connection despite moving networks and attaining new IP addresses.

With the help of connection migration, uninterrupted web browsing is possible for users. IoT devices that need to maintain continuous communication will find connection migration extremely useful. Users moving from private to public WiFi networks at malls, airports and other public places will be provided with a seamless app experience.

How to sign up? https://forms.office.com/r/iGeYgrmydA

Can only remote into azure vm from DC
Hi all, I have set up a site-to-site connection from on-prem to Azure, and I can remote in via the main DC on-prem, but not from any other server, nor can I ping the Azure VM from any other server. Why can I only remote into the Azure VM from the server that has Routing and Remote Access? Any ideas on how I can fix this?

Disabling TCP Timestamps on application gateways
Hello, We use Application Gateways for a number of apps. Our third-party vulnerability scanner found that the AGW exposes the uptime of the system. Is there a way to disable this on the AGW? I found a UserVoice post from 2017 where someone asked for the same option: https://feedback.azure.com/forums/217313-networking/suggestions/32683267-need-a-function-to-disable-the-timestamp-in-tcp-op. If it's not possible, it's not possible. I haven't found documentation on it, so my guess is there's currently no way to disable it. I get that this is low risk; I just need to do a little more digging before I write this one off as a known issue / accepted risk. Thank you
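For context on why scanners flag this: the TCP timestamp option (RFC 7323 TSval) is a counter that increments at a fixed, OS-dependent rate, commonly 100 to 1000 Hz. A scanner that samples two timestamps a known wall-clock interval apart can derive the tick rate and extrapolate back to when the counter started, i.e. roughly when the host booted. The sample values below are hypothetical, just to show the arithmetic:

```python
# Sketch of how a scanner estimates host uptime from TCP timestamps (RFC 7323
# TSval). TSval is a counter incremented at a fixed, OS-dependent rate; two
# samples taken a known wall-clock interval apart reveal that rate, and the
# absolute TSval then extrapolates back to the counter's start (boot time).
def estimate_uptime_seconds(tsval1, t1, tsval2, t2):
    """tsvalN: TSval observed at wall-clock time tN (seconds)."""
    hz = (tsval2 - tsval1) / (t2 - t1)   # ticks per second, e.g. ~100 or ~1000
    return tsval2 / hz                    # seconds since the counter started

# Hypothetical samples: a 1000 Hz host probed twice, 10 s apart,
# with TSval ~86.4M at the second probe => roughly a day of uptime.
uptime = estimate_uptime_seconds(86_390_000, 0.0, 86_400_000, 10.0)
print(round(uptime / 3600, 1), "hours")
```

Since the gateway is a managed service, the TCP stack options aren't customer-tunable, which is consistent with the question's conclusion that this typically gets documented as an accepted low risk.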
Introduction

A large enterprise customer set out to build a generative AI application using Azure OpenAI. While the app would be hosted on-premises, the customer wanted to leverage the latest large language models (LLMs) available through Azure OpenAI. However, they faced a critical challenge: how to securely access Azure OpenAI from an on-prem environment without private network connectivity or a full Azure landing zone. This blog post walks through how the customer overcame these limitations using Application Gateway as a reverse proxy in front of Azure OpenAI, along with other Azure services, to meet their security and governance requirements.

Customer landscape and challenges

The customer's environment lacked:

- Private network connectivity (no Site-to-Site VPN or ExpressRoute), because they were using a new Azure Government environment and did not yet have a cloud operations team
- A common network topology such as Virtual WAN or a hub-spoke network design
- A full Enterprise Scale Landing Zone (ESLZ) of common infrastructure
- Security components such as private DNS zones, DNS resolvers, API Management, and firewalls

This meant they couldn't use private endpoints or the other standard security controls typically available in mature Azure environments. Security was non-negotiable: public access to Azure OpenAI was unacceptable.
The customer needed to:

- Restrict access to specific IP CIDR ranges from on-prem user machines and data centers
- Limit the ports used to communicate with Azure OpenAI
- Implement a reverse proxy with SSL termination and Web Application Firewall (WAF)
- Use a customer-provided SSL certificate to secure traffic

Proposed solution

To address these challenges, the customer designed a secure architecture using the following Azure components.

Key Azure services

- Application Gateway – Layer 7 reverse proxy, SSL termination, and Web Application Firewall (WAF)
- Public IP – Allows communication over the public internet between the customer's IP addresses and Azure IP addresses
- Virtual Network – Allows control of network traffic in Azure
- Network Security Group (NSG) – Layer 4 network controls such as port numbers and service tags, using five-tuple information (source, source port, destination, destination port, protocol)
- Azure OpenAI – Large language model (LLM) service

NSG configuration

- Inbound rules: Allow traffic only from specific IP CIDR ranges and HTTP(S) ports
- Outbound rules: Target AzureCloud.<region> with HTTP(S) ports (there is no service tag for Azure OpenAI yet)

Application Gateway setup

- SSL certificate: Issued by the customer's on-prem Certificate Authority
- HTTPS listener: Uses the on-prem certificate to terminate SSL
- Traffic flow: Decrypt incoming traffic, scan it with WAF, re-encrypt using a well-known Azure CA, and override the backend hostname
- Custom health probe: Configured to accept a 404 response from Azure OpenAI (since no health check endpoint exists)

Azure OpenAI configuration

- IP firewall restrictions: Only allow traffic from the Application Gateway subnet

Outcome

By combining Application Gateway, NSGs, and custom SSL configurations, the customer successfully secured their Azure OpenAI deployment, without needing a full ESLZ or private connectivity.
This approach enabled them to move forward with their generative AI app while maintaining enterprise-grade security and governance.
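The NSG piece of the design above can be modeled in a few lines: allow inbound HTTPS only when the source IP falls inside an approved CIDR range. The ranges and port below are illustrative placeholders, not the customer's actual values.

```python
# Illustrative model of the NSG check described above: allow inbound HTTPS
# only when the source IP falls inside an approved CIDR range. The ranges and
# port below are hypothetical placeholders, not the customer's real values.
import ipaddress

ALLOWED_SOURCE_CIDRS = [ipaddress.ip_network("203.0.113.0/24"),   # on-prem users
                        ipaddress.ip_network("198.51.100.0/24")]  # data center
ALLOWED_DEST_PORTS = {443}

def nsg_allows(src_ip, src_port, dst_ip, dst_port, protocol):
    """Evaluate a five-tuple against the inbound allow rules."""
    if protocol != "TCP" or dst_port not in ALLOWED_DEST_PORTS:
        return False
    src = ipaddress.ip_address(src_ip)
    return any(src in cidr for cidr in ALLOWED_SOURCE_CIDRS)

print(nsg_allows("203.0.113.25", 50000, "10.0.1.4", 443, "TCP"))  # True
print(nsg_allows("192.0.2.9", 50000, "10.0.1.4", 443, "TCP"))     # False
print(nsg_allows("203.0.113.25", 50000, "10.0.1.4", 80, "TCP"))   # False
```

The real NSG evaluates prioritized rules in order, but the matching logic per rule is exactly this five-tuple check.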
Introduction

In today's cloud-native landscape, organizations are accelerating the deployment of web applications at unprecedented speed. But with rapid scale comes increased complexity, and a growing need for deep, actionable visibility into the underlying infrastructure. As businesses embrace modern architectures, the demand for scalable, secure, and observable web applications continues to rise. Azure Application Gateway is evolving to meet these needs, offering enhanced logging capabilities that empower teams to gain richer insights, optimize costs, and simplify operations.

This article highlights three powerful enhancements that are transforming how teams use logging in Azure Application Gateway:

- Resource-specific tables
- Data collection rule (DCR) transformations
- Basic log plan

Resource-specific tables improve organization and query performance. DCR transformations give teams fine-grained control over the structure and content of their log data. And the Basic log plan makes comprehensive logging more accessible and cost-effective. Together, these capabilities deliver a smarter, more structured, and cost-aware approach to observability.

Resource-specific tables: Structured and efficient logging

Azure Monitor stores logs in a Log Analytics workspace powered by Azure Data Explorer. Previously, when you configured Log Analytics, all diagnostic data for Application Gateway was stored in a single, generic table called AzureDiagnostics. This approach often led to slower queries and increased complexity, especially when working with large datasets.
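The cost of the generic-table approach comes down to this: per-category fields live in a serialized properties blob that every query must filter and parse, whereas a dedicated table exposes them as typed columns. A rough Python analogy, using made-up rows rather than real log schemas:

```python
# Rough analogy for generic vs. dedicated log tables (illustrative data, not
# real schemas). The generic table stores per-category fields as a serialized
# blob that every query must parse; a resource-specific table exposes them
# as typed columns that can be filtered directly.
import json

generic_table = [  # one schema shared by many resource types
    {"ResourceType": "APPLICATIONGATEWAYS",
     "Category": "ApplicationGatewayAccessLog",
     "properties_s": json.dumps({"clientIP": "203.0.113.1", "httpStatus": 200})},
    {"ResourceType": "LOADBALANCERS", "Category": "Other", "properties_s": "{}"},
]

dedicated_table = [  # dedicated access-log table with typed columns
    {"ClientIP": "203.0.113.1", "HttpStatus": 200},
]

# Generic table: filter on type/category, then parse the blob on every row.
hits_generic = [r for r in generic_table
                if r["ResourceType"] == "APPLICATIONGATEWAYS"
                and r["Category"] == "ApplicationGatewayAccessLog"
                and json.loads(r["properties_s"]).get("clientIP") == "203.0.113.1"]

# Dedicated table: a single filter on a typed column.
hits_dedicated = [r for r in dedicated_table if r["ClientIP"] == "203.0.113.1"]

assert len(hits_generic) == len(hits_dedicated) == 1
```

The KQL comparison in the next section is the real-world version of these two query shapes.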
With resource-specific logging, Application Gateway logs are now organized into dedicated tables, each optimized for a specific log type:

- AGWAccessLogs – Contains access log information
- AGWPerformanceLogs – Contains performance metrics and data
- AGWFirewallLogs – Contains Web Application Firewall (WAF) log data

This structured approach delivers several key benefits:

- Simplified queries – Reduces the need for complex filtering and data manipulation
- Improved schema discovery – Makes it easier to understand log structure and fields
- Enhanced performance – Speeds up both ingestion and query execution
- Granular access control – Allows you to grant Azure role-based access control (RBAC) permissions on specific tables

Example: AzureDiagnostics vs. resource-specific table approach

Traditional AzureDiagnostics query:

    AzureDiagnostics
    | where ResourceType == "APPLICATIONGATEWAYS" and Category == "ApplicationGatewayAccessLog"
    | extend clientIp_s = todynamic(properties_s).clientIP
    | where clientIp_s == "203.0.113.1"

New resource-specific table query:

    AGWAccessLogs
    | where ClientIP == "203.0.113.1"

The resource-specific approach is cleaner, faster, and easier to maintain because it eliminates complex filtering and data manipulation.

Data collection rule (DCR) transformations: Take control of your log pipeline

DCR transformations offer a flexible way to shape log data before it reaches your Log Analytics workspace. Instead of ingesting raw logs and filtering them post-ingestion, you can now filter, enrich, and transform logs at the source, giving you greater control and efficiency.
Why DCR transformations matter:

- Optimize costs: Reduce ingestion volume by excluding non-essential data
- Support compliance: Strip out personally identifiable information (PII) before logs are stored, helping meet GDPR and CCPA requirements
- Manage volume: Ideal for high-throughput environments where only actionable data is needed

Real-world use cases

Whether you're handling high-traffic e-commerce workloads, processing sensitive healthcare data, or managing development environments with cost constraints, DCR transformations help tailor your logging strategy to meet specific business and regulatory needs. For implementation guidance and best practices, refer to Transformations in Azure Monitor.

Basic log plan: Cost-effective logging for low-priority data

Not all logs require real-time analysis. Some are used only for occasional debugging or compliance audits. The Basic log plan in Log Analytics provides a cost-effective way to retain high-volume, low-priority diagnostic data, without paying for premium features you may not need.

When to use the Basic log plan

- Save on costs: Pay-as-you-go pricing with lower ingestion rates
- Debugging and forensics: Retain data for troubleshooting and incident analysis without paying premium costs for features you don't use regularly

Understanding the trade-offs

While the Basic plan offers significant savings, it comes with limitations:

- No real-time alerts: Not suitable for monitoring critical health metrics
- Query constraints: Limited KQL functionality and additional query costs

Choose the Basic plan when deep analytics and alerting aren't required, and focus premium resources on critical logs.
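The cost and PII benefits of transforming at the source, described above, can be simulated in plain Python. A real DCR transformation is a KQL statement inside the data collection rule; this sketch only mimics its effect on two hypothetical access-log records:

```python
# Illustrative simulation of a DCR ingestion-time transform (the real thing
# is a KQL statement in the data collection rule): drop low-value rows and
# mask the client IP before the record ever reaches the workspace.
def transform(record):
    """Return the record to ingest, or None to drop it at the source."""
    if record.get("HttpStatus") == 200 and record.get("TimeTaken", 0) < 1.0:
        return None  # exclude routine, fast 200s to cut ingestion volume
    masked = dict(record)
    ip = masked.get("ClientIP", "")
    masked["ClientIP"] = ".".join(ip.split(".")[:3] + ["0"])  # mask last octet
    return masked

raw = [
    {"ClientIP": "203.0.113.25", "HttpStatus": 200, "TimeTaken": 0.05},
    {"ClientIP": "198.51.100.7", "HttpStatus": 502, "TimeTaken": 0.30},
]
ingested = [r for r in (transform(rec) for rec in raw) if r is not None]
print(ingested)  # only the 502 survives, with its client IP masked
```

Because the dropped row never reaches the workspace, it is never billed for ingestion, which is what distinguishes this from post-ingestion filtering.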
Building a smart logging strategy with Azure Application Gateway

To get the most out of Azure Application Gateway logging, combine the strengths of all three capabilities:

- Assess your needs: Identify which logs require real-time monitoring versus those used for compliance or debugging
- Design for efficiency: Use the Basic log plan for low-priority data, and reserve standard tiers for critical logs
- Transform at the source: Apply DCR transformations to reduce costs and meet compliance requirements before ingestion
- Query with precision: Use resource-specific tables to simplify queries and improve performance

This integrated approach helps teams achieve deep visibility, maintain compliance, and manage costs.
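The "design for efficiency" step above reduces to simple arithmetic once you know your volumes. The per-GB rates below are hypothetical placeholders, not Azure list prices; substitute the current rates for your region and tier when doing this for real:

```python
# Back-of-the-envelope sketch of the log-tiering decision. The per-GB rates
# are hypothetical placeholders, not Azure list prices; plug in the current
# rates for your region when doing this for real.
ANALYTICS_RATE = 2.76  # $/GB ingested into the standard tier (hypothetical)
BASIC_RATE = 0.60      # $/GB ingested into the Basic plan (hypothetical)

def monthly_cost(total_gb, basic_fraction):
    """Cost when basic_fraction of the volume goes to the Basic plan."""
    basic_gb = total_gb * basic_fraction
    return basic_gb * BASIC_RATE + (total_gb - basic_gb) * ANALYTICS_RATE

all_analytics = monthly_cost(500, 0.0)   # everything in the standard tier
tiered = monthly_cost(500, 0.8)          # 80% routed to the Basic plan
print(f"${all_analytics:.0f} vs ${tiered:.0f} per month")
```

Running the numbers like this per log category (access vs. performance vs. WAF) makes it concrete which tables are worth keeping on the standard tier for alerting.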
Overview

Over the past decade, Azure Networking has expanded rapidly, bringing incredible tools and capabilities to help customers build, connect, and secure their cloud infrastructure. But we've also heard strong feedback: with over 40 different products, it hasn't always been easy to navigate and find the right solution. The complexity often led to confusion, slower onboarding, and missed capabilities. That's why we're excited to introduce a more focused, streamlined, and intuitive experience across Azure.com, the Azure portal, and our documentation, pivoting around four core networking scenarios:

- Network foundations: Network foundations provide the core connectivity for your resources, using Virtual Network, Private Link, and DNS to build the foundation for your Azure network. Try it with this link: Network foundations
- Hybrid connectivity: Hybrid connectivity securely connects on-premises, private, and public cloud environments, enabling seamless integration, global availability, and end-to-end visibility, presenting major opportunities as organizations advance their cloud transformation. Try it with this link: Hybrid connectivity
- Load balancing and content delivery: Load balancing and content delivery helps you choose the right option to ensure your applications are fast, reliable, and tailored to your business needs. Try it with this link: Load balancing and content delivery
- Network security: Securing your environment is just as essential as building and connecting it. The Network Security hub brings together Azure Firewall, DDoS Protection, and Web Application Firewall (WAF) to provide a centralized, unified approach to cloud protection. With unified controls, it helps you manage security more efficiently and strengthen your security posture. Try it with this link: Network security

This new structure makes it easier to discover the right networking services and get started with just a few clicks, so you can focus more on building and less on searching.
What you'll notice:

- Clearer starting points: Azure Networking is now organized around four core scenarios and twelve essential services, reflecting the most common customer needs. Additional services are presented within the context of these scenarios, helping you stay focused and find the right solution without feeling overwhelmed.
- Simplified choices: We've merged overlapping or closely related services to reduce redundancy. That means fewer, more meaningful options that are easier to evaluate and act on.
- Sunsetting outdated services: To reduce clutter and improve clarity, we're sunsetting underused offerings such as white-label CDN services and China CDN. These capabilities have been rolled into newer, more robust services, so you can focus on what's current and supported.

What this means for you

- Faster decision-making: With clearer guidance and fewer overlapping products, it's easier to discover what you need and move forward confidently.
- More productive sales conversations: With this simplified approach, you'll get more focused recommendations and less confusion among sellers.
- Better product experience: This update makes the Azure Networking portfolio more cohesive and consistent, helping you get started quickly, stay aligned with best practices, and unlock more value from day one.

The portfolio consolidation initiative is a strategic effort to simplify and enhance the Azure Networking portfolio, ensuring better alignment with customer needs and industry best practices. By focusing on top-line services, combining related products, and retiring outdated offerings, Azure Networking aims to provide a more cohesive and efficient product experience.

Azure.com

Before: Our original Solution page on Azure.com was disorganized and static, displaying a small portion of services in no discernible order.
After: The revised solution page is now dynamic, allowing customers to click deeper into each networking and network security category and displaying the top-line services, simplifying the customer experience.

Azure Portal

Before: With over 40 networking services available, we know it can feel overwhelming to figure out what's right for you and where to get started.

After: To make it easier, we've introduced four streamlined networking hubs, each built around a specific scenario, to help you quickly identify the services that match your needs. Each offers an overview to set the stage, key services to help you get started, guidance to support decision-making, and a streamlined left-hand navigation for easy access to all services and features.

Documentation

For documentation, we reviewed our current assets and created new ones aligned with the changes in the portal experience. As with Azure.com, we found the old experiences were disorganized and poorly aligned. We updated our assets to focus on our top-line networking services and to call out the pillars. We believe these changes will allow our customers to more easily find the relevant and important information they need for their Azure infrastructure.

Azure Network Hub

Before the updates, we had a hub page organized around different categories and not well laid out. In the updated hub page, we provide relevant links to top-line services within all of the Azure networking scenarios, as well as a section linking to each scenario's hub page.

Scenario Hub pages

We added scenario hub pages for each of the scenarios. These provide our customers with a central hub for information about the top-line services for each scenario and how to get started. We also included common scenarios and use cases for each scenario, along with references for deeper learning across the Azure Architecture Center, Well-Architected Framework, and Cloud Adoption Framework libraries.
Scenario Overview articles

We created new overview articles for each scenario. These articles are designed to give customers an introduction to the services included in each scenario, guidance on choosing the right solutions, and an introduction to the new portal experience. Here's the Load balancing and content delivery overview:

Documentation links

Azure Networking hub page:
- Azure networking documentation | Microsoft Learn

Scenario Hub pages:
- Azure load balancing and content delivery | Microsoft Learn
- Azure network foundation documentation | Microsoft Learn
- Azure hybrid connectivity documentation | Microsoft Learn
- Azure network security documentation | Microsoft Learn

Scenario Overview pages:
- What is load balancing and content delivery? | Microsoft Learn
- Azure Network Foundation Services Overview | Microsoft Learn
- What is hybrid connectivity? | Microsoft Learn
- What is Azure network security? | Microsoft Learn

Improving the user experience is a journey, and in the coming months we plan to do more. Watch for more blogs over the next few months on further improvements.