azure networking

Inter-Hub Connectivity Using Azure Route Server
By Mays_Algebary and shruthi_nair

As your Azure footprint grows with a hub-and-spoke topology, managing User-Defined Routes (UDRs) for inter-hub connectivity can quickly become complex and error-prone. In this article, we’ll explore how Azure Route Server (ARS) can help streamline inter-hub routing by dynamically learning and advertising routes between hubs, reducing manual overhead and improving scalability.

Baseline Architecture

The baseline architecture includes two Hub VNets, each peered with their respective local spoke VNets as well as with the other Hub VNet for inter-hub connectivity. Both hubs are connected to local and remote ExpressRoute circuits in a bowtie configuration to ensure high availability and redundancy, with Weight used to prefer the local ExpressRoute circuit over the remote one. To maintain predictable routing behavior, the VNet-to-VNet configuration on the ExpressRoute Gateway should be disabled.

Note: Adding ARS to an existing hub with a Virtual Network Gateway will cause downtime that is expected to last about 10 minutes.

Scenario 1: ARS and NVA Coexist in the Hub

Option A: Full Traffic Inspection

In this scenario, ARS is deployed in each Hub VNet, alongside the Network Virtual Appliances (NVAs). NVA1 in Region1 establishes BGP peering with both the local ARS (ARS1) and the remote ARS (ARS2). Similarly, NVA2 in Region2 peers with both ARS2 (local) and ARS1 (remote). Let’s break down what each BGP peering relationship accomplishes. For clarity, we’ll focus on Region1, though the same logic applies to Region2:

NVA1 Peering with Local ARS1
Through BGP peering with ARS1, NVA1 dynamically learns the prefixes of Spoke1 and Spoke2 at the OS level, eliminating the need to manually configure these routes. The same applies to NVA2, which learns the Spoke3 and Spoke4 prefixes via its BGP peering with ARS2.

NVA1 Peering with Remote ARS2
When NVA1 peers with ARS2, the Spoke1 and Spoke2 prefixes are propagated to ARS2. ARS2 then injects these prefixes into NVA2 at both the NIC level, with NVA1 as the next hop, and at the OS level. This mechanism removes the need for UDRs on the NVA subnets to enable inter-hub routing. Additionally, ARS2 advertises the Spoke1 and Spoke2 prefixes to both ExpressRoute circuits (EXR2 and EXR1, due to the bowtie configuration) via GW2, making them reachable from on-premises through either EXR1 or EXR2.

👉Important: To ensure that ARS2 accepts and propagates Spoke1/Spoke2 prefixes received via NVA1, AS-Override must be enabled. Without AS-Override, BGP loop prevention will block these routes at ARS2: since both ARS1 and ARS2 use the default ASN 65515, ARS2 will consider the route as already originated locally. The same principle applies in reverse for Spoke3 and Spoke4 prefixes being advertised from NVA2 to ARS1.

Traffic Flow

Inter-Hub Traffic: Spoke VNets are configured with UDRs that contain only a default route (0.0.0.0/0) pointing to the local NVA as the next hop. Additionally, the “Propagate Gateway Routes” setting should be set to False to ensure all traffic, whether East-West (intra-hub/inter-hub) or North-South (to/from internet), is forced through the local NVA for inspection. Each local NVA will have next hops to the other region’s spokes injected at the NIC level by the local ARS, pointing to the other region’s NVA; for example, NVA2 will have NVA1 (10.0.1.4) as the next hop to Spoke1 and Spoke2, and vice versa.

Why are UDRs still needed on spokes if ARS handles dynamic routing?
Even with ARS in place, UDRs are required to maintain control of the next hop for traffic inspection. For instance, if Spoke1 and Spoke2 do not have UDRs, they will learn the remote spoke prefixes (e.g., Spoke3/Spoke4) injected via ARS1, which received them from NVA2. This results in Spoke1/Spoke2 attempting to route traffic directly to NVA2, an invalid path, since the spokes have no route to NVA2. The UDR ensures traffic correctly routes through NVA1 instead.

On-Premises Traffic: To explain the on-premises traffic flow, we'll break it down into two directions: Azure to on-premises, and on-premises to Azure.

Azure to On-Premises Traffic Flow: As previously noted, spokes send all traffic, including traffic to on-premises, via NVA1 due to the default route in the UDR. NVA1 then routes traffic to the local ExpressRoute circuit, using Weight to prefer the local path over the remote.

Note: While NVA1 learns on-premises prefixes from both local and remote ARSs at the OS level, this doesn’t affect routing decisions. The actual NIC-level route injection determines the next hop, ensuring traffic is sent via the correct path—even if the OS selects a different “best” route internally. The screenshot below from NVA1 shows four next hops to the on-premises network 10.2.0.0/16. These include the local ARS (ARS1: 10.0.2.5 and 10.0.2.4) and the remote ARS (ARS2: 10.1.2.5 and 10.1.2.4).

On-Premises to Azure Traffic Flow

In a bowtie ExpressRoute configuration, Azure VNet prefixes are advertised to on-premises through both local and remote ExpressRoute circuits. Because of this dual advertisement, the on-premises network must ensure optimal path selection when routing traffic to Azure. From the Azure side, to maintain traffic symmetry, add UDRs at the GatewaySubnet (GW1 and GW2) with specific routes to the local Spoke VNets, using the local NVA as the next hop. This ensures return traffic flows back through the same path it entered.

👉How Does the ExpressRoute Edge Router Select the Optimal Path?

You might ask: If Spoke prefixes are advertised by both GW1 and GW2, how does the ExpressRoute edge router choose the best path? (e.g., the diagram below shows EXR1 learning Region1 prefixes from GW1 and GW2.)

Here’s how: Edge routers (like EXR1) receive the same Spoke prefixes from both gateways. However, these routes have different AS-Path lengths:
- Routes from the local gateway (GW1) have a shorter AS-Path.
- Routes from the remote gateway (GW2) have a longer AS-Path because NVA1’s ASN (e.g., 65001) is prepended twice as part of the AS-Override mechanism.

As a result, the edge router (EXR1) will prefer the local path from GW1, ensuring efficient and predictable routing. For example: EXR1 receives Spoke1, Spoke2, and Hub1-VNet prefixes from both GW1 and GW2. But because the path via GW1 has a shorter AS-Path, EXR1 will select that as the best route. (Refer to the diagram below for a visual of the AS-Path difference; a short KQL sketch of this selection logic also appears in the insights below.)

Final Traffic Flow:

Option-A Insights: This design simplifies UDR configuration for inter-hub routing, especially useful when dealing with non-contiguous prefixes or operating across multiple hubs. For simplicity, we used a single NVA in each Hub-VNet while explaining the setup and traffic flow throughout this article. However, a highly available (HA) NVA deployment is recommended. To maintain traffic symmetry in an HA setup, you’ll need to enable the next-hop IP feature when peering with Azure Route Server (ARS).
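To make the AS-Path comparison above concrete, here is a minimal, illustrative sketch in KQL (the language used for the log queries later in this collection). The prefix, peer names, and AS-Paths below are hypothetical examples, not values read from a real environment; the query simply models how an edge router prefers the route with the shortest AS-Path, with the NVA's ASN appearing twice on the remote path because of AS-Override.

// Hypothetical routes EXR1 learns for the same spoke prefix from GW1 and GW2.
// 65001 appears twice on the GW2 path because of AS-Override at the remote hub.
datatable(Prefix:string, LearnedFrom:string, ASPath:dynamic)
[
    "192.168.0.0/24", "GW1", dynamic([65515]),
    "192.168.0.0/24", "GW2", dynamic([65515, 65001, 65001])
]
| extend ASPathLength = array_length(ASPath)
// Simplified BGP tie-breaking: the shortest AS-Path wins.
| summarize arg_min(ASPathLength, LearnedFrom, ASPath) by Prefix

Running this returns the GW1 route, mirroring how EXR1 selects the local gateway path in the scenario above.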
When on-premises traffic inspection is required, the UDR setup in the GatewaySubnet becomes more complex as the number of Spokes increases. Additionally, each route table is currently limited to 600 UDR entries. As your Azure network scales, keep in mind that Azure Route Server supports a maximum of 8 BGP peers per instance (as of the time of writing this article). This limit can impact architectures involving multiple NVAs or hubs.

Option B: Bypass On-Premises Inspection

If on-premises traffic inspection is not required, NVAs can advertise a supernet prefix summarizing the local Spoke VNets to the remote ARS. This approach provides granular control over which traffic is routed through the NVA and eliminates the need for BGP peering between the local NVA and local ARS. All other aspects of the architecture remain the same as described in Option A.

For example, NVA2 can advertise the supernet 192.168.2.0/23 (the supernet of Spoke3 and Spoke4) to ARS1. As a result, Spoke1 and Spoke2 will learn this route with NVA2 as the next hop. To ensure proper routing (as discussed earlier) and inter-hub inspection, you need to apply a UDR in Spoke1 and Spoke2 that overrides this exact supernet prefix, redirecting traffic to NVA1 as the next hop. At the same time, traffic destined for on-premises will follow the system route through the local ExpressRoute gateway, bypassing NVA1 altogether. In this setup:

UDRs on the Spokes should have "Propagate Gateway Routes" set to True.
No UDRs are needed in the GatewaySubnet.

👉Can NVA2 Still Advertise Specific Spoke Prefixes?

You might wonder: Can NVA2 still advertise specific prefixes (e.g., Spoke3 and Spoke4) learned from ARS2 to ARS1 instead of a supernet? Yes, this is technically possible, but it requires maintaining BGP peering between NVA2 and ARS2. However, this introduces UDR complexity in Spoke1 and Spoke2, as you'd need to manually override each specific prefix. This also defeats the purpose of using ARS for simplified route propagation, undermining the efficiency and scalability of the design.

Final Traffic Flow:

Option B: Bypass on-premises inspection traffic flow

Option-B Insights: This approach reduces the number of BGP peerings per ARS. Instead of maintaining two BGP sessions (local NVA and remote NVA) per hub, you can limit it to just one, preserving capacity within ARS’s 8-peer limit for additional inter-hub NVA peerings. Each NVA should advertise a supernet prefix to the remote ARS. This can be challenging if your Spokes don’t use contiguous IP address spaces.

Scenario 2: ARS in the Hub and the NVA in a Transit VNet

In Scenario 1, we highlighted that when on-premises inspection is required, managing UDRs at the GatewaySubnet becomes increasingly complex as the number of Spoke VNets grows. This is due to the need for UDRs to include specific prefixes for each Spoke VNet. In this scenario, we eliminate the need to apply UDRs at the GatewaySubnet altogether. In this design, the NVA is deployed in a Transit VNet, where:

The Transit VNet is peered with local Spoke VNets and with the local Hub VNet to enable intra-hub and on-premises connectivity.
Transit VNets are also peered with remote Transit VNets (e.g., Transit-VNet1 peered with Transit-VNet2) to handle inter-hub connectivity through the NVAs.
Additionally, Transit VNets are peered with remote Hub VNets, to establish BGP peering with the remote ARS.
The NVA OS needs static routes for the local Spoke VNet prefixes (either specific prefixes or a supernet), which are later advertised to the ARSs over BGP peering; each ARS then advertises them to on-premises via ExpressRoute. NVAs BGP peer with the local ARS and also with the remote ARS. To understand the reasoning behind this design, let’s take a closer look at the setup in Region1, focusing on how ARS and NVA are configured to connect to Region2. This will help illustrate both inter-hub and on-premises connectivity. The same concept applies in reverse from Region2 to Region1.

Inter-Hub:

To enable NVA1 in Region1 to learn prefixes from Region2, NVA2 will configure static routes at the OS level for Spoke3 and Spoke4 (or their supernet prefix) and advertise them to ARS1 via remote BGP peering. As a result, these prefixes will be received by NVA1, both at the NIC level, with NVA2 as the next hop, and at the OS level for proper routing. Spoke1 and Spoke2 will have a UDR with a default route pointing to NVA1 as the next hop. For instance, when Spoke1 needs to communicate with Spoke3, the traffic will first route through NVA1. NVA1 will then forward the traffic to NVA2 using the VNet peering between the two Transit VNets. A similar configuration will be applied in Region2, where NVA1 will configure static routes at the OS level for Spoke1 and Spoke2 (or their supernet prefix) and advertise them to ARS2 via remote BGP peering. As a result, these prefixes will be received by NVA2, both at the NIC level (injected by ARS2), with NVA1 as the next hop, and at the OS level for proper routing.

Note: At the OS level, NVA1 learns Spoke3 and Spoke4 prefixes from both local and remote ARSs. However, the NIC-level route injection determines the actual next hop, so even if the OS selects a different best route, it won’t affect forwarding behavior. The same applies to NVA2.

On-Premises Traffic: To explain the on-premises traffic flow, we'll break it down into two directions: Azure to on-premises, and on-premises to Azure.

Azure to On-Premises Traffic Flow:

Spokes in Region1 route all traffic through NVA1 via a default route defined in their UDRs. Because of BGP peering between NVA1 and ARS1, ARS1 advertises Spoke1 and Spoke2 (or their supernet prefix) to on-premises through ExpressRoute (EXR1). Transit-VNet1 (hosting NVA1) is peered with Hub1-VNet, with “Use Remote Gateway” enabled. This allows NVA1 to learn on-premises prefixes from the local ExpressRoute gateway (GW1), and traffic to on-premises is routed through the local ExpressRoute circuit (EXR1) due to the higher BGP Weight configuration.

Note: At the OS level, NVA1 learns on-premises prefixes from both local and remote ARSs. However, the NIC-level route injection determines the actual next hop, so even if the OS selects a different best route, it won’t affect forwarding behavior. The same applies to NVA2.

On-Premises to Azure Traffic Flow:

Through BGP peering with ARS1, NVA1 enables ARS1 to advertise Spoke1 and Spoke2 (or their supernet prefix) to both the EXR1 and EXR2 circuits (due to the ExpressRoute bowtie setup). Additionally, due to BGP peering between NVA1 and ARS2, ARS2 also advertises Spoke1 and Spoke2 (or their supernet prefix) to the EXR2 and EXR1 circuits. As a result, both ExpressRoute edge routers in Region1 and Region2 learn the same Spoke prefixes (or their supernet prefix) from both GW1 and GW2, with identical AS-Path lengths, as shown below.
EXR1 learns Region1 Spokes' supernet prefixes from GW1 and GW2.

This causes non-optimal inbound routing, where traffic from on-premises destined to Region1 Spokes may first land in Region2’s Hub2-VNet before traversing to NVA1 in Region1. However, return traffic from Spoke1 and Spoke2 will always exit through Hub1-VNet. To prevent suboptimal routing, configure NVA1 to prepend the AS path for Spoke1 and Spoke2 (or their supernet prefix) when advertising them to the remote ARS2. Likewise, ensure NVA2 prepends the AS path for Spoke3 and Spoke4 (or their supernet prefix) when advertising to ARS1. This approach helps maintain optimal routing under normal conditions and during ExpressRoute failover scenarios. The diagram below shows NVA1 setting the AS-prepend for the Spoke1 and Spoke2 supernet prefix when BGP peering with the remote ARS (ARS2); the same applies to NVA2 when advertising the Spoke3 and Spoke4 prefixes to ARS1.

Final Traffic Flow:

Full Inspection: Traffic flow when the NVA is in a Transit VNet

Insights: This solution is ideal when full traffic inspection is required. Unlike Scenario 1 - Option A, it eliminates the need for UDRs in the GatewaySubnet. When ARS is deployed in a VNet (typically a Hub VNet), that VNet is limited to 500 VNet peerings (as of the time of writing this article). However, in this design, Spokes peer with the Transit VNet instead of directly with the ARS VNet, allowing you to scale beyond the 500-peer limit by leveraging Azure Virtual Network Manager (AVNM) or submitting a support request. Some enterprise customers may encounter the 1,000-route advertisement limit on the ExpressRoute circuit from the ExpressRoute gateway. In traditional hub-and-spoke designs, there's no native control over what is advertised to ExpressRoute. With this architecture, NVAs provide greater control over route advertisement to the circuit. For simplicity, we used a single NVA in each region while explaining the setup and traffic flow throughout this article. However, a highly available (HA) NVA deployment is recommended. To maintain traffic symmetry in an HA setup, you’ll need to enable the next-hop IP feature when peering with Azure Route Server (ARS). This design does require additional VNet peerings, including: between Transit VNets (inter-region), between Transit VNets and local Spokes, and between Transit VNets and both local and remote Hub VNets.
Prescaling in Azure Firewall is now generally available

Azure Firewall protects your applications and workloads with cloud-native network security that automatically scales based on your traffic needs. Today, we’re excited to announce the general availability of prescaling in Azure Firewall – a new capability that gives you more control and predictability over how your firewall scales.

Why prescaling?

Today, Azure Firewall automatically scales in response to real-time traffic demand. For organizations with predictable traffic patterns – such as seasonal events, business campaigns, holidays, or planned migrations – the ability to plan capacity in advance can provide greater confidence and control. That’s where prescaling comes in. With prescaling, you can:

Plan ahead – Set a baseline number of firewall capacity units to ensure capacity is already in place before demand rises.
Stay flexible – Define both minimum and maximum capacity unit values, so your firewall always has room to grow while staying within your chosen bounds.
See clearly – Monitor capacity trends with a new observed capacity metric and configure alerts to know when scaling events occur.

You can think of it as adding extra checkout counters before a holiday rush – when the customers arrive, you’re already prepared to serve them without delays or bottlenecks.

Example scenarios

E-commerce sales events – Scale up before a holiday shopping promotion to handle the surge in online buyers.
Workload migrations – Ensure sufficient capacity is ready during a large data or VM migration window.
Seasonal usage – For industries like education, gaming, or media streaming, pre-scale ahead of known peak seasons.

Getting started in Azure Portal

Navigate to your Azure Firewall resource in the Azure Portal.
Select Scaling options in settings.
By default, every Azure Firewall starts in autoscaling mode. To enable prescaling, simply switch to pre-scaling mode in the Azure Portal and configure your desired capacity range:
Minimum capacity: 2 or higher.
Maximum capacity: up to 50, depending on your needs.
Monitor the scaling behavior with the observed capacity metric (a query sketch follows below).

Billing and availability

Prescaling uses a new Capacity Unit Hour meter. Charges apply based on the number of firewall instances you configure.
Standard: $0.07 per capacity unit hour
Premium: $0.11 per capacity unit hour

✨ Next steps

Prescaling gives you predictable performance and proactive control over your firewall, helping you confidently handle the traffic patterns that matter most to your business. 🚀 Try prescaling today and share your feedback with the team. Learn more about how to configure and monitor this feature in the Azure Firewall prescaling documentation.
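If you route firewall metrics to a Log Analytics workspace, you can chart capacity trends with a short KQL query. This is a sketch only: the exact MetricName string for the observed capacity metric is an assumption here, so verify the actual name in your workspace's AzureMetrics table before relying on it.

// Sketch: trend of the firewall's observed capacity units over the last day.
// "Observed Capacity" is an assumed metric name - check your AzureMetrics data for the real one.
AzureMetrics
| where TimeGenerated > ago(1d)
| where ResourceProvider == "MICROSOFT.NETWORK" and MetricName == "Observed Capacity"
| summarize AvgCapacityUnits = avg(Average) by Resource, bin(TimeGenerated, 15m)
| render timechart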
How Azure network security can help you meet NIS2 compliance

With the adoption of the NIS2 Directive (EU) 2022/2555, cybersecurity obligations for both public and private sector organizations have become stricter and more far-reaching. NIS2 aims to establish a higher common level of cybersecurity across the European Union by enforcing stronger requirements on risk management, incident reporting, supply chain protection, and governance. If your organization runs on Microsoft Azure, you already have powerful services to support your NIS2 journey. In particular, Azure network security products such as Azure Firewall, Azure Web Application Firewall (WAF), and Azure DDoS Protection provide foundational controls. The key is to configure and operate them in a way that aligns with the directive’s expectations.

Important note: This article is a technical guide based on the NIS2 Directive (EU) 2022/2555 and Microsoft product documentation. It is not legal advice. For formal interpretations, consult your legal or regulatory experts.

What is NIS2?

NIS2 replaces the original NIS Directive (2016) and entered into force on 16 January 2023. Member states must transpose it into national law by 17 October 2024. Its goals are to:

Expand the scope of covered entities (essential and important entities)
Harmonize cybersecurity standards across member states
Introduce stricter supervisory and enforcement measures
Strengthen supply chain security and reporting obligations

Key provisions include:

Article 20 – management responsibility and governance
Article 21 – cybersecurity risk management measures
Article 23 – incident notification obligations

These articles require organizations to implement technical, operational, and organizational measures to manage risks, respond to incidents, and ensure leadership accountability.

Where Azure network security fits

The table below maps common NIS2 focus areas to Azure network security capabilities and how they support compliance outcomes.

NIS2 focus area | Azure services and capabilities | How this supports compliance
Incident handling and detection | Azure Firewall Premium IDPS and TLS inspection, Threat Intelligence mode; Azure WAF managed rule sets and custom rules; Azure DDoS Protection; Azure Bastion diagnostic logs | Detect, block, and log threats across layers three to seven. Provide telemetry for triage and enable response workflows that are auditable.
Business continuity and resilience | Azure Firewall availability zones and autoscale; Azure Front Door or Application Gateway WAF with zone-redundant deployments; Azure Monitor with Log Analytics; Traffic Manager or Front Door for failover | Improve service availability and provide data for resilience reviews and disaster recovery scenarios.
Access control and segmentation | Azure Firewall policy with DNAT, network, and application rules; NSGs and ASGs; Azure Bastion for browser-based RDP/SSH without public IPs; Private Link | Enforce segmentation and isolation of critical assets. Support Zero Trust and least privilege for inbound and egress.
Vulnerability and misconfiguration defense | Azure WAF Microsoft managed rule set based on OWASP CRS; Azure Firewall Premium IDPS signatures | Reduce exposure to common web exploits and misconfigurations for public-facing apps and APIs.
Encryption and secure communications | TLS policy: Application Gateway SSL policy, Front Door TLS policy, App Service/PaaS minimum TLS; inspection: Azure Firewall Premium TLS inspection | Inspect and enforce encrypted communication policies and block traffic that violates TLS requirements. Inspect decrypted traffic for threats.
Incident reporting and evidence | Azure network security diagnostics, Log Analytics, Microsoft Sentinel incidents, workbooks, and playbooks | Capture and retain telemetry. Correlate events, create incident timelines, and export reports to meet regulator timelines.

NIS2 articles in practice

Article 21 – cybersecurity risk management measures

Azure network controls contribute to several required measures:

Prevention and detection. Azure Firewall blocks unauthorized access and inspects traffic with IDPS. Azure DDoS Protection mitigates volumetric and protocol attacks. Azure WAF prevents common web exploits based on OWASP guidance.
Logging and monitoring. Azure Firewall, WAF, DDoS, and Bastion resources produce detailed resource logs and metrics in Azure Monitor. Ingest these into Microsoft Sentinel for correlation, analytics rules, and automation.
Control of encrypted communications. Azure Firewall Premium provides TLS inspection to reveal malicious payloads inside encrypted sessions.
Supply chain and service provider management. Use Azure Policy and Defender for Cloud to continuously assess configuration and require approved network security baselines across subscriptions and landing zones.

Article 23 – incident notification

Build an evidence-friendly workflow with Sentinel:

Early warning within twenty-four hours. Use Sentinel analytics rules on Firewall, WAF, DDoS, and Bastion logs to generate incidents and trigger playbooks that assemble an initial advisory.
Incident notification within seventy-two hours. Enrich the incident with additional context such as mitigation actions from DDoS, Firewall, and WAF.
Final report within one month. Produce a summary that includes root cause, impact, and corrective actions. Use Workbooks to export charts and tables that back up your narrative.

Article 20 – governance and accountability

Management accountability. Track policy compliance with Azure Policy initiatives for Firewall, DDoS, and WAF. Use exemptions rarely and record justification.
Centralized visibility. Defender for Cloud’s network security posture views and recommendations give executives and owners a quick view of exposure and misconfigurations.
Change control and drift prevention. Manage Firewall, WAF, and DDoS through the Network Security hub and Infrastructure as Code with Bicep or Terraform. Require pull requests and approvals to enforce four eyes on changes.

Network security baseline

Use this blueprint as a starting point. Adapt it to your landing zone architecture and regulator guidance.

Topology and control plane

Hub-and-spoke architecture with a centralized Azure Firewall Premium in the hub. Enable availability zones.
Deploy Azure Bastion Premium in the hub or a dedicated management VNet; peer to spokes. Remove public IPs from management NICs and disable public RDP/SSH on VMs.
Use the Network Security hub for at-scale management. Require Infrastructure as Code for all network security resources.

Web application protection

Protect public apps with Azure Front Door Premium WAF where edge inspection is required. Use Application Gateway WAF v2 for regional scenarios.
Enable the Microsoft managed rule set and the latest version. Add custom rules for geo-based allow or deny and bot management. Enable rate limiting when appropriate.

DDoS strategy

Enable DDoS Network Protection on virtual networks that contain internet-facing resources. Use IP Protection for single public IP scenarios.
Configure DDoS diagnostics and alerts. Stream to Sentinel. Define runbooks for escalation and service team engagement. (A sample detection query follows this list.)
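As a starting point for the Sentinel detection mentioned above, the sketch below queries DDoS Protection notification logs streamed to a Log Analytics workspace. It assumes the diagnostic setting sends logs to the AzureDiagnostics table and that the notification columns (type_s, publicIpAddress_s) are present in your data; adjust the table, columns, and time window to your environment before wiring it into an analytics rule.

// Sketch: surface recent DDoS mitigation notifications for alerting or a Sentinel rule.
AzureDiagnostics
| where TimeGenerated > ago(1h)
| where Category == "DDoSProtectionNotifications"
| project TimeGenerated, Resource, type_s, publicIpAddress_s
| order by TimeGenerated desc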
Firewall policy

Enable IDPS in alert mode and then in alert-and-deny mode for high-confidence signatures. Enable TLS inspection for outbound and inbound where supported.
Enforce FQDN and URL filtering for egress. Require explicit allow lists for critical segments.
Deny inbound RDP/SSH from the internet. Allow management traffic only from Bastion subnets or approved management jump segments.

Logging, retention, and access

Turn on diagnostic settings for Firewall, WAF, DDoS, and Application Gateway or Front Door. Send to Log Analytics and an archive storage account for long-term retention.
Set retention per national law and internal policy. Azure Monitor Log Analytics supports table-level retention and archive for up to 12 years; many teams keep a shorter interactive window and a multi-year archive for audits.
Restrict access with Azure RBAC and customer-managed keys where applicable.

Automation and playbooks

Build Sentinel playbooks for regulator notifications, ticket creation, and evidence collection. Maintain dry-run versions for exercises.
Add analytics for Bastion session starts to sensitive VMs, excessive failed connection attempts, and out-of-hours access.

Conclusion

Azure network security services provide the technical controls most organizations need in order to align with NIS2. When combined with policy enforcement, centralized logging, and automated detection and response, they create a defensible and auditable posture. Focus on layered protection, secure connectivity, and real-time response so that you can reduce exposure to evolving threats, accelerate incident response, and meet NIS2 obligations with confidence.

References

NIS2 primary source: Directive (EU) 2022/2555 (NIS2). https://eur-lex.europa.eu/eli/dir/2022/2555/oj/eng
Azure Firewall Premium features (TLS inspection, IDPS, URL filtering). https://learn.microsoft.com/en-us/azure/firewall/premium-features
Deploy & configure Azure Firewall Premium. https://learn.microsoft.com/en-us/azure/firewall/premium-deploy
IDPS signature categories reference. https://learn.microsoft.com/en-us/azure/firewall/idps-signature-categories
Monitoring & diagnostic logs reference. https://learn.microsoft.com/en-us/azure/firewall/monitor-firewall-reference
Web Application Firewall (WAF) on Azure Front Door overview & features. https://learn.microsoft.com/en-us/azure/frontdoor/web-application-firewall
WAF on Application Gateway overview. https://learn.microsoft.com/en-us/azure/web-application-firewall/overview
Examine WAF logs with Log Analytics. https://learn.microsoft.com/en-us/azure/application-gateway/log-analytics
Rate limiting with Front Door WAF. https://learn.microsoft.com/en-us/azure/web-application-firewall/afds/waf-front-door-rate-limit
Azure DDoS Protection service overview & SKUs (Network Protection, IP Protection). https://learn.microsoft.com/en-us/azure/ddos-protection/ddos-protection-overview
Quickstart: Enable DDoS IP Protection. https://learn.microsoft.com/en-us/azure/ddos-protection/manage-ddos-ip-protection-portal
View DDoS diagnostic logs (Notifications, Mitigation Reports/Flows). https://learn.microsoft.com/en-us/azure/ddos-protection/ddos-view-diagnostic-logs

Azure Bastion
Azure Bastion overview and SKUs. https://learn.microsoft.com/en-us/azure/bastion/bastion-overview
Deploy and configure Azure Bastion. https://learn.microsoft.com/en-us/azure/bastion/tutorial-create-host-portal
Disable public RDP and SSH on Azure VMs. https://learn.microsoft.com/en-us/azure/virtual-machines/security-baseline
Azure Bastion diagnostic logs and metrics. https://learn.microsoft.com/en-us/azure/bastion/bastion-diagnostic-logs
Microsoft Sentinel
Sentinel documentation (onboarding, analytics, automation). https://learn.microsoft.com/en-us/azure/sentinel/
Azure Firewall solution for Microsoft Sentinel. https://learn.microsoft.com/en-us/azure/firewall/firewall-sentinel-overview
Use Microsoft Sentinel with Azure WAF. https://learn.microsoft.com/en-us/azure/web-application-firewall/waf-sentinel

Architecture & routing
Hub-spoke network topology (reference). https://learn.microsoft.com/en-us/azure/architecture/networking/architecture/hub-spoke
Azure Firewall Manager & secured virtual hub. https://learn.microsoft.com/en-us/azure/firewall-manager/secured-virtual-hub
Using Application Gateway to secure access to the Azure OpenAI Service: Customer success story

Introduction

A large enterprise customer set out to build a generative AI application using Azure OpenAI. While the app would be hosted on-premises, the customer wanted to leverage the latest large language models (LLMs) available through Azure OpenAI. However, they faced a critical challenge: how to securely access Azure OpenAI from an on-premises environment without private network connectivity or a full Azure landing zone. This blog post walks through how the customer overcame these limitations using Application Gateway as a reverse proxy in front of Azure OpenAI, along with other Azure services, to meet their security and governance requirements.

Customer landscape and challenges

The customer’s environment lacked:

Private network connectivity (no Site-to-Site VPN or ExpressRoute). This was due to using a new Azure Government environment and not yet having a cloud operations team in place.
A common network topology such as Virtual WAN or a hub-and-spoke network design.
A full Enterprise Scale Landing Zone (ESLZ) of common infrastructure.
Security components like private DNS zones, DNS resolvers, API Management, and firewalls.

This meant they couldn’t use private endpoints or other standard security controls typically available in mature Azure environments. Security was non-negotiable. Public access to Azure OpenAI was unacceptable. The customer needed to:

Restrict access to specific IP CIDR ranges from on-premises user machines and data centers
Limit the ports communicating with Azure OpenAI
Implement a reverse proxy with SSL termination and Web Application Firewall (WAF)
Use a customer-provided SSL certificate to secure traffic

Proposed solution

To address these challenges, the customer designed a secure architecture using the following Azure components:

Key Azure services

Application Gateway – Layer 7 reverse proxy, SSL termination & Web Application Firewall (WAF)
Public IP – Allows communication over the public internet between the customer’s IP addresses & Azure IP addresses
Virtual Network – Allows control of network traffic in Azure
Network Security Group (NSG) – Layer 4 network controls such as port numbers and service tags, using five-tuple information (source, source port, destination, destination port, protocol)
Azure OpenAI – Large language model (LLM) service

NSG configuration

Inbound rules: Allow traffic only from specific IP CIDR ranges and HTTP(S) ports
Outbound rules: Target AzureCloud.<region> with HTTP(S) ports (there is no service tag for Azure OpenAI yet)

Application Gateway setup

SSL certificate: Issued by the customer’s on-premises Certificate Authority
HTTPS listener: Uses the on-premises certificate to terminate SSL
Traffic flow:
Decrypt incoming traffic
Scan with WAF
Re-encrypt using a well-known Azure CA
Override the backend hostname
Custom health probe: Configured to detect a 404 response from Azure OpenAI (since no health check endpoint exists)

Azure OpenAI configuration

IP firewall restrictions: Only allow traffic from the Application Gateway subnet

Outcome

By combining Application Gateway, NSGs, and custom SSL configurations, the customer successfully secured their Azure OpenAI deployment—without needing a full ESLZ or private connectivity. This approach enabled them to move forward with their generative AI app while maintaining enterprise-grade security and governance.
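To verify that IP restrictions like the ones described above are holding, one option is a quick check of the Application Gateway access logs in Log Analytics. This is an illustrative sketch: the CIDR ranges are hypothetical placeholders, and it assumes access logs are flowing to the AzureDiagnostics table with the standard clientIP_s column.

// Sketch: flag requests reaching the gateway from outside the approved on-premises ranges.
// The CIDRs below are hypothetical placeholders - substitute your allowed ranges.
AzureDiagnostics
| where TimeGenerated > ago(24h)
| where Category == "ApplicationGatewayAccessLog"
| where not(ipv4_is_in_any_range(clientIP_s, dynamic(["203.0.113.0/24", "198.51.100.0/24"])))
| summarize Requests = count() by clientIP_s, bin(TimeGenerated, 1h)

An empty result suggests the NSG and WAF rules are filtering as intended; any rows returned are candidates for investigation.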
Unlock visibility, flexibility, and cost efficiency with Application Gateway logging enhancements

Introduction

In today’s cloud-native landscape, organizations are accelerating the deployment of web applications at unprecedented speed. But with rapid scale comes increased complexity—and a growing need for deep, actionable visibility into the underlying infrastructure. As businesses embrace modern architectures, the demand for scalable, secure, and observable web applications continues to rise. Azure Application Gateway is evolving to meet these needs, offering enhanced logging capabilities that empower teams to gain richer insights, optimize costs, and simplify operations.

This article highlights three powerful enhancements that are transforming how teams use logging in Azure Application Gateway:

Resource-specific tables
Data collection rule (DCR) transformations
Basic log plan

Resource-specific tables improve organization and query performance. DCR transformations give teams fine-grained control over the structure and content of their log data. And the Basic log plan makes comprehensive logging more accessible and cost-effective. Together, these capabilities deliver a smarter, more structured, and cost-aware approach to observability.

Resource-specific tables: Structured and efficient logging

Azure Monitor stores logs in a Log Analytics workspace powered by Azure Data Explorer. Previously, when you configured Log Analytics, all diagnostic data for Application Gateway was stored in a single, generic table called AzureDiagnostics. This approach often led to slower queries and increased complexity, especially when working with large datasets. With resource-specific logging, Application Gateway logs are now organized into dedicated tables, each optimized for a specific log type:

AGWAccessLogs – Contains access log information
AGWPerformanceLogs – Contains performance metrics and data
AGWFirewallLogs – Contains Web Application Firewall (WAF) log data

This structured approach delivers several key benefits:

Simplified queries – Reduces the need for complex filtering and data manipulation
Improved schema discovery – Makes it easier to understand log structure and fields
Enhanced performance – Speeds up both ingestion and query execution
Granular access control – Allows you to grant Azure role-based access control (RBAC) permissions on specific tables

Example: Azure diagnostics vs. resource-specific table approach

Traditional AzureDiagnostics query:

AzureDiagnostics
| where ResourceType == "APPLICATIONGATEWAYS" and Category == "ApplicationGatewayAccessLog"
| extend clientIp_s = todynamic(properties_s).clientIP
| where clientIp_s == "203.0.113.1"

New resource-specific table query:

AGWAccessLogs
| where ClientIP == "203.0.113.1"

The resource-specific approach is cleaner, faster, and easier to maintain, as it eliminates complex filtering and data manipulation.

Data collection rules (DCR) log transformations: Take control of your log pipeline

DCR transformations offer a flexible way to shape log data before it reaches your Log Analytics workspace. Instead of ingesting raw logs and filtering them post-ingestion, you can now filter, enrich, and transform logs at the source, giving you greater control and efficiency.
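For a sense of what such a transformation looks like: a DCR transformation is itself a KQL query run against an incoming virtual table named source. The sketch below is illustrative only: it assumes you want to drop routine 200-status access log rows and remove a column before ingestion, and it assumes the incoming stream exposes HttpStatus and UserAgent columns; adapt the filters to your own schema.

// Illustrative DCR transformation (transformKql) for access log data:
// keep only non-200 responses and drop a column not needed downstream.
source
| where HttpStatus != 200
| project-away UserAgent

Because the filtering happens before the data lands in the workspace, the discarded rows never count toward ingestion volume, which is where the cost savings described below come from.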
Why DCR transformations matter:

Optimize costs: Reduce ingestion volume by excluding non-essential data
Support compliance: Strip out personally identifiable information (PII) before logs are stored, helping meet GDPR and CCPA requirements
Manage volume: Ideal for high-throughput environments where only actionable data is needed

Real-world use cases

Whether you're handling high-traffic e-commerce workloads, processing sensitive healthcare data, or managing development environments with cost constraints, DCR transformations help tailor your logging strategy to meet specific business and regulatory needs. For implementation guidance and best practices, refer to Transformations in Azure Monitor.

Basic log plan: Cost-effective logging for low-priority data

Not all logs require real-time analysis. Some are used for occasional debugging or compliance audits. The Basic log plan in Log Analytics provides a cost-effective way to retain high-volume, low-priority diagnostic data—without paying for premium features you may not need.

When to use the Basic log plan

Save on costs: Pay-as-you-go pricing with lower ingestion rates
Debugging and forensics: Retain data for troubleshooting and incident analysis, without paying premium costs for features you don't use regularly

Understanding the trade-offs

While the Basic plan offers significant savings, it comes with limitations:

No real-time alerts: Not suitable for monitoring critical health metrics
Query constraints: Limited KQL functionality and additional query costs

Choose the Basic plan when deep analytics and alerting aren’t required, and focus premium resources on critical logs.

Building a smart logging strategy with Azure Application Gateway

To get the most out of Azure Application Gateway logging, combine the strengths of all three capabilities:

Assess your needs: Identify which logs require real-time monitoring versus those used for compliance or debugging
Design for efficiency: Use the Basic log plan for low-priority data, and reserve standard tiers for critical logs
Transform at the source: Apply DCR transformations to reduce costs and meet compliance before ingestion
Query with precision: Use resource-specific tables to simplify queries and improve performance

This integrated approach helps teams achieve deep visibility, maintain compliance, and manage costs.
Microsoft Azure scales Hollow Core Fiber (HCF) production through outsourced manufacturing

Introduction

As cloud and AI workloads surge, the pressure on datacenter (DC), Metro, and Wide Area Network (WAN) networks has never been greater. Microsoft is tackling the physical limits of traditional networking head-on. From pioneering research in microLED technologies to deploying Hollow Core Fiber (HCF) at global scale, Microsoft is reimagining connectivity to power the next era of cloud networking. Azure’s HCF journey has been one of relentless innovation, collaboration, and a vision to redefine the physical layer of the cloud. Microsoft’s HCF, based on the proprietary Double Nested Antiresonant Nodeless Fiber (DNANF) design, delivers up to 47% faster data transmission and approximately 33% lower latency compared to conventional Single Mode Fiber (SMF), bringing significant advantages to the network that powers Azure.

Today, Microsoft is announcing a major milestone: the industrial scale-up of HCF production, powered by new strategic manufacturing collaborations with Corning Incorporated (Corning) and Heraeus Covantics (Heraeus). These collaborations will enable Azure to increase the global fiber production of HCF to meet the demands of the growing network infrastructure, advancing the performance and reliability customers expect for cloud and AI workloads.

Real-world benefits for Azure customers

Since 2023, Microsoft has deployed HCF across multiple Azure regions, with production links meeting performance and reliability targets. As manufacturing scales, Azure plans to expand deployment of the full end-to-end HCF network solution to help increase capacity, resiliency, and speed for customers, with the potential to set new benchmarks for latency and efficiency in fiber infrastructure.

Why it matters

Microsoft’s proprietary HCF design brings the following improvements for Azure customers:

Increased data transmission speeds, with up to 33% lower latency.
Enhanced signal performance that improves data transmission quality for customers.
Improved optical efficiency, resulting in higher bandwidth rates compared to conventional fiber.

How Microsoft is making it possible

To operationalize HCF across Azure with production-grade performance, Microsoft is:

Deploying a standardized HCF solution with end-to-end systems and components for operational efficiency, streamlined network management, and reliable connectivity across Azure’s infrastructure.
Ensuring interoperability with standard SMF environments, enabling seamless integration with existing optical infrastructure in the network for faster deployment and scalable growth.
Creating a multinational production supply chain to scale next-generation fiber production, ensuring the volumes and speed to market needed for widespread HCF deployment across the Azure network.

Scaling up and out

With Corning and Heraeus as Microsoft’s first HCF manufacturing collaborators, Azure plans to accelerate deployment to meet surging demand for high-performance connectivity. These collaborations underscore Microsoft’s commitment to enhancing its global infrastructure and delivering a reliable customer experience. They also reinforce Azure’s continued investment in deploying HCF, with a vision for this technology to potentially set the global benchmark for high-capacity fiber innovation.

“This milestone marks a new chapter in reimagining the cloud’s physical layer.
Our collaborations with Corning and Heraeus establish a resilient, global HCF supply chain so Azure can deliver a standardized, world-class customer experience with ultra-low latency and high reliability for modern AI and cloud workloads.” - Jamie Gaudette, Partner Cloud Network Engineering Manager at Microsoft

To scale HCF production, Microsoft will utilize Corning’s established U.S. facilities, while Heraeus will produce out of its sites in both Europe and the U.S.

"Corning is excited to expand our longtime collaboration with Microsoft, leveraging Corning’s fiber and cable manufacturing facilities in North Carolina to accelerate the production of Microsoft's Hollow Core Fiber. This collaboration not only strengthens our existing relationship but also underscores our commitment to advancing U.S. leadership in AI innovation and infrastructure. By working closely with Microsoft, we are poised to deliver solutions that meet the demands of AI workloads, setting new benchmarks for speed and efficiency in fiber infrastructure." - Mike O'Day, Senior Vice President and General Manager, Corning Optical Communications

“We started our work on HCF a decade ago, teamed up with the Optoelectronics Research Centre (ORC) at the University of Southampton and then with Lumenisity prior to its acquisition. Now, we are excited to continue working with Microsoft on shaping the datacom industry. With leading solutions in glass, tube, preform, and fiber manufacturing, we are ready to scale this disruptive HCF technology to significant volumes. We’ll leverage our proven track record of taking glass and fiber innovations from the lab to widespread adoption, just as we did in the telecom industry, where approximately 2 billion kilometers of fiber are made using Heraeus products.” - Dr. Jan Vydra, Executive Vice President Fiber Optics, Heraeus Covantics

Azure engineers are working alongside Corning and Heraeus to operationalize Microsoft manufacturing process intellectual property (IP), deliver targeted training programs, and drive the yield, metrology, and reliability improvements required for scaled production. The collaborations are foundational to a growing standardized, global ecosystem that supports:

Glass preform/tubing supply
Fiber production at scale
Cable and connectivity for deployment into carrier-grade environments

Building on a foundation of innovation: Microsoft’s HCF program

In 2022, Microsoft acquired Lumenisity, a spin-out from the Optoelectronics Research Centre (ORC) at the University of Southampton, UK. That same year, Microsoft launched the world’s first state-of-the-art HCF fabrication facility in the UK to expand production and drive innovation. This purpose-built site continues to support long-term HCF research, prototyping, and testing, ensuring that Azure remains at the forefront of HCF technology. Working with industry leaders, Microsoft has developed an end-to-end ecosystem of the components, equipment, and HCF-specific hardware necessary for deployment, successfully proven in production deployments and operations.

Pushing the boundaries: recent breakthrough research

Today, the University of Southampton announced a landmark achievement in optical communications: in collaboration with Azure Fiber researchers, they have demonstrated the lowest signal loss ever recorded for optical fibers (<0.1 dB/km) using research-grade DNANF HCF technology (see figure 4).
This breakthrough, detailed in a research paper published in Nature Photonics earlier this month, paves the way for a potential revolution in the field, enabling unprecedented data transmission capacities and longer unamplified spans.

[Figure: optical fiber attenuation records at around 1550 nm. 2002: Nagayama et al. [1]; 2025: Sato et al. [2]; 2025: research-grade DNANF HCF, Petrovich et al. [3]]

This breakthrough highlights the potential for this technology to transform global internet infrastructure and DC connectivity. Expected benefits include:

Faster: Approximately 47% faster, reducing latency and powering real-time AI inference, cloud gaming, and other interactive workloads.
More capacity: A wider optical spectrum window enabling exponentially greater bandwidth.
Future-ready: Lays the groundwork for quantum-safe links, quantum computing infrastructure, advanced sensing, and remote laser delivery.

Looking ahead: Unlocking the future of cloud networking

The future of cloud networking is being built today! With record-breaking [3] fiber innovations, a rapidly expanding collaborative ecosystem, and the industrialized scale to deliver next-generation performance, Azure continues to evolve to meet the demands for speed, reliability, and connectivity. As we accelerate the deployment of HCF across our global network, we’re not just keeping pace with the demands of AI and cloud, we’re redefining what’s possible.

References:

[1] Nagayama, K., Kakui, M., Matsui, M., Saitoh, T. & Chigusa, Y. Ultra-low-loss (0.1484 dB/km) pure silica core fibre and extension of transmission distance. Electron. Lett. 38, 1168–1169 (2002).
[2] Sato, S., Kawaguchi, Y., Sakuma, H., Haruna, T. & Hasegawa, T. Record low loss optical fiber with 0.1397 dB/km. In Proc. Optical Fiber Communication Conference (OFC) 2024 Tu2E.1 (Optica Publishing Group, 2024).
[3] Petrovich, M., Numkam Fokoua, E., Chen, Y., Sakr, H., Isa Adamu, A., Hassan, R., Wu, D., Fatobene Ando, R., Papadimopoulos, A., Sandoghchi, S., Jasion, G., & Poletti, F. Broadband optical fibre with an attenuation lower than 0.1 decibel per kilometre. Nat. Photon. (2025). https://doi.org/10.1038/s41566-025-01747-5

Useful links:

The Deployment of Hollow Core Fiber (HCF) in Azure’s Network
How hollow core fiber is accelerating AI | Microsoft Azure Blog
Learn more about Microsoft global infrastructure
Introducing WireGuard In-Transit Encryption for AKS (Public Preview)

As organizations continue to scale containerized workloads in Azure Kubernetes Service (AKS), the need to secure network traffic between applications and services has never been more critical, especially in regulated or security-sensitive environments. We’re excited to announce the public preview of WireGuard-based in-transit encryption in AKS, a new capability in Advanced Container Networking Services that enhances inter-node traffic protection with minimal operational overhead.

What is WireGuard?

WireGuard is a modern, high-performance VPN protocol known for its simplicity and robust cryptography. Integrated into the Cilium data plane and managed as part of AKS networking, WireGuard offers an efficient way to encrypt traffic transparently within your cluster. With this new feature, WireGuard is now natively supported as part of Azure CNI powered by Cilium with Advanced Container Networking Services, with no need for third-party encryption tools or custom key management systems.

What Gets Encrypted?

The WireGuard integration in AKS focuses on the most critical traffic path:

✅ Encrypted:
Inter-node pod traffic: Network communication between pods running on different nodes in the AKS cluster. This traffic traverses the underlying network infrastructure and is encrypted using WireGuard to ensure confidentiality and integrity.

❌ Not encrypted:
Same-node pod traffic: Communication between pods that are running on the same node. Since this traffic does not leave the node, it bypasses WireGuard and remains unencrypted.
Node-generated traffic: Traffic initiated by the node itself, which is currently not routed through WireGuard and thus not encrypted.

This scope strikes the right balance between strong protection and performance by securing the most critical traffic: data that leaves the host and traverses the network.

Key Benefits

Simple configuration: Enable WireGuard with just a few flags during AKS cluster creation or update.
Automatic key management: Each node generates and exchanges WireGuard keys automatically—no need for manual configuration.
Transparent to applications: No application-level changes are required. Encryption happens at the network layer.
Cloud-native integration: Fully managed as part of Advanced Container Networking Services and Cilium, offering a seamless and reliable experience.

Architecture: How It Works

When WireGuard is enabled:

Each node generates a unique public/private key pair.
The public keys are securely shared between nodes via the CiliumNode custom resource.
A dedicated network interface (cilium_wg0) is created and managed by the Cilium agent running on each node.
Peers are dynamically updated, and keys are rotated automatically every 120 seconds to minimize risk.

This mechanism ensures that only validated nodes can participate in encrypted communication.

WireGuard and VNet Encryption

AKS now offers two powerful in-transit encryption options:

Feature | WireGuard Encryption | VNet Encryption
Scope | Pod-to-pod inter-node traffic | All traffic in the VNet
VM support | Works on all VM SKUs | Requires hardware support (e.g., Gen2 VMs)
Deployment flexibility | Cloud-agnostic, hybrid ready | Azure-only
Performance | Software-based, moderate CPU usage | Hardware-accelerated, low overhead

Choose WireGuard if you want encryption flexibility across clouds or have VM SKUs that don’t support VNet encryption. Choose VNet encryption for full-network coverage and ultra-low CPU overhead.
Conclusion and Next Steps

WireGuard in AKS, now in public preview, delivers strong encryption that protects traffic as it leaves the host and traverses the network, right where it's needed most. It offers a balanced approach to securing container networking without compromising usability. Ready to get started? Check out our how-to guide for step-by-step instructions on enabling WireGuard in your cluster and securing your container networking with ease.

Explore more about Advanced Container Networking Services:

Container Network Observability
L7 network policies
FQDN-based Policy
Introducing the new Network Security Hub in Azure

Background

Since its launch in 2020, Azure Firewall Manager has supported customers in securing their networks. But the role of network security has since evolved, from a foundational requirement to a strategic priority for organizations. Today, organizations must protect every endpoint, server, and workload, as attackers continually search for the weakest link. Over the years, we’ve heard consistent feedback about the importance of centralized management, easier service discovery, and streamlined monitoring across network security tools. These capabilities can make the difference between a minor incident and a major breach.

That’s why we’re excited to introduce a new, unified Network Security hub experience. This updated hub brings together Azure Firewall, Web Application Firewall, and DDoS Protection—enabling you to manage, configure, and monitor all your network security services in one place. While Azure Firewall Manager offered some of this functionality, the name didn’t reflect the broader scope of protection and control that customers need. With this new experience, Firewall Manager has expanded into the Network Security hub, making it easier to discover, configure, and monitor the right security services with just a few clicks. The result: less time navigating, more time securing your environment.

What you’ll notice:

Streamlined navigation: Whether you search for Azure Firewall, Web Application Firewall, DDoS Protection, or Firewall Manager, you’ll now be directed to the new Network Security hub. This unified entry point presents all relevant services in context—helping you stay focused and quickly find what you need, without feeling overwhelmed.
Overview of services: The hub’s landing page provides a high-level view of each recommended solution, including key use cases, documentation links, and pricing details—so you can make informed decisions faster.
Common scenarios: Explore typical deployment architectures and step-by-step guidance for getting started, right from the overview page.
Related services: We’ve consolidated overlapping or closely related services to reduce noise and make your options clearer. The result? Fewer, more meaningful choices that are easier to evaluate and implement.
New insights: We've enhanced the security coverage interface to show how many of your key resources are protected by Azure Firewall, DDoS Protection, and Web Application Firewall. Additionally, our integration with Azure Advisor now provides tailored recommendations to help you strengthen your security posture, reduce costs, and optimize Azure Firewall performance.

What this means for you:

No changes to Firewall Manager pricing or support: This is a user experience update only for Firewall Manager. You can continue to deploy Firewall policies and create Hub Virtual Network or Secured Virtual Hub deployments, now within the streamlined Network Security hub experience.
Aligned marketing and documentation: We’ve updated our marketing pages and documentation to reflect this new experience, making it easier to find the right guidance and stay aligned with the latest best practices.
Faster decision-making: With a clearer, more intuitive layout, it’s easier to discover the right service and act with confidence.
Better product experience: This update brings greater cohesion to the Azure Networking portfolio, helping you get started quickly and unlock more value from day one.

Before: The original landing page was primarily focused on setting up Firewall Policies and Secured Virtual Hub, offering a limited view of Azure’s broader network security capabilities.

After: The updated landing page delivers a more comprehensive and intuitive experience, with clear guidance on how to get started with each product—alongside common deployment scenarios to help you configure and operationalize your network security stack with ease.

Before: The previous monitoring and security coverage experience was cluttered and difficult to navigate, making it harder to get a quick sense of your environment’s protection status.

After: The updated Security Coverage view is cleaner and more intuitive. We've streamlined the layout and added Azure Advisor integration, so you can now quickly assess protection status across key services and receive actionable recommendations in one place.

The expansion of Firewall Manager into the Network Security hub is part of a greater strategic effort to simplify and enhance the Azure Networking portfolio, ensuring better alignment with customer needs and industry best practices. You can learn more about this initiative in this blog. This shift is designed to better align with customer needs and industry best practices—by emphasizing core services, consolidating related offerings, and phasing out legacy experiences. The result is a more cohesive, intuitive, and efficient product experience across Azure Networking.

📣 If you have any thoughts or suggestions about the user interface, feel free to drop them in the feedback form available in the Network Security hub on the Azure Portal.

Documentation links:

Azure Networking hub page: Azure networking documentation | Microsoft Learn
Scenario hub pages:
Azure load balancing and content delivery | Microsoft Learn
Azure network foundation documentation | Microsoft Learn
Azure hybrid connectivity documentation | Microsoft Learn
Azure network security documentation | Microsoft Learn
Scenario overview pages:
What is load balancing and content delivery? | Microsoft Learn
Azure Network Foundation Services Overview | Microsoft Learn
What is hybrid connectivity? | Microsoft Learn
What is Azure network security? | Microsoft Learn
Azure Networking Portfolio Consolidation

Overview

Over the past decade, Azure Networking has expanded rapidly, bringing incredible tools and capabilities to help customers build, connect, and secure their cloud infrastructure. But we've also heard strong feedback: with over 40 different products, it hasn't always been easy to navigate and find the right solution. The complexity often led to confusion, slower onboarding, and missed capabilities. That's why we're excited to introduce a more focused, streamlined, and intuitive experience across Azure.com, the Azure portal, and our documentation, pivoting around four core networking scenarios:

Network foundations: Network foundations provide the core connectivity for your resources, using Virtual Network, Private Link, and DNS to build the foundation for your Azure network. Try it with this link: Network foundations

Hybrid connectivity: Hybrid connectivity securely connects on-premises, private, and public cloud environments, enabling seamless integration, global availability, and end-to-end visibility. It presents major opportunities as organizations advance their cloud transformation. Try it with this link: Hybrid connectivity

Load balancing and content delivery: Load balancing and content delivery helps you choose the right option to ensure your applications are fast, reliable, and tailored to your business needs. Try it with this link: Load balancing and content delivery

Network security: Securing your environment is just as essential as building and connecting it. The Network Security hub brings together Azure Firewall, DDoS Protection, and Web Application Firewall (WAF) to provide a centralized, unified approach to cloud protection. With unified controls, it helps you manage security more efficiently and strengthen your security posture. Try it with this link: Network security

This new structure makes it easier to discover the right networking services and get started with just a few clicks, so you can focus more on building and less on searching.

What you'll notice:

Clearer starting points: Azure Networking is now organized around four core scenarios and twelve essential services, reflecting the most common customer needs. Additional services are presented within the context of these scenarios, helping you stay focused and find the right solution without feeling overwhelmed.

Simplified choices: We've merged overlapping or closely related services to reduce redundancy. That means fewer, more meaningful options that are easier to evaluate and act on.

Sunsetting outdated services: To reduce clutter and improve clarity, we're sunsetting underused offerings such as white-label CDN services and China CDN. These capabilities have been rolled into newer, more robust services, so you can focus on what's current and supported.

What this means for you:

Faster decision-making: With clearer guidance and fewer overlapping products, it's easier to discover what you need and move forward confidently.

More productive sales conversations: With this simplified approach, you'll get more focused recommendations and less confusion among sellers.

Better product experience: This update makes the Azure Networking portfolio more cohesive and consistent, helping you get started quickly, stay aligned with best practices, and unlock more value from day one.

The portfolio consolidation initiative is a strategic effort to simplify and enhance the Azure Networking portfolio, ensuring better alignment with customer needs and industry best practices.
By focusing on top-line services, combining related products, and retiring outdated offerings, Azure Networking aims to provide a more cohesive and efficient product experience.

Azure.com

Before: Our original solution page on Azure.com was disorganized and static, displaying a small portion of services in no discernible order.

After: The revised solution page is now dynamic, allowing customers to click deeper into each networking and network security category and displaying the top-line services, simplifying the customer experience.

Azure Portal

Before: With over 40 networking services available, we know it can feel overwhelming to figure out what's right for you and where to get started.

After: To make it easier, we've introduced four streamlined networking hubs, each built around a specific scenario, to help you quickly identify the services that match your needs. Each offers an overview to set the stage, key services to help you get started, guidance to support decision-making, and a streamlined left-hand navigation for easy access to all services and features.

Documentation

For documentation, we reviewed our existing assets and created new ones aligned with the changes in the portal experience. As on Azure.com, we found the old experiences were disorganized and poorly aligned. We updated our assets to focus on our top-line networking services and to call out the pillars. We believe these changes will let customers more easily find the relevant and important information they need for their Azure infrastructure.

Azure Network Hub

Before the updates, we had a hub page organized around different categories and not well laid out. The updated hub page provides relevant links for top-line services across all of the Azure networking scenarios, as well as a section linking to each scenario's hub page.

Scenario Hub pages

We added scenario hub pages for each of the scenarios. Each gives customers a central hub for information about that scenario's top-line services and how to get started. We also included common scenarios and use cases, along with references for deeper learning across the Azure Architecture Center, Well-Architected Framework, and Cloud Adoption Framework libraries.

Scenario Overview articles

We created new overview articles for each scenario. These articles introduce the services included in each scenario, offer guidance on choosing the right solutions, and introduce the new portal experience. Here's the Load balancing and content delivery overview:

Documentation links

Azure Networking hub page: Azure networking documentation | Microsoft Learn

Scenario Hub pages:
Azure load balancing and content delivery | Microsoft Learn
Azure network foundation documentation | Microsoft Learn
Azure hybrid connectivity documentation | Microsoft Learn
Azure network security documentation | Microsoft Learn

Scenario Overview pages:
What is load balancing and content delivery? | Microsoft Learn
Azure Network Foundation Services Overview | Microsoft Learn
What is hybrid connectivity? | Microsoft Learn
What is Azure network security? | Microsoft Learn

Improving user experience is a journey, and in the coming months we plan to do more. Watch for more blogs over the next few months covering further improvements.
Azure CNI Overlay for Application Gateway for Containers and Application Gateway Ingress Controller

What are Azure CNI Overlay and Application Gateway?

Azure CNI Overlay uses logical network spaces for pod IP address management (IPAM), providing enhanced IP scalability with reduced management overhead. Application Gateway for Containers is the latest and most recommended container L7 load-balancing solution. It introduces a new scalable control plane and data plane to address the performance demands of modern workloads deployed to AKS clusters on Azure. The Azure network control plane configures routing between Application Gateway and overlay pods.

Why is the feature needed?

As businesses increasingly adopt containerized solutions, managing container networks at scale has become a priority. Within container network management, IP address exhaustion, scalability, and application load-balancing performance are highly requested and widely discussed. Azure CNI Overlay is the default container networking IPAM mode on AKS. In the overlay design, AKS nodes draw IPs from the Azure virtual network (VNet) IP address range, while pods are addressed from a separate overlay IP address range; overlay pods communicate with each other directly via a different routing domain. Overlay IP addresses can be reused across multiple clusters in the same VNet, addressing IP exhaustion and increasing IP scale to over 1M (see the first sketch below). Azure CNI Overlay support for Application Gateway for Containers gives customers a more performant, reliable, and scalable container networking solution. Meanwhile, Azure CNI Overlay support for AGIC gives customers full feature parity if they choose to upgrade AKS clusters from kubenet to Azure CNI Overlay.

Key Benefits

High scale with Azure CNI Overlay combined with a high-performance ingress solution: Azure CNI Overlay provides direct pod-to-pod routing at high IP scale using Azure-native routing with no encapsulation overhead. IPs can be reused across clusters in the same VNet, allowing customers to conserve IP addresses. Application Gateway for Containers is the latest and most recommended container L7 load-balancing solution. Installing Application Gateway for Containers on AKS clusters with Azure CNI Overlay combines the best of IP scalability and ingress capability on Azure.

Feature parity between kubenet and Azure CNI Overlay: With the retirement announcement of kubenet, we expect customers to upgrade their AKS container networking from kubenet to Azure CNI Overlay soon (see the second sketch below). This feature allows customers to maintain business continuity during the transition.

Learn More

Read more about Azure CNI Overlay and Application Gateway for Containers. Learn more about upgrading AKS clusters' IPAM to Azure CNI Overlay. Learn more about Azure Kubernetes Service and Application Gateway.
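To make the overlay IPAM model concrete, here is the first sketch referenced above: creating an AKS cluster with Azure CNI Overlay via the Python azure-mgmt-containerservice SDK. It assumes a recent SDK version that exposes network_plugin_mode on the network profile; the resource names, region, VM size, and CIDRs are hypothetical placeholders.

```python
# Minimal sketch: create an AKS cluster with Azure CNI Overlay.
# Assumes azure-identity and a recent azure-mgmt-containerservice; resource
# names, region, VM size, and CIDRs below are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient
from azure.mgmt.containerservice.models import (
    ContainerServiceNetworkProfile,
    ManagedCluster,
    ManagedClusterAgentPoolProfile,
    ManagedClusterIdentity,
)

subscription_id = "<subscription-id>"
client = ContainerServiceClient(DefaultAzureCredential(), subscription_id)

cluster = ManagedCluster(
    location="eastus",
    dns_prefix="overlay-demo",
    identity=ManagedClusterIdentity(type="SystemAssigned"),
    agent_pool_profiles=[
        ManagedClusterAgentPoolProfile(
            name="systempool",
            mode="System",
            count=3,
            vm_size="Standard_DS3_v2",  # nodes take IPs from the VNet range
        )
    ],
    network_profile=ContainerServiceNetworkProfile(
        network_plugin="azure",
        network_plugin_mode="overlay",  # pods take IPs from the overlay range
        pod_cidr="192.168.0.0/16",      # reusable across clusters in the VNet
        service_cidr="10.2.0.0/16",
        dns_service_ip="10.2.0.10",
    ),
)

result = client.managed_clusters.begin_create_or_update(
    "overlay-demo-rg", "overlay-demo-aks", cluster
).result()
print(result.provisioning_state)
```

For reference, the Azure CLI expresses the same choice with --network-plugin azure --network-plugin-mode overlay on az aks create. In either form, the key point is that the pod CIDR is an overlay range, so other clusters in the same VNet can reuse it.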
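And the second sketch: the kubenet-to-overlay upgrade path mentioned under feature parity. This is a hedged sketch under the same SDK assumption, following the documented pattern of updating the cluster's network profile in place (the CLI analogue is az aks update --network-plugin-mode overlay); the cluster and resource-group names are hypothetical.

```python
# Minimal sketch: upgrade an existing kubenet cluster to Azure CNI Overlay by
# updating its network profile in place. Assumes the same SDK as the previous
# sketch; cluster and resource-group names are hypothetical. Expect workloads
# to restart during the switch, so plan a maintenance window.
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient

subscription_id = "<subscription-id>"
client = ContainerServiceClient(DefaultAzureCredential(), subscription_id)

existing = client.managed_clusters.get("legacy-rg", "kubenet-aks")

# Switch the IPAM mode; the existing pod CIDR carries over from kubenet.
existing.network_profile.network_plugin = "azure"
existing.network_profile.network_plugin_mode = "overlay"

poller = client.managed_clusters.begin_create_or_update(
    "legacy-rg", "kubenet-aks", existing
)
print(poller.result().provisioning_state)
```

Because the cluster object is fetched and written back whole, this read-modify-write pattern keeps all other cluster settings unchanged while the networking mode is migrated.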