azure network security
Navigating the 2025 holiday season: Insights into Azure’s DDoS defense
The holiday season continues to be one of the most demanding periods for online businesses. Traffic surges, higher transaction volumes, and user expectations for seamless digital experiences all converge, making reliability a non-negotiable requirement. For attackers, this same period presents an opportunity: even brief instability can translate into lost revenue, operational disruption, and reputational impact.

This year, the most notable shift wasn’t simply the size of attacks, but how they were executed. We observed a rise in burst‑style DDoS events: fast-ramping, high-intensity surges distributed across multiple resources, designed to overwhelm packet processing and connection-handling layers before traditional bandwidth metrics show signs of strain. From November 15, 2025 through January 5, 2026, Azure DDoS Protection helped customers maintain continuity through sustained Layer 3 and Layer 4 attack traffic, underscoring two persistent realities:
- Most attacks remain short, automated, and frequently create constant background attack traffic.
- The upper limit of attacker capability continues to grow, with botnets across the industry regularly demonstrating multi‑Tbps scale.
The holiday season once again reinforced that DDoS resilience must be treated as a continuous operational discipline.

Rising volume and intensity
Between November 15 and January 5, Azure mitigated approximately 174,054 inbound DDoS attacks. While many were small and frequent, the distribution revealed the real shift:
- 16% exceeded 1M packets per second (pps).
- ~3% surpassed 10M pps, up significantly from 0.2% last year.
Even when individual events are modest, the cumulative impact of sustained attack traffic can be operationally draining—consuming on-call cycles, increasing autoscale and egress costs, and creating intermittent instability that can provide cover for more targeted activity.
Operational takeaway: Treat DDoS mitigation as an always-on requirement. Ensure protection is enabled across all internet-facing entry points, align alerting to packet rate trends, and maintain clear triage workflows.

What the TCP/UDP mix is telling us this season
TCP did what it usually does during peak season: it carried the fight. TCP floods made up ~72% of activity, and ACK floods dominated (58.7%), a reliable way to grind down packet processing and connection handling. UDP was ~24%, showing up as sharp, high-intensity bursts; amplification (like NTP) appeared, but it wasn’t the main play. Put together, it’s a familiar one-two punch: sustain TCP/ACK pressure to exhaust the edge, then spike UDP to jolt stability and steal attention. The goal isn’t just to saturate bandwidth; it’s to push services into intermittent instability, where things technically stay online but feel broken to users.
- TCP-heavy pressure: Make sure your edge and backends can absorb a surge in connections without falling over—check load balancer limits, connection/state capacity, and confirm health checks won’t start flapping during traffic spikes.
- UDP burst patterns: Rely on automated detection and mitigation—these bursts are often over before a human can respond.
- Reduce exposure: Inventory any internet-facing UDP services and shut down, restrict, or isolate anything you don’t truly need (a query sketch for building that inventory follows below).
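To support the inventory guidance above, a lightweight starting point is an Azure Resource Graph query over public IP resources. This is a sketch rather than an official query: the projected property paths follow the public IP ARM schema (ipConfiguration, ddosSettings), and results should be validated against your environment before acting on them.

// Run in Azure Resource Graph Explorer (or via `az graph query`)
// Lists public IPs, what they are attached to, and their DDoS protection mode,
// as a starting point for an inventory of internet-facing entry points.
resources
| where type == "microsoft.network/publicipaddresses"
| project name,
          resourceGroup,
          subscriptionId,
          ipAddress = tostring(properties.ipAddress),
          attachedTo = tostring(properties.ipConfiguration.id),
          ddosProtectionMode = tostring(properties.ddosSettings.protectionMode)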
Attack duration
Attackers continued to favor short-lived bursts designed to outrun manual response, but we also saw a notable shift in “who” felt the impact most. High-sensitivity workloads, especially gaming, experienced some of the highest packet-per-second and bandwidth-driven spikes, often concentrated into bursts lasting from a few minutes to several minutes. Even when these events were brief, the combination of high PPS + high bandwidth can be enough to trigger jitter, session drops, match instability, or rapid scaling churn. Overall, 34% of attacks lasted 5 minutes or less, and 83% ended within 40 minutes, reinforcing the same lesson: modern DDoS patterns are optimized for speed and disruption, not longevity. For latency- and session-sensitive services, “only a few minutes” can still be a full outage experience. Attack duration is an attacker advantage when defenses rely on humans to notice, diagnose, and react.
- Design for minute-long spikes: assume attacks will be short, sharp, and high PPS, so your protections should engage automatically.
- Watch the right signals: alert on PPS spikes and service health (disconnect rates, latency/jitter), not bandwidth alone.

Botnet-driven surges
Azure observed rapid rotation of botnet traffic associated with Aisuru and KimWolf targeting public-facing endpoints. The traffic was highly distributed across regions and networks. In several instances, when activity was mitigated in one region, similar traffic shifted to alternate regions or segments shortly afterward. “Relocation” behavior is the operational signature of automated botnet playbooks: probe → hit → shift → retry. If defenses vary by region or endpoint, attackers will find the weakest link quickly. Customers should standardize their protection posture and ensure consistent DDoS policies and thresholds across regions. Monitor by setting the right alerts and notifications; a sample alert configuration is sketched at the end of this article. The snapshot below captures the source-side distribution at that moment, showing which industry verticals were used to generate the botnet traffic during the observation window. The geography indicators below reflect where the traffic was observed egressing onto the internet, and do not imply attribution or intent by any provider or country.

Preparing for 2026
As organizations transition into 2026, the lessons from the 2025 holiday season, marked by persistent and evolving DDoS threats including the rise of DDoS-for-hire services and massive botnets, underscore the critical need for proactive, resilient cybersecurity. Azure's proven ability to automatically detect, mitigate, and withstand advanced attacks (such as record-breaking volumetric incidents) highlights the value of always-on protections to maintain business continuity and safeguard digital services during peak demand periods. Adopting a Zero Trust approach is essential in this landscape, as it operates on the principle of "never trust, always verify," assuming breaches are inevitable and requiring continuous validation of access and traffic, principles that complement DDoS defenses by limiting lateral movement and exposure even under attack. To achieve comprehensive protection, implement layered security: deploy Azure DDoS Protection for network-layer (Layers 3 and 4) volumetric mitigation with always-on monitoring, adaptive tuning, telemetry, and alerting; combine it with Azure Web Application Firewall (WAF) to defend the application layer (Layer 7) against sophisticated techniques like HTTP floods; and integrate Azure Firewall for additional network perimeter controls.
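Following up on the alerting guidance above, the sketch below shows one way to get notified when mitigation is active, using the Az PowerShell module to alert on a public IP’s “Under DDoS attack or not” metric. This is an illustrative example: the resource names are placeholders and the metric name (commonly surfaced as IfUnderDDoSAttack) should be confirmed in the Metrics blade of your public IP before relying on it.

# Sketch: raise an alert when a public IP reports active DDoS mitigation.
$pip         = Get-AzPublicIpAddress -Name "pip-frontend" -ResourceGroupName "rg-prod"
$actionGroup = Get-AzActionGroup -Name "ag-secops" -ResourceGroupName "rg-ops"

# Fire when the "Under DDoS attack or not" metric is greater than 0 within the window
$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "IfUnderDDoSAttack" `
    -TimeAggregation Maximum -Operator GreaterThan -Threshold 0

New-AzMetricAlertRuleV2 -Name "ddos-mitigation-active" -ResourceGroupName "rg-ops" `
    -TargetResourceId $pip.Id -Condition $criteria -ActionGroupId $actionGroup.Id `
    -WindowSize (New-TimeSpan -Minutes 5) -Frequency (New-TimeSpan -Minutes 1) -Severity 1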
Key preparatory steps include identifying public-facing exposure points, establishing normal traffic baselines, conducting regular DDoS simulations, configuring alerts for active mitigations, forming a dedicated response team, and enabling expert support like the DDoS Rapid Response (DRR) team when needed. By prioritizing these multi-layered defenses and a well-practiced response plan, organizations can significantly enhance resilience against the evolving DDoS landscape in 2026.

A Practical Guide to Azure DDoS Protection Cost Optimization
Introduction
Azure provides infrastructure-level DDoS protection by default to protect Azure’s own platform and services. However, this protection does not extend to customer workloads or non-Microsoft managed resources like Application Gateway, Azure Firewall, or virtual machines with public IPs. To protect these resources, Azure offers enhanced DDoS protection capabilities (Network Protection and IP Protection) that customers can apply based on workload exposure and business requirements. As environments scale, it’s important to ensure these capabilities are applied deliberately and aligned with actual risk. For more details on how Azure DDoS protection works, see Understanding Azure DDoS Protection: A Closer Look.

Why Cost Optimization Matters
Cost inefficiencies related to Azure DDoS Protection typically emerge as environments scale:
- New public IPs are introduced
- Virtual networks evolve
- Workloads change ownership
- Protection scope grows without clear alignment to workload exposure
The goal here is deliberate, consistent application of enhanced protection matched to real risk rather than historical defaults.

Scoping Enhanced Protection
Customer workloads with public IPs require enhanced DDoS protection to be protected against targeted attacks. Enhanced DDoS protection provides:
- Advanced mitigation capabilities
- Detailed telemetry and attack insights
- Mitigation tuned to specific traffic patterns
- Dedicated support for customer workloads

When to apply enhanced protection:

| Workload Type | Enhanced Protection Recommended? |
| --- | --- |
| Internet-facing production apps with direct customer impact | Yes |
| Business-critical systems with compliance requirements | Yes |
| Internal-only workloads behind private endpoints | Typically not needed |
| Development/test environments | Evaluate based on exposure |

Best Practice: Regularly review public IP exposure and workload criticality to ensure enhanced protection aligns with current needs.

Understanding Azure DDoS Protection SKUs
Azure offers two ways to apply enhanced DDoS protection: DDoS Network Protection and DDoS IP Protection. Both provide DDoS protection for customer workloads.

Comparison Table

| Feature | DDoS Network Protection | DDoS IP Protection |
| --- | --- | --- |
| Scope | Virtual network level | Individual public IP |
| Pricing model | Fixed base + overage per IP | Per protected IP |
| Included IPs | 100 public IPs | N/A |
| DDoS Rapid Response (DRR) | Included | Not available |
| Cost protection guarantee | Included | Not available |
| WAF discount | Included | Not available |
| Best for | Production environments with many public IPs | Selective protection for specific endpoints |
| Management | Centralized | Granular |
| Cost efficiency | Lower per-IP cost at scale (100+ IPs) | Lower total cost for few IPs (< 15) |

DDoS Network Protection
DDoS Network Protection can be applied in two ways:
- VNet-level protection: Associate a DDoS Protection Plan with virtual networks, and all public IPs within those VNets receive enhanced protection
- Selective IP linking: Link specific public IPs directly to a DDoS Protection Plan without enabling protection for the entire VNet
This flexibility allows you to protect entire production VNets while also selectively adding individual IPs from other environments to the same plan. For more details on selective IP linking, see Optimizing DDoS Protection Costs: Adding IPs to Existing DDoS Protection Plans.
Ideal for:
- Production environments with multiple internet-facing workloads
- Mixed environments where some VNets need full coverage and others need selective protection
- Scenarios requiring centralized visibility, management, and access to DRR, cost protection, and WAF discounts

DDoS IP Protection
DDoS IP Protection allows enhanced protection to be applied directly to individual public IPs, with per-IP billing. This is a standalone option that does not require a DDoS Protection Plan.
Ideal for:
- Environments with fewer than 15 IPs requiring protection
- Cases where DRR, cost protection, and WAF discounts are not needed
- Quick enablement without creating a protection plan

Decision Tree: Choosing the Right SKU
Now that you know the main scenarios, the decision tree below can help you determine which SKU best fits your environment based on feature requirements and scale.
Network Protection exclusive features:
- DDoS Rapid Response (DRR): Access to Microsoft DDoS experts during active attacks
- Cost protection: Resource credits for scale-out costs incurred during attacks
- WAF discount: Reduced pricing on Azure Web Application Firewall

Consolidating Protection Plans at Tenant Level
A single DDoS Protection Plan can protect multiple virtual networks and subscriptions within a tenant. Each plan includes:
- Fixed monthly base cost
- 100 public IPs included
- Overage charges for additional IPs beyond the included threshold

Cost Comparison Example
Consider a customer with 130 public IPs requiring enhanced protection:

| Configuration | Plans | Base Cost | Overage | Total Monthly Cost |
| --- | --- | --- | --- | --- |
| Two separate plans | 2 | $2,944 × 2 = $5,888 | $0 | ~$5,888 |
| Single consolidated plan | 1 | $2,944 | 30 IPs × $30 = $900 | ~$3,844 |

Savings: ~$2,044/month ($24,528/year) by consolidating to a single plan. In both cases, the same public IPs receive the same enhanced protection. The cost difference is driven entirely by plan architecture.

How to Consolidate Plans
Use the PowerShell script below to list existing DDoS Protection Plans and associate virtual networks with a consolidated plan. Run this script from Azure Cloud Shell or a local PowerShell session with the [Az module](https://learn.microsoft.com/powershell/azure/install-azure-powershell) installed. The account running the script must have the Network Contributor role (or equivalent) on the virtual networks being modified and Reader access to the DDoS Protection Plan.

# List all DDoS Protection Plans in your tenant
Get-AzDdosProtectionPlan | Select-Object Name, ResourceGroupName, Id

# Associate a virtual network with an existing DDoS Protection Plan
$ddosPlan = Get-AzDdosProtectionPlan -Name "ConsolidatedDDoSPlan" -ResourceGroupName "rg-security"
$vnet = Get-AzVirtualNetwork -Name "vnet-production" -ResourceGroupName "rg-workloads"
$vnet.DdosProtectionPlan = New-Object Microsoft.Azure.Commands.Network.Models.PSResourceId
$vnet.DdosProtectionPlan.Id = $ddosPlan.Id
$vnet.EnableDdosProtection = $true
Set-AzVirtualNetwork -VirtualNetwork $vnet
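As a complement to the plan-association script above, the standalone per-IP option (DDoS IP Protection) can be enabled directly on a public IP resource. The snippet below is an illustrative sketch with placeholder names; verify the parameter and property names against your installed Az module version, since older versions may differ.

# Enable DDoS IP Protection on an existing Standard SKU public IP (illustrative names)
$pip = Get-AzPublicIpAddress -Name "pip-api-endpoint" -ResourceGroupName "rg-workloads"
$pip.DdosSettings.ProtectionMode = "Enabled"   # per-IP protection, no plan required
Set-AzPublicIpAddress -PublicIpAddress $pip

# Or enable it at creation time (use this path if DdosSettings is not populated on the object)
New-AzPublicIpAddress -Name "pip-new-endpoint" -ResourceGroupName "rg-workloads" `
    -Location "eastus" -Sku Standard -AllocationMethod Static -DdosProtectionMode Enabled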
Preventing Protection Drift
Protection drift occurs when the resources covered by DDoS protection no longer align with the resources that actually need it. This mismatch can result in wasted spend (protecting resources that are no longer critical) or security gaps (missing protection on newly deployed resources). Common causes include:
- Applications are retired but protection remains
- Test environments persist longer than expected
- Ownership changes without updating protection configuration

Quarterly Review Checklist
- List all public IPs with enhanced protection enabled
- Verify each protected IP maps to an active, production workload
- Confirm workload criticality justifies enhanced protection
- Review ownership tags and update as needed
- Remove protection from decommissioned or non-critical resources
- Validate DDoS Protection Plan consolidation opportunities

Sample Query: List Protected Public IPs
Use the following PowerShell script to identify all public IPs currently receiving DDoS protection in your environment. This helps you audit which resources are protected and spot candidates for removal. Run this from Azure Cloud Shell or a local PowerShell session with the Az module installed. The account must have Reader access to the subscriptions being queried.

# List all public IPs with DDoS protection enabled
Get-AzPublicIpAddress |
    Where-Object {
        $_.DdosSettings.ProtectionMode -eq "Enabled" -or
        ($_.IpConfiguration -and
            (Get-AzVirtualNetwork | Where-Object { $_.EnableDdosProtection -eq $true }).Subnets.IpConfigurations.Id -contains $_.IpConfiguration.Id)
    } |
    Select-Object Name, ResourceGroupName, IpAddress, @{N='Tags';E={$_.Tag | ConvertTo-Json -Compress}}

For a comprehensive assessment of all public IPs and their DDoS protection status across your environment, use the DDoS Protection Assessment Tool.

Making Enhanced Protection Costs Observable
Ongoing visibility into DDoS Protection costs enables proactive optimization rather than reactive bill shock. When costs are surfaced early, you can spot scope creep before it impacts your budget, attribute spending to specific workloads, and measure whether your optimization efforts are paying off. The following sections cover three key capabilities: budget alerts to notify you when spending exceeds thresholds, Azure Resource Graph queries to analyze protection coverage, and tagging strategies to attribute costs by workload.

Setting Up Cost Alerts
- Navigate to Azure Cost Management + Billing
- Select Cost alerts > Add
- Configure:
  - Scope: Subscription or resource group
  - Budget amount: Based on expected DDoS Protection spend
  - Alert threshold: 80%, 100%, 120%
  - Action group: Email security and finance teams

Tagging Strategy for Cost Attribution
Apply consistent tags to track DDoS protection costs by workload:

# Tag public IPs for cost attribution
$pip = Get-AzPublicIpAddress -Name "pip-webapp" -ResourceGroupName "rg-production"
$tags = @{
    "CostCenter" = "IT-Security"
    "Workload" = "CustomerPortal"
    "Environment" = "Production"
    "DDoSProtectionTier" = "NetworkProtection"
}
Set-AzPublicIpAddress -PublicIpAddress $pip -Tag $tags

Summary
This guide covered how to consolidate DDoS Protection Plans to avoid paying multiple base costs, select the appropriate SKU based on IP count and feature needs, apply protection selectively with IP linking, and prevent configuration drift through regular reviews. These practices help ensure you're paying only for the protection your workloads actually need.

References
- Review Azure DDoS Protection pricing
- Enable DDoS Network Protection for a virtual network
- Configure DDoS IP Protection
- Configure Cost Management alerts

Zero Trust with Azure Firewall, Azure DDoS Protection and Azure WAF: A practical use case
Introduction Zero Trust has emerged as the defining security ethos of the modern enterprise. It is guided by a simple but powerful principle: “Never trust, always verify.” This principle is more relevant now than ever as cyberattacks continue to trend upward in both frequency and impact, affecting organizations of every size and industry. No entity large or small can assume immunity. As a result, adopting Zero Trust is no longer optional, it is a foundational requirement for designing secure, resilient architectures. A key tenet of Zero Trust is the assumption of breach, thus designing systems with the expectation that threats may already exist both outside and inside the network perimeter. To implement this principle, you need multiple, independent security controls that inspect traffic at different layers and enforce least privilege access continuously. Relying on a single security control, even a highly capable one, leaves gaps that modern attackers are adept at exploiting. It is within this context that combining the use of Azure Firewall, Azure DDoS Protection and Azure Web Application Firewall (WAF) services to secure Web Applications while protecting the network perimeter becomes important. Together, these services deliver comprehensive protection across the network and application layers. Defense-in-depth: Why Azure WAF, Azure DDoS Protection and Azure Firewall are essential for Zero Trust In these sections ahead, we examine the common network and application-layer attack vectors that target modern web applications and illustrate how Azure WAF, Azure DDoS protection, and Azure Firewall, when layered strategically, work in tandem to mitigate these threats. The architecture The test environment was designed to reflect a common Azure deployment pattern: Azure DDoS Protection at the edge, to defend against a comprehensive set of network layer (layer 3/4) attacks Azure Application Gateway with WAF, inspecting inbound HTTP traffic for application-layer threats Azure Firewall Premium behind the gateway, providing network-layer protection, deep packet inspection, and outbound traffic governance. A backend subnet hosting an intentionally vulnerable application (OWASP Juice Shop) to simulate real-world attack scenarios. Traffic flows through the DDoS first, then WAF, and then the firewall, before reaching the backend. Outbound traffic from the backend is routed through the firewall for inspection. This ensures that all inbound and outbound traffic is scrutinized. Two access paths that will be tested: Via the Application Gateway public IP, where traffic passes through DDoS, WAF and Firewall. Via the Firewall public IP using a DNAT rule, where traffic bypasses WAF and is inspected only by the Firewall. The following scenarios illustrate how this complementary protection strengthens overall resilience: Scenario 1: SQL injection (application-layer attack) Let’s say an attacker on the internet attempts to access the application’s login endpoint via the Application Gateway IP address and injects a SQL payload into the input field. For example, the attacker submits a request containing the following payload in the User ID field: ?id=' OR 1=1 -- Azure WAF will receive the request, analyze, and if Azure WAF is deployed in Prevention mode, it will immediately detect the SQL injection attempt using its built-in Managed Ruleset. Upon detection, Azure WAF will return a WAF block page, preventing the request from ever reaching the application. 
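For reference, and only against your own test deployment, a request along the following lines is what gets stopped at the WAF layer; the host name and endpoint path are placeholders standing in for the Juice Shop test application used in this setup.

# Hypothetical test request carrying the SQL injection payload (URL-encoded ' OR 1=1 --)
curl -i "https://<app-gateway-public-ip-or-fqdn>/login?id=%27%20OR%201%3D1%20--"
# Expected through Application Gateway + WAF in Prevention mode: HTTP 403 and the WAF block page
# Expected through the firewall-only DNAT path: the request reaches the application unfiltered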
By contrast, when the same application is accessed through a firewall-only path (for example, via a DNAT rule on Azure Firewall that exposes the application on port 443), Azure Firewall allows the traffic as it does not perform deep Application layer inspection and SQL injection payloads when embedded within the HTTP request body, appear legitimate at the network layer. Here is a snapshot of the attacker gaining access to the admin role when they insert this SQL injection attack without Azure WAF and only Azure Firewall in the path. Scenario 2: Volumetric and application-layer DDoS attacks Next, the attacker launches a volumetric network layer DDoS (SYN/UDP floods) to saturate bandwidth, but Azure DDoS Network Protection absorbs and scrubs the attack at the edge, so no traffic reaches Application Gateway, WAF, or Firewall. When the network layer attack fails, they shift to HTTP flood attack at the application layer, overwhelming the web application with a high volume of requests. Some requests include exploit attempts, while others are designed purely to exhaust application resources. Azure WAF here, can identify malicious patterns such as: Automated bots lacking proper headers Abnormal request rates Known exploit payloads embedded within requests Malicious IP addresses Note: Azure DDoS Protection is a comprehensive service that provides protection across network layers (Layer 3 and 4), while HTTP DDoS Protection specifically targets application-layer attacks (Layer 7) and is integrated with Azure WAF. They are complementary services designed to defend against different types of threats within the Azure environment. Additionally, if the botnet’s IPs are known threat actors or malicious traffic, Azure Firewall’s threat intelligence and IDPS will be able to flag this traffic too. Together, these services form a complementary, defense-in-depth strategy for protecting Azure workloads against distributed denial-of-service attacks. Scenario 3: Path Traversal Attempt/Information leak: (Application-Layer Attack) Next, the attacker sends HTTP requests to access sensitive system files such as /etc/passwd by sending crafted HTTP requests to the application via the Application Gateway public IP address. The request successfully passes through Azure Application Gateway WAF, as it does not trigger a managed rule violation in this case. However, when the request reaches Azure Firewall, the Firewall’s IDPS detects the malicious pattern in the HTTP header and blocks the connection before it can reach the backend workload. Because the backend connection is denied by Azure Firewall, Application Gateway is unable to establish a successful response and returns a 504 Gateway Timeout to the client, rather than a 403 Forbidden response that would typically be generated by WAF when it blocks traffic. Below is the log from Azure Firewall showing that its able to detect this traffic as – Attempted Information Leak. As seen below, the traffic passed Application Gateway+WAF but was caught by Azure Firewall: This scenario highlights an important architectural outcome: The combination of WAF and Azure Firewall provides layered enforcement, even if an attack manages to slip past Azure WAF, Azure Firewall adds an additional enforcement layer to ensure the application remains protected. Now, let’s look at some more Network Layer attacks: Scenario 4: Network reconnaissance and breach In this scenario, port 3389 is exposed on Application Gateway using the L4 TCP Proxy option. 
Now, the attacker attempts to scan the Application Gateway on all the ports/protocols and found that port 3389 was open along with other ports such as ports 80, 8080, 3000. Azure WAF will alert us for Layer 7/Application exploit but cannot verify/validate the attack on port 3389 since it was purely Layer 3/4 and contained no HTTP payload for WAF inspection. The L4 proxy listener on App Gateway simply forwards the raw TCP connections to the Azure Firewall behind it. Azure Firewall, however, performs full network‑layer inspection across all ports and protocols, allowing it to detect and alert on this type of L3/L4 reconnaissance even when App Gateway had the port open via the TCP proxy feature. As seen below the traffic passed Application Gateway+WAF but was caught by Azure Firewall since it is non-HTTP: The attacker then tries a different approach: Now the attacker somehow compromises a workstation inside our network and attempts to move laterally to the web server via RDP on port 3389 and/or attempts to exfiltrate and try to access something outside of the network. Azure Firewall located inside the VNet blocks the RDP attempt (if there is no rule allowing it) and if there is, its IDPS flags/blocks the traffic as suspicious. In this case, Azure WAF will not be involved but Azure Firewall inspects this internal and/or outbound traffic and blocks it. This illustrates how a combination of the two stops the attacker at multiple points: firewall foiled the reconnaissance and lateral movement/exfiltration, WAF foiled the application exploit. We can see below the outbound malicious attempt caught by Azure Firewall IDPS: In summary, Azure WAF is like the “bodyguard at the application’s front door” – inspecting every HTTP request in detail and ejecting those carrying hidden weapons or exhibiting bad behavior. It focuses on the web layer, which Azure Firewall or DDoS alone cannot fully protect. If we only had the WAF and no network firewall or DDoS, we’d be safe from many web attacks but would remain exposed to network-level threats (e.g., someone trying to RDP into a VM, or flooding a non-HTTP service). Conversely, if we had only the firewall, a crafty attacker could still exploit a vulnerability in our web app with a well-crafted HTTP request that looks “allowed” to the firewall – that’s where the WAF comes in to catch it. Azure Firewall on the other hand, acts as the “moat and drawbridge” to your cloud network: it keeps out the obvious bad guys at the gate, tightly limits what’s allowed in or out (no implicit trust for internal IPs), and uses threat intel + signatures to sniff out known threats in any traffic it passes, even outbound traffic. The table below shows the traffic flow that will be filtered by Azure WAF vs Azure Firewall. As you can see, layered security is fundamental to Zero Trust Conclusion In a Zero Trust architecture, security cannot rely on implicit trust or a single layer of defense. The combination of Azure Firewall Premium, Azure DDoS protection and Azure Application Gateway WAF exemplifies defense-in-depth by protecting both network and application layers. Organizations hosting internet-facing applications should adopt this layered strategy to reduce exposure to modern threats, prevent lateral movement, and maintain strict control over outbound traffic. By implementing these services together, you align with Microsoft’s recommended best practices for Zero Trust and significantly strengthen your cloud security posture. 
References:
Implement a Zero Trust network for web applications by using Azure Firewall and Azure Application Gateway
What is Azure Web Application Firewall?
Azure DDoS Protection Overview | Microsoft Learn
What is Azure Firewall?
Architecture designs using Azure WAF and Azure Firewall together
Zero Trust Assessment Overview | Microsoft Learn
How to find Azure Firewall SNAT exhaustion on a compute unit?
I have some applications on my AKS cluster which frequently hit an application server. This application server URL is behind Akamai. Whenever the applications want to establish an SSL connection with the external URL, it has to go through Azure Firewall. I get occasional connection timeouts if a connection is not established to the Akamai servers within 1 second. I saw that SNAT utilization is in the range of 10% to 30%. I know this graph is smoothed out across instances, so I can't see which Azure Firewall compute unit is getting overloaded when one URL behind Akamai is accessed frequently. Is there any way to find out the reason for these occasional connection timeouts with the external IPs (Akamai), through logs or by any other means? I believe the lack of per-compute-unit SNAT metrics from the firewall is responsible for such tough debugging.
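One possible starting point, sketched below under the assumption that resource-specific (AZFW) diagnostic logging is enabled to a Log Analytics workspace, is to look for bursts of outbound flows toward the Akamai ranges around the time of the timeouts; column names should be checked against the current AZFWNetworkRule schema. Note that the platform "SNAT port utilization" metric is only exposed per firewall (averaged across instances), not per compute unit.

// Outbound HTTPS flow volume per destination, bucketed per minute.
// Spikes toward the Akamai IPs that line up with the timeouts suggest
// connection churn or SNAT pressure on the path through the firewall.
AZFWNetworkRule
| where TimeGenerated > ago(1h)
| where DestinationPort == 443
| summarize Flows = count() by bin(TimeGenerated, 1m), DestinationIp
| order by TimeGenerated asc, Flows desc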
OPNsense Firewall as Network Virtual Appliance (NVA) in Azure
This blog is available as a video on YouTube: youtube.com/watch?v=JtnIFiB7jkE

Introduction to OPNsense
In today’s cloud-driven world, securing your infrastructure is more critical than ever. One powerful solution is OPNsense, a powerful open-source firewall that can be used to secure your virtual networks. Originally forked from pfSense, which itself evolved from m0n0wall, OPNsense is built on FreeBSD, and its web interface can be managed from any client OS, including Windows, macOS, and Linux. It provides a user-friendly web interface for configuration and management. What makes OPNsense Firewall stand out is its rich feature set:
- VPN support for point-to-site and site-to-site connections using technologies like WireGuard and OpenVPN.
- DNS management with options such as OpenDNS and Unbound DNS.
- Multi-network handling, enabling you to manage different LANs seamlessly.
- Advanced security features including intrusion detection and forward proxy integration.
- Plugin ecosystem supporting official and community extensions for third-party integrations.
In this guide, you’ll learn how to install and configure OPNsense Firewall on an Azure Virtual Machine, leveraging its capabilities to secure your cloud resources effectively. We'll have two demonstrations:
- Installing OPNsense on an Azure virtual machine
- Setting up point-to-site VPN using WireGuard
Here is the architecture we want to achieve in this blog, except the Hub and Spoke configuration, which is planned for the second part coming soon.

1. Installing OPNsense on an Azure Virtual Machine
There are three ways to have OPNsense in a virtual machine:
- Create a VM from scratch and install OPNsense.
- Install using the pre-packaged ISO image created by Deciso, the company that maintains OPNsense.
- Use a pre-built VM image from the Azure Marketplace.
In this demo, we will use the first approach to have more control over the installation and configuration. We will create an Azure VM with the FreeBSD OS and then install OPNsense using a shell script through the Custom Script Extension. All the required files are in this repository: github.com/HoussemDellai/azure-network-course/205_nva_opnsense. The shell script configureopnsense.sh will install OPNsense and apply a predefined configuration file config.xml to set up the firewall rules, VPN, and DNS settings. It takes 4 parameters:
- GitHub path where the script and config file are hosted, in our case /scripts/.
- OPNsense version to install, currently set to 25.7.
- Gateway IP address for the trusted subnet.
- Public IP address of the untrusted subnet.
This shell script is executed after the VM creation using the Custom Script Extension in Terraform, represented in the file vm_extension_install_opnsense.tf.

OPNsense is intended to be used as an NVA, so it is worth applying some good practices. One of these practices is to have two network interfaces (a Terraform sketch of this layout appears at the end of this section):
- Trusted Interface: Connected to the internal network (spokes).
- Untrusted Interface: Connected to the internet (WAN).
This setup allows OPNsense to effectively manage and secure traffic between the internal network and the internet. A second good practice is to start with a predefined configuration file config.xml that includes the basic settings for the firewall, VPN, and DNS. This approach saves time and ensures consistency across deployments. It is recommended to start with closed firewall rules and then open them as needed based on your security requirements, but for demo purposes, we will allow all traffic. A third good practice is to use multiple instances of OPNsense in a high-availability setup to ensure redundancy and failover capabilities. However, for simplicity, we will use a single instance in this demo.
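As a rough illustration of the dual-interface practice above, the trusted and untrusted NICs might be declared in Terraform along the lines below. The resource names and subnet references are placeholders, and the IP-forwarding argument is named enable_ip_forwarding on azurerm provider v3.x; this is a sketch rather than the exact code from the repository.

# Untrusted (WAN-facing) NIC; IP forwarding lets the VM route traffic it does not own
resource "azurerm_network_interface" "untrusted" {
  name                  = "nic-opnsense-untrusted"
  location              = azurerm_resource_group.rg.location
  resource_group_name   = azurerm_resource_group.rg.name
  ip_forwarding_enabled = true

  ip_configuration {
    name                          = "ipconfig-untrusted"
    subnet_id                     = azurerm_subnet.untrusted.id
    private_ip_address_allocation = "Dynamic"
  }
}

# Trusted (LAN-facing) NIC, also forwarding so spoke traffic can transit the NVA
resource "azurerm_network_interface" "trusted" {
  name                  = "nic-opnsense-trusted"
  location              = azurerm_resource_group.rg.location
  resource_group_name   = azurerm_resource_group.rg.name
  ip_forwarding_enabled = true

  ip_configuration {
    name                          = "ipconfig-trusted"
    subnet_id                     = azurerm_subnet.trusted.id
    private_ip_address_allocation = "Dynamic"
  }
}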
Let's take a look at the resources that will be created by Terraform using the AzureRM provider:
- Resource Group
- Virtual Network (VNET) named vnet-hub with two subnets:
  - Trusted Subnet: internal traffic between spokes.
  - Untrusted Subnet: exposes the firewall to the internet.
- Network Security Group (NSG): attached to the untrusted subnet, with rules allowing traffic to the VPN, the OPNsense website, and the internet.
- Virtual Machine with the following configuration:
  - FreeBSD OS image, version 14.1.
  - VM size: Standard_D4ads_v6 with an NVMe disk for better performance.
  - Admin credentials: feel free to change the username and password to more secure values.
  - Two NICs (trusted and untrusted) with IP forwarding enabled to allow traffic to pass through the firewall.
- NAT Gateway: attached to the untrusted subnet for outbound internet connectivity.

Apply Terraform configuration
To deploy the resources, run the following commands in your terminal from within the 205_nva_opnsense directory:
terraform init
terraform apply -auto-approve
Terraform provisions the infrastructure and outputs resource details. In the Azure portal you should see the newly created resources.

Accessing the OPNsense dashboard
To access the OPNsense dashboard:
- Get the VM’s public IP from the Azure portal or from the Terraform output.
- Paste it into your browser.
- Accept the TLS warning (TLS is not configured yet).
- Log in with username root and password opnsense (you can change it later in the dashboard).
You now have access to the OPNsense dashboard where you can:
- Monitor traffic and reports.
- Configure firewall rules for LAN, WAN, and VPN.
- Set up VPNs (WireGuard, OpenVPN, IPsec).
- Configure DNS services (OpenDNS, UnboundDNS).
Now that the OPNsense firewall is up and running, let's move to the next steps to explore some of its features like VPN.

2. Setting up Point-to-Site VPN using WireGuard
We’ll demonstrate how to establish a WireGuard VPN connection to the OPNsense firewall. The configuration file config.xml used during installation already includes the necessary settings for WireGuard VPN. For more details on how to set up WireGuard on OPNsense, refer to the official documentation. We will generate a WireGuard peer configuration using the OPNsense dashboard. Navigate to VPN > WireGuard > Peer generator, then add a name for the peer and fill in the IP address for OPNsense, which is the public IP of the VM in Azure; use the same IP if you want to use the pre-configured UnboundDNS. Then copy the generated configuration and click on Store and generate next and Apply. Next we'll use that configuration to set up WireGuard on a Windows client. Here you can either use your current machine as a client or create a new Windows VM in Azure. We'll go with this second option for better isolation. We'll deploy the client VM using the Terraform file vpn_client_vm_win11.tf. Make sure it is deployed using the command terraform apply -auto-approve. Once the VM is ready, connect to it using RDP, then download and install WireGuard. Alternatively, you can install WireGuard using the following Winget command:
winget install -e --id WireGuard.WireGuard --accept-package-agreements --accept-source-agreements
Launch the WireGuard application, click on Add Tunnel > Add empty tunnel..., then paste the peer configuration generated from OPNsense and save it.
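For orientation, the saved client tunnel generally has the shape shown below. The keys, addresses, and port here are placeholders (51820 is simply the WireGuard default); your actual values come from the OPNsense peer generator output.

[Interface]
PrivateKey = <client-private-key>
Address = 10.10.10.2/32
DNS = 10.0.1.4                  # OPNsense trusted IP if using the built-in Unbound DNS

[Peer]
PublicKey = <opnsense-public-key>
AllowedIPs = 10.0.0.0/16        # networks routed through the tunnel
Endpoint = <opnsense-vm-public-ip>:51820
PersistentKeepalive = 25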
Then click on Activate to start the VPN connection. We should see the data transfer starting. We'll verify the VPN connection by pinging the VM, checking that the outbound traffic passes through the NAT Gateway's IPs, and also checking the DNS resolution using UnboundDNS configured in OPNsense.

ping 10.0.1.4 # this is the trusted IP of OPNsense in Azure
# Pinging 10.0.1.4 with 32 bytes of data:
# Reply from 10.0.1.4: bytes=32 time=48ms TTL=64
# ...

curl ifconfig.me/ip # should display the public IP of the NAT Gateway in Azure
# 74.241.132.239

nslookup microsoft.com # should resolve using UnboundDNS configured in OPNsense
# Server: UnKnown
# Address: 135.225.126.162
# Non-authoritative answer:
# Name: microsoft.com
# Addresses: 2603:1030:b:3::152
# 13.107.246.53
# 13.107.213.53
# ...

The service endpoint ifconfig.me is used to get the public IP address of the client. You can use any other similar service.

What's next?
Now that you have the OPNsense firewall set up as an NVA in Azure and have successfully established a WireGuard VPN connection, we can explore additional features and configurations, such as integrating OPNsense into a Hub and Spoke network topology. That will be covered in the next part of this blog.

Special thanks to 'behind the scenes' contributors
I would like to thank my colleagues Stephan Dechoux, thanks to whom I discovered OPNsense, and Daniel Mauser, who provided a good lab for setting up OPNsense in Azure, available here: https://github.com/dmauser/opnazure.

Disclaimer
The sample scripts are not supported under any Microsoft standard support program or service. The sample scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.

Application layer DDoS protection using the HTTP DDoS Ruleset in Azure WAF
Today, Distributed Denial of Service (DDoS) attacks can strike as soon as public connectivity is enabled, highlighting their widespread prevalence. Factors such as easily accessible botnets, the explosion of IoT devices, and the growth of API-driven workloads, e-commerce platforms, and global web applications have made these attacks easier to launch and more impactful. Importantly, attackers are no longer focusing solely on the network layer, they increasingly target the application layer. Application-layer DDoS attacks often mimic normal user activity, making detection and mitigation far more challenging than traditional network-layer attacks. The most common types of Application layer/HTTP based DDOS attacks are outlined below. Common HTTP-based DDoS attacks: HTTP floods: Large volumes of valid looking GET or POST requests are sent to webpages or APIs, overwhelming application gateways and backend services without saturating network bandwidth. API abuse attacks: Attackers repeatedly call specific API endpoints, such as authentication, search, or checkout that trigger expensive backend operations, quickly exhausting compute and database resources. Slow HTTP attacks: Connections are deliberately kept open by sending data very slowly, consuming server threads and connection limits while generating relatively little traffic. TLS-intensive attacks: A high number of encrypted connections are initiated to increase CPU usage during TLS handshakes, impacting application gateways and load balancers. In order to defend against these sophisticated threats, organizations need application-aware protection that can identify abnormal behavior patterns rather than relying only on traffic volume. This is precisely the capability provided by the HTTP DDoS Ruleset for Azure Application Gateway WAF. What Is the HTTP DDoS Ruleset? The HTTP DDoS Ruleset is a built-in capability of Azure Application Gateway WAF designed to protect your applications from large-scale HTTP floods at the application layer. Unlike static rate-limiting or manual IP blocking, this ruleset uses adaptive learning to understand what “normal” traffic looks like for your gateway and then automatically mitigates anomalies. Key features Baseline learning: The ruleset observes traffic for about 24 hours to establish a normal request pattern per gateway. Dynamic detection: When incoming requests exceed the learned baseline, the ruleset identifies potential abuse (Client-specific or IP specific limits are applied only when the overall request volume to the gateway exceeds its learned baseline). Automated mitigation: Offending clients are blocked and are placed in a “penalty box” for the defined time (15 minutes). Sensitivity levels: Choose low, medium, or high to control aggressiveness. Medium is recommended for most workloads. Leverages Microsoft’s vast global network’s threat intelligence to establish a stricter baseline for suspected botnet traffic and when exceeded, blocks them and places those suspected bots into the penalty box. Threat intelligence plays a critical role here. By continuously aggregating data from global telemetry, threat intelligence systems can identify sources that are likely participating in coordinated attacks. When applied to HTTP DDoS protection, this intelligence allows suspected bot traffic to be treated differently from normal user traffic. Instead of relying only on static blocklists, botnet-aware defenses use reputation, behavior, and historical signals to apply throttling or penalties dynamically. 
This approach reduces the attack surface, limits the impact of distributed bot-driven floods, and avoids unnecessary disruption to legitimate users. Threat intelligence shifts DDoS defense from a purely reactive posture to a more informed, proactive one, making it far more effective against today’s botnet-driven application-layer attacks.

Enabling and validating the HTTP DDoS Ruleset
Getting started with the HTTP DDoS Ruleset on Application Gateway WAF is simple.
Enable the ruleset: In the Azure portal, open your WAF policy. Note: Currently the ruleset is available only in the preview portal: https://preview.portal.azure.com/ Under Managed Rules, click Assign, assign the HTTP DDoS Ruleset_1.0 (Preview), and save.
Each rule can be configured to either Log traffic for observation or Deny traffic for active mitigation. Sensitivity can be adjusted to High, Medium, or Low, allowing you to balance detection speed and accuracy. Higher sensitivity enforces lower thresholds and detects anomalies sooner, while lower sensitivity raises thresholds to reduce false positives. Medium sensitivity is the default and recommended setting for most workloads.
Once enabled, the ruleset is evaluated early in the WAF pipeline, before custom rules are processed. This ensures that HTTP DDoS protection cannot be bypassed. The ruleset works alongside the Default Rule Set (DRS) and any custom rules for comprehensive security.
After the policy is applied to an Application Gateway, the ruleset enters a learning phase that lasts at least 24 hours. During this time, it observes traffic patterns to establish normal baselines for the gateway. No detection or blocking occurs during this period, allowing the ruleset to understand typical application behavior before enforcement begins.

Metrics
Once the learning phase completes, traffic surges that exceed the learned baseline are reflected in the Application Gateway metrics. These metrics provide immediate visibility into when the HTTP DDoS ruleset is actively detecting and mitigating abnormal behavior.
Metric – WAF Penalty Box Size: This metric shows how many IP addresses are currently inside the penalty box, meaning that the WAF has detected them exceeding the learned HTTP DDoS baseline and is temporarily blocking them. A spike here indicates that multiple clients crossed their thresholds at the same time, often during an attack or load-test scenario.
Metric – WAF Penalty Box Hits: This metric represents how many IPs entered the penalty box. Every time a client breaches its threshold, the ruleset logs a hit and places that IP into the penalty box for approximately 15 minutes. Multiple hits often correlate with repeated spikes or sustained abusive traffic patterns.

Logs
For deeper analysis, enabling diagnostic settings allows you to inspect HTTP DDoS Ruleset events directly in the logs. These logs provide granular details about which IPs were flagged, why they were flagged, and how far they exceeded expected thresholds.
Example of DetailedData from a log: RemoteAddress: 4.x.x.x (Public IP) crossed threshold. Expected: 4400.000000 request per 900 seconds, Actual: 8407.000000 requests per 900 seconds.
KQL queries to retrieve these logs:

Resource-specific logs:
AGWFirewallLogs
| where RuleSetType == "Microsoft_HTTPDDoSRuleSet"

Diagnostic logs:
AzureDiagnostics
| where Category == "ApplicationGatewayFirewallLog"
| where ruleSetType_s == "Microsoft_HTTPDDoSRuleSet"

Note: Identify IPs repeatedly flagged and confirm they’re malicious, not legitimate clients.

Conclusion:
The threat landscape continues to evolve, and defenses must evolve with it. Leveraging the HTTP DDoS Ruleset in Azure Application Gateway WAF helps ensure protections keep pace with modern application-layer attacks. With built-in visibility through metrics and logs, teams can better understand traffic behavior and operate their WAF with greater confidence.

Next Steps:
Access the HTTP DDoS ruleset for Application Gateway via the preview portal: https://preview.portal.azure.com/
HTTP DDoS Ruleset (Preview) - Application Gateway WAF | Microsoft Learn
Azure Web Application Firewall (WAF) policy overview | Microsoft Learn

How Azure network security can help you meet NIS2 compliance
With the adoption of the NIS2 Directive EU 2022 2555, cybersecurity obligations for both public and private sector organizations have become more strict and far reaching. NIS2 aims to establish a higher common level of cybersecurity across the European Union by enforcing stronger requirements on risk management, incident reporting, supply chain protection, and governance. If your organization runs on Microsoft Azure, you already have powerful services to support your NIS2 journey. In particular Azure network security products such as Azure Firewall, Azure Web Application Firewall WAF, and Azure DDoS Protection provide foundational controls. The key is to configure and operate them in a way that aligns with the directive’s expectations. Important note This article is a technical guide based on the NIS2 Directive EU 2022 2555 and Microsoft product documentation. It is not legal advice. For formal interpretations, consult your legal or regulatory experts. What is NIS2? NIS2 replaces the original NIS Directive 2016 and entered into force on 16 January 2023. Member states must transpose it into national law by 17 October 2024. Its goals are to: Expand the scope of covered entities essential and important entities Harmonize cybersecurity standards across member states Introduce stricter supervisory and enforcement measures Strengthen supply chain security and reporting obligations Key provisions include: Article 20 management responsibility and governance Article 21 cybersecurity risk management measures Article 23 incident notification obligations These articles require organizations to implement technical, operational, and organizational measures to manage risks, respond to incidents, and ensure leadership accountability. Where Azure network security fits The table below maps common NIS2 focus areas to Azure network security capabilities and how they support compliance outcomes. NIS2 focus area Azure services and capabilities How this supports compliance Incident handling and detection Azure Firewall Premium IDPS and TLS inspection, Threat Intelligence mode, Azure WAF managed rule sets and custom rules, Azure DDoS Protection, Azure Bastion diagnostic logs Detect, block, and log threats across layers three to seven. Provide telemetry for triage and enable response workflows that are auditable. Business continuity and resilience Azure Firewall availability zones and autoscale, Azure Front Door or Application Gateway WAF with zone redundant deployments, Azure Monitor with Log Analytics, Traffic Manager or Front Door for failover Improve service availability and provide data for resilience reviews and disaster recovery scenarios. Access control and segmentation Azure Firewall policy with DNAT, network, and application rules, NSGs and ASGs, Azure Bastion for browser based RDP SSH without public IPs, Private Link Enforce segmentation and isolation of critical assets. Support Zero Trust and least privilege for inbound and egress. Vulnerability and misconfiguration defense Azure WAF Microsoft managed rule set based on OWASP CRS. Azure Firewall Premium IDPS signatures Reduce exposure to common web exploits and misconfigurations for public facing apps and APIs. Encryption and secure communications TLS policy: Application Gateway SSL policy; Front Door TLS policy; App Service/PaaS minimum TLS. Inspection: Azure Firewall Premium TLS inspection Inspect and enforce encrypted communication policies and block traffic that violates TLS requirements. Inspect decrypted traffic for threats. 
Incident reporting and evidence Azure Network Security diagnostics, Log Analytics, Microsoft Sentinel incidents, workbooks, and playbooks Capture and retain telemetry. Correlate events, create incident timelines, and export reports to meet regulator timelines. NIS2 articles in practice Article 21 cybersecurity risk management measures Azure network controls contribute to several required measures: Prevention and detection. Azure Firewall blocks unauthorized access and inspects traffic with IDPS. Azure DDoS Protection mitigates volumetric and protocol attacks. Azure WAF prevents common web exploits based on OWASP guidance. Logging and monitoring. Azure Firewall, WAF, DDoS, and Bastion resources produce detailed resource logs and metrics in Azure Monitor. Ingest these into Microsoft Sentinel for correlation, analytics rules, and automation. Control of encrypted communications. Azure Firewall Premium provides TLS inspection to reveal malicious payloads inside encrypted sessions. Supply chain and service provider management. Use Azure Policy and Defender for Cloud to continuously assess configuration and require approved network security baselines across subscriptions and landing zones. Article 23 incident notification Build an evidence friendly workflow with Sentinel: Early warning within twenty four hours. Use Sentinel analytics rules on Firewall, WAF, DDoS, and Bastion logs to generate incidents and trigger playbooks that assemble an initial advisory. Incident notification within seventy two hours. Enrich the incident with additional context such as mitigation actions from DDoS, Firewall and WAF. Final report within one month. Produce a summary that includes root cause, impact, and corrective actions. Use Workbooks to export charts and tables that back up your narrative. Article 20 governance and accountability Management accountability. Track policy compliance with Azure Policy initiatives for Firewall, DDoS and WAF. Use exemptions rarely and record justification. Centralized visibility. Defender for Cloud’s network security posture views and recommendations give executives and owners a quick view of exposure and misconfigurations. Change control and drift prevention. Manage Firewall, WAF, and DDoS through Network Security Hub and Infrastructure as Code with Bicep or Terraform. Require pull requests and approvals to enforce four eyes on changes. Network security baseline Use this blueprint as a starting point. Adapt to your landing zone architecture and regulator guidance. Topology and control plane Hub and spoke architecture with a centralized Azure Firewall Premium in the hub. Enable availability zones. Deploy Azure Bastion Premium in the hub or a dedicated management VNet; peer to spokes. Remove public IPs from management NICs and disable public RDP SSH on VMs. Use Network Security Hub for at-scale management. Require Infrastructure as Code for all network security resources. Web application protection Protect public apps with Azure Front Door Premium WAF where edge inspection is required. Use Application Gateway WAF v2 for regional scenarios. Enable the Microsoft managed rule set and the latest version. Add custom rules for geo based allow or deny and bot management. enable rate limiting when appropriate. DDoS strategy Enable DDoS Network Protection on virtual networks that contain internet facing resources. Use IP Protection for single public IP scenarios. Configure DDoS diagnostics and alerts. Stream to Sentinel. Define runbooks for escalation and service team engagement. 
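To make this telemetry step concrete, a protected public IP's DDoS log categories can be routed to Log Analytics with the Az PowerShell module. The sketch below uses placeholder resource names, and the category names (DDoSProtectionNotifications, DDoSMitigationFlowLogs, DDoSMitigationReports) should be confirmed against the diagnostic categories listed for your public IP resource.

# Send DDoS diagnostic categories for a public IP to a Log Analytics workspace (illustrative)
$pip = Get-AzPublicIpAddress -Name "pip-frontend" -ResourceGroupName "rg-prod"
$workspace = Get-AzOperationalInsightsWorkspace -ResourceGroupName "rg-ops" -Name "law-security"

$logs = @(
    New-AzDiagnosticSettingLogSettingsObject -Category "DDoSProtectionNotifications" -Enabled $true
    New-AzDiagnosticSettingLogSettingsObject -Category "DDoSMitigationFlowLogs" -Enabled $true
    New-AzDiagnosticSettingLogSettingsObject -Category "DDoSMitigationReports" -Enabled $true
)

New-AzDiagnosticSetting -Name "ddos-to-law" -ResourceId $pip.Id `
    -WorkspaceId $workspace.ResourceId -Log $logs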
Firewall policy Enable IDPS in alert and then in alert and deny for high confidence signatures. Enable TLS inspection for outbound and inbound where supported. Enforce FQDN and URL filtering for egress. Require explicit allow lists for critical segments. Deny inbound RDP SSH from the internet. Allow management traffic only from Bastion subnets or approved management jump segments. Logging, retention, and access Turn on diagnostic settings for Firewall, WAF, DDoS, and Application Gateway or Front Door. Send to Log Analytics and an archive storage account for long term retention. Set retention per national law and internal policy. Azure Monitor Log Analytics supports table-level retention and archive for up to 12 years, many teams keep a shorter interactive window and multi-year archive for audits. Restrict access with Azure RBAC and Customer Managed Keys where applicable. Automation and playbooks Build Sentinel playbooks for regulator notifications, ticket creation, and evidence collection. Maintain dry run versions for exercises. Add analytics for Bastion session starts to sensitive VMs, excessive failed connection attempts, and out of hours access. Conclusion Azure network security services provide the technical controls most organizations need in order to align with NIS2. When combined with policy enforcement, centralized logging, and automated detection and response, they create a defensible and auditable posture. Focus on layered protection, secure connectivity, and real time response so that you can reduce exposure to evolving threats, accelerate incident response, and meet NIS2 obligations with confidence. References NIS2 primary source Directive (EU) 2022/2555 (NIS2). https://eur-lex.europa.eu/eli/dir/2022/2555/oj/eng Azure Firewall Premium features (TLS inspection, IDPS, URL filtering). https://learn.microsoft.com/en-us/azure/firewall/premium-features Deploy & configure Azure Firewall Premium. https://learn.microsoft.com/en-us/azure/firewall/premium-deploy IDPS signature categories reference. https://learn.microsoft.com/en-us/azure/firewall/idps-signature-categories Monitoring & diagnostic logs reference. https://learn.microsoft.com/en-us/azure/firewall/monitor-firewall-reference Web Application Firewall WAF on Azure Front Door overview & features. https://learn.microsoft.com/en-us/azure/frontdoor/web-application-firewall WAF on Application Gateway overview. https://learn.microsoft.com/en-us/azure/web-application-firewall/overview Examine WAF logs with Log Analytics. https://learn.microsoft.com/en-us/azure/application-gateway/log-analytics Rate limiting with Front Door WAF. https://learn.microsoft.com/en-us/azure/web-application-firewall/afds/waf-front-door-rate-limit Azure DDoS Protection Service overview & SKUs (Network Protection, IP Protection). https://learn.microsoft.com/en-us/azure/ddos-protection/ddos-protection-overview Quickstart: Enable DDoS IP Protection. https://learn.microsoft.com/en-us/azure/ddos-protection/manage-ddos-ip-protection-portal View DDoS diagnostic logs (Notifications, Mitigation Reports/Flows). https://learn.microsoft.com/en-us/azure/ddos-protection/ddos-view-diagnostic-logs Azure Bastion Azure Bastion overview and SKUs. https://learn.microsoft.com/en-us/azure/bastion/bastion-overview Deploy and configure Azure Bastion. https://learn.microsoft.com/en-us/azure/bastion/tutorial-create-host-portal Disable public RDP and SSH on Azure VMs. https://learn.microsoft.com/en-us/azure/virtual-machines/security-baseline Azure Bastion diagnostic logs and metrics. 
https://learn.microsoft.com/en-us/azure/bastion/bastion-diagnostic-logs
Microsoft Sentinel
Sentinel documentation (onboard, analytics, automation). https://learn.microsoft.com/en-us/azure/sentinel/
Azure Firewall solution for Microsoft Sentinel. https://learn.microsoft.com/en-us/azure/firewall/firewall-sentinel-overview
Use Microsoft Sentinel with Azure WAF. https://learn.microsoft.com/en-us/azure/web-application-firewall/waf-sentinel
Architecture & routing
Hub‑spoke network topology (reference). https://learn.microsoft.com/en-us/azure/architecture/networking/architecture/hub-spoke
Azure Firewall Manager & secured virtual hub. https://learn.microsoft.com/en-us/azure/firewall-manager/secured-virtual-hub

DNS flow trace logs in Azure Firewall are now generally available
Background
Azure Firewall helps secure your network by filtering traffic and enforcing policies for your workloads and applications. DNS Proxy, a key capability in Azure Firewall, enables the firewall to act as a DNS forwarder for DNS traffic. Today, we’re introducing the general availability of DNS flow trace logs — a new logging capability that provides end-to-end visibility into DNS traffic and name resolution across your environment, with critical metadata including query types, response codes, queried domains, upstream DNS servers, and the source and destination IPs of each request.

Why DNS flow trace logs?
Existing Azure Firewall DNS Proxy logs provide visibility for DNS queries as they initially pass through Azure Firewall. While helpful, customers have asked for deeper insights to troubleshoot, audit, and analyze DNS behavior more comprehensively. DNS flow trace logs address this by offering richer, end-to-end logging, including DNS query paths, cache usage, forwarding decisions, and resolution outcomes. With these logs, you can:
- Troubleshoot faster with detailed query and response information throughout the full resolution flow
- Validate caching behavior by determining whether Azure Firewall’s DNS cache was used
- Gain deeper insights into query types, response codes, forwarding logic, and errors

Example scenarios
- Custom DNS configurations – Verify traffic forwarding paths and ensure custom DNS servers are functioning and responding as expected
- Connectivity issues – Debug DNS resolution issues that prevent apps from connecting to critical services.

Getting started in Azure Portal
- Navigate to your Azure Firewall resource in the Azure Portal.
- Select Diagnostic settings under Monitoring.
- Choose an existing diagnostic setting or create a new one.
- Under Log, select DNS flow trace logs.
- Stream logs to Log Analytics, Storage, or Event Hub as needed.
- Save the settings.
Azure Firewall logging ✨

Next steps
DNS flow trace logs give you greater visibility and control over DNS traffic in Azure Firewall, helping you secure, troubleshoot, and optimize your network with confidence.
🚀 Try DNS flow trace logs today, now generally available – and share your feedback with the team.
Learn more about how to configure and monitor these logs in the Azure Firewall monitoring data reference documentation.
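If you prefer to script the portal steps above, the same diagnostic setting can be created with the Az PowerShell module. This is a hedged sketch: the resource names are placeholders, and it enables the allLogs category group rather than naming the DNS flow trace category explicitly, so adjust it if you only want specific categories.

# Enable all Azure Firewall log categories (including DNS flow trace) to Log Analytics
$fw = Get-AzFirewall -Name "azfw-hub" -ResourceGroupName "rg-network"
$workspace = Get-AzOperationalInsightsWorkspace -ResourceGroupName "rg-ops" -Name "law-security"

$logs = New-AzDiagnosticSettingLogSettingsObject -CategoryGroup "allLogs" -Enabled $true

New-AzDiagnosticSetting -Name "fw-logs-to-law" -ResourceId $fw.Id `
    -WorkspaceId $workspace.ResourceId -Log $logs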