Azure network security
OPNsense Firewall as Network Virtual Appliance (NVA) in Azure
This blog is available as a video on YouTube: youtube.com/watch?v=JtnIFiB7jkE

Introduction to OPNsense

In today's cloud-driven world, securing your infrastructure is more critical than ever. One powerful solution is OPNsense, a powerful open-source firewall that can be used to secure your virtual networks. It was originally forked from pfSense, which itself evolved from m0n0wall, and it is built on FreeBSD. It provides a user-friendly web interface for configuration and management.

What makes OPNsense Firewall stand out is its rich feature set:
- VPN support for point-to-site and site-to-site connections using technologies like WireGuard and OpenVPN.
- DNS management with options such as OpenDNS and Unbound DNS.
- Multi-network handling, enabling you to manage different LANs seamlessly.
- Advanced security features, including intrusion detection and forward proxy integration.
- Plugin ecosystem supporting official and community extensions for third-party integrations.

In this guide, you'll learn how to install and configure OPNsense Firewall on an Azure Virtual Machine, leveraging its capabilities to secure your cloud resources effectively. We'll have two demonstrations:
- Installing OPNsense on an Azure virtual machine
- Setting up point-to-site VPN using WireGuard

Here is the architecture we want to achieve in this blog, except the Hub and Spoke configuration, which is planned for the second part coming soon.

1. Installing OPNsense on an Azure Virtual Machine

There are three ways to run OPNsense in a virtual machine:
- Create a VM from scratch and install OPNsense.
- Install using the pre-packaged ISO image created by Deciso, the company that maintains OPNsense.
- Use a pre-built VM image from the Azure Marketplace.

In this demo, we will use the first approach to have more control over the installation and configuration. We will create an Azure VM with a FreeBSD OS and then install OPNsense using a shell script through the Custom Script Extension. All the required files are in this repository: github.com/HoussemDellai/azure-network-course/205_nva_opnsense.

The shell script configureopnsense.sh will install OPNsense and apply a predefined configuration file config.xml to set up the firewall rules, VPN, and DNS settings. It takes four parameters:
- GitHub path where the script and config file are hosted; in our case it is /scripts/.
- OPNsense version to install, currently set to 25.7.
- Gateway IP address for the trusted subnet.
- Public IP address of the untrusted subnet.

This shell script is executed after the VM creation using the Custom Script Extension in Terraform, represented in the file vm_extension_install_opnsense.tf.

OPNsense is intended to be used as an NVA, so it is worth applying some good practices. One of these practices is to have two network interfaces:
- Trusted interface: connected to the internal network (spokes).
- Untrusted interface: connected to the internet (WAN).

This setup allows OPNsense to effectively manage and secure traffic between the internal network and the internet.

The second good practice is to start with a predefined configuration file config.xml that includes the basic settings for the firewall, VPN, and DNS. This approach saves time and ensures consistency across deployments. It is recommended to start with closed firewall rules and then open them as needed based on your security requirements, but for demo purposes we will allow all traffic.
The third good practice is to use multiple instances of OPNsense in a high-availability setup to ensure redundancy and failover capabilities. However, for simplicity, we will use a single instance in this demo.

Let's take a look at the resources that will be created by Terraform using the AzureRM provider:
- Resource Group.
- Virtual Network (VNET) named vnet-hub with two subnets:
  - Trusted subnet: internal traffic between spokes.
  - Untrusted subnet: exposes the firewall to the internet.
- Network Security Group (NSG): attached to the untrusted subnet, with rules allowing traffic to the VPN, the OPNsense website, and the internet.
- Virtual Machine with the following configuration:
  - FreeBSD OS image, version 14.1.
  - VM size Standard_D4ads_v6 with NVMe disk for better performance.
  - Admin credentials: feel free to change the username and password for better security.
  - Two NICs (trusted and untrusted) with IP forwarding enabled to allow traffic to pass through the firewall.
- NAT Gateway: attached to the untrusted subnet for outbound internet connectivity.

Apply Terraform configuration

To deploy the resources, run the following commands in your terminal from within the 205_nva_opnsense directory:

terraform init
terraform apply -auto-approve

Terraform provisions the infrastructure and outputs resource details. In the Azure portal you should see the newly created resources.

Accessing the OPNsense dashboard

To access the OPNsense dashboard:
- Get the VM's public IP from the Azure portal or from the Terraform output.
- Paste it into your browser.
- Accept the TLS warning (TLS is not configured yet).
- Log in with username root and password opnsense (you can change it later in the dashboard).

You now have access to the OPNsense dashboard, where you can:
- Monitor traffic and reports.
- Configure firewall rules for LAN, WAN, and VPN.
- Set up VPNs (WireGuard, OpenVPN, IPsec).
- Configure DNS services (OpenDNS, Unbound DNS).

Now that the OPNsense firewall is up and running, let's move to the next steps to explore some of its features, like VPN.

2. Setting up Point-to-Site VPN using WireGuard

We'll demonstrate how to establish a WireGuard VPN connection to the OPNsense firewall. The configuration file config.xml used during installation already includes the necessary settings for WireGuard VPN. For more details on how to set up WireGuard on OPNsense, refer to the official documentation.

We will generate a WireGuard peer configuration using the OPNsense dashboard. Navigate to VPN > WireGuard > Peer generator, add a name for the peer, and fill in the IP address for OPNsense, which is the public IP of the VM in Azure; use the same IP if you want to use the pre-configured Unbound DNS. Then copy the generated configuration and click Store and generate next, then Apply.

Next we'll use that configuration to set up WireGuard on a Windows client. You can either use your current machine as a client or create a new Windows VM in Azure. We'll go with the second option for better isolation. We'll deploy the client VM using the Terraform file vpn_client_vm_win11.tf. Make sure it is deployed by running terraform apply -auto-approve. Once the VM is ready, connect to it using RDP, then download and install WireGuard. Alternatively, you can install WireGuard using the following Winget command:

winget install -e --id WireGuard.WireGuard --accept-package-agreements --accept-source-agreements

Launch the WireGuard application, click Add Tunnel > Add empty tunnel..., then paste the peer configuration generated from OPNsense and save it.
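For reference, a WireGuard client tunnel configuration generally looks like the sketch below. The keys, tunnel address, listen port, and allowed ranges here are placeholders rather than the values produced by the OPNsense peer generator; your generated configuration will contain your own key pair, the tunnel address you assigned, the firewall's public IP as the endpoint, and, if you opted into it, the OPNsense trusted IP (10.0.1.4 in this lab) as the DNS server.

```ini
[Interface]
# Private key generated for this client peer (placeholder)
PrivateKey = <client-private-key>
# Tunnel address assigned to the client in the WireGuard network (placeholder)
Address = 10.10.10.2/32
# Optional: point DNS at the OPNsense trusted IP so Unbound DNS handles resolution
DNS = 10.0.1.4

[Peer]
# Public key of the WireGuard instance on OPNsense (placeholder)
PublicKey = <opnsense-public-key>
# Public IP of the Azure VM running OPNsense; 51820 is the default WireGuard port (assumption)
Endpoint = <firewall-public-ip>:51820
# Ranges routed through the tunnel - adjust to your Azure VNET address space
AllowedIPs = 10.0.0.0/16
PersistentKeepalive = 25
```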
Then click Activate to start the VPN connection. You should see the data transfer starting.

We'll verify the VPN connection by pinging the VM, checking that outbound traffic passes through the NAT Gateway's IP, and checking DNS resolution using the Unbound DNS configured in OPNsense.

```
ping 10.0.1.4    # this is the trusted IP of OPNsense in Azure
# Pinging 10.0.1.4 with 32 bytes of data:
# Reply from 10.0.1.4: bytes=32 time=48ms TTL=64
# ...

curl ifconfig.me/ip    # should display the public IP of the NAT Gateway in Azure
# 74.241.132.239

nslookup microsoft.com    # should resolve using the Unbound DNS configured in OPNsense
# Server:  UnKnown
# Address:  135.225.126.162
# Non-authoritative answer:
# Name:    microsoft.com
# Addresses:  2603:1030:b:3::152
#             13.107.246.53
#             13.107.213.53
# ...
```

The service endpoint ifconfig.me is used to get the public IP address of the client. You can use any other similar service.

What's next?

Now that you have the OPNsense firewall set up as an NVA in Azure and have successfully established a WireGuard VPN connection, we can explore additional features and configurations, such as integrating OPNsense into a Hub and Spoke network topology. That will be covered in the next part of this blog.

Special thanks to 'behind the scenes' contributors

I would like to thank my colleagues Stephan Dechoux, thanks to whom I discovered OPNsense, and Daniel Mauser, who provided a good lab for setting up OPNsense in Azure, available here: https://github.com/dmauser/opnazure.

Disclaimer

The sample scripts are not supported under any Microsoft standard support program or service. The sample scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.

Application layer DDoS protection using the HTTP DDoS Ruleset in Azure WAF
Today, Distributed Denial of Service (DDoS) attacks can strike as soon as public connectivity is enabled, highlighting their widespread prevalence. Factors such as easily accessible botnets, the explosion of IoT devices, and the growth of API-driven workloads, e-commerce platforms, and global web applications have made these attacks easier to launch and more impactful. Importantly, attackers are no longer focusing solely on the network layer; they increasingly target the application layer. Application-layer DDoS attacks often mimic normal user activity, making detection and mitigation far more challenging than traditional network-layer attacks. The most common types of application-layer, HTTP-based DDoS attacks are outlined below.

Common HTTP-based DDoS attacks:
- HTTP floods: Large volumes of valid-looking GET or POST requests are sent to webpages or APIs, overwhelming application gateways and backend services without saturating network bandwidth.
- API abuse attacks: Attackers repeatedly call specific API endpoints, such as authentication, search, or checkout, that trigger expensive backend operations, quickly exhausting compute and database resources.
- Slow HTTP attacks: Connections are deliberately kept open by sending data very slowly, consuming server threads and connection limits while generating relatively little traffic.
- TLS-intensive attacks: A high number of encrypted connections are initiated to increase CPU usage during TLS handshakes, impacting application gateways and load balancers.

To defend against these sophisticated threats, organizations need application-aware protection that can identify abnormal behavior patterns rather than relying only on traffic volume. This is precisely the capability provided by the HTTP DDoS Ruleset for Azure Application Gateway WAF.

What Is the HTTP DDoS Ruleset?

The HTTP DDoS Ruleset is a built-in capability of Azure Application Gateway WAF designed to protect your applications from large-scale HTTP floods at the application layer. Unlike static rate limiting or manual IP blocking, this ruleset uses adaptive learning to understand what "normal" traffic looks like for your gateway and then automatically mitigates anomalies.

Key features:
- Baseline learning: The ruleset observes traffic for about 24 hours to establish a normal request pattern per gateway.
- Dynamic detection: When incoming requests exceed the learned baseline, the ruleset identifies potential abuse (client-specific or IP-specific limits are applied only when the overall request volume to the gateway exceeds its learned baseline).
- Automated mitigation: Offending clients are blocked and placed in a "penalty box" for a defined time (15 minutes).
- Sensitivity levels: Choose low, medium, or high to control aggressiveness. Medium is recommended for most workloads.
- Threat intelligence: Leverages Microsoft's vast global network's threat intelligence to establish a stricter baseline for suspected botnet traffic; when that baseline is exceeded, the suspected bots are blocked and placed in the penalty box.

Threat intelligence plays a critical role here. By continuously aggregating data from global telemetry, threat intelligence systems can identify sources that are likely participating in coordinated attacks. When applied to HTTP DDoS protection, this intelligence allows suspected bot traffic to be treated differently from normal user traffic. Instead of relying only on static blocklists, botnet-aware defenses use reputation, behavior, and historical signals to apply throttling or penalties dynamically.
This approach reduces the attack surface, limits the impact of distributed bot-driven floods, and avoids unnecessary disruption to legitimate users. Threat intelligence shifts DDoS defense from a purely reactive posture to a more informed, proactive one, making it far more effective against today's botnet-driven application-layer attacks.

Enabling and validating the HTTP DDoS Ruleset

Getting started with the HTTP DDoS Ruleset on Application Gateway WAF is simple.

Enable the ruleset:
- In the Azure portal, open your WAF policy. Note: currently the ruleset is available only in the preview portal: https://preview.portal.azure.com/
- Under Managed Rules, click Assign, assign the HTTP DDoS Ruleset_1.0 (Preview), and save.

Each rule can be configured to either Log traffic for observation or Deny traffic for active mitigation. Sensitivity can be adjusted to High, Medium, or Low, allowing you to balance detection speed and accuracy. Higher sensitivity enforces lower thresholds and detects anomalies sooner, while lower sensitivity raises thresholds to reduce false positives. Medium sensitivity is the default and recommended setting for most workloads.

Once enabled, the ruleset is evaluated early in the WAF pipeline, before custom rules are processed. This ensures that HTTP-based DDoS protection cannot be bypassed by rules evaluated later in the pipeline. The ruleset works alongside the Default Rule Set (DRS) and any custom rules for comprehensive security.

After the policy is applied to an Application Gateway, the ruleset enters a learning phase that lasts at least 24 hours. During this time, it observes traffic patterns to establish normal baselines for the gateway. No detection or blocking occurs during this period, allowing the ruleset to understand typical application behavior before enforcement begins.

Metrics

Once the learning phase completes, traffic surges that exceed the learned baseline are reflected in the Application Gateway metrics. These metrics provide immediate visibility into when the HTTP DDoS ruleset is actively detecting and mitigating abnormal behavior.

Metric – WAF Penalty Box Size: This metric shows how many IP addresses are currently inside the penalty box, meaning the WAF has detected them exceeding the learned HTTP DDoS baseline and is temporarily blocking them. A spike here indicates that multiple clients crossed their thresholds at the same time, often during an attack or load-test scenario.

Metric – WAF Penalty Box Hits: This metric represents how many IPs entered the penalty box. Every time a client breaches its threshold, the ruleset logs a hit and places that IP into the penalty box for approximately 15 minutes. Multiple hits often correlate with repeated spikes or sustained abusive traffic patterns.

Logs

For deeper analysis, enabling diagnostic settings allows you to inspect HTTP DDoS Ruleset events directly in the logs. These logs provide granular details about which IPs were flagged, why they were flagged, and how far they exceeded expected thresholds.

Example of DetailedData from a log:

RemoteAddress: 4.x.x.x (Public IP) crossed threshold. Expected: 4400.000000 requests per 900 seconds, Actual: 8407.000000 requests per 900 seconds.
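Once these events are flowing into a Log Analytics workspace, it can be useful to aggregate them per client to act on the guidance about repeatedly flagged IPs. The sketch below builds on the resource-specific table used in the base queries listed in the next subsection; the ClientIp column name, the 15-minute binning, and the "more than one hit" threshold are assumptions about the AGWFirewallLogs schema and your triage workflow, so adjust them to the columns and volumes you actually see.

```kusto
// Surface client IPs repeatedly flagged by the HTTP DDoS ruleset in the last 24 hours,
// bucketed into 15-minute windows (roughly the penalty box duration).
AGWFirewallLogs
| where TimeGenerated > ago(24h)
| where RuleSetType == "Microsoft_HTTPDDoSRuleSet"
| summarize Hits = count() by ClientIp, bin(TimeGenerated, 15m)
| where Hits > 1
| order by Hits desc
```

Cross-check any IP that shows up repeatedly against known corporate egress addresses or partner ranges before treating it as malicious.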
KQL queries to retrieve these logs:

Resource-specific logs:

```kusto
AGWFirewallLogs
| where RuleSetType == "Microsoft_HTTPDDoSRuleSet"
```

Diagnostic logs:

```kusto
AzureDiagnostics
| where Category == "ApplicationGatewayFirewallLog"
| where ruleSetType_s == "Microsoft_HTTPDDoSRuleSet"
```

Note: Identify IPs that are repeatedly flagged and confirm they are malicious, not legitimate clients.

Conclusion

The threat landscape continues to evolve, and defenses must evolve with it. Leveraging the HTTP DDoS Ruleset in Azure Application Gateway WAF helps ensure protections keep pace with modern application-layer attacks. With built-in visibility through metrics and logs, teams can better understand traffic behavior and operate their WAF with greater confidence.

Next Steps:
- Access the HTTP DDoS ruleset for Application Gateway via the preview portal: https://preview.portal.azure.com/
- HTTP DDoS Ruleset (Preview) - Application Gateway WAF | Microsoft Learn
- Azure Web Application Firewall (WAF) policy overview | Microsoft Learn

How Azure network security can help you meet NIS2 compliance
With the adoption of the NIS2 Directive (EU) 2022/2555, cybersecurity obligations for both public and private sector organizations have become stricter and more far-reaching. NIS2 aims to establish a higher common level of cybersecurity across the European Union by enforcing stronger requirements on risk management, incident reporting, supply chain protection, and governance. If your organization runs on Microsoft Azure, you already have powerful services to support your NIS2 journey. In particular, Azure network security products such as Azure Firewall, Azure Web Application Firewall (WAF), and Azure DDoS Protection provide foundational controls. The key is to configure and operate them in a way that aligns with the directive's expectations.

Important note: This article is a technical guide based on the NIS2 Directive (EU) 2022/2555 and Microsoft product documentation. It is not legal advice. For formal interpretations, consult your legal or regulatory experts.

What is NIS2?

NIS2 replaces the original NIS Directive (2016) and entered into force on 16 January 2023. Member states must transpose it into national law by 17 October 2024. Its goals are to:
- Expand the scope of covered entities (essential and important entities)
- Harmonize cybersecurity standards across member states
- Introduce stricter supervisory and enforcement measures
- Strengthen supply chain security and reporting obligations

Key provisions include:
- Article 20: management responsibility and governance
- Article 21: cybersecurity risk management measures
- Article 23: incident notification obligations

These articles require organizations to implement technical, operational, and organizational measures to manage risks, respond to incidents, and ensure leadership accountability.

Where Azure network security fits

The table below maps common NIS2 focus areas to Azure network security capabilities and how they support compliance outcomes.

Incident handling and detection
- Azure services and capabilities: Azure Firewall Premium IDPS and TLS inspection, Threat Intelligence mode, Azure WAF managed rule sets and custom rules, Azure DDoS Protection, Azure Bastion diagnostic logs
- How this supports compliance: Detect, block, and log threats across layers three to seven. Provide telemetry for triage and enable response workflows that are auditable.

Business continuity and resilience
- Azure services and capabilities: Azure Firewall availability zones and autoscale, Azure Front Door or Application Gateway WAF with zone-redundant deployments, Azure Monitor with Log Analytics, Traffic Manager or Front Door for failover
- How this supports compliance: Improve service availability and provide data for resilience reviews and disaster recovery scenarios.

Access control and segmentation
- Azure services and capabilities: Azure Firewall policy with DNAT, network, and application rules, NSGs and ASGs, Azure Bastion for browser-based RDP/SSH without public IPs, Private Link
- How this supports compliance: Enforce segmentation and isolation of critical assets. Support Zero Trust and least privilege for inbound and egress traffic.

Vulnerability and misconfiguration defense
- Azure services and capabilities: Azure WAF Microsoft managed rule set based on OWASP CRS, Azure Firewall Premium IDPS signatures
- How this supports compliance: Reduce exposure to common web exploits and misconfigurations for public-facing apps and APIs.

Encryption and secure communications
- Azure services and capabilities: TLS policy (Application Gateway SSL policy, Front Door TLS policy, App Service/PaaS minimum TLS); inspection via Azure Firewall Premium TLS inspection
- How this supports compliance: Inspect and enforce encrypted communication policies and block traffic that violates TLS requirements. Inspect decrypted traffic for threats.
Incident reporting and evidence
- Azure services and capabilities: Azure network security diagnostics, Log Analytics, Microsoft Sentinel incidents, workbooks, and playbooks
- How this supports compliance: Capture and retain telemetry. Correlate events, create incident timelines, and export reports to meet regulator timelines.

NIS2 articles in practice

Article 21: cybersecurity risk management measures

Azure network controls contribute to several required measures:
- Prevention and detection. Azure Firewall blocks unauthorized access and inspects traffic with IDPS. Azure DDoS Protection mitigates volumetric and protocol attacks. Azure WAF prevents common web exploits based on OWASP guidance.
- Logging and monitoring. Azure Firewall, WAF, DDoS, and Bastion resources produce detailed resource logs and metrics in Azure Monitor. Ingest these into Microsoft Sentinel for correlation, analytics rules, and automation.
- Control of encrypted communications. Azure Firewall Premium provides TLS inspection to reveal malicious payloads inside encrypted sessions.
- Supply chain and service provider management. Use Azure Policy and Defender for Cloud to continuously assess configuration and require approved network security baselines across subscriptions and landing zones.

Article 23: incident notification

Build an evidence-friendly workflow with Sentinel:
- Early warning within twenty-four hours. Use Sentinel analytics rules on Firewall, WAF, DDoS, and Bastion logs to generate incidents and trigger playbooks that assemble an initial advisory.
- Incident notification within seventy-two hours. Enrich the incident with additional context, such as mitigation actions from DDoS, Firewall, and WAF.
- Final report within one month. Produce a summary that includes root cause, impact, and corrective actions. Use workbooks to export charts and tables that back up your narrative.

Article 20: governance and accountability
- Management accountability. Track policy compliance with Azure Policy initiatives for Firewall, DDoS, and WAF. Use exemptions rarely and record the justification.
- Centralized visibility. Defender for Cloud's network security posture views and recommendations give executives and owners a quick view of exposure and misconfigurations.
- Change control and drift prevention. Manage Firewall, WAF, and DDoS through Network Security Hub and Infrastructure as Code with Bicep or Terraform. Require pull requests and approvals to enforce four-eyes review on changes.

Network security baseline

Use this blueprint as a starting point. Adapt it to your landing zone architecture and regulator guidance.

Topology and control plane
- Hub and spoke architecture with a centralized Azure Firewall Premium in the hub. Enable availability zones.
- Deploy Azure Bastion Premium in the hub or a dedicated management VNet; peer it to the spokes. Remove public IPs from management NICs and disable public RDP/SSH on VMs.
- Use Network Security Hub for at-scale management. Require Infrastructure as Code for all network security resources.

Web application protection
- Protect public apps with Azure Front Door Premium WAF where edge inspection is required. Use Application Gateway WAF v2 for regional scenarios.
- Enable the Microsoft managed rule set and the latest version. Add custom rules for geo-based allow or deny and bot management. Enable rate limiting when appropriate.

DDoS strategy
- Enable DDoS Network Protection on virtual networks that contain internet-facing resources. Use IP Protection for single public IP scenarios (a CLI sketch follows this list).
- Configure DDoS diagnostics and alerts. Stream them to Sentinel. Define runbooks for escalation and service team engagement.
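As a concrete starting point for the DDoS strategy items above, the following Azure CLI sketch creates a DDoS Network Protection plan and associates it with a hub virtual network. The resource group, plan, VNet names, and region are placeholders rather than values from this article; adapt them to your landing zone naming and check the current az network ddos-protection reference for your CLI version.

```bash
# Create a DDoS Network Protection plan (placeholder names and region)
az network ddos-protection create \
  --resource-group rg-hub-network \
  --name ddos-plan-hub \
  --location westeurope

# Enable DDoS protection on the hub VNet and associate it with the plan
az network vnet update \
  --resource-group rg-hub-network \
  --name vnet-hub \
  --ddos-protection true \
  --ddos-protection-plan ddos-plan-hub
```

The same association can also be expressed in Bicep or Terraform, which fits the Infrastructure as Code requirement called out in the topology guidance.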
Firewall policy
- Enable IDPS in alert mode, then move to alert and deny for high-confidence signatures. Enable TLS inspection for outbound and inbound traffic where supported.
- Enforce FQDN and URL filtering for egress. Require explicit allow lists for critical segments.
- Deny inbound RDP/SSH from the internet. Allow management traffic only from Bastion subnets or approved management jump segments.

Logging, retention, and access
- Turn on diagnostic settings for Firewall, WAF, DDoS, and Application Gateway or Front Door. Send them to Log Analytics and an archive storage account for long-term retention.
- Set retention per national law and internal policy. Azure Monitor Log Analytics supports table-level retention and archive for up to 12 years; many teams keep a shorter interactive window and a multi-year archive for audits.
- Restrict access with Azure RBAC and customer-managed keys where applicable.

Automation and playbooks
- Build Sentinel playbooks for regulator notifications, ticket creation, and evidence collection. Maintain dry-run versions for exercises.
- Add analytics for Bastion session starts to sensitive VMs, excessive failed connection attempts, and out-of-hours access.

Conclusion

Azure network security services provide the technical controls most organizations need in order to align with NIS2. When combined with policy enforcement, centralized logging, and automated detection and response, they create a defensible and auditable posture. Focus on layered protection, secure connectivity, and real-time response so that you can reduce exposure to evolving threats, accelerate incident response, and meet NIS2 obligations with confidence.

References

NIS2 primary source
- Directive (EU) 2022/2555 (NIS2). https://eur-lex.europa.eu/eli/dir/2022/2555/oj/eng
- Azure Firewall Premium features (TLS inspection, IDPS, URL filtering). https://learn.microsoft.com/en-us/azure/firewall/premium-features
- Deploy and configure Azure Firewall Premium. https://learn.microsoft.com/en-us/azure/firewall/premium-deploy
- IDPS signature categories reference. https://learn.microsoft.com/en-us/azure/firewall/idps-signature-categories
- Monitoring and diagnostic logs reference. https://learn.microsoft.com/en-us/azure/firewall/monitor-firewall-reference
- Web Application Firewall (WAF) on Azure Front Door overview and features. https://learn.microsoft.com/en-us/azure/frontdoor/web-application-firewall
- WAF on Application Gateway overview. https://learn.microsoft.com/en-us/azure/web-application-firewall/overview
- Examine WAF logs with Log Analytics. https://learn.microsoft.com/en-us/azure/application-gateway/log-analytics
- Rate limiting with Front Door WAF. https://learn.microsoft.com/en-us/azure/web-application-firewall/afds/waf-front-door-rate-limit
- Azure DDoS Protection service overview and SKUs (Network Protection, IP Protection). https://learn.microsoft.com/en-us/azure/ddos-protection/ddos-protection-overview
- Quickstart: Enable DDoS IP Protection. https://learn.microsoft.com/en-us/azure/ddos-protection/manage-ddos-ip-protection-portal
- View DDoS diagnostic logs (Notifications, Mitigation Reports/Flows). https://learn.microsoft.com/en-us/azure/ddos-protection/ddos-view-diagnostic-logs

Azure Bastion
- Azure Bastion overview and SKUs. https://learn.microsoft.com/en-us/azure/bastion/bastion-overview
- Deploy and configure Azure Bastion. https://learn.microsoft.com/en-us/azure/bastion/tutorial-create-host-portal
- Disable public RDP and SSH on Azure VMs. https://learn.microsoft.com/en-us/azure/virtual-machines/security-baseline
- Azure Bastion diagnostic logs and metrics.
  https://learn.microsoft.com/en-us/azure/bastion/bastion-diagnostic-logs

Microsoft Sentinel
- Sentinel documentation (onboard, analytics, automation). https://learn.microsoft.com/en-us/azure/sentinel/
- Azure Firewall solution for Microsoft Sentinel. https://learn.microsoft.com/en-us/azure/firewall/firewall-sentinel-overview
- Use Microsoft Sentinel with Azure WAF. https://learn.microsoft.com/en-us/azure/web-application-firewall/waf-sentinel

Architecture and routing
- Hub-spoke network topology (reference). https://learn.microsoft.com/en-us/azure/architecture/networking/architecture/hub-spoke
- Azure Firewall Manager and secured virtual hub. https://learn.microsoft.com/en-us/azure/firewall-manager/secured-virtual-hub

DNS flow trace logs in Azure Firewall are now generally available
Background

Azure Firewall helps secure your network by filtering traffic and enforcing policies for your workloads and applications. DNS Proxy, a key capability in Azure Firewall, enables the firewall to act as a DNS forwarder for DNS traffic. Today, we're introducing the general availability of DNS flow trace logs — a new logging capability that provides end-to-end visibility into DNS traffic and name resolution across your environment, including critical metadata such as query types, response codes, queried domains, upstream DNS servers, and the source and destination IPs of each request.

Why DNS flow trace logs?

Existing Azure Firewall DNS Proxy logs provide visibility for DNS queries as they initially pass through Azure Firewall. While helpful, customers have asked for deeper insights to troubleshoot, audit, and analyze DNS behavior more comprehensively. DNS flow trace logs address this by offering richer, end-to-end logging, including DNS query paths, cache usage, forwarding decisions, and resolution outcomes.

With these logs, you can:
- Troubleshoot faster with detailed query and response information throughout the full resolution flow
- Validate caching behavior by determining whether Azure Firewall's DNS cache was used
- Gain deeper insights into query types, response codes, forwarding logic, and errors

Example scenarios:
- Custom DNS configurations – verify traffic forwarding paths and ensure custom DNS servers are functioning and responding as expected
- Connectivity issues – debug DNS resolution issues that prevent apps from connecting to critical services

Getting started in the Azure Portal
1. Navigate to your Azure Firewall resource in the Azure Portal.
2. Select Diagnostic settings under Monitoring.
3. Choose an existing diagnostic setting or create a new one.
4. Under Log, select DNS flow trace logs.
5. Stream logs to Log Analytics, Storage, or Event Hub as needed.
6. Save the settings.

✨ Next steps

DNS flow trace logs give you greater visibility and control over DNS traffic in Azure Firewall, helping you secure, troubleshoot, and optimize your network with confidence.

🚀 Try DNS flow trace logs today, now generally available, and share your feedback with the team. Learn more about how to configure and monitor these logs in the Azure Firewall monitoring data reference documentation.

Using Packet Capture for troubleshooting Azure Firewall flows
This blog is written in collaboration with @GustavoModena

Introduction

Azure Firewall is a cloud-native and intelligent network firewall security service that provides best-of-breed threat protection for your cloud workloads running in Azure. It is a fully stateful firewall as a service with built-in high availability and unrestricted cloud scalability. Azure Firewall provides both east-west and north-south traffic inspection, and it is offered in three SKUs: Basic, Standard, and Premium.

Azure Firewall also brings powerful logs and metrics to monitor your traffic and operations within the firewall. These logs and metrics include Traffic Analysis, Performance and Health Metrics, and Audit Trail. However, there are situations where you may need a comprehensive network packet capture to troubleshoot and investigate an incident reported by users.

We are happy to announce that Microsoft has released the new Packet Capture feature, which is now generally available for Azure Firewall. The Packet Capture feature in Azure Firewall is intended for troubleshooting purposes and allows customers and engineers to debug connectivity issues by tracing packets passing through their Azure Firewall. Azure Firewall Packet Capture shows two packets per flow, one for the incoming direction and one for the outgoing direction, so you can accurately correlate requests and responses during troubleshooting.

What is a network packet capture?

Network packet capture is a process that involves capturing network packets as they traverse a network interface. It is a valuable tool for network troubleshooting, analysis, and security monitoring. A network packet capture involves intercepting Internet Protocol (IP) packets for analysis and then saving the captured packets to output files, typically with the .pcap file extension. Network engineers often use packet capturing for troubleshooting and monitoring network traffic to identify security threats. In the event of a data breach or other incident, packet captures offer essential forensic evidence that supports investigations. From a malicious actor's viewpoint, packet captures can be used to steal passwords and other sensitive data. Unlike active reconnaissance techniques like port scanning, packet capturing can be conducted covertly, leaving no trace for investigators.

How Does a Packet Capture Work?

Packet captures can be performed using networking equipment like routers, firewalls, or switches, or even an engineer's laptop or desktop. Regardless of the method, packet capture involves creating copies of some or all packets passing through a particular point in the network. Capturing packets from a specific device on the network is the simplest way to start troubleshooting, but there are a few caveats. By default, network interfaces only monitor traffic destined for them. For a more comprehensive view of network traffic, you'll need to set the interface to promiscuous mode or monitor mode. Many routers, firewalls, and other network devices have embedded packet capture functions that can be used to quickly troubleshoot directly from the device's admin console. This capability is now available in Azure Firewall.

Scenario (VNET to VNET)

In this blog we have VM-1 (10.10.0.4) unsuccessfully trying to establish an HTTP (TCP 80)/HTTPS (TCP 443) connection to VM-2 (10.10.0.132) via Azure Firewall.
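The walkthrough that follows generates the required SAS URL through the portal. If you prefer to script that prerequisite, a hedged Azure CLI sketch is shown below; the storage account and container names are placeholders, the expiry should be a short troubleshooting window, and the command assumes you are allowed to use the account key (otherwise generate a user delegation SAS instead).

```bash
# Generate a write-only SAS token for the container that will store the captures
# (placeholder storage account and container names)
az storage container generate-sas \
  --account-name stfwcaptures \
  --name captures \
  --permissions w \
  --expiry 2025-12-31T23:59Z \
  --https-only \
  --output tsv

# Build the SAS URL by appending the returned token to the container URL, e.g.
# https://stfwcaptures.blob.core.windows.net/captures?<sas-token>
```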
Using Azure Firewall Packet Capture to investigate the connection issue

In this section, we will use Azure Firewall Packet Capture to understand why an HTTP/HTTPS connection between VM-1 and VM-2 is not working properly. For this demonstration, we are not going to review the rules and Azure Firewall logs, as the purpose of the blog is to demonstrate the new Packet Capture feature, and we are assuming that the Azure Firewall is configured correctly.

Let's start by making sure that we have all the required resources to take packet captures from Azure Firewall:
- Azure Firewall with the Management NIC enabled
- Storage account with a container in which you can store the packet captures

Once you have all the required resources available, follow the next steps to run a packet capture via Azure Firewall:

1. Create a SAS URL for the container in the storage account: In the Azure Portal, go to Storage Account > Containers, select the ellipsis (...) at the far right of the container you want to use to store the packet captures, and select "Generate SAS". When defining the parameters of the SAS, select "Write" under Permissions so that Azure Firewall will be able to save the packet captures. Then click "Generate SAS token and URL".
2. Go to Azure Firewall > Packet Capture (under Help) to start running the packet capture.
3. On the Packet Capture page, provide the following information:
   - Packet capture name: the name of one or more capture files.
   - Output SAS URL: the SAS URL of the storage container you created previously.
4. Next, complete the Basic settings for the packet capture:
   - Maximum number of packets: you should limit the packet capture to a set number of packets.
   - Time limit (seconds): since the packet capture is intended for troubleshooting purposes, you should limit the capture time.
   - Protocols: the protocols you want the capture to save (values: Any, TCP, UDP, ICMP).
   - TCP Flags: if TCP or Any is selected, you can select which types of packets to save (values: FIN, SYN, RST, PSH, ACK, URG).
   If both the Maximum number of packets and the Time limit are set, the capture ends when the earliest condition is met: either when the maximum number of packets is received or when the time limit is reached.
5. In the Filtering section, you can add the source, destination, and destination ports to include in the capture. You must add at least one filter. The packet capture saves bidirectional traffic that matches each row in the filter section. For the source and destination fields you can list multiple comma-separated values in a single filter, including IP addresses and IP blocks.
6. Select Run Packet Capture after you are done with your configuration.
7. Once the packet capture is complete, navigate to the container used in the storage account and download the pcap files. Note that you will see multiple pcap files; this is because each backend virtual machine instance of the firewall writes its own file.

Analyzing the Packet Captures

When using Azure Firewall Packet Capture, you will always see two packets for every single packet in the flow. This is because the firewall captures both the incoming and outgoing directions of the traffic. Understanding this behavior is critical for accurate troubleshooting, as it ensures you can correlate the original request with its corresponding response. The additional scenarios below explain how to match these incoming and outgoing flows effectively. To analyze the pcap files you need a network protocol analyzer tool.
In this blog we are using Wireshark. Note: the intent of this blog is not to show how to use Wireshark nor to perform advanced troubleshooting with it.

With the pcap files downloaded to your computer, open them to start your investigation. Since we have multiple files due to the number of active Azure Firewall instances at the time of the packet capture, it may be easier to merge the files. To merge the pcap files, first open one of them using Wireshark, then go to File > Merge and select the second file. There are different ways to merge them, but here we are using "Merge packets chronologically".

Once the pcap files are merged, start your investigation by using filters. In this scenario, we want to investigate why an HTTP request from VM-1 to VM-2 on TCP port 80 is not working, and we are using the following filter:

Wireshark filter: tcp.port==80 && tcp.port==50245 && ip.addr==10.10.0.132 (VM-2's IP address)

Here we can see that VM-1 (10.10.0.4) sends a SYN packet from port 50245 to VM-2 (10.10.0.132) on port 80, and then VM-2 sends a reset back to VM-1. This behavior shows us that the traffic is successfully passing through Azure Firewall (allowed), and the issue may be something on VM-2. After involving the application team, they found an issue related to the IIS configuration; it is now fixed, and we can see the TCP connection being established on ports 80 and 443 in the screenshot below.

Other Scenarios

DNAT (Inbound traffic)

In this scenario we are connecting from a client via the Internet to the Azure Firewall's public IP, using DNAT rules on port 8443. You can see in the screenshot below the incoming request (TCP 3-way handshake) and all the hops until it gets to the web server. L3 (and the source IP) differs from the incoming packet since it is SNATed at L3, while L4 remains the same.

For taking the packet capture in this scenario, we are using the following filters:
- Source: 71.28.90.56,52.176.62.243,10.10.0.64/26,10.10.0.128/26
- Destination: 71.28.90.56,52.176.62.243,10.10.0.64/26,10.10.0.128/26
- Destination ports: 8443,443

The IPs, IP ranges, and ports used as filters are:
- Client public IP: 71.28.90.56
- Azure Firewall public IP: 52.176.62.243
- Azure Firewall instance private IP: 10.10.0.69 (included in the IP range 10.10.0.64/26)
- Web server private IP: 10.10.0.132 (included in the IP range 10.10.0.128/26)
- Azure Firewall listening port: 8443
- Web server listening (translated) port: 443

In DNAT scenarios, you will notice two SYN packets for the same flow. SYN 1 represents the incoming packet with its original 5-tuple (source IP, destination IP, source port, destination port, protocol), while SYN 2 corresponds to the same flow but with a different 5-tuple after translation by Azure Firewall. This behavior contrasts with VNET-to-VNET flows, where the 5-tuple remains unchanged. When you are SNATing, connecting to or from the Internet, or processing application rules, you need to make sure that both the public IP address and the subnet address space are included in the filters to see both incoming and outgoing packets.

Internet Access (Outbound traffic)

In this scenario, we are connecting from an Azure VM to a public IP via Azure Firewall using network rules. The screenshot illustrates the TCP three-way handshake followed by the HTTP GET request. Notice two SYN packets: one originating from the client to the destination and another from the Azure Firewall instance IP to the destination.
In the first two lines, packets flow from the Azure VM IP to the external public IP, followed by the SNATed packet from the Azure Firewall instance IP to the same external address.

For this packet capture, the following filters were applied:
- Source: 10.10.0.132, 10.10.0.0/26
- Destination: 151.101.195.5
- Destination ports: 80,443

The IPs, IP ranges, and ports used as filters are:
- Azure VM: 10.10.0.132
- Azure Firewall subnet: 10.10.0.0/26 (10.10.0.5 is the instance IP)
- External public IP: 151.101.195.5
- External public IP port: 80

Application Rule Traffic

In this scenario, we are connecting from an Azure VM to a public IP via Azure Firewall using application rules. While the original request originates from the VM with source IP 10.0.2.4, the Layer 4 details differ from the incoming packet because, during application rule evaluation, the firewall establishes a new outbound connection acting as a proxy. As shown in the image, the SNAT IP of the Azure Firewall instance (10.0.0.5) initiates the connection to the public IP 140.82.112.4. HTTP or TLS keys can be used to match incoming and outgoing packets. L7 remains the same.

For the packet capture in this scenario, the following filters are applied:
- Source: 10.0.2.4, 10.0.0.0/24
- Destination: 140.82.112.4
- Destination ports: 80,443

The IPs, IP ranges, and ports used as filters are:
- Azure VM: 10.0.2.4
- Azure Firewall subnet: 10.0.0.0/24 (10.0.0.5 is the instance SNAT IP)
- External public IP: 140.82.112.4
- External public IP ports: 80,443

VNET to VNET with SNAT

In this scenario, the client VM 10.1.0.4 initiates the connection to the server VM 10.2.0.4, but we have enabled SNAT for this traffic. So the firewall's private IP 172.16.0.5 (SNAT) initiates a connection with the destination web server, as we can see in the image below.

For the packet capture in this scenario, the following filters are applied:
- Source: 10.1.0.4, 172.16.0.0/24
- Destination: 10.2.0.4
- Destination ports: 80,443

The IPs, IP ranges, and ports used as filters are:
- Azure VM: 10.1.0.4
- Azure Firewall subnet: 172.16.0.0/24 (172.16.0.5 is the instance SNAT IP)
- Web server private IP: 10.2.0.4
- Web server port: 80

Conclusion

The availability of Azure Firewall Packet Capture is crucial for effective network and security troubleshooting. It allows network administrators and security professionals to monitor, analyze, and diagnose network traffic in real time, providing invaluable insights into potential issues and vulnerabilities. By capturing and examining data packets, they can identify anomalies, detect malicious activities, and ensure the integrity and performance of the network. This proactive approach not only enhances the overall security posture but also minimizes downtime and improves the reliability of network services, making packet capture an indispensable tool in the modern IT landscape.

Public Preview: Custom WAF Block Status & Body for Azure Application Gateway
Introduction

Azure Application Gateway Web Application Firewall (WAF) now supports custom HTTP status codes and custom response bodies for blocked requests. This Public Preview feature gives you more control over user experience and client-side handling, aligning with capabilities already available on Azure Front Door WAF.

Why this matters

Previously, WAF returned a fixed 403 response with a generic message. Now you can:
- Set a custom status code (e.g., 403, 429) to match your app logic.
- Provide a custom response body (e.g., a friendly error page or troubleshooting steps).
- Ensure consistency across all blocked requests under the WAF policy.

This feature improves user experience (UX), helps with compliance, and simplifies troubleshooting.

Key capabilities
- Custom status codes: allowed values are 200, 403, 405, 406, 429, 990–999.
- Custom response body: up to 32 KB, base64-encoded for ARM/REST.
- Policy-level setting: applies to all blocked requests under that WAF policy.
- Limit: up to 20 WAF policies with a custom block response per Application Gateway.

Configure in the Azure Portal

Follow these steps:
1. Sign in to the Azure portal at https://portal.azure.com.
2. Navigate to your WAF policy linked to the Application Gateway.
3. Under Settings, select Policy settings.
4. In the Custom block response section:
   - Block response status code: choose from the allowed values (e.g., 403 or 429).
   - Block response body: enter your custom message (plain text or HTML).
5. Save the policy.
6. Apply the policy to your Application Gateway if it is not already associated.

Configure via CLI

```bash
az network application-gateway waf-policy update \
  --name MyWafPolicy \
  --resource-group MyRG \
  --custom-block-response-status-code 429 \
  --custom-block-response-body "$(base64 custompage.html)"
```

Configure via PowerShell

```powershell
Set-AzApplicationGatewayFirewallPolicy `
  -Name MyWafPolicy `
  -ResourceGroupName MyRG `
  -CustomBlockResponseStatusCode 429 `
  -CustomBlockResponseBody ([System.Convert]::ToBase64String([System.IO.File]::ReadAllBytes("custompage.html")))
```

Tip: For ARM/REST, the body must be base64-encoded.

Best practices
- Use meaningful status codes (e.g., 429 for rate limiting).
- Keep the response body lightweight and informative.
- Test thoroughly to ensure downstream systems handle custom codes correctly.

Resources
- Configure Custom Response code
- Learn more about Application Gateway WAF

Prescaling in Azure Firewall is now generally available
Azure Firewall protects your applications and workloads with cloud-native network security that automatically scales based on your traffic needs. Today, we're excited to announce the general availability of prescaling in Azure Firewall – a new capability that gives you more control and predictability over how your firewall scales.

Why prescaling?

Today, Azure Firewall automatically scales in response to real-time traffic demand. For organizations with predictable traffic patterns – such as seasonal events, business campaigns, holidays, or planned migrations – the ability to plan capacity in advance can provide greater confidence and control. That's where prescaling comes in.

With prescaling, you can:
- Plan ahead – set a baseline number of firewall capacity units to ensure capacity is already in place before demand rises.
- Stay flexible – define both minimum and maximum capacity unit values, so your firewall always has room to grow while staying within your chosen bounds.
- See clearly – monitor capacity trends with a new observed capacity metric and configure alerts to know when scaling events occur.

You can think of it as adding extra checkout counters before a holiday rush – when the customers arrive, you're already prepared to serve them without delays or bottlenecks.

Example scenarios
- E-commerce sales events – scale up before a holiday shopping promotion to handle the surge in online buyers.
- Workload migrations – ensure sufficient capacity is ready during a large data or VM migration window.
- Seasonal usage – for industries like education, gaming, or media streaming, prescale ahead of known peak seasons.

Getting started in the Azure Portal
1. Navigate to your Azure Firewall resource in the Azure Portal.
2. Select Scaling options in settings.
3. By default, every Azure Firewall starts in autoscaling mode. To enable prescaling, simply switch to prescaling mode in the Azure Portal and configure your desired capacity range:
   - Minimum capacity: 2 or higher.
   - Maximum capacity: up to 50, depending on your needs.
4. Monitor the scaling behavior with the observed capacity metric.

Billing and availability

Prescaling uses a new Capacity Unit Hour meter. Charges apply based on the number of firewall instances you configure.
- Standard: $0.07 per capacity unit hour
- Premium: $0.11 per capacity unit hour

✨ Next steps

Prescaling gives you predictable performance and proactive control over your firewall, helping you confidently handle the traffic patterns that matter most to your business.

🚀 Try prescaling today and share your feedback with the team. Learn more about how to configure and monitor this feature in the Azure Firewall prescaling documentation.

Azure DDoS Protection now supports QUIC protocol — Securing the future of HTTP/3 traffic
The internet's transport layer is undergoing one of its most significant evolutions in decades. QUIC (Quick UDP Internet Connections) — the protocol underpinning HTTP/3 — is rapidly becoming the default for high-performance, secure communication on the web. From YouTube streaming to WhatsApp messaging, QUIC is already powering billions of connections daily. Recognizing both its potential and its unique security challenges, Microsoft has now integrated full QUIC mitigation capabilities into Azure DDoS Protection. This protection is enabled by default — no configuration required — ensuring that customers adopting HTTP/3 can do so with confidence.

What is QUIC and why it matters

QUIC was originally developed by Google and standardized by the IETF in 2021 (RFC 9000). Unlike traditional HTTP/2 over TCP, QUIC runs over UDP port 443, combining transport and security layers into a single handshake. This allows a secure, encrypted connection to be established in just one round trip — or even zero round trips for repeat connections.

Technical advantages of QUIC include:
- Integrated TLS 1.3 — encryption is built into the protocol, eliminating the need for separate TLS negotiation.
- Multiplexed streams without head-of-line blocking — independent streams mean packet loss in one stream doesn't stall others.
- Connection migration — QUIC connections survive IP address changes, ideal for mobile devices switching between Wi-Fi and cellular.
- Faster recovery from loss — QUIC uses packet numbers instead of TCP sequence numbers, improving loss detection and retransmission.

These features make QUIC ideal for latency-sensitive workloads such as video streaming, online gaming, and real-time collaboration tools.

The DDoS challenge for QUIC

While QUIC's design improves performance and security, its reliance on UDP introduces a distinct threat profile that goes beyond traditional UDP floods. QUIC's handshake, encryption model, and connection identifiers create attack surfaces unique to the protocol. Key QUIC-specific DDoS vectors include:
- Initial packet floods with fake handshakes: Attackers send large volumes of QUIC Initial packets containing incomplete or malformed TLS Client Hello messages. This forces the server to allocate cryptographic resources for each bogus attempt, consuming CPU and memory.
- Connection ID exhaustion: QUIC uses Connection IDs to maintain state across IP changes. Attackers can rapidly cycle through random Connection IDs to bypass per-IP rate limits. This can overwhelm connection tracking tables.
- Version negotiation abuse: Attackers send unsupported or random QUIC version numbers to trigger repeated version negotiation responses from the server. This consumes bandwidth and processing without establishing a valid session.
- Malformed frame injection: QUIC frames (STREAM, ACK, CRYPTO, etc.) can be deliberately malformed to trigger parsing errors or excessive error handling. Unlike generic UDP payloads, these require QUIC-aware inspection to detect.
- Amplification via Retry packets: QUIC Retry packets can be abused in reflection attacks if the server responds with larger payloads than the request. Attackers spoof victim IPs to direct amplified traffic toward them.

Why this is different from generic UDP floods: Generic UDP attacks typically rely on raw packet volume or reflection from open services. QUIC attacks exploit protocol-level behaviors — handshake processing, version negotiation, and Connection ID handling — that require stateful, QUIC-aware mitigation.
Traditional UDP filtering cannot distinguish between a legitimate QUIC Initial packet and a crafted one designed to exhaust resources.

Azure DDoS Protection: QUIC mitigation built in

Azure DDoS Protection now supports QUIC mitigation by default. This enhancement applies to all customers automatically — no opt-in or manual tuning is required. Technical capabilities include:
- Protocol compliance validation — ensures QUIC packets conform to RFC specifications, including fixed bit checks, version enforcement, and valid Connection ID lengths.
- Initial packet verification — validates that QUIC Initial packets contain a proper TLS Client Hello with Server Name Indication (SNI), blocking spoofed or incomplete handshakes.
- Source and destination rate limiting — controls excessive connection attempts per 4-tuple (source IP, destination IP, source port, destination port).
- Global Limit IDs (GLID) — applies connection and packet rate limits globally across the mitigation platform.
- Retry authentication — issues a cryptographic cookie challenge to verify client authenticity before allowing session establishment.
- Packet rate limiting by Connection ID — limits both long header (Initial) and short header (post-handshake) packet rates to prevent floods.
- Malformed packet filtering — drops packets with unsupported frames, invalid versions, or missing headers.
- Version pinning — prevents downgrade attacks by enforcing negotiated QUIC versions.

All existing Layer 4 protections for UDP traffic — such as flood detection, anomaly scoring, and adaptive thresholds — are fully applied to QUIC.

Real-world impact

Without effective mitigation, QUIC-based services are highly susceptible to a range of disruptive threats. UDP floods can quickly overwhelm servers, consume resources, and render applications unresponsive. Amplification attacks, which exploit the stateless nature of UDP, can multiply inbound traffic by factors of ten to a hundred, creating massive spikes that cripple performance. Such attacks often lead to high packet loss, degraded user experiences, and service interruptions. They can also drive up infrastructure costs significantly, as organizations are forced to handle large volumes of malicious traffic that consume bandwidth and processing power.

With Azure DDoS Protection in place, these risks are proactively addressed. Intelligent rate limiting and packet filtering mechanisms stop floods before they impact service availability. Spoofed packet blocking prevents reflection attacks from ever reaching the application layer. The result is a consistently reliable, low-latency connection for QUIC-enabled applications, even under hostile network conditions. By scrubbing malicious traffic before it reaches customer workloads, Azure also helps reduce operational costs, ensuring that resources are spent serving legitimate users rather than absorbing attack traffic.

Who benefits from QUIC DDoS mitigation?

The benefits of QUIC-aware DDoS protection extend across industries and use cases. Web applications and APIs built on HTTP/3 gain the performance advantages of QUIC without inheriting its security risks. Streaming platforms such as YouTube or Twitch can deliver high-quality, uninterrupted video experiences to millions of viewers, even during attempted network disruptions. Messaging and VoIP services like WhatsApp, Discord, and Zoom maintain crystal-clear communication and low latency, which are critical for user satisfaction.
Online gaming platforms, where milliseconds matter, can preserve smooth gameplay and prevent lag spikes caused by malicious traffic. Financial services and real-time transaction systems also stand to benefit, as they can maintain secure, uninterrupted operations in environments where downtime or delays could have significant business and compliance implications.

Looking ahead

Microsoft is committed to continuously strengthening QUIC protection within Azure DDoS Protection. Efforts are already underway to expand mitigation capabilities, ensuring broader coverage across the global network, and to detect and neutralize threats faster and with greater precision, adapting to the evolving tactics of attackers. Just as importantly, Microsoft is actively gathering feedback from customers and internal teams to refine mitigation strategies, ensuring that QUIC protection remains both robust and aligned with real-world usage patterns. These ongoing enhancements will help customers confidently adopt and scale QUIC-based services, knowing that their performance and security are safeguarded by default.

Conclusion

QUIC is the future of fast, secure internet communication — and Azure DDoS Protection is ready for it. With always-on, default-enabled QUIC mitigation, Azure customers can confidently adopt HTTP/3 without worrying about the unique DDoS risks that come with UDP-based protocols. Your applications stay fast. Your users stay connected. Your infrastructure stays protected.