Azure WAF Tuning for Entra External ID
Introduction

This blog is the second part of a series on tuning Azure Web Application Firewall (WAF) for Entra External ID/Azure Active Directory (AD) and serves as a follow-up to the earlier blog on Azure WAF Tuning with AD B2C Applications. With the introduction of Microsoft Entra External ID in 2024, some customers may experience challenges similar to those faced with Azure AD B2C, particularly around false positives generated by Azure WAF. This blog provides guidance on how to best tune Azure WAF to handle traffic from Entra External ID, ensuring a seamless and secure experience for your users.

Difference between Azure AD B2C and Entra External ID

Microsoft Entra External ID is an identity platform that lets organizations manage external users securely while offering greater flexibility and scalability. It streamlines the integration of external identities with applications, delivers a seamless sign-in experience, and provides advanced security and compliance features for both B2C and B2B scenarios.

Azure AD B2C was designed for customer-facing applications, providing features such as social identity providers, custom branding, and self-service password reset. While it served many use cases effectively, Microsoft has since introduced Entra External ID as a more advanced and unified solution for both consumer-oriented app developers and businesses seeking secure B2B collaboration. Entra External ID includes enhanced security, compliance, and scalability features, making it a comprehensive solution for managing external identities.

Note: Effective May 1, 2025, Azure AD B2C is no longer available for purchase by new customers. Microsoft is transitioning all new external identity scenarios to Microsoft Entra External ID. Existing Azure AD B2C tenants will continue to be supported until at least 2030. Please refer to Azure AD B2C end of sale.

Understanding the Challenge

Similar to Azure AD B2C, Microsoft Entra External ID uses OAuth/OpenID Connect flows. During sign-in, parameters such as code and id_token can contain base64-encoded strings that appear suspicious to the WAF's managed rule sets. Some of the most commonly triggered rules belong to paranoia level 2 (PL2), a more aggressive paranoia level that is more likely to trigger blocks (see the section on paranoia levels later in this post). The two PL2 rules we encounter most are:

942430 – This rule checks for a high number of special characters in request data, indicating a possible SQL injection attempt. This rule is disabled by default in DRS 2.1.

942440 – This rule detects sequences in request data that resemble SQL comments, which are often used in SQL injection attacks. This rule is disabled by default in DRS 2.1 (Azure Front Door WAF) and is replaced by the Microsoft Threat Intelligence Center (MSTIC) rule 99031002. In Application Gateway WAF, users can manually disable it.

These rules often flag tokens or code parameters as potential SQL injection attempts because they detect certain characters and patterns in the tokens. Learn more about Azure WAF rulesets.

Tuning Azure WAF for Entra External ID

In this blog, we'll demonstrate the tuning process using:

Microsoft Entra External ID Tenant – configured for external-facing authentication.
Static Web App (SWA) – front-end code with '.auth login' using Microsoft Entra ID, allowing flexibility to create a custom application.
Azure Front Door Premium with a WAF Policy – running DRS 2.1 managed rules.
Custom Domain – ensures all traffic, including OAuth callbacks, flows via Azure Front Door with WAF.

We begin by registering our static web app in our Entra External ID tenant. In the Microsoft Entra admin center, we navigate to Identity > Applications > App registrations, select 'New registration', and fill in the details for our static web app. Once registered, we proceed to configure authentication and finally grant admin consent. To learn more about registering an application in Entra ID, refer to How to register an app in Microsoft Entra ID - Microsoft identity platform | Microsoft Learn.

Once we verify authentication to our static web app, we configure Azure Front Door with WAF to protect the web app. Learn how to configure Azure Front Door with WAF by following this tutorial. We have configured our Static Web App to be accessed through an Azure Front Door endpoint (with WAF), ensuring all authentication requests also flow through WAF for inspection and protection. To achieve this, we use a custom domain so that OAuth callbacks and user traffic are routed exclusively through Azure Front Door rather than the default .azurestaticapps.net domain. Please refer to Add a custom domain to Azure Front Door | Microsoft Learn to learn more about Azure Front Door custom domains.

Resolving False Positives

When your WAF policy is running in prevention mode, genuine user authentication requests might sometimes be blocked - this is called a false positive. When a false positive occurs, a user successfully authenticates but sees a block page instead of the static web app. However, you might notice that sometimes the very same sign-in flow completes successfully with no block at all. This seemingly "inconsistent" behavior arises because WAFs running the Default Rule Set (DRS) 2.x use anomaly scoring. With anomaly scoring, each triggered rule contributes a severity-based score: Notice (2), Warning (3), Error (4), and Critical (5). If the total anomaly score reaches the threshold of 5, the request is blocked; otherwise it is allowed. Since OAuth tokens (such as code or id_token) change with each login attempt, they may sometimes trigger enough rules to reach this threshold, resulting in a block.

To address this challenge, we take the following steps:

Evaluate the WAF logs.
Remediate with exclusions, custom rules, rule action changes, or by disabling rules.
Evaluate paranoia levels.

Evaluating the WAF Logs

Understanding the logs generated by the WAF is critical to diagnosing and resolving these issues. It is recommended to initially run the WAF policy in detection mode to log traffic without blocking requests, allowing you to identify and fine-tune exclusions before enabling prevention mode. Once you have reviewed the logs and adjusted configurations as needed, you can switch to prevention mode to actively block malicious requests. In our setup, we evaluate the WAF behavior directly in prevention mode to observe real-time blocks and identify which rules were triggered.

You can use the following KQL query in your Log Analytics workspace to identify blocked requests:

AzureDiagnostics
| where ResourceProvider == "MICROSOFT.CDN" and Category == "FrontDoorWebApplicationFirewallLog"
| where action_s == "Block"

We observe several requests that were blocked. Each WAF log entry provides a tracking reference, a unique identifier assigned to each WAF event that makes it easier to correlate and trace specific requests across different logs.
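If you prefer to pull these results with PowerShell rather than the portal, a minimal sketch along the following lines runs the same query against your Log Analytics workspace. This assumes the Az.OperationalInsights module and your workspace ID; the projected column names (such as trackingReference_s) follow the usual AzureDiagnostics naming convention and should be verified against your own workspace schema.

```powershell
# Minimal sketch: query blocked Front Door WAF requests from Log Analytics.
# Assumes Az.Accounts/Az.OperationalInsights are installed and you are signed in (Connect-AzAccount).
$workspaceId = '<your-log-analytics-workspace-id>'   # placeholder

$kql = @'
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.CDN" and Category == "FrontDoorWebApplicationFirewallLog"
| where action_s == "Block"
| project TimeGenerated, requestUri_s, ruleName_s, trackingReference_s
'@

$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $kql
$result.Results | Format-Table TimeGenerated, ruleName_s, trackingReference_s
```

Assuming the usual log schema, the trackingReference_s column carries the same tracking reference shown in the portal views discussed above.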
You can use this reference to correlate WAF events with application logs or client-side tracing tools. This is especially helpful in complex authentication flows involving multiple redirects, as the tracking reference can pinpoint exactly which request in the chain was blocked. Using the tracking reference from our first log, we can identify the specific rules whose scores added up to cause the block.

We can identify that rules 942430 and 942440—both of which target SQL injection signatures—are contributing to the anomaly score and ultimately causing the block. These rules belong to the Default Rule Set (DRS) 2.x and are designed to detect common SQL injection attempts by scanning for suspicious tokens, special characters, or query-like substrings. Both 942430 and 942440 reside in the OWASP ModSecurity Core Rule Set (CRS) family, which specifically targets SQL injection attempts by looking for patterns often used in malicious queries. In the official CRS documentation, these rules focus on suspicious characters (e.g., ', -, ;), SQL keywords (SELECT, UNION), and encoded strings. More specifically:

Rule 942430 – "Restricted SQL Character Anomaly Detection" - This rule counts the number of special characters (like =, +, ', etc.) within request parameters. If it detects too many in a single payload (for example, "# of special characters exceeded (12)"), it flags the request as highly suspicious. OAuth tokens, being base64-encoded, can contain =, +, and other symbols that push this count over the threshold, triggering a false positive.

Rule 942440 – "SQL Comment Sequence Detected" - This rule looks for typical SQL comment patterns such as -- or #. If it detects any substring resembling a comment sequence—possibly even certain base64 fragments—it interprets the request as a potential SQL injection. Because Entra External ID tokens are randomized, they may occasionally contain partial sequences that match the rule's detection logic, again leading to false positives.

For more details on the specific rule IDs and their matching criteria, see the OWASP ModSecurity Core Rule Set (CRS) documentation, or the Azure WAF rule reference for managed rule sets. Understanding the purpose and scope of these rules helps you decide how best to tune WAF—whether by using exclusions, custom rules, or other remediation methods—to accommodate legitimate Entra External ID tokens without compromising your application's overall security.

Remediation Options

Exclusions

Exclusions tell the WAF to bypass certain parameters, cookies, or headers for specified rules. This is often the safest approach when you trust specific values—such as the id_token and code parameters from Microsoft Entra ID. By removing these from inspection, you prevent them from contributing to the anomaly score. In the Azure portal, you can configure exclusions under your WAF policy's managed rule set settings, matching the relevant parameter name (e.g., code) to a specific location (like Request Body or QueryParam). Learn more about WAF exclusion lists in Azure Front Door and how to configure exclusion lists for Azure Front Door.

In our own environment, we determined that rule 942430 often triggered on the code parameter, while rule 942440 flagged the id_token parameter—both located in the request body. To address this in a granular way, we created exclusion lists so that 942430 ignores code in the request body and 942440 ignores id_token in the request body.
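If you manage the policy with PowerShell rather than the portal, the same rule-level exclusions can be expressed roughly as below. This is a sketch using the Az.FrontDoor module; the resource names are placeholders, and the exact cmdlet parameters (for example, the accepted -Variable values) should be checked against the module's documentation before use.

```powershell
# Sketch: exclude the 'code' and 'id_token' request body parameters from rules 942430 and 942440.
# Resource names are placeholders; verify cmdlet parameters against the Az.FrontDoor documentation.
$codeExclusion = New-AzFrontDoorWafManagedRuleExclusionObject `
    -Variable RequestBodyPostArgNames -Operator Equals -Selector 'code'
$idTokenExclusion = New-AzFrontDoorWafManagedRuleExclusionObject `
    -Variable RequestBodyPostArgNames -Operator Equals -Selector 'id_token'

# Attach each exclusion to its rule, then roll the overrides up into the SQLI rule group of DRS 2.1
$override942430 = New-AzFrontDoorWafManagedRuleOverrideObject -RuleId '942430' -Exclusion $codeExclusion
$override942440 = New-AzFrontDoorWafManagedRuleOverrideObject -RuleId '942440' -Exclusion $idTokenExclusion
$sqliGroup = New-AzFrontDoorWafRuleGroupOverrideObject -RuleGroupName 'SQLI' `
    -ManagedRuleOverride $override942430, $override942440

$drs21 = New-AzFrontDoorWafManagedRuleObject -Type 'Microsoft_DefaultRuleSet' -Version '2.1' `
    -RuleGroupOverride $sqliGroup
Update-AzFrontDoorWafPolicy -ResourceGroupName 'my-waf-rg' -Name 'mywafpolicy' -ManagedRule $drs21
```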
This ensures that WAF still inspects all other fields and traffic for malicious patterns, but no longer incorrectly penalizes these legitimate Entra External ID tokens.

Custom Rules

Custom rules give you finer control over how the WAF handles requests - beyond what you can configure with exclusions or default managed rules. You can allow or block specific traffic patterns based on conditions such as request path, HTTP method, header contents, or query parameters. For a straightforward scenario, you might match the path /.auth/login/aad/callback (or whichever callback path your app uses), set the action to Allow, and give the custom rule a lower priority value so it's evaluated first. This ensures legitimate Entra External ID traffic bypasses the SQL injection checks.

If your tokens or parameters follow a partially predictable pattern (for instance, a certain prefix or structure in the base64-encoded string), you can use regex matching in your custom rule to be more selective. For example, you might match the request parameter code with a regex like ^[A-Za-z0-9_-]+(\.[A-Za-z0-9_-]+)*$, which loosely allows base64 token formats. This approach is handy if you notice tokens often trigger a variety of sub-rules. By allowing only the known safe format, you drastically cut false positives while still blocking anything that strays from the legitimate pattern (a scripted sketch of such a rule appears at the end of these remediation options).

Whether you're using simple path matches or advanced regex, carefully scope your conditions—limit them to the specific parameters or paths you know are safe. A broad "allow everything that includes 'callback'" rule could inadvertently open a hole for real attacks.

Change Rule Actions

In some cases, you may prefer to keep the relevant rule enabled but change its action from Block to Log. This ensures that even if the rule matches, the request isn't automatically blocked—rather, it's logged for further investigation. If you later identify genuine threats in these tokens, you can revert to a stricter rule action or add more precise exclusions.

Disable Specific Rules

In some cases, you may decide that certain rules (for example, those repeatedly generating false positives but offering little relevant protection for your application) should be disabled entirely. This approach is generally a last resort because it removes that protective layer for all traffic rather than targeting just the problematic parameters. However, in our scenario, the two problematic rules - 942430 (Restricted SQL Character Anomaly Detection) and 942440 (SQL Comment Sequence Detected) - are disabled or replaced by default in DRS 2.1. If you're seeing them trigger, you may be on an older rule set or a configuration where they remain active. Disabling them manually in the managed rule set will stop the false positives, but you should monitor your WAF logs closely for genuine attacks that might otherwise have been caught by those rules. Where possible, consider upgrading to the latest rule set, which often addresses these issues by default.
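As a concrete illustration of the custom-rule approach described above, the sketch below (again using the Az.FrontDoor module) allows requests to the authentication callback only when the code POST parameter matches the loose base64-style pattern from earlier. The callback path, rule name, and priority value are assumptions for this example and should be adapted to your application.

```powershell
# Sketch: allow the Entra External ID callback when the 'code' parameter matches the expected shape.
# Both conditions must match (they are ANDed together) for the Allow action to apply.
$pathCondition = New-AzFrontDoorWafMatchConditionObject -MatchVariable RequestUri `
    -OperatorProperty Contains -MatchValue '/.auth/login/aad/callback'
$codeCondition = New-AzFrontDoorWafMatchConditionObject -MatchVariable PostArgs -Selector 'code' `
    -OperatorProperty RegEx -MatchValue '^[A-Za-z0-9_-]+(\.[A-Za-z0-9_-]+)*$'

# A low priority value so this Allow rule is evaluated before broader rules
$allowCallback = New-AzFrontDoorWafCustomRuleObject -Name 'AllowEntraCallback' -RuleType MatchRule `
    -MatchCondition $pathCondition, $codeCondition -Action Allow -Priority 5

Update-AzFrontDoorWafPolicy -ResourceGroupName 'my-waf-rg' -Name 'mywafpolicy' -Customrule $allowCallback
```

As noted above, keep the scope tight: the Allow applies only to this path and only when the parameter matches the expected pattern.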
Evaluate the Paranoia Levels

Paranoia levels (PL) determine how aggressively rules in the OWASP Core Rule Set (CRS) detect and block potential threats in a Web Application Firewall (WAF). OWASP CRS defines four paranoia levels (PL1–PL4), each offering progressively stricter security controls:

PL1 (Default): Offers baseline protection against common web attacks, minimizes false positives, and is appropriate for most applications.
PL2: Adds additional rules targeting more sophisticated threats, which may result in more false positives.
PL3: Provides stricter rules suitable for applications requiring high security, though these typically require extensive tuning.
PL4: Implements the most aggressive security rules, suitable for highly secure environments, requiring extensive management and tuning efforts.

For more detailed explanations of each paranoia level, refer to the OWASP CRS Paranoia Levels documentation.

The Azure WAF managed rule sets – DRS 2.1 and CRS 3.2 – each contain rules with an assigned paranoia level. By default, these rule sets include rules from paranoia levels 1 and 2 (PL1 and PL2). To reduce false positives, you can either disable specific PL2 rules or set their action to Log instead of Block. Azure WAF currently does not support rules from paranoia levels 3 and 4. For more information on Azure WAF paranoia levels, refer to Azure Front Door WAF paranoia levels.

In our scenario, rules 942430 and 942440 are classified under PL2 and carry a default anomaly score of 5 (Critical severity). Triggering either rule individually is therefore enough to reach the anomaly score threshold, causing legitimate authentication requests - such as those from the OAuth/OpenID Connect flows used by Entra External ID - to be inadvertently blocked. To manage and reduce these false positives effectively:

Initially configure your Azure WAF policy to use only PL1 rules to significantly lower the likelihood of false positives.
Review and selectively re-enable necessary PL2 rules after analyzing WAF logs, applying specific exclusions or custom rules as described in the remediation section.

Conclusion

Azure Web Application Firewall (WAF) provides robust protection against web threats. However, when integrating with Microsoft Entra External ID for authentication, careful tuning is essential to avoid false positives that disrupt legitimate user access. By reviewing WAF logs, creating precise exclusions, implementing targeted custom rules, and leveraging the latest managed rule sets, organizations can optimize their security posture while ensuring smooth authentication experiences. Proper WAF tuning maintains a crucial balance between application security and user experience.

References

Configure Microsoft Entra External ID with Azure Web Application Firewall - Microsoft Entra External ID | Microsoft Learn
Tune Azure Web Application Firewall for Azure Front Door | Microsoft Learn
Troubleshoot - Azure Web Application Firewall | Microsoft Learn
Azure Web Application Firewall DRS rule groups and rules | Microsoft Learn
Azure WAF tuning with AD B2C applications | Microsoft Community Hub

Azure WAF Integration in Security Copilot is Now Generally Available
We're excited to announce the general availability (GA) of Azure Web Application Firewall (WAF) integration with Microsoft Security Copilot. This marks a significant advancement in web application protection, bringing together Azure WAF's industry-leading defense with the AI-powered capabilities of Security Copilot to transform how security teams detect, investigate, and respond to threats.

Why This Integration Is a Game-Changer

Modern web applications face relentless threats - from SQL injections and cross-site scripting (XSS) to bot attacks and sophisticated Layer 7 DDoS attempts. Defending against these threats requires more than just reactive measures; it demands intelligent, scalable solutions. With Azure WAF now integrated into Security Copilot, security teams can gain:

Proactive threat analysis: Quickly uncover attack patterns and identify emerging threats.
Optimized WAF configurations: Use AI insights to fine-tune rules and policies.
Accelerated investigations: Leverage Copilot's generative AI to streamline incident triage and response.

This integration enables teams to work smarter and faster - turning raw data into actionable intelligence with the help of natural language prompts and AI-guided workflows.

Seamless Protection Across Azure Platforms

Azure WAF protects applications behind Azure Front Door and Azure Application Gateway, offering centralized, cloud-native security at scale. Now, with Security Copilot, analyzing WAF diagnostic logs no longer requires manual parsing or deep scripting expertise. Instead, AI delivers contextual insights directly to your SOC teams, cloud admins, and DevSecOps engineers. Whether you're investigating blocked requests or tuning security policies, this integration helps reduce operational overhead while strengthening your overall security posture.

What Can You Do with Azure WAF in Security Copilot?

Let's explore some of the core capabilities now available:

SQL Injection (SQLi) Attack Analysis: Understand why Azure WAF blocked specific SQLi attempts through detailed summaries of diagnostic logs and correlation of related events over time.
Cross-Site Scripting (XSS) Attack Insights: Get clear explanations for WAF's enforcement actions against XSS attacks, with trend analysis across your environment.
Top Offending IPs Analysis: Identify the most malicious IPs triggering WAF rules, along with insights into the behaviors and rule patterns that led to their blocking.
Most Triggered Rules and Actions: Gain visibility into your most active WAF rules - helping prioritize tuning efforts and enhance threat detection effectiveness.

These capabilities are designed to turn WAF data into actionable knowledge - without the need for custom queries or extensive log review.

Built for the Future of Intelligent Security

As threats continue to evolve, so must our defenses. The Azure WAF and Security Copilot integration represents the next generation of web application protection - combining automation, AI reasoning, and expert knowledge to deliver adaptive security at cloud scale. By augmenting your team with AI, you can stay ahead of attackers, protect critical apps, and respond faster than ever before.

Learn More and Get Started

The GA of Azure WAF integration in Microsoft Security Copilot is more than just a feature release - it's a new paradigm for web application security. Explore the capabilities today by visiting the Azure WAF documentation. Want to talk to us? Reach out to the Azure WAF product team to share feedback or request a demo.
Let's build a more secure web, together.

Securing web applications with Azure Front Door WAF CAPTCHA (Preview)
Introduction

Web applications today are constantly under siege from a range of threats, including automated bots and scrapers, as well as credential-focused threats such as credential stuffing and brute-force attacks. As attackers leverage advanced automation and increasingly sophisticated attack methods, organizations need more robust and interactive security measures capable of distinguishing between legitimate users and malicious traffic. To address these evolving challenges, Azure Front Door's Web Application Firewall (WAF) now introduces CAPTCHA, currently available in public preview. This feature incorporates a critical interactive verification step, validating real human users while blocking automated malicious traffic in real time. By integrating CAPTCHA directly within the WAF, organizations can secure crucial user flows - such as logins, registrations, and checkout processes - from bots and scripted attacks aiming to compromise credentials, create fraudulent accounts, or harvest data, all while preserving a seamless experience for genuine users.

Overview of Azure WAF Front Door CAPTCHA

CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is a security mechanism designed to differentiate human users from automated bots by presenting interactive challenges that only humans can reliably complete. Azure Front Door's WAF implementation of CAPTCHA delivers this capability seamlessly, integrating directly into web traffic processing to offer real-time protection.

Azure Front Door WAF CAPTCHA is a dynamic security challenge automatically triggered when a client's request matches a WAF rule configured with the CAPTCHA action. When activated, users are presented with an interactive CAPTCHA challenge in their browser and can verify themselves either by solving a visual puzzle or completing an audio-based task. Once successfully solved, the user's request proceeds normally, while automated scripts and bots unable to complete the challenge are immediately blocked, effectively preventing malicious traffic. By clearly distinguishing human users from bots, Azure WAF CAPTCHA strengthens application defenses. Key benefits include:

Account and Access Protection - Azure WAF CAPTCHA helps protect authentication and user account workflows from automated abuse and unauthorized access.
  Block Automated Account Creation - Stops bots from registering fake or spam accounts during sign-up.
  Prevent Account Takeovers - Stops suspicious login attempts to protect against stolen credentials.
  Stop Brute-Force Logins - Prevents automated password guessing and account breaches.
Data and Resource Protection - Use CAPTCHA to defend web content and inventory from unauthorized scraping and resource hoarding.
  Limit Web Scraping - Restricts bots from extracting proprietary data like pricing or content.
  Prevent Inventory Hoarding - Protects e-commerce and ticketing platforms from bulk purchasing by bots.
Fraud and Abuse Prevention - Use CAPTCHA to reduce the risk of automated abuse in transaction and engagement workflows.
  Block Fake Transactions - Stops abuse of discounts, gift cards, or loyalty programs by scripted bots.
  Reduce Spam and Abusive Inputs - Ensures form and comment submissions are from real users, not bots.
Application-Layer DDoS Defense - Acts as a first line of defense to block high-volume bot requests targeting application resources.
Azure WAF Front Door CAPTCHA Key Features

Azure WAF CAPTCHA in Azure Front Door is designed to be flexible, easy to configure, and deeply integrated into WAF's existing policy model. Below are the key features that define how CAPTCHA is activated, managed, and monitored.

Policy Settings

Azure WAF CAPTCHA includes a configurable policy setting that defines how long a user remains validated after successfully completing a challenge. This is controlled through the CAPTCHA challenge cookie, which is injected into the user's browser upon solving the challenge. The cookie name is afd_azwaf_captcha, and it determines how long a user is exempt from repeated challenges. The cookie validity period can be set between 5 and 1,440 minutes, with a default of 30 minutes. Once the cookie expires, the user will be prompted to complete the CAPTCHA again if they trigger a matching rule. This setting helps balance security and user experience by reducing repetitive challenges for legitimate users while still enforcing protection over time.

Integration with Bot Manager Rules

CAPTCHA can be enabled directly in the Bot Manager rule sets, allowing administrators to apply CAPTCHA as an enforcement action. To enable the CAPTCHA challenge within the Bot Manager's managed rules, navigate to the managed rules section in your WAF policy and adjust the actions for each rule group. This setup is ideal for mitigating automated logins, credential stuffing, and other bot-driven behaviors with minimal configuration.

Custom Rule Support

For more targeted scenarios, CAPTCHA can be configured as the action in a custom rule. This allows precise control over when and where the challenge is triggered - based on URI, method, headers, geo-location, or user-agent patterns. Common examples include applying CAPTCHA to login endpoints, sign-up forms, or regions known for bot traffic.

Monitoring

Detailed logs and metrics are captured whenever the CAPTCHA challenge is triggered. This allows security administrators to track CAPTCHA challenges and analyze traffic patterns and security incidents. The "Web Application Firewall CAPTCHA Request Count" metric within Azure Front Door displays the number of CAPTCHA requests evaluated by the Web Application Firewall.

When WAF diagnostic logging is enabled, each CAPTCHA event is written to the AzureDiagnostics table. These logs can be queried to see which endpoints triggered challenges, the outcome of each event (Issued, Passed, Valid, or Blocked), the client IP and user agent, and the timestamp of the interaction. By analyzing this data, you can calculate solve rates, identify problem spots where users are repeatedly challenged or blocked, and fine-tune your rules to improve both security and user experience.

Pricing

Azure Front Door WAF CAPTCHA is currently in public preview, and pricing details are available on Pricing - Front Door | Microsoft Azure.

Enabling and using the CAPTCHA challenge

CAPTCHA in the Bot Manager ruleset

As described in the previous section, the CAPTCHA challenge can be enabled within both the Bot Manager ruleset and custom rules. To enable it within the Bot Manager ruleset, simply navigate to the Managed Rules section of your WAF policy in Azure Front Door, select the Bot Manager rule you want to configure, and change the action to CAPTCHA challenge. Within the Policy Settings, you can adjust the CAPTCHA challenge cookie's validity period, with options ranging from 5 to 1,440 minutes.
To demonstrate how Azure WAF Front Door issues a CAPTCHA challenge via a Bot Manager rule, we will simulate bot-like requests using PowerShell. In our setup, we have configured Azure Front Door with a WAF policy that has the Bot Manager 1.1 ruleset enabled and the action set to CAPTCHA for the rules Bot100100 (malicious bots detected by threat intelligence) and Bot100200 (malicious bots that have falsified their identity). Behind this Azure Front Door, a web application is running and is actively protected by the WAF.

We use two PowerShell snippets—one sending a known crawler User-Agent, the other spoofing a high-risk IP via X-Forwarded-For—to trigger the CAPTCHA rule (a sketch of such requests appears at the end of this section). You can use Postman, Visual Studio, or any other HTTP client to send these requests; this example uses PowerShell. From the results we observe a 403 Forbidden status code in both cases, indicating that WAF issued the CAPTCHA challenge and then blocked the request because no valid token was returned. In the Front Door WAF diagnostic logs, we can view the requests. This confirms that the Bot Manager rule correctly triggered the CAPTCHA action and enforced a block since the client could not complete the interactive challenge.

CAPTCHA in custom rules

For custom rules, you define exactly when the CAPTCHA challenge appears by creating a match-type or rate limit rule with the action set to CAPTCHA. In the custom rule's Policy Settings, you can also configure the CAPTCHA cookie lifetime - anywhere from 5 to 1,440 minutes - so that users remain validated for the duration you choose.

To demonstrate the CAPTCHA challenge in action, we set up a simple scenario using Azure Front Door with a WAF policy containing our custom rule. Behind the Front Door endpoint, a demo web application is running. The rule inspects the RequestUri and issues a CAPTCHA challenge when the URI contains /ftp. In Policy Settings, we set the CAPTCHA cookie validity to 5 minutes.

In our browser, we navigate to our web application and click on the link that leads to the /ftp path. The browser briefly displays the CAPTCHA form, confirming that the challenge is active. We are presented with the CAPTCHA challenge page, select the puzzle option, and proceed to solve it. After solving the puzzle, the afd_azwaf_captcha cookie appears under Response Headers. The same cookie will be sent with each subsequent request, preventing repeated challenges within the cookie lifetime and ensuring smooth navigation. The Front Door WAF logs provide detailed insights into CAPTCHA challenge requests, showing issued and passed challenges as well as active challenges.
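For reference, the kind of PowerShell requests used earlier in this walkthrough can be sketched as follows. The endpoint, User-Agent string, and IP address below are placeholders rather than the exact values from our test; replace them with your own Front Door hostname and with values that match the Bot Manager rules you are exercising.

```powershell
# Placeholder endpoint - replace with your Front Door (or custom domain) URL
$endpoint = 'https://www.contoso-afd-demo.com/'

# Request 1: present a crawler-style User-Agent
try {
    Invoke-WebRequest -Uri $endpoint -UserAgent 'ExampleCrawlerBot/1.0' -UseBasicParsing
} catch {
    # When the CAPTCHA challenge is not completed, the request is blocked (403 Forbidden)
    $_.Exception.Response.StatusCode
}

# Request 2: spoof a client IP via the X-Forwarded-For header
try {
    Invoke-WebRequest -Uri $endpoint -Headers @{ 'X-Forwarded-For' = '203.0.113.10' } -UseBasicParsing
} catch {
    $_.Exception.Response.StatusCode
}
```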
Conclusion

Malicious bots continue to threaten web applications with automated account creation, credential abuse, and data scraping. Azure Front Door WAF's CAPTCHA challenge delivers an interactive verification step that stops sophisticated bots at the edge, complementing Bot Manager and JavaScript challenge protections. By issuing puzzles or audio challenges only on high-risk requests and tracking outcomes through built-in metrics and logs, CAPTCHA ensures genuine users navigate your site without interruption while blocking automated attacks. Together, these features provide a powerful, adaptive defense against evolving bot threats, helping organizations maintain application integrity and deliver a seamless experience for real users.

References

Introduction to Azure Web Application Firewall | Microsoft Learn
Public Preview of Azure WAF CAPTCHA Challenge for Azure Front Door | Microsoft Community Hub
Azure Front Door Web Application Firewall CAPTCHA (preview) | Microsoft Learn
Web Application Firewall (WAF) on Azure Front Door | Microsoft Learn
Web application firewall custom rule for Azure Front Door | Microsoft Learn

Public Preview of Azure WAF CAPTCHA Challenge for Azure Front Door
Modern web applications face an ever-growing array of automated threats, including bots, web scrapers, and brute-force attacks. Many of these attacks evade traditional security measures such as IP blocking, geo-restrictions, and rate limiting, which struggle to differentiate between legitimate users and automated traffic. As cyber threats become more sophisticated, businesses require stronger, more adaptive security solutions.

Azure Front Door's Web Application Firewall (WAF) now introduces CAPTCHA in public preview—an interactive mechanism designed to verify human users and block malicious automated traffic in real time. By requiring suspicious traffic to successfully complete a CAPTCHA challenge, WAF ensures that only legitimate users can access applications while keeping bots at bay. This capability is particularly valuable for common login and sign-up workflows, mitigating the risk of account takeovers, credential stuffing attacks, and brute-force intrusions that threaten sensitive user data.

Key Benefits of CAPTCHA on Azure Front Door WAF

Prevent Automated Attacks – Blocks bots from accessing login pages, forms, and other critical website elements.
Secure User Accounts – Mitigates credential stuffing and brute-force attempts to protect sensitive user information.
Reduce Spam & Fraud – Ensures only real users can submit comments, register accounts, or complete transactions.
Easy Deployment & Management – Requires minimal configuration, reducing operational overhead while maintaining a robust security posture.

How CAPTCHA Works

When a client request matches a WAF rule configured for CAPTCHA enforcement, the user is presented with an interactive CAPTCHA challenge to confirm they are human. Upon successful completion, Azure WAF validates the request and allows access to the application. Requests that fail the challenge are blocked, preventing bots from proceeding further.

Getting Started

CAPTCHA is now available in public preview for Azure WAF. Administrators can configure this feature within their WAF policy settings to strengthen bot mitigation strategies and improve security posture effortlessly. To learn more and start protecting your applications today, visit our Azure WAF documentation.

Protect against Next.js CVE-2025-29927 with Azure Web Application Firewall (WAF)
Next.js versions prior to 12.3.5, 13.5.9, 14.2.25, and 15.2.3 are vulnerable to an authorization bypass within a Next.js application, according to the recently published CVE-2025-29927. It is strongly recommended to upgrade Next.js to a patched version – 12.3.5, 13.5.9, 14.2.25, or 15.2.3. For Next.js 11.x, which currently has no patch available, or for other applications based on Next.js that cannot be patched immediately, creating a custom rule in Azure WAF can help mitigate this vulnerability. The custom rule blocks any request with the header 'x-middleware-subrequest', which is used as part of this attack. Note that this is not an exploitation-specific rule but rather a more general approach, which could also block legitimate traffic on sites that use auth middleware.

Create a custom rule to mitigate CVE-2025-29927

In the Azure portal, navigate to your Web Application Firewall, then select Settings > Custom Rules and click the 'Add Custom Rule' button.
Give your custom rule a name, such as 'CVE202529927', and set its priority (1-100).
Under Conditions, select Match Type = 'String', Match Variable = 'RequestHeaders', and Header Name = 'x-middleware-subrequest'.
Under 'Operation' select 'is', set 'Operator' to 'Regex', and set the Match Values to '.*'.
Under the 'Then' section, select 'Deny Traffic' to block any request containing this header.
Click 'OK' to complete editing, then select 'Save' to apply the new rule.

You can find more information about Custom Rules on Azure WAF for Application Gateway here or for Azure Front Door here.
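If you prefer to script the rule for an Azure Front Door WAF policy, a sketch using the Az.FrontDoor PowerShell module might look like the following. The policy and resource group names are placeholders, and an Application Gateway WAF policy would use the equivalent Az.Network cmdlets instead.

```powershell
# Sketch: block any request carrying the x-middleware-subrequest header (any value).
# 'Block' here corresponds to the portal's 'Deny Traffic' action.
$condition = New-AzFrontDoorWafMatchConditionObject -MatchVariable RequestHeader `
    -Selector 'x-middleware-subrequest' -OperatorProperty RegEx -MatchValue '.*'

$rule = New-AzFrontDoorWafCustomRuleObject -Name 'CVE202529927' -RuleType MatchRule `
    -MatchCondition $condition -Action Block -Priority 10

# Note: this passes the custom rule collection to the policy; include any existing
# custom rules you want to keep.
Update-AzFrontDoorWafPolicy -ResourceGroupName 'my-waf-rg' -Name 'mywafpolicy' -Customrule $rule
```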
Mastering Regex with GitHub Copilot for Enhanced Azure WAF Security

Written in collaboration with davidfrazee

Introduction

Azure Web Application Firewall (WAF) is a cloud-native security service that protects web applications from common exploits and vulnerabilities. It provides centralized protection for applications hosted on Azure Front Door and Azure Application Gateway, ensuring that malicious traffic is detected and blocked before reaching the application backend. Azure WAF leverages managed rule sets to actively protect web applications from threats and attacks. These rule sets are maintained by Azure, with the Default Rule Set (DRS) including rules from the Microsoft Threat Intelligence Collection, ensuring enhanced coverage, specific vulnerability patches, and improved false positive reduction.

In addition to the managed rule sets, Azure WAF offers custom rules that enable you to create your own rules. With custom rules, you can set conditions based on attributes such as IP addresses, HTTP headers, and query strings to precisely control which traffic is allowed or blocked, providing flexibility and granularity. Within custom rules, you can incorporate regex, which offers enhanced accuracy when matching patterns in your traffic. Regex (regular expressions) enables you to define complex conditions, allowing for highly specific filtering of incoming requests. Working with regex can sometimes be challenging due to its non-intuitive syntax. In this blog, we will demonstrate a practical, step-by-step approach for generating regex patterns using GitHub Copilot, refining them on Regex101, and validating their effectiveness in Azure WAF. This process helps ensure that your custom rules with regex work as intended, thereby enhancing your overall security effectiveness.

GitHub Copilot

GitHub Copilot is an AI-powered code completion tool developed by GitHub in collaboration with OpenAI. It assists developers by suggesting code snippets, functions, and even entire blocks of code as they type. By leveraging machine learning models trained on a vast amount of public code, GitHub Copilot can understand the context of the code being written and provide relevant suggestions, making the coding process faster and more efficient.

Prompting GitHub Copilot can be particularly useful for security professionals. Enhanced code quality is one of the benefits, as GitHub Copilot can help security professionals write cleaner and more secure code by identifying potential vulnerabilities and suggesting best practices for secure coding, thus reducing the risk of introducing security flaws. Additionally, it offers time efficiency: security professionals often need to write scripts or tools to automate security tasks, and GitHub Copilot can speed up this process by generating code snippets based on the prompts provided, allowing professionals to focus on more critical aspects of their work. GitHub Copilot can also assist in creating regex code, which is often complex and challenging to write. By providing accurate regex patterns based on prompts, it can help security professionals quickly develop effective text manipulation and pattern-matching solutions. While AI-generated content can significantly streamline the process of creating regex patterns, it is important to verify the accuracy of these patterns to ensure they work as intended. Tools such as Regex101 provide a valuable platform for refining and validating regex patterns, helping to identify and correct any errors before implementation.

What is Regex?
Regular expressions, commonly known as regex, are sequences of characters that define search patterns. They are used in various programming languages and tools to match, locate, and manage text. Regex is incredibly powerful because it allows for complex text manipulation and pattern matching, making it an essential tool for tasks such as data validation, parsing, and transformation. However, regex can be difficult to learn due to its intricate syntax and the need for precise pattern construction. The learning curve is steep because even small mistakes can lead to unexpected results or errors. Additionally, regex patterns can become quite complex, requiring a deep understanding of both the syntax and the specific text being manipulated.

Despite these challenges, regex is invaluable in the field of security. It can be used to detect and prevent malicious activities by identifying patterns associated with attacks, such as SQL injections or cross-site scripting. Regex can also help in filtering user inputs, ensuring that only safe and valid data is processed. Its ability to analyze large volumes of text swiftly and accurately makes it a powerful tool for security professionals aiming to protect systems and data from threats.

Creating & Validating Regex Patterns

Now that we have explored the capabilities of GitHub Copilot and the intricacies of regex, it is time to put this knowledge into practice. In the following steps, we will guide you through the process of leveraging GitHub Copilot to create and validate regex patterns effectively. By combining the power of AI-driven code suggestions with the precision of regex, you will be able to enhance your security tasks, automate complex text manipulations, and ensure robust data validation. Let us dive into the practical steps and see how these tools can streamline your workflow and bolster your security measures.

To get started with GitHub Copilot, navigate here. We are going to leverage GitHub Copilot through Visual Studio Code (VSC). Once you have completed the requirements from the link above and enabled the extension, you can start prompting GitHub Copilot Chat.

Prompt: Create two unique regex patterns. The first pattern should match any of the following words: "l33t", "h@x0r", "kewl", and "n00b", but the characters can be case insensitive. The second pattern should match the string of special characters "!@#$" followed by any integer that comes after the symbol "&".

Response: Here are the two regex patterns:

To match any of the words "l33t", "h@x0r", "kewl", and "n00b" in a case-insensitive manner: (?i)\b(l33t|h@x0r|kewl|n00b)\b

To match the string of special characters "!@#$" followed by any integer that comes after the symbol "&": !@#\$&\d+

Now that GitHub Copilot has delivered two unique regex strings for use in our Azure WAF, let's verify that the regex patterns work with the use of https://regex101.com. Regex101.com is an online platform designed to help users create, test, and debug regex. It provides a user-friendly interface where you can input your regex patterns and evaluate them against sample text to see how they match. The site offers real-time feedback, highlighting matches and providing detailed explanations of each part of the regex pattern. This makes it an invaluable tool for both beginners learning regex and experienced users fine-tuning their patterns. Additionally, Regex101.com supports multiple regex flavors, including PCRE, JavaScript, and Python, allowing users to work with the syntax specific to their needs.
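In addition to an online validator, you can sanity-check the patterns locally. The short PowerShell sketch below uses made-up test strings and the case-sensitive -cmatch operator, so that any case-insensitive behavior comes from the inline (?i) flag in the pattern itself rather than from PowerShell's default -match behavior.

```powershell
$pattern1 = '(?i)\b(l33t|h@x0r|kewl|n00b)\b'   # leetspeak words, case-insensitive via inline flag
$pattern2 = '!@#\$&\d+'                        # literal "!@#$" then "&" followed by digits

# -cmatch is case-sensitive, so a match here shows the (?i) flag is doing the work
'Totally pwned by a H@X0R' -cmatch $pattern1    # True
'Nothing suspicious here'  -cmatch $pattern1    # False

'payload=!@#$&12345'       -cmatch $pattern2    # True
'payload=!@#$&abc'         -cmatch $pattern2    # False
```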
To validate in Regex101.com, we paste the regex pattern provided by GitHub Copilot at the top and then enter a JSON test body to match against the pattern. The tool verifies that the first regex pattern captures malicious attempts without case sensitivity and provides a detailed breakdown on the right side. This breakdown includes explanations of each part of the regex, helping to ensure that the pattern is correctly identifying the intended matches and highlighting any potential issues.

In another example, we use Regex101.com to validate a regex pattern aimed at identifying strings of unique characters. The tool verifies that the regex pattern successfully captures the string where each character appears only once and in order, followed by an integer. On the right side, Regex101.com provides a detailed breakdown of the regex pattern, explaining how each part contributes to the overall match. Now that we have validated the regex patterns with Regex101.com, let us implement them into custom rules for Azure WAF.

Using Regex with Azure WAF

Having validated the regex patterns with Regex101.com, we can now proceed to implement these patterns into custom rules for Azure WAF. This section provides a guide on integrating the validated regex patterns into your Azure WAF configuration to enhance web application security. By establishing these custom rules, you can tailor protection to meet specific requirements, ensuring malicious attempts are effectively intercepted and blocked.

First, we navigate to the Custom rules section of our Azure WAF policy and author the two regex rules that we want to use to identify special patterns in request bodies going through our WAF. What is unique about using regex in custom rules is that you select Regex as an Operator in the Condition. From there, you enter your regex pattern in the Match values section, select the action, and the custom rule is complete. A scripted sketch of these two rules appears at the end of this section.

After implementing the custom regex rules in Azure WAF, we executed a simulated malicious attempt to evaluate their effectiveness. The WAF, equipped with our regex patterns, successfully detected and intercepted the attack. The custom rules accurately identified the malicious activity and promptly blocked it, demonstrating the power and precision of using AI-generated regex patterns to enhance security measures.

After executing the simulated malicious attempt, we examined the Azure WAF logs to confirm the effectiveness of our custom regex rules. The logs clearly indicated that the attack was intercepted, with the rule name and match value providing specific details about the block. This information is crucial for verifying that the custom rules are functioning as intended and accurately identifying malicious activities. By reviewing these logs, we can ensure that our security measures are robust and capable of protecting against potential threats. The detailed log entries not only confirm the success of our regex patterns but also offer insights into further refining and optimizing our security configurations.
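As a rough sketch of how these two rules could be scripted (assuming an Azure Front Door WAF policy managed with the Az.FrontDoor module and rules that inspect the request body; rule names, priorities, and resource names are placeholders), the configuration might look like this. Note that support for inline flags such as (?i) depends on the WAF's regex engine, so verify the patterns behave as expected after deployment.

```powershell
# Sketch: two custom rules using the Regex operator against the request body.
$leetCondition = New-AzFrontDoorWafMatchConditionObject -MatchVariable RequestBody `
    -OperatorProperty RegEx -MatchValue '(?i)\b(l33t|h@x0r|kewl|n00b)\b'
$specialCondition = New-AzFrontDoorWafMatchConditionObject -MatchVariable RequestBody `
    -OperatorProperty RegEx -MatchValue '!@#\$&\d+'

$leetRule = New-AzFrontDoorWafCustomRuleObject -Name 'BlockLeetSpeak' -RuleType MatchRule `
    -MatchCondition $leetCondition -Action Block -Priority 20
$specialRule = New-AzFrontDoorWafCustomRuleObject -Name 'BlockSpecialSequence' -RuleType MatchRule `
    -MatchCondition $specialCondition -Action Block -Priority 21

Update-AzFrontDoorWafPolicy -ResourceGroupName 'my-waf-rg' -Name 'mywafpolicy' `
    -Customrule $leetRule, $specialRule
```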
Conclusion

Leveraging GitHub Copilot to generate regex patterns and validating them on Regex101.com before applying them to Azure WAF showcases the remarkable synergy between AI and security practices. By utilizing GitHub Copilot's intelligent code suggestions, we can efficiently create complex regex patterns tailored to our specific needs. Validating these patterns on Regex101.com ensures their accuracy and effectiveness in capturing malicious attempts. Once applied to Azure WAF, these regex patterns enhance our security measures, providing robust protection against potential threats. Testing and observing the impact of these AI-generated regex strings highlights the power and value of integrating AI into our security workflows. This approach not only streamlines the process but also demonstrates how AI can significantly contribute to hardening security, making it an efficient and worthwhile endeavor.

References

Introduction to Azure Web Application Firewall | Microsoft Learn
What is Azure Web Application Firewall on Azure Application Gateway? | Microsoft Learn
What is Azure Web Application Firewall on Azure Front Door? | Microsoft Learn
Create and use v2 custom rules - Azure Web Application Firewall | Microsoft Learn
GitHub Copilot