Network security perimeter

What would be the expected behavior for an NSP?
I'm using a network security perimeter (NSP) in Azure. Two resources are assigned to the perimeter: a storage account and an Azure SQL Database. I'm running BULK INSERT dbo.YourTable FROM 'sample_data.csv' to load data from the storage account. The NSP is enforced for both resources, so public connectivity is denied for resources outside the perimeter.

I have experienced this behavior: Azure SQL CANNOT access the storage account when I run the command. I resolved it by doing the following:

1. Adding an outbound rule in the NSP to reach the storage account FQDN.
2. Adding an inbound rule in the NSP to allow the public IP of the Azure SQL server.

When I do 1 and 2, Azure SQL is able to pump data from the storage account. IMHO this is not the expected behavior for two resources in the same NSP: I expect that, as they are in the same NSP, they can communicate with each other.

I have experienced a different behavior when using a Key Vault in the same NSP. I'm using the Key Vault to hold the encryption keys for the same storage account, and I didn't have to create any rule for it to communicate with the storage account, as they are in the same NSP.

I know Azure SQL support for NSP is in preview and Key Vault support is GA, but I want to ask whether the behavior I experienced (SQL CANNOT connect to the storage account even though both are in the same NSP) is due to an unstable or unimplemented feature, or whether I'm missing something. What is the expected behavior? Thank you community!!

Deploying Third-Party Firewalls in Azure Landing Zones: Design, Configuration, and Best Practices
As enterprises adopt Microsoft Azure for large-scale workloads, securing network traffic becomes a critical part of the platform foundation. Azure's Well-Architected Framework provides the blueprint for enterprise-scale Landing Zone design and deployments, and while Azure Firewall is a built-in PaaS option, some organizations prefer third-party firewall appliances for familiarity, feature depth, and vendor alignment. This blog explains the basic design patterns, key configurations, and best practices when deploying third-party firewalls (Palo Alto, Fortinet, Check Point, etc.) as part of an Azure Landing Zone.

1. Landing Zone Architecture and Firewall Role

The Azure Landing Zone is Microsoft's recommended enterprise-scale architecture for adopting cloud at scale. It provides a standardized, modular design that organizations can use to deploy and govern workloads consistently across subscriptions and regions. At its core, the Landing Zone follows a hub-and-spoke topology:

Hub (Connectivity Subscription):
- Central place for shared services like DNS, private endpoints, VPN/ExpressRoute gateways, Azure Firewall (or third-party firewall appliances), Bastion, and monitoring agents.
- Provides consistent security controls and connectivity for all workloads.
- Firewalls are deployed here to act as the traffic inspection and enforcement point.

Spokes (Workload Subscriptions):
- Application workloads (e.g., SAP, web apps, data platforms) are placed in spoke VNets.
- Additional specialized spokes may exist for Identity, Shared Services, Security, or Management.
- Spokes are isolated for governance and compliance, but all connectivity back to other workloads or on-premises routes through the hub.

Traffic Flows Through Firewalls

North-South Traffic:
- Inbound connections from the Internet (e.g., customer access to applications).
- Outbound connections from Azure workloads to Internet services.
- Hybrid connectivity to on-premises datacenters or other clouds.
- Routed through the external firewall set for inspection and policy enforcement.

East-West Traffic:
- Lateral traffic between spokes (e.g., Application VNet to Database VNet).
- Communication across environments like Dev → Test → Prod (if allowed).
- Routed through an internal firewall set to apply segmentation and zero-trust principles, and to prevent lateral movement of threats.

Why Firewalls Matter in the Landing Zone

While Azure provides NSGs (Network Security Groups) and Route Tables for basic packet filtering and routing, these are not sufficient for advanced security scenarios. Firewalls add:
- Deep packet inspection (DPI) and application-level filtering.
- Intrusion Detection/Prevention (IDS/IPS) capabilities.
- Centralized policy management across multiple spokes.
- Segmentation of workloads to reduce the blast radius of potential attacks.
- Consistent enforcement of enterprise security baselines across hybrid and multi-cloud.

Organizations May Choose

Depending on security needs, cost tolerance, and operational complexity, organizations typically adopt one of two models for third-party firewalls:

Two sets of firewalls:
- One set dedicated to north-south traffic (external to Azure).
- Another set for east-west traffic (between VNets and spokes).
- Provides the highest security granularity, but comes with higher cost and management overhead.

Single set of firewalls:
- A consolidated deployment where the same firewall cluster handles both east-west and north-south traffic.
- Simpler and more cost-effective, but may introduce complexity in routing and policy segregation.

This design choice is usually made during Landing Zone design, balancing security requirements, budget, and operational maturity.

2. Why Choose Third-Party Firewalls Over Azure Firewall?

While Azure Firewall provides simplicity as a managed service, customers often choose third-party solutions due to:
- Advanced features – deep packet inspection, IDS/IPS, SSL decryption, threat feeds.
- Vendor familiarity – network teams trained on Palo Alto, Fortinet, or Check Point.
- Existing contracts – enterprise license agreements and support channels.
- Hybrid alignment – same vendor firewalls across on-premises and Azure.

Azure Firewall is a fully managed PaaS service, ideal for customers who want a simple, cloud-native solution without worrying about underlying infrastructure. However, many enterprises continue to choose third-party firewall appliances (Palo Alto, Fortinet, Check Point, etc.) when implementing their Landing Zones. The decision usually depends on capabilities, familiarity, and enterprise strategy.

Key Reasons to Choose Third-Party Firewalls

Feature Depth and Advanced Security
Third-party vendors offer advanced capabilities such as:
- Deep Packet Inspection (DPI) for application-aware filtering.
- Intrusion Detection and Prevention (IDS/IPS).
- SSL/TLS decryption and inspection.
- Advanced threat feeds, malware protection, sandboxing, and botnet detection.
While Azure Firewall continues to evolve, these vendors have a longer track record in advanced threat protection.

Operational Familiarity and Skills
Network and security teams often have years of experience managing Palo Alto, Fortinet, or Check Point appliances on-premises. Adopting the same technology in Azure reduces the learning curve and ensures faster troubleshooting, smoother operations, and reuse of existing playbooks.

Integration with Existing Security Ecosystem
Many organizations already use vendor-specific management platforms (e.g., Panorama for Palo Alto, FortiManager for Fortinet, or SmartConsole for Check Point). Extending the same tools into Azure allows centralized management of policies across on-premises and cloud, ensuring consistent enforcement.

Compliance and Regulatory Requirements
Certain industries (finance, healthcare, government) require proven, certified firewall vendors for security compliance. Customers may already have third-party solutions validated by auditors and prefer extending those to Azure for consistency.

Hybrid and Multi-Cloud Alignment
Many enterprises run a hybrid model, with workloads split across on-premises, Azure, AWS, or GCP. Third-party firewalls provide a common security layer across environments, simplifying multi-cloud operations and governance.

Customization and Flexibility
Unlike Azure Firewall, which is a managed service with limited backend visibility, third-party firewalls give admins full control over operating systems, patching, advanced routing, and custom integrations. This flexibility can be essential when supporting complex or non-standard workloads.

Licensing Leverage (BYOL)
Enterprises with existing enterprise agreements or volume discounts can bring their own firewall licenses (BYOL) to Azure. This often reduces cost compared to pay-as-you-go Azure Firewall pricing.

When Azure Firewall Might Still Be Enough
- Organizations with simple security needs (basic north-south inspection, FQDN filtering).
- Cloud-first teams that prefer managed services with minimal infrastructure overhead.
- Customers who want to avoid the manual scaling and VM patching that come with IaaS appliances.

In practice, many large organizations use a hybrid approach: Azure Firewall for lightweight scenarios or specific environments, and third-party firewalls for enterprise workloads that require advanced inspection, vendor alignment, and compliance certifications.

3. Deployment Models in Azure

Third-party firewalls in Azure are primarily IaaS-based appliances deployed as virtual machines (VMs). Leading vendors publish Azure Marketplace images and ARM/Bicep templates, enabling rapid, repeatable deployments across multiple environments.
These firewalls allow organizations to enforce advanced network security policies, perform deep packet inspection, and integrate with Azure-native services such as Virtual Network (VNet) peering, Azure Monitor, and Azure Sentinel.

Note: Some vendors now also release PaaS versions of their firewalls, offering managed firewall services with simplified operations. However, for the purposes of this blog, we will focus mainly on IaaS-based firewall deployments.

Common Deployment Modes

Active-Active
- Description: Multiple firewall VMs operate simultaneously, sharing the traffic load. An Azure Load Balancer distributes inbound and outbound traffic across all active firewall instances.
- Use cases: Ideal for environments requiring high throughput, resilience, and near-zero downtime, such as enterprise data centers, multi-region deployments, or mission-critical applications.
- Considerations: Requires careful route and policy synchronization between firewall instances to ensure consistent traffic handling. Typically involves BGP or user-defined routes (UDRs) for optimal traffic steering. Scaling is easier: additional firewall VMs can be added behind the load balancer to handle traffic spikes.

Active-Passive
- Description: One firewall VM handles all traffic (active), while the secondary VM (passive) stands by for failover. When the active node fails, Azure service principals manage IP reassignment and traffic rerouting.
- Use cases: Suitable for environments where simpler management and lower operational complexity are preferred over continuous load balancing.
- Considerations: Failover may result in brief downtime, typically measured in seconds to a few minutes. Synchronization between the active and passive nodes ensures firewall policies, sessions, and configurations are mirrored. Recommended for smaller deployments or those with predictable traffic patterns.
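To make the Active-Active model concrete, here is a minimal Python sketch (illustrative only, not vendor or Azure platform code) of how a load balancer might pin each flow to a firewall instance with a 5-tuple hash while skipping any instance whose health probe is failing:

```python
import hashlib

def pick_backend(flow, backends):
    """Pick a firewall instance for a flow using a 5-tuple hash,
    considering only instances whose health probe is passing."""
    healthy = [b for b in backends if b["probe_ok"]]
    if not healthy:
        raise RuntimeError("no healthy firewall instances")
    key = "|".join(str(flow[k]) for k in ("src_ip", "src_port", "dst_ip", "dst_port", "proto"))
    digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return healthy[digest % len(healthy)]

# Hypothetical two-node Active-Active cluster.
firewalls = [
    {"name": "fw-a", "probe_ok": True},
    {"name": "fw-b", "probe_ok": True},
]

flow = {"src_ip": "10.1.0.4", "src_port": 50123,
        "dst_ip": "10.2.0.8", "dst_port": 443, "proto": "TCP"}

# The same 5-tuple always hashes to the same instance while cluster
# membership is stable, which keeps a stateful session on one firewall.
first = pick_backend(flow, firewalls)["name"]
assert pick_backend(flow, firewalls)["name"] == first

# If fw-a's health probe fails, new flows land on the remaining node.
firewalls[0]["probe_ok"] = False
assert pick_backend(flow, firewalls)["name"] == "fw-b"
```

Note that when cluster membership changes (a node fails or a new one is added), existing flows can rehash to a different instance, which is exactly why Active-Active clusters need the route and session synchronization described above.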
Network Interfaces (NICs)

Third-party firewall VMs often include multiple NICs, each dedicated to a specific type of traffic:
- Untrust/Public NIC: Connects to the Internet or external networks. Handles inbound/outbound public traffic and enforces perimeter security policies.
- Trust/Internal NIC: Connects to private VNets or subnets. Manages internal traffic between application tiers and enforces internal segmentation.
- Management NIC: Dedicated to firewall management traffic. Keeps administration separate from data-plane traffic, improving security and reducing performance interference.
- HA NIC (Active-Passive setups): Facilitates synchronization between active and passive firewall nodes, ensuring session and configuration state is maintained across failovers.

Figure: NICs of Palo Alto external firewalls and FortiGate internal firewalls in a two-sets-of-firewalls scenario.

4. Key Configuration Considerations

When deploying third-party firewalls in Azure, several design and configuration elements play a critical role in ensuring security, performance, and high availability. These considerations should be carefully aligned with organizational security policies, compliance requirements, and operational practices.

Routing

User-Defined Routes (UDRs):
- Define UDRs in spoke virtual networks to ensure all outbound traffic flows through the firewall, enforcing inspection and security policies before reaching the Internet or other virtual networks.
- Centralized routing helps standardize controls across multiple application virtual networks.
- Depending on the architecture's traffic-flow design, use the appropriate load balancer IP as the next hop in the UDRs of spoke virtual networks.

Symmetric Routing:
- Ensure traffic follows symmetric paths (i.e., outbound and inbound flows pass through the same firewall instance).
- Avoid asymmetric routing, which can cause stateful firewalls to drop return traffic.
- Leverage BGP with Azure Route Server, where supported, to simplify route propagation across hub-and-spoke topologies.

Figure: Azure UDR directing all traffic from a spoke VNet to the firewall IP address.

Policies

NAT Rules:
- Configure DNAT (Destination NAT) rules to publish applications securely to the Internet.
- Use SNAT (Source NAT) to mask private IPs when workloads access external resources.

Security Rules:
- Define granular allow/deny rules for both north-south traffic (Internet to VNet) and east-west traffic (between virtual networks or subnets).
- Ensure least privilege by allowing only required ports, protocols, and destinations.

Segmentation:
- Apply firewall policies to separate workloads, environments, and tenants (e.g., Production vs. Development).
- Enforce compliance by isolating workloads subject to regulatory standards (PCI-DSS, HIPAA, GDPR).

Application-Aware Policies:
- Many vendors support Layer 7 inspection, enabling controls based on applications, users, and content (not just IP/port).
- Integrate with identity providers (Azure AD, LDAP, etc.) for user-based firewall rules.

Figure: Example configuration of NAT rules on a Palo Alto external firewall.

Load Balancers

Internal Load Balancer (ILB):
- Use ILBs for east-west traffic inspection between virtual networks or subnets.
- Ensures that traffic between applications always passes through the firewall, regardless of origin.

External Load Balancer (ELB):
- Use ELBs for north-south traffic, handling Internet ingress and egress.
- Required in Active-Active firewall clusters to distribute traffic evenly across firewall nodes.

Other Configurations:
- Configure health probes for firewall instances so that faulty nodes are automatically bypassed.
- Validate Floating IP configuration on load-balancing rules according to the respective vendor's recommendations.
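The UDR guidance above can be illustrated with a short Python sketch of longest-prefix-match route selection; the prefixes and the firewall ILB frontend address (10.0.0.10) are hypothetical examples, not values from the blog:

```python
import ipaddress

def next_hop(dst_ip, routes):
    """Resolve the effective route for a destination using
    longest-prefix match, as Azure does when evaluating UDRs."""
    candidates = [r for r in routes
                  if ipaddress.ip_address(dst_ip) in ipaddress.ip_network(r["prefix"])]
    if not candidates:
        return None
    best = max(candidates, key=lambda r: ipaddress.ip_network(r["prefix"]).prefixlen)
    return best["next_hop"]

# Hypothetical spoke route table: 10.0.0.10 stands in for the firewall ILB frontend.
spoke_routes = [
    {"prefix": "0.0.0.0/0",   "next_hop": "10.0.0.10"},  # Internet egress -> firewall
    {"prefix": "10.0.0.0/8",  "next_hop": "10.0.0.10"},  # east-west -> firewall
    {"prefix": "10.1.0.0/16", "next_hop": "VnetLocal"},  # intra-VNet stays local
]

assert next_hop("8.8.8.8", spoke_routes) == "10.0.0.10"   # north-south inspected
assert next_hop("10.2.0.5", spoke_routes) == "10.0.0.10"  # spoke-to-spoke inspected
assert next_hop("10.1.0.9", spoke_routes) == "VnetLocal"  # local traffic bypasses
```

The same mechanics explain the asymmetric-routing pitfall: if the return path resolves to a different next hop than the forward path, the stateful firewall never sees both directions of the session.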
Identity Integration

Azure Service Principals:
- In Active-Passive deployments, configure service principals to enable automated IP reassignment during failover. This ensures continuous service availability without manual intervention.

Role-Based Access Control (RBAC):
- Integrate firewall management with Azure RBAC to control who can deploy, manage, or modify firewall configurations.

SIEM Integration:
- Stream logs to Azure Monitor, Sentinel, or third-party SIEMs for auditing, monitoring, and incident response.

Licensing

Pay-As-You-Go (PAYG):
- Licenses are bundled into the VM cost when deploying from the Azure Marketplace.
- Best for short-term projects, PoCs, or variable workloads.

Bring Your Own License (BYOL):
- Enterprises can apply existing vendor contracts and licenses to Azure deployments.
- Often more cost-effective for large-scale, long-term deployments.

Hybrid Licensing Models:
- Some vendors support license mobility from on-premises to Azure, reducing duplication of costs.

5. Common Challenges

Third-party firewalls in Azure provide strong security controls, but organizations often face practical challenges in day-to-day operations:

Misconfiguration
- Incorrect UDRs, route tables, or NAT rules can cause dropped traffic or bypassed inspection.
- Asymmetric routing is a frequent issue in hub-and-spoke topologies, leading to session drops in stateful firewalls.

Performance Bottlenecks
- Firewall throughput depends on the VM SKU (CPU, memory, NIC limits).
- Under-sizing causes latency and packet loss, while over-sizing adds unnecessary cost. Continuous monitoring and vendor sizing guides are essential.

Failover Downtime
- Active-Passive models introduce brief service interruptions while IPs and routes are reassigned.
- Some sessions may be lost even with state sync, making Active-Active more attractive for mission-critical workloads.

Backup & Recovery
- Azure Backup doesn't support vendor firewall operating systems. Configurations must be exported and stored externally (e.g., storage accounts, repos, or vendor management tools).
- Without proper backups, recovery from failures or misconfigurations can be slow.

Azure Platform Limits on Connections
- Azure imposes a per-VM cap of 250,000 active connections, regardless of what the firewall vendor appliance supports. Even if an appliance is designed for millions of sessions, it will be constrained by Azure's networking fabric.
- Hitting this cap can lead to unexplained traffic drops despite available CPU/memory. The workaround is to scale out horizontally (multiple firewall VMs behind a load balancer) and carefully monitor connection distribution.

6. Best Practices for Third-Party Firewall Deployments

To maximize the security, reliability, and performance of third-party firewalls in Azure, organizations should follow these best practices:

- Deploy in Availability Zones: Place firewall instances across different Availability Zones to ensure regional resilience and minimize downtime in case of zone-level failures.
- Prefer Active-Active for Critical Workloads: Where zero downtime is a requirement, use Active-Active clusters behind an Azure Load Balancer. Active-Passive can be simpler but introduces failover delays.
- Use Dedicated Subnets for Interfaces: Separate trust, untrust, HA, and management NICs into their own subnets. This enforces segmentation, simplifies route management, and reduces misconfiguration risk.
- Apply Least Privilege Policies: Always start with a deny-all baseline, then allow only necessary applications, ports, and protocols. Regularly review rules to avoid policy sprawl.
- Standardize Naming & Tagging: Adopt consistent naming conventions and resource tags for firewalls, subnets, route tables, and policies. This aids troubleshooting, automation, and compliance reporting.
- Validate End-to-End Traffic Flows: Test both north-south (Internet ↔ VNet) and east-west (VNet ↔ VNet/subnet) flows after deployment.
Use tools like Azure Network Watcher and vendor traffic logs to confirm inspection.
- Plan for Scalability: Monitor throughput, CPU, memory, and session counts to anticipate when scale-out or higher VM SKUs are needed. Some vendors support autoscaling clusters for bursty workloads.
- Maintain Firmware & Threat Signatures: Regularly update the firewall's software, patches, and threat-intelligence feeds to ensure protection against emerging vulnerabilities and attacks. Automate updates where possible.

Conclusion

Third-party firewalls remain a core building block in many enterprise Azure Landing Zones. They provide the deep security controls and operational familiarity enterprises need, while Azure provides the scalable infrastructure to host them. By following the hub-and-spoke architecture, carefully planning deployment models, and enforcing best practices for routing, redundancy, monitoring, and backup, organizations can ensure a secure and reliable network foundation in Azure.

Accelerate designing, troubleshooting & securing your network with Gen-AI powered tools, now GA
We are thrilled to announce the general availability of Azure Networking skills in Copilot, an extension of Copilot in Azure and Security Copilot designed to enhance the cloud networking experience. Azure Networking Copilot is set to transform how organizations design, operate, and optimize their Azure network by providing contextualized responses tailored to networking-specific scenarios and grounded in your network topology.

Unmasking DDoS Attacks (Part 1/3)
In today's always-online world, we take uninterrupted access to websites, apps, and digital services for granted. But lurking in the background is a cyber threat that can grind everything to a halt in an instant: DDoS attacks. These attacks don't sneak in to steal data or plant malware—they're all about chaos and disruption, flooding servers with so much traffic that they crash, slow down, or shut off completely.

Over the years, DDoS attacks have evolved from annoying nuisances to full-blown cyber weapons, capable of hitting massive scales—some even reaching terabit-level traffic. Companies have lost millions of dollars to downtime, and even governments and critical infrastructure have been targeted. Whether you're a CTO, a business owner, a security pro, or just someone who loves tech, understanding these attacks is key to stopping them before they cause real damage.

That's where this blog series comes in. We'll be breaking down everything you need to know about DDoS attacks—how they work, real-world examples, the latest prevention strategies, and even how you can leverage Azure services to detect and defend against them. This will be a three-part series, covering:

🔹 Unmasking DDoS Attacks (Part 1): Understanding the Fundamentals and the Attacker's Playbook
What exactly is a DDoS attack, and how does an attacker plan and execute one? In this post, we'll cover the fundamentals of DDoS attacks, explore the attacker's perspective, and break down how an attack is crafted and launched. We'll also discuss the different categories of DDoS attacks and how attackers choose which strategy to use.

🔹 Unmasking DDoS Attacks (Part 2): Analyzing Known Attack Patterns & Lessons from History
DDoS attacks come in many forms, but what are the most common and dangerous attack patterns? In this deep dive, we'll explore real-world DDoS attack patterns, categorize them based on their impact, and analyze some of the largest and most disruptive DDoS attacks in history. By learning from past attacks, we can better understand how DDoS threats evolve and what security teams can do to prepare.

🔹 Unmasking DDoS Attacks (Part 3): Detection, Mitigation, and the Future of DDoS Defense
How do you detect a DDoS attack before it causes damage, and what are the best strategies to mitigate one? In this final post, we'll explore detection techniques, proactive defense strategies, and real-time mitigation approaches. We'll also discuss future trends in DDoS attacks and evolving defense mechanisms, ensuring that businesses stay ahead of the ever-changing threat landscape.

So, without further ado, let's jump right into Part 1 and start unraveling the world of DDoS attacks.

What is a DDoS Attack?

A Denial-of-Service (DoS) attack is like an internet traffic jam, but on purpose. It's when attackers flood a website or online service with so much junk traffic that it slows down, crashes, or becomes completely unreachable for real users.

Back in the early days of the internet, pulling off a DoS attack was relatively simple. Servers were smaller, and a single computer (or maybe a handful) could send enough malicious requests to take down a website. But as technology advanced and cloud computing took over, that approach stopped being effective. Today's online services run on massive, distributed cloud networks, making them far more resilient.

So, what did attackers do? They leveled up. Instead of relying on just one machine, they started using hundreds, thousands, or even millions—all spread out across the internet. These attacks became "distributed", with waves of traffic coming from multiple sources at once. And that's how DDoS (Distributed Denial-of-Service) attacks were born.

Instead of a single attacker, imagine a botnet—an army of compromised devices (anything from hacked computers to unsecured IoT gadgets)—all working together to flood a target with traffic. The result? Even the most powerful servers can struggle to stay online.
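The arithmetic behind that "army of devices" is simple and sobering. A quick sketch, using purely illustrative numbers (not measurements from any real attack):

```python
def botnet_throughput_gbps(bots, mbps_per_bot):
    """Rough aggregate attack bandwidth: many small senders add up."""
    return bots * mbps_per_bot / 1000  # Mbit/s -> Gbit/s

# 100,000 compromised IoT devices pushing just 10 Mbit/s each
# already adds up to a terabit per second at the target.
assert botnet_throughput_gbps(100_000, 10) == 1000.0
```

No single device needs to be powerful; the scale comes entirely from the number of senders, which is why unsecured IoT gear is such attractive raw material for attackers.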
In short, a DDoS attack is just a bigger, badder version of a DoS attack, built for the modern internet. And with cloud computing making services harder to take down, attackers have only gotten more creative in their methods.

An Evolving Threat Landscape

As recently reported by Microsoft: "DDoS attacks are happening more frequently and on a larger scale than ever before. In fact, the world has seen almost a 300 percent increase in these types of attacks year over year, and it's only expected to get worse [link]". Orchestrating large-scale DDoS botnet attacks is inexpensive for attackers, often powered by compromised devices (security cameras, home routers, cable modems, IoT devices, etc.). Within the last 6 months alone, our competitors have reported the following:

- June 2023: Waves of L7 attacks on various Microsoft properties
- March 2023: Akamai – 900 Gbps DDoS attack
- February 2023: Cloudflare mitigates record-breaking 71 million request-per-second DDoS attack
- August 2022: How Google Cloud blocked the largest Layer 7 DDoS attack, at 46 million RPS

The graphs below are from an F5 Labs report.

Figure 1: Recent trends indicate that the Technology sector is one of the most targeted segments, along with Finance and Government.
Figure 2: Attacks are evolving, and a large percentage of attacks are upgrading to application DDoS or multi-vector attacks.

As DDoS attacks get bigger and more sophisticated, we need to take a defense-in-depth approach to protect our customers every step of the way. Azure services like Azure Front Door, Azure WAF, and Azure DDoS Protection are all working on various strategies to counter these emerging DDoS attack patterns. We will cover how to effectively use these services to protect your services hosted on Azure in Part 3.

Understanding DDoS Attacks: The Attacker's Perspective

There can be many motivations behind a DDoS attack, ranging from simple mischief to financial gain, political activism, or even cyber warfare.
But launching a successful DDoS attack isn't just about flooding a website with traffic—it requires careful planning, multiple test runs, and a deep understanding of how the target's infrastructure operates.

So, what does it actually mean to bring down a service? It means pushing one or more critical resources past their breaking point—until the system grinds to a halt, becomes unresponsive, or outright collapses under the pressure. Whether it's choking the network, exhausting compute power, or overloading application processes, the goal is simple: make the service so overwhelmed that legitimate users can't access it at all.

Resources Targeted During an Attack

- Network Capacity (Bandwidth and Infrastructure): The most common resource targeted in a DDoS attack. The goal is to consume all available network capacity, thereby preventing legitimate requests from getting through. This includes overwhelming routers, switches, and firewalls with excessive traffic, causing them to fail.
- Processing Power: By inundating a server with more requests than it can process, an attacker can cause it to slow down or even crash, denying service to legitimate users.
- Memory: Attackers might attempt to exhaust the server's memory capacity, causing degradation in service or outright failure.
- Disk Space and I/O Operations: An attacker could aim to consume the server's storage capacity or overwhelm its disk I/O operations, resulting in slowed system performance or denial of service.
- Connection-Based Resources: The resources that manage connections—sockets, ports, file descriptors, and connection tables in networking devices—are targeted. Overwhelming these resources can disrupt service for legitimate users.
- Application Functionality: Specific functions of a web application can be targeted to cause a denial of service. For instance, if a web application has a particularly resource-intensive operation, an attacker may repeatedly request this operation to exhaust the server's resources.
- DNS Servers: A DNS server can be targeted to disrupt the resolution of domain names to IP addresses, effectively making web services inaccessible to users.
- Zero-Day Vulnerabilities: Attackers often exploit unknown or zero-day vulnerabilities in applications or the network infrastructure as part of their attack strategy. Since these vulnerabilities are not yet known to the vendor, no patch is available, making them an attractive target for attackers.
- CDN Cache Bypass: An HTTP flood attack bypasses the web application caching system that helps manage server load.

Crafting the Attack Plan

Most modern services no longer run on a single machine in someone's basement—they are hosted on cloud providers with auto-scaling capabilities and vast network capacity. While this makes them more resilient, it does not make them invulnerable. Auto-scaling has its limits, and cloud networks are shared among millions of customers, meaning attackers can still find ways to overwhelm them.

When planning a DDoS attack, attackers first analyze the target's infrastructure to identify potential weaknesses. They then select an attack strategy designed to exploit those weak points as efficiently as possible. Different DDoS attack types target different resources and have unique characteristics. Broadly, these attack strategies can be categorized into the following types:

Volumetric Attacks

In volumetric attacks, the attacker's goal is to saturate the target's system resources by generating a high volume of traffic. To weaponize this attack, attackers usually employ botnets or compromised systems, or even use other cloud providers (paid legitimately or obtained fraudulently), to generate a large volume of traffic. The traffic is directed towards the target's network, making it difficult for legitimate traffic to reach the services.
Examples: SYN Flood, UDP Flood, ICMP Flood, DNS Flood, HTTP Flood. Amplification Attacks Amplification attacks are a cunning tactic where attackers seek to maximize the impact of their actions without expending significant resources. Through crafty exploitation of vulnerabilities or features in systems, such as using reflection-based methods or taking advantage of application-level weaknesses, they make small queries or requests that produce disproportionately large responses or resource consumption on the target's side. Examples: DNS Amplification, NTP Amplification, Memcached Reflection Low and Slow Attacks Non-volumetric exhaustion attacks focus on depleting specific resources within a system or network rather than inundating it with sheer volume of traffic. By exploiting inherent limitations or design aspects, these attacks selectively target elements such as connection tables, CPU, or memory, leading to resource exhaustion without the need for high volume of traffic, making this a very attractive strategy for attackers. Attacks, such as Slowloris and RUDY, subtly deplete server resources like connections or CPU by mimicking legitimate traffic, making them difficult to detect. Examples: Slowloris, R-U-Dead-Yet? (RUDY). Vulnerability-Based Attacks Instead of relying on sheer traffic volume, these attacks exploit known vulnerabilities in software or services. The goal isn’t just to overwhelm resources but to crash, freeze, or destabilize a system by taking advantage of flaws in how it processes certain inputs. This type of attack is arguably the hardest to craft because it requires deep knowledge of the technology stack a service is running on. Attackers must painstakingly research software versions, configurations, and known vulnerabilities, then carefully craft malicious “poison pill” requests designed to trigger a failure. It’s a game of trial and error, often requiring multiple test runs before finding a request that successfully brings down the system. 
It's also one of the most difficult attacks to defend against. Unlike volumetric attacks, which flood a service with traffic that security tools can detect, a vulnerability-based attack can cause a software crash so severe that it prevents the system from even generating logs or attack traffic metrics. Without visibility into what happened, detection and mitigation become incredibly challenging.

Examples: Apache Killer, Log4Shell.

Executing The Attack

Now that an attacker has finalized their attack strategy and identified which resource(s) to exhaust, they still need a way to execute the attack. They need the right tools and infrastructure to generate the overwhelming force required to bring a target down. Attackers have multiple options depending on their technical skills, resources, and objectives:

- Booters & Stressers – Renting attack power from popular botnets.
- Amplification attacks – Leveraging publicly available services (like DNS or NTP servers) to amplify attack traffic.
- Cloud abuse – Hijacking cloud VMs or misusing free-tier compute resources to generate attacks.

But when it comes to executing large-scale, persistent, and devastating DDoS attacks, one method stands above the rest: botnets.

Botnets: The Powerhouse Behind Modern DDoS Attacks

A botnet is a network of compromised devices—computers, IoT gadgets, cloud servers, and even smartphones—all controlled by an attacker. These infected devices (known as bots or zombies) remain unnoticed by their owners while quietly waiting for attack commands. Botnets revolutionized DDoS attacks, making them:

- Massive in scale – Some botnets include millions of infected devices, generating terabits of attack traffic.
- Hard to block – Since the traffic comes from real, infected machines, it's difficult to filter out malicious requests.
- Resilient – Even if some bots are shut down, the remaining network continues the attack.

But how do attackers build, control, and launch a botnet-driven DDoS attack?
The secret lies in Command and Control (C2) systems.

How a Botnet Works: Inside the Attacker's Playbook

Infecting Devices: Building the Army

- Attackers spread malware through phishing emails, malicious downloads, unsecured APIs, or IoT vulnerabilities.
- Once infected, a device becomes a bot, silently connecting to the botnet's network.
- IoT devices (smart cameras, routers, smart TVs) are especially vulnerable due to poor security.

Command & Control (C2) – The Brain of the Botnet

A botnet needs a Command & Control (C2) server, which acts as its central command center. The attacker sends instructions through the C2 server, telling bots when, where, and how to attack.

Types of C2 models:

- Centralized C2 – A single server controls all bots (easier to take down, but simpler for the attacker to manage).
- Peer-to-Peer (P2P) C2 – Bots communicate among themselves, making takedowns much harder.
- Fast Flux C2 – C2 infrastructure constantly changes IP addresses to avoid detection.

Launching the Attack: Overwhelming the Target

- When the attacker gives the signal, the botnet unleashes the attack.
- Bots flood the target with traffic, connection requests, or amplification exploits.
- Since the traffic comes from thousands of real, infected devices, distinguishing attackers from normal users is extremely difficult.
- Botnets use encryption, proxy networks, and C2 obfuscation to stay online.
- Some botnets use hijacked cloud servers to further hide their origins.

Famous Botnets & Their Impact

- Mirai (2016) – One of the most infamous botnets, Mirai infected IoT devices to launch a 1.2 Tbps DDoS attack, taking down Dyn DNS and causing major outages across Twitter, Netflix, and Reddit.
- Mozi (2020–Present) – A peer-to-peer botnet with millions of IoT bots worldwide.
- Meris (2021) – Hit 2.5 million RPS (requests per second), setting records for application-layer attacks.

Botnets have transformed DDoS attacks, making them larger, harder to stop, and widely available on the dark web.
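To make the detection challenge concrete, here is a toy per-source rate flagger. It is a deliberately naive sketch (the class name, window, and threshold are all hypothetical choices, not any vendor's algorithm), and its weakness illustrates the problem: a large botnet can keep every individual bot under the threshold.

```python
from collections import defaultdict, deque

class RateFlagger:
    """Toy detector: flag a source IP whose request count within a sliding
    time window exceeds a threshold. Real botnet traffic is far harder to
    separate from legitimate users, which is exactly the point."""

    def __init__(self, window_seconds: float = 10.0, max_requests: int = 100):
        self.window = window_seconds
        self.max_requests = max_requests
        self.hits = defaultdict(deque)  # source IP -> timestamps of recent requests

    def observe(self, src_ip: str, now: float) -> bool:
        """Record a request; return True if the source now looks abusive."""
        q = self.hits[src_ip]
        q.append(now)
        while q and now - q[0] > self.window:  # drop timestamps outside the window
            q.popleft()
        return len(q) > self.max_requests
```

With 100,000 bots each sending a handful of requests per window, no single source trips a threshold like this, even though the aggregate load is crushing; that is why botnet traffic is so hard to filter at the per-IP level.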
With billions of internet-connected devices, botnets are only growing in size and sophistication. We will cover strategies for botnet detection and the mitigations employed by Azure Front Door and Azure WAF against such large DDoS attacks.

Wrapping Up Part-1

With that, we've come to the end of Part 1 of our Unmasking DDoS Attacks series. To summarize, we've covered:

✅ The fundamentals of DDoS attacks—what they are and why they're dangerous.
✅ The different categories of DDoS attacks—understanding how they overwhelm resources.
✅ The attacker's perspective—how DDoS attacks are planned, strategized, and executed.
✅ The role of botnets—why they are the most powerful tool for large-scale attacks.

This foundational knowledge is critical to understanding the bigger picture of DDoS threats—but there's still more to uncover. Stay tuned for Part 2, where we'll dive deeper into well-known DDoS attack patterns, examine some of the biggest DDoS incidents in history, and explore what lessons we can learn from past attacks to better prepare for the future.

See you in Part 2!

Mastering Azure at Scale: Why AVNM Is a Game-Changer for Network Management
In today's dynamic cloud-first landscape, managing distributed and large-scale network infrastructures has become increasingly complex. As enterprises expand their digital footprint across multiple Azure subscriptions and regions, the demand for centralized, scalable, and policy-driven network governance becomes critical. Azure Virtual Network Manager (AVNM) emerges as a strategic solution enabling unified control, automation, and security enforcement across diverse environments.

Key Challenges in Large-Scale Network Management

- Managing multiple VNets increases operational complexity.
- Decentralized security approaches may leave vulnerabilities.
- Validating network changes becomes more critical as VNets grow.
- IP address consumption requires efficient management solutions.
- Complex network topologies demand simplified peering strategies.

What is Azure Virtual Network Manager (AVNM)?

AVNM is a centralized network management service in Azure that allows you to:

🌐 Group and manage virtual networks across multiple subscriptions and regions.
🔐 Apply security and connectivity configurations (like hub-and-spoke or mesh topologies) at scale.
⚙️ Automate network configuration using static or dynamic membership (via Azure Policy).
📊 Enforce organization-wide security rules that can override standard NSG rules.
🛠️ Deploy changes regionally with control over rollout sequence and frequency.

AVNM is especially useful for large enterprises managing complex, multi-region Azure environments, helping reduce operational overhead and improve consistency in network security and connectivity.

Use Cases of AVNM

1) Network Segmentation and Connectivity

AVNM allows segmentation of networks into development, production, testing, or team-based groups, and supports both static and dynamic grouping. Connectivity configuration features include hub-and-spoke, mesh, and direct-connectivity topologies, simplifying the creation and management of complex network topologies.
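A quick bit of combinatorics shows why managed topologies matter at scale. The figures below are simple arithmetic, not AVNM output:

```python
def mesh_links(n: int) -> int:
    """Full mesh: every VNet peers with every other VNet -> n*(n-1)/2 links."""
    return n * (n - 1) // 2

def hub_and_spoke_links(n_spokes: int) -> int:
    """Hub-and-spoke: each spoke peers only with the hub -> one link per spoke."""
    return n_spokes

# 50 VNets meshed together need 1225 peering links, while 49 spokes plus a
# hub need only 49, which is why AVNM's managed topologies reduce overhead.
print(mesh_links(50), hub_and_spoke_links(49))
```

AVNM's mesh configuration manages that quadratic growth for you, and its hub-and-spoke-with-direct-connectivity option gives spokes mesh-like low-latency paths without hand-maintaining every peering.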
Network Segmentation Features

- Segment networks into Dev, Prod, Test, or team-based groups.
- Group VNets and subnets at the subscription, management group, or tenant level.
- Use static or dynamic grouping by name or tags.
- The Basic Editor allows editing ID, tag, resource group, and subscription conditions through a GUI.
- The Advanced Editor specifies flexible criteria with JSON.
- Membership changes are reflected automatically in configurations.
- Apply configurations to network groups.

Network Group Management

- Simplified management of network groups.
- Manually pick VNets for static membership.
- Azure Policy notifies AVNM for dynamic membership.
- Diagrams illustrate segmentation into multiple production nodes, and a flowchart shows managing VNets and applying dynamic membership through Azure Policy.

Connectivity Configuration Features

- Create different topologies with a few clicks.
- Topologies include hub-and-spoke, mesh, and hub-and-spoke with direct connectivity.
- Diagrams illustrate connections between nodes.
- Use cases highlight gateways, firewalls, and common infrastructure shared by spoke virtual networks in the hub.
- Direct connectivity means fewer hops and lower latency across regions, subscriptions, and tenants.

2) Security Configuration

Admin rules enforce organizational-level security policies, are applied to all resources in the desired network groups, and can overwrite conflicting rules. User rules managed by AVNM allow micro-segmentation and conflict-free, modular rules. Security admin rules work with NSGs and are evaluated prior to NSG rules, ensuring consistent, high-priority network security policies globally.

Admin Rules and User Rules

Admin rules target network admins and central governance teams, applying security policies at a high level and automatically to new VMs. User rules, managed by AVNM, are designed for product and service teams, enabling micro-segmentation and modular, conflict-free rules.
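The evaluation order described above can be sketched as a simplified decision function. This is a conceptual model of the documented three-action behavior (Allow, Deny, Always Allow), not the actual Azure rule engine:

```python
def evaluate(admin_action, nsg_allows):
    """Simplified model of how a security admin rule interacts with NSG rules.

    admin_action: 'allow', 'deny', 'always_allow', or None when no admin
                  rule matches the traffic.
    nsg_allows:   whether the workload team's NSG rules would permit it.
    """
    if admin_action == "deny":
        return False   # organizational Deny stops traffic; NSGs are never consulted
    if admin_action == "always_allow":
        return True    # Always Allow bypasses NSG evaluation entirely
    # 'allow' (or no matching admin rule): traffic falls through to the NSG
    return nsg_allows

# A team's permissive NSG rule cannot override an organizational Deny:
print(evaluate("deny", nsg_allows=True))
```

This is the "guard rail" pattern: central teams set non-overridable boundaries with admin rules, while workload teams keep NSG-level flexibility inside them.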
Security Admin Rules and NSGs

Security admin rules are evaluated before NSG rules, ensuring high-priority network security. They can allow, deny, or always allow traffic. Allowed and denied traffic can be viewed in VNet flow logs.

Comparison of Security Admin Rules and NSG Rules

Security admin rules, applied by network admins, enforce organizational security with higher priority. NSG rules, applied by individual teams, offer flexibility within those guard rails and operate with lower priority.

3) Virtual Network Verifier

AVNM Network Verifier prevents errors through compliance checking, simplifies debugging, enhances network management, allows role-based access, and offers detailed analysis and reporting.

Virtual Network Verifier's Goal

Virtual Network Verifier aims to assist with virtual network management by providing diagnostics, security compliance, and guard-rail checks. A diagram shows a virtual network setup involving ExpressRoute, VNet connections, Virtual WAN, and SD-WAN branches with remote users; an illustration uses checkmarks for successful verification and a red cross for failures.

Benefits of Using AVNM Network Verifier

AVNM Network Verifier offers verification of network configurations, simplified debugging, enhanced network management, flexible and delegated access, and detailed analysis and reporting. A diagram visually represents the scope of the AVNM instance and its role in network verification.

4) IPAM

AVNM IPAM creates pools for IP address planning, automatically assigns non-overlapping CIDRs, reserves IPs for specific demands, and prevents Azure address space from overlapping with on-premises or multi-cloud environments.
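The non-overlapping CIDR assignment at the heart of IPAM can be illustrated with Python's standard `ipaddress` module. This is a minimal sketch of the idea (the function name and pool sizes are illustrative), not AVNM's actual allocation logic:

```python
import ipaddress

def first_free_cidr(pool: str, allocated: list, prefixlen: int):
    """Return the first /prefixlen block in the pool that does not overlap
    any already-allocated CIDR, mimicking non-overlapping assignment."""
    pool_net = ipaddress.ip_network(pool)
    taken = [ipaddress.ip_network(c) for c in allocated]
    for candidate in pool_net.subnets(new_prefix=prefixlen):
        if not any(candidate.overlaps(t) for t in taken):
            return str(candidate)
    return None  # pool exhausted

# With 10.0.0.0/24 and 10.0.1.0/24 taken, the next free /24 is 10.0.2.0/24:
print(first_free_cidr("10.0.0.0/16", ["10.0.0.0/24", "10.0.1.0/24"], 24))
```

Managing this centrally, instead of letting each team pick address space by hand, is what prevents the overlapping-VNet and overlapping-on-premises-range problems the section describes.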
Benefits of Using AVNM IPAM

AVNM IPAM benefits include creating pools for IP address planning, automatically assigning non-overlapping CIDRs, reserving IPs for specific demands, preventing Azure address space from overlapping with on-premises or multi-cloud environments, and enforcing that users create VNets with non-overlapping CIDRs. A diagram shows the distribution of IP pools between two organizations, highlighting the non-overlapping nature of their CIDRs.

View Allocations for an IP Address Pool

Users can view allocation details for an IP address pool, including how many IPs are allocated to subnets, CIDR blocks, etc., and how many are consumed by Azure resources such as VMs or VM Scale Sets. A screenshot of the Azure portal shows the allocation and usage of IP addresses within an address pool.

Historically, AVNM pricing was structured around Azure subscriptions, which often limited flexibility for customers managing diverse network scopes. With the latest update, pricing is now aligned to virtual networks, giving customers greater control and adaptability in how they adopt AVNM across their environments. https://learn.microsoft.com/en-us/azure/virtual-network-manager/overview#pricing

Azure Virtual Network Manager (AVNM) is a game-changer for network management, offering centralized control, enhanced security, and simplified connectivity configuration. Whether you're managing a small network or a complex multi-region environment, AVNM provides the tools and features needed to streamline operations and ensure robust network security.

Enhance your cloud resources' security posture with network security perimeter
With network security perimeter, we are bringing in the ability to define a logical network boundary for PaaS resources deployed outside customer virtual networks and to secure their public connectivity.
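Conceptually, once a perimeter is enforced, public traffic to the resources inside it is denied by default and admitted only through explicit access rules, such as inbound rules scoped to public IP ranges. The sketch below models that gating behavior with the standard `ipaddress` module; it is an illustration of the concept, not the Azure implementation or API:

```python
import ipaddress

def inbound_allowed(source_ip: str, allowed_ranges: list) -> bool:
    """Toy model of an inbound access rule on a network security perimeter:
    traffic from outside the perimeter is admitted only if its public source
    IP falls inside one of the rule's address ranges; otherwise it is denied."""
    ip = ipaddress.ip_address(source_ip)
    return any(ip in ipaddress.ip_network(r) for r in allowed_ranges)

print(inbound_allowed("20.50.1.7", ["20.50.0.0/16"]))   # inside an allowed range
print(inbound_allowed("52.10.1.7", ["20.50.0.0/16"]))   # unlisted source: denied
```

This deny-by-default posture is what makes the perimeter a boundary: anything not explicitly admitted by an inbound rule (or sent to a destination permitted by an outbound rule) is blocked once enforcement is on.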