# Cloud security best practices
## Mastering Azure Queries: Skip Token and Batching for Scale
Let's be honest. As a cloud engineer or DevOps professional managing a large Azure environment, running even a simple resource inventory query can feel like drinking from a firehose. You hit API limits, face slow performance, and struggle to get the complete picture of your estate—all because the data volume is overwhelming. But it doesn't have to be this way! This blog is your practical, hands-on guide to mastering two essential techniques for handling massive data volumes with PowerShell and Azure Resource Graph (ARG): Skip Token (for full data retrieval) and Batching (for blazing-fast performance).

### 📋 Table of Contents

- 🚀 GETTING STARTED
  - Prerequisites: PowerShell 7+ & Az.ResourceGraph Module
  - Introduction: Why Standard Queries Fail at Scale
- 📖 CORE CONCEPTS
  - 📑 Skip Token: The Data Completeness Tool
    - What is a Skip Token?
    - The Bookmark Analogy
    - PowerShell Implementation
    - 💻 Code Example: Pagination Loop
  - ⚡ Batching: The Performance Booster
    - What is Batching?
    - Performance Benefits
    - Batching vs. Pagination
    - Parallel Processing in PowerShell
    - 💻 Code Example: Concurrent Queries
- 🔍 DEEP DIVE
  - Skip Token: Generic vs. Azure-Specific
  - Azure Resource Graph (ARG) at Scale
    - ARG Overview
    - Why ARG Needs These Techniques
    - 💻 Combined Example: Skip Token + Batching
- ✅ BEST PRACTICES
  - When to Use Each Technique
  - Quick Reference Guide
- 📚 RESOURCES
  - Official Documentation & References

### Prerequisites

| Component | Requirement / Details | Command / Reference |
|---|---|---|
| PowerShell Version | The batching examples use `ForEach-Object -Parallel`, which requires PowerShell 7.0 or later. | Check version: `$PSVersionTable.PSVersion`. Install PowerShell 7+: Install PowerShell on Windows, Linux, and macOS |
| Azure PowerShell Module | The Az.ResourceGraph module must be installed. | Install module: `Install-Module -Name Az.ResourceGraph -Scope CurrentUser` |

### Introduction: Why Standard Queries Don't Work at Scale

When you query a service designed for big environments, like Azure Resource Graph, you face two limits:

- **Result Limits (Pagination):** APIs won't send you millions of records at once. They cap the result size (often 1,000 items) and stop.
- **Efficiency Limits (Throttling):** Sending a huge number of individual requests is slow and can cause the API to temporarily block you (throttling).

Skip Token solves the first limit by making sure you retrieve all results. Batching solves the second by grouping your requests to improve performance.

### Understanding Skip Token: The Continuation Pointer

**What is a Skip Token?** A Skip Token (or continuation token) is a unique string value returned by an Azure API when a query result exceeds the maximum limit for a single response. Think of the Skip Token as a "bookmark" that tells Azure where your last page ended, so you can pick up exactly where you left off in the next API call. Instead of getting cut off after 1,000 records, the API gives you the first 1,000 results plus the Skip Token. You use this token in the next request to get the next page of data. This process is called pagination.

**Skip Token in Practice with PowerShell.** To get the complete dataset, you must use a loop that repeatedly calls the API, providing the token each time until the token is no longer returned.

PowerShell example: using Skip Token to loop through pages.

```powershell
# Define the query
$Query = "Resources | project name, type, location"
$PageSize = 1000
$AllResults = @()
$SkipToken = $null # Initialize the token

Write-Host "Starting ARG query..."
do {
    Write-Host "Fetching next page. (Token check: $($SkipToken -ne $null))"

    # 1. Execute the query, using the -SkipToken parameter
    $ResultPage = Search-AzGraph -Query $Query -First $PageSize -SkipToken $SkipToken

    # 2. Add the current page results to the main array
    $AllResults += $ResultPage

    # 3. Get the token for the next page, if it exists
    $SkipToken = $ResultPage.SkipToken

    Write-Host " -> Items in this page: $($ResultPage.Count). Total retrieved: $($AllResults.Count)"
} while ($SkipToken -ne $null) # Loop as long as a Skip Token is returned

Write-Host "Query finished. Total resources found: $($AllResults.Count)"
```

This do-while loop is the reliable way to ensure you retrieve every item in a large result set.

### Understanding Batching: Grouping Requests

**What is Batching?** Batching means taking several independent requests and combining them into a single API call. Instead of making N separate network requests for N pieces of data, you make one request containing all N sub-requests. Batching is primarily used for performance. It improves efficiency by:

- **Reducing Overhead:** Fewer separate network connections are needed.
- **Lowering Throttling Risk:** Fewer overall API calls are made, which helps you stay under rate limits.

| Feature | Batching | Pagination (Skip Token) |
|---|---|---|
| Goal | Improve efficiency/speed. | Retrieve all data completely. |
| Input | Multiple different queries. | Single query, continuing from a marker. |
| Result | One response with results for all grouped queries. | Partial results with a token for the next step. |

> Note: While Azure Resource Graph's REST API supports batch requests, the PowerShell `Search-AzGraph` cmdlet does not expose a `-Batch` parameter. Instead, we achieve batching by using PowerShell's `ForEach-Object -Parallel` (PowerShell 7+) to run multiple queries simultaneously.

**Batching in Practice with PowerShell.** Using parallel processing in PowerShell, you can efficiently execute multiple distinct Kusto queries targeting different scopes (such as subscriptions) simultaneously.

| Method | 5 Subscriptions | 20 Subscriptions |
|---|---|---|
| Sequential | ~50 seconds | ~200 seconds |
| Parallel (ThrottleLimit 5) | ~15 seconds | ~45 seconds |

PowerShell example: running multiple queries in parallel.

```powershell
# Define multiple queries to run together
$BatchQueries = @(
    @{
        Query         = "Resources | where type =~ 'Microsoft.Compute/virtualMachines'"
        Subscriptions = @("SUB_A") # Query 1 Scope
    },
    @{
        Query         = "Resources | where type =~ 'Microsoft.Network/publicIPAddresses'"
        Subscriptions = @("SUB_B", "SUB_C") # Query 2 Scope
    }
)

Write-Host "Executing batch of $($BatchQueries.Count) queries in parallel..."

# Execute queries in parallel (true batching)
$BatchResults = $BatchQueries | ForEach-Object -Parallel {
    $QueryConfig = $_
    $Query = $QueryConfig.Query
    $Subs  = $QueryConfig.Subscriptions

    Write-Host "[Batch Worker] Starting query: $($Query.Substring(0, [Math]::Min(50, $Query.Length)))..." -ForegroundColor Cyan

    $QueryResults = @()

    # Process each subscription in this query's scope
    foreach ($SubId in $Subs) {
        $SkipToken = $null
        do {
            $Params = @{
                Query        = $Query
                Subscription = $SubId
                First        = 1000
            }
            if ($SkipToken) { $Params['SkipToken'] = $SkipToken }

            $Result = Search-AzGraph @Params
            if ($Result) { $QueryResults += $Result }
            $SkipToken = $Result.SkipToken
        } while ($SkipToken)
    }

    Write-Host " [Batch Worker] ✅ Query completed - Retrieved $($QueryResults.Count) resources" -ForegroundColor Green

    # Return results with metadata
    [PSCustomObject]@{
        Query         = $Query
        Subscriptions = $Subs
        Data          = $QueryResults
        Count         = $QueryResults.Count
    }
} -ThrottleLimit 5

Write-Host "`nBatch complete. Reviewing results..."

# The results are returned in the same order as the input array
$VMCount = $BatchResults[0].Data.Count
$IPCount = $BatchResults[1].Data.Count
Write-Host "Query 1 (VMs) returned: $VMCount results."
Write-Host "Query 2 (IPs) returned: $IPCount results."

# Optional: Display detailed results
Write-Host "`n--- Detailed Results ---"
for ($i = 0; $i -lt $BatchResults.Count; $i++) {
    $Result = $BatchResults[$i]
    Write-Host "`nQuery $($i + 1):"
    Write-Host "  Query: $($Result.Query)"
    Write-Host "  Subscriptions: $($Result.Subscriptions -join ', ')"
    Write-Host "  Total Resources: $($Result.Count)"
    if ($Result.Data.Count -gt 0) {
        Write-Host "  Sample (first 3):"
        $Result.Data | Select-Object -First 3 | Format-Table -AutoSize
    }
}
```

### Azure Resource Graph (ARG) and Scale

Azure Resource Graph (ARG) is a service built for querying resource properties quickly across a large number of Azure subscriptions using the Kusto Query Language (KQL). Because ARG is designed for large scale, it fully supports Skip Token and Batching:

- **Skip Token:** ARG automatically generates and returns the token when a query exceeds its result limit (e.g., 1,000 records).
- **Batching:** ARG's REST API provides a batch endpoint for sending up to ten queries in a single request. In PowerShell, we achieve similar performance benefits using `ForEach-Object -Parallel` to process multiple queries concurrently.

### Combined Example: Batching and Skip Token Together

This script shows how to use Batching to start a query across multiple subscriptions and then use Skip Token within the loop to ensure every subscription's data is fully retrieved.

```powershell
$SubscriptionIDs = @("SUB_A")
$KQLQuery = "Resources | project id, name, type, subscriptionId"

Write-Host "Starting BATCHED query across $($SubscriptionIDs.Count) subscription(s)..."
Write-Host "Using parallel processing for true batching...`n"

# Process subscriptions in parallel (batching)
$AllResults = $SubscriptionIDs | ForEach-Object -Parallel {
    $SubId = $_
    $Query = $using:KQLQuery
    $SubResults = @()

    Write-Host "[Batch Worker] Processing Subscription: $SubId" -ForegroundColor Cyan

    $SkipToken = $null
    $PageCount = 0
    do {
        $PageCount++

        # Build parameters
        $Params = @{
            Query        = $Query
            Subscription = $SubId
            First        = 1000
        }
        if ($SkipToken) { $Params['SkipToken'] = $SkipToken }

        # Execute query
        $Result = Search-AzGraph @Params
        if ($Result) {
            $SubResults += $Result
            Write-Host " [Batch Worker] Sub: $SubId - Page $PageCount - Retrieved $($Result.Count) resources" -ForegroundColor Yellow
        }
        $SkipToken = $Result.SkipToken
    } while ($SkipToken)

    Write-Host " [Batch Worker] ✅ Completed $SubId - Total: $($SubResults.Count) resources" -ForegroundColor Green

    # Return results from this subscription
    $SubResults
} -ThrottleLimit 5 # Process up to 5 subscriptions simultaneously

Write-Host "`n--- Batch Processing Finished ---"
Write-Host "Final total resource count: $($AllResults.Count)"

# Optional: Display sample results
if ($AllResults.Count -gt 0) {
    Write-Host "`nFirst 5 resources:"
    $AllResults | Select-Object -First 5 | Format-Table -AutoSize
}
```

### Quick Reference Guide

| Technique | Use When... | Common Mistake | Actionable Advice |
|---|---|---|---|
| Skip Token | You must retrieve all data items and expect more than 1,000 results. | Forgetting to check for the token, so you only get partial data. | Always use a do-while loop to guarantee you get the complete set. |
| Batching | You need to run several separate queries (max 10 in ARG) efficiently. | Putting too many queries in the batch, causing the request to fail. | Group up to 10 logical queries or subscriptions into one fast request. |

Because throttling can still interrupt long pagination runs, a minimal retry sketch follows below.
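The following is a minimal sketch, not part of the original walkthrough, of wrapping a single page fetch in retries with exponential backoff; the function name and the attempt count are illustrative assumptions.

```powershell
# Hedged sketch: retry one Search-AzGraph page fetch with exponential backoff.
# The function name and MaxAttempts default are hypothetical, not from the post.
function Invoke-ArgQueryWithRetry {
    param(
        [string]$Query,
        [string]$SkipToken,
        [int]$MaxAttempts = 4
    )
    for ($Attempt = 1; $Attempt -le $MaxAttempts; $Attempt++) {
        try {
            $Params = @{ Query = $Query; First = 1000 }
            if ($SkipToken) { $Params['SkipToken'] = $SkipToken }
            return Search-AzGraph @Params
        }
        catch {
            if ($Attempt -eq $MaxAttempts) { throw }
            $Delay = [Math]::Pow(2, $Attempt) # 2, 4, 8 seconds between attempts
            Write-Host "Attempt $Attempt failed: $($_.Exception.Message). Retrying in $Delay s..."
            Start-Sleep -Seconds $Delay
        }
    }
}
```

Each page fetch in the loops above could then call `Invoke-ArgQueryWithRetry` instead of calling `Search-AzGraph` directly.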
By combining Skip Token for data completeness and Batching for efficiency, you can confidently query massive Azure estates without hitting limits or missing data. These two techniques — when used together — turn Azure Resource Graph from a "good tool" into a scalable discovery engine for your entire cloud footprint.

### Summary: Skip Token and Batching in Azure Resource Graph

**Goal:** Efficiently query massive Azure environments using PowerShell and Azure Resource Graph (ARG).

**1. Skip Token (The Data Completeness Tool)**

| Concept | What it Does | Why it Matters | PowerShell Use |
|---|---|---|---|
| Skip Token | A marker returned by Azure APIs when results hit the 1,000-item limit. It points to the next page of data. | Ensures you retrieve all records, avoiding incomplete data (pagination). | Use a do-while loop with the `-SkipToken` parameter in `Search-AzGraph` until the token is no longer returned. |

**2. Batching (The Performance Booster)**

| Concept | What it Does | Why it Matters | PowerShell Use |
|---|---|---|---|
| Batching | Processes multiple independent queries simultaneously using parallel execution. | Drastically improves query speed by reducing overall execution time and helps avoid API throttling. | Use `ForEach-Object -Parallel` (PowerShell 7+) with `-ThrottleLimit` to control concurrent queries. For PowerShell 5.1, use `Start-Job` with background jobs (see the sketch after the references). |

**3. Best Practice: Combine Them**

For maximum efficiency, combine Batching and Skip Token. Use batching to run queries across multiple subscriptions simultaneously, and use the Skip Token logic within each loop to ensure every single subscription's data is fully paginated and retrieved.

**Result:** Fast, complete, and reliable data collection across your large Azure estate.

References: Azure Resource Graph documentation; Search-AzGraph PowerShell reference
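The summary above notes that Windows PowerShell 5.1 lacks `ForEach-Object -Parallel`. As a hedged sketch of the `Start-Job` alternative, the snippet below fans the same paginated query out across background jobs; the subscription IDs and query are placeholders, and it assumes `Enable-AzContextAutosave` has been run so each job process inherits your Azure sign-in.

```powershell
# Hypothetical PowerShell 5.1 variant: one background job per subscription.
# Assumes Enable-AzContextAutosave so jobs inherit the Azure context.
$Jobs = foreach ($Sub in @("SUB_A", "SUB_B")) {
    Start-Job -ArgumentList $Sub -ScriptBlock {
        param($SubId)
        Import-Module Az.ResourceGraph

        $Results = @()
        $SkipToken = $null
        do {
            $Params = @{
                Query        = "Resources | project id, name, type"
                Subscription = $SubId
                First        = 1000
            }
            if ($SkipToken) { $Params['SkipToken'] = $SkipToken }

            $Page = Search-AzGraph @Params
            $Results += $Page
            $SkipToken = $Page.SkipToken
        } while ($SkipToken)
        $Results
    }
}

$AllResults = $Jobs | Wait-Job | Receive-Job
Write-Host "Total resources: $($AllResults.Count)"
```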
## Infrastructure Landing Zone - Implementation Decision-Making

🚀 Struggling to choose the right Infrastructure Landing Zone for your Azure deployment? This guide breaks down each option—speed, flexibility, security, and automation—so you can make the smartest decision for your cloud journey. Don't risk costly mistakes—read now and build with confidence! 🔥
## Microsoft Azure Cloud HSM is now generally available

Microsoft Azure Cloud HSM is now generally available. Azure Cloud HSM is a highly available, FIPS 140-3 Level 3 validated single-tenant hardware security module (HSM) service designed to meet the highest security and compliance standards. With full administrative control over their HSM, customers can securely manage cryptographic keys and perform cryptographic operations within their own dedicated Cloud HSM cluster.

In today's digital landscape, organizations face an unprecedented volume of cyber threats, data breaches, and regulatory pressures. At the heart of securing sensitive information lies a robust key management and encryption strategy, which ensures that data remains confidential, tamper-proof, and accessible only to authorized users. However, encryption alone is not enough; how cryptographic keys are managed determines the true strength of security. Every interaction in the digital world relies on cryptographic keys, from processing financial transactions and securing applications like PKI, database encryption, and document signing, to securing cloud workloads and authenticating users. A poorly managed key is a security risk waiting to happen. Without a clear key management strategy, organizations face challenges such as data exposure, regulatory non-compliance, and operational complexity.

An HSM is a cornerstone of a strong key management strategy, providing physical and logical security to safeguard cryptographic keys. HSMs are purpose-built devices designed to generate, store, and manage encryption keys in a tamper-resistant environment, ensuring that even in the event of a data breach, protected data remains unreadable. As cyber threats evolve, organizations must take a proactive approach to securing data with enterprise-grade encryption and key management solutions. Microsoft Azure Cloud HSM empowers businesses to meet these challenges head-on, ensuring that security, compliance, and trust remain non-negotiable priorities in the digital age.

### Key Features of Azure Cloud HSM

Azure Cloud HSM ensures high availability and redundancy by automatically clustering multiple HSMs and synchronizing cryptographic data across three instances, eliminating the need for complex configurations. It optimizes performance through load balancing of cryptographic operations, reducing latency. Periodic backups enhance security by safeguarding cryptographic assets and enabling seamless recovery. Designed to meet FIPS 140-3 Level 3, it provides robust security for enterprise applications.

### Ideal use cases for Azure Cloud HSM

Azure Cloud HSM is ideal for organizations migrating security-sensitive applications from on-premises to Azure Virtual Machines, or transitioning from Azure Dedicated HSM or AWS CloudHSM to a fully managed Azure-native solution. It supports applications requiring PKCS#11, OpenSSL, and JCE for seamless cryptographic integration and enables running shrink-wrapped software such as Apache/Nginx SSL offload, Microsoft SQL Server/Oracle TDE, and ADCS on Azure VMs. It also supports tools and applications that require document and code signing.

### Get started with Azure Cloud HSM

Ready to deploy Azure Cloud HSM? Learn more and start building today: Get Started Deploying Azure Cloud HSM. Customers can download the Azure Cloud HSM SDK and Client Tools from GitHub: Microsoft Azure Cloud HSM SDK.

Stay tuned for further updates as we continue to enhance Microsoft Azure Cloud HSM to support your most demanding security and compliance needs.
## Building Azure Right: A Practical Checklist for Infrastructure Landing Zones

### When the Gaps Start Showing

A few months ago, we walked into a high-priority Azure environment review for a customer dealing with inconsistent deployments and rising costs. After a few discovery sessions, the root cause became clear: while they had resources running, there was no consistent foundation behind them. No standard tagging. No security baseline. No network segmentation strategy. In short—no structured Landing Zone.

That situation isn't uncommon. Many organizations sprint into Azure workloads without first planning the right groundwork. That's why having a clear, structured implementation checklist for your Landing Zone is so essential.

### What This Checklist Will Help You Do

This implementation checklist isn't just a formality. It's meant to help teams:

- Align cloud implementation with business goals
- Avoid compliance and security oversights
- Improve visibility, governance, and operational readiness
- Build a scalable and secure foundation for workloads

Let's break it down, step by step.

### 🎯 Define Business Priorities Before Touching the Portal

Before provisioning anything, work with stakeholders to understand:

- What outcomes matter most – Scalability? Faster go-to-market? Cost optimization?
- What constraints exist – Regulatory standards, data sovereignty, security controls
- What must not break – Legacy integrations, authentication flows, SLAs

This helps prioritize cloud decisions based on value rather than assumption.

### 🔍 Get a Clear Picture of the Current Environment

Your approach will differ depending on whether it's a:

- Greenfield setup (fresh, no legacy baggage)
- Brownfield deployment (existing workloads to assess and uplift)

For brownfield, audit gaps in areas like scalability, identity, and compliance before any new provisioning.

### 📜 Lock Down Governance Early

Set standards from day one:

- Role-Based Access Control (RBAC): Granular, least-privilege access
- Resource Tagging: Consistent metadata for tracking, automation, and cost management
- Security Baselines: Predefined policies aligned with your compliance model (NIST, CIS, etc.)

This ensures everything downstream is both discoverable and manageable.

### 🧭 Design a Network That Supports Security and Scale

Network configuration should not be an afterthought:

- Define NSG rules and enforce segmentation
- Use routing rules to control flow between tiers
- Consider Private Endpoints to keep services off the public internet

This stage sets your network up to scale securely and avoid rework later.

### 🧰 Choose a Deployment Approach That Fits Your Team

You don't need to reinvent the wheel. Choose from:

- Predefined ARM/Bicep templates
- Infrastructure as Code (IaC) using tools like Terraform
- Custom provisioning for unique enterprise requirements

Standardizing this step makes every future deployment faster, safer, and reviewable.

### 🔐 Set Up Identity and Access Controls the Right Way

No shared accounts. No "Owner" access for everyone. Use:

- Azure Active Directory (AAD) for identity management
- RBAC to ensure users only have access to what they need, where they need it

This is a critical security layer—set it up with intent.

### 📈 Bake in Monitoring and Diagnostics from Day One

Cloud environments must be observable. Implement:

- Log Analytics Workspace (LAW) to centralize logs
- Diagnostic settings to capture platform-level signals
- Application Insights to monitor app health and performance

These tools reduce time to resolution and help enforce SLAs; a minimal setup sketch follows below.
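As one hedged way to wire up the first two items, the sketch below creates a Log Analytics workspace and routes a resource's platform logs to it. It assumes a recent Az.Monitor module (3.x or later); all names and the target resource ID are placeholders, not values from this checklist.

```powershell
# Minimal observability sketch: workspace plus diagnostic settings.
# All resource names below are hypothetical placeholders.
$Workspace = New-AzOperationalInsightsWorkspace `
    -ResourceGroupName "rg-app1-prod-weu" `
    -Name "law-app1-prod-weu" `
    -Location "westeurope" `
    -Sku "PerGB2018"

# Route all platform logs of a target resource to the workspace
$TargetResourceId = "/subscriptions/<sub-id>/resourceGroups/rg-app1-prod-weu/providers/Microsoft.Network/applicationGateways/agw-app1"
$LogSettings = New-AzDiagnosticSettingLogSettingsObject -Enabled $true -CategoryGroup "allLogs"

New-AzDiagnosticSetting `
    -Name "diag-to-law" `
    -ResourceId $TargetResourceId `
    -WorkspaceId $Workspace.ResourceId `
    -Log $LogSettings
```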
### 🛡️ Review and Close on Security Posture

Before allowing workloads to go live, conduct a security baseline check:

- Enable data encryption at rest and in transit
- Review and apply Azure Security Center recommendations
- Ensure ACC (Azure Confidential Computing) compliance if applicable

Security is not a phase. It's baked in throughout—but reviewed intentionally before go-live.

### 🚦 Validate Before You Launch

Never skip a readiness review:

- Deploy in a test environment to validate templates and policies
- Get sign-off from architecture, security, and compliance stakeholders
- Track checklist completion before promoting anything to production

This keeps surprises out of your production pipeline.

### In Closing: It's Not Just a Checklist, It's Your Blueprint

When implemented well, this checklist becomes much more than a to-do list. It's a blueprint for scalable, secure, and standardized cloud adoption. It helps teams stay on the same page, reduces firefighting, and accelerates real business value from Azure. Whether you're managing a new enterprise rollout or stabilizing an existing environment, this checklist keeps your foundation strong.

Tags: Infrastructure Landing Zone; Governance and Security Best Practices for Azure Infrastructure Landing Zones; Automating Azure Landing Zone Setup with IaC Templates; Checklist to Validate Azure Readiness Before Production Rollout; Monitoring, Access Control, and Network Planning in Azure Landing Zones; Azure Readiness Checklist for Production
## Creating an Application Landing Zone on Azure Using Bicep

### 🧩 What Is an Application Landing Zone?

An Application Landing Zone is a pre-configured Azure environment designed to host applications in a secure, scalable, and governed manner. It is a foundational component of the Azure Landing Zones framework, which supports enterprise-scale cloud adoption by providing a consistent and governed environment for deploying workloads.

### 🔍 Key Characteristics

- **Security and Compliance:** Built-in policies and controls ensure that applications meet organizational and regulatory requirements.
- **Pre-configured Infrastructure:** Includes networking, identity, security, monitoring, and governance components that are ready to support application workloads.
- **Scalability and Flexibility:** Designed to scale with application demand, supporting both monolithic and microservices-based architectures.
- **Governance and Management:** Integrated with Azure Policy, Azure Monitor, and Azure Security Center to enforce governance and provide operational insights.
- **Developer Enablement:** Provides a consistent environment that accelerates development and deployment cycles.

### 🏗️ Core Components

An Application Landing Zone typically includes:

- Networking with Virtual Networks (VNets), subnets, and NSGs
- Azure Active Directory (AAD) integration
- Role-Based Access Control (RBAC)
- Azure Key Vault/Managed HSM for secrets management
- Monitoring and logging via Azure Monitor and Log Analytics
- Application Gateway or Azure Front Door for traffic management
- CI/CD pipelines integrated with Azure DevOps or GitHub Actions

### 🛠️ Prerequisites

Before deploying the Application Landing Zone, please ensure the following:

**✅ Access & Identity**

- Azure Subscription Access: You must have access to an active Azure subscription where the landing zone will be provisioned. This subscription should be part of a broader management group hierarchy if you're following enterprise-scale landing zone patterns.
- A Service Principal (SPN): A service principal is required for automating deployments via CI/CD pipelines or Infrastructure as Code (IaC) tools. It should have at least the Contributor role at the subscription level to create and manage resources. Explicit access to the following is required:
  - Resource Groups (for deploying application components)
  - Azure Policy (to assign and manage governance rules)
  - Azure Key Vault (to retrieve secrets, certificates, or credentials)
- Azure Active Directory (AAD): Ensure that AAD is properly configured for:
  - Role-Based Access Control (RBAC)
  - Group-based access assignments
  - Conditional Access policies (if applicable)

💡 Tip: Use Managed Identities where possible to reduce the need for credential management.

**✅ Tooling**

- Azure CLI: Required for scripting and executing deployment commands. Ensure you're authenticated using `az login` or a service principal. Recommended version: 2.55.0 or later for compatibility with the latest Bicep and Azure features.
- Azure PowerShell: Installed and authenticated (`Connect-AzAccount`). Recommended module: Az module version 11.0.0 or later.
- Visual Studio Code: Preferred IDE for working with Bicep and ARM templates. Install the following extensions:
  - Bicep: for authoring and validating infrastructure templates.
  - Azure Account: for managing Azure sessions and subscriptions.
- Source Control & CI/CD Integration: Access to GitHub or Azure DevOps is required for:
  - Storing IaC templates
  - Automating deployments via pipelines
  - Managing version control and collaboration

💡 Tip: Use GitHub Actions or Azure Pipelines to automate validation, testing, and deployment of your landing zone templates. A minimal pipeline sign-in sketch follows below.
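As a hedged illustration of the service principal prerequisite above, a pipeline step might sign in non-interactively before any deployment runs. The environment variable names below are assumptions for illustration, not part of this guide.

```powershell
# Hypothetical pipeline sign-in: authenticate as the SPN, then pin the subscription.
# The AZURE_* environment variable names are illustrative placeholders.
$Secret = ConvertTo-SecureString $env:AZURE_CLIENT_SECRET -AsPlainText -Force
$Credential = New-Object System.Management.Automation.PSCredential($env:AZURE_CLIENT_ID, $Secret)

Connect-AzAccount -ServicePrincipal -Credential $Credential -Tenant $env:AZURE_TENANT_ID
Set-AzContext -Subscription $env:AZURE_SUBSCRIPTION_ID
```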
### ✅ Environment Setup

- **Resource Naming Conventions:** Define a naming standard that reflects resource type, environment, region, and application. Example: `rg-app1-prod-weu` for a production resource group in West Europe.
- **Tagging Strategy:** Predefine tags for:
  - Cost management (e.g., CostCenter, Project)
  - Ownership (e.g., Owner, Team)
  - Environment (e.g., Dev, Test, Prod)
- **Networking Baseline:** Ensure that required VNets, subnets, and DNS settings are in place. Plan for hybrid connectivity if integrating with on-premises networks (e.g., via VPN or ExpressRoute).
- **Security Baseline:** Define and apply:
  - RBAC roles for least-privilege access
  - Azure built-in as well as custom policies for compliance enforcement
  - NSGs and ASGs for network security

### 🧱 Application Landing Zone Architecture Using Bicep

Bicep is a domain-specific language (DSL) for deploying Azure resources declaratively. It simplifies the authoring experience compared to traditional ARM templates and supports modular, reusable, and maintainable infrastructure-as-code (IaC) practices.

The Application Landing Zone (App LZ) architecture leverages Bicep to define and deploy a secure, scalable, and governed environment for hosting applications. This architecture is structured into phases, each representing a logical grouping of resources. These phases align with enterprise cloud adoption frameworks and enable teams to deploy infrastructure incrementally and consistently.

### 🧱 Architectural Phases

The App LZ is typically divided into the following phases, each implemented using modular Bicep templates:

**1. Foundation Phase.** Establishes the core infrastructure and governance baseline:
- Resource groups
- Virtual networks and subnets
- Network security groups (NSGs)
- Diagnostic settings
- Azure Policy assignments

**2. Identity & Access Phase.** Implements secure access and identity controls:
- Role-Based Access Control (RBAC)
- Azure Active Directory (AAD) integration
- Managed identities
- Key Vault access policies

**3. Security & Monitoring Phase.** Ensures observability and compliance:
- Azure Monitor and Log Analytics
- Security Center configuration
- Alerts and action groups
- Defender for Cloud settings

**4. Application Infrastructure Phase.** Deploys application-specific resources:
- App Services, AKS, or Function Apps
- Application Gateway or Azure Front Door
- Storage accounts, databases, and messaging services
- Private endpoints and service integrations

**5. CI/CD Integration Phase.** Automates deployment and lifecycle management:
- GitHub Actions or Azure Pipelines
- Deployment scripts and parameter files
- Secrets management via Key Vault
- Environment-specific configurations

### 🔁 Modular Bicep Templates

Each phase is implemented using modular Bicep templates, which offer:

- Reusability: Templates can be reused across environments (Dev, Test, Prod).
- Flexibility: Parameters allow customization without modifying core logic.
- Incremental deployment: Phases can be deployed independently or chained together.
- Testability: Each module can be validated against test cases before full deployment.

💡 Example: A `network.bicep` module can be reused across multiple landing zones with different subnet configurations, as sketched below.
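To make the modular-reuse point concrete, here is a hedged sketch of deploying a hypothetical `network.bicep` module twice with different parameters; the resource group names, file path, and parameter names are assumptions for illustration only.

```powershell
# Deploy the same (hypothetical) network.bicep module into two environments.
# Resource group, file path, and parameter names are illustrative placeholders.
New-AzResourceGroupDeployment `
    -ResourceGroupName "rg-app1-dev-weu" `
    -TemplateFile "./modules/network.bicep" `
    -TemplateParameterObject @{ addressPrefix = "10.10.0.0/16"; environment = "dev" }

# Preview what the production run would change (Bicep's what-if analysis,
# discussed later in this post) before committing to it.
New-AzResourceGroupDeployment `
    -ResourceGroupName "rg-app1-prod-weu" `
    -TemplateFile "./modules/network.bicep" `
    -TemplateParameterObject @{ addressPrefix = "10.20.0.0/16"; environment = "prod" } `
    -WhatIf
```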
To ensure a smooth and automated deployment experience, the original post illustrates the complete flow from setup to deployment in a diagram.

### ✅ Benefits of This Approach

- Consistency & compliance: Enforces Azure best practices and governance policies
- Modularity: Reusable Bicep modules simplify maintenance and scaling
- Automation: CI/CD pipelines reduce manual effort and errors
- Security: Aligns with Microsoft's security baselines and CAF
- Scalability: Easily extendable to support new workloads or environments
- Native Azure integration: Supports all Azure resources and features
- Tooling support: Integrated with Visual Studio Code, Azure CLI, and GitHub

### 🔄 Why Choose Bicep Over Terraform?

- **First-Party Integration:** Bicep is a first-party solution maintained by Microsoft, ensuring day-one support for new Azure services and API changes. This means customers can immediately leverage the latest features and updates without waiting for third-party providers to catch up.
- **Azure-Specific Optimization:** Bicep is deeply integrated with Azure services, offering a tailored experience for Azure resource management. This integration ensures that deployments are optimized for Azure, providing better performance and reliability.
- **Simplified Syntax:** Bicep uses a domain-specific language (DSL) that is more concise and easier to read than Terraform's HCL (HashiCorp Configuration Language). This simplicity reduces the learning curve and makes it easier for teams to write and maintain infrastructure code.
- **Incremental Deployment:** Unlike Terraform, Bicep does not store state. Instead, it relies on incremental deployment, which simplifies the deployment process and reduces the complexity associated with state management. This approach ensures that resources are deployed consistently without the need to manage state files.
- **Azure Policy Integration:** Bicep integrates seamlessly with Azure Policy, allowing for preflight validation to ensure compliance with policies before deployment. This integration helps maintain governance and compliance across deployments.
- **What-If Analysis:** Bicep offers a "what-if" operation that predicts changes before deploying a Bicep file. This feature allows customers to preview the impact of their changes without making any modifications to the existing infrastructure.

### 🏁 Conclusion

Creating an Application Landing Zone using Bicep provides a robust, scalable, and secure foundation for deploying applications in Azure. By following a phased, modular approach and leveraging automation, organizations can accelerate their cloud adoption journey while maintaining governance and operational excellence.
## Strengthening Azure infrastructure and platform security - 5 new updates

In the face of AI-driven digital growth and a threat landscape that never sleeps, Azure continues to raise the bar on Zero Trust-ready, "secure-by-default" networking. Today we're excited to announce five innovations that make it even easier to protect your cloud workloads while keeping developers productive:

| Innovation | What it is | Why it matters |
|---|---|---|
| Next generation of Azure Intel® TDX Confidential VMs (Private Preview) | Azure's next generation of Confidential Virtual Machines, now powered by 5th Gen Intel® Xeon® processors (code-named Emerald Rapids) with Intel® Trust Domain Extensions (Intel® TDX). | Enables organizations to bring confidential workloads to the cloud without code changes to applications. Supported VMs include the general-purpose DCesv6-series and the memory-optimized ECesv6-series. |
| CAPTCHA support for Azure WAF (Public Preview) | A new WAF action that presents a visual/audio CAPTCHA when traffic matches custom or Bot Manager rules. | Stops sophisticated, human-mimicking bots while letting legitimate users through with minimal friction. (Microsoft Learn) |
| Azure Bastion Developer (new regions, simplified secure-by-default UX) | A free, lightweight Bastion offering surfaced directly in the VM Connect blade. One-click, private RDP/SSH to a single VM—no subnet planning, no public IP. | Gives dev/test teams instant, hardened access without extra cost, jump servers, or NSGs. (Azure) |
| Azure Virtual Network TAP (Public Preview) | Native agentless packet mirroring available for all VM SKUs with zero impact on VM performance and network throughput. | Deep visibility for threat hunting, performance, and compliance—now cloud-native. (Microsoft Learn) |
| Azure Firewall integration in Security Copilot (GA) | A generative AI-powered solution that helps secure networks with the speed and scale of AI. | Threat hunt across firewalls using natural-language questions instead of manually scouring logs and threat databases. (Microsoft Learn) |

### 1. Next generation of Azure Intel® TDX Confidential VMs (Private Preview)

We are excited to announce the preview of Azure's next generation of Confidential Virtual Machines powered by 5th Gen Intel® Xeon® processors (code-named Emerald Rapids) with Intel® Trust Domain Extensions (Intel® TDX). This helps organizations bring confidential workloads to the cloud without code changes to applications. The supported VMs include the general-purpose DCesv6-series and the memory-optimized ECesv6-series.

Azure's next generation of confidential VMs brings improvements and new features compared to our previous generation. These VMs are our first offering to utilize our open-source paravisor, OpenHCL. This innovation allows us to enhance transparency with our customers, reinforcing our commitment to the "trust but verify" model. Additionally, our new confidential VMs support Azure Boost, enabling up to 205k IOPS and 4 GB/s throughput of remote storage along with 54 Gbps VM network bandwidth.

We are expanding the capabilities of our Intel® TDX-powered confidential VMs by incorporating features from our general-purpose and other confidential VMs. These enhancements include Guest Attestation support and support for Intel® Tiber™ Trust Authority for enterprises seeking operator-independent attestation.

The DCesv6-series and ECesv6-series preview is available now in the East US, West US, West US 3, and West Europe regions. Supported OS images include Windows Server 2025, Windows Server 2022, Ubuntu 22.04, and Ubuntu 24.04.
Please sign up at aka.ms/acc/v6preview and we will reach out to you.

### 2. Smarter Bot Defense with WAF + CAPTCHA

Modern web applications face an ever-growing array of automated threats, including bots, web scrapers, and brute-force attacks. Many of these attacks evade common security measures such as IP blocking, geo-restrictions, and rate limiting, which struggle to differentiate between legitimate users and automated traffic. As cyber threats become more sophisticated, businesses require stronger, more adaptive security solutions.

Azure Front Door's Web Application Firewall (WAF) now introduces CAPTCHA in public preview—an interactive mechanism designed to verify human users and block malicious automated traffic in real time. By requiring suspicious traffic to successfully complete a CAPTCHA challenge, WAF ensures that only legitimate users can access applications while keeping bots at bay. This capability is particularly valuable for common login and sign-up workflows, mitigating the risk of account takeovers, credential-stuffing attacks, and brute-force intrusions that threaten sensitive user data.

**Key Benefits of CAPTCHA on Azure Front Door WAF**

- Prevent automated attacks – Blocks bots from accessing login pages, forms, and other critical website elements.
- Secure user accounts – Mitigates credential stuffing and brute-force attempts to protect sensitive user information.
- Reduce spam & fraud – Ensures only real users can submit comments, register accounts, or complete transactions.
- Easy deployment and management – Requires minimal configuration, reducing operational overhead while maintaining a robust security posture.

**How CAPTCHA Works**

When a client request matches a WAF rule configured for CAPTCHA enforcement, the user is presented with an interactive CAPTCHA challenge to confirm they are human. Upon successful completion, Azure WAF validates the request and allows access to the application. Requests that fail the challenge are blocked, preventing bots from proceeding further.

**Getting Started**

CAPTCHA is now available in public preview for Azure WAF. Administrators can configure this feature within their WAF policy settings to strengthen bot-mitigation strategies and improve security posture effortlessly. To learn more and start protecting your applications today, visit our Azure WAF documentation.

### 3. Azure Bastion Developer—Secure VM Access at Zero Cost

Azure Bastion Developer is a lightweight, free offering of the Azure Bastion service designed for dev/test users who need secure connections to their virtual machines (VMs) without requiring additional features or scalability. It simplifies secure access to VMs, addressing common issues related to usability and cost. To get started, users can sign in to the Azure portal and follow the setup instructions for connecting to their VMs. This service is particularly beneficial for developers looking for a cost-effective solution for secure connectivity. It's now available in 36 regions with a new secure-by-default portal experience.

Key takeaways:

- Instant enablement from the VM Connect tab.
- One concurrent session, ideal for dev/test and PoC environments.
- No public IPs, agents, or client software required.

### 4. Deep Packet Visibility with Virtual Network TAP

Azure virtual network terminal access point (TAP) enables customers to mirror virtual machine traffic to packet collectors or analytics tools without having to deploy agents or impact virtual machine network throughput, allowing you to mirror 100% of your production traffic.
By configuring virtual network TAP on a virtual machine's network interface, organizations can stream inbound and outbound traffic to destinations within the same or a peered virtual network for real-time monitoring across a variety of use cases, including:

- Enhanced security and threat detection: Security teams can inspect full packet data in real time to detect and respond to potential threats.
- Performance monitoring and troubleshooting: Operations teams can analyze live traffic patterns to identify bottlenecks, troubleshoot latency issues, and optimize application performance.
- Regulatory compliance: Organizations subject to compliance frameworks such as the Health Insurance Portability and Accountability Act (HIPAA) and the General Data Protection Regulation (GDPR) can use virtual network TAP to capture network activity for auditing and forensic investigations.

Virtual network TAP supports all Azure VM SKUs and integrates seamlessly with validated partner solutions, offering extended visibility and security capabilities. For a list of partner solutions that are validated to work with virtual network TAP, see partner solutions.

### 5. Protect networks at machine speed with Generative AI

Azure Firewall intercepts and blocks malicious traffic using the intrusion detection and prevention system (IDPS) today. It processes huge volumes of packets, analyzes signals from numerous network resources, and generates vast amounts of logs. To reason over all this data and cut through the noise to analyze threats, analysts spend several hours, if not days, performing manual tasks. The Azure Firewall integration in Security Copilot helps analysts perform these investigations with the speed and scale of AI. Consider an example of a security analyst processing the threats their firewall stopped:

Analysts spend hours writing custom queries or navigating several manual steps to retrieve threat information and gather additional context, such as the geographical location of IPs, the threat rating of a fully qualified domain name (FQDN), details of common vulnerabilities and exposures (CVEs) associated with an IDPS signature, and more. Copilot pulls information from the relevant sources to enrich your threat data in a fraction of the time, and it can do this not just for a single threat or firewall but for all threats across your entire firewall fleet. It can also correlate information with other security products to understand how attackers are targeting your entire infrastructure.

To learn more about the user journey and value that Copilot can deliver, see the Azure blog from our preview announcement at RSA last year. To see these capabilities in action, take a look at this Tech Community blog, and to get started, see the documentation.

**Looking Forward**

Azure is committed to delivering secure, reliable, and high-performance connectivity so you can focus on building what's next. Our team is dedicated to creating innovative, resilient, and secure solutions that empower businesses to leverage AI and the cloud to their fullest potential. Our approach of providing layered defense in depth via security solutions such as Confidential Compute, Azure DDoS Protection, Azure Firewall, Azure WAF, Azure virtual network TAP, and network security perimeter will continue, with more enhancements and features upcoming. We can't wait to see how you'll use these new security capabilities and will be keen to hear your feedback.
## Check Azure AI Service Models and Features by Region in Excel Format

### Introduction

While Microsoft's documentation provides information on the OpenAI models available in Azure, it can be challenging to determine which features are included in which model versions and in which regions they are available, and there can be occasional inaccuracies in the documentation. I believe this is a limitation of natural-language documentation, so I concluded that we should investigate directly within the actual Azure environment. That idea led to this article.

With an available Azure subscription, you can retrieve a list of models available in a specific region using the `az cognitiveservices model list` command. This command lets you query the Azure environment directly to obtain up-to-date information on available models. In the following sections, we'll explore how to analyze this data using various examples.

Please note that the PowerShell scripts in this article are intended to be run with PowerShell Core. If you execute them using Windows PowerShell, they might not function as expected.

### Using Azure CLI to List Available AI Models

To begin, let's check what information can be retrieved. By executing the following command, you'll receive a JSON array containing details about each model available in the West US 3 region. Here's a partial excerpt from the output:

```text
PS C:\Users\xxxxx> az cognitiveservices model list -l westus3
[
  {
    "kind": "OpenAI",
    "model": {
      "baseModel": null,
      "callRateLimit": null,
      "capabilities": {
        "audio": "true",
        "scaleType": "Standard"
      },
      "deprecation": {
        "fineTune": null,
        "inference": "2025-03-01T00:00:00Z"
      },
      "finetuneCapabilities": null,
      "format": "OpenAI",
      "isDefaultVersion": true,
      "lifecycleStatus": "Preview",
      "maxCapacity": 9999,
      "name": "tts",
      "skus": [
        {
          "capacity": {
            "default": 3,
            "maximum": 9999,
            "minimum": null,
            "step": null
          },
          "deprecationDate": "2026-02-01T00:00:00+00:00",
          "name": "Standard",
          "rateLimits": [
            {
              "count": 1.0,
              "key": "request",
              "renewalPeriod": 60.0,
              "rules": null
            }
          ],
          "usageName": "OpenAI.Standard.tts"
        }
      ],
      "source": null,
      "systemData": {
        "createdAt": "2023-11-20T00:00:00+00:00",
        "createdBy": "Microsoft",
        "createdByType": "Application",
        "lastModifiedAt": "2023-11-20T00:00:00+00:00",
        "lastModifiedBy": "Microsoft",
        "lastModifiedByType": "Application"
      },
      "version": "001"
    },
    "skuName": "S0"
  },
  {
    "kind": "OpenAI",
    "model": {
      "baseModel": null,
      "callRateLimit": null,
      "capabilities": {
        "audio": "true",
        "scaleType": "Standard"
      },
      "deprecation": {
        "fineTune": null,
        "inference": "2025-03-01T00:00:00Z"
      },
      "finetuneCapabilities": null,
      "format": "OpenAI",
      "isDefaultVersion": true,
      "lifecycleStatus": "Preview",
      "maxCapacity": 9999,
      "name": "tts-hd",
      "skus": [
        {
          "capacity": {
            "default": 3,
            "maximum": 9999,
            "minimum": null,
            "step": null
          },
          "deprecationDate": "2026-02-01T00:00:00+00:00",
          "name": "Standard",
          "rateLimits": [
            {
              "count": 1.0,
              "key": "request",
              "renewalPeriod": 60.0,
              "rules": null
            }
          ],
          "usageName": "OpenAI.Standard.tts-hd"
        }
      ],
      "source": null,
      "systemData": {
        "createdAt": "2023-11-20T00:00:00+00:00",
        "createdBy": "Microsoft",
        "createdByType": "Application",
        "lastModifiedAt": "2023-11-20T00:00:00+00:00",
        "lastModifiedBy": "Microsoft",
        "lastModifiedByType": "Application"
      },
      "version": "001"
    },
    "skuName": "S0"
  },
```

From the structure of the JSON output, we can derive the following insights:

- **Model Information:** Basic details about the model, such as `model.format`, `model.name`, and `model.version`, can be read from the corresponding fields.
- **Available Capabilities:** The `model.capabilities` field lists the functionalities supported by the model. Entries like `chatCompletion` and `completion` indicate support for Chat Completion and text generation features. If a particular capability isn't supported by the model, it simply won't appear in the capabilities object. For example, if `imageGenerations` is not listed, the model doesn't support image generation.
- **Deployment Information:** The model's deployment options can be found in the `model.skus` field, which outlines the available SKUs for deploying the model.
- **Service Type:** The `kind` field indicates the type of service the model belongs to, such as OpenAI, MaaS, or other Azure AI services.

By querying each region and extracting the necessary information from the JSON response, we can effectively analyze and understand the availability and capabilities of Azure AI models across different regions.

### Retrieving Azure AI Model Information Across All Regions

To gather model information across all Azure regions, you can use the Azure CLI in conjunction with PowerShell. Given the substantial volume of data, and to avoid placing excessive load on Azure Resource Manager, it's advisable to retrieve and store data for each region separately. This approach allows for more manageable analysis based on the saved files. While you can process JSON data using your preferred programming language, I personally prefer PowerShell. The following sample demonstrates how to execute Azure CLI commands within PowerShell to achieve this task.

```powershell
# retrieve all Azure regions
$regions = az account list-locations --query [].name -o tsv

# Use this array if you retrieve only specific regions
# $regions = @('westus3', 'eastus', 'swedencentral')

# retrieve all region model data and save it as JSON files
$idx = 1
$regions | foreach {
    $json = $null
    write-host ("[{0:00}/{1:00}] Getting models for {2} " -f $idx++, $regions.Length, $_)
    try {
        $json = az cognitiveservices model list -l $_
    }
    catch {
        # skip some unavailable regions
        write-host ("Error getting models for {0}: {1}" -f $_, $_.Exception.Message)
    }
    if ($json -ne $null) {
        $models = $json | ConvertFrom-Json
        if ($models.length -gt 0) {
            $json | Out-File -FilePath "./models/$($_).json" -Encoding utf8
        }
        else {
            # skip empty array
            Write-Host ("No models found for region: {0}" -f $_)
        }
    }
}
```

### Summarizing the Collected Data

Let's load the previously saved JSON files and count the number of models available in each Azure region. The following PowerShell script reads all .json files from the ./models directory, converts their contents into PowerShell objects, adds the region information based on the filename, and then groups the models by region to count them:

```powershell
# Load all JSON files, convert to objects, and add region information
$models = Get-ChildItem -Path "./models" -Filter "*.json" | ForEach-Object {
    $region = $_.BaseName
    Get-Content -Path $_.FullName -Raw | ConvertFrom-Json | ForEach-Object {
        $_ | Add-Member -NotePropertyName 'region' -NotePropertyValue $region -PassThru
    }
}

# Group models by region and count them
$models | Group-Object -Property region | ForEach-Object {
    Write-Host ("{0} models in {1}" -f $_.Count, $_.Name)
}
```

This script will output the number of models available in each region. For example:

```text
228 models in australiaeast
222 models in brazilsouth
206 models in canadacentral
222 models in canadaeast
89 models in centralus
...
```

### Which Models Are Deployable in Each Azure Region?
One of the first things you probably want is a list of which models and versions are available in each Azure region. The following script uses `-ExpandProperty` to unpack the `model` property array. Additionally, it expands the `model.skus` property to retrieve information about deployment models. Please note that you have to remove `./models/global.json` before running the following script.

```powershell
# Since using 'select -ExpandProperty' modifies the original object, it's a good
# idea to reload data from the files each time
Get-ChildItem -Path "./models" -Filter "*.json" `
| foreach {
    Get-Content -Path $_.FullName `
    | ConvertFrom-Json `
    | Add-Member -NotePropertyName 'region' -NotePropertyValue $_.Name.Split(".")[0] -PassThru
} | sv models

# Expand model and model.skus, and extract relevant fields
$models | select region, kind -ExpandProperty model `
| select region, kind,
    @{l='modelFormat';e={$_.format}},
    @{l='modelName';e={$_.name}},
    @{l='modelVersion';e={$_.version}},
    @{l='lifecycle';e={$_.lifecycleStatus}},
    @{l='default';e={$_.isDefaultVersion}} -ExpandProperty skus `
| select region, kind, modelFormat, modelName, modelVersion, lifecycle, default,
    @{l='skuName';e={$_.name}},
    @{l='deprecationDate';e={$_.deprecationDate}} `
| sort-object region, kind, modelName, modelVersion `
| sv output

$output | Format-Table | Out-String -Width 4096
$output | Export-Csv -Path "modelList.csv"
```

Given the volume of data, viewing it in Excel or a similar tool makes it easier to analyze. Let's ignore the fact that Excel might try to auto-convert model version numbers into dates—yes, that annoying behavior. For example, if you filter for the gpt-4o model in the japaneast region, you'll see that the latest version 2024-11-20 is now supported with the standard deployment SKU.

### Listing Available Capabilities

While the `model.capabilities` property lets us see which features each model supports, it doesn't include properties for unsupported features. This omission makes it challenging to determine all possible capabilities and apply appropriate filters. To address this, we'll aggregate the capabilities from all models across all regions to build a comprehensive dictionary of available features. This approach helps us understand the full range of capabilities and enables more effective filtering and analysis.

```powershell
# Read from files again
Get-ChildItem -Path "./models" -Filter "*.json" `
| foreach {
    Get-Content -Path $_.FullName `
    | ConvertFrom-Json `
    | Add-Member -NotePropertyName 'region' -NotePropertyValue $_.Name.Split(".")[0] -PassThru
} | sv models

# list only unique capabilities
$models `
| select -ExpandProperty model `
| select -ExpandProperty capabilities `
| foreach { $_.psobject.properties.name } `
| select -Unique `
| sort-object `
| sv capability_dictionary
```

The result is as follows:

```text
allowProvisionedManagedCommitment
area
assistants
audio
chatCompletion
completion
embeddings
embeddingsMaxInputs
fineTune
FineTuneTokensMaxValue
FineTuneTokensMaxValuePerExample
imageGenerations
inference
jsonObjectResponse
jsonSchemaResponse
maxContextToken
maxOutputToken
maxStandardFinetuneDeploymentCount
maxTotalToken
realtime
responses
scaleType
search
```

### Creating a Feature Support Matrix for Each Model

Now, let's use this list to build a feature support matrix that shows which capabilities are supported by each model. This matrix will help us understand the availability of features across different models and regions.
```powershell
# Read from JSON files
Get-ChildItem -Path "./models" -Filter "*.json" `
| foreach {
    Get-Content -Path $_.FullName `
    | ConvertFrom-Json `
    | Add-Member -NotePropertyName 'region' -NotePropertyValue $_.Name.Split(".")[0] -PassThru
} | sv models

# Outputting capability properties for each model (null if absent)
$models `
| select region, kind -ExpandProperty model `
| select region, kind, format, name, version -ExpandProperty capabilities `
| select (@('region', 'kind', 'format', 'name', 'version') + $capability_dictionary) `
| sort region, kind, format, name, version `
| sv model_capabilities

# output as csv file
$model_capabilities | Export-Csv -Path "modelCapabilities.csv"
```

### Finding Models That Support Desired Capabilities

To identify models that support specific capabilities, such as `imageGenerations`, you can open the CSV file in Excel and filter accordingly (a small PowerShell alternative to Excel filtering is sketched below). Upon doing so, you'll notice that support for image generation is quite limited across regions. When examining models that support the Completion endpoint in the eastus region, you'll find names like ada, babbage, curie, and davinci. These familiar names bring back memories of earlier models. It is also a reminder that during the GPT-3.5 Turbo era, models supported both the Chat Completion and Completion endpoints.

### Conclusion

By retrieving data directly from the actual Azure environment rather than relying solely on documentation, you can ensure access to the most up-to-date information. In this article, we adopted an approach where the necessary information was flattened and exported to a CSV file for examination in Excel. Once you're familiar with the underlying data structure, this method allows for investigations and analyses from various perspectives. As you become more accustomed to this process, it might prove faster than navigating through official documentation. However, for aspects like "capabilities," which may be somewhat intuitive yet lack clear definitions, it's advisable to contact Azure Support for detailed information.
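As a small aside not in the original walkthrough, the same filtering can be done without leaving PowerShell; the capability column name below is just one example from the exported matrix.

```powershell
# Filter the exported matrix in PowerShell instead of Excel:
# e.g., list models that advertise image generation support.
Import-Csv -Path "modelCapabilities.csv" |
    Where-Object { $_.imageGenerations -eq 'true' } |
    Select-Object region, name, version |
    Format-Table -AutoSize
```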
## Introducing XFF header for Azure Firewall: Gain crucial insights to help stay secure

The X-Forwarded-For (XFF) HTTP header provides crucial insight into the origin of web requests. The header works as a mechanism for conveying the original source IP addresses of clients, not just across one hop, but through chains of multiple intermediaries. Information embedded in XFF headers is vital to network security, helping with both enforcement and auditing. Thus, it's important for proxies like Azure Firewall to preserve this information when packets flow through the network. This blog covers how Azure Firewall handles XFF headers.

### How does Azure Firewall handle XFF headers?

Proxies can perform several actions on the XFF headers they receive, including preserving the XFF contents before forwarding to the next hop, augmenting the client IP into the XFF header, and enforcing policies based on XFF contents. Azure Firewall preserves and augments the XFF header depending on how the traffic is received and processed, as detailed below.

| Traffic/Payload | Rule Processed | Preserves original content in the XFF header | Augments Client IP in the XFF header |
|---|---|---|---|
| HTTP payload | Application Rules | Preserved | Yes |
| HTTPS payload | Application Rules | Preserved | No (XFF header is encrypted) |
| HTTPS with TLS termination | Application Rules | Preserved | Yes |
| HTTP or HTTPS payload | DNAT/Network rules | Preserved – Azure Firewall doesn't impact HTTP headers, as traffic is processed at layer 4 | – |

### Validating Azure Firewall behavior

For this blog, I set up a local environment with NGINX to validate firewall behavior. This includes a local client running in Azure, an internet client, and an NGINX web server to process HTTP/S traffic. I used a private DNS zone to redirect traffic for a popular domain (example.com) to my NGINX server behind the firewall. (A minimal client sketch for reproducing the first step appears at the end of this post.)

**HTTP/S client traffic and response:** The client sends an HTTP payload to example.com after adding 192.0.2.100 to the XFF header.

**Azure Firewall output:** The Azure Firewall receives both HTTP and HTTPS requests, as the NGINX server redirects the client's HTTP traffic to the HTTPS listener.

**Server XFF header output:** For HTTP requests, the XFF output displays both the client IP and the IP appended in the curl request. For HTTPS requests, the XFF output displays only the IP added by the client.

**DNAT traffic to the server:** Internet clients send HTTPS traffic to the NGINX server via the Azure Firewall public IP. Azure Firewall matches the traffic against a DNAT rule and redirects it to the translated destination server.

**Server XFF header output:** Traffic is received with the XFF header inserted by the client. Azure Firewall doesn't alter this header, as it processes the traffic at the network layer.

### Conclusion

In conclusion, the X-Forwarded-For (XFF) HTTP header plays a crucial role in providing insight into the origin of web requests. It helps convey the original source IP addresses of clients through multiple intermediaries, which is vital for network security, enforcement, and auditing. Azure Firewall's handling of XFF headers ensures that this information is preserved and augmented based on how the traffic is received and processed. By maintaining the integrity of XFF headers, Azure Firewall enhances security measures and provides a reliable mechanism for tracking the source of web traffic.
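For reference, here is a hedged sketch of the client step described in the validation walkthrough above; the URL and header value mirror the scenario, while the exact server-side log output depends on your NGINX configuration.

```powershell
# Send a request with a pre-populated X-Forwarded-For header, as the test
# client above does, then inspect what the server logs for that header.
Invoke-WebRequest -Uri "http://example.com/" `
    -Headers @{ "X-Forwarded-For" = "192.0.2.100" } `
    -UseBasicParsing
```

On the NGINX side, logging the `$http_x_forwarded_for` variable in the access log shows whether the firewall appended its observed client IP after 192.0.2.100.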
Accelerating Azure Security: Key Insights at Our Upcoming Virtual Event

On April 22nd, join Azure security and AI experts at the "Azure security and AI Adoption" Tech Accelerator event hosted on Tech Community. Seven sessions will be hosted in total, including several from contributors to the Azure Infrastructure Blog. Here is a roundup of four key sessions for Azure Infrastructure Blog followers:

- Kicking off at 8:00 AM PST is the keynote, delivered by Azure CVP Uli Homann: "Security: An essential part of your Azure and AI journey." Join to take a closer look at the importance of embedding security from the initial consideration of Azure through the ongoing management of workloads and AI applications.
- Next up at 8:30 AM is "Secure by design: Azure datacenter & hardware security," delivered by Azure Senior Director Alistair Speirs. Join this session to explore the security capabilities built into every Azure datacenter, along with new hardware security innovations that provide an extra layer of protection for every workload in Azure.
- The first session delivered by an Azure Infrastructure Blog author is Azure Principal Program Manager Anupam Vij's "Azure Network Security Embedded Features and Use Cases" at 9:00 AM. This session guides you through common use cases, detailing which security features apply to each scenario.
- The last session featuring an Azure Infrastructure Blog author is Joey Snow's "How to design and build secure AI projects" at 10:30 AM. This session equips technical professionals with the knowledge and tools to create secure AI workloads. Attendees will learn how to incorporate Responsible AI and security-by-design principles from the Azure Well-Architected Framework, identify tradeoffs during design, and prioritize security.

We hope you can join the virtual event, as well as Azure platform security's presence at in-person events including RSA Conference (April 28-May 1) and Microsoft Build (May 19-22).
So, you want to have a public IP Address for your application?

In Microsoft Azure, a public IP address is a fundamental component for enabling internet-facing services, such as hosting a web application, facilitating remote access, or exposing an API endpoint. While this connectivity drives functionality, it also exposes resources to the unpredictable and often hostile expanse of the internet. This post examines the security implications of a public IP in Azure, using a detailed scenario to illustrate potential threats, and shows how Azure's toolkit of Network Security Groups (NSGs), Azure DDoS Protection, Azure Firewall, Web Application Firewall (WAF), Private Link, and Azure Bastion can safeguard against them.

Scenario: The exposed e-commerce platform

Imagine a small e-commerce business launching its online store on Azure. The infrastructure includes an application gateway fronting a web server with a public IP (e.g., 20.55.123.45), an Azure SQL Database for inventory and customer data, and a load balancer distributing traffic. Initially, the setup works flawlessly: customers browse products, place orders, and the business grows. Then one day the IT team notices unusual activity. Failed login attempts spike, site performance dips, and a customer reports a suspicious pop-up on the checkout page. The public IP, left with minimal protection, has become a target.

The threats of public IP exposure

A public IP is like an open address in a bustling digital city. It's visible to anyone with the means to look, and without proper safeguards it invites a variety of threats:

- Brute force attacks: Exposed endpoints, such as a VM with Remote Desktop Protocol (RDP) or SSH enabled, become prime targets for attackers attempting to guess credentials. With enough attempts, weak passwords can crumble, granting unauthorized access to sensitive systems.
- Exploitation of vulnerabilities: Unpatched software or misconfigured services behind a public IP can be exploited. Attackers routinely scan for known vulnerabilities, like outdated web servers or databases, using automated tools to infiltrate systems and extract data or plant malware.
- Distributed denial of service (DDoS) attacks: A public IP can attract floods of malicious traffic designed to overwhelm resources and render services unavailable. For businesses relying on uptime, this can mean lost revenue and damaged trust.
- Application-layer attacks: Web applications exposed via a public IP are susceptible to threats like SQL injection, cross-site scripting (XSS), or other exploits that manipulate poorly secured code, potentially compromising data integrity or user privacy.

Left unprotected, a public IP becomes a liability, amplifying the attack surface and inviting persistent threats from the internet's darker corners.

Azure's Security Arsenal

Azure provides a layered approach to securing resources with public IPs. By leveraging its built-in services, organizations can turn that open gateway into a fortified checkpoint. Here's how these tools work together to mitigate risk:

Azure DDoS Protection

Azure DDoS Protection defends public IPs against attempts to overwhelm them with malicious traffic. Available as the Network Protection and IP Protection SKUs, alongside the built-in infrastructure protection, it monitors and mitigates these threats. The Network Protection and IP Protection SKUs use machine learning to profile normal traffic patterns, automatically detecting and scrubbing malicious floods, such as SYN floods or UDP amplification attacks, before they affect application availability.
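To make this concrete, here is a minimal sketch of enabling a DDoS protection plan on a virtual network with Azure PowerShell (Az.Network module). The resource group, plan, and VNet names are illustrative placeholders, not values from the scenario:

```powershell
# Create a DDoS protection plan (Network Protection tier)
$plan = New-AzDdosProtectionPlan -ResourceGroupName 'rg-ecommerce' `
    -Name 'ddos-plan' -Location 'eastus'

# Associate the plan with the virtual network hosting the public-facing workload
$vnet = Get-AzVirtualNetwork -ResourceGroupName 'rg-ecommerce' -Name 'vnet-ecommerce'
$vnet.DdosProtectionPlan = New-Object Microsoft.Azure.Commands.Network.Models.PSResourceId
$vnet.DdosProtectionPlan.Id = $plan.Id
$vnet.EnableDdosProtection = $true
$vnet | Set-AzVirtualNetwork
```

Once associated, every public IP attached to resources in that VNet is covered by the plan's adaptive tuning and mitigation.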
Azure Web Application Firewall (WAF)

When a public IP fronts a web application (e.g., via Azure Application Gateway), the WAF adds application-layer protection. It inspects HTTP/HTTPS traffic and thwarts attacks like SQL injection or XSS by applying the OWASP core rule sets. This is critical for workloads where the public IP serves as the entry point to customer-facing services.

Network Security Groups (NSGs)

NSGs act as a virtual firewall at the subnet or network interface level, filtering traffic based on predefined rules. In the scenario above, an NSG should restrict inbound traffic to the application gateway's public IP, allowing only the necessary ports (e.g., HTTPS on port 443) from trusted sources while blocking unsolicited RDP or SSH attempts. This reduces the attack surface by ensuring only required traffic reaches the resource.

Azure Private Link

Sometimes the best defense is to avoid public exposure entirely. Azure Private Link allows resources such as Azure SQL Database or Azure Storage to be reached over a private endpoint within a virtual network, bypassing the public internet. By pairing a public IP for the front end with Private Link for internal services, organizations can limit external exposure while maintaining secure, private connectivity.

Azure Bastion

For administrative access to backend VMs, exposing RDP or SSH ports via a public IP is a common risk. Azure Bastion eliminates this need by providing a fully managed, browser-based jump box. Admins connect securely through the Azure portal over TLS, removing the open management ports that brute force attacks depend on.

Building a Secure Foundation

A public IP in Azure doesn't have to be a vulnerability; it can be a controlled entryway when paired with the right defenses. Start by applying the principle of least privilege with NSGs, restricting traffic to only what's necessary (a minimal rule sketch follows at the end of this post). Layer on DDoS Protection and Azure Firewall for network-level resilience, and add WAF for web-specific threats. Where possible, shift sensitive services to Private Link, and use Bastion for secure management. Together, these services create a multi-tiered shield, turning a potential weakness into a strength.

In today's threat landscape, a public IP is inevitable for many workloads. But with Azure's built-in security tools, your organization can embrace the cloud's connectivity while keeping threats at bay, without compromising security.
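As a starting point for the least-privilege NSG described above, here is a minimal sketch using Azure PowerShell. It allows HTTPS from the internet and explicitly denies inbound RDP; the resource names and priorities are illustrative assumptions, and the subnet association step is omitted:

```powershell
# Allow HTTPS from the internet to the web tier
$allowHttps = New-AzNetworkSecurityRuleConfig -Name 'Allow-HTTPS-Inbound' `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 100 `
    -SourceAddressPrefix Internet -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 443

# Explicitly deny RDP so management ports are never internet-facing
# (the default DenyAllInbound rule also blocks it; this makes intent auditable)
$denyRdp = New-AzNetworkSecurityRuleConfig -Name 'Deny-RDP-Inbound' `
    -Access Deny -Protocol Tcp -Direction Inbound -Priority 200 `
    -SourceAddressPrefix Internet -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 3389

# Create the NSG; attach it to the web subnet or NIC as a separate step
New-AzNetworkSecurityGroup -ResourceGroupName 'rg-ecommerce' -Location 'eastus' `
    -Name 'nsg-web' -SecurityRules $allowHttps, $denyRdp
```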