cloud security best practices
Building Azure Right: A Practical Checklist for Infrastructure Landing Zones
When the Gaps Start Showing

A few months ago, we walked into a high-priority Azure environment review for a customer dealing with inconsistent deployments and rising costs. After a few discovery sessions, the root cause became clear: while they had resources running, there was no consistent foundation behind them. No standard tagging. No security baseline. No network segmentation strategy. In short—no structured Landing Zone. That situation isn't uncommon. Many organizations sprint into Azure workloads without first planning the right groundwork. That’s why having a clear, structured implementation checklist for your Landing Zone is so essential.

What This Checklist Will Help You Do

This implementation checklist isn’t just a formality. It’s meant to help teams:
- Align cloud implementation with business goals
- Avoid compliance and security oversights
- Improve visibility, governance, and operational readiness
- Build a scalable and secure foundation for workloads

Let’s break it down, step by step.

🎯 Define Business Priorities Before Touching the Portal

Before provisioning anything, work with stakeholders to understand:
- What outcomes matter most – Scalability? Faster go-to-market? Cost optimization?
- What constraints exist – Regulatory standards, data sovereignty, security controls
- What must not break – Legacy integrations, authentication flows, SLAs

This helps prioritize cloud decisions based on value rather than assumption.

🔍 Get a Clear Picture of the Current Environment

Your approach will differ depending on whether it’s a:
- Greenfield setup (fresh, no legacy baggage)
- Brownfield deployment (existing workloads to assess and uplift)

For brownfield, audit gaps in areas like scalability, identity, and compliance before any new provisioning.

📜 Lock Down Governance Early

Set standards from day one:
- Role-Based Access Control (RBAC): Granular, least-privilege access
- Resource Tagging: Consistent metadata for tracking, automation, and cost management
- Security Baselines: Predefined policies aligned with your compliance model (NIST, CIS, etc.)

This ensures everything downstream is both discoverable and manageable.

🧭 Design a Network That Supports Security and Scale

Network configuration should not be an afterthought:
- Define NSG Rules and enforce segmentation
- Use Routing Rules to control flow between tiers
- Consider Private Endpoints to keep services off the public internet

This stage sets your network up to scale securely and avoid rework later.

🧰 Choose a Deployment Approach That Fits Your Team

You don’t need to reinvent the wheel. Choose from:
- Predefined ARM/Bicep templates
- Infrastructure as Code (IaC) using tools like Terraform
- Custom provisioning for unique enterprise requirements

Standardizing this step makes every future deployment faster, safer, and reviewable.

🔐 Set Up Identity and Access Controls the Right Way

No shared accounts. No “Owner” access for everyone. Use:
- Azure Active Directory (AAD) for identity management
- RBAC to ensure users only have access to what they need, where they need it

This is a critical security layer—set it up with intent.

📈 Bake in Monitoring and Diagnostics from Day One

Cloud environments must be observable. Implement:
- Log Analytics Workspace (LAW) to centralize logs
- Diagnostic Settings to capture platform-level signals
- Application Insights to monitor app health and performance

These tools reduce time to resolution and help enforce SLAs. A minimal sketch of provisioning this monitoring baseline is shown below.
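To make the monitoring step concrete, here is a minimal Azure PowerShell sketch that provisions a Log Analytics workspace and a workspace-based Application Insights resource. The resource group, names, location, and tags are placeholders for illustration only; your landing zone will apply its own naming and tagging standards.

# Requires the Az.OperationalInsights and Az.ApplicationInsights modules
# All names, locations, and tags below are illustrative placeholders.
$rgName   = 'rg-app1-prod-weu'
$location = 'westeurope'
$tags     = @{ Environment = 'Prod'; CostCenter = 'CC-1234'; Owner = 'platform-team' }

# Central Log Analytics workspace for the landing zone
$law = New-AzOperationalInsightsWorkspace `
    -ResourceGroupName $rgName `
    -Name 'law-app1-prod-weu' `
    -Location $location `
    -Sku 'PerGB2018' `
    -Tag $tags

# Workspace-based Application Insights for application health and performance
New-AzApplicationInsights `
    -ResourceGroupName $rgName `
    -Name 'appi-app1-prod-weu' `
    -Location $location `
    -WorkspaceResourceId $law.ResourceId

From here, diagnostic settings on individual resources can be pointed at the same workspace so platform-level signals land in one place.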
🛡️ Review and Close on Security Posture

Before allowing workloads to go live, conduct a security baseline check:
- Enable data encryption at rest and in transit
- Review and apply Azure Security Center recommendations
- Ensure ACC (Azure Confidential Computing) compliance if applicable

Security is not a phase. It’s baked in throughout—but reviewed intentionally before go-live.

🚦 Validate Before You Launch

Never skip a readiness review:
- Deploy in a test environment to validate templates and policies
- Get sign-off from architecture, security, and compliance stakeholders
- Track checklist completion before promoting anything to production

This keeps surprises out of your production pipeline.

In Closing: It’s Not Just a Checklist, It’s Your Blueprint

When implemented well, this checklist becomes much more than a to-do list. It’s a blueprint for scalable, secure, and standardized cloud adoption. It helps teams stay on the same page, reduces firefighting, and accelerates real business value from Azure. Whether you're managing a new enterprise rollout or stabilizing an existing environment, this checklist keeps your foundation strong.

Tags: Infrastructure Landing Zone, Governance and Security Best Practices for Azure Infrastructure Landing Zones, Automating Azure Landing Zone Setup with IaC Templates, Checklist to Validate Azure Readiness Before Production Rollout, Monitoring, Access Control, and Network Planning in Azure Landing Zones, Azure Readiness Checklist for Production

Infrastructure Landing Zone - Implementation Decision-Making:
🚀 Struggling to choose the right Infrastructure Landing Zone for your Azure deployment? This guide breaks down each option—speed, flexibility, security, and automation—so you can make the smartest decision for your cloud journey. Don’t risk costly mistakes—read now and build with confidence! 🔥

Check Azure AI Service Models and Features by Region in Excel Format
Introduction

While Microsoft's documentation provides information on the OpenAI models available in Azure, it can be challenging to determine which features are included in which model versions and in which regions they are available. There can also be occasional inaccuracies in the documentation. That is a limitation of natural-language documentation, so the most reliable approach is to investigate the actual Azure environment directly. This idea led to the creation of this article. With an active Azure subscription, you can retrieve a list of models available in a specific region using the az cognitiveservices model list command. This command queries the Azure environment directly and returns up-to-date information on available models. In the following sections, we'll explore how to analyze this data using various examples. Please note that the PowerShell scripts in this article are intended to be run with PowerShell Core. If you execute them using Windows PowerShell, they might not function as expected.

Using Azure CLI to List Available AI Models

To begin, let's check what information can be retrieved by executing the following command:

az cognitiveservices model list -l westus3

You'll receive a JSON array containing details about each model available in the West US 3 region. Here's a partial excerpt from the output:

PS C:\Users\xxxxx> az cognitiveservices model list -l westus3
[
  {
    "kind": "OpenAI",
    "model": {
      "baseModel": null,
      "callRateLimit": null,
      "capabilities": {
        "audio": "true",
        "scaleType": "Standard"
      },
      "deprecation": {
        "fineTune": null,
        "inference": "2025-03-01T00:00:00Z"
      },
      "finetuneCapabilities": null,
      "format": "OpenAI",
      "isDefaultVersion": true,
      "lifecycleStatus": "Preview",
      "maxCapacity": 9999,
      "name": "tts",
      "skus": [
        {
          "capacity": {
            "default": 3,
            "maximum": 9999,
            "minimum": null,
            "step": null
          },
          "deprecationDate": "2026-02-01T00:00:00+00:00",
          "name": "Standard",
          "rateLimits": [
            {
              "count": 1.0,
              "key": "request",
              "renewalPeriod": 60.0,
              "rules": null
            }
          ],
          "usageName": "OpenAI.Standard.tts"
        }
      ],
      "source": null,
      "systemData": {
        "createdAt": "2023-11-20T00:00:00+00:00",
        "createdBy": "Microsoft",
        "createdByType": "Application",
        "lastModifiedAt": "2023-11-20T00:00:00+00:00",
        "lastModifiedBy": "Microsoft",
        "lastModifiedByType": "Application"
      },
      "version": "001"
    },
    "skuName": "S0"
  },
  {
    "kind": "OpenAI",
    "model": {
      "baseModel": null,
      "callRateLimit": null,
      "capabilities": {
        "audio": "true",
        "scaleType": "Standard"
      },
      "deprecation": {
        "fineTune": null,
        "inference": "2025-03-01T00:00:00Z"
      },
      "finetuneCapabilities": null,
      "format": "OpenAI",
      "isDefaultVersion": true,
      "lifecycleStatus": "Preview",
      "maxCapacity": 9999,
      "name": "tts-hd",
      "skus": [
        {
          "capacity": {
            "default": 3,
            "maximum": 9999,
            "minimum": null,
            "step": null
          },
          "deprecationDate": "2026-02-01T00:00:00+00:00",
          "name": "Standard",
          "rateLimits": [
            {
              "count": 1.0,
              "key": "request",
              "renewalPeriod": 60.0,
              "rules": null
            }
          ],
          "usageName": "OpenAI.Standard.tts-hd"
        }
      ],
      "source": null,
      "systemData": {
        "createdAt": "2023-11-20T00:00:00+00:00",
        "createdBy": "Microsoft",
        "createdByType": "Application",
        "lastModifiedAt": "2023-11-20T00:00:00+00:00",
        "lastModifiedBy": "Microsoft",
        "lastModifiedByType": "Application"
      },
      "version": "001"
    },
    "skuName": "S0"
  },

From the structure of the JSON output, we can derive the following insights:

Model Information: Basic details about the model, such as model.format, model.name, and model.version, can be obtained from the corresponding fields.
Available Capabilities: The model.capabilities field lists the functionalities supported by the model. Entries like chat-completion and completions indicate support for Chat Completion and text generation features. If a particular capability isn't supported by the model, it simply won't appear in the capabilities array. For example, if imageGeneration is not listed, it implies that the model doesn't support image generation.

Deployment Information: The model's deployment options can be found in the model.skus field, which outlines the available SKUs for deploying the model.

Service Type: The kind field indicates the type of service the model belongs to, such as OpenAI, MaaS, or other Azure AI services.

By querying each region and extracting the necessary information from the JSON response, we can effectively analyze and understand the availability and capabilities of Azure AI models across different regions.

Retrieving Azure AI Model Information Across All Regions

To gather model information across all Azure regions, you can utilize the Azure CLI in conjunction with PowerShell. Given the substantial volume of data and to avoid placing excessive load on Azure Resource Manager, it's advisable to retrieve and store data for each region separately. This approach allows for more manageable analysis based on the saved files. While you can process JSON data using your preferred programming language, I personally prefer PowerShell. The following sample demonstrates how to execute Azure CLI commands within PowerShell to achieve this task.

# retrieve all Azure regions
$regions = az account list-locations --query [].name -o tsv

# Use this array if you retrieve only specific regions
# $regions = @('westus3', 'eastus', 'swedencentral')

# retrieve all region model data and put as JSON file
$idx = 1
$regions | foreach {
    $json = $null
    write-host ("[{0:00}/{1:00}] Getting models for {2} " -f $idx++, $regions.Length, $_)
    try {
        $json = az cognitiveservices model list -l $_
    }
    catch {
        # skip some unavailable regions
        write-host ("Error getting models for {0}: {1}" -f $_, $_.Exception.Message)
    }
    if($json -ne $null) {
        $models = $json | ConvertFrom-Json
        if($models.length -gt 0) {
            $json | Out-File -FilePath "./models/$($_).json" -Encoding utf8
        }
        else {
            # skip empty array
            Write-Host ("No models found for region: {0}" -f $_)
        }
    }
}

Summarizing the Collected Data

Let's load the previously saved JSON files and count the number of models available in each Azure region. The following PowerShell script reads all .json files from the ./models directory, converts their contents into PowerShell objects, adds the region information based on the filename, and then groups the models by region to count them:

# Load all JSON files, convert to objects, and add region information
$models = Get-ChildItem -Path "./models" -Filter "*.json" | ForEach-Object {
    $region = $_.BaseName
    Get-Content -Path $_.FullName -Raw | ConvertFrom-Json | ForEach-Object {
        $_ | Add-Member -NotePropertyName 'region' -NotePropertyValue $region -PassThru
    }
}

# Group models by region and count them
$models | Group-Object -Property region | ForEach-Object {
    Write-Host ("{0} models in {1}" -f $_.Count, $_.Name)
}

This script will output the number of models available in each region. For example:

228 models in australiaeast
222 models in brazilsouth
206 models in canadacentral
222 models in canadaeast
89 models in centralus
...

Which Models Are Deployable in Each Azure Region?
One of the first things you probably want is a list of which models and versions are available in each Azure region. The following script uses -ExpandProperty to unpack the model property array. Additionally, it expands the model.skus property to retrieve information about deployment options. Please note that you have to remove ./models/global.json before running the following script.

# Since using 'select -ExpandProperty' modifies the original object, it’s a good idea to reload data from the files each time
Get-ChildItem -Path "./models" -Filter "*.json" `
| foreach { Get-Content -Path $_.FullName `
    | ConvertFrom-Json `
    | Add-Member -NotePropertyName 'region' -NotePropertyValue $_.Name.Split(".")[0] -PassThru } | sv models

# Expand model and model.skus, and extract relevant fields
$models | select region, kind -ExpandProperty model `
| select region, kind, @{l='modelFormat';e={$_.format}}, @{l='modelName';e={$_.name}}, @{l='modelVersion'; e={$_.version}}, @{l='lifecycle';e={$_.lifecycleStatus}}, @{l='default';e={$_.isDefaultVersion}} -ExpandProperty skus `
| select region, kind, modelFormat, modelName, modelVersion, lifecycle, default, @{l='skuName';e={$_.name}}, @{l='deprecationDate';e={$_.deprecationDate}} `
| sort-object region, kind, modelName, modelVersion `
| sv output

$output | Format-Table | Out-String -Width 4096
$output | Export-Csv -Path "modelList.csv"

Given the volume of data, viewing it in Excel or a similar tool makes it easier to analyze. Let’s ignore the fact that Excel might try to auto-convert model version numbers into dates—yes, that annoying behavior. For example, if you filter for the gpt-4o model in the japaneast region, you’ll see that the latest version 2024-11-20 is now supported with the standard deployment SKU.

Listing Available Capabilities

While the model.capabilities property allows us to see which features each model supports, it doesn't include properties for unsupported features. This omission makes it challenging to determine all possible capabilities and apply appropriate filters. To address this, we'll aggregate the capabilities from all models across all regions to build a comprehensive dictionary of available features. This approach will help us understand the full range of capabilities and facilitate more effective filtering and analysis.

# Read from files again
Get-ChildItem -Path "./models" -Filter "*.json" `
| foreach { Get-Content -Path $_.FullName `
    | ConvertFrom-Json `
    | Add-Member -NotePropertyName 'region' -NotePropertyValue $_.Name.Split(".")[0] -PassThru } | sv models

# list only unique capabilities
$models `
| select -ExpandProperty model `
| select -ExpandProperty capabilities `
| foreach { $_.psobject.properties.name} `
| select -Unique `
| sort-object `
| sv capability_dictionary

The result is as follows:

allowProvisionedManagedCommitment
area
assistants
audio
chatCompletion
completion
embeddings
embeddingsMaxInputs
fineTune
FineTuneTokensMaxValue
FineTuneTokensMaxValuePerExample
imageGenerations
inference
jsonObjectResponse
jsonSchemaResponse
maxContextToken
maxOutputToken
maxStandardFinetuneDeploymentCount
maxTotalToken
realtime
responses
scaleType
search

Creating a Feature Support Matrix for Each Model

Now, let's use this list to build a feature support matrix that shows which capabilities are supported by each model. This matrix will help us understand the availability of features across different models and regions.
# Read from JSON files
Get-ChildItem -Path "./models" -Filter "*.json" `
| foreach { Get-Content -Path $_.FullName `
    | ConvertFrom-Json `
    | Add-Member -NotePropertyName 'region' -NotePropertyValue $_.Name.Split(".")[0] -PassThru } | sv models

# Outputting capability properties for each model (null if absent)
$models `
| select region, kind -ExpandProperty model `
| select region, kind, format, name, version -ExpandProperty capabilities `
| select (@('region', 'kind', 'format', 'name', 'version') + $capability_dictionary) `
| sort region, kind, format, name, version `
| sv model_capabilities

# output as csv file
$model_capabilities | Export-Csv -Path "modelCapabilities.csv"

Finding Models That Support Desired Capabilities

To identify models that support specific capabilities, such as imageGenerations, you can open the CSV file in Excel and filter accordingly. Upon doing so, you'll notice that support for image generation is quite limited across regions. When examining models that support the Completion endpoint in the eastus region, you'll find names like ada, babbage, curie, and davinci. These familiar names bring back memories of earlier models. It is also a reminder that during the GPT-3.5 Turbo era, models supported both Chat Completion and Completion endpoints.

Conclusion

By retrieving data directly from the actual Azure environment rather than relying solely on documentation, you can ensure access to the most up-to-date information. In this article, we adopted an approach where necessary information was flattened and exported to a CSV file for examination in Excel. Once you're familiar with the underlying data structure, this method allows for investigations and analyses from various perspectives. As you become more accustomed to this process, it might prove faster than navigating through official documentation. However, for aspects like "capabilities," which may be somewhat intuitive yet lack clear definitions, it's advisable to contact Azure Support for detailed information.

Microsoft Azure Cloud HSM is now generally available
Microsoft Azure Cloud HSM is now generally available. Azure Cloud HSM is a highly available, FIPS 140-3 Level 3 validated single-tenant hardware security module (HSM) service designed to meet the highest security and compliance standards. With full administrative control over their HSM, customers can securely manage cryptographic keys and perform cryptographic operations within their own dedicated Cloud HSM cluster. In today’s digital landscape, organizations face an unprecedented volume of cyber threats, data breaches, and regulatory pressures. At the heart of securing sensitive information lies a robust key management and encryption strategy, which ensures that data remains confidential, tamper-proof, and accessible only to authorized users. However, encryption alone is not enough. How cryptographic keys are managed determines the true strength of security. Every interaction in the digital world from processing financial transactions, securing applications like PKI, database encryption, document signing to securing cloud workloads and authenticating users relies on cryptographic keys. A poorly managed key is a security risk waiting to happen. Without a clear key management strategy, organizations face challenges such as data exposure, regulatory non-compliance and operational complexity. An HSM is a cornerstone of a strong key management strategy, providing physical and logical security to safeguard cryptographic keys. HSMs are purpose-built devices designed to generate, store, and manage encryption keys in a tamper-resistant environment, ensuring that even in the event of a data breach, protected data remains unreadable. As cyber threats evolve, organizations must take a proactive approach to securing data with enterprise-grade encryption and key management solutions. Microsoft Azure Cloud HSM empowers businesses to meet these challenges head-on, ensuring that security, compliance, and trust remain non-negotiable priorities in the digital age. Key Features of Azure Cloud HSM Azure Cloud HSM ensures high availability and redundancy by automatically clustering multiple HSMs and synchronizing cryptographic data across three instances, eliminating the need for complex configurations. It optimizes performance through load balancing of cryptographic operations, reducing latency. Periodic backups enhance security by safeguarding cryptographic assets and enabling seamless recovery. Designed to meet FIPS 140-3 Level 3, it provides robust security for enterprise applications. Ideal use cases for Azure Cloud HSM Azure Cloud HSM is ideal for organizations migrating security-sensitive applications from on-premises to Azure Virtual Machines or transitioning from Azure Dedicated HSM or AWS Cloud HSM to a fully managed Azure-native solution. It supports applications requiring PKCS#11, OpenSSL, and JCE for seamless cryptographic integration and enables running shrink-wrapped software like Apache/Nginx SSL Offload, Microsoft SQL Server/Oracle TDE, and ADCS on Azure VMs. Additionally, it supports tools and applications that require document and code signing. Get started with Azure Cloud HSM Ready to deploy Azure Cloud HSM? 
Learn more and start building today: Get Started Deploying Azure Cloud HSM

Customers can download the Azure Cloud HSM SDK and Client Tools from GitHub: Microsoft Azure Cloud HSM SDK

Stay tuned for further updates as we continue to enhance Microsoft Azure Cloud HSM to support your most demanding security and compliance needs.

Mastering Azure Queries: Skip Token and Batching for Scale
Let's be honest. As a cloud engineer or DevOps professional managing a large Azure environment, running even a simple resource inventory query can feel like drinking from a firehose. You hit API limits, face slow performance, and struggle to get the complete picture of your estate—all because the data volume is overwhelming. But it doesn't have to be this way! This blog is your practical, hands-on guide to mastering two essential techniques for handling massive data volumes in Azure: using PowerShell and Azure Resource Graph (ARG): Skip Token (for full data retrieval) and Batching (for blazing-fast performance). 📋 TABLE OF CONTENTS 🚀 GETTING STARTED │ ├─ Prerequisites: PowerShell 7+ & Az.ResourceGraph Module └─ Introduction: Why Standard Queries Fail at Scale 📖 CORE CONCEPTS │ ├─ 📑 Skip Token: The Data Completeness Tool │ ├─ What is a Skip Token? │ ├─ The Bookmark Analogy │ ├─ PowerShell Implementation │ └─ 💻 Code Example: Pagination Loop │ └─ ⚡ Batching: The Performance Booster ├─ What is Batching? ├─ Performance Benefits ├─ Batching vs. Pagination ├─ Parallel Processing in PowerShell └─ 💻 Code Example: Concurrent Queries 🔍 DEEP DIVE │ ├─ Skip Token: Generic vs. Azure-Specific └─ Azure Resource Graph (ARG) at Scale ├─ ARG Overview ├─ Why ARG Needs These Techniques └─ 💻 Combined Example: Skip Token + Batching ✅ BEST PRACTICES │ ├─ When to Use Each Technique └─ Quick Reference Guide 📚 RESOURCES └─ Official Documentation & References Prerequisites Component Requirement / Details Command / Reference PowerShell Version The batching examples use ForEach-Object -Parallel, which requires PowerShell 7.0 or later. Check version: $PSVersionTable.PSVersion Install PowerShell 7+: Install PowerShell on Windows, Linux, and macOS Azure PowerShell Module Az.ResourceGraph module must be installed. Install module: Install-Module -Name Az.ResourceGraph -Scope CurrentUser Introduction: Why Standard Queries Don't Work at Scale When you query a service designed for big environments, like Azure Resource Graph, you face two limits: Result Limits (Pagination): APIs won't send you millions of records at once. They cap the result size (often 1,000 items) and stop. Efficiency Limits (Throttling): Sending a huge number of individual requests is slow and can cause the API to temporarily block you (throttling). Skip Token helps you solve the first limit by making sure you retrieve all results. Batching solves the second by grouping your requests to improve performance. Understanding Skip Token: The Continuation Pointer What is a Skip Token? A Skip Token (or continuation token) is a unique string value returned by an Azure API when a query result exceeds the maximum limit for a single response. Think of the Skip Token as a “bookmark” that tells Azure where your last page ended — so you can pick up exactly where you left off in the next API call. Instead of getting cut off after 1,000 records, the API gives you the first 1,000 results plus the Skip Token. You use this token in the next request to get the next page of data. This process is called pagination. Skip Token in Practice with PowerShell To get the complete dataset, you must use a loop that repeatedly calls the API, providing the token each time until the token is no longer returned. PowerShell Example: Using Skip Token to Loop Pages # Define the query $Query = "Resources | project name, type, location" $PageSize = 1000 $AllResults = @() $SkipToken = $null # Initialize the token Write-Host "Starting ARG query..." do { Write-Host "Fetching next page. 
(Token check: $($SkipToken -ne $null))" # 1. Execute the query, using the -SkipToken parameter $ResultPage = Search-AzGraph -Query $Query -First $PageSize -SkipToken $SkipToken # 2. Add the current page results to the main array $AllResults += $ResultPage # 3. Get the token for the next page, if it exists $SkipToken = $ResultPage.SkipToken Write-Host " -> Items in this page: $($ResultPage.Count). Total retrieved: $($AllResults.Count)" } while ($SkipToken -ne $null) # Loop as long as a Skip Token is returned Write-Host "Query finished. Total resources found: $($AllResults.Count)" This do-while loop is the reliable way to ensure you retrieve every item in a large result set. Understanding Batching: Grouping Requests What is Batching? Batching means taking several independent requests and combining them into a single API call. Instead of making N separate network requests for N pieces of data, you make one request containing all N sub-requests. Batching is primarily used for performance. It improves efficiency by: Reducing Overhead: Fewer separate network connections are needed. Lowering Throttling Risk: Fewer overall API calls are made, which helps you stay under rate limits. Feature Batching Pagination (Skip Token) Goal Improve efficiency/speed. Retrieve all data completely. Input Multiple different queries. Single query, continuing from a marker. Result One response with results for all grouped queries. Partial results with a token for the next step. Note: While Azure Resource Graph's REST API supports batch requests, the PowerShell Search-AzGraph cmdlet does not expose a -Batch parameter. Instead, we achieve batching by using PowerShell's ForEach-Object -Parallel (PowerShell 7+) to run multiple queries simultaneously. Batching in Practice with PowerShell Using parallel processing in PowerShell, you can efficiently execute multiple distinct Kusto queries targeting different scopes (like subscriptions) simultaneously. Method 5 Subscriptions 20 Subscriptions Sequential ~50 seconds ~200 seconds Parallel (ThrottleLimit 5) ~15 seconds ~45 seconds PowerShell Example: Running Multiple Queries in Parallel # Define multiple queries to run together $BatchQueries = @( @{ Query = "Resources | where type =~ 'Microsoft.Compute/virtualMachines'" Subscriptions = @("SUB_A") # Query 1 Scope }, @{ Query = "Resources | where type =~ 'Microsoft.Network/publicIPAddresses'" Subscriptions = @("SUB_B", "SUB_C") # Query 2 Scope } ) Write-Host "Executing batch of $($BatchQueries.Count) queries in parallel..." # Execute queries in parallel (true batching) $BatchResults = $BatchQueries | ForEach-Object -Parallel { $QueryConfig = $_ $Query = $QueryConfig.Query $Subs = $QueryConfig.Subscriptions Write-Host "[Batch Worker] Starting query: $($Query.Substring(0, [Math]::Min(50, $Query.Length)))..." -ForegroundColor Cyan $QueryResults = @() # Process each subscription in this query's scope foreach ($SubId in $Subs) { $SkipToken = $null do { $Params = @{ Query = $Query Subscription = $SubId First = 1000 } if ($SkipToken) { $Params['SkipToken'] = $SkipToken } $Result = Search-AzGraph @Params if ($Result) { $QueryResults += $Result } $SkipToken = $Result.SkipToken } while ($SkipToken) } Write-Host " [Batch Worker] ✅ Query completed - Retrieved $($QueryResults.Count) resources" -ForegroundColor Green # Return results with metadata [PSCustomObject]@{ Query = $Query Subscriptions = $Subs Data = $QueryResults Count = $QueryResults.Count } } -ThrottleLimit 5 Write-Host "`nBatch complete. Reviewing results..." 
# The results are returned in the same order as the input array $VMCount = $BatchResults[0].Data.Count $IPCount = $BatchResults[1].Data.Count Write-Host "Query 1 (VMs) returned: $VMCount results." Write-Host "Query 2 (IPs) returned: $IPCount results." # Optional: Display detailed results Write-Host "`n--- Detailed Results ---" for ($i = 0; $i -lt $BatchResults.Count; $i++) { $Result = $BatchResults[$i] Write-Host "`nQuery $($i + 1):" Write-Host " Query: $($Result.Query)" Write-Host " Subscriptions: $($Result.Subscriptions -join ', ')" Write-Host " Total Resources: $($Result.Count)" if ($Result.Data.Count -gt 0) { Write-Host " Sample (first 3):" $Result.Data | Select-Object -First 3 | Format-Table -AutoSize } } Azure Resource Graph (ARG) and Scale Azure Resource Graph (ARG) is a service built for querying resource properties quickly across a large number of Azure subscriptions using the Kusto Query Language (KQL). Because ARG is designed for large scale, it fully supports Skip Token and Batching: Skip Token: ARG automatically generates and returns the token when a query exceeds its result limit (e.g., 1,000 records). Batching: ARG's REST API provides a batch endpoint for sending up to ten queries in a single request. In PowerShell, we achieve similar performance benefits using ForEach-Object -Parallel to process multiple queries concurrently. Combined Example: Batching and Skip Token Together This script shows how to use Batching to start a query across multiple subscriptions and then use Skip Token within the loop to ensure every subscription's data is fully retrieved. $SubscriptionIDs = @("SUB_A") $KQLQuery = "Resources | project id, name, type, subscriptionId" Write-Host "Starting BATCHED query across $($SubscriptionIDs.Count) subscription(s)..." Write-Host "Using parallel processing for true batching...`n" # Process subscriptions in parallel (batching) $AllResults = $SubscriptionIDs | ForEach-Object -Parallel { $SubId = $_ $Query = $using:KQLQuery $SubResults = @() Write-Host "[Batch Worker] Processing Subscription: $SubId" -ForegroundColor Cyan $SkipToken = $null $PageCount = 0 do { $PageCount++ # Build parameters $Params = @{ Query = $Query Subscription = $SubId First = 1000 } if ($SkipToken) { $Params['SkipToken'] = $SkipToken } # Execute query $Result = Search-AzGraph @Params if ($Result) { $SubResults += $Result Write-Host " [Batch Worker] Sub: $SubId - Page $PageCount - Retrieved $($Result.Count) resources" -ForegroundColor Yellow } $SkipToken = $Result.SkipToken } while ($SkipToken) Write-Host " [Batch Worker] ✅ Completed $SubId - Total: $($SubResults.Count) resources" -ForegroundColor Green # Return results from this subscription $SubResults } -ThrottleLimit 5 # Process up to 5 subscriptions simultaneously Write-Host "`n--- Batch Processing Finished ---" Write-Host "Final total resource count: $($AllResults.Count)" # Optional: Display sample results if ($AllResults.Count -gt 0) { Write-Host "`nFirst 5 resources:" $AllResults | Select-Object -First 5 | Format-Table -AutoSize } Technique Use When... Common Mistake Actionable Advice Skip Token You must retrieve all data items, expecting more than 1,000 results. Forgetting to check for the token; you only get partial data. Always use a do-while loop to guarantee you get the complete set. Batching You need to run several separate queries (max 10 in ARG) efficiently. Putting too many queries in the batch, causing the request to fail. Group up to 10 logical queries or subscriptions into one fast request. 
By combining Skip Token for data completeness and Batching for efficiency, you can confidently query massive Azure estates without hitting limits or missing data. These two techniques — when used together — turn Azure Resource Graph from a “good tool” into a scalable discovery engine for your entire cloud footprint.

Summary: Skip Token and Batching in Azure Resource Graph

Goal: Efficiently query massive Azure environments using PowerShell and Azure Resource Graph (ARG).

1. Skip Token (The Data Completeness Tool)
- What it does: A marker returned by Azure APIs when results hit the 1,000-item limit. It points to the next page of data.
- Why it matters: Ensures you retrieve all records, avoiding incomplete data (pagination).
- PowerShell use: Use a do-while loop with the -SkipToken parameter in Search-AzGraph until the token is no longer returned.

2. Batching (The Performance Booster)
- What it does: Processes multiple independent queries simultaneously using parallel execution.
- Why it matters: Drastically improves query speed by reducing overall execution time and helps avoid API throttling.
- PowerShell use: Use ForEach-Object -Parallel (PowerShell 7+) with -ThrottleLimit to control concurrent queries. For PowerShell 5.1, use Start-Job with background jobs.

3. Best Practice: Combine Them
For maximum efficiency, combine Batching and Skip Token. Use batching to run queries across multiple subscriptions simultaneously and use the Skip Token logic within the loop to ensure every single subscription's data is fully paginated and retrieved. Result: Fast, complete, and reliable data collection across your large Azure estate.

References:
- Azure Resource Graph documentation
- Search-AzGraph PowerShell reference

Creating an Application Landing Zone on Azure Using Bicep
🧩 What Is an Application Landing Zone?

An Application Landing Zone is a pre-configured Azure environment designed to host applications in a secure, scalable, and governed manner. It is a foundational component of the Azure Landing Zones framework, which supports enterprise-scale cloud adoption by providing a consistent and governed environment for deploying workloads.

🔍 Key Characteristics
- Security and Compliance: Built-in policies and controls ensure that applications meet organizational and regulatory requirements.
- Pre-configured Infrastructure: Includes networking, identity, security, monitoring, and governance components that are ready to support application workloads.
- Scalability and Flexibility: Designed to scale with application demand, supporting both monolithic and microservices-based architectures.
- Governance and Management: Integrated with Azure Policy, Azure Monitor, and Azure Security Center to enforce governance and provide operational insights.
- Developer Enablement: Provides a consistent environment that accelerates development and deployment cycles.

🏗️ Core Components

An Application Landing Zone typically includes:
- Networking with Virtual Networks (VNets), subnets, and NSGs
- Azure Active Directory (AAD) integration
- Role-Based Access Control (RBAC)
- Azure Key Vault / Managed HSM for secrets management
- Monitoring and Logging via Azure Monitor and Log Analytics
- Application Gateway or Azure Front Door for traffic management
- CI/CD Pipelines integrated with Azure DevOps or GitHub Actions

🛠️ Prerequisites

Before deploying the Application Landing Zone, please ensure the following:

✅ Access & Identity
- Azure Subscription Access: You must have access to an active Azure subscription where the landing zone will be provisioned. This subscription should be part of a broader management group hierarchy if you're following enterprise-scale landing zone patterns.
- A Service Principal (SPN): A service principal is required for automating deployments via CI/CD pipelines or Infrastructure as Code (IaC) tools. It should have at least the Contributor role at the subscription level to create and manage resources. Explicit access to the following is required: Resource Groups (for deploying application components), Azure Policy (to assign and manage governance rules), and Azure Key Vault (to retrieve secrets, certificates, or credentials).
- Azure Active Directory (AAD): Ensure that AAD is properly configured for Role-Based Access Control (RBAC), group-based access assignments, and Conditional Access policies (if applicable).

Tip: Use Managed Identities where possible to reduce the need for credential management.

✅ Tooling
- Azure CLI: Required for scripting and executing deployment commands. Ensure you're authenticated using az login or a service principal. Recommended version: 2.55.0 or later for compatibility with Bicep and the latest Azure features.
- Azure PowerShell: Installed and authenticated (Connect-AzAccount). Recommended module: Az module version 11.0.0 or later.
- Visual Studio Code: Preferred IDE for working with Bicep and ARM templates. Install the following extensions: Bicep (for authoring and validating infrastructure templates) and Azure Account (for managing Azure sessions and subscriptions).
- Source Control & CI/CD Integration: Access to GitHub or Azure DevOps is required for storing IaC templates, automating deployments via pipelines, and managing version control and collaboration.

💡 Tip: Use GitHub Actions or Azure Pipelines to automate validation, testing, and deployment of your landing zone templates.
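For the service principal described under Access & Identity above, non-interactive sign-in from a pipeline typically looks like the following sketch. The tenant ID, application (client) ID, secret, and subscription ID are placeholders assumed to come from your pipeline's secret store rather than hard-coded values.

# Minimal sketch: authenticate Azure PowerShell with a service principal (placeholder values)
$tenantId = $env:AZURE_TENANT_ID          # assumed pipeline variables
$appId    = $env:AZURE_CLIENT_ID
$secret   = ConvertTo-SecureString $env:AZURE_CLIENT_SECRET -AsPlainText -Force

$credential = New-Object System.Management.Automation.PSCredential($appId, $secret)
Connect-AzAccount -ServicePrincipal -TenantId $tenantId -Credential $credential

# Select the target subscription for the landing zone deployment
Set-AzContext -SubscriptionId $env:AZURE_SUBSCRIPTION_ID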
✅ Environment Setup
- Resource Naming Conventions: Define a naming standard that reflects resource type, environment, region, and application. Example: rg-app1-prod-weu for a production resource group in West Europe.
- Tagging Strategy: Predefine tags for cost management (e.g., CostCenter, Project), ownership (e.g., Owner, Team), and environment (e.g., Dev, Test, Prod).
- Networking Baseline: Ensure that required VNets, subnets, and DNS settings are in place. Plan for hybrid connectivity if integrating with on-premises networks (e.g., via VPN or ExpressRoute).
- Security Baseline: Define and apply RBAC roles for least-privilege access, Azure built-in as well as custom policies for compliance enforcement, and NSGs and ASGs for network security.

🧱 Application Landing Zone Architecture Using Bicep

Bicep is a domain-specific language (DSL) for deploying Azure resources declaratively. It simplifies the authoring experience compared to traditional ARM templates and supports modular, reusable, and maintainable infrastructure-as-code (IaC) practices. The Application Landing Zone (App LZ) architecture leverages Bicep to define and deploy a secure, scalable, and governed environment for hosting applications. This architecture is structured into phases, each representing a logical grouping of resources. These phases align with enterprise cloud adoption frameworks and enable teams to deploy infrastructure incrementally and consistently.

🧱 Architectural Phases

The App LZ is typically divided into the following phases, each implemented using modular Bicep templates:

1. Foundation Phase
Establishes the core infrastructure and governance baseline: resource groups, virtual networks and subnets, network security groups (NSGs), diagnostic settings, and Azure Policy assignments.

2. Identity & Access Phase
Implements secure access and identity controls: Role-Based Access Control (RBAC), Azure Active Directory (AAD) integration, managed identities, and Key Vault access policies.

3. Security & Monitoring Phase
Ensures observability and compliance: Azure Monitor and Log Analytics, Security Center configuration, alerts and action groups, and Defender for Cloud settings.

4. Application Infrastructure Phase
Deploys application-specific resources: App Services, AKS, or Function Apps; Application Gateway or Azure Front Door; storage accounts, databases, and messaging services; and private endpoints and service integrations.

5. CI/CD Integration Phase
Automates deployment and lifecycle management: GitHub Actions or Azure Pipelines, deployment scripts and parameter files, secrets management via Key Vault, and environment-specific configurations.

🔁 Modular Bicep Templates

Each phase is implemented using modular Bicep templates, which offer:
- Reusability: Templates can be reused across environments (Dev, Test, Prod).
- Flexibility: Parameters allow customization without modifying core logic.
- Incremental Deployment: Phases can be deployed independently or chained together.
- Testability: Each module can be validated against test cases before full deployment.

💡 Example: A network.bicep module can be reused across multiple landing zones with different subnet configurations. A minimal deployment sketch for such a module follows.
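As a sketch of how one of these modules might be deployed from Azure PowerShell, the example below assumes a hypothetical network.bicep module with namePrefix and location parameters; the resource group, file path, and parameter names are illustrative and not part of any published template.

# Preview the changes first with -WhatIf, then deploy the module incrementally
$deployParams = @{
    ResourceGroupName       = 'rg-app1-prod-weu'          # assumed resource group
    TemplateFile            = './modules/network.bicep'   # hypothetical module path
    TemplateParameterObject = @{ namePrefix = 'app1-prod'; location = 'westeurope' }
}

# Dry run: shows what would be created, changed, or deleted
New-AzResourceGroupDeployment @deployParams -WhatIf

# Actual deployment (incremental mode is the default)
New-AzResourceGroupDeployment @deployParams -Name 'network-foundation' -Verbose

Running the -WhatIf pass before the real deployment mirrors the what-if analysis discussed later in this article and keeps module rollouts reviewable.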
To ensure a smooth and automated deployment experience, here’s the complete flow from setup:

✅ Benefits of This Approach
- Consistency & Compliance: Enforces Azure best practices and governance policies
- Modularity: Reusable Bicep modules simplify maintenance and scaling
- Automation: CI/CD pipelines reduce manual effort and errors
- Security: Aligns with Microsoft’s security baselines and CAF
- Scalability: Easily extendable to support new workloads or environments
- Native Azure Integration: Supports all Azure resources and features
- Tooling Support: Integrated with Visual Studio Code, Azure CLI, and GitHub

🔄 Why Choose Bicep Over Terraform?
- First-Party Integration: Bicep is a first-party solution maintained by Microsoft, ensuring day-one support for new Azure services and API changes. This means customers can immediately leverage the latest features and updates without waiting for third-party providers to catch up.
- Azure-Specific Optimization: Bicep is deeply integrated with Azure services, offering a tailored experience for Azure resource management. This integration ensures that deployments are optimized for Azure, providing better performance and reliability.
- Simplified Syntax: Bicep uses a domain-specific language (DSL) that is more concise and easier to read compared to Terraform's HCL (HashiCorp Configuration Language). This simplicity reduces the learning curve and makes it easier for teams to write and maintain infrastructure code.
- Incremental Deployment: Unlike Terraform, Bicep does not store state. Instead, it relies on incremental deployment, which simplifies the deployment process and reduces the complexity associated with state management. This approach ensures that resources are deployed consistently without the need for managing state files.
- Azure Policy Integration: Bicep integrates seamlessly with Azure Policy, allowing for preflight validation to ensure compliance with policies before deployment. This integration helps in maintaining governance and compliance across deployments.
- What-If Analysis: Bicep offers a "what-if" operation that predicts the changes before deploying a Bicep file. This feature allows customers to preview the impact of their changes without making any modifications to the existing infrastructure.

🏁 Conclusion

Creating an Application Landing Zone using Bicep provides a robust, scalable, and secure foundation for deploying applications in Azure. By following a phased, modular approach and leveraging automation, organizations can accelerate their cloud adoption journey while maintaining governance and operational excellence.

Securing the digital future: Advanced firewall protection for all Azure customers
Introduction In today's digital landscape, rapid innovation—especially in areas like AI—is reshaping how we work and interact. With this progress comes a growing array of cyber threats and gaps that impact every organization. Notably, the convergence of AI, data security, and digital assets has become particularly enticing for bad actors, who leverage these advanced tools and valuable information to orchestrate sophisticated attacks. Security is far from an optional add-on; it is the strategic backbone of modern business operations and resiliency. The evolving threat landscape Cyber threats are becoming more sophisticated and persistent. A single breach can result in costly downtime, loss of sensitive data, and damage to customer trust. Organizations must not only detect incidents but also proactively prevent them –all while complying with regulatory standards like GDPR and HIPAA. Security requires staying ahead of threats and ensuring that every critical component of your digital environment is protected. Azure Firewall: Strengthening security for all users Azure Firewall is engineered and innovated to benefit all users by serving as a robust, multifaceted line of defense. Below are five key scenarios that illustrate how Azure Firewall provides security across various use cases: First, Azure Firewall acts as a gateway that separates the external world from your internal network. By establishing clearly defined boundaries, it ensures that only authorized traffic can flow between different parts of your infrastructure. This segmentation is critical in limiting the spread of an attack, should one occur, effectively containing potential threats to a smaller segment of the network. Second, the key role of the Azure Firewall is to filter traffic between clients, applications, and servers. This filtering capability prevents unauthorized access, ensuring that hackers cannot easily infiltrate private systems to steal sensitive data. For instance, whether protecting personal financial information or health data, the firewall inspects and controls traffic to maintain data integrity and confidentiality. Third, beyond protecting internal Azure or on-premises resources, Azure Firewall can also regulate outbound traffic to the Internet. By filtering user traffic from Azure to the Internet, organizations can prevent employees from accessing potentially harmful websites or inadvertently downloading malicious content. This is supported through FQDN or URL filtering, as well as web category controls, where administrators can filter traffic to domain names or categories such as social media, gambling, hacking, and more. In addition, security today means staying ahead of threats, not just controlling access. It requires proactively detecting and blocking malicious traffic before it even reaches the organization’s environment. Azure Firewall is integrated with Microsoft’s Threat Intelligence feed, which supplies millions of known malicious IP addresses and domains in real time. This integration enables the firewall to dynamically detect and block threats as soon as they are identified. In addition, Azure Firewall IDPS (Intrusion Detection and Prevention System) extends this proactive defense by offering advanced capabilities to identify and block suspicious activity by: Monitoring malicious activity: Azure Firewall IDPS rapidly detects attacks by identifying specific patterns associated with malware command and control, phishing, trojans, botnets, exploits, and more. 
Proactive blocking: Once a potential threat is detected, Azure Firewall IDPS can automatically block the offending traffic and alert security teams, reducing the window of exposure and minimizing the risk of a breach.

Together, these integrated capabilities ensure that your network is continuously protected by a dynamic, multi-layered defense system that not only detects threats in real time but also helps prevent them from ever reaching your critical assets.

Image: Trend illustrating the number of IDPS alerts Azure Firewall generated from September 2024 to March 2025

Finally, Azure Firewall’s cloud-native architecture delivers robust security while streamlining management. An agile management experience not only improves operational efficiency but also frees security teams to focus on proactive threat detection and strategic security initiatives by providing:
- High availability and resiliency: As a fully managed service, Azure Firewall is built on the power of the cloud, ensuring high availability and built-in resiliency to keep your security always active.
- Autoscaling for easy maintenance: Azure Firewall automatically scales to meet your network’s demands. This autoscaling capability means that as your traffic grows or fluctuates, the firewall adjusts in real time—eliminating the need for manual intervention and reducing operational overhead.
- Centralized management with Azure Firewall Manager: Azure Firewall Manager provides a centralized management experience for configuring, deploying, and monitoring multiple Azure Firewall instances across regions and subscriptions. You can create and manage firewall policies across your entire organization, ensuring uniform rule enforcement and simplifying updates. This helps reduce administrative overhead while enhancing visibility and control over your network security posture.
- Seamless integration with Azure services: Azure Firewall’s strong integration with other Azure services, such as Microsoft Sentinel, Microsoft Defender, and Azure Monitor, creates a unified security ecosystem. This integration not only enhances visibility and threat detection across your environment but also streamlines management and incident response.

Conclusion

Azure Firewall's combination of robust network segmentation, advanced IDPS and threat intelligence capabilities, and cloud-native scalability makes it an essential component of modern security architectures—empowering organizations to confidently defend against today’s ever-evolving cyber threats while seamlessly integrating with the broader Azure security ecosystem.

Learn to elevate security and resiliency of Azure and AI projects with skilling plans
In an era where organizations are increasingly adopting a cloud-first approach to support digital transformation and AI-driven innovation, learning skills to enhance cloud resilience and security has become a top priority. By 2025, an estimated 85% of companies will have embraced a cloud-first strategy, according to research by Gartner, marking a significant shift toward reliance on platforms like Microsoft Azure for mission-critical workloads. Yet according to a recent Flexera survey, 78% of respondents found a lack of skilled people and expertise to be one of their top three cloud challenges along with optimizing costs and boosting security. To help our customers unlock the full potential of their Azure investments, Microsoft introduced Azure Essentials, a single destination for in-depth skilling, guidance and support for elevating reliability, security, and ongoing performance of their cloud and AI investments. In this blog we’ll explore this guidance in detail and introduce you to two new free, self-paced skilling resource Plans on Microsoft Learn to get your team skilled on building resiliency into your Azure and AI environments. Empower your team: Learn proactive resiliency for critical workloads in Azure Azure offers a resilient foundation to reliably support workloads in the cloud, and our Well-Architected Framework helps teams design systems to recover from failures with minimal disruption. Figure 1: Design your critical workloads for resiliency, and assess existing workloads for ongoing performance, compliance and resiliency. The new resiliency-focused Microsoft Learn skilling plan helps teams learn to “Elevate reliability, security, and ongoing performance of Azure and AI projects”, and they see how the Well-Architected Framework, coupled with the Cloud Adoption Framework, provides actionable guidelines to enhance resilience, optimize security measures, and ensure consistent, high-performance for Azure workloads and AI deployments. The Plan also covers cost optimization through the FinOps Framework, ensuring that security and reliability measures are implemented within budget. This training also emphasizes Azure AI Foundry, a tool that allows teams to work on AI-driven projects while maintaining security and governance standards, which are critical to reducing vulnerabilities and ensuring long-term stability. The plan guides learners in securely developing, testing, and deploying AI solutions, empowering them to build resilient applications that can support sustained performance and data integrity. The impact of Azure’s resiliency guidance is significant. According to Forrester, following this framework reduces planned downtime by 30%, prevents 15% of revenue loss due to resilience issues, and achieves an 18% ROI through rearchitected workloads. Given that 60% of reliability failures result in losses of at least $100,000, and 15% of failures cost upwards of $1 million, these preventative measures underscore the financial value of resilient architecture. Ensuring security in Azure AI workloads AI adds complexity to security considerations in cloud environments. AI applications often require significant data handling, which introduces new vulnerabilities and compliance considerations. Microsoft’s guidance focuses on integrating robust security practices directly into AI project workflows, ensuring that organizations adhere to stringent data protection regulations. 
Azure’s tools, including multi-zone deployment options, network security solutions, and data protection services, empower customers to create resilient and secure workloads. Our new training on proactive resiliency and reliability of critical Azure and AI workloads guides you in building fault-tolerant systems and managing risks in your environments. This plan teaches users how to assess workloads, identify vulnerabilities, and deploy prioritized resiliency strategies, equipping them to achieve optimal performance even under adverse conditions.

Maximizing business value and ROI through resiliency and security

Companies that prioritize resiliency and security in their cloud strategies enjoy multiple benefits beyond reduced downtime. Forrester’s findings suggest that a commitment to resilience has a three-year financial impact, with significant cost savings from avoided outages, higher ROI from optimized workloads, and increased productivity. Organizations can reinvest these savings into further modernization efforts, expanding their capabilities in AI and data analytics. Azure’s tools, frameworks, and Microsoft’s shared responsibility model give businesses the foundation to build resilient, secure, and high-performing applications that align with their goals.

Microsoft Learn’s structured learning Plans, which help you “Elevate Azure Reliability and Performance” and “Improve resiliency of critical workloads on Azure,” provide essential training to build skills in designing and maintaining reliable and secure cloud projects. As more companies embrace cloud-first strategies, Microsoft’s commitment to proactive resiliency, architectural guidance, and cost management tools will empower organizations to realize the full potential of their cloud and AI investments. Start your journey to a reliable and secure Azure cloud today.

Resources: Visit Microsoft Learn Plans