Follow-Up to ‘Important Changes to App Service Managed Certificates’: October 2025 Update
This post provides an update to the Tech Community article ‘Important Changes to App Service Managed Certificates: Is Your Certificate Affected?’ and covers the latest changes introduced since July 2025. With the October 2025 update, ASMC now remains supported even if the site is not publicly accessible, provided all other requirements are met. Details on requirements, exceptions, and validation steps are included below.

Background Context to July 2025 Changes

As of July 2025, all ASMC certificate issuance and renewals use HTTP token validation. Previously, public access was required because DigiCert needed to reach the endpoint https://<hostname>/.well-known/pki-validation/fileauth.txt to verify the token before issuing the certificate. App Service automatically places this token during certificate creation and renewal. If DigiCert cannot access this endpoint, domain ownership validation fails and the certificate cannot be issued.

October 2025 Update

Starting October 2025, App Service allows DigiCert's requests to the https://<hostname>/.well-known/pki-validation/fileauth.txt endpoint even if the site blocks public access. When a request to create an App Service Managed Certificate (ASMC) is made, App Service places the domain validation token at the validation endpoint. When DigiCert tries to reach that endpoint, App Service front ends present the token and the request terminates at the front-end layer; DigiCert's request never reaches the workers running the application. This behavior is now the default for both initial certificate creation and renewals, and customers do not need to explicitly allow DigiCert's IP addresses.

Exceptions and Unsupported Scenarios

This update addresses most scenarios that restrict public access, including App Service Authentication, disabled public access, IP restrictions, private endpoints, and client certificates. However, a public DNS record is still required. For example, sites using a private endpoint with a custom domain on a private DNS zone cannot validate domain ownership and obtain a certificate. Even with all validations now relying on HTTP token validation and DigiCert requests being allowed through, certain configurations are still not supported for ASMC:

- Sites configured as "Nested" or "External" endpoints behind Traffic Manager. Only "Azure" endpoints are supported.
- Certificates requested for domains ending in *.trafficmanager.net.

Testing

Customers can easily test whether their site's configuration or setup supports ASMC by attempting to create one for their site, as shown in the sketch below. If the initial request succeeds, renewals should also work, provided all requirements are met and the site is not listed in an unsupported scenario.
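One way to run this test from PowerShell is to request a managed certificate for the custom domain and see whether issuance succeeds. This is a minimal sketch, not the official validation procedure: the resource names are placeholders, and it assumes the New-AzWebAppCertificate cmdlet is available in your installed Az.Websites version.

```powershell
# Minimal sketch: attempt to create an App Service Managed Certificate (ASMC).
# Assumes Az.Websites is installed and you are signed in (Connect-AzAccount);
# all resource names below are placeholders for your own environment.
Import-Module Az.Websites

$params = @{
    ResourceGroupName = 'my-resource-group'   # placeholder
    WebAppName        = 'my-web-app'          # placeholder
    Name              = 'my-managed-cert'     # placeholder certificate name
    HostName          = 'www.contoso.com'     # custom domain already bound to the app
}

try {
    # If issuance succeeds, the configuration supports ASMC; renewals should follow suit.
    New-AzWebAppCertificate @params
    Write-Host "ASMC issued - this configuration supports managed certificates."
}
catch {
    # A failure here typically points to a validation problem (e.g., missing public DNS record).
    Write-Warning "ASMC creation failed: $($_.Exception.Message)"
}
```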
Security Review for Microsoft Edge version 141

We have reviewed the new settings in Microsoft Edge version 141 and determined that there are no additional security settings that require enforcement. The Microsoft Edge version 139 security baseline continues to be our recommended configuration, which can be downloaded from the Microsoft Security Compliance Toolkit. Microsoft Edge version 141 introduced 6 new Computer and User settings; we have included a spreadsheet listing the new settings to make it easier for you to find them. As a friendly reminder, all available settings for Microsoft Edge are documented here, and all available settings for Microsoft Edge Update are documented here. Please continue to give us feedback through the Security Baselines Discussion site or this post.
Azure Data Components Network Architecture with secure configurations

This blog explains how to set up Azure data components inside secure VNets:

1. We define the network architecture diagram and the secure configuration for these data components.
2. We show how to deploy the code securely within the VNets.
Resiliency Best Practices You Need for Your Blob Storage Data

Maintaining Resiliency in Azure Blob Storage: A Guide to Best Practices

Azure Blob Storage is a cornerstone of modern cloud storage, offering scalable and secure solutions for unstructured data. However, maintaining resiliency in Blob Storage requires careful planning and adherence to best practices. In this blog, I’ll share practical strategies to ensure your data remains available, secure, and recoverable under all circumstances.

1. Enable Soft Delete for Accidental Recovery (Most Important)

Mistakes happen, but soft delete can be your safety net. It allows you to recover deleted blobs within a specified retention period:

- Configure a soft delete retention period in Azure Storage (see the PowerShell sketch after this list).
- Regularly monitor your blob storage to ensure that critical data is not permanently removed by mistake.

Enabling soft delete in Azure Blob Storage does not come with any additional cost for simply enabling the feature itself. However, it can potentially increase your storage costs because the deleted data is retained for the configured retention period, which means:

- The retained data contributes to the total storage consumption during the retention period.
- You will be charged according to the pricing tier of the data (Hot, Cool, or Archive) for the duration of retention.

2. Utilize Geo-Redundant Storage (GRS)

Geo-redundancy ensures your data is replicated across regions to protect against regional failures:

- Choose RA-GRS (Read-Access Geo-Redundant Storage) for read access to secondary replicas in the event of a primary region outage.
- Assess your workload’s RPO (Recovery Point Objective) and RTO (Recovery Time Objective) needs to select the appropriate redundancy.

3. Implement Lifecycle Management Policies

Efficient storage management reduces costs and ensures long-term data availability:

- Set up lifecycle policies to transition data between hot, cool, and archive tiers based on usage.
- Automatically delete expired blobs to save on costs while keeping your storage organized.

4. Secure Your Data with Encryption and Access Controls

Resiliency is incomplete without robust security. Protect your blobs using:

- Encryption at Rest: Azure automatically encrypts data using server-side encryption (SSE). Consider enabling customer-managed keys for additional control.
- Access Policies: Implement Shared Access Signatures (SAS) and Stored Access Policies to restrict access and enforce expiration dates.

5. Monitor and Alert for Anomalies

Stay proactive by leveraging Azure’s monitoring capabilities:

- Use Azure Monitor and Log Analytics to track storage performance and usage patterns.
- Set up alerts for unusual activities, such as sudden spikes in access or deletions, to detect potential issues early.

6. Plan for Disaster Recovery

Ensure your data remains accessible even during critical failures:

- Create snapshots of critical blobs for point-in-time recovery.
- Enable backup for blobs and turn on the immutability feature.
- Test your recovery process regularly to ensure it meets your operational requirements.

7. Apply Resource Locks

Adding Azure locks to your Blob Storage account provides an additional layer of protection by preventing accidental deletion or modification of critical resources (see the sketch below).

8. Educate and Train Your Team

Operational resilience often hinges on user awareness:

- Conduct regular training sessions on Blob Storage best practices.
- Document and share a clear data recovery and management protocol with all stakeholders.
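To make items 1 and 7 concrete, here is a minimal sketch, assuming the Az.Storage and Az.Resources modules are installed; the account and group names are placeholders, and the 14-day retention window is an arbitrary example value.

```powershell
# Minimal sketch: enable blob soft delete and lock the storage account against deletion.
# Assumes Az.Storage/Az.Resources are installed and you are signed in; names are placeholders.
$rg      = 'my-resource-group'    # placeholder
$account = 'mystorageaccount'     # placeholder

# Item 1: enable soft delete for blobs with a 14-day retention window (example value).
Enable-AzStorageBlobDeleteRetentionPolicy -ResourceGroupName $rg `
    -StorageAccountName $account -RetentionDays 14

# Item 7: add a CanNotDelete lock so the account cannot be removed accidentally.
New-AzResourceLock -LockName 'protect-blob-account' -LockLevel CanNotDelete `
    -ResourceGroupName $rg -ResourceName $account `
    -ResourceType 'Microsoft.Storage/storageAccounts' -Force
```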
"Critical Tip: Do Not Create New Containers with Deleted Names During Recovery" If a container or blob storage is deleted for any reason and recovery is being attempted, it’s crucial not to create a new container with the same name immediately. Doing so can significantly hinder the recovery process by overwriting backend pointers, which are essential for restoring the deleted data. Always ensure that no new containers are created using the same name during the recovery attempt to maximize the chances of successful restoration. Wrapping It Up Azure Blob Storage offers an exceptional platform for scalable and secure storage, but its resiliency depends on following best practices. By enabling features like soft delete, implementing redundancy, securing data, and proactively monitoring your storage environment, you can ensure that your data is resilient to failures and recoverable in any scenario. Protect your Azure resources with a lock - Azure Resource Manager | Microsoft Learn Data redundancy - Azure Storage | Microsoft Learn Overview of Azure Blobs backup - Azure Backup | Microsoft Learn Protect your Azure resources with a lock - Azure Resource Manager | Microsoft Learn1.1KViews1like1CommentWindows 11, version 25H2 security baseline
Windows 11, version 25H2 security baseline

Microsoft is pleased to announce the security baseline package for Windows 11, version 25H2! You can download the baseline package from the Microsoft Security Compliance Toolkit, test the recommended configurations in your environment, and customize / implement them as appropriate.

Summary of changes

This release includes several changes made since the Windows 11, version 24H2 security baseline to further assist in the security of enterprise customers, including better alignment with the latest capabilities and standards. The changes are summarized in the table below.

| Security Policy | Change Summary |
| --- | --- |
| Printer: Impersonate a client after authentication | Add “RESTRICTED SERVICES\PrintSpoolerService” to allow the Print Spooler’s restricted service identity to impersonate clients securely |
| NTLM Auditing Enhancements | Enabled by default to improve visibility into NTLM usage within your environment |
| MDAV: Attack Surface Reduction (ASR) | Add "Block process creations originating from PSExec and WMI commands" (d1e49aac-8f56-4280-b9ba-993a6d77406c) with a recommended value of 2 (Audit) to improve visibility into suspicious activity |
| MDAV: Control whether exclusions are visible to local users | Move to Not Configured as it is overridden by the parent setting |
| MDAV: Scan packed executables | Remove from the baseline because the setting is no longer functional - Windows always scans packed executables by default |
| Network: Configure NetBIOS settings | Disable NetBIOS name resolution on all network adapters to reduce legacy protocol exposure |
| Disable Internet Explorer 11 Launch Via COM Automation | Disable to prevent legacy scripts and applications from programmatically launching Internet Explorer 11 using COM automation interfaces |
| Include command line in process creation events | Enable to improve visibility into how processes are executed across the system |
| WDigest Authentication | Remove from the baseline because the setting is obsolete - WDigest is disabled by default and no longer needed in modern Windows environments |

Printer

Improving Print Security with IPPS and Certificate Validation

To enhance the security of network printing, Windows introduces two new policies focused on controlling the use of IPP (Internet Printing Protocol) printers and enforcing encrypted communications. The setting "Require IPPS for IPP printers" (Administrative Templates\Printers) determines whether printers that do not support TLS are allowed to be installed. When this policy is disabled (default), both IPP and IPPS transport printers can be installed, although IPPS is preferred when both are available. When enabled, only IPPS printers will be installed; attempts to install non-compliant printers will fail and generate an event in the Application log, indicating that installation was blocked by policy. The second policy, "Set TLS/SSL security policy for IPP printers" (same policy path), requires that printers present valid and trusted TLS/SSL certificates before connections can be established. Enabling this policy defends against spoofed or unauthorized printers, reducing the risk of credential theft or redirection of sensitive print jobs. While these policies significantly improve security posture, enabling them may introduce operational challenges in environments where IPP and self-signed or locally issued certificates are still commonly used. For this reason, neither policy is enforced in the security baseline at this time.
We recommend that you assess your printers and, if they meet the requirements, consider enabling those policies with a remediation plan to address any non-compliant printers in a controlled and predictable manner.

User Rights Assignment Update: Impersonate a client after authentication

We have added RESTRICTED SERVICES\PrintSpoolerService to the “Impersonate a client after authentication” User Rights Assignment policy. The baseline already includes Administrators, SERVICE, LOCAL SERVICE, and NETWORK SERVICE for this user right. Adding the restricted Print Spooler supports Microsoft’s ongoing effort to apply least privilege to system services. It enables Print Spooler to securely impersonate user tokens in modern print scenarios using a scoped, restricted service identity. Although this identity is associated with functionality introduced as part of Windows Protected Print (WPP), it is required to support proper print operations even if WPP is not currently enabled. The system manifests the identity by default, and its presence ensures forward compatibility with WPP-based printing.

Note: This account may appear as a raw SID (e.g., S-1-5-99-...) in Group Policy or local policy tools before the service is fully initialized. This is expected and does not indicate a misconfiguration.

Warning: Removing this entry will result in print failures in environments where WPP is enabled. We recommend retaining this entry in any custom security configuration that defines this user right.

NTLM Auditing Enhancements

Windows 11, version 25H2 includes enhanced NTLM auditing capabilities, enabled by default, which significantly improve visibility into NTLM usage within your environment. These enhancements provide detailed audit logs to help security teams monitor and investigate authentication activity, identify insecure practices, and prepare for future NTLM restrictions. Since these auditing improvements are enabled by default, no additional configuration is required, and thus the baseline does not explicitly enforce them. For more details, see Overview of NTLM auditing enhancements in Windows 11 and Windows Server 2025.

Microsoft Defender Antivirus

Attack Surface Reduction (ASR)

In this release, we've updated the Attack Surface Reduction (ASR) rules to add the policy Block process creations originating from PSExec and WMI commands (d1e49aac-8f56-4280-b9ba-993a6d77406c) with a recommended value of 2 (Audit). By auditing this rule, you can gain essential visibility into potential privilege escalation attempts via tools such as PSExec or persistence mechanisms using WMI (a PowerShell sketch for applying this rule in audit mode follows later in this section). This enhancement helps organizations proactively identify suspicious activities without impacting legitimate administrative workflows.

Control whether exclusions are visible to local users

We have removed the configuration for the policy "Control whether exclusions are visible to local users" (Windows Components\Microsoft Defender Antivirus) from the baseline in this release. This change was made because the parent policy "Control whether or not exclusions are visible to Local Admins" is already set to Enabled, which takes precedence and effectively overrides the behavior of the former setting. As a result, explicitly configuring the child policy is unnecessary. You can continue to manage exclusion visibility through the parent policy, which provides the intended control over whether local administrators can view exclusion lists.
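Returning to the ASR change above: on machines managed outside of Group Policy or Intune, the same rule can be placed in audit mode with the built-in Microsoft Defender cmdlets. This is a minimal sketch; centrally managed preference settings take precedence over anything set locally.

```powershell
# Minimal sketch: put the "Block process creations originating from PSExec and WMI
# commands" ASR rule into audit mode. Requires an elevated session on a machine
# with Microsoft Defender Antivirus active; GPO/Intune-managed settings win out.
$ruleId = 'd1e49aac-8f56-4280-b9ba-993a6d77406c'

Add-MpPreference -AttackSurfaceReductionRules_Ids $ruleId `
                 -AttackSurfaceReductionRules_Actions AuditMode

# Verify the rule and its configured action (1 = Block, 2 = Audit, 6 = Warn).
$prefs = Get-MpPreference
$index = [array]::IndexOf($prefs.AttackSurfaceReductionRules_Ids, $ruleId)
$prefs.AttackSurfaceReductionRules_Actions[$index]
```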
Scan packed executables

The “Scan packed executables” setting (Windows Components\Microsoft Defender Antivirus\Scan) has been removed from the security baseline because it is no longer functional in modern Windows releases. Microsoft Defender Antivirus always scans packed executables by default; therefore, configuring this policy has no effect on the system.

Disable NetBIOS Name Resolution on All Networks

In this release, we begin disabling NetBIOS name resolution on all network adapters in the security baseline, including those connected to private and domain networks. The change is reflected in the policy setting “Configure NetBIOS settings” (Network\DNS Client). The goal is to eliminate a legacy name resolution protocol that is vulnerable to spoofing and credential theft. NetBIOS is no longer needed in modern environments where DNS is fully deployed and supported. To mitigate potential compatibility issues, ensure that all internal systems and applications use DNS for name resolution. We recommend the following: test critical workflows in a staging environment prior to deployment, monitor for any resolution failures or fallback behavior, and inform support staff of the change to assist with troubleshooting as needed. This update aligns with our broader efforts to phase out legacy protocols and improve security.

Disable Internet Explorer 11 Launch Via COM Automation

To enhance the security posture of enterprise environments, we recommend disabling Internet Explorer 11 Launch Via COM Automation (Windows Components\Internet Explorer) to prevent legacy scripts and applications from programmatically launching Internet Explorer 11 using COM automation interfaces such as CreateObject("InternetExplorer.Application"). Allowing such behavior poses a significant risk by exposing systems to the legacy MSHTML and ActiveX components, which are vulnerable to exploitation.

Include command line in process creation events

We have enabled the setting "Include command line in process creation events" (System\Audit Process Creation) in the baseline to improve visibility into how processes are executed across the system. Capturing command-line arguments allows defenders to detect and investigate malicious activity that may otherwise appear legitimate, such as abuse of scripting engines, credential theft tools, or obfuscated payloads using native binaries. This setting supports modern threat detection techniques with minimal performance overhead and is highly recommended.

WDigest Authentication

We removed the policy "WDigest Authentication (disabling may require KB2871997)" from the security baseline because it is no longer necessary for Windows. This policy was originally enforced to prevent WDigest from storing users’ plaintext passwords in memory, which posed a serious credential theft risk. However, starting with the 24H2 update, the engineering teams deprecated this policy. As a result, there is no longer a need to explicitly enforce this setting, and the policy has been removed from the baseline to reflect the current default behavior. Since the setting does not write to the normal policies location in the registry, it will not be cleaned up automatically for any existing deployments (a cleanup sketch follows at the end of this post).

Please let us know your thoughts by commenting on this post or through the Security Baseline Community.
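For existing deployments that want to tidy up the retired WDigest value, a minimal sketch like the following can check for and remove it. This assumes the baseline's setting wrote UseLogonCredential under the well-known WDigest key (which is outside the normal Policies hive, matching the cleanup caveat above); verify the location in your environment and follow your change-control process before removing anything.

```powershell
# Minimal sketch: detect and remove a leftover WDigest baseline value.
# Assumption: the retired setting wrote UseLogonCredential under this key
# (not under the Policies hive, so Group Policy will not clean it up).
$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest'

$value = Get-ItemProperty -Path $key -Name UseLogonCredential -ErrorAction SilentlyContinue
if ($null -ne $value) {
    Write-Host "Found UseLogonCredential = $($value.UseLogonCredential); removing stale entry."
    Remove-ItemProperty -Path $key -Name UseLogonCredential
}
else {
    Write-Host 'No leftover WDigest value found - nothing to clean up.'
}
```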
Let's be honest. As a cloud engineer or DevOps professional managing a large Azure environment, running even a simple resource inventory query can feel like drinking from a firehose. You hit API limits, face slow performance, and struggle to get the complete picture of your estate—all because the data volume is overwhelming. But it doesn't have to be this way! This blog is your practical, hands-on guide to mastering two essential techniques for handling massive data volumes in Azure: using PowerShell and Azure Resource Graph (ARG): Skip Token (for full data retrieval) and Batching (for blazing-fast performance). 📋 TABLE OF CONTENTS 🚀 GETTING STARTED │ ├─ Prerequisites: PowerShell 7+ & Az.ResourceGraph Module └─ Introduction: Why Standard Queries Fail at Scale 📖 CORE CONCEPTS │ ├─ 📑 Skip Token: The Data Completeness Tool │ ├─ What is a Skip Token? │ ├─ The Bookmark Analogy │ ├─ PowerShell Implementation │ └─ 💻 Code Example: Pagination Loop │ └─ ⚡ Batching: The Performance Booster ├─ What is Batching? ├─ Performance Benefits ├─ Batching vs. Pagination ├─ Parallel Processing in PowerShell └─ 💻 Code Example: Concurrent Queries 🔍 DEEP DIVE │ ├─ Skip Token: Generic vs. Azure-Specific └─ Azure Resource Graph (ARG) at Scale ├─ ARG Overview ├─ Why ARG Needs These Techniques └─ 💻 Combined Example: Skip Token + Batching ✅ BEST PRACTICES │ ├─ When to Use Each Technique └─ Quick Reference Guide 📚 RESOURCES └─ Official Documentation & References Prerequisites Component Requirement / Details Command / Reference PowerShell Version The batching examples use ForEach-Object -Parallel, which requires PowerShell 7.0 or later. Check version: $PSVersionTable.PSVersion Install PowerShell 7+: Install PowerShell on Windows, Linux, and macOS Azure PowerShell Module Az.ResourceGraph module must be installed. Install module: Install-Module -Name Az.ResourceGraph -Scope CurrentUser Introduction: Why Standard Queries Don't Work at Scale When you query a service designed for big environments, like Azure Resource Graph, you face two limits: Result Limits (Pagination): APIs won't send you millions of records at once. They cap the result size (often 1,000 items) and stop. Efficiency Limits (Throttling): Sending a huge number of individual requests is slow and can cause the API to temporarily block you (throttling). Skip Token helps you solve the first limit by making sure you retrieve all results. Batching solves the second by grouping your requests to improve performance. Understanding Skip Token: The Continuation Pointer What is a Skip Token? A Skip Token (or continuation token) is a unique string value returned by an Azure API when a query result exceeds the maximum limit for a single response. Think of the Skip Token as a “bookmark” that tells Azure where your last page ended — so you can pick up exactly where you left off in the next API call. Instead of getting cut off after 1,000 records, the API gives you the first 1,000 results plus the Skip Token. You use this token in the next request to get the next page of data. This process is called pagination. Skip Token in Practice with PowerShell To get the complete dataset, you must use a loop that repeatedly calls the API, providing the token each time until the token is no longer returned. PowerShell Example: Using Skip Token to Loop Pages # Define the query $Query = "Resources | project name, type, location" $PageSize = 1000 $AllResults = @() $SkipToken = $null # Initialize the token Write-Host "Starting ARG query..." do { Write-Host "Fetching next page. 
```powershell
# Define the query
$Query = "Resources | project name, type, location"
$PageSize = 1000
$AllResults = @()
$SkipToken = $null # Initialize the token

Write-Host "Starting ARG query..."

do {
    Write-Host "Fetching next page. (Token check: $($SkipToken -ne $null))"

    # 1. Execute the query, passing -SkipToken only once a token exists
    $Params = @{ Query = $Query; First = $PageSize }
    if ($SkipToken) { $Params['SkipToken'] = $SkipToken }
    $ResultPage = Search-AzGraph @Params

    # 2. Add the current page results to the main array
    $AllResults += $ResultPage

    # 3. Get the token for the next page, if it exists
    $SkipToken = $ResultPage.SkipToken

    Write-Host " -> Items in this page: $($ResultPage.Count). Total retrieved: $($AllResults.Count)"
} while ($SkipToken -ne $null) # Loop as long as a Skip Token is returned

Write-Host "Query finished. Total resources found: $($AllResults.Count)"
```

This do-while loop is the reliable way to ensure you retrieve every item in a large result set.

Understanding Batching: Grouping Requests

What is Batching?

Batching means taking several independent requests and combining them into a single API call. Instead of making N separate network requests for N pieces of data, you make one request containing all N sub-requests. Batching is primarily used for performance. It improves efficiency by:

- Reducing Overhead: Fewer separate network connections are needed.
- Lowering Throttling Risk: Fewer overall API calls are made, which helps you stay under rate limits.

| Feature | Batching | Pagination (Skip Token) |
| --- | --- | --- |
| Goal | Improve efficiency/speed. | Retrieve all data completely. |
| Input | Multiple different queries. | Single query, continuing from a marker. |
| Result | One response with results for all grouped queries. | Partial results with a token for the next step. |

Note: While Azure Resource Graph's REST API supports batch requests, the PowerShell Search-AzGraph cmdlet does not expose a -Batch parameter. Instead, we achieve batching by using PowerShell's ForEach-Object -Parallel (PowerShell 7+) to run multiple queries simultaneously.

Batching in Practice with PowerShell

Using parallel processing in PowerShell, you can efficiently execute multiple distinct Kusto queries targeting different scopes (like subscriptions) simultaneously.

| Method | 5 Subscriptions | 20 Subscriptions |
| --- | --- | --- |
| Sequential | ~50 seconds | ~200 seconds |
| Parallel (ThrottleLimit 5) | ~15 seconds | ~45 seconds |

PowerShell Example: Running Multiple Queries in Parallel

```powershell
# Define multiple queries to run together
$BatchQueries = @(
    @{
        Query         = "Resources | where type =~ 'Microsoft.Compute/virtualMachines'"
        Subscriptions = @("SUB_A") # Query 1 Scope
    },
    @{
        Query         = "Resources | where type =~ 'Microsoft.Network/publicIPAddresses'"
        Subscriptions = @("SUB_B", "SUB_C") # Query 2 Scope
    }
)

Write-Host "Executing batch of $($BatchQueries.Count) queries in parallel..."

# Execute queries in parallel (true batching)
$BatchResults = $BatchQueries | ForEach-Object -Parallel {
    $QueryConfig = $_
    $Query = $QueryConfig.Query
    $Subs = $QueryConfig.Subscriptions

    Write-Host "[Batch Worker] Starting query: $($Query.Substring(0, [Math]::Min(50, $Query.Length)))..." -ForegroundColor Cyan

    $QueryResults = @()

    # Process each subscription in this query's scope
    foreach ($SubId in $Subs) {
        $SkipToken = $null
        do {
            $Params = @{
                Query        = $Query
                Subscription = $SubId
                First        = 1000
            }
            if ($SkipToken) { $Params['SkipToken'] = $SkipToken }

            $Result = Search-AzGraph @Params
            if ($Result) { $QueryResults += $Result }
            $SkipToken = $Result.SkipToken
        } while ($SkipToken)
    }

    Write-Host " [Batch Worker] ✅ Query completed - Retrieved $($QueryResults.Count) resources" -ForegroundColor Green

    # Return results with metadata
    [PSCustomObject]@{
        Query         = $Query
        Subscriptions = $Subs
        Data          = $QueryResults
        Count         = $QueryResults.Count
    }
} -ThrottleLimit 5

Write-Host "`nBatch complete. Reviewing results..."

# Note: ForEach-Object -Parallel emits results as workers finish, so output order
# is not guaranteed. Match results by their Query metadata rather than by position.
$VMCount = ($BatchResults | Where-Object { $_.Query -match 'virtualMachines' }).Count
$IPCount = ($BatchResults | Where-Object { $_.Query -match 'publicIPAddresses' }).Count
Write-Host "Query 1 (VMs) returned: $VMCount results."
Write-Host "Query 2 (IPs) returned: $IPCount results."

# Optional: Display detailed results
Write-Host "`n--- Detailed Results ---"
for ($i = 0; $i -lt $BatchResults.Count; $i++) {
    $Result = $BatchResults[$i]
    Write-Host "`nQuery $($i + 1):"
    Write-Host " Query: $($Result.Query)"
    Write-Host " Subscriptions: $($Result.Subscriptions -join ', ')"
    Write-Host " Total Resources: $($Result.Count)"
    if ($Result.Data.Count -gt 0) {
        Write-Host " Sample (first 3):"
        $Result.Data | Select-Object -First 3 | Format-Table -AutoSize
    }
}
```
Azure Resource Graph (ARG) and Scale

Azure Resource Graph (ARG) is a service built for querying resource properties quickly across a large number of Azure subscriptions using the Kusto Query Language (KQL). Because ARG is designed for large scale, it fully supports Skip Token and Batching:

- Skip Token: ARG automatically generates and returns the token when a query exceeds its result limit (e.g., 1,000 records).
- Batching: ARG's REST API provides a batch endpoint for sending up to ten queries in a single request. In PowerShell, we achieve similar performance benefits using ForEach-Object -Parallel to process multiple queries concurrently.

Combined Example: Batching and Skip Token Together

This script shows how to use Batching to start a query across multiple subscriptions and then use Skip Token within the loop to ensure every subscription's data is fully retrieved.

```powershell
$SubscriptionIDs = @("SUB_A")
$KQLQuery = "Resources | project id, name, type, subscriptionId"

Write-Host "Starting BATCHED query across $($SubscriptionIDs.Count) subscription(s)..."
Write-Host "Using parallel processing for true batching...`n"

# Process subscriptions in parallel (batching)
$AllResults = $SubscriptionIDs | ForEach-Object -Parallel {
    $SubId = $_
    $Query = $using:KQLQuery
    $SubResults = @()

    Write-Host "[Batch Worker] Processing Subscription: $SubId" -ForegroundColor Cyan

    $SkipToken = $null
    $PageCount = 0
    do {
        $PageCount++

        # Build parameters
        $Params = @{
            Query        = $Query
            Subscription = $SubId
            First        = 1000
        }
        if ($SkipToken) { $Params['SkipToken'] = $SkipToken }

        # Execute query
        $Result = Search-AzGraph @Params
        if ($Result) {
            $SubResults += $Result
            Write-Host " [Batch Worker] Sub: $SubId - Page $PageCount - Retrieved $($Result.Count) resources" -ForegroundColor Yellow
        }
        $SkipToken = $Result.SkipToken
    } while ($SkipToken)

    Write-Host " [Batch Worker] ✅ Completed $SubId - Total: $($SubResults.Count) resources" -ForegroundColor Green

    # Return results from this subscription
    $SubResults
} -ThrottleLimit 5 # Process up to 5 subscriptions simultaneously

Write-Host "`n--- Batch Processing Finished ---"
Write-Host "Final total resource count: $($AllResults.Count)"

# Optional: Display sample results
if ($AllResults.Count -gt 0) {
    Write-Host "`nFirst 5 resources:"
    $AllResults | Select-Object -First 5 | Format-Table -AutoSize
}
```

| Technique | Use When... | Common Mistake | Actionable Advice |
| --- | --- | --- | --- |
| Skip Token | You must retrieve all data items, expecting more than 1,000 results. | Forgetting to check for the token; you only get partial data. | Always use a do-while loop to guarantee you get the complete set. |
| Batching | You need to run several separate queries (max 10 in ARG) efficiently. | Putting too many queries in the batch, causing the request to fail. | Group up to 10 logical queries or subscriptions into one fast request. |
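The timing figures quoted earlier will vary by tenant. If you want to reproduce them against your own subscriptions, here is a minimal sketch, assuming $SubscriptionIDs and $KQLQuery are already defined as in the combined example above:

```powershell
# Minimal sketch: compare sequential vs. parallel execution time in your own tenant.
# Assumes $SubscriptionIDs and $KQLQuery are defined as in the combined example above.
$sequential = Measure-Command {
    foreach ($SubId in $SubscriptionIDs) {
        Search-AzGraph -Query $KQLQuery -Subscription $SubId -First 1000 | Out-Null
    }
}

$parallel = Measure-Command {
    $SubscriptionIDs | ForEach-Object -Parallel {
        Search-AzGraph -Query $using:KQLQuery -Subscription $_ -First 1000 | Out-Null
    } -ThrottleLimit 5
}

Write-Host ("Sequential: {0:N1}s  Parallel: {1:N1}s" -f $sequential.TotalSeconds, $parallel.TotalSeconds)
```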
By combining Skip Token for data completeness and Batching for efficiency, you can confidently query massive Azure estates without hitting limits or missing data. These two techniques, when used together, turn Azure Resource Graph from a “good tool” into a scalable discovery engine for your entire cloud footprint.

Summary: Skip Token and Batching in Azure Resource Graph

Goal: Efficiently query massive Azure environments using PowerShell and Azure Resource Graph (ARG).

1. Skip Token (The Data Completeness Tool)

| Concept | What it Does | Why it Matters | PowerShell Use |
| --- | --- | --- | --- |
| Skip Token | A marker returned by Azure APIs when results hit the 1,000-item limit. It points to the next page of data. | Ensures you retrieve all records, avoiding incomplete data (pagination). | Use a do-while loop with the -SkipToken parameter in Search-AzGraph until the token is no longer returned. |

2. Batching (The Performance Booster)

| Concept | What it Does | Why it Matters | PowerShell Use |
| --- | --- | --- | --- |
| Batching | Processes multiple independent queries simultaneously using parallel execution. | Drastically improves query speed by reducing overall execution time and helps avoid API throttling. | Use ForEach-Object -Parallel (PowerShell 7+) with -ThrottleLimit to control concurrent queries. For PowerShell 5.1, use Start-Job with background jobs. |

3. Best Practice: Combine Them

For maximum efficiency, combine Batching and Skip Token. Use batching to run queries across multiple subscriptions simultaneously, and use the Skip Token logic within the loop to ensure every single subscription's data is fully paginated and retrieved. Result: fast, complete, and reliable data collection across your large Azure estate.

References:
- Azure Resource Graph documentation
- Search-AzGraph PowerShell reference
Upcoming Changes to Azure Relay IP Addresses and DNS Support

Azure Relay is an integral part of modern hybrid cloud architectures, enabling seamless connectivity between on-premises and cloud resources. To ensure continued reliability and security, Microsoft is implementing important updates to the IP addresses and DNS naming conventions used by Azure Relay services.

What’s Changing?

As detailed in the changes to IP-addresses for Azure Relay and Azure Relay WCF and Hybrid Connections DNS Support reference blogs, customers should be aware of two primary changes:

- IP and Name Transitions: The IP addresses and corresponding DNS names for Azure Relay endpoints will change during the transition period. For example, g0-prod-bn-vaz0001-sb.servicebus.windows.net can change to gv0-prod-bn-vaz0001-sb.servicebus.windows.net.
- DNS Support Enhancements: Improved DNS support will enhance reliability and future-proof connectivity for both WCF Relay and Hybrid Connections users.

Recommended Actions for Customers

To minimize disruption, it is crucial to update your network configurations and firewall rules to accommodate the new IP addresses and DNS names as soon as possible. These can be retrieved using the PowerShell script below.

- Update Allow Lists: Ensure that your firewalls and network security groups permit traffic to the new IP ranges and DNS endpoints as specified in the official documentation.
- Monitor Transition Phases: Be prepared for two rounds of changes. Apply updates promptly during both the initial and final transitions.

Automating Namespace Information Retrieval

To assist with this transition, Microsoft has updated the PowerShell script for retrieving namespace information, which now reflects the planned changes. You can access the latest script here: GetNamespaceInfo.ps1 (azure-relay-dotnet/tools). (Instructions on how to use the .ps1 script are available in the README.) This script allows you to efficiently check the current configuration of your Azure Relay namespaces and validate connectivity against the updated endpoints.

Sample output

```
PS D:\AzureVMSSEssentials\Tools\GetNamespaceInfoWithIpRanges> .\GetNamespaceInfo.ps1 <your-relay-namespace>.servicebus.windows.net

Namespace        : <your-relay-namespace>.servicebus.windows.net
Deployment       : PROD-BN-VAZ0001
ClusterDNS       : ns-prod-bn-vaz0001.eastus2.cloudapp.azure.com
ClusterRegion    : eastus2
ClusterVIP       : 40.84.75.3
GatewayDnsFormat : g{0}-bn-vaz0001-sb.servicebus.windows.net or gv{0}-bn-vaz0001-sb.servicebus.windows.net
Notes            : Entries with 'FUTURE' IPAddress may be added at a later time as needed

Current IP Ranges

Name                                      IPAddress
----                                      ---------
g0-bn-vaz0001-sb.servicebus.windows.net   20.36.144.8
g1-bn-vaz0001-sb.servicebus.windows.net   20.36.144.1
g2-bn-vaz0001-sb.servicebus.windows.net   20.36.144.2
g3-bn-vaz0001-sb.servicebus.windows.net   20.36.144.11
g4-bn-vaz0001-sb.servicebus.windows.net   20.36.144.3
g5-bn-vaz0001-sb.servicebus.windows.net   FUTURE
g6-bn-vaz0001-sb.servicebus.windows.net   FUTURE
...
g98-bn-vaz0001-sb.servicebus.windows.net  FUTURE
g99-bn-vaz0001-sb.servicebus.windows.net  FUTURE

Future IP Ranges for Region:eastus2

addressPrefixes
---------------
135.18.130.0/23
135.18.132.0/26
135.18.132.64/27
```
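As a quick spot check after updating allow lists, you can resolve a sample of the old- and new-style gateway names and confirm the returned addresses are permitted by your firewall. This is a minimal sketch, assuming the Windows DnsClient module's Resolve-DnsName cmdlet is available; the hostnames are the examples from this post, so substitute the names reported for your own namespace.

```powershell
# Minimal sketch: resolve old- and new-style Azure Relay gateway names and list
# the IPv4 addresses your allow lists must cover. The hostnames below are the
# examples from this post; substitute the names reported for your own namespace.
$gatewayNames = @(
    'g0-prod-bn-vaz0001-sb.servicebus.windows.net',   # existing naming pattern
    'gv0-prod-bn-vaz0001-sb.servicebus.windows.net'   # new naming pattern
)

foreach ($name in $gatewayNames) {
    try {
        $ips = (Resolve-DnsName -Name $name -Type A -ErrorAction Stop).IPAddress
        Write-Host "$name -> $($ips -join ', ')"
    }
    catch {
        Write-Warning "Could not resolve $name : $($_.Exception.Message)"
    }
}
```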