Introducing the Azure Maps Geocode Autocomplete API
We’re thrilled to unveil the public preview of the Azure Maps Geocode Autocomplete API, a powerful REST service designed to modernize and elevate autocomplete capabilities across Microsoft’s mapping platforms. If you’ve ever started typing an address into a search bar and immediately seen a list of relevant suggestions—whether for a landmark or your own home—you’ve already experienced the convenience of autocomplete. What’s less obvious is just how complex it is to deliver those suggestions quickly, accurately, and in a format that modern applications can use. That’s exactly the challenge this new API is designed to solve.

Why Autocomplete Matters More Than Ever

The Azure Maps Geocode Autocomplete API is the natural successor to the Bing Maps Autosuggest REST API, designed to meet the growing demand for intelligent, real-time location suggestions across a wide range of applications. It’s an ideal solution for developers who need reliable and scalable autocomplete functionality—whether for small business websites or large-scale enterprise systems. Key use cases include:

- Store locators: When a customer starts typing “New Yo…” into a store locator, autocomplete instantly suggests “New York, N.Y.” With just a click, the map centers on the right location—making it fast and effortless to find the nearest branch.
- Rideshare or dispatching platforms: A rideshare driver needs to pick up a passenger at “One Microsoft Way.” Instead of typing out the full address, the driver starts entering “One Micro…” and the app instantly offers the correct road segment in Redmond, Washington.
- Delivery services: A delivery app can limit suggestions to postal codes within a specific region, ensuring the addresses customers choose are deliverable and reducing the risk of failed shipments.
- Any web UI requiring location input: From real estate search to form autofill, autocomplete enhances the user experience wherever accurate location entry is needed.

What the API Can Do

The Geocode Autocomplete API is designed to deliver fast, relevant, and structured suggestions as users type. Key capabilities include:

- Entity suggestions: Supports both Place (e.g., administrative districts, populated places, landmarks, postal codes) and Address (e.g., roads, point addresses) entities.
- Ranking: Results can be ranked based on entity popularity, user location (coordinates), and bounding box (bbox).
- Structured output: Returns suggestions with structured address formats, making integration seamless.
- Multilingual support: Set query language preferences via the Accept-Language parameter.
- Flexible filtering: Filter suggestions by specifying a country or region with countryRegion, or target a specific entity subtype with resultType. This allows you to extract entities with precise categorization—for example, you can filter results to return only postal codes to match the needs of a location-based selection input in your web application.

How It Works

The Geocode Autocomplete API is accessed via the following endpoint:

https://atlas.microsoft.com/geocode:autocomplete?api-version=2025-06-01-preview

This endpoint provides autocomplete-style suggestions for addresses and places. With just a few parameters—your Azure Maps subscription key, a query string, and optionally user coordinates or a bounding box—you can start returning structured suggestions instantly.
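Before walking through the response shapes, here is a minimal PowerShell sketch of calling the endpoint above with Invoke-RestMethod. It is illustrative only: the variable names are placeholders, the "lon,lat" coordinate format is an assumption to verify against the REST reference, and the query parameters simply mirror the examples that follow.

# Minimal sketch: call the Geocode Autocomplete preview endpoint.
# $Query, $TopN, and the coordinate format are illustrative assumptions;
# confirm parameter details against the REST API documentation.
$SubscriptionKey = "{YourAzureMapsKey}"
$Query           = "new yo"
$TopN            = 3
$Coordinates     = "-73.98,40.76"   # assumed "lon,lat" ordering; verify in the docs

$Uri = "https://atlas.microsoft.com/geocode:autocomplete" +
       "?api-version=2025-06-01-preview" +
       "&subscription-key=$SubscriptionKey" +
       "&query=$([uri]::EscapeDataString($Query))" +
       "&coordinates=$Coordinates" +
       "&top=$TopN"

$Response = Invoke-RestMethod -Uri $Uri -Method Get

# Each feature carries the structured address shown in the sample responses below.
$Response.features | ForEach-Object { $_.properties.address.formattedAddress }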
Developers can then pass the selected entity to the geocode service as a query to locate it on the map—a common pattern for building interactive mapping experiences (a sketch of this flow appears at the end of this post). Let’s look at the examples below:

Example 1: Place Entity Autocomplete

GET https://atlas.microsoft.com/geocode:autocomplete?api-version=2025-06-01-preview
    &subscription-key={YourAzureMapsKey}
    &coordinates={coordinates}
    &query=new yo
    &top=3

A user starts typing “new yo.” The API quickly returns results like “New York City” and “New York State,” each complete with structured metadata you can plug directly into your app.

{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "properties": {
        "typeGroup": "Place",
        "type": "PopulatedPlace",
        "geometry": null,
        "address": {
          "locality": "New York",
          "adminDistricts": [
            { "name": "New York", "shortName": "N.Y." }
          ],
          "countryRegions": { "ISO": "US", "name": "United States" },
          "formattedAddress": "New York, N.Y."
        }
      }
    },
    {
      "type": "Feature",
      "properties": {
        "typeGroup": "Place",
        "type": "AdminDivision1",
        "geometry": null,
        "address": {
          "locality": "",
          "adminDistricts": [
            { "name": "New York", "shortName": "N.Y." }
          ],
          "countryRegions": { "ISO": "US", "name": "United States" },
          "formattedAddress": "New York"
        }
      }
    },
    {
      "type": "Feature",
      "properties": {
        "typeGroup": "Place",
        "type": "AdminDivision2",
        "geometry": null,
        "address": {
          "locality": "",
          "adminDistricts": [
            { "name": "New York", "shortName": "N.Y." },
            { "name": "New York County" }
          ],
          "countryRegions": { "ISO": "US", "name": "United States" },
          "formattedAddress": "New York County"
        }
      }
    }
  ]
}

Example 2: Address Entity Autocomplete

GET https://atlas.microsoft.com/geocode:autocomplete?api-version=2025-06-01-preview
    &subscription-key={YourAzureMapsKey}
    &bbox={bbox}
    &query=One Micro
    &top=3
    &countryRegion=US

A query for “One Micro” scoped to the U.S. yields “NE One Microsoft Way, Redmond, WA 98052, United States.” That’s a complete, structured address ready to be mapped, dispatched, or stored.

{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "properties": {
        "typeGroup": "Address",
        "type": "RoadBlock",
        "geometry": null,
        "address": {
          "locality": "Redmond",
          "adminDistricts": [
            { "name": "Washington", "shortName": "WA" },
            { "name": "King County" }
          ],
          "countryRegions": { "ISO": "US", "name": "United States" },
          "postalCode": "98052",
          "streetName": "NE One Microsoft Way",
          "addressLine": "",
          "formattedAddress": "NE One Microsoft Way, Redmond, WA 98052, United States"
        }
      }
    }
  ]
}

Example 3: Integration with a Web Application

The sample below shows a user entering a query while the autocomplete service provides a series of suggestions based on the user’s query and location.

Pricing and Billing

The Geocode Autocomplete API uses the same metering model as the Azure Maps Search service. For billing purposes, every 10 Geocode Autocomplete API requests are counted as one billable transaction—so, for example, 1,000 autocomplete requests are billed as 100 transactions. This approach keeps usage and costs consistent with what developers are already familiar with in Azure Maps.

Ready to Build Smarter Location Experiences?

Whether you're powering a store locator, enhancing address entry, or building a dynamic dispatch system, the new Geocode Autocomplete API gives you the precision, flexibility, and performance needed to deliver seamless location intelligence. With real-world use cases already proving its value, now is the perfect time to integrate this service into your applications and unlock richer, more interactive mapping experiences. Let’s build what’s next—faster, smarter, and more intuitive.
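Tying the examples together: the flow described at the top of this section—feeding a selected suggestion back into the geocode service—might look like the sketch below in PowerShell. The geocode endpoint and api-version shown are assumptions based on the GA Azure Maps geocoding API; verify them in the Azure Maps documentation before relying on this.

# Hedged sketch: resolve the formattedAddress a user picked from the
# autocomplete results to coordinates, so the map can center on it.
# The /geocode route and api-version are assumptions -- check the docs.
$SubscriptionKey = "{YourAzureMapsKey}"
$SelectedAddress = "NE One Microsoft Way, Redmond, WA 98052, United States"

$GeocodeUri = "https://atlas.microsoft.com/geocode" +
              "?api-version=2023-06-01" +
              "&subscription-key=$SubscriptionKey" +
              "&query=$([uri]::EscapeDataString($SelectedAddress))" +
              "&top=1"

$Geo = Invoke-RestMethod -Uri $GeocodeUri -Method Get

# GeoJSON Point coordinates are [longitude, latitude].
$Geo.features[0].geometry.coordinates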
Resources to Get Started

- Geocode Autocomplete REST API Documentation
- Geocode Autocomplete Samples
- Migrate from Bing Maps to Azure Maps
- How to use Azure Maps APIs

Secure Unique Default Hostnames Now GA for Functions and Logic Apps
We are pleased to announce that Secure Unique Default Hostnames are now generally available (GA) for Azure Functions and Logic Apps (Standard). This expands the security model previously available for Web Apps to the entire App Service ecosystem and provides customers with stronger, more secure, and standardized hostname behavior across all workloads.

Why This Feature Matters

Historically, App Service resources have used a default hostname format such as:

<SiteName>.azurewebsites.net

While straightforward, this pattern introduced potential security risks, particularly in scenarios where DNS records were left behind after deleting a resource. In those situations, a different user could create a new resource with the same name and unintentionally receive traffic or bindings associated with the old DNS configuration, creating opportunities for issues such as subdomain takeover.

Secure Unique Default Hostnames address this by assigning a unique, randomized, region‑scoped hostname to each resource, for example:

<SiteName>-<Hash>.<Region>.azurewebsites.net

This change ensures that:

- No other customer can recreate the same default hostname.
- Apps inherently avoid risks associated with dangling DNS entries.
- Customers gain a more secure, reliable baseline behavior across App Service.

Adopting this model now helps organizations prepare for the long‑term direction of the platform while improving security posture today.

What’s New: GA Support for Functions and Logic Apps

With this release, both Azure Functions and Logic Apps (Standard) fully support the Secure Unique Default Hostname capability. This brings these services in line with Web Apps and ensures customers across all App Service workloads benefit from the same secure and consistent default hostname model.

Azure CLI Support

The Azure CLI for Web Apps and Function Apps now includes support for the --domain-name-scope parameter. This allows customers to explicitly specify the scope used when generating a unique default hostname during resource creation (see the end-to-end sketch at the end of this post). Examples:

az webapp create --domain-name-scope {NoReuse, ResourceGroupReuse, SubscriptionReuse, TenantReuse}
az functionapp create --domain-name-scope {NoReuse, ResourceGroupReuse, SubscriptionReuse, TenantReuse}

Including this parameter ensures that deployments consistently use the intended hostname scope and helps teams prepare their automation and provisioning workflows for the secure unique default hostname model.

Why Customers Should Adopt This Now

While existing resources will continue to function normally, customers are strongly encouraged to adopt Secure Unique Default Hostnames for all new deployments. Early adoption provides several important benefits:

- A significantly stronger security posture.
- Protection against dangling DNS and subdomain takeover scenarios.
- Consistent default hostname behavior as the platform evolves.
- Alignment with recommended deployment practices and modern DNS hygiene.

This feature represents the current best practice for hostname management on App Service, and adopting it now helps ensure that new deployments follow the most secure and consistent model available.

Recommended Next Steps

- Enable Secure Unique Default Hostnames for all new Web Apps, Function Apps, and Logic Apps.
- Update automation and CLI scripts to include the --domain-name-scope parameter when creating new resources.
- Begin updating internal documentation and operational processes to reflect the new hostname pattern.
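To make the CLI guidance concrete, here is a minimal, hypothetical end-to-end example run from PowerShell. Every resource name, the region, and the runtime are placeholders; only the --domain-name-scope parameter and its values come from this post.

# Hypothetical names throughout; --domain-name-scope is the parameter this
# post introduces. TenantReuse scopes hostname uniqueness to the tenant.
az group create --name rg-hostname-demo --location eastus

az storage account create --name sthostnamedemo001 --resource-group rg-hostname-demo

az functionapp create `
    --name func-hostname-demo `
    --resource-group rg-hostname-demo `
    --storage-account sthostnamedemo001 `
    --consumption-plan-location eastus `
    --runtime dotnet-isolated `
    --domain-name-scope TenantReuse

If the feature behaves as described above, the resulting app receives a default hostname following the <SiteName>-<Hash>.<Region>.azurewebsites.net pattern rather than the legacy format.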
Additional Resources

For readers who want to explore the technical background and earlier announcements in more detail, the following articles offer deeper coverage of unique default hostnames:

- Public Preview: Creating Web App with a Unique Default Hostname — introduces the foundational concepts behind unique default hostnames. It explains why the feature was created, how the hostname format works, and provides examples and guidance for enabling the feature when creating new resources.
- Secure Unique Default Hostnames: GA on App Service Web Apps and Public Preview on Functions — provides the initial GA announcement for Web Apps and early preview details for Functions. It covers the security benefits, recommended usage patterns, and guidance on how to handle existing resources that were created without unique default hostnames.

Conclusion

Secure Unique Default Hostnames now provide a more secure and consistent default hostname model across Web Apps, Function Apps, and Logic Apps. This enhancement reduces DNS‑related risks and strengthens application security, and organizations are encouraged to adopt this feature as the standard for new deployments.

Announcing new hybrid deployment options for Azure Virtual Desktop
Today, we’re excited to announce the limited preview of Azure Virtual Desktop for hybrid environments, a new platform for bringing the power of cloud-native desktop virtualization to on-premises infrastructure.

Announcing Azure CycleCloud Workspace for Slurm: Version 2025.12.01 Release
The Azure CycleCloud Workspace for Slurm 2025.12.01 release introduces major upgrades that strengthen performance, monitoring, authentication, and platform flexibility for HPC environments. This update integrates Prometheus self‑agent monitoring and Azure Managed Grafana, giving teams real‑time visibility into node metrics, Slurm jobs, and cluster health through ready‑to‑use dashboards.

The release also adds Entra ID Single Sign‑On (SSO) to streamline secure access across CycleCloud and Open OnDemand. With centralized identity management and support for MFA, organizations can simplify user onboarding while improving security. Additionally, the update expands platform support with ARM64 compute nodes and compatibility for Ubuntu 24.04 and AlmaLinux 9, enabling more flexible and efficient HPC cluster deployments. Overall, this version focuses on improved observability, stronger security, and broader infrastructure options for technical and scientific HPC teams.

Black Forest Labs FLUX.2 Visual Intelligence for Enterprise Creative now on Microsoft Foundry
Black Forest Labs’ (BFL) FLUX.2 is now available on Microsoft Foundry. Building on FLUX1.1 [pro] and FLUX.1 Kontext [pro], we’re excited to introduce FLUX.2 [pro], which continues to push the frontier of visual intelligence. FLUX.2 [pro] delivers state-of-the-art quality with pre-optimized settings, matching the best closed models for prompt adherence and visual fidelity while generating faster at lower cost.

Prompt: "Cinematic film still of a woman walking alone through a narrow Madrid street at night, warm street lamps, cool blue shadows, light rain reflecting on cobblestones, moody and atmospheric, shallow depth of field, natural skin texture, subtle film grain and introspective mood"

This prompt shines because it taps into FLUX.2 [pro]'s cinematic‑lighting engine, letting the model fuse warm street‑lamp glow and cool shadows into a visually striking, film‑grade composition.

What’s game-changing about FLUX.2 [pro]?

FLUX.2 is designed for real-world creative workflows where consistency, accuracy, and iteration speed determine whether AI generation can replace traditional production pipelines. The model understands lighting, perspective, materials, and spatial relationships. It keeps characters and products consistent across up to 10 reference images simultaneously. It adheres to brand constraints like exact hex colors and legible text. The result: production-ready assets with fewer touchups and stronger brand fidelity.

What’s New

- Production‑grade quality up to 4MP: High‑fidelity, coherent scenes with realistic lighting, spatial logic, and fine detail suitable for product photography and commercial use cases.
- Multi‑reference consistency: Reference up to 10 images simultaneously with the best character, product, and style consistency available today. Generate dozens of brand-compliant assets where identity stays perfectly aligned shot to shot.
- Brand‑accurate results: Exact hex‑color matching, reliable typography, and structured controls (JSON, pose guidance) mean fewer manual fixes and stronger brand compliance.
- Strong prompt fidelity for complex directions: Improved adherence to complex, structured instructions including multi-part prompts, compositional constraints, and JSON-based controls. A 32K token context supports long, detailed workflows with exact positioning specifications, physics-aware lighting, and precise compositional requirements in a single prompt.
- Optimized inference: FLUX.2 [pro] delivers state-of-the-art quality with pre-optimized inference settings, generating faster at lower cost than competing closed models.

FLUX.2 transforms creative production economics by enabling workflows that weren't possible with earlier systems. Teams ship complete campaigns in days instead of weeks, with fewer manual touchups and stronger brand fidelity at scale. This performance stems from FLUX.2's unified architecture, which combines generation and editing in a single latent flow matching model.

How it Works

FLUX.2 combines image generation and editing in a single latent flow matching architecture, coupling a Mistral‑3 24B vision‑language model (VLM) with a rectified flow transformer. The VLM brings real‑world knowledge and contextual understanding, while the flow transformer models spatial relationships, material properties, and compositional logic that earlier architectures struggled to render.
FLUX.2’s architecture unifies visual generation and editing, fuses language‑grounded understanding with flow‑based spatial modeling, and delivers production‑ready, brand‑safe images with predictable control—especially when you need consistent identity, exact colors, and legible typography at high resolution. Technical details can be found in the FLUX.2 VAE blog post.

Top enterprise scenarios & patterns to try with FLUX.2 [pro]

The addition of FLUX.2 [pro] is the next step in the evolution toward faster, richer, and more controllable generation, unlocking a new wave of creative potential for enterprises. Bring FLUX.2 [pro] into your workflow and transform your creative pipeline from concept to production by trying out these patterns:

- E‑commerce hero shots: Start with a small set of references (product front, material/texture, logo). Prompt for a studio hero shot on a white seamless background, three‑quarter view, softbox key + subtle rim light. Include exact hex for brand accents and specify logo placement. Output at 4MP.
- Product variants at scale: Reuse the hero references; ask for specific colorway, angle, and background variants (e.g., “Create {COLOR} variant, {ANGLE} view, {BG} background”). Keep brand hex and logo position constant across variants.
- Campaign consistency (character/product identity): Provide 5–10 reference images for the character/product (faces, outfits, mood boards). Request the same identity across scenes with consistent lighting/style (e.g., cinematic warm daylight) and defined environments (e.g., urban rooftop).
- Marketing templates & localization: Define a template (e.g., 3‑column grid: left image, right text). Set headline/body sizes (e.g., 24pt/14pt), contrast ≥ 4.5:1, and brand font. Swap localized copy per locale while keeping layout and spacing consistent.

Best practices to get to production readiness with Microsoft Foundry

FLUX.2 [pro] brings state-of-the-art image quality to your fingertips. In Microsoft Foundry, you can turn those capabilities into predictable, governed outcomes by standardizing templates, managing references, enforcing brand rules, and controlling spend. The practices below take FLUX.2 [pro]’s visual intelligence and turn it into repeatable recipes, auditable artifacts, and cost‑controlled processes within a governed Foundry pipeline.

- Approved templates — What to do: Create 3–5 templates (e.g., hero shot, variant gallery, packaging, social card) with sections for Composition (camera, lighting, environment), Brand (hex colors, logo placement), Typography (font, sizes, contrast), and Output (resolution, format). Foundry tip: Store templates in Foundry as approved artifacts; version them and restrict edits via RBAC.
- Versioned reference sets — What to do: Keep 3–10 references per subject (product: front/side/texture; talent: face/outfit/mood) and link them to templates. Foundry tip: Save references in governed Foundry storage; reference IDs travel with the job metadata.
- Resolution staging — What to do: Use a three‑stage plan: Concept (1–2MP) → Review (2–3MP) → Final (4MP). Leverage FLUX1.1 [pro] and FLUX.1 Kontext [pro] before the Final stage for fast iteration and cost control. Foundry tip: Enforce stage‑based quotas and cap max resolution per job; require approval to move to 4MP.
- Automated QA & approvals — What to do: Run post‑generation checks for color match, text legibility, and safe‑area compliance; gate final renders behind a review step. Foundry tip: Use Foundry workflows to require sign‑off at the Review stage before the Final stage.
- Telemetry & feedback — What to do: Track latency, success rate, usage, and cost per render; collect reviewer notes and refine templates. Foundry tip: Use dashboards in Foundry to monitor job health, cost, and template performance.

Foundry Models continues to grow with cutting-edge additions to meet every enterprise need—including models from Black Forest Labs, OpenAI, and more. With models like GPT‑image‑1, FLUX.2 [pro], and Sora 2, Microsoft Foundry has become the place where creators push the boundaries of what’s possible. Watch how Foundry transforms creative workflows with this demo.

Customer Stories

As seen at Ignite 2025, real‑world customers like Sinyi Realty have already demonstrated the efficiency of Black Forest Labs’ models on Microsoft Foundry by choosing FLUX.1 Kontext [pro] for its superior performance and selective editing. For their new 'Clear All' feature, they preferred a model that preserves the original room structure and simply removes clutter, rather than generating a new space from scratch, saving time and money. Read the story to learn more.

“We wanted to stay in the same workspace rather than having to maintain different platforms,” explains TeWei Hsieh, who works in data engineering and data architecture. “By keeping FLUX Kontext model in Foundry, our data scientists and data engineers can work in the same environment.”

As customers like Sinyi Realty have already shown, BFL FLUX models raise the bar for speed, precision, and operational efficiency. With FLUX.2 now on Microsoft Foundry, organizations can bring that same competitive edge directly into their own production pipelines.

FLUX.2 [pro] Pricing

Foundry Models are fully hosted and managed on Azure. FLUX.2 [pro] is available through pay-as-you-go on the Global Standard deployment type with the following pricing:

- Generated image: The first generated megapixel (MP) is charged at $0.03. Each subsequent megapixel is charged at $0.015.
- Reference image(s): $0.015 per megapixel.

Important notes:

- For pricing, resolution is always rounded up to the next megapixel, separately for each reference image and for the generated image.
- 1 megapixel is counted as 1024x1024 pixels.
- For multiple reference images, each reference image is counted as 1 megapixel.
- Images exceeding 4 megapixels are resized to 4 megapixels.

Under these rules, for example, a single 4 MP generated image with two reference images costs $0.03 + (3 × $0.015) + (2 × $0.015) = $0.105. See the Foundry Models pricing page for details.

Build Trustworthy AI Solutions

Black Forest Labs models in Foundry Models are delivered under the Microsoft Product Terms, giving you enterprise-grade security and compliance out of the box. Each FLUX endpoint offers Content Safety controls and guardrails. Runtime protections include built-in content-safety filters, role-based access control, virtual-network isolation, and automatic Azure Monitor logging. Governance signals stream directly into Azure Policy, Purview, and Microsoft Sentinel, giving security and compliance teams real-time visibility. Together, Microsoft's capabilities let you create with more confidence, knowing that privacy, security, and safety are woven into every Black Forest Labs deployment from day one.

Getting Started with FLUX.2 in Microsoft Foundry

1. If you don’t have an Azure subscription, you can sign up for an Azure account here.
2. Search for the model name, FLUX.2-pro, in the model catalog in Foundry under “Build.”
3. Open the model card in the model catalog.
4. Click Deploy to obtain the inference API and key.
5. View your deployment under Build > Models. You should land on the deployment page that shows you the API and key in less than a minute.
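With the endpoint and key in hand, the heavily hedged PowerShell sketch below shows what a generation request might look like. Everything about the request shape—the resource URL, route, api-version, and payload fields—is an assumption modeled on OpenAI-compatible image APIs; copy the exact sample from your deployment page or the model card rather than this sketch.

# Heavily hedged sketch: invoking a FLUX.2 [pro] deployment. The route,
# api-version, and payload fields are ASSUMPTIONS -- use the request sample
# shown on your own deployment page instead.
$Endpoint   = "https://<your-resource>.services.ai.azure.com"   # placeholder
$Deployment = "FLUX.2-pro"
$ApiKey     = "<your-api-key>"

$Body = @{
    prompt = "Cinematic film still of a woman walking alone through a narrow Madrid street at night"
    n      = 1
    size   = "1024x1024"   # assumed parameter name and value
} | ConvertTo-Json

$Response = Invoke-RestMethod `
    -Uri "$Endpoint/openai/deployments/$Deployment/images/generations?api-version=2025-04-01-preview" `
    -Method Post `
    -Headers @{ "api-key" = $ApiKey } `
    -ContentType "application/json" `
    -Body $Body

$Response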
You can try out your prompts in the playground, and you can use the API and key with various clients.

Learn More

- ▶️ RSVP for the next Model Monday LIVE on YouTube or On-Demand
- 👩💻 Explore FLUX.2 Documentation on Microsoft Learn
- 👋 Continue the conversation on Discord

Upcoming Changes to Azure Relay IP Addresses and DNS Support
Azure Relay is an integral part of modern hybrid cloud architectures, enabling seamless connectivity between on-premises and cloud resources. To ensure continued reliability and security, Microsoft is implementing important updates to the IP addresses and DNS naming conventions used by Azure Relay services.

What’s Changing?

As detailed in the changes to IP-addresses for Azure Relay and Azure Relay WCF and Hybrid Connections DNS Support reference blogs, customers should be aware of two primary changes:

- IP and Name Transitions: The IP addresses and corresponding DNS names for Azure Relay endpoints will change during the transition period. For example, g0-prod-bn-vaz0001-sb.servicebus.windows.net can change to gv0-prod-bn-vaz0001-sb.servicebus.windows.net.
- DNS Support Enhancements: Improved DNS support will enhance reliability and future-proof connectivity for both WCF Relay and Hybrid Connections users.

Recommended Actions for Customers

To minimize disruption, it is crucial to update your network configurations and firewall rules to accommodate the new IP addresses and DNS names as soon as possible. The new values can be retrieved using the PowerShell script referenced below.

- Update allow lists: Ensure that your firewalls and network security groups permit traffic to the new IP ranges and DNS endpoints as specified in the official documentation.
- Monitor transition phases: Be prepared for two rounds of changes. Apply updates promptly during both the initial and final transitions.

Automating Namespace Information Retrieval

To assist with this transition, Microsoft has updated the PowerShell script for retrieving namespace information, which now reflects the planned changes. You can access the latest script here: GetNamespaceInfo.ps1 (azure-relay-dotnet/tools). (Instructions on how to use the script are available in the README.)

This script allows you to efficiently check the current configuration of your Azure Relay namespaces and validate connectivity against the updated endpoints.

Sample output:

PS D:\AzureVMSSEssentials\Tools\GetNamespaceInfoWithIpRanges> .\GetNamespaceInfo.ps1 <your-relay-namespace>.servicebus.windows.net

Namespace        : <your-relay-namespace>.servicebus.windows.net
Deployment       : PROD-BN-VAZ0001
ClusterDNS       : ns-prod-bn-vaz0001.eastus2.cloudapp.azure.com
ClusterRegion    : eastus2
ClusterVIP       : 40.84.75.3
GatewayDnsFormat : g{0}-bn-vaz0001-sb.servicebus.windows.net or gv{0}-bn-vaz0001-sb.servicebus.windows.net
Notes            : Entries with 'FUTURE' IPAddress may be added at a later time as needed

Current IP Ranges

Name                                      IPAddress
----                                      ---------
g0-bn-vaz0001-sb.servicebus.windows.net   20.36.144.8
g1-bn-vaz0001-sb.servicebus.windows.net   20.36.144.1
g2-bn-vaz0001-sb.servicebus.windows.net   20.36.144.2
g3-bn-vaz0001-sb.servicebus.windows.net   20.36.144.11
g4-bn-vaz0001-sb.servicebus.windows.net   20.36.144.3
g5-bn-vaz0001-sb.servicebus.windows.net   FUTURE
g6-bn-vaz0001-sb.servicebus.windows.net   FUTURE
...
g98-bn-vaz0001-sb.servicebus.windows.net  FUTURE
g99-bn-vaz0001-sb.servicebus.windows.net  FUTURE

Future IP Ranges for Region:eastus2

addressPrefixes
---------------
135.18.130.0/23
135.18.132.0/26
135.18.132.64/27
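As a quick spot check during the transition, you can confirm which gateway names currently resolve from your network. A minimal sketch follows, reusing the example hostnames from this post; Resolve-DnsName ships with Windows (on other platforms, nslookup works similarly).

# Spot-check sketch: resolve both the legacy (g0-...) and new (gv0-...) gateway
# names from this post's example to confirm your DNS and firewall rules cover
# both during the transition. The hostnames are illustrative -- substitute the
# gateway names reported for your own namespace by GetNamespaceInfo.ps1.
$GatewayNames = @(
    "g0-prod-bn-vaz0001-sb.servicebus.windows.net",
    "gv0-prod-bn-vaz0001-sb.servicebus.windows.net"
)

foreach ($Name in $GatewayNames) {
    try {
        $Records = Resolve-DnsName -Name $Name -Type A -ErrorAction Stop
        $Ips = ($Records | Where-Object { $_.IPAddress }).IPAddress -join ", "
        Write-Host "$Name -> $Ips"
    }
    catch {
        Write-Host "$Name -> did not resolve (may not be live yet)"
    }
}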
Mastering Azure Queries: Skip Token and Batching for Scale

Let’s be honest. As a cloud engineer or DevOps professional managing a large Azure environment, running even a simple resource inventory query can feel like drinking from a firehose. You hit API limits, face slow performance, and struggle to get the complete picture of your estate—all because the data volume is overwhelming. But it doesn’t have to be this way! This blog is your practical, hands-on guide to mastering two essential techniques for handling massive data volumes in Azure with PowerShell and Azure Resource Graph (ARG): Skip Token (for full data retrieval) and Batching (for blazing-fast performance).

📋 TABLE OF CONTENTS

🚀 GETTING STARTED
│
├─ Prerequisites: PowerShell 7+ & Az.ResourceGraph Module
└─ Introduction: Why Standard Queries Fail at Scale

📖 CORE CONCEPTS
│
├─ 📑 Skip Token: The Data Completeness Tool
│  ├─ What is a Skip Token?
│  ├─ The Bookmark Analogy
│  ├─ PowerShell Implementation
│  └─ 💻 Code Example: Pagination Loop
│
└─ ⚡ Batching: The Performance Booster
   ├─ What is Batching?
   ├─ Performance Benefits
   ├─ Batching vs. Pagination
   ├─ Parallel Processing in PowerShell
   └─ 💻 Code Example: Concurrent Queries

🔍 DEEP DIVE
│
├─ Skip Token: Generic vs. Azure-Specific
└─ Azure Resource Graph (ARG) at Scale
   ├─ ARG Overview
   ├─ Why ARG Needs These Techniques
   └─ 💻 Combined Example: Skip Token + Batching

✅ BEST PRACTICES
│
├─ When to Use Each Technique
└─ Quick Reference Guide

📚 RESOURCES
└─ Official Documentation & References

Prerequisites

- PowerShell version: The batching examples use ForEach-Object -Parallel, which requires PowerShell 7.0 or later. Check your version with $PSVersionTable.PSVersion; to install PowerShell 7+, see Install PowerShell on Windows, Linux, and macOS.
- Azure PowerShell module: The Az.ResourceGraph module must be installed: Install-Module -Name Az.ResourceGraph -Scope CurrentUser

Introduction: Why Standard Queries Don't Work at Scale

When you query a service designed for big environments, like Azure Resource Graph, you face two limits:

- Result limits (pagination): APIs won’t send you millions of records at once. They cap the result size (often 1,000 items) and stop.
- Efficiency limits (throttling): Sending a huge number of individual requests is slow and can cause the API to temporarily block you (throttling).

Skip Token helps you solve the first limit by making sure you retrieve all results. Batching solves the second by grouping your requests to improve performance.

Understanding Skip Token: The Continuation Pointer

What is a Skip Token?

A Skip Token (or continuation token) is a unique string value returned by an Azure API when a query result exceeds the maximum limit for a single response. Think of the Skip Token as a “bookmark” that tells Azure where your last page ended—so you can pick up exactly where you left off in the next API call. Instead of getting cut off after 1,000 records, the API gives you the first 1,000 results plus the Skip Token. You use this token in the next request to get the next page of data. This process is called pagination.

Skip Token in Practice with PowerShell

To get the complete dataset, you must use a loop that repeatedly calls the API, providing the token each time until the token is no longer returned.

PowerShell Example: Using Skip Token to Loop Pages

# Define the query
$Query = "Resources | project name, type, location"
$PageSize = 1000
$AllResults = @()
$SkipToken = $null # Initialize the token

Write-Host "Starting ARG query..."

do {
    Write-Host "Fetching next page. (Token check: $($SkipToken -ne $null))"

    # 1. Execute the query, using the -SkipToken parameter
    $ResultPage = Search-AzGraph -Query $Query -First $PageSize -SkipToken $SkipToken

    # 2. Add the current page results to the main array
    $AllResults += $ResultPage

    # 3. Get the token for the next page, if it exists
    $SkipToken = $ResultPage.SkipToken

    Write-Host " -> Items in this page: $($ResultPage.Count). Total retrieved: $($AllResults.Count)"

} while ($SkipToken -ne $null) # Loop as long as a Skip Token is returned

Write-Host "Query finished. Total resources found: $($AllResults.Count)"

This do-while loop is the reliable way to ensure you retrieve every item in a large result set.

Understanding Batching: Grouping Requests

What is Batching?

Batching means taking several independent requests and combining them into a single API call. Instead of making N separate network requests for N pieces of data, you make one request containing all N sub-requests. Batching is primarily used for performance. It improves efficiency by:

- Reducing overhead: Fewer separate network connections are needed.
- Lowering throttling risk: Fewer overall API calls are made, which helps you stay under rate limits.

Batching vs. Pagination at a glance:

- Goal — Batching: improve efficiency/speed. Pagination (Skip Token): retrieve all data completely.
- Input — Batching: multiple different queries. Pagination: a single query, continuing from a marker.
- Result — Batching: one response with results for all grouped queries. Pagination: partial results with a token for the next step.

Note: While Azure Resource Graph's REST API supports batch requests, the PowerShell Search-AzGraph cmdlet does not expose a -Batch parameter. Instead, we achieve batching by using PowerShell's ForEach-Object -Parallel (PowerShell 7+) to run multiple queries simultaneously.

Batching in Practice with PowerShell

Using parallel processing in PowerShell, you can efficiently execute multiple distinct Kusto queries targeting different scopes (like subscriptions) simultaneously. Indicative timings:

- Sequential: ~50 seconds for 5 subscriptions, ~200 seconds for 20 subscriptions.
- Parallel (ThrottleLimit 5): ~15 seconds for 5 subscriptions, ~45 seconds for 20 subscriptions.

PowerShell Example: Running Multiple Queries in Parallel

# Define multiple queries to run together
$BatchQueries = @(
    @{
        Query = "Resources | where type =~ 'Microsoft.Compute/virtualMachines'"
        Subscriptions = @("SUB_A") # Query 1 Scope
    },
    @{
        Query = "Resources | where type =~ 'Microsoft.Network/publicIPAddresses'"
        Subscriptions = @("SUB_B", "SUB_C") # Query 2 Scope
    }
)

Write-Host "Executing batch of $($BatchQueries.Count) queries in parallel..."

# Execute queries in parallel (true batching)
$BatchResults = $BatchQueries | ForEach-Object -Parallel {
    $QueryConfig = $_
    $Query = $QueryConfig.Query
    $Subs = $QueryConfig.Subscriptions

    Write-Host "[Batch Worker] Starting query: $($Query.Substring(0, [Math]::Min(50, $Query.Length)))..." -ForegroundColor Cyan

    $QueryResults = @()

    # Process each subscription in this query's scope
    foreach ($SubId in $Subs) {
        $SkipToken = $null
        do {
            $Params = @{
                Query        = $Query
                Subscription = $SubId
                First        = 1000
            }
            if ($SkipToken) { $Params['SkipToken'] = $SkipToken }

            $Result = Search-AzGraph @Params
            if ($Result) { $QueryResults += $Result }
            $SkipToken = $Result.SkipToken
        } while ($SkipToken)
    }

    Write-Host " [Batch Worker] ✅ Query completed - Retrieved $($QueryResults.Count) resources" -ForegroundColor Green

    # Return results with metadata
    [PSCustomObject]@{
        Query         = $Query
        Subscriptions = $Subs
        Data          = $QueryResults
        Count         = $QueryResults.Count
    }
} -ThrottleLimit 5

Write-Host "`nBatch complete. Reviewing results..."
# The results are returned in the same order as the input array
$VMCount = $BatchResults[0].Data.Count
$IPCount = $BatchResults[1].Data.Count

Write-Host "Query 1 (VMs) returned: $VMCount results."
Write-Host "Query 2 (IPs) returned: $IPCount results."

# Optional: Display detailed results
Write-Host "`n--- Detailed Results ---"
for ($i = 0; $i -lt $BatchResults.Count; $i++) {
    $Result = $BatchResults[$i]
    Write-Host "`nQuery $($i + 1):"
    Write-Host "  Query: $($Result.Query)"
    Write-Host "  Subscriptions: $($Result.Subscriptions -join ', ')"
    Write-Host "  Total Resources: $($Result.Count)"
    if ($Result.Data.Count -gt 0) {
        Write-Host "  Sample (first 3):"
        $Result.Data | Select-Object -First 3 | Format-Table -AutoSize
    }
}

Azure Resource Graph (ARG) and Scale

Azure Resource Graph (ARG) is a service built for querying resource properties quickly across a large number of Azure subscriptions using the Kusto Query Language (KQL). Because ARG is designed for large scale, it fully supports Skip Token and Batching:

- Skip Token: ARG automatically generates and returns the token when a query exceeds its result limit (e.g., 1,000 records).
- Batching: ARG's REST API provides a batch endpoint for sending up to ten queries in a single request. In PowerShell, we achieve similar performance benefits using ForEach-Object -Parallel to process multiple queries concurrently.

Combined Example: Batching and Skip Token Together

This script shows how to use Batching to start a query across multiple subscriptions and then use Skip Token within the loop to ensure every subscription's data is fully retrieved.

$SubscriptionIDs = @("SUB_A")
$KQLQuery = "Resources | project id, name, type, subscriptionId"

Write-Host "Starting BATCHED query across $($SubscriptionIDs.Count) subscription(s)..."
Write-Host "Using parallel processing for true batching...`n"

# Process subscriptions in parallel (batching)
$AllResults = $SubscriptionIDs | ForEach-Object -Parallel {
    $SubId = $_
    $Query = $using:KQLQuery
    $SubResults = @()

    Write-Host "[Batch Worker] Processing Subscription: $SubId" -ForegroundColor Cyan

    $SkipToken = $null
    $PageCount = 0
    do {
        $PageCount++

        # Build parameters
        $Params = @{
            Query        = $Query
            Subscription = $SubId
            First        = 1000
        }
        if ($SkipToken) { $Params['SkipToken'] = $SkipToken }

        # Execute query
        $Result = Search-AzGraph @Params
        if ($Result) {
            $SubResults += $Result
            Write-Host "  [Batch Worker] Sub: $SubId - Page $PageCount - Retrieved $($Result.Count) resources" -ForegroundColor Yellow
        }
        $SkipToken = $Result.SkipToken
    } while ($SkipToken)

    Write-Host "  [Batch Worker] ✅ Completed $SubId - Total: $($SubResults.Count) resources" -ForegroundColor Green

    # Return results from this subscription
    $SubResults
} -ThrottleLimit 5 # Process up to 5 subscriptions simultaneously

Write-Host "`n--- Batch Processing Finished ---"
Write-Host "Final total resource count: $($AllResults.Count)"

# Optional: Display sample results
if ($AllResults.Count -gt 0) {
    Write-Host "`nFirst 5 resources:"
    $AllResults | Select-Object -First 5 | Format-Table -AutoSize
}

Quick reference:

- Skip Token — Use when: you must retrieve all data items and expect more than 1,000 results. Common mistake: forgetting to check for the token, so you only get partial data. Actionable advice: always use a do-while loop to guarantee you get the complete set.
- Batching — Use when: you need to run several separate queries (max 10 in ARG) efficiently. Common mistake: putting too many queries in the batch, causing the request to fail. Actionable advice: group up to 10 logical queries or subscriptions into one fast request.
By combining Skip Token for data completeness and Batching for efficiency, you can confidently query massive Azure estates without hitting limits or missing data. These two techniques—when used together—turn Azure Resource Graph from a “good tool” into a scalable discovery engine for your entire cloud footprint.

Summary: Skip Token and Batching in Azure Resource Graph

Goal: Efficiently query massive Azure environments using PowerShell and Azure Resource Graph (ARG).

1. Skip Token (The Data Completeness Tool)
- What it does: A marker returned by Azure APIs when results hit the 1,000-item limit; it points to the next page of data.
- Why it matters: Ensures you retrieve all records, avoiding incomplete data (pagination).
- PowerShell use: Use a do-while loop with the -SkipToken parameter in Search-AzGraph until the token is no longer returned.

2. Batching (The Performance Booster)
- What it does: Processes multiple independent queries simultaneously using parallel execution.
- Why it matters: Drastically improves query speed by reducing overall execution time and helps avoid API throttling.
- PowerShell use: Use ForEach-Object -Parallel (PowerShell 7+) with -ThrottleLimit to control concurrent queries. For PowerShell 5.1, use Start-Job with background jobs (see the sketch before the references below).

3. Best Practice: Combine Them
For maximum efficiency, combine Batching and Skip Token. Use batching to run queries across multiple subscriptions simultaneously, and use the Skip Token logic within the loop to ensure every single subscription's data is fully paginated and retrieved. Result: fast, complete, and reliable data collection across your large Azure estate.

AI/ML Use Cases Powered by ARG Optimizations

- Machine learning pipelines: Cost forecasting and capacity planning models require historical resource data—SkipToken pagination enables efficient extraction of millions of records for training without hitting API limits. Scenario: Cost forecasting models predict next month's Azure spend by analyzing historical resource usage. For example, training a model to forecast VM costs requires pulling 12 months of resource data (CPU usage, memory, SKU changes) across all subscriptions. SkipToken pagination enables extracting millions of these records efficiently without hitting API limits—a single query might return 500K+ VM records that need to be fed into Azure ML for training.
- Anomaly detection: Security teams use Azure ML to monitor suspicious configurations—parallel queries across subscriptions enable real-time threat detection with low latency. Scenario: Security teams deploy models to catch unusual activity, like someone suddenly creating 50 VMs in an unusual region or changing network security rules at 3 AM. Parallel queries across all subscriptions let these detection systems scan your entire Azure environment in seconds rather than minutes, enabling real-time alerts when threats emerge.
- RAG systems: Converting resource metadata to embeddings for semantic search requires processing entire inventories—pagination and batching become mandatory for handling large-scale embedding generation. Scenario: Imagine asking, “Which SQL databases are using premium storage but have low IOPS?” A RAG system converts all your Azure resources into searchable embeddings (vector representations), then uses AI to understand your question and find relevant resources semantically. Building this requires processing your entire Azure inventory—potentially a very large number of resources—where pagination and batching become mandatory to generate embeddings without overwhelming the OpenAI API.
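The summary above mentions Start-Job as the batching fallback on Windows PowerShell 5.1, where ForEach-Object -Parallel is unavailable. A minimal sketch follows, reusing the same Search-AzGraph pagination pattern from this post; note that Start-Job spawns separate processes, so each job needs its own Azure context.

# Minimal PowerShell 5.1 fallback sketch: one background job per subscription,
# each paginating with -SkipToken. Assumes Az.ResourceGraph is installed and
# that an Az login context is available inside each job (you may need to pass
# credentials or run Connect-AzAccount per job, since jobs are new processes).
$SubscriptionIDs = @("SUB_A", "SUB_B")
$KQLQuery = "Resources | project id, name, type, subscriptionId"

$Jobs = foreach ($SubId in $SubscriptionIDs) {
    Start-Job -ArgumentList $SubId, $KQLQuery -ScriptBlock {
        param($SubId, $Query)
        $Results = @()
        $SkipToken = $null
        do {
            $Params = @{ Query = $Query; Subscription = $SubId; First = 1000 }
            if ($SkipToken) { $Params['SkipToken'] = $SkipToken }
            $Page = Search-AzGraph @Params
            if ($Page) { $Results += $Page }
            $SkipToken = $Page.SkipToken
        } while ($SkipToken)
        $Results
    }
}

# Wait for all jobs, collect their output, then clean up
$AllResults = $Jobs | Wait-Job | Receive-Job
$Jobs | Remove-Job
Write-Host "Total resources retrieved: $($AllResults.Count)"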
References:

- Azure Resource Graph documentation
- Search-AzGraph PowerShell reference