Resiliency Best Practices You Need For your Blob Storage Data
Maintaining Resiliency in Azure Blob Storage: A Guide to Best Practices

Azure Blob Storage is a cornerstone of modern cloud storage, offering scalable and secure solutions for unstructured data. However, maintaining resiliency in Blob Storage requires careful planning and adherence to best practices. In this blog, I'll share practical strategies to ensure your data remains available, secure, and recoverable under all circumstances.

1. Enable Soft Delete for Accidental Recovery (Most Important)

Mistakes happen, but soft delete can be your safety net. It allows you to recover deleted blobs within a specified retention period:

Configure a soft delete retention period in Azure Storage (a minimal PowerShell sketch follows section 8 below).
Regularly monitor your blob storage to ensure that critical data is not permanently removed by mistake.

Enabling soft delete in Azure Blob Storage does not come with any additional cost for simply enabling the feature itself. However, it can potentially impact your storage costs because the deleted data is retained for the configured retention period, which means:

The retained data contributes to the total storage consumption during the retention period.
You will be charged according to the pricing tier of the data (Hot, Cool, or Archive) for the duration of retention.

2. Utilize Geo-Redundant Storage (GRS)

Geo-redundancy ensures your data is replicated across regions to protect against regional failures:

Choose RA-GRS (Read-Access Geo-Redundant Storage) for read access to secondary replicas in the event of a primary region outage.
Assess your workload's RPO (Recovery Point Objective) and RTO (Recovery Time Objective) needs to select the appropriate redundancy.

3. Implement Lifecycle Management Policies

Efficient storage management reduces costs and ensures long-term data availability:

Set up lifecycle policies to transition data between hot, cool, and archive tiers based on usage (an illustrative policy definition appears at the end of this post).
Automatically delete expired blobs to save on costs while keeping your storage organized.

4. Secure Your Data with Encryption and Access Controls

Resiliency is incomplete without robust security. Protect your blobs using:

Encryption at Rest: Azure automatically encrypts data using server-side encryption (SSE). Consider enabling customer-managed keys for additional control.
Access Policies: Implement Shared Access Signatures (SAS) and Stored Access Policies to restrict access and enforce expiration dates.

5. Monitor and Alert for Anomalies

Stay proactive by leveraging Azure's monitoring capabilities:

Use Azure Monitor and Log Analytics to track storage performance and usage patterns.
Set up alerts for unusual activities, such as sudden spikes in access or deletions, to detect potential issues early.

6. Plan for Disaster Recovery

Ensure your data remains accessible even during critical failures:

Create snapshots of critical blobs for point-in-time recovery.
Enable backup for blobs and have the immutability feature enabled.
Test your recovery process regularly to ensure it meets your operational requirements.

7. Resource Lock

Adding Azure locks to your Blob Storage account provides an additional layer of protection by preventing accidental deletion or modification of critical resources (see the PowerShell sketch after section 8).

8. Educate and Train Your Team

Operational resilience often hinges on user awareness:

Conduct regular training sessions on Blob Storage best practices.
Document and share a clear data recovery and management protocol with all stakeholders.
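To make the soft delete (section 1) and resource lock (section 7) recommendations concrete, here is a minimal PowerShell sketch using the Az modules. The resource group, storage account name, retention period, and lock name are placeholders rather than values from this post; verify the cmdlets against the current Az.Storage and Az.Resources documentation before relying on them.

# Requires the Az.Storage and Az.Resources modules and an authenticated session (Connect-AzAccount)
$rg      = "rg-storage-prod"       # placeholder resource group
$account = "stcontosoprod001"      # placeholder storage account name

# Section 1: enable blob soft delete with a 14-day retention window
Enable-AzStorageBlobDeleteRetentionPolicy -ResourceGroupName $rg `
    -StorageAccountName $account -RetentionDays 14

# Section 7: add a CanNotDelete lock so the account cannot be removed accidentally
New-AzResourceLock -LockName "protect-blob-account" `
    -LockLevel CanNotDelete `
    -ResourceGroupName $rg `
    -ResourceName $account `
    -ResourceType "Microsoft.Storage/storageAccounts" `
    -LockNotes "Prevents accidental deletion of the storage account" `
    -Force

The 14-day retention period is only an example; pick a value that matches how quickly your team typically notices accidental deletions.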
"Critical Tip: Do Not Create New Containers with Deleted Names During Recovery" If a container or blob storage is deleted for any reason and recovery is being attempted, it’s crucial not to create a new container with the same name immediately. Doing so can significantly hinder the recovery process by overwriting backend pointers, which are essential for restoring the deleted data. Always ensure that no new containers are created using the same name during the recovery attempt to maximize the chances of successful restoration. Wrapping It Up Azure Blob Storage offers an exceptional platform for scalable and secure storage, but its resiliency depends on following best practices. By enabling features like soft delete, implementing redundancy, securing data, and proactively monitoring your storage environment, you can ensure that your data is resilient to failures and recoverable in any scenario. Protect your Azure resources with a lock - Azure Resource Manager | Microsoft Learn Data redundancy - Azure Storage | Microsoft Learn Overview of Azure Blobs backup - Azure Backup | Microsoft Learn Protect your Azure resources with a lock - Azure Resource Manager | Microsoft Learn1.1KViews1like1CommentWindows 11, version 25H2 security baseline
Windows 11, version 25H2 security baseline

Microsoft is pleased to announce the security baseline package for Windows 11, version 25H2! You can download the baseline package from the Microsoft Security Compliance Toolkit, test the recommended configurations in your environment, and customize / implement them as appropriate.

Summary of changes

This release includes several changes made since the Windows 11, version 24H2 security baseline to further strengthen the security of enterprise customers, including better alignment with the latest capabilities and standards. The changes are summarized below.

Printer: Impersonate a client after authentication
  Add "RESTRICTED SERVICES\PrintSpoolerService" to allow the Print Spooler's restricted service identity to impersonate clients securely.

NTLM Auditing Enhancements
  Enable by default to improve visibility into NTLM usage within your environment.

MDAV: Attack Surface Reduction (ASR)
  Add "Block process creations originating from PSExec and WMI commands" (d1e49aac-8f56-4280-b9ba-993a6d77406c) with a recommended value of 2 (Audit) to improve visibility into suspicious activity.

MDAV: Control whether exclusions are visible to local users
  Move to Not Configured as it is overridden by the parent setting.

MDAV: Scan packed executables
  Remove from the baseline because the setting is no longer functional - Windows always scans packed executables by default.

Network: Configure NetBIOS settings
  Disable NetBIOS name resolution on all network adapters to reduce legacy protocol exposure.

Disable Internet Explorer 11 Launch Via COM Automation
  Disable to prevent legacy scripts and applications from programmatically launching Internet Explorer 11 using COM automation interfaces.

Include command line in process creation events
  Enable to improve visibility into how processes are executed across the system.

WDigest Authentication
  Remove from the baseline because the setting is obsolete - WDigest is disabled by default and no longer needed in modern Windows environments.

Printer

Improving Print Security with IPPS and Certificate Validation

To enhance the security of network printing, Windows introduces two new policies focused on controlling the use of IPP (Internet Printing Protocol) printers and enforcing encrypted communications.

The setting "Require IPPS for IPP printers" (Administrative Templates\Printers) determines whether printers that do not support TLS are allowed to be installed. When this policy is disabled (default), both IPP and IPPS transport printers can be installed - although IPPS is preferred when both are available. When enabled, only IPPS printers will be installed; attempts to install non-compliant printers will fail and generate an event in the Application log, indicating that installation was blocked by policy.

The second policy, "Set TLS/SSL security policy for IPP printers" (same policy path), requires that printers present valid and trusted TLS/SSL certificates before connections can be established. Enabling this policy defends against spoofed or unauthorized printers, reducing the risk of credential theft or redirection of sensitive print jobs.

While these policies significantly improve security posture, enabling them may introduce operational challenges in environments where IPP and self-signed or locally issued certificates are still commonly used. For this reason, neither policy is enforced in the security baseline at this time.
We recommend that you assess your printers, and if they meet the requirements, consider enabling those policies with a remediation plan to address any non-compliant printers in a controlled and predictable manner.

User Rights Assignment Update: Impersonate a client after authentication

We have added RESTRICTED SERVICES\PrintSpoolerService to the "Impersonate a client after authentication" User Rights Assignment policy. The baseline already includes Administrators, SERVICE, LOCAL SERVICE, and NETWORK SERVICE for this user right. Adding the restricted Print Spooler identity supports Microsoft's ongoing effort to apply least privilege to system services. It enables the Print Spooler to securely impersonate user tokens in modern print scenarios using a scoped, restricted service identity. Although this identity is associated with functionality introduced as part of Windows Protected Print (WPP), it is required to support proper print operations even if WPP is not currently enabled. The system manifests the identity by default, and its presence ensures forward compatibility with WPP-based printing.

Note: This account may appear as a raw SID (e.g., S-1-5-99-...) in Group Policy or local policy tools before the service is fully initialized. This is expected and does not indicate a misconfiguration.

Warning: Removing this entry will result in print failures in environments where WPP is enabled. We recommend retaining this entry in any custom security configuration that defines this user right.

NTLM Auditing Enhancements

Windows 11, version 25H2 includes enhanced NTLM auditing capabilities, enabled by default, which significantly improve visibility into NTLM usage within your environment. These enhancements provide detailed audit logs to help security teams monitor and investigate authentication activity, identify insecure practices, and prepare for future NTLM restrictions. Since these auditing improvements are enabled by default, no additional configuration is required, and thus the baseline does not explicitly enforce them. For more details, see Overview of NTLM auditing enhancements in Windows 11 and Windows Server 2025.

Microsoft Defender Antivirus

Attack Surface Reduction (ASR)

In this release, we've updated the Attack Surface Reduction (ASR) rules to add the policy Block process creations originating from PSExec and WMI commands (d1e49aac-8f56-4280-b9ba-993a6d77406c) with a recommended value of 2 (Audit). By auditing this rule, you can gain essential visibility into potential privilege escalation attempts via tools such as PSExec or persistence mechanisms using WMI. This enhancement helps organizations proactively identify suspicious activities without impacting legitimate administrative workflows. A sketch for testing this configuration on a single machine follows the next section.

Control whether exclusions are visible to local users

We have removed the configuration for the policy "Control whether exclusions are visible to local users" (Windows Components\Microsoft Defender Antivirus) from the baseline in this release. This change was made because the parent policy "Control whether or not exclusions are visible to Local Admins" is already set to Enabled, which takes precedence and effectively overrides the behavior of the former setting. As a result, explicitly configuring the child policy is unnecessary. You can continue to manage exclusion visibility through the parent policy, which provides the intended control over whether local administrators can view exclusion lists.
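As referenced above, the ASR audit configuration can also be applied or verified locally with the Microsoft Defender Antivirus PowerShell cmdlets. In the baseline itself the rule is delivered through policy, so treat the following only as a minimal sketch for testing or confirming the behaviour on a single machine.

# Run in an elevated PowerShell session on a machine with Microsoft Defender Antivirus
$RuleId = "d1e49aac-8f56-4280-b9ba-993a6d77406c"   # Block process creations originating from PSExec and WMI commands

# Set the rule to Audit mode without touching other configured rules
Add-MpPreference -AttackSurfaceReductionRules_Ids $RuleId -AttackSurfaceReductionRules_Actions AuditMode

# Confirm the configured rules and their actions
Get-MpPreference | Select-Object -ExpandProperty AttackSurfaceReductionRules_Ids
Get-MpPreference | Select-Object -ExpandProperty AttackSurfaceReductionRules_Actions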
Scan packed executables

The "Scan packed executables" setting (Windows Components\Microsoft Defender Antivirus\Scan) has been removed from the security baseline because it is no longer functional in modern Windows releases. Microsoft Defender Antivirus always scans packed executables by default, therefore configuring this policy has no effect on the system.

Disable NetBIOS Name Resolution on All Networks

In this release, we start disabling NetBIOS name resolution on all network adapters in the security baseline, including those connected to private and domain networks. The change is reflected in the policy setting "Configure NetBIOS settings" (Network\DNS Client). The goal is to eliminate a legacy name resolution protocol that is vulnerable to spoofing and credential theft. NetBIOS is no longer needed in modern environments where DNS is fully deployed and supported.

To mitigate potential compatibility issues, you should ensure that all internal systems and applications use DNS for name resolution. We recommend the following: test critical workflows in a staging environment prior to deployment, monitor for any resolution failures or fallback behavior, and inform support staff of the change to assist with troubleshooting as needed. This update aligns with our broader efforts to phase out legacy protocols and improve security.

Disable Internet Explorer 11 Launch Via COM Automation

To enhance the security posture of enterprise environments, we recommend disabling Internet Explorer 11 Launch Via COM Automation (Windows Components\Internet Explorer) to prevent legacy scripts and applications from programmatically launching Internet Explorer 11 using COM automation interfaces such as CreateObject("InternetExplorer.Application"). Allowing such behavior poses a significant risk by exposing systems to the legacy MSHTML and ActiveX components, which are vulnerable to exploitation.

Include command line in process creation events

We have enabled the setting "Include command line in process creation events" (System\Audit Process Creation) in the baseline to improve visibility into how processes are executed across the system. Capturing command-line arguments allows defenders to detect and investigate malicious activity that may otherwise appear legitimate, such as abuse of scripting engines, credential theft tools, or obfuscated payloads using native binaries. This setting supports modern threat detection techniques with minimal performance overhead and is highly recommended. A local configuration sketch appears at the end of this post.

WDigest Authentication

We removed the policy "WDigest Authentication (disabling may require KB2871997)" from the security baseline because it is no longer necessary for Windows. This policy was originally enforced to prevent WDigest from storing users' plaintext passwords in memory, which posed a serious credential theft risk. However, starting with the Windows 11, version 24H2 update, the engineering teams deprecated this policy. As a result, there is no longer a need to explicitly enforce this setting, and the policy has been removed from the baseline to reflect the current default behavior. Since the setting does not write to the normal policies location in the registry, it will not be cleaned up automatically for any existing deployments.

Please let us know your thoughts by commenting on this post or through the Security Baseline Community.
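As promised above for the "Include command line in process creation events" change: on a machine that is not receiving the baseline through Group Policy, the equivalent local configuration can be sketched as below. The registry value is the documented ProcessCreationIncludeCmdLine_Enabled setting; still, verify it in your own environment before deploying broadly.

# Run elevated. Enables command-line capture in event ID 4688 (Process Creation)
$Path = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Audit"
New-Item -Path $Path -Force | Out-Null
Set-ItemProperty -Path $Path -Name "ProcessCreationIncludeCmdLine_Enabled" -Value 1 -Type DWord

# Process Creation auditing must also be enabled for 4688 events to be generated
auditpol /set /subcategory:"Process Creation" /success:enable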
Mastering Azure Queries: Skip Token and Batching for Scale

Let's be honest. As a cloud engineer or DevOps professional managing a large Azure environment, running even a simple resource inventory query can feel like drinking from a firehose. You hit API limits, face slow performance, and struggle to get the complete picture of your estate—all because the data volume is overwhelming. But it doesn't have to be this way!

This blog is your practical, hands-on guide to mastering two essential techniques for handling massive data volumes in Azure with PowerShell and Azure Resource Graph (ARG): Skip Token (for full data retrieval) and Batching (for blazing-fast performance).

📋 TABLE OF CONTENTS

🚀 GETTING STARTED
├─ Prerequisites: PowerShell 7+ & Az.ResourceGraph Module
└─ Introduction: Why Standard Queries Fail at Scale

📖 CORE CONCEPTS
├─ 📑 Skip Token: The Data Completeness Tool
│  ├─ What is a Skip Token?
│  ├─ The Bookmark Analogy
│  ├─ PowerShell Implementation
│  └─ 💻 Code Example: Pagination Loop
└─ ⚡ Batching: The Performance Booster
   ├─ What is Batching?
   ├─ Performance Benefits
   ├─ Batching vs. Pagination
   ├─ Parallel Processing in PowerShell
   └─ 💻 Code Example: Concurrent Queries

🔍 DEEP DIVE
├─ Skip Token: Generic vs. Azure-Specific
└─ Azure Resource Graph (ARG) at Scale
   ├─ ARG Overview
   ├─ Why ARG Needs These Techniques
   └─ 💻 Combined Example: Skip Token + Batching

✅ BEST PRACTICES
├─ When to Use Each Technique
└─ Quick Reference Guide

📚 RESOURCES
└─ Official Documentation & References

Prerequisites

PowerShell Version: The batching examples use ForEach-Object -Parallel, which requires PowerShell 7.0 or later.
  Check your version: $PSVersionTable.PSVersion
  Install PowerShell 7+: Install PowerShell on Windows, Linux, and macOS

Azure PowerShell Module: The Az.ResourceGraph module must be installed.
  Install the module: Install-Module -Name Az.ResourceGraph -Scope CurrentUser

Introduction: Why Standard Queries Don't Work at Scale

When you query a service designed for big environments, like Azure Resource Graph, you face two limits:

Result Limits (Pagination): APIs won't send you millions of records at once. They cap the result size (often 1,000 items) and stop.
Efficiency Limits (Throttling): Sending a huge number of individual requests is slow and can cause the API to temporarily block you (throttling).

Skip Token helps you solve the first limit by making sure you retrieve all results. Batching solves the second by grouping your requests to improve performance.

Understanding Skip Token: The Continuation Pointer

What is a Skip Token?

A Skip Token (or continuation token) is a unique string value returned by an Azure API when a query result exceeds the maximum limit for a single response. Think of the Skip Token as a "bookmark" that tells Azure where your last page ended — so you can pick up exactly where you left off in the next API call. Instead of getting cut off after 1,000 records, the API gives you the first 1,000 results plus the Skip Token. You use this token in the next request to get the next page of data. This process is called pagination.

Skip Token in Practice with PowerShell

To get the complete dataset, you must use a loop that repeatedly calls the API, providing the token each time until the token is no longer returned.

PowerShell Example: Using Skip Token to Loop Pages

# Define the query
$Query = "Resources | project name, type, location"
$PageSize = 1000
$AllResults = @()
$SkipToken = $null # Initialize the token

Write-Host "Starting ARG query..."

do {
    Write-Host "Fetching next page. (Token check: $($SkipToken -ne $null))"

    # 1. Execute the query, using the -SkipToken parameter
    $ResultPage = Search-AzGraph -Query $Query -First $PageSize -SkipToken $SkipToken

    # 2. Add the current page results to the main array
    $AllResults += $ResultPage

    # 3. Get the token for the next page, if it exists
    $SkipToken = $ResultPage.SkipToken

    Write-Host " -> Items in this page: $($ResultPage.Count). Total retrieved: $($AllResults.Count)"

} while ($SkipToken -ne $null) # Loop as long as a Skip Token is returned

Write-Host "Query finished. Total resources found: $($AllResults.Count)"

This do-while loop is the reliable way to ensure you retrieve every item in a large result set.

Understanding Batching: Grouping Requests

What is Batching?

Batching means taking several independent requests and combining them into a single API call. Instead of making N separate network requests for N pieces of data, you make one request containing all N sub-requests. Batching is primarily used for performance. It improves efficiency by:

Reducing Overhead: Fewer separate network connections are needed.
Lowering Throttling Risk: Fewer overall API calls are made, which helps you stay under rate limits.

Batching vs. Pagination (Skip Token):
  Goal: Batching improves efficiency and speed; pagination retrieves all data completely.
  Input: Batching takes multiple different queries; pagination continues a single query from a marker.
  Result: Batching returns one response with results for all grouped queries; pagination returns partial results with a token for the next step.

Note: While Azure Resource Graph's REST API supports batch requests, the PowerShell Search-AzGraph cmdlet does not expose a -Batch parameter. Instead, we achieve batching by using PowerShell's ForEach-Object -Parallel (PowerShell 7+) to run multiple queries simultaneously.

Batching in Practice with PowerShell

Using parallel processing in PowerShell, you can efficiently execute multiple distinct Kusto queries targeting different scopes (like subscriptions) simultaneously. Illustrative timings:

  Sequential: ~50 seconds for 5 subscriptions, ~200 seconds for 20 subscriptions
  Parallel (ThrottleLimit 5): ~15 seconds for 5 subscriptions, ~45 seconds for 20 subscriptions

PowerShell Example: Running Multiple Queries in Parallel

# Define multiple queries to run together
$BatchQueries = @(
    @{
        Query = "Resources | where type =~ 'Microsoft.Compute/virtualMachines'"
        Subscriptions = @("SUB_A") # Query 1 Scope
    },
    @{
        Query = "Resources | where type =~ 'Microsoft.Network/publicIPAddresses'"
        Subscriptions = @("SUB_B", "SUB_C") # Query 2 Scope
    }
)

Write-Host "Executing batch of $($BatchQueries.Count) queries in parallel..."

# Execute queries in parallel (true batching)
$BatchResults = $BatchQueries | ForEach-Object -Parallel {
    $QueryConfig = $_
    $Query = $QueryConfig.Query
    $Subs = $QueryConfig.Subscriptions

    Write-Host "[Batch Worker] Starting query: $($Query.Substring(0, [Math]::Min(50, $Query.Length)))..." -ForegroundColor Cyan

    $QueryResults = @()

    # Process each subscription in this query's scope
    foreach ($SubId in $Subs) {
        $SkipToken = $null
        do {
            $Params = @{
                Query        = $Query
                Subscription = $SubId
                First        = 1000
            }
            if ($SkipToken) { $Params['SkipToken'] = $SkipToken }

            $Result = Search-AzGraph @Params
            if ($Result) { $QueryResults += $Result }

            $SkipToken = $Result.SkipToken
        } while ($SkipToken)
    }

    Write-Host " [Batch Worker] ✅ Query completed - Retrieved $($QueryResults.Count) resources" -ForegroundColor Green

    # Return results with metadata
    [PSCustomObject]@{
        Query         = $Query
        Subscriptions = $Subs
        Data          = $QueryResults
        Count         = $QueryResults.Count
    }
} -ThrottleLimit 5

Write-Host "`nBatch complete. Reviewing results..."
# The results are returned in the same order as the input array
$VMCount = $BatchResults[0].Data.Count
$IPCount = $BatchResults[1].Data.Count

Write-Host "Query 1 (VMs) returned: $VMCount results."
Write-Host "Query 2 (IPs) returned: $IPCount results."

# Optional: Display detailed results
Write-Host "`n--- Detailed Results ---"
for ($i = 0; $i -lt $BatchResults.Count; $i++) {
    $Result = $BatchResults[$i]
    Write-Host "`nQuery $($i + 1):"
    Write-Host "  Query: $($Result.Query)"
    Write-Host "  Subscriptions: $($Result.Subscriptions -join ', ')"
    Write-Host "  Total Resources: $($Result.Count)"
    if ($Result.Data.Count -gt 0) {
        Write-Host "  Sample (first 3):"
        $Result.Data | Select-Object -First 3 | Format-Table -AutoSize
    }
}

Azure Resource Graph (ARG) and Scale

Azure Resource Graph (ARG) is a service built for querying resource properties quickly across a large number of Azure subscriptions using the Kusto Query Language (KQL). Because ARG is designed for large scale, it fully supports Skip Token and Batching:

Skip Token: ARG automatically generates and returns the token when a query exceeds its result limit (e.g., 1,000 records).
Batching: ARG's REST API provides a batch endpoint for sending up to ten queries in a single request. In PowerShell, we achieve similar performance benefits using ForEach-Object -Parallel to process multiple queries concurrently.

Combined Example: Batching and Skip Token Together

This script shows how to use Batching to start a query across multiple subscriptions and then use Skip Token within the loop to ensure every subscription's data is fully retrieved.

$SubscriptionIDs = @("SUB_A")
$KQLQuery = "Resources | project id, name, type, subscriptionId"

Write-Host "Starting BATCHED query across $($SubscriptionIDs.Count) subscription(s)..."
Write-Host "Using parallel processing for true batching...`n"

# Process subscriptions in parallel (batching)
$AllResults = $SubscriptionIDs | ForEach-Object -Parallel {
    $SubId = $_
    $Query = $using:KQLQuery
    $SubResults = @()

    Write-Host "[Batch Worker] Processing Subscription: $SubId" -ForegroundColor Cyan

    $SkipToken = $null
    $PageCount = 0

    do {
        $PageCount++

        # Build parameters
        $Params = @{
            Query        = $Query
            Subscription = $SubId
            First        = 1000
        }
        if ($SkipToken) { $Params['SkipToken'] = $SkipToken }

        # Execute query
        $Result = Search-AzGraph @Params
        if ($Result) {
            $SubResults += $Result
            Write-Host "  [Batch Worker] Sub: $SubId - Page $PageCount - Retrieved $($Result.Count) resources" -ForegroundColor Yellow
        }

        $SkipToken = $Result.SkipToken
    } while ($SkipToken)

    Write-Host "  [Batch Worker] ✅ Completed $SubId - Total: $($SubResults.Count) resources" -ForegroundColor Green

    # Return results from this subscription
    $SubResults
} -ThrottleLimit 5 # Process up to 5 subscriptions simultaneously

Write-Host "`n--- Batch Processing Finished ---"
Write-Host "Final total resource count: $($AllResults.Count)"

# Optional: Display sample results
if ($AllResults.Count -gt 0) {
    Write-Host "`nFirst 5 resources:"
    $AllResults | Select-Object -First 5 | Format-Table -AutoSize
}

Quick reference:

Skip Token
  Use when: you must retrieve all data items and expect more than 1,000 results.
  Common mistake: forgetting to check for the token, so you only get partial data.
  Actionable advice: always use a do-while loop to guarantee you get the complete set.

Batching
  Use when: you need to run several separate queries (max 10 in ARG) efficiently.
  Common mistake: putting too many queries in the batch, causing the request to fail.
  Actionable advice: group up to 10 logical queries or subscriptions into one fast request.
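The skip token mechanics shown above through Search-AzGraph are exposed the same way by the ARG REST API, which the batch endpoint ultimately wraps. Here is a minimal sketch calling the REST endpoint directly with Invoke-AzRestMethod; the api-version (2022-10-01), the subscription ID, and the objectArray result format are assumptions for illustration, so check the current Resource Graph REST reference before relying on them.

# Assumes the Az.Accounts module and an authenticated session (Connect-AzAccount)
$SubscriptionId = "00000000-0000-0000-0000-000000000000"   # placeholder
$Body = @{
    subscriptions = @($SubscriptionId)
    query         = "Resources | project name, type, location"
    options       = @{ '$top' = 1000; resultFormat = 'objectArray' }
}

$AllRows = @()
do {
    $Response = Invoke-AzRestMethod -Method POST `
        -Path "/providers/Microsoft.ResourceGraph/resources?api-version=2022-10-01" `
        -Payload ($Body | ConvertTo-Json -Depth 5)

    $Parsed   = $Response.Content | ConvertFrom-Json
    $AllRows += $Parsed.data

    # The response carries $skipToken when more pages remain; feed it back via options
    $Token = $Parsed.'$skipToken'
    if ($Token) { $Body.options['$skipToken'] = $Token }
} while ($Token)

Write-Host "Total rows retrieved via REST: $($AllRows.Count)"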
By combining Skip Token for data completeness and Batching for efficiency, you can confidently query massive Azure estates without hitting limits or missing data. These two techniques — when used together — turn Azure Resource Graph from a "good tool" into a scalable discovery engine for your entire cloud footprint.

Summary: Skip Token and Batching in Azure Resource Graph

Goal: Efficiently query massive Azure environments using PowerShell and Azure Resource Graph (ARG).

1. Skip Token (The Data Completeness Tool)
  What it does: a marker returned by Azure APIs when results hit the 1,000-item limit; it points to the next page of data.
  Why it matters: ensures you retrieve all records, avoiding incomplete data (pagination).
  PowerShell use: use a do-while loop with the -SkipToken parameter in Search-AzGraph until the token is no longer returned.

2. Batching (The Performance Booster)
  What it does: processes multiple independent queries simultaneously using parallel execution.
  Why it matters: drastically improves query speed by reducing overall execution time and helps avoid API throttling.
  PowerShell use: use ForEach-Object -Parallel (PowerShell 7+) with -ThrottleLimit to control concurrent queries. For PowerShell 5.1, use Start-Job with background jobs.

3. Best Practice: Combine Them

For maximum efficiency, combine Batching and Skip Token. Use batching to run queries across multiple subscriptions simultaneously and use the Skip Token logic within the loop to ensure every single subscription's data is fully paginated and retrieved.

Result: Fast, complete, and reliable data collection across your large Azure estate.

References:
Azure Resource Graph documentation
Search-AzGraph PowerShell reference
Upcoming Changes to Azure Relay IP Addresses and DNS Support

Azure Relay is an integral part of modern hybrid cloud architectures, enabling seamless connectivity between on-premises and cloud resources. To ensure continued reliability and security, Microsoft is implementing important updates to the IP addresses and DNS naming conventions used by Azure Relay services.

What's Changing?

As detailed in the "changes to IP-addresses for Azure Relay" and "Azure Relay WCF and Hybrid Connections DNS Support" reference blogs, customers should be aware of two primary changes:

IP and Name Transitions: The IP addresses and corresponding DNS names for Azure Relay endpoints will change during the transition period. For example, g0-prod-bn-vaz0001-sb.servicebus.windows.net can change to gv0-prod-bn-vaz0001-sb.servicebus.windows.net.
DNS Support Enhancements: Improved DNS support will enhance reliability and future-proof connectivity for both WCF Relay and Hybrid Connections users.

Recommended Actions for Customers

To minimize disruption, it is crucial for users to update their network configurations and firewall rules to accommodate the new IP addresses and DNS names as soon as possible. The new values can be retrieved with the PowerShell script referenced below.

Update Allow Lists: Ensure that your firewalls and network security groups permit traffic to the new IP ranges and DNS endpoints as specified in the official documentation.
Monitor Transition Phases: Be prepared for two rounds of changes. Apply updates promptly during both the initial and final transitions.

Automating Namespace Information Retrieval

To assist with this transition, Microsoft has updated the PowerShell script for retrieving namespace information, which now reflects the planned changes. You can access the latest script here: GetNamespaceInfo.ps1 (azure-relay-dotnet/tools). (Instructions on how to use the ps1 script are available in the README.)

This script allows you to efficiently check the current configuration of your Azure Relay namespaces and validate connectivity against the updated endpoints.

Sample output

PS D:\AzureVMSSEssentials\Tools\GetNamespaceInfoWithIpRanges> .\GetNamespaceInfo.ps1 <your-relay-namespace>.servicebus.windows.net

Namespace        : <your-relay-namespace>.servicebus.windows.net
Deployment       : PROD-BN-VAZ0001
ClusterDNS       : ns-prod-bn-vaz0001.eastus2.cloudapp.azure.com
ClusterRegion    : eastus2
ClusterVIP       : 40.84.75.3
GatewayDnsFormat : g{0}-bn-vaz0001-sb.servicebus.windows.net or gv{0}-bn-vaz0001-sb.servicebus.windows.net
Notes            : Entries with 'FUTURE' IPAddress may be added at a later time as needed

Current IP Ranges

Name                                      IPAddress
----                                      ---------
g0-bn-vaz0001-sb.servicebus.windows.net   20.36.144.8
g1-bn-vaz0001-sb.servicebus.windows.net   20.36.144.1
g2-bn-vaz0001-sb.servicebus.windows.net   20.36.144.2
g3-bn-vaz0001-sb.servicebus.windows.net   20.36.144.11
g4-bn-vaz0001-sb.servicebus.windows.net   20.36.144.3
g5-bn-vaz0001-sb.servicebus.windows.net   FUTURE
g6-bn-vaz0001-sb.servicebus.windows.net   FUTURE
...
g98-bn-vaz0001-sb.servicebus.windows.net  FUTURE
g99-bn-vaz0001-sb.servicebus.windows.net  FUTURE

Future IP Ranges for Region: eastus2

addressPrefixes
---------------
135.18.130.0/23
135.18.132.0/26
135.18.132.64/27
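To complement the script output above, the current addresses behind the gateway names can also be checked directly with DNS. The sketch below uses the example deployment name from this post (bn-vaz0001); substitute the gateway names reported by GetNamespaceInfo.ps1 for your own namespace, and note that Resolve-DnsName requires the Windows DnsClient module.

# Resolve both the current (g*) and the new (gv*) gateway names for a deployment
$Deployment = "bn-vaz0001"   # example deployment from this post; use your own script output
$Gateways   = 0..4 | ForEach-Object {
    "g${_}-${Deployment}-sb.servicebus.windows.net"
    "gv${_}-${Deployment}-sb.servicebus.windows.net"
}

foreach ($Name in $Gateways) {
    try {
        $Ips = (Resolve-DnsName -Name $Name -Type A -ErrorAction Stop).IPAddress -join ", "
        Write-Host ("{0,-50} {1}" -f $Name, $Ips)
    }
    catch {
        Write-Host ("{0,-50} not resolvable yet" -f $Name)
    }
}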
Can only remote into Azure VM from DC

Hi all, I have set up a site-to-site connection from on-prem to Azure, and I can remote in via the main DC on-prem, but not from any other server, nor ping the Azure VM from any other server. Why can I only remote into the Azure VM from the server that has Routing and Remote Access? Any ideas on how I can fix this?
Beyond the Desktop: The Future of Development with Microsoft Dev Box and GitHub Codespaces

The modern developer platform has already moved past the desktop. We're no longer defined by what's installed on our laptops; instead, we look at what tooling we can use to move from idea to production. An organisation's developer platform strategy is no longer a nice-to-have; it sets the ceiling for what's possible, and an organisation can't iterate its way to developer nirvana if the foundation itself is brittle. A great developer platform shrinks TTFC (time to first commit), accelerates release velocity, and, maybe most importantly, helps alleviate the everyday frictions that lead to developer burnout.

Very few platforms deliver everything an organization needs from a developer platform in one product. Modern development spans multiple dimensions: local tooling, cloud infrastructure, compliance, security, cross-platform builds, collaboration, and rapid onboarding. The options organizations face are then to either compromise on one or more of these areas or force developers into rigid environments that slow productivity and innovation. This is where Microsoft Dev Box and GitHub Codespaces come into play. On their own, each addresses critical parts of the modern developer platform:

Microsoft Dev Box provides a full, managed cloud workstation. Dev Box gives developers a consistent, high-performance environment while letting central IT apply strict governance and control. Internally at Microsoft, we estimate that usage of Dev Box by our development teams delivers savings of 156 hours per year per developer purely on local environment setup and upkeep. We have also seen significant gains in other key SPACE metrics, reducing context-switching friction and improving build/test cycles. Although the benefits of Dev Box are clear in the results demonstrated by our customers, it is not without its challenges. The biggest challenge often faced by Dev Box customers is its lack of native Linux support. At the time of writing, and for the foreseeable future, Dev Box does not support native Linux developer workstations. While WSL2 provides partial parity, I know from my own engineering projects that it still does not deliver the full experience. This is where GitHub Codespaces comes into the story.

GitHub Codespaces delivers instant, Linux-native environments spun up directly from your repository. It's lightweight, reproducible, and ephemeral: ideal for rapid iteration, PR testing, and cross-platform development where you need Linux parity or containerized workflows. Unlike Dev Box, Codespaces can run fully in Linux, giving developers access to native tools, scripts, and runtimes without workarounds. It also removes much of the friction around onboarding: a new developer can open a repository and be coding in minutes, with the exact environment defined by the project's devcontainer.json. That said, Codespaces isn't a complete replacement for a full workstation. While it's perfect for isolated project work or ephemeral testing, it doesn't provide the persistent, policy-controlled environment that enterprise teams often require for heavier workloads or complex toolchains.

Used together, they fill the gaps that neither can cover alone: Dev Box gives the enterprise-grade foundation, while Codespaces provides the agile, cross-platform sandbox. For organizations, this pairing sets a higher ceiling for developer productivity, delivering a truly hybrid, agile and well-governed developer platform.
Better Together: Dev Box and GitHub Codespaces in action

Together, Microsoft Dev Box and GitHub Codespaces deliver a hybrid developer platform that combines consistency, speed, and flexibility. Teams can spin up full, policy-compliant Dev Box workstations preloaded with enterprise tooling, IDEs, and local testing infrastructure, while Codespaces provides ephemeral, Linux-native environments tailored to each project. One of my favourite use cases is having local testing setups, like a Docker Swarm cluster, ready to go in either Dev Box or Codespaces. New developers can jump in and start running services or testing microservices immediately, without spending hours on environment setup. Anecdotally, my time to first commit and time to delivering "impact" has been significantly faster on projects where one or both technologies provide local development services out of the box. Switching between Dev Boxes and Codespaces is seamless: every environment keeps its own libraries, extensions, and settings intact, so developers can jump between projects without reconfiguring or breaking dependencies. The result is a turnkey, ready-to-code experience that maximizes productivity, reduces friction, and lets teams focus entirely on building, testing, and shipping software.

To showcase this value, I thought I would walk through an example scenario that simulates a typical modern developer workflow. Let's look at a day in the life of a developer on this hybrid platform building an IoT project using Python and React.

Spin up a ready-to-go workstation (Dev Box) for Windows development and heavy builds.
Launch a Linux-native Codespace for cross-platform services, ephemeral testing, and PR work.
Run "local" testing like a Docker Swarm cluster, database, and message queue, ready to go out of the box.
Switch seamlessly between environments without losing project-specific configurations, libraries, or extensions.

9:00 AM – Morning Kickoff on Dev Box

I start my day on my Microsoft Dev Box, which gives me a fully configured Windows environment with VS Code, design tools, and Azure integrations. I select my team's project, and the environment is pre-configured for me through the Dev Box catalogue. Fortunately for me, it's already provisioned. I could always self-service another one using the "New Dev Box" button if I wanted to. I'll connect through the browser, but I could use the desktop app too if I wanted to.

My tasks are:

Prototype a new dashboard widget for monitoring IoT device temperature.
Use GUI-based tools to tweak the UI and preview changes live.
Review my Visio architecture.
Join my morning stand-up.
Write documentation notes and plan API interactions for the backend.

In a flash, I have access to my modern work tooling like Teams, I have this project's files already preloaded, and all my peripherals are working without additional setup. Only downside was that I did seem to be the only person on my stand-up this morning?

Why Dev Box first:

GUI-heavy tasks are fast and responsive. Dev Box's environment allows me to use a full desktop.
Great for early-stage design, planning, and visual work.
Enterprise apps are ready for me to use out of the box (P.S. it also supports my multi-monitor setup).

I use my Dev Box to make a very complicated change to my IoT dashboard: changing the title from "IoT Dashboard" to "Owain's IoT Dashboard". I preview this change in a browser live. (Time for a coffee after this hard work.) The rest of the dashboard isn't loading as my backend isn't running... yet.
10:30 AM – Switching to Linux Codespaces

Once the UI is ready, I push the code to GitHub and spin up a Linux-native GitHub Codespace for backend development.

Tasks:

Implement FastAPI endpoints to support the new IoT feature.
Run the service on my Codespace and debug any errors.

Why Codespaces now:

Linux-native tools ensure compatibility with the production server.
Docker and containerized testing run natively, avoiding WSL translation overhead.
The environment is fully reproducible across any device I log in from.

12:30 PM – Midday Testing & Sync

I toggle between Dev Box and Codespaces to test and validate the integration. I do this in my Dev Box Edge browser viewing my Codespace (I use my Codespace in a browser throughout this demo to highlight the difference in environments; in reality I would leverage the VS Code "Remote Explorer" extension and its GitHub Codespaces integration to use my Codespace from within my own desktop VS Code, but that is personal preference), and I use the same browser to view my frontend preview. I update the environment variable for my frontend that is running locally in my Dev Box and point it at the port running my API locally on my Codespace. In this case it was a WebSocket connection and HTTPS calls to port 8000. I can make this public by changing the port visibility in my Codespace.

https://fluffy-invention-5x5wp656g4xcp6x9-8000.app.github.dev/api/devices
wss://fluffy-invention-5x5wp656g4xcp6x9-8000.app.github.dev/ws

This allows me to:

Preview the frontend widget on Dev Box, connecting to the backend running in Codespaces.
Make small frontend adjustments in Dev Box while monitoring backend logs in Codespaces.
Commit changes to GitHub, keeping both environments in sync and leveraging my CI/CD for deployment to the next environment.

We can see the Dev Box running the local frontend and the Codespace running the API connected to each other, making requests and displaying the data in the frontend!

Hybrid advantage:

Dev Box handles GUI previews comfortably and allows me to live test frontend changes.
Codespaces handles production-aligned backend testing and Linux-native tools.
Dev Box allows me to view all of my files on one screen, with potentially multiple Codespaces running in the browser or in VS Code Desktop.

Due to all of those platform efficiencies I have completed my day's goals within an hour or two, and now I can spend the rest of my day learning about how to enable my developers to inner source using GitHub Copilot and MCP (shameless plug).

The bottom line

There are some additional considerations when architecting a developer platform for an enterprise, such as private networking and security, not covered in this post, but these are implementation details to deliver the developer experience described here. Architecting such a platform is a valuable investment to deliver the developer platform foundations we discussed at the top of the article. While the demo I quickly built here uses a mono repository, in real engineering teams it is likely (I hope) that an application is built of many different repositories. The great thing about Dev Box and Codespaces is that this wouldn't slow down the rapid development I can achieve when using both. My Dev Box would be specific to the project or development team, preloaded with all the tools I need and potentially some repos too! When I need to, I can quickly switch over to Codespaces and work in a clean, isolated environment and push my changes.
In both cases, any changes I want to deliver locally are pushed into GitHub (or ADO) and merged, and my CI/CD ensures that my next step, potentially a staging environment or, who knows, perhaps *whispering* straight into production, is taken care of. Once I'm finished I delete my Codespace, and potentially my Dev Box if I am done with the project, knowing I can self-service either one of these anytime and be up and running again!

Now, is there overlap in terms of what can be developed in a Codespace versus what can be developed in a Dev Box? Of course, but as organisations prioritise developer experience to ensure release velocity while maintaining organisational standards and governance, providing developers a Windows-native and a Linux-native service, both of which are primarily charged on the consumption of the compute*, is a no-brainer. There are also gaps that neither fills at the moment; for example, Microsoft Dev Box only provides Windows compute, while GitHub Codespaces only supports VS Code as your chosen IDE. It's not a question of which service do I choose for my developers: these two services are better together!

* Changes have been announced to Dev Box pricing. A W365 license is already required today and dev boxes will continue to be managed through Azure. For more information please see: Microsoft Dev Box capabilities are coming to Windows 365 - Microsoft Dev Box | Microsoft Learn
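As a footnote to the walkthrough above, the Codespaces side of the workflow can also be scripted from a PowerShell session with the GitHub CLI. This is a minimal sketch only: the repository name and port are hypothetical stand-ins for the IoT dashboard project, it assumes gh is installed and already authenticated (gh auth login), and the exact flags should be checked against gh codespace --help for your CLI version.

# Create a Linux Codespace for a hypothetical repository and expose the backend port
$Repo = "owain-demo/iot-dashboard"   # placeholder repository

# Spin up a new codespace on the repository's default branch
gh codespace create --repo $Repo

# Find its name (shown by `gh codespace list`), then make port 8000 publicly reachable
$CodespaceName = "<name-from-gh-codespace-list>"   # placeholder
gh codespace ports visibility 8000:public --codespace $CodespaceName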
Prescaling in Azure Firewall is now generally available

Azure Firewall protects your applications and workloads with cloud-native network security that automatically scales based on your traffic needs. Today, we're excited to announce the general availability of prescaling in Azure Firewall – a new capability that gives you more control and predictability over how your firewall scales.

Why prescaling?

Today, Azure Firewall automatically scales in response to real-time traffic demand. For organizations with predictable traffic patterns – such as seasonal events, business campaigns, holidays, or planned migrations – the ability to plan capacity in advance can provide greater confidence and control. That's where prescaling comes in. With prescaling, you can:

Plan ahead – Set a baseline number of firewall capacity units to ensure capacity is already in place before demand rises.
Stay flexible – Define both minimum and maximum capacity unit values, so your firewall always has room to grow while staying within your chosen bounds.
See clearly – Monitor capacity trends with a new observed capacity metric and configure alerts to know when scaling events occur.

You can think of it as adding extra checkout counters before a holiday rush – when the customers arrive, you're already prepared to serve them without delays or bottlenecks.

Example scenarios

E-commerce sales events – Scale up before a holiday shopping promotion to handle the surge in online buyers.
Workload migrations – Ensure sufficient capacity is ready during a large data or VM migration window.
Seasonal usage – For industries like education, gaming, or media streaming, pre-scale ahead of known peak seasons.

Getting started in Azure Portal

Navigate to your Azure Firewall resource in the Azure portal.
Select Scaling options in Settings.
By default, every Azure Firewall starts in autoscaling mode. To enable prescaling, switch to prescaling mode and configure your desired capacity range: a minimum capacity of 2 or higher, and a maximum capacity of up to 50, depending on your needs.
Monitor the scaling behavior with the observed capacity metric.

Billing and availability

Prescaling uses a new Capacity Unit Hour meter. Charges apply based on the number of firewall instances you configure.

Standard: $0.07 per capacity unit hour
Premium: $0.11 per capacity unit hour

✨ Next steps

Prescaling gives you predictable performance and proactive control over your firewall, helping you confidently handle the traffic patterns that matter most to your business. 🚀 Try prescaling today and share your feedback with the team. Learn more about how to configure and monitor this feature in the Azure Firewall prescaling documentation.
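For teams that manage firewalls as code rather than through the portal, the same capacity range can in principle be applied against the ARM API. The sketch below is an assumption-heavy illustration: the autoscaleConfiguration property shape, its minCapacity/maxCapacity fields, and the api-version are assumptions based on the Azure Firewall ARM schema rather than values confirmed by this post, so validate them against the prescaling documentation linked above before use.

# Assumes Az.Accounts (Connect-AzAccount) and an existing Azure Firewall resource
$SubId = "00000000-0000-0000-0000-000000000000"   # placeholder subscription ID
$Rg    = "rg-network-prod"                        # placeholder resource group
$Fw    = "azfw-prod-weu"                          # placeholder firewall name
$Api   = "2024-05-01"                             # assumed api-version; check the ARM reference

$Path = "/subscriptions/${SubId}/resourceGroups/${Rg}/providers/Microsoft.Network/azureFirewalls/${Fw}?api-version=${Api}"

# Read the current resource, set the assumed min/max capacity properties, and write it back
$Firewall = (Invoke-AzRestMethod -Method GET -Path $Path).Content | ConvertFrom-Json
$Firewall.properties | Add-Member -NotePropertyName autoscaleConfiguration `
    -NotePropertyValue @{ minCapacity = 4; maxCapacity = 10 } -Force

Invoke-AzRestMethod -Method PUT -Path $Path -Payload ($Firewall | ConvertTo-Json -Depth 20)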