Strengthening Azure File Sync security with Managed Identities
Hello Folks,

As IT pros, we’re always looking for ways to reduce complexity and improve security in our infrastructure. One area that’s often overlooked is how our services authenticate with each other, especially when it comes to Azure File Sync. In this post, I’ll walk you through how Managed Identities can simplify and secure your Azure File Sync deployments, based on my recent conversation with Grace Kim, Program Manager on the Azure Files and File Sync team.

Why Managed Identities Matter

Traditionally, Azure File Sync servers authenticate to the Storage Sync service using server certificates or shared access keys. While functional, these methods introduce operational overhead and potential security risks: certificates expire, keys get misplaced, and rotating credentials can be a pain. Managed Identities solve this by allowing your server to authenticate securely without storing or managing credentials. Once enabled, the server uses its identity to access Azure resources, and permissions are managed through Azure Role-Based Access Control (RBAC).

Using Azure File Sync with Managed Identities provides significant security enhancements and simpler credential management for enterprises. Instead of relying on storage account keys or SAS tokens, Azure File Sync authenticates using a system-assigned Managed Identity from Microsoft Entra ID (Azure AD). This keyless approach greatly improves security by removing long-lived secrets and reducing the attack surface. Access can be controlled via fine-grained Azure role-based access control (RBAC) rather than a broadly privileged key, enforcing least-privilege permissions on file shares. I believe that Azure AD RBAC is far more secure than managing storage account keys or SAS credentials. The result is a secure-by-default setup that minimizes the risk of credential leaks while streamlining authentication management.
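To make the RBAC least-privilege point concrete, here is a toy Python model, not an Azure SDK call or the real authorization engine. The role names below are real Azure built-in roles, but the action sets and scope strings are simplified placeholders for illustration.

```python
# Toy RBAC model (illustrative only, not Azure's authorization engine).
# Hypothetical, simplified action sets for two built-in Azure Files roles.
ROLE_ACTIONS = {
    "Storage File Data SMB Share Reader": {"read"},
    "Storage File Data SMB Share Contributor": {"read", "write", "delete"},
}

def allowed(assignments, action, resource):
    """An action is allowed only if some role assignment grants it AND the
    assignment's scope is a prefix of the target resource (scope inheritance)."""
    return any(
        action in ROLE_ACTIONS.get(role, set()) and resource.startswith(scope)
        for role, scope in assignments
    )

# A server's managed identity holding one narrowly scoped role:
server_identity = [("Storage File Data SMB Share Reader",
                    "/storageAccounts/corp/shares/hr")]
```

Contrast this with a storage account key, which effectively answers "yes" for every action on every share in the account; that gap is the attack surface managed identities remove.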
Managed Identities also improve integration with other Azure services and support enterprise-scale deployments. Because authentication is unified under Azure AD, Azure File Sync’s components (the Storage Sync Service and each registered server) seamlessly obtain tokens to access Azure Files and the sync service without any embedded secrets. This design fits into common Azure security frameworks and encourages consistent identity and access policies across services. In practice, the File Sync managed identity can be granted appropriate Azure roles to interact with related services (for example, allowing Azure Backup or Azure Monitor to access file share data) without sharing separate credentials.

At scale, organizations benefit from easier administration. New servers can be onboarded by simply enabling a managed identity (on an Azure VM or an Azure Arc–connected server) and assigning the proper role, avoiding complex key management for each endpoint. Azure’s logging and monitoring tools also recognize these identities, so actions taken by Azure File Sync are transparently auditable in Azure AD activity logs and storage access logs. Given these advantages, new Azure File Sync deployments now enable Managed Identity by default, underscoring a shift toward identity-based security as the standard practice for enterprise file synchronization. This approach ensures that large, distributed file sync environments remain secure, manageable, and well-integrated with the rest of the Azure ecosystem.

How It Works

When you enable Managed Identity on your Azure VM or Arc-enabled server, Azure automatically provisions an identity for that server. This identity is then used by the Storage Sync service to authenticate and communicate securely. Here’s what happens under the hood:

- The server receives a system-assigned Managed Identity.
- Azure File Sync uses this identity to access the storage account.
- No certificates or access keys are required.
- Permissions are controlled via RBAC, allowing fine-grained access control.

Enabling Managed Identity: Two Scenarios

Azure VM

If your server is an Azure VM:

1. Go to the VM settings in the Azure portal.
2. Enable System Assigned Managed Identity.
3. Install Azure File Sync.
4. Register the server with the Storage Sync service.
5. Enable Managed Identity in the Storage Sync blade.

Once enabled, Azure handles the identity provisioning and permissions setup in the background.

Non-Azure VM (Arc-enabled)

If your server is on-prem or in another cloud:

1. First, make the server Arc-enabled.
2. Enable System Assigned Managed Identity via Azure Arc.
3. Follow the same steps as above to install and register Azure File Sync.

This approach brings parity to hybrid environments, allowing you to use Managed Identities even outside Azure.

Next Steps

If you’re managing Azure File Sync in your environment, I highly recommend transitioning to Managed Identities. It’s a cleaner, more secure approach that aligns with modern identity practices.

✅ Resources

📚 https://learn.microsoft.com/azure/storage/files/storage-sync-files-planning
🔐 https://learn.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview
⚙️ https://learn.microsoft.com/azure/azure-arc/servers/overview
🎯 https://learn.microsoft.com/azure/role-based-access-control/overview

🛠️ Action Items

- Audit your current Azure File Sync deployments.
- Identify servers using certificates or access keys.
- Enable Managed Identity on eligible servers.
- Use RBAC to assign appropriate permissions.

Let me know how your transition to Managed Identities goes. If you run into any snags or have questions, drop a comment.

Cheers!
Pierre

Install and run Azure Foundry Local LLM server & Open WebUI on Windows Server 2025
Foundry Local is an on-device AI inference solution offering performance, privacy, customization, and cost advantages. It integrates seamlessly into your existing workflows and applications through an intuitive CLI, SDK, and REST API. Foundry Local has the following benefits:

- On-Device Inference: Run models locally on your own hardware, reducing your costs while keeping all your data on your device.
- Model Customization: Select from preset models or use your own to meet specific requirements and use cases.
- Cost Efficiency: Eliminate recurring cloud service costs by using your existing hardware, making AI more accessible.
- Seamless Integration: Connect with your applications through an SDK, API endpoints, or the CLI, with easy scaling to Azure AI Foundry as your needs grow.

Foundry Local is ideal for scenarios where:

- You want to keep sensitive data on your device.
- You need to operate in environments with limited or no internet connectivity.
- You want to reduce cloud inference costs.
- You need low-latency AI responses for real-time applications.
- You want to experiment with AI models before deploying to a cloud environment.

You can install Foundry Local by running the following command:

winget install Microsoft.FoundryLocal

Once Foundry Local is installed, you can download and interact with a model from the command line by using a command like:

foundry model run phi-4

This will download the phi-4 model and provide a text-based chat interface. If you want to interact with Foundry Local through a web chat interface, you can use the open source Open WebUI project. You can install Open WebUI on Windows Server by performing the following steps:

1. Download OpenWebUIInstaller.exe from https://github.com/BrainDriveAI/OpenWebUI_CondaInstaller/releases. You'll get warning messages from Windows Defender SmartScreen.
2. Copy OpenWebUIInstaller.exe into C:\Temp.
Next, in an elevated PowerShell prompt (the $env:Path assignments below are PowerShell syntax), run the following commands:

winget install -e --id Anaconda.Miniconda3 --scope machine
$env:Path = 'C:\ProgramData\miniconda3;' + $env:Path
$env:Path = 'C:\ProgramData\miniconda3\Scripts;' + $env:Path
$env:Path = 'C:\ProgramData\miniconda3\Library\bin;' + $env:Path
conda.exe tos accept --override-channels --channel https://repo.anaconda.com/pkgs/main
conda.exe tos accept --override-channels --channel https://repo.anaconda.com/pkgs/r
conda.exe tos accept --override-channels --channel https://repo.anaconda.com/pkgs/msys2
C:\Temp\OpenWebUIInstaller.exe

Then from the dialog choose to install and run Open WebUI.

You then need to take several extra steps to configure Open WebUI to connect to the Foundry Local endpoint.

Enable Direct Connections in Open WebUI:

1. Select Settings and Admin Settings in the profile menu.
2. Select Connections in the navigation menu.
3. Enable Direct Connections by turning on the toggle. This allows users to connect to their own OpenAI compatible API endpoints.

Connect Open WebUI to Foundry Local:

1. Select Settings in the profile menu.
2. Select Connections in the navigation menu.
3. Select + by Manage Direct Connections.
4. For the URL, enter http://localhost:PORT/v1 where PORT is the Foundry Local endpoint port (use the CLI command foundry service status to find it). Note that Foundry Local dynamically assigns a port, so it isn't always the same.
5. For the Auth, select None.
6. Select Save.

➡️ What is Foundry Local: https://learn.microsoft.com/en-us/azure/ai-foundry/foundry-local/what-is-foundry-local
➡️ Edge AI for Beginners: https://aka.ms/edgeai-for-beginners
➡️ Open WebUI: https://docs.openwebui.com/

Supercharging NVAs in Azure with Accelerated Connections
Hello folks,

If you run firewalls, routers, or SD‑WAN NVAs in Azure and your pain is connection scale rather than raw Mbps, there is a feature you should look at: Accelerated Connections. It shifts connection processing to dedicated hardware in the Azure fleet and lets you size connection capacity per NIC, which translates into higher connections‑per‑second and more total active sessions for your virtual appliances and VMs. This article distills a recent E2E chat I hosted with the Technical Product Manager working on Accelerated Connections and shows you how to enable and operate it safely in production. The demo and guidance below are based on that conversation and the current public documentation.

What Accelerated Connections is (and what it is not)

Accelerated Connections is configured at the NIC level of your NVAs or VMs. You can choose which NICs participate. That means you might enable it only on your high‑throughput ingress and egress NICs and leave the management NIC alone. It improves two things that matter to infrastructure workloads:

- Connections per second (CPS). New flows are established much faster.
- Total active connections. Each NIC can hold far more simultaneous sessions before you hit limits.

It does not increase your nominal throughput number. The benefit is stability under high connection pressure, which helps reduce drops and flapping during surges. There is a small latency bump because you introduce another “bump in the wire,” but in application terms it is typically negligible compared to the stability you gain.

How it works under the hood

In the traditional path, host CPUs evaluate SDN policies for flows that traverse your virtual network. That becomes a bottleneck for connection scale. Accelerated Connections offloads that policy work onto specialized data processing hardware in the Azure fleet so your NVAs and VMs are not capped by host CPU and flow‑table memory constraints.
Industry partners have described this as decoupling the SDN stack from the server and shifting the fast‑path onto DPUs residing in purpose‑built appliances, delivered to you as a capability you attach at the vNIC. The result is much higher CPS and active connection scale for virtual firewalls, load balancers, and switches.

Sizing the feature per NIC with Auxiliary SKUs

You pick a performance tier per NIC using Auxiliary SKU values. Today the tiers are A1, A2, A4, and A8. These map to increasing capacity for total simultaneous connections and CPS, so you can right‑size cost and performance to the NIC’s role. As discussed in my chat with Yusef, the mnemonic is simple: A1 ≈ 1 million connections, A2 ≈ 2 million, A4 ≈ 4 million, A8 ≈ 8 million per NIC, along with increasing CPS ceilings. Choose the smallest tier that clears your peak, then monitor and adjust. Pricing is per hour for the auxiliary capability.

Tip: Start with A1 or A2 on ingress and egress NICs of your NVAs, observe CPS and active session counters during peak events, then scale up only if needed.

Where to enable it

You can enable Accelerated Connections through the Azure portal, CLI, PowerShell, Terraform, or templates. The setting is applied on the network interface. In the portal, export the NIC’s template and you will see two properties you care about: auxiliaryMode and auxiliarySku. Set auxiliaryMode to AcceleratedConnections and choose an auxiliarySku tier (A1, A2, A4, A8).

Note: Accelerated Connections is currently a limited GA capability. You may need to sign up before you can configure it in your subscription.

Enablement and change windows

- Standalone VMs. You can enable Accelerated Connections with a stop then start of the VM after updating the NIC properties. Plan a short outage.
- Virtual Machine Scale Sets. As of now, moving existing scale sets onto Accelerated Connections requires re‑deployment. Parity with the standalone flow is planned, but do not bank on it for current rollouts.
- Changing SKUs later. Moving from A1 to A2 or similar also implies a downtime window. Treat it as an in‑place maintenance event.

Operationally, approach this iteratively. Update a lower‑traffic region first, validate, then roll out broadly. Use active‑active NVAs behind a load balancer so one instance can drain while you update the other.

Operating guidance for IT Pros

- Pick the right NICs. Do not enable on the management NIC. Focus on the interfaces carrying high connection volume.
- Baseline and monitor. Before enabling, capture CPS and active session metrics from your NVAs. After enabling, verify reductions in connection drops at peak. The point is stability under pressure.
- Capacity planning. Start at A1 or A2. Move up only if you see sustained saturation at peak. The tiers are designed so you do not pay for headroom you do not need.
- Expect a tiny latency increase. There is another hop in the path. In real application flows the benefit in fewer drops and higher CPS outweighs the added microseconds. Validate with your own A/B tests.
- Plan change windows. Enabling on existing VMs and resizing the Auxiliary SKU both involve downtime. Use active‑active pairs behind a load balancer and drain one side while you flip the other.

Why this matters

Customers in regulated and high‑traffic industries like health care often found that connection scale forced them to horizontally expand NVAs, which inflated both cloud spend and licensing, and complicated operations. Offloading the SDN policy work to dedicated hardware allows you to process many more connections on fewer instances, and to do so more predictably.
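The sizing advice above ("choose the smallest tier that clears your peak") can be sketched as a small helper. This is a planning rule of thumb, not an Azure API: the per‑NIC ceilings follow the A1 ≈ 1M / A2 ≈ 2M / A4 ≈ 4M / A8 ≈ 8M mnemonic, and the 20% headroom factor is an assumption you should tune against your own telemetry.

```python
# Approximate total active connection ceilings per Auxiliary SKU (per NIC),
# from the mnemonic above. These are planning numbers, not published limits.
AUX_SKU_CEILING = {"A1": 1_000_000, "A2": 2_000_000, "A4": 4_000_000, "A8": 8_000_000}

def pick_auxiliary_sku(peak_active_connections, headroom=0.20):
    """Return the smallest tier covering peak plus headroom, or None if even
    A8 is insufficient (consider scaling out NVA instances instead)."""
    required = peak_active_connections * (1 + headroom)
    for sku in ("A1", "A2", "A4", "A8"):
        if AUX_SKU_CEILING[sku] >= required:
            return sku
    return None
```

For example, a NIC peaking at 1.8 million active sessions lands on A4 once headroom is applied, because 1.8M × 1.2 already exceeds the A2 ceiling.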
Resources

- Azure Accelerated Networking overview: https://learn.microsoft.com/azure/virtual-network/accelerated-networking-overview
- Accelerated connections on NVAs or other VMs (Limited GA): https://learn.microsoft.com/azure/networking/nva-accelerated-connections
- Manage accelerated networking for Azure Virtual Machines: https://learn.microsoft.com/azure/virtual-network/manage-accelerated-networking
- Network optimized virtual machine connection acceleration (Preview): https://learn.microsoft.com/azure/virtual-network/network-optimized-vm-network-connection-acceleration
- Create an Azure Virtual Machine with Accelerated Networking: https://docs.azure.cn/virtual-network/create-virtual-machine-accelerated-networking

Next steps

1. Validate eligibility. Confirm your subscription is enabled for Accelerated Connections and that your target regions and VM families are supported (see the Learn article above).
2. Select candidate workloads. Prioritize NVAs or VMs that hit CPS or flow‑table limits at peak. Use existing telemetry to pick the first region and appliance pair.
3. Pilot on one NIC per appliance. Enable on the data‑path NIC, start with A1 or A2, then stop/start the VM during a short maintenance window. Measure before and after.
4. Roll out iteratively. Expand to additional regions and appliances using active‑active patterns behind a load balancer to minimize downtime.
5. Right‑size the SKU. If you observe sustained headroom, stay put. If you approach limits, step up a tier during a planned window.

Azure File Sync: A Practical, Tested Deployment Playbook for ITPros
This post distills that 10‑minute drill into a step‑by‑step, battle‑tested playbook you can run in your own environment, complete with the “gotchas” that trip folks up, why they happen, and how to avoid them. But first...

Why Use Azure File Sync?

Hybrid File Services: Cloud Meets On-Prem

Azure File Sync lets you centralize your organization’s file shares in Azure Files while keeping the flexibility, performance, and compatibility of your existing Windows file servers. You can keep a full copy of your data locally or use your Windows Server as a fast cache for your Azure file share. This means you get cloud scalability and resilience, but users still enjoy local performance and familiar protocols (SMB, NFS, FTPS).

Cloud Tiering: Optimize Storage Costs

With cloud tiering, your most frequently accessed files are cached locally, while less-used files are tiered to the cloud. You control how much disk space is used for caching, and tiered files can be recalled on-demand. This enables you to reduce on-prem storage costs without sacrificing user experience.

Multi-Site Sync: Global Collaboration

Azure File Sync is ideal for distributed organizations. You can provision local Windows Servers in each office, and changes made in one location automatically sync to all others. This simplifies file management and enables faster access for cloud-based apps and services.

Business Continuity and Disaster Recovery

Azure Files provides resilient, redundant storage, so your local server becomes a disposable cache. If a server fails, you simply add a new server to your Azure File Sync deployment, install the agent, and sync. Your file namespace is downloaded first, so users can get back to work quickly. You can also use warm standby servers or Windows Clustering for even faster recovery.

Cloud-Side Backup

Note: Azure File Sync is NOT a backup solution... But you can reduce on-prem backup costs by taking centralized backups in the cloud using Azure Backup.
Azure file shares have native snapshot capabilities, and Azure Backup can automate scheduling and retention. Restores to the cloud are automatically downloaded to your Windows Servers.

Seamless Migration

Azure File Sync enables seamless migration of on-prem file data to Azure Files. You can sync existing file servers with Azure Files in the background, moving data without disrupting users or changing access patterns. File structure and permissions remain intact, and apps continue to work as expected.

Performance, Security, and Compatibility

Recent improvements have boosted Azure File Sync’s performance (up to 200 items/sec), and it now supports Windows Server 2025 and integrates with Windows Admin Center for unified management. Managed identities and Active Directory-based authentication are supported for secure, keyless access.

Real-World Use Cases

- Branch Office Consolidation: Multiple sites, each with its own file server, can be consolidated into a central Azure File Share while maintaining local performance.
- Business Continuity: Companies facing threats like natural disasters use Azure File Sync to improve server recovery times and ensure uninterrupted work.
- Collaboration: Organizations leverage Azure File Sync for fast, secure collaboration across locations, reducing latency and simplifying IT management.

The Quick Troubleshooting TL;DR

- Insufficient permissions during cloud endpoint creation → “Role assignment creation failed.” You need Owner or the Azure File Sync Administrator built‑in role; Contributor isn’t enough because the workflow must create role assignments.
- Region mismatches → Your file share and Storage Sync Service must live in the same region as the deployment target.
- Wrong identity/account → If you’re signed into the wrong tenant or account mid‑portal (easy to do), the wizard fails when it tries to create the cloud endpoint. Switch to the account that actually has the required role and retry.
- Agent/version issues → An old agent on your Windows Server will cause registration or enumeration problems. Use the latest agent and consider auto‑upgrade to stay current.
- Networking & access keys → Ensure access keys are enabled on the storage account and required outbound URLs/ports are allowed.
- Operational expectations → Azure File Sync runs on a roughly 24‑hour change detection cycle by default; for DR drills or immediate needs, trigger change detection via PowerShell. And remember: File Sync is not a backup. Back up the storage account.

End‑to‑End Deployment Playbook

1) Prerequisites (don’t skip these)

- Storage account supporting SMB 3.1.1 (and required authentication settings), with access keys enabled.
- Create your Azure file share in the same region as your File Sync deployment.
- Establish a clear naming convention.
- Windows Server for the File Sync agent (example: Windows Server 2019).
- Identity & Access: Assign either Owner or Azure File Sync Administrator (a least‑privilege built‑in role designed specifically for this scenario). Contributor will let you get partway (storage account, Storage Sync Service) but will fail when creating the cloud endpoint because it can’t create role assignments.

2) Lay down the cloud side

- In the Azure portal, create the file share in your chosen storage account/region.
- Create a Storage Sync Service (ideally in a dedicated resource group), again ensuring the region is correct and supported for your needs.

3) Prep the server

- On your Windows Server, install the Azure File Sync agent (latest version). During setup, consider enabling auto‑upgrade; if the server is down during a scheduled upgrade, it catches up on the next boot, keeping you current with security and bug fixes.
- Register the server to your Storage Sync Service (select subscription, resource group, and service). If you have multiple subscriptions, the portal can occasionally hide one; PowerShell is an alternative path if needed.
4) Create the sync topology

- In the Storage Sync Service, create a Sync Group. This is the container for both cloud and server endpoints.
- Under normal conditions, the cloud endpoint is created automatically when you select the storage account + file share. If you hit “role assignment creation failed” here, verify your signed‑in account and role. Switching back to the account with the proper role resolves it; you can then recreate the cloud endpoint inside the existing Sync Group.
- Add a server endpoint: pick the registered server (it must show up in the drop‑down; if it doesn’t, registration isn’t complete) and the local path to sync.

5) Cloud tiering & initial sync behavior

- Cloud tiering keeps hot data locally and stubs colder data to conserve space. If you disable cloud tiering, you’ll maintain a full local copy of all files.
- If enabled, set the Volume Free Space Policy (how much free space to preserve on the volume) and review recall policy implications.
- Choose the initial sync mode: merge existing content or overwrite.

6) Ops, monitoring, and DR notes

- Change detection cadence is approximately 24 hours. For DR tests or urgent cutovers, run the change detection PowerShell command to accelerate discovery of changes.
- Backups: Azure File Sync is not a backup. Protect your storage account using your standard backup strategy.
- Networking: Allow required outbound ports/URLs; validate corporate proxies/firewalls.
- Monitoring: Turn on the logging and monitoring you need for telemetry and auditing.

7) Performance & cost planning

Evaluate Provisioned v2 storage accounts to dial in IOPS/throughput to your business needs and gain better pricing predictability. It’s a smart time to decide this up front during a new deployment.

8) Identity options & least privilege

You can also set up managed identities for File Sync to reduce reliance on user principals. If you do use user accounts, ensure they carry the Azure File Sync Administrator role or Owner.
Keep the agent updated; it’s basic hygiene that prevents a surprising number of issues.

9) Quotas & capacity troubleshooting

Hitting quota problems? Revisit your Volume Free Space Policy (cloud tiering) and recall policy. Sometimes the answer is simply adding a disk or increasing its size as data patterns evolve.

Key Benefits for Infra Teams

- Hybrid file services without forklift: Keep your existing Windows file servers while centralizing data in Azure Files, adding elasticity and resiliency with minimal disruption.
- Right‑sized capacity on‑prem: Cloud tiering preserves local performance for hot data and trims cold data footprint to stretch on‑prem storage further.
- Operational predictability: Built‑in auto‑upgrade for the agent and a known change detection cycle, with the ability to force change detection for DR/failover testing.
- Least‑privilege by design: The Azure File Sync Administrator role gives just the rights needed to deploy/manage sync without over‑provisioning.
- Performance on your terms: Option to choose Provisioned v2 to meet IOPS/throughput targets and bring cost clarity.

Available Resources

- What is Azure File Sync?: https://learn.microsoft.com/azure/storage/file-sync/file-sync-introduction
- Azure Files: More performance, more control, more value for your file data: https://azure.microsoft.com/blog/azure-files-more-performance-more-control-more-value-for-your-file-data/
- Azure File Sync Deployment Guide: https://learn.microsoft.com/azure/storage/file-sync/file-sync-deployment-guide
- Troubleshooting documentation: https://learn.microsoft.com/troubleshoot/azure/azure-storage/files/file-sync/file-sync-troubleshoot
- Azure File Sync “copilot” troubleshooting experience: https://learn.microsoft.com/azure/copilot/improve-storage-accounts

Next Steps (Run This in Your Lab)

- Verify roles: On the target subscription/resource group, grant Azure File Sync Administrator (or Owner) to your deployment identity. Confirm in Access control (IAM).
- Create the file share in the same region as your Storage Sync Service. Enable access keys on the storage account.
- Install the latest agent on your Windows Server; enable auto‑upgrade. Register the server to your Storage Sync Service.
- Create a Sync Group, then the cloud endpoint. If you see a role assignment error, re‑check your signed‑in account/role and retry.
- Add the server endpoint with the right path, decide on cloud tiering, set Volume Free Space Policy, and choose initial sync behavior (merge vs overwrite).
- Open required egress on your network devices, enable monitoring/logging, and plan backup for the storage account.
- Optionally evaluate Provisioned v2 for throughput/IOPS and predictable pricing before moving to production.

If you’ve got a scenario that behaves differently in the field, I want to hear about it. Drop me a note with what you tried, what failed, and where in the flow it happened.

Cheers!
Pierre

Unlocking Private IP for Azure Application Gateway: Security, Compliance, and Practical Deployment
If you’re responsible for securing, scaling, and optimizing cloud infrastructure, this update is for you. Based on my recent conversation with Vyshnavi Namani, Product Manager on the Azure Networking team, I’ll break down what private IP means for your environment, why it matters, and how to get started.

Why Private IP for Application Gateway?

Application Gateway has long been the go-to Layer 7 load balancer for web traffic in Azure. It manages, routes, and secures requests to your backend resources, offering SSL offloading and integrated Web Application Firewall (WAF) capabilities. But until now, public IPs were the norm, meaning exposure to the internet and the need for extra security layers.

With Private IP, your Application Gateway can be deployed entirely within your virtual network (VNet), isolated from public internet access. This is a huge win for organizations with strict security, compliance, or policy requirements. Now, your traffic stays internal, protected by Azure’s security layers, and only accessible to authorized entities within your ecosystem.

Key Benefits for ITPros

🔒 No Public Exposure

With a private-only Application Gateway, no public IP is assigned. The gateway is accessible only via internal networks, eliminating any direct exposure to the public internet. This removes a major attack vector by keeping traffic entirely within your trusted network boundaries.

📌 Granular Network Control

Private IP mode grants full control over network policies. Strict NSG rules can be applied (no special exceptions needed for Azure management traffic) and custom route tables can be used (including a 0.0.0.0/0 route to force outbound traffic through on-premises or appliance-based security checkpoints).

☑️ Compliance Alignment

Internal-only gateways help meet enterprise compliance and data governance requirements. Sensitive applications remain isolated within private networks, aiding data residency and preventing unintended data exfiltration.
Organizations with “no internet exposure” policies can now include Application Gateway without exception.

Architectural Considerations and Deployment Prerequisites

To deploy Azure Application Gateway with Private IP, you should plan for the following:

- SKU & Feature Enablement: Use the v2 SKU (Standard_v2 or WAF_v2). The Private IP feature is GA but may require opt-in via the EnableApplicationGatewayNetworkIsolation flag in Azure Portal, CLI, or PowerShell.
- Dedicated Subnet: Deploy the gateway in a dedicated subnet (no other resources allowed). Recommended size: /24 for v2. This enables clean NSG and route table configurations.
- NSG Configuration: Inbound, allow AzureLoadBalancer for health probes and internal client IPs on required ports. Outbound, allow only necessary internal destinations and apply a DenyAll rule to block internet egress.
- User-Defined Routes (UDRs): Optional but recommended for forced tunneling. Set 0.0.0.0/0 to route traffic through an NVA, Azure Firewall, or ExpressRoute gateway.
- Client Connectivity: Ensure internal clients (VMs, App Services, on-prem users via VPN/ExpressRoute) can reach the gateway’s private IP. Use Private DNS or custom DNS zones for name resolution.
- Outbound Dependencies: For services like Key Vault or telemetry, use Private Link or NAT Gateway if internet access is required. Plan NSG and UDRs accordingly.
- Management Access: Admins must be on the VNet or connected network to test or manage the gateway. Azure handles control-plane traffic internally via a management NIC.
- Migration Notes: Existing gateways may require redeployment to switch to private-only mode. Feature registration must be active before provisioning.
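Before deploying, it can help to sanity-check your intended NSG rule set with a simplified first-match evaluator. The sketch below is a model, not Azure's actual rule engine: real NSGs also match protocol, direction, and CIDR ranges, while this toy compares the source as a literal label.

```python
# Simplified inbound NSG evaluation: rules are checked in priority order and
# the first match wins; an implicit DenyAll backstops the list.
def evaluate_inbound(rules, source, port):
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["source"] in (source, "*") and rule["port"] in (port, "*"):
            return rule["action"]
    return "Deny"

# Intent from the prerequisites above: load-balancer health probes plus
# internal clients only. "10.0.0.0/16" stands in for "any internal client"
# and is matched literally here, not as a CIDR range.
app_gw_rules = [
    {"priority": 100, "source": "AzureLoadBalancer", "port": "*", "action": "Allow"},
    {"priority": 200, "source": "10.0.0.0/16", "port": 443, "action": "Allow"},
    {"priority": 4096, "source": "*", "port": "*", "action": "Deny"},
]
```

Running traffic scenarios through a model like this is a cheap way to catch an ordering mistake (for example, a broad Deny at a lower priority number than your Allow rules) before it blocks the gateway's health probes.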
Practical Scenarios

Here are several practical scenarios where deploying Azure Application Gateway with Private IP is especially beneficial:

🔐 Internal-Only Web Applications

Organizations hosting intranet portals, HR systems, or internal dashboards can use Private IP to ensure these apps are only accessible from within the corporate network, via VPN, ExpressRoute, or peered VNets.

🏥 Regulated Industries (Healthcare, Finance, Government)

Workloads that handle sensitive data (e.g., patient records, financial transactions) often require strict network isolation. Private IP ensures traffic never touches the public internet, supporting compliance with HIPAA, PCI-DSS, or government data residency mandates.

🧪 Dev/Test Environments

Development teams can deploy isolated environments for testing without exposing them externally. This reduces risk and avoids accidental data leaks during early-stage development.

🌐 Hybrid Network Architectures

In hybrid setups where on-prem systems interact with Azure-hosted services, Private IP gateways can route traffic securely through ExpressRoute or VPN, maintaining internal-only access and enabling centralized inspection via NVAs.

🛡️ Zero Trust Architectures

Private IP supports zero trust principles by enforcing least-privilege access, denying internet egress, and requiring explicit NSG rules for all traffic. This is ideal for organizations implementing segmented, policy-driven networks.

Resources

- https://docs.microsoft.com/azure/application-gateway/
- https://learn.microsoft.com/azure/application-gateway/configuration-overview
- https://learn.microsoft.com/azure/virtual-network/network-security-groups-overview
- https://learn.microsoft.com/azure/virtual-network/virtual-network-peering-overview

Next Steps

- Evaluate Your Workloads: Identify apps and services that require internal-only access.
- Plan Migration: Map out your VNets, subnets, and NSGs for a smooth transition.
- Enable Private IP Feature: Register and deploy in your Azure subscription.
- Test Security: Validate that only intended traffic flows through your gateway.

Final Thoughts

Private IP for Azure Application Gateway is a real improvement for secure, compliant, and efficient cloud networking. If you're an IT pro managing infrastructure, now's the time to check out this feature and level up your Azure architecture.

Have questions or want to share your experience? Drop a comment below.

Cheers!
Pierre

Cloud Shell Quick Tip: Service Tag Network Security Group Rule
This video shows you how to configure an NSG rule to allow SSH from Azure Cloud Shell within the portal. This is great if you need to access resources and don't have a VPN or Azure Bastion set up within the virtual network (VNet). In just three minutes, I show you how to modify your NSG to permit the AzureCloud service tag and SSH into my VM.
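The same rule shown in the portal can be created from the command line. The resource group and NSG names below are placeholders; note that the AzureCloud service tag covers all Azure public IP ranges (not just Cloud Shell), so keep the rule scoped to port 22 and remove it when you're done.

```shell
# Allow inbound SSH (TCP/22) from the AzureCloud service tag, which includes
# Cloud Shell's outbound addresses. Placeholder names: rg-demo, nsg-vm.
az network nsg rule create \
  --resource-group rg-demo \
  --nsg-name nsg-vm \
  --name AllowCloudShellSSH \
  --direction Inbound \
  --priority 300 \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes AzureCloud \
  --source-port-ranges '*' \
  --destination-port-ranges 22

# Clean up afterwards so the VM isn't reachable from all of Azure:
# az network nsg rule delete -g rg-demo --nsg-name nsg-vm -n AllowCloudShellSSH
```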
In this episode of E2E: 10-Minute Drill, host Pierre Roman sits down with Will Gries, Principal PM in Azure Storage, to explore the newly released Azure Files Provisioned V2 billing model. This model introduces a game-changing approach to cloud file storage by allowing users to provision storage, IOPS, and throughput independently, a major leap forward in flexibility and cost optimization.

📺 Watch the full episode: https://youtu.be/Tb6y0fvJBMs

Previously, Standard Azure Files used a pay-as-you-go model where you pay per GB of storage plus transaction fees for every file operation (reads, writes, lists, etc.). That often made bills hard to predict. There was also a Premium tier (Provisioned V1 on SSDs), where you pre-allocated capacity; that gave you fixed performance and no transaction charges, but you might have to over-provision storage to get more IOPS, whether you needed that extra space or not.

Provisioned V2 changes the game. You can now pre-provision the storage, IOPS, and throughput you need for a file share. That's what you pay for, and nothing more. There are no per-operation fees at all in V2. It's like moving from a metered phone plan to an unlimited plan: a stable bill each month, and you can adjust your "plan" up or down as needed.

Key Benefits of Provisioned V2

- Predictable (and Lower) Costs: No more paying for every single read/write. You pay a known monthly rate based on the resources you reserve, which means no surprise cost spikes when your usage increases. In many cases, Provisioned V2 actually lowers the total cost for active workloads. Microsoft has noted that common workloads might save on the order of 30–50% compared to the old pay-as-you-go model, thanks to lower storage prices and zero transaction fees.
- High Performance on Demand: Each file share can now scale up to 50,000 IOPS and 5 GiB/sec throughput, and support up to 256 TiB of data in a single share. That's a big jump from the old limits. More importantly, you're in control of the performance: if you need more IOPS or bandwidth, you can dial it up anytime (and dial it down later if you overshot). Provisioned V2 also includes burst capacity for short spikes, so your share can automatically handle occasional surges above your baseline IOPS. Bottom line: your Azure Files shares can now handle much larger and more IO-intensive workloads without breaking a sweat.
- Simpler Management & Planning: Forget about juggling Hot vs. Cool vs. Transaction Optimized tiers or guessing how many transactions you'll run. With V2, every Standard file share works the same way; you just decide how much capacity and performance to provision. This makes it much easier to plan and budget. You can monitor each share's usage with new per-share metrics (Azure shows you how much of your provisioned IOPS/throughput you're using), which helps right-size your settings. If you're syncing on-prem file servers to Azure with Azure File Sync, the predictable costs and higher limits of V2 make your hybrid setup easier to manage and potentially cheaper, too.

Azure Files provisioned v2 billing model for flexibility, cost savings, and predictability | Microsoft Community Hub

Provisioned V2 makes Azure Files more cloud-friendly and enterprise-ready. Whether you're a new user or have been using Azure Files for years, this model offers a win-win: you get more control and performance, and you eliminate unpredictable bills. If you have heavy usage, you'll appreciate the cost savings and headroom. If you have lighter usage, you'll enjoy the simplicity and peace of mind. Overall, if you use Azure Files (or are planning to), Provisioned V2 is likely to make your life easier and your storage costs lower. It's a welcome upgrade that addresses a lot of customer pain points in cloud file storage.

If you're looking to optimize your Azure storage strategy, this episode is a must-watch.
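To see why a flat provisioned bill is easier to reason about than per-operation metering, here is a tiny sketch of provisioned-style cost math: capacity, IOPS, and throughput each priced independently, summed once per month. All unit prices are hypothetical placeholders, not Azure's actual rates; substitute real numbers from the Azure Files pricing page.

```shell
#!/usr/bin/env bash
# Sketch of provisioned-v2-style billing: three independent dials, one flat bill.
# Prices below are HYPOTHETICAL placeholders -- not Azure pricing.

storage_gib=1024          # provisioned capacity
iops=4000                 # provisioned IOPS
mibps=130                 # provisioned throughput (MiB/s)

price_per_gib=0.02        # $/GiB-month   (placeholder)
price_per_iops=0.005      # $/IOPS-month  (placeholder)
price_per_mibps=0.04      # $/MiBps-month (placeholder)

# No transaction fees in this model: the bill is just the sum of the three dials.
awk -v g="$storage_gib" -v i="$iops" -v t="$mibps" \
    -v pg="$price_per_gib" -v pi="$price_per_iops" -v pt="$price_per_mibps" \
    'BEGIN { printf "Estimated monthly cost: $%.2f\n", g*pg + i*pi + t*pt }'
# prints: Estimated monthly cost: $45.68
```

Raising IOPS for a busy share changes exactly one term of the sum, which is the "dial it up anytime" behavior described above; under pay-as-you-go the same change would show up as an unpredictable pile of per-operation charges instead.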
🔗 Explore all episodes: https://aka.ms/E2E-10min-Drill

Resources:

- Azure Storage Blog (Jan 2025): "Azure Files provisioned v2 billing model for flexibility, cost savings, and predictability" — official blog post introducing Provisioned V2, with details on the new limits and pricing model. (Microsoft Tech Community)
- Microsoft Azure Blog (Apr 2025): "Azure Files: More performance, more control, more value for your file data" — Azure blog highlighting the increased performance and value offered by Azure Files, including the new billing model.
- Microsoft Learn: "Understand Azure Files billing models" — documentation explaining Azure Files billing, with sections on the Provisioned V2 model and how it differs from previous models.

Cheers!
Pierre

Announcing the new Microsoft Learn Plan - Preparing for your organization's AI workloads
We're pleased to announce the new "Preparing for your organization's AI workloads" Microsoft Learn Plan, focused on the IT/Ops audience and now available on Microsoft Learn! This set of content was curated by our team and is targeted at helping IT professionals who want to learn how to support their organization's AI applications and infrastructure.

The Learn Plan is composed of 4 milestones, which in turn are composed of a total of 22 modules:

- Milestone 1: Getting Started with AI on Microsoft Azure
  - Learning Path: Introduction to AI in Azure (12 modules)
- Milestone 2: Introduction to AI Services Infrastructure on Azure
  - Learning Path: Manage Authentication, Authorization, and RBAC for AI Workloads on Azure (3 modules)
  - Learning Path: Manage Network Access for AI Workloads (2 modules)
- Milestone 3: Monitoring AI Services on Azure
  - Learning Path: Monitor AI Workloads on Azure (3 modules)
- Milestone 4: Advanced Management of AI Workloads on Azure
  - Learning Path: AI Workload Governance and DLP (2 modules)

This comprehensive plan introduces foundational AI concepts, then guides you through advanced topics. Whether you're an IT administrator, security specialist, or AI practitioner, this plan equips you with the skills to build trusted, secure, and compliant AI solutions at scale.

We hope you enjoy learning! Let us know what you think about this content in the comment section below! If you'd like to see more of this type of content, or have any suggestions, let us know as well!

How Azure Storage Powers AI Workloads: Behind the Scenes with OpenAI, Blobfuse & More
In the latest episode of E2E: 10-Minute Drill, I sat down with Vamshi from the Azure Storage team to explore how Azure Blob Storage is fueling the AI revolution, from training massive foundation models like ChatGPT to enabling enterprise-grade AI solutions. Whether you're building your own LLM, fine-tuning models with proprietary data, or just curious about how Microsoft supports OpenAI's infrastructure, this episode is packed with insights.

🎥 Watch the Full Episode 👉 Watch on YouTube

🔍 Key Highlights

- Azure Blob Storage is the backbone of AI workloads, storing everything from training data to user-generated content in apps like ChatGPT and DALL·E.
- Microsoft's collaboration with OpenAI has led to innovations like Azure Scaled Accounts and Blobfuse2, now available to all Azure customers.
- Enterprises can now securely bring their own data to Azure AI services, with enhanced access control and performance at exabyte scale.

📂 Documentation & Resources

- 🚀 Azure Blob Storage Overview: https://learn.microsoft.com/azure/storage/blobs/
- 📝 Blobfuse2 (Linux FUSE Adapter for Azure Blob Storage): https://learn.microsoft.com/azure/storage/blobs/blobfuse2-introduction
- 🧠 Azure OpenAI Service: https://learn.microsoft.com/azure/cognitive-services/openai/overview
- 🔐 Azure Role-Based Access Control (RBAC): https://learn.microsoft.com/azure/role-based-access-control/overview

💬 Why It Matters

As AI becomes a core workload for infrastructure teams, understanding how to scale, secure, and optimize your data pipelines is critical. This episode offers a behind-the-scenes look at how Microsoft is enabling developers and enterprises to build the next generation of intelligent applications, using the same tools that power OpenAI.

📣 Stay Connected

Subscribe to the ITOpsTalk YouTube channel and follow the E2E: 10-Minute Drill series for more conversations on cloud, AI, and innovation. And as always, if you have any questions or comments, please leave them below. I'll make sure we get back to you.
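Since Blobfuse2 comes up in the episode, here is a minimal mount sketch: it exposes a blob container as a local filesystem so data pipelines can read training data as ordinary files. The account and container names are placeholders, and the config keys shown are a simplified subset of what the Blobfuse2 documentation describes; check the blobfuse2-introduction page linked above for the full schema.

```shell
# Minimal Blobfuse2 config (config.yaml) -- placeholder account/container names.
cat > config.yaml <<'EOF'
components:
  - libfuse
  - file_cache
  - attr_cache
  - azstorage

azstorage:
  type: block
  account-name: mytrainingdata     # placeholder storage account
  container: datasets              # placeholder container
  mode: msi                        # authenticate with the VM's managed identity
EOF

# Mount the container so workloads can read blobs as local files.
sudo mkdir -p /mnt/datasets
blobfuse2 mount /mnt/datasets --config-file=config.yaml

# Unmount when finished.
# blobfuse2 unmount /mnt/datasets
```

Using `mode: msi` keeps the mount keyless, which lines up with the identity-based access (RBAC over storage keys) discussed earlier in this post.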
Cheers!!
Pierre