Join us at Microsoft Azure Infra Summit 2026 for deep technical Azure infrastructure content
Microsoft Azure Infra Summit 2026 is a free, engineering-led virtual event created for IT professionals, platform engineers, SREs, and infrastructure teams who want to go deeper on how Azure really works in production. It will take place May 19-21, 2026. This event is built for the people responsible for keeping systems running, making sound architecture decisions, and dealing with the operational realities that show up long after deployment day.

Over the past year, one message has come through clearly from the community: infrastructure and operations audiences want more in-depth technical content. They want fewer surface-level overviews and more practical guidance from the engineers and experts who build, run, and support these systems every day. That is exactly what Azure Infra Summit aims to deliver. All content is created AND delivered by engineering, targeting folks working with Azure infrastructure and operating production environments.

- Who is this for: IT professionals, platform engineers, SREs, and infrastructure teams
- When: May 19-21, 2026, 8:00 AM-1:00 PM Pacific Time, all 3 days
- Where: Online (virtual)
- Cost: Free
- Level: Most sessions are advanced (L300-400)
- Register here: https://aka.ms/MAIS-Reg

Built for the people who run workloads on Azure

Azure Infra Summit is for the people who do more than deploy to Azure. It is for the people who run it. If your day involves uptime, patching, governance, monitoring, reliability, networking, identity, storage, or hybrid infrastructure, this event is for you. Whether you are an IT professional managing enterprise environments, a platform engineer designing landing zones, an Azure administrator, an architect, or an SRE responsible for resilience and operational excellence, you will find content built with your needs in mind. We are intentionally shaping this event around peer-to-peer technical learning.
That means engineering-led sessions, practical examples, and candid discussion about architecture, failure modes, operational tradeoffs, and what breaks in production. The promise here is straightforward: less fluff, more infrastructure.

What to expect

Azure Infra Summit will feature deep technical content in the 300 to 400 level range, with sessions designed by engineering to help you build, operate, and optimize Azure infrastructure more effectively. The event will include a mix of live and pre-recorded sessions and live Q&A. Throughout the three days, we will dig into topics such as:

- Hybrid operations and management
- Networking at scale
- Storage, backup, and disaster recovery
- Observability, SLOs, and day-2 operations
- Confidential compute
- Architecture, automation, governance, and optimization in Azure Core environments
- And more…

The goal is simple: to give you practical guidance you can take back to your environment and apply right away. We want attendees to leave with stronger mental models, a better understanding of how Azure behaves in the real world, and clearer patterns for designing and operating infrastructure with confidence.

Why this event matters

Infrastructure decisions have a long tail. The choices we make around architecture, operations, governance, and resilience show up later in the form of performance issues, outages, cost, complexity, and recovery challenges. That is why deep technical learning matters, and why events like this matter.

Join us

I hope you will join us for Microsoft Azure Infra Summit 2026, happening May 19-21, 2026. If you care about how Azure infrastructure behaves in the real world, and you want practical, engineering-led guidance on how to build, operate, and optimize it, this event was built for you.

Register here: https://aka.ms/MAIS-Reg

Cheers!
Pierre Roman

Overview of Azure Workload Modernization
Azure workload modernization generally means shifting from traditional deployment options, such as running a workload within a VM, to more cloud-native components, such as functions, PaaS services, and other cloud architecture components.

Shift from VMs to PaaS and Cloud-Native Services: By replatforming to services like Azure App Service for web apps, managed databases (e.g. Azure SQL Database), or container platforms (e.g. Azure Kubernetes Service (AKS)), you offload infrastructure management to Azure. Azure handles patches, scaling, and high availability, so your team can focus on code and features. (Learn more: https://learn.microsoft.com/azure/app-modernization-guidance/plan/plan-an-application-modernization-strategy#iaas-vs-paas)

Immediately Leverage Azure's Built-in Capabilities: You can light up Azure's ecosystem features for security, compliance, monitoring, and more. For example, without changing any code you can enable Azure Monitor for telemetry and alerting, use Azure's compliance certifications to meet regulatory needs, and turn on governance controls. Modernizing a workload is about unlocking things like auto-scaling, backup/DR, and patch management that will be handled for you as platform features. (See: https://learn.microsoft.com/azure/well-architected/framework/platform-automation)

Treat Modernization as a Continuous Journey: Modernizing isn't a single "big bang" rewrite; it's an ongoing process. Once on Azure, plan to iteratively improve your applications as new services and best practices emerge. Implement DevOps pipelines (CI/CD) to regularly deliver updates and refactor parts of the system over time. This allows you to adopt new Azure capabilities (such as improved instance types, updated frameworks, or new managed services) with minimal disruption. By continually integrating improvements, from code enhancements to architecture changes, you ensure your workloads keep getting more efficient, secure, and scalable.
(See: https://learn.microsoft.com/azure/app-modernization-guidance/get-started/application-modernization-life-cycle - continuous improvement approach)

Use Containers and Event-Driven Architectures to Evolve Legacy Apps: Breaking apart large, tightly coupled applications into smaller components can drastically improve agility and resilience. Containerize parts of your app and deploy them to a managed orchestrator like Azure Kubernetes Service (AKS) for better scalability and fault isolation. In an AKS cluster, each microservice or module runs independently, so you can update or scale one component without impacting the whole system. In addition, consider introducing serverless functions (via Azure Functions) or event-driven services for specific tasks and background jobs. These approaches enable on-demand scaling and cost efficiency: Azure only runs your code when triggered by events or requests. Adopting microservices and serverless architectures helps your application become more modular, easier to maintain, and automatically scalable to meet demand. (Learn more: https://learn.microsoft.com/azure/architecture/guide/architecture-styles/microservices and https://learn.microsoft.com/azure/azure-functions/functions-overview)

Modernize Security and Identity: Update your application's security posture to align with cloud best practices. Integrate your apps with Microsoft Entra ID for modern authentication and single sign-on, rather than custom or legacy auth methods. This provides immediate enhancements like multi-factor authentication, token-based access, and easier user management across cloud services. Additionally, take advantage of Azure's global networking and security services; for example, use Azure Front Door to improve performance for users worldwide and add a built-in Web Application Firewall to protect against DDoS and web attacks.
By using cloud-native security services (such as Azure Key Vault to manage app secrets and certificates, or Microsoft Defender for Cloud for threat protection), you can significantly strengthen your workload's security while reducing the operational burden on your team. (See: https://learn.microsoft.com/entra/identity/intro and https://learn.microsoft.com/azure/frontdoor/front-door-overview)

Managed Instance on Azure App Service: What IT/Ops Teams Need to Know
Azure App Service has long been one of the most reliable ways to run web apps on Azure, giving teams a fully managed platform with built-in scaling, deployment integration, and enterprise-grade security. But for organizations that need more control, expanded flexibility, or the ability to run apps that have additional dependencies, the new Managed Instance on Azure App Service (preview) brings a powerful new option. Vinicius Apolinario recently sat down with Andrew Westgarth, Product Manager for Azure App Service, to talk through what Managed Instances are, why they matter, and how IT/Ops teams can take advantage of the new capabilities.

What Managed Instances Bring to the Table

Managed Instances (MI) deliver the App Service experience you know with added flexibility for additional scenarios. You get the same PaaS benefits (patching, scaling, deployment workflows) but with the control typically associated with IaaS. Some of the highlights we discussed:

- App Service vs. Managed Instance on Azure App Service: the main differences, and the scenarios MI focuses on.
- Consistent App Service experience: same deployment model, same runtime options, same operational model.
- App Service experience for different audiences: how IT/Ops teams can leverage MI, and what it means for development teams.

Features IT/Ops Teams Will Appreciate

Beyond the core architecture, MI introduces capabilities that make day-to-day operations easier:

- Configuration (Install) Script: A new way to customize the underlying environment with scripts that run during provisioning. This is especially useful for installing dependencies, configuring app and OS settings, installing fonts, or preparing the environment for the workload.
- RDP Access for Troubleshooting: A long-requested feature that gives operators a secure way to RDP into the instance for deep troubleshooting. Perfect for diagnosing issues that require OS-level visibility.
Learn more about Managed Instance on Azure App Service (preview):

- Documentation: https://aka.ms/AppService/ManagedInstance
- Hands-on lab: https://aka.ms/managedinstanceonappservicelab
- Blog: https://aka.ms/managedinstanceonappservice
- Ignite session: https://ignite.microsoft.com/en-US/sessions/BRK102

Azure Migration Challenges (and how to resolve them)
Moving workloads to Azure is rarely plug-and-play. Here are some workarounds for challenges organizations encounter when planning and executing migrations.

Server Migration

Legacy OS & Software Compatibility: Old, out-of-support operating systems may not run in Azure or may perform poorly. Tightly coupled apps tied to specific hardware or OS versions are hard to replicate. Fix: Run compatibility assessments early. Upgrade or patch the OS before migrating, or refactor the workload to run on a supported OS.

Performance Sizing: On-prem VMs may rely on fast local SSDs or low-latency network links you won't get by default in Azure. Undersizing means poor performance; oversizing means wasted spend. Fix: Use Azure Migrate's performance-based recommendations to right-size your VMs.

Network & Identity Integration: Migrated servers still need to communicate with on-prem resources and authenticate users. Splitting app servers and auth servers across environments breaks things fast. Fix: Design network topology and identity infrastructure before you move anything. Move workloads that have interdependencies together.

Governance & Cloud Sprawl: On-prem controls (naming conventions, equipment tags) don't automatically follow you to the cloud. Spinning up resources with a click leads to sprawl. Fix: Set up Azure Policy from day one. Enforce tagging, naming, and compliance rules as part of the migration project, not after.

Skills Gaps: On-prem server experts aren't automatically fluent in Azure operations. Fix: Invest in cloud operations training before and during the migration.

Database Migration

Compatibility: Not every database engine or version maps cleanly to an Azure equivalent. Fix: Run the Azure Data Migration Assistant early to verify feature and functionality support.

Post-Migration Performance: Performance depends on the hosting ecosystem; what worked on-prem may not translate directly. Fix: Revisit indexing and configuration after migration.
Use SQL Intelligent Insights and Performance Recommendations for tuning guidance.

Choosing the Right Service Tier: Azure offers elastic pools, managed instances, Hyperscale, and sharding; picking wrong may be costly. Fix: Profile your workload with your DBA and use Azure Migrate's Database Assessment for sizing suggestions.

Security Configuration: User logins, roles, and encryption settings must migrate with the data. Fix: Map every layer of your on-prem security configuration and implement corresponding controls post-migration.

Data Integrity: Data types, constraints, and triggers must come over intact with zero loss or corruption. Fix: Use reliable migration tools, test multiple times, and validate row counts and key constraints. Plan cutover during low-usage windows and always have a rollback plan.

Application Migration

Legacy App Complexity: Custom and legacy apps carry years of accumulated config files, hard-coded paths, IP addresses, and environment-specific logging. Each app can feel like its own mini migration project. Fix: Use Azure Migrate's app dependency analysis to map what each app needs before you touch it.

Dependency Conflicts: Apps may depend on specific framework versions, libraries, or OS features that aren't available or supported in Azure. Fix: Identify and resolve dependency gaps early. Consider containerizing or refactoring apps to isolate them from environment differences.

Scale of Effort: Dozens or hundreds of apps, each with unique characteristics, create a massive manual workload. Fix: Automate everything you can. Use porting assistants and batch migration tooling to reduce repetitive tasks.

Key Takeaway

Start assessments early, automate aggressively, set up governance from day one, and train your team before the move, not after. Migration failures most often come from skipping the prep work.

Migration, Modernization & Agentic Tools
This video covers Migration, Modernization, and Agentic tools. Agentic tools introduce autonomy, continuous optimization, and context-aware decision-making into the migration lifecycle. Instead of treating migration as a one-time lift-and-shift, they operate as ongoing systems that:

- Discover and map environments dynamically
- Recommend modernization paths based on real telemetry
- Automate execution steps end-to-end
- Continuously validate, optimize, and remediate after landing in Azure

This shifts migration from a project to a self-improving system. The video provides an overview of new tools in Azure Copilot and GitHub Copilot that you can use when migrating and modernizing. These tools provide the following benefits:

- Agents can classify workloads into migrate/modernize/rebuild patterns based on performance, code structure, and operational signals.
- Agents can execute migration waves automatically: copying data, validating cutovers, sequencing dependencies, and rolling back if needed.
- Agentic tools can continuously tune cost, performance, resiliency, and security posture using telemetry and policy-driven actions.
- Agentic tools embed governance into the migration engine, ensuring workloads land compliant, secure, and aligned with enterprise standards.
- Autonomous discovery and automated execution remove weeks of manual effort.
- Parallelized migration waves become safe because the system understands dependencies.
- Automated validation reduces human error during cutover.
- Refactoring recommendations are grounded in code and performance analysis.
- Agentic tools keep optimizing cost, security, and resilience, closing the loop between migration and operations.
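To make the first capability above concrete, the classification idea can be illustrated with a toy rule set. Everything in this sketch is invented for illustration: the signal names, thresholds, and rules are placeholders, not what the Copilot agents actually use.

```python
# Toy illustration of sorting workloads into migrate / modernize / rebuild
# buckets from simple operational signals. All field names and thresholds
# are invented for illustration; real agentic tooling uses far richer
# telemetry and code analysis.

def classify_workload(signals: dict) -> str:
    """Return a coarse disposition for a workload based on simple signals."""
    if signals.get("os_supported") is False:
        # An unsupported OS with tight hardware/OS coupling usually means
        # a rebuild; otherwise modernization to a supported platform.
        return "rebuild" if signals.get("tightly_coupled") else "modernize"
    if signals.get("uses_paas_compatible_stack") and signals.get("cpu_avg_pct", 0) < 40:
        # Low utilization on a portable stack is a modernization candidate.
        return "modernize"
    # Default: lift-and-shift as-is.
    return "migrate"

fleet = [
    {"name": "web01", "os_supported": True, "uses_paas_compatible_stack": True, "cpu_avg_pct": 15},
    {"name": "erp01", "os_supported": False, "tightly_coupled": True},
    {"name": "db01", "os_supported": True, "uses_paas_compatible_stack": False, "cpu_avg_pct": 80},
]
for w in fleet:
    print(w["name"], "->", classify_workload(w))
```

The point is not the specific rules but the pattern: agents apply this kind of decision logic continuously, against live telemetry, rather than once at assessment time.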
Automating Large-Scale Data Management with Azure Storage Actions
Azure Storage customers increasingly operate at massive scale, with millions or even billions of items distributed across multiple storage accounts. As the scale of the data increases, managing it introduces a different set of challenges. In a recent episode of Azure Storage Talk, I sat down with Shashank, a Product Manager on the Azure Storage Actions team, to discuss how Azure Storage Actions helps customers automate common data management tasks without writing custom code or managing infrastructure. This post summarizes the key concepts, scenarios, and learnings from that conversation. Listen to the full conversation below.

The Problem: Data Management at Scale Is Hard

As storage estates grow, customers often need to:

- Apply retention or immutability policies for compliance
- Protect sensitive or important data from modification
- Optimize storage costs by tiering infrequently accessed data
- Add or clean up metadata (blob index tags) for discovery and downstream processing

Today, many customers handle these needs by writing custom scripts or maintaining internal tooling. This approach requires significant engineering effort, ongoing maintenance, careful credential handling, and extensive testing, especially when operating across millions of items across multiple storage accounts. These challenges become more pronounced as data estates sprawl across regions and subscriptions.

What Is Azure Storage Actions?

Azure Storage Actions is a fully managed, serverless automation platform designed to perform routine data management operations at scale for:

- Azure Blob Storage
- Azure Data Lake Storage

It allows customers to define condition-based logic and apply native storage operations such as tagging, tiering, deletion, or immutability across large datasets without deploying or managing servers.
Azure Storage Actions is built around two main concepts:

Storage Tasks: A storage task is an Azure Resource Manager (ARM) resource that defines:
- The conditions used to evaluate blobs (for example, file name, size, timestamps, or index tags)
- The actions to take when conditions are met (such as changing tiers, adding immutability, or modifying tags)
The task definition is created once and centrally managed.

Task Assignments: A task assignment applies a storage task to one or more storage accounts. This allows the same logic to be reused without redefining it for each account. Each assignment can:
- Run once (for cleanup or one-off processing)
- Run on a recurring schedule
- Be scoped using container filters or excluded prefixes

Walkthrough Scenario: Compliance and Cost Optimization

During the episode, Shashank demonstrated a real-world scenario involving a storage account used by a legal team.

The Goal
- Identify PDF files tagged as important
- Apply a time-based immutability policy to prevent tampering
- Move those files from the Hot tier to the Archive tier to reduce storage costs
- Add a new tag indicating the data is protected
- Move all other blobs to the Cool tier for cost efficiency

The Traditional Approach
Without Storage Actions, this would typically require:
- Writing scripts to iterate through blobs
- Handling credentials and permissions
- Testing logic on sample data
- Scaling execution safely across large datasets
- Maintaining and rerunning the scripts over time

Using Azure Storage Actions
With Storage Actions, the administrator:
- Defines conditions based on file extension and index tags
- Chains multiple actions (immutability, tiering, tagging)
- Uses a built-in preview capability to validate which blobs match the conditions
- Executes the task without provisioning infrastructure

The entire workflow is authored declaratively in the Azure portal and executed by the platform.
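To make the walkthrough concrete, here is a rough local sketch of the same condition/action logic. The record shapes, tag names, and action strings are hypothetical; a real storage task declares this logic once, and the platform evaluates it across the account at scale.

```python
# Local sketch of the walkthrough's condition/action logic, evaluated over a
# handful of in-memory blob records. Dict shapes, tag names, and action
# strings are invented for illustration -- a storage task expresses this
# declaratively and Azure executes it serverlessly.

def plan_actions(blob: dict) -> list[str]:
    """Decide which actions apply to one blob record."""
    is_important_pdf = (
        blob["name"].lower().endswith(".pdf")
        and blob.get("tags", {}).get("importance") == "important"
    )
    if is_important_pdf:
        # Protect, archive, and tag the legal team's important PDFs.
        return ["set-immutability-policy", "set-tier:Archive", "set-tag:protected=true"]
    # Everything else goes to the Cool tier for cost efficiency.
    return ["set-tier:Cool"]

blobs = [
    {"name": "contract.PDF", "tags": {"importance": "important"}},
    {"name": "notes.txt", "tags": {}},
]
for b in blobs:
    print(b["name"], "->", plan_actions(b))
```

Notice that the value of the service is exactly what this sketch lacks: authentication, scale-out, retries, and reporting all come from the platform rather than your own code.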
Visibility, Monitoring, and Auditability

Azure Storage Actions provides built-in observability:

- Preview conditions allow customers to validate logic against a subset of blobs before execution
- Azure Monitor metrics track task runs, targeted objects, and successful operations
- Execution reports are generated as CSV files for each run, detailing blobs processed, actions performed, and execution status for audit purposes

This makes Storage Actions suitable for scenarios where traceability and review are important.

Common Customer Use Cases

Shashank shared several examples of how customers are using Azure Storage Actions today:

- Financial services: Applying immutability and retention policies to call recordings for compliance
- Airlines: Cost optimization by tiering or cleaning up blobs based on creation time or size
- Manufacturing: One-time processing to reset or remove blob index tags on IoT-generated data

These scenarios range from recurring automation to one-off operational tasks.

Getting Started and Sharing Feedback

Azure Storage Actions is available in over 40 public Azure regions. To learn more, check out:

- Azure Storage Actions product page: https://azure.microsoft.com/en-us/products/storage-actions
- Azure Storage Actions public documentation: https://learn.microsoft.com/en-us/azure/storage-actions/storage-tasks/storage-task-quickstart-portal
- Azure Storage Actions pricing page: https://azure.microsoft.com/en-us/pricing/details/storage-actions/

For questions or feedback, the team can be reached at: storageactions@microsoft.com

JSON Web Token (JWT) Validation in Azure Application Gateway: Secure Your APIs at the Gate
Hello Folks! In a Zero Trust world, identity becomes the control plane and tokens become the gatekeepers. Recently, in an E2E conversation with my colleague Vyshnavi Namani, we dug into a topic every ITPro supporting modern apps should understand: JSON Web Token (JWT) validation, specifically using Azure Application Gateway. In this post we'll distill that conversation into a technical guide for infrastructure pros who want to secure APIs and backend workloads without rewriting applications.

Why IT Pros Should Care About JWT Validation

JSON Web Token (JWT) is an open standard token format (RFC 7519) used to represent claims or identity information between two parties. JWTs are issued by an identity provider (Microsoft Entra ID) and attached to API requests in an HTTP Authorization: Bearer <token> header. They are tamper-evident and include a digital signature, so they can be validated cryptographically.

JWT validation in Azure Application Gateway means the gateway will check every incoming HTTPS request for a valid JWT before it forwards the traffic to your backend service. Think of it like a bouncer or security guard at the club entrance: if the client doesn't present a valid "ID" (token), they don't get in. This first-hop authentication happens at the gateway itself. No extra custom auth code is needed in your APIs. The gateway uses Microsoft Entra ID (Azure AD) as the authority to verify the token's signature and claims (issuer/tenant, audience, expiry, etc.). By performing token checks at the edge, Application Gateway ensures that only authenticated requests reach your application. If the JWT is missing or invalid, the gateway can deny the request, depending on your configuration (e.g. return HTTP 401 Unauthorized), without disturbing your backend. If the JWT is valid, the gateway can even inject an identity header (x-msft-entra-identity) with the user's tenant and object ID before passing the call along.
This offloads authentication from your app and provides a consistent security gate in front of all your APIs.

Key benefits of JWT validation at the gateway:

- Stronger security at the edge: The gateway checks each token's signature and key claims, blocking bad tokens before they reach your app.
- No backend work needed: Since the gateway handles JWT validation, your services don't need token-parsing code. Therefore, there is less maintenance and lower CPU use.
- Stateless and scalable: Every request brings its own token, so there's no session management. Any gateway instance can validate tokens independently, and Azure handles key rotation for you.
- Simplified compliance: Centralized JWT policies make it easier to prove only authorized traffic gets through, without each app team building their own checks.
- Defense in depth: Combine JWT validation with WAF rules to block malicious payloads and unauthorized access.

In short, JWT validation gives your Application Gateway the smarts to know who's knocking at the door, and to only let the right people in.

How JWT Validation Works

At its core, JWT validation uses a trusted authority (for now, Microsoft Entra ID) to issue a token. That token is presented to the Application Gateway, which then validates:

- The token is legitimate
- The token was issued by the expected tenant
- The audience matches the resource you intend to protect

If all checks pass, the gateway returns a 200 OK and the request continues to your backend. If anything fails, the gateway returns 403 Forbidden, and your backend never sees the call.
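To see what "validating claims" means mechanically, here is a stdlib-only sketch that decodes a token's payload and checks issuer, audience, and expiry. Signature verification against the issuer's published signing keys is deliberately omitted (that is the cryptographic part the gateway performs for you); the token and expected values below are made up.

```python
# Sketch of the claim checks behind JWT validation: decode the base64url
# payload segment and verify audience, issuer, and expiry. Signature
# verification is intentionally omitted -- the gateway does that against the
# issuer's signing keys. The token, audience, and issuer here are fabricated.
import base64
import json
import time

def decode_payload(jwt: str) -> dict:
    """Decode the middle (payload) segment of a header.payload.signature token."""
    payload_b64 = jwt.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def claims_ok(claims: dict, expected_aud: str, expected_iss: str, now=None) -> bool:
    """Check the audience, issuer, and expiry claims."""
    now = time.time() if now is None else now
    return (claims.get("aud") == expected_aud
            and claims.get("iss") == expected_iss
            and claims.get("exp", 0) > now)

# Build a fake unsigned token ("e30" is the base64url of "{}") to exercise the path.
payload = {"aud": "api://contoso-api",
           "iss": "https://login.example/tenant-id/v2.0",
           "exp": int(time.time()) + 3600}
fake_jwt = "e30." + base64.urlsafe_b64encode(json.dumps(payload).encode()).decode().rstrip("=") + ".sig"
print(claims_ok(decode_payload(fake_jwt), "api://contoso-api", "https://login.example/tenant-id/v2.0"))  # True
```

A request carrying a token that fails any of these checks (wrong audience, wrong tenant/issuer, expired) is exactly the kind of call the gateway rejects with 403 before your backend ever sees it.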
You can check code and errors here: JSON Web Token (JWT) validation in Azure Application Gateway (Preview)

Setting Up JWT Validation in Azure Application Gateway

The steps to configure JWT validation in Azure Application Gateway are documented here: JSON Web Token (JWT) validation in Azure Application Gateway (Preview)

Use Cases That Matter to IT Pros

- Zero Trust
- Multi-Tenant Workloads
- Geolocation-Based Access
- AI Workloads

Next Steps

- Identify APIs or workloads exposed through your gateways.
- Audit whether they already enforce token validation.
- Test JWT validation in a dev environment.
- Integrate the policy into your Zero Trust architecture.
- Collaborate with your dev teams on standardizing audiences.

Resources

- Azure Application Gateway JWT Validation: https://learn.microsoft.com/azure/application-gateway/json-web-token-overview
- Microsoft Entra ID App Registrations: https://learn.microsoft.com/azure/active-directory/develop/quickstart-register-app
- Azure Application Gateway Documentation: https://learn.microsoft.com/azure/application-gateway/overview
- Azure Zero Trust Guidance: https://learn.microsoft.com/security/zero-trust/zero-trust-overview
- Azure API Management and API Security Best Practices: https://learn.microsoft.com/azure/api-management/api-management-key-concepts
- Microsoft Identity Platform (Tokens, JWT, OAuth2): https://learn.microsoft.com/azure/active-directory/develop/security-tokens
- Using Curl with JWT Validation Scenarios: https://learn.microsoft.com/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow#request-an-access-token

Final Thoughts

JWT validation in Azure Application Gateway is a powerful addition to your skills for securing cloud applications. It brings identity awareness right into your networking layer, which is a huge win for security and simplicity. If you manage infrastructure and worry about unauthorized access to your APIs, give it a try. It can drastically reduce the "attack surface" by catching invalid requests early.
As always, I’d love to hear about your experiences. Have you implemented JWT validation on App Gateway, or do you plan to? Let me know how it goes! Feel free to drop a comment or question. Cheers! Pierre Roman
Anatomy of an Outage: How Microsoft focuses on Transparency during and post incident
Outages happen, no matter the hyperscale provider, no matter the architecture. What separates resilient organizations from the rest is how quickly they detect issues, how effectively they communicate, and how well they learn from the inevitable. Rick Claus had the opportunity to co-present a session on how Microsoft communicates during outages and what YOU can do to be more proactive about how your Azure-based infra is weathering the storm. He and Tajinder Pal Singh Ahluwalia pull back the curtain on how Microsoft handles major incidents, from the first customer-impact signal to the deep-dive retrospectives that follow.
Configuring Azure Monitor with Log Analytics for IIS Servers Azure Monitor combined with Log Analytics provides centralized telemetry collection for performance metrics, event logs, and application logs from Windows-based workloads. This guide demonstrates how to configure data collection from IIS servers using Data Collection Rules (DCRs). Create the Log Analytics Workspace Navigate to Log Analytics workspaces in the Azure portal Select Create Choose your resource group (e.g., Zava IIS resource group) Provide a workspace name and select your preferred region Select Review + Create, then Create After deployment, configure RBAC permissions by assigning the Contributor role to users or service principals that need to interact with the workspace data. Configure Data Collection Infrastructure Create a Data Collection Endpoint: Navigate to Azure Monitor in the portal Select Data Collection Endpoints, then Create Specify the endpoint name, subscription, resource group, and region (match your Log Analytics workspace region) Create the endpoint Create a Data Collection Rule: Navigate to Data Collection Rules and select Create Provide a rule name, resource group, and region Select Windows as the platform type Choose the data collection endpoint created in the previous step Skip the Resources tab initially (you'll associate VMs later) Configure Data Sources Add three data source types to capture comprehensive telemetry: Performance Counters: On the Collect and Deliver page, select Add data source Choose Performance Counters as the data source type Select Basic for standard CPU, memory, disk, and network metrics (or Custom for specific counters) Set the destination to Azure Monitor Logs and select your Log Analytics workspace Windows Event Logs: Add another data source and select Windows Event Logs Choose Basic collection mode Select Application, Security, and System logs Configure severity filters (Critical, Error, Warning for Application and System; Audit Success for 
Security) Specify the same Log Analytics workspace as the destination IIS Logs: Add a final data source for Internet Information Services logs Accept the default IIS log file paths or customize as needed Set the destination to your Log Analytics workspace After configuring all data sources, select Review + Create, then Create the data collection rule. Associate Resources Navigate to your newly created Data Collection Rule Select Resources from the rule properties Click Add and select your IIS servers (e.g., zava-iis1, zava-iis2) Return to Data Collection Endpoints Select your endpoint and add the same IIS servers as resources This two-step association ensures proper routing of telemetry data. Query Collected Data After allowing time for data collection, query the telemetry: Navigate to your Log Analytics workspace Select Logs to open the query editor Browse predefined queries under Virtual Machines Run the "What data has been collected" query to view performance counters, network metrics, and memory data Access Insights to monitor data ingestion volumes You can create custom KQL queries to analyze specific events, performance patterns, or IIS log entries across your monitored infrastructure. Find out more at: https://learn.microsoft.com/en-us/azure/azure-monitor/fundamentals/overview799Views0likes0CommentsDeploy and configure an Azure Application Gateway for load balancing and website protection.
Azure Application Gateway provides layer 7 load balancing with integrated Web Application Firewall (WAF) capabilities, enabling traffic distribution across backend servers while protecting against common web exploits like SQL injection and DDoS attacks. This guide walks through deploying an Application Gateway to front-end two Windows Server IIS instances in an availability set.

Network Infrastructure Configuration

The first step is to prepare your Azure network infrastructure for the Application Gateway deployment.

Create Application Gateway Subnet
1. Navigate to Virtual Networks and select your IIS VNet
2. Select Subnets > Add Subnet
3. Configure the subnet: Name: app-GW-subnet; Starting address: 10.0.1.0 (or next available subnet range)
4. Leave other settings at defaults (no private endpoint policies or subnet delegation required)

Configure NSG Rules for Backend Traffic
1. Select the first IIS VM's Network Security Group
2. Create an inbound rule: Source: Application Gateway subnet (10.0.1.0/24); Service: HTTP
3. Provide a priority and descriptive name
4. Repeat for the second IIS VM's NSG to allow traffic from the Application Gateway subnet on port 80

Application Gateway Deployment

Once the network infrastructure is prepared, you can deploy the Application Gateway and configure network traffic protection policies.

Basic Configuration
1. Search for Application Gateways in the Azure Portal
2. Click Create > Application Gateway
3. Configure basic settings: Resource Group: same as IIS VMs; Name: (e.g., ZAVA-app-GW2); Region: same as IIS VMs; Tier: Standard V2; IP Address Type: IPv4 only
4. Select Configure Virtual Network and choose the IIS VNet
5. Select the Application Gateway subnet created earlier
6. Create a new public IPv4 address for the gateway frontend
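The basic settings above boil down to a small set of parameters. Sketched as a plain data structure for reference (key names and example values are illustrative only, not the actual ARM schema for an application gateway):

```python
# The wizard's basic settings captured as a plain data structure.
# Key names and example values are illustrative only -- this is NOT the
# actual ARM template schema for an Azure Application Gateway.
basic_settings = {
    "name": "ZAVA-app-GW2",
    "resource_group": "same-as-iis-vms",   # keep the gateway with the IIS VMs
    "region": "same-as-iis-vms",
    "tier": "Standard_v2",
    "ip_address_type": "IPv4",
    "vnet": "iis-vnet",                    # hypothetical name for the IIS VNet
    "subnet": "app-GW-subnet",             # dedicated 10.0.1.0/24 gateway subnet
    "frontend_public_ip": "new",           # new public IPv4 address
}

print(basic_settings["name"], "/", basic_settings["tier"])
```

Writing the choices down like this is a useful habit even when deploying through the portal: the same values feed directly into Bicep, Terraform, or CLI automation later.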
Backend Pool Configuration
1. On the Backends page, select Add a backend pool
2. Provide a pool name
3. Add both IIS VM private IP addresses to the pool

Routing Rule Configuration
1. On the Configuration page, select Add a routing rule
2. Configure the listener: provide a rule name; create a listener with a descriptive name; Protocol: HTTP; Port: 80; Listener type: Basic
3. Configure backend targets: Target type: Backend pool; Backend pool: select the pool created in the previous step
4. Create new backend settings with port 80
5. Configure optional settings (cookie affinity, connection draining) as needed
6. Specify a priority for the routing rule
7. Complete the wizard to deploy the gateway

Verification and Testing
1. Navigate to Application Gateways and select your deployed gateway
2. Copy the Public IP Address from the overview page
3. Access the public IP in a browser and refresh multiple times to observe load balancing between IIS-1 and IIS-2
4. Navigate to Backend Pools to view backend health status for troubleshooting

Web Application Firewall Protection
1. In your Application Gateway, navigate to Web Application Firewall
2. Select Create a web application firewall policy
3. Provide a policy name
4. Enable Bot Protection for enhanced security
5. Save the policy
6. Review the policy's Managed Rules to confirm OWASP Core Rule Set and bot protection rules are active

The Application Gateway now distributes traffic across your IIS availability set while providing enterprise-grade security protection through integrated WAF capabilities.

Find out more at: https://learn.microsoft.com/en-us/azure/application-gateway/overview
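The alternating IIS-1/IIS-2 responses you observe in the verification step come from the gateway rotating requests across healthy pool members. A toy sketch of that rotation (a real Application Gateway also factors in health-probe results and honors cookie affinity if you enabled it):

```python
# Toy round-robin rotation across a two-node backend pool, mimicking what you
# see when repeatedly refreshing against the gateway's public IP. A real
# gateway also consults health probes and optional cookie-based affinity.
from itertools import cycle

def simulate_requests(pool, n):
    """Return the backend chosen for each of n successive requests."""
    rotation = cycle(pool)
    return [next(rotation) for _ in range(n)]

print(simulate_requests(["IIS-1", "IIS-2"], 4))  # ['IIS-1', 'IIS-2', 'IIS-1', 'IIS-2']
```

If refreshing only ever returns one server, check the backend health view first: an unhealthy member is removed from the rotation, which looks identical to "load balancing not working" from the browser.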