Azure Migration Challenges (and how to resolve them)
Moving workloads to Azure is rarely plug-and-play. Here are some workarounds for challenges organizations encounter when planning and executing migrations.

Server Migration

Legacy OS & Software Compatibility: Old, out-of-support operating systems may not run in Azure or may perform poorly. Tightly coupled apps tied to specific hardware or OS versions are hard to replicate.
Fix: Run compatibility assessments early. Upgrade or patch the OS before migrating, or refactor the workload to run on a supported OS.

Performance Sizing: On-prem VMs may rely on fast local SSDs or low-latency network links you won't get by default in Azure. Undersizing means poor performance; oversizing means wasted spend.
Fix: Use Azure Migrate's performance-based recommendations to right-size your VMs.

Network & Identity Integration: Migrated servers still need to communicate with on-prem resources and authenticate users. Splitting app servers and auth servers across environments breaks things fast.
Fix: Design network topology and identity infrastructure before you move anything. Move workloads that have interdependencies together.

Governance & Cloud Sprawl: On-prem controls (naming conventions, equipment tags) don't automatically follow you to the cloud. Spinning up resources with a click leads to sprawl.
Fix: Set up Azure Policy from day one. Enforce tagging, naming, and compliance rules as part of the migration project—not after.

Skills Gaps: On-prem server experts aren't automatically fluent in Azure operations.
Fix: Invest in cloud operations training before and during the migration.

Database Migration

Compatibility: Not every database engine or version maps cleanly to an Azure equivalent.
Fix: Run the Azure Data Migration Assistant early to verify feature and functionality support.

Post-Migration Performance: Performance depends on the hosting ecosystem; what worked on-prem may not translate directly.
Fix: Revisit indexing and configuration after migration.
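Several of these fixes lend themselves to scripting. For example, the row-count validation recommended under Data Integrity below can be sketched as a quick script. This is purely illustrative: sqlite3 stands in for the real source and target engines, and the table name is made up.

```python
import sqlite3

TABLES = ["orders"]  # hypothetical list of tables to validate

def row_counts(conn):
    """Return {table: row_count} for each table we expect to migrate."""
    return {t: conn.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0] for t in TABLES}

# sqlite3 stands in for the real source/target engines in this sketch.
source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")
for db in (source, target):
    db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY)")
    db.executemany("INSERT INTO orders VALUES (?)", [(1,), (2,)])

src, tgt = row_counts(source), row_counts(target)
mismatches = {t: (src[t], tgt[t]) for t in TABLES if src[t] != tgt[t]}
print("row-count mismatches:", mismatches)  # an empty dict means counts line up
```

Running a check like this against every table after each test migration, and again at cutover, gives you an objective signal before you commit to the new environment.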
Use SQL Intelligent Insights and Performance Recommendations for tuning guidance.

Choosing the Right Service Tier: Azure offers elastic pools, managed instances, Hyperscale, and sharding—picking the wrong one can be costly.
Fix: Profile your workload with your DBA and use Azure Migrate's Database Assessment for sizing suggestions.

Security Configuration: User logins, roles, and encryption settings must migrate with the data.
Fix: Map every layer of your on-prem security configuration and implement corresponding controls post-migration.

Data Integrity: Data types, constraints, and triggers must come over intact with zero loss or corruption.
Fix: Use reliable migration tools, test multiple times, and validate row counts and key constraints. Plan cutover during low-usage windows and always have a rollback plan.

Application Migration

Legacy App Complexity: Custom and legacy apps carry years of accumulated config files, hard-coded paths, IP addresses, and environment-specific logging. Each app can feel like its own mini migration project.
Fix: Use Azure Migrate's app dependency analysis to map what each app needs before you touch it.

Dependency Conflicts: Apps may depend on specific framework versions, libraries, or OS features that aren't available or supported in Azure.
Fix: Identify and resolve dependency gaps early. Consider containerizing or refactoring apps to isolate them from environment differences.

Scale of Effort: Dozens or hundreds of apps, each with unique characteristics, create a massive manual workload.
Fix: Automate everything you can. Use porting assistants and batch migration tooling to reduce repetitive tasks.

Key Takeaway: Start assessments early, automate aggressively, set up governance from day one, and train your team before the move—not after. Most migration failures trace back to skipped prep work.

Azure App Service Managed Instances: What IT/Ops Teams Need to Know
Azure App Service has long been one of the most reliable ways to run web apps on Azure, giving teams a fully managed platform with built-in scaling, deployment integration, and enterprise-grade security. But for organizations that need more control, expanded flexibility, or the ability to run apps that have additional dependencies, the new Azure App Service Managed Instance (preview) brings a powerful new option. Vinicius Apolinario recently sat down with Andrew Westgarth, Product Manager for Azure App Service, to talk through what Managed Instances are, why they matter, and how IT/Ops teams can take advantage of the new capabilities.

What Managed Instances Bring to the Table
Managed Instances (MI) deliver the App Service experience you know with added flexibility for additional scenarios. You get the same PaaS benefits—patching, scaling, deployment workflows—but with the control typically associated with IaaS. Some of the highlights discussed:
App Service vs. App Service Managed Instance — the main differences, and the scenarios MI focuses on.
Consistent App Service experience — same deployment model, same runtime options, same operational model.
App Service experience for different audiences — how IT/Ops teams can leverage MI and what it means for development teams.

Features IT/Ops Teams Will Appreciate
Beyond the core architecture, MI introduces capabilities that make day-to-day operations easier:
Configuration (Install) Script — a new way to customize the underlying environment with scripts that run during provisioning. This is especially useful for installing dependencies, configuring app and OS settings, installing fonts, or preparing the environment for the workload.
RDP Access for Troubleshooting — a long-requested feature that gives operators a secure way to RDP into the instance for deep troubleshooting. Perfect for diagnosing issues that require OS-level visibility.
Learn more about Azure App Service Managed Instance (preview):
Documentation: https://aka.ms/AppService/ManagedInstance
Hands On Lab: https://aka.ms/managedinstanceonappservicelab
Blog: https://aka.ms/managedinstanceonappservice
Ignite session: https://ignite.microsoft.com/en-US/sessions/BRK1021

Migration, Modernization & Agentic Tools
This video covers Migration, Modernization, and Agentic tools. Agentic tools introduce autonomy, continuous optimization, and context-aware decision-making into the migration lifecycle. Instead of treating migration as a one-time lift-and-shift, they operate as ongoing systems that:
Discover and map environments dynamically
Recommend modernization paths based on real telemetry
Automate execution steps end-to-end
Continuously validate, optimize, and remediate after landing in Azure
This shifts migration from a project to a self-improving system.

The video provides an overview of new tools in Azure Copilot and GitHub Copilot that you can use when migrating and modernizing. These tools provide the following benefits:
Agents can classify workloads into migrate/modernize/rebuild patterns based on performance, code structure, and operational signals.
Agents can execute migration waves automatically—copying data, validating cutovers, sequencing dependencies, and rolling back if needed.
Agentic tools can continuously tune cost, performance, resiliency, and security posture using telemetry and policy-driven actions.
Agentic tools embed governance into the migration engine—ensuring workloads land compliant, secure, and aligned with enterprise standards.
Autonomous discovery and automated execution remove weeks of manual effort.
Parallelized migration waves become safe because the system understands dependencies.
Automated validation reduces human error during cutover.
Refactoring recommendations are grounded in code and performance analysis.
Agentic tools keep optimizing cost, security, and resilience—closing the loop between migration and operations.
Automating Large-Scale Data Management with Azure Storage Actions
Azure Storage customers increasingly operate at massive scale, with millions or even billions of items distributed across multiple storage accounts. As the scale of the data increases, managing it introduces a different set of challenges. In a recent episode of Azure Storage Talk, I sat down with Shashank, a Product Manager on the Azure Storage Actions team, to discuss how Azure Storage Actions helps customers automate common data management tasks without writing custom code or managing infrastructure. This post summarizes the key concepts, scenarios, and learnings from that conversation. Listen to the full conversation below.

The Problem: Data Management at Scale Is Hard
As storage estates grow, customers often need to:
Apply retention or immutability policies for compliance
Protect sensitive or important data from modification
Optimize storage costs by tiering infrequently accessed data
Add or clean up metadata (blob index tags) for discovery and downstream processing
Today, many customers handle these needs by writing custom scripts or maintaining internal tooling. This approach requires significant engineering effort, ongoing maintenance, careful credential handling, and extensive testing, especially when operating across millions of items in multiple storage accounts. These challenges become more pronounced as data estates sprawl across regions and subscriptions.

What Is Azure Storage Actions?
Azure Storage Actions is a fully managed, serverless automation platform designed to perform routine data management operations at scale for:
Azure Blob Storage
Azure Data Lake Storage
It allows customers to define condition-based logic and apply native storage operations such as tagging, tiering, deletion, or immutability across large datasets without deploying or managing servers.
Azure Storage Actions is built around two main concepts:

Storage Tasks
A storage task is an Azure Resource Manager (ARM) resource that defines:
The conditions used to evaluate blobs (for example, file name, size, timestamps, or index tags)
The actions to take when conditions are met (such as changing tiers, adding immutability, or modifying tags)
The task definition is created once and centrally managed.

Task Assignments
A task assignment applies a storage task to one or more storage accounts. This allows the same logic to be reused without redefining it for each account. Each assignment can:
Run once (for cleanup or one-off processing)
Run on a recurring schedule
Be scoped using container filters or excluded prefixes

Walkthrough Scenario: Compliance and Cost Optimization
During the episode, Shashank demonstrated a real-world scenario involving a storage account used by a legal team.

The Goal
Identify PDF files tagged as important
Apply a time-based immutability policy to prevent tampering
Move those files from the Hot tier to the Archive tier to reduce storage costs
Add a new tag indicating the data is protected
Move all other blobs to the Cool tier for cost efficiency

The Traditional Approach
Without Storage Actions, this would typically require:
Writing scripts to iterate through blobs
Handling credentials and permissions
Testing logic on sample data
Scaling execution safely across large datasets
Maintaining and rerunning the scripts over time

Using Azure Storage Actions
With Storage Actions, the administrator:
Defines conditions based on file extension and index tags
Chains multiple actions (immutability, tiering, tagging)
Uses a built-in preview capability to validate which blobs match the conditions
Executes the task without provisioning infrastructure
The entire workflow is authored declaratively in the Azure portal and executed by the platform.
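The legal-team scenario boils down to condition-based rules over blob properties. A minimal sketch of that evaluation logic is below; it is purely illustrative (the real service expresses this declaratively in the portal, and the property and action names here are made up for the example):

```python
def plan_actions(blob):
    """Return the actions the legal-team task would apply to one blob.

    `blob` is a dict with hypothetical keys: name, tier, tags.
    """
    is_important_pdf = (blob["name"].endswith(".pdf")
                        and blob["tags"].get("classification") == "important")
    if is_important_pdf:
        # Chain multiple actions: immutability, archive tiering, tagging.
        return ["set-immutability-policy", "set-tier:Archive", "set-tag:protected=true"]
    # Everything else moves to the Cool tier for cost efficiency.
    return ["set-tier:Cool"]

blobs = [
    {"name": "contract.pdf", "tier": "Hot", "tags": {"classification": "important"}},
    {"name": "notes.txt", "tier": "Hot", "tags": {}},
]
for b in blobs:
    print(b["name"], "->", plan_actions(b))
```

The point of the managed service is that you never write or operate code like this yourself: you declare the conditions and chained actions once, preview the matching blobs, and the platform executes it across accounts.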
Visibility, Monitoring, and Auditability
Azure Storage Actions provides built-in observability:
Preview conditions allow customers to validate logic against a subset of blobs before execution
Azure Monitor metrics track task runs, targeted objects, and successful operations
Execution reports are generated as CSV files for each run, detailing:
Blobs processed
Actions performed
Execution status for audit purposes
This makes Storage Actions suitable for scenarios where traceability and review are important.

Common Customer Use Cases
Shashank shared several examples of how customers are using Azure Storage Actions today:
Financial services: Applying immutability and retention policies to call recordings for compliance
Airlines: Cost optimization by tiering or cleaning up blobs based on creation time or size
Manufacturing: One-time processing to reset or remove blob index tags on IoT-generated data
These scenarios range from recurring automation to one-off operational tasks.

Getting Started and Sharing Feedback
Azure Storage Actions is available in over 40 public Azure regions. To learn more, check out:
Azure Storage Actions product page: https://azure.microsoft.com/en-us/products/storage-actions
Azure Storage Actions public documentation: https://learn.microsoft.com/en-us/azure/storage-actions/storage-tasks/storage-task-quickstart-portal
Azure Storage Actions pricing page: https://azure.microsoft.com/en-us/pricing/details/storage-actions/
For questions or feedback, the team can be reached at: storageactions@microsoft.com

JSON Web Token (JWT) Validation in Azure Application Gateway: Secure Your APIs at the Gate
Hello Folks! In a Zero Trust world, identity becomes the control plane and tokens become the gatekeepers. Recently, in an E2E conversation with my colleague Vyshnavi Namani, we dug into a topic every ITPro supporting modern apps should understand: JSON Web Token (JWT) validation, specifically using Azure Application Gateway. In this post we'll distill that conversation into a technical guide for infrastructure pros who want to secure APIs and backend workloads without rewriting applications.

Why IT Pros Should Care About JWT Validation
JSON Web Token (JWT) is an open standard token format (RFC 7519) used to represent claims or identity information between two parties. JWTs are issued by an identity provider (Microsoft Entra ID) and attached to API requests in an HTTP Authorization: Bearer <token> header. They are tamper-evident and include a digital signature, so they can be validated cryptographically. JWT validation in Azure Application Gateway means the gateway checks every incoming HTTPS request for a valid JWT before it forwards the traffic to your backend service. Think of it like a bouncer or security guard at the club entrance: if the client doesn't present a valid "ID" (token), they don't get in. This first-hop authentication happens at the gateway itself. No extra custom auth code is needed in your APIs. The gateway uses Microsoft Entra ID (Azure AD) as the authority to verify the token's signature and claims (issuer/tenant, audience, expiry, etc.). By performing token checks at the edge, Application Gateway ensures that only authenticated requests reach your application. If the JWT is missing or invalid, the gateway denies the request according to your configuration (e.g., returns HTTP 401 Unauthorized) without disturbing your backend. If the JWT is valid, the gateway can even inject an identity header (x-msft-entra-identity) with the user's tenant and object ID before passing the call along.
This offloads authentication from your app and provides a consistent security gate in front of all your APIs.

Key benefits of JWT validation at the gateway:
Stronger security at the edge: The gateway checks each token's signature and key claims, blocking bad tokens before they reach your app.
No backend work needed: Since the gateway handles JWT validation, your services don't need token-parsing code, which means less maintenance and lower CPU use.
Stateless and scalable: Every request brings its own token, so there's no session management. Any gateway instance can validate tokens independently, and Azure handles key rotation for you.
Simplified compliance: Centralized JWT policies make it easier to prove only authorized traffic gets through, without each app team building their own checks.
Defense in depth: Combine JWT validation with WAF rules to block malicious payloads and unauthorized access.
In short, JWT validation gives your Application Gateway the smarts to know who's knocking at the door, and to only let the right people in.

How JWT Validation Works
At its core, JWT validation relies on a trusted authority (currently Microsoft Entra ID) to issue a token. That token is presented to the Application Gateway, which then validates that:
The token is legitimate
The token was issued by the expected tenant
The audience matches the resource you intend to protect
If all checks pass, the gateway returns a 200 OK and the request continues to your backend. If anything fails, the gateway returns 403 Forbidden, and your backend never sees the call.
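To make the claim checks concrete, here is a small sketch that decodes a JWT's payload and verifies issuer, audience, and expiry. It deliberately omits the most important step the gateway performs, verifying the signature against Entra ID's published signing keys, and the token, tenant ID, and audience values are placeholders, not real Entra output:

```python
import base64
import json
import time

def b64url(data: dict) -> str:
    """Base64url-encode a dict as an unpadded JWT segment."""
    raw = json.dumps(data).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def decode_segment(seg: str) -> dict:
    seg += "=" * (-len(seg) % 4)  # restore the stripped padding
    return json.loads(base64.urlsafe_b64decode(seg))

def check_claims(token: str, expected_issuer: str, expected_audience: str) -> bool:
    header, payload, _signature = token.split(".")
    claims = decode_segment(payload)
    # NOTE: a real gateway first verifies _signature against the issuer's
    # signing keys; that step is omitted in this sketch.
    return (claims.get("iss") == expected_issuer
            and claims.get("aud") == expected_audience
            and claims.get("exp", 0) > time.time())

# Hand-built demo token (all values are placeholders).
tenant = "00000000-0000-0000-0000-000000000000"
issuer = f"https://login.microsoftonline.com/{tenant}/v2.0"
payload = {"iss": issuer, "aud": "api://my-app", "exp": int(time.time()) + 3600}
token = ".".join([b64url({"alg": "none"}), b64url(payload), ""])

print(check_claims(token, issuer, "api://my-app"))     # True
print(check_claims(token, issuer, "api://other-app"))  # False: wrong audience
```

The value of the gateway feature is that these checks (plus the cryptographic signature verification) happen at the edge on every request, so none of this code ever needs to live in your backend.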
You can check configuration code and error behavior here: JSON Web Token (JWT) validation in Azure Application Gateway (Preview)

Setting Up JWT Validation in Azure Application Gateway
The steps to configure JWT validation in Azure Application Gateway are documented here: JSON Web Token (JWT) validation in Azure Application Gateway (Preview)

Use Cases That Matter to IT Pros
Zero Trust
Multi-Tenant Workloads
Geolocation-Based Access
AI Workloads

Next Steps
Identify APIs or workloads exposed through your gateways.
Audit whether they already enforce token validation.
Test JWT validation in a dev environment.
Integrate the policy into your Zero Trust architecture.
Collaborate with your dev teams on standardizing audiences.

Resources
Azure Application Gateway JWT Validation https://learn.microsoft.com/azure/application-gateway/json-web-token-overview
Microsoft Entra ID App Registrations https://learn.microsoft.com/azure/active-directory/develop/quickstart-register-app
Azure Application Gateway Documentation https://learn.microsoft.com/azure/application-gateway/overview
Azure Zero Trust Guidance https://learn.microsoft.com/security/zero-trust/zero-trust-overview
Azure API Management and API Security Best Practices https://learn.microsoft.com/azure/api-management/api-management-key-concepts
Microsoft Identity Platform (Tokens, JWT, OAuth2) https://learn.microsoft.com/azure/active-directory/develop/security-tokens
Using Curl with JWT Validation Scenarios https://learn.microsoft.com/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow#request-an-access-token

Final Thoughts
JWT validation in Azure Application Gateway is a powerful addition to your toolkit for securing cloud applications. It brings identity awareness right into your networking layer, which is a huge win for security and simplicity. If you manage infrastructure and worry about unauthorized access to your APIs, give it a try. It can drastically reduce the attack surface by catching invalid requests early.
As always, I’d love to hear about your experiences. Have you implemented JWT validation on App Gateway, or do you plan to? Let me know how it goes! Feel free to drop a comment or question. Cheers! Pierre Roman
Anatomy of an Outage: How Microsoft focuses on Transparency during and post incident
Outages happen—no matter the hyperscale provider, no matter the architecture. What separates resilient organizations from the rest is how quickly they detect issues, how effectively they communicate, and how well they learn from the inevitable. Rick Claus had the opportunity to co-present a session on how Microsoft communicates during outages and what you can do to be more proactive about how your Azure-based infrastructure is weathering the storm. He and Tajinder Pal Singh Ahluwalia pull back the curtain on how Microsoft handles major incidents—from the first customer impact signal to the deep-dive retrospectives that follow.

Configure a Log Analytics workspace to collect Windows Server event logs, IIS, and performance data
Configuring Azure Monitor with Log Analytics for IIS Servers
Azure Monitor combined with Log Analytics provides centralized telemetry collection for performance metrics, event logs, and application logs from Windows-based workloads. This guide demonstrates how to configure data collection from IIS servers using Data Collection Rules (DCRs).

Create the Log Analytics Workspace
Navigate to Log Analytics workspaces in the Azure portal
Select Create
Choose your resource group (e.g., Zava IIS resource group)
Provide a workspace name and select your preferred region
Select Review + Create, then Create
After deployment, configure RBAC permissions by assigning the Contributor role to users or service principals that need to interact with the workspace data.

Configure Data Collection Infrastructure
Create a Data Collection Endpoint:
Navigate to Azure Monitor in the portal
Select Data Collection Endpoints, then Create
Specify the endpoint name, subscription, resource group, and region (match your Log Analytics workspace region)
Create the endpoint
Create a Data Collection Rule:
Navigate to Data Collection Rules and select Create
Provide a rule name, resource group, and region
Select Windows as the platform type
Choose the data collection endpoint created in the previous step
Skip the Resources tab initially (you'll associate VMs later)

Configure Data Sources
Add three data source types to capture comprehensive telemetry:
Performance Counters:
On the Collect and Deliver page, select Add data source
Choose Performance Counters as the data source type
Select Basic for standard CPU, memory, disk, and network metrics (or Custom for specific counters)
Set the destination to Azure Monitor Logs and select your Log Analytics workspace
Windows Event Logs:
Add another data source and select Windows Event Logs
Choose Basic collection mode
Select Application, Security, and System logs
Configure severity filters (Critical, Error, Warning for Application and System; Audit Success for Security)
Specify the same Log Analytics workspace as the destination
IIS Logs:
Add a final data source for Internet Information Services logs
Accept the default IIS log file paths or customize as needed
Set the destination to your Log Analytics workspace
After configuring all data sources, select Review + Create, then Create the data collection rule.

Associate Resources
Navigate to your newly created Data Collection Rule
Select Resources from the rule properties
Click Add and select your IIS servers (e.g., zava-iis1, zava-iis2)
Return to Data Collection Endpoints
Select your endpoint and add the same IIS servers as resources
This two-step association ensures proper routing of telemetry data.

Query Collected Data
After allowing time for data collection, query the telemetry:
Navigate to your Log Analytics workspace
Select Logs to open the query editor
Browse predefined queries under Virtual Machines
Run the "What data has been collected" query to view performance counters, network metrics, and memory data
Access Insights to monitor data ingestion volumes
You can create custom KQL queries to analyze specific events, performance patterns, or IIS log entries across your monitored infrastructure.

Find out more at: https://learn.microsoft.com/en-us/azure/azure-monitor/fundamentals/overview

Deploy and configure an Azure Application Gateway for load balancing and website protection
Azure Application Gateway provides layer 7 load balancing with integrated Web Application Firewall (WAF) capabilities, enabling traffic distribution across backend servers while protecting against common web exploits like SQL injection and DDoS attacks. This guide walks through deploying an Application Gateway to front two Windows Server IIS instances in an availability set.

Network Infrastructure Configuration
The first step is to prepare your Azure network infrastructure for the Application Gateway deployment:

Create Application Gateway Subnet
Navigate to Virtual Networks and select your IIS VNet
Select Subnets > Add Subnet
Configure the subnet:
Name: app-GW-subnet
Starting address: 10.0.1.0 (or next available subnet range)
Leave other settings at defaults (no private endpoint policies or subnet delegation required)

Configure NSG Rules for Backend Traffic
Select the first IIS VM's Network Security Group
Create an inbound rule:
Source: Application Gateway subnet (10.0.1.0/24)
Service: HTTP
Provide priority and descriptive name
Repeat for the second IIS VM's NSG to allow traffic from the Application Gateway subnet on port 80

Application Gateway Deployment
Once the network infrastructure is prepared, deploy the Application Gateway and configure traffic protection policies.

Basic Configuration
Search for Application Gateways in the Azure Portal
Click Create > Application Gateway
Configure basic settings:
Resource Group: Same as IIS VMs
Name: (e.g., ZAVA-app-GW2)
Region: Same as IIS VMs
Tier: Standard V2
IP Address Type: IPv4 only
Select Configure Virtual Network and choose the IIS VNet
Select the Application Gateway subnet created earlier
Create a new public IPv4 address for the gateway frontend.
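Before deploying, it can help to sanity-check the address plan: the gateway subnet must sit inside the VNet, must not overlap the workload subnet, and should match the NSG rule's source prefix. A quick check using the addresses from this walkthrough (the VNet and workload subnet ranges are assumptions for the example):

```python
import ipaddress

vnet            = ipaddress.ip_network("10.0.0.0/16")  # assumed IIS VNet range
appgw_subnet    = ipaddress.ip_network("10.0.1.0/24")  # app-GW-subnet
workload_subnet = ipaddress.ip_network("10.0.0.0/24")  # assumed IIS VM subnet
nsg_source      = ipaddress.ip_network("10.0.1.0/24")  # NSG inbound rule source

assert appgw_subnet.subnet_of(vnet)                # gateway subnet is inside the VNet
assert not appgw_subnet.overlaps(workload_subnet)  # dedicated, non-overlapping subnet
assert nsg_source == appgw_subnet                  # NSG rule matches the gateway subnet
print("address plan checks out")
```

Catching an overlapping or mistyped prefix this way is much cheaper than debugging unhealthy backends after the gateway is deployed.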
Backend Pool Configuration
On the Backends page, select Add a backend pool
Provide a pool name
Add both IIS VM private IP addresses to the pool.

Routing Rule Configuration
On the Configuration page, select Add a routing rule
Configure the listener:
Provide a rule name
Create a listener with a descriptive name
Protocol: HTTP
Port: 80
Listener type: Basic
Configure backend targets:
Target type: Backend pool
Backend pool: Select the pool created in the previous step
Create new backend settings with port 80
Configure optional settings (cookie affinity, connection draining) as needed
Specify a priority for the routing rule
Complete the wizard to deploy the gateway

Verification and Testing
Navigate to Application Gateways and select your deployed gateway
Copy the Public IP Address from the overview page
Access the public IP in a browser and refresh multiple times to observe load balancing between IIS-1 and IIS-2
Navigate to Backend Pools to view backend health status for troubleshooting.

Web Application Firewall Protection
In your Application Gateway, navigate to Web Application Firewall
Select Create a web application firewall policy
Provide a policy name
Enable Bot Protection for enhanced security
Save the policy
Review the policy's Managed Rules to confirm the OWASP Core Rule Set and bot protection rules are active.
The Application Gateway now distributes traffic across your IIS availability set while providing enterprise-grade security protection through integrated WAF capabilities.

Find out more at: https://learn.microsoft.com/en-us/azure/application-gateway/overview

Deploying Windows Servers in an Azure Availability Set
This guide demonstrates deploying Windows Server VMs in an Azure availability set for Windows Server IIS workloads. An availability set logically groups virtual machines across fault domains and update domains within a single Azure data center. Fault domains provide physical hardware isolation (separate racks, power, and network switches), while update domains ensure Azure staggers platform maintenance, rebooting only one domain at a time with 30-minute recovery windows. VMs must be assigned to availability sets during creation; you cannot add existing VMs later.

Creating the First VM
Navigate to Azure Portal > Virtual Machines > Create
Create a new resource group (e.g., "Zava IIS")
Name the VM (e.g., "Zava IIS 1") and select region (e.g., East US 2)
Under Availability Options, select "Availability set" > Create New
Name the availability set and accept defaults (2 fault domains, 2 update domains)
Configure local admin account (avoid using "Administrator")
Select "No inbound ports" for security
Enable Azure Hybrid Benefit if you have existing Windows Server licenses
Verify Premium SSD is selected under Disks (required for the 99.95% SLA)
Note the virtual network name for subsequent VMs
Under Management, disable automatic shutdown and hotpatch
Under Monitoring, disable boot diagnostics
Review and create the VM

Creating the Second VM
Return to Virtual Machines > Create
Use the same resource group
Name the second VM (e.g., "Zava IIS 2")
Select the existing availability set created in step 4 above
Match all settings from the first VM (admin account, no inbound ports, hybrid benefit, Premium SSD)
Ensure the VM connects to the same virtual network as the first VM
Disable auto shutdown, hotpatch, and boot diagnostics
Review and create
Ensure that the VMs are configured with Premium SSD to achieve the highest possible SLA of 99.95%.
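Azure controls the actual placement, but conceptually the platform spreads availability-set members across fault and update domains in round-robin fashion. The sketch below illustrates that idea for the two IIS VMs and the 2+2 domain defaults used above (illustrative only, not how Azure assigns domains internally):

```python
def assign_domains(vm_names, fault_domains=2, update_domains=2):
    """Illustrate round-robin spreading of VMs across fault/update domains."""
    return [{"vm": name,
             "fault_domain": i % fault_domains,
             "update_domain": i % update_domains}
            for i, name in enumerate(vm_names)]

placement = assign_domains(["zava-iis1", "zava-iis2"])
for p in placement:
    print(p)
# Because the two IIS VMs land in different fault and update domains,
# a rack-level failure or a staged platform update takes down at most
# one of them, which is what makes the 99.95% SLA possible.
```

This is also why both VMs must join the availability set at creation time: the platform decides domain placement when the VM is built, and there is no way to rebalance an existing VM into the set later.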
In a future post, we'll cover how to configure Azure Application Gateway to load balance traffic across computers in an availability set, as well as how to protect against DDoS and OWASP Top 10 attacks.

Learn more about Azure Availability Sets
Microsoft Entra Domain Services: Deploy, Join a VM, and Use Classic AD Tools
Microsoft Entra Domain Services (Entra DS) provides you with the functionality of managed domain controllers in Azure. This allows you to domain-join Windows Server VMs, use Group Policy, and manage DNS on a specially prepared vNet subnet without deploying and patching your own DC VMs. This post walks through:
• Preparing your virtual network
• Deploying Entra DS
• Configuring DNS
• Joining a Windows Server VM to the managed domain
• Using AD DS and Windows Server DNS tools from that VM

Prerequisites
• An Azure subscription.
• A Microsoft Entra tenant with a custom DNS domain verified (for example, zava.support). Entra DS uses this custom domain as the managed domain name.
• Permission to create resource groups, VNets, and Entra DS.
• Permission to manage Entra groups in the tenant (add administrators/configure RBAC).

Step 1 – Create a resource group and virtual network
1. Create a new resource group in your chosen region to hold all Entra DS resources and VMs.
2. Create a virtual network (for example, zava-entra-dsvn) in that resource group (for example, address space 172.16.0.0/16, or a range that fits your environment).
3. Add a subnet dedicated to the Entra DS domain controllers (for example, zava-entra-dc). This subnet will host the managed domain controller resources created by Entra DS; you won't actually deploy VMs there.
Important: Keep this DC subnet separate from your workload subnets. You can use NSGs, but avoid blocking Entra DS management traffic.

Step 2 – Add a workload subnet for VMs
1. In the same virtual network, create a second subnet (for example, zava-domain-vms) for domain-joined workloads such as IIS VMs. This is the subnet where you'll deploy the Windows Server VM that joins the Entra DS domain.

Step 3 – Deploy Microsoft Entra Domain Services
In the Azure portal, create a new Microsoft Entra Domain Services managed domain by performing the following steps:
1. Select the resource group you created earlier.
2.
Confirm the DNS domain name (for example, zava.support)—this comes from your Entra tenant's custom domain.
3. Choose the region (same region as the virtual network).
4. Keep the default Enterprise SKU unless you have a specific need for another.
5. On the Networking page:
· Select the virtual network you created.
· Select the DC subnet for the managed domain controllers.
6. On the Administration page, note that the AAD DC Administrators group (legacy name shown in the portal) is effectively the Domain Admins equivalent for the managed domain. Any user you add to this group in Entra becomes a domain admin in Entra DS.
7. Configure the synchronization scope between Entra and Entra DS:
· All accounts (default) – synchronizes both cloud-only and synchronized users.
· Cloud-only accounts – useful when you're already syncing on-prem identities and you only want specific cloud accounts in Entra DS.
8. Review the Security settings page. By default:
· NTLMv1 is disabled.
· You can enable/disable NTLM password sync, or effectively disable NTLM entirely.
· RC4 encryption is disabled.
· Kerberos armoring is enabled.
· LDAP signing and LDAP channel binding are enabled.
9. Review your configuration and create the Entra DS managed domain.
Note: after deployment, you cannot change:
• The managed domain DNS name
• Subscription
• Resource group
• Virtual network and subnet used by Entra DS

Step 4 – Fix virtual network DNS with Entra DS health checks
1. Once deployment completes, open the Entra DS resource and go to View health.
2. Run the health checks. If the diagnostic reports that the virtual network DNS servers are not set to the Entra DS managed DC IPs, select Fix to automatically configure the VNet's DNS servers.
· In Entra DS, note the DNS server IPs (for example, 172.16.0.4 and 172.16.0.5).
· In the virtual network's DNS settings, confirm these IPs are configured as custom DNS servers.
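You'll verify the same DNS settings again from inside a VM in Step 7 by running ipconfig /all. If you're checking several VMs, pulling the DNS server list out of that output is easy to script. A rough sketch against sample output (the parsing is illustrative and not hardened against every ipconfig format):

```python
import re

# Abbreviated sample of `ipconfig /all` output from a domain-joined VM.
SAMPLE = """\
Windows IP Configuration
   DNS Servers . . . . . . . . . . . : 172.16.0.4
                                       172.16.0.5
   NetBIOS over Tcpip. . . . . . . . : Enabled
"""

def dns_servers(ipconfig_text):
    """Pull the DNS server list out of `ipconfig /all` output."""
    servers, collecting = [], False
    for line in ipconfig_text.splitlines():
        if "DNS Servers" in line:
            collecting = True
            m = re.search(r"(\d{1,3}(?:\.\d{1,3}){3})", line)
            if m:
                servers.append(m.group(1))
            continue
        if collecting:
            # Continuation lines hold just an address; anything else ends the list.
            m = re.fullmatch(r"\s*(\d{1,3}(?:\.\d{1,3}){3})\s*", line)
            if m:
                servers.append(m.group(1))
            else:
                collecting = False
    return servers

expected = ["172.16.0.4", "172.16.0.5"]  # the Entra DS managed DC IPs
print(dns_servers(SAMPLE) == expected)
```

A VM whose DNS list doesn't match the Entra DS IPs will fail domain join, so this is the first thing to check when the join in Step 8 misbehaves.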
Tip: Any VM in this virtual network that needs to join the managed domain must use these Entra DS DNS addresses.

Step 5 – Add administrators to the AAD DC Administrators group
1. In the Entra admin center, go to Groups > All groups and locate AAD DC Administrators.
2. Open the group, add your primary admin account (for example, prime@zava.support), and add a dedicated domain admin–style account (for example, adds.prime@zava.support) to act as the primary administrator for the managed domain.

Important: After deploying Entra DS, you must change the password of any Entra account you want to use in the managed AD DS domain. The password change triggers password synchronization between Entra and Entra DS, which is what makes the account usable in the managed domain. If you don't change the password, the account will function normally elsewhere in Azure, but you won't be able to use it with Entra DS. This trips a lot of people up.

Step 6 – Create a Windows Server IaaS VM on the workload subnet
1. In the Azure portal, create a new Windows Server VM (for example, an IIS server):
· Place it in the same resource group.
· Select the virtual network you created earlier.
· Attach it to the workload subnet (for example, zava-domain-vms).
· Configure a local administrator account (for example, username prime with a strong password).
2. On the Management blade, note the option "Login with Microsoft Entra ID":
· This enables direct Entra login to the VM but does not join the VM to the Entra DS domain.
· For this walkthrough, you'll join the VM to Entra DS using a classic domain join, so you don't need to enable this option.
3. Complete the wizard and deploy the VM.

Step 7 – Connect to the VM and verify DNS
1. Once the VM is deployed, open it in the portal and select Connect > RDP.
· Request a JIT RDP port opening if required.
· Download the RDP file and open it with Remote Desktop Connection.
2. Sign in with the local administrator account you configured when deploying the VM, not your Entra account.
3. In the VM, open a command prompt and run:

ipconfig /all

Confirm that the DNS servers are the Entra DS managed IPs (for example, 172.16.0.4 and 172.16.0.5).

If DNS is wrong: double-check the VNet's DNS settings, ensure the VM is attached to the correct virtual network and subnet, then restart the VM.

Step 8 – Join the VM to the Entra DS domain
1. On the VM, open Server Manager and select Local Server.
2. Next to Workgroup, select the workgroup name to open System Properties (Computer Name tab).
3. Select Change… and then:
· Under Member of, select Domain.
· Enter the Entra DS domain name (for example, zava.support).
4. When prompted for credentials, use an account that's a member of AAD DC Administrators, such as adds.prime@zava.support, and enter its password.
5. When you receive confirmation that the computer has joined the domain, restart the VM.

Step 9 – Sign in with an Entra DS domain account
1. After the VM restarts, reconnect via RDP using the VM's public IP and:
· Username: your domain UPN (for example, adds.prime@zava.support).
· Password: the account's password.
2. Confirm that you are signed in as a domain user in the Entra DS managed domain.

Step 10 – Use AD DS and DNS tools on the domain-joined VM
1. Install and open Active Directory Users and Computers (RSAT) on the VM.
· Browse the managed domain structure.
· Notice containers such as AADDC Computers and AADDC Users, and groups like Domain Admins that map back to Entra groups.
2. Create an organizational unit (OU), for example IIS Servers, to contain IIS VMs.
3. Open Group Policy Management and:
· Create a Group Policy Object targeting the IIS Servers OU.
· Link it and configure settings as required (hardening, IIS configuration, and so on).
4. Open the DNS Manager console on the VM, which now connects to the Entra DS–managed DNS servers.
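The portal-based VM deployment in Step 6 can also be sketched with the Azure CLI. This is a hedged example, not the article's method: the resource group and VM names are assumptions, the image alias is one reasonable choice, and the password is expected from the environment rather than hard-coded. The in-guest domain join (Step 8) is still done interactively from Server Manager as described above.

```shell
#!/usr/bin/env sh
# Sketch of Step 6: a Windows Server VM on the workload subnet.
# Assumed values (not from the article): resource group name, VM name,
# and the Win2022Datacenter image alias.
RG="zava-entra-rg"
VNET="zava-entra-dsvn"
VM_SUBNET="zava-domain-vms"
VM_NAME="iis1"
ADMIN_USER="prime"

az_run() {
    if command -v az >/dev/null 2>&1; then
        "$@"
    else
        echo "would run: $*"
    fi
}

# Attaching the VM to this VNet means it inherits the custom DNS servers
# (the managed DC IPs), which is what lets the later domain join succeed.
az_run az vm create \
    --resource-group "$RG" \
    --name "$VM_NAME" \
    --image Win2022Datacenter \
    --vnet-name "$VNET" \
    --subnet "$VM_SUBNET" \
    --admin-username "$ADMIN_USER" \
    --admin-password "${ADMIN_PASSWORD:-<set-a-strong-password>}"
```

Note that `az vm create` does not expose the portal's "Login with Microsoft Entra ID" toggle directly; as in the walkthrough, you can leave Entra login out entirely and rely on the classic domain join.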
5. Create a new Host (A) record, for example:
· Name: iis3
· FQDN: iis3.zava.support
· IP address: the appropriate internal address.
6. Open a command prompt and verify DNS resolution with:

nslookup iis3.zava.support

Confirm it returns the correct IP address.

Entra DS gives you familiar AD capabilities (domain join, Group Policy, and DNS) without the overhead of running and maintaining your own DC VMs in Azure.

You can find out more at: https://learn.microsoft.com/en-us/entra/identity/domain-services/overview
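As a closing check, the two DNS facts this walkthrough depends on can be verified from a script. This is a small sketch under the same assumptions as before (the zava-entra-rg resource group name is hypothetical); the `nslookup` check only succeeds from a machine on the Entra DS virtual network, since the managed DNS zone is not publicly resolvable.

```shell
#!/usr/bin/env sh
# Closing verification sketch: VNet DNS configuration and record resolution.
RG="zava-entra-rg"          # assumed resource group name (not from the article)
VNET="zava-entra-dsvn"
RECORD="iis3.zava.support"

az_run() {
    if command -v az >/dev/null 2>&1; then
        "$@"
    else
        echo "would run: $*"
    fi
}

# 1. Confirm the VNet's custom DNS servers are the managed DC IPs
#    (should list e.g. 172.16.0.4 and 172.16.0.5 from Step 4)
az_run az network vnet show \
    --resource-group "$RG" \
    --name "$VNET" \
    --query "dhcpOptions.dnsServers" \
    --output tsv

# 2. From a domain-joined VM on this VNet, confirm the new A record resolves
nslookup "$RECORD" || echo "note: $RECORD only resolves on the Entra DS virtual network"
```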