# Deploying Azure Redis Enterprise with Geo-Replication Using Terraform
This post walks through a production-proven pattern for running stateful services across Azure regions using Terraform. We'll cover a primary–replica Redis architecture, regional isolation with Key Vault and networking, and a clean Terraform parameterization strategy that scales from development to production without duplication.

## Why Multi-Region State Is Hard

Running applications globally is easy when everything is stateless—if something fails, you redeploy. But stateful services tell a different story. Caches, message brokers, and data stores can't be treated as disposable. They hold business-critical data, and downtime or inconsistency quickly becomes customer-visible.

In real-world systems, common requirements include:

- Low-latency reads from multiple regions
- Automatic recovery when a region becomes unavailable
- Predictable data consistency
- Repeatable infrastructure from dev through production

Manually configuring this per region doesn't scale. Drift sets in. Failover is unclear. Backups get forgotten. That's where Terraform + Azure Managed Redis geo-replication shines.

GitHub link: https://github.com/vsakash5/Managed-redis.git

## High-Level Architecture

We use a primary–replica Redis Enterprise model:

**Primary Redis**

- Single write endpoint
- Highly available inside its region
- Source of truth

**Replica Redis**

- Read-only
- Asynchronously synced from primary
- Can be promoted during disaster recovery

Each region is fully isolated:

- Separate subnets
- Separate Key Vaults
- Private Endpoints only (no public exposure)

This prevents shared failure domains and allows each region to operate independently if needed.

## The Terraform Design Principle

Instead of maintaining separate Terraform stacks per region, the key idea is: one reusable module, one tfvars file per environment, multiple regions inside it.

The module is written once. Regional differences are supplied via parameter suffixes such as `_replica`, `_secondary`, and `_tertiary`. This keeps logic centralized and environments consistent.

## Core Parameter Layers

### 1. Environment Identity (Shared)

```hcl
environment    = "dev" # dev | staging | prod
context_prefix = "app"
```

These values are reused everywhere—names, tags, and identifiers.

### 2. Primary Region

```hcl
location            = "eastus2"
resource_group_name = "rg-app-dev-primary"
```

### 3. Replica Region

```hcl
location_replica            = "uksouth"
resource_group_name_replica = "rg-app-dev-replica"
```

The symmetry is intentional. Terraform can now apply the same module twice without branching logic, as the sketch below illustrates.
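To make the parameterization concrete, here is a minimal sketch of how a single reusable module might be invoked once per region. The module path, variable names, and `role` input are illustrative assumptions, not the exact contents of the linked repository:

```hcl
# Hypothetical variables mirroring the tfvars layers above.
variable "environment" { type = string }
variable "context_prefix" { type = string }
variable "location" { type = string }
variable "resource_group_name" { type = string }
variable "location_replica" { type = string }
variable "resource_group_name_replica" { type = string }

# One module, applied once per region. Only the inputs differ;
# the module body contains no region-specific branching.
module "redis_primary" {
  source              = "./modules/managed-redis" # assumed module path
  environment         = var.environment
  context_prefix      = var.context_prefix
  location            = var.location
  resource_group_name = var.resource_group_name
  role                = "primary"
}

module "redis_replica" {
  source              = "./modules/managed-redis"
  environment         = var.environment
  context_prefix      = var.context_prefix
  location            = var.location_replica
  resource_group_name = var.resource_group_name_replica
  role                = "replica"
}
```

The same pattern extends to a third region by adding `_tertiary` variables and one more module block, which is what keeps new-region onboarding a configuration change rather than a redesign.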
## Regional Isolation: Networking and Secrets

### Why isolation matters

Geo-replication copies data, not dependencies. If both Redis instances depend on the same subnet or the same Key Vault, then a failure in one region can cascade into the other.

### Networking (one subnet per region)

Benefits:

- Independent NSGs
- Independent routing
- Independent capacity planning

### Key Vault (one per region)

Why this matters:

- Redis credentials are not replicated
- Each region stores its own secrets
- A Key Vault outage doesn't take both regions down

## Redis Configuration

### Primary Redis (writes enabled)

The geo-replication group name must match on both instances. That's the logical binding Azure uses to link them.

### Private Endpoint-only access

No Redis instance is exposed publicly. Each region uses:

- A private endpoint
- A workload subnet
- Internal DNS resolution

This means:

- No public IPs
- No inbound attack surface
- Traffic stays on the Azure backbone

## Linking Primary and Replica

Terraform explicitly defines the relationship:

```hcl
managed_redis_geo_replication_config = {
  primary_to_replica = {
    primary_redis_key = "primary"
    replica_keys      = ["replica"]
  }
}
```

Terraform ensures:

1. Primary is created first
2. Replica is deployed second
3. Geo-replication is established last

## Environment Scaling: Dev → Staging → Prod

The infrastructure pattern never changes. Only values do.

| Environment | Group Name |
|---|---|
| Dev | dev-grp |
| Staging | stg-grp |
| Prod | prod-grp |

This is how you avoid "snowflake" environments.

## Disaster Recovery Strategy

If the primary region fails:

1. Applications fail over to the replica read endpoint
2. The Terraform configuration is updated to remove geo-replication and promote the replica configuration to primary
3. Traffic is fully restored

Once the original region recovers, roles can be re-established cleanly. No click-ops. No guesswork.

## Key Lessons Learned

1. **Naming is infrastructure.** Predictable names enable automation, discovery, and auditing.
2. **Key Vault isolation beats availability.** A shared Key Vault is a shared outage.
3. **Parameterization beats copy-paste.** Fix once → benefit everywhere.
4. **Geo-replication is a contract.** Matching replication group names is non-negotiable.
5. **The tfvars file is the source of truth.** If it's not in Terraform, it's not real.

## Final Thoughts

Running stateful services in multiple regions doesn't require magic—it requires discipline:

- Isolate aggressively
- Parameterize consistently
- Automate everything
- Test failure often

With this approach, adding a new region becomes configuration—not redesign. That's how infrastructure scales.

# Security Copilot Agents in Defender XDR: where things actually stand
With RSAC 2026 behind us and the E5 inclusion now rolling out between April 20 and June 30, anyone planning SOC workflows or sitting on a capacity budget needs a clear picture of what is GA, what is in preview, and what was just announced. The marketing pages tend to blur those lines. This is my sober look at the current state, with the operational details that matter for adoption decisions.

## What is actually shipping right now

The **Phishing Triage Agent** is GA. It only handles user-reported phish through Defender for Office 365 P2, but for most SOCs that is a meaningful chunk of the L1 queue. Verdicts come with a natural-language rationale rather than just a label, which is the part that determines whether analysts will trust it. The agent learns from analyst confirmations and overrides, so the feedback loop matters more than the initial setup.

There is a setup detail that is easy to miss: the agent will not classify alerts that have already been suppressed by alert tuning. The built-in rule "Auto-Resolve - Email reported by user as malware or phish" needs to be off, and any custom tuning rules that touch this alert type need review. If you skip this, the agent runs on an empty queue and you wonder why nothing is happening.

The **Threat Intelligence Briefing Agent** is also GA. It produces tenant-tailored intel briefings on a regular cadence. Useful, but lower operational impact than the triage agents.

**Copilot Chat in Defender** went GA with the April 2026 update. Conversational Q&A inside the portal, grounded in your incident and entity data. This is the lowest-risk way to get value out of Security Copilot and probably where most teams should start.

## Public preview, worth watching

The **Dynamic Threat Detection Agent** is the most technically interesting one. It runs continuously in the Defender backend, correlates across Defender and Sentinel telemetry, generates its own hypotheses, and emits a dynamic alert when the evidence converges. The detection source on the alert is Security Copilot. Each alert includes the structured fields (severity, MITRE techniques, remediation) plus a narrative explaining the reasoning.

For EU tenants, the residency point is worth confirming with whoever owns data protection in your org: the service runs region-local, so customer data and required telemetry stay inside the designated geographic boundary. During public preview it is enabled by default for eligible customers and is free. At GA, currently targeted for late 2026, it transitions to the SCU consumption model and can be disabled.

The **Threat Hunting Agent** is also in public preview. Natural language to KQL with guided hunting. Lower stakes, but useful for teams without deep KQL expertise on hand.

## Announced at RSAC, still preview

Two agents got the headlines in March:

- The **Security Alert Triage Agent** extends the agentic triage approach beyond phishing into identity and cloud alerts. The longer-term direction is consolidating phishing, identity, and cloud triage under a single agent. Rollout is from April 2026, in preview.
- The **Security Analyst Agent** is the multi-step investigation agent: deeper context across Defender and Sentinel, prioritised findings, and a transparent reasoning trace. It has been in preview since March 26.

Both look promising on paper, but Microsoft's history of preview features that take a long time to mature is well documented. I would not plan production workflows around either of them yet.

## What you actually get with the E5 inclusion

This is the licensing change most people are dealing with right now.
Security Copilot has been part of the E5 product terms since January 1, 2026. Tenant rollout is phased between April 20 and June 30, 2026, with a 7-day notification before activation.

The numbers:

- 400 SCUs per month for every 1,000 paid user licenses
- Capped at 10,000 SCUs per month, which you hit at around 25,000 seats
- Linear scaling below that, so a 3,000-seat tenant gets 1,200 SCUs per month
- No rollover; the pool resets monthly

What is included: chat, promptbooks, and agentic scenarios across Defender, Entra, Intune, Purview, and the standalone portal. Agent Builder and the Graph APIs are in. If you also run Sentinel, the included SCUs apply to Security Copilot scenarios there.

What is not included: Sentinel data lake compute and storage. Those still run through Azure on the regular meters. Beyond the included pool you pay 6 USD per SCU pay-as-you-go, with 30 days notice before that mode kicks in.

## Practical things worth knowing before activation

A few details that are easy to miss in the docs:

- Under System > Settings > Copilot in Defender > Preferences, switch from Auto-generate to Generate on demand. Auto-generate will burn SCUs on incidents nobody is going to look at; Generate on demand gives you direct control.
- In the Security Copilot portal workspace settings, check the data storage location and the data sharing toggle. Data sharing is on by default, which means Microsoft uses interaction data for product improvement. If your compliance position does not allow that, change it before agents start running. Changing it requires the Capacity Contributor role.
- Agent runs are not equivalent to the same number of analyst chat prompts. A triage agent processing fifty alerts in one run consumes meaningfully more SCUs than fifty manual prompts on the same data. If you have a high-volume phishing pipeline, model that out before you flip the switch broadly. The usage dashboard in the Security Copilot portal breaks down consumption by day, user, and scenario.
- Output quality depends on telemetry quality. Flaky connectors, gaps in log sources, or a high baseline of misconfigured alerts will produce verdicts that match. Connector health monitoring (the SentinelHealth table in Advanced Hunting is a sensible starting point) is a precondition.
- The agents only improve if analysts feed the override loop. If your team treats the verdicts as background noise rather than confirming or correcting them, the feedback signal is lost and calibration stays where it shipped. That is a process problem, not a product problem, but it determines whether any of this is worth the SCUs.

## A reasonable adoption order

A rough sequence that minimises capacity surprises:

1. Copilot Chat in Defender first. Lowest risk, immediate value through natural-language Q&A in the investigation context.
2. Phishing Triage Agent on a controlled subset, with a review cadence in place. Check the built-in tuning rules first. Watch the SCU dashboard for the first month before adding anything else.
3. Let the Dynamic Threat Detection Agent run while it is in public preview, since it is default-on and free anyway. Compare its alerts against existing Sentinel detections.
4. Security Alert Triage Agent for identity and cloud once the phishing baseline is stable.
5. Establish a monthly review covering agent decisions, false-positive rate, SCU cost, and MTTD/MTTR trends.
Technically, agentic triage is moving past phishing into identity and cloud, and the Dynamic Threat Detection Agent represents a genuine attempt at the false-negative problem rather than just another rule engine. On the licensing side, the E5 inclusion removes the biggest barrier to adoption that previously existed.

The risk is enabling everything at once. Agents that nobody reviews are agents that consume capacity without delivering value, and the SCU dashboard is the only thing that will tell you that is happening. One agent, one use case, a 30-day baseline, then the next one. The order matters more than the speed.

# Designing Outbound Connectivity for "Private Subnets" in Azure
## Why Private Subnets Change Everything

Historically, Azure virtual machines relied on default outbound internet access, where the platform automatically assigned a dynamic SNAT IP from a shared pool. This was convenient but problematic:

- ❌ No deterministic outbound IP addresses
- ❌ No traffic inspection or filtering
- ❌ No FQDN or URL governance
- ❌ Difficult to audit for compliance
- ❌ Susceptible to noisy-neighbor SNAT exhaustion

With private subnets, outbound access is disabled by default. This shifts the responsibility to the architect—deliberately. The result is an environment where:

- ✅ Every outbound flow is intentional
- ✅ Every outbound IP is known and documented
- ✅ Every egress path can be governed and logged
- ✅ Compliance evidence is straightforward to produce

The question is no longer "does my VM have internet access?" but rather "how exactly does my VM reach the internet, and is that path appropriate for this workload?"

## The Three Outbound Patterns at a Glance

| Option | Primary Role | Inspection | Scale | Cost | Best For |
|---|---|---|---|---|---|
| NAT Gateway | Managed outbound SNAT | ❌ None | ⭐⭐⭐ High | 💲 Low | Simple, scalable egress |
| Azure Firewall | Secure governed egress | ✅ Full L3–L7 | ⭐⭐⭐ High | 💲💲💲 Higher | Security boundaries |
| Load Balancer | Legacy SNAT | ❌ None | ⭐⭐ Limited | 💲 Low | Legacy / transitional |

## Scenario 1: NAT Gateway

### What is NAT Gateway?

Azure NAT Gateway is a fully managed, zone-resilient, outbound-only SNAT service. It attaches at the subnet level and automatically handles all outbound flows from that subnet using one or more static public IP addresses or prefixes. It is purpose-built for one thing: providing predictable, scalable outbound internet access—without routing complexity or inline devices.

Key flows:

- VM → NAT Gateway: automatic SNAT (no UDR required)
- NAT Gateway → Internet: static, deterministic public IP
- Inbound: NOT supported (outbound only)

### How it works (step by step)

1. VM initiates an outbound connection (e.g., HTTPS to an API)
2. NAT Gateway intercepts the flow at the subnet boundary
3. Source IP is translated to the NAT Gateway's static public IP
4. The packet is forwarded to the internet
5. Return traffic is automatically tracked and delivered back to the VM

No UDRs. No routing tables. No inline devices. It just works.

### Strengths

- Massive SNAT scale—no port exhaustion concerns at typical enterprise scale
- Deterministic outbound IPs—easy to allowlist with external services
- Zone resilient—survives availability zone failures
- Subnet scoped—applies to all VMs in the subnet automatically
- No routing configuration required

### Limitations

- ❌ No traffic inspection or filtering
- ❌ No FQDN or URL policy enforcement
- ❌ No threat intelligence integration
- ❌ Cannot restrict which internet destinations are allowed

### Best Fit Use Cases

- ✅ Application tiers calling external SaaS APIs
- ✅ VMs requiring OS updates and patch downloads
- ✅ CI/CD build agents and pipeline runners
- ✅ Spoke VNets in hub-and-spoke where east–west traffic goes through the firewall, but simple internet egress is acceptable
- ✅ Dev/test environments

A minimal Terraform sketch of this pattern follows.
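The sketch below shows the subnet-level attachment described above. It assumes an existing resource group and workload subnet are defined elsewhere in the configuration; all names are illustrative:

```hcl
# Minimal sketch: subnet-scoped egress via NAT Gateway.
# Assumes azurerm_resource_group.main and azurerm_subnet.workload
# exist elsewhere in the configuration.
resource "azurerm_public_ip" "egress" {
  name                = "pip-natgw-egress"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
  allocation_method   = "Static"
  sku                 = "Standard" # Standard SKU is required for NAT Gateway
}

resource "azurerm_nat_gateway" "egress" {
  name                = "natgw-egress"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
  sku_name            = "Standard"
}

# Bind the static public IP to the gateway...
resource "azurerm_nat_gateway_public_ip_association" "egress" {
  nat_gateway_id       = azurerm_nat_gateway.egress.id
  public_ip_address_id = azurerm_public_ip.egress.id
}

# ...and attach the gateway to the workload subnet. Every VM in
# this subnet now egresses via the static IP, with no UDR or
# per-VM configuration.
resource "azurerm_subnet_nat_gateway_association" "workload" {
  subnet_id      = azurerm_subnet.workload.id
  nat_gateway_id = azurerm_nat_gateway.egress.id
}
```

Note that the deterministic outbound IP (`azurerm_public_ip.egress`) is exactly what external partners would allowlist.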
## Scenario 2: Azure Firewall

### What is Azure Firewall?

Azure Firewall is a cloud-native, stateful, L3–L7 network security service. When used for outbound egress, it transforms the egress path from a connectivity function into a security enforcement boundary. Unlike NAT Gateway, Azure Firewall inspects every packet, evaluates it against policy, and either allows or denies it based on network rules, application rules, and threat intelligence feeds.

Key flows:

- VM → UDR: forces ALL outbound traffic to the firewall
- Firewall: evaluates against policy before allowing
- Firewall → Internet: only explicitly permitted flows pass
- All denied flows: logged and alertable

### How it works (step by step)

1. VM initiates an outbound connection
2. UDR intercepts the flow and redirects it to Azure Firewall's private IP
3. Azure Firewall evaluates the traffic:
   - Network rules (IP/port match)
   - Application rules (FQDN/URL match)
   - Threat intelligence (known malicious IPs/domains)
4. If allowed: traffic is forwarded via the firewall's public IP
5. If denied: traffic is dropped and logged
6. All flows (allowed and denied) are logged to Log Analytics / Sentinel

### Strengths

- ✅ Full L3–L7 inspection
- ✅ FQDN and URL-based filtering (application rules)
- ✅ Threat intelligence integration (Microsoft TI feed)
- ✅ TLS inspection (Premium SKU)
- ✅ Centralized governance across multiple VNets via Firewall Manager
- ✅ Rich logging—every allowed and denied flow is recorded
- ✅ IDPS (Intrusion Detection and Prevention) available in Premium

### Limitations

- ❌ Higher cost (hourly + data processing charges)
- ❌ Requires UDR configuration on each spoke subnet
- ❌ Adds latency (small but non-zero)
- ❌ Requires careful SNAT configuration at scale

### Best Fit Use Cases

- ✅ Regulated industries (financial services, healthcare, government)
- ✅ Any workload where outbound internet is a security boundary
- ✅ Environments requiring egress allowlisting for compliance
- ✅ Hub-and-spoke architectures with a centralized control plane
- ✅ SOC environments needing outbound flow telemetry

The UDR that forces this pattern is sketched below.
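A minimal sketch of the UDR redirection in step 2, assuming an existing spoke subnet and a firewall whose private IP is 10.0.1.4 (an illustrative value):

```hcl
# Minimal sketch: force all spoke egress through Azure Firewall.
# 10.0.1.4 stands in for the firewall's private IP; names are illustrative,
# and azurerm_resource_group.main / azurerm_subnet.workload are assumed.
resource "azurerm_route_table" "spoke_egress" {
  name                = "rt-spoke-egress"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name

  route {
    name                   = "default-to-firewall"
    address_prefix         = "0.0.0.0/0"
    next_hop_type          = "VirtualAppliance"
    next_hop_in_ip_address = "10.0.1.4" # Azure Firewall private IP
  }
}

# Associate the route table with each spoke workload subnet.
resource "azurerm_subnet_route_table_association" "spoke" {
  subnet_id      = azurerm_subnet.workload.id
  route_table_id = azurerm_route_table.spoke_egress.id
}
```

The 0.0.0.0/0 route is what makes the firewall the security boundary: no flow leaves the subnet without passing through policy evaluation.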
## Scenario 3: Load Balancer Outbound

### What is Load Balancer outbound?

Azure Load Balancer outbound rules were historically the primary mechanism for providing SNAT to VMs behind a Standard Load Balancer. While newer patterns (NAT Gateway, Azure Firewall) have largely replaced this approach for new designs, outbound rules remain valid in specific scenarios.

Key flows:

- VMs → Load Balancer: backend pool members get SNAT
- LB outbound rules: define port allocation per VM
- ⚠️ Port exhaustion risk at scale
- ⚠️ No inspection or policy enforcement

### How it works (step by step)

1. VM in the backend pool initiates an outbound connection
2. Load Balancer applies SNAT using the frontend public IP
3. Ephemeral ports are allocated per VM from a fixed pool
4. Return traffic is tracked and delivered back to the correct VM
5. If the port pool is exhausted: connections fail (SNAT exhaustion)

### Strengths

- Lower cost than NAT Gateway or Firewall
- Tightly integrated with existing load-balanced workloads
- Familiar operational model for legacy teams

### Limitations

- ❌ SNAT port pool is fixed and must be manually managed
- ❌ Risk of SNAT exhaustion at scale
- ❌ No traffic inspection
- ❌ Less flexible than NAT Gateway
- ❌ Not recommended for new designs

### Best Fit Use Cases

- ✅ Existing architectures already built around Azure Load Balancer
- ✅ Low outbound connection volume workloads
- ✅ Transitional architectures during modernization to NAT Gateway

## Decision Framework: Choosing the Right Outbound Pattern

### Common Pitfalls to Avoid

⚠️ **Pitfall 1: Forgetting SNAT scale limits.** Load Balancer outbound rules allocate a fixed number of ephemeral ports per VM. At scale this exhausts quickly. Use NAT Gateway instead.

⚠️ **Pitfall 2: Over-securing low-risk workloads.** Not every workload needs Azure Firewall for outbound. Dev/test and patch traffic are better served by NAT Gateway—simpler, cheaper, faster.

⚠️ **Pitfall 3: Mixing outbound models in the same subnet.** NAT Gateway and Load Balancer outbound rules cannot coexist on the same subnet; NAT Gateway always takes precedence. Plan your subnet boundaries carefully.

⚠️ **Pitfall 4: Blocking Azure platform dependencies.** Many Azure services still use public endpoints (even when Private Link is available). Ensure your outbound policy allows the required Azure service tags before enforcing egress controls.

⚠️ **Pitfall 5: Relying on platform defaults.** Default outbound access is retired for new VNets. Do not assume VMs can reach the internet without explicit configuration.

## Summary and Key Takeaways

| Scenario | Best Choice | Why |
|---|---|---|
| Simple internet egress at scale | NAT Gateway | Scalable, predictable, no complexity |
| Security boundary for egress | Azure Firewall | Inspection, FQDN rules, threat intel |
| Legacy load-balanced workloads | Load Balancer Outbound | Transitional only |
| Regulated / compliance environments | Azure Firewall | Audit logs, policy enforcement |
| Dev / test / patch traffic | NAT Gateway | Low cost, low friction |

The core principle: private subnets make outbound access intentional. Choose the outbound pattern that matches the risk level of the workload—not the most complex option available.

### References

- https://learn.microsoft.com/azure/nat-gateway/nat-overview
- https://learn.microsoft.com/azure/firewall/overview
- https://learn.microsoft.com/azure/load-balancer/outbound-rules
- https://azure.microsoft.com/blog/default-outbound-access-for-vms-in-azure-will-be-retired

# How AI Agents Are Turning Threat Intelligence Into Validated Detections
The promise of AI-assisted cybersecurity has long been hampered by a fundamental measurement problem: how do organizations validate whether an AI agent can actually perform the complex, multi-step work that security analysts do every day? Traditional benchmarks test whether models can recall MITRE ATT&CK techniques or classify threat actor tactics, but they miss the harder question—can an agent translate raw threat intelligence into production-ready detection rules that find real attacks?

Microsoft Research has addressed this gap with CTI-REALM (Cyber Threat Intelligence Real World Evaluation and LLM Benchmarking), an open-source benchmark that evaluates AI agents on end-to-end detection engineering workflows. Released in March 2026, CTI-REALM measures whether agents can read threat intelligence reports, explore telemetry schemas, iteratively refine KQL queries, and produce validated Sigma rules and KQL detection logic—exactly the workflow security analysts follow when building detections for platforms like Microsoft Sentinel.

## Why Traditional Benchmarks Fall Short

Existing cybersecurity AI benchmarks primarily test parametric knowledge—can a model name the technique behind a log entry, or correctly label a tactic from a threat report? While useful, these assessments evaluate isolated skills rather than the operational capability security teams actually need: translating narrative threat intelligence into working detection logic that identifies attacks in production environments.

CTI-REALM fills this gap by measuring three critical dimensions that earlier benchmarks overlook:

- **Operationalization over recall:** Agents must produce working Sigma rules and KQL queries validated against real attack telemetry, not just answer multiple-choice questions about threat actors.
- **Complete workflow evaluation:** The benchmark scores intermediate decision quality—CTI report selection, MITRE technique mapping, data source identification, and iterative query refinement—not just the final output.
- **Realistic tooling:** Agents use the same tools security analysts rely on: CTI repositories, schema explorers, Kusto query engines, and MITRE ATT&CK databases.

This granular, checkpoint-based scoring reveals precisely where AI agents struggle in the detection pipeline, helping security leaders understand whether performance gaps stem from comprehension failures, query construction issues, or detection specificity problems.

## The Benchmark: Real Threat Intelligence, Real Azure Environments

Microsoft curated 37 CTI reports from public sources including Microsoft Security, Datadog Security Labs, Palo Alto Networks, and Splunk, selecting scenarios that could be faithfully simulated in sandboxed environments with telemetry suitable for detection development.

The benchmark spans three Azure-relevant platforms:

- **Linux endpoints:** Traditional host-based detection scenarios
- **Azure Kubernetes Service (AKS):** Container and orchestration-layer attacks
- **Azure cloud infrastructure:** Multi-source, APT-style attack chains requiring correlation across identity, resource, and network logs

Ground-truth scoring validates detection rules at every workflow stage, from technique identification through final KQL query accuracy.

## Key Findings: What Works, What Doesn't

Microsoft evaluated multiple frontier AI models on CTI-REALM-50, a subset spanning all three platforms.
The results reveal both promise and clear limitations:

- **Performance drops sharply with platform complexity.** Linux endpoint detections scored 0.585, AKS scenarios dropped to 0.517, and Azure cloud infrastructure plummeted to 0.282. This reflects the reality that multi-source correlation across identity logs, Azure Activity, and resource-specific telemetry remains exceptionally difficult for AI agents—precisely the scenario SOC teams working in Microsoft Sentinel face when investigating sophisticated, multi-stage cloud attacks.
- **More reasoning isn't always better.** Within model families, medium reasoning configurations consistently outperformed high reasoning modes, suggesting that overthinking hurts performance in tool-rich, iterative agentic environments.
- **Structured guidance closes performance gaps.** Providing smaller models with human-authored workflow guidance improved threat technique identification and closed approximately one-third of the performance gap to much larger models.

## What This Means for Azure Security Operations

For security architects and SOC teams working with Microsoft Sentinel, CTI-REALM's findings have immediate practical implications:

| Traditional Detection Engineering | AI-Assisted Detection Engineering |
|---|---|
| Analyst reads threat report manually | AI agent parses CTI report and extracts techniques |
| Analyst identifies relevant MITRE techniques | Agent maps techniques to data sources automatically |
| Analyst explores schema, writes KQL queries | Agent iterates on KQL queries using schema tools |
| Analyst validates detection against test data | Agent generates Sigma rule + KQL validated against telemetry |
| Process takes hours to days per report | Process completes in minutes, with human validation |

The benchmark demonstrates that AI agents can meaningfully accelerate detection development, particularly for Linux and AKS scenarios where success rates exceed 50%. However, the 28% success rate for Azure cloud infrastructure detections underscores a critical reality: human expertise remains essential for validating complex, multi-source detections before operational deployment.

Security teams should view AI agents as analyst augmentation tools rather than replacements. The checkpoint-based scoring in CTI-REALM helps organizations identify where human review is most critical—typically in cloud correlation logic, detection specificity tuning, and false-positive reduction.

## Responsible Adoption: Human-in-the-Loop Remains Non-Negotiable

Microsoft's research reinforces that AI-generated detection rules require validation before production use. Organizations adopting AI-assisted detection workflows should implement structured governance:

- Validate AI-generated KQL queries against test datasets before enabling them in Sentinel analytics rules
- Require peer review for detections targeting cloud infrastructure, where AI performance is weakest
- Benchmark models using CTI-REALM before considering downstream operational use
- Maintain detection metadata tracking whether rules originated from AI or human analysts, to support incident response context

The benchmark's open-source availability on the Inspect AI repository enables security teams to test models against their own operational requirements before adoption.

## The Path Forward

CTI-REALM represents a foundational shift in how the security industry evaluates AI capabilities—moving from knowledge recall to operational competence.
For Azure practitioners, this matters because the benchmark's platforms (Linux, AKS, Azure cloud) and output formats (Sigma rules, KQL queries) directly mirror working with Microsoft Sentinel's analytics engine.

As Microsoft continues integrating AI capabilities into Security Copilot and the broader unified SIEM+XDR vision, benchmarks like CTI-REALM provide the measurement framework security leaders need to adopt AI responsibly—understanding both capabilities and limitations before operationalizing agent-assisted workflows.

The benchmark is freely available to model developers and security teams. Organizations interested in contributing, benchmarking, or exploring partnership opportunities can access the repository and contact Microsoft Research at msecaimrbenchmarking@microsoft.com.

*About the research:* CTI-REALM was developed by Microsoft Research and announced March 20, 2026. The full technical paper is available at "CTI-REALM: A new benchmark for end-to-end detection rule generation with AI agents" on the Microsoft Security Blog.

# Entra ID Login via Azure Bastion Fails After VM Recreation
You may encounter a confusing scenario where:

- An Entra ID user attempts to sign in to a Windows VM through Azure Bastion
- The connection appears to succeed in the backend logs
- The session is disconnected within a second
- Bastion returns a generic sign-in error to the user

At first glance, everything looks correctly configured. Terraform applies cleanly, permissions are in place, and Bastion access is allowed. This blog walks through a real-world troubleshooting journey that exposes a non-obvious Entra ID device registration issue, explains the root cause, and provides a clean resolution.

## Scenario

We manage Azure infrastructure using Terraform, with Entra ID login enabled via the AADLoginForWindows VM extension. Azure Bastion is used to provide secure, inbound-port-free access to Windows VMs.

After deleting and recreating a Windows VM with the same hostname, Entra ID login through Bastion started failing. Traditional local admin login worked, but Entra ID–based access did not.

## Key Terraform Configuration

The VM was deployed with Entra ID login enabled using infrastructure as code:

- AADLoginForWindows extension
- Role assignments: Virtual Machine Administrator Login or Virtual Machine User Login
- Bastion configured for Entra ID authentication

From an IaC perspective, nothing was misconfigured. A minimal sketch of this configuration is shown below.
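This is a sketch of the two pieces described above, not the exact code from the affected environment. It assumes an existing `azurerm_windows_virtual_machine.vm` resource and a variable holding a user or group object ID; the handler version should be pinned per current documentation:

```hcl
# Minimal sketch: enable Entra ID login on an existing Windows VM.
# azurerm_windows_virtual_machine.vm and var.admin_group_object_id
# are assumed; the type_handler_version shown is illustrative.
resource "azurerm_virtual_machine_extension" "aad_login" {
  name                       = "AADLoginForWindows"
  virtual_machine_id         = azurerm_windows_virtual_machine.vm.id
  publisher                  = "Microsoft.Azure.ActiveDirectory"
  type                       = "AADLoginForWindows"
  type_handler_version       = "2.0" # pin to the current documented version
  auto_upgrade_minor_version = true
}

# Grant an Entra ID user or group sign-in rights on the VM.
resource "azurerm_role_assignment" "vm_admin_login" {
  scope                = azurerm_windows_virtual_machine.vm.id
  role_definition_name = "Virtual Machine Administrator Login"
  principal_id         = var.admin_group_object_id
}
```

Note that nothing in this configuration references the Entra ID device object the extension creates, which is exactly why a stale device can break logins while the Terraform plan stays clean.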
## Symptoms Observed

The issue manifested in multiple subtle ways:

- Bastion login using Entra ID fails with a generic error message
- Backend logs show authentication success
- The session disconnects immediately after connection establishment

Running the following on the VM:

```
dsregcmd /status
```

shows:

```
IsDeviceJoined: NO
```

This explains why Entra ID authentication succeeds initially but instantly fails during session creation.

## Root Cause Explained

When a Windows VM is joined to Microsoft Entra ID, a device object is created in Entra ID, keyed to the VM's Windows hostname. If the VM is later deleted without removing the device object, and a new VM is recreated using the same hostname, the Entra ID join process fails silently due to a hostname collision.

Key points:

- The old Entra ID device object still exists
- The new VM cannot complete Entra ID registration
- Bastion authentication succeeds, but authorization fails immediately
- The VM therefore disconnects the session

This is why backend logs look "successful" even though the user experience is not.

## Resolution Steps

### 1. Identify the Stale Device Object

1. Navigate to: Azure Portal → Microsoft Entra ID → Devices → All devices
2. Search for the VM hostname (for example, VM01)
3. Open the device object and note the Object ID
4. Confirm it matches the Object ID referenced in the extension logs

### 2. Delete the Stale Device

This does not delete the VM or any Azure resources. Only the Entra ID registration is removed. You can delete the device using either method:

**Azure Portal**

1. Select the device
2. Choose Delete

**Azure CLI**

```
az ad device delete --id <ObjectId>
```

### 3. Retry the Entra ID Join

1. Restart the VM or restart the AADLoginForWindows extension
2. Wait for the extension to re-execute
3. Verify the join status:

```
dsregcmd /status
```

Expected output:

```
IsDeviceJoined: YES
```

### 4. Retry Bastion Login Using Entra ID

The session should now remain connected and function normally.

## Why This Issue Is Easy to Miss

- Azure VM deletion does not automatically clean up Entra ID device objects
- Terraform recreations with identical hostnames are common in non-prod environments
- Bastion logs are not explicit about device join failures
- Authentication succeeds, but authorization fails post-connection

## Key Takeaways

- Deleting a device from Microsoft Entra ID does not impact the VM itself
- Always check for stale Entra ID device objects when reusing hostnames
- `dsregcmd /status` is the fastest way to validate join state
- AADLoginForWindows extension logs are critical for root cause analysis
- Bastion disconnections immediately after login often indicate identity-level issues, not networking problems

## References

- Troubleshoot Microsoft Entra ID device registration issues
- Manage and delete stale Entra ID devices
- AADLoginForWindows extension documentation

# Enterprise UAMI Design in Azure: Trust Boundaries and Blast Radius
As organizations move toward secretless authentication models in Azure, Managed Identity has become the preferred approach for enabling secure communication between services. User Assigned Managed Identity (UAMI) in particular offers flexibility that allows identity reuse across multiple compute resources such as:

- Azure App Service
- Azure Function Apps
- Virtual Machines
- Azure Kubernetes Service

While this flexibility is beneficial from an operational perspective, it also introduces architectural considerations that are often overlooked during initial implementation. In enterprise environments where shared infrastructure patterns are common, the way UAMI is designed and assigned can directly influence the effective trust boundary of the deployment.

## Understanding Identity Scope in Azure

Unlike a System Assigned Managed Identity, a UAMI exists independently of the compute resource lifecycle and can be attached to multiple services across:

- Resource groups
- Subscriptions
- Environments

This capability allows a single identity to be reused across development, testing, or production services when required. However, identity reuse across multiple logical environments can expand the operational trust boundary of that identity. Any permission granted to the identity is implicitly inherited by all services to which the identity is attached. From an architectural standpoint, this creates a shared authentication surface across isolated deployment environments.

## High-Level Architecture: Shared Identity Pattern

In many enterprise Azure deployments, it is common to observe patterns where:

- A single UAMI is assigned to multiple App Services
- The same identity is reused across automation workloads
- Identities are provisioned centrally and attached dynamically

While this simplifies management and avoids identity sprawl, it may also introduce unintended privilege propagation across services. For example, in this architecture:

- Multiple App Services across environments share the same managed identity.
- Each compute instance requests an access token from Microsoft Entra ID using the Azure Instance Metadata Service (IMDS).
- The issued token is then used to authenticate against downstream platform services such as Azure SQL Database, Azure Key Vault, and Azure Storage.

Because RBAC permissions are assigned to the shared identity rather than the compute instance itself, the effective authentication boundary becomes identity-scoped instead of environment-scoped. As a result, any compromised lower-tier environment such as DEV may obtain an access token capable of accessing production-level resources if those permissions are assigned to the shared identity. This expands the operational trust boundary across environments and increases the potential blast radius in the event of identity misuse.

## Blast Radius Considerations

Blast radius refers to the potential impact scope of a security or configuration compromise. When a shared UAMI is used across multiple services, the following conditions may increase the blast radius:

| Design Pattern | Potential Risk |
|---|---|
| Single UAMI across environments | Cross-environment access |
| Subscription-wide RBAC assignment | Broad privilege scope |
| Identity used for automation pipelines | Lateral movement |
| Shared identity across teams | Ownership ambiguity |

Because Managed Identity authentication relies on the Azure Instance Metadata Service (IMDS), any compromised compute resource with access to IMDS may request an access token using the attached identity. This token can then be used to authenticate with downstream Azure services for which the identity has RBAC permissions.

## Enterprise Design Recommendations: Environment-Isolated Identity Model

To reduce identity blast radius in enterprise deployments, the following architectural principles may be considered.

### Environment-Scoped Identity

Provision separate UAMIs per environment—UAMI-DEV, UAMI-UAT, UAMI-PROD—and avoid reusing the same identity across isolated lifecycle stages. A minimal sketch of this pattern follows.
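The sketch below shows one way to express environment-scoped identities with resource-scoped role assignments in Terraform. The `for_each` pattern, names, role choice, and the assumed `azurerm_resource_group.env` / `azurerm_key_vault.env` maps are illustrative:

```hcl
# Minimal sketch: one UAMI per environment, each granted only a
# resource-scoped, least-privilege role in its own environment.
# Assumes per-environment resource group and Key Vault maps exist.
variable "environments" {
  type    = set(string)
  default = ["dev", "uat", "prod"]
}

resource "azurerm_user_assigned_identity" "app" {
  for_each            = var.environments
  name                = "uami-app-${each.key}"
  location            = azurerm_resource_group.env[each.key].location
  resource_group_name = azurerm_resource_group.env[each.key].name
}

# Scope the grant to a single Key Vault in the same environment,
# not the subscription: a compromised dev workload can only mint
# tokens that work against dev resources.
resource "azurerm_role_assignment" "kv_secrets" {
  for_each             = var.environments
  scope                = azurerm_key_vault.env[each.key].id
  role_definition_name = "Key Vault Secrets User"
  principal_id         = azurerm_user_assigned_identity.app[each.key].principal_id
}
```

Because the identity names and scopes both carry the environment key, drift between trust boundaries and deployment boundaries becomes visible in the plan itself.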
### Resource-Level RBAC Assignment

Prefer assigning RBAC permissions at the resource or resource group scope instead of the subscription scope wherever feasible.

### Identity Ownership Model

Ensure ownership clarity for identities assigned across shared workloads. The identity lifecycle should be aligned with application ownership, service ownership, and the deployment boundary.

### Least Privilege Assignment

Assign roles such as:

- Key Vault Secrets User
- Storage Blob Data Reader

instead of broader roles such as Contributor or Owner.

## Recommended High-Level Architecture

In this architecture:

- Each App Service instance is attached to an environment-specific managed identity.
- RBAC assignments are scoped at the resource or resource group level.
- Microsoft Entra ID issues tokens independently for each identity.
- Trust boundaries remain aligned with deployment environments.

A compromised DEV compute instance can only obtain a token associated with UAMI-DEV. Because UAMI-DEV does not have RBAC permissions for production resources, lateral access to PROD dependencies is prevented.

### Blast Radius Containment

This design significantly reduces the potential blast radius by ensuring that:

- Identity compromise remains environment-scoped.
- Token issuance does not grant unintended cross-environment privileges.
- RBAC permissions align with application ownership boundaries.
- Authentication trust boundaries match deployment lifecycle boundaries.

## Conclusion

User Assigned Managed Identity offers significant advantages for secretless authentication in Azure environments. However, architectural considerations related to identity reuse and scope of assignment must be evaluated carefully in enterprise deployments. By aligning identity design with trust boundaries and minimizing the blast radius through scoped RBAC and environment isolation, organizations can implement Managed Identity in a way that balances operational efficiency with security governance.

# Private DNS and Hub–Spoke Networking for Enterprise AI Workloads on Azure
## Introduction

As organizations deploy enterprise AI platforms on Azure, security requirements increasingly drive the adoption of private-first architectures:

- Private networking only
- Centralized firewalls or NVAs
- Hub-and-spoke virtual network architectures
- Private Endpoints for all PaaS services

While these patterns are well understood individually, their interaction often exposes hidden failure modes, particularly around DNS and name resolution. During a recent production deployment of a private, enterprise-grade AI workload on Azure, several issues surfaced that initially appeared to be platform or service instability. Closer analysis revealed the real cause: gaps in network and DNS design.

This post shares a real-world technical walkthrough of the problem, root causes, resolution steps, and key lessons that now form a reusable blueprint for running AI workloads reliably in private Azure environments.

## Problem Statement

The platform was deployed with the following characteristics:

- Hub-and-spoke network topology
- Custom DNS servers running in the hub
- Firewall / NVA enforcing strict egress controls
- AI, data, and platform services exposed through Private Endpoints
- Azure Container Apps using internal load balancer mode
- Centralized monitoring, secrets, and identity services

Despite successful infrastructure deployment, the environment exhibited non-deterministic production issues, including:

- Container Apps intermittently failing to start or scale
- AI platform endpoints becoming unreachable from workload subnets
- Authentication and secret access failures
- DNS resolution working in some environments but failing in others
- Terraform deployments stalling or failing unexpectedly

Because the symptoms varied across subnets and environments, root cause identification was initially non-trivial.

## Root Cause Analysis

After end-to-end isolation, the issue was not AI services, authentication, or application logic. The core problem was DNS resolution in a private Azure environment.

### 1. Custom DNS servers were not Azure-aware

The hub DNS servers correctly resolved corporate domains and on-premises records. However, they could not resolve Azure platform names or Private Endpoint FQDNs by default. Azure relies on an internal recursive resolver (168.63.129.16) that must be explicitly integrated when using custom DNS.

### 2. Missing conditional forwarders for private DNS zones

Many Azure services depend on service-specific private DNS zones, such as:

- privatelink.cognitiveservices.azure.com
- privatelink.openai.azure.com
- privatelink.vaultcore.azure.net
- privatelink.search.windows.net
- privatelink.blob.core.windows.net

Without conditional forwarders pointing to Azure's internal DNS, queries either failed silently or resolved to public endpoints that were blocked by firewall rules.

### 3. Container Apps internal DNS requirements were overlooked

When Azure Container Apps are deployed with:

```hcl
internal_load_balancer_enabled = true
```

Azure does not automatically create supporting DNS records. The environment generates a default domain and `.internal` subdomains for internal FQDNs. Without explicitly creating:

- A private DNS zone matching the default domain
- `*`, `@`, and `*.internal` wildcard records

internal service-to-service communication fails.
### 4. Private DNS zones were not consistently linked

Even when DNS zones existed, they were:

- Spread across multiple subscriptions
- Linked to some VNets but not others
- Missing links to DNS server VNets or shared services VNets

As a result, name resolution succeeded in one subnet and failed in another, depending on the lookup path.

## Resolution

No application changes were required. Stability was achieved entirely through architectural corrections.

### ✅ Step 1: Make custom DNS Azure-aware

On all custom DNS servers (or NVAs acting as DNS proxies), configure conditional forwarders for all Azure private DNS zones and forward those queries to:

```
168.63.129.16
```

This IP is Azure's internal recursive resolver and is mandatory for Private Endpoint resolution.

### ✅ Step 2: Centralize and link private DNS zones

A centralized private DNS model was adopted:

- All private DNS zones hosted in a shared subscription
- Zones linked to the hub VNet, all spoke VNets, the DNS server VNet, and any operational or virtual desktop VNets

This ensured consistent resolution regardless of workload location.

### ✅ Step 3: Explicitly handle Container Apps DNS

For Container Apps using internal ingress (see the Terraform sketch after these steps):

1. Create a private DNS zone matching the environment's default domain
2. Add a `*` wildcard record, an `@` apex record, and a `*.internal` wildcard record
3. Point all records to the Container Apps Environment static IP
4. Add a conditional forwarder for the default domain if using custom DNS

This step alone resolved multiple internal connectivity issues.

### ✅ Step 4: Align routing, NSGs, and service tags

Firewall, NSG, and route table rules were aligned to:

- Allow DNS traffic (TCP/UDP 53)
- Allow Azure service tags such as AzureCloud, CognitiveServices, AzureActiveDirectory, Storage, and AzureMonitor
- Ensure certain subnets (e.g., Container Apps, Application Gateway) retained direct internet access where required by Azure platform services
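A minimal sketch of Step 3 in Terraform, assuming an existing internal Container Apps environment and a hub VNet resource; the zone name and records come directly from the environment's exported attributes:

```hcl
# Minimal sketch: private DNS for an internal Container Apps environment.
# Assumes azurerm_container_app_environment.main exists with
# internal_load_balancer_enabled = true, plus azurerm_resource_group.main
# and azurerm_virtual_network.hub.
resource "azurerm_private_dns_zone" "aca" {
  name                = azurerm_container_app_environment.main.default_domain
  resource_group_name = azurerm_resource_group.main.name
}

# Wildcard, apex, and *.internal records, all pointing at the
# environment's static internal IP.
resource "azurerm_private_dns_a_record" "aca" {
  for_each            = toset(["*", "@", "*.internal"])
  name                = each.key
  zone_name           = azurerm_private_dns_zone.aca.name
  resource_group_name = azurerm_resource_group.main.name
  ttl                 = 300
  records             = [azurerm_container_app_environment.main.static_ip_address]
}

# Link the zone to every VNet that must resolve these names;
# repeat for spokes and the DNS server VNet per Step 2.
resource "azurerm_private_dns_zone_virtual_network_link" "hub" {
  name                  = "link-hub"
  resource_group_name   = azurerm_resource_group.main.name
  private_dns_zone_name = azurerm_private_dns_zone.aca.name
  virtual_network_id    = azurerm_virtual_network.hub.id
}
```

Deriving the zone name from `default_domain` rather than hard-coding it keeps the records correct even when the environment is recreated and Azure issues a new default domain.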
## Key Learnings

### 1. DNS is a Tier-0 dependency for AI platforms

Many AI "service issues" are DNS failures in disguise. DNS must be treated as foundational platform infrastructure.

### 2. Private Endpoints require Azure DNS integration

If you use custom DNS ✅ and Private Endpoints ✅, then forwarding to 168.63.129.16 is non-negotiable.

### 3. Container Apps internal ingress has hidden DNS requirements

Internal Container Apps environments will not function correctly without manually created DNS zones and `.internal` records.

### 4. Centralized DNS prevents environment drift

Decentralized or subscription-local DNS zones lead to fragile, inconsistent environments. Centralization improves reliability and operability.

### 5. Validate networking first, then the platform

Before escalating issues to service teams, validate DNS resolution, verify routing, and check Private Endpoint connectivity. In many cases, the perceived "platform issue" disappears.

## Quick Production Validation Checklist

Before go-live, always validate:

- ✅ Private FQDNs resolve to private IPs from all required VNets
- ✅ UDR/NSG rules allow required Azure service traffic
- ✅ Managed identities can access all dependent resources
- ✅ AI portal user workflows succeed (evaluations, agents, etc.)
- ✅ `terraform plan` shows only intended changes

## Conclusion

Running private, enterprise-grade AI workloads on Azure is absolutely achievable—but it requires intentional DNS and networking design. By making custom DNS Azure-aware, centralizing private DNS zones, explicitly handling Container Apps DNS, and aligning routing and firewall rules, an unstable environment was transformed into a repeatable, production-ready platform pattern.

If you are building AI solutions on Azure with Private Endpoints and hub–spoke networking, getting DNS right early will save weeks of troubleshooting later.

# Guardrails for Generative AI: Securing Developer Workflows
Generative AI is revolutionizing software development: it accelerates delivery, but introduces compliance and security risks if unchecked. Tools like GitHub Copilot empower developers to write code faster, automate repetitive tasks, and even generate tests and documentation. But speed without safeguards introduces risk. Unchecked AI-assisted development can lead to security vulnerabilities, data leakage, compliance violations, and ethical concerns. In regulated or enterprise environments, this risk multiplies rapidly as AI scales across teams.

The solution? Guardrails—a structured approach to ensure AI-assisted development remains secure, responsible, and enterprise-ready. In this blog, we explore how to embed responsible AI guardrails directly into developer workflows using:

- Azure AI Content Safety
- GitHub Copilot enterprise controls
- Copilot Studio governance
- Azure AI Foundry
- CI/CD and ALM integration

The goal: maximize developer productivity without compromising trust, security, or compliance.

Key points:

- **Why guardrails matter:** AI-generated code may include insecure patterns or violate organizational policies.
- **Azure AI Content Safety:** Provides APIs to detect harmful or sensitive content in prompts and outputs, ensuring compliance with ethical and legal standards.
- **Copilot Studio governance:** Enables environment strategies, Data Loss Prevention (DLP), and role-based access to control how AI agents interact with enterprise data.
- **Azure AI Foundry:** Acts as the control plane for generative AI, turning Responsible AI from policy into operational reality.
- **Integration with GitHub workflows:** Guardrails can be enforced in the IDE, Copilot Chat, and CI/CD pipelines using GitHub Actions for automated checks.
- **Outcome:** Developers maintain productivity while ensuring secure, compliant, and auditable AI-assisted development.

## Why Guardrails Are Non-Negotiable

AI-generated code and prompts can unintentionally introduce:

- **Security flaws** — injection vulnerabilities, unsafe defaults, insecure patterns
- **Compliance risks** — exposure of PII, secrets, or regulated data
- **Policy violations** — copyrighted content, restricted logic, or non-compliant libraries
- **Harmful or biased outputs** — especially in user-facing or regulated scenarios

Without guardrails, organizations risk shipping insecure code, violating governance policies, and losing customer trust. Guardrails enable teams to move fast—without breaking trust.

## The Three Pillars of AI Guardrails

Enterprise-grade AI guardrails operate across three core layers of the developer experience. These pillars are centrally governed and enforced through Azure AI Foundry, which provides lifecycle, evaluation, and observability controls across all three.

### 1. GitHub Copilot Controls (Developer-First Safety)

GitHub Copilot goes beyond autocomplete and includes built-in safety mechanisms designed for enterprise use:

- **Duplicate detection:** Filters code that closely matches public repositories.
- **Custom instructions:** Enforce coding standards via `.github/copilot-instructions.md`.
- **Copilot Chat:** Provides contextual help for debugging and secure coding practices.

Pro tip: Use Copilot Enterprise controls to enforce consistent policies across repositories and teams.

### 2. Azure AI Content Safety (Prompt & Output Protection)

This service adds a critical protection layer across prompts and AI outputs:

- **Prompt injection detection:** Blocks malicious attempts to override instructions or manipulate model behaviour.
- **Groundedness checks:** Ensure outputs align with trusted sources and expected context.
- **Protected material detection:** Flags copyrighted or sensitive content.
- **Custom categories:** Tailor filters for industry-specific or regulatory requirements.

Example: A financial services app can block outputs containing PII or regulatory violations using custom safety categories.

### 3. Copilot Studio Governance (Enterprise-Scale Control)

For organizations building custom copilots, governance is non-negotiable. Copilot Studio enables:

- **Data Loss Prevention (DLP):** Prevent sensitive data leaks from flowing through risky connectors or channels.
- **Role-based access (RBAC):** Control who can create, test, approve, deploy, and publish copilots.
- **Environment strategy:** Separate dev, test, and production environments.
- **Testing kits:** Validate prompts, responses, and behavior before production rollout.

Why it matters: governance ensures copilots scale safely across teams and geographies without compromising compliance.

## Azure AI Foundry: The Platform That Operationalizes the Three Pillars

While the three pillars define where guardrails are applied, Azure AI Foundry defines how they are governed, evaluated, and enforced at scale. Azure AI Foundry acts as the control plane for generative AI—turning Responsible AI from policy into operational reality.

### What Azure AI Foundry Adds

**Centralized guardrail enforcement.** Define guardrails once and apply them consistently across models, agents, tool calls, and outputs. Guardrails specify:

- Risk types (PII, prompt injection, protected material)
- Intervention points (input, tool call, tool response, output)
- Enforcement actions (annotate or block)

**Built-in evaluation and red-teaming.** Azure AI Foundry embeds continuous evaluation into the GenAIOps lifecycle:

- Pre-deployment testing for safety, groundedness, and task adherence
- Adversarial testing to detect jailbreaks and misuse
- Post-deployment monitoring using built-in and custom evaluators

Guardrails are measured and validated, not assumed.

**Observability and auditability.** Foundry integrates with Azure Monitor and Application Insights to provide:

- Token usage and cost visibility
- Latency and error tracking
- Safety and quality signals
- Trace-level debugging for agent actions

Every interaction is logged, traceable, and auditable—supporting compliance reviews and incident investigations.

**Identity-first security for AI agents.** Each AI agent operates as a first-class identity backed by Microsoft Entra ID:

- No secrets embedded in prompts or code
- Least-privilege access via Azure RBAC
- Full auditability and revocation

**Policy-driven platform governance.** Azure AI Foundry aligns with the Azure Cloud Adoption Framework, enabling:

- Azure Policy enforcement for approved models and regions
- Cost and quota controls
- Integration with Microsoft Purview for compliance tracking

## How to Implement Guardrails in Developer Workflows

1. **Shift-left security.** Embed guardrails directly into the IDE using GitHub Copilot and the Azure AI Content Safety APIs—catch issues early, when they're cheapest to fix (a provisioning sketch follows this list).
2. **Automate compliance in CI/CD.** Integrate automated checks into GitHub Actions to enforce policies at pull-request and build stages.
3. **Monitor continuously.** Use Azure AI Foundry and governance dashboards to track usage, violations, and policy drift.
4. **Educate developers.** Conduct readiness sessions and share best practices so developers understand why guardrails exist—not just how they're enforced.
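Shift-left enforcement starts with provisioning the Content Safety resource whose endpoint IDE tooling and CI steps can call. A minimal Terraform sketch, assuming the azurerm provider supports the `ContentSafety` account kind in your version; names, SKU, and region are illustrative:

```hcl
# Minimal sketch: provision an Azure AI Content Safety resource whose
# endpoint and key pipeline steps can call to screen prompts/outputs.
# Names, SKU, and region are illustrative assumptions.
resource "azurerm_resource_group" "guardrails" {
  name     = "rg-ai-guardrails"
  location = "eastus"
}

resource "azurerm_cognitive_account" "content_safety" {
  name                = "cs-guardrails-demo"
  location            = azurerm_resource_group.guardrails.location
  resource_group_name = azurerm_resource_group.guardrails.name
  kind                = "ContentSafety"
  sku_name            = "S0"

  # Custom subdomain enables Entra ID (token-based) authentication
  # instead of shared keys.
  custom_subdomain_name = "cs-guardrails-demo"
}

# Expose the endpoint for CI steps that call the content-analysis APIs.
output "content_safety_endpoint" {
  value = azurerm_cognitive_account.content_safety.endpoint
}
```

A CI job can then read `content_safety_endpoint` and fail the build when screened prompts or generated content trip the configured categories.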
## Implementing DLP Policies in Copilot Studio

### 1. Access the Power Platform Admin Center

- Navigate to the Power Platform admin center
- Ensure you have the Tenant Admin or Environment Admin role

### 2. Create a DLP Policy

- Go to Data Policies → New Policy
- Define data groups:
  - Business (trusted connectors)
  - Non-business
  - Blocked (e.g., HTTP, social channels)

### 3. Configure Enforcement for Copilot Studio

Enable DLP enforcement for copilots using PowerShell:

```powershell
Set-PowerVirtualAgentsDlpEnforcement `
  -TenantId <tenant-id> `
  -Mode Enabled
```

Modes:

- Disabled (default, no enforcement)
- SoftEnabled (blocks updates)
- Enabled (full enforcement)

### 4. Apply the Policy to Environments

- Choose scope: all environments, specific environments, or exclude certain environments
- Block channels (e.g., Direct Line, Teams, Omnichannel) and connectors that pose risk

### 5. Validate and Monitor

- Use Microsoft Purview audit logs for compliance tracking
- Configure user-friendly DLP error messages with an admin contact and "Learn More" links for makers

## Implementing ALM Workflows in Copilot Studio

### Environment Strategy

- Use Managed Environments for structured development
- Separate Dev, Test, and Prod clearly
- Assign roles for makers and approvers

### Application Lifecycle Management (ALM)

- Configure solution-aware agents for packaging and deployment
- Use Power Platform pipelines for automated movement across environments

### Govern Publishing

- Require admin approval before publishing copilots to the organizational catalog
- Enforce role-based access and connector governance

### Integrate Compliance Controls

- Apply Microsoft Purview sensitivity labels and enforce retention policies
- Monitor telemetry and usage analytics for policy alignment

## Key Takeaways

- Guardrails are essential for safe, compliant AI-assisted development.
- Combine GitHub Copilot productivity with Azure AI Content Safety for robust protection.
- Govern agents and data using Copilot Studio.
- Azure AI Foundry operationalizes Responsible AI across the full GenAIOps lifecycle.
- Responsible AI is not a blocker—it's an enabler of scale, trust, and long-term innovation.

# 🚀 Securing Enterprise AI: From Red Teaming to Risk Cards and Azure Guardrails
AI is no longer experimental—it's deeply embedded in critical business workflows. From copilots to decision intelligence systems, organizations are rapidly adopting large language models (LLMs). But here's the reality: AI is not just another application layer—it's a new attack surface.

## 🧠 Why Traditional Security Thinking Fails for AI

In traditional systems:

- You secure APIs
- You validate inputs
- You control access

In AI systems:

- The input is language
- The behavior is probabilistic
- The attack surface is conversational

👉 Which means: you don't just secure infrastructure—you must secure behavior.

## 🔍 How AI Systems Actually Break (Red Teaming)

To secure AI, we first need to understand how it fails.

👉 Red teaming = intentionally trying to break your AI system using prompts.

### 🧩 Common Attack Patterns

- 🪤 **Jailbreaking** — "Ignore all previous instructions…" → attempts to override system rules
- 🎭 **Role playing** — "You are a fictional villain…" → AI behaves differently under alternate identities
- 🔀 **Prompt injection** — hidden instructions inside documents or inputs → critical risk in RAG systems
- 🔁 **Iterative attacks** — repeatedly refining prompts until the model breaks

💡 Key insight: AI attacks are not deterministic—they are creative, iterative, and human-driven.

## 📊 From Attacks to Understanding Risk (Risk Cards)

Knowing how AI breaks is only half the story. 👉 You need a structured way to define and communicate risk.

### 🧠 What are Risk Cards?

A risk card helps answer:

- What can go wrong? (Hazard)
- What is the impact? (Harm)
- How likely is it? (Risk)

### 🧪 Example: Prompt Injection

- Risk: External input overrides system behavior
- Harm: Data leakage, loss of control
- Affected: Organization, users
- Mitigation: Input sanitization + prompt isolation

### 🧪 Example: Hallucination

- Risk: Incorrect or fabricated output
- Harm: Wrong business decisions
- Mitigation: Grounding using trusted data (RAG)

💡 Critical insight: AI risk is not model-specific—it is context-dependent.

## 🏢 Designing Secure AI Systems in Azure

Now let's translate all of this into real-world enterprise architecture.

### 🔷 Secure AI Reference Architecture — 🧱 Architecture Layers

1. **User Layer** — applications / APIs; Azure Front Door + WAF
2. **Security Layer (first line of defense)** — Azure AI Content Safety; input validation; prompt filtering
3. **Orchestration Layer** — Azure Functions / AKS; prompt templates; context builders
4. **Model Layer** — Azure OpenAI; locked system prompts
5. **Grounding Layer (RAG)** — Azure AI Search; trusted enterprise data
6. **Output Control Layer** — response filtering; sensitive data masking
7. **Monitoring & Governance** — Azure Monitor; Defender for Cloud

### 🔐 Core Security Principles

- ❌ Never trust user input
- ✅ Validate both input and output
- ✅ Separate system and user instructions
- ✅ Ground responses in trusted data
- ✅ Monitor everything

## ⚠️ Real-World Risk Scenarios

- 🚨 **Prompt injection via documents** — malicious instructions hidden in uploaded files. 👉 Mitigation: document sanitization + prompt isolation
- 🚨 **Data leakage** — AI exposes sensitive or cross-user data. 👉 Mitigation: RBAC + tenant isolation
- 🚨 **Tool misuse (AI agents)** — AI triggers unintended real-world actions. 👉 Mitigation: approval workflows + least privilege
- 🚨 **Gradual jailbreak** — user bypasses safeguards over multiple interactions. 👉 Mitigation: session monitoring + context reset

## 📊 Operationalizing AI Security: Risk Register

To move from theory to execution, organizations should maintain a risk register. 🧠 Example (score = impact × likelihood):

| Risk | Impact | Likelihood | Score |
|---|---|---|---|
| Prompt Injection | 5 | 4 | 20 |
| Hallucination | 5 | 4 | 20 |
| Bias | 5 | 3 | 15 |

👉 This enables prioritization, governance, and executive visibility.

## 🚀 Bringing It All Together

Let's simplify everything:

- 👉 Red teaming shows how AI breaks
- 👉 Risk cards define what can go wrong
- 👉 Architecture determines whether you are secure

💡 One line for leaders: "Deploying AI without guardrails is like exposing an API to the internet—understanding attack patterns and implementing layered defenses is essential for safe enterprise adoption."

## 🙌 Final Thought

AI is powerful—but power without control is risk. The organizations that succeed will not just build AI systems… they will build secure, governed, and resilient AI platforms.

# Save the date - January 26, 2026 - AMA: Best practices for applying Zero Trust using Intune
Join us on January 26 at 10:00 AM PT to Ask Microsoft Anything (AMA) and get the answers you need to implement the right policies, security settings, device configurations, and more.

Never trust, always verify. Tune in for tips and insights to help you secure your endpoints using Microsoft Intune as part of your larger Zero Trust strategy. Find out how you can use Intune to protect both access and data on organization-owned devices and personal devices used for work.

Go to aka.ms/AMA/IntuneZeroTrust and select "Attend" to add this event to your calendar. Have questions? Submit them early by signing in to Tech Community and posting them on the event page!