Recent Discussions
Short survey: Feedback on Sensitivity Label Suggestions in Microsoft 365 Apps
Hi everyone, I’m looking to gather feedback on user experiences with Sensitivity Label suggestions in Microsoft 365 apps. This short survey aims to understand how label recommendations are working in practice and where improvements may be needed. Your responses will help identify common challenges and opportunities to make the label recommendation process more accurate, useful, and seamless for users. Survey link: Experience with Recommended Sensitivity Labels in Microsoft 365 – Fill out form. The survey takes around 3 minutes to complete. Your feedback will directly help us better understand real-world experiences with label suggestions. Thank you very much for taking the time to contribute.

Security Copilot Agents in Defender XDR: where things actually stand
With RSAC 2026 behind us and the E5 inclusion now rolling out between April 20 and June 30, anyone planning SOC workflows or sitting on a capacity budget needs to get a clear picture of what is GA, what is preview, and what was just announced. The marketing pages tend to blur those lines. This is my sober look at the current state, with the operational details that matter for adoption decisions.

What is actually shipping right now

The Phishing Triage Agent is GA. It only handles user-reported phish through Defender for Office 365 P2, but for most SOCs that is a meaningful chunk of the L1 queue. Verdicts come with a natural-language rationale rather than just a label, which is the part that determines whether analysts will trust it. The agent learns from analyst confirmations and overrides, so the feedback loop matters more than the initial setup.

There is a setup detail that is easy to miss: the agent will not classify alerts that have already been suppressed by alert tuning. The built-in rule "Auto-Resolve - Email reported by user as malware or phish" needs to be off, and any custom tuning rules that touch this alert type need review. If you skip this, the agent runs on an empty queue and you wonder why nothing is happening.

The Threat Intelligence Briefing Agent is also GA. It produces tenant-tailored intel briefings on a regular cadence. Useful, but lower operational impact than the triage agents.

Copilot Chat in Defender went GA with the April 2026 update. Conversational Q&A inside the portal, grounded in your incident and entity data. This is the lowest-risk way to get value out of Security Copilot and probably where most teams should start.

Public preview, worth watching

The Dynamic Threat Detection Agent is the most technically interesting one. It runs continuously in the Defender backend, correlates across Defender and Sentinel telemetry, generates its own hypotheses, and emits a dynamic alert when the evidence converges.
Detection source on the alert is Security Copilot. Each alert includes the structured fields (severity, MITRE techniques, remediation) plus a narrative explaining the reasoning. For EU tenants the residency point is worth confirming with whoever owns data protection in your org: the service runs region-local, so customer data and required telemetry stay inside the designated geographic boundary. During public preview it is enabled by default for eligible customers and is free. At GA, currently targeted for late 2026, it transitions to the SCU consumption model and can be disabled.

The Threat Hunting Agent is also in public preview. Natural language to KQL with guided hunting. Lower stakes, but useful for teams without deep KQL expertise on hand.

Announced at RSAC, still preview

Two agents got the headlines in March:

The Security Alert Triage Agent extends the agentic triage approach beyond phishing into identity and cloud alerts. The longer-term direction is consolidating phishing, identity, and cloud triage under a single agent. Rollout is from April 2026, in preview.

The Security Analyst Agent is the multi-step investigation agent. Deeper context across Defender and Sentinel, prioritised findings, transparent reasoning trace. Preview since March 26.

Both look promising on paper, but Microsoft's history of preview features that take a long time to mature is well-documented. I would not plan production workflows around either of them yet.

What you actually get with the E5 inclusion

This is the licensing change most people are dealing with right now. Security Copilot has been part of the E5 product terms since January 1, 2026. Tenant rollout is phased between April 20 and June 30, 2026, with a 7-day notification before activation.
The numbers:

- 400 SCUs per month for every 1,000 paid user licenses
- Capped at 10,000 SCUs per month, which you hit at around 25,000 seats
- Linear scaling below that, so a 3,000-seat tenant gets 1,200 SCUs per month
- No rollover, the pool resets monthly

What is included: chat, promptbooks, agentic scenarios across Defender, Entra, Intune, Purview, and the standalone portal. Agent Builder and the Graph APIs are in. If you also run Sentinel, the included SCUs apply to Security Copilot scenarios there.

What is not included: Sentinel data lake compute and storage. Those still run through Azure on the regular meters. Beyond the included pool you pay 6 USD per SCU pay-as-you-go, with 30 days notice before that mode kicks in.

Practical things worth knowing before activation

A few details that are easy to miss in the docs:

Under System > Settings > Copilot in Defender > Preferences, switch from Auto-generate to Generate on demand. Auto-generate will burn SCUs on incidents nobody is going to look at. Generate on demand gives you direct control.

In the Security Copilot portal workspace settings, check the data storage location and the data sharing toggle. Data sharing is on by default, which means Microsoft uses interaction data for product improvement. If your compliance position does not allow that, change it before agents start running. Changing it requires the Capacity Contributor role.

Agent runs are not equivalent to the same number of analyst chat prompts. A triage agent processing fifty alerts in one run consumes meaningfully more SCUs than fifty manual prompts on the same data. If you have a high-volume phishing pipeline, model that out before you flip the switch broadly. The usage dashboard in the Security Copilot portal breaks down consumption by day, user, and scenario.

Output quality depends on telemetry quality. Flaky connectors, gaps in log sources, or a high baseline of misconfigured alerts will produce verdicts that match.
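To make the capacity math concrete, here is a minimal sketch of the included-pool arithmetic. The function names are my own; the 400-per-1,000 rate, the 10,000 cap, and the 6 USD overage price come from the numbers stated earlier, and I'm assuming the allotment scales linearly with seat count as described:

```python
def included_scus(paid_seats: int) -> int:
    """Monthly SCU pool included with E5: 400 SCUs per 1,000 paid
    seats, linear scaling, capped at 10,000 SCUs per month."""
    return min(paid_seats * 400 // 1000, 10_000)


def overage_cost_usd(used_scus: int, paid_seats: int) -> int:
    """Pay-as-you-go cost beyond the included pool at 6 USD per SCU."""
    return max(used_scus - included_scus(paid_seats), 0) * 6


print(included_scus(3_000))            # 1200 SCUs for a 3,000-seat tenant
print(included_scus(25_000))           # 10000 - hits the cap
print(overage_cost_usd(1_500, 3_000))  # 300 SCUs over the pool -> 1800 USD
```

Since the pool resets monthly with no rollover, the overage figure applies per month; run it against your dashboard numbers before enabling high-volume agents.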
Connector health monitoring (the SentinelHealth table in Advanced Hunting is a sensible starting point) is a precondition.

The agents only improve if analysts feed the override loop. If your team treats the verdicts as background noise rather than confirming or correcting them, the feedback signal is lost and calibration stays where it shipped. That is a process problem, not a product problem, but it determines whether any of this is worth the SCUs.

A reasonable adoption order

A rough sequence that minimises capacity surprises:

1. Copilot Chat in Defender first. Lowest risk, immediate value through natural language Q&A in the investigation context.
2. Phishing Triage Agent on a controlled subset, with a review cadence in place. Check the built-in tuning rules first. Watch the SCU dashboard for the first month before adding anything else.
3. Let the Dynamic Threat Detection Agent run while it is in public preview, since it is default-on and free anyway. Compare its alerts against existing Sentinel detections.
4. Security Alert Triage Agent for identity and cloud once the phishing baseline is stable.
5. Establish a monthly review covering agent decisions, false-positive rate, SCU cost, and MTTD/MTTR trends.

Technically, agentic triage is moving past phishing into identity and cloud, and the Dynamic Threat Detection Agent represents a genuine attempt at the false-negative problem rather than just another rule engine. Licensing-wise, the E5 inclusion removes the biggest barrier to adoption that previously existed. The risk is enabling everything at once. Agents that nobody reviews are agents that consume capacity without delivering value, and the SCU dashboard is the only thing that will tell you that is happening. One agent, one use case, a 30-day baseline, then the next one. The order matters more than the speed.

Microsoft Entra Conditional Access Optimization Agent - Move from Static to Continuous Protection
Conditional Access has long been Microsoft Entra’s Zero Trust policy engine: powerful and flexible, but easy to misconfigure over time as the volume of policies grows. As tenants grow, new users and applications are added and new modern authentication methods are introduced continuously, and Conditional Access policies that once provided full coverage often drift into partial or inconsistent protection. This is an operational gap that introduces complexity and manageability challenges. The solution is the Conditional Access Optimization Agent, an AI‑powered agent integrated with Microsoft Security Copilot that continuously evaluates Conditional Access coverage and recommends targeted improvements aligned to Microsoft Zero Trust best practices. In this article, let us look at what problem the agent solves, how it works, and how it can best be used within a real‑world Entra Conditional Access strategy.

The problem: Conditional Access does not break loudly

Most Conditional Access issues are not caused by incorrect syntax or outright failure. Instead, they emerge gradually through continuous changes in the environment:

- New users are created but not included in existing policies
- New SaaS or enterprise apps bypass baseline controls
- MFA policies exist, but exclusions expand silently
- Legacy authentication or device code flow remains enabled for edge cases
- Multiple overlapping policies grow difficult to reason about

Tools like What‑If, Insights & Reporting, and Gap Analyzer workbooks help, but they all require manual review and interpretation. At enterprise scale, with large numbers of users and applications, this becomes increasingly reactive rather than preventative.

What is the Conditional Access Optimization Agent?

The Conditional Access Optimization Agent is one of the Microsoft Entra agents built to operate autonomously using Security Copilot. Its purpose is to continuously answer a critical question:
Are all users, applications, and agent identities protected by the right Conditional Access policies - right now?

The agent analyzes your tenant and recommends the following:

- Creating new policies
- Updating existing policies
- Consolidating similar policies
- Reviewing unexpected policy behavior patterns

All recommendations are reviewable and optional, with actions typically staged in Report‑Only mode before enforcement.

How the agent actually works

The agent operates in two distinct phases: first analysis, then recommendation and remediation.

During the analysis phase it evaluates the following:

- Enabled Conditional Access policies
- User, application, and agent identity coverage
- Authentication methods and device‑based controls
- Recent sign‑in activity (24‑hour evaluation window)
- Redundant or near‑duplicate policies

This phase identifies gaps, overlaps, and deviations from Microsoft’s learned best practices. The recommendation and remediation phase depends on those findings. Based on them, the agent can suggest the following:

- Enforcing MFA where coverage is missing
- Adding device compliance or app protection requirements
- Blocking legacy authentication and device code flow
- Consolidating policies that differ only by minor conditions
- Creating new policies in report‑only mode

Some recommendations offer one-click remediation, making it easy for administrators to review and enforce decisions appropriately.

What are its key capabilities?

Continuous coverage validation

The agent continuously checks for new users and applications that fall outside existing Conditional Access policy scope - one of the most common real‑world gaps in Zero Trust deployments.

Policy consolidation support

Large environments often accumulate near‑duplicate policies over time. The agent analyzes similar policy pairs and proposes consolidation, reducing policy sprawl while preserving intent.
Plain‑language explanations

Each recommendation includes a clear rationale explaining why the suggestion exists and what risk it addresses, helping administrators validate changes rather than blindly accepting automation.

Policy review reports (still in preview)

The agent can generate policy review reports that highlight spikes or dips in enforcement behavior, often early indicators of misconfiguration or unintended impact.

Beyond classic MFA and device controls, one of the most important use cases is passkey adoption campaigns (still in preview). The agent can:

- Assess user readiness
- Generate phased deployment plans
- Guide enforcement once prerequisites are met

This makes the agent not only a corrective tool but also a migration and modernization assistant for building phishing‑resistant authentication strategies.

Zero Trust strategies utilizing agents

For a mature Zero Trust strategy, the agent provides continuous assurance that Conditional Access intent does not drift as identities and applications evolve. The Conditional Access Optimization Agent does not replace architectural design or automatic policy enforcement; instead, it provides continuous evaluation, acts as an early‑warning system for policy drift, and serves as a force‑multiplier for identity teams managing change at scale. The goal is to close the gap between policy intent and actual use up front, instead of waiting for incident resolution and post‑hoc audits to surface it. Modern identity environments are dynamic by default. The Microsoft Entra Conditional Access Optimization Agent reflects a shift toward continuous validation and assisted governance, where policies are no longer assumed to be correct simply because they exist. For organizations already mature in Conditional Access, the agent offers operational resilience.
For those still building, it provides guardrails that scale with complexity but without removing human accountability.

Sentinel to Defender Portal Migration - my 5 Gotchas to help you
The migration to the unified Defender portal is one of those transitions where the documentation covers "what's new" but glosses over what breaks on cutover day. Here are the gotchas that consistently catch teams off-guard, along with practical fixes.

Gotcha 1: Automatic Connector Enablement

When a Sentinel workspace connects to the Defender portal, Microsoft auto-enables certain connectors - often without clear notification. The most common surprises:

| Connector | Auto-Enables? | Impact |
| --- | --- | --- |
| Defender for Endpoint | Yes | EDR telemetry starts flowing, new alerts created |
| Defender for Cloud | Yes | Additional incidents, potential ingestion cost increase |
| Defender for Cloud Apps | Conditional | Depends on existing tenant config |
| Azure AD Identity Protection | No | Stays in Sentinel workspace only |

Immediate action: Within 2 hours of connecting, navigate to Security.microsoft.com > Connectors & integrations > Data connectors and audit what auto-enabled. Compare against your pre-migration connector list and disable anything unplanned.

Why this matters: Auto-enabled connectors can duplicate data sources - ingesting the same telemetry through both Sentinel and Defender connectors inflates Log Analytics costs by 20-40%.

Gotcha 2: Incident Duplication

The most disruptive surprise. The same incident appears twice: once from a Sentinel analytics rule, once from the Defender portal's auto-created incident creation rule. SOC teams get paged twice, deduplication breaks, and MTTR metrics go sideways.

Diagnosis:

```kusto
SecurityIncident
| where TimeGenerated > ago(7d)
| summarize IncidentCount = count() by Title
| where IncidentCount > 1
| order by IncidentCount desc
```

If you see unexpected duplicates, the cause is almost certainly the auto-enabled Microsoft incident creation rule conflicting with your existing analytics rules.

Fix: Disable the auto-created incident creation rule in Sentinel Automation rules, and rely on your existing analytics rule > incident mapping instead.
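If you prefer to sanity-check an exported incident list offline, the same duplicate check takes a few lines of Python. This is an illustrative sketch, assuming you have exported the Title column from SecurityIncident; the function name and sample titles are my own:

```python
from collections import Counter


def find_duplicate_titles(titles):
    """Return {title: count} for incident titles appearing more than
    once, mirroring the KQL summarize/where diagnosis query."""
    return {title: n for title, n in Counter(titles).items() if n > 1}


# Illustrative export: the same alert surfaced by both a Sentinel
# analytics rule and the Defender portal's incident creation rule.
exported = [
    "Suspicious sign-in from unfamiliar location",
    "Suspicious sign-in from unfamiliar location",
    "Malware detected on endpoint",
]
print(find_duplicate_titles(exported))
# -> {'Suspicious sign-in from unfamiliar location': 2}
```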
This ensures incidents are created only through Sentinel's pipeline.

Gotcha 3: Analytics Rule Title Dependencies

The Defender portal matches incidents to analytics rules by title, not by rule ID. This creates subtle problems:

- Renaming a rule breaks the incident linkage
- Copying a rule with a similar title causes cross-linkage
- Two workspaces with identically named rules generate separate incidents for the same alert

Prevention checklist:

- Audit all analytics rule titles for uniqueness before migration
- Document the title-to-GUID mapping as a reference
- Avoid renaming rules en masse during migration
- Use a naming convention like <Severity>_<Tactic>_<Technique> to prevent collisions

Gotcha 4: RBAC Gaps

Sentinel workspace RBAC roles don't directly translate to Defender portal permissions:

| Sentinel Role | Defender Portal Equivalent | Gap |
| --- | --- | --- |
| Microsoft Sentinel Responder | Security Operator | Minor - name change |
| Microsoft Sentinel Contributor | Security Operator + Security settings (manage) | Significant - split across roles |
| Sentinel Automation Contributor | Automation Contributor (new) | New role required |

Migration approach: Create new unified RBAC roles in the Defender portal that mirror your existing Sentinel permissions. Test with a pilot group before org-wide rollout. Keep workspace RBAC roles for 30 days as a fallback.

Gotcha 5: Automation Rules Don't Auto-Migrate

Sentinel automation rules and playbooks don't carry over to the Defender portal automatically. The syntax has changed, and not all Sentinel automation actions are available in Defender.
Recommended approach:

1. Export existing Sentinel automation rules (screenshot condition logic and actions)
2. Recreate them in the Defender portal
3. Run both in parallel for one week to validate behavior
4. Retire Sentinel automation rules only after confirming Defender equivalents work correctly

Practical Migration Timeline

Phase 1 - Pre-migration (1-2 weeks before):

- Audit connectors, analytics rules, RBAC roles, and automation rules
- Document everything - titles, GUIDs, permissions, automation logic
- Test in a pilot environment first

Phase 2 - Cutover day:

- Connect workspace to Defender portal
- Within 2 hours: audit auto-enabled connectors
- Within 4 hours: check for duplicate incidents
- Within 24 hours: validate RBAC and automation rules

Phase 3 - Post-migration (1-2 weeks after):

- Monitor incident volume for duplication spikes
- Validate automation rules fire correctly
- Collect SOC team feedback on workflow impact
- After 1 week of stability: retire legacy automation rules

Phase 4 - Cleanup (2-4 weeks after):

- Remove duplicate automation rules
- Archive workspace-specific RBAC roles once unified RBAC is stable
- Update SOC runbooks and documentation

The bottom line: treat this as a parallel-run migration, not a lift-and-shift. Budget 2 weeks for parallel operations. Teams that rushed this transition consistently reported longer MTTR during the first month post-migration.

Authentication Context (Entra ID) Use case
Microsoft Entra ID has evolved rapidly over the last few years, with Microsoft continuously introducing new identity, access, and security capabilities as part of the broader Zero Trust strategy. While many organizations hold the necessary Entra ID and Microsoft 365 licenses (often through E3 or E5 bundles), a number of these advanced features remain under‑utilised or entirely unused. This is frequently due to limited awareness, overlapping capabilities, or uncertainty about where and how these features provide real architectural value. One such under-used capability is Authentication Context. Although this feature has been available for quite some time, it is often misunderstood or overlooked because it does not behave like traditional Conditional Access controls. Consider Authentication Context a portable “assurance tag” that connects a resource (or a particular access route to that resource) to one or several Conditional Access (CA) policies, allowing security measures to be enforced with resource-specific accuracy instead of broad, application-wide controls. Put simply, it permits step-up authentication only when users access sensitive information or perform critical actions, while maintaining a smooth experience for the “regular path.” When used intentionally, it enables resource‑level and scenario‑driven access control, allowing organizations to apply stronger authentication only where it is actually needed, without increasing friction across the entire user experience.

Not expensive

Most importantly, the minimum licensing requirement for Authentication Context is Microsoft Entra ID Premium P1, which most customers already have, so you do not need to make the case for a higher license to use this feature.
Note, however, that Entra ID Premium P2 is needed if your Conditional Access policy uses advanced signals, such as:

- User or sign‑in risk (Identity Protection)
- Privileged Identity Management (PIM) protected roles
- Risk‑based Conditional Access policies

The Workflow

Architecturally, Authentication Context works through a claims request made as part of token issuance, commonly expressed via the acrs claim. When the request includes a specific context (for example c1), Entra evaluates CA policies that target that context and enforces the required controls (MFA, device compliance, trusted location, etc.). The important constraint: the context must be requested/triggered by a supported workload (e.g., SharePoint) or by an application designed to request the claim; it is not an automatic “detect any action inside any app” feature.

Let's look at a few high-level architecture references.

1. Define “assurance tiers” as contexts

Create a small set of contexts (e.g., c1: Confidential Access, c2: Privileged Operations) and publish them for use by supported apps/services.

2. Bind contexts to resources

Assign the context to the resource boundary you want to protect, most commonly SharePoint sites (directly or via sensitivity labels), so only those sites trigger the context (e.g., specific SharePoint sites like financials, agreements, etc.).

3. Attach Conditional Access policies to the context

Create CA policies that target the context and define enforcement requirements (additional MFA strength, mandated device compliance, or location constraints through named locations, etc.). The context is the “switch” that activates those policies at the right moment.

4. Validate runtime behavior and app compatibility

Because authentication context can impact some client apps and flows, validate supported clients and known limitations (especially for SharePoint/OneDrive/Teams integrations).
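To ground the acrs mechanics, here is a minimal sketch of the claims-request payload an application sends to require a given authentication context on the issued token. The JSON shape follows the Microsoft identity platform claims-request format; the helper function and the c1 value are illustrative:

```python
import json


def acrs_claims_request(context_id: str) -> str:
    """Build the claims request that asks Entra ID to enforce the
    Conditional Access authentication context `context_id` (e.g. "c1")
    on the access token it issues."""
    claims = {
        "access_token": {
            "acrs": {"essential": True, "value": context_id}
        }
    }
    return json.dumps(claims)


# An OIDC/OAuth2 app would pass this string as the `claims` parameter
# (or as an MSAL claims challenge) when requesting a token for the
# sensitive path; CA policies targeting c1 then force the step-up.
print(acrs_claims_request("c1"))
```

If the user's current session does not satisfy the CA policy bound to the context, Entra returns a claims challenge and the app re-authenticates with it, which is what produces the step-up prompt only on the protected path.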
Some Practical Business Scenarios

Scenario A - Confidential SharePoint Sites (M&A / Legal / HR)

Problem: You want stronger controls for a subset of SharePoint sites without forcing those controls for all SharePoint access.

Architect pattern: Tag the confidential site(s) with Authentication Context and apply a CA policy requiring stronger auth (e.g., compliant device + MFA) for that context.

Pre-reqs: SharePoint Online support for authentication context; appropriate licensing and admin permissions; CA policies targeted to the context.

Scenario B - “Step-up” Inside a Custom Line-of-Business App

Problem: Users can access the app normally, but certain operations (approval, export, privileged view) need elevated assurance.

Architect pattern: Build the app on OpenID Connect/OAuth2 and explicitly request the authentication context (via acrs) when the user reaches the sensitive path; CA then enforces step-up.

Pre-reqs: App integrated with the Microsoft identity platform using OIDC/OAuth2; the app can trigger claims requests/handle claims challenges where applicable; CA policies defined for the context.

Scenario C - Granular “Resource-based” Zero Trust Without Blanket MFA

Problem: Security wants strong controls on crown jewels, but business wants minimal prompts for routine work.

Architect pattern: Use authentication context to enforce higher assurance only for protected resources (e.g., sensitive SharePoint sites). This provides least privilege at the resource boundary while reducing global friction.

Pre-reqs: Clearly defined resource classification; authentication context configured and published; CA policies and monitoring.

In a nutshell, Authentication Context allows organizations to move beyond broad, one‑size‑fits‑all Conditional Access policies and adopt a more precise, resource‑driven security model.
By using it to link sensitive resources or protected access paths to stronger authentication requirements, organizations can improve security outcomes while minimizing unnecessary user friction. When applied deliberately and aligned to business‑critical assets, Authentication Context helps close the gap between licensing capability and real‑world value, turning underused Entra ID features into practical, scalable Zero Trust controls. If you find this useful, please do not forget to like and add your thoughts 🙂

Rescheduled Webinar: Copilot Skilling Series
Rescheduled Webinar Copilot Skilling Series | Security Copilot Agents, DSPM AI Observability, and IRM for Agents

Hello everyone! The Copilot Skilling Series webinar on Security Copilot Agents, DSPM AI Observability, and IRM for Agents, originally scheduled for April 16th, has been rescheduled for April 28th at 8:00 AM Pacific Time. We are sorry for the inconvenience and hope to see you there on the 28th. Please register for the updated time at http://aka.ms/securitycommunity

All the best!
The Security Community Team

Cancelled: Microsoft Security Store webinar
Hi everyone! Unfortunately, our webinar covering "A Day in the Life of an Identity Governance Manager Powered by Security Agents", scheduled for March 11th at 8:00 AM PT, has been cancelled. We truly apologize for the inconvenience. Please find other available webinars at https://aka.ms/SecurityCommunity

All the best!
The Microsoft Security Community Team

From “No” to “Now”: A 7-Layer Strategy for Enterprise AI Safety
The “block” posture on Generative AI has failed. In a global enterprise, banning these tools doesn't stop usage; it simply pushes intellectual property into unmanaged channels and creates a massive visibility gap in corporate telemetry. The priority has now shifted from stopping AI to hardening the environment so that innovation can run at velocity without compromising data sovereignty. Traditional security perimeters are ineffective against the “slow bleed” of AI leakage - where data moves through prompts, clipboards, and autonomous agents rather than bulk file transfers. To secure this environment, a 7-layer defense-in-depth model is required to treat the conversation itself as the new perimeter.

1. Identity: The Only Verifiable Perimeter

Identity is the primary control plane. Access to AI services must be treated with the same rigor as administrative access to core infrastructure. The strategy centers on enforcing device-bound Conditional Access, where access is strictly contingent on device health. To solve the "Account Leak" problem, the deployment of Tenant Restrictions v2 (TRv2) is essential to prevent users from signing into personal tenants using corporate-managed devices. For enhanced coverage, Universal Tenant Restrictions (UTR) via Global Secure Access (GSA) allows for consistent enforcement at the cloud edge. While TRv2 authentication-plane protection is GA, data-plane protection is GA for the Microsoft 365 admin center and remains in preview for other workloads such as SharePoint and Teams.

2. Eliminating the Visibility Gap (Shadow AI)

You can’t secure what you can't see. Microsoft Defender for Cloud Apps (MDCA) serves to discover and govern the enterprise AI footprint, while Purview DSPM for AI (formerly AI Hub) monitors Copilot and third-party interactions. By categorizing tools using MDCA risk scores and compliance attributes, organizations can apply automated sanctioning decisions and enforce session controls for high-risk endpoints.

3. Data Hygiene: Hardening the “Work IQ”

AI acts as a mirror of internal permissions. In a "flat" environment, AI acts like a search engine for your over-shared data. Hardening the foundation requires automated sensitivity labeling in Purview Information Protection. Identifying PII and proprietary code before assigning AI licenses ensures that labels travel with the data, preventing labeled content from being exfiltrated via prompts or unauthorized sharing.

4. Session Governance: Solving the “Clipboard Leak”

The most common leak in 2025 is not a file upload; it’s a simple copy-paste action or a USB transfer. Deploying Conditional Access App Control (CAAC) via MDCA session policies allows sanctioned apps to function while specifically blocking cut/copy/paste. This is complemented by Endpoint DLP, which extends governance to the physical device level, preventing sensitive data from being moved to unmanaged USB storage or printers during an AI-assisted workflow. Purview Information Protection with IRM rounds this out by enforcing encryption and usage rights on the files themselves. When a user tries to print a "Do Not Print" document, Purview triggers an alert that flows into Microsoft Sentinel. This gives the SOC visibility into actual policy violations instead of them having to hunt through generic activity logs.

5. The “Agentic” Era: Agent 365 & Sharing Controls

Now that we're moving from "Chat" to "Agents", Agent 365 and Entra Agent ID provide the necessary identity and control plane for autonomous entities. A quick tip: in large-scale tenants, default settings often present a governance risk. A critical first step is navigating to the Microsoft 365 admin center (Copilot > Agents) to disable the default “Anyone in organization” sharing option. Restricting agent creation and sharing to a validated security group is essential to prevent unvetted agent sprawl and ensure that only compliant agents are discoverable.

6. The Human Layer: “Safe Harbors” over Bans

Security fails when it creates more friction than the risk it seeks to mitigate. Instead of an outright ban, investment in AI skilling - teaching users context minimization (redacting specifics before interacting with a model) - is the better path. Providing a sanctioned, enterprise-grade "Safe Harbor" like M365 Copilot offers a superior tool that naturally cuts down the use of Shadow AI.

7. Continuous Ops: Monitoring & Regulatory Audit

Security is not a “set and forget” project, particularly with the EU AI Act on the horizon. Correlating AI interactions and DLP alerts in Microsoft Sentinel using Purview Audit data (specifically the CopilotInteraction logs) allows for real-time responses. Automated SOAR playbooks can then trigger protective actions - such as revoking an Agent ID - if an entity attempts to access sensitive HR or financial data.

Final Thoughts

Securing AI at scale is an architectural shift. By layering Identity, Session Governance, and Agentic Identity, AI moves from being a fragmented risk to a governed tool that actually works for the modern workplace.

Azure Cloud HSM: Secure, Compliant & Ready for Enterprise Migration
Azure Cloud HSM is Microsoft’s single-tenant, FIPS 140-3 Level 3 validated hardware security module service, designed for organizations that need full administrative control over cryptographic keys in the cloud. It’s ideal for migration scenarios, especially when moving on-premises HSM workloads to Azure with minimal application changes.

Onboarding & Availability

No Registration or Allowlist Needed: Azure Cloud HSM is accessible to all customers; no special onboarding or allowlisting is required.

Regional Availability:

- Private Preview: UK West
- Public Preview (March 2025): East US, West US, West Europe, North Europe, UK West
- General Availability (June 2025): All public, US Gov, and AGC regions where Azure Managed HSM is available

Choosing the Right Azure HSM Solution

Azure offers several key management options:

- Azure Key Vault (Standard/Premium)
- Azure Managed HSM
- Azure Payment HSM
- Azure Cloud HSM

Cloud HSM is best for:

- Migrating existing on-premises HSM workloads to Azure
- Applications running in Azure VMs or Web Apps that require direct HSM integration
- Shrink-wrapped software in IaaS models supporting HSM key stores

Common Use Cases:

- ADCS (Active Directory Certificate Services)
- SSL/TLS offload for Nginx and Apache
- Document and code signing
- Java apps needing a JCE provider
- SQL Server TDE (IaaS) via EKM
- Oracle TDE

Deployment Best Practices

1. Resource Group Strategy

Deploy the Cloud HSM resource in a dedicated resource group (e.g., CHSM-SERVER-RG). Deploy client resources (VM, VNET, Private DNS Zone, Private Endpoint) in a separate group (e.g., CHSM-CLIENT-RG).

2. Domain Name Reuse Policy

Each Cloud HSM requires a unique domain name, constructed from the resource name and a deterministic hash. There are four reuse types (Tenant, Subscription, ResourceGroup, and NoReuse); choose based on your naming and recovery needs.

3. Step-by-Step Deployment

Provision Cloud HSM: Use Azure Portal, PowerShell, or CLI. Provisioning takes ~10 minutes.
- Register the resource provider: Register-AzResourceProvider -ProviderNamespace Microsoft.HardwareSecurityModules
- Create a VNET and Private DNS Zone: set up networking in the client resource group.
- Create a Private Endpoint: connect the HSM to your VNET for secure, private access.
- Deploy an admin VM: use a supported OS (Windows Server, Ubuntu, RHEL, CBL Mariner) and download the Azure Cloud HSM SDK from GitHub.

Initialize and Configure

- Edit azcloudhsm_resource.cfg: set the hostname to the private link FQDN for hsm1 (found in the Private Endpoint DNS configuration).
- Initialize the cluster: use the management utility (azcloudhsm_mgmt_util) to connect to server 0 and complete initialization.
- Partition Owner key management: generate the PO key securely (preferably offline), store PO.key on an encrypted USB drive in a physical safe, then sign the partition certificate and upload it to the HSM.
- Promote roles: promote the Precrypto Officer (PRECO) to Crypto Officer (CO) and set a strong password.

Security, Compliance, and Operations

- Single-tenant isolation: only your organization has administrative access to your HSM cluster.
- No Microsoft access: Microsoft cannot access your keys or credentials.
- FIPS 140-3 Level 3 compliance: all hardware and firmware are validated and maintained by Microsoft and the HSM vendor.
- Tamper protection: physical and logical tamper events trigger key zeroization.
- No free tier: billing starts upon provisioning and covers all three HSM nodes in the cluster.
- No key sharing with Azure services: Cloud HSM is not integrated with other Azure services for key usage.

Operational Tips

- Credential management: store PO.key offline; use environment variables or Azure Key Vault for operational credentials. Rotate credentials regularly and document all procedures.
- Backup & recovery: backups are automatic and encrypted; always confirm backup/restore after initialization.
- Support: all support is through Microsoft; open a support request for any issues.

Azure Cloud HSM vs. Azure Managed HSM

| Feature / Aspect | Azure Cloud HSM | Azure Managed HSM |
| --- | --- | --- |
| Deployment model | Single-tenant, dedicated HSM cluster (Marvell LiquidSecurity hardware) | Multi-tenant, fully managed HSM service |
| FIPS certification | FIPS 140-3 Level 3 | FIPS 140-2 Level 3 |
| Administrative control | Full admin control (Partition Owner, Crypto Officer, Crypto User roles) | Azure manages the HSM lifecycle; customers manage keys and RBAC |
| Key management | Customer-managed keys and partitions; direct HSM access | Azure-managed HSM; customer-managed keys via Azure APIs |
| Integration | PKCS#11, OpenSSL, JCE, KSP/CNG, direct SDK access | Azure REST APIs, Azure CLI, PowerShell, Key Vault SDKs |
| Use cases | Migration from on-prem HSMs, legacy apps, custom PKI, direct cryptographic ops | Cloud-native apps, SaaS, PaaS, Azure-integrated workloads |
| Network access | Private VNET only; not accessible by other Azure services | Accessible by Azure services (e.g., Storage, SQL, Disk Encryption) |
| Key usage by Azure services | Not supported | Supported (disk, storage, SQL encryption, etc.) |
| BYOK / key import | Supported (with key wrap methods) | Supported (with Azure Key Vault import tools) |
| Key export | Supported (if enabled at key creation) | Supported (with exportable keys) |
| Billing | Hourly fee per cluster (3 HSMs per cluster); always on | Consumption-based (per operation, per key, per hour) |
| Availability | High availability via 3-node cluster; automatic failover and backup | Geo-redundant, managed by Azure |
| Firmware management | Microsoft manages firmware; customers cannot update it | Fully managed by Azure |
| Compliance | FIPS 140-3 Level 3, single-tenant isolation | FIPS 140-2 Level 3, multi-tenant isolation |
| Best for | Enterprises migrating on-prem HSM workloads; custom/legacy integration needs | Cloud-native workloads, Azure service integration, simplified management |

When to Choose Each?
Azure Cloud HSM is ideal if you:
- Need full administrative control and single-tenant isolation.
- Are migrating existing on-premises HSM workloads to Azure.
- Require direct HSM access for legacy or custom applications.
- Need to meet the highest compliance standards (FIPS 140-3 Level 3).

Azure Managed HSM is best if you:
- Want a fully managed, cloud-native HSM experience.
- Need seamless integration with Azure services (Storage, SQL, Disk Encryption, etc.).
- Prefer simplified key management with Azure RBAC and APIs.
- Are building new applications or SaaS/PaaS solutions in Azure.

| Scenario | Recommended Solution |
| --- | --- |
| Migrating on-prem HSM to Azure | Azure Cloud HSM |
| Cloud-native app needing Azure service keys | Azure Managed HSM |
| Custom PKI or direct cryptographic operations | Azure Cloud HSM |
| SaaS/PaaS with Azure integration | Azure Managed HSM |
| Highest compliance, single-tenant isolation | Azure Cloud HSM |
| Simplified management, multi-tenant | Azure Managed HSM |

Azure Cloud HSM is the go-to solution for organizations migrating HSM-backed workloads to Azure, offering robust security, compliance, and operational flexibility. By following best practices for onboarding, deployment, and credential management, you can ensure a smooth and secure transition to the cloud.

Microsoft Sentinel Graph with Microsoft Security Solutions
Why I Chose Sentinel Graph

Modern security operations demand speed and clarity. Attackers exploit complex relationships across identities, devices, and workloads. I needed a solution that could:
- Correlate signals across identity, endpoint, and cloud workloads.
- Predict lateral movement and highlight the blast radius of compromised accounts.
- Integrate seamlessly with Microsoft Defender, Entra ID, and Purview.

Sentinel Graph delivered exactly that, acting as the reasoning layer for AI-driven defense.

What's New: Sentinel Graph Public Preview

Sentinel Graph introduces:
- Graph-based threat hunting: traverse relationships across millions of entities.
- Blast radius analysis: visualize the impact of compromised accounts or assets.
- AI-powered reasoning: built for integration with Security Copilot.
- Native integration with Microsoft Defender and Purview for a unified security posture.

Uncover Hidden Security Risks

Sentinel Graph helps security teams:
- Expose lateral movement paths that attackers could exploit.
- Identify choke points where defenses can be strengthened.
- Reveal risky relationships between identities, devices, and resources that traditional tools miss.
- Prioritize remediation by visualizing the most critical nodes in an attack path.

This capability transforms threat hunting from reactive alert triage to proactive risk discovery, enabling defenders to harden their environment before an attack occurs.

How to Enable Defense at All Stages

Sentinel Graph strengthens defense across:
- Prevention: identify choke points and harden critical paths before attackers exploit them.
- Detection: use graph traversal to uncover hidden attack paths and suspicious relationships.
- Investigation: quickly pivot from alerts to full graph-based context for deeper analysis.
- Response: contain threats faster by visualizing blast radius and isolating impacted entities.

This end-to-end approach ensures security teams can anticipate, detect, and respond with precision.
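Conceptually, blast-radius analysis is just a reachability traversal over an entity graph. The following is a toy illustration in plain Python, not the Sentinel Graph API, and the entity names are invented for the example:

```python
from collections import deque

def blast_radius(edges: dict[str, list[str]], start: str) -> set[str]:
    """Breadth-first search: every entity reachable from `start`."""
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in edges.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen - {start}

# Hypothetical access relationships: account -> resources it can reach
graph = {
    "user:alice": ["vm:web01", "spn:deploy-app"],
    "spn:deploy-app": ["db:payroll"],
}
print(sorted(blast_radius(graph, "user:alice")))
# ['db:payroll', 'spn:deploy-app', 'vm:web01']
```

Production graph engines add edge types, weights, and time windows on top of this idea, but the "everything downstream of a compromised node" intuition is the same.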
How I Implemented It

Step 1: Enabling Sentinel Graph
If you already have the Sentinel data lake, the graph is auto-provisioned when you sign in to the Microsoft Defender portal; the hunting graph and blast radius experiences appear directly in Defender. New to the data lake? Use the Sentinel data lake onboarding flow to enable both the data lake and the graph.

Step 2: Integration with Microsoft Defender
Practical examples from my project:
- Query: "Show me all entities connected to this suspicious IP address." → Revealed lateral movement attempts across multiple endpoints.
- Query: "Map the blast radius of a compromised account." → Identified linked service principals and privileged accounts for isolation.

Step 3: Integration with Microsoft Purview
- In Purview Insider Risk Management, follow the Data Risk Graph setup instructions.
- In Purview Data Security Investigations, enable the Data Risk Graph for sensitive data flow analysis.
- Example query: "Highlight all paths where sensitive data intersects with external connectors." → Helped detect risky data exfiltration paths.

Step 4: AI-Powered Insights
Using Microsoft Security Copilot, I asked it to predict the next hop for an attacker based on the current graph state and to identify choke points in an attack path. This reduced investigation time and improved proactive defense.

If you want to experience the power of Microsoft Sentinel Graph, here is how to get started:
1. Enable Sentinel Graph: in your Sentinel workspace, turn on the Sentinel data lake. The graph is auto-provisioned when you sign in to the Microsoft Defender portal.
2. Connect Microsoft security solutions: use built-in connectors to integrate Microsoft Defender, Microsoft Entra ID, and Microsoft Purview for unified visibility across identities, endpoints, and data.
3. Explore graph queries: start hunting with Sentinel notebooks, or take it a step further by integrating with Microsoft Security Copilot for natural-language investigations.
Example: "Show me the blast radius of a compromised account." or "Find everything connected to this suspicious IP address."

You can sign up here for a free preview of the Sentinel Graph MCP tools, which will roll out starting December 1, 2025.

Know MCP risks before you deploy!
The Model Context Protocol (MCP) is emerging as a powerful standard for enabling AI agents to interact with tools and data. However, like any evolving technology, MCP introduces new security challenges that organizations must address before deploying it in production environments.

Major MCP Vulnerabilities

MCP's flexibility comes with risks. Here are the most critical vulnerabilities:
- Prompt injection: attackers embed hidden instructions in user input, manipulating the model to trigger unauthorized MCP actions and bypass safety rules.
- Tool poisoning: malicious MCP servers provide misleading tool descriptions or parameters, tricking agents into leaking sensitive data or executing harmful commands.
- Remote code execution: untrusted servers can inject OS-level commands through compromised endpoints, enabling full control over the host environment.
- Unauthenticated access: rogue MCP servers bypass authentication and directly call sensitive tools, extracting internal data without user consent.
- Confused deputy (OAuth proxy): a malicious server misuses OAuth tokens issued for a trusted agent, performing unauthorized actions under a legitimate identity.
- MCP configuration poisoning: attackers silently modify approved configuration files so agents execute malicious commands as if they were part of the original setup.
- Token or credential theft: plaintext MCP config files expose API keys, cloud credentials, and access tokens, making them easy targets for malware or filesystem attacks.
- Path traversal: older MCP filesystem implementations allow navigation outside the intended directory, exposing sensitive project or system files.
- Token passthrough: some servers blindly accept forwarded tokens, allowing compromised agents to impersonate other services without validation.
- Session hijacking: session IDs appearing in URLs can be captured from logs or redirects and reused to access active sessions.
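As a concrete illustration of the path traversal item above, a filesystem-backed tool can resolve every requested path and refuse anything that lands outside its sandbox root. This is a generic Python sketch, not taken from any particular MCP server implementation:

```python
from pathlib import Path

def safe_resolve(root: str, requested: str) -> Path:
    """Resolve `requested` under `root`, rejecting escapes such as '../../etc/passwd'."""
    base = Path(root).resolve()
    target = (base / requested).resolve()
    if not target.is_relative_to(base):  # Path.is_relative_to needs Python 3.9+
        raise PermissionError(f"path escapes sandbox: {requested}")
    return target
```

The key point is normalizing symlinks and `..` segments (via `resolve()`) before the containment check, rather than string-matching on the raw input.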
Current Known Limitations

While MCP is promising, it has structural limitations that organizations must plan for:
- No native tool authenticity verification: there is no built-in mechanism to verify that a tool or server is genuine. Trust relies on external validation, increasing exposure to tool poisoning attacks.
- Weak context isolation: multi-session environments risk cross-contamination, where sensitive data from one session leaks into another.
- Limited built-in encryption enforcement: MCP depends on HTTPS/TLS for secure communication but does not enforce encryption across all channels by default.
- Monitoring and auditing gaps: MCP lacks native logging and auditing capabilities. Organizations must integrate with external SIEM tools like Microsoft Sentinel for visibility.
- Dynamic registration risks: current implementations allow dynamic client registration without granular controls, enabling rogue client onboarding.
- Scalability constraints: large-scale deployments require manual tuning for performance and security; there is no standardized approach to load balancing or high availability.
- Configuration management challenges: credentials are often stored in plaintext within MCP config files, and the lack of automated secret rotation or secure vault integration makes them vulnerable.
- Limited standardization across vendors: MCP is still evolving, and interoperability between implementations is inconsistent, creating integration complexity.

Mitigation Best Practices

To reduce risk and strengthen MCP deployments:
- Enforce OAuth 2.1 with PKCE and strong RBAC.
- Use HTTPS/TLS for all MCP communications.
- Deploy MCP servers in isolated networks with private endpoints.
- Validate tools before integration; avoid untrusted sources.
- Integrate with Microsoft Defender for Cloud and Sentinel for monitoring.
- Encrypt and rotate credentials; never store them in plaintext.
- Implement policy-as-code for configuration governance.
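For the credential guidance above ("encrypt and rotate credentials; never store them in plaintext"), one low-tech pattern is to keep only the *name* of a secret in the config file and read the value from the environment or a vault at startup. A sketch with an illustrative config shape — the `api_key_env` field is invented for this example and is not part of the MCP specification:

```python
import json
import os

def load_server_config(path: str) -> dict:
    """Load a tool-server config, refusing embedded plaintext secrets.

    The config may name an environment variable via `api_key_env`;
    the secret value itself is never written to disk.
    """
    with open(path) as f:
        cfg = json.load(f)
    if "api_key" in cfg:
        raise ValueError("plaintext api_key in config; move it to env or a vault")
    env_name = cfg.get("api_key_env")
    if env_name:
        cfg["api_key"] = os.environ[env_name]  # raises KeyError if the secret is unset
    return cfg
```

The same indirection works with a managed vault: replace the `os.environ` lookup with a vault-client call and the on-disk config stays secret-free either way.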
MCP opens new possibilities for AI-driven automation, but without robust security it can become an attack vector. Organizations must start with a secure baseline, continuously monitor, and adopt best practices to operationalize MCP safely.

Microsoft Sentinel device log destination roadmap
I just attended the 11/5/2025 Microsoft webinar "Adopting Unified Custom Detections in Microsoft Sentinel via the Defender Portal: Now Better Than Ever" and my question posted to Q&A was not answered by the team delivering the session. The moderator told us that if our question was not answered we should post it in this forum. Here is the question again: "Will firewall and other device logs continue to go to Azure Log Analytics indefinitely? By indefinitely I mean not changing in the roadmap to something else like Data Lake or Event Grid/Service Bus, etc." Thank you, John

Using Microsoft Graph Security API for Custom Security Automations
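A recurring concern with this API (raised in the question below) is pagination and throttling. The usual pattern is to follow `@odata.nextLink` until it disappears and to honor the `Retry-After` header on HTTP 429 responses. A sketch with the HTTP call injected as a plain function so the paging logic stays testable; acquiring the bearer token (app registration or managed identity) is assumed to happen inside that injected helper:

```python
import time
from typing import Callable, Iterator

ALERTS_URL = "https://graph.microsoft.com/v1.0/security/alerts"

# fetch(url) -> (status_code, response_headers, parsed_json_body)
Fetch = Callable[[str], tuple[int, dict, dict]]

def iter_alerts(fetch: Fetch, url: str = ALERTS_URL) -> Iterator[dict]:
    """Yield every alert across all pages, retrying when throttled."""
    while url:
        status, headers, body = fetch(url)
        if status == 429:
            # Graph throttling: wait the server-suggested interval, retry same page
            time.sleep(int(headers.get("Retry-After", "5")))
            continue
        if status != 200:
            raise RuntimeError(f"Graph request failed with HTTP {status}")
        yield from body.get("value", [])
        url = body.get("@odata.nextLink")  # absent on the final page
```

Because `fetch` is injected, the same generator works with `requests`, `urllib`, or a mocked transport in unit tests, and continuous-polling jobs can wrap it with their own checkpointing.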
Hi Security Experts, I've recently started exploring the Microsoft Graph Security API to centralize and automate security operations across different Microsoft 365 services. The idea is to build a single automation layer that can:
- Collect alerts from Defender for Endpoint, Defender for Cloud, and Identity Protection;
- Enrich them with context (user, device, and location data);
- Automatically push them to an external system like Jira, n8n, or a custom SOAR workflow.

I was able to authenticate and list alerts using the endpoint GET https://graph.microsoft.com/v1.0/security/alerts. However, I'm still trying to understand best practices for handling rate limits, pagination, and permissions, especially when integrating continuous polling or real-time ingestion into external tools. Has anyone here implemented Graph Security API automations in production? I'd love to hear about your experiences, specifically around performance, alert filtering, and authentication (app registration vs. managed identity). Thanks in advance, Luca

High CPU Usage by Microsoft Defender (MsMpEng.exe) on Azure Windows Server 2019
Hi everyone, I've been seeing consistent CPU spikes from MsMpEng.exe (Antimalware Service Executable) on several Windows Server 2019 Datacenter VMs hosted in Azure. The usage reaches 100% for about 10–15 minutes daily, always around the same time. No manual scans are scheduled, and limiting CPU usage with Set-MpPreference -ScanAvgCPULoadFactor didn't help. Could this be related to Defender's cloud protection update cycle, or possibly a backend maintenance task from Defender for Cloud? Is there a recommended way to throttle or schedule these background Defender tasks in production environments? Appreciate any insights, Luca

Defender for Endpoint Conflicting with Internal Firewall Authentication
Hi Security Experts, After onboarding a few devices into Defender for Endpoint, I noticed that those machines started having connection drops to the company's internal firewall: they constantly re-authenticate before regaining web access. Devices not onboarded into Defender don't experience this issue. Could Defender's network protection or proxy policies be interfering with the internal firewall's authentication flow? Any recommendations on how to keep Defender active while keeping the internal firewall as the primary control point? Thanks for any suggestions, Luca

Automating Defender Alerts with CISA KEV and n8n – Has anyone tried similar workflows?
Hi everyone, I've been experimenting with n8n automation to improve vulnerability management. I created a workflow that cross-references Microsoft Defender for Endpoint vulnerabilities with the CISA Known Exploited Vulnerabilities (KEV) catalog and then automatically creates Jira tickets for remediation. The flow takes about 16 seconds to run and prioritizes only the CVEs that are both present in the environment and listed in KEV. Has anyone here built similar automation (maybe with Logic Apps, Power Automate, or Sentinel playbooks)? Would love to hear how others handle vulnerability prioritization and ticket creation!

Automating Defender Alerts with the CISA KEV Catalog Using n8n
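Whatever the orchestrator (n8n, Logic Apps, or a Sentinel playbook), the core of this KEV cross-referencing is a set intersection between the CVEs Defender reports in your environment and the CVE IDs in the catalog. A minimal Python sketch; the feed URL is CISA's published KEV JSON feed and `cveID` is the field name in that schema, but verify both against the current catalog before relying on them:

```python
import json
import urllib.request

KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

def kev_cve_ids(catalog: dict) -> set[str]:
    """Extract the set of CVE IDs from a parsed KEV catalog document."""
    return {v["cveID"] for v in catalog.get("vulnerabilities", [])}

def prioritize(environment_cves: set[str], kev_ids: set[str]) -> set[str]:
    """CVEs that are both present in the environment and known-exploited per KEV."""
    return environment_cves & kev_ids

def fetch_kev() -> dict:
    """Download and parse the live KEV catalog (network access required)."""
    with urllib.request.urlopen(KEV_URL) as resp:
        return json.load(resp)
```

Only the CVEs returned by `prioritize` would then flow on to the ticket-creation step, which is what keeps the remediation queue focused on actively exploited findings.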
Overview

Recently, I decided to explore how automation could help simplify daily security operations, especially vulnerability management. While studying n8n, an open-source automation platform, I saw the opportunity to connect it with Microsoft Defender for Endpoint and the CISA Known Exploited Vulnerabilities (KEV) Catalog. The goal was simple: build an automated workflow that identifies which vulnerabilities detected in Defender are actively exploited in the wild, and then creates actionable tickets in Jira for remediation teams, automatically and with full context.

Why I Built This

Most security teams deal with thousands of vulnerabilities every week, but only a small portion are actually being exploited. I wanted a way to prioritize what truly matters without adding manual work. Defender for Endpoint already provides strong vulnerability data, but combining it with the CISA KEV catalog instantly highlights the high-risk CVEs that need urgent attention. This project was also a great opportunity to test n8n's flexibility and API-handling capabilities in a real-world cybersecurity scenario.

How to Resolve Microsoft Authenticator App Issues
The Microsoft Authenticator app is a critical tool for securing accounts through multi-factor authentication (MFA). However, users may sometimes experience issues such as login failures, missing notifications, or app crashes. This guide walks you through troubleshooting and resolving common Microsoft Authenticator app problems. https://dellenny.com/how-to-resolve-microsoft-authenticator-app-issues/
Events
AMA: What’s New in Microsoft Purview Data Security Investigations
Join us to learn about the latest updates to Microsoft Purview Data Security Investigations (DSI)—including new capabilities like t...
Monday, May 11, 2026, 09:00 AM PDT
Online
Recent Blogs
- Co-authors: Kayla Rohde & Kenneth Johnson. Having multiple cybersecurity technologies, controls, systems, and stakeholders operating together without conflict is not a temporary inconvenience. It is... (Apr 30, 2026)
- Where AV helps—and what it may not cover: Antivirus engines and traditional code scanners are highly effective at identifying known or suspicious executable content, such as binaries, scripts, or e... (Apr 24, 2026)