## Declutter and Defend: Reducing promotional mail noise with Microsoft Defender
Enterprise inboxes are overwhelmed with graymail: legitimate bulk email such as newsletters, vendor promotions, and product updates that isn't malicious but buries the messages that matter. When high volumes of these messages land in the inbox, they crowd out priority communications and can dull security vigilance. Employees conditioned to ignore repetitive emails may miss signs of a real threat. Graymail also creates recurring work for admins and security teams, who must continuously tune filters, manage exception requests, and chase noise from user reports about email that isn't malicious. Because graymail passes every spam filter check, traditional defenses don't separate it, leaving this signal-to-noise gap unaddressed.

Today we're excited to announce that Microsoft Defender now includes built-in graymail filtering. It is delivered natively through a new Promotions experience in Outlook that automatically classifies and separates solicited bulk email so it no longer competes with business-critical communication in the inbox. Now in Public Preview, this capability learns from how users interact with graymail to become more accurate over time. Coupled with the existing Bulk Senders Insight report, Defender brings data-driven bulk classification and control into the security workflows you already use.

### What Is Graymail?

Graymail is legitimate, solicited bulk email that isn't malicious: product newsletters, event announcements, marketing promotions, and software update notifications from reputable, authenticated senders. It is distinct from spam, which is unsolicited, and from phishing, which involves malicious messages impersonating trusted entities. Because graymail comes from real organizations with proper authentication, traditional spam filters aren't designed to handle it; every technical indicator says these emails are legitimate, even as they collectively bury the messages that actually matter.
### Graymail handling in Microsoft Defender

Microsoft Defender's approach is built on three principles: classify intelligently, deliver natively, and learn continuously.

#### Promotions Folder: Intelligent Inbox Organization

A dedicated Promotions folder, natively provisioned in Outlook, now keeps legitimate bulk mail out of the primary inbox. Promotional content is separated from priority email without being sent to Junk, which means users can still access and browse newsletters and updates at their own pace. The folder appears at the top level of the mailbox for easy discovery and is visible across all Outlook experiences.

Non-spam bulk mail below the organization's configured Bulk Complaint Level (BCL) threshold is automatically routed to the Promotions folder. Messages from senders the user has explicitly allowed continue to land in the Inbox, and messages identified as spam continue to go to Junk.

To enable the Promotions folder, administrators need to turn on the "Bulk Moves Enabled" setting in their anti-spam policy. The Promotions folder is created for all users, and used for routing, only when this setting is ON; existing mail flow is unaffected.

#### Promotional Mail Tagging and Mailbox Rule Support

Messages classified as graymail are automatically labeled with a "Promotions" system tag in Outlook. The tag provides instant visual context without requiring users to open each message and is visible in Outlook on the web and the native Outlook desktop apps for Windows and Mac. During Public Preview, the tagging component is opt-in: administrators enable it by configuring an Exchange Transport Rule. Once generally available, it will be on by default. Because this classification is integrated at the client level, the Promotions tag can also be used as a condition in Outlook mailbox rules.
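The folder-routing rules described above can be summarized in a short sketch. This is illustrative pseudologic, not Defender's actual implementation; the default threshold value and the treatment of BCL 0 as "not bulk" are assumptions made here for clarity.

```python
# Illustrative sketch of the routing order described above -- NOT the actual
# Defender implementation. The threshold default and BCL semantics are assumptions.

def route_message(sender: str, is_spam: bool, bcl: int,
                  user_allow_list: set[str], bcl_threshold: int = 7) -> str:
    """Return the destination folder for an inbound message."""
    if sender in user_allow_list:
        return "Inbox"        # explicitly allowed senders always reach the Inbox
    if is_spam:
        return "Junk"         # spam handling is unchanged
    if 0 < bcl < bcl_threshold:
        return "Promotions"   # solicited bulk mail below the BCL threshold
    return "Inbox"            # non-bulk mail lands in the Inbox as before

print(route_message("news@vendor.example", False, 4, set()))  # Promotions
```

The key property to notice is the precedence: user allows beat everything, spam verdicts are untouched, and only the non-spam bulk band is diverted.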
This enables custom routing logic for advanced scenarios such as moving all promotions-tagged messages from a specific sender to a custom folder, flagging certain promotional emails for follow-up, or auto-forwarding or deleting promotions that meet specific criteria. It turns the Promotions classification from a one-way filter into a flexible building block for personal and organizational workflows, which is particularly valuable for power users and teams with compliance or archival requirements.

#### Adaptive Learning

Microsoft Defender's graymail filtering gets smarter with every interaction; the system learns directly from how users handle their mail. When a user moves a message out of the Promotions folder and back to the Inbox, future emails from that sender will no longer be placed in Promotions. When a user moves a message from the Inbox into the Promotions folder, future emails from that sender will be routed to Promotions automatically. This creates a personalized, self-improving experience that becomes more accurate over time: no manual rule configuration, no safe-sender lists to maintain, and no filtering rules for IT teams to manage on behalf of individual employees.

#### Built into Existing Security Workflows

Administrators also gain visibility through the Bulk Senders Insight report, which provides data-driven guidance on what your organization actually receives and can help tune thresholds.

Graymail has long been the unsolved middle ground of email security: too legitimate to block, too noisy to ignore. Microsoft Defender now handles it where it should be handled: inside the platform, inside the mailbox, and inside the security workflows your organization already relies on. No new portals, no new vendors, no compromise between security and user experience.

#### Get Started

Configure promotions tagging and the Promotions folder today: see the Bulk email detection documentation on Microsoft Learn.
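For the mailbox-rule scenarios mentioned earlier, rules can also be scripted. As a hedged sketch, the payload below builds a Microsoft Graph `messageRules` request body that moves mail from one bulk sender to a custom folder. Whether the Promotions tag itself is exposed as a Graph rule condition is not covered by this post, so the example keys on the sender instead; the sender address and folder ID are placeholders.

```python
import json

# Sketch: an inbox rule moving mail from a specific bulk sender to a custom folder.
# This body would be POSTed to
#   https://graph.microsoft.com/v1.0/me/mailFolders/inbox/messageRules
# (requires MailboxSettings.ReadWrite). Sender and folder id are placeholders.
rule = {
    "displayName": "Vendor newsletters to Reading folder",
    "sequence": 1,
    "isEnabled": True,
    "conditions": {
        "senderContains": ["news@vendor.example"],   # hypothetical sender
    },
    "actions": {
        "moveToFolder": "AAMkAD-placeholder-folder-id",  # placeholder id
        "stopProcessingRules": True,
    },
}

print(json.dumps(rule, indent=2))
```

For tag-based routing, the client-side Outlook rules UI described above remains the documented path.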
Monitor the experience using the Bulk Senders Insight report.

## Enable per‑user language selection for phishing simulation emails and landing pages
We use Attack Simulation Training to deliver phishing simulations to a global, multilingual user base. While Microsoft Defender supports multi‑language content, phishing simulation emails and landing pages are currently delivered in a single selected language per campaign.

We are requesting a feature that allows phishing simulation emails and associated landing pages (including credential‑harvest pages) to automatically render in each user's preferred language, based on:

- Outlook mailbox language settings, and/or
- Microsoft Entra ID user language preferences

This capability would:

- Improve realism and accuracy of phishing simulations
- Ensure users experience simulations in the same language they normally work in
- Improve behavioral measurement in global organizations
- Reduce the need to create and manage multiple parallel simulations by language

Providing consistent, per‑user language alignment across simulation emails, landing pages, and follow‑up training would significantly enhance the effectiveness of Attack Simulation Training for large, multilingual enterprises.

## Enable automatic per‑user language selection for Defender training modules
We use Attack Simulation Training and Microsoft Defender training modules as part of our security awareness program for a global audience. Currently, training content is assigned in a single language per campaign, even though users already have preferred language settings defined in Outlook and Microsoft Entra ID (Azure AD). This creates challenges for multinational organizations and often requires duplicating campaigns or accepting that some users receive training in a non‑preferred language.

We are requesting a capability that allows Defender training modules to automatically display in each user's preferred language, based on:

- Outlook mailbox language settings, and/or
- Microsoft Entra ID user language preferences

Enabling per‑user language selection would:

- Improve comprehension and learning outcomes
- Increase training effectiveness for non‑native speakers
- Reduce administrative overhead and duplicated campaigns
- Align Defender training with existing Microsoft 365 localization behavior

Defender already supports training content in multiple languages. Allowing dynamic language delivery per user would significantly improve scalability and usability for enterprise security awareness programs.

## Feature Request: Extend Security Copilot inclusion (M365 E5) to M365 A5 Education tenants
### Background

At Ignite 2025, Microsoft announced that Security Copilot is included for all Microsoft 365 E5 customers, with a phased rollout starting November 18, 2025. This is a significant step forward for security operations.

### The gap

Microsoft 365 A5 for Education is the academic equivalent of E5: it includes the same core security stack of Microsoft Defender, Entra, Intune, and Purview. However, the Security Copilot inclusion explicitly covers only commercial E5 customers. There is no public roadmap or timeline for extending this benefit to A5 education tenants.

### Why this matters

Education institutions face the same cybersecurity threats as commercial organizations, often with fewer dedicated security resources. The A5 license was positioned as the premium security offering for education. Excluding it from the Security Copilot inclusion creates an inequity between commercial and education customers holding functionally equivalent license tiers.

### Request

We would like Microsoft to:

- Confirm whether Security Copilot inclusion will be extended to M365 A5 Education tenants
- If yes, provide an indicative timeline
- If no, clarify the rationale and what alternative paths exist for education customers

Are other EDU admins in the same situation? Would appreciate any upvotes or comments to help raise visibility with the product team.

## VPN Integration not persistent
Hello,

We tried to configure VPN integration (https://learn.microsoft.com/en-us/defender-for-identity/vpn-integration) from a supported Cisco VPN gateway. We set up RADIUS Accounting logs to be sent to a DC with the MDI sensor installed. Yet when we enable this in the Defender portal (Settings > Identities > VPN) by checking the box and entering the shared secret, the configuration is not persistent. We hit Save and are presented with the green success message, but once we refresh the page or go elsewhere in the portal, the checkbox is no longer checked.

Has anyone encountered the same issue?

Thanks,
Simon

## Credential Exposure Risk & Response Workbook
### How to set up the Workbook

Use the steps outlined in the Identify and Remediate Credentials article to get the right rules in place to start capturing credential data. You may choose to use custom regex patterns or more specific SITs that align with your scenario. Once that is done, this workbook will help you.

This workbook transforms credential leakage detection into a measurable, executive-ready capability:

- End‑to‑end situational awareness: correlates alerts across workloads, departments, credential types, and users to surface material exposure quickly.
- Actionable triage & forensics: drill from trends to the artifact (message/file/URL), accelerating containment and root‑cause analysis.
- Risk‑aligned decisions: quantifies exposure and response performance (creation vs. resolution trends) to guide investment and policy changes.
- Audit‑ready governance: captures decisions, timelines, and outcomes for PCI/PII controls, identity hygiene, and secrets management.

### Prerequisites

License requirements for Microsoft Purview Information Protection depend on the scenarios and features you use. To understand your licensing requirements and options, see the Information Protection sections of the Microsoft 365 guidance for security & compliance and the related PDF download for feature-level licensing requirements.

Before you start, make sure Endpoint DLP is enabled so that all endpoint interaction with sensitive content is included in audit logging. For Microsoft 365 SharePoint, OneDrive, Exchange, and Teams, you can enable policies that generate events, but not incidents, for important sensitive information types.

Install Power BI Desktop to make use of the templates: Downloads - Microsoft Power BI

### Step-by-step guided walkthrough

In this guide, we provide high-level steps to get started using the new tooling.

1. Get the latest version of the report that you are interested in.
In this case, we will show the Board report.

2. Open the report. If Power BI Desktop is installed, it should look like this:
3. Authenticate with https://api.security.microsoft.com: select Organizational account, sign in, and then click Connect.
4. Authenticate with https://api.security.microsoft.com/api/advancedhunting: select Organizational account, sign in, and then click Connect.

### What the Workbook Delivers

The workbook turns a credential-protection program into something measurable. Combined with customers' outcome‑based metrics (operational risk, control risk, end‑user impact), it enables an executive‑level, data‑driven narrative for investment and policy decisions, delivering the situational awareness, triage, risk, and governance capabilities summarized above.

### Troubleshooting tips

If you receive a (400): Bad request error, you likely do not have the necessary endpoint tables in Advanced Hunting. These errors may also appear when empty values are passed from the left-hand side of the KQL queries.

### Detection trend

Apply filtering to this view based on the DLP policies that monitor credentials.

- Trend Analysis Over Time: displays daily detection counts, helping identify spikes in credential leakage activity and enabling proactive investigation.
- Workload and Credential Type Breakdown: shows which workloads (e.g., Endpoint, Exchange, OneDrive) and credential types are most affected, guiding targeted security measures.
- Detection Source Visibility: highlights which security tools (Sentinel, Cloud App Security, Defender) are catching leaks, ensuring monitoring coverage and identifying gaps.
- Detailed Credential Exposure: lists exposed credentials for quick validation and remediation, reducing the risk of misuse or compromise. (This part depends on the AI component.)
- Supports Incident Response: enables rapid triage by correlating detection trends with specific credentials and sources, improving response times.
- Compliance and Audit Readiness: provides clear evidence of credential monitoring and leakage detection for regulatory and governance reporting.

### Credential incident trends

- Lifecycle Tracking of Credential Alerts: visualizes creation and resolution trends over time, helping teams measure response efficiency and identify periods of heightened risk.
- Workload and Credential Type Breakdown: shows which workloads (Endpoint, Exchange, OneDrive) and credential types are most impacted, enabling targeted mitigation strategies.
- Incident Type Analysis: highlights the distribution of alerts by category (e.g., CredRisk, Agent), supporting prioritization of critical incidents.
- Detailed Alert Context: provides message IDs and associated credentials for precise investigation and remediation, reducing time to contain threats.
- Performance and SLA Monitoring: tracks resolution timelines to ensure compliance with internal security SLAs and regulatory requirements.
- Audit and Governance Support: offers clear evidence of alert handling and closure, strengthening accountability and reporting.

### Content view

- Workload-Level Risk Visibility: highlights which workloads (e.g., SharePoint, Endpoint) have the highest credential exposure, enabling targeted security hardening.
- Departmental Risk Breakdown: shows which departments (Security, Logistics, Sales) are most impacted, helping prioritise remediation for critical business areas.
- Credential Type Analysis: identifies exposed credential types such as API keys, shared access keys, and tokens, guiding policy enforcement and rotation strategies.
- User and Document Correlation: links exposed credentials to specific users and documents, supporting rapid investigation and containment of leaks.
- Comprehensive Drill-Down: enables navigation from department → credential type → user → document for precise root cause analysis.
- Governance and Compliance Support: provides auditable evidence of credential exposure across workloads and departments, strengthening regulatory reporting.

For endpoint, this view is an excellent way to catch applications that are not treating secrets in a safe way and expose them in temporary files.

### Force-directed graph

- Visual Alert Correlation: displays a force-directed graph linking users to alert categories, making it easy to identify patterns and clusters of credential-related risks.
- High-Risk User Identification: highlights users with multiple or severe alerts, enabling prioritisation for investigation and remediation.
- Credential Type and Department Context: shows which credential types and departments are most associated with alerts, supporting targeted security measures.
- Alert Severity and Details: provides a detailed table of alerts with severity and category, helping analysts quickly assess impact and urgency.
- Improved Threat Hunting: enables analysts to trace relationships between users, alert types, and credential exposure for deeper root cause analysis.
- Compliance and Reporting: offers clear evidence of monitoring and categorisation of credential-related alerts for governance and audit purposes.

### Security incidents correlated to credential leakage

- Focused on Credential Leakage: provides a dedicated view of alerts related to exposed credentials, enabling quick detection and response.
- Role-Based Risk Analysis: breaks down incidents by department and role, helping prioritise remediation for high-risk groups such as developers and security teams.
- User-Level Investigation: allows drill-down to individual users involved in credential-related alerts for rapid containment and corrective action.
- Credential Type Insights: highlights which types of credentials (e.g., API keys, passwords) are most vulnerable, guiding policy improvements and rotation strategies.
- Alert Source Correlation: displays which security tools (Sentinel, MCAS, Defender) are detecting leaks, ensuring coverage and identifying monitoring gaps.
- Compliance and Governance Support: offers auditable evidence of credential monitoring, supporting regulatory and internal security requirements.

### App and Network correlated to credential leakage

For network detection, adjust the query in production to remove standard applications if they are too noisy; we have seen cases where Word and other commonly used applications make calls using FTP services, for example, while other applications may simply add too much noise.

- Token Detection Event Traceability: shows detected token credential events linked directly to individual User IDs and Device IDs for investigation.
- Application Usage Context: identifies the application associated with the detected activity (ms‑teams.exe, as an example).
- External URL Association: displays the Remote URL connected to the token detection event.
- Remote IP Visibility: lists the Remote IP addresses associated with the activity.
- Entity-Level Correlation: links UserId, DeviceId, Application, Remote URL, and Remote IP within a single event flow. You can also select the port used or how apps are linked.
- Detection Count Aggregation: summarises the number of credential events tied to each correlated entity path.

Turn detection into decisions: deploy the workbook today to get measurable insights, accelerate triage, and deliver audit-ready governance.
Start driving risk-aligned investment and policy changes with confidence. The PBI report is located here. Based on what you identify, you may use tools such as Data Security Investigations to go deeper. We are also working on surfacing the AI triaging in a context that will enrich the DLP analyst experience.

## Authorization and Governance for AI Agents: Runtime Authorization Beyond Identity at Scale
### Designing Authorization‑Aware AI Agents at Scale: Enforcing Runtime RBAC + ABAC with Approval Injection (JIT)

Microsoft Entra Agent Identity enables organizations to govern and manage AI agent identities in Copilot Studio, improving visibility and identity-level control. However, as enterprises deploy multiple autonomous AI agents, identity and OAuth permissions alone cannot answer a more critical question:

"Should this action be executed now, by this agent, for this user, under the current business and regulatory context?"

This post introduces a reusable Authorization Fabric, combining a Policy Enforcement Point (PEP) and a Policy Decision Point (PDP), implemented as a Microsoft Entra‑protected endpoint using Azure Functions/App Service authentication. Every AI agent (Copilot Studio or AI Foundry/Semantic Kernel) calls this fabric before tool execution, receiving a deterministic runtime decision: ALLOW / DENY / REQUIRE_APPROVAL / MASK.

### Who this is for

- Anyone building AI agents (Copilot Studio, AI Foundry/Semantic Kernel) that call tools, workflows, or APIs
- Organizations scaling to multiple agents that need consistent runtime controls
- Teams operating in regulated or security‑sensitive environments, where decisions must be deterministic and auditable

### Why a V2? Identity is necessary; runtime authorization is missing

Entra Agent Identity (preview) integrates Copilot Studio agents with Microsoft Entra so that newly created agents automatically get an Entra agent identity, manageable in the Entra admin center, with identity activity logged in Entra. That solves who the agent is and improves identity governance visibility. But multi-agent deployments introduce a new risk class: autonomous execution sprawl, in which many agents operate with delegated privileges and invoke the same backends independently.
OAuth and API permissions answer "can the agent call this API?" They do not answer "should the agent execute this action under business policy, compliance constraints, data boundaries, and approval thresholds?" This is where a runtime authorization decision plane becomes essential.

### The pattern: a Microsoft Entra‑protected Authorization Fabric (PEP + PDP)

Instead of embedding RBAC logic independently inside every agent, use a shared fabric:

- PEP (Policy Enforcement Point): gatekeeper invoked before any tool/action
- PDP (Policy Decision Point): evaluates RBAC + ABAC + approval policies
- Decision output: ALLOW / DENY / REQUIRE_APPROVAL / MASK

This Authorization Fabric functions as a shared enterprise control plane, decoupling authorization logic from individual agents and enforcing policies consistently across all autonomous execution paths.

### Architecture (POC reference architecture)

Use a single runtime decision plane that sits between agents and tools. What's important here:

- Every agent (Copilot Studio or AI Foundry/SK) calls the Authorization Fabric API first
- The fabric is a protected endpoint (a Microsoft Entra‑protected endpoint is required)
- Tools (Graph/ERP/CRM/custom APIs) are invoked only after an ALLOW decision (or approval)

Trust boundaries enforced by this architecture:

- Agents never call business tools directly without a prior authorization decision
- The Authorization Fabric validates caller identity via Microsoft Entra
- Authorization decisions are centralized, consistent, and auditable
- Approval workflows act as a runtime "break-glass" control for high-impact actions

This ensures identity, intent, and execution are independently enforced, rather than implicitly trusted.

### Runtime flow (Decision → Approval → Execution)

Here is the runtime sequence as a simple flow:
```mermaid
flowchart TD
    START(["START"]) --> S1["[1] User Request"]
    S1 --> S2["[2] Agent Extracts Intent\n(action, resource, attributes)"]
    S2 --> S3["[3] Call /authorize\n(Entra protected)"]
    S3 --> S4
    subgraph S4["[4] PDP Evaluation"]
        ABAC["ABAC: Tenant · Region · Data Sensitivity"]
        RBAC["RBAC: Entitlement Check"]
        Threshold["Approval Threshold"]
        ABAC --> RBAC --> Threshold
    end
    S4 --> Decision{"[5] Decision?"}
    Decision -->|"ALLOW"| Exec["Execute Tool / API"]
    Decision -->|"MASK"| Masked["Execute with Masked Data"]
    Decision -->|"DENY"| Block["Block Request"]
    Decision -->|"REQUIRE_APPROVAL"| Approve{"[6] Approval Flow"}
    Approve -->|"Approved"| Exec
    Approve -->|"Rejected"| Block
    Exec --> Audit["[7] Audit & Telemetry"]
    Masked --> Audit
    Block --> Audit
    Audit --> ENDNODE(["END"])
    style START fill:#4A90D9,stroke:#333,color:#fff
    style ENDNODE fill:#4A90D9,stroke:#333,color:#fff
    style S1 fill:#5B5FC7,stroke:#333,color:#fff
    style S2 fill:#5B5FC7,stroke:#333,color:#fff
    style S3 fill:#E8A838,stroke:#333,color:#fff
    style S4 fill:#FFF3E0,stroke:#E8A838,stroke-width:2px
    style ABAC fill:#FCE4B2,stroke:#999
    style RBAC fill:#FCE4B2,stroke:#999
    style Threshold fill:#FCE4B2,stroke:#999
    style Decision fill:#fff,stroke:#333
    style Exec fill:#2ECC71,stroke:#333,color:#fff
    style Masked fill:#27AE60,stroke:#333,color:#fff
    style Block fill:#C0392B,stroke:#333,color:#fff
    style Approve fill:#F39C12,stroke:#333,color:#fff
    style Audit fill:#3498DB,stroke:#333,color:#fff
```

Design principle: no tool execution occurs until the Authorization Fabric returns ALLOW, or REQUIRE_APPROVAL is satisfied via an approval workflow.

### Where Power Automate fits

In most Copilot Studio implementations, Power Automate ("agent flows") is the practical integration layer through which agents call enterprise services and APIs. Copilot Studio supports agent flows as a way to extend agent capabilities with low-code workflows.
For this pattern, Power Automate typically:

- acquires/uses the right identity context for the call (depending on your tenant setup),
- calls the /authorize endpoint of the Authorization Fabric, and
- returns the decision payload to the agent for branching.

Copilot Studio also supports calling REST endpoints directly using the HTTP Request node, including passing headers such as Authorization: Bearer <token>.

### Protected endpoint only: securing the Authorization Fabric with Microsoft Entra

For this V2 pattern, the Authorization Fabric must be protected as a Microsoft Entra‑protected endpoint on Azure Functions/App Service (built‑in auth). Microsoft Learn provides the configuration guidance for enabling Microsoft Entra as the authentication provider for Azure App Service / Azure Functions.

#### Step 1: Create the Authorization Fabric API (Azure Function)

Expose the /authorize endpoint over HTTP.

#### Step 2: Enable the Microsoft Entra‑protected endpoint on the Function App

In the Azure portal: Function App → Authentication → Add identity provider → Microsoft → choose Workforce configuration (enterprise tenant) → set Require authentication for all requests. This ensures the Authorization Fabric is not callable without a valid Entra token.

#### Step 3: Optional hardening (recommended)

Depending on enterprise posture, layer on IP restrictions or private endpoints, and consider APIM in front of the Function for rate limiting, request normalization, and centralized logging. (For a POC, keep it minimal and add hardening incrementally.)

### Externalizing policy (so governance scales)

To make this pattern reusable across multiple agents, policies should not be hardcoded inside each agent. Instead, store policy definitions in a central policy store such as Cosmos DB (or an equivalent configuration store), and have the PDP load and evaluate policies at runtime.
Why this matters:

- Policy changes apply across all agents instantly (no agent republish)
- Central governance, versioning, and rollback become possible
- Audit and reporting become consistent across environments

For the POC, a single JSON document per policy pack in Cosmos DB is sufficient; for production, add versioning and staged rollout. Store one PolicyPack JSON document per environment (dev/test/prod), and include version, effectiveFrom, and priority fields for safe rollout/rollback.

### Minimal decision contract (standard request / response)

To keep the fabric reusable across agents, standardize the request payload and the deterministic decision response.

### Example scenario (1 minute to understand)

Scenario: a user asks a Finance agent to create a Purchase Order for 70,000. Even if the user has API permission and the agent can technically call the ERP API, runtime policy should:

- return REQUIRE_APPROVAL (threshold exceeded),
- trigger an approval workflow, and
- execute only after approval is granted.

This is the difference between API access and authorized business execution.

### Sample Policy Model (RBAC + ABAC + Approval)

This POC policy model intentionally stays simple while demonstrating both coarse and fine-grained governance.
#### 1) Coarse‑grained RBAC (roles → actions)

- FinanceAnalyst: CreatePO up to 50,000; ViewVendor
- FinanceManager: CreatePO up to 100,000 and/or approve higher spend

#### 2) Fine‑grained ABAC (conditions at runtime)

ABAC evaluates context such as region, classification, tenant boundary, and risk.

#### 3) Approval injection (agent‑level JIT execution)

For higher-risk, high-impact actions, the fabric returns REQUIRE_APPROVAL rather than a hard deny (when appropriate).

#### How policies should be evaluated (deterministic order)

To ensure predictable and auditable behavior, evaluate in a deterministic order:

1. Tenant isolation & residency (ABAC hard deny first)
2. Classification rules (deny or mask)
3. RBAC entitlement validation
4. Threshold/risk evaluation
5. Approval injection (JIT step-up)

This prevents approval workflows from bypassing foundational security boundaries such as tenant isolation or data sovereignty.

### Copilot Studio integration (enforcing runtime authorization)

Copilot Studio can call external REST APIs using the HTTP Request node, including passing headers such as Authorization: Bearer <token> and binding a response schema for branching logic. Copilot Studio also supports using flows with agents ("agent flows") to extend capabilities and orchestrate actions.

#### Option A (recommended): Copilot Studio → agent flow (Power Automate) → Authorization Fabric

Why: flows are a practical place to handle token acquisition patterns, approval orchestration, and standardized logging. Topic flow:

1. Extract user intent + parameters
2. Call an agent flow that calls /authorize and returns the decision payload
3. Branch in the topic:
   - If ALLOW → proceed to the tool call
   - If REQUIRE_APPROVAL → trigger the approval flow; proceed only if approved
   - If DENY → stop and explain the policy reason

Important: tool execution must never be reachable through an alternate topic path that bypasses the authorization check.
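To make the deterministic evaluation order described earlier concrete, here is a minimal, framework-free PDP sketch run against the 70,000 purchase-order scenario. All field names, roles, classifications, and limits are illustrative assumptions, not the POC's actual policy pack or contract schema.

```python
# Minimal PDP sketch following the deterministic order discussed earlier:
# tenant isolation -> classification -> RBAC -> threshold -> approval injection.
# Roles, limits, and field names are illustrative assumptions.

RBAC = {
    "FinanceAnalyst": {"CreatePO": 50_000, "ViewVendor": None},   # None = no limit
    "FinanceManager": {"CreatePO": 100_000, "ApproveSpend": None},
}

def evaluate(req: dict) -> dict:
    attrs = req["attributes"]
    # 1) Tenant isolation & residency: ABAC hard deny first
    if attrs["tenant"] != attrs["resourceTenant"]:
        return {"decision": "DENY", "reason": "tenant boundary violation"}
    # 2) Classification rules: deny outright, or mark output for masking
    if attrs.get("classification") == "Secret":
        return {"decision": "DENY", "reason": "classification not permitted"}
    mask = attrs.get("classification") == "Confidential"
    # 3) RBAC entitlement validation
    limit = RBAC.get(req["role"], {}).get(req["action"], "missing")
    if limit == "missing":
        return {"decision": "DENY", "reason": "no RBAC entitlement"}
    # 4) Threshold evaluation + 5) approval injection (JIT step-up)
    if limit is not None and attrs.get("amount", 0) > limit:
        return {"decision": "REQUIRE_APPROVAL", "reason": "threshold exceeded"}
    return {"decision": "MASK" if mask else "ALLOW", "reason": "within policy"}

# The 70,000 purchase-order scenario (assumed contract fields)
request = {
    "agentId": "finance-agent-01",
    "userUPN": "alex@contoso.example",
    "role": "FinanceAnalyst",
    "action": "CreatePO",
    "resource": "erp:purchase-orders",
    "attributes": {"amount": 70_000, "tenant": "contoso",
                   "resourceTenant": "contoso"},
}
print(evaluate(request))  # {'decision': 'REQUIRE_APPROVAL', 'reason': 'threshold exceeded'}
```

A PEP wrapper would call this logic via the Entra-protected /authorize endpoint and invoke the tool only on ALLOW, or after the REQUIRE_APPROVAL workflow succeeds; note that approval is only reachable after the tenant, classification, and RBAC checks have passed.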
#### Option B: direct HTTP Request node to the Authorization Fabric

Use the Send HTTP request node to call the authorization endpoint and branch using the response schema. This approach is clean, but token acquisition and secure secretless authentication are often simpler when handled via a managed integration layer (flow + connector).

### AI Foundry / Semantic Kernel integration (tool invocation gate)

For Foundry/SK agents, the integration point is before tool execution. Semantic Kernel supports Azure AI agent patterns and tool integration, making it a natural place to enforce a pre-tool authorization check. Pseudo-pattern:

1. Agent extracts intent + context
2. Calls the Authorization Fabric
3. Enforces the decision
4. Executes the tool only when allowed (or after approval)

### Telemetry & audit (what Security Architects will ask for)

Even the best policy engine is incomplete without audit trails. At minimum, log:

- agentId, userUPN, action, resource
- decision + reason + policyIds
- approval outcome (if any)
- correlationId for downstream tool execution

Why it matters: you now have a defensible answer to "Why did an autonomous agent execute this action?" As a security signal bonus, denials, unusual approval rates, and repeated policy mismatches can also indicate prompt injection attempts, mis-scoped agents, or governance drift.

### What this enables (and why it scales)

With a shared Authorization Fabric, you:

- Avoid duplicating authorization logic across agents
- Standardize decisions across Copilot Studio + Foundry agents
- Update governance once (policy change) and apply it everywhere
- Make autonomy safer without blocking productivity

### Closing: identity gets you who; runtime authorization gets you whether, when, and how

Copilot Studio can automatically create Entra agent identities (preview), improving identity governance and visibility for agents. But safe autonomy requires a runtime decision plane, and securing that plane as an Entra-protected endpoint is foundational for enterprise deployments.
In enterprise environments, autonomous execution without runtime authorization is equivalent to privileged access without PIM: powerful, fast, and operationally risky.

Sentinel to Defender Portal Migration - my 5 Gotchas to help you
The migration to the unified Defender portal is one of those transitions where the documentation covers "what's new" but glosses over what breaks on cutover day. Here are the gotchas that consistently catch teams off-guard, along with practical fixes.

Gotcha 1: Automatic Connector Enablement

When a Sentinel workspace connects to the Defender portal, Microsoft auto-enables certain connectors, often without clear notification. The most common surprises:

| Connector | Auto-Enables? | Impact |
|---|---|---|
| Defender for Endpoint | Yes | EDR telemetry starts flowing, new alerts created |
| Defender for Cloud | Yes | Additional incidents, potential ingestion cost increase |
| Defender for Cloud Apps | Conditional | Depends on existing tenant config |
| Azure AD Identity Protection | No | Stays in Sentinel workspace only |

Immediate action: Within 2 hours of connecting, navigate to Security.microsoft.com > Connectors & integrations > Data connectors and audit what auto-enabled. Compare against your pre-migration connector list and disable anything unplanned.

Why this matters: Auto-enabled connectors can duplicate data sources; ingesting the same telemetry through both Sentinel and Defender connectors inflates Log Analytics costs by 20-40%.

Gotcha 2: Incident Duplication

The most disruptive surprise. The same incident appears twice: once from a Sentinel analytics rule, once from the Defender portal's auto-created incident creation rule. SOC teams get paged twice, deduplication breaks, and MTTR metrics go sideways.

Diagnosis:

```
SecurityIncident
| where TimeGenerated > ago(7d)
| summarize IncidentCount = count() by Title
| where IncidentCount > 1
| order by IncidentCount desc
```

If you see unexpected duplicates, the cause is almost certainly the auto-enabled Microsoft incident creation rule conflicting with your existing analytics rules.

Fix: Disable the auto-created incident creation rule in Sentinel Automation rules, and rely on your existing analytics rule > incident mapping instead.
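The "compare against your pre-migration connector list" step is easy to script once you have both inventories exported. This sketch assumes you captured the connector names to plain lists yourself; it does not call any Microsoft API.

```python
# Sketch: diff a pre-migration connector inventory against the post-cutover
# state to spot anything that auto-enabled. Both lists are assumed to be
# exported manually; no Microsoft API is queried here.

def find_unplanned_connectors(pre_migration: list[str], post_cutover: list[str]) -> list[str]:
    """Return connectors enabled after cutover that were not in the plan."""
    return sorted(set(post_cutover) - set(pre_migration))

# Example: two connectors auto-enabled during cutover
pre = ["Azure AD Identity Protection"]
post = ["Azure AD Identity Protection", "Defender for Endpoint", "Defender for Cloud"]
unplanned = find_unplanned_connectors(pre, post)
```

Anything in the returned list is a candidate for the "disable anything unplanned" action, and a likely source of the 20-40% ingestion cost inflation mentioned above.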
This ensures incidents are created only through Sentinel's pipeline.

Gotcha 3: Analytics Rule Title Dependencies

The Defender portal matches incidents to analytics rules by title, not by rule ID. This creates subtle problems:

- Renaming a rule breaks the incident linkage
- Copying a rule with a similar title causes cross-linkage
- Two workspaces with identically named rules generate separate incidents for the same alert

Prevention checklist:

- Audit all analytics rule titles for uniqueness before migration
- Document the title-to-GUID mapping as a reference
- Avoid renaming rules en masse during migration
- Use a naming convention like `<Severity>_<Tactic>_<Technique>` to prevent collisions

Gotcha 4: RBAC Gaps

Sentinel workspace RBAC roles don't directly translate to Defender portal permissions:

| Sentinel Role | Defender Portal Equivalent | Gap |
|---|---|---|
| Microsoft Sentinel Responder | Security Operator | Minor - name change |
| Microsoft Sentinel Contributor | Security Operator + Security settings (manage) | Significant - split across roles |
| Sentinel Automation Contributor | Automation Contributor (new) | New role required |

Migration approach:

- Create new unified RBAC roles in the Defender portal that mirror your existing Sentinel permissions.
- Test with a pilot group before org-wide rollout.
- Keep workspace RBAC roles for 30 days as a fallback.

Gotcha 5: Automation Rules Don't Auto-Migrate

Sentinel automation rules and playbooks don't carry over to the Defender portal automatically. The syntax has changed, and not all Sentinel automation actions are available in Defender.
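The title-uniqueness audit and naming-convention check above can be done in a few lines against an exported list of rule titles. The regex below is an illustrative assumption about what `<Severity>_<Tactic>_<Technique>` might look like (MITRE-style `T####` technique IDs); adapt it to your own convention.

```python
import re
from collections import Counter

# Illustrative pattern for the <Severity>_<Tactic>_<Technique> convention;
# the exact allowed severities and technique format are assumptions.
NAMING_PATTERN = re.compile(r"^(Low|Medium|High|Critical)_[A-Za-z]+_T\d{4}(\.\d{3})?$")

def audit_rule_titles(titles: list[str]) -> dict:
    """Flag duplicate titles and titles that break the naming convention."""
    counts = Counter(titles)
    return {
        "duplicates": sorted(t for t, n in counts.items() if n > 1),
        "nonconforming": sorted(t for t in set(titles) if not NAMING_PATTERN.match(t)),
    }
```

Running this before cutover catches the title collisions that would otherwise surface as mis-linked incidents in the Defender portal, where they are much harder to diagnose.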
Recommended approach:

1. Export existing Sentinel automation rules (screenshot condition logic and actions)
2. Recreate them in the Defender portal
3. Run both in parallel for one week to validate behavior
4. Retire Sentinel automation rules only after confirming Defender equivalents work correctly

Practical Migration Timeline

Phase 1 - Pre-migration (1-2 weeks before):
- Audit connectors, analytics rules, RBAC roles, and automation rules
- Document everything: titles, GUIDs, permissions, automation logic
- Test in a pilot environment first

Phase 2 - Cutover day:
- Connect the workspace to the Defender portal
- Within 2 hours: audit auto-enabled connectors
- Within 4 hours: check for duplicate incidents
- Within 24 hours: validate RBAC and automation rules

Phase 3 - Post-migration (1-2 weeks after):
- Monitor incident volume for duplication spikes
- Validate automation rules fire correctly
- Collect SOC team feedback on workflow impact
- After 1 week of stability: retire legacy automation rules

Phase 4 - Cleanup (2-4 weeks after):
- Remove duplicate automation rules
- Archive workspace-specific RBAC roles once unified RBAC is stable
- Update SOC runbooks and documentation

The bottom line: treat this as a parallel-run migration, not a lift-and-shift. Budget 2 weeks for parallel operations. Teams that rushed this transition consistently reported longer MTTR during the first month post-migration.

Do XDR Alerts cover the same alerts available in Alert Policies?
The alerts in question are 'User requested to release a quarantined message', 'User clicked a malicious link', etc. About 8 of these we send to 'email address removed for privacy reasons'. That administrator account has an EOM license, so Outlook rules can be set. We set rules to forward those 8 alerts to our 'email address removed for privacy reasons' address. This is, very specifically, so the alert passes through the @tenant.com address and our ticketing endpoint knows which tenant sent it. But this ISN'T ideal because it requires an EOP license (or similar - this actually hasn't been an issue until now just because of our customer environments).

I've looked at the following alternatives:

- Setting email address removed for privacy reasons as the recipient directly on the Alert Policies in question. This results in the mail going directly from Microsoft to our Ticketing Portal, so it ends up sorted into Microsoft tickets and the right team doesn't get it.
- SMTP forwarding via either Exchange AC user controls or Mail Flow Rules. But these aren't traditional forwarding, and they have the same issue as above.
- Making administrator@tenant.com a SHARED mailbox that we can also log in to (for administration purposes). But this doesn't allow you to set Outlook rules (or even log in to Outlook).

I've checked out the newer alerts under Defender's Settings panel - XDR alerts, I think they're called. Wondering if these can be leveraged at all for this? Essentially, I'm trying to get these alerts to come to our external ticketing address from the tenant's domain (instead of Microsoft). I could probably update Autotask's rules to check for a header, and set that header via Mail Flow Rules, but... just hoping I don't have to do that for everyone.