Microsoft 365 Defender
VPN Integration not persistent
Hello,

We tried to configure VPN integration (https://learn.microsoft.com/en-us/defender-for-identity/vpn-integration) from a supported Cisco VPN gateway. We configured RADIUS accounting logs to be sent to a DC with the MDI sensor installed. Yet when we enable this in the Defender portal (Settings > Identities > VPN) by checking the box and entering the shared secret, the configuration is not persistent. We hit Save and are presented with the green success message, but once we refresh the page or navigate elsewhere in the portal, the checkbox is no longer checked. Has anyone encountered the same issue?

Thanks, Simon

Credential Exposure Risk & Response Workbook
How to set up the Workbook

Use the steps outlined in the Identify and Remediate Credentials article to get the right rules in place to start capturing credential data. You may choose to use custom regex patterns or more specific SITs that align with your scenario. Once that is done, this workbook transforms credential leakage detection into a measurable, executive-ready capability:

- End-to-end situational awareness: Correlates alerts across workloads, departments, credential types, and users to surface material exposure quickly.
- Actionable triage & forensics: Drill from trends to the artifact (message/file/URL), accelerating containment and root-cause analysis.
- Risk-aligned decisions: Quantifies exposure and response performance (creation vs. resolution trends) to guide investment and policy changes.
- Audit-ready governance: Captures decisions, timelines, and outcomes for PCI/PII controls, identity hygiene, and secrets management.

Prerequisites

- License requirements for Microsoft Purview Information Protection depend on the scenarios and features you use. To understand your licensing requirements and options, see the Information Protection sections of Microsoft 365 guidance for security & compliance and the related PDF download for feature-level licensing requirements.
- Endpoint DLP must be enabled so that all endpoint interaction with sensitive content is included in audit logging.
- For Microsoft 365 SharePoint, OneDrive, Exchange, and Teams, you can enable policies that generate events (but not incidents) for important sensitive information types.
- Install Power BI Desktop to make use of the templates: Downloads - Microsoft Power BI

Step-by-step guided walkthrough

In this guide, we provide high-level steps to get started with the new tooling.

1. Get the latest version of the report that you are interested in.
2. In this case, we will show the Board report. Open the report. If Power BI Desktop is installed, it should look like this:
3. You must authenticate with https://api.security.microsoft.com. Select Organizational account, sign in, and then click Connect.
4. You will also have to authenticate with https://api.security.microsoft.com/api/advancedhunting. Select Organizational account, sign in, and then click Connect.

What the Workbook Delivers

The workbook moves programs to something that is measurable. Combined with customers' outcome-based metrics (operational risk, control risk, end-user impact), it enables an executive-level, data-driven narrative for investment and policy decisions.

Troubleshooting tips: If you are receiving a (400): Bad request error, it is likely that you do not have the necessary tables from the endpoint in Advanced Hunting. Those errors may also show if empty values are passed from the left-hand side of the KQL queries.

Detection trend

Apply filtering to this view based on the DLP policies that monitor credentials.

- Trend Analysis Over Time: Displays daily detection counts, helping identify spikes in credential leakage activity and enabling proactive investigation.
- Workload and Credential Type Breakdown: Shows which workloads (e.g., Endpoint, Exchange, OneDrive) and credential types are most affected, guiding targeted security measures.
- Detection Source Visibility: Highlights which security tools (Sentinel, Cloud App Security, Defender) are catching leaks, ensuring monitoring coverage and identifying gaps.
- Detailed Credential Exposure: Lists exposed credentials for quick validation and remediation, reducing the risk of misuse or compromise. (This part depends on the AI component.)
- Supports Incident Response: Enables rapid triage by correlating detection trends with specific credentials and sources, improving response times.
- Compliance and Audit Readiness: Provides clear evidence of credential monitoring and leakage detection for regulatory and governance reporting.

Credential incident trends

- Lifecycle Tracking of Credential Alerts: Visualizes creation and resolution trends over time, helping teams measure response efficiency and identify periods of heightened risk.
- Workload and Credential Type Breakdown: Shows which workloads (Endpoint, Exchange, OneDrive) and credential types are most impacted, enabling targeted mitigation strategies.
- Incident Type Analysis: Highlights the distribution of alerts by category (e.g., CredRisk, Agent), supporting prioritization of critical incidents.
- Detailed Alert Context: Provides message IDs and associated credentials for precise investigation and remediation, reducing time to contain threats.
- Performance and SLA Monitoring: Tracks resolution timelines to ensure compliance with internal security SLAs and regulatory requirements.
- Audit and Governance Support: Offers clear evidence of alert handling and closure, strengthening accountability and reporting.

Content view

- Workload-Level Risk Visibility: Highlights which workloads (e.g., SharePoint, Endpoint) have the highest credential exposure, enabling targeted security hardening.
- Departmental Risk Breakdown: Shows which departments (Security, Logistics, Sales) are most impacted, helping prioritise remediation for critical business areas.
- Credential Type Analysis: Identifies exposed credential types such as API keys, shared access keys, and tokens, guiding policy enforcement and rotation strategies.
- User and Document Correlation: Links exposed credentials to specific users and documents, supporting rapid investigation and containment of leaks.
- Comprehensive Drill-Down: Enables navigation from department → credential type → user → document for precise root cause analysis.
- Governance and Compliance Support: Provides auditable evidence of credential exposure across workloads and departments, strengthening regulatory reporting.

For endpoint, this view is an excellent way to catch applications that do not treat secrets safely and expose them in temporary files.

Force-directed graph

- Visual Alert Correlation: Displays a force-directed graph linking users to alert categories, making it easy to identify patterns and clusters of credential-related risks.
- High-Risk User Identification: Highlights users with multiple or severe alerts, enabling prioritisation for investigation and remediation.
- Credential Type and Department Context: Shows which credential types and departments are most associated with alerts, supporting targeted security measures.
- Alert Severity and Details: Provides a detailed table of alerts with severity and category, helping analysts quickly assess impact and urgency.
- Improved Threat Hunting: Enables analysts to trace relationships between users, alert types, and credential exposure for deeper root cause analysis.
- Compliance and Reporting: Offers clear evidence of monitoring and categorisation of credential-related alerts for governance and audit purposes.

Security incidents correlated to credential leakage

- Focused on Credential Leakage: Provides a dedicated view of alerts related to exposed credentials, enabling quick detection and response.
- Role-Based Risk Analysis: Breaks down incidents by department and role, helping prioritise remediation for high-risk groups such as developers and security teams.
- User-Level Investigation: Allows drill-down to individual users involved in credential-related alerts for rapid containment and corrective action.
- Credential Type Insights: Highlights which types of credentials (e.g., API keys, passwords) are most vulnerable, guiding policy improvements and rotation strategies.
- Alert Source Correlation: Displays which security tools (Sentinel, MCAS, Defender) are detecting leaks, ensuring coverage and identifying monitoring gaps.
- Compliance and Governance Support: Offers auditable evidence of credential monitoring, supporting regulatory and internal security requirements.

App and Network correlated to credential leakage

For network detection, adjust the query in production to remove standard applications if they are too noisy. We have seen cases where Word and other commonly used applications make calls using FTP services, for example, while other applications may simply add too much noise.

- Token Detection Event Traceability: Shows detected token credential events linked directly to individual User IDs and Device IDs for investigation.
- Application Usage Context: Identifies the application associated with the detected activity (for example, ms-teams.exe).
- External URL Association: Displays the remote URL connected to the token detection event.
- Remote IP Visibility: Lists the remote IP addresses associated with the activity.
- Entity-Level Correlation: Links UserId, DeviceId, Application, Remote URL, and Remote IP within a single event flow. You can also select the port used or how apps are linked.
- Detection Count Aggregation: Summarises the number of credential events tied to each correlated entity path.

Turn detection into decisions. Deploy the workbook today to get measurable insights, accelerate triage, and deliver audit-ready governance.
Start driving risk-aligned investment and policy changes with confidence. The PBI report is located here. Based on what you identify, you may use tools such as Data Security Investigations to go deeper. We are also working on surfacing the AI triaging in a context that will enrich the DLP analyst experience.

Authorization and Governance for AI Agents: Runtime Authorization Beyond Identity at Scale
Designing Authorization-Aware AI Agents at Scale: Enforcing Runtime RBAC + ABAC with Approval Injection (JIT)

Microsoft Entra Agent Identity enables organizations to govern and manage AI agent identities in Copilot Studio, improving visibility and identity-level control. However, as enterprises deploy multiple autonomous AI agents, identity and OAuth permissions alone cannot answer a more critical question:

"Should this action be executed now, by this agent, for this user, under the current business and regulatory context?"

This post introduces a reusable Authorization Fabric, combining a Policy Enforcement Point (PEP) and Policy Decision Point (PDP), implemented as a Microsoft Entra-protected endpoint using Azure Functions/App Service authentication. Every AI agent (Copilot Studio or AI Foundry/Semantic Kernel) calls this fabric before tool execution, receiving a deterministic runtime decision: ALLOW / DENY / REQUIRE_APPROVAL / MASK.

Who this is for

- Anyone building AI agents (Copilot Studio, AI Foundry/Semantic Kernel) that call tools, workflows, or APIs
- Organizations scaling to multiple agents and needing consistent runtime controls
- Teams operating in regulated or security-sensitive environments, where decisions must be deterministic and auditable

Why a V2? Identity is necessary; runtime authorization is missing

Entra Agent Identity (preview) integrates Copilot Studio agents with Microsoft Entra so that newly created agents automatically get an Entra agent identity, manageable in the Entra admin center, with identity activity logged in Entra. That solves who the agent is and improves identity governance visibility. But multi-agent deployments introduce a new risk class, autonomous execution sprawl: many agents, operating with delegated privileges, invoking the same backends independently.
OAuth and API permissions answer "can the agent call this API?" They do not answer "should the agent execute this action under business policy, compliance constraints, data boundaries, and approval thresholds?" This is where a runtime authorization decision plane becomes essential.

The pattern: Microsoft Entra-Protected Authorization Fabric (PEP + PDP)

Instead of embedding RBAC logic independently inside every agent, use a shared fabric:

- PEP (Policy Enforcement Point): Gatekeeper invoked before any tool/action
- PDP (Policy Decision Point): Evaluates RBAC + ABAC + approval policies
- Decision output: ALLOW / DENY / REQUIRE_APPROVAL / MASK

This Authorization Fabric functions as a shared enterprise control plane, decoupling authorization logic from individual agents and enforcing policies consistently across all autonomous execution paths.

Architecture (POC reference architecture)

Use a single runtime decision plane that sits between agents and tools. What's important here:

- Every agent (Copilot Studio or AI Foundry/SK) calls the Authorization Fabric API first
- The fabric is a protected endpoint (Microsoft Entra-protected endpoint required)
- Tools (Graph/ERP/CRM/custom APIs) are invoked only after an ALLOW decision (or approval)

Trust boundaries enforced by this architecture:

- Agents never call business tools directly without a prior authorization decision
- The Authorization Fabric validates caller identity via Microsoft Entra
- Authorization decisions are centralized, consistent, and auditable
- Approval workflows act as a runtime "break-glass" control for high-impact actions

This ensures identity, intent, and execution are independently enforced, rather than implicitly trusted.

Runtime flow (Decision → Approval → Execution)

Here is the runtime sequence as a simple flow (you can keep your Mermaid diagram too).
```mermaid
flowchart TD
    START(["START"]) --> S1["[1] User Request"]
    S1 --> S2["[2] Agent Extracts Intent\n(action, resource, attributes)"]
    S2 --> S3["[3] Call /authorize\n(Entra protected)"]
    S3 --> S4
    subgraph S4["[4] PDP Evaluation"]
        ABAC["ABAC: Tenant · Region · Data Sensitivity"]
        RBAC["RBAC: Entitlement Check"]
        Threshold["Approval Threshold"]
        ABAC --> RBAC --> Threshold
    end
    S4 --> Decision{"[5] Decision?"}
    Decision -->|"ALLOW"| Exec["Execute Tool / API"]
    Decision -->|"MASK"| Masked["Execute with Masked Data"]
    Decision -->|"DENY"| Block["Block Request"]
    Decision -->|"REQUIRE_APPROVAL"| Approve{"[6] Approval Flow"}
    Approve -->|"Approved"| Exec
    Approve -->|"Rejected"| Block
    Exec --> Audit["[7] Audit & Telemetry"]
    Masked --> Audit
    Block --> Audit
    Audit --> ENDNODE(["END"])
    style START fill:#4A90D9,stroke:#333,color:#fff
    style ENDNODE fill:#4A90D9,stroke:#333,color:#fff
    style S1 fill:#5B5FC7,stroke:#333,color:#fff
    style S2 fill:#5B5FC7,stroke:#333,color:#fff
    style S3 fill:#E8A838,stroke:#333,color:#fff
    style S4 fill:#FFF3E0,stroke:#E8A838,stroke-width:2px
    style ABAC fill:#FCE4B2,stroke:#999
    style RBAC fill:#FCE4B2,stroke:#999
    style Threshold fill:#FCE4B2,stroke:#999
    style Decision fill:#fff,stroke:#333
    style Exec fill:#2ECC71,stroke:#333,color:#fff
    style Masked fill:#27AE60,stroke:#333,color:#fff
    style Block fill:#C0392B,stroke:#333,color:#fff
    style Approve fill:#F39C12,stroke:#333,color:#fff
    style Audit fill:#3498DB,stroke:#333,color:#fff
```

Design principle: No tool execution occurs until the Authorization Fabric returns ALLOW, or REQUIRE_APPROVAL is satisfied via an approval workflow.

Where Power Automate fits (important for readers)

In most Copilot Studio implementations, the agent calls Power Automate (agent flows), which is the practical integration layer that calls enterprise services and APIs. Copilot Studio supports "agent flows" as a way to extend agent capabilities with low-code workflows.
For this pattern, Power Automate typically:

- acquires/uses the right identity context for the call (depending on your tenant setup),
- calls the /authorize endpoint of the Authorization Fabric, and
- returns the decision payload to the agent for branching.

Copilot Studio also supports calling REST endpoints directly using the HTTP Request node, including passing headers such as Authorization: Bearer <token>.

Protected endpoint only: Securing the Authorization Fabric with Microsoft Entra

For this V2 pattern, the Authorization Fabric must be protected as a Microsoft Entra-protected endpoint on Azure Functions/App Service (built-in auth). Microsoft Learn provides the configuration guidance for enabling Microsoft Entra as the authentication provider for Azure App Service / Azure Functions.

Step 1: Create the Authorization Fabric API (Azure Function)

Expose an authorization endpoint over HTTP.

Step 2: Enable the Microsoft Entra-protected endpoint on the Function App

In the Azure Portal:

1. Function App → Authentication
2. Add identity provider → Microsoft
3. Choose Workforce configuration (enterprise tenant)
4. Set Require authentication for all requests

This ensures the Authorization Fabric is not callable without a valid Entra token.

Step 3: Optional hardening (recommended)

Depending on enterprise posture, layer:

- IP restrictions / private endpoints
- APIM in front of the Function for rate limiting, request normalization, and centralized logging

(For a POC, keep it minimal and add hardening incrementally.)

Externalizing policy (so governance scales)

To make this pattern reusable across multiple agents, policies should not be hardcoded inside each agent. Instead, store policy definitions in a central policy store such as Cosmos DB (or an equivalent configuration store), and have the PDP load and evaluate policies at runtime.
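As a concrete illustration, a policy pack document as it might sit in Cosmos DB is sketched below. The field names (version, effectiveFrom, priority, and the rule shapes) follow the guidance in this post but are otherwise assumptions for a POC, not a fixed schema:

```python
from datetime import datetime, timezone

# Illustrative PolicyPack document, one per environment (dev/test/prod).
# All field names here are POC assumptions, not a published schema.
POLICY_PACK = {
    "id": "policypack-prod",
    "version": "2025.1",
    "effectiveFrom": "2024-01-01T00:00:00Z",
    "priority": 100,
    "rbac": {
        "FinanceAnalyst": {"CreatePO": {"maxAmount": 50_000}, "ViewVendor": {}},
        "FinanceManager": {"CreatePO": {"maxAmount": 100_000}, "ApprovePO": {}},
    },
    "abac": [
        # Hard denies are evaluated first by the PDP.
        {"effect": "DENY", "when": {"crossTenant": True}},
        {"effect": "MASK", "when": {"classification": "Confidential"}},
    ],
    "approvals": [
        {"action": "CreatePO", "thresholdAmount": 50_000, "decision": "REQUIRE_APPROVAL"},
    ],
}

def select_active_pack(packs: list, now: datetime) -> dict:
    """Pick the highest-priority pack already in effect (supports staged rollout/rollback)."""
    live = [p for p in packs
            if datetime.fromisoformat(p["effectiveFrom"].replace("Z", "+00:00")) <= now]
    return max(live, key=lambda p: (p["priority"], p["version"]))

active = select_active_pack([POLICY_PACK], datetime.now(timezone.utc))
print(active["version"])
```

The `select_active_pack` helper is what makes `effectiveFrom` and `priority` useful: you can stage a new pack ahead of time and roll back simply by lowering its priority.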
Why this matters:

- Policy changes apply across all agents instantly (no agent republish)
- Central governance + versioning + rollback becomes possible
- Audit and reporting become consistent across environments

(For the POC, a single JSON document per policy pack in Cosmos DB is sufficient. For production, add versioning and staged rollout.) Store one PolicyPack JSON document per environment (dev/test/prod), and include version, effectiveFrom, and priority for safe rollout/rollback.

Minimal decision contract (standard request / response)

To keep the fabric reusable across agents, standardize the request payload.

Request payload (example)

Decision response (deterministic)

Example scenario (1 minute to understand)

Scenario: A user asks a Finance agent to create a Purchase Order for 70,000. Even if the user has API permission and the agent can technically call the ERP API, runtime policy should:

- return REQUIRE_APPROVAL (threshold exceeded)
- trigger an approval workflow
- execute only after approval is granted

This is the difference between API access and authorized business execution.

Sample Policy Model (RBAC + ABAC + Approval)

This POC policy model intentionally stays simple while demonstrating both coarse- and fine-grained governance.
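Before detailing the policy model, here is a sketch of the standardized contract and the PO scenario just described. All field names (agentId, userUPN, policyIds, correlationId) are assumptions for this POC, not a published schema, and the PDP logic is reduced to the single threshold check:

```python
import json

# Illustrative /authorize request payload (field names assumed for this POC).
request_payload = {
    "agentId": "finance-agent-01",
    "userUPN": "alex@contoso.com",
    "action": "CreatePO",
    "resource": "erp/purchase-orders",
    "attributes": {"amount": 70_000, "region": "EU", "classification": "Internal"},
    "correlationId": "9b2f0c1e",
}

def decide(req: dict) -> dict:
    """Toy PDP: returns the deterministic decision envelope for the PO scenario."""
    decision, reason = "ALLOW", "within policy"
    if req["action"] == "CreatePO" and req["attributes"]["amount"] > 50_000:
        decision, reason = "REQUIRE_APPROVAL", "amount exceeds 50,000 threshold"
    return {
        "decision": decision,  # ALLOW / DENY / REQUIRE_APPROVAL / MASK
        "reason": reason,
        "policyIds": ["rbac.finance", "approval.po-threshold"],
        "correlationId": req["correlationId"],
    }

print(json.dumps(decide(request_payload), indent=2))
```

Note that the response echoes the correlationId and names the policyIds that fired; that is what later makes the audit trail defensible.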
1) Coarse-grained RBAC (roles → actions)

- FinanceAnalyst: CreatePO up to 50,000; ViewVendor
- FinanceManager: CreatePO up to 100,000 and/or approve higher spend

2) Fine-grained ABAC (conditions at runtime)

ABAC evaluates context such as region, classification, tenant boundary, and risk.

3) Approval injection (Agent-level JIT execution)

For higher-risk/high-impact actions, the fabric returns REQUIRE_APPROVAL rather than a hard deny (when appropriate).

How policies should be evaluated (deterministic order)

To ensure predictable and auditable behavior, evaluate in a deterministic order:

1. Tenant isolation & residency (ABAC hard deny first)
2. Classification rules (deny or mask)
3. RBAC entitlement validation
4. Threshold/risk evaluation
5. Approval injection (JIT step-up)

This prevents approval workflows from bypassing foundational security boundaries such as tenant isolation or data sovereignty.

Copilot Studio integration (enforcing runtime authorization)

Copilot Studio can call external REST APIs using the HTTP Request node, including passing headers such as Authorization: Bearer <token>, and binding a response schema for branching logic. Copilot Studio also supports using flows with agents ("agent flows") to extend capabilities and orchestrate actions.

Option A (Recommended): Copilot Studio → Agent Flow (Power Automate) → Authorization Fabric

Why: Flows are a practical place to handle token acquisition patterns, approval orchestration, and standardized logging.

Topic flow:

1. Extract user intent + parameters
2. Call an agent flow that calls /authorize and returns the decision payload
3. Branch in the topic:
   - If ALLOW → proceed to tool call
   - If REQUIRE_APPROVAL → trigger the approval flow; proceed only if approved
   - If DENY → stop and explain the policy reason

Important: Tool execution must never be reachable through an alternate topic path that bypasses the authorization check.
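Whichever integration option carries the /authorize call, the enforcement branching on the PEP side looks the same. The sketch below shows a generic pre-tool gate with a stubbed fabric; `fake_authorize` and `create_po` are hypothetical names, and a real implementation would call the Entra-protected endpoint instead of the stub:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    outcome: str          # ALLOW / DENY / REQUIRE_APPROVAL / MASK
    reason: str = ""

def authorization_gate(authorize: Callable[[dict], Decision],
                       request_approval: Callable[[dict], bool]):
    """Wrap a tool so it can only run after an ALLOW (or an approved step-up)."""
    def guard(tool: Callable[..., str]):
        def wrapped(ctx: dict, **tool_args) -> str:
            decision = authorize(ctx)
            if decision.outcome == "ALLOW":
                return tool(**tool_args)
            if decision.outcome == "REQUIRE_APPROVAL":
                # JIT step-up: execute only if the approval workflow approves.
                if request_approval(ctx):
                    return tool(**tool_args)
                return f"Blocked: approval rejected ({decision.reason})"
            if decision.outcome == "MASK":
                return tool(**{k: "***" for k in tool_args})
            return f"Blocked: {decision.reason}"
        return wrapped
    return guard

# Stub fabric for demonstration; real calls go to the Entra-protected /authorize.
def fake_authorize(ctx: dict) -> Decision:
    if ctx["action"] == "CreatePO" and ctx["attributes"]["amount"] > 50_000:
        return Decision("REQUIRE_APPROVAL", "threshold exceeded")
    return Decision("ALLOW")

guard = authorization_gate(fake_authorize, request_approval=lambda ctx: True)

@guard
def create_po(vendor: str, amount: int) -> str:
    return f"PO created for {vendor}: {amount}"

print(create_po({"action": "CreatePO", "attributes": {"amount": 70_000}},
                vendor="Contoso", amount=70_000))
```

Because the gate wraps the tool itself, there is no topic path that can reach `create_po` without a decision, which is exactly the invariant the "Important" note above demands.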
Option B: Direct HTTP Request node to Authorization Fabric

Use the Send HTTP request node to call the authorization endpoint and branch using the response schema. This approach is clean, but token acquisition and secure, secretless authentication are often simpler when handled via a managed integration layer (flow + connector).

AI Foundry / Semantic Kernel integration (tool invocation gate)

For Foundry/SK agents, the integration point is before tool execution. Semantic Kernel supports Azure AI agent patterns and tool integration, making it a natural place to enforce a pre-tool authorization check.

Pseudo-pattern:

1. Agent extracts intent + context
2. Calls the Authorization Fabric
3. Enforces the decision
4. Executes the tool only when allowed (or after approval)

Telemetry & audit (what Security Architects will ask for)

Even the best policy engine is incomplete without audit trails. At minimum, log:

- agentId, userUPN, action, resource
- decision + reason + policyIds
- approval outcome (if any)
- correlationId for downstream tool execution

Why it matters: you now have a defensible answer to "Why did an autonomous agent execute this action?"

Security signal bonus: Denials, unusual approval rates, and repeated policy mismatches can also indicate prompt injection attempts, mis-scoped agents, or governance drift.

What this enables (and why it scales)

With a shared Authorization Fabric, you can:

- Avoid duplicating authorization logic across agents
- Standardize decisions across Copilot Studio + Foundry agents
- Update governance once (policy change) and apply it everywhere
- Make autonomy safer without blocking productivity

Closing: Identity gets you who. Runtime authorization gets you whether/when/how.

Copilot Studio can automatically create Entra agent identities (preview), improving identity governance and visibility for agents. But safe autonomy requires a runtime decision plane. Securing that plane as an Entra-protected endpoint is foundational for enterprise deployments.
In enterprise environments, autonomous execution without runtime authorization is equivalent to privileged access without PIM: powerful, fast, and operationally risky.

Sentinel to Defender Portal Migration - my 5 Gotchas to help you
The migration to the unified Defender portal is one of those transitions where the documentation covers "what's new" but glosses over what breaks on cutover day. Here are the gotchas that consistently catch teams off-guard, along with practical fixes.

Gotcha 1: Automatic Connector Enablement

When a Sentinel workspace connects to the Defender portal, Microsoft auto-enables certain connectors, often without clear notification. The most common surprises:

| Connector | Auto-Enables? | Impact |
|---|---|---|
| Defender for Endpoint | Yes | EDR telemetry starts flowing, new alerts created |
| Defender for Cloud | Yes | Additional incidents, potential ingestion cost increase |
| Defender for Cloud Apps | Conditional | Depends on existing tenant config |
| Azure AD Identity Protection | No | Stays in Sentinel workspace only |

Immediate action: Within 2 hours of connecting, navigate to Security.microsoft.com > Connectors & integrations > Data connectors and audit what auto-enabled. Compare against your pre-migration connector list and disable anything unplanned.

Why this matters: Auto-enabled connectors can duplicate data sources. Ingesting the same telemetry through both Sentinel and Defender connectors inflates Log Analytics costs by 20-40%.

Gotcha 2: Incident Duplication

The most disruptive surprise: the same incident appears twice, once from a Sentinel analytics rule and once from the Defender portal's auto-created incident creation rule. SOC teams get paged twice, deduplication breaks, and MTTR metrics go sideways.

Diagnosis:

```kusto
SecurityIncident
| where TimeGenerated > ago(7d)
| summarize IncidentCount = count() by Title
| where IncidentCount > 1
| order by IncidentCount desc
```

If you see unexpected duplicates, the cause is almost certainly the auto-enabled Microsoft incident creation rule conflicting with your existing analytics rules.

Fix: Disable the auto-created incident creation rule in Sentinel Automation rules, and rely on your existing analytics rule > incident mapping instead.
This ensures incidents are created only through Sentinel's pipeline.

Gotcha 3: Analytics Rule Title Dependencies

The Defender portal matches incidents to analytics rules by title, not by rule ID. This creates subtle problems:

- Renaming a rule breaks the incident linkage
- Copying a rule with a similar title causes cross-linkage
- Two workspaces with identically named rules generate separate incidents for the same alert

Prevention checklist:

- Audit all analytics rule titles for uniqueness before migration
- Document the title-to-GUID mapping as a reference
- Avoid renaming rules en masse during migration
- Use a naming convention like <Severity>_<Tactic>_<Technique> to prevent collisions

Gotcha 4: RBAC Gaps

Sentinel workspace RBAC roles don't directly translate to Defender portal permissions:

| Sentinel Role | Defender Portal Equivalent | Gap |
|---|---|---|
| Microsoft Sentinel Responder | Security Operator | Minor - name change |
| Microsoft Sentinel Contributor | Security Operator + Security settings (manage) | Significant - split across roles |
| Sentinel Automation Contributor | Automation Contributor (new) | New role required |

Migration approach: Create new unified RBAC roles in the Defender portal that mirror your existing Sentinel permissions. Test with a pilot group before org-wide rollout. Keep workspace RBAC roles for 30 days as a fallback.

Gotcha 5: Automation Rules Don't Auto-Migrate

Sentinel automation rules and playbooks don't carry over to the Defender portal automatically. The syntax has changed, and not all Sentinel automation actions are available in Defender.
Recommended approach:

1. Export existing Sentinel automation rules (screenshot condition logic and actions)
2. Recreate them in the Defender portal
3. Run both in parallel for one week to validate behavior
4. Retire Sentinel automation rules only after confirming Defender equivalents work correctly

Practical Migration Timeline

Phase 1 - Pre-migration (1-2 weeks before):

- Audit connectors, analytics rules, RBAC roles, and automation rules
- Document everything: titles, GUIDs, permissions, automation logic
- Test in a pilot environment first

Phase 2 - Cutover day:

- Connect workspace to Defender portal
- Within 2 hours: audit auto-enabled connectors
- Within 4 hours: check for duplicate incidents
- Within 24 hours: validate RBAC and automation rules

Phase 3 - Post-migration (1-2 weeks after):

- Monitor incident volume for duplication spikes
- Validate automation rules fire correctly
- Collect SOC team feedback on workflow impact
- After 1 week of stability: retire legacy automation rules

Phase 4 - Cleanup (2-4 weeks after):

- Remove duplicate automation rules
- Archive workspace-specific RBAC roles once unified RBAC is stable
- Update SOC runbooks and documentation

The bottom line: treat this as a parallel-run migration, not a lift-and-shift. Budget 2 weeks for parallel operations. Teams that rushed this transition consistently reported longer MTTR during the first month post-migration.

Do XDR Alerts cover the same alerts available in Alert Policies?
The alerts in question are 'User requested to release a quarantined message', 'User clicked a malicious link', etc. About 8 of these we send to 'email address removed for privacy reasons'. That administrator account has an EOM license, so Outlook rules can be set. We set rules to forward those 8 alerts to our 'email address removed for privacy reasons' address. This is, very specifically, so the alert passes through the @tenant.com address and our ticketing endpoint knows which tenant sent it. But this ISN'T ideal because it requires an EOP license (or similar; this actually hasn't been an issue until now because of our customer environments).

I've looked at the following alternatives:

- Setting 'email address removed for privacy reasons' as the recipient directly on the Alert Policies in question. This results in the mail going directly from Microsoft to our ticketing portal, so it ends up sorted into Microsoft tickets and the right team doesn't get it.
- SMTP forwarding via either Exchange AC user controls or mail flow rules. But these aren't traditional forwarding, and they have the same issue as above.
- Making administrator @tenant.com a SHARED mailbox that we can also log in to (for administration purposes). But this doesn't allow you to set Outlook rules (or even log in to Outlook).

I've checked out the newer alerts under Defender's Settings panel (XDR alerts, I think they're called). Wondering if these can be leveraged at all for this? Essentially, I'm trying to get these alerts to come to our external ticketing address from the tenant's domain (instead of Microsoft). I could probably update Autotask's rules to check for a header, and set that header via mail flow rules, but I'm hoping I don't have to do that for everyone.

Impersonation Protection: Users to Protect should also be Trusted Senders
Hey all, sort of a weird question here. Teaching my staff about Impersonation Protection, and it's occurred to me that any external sender added to 'Senders to Protect' implicitly should also be a 'Trusted Sender'.

Example: we're an MSP, and we want our Help Desk (email address removed for privacy reasons) to be protected from impersonation. Specifically, we want to protect the 'Help Desk' name. So we add 'email address removed for privacy reasons' to Senders to Protect. However, we ALSO want to make sure our emails come through, so we've ALSO had to add 'email address removed for privacy reasons' to Trusted Senders on other tenants.

Chats with Copilot have given me an understanding that this is essentially a 'which is more useful' scenario. But Copilot makes things up, and I want some human input. In theory, ANYONE we add to Trusted Senders we ALSO want protected from impersonation, and anyone we protect from impersonation we ALSO want to trust. Copilot says you SHOULDN'T do both. Which is better / more practical?

Feature Request: Extend Security Copilot inclusion (M365 E5) to M365 A5 Education tenants
Background

At Ignite 2025, Microsoft announced that Security Copilot is included for all Microsoft 365 E5 customers, with a phased rollout starting November 18, 2025. This is a significant step forward for security operations.

The gap

Microsoft 365 A5 for Education is the academic equivalent of E5: it includes the same core security stack (Microsoft Defender, Entra, Intune, and Purview). However, the Security Copilot inclusion explicitly covers only commercial E5 customers. There is no public roadmap or timeline for extending this benefit to A5 education tenants.

Why this matters

Education institutions face the same cybersecurity threats as commercial organizations, often with fewer dedicated security resources. The A5 license was positioned as the premium security offering for education. Excluding it from Security Copilot inclusion creates an inequity between commercial and education customers holding functionally equivalent license tiers.

Request

We would like Microsoft to:

- Confirm whether Security Copilot inclusion will be extended to M365 A5 Education tenants
- If yes, provide an indicative timeline
- If no, clarify the rationale and what alternative paths exist for education customers

Are other EDU admins in the same situation? Would appreciate any upvotes or comments to help raise visibility with the product team.

I built a free, open-source M365 security assessment tool - looking for feedback
I work as an IT consultant, and a good chunk of my time is spent assessing Microsoft 365 environments for small and mid-sized businesses. Every engagement started the same way: connect to five different PowerShell modules, run dozens of commands across Entra ID, Exchange Online, Defender, SharePoint, and Teams, manually compare each setting against CIS benchmarks, then spend hours assembling everything into a report the client could actually read. The tools that automate this either cost thousands per year, require standing up Azure infrastructure just to run, or only cover one service area. I wanted something simpler: one command that connects, assesses, and produces a client-ready deliverable. So I built it.

What M365 Assess does

https://github.com/Daren9m/M365-Assess is a PowerShell-based security assessment tool that runs against a Microsoft 365 tenant and produces a comprehensive set of reports. Here is what you get from a single run:

- 57 automated security checks aligned to the CIS Microsoft 365 Foundations Benchmark v6.0.1, covering Entra ID, Exchange Online, Defender for Office 365, SharePoint Online, and Teams
- 12 compliance frameworks mapped simultaneously -- every finding is cross-referenced against NIST 800-53, NIST CSF 2.0, ISO 27001:2022, SOC 2, HIPAA, PCI DSS v4.0.1, CMMC 2.0, CISA SCuBA, and DISA STIG (plus CIS profiles for E3 L1/L2 and E5 L1/L2)
- 20+ CSV exports covering users, mailboxes, MFA status, admin roles, conditional access policies, mail flow rules, device compliance, and more
- A self-contained HTML report with an executive summary, severity badges, sortable tables, and a compliance overview dashboard -- no external dependencies, fully base64-encoded, just open it in any browser or email it directly

The entire assessment is read-only. It never modifies tenant settings. Only Get-* cmdlets are used.

A few things I'm proud of

Real-time progress in the console. As the assessment runs, you see each check complete with live status indicators and timing. No staring at a blank terminal wondering if it hung.

The HTML report is a single file. Logos, backgrounds, fonts -- everything is embedded. You can email the report as an attachment and it renders perfectly. It supports dark mode (auto-detects system preference), and all tables are sortable by clicking column headers.

Compliance framework mapping. This was the feature that took the most work. The compliance overview shows coverage percentages across all 12 frameworks, with drill-down to individual controls. Each finding links back to its CIS control ID and maps to every applicable framework control.

Pass/Fail detail tables. Each security check shows the CIS control reference, what was checked, the expected value, the actual value, and a clear Pass/Fail/Warning status. Findings include remediation descriptions to help prioritize fixes.

Quick start

If you want to try it out, it takes about 5 minutes to get running:

```powershell
# Install prerequisites (if you don't have them already)
Install-Module Microsoft.Graph, ExchangeOnlineManagement -Scope CurrentUser

# Clone and run
git clone https://github.com/Daren9m/M365-Assess.git
cd M365-Assess
.\Invoke-M365Assessment.ps1
```

The interactive wizard walks you through selecting assessment sections, entering your tenant ID, and choosing an authentication method (interactive browser login, certificate-based, or pre-existing connections). Results land in a timestamped folder with all CSVs and the HTML report. Requires PowerShell 7.x and runs on Windows (macOS and Linux are experimental -- I would love help testing those platforms).

Cloud support

M365 Assess works with:

- Commercial (global) tenants
- GCC, GCC High, and DoD environments

If you work in government cloud, the tool handles the different endpoint URIs automatically.
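For unattended runs, the "pre-existing connections" option can presumably be paired with app-only, certificate-based sessions established before launching the tool. A hedged sketch using the standard Microsoft Graph and Exchange Online connection cmdlets — the app ID, tenant, and thumbprint values are placeholders, and the mapping of connections to check areas is my assumption, not something documented by the tool:

```powershell
# App-only, certificate-based sign-in (all values below are illustrative placeholders)
$appId  = "00000000-0000-0000-0000-000000000000"
$tenant = "contoso.onmicrosoft.com"
$thumb  = "ABCDEF1234567890ABCDEF1234567890ABCDEF12"

# Microsoft Graph session (presumably backing the Entra ID-side checks)
Connect-MgGraph -ClientId $appId -TenantId $tenant -CertificateThumbprint $thumb

# Exchange Online session (presumably backing the Exchange/Defender-side checks)
Connect-ExchangeOnline -AppId $appId -Organization $tenant -CertificateThumbprint $thumb

# Then run the assessment against the already-established sessions
.\Invoke-M365Assessment.ps1
```

The certificate's app registration would need the read-only Graph and Exchange permissions the checks require; app-only auth avoids interactive prompts, which matters if you schedule recurring assessments.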
What is next

This is actively maintained and I have a roadmap of improvements:

- More automated checks -- 140 CIS v6.0.1 controls are tracked in the registry, with 57 automated today. Expanding coverage is the top priority.
- Remediation commands -- PowerShell snippets and portal steps for each finding, so you can fix issues directly from the report.
- XLSX compliance matrix -- A spreadsheet export for audit teams who need to work in Excel.
- Standalone report regeneration -- Re-run the report from existing CSV data without re-assessing the tenant.

I would love your feedback

I have been building this for my own consulting work, but I think it could be useful to the broader community. If you try it, I would genuinely appreciate hearing:

- What checks should I prioritize next? Which security controls matter most in your environment?
- What compliance frameworks are most requested by your clients or auditors?
- How does the report land with non-technical stakeholders? Is the executive summary useful, or does it need work?
- macOS/Linux users -- does it run? What breaks? I have tested it on macOS, but not extensively.

Bug reports, feature requests, and contributions are all welcome on GitHub.

Repository: https://github.com/Daren9m/M365-Assess
License: MIT (free for commercial and personal use)
Runtime: PowerShell 7.x

Thanks for reading. Happy to answer any questions in the comments.

From Impersonation Calls to Transparent Reporting: Defending the New Front Door of Attacks
Email is still a major entry point—but it’s no longer the only one that matters. Today’s attackers are increasingly shifting to collaboration channels like Microsoft Teams, where trust is implicit and interaction is real time. Decisions happen fast, and that changes the economics of attacks. Adversaries can pressure users, adapt on the fly, and accelerate their objectives before traditional controls have time to respond. They can then pivot laterally across identities, endpoints, and cloud apps. And it’s not just chats and shared links anymore. Teams calling has emerged as a high-impact social-engineering path—a “front door” attackers can use to bypass inbox defenses. They can impersonate familiar brands or internal functions. They can also try to extract credentials or persuade a user to take immediate action. In a typical flow, an attacker leverages urgency and context. For example, they may reference an “account issue” following suspicious email activity. They then use the real-time pressure of a call to drive a user toward compromise. That’s why protection must happen directly in the collaboration experience. At RSA 2026, we’re announcing new Microsoft Defender capabilities designed for exactly this reality. They give SOC teams visibility that matches how attacks unfold across Microsoft Teams. They also help end users easily identify impersonation attempts, so they can stop them before compromise. And we’re introducing the new Protection and Posture Insights report, which provides tenant-specific insights about your collaboration security with Microsoft Defender. Protect your organization from voice-based attacks in Microsoft Teams Voice phishing (vishing) is a fast-growing vector because it lets attackers bypass message-based filters and manipulate targets in real time. But security teams haven’t had the same level of coverage for Teams calls that they’ve come to expect for email and messages. 
That’s why we’re excited to announce inline protection and SOC investigation capabilities for Microsoft Teams calls. Microsoft Defender can now stop the interaction while it’s happening, and SOC teams can then investigate the full path after the fact.

Hunt and remediate suspicious calls

When attackers use Teams calls to impersonate a brand, internal IT, or a trusted organization, security teams need more than anecdotal user reports—they need forensic visibility and the ability to act. Microsoft Defender has turned Teams calling from a blind spot into a first-class SOC signal, so you can now:

- Investigate Teams calling activity at scale through Advanced hunting. Use new call-focused data to identify suspicious patterns and validate risk across the organization. This includes unusual external callers, first-time contacts, or activity that aligns with brand impersonation patterns.
- Pivot directly into a call’s details using a call entity experience. Analysts can quickly understand what happened and who was involved, without stitching together context across multiple tools.
- Take mitigation actions inline by blocking malicious domains or addresses in Teams via the Tenant Allow/Block List. This turns investigation into immediate containment and helps prevent repeat attempts.
- Close the loop with end-user reporting. Pair what users flag as a security risk with what analysts can hunt and confirm. The SOC can move faster and reduce ambiguity when seconds matter.

Stop impersonation in real time

While insights are critical, the most effective way to reduce vishing impact is to interrupt social engineering while the user is still deciding what to do. Now, when a Teams call appears to be impersonating a known organization or trusted entity, users will see a persistent in-call warning banner. It shows during the incoming-call experience and while on the call. That gives users clear, contextual guidance before they comply with attacker instructions.
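The email-facing side of the Tenant Allow/Block List has long been scriptable from Exchange Online PowerShell. Assuming the Teams-facing blocks follow the same model (the blog does not specify the scripting surface, and the sender address below is a placeholder), a block entry sketch might look like this:

```powershell
# Requires an existing Exchange Online session: Connect-ExchangeOnline

# Block a confirmed-malicious sender address in the Tenant Allow/Block List;
# -NoExpiration keeps the entry in place until it is explicitly removed
New-TenantAllowBlockListItems -ListType Sender -Block `
    -Entries @("attacker@malicious-example.com") -NoExpiration

# Review the current sender block entries
Get-TenantAllowBlockListItems -ListType Sender -Block
```

Scripting the block entries makes the "investigation to containment" step repeatable — an analyst can push an indicator from a hunting result into the block list in one step instead of a portal walkthrough.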
It also extends the same protection approach used for chat impersonation into the calling surface.

[Image: real-time notification informing the user that the call is suspicious.]

And because improving protection depends on learning from real interactions, users can also provide feedback by reporting a call as not a security risk to help improve the accuracy of warnings over time. That makes Defender the only collaboration security tool that provides inline user feedback in real time.

Turn Defender telemetry into executive-ready security understanding with the Protection & Posture Insights report

To help organizations clearly understand the threats targeting their environment and how Defender is helping protect against them, we are introducing the Protection & Posture Insights report. It is available directly in the Defender portal and built on tenant-specific telemetry. The report provides a customized view of the spam, phishing, and malware campaigns observed against users—showing how attackers are attempting to gain access, what techniques are being used, who is being targeted, and where risk is concentrated across the environment.

The Protection & Posture Insights report goes beyond surface-level threat counts to highlight patterns and exposure unique to each tenant, including emerging phishing techniques, malware delivery methods, and zero-day threats identified through detonation analysis. It also shows how these threats are handled across delivery locations—such as inbox, junk, and quarantine—and which detection technologies and policies are engaged, giving teams a clearer understanding of how attackers are interacting with their environment. In addition to threat visibility, the report delivers personalized insights and targeted security policy recommendations based on each customer’s configuration and observed threat activity.
By surfacing coverage gaps, priority account targeting, and opportunities to strengthen policy enforcement, teams can take focused action to reduce exposure and improve security posture. With consistent, tenant-specific reporting over time, organizations can validate results, track progress, and share credible, executive-ready security outcomes—without manual data assembly.

[Image: the Protection & Posture Insights report]

This kind of personalized visibility answers the most important question for any security team: what was stopped in my environment, and why. It’s also helpful to pair those tenant-specific insights with an objective, industry-wide view. That’s why we publish official email security performance benchmarking. We use consistent, real-world measurements of detection and efficacy across phishing, malware, and spam. That way, you can compare Microsoft Defender against other secure email gateway (SEG) and integrated cloud email security (ICES) solutions. For a deeper look at what the latest results reveal, check out From transparency to action: What the latest Microsoft email security benchmark reveals.

These new Microsoft Defender capabilities close a critical gap in collaboration security. They help customers interrupt Teams call–based social engineering. They also give the SOC actionable call visibility and faster containment to prevent repeat attempts. Combined with the Protection & Posture Insights report, security teams can more easily report what was stopped in their tenant. They can also prioritize the next control improvements and strengthen end-to-end SOC outcomes across email and Teams.

Visit Us at RSA 2026

Join us at the Microsoft booth at the Moscone Center to see these innovations in action!

More information:

- Learn more about Defender for Office 365
- Find out how to protect your organization against multi-modal attacks
- Check out our recent blog: Disrupting threats targeting Microsoft Teams

I have absolutely no idea what Microsoft Defender 365 wants me to do here
The process starts with an email:

There's more below on the email - an offer for credit monitoring, an option to add another device, an option to download the mobile app - but I don't want to do any of them, so I click on the "Open Defender" button, which results in this:

OK, so my laptop is the bad boy here, with that Status of "Action recommended", yet no actual recommendations, and the only live link here is "Add device", something I don't need to do. The only potential "problem" I can even guess at here is that Microsoft is telling me that the laptop needs updating. Since I seldom use the laptop, only when traveling, I'd guess the next time I fire it up the update will occur, but of course I really don't know that's the recommended action it's warning me about, do I? You'd expect that if something is warning you "ACTION NEEDED!!!" it'd be a little more explicit, wouldn't you?