Latest Discussions
Cannot setup phone sign in with Microsoft Authenticator
Hi All, my new Redmi Turbo 4 was working with Microsoft Authenticator, but in the past month it started malfunctioning, so I decided to reset the Authenticator app and sign back into it. Now I can't set up the app for phone sign-in, and the sign-in request notifications do not arrive on the new phone (the old phone is currently still operational). Is there some kind of shadow ban on Chinese Android phones?
jackliuau · Jan 02, 2026 · Copper Contributor · 13 Views · 0 likes · 0 Comments

From “No” to “Now”: A 7-Layer Strategy for Enterprise AI Safety
The “block” posture on Generative AI has failed. In a global enterprise, banning these tools doesn't stop usage; it simply pushes intellectual property into unmanaged channels and creates a massive visibility gap in corporate telemetry. The priority has now shifted from stopping AI to hardening the environment so that innovation can run at velocity without compromising data sovereignty. Traditional security perimeters are ineffective against the “slow bleed” of AI leakage, where data moves through prompts, clipboards, and autonomous agents rather than bulk file transfers. To secure this environment, a 7-layer defense-in-depth model is required that treats the conversation itself as the new perimeter.

1. Identity: The Only Verifiable Perimeter
Identity is the primary control plane. Access to AI services must be treated with the same rigor as administrative access to core infrastructure. The strategy centers on enforcing device-bound Conditional Access, where access is strictly contingent on device health. To solve the "account leak" problem, deploying Tenant Restrictions v2 (TRv2) is essential to prevent users from signing into personal tenants from corporate-managed devices. For enhanced coverage, Universal Tenant Restrictions (UTR) via Global Secure Access (GSA) allows consistent enforcement at the cloud edge. While the TRv2 authentication plane is GA, data-plane protection is GA for the Microsoft 365 admin center and remains in preview for other workloads such as SharePoint and Teams.

2. Eliminating the Visibility Gap (Shadow AI)
You can't secure what you can't see. Microsoft Defender for Cloud Apps (MDCA) serves to discover and govern the enterprise AI footprint, while Purview DSPM for AI (formerly AI Hub) monitors Copilot and third-party interactions. By categorizing tools using MDCA risk scores and compliance attributes, organizations can apply automated sanctioning decisions and enforce session controls for high-risk endpoints.

3. Data Hygiene: Hardening the “Work IQ”
AI acts as a mirror of internal permissions. In a "flat" environment, AI behaves like a search engine for your over-shared data. Hardening the foundation requires automated sensitivity labeling in Purview Information Protection. Identifying PII and proprietary code before assigning AI licenses ensures that labels travel with the data, preventing labeled content from being exfiltrated via prompts or unauthorized sharing.

4. Session Governance: Solving the “Clipboard Leak”
The most common leak in 2025 is not a file upload; it's a simple copy-paste action or a USB transfer. Deploying Conditional Access App Control (CAAC) via MDCA session policies allows sanctioned apps to function while specifically blocking cut/copy/paste. This is complemented by Endpoint DLP, which extends governance to the physical device level, preventing sensitive data from being moved to unmanaged USB storage or printers during an AI-assisted workflow. Purview Information Protection with IRM rounds this out by enforcing encryption and usage rights on the files themselves. When a user tries to print a "Do Not Print" document, Purview triggers an alert that flows into Microsoft Sentinel. This gives the SOC visibility into actual policy violations instead of having to hunt through generic activity logs.
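To make the Sentinel hand-off in layer 4 concrete, here is a minimal, hedged sketch of pulling recent Purview/DLP alerts out of a Log Analytics workspace with PowerShell. The workspace ID and the alert-name filters are placeholder assumptions; adjust them to match whatever your Purview policies actually emit.

```powershell
# Minimal sketch: review recent DLP/print-policy alerts that Purview has forwarded to Sentinel.
# Requires the Az.OperationalInsights module; workspace ID and filters are placeholders.
Connect-AzAccount

$workspaceId = "<log-analytics-workspace-guid>"   # placeholder

$kql = @"
SecurityAlert
| where TimeGenerated > ago(7d)
| where ProviderName has 'Purview' or AlertName has 'DLP'
| project TimeGenerated, AlertName, CompromisedEntity, AlertSeverity
| order by TimeGenerated desc
"@

$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $kql
$result.Results | Format-Table -AutoSize
```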
5. The “Agentic” Era: Agent 365 & Sharing Controls
Now that we're moving from "Chat" to "Agents", Agent 365 and Entra Agent ID provide the necessary identity and control plane for autonomous entities. A quick tip: in large-scale tenants, default settings often present a governance risk. A critical first step is navigating to the Microsoft 365 admin center (Copilot > Agents) to disable the default “Anyone in organization” sharing option. Restricting agent creation and sharing to a validated security group is essential to prevent unvetted agent sprawl and ensure that only compliant agents are discoverable.

6. The Human Layer: “Safe Harbors” over Bans
Security fails when it creates more friction than the risk it seeks to mitigate. Instead of an outright ban, investing in AI skilling, teaching users context minimization (redacting specifics before interacting with a model), is the better path. Providing a sanctioned, enterprise-grade "Safe Harbor" like M365 Copilot offers a superior tool that naturally cuts down the use of Shadow AI.

7. Continuous Ops: Monitoring & Regulatory Audit
Security is not a “set and forget” project, particularly with the EU AI Act on the horizon. Correlating AI interactions and DLP alerts in Microsoft Sentinel using Purview Audit data (specifically the CopilotInteraction logs) allows for real-time responses. Automated SOAR playbooks can then trigger protective actions, such as revoking an Agent ID, if an entity attempts to access sensitive HR or financial data.

Final Thoughts
Securing AI at scale is an architectural shift. By layering Identity, Session Governance, and Agentic Identity, AI moves from being a fragmented risk to a governed tool that actually works for the modern workplace.
AladinH · Dec 30, 2025 · Iron Contributor · 258 Views · 0 likes · 0 Comments

Ingesting Windows Security Events into Custom Datalake Tables Without Using Microsoft‑Prefixed Table
Hi everyone, I’m looking to see whether there is a supported method to ingest Windows Security Events into custom Microsoft Sentinel Data Lake–tiered tables (for example, SecurityEvents_CL) without writing to or modifying the Microsoft‑prefixed analytical tables. Essentially, I want to route these events directly into custom tables only, bypassing the default Microsoft‑managed tables entirely. Has anyone implemented this, or is there a recommended approach? Thanks in advance for any guidance.
Best Regards, Prabhu Kiran

Is Practice Labs Enough for the AZ-305 Exam?
Hello everyone, just a quick question: how should I best prepare for the AZ-305 exam? Is retaking the Learning Path quizzes enough, or should I also practice with other types of tests? Any advice would be greatly appreciated. Thanks in advance!
Thomas354 · Nov 30, 2025 · Copper Contributor · 108 Views · 1 like · 1 Comment

Windows Hello passkeys dialog appearing and cannot be removed or suppressed
Hi everyone, I’m dealing with a persistent Windows Hello and passkey issue in Chrome and Brave. Yes, this is relevant because they're the only browsers having this issue, while Edge, for example, is fine. At this point I’m trying to understand whether this is expected behavior, a bug, or a design oversight. PS: Yes, I'm in contact with the related browser support teams, but since they seem utterly hopeless I'm asking here, since it's at least partially a Windows Hello issue.

Problem description
Even with:
- Password managers disabled in browser settings,
- Windows Hello disabled in Chrome/Brave settings,
- Windows Hello PIN enabled only for device login,
passkeys are still stored under chrome://settings/passkeys (which I cannot delete, since they are used for logging on to the device). The devices are connected to Entra ID, but this is not required to reproduce the issue, although a business account configuration creates a passkey with Windows Hello AFAIK.

Observed behavior
When I attempt to sign in on office.com, Windows Hello automatically triggers a dialog offering authentication via passkeys, even though:
- I don’t want passkeys used for browser logins,
- passkeys are turned off everywhere they can be,
- Windows Hello is intended only for local device authentication.
The dialog cannot be suppressed, disabled, or hidden (trust me, I tried for weeks). It effectively forces the Windows Hello prompt as a primary option, which causes problems both personally and in business contexts (wrong credential signaling, misleading users who are supposed to use a dedicated password manager solution instead of browser password managers, enforcing an unwanted authentication flow, etc.).

What I already verified
- Many, many (too many) Windows registry workarounds that never worked.
- Dug through almost all flags in those browsers.
- Chrome/Brave → Password Manager: disabled
- Chrome/Brave → Windows Hello toggle: off
- Looked through what feels like almost every related option in Windows Settings.
- Tried gpedit.msc local rules.
- System up to date.
- Windows Hello configured to use PIN, but stores "passkeys used to log on to this device".

Why this is a problem
Windows Hello automatically assumes that the device-level Windows Hello credentials should always be available as a WebAuthn authenticator. This feels like a big security and UX issue due to:
- unexpected authentication dialogs,
- inability to control where and how passkey credentials are shared with applications,
- inability to turn the feature off,
- no administrative or local option to disable Hello for WebAuthn separately from device login,
- business users either having issues keeping passwords in order (our business uses a dedicated password manager, but this behaviour covers its dialog option) or not having a PIN on their devices (when I disable Windows Hello entirely, since when there are no passkeys the option doesn't appear).

Questions
- Is there any supported way to disable Windows Hello as a WebAuthn/passkey option in browsers, while keeping Hello enabled for local device login?
- Is this expected behavior from Windows Hello, or is it considered a bug?
- Are there registry/policy settings (documented or upcoming) that allow disabling the Windows platform authenticator specifically for browsers like Chrome and Brave?
- Is Microsoft aware of this issue? If so, is it tracked anywhere?
Additional notes
This issue replicates 100% across (as long as there are passkeys configured):
- all Windows 11 devices I've managed to get my hands on,
- Chrome and Brave (latest versions),
- multiple Microsoft accounts and tenants,
- multiple clean installations.
Any guidance or clarification from the Windows security or identity teams would be greatly appreciated. And honestly, if there is any more info I could possibly provide, PLEASE ask away.
Addjam · Nov 24, 2025 · Copper Contributor · 456 Views · 1 like · 2 Comments

Add Privacy Scrub Service to Microsoft Defender?
Microsoft Defender protects accounts against phishing and malware, but attackers increasingly exploit nuisance data broker sites that publish personal information (names, emails, addresses). These sites are scraped to personalize phishing campaigns, making them harder to detect. I propose a premium Defender add‑on that automatically files opt‑out requests with major data brokers (similar to DeleteMe).
PMChefalo · Nov 22, 2025 · Copper Contributor · 59 Views · 0 likes · 1 Comment

Azure Cloud HSM: Secure, Compliant & Ready for Enterprise Migration
Azure Cloud HSM is Microsoft’s single-tenant, FIPS 140-3 Level 3 validated hardware security module service, designed for organizations that need full administrative control over cryptographic keys in the cloud. It’s ideal for migration scenarios, especially when moving on-premises HSM workloads to Azure with minimal application changes.

Onboarding & Availability
- No Registration or Allowlist Needed: Azure Cloud HSM is accessible to all customers; no special onboarding or allowlist registration is required.
- Regional Availability:
  - Private Preview: UK West
  - Public Preview (March 2025): East US, West US, West Europe, North Europe, UK West
  - General Availability (June 2025): all public, US Gov, and AGC regions where Azure Managed HSM is available

Choosing the Right Azure HSM Solution
Azure offers several key management options: Azure Key Vault (Standard/Premium), Azure Managed HSM, Azure Payment HSM, and Azure Cloud HSM.
Cloud HSM is best for:
- Migrating existing on-premises HSM workloads to Azure
- Applications running in Azure VMs or Web Apps that require direct HSM integration
- Shrink-wrapped software in IaaS models supporting HSM key stores
Common use cases: ADCS (Active Directory Certificate Services), SSL/TLS offload for Nginx and Apache, document and code signing, Java apps needing a JCE provider, SQL Server TDE (IaaS) via EKM, and Oracle TDE.

Deployment Best Practices
1. Resource Group Strategy
Deploy the Cloud HSM resource in a dedicated resource group (e.g., CHSM-SERVER-RG). Deploy client resources (VM, VNET, Private DNS Zone, Private Endpoint) in a separate group (e.g., CHSM-CLIENT-RG).
2. Domain Name Reuse Policy
Each Cloud HSM requires a unique domain name, constructed from the resource name and a deterministic hash. There are four reuse types: Tenant, Subscription, ResourceGroup, and NoReuse; choose based on your naming and recovery needs.
3. Step-by-Step Deployment (see the sketch after the Security, Compliance, and Operations section)
- Provision Cloud HSM: use the Azure Portal, PowerShell, or CLI. Provisioning takes ~10 minutes.
- Register the resource provider: Register-AzResourceProvider -ProviderNamespace Microsoft.HardwareSecurityModules
- Create VNET & Private DNS Zone: set up networking in the client resource group.
- Create Private Endpoint: connect the HSM to your VNET for secure, private access.
- Deploy Admin VM: use a supported OS (Windows Server, Ubuntu, RHEL, CBL Mariner) and download the Azure Cloud HSM SDK from GitHub.

Initialize and Configure
- Edit azcloudhsm_resource.cfg: set the hostname to the private link FQDN for hsm1 (found in the Private Endpoint DNS config).
- Initialize Cluster: use the management utility (azcloudhsm_mgmt_util) to connect to server 0 and complete initialization.
- Partition Owner Key Management: generate the PO key securely (preferably offline), store PO.key on an encrypted USB drive in a physical safe, then sign the partition cert and upload it to the HSM.
- Promote Roles: promote the Precrypto Officer (PRECO) to Crypto Officer (CO) and set a strong password.

Security, Compliance, and Operations
- Single-Tenant Isolation: only your organization has admin access to your HSM cluster.
- No Microsoft Access: Microsoft cannot access your keys or credentials.
- FIPS 140-3 Level 3 Compliance: all hardware and firmware are validated and maintained by Microsoft and the HSM vendor.
- Tamper Protection: physical and logical tamper events trigger key zeroization.
- No Free Tier: billing starts upon provisioning and includes all three HSM nodes in the cluster.
- No Key Sharing with Azure Services: Cloud HSM is not integrated with other Azure services for key usage.
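To make the deployment steps above more concrete, here is a minimal, hedged PowerShell sketch of the networking and provisioning flow. The resource names, region, address ranges, SKU, and the generic New-AzResource call for the Cloud HSM cluster are illustrative assumptions (dedicated Az cmdlets for Cloud HSM may exist in your module version); treat it as a starting point, not a definitive deployment script.

```powershell
# Hedged sketch of the Cloud HSM deployment flow described above.
# Names, region, address ranges, SKU, and resource type string are placeholders/assumptions.
Connect-AzAccount
Register-AzResourceProvider -ProviderNamespace Microsoft.HardwareSecurityModules

# Separate resource groups for the HSM and for client resources (per the best practice above)
New-AzResourceGroup -Name "CHSM-SERVER-RG" -Location "eastus"
New-AzResourceGroup -Name "CHSM-CLIENT-RG" -Location "eastus"

# Client-side networking: VNET + subnet that will later host the private endpoint and admin VM
$subnet = New-AzVirtualNetworkSubnetConfig -Name "chsm-subnet" -AddressPrefix "10.10.1.0/24"
New-AzVirtualNetwork -Name "chsm-vnet" -ResourceGroupName "CHSM-CLIENT-RG" `
    -Location "eastus" -AddressPrefix "10.10.0.0/16" -Subnet $subnet

# Provision the Cloud HSM cluster via the generic resource cmdlet
# (resource type and SKU name are assumptions; verify against current documentation)
New-AzResource -ResourceGroupName "CHSM-SERVER-RG" -Location "eastus" `
    -ResourceType "Microsoft.HardwareSecurityModules/cloudHsmClusters" `
    -ResourceName "contoso-chsm-01" `
    -Sku @{ Name = "Standard_B1" } -Properties @{} -Force

# Private DNS zone and private endpoint creation, plus the admin VM and SDK setup,
# follow the remaining steps in the post.
```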
Operational Tips
- Credential Management: store PO.key offline; use environment variables or Azure Key Vault for operational credentials. Rotate credentials regularly and document all procedures.
- Backup & Recovery: backups are automatic and encrypted; always confirm backup/restore after initialization.
- Support: all support is through Microsoft; open a support request for any issues.

Azure Cloud HSM vs. Azure Managed HSM

| Feature / Aspect | Azure Cloud HSM | Azure Managed HSM |
|---|---|---|
| Deployment Model | Single-tenant, dedicated HSM cluster (Marvell LiquidSecurity hardware) | Multi-tenant, fully managed HSM service |
| FIPS Certification | FIPS 140-3 Level 3 | FIPS 140-2 Level 3 |
| Administrative Control | Full admin control (Partition Owner, Crypto Officer, Crypto User roles) | Azure manages HSM lifecycle; customers manage keys and RBAC |
| Key Management | Customer-managed keys and partitions; direct HSM access | Azure-managed HSM; customer-managed keys via Azure APIs |
| Integration | PKCS#11, OpenSSL, JCE, KSP/CNG, direct SDK access | Azure REST APIs, Azure CLI, PowerShell, Key Vault SDKs |
| Use Cases | Migration from on-prem HSMs, legacy apps, custom PKI, direct cryptographic ops | Cloud-native apps, SaaS, PaaS, Azure-integrated workloads |
| Network Access | Private VNET only; not accessible by other Azure services | Accessible by Azure services (e.g., Storage, SQL, Disk Encryption) |
| Key Usage by Azure Services | Not supported (no integration with Azure services) | Supported (can be used for disk, storage, SQL encryption, etc.) |
| BYOK/Key Import | Supported (with key wrap methods) | Supported (with Azure Key Vault import tools) |
| Key Export | Supported (if enabled at key creation) | Supported (with exportable keys) |
| Billing | Hourly fee per cluster (3 HSMs per cluster); always-on | Consumption-based (per operation, per key, per hour) |
| Availability | High availability via 3-node cluster; automatic failover and backup | Geo-redundant, managed by Azure |
| Firmware Management | Microsoft manages firmware; customer cannot update | Fully managed by Azure |
| Compliance | Meets strictest compliance (FIPS 140-3 Level 3, single-tenant isolation) | Meets broad compliance (FIPS 140-2 Level 3, multi-tenant isolation) |
| Best For | Enterprises migrating on-prem HSM workloads, custom/legacy integration needs | Cloud-native workloads, Azure service integration, simplified management |

When to Choose Each?
Azure Cloud HSM is ideal if you:
- Need full administrative control and single-tenant isolation.
- Are migrating existing on-premises HSM workloads to Azure.
- Require direct HSM access for legacy or custom applications.
- Need to meet the highest compliance standards (FIPS 140-3 Level 3).
Azure Managed HSM is best if you:
- Want a fully managed, cloud-native HSM experience.
- Need seamless integration with Azure services (Storage, SQL, Disk Encryption, etc.).
- Prefer simplified key management with Azure RBAC and APIs.
- Are building new applications or SaaS/PaaS solutions in Azure.

| Scenario | Recommended Solution |
|---|---|
| Migrating on-prem HSM to Azure | Azure Cloud HSM |
| Cloud-native app needing Azure service keys | Azure Managed HSM |
| Custom PKI or direct cryptographic operations | Azure Cloud HSM |
| SaaS/PaaS with Azure integration | Azure Managed HSM |
| Highest compliance, single-tenant isolation | Azure Cloud HSM |
| Simplified management, multi-tenant | Azure Managed HSM |
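For the cloud-native path in the comparison above, provisioning a Managed HSM is largely a one-cmdlet operation. Below is a minimal, hedged sketch using the Az.KeyVault module; the resource names, region, and administrator account are placeholder assumptions.

```powershell
# Hedged sketch: provisioning an Azure Managed HSM for the cloud-native scenarios above.
# Resource names, region, and the administrator identity are placeholders.
Connect-AzAccount

$adminOid = (Get-AzADUser -UserPrincipalName "admin@contoso.com").Id   # hypothetical admin

New-AzKeyVaultManagedHsm -Name "contoso-mhsm-01" `
    -ResourceGroupName "MHSM-RG" `
    -Location "eastus" `
    -Administrator $adminOid `
    -SoftDeleteRetentionInDays 90

# After provisioning, the security domain must be downloaded to activate the HSM
# (Export-AzKeyVaultSecurityDomain) before any keys can be created.
```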
Azure Cloud HSM is the go-to solution for organizations migrating HSM-backed workloads to Azure, offering robust security, compliance, and operational flexibility. By following best practices for onboarding, deployment, and credential management, you can ensure a smooth and secure transition to the cloud.
78 Views · 0 likes · 0 Comments

Enterprise Strategy for Secure Agentic AI: From Compliance to Implementation
Imagine an AI system that doesn’t just answer questions but takes action: querying your databases, updating records, triggering workflows, even processing refunds without human intervention. That’s Agentic AI, and it’s here. But with great power comes great responsibility. This autonomy introduces new attack surfaces and regulatory obligations. The Model Context Protocol (MCP) server, the gateway between your AI agent and critical systems, becomes your Tier-0 control point. If it fails, the blast radius is enormous. This is the story of how enterprises can secure Agentic AI, stay compliant, and implement Zero Trust architectures using Azure AI Foundry. Think of it as a roadmap, a journey with three milestones.

Milestone 1: Securing the Foundation
Our journey starts with understanding the paradigm shift. Traditional AI with RAG (Retrieval-Augmented Generation) is like a librarian:
- It retrieves pre-indexed data.
- It summarizes information.
- It never changes the books or places orders.
Security here is simple: protect the index, validate queries, prevent data leaks. But Agentic AI? It’s a staffer with system access. It can:
- Execute tools and business logic autonomously.
- Chain operations: read → analyze → write → notify.
- Modify data and trigger workflows.
Bottom line: RAG is a “smart librarian.” Agentic AI is a “staffer with system access.” Treat the security model accordingly. And that means new risks: unauthorized access, privilege escalation, financial impact, data corruption.

So what’s the defense? Ten critical security controls, your first line of protection. Here’s what a production‑grade, Zero Trust MCP gateway needs; it is intentionally simplified in the demo (e.g., no auth) to highlight where you must harden in production (https://github.com/davisanc/ai-foundry-mcp-gateway). A sketch of the secrets-management control follows this list.
1. Authentication. Demo: none. Prod: Microsoft Entra ID, JWT validation, Managed Identity, automatic credential rotation.
2. Authorization & RBAC. Demo: none. Prod: tool‑level RBAC via Entra; least privilege; explicit allow‑lists per agent/capability.
3. Input Validation. Demo: basic (extension whitelist, 10 MB limit, filename sanitization). Prod: JSON Schema validation, injection guards (SQL/command), business‑rule checks.
4. Rate Limiting. Demo: none. Prod: multi‑tier (per‑agent, per‑tool, global), adaptive throttling, backoff.
5. Audit Logging. Demo: console → App Service logs. Prod: structured logs with correlation IDs, compliance metadata, PII redaction.
6. Session Management. Demo: in‑memory UUID sessions. Prod: encrypted distributed storage (Redis/Cosmos DB), tenant isolation, expirations.
7. File Upload Security. Demo: extension whitelist, size limits, memory‑only. Prod: 7‑layer defense (validation, MIME checks, malware scanning via Defender for Storage), encryption at rest, signed URLs.
8. Network Security. Demo: public App Service + HTTPS. Prod: Private Endpoints, VNet integration, NSGs, Azure Firewall, no public exposure.
9. Secrets Management. Demo: App Service environment variables (not in code). Prod: Azure Key Vault + Managed Identity, rotation, access audit.
10. Observability & Threat Detection (5‑Layer Stack)
- Layer 1: Application Insights (requests, dependencies, custom security events)
- Layer 2: Azure AI Content Safety (harmful content, jailbreaks)
- Layer 3: Microsoft Defender for AI (prompt injection incl. ASCII smuggling, credential theft, anomalous tool usage)
- Layer 4: Microsoft Purview for AI (PII/PHI classification, DLP on outputs, lineage, policy)
- Layer 5: Microsoft Sentinel (SIEM correlation, custom rules, automated response)
Note: Azure AI Content Safety is built into Azure AI Foundry for real‑time filtering on both prompts and completions.
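As a small illustration of control 9 (Secrets Management), here is a hedged PowerShell sketch of how a gateway host running with a managed identity might pull an operational secret from Azure Key Vault instead of environment variables. The vault name, secret name, and downstream URL are placeholder assumptions, and the actual gateway in the linked demo repo may use a different language and SDK entirely.

```powershell
# Hedged sketch: fetch an operational secret with a managed identity instead of env vars.
# Vault/secret names and the downstream URL are placeholders; the demo repo may differ.
# On an Azure-hosted VM or App Service, -Identity signs in with the resource's managed identity.
Connect-AzAccount -Identity

$secret = Get-AzKeyVaultSecret -VaultName "mcp-gateway-kv" -Name "downstream-api-key" -AsPlainText

# Use the secret for the downstream call, never writing it to logs or config files.
$headers = @{ Authorization = "Bearer $secret" }
Invoke-RestMethod -Uri "https://internal-api.contoso.example/refunds" -Headers $headers -Method Get
```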
Picture this as an airport security model: multiple checkpoints, each catching what the previous one missed. That’s defense-in-depth.

Zero Trust in Practice ~ A Day in the Life of a Prompt
Every agent request passes through 8 sequential checkpoints, mapped to MITRE ATLAS tactics/mitigations (e.g., AML.M0011 Input Validation, AML.M0004 Output Filtering, AML.M0015 Adversarial Input Detection). The design goal is defense‑in‑depth: multiple independent controls, different detection signals, and layered failure modes.
- Checkpoints 1‑7: Enforcement (deny/contain before business systems)
- Checkpoint 8: Monitoring (detect/respond, hunt, learn, harden)
Relevant ATLAS mitigations:
- AML.M0009 – Control Access to ML Models
- AML.M0011 – Validate ML Model Inputs
- AML.M0000 – Limit ML Model Availability
- AML.M0014 – ML Artifact Logging
- AML.M0004 – Output Filtering
- AML.M0015 – Adversarial Input Detection
If one control slips, the others still stand. Resilience is the product of layers.

Milestone 2: Navigating Compliance
Next stop: regulatory readiness. The EU AI Act is the world’s first comprehensive AI law. If your AI system operates in or impacts the EU market, compliance isn’t optional, it’s mandatory. Agentic AI often falls under high-risk classification. That means:
- Risk management systems.
- Technical documentation.
- Logging and traceability.
- Transparency and human oversight.
Fail to comply? Fines up to €30M or 6% of global turnover. Azure helps you meet these obligations:
- Entra ID for identity and RBAC.
- Purview for data classification and DLP.
- Defender for AI for prompt injection detection.
- Content Safety for harmful content filtering.
- Sentinel for SIEM correlation and incident response.
And this isn’t just about today. Future regulations are coming: US AI Executive Orders, the UK AI Roadmap, ISO/IEC 42001 standards. The trend is clear: transparency, explainability, and continuous monitoring will be universal.

Milestone 3: Implementation Deep-Dive
Now, the hands-on part. How do you build this strategy into reality? (A hedged sketch of Step 1 appears after these steps.)
Step 1: Entra ID Authentication
- Register your MCP app in Entra ID.
- Configure OAuth2 and JWT validation.
- Enable Managed Identity for downstream resources.
Step 2: Apply the 10 Controls
- RBAC: tool-level access checks.
- Validation: JSON schema + injection prevention.
- Rate Limiting: Express middleware or Azure API Management.
- Audit Logging: structured logs with correlation IDs.
- Session Management: Redis with encryption.
- File Security: MIME checks + Defender for Storage.
- Network: Private Endpoints + VNet.
- Secrets: Azure Key Vault.
- Observability: App Insights + Defender for AI + Purview + Sentinel.
Step 3: Secure CI/CD Pipelines
Embed compliance checks in Azure DevOps:
- Pre-build: secret scanning.
- Build: RBAC & validation tests.
- Deploy: Managed Identity for service connections.
- Post-deploy: compliance scans via Azure Policy.
Step 4: Build the 5-Layer Observability Stack
- App Insights → telemetry.
- Content Safety → harmful content detection.
- Defender for AI → prompt injection monitoring.
- Purview → PII/PHI classification and lineage.
- Sentinel → SIEM correlation and automated response.
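To ground Step 1, here is a minimal, hedged sketch of registering the MCP gateway application in Entra ID with Microsoft Graph PowerShell. The display name and sign-in audience are placeholder assumptions; API permissions, consent, and the JWT validation logic itself live in the gateway code, not in this script.

```powershell
# Hedged sketch of Step 1: register the MCP gateway app in Entra ID.
# Display name and audience are placeholders; permission/consent setup is omitted.
Connect-MgGraph -Scopes "Application.ReadWrite.All"

# App registration: single-tenant, so only your organization's tokens are accepted
$app = New-MgApplication -DisplayName "mcp-gateway" -SignInAudience "AzureADMyOrg"

# Service principal so the app can be assigned roles and Conditional Access policies
New-MgServicePrincipal -AppId $app.AppId

# The gateway then validates incoming JWTs against this app (issuer + audience),
# and uses a Managed Identity, not client secrets, for downstream Azure resources.
Write-Output "App (client) ID to validate against in the gateway: $($app.AppId)"
```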
The Destination: A Secure, Compliant Future
By now, you’ve seen the full roadmap:
- Secure the foundation with Zero Trust and layered controls.
- Navigate compliance with the EU AI Act and prepare for global regulations.
- Implement the strategy using Azure-native tools and CI/CD best practices.
Because in the world of Agentic AI, security isn’t optional, compliance isn’t negotiable, and observability is your lifeline.

Resources
- https://learn.microsoft.com/en-us/azure/ai-foundry/what-is-azure-ai-foundry
- https://learn.microsoft.com/en-us/azure/defender-for-cloud/ai-threat-protection
- https://learn.microsoft.com/en-us/purview/ai-microsoft-purview
- https://atlas.mitre.org/
- https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
- https://techcommunity.microsoft.com/blog/microsoft-security-blog/microsoft-sentinel-mcp-server---generally-available-with-exciting-new-capabiliti/4470125
umamasurkar28 · Nov 19, 2025 · Microsoft · 190 Views · 1 like · 1 Comment

Microsoft Sentinel Graph with Microsoft Security Solutions
Why I Chose Sentinel Graph
Modern security operations demand speed and clarity. Attackers exploit complex relationships across identities, devices, and workloads. I needed a solution that could:
- Correlate signals across identity, endpoint, and cloud workloads.
- Predict lateral movement and highlight the blast radius of compromised accounts.
- Integrate seamlessly with Microsoft Defender, Entra ID, and Purview.
Sentinel Graph delivered exactly that, acting as the reasoning layer for AI-driven defense.

What's New: Sentinel Graph Public Preview
Sentinel Graph introduces:
- Graph-based threat hunting: traverse relationships across millions of entities.
- Blast radius analysis: visualize the impact of compromised accounts or assets.
- AI-powered reasoning: built for integration with Security Copilot.
- Native integration with Microsoft Defender and Purview for a unified security posture.

Uncover Hidden Security Risks
Sentinel Graph helps security teams:
- Expose lateral movement paths that attackers could exploit.
- Identify choke points where defenses can be strengthened.
- Reveal risky relationships between identities, devices, and resources that traditional tools miss.
- Prioritize remediation by visualizing the most critical nodes in an attack path.
This capability transforms threat hunting from reactive alert triage to proactive risk discovery, enabling defenders to harden their environment before an attack occurs.

How to Enable Defense at All Stages
Sentinel Graph strengthens defense across:
- Prevention: identify choke points and harden critical paths before attackers exploit them.
- Detection: use graph traversal to uncover hidden attack paths and suspicious relationships.
- Investigation: quickly pivot from alerts to full graph-based context for deeper analysis.
- Response: contain threats faster by visualizing blast radius and isolating impacted entities.
This end-to-end approach ensures security teams can anticipate, detect, and respond with precision.

How I Implemented It
Step 1: Enabling Sentinel Graph
If you already have the Sentinel Data Lake, the graph is auto-provisioned when you sign in to the Microsoft Defender portal; the hunting graph and blast radius experiences appear directly in Defender. New to the Data Lake? Use the Sentinel Data Lake onboarding flow to enable both the data lake and the graph.
Step 2: Integration with Microsoft Defender
Practical examples from my project:
- Query: "Show me all entities connected to this suspicious IP address." → Revealed lateral movement attempts across multiple endpoints.
- Query: "Map the blast radius of a compromised account." → Identified linked service principals and privileged accounts for isolation.
Step 3: Integration with Microsoft Purview
- In Purview Insider Risk Management, follow the Data Risk Graph setup instructions.
- In Purview Data Security Investigations, enable the Data Risk Graph for sensitive data flow analysis.
Example query: "Highlight all paths where sensitive data intersects with external connectors." → Helped detect risky data exfiltration paths.
Step 4: AI-Powered Insights
Using Microsoft Security Copilot, I asked:
- "Predict the next hop for this attacker based on current graph state."
- "Identify choke points in this attack path."
This reduced investigation time and improved proactive defense.
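As a complement to the graph experiences above, here is a hedged PowerShell sketch that approximates the "find everything connected to this suspicious IP address" question with plain KQL against a Sentinel workspace. It is not the graph feature itself (that lives in the Defender portal and Security Copilot); it simply illustrates the kind of identity-to-IP relationship being traversed. The workspace ID and IP address are placeholder assumptions.

```powershell
# Hedged sketch: a plain-KQL approximation of "find everything connected to this suspicious IP",
# run against a Log Analytics / Sentinel workspace. Workspace ID and IP are placeholders.
Connect-AzAccount

$workspaceId  = "<sentinel-workspace-guid>"   # placeholder
$suspiciousIp = "203.0.113.45"                # placeholder (documentation range)

$kql = @"
SigninLogs
| where TimeGenerated > ago(14d)
| where IPAddress == '$suspiciousIp'
| summarize Apps = make_set(AppDisplayName), SignIns = count() by UserPrincipalName
| order by SignIns desc
"@

(Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $kql).Results |
    Format-Table -AutoSize
```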
If you want to experience the power of Microsoft Sentinel Graph, here’s how you can get started:
1. Enable Sentinel Graph: in your Sentinel workspace, turn on the Sentinel Data Lake. The graph will be auto-provisioned when you sign in to the Microsoft Defender portal.
2. Connect Microsoft Security Solutions: use built-in connectors to integrate Microsoft Defender, Microsoft Entra ID, and Microsoft Purview. This ensures unified visibility across identities, endpoints, and data.
3. Explore Graph Queries: start hunting with Sentinel Notebooks, or take it a step further by integrating with Microsoft Security Copilot for natural-language investigations. Example: "Show me the blast radius of a compromised account." or "Find everything connected to this suspicious IP address."
You can sign up here for a free preview of the Sentinel Graph MCP tools, which will also roll out starting December 1, 2025.
36 Views · 0 likes · 0 Comments
Tags
- cloud security (984 Topics)
- security (774 Topics)
- microsoft information protection (518 Topics)
- azure (498 Topics)
- information protection and governance (484 Topics)
- microsoft 365 (419 Topics)
- microsoft sentinel (343 Topics)
- azure active directory (240 Topics)
- data loss prevention (215 Topics)
- microsoft 365 defender (168 Topics)