Security operations centers are increasingly overwhelmed. Analysts must triage large volumes of alerts, investigate complex signals across multiple environments, and determine which threats require immediate action. Much of this work still involves manually gathering context, reconstructing timelines, and making decisions under time pressure.
At Microsoft Ignite 2025, we introduced how Security Copilot is bringing agentic AI directly into Microsoft Defender to transform how SOC teams detect, triage, and investigate threats. Building on that vision, Copilot continues to expand its capabilities with two complementary forms of AI: autonomous agents that reason dynamically to execute complex security tasks, and assistive experiences that help analysts complete their daily workflows faster and at greater scale.
Together, these innovations are designed to reduce operational burden while enabling analysts to focus on the decisions that matter most.
Autonomous AI: agents that triage alerts and investigate risk
Our vision is to bring autonomous AI across the SOC lifecycle, moving from isolated AI-enabled tasks to outcome-driven agentic transformation that elevates SOC teams across all experience levels. By applying frontier LLM reasoning to security telemetry and threat intelligence, Security Copilot is uniquely positioned to embed specialized agents at every stage—from anticipating risk and preventing attacks, to detecting, triaging, investigating, and responding. The result is a SOC that operates at machine speed while keeping humans firmly in control.
During RSA Conference 2026, we’re expanding that vision within the triage and investigation stage of the SOC lifecycle with the launch of one expanded agent and one new agent.
We’ve already demonstrated the impact of outcome-driven autonomous workflows with agentic phishing triage: our agent identifies 6.5 times more malicious alerts than human analysts working alone. Today, we’re extending that same capability beyond phishing to identity and cloud alerts.
The Security Alert Triage Agent autonomously determines whether phishing, identity, and cloud alerts represent real threats or false alarms. The agent provides natural language verdicts and transparent, step-by-step reasoning that explains how it reached each decision. In public preview, for identity, it supports triage of alert types involving password spray attempts, suspicious inbox rules associated with business email compromise (BEC), and accounts potentially compromised following a password spray attack. For cloud, it supports more than 30 alert types related to cloud container activity.
This agent is designed to handle alerts that are both high risk and high noise. Identity and cloud alerts often require longer and more complex investigations, and missing them can have serious consequences. For identity alerts, the challenge is scale: high-volume signals such as password spray generate noise, making it difficult to quickly isolate real compromise. The agent helps by rapidly triaging these alerts and filtering out false positives, allowing analysts to focus on identity activity that truly indicates risk. For cloud alerts, the challenge is different: alert volume may be lower, but investigations are inherently more complex and require deep expertise. In these cases, the agent applies advanced analysis across multiple signals to investigate alerts that would otherwise be burdensome and difficult to analyze manually, helping ensure critical cloud threats are surfaced quickly and not overlooked.
By providing natural language verdicts and transparent decision logic, the agent walks teams step-by-step through investigations that would typically require senior-level expertise. Clear explanations and visual decision graphs show how each conclusion was reached, reducing investigation effort and increasing confidence in outcomes. This transparency frees teams to focus on responding to real threats, while giving junior analysts visibility into the reasoning behind each verdict. The result is specialized expertise embedded directly into daily SOC workflows, raising the floor for the entire team.
At RSA Conference 2026, we’re also announcing the Security Analyst Agent in Microsoft Defender. This agent performs deep, multi-step investigations across Microsoft Defender and Microsoft Sentinel telemetry to surface high-impact risks and deliver prioritized insights in minutes. Each finding includes clear reasoning and supporting evidence, enabling analysts to quickly understand and act on the results.
Today, teams often rely on advanced hunting to investigate potential threats by writing queries, iteratively refining hypotheses, and correlating results across multiple datasets. While powerful, this process typically requires manually piecing together context across tools, reconstructing timelines, and sifting through large volumes of telemetry to determine whether suspicious activity represents real risk. Given the breadth and complexity of modern threats, these investigations can take days or even weeks.
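To make the manual process concrete, a typical advanced hunting query for one hypothesis, such as a possible password spray, might look like the sketch below. The table and column names follow the Microsoft Defender advanced hunting schema, but the thresholds and time window are illustrative examples, not recommended detection logic:

```kusto
// Illustrative only: look for one source IP failing sign-ins against
// many distinct accounts in a short window (a password spray pattern).
// Thresholds (1h window, 20 accounts) are arbitrary examples.
IdentityLogonEvents
| where Timestamp > ago(1h)
| where ActionType == "LogonFailed"
| summarize FailedAccounts = dcount(AccountUpn), Attempts = count() by IPAddress
| where FailedAccounts > 20
| order by FailedAccounts desc
```

Each query like this answers only one question; analysts then refine the hypothesis, pivot to other tables, and correlate the results by hand, which is the iterative work described above.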
The Security Analyst Agent builds on the power of advanced hunting by autonomously orchestrating parts of that investigative process. The agent retrieves and analyzes large volumes of security data (up to ~100MB), correlates signals across telemetry sources, and iteratively explores hypotheses to surface patterns and threats that might otherwise go unnoticed. The results are synthesized into clear, risk-relevant findings with supporting evidence trails, helping analysts quickly understand what matters most. In doing so, the agent performs the kind of deep analytical work typically carried out by experienced security analysts.
Assistive AI: Chat experience in the analyst’s flow of work
While autonomous agents help execute complex security tasks with dynamic reasoning, Security Copilot also brings assistive AI directly into analysts’ daily workflows. These capabilities are designed to accelerate manual tasks, helping analysts gather context and make decisions faster.
Today, Copilot is already embedded across Microsoft Defender experiences. Analysts can generate natural language summaries of incidents, receive guided response recommendations, draft incident reports, generate KQL queries with natural language, and more. These capabilities help accelerate specific tasks, but interactions with Copilot typically occur as individual actions within a side panel or embedded experience.
We’re now taking the next step by introducing a chat experience for Security Copilot directly within Microsoft Defender, enabling teams to interact with AI through an ongoing, two-way conversation. Analysts can ask questions, explore hypotheses, and follow investigative threads across incidents, alerts, identities, devices, IPs, and other evidence—without switching tools or manually piecing together context. Copilot understands the analyst’s investigation context, grounding each response in the relevant signals and telemetry already available in Defender.
Throughout the interaction, Copilot does more than respond. It actively advances the investigation by initiating step-by-step analysis, such as examining a specific entity, while continuously incorporating new signals as they emerge. Analysts can follow up in real time, refining their line of inquiry and digging deeper as the conversation evolves. This creates a more fluid, iterative workflow that lowers the barrier to AI adoption and enables SOC teams to operate with the speed and scale needed to stay ahead of modern threats.
Alongside this new embedded chat experience for Security Copilot, we are also extending conversational capabilities to third-party agents. From the Agents library in Defender, teams can start a chat with any eligible agent to validate findings, gather additional context, and accelerate response. For example, users can interact with XBOW’s Pentest Analysis Agent to determine whether vulnerabilities flagged by Microsoft Defender for Cloud are truly exploitable. The agent can initiate a pentest, explain the results, and recommend next steps—such as improving detection coverage in Microsoft Sentinel—to strengthen defenses.
Learn more at RSA Conference 2026!
To learn more about Security Copilot in Microsoft Defender, visit us at booth #5744. Our team will be demonstrating how AI is helping SOC teams move faster through alert triage, investigation, and response.
You can join our booth sessions:
- Empowering the SOC with assistive and autonomous AI with Yuval Derman | March 23rd at 5:15 PM
- Security Copilot agents: Insight. Action. Impact. with Lizzie Heinze and Donna Lee | March 24th at 3:00 PM
You can also register for Security Copilot in action: An agentic approach to modern security on March 24th at 8:30 AM here.