Microsoft Sentinel Blog
8 MIN READ

The Agentic SOC Era: How Sentinel MCP Enables Autonomous Security Reasoning

Iftekhar Hussain
Feb 03, 2026

Security teams already have access to enormous volumes of telemetry across identity, endpoint, network, and cloud. The real challenge isn’t collecting more data. It’s turning that data into faster decisions and better protection. This is where AI changes the game.  

Microsoft Sentinel MCP Server, now generally available, augments analysts with intelligence that can reason across signals, automate investigations, and surface what truly matters. AI helps Security Operations Centers (SOCs) move faster, reduce fatigue, and measurably improve the security posture of the organization.

The Problem: Data is Growing Faster Than Analysts Can Keep Up

While modern SOCs are rich in telemetry, the day-to-day reality for analysts is far less empowering.

Security analysts and detection engineers spend countless hours trying to:

  • Remember schema names and values across a growing number of tables.
  • Translate natural investigative questions into exact column names and joins.
  • Write complex queries to answer what should be simple questions.
  • Interpret fragmented results and relate them back to the security problem at hand.

The outcome is predictable: critical insights remain buried, not because the data isn’t there, but because accessing it requires deep technical context and manual effort.

When Context Comes from Time, Not Volume

Given the dynamic nature of attackers, determining whether an activity is truly malicious often requires looking far beyond a short time window. Many threats deliberately blend into normal day-to-day operations by moving slowly, exploiting business seasonality, and making subtle incremental changes that only become obvious when viewed over time. Without access to long-term security data, SOC teams are forced to make decisions based on a limited context window.

Having extended visibility into the environment allows analysts to understand what “normal” really looks like across business cycles, seasonal usage patterns, and organizational changes. This long-term perspective unlocks: 

  • More confident anomaly detection by separating genuine threats from expected variation. 
  • Richer behavioral baselines for users, service principals, applications and devices. 
  • Lifecycle tracking of identities, permissions and workloads as they evolve. 
  • Detection of slow-moving attacks that unfold gradually over weeks or months. 

Long-term data turns isolated events into meaningful behavior and enables investigations that are grounded in context rather than guesswork. 
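To make the value of a long baseline concrete, here is a minimal Python sketch (an illustration, not anything the product ships) of how long-term history separates expected variation from genuine anomalies. The daily sign-in counts and the 3-sigma threshold are assumptions chosen for the example:

```python
from statistics import mean, stdev

def is_anomalous(history, recent, z_threshold=3.0):
    """Flag recent daily counts that deviate strongly from the long-term baseline."""
    mu, sigma = mean(history), stdev(history)
    return [day for day in recent if abs(day - mu) > z_threshold * sigma]

# 90 days of daily sign-in counts with weekly seasonality (values 100-106),
# followed by a recent window containing one genuine spike.
baseline = [100 + (i % 7) for i in range(90)]
print(is_anomalous(baseline, [104, 103, 250]))  # -> [250]
```

With only a 7-day window, 104 vs. 250 would both look like "recent activity"; it is the 90 days of history that makes the spike stand out.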

Introducing Sentinel MCP Data Exploration Tools

KQL and Spark notebooks continue to serve as essential tools for security analysts and detection engineers who need precise control, repeatable analytics, and performance-tuned queries over the Sentinel data lake. Microsoft Sentinel MCP Server now gives analysts a fast, conversational way to explore the data and instantly uncover security insights without hunting for schemas or crafting complex queries.

This new interaction layer democratizes access to big security data, while still allowing teams to convert those natural-language explorations into full KQL queries or Jupyter Notebook cells when they’re ready to operationalize the insight. 

The MCP Server for Sentinel enables AI-driven agents such as Security Copilot, GitHub Copilot, Azure Foundry, and ChatGPT Enterprise to perform advanced reasoning over large-scale security telemetry.

Under the hood:  

  • Queries are expressed in natural language and parsed into actionable intents. 
  • The underlying model applies semantic interpretation to map intent to relevant security artefacts. 
  • It orchestrates retrieval by combining enterprise security datasets with embedded domain knowledge for evidence correlation. 
  • Outputs are delivered as structured, explainable artefacts optimized for analyst workflows and downstream automation. 

Let's look at some examples.

Example 1: Find dormant Service Principals recently reactivated 

To see this in practice, consider a security analyst using VS Code not as a development tool, but as a lightweight, guided workspace powered by GitHub Copilot and Sentinel MCP where questions are asked in plain language and insights are returned without requiring query or coding expertise.

If you’d like to follow along, first connect GitHub Copilot Agent in VS Code to the Sentinel MCP server using the setup instructions. (Instructions here)

For these examples, I am using Claude Opus 4.5 as the model in GitHub Copilot.

Prompt:

"Find service principals dormant for 60+ days that generated sign-ins in the last 7 days. Summarize by app name and sign-in volume. Render a ranked ASCII table and Pareto chart, include 80/20 analysis and a security assessment. State explicitly if no dormant principals reactivated."

The analyst doesn’t specify tables, joins or time filters. They just describe the outcome they want.

Behind the scenes, the MCP server:

  • Uses its tools to query service principal sign-in activity from the Sentinel data lake.
  • Identifies principals with zero activity for >60 days and checks whether they generated sign-ins in the current week.
  • Builds an ASCII Pareto chart of service principal activity for the week, then layers analytic commentary on top.
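The dormancy logic the server applies can be approximated in a few lines of Python. This is an illustrative sketch, not the server's actual implementation; the record shape (app name, sign-in date) and the window sizes mirror the prompt above:

```python
from collections import Counter
from datetime import date, timedelta

def dormant_reactivated(signins, today, dormant_days=60, recent_days=7):
    """signins: list of (app_name, signin_date) tuples.
    Returns apps with zero activity during the dormant window that produced
    sign-ins in the last `recent_days`, ranked by recent sign-in volume."""
    recent_start = today - timedelta(days=recent_days)
    dormant_start = recent_start - timedelta(days=dormant_days)
    recent = Counter(app for app, d in signins if d >= recent_start)
    seen_in_dormant = {app for app, d in signins if dormant_start <= d < recent_start}
    return [(app, n) for app, n in recent.most_common() if app not in seen_in_dormant]

# "legacy-sync-app" was silent through the 60-day window, then reappeared;
# "billing-bot" stayed active during the window, so it is not flagged.
signins = [("legacy-sync-app", date(2026, 2, 1))] * 3 + [
    ("billing-bot", date(2026, 1, 10)),
    ("billing-bot", date(2026, 2, 2)),
]
print(dormant_reactivated(signins, date(2026, 2, 3)))  # -> [('legacy-sync-app', 3)]
```

The MCP server does the equivalent group-by and window comparison over the data lake, then layers the Pareto chart and security assessment on top of the ranked result.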

Example 2: Hunting Rare Parent–Child Process Chains with MCP

Some threats only reveal themselves in the long tail of process behavior: the rare, unusual parent–child combinations that almost never occur in normal operations. To explore this, the analyst can now simply ask the MCP tools:

Prompt:

"Find parent–child process combinations in DeviceProcessEvents seen fewer than 5 times in the last 90 days. Return a ranked table and an ASCII process tree for each rare chain, including device, user, integrity level, and command line. Assess risk and state explicitly if no rare combinations are found."

No joins, no windowing functions, no manual group-bys across billions of events. Just plain, simple intent.

The MCP server then:

  • Queries DeviceProcessEvents in the Sentinel data lake for the last 90 days.
  • Aggregates process pairs and filters to combinations seen fewer than 5 times.
  • Enriches results with device, user, command line, integrity level and timestamp.
  • Applies its own reasoning layer to categorize each pair by risk level and produce an analyst-ready report.

The answer comes back as a structured investigation summary.

The MCP server performs extensive work, scanning 90 days of DeviceProcessEvents to detect unusual chains, which it then ranks by risk level before converting them into actionable results.
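The rarity aggregation at the heart of this hunt can be sketched in Python; counting pairs with a `Counter` stands in for the server's large-scale group-by over the data lake, and the process names are made up for illustration:

```python
from collections import Counter

def rare_process_chains(events, max_count=5):
    """events: iterable of (parent, child) process-name pairs.
    Returns pairs seen fewer than `max_count` times, rarest first."""
    counts = Counter(events)
    return sorted(((pair, n) for pair, n in counts.items() if n < max_count),
                  key=lambda item: item[1])

# A common, benign chain repeated many times, plus one rare, suspicious chain.
events = [("explorer.exe", "chrome.exe")] * 500 + [("winword.exe", "powershell.exe")]
print(rare_process_chains(events))  # -> [(('winword.exe', 'powershell.exe'), 1)]
```

Enrichment (device, user, integrity level, command line) and the per-chain risk assessment happen after this filtering step, on a result set that is now small enough to reason over.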

Example 3: Detecting Scope Drift Across Service Principals Using Multi-Source Correlation

Now let’s tie this back to the earlier discussion of long-range data and see how analysts can use the MCP tools to surface scope drift.

Prompt:

“Detect scope drift in service principals or automation accounts that gradually expand access or behavior beyond their established baseline. Correlate across AADServicePrincipalSignInLogs, AuditLogs, DeviceNetworkEvents and SecurityAlert. Build a 90-day behavioral baseline per principal and compare with the last 7 days. Compute a Drift Score and flag any entity exceeding 150% of baseline deviation.”

The MCP server automatically:

  • Pulls sign-in logs, permission change logs, network events and alert metadata from the Sentinel data lake.
  • Builds 90-day baselines for:
    • average daily sign-ins
    • distinct IP ranges
    • resource scopes
  • Compares the last 7 days against those baselines.
  • Computes a weighted Drift Score.
  • Generates an ASCII summary table, heat indicators and a full risk report.

The result is a fully correlated identity-centric investigation.

Traditionally, analysts needed to query datasets manually, piece timelines together, and build a baseline by hand to find the scope drift pattern. This is where the MCP server becomes especially powerful: it pulls these signals together, builds the behavioral baseline automatically, and surfaces drift in a single, coherent investigation.
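The exact Drift Score formula is not documented, but a weighted relative-deviation score of the kind the prompt asks for could look like this sketch; the metric names, values, and weights are illustrative assumptions:

```python
def drift_score(baseline, recent, weights):
    """Weighted relative deviation of last-7-day metrics from a 90-day baseline,
    expressed as a percentage. Metrics and weights are illustrative."""
    score = 0.0
    for metric, weight in weights.items():
        base = baseline[metric]
        if base:  # guard against empty baselines
            score += weight * abs(recent[metric] - base) / base
    return score * 100

baseline = {"daily_signins": 40, "distinct_ips": 3, "resource_scopes": 5}
recent   = {"daily_signins": 90, "distinct_ips": 9, "resource_scopes": 12}
weights  = {"daily_signins": 0.4, "distinct_ips": 0.3, "resource_scopes": 0.3}

score = drift_score(baseline, recent, weights)
print(round(score), score > 150)  # -> 152 True: this entity exceeds the 150% threshold
```

This hypothetical principal nearly tripled its distinct IP ranges and more than doubled its resource scopes, so it crosses the 150% threshold from the prompt and would be flagged in the summary table.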

Example 4: From External Intelligence to Cross-Environment Validation

One of the most powerful shifts enabled by LLMs in a modern SOC is their ability to reason over external knowledge and immediately operationalize it against internal telemetry. Instead of treating threat research as something analysts read and manually translate into hunts, the model can ingest a public report, extract attacker behavior, and turn it into a structured investigation plan.

In this scenario, the LLM starts by reading an external blog post describing an AiTM phishing campaign. From that narrative, it identifies concrete tactics and signals: proxy-based sign-in flows, MFA bypass via session cookies, anomalous sign-ins, mailbox rule creation, and BEC-style outbound activity. It then converts those descriptions into explicit hypotheses that can be tested against security data.

Prompt:

"Review the AiTM phishing campaign details from this article: https://www.microsoft.com/en-us/security/blog/2022/07/12/from-cookie-theft-to-bec-attackers-use-aitm-phishing-sites-as-entry-point-to-further-financial-fraud/ 

Based on the tactics described (proxy sign-in pages, MFA bypass via session cookies, unusual sign-ins, mailbox rule creation, BEC-style outbound emails), check all my environments for any similar activities in the last 90 days.

Look for:

  • unusual sign-in locations or devices
  • token/session-reuse patterns
  • new inbox rules
  • suspicious mailbox access
  • unusual outbound replies on financial threads

Give me:

  • Any matching suspicious events per environment.
  • A short explanation of why they might relate to AiTM behavior.
  • A simple risk summary (high, medium, low) for each environment.
  • Only use telemetry available in each MCP environment and clearly state if something isn’t present."

What makes this especially valuable for multi-tenant organizations is that the same reasoning is applied independently within each environment, using only the telemetry that exists in that tenant. The model doesn’t assume uniform visibility. If a signal such as mailbox rules or session reuse isn’t available in a given environment, it explicitly calls that out rather than inferring results.

The outcome is not just detection, but comparability. Each tenant receives:

  • a list of suspicious events, if any.
  • a short explanation mapping those events back to AiTM tactics.
  • and a clear risk rating based on observed evidence.
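The per-environment gating described above can be sketched as a simple dispatch: run only the checks whose telemetry exists in the tenant, and report the rest as not evaluable rather than inferring a result. The table names and check callables here are hypothetical placeholders:

```python
def check_environment(tactic_checks, available_tables):
    """tactic_checks: tactic -> (required_table, check_fn returning a match count).
    Runs only checks whose telemetry exists; reports the rest explicitly."""
    findings, not_evaluated = {}, []
    for tactic, (table, check) in tactic_checks.items():
        if table in available_tables:
            findings[tactic] = check()
        else:
            not_evaluated.append(tactic)  # missing telemetry: state it, don't infer
    return findings, not_evaluated

# Hypothetical tenant that has mailbox audit telemetry but lacks
# non-interactive sign-in logs needed for session-reuse checks.
checks = {
    "new inbox rules": ("OfficeActivity", lambda: 2),
    "session cookie reuse": ("AADNonInteractiveUserSignInLogs", lambda: 0),
}
found, skipped = check_environment(checks, {"OfficeActivity"})
print(found)    # -> {'new inbox rules': 2}
print(skipped)  # -> ['session cookie reuse']
```

Applied per tenant, this is what makes the results comparable: each environment's risk rating rests only on the evidence that tenant can actually produce.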

This is where AI tools combined with the Sentinel MCP tools fundamentally change security operations. They bridge the gap between human-readable threat intelligence and machine-scale validation, allowing organizations to consistently test real-world attack techniques across many environments faster, more accurately, and with far less manual effort.

In an agentic SOC, this pattern becomes repeatable: new threat research comes in, the model reasons over it, and every environment is checked for exposure, turning external knowledge into immediate defensive insight.

Tips for success: 

Writing effective prompts for the MCP tools is all about being explicit, grounded, and structured; that is how you get the most accurate, non-hallucinated results. Think of your prompt as a task description for an analyst: precise, scoped, and strictly evidence-driven. Here are a few best practices.

  • Specify the exact tables and time ranges.
    Example: “Query DeviceProcessEvents for the last 90 days and compare the last 7 days against baseline.”
  • Define thresholds and what “rare,” “baseline,” or “anomalous” means.
    Example: “Flag parent–child processes seen fewer than 5 times in the 90-day window.”
  • Request clear output formats.
    Example: “Return results in a table with columns for parent, child, count, device, and timestamp.”
    Example: “Generate an ASCII tree showing the rare process chains.”
  • Tell MCP to rely strictly on tool output and avoid assumptions.
    Example: “Base all findings only on retrieved logs; do not create fields or values not present in data.”
  • Use iterative prompts to refine results.
    Example: “Now focus only on SYSTEM-level processes,” or “Filter to only high-severity deviations.”

The SOC is evolving from a reactive, alert-driven function into an agentic system, one where autonomous workflows, intelligent reasoning, and continuous context-awareness work alongside analysts to defend the enterprise at scale. Microsoft Sentinel MCP Server sits at the heart of this shift.

By layering natural-language reasoning on top of Sentinel, it gives defenders a way to interact with their entire security estate conversationally, while enabling agents and copilots to operate on the same rich foundation.

Resources

 

Updated Feb 03, 2026
Version 1.0