Introducing Agentic Secret Finder: Finding Real Credentials Where Traditional Tools Fail
Agentic Secret Finder (ASF) is an AI-powered capability in Microsoft Security Copilot that detects leaked credentials in unstructured content, such as emails, chat logs, documents, and screenshots, where traditional pattern-matching tools struggle. ASF is "agentic" because it relies on a multi-step, multi-agent reasoning workflow rather than a single-pass detector. Detection, verification, and contextual analysis are handled by distinct reasoning stages, allowing ASF to find real credentials without flooding users with false positives. Unlike regex-based scanners, ASF uses reasoning to identify not just credentials but the systems they unlock, helping security teams understand exposure and respond faster. In benchmark testing on synthetic datasets, ASF achieved 98.33% true credential detection with zero false alarms on realistic emails, chats, notes, and documents, while traditional regex scanners detected only about 40% of the same credentials. ASF is now generally available in Security Copilot, supporting 20+ credential types with high precision and actionable context.

The Problem: Credentials Hide Where Traditional Tools Can't See

When security incidents happen, leaked credentials don't always appear in clean, predictable formats. They show up buried in email threads, pasted into Teams messages, embedded in Word documents, or captured in screenshots of logs and terminals. These are exactly the places where security teams spend the most time and where traditional credential scanning tools fail. Most existing tools rely on regular expressions or simple pattern matching. This works reasonably well for structured environments like source code repositories, where credentials follow predictable formats. But in real-world incidents, credentials look different. A storage key might be split across multiple messages in an email thread. A credential could be reformatted, partially redacted, or embedded alongside explanatory text.
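To see why pattern matching breaks down, consider a simplified sketch. The regex below is a generic stand-in for a storage-key rule (not any specific scanner's actual pattern), and the key itself is a dummy value:

```python
import re

# Simplified stand-in for a pattern-matching scanner: an Azure storage
# account key is 64 random bytes, base64-encoded (88 chars ending "==").
KEY_PATTERN = re.compile(r"[A-Za-z0-9+/]{86}==")

clean_config = "AccountKey=" + "A" * 86 + "=="

# The same dummy key pasted across two chat messages, as happens in real threads.
chat_thread = [
    "here's the first half of the storage key: " + "A" * 40,
    "and the rest: " + "A" * 46 + "== -- rotate it after the migration",
]

def regex_scan(text):
    return KEY_PATTERN.findall(text)

# The scanner finds the key in the clean config...
assert regex_scan(clean_config)

# ...but scanning each message individually finds nothing, even though
# a human reader would trivially reassemble the two halves.
assert all(not regex_scan(msg) for msg in chat_thread)
```

A reasoning-based detector reads the surrounding language ("first half of the storage key") and can reassemble and assess the value; a pure pattern matcher never sees a complete match.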
In these situations, pattern matching produces two painful outcomes: it misses real credentials because the format doesn't match a known rule, or it floods analysts with false positives that waste time. Security teams are left manually reviewing content, guessing which findings are real, and piecing together what systems might actually be at risk. In practice, this failure mode has a real human cost: security analysts end up reviewing thousands of alerts, manually inspecting email threads and chat logs, and trying to determine whether a suspicious string actually unlocks a storage account, API, or production service. Teams can spend days reconstructing context across messages and documents just to understand what a credential grants access to, slowing containment and increasing risk during active incidents. This is the gap Agentic Secret Finder was built to close.

The Solution: ASF Brings Reasoning to Credential Detection

Agentic Secret Finder approaches credential detection as a reasoning problem, not a string-matching exercise. Instead of asking "does this text match a pattern?" ASF asks human-like questions: Is this text describing a credential or access mechanism? Does the value look real and usable? What system or resource could this access? This shift is subtle but powerful. ASF doesn't just detect credentials; it connects them to doors: the specific targets those credentials unlock, such as API endpoints, storage accounts, applications, or services. This is critical for triage. Instead of stopping at "this looks like a credential," ASF tells analysts what that credential actually opens. Without context, a credential triggers manual follow-up. When it's linked to a specific target, analysts can immediately assess impact and act. By understanding messy, real-world content the way a human investigator would, ASF delivers findings that security teams can trust and act on immediately.
It's designed specifically for the unstructured, noisy environments where incidents actually unfold.

Why ASF Outperforms Traditional Pattern Matching

Traditional credential scanners are built for clean data. ASF is built for reality. Traditional tools struggle when:

- Credentials appear in natural language descriptions rather than code
- Context determines whether a string is sensitive or benign
- Credentials are incomplete, malformed, or partially redacted

ASF excels because it:

- Reasons through context, understanding surrounding text to identify what's truly sensitive
- Detects credentials and their associated resources together, providing the "what" and the "where" in a single pass
- Handles noisy, unstructured inputs like emails, chat logs, and documents
- Assigns confidence scores to help teams prioritize findings and reduce alert fatigue

What ASF Can Do Today

ASF is now generally available in Microsoft Security Copilot, with capabilities shaped directly by real security workflows across incident response, red teaming, and SOC operations. ASF detects over 20 major credential categories, spanning cloud provider credentials like Azure Storage Keys and AWS Access Keys, authentication credentials including Microsoft Entra passwords and OAuth tokens, database connection strings, SSH private keys, API keys, and generic credentials that don't fit predefined patterns. This broad coverage means analysts can scan investigation artifacts without worrying whether the credential type is supported. What makes ASF particularly effective is where it works: email threads where credentials are discussed across multiple messages; Teams chats where credentials are pasted quickly during troubleshooting; Word documents and internal wikis where credentials are documented for operational handoffs; incident reports and post-mortem notes written under pressure. These are the environments where traditional pattern-matching tools fail, and where ASF delivers the most value.
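As a quick reference before the numbers: the precision and recall metrics used in the benchmark results follow their standard definitions, sketched here with illustrative counts chosen to match the reported 100% precision / 98.33% recall:

```python
# Precision and recall as used in the ASF benchmarks (standard definitions).
def precision(tp, fp):
    # Of everything flagged, how much was a real credential?
    return tp / (tp + fp)

def recall(tp, fn):
    # Of all real credentials present, how many were found?
    return tp / (tp + fn)

# Illustrative counts only: 60 planted credentials, 59 found, 1 missed,
# nothing benign flagged.
tp, fp, fn = 59, 0, 1

assert precision(tp, fp) == 1.0
assert round(recall(tp, fn) * 100, 2) == 98.33
```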
In benchmark evaluations, ASF achieved 100% recall with 0% false positives on synthetic datasets containing embedded Azure Storage credentials, compared to 40% recall from traditional regex-based tools such as CredScan. In more complex scenarios involving multiple credential types and noisy email content, ASF maintained 98.33% recall with 0% false positives. These results were observed on synthetically generated evaluation datasets spanning emails, chats, notes, and documents, designed to reflect how engineers communicate and how credentials may be inadvertently shared in real-world workflows.

Scenario | Precision | Recall
Single credential type | 100% | 100%
Complex, multiple credential types | 100% | 98.33%

ASF is currently integrated into Security Copilot, actively supporting incident response workflows, and working toward deeper integrations with developer platforms such as GitHub to bring contextual credential detection to source code analysis at scale.

Using ASF in Security Copilot

ASF is available as a skill in Microsoft Security Copilot, making credential detection a seamless part of analyst workflows. How to use ASF:

1. Enable the ASF skill in Security Copilot via "Manage Sources" → "Manage Plugins" (Figure 1)
2. Select "FindSecretInText" from Promptbook (Figure 2)
3. Submit unstructured content directly in the Copilot prompt: paste the text blob that might contain credentials (Figure 3)
4. ASF analyzes the content using its multi-agent workflow, detecting credentials and associated doors (Figure 4)
5. Review actionable findings with contextual details

What's Next for ASF

ASF is a living capability.
Over the next six months, we are working toward expanding coverage and deepening integrations:

- Exploring integrations with GitHub to reduce false positives in credential scanning for code repositories
- Optimizing for large-scale analysis to handle enterprise-wide scans efficiently with reduced latency
- Exploring graph-based risk modeling to map relationships between credentials, services, and attack paths

Our long-term vision goes beyond detection: we want to help security teams understand how credentials are used, what risks exist if they're exposed, and what the impact of rotation or revocation would be. By moving from "what's leaked" to "what does it mean," ASF will enable smarter prioritization, faster response, and more confident decision-making.

Microsoft partners with DataBahn to accelerate enterprise deployments for Microsoft Sentinel
Enterprise security teams are collecting more telemetry than ever across cloud platforms, endpoints, SaaS applications, and on-premises infrastructure. Security teams want broader data coverage and longer retention without losing control of cost and data quality. This post explains the new DataBahn integration with Microsoft Sentinel, why it matters for SIEM operations, and how to think about using a security data pipeline alongside Sentinel for onboarding, normalization, routing, and governance.

DataBahn joins Microsoft Sentinel partner ecosystem

This integration reflects Microsoft Sentinel's open partner ecosystem, giving customers choice in the partners they use alongside Microsoft Sentinel to manage their security data pipelines. DataBahn joins a broader set of complementary partners, enabling customers to tailor solutions for their unique security data needs. DataBahn is available through Microsoft Marketplace, and customers are eligible to apply existing Azure Consumption Commitments toward the purchase of DataBahn.

Why this matters for security operations teams

Security teams are under relentless pressure to ingest more data, move faster through SIEM migrations, and preserve data fidelity for detections and investigations, all while managing costs effectively. The challenge isn't just ingesting data, but ensuring the right telemetry arrives in a consistent, governed format that analysts and detections can trust. This is where a security data pipeline, alongside Microsoft Sentinel's native connectors and DCRs, can add value. It helps streamline onboarding of third-party and custom sources, improve normalization consistency, and provide operational visibility across diverse environments as deployments scale.

What the DataBahn integration is positioned to do with Microsoft Sentinel

Security teams want broader coverage and need to ensure third-party data is consistently shaped, routed, and governed at scale.
This is where a security data pipeline like DataBahn complements Microsoft Sentinel. Sitting upstream of ingestion, the pipeline layer standardizes onboarding and shaping across sources while providing operational visibility into data flow and pipeline health. Together, the collaboration focuses on reducing onboarding friction, improving normalization consistency, enabling intentional routing, and strengthening governance signals so teams can quickly detect source changes, parser breaks, or data gaps, all while staying aligned with Sentinel analytics and detection workflows. This model gives Sentinel customers more choice to move faster, onboard data at scale, and retain control over data routing.

Key capabilities

Bidirectional data integration

The integration enables seamless delivery of telemetry into Sentinel while aligning with Sentinel detection logic and schema expectations. This helps ensure telemetry pipelines remain consistent with:

- Sentinel detection formats
- Custom analytics rules
- Sentinel data models and schemas

Automated table and DCR management

As detections evolve, pipeline configurations can adapt to maintain detection fidelity and data consistency.

Advanced management API

DataBahn provides an advanced management API that allows organizations to programmatically configure and manage pipeline integrations with Sentinel. This enables teams to:

- Automate pipeline configuration
- Manage operational workflows
- Integrate pipeline management into broader security or DevOps automation processes

Automatic identification of configuration conflicts

In complex environments with multiple telemetry sources and routing rules, configuration conflicts can arise across filtering logic, enrichment pipelines, and detection dependencies.
The integration helps automatically:

- Detect conflicts in filtering rules and pipeline logic
- Identify clashes with detection dependencies
- Highlight missing configurations or coverage gaps

[Figure: Automated detection of configuration conflicts and pipeline rule dependencies]

This visibility allows SOC teams to quickly identify issues that could impact detection reliability.

Centralized pipeline management

The integration enables centralized management of data collection and transformation workflows associated with Sentinel telemetry pipelines. This provides unified visibility and control across telemetry sources while maintaining compatibility with Sentinel analytics and detections. Centralized management simplifies operations across large environments where multiple telemetry pipelines must be maintained.

[Figure: Centralized pipeline management for telemetry sources across the environment]

Flexible data transformation and customization

Security telemetry often arrives in inconsistent formats across vendors and platforms. The platform supports flexible transformation capabilities that allow organizations to:

- Normalize logs into standard or custom Sentinel table formats
- Add or derive fields required by Sentinel detections
- Apply filtering or enrichment rules before ingestion

Configuration can be performed through a single-screen workflow, enabling teams to modify schemas and define filtering logic without disrupting downstream analytics.

[Figure: Flexible data transformation to align telemetry with Microsoft Sentinel ASIM schemas]

The platform also provides schema drift detection and source health monitoring, helping teams maintain reliable telemetry pipelines as environments evolve.

Closing

Effective security operations depend on how quickly a SOC can onboard new data, scale effectively, and maintain high-quality investigations.
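To make the pipeline stages concrete, here is a generic, illustrative sketch of the normalize, derive, and filter flow described under "Flexible data transformation and customization". The field names and logic are placeholders invented for the example, not an actual ASIM mapping or DataBahn API:

```python
from datetime import datetime, timezone

def normalize(raw: dict) -> dict:
    # Map a raw vendor event toward an ASIM-style shape (illustrative fields).
    return {
        "TimeGenerated": datetime.fromtimestamp(raw["ts"], tz=timezone.utc).isoformat(),
        "SrcIpAddr": raw.get("src_ip"),
        "EventResult": "Success" if raw.get("status") == "ok" else "Failure",
        "EventVendor": raw.get("vendor", "unknown"),
    }

def enrich(event: dict) -> dict:
    # Derive a field a downstream detection might require.
    event["IsExternal"] = not event["SrcIpAddr"].startswith("10.")
    return event

def keep(event: dict) -> bool:
    # Drop low-value internal success noise before ingestion.
    return event["IsExternal"] or event["EventResult"] == "Failure"

raw_events = [
    {"ts": 1700000000, "src_ip": "10.0.0.5", "status": "ok", "vendor": "acme"},
    {"ts": 1700000060, "src_ip": "203.0.113.7", "status": "denied", "vendor": "acme"},
]

pipeline_output = [e for e in map(enrich, map(normalize, raw_events)) if keep(e)]
assert len(pipeline_output) == 1 and pipeline_output[0]["EventResult"] == "Failure"
```

The internal success event is filtered out before ingestion; the external failure is normalized, enriched, and kept, which is the shape of work the pipeline layer performs upstream of Sentinel.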
Sentinel provides a cloud-native, AI-ready foundation to ingest security data from first- and third-party data sources, while enabling economical, large-scale retention and deep analytics using open data formats and multiple analytics engines. DataBahn's partnership with Sentinel is positioned as a pipeline layer that can help teams onboard third-party sources, shape and normalize data, and apply routing and governance patterns before data lands in Sentinel.

Learn more

- DataBahn for Microsoft Sentinel
- DataBahn Press Release - Databahn Deepens Partnership with Microsoft Sentinel
- Microsoft Sentinel data lake overview - Microsoft Security | Microsoft Learn
- Microsoft Sentinel—AI-Ready Platform | Microsoft Security
- Connect Microsoft Sentinel to the Microsoft Defender portal - Unified security operations | Microsoft Learn
- Microsoft Sentinel data lake is now generally available | Microsoft Community Hub

Unlocking value with Microsoft Sentinel data lake
As security telemetry explodes and AI-driven defense becomes the norm, it is critical to centralize and retain massive volumes of data for deep analysis and long-term insights. Security teams are fundamentally rethinking how they manage, analyze, and act on security data. The Microsoft Sentinel data lake is a game changer for modern security operations, providing the foundation for agentic defense, deeper insights, and graph-based enrichment. Security teams can centralize signals, simplify data management, and run advanced analytics without compromising cost or performance. Across industries, organizations are using the Sentinel data lake to unify distributed data, search across years of telemetry, correlate sophisticated threats using graph-powered analytics, and operationalize agentic workflows at scale, turning raw security data into actionable intelligence. In this blog we will highlight some of the ways the Sentinel data lake is transforming modern security operations.

Unified, cost-effective security data foundation

The challenge

Many organizations tell us they have been forced to make difficult tradeoffs: high ingestion costs meant selectively choosing which logs to keep, often discarding data that might have been critical during an investigation. This selective logging creates blind spots, fragmented visibility, and unnecessary operational complexity across security operations. As a result, CISOs increasingly view selective logging as a material security risk to their organizations.

How Sentinel data lake helps

The Sentinel data lake removes these constraints by providing a cost-effective, security-optimized foundation for centralizing large volumes of security data. With the data lake, security teams can finally retain the breadth of telemetry they need without the financial penalties traditionally associated with long-term security data retention.
Organizations benefit from:

- A unified security data foundation designed to simplify investigations
- Long-term, cost-effective retention for up to 12 years
- Flexible querying across high-volume data sets
- 6x data compression in storage, enabling significantly lower retention costs at scale

Why it matters

By unifying data in a purpose-built security data lake, SOC teams gain reliable, comprehensive visibility without the budget limitations that once forced them to choose between cost and completeness. This stronger foundation not only improves day-to-day investigations; it unlocks the advanced analytics and AI-powered capabilities that future-proof SOCs for AI-driven defense. With full visibility restored, organizations are better equipped to identify emerging threats, respond with confidence, and modernize their security operations on their own terms.

Historical security analysis

The challenge

SOC teams often struggle with short SIEM retention windows that limit how far back investigators can look. Critical logs age out before teams can fully piece together an attack, making root-cause analysis slow and incomplete. This challenge grows when incidents span long periods, when new threat indicators emerge, or when organizations need to understand how a compromise evolved over time. Without access to historical telemetry, analysts face significant blind spots that weaken both investigations and hunting efforts.

How Sentinel data lake helps

The Sentinel data lake solves this by enabling organizations to retain and analyze years of security data at a fraction of the cost of traditional SIEM retention. Teams can use KQL and notebooks to run deep, long-range investigations, perform advanced anomaly detection, and correlate older events that would have been impossible to recover in the analytics tier. Historical data enables retro analysis when new threat intel emerges.
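Retro analysis of this kind is essentially a join of newly published indicators against retained history. A minimal in-memory sketch (in practice this would be a KQL query over the data lake; the events and indicators below are illustrative):

```python
# Historical sign-in events retained in long-term storage (dummy data).
historical_events = [
    {"day": "2024-03-02", "src_ip": "198.51.100.23", "user": "alice"},
    {"day": "2025-01-17", "src_ip": "203.0.113.9",  "user": "bob"},
    {"day": "2025-06-30", "src_ip": "192.0.2.44",   "user": "carol"},
]

# Freshly published indicators of compromise.
new_iocs = {"203.0.113.9", "198.51.100.99"}

# Retro-hunt: was any new indicator already active in the environment?
hits = [e for e in historical_events if e["src_ip"] in new_iocs]

# One indicator touched the environment months ago -- exactly the kind
# of finding a short retention window would have lost.
assert [h["user"] for h in hits] == ["bob"]
```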
SOC teams can instantly look back to validate whether newly discovered indicators, techniques, or threat actors were already present in their environment.

Organizations benefit from:

- Years of cost-effective retention that extend far beyond traditional SIEM windows
- Deep forensic investigations using KQL and notebooks over historical data
- Improved anomaly detection with long-range patterns and baselines
- Faster scoping of incidents with access to full historical context

Why it matters

By unlocking access to years of searchable telemetry, SOC teams are no longer limited by short retention windows or forced to make compromises that weaken security. They can retrace the full scope of an incident, hunt for slow-moving threats, and quickly respond to new IOCs, powered by the historical context modern attacks demand. This long-range visibility strengthens both detection and response, giving organizations the confidence and continuity they need to stay ahead of evolving threats.

Graph-powered attack-path visibility and entity correlation

The challenge

Traditional investigations often rely on reviewing logs in isolation, making it difficult to connect identity activity, endpoint behavior, cloud access, and threat intelligence in a meaningful way. As a result, SOC teams find it difficult to trace attack paths, understand lateral movement, and build complete investigative context. Without a unified view of how entities relate to each other, investigations become slow, fragmented, and prone to missed signals.

How Sentinel data lake helps

The Sentinel data lake enables powerful graph-based correlation across identity, asset, activity, and threat intelligence data. Using graph models, analysts can visually explore how entities connect, identify hidden attack paths, pinpoint exposed routes to sensitive assets, and understand the full blast radius of compromised accounts or devices.
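Conceptually, attack-path discovery is reachability over an entity graph. An illustrative sketch with made-up entities and relationships (the real capability operates over Sentinel graph data, not hand-built dictionaries):

```python
from collections import deque

# Entity graph: nodes are identities/devices/assets, edges are observed
# relationships such as sign-ins, cached sessions, or role assignments.
edges = {
    "user:intern": ["device:laptop-07"],
    "device:laptop-07": ["user:admin-jane"],   # cached admin session
    "user:admin-jane": ["asset:sql-prod"],     # database owner role
    "user:bob": ["asset:wiki"],
}

def attack_path(start, target):
    # Breadth-first search: returns the shortest chain of hops, if any.
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in edges.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# A compromised intern account reaches production data via a cached session.
path = attack_path("user:intern", "asset:sql-prod")
assert path == ["user:intern", "device:laptop-07", "user:admin-jane", "asset:sql-prod"]
assert attack_path("user:bob", "asset:sql-prod") is None
```

Log-by-log review would show each hop as an unremarkable event; only the relationship view exposes the intern-to-production chain as one path.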
This graph-driven context turns complex telemetry into intuitive visuals that dramatically accelerate both pre-breach analysis and incident response.

Organizations benefit from:

- Graph-powered correlation across identity, asset, activity, and threat intelligence data
- Visualization of attack paths and lateral movement that logs alone cannot expose
- Context-rich investigations supported by relationship-driven insights
- Greater cross-domain visibility that strengthens both detection and response

Why it matters

With graph-powered context, SOC teams move beyond event-by-event analysis and gain a deep understanding of how their environment behaves as a system. This visibility speeds investigations, strengthens posture before attackers strike, and provides analysts with a clear, intuitive way to uncover relationships that traditional log searches simply can't reveal.

Agentic workflows powered by MCP server

The challenge

SOC teams are under constant pressure from rising alert volumes, repetitive manual investigative steps, and skill gaps that make consistent triage challenging. Even experienced analysts struggle to reason across large, distributed datasets, and junior analysts often lack the experience needed to understand complex threat scenarios. These challenges slow down response and increase the risk of missed signals.

How the Sentinel data lake helps

The Sentinel data lake, combined with the Model Context Protocol (MCP), enables AI agents to reason over unified, contextual security data using natural-language prompts. Analysts can ask questions directly, such as "Does this user have other suspicious activity?" or "What assets are at risk?", and agents automatically interpret the request, query the data lake, and return actionable insights. These AI-powered workflows reduce repetitive effort, strengthen investigative consistency, and help teams operate at a higher level of speed and precision.
Organizations benefit from:

- AI-assisted investigations that reduce manual effort and accelerate triage
- Agentic workflows powered by MCP to automate multi-step reasoning over unified data
- Natural-language interactions that make complex queries accessible to all analysts
- Consistent, high-quality analysis regardless of analyst experience level

Why it matters

By introducing agentic, AI-driven workflows, SOC teams can automate time-consuming tasks, reduce alert fatigue, and empower every analyst, regardless of seniority, to quickly arrive at high-quality insights. This shift not only accelerates investigations but also frees teams to focus on high-value, proactive security work. As organizations continue modernizing their SOC, agentic workflows represent a major step forward in bridging the gap between human expertise and scalable, AI-powered analysis.

The future of security operations starts here

The Sentinel data lake is becoming the backbone of modern security operations, unifying security data, expanding investigative reach, and enabling graph-driven, AI-powered analysis at scale. By centralizing telemetry on a cost-effective, AI-ready foundation, and running advanced analytics on that data, security teams can move beyond fragmented insights to correlate threats with clarity and act faster with confidence. These four use cases are just the beginning. Whether you're strengthening investigations, advancing threat hunting, operationalizing AI, or preparing your SOC for what's next, the Sentinel data lake provides the scale, intelligence, and flexibility to reduce complexity and stay ahead of evolving threats. Now is the time to accelerate toward a more resilient, adaptive, and future-ready security posture. Get started with Microsoft Sentinel data lake today.

What's new in Microsoft Sentinel: February 2026
February brings a set of new innovations to Sentinel that help you work with security content across your SOC. This month's updates focus on how security teams ingest, manage, and operationalize content, with new connectors, multi-tenant content distribution capabilities, and an enhanced UEBA Essentials solution to surface high-risk behavior faster across cloud and identity environments. We're also introducing new partner-built agentic experiences available through Microsoft Security Store, enabling customers to extend Sentinel with specialized expertise directly inside their existing workflows. Together, these innovations help SOC teams move faster, scale smarter, and unlock deeper security insight without added complexity.

Expand your visibility and capabilities with Sentinel content

Seamlessly onboard security data with growing out-of-the-box connectors (general availability)

Sentinel continues to expand its connector ecosystem, making it easier for security teams to bring together data from across cloud, SaaS, and on-premises environments so nothing critical slips through the cracks. With broader coverage and faster onboarding, SOCs can unlock unified visibility, stronger analytics, and deeper context across their entire security stack. Customers can now use out-of-the-box connectors and solutions for:

- Mimecast Audit Logs
- CrowdStrike Falcon Endpoint Protection
- Vectra XDR
- Palo Alto Networks Cloud NGFW
- SocPrime
- Proofpoint on Demand (POD) Email Security
- Pathlock
- MongoDB
- Contrast ADR

For the full list of connectors, see our documentation. Share your input on what to prioritize next with our App Assure team.

Microsoft 365 Copilot data connector (public preview)

The Microsoft 365 Copilot connector brings Microsoft 365 Copilot audit logs and activity data into Sentinel, giving security teams visibility into how Microsoft 365 Copilot is being used across their organization.
Once ingested, this data can power analytics rules, custom detections, workbooks, automation, and investigations, helping SOC teams quickly spot anomalies, misuse, and policy violations. Customers can also send this data to the Sentinel data lake for advanced scenarios, such as custom graphs and MCP integrations, while benefiting from lower-cost ingestion and flexible retention. Learn more here.

Transition your Sentinel connectors to the codeless connector framework (CCF)

Microsoft is modernizing data connectors by shifting from Azure Function based connectors to the codeless connector framework (CCF). CCF enables partners, customers, and developers to build custom connectors that ingest data into Sentinel with a fully SaaS-managed experience, built-in health monitoring, centralized credential management, and enhanced performance. We recommend that customers review their deployed connectors and move to the latest CCF versions to ensure uninterrupted data collection and continued access to the latest Sentinel capabilities. As part of Azure's modernization of custom data collection, the legacy custom data collection API will be retired in September 2026.

Centrally manage and distribute Sentinel content across multiple tenants (public preview)

For partners and SOCs managing multiple Sentinel tenants, you can now centrally manage and distribute Sentinel content across tenants from the Microsoft Defender portal. With multi-tenant content distribution, you can replicate analytics rules, automation rules, workbooks, and alert tuning rules across tenants instead of rebuilding the same detections, automation, and dashboards in one environment at a time. This helps you onboard new tenants faster, reduce configuration drift, and maintain a consistent security baseline while still keeping local execution in each target tenant under centralized control.
Learn more: New content types supported in multi-tenant content distribution

Find high-risk anomalous behavior faster with an enhanced UEBA Essentials solution (public preview)

The UEBA Essentials solution now helps SOC teams uncover high-risk anomalous behavior faster across Azure, AWS, GCP, and Okta. With expanded multi-cloud anomaly detection and new queries powered by the anomalies table, analysts can quickly surface the riskiest activity, establish reliable behavioral baselines, and understand anomalies in context without chasing noisy or disconnected signals. UEBA Essentials aligns activity to MITRE ATT&CK, highlights complex malicious IP patterns, and builds a comprehensive anomaly profile for users in seconds, reducing investigation time while improving signal quality across identity and cloud environments. UEBA Essentials is available directly from the Sentinel content hub, with 30+ prebuilt UEBA queries ready to deploy. Behavior analytics can be enabled automatically from the connectors page as new data sources are added, making it easy to turn deeper insight into immediate action. For more information, see: UEBA Solution Power Boost: Practical Tools for Anomaly Detection

Extend Sentinel with partner-built Security Copilot agents in Microsoft Security Store (general availability)

You can extend Sentinel with partner-built Security Copilot agents that are discoverable and deployable through Microsoft Security Store in the Defender experience. These AI-powered agents are created by trusted partners specifically to work with Sentinel to deliver packaged expertise for investigation, triage, and response without requiring you to build your own agentic workflows from scratch. These partner-built agents work with Sentinel analytics and incidents to help SOC teams triage faster, investigate deeper, and surface insights that would otherwise take hours of manual effort.
For example, these agents can review Sentinel and Defender environments, map attacker activity, or automate forensic analysis and SOC reporting. BlueVoyant's Watchtower agent helps optimize Sentinel and Defender configurations, AdaQuest's Data Leak agent accelerates response by surfacing risky data exposure and identity misuse, and Glueckkanja's Attack Mapping agent automatically maps fragmented entities and attacker behavior into a coherent investigation story. Together, these agents show how the Security Store turns partner innovation into enterprise-ready, Security Copilot-powered capabilities that you can use in your existing SOC workflows. Browse these and more partner-built Security Copilot agents in the Security Store within the Defender portal. At Ignite, we announced the native integration of Security Store within the Defender portal. Read more about the GA announcement here: Microsoft Security Store: Now Generally Available

Explore the Sentinel experience

Enhanced reports in the Threat Intelligence Briefing Agent (general availability)

The Threat Intelligence Briefing Agent now applies a structured knowledge graph to Microsoft Defender Threat Intelligence, enabling it to surface fresher, more relevant threats tailored to a customer's specific industry and region. Building on this foundation, the agent also features embedded, high-fidelity Microsoft Threat Intelligence citations, providing authoritative context directly within each insight. With these advancements, security teams gain clearer, more actionable guidance and mitigation steps through context-rich insights aligned to their environment, helping them focus on what matters most and respond more confidently to emerging threats.
Learn more: Microsoft Security Copilot Threat Intelligence Briefing Agent in Microsoft Defender

Microsoft Purview Data Security Investigations (DSI) integrated with Sentinel graph (general availability)

Sentinel now brings together data-centric and threat-centric insights to help teams understand risk faster and respond with more confidence. By combining AI-powered deep content analysis from Microsoft Purview with activity-centric graph analytics in Sentinel, security teams can identify sensitive or risky data, see how it was accessed, moved, or exposed, and take action from a single experience. This gives SOC and data security teams a full, contextual view of the potential blast radius, connecting what happened to the data with who accessed it and how, so investigations are faster, clearer, and more actionable. Start using the Microsoft Purview Data Security Investigations (DSI) integration with the Sentinel graph to give your analysts richer context and streamline end-to-end data risk investigations.

Deadline to migrate the Sentinel experience from Azure to Defender extended to March 2027

To reduce friction and support customers of all sizes, we are extending the sunset date for managing Sentinel in the Azure portal to March 31, 2027. This additional time ensures customers can transition confidently while taking advantage of new capabilities that are becoming available in the Defender portal. Learn more about this decision, why you should start planning your move today, and find helpful resources here: UPDATE: New timeline for transitioning Sentinel experience to Defender portal

Events and webinars

Stay connected with the latest security innovations and best practices through global conferences and expert-led sessions that bring the community together to learn, connect, and explore how Microsoft is delivering AI-driven, end-to-end security for the modern enterprise.
Join us at RSAC, March 23–26, 2026 at the Moscone Center in San Francisco Register for RSAC and stop by the Microsoft booth to see our latest security innovations in action. Learn how Sentinel SIEM and platform help organizations stay ahead of threats, simplify operations, and protect what matters most. Register today! Microsoft Security Webinars Discover upcoming sessions on Sentinel SIEM & platform, Defender, and more. Sign up today and be part of the conversation that shapes security for everyone. Learn more about upcoming webinars. Additional resources Blogs: UPDATE: New timeline for transitioning Sentinel experience to Defender portal, Accelerate your move to Microsoft Sentinel with AI-powered SIEM migration tool, Automating Microsoft Sentinel: A blog series on enabling Smart Security, The Agentic SOC Era: How Sentinel MCP Enables Autonomous Security Reasoning Documentation: What Is a Security Graph?, SIEM migration tool, Onboarding to Microsoft Sentinel data lake from the Defender portal Stay connected Check back each month for the latest innovations, updates, and events to ensure you’re getting the most out of Sentinel. We’ll see you in the next edition!

The Microsoft Copilot Data Connector for Microsoft Sentinel is Now in Public Preview
We are happy to announce a new data connector that is available to the public: the Microsoft Copilot data connector for Microsoft Sentinel. The new Microsoft Copilot data connector will allow for audit logs and activities generated by different offerings of Copilot to be ingested into Microsoft Sentinel and Microsoft Sentinel data lake. This allows for Copilot activities to be leveraged within Microsoft Sentinel features such as analytic rules/custom detections, Workbooks, automation, and more. This also allows for Copilot data to be sent to Sentinel data lake, which opens the possibilities for integrations with custom graphs, MCP server, and more while offering lower cost ingestion and longer retention as needed. Eligibility for the Connector The connector is available for all customers within Microsoft Sentinel, but will only ingest data for environments that have access to Copilot licenses and SCUs as the activities rely on Copilot being used. These logs are available via the Purview Unified Audit Log (UAL) feed, which is available and enabled for all users by default. A big value of this new connector is that it eliminates the need for users to go to the Purview Portal in order to see these activities, as they are proactively brought into the workspace, enabling SOCs to generate detections and proactively threat hunt on this information. Note: This data connector is a single-tenant connector, meaning that it will ingest the data for the entire tenant that it resides in. This connector is not designed to handle multi-tenant configurations. 
What’s Included in the Connector The following are record types from the Office 365 Management API that will be supported as part of this connector:
261 CopilotInteraction
310 CreateCopilotPlugin
311 UpdateCopilotPlugin
312 DeleteCopilotPlugin
313 EnableCopilotPlugin
314 DisableCopilotPlugin
315 CreateCopilotWorkspace
316 UpdateCopilotWorkspace
317 DeleteCopilotWorkspace
318 EnableCopilotWorkspace
319 DisableCopilotWorkspace
320 CreateCopilotPromptBook
321 UpdateCopilotPromptBook
322 DeleteCopilotPromptBook
323 EnableCopilotPromptBook
324 DisableCopilotPromptBook
325 UpdateCopilotSettings
334 TeamCopilotInteraction
363 Microsoft365CopilotScheduledPrompt
371 OutlookCopilotAutomation
389 CopilotForSecurityTrigger
390 CopilotAgentManagement
These are great options for monitoring users who have permission to make changes to Copilot across the environment. This data can assist with identifying anomalous interactions taking place between users and Copilot, unauthorized access attempts, or malicious prompt usage. How to Deploy the Connector The connector is available via the Microsoft Sentinel Content Hub and can be installed today. To find the connector:
1. Within the Defender portal, expand the Microsoft Sentinel navigation in the left menu.
2. Expand Configuration and select Content Hub.
3. Within the search bar, search for “Copilot”.
4. Click on the solution that appears and click Install.
Once the solution is installed, the connector can be configured by clicking on the connector within the solution and selecting Open Connector Page. To enable the connector, the user will need either Global Administrator or Security Administrator on the tenant. Once the connector is enabled, the data will be sent to the table named CopilotActivity. Note: Data ingestion costs apply when using this data connector. Pricing is based on the Microsoft Sentinel workspace settings or the Microsoft Sentinel data lake tier pricing.
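With events landing in the CopilotActivity table, a simple KQL hunt can help spot the anomalous interaction volume mentioned above. The sketch below is illustrative only: it relies solely on the standard TimeGenerated column (no other CopilotActivity column names are assumed), and the 3x multiplier is an arbitrary threshold to tune for your environment.

```kql
// Compare the last day's hourly Copilot audit event volume against a
// two-week hourly baseline, and surface hours exceeding 3x the average.
let baseline = toscalar(
    CopilotActivity
    | where TimeGenerated between (ago(15d) .. ago(1d))
    | summarize Hourly = count() by bin(TimeGenerated, 1h)
    | summarize avg(Hourly));
CopilotActivity
| where TimeGenerated > ago(1d)
| summarize Hourly = count() by bin(TimeGenerated, 1h)
| where Hourly > 3 * baseline
| order by TimeGenerated asc
```

A query along these lines could back an analytic rule once you confirm the table schema in your tenant and tune the threshold.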
As this data connector is in Public Preview, users can start deploying this connector right now! As always, let us know what you think in the comments so that we may continue to build what is most valuable to you. We hope that this new data connector continues to assist your SOC with highly valuable insights that empower your security. Resources: Office Management API Event Number List: https://learn.microsoft.com/en-us/office/office-365-management-api/office-365-management-activity-api-schema#auditlogrecordtype Purview Unified Audit Log Library: Audit log activities | Microsoft Learn Copilot Inclusion in the Microsoft E5 Subscription: Learn about Security Copilot inclusion in Microsoft 365 E5 subscription | Microsoft Learn Microsoft Sentinel: What is Microsoft Sentinel SIEM? | Microsoft Learn Microsoft Sentinel Platform: Microsoft Sentinel data lake overview - Microsoft Security | Microsoft Learn

How to Become a Microsoft Security Copilot Ninja: The Complete Level 400 Training
Learn how to become a Microsoft Security Copilot (Copilot) Ninja! This blog will walk you through the resources you'll need to master and make best use of Microsoft's Security Copilot product!

What’s new in Microsoft Sentinel: January 2026
Welcome back! As we kick off the new year, we’re bringing key Ignite 2025 announcements into your day‑to‑day Sentinel experience so you can turn insights into measurable SecOps outcomes with the AI-ready Sentinel SIEM and platform. Building on last year’s momentum around AI-ready platforms, agentic defense, data lake innovation, and advanced SIEM capabilities, the January edition delivers security features designed to elevate your operations. This release brings powerful enhancements, including streamlined ingestion of Microsoft Defender data into the Sentinel data lake, deeper partner connector integrations, QRadar migration support, enriched UEBA insights, and a refreshed ASIM schema for consistent normalization. Together, these advancements help security teams simplify operations, strengthen detection, and unlock greater value from their data. What’s new Ingest Defender data directly into Sentinel data lake At Ignite 2025, Microsoft announced that Microsoft Defender for Endpoint (MDE) data could be ingested directly into the Sentinel data lake. Building on this, we now support direct ingestion of Microsoft Defender for Office 365 (MDO) and Microsoft Defender for Cloud Apps (MDA) data as well. You can choose to ingest supported XDR tables exclusively into the data lake tier by selecting the data lake tier option when configuring the retention settings. Table settings are easily managed through the built-in table management experience in the Defender portal, enabling cost-effective, long-term data retention without moving data to the analytics tier. This expansion delivers improved visibility, deeper historical analysis, reduced total cost of ownership, and empowers modern security operations with advanced capabilities. Growing connector ecosystem At Ignite 2025, Microsoft unveiled new connectors and integrations that seamlessly unify signals across multi-cloud and multiplatform environments.
These innovations deliver enhanced visibility and scalable security insights across cloud, endpoint, and identity platforms. To explore the complete list of new solutions from Microsoft Sentinel and our third-party partners, see Ignite 2025: New Microsoft Sentinel Connectors Announcement. Accelerate your migration from QRadar to Microsoft Sentinel Microsoft is excited to announce support for QRadar-to-Microsoft Sentinel migrations through the enhanced, AI-powered SIEM migration experience. This new capability simplifies and streamlines the process by helping organizations efficiently migrate detection rules and enable required data connectors in Sentinel’s cloud-native SIEM. As a result, customers gain improved visibility, accelerated threat detection, and more modern security operations powered by Microsoft’s intelligent cloud. Early adopters are seeing smoother transitions with minimal disruption, more predictable outcomes, and greater value from their SIEM investment. In addition, Microsoft provides free migration support through the Cloud Accelerate Factory program. Eligible customers receive expert guidance to quickly deploy Sentinel and migrate from Splunk and QRadar using the new SIEM migration experience, in collaboration with their preferred migration partner. For details, contact your Microsoft representative or visit: https://aka.ms/FactoryCustomerPortal To learn more, tune in to our webinar on February 2, 2026, at 9:00AM PST: https://aka.ms/SecurityCommunity Announcing the Behaviors Layer within UEBA Now in public preview! Microsoft Sentinel’s Behaviors layer is a new UEBA capability that provides a high-level behavioral lens on top of raw security telemetry. Its goal is to answer the question “What happened? Who did what to whom?” in your environment by aggregating, sequencing, and enriching events into human-readable behaviors.
Each “behavior” is a synthesized security event or pattern that describes an action (or sequence of actions) an entity performed. This includes rich context such as involved entities, MITRE ATT&CK tactics/techniques, and a plain-English description. Learn more here: Turn Complexity into Clarity: Introducing the New UEBA Behaviors Layer in Microsoft Sentinel Unified schema alignment for ASIM Following our Advanced Security Information Model (ASIM) reaching General Availability in September, this latest refresh delivers comprehensive alignment across all ASIM schemas to the latest ASIM standard. This update ensures: Consistent field coverage across all major activity types. A stable baseline for accelerating parser development, normalization improvements, and future ASIM-driven experiences. Key additions across older schemas: Inspection fields – enabling normalization of security findings across all activity types. Risk fields – providing consistent representation of source-reported risk. Full details are available here: Advanced Security Information Model (ASIM) schemas Additional resources Blogs: What’s new in Microsoft Sentinel: December 2025, Automating IOC hunts in Microsoft Sentinel data lake, Efficiently process high volume logs and optimize costs with Microsoft Sentinel data lake Documentation: Microsoft Sentinel data lake overview - Microsoft Security | Microsoft Learn, Use the SIEM migration experience - Microsoft Sentinel | Microsoft Learn Sign up for upcoming webinars: Feb. 2 | 9:00am | Accelerate your SIEM migration to Microsoft Sentinel Stay connected Check back each month for the latest innovations, updates, and events to ensure you’re getting the most out of Microsoft Sentinel. We’ll see you in the next edition!

Microsoft Sentinel Platform: Audit Logs and Where to Find Them
Looking to understand where audit activities for Sentinel Platform are surfaced? Look no further than this writeup! With the launch of the Sentinel Platform, a new suite of features for the Microsoft Sentinel service, users may find themselves wanting to monitor who is using these new features and how. This blog sets out to highlight how auditing and monitoring can be achieved and where this data can be found. *Thank you to my teammates Ian Parramore and David Hoerster for reviewing and contributing to this blog.* What are Audit Logs? Audit logs are documented activities that are eligible for usage within SOC tools, such as a SIEM. These logs are meant to exist as a paper trail to show: Who performed an action What type of action was performed When the action was performed Where the action was performed How the action was performed Audit logs can be generated by many platforms, whether they are Microsoft services or platforms outside of the Microsoft ecosystem. Each source is a great option for a SOC to monitor. Types of Audit Logs Audit logs can vary in how they are classified or where they are placed. Focusing just on Microsoft, the logs can vary based on platform. A few examples are: - Windows – Events generated by the operating system that are available in Event Viewer - Azure – Diagnostic logs generated by services that can be sent to Azure Log Analytics - Defender – Audit logs generated by Defender services that are sent to M365 Audit Logs What is the CloudAppEvents Table? The CloudAppEvents table is a data table that is provided via Advanced Hunting in Defender. This table contains events of applications being used within the environment. This table is also a destination for Microsoft audit logs that are being sent to Purview. Purview’s audit log blade includes logs from platforms like M365, Defender, and now Sentinel Platform.
How to Check if the Purview Unified Audit Logging is Enabled For CloudAppEvents to receive data, Audit Logging within Purview must be enabled and M365 needs to be configured to be connected as a Defender for Cloud Apps component. Enabling Audit Logs Purview Auditing is enabled by default within environments. In the event that it has been disabled, Audit logs can be enabled and checked via PowerShell. To do so, the user must have the Audit Logs role within Exchange Online. The command to run is: Get-AdminAuditLogConfig | Format-List UnifiedAuditLogIngestionEnabled The result will either be true if auditing is already turned on, or false if it is disabled. If the result is false, the setting will need to be enabled. To do so: Import the Exchange Online module with: Import-Module ExchangeOnlineManagement Connect and authenticate to Exchange Online with an interactive window Connect-ExchangeOnline -UserPrincipalName USER PRINCIPAL NAME HERE Run the command to enable auditing Set-AdminAuditLogConfig -UnifiedAuditLogIngestionEnabled $true Note: It may take 60 minutes for the change to take effect. Connecting M365 to MDA To connect M365 to MDA as a connector: Within the Defender portal, go to System > Settings. Within Settings, choose Cloud Apps. Within the settings navigation, go to Connected apps > App Connectors. If Microsoft 365 is not already listed, click Connect an app. Find and select Microsoft 365. Within the settings, select the boxes for Microsoft 365 activities. Once set, click on the Connect Microsoft 365 button. Note: You must have a license that includes Microsoft Defender for Cloud Apps in order to see these app connection settings.
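Once auditing is enabled and the M365 app connector is set, a quick Advanced Hunting check (a sketch using only the standard CloudAppEvents columns) confirms that events are flowing and shows how fresh they are per application:

```kql
// Verify audit events are arriving in CloudAppEvents and check freshness.
CloudAppEvents
| summarize LatestEvent = max(Timestamp), Events = count() by Application
| order by LatestEvent desc
```

If the latest events are more than an hour or two old, revisit the connector settings above before building detections on this table.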
Monitoring Activities As laid out within the public documentation, there are several categories of audit logs for Sentinel Platform, including: Onboarding/offboarding KQL activities Job activities Notebook activities AI tool activities Graph activities All of these events are surfaced within the Purview audit viewer and the CloudAppEvents table. When querying the table, each of the activities will appear under a column called ActionType. Onboarding
Friendly Name | Operation | Description
Changed subscription of Sentinel lake | SentinelLakeSubscriptionChanged | User modified the billing subscription ID or resource group associated with the data lake.
Onboarded data for Sentinel lake | SentinelLakeDefaultDataOnboarded | During onboarding, default data onboard is logged.
Setup Sentinel lake | SentinelLakeSetup | At the time of onboarding, Sentinel data lake is set up and the details are logged.
For querying these activities, one example would be: CloudAppEvents | where ActionType == 'SentinelLakeDefaultDataOnboarded' | project AccountDisplayName, Application, Timestamp, ActionType
KQL Queries There is only one type of activity for KQL available. These logs function similarly to how LAQueryLogs work today, showing details like who ran the query, if it was successful, and what the body of the query was.
Friendly Name | Operation | Description
Completed KQL query | KQLQueryCompleted | User runs interactive queries via KQL on data in their Microsoft Sentinel data lake
For querying these activities, one example would be: CloudAppEvents | where ActionType == 'KQLQueryCompleted' | project Timestamp, AccountDisplayName, Application, RawEventData.Interface, RawEventData.QueryText, RawEventData.TotalRows
Jobs Jobs pertain to KQL jobs and Notebook jobs offered by the Sentinel Platform. These logs will detail job activities as well as actions taken against jobs. This will be useful for monitoring who is creating or modifying jobs, as well as monitoring that jobs are properly running.
Friendly Name | Operation | Description
Completed a job run adhoc | JobRunAdhocCompleted | Adhoc Job execution completed.
Completed a job run scheduled | JobRunScheduledCompleted | Scheduled Job execution completed.
Created job | JobCreated | A job is created.
Deleted a custom table | CustomTableDelete | As part of a job run, the custom table was deleted.
Deleted job | JobDeleted | A job is deleted.
Disabled job | JobDisabled | The job is disabled.
Enabled job | JobEnabled | A disabled job is reenabled.
Ran a job adhoc | JobRunAdhoc | Job is triggered manually and run started.
Ran a job on schedule | JobRunScheduled | Job run is triggered due to schedule.
Read from table | TableRead | As part of the job run, a table is read.
Stopped a job run | JobRunStopped | User manually cancels or stops an ongoing job run.
Updated job | JobUpdated | The job definition and/or configuration and schedule details of the job are updated.
Writing to a custom table | CustomTableWrite | As part of the job run, data was written to a custom table.
For querying these activities, one example would be: CloudAppEvents | where ActionType == 'JobCreated' | project Timestamp, AccountDisplayName, Application, ActionType, RawEventData.JobName, RawEventData.JobType, RawEventData.Interface
AI Tools AI tool logs pertain to events being generated by MCP server usage. This is generated any time that users operate with MCP server and leverage one of the tools available today to run prompts and sessions.
Friendly Name | Operation | Description
Completed AI tool run | SentinelAIToolRunCompleted | Sentinel AI tool run completed
Created AI tool | SentinelAIToolCreated | User creates a Sentinel AI tool
Started AI tool run | SentinelAIToolRunStarted | Sentinel AI tool run started
For querying these activities, the query would be: CloudAppEvents | where ActionType == 'SentinelAIToolRunStarted' | project Timestamp, AccountDisplayName, ActionType, Application, RawEventData.Interface, RawEventData.ToolName
Notebooks Notebook activities pertain to actions performed by users via Notebooks. This can include querying data via a Notebook, writing to a table via Notebooks, or launching new Notebook sessions.
Friendly Name | Operation | Description
Deleted a custom table | CustomTableDelete | User deleted a table as part of their notebook execution.
Read from table | TableRead | User read a table as part of their notebook execution.
Started session | SessionStarted | User started a notebook session.
Stopped session | SessionStopped | User stopped a notebook session.
Wrote to a custom table | CustomTableWrite | User wrote to a table as part of their notebook execution.
For querying these activities, one example would be: CloudAppEvents | where ActionType == 'TableRead'
Graph Usage Graph activities pertain to users modifying or running a graph-based scenario within the environment. This can include creating a new graph scenario, deleting one, or running a scenario.
Created a graph scenario | GraphScenarioCreated | User created a graph instance for a pre-defined graph scenario.
Deleted a graph scenario | GraphScenarioDeleted | User deleted or disabled a graph instance for a pre-defined graph scenario.
Ran a graph query | GraphQueryRun | User ran a graph query.
For querying these activities, one example would be: CloudAppEvents | where ActionType == 'GraphQueryRun' | project AccountDisplayName, IsExternalUser, IsImpersonated, RawEventData['GraphName'], RawEventData['CreationTime'] Monitoring without Access to the CloudAppEvents Table If accessing the CloudAppEvents table is not possible in the environment, both Defender and Purview allow for manually searching for activities within the environment. For Purview (https://purview.microsoft.com), the audit page can be found by going to the Audit blade within the Purview portal. For Defender, the audit blade can be found under Permissions > Audit. To run a search that will match Sentinel Platform related activities, the easiest method is using the Activities – friendly names field to filter for Sentinel Platform. Custom Ingestion of Audit Logs If looking to ingest the data into a table, a custom connector can be used to fetch the information. The Purview Audit Logs use the Office Management API when calling events programmatically. This leverages registered applications with proper permissions to poll the API and forward the data into a data collection rule. As the Office Management API does not support filtering entirely within the content URI, making a custom connector for this source is a bit trickier. For a custom connector to work, it will need to: Call the API Review each content URL Filter for events that are related to Sentinel Platform This leaves two options for accomplishing a custom connector route: A code-based connector that is hosted within an Azure Function A codeless connector paired with a filtering data collection rule This blog will just focus on the codeless connector as an example. A codeless connector can be made from scratch or by referencing an existing connector within the Microsoft Sentinel GitHub repository.
For the connector, an example API call would appear as such: https://manage.office.com/api/v1.0/{tenant-id}/activity/feed/subscriptions/content?contentType=Audit.General&startTime={startTime}&endTime={endTime} When using a registered application, it will need the ActivityFeed.Read permission on the Office Management API for it to be able to call the API and view the information returned. The catch with the Management API is that it uses a content URL in the API response, thus requiring one more step. Luckily, Codeless Connectors support nested actions and JSON. An example of a connector that does this today is the Salesforce connector. When looking to filter the events to be specifically the Sentinel Platform audit logs, the queries listed above can be used in the body of a data collection rule. For example: "streams": [ "Custom-PurviewAudit" ], "destinations": [ "logAnalyticsWorkspace" ], "transformKql": "source | where ActionType has_any ('GraphQueryRun', 'TableRead', …)", "outputStream": "Custom-SentinelPlatformAuditLogs" Note that putting all of the audit logs into a single stream may lead to a schema mismatch depending on how they are being parsed. If concerned about this, consider placing each event type into different tables, such as SentinelLakeQueries, KQLJobActions, etc. This can all be defined within the data collection rule, though the custom tables for each action will need to exist before being referenced within the data collection rule. Closing Now that audit logs are flowing, actions taken by users within the environment can be used for detections, hunting, Workbooks, and automation. Since the logs are being ingested via a data collection rule, they can also be sent to Microsoft Sentinel data lake if desired. May this blog lead to some creativity and stronger monitoring of the Sentinel Platform!

What’s New in Microsoft Sentinel: November 2025
Welcome to our new Microsoft Sentinel blog series! We’re excited to launch a new blog series focused on Microsoft Sentinel. From the latest product innovations and feature updates to industry recognition, success stories, and major events, you’ll find it all here. This first post kicks off the series by celebrating Microsoft’s recognition as a Leader in the 2025 Gartner Magic Quadrant for SIEM 1 . It also introduces the latest innovations designed to deliver measurable impact and empower defenders with adaptable, collaborative tools in an evolving threat landscape. Microsoft is recognized as a Leader in 2025 Gartner Magic Quadrant for Security Information and Event Management (SIEM) Microsoft Sentinel continues to drive security innovation—and the industry is taking notice. Microsoft was named a leader in the 2025 Gartner Magic Quadrant for Security Information and Event Management (SIEM) 1 , published on October 8, 2025. We believe this acknowledgment reinforces our commitment to helping organizations stay secure in a rapidly changing threat landscape. Read blog for more information. Take advantage of M365 E5 benefit and Microsoft Sentinel promotional pricing Microsoft 365 E5 benefit Customers with Microsoft 365 E5, A5, F5, or G5 licenses automatically receive up to 5 MB of free data ingestion per user per day, covering key security data sources like Azure AD sign-in logs and Microsoft Cloud App Security discovery logs—no enrollment required. Read more about M365 benefits for Microsoft Sentinel. New 50GB promotional pricing To make Microsoft Sentinel more accessible to small and mid-sized organizations, we introduced a new 50 GB commitment tier in public preview, with promotional pricing starting October 1, 2025, through March 31, 2026. Customers who choose the 50 GB commitment tier during this period will maintain their promotional rate until March 31, 2027. 
Available globally, with regional variations in pricing, it is accessible through EA, CSP, and Direct channels. For more information see Microsoft Sentinel pricing page. Partner Integrations: Strengthening TI collaboration and workflow automation Microsoft Sentinel continues to expand its ecosystem with powerful partner integrations that enhance security operations. With Cyware, customers can now share threat intelligence bi-directionally across trusted destinations, ISACs, and multi-tenant environments—enabling real-time intelligence exchange that strengthens defenses and accelerates coordinated response. Learn more about the Cyware integration here. Meanwhile, BlinkOps integration combined with Sentinel’s SOAR capabilities empowers SOC teams to automate repetitive tasks, orchestrate complex playbooks, and streamline workflows end-to-end. This automation reduces operational overhead, cuts Mean Time to Respond (MTTR) and frees analysts for strategic threat hunting. Learn more about the BlinkOps integration. Harnessing Microsoft Sentinel Innovations Security is being reengineered for the AI era, moving beyond static, rule-based controls and reactive post-breach response toward platform-led, machine-speed defense. To overcome fragmented tools, sprawling signals, and legacy architectures that cannot keep pace with modern attacks, Microsoft Sentinel has evolved into both a SIEM and a unified security platform for agentic defense. These updates introduce architectural enhancements and advanced capabilities that enable AI-driven security operations at scale, helping organizations detect, investigate, and respond with unprecedented speed and precision. Microsoft Sentinel graph – Public Preview Unified graph analytics for deeper context and threat reasoning.
Microsoft Sentinel graph delivers an interactive, visual map of entity relationships, helping analysts uncover hidden attack paths, lateral movement, and root causes for pre- and post-breach investigations. Read tech community blog for more details. Microsoft Sentinel Model Context Protocol (MCP) server – Public Preview Context is key to effective security automation. Microsoft Sentinel MCP server introduces a standardized protocol for building context-aware solutions, enabling developers to create smarter integrations and workflows within Sentinel. This opens the door to richer automation scenarios and more adaptive security operations. Read tech community blog for more details. Enhanced UEBA with New Data Sources – Public Preview We are excited to announce support for six new sources in our user and entity behavior analytics algorithm, including AWS, GCP, Okta, and Azure. Now, customers can gain deeper, cross-platform visibility into anomalous behavior for earlier and more confident detection. Read our blog and check out our Ninja Training to learn more. Developer Solutions for Microsoft Sentinel platform – Public Preview Expanded APIs, solution templates, and integration capabilities empower developers to build and distribute custom workflows and apps via Microsoft Security Store. This unlocks faster innovation, streamlined operations, and new revenue opportunities, extending Sentinel beyond out-of-the-box functionality for greater agility and resilience. Read tech community blog for more details. Growing ecosystem of Microsoft Sentinel data connectors We are excited to announce the general availability of four new data connectors: AWS Server Access Logs, Google Kubernetes Engine, Palo Alto CSPM, and Palo Alto Cortex Xpanse. Visit the find your Microsoft Sentinel data connector page for the list of data connectors currently supported.
We are also inviting Private Previews for four additional connectors: AWS EKS, Qualys VM KB, Alibaba Cloud Network, and Holm Security, as part of our commitment to expanding the breadth and depth of supported data sources. Our customer support team can help you sign up for previews. New agentless data connector for Microsoft Sentinel Solution for SAP applications We’re excited to announce the general availability of a new agentless connector for the Microsoft Sentinel solution for SAP applications, designed to simplify integration and enhance security visibility. This connector enables seamless ingestion of SAP logs and telemetry directly into Microsoft Sentinel, helping SOC teams monitor critical business processes, detect anomalies, and respond to threats faster—all while reducing operational overhead. Events, Webinars and Training Stay connected with the latest security innovation and best practices. From global conferences to expert-led sessions, these events offer opportunities to learn, network, and explore how Microsoft is shaping AI-driven, end-to-end security for the modern enterprise. Microsoft Ignite 2025 Security takes center stage at Microsoft Ignite, with dedicated sessions and hands-on experiences for security professionals and leaders. Join us in San Francisco, November 17–21, 2025, or online, to explore our AI-first, end-to-end security platform designed to protect identities, devices, data, applications, clouds, infrastructure—and critically—AI systems and agents. Register today! Microsoft Security Webinars Stay ahead of emerging threats and best practices with expert-led webinars from the Microsoft Security Community. Discover upcoming sessions on Microsoft Sentinel SIEM & platform, Defender, Intune, and more. Sign up today and be part of the conversation that shapes security for everyone. Learn more about upcoming webinars.
Onboard Microsoft Sentinel in Defender – Video Series Microsoft leads the industry in both SIEM and XDR, delivering a unified experience that brings these capabilities together seamlessly in the Microsoft Defender portal. This integration empowers security teams to correlate insights, streamline workflows, and strengthen defenses across the entire threat landscape. Ready to get started? Explore our video series to learn how to onboard your Microsoft Sentinel experience and unlock the full potential of integrated security. Watch the Microsoft Sentinel is now in Defender video series. MDTI Convergence into Microsoft Sentinel & Defender XDR overview Discover how Microsoft Defender Threat Intelligence Premium is transforming cybersecurity by integrating into Defender XDR, Sentinel, and the Defender portal. Watch this session to learn about new features, expanded access to threat intelligence, and how these updates strengthen your security posture. Partner Sentinel Bootcamp Transform your security team from Sentinel beginners to advanced practitioners. This comprehensive 2-day bootcamp helps participants master architecture design, data ingestion strategies, multi-tenant management, and advanced analytics while learning to leverage Microsoft's AI-first security platform for real-world threat detection and response. Register here for the bootcamp. Looking to dive deeper into Microsoft Sentinel development? Check out the official Sentinel developer resources: https://aka.ms/AppAssure_SentinelDeveloper. It’s the central reference for developers and security teams who want to build custom integrations, automate workflows, and extend Sentinel’s capabilities. Bookmark this link as your starting point for hands-on guidance and tools. Stay Connected Check back each month for the latest innovations, updates, and events to ensure you’re getting the most out of Microsoft Sentinel.
1 Gartner® Magic Quadrant™ for Security Information and Event Management, Andrew Davies, Eric Ahlm, Angel Berrios, Darren Livingstone, 8 October 2025

Using parameterized functions with KQL-based custom plugins in Microsoft Security Copilot
In this blog, I will walk through how you can build functions based on a Microsoft Sentinel Log Analytics workspace for use in custom KQL-based plugins for Security Copilot. The same approach can be used for Azure Data Explorer and Defender XDR, so long as you follow the specific guidance for either platform. A link to those steps is provided in the Additional Resources section at the end of this blog.

But first, it's helpful to clarify what parameterized functions are and why they are important in the context of Security Copilot KQL-based plugins. Parameterized functions accept input values (variables) such as lookback periods or entities, allowing you to dynamically alter parts of a query without rewriting the entire logic.

Parameterized functions are important in the context of Security Copilot plugins because of:

Dynamic prompt completion: Security Copilot plugins often accept user input (e.g., usernames, time ranges, IPs). Parameterized functions allow these inputs to be consistently injected into KQL queries without rebuilding query logic.

Plugin reusability: By using parameters, a single function can serve multiple investigation scenarios (e.g., checking sign-ins, data access, or alerts for any user or timeframe) instead of hardcoding different versions.

Maintainability and modularity: Parameterized functions centralize query logic, making it easier to update or enhance without modifying every instance across the plugin spec. To modify the logic, just edit the function in Log Analytics, test it, then save it, without needing to change the plugin at all or re-upload it into Security Copilot. This also significantly reduces the need to ensure that the query part of the YAML is perfectly indented and tabbed, as required by the OpenAPI specification; you only need to worry about formatting a single line versus several, potentially hundreds.
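To make the idea concrete before the full walkthrough, here is a minimal sketch of what a parameterized function might look like when saved in a Log Analytics workspace. The function name (UserSignInSummary) and its parameters (userUpn, lookback) are hypothetical; SigninLogs is a standard Log Analytics table.

```kusto
// Hypothetical function body saved in Log Analytics as "UserSignInSummary",
// defined with two parameters: userUpn:string and lookback:timespan.
// The parameters are referenced in the query body just like literal values.
SigninLogs
| where TimeGenerated > ago(lookback)
| where UserPrincipalName =~ userUpn
| summarize SignInCount = count() by AppDisplayName
```

Once saved, a plugin only needs to emit a single line such as `UserSignInSummary('user@contoso.com', 7d)`, and Security Copilot can map values from the user's prompt to those two arguments.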
Validation: Separating query logic from input parameters improves query reliability by avoiding the possibility of malformed queries. No matter what the input is, it's treated as a value, not as part of the query logic.

Plugin Spec mapping: OpenAPI-based Security Copilot plugins can map user-provided inputs directly to function parameters, making the interaction between user intent and query execution seamless.

Practical example

In this case, we have a 139-line KQL query that we will reduce to exactly one line that goes into the KQL plugin. In other cases, this number could be even higher. Without using functions, this entire query would have to form part of the plugin.

Note: The rest of this blog assumes you are familiar with KQL custom plugins: how they work and how to upload them into Security Copilot.

CloudAppEvents | where RawEventData.TargetDomain has_any ( 'grok.com', 'x.ai', 'mistral.ai', 'cohere.ai', 'perplexity.ai', 'huggingface.co', 'adventureai.gg', 'ai.google/discover/palm2', 'ai.meta.com/llama', 'ai2006.io', 'aibuddy.chat', 'aidungeon.io', 'aigcdeep.com', 'ai-ghostwriter.com', 'aiisajoke.com', 'ailessonplan.com', 'aipoemgenerator.org', 'aissistify.com', 'ai-writer.com', 'aiwritingpal.com', 'akeeva.co', 'aleph-alpha.com/luminous', 'alphacode.deepmind.com', 'analogenie.com', 'anthropic.com/index/claude-2', 'anthropic.com/index/introducing-claude', 'anyword.com', 'app.getmerlin.in', 'app.inferkit.com', 'app.longshot.ai', 'app.neuro-flash.com', 'applaime.com', 'articlefiesta.com', 'articleforge.com', 'askbrian.ai', 'aws.amazon.com/bedrock/titan', 'azure.microsoft.com/en-us/products/ai-services/openai-service', 'bard.google.com', 'beacons.ai/linea_builds', 'bearly.ai', 'beatoven.ai', 'beautiful.ai', 'beewriter.com', 'bettersynonyms.com', 'blenderbot.ai', 'bomml.ai', 'bots.miku.gg', 'browsegpt.ai', 'bulkgpt.ai', 'buster.ai', 'censusgpt.com', 'chai-research.com', 'character.ai', 'charley.ai', 'charshift.com', 'chat.lmsys.org', 'chat.mymap.ai', 'chatbase.co',
'chatbotgen.com', 'chatgpt.com', 'chatgptdemo.net', 'chatgptduo.com', 'chatgptspanish.org', 'chatpdf.com', 'chattab.app', 'claid.ai', 'claralabs.com', 'claude.ai/login', 'clipdrop.co/stable-diffusion', 'cmdj.app', 'codesnippets.ai', 'cohere.com', 'cohesive.so', 'compose.ai', 'contentbot.ai', 'contentvillain.com', 'copy.ai', 'copymatic.ai', 'copymonkey.ai', 'copysmith.ai', 'copyter.com', 'coursebox.ai', 'coverler.com', 'craftly.ai', 'crammer.app', 'creaitor.ai', 'dante-ai.com', 'databricks.com', 'deepai.org', 'deep-image.ai', 'deepreview.eu', 'descrii.tech', 'designs.ai', 'docgpt.ai', 'dreamily.ai', 'editgpt.app', 'edwardbot.com', 'eilla.ai', 'elai.io', 'elephas.app', 'eleuther.ai', 'essayailab.com', 'essay-builder.ai', 'essaygrader.ai', 'essaypal.ai', 'falconllm.tii.ae', 'finechat.ai', 'finito.ai', 'fireflies.ai', 'firefly.adobe.com', 'firetexts.co', 'flowgpt.com', 'flowrite.com', 'forethought.ai', 'formwise.ai', 'frase.io', 'freedomgpt.com', 'gajix.com', 'gemini.google.com', 'genei.io', 'generatorxyz.com', 'getchunky.io', 'getgptapi.com', 'getliner.com', 'getsmartgpt.com', 'getvoila.ai', 'gista.co', 'github.com/features/copilot', 'giti.ai', 'gizzmo.ai', 'glasp.co', 'gliglish.com', 'godinabox.co', 'gozen.io', 'gpt.h2o.ai', 'gpt3demo.com', 'gpt4all.io', 'gpt-4chan+)', 'gpt6.ai', 'gptassistant.app', 'gptfy.co', 'gptgame.app', 'gptgo.ai', 'gptkit.ai', 'gpt-persona.com', 'gpt-ppt.neftup.app', 'gptzero.me', 'grammarly.com', 'hal9.com', 'headlime.com', 'heimdallapp.org', 'helperai.info', 'heygen.com', 'heygpt.chat', 'hippocraticai.com', 'huggingface.co/spaces/tiiuae/falcon-180b-demo', 'humanpal.io', 'hypotenuse.ai', 'ichatwithgpt.com', 'ideasai.com', 'ingestai.io', 'inkforall.com', 'inputai.com/chat/gpt-4', 'instantanswers.xyz', 'instatext.io', 'iris.ai', 'jasper.ai', 'jigso.io', 'kafkai.com', 'kibo.vercel.app', 'kloud.chat', 'koala.sh', 'krater.ai', 'lamini.ai', 'langchain.com', 'laragpt.com', 'learn.xyz', 'learnitive.com', 'learnt.ai', 'letsenhance.io', 
'letsrevive.app', 'lexalytics.com', 'lgresearch.ai', 'linke.ai', 'localbot.ai', 'luis.ai', 'lumen5.com', 'machinetranslation.com', 'magicstudio.com', 'magisto.com', 'mailshake.com/ai-email-writer', 'markcopy.ai', 'meetmaya.world', 'merlin.foyer.work', 'mieux.ai', 'mightygpt.com', 'mosaicml.com', 'murf.ai', 'myaiteam.com', 'mygptwizard.com', 'narakeet.com', 'nat.dev', 'nbox.ai', 'netus.ai', 'neural.love', 'neuraltext.com', 'newswriter.ai', 'nextbrain.ai', 'noluai.com', 'notion.so', 'novelai.net', 'numind.ai', 'ocoya.com', 'ollama.ai', 'openai.com', 'ora.ai', 'otterwriter.com', 'outwrite.com', 'pagelines.com', 'parallelgpt.ai', 'peppercontent.io', 'perplexity.ai', 'personal.ai', 'phind.com', 'phrasee.co', 'play.ht', 'poe.com', 'predis.ai', 'premai.io', 'preppally.com', 'presentationgpt.com', 'privatellm.app', 'projectdecember.net', 'promptclub.ai', 'promptfolder.com', 'promptitude.io', 'qopywriter.ai', 'quickchat.ai/emerson', 'quillbot.com', 'rawshorts.com', 'read.ai', 'rebecc.ai', 'refraction.dev', 'regem.in/ai-writer', 'regie.ai', 'regisai.com', 'relevanceai.com', 'replika.com', 'replit.com', 'resemble.ai', 'resumerevival.xyz', 'riku.ai', 'rizzai.com', 'roamaround.app', 'rovioai.com', 'rytr.me', 'saga.so', 'sapling.ai', 'scribbyo.com', 'seowriting.ai', 'shakespearetoolbar.com', 'shortlyai.com', 'simpleshow.com', 'sitegpt.ai', 'smartwriter.ai', 'sonantic.io', 'soofy.io', 'soundful.com', 'speechify.com', 'splice.com', 'stability.ai', 'stableaudio.com', 'starryai.com', 'stealthgpt.ai', 'steve.ai', 'stork.ai', 'storyd.ai', 'storyscapeai.app', 'storytailor.ai', 'streamlit.io/generative-ai', 'summari.com', 'synesthesia.io', 'tabnine.com', 'talkai.info', 'talkpal.ai', 'talktowalle.com', 'team-gpt.com', 'tethered.dev', 'texta.ai', 'textcortex.com', 'textsynth.com', 'thirdai.com/pocketllm', 'threadcreator.com', 'thundercontent.com', 'tldrthis.com', 'tome.app', 'toolsaday.com/writing/text-genie', 'to-teach.ai', 'tutorai.me', 'tweetyai.com', 'twoslash.ai', 'typeright.com', 
'typli.ai', 'uminal.com', 'unbounce.com/product/smart-copy', 'uniglobalcareers.com/cv-generator', 'usechat.ai', 'usemano.com', 'videomuse.app', 'vidext.app', 'virtualghostwriter.com', 'voicemod.net', 'warmer.ai', 'webllm.mlc.ai', 'wellsaidlabs.com', 'wepik.com', 'we-spots.com', 'wordplay.ai', 'wordtune.com', 'workflos.ai', 'woxo.tech', 'wpaibot.com', 'writecream.com', 'writefull.com', 'writegpt.ai', 'writeholo.com', 'writeme.ai', 'writer.com', 'writersbrew.app', 'writerx.co', 'writesonic.com', 'writesparkle.ai', 'writier.io', 'yarnit.app', 'zevbot.com', 'zomani.ai' ) | extend sit = parse_json(tostring(RawEventData.SensitiveInfoTypeData)) | mv-expand sit | summarize Event_Count = count() by tostring(sit.SensitiveInfoTypeName), CountryCode, City, UserId = tostring(RawEventData.UserId), TargetDomain = tostring(RawEventData.TargetDomain), ActionType = tostring(RawEventData.ActionType), IPAddress = tostring(RawEventData.IPAddress), DeviceType = tostring(RawEventData.DeviceType), FileName = tostring(RawEventData.FileName), TimeBin = bin(TimeGenerated, 1h) | extend SensitivityScore = case(tostring(sit_SensitiveInfoTypeName) in~ ("U.S. 
Social Security Number (SSN)", "Credit Card Number", "EU Tax Identification Number (TIN)","Amazon S3 Client Secret Access Key","All Credential Types"), 90, tostring(sit_SensitiveInfoTypeName) in~ ("All Full names"), 40, tostring(sit_SensitiveInfoTypeName) in~ ("Project Obsidian", "Phone Number"), 70, tostring(sit_SensitiveInfoTypeName) in~ ("IP"), 50,10 ) | join kind=leftouter ( IdentityInfo | where TimeGenerated > ago(lookback) | extend AccountUpn = tolower(AccountUPN) ) on $left.UserId == $right.AccountUpn | join kind=leftouter ( BehaviorAnalytics | where TimeGenerated > ago(lookback) | extend AccountUpn = tolower(UserPrincipalName) ) on $left.UserId == $right.AccountUpn //| where BlastRadius == "High" //| where RiskLevel == "High" | where Department == User_Dept | summarize arg_max(TimeGenerated, *) by sit_SensitiveInfoTypeName, CountryCode, City, UserId, TargetDomain, ActionType, IPAddress, DeviceType, FileName, TimeBin, Department, SensitivityScore | summarize sum(Event_Count) by sit_SensitiveInfoTypeName, CountryCode, City, UserId, Department, TargetDomain, ActionType, IPAddress, DeviceType, FileName, TimeBin, BlastRadius, RiskLevel, SourceDevice, SourceIPAddress, SensitivityScore

With parameterized functions, follow these steps to simplify the plugin that will be built based on the query above:

1. Define the variables/parameters upfront in the query (BEFORE creating the parameters in the UI). This puts the query in a temporarily unusable state because the parameters will cause syntax errors at this point. However, since the plan is to run the query as a function, this is OK.

2. Create the parameters in the Log Analytics UI. Give the function a name and define the parameters exactly as they appear in the query in step 1 above. In this example, we define two parameters: lookback, to store the lookback period passed to the time filter, and User_Dept, to hold the user's department.

3. Test the query.

Note the order of parameter definition in the UI: first User_Dept, THEN the lookback period. You can interchange them if you like, but this determines how you submit the query when calling the function. If the User_Dept parameter was defined first, then it needs to come first when executing the function. See the screenshot below. Switching them will result in the wrong parameter being passed to the query and, consequently, 0 results being returned.

Effect of switched parameters:

To edit the function, navigate to the Logs menu for your Log Analytics workspace, then select the function icon.

Once satisfied with the query and function, build your spec file for the Security Copilot plugin. Note the parameter definition and usage in the sections highlighted in red below.

And that's it, from 139 unwieldy KQL lines to one very manageable one! You are welcome 😊

Let's now put it through its paces once uploaded into Security Copilot. We start by executing the plugin using its default settings via the direct skill invocation method. We see that the prompt indeed returns results based on the default values passed as parameters to the function:

Next, we still use direct skill invocation, but this time specify our own parameters:

Lastly, we test it out with a natural language prompt:

Tip: The function does not execute successfully if the default summarize output is used without assigning it to a variable, i.e. if the summarize count() command is used in your query, it produces a system-defined output column named count_. To avoid this issue, use a user-defined variable such as Event_Count, as shown in line 77 below:

Conclusion

In conclusion, leveraging parameterized functions within KQL-based custom plugins in Microsoft Security Copilot can significantly streamline your data querying and analysis capabilities.
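To recap the pattern in a compact sketch, once the function above is saved, the entire KQL payload carried by the plugin collapses to a single positional call. The function name here is hypothetical (the blog's screenshots define the actual name); the argument order matches the walkthrough's parameter definition, User_Dept first, then lookback.

```kusto
// Single-line plugin query: arguments are passed positionally,
// in the order the parameters were defined in the Log Analytics UI
// (User_Dept first, then lookback). Values are illustrative.
SensitiveDataByDept('Finance', 30d)
```

Because only this one line lives in the plugin's YAML, updating the 139-line query behind it never requires re-uploading the plugin.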
By encapsulating reusable logic, improving query efficiency, and ensuring maintainability, these functions provide an efficient approach for tapping into data stored across Microsoft Sentinel, Defender XDR, and Azure Data Explorer clusters. Start integrating parameterized functions into your KQL-based Security Copilot plugins today, and let us have your feedback.

Additional Resources

Using parameterized functions in Microsoft Defender XDR
Using parameterized functions with Azure Data Explorer
Functions in Azure Monitor log queries - Azure Monitor | Microsoft Learn
Kusto Query Language (KQL) plugins in Microsoft Security Copilot | Microsoft Learn
Harnessing the power of KQL Plugins for enhanced security insights with Copilot for Security | Microsoft Community Hub