Ignite 2025: New Microsoft Sentinel Connectors Announcement
Microsoft Sentinel continues to set the pace for innovation in cloud-native SIEMs, empowering security teams to meet today’s challenges with scalable analytics, built-in AI, and a cost-effective data lake. Recognized as a leader by Gartner and Forrester, Microsoft Sentinel is a platform for all of security, evolving to unify signals, cut costs, and power agentic AI for the modern SOC.

As Microsoft Sentinel’s capabilities expand, so does its connector ecosystem. With more than 350 integrations available, organizations can seamlessly bring data from a wide range of sources into Microsoft Sentinel’s analytics and data lake tiers. This momentum is driven by our partners, who continue to deliver new and enhanced connectors that address real customer needs. The past year has seen rapid growth in both the number and diversity of connectors, ensuring that Microsoft Sentinel remains robust, flexible, and ready to meet the demands of any security environment.

Today we showcase some of the most recent additions to our growing Microsoft Sentinel ecosystem, spanning categories such as cloud security, endpoint protection, identity, IT operations, threat intelligence, compliance, and more.

New and notable integrations

BlinkOps and Microsoft Sentinel

BlinkOps is an enterprise-ready agentic security automation platform that integrates seamlessly with Microsoft Sentinel to accelerate incident response and streamline operations. With Blink, analysts can rapidly build sophisticated workflows and custom security agents—without writing a single line of code—enabling agile, scalable automation with both Microsoft Sentinel and any other security platform. This integration helps eliminate alert fatigue, reduce mean time to resolution (MTTR), and free teams to focus on what matters most: driving faster operations, staying ahead of cyber threats, and unlocking new levels of efficiency through reliable, trusted orchestration.
Check Point for Microsoft Sentinel solutions

Check Point’s External Risk Management (ERM) IOC and Alerts integration with Microsoft Sentinel streamlines how organizations detect and respond to external threats by automatically sending both alerts and indicators of compromise (IOCs) into Microsoft Sentinel. Through this integration, customers can configure SOAR playbooks to trigger automated actions such as updating security policies, blocking malicious traffic, and executing other security operations tasks. This orchestration reduces manual effort, accelerates response times, and allows IT teams, network administrators, and security personnel to focus on strategic threat analysis—strengthening the organization’s overall security posture.

Cloudflare for Microsoft Sentinel

Cloudflare’s integration with Microsoft Sentinel, powered by Logpush, brings detailed security telemetry from its Zero Trust and network services into your SIEM environment. By forwarding logs such as DNS queries, HTTP requests, and access events through Logpush, the connector enables SOC teams to correlate Cloudflare data with other sources for comprehensive threat detection. This integration supports automated workflows for alerting and investigation, helping organizations strengthen visibility across web traffic and identity-based access while reducing manual overhead.

Contrast ADR for Microsoft Sentinel

Contrast Security gives Microsoft Sentinel users their first-ever integration with Application Detection and Response (ADR), delivering real-time visibility into application and API attacks and eliminating the application-layer blind spot. By embedding security directly into applications, Contrast enables continuous monitoring and precise blocking of attacks, and with AI assistance, the ability to fix underlying software vulnerabilities in minutes.
This integration helps security teams prioritize actionable insights, reduce noise, and better understand the severity of threats targeting APIs and web apps.

GreyNoise Enterprise Solution for Microsoft Sentinel

GreyNoise helps Microsoft Sentinel users cut through the noise by identifying and filtering out internet background traffic that clutters security alerts. Drawing from a global sensor network, GreyNoise classifies IP addresses that are scanning the internet, allowing SOC teams to deprioritize benign activity and focus on real threats. The integration supports automated triage, threat hunting, and enrichment workflows, giving analysts the context they need to investigate faster and more effectively.

iboss Connector for Microsoft Sentinel

The iboss Connector for Microsoft Sentinel delivers real-time ingestion of URL event logs, enriching your SIEM with high-fidelity web traffic insights. Logs are forwarded in Common Event Format (CEF) over Syslog, enabling streamlined integration without the need for a proxy. With built-in parser functions and custom workbooks, the solution supports rapid threat detection and investigation. This integration is especially valuable for organizations adopting Zero Trust principles, offering granular visibility into user access patterns and helping analysts accelerate response workflows.

Mimecast

Mimecast’s integration with Microsoft Sentinel consolidates email security telemetry into a unified threat detection environment. By streaming data from Mimecast into Microsoft Sentinel’s Log Analytics workspace, security teams can craft custom queries, automate response workflows, and prioritize high-risk events. This connector supports a wide range of use cases, from phishing detection to compliance monitoring, while helping reduce mean time to respond (MTTR).
MongoDB Atlas Solution for Microsoft Sentinel

MongoDB Atlas integrates with Microsoft Sentinel to provide visibility into database activity and security events across cloud environments. By forwarding database logs into Sentinel, this connector enables SOC teams to monitor access patterns, detect anomalies, and correlate database alerts with broader security signals. The integration allows for custom queries and dashboards to be built on real-time log data, helping organizations strengthen data security, streamline investigations, and maintain compliance for critical workloads.

Onapsis Defend

Onapsis Defend integrates with Microsoft Sentinel Solution for SAP to deliver real-time security monitoring and threat detection from both cloud and on-premises SAP systems. By forwarding Onapsis’s unique SAP exploit detection, proprietary SAP zero-day rules, and expert SAP-focused insights into Microsoft Sentinel, this integration enables SOC teams to correlate SAP-specific risks with enterprise-wide telemetry and accelerate incident response. The integration supports prebuilt analytics rules and dashboards, helping organizations detect suspicious behavior and malicious activity, prioritize remediation, and strengthen compliance across complex SAP application landscapes.

Proofpoint on Demand (POD) Email Security for Microsoft Sentinel

Proofpoint’s Core Email Protection integrates with Microsoft Sentinel to deliver granular email security telemetry for advanced threat analysis. By forwarding events such as phishing attempts, malware detections, and policy violations into Microsoft Sentinel, SOC teams can correlate Proofpoint data with other sources for a unified view of risk. The connector supports custom queries, dashboards, and automated playbooks, enabling faster investigations and streamlined remediation workflows. This integration helps organizations strengthen email defenses and improve response efficiency across complex attack surfaces.
Proofpoint TAP Solution

Proofpoint’s Targeted Attack Protection (TAP), part of its Core Email Protection, integrates with Microsoft Sentinel to centralize email security telemetry for advanced threat detection and response. By streaming logs and events from Proofpoint into Microsoft Sentinel, SOC teams gain visibility into phishing attempts, malicious attachments, and compromised accounts. The connector supports custom queries, dashboards, and automated playbooks, enabling faster investigations and streamlined remediation workflows. This integration helps organizations strengthen email defenses while reducing manual effort across incident response processes.

RSA ID Plus Admin Log Connector

The RSA ID Plus Admin Log Connector integrates with Microsoft Sentinel to provide centralized visibility into administrative activity within RSA ID Plus. By streaming admin-level logs into Sentinel, SOC teams can monitor changes, track authentication-related operations, and correlate identity events with broader security signals. The connector supports custom queries and dashboards, enabling organizations to strengthen oversight and streamline investigations across their hybrid environments.

Rubrik Integrations with Microsoft Sentinel for Ransomware Protection

Rubrik’s integration with Microsoft Sentinel strengthens ransomware resilience by combining data security with real-time threat detection. The connector streams anomaly alerts, such as suspicious deletions, modifications, encryptions, or downloads, directly into Microsoft Sentinel, enabling fast investigations and more informed responses. With built-in automation, security teams can trigger recovery workflows from within Microsoft Sentinel, restoring clean backups or isolating affected systems. The integration bridges IT and SecOps, helping organizations minimize downtime and maintain business continuity when facing data-centric threats.
Samsung Knox Asset Intelligence for Microsoft Sentinel

Samsung’s Knox Asset Intelligence integration with Microsoft Sentinel equips security teams with near real-time visibility into mobile device threats across Samsung Galaxy enterprise fleets. By streaming security events and logs from managed Samsung devices into Microsoft Sentinel via the Azure Monitor Log Ingestion API, organizations can monitor risk posture, detect anomalies, and investigate incidents from a centralized dashboard. This solution is especially valuable for SOC teams monitoring endpoints for large mobile workforces, offering data-driven insights to reduce blind spots and strengthen endpoint security without disrupting device performance.

SAP S/4HANA Public Cloud – Microsoft Sentinel

SAP S/4HANA Cloud, public edition integrates with Microsoft Sentinel Solution for SAP to deliver unified, real-time security monitoring for cloud ERP environments. This connector leverages Microsoft’s native SAP integration capabilities to stream SAP logs into Microsoft Sentinel, enabling SOC teams to correlate SAP-specific events with enterprise-wide telemetry for faster, more accurate threat detection and response.

SAP Enterprise Threat Detection – Microsoft Sentinel

SAP Enterprise Threat Detection integrates with Microsoft Sentinel Solution for SAP to deliver unified, real-time security monitoring across SAP landscapes and the broader enterprise. Normalized SAP logs, alerts, and investigation reports flow into Microsoft Sentinel, enabling SOC teams to correlate SAP-specific alerts with enterprise telemetry for faster, more accurate threat detection and response.

SecurityBridge: SAP Data to Microsoft Sentinel

SecurityBridge extends Microsoft Sentinel for SAP’s reach into SAP environments, offering real-time monitoring and threat detection across both cloud and on-premises SAP systems.
By funneling normalized SAP security events into Microsoft Sentinel, this integration enables SOC teams to correlate SAP-specific risks with broader enterprise telemetry. With support for S/4HANA, SAP BTP, and NetWeaver-based applications, SecurityBridge simplifies SAP security auditing and provides prebuilt dashboards and templates to accelerate investigations.

Tanium Microsoft Sentinel Connector

Tanium’s integration with Microsoft Sentinel bridges real-time endpoint intelligence and SIEM analytics, offering a unified approach to threat detection and response. By streaming real-time telemetry and alerts into Microsoft Sentinel, Tanium enables security teams to monitor endpoint health, investigate incidents, and trigger automated remediation, all from a single console. The connector supports prebuilt workbooks and playbooks, helping organizations reduce dwell time and align IT and security operations around a shared source of truth.

Team Cymru Pure Signal Scout for Microsoft Sentinel

Team Cymru’s Pure Signal™ Scout integration with Microsoft Sentinel delivers high-fidelity threat intelligence drawn from global internet telemetry. By enriching Microsoft Sentinel alerts with real-time context on IPs, domains, and adversary infrastructure, Scout enables security teams to proactively monitor third-party compromise, track threat actor infrastructure, and reduce false positives. The integration supports external threat hunting and attribution, enabling analysts to discover command-and-control activity and signals of data exfiltration and compromise with greater precision. For organizations seeking to build preemptive defenses by elevating threat visibility beyond their borders, Scout offers a lens into the broader threat landscape at internet scale.

Veeam App for Microsoft Sentinel

The Veeam App for Microsoft Sentinel enhances data protection by streaming backup and recovery telemetry into your SIEM environment.
The solution provides visibility into backup job status, anomalies, and potential ransomware indicators, enabling SOC teams to correlate these events with broader security signals. With support for custom queries and automated playbooks, this integration helps organizations accelerate investigations, trigger recovery workflows, and maintain resilience against data-centric threats.

WithSecure Elements via Function for Microsoft Sentinel

WithSecure’s Elements platform integrates with Microsoft Sentinel to provide centralized visibility into endpoint protection and detection events. By streaming incident and malware telemetry into Microsoft Sentinel, organizations can correlate endpoint data with broader security signals for faster, more informed responses. The solution supports a proactive approach to cybersecurity, combining predictive, preventive, and responsive capabilities, making it well-suited for teams seeking speed and flexibility without sacrificing depth. This integration helps reduce complexity while enhancing situational awareness across hybrid environments, helping companies prevent or minimize disruption.

In addition to these solutions from our third-party partners, we are also excited to announce the following connectors published by the Microsoft Sentinel team, available now in Azure Marketplace and Microsoft Sentinel content hub.
- Alibaba Cloud Action Trail Logs
- AWS: Network Firewall
- AWS: Route 53 DNS
- AWS: Security Hub Findings
- AWS: Server Access
- Cisco Secure Endpoint
- GCP: Apigee
- GCP: CDN
- GCP: Cloud Monitor
- GCP: Cloud Run
- GCP: DNS
- GCP: Google Kubernetes Engine (GKE)
- GCP: NAT
- GCP: Resource Manager
- GCP: SQL
- GCP: VPC Flow
- GCP: IAM
- OneLogin IAM
- Oracle Cloud Infrastructure
- Palo Alto: Cortex Xpanse CCF
- Palo Alto: Prisma Cloud CWPP
- Ping One
- Qualys Vulnerability Management
- Salesforce Service Cloud
- Slack Audit
- Snowflake

App Assure: The Microsoft Sentinel promise

Every connector in the Microsoft Sentinel ecosystem is built to work out of the box, backed by the App Assure team and the Microsoft Sentinel promise. In the unlikely event that customers encounter any issues, App Assure stands ready to assist to ensure rapid resolution. With the new Microsoft Sentinel data lake features, we extend our promise for customers looking to bring their data to the lake. To request a new connector or features for an existing one, contact us via our intake form.

Learn More

- Microsoft Sentinel data lake
- Microsoft Sentinel data lake: Unify signals, cut costs, and power agentic AI
- Introducing Microsoft Sentinel data lake
- What is Microsoft Sentinel data lake
- Unlocking Developer Innovation with Microsoft Sentinel data lake
- Microsoft Sentinel Codeless Connector Framework (CCF)
- Create a codeless connector for Microsoft Sentinel
- What’s New in Microsoft Sentinel
- Microsoft App Assure
- App Assure home page
- App Assure services
- App Assure blog
- App Assure’s promise: Migrate to Sentinel with confidence
- App Assure’s Sentinel promise now extends to Microsoft Sentinel data lake
- RSAC 2025 new Microsoft Sentinel connectors announcement
- Microsoft Security
- Microsoft’s Secure Future Initiative
- Microsoft Unified SecOps

Automating Microsoft Sentinel: Part 2: Automate the mundane away
Welcome to the second entry of our blog series on automating Microsoft Sentinel. In this series, we’re showing you how to automate various aspects of Microsoft Sentinel, from simple automation of Sentinel Alerts and Incidents to more complicated response scenarios with multiple moving parts. So far, we’ve covered Part 1: Introduction to Automating Microsoft Sentinel, where we talked about why you would want to automate as well as an overview of the different types of automation you can do in Sentinel.

Here is a preview of what you can expect in the upcoming posts [we’ll be updating this post with links to new posts as they happen]:

- Part 1: Introduction to Automating Microsoft Sentinel
- Part 2: Automation Rules [You are here] – Automate the mundane away
- Part 3: Playbooks Part I – Fundamentals
- Part 4: Playbooks Part II – Diving Deeper
- Part 5: Azure Functions / Custom Code
- Part 6: Capstone Project (Art of the Possible) – Putting it all together

Part 2: Automation Rules – Automate the mundane away

Automation rules can be used to automate Sentinel itself. For example, let’s say there is a group of machines that have been classified as business critical, and if there is an alert related to those machines, the incident needs to be assigned to a Tier 3 response team and the severity of the alert needs to be raised to at least “high”. Using an automation rule, you can take one analytic rule, apply it to the entire enterprise, and then have an automation rule that applies only to those business-critical systems to make those changes. That way only the items that need that immediate escalation receive it, quickly and efficiently.

Automation Rules In Depth

So, now that we know what Automation Rules are, let’s dive into them a bit deeper to better understand how to configure them and how they work.
Creating Automation Rules

There are three main places where we can create an Automation Rule:

1) Navigating to Automation under the left menu
2) In an existing Incident via the “Actions” button
3) When writing an Analytic Rule, under the “Automated response” tab

The process for each is generally the same, except for the Incident route, and we’ll break that down more in a bit.

When we create an Automation Rule, we need to give the rule a name. It should be descriptive and indicative of what the rule is going to do and what conditions it applies to. For example, a rule that automatically resolves an incident based on a known false positive condition on a server named SRV02021 could be titled “Automatically Close Incident When Affected Machine is SRV02021”, but really it’s up to you to decide what you want to name your rules.

Trigger

The next thing we need to define for our Automation Rule is the Trigger. Triggers are what cause the automation rule to begin running. They can fire when an incident is created or updated, or when an alert is created. Of the two options (incident based or alert based), it’s preferred to use incident triggers, as an incident is potentially the aggregation of multiple alerts and the odds are that you’re going to want to take the same automation steps for all of those alerts since they’re all related. It’s better to reserve alert-based triggers for scenarios where an analytic rule is firing an alert but is set to not create an incident.

Conditions

Conditions are, well, the conditions to which this rule applies. There are two conditions that are always present: the Incident provider and the Analytic rule name. You can choose multiple criteria and steps. For example, you could have it apply to all incident providers and all rules (as shown in the picture above), or only a specific provider and all rules, or not apply to a particular provider, and so on. You can also add additional Conditions that will either include or exclude the rule from running.
When you create a new condition, you can build it out from multiple properties, ranging from information about the Incident all the way to information about the Entities that are tagged in the incident.

Remember our earlier Automation Rule title where we said this was a false positive about a server named SRV02021? This is where we make the rule match that title, by setting the Condition to only fire this automation if the Entity has a host name of “SRV02021”.

By combining AND and OR group clauses with the built-in conditional filters, you can make the rule as specific as you need it to be.

You might be thinking to yourself that while there is a lot of power in creating these conditions, it might be a bit onerous to create them for each condition. Recall earlier where I said the process for the three ways of creating Automation Rules was generally the same, except using the Incident Action route? Well, that route will pre-fill variables for that selected instance. For example, in the image below, the rule automatically took the rule name, the rules it applies to, as well as the entities that were mapped in the incident. You can add, remove, or modify any of the variables that the process auto-maps.

NOTE: The new Unified Security Operations Platform (Defender XDR + Sentinel) has some new best practice guidance:

- If you’ve created an automation using “Title”, use “Analytic rule name” instead. The Title value could change with Defender’s Correlation engine.
- The option for “Incident provider” has been removed and replaced by “Alert product names” to filter based on the alert provider.

Actions

Now that we’ve tuned our Automation Rule to only fire for the situations we want, we can set up what actions we want the rule to execute. Clicking the “Actions” drop-down list will show you the options you can choose. When you select an option, the user interface will change to match your selected option.
For example, if I choose to change the status of the Incident, the UX will update to show me a drop-down menu with options about which status I would like to set. If I choose other options (Run playbook, change severity, assign owner, add tags, add task), the UX will change to reflect my option.

You can assign multiple actions within one Automation Rule by clicking the “Add action” button and selecting the next action you want the system to take. For example, you might want to assign an Incident to a particular user or group, change its severity to “High”, and then set the status to Active.

Notably, when you create an Automation Rule from an Incident, Sentinel automatically sets a default action to Change Status. It sets the automation up to set the Status to “Closed” with a classification of “Benign Positive – Suspicious but expected”. This default action can be deleted, and you can then set up your own action.

In a future episode of this blog we’re going to be talking about Playbooks in detail, but for now just know that this is the place where you can assign a Playbook to your Automation Rules. There is one other option in the Actions menu that I wanted to specifically talk about in this blog post, though: Incident Tasks.

Incident Tasks

Like most cybersecurity teams, you probably have a runbook of the different tasks or steps that your analysts and responders should take for different situations. By using Incident Tasks, you can now embed those runbook steps directly in the Incident. Incident tasks can be as lightweight or as detailed as you need them to be and can include rich formatting, links to external content, images, etc. When an incident with Tasks is generated, the SOC team will see these tasks attached as part of the Incident and can then take the defined actions and check off that they’ve been completed.
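For readers who prefer to manage these rules as code, the SRV02021 example above can also be expressed as a request body for the Microsoft Sentinel automation rules REST API. The following is a minimal sketch in Python: the property and enum names ("HostName", "ModifyProperties", "BenignPositive", and so on) follow the public automationRules schema, but treat them as assumptions and verify against the API version you target; the classification comment is purely illustrative.

```python
# Illustrative sketch only: field names follow the public Microsoft Sentinel
# automationRules REST schema but should be confirmed against the current
# API version before use.

def build_automation_rule(display_name: str, host_name: str) -> dict:
    """Assemble a request body that auto-closes incidents for one known host."""
    return {
        "properties": {
            "displayName": display_name,
            "order": 1,
            "triggeringLogic": {
                "isEnabled": True,
                # Incident trigger, per the guidance above to prefer incidents
                "triggersOn": "Incidents",
                "triggersWhen": "Created",
                "conditions": [
                    {
                        "conditionType": "Property",
                        "conditionProperties": {
                            "propertyName": "HostName",
                            "operator": "Equals",
                            "propertyValues": [host_name],
                        },
                    }
                ],
            },
            "actions": [
                {
                    "order": 1,
                    "actionType": "ModifyProperties",
                    "actionConfiguration": {
                        "status": "Closed",
                        "classification": "BenignPositive",
                        "classificationComment": "Known false positive on this host",
                    },
                }
            ],
        },
    }


rule = build_automation_rule(
    "Automatically Close Incident When Affected Machine is SRV02021",
    "SRV02021",
)
```

The resulting dictionary would be serialized to JSON and sent via PUT to the workspace’s `Microsoft.SecurityInsights/automationRules/{ruleId}` endpoint; authentication and the HTTP call itself are omitted here.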
Rule Lifetime and Order

There is one last section of Automation Rules that we need to define before we can start automating the mundane away: when should the rule expire, and in what order should the rule run compared to other rules?

When you create a rule in the standalone automation UX, the default is for the rule to expire at an indefinite date and time in the future, i.e. never. You can change the expiration to any date and time in the future. If you are creating the automation rule from an Incident, Sentinel will automatically assume that this rule should have an expiration date and time and sets it to 24 hours in the future. Just as with the default action when created from an Incident, you can change the date and time of expiration to any datetime in the future, or set it to “Indefinite” by deleting the date.

Conclusion

In this blog post, we talked about Automation Rules in Sentinel and how you can use them to automate mundane tasks, as well as leverage them to help your SOC analysts be more effective and consistent in their day-to-day work with capabilities like Incident Tasks. Stay tuned for more updates and tips on automating Microsoft Sentinel!

Microsoft Sentinel Platform: Audit Logs and Where to Find Them
*Thank you to my teammates Ian Parramore and David Hoerster for reviewing and contributing to this blog.*

With the launch of the Sentinel Platform, a new suite of features for the Microsoft Sentinel service, users may find themselves wanting to monitor who is using these new features and how. This blog sets out to highlight how auditing and monitoring can be achieved and where this data can be found.

What are Audit Logs?

Audit logs are records of activities that can be consumed by SOC tools, such as a SIEM. These logs are meant to exist as a paper trail to show:

- Who performed an action
- What type of action was performed
- When the action was performed
- Where the action was performed
- How the action was performed

Audit logs can be generated by many platforms, whether they are Microsoft services or platforms outside of the Microsoft ecosystem. Each source is a great option for a SOC to monitor.

Types of Audit Logs

Audit logs can vary in how they are classified or where they are placed. Focusing just on Microsoft, the logs can vary based on platform. A few examples are:

- Windows: Events generated by the operating system that are available in Event Viewer
- Azure: Diagnostic logs generated by services that can be sent to Azure Log Analytics
- Defender: Audit logs generated by Defender services that are sent to the M365 audit logs

What is the CloudAppEvents Table?

The CloudAppEvents table is a data table that is provided via Advanced Hunting in Defender. This table contains events of applications being used within the environment. This table is also a destination for Microsoft audit logs that are being sent to Purview. Purview’s audit log blade includes logs from platforms like M365, Defender, and now the Sentinel Platform.

How to Check if Purview Unified Audit Logging is Enabled

For CloudAppEvents to receive data, audit logging within Purview must be enabled and M365 needs to be connected as a Defender for Cloud Apps component.
Enabling Audit Logs

Purview auditing is enabled by default within environments. In the event that it has been disabled, audit logging can be checked and enabled via PowerShell. To do so, the user must have the Audit Logs role within Exchange Online. The command to run is:

Get-AdminAuditLogConfig | Format-List UnifiedAuditLogIngestionEnabled

The result will be True if auditing is already turned on, or False if it is disabled. If the result is False, the setting will need to be enabled. To do so:

1) Load the Exchange Online module:
Import-Module ExchangeOnlineManagement
2) Connect and authenticate to Exchange Online with an interactive window:
Connect-ExchangeOnline -UserPrincipalName USER PRINCIPAL NAME HERE
3) Run the command to enable auditing:
Set-AdminAuditLogConfig -UnifiedAuditLogIngestionEnabled $true

Note: It may take 60 minutes for the change to take effect.

Connecting M365 to MDA

To connect M365 to MDA as a connector:

1) Within the Defender portal, go to System > Settings.
2) Within Settings, choose Cloud Apps.
3) Within the settings navigation, go to Connected apps > App Connectors.
4) If Microsoft 365 is not already listed, click Connect an app.
5) Find and select Microsoft 365.
6) Within the settings, select the boxes for Microsoft 365 activities.
7) Once set, click on the Connect Microsoft 365 button.

Note: You must have a license that includes Microsoft Defender for Cloud Apps in order to have these settings for app connections.

Monitoring Activities

As laid out within the public documentation, there are several categories of audit logs for the Sentinel Platform, including:

- Onboarding/offboarding
- KQL activities
- Job activities
- Notebook activities
- AI tool activities
- Graph activities

All of these events are surfaced within the Purview audit viewer and the CloudAppEvents table. When querying the table, each of the activities will appear under a column called ActionType.
Onboarding

- SentinelLakeSubscriptionChanged (“Changed subscription of Sentinel lake”): User modified the billing subscription ID or resource group associated with the data lake.
- SentinelLakeDefaultDataOnboarded (“Onboarded data for Sentinel lake”): During onboarding, the default data onboard is logged.
- SentinelLakeSetup (“Setup Sentinel lake”): At the time of onboarding, the Sentinel data lake is set up and the details are logged.

For querying these activities, one example would be:

CloudAppEvents
| where ActionType == 'SentinelLakeDefaultDataOnboarded'
| project AccountDisplayName, Application, Timestamp, ActionType

KQL Queries

There is only one type of activity for KQL available. These logs function similarly to how LAQueryLogs work today, showing details like who ran the query, whether it was successful, and what the body of the query was.

- KQLQueryCompleted (“Completed KQL query”): User runs interactive queries via KQL on data in their Microsoft Sentinel data lake.

For querying these activities, one example would be:

CloudAppEvents
| where ActionType == 'KQLQueryCompleted'
| project Timestamp, AccountDisplayName, Application, RawEventData.Interface, RawEventData.QueryText, RawEventData.TotalRows

Jobs

Jobs pertain to the KQL jobs and Notebook jobs offered by the Sentinel Platform. These logs will detail job activities as well as actions taken against jobs. This will be useful for monitoring who is creating or modifying jobs, as well as verifying that jobs are properly running.

- JobRunAdhocCompleted (“Completed a job run adhoc”): Adhoc job execution completed.
- JobRunScheduledCompleted (“Completed a job run scheduled”): Scheduled job execution completed.
- JobCreated (“Created job”): A job is created.
- CustomTableDelete (“Deleted a custom table”): As part of a job run, the custom table was deleted.
- JobDeleted (“Deleted job”): A job is deleted.
- JobDisabled (“Disabled job”): The job is disabled.
- JobEnabled (“Enabled job”): A disabled job is reenabled.
- JobRunAdhoc (“Ran a job adhoc”): Job is triggered manually and the run started.
- JobRunScheduled (“Ran a job on schedule”): Job run is triggered due to schedule.
- TableRead (“Read from table”): As part of the job run, a table is read.
- JobRunStopped (“Stopped a job run”): User manually cancels or stops an ongoing job run.
- JobUpdated (“Updated job”): The job definition and/or configuration and schedule details of the job are updated.
- CustomTableWrite (“Writing to a custom table”): As part of the job run, data was written to a custom table.

For querying these activities, one example would be:

CloudAppEvents
| where ActionType == 'JobCreated'
| project Timestamp, AccountDisplayName, Application, ActionType, RawEventData.JobName, RawEventData.JobType, RawEventData.Interface

AI Tools

AI tool logs pertain to events generated by MCP server usage. These are generated any time users operate the MCP server and leverage one of the tools available today to run prompts and sessions.

- SentinelAIToolRunCompleted (“Completed AI tool run”): Sentinel AI tool run completed.
- SentinelAIToolCreated (“Created AI tool”): User creates a Sentinel AI tool.
- SentinelAIToolRunStarted (“Started AI tool run”): Sentinel AI tool run started.

For querying these activities, the query would be:

CloudAppEvents
| where ActionType == 'SentinelAIToolRunStarted'
| project Timestamp, AccountDisplayName, ActionType, Application, RawEventData.Interface, RawEventData.ToolName

Notebooks

Notebook activities pertain to actions performed by users via Notebooks. This can include querying data via a Notebook, writing to a table via a Notebook, or launching new Notebook sessions.

- CustomTableDelete (“Deleted a custom table”): User deleted a table as part of their notebook execution.
- TableRead (“Read from table”): User read a table as part of their notebook execution.
- SessionStarted (“Started session”): User started a notebook session.
- SessionStopped (“Stopped session”): User stopped a notebook session.
- CustomTableWrite (“Wrote to a custom table”): User wrote to a table as part of their notebook execution.

For querying these activities, one example would be:

CloudAppEvents
| where ActionType == 'TableRead'

Graph Usage

Graph activities pertain to users modifying or running a graph-based scenario within the environment. This can include creating a new graph scenario, deleting one, or running a scenario.

- GraphScenarioCreated (“Created a graph scenario”): User created a graph instance for a pre-defined graph scenario.
- GraphScenarioDeleted (“Deleted a graph scenario”): User deleted or disabled a graph instance for a pre-defined graph scenario.
- GraphQueryRun (“Ran a graph query”): User ran a graph query.

For querying these activities, one example would be:

CloudAppEvents
| where ActionType == 'GraphQueryRun'
| project AccountDisplayName, IsExternalUser, IsImpersonated, RawEventData['GraphName'], RawEventData['CreationTime']

Monitoring without Access to the CloudAppEvents Table

If accessing the CloudAppEvents table is not possible in the environment, both Defender and Purview allow for manually searching for activities within the environment.

For Purview (https://purview.microsoft.com), the audit page can be found by going to the Audit blade within the Purview portal. For Defender, the audit blade can be found under Permissions > Audit.

To run a search that will match Sentinel Platform related activities, the easiest method is using the “Activities – friendly names” field to filter for Sentinel Platform.

Custom Ingestion of Audit Logs

If looking to ingest the data into a table, a custom connector can be used to fetch the information. The Purview audit logs use the Office Management API when calling events programmatically. This leverages registered applications with proper permissions to poll the API and forward the data into a data collection rule.
As the Office Management API does not support filtering entirely within the content URI, making a custom connector for this source is a bit trickier. For a custom connector to work, it will need to:

- Call the API
- Review each content URL
- Filter for events that are related to Sentinel Platform

This leaves two options for accomplishing a custom connector route:

- A code-based connector that is hosted within an Azure Function
- A codeless connector paired with a filtering data collection rule

This blog will focus on the codeless connector as an example. A codeless connector can be made from scratch or by referencing an existing connector within the Microsoft Sentinel GitHub repository. For the connector, an example API call would appear as such:

https://manage.office.com/api/v1.0/{tenant-id}/activity/feed/subscriptions/content?contentType=Audit.General&startTime={startTime}&endTime={endTime}

When using a registered application, it will need the ActivityFeed.Read permission on the Office Management API to be able to call the API and view the information returned. The catch with the Management API is that it returns a content URL in the API response, thus requiring one more step. Luckily, codeless connectors support nested actions and JSON. An example of a connector that does this today is the Salesforce connector.

When looking to filter the events down to specifically the Sentinel Platform audit logs, the operations listed above can be used in the body of a data collection rule. For example:

```json
"streams": [ "Custom-PurviewAudit" ],
"destinations": [ "logAnalyticsWorkspace" ],
"transformKql": "source | where ActionType has_any ('GraphQueryRun', 'TableRead', …)",
"outputStream": "Custom-SentinelPlatformAuditLogs"
```

Note that putting all of the audit logs into one table may lead to a schema mismatch depending on how they are being parsed. If concerned about this, consider placing each event type into a different table, such as SentinelLakeQueries, KQLJobActions, etc.
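For the code-based (Azure Function) route, the connector's core job is the filtering step: each content URI returned by the subscriptions/content endpoint resolves to a JSON array of audit records, and only those whose `Operation` matches a Sentinel Platform operation should be forwarded. The sketch below shows that filtering logic only; the helper name and the sample records are illustrative, not part of any official SDK, and the operation list is a subset taken from the tables above.

```python
# Illustrative filtering step for a code-based connector. A subset of the
# Sentinel Platform operation names documented in the tables above:
SENTINEL_PLATFORM_OPS = {
    "SentinelLakeSetup", "SentinelLakeDefaultDataOnboarded",
    "SentinelLakeSubscriptionChanged", "KQLQueryCompleted",
    "JobCreated", "JobRunAdhoc", "JobRunScheduled", "TableRead",
    "CustomTableWrite", "GraphQueryRun",
}

def filter_sentinel_platform_events(records):
    """Keep only audit records whose Operation is a Sentinel Platform op."""
    return [r for r in records if r.get("Operation") in SENTINEL_PLATFORM_OPS]

# Each content URI from .../activity/feed/subscriptions/content resolves to a
# JSON array of audit records shaped roughly like this:
sample = [
    {"Operation": "KQLQueryCompleted", "UserId": "analyst@contoso.com"},
    {"Operation": "FileAccessed", "UserId": "user@contoso.com"},  # unrelated Audit.General event
]
print(filter_sentinel_platform_events(sample))
```

In an Azure Function, this filter would run after fetching each content blob and before posting the surviving records to the data collection rule's ingestion endpoint.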
This can all be defined within the data collection rule, though the custom tables for each action will need to exist before they are referenced within the data collection rule.

Closing

Now that audit logs are flowing, actions taken by users within the environment can be used for detections, hunting, Workbooks, and automation. Since the logs are being ingested via a data collection rule, they can also be sent to Microsoft Sentinel data lake if desired. May this blog lead to some creativity and stronger monitoring of the Sentinel Platform!

Call to Action: Migrate Your Classic Alert‑trigger Automations to Automation Rules Before March 2026
Reminder: Following the Retirement Announcement published in March 2023, classic alert‑trigger automation in Microsoft Sentinel, where playbooks are triggered directly from analytic rules, will be deprecated in March 2026. To ensure your alert‑driven automations continue to run without interruption, please migrate to automation rules.

How to Identify the Impacted Rules

To help you quickly identify whether any analytic rules should be updated, we created a Logic App solution that identifies all the analytic rules which use classic automation. The solution is available in two deployment options:

- Subscription-level version – scans all workspaces in the subscription
- Workspace-level version – scans a single workspace (useful when subscription Owner permissions are not available)

Both options produce a list of analytic rules which need to be updated, as described in the next section. The solution is published as part of Sentinel's GitHub community solutions with deployment instructions and further details: sentinel-classic-automation-detection

After deploying and running the tool, open the Logic App run history. The final action contains the list of the analytic rules which require migration.
How to Migrate the Identified Analytic Rules from Classic Alert‑trigger Automations

Once you have identified the analytic rules using classic automation, follow the official Microsoft Learn guidance to migrate them to automation rules: migrate-playbooks-to-automation-rules

In short:

1. Identify classic automation in your analytic rules
2. Create automation rules to replace the classic automation
3. To complete the migration, remove the classic alert automation actions identified in step 1 by clicking the three dots and selecting 'Remove'

Summary

- Classic automation will retire in March 2026
- Use the detection tool to identify any remaining classic alert-trigger playbooks
- Follow the official Microsoft documentation to complete the required changes

Act now to ensure your SOC automations continue to run smoothly.

Efficiently process high volume logs and optimize costs with Microsoft Sentinel data lake
As organizations scale their security monitoring, a key challenge is maintaining visibility while controlling costs. High‑volume logs—such as firewall, proxy, and endpoint data—are essential for achieving full threat visibility and must be ingested to understand the complete threat environment.

With Microsoft Sentinel data lake, you can ingest high‑volume logs directly into the data lake tier—significantly reducing storage costs while maintaining full visibility. After ingestion, you can extract, enrich, summarize, or normalize events to highlight what matters most for security. Only the enriched, high-value events are then promoted to the Analytics tier for correlation, detection, and investigation.

This approach offers several advantages:

- Higher ROI: Organizations get more value from their security data by storing infrequently queried or low‑detection‑value logs in the lower‑cost data lake tier—boosting ROI while preserving complete visibility. With raw log storage dramatically cheaper in the data lake tier, teams can retain more data, uncover deeper insights, and optimize spend without compromising security.
- Performance optimization: Send only high‑value security data to the Analytics tier to keep performance fast and efficient.
- Targeted insights: Post-ingestion processing allows analysts to classify and enrich logs, removing noise and enhancing relevance.
- Flexible retention: The data lake enables long-term retention with a 6x compression rate, enabling historical analysis and deeper insights.

In this post, we will demonstrate how to leverage the Sentinel data lake by ingesting data into the CommonSecurityLog table. We will classify source and destination IPs as public or private, enrich the logs with IP classification, and then group and summarize repeated SourceIP-DestinationIP-RequestURL pairs to highlight outbound communication patterns. We will highlight how post-ingestion processing can reduce costs while improving the analytical value of your data.
You can achieve this via KQL jobs in Microsoft Sentinel data lake.

What are KQL jobs?

KQL jobs in Sentinel data lake are automated one-time or scheduled jobs that run Kusto Query Language (KQL) queries directly on the data lake. These jobs help security teams investigate and hunt for threats more easily by automating processes like checking logs against known threats, and enriching and grouping network logs before they are promoted to the Analytics tier. This automation reduces storage costs while producing high-value, investigation-ready datasets.

Scenario

Firewall and network logs typically contain a wide range of fields, such as SourceIP, DestinationIP, RequestURL, and DeviceAction. While these logs provide extensive information, much of it may not be required for analytics detections, nor does every log entry need to be available in the Analytics tier. Additionally, these logs often lack contextual details, such as whether traffic is internal or external, and do not inherently highlight repeated communication patterns or spikes that may warrant investigation.

For high-volume logs, organizations can leverage post-ingestion enrichment and summarization within the data lake tier. This process transforms raw logs into structured, context-rich datasets, complete with IP classification and grouped connection patterns, making them ready for targeted investigation or selective promotion to the Analytics tier. This approach ensures that only the most relevant and actionable data is surfaced for analytics, optimizing both cost and operational efficiency.

Step 1 - Filter and enrich logs

Test and refine your query using the lake explorer before automating.
The following KQL query filters out empty IPs, classifies each source and destination IP as Public, Private, or Unknown, and groups repeated SourceIP-DestinationIP-RequestURL combinations to reveal meaningful traffic patterns:

```kusto
CommonSecurityLog
| where isnotempty(SourceIP) or isnotempty(DestinationIP)
| extend
    SrcIPType = iff(ipv4_is_private(SourceIP), "Private", iff(isempty(SourceIP), "Unknown", "Public")),
    DstIPType = iff(ipv4_is_private(DestinationIP), "Private", iff(isempty(DestinationIP), "Unknown", "Public"))
| summarize Count = count() by SourceIP, SrcIPType, DestinationIP, DstIPType, RequestURL
```

This classification adds context to high-volume network logs, making it easier to identify traffic between internal and external networks, as well as significantly reducing the volume of data surfaced into the Analytics tier.

Step 2 - Automate post-ingestion processing using KQL jobs

Once the query logic is validated in the lake explorer, you can automate it to run continuously as a KQL job:

- Scheduled KQL jobs can process new logs and store the results in a custom table in the Analytics tier.
- Using custom detection rules on the promoted data, you can set up alerts for any anomalies.
- Using Microsoft Sentinel Workbooks, you can visualize the enriched data for monitoring and analysis.

Automating this workflow ensures that every log batch arriving in the data lake is enriched and included in the summarized log in the Analytics tier, while maintaining a balance between cost and visibility.

How to schedule this process using a KQL job

To run the post-ingestion processing query and retain results periodically, we will schedule this job to run every hour to summarize network logs and store the results in a custom table in the Analytics tier. To avoid missing any logs, we recommend adding a delay of 15 minutes in the query to make sure all logs are available in the lake and included in the job runs.
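The classification rule used in the query (empty value is Unknown, RFC 1918 address is Private, everything else is Public) can be unit-tested outside the lake. A minimal Python equivalent, useful for validating the logic against sample addresses before scheduling the job; the function name is illustrative:

```python
import ipaddress

# The RFC 1918 ranges that KQL's ipv4_is_private() recognizes.
RFC1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def classify_ip(ip: str) -> str:
    """Mirror of the KQL expression: empty -> Unknown, RFC 1918 -> Private, else Public."""
    if not ip:
        return "Unknown"
    addr = ipaddress.ip_address(ip)
    return "Private" if any(addr in net for net in RFC1918) else "Public"

print(classify_ip("10.0.0.5"))   # Private
print(classify_ip("8.8.8.8"))    # Public
print(classify_ip(""))           # Unknown
```

Note the explicit RFC 1918 check rather than Python's broader `is_private` property, which would also flag loopback and link-local addresses that `ipv4_is_private()` does not treat as private.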
```kusto
let dt_lookBack = 1h;   // Look back window duration
let delay = 15m;        // Delay to allow late-arriving data
let endTime = now() - delay;
let startTime = endTime - dt_lookBack;
CommonSecurityLog
| where TimeGenerated >= startTime and TimeGenerated < endTime
| where isnotempty(SourceIP) or isnotempty(DestinationIP)
| extend
    SrcIPType = iff(ipv4_is_private(SourceIP), "Private", iff(isempty(SourceIP), "Unknown", "Public")),
    DstIPType = iff(ipv4_is_private(DestinationIP), "Private", iff(isempty(DestinationIP), "Unknown", "Public"))
| summarize Count = count() by SourceIP, SrcIPType, DestinationIP, DstIPType, RequestURL
| order by Count desc
```

KQL jobs can run ad hoc or on a schedule (by minutes, hourly, daily, weekly, or monthly). To obtain the enriched log continuously in the Analytics tier, we will schedule this hourly. Results are automatically available in the Analytics tier and can be used to set up a new custom detection rule.

Cost of this KQL job in Sentinel data lake

The cost of running KQL jobs in Sentinel data lake depends on the volume of data scanned and how frequently the jobs run. Data lake KQL queries and jobs are priced per gigabyte of analyzed data.

Estimated cost analysis details of the data lake approach:

| Item | Calculation / Notes | Cost ($) |
| --- | --- | --- |
| Data lake ingestion and processing (1 TB/month) | 1,024 GB × ($0.05 + $0.10) per GB | $153.60 |
| Hourly KQL jobs with a 1-hour lookback (1.4 GB/hour) | 1.4 GB × $0.005 × 24 × 30 | $5.04 |
| Enriched data sent to Analytics (10% of 1 TB) | 102.4 GB × $4.30 per GB | $440.32 |
| Total data lake approach | $153.60 + $440.32 + $5.04 | $598.96 |

Note: This sample pricing model applies to the East US region. This pricing model allows organizations to perform large-scale threat hunting and intelligence matching without the high expenses typically associated with traditional SIEMs.
Summary of monthly costs

| Approach | Estimated Monthly Cost ($) |
| --- | --- |
| Raw data ingestion to Analytics tier (1 TB) | $4,386 |
| Raw data ingestion to data lake tier + enriched summary to Analytics | $598.96 |
| Savings | $3,787.04 |

For more details around Microsoft Sentinel data lake costs for KQL queries and jobs, see https://azure.microsoft.com/en-us/pricing/calculator.

Summary and next steps

High-volume logs are critical for security visibility, but ingesting all raw data directly into the Analytics tier can be expensive and unwieldy. By storing logs in Sentinel data lake and performing post-ingestion enrichment, organizations can classify and contextualize events, reduce costs, and maintain a lean, high-performing analytics environment. This approach demonstrates that smarter processing is the key to maximizing both efficiency and security insight in Microsoft Sentinel.

Get started with Microsoft Sentinel data lake today:

- Microsoft Sentinel data lake overview - Microsoft Security | Microsoft Learn
- KQL and the Microsoft Sentinel data lake - Microsoft Security | Microsoft Learn
- Microsoft Sentinel Pricing | Microsoft Security

Build less, secure more: Simplify security data management with Microsoft Sentinel data lake
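The arithmetic behind the cost tables above can be checked with a short script. The per-GB rates are the East US sample figures quoted in the post, and the $4,386 Analytics-only baseline is taken as given from the summary table:

```python
# Reproduces the cost arithmetic from the tables above (East US sample rates).
GB_PER_TB = 1024

ingest_and_process = GB_PER_TB * (0.05 + 0.10)    # data lake ingestion + processing, 1 TB/month
kql_jobs = 1.4 * 0.005 * 24 * 30                  # hourly jobs scanning 1.4 GB each, 30 days
promoted_to_analytics = 0.10 * GB_PER_TB * 4.30   # 10% enriched data at the Analytics rate

data_lake_total = ingest_and_process + kql_jobs + promoted_to_analytics
analytics_only_baseline = 4386.00                 # quoted Analytics-only monthly cost for 1 TB
savings = analytics_only_baseline - data_lake_total

print(f"Data lake approach: ${data_lake_total:,.2f}")   # $598.96
print(f"Savings:            ${savings:,.2f}")           # $3,787.04
```

Changing `GB_PER_TB`, the promotion percentage, or the hourly scan volume lets you re-estimate the trade-off for your own ingestion profile.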
Most DIY security data lakes start with good intentions—promising flexibility, control, and cost savings. But in reality, they lead to endless data ingestion fixes, schema drift battles, and soaring costs. They fail because they lack the capabilities to manage security data that’s not just massive, but complex, dynamic, and compliance-heavy.

- Ingestion chaos: Every team implements its own pipelines for diverse data sets like firewalls, endpoint, network, threat intelligence feeds, and more.
- Normalization gaps: Each team uses its own schema; therefore queries, detections, and ML models can’t be reused.
- Operational drag: Constant tuning for compression, tiering, and schema evolution.
- Scaling costs: What began as $1K per month becomes $100K per month as data doubles every quarter.
- Fragmented analytics: Detection in one data store, hunting in another, and investigation in a third.

Security data isn’t just telemetry. It must be unified, normalized, cross-correlated, and enriched with context—in near real time. A security data lake needs more than raw storage—it needs intelligence. You need a centralized platform that goes beyond storing data to deliver actionable insights. Microsoft Sentinel data lake does exactly that. It is more than a storage solution; it’s the foundation for modern, AI-powered security operations. Whether you're scaling your SOC, building deeper analytics, or preparing for future threats, the Sentinel data lake is ready to support your journey.

Empowering security teams with a unified security data lake

With Sentinel data lake, there’s no need to build your own security data lake. This fully managed, cloud-native solution—purpose-built for security—redefines how teams manage, analyze, and act on all their security data, cost-effectively. Since its launch, organizations across industries have embraced Sentinel data lake for its transformative impact on security operations.
Customers highlight its ability to unify data from diverse sources, enabling faster threat detection and deeper investigations. Cost efficiency is a standout benefit, thanks to tiered storage and flexible retention options that help reduce expenses. With petabytes of data already ingested, users are gaining real-time and historical insights at scale—empowering security teams like never before.

Sentinel data lake advantages

Sentinel data lake allows multiple analytics engines like Kusto, Spark, and ML to run on a single copy of data, simplifying management, reducing costs, and supporting deeper security analysis.

- Security-aware data model: Out-of-the-box normalization aligned to the Microsoft Security graph schema, Kusto Query Language (KQL), and Structured Query Language (SQL).
- Native integration: Directly connects to Sentinel graph, Security Copilot, and Model Context Protocol (MCP) tools with no ETL or duplication.
- Query at any scale: Store petabytes, query in seconds using both KQL and SQL endpoints.
- Governance by design: Inherits Fabric’s unified data governance, lineage, and RBAC, already secured for compliance.
- AI-ready: Enables natural-language hunting and threat reasoning through Security Copilot agents over the same data.

In short, Sentinel data lake is a data fabric built for security.

Why does this matter? It matters because the advantages of a data lake for security are only realized when all the security data is:

- Unified: No more fragmented analytics across multiple silos.
- Normalized: Consistent schema for queries, detections, and ML models.
- Scalable: Elastic compute and storage without manual tuning.
- Integrated: Works seamlessly with SIEM, SOAR, and AI tools.
- Governed: Compliance and RBAC baked in from day one.
- AI-Enhanced: Enables graph-based reasoning and MCP tool integration for advanced threat detection.

Sentinel delivers all of this and much more without the operational burden.
Get started with Microsoft Sentinel data lake today

Connect Your Data Sources

Start by plugging into your existing security telemetry. Sentinel provides built-in connectors for Microsoft 365, Defender, Entra, and many third-party tools. Instead of writing custom ingestion scripts or managing brittle pipelines, data flows automatically into your Sentinel data lake workspace. This means your SOC can begin analyzing logs within minutes, not weeks. No schema headaches and no manual ETL. Just secure, governed ingestion at scale.

Query Natively Using KQL or SQL

Once your data is in the lake, you can query it natively using Kusto Query Language (KQL) or SQL. This dual-query capability is a game changer because analysts can pivot from detection to investigation to hunting without exporting or rehydrating data. Imagine running a KQL query to find suspicious sign-ins, then switching to SQL for a compliance report on the same dataset. No duplication and no latency. It is analytics without boundaries.

Build Cross-System Analytics

Your SOC does not operate in isolation. Many organizations run multiple SIEMs such as Splunk, QRadar, or Chronicle alongside Sentinel. With Sentinel data lake, you can centralize normalized data and expose it through open APIs and Delta Parquet. This allows you to build cross-system analytics without painful data wrangling. Want to correlate a Splunk alert with Defender telemetry? Or enrich Chronicle detections with Microsoft threat intelligence? Sentinel data lake makes it possible in a secure and scalable way.

Enable AI and Graph Intelligence

Security is not just about raw data, it is about context. Sentinel data lake integrates with Sentinel graph, enabling relational reasoning across entities like users, devices, IPs, and alerts. Add Security Copilot and MCP Tools, and you unlock AI-driven hunting and threat reasoning.

Scale Confidently

Finally, scale without fear.
Traditional DIY lakes demand constant tuning for partitioning, compression, and schema evolution. Sentinel data lake, built on Microsoft Fabric, handles elasticity for you. Whether you are ingesting gigabytes or petabytes, compute and storage scale automatically. No more late-night calls to fix performance bottlenecks. No more surprise bills from uncontrolled growth. You get predictable performance and cost efficiency so your team can focus on threats, not plumbing.

A new chapter for security teams

For years, security engineers have poured countless hours into building and maintaining custom data lakes. They have stitched together pipelines, fought schema drift, and tuned performance just to keep the lights on. Every new data source or compliance requirement meant starting the cycle all over again. This is exhausting, and it distracts teams from what truly matters: stopping threats.

Sentinel data lake changes that story. Instead of spending nights fixing ingestion scripts or weekends scaling storage, you can focus on AI-powered detection, investigation, and response. The heavy lifting is already done for you. Your data is unified, queryable, and ready for AI-driven insights the moment it lands.

This is not just another tool. It is a foundation for modern security operations. A data lake that speaks the language of security, integrates with multi-cloud, multiplatform ecosystems, and scales as fast as your data grows. No more plumbing. No more patchwork. Just a clear path to faster threat detection and smarter defense.

The old way was about building. The new way is about protecting. Microsoft Sentinel data lake was built so you can do exactly that.
Get started today:

- Microsoft Sentinel—AI-Ready Platform | Microsoft Security
- Sentinel data lake onboarding
- Microsoft Sentinel data lake is now generally available | Microsoft Community Hub
- Microsoft Sentinel data lake FAQ | Microsoft Community Hub
- Plan costs and understand pricing and billing - Microsoft Sentinel | Microsoft Learn
- Microsoft Sentinel data lake ninja training

Behind this post are the brilliant minds of vkokkengada and chaitra_satish, whose ideas inspired this content. Proud to share their expertise with a wider audience.

What’s New in Microsoft Sentinel: November 2025
Welcome to our new Microsoft Sentinel blog series!

We’re excited to launch a new blog series focused on Microsoft Sentinel. From the latest product innovations and feature updates to industry recognition, success stories, and major events, you’ll find it all here. This first post kicks off the series by celebrating Microsoft’s recognition as a Leader in the 2025 Gartner Magic Quadrant for SIEM¹. It also introduces the latest innovations designed to deliver measurable impact and empower defenders with adaptable, collaborative tools in an evolving threat landscape.

Microsoft is recognized as a Leader in the 2025 Gartner Magic Quadrant for Security Information and Event Management (SIEM)

Microsoft Sentinel continues to drive security innovation—and the industry is taking notice. Microsoft was named a Leader in the 2025 Gartner Magic Quadrant for Security Information and Event Management (SIEM)¹, published on October 8, 2025. We believe this acknowledgment reinforces our commitment to helping organizations stay secure in a rapidly changing threat landscape. Read the blog for more information.

Take advantage of the M365 E5 benefit and Microsoft Sentinel promotional pricing

Microsoft 365 E5 benefit

Customers with Microsoft 365 E5, A5, F5, or G5 licenses automatically receive up to 5 MB of free data ingestion per user per day, covering key security data sources like Azure AD sign-in logs and Microsoft Cloud App Security discovery logs—no enrollment required. Read more about M365 benefits for Microsoft Sentinel.

New 50 GB promotional pricing

To make Microsoft Sentinel more accessible to small and mid-sized organizations, we introduced a new 50 GB commitment tier in public preview, with promotional pricing starting October 1, 2025, through March 31, 2026. Customers who choose the 50 GB commitment tier during this period will maintain their promotional rate until March 31, 2027.
Available globally with regional pricing variations, it is accessible through EA, CSP, and Direct channels. For more information, see the Microsoft Sentinel pricing page.

Partner Integrations: Strengthening TI collaboration and workflow automation

Microsoft Sentinel continues to expand its ecosystem with powerful partner integrations that enhance security operations. With Cyware, customers can now share threat intelligence bi-directionally across trusted destinations, ISACs, and multi-tenant environments—enabling real-time intelligence exchange that strengthens defenses and accelerates coordinated response. Learn more about the Cyware integration.

Meanwhile, the BlinkOps integration combined with Sentinel’s SOAR capabilities empowers SOC teams to automate repetitive tasks, orchestrate complex playbooks, and streamline workflows end-to-end. This automation reduces operational overhead, cuts Mean Time to Respond (MTTR), and frees analysts for strategic threat hunting. Learn more about the BlinkOps integration.

Harnessing Microsoft Sentinel Innovations

Security is being reengineered for the AI era, moving beyond static, rule-based controls and reactive post-breach response toward platform-led, machine-speed defense. To overcome fragmented tools, sprawling signals, and legacy architectures that cannot keep pace with modern attacks, Microsoft Sentinel has evolved into both a SIEM and a unified security platform for agentic defense. These updates introduce architectural enhancements and advanced capabilities that enable AI-driven security operations at scale, helping organizations detect, investigate, and respond with unprecedented speed and precision.

Microsoft Sentinel graph – Public Preview

Unified graph analytics for deeper context and threat reasoning.
Microsoft Sentinel graph delivers an interactive, visual map of entity relationships, helping analysts uncover hidden attack paths, lateral movement, and root causes for pre- and post-breach investigations. Read the tech community blog for more details.

Microsoft Sentinel Model Context Protocol (MCP) server – Public Preview

Context is key to effective security automation. Microsoft Sentinel MCP server introduces a standardized protocol for building context-aware solutions, enabling developers to create smarter integrations and workflows within Sentinel. This opens the door to richer automation scenarios and more adaptive security operations. Read the tech community blog for more details.

Enhanced UEBA with New Data Sources – Public Preview

We are excited to announce support for six new sources in our user entity and behavior analytics algorithm, including AWS, GCP, Okta, and Azure. Now, customers can gain deeper, cross-platform visibility into anomalous behavior for earlier and more confident detection. Read our blog and check out our Ninja Training to learn more.

Developer Solutions for Microsoft Sentinel platform – Public Preview

Expanded APIs, solution templates, and integration capabilities empower developers to build and distribute custom workflows and apps via the Microsoft Security Store. This unlocks faster innovation, streamlined operations, and new revenue opportunities, extending Sentinel beyond out-of-the-box functionality for greater agility and resilience. Read the tech community blog for more details.

Growing ecosystem of Microsoft Sentinel data connectors

We are excited to announce the general availability of four new data connectors: AWS Server Access Logs, Google Kubernetes Engine, Palo Alto CSPM, and Palo Alto Cortex Xpanse. Visit the find your Microsoft Sentinel data connector page for the list of data connectors currently supported.
We are also inviting private previews for four additional connectors: AWS EKS, Qualys VM KB, Alibaba Cloud Network, and Holm Security, as part of our commitment to expanding the breadth and depth of supported data sources. Our customer support team can help you sign up for previews.

New agentless data connector for the Microsoft Sentinel Solution for SAP applications

We’re excited to announce the general availability of a new agentless connector for the Microsoft Sentinel solution for SAP applications, designed to simplify integration and enhance security visibility. This connector enables seamless ingestion of SAP logs and telemetry directly into Microsoft Sentinel, helping SOC teams monitor critical business processes, detect anomalies, and respond to threats faster—all while reducing operational overhead.

Events, Webinars and Training

Stay connected with the latest security innovation and best practices. From global conferences to expert-led sessions, these events offer opportunities to learn, network, and explore how Microsoft is shaping AI-driven, end-to-end security for the modern enterprise.

Microsoft Ignite 2025

Security takes center stage at Microsoft Ignite, with dedicated sessions and hands-on experiences for security professionals and leaders. Join us in San Francisco, November 17–21, 2025, or online, to explore our AI-first, end-to-end security platform designed to protect identities, devices, data, applications, clouds, infrastructure, and, critically, AI systems and agents. Register today!

Microsoft Security Webinars

Stay ahead of emerging threats and best practices with expert-led webinars from the Microsoft Security Community. Discover upcoming sessions on Microsoft Sentinel SIEM & platform, Defender, Intune, and more. Sign up today and be part of the conversation that shapes security for everyone. Learn more about upcoming webinars.
Onboard Microsoft Sentinel in Defender – Video Series

Microsoft leads the industry in both SIEM and XDR, delivering a unified experience that brings these capabilities together seamlessly in the Microsoft Defender portal. This integration empowers security teams to correlate insights, streamline workflows, and strengthen defenses across the entire threat landscape. Ready to get started? Explore our video series to learn how to onboard your Microsoft Sentinel experience and unlock the full potential of integrated security. Watch the Microsoft Sentinel is now in Defender video series.

MDTI Convergence into Microsoft Sentinel & Defender XDR overview

Discover how Microsoft Defender Threat Intelligence Premium is transforming cybersecurity by integrating into Defender XDR, Sentinel, and the Defender portal. Watch this session to learn about new features, expanded access to threat intelligence, and how these updates strengthen your security posture.

Partner Sentinel Bootcamp

Transform your security team from Sentinel beginners to advanced practitioners. This comprehensive 2-day bootcamp helps participants master architecture design, data ingestion strategies, multi-tenant management, and advanced analytics while learning to leverage Microsoft's AI-first security platform for real-world threat detection and response. Register here for the bootcamp.

Looking to dive deeper into Microsoft Sentinel development? Check out the official resource at https://aka.ms/AppAssure_SentinelDeveloper. It’s the central reference for developers and security teams who want to build custom integrations, automate workflows, and extend Sentinel’s capabilities. Bookmark this link as your starting point for hands-on guidance and tools.

Stay Connected

Check back each month for the latest innovations, updates, and events to ensure you’re getting the most out of Microsoft Sentinel.
1 Gartner® Magic Quadrant™ for Security Information and Event Management, Andrew Davies, Eric Ahlm, Angel Berrios, Darren Livingstone, 8 October 2025

Announcing Public Preview: New STIX Objects in Microsoft Sentinel
Security teams often struggle to understand the full context of an attack. In many cases, they rely solely on indicators of compromise (IoCs) without the broader insight that threat intelligence on threat actors, attack patterns, identities, and the relationships between them provides. This lack of context limits their ability to connect the dots, prioritize threats effectively, and respond comprehensively to evolving attacks.

To help customers build a thorough, real-time understanding of threats, we are excited to announce the public preview of new threat intelligence (TI) object support in Microsoft Sentinel and the unified SOC platform. In addition to indicators of compromise (IoCs), Microsoft Sentinel now supports Threat Actors, Attack Patterns, Identities, and Relationships. This enhancement empowers organizations to take their threat intelligence management to the next level. In this blog, we'll highlight key scenarios in which your team can use STIX objects, along with demos showing how to create objects and relationships and how to use them to hunt threats across your organization.

Key Scenarios

STIX objects are a critical tool for incident responders trying to understand an attack and for threat intelligence analysts seeking more information on critical threats. STIX is designed to improve interoperability and sharing of threat intelligence across different systems and organizations. Below, we've highlighted four ways unified SOC platform customers can begin using STIX objects to protect their organization.

Ingesting Objects: You can now ingest these objects from various commercial feeds through several methods, including STIX TAXII servers, API, files, or manual input.
Curating Threat Intelligence: Curate and manage any of the supported threat intelligence objects.
Creating Relationships: Establish connections between objects to enhance threat detection and response.
For example:
Connecting a threat actor to an attack pattern: The threat actor "APT29" uses the attack pattern "Phishing via Email" to gain initial access.
Linking an indicator to a threat actor: An indicator (a malicious domain) is attributed to the threat actor "APT29".
Associating an identity (victim) with an attack pattern: The organization "Example Corp" is targeted by the attack pattern "Phishing via Email".

Hunt and Investigate Threats More Effectively: Match curated TI data against your logs in the unified SOC platform powered by Microsoft Sentinel. Use these insights to detect, investigate, and hunt threats more efficiently, keeping your organization secure.

Get Started Today with the New Hunting Model

The ability to ingest and manage these new threat intelligence objects is now available in public preview. To enable this data in your workspaces for hunting and detection, submit your request here and we will provide further details.

Demos and Screenshots

Demo 1: Hunt and detect threats using STIX objects

Scenario: Linking an IoC to a threat actor. An indicator (a malicious domain) is attributed to the threat actor "Sangria Tempest" via the new TI relationship builder. Please note that the Sangria Tempest actor object and the IoC are already present in this demo. These objects can be added automatically or created manually.

To create a new relationship, sign into your Sentinel instance and go to Add new > TI relationship. In the New TI relationship builder, you can select an existing TI object and define how it is related to one or more other TI objects. After defining a TI object's relationship, click "Common" to provide metadata for this relationship, such as description, tags, and confidence score.

Another type of metadata you can add to a relationship is the Traffic Light Protocol (TLP). The TLP is a set of designations used to ensure that sensitive information is shared with the appropriate audience.
It uses four colors to indicate different levels of sensitivity and the corresponding sharing permissions:

TLP:RED: Information is highly sensitive and should not be shared outside of the specific group or meeting where it was originally disclosed.
TLP:AMBER: Information can be shared with members of the organization, but not publicly. It is intended to be used within the organization to protect sensitive information.
TLP:GREEN: Information can be shared with peers and partner organizations within the community, but not publicly. It is intended for a wider audience within the community.
TLP:WHITE: Information can be shared freely and publicly without any restrictions.

Once the relationship is created, it can be viewed from the "Relationships" tab.

Now, retrieve information about relationships and indicators associated with the threat actor "Sangria Tempest". Microsoft Sentinel customers using the Azure portal experience can do this in Log Analytics; customers who have migrated to the unified SecOps platform in the Defender portal can find it under "Advanced hunting". The following KQL query returns all TI objects related to "Sangria Tempest". You can use this query for any threat actor name.
let THREAT_ACTOR_NAME = 'Sangria Tempest';
let ThreatIntelObjectsPlus = (ThreatIntelObjects
    | union (ThreatIntelIndicators | extend StixType = 'indicator')
    | extend tlId = tostring(Data.id)
    | extend StixTypes = StixType
    | extend Pattern = case(StixType == "indicator", Data.pattern, StixType == "attack-pattern", Data.name, "Unknown")
    | extend feedSource = base64_decode_tostring(tostring(split(Id, '---')[0]))
    | summarize arg_max(TimeGenerated, *) by Id
    | where IsDeleted == false);
let ThreatActorsWithThatName = (ThreatIntelObjects
    | where StixType == 'threat-actor'
    | where Data.name == THREAT_ACTOR_NAME
    | extend tlId = tostring(Data.id)
    | extend ActorName = tostring(Data.name)
    | summarize arg_max(TimeGenerated, *) by Id
    | where IsDeleted == false);
let AllRelationships = (ThreatIntelObjects
    | where StixType == 'relationship'
    | extend tlSourceRef = tostring(Data.source_ref)
    | extend tlTargetRef = tostring(Data.target_ref)
    | extend tlId = tostring(Data.id)
    | summarize arg_max(TimeGenerated, *) by Id
    | where IsDeleted == false);
let SourceRelationships = (ThreatActorsWithThatName
    | join AllRelationships on $left.tlId == $right.tlSourceRef
    | join ThreatIntelObjectsPlus on $left.tlTargetRef == $right.tlId);
let TargetRelationships = (ThreatActorsWithThatName
    | join AllRelationships on $left.tlId == $right.tlTargetRef
    | join ThreatIntelObjectsPlus on $left.tlSourceRef == $right.tlId);
SourceRelationships
| union TargetRelationships
| project ActorName, StixTypes, ObservableValue, Pattern, Tags, feedSource

You now have all the information your organization has available about Sangria Tempest, correlated to maximize your understanding of the threat actor and its associations to threat infrastructure and activity.
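Under the hood, the actor, indicator, and relationship queried above are STIX 2.1 objects. The following Python sketch shows the JSON shapes involved; the actor name comes from the demo, while the domain value and identifiers are illustrative, not real intelligence.

```python
import json
import uuid
from datetime import datetime, timezone

def stix_id(obj_type):
    """Build a STIX 2.1 identifier of the form '<type>--<uuid4>'."""
    return f"{obj_type}--{uuid.uuid4()}"

now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%fZ")

# Threat actor object (name taken from the demo above).
actor = {
    "type": "threat-actor",
    "spec_version": "2.1",
    "id": stix_id("threat-actor"),
    "created": now,
    "modified": now,
    "name": "Sangria Tempest",
}

# Indicator object; the domain in the pattern is a placeholder.
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": stix_id("indicator"),
    "created": now,
    "modified": now,
    "pattern": "[domain-name:value = 'contoso-malicious.example']",
    "pattern_type": "stix",
    "valid_from": now,
}

# Relationship object linking the indicator to the actor
# ('indicates' is the standard STIX relationship type for this pair).
relationship = {
    "type": "relationship",
    "spec_version": "2.1",
    "id": stix_id("relationship"),
    "created": now,
    "modified": now,
    "relationship_type": "indicates",
    "source_ref": indicator["id"],
    "target_ref": actor["id"],
}

bundle = {"type": "bundle", "id": stix_id("bundle"),
          "objects": [actor, indicator, relationship]}
print(json.dumps(bundle, indent=2))
```

The `source_ref`/`target_ref` pair is exactly what the KQL above unpacks from `Data.source_ref` and `Data.target_ref` when walking relationships in both directions.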
Demo 2: Curate and attribute objects

We have created a new UX to streamline TI object creation, including the ability to attribute objects to other objects: while creating a new IoC, you can also attribute that indicator to a threat actor, all from one place. To create a new TI object and attribute it to one or multiple threat actors, follow the steps below:

Go to Add new > TI object.
In the context menu, select any object type.
Enter all the required information in the fields on the right-hand side for your selected indicator type.

While creating a new TI object, you can also curate it, which includes defining its relationships. You can also quickly duplicate TI objects, making it easier for those who create multiple TI objects daily. Please note that we also introduced an "Add and duplicate" button that lets customers create multiple TI objects with the same metadata, streamlining a manual bulk process.

Demo 3: New supported IoC types

The attack pattern builder now supports the creation of four new indicator types. These enable customers to build more specific attack patterns that boost understanding of, and organizational knowledge around, threats. The new indicators include:

X509 certificate
X509 certificates are used to authenticate the identity of devices and servers, ensuring secure communication over the internet. They are crucial in preventing man-in-the-middle attacks and verifying the legitimacy of websites and services. For instance, if a certificate is suddenly replaced or a new, unknown certificate appears, it could indicate a compromised server or a malicious actor attempting to intercept communications.

JA3
JA3 fingerprints are unique identifiers generated from the TLS/SSL handshake process.
They help in identifying specific applications and tools used in network traffic, making it easier to detect malicious activities. For example, if a network traffic analysis reveals a JA3 fingerprint matching that of the Cobalt Strike tool, it could indicate an ongoing cyber attack.

JA3S
JA3S fingerprints extend the capabilities of JA3 by also including server-specific characteristics in the fingerprinting process. This provides a more comprehensive view of the network traffic and helps in identifying both client-side and server-side threats. For instance, if a server starts communicating with an unknown external IP address using a specific JA3S fingerprint, it could be a sign of a compromised server or a data exfiltration attempt.

User agent
User agents provide information about the client software making requests to a server, such as the browser or operating system. They are useful in identifying and profiling devices and applications accessing a network. For example, if a user agent string associated with a known malicious browser extension appears in network logs, it could indicate a compromised device.

Conclusion

The ability to ingest, curate, and establish relationships between threat intelligence objects such as Threat Actors, Attack Patterns, and Identities provides a powerful framework for incident responders and threat intelligence analysts. The use of STIX objects not only improves interoperability and sharing of threat intelligence but also empowers organizations to hunt and investigate threats more efficiently. As customers adopt these new capabilities, they will find themselves better equipped to understand the full context of an attack and build robust defenses against future threats.
With the public preview of threat intelligence (TI) object support, organizations are encouraged to explore these new tools and integrate them into their security operations, taking the first step toward a more informed and proactive approach to cybersecurity.

Detect more, spend less: the future of threat intelligence correlation
With more data and intelligence available than ever, it is often a challenge to manage it all while making sure you are maximizing its value for security investigations. We have made this easier for customers leveraging Microsoft's SIEM and XDR: customers can now create custom detections that correlate threat intelligence from feeds brought in through the SIEM with their XDR data, without the need to ingest that XDR data into the SIEM as well.

Why this matters

Traditionally, correlating threat intelligence with endpoint and identity data required ingesting large volumes of XDR data into the SIEM. While effective, this approach often drove up ingestion and retention costs. The new capability eliminates that dependency, allowing security teams to:

Reduce costs – Avoid unnecessary data ingestion charges while still leveraging XDR insights for detection.
Accelerate detection – Query XDR and SIEM data seamlessly in near real time, enabling faster identification of threats.
Maintain flexibility – Use custom detection rules to tailor alerts to your organization's unique threat landscape.

How it works

Threat intelligence integration – Use curated threat indicators from Microsoft or your own threat intelligence platform (TIP) to power detections. Build custom detection rules that query both Sentinel and Defender XDR tables directly. This means you can match threat intelligence indicators, such as malicious IPs, domains, or file hashes (including third-party/non-Microsoft IoCs), against Defender XDR telemetry without duplicating data in Sentinel. These rules can run on a schedule or in near real time, ensuring timely detection of suspicious activity.

The example KQL queries below can be used with custom detections and provide the following capabilities:

Query the new TI tables (ThreatIntelIndicators and ThreatIntelObjects).
Correlate with Defender XDR data without ingesting it into Sentinel, using custom detection rules.
Enrich detections with threat actor context for better triage.
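Conceptually, each of these rules is a join between active indicator values and one telemetry column. A minimal Python sketch of that matching logic follows; the field names and sample values are illustrative, not the actual Sentinel or Defender XDR schema.

```python
from datetime import datetime, timezone
from urllib.parse import urlparse

# Illustrative TI indicators, mirroring IsActive / ValidUntil in the real tables.
ti_indicators = [
    {"value": "evil.example.com", "is_active": True,
     "valid_until": datetime(2030, 1, 1, tzinfo=timezone.utc)},
    {"value": "old.example.net", "is_active": False, "valid_until": None},
]

# Illustrative device network events (the telemetry side of the join).
device_events = [
    {"remote_url": "https://evil.example.com/payload", "device": "WKS-01"},
    {"remote_url": "https://benign.example.org/", "device": "WKS-02"},
]

def active(ind, now):
    """Keep only indicators that are active and not expired."""
    return ind["is_active"] and (
        ind["valid_until"] is None or ind["valid_until"] > now)

now = datetime.now(timezone.utc)
watchlist = {i["value"] for i in ti_indicators if active(i, now)}

# The "join": match the event's destination host against the watchlist.
matches = [e for e in device_events
           if urlparse(e["remote_url"]).hostname in watchlist]
print(matches)  # events whose destination host matches an active indicator
```

The KQL examples below implement the same pattern at scale: filter indicators by type and validity, extract the comparable value (domain or hash), and join against the Defender XDR table without copying the telemetry into Sentinel.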
Example 1: Detect malicious domains using ThreatIntelIndicators. This query identifies any domain indicators of compromise (IoCs) from threat intelligence (TI) by searching for matches in DeviceNetworkEvents.

let dt_lookBack = 1h;   // device events time window (how far back to look at traffic)
let ioc_lookBack = 14d; // TI time window (how far back to read indicators)
let DeviceNetworkEvents_ = DeviceNetworkEvents
    | where isnotempty(RemoteUrl)
    | where Timestamp >= ago(dt_lookBack)
    | where ActionType !has "ConnectionFailed"
    | extend Domain = tostring(parse_url(tolower(RemoteUrl)).Host)
    | where isnotempty(Domain)
    | project-rename DeviceNetworkEvents_TimeGenerated = Timestamp;
let DeviceNetworkEventDomains = DeviceNetworkEvents_
    | distinct Domain
    | summarize make_list(Domain);
ThreatIntelIndicators
| extend IndicatorType = replace(@"\[|\]|\""", "", tostring(split(ObservableKey, ":", 0)))
| where IndicatorType == "domain-name"
| extend DomainName = tolower(ObservableValue)
| where TimeGenerated >= ago(ioc_lookBack)
| extend IndicatorId = tostring(split(Id, "--")[2])
| where DomainName in (DeviceNetworkEventDomains)
| summarize LatestIndicatorTime = arg_max(TimeGenerated, *) by Id, ObservableValue
| where IsActive and (ValidUntil > now() or isempty(ValidUntil))
| extend Description = tostring(parse_json(Data).description)
| extend TrafficLightProtocolLevel = tostring(parse_json(AdditionalFields).TLPLevel)
| where Description !contains_cs "State: inactive;" and Description !contains_cs "State: falsepos;"
| project-reorder *, IsActive, Tags, TrafficLightProtocolLevel, DomainName, Type
| join kind=innerunique (DeviceNetworkEvents_) on $left.DomainName == $right.Domain
| where DeviceNetworkEvents_TimeGenerated < ValidUntil
| summarize DeviceNetworkEvents_TimeGenerated = arg_max(DeviceNetworkEvents_TimeGenerated, *) by IndicatorId
| project DeviceNetworkEvents_TimeGenerated, IndicatorId, Url = RemoteUrl, Confidence, Description, Tags, TrafficLightProtocolLevel, ActionType, DeviceId, DeviceName, InitiatingProcessAccountUpn, InitiatingProcessCommandLine, RemoteIP, RemotePort, ReportId
| extend Name = tostring(split(InitiatingProcessAccountUpn, '@', 0)[0]), UPNSuffix = tostring(split(InitiatingProcessAccountUpn, '@', 1)[0])
| extend Timestamp = DeviceNetworkEvents_TimeGenerated, UserPrincipalName = InitiatingProcessAccountUpn

Example 2: Detect malicious file hashes using ThreatIntelIndicators. This query identifies matches in DeviceFileEvents event data for any file hash IoC from TI.

let dt_lookBack = 1h;   // device events time window (how far back to look at traffic)
let ioc_lookBack = 14d; // TI time window (how far back to read indicators)
let DeviceFileEvents_ = (union
    (DeviceFileEvents
        | where Timestamp > ago(dt_lookBack)
        | where isnotempty(SHA1)
        | extend FileHashValue = SHA1),
    (DeviceFileEvents
        | where Timestamp > ago(dt_lookBack)
        | where isnotempty(SHA256)
        | extend FileHashValue = SHA256));
let Hashes = DeviceFileEvents_ | distinct FileHashValue;
ThreatIntelIndicators
// extract key part of kv pair
| extend IndicatorType = replace(@"\[|\]|\""", "", tostring(split(ObservableKey, ":", 0)))
| where IndicatorType == "file"
| extend FileHashType = replace("'", "", substring(ObservableKey, indexof(ObservableKey, "hashes.") + 7, strlen(ObservableKey) - indexof(ObservableKey, "hashes.") - 7))
| extend FileHashValue = ObservableValue
| extend IndicatorId = tostring(split(Id, "--")[2])
| where isnotempty(FileHashValue)
| where TimeGenerated > ago(ioc_lookBack)
// Optional prefilter: | where FileHashValue in (Hashes)
| summarize LatestIndicatorTime = arg_max(TimeGenerated, *) by Id, ObservableValue
| where IsActive and (ValidUntil > now() or isempty(ValidUntil))
| extend Description = tostring(parse_json(Data).description)
| where Description !contains_cs "State: inactive;" and Description !contains_cs "State: falsepos;"
| project-reorder *, FileHashType, FileHashValue, Type
| join kind=innerunique (DeviceFileEvents_) on $left.FileHashValue == $right.FileHashValue
| where TimeGenerated < ValidUntil
| summarize TimeGenerated = arg_max(Timestamp, *) by IndicatorId, DeviceId
| extend TrafficLightProtocolLevel = tostring(parse_json(AdditionalFields).TLPLevel)
| extend Description = tostring(parse_json(Data).description)
| extend ActivityGroupNames = extract(@"ActivityGroup:(\S+)", 1, tostring(parse_json(Data).labels))
| project TimeGenerated, TrafficLightProtocolLevel, Description, ActivityGroupNames, IndicatorId, FileHashValue, FileHashType, ValidUntil, Confidence, ActionType, DeviceId, DeviceName, FolderPath, RequestAccountDomain, RequestAccountName, RequestAccountSid, MachineGroup, ReportId
| extend Timestamp = TimeGenerated

Important notes

To create these types of custom detections, some columns such as Timestamp and ReportId are required in the query results. For more information, see Create custom detection rules in Microsoft Defender XDR - Microsoft Defender XDR | Microsoft Learn.

Use asset mappings and entity mappings. Prioritize mapping for high-value entities:
User accounts (for credential-based attacks)
Devices/hosts (for lateral movement)
IP addresses and domains (for network-based threats)
Files and processes (for malware execution)

Key benefits

Cost optimization: No need to ingest Defender XDR data into Sentinel to correlate with threat intelligence. This significantly reduces data ingestion and retention costs while maintaining full detection capability.
Extended lookback: Analyze historical data up to 30 days without additional storage costs.
Enhanced threat context: Leverage the ThreatIntelIndicators and ThreatIntelObjects tables to enrich alerts with threat actor details, confidence scores, and campaign context.
Flexible and customizable detection logic: Build custom KQL-based rules tailored to your organization's threat landscape. Combine multiple data sources (including third-party/non-Microsoft sources) and enrich alerts with contextual threat intelligence.
Faster, proactive threat detection: Detect threats without waiting for data ingestion pipelines.
Supports scheduled or near real-time queries, improving response times.

Key takeaway

Security teams can maximize the value of threat intelligence while optimizing costs. By reducing data duplication and enabling advanced correlation, organizations can strengthen their security posture without compromising efficiency.

Useful links

Custom detection rules get a boost—explore what's new in Microsoft Defender | Microsoft Community Hub
Create custom detection rules in Microsoft Defender XDR - Microsoft Defender XDR | Microsoft Learn
Threat intelligence - Microsoft Sentinel | Microsoft Learn