siem
517 Topics

Go agentless with Microsoft Sentinel Solution for SAP
What a title during Agentic AI times!

UPDATE: Agentless reached GA! See details here.

Dear community,

Bringing SAP workloads under the protection of your SIEM solution is a primary concern for many customers out there. The window for defenders is small: "Critical SAP vulnerabilities being weaponized in less than 72 hours of a patch release and new unprotected SAP applications provisioned in cloud (IaaS) environments being discovered and compromised in less than three hours." (SAP SE + Onapsis, Apr 6 2024)

A solution that is as close to turn-key as possible leads to better adoption of SAP security. Agent-based solutions running in Docker containers, Kubernetes, or other self-hosted environments are not for everyone. The latest capability of Microsoft Sentinel for SAP re-uses the SAP Cloud Connector to profit from already existing setups, established integration processes, and well-understood SAP components.

Meet agentless

The new integration path leverages SAP Integration Suite to connect Microsoft Sentinel with your SAP systems. The Cloud Integration capability of SAP Integration Suite speaks all relevant protocols, has connectivity into all the places where your SAP systems might live, is strategic for most SAP customers, and is fully SAP RISE compatible by design. Are you deployed on SAP Business Technology Platform yet? Simply upload our Sentinel for SAP integration package (see the bottom box in the image below) to your SAP Cloud Integration instance, configure it for your environment, and off you go.

Best of all: the existing SAP security content (detections, workbooks, and playbooks) in Microsoft Sentinel continues to function the same way as it does for the Docker-based collector agent variant. The integration marks your stepping stone to bring your SAP threat signals into the Unified Security Operations Platform - a combination of Defender XDR and Sentinel - that looks beyond SAP at your whole IT estate.

Microsoft Sentinel solution for SAP applications is certified for SAP S/4HANA Cloud, Private Edition (RISE with SAP) and SAP S/4HANA on-premises. So, you are all good to go!

Already dockerized or agentless? Then proceed to this post to learn more about what to do once the SAP logs have arrived in Sentinel.

Final Words

During the preview we saw drastically reduced deployment times for SAP customers less familiar with Docker, Kubernetes, and Linux administration. Cherry on the cake: the network challenges don't have to be tackled again; the colleagues running your SAP Cloud Connector went through that process a long time ago. SAP Basis rocks!

Get started from here on Microsoft Learn. Find more details on our blog on the SAP Community.

Cheers,
Martin

Microsoft Sentinel for SAP Agentless connector GA
Dear Community,

Today is the day: our new agentless connector for the Microsoft Sentinel Solution for SAP applications is Generally Available now! Fully onboarded to SAP's official Business Accelerator Hub and ready for prime time wherever your SAP systems are waiting - on-premises, hyperscalers, RISE, or GROW - to be protected.

Let's hear from an agentless customer:

"With the Microsoft Sentinel Solution for SAP and its new agentless connector, we accelerated deployment across our SAP landscape without the complexity of containerized agents. This streamlined approach elevated our SOC's visibility into SAP security events, strengthened our compliance posture, and enabled faster, more informed incident response." - SOC Specialist, North American aviation company

Use the video below to kick off your own agentless deployment today. #Kudos to the amazing mvigilante for showing us around the new connector!

But we didn't stop there! Security is being reengineered for the AI era, moving from static, rule-based controls to platform-driven, machine-speed defence that anticipates threats before they strike. Attackers think in graphs - Microsoft does too. We're bringing relationship-aware context to Microsoft Security, so defenders and AI can see connections, understand the impact of a potential compromise (blast radius), and act faster across pre-breach and post-breach scenarios, including SAP systems - your crown jewels.

See it in action in the scenario below, where a phishing compromise led to an SAP login bypassing MFA, followed by operating-system activities on the SAP host downloading trojan software. Enjoy this clickable experience for more details on the scenario. [Image: shows how a phishing compromise escalated to an SAP MFA bypass, highlighting cross-domain correlation.]

The Sentinel Solution for SAP has AI-first in mind and directly integrates with our security platform on the Defender portal for enterprise-wide signal correlation, Security Copilot reasoning, and Sentinel data lake usage. Your real-time SAP detections operate on the analytics tier for instant results and threat hunting, while the same SAP logs are mirrored to the lake for cost-efficient long-term storage (up to 12 years). Access that data for compliance reporting or historic analysis through KQL jobs on the lake. No more "yeah, I have the data stored somewhere" to tick the audit-report check box - instead, you can query and use your SAP telemetry in long-term storage at scale. Learn more here.

Findings from the agentless connector preview

During our preview we learned that the majority of customers immediately profit from the far smoother onboarding experience compared to the Docker-based approach. Deployment effort and time to first SAP log arrival in Sentinel went from days and weeks to hours.

⚠️ Deprecation notice for the containerized data connector agent ⚠️

The containerized SAP data connector will be deprecated on September 30, 2026. This change aligns with the discontinuation of the SAP RFC SDK, SAP's strategic integration roadmap, and customer demand for simpler integration. Migrate to the new agentless connector for simplified onboarding and compliance with SAP's roadmap. All new deployments starting October 31, 2025, will only have the new agentless connector option, and existing customers should plan their migration using the guidance on Microsoft Learn. The agentless connector is billed at the same price as the containerized agent, ensuring no cost impact for customers.
Note: To support the transition for those of you on the Docker-based data connector, we have enhanced our built-in KQL functions for SAP to work across data sources for hybrid and parallel execution.

Spotlight on new features

Inspired by the feedback of early adopters, we are shipping two of the most requested new capabilities with GA right away:

Customizable polling frequency: Balance threat detection value (1-minute intervals give the best value) against utilization of SAP Integration Suite resources, based on your needs. ⚠️ Warning: increasing the intervals may result in message processing truncation to avoid SAP CPI saturation. See this blog for more insights, and refer to the max-rows parameter and SAP documentation to make informed decisions.

Customizable API endpoint path suffix: Flexible endpoints allow running all your SAP security integration flows from the agentless connector while adhering to your naming strategies. Furthermore, you can add community extensions like SAP S/4HANA Cloud public edition (GROW), the SAP Table Reader, and more. [Image: displays the simplified onboarding flow for the agentless SAP connector.]

You want more? Here is your chance to share additional feature requests to influence our backlog. We would like to hear from you!

Getting started with agentless

The new agentless connector automatically appears in your environment - make sure to upgrade to the latest version, 3.4.05 or higher. [Image: Sentinel Content Hub view highlighting the agentless SAP connector tile in the Microsoft Defender portal, ready for one-click deployment and integration with your security platform.]

The deployment experience in Sentinel is fully automatic with a single button click: it creates the Azure Data Collection Endpoint (DCE), Data Collection Rule (DCR), and a Microsoft Entra ID app registration assigned the RBAC role "Monitoring Metrics Publisher" on the DCR to allow SAP log ingest.

Explore partner add-ons that build on top of agentless

The ISV partner ecosystem for the Microsoft Sentinel Solution for SAP is growing to tailor the agentless offering even further. The current cohort has flagship providers like our co-engineering partner SAP SE themselves, with their security products SAP LogServ and SAP Enterprise Threat Detection (ETD), and our mutual partners Onapsis and SecurityBridge.

Ready to go agentless?

➤ Get started from here.
➤ Explore partner add-ons here.
➤ Share feature requests here.

Next steps

Once deployed, I recommend checking AryaG's insightful blog series for details on how to move to production with the built-in SAP content of agentless. Looking to expand protection to SAP Business Technology Platform? Here you go.

#Kudos to the amazing Sentinel for SAP team and our incredible community contributors! That's a wrap 🎬. Remember: bringing SAP under the protection of your central SIEM isn't just a checkbox - it's essential for comprehensive security and compliance across your entire IT estate.

Cheers,
Martin

Microsoft Application Protection Incidents
Hi,

I'm seeing a small number of incidents within Sentinel with the alert product name of 'Microsoft Application Protection'. I can view them in Sentinel, but when I click the hyperlink to be taken to the Defender portal, I can't access them. Two things: which Defender suite are these alerts coming from? And which roles/permissions are required to view them within the Defender/unified portal?

Automating Microsoft Sentinel: Part 2: Automate the mundane away
Welcome to the second entry in our blog series on automating Microsoft Sentinel. In this series, we're showing you how to automate various aspects of Microsoft Sentinel, from simple automation of Sentinel alerts and incidents to more complicated response scenarios with multiple moving parts.

So far, we've covered Part 1: Introduction to Automating Microsoft Sentinel, where we talked about why you would want to automate, as well as an overview of the different types of automation you can do in Sentinel. Here is a preview of what you can expect in the upcoming posts [we'll be updating this post with links to new posts as they happen]:

Part 1: Introduction to Automating Microsoft Sentinel
Part 2: Automation Rules [You are here] - Automate the mundane away
Part 3: Playbooks Part I - Fundamentals
Part 4: Playbooks Part II - Diving Deeper
Part 5: Azure Functions / Custom Code
Part 6: Capstone Project (Art of the Possible) - Putting it all together

Part 2: Automation Rules - Automate the mundane away

Automation rules can be used to automate Sentinel itself. For example, let's say there is a group of machines that have been classified as business critical, and if there is an alert related to those machines, then the incident needs to be assigned to a Tier 3 response team and the severity of the alert needs to be raised to at least "high". Using an automation rule, you can take one analytic rule, apply it to the entire enterprise, but then have an automation rule that applies only to those business-critical systems to make those changes. That way, only the items that need that immediate escalation receive it, quickly and efficiently.

Automation Rules In Depth

So, now that we know what automation rules are, let's dive into them a bit deeper to better understand how to configure them and how they work.

Creating Automation Rules

There are three main places where we can create an automation rule:

1) Navigating to Automation under the left menu
2) In an existing incident, via the "Actions" button
3) When writing an analytic rule, under the "Automated response" tab

The process for each is generally the same, except for the incident route, and we'll break that down more in a bit.

When we create an automation rule, we need to give the rule a name. It should be descriptive and indicative of what the rule is going to do and what conditions it applies to. For example, a rule that automatically resolves an incident based on a known false-positive condition on a server named SRV02021 could be titled "Automatically Close Incident When Affected Machine is SRV02021", but really it's up to you to decide what you want to name your rules.

Trigger

The next thing we need to define for our automation rule is the trigger. Triggers are what cause the automation rule to begin running. They can fire when an incident is created or updated, or when an alert is created. Of the two options (incident-based or alert-based), it's preferred to use incident triggers, as incidents are potentially the aggregation of multiple alerts, and the odds are that you're going to want to take the same automation steps for all of those alerts since they're all related. It's better to reserve alert-based triggers for scenarios where an analytic rule fires an alert but is set to not create an incident.

Conditions

Conditions are, well, the conditions to which this rule applies. There are two conditions that are always present: the incident provider and the analytic rule name. You can choose multiple criteria and steps.
For example, you could have it apply to all incident providers and all rules (as shown in the picture above), or only a specific provider and all rules, or not apply to a particular provider, and so on. You can also add additional conditions that will either include or exclude the rule from running. When you create a new condition, you can build it out from multiple properties, ranging from information about the incident all the way to information about the entities that are tagged in the incident.

Remember our earlier automation rule title where we said this was a false positive about a server named SRV02021? This is where we make the rule match that title, by setting the condition to only fire this automation if the entity has a host name of "SRV02021".

By combining AND and OR group clauses with the built-in conditional filters, you can make the rule as specific as you need it to be.

You might be thinking to yourself that while there is a lot of power in creating these conditions, it might be a bit onerous to create them for each condition. Recall earlier where I said the process for the three ways of creating automation rules was generally the same, except when using the incident Actions route? Well, that route will pre-fill variables for the selected incident. For example, in the image below, the rule automatically took the rule name, the rules it applies to, as well as the entities that were mapped in the incident. You can add, remove, or modify any of the variables that the process auto-maps.

NOTE: In the new Unified Security Operations Platform (Defender XDR + Sentinel) there is some new best-practice guidance:

If you've created an automation using "Title", use "Analytic rule name" instead. The Title value could change with Defender's correlation engine.
The option for "Incident provider" has been removed and replaced by "Alert product names" to filter based on the alert provider.

Actions

Now that we've tuned our automation rule to only fire for the situations we want, we can set up the actions we want the rule to execute. Clicking the "Actions" drop-down list will show you the options you can choose from.

When you select an option, the user interface will change to match your selection. For example, if I choose to change the status of the incident, the UX will update to show me a drop-down menu with options for which status I would like to set. If I choose other options (run playbook, change severity, assign owner, add tags, add task), the UX will change to reflect my option.

You can assign multiple actions within one automation rule by clicking the "Add action" button and selecting the next action you want the system to take. For example, you might want to assign an incident to a particular user or group, change its severity to "High", and then set the status to Active.

Notably, when you create an automation rule from an incident, Sentinel automatically sets a default action of Change status. It sets the automation up to set the status to "Closed" with a reason of "Benign Positive - Suspicious but expected". This default action can be deleted, and you can then set up your own action.

In a future episode of this blog we're going to talk about playbooks in detail, but for now just know that this is the place where you can assign a playbook to your automation rules.
There is one other option in the Actions menu that I wanted to specifically talk about in this blog post, though: incident tasks.

Incident Tasks

Like most cybersecurity teams, you probably have a runbook of the different tasks or steps that your analysts and responders should take in different situations. By using incident tasks, you can now embed those runbook steps directly in the incident. Incident tasks can be as lightweight or as detailed as you need them to be and can include rich formatting, links to external content, images, etc. When an incident with tasks is generated, the SOC team will see these tasks attached as part of the incident and can then take the defined actions and check off that they've been completed.

Rule Lifetime and Order

There is one last section of automation rules that we need to define before we can start automating the mundane away: when should the rule expire, and in what order should the rule run compared to other rules?

When you create a rule in the standalone automation UX, the default is for the rule to expire at an indefinite date and time in the future, e.g. forever. You can change the expiration to any date and time in the future. If you are creating the automation rule from an incident, Sentinel will automatically assume that this rule should have an expiration date and time, and sets it to 24 hours in the future. Just as with the default action when created from an incident, you can change the expiration to any date and time in the future, or set it to "Indefinite" by deleting the date.
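Once your rules are in place, it helps to verify that they actually run (and keep running). Below is a minimal KQL sketch, assuming you have enabled Microsoft Sentinel's health monitoring, which populates the SentinelHealth table; the column and value names ("Automation rule", Status) are taken from the health-monitoring schema and are worth double-checking against your own workspace:

SentinelHealth
| where TimeGenerated > ago(7d)
| where SentinelResourceType == "Automation rule" // value assumed from the health-monitoring schema
| summarize Runs = count(), Failures = countif(Status != "Success") by SentinelResourceName
| order by Failures desc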
Conclusion

In this blog post, we talked about automation rules in Sentinel and how you can use them to automate mundane tasks, as well as how to leverage them to help your SOC analysts be more effective and consistent in their day-to-day work with capabilities like incident tasks. Stay tuned for more updates and tips on automating Microsoft Sentinel!

[DevOps] dps.sentinel.azure.com no longer responds

Hello,

I've been using repository connections in Sentinel to a central DevOps organization for almost two years now. Today I got my first automated email reporting an error for a webhook related to my last commit from the central repo to my Sentinel instances. It's a webhook that is automatically created in connections made in the last year (the ones from two years ago don't have this webhook automatically created). The hook is found in DevOps -> Service hooks -> Webhooks, "run state change", for each connected Sentinel.

However, after today's run (which was successful; all content deployed) this hook generates alerts. It says it can't reach (EU in my case): eu.prod.dps.sentinel.azure.com

Full URL: https://eu.prod.dps.sentinel.azure.com/webhooks/ado/workspaces/[REDACTED]/sourceControls/[REDACTED]

So, what happened to this domain? Why is it no longer responding, and when did it go offline? I THINK this is the hook that sets the status under Sentinel -> Repositories in the GUI. The success status in the screenshot is from 2025/02/06; no new success has been registered in the receiving Sentinel instance. For the Sentinel that is two years old and doesn't have a hook in my DevOps, the last deployment status says "Unknown" - so I'm fairly sure that's what the webhook is doing.

So a second question would be: how can I set up a new webhook? (It wants the ID and password of the "Azure Sentinel Content Deployment App" - I will never know that password...) So I can't manually add one either (if the URL ever comes back online, or if a new one exists?). Please let me know.

Microsoft Sentinel's AI-driven UEBA ushers in the next era of behavioral analytics
Co-author: Ashwin Patil

Security teams today face an overwhelming challenge: every data point is now a potential security signal, and SOCs are drowning in complex logs, trying to find the needle in the haystack. Microsoft Sentinel User and Entity Behavior Analytics (UEBA) brings the power of AI to automatically surface anomalous behaviors, helping analysts cut through the noise, save time, and focus on what truly matters.

Microsoft Sentinel UEBA has already helped SOCs uncover insider threats, detect compromised accounts, and reveal subtle attack signals that traditional rule-based methods often miss. These capabilities were previously powered by a core set of high-value data sources - such as sign-in activity, audit logs, and identity signals - that consistently delivered rich context and accurate detections. Today, we're excited to announce a major expansion: Sentinel UEBA now supports six new data sources across Microsoft first-party and third-party platforms, including Azure, AWS, GCP, and Okta, bringing deeper visibility, broader context, and more powerful anomaly detection tailored to your environment.

This isn't just about ingesting more logs. It's about transforming how SOCs understand behavior, detect threats, and prioritize response. With this evolution, analysts gain a unified, cross-platform view of user and entity behavior, enabling them to correlate signals, uncover hidden risks, and act faster with greater confidence.

The newly supported data sources are built for real-world security use cases:

Authentication activities
- MDE DeviceLogonEvents: ideal for spotting lateral movement and unusual access.
- AADManagedIdentitySignInLogs: critical for spotting stealthy abuse of non-human identities.
- AADServicePrincipalSignInLogs: identifies anomalies in service principal usage, such as token theft or over-privileged automation.

Cloud platforms and identity management
- AWS CloudTrail Login Events: surfaces risky AWS account activity based on AWS CloudTrail ConsoleLogin events and logon-related attributes.
- GCP Audit Logs - Failed IAM Access: captures denied access attempts indicating reconnaissance, brute force, or privilege misuse in GCP.
- Okta MFA & Auth Security Change Events: flags MFA challenges, resets, and policy modifications that may reveal MFA fatigue, session hijacking, or policy tampering. Currently supports the Okta_CL table (unified Okta connector support coming soon).

These sources feed directly into UEBA's entity profiles and baselines, enriching users, devices, and service identities with behavioral context and anomalies that would otherwise be fragmented across platforms. This complements our existing supported log sources: Entra ID sign-in logs, Azure Activity logs, and Windows Security Events. Thanks to the unified schema across data sources, UEBA enables feature-rich investigation and the ability to correlate insights, anomalies, and more across data sources and cross-platform identities or devices.

AI-powered UEBA that understands your environment

Microsoft Sentinel UEBA goes beyond simple log collection - it continuously learns from your environment. By applying AI models trained on your organization's behavioral data, UEBA builds dynamic baselines and peer groups, enabling it to spot truly anomalous activity. UEBA builds baselines from 10 days (for uncommon activities) up to 6 months, both for the user and their dynamically calculated peers.
Then, insights are surfaced on the activities and logs - such as an uncommon activity or a first-time activity - not only for the user but also among peers. Those insights are used by an advanced AI model to identify high-confidence anomalies. So, if a user signs in for the first time from an uncommon location, but this is a common pattern in the environment (due to reliance on global vendors, for example), then this will not be identified as an anomaly, keeping the noise down. However, in a tightly controlled environment, this same behavior can be an indication of an attack and will surface in the Anomalies table. Including those signals in custom detections can help adjust the severity of an alert, so that while the detection logic is maintained, the SOC stays focused on the right priorities.

How to use UEBA for maximum impact

Security teams can leverage UEBA in several key ways. All the examples below leverage UEBA's dynamic behavioral baselines, looking back up to 6 months. Teams can also leverage the hunting queries from the "UEBA essentials" solution in Microsoft Sentinel's Content Hub.

Behavior analytics

Detect unusual logon times, MFA fatigue, or service principal misuse across hybrid environments. Get visibility into the geo-location of events and Threat Intelligence insights. Here's an example of how you can easily discover accounts authenticating without MFA and from uncommonly connected countries using the UEBA BehaviorAnalytics table:

BehaviorAnalytics
| where TimeGenerated > ago(7d)
| where EventSource == "AwsConsoleSignIn"
| where ActionType == "ConsoleLogin" and ActivityType == "signin.amazonaws.com"
| where ActivityInsights.IsMfaUsed == "No"
| where ActivityInsights.CountryUncommonlyConnectedFromInTenant == True
| evaluate bag_unpack(UsersInsights, "AWS_")
| where InvestigationPriority > 0 // Filter noise - comment out if you want to see low-fidelity noise
| project TimeGenerated, _WorkspaceId, ActionType, ActivityType, InvestigationPriority, SourceIPAddress, SourceIPLocation, AWS_UserIdentityType, AWS_UserIdentityAccountId, AWS_UserIdentityArn

Anomaly detection

Identify lateral movement, dormant account reactivation, or brute-force attempts, even when they span cloud platforms. Below is an example of how to discover UEBA anomalies across various data sources via their anomaly templates:

Anomalies
| where AnomalyTemplateName in (
    "UEBA Anomalous Logon in AwsCloudTrail", // AWS CloudTrail anomalies
    "UEBA Anomalous MFA Failures in Okta_CL", "UEBA Anomalous Activity in Okta_CL", // Okta anomalies
    "UEBA Anomalous Activity in GCP Audit Logs", // GCP failed IAM access anomalies
    "UEBA Anomalous Authentication" // Authentication-related anomalies
)
| project TimeGenerated, _WorkspaceId, AnomalyTemplateName, AnomalyScore, Description, AnomalyDetails, ActivityInsights, DeviceInsights, UserInsights, Tactics, Techniques

Alert optimization

Use UEBA signals to dynamically adjust alert severity in custom detections, turning noisy alerts into high-fidelity detections. The example below shows all the users with anomalous sign-in patterns based on UEBA. Joining the results with any AWS alerts for the same AWS identity will increase fidelity.
BehaviorAnalytics
| where TimeGenerated > ago(7d)
| where EventSource == "AwsConsoleSignIn"
| where ActionType == "ConsoleLogin" and ActivityType == "signin.amazonaws.com"
| where ActivityInsights.FirstTimeConnectionViaISPInTenant == True or ActivityInsights.FirstTimeUserConnectedFromCountry == True
| evaluate bag_unpack(UsersInsights, "AWS_")
| where InvestigationPriority > 0 // Filter noise - comment out if you want to see low-fidelity noise
| project TimeGenerated, _WorkspaceId, ActionType, ActivityType, InvestigationPriority, SourceIPAddress, SourceIPLocation, AWS_UserIdentityType, AWS_UserIdentityAccountId, AWS_UserIdentityArn, ActivityInsights
| evaluate bag_unpack(ActivityInsights)

Another example shows anomalous key vault access from a service principal with an uncommon source country location. Joining this activity with other alerts from the same service principal increases the fidelity of those alerts:

BehaviorAnalytics
| where TimeGenerated > ago(1d)
| where EventSource == "Authentication" and SourceSystem == "AAD"
| evaluate bag_unpack(ActivityInsights)
| where LogonMethod == "Service Principal" and Resource == "Azure Key Vault"
| where ActionUncommonlyPerformedByUser == "True" and CountryUncommonlyConnectedFromByUser == "True"
| where InvestigationPriority > 0

You can also join the "UEBA Anomalous Authentication" anomaly with other alerts from the same identity to bring the full power of UEBA into your detections, as in the sketch below.
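A minimal sketch of that join, under a few assumptions: your alerts land in the SecurityAlert table, account entities carry Name and UPNSuffix in the alert's Entities JSON, and the Anomalies table exposes a UserPrincipalName column (verify that last one against your schema):

Anomalies
| where TimeGenerated > ago(1d)
| where AnomalyTemplateName == "UEBA Anomalous Authentication"
| extend AnomalyUpn = tolower(tostring(UserPrincipalName)) // assumed column; check your Anomalies schema
| join kind=inner (
    SecurityAlert
    | where TimeGenerated > ago(1d)
    | mv-expand Entity = todynamic(Entities)
    | where tostring(Entity.Type) == "account"
    | extend AlertUpn = tolower(strcat(tostring(Entity.Name), "@", tostring(Entity.UPNSuffix)))
) on $left.AnomalyUpn == $right.AlertUpn
| project TimeGenerated, AlertName, AlertSeverity, AnomalyTemplateName, AnomalyScore, AnomalyUpn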
Final thoughts

This release marks a new chapter for Sentinel UEBA, bringing together AI, behavioral analytics, and cross-cloud and identity-management visibility to help defenders stay ahead of threats. If you haven't explored UEBA yet, now's the time: enable it in your workspace settings, and don't forget to enable anomalies as well (in the Anomalies settings). And if you're already using it, these new sources will help you unlock even more value.

Stay tuned for our upcoming Ninja show and webinar (register at aka.ms/secwebinars), where we'll dive deeper into use cases. Until then, explore the new sources, use the UEBA workbook, update your watchlists, and let UEBA do the heavy lifting.

UEBA onboarding and setting documentation
Identify threats using UEBA
UEBA enrichments and insights reference
UEBA anomalies reference

Sentinel Data Connector: Google Workspace (G Suite) (using Azure Functions)

I'm encountering a problem when attempting to run the GWorkspace_Report workbook in Azure Sentinel. The query throws this error related to the union operator:

'union' operator: Failed to resolve table expression named 'GWorkspace_ReportsAPI_gcp_CL'

I've double-checked, and the GoogleWorkspaceReports connector is installed and updated to version 3.0.2. Has anyone seen this, or does anyone know what might be causing the table GWorkspace_ReportsAPI_gcp_CL to be unresolved? Thanks!
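For context: a union can be made tolerant of tables that don't exist yet (for example, because that activity type has never been ingested into the workspace) with isfuzzy=true. A minimal sketch, assuming the workbook unions the connector's custom tables; the admin table name below follows the connector's naming pattern and is illustrative:

// With isfuzzy=true, unresolved tables are skipped instead of failing the whole query.
union isfuzzy=true GWorkspace_ReportsAPI_admin_CL, GWorkspace_ReportsAPI_gcp_CL
| take 10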
Trend Micro Vision One Connector Not working

Hi All,

Before I get nuked in the comments with "raise an issue on the Sentinel repo", hear me out.

Around a month ago, the logs stopped ingesting. A quick snoop around revealed the reason, but I'm not sure if I should raise an issue or try to fix it myself, risking voiding any future support I can get, since the connector and the app that comes with it are marketplace solutions.

The Function App was not running due to a dependency issue. I spotted this in the diagnostic logs, under the "exceptions" table: "module named _cffi_backend not found" - a Python package, Google tells me, that's used to interact with C code. So logically, I need to find the requirements.txt and make sure the dependency is there, and also make sure the Python version of the runtime and of Azure match.

The logs were initially flowing as usual. I completed integrating Trend Micro using the Azure Functions-based connector around 7 months ago; it worked like a Toyota Hilux until now.

So once again, I would like to know the community's thoughts on it. Thxx
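If that diagnosis is right, a plausible first step (an assumption, not verified against this specific connector) is to pin cffi explicitly in the Function App's requirements.txt (for example a line such as cffi==1.17.1, version hypothetical) and to confirm the Function App's Python version matches the version the dependencies were resolved for, since _cffi_backend is the compiled C extension that cffi loads at runtime.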
Microsoft Sentinel data lake FAQ

On September 30, 2025, Microsoft announced the general availability of the Microsoft Sentinel data lake, designed to centralize and retain massive volumes of security data in open formats like delta parquet. By decoupling storage from compute, the data lake supports flexible querying while offering unified data management and cost-effective retention. The Sentinel data lake is a game changer for security teams, serving as the foundational layer for agentic defense, deeper security insights, and graph-based enrichment. In this blog we offer answers to many of the questions we've heard from our customers and partners.

General questions

1. What is the Microsoft Sentinel data lake?

Microsoft has expanded its industry-leading SIEM solution, Microsoft Sentinel, to include a unified security data lake, designed to help optimize costs, simplify data management, and accelerate the adoption of AI in security operations. This modern data lake serves as the foundation for the Microsoft Sentinel platform. It has a cloud-native architecture and is purpose-built for security, bringing together all security data for greater visibility, deeper security analysis, and contextual awareness. It provides affordable, long-term retention, allowing organizations to maintain robust security while effectively managing budgetary requirements.

2. What are the benefits of the Sentinel data lake?

Microsoft Sentinel data lake is designed for flexible analytics, cost management, and deeper security insights:

- It centralizes security data in an open format like delta parquet for easy access. This unified view enhances threat detection, investigation, and response across hybrid and multi-cloud environments.
- It introduces a disaggregated storage and compute pricing model, allowing customers to store massive volumes of security data at a fraction of the cost compared to traditional SIEM solutions.
- It allows multiple analytics engines like Kusto, Spark, and ML to run on a single data copy, simplifying management, reducing costs, and supporting deeper security analysis.
- It integrates with GitHub Copilot and VS Code, empowering SOC teams to automate enrichment, anomaly detection, and forensic analysis.
- It supports AI agents via the MCP server, allowing tools like GitHub Copilot to query and automate security tasks. The MCP server layer brings intelligence to the data, offering semantic search, query tools, and custom analysis capabilities that make it easier to extract insights and automate workflows.
- Customers also benefit from streamlined onboarding, intuitive table management, and scalable multi-tenant support, making it ideal for MSSPs and large enterprises.
- The Sentinel data lake is purpose-built for security workloads, ensuring that processes from ingestion to analytics meet cybersecurity requirements.

3. Is the Sentinel data lake generally available?

Yes. The Sentinel data lake is generally available (GA) starting September 30, 2025. To learn more, see the GA announcement blog.

4. What happens to Microsoft Sentinel SIEM?

Microsoft is expanding Sentinel into an AI-powered, end-to-end security platform that includes SIEM and new platform capabilities: security data lake, graph-powered analytics, and MCP server. SIEM remains a core component and will be actively developed and supported.

Getting started

1. What are the prerequisites for the Sentinel data lake?

To get started, connect your Sentinel workspace to Microsoft Defender prior to onboarding to the Sentinel data lake.
Once in the Defender experience, see the data lake onboarding documentation for next steps. Note: Sentinel is moving to the Microsoft Defender portal, and the Sentinel Azure portal will be retired by July 2026.

2. I am a Sentinel-only customer, not a Defender customer. Can I use the Sentinel data lake?

Yes. You must connect Sentinel to the Defender experience before onboarding to the Sentinel data lake. Microsoft Sentinel is generally available in the Microsoft Defender portal, with or without Microsoft Defender XDR or an E5 license. If you have created a Log Analytics workspace, enabled it for Sentinel, and have the right Microsoft Entra roles (e.g. Global Administrator + Subscription Owner, or Security Administrator + Sentinel Contributor), you can enable Sentinel in the Defender portal. For more details on how to connect Sentinel to Defender, see: Microsoft Sentinel in the Microsoft Defender portal.

3. In what regions is the Sentinel data lake available?

For supported regions, see: Geographical availability and data residency in Microsoft Sentinel | Microsoft Learn.

4. Is there an expected release date for Microsoft Sentinel data lake in Government clouds?

While the exact date is not yet finalized, we anticipate support for these clouds soon.

5. How will URBAC and Entra RBAC work together to manage the data lake, given there is no centralized model?

Entra RBAC provides broad access to the data lake (URBAC maps the right permissions to specific Entra role holders: GA/SA/SO/GR/SR). URBAC will become a centralized pane for configuring non-global delegated access to the data lake. Today, you use this for the "default data lake" workspace. In the future, this will be enabled for non-default Sentinel workspaces as well, meaning all workspaces in the data lake can be managed here for data lake RBAC requirements. Azure RBAC on the Log Analytics (LA) workspace in the data lake is respected through URBAC as well today: if you already hold a built-in role like Log Analytics Reader, you can run interactive queries over the tables in that workspace, and if you hold Log Analytics Contributor, you can read and manage table data. For more details, see: Roles and permissions in the Microsoft Sentinel platform | Microsoft Learn.

Data ingestion and storage

1. How do I ingest data into the Sentinel data lake?

To ingest data into the Sentinel data lake, you can use existing Sentinel data connectors or custom connectors to bring in data from Microsoft and third-party sources. Data can be ingested into the analytics tier and/or the data lake tier. Data ingested into the analytics tier is automatically mirrored to the lake, while lake-only ingestion is available for select tables. Data retention is configured in table management. Note: certain tables do not support data lake-only ingestion via either the API or the data connector UI; see Custom log tables for more information.

2. What is Microsoft's guidance on when to use the analytics tier vs. the data lake tier?

Sentinel data lake offers flexible, built-in data tiering (analytics and data lake tiers) to effectively meet diverse business use cases and achieve cost optimization goals.

Analytics tier:
- Ideal for high-performance, real-time, end-to-end detections, enrichments, investigation, and interactive dashboards. Typically, high-fidelity data from EDRs, email gateways, identity, SaaS and cloud logs, and threat intelligence (TI) should be ingested into the analytics tier.
- Data in the analytics tier is best monitored proactively with scheduled alerts and scheduled analytics to enable security detections.
- Data in this tier is retained at no cost for up to 90 days by default, extendable to 2 years.
- A copy of the data in this tier is automatically available in the data lake tier at no extra cost, ensuring a unified copy of security data for both tiers.

Data lake tier:
- Designed for cost-effective, long-term storage. High-volume logs like NetFlow logs, TLS/SSL certificate logs, firewall logs, and proxy logs are best suited for the data lake tier.
- Customers can use these logs for historical analysis, compliance and auditing, incident response (IR), and forensics over historical data; build tenant baselines; perform TI matching; and then promote the resulting insights into the analytics tier.
- Customers can run full Kusto queries, Spark notebooks, and scheduled jobs over a single copy of their data in the data lake.
- Customers can also search, enrich, and restore data from the data lake tier to the analytics tier for full analytics.

For more details, see the documentation.

3. What does it mean that a copy of all new analytics tier data will be available in the data lake?

When Sentinel data lake is enabled, a copy of all new data ingested into the analytics tier is automatically duplicated into the data lake tier. This means customers don't need to manually configure or manage this process: every new log or telemetry added to the analytics tier becomes instantly available in the data lake. This allows security teams to run advanced analytics, historical investigations, and machine learning models on a single, unified copy of data in the lake, while still using the analytics tier for real-time SOC workflows. It's a seamless way to support both operational and long-term use cases without duplicating effort or cost.

4. Is there any cost for retention in the analytics tier?

You get 90 days of analytics retention free; simply set analytics retention to 90 days or less. For the total retention setting, only the mirrored portion that overlaps with the free analytics retention is free in the data lake. Retaining data in the lake beyond the analytics retention period incurs additional storage costs. See the documentation for more details: Manage data tiers and retention in Microsoft Sentinel | Microsoft Learn.

5. What is the guidance for Microsoft Sentinel Basic and Auxiliary Logs customers?

If you previously enabled the Basic or Auxiliary Logs plan in Sentinel:

- You can view Basic Logs in the Defender portal but manage them from the Log Analytics workspace. To manage them in the Defender portal, you must change the plan from Basic to Analytics.
- Existing Auxiliary Log tables will be available in the data lake tier for use once the Sentinel data lake is enabled. Prior to the availability of the Sentinel data lake, Auxiliary Logs provided a long-term retention solution for Sentinel SIEM. Once the data lake is enabled, Auxiliary Log tables will be available in the Sentinel data lake for use with the data lake experiences, and billing for Auxiliary Logs will switch to Sentinel data lake meters.

Microsoft Sentinel customers are recommended to start planning their data management strategy around the data lake. While Basic and Auxiliary Logs are still available, they are not being enhanced further, so plan on onboarding your security data to the Sentinel data lake. Azure Monitor customers can continue to use Basic and Auxiliary Logs for observability scenarios.

6. What happens to customers that already have Archive logs enabled?
If a customer has already configured tables for Archive retention, those settings are inherited by the Sentinel data lake and will not change. Data in the Archive logs will continue to be accessible through the Sentinel search and restore experiences. Mirrored data (in the data lake) will be accessible via the lake explorer and notebook jobs.

Example: if a customer has 12 months of total retention enabled on a table, then 2 months after enabling ingestion into the Sentinel data lake, the customer will still have access to 12 months of archived data (through the Sentinel search and restore experiences), but access to only 2 months of data in the data lake (since the data lake was enabled).

Key considerations for customers that currently have Archive logs enabled:

- The existing archive will remain, with new data ingested into the data lake going forward; previously stored archive data will not be backfilled into the lake.
- Archive logs will continue to be accessible via the Search and Restore tab under Sentinel.
- If analytics and data lake mode are both enabled on a table, which is the default setting for analytics tables when Sentinel data lake is enabled, data will continue to be ingested into the Sentinel data lake and the archive going forward. There will only be one retention billing meter going forward, and the archive will continue to be accessible via Search and Restore.
- If data lake-only mode is enabled on a table, new data will be ingested only into the data lake; any data that's not already in the Sentinel data lake won't be migrated or backfilled. Data that was previously ingested under the archive plan will remain accessible via Search and Restore.

7. What is the guidance for customers using Azure Data Explorer (ADX) alongside Microsoft Sentinel?

Some customers may have set up an ADX cluster to augment their Sentinel deployment. Those customers can choose to continue using that setup and gradually migrate to the Sentinel data lake for new data to receive the benefits of a fully managed data lake. For all new implementations, it is recommended to use the Sentinel data lake.

8. What happens to the Defender XDR data after enabling Sentinel data lake?

By default, Defender XDR retains threat hunting data in the XDR default tier, which includes 30 days of analytics retention, included in the XDR license. You can extend the table retention period for supported Defender XDR tables beyond 30 days. For more information, see Manage XDR data in Microsoft Sentinel. Note: today you can't ingest XDR tables directly into the data lake tier without ingesting into the analytics tier first.

9. Are there any special considerations for XDR tables?

Yes. XDR tables are unique in that they are available for querying in advanced hunting by default for 30 days. To retain data beyond this period, an explicit change to the retention setting is required, either by extending the analytics tier retention or the total retention period. The Defender XDR advanced hunting tables supported by Sentinel are documented here: Connect Microsoft Defender XDR data to Microsoft Sentinel | Microsoft Learn.

KQL queries and jobs

1. Are KQL and notebooks supported over the Sentinel data lake?

Yes, via the data lake KQL query experience, along with a fully managed notebook experience that enables Spark-based big-data analytics over a single copy of all your security data. Customers can run queries across any time range of data in their Sentinel data lake. In the future, this will be extended to enable SQL queries over the lake as well.

2. Why are there two different places to run KQL queries in the Sentinel experience?

Consolidating the advanced hunting and KQL explorer user interfaces is on the roadmap. Security analysts will benefit from a unified query experience across both the analytics and data lake tiers.

3. Where is the output from KQL jobs stored?

KQL job results are written into an existing or new analytics tier table.

4. Is it possible to run KQL queries on multiple data lake tables?

Yes, you can run KQL interactive queries and jobs using operators like join or union, as in the sketch below.
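A minimal sketch of a cross-table union over the lake; the two Entra ID sign-in tables are illustrative, so substitute tables you actually retain in the data lake tier:

// Correlate interactive and non-interactive sign-ins over a long lookback.
union SigninLogs, AADNonInteractiveUserSignInLogs
| where TimeGenerated > ago(180d)
| summarize SignInCount = count(), FirstSeen = min(TimeGenerated) by UserPrincipalName, AppDisplayName
| order by SignInCount desc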
5. Can KQL queries (either interactive or via KQL jobs) join data across multiple workspaces?

Yes, security teams can run multi-workspace KQL queries for broader threat correlation.

Pricing and billing

1. How does a customer pay for the Sentinel data lake?

Sentinel data lake is a consumption-based service with a disaggregated storage and compute business model. Customers continue to pay for ingestion, and set up billing as part of their onboarding for storage and for analytics over data in the data lake (e.g. queries, KQL or notebook jobs). See the Sentinel pricing page for more details.

2. What are the pricing components for the Sentinel data lake?

Sentinel data lake offers a flexible pricing model designed to optimize security coverage and costs. For specific meter definitions, see the documentation.

3. What are the billing updates at GA?

- We are enabling data compression, billed at a simple and uniform data compression rate of 6:1 across all data sources, applicable only to data lake storage.
- Starting October 1, 2025, data storage billing begins on the first day data is stored.
- To support ingestion and standardization of diverse data sources, we are introducing a new data processing feature that applies a $0.10 per GB charge for all uncompressed data ingested into the data lake for tables configured for data lake-only retention. (This does not apply to tables configured for both analytics and data lake tier retention.)
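To make those meters concrete with a hypothetical, illustrative calculation (not a price quote): a table configured for data lake-only retention that receives 100 GB of uncompressed data per day would incur 100 GB x $0.10 = $10 per day in data processing charges, and with the uniform 6:1 compression rate its storage meter would be based on roughly 100 / 6 ≈ 16.7 GB per day of compressed data for the configured retention period.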
4. How is retention billed for tables that use data lake-only ingestion and retention?

During the public preview, data lake-only tables included the first 30 days of retention at no cost. At GA, storage costs are billed. In addition, when retention billing switches to using compressed data size (instead of ingested size), charges will apply for the entire retention period. Because billing will be based on compressed data size, customers can expect significant savings on storage costs.

5. Does the "data processing" meter apply to analytics tier data duplicated in the data lake?

No.

6. What happens to billing for customers that activate Sentinel data lake on a table with Archive logs enabled?

Customers will automatically be billed using the data lake storage meter. Note: this means that customers will be charged using the 6:1 compression rate for data lake retention.

7. How do I control my Sentinel data lake costs?

Sentinel is billed based on consumption, and prices vary based on usage. An important tool for managing the majority of the cost is the use of analytics "commitment tiers". The data lake complements this strategy for higher-volume data, like network and firewall data, to reduce analytics tier costs. Use the Azure pricing calculator and the Sentinel pricing page to estimate costs and understand pricing.

8. How do I manage Sentinel data lake costs?

We are introducing a new cost management experience (public preview) to help customers with cost predictability, billing transparency, and operational efficiency. These in-product reports provide customers with insights into usage trends over time, enabling them to identify cost drivers and optimize data retention and processing strategies. Customers are also able to set usage-based alerts on specific meters to monitor and control costs. For example, you can receive alerts when query or notebook usage passes set limits, helping avoid unexpected expenses and manage budgets. See the documentation to learn more.

9. If I'm an Auxiliary Logs customer, how will onboarding to the Sentinel data lake affect my billing?

Once a workspace is onboarded to the Sentinel data lake, all Auxiliary Logs meters will be replaced by new data lake meters.

Thank you

Thank you to our customers and partners for your continued trust and collaboration. Your feedback drives our innovation, and we're excited to keep evolving Microsoft Sentinel to meet your security needs. If you have any questions, please don't hesitate to reach out; we're here to support you every step of the way.