threat detection and response
Introducing a Unified Security Operations Platform with Microsoft Sentinel and Defender XDR
Read about our announcement of an exciting private preview that represents the next step in the SOC protection and efficiency journey, bringing together the power of Microsoft Sentinel, Microsoft Defender XDR, and Microsoft Security Copilot into a unified security operations platform.

Introducing Threat Intelligence Ingestion Rules
Microsoft Sentinel just rolled out a powerful new public preview feature: Ingestion Rules. This feature lets you fine-tune your threat intelligence (TI) feeds before they are ingested into Microsoft Sentinel. You can now set custom conditions and actions on Indicators of Compromise (IoCs), Threat Actors, Attack Patterns, Identities, and their Relationships. Use cases include:

Filter out false positives: Suppress IoCs from feeds known to generate frequent false positives, ensuring only relevant intel reaches your analysts.
Extend IoC validity: Lengthen validity periods for feeds that need longer lifespans.
Tag TI objects: Apply tags that match your organization's terminology and workflows.

Get Started Today with Ingestion Rules
To create a new ingestion rule, navigate to "Intel Management" and click "Ingestion rules". With the new Ingestion rules feature, you can modify or remove indicators even before they are integrated into Sentinel. These rules act on indicators currently in the ingestion pipeline. Note: it can take up to 15 minutes for a rule to take effect.

Use Case #1: Delete IoCs with low confidence scores during ingestion
When ingesting IoCs from TAXII, the upload API, or file upload, indicators are imported continuously. With pre-ingestion rules, you can filter out indicators that do not meet a certain confidence threshold. Specifically, you can set a rule to drop all indicators in the pipeline with a confidence score of 0, ensuring that only reliable data makes it through.

Use Case #2: Extend IoC expiration
The following rule can be created to automatically extend the expiration date for all indicators in the pipeline where the confidence score is greater than 75. This ensures that these high-value indicators remain active and usable for a longer duration, enhancing the overall effectiveness of threat detection and response.

Use Case #3: Bulk Tagging
Bulk tagging is an efficient way to manage and categorize large volumes of indicators based on their confidence scores. With pre-ingestion rules, you can set up a rule to tag all indicators in the pipeline where the confidence score is greater than 75. This automated tagging process helps in organizing indicators, making it easier to search, filter, and analyze them based on their tags. It streamlines the workflow and improves the overall management of indicators within Sentinel.

Managing Ingestion rules
In addition to the specific use cases mentioned, managing ingestion rules gives you control over the entire ingestion process.

1. Reorder Rules
You can reorder rules to prioritize certain actions over others, ensuring that the most critical rules are applied first. This flexibility allows for a tailored approach to data ingestion, optimizing the system's performance and accuracy.

2. Create From
Creating new ingestion rules from existing ones can save you a significant amount of time and offers the flexibility to incorporate additional logic or remove unnecessary elements. Duplicating rules this way lets you quickly adapt to new requirements, streamline operations, and maintain a high level of efficiency in managing your data ingestion process.

3. Delete Ingestion Rules
Over time, certain rules may become obsolete or redundant as your organizational needs and security strategies evolve. It's important to note that each workspace is limited to a maximum of 25 ingestion rules.
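Once your rules are active, a quick KQL check can confirm they are having the intended effect on newly ingested indicators. The sketch below assumes the ThreatIntelIndicators table (used again for hunting later on this page) and that your feeds populate the Confidence column and a dynamic Tags array; adjust it to your environment.

// Sanity-check ingestion rules against indicators ingested in the last day.
ThreatIntelIndicators
| where TimeGenerated > ago(1d)
| summarize arg_max(TimeGenerated, *) by Id        // keep only the latest version of each indicator
| where IsDeleted == false
| summarize
    ZeroConfidence = countif(Confidence == 0),                                      // should stay at 0 if Use Case #1 is working
    HighConfidence = countif(Confidence > 75),
    HighConfidenceTagged = countif(Confidence > 75 and array_length(Tags) > 0)      // Use Case #3 coverage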
Having a clean and relevant set of rules ensures that your data ingestion process remains streamlined and efficient, minimizing unnecessary processing and potential conflicts. Deleting outdated or unnecessary rules allows for a more focused approach to threat detection and response, and it reduces clutter, which can significantly enhance performance. By regularly reviewing and purging obsolete rules, you maintain a high level of operational efficiency and ensure that only the most critical and up-to-date rules are in place.

Conclusion
By leveraging these pre-ingestion rules effectively, you can enhance the quality and reliability of the IoCs ingested into Sentinel, leading to more accurate threat detection and an improved security posture for your organization.

Announcing Public Preview: New STIX Objects in Microsoft Sentinel
Security teams often struggle to understand the full context of an attack. In many cases, they rely solely on Indicators of Compromise (IoCs) without the broader insights provided by threat intelligence developed on Threat Actors, Attack Patterns, Identities - and the Relationships between each. This lack of context available to enrich their workflows limits their ability to connect the dots, prioritize threats effectively, and respond comprehensively to evolving attacks.

To help customers build out a thorough, real-time understanding of threats, we are excited to announce the public preview of new Threat Intelligence (TI) object support in Microsoft Sentinel and in the Unified SOC Platform. In addition to Indicators of Compromise (IoCs), Microsoft Sentinel now supports Threat Actors, Attack Patterns, Identities, and Relationships. This enhancement empowers organizations to take their threat intelligence management to the next level. In this blog, we'll highlight key scenarios for which your team would use STIX objects, along with demos showing how to create objects and new relationships and how to use them to hunt threats across your organization.

Key Scenarios
STIX objects are a critical tool for incident responders attempting to understand an attack and for threat intelligence analysts seeking more information on critical threats. The STIX standard is designed to improve interoperability and sharing of threat intelligence across different systems and organizations. Below, we've highlighted four ways Unified SOC Platform customers can begin using STIX objects to protect their organization.

Ingesting objects: You can now ingest these objects from various commercial feeds through several methods, including STIX TAXII servers, the API, files, or manual input.
Curating threat intelligence: Curate and manage any of the supported Threat Intelligence objects.
Creating relationships: Establish connections between objects to enhance threat detection and response. For example: connecting a Threat Actor to an Attack Pattern (the threat actor "APT29" uses the attack pattern "Phishing via Email" to gain initial access); linking an Indicator to a Threat Actor (a malicious domain is attributed to the threat actor "APT29"); associating an Identity (victim) with an Attack Pattern (the organization "Example Corp" is targeted by the attack pattern "Phishing via Email").
Hunting and investigating threats more effectively: Match curated TI data against your logs in the unified SOC platform powered by Microsoft Sentinel. Use these insights to detect, investigate, and hunt threats more efficiently, keeping your organization secure.

Get Started Today with the new Hunting Model
The ability to ingest and manage these new Threat Intelligence objects is now available in public preview. To enable this data in your workspaces for hunting and detection, submit your request here and we will provide further details.

Demos and screenshots

Demo 1: Hunt and detect threats using STIX objects
Scenario: Linking an IoC to a Threat Actor. An indicator (malicious domain) is attributed to the threat actor "Sangria Tempest" via the new TI relationship builder. Please note that the Sangria Tempest actor object and the IoC are already present in this demo. These objects can be added automatically or created manually.

To create a new relationship, sign in to your Sentinel instance and go to Add new > TI relationship. In the New TI relationship builder, you can select existing TI objects and define how each is related to one or more other TI objects.
After defining a TI object's relationship, click "Common" to provide metadata for this relationship, such as description, tags, and confidence score. Another type of metadata a customer can add to a relationship is the Traffic Light Protocol (TLP). The TLP is a set of designations used to ensure that sensitive information is shared with the appropriate audience. It uses four colors to indicate different levels of sensitivity and the corresponding sharing permissions:

TLP:RED: Information is highly sensitive and should not be shared outside of the specific group or meeting where it was originally disclosed.
TLP:AMBER: Information can be shared with members of the organization, but not publicly. It is intended to be used within the organization to protect sensitive information.
TLP:GREEN: Information can be shared with peers and partner organizations within the community, but not publicly. It is intended for a wider audience within the community.
TLP:WHITE: Information can be shared freely and publicly without any restrictions.

Once the relationship is created, it can be viewed from the "Relationships" tab.

Now, retrieve information about relationships and indicators associated with the threat actor "Sangria Tempest". For Microsoft Sentinel customers leveraging the Azure portal experience, you can access this in Log Analytics. For customers who have migrated to the unified SecOps platform in the Defender portal, you can find this under "Advanced Hunting". The following KQL query returns all TI objects related to "Sangria Tempest"; you can use this query for any threat actor name.

let THREAT_ACTOR_NAME = 'Sangria Tempest';
let ThreatIntelObjectsPlus = (ThreatIntelObjects
    | union (ThreatIntelIndicators | extend StixType = 'indicator')
    | extend tlId = tostring(Data.id)
    | extend StixTypes = StixType
    | extend Pattern = case(StixType == "indicator", Data.pattern, StixType == "attack-pattern", Data.name, "Unknown")
    | extend feedSource = base64_decode_tostring(tostring(split(Id, '---')[0]))
    | summarize arg_max(TimeGenerated, *) by Id
    | where IsDeleted == false);
let ThreatActorsWithThatName = (ThreatIntelObjects
    | where StixType == 'threat-actor'
    | where Data.name == THREAT_ACTOR_NAME
    | extend tlId = tostring(Data.id)
    | extend ActorName = tostring(Data.name)
    | summarize arg_max(TimeGenerated, *) by Id
    | where IsDeleted == false);
let AllRelationships = (ThreatIntelObjects
    | where StixType == 'relationship'
    | extend tlSourceRef = tostring(Data.source_ref)
    | extend tlTargetRef = tostring(Data.target_ref)
    | extend tlId = tostring(Data.id)
    | summarize arg_max(TimeGenerated, *) by Id
    | where IsDeleted == false);
let SourceRelationships = (ThreatActorsWithThatName
    | join AllRelationships on $left.tlId == $right.tlSourceRef
    | join ThreatIntelObjectsPlus on $left.tlTargetRef == $right.tlId);
let TargetRelationships = (ThreatActorsWithThatName
    | join AllRelationships on $left.tlId == $right.tlTargetRef
    | join ThreatIntelObjectsPlus on $left.tlSourceRef == $right.tlId);
SourceRelationships
| union TargetRelationships
| project ActorName, StixTypes, ObservableValue, Pattern, Tags, feedSource

You now have all the information your organization has available about Sangria Tempest, correlated to maximize your understanding of the threat actor and its associations to threat infrastructure and activity.
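Before building a relationship query like the one above, it can help to see which non-indicator STIX objects already exist in your workspace. The short companion query below is a sketch based on the same ThreatIntelObjects schema used above; the name property inside the Data column is assumed to be populated where the feed supplies it.

// Enumerate the non-indicator STIX objects currently in the workspace, grouped by type.
ThreatIntelObjects
| summarize arg_max(TimeGenerated, *) by Id          // keep only the latest version of each object
| where IsDeleted == false
| extend ObjectName = tostring(Data.name)
| summarize Objects = count(), SampleNames = make_set(ObjectName, 10) by StixType
| sort by Objects desc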
Demo 2: Curate and attribute objects
We have created a new UX to streamline TI object creation, which includes the capability to attribute to other objects - so while you are creating a new IoC, you can also attribute that indicator to a Threat Actor, all from one place. To create a new TI object and attribute it to one or multiple threat actors, follow the steps below:

Go to Add new > TI object. In the context menu, select any object type. Enter all the required information in the fields on the right-hand side for your selected indicator type. While creating a new TI object, you can also curate it, which includes defining relationships. You can also quickly duplicate TI objects, making it easier for those who create multiple TI objects daily. Please note that we also introduced an "Add and duplicate" button that lets customers create multiple TI objects with the same metadata, streamlining what would otherwise be a manual bulk process.

Demo 3: New supported IoC types
The indicator builder now supports the creation of four new indicator types. These enable customers to build more specific attack patterns that boost understanding of and organizational knowledge around threats. The new indicator types are:

X509 certificate: X509 certificates are used to authenticate the identity of devices and servers, ensuring secure communication over the internet. They are crucial in preventing man-in-the-middle attacks and verifying the legitimacy of websites and services. For instance, if a certificate is suddenly replaced or a new, unknown certificate appears, it could indicate a compromised server or a malicious actor attempting to intercept communications.
JA3: JA3 fingerprints are unique identifiers generated from the TLS/SSL handshake process. They help in identifying specific applications and tools used in network traffic, making it easier to detect malicious activities. For example, if network traffic analysis reveals a JA3 fingerprint matching that of the Cobalt Strike tool, it could indicate an ongoing cyberattack.
JA3S: JA3S fingerprints extend the capabilities of JA3 by also including server-specific characteristics in the fingerprinting process. This provides a more comprehensive view of the network traffic and helps in identifying both client-side and server-side threats. For instance, if a server starts communicating with an unknown external IP address using a specific JA3S fingerprint, it could be a sign of a compromised server or a data exfiltration attempt.
User agent: User agents provide information about the client software making requests to a server, such as the browser or operating system. They are useful in identifying and profiling devices and applications accessing a network. For example, if a user agent string associated with a known malicious browser extension appears in network logs, it could indicate a compromised device.

Conclusion
The ability to ingest, curate, and establish relationships between various threat intelligence objects such as Threat Actors, Attack Patterns, and Identities provides a powerful framework for incident responders and threat intelligence analysts. The use of STIX objects not only improves interoperability and sharing of threat intelligence but also empowers organizations to hunt and investigate threats more efficiently. As customers adopt these new capabilities, they will find themselves better equipped to understand the full context of an attack and build robust defenses against future threats.
With the public preview of Threat Intelligence (TI) object support, organizations are encouraged to explore these new tools and integrate them into their security operations, taking the first step towards a more informed and proactive approach to cybersecurity.

What’s New in Microsoft Sentinel: November 2025
Welcome to our new Microsoft Sentinel blog series! We’re excited to launch a new blog series focused on Microsoft Sentinel. From the latest product innovations and feature updates to industry recognition, success stories, and major events, you’ll find it all here. This first post kicks off the series by celebrating Microsoft’s recognition as a Leader in the 2025 Gartner Magic Quadrant for SIEM¹. It also introduces the latest innovations designed to deliver measurable impact and empower defenders with adaptable, collaborative tools in an evolving threat landscape.

Microsoft is recognized as a Leader in the 2025 Gartner Magic Quadrant for Security Information and Event Management (SIEM)
Microsoft Sentinel continues to drive security innovation—and the industry is taking notice. Microsoft was named a Leader in the 2025 Gartner Magic Quadrant for Security Information and Event Management (SIEM)¹, published on October 8, 2025. We believe this acknowledgment reinforces our commitment to helping organizations stay secure in a rapidly changing threat landscape. Read the blog for more information.

Take advantage of the M365 E5 benefit and Microsoft Sentinel promotional pricing

Microsoft 365 E5 benefit
Customers with Microsoft 365 E5, A5, F5, or G5 licenses automatically receive up to 5 MB of free data ingestion per user per day, covering key security data sources like Azure AD sign-in logs and Microsoft Cloud App Security discovery logs—no enrollment required. Read more about M365 benefits for Microsoft Sentinel.

New 50 GB promotional pricing
To make Microsoft Sentinel more accessible to small and mid-sized organizations, we introduced a new 50 GB commitment tier in public preview, with promotional pricing starting October 1, 2025, through March 31, 2026. Customers who choose the 50 GB commitment tier during this period will maintain their promotional rate until March 31, 2027. Available globally, with regional variations in pricing, it is accessible through EA, CSP, and Direct channels. For more information, see the Microsoft Sentinel pricing page.

Partner Integrations: Strengthening TI collaboration and workflow automation
Microsoft Sentinel continues to expand its ecosystem with powerful partner integrations that enhance security operations. With Cyware, customers can now share threat intelligence bi-directionally across trusted destinations, ISACs, and multi-tenant environments—enabling real-time intelligence exchange that strengthens defenses and accelerates coordinated response. Learn more about the Cyware integration. Meanwhile, the BlinkOps integration, combined with Sentinel’s SOAR capabilities, empowers SOC teams to automate repetitive tasks, orchestrate complex playbooks, and streamline workflows end-to-end. This automation reduces operational overhead, cuts mean time to respond (MTTR), and frees analysts for strategic threat hunting. Learn more about the BlinkOps integration.

Harnessing Microsoft Sentinel Innovations
Security is being reengineered for the AI era, moving beyond static, rule-based controls and reactive post-breach response toward platform-led, machine-speed defense. To overcome fragmented tools, sprawling signals, and legacy architectures that cannot keep pace with modern attacks, Microsoft Sentinel has evolved into both a SIEM and a unified security platform for agentic defense.
These updates introduce architectural enhancements and advanced capabilities that enable AI-driven security operations at scale, helping organizations detect, investigate, and respond with unprecedented speed and precision.

Microsoft Sentinel graph – Public Preview
Unified graph analytics for deeper context and threat reasoning. Microsoft Sentinel graph delivers an interactive, visual map of entity relationships, helping analysts uncover hidden attack paths, lateral movement, and root causes for pre- and post-breach investigations. Read the tech community blog for more details.

Microsoft Sentinel Model Context Protocol (MCP) server – Public Preview
Context is key to effective security automation. The Microsoft Sentinel MCP server introduces a standardized protocol for building context-aware solutions, enabling developers to create smarter integrations and workflows within Sentinel. This opens the door to richer automation scenarios and more adaptive security operations. Read the tech community blog for more details.

Enhanced UEBA with New Data Sources – Public Preview
We are excited to announce support for six new sources in our user entity and behavior analytics algorithm, including AWS, GCP, Okta, and Azure. Now, customers can gain deeper, cross-platform visibility into anomalous behavior for earlier and more confident detection. Read our blog and check out our Ninja Training to learn more.

Developer Solutions for Microsoft Sentinel platform – Public Preview
Expanded APIs, solution templates, and integration capabilities empower developers to build and distribute custom workflows and apps via the Microsoft Security Store. This unlocks faster innovation, streamlined operations, and new revenue opportunities, extending Sentinel beyond out-of-the-box functionality for greater agility and resilience. Read the tech community blog for more details.

Growing ecosystem of Microsoft Sentinel data connectors
We are excited to announce the general availability of four new data connectors: AWS Server Access Logs, Google Kubernetes Engine, Palo Alto CSPM, and Palo Alto Cortex Xpanse. Visit the "Find your Microsoft Sentinel data connector" page for the list of data connectors currently supported. We are also inviting private previews for four additional connectors (AWS EKS, Qualys VM KB, Alibaba Cloud Network, and Holm Security) as part of our commitment to expanding the breadth and depth of supported data sources. Our customer support team can help you sign up for previews.

New agentless data connector for the Microsoft Sentinel Solution for SAP applications
We’re excited to announce the general availability of a new agentless connector for the Microsoft Sentinel solution for SAP applications, designed to simplify integration and enhance security visibility. This connector enables seamless ingestion of SAP logs and telemetry directly into Microsoft Sentinel, helping SOC teams monitor critical business processes, detect anomalies, and respond to threats faster—all while reducing operational overhead.

Events, Webinars and Training
Stay connected with the latest security innovation and best practices. From global conferences to expert-led sessions, these events offer opportunities to learn, network, and explore how Microsoft is shaping AI-driven, end-to-end security for the modern enterprise.

Microsoft Ignite 2025
Security takes center stage at Microsoft Ignite, with dedicated sessions and hands-on experiences for security professionals and leaders.
Join us in San Francisco, November 17–21, 2025, or online, to explore our AI-first, end-to-end security platform designed to protect identities, devices, data, applications, clouds, infrastructure—and critically—AI systems and agents. Register today!

Microsoft Security Webinars
Stay ahead of emerging threats and best practices with expert-led webinars from the Microsoft Security Community. Discover upcoming sessions on Microsoft Sentinel SIEM & platform, Defender, Intune, and more. Sign up today and be part of the conversation that shapes security for everyone. Learn more about upcoming webinars.

Onboard Microsoft Sentinel in Defender – Video Series
Microsoft leads the industry in both SIEM and XDR, delivering a unified experience that brings these capabilities together seamlessly in the Microsoft Defender portal. This integration empowers security teams to correlate insights, streamline workflows, and strengthen defenses across the entire threat landscape. Ready to get started? Explore our video series to learn how to onboard your Microsoft Sentinel experience and unlock the full potential of integrated security. Watch the "Microsoft Sentinel is now in Defender" video series.

MDTI Convergence into Microsoft Sentinel & Defender XDR overview
Discover how Microsoft Defender Threat Intelligence Premium is transforming cybersecurity by integrating into Defender XDR, Sentinel, and the Defender portal. Watch this session to learn about new features, expanded access to threat intelligence, and how these updates strengthen your security posture.

Partner Sentinel Bootcamp
Transform your security team from Sentinel beginners to advanced practitioners. This comprehensive 2-day bootcamp helps participants master architecture design, data ingestion strategies, multi-tenant management, and advanced analytics while learning to leverage Microsoft's AI-first security platform for real-world threat detection and response. Register here for the bootcamp.

Looking to dive deeper into Microsoft Sentinel development? Check out the official developer resources at https://aka.ms/AppAssure_SentinelDeveloper. It’s the central reference for developers and security teams who want to build custom integrations, automate workflows, and extend Sentinel’s capabilities. Bookmark this link as your starting point for hands-on guidance and tools.

Stay Connected
Check back each month for the latest innovations, updates, and events to ensure you’re getting the most out of Microsoft Sentinel.

1 Gartner® Magic Quadrant™ for Security Information and Event Management, Andrew Davies, Eric Ahlm, Angel Berrios, Darren Livingstone, 8 October 2025.

Microsoft Sentinel for SAP Agentless connector GA
Dear Community,

Today is the day: our new agentless connector for the Microsoft Sentinel Solution for SAP applications is generally available now! Fully onboarded to SAP’s official Business Accelerator Hub and ready for prime time wherever your SAP systems are waiting to be protected – on-premises, hyperscalers, RISE, or GROW.

Let’s hear from an agentless customer: “With the Microsoft Sentinel Solution for SAP and its new agentless connector, we accelerated deployment across our SAP landscape without the complexity of containerized agents. This streamlined approach elevated our SOC’s visibility into SAP security events, strengthened our compliance posture, and enabled faster, more informed incident response” – SOC Specialist, North American aviation company.

Use the video below to kick off your own agentless deployment today. #Kudos to the amazing mvigilante for showing us around the new connector!

But we didn’t stop there! Security is being reengineered for the AI era - moving from static, rule-based controls to platform-driven, machine-speed defense that anticipates threats before they strike. Attackers think in graphs - Microsoft does too. We’re bringing relationship-aware context to Microsoft Security so defenders and AI can see connections, understand the impact of a potential compromise (blast radius), and act faster across pre-breach and post-breach scenarios, including SAP systems - your crown jewels. See it in action in the phishing compromise below, which led to an SAP login bypassing MFA, followed by operating-system activity on the SAP host that downloaded trojan software. Enjoy this clickable experience for more details on the scenario.

Shows how a phishing compromise escalated to an SAP MFA bypass, highlighting cross-domain correlation.

The Sentinel Solution for SAP is built with AI-first in mind and directly integrates with our security platform in the Defender portal for enterprise-wide signal correlation, Security Copilot reasoning, and Sentinel data lake usage. Your real-time SAP detections operate on the analytics tier for instant results and threat hunting, while the same SAP logs get mirrored to the lake for cost-efficient long-term storage (up to 12 years). Access that data for compliance reporting or historic analysis through KQL jobs on the lake. No more “yes, the data is stored somewhere” just to tick the audit-report check box – you can actually query and use your SAP telemetry in long-term storage at scale. Learn more here.

Findings from the agentless connector preview
During our preview we learned that the majority of customers immediately profit from the far smoother onboarding experience compared to the Docker-based approach. Deployment effort and time to first SAP log arrival in Sentinel went from days and weeks to hours.

⚠️ Deprecation notice for the containerized data connector agent ⚠️
The containerized SAP data connector will be deprecated on 30 September 2026. This change aligns with the discontinuation of the SAP RFC SDK, SAP’s strategic integration roadmap, and customer demand for simpler integration. Migrate to the new agentless connector for simplified onboarding and compliance with SAP’s roadmap. All new deployments starting October 31, 2025, will only have the new agentless connector option, and existing customers should plan their migration using the guidance on Microsoft Learn. It will be billed at the same price as the containerized agent, ensuring no cost impact for customers.
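To make the “query your SAP telemetry at scale” point above concrete, here is a minimal KQL sketch counting failed SAP logons per user over a long look-back window, the kind of query you could schedule as a KQL job against data retained in the lake. Treat every specific in it as an assumption: the ABAPAuditLog table/function name and the User and MessageID columns mirror the agent-based solution and may differ in your agentless deployment, and the audit-log message IDs shown are illustrative. Check the solution documentation and your own schema before relying on it.

// Hedged sketch: failed SAP logons per user over the past year.
// Replace ABAPAuditLog and the column/message-ID values with the names
// actually present in your workspace.
ABAPAuditLog
| where TimeGenerated > ago(365d)
| where MessageID in ("AU2", "AUM")            // assumed: failed logon / user locked events
| summarize FailedLogons = count() by User, bin(TimeGenerated, 30d)
| order by FailedLogons desc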
Note📌: To support the transition for those of you on the Docker-based data connector, we have enhanced our built-in KQL functions for SAP to work across data sources for hybrid and parallel execution.

Spotlight on new Features
Inspired by the feedback of early adopters, we are shipping two of the most requested new capabilities with GA right away.

Customizable polling frequency: Balance threat detection value (1-minute intervals give the best value) with utilization of SAP Integration Suite resources based on your needs. ⚠️ Warning: increasing the intervals may result in message processing truncation to avoid SAP CPI saturation. See this blog for more insights, and refer to the max-rows parameter and SAP documentation to make informed decisions.

Customizable API endpoint path suffix: Flexible endpoints allow running all your SAP security integration flows from the agentless connector and adherence to your naming strategies. Furthermore, you can add community extensions like SAP S/4HANA Cloud public edition (GROW), the SAP Table Reader, and more.

Displays the simplified onboarding flow for the agentless SAP connector.

You want more? Here is your chance to share additional feature requests to influence our backlog. We would like to hear from you!

Getting Started with Agentless
The new agentless connector automatically appears in your environment – make sure to upgrade to the latest version, 3.4.05 or higher.

Sentinel Content Hub view: highlights the agentless SAP connector tile in the Microsoft Defender portal, ready for one-click deployment and integration with your security platform.

The deployment experience in Sentinel is fully automatic with a single button click: it creates the Azure Data Collection Endpoint (DCE), Data Collection Rule (DCR), and a Microsoft Entra ID app registration assigned the RBAC role "Monitoring Metrics Publisher" on the DCR to allow SAP log ingestion.

Explore partner add-ons that build on top of agentless
The ISV partner ecosystem for the Microsoft Sentinel Solution for SAP is growing to tailor the agentless offering even further. The current cohort includes flagship providers such as our co-engineering partner SAP SE, with their security products SAP LogServ and SAP Enterprise Threat Detection (ETD), and our mutual partners Onapsis and SecurityBridge.

Ready to go agentless?
➤ Get started from here.
➤ Explore partner add-ons here.
➤ Share feature requests here.

Next Steps
Once deployed, I recommend checking AryaG’s insightful blog series for details on how to move to production with the built-in SAP content of agentless. Looking to expand protection to SAP Business Technology Platform? Here you go.

#Kudos to the amazing Sentinel for SAP team and our incredible community contributors! That's a wrap 🎬. Remember: bringing SAP under the protection of your central SIEM isn't just a checkbox - it's essential for comprehensive security and compliance across your entire IT estate.

Cheers,
Martin

Microsoft Sentinel data lake FAQ
On September 30, 2025, Microsoft announced the general availability of the Microsoft Sentinel data lake, designed to centralize and retain massive volumes of security data in open formats like Delta Parquet. By decoupling storage from compute, the data lake supports flexible querying, while offering unified data management and cost-effective retention. The Sentinel data lake is a game changer for security teams, serving as the foundational layer for agentic defense, deeper security insights, and graph-based enrichment. In this blog we offer answers to many of the questions we’ve heard from our customers and partners.

General questions

1. What is the Microsoft Sentinel data lake?
Microsoft has expanded its industry-leading SIEM solution, Microsoft Sentinel, to include a unified security data lake, designed to help optimize costs, simplify data management, and accelerate the adoption of AI in security operations. This modern data lake serves as the foundation for the Microsoft Sentinel platform. It has a cloud-native architecture and is purpose-built for security—bringing together all security data for greater visibility, deeper security analysis, and contextual awareness. It provides affordable, long-term retention, allowing organizations to maintain robust security while effectively managing budgetary requirements.

2. What are the benefits of the Sentinel data lake?
Microsoft Sentinel data lake is designed for flexible analytics, cost management, and deeper security insights.
It centralizes security data in an open format like Delta Parquet for easy access. This unified view enhances threat detection, investigation, and response across hybrid and multi-cloud environments.
It introduces a disaggregated storage and compute pricing model, allowing customers to store massive volumes of security data at a fraction of the cost compared to traditional SIEM solutions.
It allows multiple analytics engines like Kusto, Spark, and ML to run on a single data copy, simplifying management, reducing costs, and supporting deeper security analysis.
It integrates with GitHub Copilot and VS Code, empowering SOC teams to automate enrichment, anomaly detection, and forensic analysis.
It supports AI agents via the MCP server, allowing tools like GitHub Copilot to query and automate security tasks. The MCP server layer brings intelligence to the data, offering Semantic Search, Query Tools, and Custom Analysis capabilities that make it easier to extract insights and automate workflows.
Customers also benefit from streamlined onboarding, intuitive table management, and scalable multi-tenant support, making it ideal for MSSPs and large enterprises.
The Sentinel data lake is purpose-built for security workloads, ensuring that processes from ingestion to analytics meet cybersecurity requirements.

3. Is the Sentinel data lake generally available?
Yes. The Sentinel data lake is generally available (GA) starting September 30, 2025. To learn more, see the GA announcement blog.

4. What happens to Microsoft Sentinel SIEM?
Microsoft is expanding Sentinel into an AI-powered, end-to-end security platform that includes SIEM and new platform capabilities: security data lake, graph-powered analytics, and the MCP server. SIEM remains a core component and will be actively developed and supported.

Getting started

1. What are the prerequisites for the Sentinel data lake?
To get started: connect your Sentinel workspace to Microsoft Defender prior to onboarding to the Sentinel data lake.
Once in the Defender experience, see the data lake onboarding documentation for next steps. Note: Sentinel is moving to the Microsoft Defender portal, and the Sentinel Azure portal will be retired by July 2026.

2. I am a Sentinel-only customer, not a Defender customer. Can I use the Sentinel data lake?
Yes. You must connect Sentinel to the Defender experience before onboarding to the Sentinel data lake. Microsoft Sentinel is generally available in the Microsoft Defender portal, with or without Microsoft Defender XDR or an E5 license. If you have created a Log Analytics workspace, enabled it for Sentinel, and have the right Microsoft Entra roles (e.g., Global Administrator + Subscription Owner, or Security Administrator + Sentinel Contributor), you can enable Sentinel in the Defender portal. For more details on how to connect Sentinel to Defender, review: Microsoft Sentinel in the Microsoft Defender portal.

3. In what regions is the Sentinel data lake available?
For supported regions see: Geographical availability and data residency in Microsoft Sentinel | Microsoft Learn.

4. Is there an expected release date for Microsoft Sentinel data lake in Government clouds?
While the exact date is not yet finalized, we anticipate support for these clouds soon.

5. How will URBAC and Entra RBAC work together to manage the data lake, given there is no centralized model?
Entra RBAC will provide broad access to the data lake (URBAC maps the right permissions to specific Entra role holders: GA/SA/SO/GR/SR). URBAC will become a centralized pane for configuring non-global delegated access to the data lake. For today, you will use this for the “default data lake” workspace. In the future, this will be enabled for non-default Sentinel workspaces as well – meaning all workspaces in the data lake can be managed here for data lake RBAC requirements. Azure RBAC on the Log Analytics (LA) workspace in the data lake is respected through URBAC as well today. If you already hold a built-in role like Log Analytics Reader, you will be able to run interactive queries over the tables in that workspace. Or, if you hold Log Analytics Contributor, you can read and manage table data. For more details see: Roles and permissions in the Microsoft Sentinel platform | Microsoft Learn.

Data ingestion and storage

1. How do I ingest data into the Sentinel data lake?
To ingest data into the Sentinel data lake, you can use existing Sentinel data connectors or custom connectors to bring data from Microsoft and third-party sources. Data can be ingested into the analytics tier and/or the data lake tier. Data ingested into the analytics tier is automatically mirrored to the lake, while lake-only ingestion is available for select tables. Data retention is configured in table management. Note: certain tables do not support data lake-only ingestion via either the API or the data connector UI. See here for more information: Custom log tables.

2. What is Microsoft’s guidance on when to use the analytics tier vs. the data lake tier?
Sentinel data lake offers flexible, built-in data tiering (analytics and data lake tiers) to effectively meet diverse business use cases and achieve cost optimization goals.

Analytics tier:
Is ideal for high-performance, real-time, end-to-end detections, enrichments, investigation, and interactive dashboards. Typically, high-fidelity data from EDRs, email gateways, identity, SaaS and cloud logs, and threat intelligence (TI) should be ingested into the analytics tier.
Data in the analytics tier is best monitored proactively with scheduled alerts and scheduled analytics to enable security detections.
Data in this tier is retained at no cost for up to 90 days by default, extendable to 2 years.
A copy of the data in this tier is automatically available in the data lake tier at no extra cost, ensuring a unified copy of security data for both tiers.

Data lake tier:
Is designed for cost-effective, long-term storage. High-volume logs like NetFlow logs, TLS/SSL certificate logs, firewall logs, and proxy logs are best suited for the data lake tier.
Customers can use these logs for historical analysis, compliance and auditing, incident response (IR), forensics over historical data, building tenant baselines, and TI matching, and then promote the resulting insights into the analytics tier.
Customers can run full Kusto queries, Spark notebooks, and scheduled jobs over a single copy of their data in the data lake.
Customers can also search, enrich, and restore data from the data lake tier to the analytics tier for full analytics.
For more details see the documentation.

3. What does it mean that a copy of all new analytics tier data will be available in the data lake?
When the Sentinel data lake is enabled, a copy of all new data ingested into the analytics tier is automatically duplicated into the data lake tier. This means customers don’t need to manually configure or manage this process—every new log or telemetry added to the analytics tier becomes instantly available in the data lake. This allows security teams to run advanced analytics, historical investigations, and machine learning models on a single, unified copy of data in the lake, while still using the analytics tier for real-time SOC workflows. It’s a seamless way to support both operational and long-term use cases—without duplicating effort or cost.

4. Is there any cost for retention in the analytics tier?
You get 90 days of analytics retention free; simply set analytics retention to 90 days or less. For the total retention setting, only the mirrored portion that overlaps with the free analytics retention is free in the data lake. Retaining data in the lake beyond the analytics retention period incurs additional storage costs. See the documentation for more details: Manage data tiers and retention in Microsoft Sentinel | Microsoft Learn.

5. What is the guidance for Microsoft Sentinel Basic and Auxiliary Logs customers?
If you previously enabled the Basic or Auxiliary Logs plan in Sentinel:
You can view Basic Logs in the Defender portal but manage them from the Log Analytics workspace. To manage them in the Defender portal, you must change the plan from Basic to Analytics.
Existing Auxiliary Log tables will be available in the data lake tier for use once the Sentinel data lake is enabled. Prior to the availability of the Sentinel data lake, Auxiliary Logs provided a long-term retention solution for Sentinel SIEM. Once the data lake is enabled, Auxiliary Log tables will be available in the Sentinel data lake for use with the data lake experiences. Billing for Auxiliary Logs will switch to Sentinel data lake meters.
Microsoft Sentinel customers are recommended to start planning their data management strategy with the data lake. While Basic and Auxiliary Logs are still available, they are not being enhanced further. Please plan on onboarding your security data to the Sentinel data lake. Azure Monitor customers can continue to use Basic and Auxiliary Logs for observability scenarios.

6. What happens to customers that already have Archive logs enabled?
If a customer has already configured tables for Archive retention, those settings will be inherited by the Sentinel data lake and will not change. Data in the Archive logs will continue to be accessible through the Sentinel search and restore experiences. Mirrored data (in the data lake) will be accessible via the lake explorer and notebook jobs.

Example: if a customer has 12 months of total retention enabled on a table, then 2 months after enabling ingestion into the Sentinel data lake, the customer will still have access to 12 months of archived data (through the Sentinel search and restore experiences), but access to only 2 months of data in the data lake (since the data lake was enabled).

Key considerations for customers that currently have Archive logs enabled:
The existing archive will remain, with new data ingested into the data lake going forward; previously stored archive data will not be backfilled into the lake. Archive logs will continue to be accessible via the Search and Restore tab under Sentinel.
If analytics and data lake mode are enabled on a table, which is the default setting for analytics tables when the Sentinel data lake is enabled, data will continue to be ingested into the Sentinel data lake and the archive going forward. There will only be one retention billing meter going forward. The archive will continue to be accessible via Search and Restore.
If Sentinel data lake-only mode is enabled on a table, new data will be ingested only into the data lake; any data that’s not already in the Sentinel data lake won’t be migrated or backfilled. Data that was previously ingested under the archive plan will be accessible via Search and Restore.

7. What is the guidance for customers using Azure Data Explorer (ADX) alongside Microsoft Sentinel?
Some customers might have set up an ADX cluster to augment their Sentinel deployment. Customers can choose to continue using that setup and gradually migrate to the Sentinel data lake for new data to receive the benefits of a fully managed data lake. For all new implementations it is recommended to use the Sentinel data lake.

8. What happens to the Defender XDR data after enabling the Sentinel data lake?
By default, Defender XDR retains threat hunting data in the XDR default tier, which includes 30 days of analytics retention included in the XDR license. You can extend the table retention period for supported Defender XDR tables beyond 30 days. For more information see Manage XDR data in Microsoft Sentinel. Note: today you can't ingest XDR tables directly into the data lake tier without ingesting into the analytics tier first.

9. Are there any special considerations for XDR tables?
Yes. XDR tables are unique in that they are available for querying in advanced hunting by default for 30 days. To retain data beyond this period, an explicit change to the retention setting is required, either by extending the analytics tier retention or the total retention period. A list of XDR advanced hunting tables supported by Sentinel is documented here: Connect Microsoft Defender XDR data to Microsoft Sentinel | Microsoft Learn.

KQL queries and jobs

1. Is KQL and Notebook supported over the Sentinel data lake?
Yes, via the data lake KQL query experience along with a fully managed notebook experience, which enables Spark-based big data analytics over a single copy of all your security data. Customers can run queries across any time range of data in their Sentinel data lake. In the future, this will be extended to enable SQL queries over the lake as well.
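To make the multi-table scenarios described in questions 4 and 5 below concrete, here is a minimal sketch of the kind of KQL you can run interactively or schedule as a KQL job. It assumes the Entra ID SigninLogs and AuditLogs tables are ingested in your workspace; for the cross-workspace case, the same pattern applies to tables drawn from whichever workspaces you include in the query scope.

// Correlate failed Entra ID sign-ins with directory audit activity by the same user.
let Lookback = 30d;
let FailedSignins = SigninLogs
    | where TimeGenerated > ago(Lookback)
    | where ResultType != "0"                      // non-zero result codes indicate failed sign-ins
    | project SigninTime = TimeGenerated, UserPrincipalName, IPAddress;
AuditLogs
| where TimeGenerated > ago(Lookback)
| extend Initiator = tostring(InitiatedBy.user.userPrincipalName)
| join kind=inner (FailedSignins) on $left.Initiator == $right.UserPrincipalName
| project TimeGenerated, OperationName, Initiator, SigninTime, IPAddress
| take 100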
2. Why are there two different places to run KQL queries in the Sentinel experience?
Consolidating the advanced hunting and KQL explorer user interfaces is on the roadmap. Security analysts will benefit from a unified query experience across both the analytics and data lake tiers.

3. Where is the output from KQL jobs stored?
KQL job results are written into an existing or new analytics tier table.

4. Is it possible to run KQL queries on multiple data lake tables?
Yes, you can run KQL interactive queries and jobs using operators like join or union.

5. Can KQL queries (either interactive or via KQL jobs) join data across multiple workspaces?
Yes, security teams can run multi-workspace KQL queries for broader threat correlation.

Pricing and billing

1. How does a customer pay for the Sentinel data lake?
Sentinel data lake is a consumption-based service with a disaggregated storage and compute business model. Customers continue to pay for ingestion. Customers set up billing as part of their onboarding for storage and analytics over data in the data lake (e.g., queries, KQL or notebook jobs). See the Sentinel pricing page for more details.

2. What are the pricing components for the Sentinel data lake?
Sentinel data lake offers a flexible pricing model designed to optimize security coverage and costs. For specific meter definitions, see the documentation.

3. What are the billing updates at GA?
We are enabling compression-based billing with a simple, uniform 6:1 data compression rate across all data sources, applicable only to data lake storage. Starting October 1, 2025, data storage billing begins on the first day data is stored. To support ingestion and standardization of diverse data sources, we are introducing a new data processing feature that applies a $0.10 per GB charge for all uncompressed data ingested into the data lake for tables configured for data lake-only retention (it does not apply to tables configured for both analytics and data lake tier retention).

4. How is retention billed for tables that use data lake-only ingestion and retention?
During the public preview, data lake-only tables included the first 30 days of retention at no cost. At GA, storage costs will be billed. In addition, when retention billing switches to using compressed data size (instead of ingested size), charges will apply for the entire retention period. Because billing will be based on compressed data size, customers can expect significant savings on storage costs.

5. Does the “data processing” meter apply to analytics tier data duplicated in the data lake?
No.

6. What happens to billing for customers that activate the Sentinel data lake on a table with Archive logs enabled?
Customers will automatically be billed using the data lake storage meter. Note: this means that customers will be charged using the 6:1 compression rate for data lake retention.

7. How do I control my Sentinel data lake costs?
Sentinel is billed based on consumption, and prices vary based on usage. An important tool for managing the majority of the cost is the analytics “Commitment Tiers”. The data lake complements this strategy for higher-volume data like network and firewall data to reduce analytics tier costs. Use the Azure pricing calculator and the Sentinel pricing page to estimate costs and understand pricing.

8. How do I manage Sentinel data lake costs?
We are introducing a new cost management experience (public preview) to help customers with cost predictability, billing transparency, and operational efficiency.
These in-product reports provide customers with insights into usage trends over time, enabling them to identify cost drivers and optimize data retention and processing strategies. Customers will also be able to set usage-based alerts on specific meters to monitor and control costs. For example, you can receive alerts when query or notebook usage passes set limits, helping avoid unexpected expenses and manage budgets. See the documentation to learn more.

9. If I’m an Auxiliary Logs customer, how will onboarding to the Sentinel data lake affect my billing?
Once a workspace is onboarded to the Sentinel data lake, all Auxiliary Logs meters will be replaced by new data lake meters.

Thank you
Thank you to our customers and partners for your continued trust and collaboration. Your feedback drives our innovation, and we’re excited to keep evolving Microsoft Sentinel to meet your security needs. If you have any questions, please don’t hesitate to reach out—we’re here to support you every step of the way.

How to Set Up Sentinel Data Connectors for Kubernetes and GitHub
A guide to configuring and using Sentinel connectors to collect logs and data from your Kubernetes clusters and GitHub CI/CD pipelines. Part 2 of a 3-part series about security monitoring of your Kubernetes clusters and CI/CD pipelines by @singhabhi and @Umesh_Nagdev.
Link to Part 1
Link to Part 3

Introduction
In part 1 of this series, we discussed the types of log sources you should consider for monitoring the security of your Kubernetes environment. This blog will demonstrate how to connect some of the critical log sources using Sentinel Data Connectors. Sentinel Data Connectors are a set of tools that enable you to collect and analyze logs and data from various sources, such as cloud services, applications, devices, and networks. Sentinel Data Connectors can help you monitor the health, performance, and security of your Kubernetes clusters and GitHub CI/CD pipelines, as well as detect and respond to threats and incidents. In this document, we will show you how to set up Sentinel Data Connectors for three types of sources: Kubernetes clusters, GitHub CI/CD pipelines, and Defender for Containers alerts and Defender for Cloud recommendations. We will also explain how to use the connectors to view and query the collected data in Sentinel.

Security monitoring use cases
Let’s first highlight some security risks you would want to monitor with Sentinel:

1. Pod Security Monitoring
Log source: Defender for Containers
Risks monitored: Detect unauthorized or suspicious pods running in the cluster. Monitor for privilege escalation attempts within pods. Track and alert on changes to pod security policies.

2. Network Security Monitoring
Log source: Defender for Containers
Risks monitored: Identify and alert on unexpected network traffic patterns. Monitor for unauthorized ingress and egress traffic. Detect and investigate potential denial-of-service (DoS) attacks.

3. Container Image Security
Log source: Defender for Cloud - Defender Cloud Security Posture Management (DCSPM)
Risks monitored: Scan container images for vulnerabilities before deployment. Monitor for unauthorized or unsigned images. Track changes to container image repositories.

4. Kubelet Activity Monitoring
Log source: Defender for Containers
Risks monitored: Monitor kubelet logs for signs of compromise or unauthorized access. Detect abnormal activities related to node management.

5. API Server Security
Log source: Defender for Containers
Risks monitored: Monitor Kubernetes API server logs for suspicious activities. Track and alert on failed authentication attempts. Detect unusual API server request patterns.

6. RBAC (Role-Based Access Control) Monitoring
Log source: AKS Diagnostics Logs, Azure AD logs, Azure Monitor Container Insights
Risks monitored: Monitor changes to RBAC policies and roles. Detect and alert on unauthorized access attempts. Track role binding changes and escalations.

7. Secrets and ConfigMap Access Monitoring
Log source: Defender for Containers
Risks monitored: Monitor for unauthorized access to Kubernetes secrets and ConfigMaps. Detect changes to sensitive configuration data. Track usage patterns of sensitive information.

8. Audit Logging
Log source: AKS Diagnostic Logs
Risks monitored: Enable and monitor Kubernetes audit logs for cluster-wide activities. Correlate audit logs to identify security events and policy violations. Regularly review audit logs for anomalies and potential threats.
9. Compliance Monitoring
Log source: Defender for Cloud - Defender Cloud Security Posture Management (DCSPM)
Risks monitored: Ensure compliance with security standards and policies. Monitor for deviations from security best practices. Generate reports on compliance status and potential risks.

10. Container Runtime Security
Log source: Defender for Containers
Risks monitored: Monitor runtime activities of containers for abnormal behavior. Detect and alert on suspicious system calls within containers. Integrate with container runtime security tools for enhanced monitoring.

11. Incident Response and Forensics
Log source: Defender for Containers
Risks monitored: Develop and test incident response plans for Kubernetes security incidents. Monitor for indicators of compromise (IoCs) and initiate investigations in Sentinel. Collect and analyze forensics data in Sentinel in the event of a security incident.

12. Cluster Health Monitoring
Log source: AKS Diagnostic Logs
Risks monitored: Regularly monitor the overall health of the Kubernetes cluster. Detect and alert on abnormal resource consumption or performance issues. Ensure the availability of critical components and services.

Prerequisites
Before you can set up Sentinel Data Connectors, you need to have the following:

Sentinel workspace. This is where you store and analyze the data collected by the connectors. Enable Sentinel on the Log Analytics workspace to which you are exporting all of the log sources mentioned below. Instructions on how to set up Sentinel.
Kubernetes cluster. This is the source of the data for the Kubernetes cluster via diagnostics logs. You can use any Kubernetes cluster that supports the Kubernetes API, such as Azure Kubernetes Service (AKS), Google Kubernetes Engine (GKE), or Amazon Elastic Kubernetes Service (EKS). We will showcase this with AKS. Instructions on how to deploy AKS.
GitHub account. This is the source of the code and manifests used for creating container images, which are then deployed in your Kubernetes clusters. You will need to configure DCSPM DevOps security for secure scanning of artifacts, or, if you are using a third-party scanning tool, you will need to send the scan results to Sentinel.
Container registry. The images stored in the registry need to be scanned for vulnerabilities. You will need access to the scan logs; this can be done via Defender for Cloud DCSPM.
Defender for Containers subscription. This is a service that provides security and compliance monitoring for your Kubernetes clusters. You need to enable Defender for Containers on the subscription where your Kubernetes cluster is located and configure it to send alerts to the Sentinel workspace. Instructions on how to enable Defender for Containers on a subscription.
A Defender for Cloud DCSPM subscription. This is a service that provides security and compliance recommendations for your cloud resources such as AKS, ACR, and your Azure tenant. You need to enable Defender for Cloud DCSPM on the subscription with your AKS cluster and configure it to send recommendations to the Sentinel workspace. Instructions on how to enable DCSPM on a subscription.

How to Set Up the Kubernetes Cluster Connector
The Kubernetes Cluster connector allows you to collect logs and metrics from your Kubernetes cluster, such as cluster events, pod logs, node metrics, and container metrics. To ingest AKS logs into Sentinel, deploy the Azure Kubernetes Solution for Sentinel, then follow the steps below to enable the AKS data connector.
Configure the AKS data connector to ingest logs into Sentinel: In Microsoft Sentinel, go to the "Data connectors" page. Find and configure the "Azure Kubernetes Service (AKS)" connector. Launch the Azure Policy wizard under configuration to enable logging.
Verify integration: After configuration, verify that logs from your AKS cluster are flowing into Sentinel.
Create Sentinel workbooks and queries (to be elaborated in part 3): Leverage Microsoft Sentinel workbooks and Kusto Query Language (KQL) queries to create visualizations and reports based on AKS logs. Customize the workbooks and queries based on your specific security and monitoring requirements.
Set up alerts and incidents (to be elaborated in part 3): Configure alerts within Microsoft Sentinel based on specific events or patterns detected in AKS logs. Set up incidents and response workflows to investigate and respond to security events.
Monitor and fine-tune: Regularly monitor the integration, alerts, and logs to ensure that the AKS logs are being properly processed in Microsoft Sentinel. Fine-tune your configurations based on feedback, new security requirements, or changes to your AKS environment.

How to Set Up the GitHub Connector
To ingest logs into Sentinel, deploy the Microsoft Sentinel - Continuous Threat Monitoring for GitHub solution. Enable the two connectors that are installed as part of this solution:
GitHub Enterprise Audit Log connector: this connector collects GitHub audit logs, which track changes to repositories, users added or removed, pull request activities, and so on.
GitHub (using Webhooks) connector: you can ingest scan data using this built-in data connector for GitHub events. The connector can pull events related to code scanning alerts, repository vulnerability alerts (via Dependabot), and secret scanning alerts.

How to Set Up the Defender for Containers Alerts and Defender for Cloud Connector
Sentinel has a built-in data connector to ingest Defender for Cloud alerts and recommendations. You can find the details at https://learn.microsoft.com/en-us/azure/sentinel/connect-defender-for-cloud#connect-to-microsoft-defender-for-cloud

Setting up the AKS data connector and additional logging for Sentinel
Set up the diagnostic settings for Azure Kubernetes Service to send the events to a Sentinel-enabled Log Analytics workspace: https://learn.microsoft.com/en-us/azure/aks/monitor-aks#aks-control-planeresource-logs. In our scenario we are using the following logs.

In addition, you will also need to enable Container Insights to get pod-level data so you can run search queries for pod-specific risks, such as pods running in the default namespace. You can refer to this resource to enable Container Insights: https://learn.microsoft.com/en-us/azure/azure-monitor/containers/kubernetes-monitoring-enable?tabs=cli#enable-container-insights. The logs will go to the ContainerLogV2 table in the Log Analytics workspace: https://learn.microsoft.com/en-us/azure/azure-monitor/containers/container-insights-logs-schema#enable-the-containerlogv2-schema. The following image shows the ContainerLogV2 schema as an example.

You will also need to go to the Sentinel Content Hub and enable the following. This will give you a workbook, several hunting queries, and a data connector to ingest AKS data. Your AKS cluster will populate the data in the following tables, which we will use to write custom search queries in the section below.
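As a head start on those custom search queries, the sketch below uses the ContainerLogV2 table populated by Container Insights to surface activity from pods running in the default namespace, one of the pod-specific risks called out earlier. The column names follow the ContainerLogV2 schema linked above (PodNamespace, PodName, ContainerName, LogMessage); verify them against the schema version in your workspace before relying on the query.

// Recent container log activity from pods in the "default" namespace -
// often a sign of ad-hoc or unreviewed deployments worth investigating.
ContainerLogV2
| where TimeGenerated > ago(1d)
| where PodNamespace == "default"
| summarize LogLines = count(), SampleMessage = take_any(LogMessage)
    by Computer, PodName, ContainerName
| order by LogLines desc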