threat detection and response
Microsoft Sentinel for SAP Agentless connector GA
Dear Community, today is the day: our new agentless connector for the Microsoft Sentinel Solution for SAP applications is now Generally Available! It is fully onboarded to SAP’s official Business Accelerator Hub and ready for prime time wherever your SAP systems are waiting to be protected – on-premises, hyperscalers, RISE, or GROW.

Let’s hear from an agentless customer: “With the Microsoft Sentinel Solution for SAP and its new agentless connector, we accelerated deployment across our SAP landscape without the complexity of containerized agents. This streamlined approach elevated our SOC’s visibility into SAP security events, strengthened our compliance posture, and enabled faster, more informed incident response.” (SOC Specialist, North American aviation company)

Use the video below to kick off your own agentless deployment today. #Kudos to the amazing mvigilante for showing us around the new connector!

But we didn’t stop there! Security is being reengineered for the AI era - moving from static, rule-based controls to platform-driven, machine-speed defense that anticipates threats before they strike. Attackers think in graphs - Microsoft does too. We’re bringing relationship-aware context to Microsoft Security - so defenders and AI can see connections, understand the impact of a potential compromise (blast radius), and act faster across pre-breach and post-breach scenarios, including SAP systems - your crown jewels.

See it in action in the phishing compromise below, which led to an SAP login that bypassed MFA, followed by operating-system activity on the SAP host that downloaded trojan software. Enjoy this clickable experience for more details on the scenario.

Shows how a phishing compromise escalated to an SAP MFA bypass, highlighting cross-domain correlation.

The Sentinel Solution for SAP is built AI-first and integrates directly with our security platform in the Defender portal for enterprise-wide signal correlation, Security Copilot reasoning, and Sentinel data lake usage. Your real-time SAP detections operate on the analytics tier for instant results and threat hunting, while the same SAP logs are mirrored to the lake for cost-efficient long-term storage (up to 12 years). Access that data for compliance reporting or historic analysis through KQL jobs on the lake. No more “yes, the data is stored somewhere” just to tick the audit-report check box: you can query and use your SAP telemetry in long-term storage at scale. Learn more here.

Findings from the Agentless Connector preview

During the preview we learned that the majority of customers immediately benefit from the far smoother onboarding experience compared to the Docker-based approach. Deployment effort and time to first SAP log arrival in Sentinel went from days and weeks down to hours.

⚠️ Deprecation notice for containerized data connector agent ⚠️

The containerized SAP data connector will be deprecated on 30 September 2026. This change aligns with the discontinuation of the SAP RFC SDK, SAP's strategic integration roadmap, and customer demand for simpler integration. Migrate to the new agentless connector for simplified onboarding and compliance with SAP’s roadmap. All new deployments starting October 31, 2025, will only have the agentless connector option, and existing customers should plan their migration using the guidance on Microsoft Learn. The agentless connector is billed at the same price as the containerized agent, ensuring no cost impact for customers.
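If you run the containerized and agentless pipelines side by side during migration, a quick KQL comparison can confirm that the new connector is delivering the same SAP audit events. This is a minimal sketch, assuming the ABAPAuditLog (agentless) and ABAPAuditLog_CL (containerized agent) table names and the solution's built-in SAPAuditLog function; verify the names that actually exist in your workspace before relying on it.

// Compare hourly SAP audit event volume from both pipelines over the last day.
// Table names are assumptions for this sketch; adjust to your workspace.
union isfuzzy=true
    (ABAPAuditLog | extend Pipeline = "agentless"),
    (ABAPAuditLog_CL | extend Pipeline = "containerized")
| where TimeGenerated > ago(1d)
| summarize Events = count() by Pipeline, bin(TimeGenerated, 1h)
| sort by TimeGenerated asc

// The cross-source SAPAuditLog function (see the note that follows) lets detections run over either pipeline.
SAPAuditLog
| where TimeGenerated > ago(1h)
| take 10

Once the two volumes track each other, you can retire the Docker-based agent with confidence.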
Note📌: To support the transition for those of you on the Docker-based data connector, we have enhanced our built-in KQL functions for SAP to work across data sources for hybrid and parallel execution.

Spotlight on new Features

Inspired by the feedback of early adopters, we are shipping two of the most requested capabilities right away with GA.

Customizable polling frequency: Balance threat detection value (1-minute intervals deliver the best value) against utilization of SAP Integration Suite resources, based on your needs. ⚠️ Warning! Increasing the intervals may result in message-processing truncation to avoid SAP CPI saturation. See this blog for more insights, and refer to the max-rows parameter and SAP documentation to make informed decisions.

Customizable API endpoint path suffix: Flexible endpoints allow you to run all your SAP security integration flows from the agentless connector while adhering to your naming strategies. Furthermore, you can add community extensions like SAP S/4HANA Cloud public edition (GROW), the SAP Table Reader, and more.

Displays the simplified onboarding flow for the agentless SAP connector

You want more? Here is your chance to share additional feature requests to influence our backlog. We would like to hear from you!

Getting Started with Agentless

The new agentless connector automatically appears in your environment – make sure to upgrade to the latest version, 3.4.05 or higher.

Sentinel Content Hub View: Highlights the agentless SAP connector tile in the Microsoft Defender portal, ready for one-click deployment and integration with your security platform

The deployment experience in Sentinel is fully automatic with a single button click: it creates the Azure Data Collection Endpoint (DCE), Data Collection Rule (DCR), and a Microsoft Entra ID app registration assigned the RBAC role "Monitoring Metrics Publisher" on the DCR to allow SAP log ingestion.

Explore partner add-ons that build on top of agentless

The ISV partner ecosystem for the Microsoft Sentinel Solution for SAP is growing to tailor the agentless offering even further. The current cohort includes flagship providers like our co-engineering partner SAP SE with their security products SAP LogServ & SAP Enterprise Threat Detection (ETD), and our mutual partners Onapsis and SecurityBridge.

Ready to go agentless?
➤ Get started from here
➤ Explore partner add-ons here.
➤ Share feature requests here.

Next Steps

Once deployed, I recommend checking AryaG’s insightful blog series for details on how to move to production with the built-in SAP content of agentless. Looking to expand protection to SAP Business Technology Platform? Here you go.

#Kudos to the amazing Sentinel for SAP team and our incredible community contributors! That's a wrap 🎬. Remember: bringing SAP under the protection of your central SIEM isn't just a checkbox - it's essential for comprehensive security and compliance across your entire IT estate.

Cheers, Martin

Microsoft Sentinel data lake FAQ
On September 30, 2025, Microsoft announced the general availability of the Microsoft Sentinel data lake, designed to centralize and retain massive volumes of security data in open formats like Delta Parquet. By decoupling storage from compute, the data lake supports flexible querying while offering unified data management and cost-effective retention. The Sentinel data lake is a game changer for security teams, serving as the foundational layer for agentic defense, deeper security insights, and graph-based enrichment. In this blog we offer answers to many of the questions we’ve heard from our customers and partners.

General questions

1. What is the Microsoft Sentinel data lake?
Microsoft has expanded its industry-leading SIEM solution, Microsoft Sentinel, to include a unified security data lake designed to help optimize costs, simplify data management, and accelerate the adoption of AI in security operations. This modern data lake serves as the foundation for the Microsoft Sentinel platform. It has a cloud-native architecture and is purpose-built for security—bringing together all security data for greater visibility, deeper security analysis, and contextual awareness. It provides affordable, long-term retention, allowing organizations to maintain robust security while effectively managing budgetary requirements.

2. What are the benefits of Sentinel data lake?
Microsoft Sentinel data lake is designed for flexible analytics, cost management, and deeper security insights.
It centralizes security data in an open format like Delta Parquet for easy access. This unified view enhances threat detection, investigation, and response across hybrid and multi-cloud environments.
It introduces a disaggregated storage and compute pricing model, allowing customers to store massive volumes of security data at a fraction of the cost compared to traditional SIEM solutions.
It allows multiple analytics engines like Kusto, Spark, and ML to run on a single data copy, simplifying management, reducing costs, and supporting deeper security analysis.
It integrates with GitHub Copilot and VS Code, empowering SOC teams to automate enrichment, anomaly detection, and forensic analysis.
It supports AI agents via the MCP server, allowing tools like GitHub Copilot to query and automate security tasks. The MCP server layer brings intelligence to the data, offering Semantic Search, Query Tools, and Custom Analysis capabilities that make it easier to extract insights and automate workflows.
Customers also benefit from streamlined onboarding, intuitive table management, and scalable multi-tenant support, making it ideal for MSSPs and large enterprises.
The Sentinel data lake is purpose-built for security workloads, ensuring that processes from ingestion to analytics meet cybersecurity requirements.

3. Is the Sentinel data lake generally available?
Yes. The Sentinel data lake is generally available (GA) starting September 30, 2025. To learn more, see the GA announcement blog.

4. What happens to Microsoft Sentinel SIEM?
Microsoft is expanding Sentinel into an AI-powered, end-to-end security platform that includes SIEM and new platform capabilities - security data lake, graph-powered analytics, and MCP Server. SIEM remains a core component and will be actively developed and supported.

Getting started

1. What are the prerequisites for Sentinel data lake?
To get started: Connect your Sentinel workspace to Microsoft Defender prior to onboarding to Sentinel data lake.
Once in the Defender experience, see the data lake onboarding documentation for next steps. Note: Sentinel is moving to the Microsoft Defender portal, and the Sentinel Azure portal will be retired by July 2026.

2. I am a Sentinel-only customer, and not a Defender customer – can I use the Sentinel data lake?
Yes. You must connect Sentinel to the Defender experience before onboarding to the Sentinel data lake. Microsoft Sentinel is generally available in the Microsoft Defender portal, with or without Microsoft Defender XDR or an E5 license. If you have created a Log Analytics workspace, enabled it for Sentinel, and have the right Microsoft Entra roles (e.g. Global Administrator + Subscription Owner, or Security Administrator + Sentinel Contributor), you can enable Sentinel in the Defender portal. For more details on how to connect Sentinel to Defender, review these sources: Microsoft Sentinel in the Microsoft Defender portal

3. In what regions is Sentinel data lake available?
For supported regions see: Geographical availability and data residency in Microsoft Sentinel | Microsoft Learn

4. Is there an expected release date for Microsoft Sentinel data lake in Government clouds?
While the exact date is not yet finalized, we anticipate support for these clouds soon.

5. How will URBAC and Entra RBAC work together to manage the data lake given there is no centralized model?
Entra RBAC provides broad access to the data lake (URBAC maps the right permissions to specific Entra role holders: GA/SA/SO/GR/SR). URBAC will become a centralized pane for configuring non-global delegated access to the data lake. Today, you will use this for the “default data lake” workspace. In the future, this will be enabled for non-default Sentinel workspaces as well – meaning all workspaces in the data lake can be managed here for data lake RBAC requirements. Azure RBAC on the Log Analytics (LA) workspace in the data lake is also respected through URBAC today. If you already hold a built-in role like Log Analytics Reader, you will be able to run interactive queries over the tables in that workspace; if you hold Log Analytics Contributor, you can read and manage table data. For more details see: Roles and permissions in the Microsoft Sentinel platform | Microsoft Learn

Data ingestion and storage

1. How do I ingest data into the Sentinel data lake?
To ingest data into the Sentinel data lake, you can use existing Sentinel data connectors or custom connectors to bring data from Microsoft and third-party sources. Data can be ingested into the analytics tier and/or the data lake tier. Data ingested into the analytics tier is automatically mirrored to the lake, while lake-only ingestion is available for select tables. Data retention is configured in table management. Note: Certain tables do not support data lake-only ingestion via either API or the data connector UI. See here for more information: Custom log tables.

2. What is Microsoft’s guidance on when to use the analytics tier vs. the data lake tier?
Sentinel data lake offers flexible, built-in data tiering (analytics and data lake tiers) to effectively meet diverse business use cases and achieve cost optimization goals.
Analytics tier: Ideal for high-performance, real-time, end-to-end detections, enrichments, investigation, and interactive dashboards. Typically, high-fidelity data from EDRs, email gateways, identity, SaaS and cloud logs, and threat intelligence (TI) should be ingested into the analytics tier.
Data in the analytics tier is best monitored proactively with scheduled alerts and scheduled analytics to enable security detections.
Data in this tier is retained at no cost for up to 90 days by default, extendable to 2 years.
A copy of the data in this tier is automatically available in the data lake tier at no extra cost, ensuring a unified copy of security data for both tiers.
Data lake tier: Designed for cost-effective, long-term storage. High-volume logs like NetFlow logs, TLS/SSL certificate logs, firewall logs, and proxy logs are best suited for the data lake tier. Customers can use these logs for historical analysis, compliance and auditing, incident response (IR), forensics over historical data, building tenant baselines, and TI matching, and then promote the resulting insights into the analytics tier.
Customers can run full Kusto queries, Spark notebooks, and scheduled jobs over a single copy of their data in the data lake. Customers can also search, enrich, and restore data from the data lake tier to the analytics tier for full analytics. For more details see documentation.

3. What does it mean that a copy of all new analytics tier data will be available in the data lake?
When Sentinel data lake is enabled, a copy of all new data ingested into the analytics tier is automatically duplicated into the data lake tier. This means customers don’t need to manually configure or manage this process—every new log or telemetry added to the analytics tier becomes instantly available in the data lake. This allows security teams to run advanced analytics, historical investigations, and machine learning models on a single, unified copy of data in the lake, while still using the analytics tier for real-time SOC workflows. It’s a seamless way to support both operational and long-term use cases—without duplicating effort or cost.

4. Is there any cost for retention in the analytics tier?
You get 90 days of analytics retention free; simply set analytics retention to 90 days or less. For the total retention setting, only the mirrored portion that overlaps with the free analytics retention is free in the data lake, and retaining data in the lake beyond the analytics retention period incurs additional storage costs. For example, with 90 days of analytics retention and 365 days of total retention, the first 90 days of the mirrored copy are free and the remaining 275 days are billed as data lake storage. See documentation for more details: Manage data tiers and retention in Microsoft Sentinel | Microsoft Learn

5. What is the guidance for Microsoft Sentinel Basic and Auxiliary Logs customers?
If you previously enabled the Basic or Auxiliary Logs plan in Sentinel:
You can view Basic Logs in the Defender portal but manage them from the Log Analytics workspace. To manage them in the Defender portal, you must change the plan from Basic to Analytics.
Existing Auxiliary Log tables will be available in the data lake tier for use once the Sentinel data lake is enabled. Prior to the availability of Sentinel data lake, Auxiliary Logs provided a long-term retention solution for Sentinel SIEM. Once the data lake is enabled, Auxiliary Log tables will be available in the Sentinel data lake for use with the data lake experiences. Billing for Auxiliary Logs will switch to Sentinel data lake meters.
Microsoft Sentinel customers are recommended to start planning their data management strategy with the data lake. While Basic and Auxiliary Logs are still available, they are not being enhanced further, so please plan on onboarding your security data to the Sentinel data lake. Azure Monitor customers can continue to use Basic and Auxiliary Logs for observability scenarios.

6. What happens to customers that already have Archive logs enabled?
If a customer has already configured tables for Archive retention, those settings will be inherited by the Sentinel data lake and will not change. Data in the Archive logs will continue to be accessible through the Sentinel search and restore experiences. Mirrored data (in the data lake) will be accessible via the lake explorer and notebook jobs.
Example: If a customer has 12 months of total retention enabled on a table, then 2 months after enabling ingestion into the Sentinel data lake, the customer will still have access to 12 months of archived data (through the Sentinel search and restore experiences), but access to only 2 months of data in the data lake (since the data lake was enabled).
Key considerations for customers that currently have Archive logs enabled:
The existing archive will remain, with new data ingested into the data lake going forward; previously stored archive data will not be backfilled into the lake. Archive logs will continue to be accessible via the Search and Restore tab under Sentinel.
If analytics and data lake mode are enabled on a table, which is the default setting for analytics tables when Sentinel data lake is enabled, data will continue to be ingested into both the Sentinel data lake and the archive going forward. There will only be one retention billing meter going forward, and the archive will continue to be accessible via Search and Restore.
If Sentinel data lake-only mode is enabled on a table, new data will be ingested only into the data lake; any data that’s not already in the Sentinel data lake won’t be migrated or backfilled. Data that was previously ingested under the archive plan will be accessible via Search and Restore.

7. What is the guidance for customers using Azure Data Explorer (ADX) alongside Microsoft Sentinel?
Some customers might have set up an ADX cluster to augment their Sentinel deployment. Customers can choose to continue using that setup and gradually migrate to the Sentinel data lake for new data to receive the benefits of a fully managed data lake. For all new implementations it is recommended to use the Sentinel data lake.

8. What happens to the Defender XDR data after enabling Sentinel data lake?
By default, Defender XDR retains threat hunting data in the XDR default tier, which includes 30 days of analytics retention included in the XDR license. You can extend the table retention period for supported Defender XDR tables beyond 30 days. For more information see Manage XDR data in Microsoft Sentinel. Note: Today you can't ingest XDR tables directly into the data lake tier without ingesting into the analytics tier first.

9. Are there any special considerations for XDR tables?
Yes. XDR tables are unique in that they are available for querying in advanced hunting by default for 30 days. To retain data beyond this period, an explicit change to the retention setting is required, either by extending the analytics tier retention or the total retention period. The XDR advanced hunting tables supported by Sentinel are documented here: Connect Microsoft Defender XDR data to Microsoft Sentinel | Microsoft Learn.

KQL queries and jobs

1. Is KQL and Notebook supported over the Sentinel data lake?
Yes, via the data lake KQL query experience along with a fully managed notebook experience that enables Spark-based big data analytics over a single copy of all your security data. Customers can run queries across any time range of data in their Sentinel data lake. In the future, this will be extended to enable SQL query over the lake as well.
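As a quick illustration of that experience, here is a minimal sketch of an interactive data lake query that unions two mirrored tables over a one-year range. The table names (CommonSecurityLog, AzureDiagnostics) are examples only; substitute tables that exist in your own workspace.

// Union two lake tables and count events per table per day across a long time range.
union withsource=SourceTable CommonSecurityLog, AzureDiagnostics
| where TimeGenerated between (ago(365d) .. now())
| summarize Events = count() by SourceTable, bin(TimeGenerated, 1d)
| sort by TimeGenerated asc

The same query body can be saved as a KQL job, with the results written to an analytics tier table for alerting or dashboards.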
2. Why are there two different places to run KQL queries in the Sentinel experience?
Consolidating the advanced hunting and KQL explorer user interfaces is on the roadmap. Security analysts will benefit from a unified query experience across both the analytics and data lake tiers.

3. Where is the output from KQL jobs stored?
KQL job output is written into an existing or new analytics tier table.

4. Is it possible to run KQL queries on multiple data lake tables?
Yes, you can run KQL interactive queries and jobs using operators like join or union.

5. Can KQL queries (either interactive or via KQL jobs) join data across multiple workspaces?
Yes, security teams can run multi-workspace KQL queries for broader threat correlation.

Pricing and billing

1. How does a customer pay for Sentinel data lake?
Sentinel data lake is a consumption-based service with a disaggregated storage and compute business model. Customers continue to pay for ingestion. Customers set up billing as part of their onboarding for storage and analytics over data in the data lake (e.g. queries, KQL or notebook jobs). See the Sentinel pricing page for more details.

2. What are the pricing components for Sentinel data lake?
Sentinel data lake offers a flexible pricing model designed to optimize security coverage and costs. For specific meter definitions, see documentation.

3. What are the billing updates at GA?
We are enabling compression-aware billing with a simple and uniform data compression rate of 6:1 across all data sources, applicable only to data lake storage. Starting October 1, 2025, data storage billing begins on the first day data is stored. To support ingestion and standardization of diverse data sources, we are introducing a new Data Processing feature that applies a $0.10 per GB charge for all uncompressed data ingested into the data lake for tables configured for data lake-only retention (it does not apply to tables configured for both analytics and data lake tier retention).

4. How is retention billed for tables that use data lake-only ingestion and retention?
During the public preview, data lake-only tables included the first 30 days of retention at no cost. At GA, storage costs will be billed. In addition, when retention billing switches to using compressed data size (instead of ingested size), charges will apply for the entire retention period. Because billing will be based on compressed data size, customers can expect significant savings on storage costs.

5. Does the “Data processing” meter apply to analytics tier data duplicated in the data lake?
No.

6. What happens to billing for customers that activate Sentinel data lake on a table with archive logs enabled?
Customers will automatically be billed using the data lake storage meter. Note: This means that customers will be charged using the 6:1 compression rate for data lake retention.

7. How do I control my Sentinel data lake costs?
Sentinel is billed based on consumption and prices vary based on usage. An important tool for managing the majority of the cost is the use of analytics “Commitment Tiers”. The data lake complements this strategy for higher-volume data like network and firewall data to reduce analytics tier costs. Use the Azure pricing calculator and the Sentinel pricing page to estimate costs and understand pricing.
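To make those meters concrete, here is an illustrative back-of-the-envelope calculation using only the rates stated above (actual unit prices for storage and queries come from the Sentinel pricing page). Suppose a firewall sends 60 GB of uncompressed logs per day to a data lake-only table: the Data Processing meter charges 60 GB × $0.10 = $6 per day, and storage is billed on the compressed size of 60 GB ÷ 6 = 10 GB per day, which accumulates over the configured retention period. Data mirrored from the analytics tier is not charged the Data Processing meter at all.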
8. How do I manage Sentinel data lake costs?
We are introducing a new cost management experience (public preview) to help customers with cost predictability, billing transparency, and operational efficiency. These in-product reports provide customers with insights into usage trends over time, enabling them to identify cost drivers and optimize data retention and processing strategies. Customers will also be able to set usage-based alerts on specific meters to monitor and control costs. For example, you can receive alerts when query or notebook usage passes set limits, helping avoid unexpected expenses and manage budgets. See documentation to learn more.

9. If I’m an Auxiliary Logs customer, how will onboarding to the Sentinel data lake affect my billing?
Once a workspace is onboarded to Sentinel data lake, all Auxiliary Logs meters will be replaced by new data lake meters.

Thank you

Thank you to our customers and partners for your continued trust and collaboration. Your feedback drives our innovation, and we’re excited to keep evolving Microsoft Sentinel to meet your security needs. If you have any questions, please don’t hesitate to reach out—we’re here to support you every step of the way.

AI-Powered MITRE ATT&CK Tagging for SOC Optimization
This post is part of an update series highlighting new SOC optimization capabilities designed to help SOC teams maximize security value with less manual effort. In this post, we focus on AI-powered MITRE ATT&CK tagging, which streamlines the process of aligning your detections with the MITRE framework. For an overview of our other recent updates, stay tuned for related posts in this series.

Security teams rely on precise, consistent tagging to understand detection coverage, align with frameworks like MITRE ATT&CK, and respond effectively to threats. Yet in practice, tagging detections manually is error-prone, inconsistent, and resource-intensive — leaving gaps in coverage and missed opportunities for insight. To address this challenge, we’re excited to introduce a powerful new capability within SOC optimization: AI MITRE ATT&CK tagging.

Problem Statement

In today’s evolving threat landscape, aligning detection rules with the MITRE ATT&CK framework is critical for understanding and improving an organization’s security posture. MITRE tagging provides a common language to describe attacker behaviour, enabling security teams to assess their threat coverage, identify detection gaps, and drive a threat-informed defence. It powers key SOC experiences in Microsoft Sentinel, such as MITRE coverage views, use case recommendations, incident investigation context, and coverage optimization workflows.

When tagging is missing or incomplete — for example, when only tactics are mapped without corresponding techniques — the ability to accurately assess protection against known adversary behaviours is weakened. This limits visibility into which threats are covered, complicates incident correlation, and prevents clear communication of coverage gaps to stakeholders. As a result, security teams struggle to prioritize detection improvements and risk leaving critical areas underprotected. These gaps lead to:
Incomplete visibility into coverage against known threats
Limited ability to recommend or prioritize relevant use cases
Fragmented alignment between detection rules and incident response workflows
Without consistent MITRE tagging, teams spend valuable time manually reviewing and mapping rules — delaying threat response and reducing overall SOC efficiency.

The Solution

AI MITRE ATT&CK tagging automates this process using artificial intelligence models that run directly in your workspace. The model scans your detection content and identifies which MITRE ATT&CK tactics and techniques apply, offering recommended tags for detections that are currently untagged. These recommendations can be easily reviewed and applied, allowing you to:
Achieve complete detection coverage aligned with the MITRE ATT&CK framework
Eliminate manual effort and reduce human error in tagging
Enhance detection clarity and response workflows
Gain insights into security posture with more structured and actionable data

“AI-based tagging helps us reduce manual workload, since previously we tagged detections manually, and it also helps with faster triage. Besides, AI-based tagging is standardized, helping to reduce inconsistencies due to human error.” Farid Kalaidji, Security Lead at Pink Elephant

What it looks like

Let’s say you’re reviewing your detection posture and come across a new card in SOC optimization: “Coverage improvement by AI MITRE Tagging”. The card highlights a list of detection rules in your environment that are missing MITRE ATT&CK mappings and offers AI-suggested tags to help close those gaps.
You click into the experience and see the relevant rules, each with recommended tactic and technique tags. Now you can quickly get a sense of where coverage is missing and what can be improved. If you’re looking for efficiency, you can simply click “Apply All” to tag every recommended rule at once. It’s a quick way to bring your rules up to date and ensure your MITRE coverage reflects your true detection posture – no manual tagging required. This improves not just the MITRE blade, but also use case recommendations, incident investigation context, and overall visibility into your threat coverage.

Please note that by selecting "choose rule", you also have the option to review and tag individual rules from the list.

By heading to the MITRE ATT&CK blade, you can validate the improved coverage. The updated view includes newly applied tactics and techniques, reflecting your improved posture.

Next Steps

Get started with SOC optimization today. We hope this detailed walkthrough helps you understand the benefits of this feature and improve your security coverage. Microsoft will continue to invest in this feature to assist our customers in defending against evolving security threats.

Learn More

SOC optimization documentation: SOC optimization overview ; Recommendations logic
Short overview and demo: SOC optimization Ninja show
In depth webinar: Manage your data, costs and protections with SOC optimization
SOC optimization API: Introducing SOC Optimization API | Microsoft Community Hub
MITRE ATT&CK coverage: View MITRE coverage for your organization from Microsoft Sentinel

Risk-based Recommendation for SOC Optimization
This post is part of a blog series highlighting new SOC optimization capabilities designed to help SOC teams maximize security value and reduce costs, leveraging tailored dynamic recommendations. In this post, we will focus on risk-based optimization, an exciting new capability that helps prioritize detection coverage based on the business risks most pertinent to your organization.

Security teams often face the challenge of deciding where to focus detection efforts, especially when resources are limited and threats are constantly evolving. Traditional approaches treat all detections equally, making it difficult to align security operations with organizational priorities. The risk-based optimization capability surfaces high-value detection recommendations tied directly to financial, compliance, legal, and reputational risks, helping teams make informed decisions about where to strengthen coverage.

“Risk-Based Optimization has significantly influenced decision-making in threat management by providing a structured approach to prioritize and address risks.” Elie El Karkafi, Senior Solutions Architect, ampiO Solutions

Importance of risk-based security prioritization

One of the most pressing challenges today is that many organizations struggle to align their detection strategies with the real-world business risks that matter most. This disconnect is not just operational — it's organizational. Research shows that just 69% of board members see eye-to-eye with their CISOs (Harvard Business Review - Link). While business stakeholders focus on maintaining operations, controlling costs, and enabling growth, cybersecurity teams focus on threat mitigation, technical coverage, and vulnerability management. Without a shared understanding of risk, misalignment is inevitable. For example, the board might prioritize operational continuity, while the security team might focus on patching critical vulnerabilities - even if those vulnerabilities have no meaningful impact on core business services. This mismatch leads to:
Security blind spots where high-value assets remain underprotected
Misallocation of resources, with low-impact threats consuming equal effort
Difficulty communicating security priorities to business leadership
Limited ROI visibility, as security investments aren’t tied to business outcomes

What’s needed is a shared framework that allows both technical and non-technical stakeholders to view and prioritize cybersecurity risk in business terms. This includes understanding the financial impact of asset compromise — for example, what is the estimated loss if a major airline’s booking system is taken offline, millions of customer records are breached, and the incident becomes public? These are no longer theoretical scenarios — they are real and must be addressed accordingly. A risk-based approach to prioritization begins with understanding your environment:
Inventory critical assets, including systems, users, and processes — both internal and external
Threat model the ways those assets could be compromised or disrupted
Assess exposure, considering threat likelihood and your organization’s risk tolerance
Prioritize protections by assigning financial or operational impact values to potential losses
Without this structured prioritization, organizations risk spending time and money without truly improving their security posture where it counts.

New Risk-Based Optimization solution

To help bridge the gap between business risk and security operations, we’re introducing Risk-Based Optimization.
With the Risk-Based Optimization capability, customers can:
Identify underprotected, high-risk areas
Understand which business risks are impacted, such as financial fraud, data breaches, or operational downtime
Receive recommendations aligned with both MITRE ATT&CK tactics and business consequences

Key benefits include:
Enhanced coverage across broad, business-risk-aligned threat scenarios
Prioritization of high-risk threats affecting mission-critical functions
Operational efficiency by concentrating resources on high-value detections
Visual context through radar charts and MITRE coverage maps
Actionable recommendations that integrate into detection tagging and configuration workflows

As part of the public preview, Risk-Based Optimization includes three foundational use cases that align threat types with specific business risks:
Credential Exploitation
Network Intrusion
Data Exfiltration

These scenarios surface directly within the SOC optimization experience in the unified portal, alongside existing recommendations. Users receive coverage scores and improvement suggestions that span both SIEM and XDR content — all mapped to relevant MITRE tactics, techniques, and sub-techniques for full visibility and traceability. Risk-based optimizations offer a broad, business-centric lens to kickstart a more strategic coverage approach. Customers can begin with these high-level optimizations, then drill down into more specific threat scenarios as needed.

“Very impressed with the ease of use and intuitiveness of the feature. It enables Security Operations to focus on making risk-based decisions without being bogged down in technical complexity. The outcomes directly support broader organizational goals. I’m genuinely amazed by how straightforward it is, with clear and impactful outcomes.” Shivniel Gounder, Principal Cybersecurity Engineer, DEFEND

“It's aligning security measures with business risks, helping to focus resources on high-impact risks. And based on these insights and recommendations, we could have actionable steps to improve security coverage better and better.” Michael Morten Sonne, blog.sonnes.cloud, Microsoft MVP

What it looks like

Risk-based optimization brings clarity to a challenge every security team faces: how do we know if we're protected where it matters most? In the unified Microsoft security portal, SOC optimization now surfaces a set of cards, each highlighting a different business risk where your coverage could be improved. Let’s take Credential Exploitation as an example. The card alerts you that your current coverage is low and that improvements are available. With one click on “Learn about risk types”, you're taken into a detailed view that explains what the risk entails, what business areas it impacts (like financial, compliance, legal, etc.), and how your current MITRE ATT&CK coverage compares to the recommended baseline.

The experience is designed for action — you don’t need to search for rules or hunt for guidance. The system surfaces exactly what detections to add, and with a direct link to the Content Hub, you can start improving your coverage immediately. This connected workflow extends into the MITRE blade as well, where you can view the scenario’s tactics and techniques across the ATT&CK framework, helping you validate improvements and maintain alignment with real-world threats. Risk-based recommendations help transform detection management from a reactive task into a strategic advantage - bridging the gap between technical depth and business impact.
“The whole addition of Risk-Based Scenarios is fantastic in terms of driving businesses to act to configure their detection rules. I would like to see this more widely adopted in the future to really build up the visibility of business risks in detections.” Vebjørn Høyland, Senior Cybersecurity Consultant, Move AS

Next Steps

Get started with Microsoft Sentinel in the Defender portal today to take advantage of SOC optimization recommendations tailored for your organization. Microsoft will continue to invest in SOC optimization features to help our customers enhance their security against evolving cyberthreats.

Learn More

SOC optimization documentation: SOC optimization overview ; Recommendation's logic
Short overview and demo: SOC optimization Ninja show
In depth webinar: Manage your data, costs and protections with SOC optimization
SOC optimization API: Introducing SOC Optimization API | Microsoft Community Hub
MITRE ATT&CK coverage: View MITRE coverage for your organization from Microsoft Sentinel

Announcing Public Preview: New STIX Objects in Microsoft Sentinel
Security teams often struggle to understand the full context of an attack. In many cases, they rely solely on Indicators of Compromise (IoCs) without the broader insights provided by threat intelligence developed on threat actors, attack patterns, identities - and the relationships between each. This lack of context available to enrich their workflows limits their ability to connect the dots, prioritize threats effectively, and respond comprehensively to evolving attacks.

To help customers build out a thorough, real-time understanding of threats, we are excited to announce the public preview of new Threat Intelligence (TI) object support in Microsoft Sentinel and in the unified SOC platform. In addition to Indicators of Compromise (IoCs), Microsoft Sentinel now supports Threat Actors, Attack Patterns, Identities, and Relationships. This enhancement empowers organizations to take their threat intelligence management to the next level. In this blog, we’ll highlight key scenarios for which your team would use STIX objects, as well as demos showing how to create objects and new relationships and how to use them to hunt threats across your organization.

Key Scenarios

STIX objects are a critical tool for incident responders attempting to understand an attack and for threat intelligence analysts seeking more information on critical threats. STIX is designed to improve interoperability and sharing of threat intelligence across different systems and organizations. Below, we’ve highlighted four ways unified SOC platform customers can begin using STIX objects to protect their organization.

Ingesting Objects: You can now ingest these objects from various commercial feeds through several methods, including STIX TAXII servers, API, files, or manual input.
Curating Threat Intelligence: Curate and manage any of the supported threat intelligence objects.
Creating Relationships: Establish connections between objects to enhance threat detection and response. For example: connecting a threat actor to an attack pattern (the threat actor "APT29" uses the attack pattern "Phishing via Email" to gain initial access); linking an indicator to a threat actor (an indicator, such as a malicious domain, is attributed to the threat actor "APT29"); associating an identity (victim) with an attack pattern (the organization "Example Corp" is targeted by the attack pattern "Phishing via Email").
Hunt and Investigate Threats More Effectively: Match curated TI data against your logs in the unified SOC platform powered by Microsoft Sentinel. Use these insights to detect, investigate, and hunt threats more efficiently, keeping your organization secure.

Get Started Today with the new Hunting Model

The ability to ingest and manage these new threat intelligence objects is now available in public preview. To enable this data in your workspaces for hunting and detection, submit your request here and we will provide further details.

Demos and screenshots

Demo 1: Hunt and detect threats using STIX objects

Scenario: Linking an IoC to a threat actor. An indicator (malicious domain) is attributed to the threat actor "Sangria Tempest" via the new TI relationship builder. Please note that the Sangria Tempest actor object and the IoC are already present in this demo. These objects can be added automatically or created manually.

To create a new relationship, sign into your Sentinel instance and go to Add new > TI relationship. In the New TI relationship builder, you can select existing TI objects and define how each is related to one or more other TI objects.
After defining a TI object’s relationship, click on “Common” to provide metadata for this relationship, such as Description, Tags, and Confidence score.

Another type of metadata a customer can add to a relationship is the Traffic Light Protocol (TLP). The TLP is a set of designations used to ensure that sensitive information is shared with the appropriate audience. It uses four colors to indicate different levels of sensitivity and the corresponding sharing permissions:
TLP:RED: Information is highly sensitive and should not be shared outside of the specific group or meeting where it was originally disclosed.
TLP:AMBER: Information can be shared with members of the organization, but not publicly. It is intended to be used within the organization to protect sensitive information.
TLP:GREEN: Information can be shared with peers and partner organizations within the community, but not publicly. It is intended for a wider audience within the community.
TLP:WHITE: Information can be shared freely and publicly without any restrictions.

Once the relationship is created, it can be viewed from the “Relationships” tab. Now, retrieve information about relationships and indicators associated with the threat actor 'Sangria Tempest'. For Microsoft Sentinel customers leveraging the Azure portal experience, you can access this in Log Analytics. For customers who have migrated to the unified SecOps platform in the Defender portal, you can find this under “Advanced Hunting”. The following KQL query provides you with all TI objects related to “Sangria Tempest”; you can use this query for any threat actor name.

let THREAT_ACTOR_NAME = 'Sangria Tempest';
let ThreatIntelObjectsPlus = (ThreatIntelObjects
    | union (ThreatIntelIndicators | extend StixType = 'indicator')
    | extend tlId = tostring(Data.id)
    | extend StixTypes = StixType
    | extend Pattern = case(StixType == "indicator", Data.pattern, StixType == "attack-pattern", Data.name, "Unknown")
    | extend feedSource = base64_decode_tostring(tostring(split(Id, '---')[0]))
    | summarize arg_max(TimeGenerated, *) by Id
    | where IsDeleted == false);
let ThreatActorsWithThatName = (ThreatIntelObjects
    | where StixType == 'threat-actor'
    | where Data.name == THREAT_ACTOR_NAME
    | extend tlId = tostring(Data.id)
    | extend ActorName = tostring(Data.name)
    | summarize arg_max(TimeGenerated, *) by Id
    | where IsDeleted == false);
let AllRelationships = (ThreatIntelObjects
    | where StixType == 'relationship'
    | extend tlSourceRef = tostring(Data.source_ref)
    | extend tlTargetRef = tostring(Data.target_ref)
    | extend tlId = tostring(Data.id)
    | summarize arg_max(TimeGenerated, *) by Id
    | where IsDeleted == false);
let SourceRelationships = (ThreatActorsWithThatName
    | join AllRelationships on $left.tlId == $right.tlSourceRef
    | join ThreatIntelObjectsPlus on $left.tlTargetRef == $right.tlId);
let TargetRelationships = (ThreatActorsWithThatName
    | join AllRelationships on $left.tlId == $right.tlTargetRef
    | join ThreatIntelObjectsPlus on $left.tlSourceRef == $right.tlId);
SourceRelationships
| union TargetRelationships
| project ActorName, StixTypes, ObservableValue, Pattern, Tags, feedSource

You now have all the information your organization has available about Sangria Tempest, correlated to maximize your understanding of the threat actor and its associations to threat infrastructure and activity.
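To turn that context into a hunt, you can feed the indicator values returned above into a match against your own telemetry. The sketch below is a hedged example that checks indicator values against the DnsEvents table; the log table and its columns are assumptions, so substitute whichever DNS, proxy, or network tables your workspace actually collects.

// Collect current, non-deleted indicator values and look for them in DNS query logs.
let ActiveIndicators = ThreatIntelIndicators
    | summarize arg_max(TimeGenerated, *) by Id
    | where IsDeleted == false and isnotempty(ObservableValue)
    | distinct ObservableValue;
DnsEvents
| where TimeGenerated > ago(7d)
| where Name in (ActiveIndicators)
| project TimeGenerated, Computer, ClientIP, Name

Scoping the list to a single actor works the same way: reuse the relationship query above and project only ObservableValue before the match.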
Demo 2: Curate and attribute objects

We have created a new UX to streamline TI object creation, which includes the capability to attribute to other objects; while you are creating a new IoC, you can also attribute that indicator to a threat actor, all from one place. To create a new TI object and attribute it to one or multiple threat actors, follow the steps below:
Go to Add new > TI object.
In the context menu, select any object type.
Enter all the required information in the fields on the right-hand side for your selected indicator type.
While creating a new TI object, you can also curate it, including defining its relationships. You can also quickly duplicate TI objects, making it easier for those who create multiple TI objects daily. Please note that we also introduced an "Add and duplicate" button to allow customers to create multiple TI objects with the same metadata, streamlining a manual bulk process.

Demo 3: New supported IoC types

The attack pattern builder now supports the creation of four new indicator types. These enable customers to build more specific attack patterns that boost understanding of and organizational knowledge around threats. The new indicators include:

X509 certificate
X509 certificates are used to authenticate the identity of devices and servers, ensuring secure communication over the internet. They are crucial in preventing man-in-the-middle attacks and verifying the legitimacy of websites and services. For instance, if a certificate is suddenly replaced or a new, unknown certificate appears, it could indicate a compromised server or a malicious actor attempting to intercept communications.

JA3
JA3 fingerprints are unique identifiers generated from the TLS/SSL handshake process. They help in identifying specific applications and tools used in network traffic, making it easier to detect malicious activities. For example, if a network traffic analysis reveals a JA3 fingerprint matching that of the Cobalt Strike tool, it could indicate an ongoing cyberattack.

JA3S
JA3S fingerprints extend the capabilities of JA3 by also including server-specific characteristics in the fingerprinting process. This provides a more comprehensive view of the network traffic and helps in identifying both client-side and server-side threats. For instance, if a server starts communicating with an unknown external IP address using a specific JA3S fingerprint, it could be a sign of a compromised server or a data exfiltration attempt.

User agent
User agents provide information about the client software making requests to a server, such as the browser or operating system. They are useful in identifying and profiling devices and applications accessing a network. For example, if a user agent string associated with a known malicious browser extension appears in network logs, it could indicate a compromised device.

Conclusion

The ability to ingest, curate, and establish relationships between various threat intelligence objects such as Threat Actors, Attack Patterns, and Identities provides a powerful framework for incident responders and threat intelligence analysts. The use of STIX objects not only improves interoperability and sharing of threat intelligence but also empowers organizations to hunt and investigate threats more efficiently. As customers adopt these new capabilities, they will find themselves better equipped to understand the full context of an attack and build robust defenses against future threats.
With the public preview of Threat Intelligence (TI) object support, organizations are encouraged to explore these new tools and integrate them into their security operations, taking the first step towards a more informed and proactive approach to cybersecurity.

Introducing Threat Intelligence Ingestion Rules
Microsoft Sentinel just rolled out a powerful new public preview feature: Ingestion Rules. This feature lets you fine-tune your threat intelligence (TI) feeds before they are ingested into Microsoft Sentinel. You can now set custom conditions and actions on Indicators of Compromise (IoCs), Threat Actors, Attack Patterns, Identities, and their Relationships. Use cases include:
Filter out false positives: Suppress IoCs from feeds known to generate frequent false positives, ensuring only relevant intel reaches your analysts.
Extend IoC validity periods for feeds that need longer lifespans.
Tag TI objects to match your organization's terminology and workflows.

Get Started Today with Ingestion Rules

To create a new ingestion rule, navigate to “Intel Management” and click on “Ingestion rules”. With the new ingestion rules feature, you have the power to modify or remove indicators even before they are integrated into Sentinel; these rules act on indicators currently in the ingestion pipeline. Note: It can take up to 15 minutes for a rule to take effect.

Use Case #1: Delete IoCs with a low confidence score during ingestion

When ingesting IoCs from TAXII, the Upload API, or file upload, indicators are imported continuously. With pre-ingestion rules, you can filter out indicators that do not meet a certain confidence threshold. Specifically, you can set a rule to drop all indicators in the pipeline with a confidence score of 0, ensuring that only reliable data makes it through.

Use Case #2: Extending IoC validity

The following rule can be created to automatically extend the expiration date for all indicators in the pipeline where the confidence score is greater than 75. This ensures that these high-value indicators remain active and usable for a longer duration, enhancing the overall effectiveness of threat detection and response.

Use Case #3: Bulk tagging

Bulk tagging is an efficient way to manage and categorize large volumes of indicators based on their confidence scores. With pre-ingestion rules, you can set up a rule to tag all indicators in the pipeline where the confidence score is greater than 75. This automated tagging process helps in organizing indicators, making it easier to search, filter, and analyze them based on their tags. It streamlines the workflow and improves the overall management of indicators within Sentinel.
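Once your rules are active, a quick KQL check can confirm they are doing what you expect. This is a minimal sketch assuming the ThreatIntelIndicators table and its Confidence and Tags columns; adjust the names if your workspace uses the legacy ThreatIntelligenceIndicator schema, and note that "your-bulk-tag" is a placeholder for whatever tag your rule applies.

// Bucket newly ingested indicators by confidence: indicators with confidence 0 should no longer appear.
ThreatIntelIndicators
| where TimeGenerated > ago(1d)
| summarize arg_max(TimeGenerated, *) by Id
| where IsDeleted == false
| summarize Indicators = count() by ConfidenceBucket = case(Confidence == 0, "confidence 0", Confidence > 75, "confidence > 75", "other")

// Spot-check that the bulk tag was applied to high-confidence indicators.
ThreatIntelIndicators
| where Confidence > 75 and tostring(Tags) has "your-bulk-tag"
| take 10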
Managing Ingestion rules

In addition to the specific use cases mentioned, managing ingestion rules gives you control over the entire ingestion process.

1. Reorder Rules
You can reorder rules to prioritize certain actions over others, ensuring that the most critical rules are applied first. This flexibility allows for a tailored approach to data ingestion, optimizing the system's performance and accuracy.

2. Create From
Creating new ingestion rules from existing ones can save you a significant amount of time and offers the flexibility to incorporate additional logic or remove unnecessary elements. Effectively duplicating these rules ensures you can quickly adapt to new requirements, streamline operations, and maintain a high level of efficiency in managing your data ingestion process.

3. Delete Ingestion Rules
Over time, certain rules may become obsolete or redundant as your organizational needs and security strategies evolve. It's important to note that each workspace is limited to a maximum of 25 ingestion rules. Having a clean and relevant set of rules ensures that your data ingestion process remains streamlined and efficient, minimizing unnecessary processing and potential conflicts. Deleting outdated or unnecessary rules allows for a more focused approach to threat detection and response. It reduces clutter, which can significantly enhance performance. By regularly reviewing and purging obsolete rules, you maintain a high level of operational efficiency and ensure that only the most critical and up-to-date rules are in place.

Conclusion

By leveraging these pre-ingestion rules effectively, you can enhance the quality and reliability of the IoCs ingested into Sentinel, leading to more accurate threat detection and an improved security posture for your organization.

What’s New: Exciting new Microsoft Sentinel Connectors Announcement - Ignite 2024
Microsoft Sentinel continues to be a leading cloud-native security information and event management (SIEM) solution, empowering organizations to detect, investigate, and respond to threats across their digital ecosystem at scale. Microsoft Sentinel offers robust out of the box (OOTB) content, allowing seamless connections with a wide array of data sources from both Microsoft and third-party providers. This enables comprehensive collection and analysis of security signals across multicloud, multiplatform environments, enhancing your overall security posture. In this Ignite 2024 blog post, we are thrilled to present the latest integrations contributed by our esteemed partners. These new integrations further expand the capabilities of Microsoft Sentinel, enabling you to connect your existing security solutions and leverage Microsoft Sentinel’s powerful analytics and automation capabilities to fortify your defenses against evolving cyber threats.

Featured ISV

1Password for Microsoft Sentinel
The integration between 1Password Extended Access Management and Microsoft Sentinel provides businesses with real-time visibility and alerts for login attempts and account changes. It enables quick detection of security threats and streamlines reporting by monitoring both managed and unmanaged apps from a single, centralized platform, ensuring faster response times and enhanced security.

Cisco Secure Email Threat Defense Sentinel Application
This application collects threat information from Cisco Secure Email Threat Defense and ingests it into Microsoft Sentinel for visualization and analysis. It enhances email security by detecting and blocking advanced threats, providing comprehensive visibility and fast remediation.

Cribl Stream Solution for Microsoft Sentinel
Cribl Stream accelerates SIEM migrations by ingesting, transforming, and enriching third-party data into Microsoft Sentinel. It simplifies data onboarding, optimizes data in various formats, and helps maintain compliance, enhancing security operations and threat detection.

FortiNDR Cloud
FortiNDR Cloud integrates Fortinet’s network detection and response capabilities with Microsoft Sentinel, providing advanced threat detection and automated response. Fortinet FortiNDR Cloud enhances network security by helping to identify and mitigate threats in real time.

Pure Storage Solution for Microsoft Sentinel
This solution integrates Pure Storage’s data storage capabilities with Sentinel, providing enhanced data protection and performance. It helps optimize storage infrastructure and improve data security.

New and Notable

CyberArk Audit for Microsoft Sentinel
This solution extracts audit trail data from CyberArk and integrates it with Microsoft Sentinel, providing a comprehensive view of system and user activities. It enhances incident response with automated workflows and real-time threat detection.

Cybersixgill Actionable Alerts for Microsoft Sentinel
Cybersixgill provides contextual and actionable alerts based on data from the deep and dark web. It helps SOC analysts detect phishing, data leaks, and vulnerabilities, enhancing incident response and threat remediation.

Cyware For Microsoft Sentinel
Cyware integrates with Microsoft Sentinel to automate incident response and enhance threat hunting. It uses Logic Apps and hunting queries to streamline security operations and provides contextual threat intelligence.
Ermes Browser Security for Microsoft Sentinel
Ermes Browser Security ingests security and audit events into Microsoft Sentinel, providing enhanced visibility and reporting. It helps monitor and respond to web threats, improving the organization’s security posture.

Gigamon Data Connector for Microsoft Sentinel
This solution integrates Gigamon GigaVUE Cloud Suite, including Application Metadata Intelligence, with Microsoft Sentinel, providing comprehensive network traffic visibility and insights. It helps detect anomalies and optimize network performance, enhancing overall security.

Illumio Sentinel Integration
Illumio integrates its micro-segmentation capabilities with Microsoft Sentinel, providing real-time visibility and control over network traffic. It helps prevent lateral movement of threats and enhances overall network security.

Infoblox App for Microsoft Sentinel
The Infoblox solution enhances SecOps capabilities by seamlessly integrating Infoblox's AI-driven analytics, providing actionable insights, dashboards, and playbooks derived from DNS intelligence. These insights empower SecOps teams to achieve rapid incident response and remediation, all within the familiar Microsoft Sentinel user interface.

LUMINAR Threat Intelligence for Microsoft Sentinel
LUMINAR integrates threat intelligence and leaked credentials data into Microsoft Sentinel, helping organizations maintain visibility of their threat landscape. It provides timely, actionable insights to help detect and respond to threats before they impact the organization.

Prancer PenSuite AI
Prancer PenSuite AI now supercharges Microsoft Sentinel by injecting pentesting and real-time AppSec data into SOC operations. With powerful red teaming simulations, it empowers teams to detect vulnerabilities earlier, respond faster, and stay ahead of evolving threats.

Phosphorus Connector for Microsoft Sentinel
Phosphorus Cybersecurity’s Intelligent Active Discovery provides in-depth context for xIoT assets that enhances threat detection and allows for targeted responses, enabling organizations to isolate or secure specific devices based on their criticality.

Silverfort for Microsoft Sentinel
Silverfort integrates its Unified Identity Protection Platform with Microsoft Sentinel, securing authentication and access to sensitive systems, both on-premises and in the cloud, without requiring agents or proxies.

Transmit Security Data Connector for Sentinel
Transmit Security integrates its identity and access management capabilities with Sentinel, providing real-time monitoring and threat detection for user activities. It helps secure identities and prevent unauthorized access.

In addition to commercially supported integrations, Microsoft Sentinel Content Hub also connects you to hundreds of community-based solutions as well as thousands of practitioner contributions. For more details and instructions on how to set up these integrations see Microsoft Sentinel data connectors | Microsoft Learn.

To our partners: Thank you for your unwavering partnership and invaluable contributions on this journey to deliver the most comprehensive, timely insights and security value to our mutual customers. Security is indeed a team sport, and we are grateful to be working together to enhance the security landscape. Your dedication and innovation are instrumental in our collective success. We hope you find these new partner solutions useful, and we look forward to hearing your feedback and suggestions.
Stay tuned for more updates and announcements on Microsoft Sentinel and its partner ecosystem.

Learn More
Microsoft’s commitment to Security
Microsoft’s Secure Future Initiative
Unified SecOps | SIEM and XDR Solutions
Unified Platform documentation | Microsoft Defender XDR
What else is new with Microsoft Sentinel?
Microsoft Sentinel product home
Schema Mapping
Microsoft Sentinel Partner Solution Contributions Update – Ignite 2023

Additional resources:
Sentinel Ignite 2024 Blog
Latest Microsoft Tech Community Sentinel blog announcements
Microsoft Sentinel solution for SAP
Microsoft Sentinel solution for Power Platform
Microsoft Sentinel pricing
Microsoft Sentinel customer stories
Microsoft Sentinel documentation

Introducing a Unified Security Operations Platform with Microsoft Sentinel and Defender XDR
Read about our announcement of an exciting private preview that represents the next step in the SOC protection and efficiency journey by bringing together the power of Microsoft Sentinel, Microsoft Defender XDR, and Microsoft Security Copilot into a unified security operations platform.