The Sentinel migration mental model question: what's actually retiring vs what isn't?
Something keeps coming up in conversations with other Sentinel operators lately, and I think it's worth surfacing here as a proper discussion. There's a consistent gap in how the migration to the Defender portal is being understood, and I think it's causing some teams to either over-scope their effort or under-prepare.

The gap is this: the Microsoft comms have consistently told us *what* is happening (the Azure portal experience retires March 31, 2027), but the question that actually drives migration planning (what is architecturally changing versus what is just moving to a different screen) doesn't have a clean answer anywhere in the community right now.

The framing I've been working with, which I'd genuinely like other practitioners to poke holes in:

What's retiring: The Azure portal UI experience for Sentinel operations. Incident management, analytics rule configuration, hunting, automation management: all of that moves to the Defender portal.

What isn't changing: The Log Analytics workspace, all ingested data, your KQL rules, connectors, retention config, billing. None of that moves. The Defender XDR data lake is a separate Microsoft-managed layer, not a replacement for your workspace.

Where it gets genuinely complex: MSSP/multi-tenant setups, teams with meaningful SOAR investments, and anyone who's built tooling against the SecurityInsights API for incident management (which now needs to shift to Microsoft Graph for unified incidents).

The deadline extension from July 2026 to March 2027 tells its own story: Microsoft acknowledged that scale operators needed more time and capabilities. If you're in that camp, that extra runway is for proper planning, not deferral.

A few questions I'd genuinely love to hear about from people who've started the migration or are actively scoping it:

For those who've done the onboarding already: what was the thing that caught you most off guard that isn't well-documented?
For anyone running Sentinel across multiple tenants: how are you approaching the GDAP gap while Microsoft completes that capability? Are you using B2B authentication as the interim path, or Azure Lighthouse for cross-workspace querying?

I've been writing up a more detailed breakdown of this, covering the RBAC transition, automation review, and the MSSP-specific path, and the community discussion here is genuinely useful for making sure the practitioner perspective covers the right edge cases. Happy to share more context on anything above if useful.

Microsoft partners with DataBahn to accelerate enterprise deployments for Microsoft Sentinel
Enterprise security teams are collecting more telemetry than ever across cloud platforms, endpoints, SaaS applications, and on-premises infrastructure. Security teams want broader data coverage and longer retention without losing control of cost and data quality. This post explains the new DataBahn integration with Microsoft Sentinel, why it matters for SIEM operations, and how to think about using a security data pipeline alongside Sentinel for onboarding, normalization, routing, and governance.

DataBahn joins the Microsoft Sentinel partner ecosystem

This integration reflects Microsoft Sentinel's open partner ecosystem, giving customers choice in the partners they use alongside Microsoft Sentinel to manage their security data pipelines. DataBahn joins a broader set of complementary partners, enabling customers to tailor solutions for their unique security data needs. DataBahn is available through Microsoft Marketplace, and customers can apply existing Azure Consumption Commitments toward the purchase of DataBahn.

Why this matters for security operations teams

Security teams are under relentless pressure to ingest more data, move faster through SIEM migrations, and preserve data fidelity for detections and investigations, all while managing costs effectively. The challenge isn't just ingesting data, but ensuring the right telemetry arrives in a consistent, governed format that analysts and detections can trust. This is where a security data pipeline, alongside Microsoft Sentinel's native connectors and DCRs, can add value. It helps streamline onboarding of third-party and custom sources, improve normalization consistency, and provide operational visibility across diverse environments as deployments scale.

What the DataBahn integration is positioned to do with Microsoft Sentinel

Security teams want broader coverage and need to ensure third-party data is consistently shaped, routed, and governed at scale.
This is where a security data pipeline like DataBahn complements Microsoft Sentinel. Sitting upstream of ingestion, the pipeline layer standardizes onboarding and shaping across sources while providing operational visibility into data flow and pipeline health. Together, the collaboration focuses on reducing onboarding friction, improving normalization consistency, enabling intentional routing, and strengthening governance signals so teams can quickly detect source changes, parser breaks, or data gaps, while staying aligned with Sentinel analytics and detection workflows. This model gives Sentinel customers more choice to move faster, onboard data at scale, and retain control over data routing.

Key capabilities

Bidirectional data integration

The integration enables seamless delivery of telemetry into Sentinel while aligning with Sentinel detection logic and schema expectations. This helps ensure telemetry pipelines remain consistent with:

- Sentinel detection formats
- Custom analytics rules
- Sentinel data models and schemas

Automated table and DCR management

As detections evolve, pipeline configurations can adapt to maintain detection fidelity and data consistency.

Advanced management API

DataBahn provides an advanced management API that allows organizations to programmatically configure and manage pipeline integrations with Sentinel. This enables teams to:

- Automate pipeline configuration
- Manage operational workflows
- Integrate pipeline management into broader security or DevOps automation processes

Automatic identification of configuration conflicts

In complex environments with multiple telemetry sources and routing rules, configuration conflicts can arise across filtering logic, enrichment pipelines, and detection dependencies.
The integration helps automatically:

- Detect conflicts in filtering rules and pipeline logic
- Identify clashes with detection dependencies
- Highlight missing configurations or coverage gaps

[Figure: Automated detection of configuration conflicts and pipeline rule dependencies]

This visibility allows SOC teams to quickly identify issues that could impact detection reliability.

Centralized pipeline management

The integration enables centralized management of data collection and transformation workflows associated with Sentinel telemetry pipelines. This provides unified visibility and control across telemetry sources while maintaining compatibility with Sentinel analytics and detections. Centralized management simplifies operations across large environments where multiple telemetry pipelines must be maintained.

[Figure: Centralized pipeline management for telemetry sources across the environment]

Flexible data transformation and customization

Security telemetry often arrives in inconsistent formats across vendors and platforms. The platform supports flexible transformation capabilities that allow organizations to:

- Normalize logs into standard or custom Sentinel table formats
- Add or derive fields required by Sentinel detections
- Apply filtering or enrichment rules before ingestion

Configuration can be performed through a single-screen workflow, enabling teams to modify schemas and define filtering logic without disrupting downstream analytics.

[Figure: Flexible data transformation to align telemetry with Microsoft Sentinel ASIM schemas]

The platform also provides schema drift detection and source health monitoring, helping teams maintain reliable telemetry pipelines as environments evolve.

Closing

Effective security operations depend on how quickly a SOC can onboard new data, scale effectively, and maintain high‑quality investigations.
Sentinel provides a cloud‑native, AI-ready foundation to ingest security data from first- and third‑party data sources, while enabling economical, large‑scale retention and deep analytics using open data formats and multiple analytics engines. DataBahn's partnership with Sentinel is positioned as a pipeline layer that can help teams onboard third-party sources, shape and normalize data, and apply routing and governance patterns before data lands in Sentinel.

Learn more

- DataBahn for Microsoft Sentinel
- DataBahn Press Release - DataBahn Deepens Partnership with Microsoft Sentinel
- Microsoft Sentinel data lake overview - Microsoft Security | Microsoft Learn
- Microsoft Sentinel—AI-Ready Platform | Microsoft Security
- Connect Microsoft Sentinel to the Microsoft Defender portal - Unified security operations | Microsoft Learn
- Microsoft Sentinel data lake is now generally available | Microsoft Community Hub

Microsoft Sentinel for SAP Agentless connector GA
Dear Community,

Today is the day: our new agentless connector for the Microsoft Sentinel Solution for SAP applications is Generally Available now! Fully onboarded to SAP's official Business Accelerator Hub and ready for prime time wherever your SAP systems are waiting to be protected: on-premises, hyperscalers, RISE, or GROW.

Let's hear from an agentless customer:

"With the Microsoft Sentinel Solution for SAP and its new agentless connector, we accelerated deployment across our SAP landscape without the complexity of containerized agents. This streamlined approach elevated our SOC's visibility into SAP security events, strengthened our compliance posture, and enabled faster, more informed incident response." SOC Specialist, North American aviation company

Use the video below to kick off your own agentless deployment today. #Kudos to the amazing mvigilante for showing us around the new connector!

But we didn't stop there! Security is being reengineered for the AI era, moving from static, rule-based controls to platform-driven, machine-speed defence that anticipates threats before they strike. Attackers think in graphs, and Microsoft does too. We're bringing relationship-aware context to Microsoft Security, so defenders and AI can see connections, understand the impact of a potential compromise (blast radius), and act faster across pre-breach and post-breach scenarios, including SAP systems, your crown jewels.

See it in action in the scenario below: a phishing compromise that led to an SAP login bypassing MFA, followed by operating-system activity on the SAP host downloading trojan software. Enjoy this clickable experience for more details on the scenario.

[Figure: Shows how a phishing compromise escalated to an SAP MFA bypass, highlighting cross-domain correlation.]

The Sentinel Solution for SAP is built AI-first and directly integrates with our security platform on the Defender portal for enterprise-wide signal correlation, Security Copilot reasoning, and Sentinel Data Lake usage.
Your real-time SAP detections operate on the Analytics tier for instant results and threat hunting, while the same SAP logs get mirrored to the lake for cost-efficient long-term storage (up to 12 years). Access that data for compliance reporting or historic analysis through KQL jobs on the lake. No more "yeah, I have the data stored somewhere" to tick the audit-report check box: you can actually query and use your SAP telemetry in long-term storage at scale. Learn more here.

Findings from the agentless connector preview

During our preview we learned that the majority of customers immediately profit from the far smoother onboarding experience compared to the Docker-based approach. Deployment effort and time to first SAP log arrival in Sentinel went from days and weeks to hours.

⚠️ Deprecation notice for the containerized data connector agent ⚠️

The containerized SAP data connector will be deprecated on September 14th, 2026. This change aligns with the discontinuation of the SAP RFC SDK, SAP's strategic integration roadmap, and customer demand for simpler integration. Migrate to the new agentless connector for simplified onboarding and compliance with SAP's roadmap. All new deployments starting October 31, 2025, will only have the new agentless connector option, and existing customers should plan their migration using the guidance on Microsoft Learn. It will be billed at the same price as the containerized agent, ensuring no cost impact for customers.

Note📌: To support the transition for those of you on the Docker-based data connector, we have enhanced our built-in KQL functions for SAP to work across data sources for hybrid and parallel execution. Follow our agentless migration playlist for a smooth transition.

Spotlight on new features with agentless

Inspired by the feedback of early adopters, we are shipping two of the most requested new capabilities with GA right away.
- Customizable polling frequency: Balance threat detection value (1-minute intervals offer the best value) with utilization of SAP Integration Suite resources based on your needs. ⚠️ Warning! Increasing the intervals may result in message-processing truncation to avoid SAP CPI saturation. See this blog for more insights. Refer to the max-rows parameter and SAP documentation to make informed decisions.

- Customizable API endpoint path suffix: Flexible endpoints allow running all your SAP security integration flows from the agentless connector while adhering to your naming strategies. Furthermore, you can add community extensions like SAP S/4HANA Cloud public edition (GROW), the SAP Table Reader, and more.

[Figure: Displays the simplified onboarding flow for the agentless SAP connector]

You want more? Here is your chance to share additional feature requests to influence our backlog. We would like to hear from you!

Getting Started with Agentless

The new agentless connector automatically appears in your environment. Make sure to upgrade to the latest version 3.4.05 or higher.

[Figure: Sentinel Content Hub view highlighting the agentless SAP connector tile in the Microsoft Defender portal, ready for one-click deployment and integration with your security platform]

The deployment experience on Sentinel is fully automatic with a single button click: it creates the Azure Data Collection Endpoint (DCE), Data Collection Rule (DCR), and Microsoft Entra ID app registration assigned the RBAC role "Monitoring Metrics Publisher" on the DCR to allow SAP log ingest.

Explore partner add-ons that build on top of agentless

The ISV partner ecosystem for the Microsoft Sentinel Solution for SAP is growing to tailor the agentless offering even further. The current cohort has flagship providers like our co-engineering partner SAP SE themselves with their security products SAP LogServ & SAP Enterprise Threat Detection (ETD), and our mutual partners Onapsis and SecurityBridge.

Ready to go agentless?
➤ Get started from here
➤ Explore partner add-ons here.
➤ Share feature requests here.

Next Steps

Once deployed, I recommend checking AryaG's insightful blog series for details on how to move to production with the built-in SAP content of agentless. Looking to expand protection to SAP Business Technology Platform? Here you go.

#Kudos to the amazing Sentinel for SAP team and our incredible community contributors! That's a wrap 🎬. Remember: bringing SAP under the protection of your central SIEM isn't just a checkbox; it's essential for comprehensive security and compliance across your entire IT estate.

Cheers,
Martin

Endpoint and EDR Ecosystem Connectors in Microsoft Sentinel
Most SOCs operate in mixed endpoint environments. Even if Microsoft Defender for Endpoint is your primary EDR, you may still run Cisco Secure Endpoint, WithSecure Elements, Knox, or Lookout in specific regions, subsidiaries, mobile fleets, or regulatory enclaves. The goal is not to replace any tool, but to standardize how signals become detections and response actions. This article explains an engineering-first approach: ingestion correctness, schema normalization, entity mapping, incident merging, and cross-platform response orchestration.

Think of these connectors as four different lenses on endpoint risk. Two provide classic EDR detections (Cisco, WithSecure). Two provide mobile security and posture signals (Knox, Lookout). The highest-fidelity outcomes come from correlating them with Microsoft signals (Defender for Endpoint device telemetry, Entra ID sign-ins, and threat intelligence).

Cisco Secure Endpoint

Typical signal types include malware detections, exploit prevention events, retrospective detections, device isolation actions, and file/trajectory context. Cisco telemetry is often hash-centric (SHA256, file reputation), which makes it excellent for IOC matching and cross-EDR correlation.

WithSecure Elements

WithSecure Elements tends to provide strong behavioral detections and ransomware heuristics, often including process ancestry and behavioral classification. It complements hash-based detections by providing behavior and incident context that can be joined to Defender process events.

Samsung Knox Asset Intelligence

Knox is posture-heavy. Typical signals include compliance state, encryption status, root/jailbreak indicators, patch level, device model identifiers, and policy violations. This data is extremely useful for identity correlation: it helps answer whether a successful sign-in came from a device that should be trusted.
Lookout Mobile Threat Defense

Lookout focuses on mobile threats such as malicious apps, phishing, risky networks (MITM), device compromise indicators, and risk scores. Lookout signals are critical for identity attack chains because mobile phishing is often the precursor to token theft or credential reuse.

2. Ingestion architecture: from vendor API to Sentinel tables

Most third‑party connectors are API-based. In production, treat ingestion as a pipeline with reliability requirements. The standard pattern is vendor API → connector runtime (codeless connector or Azure Function) → DCE → DCR transform → Log Analytics table.

Key engineering controls:

- Secrets and tokens should be stored in Azure Key Vault where supported; rotate and monitor auth failures.
- Use overlap windows (poll slightly more than the schedule interval) and deduplicate by stable event IDs.
- Use DCR transforms to normalize fields early (device/user/IP/severity) and to filter obviously low-value noise.
- Monitor connector health and ingestion lag; do not rely on 'Connected' status alone.

Ingestion health checks (KQL)

```kusto
// Freshness & lag per connector table (adapt table names to your workspace)
let lookback = 24h;
union isfuzzy=true
    (<CiscoTable> | extend Source="Cisco"),
    (<WithSecureTable> | extend Source="WithSecure"),
    (<KnoxTable> | extend Source="Knox"),
    (<LookoutTable> | extend Source="Lookout")
| where TimeGenerated > ago(lookback)
| summarize LastEvent=max(TimeGenerated), Events=count() by Source
| extend IngestDelayMin = datetime_diff("minute", now(), LastEvent)
| order by IngestDelayMin desc
```

```kusto
// Schema discovery (run after onboarding and after connector updates)
Cisco | take 1 | getschema
WithSecureTable | take 1 | getschema
Knox | take 1 | getschema
Lookout | take 1 | getschema
```

3. Normalization: make detections vendor-agnostic

The most common failure mode in multi-EDR SOCs is writing separate rules per vendor. Instead, build one normalization function that outputs a stable schema.
Then write rules once. Recommended canonical fields:

- Vendor, AlertId, EventTime, SeverityNormalized
- DeviceName (canonical), AccountUpn (canonical), SourceIP
- FileHash (when applicable), ThreatName/Category
- CorrelationKey (stable join key such as DeviceName + FileHash or DeviceName + AlertId)

```kusto
// Example NormalizeEndpoint() pattern. Replace column_ifexists(...) mappings after getschema().
let NormalizeEndpoint = () {
    union isfuzzy=true
    (
        Cisco
        | extend Vendor="Cisco"
        | extend
            DeviceName=tostring(column_ifexists("hostname","")),
            AccountUpn=tostring(column_ifexists("user","")),
            SourceIP=tostring(column_ifexists("ip","")),
            FileHash=tostring(column_ifexists("sha256","")),
            ThreatName=tostring(column_ifexists("threat_name","")),
            SeverityNormalized=tolower(tostring(column_ifexists("severity","")))
    ),
    (
        WithSecure
        | extend Vendor="WithSecure"
        | extend
            DeviceName=tostring(column_ifexists("hostname","")),
            AccountUpn=tostring(column_ifexists("user","")),
            SourceIP=tostring(column_ifexists("ip","")),
            FileHash=tostring(column_ifexists("file_hash","")),
            ThreatName=tostring(column_ifexists("classification","")),
            SeverityNormalized=tolower(tostring(column_ifexists("risk_level","")))
    ),
    (
        Knox
        | extend Vendor="Knox"
        | extend
            DeviceName=tostring(column_ifexists("device_id","")),
            AccountUpn=tostring(column_ifexists("user","")),
            SourceIP="",
            FileHash="",
            ThreatName=strcat("Device posture: ", tostring(column_ifexists("compliance_state",""))),
            SeverityNormalized=tolower(tostring(column_ifexists("risk","")))
    ),
    (
        Lookout
        | extend Vendor="Lookout"
        | extend
            DeviceName=tostring(column_ifexists("device_id","")),
            AccountUpn=tostring(column_ifexists("user","")),
            SourceIP=tostring(column_ifexists("source_ip","")),
            FileHash="",
            ThreatName=tostring(column_ifexists("threat_type","")),
            SeverityNormalized=tolower(tostring(column_ifexists("risk_level","")))
    )
    | extend CorrelationKey = iff(isnotempty(FileHash),
        strcat(DeviceName, "|", FileHash),
        strcat(DeviceName, "|", ThreatName))
    | project-reorder TimeGenerated, Vendor, DeviceName, AccountUpn, SourceIP,
        FileHash, ThreatName, SeverityNormalized, CorrelationKey, *
};
```

4. Entity mapping and incident merging

Sentinel's incident experience improves dramatically when alerts include entity mapping. Map Host, Account, IP, and File (hash) where possible. Incident grouping should merge alerts by DeviceName and AccountUpn within a reasonable window (e.g., 6–24 hours) to avoid alert storms.

5. Correlation patterns that raise confidence

High-confidence detections come from confirmation across independent sensors. These patterns reduce false positives while catching real compromise chains.

5.1 Multi-vendor confirmation (two EDRs agree)

```kusto
NormalizeEndpoint()
| where TimeGenerated > ago(24h)
| summarize Vendors=dcount(Vendor), VendorSet=make_set(Vendor, 10) by DeviceName
| where Vendors >= 2
```

5.2 Third-party detection confirmed by Defender process telemetry

```kusto
let tp = NormalizeEndpoint()
    | where TimeGenerated > ago(6h)
    | where ThreatName has_any ("powershell","ransom","credential","exploit")
    | project TPTime=TimeGenerated, DeviceName, AccountUpn, Vendor, ThreatName;
tp
| join kind=inner (
    DeviceProcessEvents
    | where Timestamp > ago(6h)
    | where ProcessCommandLine has_any ("EncodedCommand","IEX","FromBase64String","rundll32","regsvr32")
    | project MDETime=Timestamp, DeviceName=tostring(DeviceName), Proc=ProcessCommandLine
) on DeviceName
| where MDETime between (TPTime .. TPTime + 30m)
| project TPTime, MDETime, DeviceName, Vendor, ThreatName, Proc
```

5.3 Mobile phishing signal followed by successful sign-in

```kusto
let mobile = NormalizeEndpoint()
    | where TimeGenerated > ago(24h)
    | where Vendor == "Lookout" and ThreatName has "phish"
    | project MTDTime=TimeGenerated, AccountUpn, DeviceName, SourceIP;
mobile
| join kind=inner (
    SigninLogs
    | where TimeGenerated > ago(24h)
    | where ResultType == "0"
    | project SigninTime=TimeGenerated, AccountUpn=tostring(UserPrincipalName), IPAddress, AppDisplayName
) on AccountUpn
| where SigninTime between (MTDTime .. MTDTime + 30m)
| project MTDTime, SigninTime, AccountUpn, DeviceName, SourceIP, IPAddress, AppDisplayName
```

5.4 Knox posture and high-risk sign-in

```kusto
let noncompliant = NormalizeEndpoint()
    | where TimeGenerated > ago(7d)
    | where Vendor=="Knox" and ThreatName has "NonCompliant"
    | project DeviceName, AccountUpn, KnoxTime=TimeGenerated;
noncompliant
| join kind=inner (
    SigninLogs
    | where TimeGenerated > ago(7d)
    | where RiskLevelDuringSignIn in ("high","medium")
    | project SigninTime=TimeGenerated, AccountUpn=tostring(UserPrincipalName), RiskLevelDuringSignIn, IPAddress
) on AccountUpn
| where SigninTime between (KnoxTime .. KnoxTime + 2h)
| project KnoxTime, SigninTime, AccountUpn, DeviceName, RiskLevelDuringSignIn, IPAddress
```

6. Response orchestration (SOAR) design

Response should be consistent across vendors. Use a scoring model to decide whether to isolate a device, revoke tokens, or enforce Conditional Access. Prefer reversible actions, and log every automation step for audit.
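As a rough illustration of how this gating logic might look inside a playbook, here is a minimal Python sketch. The severity weights mirror the scoring model used in this article; the tier thresholds and playbook action names are assumptions for illustration, not product values.

```python
# Sketch of a risk-scoring gate for playbook selection.
# Severity weights mirror the article's scoring model; thresholds and
# action names below are illustrative assumptions.

SEVERITY_SCORE = {"critical": 5, "high": 4, "medium": 2}  # anything else scores 1

def score_alerts(alerts):
    """Aggregate severity scores for one DeviceName/AccountUpn pair."""
    return sum(SEVERITY_SCORE.get(a.get("severity", "").lower(), 1) for a in alerts)

def choose_playbook(risk_score):
    """Map an aggregated risk score to a response tier; prefer reversible actions."""
    if risk_score >= 8:  # high-severity tier
        return ["isolate_device", "revoke_tokens", "conditional_access_block", "open_ticket"]
    if risk_score >= 4:  # medium-severity tier
        return ["tag_incident", "add_watchlist_entry", "notify_analyst"]
    return ["log_only"]

alerts = [{"severity": "high"}, {"severity": "medium"}, {"severity": "medium"}]
print(choose_playbook(score_alerts(alerts)))  # score 8 -> high-severity tier
```

Keeping the scoring in one place, whether in KQL or in playbook code, makes it easy to tune thresholds without rewriting per-vendor logic.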
6.1 Risk scoring to gate playbooks

```kusto
let SevScore = (s:string) { case(s=="critical",5, s=="high",4, s=="medium",2, 1) };
NormalizeEndpoint()
| where TimeGenerated > ago(24h)
| extend Score = SevScore(SeverityNormalized)
| summarize RiskScore=sum(Score), Alerts=count(), Vendors=make_set(Vendor, 10) by DeviceName, AccountUpn
| where RiskScore >= 8
| order by RiskScore desc
```

High-severity playbooks typically execute: (1) isolate device via Defender (if onboarded), (2) revoke tokens in Entra ID, (3) trigger Conditional Access block, (4) notify and open ITSM ticket. Medium-severity playbooks usually tag the incident, add watchlist entries, and notify analysts.

What's New in Microsoft Sentinel: March 2026
March brings a set of updates to Microsoft Sentinel focused on helping your SOC automate faster, onboard data with less friction, and detect threats across more of your environment. This month's updates include natural-language playbook generation for more flexible SOAR workflows, streamlined real-time data ingestion with CCF Push, and expanded Kubernetes visibility with a dedicated GKE connector. Together, these innovations help security teams simplify operations, move faster, and strengthen coverage without added complexity. And if you're heading to RSAC 2026, check out how to join us for Microsoft Pre-Day below.

What's new

Microsoft Sentinel playbook generator brings natural-language automation to SOC workflows

The Microsoft Sentinel playbook generator lets you design and generate fully functional, code-based playbooks by describing what you need in natural language. Instead of relying on rigid templates and limited action libraries, you describe the workflow you want, and the generator produces a Python playbook with documentation and a visual flowchart. This has been a top ask from enterprise customers looking for more flexible automation in their SIEM workflows.

The playbook generator works across Microsoft and third-party tools. By defining an Integration Profile with a base URL, authentication method, and credentials, it can create dynamic API calls without predefined connectors. That means you can automate tasks like team notifications, ticket updates, data enrichment, or incident response across your environment, then validate playbooks against real alerts and refine through chat or manual edits. You keep full transparency into the generated code and full control to customize it. Watch a demo and learn more.

CCF Push delivers seamless, real-time security data to Microsoft Sentinel (public preview)

The Codeless Connector Framework (CCF) Push feature allows you to send security data directly to a Sentinel workspace in real time.
Instead of configuring Data Collection Endpoints (DCE), Data Collection Rules (DCR), Entra app registrations, and RBAC assignments, you press "Deploy" and Sentinel sets up all the resources for you. Built on the Log Ingestion API, CCF Push supports high-throughput ingestion, data transformation before ingestion, and direct delivery to system tables to speed up SOC detection and response and to enable more flexible access to critical security telemetry. This opens pathways to advanced scenarios, including data lake integrations and agentic AI use cases. Sentinel solution developers can begin leveraging CCF Push immediately. Partners like Keeper Security, Obsidian Security, and Varonis are already using CCF Push to stream security data into Sentinel. Learn more and check out the getting started guide.

Detect threats across GKE clusters in Microsoft Sentinel with a dedicated CCF connector (general availability)

A dedicated data connector for Google Kubernetes Engine (GKE) is available in the Microsoft Sentinel content hub, built on the Codeless Connector Framework (CCF). The connector ingests GKE cluster activity, workload behavior, and security events into the GKEAudit Log Analytics table, bringing GKE monitoring in line with how Azure Kubernetes Service (AKS) clusters are monitored in Sentinel today. It includes Data Collection Rule (DCR) support, data lake-only ingestion, and workspace transformation support so you can filter or modify incoming data before it reaches its destination.

For security teams running workloads on GKE, this means you can apply Sentinel analytics, workbooks, and hunting queries across your GKE signals alongside the rest of your environment, giving you consistent visibility into Kubernetes threats whether your clusters run on Azure or Google Cloud.
Get the GKE data connector

Solve hybrid identity challenges with an RSA agent on Microsoft Sentinel data lake and Security Copilot

RSA has built an agentic solution that combines RSA ID Plus telemetry with Microsoft Sentinel's data lake and Security Copilot agents. The integration ingests administrative identity telemetry from RSA ID Plus into the Sentinel data lake for cost-effective, long-term retention, then uses Security Copilot agents to assess that data and surface anomalous or risky admin behavior automatically.

For security teams managing complex hybrid identity environments, this means identity risk signals from RSA are analyzed alongside your broader Sentinel telemetry without manual correlation. Admin accounts remain one of the highest-value targets for attackers, and having agentic AI continuously assessing identity patterns helps your SOC detect compromised credentials earlier and reduce investigation time. Learn more

Join Microsoft Security at RSAC 2026 Pre-Day

If you are heading to RSAC™ 2026 in San Francisco, join Microsoft Security for Pre-Day on Sunday, March 22 at the Palace Hotel. Hear from Vasu Jakkal, CVP of Microsoft Security Business, and other Microsoft Security leaders on how AI and autonomous agents are reshaping defense strategy. Product leaders will share what they are focused on for security operations, threat intelligence experts will discuss emerging trends, and Microsoft researchers will highlight the newest areas of security R&D.

Register for Microsoft Pre-Day
Explore all Microsoft experiences at RSAC 2026

Evaluate your SIEM platform for the agentic era with our strategic buyer's guide

Our buyer's guide from Microsoft Security helps security leaders evaluate what a modern SIEM platform should deliver. The Strategic SIEM Buyer's Guide walks through three essentials: building a unified foundation that is future-proof, accelerating detection and response with AI, and maximizing ROI with faster time to value.
Whether you are assessing migration from a legacy on-premises SIEM or benchmarking your current platform, the guide offers practical buyer's tips and capability checklists grounded in real outcomes, including how organizations using Sentinel have achieved a 44% reduction in total cost of ownership and 93% faster deployment times. Learn more

Additional resources

Sign up for upcoming events:

- Mar 11: Microsoft Security Day (in-person, Mumbai)
- Mar 18: Tech brief: Next‑Generation Security Operations with Microsoft
- Mar 19: Microsoft Security Immersion Event: Shadow Hunter (in-person, Toronto)
- Mar 23-26: Microsoft Security at RSAC 2026 (in-person, San Francisco)
- Mar 25: Microsoft Tech Brief: Modernize security operations with a unified platform
- Apr 2: Master SecOps in the AI Era: Kickstart Your SC-200 Certification Challenge

Stay connected

Check back each month for the latest innovations, updates, and events to ensure you're getting the most out of Microsoft Sentinel. We'll see you in the next edition!

Unlocking value with Microsoft Sentinel data lake
As security telemetry explodes and AI-driven defense becomes the norm, it is critical to centralize and retain massive volumes of data for deep analysis and long-term insights. Security teams are fundamentally rethinking how they manage, analyze, and act on security data. The Microsoft Sentinel data lake is a game changer for modern security operations, providing the foundation for agentic defense, deeper insights, and graph-based enrichment. Security teams can centralize signals, simplify data management, and run advanced analytics without compromising on cost or performance. Across industries, organizations are using the Sentinel data lake to unify distributed data, search across years of telemetry, correlate sophisticated threats using graph-powered analytics, and operationalize agentic workflows at scale, turning raw security data into actionable intelligence. In this blog, we will highlight some of the ways the Sentinel data lake is transforming modern security operations.

Unified, cost-effective security data foundation

The challenge

Many organizations tell us they have been forced to make difficult tradeoffs: high ingestion costs meant selectively choosing which logs to keep, often leaving out data that might have been critical during an investigation. This selective logging creates blind spots, fragmented visibility, and unnecessary operational complexity across security operations. As a result, CISOs increasingly view selective logging as a material security risk to their organizations.

How Sentinel data lake helps

The Sentinel data lake removes these constraints by providing a cost-effective, security-optimized foundation for centralizing large volumes of security data. With the data lake, security teams can finally retain the breadth of telemetry they need without the financial penalties traditionally associated with long-term security data retention.
Organizations benefit from:
- A unified security data foundation designed to simplify investigations
- Long-term, cost-effective retention for up to 12 years
- Flexible querying across high-volume data sets
- 6x data compression in storage, enabling significantly lower retention costs at scale

Why it matters

By unifying data in a purpose-built security data lake, SOC teams gain reliable, comprehensive visibility without the budget limitations that once forced them to choose between cost and completeness. This stronger foundation not only improves day-to-day investigations; it unlocks the advanced analytics and AI-powered capabilities that future-proof SOCs for AI-driven defense. With full visibility restored, organizations are better equipped to identify emerging threats, respond with confidence, and modernize their security operations on their own terms.

Historical security analysis

The challenge

SOC teams often struggle with short SIEM retention windows that limit how far back investigators can look. Critical logs age out before teams can fully piece together an attack, making root-cause analysis slow and incomplete. This challenge grows when incidents span long periods, when new threat indicators emerge, or when organizations need to understand how a compromise evolved over time. Without access to historical telemetry, analysts face significant blind spots that weaken both investigations and hunting efforts.

How Sentinel data lake helps

The Sentinel data lake solves this by enabling organizations to retain and analyze years of security data at a fraction of the cost of traditional SIEM retention. Teams can use KQL and notebooks to run deep, long-range investigations, perform advanced anomaly detection, and correlate older events that would have been impossible to recover in the analytics tier. Historical data enables retroactive analysis when new threat intelligence emerges.
SOC teams can instantly look back to validate whether newly discovered indicators, techniques, or threat actors were already present in their environment.

Organizations benefit from:
- Years of cost-effective retention that extend far beyond traditional SIEM windows
- Deep forensic investigations using KQL and notebooks over historical data
- Improved anomaly detection with long-range patterns and baselines
- Faster scoping of incidents with access to full historical context

Why it matters

By unlocking access to years of searchable telemetry, SOC teams are no longer limited by short retention windows or forced to make compromises that weaken security. They can retrace the full scope of an incident, hunt for slow-moving threats, and quickly respond to new IOCs, powered by the historical context modern attacks demand. This long-range visibility strengthens both detection and response, giving organizations the confidence and continuity they need to stay ahead of evolving threats.

Graph-powered attack-path visibility and entity correlation

The challenge

Traditional investigations often rely on reviewing logs in isolation, making it difficult to connect identity activity, endpoint behavior, cloud access, and threat intelligence in a meaningful way. As a result, SOC teams find it difficult to trace attack paths, understand lateral movement, and build complete investigative context. Without a unified view of how entities relate to each other, investigations become slow, fragmented, and prone to missed signals.

How Sentinel data lake helps

The Sentinel data lake enables powerful graph-based correlation across identity, asset, activity, and threat intelligence data. Using graph models, analysts can visually explore how entities connect, identify hidden attack paths, pinpoint exposed routes to sensitive assets, and understand the full blast radius of compromised accounts or devices.
This graph-driven context turns complex telemetry into intuitive visuals that strengthen pre-breach context and dramatically accelerate incident response.

Organizations benefit from:
- Graph-powered correlation across identity, asset, activity, and threat intelligence data
- Visualization of attack paths and lateral movement that logs alone cannot expose
- Context-rich investigations supported by relationship-driven insights
- Greater cross-domain visibility that strengthens both detection and response

Why it matters

With graph-powered context, SOC teams move beyond event-by-event analysis and gain a deep understanding of how their environment behaves as a system. This visibility speeds investigations, strengthens posture before attackers strike, and provides analysts with a clear, intuitive way to uncover relationships that traditional log searches simply can't reveal.

Agentic workflows powered by the MCP server

The challenge

SOC teams are under constant pressure from rising alert volumes, repetitive manual investigative steps, and skill gaps that make consistent triage challenging. Even experienced analysts struggle to reason across large, distributed datasets, and junior analysts often lack the experience needed to understand complex threat scenarios. These challenges slow down response and increase the risk of missed signals.

How the Sentinel data lake helps

The Sentinel data lake, combined with the Model Context Protocol (MCP), enables AI agents to reason over unified, contextual security data using natural-language prompts. Analysts can ask questions directly: "Does this user have other suspicious activity?" or "What assets are at risk?", and agents automatically interpret the request, query the data lake, and return actionable insights. These AI-powered workflows reduce repetitive effort, strengthen investigative consistency, and help teams operate at a higher level of speed and precision.
Organizations benefit from:
- AI-assisted investigations that reduce manual effort and accelerate triage
- Agentic workflows powered by MCP to automate multi-step reasoning over unified data
- Natural-language interactions that make complex queries accessible to all analysts
- Consistent, high-quality analysis regardless of analyst experience level

Why it matters

By introducing agentic, AI-driven workflows, SOC teams can automate time-consuming tasks, reduce alert fatigue, and empower every analyst, regardless of seniority, to quickly arrive at high-quality insights. This shift not only accelerates investigations but also frees teams to focus on high-value, proactive security work. As organizations continue modernizing their SOC, agentic workflows represent a major step forward in bridging the gap between human expertise and scalable, AI-powered analysis.

The future of security operations starts here

The Sentinel data lake is becoming the backbone of modern security operations: unifying security data, expanding investigative reach, and enabling graph-driven, AI-powered analysis at scale. By centralizing telemetry on a cost-effective, AI-ready foundation, and running advanced analytics on that data, security teams can move beyond fragmented insights to correlate threats with clarity and act faster with confidence. These four use cases are just the beginning. Whether you're strengthening investigations, advancing threat hunting, operationalizing AI, or preparing your SOC for what's next, the Sentinel data lake provides the scale, intelligence, and flexibility to reduce complexity and stay ahead of evolving threats. Now is the time to accelerate toward a more resilient, adaptive, and future-ready security posture. Get started with Microsoft Sentinel data lake today.

McasShadowItReporting / Cloud Discovery in Azure Sentinel
Hi! I'm trying to query the McasShadowItReporting table for Cloud App discovery events. The table is empty at the moment, and the connector warns me that the workspace is onboarded to the Unified Security Operations Platform, so I can't activate it there. I can't manage it via https://security.microsoft.com/ either. The documentation ( https://learn.microsoft.com/en-us/defender-cloud-apps/siem-sentinel#integrating-with-microsoft-sentinel ) leads me to the SIEM integration, which has been configured for a while. I wonder if something is misconfigured here, why there is no log ingress, and how I can query these logs.

Clarification on UEBA Behaviors Layer Support for Zscaler and Fortinet Logs
I would like to confirm whether the new UEBA Behaviors Layer in Microsoft Sentinel currently supports generating behavior insights for Zscaler and Fortinet log sources. Based on the documentation, the preview version of the Behaviors Layer only supports specific vendors under CommonSecurityLog (CyberArk Vault and Palo Alto Threats), AWS CloudTrail services, and GCP Audit Logs. Since Zscaler and Fortinet are not listed among the supported vendors, I want to verify: does the UEBA Behaviors Layer generate behavior records for Zscaler and Fortinet logs, or are these vendors currently unsupported for behavior generation? Logs from Zscaler and Fortinet are also ingested into the CommonSecurityLog table.

Understand New Sentinel Pricing Model with Sentinel Data Lake Tier
Introduction to Sentinel and Its New Pricing Model

Microsoft Sentinel is a cloud-native Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) platform that collects, analyzes, and correlates security data from across your environment to detect threats and automate response. Traditionally, Sentinel stored all ingested data in the Analytics tier (Log Analytics workspace), which is powerful but expensive for high-volume logs. To reduce cost and enable customers to retain all security data without compromise, Microsoft introduced a new dual-tier pricing model consisting of the Analytics tier and the Data Lake tier. The Analytics tier continues to support fast, real-time querying and analytics for core security scenarios, while the new Data Lake tier provides very low-cost storage for long-term retention and high-volume datasets. Customers can now choose where each data type lands: the Analytics tier for high-value detections and investigations, and the Data Lake tier for large or archival types. This lets organizations significantly lower cost while still retaining all their security data for analytics, compliance, and hunting. The flow diagram below depicts the new Sentinel pricing model.

Now let's understand this new pricing model through the scenarios below:
- Scenario 1A (Pay-As-You-Go)
- Scenario 1B (Usage Commitment)
- Scenario 2 (Data Lake Tier Only)

Scenario 1A (Pay-As-You-Go)

Requirement

Suppose you need to ingest 10 GB of data per day, and you must retain that data for 2 years. However, you will only frequently use, query, and analyze the data for the first 6 months.

Solution

To optimize cost, you can ingest the data into the Analytics tier and retain it there for the first 6 months, where active querying and investigation happen. After that period, the remaining 18 months of retention can be shifted to the Data Lake tier, which provides low-cost storage for compliance and auditing needs.
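As a rough sketch of Scenario 1A's cost structure, the retention split can be modeled in a few lines of Python. The per-GB rates below are hypothetical placeholders chosen only to illustrate the mechanics, not Azure list prices:

```python
# Hypothetical per-GB/month retention rates -- illustration only,
# NOT Azure list prices (actual pricing varies by region and currency).
ANALYTICS_RETENTION_RATE = 0.10  # $/GB/month, assumed Analytics tier rate
LAKE_RETENTION_RATE = 0.02       # $/GB/month, assumed Data Lake tier rate

DAILY_GB = 10              # ingest volume from the scenario
TOTAL_MONTHS = 24          # 2-year retention requirement
ANALYTICS_MONTHS = 6       # actively queried window kept in the Analytics tier
AVG_DAYS_PER_MONTH = 30.4

# Steady-state volume held in each tier once retention is fully populated.
analytics_gb = DAILY_GB * AVG_DAYS_PER_MONTH * ANALYTICS_MONTHS
lake_gb = DAILY_GB * AVG_DAYS_PER_MONTH * (TOTAL_MONTHS - ANALYTICS_MONTHS)

# Monthly retention bill for each tier at the assumed rates.
analytics_cost = analytics_gb * ANALYTICS_RETENTION_RATE
lake_cost = lake_gb * LAKE_RETENTION_RATE
```

Even though the Data Lake tier ends up holding three times as much data (18 months versus 6), its retention bill comes out lower at these assumed rates, which is exactly why shifting the long tail of retention to the lake pays off.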
Note, however, that you will be charged separately for Data Lake tier querying and analytics, which is depicted as Compute (D) in the pricing flow diagram.

Pricing Flow / Notes
- The first 10 GB/day ingested into the Analytics tier is free for 31 days under the Analytics logs plan.
- All data ingested into the Analytics tier is automatically mirrored to the Data Lake tier at no additional ingestion or retention cost.
- For the first 6 months, you pay only for Analytics tier ingestion and retention, excluding any free capacity.
- For the next 18 months, you pay only for Data Lake tier retention, which is significantly cheaper.

Azure Pricing Calculator Equivalent

Assuming no data is queried or analyzed during the 18-month Data Lake tier retention period: although the Analytics tier retention is set to 6 months, the first 3 months of retention fall under the free retention limit, so retention charges apply only for the remaining 3 months of the analytics retention window. The Azure pricing calculator will adjust accordingly.

Scenario 1B (Usage Commitment)

Now, suppose you are ingesting 100 GB per day. If you follow the same pay-as-you-go pricing model described above, your estimated cost would be approximately $15,204 per month. However, you can reduce this cost by choosing a Commitment Tier, where Analytics tier ingestion is billed at a discounted rate. Note that the discount applies only to Analytics tier ingestion; it does not apply to Analytics tier retention costs or to any Data Lake tier-related charges. Please refer to the pricing flow and the equivalent pricing calculator results shown below.

Monthly cost savings: $15,204 – $11,184 = $4,020 per month

Now the question is: what happens if your usage reaches 150 GB per day? Will the additional 50 GB be billed at the pay-as-you-go rate? No. The entire 150 GB/day will still be billed at the discounted rate associated with the 100 GB/day commitment tier bucket.
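The Scenario 1B arithmetic can be cross-checked with a short calculation. The per-GB rate below is simply implied by the quoted monthly figures (assuming an average 30.4-day month); it is not an official Azure price:

```python
AVG_DAYS_PER_MONTH = 30.4

PAYG_MONTHLY = 15204    # quoted pay-as-you-go cost at 100 GB/day
COMMIT_MONTHLY = 11184  # quoted cost on the 100 GB/day commitment tier

# Monthly savings from moving to the commitment tier.
savings = PAYG_MONTHLY - COMMIT_MONTHLY  # $4,020 per month

# Discounted per-GB rate implied by the 100 GB/day commitment figure.
rate_per_gb = COMMIT_MONTHLY / (100 * AVG_DAYS_PER_MONTH)

# Usage beyond the commitment is billed at the same discounted rate,
# so 150 GB/day simply scales the commitment-tier cost linearly.
cost_at_150 = 150 * AVG_DAYS_PER_MONTH * rate_per_gb  # 1.5x the 100 GB cost
```

This is why staying slightly over a commitment bucket is usually cheaper than falling back to pay-as-you-go: the overage inherits the bucket's discounted per-GB rate.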
Azure Pricing Calculator Equivalent (100 GB/day)

Azure Pricing Calculator Equivalent (150 GB/day)

Scenario 2 (Data Lake Tier Only)

Requirement

Suppose you need to store certain audit or compliance logs amounting to 10 GB per day. These logs are not used for querying, analytics, or investigations on a regular basis, but must be retained for 2 years per your organization's compliance or forensic policies.

Solution

Since these logs are not actively analyzed, you should avoid ingesting them into the Analytics tier, which is more expensive and optimized for active querying. Instead, send them directly to the Data Lake tier, where they can be retained cost-effectively for future audit, compliance, or forensic needs.

Pricing Flow

Because the data is ingested directly into the Data Lake tier, you pay both ingestion and retention costs there for the entire 2-year period. If, at any point in the future, you need to perform advanced analytics, querying, or search, you will incur additional compute charges based on actual usage. Even with occasional compute charges, the cost remains significantly lower than storing the same data in the Analytics tier.

Realized Savings

Scenario 1 (10 GB/day in Analytics tier): $1,520.40 per month
Scenario 2 (10 GB/day directly into Data Lake tier): $202.20 per month without compute; $257.20 per month with a sample compute price

Savings with no compute activity: $1,520.40 – $202.20 = $1,318.20 per month
Savings with some compute activity (sample value): $1,520.40 – $257.20 = $1,263.20 per month

Azure calculator equivalent without compute

Azure calculator equivalent with sample compute

Conclusion

The combination of the Analytics tier and the Data Lake tier in Microsoft Sentinel enables organizations to optimize cost based on how their security data is used.
High-value logs that require frequent querying, real-time analytics, and investigation can be stored in the Analytics tier, which provides powerful search performance and built-in detection capabilities. At the same time, large-volume or infrequently accessed logs, such as audit, compliance, or long-term retention data, can be directed to the Data Lake tier, which offers dramatically lower storage and ingestion costs. Because all Analytics tier data is automatically mirrored to the Data Lake tier at no extra cost, customers can use the Analytics tier only for the period they actively query data and rely on the Data Lake tier for the remaining retention. This tiered model allows different scenarios, whether active investigation, archival storage, compliance retention, or large-scale telemetry ingestion, to be handled at the most cost-effective layer, ultimately delivering substantial savings without sacrificing visibility, retention, or future analytical capabilities.

Codeless Connector Framework (CCF) Template Help
As the title suggests, I'm trying to finalize the template for a Sentinel data connector that uses the Codeless Connector Framework (CCF). Unfortunately, I'm getting hung up on some parameter-related issues with the polling config. The API endpoint I need to call uses a date range to determine the events to return and then pages within that result set. The issue is around the requirements for that date range and how CCF is processing my config. The API expects an HTTP GET verb, and the query string should contain two instances of a parameter called EventDates, among other params. For example, a valid query string may look something like:

../path/to/api/myEndpoint?EventDates=2025-08-25T15%3A46%3A36.091Z&EventDates=2025-08-25T16%3A46%3A36.091Z&PageSize=200&PageNumber=1

I've tried a few approaches in the polling config to accomplish this, but none have worked. The current config is as follows and has a bunch of extra stuff and names that aren't recognized by my API endpoint but are there simply to demonstrate different things:

"queryParameters": {
    "EventDates.Array": [
        "{_QueryWindowStartTime}",
        "{_QueryWindowEndTime}"
    ],
    "EventDates.Start": "{_QueryWindowStartTime}",
    "EventDates.End": "{_QueryWindowEndTime}",
    "EventDates.Same": "{_QueryWindowStartTime}",
    "EventDates.Same": "{_QueryWindowEndTime}",
    "Pagination.PageSize": 200
}

This yields the following URL / query string:

../path/to/api/myEndpoint?EventDates.Array=%7B_QueryWindowStartTime%7D&EventDates.Array=%7B_QueryWindowEndTime%7D&EventDates.Start=2025-08-25T15%3A46%3A36.091Z&EventDates.End=2025-08-25T16%3A46%3A36.091Z&EventDates.Same=2025-08-25T16%3A46%3A36.091Z&Pagination.PageSize=200

There are a few things to note here:

- The query param that is configured as an array (EventDates.Array) does indeed show up twice in the query string and with distinct values. The issue is, of course, that CCF doesn't seem to do the variable substitution for values nested in an array the way it does for standard string attributes / values.
- The query params that have distinct names (EventDates.Start and .End) both show up AND both have the actual timestamps substituted properly. Unfortunately, this doesn't match the API expectations since the names differ.
- The query params that are repeated with the same name (EventDates.Same) only show up once, and it seems to use whichever value comes last in the config (so the last one overwrites the rest). Again, this doesn't meet the requirements of the API since we need both.

I also tried a few other things:

- Just sticking the query params and placeholders directly in the request.apiEndpoint polling config attribute. No surprise, it doesn't do the variable substitution there.
- Utilizing queryParametersTemplate instead of queryParameters. https://learn.microsoft.com/en-us/azure/sentinel/data-connector-connection-rules-reference indicates this is a string parameter that expects a JSON string. I tried this with various approaches to the structure of the JSON. In ALL instances, the values here seemed to be completely ignored. All other examples from the Azure-Sentinel repository utilize the POST verb. Perhaps that attribute isn't even interpreted on a GET request?
- And because some AI agents suggested it and ... sure, why not? ... I tried queryParametersTemplate as an actual query string template, i.e. "EventDates={_QueryWindowStartTime}&EventDates={_QueryWindowEndTime}". Just as with previous attempts to use this attribute, it was completely ignored.

I'm willing to try anything at this point, so if you have suggestions, I'll give it a shot! Thanks for any input you may have!
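For reference, the exact query string shape the API expects (two EventDates values plus paging params) can be reproduced with Python's standard library, which makes it easy to diff against whatever CCF actually emits. The values are the ones from the example at the top of this post:

```python
from urllib.parse import urlencode

# Express the repeated EventDates parameter as a sequence of (key, value)
# pairs; urlencode percent-encodes the timestamps (':' becomes '%3A').
params = [
    ("EventDates", "2025-08-25T15:46:36.091Z"),  # query window start
    ("EventDates", "2025-08-25T16:46:36.091Z"),  # query window end
    ("PageSize", 200),
    ("PageNumber", 1),
]
query = urlencode(params)
print(query)
# EventDates=2025-08-25T15%3A46%3A36.091Z&EventDates=2025-08-25T16%3A46%3A36.091Z&PageSize=200&PageNumber=1
```

So whatever polling config shape ends up working, the emitted query string needs to match that output: same key repeated twice, distinct substituted timestamps.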