What's new
Introducing a refreshed design, task chat, and more in Microsoft Planner
We’re excited to announce that a modernized user interface and new features are now rolling out to basic plans in both Planner in Teams and Planner for the web. The updated design offers enhanced navigation, responsive layouts, a new Goals view for setting objectives and priorities, and task chat—one of your most requested features—enabling real-time collaboration and @ mentions of team members. This release aims to make planning easier for everyday users while laying the groundwork for future AI-powered capabilities. Our goal is to streamline planning by making it more intelligent and connected, so teams can concentrate on achieving results rather than managing tasks.

What's new in Planner

A refreshed design: With this rollout, users can manage their plans in a cleaner, more modern interface that brings a more consistent planning experience across their work. Planner’s new look was designed to feel simpler, helping users find what they need. It reduces visual clutter, improves layout and spacing, and creates a more focused workspace.

Task chat with @ mentions: A new task chat is coming to basic plans, bringing real-time, threaded conversations directly into tasks, including @ mentions, rich formatting, emojis, and notifications that keep decisions tied to the specific task at hand. Plan members who are @ mentioned in a task receive a notification in their Teams Activity feed and via email, and selecting the notification takes them directly to the task card for additional context. Previously, users received notifications for every task comment; based on customer feedback, we now notify only mentioned users. The ability to @ mention team members directly in a task has been a top request, and we’re excited to roll this out in a familiar, chat-based experience. Please note that premium plans will continue to use the existing task conversation experience; it will converge into the new experience at a later point in time.
Goals view: Basic plans now include a dedicated Goals view, allowing teams to set clear, well-defined objectives that help prioritize work. By connecting tasks to shared goals, teams achieve greater alignment, gain clarity on priorities, and track progress and outcomes, driving the plan forward together. Access to Goals view in basic plans requires either a Planner premium license or a Microsoft 365 Copilot license.

Notes on availability

Not all users will see the new Planner interface at the same time. The refreshed interface, along with Task chat and Goals view, begins rolling out to basic plans today and will continue to roll out over the coming weeks.

This is only the beginning

This redesign lays the groundwork for many more improvements coming to Planner in the next few weeks and months, including:

Project Manager agent in basic plans, to help with task execution and the creation of status reports.
Custom templates.
Planner in Outlook.

Stay tuned for announcements regarding these updates and more, aligned to our long-term vision for integrated work management. Feature availability, naming, and timelines are subject to change; please refer to the Microsoft 365 Roadmap for the latest status.

Addressing your feedback

We heard your feedback about inconsistencies between basic and premium plans. This refresh starts closing those gaps so that features appear consistently across plans based on your license. For example, users with a Planner premium license will now see Goals in basic plans, and users with a Microsoft 365 Copilot license will soon have access to the Project Manager agent in basic plans as well. Tell us what you think about the new Planner interface, Task chat, and Goals view by selecting More (circled question mark icon) in the top right corner of the app, then selecting Feedback from the dropdown menu. We also encourage you to share feature requests by adding your ideas to the Planner Feedback Portal.
Your feedback helps inform our feature updates, and we look forward to hearing from you.

Learn more

Visit planner.cloud.microsoft to access Planner directly from your browser.
Sign up to receive future communication about Planner.
Learn more about Planner in our Frequently asked questions.
Check out the Planner adoption page and Planner help & learning page.
Visit the Microsoft 365 roadmap for feature descriptions and estimated release dates for Planner.
Walk through the interactive demos for Project Manager Agent in Planner and Project Manager Agent skills in Teams meetings.

What’s new in Microsoft Sentinel: RSAC 2026
Security is entering a new era, one defined by explosive data growth, increasingly sophisticated threats, and the rise of AI-enabled operations. To keep pace, security teams need an AI-powered approach to collect, reason over, and act on security data at scale. At RSA Conference 2026 (RSAC), we’re unveiling the next wave of Sentinel innovations designed to help organizations move faster, see deeper, and defend smarter with AI-ready tools. These updates include AI-driven playbooks that accelerate SOC automation, Granular Delegated Admin Privileges (GDAP) and granular role-based access controls (RBAC) that let you scale your SOC, accelerated data onboarding through new connectors, and data federation that enables analysis in place without duplication. Together, they give teams greater clarity, control, and speed.

Come see us at RSAC to view these innovations in action. Hear from Sentinel leaders during our exclusive Microsoft Pre-Day, then visit Microsoft booth #5744 for demos, theater sessions, and conversations with Sentinel experts. Read on to explore what’s new. See you at RSAC!

Sentinel feature innovations:
Sentinel SIEM
Sentinel data lake
Sentinel graph
Sentinel MCP
Threat Intelligence
Microsoft Security Store
Sentinel promotions

Sentinel SIEM

Playbook generator [Now in public preview]

The Sentinel playbook generator delivers a new era of automation capabilities. You can vibe code complex automations, integrate with different tools to ensure timely and compliant workflows throughout your SOC, and feel confident in the results with built-in testing and documentation. Customers and partners are already seeing benefit from this innovation.

“The playbook generator gives security engineers the flexibility and speed of AI-assisted coding while delivering the deterministic outcomes that enterprise security operations require.
It's the best of both worlds, and it lives natively in Defender where the engineers already work.” – Jaime Guimera Coll | Security and AI Architect | BlueVoyant

Learn more about the playbook generator.

SIEM migration experience [Generally available now]

The Sentinel SIEM migration experience helps you plan and execute SIEM migrations through a guided, in-product workflow. You can upload Splunk or QRadar exports to generate recommendations for best-fit Sentinel analytics rules and required data connectors, then assess migration scope, validate detection coverage, and migrate from Splunk or QRadar to Sentinel in phases while tracking progress.

“The tool helps turn a Splunk to Sentinel migration into a practical decision process. It gives clear visibility into which detections are relevant, how they align to real security use cases, and where it makes sense to enable or prioritize coverage—especially with cost and data sources in mind.” – Deniz Mutlu | Director | Swiss Post Cybersecurity Ltd

Learn more about the SIEM migration experience.

GDAP, unified RBAC, and row-level RBAC for Sentinel [Public preview, April 1]

As Sentinel environments grow for enterprises, MSSPs, hyperscalers, and partners operating across shared or multiple environments, the challenge becomes managing access control efficiently and consistently at scale. Sentinel’s expanded permissions and access capabilities are designed to meet these needs.

Granular Delegated Admin Privileges (GDAP) lets you streamline management across multiple governed tenants using your primary account, based on existing GDAP relationships.

Unified RBAC lets you opt in to managing permissions for Sentinel workspaces through a single pane of glass, configuring and enforcing access across Sentinel experiences in the analytics tier and data lake in the Defender portal. This simplifies administration and improves operational efficiency by reducing the number of permission models you need to manage.
Row-level RBAC scoping within tables enables precise, scoped access to data in the Sentinel data lake. Multiple SOC teams can operate independently within a shared Sentinel environment, querying only the data they are authorized to see, without separating workspaces or introducing complex data flow changes. Consistent, reusable scope definitions ensure permissions are applied uniformly across tables and experiences, while maintaining strong security boundaries. To learn more, read our technical deep dives on RBAC and GDAP.

Sentinel data lake

Sentinel data federation [Public preview, April 1]

Sentinel data federation lets you analyze security data in place without copying or duplicating it. Powered by Microsoft Fabric, you can now federate data from Fabric, Azure Data Lake Storage (ADLS), and Azure Databricks into the Sentinel data lake. Federated data appears alongside native Sentinel data, so you can use familiar tools like KQL hunting, notebooks, and custom graphs to correlate signals and investigate across your entire digital estate, all while preserving governance and compliance. You can start analyzing data in place and progressively ingest data into Sentinel for deeper security insights, advanced automation, and AI-powered defense at scale. You are billed only when you run analytics on federated data, using the existing Sentinel data lake query and advanced insights meters.

Federated tables for unified investigation and hunting

Sentinel cost estimation tool [Public preview, April 1]

The new Sentinel cost estimation tool offers Microsoft customers and partners a guided, meter-level cost estimation experience that makes pricing transparent and predictable. A built-in three-year cost projection lets you model data growth and ramp-up over time, anticipate spend, and avoid surprises. Get transparent estimates of spend as you scale your security operations. All other customers can continue to use the Azure calculator for Sentinel pricing estimates.
See the Sentinel pricing page for more information.

Sentinel data connectors

A365 Observability connector [Public preview, April 15]
Bring AI agent telemetry into the Sentinel data lake to investigate agent behavior, tool usage, prompts, reasoning, and execution using hunting, graph, and MCP workflows.

GitHub audit log connector using API polling [General availability, March 6]
Ingest GitHub enterprise audit logs into Sentinel to monitor user and administrator activity, detect risky changes, and investigate security events across your development environment.

Google Kubernetes Engine (GKE) connector [General availability, March 6]
Collect GKE audit and workload logs in Sentinel to monitor cluster activity, analyze workload behavior, and detect security threats across Kubernetes environments.

Microsoft Entra and Azure Resource Graph (ARG) connector enhancements [Public preview, April 15]
Enable new Entra assets (EntraDevices, EntraOrgContacts) and ARG assets (ARGRoleDefinitions) in existing asset connectors, expanding inventory coverage and powering richer, built-in graph experiences for greater visibility.

With over 350 Sentinel data connectors, customers achieve broad visibility into complex digital environments and can expand their security operations effectively.

“Microsoft Sentinel data lake forms the core of our agentic SOC. By unifying large volumes of Microsoft and third-party data, enabling graph-based analysis, and supporting MCP-driven workflows, it allows us to investigate faster, at lower cost, and with greater confidence.” – Øyvind Bergerud | Head of Security Operations | Storebrand

Learn more about Sentinel data connectors.

Sentinel connector builder agent using the Sentinel Visual Studio Code extension [Public preview, March 31]

Build Sentinel data connectors in minutes instead of weeks using the AI-assisted Connector Builder agent in Visual Studio Code.
This low-code experience guides developers and ISVs end to end, automatically generating schemas, deployment assets, connector UI, secure secret handling, and polling logic. Built-in validation surfaces issues early, so you can validate event logs before deployment and ingestion.

Example prompt in GitHub Copilot Chat:

@sentinel-connector-builder Create a new connector for OpenAI audit logs using https://api.openai.com/v1/organization/audit_logs

Data filtering and splitting [Public preview, March 30]

As security teams ingest more data, the challenge shifts from scale to relevance. With filtering and splitting now built into the Defender portal, teams can shape data before it lands in Sentinel, without switching tools or managing custom JSON files. Define simple KQL-based transformations directly in the UI to filter low-value events and intelligently route data, making ingestion optimization faster, more intuitive, and easier to manage at scale.

Filtering at ingest time lets you remove low-value or benign events to reduce noise, cut unnecessary processing, and ensure that high-signal data drives detections and investigations. Splitting enables intelligent routing of data between the analytics tier and the data lake tier based on relevance and usage. Together, these two capabilities help you balance cost and performance while scaling data ingestion sustainably as your digital estate grows.

Create workbook reports directly from the data lake [Public preview, April 1]

Sentinel workbooks can now run directly on the data lake using KQL, enabling you to visualize and monitor security data straight from the data lake. By selecting the data lake as the workbook data source, you can create trend analyses and executive reports.

Sentinel graph

Custom graphs [Public preview, April 1]

Custom graphs let you build tailored security graphs tuned to your unique security scenarios using data from the Sentinel data lake as well as non-Microsoft sources.
With custom graphs, powered by Fabric, you can build, query, and visualize connected data, uncover hidden patterns and attack paths, and surface risks that are hard to detect when data is analyzed in isolation. These graphs provide the knowledge context that enables AI-powered agent experiences to work more effectively, speeding investigations, revealing blast radius, and helping you move from noisy, disconnected alerts to confident decisions at scale.

In the words of our preview customers: “We ingested our Databricks management-plane telemetry into the Sentinel data lake and built a custom security graph. Without writing a single detection rule, the graph surfaced unusual patterns of activity and overprivileged access that we escalated for investigation. We didn't know what we were looking for; the graph surfaced the risk for us by revealing anomalous activity patterns and unusual access combinations driven by relationships, not alerts.” – SVP, Security Solutions | Financial Services organization

Custom graph API usage for creating and querying graphs will be billed starting April 1, 2026, according to the Sentinel graph meter.

Creating custom graphs

Using the Sentinel VS Code extension, you can generate graphs to validate hunting hypotheses, such as understanding the attack paths and blast radius of a phishing campaign, reconstructing multi-step attack chains, and identifying structurally unusual or high-risk behavior, making these insights accessible to your team and AI agents. Once a graph is persisted via a scheduled job, you can access it from the ready-to-use section of the graph experience in the Defender portal.

Graphs experience in the Microsoft Defender portal

After creating your custom graphs, you can access them in the graphs section of the Defender portal under Sentinel.
From there, you can perform interactive graph-based investigations, such as using a graph built for phishing analysis to quickly evaluate the impact of a recent incident, profile the attacker, and trace their paths across Microsoft telemetry and third-party data. The new graph experience lets you run Graph Query Language (GQL) queries, view the graph schema, visualize the graph, view graph results in tabular format, and interactively traverse the graph to the next hop with a simple click.

Sentinel MCP

Sentinel MCP entity analyzer [General availability, April 1]

Entity analyzer provides reasoned, out-of-the-box risk assessments that help you quickly understand whether a URL or user identity represents potential malicious activity. The capability analyzes data across modalities, including threat intelligence, prevalence, and organizational context, to generate clear, explainable verdicts you can trust. Entity analyzer integrates easily with your agents through Sentinel MCP server connections to first-party and third-party AI runtime platforms, or with your SOAR workflows through Logic Apps. The entity analyzer is also a trusted foundation for the Defender Triage Agent, delivering more accurate alert classifications and deeper investigative reasoning. This removes the need to manually engineer evaluation logic and gives analysts and AI agents the confidence to act with higher accuracy. Learn more about entity analyzer in our blog.

Entity analyzer will be billed starting April 1, 2026, based on Security Compute Units (SCU) consumption. Learn more about MCP billing.

Sentinel MCP graph tool collection [Public preview, April 20]

The graph tool collection helps you visualize and explore relationships between identity and device assets, and between threat and activity signals ingested by data connectors and raised by analytics rules.
The tool provides a clear graph view that highlights dependencies and configuration gaps, making it easier to understand how content interacts across your environment. This helps security teams assess coverage, optimize content deployment, and identify areas that may need tuning or additional data sources, all from a single, interactive workspace. Executing graph queries via the MCP tools will trigger the graph meter.

Claude MCP connector [Public preview, April 1]

Anthropic Claude can connect to Sentinel through a custom MCP connector, giving you AI-assisted analysis across your Sentinel environment. Microsoft provides step-by-step guidance for configuring a custom connector in Claude that securely connects to a Sentinel MCP server. With this connection you can summarize incidents, investigate alerts, and reason over security signals while keeping data inside Microsoft's security boundary. Access to large language models (LLMs) is managed through Microsoft authentication and role-based controls, supporting faster triage and investigation workflows while maintaining compliance and visibility.

Threat Intelligence

CVEs of interest in the Threat Intelligence Briefing Agent [Public preview in April]

The Threat Intelligence Briefing Agent delivers curated intelligence based on your organization’s configuration, preferences, and unique industry and geographic needs. CVEs of interest highlights vulnerabilities actively discussed across the security landscape and assesses their potential impact on your environment, delivering more timely threat intelligence insights. The agent automatically incorporates internet exposure data powered by the Sentinel platform to surface threats targeting technologies exposed in your organization. Together, these enhancements help you focus faster on the threats that matter most, without manual investigation.
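The prioritization idea behind CVEs of interest can be sketched in a few lines: rank vulnerabilities by severity, active discussion, and whether the affected technology is internet-exposed in your organization. This is an illustration only; the field names, weights, and CVE identifiers below are invented for the sketch and are not the agent's actual model.

```python
# Toy sketch of "CVEs of interest" prioritization. All field names, weights,
# and CVE identifiers are hypothetical placeholders.

def prioritize_cves(cves: list[dict]) -> list[dict]:
    """Rank CVEs by base severity plus boosts for chatter and exposure."""
    def score(cve: dict) -> float:
        s = cve.get("cvss", 0.0)
        if cve.get("actively_discussed"):
            s += 2.0  # actively discussed across the security landscape
        if cve.get("internet_exposed"):
            s += 3.0  # affected technology is exposed in the organization
        return s
    return sorted(cves, key=score, reverse=True)

ranked = prioritize_cves([
    {"id": "CVE-A", "cvss": 9.8, "actively_discussed": False, "internet_exposed": False},
    {"id": "CVE-B", "cvss": 7.5, "actively_discussed": True, "internet_exposed": True},
])
```

Even with a lower base score, the exposed and actively discussed CVE ranks first, which mirrors the agent's goal of surfacing the threats most likely to matter to your environment.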
Microsoft Security Store

Security Store embedded in Entra [General availability, March 23]

As identity environments grow more complex, teams need to move faster and extend Entra with trusted third-party capabilities that address operational, compliance, and risk challenges. The Security Store embedded directly into Entra lets you discover and adopt Entra-ready agents and solutions in your workflow. You can extend Entra with identity-focused agents that surface privileged access risk, identity posture gaps, network access insights, and overall identity health, turning identity data into clear recommendations and reports teams can use immediately. You can also enhance Entra with Verified ID and External ID integrations that strengthen identity verification, streamline account recovery, and reduce fraud across workforce, consumer, and external identities.

Security Store embedded in Microsoft Purview [General availability, March 31]

Extending data security across the digital estate requires visibility and enforcement across new data sources and risk surfaces, often with a partnered approach. The Security Store embedded directly into Purview lets you discover and evaluate integrated solutions inside your data security workflows. Relevant partner capabilities surface alongside context, making it easier to strengthen data protection, address regulatory requirements, and respond to risk without disrupting existing processes. You can quickly assess which solutions align to data security scenarios, especially securing AI use, and how they can leverage established classifiers, policies, and investigation workflows in Purview. Keeping integration discovery in-flow and purchases centralized through the Security Store means you move faster from evaluation to deployment, reducing friction and maintaining a secure, consistent transaction experience.

Security Store Advisor [General availability, March 23]

Security teams today face growing complexity and choice.
Teams often know the security outcome they need, whether that's strengthening identity protection, improving ransomware resilience, or reducing insider risk, but lack a clear, efficient way to determine which solutions will get them there. Security Store Advisor provides a guided, natural-language discovery experience that shifts security evaluation from product-centric browsing to outcome-driven decision-making. You can describe your goal in plain language, and the Advisor surfaces the most relevant Microsoft and partner agents, solutions, and services available in the Security Store, without requiring deep product knowledge. This approach simplifies discovery, reduces time spent navigating catalogs and documentation, and helps you understand how individual capabilities fit together to deliver meaningful security outcomes.

Sentinel promotions

Extended signups for the promotional 50 GB commitment tier [Through June 2026]

The Sentinel promotional 50 GB commitment tier offers small and mid-sized organizations a cost-effective entry point into Sentinel. Sign up for the 50 GB commitment tier by June 30, 2026, and keep the promotional rate until March 31, 2027. This promotion is available globally, with regional variations in pricing, and is accessible through EA, CSP, and Direct channels. Visit the Sentinel pricing page for details and to get started.
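The multi-year projection idea behind the Sentinel cost estimation tool described earlier boils down to compounding arithmetic: project ingestion volume growth month over month and multiply by a rate. As a minimal sketch only, with entirely hypothetical starting volume, growth rate, and per-GB price (real Sentinel pricing varies by tier, region, and meter; use the tool or the pricing page for actual figures):

```python
# Toy three-year ingestion-cost projection. The daily volume, monthly growth
# rate, and per-GB price below are placeholders, not real Sentinel pricing.

def project_costs(daily_gb: float, monthly_growth: float, price_per_gb: float,
                  months: int = 36) -> list[float]:
    """Return an estimated cost per month, compounding volume growth monthly."""
    costs = []
    for m in range(months):
        volume = daily_gb * (1 + monthly_growth) ** m * 30  # GB ingested that month
        costs.append(volume * price_per_gb)
    return costs

# Example: start at 50 GB/day, 2% monthly growth, $2/GB (all hypothetical)
monthly = project_costs(daily_gb=50, monthly_growth=0.02, price_per_gb=2.0)
three_year_total = sum(monthly)
```

Modeling ramp-up this way makes it obvious how a modest monthly growth rate compounds into materially higher spend by year three, which is exactly the surprise the projection feature is meant to prevent.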
Sentinel RSAC 2026 sessions

All week – Sentinel product demos, Microsoft booth #5744
Mon Mar 23, 3:55 PM – RSAC 2026 main stage keynote with CVP Vasu Jakkal [KEY-M10W]: Ambient and autonomous security: Building trust in the agentic AI era
Tue Mar 24, 10:30 AM – Live Q&A session, Microsoft booth #5744 and online: Ask me anything with Microsoft Security SMEs and real practitioners
Tue Mar 24, 11 AM – Sentinel data lake theater session, Microsoft booth #5744: From signals to insights: How Microsoft Sentinel data lake powers modern security operations
Tue Mar 24, 2 PM – Sentinel SIEM theater session, Microsoft booth #5744: Vibe-coding SecOps automations with the Sentinel playbook generator
Wed Mar 25, 12 PM – Executive event at the Palace Hotel with Threat Protection GM Scott Woodgate: The AI risk equation: Visibility, control, and threat acceleration
Wed Mar 25, 1:30 PM – Sentinel graph theater session, Microsoft booth #5744: Bringing knowledge-driven context to security with Microsoft Sentinel graph
Wed Mar 25, 5 PM – MISA theater session, Microsoft booth #5744: Cut SIEM costs without reducing protection: A Sentinel data lake case study
Thu Mar 26, 1 PM – Security Store theater session, Microsoft booth #5744: What's next for Security Store: Expanding in portal and smarter discovery
All week – 1:1 meetings with Microsoft security experts: Meet with Microsoft Defender, Sentinel SIEM, and Defender Security Operations experts

Additional resources

Sentinel data lake video playlist – Explore the full capabilities of Sentinel data lake as a unified, AI-ready security platform that is deeply integrated into the Defender portal
Sentinel data lake FAQ blog – Get answers to many of the questions we’ve heard from our customers and partners on Sentinel data lake and billing
AI-powered SIEM migration experience ninja training – Walk through the SIEM migration experience, see how it maps detections, surfaces connector requirements, and supports phased migration decisions
SIEM migration experience documentation – Learn how the SIEM migration experience analyzes your exports, maps detections and connectors, and recommends prioritized coverage
Accenture collaborates with Microsoft to bring agentic security and business resilience to the front lines of cyber defense

Stay connected

Check back each month for the latest innovations, updates, and events to ensure you’re getting the most out of Sentinel. We’ll see you in the next edition!

From alert overload to decisive action: How Security Copilot agents are transforming security and IT
Security and IT teams operate in a constant stream of alerts, incidents, and investigations. As environments expand across identities, endpoints, cloud, and data, the challenge becomes clear: identifying real risk quickly enough to act. Security Copilot agents bring AI directly into the flow of work, helping teams understand risk with greater context, investigate threats more efficiently, and take action sooner. Security Copilot is now included with Microsoft 365 E5 and E7 licenses at no additional cost, so teams can start using agents right away.

Over the past year, organizations have used Security Copilot to triage alerts, surface real threats earlier, and move faster from investigation to action. At RSA Conference 2026, we are announcing new capabilities that reflect a continuous wave of innovation, evolving from built-in AI assistance and automated summaries to new agents that can analyze signals, investigate incidents, and execute security workflows.

Real-world impact: measurable results

Security Copilot agents help security and IT teams identify and respond to risk more effectively, and customers are seeing that impact in their day-to-day operations. At St. Luke’s University Health Network, the Phishing Triage Agent in Microsoft Defender saves security analysts more than 200 hours every month, automatically triaging phishing alerts and surfacing those that actually matter. Independent randomized controlled studies reinforce the results: security professionals using the Phishing Triage Agent triaged alerts up to 78% faster, delivered 77% more accurate verdicts, and identified 6.5 times more malicious emails.

That same impact extends beyond the SOC into other critical areas of security and IT. A data security team at a large telecommunications organization used the Data Security Triage Agent in Microsoft Purview to triage more than 40,000 Data Loss Prevention (DLP) alerts in 90 days, surfacing the 10% most critical alerts that required investigation.
Identity teams are also seeing huge improvements with the Conditional Access Optimization Agent in Microsoft Entra, which continuously analyzes access policies against Zero Trust baselines and recommends actions. In controlled productivity studies, identity admins completed policy-related tasks 43% faster and were 48% more accurate when identifying configuration weaknesses. IT teams are seeing similar impact with the Vulnerability Remediation Agent in Microsoft Intune, which continuously detects new vulnerabilities as threats emerge. As one CTO at a renewable energy and technology company shared, the agent is “dramatically changing the way we approach working with vulnerabilities in our environment. A two-week process is now a two-minute process, really huge number for us.”

Across these scenarios, teams begin investigations with clearer context and a better understanding of what actually matters. Instead of piecing together signals across dozens of tools, they can focus on the highest-risk issues and move from investigation to action with confidence. As environments continue expanding across identities, endpoints, applications, and data, quickly connecting signals and understanding risk becomes essential.

New Security Copilot agents and capabilities announced at RSA Conference

Our innovation continues. Microsoft is introducing new Security Copilot agents and expanded capabilities designed to help organizations analyze complex security data, triage alerts more effectively, and strengthen security posture across identity, endpoint, cloud, and data environments.

New and updated Security Copilot agents built by Microsoft

Security Analyst Agent in Microsoft Defender

Security teams are often sitting on enormous volumes of security data, but turning that data into answers takes time. The Security Analyst Agent helps teams move from raw telemetry to real understanding much faster.
By performing deep, multi-step investigations across Microsoft Defender and Sentinel telemetry, the agent can analyze up to ~100 MB of security data to uncover anomalies, hidden risks, and high-impact threats that might otherwise stay buried. Analysts can chat directly with the agent to ask questions, explore hypotheses, and dig deeper into findings. The results include transparent reasoning and supporting evidence, helping teams quickly understand what matters and move forward with confidence.

Security Alert Triage Agent in Microsoft Defender

One of the biggest challenges for SOC teams is deciding which alerts actually deserve attention. The Security Alert Triage Agent helps cut through that noise so analysts can focus on the threats that truly matter. Building on its existing phishing triage capabilities, the agent now extends autonomous triage to identity and cloud alerts. Each verdict includes clear, transparent reasoning so analysts can quickly understand the outcome and prioritize the alerts that matter most.

New capabilities for Conditional Access Optimization Agent in Microsoft Entra

Identity environments are constantly evolving as organizations add new apps, users, and authentication methods. New capabilities in the Conditional Access Optimization Agent help identity teams identify and close critical policy gaps faster, with recommendations tailored to their organization’s needs. The agent now delivers business-context-aware recommendations, supports phased rollout of new policies, enables automated least-privilege enforcement for supported third-party agent identities, and helps drive passkey adoption. Together, these capabilities help organizations continuously strengthen identity security while maintaining productivity.

New capabilities for Data Security Posture Agent in Microsoft Purview

Sensitive data often moves through documents, emails, chats, and collaboration tools, which makes it easy for credentials or secrets to end up where they shouldn’t be.
A new credential scanning capability in the Data Security Posture Agent helps data security teams proactively identify exposed credentials within their data environment. By analyzing data signals and access patterns, the agent surfaces potential credential exposure risks and helps teams quickly investigate and remediate them. This gives organizations better visibility into hidden data risks and strengthens overall protection of critical systems.

New capabilities for Data Security Triage Agent in Microsoft Purview Insider Risk Management

Investigating insider risk alerts often requires piecing together signals from many different sources to understand what is really happening. The Data Security Triage Agent now introduces an advanced AI reasoning layer that helps security teams evaluate those signals more holistically. By performing deeper, multi-step analysis across behavioral signals from users, devices, and data activity, the agent can surface the incidents that truly require investigation while filtering out noise. The result is faster, more accurate investigations and greater confidence when responding to potential insider risks.

New capabilities for Data Security Triage Agent in Microsoft Purview Data Loss Prevention

Custom Sensitive Information Types (SITs) are often difficult for analysts to interpret quickly because the underlying definitions and patterns lack clear context at triage time. This latest enhancement makes custom SITs easier for both the agent and analysts to understand in Data Loss Prevention alerts. Purview interprets custom SIT definitions, generates semantic descriptions of the data, and surfaces that context directly within the agent. This allows the agent to classify and prioritize alerts involving custom data more accurately, helping analysts quickly recognize real risk and respond appropriately.
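The core idea above is pairing a custom SIT's detection pattern with a human-readable semantic description so that both the agent and the analyst see context at triage time. A minimal sketch of that pairing follows; the structure, SIT name, and pattern are invented for illustration and are not Purview's actual schema or detectors.

```python
import re

# Illustrative only: a custom Sensitive Information Type (SIT) modeled as a
# detection pattern plus a semantic description surfaced at triage time.
# The SIT name and pattern are hypothetical, not Purview's schema.
CUSTOM_SITS = [
    {
        "name": "ContosoEmployeeID",  # hypothetical custom SIT
        "pattern": re.compile(r"\bEMP-\d{6}\b"),
        "description": "Internal employee identifier: 'EMP-' followed by six digits.",
    },
]

def triage_context(text: str) -> list[dict]:
    """Return matched SITs with their semantic descriptions for analyst review."""
    hits = []
    for sit in CUSTOM_SITS:
        matches = sit["pattern"].findall(text)
        if matches:
            hits.append({"sit": sit["name"],
                         "description": sit["description"],
                         "matches": matches})
    return hits

context = triage_context("Shared file contains EMP-104233 and EMP-558901.")
```

Attaching the description to each match is the point: instead of an opaque pattern name, the alert carries an explanation of what the matched data means, which is what lets both the agent and the analyst prioritize it accurately.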
New Security Copilot agents built by partners

To meet customers where they are across their existing security stack, the Security Copilot ecosystem continues to grow with more than 70 partner-built agents available today in the Security Store, bringing additional signals and investigation capabilities into the platform. Some of these agents include the following:

Security Investigation Agent by Commvault – Correlates backup anomalies with identity and security signals across platforms such as Entra, CrowdStrike, Netskope, and Darktrace.
MITRE Attack Coverage Insight Agent by Inspira – Evaluates analytic rule coverage, calculates ATT&CK coverage, identifies detection gaps, generates detection recommendations, and provides SOC detection maturity scoring.
Endpoint Risk Insights Agent by Avanade – Provides endpoint risk insights by correlating signals across security telemetry.
Identity Role Mining Agent by Invoke – Allows users to discover and analyze administrator roles in Microsoft Entra ID with ease and precision.
Identity Threat Triage Agent by Silverfort – Correlates Silverfort's identity risk signals with Entra ID and Defender for Endpoint data in the Sentinel data lake to surface risky sign‑ins, MFA abuse, suspicious processes, and anomalies.

Together, these partner agents extend Security Copilot's ability to connect signals across Microsoft and third-party security platforms, giving organizations broader visibility and stronger investigation capabilities across their security environment. To explore all new Security Copilot agents, visit the Microsoft Security Store.

New Security Copilot innovations that turn insight into action

Security Copilot continues to integrate more deeply into the tools security and IT teams already use every day. These capabilities bring AI directly into the environments where investigations happen, helping teams explore threats, understand context, and take action without switching between tools.
Security Copilot interactive chat experience in Microsoft Defender

Analysts can ask questions, explore investigative hypotheses, and follow threat activity across incidents, alerts, identities, devices, and IPs without leaving their investigation. Copilot understands the context of the page analysts are working on and grounds responses in the relevant signals already available in Defender. As analysts ask questions, Copilot can run investigative steps, gather additional evidence, and surface new insights. This allows teams to iterate quickly, validate assumptions, and dig deeper into threats while staying in the same workflow.

Secret Finder skill in Security Copilot is now generally available

Available in the Security Copilot standalone portal, the Secret Finder skill can be invoked to analyze unstructured content such as emails, chats, documents, and investigation notes to identify exposed credentials hidden in real-world workflows. Using agentic capabilities such as multi-step reasoning rather than simple pattern matching, it detects real, usable secrets and the systems they unlock, helping security teams quickly understand potential exposure and respond with confidence. Additional integrations and use cases are planned to expand how this capability can be used across security workflows.

Security Copilot trigger in Logic Apps

Building on how many organizations already use Logic Apps to automate security workflows, a new connector action for Security Copilot in Logic Apps flows allows teams to easily invoke partner-built agents and custom agents they create as part of repeatable workflows. This brings deeper AI-driven investigation, context, and decision support into tasks such as incident triage, threat intelligence analysis, and policy validation.

See Security Copilot in action at RSA Conference

Join us at RSA Conference to see the latest Security Copilot agents and capabilities in action.
Stop by the Microsoft booth to connect with the team, explore new innovations, and experience how agents are helping security and IT teams investigate threats, understand risk, and strengthen security posture.

Hear from Microsoft Security product leaders in these booth sessions:

March 23 | 5:15 PM | Empowering the SOC with assistive and autonomous AI, Yuval Derman
March 24 | 3:00 PM | Security Copilot agents: Insight. Action. Impact., Lizzie Heinze and Donna Lee
March 25 | 10:30 AM | Turning Data Risk into Action with Security Copilot Agents, Paige Johnson and Tanay Baldua
March 26 | 12:00 PM | Defend identity autonomously with agentic AI in Microsoft Entra, Mitch Muro, Rahul Prakash, Nikhil Reddy

Join our deep dive session: March 24 | 8:30 AM | The Palace Hotel | Security Copilot in action: An agentic approach to modern security. Register here: Microsoft Security RSAC Events | Microsoft Corporate

Stop by the Microsoft booth for a hands-on experience: test out the latest Security Copilot agents at the demo station and connect with our experts. Agentic AI Arena: try a fun, gamified experience that shows how Security Copilot agents investigate threats, surface risk, and help security teams respond faster.

Start using Security Copilot in your daily workflows

If you have received access to Security Copilot as part of your Microsoft 365 E5 plan, we recommend the following steps to get started quickly:

Sign up for the Security Copilot skilling series
Review new agentic scenarios and developer capabilities in the Security Copilot Adoption Hub
Learn what's included with your Microsoft 365 E5 plan in documentation
Request assistance from a Microsoft 365 FastTrack specialist to unlock the full value of Security Copilot

Microsoft Sentinel data lake FAQ
Microsoft Sentinel data lake (generally available) is a purpose‑built, cloud‑native security data lake. It centralizes all security data in an open format, serving as the foundation for agentic defense, enhanced security insights, and graph-based enrichment. It offers cost‑effective ingestion, long‑term retention, and advanced analytics. In this blog we offer answers to many of the questions we've heard from our customers and partners.

General questions

What is the Microsoft Sentinel data lake?

Microsoft has expanded its industry-leading SIEM solution, Microsoft Sentinel, to include a unified security data lake, designed to help optimize costs, simplify data management, and accelerate the adoption of AI in security operations. This modern data lake serves as the foundation for the Microsoft Sentinel platform. It has a cloud-native architecture and is purpose-built for security—bringing together all security data for greater visibility, deeper security analysis, contextual awareness and agentic defense. It provides affordable, long-term retention, allowing organizations to maintain robust security while effectively managing budgetary requirements.

What are the benefits of Sentinel data lake?

Microsoft Sentinel data lake is purpose-built for security, offering flexible analytics, cost management, and deeper security insights. Sentinel data lake:

Centralizes security data in Delta Parquet, an open format, for easy access. This unified data foundation accelerates threat detection, investigation, and response across hybrid and multi-cloud environments.
Enables data federation by allowing customers to access data in external sources like Microsoft Fabric, ADLS and Databricks from the data lake. Federated data appears alongside native Sentinel data, enabling correlated hunting, investigation, and custom graph analysis across a broader digital estate.
Offers a disaggregated storage and compute pricing model, allowing customers to store massive volumes of security data at a fraction of the cost compared to traditional SIEM solutions.
Allows multiple analytics engines like Kusto, Spark, and ML to run on a single data copy, simplifying management, reducing costs, and supporting deeper security analysis.
Integrates with GitHub Copilot and VS Code, empowering SOC teams to automate enrichment, anomaly detection, and forensic analysis.
Supports AI agents via the MCP server, allowing tools like GitHub Copilot to query and automate security tasks. The MCP server layer brings intelligence to the data, offering Semantic Search, Query Tools, and Custom Analysis capabilities that make it easier to extract insights and automate workflows.
Provides streamlined onboarding, intuitive table management, and scalable multi-tenant support, making it ideal for MSSPs and large enterprises.

The Sentinel data lake is designed for security workloads, ensuring that processes from ingestion to analytics meet evolving cybersecurity requirements.

Is Microsoft Sentinel SIEM going away?

No. Microsoft is expanding Sentinel into an AI-powered, end-to-end security platform that includes SIEM and new platform capabilities: a security data lake, graph-powered analytics, and an MCP server. SIEM remains a core component and will be actively developed and supported.

Getting started

What are the prerequisites for Sentinel data lake?

To get started, connect your Sentinel workspace to Microsoft Defender prior to onboarding to Sentinel data lake. Once in the Defender experience, see the data lake onboarding documentation for next steps. Note: Sentinel is moving to the Microsoft Defender portal, and the Sentinel Azure portal will be retired by March 31, 2027.

I am a Sentinel-only customer, and not a Defender customer. Can I use the Sentinel data lake?

Yes. You must connect Sentinel to the Defender experience before onboarding to the Sentinel data lake.
Microsoft Sentinel is generally available in the Microsoft Defender portal, with or without Microsoft Defender XDR or an E5 license. If you have created a Log Analytics workspace, enabled it for Sentinel, and have the right Microsoft Entra roles (e.g. Global Administrator + Subscription Owner, Security Administrator + Sentinel Contributor), you can enable Sentinel in the Defender portal. For more details on how to connect Sentinel to Defender, see: Microsoft Sentinel in the Microsoft Defender portal

In what regions is Sentinel data lake available?

For supported regions see: Geographical availability and data residency in Microsoft Sentinel | Azure Docs.

Is there an expected release date for Microsoft Sentinel data lake in GCC, GCC-H, and DoD?

While the exact date is not yet finalized, we plan to expand Sentinel data lake to the US Government environments.

How will URBAC and Entra RBAC work together to manage the data lake given there is no centralized model?

Entra RBAC will provide broad access to the data lake (URBAC maps the right permissions to specific Entra role holders: GA/SA/SO/GR/SR). URBAC will become a centralized pane for configuring non-global delegated access to the data lake. For today, you will use this for the "default data lake" workspace. In the future, this will be enabled for non-default Sentinel workspaces as well – meaning all workspaces in the data lake can be managed here for data lake RBAC requirements. Azure RBAC on the Log Analytics (LA) workspace in the data lake is respected through URBAC as well today. If you already hold a built-in role like Log Analytics Reader, you will be able to run interactive queries over the tables in that workspace. Or, if you hold Log Analytics Contributor, you can read and manage table data. For more details see: Roles and permissions in the Microsoft Sentinel platform | Microsoft Learn

Data ingestion and storage

How do I ingest data into the Sentinel data lake?
To ingest data into the Sentinel data lake, you can use existing Sentinel data connectors or custom connectors to bring data from Microsoft and third-party sources. Data can be ingested into the analytics tier or the data lake tier. Data ingested into the analytics tier is automatically mirrored to the lake (at no additional cost). Alternatively, data that is not needed in the analytics tier can be ingested directly into the data lake. Data retention is configured directly in table management, for both analytics retention and data lake storage. Note: Certain tables do not support data lake-only ingestion via either API or data connector UI. See here for more information: Custom log tables.

What is Microsoft's guidance on when to use the analytics tier vs. the data lake tier?

Sentinel data lake offers flexible, built-in data tiering (analytics and data lake tiers) to effectively meet diverse business use cases and achieve cost optimization goals.

Analytics tier: Ideal for high-performance, real-time, end-to-end detections, enrichments, investigation, and interactive dashboards. Typically, high-fidelity data from EDRs, email gateways, identity, SaaS and cloud logs, and threat intelligence (TI) should be ingested into the analytics tier. Data in the analytics tier is best monitored proactively with scheduled alerts and scheduled analytics to enable security detections. Data in this tier is retained at no cost for up to 90 days by default, extendable to 2 years. A copy of the data in this tier is automatically available in the data lake tier at no extra cost, ensuring a unified copy of security data for both tiers.

Data lake tier: Designed for cost-effective, long-term storage. High-volume logs like NetFlow logs, TLS/SSL certificate logs, firewall logs and proxy logs are best suited for the data lake tier.
Customers can use these logs for historical analysis, compliance and auditing, incident response (IR), forensics over historical data, building tenant baselines, and TI matching, and then promote the resulting insights into the analytics tier. Customers can run full Kusto queries, Spark notebooks, and scheduled jobs over a single copy of their data in the data lake. Customers can also search, enrich, and promote data from the data lake tier to the analytics tier for full analytics. For more details see documentation.

What does it mean that a copy of all new analytics tier data will be available in the data lake?

When Sentinel data lake is enabled, a copy of all new data ingested into the analytics tier is automatically duplicated into the data lake tier. This means customers don't need to manually configure or manage this process; every new log or telemetry stream added to the analytics tier becomes instantly available in the data lake. This allows security teams to run advanced analytics, historical investigations, and machine learning models on a single, unified copy of data in the lake, while still using the analytics tier for real-time SOC workflows. It's a seamless way to support both operational and long-term use cases, without duplicating effort or cost.

What is the guidance for customers using the data federation capability in Sentinel data lake?

Starting April 1, 2026, customers can federate data from Microsoft Fabric, ADLS, and Azure Databricks into Sentinel data lake. Use data federation when data is exploratory, infrequently accessed, or must remain at source due to governance, compliance, sovereignty, or contractual requirements. Ingest data directly into Sentinel to unlock full SIEM capabilities, always-on detections, advanced automation, and AI‑driven defense at scale. This approach lets security teams start where their data already lives, preserving governance, then progressively ingest data into Sentinel for full security value.

Is there any cost for retention in the analytics tier?
Analytics ingestion includes 90 days of interactive retention at no additional cost; simply set analytics retention to 90 days or less. Analytics retention beyond 90 days will incur a retention cost. Data can be retained longer within the data lake by using the "total retention" setting, which allows you to extend retention within the data lake for up to 12 years. While data is retained within the analytics tier, there is no charge for the mirrored data within the lake. Retaining data in the lake beyond the analytics retention period incurs additional storage costs. See documentation for more details: Manage data tiers and retention in Microsoft Sentinel | Microsoft Learn

What is the guidance for Microsoft Sentinel Basic and Auxiliary Logs customers?

If you previously enabled the Basic or Auxiliary Logs plan in Sentinel:

You can view Basic Logs in the Defender portal but must manage them from the Log Analytics workspace. To manage them in the Defender portal, you must change the plan from Basic to Analytics. Once the table is transitioned to the analytics tier, if desired, it can then be transitioned to the data lake.
Existing Auxiliary Log tables will be available in the data lake tier for use once the Sentinel data lake is enabled. Billing for these tables will automatically switch to the Sentinel data lake meters.

We recommend that Microsoft Sentinel customers start planning their data management strategy with the data lake. While Basic and Auxiliary Logs are still available, they are not being enhanced further. Sentinel data lake offers more capabilities at a lower price point, so please plan on onboarding your security data to the Sentinel data lake. Azure Monitor customers can continue to use Basic and Auxiliary Logs for observability scenarios.

What happens to customers that already have Archive logs enabled?
If a customer has already configured tables for Archive retention, existing retention settings will not change and will be automatically inherited by the Sentinel data lake. All data, including existing data in archive retention, will be billed using the data lake storage meter, benefiting from 6x data compression. However, the data itself will not move.

Existing data in archive will continue to be accessible through Sentinel search and restore experiences: it will not be backfilled into the data lake, and it will be billed using the data lake storage meter.
New data ingested after enabling the data lake will be automatically mirrored to the data lake, accessible through the data lake explorer, and billed using the data lake storage meter.

Example: If a customer has 12 months of total retention enabled on a table, then 2 months after enabling ingestion into the Sentinel data lake, the customer will still have access to 10 months of archived data (through Sentinel search and restore experiences), but access to only 2 months of data in the data lake (since the data lake was enabled).

Key considerations for customers that currently have Archive logs enabled:

The existing archive will remain, with new data ingested into the data lake going forward; previously stored archive data will not be backfilled into the lake. Archive logs will continue to be accessible via the Search and Restore tab under Sentinel.
If analytics and data lake mode are enabled on a table, which is the default setting for analytics tables when Sentinel data lake is enabled, all new data will be ingested into the Sentinel data lake. There will only be one storage meter (data lake storage) going forward. Archive will continue to be accessible via Search and Restore.
If Sentinel data lake-only mode is enabled on a table, new data will be ingested only into the data lake; any data that's not already in the Sentinel data lake won't be migrated or backfilled.
Only data that was previously ingested under the archive plan will be accessible via Search and Restore.

What is the guidance for customers using Azure Data Explorer (ADX) alongside Microsoft Sentinel?

Some customers might have set up an ADX cluster for their DIY lake setup. Customers can choose to continue using that setup and gradually migrate to Sentinel data lake for new data that they want to manage. The lake explorer will support federation with ADX to enable customers to migrate gradually and simplify their deployment.

What happens to the Defender XDR data after enabling Sentinel data lake?

By default, Defender XDR tables are available for querying in advanced hunting, with 30 days of analytics tier retention included with the XDR license. To retain data beyond this period, an explicit change to the retention setting is required, either by extending the analytics tier retention or the total retention period. You can extend the retention period of supported Defender XDR tables beyond 30 days and ingest the data into the analytics tier. For more information see Manage XDR data in Microsoft Sentinel. You can also ingest XDR data directly into the data lake tier. See here for more information. A list of XDR advanced hunting tables supported by Sentinel is documented here: Connect Microsoft Defender XDR data to Microsoft Sentinel | Microsoft Learn.

KQL queries and jobs

Are KQL and notebooks supported over the Sentinel data lake?

Yes, via the data lake KQL query experience along with a fully managed notebook experience, which enables Spark-based big data analytics over a single copy of all your security data. Customers can run queries across any time range of data in their Sentinel data lake. In the future, this will be extended to enable SQL queries over the lake as well. Note: Triggering a KQL job directly via an API or Logic App is not yet supported but is on the roadmap.

Why are there two different places to run KQL queries in the Sentinel experience?
Advanced hunting queries both XDR and analytics tables, with compute cost included. The data lake explorer only queries data in the lake and incurs a separate compute cost. Consolidating the advanced hunting and KQL explorer user interfaces is on the roadmap; this will provide security analysts a unified query experience across both analytics and data lake tiers.

Where is the output from KQL jobs stored?

KQL jobs are written into existing or new custom tables in the analytics tier.

Is it possible to run KQL queries on multiple data lake tables?

Yes, you can run KQL interactive queries and jobs using operators like join or union.

Can KQL queries (either interactive or via KQL jobs) join data across multiple workspaces?

Yes. Security teams can run multi-workspace KQL queries for broader threat correlation.

Pricing and billing

How does a customer pay for Sentinel data lake?

Billing is automatically enabled at the time of onboarding based on Azure Subscription and Resource Group selections. Customers are then charged based on the volume of data ingested, retained, and analyzed (e.g. KQL queries and jobs). See the Sentinel pricing page for more details.

What are the pricing components for Sentinel data lake?

Sentinel data lake offers a flexible pricing model designed to optimize security coverage and costs. At a high level, pricing is based on the volume of data ingested and processed, the volume of data retained, and the volume of data analyzed. For specific meter definitions, see documentation.

How does the business model for Sentinel SIEM change with the introduction of the data lake?

There is no change to the existing Sentinel analytics tier ingestion business model. Sentinel data lake has separate meters for ingestion, storage and analytics.

What happens to the existing Sentinel SIEM and related Azure Monitor billing meters when a customer onboards to Sentinel data lake?

When a customer onboards to the Sentinel data lake, nothing changes with analytics ingestion or retention.
Customers using data archive and Auxiliary Logs will automatically transition to the new data lake meters.

How does data lake storage affect cost efficiency for high volume data retention?

Sentinel data lake offers cost-effective, long-term storage with uniform data compression of 6:1 across all data sources, applicable only to data lake storage. Example: for 600GB of data stored, you are only billed for 100GB of compressed data. This approach allows organizations to retain greater volumes of security data over extended periods cost-effectively, thereby reducing security risks without compromising their overall security posture.

How is "Data Processing" billed?

To support the ingestion and standardization of diverse data sources, the Data Processing feature applies a $0.10 per GB (US East) charge for all data ingested into the data lake. This feature enables a broad array of transformations like redaction, splitting, filtering and normalization. The data processing charge is applied per GB of uncompressed data. Note: For regional pricing, please refer to the "Data processing" meter within the official Microsoft Sentinel pricing documentation.

Does the "Data processing" meter apply to analytics tier data mirrored in the data lake?

No. The data processing charge is not applied to mirrored data. Data mirrored from the analytics tier is not subject to either data ingestion or processing charges.

How is retention billed for tables that use data lake-only ingestion and retention?

Sentinel data lake decouples the ingestion, storage, and analytics meters, so customers have the flexibility to pay based on how data is retained and used. For tables that use data lake‑only ingestion, there is no included free retention, unlike the analytics tier, which includes 90 days of analytics retention. Retention charges begin immediately once data is stored in the data lake.
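As a rough sketch of the meters described above, the following calculation applies the 6:1 compression ratio and the illustrative $0.10/GB US East data processing rate quoted in this FAQ (actual meters and regional rates are defined on the Sentinel pricing page, so treat these constants as placeholders):

```python
# Rough cost sketch using the figures quoted in this FAQ (illustrative only;
# real meters and regional rates differ -- see the Sentinel pricing page).
COMPRESSION_RATIO = 6      # uniform 6:1 compression on data lake storage
PROCESSING_PER_GB = 0.10   # "Data Processing" meter, US East, $/GB uncompressed

def billed_storage_gb(raw_gb: float) -> float:
    """GB actually billed on the data lake storage meter (compressed size)."""
    return raw_gb / COMPRESSION_RATIO

def processing_cost_usd(raw_gb: float) -> float:
    """Data processing charge, applied per GB of uncompressed ingested data."""
    return raw_gb * PROCESSING_PER_GB

# The FAQ's example: 600 GB stored is billed as 100 GB compressed.
print(billed_storage_gb(600))    # 100.0
print(processing_cost_usd(600))  # processing charge on 600 GB at $0.10/GB
```

Note that mirrored analytics tier data skips both the ingestion and processing charges, so this math applies only to data ingested directly into the lake tier.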
Data lake storage billing is based on compressed data size rather than raw ingested volume, which significantly reduces storage costs and delivers lower overall retention spend for customers.

Does data federation incur charges?

Data federation does not generate any ingestion or storage fees in Sentinel data lake. Customers are billed only when they run analytics or queries on federated data, with charges based on Sentinel data lake compute and analytics meters. This means customers pay solely for actual data usage, not mere connectivity.

How do I understand Sentinel data lake costs?

Sentinel data lake costs are driven by three primary factors: how much data is ingested, how long that data is retained, and how the data is used. Customers can flexibly choose to ingest data into the analytics tier or data lake tier, and these architectural choices directly impact cost. For example, data can be ingested into the analytics tier, where commitment tiers help optimize costs for high data volumes, or ingested directly into the Sentinel data lake for lower‑cost ingestion, storage, and on‑demand analysis. Customers are encouraged to work with their Microsoft account team to obtain an accurate cost estimate tailored to their environment. See the Sentinel pricing page to understand Sentinel pricing.

How do I manage Sentinel data lake costs?

Built-in cost management experiences help customers with cost predictability, billing transparency, and operational efficiency. Reports provide customers with insights into usage trends over time, enabling them to identify cost drivers and optimize data retention and processing strategies. Set usage-based alerts on specific meters to monitor and control costs; for example, receive alerts when query or notebook usage passes set limits, helping avoid unexpected expenses and manage budgets. See our Sentinel cost management documentation to learn more.

If I'm an Auxiliary Logs customer, how will onboarding to the Sentinel data lake affect my billing?
Once a workspace is onboarded to Sentinel data lake, all Auxiliary Logs meters will be replaced by new data lake meters.

Do we charge for data lake ingestion and storage for graph experiences?

Microsoft Sentinel graph-based experiences are included as part of the existing Defender and Purview licenses. However, Sentinel graph requires Sentinel data lake and specific data sources to build the underlying graph. Enabling these data sources will incur ingestion and data lake storage costs. Note: For Sentinel SIEM customers, most required data sources are free for analytics ingestion. Non-entitled sources such as Microsoft Entra ID logs will incur ingestion and data lake storage costs.

How is Entra asset data and ARG data billed?

Data lake ingestion charges of $0.05 per GB (US East) will apply to Entra asset data and ARG data. Note: This was not billed during public preview and has been billed since data lake GA. To learn more, see: https://learn.microsoft.com/azure/sentinel/datalake/enable-data-connectors

When a customer activates Sentinel data lake, what happens to tables with archive logs enabled?

To simplify billing, once the data lake is enabled, all archive data will be billed using the data lake storage meter. This provides consistent long-term retention billing and includes automatic 6x data compression. For most customers, this change results in lower long‑term retention costs. However, customers who previously had discounted archive retention pricing will not automatically receive the same discounts on the new data lake storage meters. In these cases, customers should engage their Microsoft account team to review pricing implications before enabling the Sentinel data lake.

Thank you

Thank you to our customers and partners for your continued trust and collaboration. Your feedback drives our innovation, and we're excited to keep evolving Microsoft Sentinel to meet your security needs.
If you have any questions, please don't hesitate to reach out—we're here to support you every step of the way.

Learn more:

Get started with Sentinel data lake today: https://aka.ms/Get_started/Sentinel_datalake
Microsoft Sentinel AI-ready platform: https://aka.ms/Microsoft_Sentinel
Sentinel data lake videos: https://aka.ms/Sentineldatalake_videos
Latest innovations and updates on Sentinel: https://aka.ms/msftsentinelblog
Sentinel pricing page: https://aka.ms/MicrosoftSentinel_Pricing

Introducing Secret Finder: Finding Real Credentials Where Traditional Tools Fail
Secret Finder is an AI-powered capability in Microsoft Security Copilot that detects leaked credentials in unstructured content, such as emails, chat logs, documents, and screenshots, where traditional pattern-matching tools struggle. It relies on a multi‑step, multi‑agent reasoning workflow rather than a single-pass detector. Detection, verification, and contextual analysis are handled by distinct reasoning stages, allowing Secret Finder to find real credentials without flooding users with false positives. Unlike regex-based scanners, Secret Finder uses reasoning to identify not just credentials, but the systems they unlock, helping security teams understand exposure and respond faster. In benchmark testing on synthetic datasets, Secret Finder achieved 98.33% true credential detection with zero false alarms on realistic emails, chats, notes, and documents—while traditional regex scanners detected only about 40% of the same credentials. Secret Finder is now generally available in Security Copilot, supporting 20+ credential types with high precision and actionable context.

The Problem: Credentials Hide Where Traditional Tools Can't See

When security incidents happen, leaked credentials don't always appear in clean, predictable formats. They show up buried in email threads, pasted into Teams messages, embedded in Word documents, or captured in screenshots of logs and terminals. These are exactly the places where security teams spend the most time and where traditional secret scanning tools fail. Most existing tools rely on regular expressions or simple pattern matching. This works reasonably well for structured environments like source code repositories, where credentials follow predictable formats. But in real-world incidents, secrets look different. A storage key might be split across multiple messages in an email thread. A credential could be reformatted, partially redacted, or embedded alongside explanatory text.
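This failure mode is easy to demonstrate. In the toy sketch below (illustrative only: the key pattern and messages are invented, and this is not Secret Finder's implementation), a per-message regex scan misses an Azure-storage-style key that was split across two chat messages, while even a crude cross-message reassembly recovers it:

```python
import re

# Toy pattern for an Azure-storage-style account key: 88 base64 chars ending
# in "==". Real scanners maintain many such rules; this one is illustrative.
KEY_PATTERN = re.compile(r"[A-Za-z0-9+/]{86}==")

# A credential split across two chat messages (fabricated example values).
messages = [
    "here's the storage key, first half: kBx7Qf93LmNop2RsTuVwXy01ZaBcDeFgHiJkLmNoPq",
    "second half: 4rStUvWxYz5678AbCdEfGhIjKlMnOpQrStUvWxYz9012==",
]

# A per-message regex scan finds nothing -- neither fragment matches the rule.
per_message_hits = [m for msg in messages for m in KEY_PATTERN.findall(msg)]
print(per_message_hits)  # []

# Only by reasoning across the thread (here, crudely joining the fragments
# after the "first half:" / "second half:" labels) does the key surface.
joined = "".join(msg.split(": ", 1)[1] for msg in messages)
print(KEY_PATTERN.search(joined) is not None)  # True
```

A real detector also has to decide whether a matching string is a live, usable secret and what it unlocks, which is where pattern matching alone stops and reasoning begins.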
In these situations, pattern matching produces two painful outcomes: it misses real credentials because the format doesn't match a known rule, or it floods analysts with false positives that waste time. Security teams are left manually reviewing content, guessing which findings are real, and piecing together what systems might actually be at risk. In practice, this failure mode has a real human cost: security analysts end up reviewing thousands of alerts, manually inspecting email threads and chat logs, and trying to determine whether a suspicious string actually unlocks a storage account, API, or production service. Teams can spend days reconstructing context across messages and documents just to understand what a credential grants access to, slowing containment and increasing risk during active incidents. This is the gap Secret Finder was built to close.

The Solution: Secret Finder Brings Reasoning to Secret Detection

Secret Finder approaches secret detection as a reasoning problem, not a string-matching exercise. Instead of asking "does this text match a pattern?", it asks human-like questions: Is this text describing a credential or access mechanism? Does the value look real and usable? What system or resource could this access? This shift is subtle but powerful. Secret Finder doesn't just detect credentials; it connects them to doors: the specific targets those credentials unlock, such as API endpoints, storage accounts, applications, or services. This is critical for triage. Instead of stopping at "this looks like a credential," Secret Finder tells analysts what that credential actually opens. Without context, a credential triggers manual follow‑up. When it's linked to a specific target, analysts can immediately assess impact and act. By understanding messy, real-world content the way a human investigator would, Secret Finder delivers findings that security teams can trust and act on immediately.
It's designed specifically for the unstructured, noisy environments where incidents actually unfold.

Why Secret Finder Outperforms Traditional Pattern Matching

Traditional secret scanners are built for clean data. Secret Finder is built for reality. Traditional tools struggle when:

- Credentials appear in natural language descriptions rather than code
- Context determines whether a string is sensitive or benign
- Credentials are incomplete, malformed, or partially redacted

Secret Finder excels because it:

- Reasons through context, understanding surrounding text to identify what's truly sensitive
- Detects credentials and their associated resources together, providing the "what" and the "where" in a single pass
- Handles noisy, unstructured inputs like emails, chat logs, and documents
- Assigns confidence scores to help teams prioritize findings and reduce alert fatigue

What Secret Finder Can Do Today

Secret Finder is now generally available in Microsoft Security Copilot, with capabilities shaped directly by real security workflows across incident response, red teaming, and SOC operations. It detects over 20 major credential categories, spanning cloud provider credentials like Azure Storage Keys and AWS Access Keys, authentication credentials including Microsoft Entra passwords and OAuth tokens, database connection strings, SSH private keys, API keys, and generic secrets that don't fit predefined patterns. This broad coverage means analysts can scan investigation artifacts without worrying whether the secret type is supported. What makes Secret Finder particularly effective is where it works: email threads where credentials are discussed across multiple messages, Teams chats where credentials are pasted quickly during troubleshooting, Word documents and internal wikis where credentials are documented for operational handoffs, and incident reports and post-mortem notes written under pressure.
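The benchmark figures quoted for Secret Finder (98.33% detection with zero false alarms) follow the standard precision and recall definitions. As an illustration only (the counts below are assumed for the arithmetic, not taken from the published evaluation), a recall of 98.33% with no false positives corresponds to detecting 59 of 60 planted credentials while reporting nothing spurious:

```python
def precision(tp: int, fp: int) -> float:
    # Share of reported findings that are real credentials.
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    # Share of planted credentials that were detected.
    return tp / (tp + fn)

# Illustrative counts only: 60 synthetic credentials planted,
# 59 detected, 1 missed, no false alarms raised.
tp, fp, fn = 59, 0, 1

print(round(precision(tp, fp) * 100, 2))  # 100.0
print(round(recall(tp, fn) * 100, 2))     # 98.33
```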
These are the environments where traditional pattern-matching tools fail, and where Secret Finder delivers the most value. In benchmark evaluations, Secret Finder achieved 100% recall with 0% false positives on synthetic datasets containing embedded Azure Storage credentials, compared to 40% recall from traditional regex‑based tools such as CredScan. In more complex scenarios involving multiple credential types and noisy email content, Secret Finder maintained 98.33% recall with 0% false positives. These results were observed on synthetically generated evaluation datasets spanning emails, chats, notes, and documents, designed to reflect how engineers communicate and how credentials may be inadvertently shared in real‑world workflows.

Scenario                             | Precision | Recall
Single credential type               | 100%      | 100%
Complex, multiple credential types   | 100%      | 98.33%

Secret Finder is currently integrated into Security Copilot, actively supporting incident response workflows, and working toward deeper integrations with developer platforms such as GitHub to bring contextual secret detection to source code analysis at scale.

Using Secret Finder in Security Copilot

Secret Finder is available as a skill in Microsoft Security Copilot, making credential detection a seamless part of analyst workflows. How to use Secret Finder:

1) Enable the Secret Finder skill in Security Copilot via "Manage Sources" → "Manage Plugins" (Figure 1)
2) Select "FindSecretInText" from Promptbook (Figure 2)
3) Submit unstructured content directly in the Copilot prompt: paste the text blob that might contain credentials
4) Secret Finder analyzes the content using its multi-agent workflow, detecting credentials and associated doors
5) Review actionable findings with contextual details

Figure 1. Enabling the Secret Finder skill in Microsoft Security Copilot (Due to recent naming changes, users might see "Agentic secret finder" in Security Copilot; the naming change will be reflected in a few weeks)

Figure 2.
Selecting the FindSecretInText prompt, which invokes Secret Finder's multi‑step credential detection and verification workflow

Figure 3. Submitting a text blob containing embedded credentials for analysis (example is synthetic)

Figure 4. Secret Finder output with detected credentials and associated doors (example credentials and associated doors are synthetic)

What's Next for Secret Finder

Secret Finder is a living capability. Over the next six months, we are working toward expanding coverage and deepening integrations:

- Exploring integrations with GitHub to reduce false positives in secret scanning for code repositories
- Optimizing for large-scale analysis to handle enterprise-wide scans efficiently with reduced latency
- Exploring graph-based risk modeling to map relationships between credentials, services, and attack paths

Our long-term vision goes beyond detection: we want to help security teams understand how credentials are used, what risks exist if they're exposed, and what the impact of rotation or revocation would be. By moving from "what's leaked" to "what does it mean," Secret Finder will enable smarter prioritization, faster response, and more confident decision-making.

Acknowledgments

Secret Finder has been a cross-team effort over the past year, evolving from early research and prototyping through private preview, public preview, and now general availability. This milestone reflects contributions across many phases, from initial system design and technical direction to evaluation, product integration, and deployment at scale. Contributors include Mariko Wakabayashi, who led the early research through to production, and Zixiao Chen and Avy Challa, who drove GA improvements and brought Secret Finder to production readiness.
We also appreciate Tony Twum-Barimah, Malachi Jones, and the Security Copilot team, including Austin Trapp and Vinod Jagannathan for their technical and product support throughout the process, as well as Christian Rudnick and Helen Chang for guiding us through the responsible AI reviews before launch. Finally, a huge thanks to the incident responders and security researchers who shared valuable insights along the way. Secret Finder wouldn't have been possible without their work and feedback.

Microsoft partners with DataBahn to accelerate enterprise deployments for Microsoft Sentinel
Enterprise security teams are collecting more telemetry than ever across cloud platforms, endpoints, SaaS applications, and on-premises infrastructure. Security teams want broader data coverage and longer retention without losing control of cost and data quality. This post explains the new DataBahn integration with Microsoft Sentinel, why it matters for SIEM operations, and how to think about using a security data pipeline alongside Sentinel for onboarding, normalization, routing, and governance.

DataBahn joins Microsoft Sentinel partner ecosystem

This integration reflects Microsoft Sentinel's open partner ecosystem, giving customers choice in the partners they use alongside Microsoft Sentinel to manage their security data pipelines. DataBahn joins a broader set of complementary partners, enabling customers to tailor solutions for their unique security data needs. DataBahn is available through Microsoft Marketplace, and customers can apply existing Azure Consumption Commitments toward the purchase of DataBahn.

Why this matters for security operations teams

Security teams are under relentless pressure to ingest more data, move faster through SIEM migrations, and preserve data fidelity for detections and investigations, all while managing costs effectively. The challenge isn't just ingesting data, but ensuring the right telemetry arrives in a consistent, governed format that analysts and detections can trust. This is where a security data pipeline, alongside Microsoft Sentinel's native connectors and DCRs, can add value. It helps streamline onboarding of third-party and custom sources, improve normalization consistency, and provide operational visibility across diverse environments as deployments scale.

What the DataBahn integration is positioned to do with Microsoft Sentinel

Security teams want broader coverage and need to ensure third-party data is consistently shaped, routed, and governed at scale.
This is where a security data pipeline like DataBahn complements Microsoft Sentinel. Sitting upstream of ingestion, the pipeline layer standardizes onboarding and shaping across sources while providing operational visibility into data flow and pipeline health. Together, the collaboration focuses on reducing onboarding friction, improving normalization consistency, enabling intentional routing, and strengthening governance signals so teams can quickly detect source changes, parser breaks, or data gaps—while staying aligned with Sentinel analytics and detection workflows. This model gives Sentinel customers more choice to move faster, onboard data at scale, and retain control over data routing.

Key capabilities

Bidirectional data integration

The integration enables seamless delivery of telemetry into Sentinel while aligning with Sentinel detection logic and schema expectations. This helps ensure telemetry pipelines remain consistent with:

- Sentinel detection formats
- Custom analytics rules
- Sentinel data models and schemas

Automated table and DCR management

As detections evolve, pipeline configurations can adapt to maintain detection fidelity and data consistency.

Advanced management API

DataBahn provides an advanced management API that allows organizations to programmatically configure and manage pipeline integrations with Sentinel. This enables teams to:

- Automate pipeline configuration
- Manage operational workflows
- Integrate pipeline management into broader security or DevOps automation processes

Automatic identification of configuration conflicts

In complex environments with multiple telemetry sources and routing rules, configuration conflicts can arise across filtering logic, enrichment pipelines, and detection dependencies.
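Conceptually, one class of these conflicts reduces to comparing what a pipeline's filtering rules drop against what detections depend on. The sketch below is a toy illustration of that idea only; the pipeline names, field names, and logic are invented for the example, not DataBahn's implementation:

```python
# Toy conflict check: flag any pipeline whose filtering rule drops
# fields that a downstream detection depends on. All names invented.
filter_rules = {
    "fw-pipeline": {"drop_fields": {"RawPayload", "SourcePort"}},
    "saas-pipeline": {"drop_fields": {"DebugInfo"}},
}
detection_dependencies = {
    "Port-scan detection": {"SourceIp", "SourcePort"},
    "Impossible travel": {"SourceIp", "UserPrincipalName"},
}

def find_conflicts(filters, detections):
    conflicts = []
    for pipeline, rule in filters.items():
        for detection, needed in detections.items():
            clash = rule["drop_fields"] & needed  # set intersection
            if clash:
                conflicts.append((pipeline, detection, sorted(clash)))
    return conflicts

print(find_conflicts(filter_rules, detection_dependencies))
# [('fw-pipeline', 'Port-scan detection', ['SourcePort'])]
```

In practice a pipeline layer also has to account for enrichment ordering and routing rules, but the intersection check captures the core idea: a filter that strips a field a detection keys on is a conflict worth surfacing before it silently degrades detection reliability.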
The integration helps automatically:

- Detect conflicts in filtering rules and pipeline logic
- Identify clashes with detection dependencies
- Highlight missing configurations or coverage gaps

This visibility allows SOC teams to quickly identify issues that could impact detection reliability.

Centralized pipeline management

The integration enables centralized management of data collection and transformation workflows associated with Sentinel telemetry pipelines. This provides unified visibility and control across telemetry sources while maintaining compatibility with Sentinel analytics and detections. Centralized management simplifies operations across large environments where multiple telemetry pipelines must be maintained.

Flexible data transformation and customization

Security telemetry often arrives in inconsistent formats across vendors and platforms. The platform supports flexible transformation capabilities that allow organizations to:

- Normalize logs into standard or custom Sentinel table formats
- Add or derive fields required by Sentinel detections
- Apply filtering or enrichment rules before ingestion

Configuration can be performed through a single-screen workflow, enabling teams to modify schemas and define filtering logic without disrupting downstream analytics. The platform also provides schema drift detection and source health monitoring, helping teams maintain reliable telemetry pipelines as environments evolve.

Closing

Effective security operations depend on how quickly a SOC can onboard new data, scale effectively, and maintain high‑quality investigations. Sentinel provides a cloud‑native, AI-ready foundation to ingest security data from first- and third‑party data sources—while enabling economical, large‑scale retention and deep analytics using open data formats and multiple analytics engines.
DataBahn's partnership with Sentinel is positioned as a pipeline layer that can help teams onboard third-party sources, shape and normalize data, and apply routing and governance patterns before data lands in Sentinel.

Learn more

- DataBahn for Microsoft Sentinel
- DataBahn Press Release - Databahn Deepens Partnership with Microsoft Sentinel
- Microsoft Sentinel data lake overview - Microsoft Security | Microsoft Learn
- Microsoft Sentinel—AI-Ready Platform | Microsoft Security
- Connect Microsoft Sentinel to the Microsoft Defender portal - Unified security operations | Microsoft Learn
- Microsoft Sentinel data lake is now generally available | Microsoft Community Hub

What's New in Microsoft EDU - March 2026
Join us on Wednesday, March 25th, 2026 for our latest "What's New in Microsoft EDU" webinar! We will be covering all of the latest product updates from Microsoft Education. These 30-minute webinars are put on by the Microsoft Education Product Management group and happen once per month, this month at both 8:00am Pacific Time and 4:00pm Pacific Time to cover as many global time zones as possible. And don't worry – we'll be recording these and posting them on our Microsoft Education YouTube channel in the new "What's New in Microsoft EDU" playlist, so you'll always be able to watch later or share with others!

Here is our March 2026 webinar agenda:

1) M365 Copilot and AI updates for Educators and Students
   - Modify Existing Content
   - Minecraft EDU Lesson Plans
   - New Learning Activities: Fill in the Blanks, Matching and Self-Quizzing
   - Study & Learn agent for students
2) Learning Zone General Availability and the Copilot+ PC
3) Microsoft 365 LTI and Teach Module for Learning Management Systems
4) AMA - Ask Microsoft EDU Anything (Q&A)

We look forward to having you attend the event!

How to sign up

📅 OPTION 1: March 25th, Wednesday @ 8:00am Pacific Time Register here
📅 OPTION 2: March 25th, Wednesday @ 4:00pm Pacific Time Register here

This is what the webinar portal will look like when you register:

We look forward to seeing you there!

Mike Tholfsen
Group Product Manager
Microsoft Education

Microsoft Sentinel for SAP Agentless connector GA
Dear Community,

Today is the day: our new agentless connector for the Microsoft Sentinel Solution for SAP applications is Generally Available now! Fully onboarded to SAP's official Business Accelerator Hub and ready for prime time wherever your SAP systems are waiting – on-premises, hyperscalers, RISE, or GROW – to be protected. Let's hear from an agentless customer:

"With the Microsoft Sentinel Solution for SAP and its new agentless connector, we accelerated deployment across our SAP landscape without the complexity of containerized agents. This streamlined approach elevated our SOC's visibility into SAP security events, strengthened our compliance posture, and enabled faster, more informed incident response" – SOC Specialist, North American aviation company

Use the video below to kick off your own agentless deployment today. #Kudos to the amazing mvigilante for showing us around the new connector! But we didn't stop there! Security is being reengineered for the AI era - moving from static, rule-based controls to platform-driven, machine-speed defence that anticipates threats before they strike. Attackers think in graphs - Microsoft does too. We're bringing relationship-aware context to Microsoft Security - so defenders and AI can see connections, understand the impact of a potential compromise (blast radius), and act faster across pre-breach and post-breach scenarios including SAP systems - your crown jewels. See it in action in the scenario below: a phishing compromise that led to an SAP login bypassing MFA, followed by operating-system activity on the SAP host downloading trojan software. Enjoy this clickable experience for more details on the scenario.

Shows how a phishing compromise escalated to an SAP MFA bypass, highlighting cross-domain correlation.

The Sentinel Solution for SAP has AI-first in mind and integrates directly with our security platform on the Defender portal for enterprise-wide signal correlation, Security Copilot reasoning, and Sentinel Data Lake usage.
Your real-time SAP detections operate on the Analytics tier for instant results and threat hunting, while the same SAP logs get mirrored to the lake for cost-efficient long-term storage (up to 12 years). Access that data for compliance reporting or historic analysis through KQL jobs on the lake. No more "yeah, I have the data stored somewhere" just to tick the audit report check box – instead, query and use your SAP telemetry in long-term storage at scale. Learn more here.

Findings from the agentless connector preview

During our preview we learned that the majority of customers immediately profit from the far smoother onboarding experience compared to the Docker-based approach. Deployment efforts and time to first SAP log arrival in Sentinel went from days and weeks to hours.

⚠️ Deprecation notice for containerized data connector agent ⚠️

The containerized SAP data connector will be deprecated on September 14th, 2026. This change aligns with the discontinuation of the SAP RFC SDK, SAP's strategic integration roadmap, and customer demand for simpler integration. Migrate to the new agentless connector for simplified onboarding and compliance with SAP's roadmap. All new deployments starting October 31, 2025, will only have the new agentless connector option, and existing customers should plan their migration using the guidance on Microsoft Learn. It will be billed at the same price as the containerized agent, ensuring no cost impact for customers.

Note📌: To support the transition for those of you on the Docker-based data connector, we have enhanced our built-in KQL functions for SAP to work across data sources for hybrid and parallel execution. Follow our agentless migration playlist for a smooth transition.

Spotlight on new features with agentless

Inspired by the feedback of early adopters, we are shipping two of the most requested new capabilities with GA right away.
Customizable polling frequency: Balance threat detection value (1-minute intervals deliver the best value) against utilization of SAP Integration Suite resources based on your needs. ⚠️ Warning! Increasing the intervals may result in message processing truncation to avoid SAP CPI saturation. See this blog for more insights. Refer to the max-rows parameter and SAP documentation to make informed decisions.

Customizable API endpoint path suffix: Flexible endpoints allow running all your SAP security integration flows from the agentless connector while adhering to your naming strategies. Furthermore, you can add community extensions like SAP S/4HANA Cloud public edition (GROW), the SAP Table Reader, and more.

Displays the simplified onboarding flow for the agentless SAP connector

You want more? Here is your chance to share additional feature requests to influence our backlog. We would like to hear from you!

Getting Started with Agentless

The new agentless connector automatically appears in your environment – make sure to upgrade to the latest version 3.4.05 or higher.

Sentinel Content Hub View: Highlights the agentless SAP connector tile in the Microsoft Defender portal, ready for one-click deployment and integration with your security platform

The deployment experience on Sentinel is fully automatic with a single button click: it creates the Azure Data Collection Endpoint (DCE), Data Collection Rule (DCR), and Microsoft Entra ID app registration assigned the RBAC role "Monitoring Metrics Publisher" on the DCR to allow SAP log ingest.

Explore partner add-ons that build on top of agentless

The ISV partner ecosystem for the Microsoft Sentinel Solution for SAP is growing to tailor the agentless offering even further. The current cohort has flagship providers like our co-engineering partner SAP SE themselves with their security products SAP LogServ & SAP Enterprise Threat Detection (ETD), and our mutual partners Onapsis and SecurityBridge.

Ready to go agentless?
➤ Get started from here
➤ Explore partner add-ons here.
➤ Share feature requests here.

Next Steps

Once deployed, I recommend checking AryaG's insightful blog series for details on how to move to production with the built-in SAP content of agentless. Looking to expand protection to SAP Business Technology Platform? Here you go. #Kudos to the amazing Sentinel for SAP team and our incredible community contributors! That's a wrap 🎬. Remember: bringing SAP under the protection of your central SIEM isn't just a checkbox - it's essential for comprehensive security and compliance across your entire IT estate.

Cheers,
Martin