Automating Microsoft Sentinel: A blog series on enabling Smart Security
Welcome to the first entry of our blog series on automating Microsoft Sentinel. We're excited to share insights and practical guidance on leveraging automation to enhance your security posture. In this series, we'll explore the various facets of automation within Microsoft Sentinel. Whether you're a seasoned security professional or just starting out, our goal is to empower you with the knowledge and tools to streamline your security operations and stay ahead of threats. Join us on this journey as we uncover the power of automation in Microsoft Sentinel and learn how to transform your security strategy from reactive to proactive. Stay tuned for our upcoming posts, where we'll dive deeper into specific automation techniques and share success stories from the field. Let's make your security smarter, faster, and more resilient together.

In this series, we will show you how to automate various aspects of Microsoft Sentinel, from simple automation of Microsoft Sentinel Alerts and Incidents to more complicated response scenarios with multiple moving parts. We're doing this as a series so that we can build up our knowledge step by step, finishing off with a "capstone project" that takes SOAR into areas most people aren't aware of or didn't think possible. Here is a preview of what you can expect in the upcoming posts [we'll be updating this post with links to new posts as they happen]:

Part 1: [You are here] – Introduction to Automating Microsoft Sentinel
Part 2: Automation Rules – Automate the mundane away
Part 3: Playbooks Part I – Fundamentals
  o Triggers
  o Entities
  o In-App Content / GitHub
  o Consumption plan vs. dedicated – which to choose and why?
Part 4: Playbooks Part II – Diving Deeper
  o Built-in 1st- and 3rd-party connections (ServiceNow, etc.)
  o REST APIs (everything else)
Part 5: Azure Functions / Custom Code
  o Why Azure Functions?
  o Consumption vs. Dedicated – which to choose and why?
Part 6: Capstone Project (Art of the Possible) – Putting it all together

Part 1: Introduction to Automating Microsoft Sentinel

Microsoft Sentinel is a cloud-native security information and event management (SIEM) platform that helps you collect, analyze, and respond to security threats across your enterprise. But did you know that it also has a native, integrated Security Orchestration, Automation, and Response (SOAR) platform? A SOAR platform that can do just about anything you can think of? It's true!

What is SOAR and why would I want to use it?

A Security Orchestration, Automation, and Response (SOAR) platform helps your team take action in response to alerts or events in your SIEM. For example, let's say Contoso Corp has a policy that if a user has a medium sign-in risk in Entra ID and fails their login three times in a row within a ten-minute timeframe, we force them to re-confirm their identity with MFA. While an analyst could certainly take the actions required, wouldn't it be better if we could do that automatically? Using the Sentinel SOAR capabilities, you could have an analytic rule that automatically takes the action without the analyst being involved at all.

Why Automate Microsoft Sentinel?

Automation is a key component of any modern security operations center (SOC). Automation can help you:

Reduce manual tasks and human errors
Improve the speed and accuracy of threat detection and response
Optimize the use of your resources and skills
Enhance your visibility and insights into your security environment
Align your security processes with your business objectives and compliance requirements

Reduce manual tasks and human errors

Alexander Pope famously wrote "To err is human; to forgive, divine". Busy and distracted humans make mistakes. If we can reduce their workload and errors, then it makes sense to do so.
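As a concrete illustration of the Contoso sign-in-risk scenario described earlier, the detection half could be sketched as a scheduled analytic rule query. This is only a sketch: the column values (RiskLevelDuringSignIn, ResultType) come from the Entra ID SigninLogs table, and the threshold logic should be validated against your own data before use.

```kql
// Users with medium sign-in risk and 3+ failed sign-ins within a 10-minute window (sketch)
SigninLogs
| where TimeGenerated > ago(1h)
| where RiskLevelDuringSignIn == "medium"
| where ResultType != "0"          // a non-zero ResultType indicates a failed sign-in
| summarize FailedCount = count() by UserPrincipalName, bin(TimeGenerated, 10m)
| where FailedCount >= 3
```

An automation rule or playbook attached to this analytic rule would then handle the response half of the scenario, such as requiring the matched users to re-confirm their identity with MFA.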
Using automation, we can make sure that all of the proper steps in our response playbook are followed, and we can make our analysts' lives easier by giving them a simpler "point and click" response capability for those scenarios where a human is "in the loop", or by having the system run the automation in response to events without having to wait for an analyst to respond.

Improve the speed and accuracy of threat detection and response

Letting machines do machine-like things (such as working twenty-four hours a day) is a good practice. Leveraging automation, we can let our security operations center (SOC) run around the clock by having automation tied to analytics. Rather than waiting for an analyst to come online, triage an alert, and then take action, Microsoft Sentinel can stand guard and respond when needed.

Optimize the use of your resources and skills

Having our team members repeat the same mundane tasks is not optimal for the speed of response or their work satisfaction. By automating the mundane away, we can give our teams more time to learn new things or work on other tasks.

Enhance your visibility and insights into your security environment

Automation can be leveraged for more than just responding to an alert or incident. We can augment the information we have about entities involved in an alert or incident by using automation to call REST-based APIs to do point-in-time lookups of the latest threat information, vulnerability data, patching statuses, etc.

Align your security processes with your business objectives and compliance requirements

If you have to meet particular regulatory requirements or internal KPIs, automation can help your team achieve their goals quickly and consistently.

What Tools and Frameworks Can You Use to Automate Microsoft Sentinel?

Microsoft Sentinel provides several tools that enable you to automate your security workflows, such as:

Automation Rules
o Automation rules can be used to automate Microsoft Sentinel itself.
For example, let's say there is a group of machines that have been classified as business critical, and if there is an alert related to those machines, the incident needs to be assigned to a Tier 3 response team and the severity of the alert needs to be raised to at least "High". Using an automation rule, you can take one analytic rule, apply it to the entire enterprise, but then have an automation rule that only applies to those business-critical systems. That way, only the items that need that immediate escalation receive it, quickly and efficiently.
o Another great use of Automation Rules is to create Incident Tasks for analysts to follow. If you have a process and workflow, by using Incident Tasks you can have those steps appear inside an Incident, right there for the analysts to follow. No need to go "look it up" in a PDF or other document.

Playbooks: You can use playbooks to automatically execute actions based on triggers, such as alerts, incidents, or custom events. Playbooks are based on Azure Logic Apps, which allow you to create workflows using various connectors, such as Microsoft Teams, Azure Functions, Azure Automation, and third-party services.

Azure Functions can be leveraged to run custom code like PowerShell or Python and can be called from Sentinel via Playbooks. This way, if you have a process or code that's beyond a Playbook, you can still call it from the normal Sentinel workflow.

Conclusion

In this blog post, we introduced the automation capabilities and benefits of SOAR in Microsoft Sentinel, and some of the tools and frameworks that you can use to automate your security workflows. In the next blog posts, we will dive deeper into each of these topics and provide some practical examples and scenarios of how to automate Microsoft Sentinel. Stay tuned for more updates and tips on automating Microsoft Sentinel!

Additional Resources

What are Automation Rules?
Automate Threat Response with playbooks in Microsoft Sentinel

A Look at Different Options for Storing and Searching Sentinel Archived Logs
As an Azure Sentinel user, you know the importance of having a secure and accessible backup of your log data. In this blog, we'll show you the various options available for storing and searching Sentinel logs beyond the default 90-day retention period. Explore the features and benefits of each solution to find the best fit for your organization.

How to Ingest Microsoft Intune Logs into Microsoft Sentinel
For many organizations using Microsoft Intune to manage devices, integrating Intune logs into Microsoft Sentinel is essential for security operations (incorporating the device into the SIEM). By routing Intune's device management and compliance data into your central SIEM, you gain a unified view of endpoint events and can set up alerts on critical Intune activities, e.g. devices falling out of compliance or policy changes. This unified monitoring helps security and IT teams detect issues faster, correlate Intune events with other security logs for threat hunting, and improve compliance reporting. We're publishing these best practices to help unblock common customer challenges in configuring Intune log ingestion. In this step-by-step guide, you'll learn how to successfully send Intune logs to Microsoft Sentinel, so you can fully leverage Intune data for enhanced security and compliance visibility.

Prerequisites and Overview

Before configuring log ingestion, ensure the following prerequisites are in place:

Microsoft Sentinel Enabled Workspace: A Log Analytics Workspace with Microsoft Sentinel enabled. For information regarding setting up a workspace and onboarding Microsoft Sentinel, see: Onboard Microsoft Sentinel - Log Analytics workspace overview. Microsoft Sentinel is now available in the Defender Portal; connect your Microsoft Sentinel Workspace to the Defender Portal: Connect Microsoft Sentinel to the Microsoft Defender portal - Unified security operations.

Intune Administrator permissions: You need appropriate rights to configure Intune Diagnostic Settings. For information, see: Microsoft Entra built-in roles - Intune Administrator.

Log Analytics Contributor role: The account configuring diagnostics should have permission to write to the Log Analytics workspace. For more information on the different roles and what they can do, see Manage access to log data and workspaces in Azure Monitor.
Intune diagnostic logging enabled: Ensure that Intune diagnostic settings are configured to send logs to Azure Monitor / Log Analytics, and that devices and users are enrolled in Intune so that relevant management and compliance events are generated. For more information, see: Send Intune log data to Azure Storage, Event Hubs, or Log Analytics.

Configure Intune to Send Logs to Microsoft Sentinel

1. Sign in to the Microsoft Intune admin center. Select Reports > Diagnostics settings. If it's your first time here, you may be prompted to "Turn on" diagnostic settings for Intune; enable it if so. Then click "+ Add diagnostic setting" to create a new setting.
2. Select Intune Log Categories. In the "Diagnostic setting" configuration page, give the setting a name (e.g. "Microsoft Sentinel Intune Logs Demo"). Under Logs to send, you'll see checkboxes for each Intune log category. Select the categories you want to forward. For comprehensive monitoring, check AuditLogs, OperationalLogs, DeviceComplianceOrg, and Devices. The selected log categories will be sent to tables in the Microsoft Sentinel Workspace.
3. Configure Destination Details – Microsoft Sentinel Workspace. Under Destination details on the same page, select your Azure Subscription, then select the Microsoft Sentinel workspace.
4. Save the Diagnostic Setting. After you click Save, the Microsoft Intune logs will be streamed to 4 tables in the Analytics tier. For pricing on the Analytics tier, see: Plan costs and understand pricing and billing.
5. Verify Data in Microsoft Sentinel. After configuring Intune to send diagnostic data to a Microsoft Sentinel Workspace, it's crucial to verify that the Intune logs are successfully flowing into Microsoft Sentinel. You can do this by checking specific Intune log tables both in the Microsoft 365 Defender portal and in the Azure Portal.
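A quick way to confirm that events are arriving across all four destination tables is a single union query run in either portal's query editor. This is a sketch for verification only:

```kql
// Count events received per Intune table over the last 24 hours
union isfuzzy=true IntuneAuditLogs, IntuneOperationalLogs, IntuneDeviceComplianceOrg, IntuneDevices
| where TimeGenerated > ago(24h)
| summarize Events = count() by Type
```

The isfuzzy=true option keeps the query from failing if one of the tables has not been created yet, for example when no events of that category have been generated.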
The key tables to verify are:

IntuneAuditLogs
IntuneOperationalLogs
IntuneDeviceComplianceOrg
IntuneDevices

Microsoft 365 Defender Portal (Unified)

1. Open Advanced Hunting: Sign in to https://security.microsoft.com (the unified portal). Navigate to Advanced Hunting. This opens the unified query editor where you can search across Microsoft Defender data and any connected Sentinel data.
2. Find Intune Tables: In the Advanced hunting Schema pane (on the left side of the query editor), scroll down past the Microsoft Sentinel tables. Under the LogManagement section, look for IntuneAuditLogs, IntuneOperationalLogs, IntuneDeviceComplianceOrg, and IntuneDevices in the list.

Microsoft Sentinel in Defender Portal – Tables

Azure Portal (Microsoft Sentinel)

1. Navigate to Logs: Sign in to https://portal.azure.com and open Microsoft Sentinel. Select your Sentinel workspace, then click Logs (under General).
2. Find Intune Tables: In the Logs query editor that opens, you'll see a Schema or tables list on the left. If it's collapsed, click >> to expand it. Scroll down to find LogManagement and expand it; look for these Intune-related tables: IntuneAuditLogs, IntuneOperationalLogs, IntuneDeviceComplianceOrg, and IntuneDevices.

Microsoft Sentinel in Azure Portal – Tables

Querying Intune Log Tables in Sentinel – Once the tables are present, use Kusto Query Language (KQL) in either portal to view and analyze Intune data.

Microsoft 365 Defender Portal (Unified): In the Advanced Hunting page, ensure the query editor is visible (select New query if needed). Run a simple KQL query such as:

IntuneDevices | take 5

Click Run query to display sample Intune device records. If results are returned, it confirms that Intune data is being ingested successfully. Note that querying across Microsoft Sentinel data in the unified Advanced Hunting view requires at least the Microsoft Sentinel Reader role.
Azure Portal (Microsoft Sentinel): In the Azure Logs blade, use the query editor to run a simple KQL query such as:

IntuneDevices | take 5

Select Run to view the results in a table showing sample Intune device data. If results appear, it confirms that your Intune logs are being collected successfully. You can select any record to view full event details and use KQL to further explore or filter the data - for example, by querying IntuneDeviceComplianceOrg to identify devices that are not compliant, adjusting the query as needed.

Once Microsoft Intune logs are flowing into Microsoft Sentinel, the real value comes from transforming that raw device and audit data into actionable security signals. To achieve this, you should set up detection rules that continuously analyze the Intune logs and automatically flag any risky or suspicious behavior. In practice, this means creating custom detection rules in the Microsoft Defender portal (part of the unified XDR experience); see https://learn.microsoft.com/en-us/defender-xdr/custom-detection-rules. It also means creating scheduled analytics rules in Microsoft Sentinel (in either the Azure Portal or the unified Defender portal interface); see: Create scheduled analytics rules in Microsoft Sentinel | Microsoft Learn. These detection rules will continuously monitor your Intune telemetry – tracking device compliance status, enrollment activity, and administrative actions – and will raise alerts whenever they detect suspicious or out-of-policy events. For example, you can be alerted if a large number of devices fall out of compliance, if an unusual spike in enrollment failures occurs, or if an Intune policy is modified by an unexpected account. Each alert generated by these rules becomes an incident in Microsoft Sentinel (and in the XDR Defender portal's unified incident queue), enabling your security team to investigate and respond through the standard SOC workflow.
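As an illustration of the enrollment-failure example above, a scheduled analytics rule's query might look something like the following. This is a sketch: the OperationName and Result values, and the threshold of 20, are assumptions to validate against the actual contents of your IntuneOperationalLogs table.

```kql
// Alert when Intune enrollment failures in the last hour exceed a baseline threshold (sketch)
IntuneOperationalLogs
| where TimeGenerated > ago(1h)
| where OperationName == "Enrollment"
| where Result == "Fail"
| summarize FailureCount = count()
| where FailureCount > 20
```

Tune the lookback window and threshold to your environment's normal enrollment volume so the rule fires on genuine spikes rather than routine activity.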
In turn, this converts raw Intune log data into high-value security insights: you'll achieve proactive detection of potential issues, faster investigation by pivoting on the enriched Intune data in each incident, and even automated response across your endpoints (for instance, by triggering playbooks or other automated remediation actions when an alert fires).

Use this detection logic to create a detection rule:

IntuneDeviceComplianceOrg
| where TimeGenerated > ago(24h)
| where ComplianceState != "Compliant"
| summarize NonCompliantCount = count() by DeviceName, bin(TimeGenerated, 24h)
| where NonCompliantCount > 3

Additional Tips: After confirming data ingestion and setting up alerts, you can leverage other Microsoft Sentinel features to get more value from your Intune logs. For example:

Workbooks for Visualization: Create custom workbooks to build dashboards for Intune data (or check if community-contributed Intune workbooks are available). This can help you monitor device compliance trends and Intune activities visually.

Hunting and Queries: Use advanced hunting (KQL queries) to proactively search through Intune logs for suspicious activities or trends. The unified Defender portal's Advanced Hunting page can query both Sentinel (Intune logs) and Defender data together, enabling correlation across Intune and other security data. For instance, you might join IntuneDevices data with Azure AD sign-in logs to investigate a device associated with risky sign-ins.

Incident Management: Leverage Sentinel's Incidents view (in the Azure portal) or the unified Incidents queue in Defender to investigate alerts triggered by your new rules. Incidents in Sentinel (whether created in the Azure or Defender portal) will appear in the connected portal, allowing your security operations team to manage Intune-related alerts just like any other security incident.

Built-in Rules & Content: Remember that Microsoft Sentinel provides many built-in Analytics Rule templates and Content Hub solutions.
While there isn't a native pre-built Intune content pack as of now, you can use general Sentinel features to monitor Intune data.

Frequently Asked Questions

If you've set everything up but don't see logs in Sentinel, run through these checks:

Check Diagnostic Settings: Go to the Microsoft Intune admin center → Reports → Diagnostic settings. Make sure the setting is turned ON and sending the right log categories to the correct Microsoft Sentinel workspace.

Confirm the Right Workspace: Double-check that the correct Azure subscription and Microsoft Sentinel workspace are selected. If you have multiple tenants/directories, make sure you're in the right one.

Verify Permissions.

Make Sure Logs Are Being Generated: If no devices are enrolled or no actions have been taken, there may be nothing to log yet. Try enrolling a device or changing a policy to trigger logs.

Check Your Queries: Make sure you're querying the correct workspace and time range in Microsoft Sentinel. Try a direct query like:

IntuneAuditLogs | take 5

Still Nothing? Try deleting and re-adding the diagnostic setting. Most issues come down to permissions or selecting the wrong workspace.

How long are Intune logs retained, and how can I keep them longer?

The Analytics tier keeps data in the interactive retention state for 90 days by default, extensible for up to two years. This interactive state, while more expensive, allows you to query your data without restriction, with high performance, at no charge per query: Log retention tiers in Microsoft Sentinel.

We hope this helps you to successfully connect your resources and ingest Intune logs end-to-end into Microsoft Sentinel. If you have any questions, leave a comment below or reach out to us on X @MSFTSecSuppTeam!

Accelerate Agent Development: Hacks for Building with Microsoft Sentinel data lake
As a Senior Product Manager | Developer Architect on the App Assure team working to bring Microsoft Sentinel and Security Copilot solutions to market, I interact with many ISVs building agents on Microsoft Sentinel data lake for the first time. I've written this article to walk you through one possible approach for agent development – the process I use when building sample agents internally at Microsoft. If you have questions about this, or other methods for building your agent, App Assure offers guidance through our Sentinel Advisory Service. Throughout this post, I include screenshots and examples from Gigamon's Security Posture Insight Agent.

This article assumes you have:

An existing SaaS or security product with accessible telemetry.
A small ISV team (2–3 engineers + 1 PM).
Focus on a single high-value scenario for the first agent.

The Composite Application Model (What You Are Building)

When I begin designing an agent, I think end-to-end, from data ingestion requirements through agentic logic, following the Composite Application Model. The Composite Application Model consists of five layers:

Data Sources – Your product's raw security, audit, or operational data.
Ingestion – Getting that data into Microsoft Sentinel.
Sentinel data lake & Microsoft Graph – Normalization, storage, and correlation.
Agent – Reasoning logic that queries data and produces outcomes.
End User – Security Copilot or SaaS experiences that invoke the agent.

This separation allows for evolving data ingestion and agent logic simultaneously. It also helps avoid downstream surprises that require going back and rearchitecting the entire solution.

Optional Prerequisite

You are enrolled in the ISV Success Program, so you can earn Azure Credits to provision Security Compute Units (SCUs) for Security Copilot Agents.
Phase 1: Data Ingestion Design & Implementation

Choose Your Ingestion Strategy

The first choice I face when designing an agent is how the data is going to flow into my Sentinel workspace. Below I document the two primary methods for ingestion.

Option A: Codeless Connector Framework (CCF). This is the best option for ISVs with REST APIs. To build a CCF solution, reference our documentation for getting started.

Option B: CCF Push (Public Preview). In this instance, an ISV pushes events directly to Sentinel via a CCF Push connector. Our MS Learn documentation is a great place to get started using this method.

Additional Note: In the event you find that CCF does not support your needs, reach out to App Assure so we can capture your requirements for future consideration. Azure Functions remains an option if you've documented your CCF feature needs.

Phase 2: Onboard to Microsoft Sentinel data lake

Once my data is flowing into Sentinel, I onboard a single Sentinel workspace to data lake. This is a one-time action and cannot be repeated for additional workspaces.

Onboarding Steps

1. Go to the Defender portal.
2. Follow the Sentinel data lake onboarding instructions.
3. Validate that tables are visible in the lake. See Running KQL Queries in data lake for additional information.

Phase 3: Build and Test the Agent in Microsoft Foundry

Once my data is successfully ingested into data lake, I begin the agent development process. There are multiple ways to build agents depending on your needs and tooling preferences. For this example, I chose Microsoft Foundry because it fit my needs for real-time logging, cost efficiency, and greater control.

1. Create a Microsoft Foundry Instance

Foundry is used as a tool for your development environment. Reference our QuickStart guide for setting up your Foundry instance.

Required Permissions:
Security Reader (Entra or Subscription)
Azure AI Developer at the resource group

After setup, click Create Agent.
2. Design the Agent

A strong first agent:
Solves one narrow security problem.
Has deterministic outputs.
Uses explicit instructions, not vague prompts.

Example agent responsibilities:
To query Sentinel data lake (Sentinel data exploration tool).
To summarize recent incidents.
To correlate the ISV's specific signals with Sentinel alerts and other ISV tables (Sentinel data exploration tool).

3. Implement Agent Instructions

Well-designed agent instructions should include:
Role definition ("You are a security investigation agent…").
Data sources the agent can access.
Step-by-step reasoning rules.
Output format expectations.

Sample instructions can be found here: Agent Instructions

4. Configure the Microsoft Model Context Protocol (MCP) tooling for your agent

For your agent to query, summarize, and correlate all the data your connector has sent to data lake, take the following steps: Select Tools, and under Catalog, type Sentinel, and then select Microsoft Sentinel Data Exploration. For more information about the data exploration tool collection in the MCP server, see our documentation.

I always test repeatedly with real data until outputs are consistent. For more information on testing and validating the agent, please reference our documentation.

Phase 4: Migrate the Agent to Security Copilot

Once the agent works in Foundry, I migrate it to Security Copilot. To do this:

Copy the full instruction set from Foundry.
Provision an SCU for your Security Copilot workspace. For instructions, please reference this documentation. Make note of this process, as you will be charged per hour per SCU; once you are done testing, you will need to deprovision the capacity to prevent additional charges.
Open Security Copilot and use the Create From Scratch Agent Builder as outlined here.
Add the Sentinel data exploration MCP tools (these are the same instructions from the Foundry agent in the previous step). For more information on linking the Sentinel MCP tools, please refer to this article.
Paste and adapt instructions.

At this stage, I always validate the following:

Agent Permissions – I have confirmed the agent has the necessary permissions to interact with the MCP tool and read data from your data lake instance.
Agent Performance – I have confirmed a successful interaction with measured latency and benchmark results.

This step intentionally avoids reimplementation. I am reusing proven logic.

Phase 5: Execute, Validate, and Publish

After setting up my agent, I navigate to the Agents tab to manually trigger the agent. For more information on testing an agent, you can refer to this article. Once the agent has executed successfully, I download the agent Manifest file from the environment so that it can be packaged. Click View code on the agent under the Build tab as outlined in this documentation.

Publishing to the Microsoft Security Store

If I were publishing my agent to the Microsoft Security Store, these are the steps I would follow:

Finalize ingestion reliability.
Document required permissions.
Define supported scenarios clearly.
Package agent instructions and guidance (by following these instructions).

Summary

Based on my experience developing Security Copilot agents on Microsoft Sentinel data lake, this playbook provides a practical, repeatable framework for ISVs to accelerate their agent development and delivery while maintaining high standards of quality. This foundation enables rapid iteration: future agents can often be built in days, not weeks, by reusing the same ingestion and data lake setup. When starting on your own agent development journey, keep the following in mind:

To limit initial scope.
To reuse Microsoft-managed infrastructure.
To separate ingestion from intelligence.

What Success Looks Like

At the end of this development process, you will have the following:

A Microsoft Sentinel data connector live in Content Hub (or in process) that provides a data ingestion path.
Data visible in data lake.
A tested agent running in Security Copilot.
Clear documentation for customers.

A key success factor I look for is clarity over completeness. A focused agent is far more likely to be adopted.

Need help? If you have any issues as you work to develop your agent, please reach out to the App Assure team for support via our Sentinel Advisory Service. Or if you have any other tips, please comment below. I'd love to hear your feedback.

Microsoft partners with DataBahn to accelerate enterprise deployments for Microsoft Sentinel
Enterprise security teams are collecting more telemetry than ever across cloud platforms, endpoints, SaaS applications, and on-premises infrastructure. Security teams want broader data coverage and longer retention without losing control of cost and data quality. This post explains the new DataBahn integration with Microsoft Sentinel, why it matters for SIEM operations, and how to think about using a security data pipeline alongside Sentinel for onboarding, normalization, routing, and governance.

DataBahn joins the Microsoft Sentinel partner ecosystem

This integration reflects Microsoft Sentinel's open partner ecosystem, giving customers choice in the partners they use alongside Microsoft Sentinel to manage their security data pipelines. DataBahn joins a broader set of complementary partners, enabling customers to tailor solutions for their unique security data needs. DataBahn is available through Microsoft Marketplace, and customers can apply existing Azure Consumption Commitments toward the purchase of DataBahn.

Why this matters for security operations teams

Security teams are under relentless pressure to ingest more data, move faster through SIEM migrations, and preserve data fidelity for detections and investigations, all while managing costs effectively. The challenge isn't just ingesting data, but ensuring the right telemetry arrives in a consistent, governed format that analysts and detections can trust. This is where a security data pipeline, alongside Microsoft Sentinel's native connectors and DCRs, can add value. It helps streamline onboarding of third-party and custom sources, improve normalization consistency, and provide operational visibility across diverse environments as deployments scale.

What DataBahn integration is positioned to do with Microsoft Sentinel

Security teams want broader coverage and need to ensure third-party data is consistently shaped, routed, and governed at scale.
This is where a security data pipeline like DataBahn complements Microsoft Sentinel. Sitting upstream of ingestion, the pipeline layer standardizes onboarding and shaping across sources while providing operational visibility into data flow and pipeline health. Together, the collaboration focuses on reducing onboarding friction, improving normalization consistency, enabling intentional routing, and strengthening governance signals so teams can quickly detect source changes, parser breaks, or data gaps—while staying aligned with Sentinel analytics and detection workflows. This model gives Sentinel customers more choice to move faster, onboard data at scale, and retain control over data routing.

Key capabilities

Bidirectional data integration

The integration enables seamless delivery of telemetry into Sentinel while aligning with Sentinel detection logic and schema expectations. This helps ensure telemetry pipelines remain consistent with:

Sentinel detection formats
Custom analytics rules
Sentinel data models and schemas
Automated table and DCR management

As detections evolve, pipeline configurations can adapt to maintain detection fidelity and data consistency.

Advanced management API

DataBahn provides an advanced management API that allows organizations to programmatically configure and manage pipeline integrations with Sentinel. This enables teams to:

Automate pipeline configuration
Manage operational workflows
Integrate pipeline management into broader security or DevOps automation processes

Automatic identification of configuration conflicts

In complex environments with multiple telemetry sources and routing rules, configuration conflicts can arise across filtering logic, enrichment pipelines, and detection dependencies.
The integration helps automatically:

Detect conflicts in filtering rules and pipeline logic
Identify clashes with detection dependencies
Highlight missing configurations or coverage gaps

This visibility allows SOC teams to quickly identify issues that could impact detection reliability.

Centralized pipeline management

The integration enables centralized management of data collection and transformation workflows associated with Sentinel telemetry pipelines. This provides unified visibility and control across telemetry sources while maintaining compatibility with Sentinel analytics and detections. Centralized management simplifies operations across large environments where multiple telemetry pipelines must be maintained.

Flexible data transformation and customization

Security telemetry often arrives in inconsistent formats across vendors and platforms. The platform supports flexible transformation capabilities that allow organizations to:

Normalize logs into standard or custom Sentinel table formats
Add or derive fields required by Sentinel detections
Apply filtering or enrichment rules before ingestion

Configuration can be performed through a single-screen workflow, enabling teams to modify schemas and define filtering logic without disrupting downstream analytics. The platform also provides schema drift detection and source health monitoring, helping teams maintain reliable telemetry pipelines as environments evolve.

Closing

Effective security operations depend on how quickly a SOC can onboard new data, scale effectively, and maintain high‑quality investigations. Sentinel provides a cloud‑native, AI-ready foundation to ingest security data from first- and third‑party data sources—while enabling economical, large‑scale retention and deep analytics using open data formats and multiple analytics engines.
DataBahn’s partnership with Sentinel is positioned as a pipeline layer that can help teams onboard third-party sources, shape and normalize data, and apply routing and governance patterns before data lands in Sentinel.

Learn more
- DataBahn for Microsoft Sentinel
- DataBahn Press Release - DataBahn Deepens Partnership with Microsoft Sentinel
- Microsoft Sentinel data lake overview - Microsoft Security | Microsoft Learn
- Microsoft Sentinel—AI-Ready Platform | Microsoft Security
- Connect Microsoft Sentinel to the Microsoft Defender portal - Unified security operations | Microsoft Learn
- Microsoft Sentinel data lake is now generally available | Microsoft Community Hub

Automating Microsoft Sentinel: A blog series on enabling Smart Security
This entry guides readers through building custom Playbooks in Microsoft Sentinel, highlighting best practices for trigger selection, managed identities, and integrating built-in tools and external APIs. It offers practical steps and insights to help security teams automate incident response and streamline operations within Sentinel.

Automating Microsoft Sentinel: Playbook Fundamentals
Welcome to the third entry of our blog series on automating Microsoft Sentinel. In this series, we’re showing you how to automate various aspects of Microsoft Sentinel, from simple automation of Sentinel Alerts and Incidents to more complicated response scenarios with multiple moving parts.

So far, we’ve covered Part 1: Introduction to Automating Microsoft Sentinel, where we talked about why you would want to automate as well as an overview of the different types of automation you can do in Sentinel, and Part 2: Automation Rules, where we talked about automating the mundane away. In this post, we’re going to start talking about Playbooks, which can be used for automating just about anything.

Here is a preview of what you can expect in the upcoming posts [we’ll be updating this post with links to new posts as they happen]:

Part 1: Introduction to Automating Microsoft Sentinel
Part 2: Automation Rules – Automate the mundane away
Part 3: Playbooks Part I – Fundamentals [You are here]
Part 4: Playbooks Part II – Diving Deeper
Part 5: Azure Functions / Custom Code
Part 6: Capstone Project (Art of the Possible) – Putting it all together

Part 3: Playbooks - Fundamentals

Pre-Built Playbooks in Content Hub
Before we dive any deeper into Playbooks, I want to first point out that there are many pre-built playbooks available in the Content Hub. As of this writing, there are 484 playbooks available from 195 providers, covering all manner of use cases like threat intelligence ingestion, incident response, and operations integrations, in both first-party Microsoft and third-party security tools. Before we dive into the internals of Playbooks and start creating our own, you really should do yourself a favor and take a look at the Content Hub to see if there isn’t already a Playbook doing what you want.
You can also review the list of solutions at the Microsoft Sentinel GitHub page at Azure-Sentinel/Solutions at master · Azure/Azure-Sentinel

Basic Structure of a Playbook
Microsoft Sentinel Playbooks are built on Azure Logic Apps, which is a low-to-no-code workflow automation platform. We’ll be diving into the details of how to create a Logic App from start to finish in the next installment of this series, but for now just know that there are two key “custom” features that Sentinel exposes for use in Playbooks: Triggers and Entities.

Triggers
The events or actions that can start a Playbook running are Triggers. These can be Incident, Alert, or Entity based.

Incident Triggers
Incident triggers fire when an incident is either created or updated in Sentinel. Incident triggers can be tied to Automation Rules (which were covered in Part 2 of this series) and can also be invoked manually by an analyst. Playbooks launched with Incident triggers receive the entire incident object, including any entities it contains as well as the alerts it is comprised of.

Alert Triggers
Alert triggers are similar to Incident triggers, except they fire when an Alert is generated because an Analytic Rule returned a result. This is especially useful when you have an Alert that is not configured to create an Incident. Alert triggers can also be tied to Automation Rules.

Entity Triggers
Entity triggers are different from Incident and Alert triggers in that they cannot be tied to Automation Rules. Instead, they are triggered manually by an analyst. For example, let’s say there is a user account that is part of an Incident, and during the investigation the analyst decides they want to disable that user account in Entra. They could use an Entity Trigger to launch the Playbook, passing the Account Entity to the playbook for the account to be disabled.

Entities
We can’t really talk about Entity Triggers without talking about Entities themselves. So, what is an Entity in Sentinel?
Entities are data elements that identify components in an alert or incident. There are many different types of entities within Sentinel, but for Playbooks we only need to focus on five key ones:
- IP
- Host
- Account
- URL
- FileHash

(For more information on Entities in general, please see: https://learn.microsoft.com/azure/sentinel/entities )

How do you use Entity Triggers?
When you are building an Analytic Rule, you can identify the different Entities that it contains. These are then carried along as part of the Alert and exposed for further actions. This means that all you need to do is map the results of the Analytic Rule to the different Entity types using values returned from your query.

For example, let’s say we are creating an Analytic Rule to alert on a new CloudShell user being created in Azure with the following query:

```kusto
let match_window = 3m;
AzureActivity
| where ResourceGroup has "cloud-shell"
| where (OperationNameValue =~ "Microsoft.Storage/storageAccounts/listKeys/action")
| where ActivityStatusValue =~ "Success"
| extend TimeKey = bin(TimeGenerated, match_window), AzureIP = CallerIpAddress
| join kind = inner
    (AzureActivity
    | where ResourceGroup has "cloud-shell"
    | where (OperationNameValue =~ "Microsoft.Storage/storageAccounts/write")
    | extend TimeKey = bin(TimeGenerated, match_window), UserIP = CallerIpAddress
    ) on Caller, TimeKey
| summarize count() by TimeKey, Caller, ResourceGroup, SubscriptionId, TenantId, AzureIP, UserIP, HTTPRequest, Type, Properties, CategoryValue, OperationList = strcat(OperationNameValue, ' , ', OperationNameValue1)
| extend Name = tostring(split(Caller, '@', 0)[0]), UPNSuffix = tostring(split(Caller, '@', 1)[0])
```

When we use this query as the basis for an Alert, we can then use Entity Mapping under Alert Enhancement to take the relevant fields returned and map them to Entity objects. This example maps the values "Caller", "Name", and "UPNSuffix" returned by the query to the "FullName", "Name", and "UPNSuffix" fields of an Account
Entity. It also maps the UserIP result to the "Address" field of an IP Entity. When the Alert fires, it will include a collection of Account and IP Entities with the necessary values in its Entities field. Now, if we wanted to, we could use a Playbook based on Entity Triggers to act on the Account or IP entities.

What is a “strong” identifier versus a “weak” identifier, and why is it important?
Entities have fields that identify individual instances. Strong identifiers uniquely identify an entity, while weak identifiers may not. Often, combining weak identifiers can create a strong identifier. For example, Account entities can be identified by a strong identifier like a Microsoft Entra ID (GUID) or User Principal Name (UPN). Alternatively, a combination of weak identifiers like Name and NTDomain can be used. Different data sources might identify the same user differently. When Microsoft Sentinel recognizes two entities as the same based on their identifiers, it merges them into one for consistent handling.

We’ll be covering more details on using Entities and Triggers in the next article when we start building Playbooks from scratch.

Conclusion
In this article we talked about the fundamentals of Playbooks in Sentinel, the Content Hub which is the home of pre-built Playbooks, as well as the different types of Triggers that can be used to launch a Playbook. In the next article we’ll be covering how to build a playbook from scratch and put these concepts to work.

Additional Resources
- Supported triggers and actions in Microsoft Sentinel playbooks
- Entities in Microsoft Sentinel

Automating Azure Resource Diagnostics Log Forwarding Between Tenants with PowerShell
As a Managed Security Service Provider (MSSP), there is often a need to collect and forward logs from customer tenants to the MSSP's Sentinel instance for comprehensive security monitoring and analysis. When customers acquire new businesses or operate multiple Azure tenants, they need a streamlined approach to manage security operations across all tenants. This involves consolidating logs into a single Sentinel instance to maintain a unified security posture and simplify management.

Current Challenges:
Forwarding logs across tenants can be done manually by setting up logging for each resource individually (Storage accounts, Key Vaults, etc.) using Lighthouse. However, this method is cumbersome. Automation through Azure Policy would be ideal, but it is not feasible in this case because Azure Policy relies on managed identities. These identities are confined to a single tenant and cannot be used to push logs to another tenant.

In this article, we will explore how to forward Azure resource diagnostics logs from one tenant to another tenant's Sentinel instance using a PowerShell script.

High Level Architecture:

Approach:

1. Resources Creation
This section describes the creation of resources necessary for log forwarding to the Log Analytics workspace.

1.1 Lighthouse Enablement
Refer to the below links to learn more about Lighthouse configuration for Sentinel:
- Managing Microsoft Sentinel across multiple tenants using Lighthouse | Microsoft Community Hub
- Manage Microsoft Sentinel workspaces at scale - Azure Lighthouse | Microsoft Learn

1.2 Create Multitenant SPN
On the MSSP (provider) tenant, create the multitenant application registration and set up a client secret for it. An admin on the customer side then provisions a service principal in the customer tenant. This service principal is based on the multitenant application that the provider created.
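The registration-and-consent flow just described can be sketched with the Az PowerShell module (Az.Resources). This is an illustrative sketch only: the display name and variable names are placeholders, and your environment may require additional admin-consent steps not shown here.

```powershell
# --- Run in the MSSP (provider) tenant ---
# Create a multitenant app registration and generate a client secret for it.
$app = New-AzADApplication -DisplayName "MSSP-LogForwarder" -SignInAudience "AzureADMultipleOrgs"
$secret = New-AzADAppCredential -ApplicationId $app.AppId   # store the returned secret securely (e.g., Key Vault)

# --- Run by an admin in the customer tenant ---
# Provision a service principal based on the provider's multitenant application.
New-AzADServicePrincipal -ApplicationId $app.AppId
```

The customer-side `New-AzADServicePrincipal` call is what instantiates the provider's application in the customer tenant so RBAC roles can be assigned to it in the next step.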
The customer applies role-based access control (RBAC) roles to this new service principal so that it is authorized to enable diagnostic settings in the customer tenant and forward the logs to the MSSP Log Analytics workspace.

Required Permission: Monitoring Contributor in the Customer Tenant & Log Analytics Contributor in the MSSP Tenant

1.3 Access Delegation
Provide the Monitoring Contributor role to the multitenant SPN created in step 1.2 on the customer tenants, using Azure Lighthouse delegation at the subscription level, so it can enable diagnostic settings for all the required Azure resources. Delegate the Log Analytics Contributor role in the MSSP tenant to the same SPN, also via Azure Lighthouse delegation, so it can forward the logs to Microsoft Sentinel in the MSSP tenant.

2. Logging Configuration PowerShell Script
A PowerShell script is used to enable logging on Azure resources across all subscriptions in the customer tenant. The solution involves the following components:
- Master PowerShell Script (Mainfile.ps1): This script lists and executes child scripts for different Azure resources depending on logging requirements.
- Child PowerShell Scripts: Individual scripts for enabling diagnostic settings on specific Azure resources (e.g., Child_AzureActivity.ps1, Child_KeyVault.ps1, etc.).
- Configuration Script (Config.ps1): Contains SPN details, diagnostic settings, and destination Sentinel instance details.

Master PowerShell Script Details:
This file contains the list of child Azure resource PowerShell scripts that need to be executed one by one. Comment out the child file name where logging is not required.

Logging Configuration PowerShell Script Details:
This file holds SPN details like Tenant ID, Client ID, and Client Secret, plus the diagnostic settings name and destination Sentinel instance details, along with the logging categories for each resource's logs. Change the values according to the environment and requirements.
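A Config.ps1 of the shape described above might look like the following. This is purely illustrative: every ID and name is a placeholder, and the variable names in the actual sample scripts may differ.

```powershell
# Config.ps1 - shared settings consumed by the child scripts (all values are placeholders)
$TenantId       = "<customer-tenant-id>"
$ClientId       = "<multitenant-app-client-id>"
$ClientSecret   = "<client-secret>"            # prefer Key Vault or an environment variable over plain text
$DiagnosticName = "MSSP-Diagnostics"           # diagnostic settings name to create and to check for

# Destination Sentinel-enabled Log Analytics workspace in the MSSP tenant
$WorkspaceResourceId = "/subscriptions/<mssp-sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<sentinel-workspace>"

# Log categories to enable per resource type, for example:
$KeyVaultCategories = @("AuditEvent")
$NsgCategories      = @("NetworkSecurityGroupEvent", "NetworkSecurityGroupRuleCounter")
```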
Child PowerShell Scripts Details:
- Child_AzureActivity.ps1
- Child_KeyVault.ps1
- Child_NSG.ps1
- Child_AzureSQL.ps1
- Child_AzureFirewall.ps1
- Child_PublicIPDDOS.ps1
- Child_WAF_AppGateway.ps1
- Child_WAF_FrontDoor.ps1
- Child_WAF_PolicyDiagnostics.ps1
- Child_AKS.ps1
- Child_StorageAccount.ps1

Execution:
Run the main PowerShell script at a scheduled interval; it executes the child scripts to enable diagnostic settings for various resources such as Azure Activity, Azure Firewall, Azure Key Vault, etc. The main file executes the child PowerShell scripts one by one as configured. Below is the logic of how a child file works:
1. Import the Config.ps1 file to gather the SPN details, destination Sentinel instance, and logging settings.
2. Log in to the tenant using the SPN.
3. Get the list of subscriptions in the tenant.
4. Get the resource details (e.g., NSG or Key Vault) from each subscription one by one.
5. Check whether a diagnostic setting with certain keywords is already enabled for the resource. If enabled, skip it and move to the next resource. If not, enable logging and forward the logs to the MSSP Sentinel.

Expected Result & Log Verification
Once the script executes successfully, logging will be enabled in the Azure Activity and Azure resource diagnostic settings, and logs will be shipped to the destination Sentinel in the other tenant. On the MSSP Microsoft Sentinel, verify that the logs have been collected properly in the AzureActivity and AzureDiagnostics tables.

Sample PowerShell scripts: scripts/Enabling cross tenant logging using PowerShell script at main · SanthoshSecurity/scripts
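The five-step child-script flow above can be sketched as follows. This is a minimal, hedged illustration using the Az.Accounts, Az.Resources, and Az.Monitor modules; it targets Key Vaults as an example, and it assumes Config.ps1 defines the SPN, workspace, and category variables referenced below (adapt the resource type and categories for each child script).

```powershell
# Illustrative child-script flow (e.g., Child_KeyVault.ps1).
# Assumes Config.ps1 defines: $TenantId, $ClientId, $ClientSecret,
# $DiagnosticName, $WorkspaceResourceId, $KeyVaultCategories
. .\Config.ps1   # 1. Load SPN, destination workspace, and logging settings

# 2. Log in to the customer tenant using the SPN
$cred = New-Object System.Management.Automation.PSCredential(
    $ClientId, (ConvertTo-SecureString $ClientSecret -AsPlainText -Force))
Connect-AzAccount -ServicePrincipal -Tenant $TenantId -Credential $cred | Out-Null

# 3. Iterate over every subscription visible to the SPN
foreach ($sub in Get-AzSubscription) {
    Set-AzContext -SubscriptionId $sub.Id | Out-Null

    # 4. Enumerate the target resource type in this subscription
    foreach ($kv in Get-AzResource -ResourceType "Microsoft.KeyVault/vaults") {

        # 5. Skip resources that already have a matching diagnostic setting
        $existing = Get-AzDiagnosticSetting -ResourceId $kv.ResourceId -ErrorAction SilentlyContinue |
                    Where-Object { $_.Name -like "*$DiagnosticName*" }
        if ($existing) { continue }

        # Otherwise, enable logging and ship it to the MSSP workspace
        $logs = $KeyVaultCategories | ForEach-Object {
            New-AzDiagnosticSettingLogSettingsObject -Category $_ -Enabled $true
        }
        New-AzDiagnosticSetting -Name $DiagnosticName -ResourceId $kv.ResourceId `
            -WorkspaceId $WorkspaceResourceId -Log $logs
    }
}
```

The "check before enable" step is what makes the script safe to run on a schedule: re-runs only touch resources that were created since the last pass or that lost their diagnostic setting.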