Microsoft Sentinel
Microsoft Sentinel data lake FAQ
On September 30, 2025, Microsoft announced the general availability of the Microsoft Sentinel data lake, designed to centralize and retain massive volumes of security data in open formats like delta parquet. By decoupling storage from compute, the data lake supports flexible querying while offering unified data management and cost-effective retention. The Sentinel data lake is a game changer for security teams, serving as the foundational layer for agentic defense, deeper security insights and graph-based enrichment. In this blog we offer answers to many of the questions we've heard from our customers and partners.

General questions

1. What is the Microsoft Sentinel data lake?
Microsoft has expanded its industry-leading SIEM solution, Microsoft Sentinel, to include a unified security data lake, designed to help optimize costs, simplify data management, and accelerate the adoption of AI in security operations. This modern data lake serves as the foundation for the Microsoft Sentinel platform. It has a cloud-native architecture and is purpose-built for security—bringing together all security data for greater visibility, deeper security analysis and contextual awareness. It provides affordable, long-term retention, allowing organizations to maintain robust security while effectively managing budgetary requirements.

2. What are the benefits of Sentinel data lake?
Microsoft Sentinel data lake is designed for flexible analytics, cost management, and deeper security insights.
- It centralizes security data in an open format like delta parquet for easy access. This unified view enhances threat detection, investigation, and response across hybrid and multi-cloud environments.
- It introduces a disaggregated storage and compute pricing model, allowing customers to store massive volumes of security data at a fraction of the cost compared to traditional SIEM solutions.
- It allows multiple analytics engines like Kusto, Spark, and ML to run on a single data copy, simplifying management, reducing costs, and supporting deeper security analysis.
- It integrates with GitHub Copilot and VS Code, empowering SOC teams to automate enrichment, anomaly detection, and forensic analysis.
- It supports AI agents via the MCP server, allowing tools like GitHub Copilot to query and automate security tasks. The MCP server layer brings intelligence to the data, offering Semantic Search, Query Tools, and Custom Analysis capabilities that make it easier to extract insights and automate workflows.
- Customers also benefit from streamlined onboarding, intuitive table management, and scalable multi-tenant support, making it ideal for MSSPs and large enterprises.
- The Sentinel data lake is purpose-built for security workloads, ensuring that processes from ingestion to analytics meet cybersecurity requirements.

3. Is the Sentinel data lake generally available?
Yes. The Sentinel data lake is generally available (GA) starting September 30, 2025. To learn more, see the GA announcement blog.

4. What happens to Microsoft Sentinel SIEM?
Microsoft is expanding Sentinel into an AI-powered end-to-end security platform that includes SIEM and new platform capabilities - security data lake, graph-powered analytics and MCP Server. SIEM remains a core component and will be actively developed and supported.

Getting started

1. What are the prerequisites for Sentinel data lake?
To get started:
- Connect your Sentinel workspace to Microsoft Defender prior to onboarding to the Sentinel data lake.
- Once in the Defender experience, see the data lake onboarding documentation for next steps.
Note: Sentinel is moving to the Microsoft Defender portal and the Sentinel Azure portal will be retired by July 2026.

2. I am a Sentinel-only customer, and not a Defender customer. Can I use the Sentinel data lake?
Yes. You must connect Sentinel to the Defender experience before onboarding to the Sentinel data lake. Microsoft Sentinel is generally available in the Microsoft Defender portal, with or without Microsoft Defender XDR or an E5 license. If you have created a Log Analytics workspace, enabled it for Sentinel and have the right Microsoft Entra roles (e.g. Global Administrator + Subscription Owner, Security Administrator + Sentinel Contributor), you can enable Sentinel in the Defender portal. For more details on how to connect Sentinel to Defender, review these sources: Microsoft Sentinel in the Microsoft Defender portal

3. In what regions is Sentinel data lake available?
For supported regions, see: Geographical availability and data residency in Microsoft Sentinel | Microsoft Learn

4. Is there an expected release date for Microsoft Sentinel data lake in Government clouds?
While the exact date is not yet finalized, we anticipate support for these clouds soon.

5. How will URBAC and Entra RBAC work together to manage the data lake given there is no centralized model?
Entra RBAC will provide broad access to the data lake (URBAC maps the right permissions to specific Entra role holders: GA/SA/SO/GR/SR). URBAC will become a centralized pane for configuring non-global delegated access to the data lake. For today, you will use this for the "default data lake" workspace. In the future, this will be enabled for non-default Sentinel workspaces as well – meaning all workspaces in the data lake can be managed here for data lake RBAC requirements. Azure RBAC on the Log Analytics (LA) workspace in the data lake is respected through URBAC as well today. If you already hold a built-in role like Log Analytics Reader, you will be able to run interactive queries over the tables in that workspace. Or, if you hold Log Analytics Contributor, you can read and manage table data. For more details see: Roles and permissions in the Microsoft Sentinel platform | Microsoft Learn

Data ingestion and storage

1. How do I ingest data into the Sentinel data lake?
To ingest data into the Sentinel data lake, you can use existing Sentinel data connectors or custom connectors to bring data from Microsoft and third-party sources. Data can be ingested into the analytics tier and/or the data lake tier. Data ingested into the analytics tier is automatically mirrored to the lake, while lake-only ingestion is available for select tables. Data retention is configured in table management. Note: Certain tables do not support data lake-only ingestion via either API or data connector UI. See here for more information: Custom log tables.

2. What is Microsoft's guidance on when to use the analytics tier vs. the data lake tier?
Sentinel data lake offers flexible, built-in data tiering (analytics and data lake tiers) to effectively meet diverse business use cases and achieve cost optimization goals.

Analytics tier:
- Ideal for high-performance, real-time, end-to-end detections, enrichments, investigation and interactive dashboards. Typically, high-fidelity data from EDRs, email gateways, identity, SaaS and cloud logs, and threat intelligence (TI) should be ingested into the analytics tier.
- Data in the analytics tier is best monitored proactively with scheduled alerts and scheduled analytics to enable security detections.
- Data in this tier is retained at no cost for up to 90 days by default, extendable to 2 years.
- A copy of the data in this tier is automatically available in the data lake tier at no extra cost, ensuring a unified copy of security data for both tiers.

Data lake tier:
- Designed for cost-effective, long-term storage. High-volume logs like NetFlow logs, TLS/SSL certificate logs, firewall logs and proxy logs are best suited for the data lake tier.
- Customers can use these logs for historical analysis, compliance and auditing, incident response (IR) and forensics over historical data, to build tenant baselines and run TI matching, and then promote the resulting insights into the analytics tier (a sketch of such a query follows this answer).
- Customers can run full Kusto queries, Spark notebooks and scheduled jobs over a single copy of their data in the data lake.
- Customers can also search, enrich and restore data from the data lake tier to the analytics tier for full analytics.

For more details see documentation.
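The following is a minimal, hypothetical sketch of the kind of aggregation a scheduled KQL job might run over a high-volume lake-tier table, producing daily summaries that could be written to an analytics-tier table. The table, vendor filter and column names are examples only and will differ per environment.

```kusto
// Hypothetical sketch: summarize high-volume firewall data held in the data lake tier
// into daily per-source counts that a KQL job could write to an analytics tier table.
CommonSecurityLog
| where TimeGenerated > ago(30d)
| where DeviceVendor == "Fortinet"          // example vendor filter - adjust to your environment
| summarize TotalEvents = count(),
            DeniedEvents = countif(DeviceAction has "deny")
    by bin(TimeGenerated, 1d), SourceIP
| order by TimeGenerated asc
```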
3. What does it mean that a copy of all new analytics tier data will be available in the data lake?
When Sentinel data lake is enabled, a copy of all new data ingested into the analytics tier is automatically duplicated into the data lake tier. This means customers don't need to manually configure or manage this process—every new log or telemetry added to the analytics tier becomes instantly available in the data lake. This allows security teams to run advanced analytics, historical investigations, and machine learning models on a single, unified copy of data in the lake, while still using the analytics tier for real-time SOC workflows. It's a seamless way to support both operational and long-term use cases—without duplicating effort or cost.

4. Is there any cost for retention in the analytics tier?
You get 90 days of analytics retention free; simply set analytics retention to 90 days or less. For the total retention setting, only the mirrored portion that overlaps with the free analytics retention is free in the data lake. Retaining data in the lake beyond the analytics retention period incurs additional storage costs. See documentation for more details: Manage data tiers and retention in Microsoft Sentinel | Microsoft Learn

5. What is the guidance for Microsoft Sentinel Basic and Auxiliary Logs customers?
If you previously enabled the Basic or Auxiliary Logs plan in Sentinel:
- You can view Basic Logs in the Defender portal but manage them from the Log Analytics workspace. To manage them in the Defender portal, you must change the plan from Basic to Analytics.
- Existing Auxiliary Log tables will be available in the data lake tier for use once the Sentinel data lake is enabled. Prior to the availability of Sentinel data lake, Auxiliary Logs provided a long-term retention solution for Sentinel SIEM. Once the data lake is enabled, Auxiliary Log tables will be available in the Sentinel data lake for use with the data lake experiences. Billing for Auxiliary Logs will switch to Sentinel data lake meters.
- Microsoft Sentinel customers are recommended to start planning their data management strategy with the data lake. While Basic and Auxiliary Logs are still available, they are not being enhanced further. Please plan on onboarding your security data to the Sentinel data lake. Azure Monitor customers can continue to use Basic and Auxiliary Logs for observability scenarios.

6. What happens to customers that already have Archive logs enabled?
If a customer has already configured tables for Archive retention, those settings will be inherited by the Sentinel data lake and will not change. Data in the Archive logs will continue to be accessible through Sentinel search and restore experiences. Mirrored data (in the data lake) will be accessible via the lake explorer and notebook jobs.

Example: If a customer has 12 months of total retention enabled on a table, then 2 months after enabling ingestion into the Sentinel data lake the customer will still have access to 12 months of archived data (through Sentinel search and restore experiences), but access to only 2 months of data in the data lake (since the data lake was enabled).

Key considerations for customers that currently have Archive logs enabled:
- The existing archive will remain, with new data ingested into the data lake going forward; previously stored archive data will not be backfilled into the lake.
- Archive logs will continue to be accessible via the Search and Restore tab under Sentinel.
- If analytics and data lake mode are enabled on a table, which is the default setting for analytics tables when Sentinel data lake is enabled, data will continue to be ingested into the Sentinel data lake and archive going forward. There will only be one retention billing meter going forward. Archive will continue to be accessible via Search and Restore.
- If Sentinel data lake-only mode is enabled on a table, new data will be ingested only into the data lake; any data that's not already in the Sentinel data lake won't be migrated or backfilled. Data that was previously ingested under the archive plan will be accessible via Search and Restore.

7. What is the guidance for customers using Azure Data Explorer (ADX) alongside Microsoft Sentinel?
Some customers might have set up an ADX cluster to augment their Sentinel deployment. Customers can choose to continue using that setup and gradually migrate to Sentinel data lake for new data to receive the benefits of a fully managed data lake. For all new implementations it is recommended to use the Sentinel data lake.

8. What happens to the Defender XDR data after enabling Sentinel data lake?
By default, Defender XDR retains threat hunting data in the XDR default tier, which includes 30 days of analytics retention included in the XDR license. You can extend the table retention period for supported Defender XDR tables beyond 30 days. For more information see Manage XDR data in Microsoft Sentinel. Note: Today you can't ingest XDR tables directly into the data lake tier without ingesting into the analytics tier first.

9. Are there any special considerations for XDR tables?
Yes. XDR tables are unique in that they are available for querying in advanced hunting by default for 30 days. To retain data beyond this period, an explicit change to the retention setting is required, either by extending the analytics tier retention or the total retention period. A list of XDR advanced hunting tables supported by Sentinel is documented here: Connect Microsoft Defender XDR data to Microsoft Sentinel | Microsoft Learn.

KQL queries and jobs

1. Is KQL and Notebook supported over the Sentinel data lake?
Yes, via the data lake KQL query experience along with a fully managed notebook experience which enables Spark-based big data analytics over a single copy of all your security data. Customers can run queries across any time range of data in their Sentinel data lake. In the future, this will be extended to enable SQL queries over the lake as well.
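To illustrate, here is a minimal sketch of an interactive long-lookback query you might run against the data lake tier. It assumes Entra ID sign-in logs are mirrored to the lake; the table and result-code values are examples only.

```kusto
// Illustrative only: an interactive data lake query spanning a long time range,
// counting failed sign-ins per day for the past year.
SigninLogs
| where TimeGenerated > ago(365d)
| where ResultType != "0"          // non-zero result codes indicate failures
| summarize FailedSignIns = count() by bin(TimeGenerated, 1d)
| order by TimeGenerated asc
```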
2. Why are there two different places to run KQL queries in the Sentinel experience?
Consolidating the advanced hunting and KQL Explorer user interfaces is on the roadmap. Security analysts will benefit from a unified query experience across both the analytics and data lake tiers.

3. Where is the output from KQL jobs stored?
KQL job output is written into an existing or new analytics tier table.

4. Is it possible to run KQL queries on multiple data lake tables?
Yes, you can run KQL interactive queries and jobs using operators like join or union.

5. Can KQL queries (either interactive or via KQL jobs) join data across multiple workspaces?
Yes, security teams can run multi-workspace KQL queries for broader threat correlation.

Pricing and billing

1. How does a customer pay for Sentinel data lake?
Sentinel data lake is a consumption-based service with a disaggregated storage and compute business model. Customers continue to pay for ingestion. Customers set up billing as a part of their onboarding for storage and analytics over data in the data lake (e.g. queries, KQL or notebook jobs). See the Sentinel pricing page for more details.

2. What are the pricing components for Sentinel data lake?
Sentinel data lake offers a flexible pricing model designed to optimize security coverage and costs. For specific meter definitions, see documentation.

3. What are the billing updates at GA?
- We are enabling data compression, billed with a simple and uniform compression rate of 6:1 across all data sources, applicable only to data lake storage.
- Starting October 1, 2025, data storage billing begins on the first day data is stored.
- To support ingestion and standardization of diverse data sources, we are introducing a new Data Processing feature that applies a $0.10 per GB charge for all uncompressed data ingested into the data lake for tables configured for data lake-only retention. (This does not apply to tables configured for both analytics and data lake tier retention.)

4. How is retention billed for tables that use data lake-only ingestion and retention?
During the public preview, data lake-only tables included the first 30 days of retention at no cost. At GA, storage costs will be billed. In addition, when retention billing switches to using compressed data size (instead of ingested size), charges will apply for the entire retention period. Because billing will be based on compressed data size, customers can expect significant savings on storage costs.

5. Does the "Data processing" meter apply to analytics tier data duplicated in the data lake?
No.

6. What happens to billing for customers that activate Sentinel data lake on a table with Archive logs enabled?
Customers will automatically be billed using the data lake storage meter. Note: This means that customers will be charged using the 6:1 compression rate for data lake retention.

7. How do I control my Sentinel data lake costs?
Sentinel is billed based on consumption and prices vary based on usage. An important tool for managing the majority of the cost is the use of analytics "Commitment Tiers". The data lake complements this strategy for higher-volume data like network and firewall data to reduce analytics tier costs. Use the Azure pricing calculator and the Sentinel pricing page to estimate costs and understand pricing.
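As one practical input to that planning, a query along these lines (a sketch using the standard Log Analytics Usage table) can show which tables drive billable ingestion and are therefore candidates for data lake tier retention:

```kusto
// A rough sketch: review billable ingestion volume by table over the last 30 days
// to identify high-volume tables that may be better suited to the data lake tier.
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize IngestedGB = round(sum(Quantity) / 1024, 2) by DataType
| order by IngestedGB desc
```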
8. How do I manage Sentinel data lake costs?
We are introducing a new cost management experience (public preview) to help customers with cost predictability, billing transparency, and operational efficiency. These in-product reports provide customers with insights into usage trends over time, enabling them to identify cost drivers and optimize data retention and processing strategies. Customers will also be able to set usage-based alerts on specific meters to monitor and control costs. For example, you can receive alerts when query or notebook usage passes set limits, helping avoid unexpected expenses and manage budgets. See documentation to learn more.

9. If I'm an Auxiliary Logs customer, how will onboarding to the Sentinel data lake affect my billing?
Once a workspace is onboarded to Sentinel data lake, all Auxiliary Logs meters will be replaced by new data lake meters.

Thank you

Thank you to our customers and partners for your continued trust and collaboration. Your feedback drives our innovation, and we're excited to keep evolving Microsoft Sentinel to meet your security needs. If you have any questions, please don't hesitate to reach out—we're here to support you every step of the way.

Microsoft Sentinel's AI-driven UEBA ushers in the next era of behavioral analytics
Co-author: Ashwin Patil

Security teams today face an overwhelming challenge: every data point is now a potential security signal, and SOCs are drowning in complex logs, trying to find the needle in the haystack. Microsoft Sentinel User and Entity Behavior Analytics (UEBA) brings the power of AI to automatically surface anomalous behaviors, helping analysts cut through the noise, save time, and focus on what truly matters.

Microsoft Sentinel UEBA has already helped SOCs uncover insider threats, detect compromised accounts, and reveal subtle attack signals that traditional rule-based methods often miss. These capabilities were previously powered by a core set of high-value data sources - such as sign-in activity, audit logs, and identity signals - that consistently delivered rich context and accurate detections. Today, we're excited to announce a major expansion: Sentinel UEBA now supports six new data sources across Microsoft first- and third-party platforms like Azure, AWS, GCP, and Okta, bringing deeper visibility, broader context, and more powerful anomaly detection tailored to your environment.

This isn't just about ingesting more logs. It's about transforming how SOCs understand behavior, detect threats, and prioritize response. With this evolution, analysts gain a unified, cross-platform view of user and entity behavior, enabling them to correlate signals, uncover hidden risks, and act faster with greater confidence.

Newly supported data sources are built for real-world security use cases:

Authentication activities
- MDE DeviceLogonEvents – Ideal for spotting lateral movement and unusual access.
- AADManagedIdentitySignInLogs – Critical for spotting stealthy abuse of non-human identities.
- AADServicePrincipalSignInLogs – Identifies anomalies in service principal usage such as token theft or over-privileged automation.

Cloud platforms & identity management
- AWS CloudTrail Login Events – Surfaces risky AWS account activity based on AWS CloudTrail ConsoleLogin events and logon-related attributes.
- GCP Audit Logs - Failed IAM Access – Captures denied access attempts indicating reconnaissance, brute force, or privilege misuse in GCP.
- Okta MFA & Auth Security Change Events – Flags MFA challenges, resets, and policy modifications that may reveal MFA fatigue, session hijacking, or policy tampering. Currently supports the Okta_CL table (unified Okta connector support coming soon).

These sources feed directly into UEBA's entity profiles and baselines - enriching users, devices, and service identities with behavioral context and anomalies that would otherwise be fragmented across platforms. This complements our existing supported log sources - Entra ID sign-in logs, Azure Activity logs and Windows Security Events. Thanks to the unified schema across data sources, UEBA enables feature-rich investigation and the ability to correlate insights and anomalies across data sources and across cross-platform identities and devices.

AI-powered UEBA that understands your environment

Microsoft Sentinel UEBA goes beyond simple log collection - it continuously learns from your environment. By applying AI models trained on your organization's behavioral data, UEBA builds dynamic baselines and peer groups, enabling it to spot truly anomalous activity. UEBA builds baselines from 10 days (for uncommon activities) to 6 months, both for the user and their dynamically calculated peers.
Then, insights are surfaced on the activities and logs - such as an uncommon activity or first-time activity - not only for the user but among peers. Those insights are used by an advanced AI model to identify high-confidence anomalies. So, if a user signs in for the first time from an uncommon location, but that is a common pattern in the environment due to reliance on global vendors, for example, this will not be identified as an anomaly, keeping the noise down. However, in a tightly controlled environment, this same behavior can be an indication of an attack and will surface in the Anomalies table. Including those signals in custom detections can help adjust the severity of an alert. So, while the detection logic is maintained, the SOC stays focused on the right priorities.

How to use UEBA for maximum impact

Security teams can leverage UEBA in several key ways. All the examples below leverage UEBA's dynamic behavioral baselines looking back up to 6 months. Teams can also leverage the hunting queries from the "UEBA essentials" solution in Microsoft Sentinel's Content Hub.

Behavior analytics
Detect unusual logon times, MFA fatigue, or service principal misuse across hybrid environments. Get visibility into geo-location of events and Threat Intelligence insights. Here's an example of how you can easily discover accounts authenticating without MFA and from uncommonly connected countries using the UEBA BehaviorAnalytics table:

```kusto
BehaviorAnalytics
| where TimeGenerated > ago(7d)
| where EventSource == "AwsConsoleSignIn"
| where ActionType == "ConsoleLogin" and ActivityType == "signin.amazonaws.com"
| where ActivityInsights.IsMfaUsed == "No"
| where ActivityInsights.CountryUncommonlyConnectedFromInTenant == True
| evaluate bag_unpack(UsersInsights, "AWS_")
| where InvestigationPriority > 0 // Filter noise - uncomment if you want to see low fidelity noise
| project TimeGenerated, _WorkspaceId, ActionType, ActivityType, InvestigationPriority, SourceIPAddress, SourceIPLocation, AWS_UserIdentityType, AWS_UserIdentityAccountId, AWS_UserIdentityArn
```

Anomaly detection
Identify lateral movement, dormant account reactivation, or brute-force attempts, even when they span cloud platforms. Below is an example of how to discover UEBA anomalies for AWS CloudTrail, Okta, GCP and authentication activity, along with the associated UEBA activity, device and user insights attributes:

```kusto
Anomalies
| where AnomalyTemplateName in (
    "UEBA Anomalous Logon in AwsCloudTrail",     // AWS CloudTrail anomalies
    "UEBA Anomalous MFA Failures in Okta_CL",
    "UEBA Anomalous Activity in Okta_CL",        // Okta anomalies
    "UEBA Anomalous Activity in GCP Audit Logs", // GCP Failed IAM access anomalies
    "UEBA Anomalous Authentication"              // Authentication-related anomalies
)
| project TimeGenerated, _WorkspaceId, AnomalyTemplateName, AnomalyScore, Description, AnomalyDetails, ActivityInsights, DeviceInsights, UserInsights, Tactics, Techniques
```

Alert optimization
Use UEBA signals to dynamically adjust alert severity in custom detections—turning noisy alerts into high-fidelity detections. The example below shows all the users with anomalous sign-in patterns based on UEBA. Joining the results with any AWS alerts for the same AWS identity will increase fidelity.
```kusto
BehaviorAnalytics
| where TimeGenerated > ago(7d)
| where EventSource == "AwsConsoleSignIn"
| where ActionType == "ConsoleLogin" and ActivityType == "signin.amazonaws.com"
| where ActivityInsights.FirstTimeConnectionViaISPInTenant == True or ActivityInsights.FirstTimeUserConnectedFromCountry == True
| evaluate bag_unpack(UsersInsights, "AWS_")
| where InvestigationPriority > 0 // Filter noise - uncomment if you want to see low fidelity noise
| project TimeGenerated, _WorkspaceId, ActionType, ActivityType, InvestigationPriority, SourceIPAddress, SourceIPLocation, AWS_UserIdentityType, AWS_UserIdentityAccountId, AWS_UserIdentityArn, ActivityInsights
| evaluate bag_unpack(ActivityInsights)
```

Another example shows anomalous Key Vault access from a service principal with an uncommon source country location. Joining this activity with other alerts from the same service principal increases the fidelity of those alerts. You can also join the "UEBA Anomalous Authentication" anomaly with other alerts from the same identity to bring the full power of UEBA into your detections.

```kusto
BehaviorAnalytics
| where TimeGenerated > ago(1d)
| where EventSource == "Authentication" and SourceSystem == "AAD"
| evaluate bag_unpack(ActivityInsights)
| where LogonMethod == "Service Principal" and Resource == "Azure Key Vault"
| where ActionUncommonlyPerformedByUser == "True" and CountryUncommonlyConnectedFromByUser == "True"
| where InvestigationPriority > 0
```

Final thoughts

This release marks a new chapter for Sentinel UEBA—bringing together AI, behavioral analytics, and cross-cloud and identity management visibility to help defenders stay ahead of threats. If you haven't explored UEBA yet, now's the time: enable it in your workspace settings and don't forget to enable anomalies as well (in the Anomalies settings). And if you're already using it, these new sources will help you unlock even more value.

Stay tuned for our upcoming Ninja show and webinar (register at aka.ms/secwebinars), where we'll dive deeper into use cases. Until then, explore the new sources, use the UEBA workbook, update your watchlists, and let UEBA do the heavy lifting.

- UEBA onboarding and setting documentation
- Identify threats using UEBA
- UEBA enrichments and insights reference
- UEBA anomalies reference

Microsoft Sentinel and Defender: ITSM Integrations Explained
One of the main changes and advantages of onboarding Microsoft Sentinel to the Defender portal is that alerts are automatically correlated into single incidents. Alert correlation kicks in when there is enough evidence that multiple alerts are related. This has great benefits, as it can quickly unveil multi-stage attacks that otherwise could look like unrelated, harmless alerts. It also helps reduce the number of incidents by merging those that are related.

However, each organization has its own internal processes, teams, etc., so transitioning to this new way of working can be challenging. Furthermore, for organizations that use external ITSM tools to manage their incidents and alerts, this means the logic in the incident synchronization must be considered before onboarding Microsoft Sentinel on Defender. This is critical, as you will need to make sure that updates on incidents due to correlation are properly reflected in your tool.

In this article, we will review the most relevant fields in the Azure management API and in the Microsoft Graph security API. If you are building a new integration, we recommend transitioning to the Microsoft Graph security API. However, if you have an existing integration relying on the Microsoft Sentinel REST APIs, you can simply update what you have.

If you are using ServiceNow as your ITSM tool, we recommend using the out-of-the-box app provided by ServiceNow, which relies on the Azure Management API: Microsoft Sentinel - ServiceNow Store. The latest version includes improvements that take care of the correlation logic. If you are using a different ITSM tool with an out-of-the-box integration, we recommend checking whether the latest version available already covers this change in the logic. Otherwise, if you wish to update or create your own integration logic, we are providing some guidance below.

How does the correlation logic affect incidents

Understanding how the correlation logic affects incidents and when it kicks in is the first step. We recommend reading our official documentation, which explains different scenarios for incident correlation and merging. Below we describe in more detail how correlation affects incidents, bearing in mind how this can influence synchronization with external ITSM systems. Since correlation will automatically move alerts and close incidents, you need to understand how this works and check whether your synchronization logic needs modifications.

When two or more alerts are identified as related, they are aggregated into a single incident. This can happen in two ways:
- Existing incidents: Alerts from different incidents may be moved into a new or existing "target" incident. The original ("source") incidents lose those alerts and are automatically closed. On Defender, any references to the "source" incident are redirected to the "target" incident. On Microsoft Sentinel in Azure, you will still see the old incident; although it will be closed, it will contain the tag "Redirected" and a link to the "target" incident.
- New alerts: Alerts that are not yet part of any incident may be directly added to the "target" incident.

Please keep in mind that Sentinel alerts that are triggered without an incident (which is possible through the Sentinel analytic rule settings) will not be considered for correlation. In other words, when Sentinel alerts are correlated to a target incident, there will always be a "source" incident which will be closed.
We recommend reading about the criteria for merging incidents, as well as the specific scenarios when incidents aren't merged.

Using the Microsoft Sentinel REST APIs

If you have an existing integration based on the Microsoft Sentinel REST APIs, these are the relevant endpoints:
- Get all incidents: Incidents - List - REST API (Azure Sentinel) | Microsoft Learn
- Get a specific incident: Incidents - Get - REST API (Azure Sentinel) | Microsoft Learn
- Get all alerts for a specific incident: Incidents - List Alerts - REST API (Azure Sentinel) | Microsoft Learn

What we observe in most organizations with external ITSM systems is that they generally synchronize incidents only, not alerts; their analysts go to Microsoft Sentinel to view the alerts and other details, hunt, etc. Therefore, we will focus on the endpoint to get all incidents, which is probably the endpoint you are using today.

Let us have a look at the payload for an incident that has been closed due to redirection. These are the relevant fields for you:
- id: resource ID of the incident on Azure, now closed
- name: the incident GUID on Azure
- title: the source incident title (now closed; the new incident will probably have a different name)
- status: will always be closed in this scenario
- labelName: will always be redirected in this scenario; this tag is automatically added to incidents closed by automatic alert correlation
- incidentNumber: the incident number of the source incident in Azure
- alertCount: will always be zero in this scenario, as the incident was emptied and its alerts were transferred to the target incident
- providerIncidentUrl: the URL of the target incident on Defender, where you can find the transferred alerts
- incidentUrl: the URL of the source incident (now closed) on Sentinel in Azure. Opening this URL takes you to the old incident, which contains a banner with the new target incident URL
- providerName: once you onboard your Microsoft Sentinel workspace to Defender, this field will always be Microsoft XDR, as all incidents (also Sentinel incidents) are created on Defender
- providerIncidentId: the target incident ID on Defender

Updates you could make

A logic you could implement is to have an automation rule that detects when an incident is updated with the tag "Redirected". This should trigger a Logic App that calls your ITSM system. On your ITSM system, make sure you are mapping the providerIncidentUrl (link to Defender). This is important for two reasons:
- The link on the ITSM system should take your analysts to Defender, as it contains all the details of the incident and the alerts.
- The API only references the URL of the target incident on Defender. Since you probably want to update the target incident on your ITSM system, you will need to find the providerIncidentUrl on your system.

Then:
- Close the source incident on the ITSM system
- Update the source incident on the ITSM system with a description/comment linking to the providerIncidentUrl (which is the URL of the target incident)
- On the target incident (providerIncidentUrl) on the ITSM system, add a description/comment mentioning that the incident absorbed the alerts from the former incident
- Potentially increase the severity of your target incident on the ITSM system

Other, more complex logics could include updating the owner.
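If you want to test or report on this condition before wiring up the automation rule, a rough sketch of the equivalent check over the SecurityIncident table might look like the following (column names follow the standard SecurityIncident schema, but verify them in your workspace):

```kusto
// A rough sketch: find Sentinel incidents recently closed by alert correlation
// (tagged "Redirected") so the matching records can be updated in the ITSM tool.
SecurityIncident
| where TimeGenerated > ago(1d)
| where Status == "Closed"
| where Labels has "Redirected"
| summarize arg_max(TimeGenerated, *) by IncidentNumber
| project IncidentNumber, Title, ProviderName, ProviderIncidentId, IncidentUrl
```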
Important! Bear in mind that:
- Incident provider is no longer a condition you can select on Defender because, once you onboard Microsoft Sentinel to Defender, the incident provider is always Microsoft Defender XDR. If you use this condition today, please change it to Alert product names instead (if you wish to make a distinction between different alert providers).
- The incident title is no longer predictable, as incident names can change through merging. If you use incident title as a condition, please change it to Analytic rule name and select the analytic rule that should trigger the automation.

You can use mariocuomo's script, which generates a report listing all automation rules that contain either of these conditions, so you can update them before transitioning to Defender: mariocuomo/Sentinel-Transition-To-Defender-Helper-Script: This repository contains a helper script developed to assist Sentinel customers in adopting Sentinel in Defender.

While other changes should not affect you in this logic, we recommend reading Configure automation rules and playbooks in Defender to understand what else changes.

Using the Microsoft Graph security API

If you are building a new integration, or if your previous integration needs multiple updates, we recommend building your integration on the Microsoft Graph security API. This is the API endpoint in the Microsoft Graph security API to get your incidents:

https://graph.microsoft.com/beta/security/incidents

If you want to get the alerts under your incidents as well, use this endpoint to get incidents and corresponding alerts:

https://graph.microsoft.com/beta/security/incidents?$expand=alerts

Important! Please note that alerts triggered by Sentinel analytic rules where an incident is not created are not visible on Defender, although they will be visible in the SecurityAlert table (either in the Microsoft Sentinel Azure portal, or on Defender in Advanced Hunting). This means that the Microsoft Graph security APIs cannot yet be used to submit those alerts to an ITSM system. If you wish to send alerts to an ITSM system, please use the Microsoft Sentinel REST APIs.

Now, let's have a look at the API response for an incident that was redirected. These are the relevant fields for you:
- id: the ID of the source incident on Defender
- status: will always be "redirected" in this scenario
- incidentWebUrl: the URL of the source incident on Defender, now redirected
- redirectIncidentId: the ID of the target incident on Defender
- lastModifiedBy: in this scenario you will always see Microsoft 365 Defender-AlertCorrelation

As you probably noticed, in the Graph API you will not find Azure Sentinel fields.

What would my logic look like?

Assuming you are using the Microsoft Graph security API, you are already streaming your Defender incidents to your ITSM system. If you want to update your logic to close source incidents whose alerts have been transferred to a target incident, your logic could look like this:
1. Run a Logic App every few minutes to check for incidents whose status changed to "redirected". You can do this through an API call such as:
https://graph.microsoft.com/v1.0/security/incidents?$filter=status eq 'redirected' and lastUpdateDateTime ge 2025-09-03T11:15:00Z (add the correct timestamp here)
2. Find those incidents on your ITSM system and update them:
- Close them on the ITSM system
- Add a description or comment with the new incident (redirectIncidentId)

You could make more changes, like going to the target incident on your ITSM system and noting there that it has absorbed alerts from a source incident. Remember you can add tags to your incidents on Defender too, such as the incident ID from your ITSM system.
This will make synchronization easier; you can refer to the customTags field for this purpose.

Special thanks to my colleagues NChristis, sagiyagen365 and BenjiSec for reviewing and contributing to this article!

Automate Security Workflows in Microsoft Sentinel with BlinkOps
Security teams are under increasing pressure to respond faster to threats while managing growing complexity across their environments. Microsoft Sentinel's elevated integration with BlinkOps helps address this challenge by enabling AI-powered, no-code automation that simplifies and accelerates security operations.

Introducing BlinkOps for Microsoft Sentinel

BlinkOps is a no-code security automation platform designed for security and platform operations teams. It allows users to build and scale workflows using natural language prompts and a library of over 30,000 pre-built actions. With BlinkOps, teams can automate incident response, compliance, and operational tasks—without writing a single line of code.

Now, with an enhanced integration with Microsoft Sentinel, BlinkOps enables customers to generate automated playbooks triggered by Sentinel alerts and incidents. This integration helps streamline threat response, reduce mean time to respond (MTTR), and improve operational efficiency.

Why BlinkOps?

Microsoft Sentinel customers can already leverage Sentinel's SOAR capabilities through Logic Apps. BlinkOps adds a set of additional capabilities for Microsoft Sentinel-powered SOC teams, including:
- AI-generated workflows: Create automation using natural language prompts.
- Pre-built content: Access a rich library of templates tailored to Sentinel use cases.
- No-code experience: Empower security analysts to build and manage workflows without engineering support.
- Scalability: Deploy automation across multiple tenants and environments with ease.

Key Use Cases

The BlinkOps connector for Microsoft Sentinel supports several high-impact scenarios:
- Automated response to alerts and incidents: Trigger sophisticated BlinkOps process workflows based on Sentinel signals to ensure swift, consistent action. Incorporate humans in interactive workflows so that automation is complemented by human judgment and decisions.
- Template-driven playbooks: Leverage curated templates for common SOC tasks.

Examples

Consider this scenario: a SOC team wants an automation to help manage the response to phishing alerts in Microsoft Sentinel. The SOC team starts in BlinkOps by prompting the system to create a workflow. In this case, a simple prompt is all it takes: "I would like an automation to respond to Phishing incidents in Microsoft Sentinel. We use Microsoft Security tooling (Teams, Defender, Entra etc.)"

1. BlinkOps Builder Prompt

BlinkOps then builds out a workflow for handling the phishing alert in a few seconds.

2. Building Workflow

A straightforward six-step set of actions is generated:

3. Phishing Workflow

Then, if the SOC team wants to refine or edit a specific workflow step, they can also use the BlinkOps builder AI to update individual steps—in this case, drafting the message to send to the broader security team.

Builder - Editing Action

Getting Started

To get started using BlinkOps and Microsoft Sentinel:
1. Visit https://www.blinkops.com/ to learn more about the platform.
2. Explore the BlinkOps connector in the Microsoft Sentinel Content Hub.
3. Use natural language to create your first workflow and start automating your SOC operations.

Table Talk: Sentinel's New ThreatIntel Tables Explained
Key updates

On April 3, 2025, we publicly previewed two new tables to support STIX (Structured Threat Information eXpression) indicator and object schemas: ThreatIntelIndicators and ThreatIntelObjects. To summarize the important dates:

- 31 August 2025: We previously announced that data ingestion into the legacy ThreatIntelligenceIndicator table would cease on 31 July 2025. This timeline has been extended, and the transition to the new ThreatIntelIndicators and ThreatIntelObjects tables will proceed gradually until the 31st of August 2025. The legacy ThreatIntelligenceIndicator table (and its data) will remain accessible, but no new data will be ingested there. Therefore, any custom content, such as workbooks, queries, or analytic rules, must be updated to reference the new tables to remain effective. If you require additional time to complete the transition, you may opt into dual ingestion, available until the official retirement on the 31st of May 2026, by submitting a service request. Update: The opt-in to dual ingestion ended on the 31st of August and is no longer available.
- 31 May 2026: ThreatIntelligenceIndicator table support will officially retire, along with ingestion for those who opted in to dual ingestion beyond the 31st of August 2025.

What's changing: ThreatIntelligenceIndicator vs. ThreatIntelIndicators and ThreatIntelObjects

Let's summarise some of the differences.

ThreatIntelligenceIndicator (legacy)
- Status: Extended data ingestion until the 31st of August 2025, with an opt-in for additional transition time. Deprecating on the 31st of May 2026 — no new data will be ingested after this date.
- Purpose: Originally used to store threat indicators like IPs, domains, file hashes, etc.
- Limitations: Less flexible schema. Limited support for STIX (Structured Threat Information eXpression) objects. Fewer contextual fields for advanced threat hunting.
- Use cases: Legacy structure for storing threat indicators. Migration note: all custom queries, workbooks, and analytics rules referencing this table must be updated to use the new tables.

ThreatIntelIndicators
- Status: Active and recommended for use.
- Purpose: Stores individual threat indicators (e.g. IPs, URLs, file hashes).
- Enhancements: Supports the STIX indicator schema. Includes a Data column with the full STIX object data for advanced hunting. More metadata fields (e.g. LastUpdateMethod, IsDeleted, ExpirationDateTime). Optimized ingestion: excludes empty key-value pairs and truncates long fields over 1,000 characters.
- Use cases: Ideal for identifying and correlating specific threat indicators. Threat hunting: enables hunting for specific Indicators of Compromise (IOCs) such as IP addresses, domains, URLs, and file hashes. Alerting and detection rules: can be used in KQL queries to match against telemetry from other tables (e.g. Heartbeat, SecurityEvent, Syslog). Example query correlating threat indicators with threat actors: Identify threat actors associated with specific threat indicators.

ThreatIntelObjects
- Status: Active and complementary to ThreatIntelIndicators.
- Purpose: Stores STIX objects that provide contextual information about indicators, for example threat actors, malware families, campaigns and attack patterns.
- Enhancements: Enables richer threat modelling and correlation. Includes fields like StixType, Data.name, and Data.id.
- Use cases: Useful for understanding relationships between indicators and broader threat entities (e.g. linking an IP to a known threat actor). Threat hunting: adds context by linking indicators to threat actors, malware families, campaigns, and attack patterns. Alerting and detection rules: enrich alerts with context like threat actor names or malware types. Example query listing TI objects related to a threat actor, "Sangria Tempest": List threat intelligence data related to a specific threat actor.
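To illustrate the kind of correlation these example queries refer to, here is a rough, hypothetical sketch (not the exact query from the linked documentation) that walks from a named threat actor through STIX relationship objects to the related indicators. It assumes relationship objects expose source_ref and target_ref inside the Data column; verify the column names against your workspace before relying on it.

```kusto
// Hypothetical sketch: link indicators to a named threat actor via STIX relationship objects.
let actorIds =
    ThreatIntelObjects
    | where StixType == "threat-actor"
    | where tostring(Data.name) == "Sangria Tempest"   // example actor name
    | distinct Id;
let relatedRefs =
    ThreatIntelObjects
    | where StixType == "relationship"
    | where tostring(Data.source_ref) in (actorIds)
    | extend TargetRef = tostring(Data.target_ref)
    | distinct TargetRef;
ThreatIntelIndicators
| where Id in (relatedRefs)
| project TimeGenerated, Id, ObservableKey, ObservableValue, Confidence
```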
Benefits of the new ThreatIntelIndicators and ThreatIntelObjects tables

In addition to what's mentioned in the comparison above, the main benefits of the new tables include:

Enhanced threat visibility
- More granular and complete representation of threat intelligence.
- Support for advanced hunting scenarios and complex queries.
- Attribution to threat actors and relationships.

Improved hunting capabilities
- Generic parsing of STIX patterns.
- Support for all valid STIX IoCs, Threat Actors, Identity, and Relationships.

Important considerations with the new TI tables

Higher volume of data being ingested:
- In the legacy ThreatIntelligenceIndicator table, only IoCs with Domain, File, URL, Email and Network sources were ingested.
- The new tables support a richer schema and more detailed data, which naturally increases ingestion volume. The Data column in both tables stores full STIX objects, which are often large and complex.
- Additional metadata fields (e.g. LastUpdateMethod, StixType, ObservableKey, etc.) increase the size of each record.
- Some fields like description and pattern are truncated if they exceed 1,000 characters, indicating the potential for large payloads.

More frequent republishing:
- Previously, threat intelligence data was republished over a 12-day cycle. Now, all data is republished every 7-10 days (depending on the volume), increasing the ingestion frequency and volume.
- This change ensures fresher data but also leads to more frequent ingestion events.
- Republishing is identifiable by LastUpdateMethod = "LogARepublisher" in the tables.

Optimising data ingestion

There are two mechanisms to optimise threat intelligence data ingestion and control costs.

Ingestion Rules
See ingestion rules in action: Introducing Threat Intelligence Ingestion Rules | Microsoft Community Hub
Sentinel supports ingestion rules that allow organizations to curate data before it enters the system. In addition, they enable you to:
- Apply bulk tagging, expiration extensions, and confidence-based filtering, which may increase ingestion if more indicators are retained or extended.
- Build custom workflows that may result in additional ingestion events (e.g. tagging or relationship creation).
- Reduce noise by filtering out irrelevant TI objects such as low-confidence indicators (e.g. drop IoCs with a confidence score of 0) and suppressing known false positives from specific feeds.
These rules act on TI objects before they are ingested into Sentinel, giving you control over what gets stored and analysed.

Data Collection Rules / data transformation
As mentioned above, the ThreatIntelIndicators and ThreatIntelObjects tables include a "Data" column which contains the full original STIX object and may or may not be relevant for your use cases. In this case, you can use a workspace transformation DCR to filter it out using a KQL query.
An example of this KQL query is shown below; for more examples of using workspace transformations and data collection rules, see: Data collection rules in Azure Monitor - Azure Monitor | Microsoft Learn

```kusto
source
| project-away Data
```

A few things to note:
- Your threat intelligence feeds will be sending the additional STIX object data and IoCs. If you prefer not to receive this additional TI data, you can modify the transformation to filter out data according to your use cases, as mentioned above. More examples are mentioned here: Work with STIX objects and indicators to enhance threat intelligence and threat hunting in Microsoft Sentinel (Preview) - Microsoft Sentinel | Microsoft Learn
- If you are using a data collection rule to make schema changes such as dropping fields, please make sure to modify the relevant Sentinel content (e.g. detection rules, workbooks, hunting queries, etc.) that uses the tables.
- There can be additional cost when using Azure Monitor data transformations (such as when adding extra columns or enrichments to incoming data); however, if Sentinel is enabled on the Log Analytics workspace, there is no filtering ingestion charge regardless of how much data the transformation filters.

New Threat Intelligence solution pack available

A new Threat Intelligence solution is now available in the Content Hub, providing out-of-the-box content referencing the new TI tables, including 51 detection rules, 5 hunting queries, 1 workbook, 5 data connectors and 1 parser for ThreatIntelIndicators. Please note, the previous Threat Intelligence solution pack will be deprecated and removed after the transition phase. We recommend downloading the new solution from the Content Hub as shown below:

Conclusion

The transition to the new ThreatIntelIndicators and ThreatIntelObjects tables provides enhanced support for STIX schemas, improved hunting and alerting features, and greater control over data ingestion, allowing organizations to get deeper visibility and more effective threat detection. To ensure continuity and maximize value, it's essential to update existing content and adopt the new Threat Intelligence solution pack available in the Content Hub.

Related content and references:
- Work with STIX objects and indicators to enhance threat intelligence and threat hunting in Microsoft Sentinel
- Curate Threat Intelligence using Ingestion Rules
- Announcing Public Preview: New STIX Objects in Microsoft Sentinel

How to: Ingest Splunk alert data to Microsoft Sentinel SIEM
Thanks to Javier Soriano, Principal Product Manager - OneSOC Customer Experience Engineering, for the peer review.

Introduction

Although the recommended approach is to not have multiple SIEM solutions in place, many organizations are still running dual-SIEM setups, sometimes even introducing additional ones into the mix. The combination most often seen is a legacy solution paired with a modern SIEM such as Microsoft Sentinel SIEM. Side-by-side architecture is recommended for just long enough to complete the migration, train people and update processes - but organizations might stay with the side-by-side configuration longer when they are not ready to move away from legacy solutions. In such situations, organizations usually opt for sending alerts from their legacy SIEM to Sentinel SIEM:
- Cloud data is ingested and analyzed in Sentinel SIEM
- On-premises data is ingested and analyzed in the legacy SIEM, which generates alerts
- Alerts are forwarded from the legacy SIEM to Sentinel SIEM

With this approach, SOC teams have a single interface where they can cross-correlate and investigate alerts from across their organizations while benefiting from the full value of unified security operations in Microsoft Defender.

Send Splunk alerts to Sentinel SIEM

Splunk side

When an alert is raised in Splunk, organizations have the option to set up the following alert actions:
- Email notification action
- Webhook alert action
- Output results to a CSV lookup
- Log events
- Monitor triggered alerts
- Send alerts and dashboards to Splunk Mobile users

Interestingly, it is possible to use webhooks to make an HTTP POST request to a URL. The data is in JSON format, which makes it easily consumable by Sentinel SIEM. For this to work, Splunk needs the hook URL from the target source (in this case, Sentinel SIEM).

```json
{
  "result": {
    "sourcetype": "mongod",
    "count": "8"
  },
  "sid": "scheduler_admin_search_W2_at_14232356_132",
  "results_link": "http://web.example.local:8000/app/search/@go?sid=scheduler_admin_search_W2_at_14232356_132",
  "search_name": null,
  "owner": "admin",
  "app": "search"
}
```

Example: Splunk alert JSON payload

Microsoft side

From the Microsoft perspective, organizations can take advantage of the Logs Ingestion API, which allows any application that can make a REST API call to send data to Sentinel SIEM. To configure the Logs Ingestion API, a few supporting resources are needed:
- A Microsoft Entra application which will authenticate against the API
- A custom table in a Log Analytics workspace, where the data will be sent and made accessible from Sentinel SIEM
- A Data Collection Rule (DCR) which will direct data to the target custom table
- RBAC on the DCR resource assigned to the Entra application from the first step
- A solution to call the Logs Ingestion API so the data can be sent to Sentinel SIEM
To make this process streamlined and easy to deploy, a solution has been developed which automates the creation of all of these supporting resources and gives you a webhook URL that can simply be placed in your Splunk solution: https://github.com/Azure/Azure-Sentinel/tree/master/Tools/SplunkAlertIngestion

Picture: Content of the solution

The script, with supporting ARM templates, can be run directly from Azure Cloud Shell and configured with a couple of parameters:

```powershell
./SplunkAlertIngestion.ps1 -ServicePrincipalName "" -tableName "" -workspaceResourceId "" -dataCollectionRuleName "" -location ""
```

- ServicePrincipalName - mandatory, define a name for the SP
- tableName - mandatory, define a name for the custom table with the suffix _CL (example: SplunkAlertInfo_CL)
- workspaceResourceId - mandatory, the resourceId can be fetched from the Log Analytics Workspace Properties blade (/subscriptions/xxx-xxx/resourceGroups/xxx/providers/Microsoft.OperationalInsights/workspaces/xxx)
- dataCollectionRuleName - mandatory, define the DCR name
- location - mandatory, define the Azure location for the resources (example: northeurope, eastus2)
- LogicAppName - not mandatory, define the name for the Logic App, otherwise it will be named SplunkAlertAutomationLogicApp

Result

The script will create all the supporting resources that are needed and will provide the webhook URL as an output. Use this URL to configure the trigger action in Splunk:

Picture: Instructions for configuring webhook alert action in Splunk

Once the webhook is configured on the Splunk side, any time an alert is raised it will trigger the webhook, which will initiate the Logic App resource on the Azure side responsible for parsing the data and sending it through the Logs Ingestion API to the destination table in Sentinel SIEM.

Picture: Workflow of the Logic App
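To confirm the end-to-end flow, a quick verification query against the custom table (assuming the SplunkAlertInfo_CL name suggested above; the exact column names depend on how the Logic App maps the Splunk payload) might look like this:

```kusto
// Illustrative verification query: confirm Splunk alert records are arriving in the custom table.
SplunkAlertInfo_CL
| where TimeGenerated > ago(1h)
| sort by TimeGenerated desc
| take 20
```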
Conclusion

Ingesting alert data from other solutions in your organization into Sentinel SIEM allows security teams to take advantage of unified security operations in Microsoft Defender - easier cross-correlation between various data sources, comprehensive threat intelligence and the case management experience.

Ingesting Akamai Audit Logs into Microsoft Sentinel using Azure Function Apps

Introduction

Akamai provides extensive audit logs that can be valuable for security monitoring and compliance. To integrate Akamai audit logs with Microsoft Sentinel, we can use Azure Function Apps to retrieve logs via the Akamai EdgeGrid API and send them to a Log Analytics workspace. In this guide, we will walk through deploying an Azure Function App that fetches Akamai audit logs and ingests them into Microsoft Sentinel.

Prerequisites

Before starting, ensure you have:
- An active Azure subscription with Microsoft Sentinel enabled.
- Akamai API credentials (EdgeGrid authentication: client_token, client_secret, and access_token).
- A Log Analytics Workspace (LAW) where logs will be ingested.
- An Azure Function App deployed via VS Code.
- Python installed locally (use VS Code for the local deployment).

High-Level Architecture

- The Azure Function App calls the Akamai API to fetch audit logs.
- Logs are parsed and sent to Microsoft Sentinel via the Log Analytics API, driven by a request to the Azure Function App.
- Scheduled execution ensures logs are fetched periodically.

Step 1: Create an Azure Function App

To deploy an Azure Function App via VS Code:

Install the Azure Functions extension for VS Code, then install Azure Core Tools:

```bash
npm install -g azure-functions-core-tools@4 --unsafe-perm true
```

Create a Python-based Function App:

```bash
func init AkamaiLogsFunction --python
cd AkamaiLogsFunction
func new --name FetchAkamaiLogs --template "HTTP trigger" --authlevel "anonymous"
```

Step 2: Install Required Python Packages

In your Function App directory, install the required dependencies:

```bash
pip install requests akamai.edgegrid
pip freeze > requirements.txt
```

Step 3: Configure Environment Variables

Instead of hardcoding API credentials, store them in the Azure Function App settings:
1. Go to Azure Portal > Function App.
2. Navigate to Configuration > Application settings.
3. Add the following environment variables:
- AKAMAI_CLIENT_TOKEN
- AKAMAI_CLIENT_SECRET
- AKAMAI_ACCESS_TOKEN
- WORKSPACE_ID (Log Analytics Workspace ID)
- SHARED_KEY (Log Analytics Shared Key)

Step 4: Implement the Azure Function Code

Create AkamaiLogFetcher.py with the following code:

```python
import azure.functions as func
import logging
import os

import requests
from akamai.edgegrid import EdgeGridAuth
from urllib.parse import urljoin

app = func.FunctionApp()

# Azure Function HTTP trigger
@app.function_name(name="AkamaiLogFetcher")
@app.route(route="fetchlogs", auth_level=func.AuthLevel.ANONYMOUS)
def fetch_logs(req: func.HttpRequest) -> func.HttpResponse:
    logging.info("Processing Akamai log fetch request...")

    # Akamai API credentials (move these to Azure App Settings for security)
    baseurl = 'https://YOURBASEHOSTURL.luna.akamaiapis.net/'
    client_token = os.getenv("AKAMAI_CLIENT_TOKEN", "xxxxxxxxxxxxxx")
    client_secret = os.getenv("AKAMAI_CLIENT_SECRET", "xxxxxxxxxxxxx")
    access_token = os.getenv("AKAMAI_ACCESS_TOKEN", "xxxxxxxxxxxxxx")

    # Initialize session with EdgeGrid authentication
    session = requests.Session()
    session.auth = EdgeGridAuth(
        client_token=client_token,
        client_secret=client_secret,
        access_token=access_token
    )

    try:
        # Call the Akamai API
        response = session.get(urljoin(baseurl, '/events/v3/events'))
        response.raise_for_status()  # Raise an error for HTTP errors

        # Return the response as JSON
        return func.HttpResponse(response.text, mimetype="application/json", status_code=response.status_code)

    except requests.exceptions.RequestException as e:
        logging.error(f"Error fetching logs: {e}")
        return func.HttpResponse(f"Failed to fetch logs: {str(e)}", status_code=500)
```

Step 5: Deploy the Function to Azure

Run the following command to deploy the function:

```bash
func azure functionapp publish <YourFunctionAppName>
```

Step 6: Setting Up the Logic App Workflow

1. Create a new Logic App in Azure:
- Navigate to the Azure Portal -> Logic Apps -> Create.
- Choose the Consumption plan and select your preferred region.
- Click Review + Create, then Create.
2. Add a recurrence trigger:
- Select Recurrence as the trigger.
- Configure it to run every 10 minutes.
3. Configure the HTTP action to fetch logs from the Function App API:
- Use the HTTP action in Logic Apps.
- Set the method to GET.
- Enter the Function App URL.
- Add the required headers (content type).
4. Parse the JSON response:
- Use the "Parse JSON" action to structure the response.
- Define the schema using a sample response from the Akamai audit logs.
5. Send logs to Microsoft Sentinel:
- Use the "Azure Log Analytics - Send Data" action.
- Map the Akamai audit log fields to the Log Analytics schema.
- Select the appropriate custom table in Log Analytics or use CommonSecurityLog.

JSON Request body for Send Logs trigger

The completed Logic App will look like this:

Step 7: Testing and Validation

1. Run a test execution of the Logic App.
2. Check the Logic App run history to ensure successful Function App calls and data ingestion.
3. Verify logs in Sentinel: navigate to Microsoft Sentinel -> Logs and run a KQL query:

```kusto
RadwareEvents_CL
| where TimeGenerated > ago(10m)
```

Summary

This guide demonstrated how to use Azure Function Apps and Logic Apps to fetch Akamai audit logs via API and send them to Microsoft Sentinel. The serverless approach ensures efficient log collection without requiring dedicated infrastructure.

Enhancing Security Monitoring: Integrating GitLab Cloud Edition with Microsoft Sentinel
Maximize your security operations by combining GitLab Cloud Edition with Microsoft Sentinel. This blog covers how to fill the gap left by the absence of a native GitLab connector in Sentinel. Utilize GitLab's API endpoints, Azure Monitor Data Collection Rules and Data Collection Endpoints, as well as Azure Logic Apps and Key Vault, to simplify log collection and improve immediate threat identification. Our detailed guide will help you integrate smoothly and strengthen your security defences.