How to: Ingest Splunk alert data to Microsoft Sentinel SIEM
Thanks to Javier Soriano, Principal Product Manager - OneSOC Customer Experience Engineering, for the peer review.

Introduction

Although the recommended approach is to avoid running multiple SIEM solutions, many organizations still operate dual-SIEM setups, sometimes with additional solutions in the mix. The most common combination pairs a legacy SIEM with a modern solution such as Microsoft Sentinel SIEM. A side-by-side architecture is recommended only for as long as it takes to complete the migration, train people, and update processes - but organizations may stay with the side-by-side configuration longer when they are not ready to move away from their legacy solutions. In such situations, organizations usually opt to send alerts from their legacy SIEM to Sentinel SIEM:

- Cloud data is ingested and analyzed in Sentinel SIEM
- On-premises data is ingested and analyzed in the legacy SIEM, which generates alerts
- Alerts are forwarded from the legacy SIEM to Sentinel SIEM

With this approach, SOC teams have a single interface where they can cross-correlate and investigate alerts from across their organization while benefiting from the full value of unified security operations in Microsoft Defender.

Send Splunk alerts to Sentinel SIEM

Splunk side

When an alert is raised in Splunk, organizations can configure the following alert actions:

- Email notification action
- Webhook alert action
- Output results to a CSV lookup
- Log events
- Monitor triggered alerts
- Send alerts and dashboards to Splunk Mobile Users

Notably, webhooks can be used to make an HTTP POST request to a URL. The data is in JSON format, which makes it easily consumable by Sentinel SIEM. For this to work, Splunk needs the hook URL from the target (in this case, Sentinel SIEM).

```json
{
  "result": {
    "sourcetype": "mongod",
    "count": "8"
  },
  "sid": "scheduler_admin_search_W2_at_14232356_132",
  "results_link": "http://web.example.local:8000/app/search/@go?sid=scheduler_admin_search_W2_at_14232356_132",
  "search_name": null,
  "owner": "admin",
  "app": "search"
}
```

Example: Splunk alert JSON payload

Microsoft side

From the Microsoft perspective, organizations can take advantage of the Logs Ingestion API, which allows any application that can make a REST API call to send data to Sentinel SIEM. To configure the Logs Ingestion API, a few supporting resources are needed:

- A Microsoft Entra application, which will authenticate against the API
- A custom table in a Log Analytics workspace, where the data will be sent and made accessible to Sentinel SIEM
- A Data Collection Rule (DCR), which will direct data to the target custom table
- An RBAC assignment for the Entra application on the DCR resource
- A solution that calls the Logs Ingestion API so the data can be sent to Sentinel SIEM (see the sketch below)
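Before looking at the packaged solution in the next section, it may help to see the moving parts. Here is a minimal sketch of such a caller in Python, using the azure-monitor-ingestion SDK; every endpoint, ID, and field name below is a placeholder rather than a value from this article:

```python
# Minimal sketch: push a Splunk-style alert record to Sentinel via the Logs Ingestion API.
# Assumes: pip install azure-identity azure-monitor-ingestion
# All endpoint/rule/stream values below are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.monitor.ingestion import LogsIngestionClient

# Data collection endpoint (DCE) URI and the DCR's immutable ID, taken from the Azure portal
ENDPOINT = "https://my-dce.westeurope-1.ingest.monitor.azure.com"
DCR_IMMUTABLE_ID = "dcr-00000000000000000000000000000000"
STREAM_NAME = "Custom-SplunkAlertInfo_CL"  # stream declared in the DCR

# Uses the Entra application credentials (e.g. via AZURE_CLIENT_ID/SECRET/TENANT_ID)
credential = DefaultAzureCredential()
client = LogsIngestionClient(endpoint=ENDPOINT, credential=credential)

# A record shaped loosely like the Splunk webhook payload shown above;
# the columns must match the schema declared for the stream in the DCR
splunk_alert = {
    "TimeGenerated": "2025-01-01T00:00:00Z",
    "SearchName": "scheduler_admin_search_W2",
    "Owner": "admin",
    "App": "search",
    "ResultCount": 8,
}

# upload() accepts a list of records matching the stream's declared schema
client.upload(rule_id=DCR_IMMUTABLE_ID, stream_name=STREAM_NAME, logs=[splunk_alert])
```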
In order to streamline this process and make it easy to deploy, a solution has been developed that automates the creation of all of these supporting resources and provides a webhook URL ready to be placed in your Splunk solution: https://github.com/Azure/Azure-Sentinel/tree/master/Tools/SplunkAlertIngestion

Picture: Content of the solution

The script, with its supporting ARM templates, can be run directly from the Azure Cloud Shell and configured with a few parameters:

```powershell
./SplunkAlertIngestion.ps1 -ServicePrincipalName "" -tableName "" -workspaceResourceId "" -dataCollectionRuleName "" -location ""
```

- ServicePrincipalName - mandatory, define a name for the service principal
- tableName - mandatory, define a name for the custom table with the suffix _CL (example: SplunkAlertInfo_CL)
- workspaceResourceId - mandatory, the resource ID can be fetched from the Log Analytics workspace Properties blade (/subscriptions/xxx-xxx/resourceGroups/xxx/providers/Microsoft.OperationalInsights/workspaces/xxx)
- dataCollectionRuleName - mandatory, define the DCR name
- location - mandatory, define the Azure location for the resources (example: northeurope, eastus2)
- LogicAppName - optional, define the name for the Logic App; otherwise it will be named SplunkAlertAutomationLogicApp

Result

The script creates all the supporting resources that are needed and provides the webhook URL as an output. Use this URL to configure the trigger action in Splunk:

Picture: Instructions for configuring webhook alert action in Splunk

Once the webhook is configured on the Splunk side, any time an alert is raised it will trigger the webhook, which initiates the Logic App on the Azure side responsible for parsing the data and sending it through the Logs Ingestion API to the destination table in Sentinel SIEM.

Picture: Workflow of the Logic App

Conclusion

Ingesting alert data from other solutions in your organization into Sentinel SIEM allows security teams to take advantage of unified security operations in Microsoft Defender - easier cross-correlation between various data sources, comprehensive threat intelligence, and a unified case management experience.

Microsoft Sentinel's AI-driven UEBA ushers in the next era of behavioral analytics
Co-author - Ashwin Patil

Security teams today face an overwhelming challenge: every data point is now a potential security signal, and SOCs are drowning in complex logs, trying to find the needle in the haystack. Microsoft Sentinel User and Entity Behavior Analytics (UEBA) brings the power of AI to automatically surface anomalous behaviors, helping analysts cut through the noise, save time, and focus on what truly matters.

Microsoft Sentinel UEBA has already helped SOCs uncover insider threats, detect compromised accounts, and reveal subtle attack signals that traditional rule-based methods often miss. These capabilities were previously powered by a core set of high-value data sources - such as sign-in activity, audit logs, and identity signals - that consistently delivered rich context and accurate detections.

Today, we're excited to announce a major expansion: Sentinel UEBA now supports six new data sources across Microsoft first- and third-party platforms, including Azure, AWS, GCP, and Okta, bringing deeper visibility, broader context, and more powerful anomaly detection tailored to your environment. This isn't just about ingesting more logs. It's about transforming how SOCs understand behavior, detect threats, and prioritize response. With this evolution, analysts gain a unified, cross-platform view of user and entity behavior, enabling them to correlate signals, uncover hidden risks, and act faster with greater confidence.

The newly supported data sources are built for real-world security use cases:

Authentication activities
- MDE DeviceLogonEvents - ideal for spotting lateral movement and unusual access.
- AADManagedIdentitySignInLogs - critical for spotting stealthy abuse of non-human identities.
- AADServicePrincipalSignInLogs - identifies anomalies in service principal usage, such as token theft or over-privileged automation.

Cloud platforms & identity management
- AWS CloudTrail Login Events - surfaces risky AWS account activity based on AWS CloudTrail ConsoleLogin events and logon-related attributes.
- GCP Audit Logs - Failed IAM Access - captures denied access attempts indicating reconnaissance, brute force, or privilege misuse in GCP.
- Okta MFA & Auth Security Change Events - flags MFA challenges, resets, and policy modifications that may reveal MFA fatigue, session hijacking, or policy tampering. Currently supports the Okta_CL table (unified Okta connector support coming soon).

These sources feed directly into UEBA's entity profiles and baselines - enriching users, devices, and service identities with behavioral context and anomalies that would otherwise be fragmented across platforms. They complement the existing supported log sources: Entra ID sign-in logs, Azure Activity logs, and Windows Security Events. Thanks to the unified schema across data sources, UEBA enables feature-rich investigation and the ability to correlate insights, anomalies, and more across data sources and across cross-platform identities or devices.

AI-powered UEBA that understands your environment

Microsoft Sentinel UEBA goes beyond simple log collection - it continuously learns from your environment. By applying AI models trained on your organization's behavioral data, UEBA builds dynamic baselines and peer groups, enabling it to spot truly anomalous activity. UEBA builds baselines over periods ranging from 10 days (for uncommon activities) to 6 months, both for the user and for their dynamically calculated peers.
Then, insights are surfaced on the activities and logs - such as an uncommon activity or a first-time activity - not only for the user but among peers. Those insights are used by an advanced AI model to identify high-confidence anomalies. So, if a user signs in for the first time from an uncommon location - a common pattern in an environment that relies on global vendors, for example - this will not be identified as an anomaly, keeping the noise down. In a tightly controlled environment, however, the same behavior can indicate an attack and will surface in the Anomalies table. Including those signals in custom detections can help adjust the severity of an alert - so, while the detection logic is maintained, the SOC stays focused on the right priorities.

How to use UEBA for maximum impact

Security teams can leverage UEBA in several key ways. All the examples below leverage UEBA's dynamic behavioral baselines, looking back up to 6 months. Teams can also leverage the hunting queries from the "UEBA essentials" solution in Microsoft Sentinel's Content Hub.

Behavior Analytics

Detect unusual logon times, MFA fatigue, or service principal misuse across hybrid environments. Get visibility into the geo-location of events and Threat Intelligence insights. Here's an example of how you can easily discover accounts authenticating without MFA from uncommonly connected countries using the UEBA BehaviorAnalytics table:

```kusto
BehaviorAnalytics
| where TimeGenerated > ago(7d)
| where EventSource == "AwsConsoleSignIn"
| where ActionType == "ConsoleLogin" and ActivityType == "signin.amazonaws.com"
| where ActivityInsights.IsMfaUsed == "No"
| where ActivityInsights.CountryUncommonlyConnectedFromInTenant == True
| evaluate bag_unpack(UsersInsights, "AWS_")
| where InvestigationPriority > 0 // Filter noise - comment out to see low-fidelity results
| project TimeGenerated, _WorkspaceId, ActionType, ActivityType, InvestigationPriority, SourceIPAddress, SourceIPLocation, AWS_UserIdentityType, AWS_UserIdentityAccountId, AWS_UserIdentityArn
```

Anomaly detection

Identify lateral movement, dormant account reactivation, or brute-force attempts, even when they span cloud platforms. Below is an example of how to discover anomalies from the new sources via various UEBA activity insights or device insights attributes:

```kusto
Anomalies
| where AnomalyTemplateName in (
    "UEBA Anomalous Logon in AwsCloudTrail",     // AWS CloudTrail anomalies
    "UEBA Anomalous MFA Failures in Okta_CL",
    "UEBA Anomalous Activity in Okta_CL",        // Okta anomalies
    "UEBA Anomalous Activity in GCP Audit Logs", // GCP failed IAM access anomalies
    "UEBA Anomalous Authentication"              // Authentication-related anomalies
)
| project TimeGenerated, _WorkspaceId, AnomalyTemplateName, AnomalyScore, Description, AnomalyDetails, ActivityInsights, DeviceInsights, UserInsights, Tactics, Techniques
```

Alert optimization

Use UEBA signals to dynamically adjust alert severity in custom detections - turning noisy alerts into high-fidelity detections. The example below shows all the users with anomalous sign-in patterns based on UEBA. Joining the results with any AWS alerts on the same AWS identity increases fidelity.
```kusto
BehaviorAnalytics
| where TimeGenerated > ago(7d)
| where EventSource == "AwsConsoleSignIn"
| where ActionType == "ConsoleLogin" and ActivityType == "signin.amazonaws.com"
| where ActivityInsights.FirstTimeConnectionViaISPInTenant == True or ActivityInsights.FirstTimeUserConnectedFromCountry == True
| evaluate bag_unpack(UsersInsights, "AWS_")
| where InvestigationPriority > 0 // Filter noise - comment out to see low-fidelity results
| project TimeGenerated, _WorkspaceId, ActionType, ActivityType, InvestigationPriority, SourceIPAddress, SourceIPLocation, AWS_UserIdentityType, AWS_UserIdentityAccountId, AWS_UserIdentityArn, ActivityInsights
| evaluate bag_unpack(ActivityInsights)
```

Another example shows anomalous key vault access from a service principal connecting from an uncommon source country. Joining this activity with other alerts from the same service principal increases the fidelity of those alerts.

```kusto
BehaviorAnalytics
| where TimeGenerated > ago(1d)
| where EventSource == "Authentication" and SourceSystem == "AAD"
| evaluate bag_unpack(ActivityInsights)
| where LogonMethod == "Service Principal" and Resource == "Azure Key Vault"
| where ActionUncommonlyPerformedByUser == "True" and CountryUncommonlyConnectedFromByUser == "True"
| where InvestigationPriority > 0
```

You can also join the UEBA Anomalous Authentication anomaly with other alerts from the same identity to bring the full power of UEBA into your detections, as sketched below.
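The sketch below illustrates one way to do that join. It is not taken from the UEBA solution itself, and it assumes that Anomalies rows populate UserPrincipalName and that alert account entities carry Name and UPNSuffix fields - adjust the keys to match your data:

```kusto
// Pair UEBA authentication anomalies with any alert on the same account (last 24 h).
let UebaAnomalies = Anomalies
    | where TimeGenerated > ago(1d)
    | where AnomalyTemplateName == "UEBA Anomalous Authentication"
    | where isnotempty(UserPrincipalName)
    | extend Upn = tolower(UserPrincipalName);
SecurityAlert
| where TimeGenerated > ago(1d)
| mv-expand Entity = todynamic(Entities)  // Entities is a JSON array of alert entities
| where tostring(Entity.Type) =~ "account" and isnotempty(Entity.UPNSuffix)
| extend Upn = tolower(strcat(tostring(Entity.Name), "@", tostring(Entity.UPNSuffix)))
| join kind=inner (UebaAnomalies) on Upn
| project AlertTime = TimeGenerated, AlertName, AlertSeverity,
          AnomalyTime = TimeGenerated1, AnomalyTemplateName, AnomalyScore, Upn
```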
Final thoughts

This release marks a new chapter for Sentinel UEBA - bringing together AI, behavioral analytics, and cross-cloud and identity-management visibility to help defenders stay ahead of threats. If you haven't explored UEBA yet, now's the time: enable it in your workspace settings, and don't forget to enable anomalies as well (in the Anomalies settings). And if you're already using it, these new sources will help you unlock even more value. Stay tuned for our upcoming Ninja show and webinar (register at aka.ms/secwebinars), where we'll dive deeper into use cases. Until then, explore the new sources, use the UEBA workbook, update your watchlists, and let UEBA do the heavy lifting.

- UEBA onboarding and settings documentation
- Identify threats using UEBA
- UEBA enrichments and insights reference
- UEBA anomalies reference

Ingesting Akamai Audit Logs into Microsoft Sentinel using Azure Function Apps

Introduction

Akamai provides extensive audit logs that are valuable for security monitoring and compliance. To integrate Akamai audit logs with Microsoft Sentinel, we can use an Azure Function App to retrieve logs via the Akamai EdgeGrid API and send them to a Log Analytics workspace. In this guide, we walk through deploying an Azure Function App that fetches Akamai audit logs and ingests them into Microsoft Sentinel.

Prerequisites

Before starting, ensure you have:

- An active Azure subscription with Microsoft Sentinel enabled.
- Akamai API credentials (EdgeGrid authentication: client_token, client_secret, and access_token).
- A Log Analytics workspace (LAW) where logs will be ingested.
- An Azure Function App deployed via VS Code.
- Python installed locally (use VS Code for local development).

High-Level Architecture

1. The Azure Function App calls the Akamai API to fetch audit logs.
2. A Logic App calls the Function App on a schedule, parses the results, and sends them to Microsoft Sentinel via the Log Analytics API.
3. Scheduled execution ensures logs are fetched periodically.

Step 1: Create an Azure Function App

To deploy an Azure Function App via VS Code:

1. Install the Azure Functions extension for VS Code.
2. Install Azure Functions Core Tools:

```bash
npm install -g azure-functions-core-tools@4 --unsafe-perm true
```

3. Create a Python-based Function App:

```bash
func init AkamaiLogsFunction --python
cd AkamaiLogsFunction
func new --name FetchAkamaiLogs --template "HTTP trigger" --authlevel "anonymous"
```

Step 2: Install Required Python Packages

In your Function App directory, install the required dependencies:

```bash
pip install requests edgegrid-python   # edgegrid-python provides the akamai.edgegrid module
pip freeze > requirements.txt
```

Step 3: Configure Environment Variables

Instead of hardcoding API credentials, store them in the Azure Function App settings:

1. Go to the Azure portal > Function App.
2. Navigate to Configuration > Application settings.
3. Add the following environment variables:
   - AKAMAI_CLIENT_TOKEN
   - AKAMAI_CLIENT_SECRET
   - AKAMAI_ACCESS_TOKEN
   - WORKSPACE_ID (Log Analytics workspace ID)
   - SHARED_KEY (Log Analytics shared key)

If you script your deployments, the same settings can be applied from the CLI, as sketched below.
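A sketch with the Azure CLI - the app and resource group names are hypothetical, and the secrets should be sourced from a secure store rather than typed inline:

```bash
# Hypothetical names - replace with your Function App and resource group.
az functionapp config appsettings set \
  --name AkamaiLogsFunction \
  --resource-group rg-akamai-ingest \
  --settings \
    AKAMAI_CLIENT_TOKEN="<client-token>" \
    AKAMAI_CLIENT_SECRET="<client-secret>" \
    AKAMAI_ACCESS_TOKEN="<access-token>" \
    WORKSPACE_ID="<log-analytics-workspace-id>" \
    SHARED_KEY="<log-analytics-shared-key>"
```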
Step 4: Implement the Azure Function Code

Create AkamaiLogFetcher.py with the following code:

```python
import azure.functions as func
import logging
import requests
from akamai.edgegrid import EdgeGridAuth
from urllib.parse import urljoin
import os

app = func.FunctionApp()

# Azure Function HTTP trigger
@app.function_name(name="AkamaiLogFetcher")
@app.route(route="fetchlogs", auth_level=func.AuthLevel.ANONYMOUS)
def fetch_logs(req: func.HttpRequest) -> func.HttpResponse:
    logging.info("Processing Akamai log fetch request...")

    # Akamai API credentials (move these to Azure App Settings for security)
    baseurl = 'https://YOURBASEHOSTURL.luna.akamaiapis.net/'
    client_token = os.getenv("AKAMAI_CLIENT_TOKEN", "xxxxxxxxxxxxxx")
    client_secret = os.getenv("AKAMAI_CLIENT_SECRET", "xxxxxxxxxxxxx")
    access_token = os.getenv("AKAMAI_ACCESS_TOKEN", "xxxxxxxxxxxxxx")

    # Initialize session with EdgeGrid authentication
    session = requests.Session()
    session.auth = EdgeGridAuth(
        client_token=client_token,
        client_secret=client_secret,
        access_token=access_token
    )

    try:
        # Call the Akamai API
        response = session.get(urljoin(baseurl, '/events/v3/events'))
        response.raise_for_status()  # Raise an error for HTTP errors

        # Return the response as JSON
        return func.HttpResponse(response.text, mimetype="application/json",
                                 status_code=response.status_code)
    except requests.exceptions.RequestException as e:
        logging.error(f"Error fetching logs: {e}")
        return func.HttpResponse(f"Failed to fetch logs: {str(e)}", status_code=500)
```

Step 5: Deploy the Function to Azure

Run the following command to deploy the function:

```bash
func azure functionapp publish <YourFunctionAppName>
```
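Before wiring up the Logic App, it is worth smoke-testing the deployed endpoint. A minimal check, assuming your Function App name from Step 1 and the anonymous fetchlogs route defined in the code above:

```bash
# Call the HTTP-triggered function directly; a JSON array of Akamai events indicates success.
curl "https://<YourFunctionAppName>.azurewebsites.net/api/fetchlogs"
```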
Step 6: Setting Up the Logic App Workflow

1. Create a new Logic App in Azure:
   - Navigate to the Azure portal -> Logic Apps -> Create.
   - Choose the Consumption plan and select your preferred region.
   - Click Review + Create, then Create.
2. Add a trigger:
   - Select Recurrence as the trigger.
   - Configure it to run every 10 minutes.
3. Configure the HTTP action to fetch logs from the Function App API:
   - Use the HTTP action in Logic Apps.
   - Set the method to GET.
   - Enter the Function App URL.
   - Add the required headers (content type).
4. Parse the JSON response:
   - Use the "Parse JSON" action to structure the response.
   - Define the schema using a sample response from the Akamai audit logs.
5. Send logs to Microsoft Sentinel:
   - Use the "Azure Log Analytics - Send Data" action.
   - Map the Akamai audit log fields to the Log Analytics schema.
   - Select the appropriate custom table in Log Analytics or use CommonSecurityLog.

Picture: JSON request body for the Send Data action

Picture: The completed Logic App

Step 7: Testing and Validation

1. Run a test execution of the Logic App.
2. Check the Logic App run history to ensure successful Function App calls and data ingestion.
3. Verify the logs in Sentinel: navigate to Microsoft Sentinel -> Logs and query the custom table you configured, for example:

```kusto
AkamaiEvents_CL // replace with the custom table name used in your Send Data action
| where TimeGenerated > ago(10m)
```

Summary

This guide demonstrated how to use Azure Function Apps and Logic Apps to fetch Akamai audit logs via API and send them to Microsoft Sentinel. The serverless approach ensures efficient log collection without requiring dedicated infrastructure.

Table Talk: Sentinel's New ThreatIntel Tables Explained

Key updates

On April 3, 2025, we publicly previewed two new tables to support STIX (Structured Threat Information eXpression) indicator and object schemas: ThreatIntelIndicators and ThreatIntelObjects. To summarize the important dates:

- 31 August 2025: We previously announced that data ingestion into the legacy ThreatIntelligenceIndicator table would cease on 31 July 2025. That timeline has been extended, and the transition to the new ThreatIntelIndicators and ThreatIntelObjects tables will proceed gradually until 31 August 2025. The legacy ThreatIntelligenceIndicator table (and its data) will remain accessible, but no new data will be ingested there. Therefore, any custom content - such as workbooks, queries, or analytic rules - must be updated to reference the new tables to remain effective. If you require additional time to complete the transition, you may opt into dual ingestion, available until the official retirement on 21 May 2026, by submitting a service request.
- 31 May 2026: ThreatIntelligenceIndicator table support officially retires, along with ingestion for those who opted into dual ingestion beyond 31 August 2025.

What's changing: ThreatIntelligenceIndicator vs. ThreatIntelIndicators and ThreatIntelObjects

Let's summarize the differences.

| | ThreatIntelligenceIndicator (legacy) | ThreatIntelIndicators | ThreatIntelObjects |
|---|---|---|---|
| Status | Extended data ingestion until 31 August 2025; opt-in available for additional transition time. Deprecating on 31 May 2026 - no new data ingested after this date. | Active and recommended for use. | Active and complementary to ThreatIntelIndicators. |
| Purpose | Originally used to store threat indicators such as IPs, domains, and file hashes. | Stores individual threat indicators (e.g. IPs, URLs, file hashes). | Stores STIX objects that provide contextual information about indicators; examples: threat actors, malware families, campaigns, attack patterns. |
| Characteristics | Limitations: less flexible schema; limited support for STIX (Structured Threat Information eXpression) objects; fewer contextual fields for advanced threat hunting. | Enhancements: supports the STIX indicator schema; includes a Data column with the full STIX object for advanced hunting; more metadata fields (e.g. LastUpdateMethod, IsDeleted, ExpirationDateTime); optimized ingestion that excludes empty key-value pairs and truncates fields over 1,000 characters. | Enhancements: enables richer threat modelling and correlation; includes fields such as StixType, Data.name, and Data.id. |
| Use cases | Legacy structure for storing threat indicators. Migration note: all custom queries, workbooks, and analytics rules referencing this table must be updated to use the new tables. | Ideal for identifying and correlating specific threat indicators. Threat hunting: enables hunting for specific Indicators of Compromise (IOCs) such as IP addresses, domains, URLs, and file hashes. Alerting and detection rules: can be used in KQL queries to match against telemetry from other tables (e.g. Heartbeat, SecurityEvent, Syslog). Example query: Identify threat actors associated with specific threat indicators. | Useful for understanding relationships between indicators and broader threat entities (e.g. linking an IP to a known threat actor). Threat hunting: adds context by linking indicators to threat actors, malware families, campaigns, and attack patterns. Alerting and detection rules: enrich alerts with context such as threat actor names or malware types. Example query: List threat intelligence data related to a specific threat actor ("Sangria Tempest"). |
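To illustrate the example use cases in the last row, here is a sketch that joins the two tables to attribute indicators to threat actors. It is illustrative rather than lifted from the product documentation, and it assumes that STIX relationship objects link an indicator's id (Data.source_ref) to a threat actor's id (Data.target_ref), and that your feeds populate columns such as ObservableValue and Confidence:

```kusto
// Map active indicators to the threat actors they are attributed to (hedged sketch).
let Actors = ThreatIntelObjects
    | where StixType == "threat-actor"
    | project ActorStixId = tostring(Data.id), ActorName = tostring(Data.name);
let Relationships = ThreatIntelObjects
    | where StixType == "relationship"
    | project IndicatorStixId = tostring(Data.source_ref),
              ActorStixId = tostring(Data.target_ref);
ThreatIntelIndicators
| where IsDeleted == false
| extend IndicatorStixId = tostring(Data.id)
| join kind=inner (Relationships) on IndicatorStixId
| join kind=inner (Actors) on ActorStixId
| project TimeGenerated, ObservableKey, ObservableValue, Confidence, ActorName
```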
Benefits of the new ThreatIntelIndicators and ThreatIntelObjects tables

In addition to what's covered in the table above, the main benefits of the new tables include:

Enhanced threat visibility
- More granular and complete representation of threat intelligence.
- Support for advanced hunting scenarios and complex queries.
- Enables attribution to threat actors and relationships.

Improved hunting capabilities
- Generic parsing of STIX patterns.
- Support for all valid STIX IoCs, Threat Actors, Identity, and Relationships.

Important considerations with the new TI tables

Higher volume of data being ingested:
- In the legacy ThreatIntelligenceIndicator table, only IoCs with Domain, File, URL, Email, and Network sources were ingested.
- The new tables support a richer schema and more detailed data, which naturally increases ingestion volume. The Data column in both tables stores full STIX objects, which are often large and complex.
- Additional metadata fields (e.g. LastUpdateMethod, StixType, ObservableKey) increase the size of each record.
- Some fields, like description and pattern, are truncated if they exceed 1,000 characters, indicating the potential for large payloads.

More frequent republishing:
- Previously, threat intelligence data was republished over a 12-day cycle. Now, all data is republished every 7-10 days (depending on volume), increasing ingestion frequency and volume.
- This change ensures fresher data but also leads to more frequent ingestion events.
- Republishing is identifiable by LastUpdateMethod == "LogARepublisher" in the tables.

Optimising data ingestion

There are two mechanisms to optimise threat intelligence data ingestion and control costs.

Ingestion Rules

See ingestion rules in action: Introducing Threat Intelligence Ingestion Rules | Microsoft Community Hub

Sentinel supports ingestion rules that allow organizations to curate data before it enters the system. They enable:
- Bulk tagging, expiration extensions, and confidence-based filtering, which may increase ingestion if more indicators are retained or extended.
- Custom workflows that may result in additional ingestion events (e.g. tagging or relationship creation).
- Noise reduction by filtering out irrelevant TI objects, such as low-confidence indicators (e.g. dropping IoCs with a confidence score of 0) or suppressing known false positives from specific feeds.

These rules act on TI objects before they are ingested into Sentinel, giving you control over what gets stored and analysed.

Data Collection Rules / data transformation

As mentioned above, the ThreatIntelIndicators and ThreatIntelObjects tables include a Data column, which contains the full original STIX object and may or may not be relevant for your use cases. If it is not, you can use a workspace transformation DCR to filter it out with a KQL query.
An example of this KQL transformation is shown below. For more examples of workspace transformations and data collection rules, see Data collection rules in Azure Monitor - Azure Monitor | Microsoft Learn.

```kusto
source
| project-away Data
```

A few things to note:
- Your threat intelligence feeds will keep sending the additional STIX object data and IoCs; if you prefer not to store this additional TI data, you can adjust the transformation to filter it out according to your use cases, as mentioned above. More examples are available here: Work with STIX objects and indicators to enhance threat intelligence and threat hunting in Microsoft Sentinel (Preview) - Microsoft Sentinel | Microsoft Learn
- If you are using a data collection rule to make schema changes, such as dropping fields, make sure to update the relevant Sentinel content (e.g. detection rules, workbooks, hunting queries) that uses these tables.
- There can be additional cost when using Azure Monitor data transformations (such as adding extra columns or enrichments to incoming data); however, if Sentinel is enabled on the Log Analytics workspace, there is no filtering ingestion charge regardless of how much data the transformation filters.
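If you want to go further than dropping the Data column, the same transformation can also gate on confidence. A hedged sketch - the threshold is hypothetical, and it assumes your feeds populate the Confidence column:

```kusto
// Hedged sketch of a workspace transformation DCR query:
// keep only indicators above a confidence floor, then drop the raw STIX payload.
source
| where Confidence >= 25   // hypothetical threshold - tune to your feeds
| project-away Data
```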
New Threat Intelligence solution pack available

A new Threat Intelligence solution is now available in the Content Hub, providing out-of-the-box content that references the new TI tables, including 51 detection rules, 5 hunting queries, 1 workbook, 5 data connectors, and 1 parser for ThreatIntelIndicators. Please note that the previous Threat Intelligence solution pack will be deprecated and removed after the transition phase. We recommend downloading the new solution from the Content Hub as shown below:

Conclusion

The transition to the new ThreatIntelIndicators and ThreatIntelObjects tables provides enhanced support for STIX schemas, improved hunting and alerting features, and greater control over data ingestion, allowing organizations to gain deeper visibility and more effective threat detection. To ensure continuity and maximize value, it's essential to update existing content and adopt the new Threat Intelligence solution pack available in the Content Hub.

Related content and references:
- Work with STIX objects and indicators to enhance threat intelligence and threat hunting in Microsoft Sentinel
- Curate Threat Intelligence using Ingestion Rules
- Announcing Public Preview: New STIX Objects in Microsoft Sentinel

Monthly news - September 2025

Microsoft Defender Monthly news - September 2025 Edition

This is our monthly "What's new" blog post, summarizing product updates and various new assets we released over the past month across our Defender products. In this edition, we look at all the goodness from August 2025. Defender for Cloud has its own Monthly News post; have a look at their blog space.

New Virtual Ninja Show episodes:
- Announcing Microsoft Sentinel data lake.
- Inside the new Phishing Triage Agent in Security Copilot.

Microsoft Defender

Public Preview items in advanced hunting:
- The new CloudStorageAggregatedEvents table is now available and brings aggregated storage activity logs - such as operations, authentication details, access sources, and success/failure counts - from Defender for Cloud into a single, queryable schema.
- You can now investigate Microsoft Defender for Cloud behaviors. For more information, see Investigate behaviors with advanced hunting.
- The IdentityEvents table contains information about identity events obtained from other cloud identity service providers.
- You can now enrich your custom detection rules in advanced hunting by creating dynamic alert titles and descriptions, selecting more impacted entities, and adding custom details to display in the alert side panel. Microsoft Sentinel customers onboarded to Microsoft Defender also now have the option to customize the alert frequency when the rule is based only on data ingested into Sentinel.
- The number of query results displayed in the Microsoft Defender portal has been increased to 100,000.

(General Availability) In advanced hunting, you can now view all your user-defined rules - both custom detection rules and analytics rules - on the Detection rules page. This feature also brings the following improvements:
- You can now filter on every column (in addition to Frequency and Organizational scope).
- For multi-workspace organizations that have onboarded multiple workspaces to Microsoft Defender, you can now view the Workspace ID column and filter by workspace.
- You can now view the details pane even for analytics rules.
- You can now perform the following actions on analytics rules: turn on/off, delete, edit.

(General Availability) Defender Experts for XDR and Defender Experts for Hunting customers can now expand their service coverage to include server and cloud workloads protected by Defender for Cloud through the respective add-ons, Microsoft Defender Experts for Servers and Microsoft Defender Experts for Hunting - Servers. Learn more.

(General Availability) Defender Experts for XDR customers can now incorporate third-party network signals for enrichment, allowing our security analysts to gain a more comprehensive view of an attack's path for faster and more thorough detection and response, and providing customers with a more holistic view of the threat in their environments.

(General Availability) The Sensitivity label filter is now available in the Incidents and Alerts queues in the Microsoft Defender portal. This filter lets you filter incidents and alerts based on the sensitivity label assigned to the affected resources. For more information, see Filters in the incident queue and Investigate alerts.

(Public Preview) Suggested prompts for incident summaries. Suggested prompts enhance the incident summary experience by automatically surfacing relevant follow-up questions based on the most crucial information in a given incident. With a single click, you can request deeper insight
(e.g. device details, identity information, threat intelligence) and obtain plain-language summaries from Security Copilot. This intuitive, interactive experience simplifies investigations and speeds up access to critical insights, empowering you to focus on key priorities and accelerate threat response.

Microsoft Defender for Endpoint
- (Public Preview) Multi-tenant endpoint security policy distribution is now in public preview. Defender for Endpoint security policies can now be distributed across multiple tenants from the Defender multi-tenant portal.
- (Public Preview) Custom installation path support for Defender for Endpoint on Linux is available in public preview.
- (Public Preview) Offline security intelligence update support for Defender for Endpoint on macOS is in public preview.

Microsoft Defender for Identity
- (Public Preview) Entra ID risk level is now available on the Identity Inventory assets page, on the identity details page, and in the IdentityInfo table in advanced hunting, and includes the Entra ID risk score. SOC analysts can use this data to correlate risky users with sensitive or highly privileged users, create custom detections based on current or historical user risk, and improve investigation context.
- (Public Preview) Defender for Identity now includes a new security assessment that helps you identify and remove inactive service accounts in your organization. This assessment lists Active Directory service accounts that have been inactive (stale) for the past 180 days, helping you mitigate the security risks associated with unused accounts. For more information, see Security Assessment: Remove Inactive Service Accounts.
- (Public Preview) A new Graph-based API is now in preview for initiating and managing remediation actions in Defender for Identity. For more information, see Managing response actions through Graph API.
- (General Availability) Identity scoping is now generally available across all environments. Organizations can now define and refine the scope of Defender for Identity monitoring, gaining granular control over which entities and resources are included in security analysis. For more information, see Configure scoped access for Microsoft Defender for Identity.
- (Public Preview) A new security posture assessment highlights unsecured Active Directory attributes that contain passwords or credential clues and recommends steps to remove them, helping reduce the risk of identity compromise. For more information, see Security Assessment: Remove discoverable passwords in Active Directory account attributes.
- Detection update: Suspected Brute Force attack (Kerberos, NTLM). Improved detection logic now includes scenarios where accounts were locked during attacks. As a result, the number of triggered alerts might increase.

Microsoft Defender for Office 365
- SecOps can now dispute Microsoft's verdict on previously submitted email or URLs when they believe the result is incorrect. Disputing an item links back to the original submission and triggers a reevaluation with full context and audit history. Learn more.

Microsoft Security Blogs
- Dissecting PipeMagic: Inside the architecture of a modular backdoor framework - a comprehensive technical deep dive on PipeMagic, a highly modular backdoor used by Storm-2460 that masquerades as a legitimate open-source ChatGPT desktop application.
- Think before you Click(Fix): Analyzing the ClickFix social engineering technique - the ClickFix social engineering technique has been growing in popularity, with campaigns targeting thousands of enterprise and end-user devices daily.
- Storm-0501's evolving techniques lead to cloud-based ransomware - financially motivated threat actor Storm-0501 has continuously evolved its campaigns, sharpening its focus on cloud-based tactics, techniques, and procedures (TTPs).

Planning your move to Microsoft Defender portal for all Microsoft Sentinel customers
In November 2023, Microsoft announced our strategy to unify security operations by bringing the best of XDR and SIEM together. Our first step was bringing Microsoft Sentinel into the Microsoft Defender portal, giving teams a single, comprehensive view of incidents, reducing queue management, enriching threat intel, streamlining response, and enabling SOC teams to take advantage of generative AI in their day-to-day workflow. Since then, considerable progress has been made, with thousands of customers using this new unified experience; to enhance the value customers gain when using Sentinel in the Defender portal, multi-tenancy and multi-workspace support was added to help customers with more sophisticated deployments.

Our mission is to unify security operations by bringing all your data, workflows, and people together to unlock new capabilities and drive better security outcomes. As a strong example of this, last year we added extended posture management, delivering powerful posture insights to the SOC team. This integration helps build a closed-loop feedback system between your pre- and post-breach efforts. Exposure Management is just one example. By bringing everything together, we can take full advantage of AI and automation to shift from a reactive to a predictive SOC that anticipates threats and proactively takes action to defend against them. Beyond Exposure Management, Microsoft has been constantly innovating in the Defender experience, adding not just SIEM but also Security Copilot. The Sentinel experience within the Defender portal is the focus of our innovation energy and where we will continue to add advanced Sentinel capabilities going forward.

Onboarding to the new unified experience is easy and doesn't require a typical migration - just a few clicks and permissions. Customers can continue to use Sentinel in the Azure portal while it is available, even after choosing to transition. Today, we're announcing that we are moving to the next phase of the transition, with a target to retire the Azure portal for Microsoft Sentinel by July 1, 2026. Customers not yet using the Defender portal should plan their transition accordingly.

"Really amazing to see that coming, because cross querying with tables in one UI is really cool! Amazing, big step forward to the unified [Defender] portal." - Glueckkanja AG

"The biggest benefit of a unified security operations solution (Microsoft Sentinel + Microsoft Defender XDR) has been the ability to combine data in Defender XDR with logs from third party security tools. Another advantage developed has been to eliminate the need to switch between Defender XDR and Microsoft Sentinel portals, now having a single pane of glass, which the team has been wanting for some years." - Robel Kidane, Group Information Security Manager, Renishaw PLC

Delivering the SOC of the future

Unifying threat protection, exposure management, and security analytics capabilities in one pane of glass not only streamlines the user experience but also enables Sentinel customers to realize security outcomes more efficiently:

- Analyst efficiency: a single portal reduces context switching, simplifies workflows, reduces training overhead, and improves team agility.
- Integrated insights: SOC-focused case management, threat intelligence, incident correlation, advanced hunting, exposure management, and a prioritized incident queue enriched with business and sensitivity context - enabling faster, more informed detection and response across all products.
- SOC optimization: security controls that can be adjusted as threats and business priorities change, to control costs and provide better coverage and utilization of data, thus maximizing ROI from the SIEM.
- Accelerated response: AI-driven detection and response that reduces mean time to respond (MTTR) by 30%, increases security response efficiency by 60%, and enables embedded generative AI and agentic workflows.

What's next: preparing for the retirement of the Sentinel experience in the Azure portal

Microsoft is committed to supporting every single customer in making the transition over the next 12 months. Beginning July 1, 2026, Sentinel users will be automatically redirected to the Defender portal. After helping thousands of customers smoothly make the transition, we recommend that security teams begin planning their migration and change management now to ensure continuity and avoid disruption. While the technical process is very straightforward, we have found that early preparation allows time for workflow validation, training, and process alignment to take full advantage of the new capabilities and experience.

Tips for a successful migration to Microsoft Defender

1. Leverage Microsoft's help: use Microsoft documentation, instructional videos, guidance, and in-product support to help you succeed. A good starting point is the documentation on Microsoft Learn.
2. Plan early: engage stakeholders early - including SOC and IT security leads, MSSPs, and compliance teams - to align on timing, training, and organizational needs. Make sure you have an actionable timeline and agreement in the organization on when to prioritize this transition, so you can access the full potential of the new experience.
3. Prepare your environment: plan and design your environment thoroughly. This includes understanding the prerequisites for onboarding Microsoft Sentinel workspaces, reviewing and deciding on access controls, and planning the architecture of your tenant and workspace. Proper planning will ensure a smooth transition and help avoid any disruption to your security operations.
4. Leverage advanced threat detection: the Defender portal offers enhanced threat detection capabilities with advanced AI and machine learning for Microsoft Sentinel. Make sure to leverage these features for faster and more accurate threat detection and response. This will help you identify and address critical threats promptly, improving your overall security posture.
5. Utilize unified hunting and incident management: take advantage of the enhanced hunting, incident, and investigation capabilities in Microsoft Defender, which provide a comprehensive view for more efficient threat detection and response. By consolidating all security incidents, alerts, and investigations into a single unified interface, you can streamline your operations and improve efficiency.
6. Optimize cost and data management: the Defender portal offers cost and data optimization features, such as SOC optimization and summary rules. Make sure to utilize these features to optimize your data management, reduce costs, and increase coverage and SIEM ROI. This will help you manage your security operations more effectively and efficiently.

Unleash the full potential of your security team

The unified SecOps experience available in the Defender portal is designed to support the evolving needs of modern SOCs. The Defender portal is not just a new home for Microsoft Sentinel - it's a foundation for integrated, AI-driven security operations.
We're committed to helping you make this transition smoothly and confidently. If you haven't already joined the thousands of security organizations that have done so, now is the time to begin.

Resources
- AI-Powered Security Operations Platform | Microsoft Security
- Microsoft Sentinel in the Microsoft Defender portal | Microsoft Learn
- Shifting your Microsoft Sentinel Environment to the Defender Portal | Microsoft Learn
- Microsoft Sentinel is now in Defender | YouTube

Share Your Expertise: Help Shape Our Network Practitioner Community
Hello Azure network practitioners,

We're working on refining our understanding of network practitioner personas and building stronger community engagement strategies for networking practitioners. Your insights as an MVP are invaluable to this effort.

Could you take a few minutes to complete this short survey? Your feedback will directly influence how we design future programs and resources for the community.

👉 https://forms.office.com/r/dfgXxNwQd9

Thank you for helping us make the Azure networking community even better!

Best regards,
Dan
Product Marketing Manager, Identity & Network Access Growth

Microsoft Sentinel's New Data Lake: Cut Costs & Boost Threat Detection
Microsoft Sentinel is leveling up! Already a trusted cloud-native Security Information and Event Management (SIEM) and Security Orchestration, Automation and Response (SOAR) solution, it empowers security teams to detect, investigate, and respond to threats with speed and precision. Now, with the introduction of its new Data Lake architecture, Sentinel is transforming how security data is stored, accessed, and analyzed, bringing unmatched flexibility and scale to threat investigation.

Unlike Microsoft Fabric OneLake, which supports analytics across the organization, Sentinel's Data Lake is purpose-built for security. It centralizes raw structured, semi-structured, and unstructured data in its original format, enabling advanced analytics without rigid schemas.

This article is written by someone who's spent years helping security teams navigate Microsoft's evolving ecosystem, translating complex capabilities into practical strategies. What follows is a hands-on look at the key features, benefits, and challenges of Sentinel's Data Lake, designed to help you make the most of this powerful new architecture.

Current Sentinel Features

To tackle the challenges security teams face today - explosive data growth, integration of varied sources, and tight compliance requirements - organizations need scalable, efficient architectures. Legacy SIEMs often become costly and slow when analyzing multi-year data or correlating diverse events. Security data lakes address these issues by enabling seamless ingestion of logs from any source, schema-on-read flexibility, and parallelized queries over massive datasets. Schema-on-read allows SOC analysts to define how data is interpreted at the time of analysis, rather than when it is stored. This means analysts can flexibly adapt queries and threat-detection logic to evolving threats, without reformatting historical data, making investigations more agile and responsive to change. This empowers security operations to conduct deep historical analysis, automate enrichment, and apply advanced analytics, such as machine learning, while retaining strict control over data access and residency. Ultimately, decoupling storage and compute allows teams to boost detection and response speed, maintain compliance, and adapt their Security Operations Center (SOC) to future security demands.

As organizations manage increasing data volumes with limited budgets, many are moving from legacy SIEMs to advanced cloud-native options. Microsoft Sentinel's Data Lake separates storage from compute, offering scalable, cost-effective analytics and compliance. For instance, storing 500 TB of logs in Sentinel Data Lake can cut costs by 60-80% compared to Log Analytics, due to lower storage costs and flexible retention. Integration with modern tools and open formats enables efficient threat response and regulatory compliance.
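To make the "deep historical analysis" point concrete: because lake-tier data remains queryable with KQL, a year-long IoC sweep can be expressed as an ordinary query. A hedged sketch, assuming CommonSecurityLog firewall data is retained in the data lake tier and using a placeholder documentation IP; run it from the data lake exploration (KQL) experience:

```kusto
// Sweep a year of firewall logs for a known-bad destination IP (placeholder value).
CommonSecurityLog
| where TimeGenerated > ago(365d)
| where DestinationIP == "203.0.113.7"
| summarize Hits = count(), FirstSeen = min(TimeGenerated), LastSeen = max(TimeGenerated)
    by SourceIP, DeviceVendor
| order by Hits desc
```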
Microsoft Sentinel data lake pricing (preview)

Sentinel Data Lake Use Cases

- Log retention: long-term retention of security logs for compliance and forensic investigations
- Hunting: advanced threat hunting using historical data
- Interoperability: integration with Microsoft Fabric and other analytics platforms
- Cost: efficient storage pricing for high-volume data sources

How Microsoft Sentinel Data Lake Helps

Microsoft Sentinel's Data Lake introduces a powerful paradigm shift for security operations by architecting the separation of storage and compute, enabling organizations to achieve petabyte-scale data retention without the traditional overhead and cost penalties of legacy SIEM solutions. Built atop highly scalable, cloud-native infrastructure, Sentinel Data Lake empowers SOCs to ingest telemetry from virtually unlimited sources - on-premises firewalls, proxies, and endpoint logs as well as SaaS, IaaS, and PaaS environments - while leveraging schema-on-read, a method that allows analysts to define how data is interpreted at query time rather than when it is stored, offering greater flexibility in analytics. For example, a security analyst can adapt the way historical data is examined as new threats emerge, without needing to reformat or restructure the data stored in the Data Lake.

From Microsoft Learn - Retention and data tiering

Storing raw security logs in open formats like Parquet (a columnar storage file format optimized for efficient data compression and retrieval, commonly used in big-data processing frameworks like Apache Spark and Hadoop) enables easy integration with analytics tools and Microsoft Fabric, letting analysts efficiently query historical data using KQL, SQL, or Spark. This approach eliminates the need for complex ETL and archived-data rehydration, making incident response faster; for instance, a SOC analyst can quickly search years of firewall logs for threat detection.

From Microsoft Learn - Flexible querying with Kusto Query Language

Granular data governance and access controls allow organizations to manage sensitive information and meet legal requirements. Storing raw security logs in open formats enables fast investigation of incidents spanning long-term data, while automated lifecycle management reduces costs and ensures compliance. The Data Lake integrates with Microsoft platforms and other tools for unified analytics and security, and machine learning helps detect unusual login activity across years of data, overcoming previous storage limitations.

From Microsoft Learn - Powerful analytics using Jupyter notebooks

Pros and Cons

The following table highlights the advantages and potential trade-offs of Microsoft Sentinel Data Lake. It follows the same Pay-As-You-Go pricing model currently available with Sentinel.
| Pros | Cons | License needed |
|---|---|---|
| Scalable, cost-effective long-term retention of security data | Requires adaptation to a new architecture | Pay-As-You-Go model |
| Seamless integration with Microsoft Fabric and open data formats | Initial setup and integration may involve a learning curve | Pay-As-You-Go model |
| Efficient processing of petabyte-scale datasets | Transitioning existing workflows may require planning | Pay-As-You-Go model |
| Advanced analytics, threat hunting, and AI/ML across historical data | Some features may depend on integration with other services | Pay-As-You-Go model |
| Supports compliance use cases with robust data governance and audit trails | Complexity in new data-governance features | Pay-As-You-Go model |

The Microsoft Sentinel Data Lake solution advances cloud-native security by overcoming traditional SIEM limitations, allowing organizations to better retain, analyze, and respond to security data. As cyber threats grow, Sentinel Data Lake offers flexible, cost-efficient storage for long-term retention, supporting detection, compliance, and audits without significant expense or complexity.

Quick Guide: Deploy Microsoft Sentinel Data Lake

1. Assess needs: identify your security data volume, retention, and compliance requirements - Sentinel Data Lake Overview.
2. Prepare environment: ensure Azure permissions and workspace readiness - Onboarding Guide.
3. Enable Data Lake: use the Azure CLI or Defender portal to activate - Setup Instructions.
4. Ingest & import data: connect sources and migrate historical logs - Microsoft Sentinel Data Connectors.
5. Integrate analytics: use KQL, notebooks, and Microsoft Fabric for scalable analysis - Fabric Overview.
6. Train & optimize: educate your team and monitor performance - Best Practices.

About the Author

Hi! Jacques "Jack" here, a Microsoft Technical Trainer at Microsoft. I wanted to share this because it's something I'm often asked about during my security trainings. It builds on Microsoft Sentinel's already impressive feature stack, helping the Defender community secure their environments in this ever-more-hostile world. I've been working with Microsoft Sentinel since September 2019 and have been teaching learners about this SIEM since March 2020. I also have experience using Security Copilot and Security AI Agents, which have been effective in improving my incident response and compromise-recovery times.