Your Sentinel AMA Logs & Queries Are Public by Default — AMPLS Architectures to Fix That
When you deploy Microsoft Sentinel, security log ingestion travels over public Azure Data Collection Endpoints by default. The connection is encrypted, and the data arrives correctly — but the endpoint is publicly reachable, and so is the workspace itself, queryable from any browser on any network. For many organisations, that trade-off is fine. For others — regulated industries, healthcare, financial services, critical infrastructure — it is exactly the problem they need to solve. Azure Monitor Private Link Scope (AMPLS) is how you solve it.

What AMPLS Actually Does

AMPLS is a single Azure resource that wraps your monitoring pipeline and controls two settings:

- Where logs are allowed to go (ingestion mode: Open or PrivateOnly)
- Where analysts are allowed to query from (query mode: Open or PrivateOnly)

Change those two settings and you fundamentally change the security posture — not as a policy recommendation, but as hard platform enforcement. Set ingestion to PrivateOnly and the public endpoint stops working. It does not fall back gracefully; it returns an error. That is the point. It is not a firewall rule someone can bypass or a policy someone can override — the control is baked in at the infrastructure level.

Three Patterns — One Spectrum

There is no universally correct answer. The right architecture depends on your organisation's risk appetite, existing network infrastructure, and how much operational complexity your team can realistically manage. These three patterns cover the full range:

Architecture 1 — Open / Public (Basic)

No AMPLS. Logs travel to public Data Collection Endpoints over the internet, and the workspace is open to queries from anywhere. This is the default — operational in minutes with zero network setup. Cloud service connectors (Microsoft 365, Defender, third-party) work immediately because they are server-side API/Graph pulls and are unaffected by AMPLS. Azure Monitor Agents and Azure Arc agents handle ingestion from cloud or on-premises machines over the public network.

Simplicity: 9/10 | Security: 6/10
Good for: Dev environments, teams getting started, low-sensitivity workloads

Architecture 2 — Hybrid: Private Ingestion, Open Queries (Recommended for Most)

AMPLS is in place and ingestion is locked to PrivateOnly — logs from virtual machines travel through a Private Endpoint inside your own network, never touching a public route. On-premises or hybrid machines connect through Azure Arc over VPN or a dedicated circuit and feed into the same private pipeline. Query access stays open, so analysts can work from anywhere without needing a VPN or jumpbox to reach the Sentinel portal — the investigation workflow stays flexible while the log ingestion path is fully ring-fenced. You can also split ingestion mode per DCE if you need some sources public and some private. This is the architecture most organisations land on as their steady state.

Simplicity: 6/10 | Security: 8/10
Good for: Organisations with mixed cloud and on-premises estates that need private ingestion without restricting analyst access
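If you adopt this hybrid pattern, it is worth verifying that every agent survived the cutover to PrivateOnly ingestion, since a machine that cannot reach the Private Endpoint simply stops reporting. A minimal KQL sketch, assuming the cutover happened within the last 24 hours (adjust the window to your change record):

```kusto
// Machines whose last heartbeat predates the PrivateOnly cutover; these are
// the usual suspects for missing Private DNS zone links or endpoint routing.
// The 24-hour cutover window is an assumption: adjust it to your change record.
let cutover = ago(24h);
Heartbeat
| where TimeGenerated > ago(14d)
| summarize LastBeat = max(TimeGenerated) by Computer
| where LastBeat < cutover
| order by LastBeat asc
```

Any machine this returns was heartbeating before the switch and has gone silent since, which usually points at a missing Private DNS zone link or a network path that cannot reach the Private Endpoint.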
Architecture 3 — Fully Private (Maximum Control)

Infrastructure is essentially identical to Architecture 2 — AMPLS, Private Endpoints, Private DNS zones, VPN or a dedicated circuit, Azure Arc for on-premises machines. The single difference: query mode is also set to PrivateOnly. Analysts can only reach Sentinel from inside the private network, so a VPN or jumpbox is required to access the portal. Both the pipe that carries logs in and the channel analysts use to read them are fully contained within the defined boundary. This is the right choice when your organisation needs to demonstrate — not just claim — that security data never moves outside a defined network perimeter.

Simplicity: 2/10 | Security: 10/10
Good for: Organisations with strict data boundary requirements (regulated industries, audit, compliance mandates)

Quick Reference — Which Pattern Fits?

| Scenario | Architecture |
| --- | --- |
| Getting started / low-sensitivity workloads | Arch 1 — no network setup, public endpoints accepted |
| Private log ingestion, analysts work anywhere | Arch 2 — AMPLS PrivateOnly ingestion, query mode open |
| Both ingestion and queries must be fully private | Arch 3 — same as Arch 2, plus query mode set to PrivateOnly |

One thing all three share: Microsoft 365, Entra ID, and Defender connectors work in every pattern — they are server-side pulls by Sentinel and are not affected by your network posture.
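For organisations that need to demonstrate rather than claim their posture, the two AMPLS access modes can be read directly off the resource with an Azure Resource Graph query (also KQL, run from Azure Resource Graph Explorer). A sketch, assuming the accessModeSettings property path from the privateLinkScopes resource schema; verify it against the API version in your environment:

```kusto
// Audit evidence: every AMPLS in scope with its ingestion and query access modes.
resources
| where type =~ "microsoft.insights/privatelinkscopes"
| extend modes = properties.accessModeSettings
| project name, resourceGroup,
    ingestionAccessMode = tostring(modes.ingestionAccessMode),
    queryAccessMode = tostring(modes.queryAccessMode)
```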
Please feel free to reach out if you have any questions regarding the information provided.

Understand New Sentinel Pricing Model with Sentinel Data Lake Tier

Introduction to Sentinel and Its New Pricing Model

Microsoft Sentinel is a cloud-native Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) platform that collects, analyzes, and correlates security data from across your environment to detect threats and automate response. Traditionally, Sentinel stored all ingested data in the Analytics tier (Log Analytics workspace), which is powerful but expensive for high-volume logs. To reduce cost and enable customers to retain all security data without compromise, Microsoft introduced a new dual-tier pricing model consisting of the Analytics tier and the Data Lake tier. The Analytics tier continues to support fast, real-time querying and analytics for core security scenarios, while the new Data Lake tier provides very low-cost storage for long-term retention and high-volume datasets. Customers can now choose where each data type lands — analytics for high-value detections and investigations, and data lake for large or archival types — allowing organizations to significantly lower cost while still retaining all their security data for analytics, compliance, and hunting.

The flow diagram below depicts the new Sentinel pricing model.

Now let's walk through the new pricing model with three scenarios:

- Scenario 1A (Pay-As-You-Go)
- Scenario 1B (Usage Commitment)
- Scenario 2 (Data Lake Tier Only)

Scenario 1A (Pay-As-You-Go)

Requirement: Suppose you need to ingest 10 GB of data per day and must retain that data for 2 years, but you will only frequently use, query, and analyze the data for the first 6 months.

Solution: To optimize cost, ingest the data into the Analytics tier and retain it there for the first 6 months, where active querying and investigation happen. The remaining 18 months of retention can then shift to the Data Lake tier, which provides low-cost storage for compliance and auditing needs. Note that you will be charged separately for Data Lake tier querying and analytics, depicted as Compute (D) in the pricing flow diagram.

Pricing Flow / Notes:

- The first 10 GB/day ingested into the Analytics tier is free for 31 days under the Analytics logs plan.
- All data ingested into the Analytics tier is automatically mirrored to the Data Lake tier at no additional ingestion or retention cost.
- For the first 6 months, you pay only for Analytics tier ingestion and retention, excluding any free capacity.
- For the next 18 months, you pay only for Data Lake tier retention, which is significantly cheaper.

Azure Pricing Calculator Equivalent: Assuming no data is queried or analyzed during the 18-month Data Lake tier retention period — although the Analytics tier retention is set to 6 months, the first 3 months fall under the free retention limit, so retention charges apply only to the remaining 3 months of the analytics retention window. The Azure pricing calculator adjusts accordingly.
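Whichever scenario you are costing, the inputs start with your real GB/day. A minimal KQL sketch against the standard Usage table, which reports billable volume in MB:

```kusto
// Average billable ingestion per day over the last 31 days, in GB.
Usage
| where TimeGenerated > ago(31d)
| where IsBillable == true
| summarize DailyGB = sum(Quantity) / 1024.0 by bin(TimeGenerated, 1d)
| summarize AvgDailyGB = round(avg(DailyGB), 2)
```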
Scenario 1B (Usage Commitment)

Now suppose you are ingesting 100 GB per day. Under the pay-as-you-go model described above, your estimated cost would be approximately $15,204 per month. You can reduce this by choosing a Commitment Tier, where Analytics tier ingestion is billed at a discounted rate. Note that the discount applies only to Analytics tier ingestion — it does not apply to Analytics tier retention costs or to any Data Lake tier–related charges. Please refer to the pricing flow and the equivalent pricing calculator results below.

Monthly cost savings: $15,204 – $11,184 = $4,020 per month

Now the question is: what happens if your usage reaches 150 GB per day? Will the additional 50 GB be billed at the pay-as-you-go rate? No — the entire 150 GB/day will still be billed at the discounted rate of the 100 GB/day commitment tier bucket.

Azure Pricing Calculator Equivalent (100 GB/day)
Azure Pricing Calculator Equivalent (150 GB/day)

Scenario 2 (Data Lake Tier Only)

Requirement: Suppose you need to store certain audit or compliance logs amounting to 10 GB per day. These logs are not used for querying, analytics, or investigations on a regular basis, but must be retained for 2 years under your organization's compliance or forensic policies.

Solution: Since these logs are not actively analyzed, avoid ingesting them into the Analytics tier, which is more expensive and optimized for active querying. Instead, send them directly to the Data Lake tier, where they can be retained cost-effectively for future audit, compliance, or forensic needs.

Pricing Flow: Because the data is ingested directly into the Data Lake tier, you pay both ingestion and retention costs there for the entire 2-year period. If at any point you need to perform advanced analytics, querying, or search, you will incur additional compute charges based on actual usage. Even with occasional compute charges, the cost remains significantly lower than storing the same data in the Analytics tier.

Realized Savings:

| Scenario | Cost per month |
| --- | --- |
| Scenario 1: 10 GB/day in Analytics tier | $1,520.40 |
| Scenario 2: 10 GB/day directly into Data Lake tier | $202.20 (without compute); $257.20 (with sample compute price) |

Savings with no compute activity: $1,520.40 – $202.20 = $1,318.20 per month
Savings with some compute activity (sample value): $1,520.40 – $257.20 = $1,263.20 per month

Azure calculator equivalent without compute
Azure calculator equivalent with sample compute
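To decide which tables stay in the Analytics tier and which can go straight to the data lake, a per-table breakdown of the same Usage data helps; high-volume tables you rarely query are the natural data lake candidates. A hedged sketch:

```kusto
// Monthly billable volume by table, largest first.
Usage
| where TimeGenerated > ago(31d)
| where IsBillable == true
| summarize MonthlyGB = round(sum(Quantity) / 1024.0, 2) by DataType
| order by MonthlyGB desc
```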
Conclusion

The combination of the Analytics tier and the Data Lake tier in Microsoft Sentinel enables organizations to optimize cost based on how their security data is used. High-value logs that require frequent querying, real-time analytics, and investigation can be stored in the Analytics tier, which provides powerful search performance and built-in detection capabilities. At the same time, large-volume or infrequently accessed logs — such as audit, compliance, or long-term retention data — can be directed to the Data Lake tier, which offers dramatically lower storage and ingestion costs. Because all Analytics tier data is automatically mirrored to the Data Lake tier at no extra cost, customers can use the Analytics tier only for the period they actively query data and rely on the Data Lake tier for the remaining retention. This tiered model lets each scenario — active investigation, archival storage, compliance retention, or large-scale telemetry ingestion — be handled at the most cost-effective layer, delivering substantial savings without sacrificing visibility, retention, or future analytical capability.

Introducing the New Microsoft Sentinel Logstash Output Plugin (Public Preview!)

Many organizations rely on Logstash as a flexible, trusted data pipeline for collecting, transforming, and forwarding logs from on-premises and hybrid environments. Microsoft Sentinel has long supported a Logstash output plugin, enabling customers to send data directly into Sentinel as part of their existing pipelines. The original plugin was implemented in Ruby, and while it has served its purpose, it no longer meets Microsoft's Secure Future Initiative (SFI) standards and has limited engineering support. To address both security and sustainability, we have rebuilt the plugin from the ground up in Java — a language that is more secure, better supported across Microsoft, and aligned with long-term platform investments. To ensure a seamless transition, the new implementation is still packaged and distributed as a standard Logstash Ruby gem, so the installation and usage experience remains unchanged while benefiting from a more secure and maintainable foundation.

What's New in This Version

- Java-based and SFI-compliant: the same Logstash plugin experience, now rebuilt on a stronger foundation. The fully Java-based implementation aligns with Microsoft's Secure Future Initiative (SFI) and provides improved security, supportability, and long-term maintainability.
- Modern, DCR-based ingestion: the plugin now uses the Azure Monitor Logs Ingestion API with Data Collection Rules (DCRs), replacing the legacy HTTP Data Collector API (for more info, see Migrate from the HTTP Data Collector API to the Log Ingestion API - Azure Monitor | Microsoft Learn). This gives customers full schema control, enables custom log tables, and supports ingestion into standard Microsoft Sentinel tables as well as Microsoft Sentinel data lake.
- Flexible authentication options: authentication is determined automatically from your configuration, with support for client secret (app registration / service principal) and managed identity, which eliminates the need to store credentials in configuration files.
- Sovereign cloud support: the plugin supports Azure sovereign clouds, including Azure US Government, Azure China, and Azure Germany.
- Standard Logstash distribution model: the plugin is published on RubyGems.org, the standard distribution channel for Logstash plugins, and can be installed directly with the Logstash plugin manager — no change to your existing installation workflow.

What the Plugin Does

A Logstash pipeline operates in three stages: Input → Filter → Output.

- Input: you control how data enters the pipeline, using sources such as syslog, Filebeat, Kafka, Event Hubs, databases (via JDBC), files, and more.
- Filter: you enrich and transform events using Logstash's powerful filtering ecosystem, including plugins like grok, mutate, and json, shaping data to match your security and operational needs.
- Output: this is where Microsoft comes in. The Microsoft Sentinel Logstash output plugin securely sends your processed events to an Azure Monitor Data Collection Endpoint, where they are ingested into Sentinel via a Data Collection Rule (DCR).

With this model, you retain full control over your Logstash pipeline and data processing logic, while the Sentinel plugin provides a secure, reliable path to ingest data into Microsoft Sentinel.

Getting Started

Prerequisites:

- Logstash installed and running
- An Azure Monitor Data Collection Endpoint (DCE) and Data Collection Rule (DCR) in your subscription
- Contributor role on your Log Analytics workspace
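Once events are flowing, a quick way to confirm the end-to-end path is to query the DCR's destination table in the workspace. A minimal sketch; Custom_Logstash_CL is a hypothetical table name, so substitute the actual table your DCR stream targets:

```kusto
// Confirm events from the Logstash pipeline are arriving, and how fresh they are.
// Custom_Logstash_CL is a placeholder for your DCR's destination table.
Custom_Logstash_CL
| where TimeGenerated > ago(1h)
| summarize Events = count(), Latest = max(TimeGenerated)
```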
Who Is This For?

Organizations that already have Logstash pipelines, need to collect from on-premises or legacy systems, and operate in distributed or hybrid environments, including air-gapped networks.

To learn more, see: microsoft-sentinel-log-analytics-logstash-output-plugin on RubyGems.org.
How to Ingest Microsoft Intune Logs into Microsoft Sentinel

For many organizations using Microsoft Intune to manage devices, integrating Intune logs into Microsoft Sentinel is essential for security operations — it incorporates device telemetry into the SIEM. By routing Intune's device management and compliance data into your central SIEM, you gain a unified view of endpoint events and can set up alerts on critical Intune activities, e.g. devices falling out of compliance or policy changes. This unified monitoring helps security and IT teams detect issues faster, correlate Intune events with other security logs for threat hunting, and improve compliance reporting. We're publishing these best practices to help unblock common customer challenges in configuring Intune log ingestion. In this step-by-step guide, you'll learn how to send Intune logs to Microsoft Sentinel so you can fully leverage Intune data for enhanced security and compliance visibility.

Prerequisites and Overview

Before configuring log ingestion, ensure the following prerequisites are in place:

- Microsoft Sentinel-enabled workspace: a Log Analytics workspace with Microsoft Sentinel enabled. For information on setting up a workspace and onboarding Microsoft Sentinel, see: Onboard Microsoft Sentinel - Log Analytics workspace overview. Microsoft Sentinel is now available in the Defender portal; connect your Microsoft Sentinel workspace to the Defender portal: Connect Microsoft Sentinel to the Microsoft Defender portal - Unified security operations.
- Intune Administrator permissions: you need appropriate rights to configure Intune diagnostic settings. For information, see: Microsoft Entra built-in roles - Intune Administrator.
- Log Analytics Contributor role: the account configuring diagnostics should have permission to write to the Log Analytics workspace. For more information on the different roles and what they can do, see Manage access to log data and workspaces in Azure Monitor.
- Intune diagnostic logging enabled: ensure that Intune diagnostic settings are configured to send logs to Azure Monitor / Log Analytics, and that devices and users are enrolled in Intune so that relevant management and compliance events are generated. For more information, see: Send Intune log data to Azure Storage, Event Hubs, or Log Analytics.

Configure Intune to Send Logs to Microsoft Sentinel

1. Sign in to the Microsoft Intune admin center. Select Reports > Diagnostics settings. If it's your first time here, you may be prompted to "Turn on" diagnostic settings for Intune; enable it if so. Then click "+ Add diagnostic setting" to create a new setting.
2. Select Intune log categories. In the "Diagnostic setting" configuration page, give the setting a name (e.g. "Microsoft Sentinel Intune Logs Demo"). Under Logs to send, you'll see checkboxes for each Intune log category. Select the categories you want to forward; for comprehensive monitoring, check AuditLogs, OperationalLogs, DeviceComplianceOrg, and Devices. Each selected log category is sent to a corresponding table in the Microsoft Sentinel workspace.
3. Configure destination details — Microsoft Sentinel workspace. Under Destination details on the same page, select your Azure subscription, then select the Microsoft Sentinel workspace.
4. Save the diagnostic setting. After you click Save, the Microsoft Intune logs will be streamed to four tables in the Analytics tier. For Analytics tier pricing, see: Plan costs and understand pricing and billing. Once saved, you can check ingestion with the query below this list.
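Before walking through the portals in the next section, a single query can confirm that all four tables exist and are receiving data. A minimal sketch; isfuzzy lets the union succeed even if one of the tables has not been created yet:

```kusto
// Record count and freshness per Intune table.
union isfuzzy=true IntuneAuditLogs, IntuneOperationalLogs,
    IntuneDeviceComplianceOrg, IntuneDevices
| summarize Records = count(), LastRecord = max(TimeGenerated) by Type
```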
Verify Data in Microsoft Sentinel

After configuring Intune to send diagnostic data to a Microsoft Sentinel workspace, it's crucial to verify that the Intune logs are actually flowing in. You can do this by checking specific Intune log tables in the Microsoft Defender portal or in the Azure portal. The key tables to verify are IntuneAuditLogs, IntuneOperationalLogs, IntuneDeviceComplianceOrg, and IntuneDevices.

Microsoft Defender portal (unified):

1. Open Advanced Hunting: sign in to https://security.microsoft.com (the unified portal) and navigate to Advanced Hunting. This opens the unified query editor, where you can search across Microsoft Defender data and any connected Sentinel data.
2. Find the Intune tables: in the Advanced Hunting schema pane (on the left side of the query editor), scroll down past the Microsoft Sentinel tables. Under the LogManagement section, look for IntuneAuditLogs, IntuneOperationalLogs, IntuneDeviceComplianceOrg, and IntuneDevices.

Microsoft Sentinel in Defender portal – Tables

Azure portal (Microsoft Sentinel):

1. Navigate to Logs: sign in to https://portal.azure.com and open Microsoft Sentinel. Select your Sentinel workspace, then click Logs (under General).
2. Find the Intune tables: in the Logs query editor, you'll see a schema/tables list on the left; if it's collapsed, click >> to expand it. Scroll down to LogManagement, expand it, and look for the Intune-related tables: IntuneAuditLogs, IntuneOperationalLogs, IntuneDeviceComplianceOrg, and IntuneDevices.

Microsoft Sentinel in Azure portal – Tables

Querying Intune log tables in Sentinel: once the tables are present, use Kusto Query Language (KQL) in either portal to view and analyze Intune data.

In the Defender portal, on the Advanced Hunting page, ensure the query editor is visible (select New query if needed) and run a simple KQL query such as:

```kusto
IntuneDevices | take 5
```

Click Run query to display sample Intune device records. If results are returned, Intune data is being ingested successfully. Note that querying Microsoft Sentinel data in the unified Advanced Hunting view requires at least the Microsoft Sentinel Reader role.

In the Azure Logs blade, run the same query and select Run to view a table of sample Intune device data. If results appear, your Intune logs are being collected successfully. You can select any record to view full event details and use KQL to explore or filter further — for example, by querying IntuneDeviceComplianceOrg to identify devices that are not compliant, adjusting the query as needed.

Once Microsoft Intune logs are flowing into Microsoft Sentinel, the real value comes from transforming that raw device and audit data into actionable security signals. To achieve this, set up detection rules that continuously analyze the Intune logs and automatically flag risky or suspicious behavior. In practice, this means creating custom detection rules in the Microsoft Defender portal (part of the unified XDR experience), see [https://learn.microsoft.com/en-us/defender-xdr/custom-detection-rules], and scheduled analytics rules in Microsoft Sentinel (in either the Azure portal or the unified Defender portal interface), see [Create scheduled analytics rules in Microsoft Sentinel | Microsoft Learn].
These detection rules continuously monitor your Intune telemetry — tracking device compliance status, enrollment activity, and administrative actions — and raise alerts whenever they detect suspicious or out-of-policy events. For example, you can be alerted if a large number of devices fall out of compliance, if an unusual spike in enrollment failures occurs, or if an Intune policy is modified by an unexpected account. Each alert generated by these rules becomes an incident in Microsoft Sentinel (and in the Defender portal's unified incident queue), enabling your security team to investigate and respond through the standard SOC workflow. This converts raw Intune log data into high-value security insights: proactive detection of potential issues, faster investigation by pivoting on the enriched Intune data in each incident, and even automated response across your endpoints (for instance, triggering playbooks or other automated remediation actions when an alert fires).

Use this detection logic to create a detection rule:

```kusto
IntuneDeviceComplianceOrg
| where TimeGenerated > ago(24h)
| where ComplianceState != "Compliant"
// Count non-compliance events per device across the 24-hour window.
| summarize NonCompliantCount = count() by DeviceName
| where NonCompliantCount > 3
```

Additional Tips: after confirming data ingestion and setting up alerts, you can leverage other Microsoft Sentinel features to get more value from your Intune logs. For example:

- Workbooks for visualization: create custom workbooks to build dashboards for Intune data (or check whether community-contributed Intune workbooks are available). This helps you monitor device compliance trends and Intune activity visually.
- Hunting and queries: use advanced hunting (KQL queries) to proactively search Intune logs for suspicious activity or trends. The unified Defender portal's Advanced Hunting page can query both Sentinel (Intune logs) and Defender data together, enabling correlation across Intune and other security data. For instance, you might join Intune device data with Azure AD sign-in logs to investigate a device associated with risky sign-ins; see the sketch after this list.
- Incident management: leverage Sentinel's Incidents view (in the Azure portal) or the unified incidents queue in the Defender portal to investigate alerts triggered by your new rules. Incidents created in either portal appear in the connected portal, so your security operations team can manage Intune-related alerts like any other security incident.
- Built-in rules and content: Microsoft Sentinel provides many built-in analytics rule templates and Content Hub solutions. While there isn't a native pre-built Intune content pack as of now, you can use general Sentinel features to monitor Intune data.
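As a concrete version of the hunting tip above, here is a hedged sketch that joins risky sign-ins to the same compliance table the detection rule uses. The DeviceDetail parsing and the risk-level filter are assumptions; adjust both to your schema and risk policy:

```kusto
// Risky sign-ins from devices that most recently reported as non-compliant.
SigninLogs
| where TimeGenerated > ago(7d)
| where RiskLevelDuringSignIn in ("medium", "high")
| extend DeviceName = tostring(DeviceDetail.displayName)
| where isnotempty(DeviceName)
| join kind=inner (
    IntuneDeviceComplianceOrg
    | where TimeGenerated > ago(7d)
    | summarize arg_max(TimeGenerated, ComplianceState) by DeviceName
    | where ComplianceState != "Compliant"
) on DeviceName
| project TimeGenerated, UserPrincipalName, DeviceName, ComplianceState, RiskLevelDuringSignIn
```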
Frequently Asked Questions

If you've set everything up but don't see logs in Sentinel, run through these checks:

1. Check diagnostic settings: go to the Microsoft Intune admin center → Reports → Diagnostic settings. Make sure the setting is turned on and sending the right log categories to the correct Microsoft Sentinel workspace.
2. Confirm the right workspace: double-check that the correct Azure subscription and Microsoft Sentinel workspace are selected. If you have multiple tenants or directories, make sure you're in the right one.
3. Verify permissions.
4. Make sure logs are being generated: if no devices are enrolled or no actions have been taken, there may be nothing to log yet. Try enrolling a device or changing a policy to trigger logs.
5. Check your queries: make sure you're querying the correct workspace and time range in Microsoft Sentinel. Try a direct query like:

```kusto
IntuneAuditLogs | take 5
```

Still nothing? Try deleting and re-adding the diagnostic setting. Most issues come down to permissions or selecting the wrong workspace.

How long are Intune logs retained, and how can I keep them longer? The Analytics tier keeps data in the interactive retention state for 90 days by default, extensible to up to two years. Interactive retention is the more expensive state, but it allows unlimited, high-performance queries at no per-query charge. See: Log retention tiers in Microsoft Sentinel.

We hope this helps you successfully connect your resources and ingest Intune logs into Microsoft Sentinel end to end. If you have any questions, leave a comment below or reach out to us on X @MSFTSecSuppTeam!
Issue connecting Azure Sentinel GitHub app to Sentinel Instance when IP allow list is enabled

Hi everyone, I'm running into an issue connecting the Azure Sentinel GitHub app to my Sentinel workspace in order to create our CI/CD pipelines for our detection rules, and I'm hoping someone can point me in the right direction.

Symptoms:
- When configuring the GitHub connection in Sentinel, the repository dropdown does not populate. There are no explicit errors, but the connection clearly isn't completing.
- If I disable my organization's IP allow list, everything works as expected and the repos appear immediately.
- I've seen that some GitHub Apps automatically add the IP ranges they require to an organization's allow list. However, from what I can tell, the Azure Sentinel GitHub app does not have this capability and requires manual allow listing instead.

What I've tried / researched:
- Reviewed Microsoft documentation for Sentinel ↔ GitHub integrations
- Looked through Azure IP range and Service Tag documentation
- I've seen recommendations to allow list the IP ranges published at //api.github.com/meta, as many GitHub apps rely on these ranges
- I've already tried allow listing multiple ranges from the GitHub meta endpoint, but the issue persists

My questions:
- Does anyone know which IP ranges are used by the Azure Sentinel GitHub app specifically?
- Is there an official or recommended approach for using this integration in environments with strict IP allow lists?
- Has anyone successfully configured this integration without fully disabling IP restrictions?

Any insight, references, or firsthand experience would be greatly appreciated. Thanks in advance!
Missing details in Azure Activity Logs – MICROSOFT.SECURITYINSIGHTS/ENTITIES/ACTION

The Azure Activity Logs are crucial for tracking access and actions within Sentinel. However, I'm encountering a significant lack of documentation and clarity regarding some specific operation types.

Resources consulted:
- https://learn.microsoft.com/en-us/azure/sentinel/audit-sentinel-data
- https://learn.microsoft.com/en-us/rest/api/securityinsights/entities?view=rest-securityinsights-2024-01-01-preview
- https://learn.microsoft.com/en-us/rest/api/securityinsights/operations/list?view=rest-securityinsights-2024-09-01&tabs=HTTP

My issue: I observed unauthorized activity on our Sentinel workspace. The Azure Activity Logs clearly indicate the user involved, the resource, and the operation type: "MICROSOFT.SECURITYINSIGHTS/ENTITIES/ACTION". But that's it: no detail about what the action was, what entity it targeted, or how it was triggered. This makes auditing extremely difficult. It's clear the person was in Sentinel and performed some activity through it, whether a search, a KQL query against the logs, or an entity lookup from a query. But that's all... Strangely, this operation is not even listed in the official Sentinel Operations documentation linked above.

My question: has anyone encountered this and found a way to interpret this operation type properly? Any insight into how to retrieve more meaningful details (action context, target entity, etc.) from these events would be greatly appreciated.
I'm stuck!

Logically, I'm not sure how/if I can do this. I want to monitor for Entra ID group additions. I can get this to work for a single entry using this:

```kusto
AuditLogs
| where TimeGenerated > ago(7d)
| where OperationName == "Add member to group"
| where TargetResources[0].type == "User"
| extend GroupName = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[1].newValue)))
| where GroupName == "NameOfGroup" // <-- this returns the single entry
| extend User = tostring(TargetResources[0].userPrincipalName)
| summarize ['Count of Users Added']=dcount(User), ['List of Users Added']=make_set(User) by GroupName
| sort by GroupName asc
```

However, I have a list of 20 priv groups that I need to monitor. I can do this using:

```kusto
let PrivGroups = dynamic(['name1','name2','name3']);
```

and then calling it like this:

```kusto
blahblah
| where TargetResources[0].type == "User"
| extend GroupName = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[1].newValue)))
| where GroupName has_any (PrivGroups)
```

But that's a bit dirty to update — I wanted to call a watchlist. I've tried defining with:

```kusto
let PrivGroup = (_GetWatchlist('TestList'));
```

and tried calling it like:

```kusto
| where GroupName has_any ('PrivGroup')
```

I've also tried dropping the let and looking up the watchlist directly:

```kusto
| where GroupName has_any (_GetWatchlist('TestList'))
```

The query runs but doesn't return any results (obviously I know the result exists). How do I look up that extracted value against a watchlist? Any ideas or pointers why I'm wrong would be appreciated! Many thanks
Pricing Calculator for Microsoft Sentinel

Hi everyone, I am using the pricing calculator for Microsoft Sentinel. I can see the pricing split into two parts: Azure Monitor and Microsoft Sentinel. In my understanding, Microsoft Sentinel processes the logs stored in the Log Analytics workspace, and the cost is based on the log volume in that workspace, so it may not relate to the Azure Monitor part. Does the pricing calculator charge the Azure Monitor part because Azure Monitor and Microsoft Sentinel share the same Log Analytics workspace? Basically, I am not using Azure Monitor. Is there any method to reduce the cost of the Azure Monitor part?
Accelerate Agent Development: Hacks for Building with Microsoft Sentinel data lake

As a Senior Product Manager | Developer Architect on the App Assure team working to bring Microsoft Sentinel and Security Copilot solutions to market, I interact with many ISVs building agents on Microsoft Sentinel data lake for the first time. I've written this article to walk you through one possible approach for agent development: the process I use when building sample agents internally at Microsoft. If you have questions about this, or about other methods for building your agent, App Assure offers guidance through our Sentinel Advisory Service. Throughout this post, I include screenshots and examples from Gigamon's Security Posture Insight Agent.

This article assumes you have:

- An existing SaaS or security product with accessible telemetry.
- A small ISV team (2–3 engineers + 1 PM).
- Focus on a single high-value scenario for the first agent.

The Composite Application Model (What You Are Building)

When I begin designing an agent, I think end to end, from data ingestion requirements through agentic logic, following the composite application model, which consists of five layers:

1. Data sources – your product's raw security, audit, or operational data.
2. Ingestion – getting that data into Microsoft Sentinel.
3. Sentinel data lake & Microsoft Graph – normalization, storage, and correlation.
4. Agent – reasoning logic that queries data and produces outcomes.
5. End user – Security Copilot or SaaS experiences that invoke the agent.

This separation lets you evolve data ingestion and agent logic simultaneously. It also helps you avoid downstream surprises that would otherwise force you to go back and rearchitect the entire solution.

Optional prerequisite: you are enrolled in the ISV Success Program, so you can earn Azure credits to provision Security Compute Units (SCUs) for Security Copilot agents.

Phase 1: Data Ingestion Design & Implementation

Choose your ingestion strategy. The first choice I face when designing an agent is how the data is going to flow into my Sentinel workspace. There are two primary ingestion methods:

- Option A: Codeless Connector Framework (CCF). This is the best option for ISVs with REST APIs. To build a CCF solution, reference our documentation for getting started.
- Option B: CCF Push (Public Preview). In this instance, an ISV pushes events directly to Sentinel via a CCF Push connector. Our MS Learn documentation is a great place to get started with this method.

Additional note: in the event you find that CCF does not support your needs, reach out to App Assure so we can capture your requirements for future consideration. Azure Functions remains an option once you've documented your CCF feature needs.

Phase 2: Onboard to Microsoft Sentinel data lake

Once my data is flowing into Sentinel, I onboard a single Sentinel workspace to data lake. This is a one-time action and cannot be repeated for additional workspaces.

Onboarding steps:

1. Go to the Defender portal.
2. Follow the Sentinel data lake onboarding instructions.
3. Validate that tables are visible in the lake (a sketch follows this list). See Running KQL Queries in data lake for additional information.
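For step 3, a short query confirms the connector's table is both visible and fresh. A minimal sketch; MyIsvTelemetry_CL is a hypothetical table name, so substitute your connector's destination table:

```kusto
// Row count and freshness for the newly ingested connector table.
// MyIsvTelemetry_CL is a placeholder name for illustration only.
MyIsvTelemetry_CL
| where TimeGenerated > ago(24h)
| summarize Events = count(), Latest = max(TimeGenerated)
```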
Phase 3: Build and Test the Agent in Microsoft Foundry

Once my data is successfully ingested into data lake, I begin the agent development process. There are multiple ways to build agents depending on your needs and tooling preferences. For this example, I chose Microsoft Foundry because it fit my needs for real-time logging, cost efficiency, and greater control.

1. Create a Microsoft Foundry instance. Foundry serves as your development environment; reference our QuickStart guide for setting it up. Required permissions: Security Reader (Entra or subscription) and Azure AI Developer at the resource group. After setup, click Create Agent.

2. Design the agent. A strong first agent solves one narrow security problem, has deterministic outputs, and uses explicit instructions rather than vague prompts. Example agent responsibilities:
- Query Sentinel data lake (Sentinel data exploration tool).
- Summarize recent incidents.
- Correlate the ISV's specific signals with Sentinel alerts and other ISV tables (Sentinel data exploration tool).

3. Implement agent instructions. Well-designed agent instructions include a role definition ("You are a security investigation agent…"), the data sources the agent can access, step-by-step reasoning rules, and output format expectations. Sample instructions can be found here: Agent Instructions.

4. Configure the Microsoft Model Context Protocol (MCP) tooling for your agent. For your agent to query, summarize, and correlate all the data your connector has sent to data lake: select Tools, and under Catalog, type Sentinel, then select Microsoft Sentinel Data Exploration. For more information about the data exploration tool collection in the MCP server, see our documentation.

I always test repeatedly with real data until outputs are consistent. For more information on testing and validating the agent, please reference our documentation.
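To make that testing loop concrete, this is the kind of query an agent with a "summarize recent incidents" responsibility would issue through the Sentinel data exploration tool; running it yourself gives you a baseline to compare the agent's answers against. A hedged sketch:

```kusto
// Incidents from the last 7 days, rolled up by severity and status.
// SecurityIncident keeps a row per incident update, so take the latest per incident.
SecurityIncident
| where TimeGenerated > ago(7d)
| summarize arg_max(TimeGenerated, Severity, Status, Title) by IncidentNumber
| summarize Incidents = count() by Severity, Status
| order by Severity asc
```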
Phase 4: Migrate the Agent to Security Copilot

Once the agent works in Foundry, I migrate it to Security Copilot:

1. Copy the full instruction set from Foundry.
2. Provision an SCU for your Security Copilot workspace; for instructions, please reference the documentation. Note that you are charged per hour per SCU, so once you are done testing, deprovision the capacity to prevent additional charges.
3. Open Security Copilot and use the Create From Scratch Agent Builder as outlined in the documentation.
4. Add the Sentinel data exploration MCP tools (the same tools used by the Foundry agent in the previous step). For more information on linking the Sentinel MCP tools, refer to the linked article.
5. Paste and adapt the instructions.

At this stage, I always validate the following:

- Agent permissions – confirm the agent has the necessary permissions to interact with the MCP tool and read data from your data lake instance.
- Agent performance – confirm a successful interaction with measured latency and benchmark results.

This step intentionally avoids reimplementation: I am reusing proven logic.

Phase 5: Execute, Validate, and Publish

After setting up my agent, I navigate to the Agents tab to manually trigger it. For more information on testing an agent, refer to the documentation. Once the agent has executed successfully, I download the agent manifest file from the environment so it can be packaged: click View code on the agent under the Build tab, as outlined in the documentation.

Publishing to the Microsoft Security Store: if I were publishing my agent to the Microsoft Security Store, these are the steps I would follow:

1. Finalize ingestion reliability.
2. Document required permissions.
3. Define supported scenarios clearly.
4. Package agent instructions and guidance (by following these instructions).

Summary

Based on my experience developing Security Copilot agents on Microsoft Sentinel data lake, this playbook provides a practical, repeatable framework for ISVs to accelerate their agent development and delivery while maintaining high standards of quality. The foundation enables rapid iteration: future agents can often be built in days, not weeks, by reusing the same ingestion and data lake setup. When starting your own agent development journey, keep the following in mind:

- Limit the initial scope.
- Reuse Microsoft-managed infrastructure.
- Separate ingestion from intelligence.

What Success Looks Like

At the end of this development process, you will have:

- A Microsoft Sentinel data connector live in Content Hub (or in process) that provides a data ingestion path.
- Data visible in data lake.
- A tested agent running in Security Copilot.
- Clear documentation for customers.

A key success factor I look for is clarity over completeness: a focused agent is far more likely to be adopted.

Need help? If you have any issues as you work to develop your agent, please reach out to the App Assure team for support via our Sentinel Advisory Service. Or if you have any other tips, please comment below, I'd love to hear your feedback.