CrowdStrike API Data Connector (via Codeless Connector Framework) (Preview)
API scopes have been created and added to the connector; however, the only streams observed are Alerts and Hosts. Detections is not logging. Is anyone experiencing this issue? GitHub has a post about it that appears to have been escalated as a feature request. The CrowdStrikeDetections stream is not being ingested. Does anyone have this set up and working?
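One quick way to narrow this down is to check which of the connector's tables are actually receiving data. A minimal sketch, assuming the CCF tables share a CrowdStrike name prefix in your workspace (adjust the wildcard to your actual table names):

    // Count rows per CrowdStrike* table over the last 24 hours
    union withsource=TableName CrowdStrike*
    | where TimeGenerated > ago(24h)
    | summarize Rows = count(), Latest = max(TimeGenerated) by TableName
    | order by TableName asc

Tables with no rows at all simply won't appear in the output, which at least confirms whether Alerts and Hosts are the only streams landing.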
RSAC 2026: What the Sentinel Playbook Generator actually means for SOC automation

RSAC 2026 brought a wave of Sentinel announcements, but the one I keep coming back to is the playbook generator. Not because it's the flashiest, but because it touches something that's been a real operational pain point for years: the gap between what SOC teams need to automate and what they can realistically build and maintain. I want to unpack what this actually changes from an operational perspective, because I think the implications go further than "you can now vibe-code a playbook."

The problem it solves

If you've built and maintained Logic Apps playbooks in Sentinel at any scale, you know the friction. You need a connector for every integration. If there isn't one, you're writing custom HTTP actions with authentication handling, pagination, error handling - all inside a visual designer that wasn't built for complex branching logic. Debugging is painful. Version control is an afterthought. And when something breaks at 2am, the person on call needs to understand both the Logic Apps runtime AND the security workflow to fix it.

The result in most environments I've seen: teams build a handful of playbooks for the obvious use cases (isolate host, disable account, post to Teams) and then stop. The long tail of automation - the enrichment workflows, the cross-tool correlation, the conditional response chains - stays manual because building it is too expensive relative to the time saved.

What's actually different now

The playbook generator produces Python. Not Logic Apps JSON, not ARM templates - actual Python code with documentation and a visual flowchart. You describe the workflow in natural language, the system proposes a plan, asks clarifying questions, and then generates the code once you approve.

The Integration Profile concept is where this gets interesting. Instead of relying on predefined connectors, you define a base URL, auth method, and credentials for any service - and the generator creates dynamic API calls against it. This means you can automate against ServiceNow, Jira, Slack, your internal CMDB, or any REST API without waiting for Microsoft or a partner to ship a connector.

The embedded VS Code experience with plan mode and act mode is a deliberate design choice. Plan mode lets you iterate on the workflow before any code is generated. Act mode produces the implementation. You can then validate against real alerts and refine through conversation or direct code edits. This is a meaningful improvement over the "deploy and pray" cycle most of us have with Logic Apps.

Where I see the real impact

For environments running Sentinel at scale, the playbook generator could unlock the automation long tail I mentioned above. The workflows that were never worth the Logic Apps development effort might now be worth a 15-minute conversation with the generator. Think: enrichment chains that pull context from three different tools before deciding on a response path, or conditional escalation workflows that factor in asset criticality, time of day, and analyst availability.

There's also an interesting angle for teams that operate across Microsoft and non-Microsoft tooling. If your SOC uses Sentinel for SIEM but has Palo Alto, CrowdStrike, or other vendors in the stack, the Integration Profile approach means you can build cross-vendor response playbooks without middleware.
The questions I'd genuinely like to hear about

A few things that aren't clear from the documentation and that I think matter for production use:

- Security Copilot dependency: The prerequisites require a Security Copilot workspace with EU or US capacity. Someone in the blog comments already flagged this as a potential blocker for organizations that have Sentinel but not Security Copilot. Is this a hard requirement going forward, or will there be a path for Sentinel-only customers?
- Code lifecycle management: The generated Python runs... where exactly? What's the execution runtime? How do you version control, test, and promote these playbooks across dev/staging/prod? Logic Apps had ARM templates and CI/CD patterns. What's the equivalent here?
- Integration Profile security: You're storing credentials for potentially every tool in your security stack inside these profiles. What's the credential storage model? Is this backed by Key Vault? How do you rotate credentials without breaking running playbooks?
- Debugging in production: When a generated playbook fails at 2am, what does the troubleshooting experience look like? Do you get structured logs, execution traces, retry telemetry? Or are you reading Python stack traces?
- Coexistence with Logic Apps: Most environments won't rip and replace overnight. What's the intended coexistence model between generated Python playbooks and existing Logic Apps automation rules?

I'm genuinely optimistic about this direction. Moving from a low-code visual designer to an AI-assisted coding model with transparent, editable output feels like the right architectural bet for where SOC automation needs to go. But the operational details around lifecycle, security, and debugging will determine whether this becomes a production staple or stays a demo-only feature. Would be interested to hear from anyone who's been in the preview - what's the reality like compared to the pitch?
Your Sentinel AMA Logs & Queries Are Public by Default — AMPLS Architectures to Fix That

When you deploy Microsoft Sentinel, security log ingestion travels over public Azure Data Collection Endpoints by default. The connection is encrypted, and the data arrives correctly — but the endpoint is publicly reachable, and so is the workspace itself, queryable from any browser on any network. For many organisations, that trade-off is fine. For others — regulated industries, healthcare, financial services, critical infrastructure — it is the exact problem they need to solve. Azure Monitor Private Link Scope (AMPLS) is how you solve it.

What AMPLS Actually Does

AMPLS is a single Azure resource that wraps your monitoring pipeline and controls two settings:

- Where logs are allowed to go (ingestion mode: Open or PrivateOnly)
- Where analysts are allowed to query from (query mode: Open or PrivateOnly)

Change those two settings and you fundamentally change the security posture — not as a policy recommendation, but as a hard platform enforcement. Set ingestion to PrivateOnly and the public endpoint stops working. It does not fall back gracefully. It returns an error. That is the point. It is not a firewall rule someone can bypass or a policy someone can override. Control is baked in at the infrastructure level.

Three Patterns — One Spectrum

There is no universally correct answer. The right architecture depends on your organisation's risk appetite, existing network infrastructure, and how much operational complexity your team can realistically manage. These three patterns cover the full range:

Architecture 1 — Open / Public (Basic)

No AMPLS. Logs travel to public Data Collection Endpoints over the internet. The workspace is open to queries from anywhere. This is the default — operational in minutes with zero network setup. Cloud service connectors (Microsoft 365, Defender, third-party) work immediately because they are server-side API/Graph pulls and are unaffected by AMPLS. Azure Monitor Agents and Azure Arc agents handle ingestion from cloud or on-premises machines via the public network.

Simplicity: 9/10 | Security: 6/10
Good for: Dev environments, teams getting started, low-sensitivity workloads

Architecture 2 — Hybrid: Private Ingestion, Open Queries (Recommended for most)

AMPLS is in place. Ingestion is locked to PrivateOnly — logs from virtual machines travel through a Private Endpoint inside your own network, never touching a public route. On-premises or hybrid machines connect through Azure Arc over VPN or a dedicated circuit and feed into the same private pipeline. Query access stays open, so analysts can work from anywhere without needing a VPN or jumpbox to reach the Sentinel portal — the investigation workflow stays flexible, but the log ingestion path is fully ring-fenced. You can also split ingestion mode per DCE if you need some sources public and some private. This is the architecture most organisations land on as their steady state.

Simplicity: 6/10 | Security: 8/10
Good for: Organisations with mixed cloud and on-premises estates that need private ingestion without restricting analyst access

Architecture 3 — Fully Private (Maximum Control)

Infrastructure is essentially identical to Architecture 2 — AMPLS, Private Endpoints, Private DNS zones, VPN or dedicated circuit, Azure Arc for on-premises machines. The single difference: query mode is also set to PrivateOnly. Analysts can only reach Sentinel from inside the private network; a VPN or jumpbox is required to access the portal. Both the pipe that carries logs in and the channel analysts use to read them are fully contained within the defined boundary. This is the right choice when your organisation needs to demonstrate — not just claim — that security data never moves outside a defined network perimeter.

Simplicity: 2/10 | Security: 10/10
Good for: Organisations with strict data boundary requirements (regulated industries, audit, compliance mandates)
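To make the two switches concrete, below is a minimal sketch of creating an AMPLS with Architecture 2's settings (private ingestion, open queries) using Invoke-AzRestMethod from the Az PowerShell module. The resource names are placeholders, and the api-version and accessModeSettings property names should be verified against current Azure Monitor documentation before use:

    # Requires Az.Accounts; run Connect-AzAccount first.
    # Creates/updates an AMPLS named "contoso-ampls" with private-only ingestion
    # and open query access (Architecture 2). Names below are placeholders.
    $subId = (Get-AzContext).Subscription.Id
    $path  = "/subscriptions/$subId/resourceGroups/rg-monitoring/providers/microsoft.insights/privateLinkScopes/contoso-ampls?api-version=2021-07-01-preview"

    $body = @{
        location   = "global"   # AMPLS is a global resource
        properties = @{
            accessModeSettings = @{
                ingestionAccessMode = "PrivateOnly"  # public ingestion endpoints stop accepting data
                queryAccessMode     = "Open"         # analysts can still query from anywhere
            }
        }
    } | ConvertTo-Json -Depth 5

    Invoke-AzRestMethod -Method PUT -Path $path -Payload $body

Flipping queryAccessMode to PrivateOnly is the only settings change needed to move from Architecture 2 to Architecture 3; the rest of the work is the Private Endpoint, DNS, and connectivity plumbing.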
Quick Reference — Which Pattern Fits?

- Getting started / low-sensitivity workloads → Architecture 1: no network setup, public endpoints accepted
- Private log ingestion, analysts work anywhere → Architecture 2: AMPLS PrivateOnly ingestion, query mode open
- Both ingestion and queries must be fully private → Architecture 3: same as Architecture 2, plus query mode set to PrivateOnly

One thing all three share: Microsoft 365, Entra ID, and Defender connectors work in every pattern — they are server-side pulls by Sentinel and are not affected by your network posture. Please feel free to reach out if you have any questions regarding the information provided.
Ingest Microsoft XDR Advanced Hunting Data into Microsoft Sentinel

I had difficulty finding a guide that shows how to query Microsoft Defender Vulnerability Management Advanced Hunting tables in Microsoft Sentinel for alerting and automation. As a result, I put together this guide to demonstrate how to ingest Microsoft XDR Advanced Hunting query results into Microsoft Sentinel using Azure Logic Apps and a system-assigned managed identity. The solution allows you to:

- Run Advanced Hunting queries on a schedule
- Collect high-risk vulnerability data (or other hunting results)
- Send the results to a Sentinel workspace as custom logs
- Create alerts and automation rules based on this data

This approach avoids credential storage and follows least-privilege and managed-identity best practices.

Prerequisites

Before you begin, ensure you have:

- Microsoft Defender XDR access
- Microsoft Sentinel deployed
- Azure Logic Apps permission
- Application Administrator or higher in Microsoft Entra ID
- PowerShell with Az modules installed
- Contributor access to the Sentinel workspace

Architecture at a Glance

    Logic App (Managed Identity)
            ↓
    Microsoft XDR Advanced Hunting API
            ↓
    Logic App
            ↓
    Log Analytics Data Collector API
            ↓
    Microsoft Sentinel (Custom Log)

Step 1: Create a Logic App

1. In the Azure Portal, go to Logic Apps.
2. Create a new Consumption Logic App.
3. Choose the appropriate Subscription, Resource Group, and Region.

Step 2: Enable System-Assigned Managed Identity

1. Open the Logic App.
2. Navigate to Settings → Identity.
3. Enable System-assigned managed identity and click Save.
4. Note the Object ID.

This identity will later be granted permission to run Advanced Hunting queries.

Step 3: Locate the Logic App in Entra ID

1. Go to Microsoft Entra ID → Enterprise Applications.
2. Change the filter to All Applications.
3. Search for your Logic App name and select the app to confirm it exists.

Step 4: Grant Advanced Hunting Permissions (PowerShell)

Advanced Hunting permissions cannot be assigned via the portal and must be granted using PowerShell.

Required permission: AdvancedQuery.Read.All

PowerShell script:

    # Your tenant ID (in the Azure portal, under Microsoft Entra ID > Overview).
    $TenantID = "Your TenantID"
    Connect-AzAccount -TenantId $TenantID

    # Object ID of the Logic App's managed identity (noted in Step 2).
    $spID = "Your Managed Identity Object ID"

    # Get the service principal for WindowsDefenderATP by its well-known AppId.
    $GraphServicePrincipal = Get-AzADServicePrincipal -Filter "AppId eq 'fc780465-2017-40d4-a0c5-307022471b92'" |
        Select-Object Id, AppRole

    # Extract the AdvancedQuery.Read.All app role.
    $AppRole = $GraphServicePrincipal.AppRole |
        Where-Object { $_.Value -contains "AdvancedQuery.Read.All" }

    # If $AppRole comes up blank, its ID can be replaced with 93489bf5-0fbc-4f2d-b901-33f2fe08ff05.

    # Now assign the role so the managed identity can run Advanced Hunting queries.
    New-AzADServicePrincipalAppRoleAssignment -ServicePrincipalId $spID -ResourceId $GraphServicePrincipal.Id -AppRoleId $AppRole.Id

    # Or, using the well-known role ID directly:
    New-AzADServicePrincipalAppRoleAssignment -ServicePrincipalId $spID -ResourceId $GraphServicePrincipal.Id -AppRoleId 93489bf5-0fbc-4f2d-b901-33f2fe08ff05

After successful execution, verify the permission under Enterprise Applications → Permissions.

Step 5: Build the Logic App Workflow

Open Logic App Designer and create the following flow:

1. Trigger: Recurrence (e.g., every 24 hours).
2. Run Advanced Hunting Query — Connector: Microsoft Defender ATP; Authentication: system-assigned managed identity; Action: Run Advanced Hunting Query.

Sample KQL Query (High-Risk Vulnerabilities)
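A minimal sketch of such a query, assuming the Defender Vulnerability Management tables and the CVSS/exploit columns in DeviceTvmSoftwareVulnerabilitiesKB (tune the thresholds to your environment):

    // Devices exposed to critical (CVSS > 9) or exploitable vulnerabilities
    DeviceTvmSoftwareVulnerabilities
    | join kind=inner (
        DeviceTvmSoftwareVulnerabilitiesKB
        | where CvssScore > 9 or IsExploitAvailable == true
      ) on CveId
    | project DeviceName, CveId, SoftwareVendor, SoftwareName, SoftwareVersion,
              CvssScore, IsExploitAvailable, VulnerabilitySeverityLevel
    | take 1000    // bound the payload the Logic App has to forward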
3. Send Data to Log Analytics (Sentinel).

In the Send Data action, create a new connection and provide the information for the workspace where the Sentinel log will live. Obtaining the workspace key is not straightforward from the designer; retrieve it with PowerShell:

    Get-AzOperationalInsightsWorkspaceSharedKey `
        -ResourceGroupName "<ResourceGroupName>" `
        -Name "<WorkspaceName>"

Configuration details:

- Workspace ID
- Primary key
- Log Type (example): XDRVulnerability_CL
- Request body: the Results array from Advanced Hunting

Step 6: Run the Logic App to Return Results

In the Logic App Designer, select Run. If the run is successful, data will be sent to the Sentinel workspace.

Step 7: Validate Data in Microsoft Sentinel

In Sentinel, run the query:

    XDRVulnerability_CL
    | where TimeGenerated > ago(24h)

If data appears, ingestion is successful.

Step 8: Create Alerts & Automation Rules

Use Sentinel to create analytics rules for:

- CVSS > 9
- Exploit available
- New vulnerabilities in the last 24 hours

These rules can trigger email notifications, incident creation, or SOAR playbooks. A sketch of such a rule query follows below.
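A hedged starting point for such a rule is below. The column names are assumptions: the Data Collector API appends type suffixes (for example _s, _d, _b) to ingested fields, so confirm the actual schema of XDRVulnerability_CL in your workspace first:

    // Alert on newly ingested critical or exploitable vulnerabilities
    // (column names assume Data Collector API type suffixes)
    XDRVulnerability_CL
    | where TimeGenerated > ago(24h)
    | where CvssScore_d > 9 or IsExploitAvailable_b == true
    | summarize AffectedDevices = dcount(DeviceName_s),
                Cves = make_set(CveId_s) by SoftwareName_s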
Conclusion

By combining Logic Apps, managed identities, Microsoft XDR, and Microsoft Sentinel, you can create a powerful, secure, and scalable pipeline for ingesting hunting intelligence and triggering proactive detections.

Clarification on UEBA Behaviors Layer Support for Zscaler and Fortinet Logs

I would like to confirm whether the new UEBA Behaviors Layer in Microsoft Sentinel currently supports generating behavior insights for Zscaler and Fortinet log sources. Based on the documentation, the preview version of the Behaviors Layer only supports specific vendors under CommonSecurityLog (CyberArk Vault and Palo Alto Threats), AWS CloudTrail services, and GCP Audit Logs. Since Zscaler and Fortinet are not listed among the supported vendors, I want to verify: does the UEBA Behaviors Layer generate behavior records for Zscaler and Fortinet logs, or are these vendors currently unsupported for behavior generation? Logs from Zscaler and Fortinet also get ingested into the CommonSecurityLog table.
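In the meantime, a quick inventory of which vendors and products are actually landing in CommonSecurityLog gives you a concrete baseline to compare against any behavior records that do appear. A minimal sketch:

    // Inventory of CEF vendors/products currently landing in CommonSecurityLog
    CommonSecurityLog
    | where TimeGenerated > ago(7d)
    | summarize Events = count() by DeviceVendor, DeviceProduct
    | order by Events desc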
Defender Entity Page w/ Sentinel Events Tab

One device is displaying the Sentinel Events tab, while the other is not. The only difference observed is that one device is Azure AD (Microsoft Entra ID) joined and the other is domain joined. Could this difference account for the missing Sentinel events data? Any insight would be appreciated!
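One way to compare how the two devices are registered is to pull their latest records from the DeviceInfo advanced hunting table. A minimal sketch with hypothetical device names; an empty AadDeviceId generally indicates no Entra ID registration (use TimeGenerated instead of Timestamp if you run this in Sentinel rather than the Defender portal):

    // Compare registration details for the two devices (names are placeholders)
    DeviceInfo
    | where DeviceName in~ ("device-a", "device-b")
    | summarize arg_max(Timestamp, OnboardingStatus, AadDeviceId, OSPlatform) by DeviceName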
Microsoft Application Protection Incidents

Hi, I'm seeing a small number of incidents within Sentinel with the alert product name of 'Microsoft Application Protection'. I can view them in Sentinel, but when clicking on the hyperlink to be taken to the Defender portal, I can't access them. Two questions: which Defender suite are these alerts coming from, and which roles/permissions are required to view them within the Defender/unified portal?
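While waiting for an answer on permissions, the provider metadata Sentinel stores on each alert can help pin down the source suite. A minimal sketch:

    // Inspect provider metadata for these alerts
    SecurityAlert
    | where TimeGenerated > ago(30d)
    | where ProductName == "Microsoft Application Protection"
    | summarize Count = count() by AlertName, ProviderName, ProductName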
Device Tables are not ingesting tables for an orgs workspace

Device tables are not ingesting data for an org's workspace. I can confirm that all devices are enrolled and onboarded to MDE (Microsoft Defender for Endpoint). I placed an EICAR file on one of the machines, which brought an alert through to Sentinel; however, this did not populate any of the device-related tables.

[Screenshot: the workspace I am targeting]
[Screenshot: a workspace from another org with the tables enabled and ingesting data]

The Microsoft Defender XDR connector shows as connected; however, the tables do not seem to be ingesting data. I run the following:

    DeviceEvents
    | where TimeGenerated > ago(15m)
    | top 20 by TimeGenerated

    DeviceProcessEvents
    | where TimeGenerated > ago(15m)
    | top 20 by TimeGenerated

I receive no results: "No results found from the specified time range. Try selecting another time range." Please assist, as I cannot think of where this is failing.
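One additional check worth running: the Usage table records billed ingestion per data type, so it can confirm whether any Device* data has ever landed in the workspace, as opposed to simply falling outside a 15-minute query window. A minimal sketch:

    // Billed ingestion volume per Device* table over the last 7 days
    Usage
    | where TimeGenerated > ago(7d)
    | where DataType startswith "Device"
    | summarize TotalMB = sum(Quantity) by DataType
    | order by TotalMB desc

If nothing comes back at all, the Defender XDR connector's event stream selection (rather than the query window) is the more likely culprit.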
Microsoft 365 Defender alerts not capturing fields (entities) in Azure Sentinel

We got an alert from Microsoft 365 Defender in Azure Sentinel ("A potentially malicious URL click was detected"). To investigate this alert we have to check in the Microsoft 365 Defender portal. We noticed that the entities (user, host, IP) are not being captured. How can we resolve this issue? Note: this is not a custom rule.
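To see what Sentinel actually received, inspect the raw entity payload on the SecurityAlert records; if the Entities column is an empty array, the mappings were never delivered by the connector, which narrows the problem to the ingestion side. A minimal sketch:

    // Inspect the raw entity payload delivered with the alert
    SecurityAlert
    | where AlertName == "A potentially malicious URL click was detected"
    | project TimeGenerated, ProviderName, Entities
    | take 10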