soar
103 Topics

Some accounts missing Azure AD Object ID
Hi all, There is something that has been annoying me for a while and I felt it was finally time to post about it. We have a hybrid AD/Azure AD setup with a user sync that has been up and running for years; that particular feature is not my area, but from what I've heard the sync is working fine. My trouble is that Sentinel seems unable to resolve the Azure AD Object ID of some users. For example, if I use the Entity behavior feature to look up one user, the entity page shows "-" as the Azure AD Object ID. Alerts and incidents are shown for the user, so Sentinel seems able to tie the user to incidents at least. If I select another user I might get the full Azure AD Object ID. This is driving me crazy because I have a few playbooks where I need the Azure AD Object ID, and they don't work as it is now. Could anyone shed some light on what process lies behind the correlation between a user and the Azure AD Object ID? Regards, Fredrik
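A possible workaround while the entity page shows "-": if UEBA is enabled, the IdentityInfo table usually holds the object ID for synced accounts, so a playbook can resolve it by UPN instead of relying on the incident entity. A minimal sketch, assuming UEBA is on; the UPN below is a placeholder:

```kql
// Resolve the Azure AD object ID for a synced account from UEBA's IdentityInfo table
IdentityInfo
| where AccountUPN =~ "user@contoso.com"  // placeholder UPN
| summarize arg_max(TimeGenerated, *)     // keep only the latest snapshot of the account
| project AccountUPN, AccountObjectId, AccountSID
```

If the account never appears in IdentityInfo either, that points at the sync side rather than at Sentinel's entity resolution.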
Sentinel incident playbook - get alert entities

Hi! My main task is to get all alerts (alerts, not incidents) from Sentinel (analytics rules and Defender XDR) into an external case management system. For various reasons we need to do this at the alert level. The alert trigger works perfectly by design, but it does not fire for Defender alerts in Sentinel, only for analytics rules. When using the Sentinel incident trigger, I'm not able to extract the entities related to the individual alerts, only the incident-related entities. The final output is sent with an HTTP POST to our external system using a Logic App. Any ideas how to get all alerts with their entities in a Logic App?
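One pattern that works around the trigger limitation: keep the incident trigger, then have the Logic App run a KQL query ("Run query and list results" action) that joins the incident's alert IDs to the SecurityAlert table, whose Entities column carries each alert's entities as JSON. A hedged sketch; the incident number is a placeholder to be passed in from the trigger:

```kql
// Expand an incident's alert IDs and join to SecurityAlert to get per-alert entities
SecurityIncident
| where IncidentNumber == 12345               // placeholder: pass in from the incident trigger
| summarize arg_max(TimeGenerated, AlertIds)  // latest revision of the incident
| mv-expand AlertId = AlertIds to typeof(string)
| join kind=inner (
    SecurityAlert
    | summarize arg_max(TimeGenerated, *) by SystemAlertId  // deduplicate alert updates
  ) on $left.AlertId == $right.SystemAlertId
| project AlertName, ProviderName, ProductName, Entities
```

The Entities column is a JSON array, so the HTTP POST step can forward it as-is or parse it first with a Parse JSON action.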
Tips on how to process firewall URL/DNS alerts

Greetings, I've been tossing this around ever since I started using Sentinel, or rather Defender XDR, a few years ago: how to process events from our firewalls, specifically all the URL and DNS block alerts generated. I don't want to tune them out completely, because they might be an indication of something bigger, but as the situation is now it's almost impossible to process them. All alerts are created by an NDR rule that processes CommonSecurityLog entries based on syslog data and creates incidents with entities. The way Defender XDR then processes these Sentinel incidents seems to be to create "mega incidents" where it dumps all incidents for a specific time. I can understand the logic behind this, where Defender XDR tries to piece together different incidents and alerts that have common elements: users, MITRE attributes, or any combination. But it becomes unmanageable. I would like some input from the community, or references to best practices.
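One common starting point for tuning rather than muting: measure which destinations account for the bulk of the blocks, then allow-list or threshold the noisy, known-benign ones in the analytics rule so only the long tail raises alerts. A rough sketch; the DeviceAction values are an assumption and should be matched to the firewall vendor's actual block-event naming:

```kql
// Top URL/DNS block destinations over the last week, to identify tuning candidates
CommonSecurityLog
| where TimeGenerated > ago(7d)
| where DeviceAction in~ ("block", "deny")  // assumption: adjust to your vendor's action values
| summarize BlockCount = count(), DistinctSources = dcount(SourceIP)
    by DestinationHostName, RequestURL
| sort by BlockCount desc
| take 50
```

A destination blocked thousands of times from many sources is usually ad/telemetry noise worth suppressing; one blocked a handful of times from a single host is the kind of signal worth keeping as an alert.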
Behavior Analytics, investigation Priority

Hello, regarding the InvestigationPriority field in the BehaviorAnalytics table: what value does Microsoft consider high/critical enough to warrant looking into the user's account? From analyzing the logs I would say 7 or higher. If someone could tell me, thank you in advance.
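While waiting for an authoritative number, one way to sanity-check a cutoff is to look at how scores actually surface in your own tenant. A sketch using the >= 7 threshold suggested above (the poster's heuristic, not a documented Microsoft definition):

```kql
// High-scoring UEBA activities using the suggested (undocumented) cutoff of 7
BehaviorAnalytics
| where TimeGenerated > ago(30d)
| where InvestigationPriority >= 7
| project TimeGenerated, UserName, ActivityType, ActionType,
          SourceIPAddress, InvestigationPriority
| sort by InvestigationPriority desc
```

Dropping the threshold filter and summarizing by InvestigationPriority instead gives the full score distribution, which helps decide whether 7 is the right knee for your environment.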
Update content package Metadata

Hello Sentinel community and Microsoft. I've been working on a script where I use this command: https://learn.microsoft.com/en-us/rest/api/securityinsights/content-package/install?view=rest-securityinsights-2024-09-01&tabs=HTTP

I've managed to successfully build everything, from retrieving what's installed, to uninstalling, reinstalling and lastly updating (updating needed to be "list, delete, install", however :') — there was no flag for "update available"). However, now to my issue. While this works like a charm through PowerShell, the metadata and hyperlinking are not being deployed — at all. So I have my 40 content packages successfully installed through the REST API, but then I have to visit the Content hub in the Sentinel GUI, filter for "Installed", mark them all, and press "Install". When I do this, the metadata and hyperlinking are created. (It's most noticeable that the analytics rules for the content packages are not available under Analytics -> Rule templates after installing through the REST API. But once you press the Install button in the GUI, they appear.)

So I looked into the request that is made when pressing the button. It uses another API version — fine, I can add that to my script. But it also uses two variables that are not documented and are encrypted; they are called c and t. I'm also located in the EU and it makes a request to SentinelUS; I'm OK with that. Also, as mentioned, it is another API version (2020-06-01), while the REST API to install content packages above has 2024-09-01. NP. But I cannot simulate this last request, as the variables are encrypted and not available through the install REST API. They are also not possible to simulate — it ONLY works in the GUI when pressing Install. Lastly, I get yet another API version back when the install succeeds in the GUI, so in total there are three API versions.

Here is the code snippet I tried (it is basically a mimic of the POST request seen in the browser's network tab when pressing "Install" on the package in Content hub, after I had successfully installed it through the official REST API):

```powershell
function Refresh-WorkspaceMetadata {
    param (
        [Parameter(Mandatory = $true)] [string]$SubscriptionId,
        [Parameter(Mandatory = $true)] [string]$ResourceGroup,
        [Parameter(Mandatory = $true)] [string]$WorkspaceName,
        [Parameter(Mandatory = $true)] [string]$AccessToken
    )

    # Use the API version from the portal sample
    $apiVeri = "?api-version="
    $RefreshapiVersion = "2020-06-01"

    # Build the batch endpoint URL with the query string on the batch URI
    $batchUri = "https://management.azure.com/`$batch$apiVeri$RefreshapiVersion"

    # Construct a relative URL for the workspace resource.
    # Append dummy t and c parameters to mimic the portal's request.
    $workspaceUrl = "/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroup/providers/Microsoft.OperationalInsights/workspaces/$WorkspaceName$apiVeri$RefreshapiVersion&t=123456789&c=dummy"

    # Create a batch payload with several GET requests
    $requests = @()
    for ($i = 0; $i -lt 5; $i++) {
        $requests += @{
            httpMethod           = "GET"
            name                 = [guid]::NewGuid().ToString()
            requestHeaderDetails = @{
                commandName = "Microsoft_Azure_SentinelUS.ContenthubWorkspaceClient/get"
            }
            url                  = $workspaceUrl
        }
    }

    $body = @{ requests = $requests } | ConvertTo-Json -Depth 5

    try {
        $response = Invoke-RestMethod -Uri $batchUri -Method Post -Headers @{
            "Authorization" = "Bearer $AccessToken"
            "Content-Type"  = "application/json"
        } -Body $body
        Write-Host "[+] Workspace metadata refresh triggered successfully." -ForegroundColor Green
    }
    catch {
        Write-Host "[!] Failed to trigger workspace metadata refresh. Error: $_" -ForegroundColor Red
    }
}

Refresh-WorkspaceMetadata -SubscriptionId $subscriptionId -ResourceGroup $resourceGroup -WorkspaceName $workspaceName -AccessToken $accessToken
```

(Note: I have variables higher up in my script for subscription ID, resource group, workspace name, access token, etc.) I've tried with and without mimicking the t and c variables; neither works. So for me, currently, installing Content hub packages for Sentinel is always:

1. Install through the script to get all 40 packages.
2. Visit the web page, filter for 'Installed', mark them all and press 'Install'.
3. You now have all metadata and hyperlinking available in your Sentinel (such as hunting rules, analytics rules, workbooks and playbook templates).

Has anyone managed to get around this, or is it GUI-gated? Greatly appreciated.
FinOps In Microsoft Sentinel

Microsoft Sentinel's security analytics and operations data is stored in an Azure Monitor Log Analytics workspace. Billing is based on the volume of data analyzed in Microsoft Sentinel and stored in the Log Analytics workspace, and the cost of both is combined in a simplified pricing tier.

Microsoft 365 data sources are always free to ingest for all Microsoft Sentinel users. Billable data sources: although the alerts themselves are free, the raw logs for Microsoft Defender for Endpoint, Defender for Cloud Apps, Microsoft Entra ID sign-in and audit logs, and Azure Information Protection (AIP) data types are paid.

Microsoft Sentinel data retention is free for the first 90 days. Enable Microsoft Sentinel on an Azure Monitor Log Analytics workspace and the first 10 GB/day is free for 31 days: the cost of both Log Analytics data ingestion and Microsoft Sentinel analysis charges, up to the 10 GB/day limit, is waived during the 31-day trial period. This free trial is subject to a 20-workspace limit per Azure tenant.

• By default, all tables in your workspace inherit the workspace's interactive retention setting and have no archive.
• You can modify the retention and archive settings of individual tables.

Azure Monitor Logs retains data in two states:
- Interactive retention: lets you retain Analytics logs for interactive queries for up to 2 years.
- Archive: lets you keep older, less-used data in your workspace at a reduced cost.

• You can access data in the archived state by using search jobs and restore, and keep data in the archived state for up to 12 years.
• Defining a short data retention period is very important for cost management in Microsoft Sentinel, but first go to Log Analytics workspace | Workbooks | Workspace Usage to see the table sizes. Use this workbook to analyze the sizes of the different tables in your workspace.

Where can you save money?

Ingestion
• Carefully plan what data is sent into your Microsoft Sentinel workspace.
• Utilize filtering mechanisms to reduce ingestion to what the SOC needs.
• Set a daily cap (good for PoC scenarios but not recommended for production).

Retention
• Send data to other storage platforms that have cheaper storage costs (Azure Blob Storage, Azure Data Explorer).

Compute
• Shut down Azure Machine Learning compute during off hours, and consider using reserved instance pricing.
• Set quotas on your subscriptions and workspaces.
• Use low-priority virtual machines (VMs).

Bandwidth
• Sending data across Azure regions might incur additional costs.

Ingestion planning
• Analyze your data sources and decide what data is needed by your SOC for detection, investigation, hunting and enrichment. Take a use-driven approach; a sizing query is sketched after this list.
• Plan your workspace design. Existing workspaces might be ingesting data not needed by the SOC, so consider using a separate workspace for Microsoft Sentinel.
• When possible, enable Defender for Servers on the same workspace where you enable Microsoft Sentinel; you get 500 MB of free data ingestion per day. If you configure your Log Analytics agent to send data to two or more different Log Analytics workspaces (multi-homing), you'll get the 500 MB of free data ingestion for each workspace.
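To put numbers behind that use-driven approach, a commonly used query over the Usage table shows which tables drive billable ingestion. A minimal sketch, noting that Quantity is reported in MB:

```kql
// Billable ingestion per table over the last 30 days, largest first
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize IngestedGB = round(sum(Quantity) / 1024, 2) by DataType
| sort by IngestedGB desc
```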
Retention
• Microsoft Sentinel retention is charged (about $0.10/GB/month) and can become a big portion of the Microsoft Sentinel cost.
• Example: 1.2 TB/day ingestion with 1-year retention (East US list prices) — ingestion: ~$89K/month; retention: ~$33K/month.
• If you require more than 90 days of retention, determine whether you need it for the whole workspace or just for some tables.
• Consider using another storage platform for long-term retention (Azure Blob Storage, Azure Data Explorer).

Long-term retention options:

Azure Blob Storage
• Cheaper than Microsoft Sentinel retention.
• Difficult to query.
• Ideal for audit/compliance purposes.

Azure Data Explorer
Stores security logs in Azure Data Explorer on a long-term basis, minimizing costs and providing easy access when you need to query the data, and keeps most of the data in the cold cache, minimizing the computing cost. Log Analytics doesn't currently support exporting custom log tables; in this scenario, you can use Azure Logic Apps to export data from Log Analytics workspaces. Because Azure Data Explorer provides long-term storage, you can reduce your Sentinel retention costs with this approach; it is ideal for forensic investigation and hunting on older data, and can achieve up to 75% savings on retention costs. Instead of using Azure Data Explorer for long-term storage of security logs, you can use Storage. That approach simplifies the architecture and can help control the cost; a disadvantage is the need to rehydrate the logs for security audits and interactive investigative queries. With Azure Data Explorer, you can move data from the cold partition to the hot partition by changing a policy, which speeds up data exploration.

Bandwidth
Sending telemetry from one Azure region to another can incur bandwidth costs. This only affects Azure VMs that send telemetry across Azure regions; data sources based on diagnostic settings are not affected, and it is not a big cost component compared to ingestion or retention. Example: 1,000 VMs, each generating 1 GB/day, sending data from the US to the EU: 1,000 VMs * 1 GB/day * 30 days/month * $0.05/GB = $1,500/month.

Ingestion Cost Alert Playbook
Managing cost for cloud services is an essential part of ensuring that you get maximum value from your investment in solutions running on this computing platform, and Azure Sentinel is no different. To help you exercise greater control over your budget for Azure Sentinel, this playbook sends you an alert should you exceed a budget that you define for your Azure Sentinel workspace within a given time frame. With the ingestion cost alert playbook, you can set up an alert based on the budget defined in your Microsoft Sentinel workspace within a given time frame.

Ingestion Anomaly Alert Playbook
This playbook sends you an alert should there be an ingestion spike into your workspace. The playbook uses the series_decompose_anomalies KQL function to determine anomalous ingestion.

The Workspace Usage Report workbook
The Workspace Usage Report workbook provides your workspace's data consumption, cost, and usage statistics. The workbook shows the workspace's data ingestion status and the amount of free and billable data. You can use the workbook logic to monitor data ingestion and costs, and to build custom views and rule-based alerts. This workbook also provides granular ingestion details: it breaks down the data in your workspace by data table, and provides volumes per table and entry to help you better understand your ingestion patterns.
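When only a few tables need long retention, the per-table settings mentioned earlier can be scripted instead of changed in the portal. A sketch using the Az.OperationalInsights PowerShell module; the resource names and day counts are example values:

```powershell
# Requires the Az.OperationalInsights module and an authenticated session (Connect-AzAccount).
# Example: keep SecurityEvent interactive for 90 days, with 730 days total (interactive + archive).
$params = @{
    ResourceGroupName    = "rg-sentinel"   # example resource group
    WorkspaceName        = "law-sentinel"  # example workspace
    TableName            = "SecurityEvent"
    RetentionInDays      = 90
    TotalRetentionInDays = 730
}
Update-AzOperationalInsightsTable @params
```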
Azure pricing model: based on the volume of data ingested. User and Entity Behavior Analytics: approximately 10% of the cost of the logs selected for UEBA.

To change your pricing tier commitment, select one of the other tiers on the pricing page, and then select Apply. You must have the Contributor or Owner role in Microsoft Sentinel to change the pricing tier.

Useful links:

Tools related to FinOps on Azure Sentinel (Azure Pricing Calculator, Azure Cost Management, Azure Advisor, TCO Calculator, Azure Hybrid Benefit Savings Calculator)
https://techcommunity.microsoft.com/t5/fasttrack-for-azure/the-azure-finops-guide/ba-p/3704132

Manage and monitor costs for Microsoft Sentinel
https://learn.microsoft.com/en-us/azure/sentinel/billing-monitor-costs

Reduce costs for Microsoft Sentinel
https://learn.microsoft.com/en-us/azure/sentinel/billing-reduce-costs

Ingestion Cost Spike Detection Playbook
https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/ingestion-cost-spike-detection-playbook/ba-p/2591301

Ingestion Cost Alert Playbook
https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/ingestion-cost-alert-playbook/ba-p/2006003

Introducing Microsoft Sentinel Optimization Workbook
https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/introducing-microsoft-sentinel-optimization-workbook/ba-p/3901489
Provide MS Sentinel explicit permissions to run playbooks via ARM

Hi, simple ask. Using ARM to add a template to a new install of Sentinel (LA workspace, Sentinel, analytics, workbooks and playbooks all installed in one go), I can't figure out how to assign the permissions required as part of the initial ARM install. In the GUI it's simple enough: Sentinel > Settings > Settings > Playbook permissions > Configure permissions > select the resource group and apply. How can this be replicated using ARM only — no PowerShell and no GUI? Ideally I would like the ARM template to be a one-shot. Any help is appreciated. Cheers
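For reference, what that GUI step does is grant the Microsoft Sentinel Automation Contributor role to the "Azure Security Insights" service principal on the playbook resource group, which can be expressed as an ordinary role assignment resource in the template. A hedged sketch: the principal's object ID must be looked up in your own tenant (it differs per tenant) and passed in as a parameter, and the role definition GUID should be verified against the current built-in roles list:

```json
{
  "type": "Microsoft.Authorization/roleAssignments",
  "apiVersion": "2022-04-01",
  "name": "[guid(resourceGroup().id, 'SentinelPlaybookPermissions')]",
  "properties": {
    "description": "Lets Microsoft Sentinel run playbooks in this resource group",
    "principalId": "[parameters('azureSecurityInsightsObjectId')]",
    "principalType": "ServicePrincipal",
    "roleDefinitionId": "[subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'f4c81013-99ee-4d62-a7ee-b3f1f648599a')]"
  }
}
```

The GUID above is believed to be the built-in Microsoft Sentinel Automation Contributor role; double-check it before relying on this in a one-shot deployment.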
Threat Intelligence Indicators in Microsoft Sentinel

Hello Microsoft Community, this is my first post, and I hope it will be helpful for those who are trying to understand how the Threat Intelligence (TI) indicators feature works on Microsoft, and especially in Microsoft Sentinel. But before that, I would like to share my own experience. Working with a lot of customers, I have had a big number of questions about how to automate the purging of stale TI indicators from Sentinel. There is a way to do it manually, but if you have thousands of them, it is a tough task to remove only one hundred at a time. I decided to automate this process and started investigating different kinds of automation (Graph API, PowerShell, etc.). After some investigation, I found the API command that returns a list of all indicators stored in a Microsoft tenant: GET https://graph.microsoft.com/beta/security/tiIndicators. I then decided to build a Logic App that would get all TI indicators, extract their IDs, and remove each of them by running DELETE https://graph.microsoft.com/beta/security/tiIndicators/{id}. Unfortunately, when I was testing it, I was stuck in a situation where I got nothing back even though I have more than one thousand indicators in my test environment. Searching forums and asking questions brought no answers, so I opened a case with Microsoft Support. I really appreciate the Microsoft Support team for providing a professional and fast response and explanation. Now I will try to explain a little how the TI backend works at Microsoft. Let's move to the technical part.

1. TI indicators ingestion

There are a few ways to ingest TI indicators. The first one is to use a built-in TAXII connector. There are plenty of them; you can use, for example, Anomali, IBM X-Force, Pulsedive, and others. The configuration is simple: per Microsoft, you only need to get the TAXII server API root and collection ID, and then enable the Threat Intelligence - TAXII data connector in Microsoft Sentinel.

The second way is to build a playbook that pulls TI indicators from a TI provider and pushes them into Sentinel using the Graph Security API. There is a great playbook for pulling TI indicators from AlienVault: Azure-Sentinel/Playbooks/Get-AlienVault_OTX at master · Azure/Azure-Sentinel (github.com). Such playbooks require minor configuration and can be deployed from GitHub.

The third way to add TI indicators is flat-file import. This feature is currently in private preview and will be available soon to the public. Sentinel administrators will be able to import indicators from a CSV or JSON file.

And the last way is manual creation. This is a good option only if you have a few indicators to add and have no time to write scripts and build automation.

One more important thing to mention is the fact that the Graph Security API serves threat intelligence by TenantID and AppID (the ID of the application that uploads the TI through the Graph Security API and was registered in Azure AD). If the TI indicators were uploaded using one application (AppID) and then queried with another application (AppID), the data will not be returned. For example, if you use the playbook mentioned above, you register an application in your Azure AD to ingest indicators. You will then not be able to query those indicators with another application, for example with Microsoft Graph Explorer; you must use the same application to get the list of indicators you uploaded.
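For illustration, pushing a single indicator through this beta Graph Security API looks roughly like the sketch below; the token acquisition is omitted, the indicator values are placeholders, and — per the AppID caveat above — the token must come from the same app registration you will later query or delete with:

```powershell
# Assumes $accessToken belongs to the SAME app registration used for later queries/deletes
$indicator = @{
    action             = "alert"
    description        = "Example indicator pushed from a script"
    expirationDateTime = (Get-Date).AddDays(30).ToUniversalTime().ToString("o")
    networkIPv4        = "203.0.113.10"    # placeholder IP from the documentation range
    targetProduct      = "Azure Sentinel"  # note: 'Azure Sentinel', as described below
    threatType         = "WatchList"
    tlpLevel           = "green"
} | ConvertTo-Json

Invoke-RestMethod -Method Post `
    -Uri "https://graph.microsoft.com/beta/security/tiIndicators" `
    -Headers @{ Authorization = "Bearer $accessToken"; "Content-Type" = "application/json" } `
    -Body $indicator
```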
2. TI indicators storing

Per Microsoft, when using the tiIndicators entity you must specify the Microsoft security solution that will use the indicators via the targetProduct property, and define the action (allow, block, or alert) the security solution should apply to them via the action property. In the playbook for pulling indicators from GitHub, we have the parameter targetProduct, which should be "Azure Sentinel" — yes, Azure and not Microsoft Sentinel. By setting this parameter, we configure the playbook to ingest the indicators into the Sentinel Log Analytics workspace so that we can process them later. In Logs under Microsoft Sentinel, a new table is created: ThreatIntelligenceIndicator. This is our final diagram for Microsoft Sentinel.

TI indicators are not stored only in the Sentinel Log Analytics workspace. They are also stored in the Microsoft backend, with a retention period of one year or until deleted via the API. For the Log Analytics workspace, the retention period is usually configured by the customer, and data stays there until deleted.

3. TI indicators pulling

As with ingesting indicators, there are a few ways to pull them from the Microsoft backend and from the Log Analytics workspace. As mentioned previously, to pull indicators from the Microsoft Graph backend you should use the Microsoft Graph Security API with the same application and tenant ID; otherwise you will get nothing. You should also pay attention to the expiration date of the ingested indicators: if you try to get a specific indicator and get nothing, it has probably expired and been removed from the Graph backend. Use Graph Explorer | Try Microsoft Graph APIs - Microsoft Graph to test the API.

Pulling TI indicators from the Sentinel Log Analytics workspace is simpler: open the Sentinel workspace and get them by running a KQL query, for example this one, which shows the first 100 indicators by IndicatorId (don't forget to set a date under "Time range"):

```kql
ThreatIntelligenceIndicator
| project TimeGenerated, Description, IndicatorId
| top 100 by IndicatorId
```

You can also use built-in queries to protect your environment, or build your own queries based on your company's requirements. The last way to see the TI indicators ingested into your Sentinel is to open the Threat Intelligence page in Microsoft Sentinel. This page provides details for each indicator, allows you to remove them (only 100 at a time) and lets you edit their details.

Summary

Microsoft's security ecosystem has a huge number of capabilities that help organizations protect their environments from modern security threats, and TI indicators are only one piece of the puzzle called threat intelligence. It is important to understand how this feature works to get the best results from it. I hope the information provided in this article is helpful to the community and makes it easier to understand how Microsoft TI works. If you have any questions or suggestions about the text, I will be glad to hear them. There is an amazing Threat Intelligence webinar published by the Microsoft team: Cyber Threat Intelligence Demystified in Microsoft Sentinel - YouTube. You can also find the article on LinkedIn: Threat Intelligence Indicators in Microsoft Sentinel | LinkedIn
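To close the loop on the purging scenario that opened this article, the list-then-delete pattern can be sketched as below; the same caveat applies (the token must come from the app registration that created the indicators), and paging via @odata.nextLink is omitted for brevity:

```powershell
$headers = @{ Authorization = "Bearer $accessToken" }

# List the indicators visible to this app registration (beta API; paging omitted)
$response = Invoke-RestMethod -Method Get `
    -Uri "https://graph.microsoft.com/beta/security/tiIndicators" `
    -Headers $headers

# Delete each returned indicator by its id
foreach ($indicator in $response.value) {
    Invoke-RestMethod -Method Delete `
        -Uri "https://graph.microsoft.com/beta/security/tiIndicators/$($indicator.id)" `
        -Headers $headers
}
```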
Azure-Sentinel/Playbooks/Get-GeoFromIpAndTagIncident

Hi, I am really scratching my head with this one. I want to use the Get-GeoFromIpAndTagIncident playbook, which is available on GitHub from the Community page in Sentinel. I've set up the playbook, but when I run it I get a failure with the message 'SSL unavailable for this endpoint, order a key at https://members.ip.api.com/'. I'm positive there's a way to circumvent this, but I am drawing a blank as to where.
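For context — an inference from the error text rather than from the playbook's documentation — the ip-api.com free tier only answers over plain HTTP, and HTTPS requires a paid key, so this failure usually means the Logic App's HTTP action is calling the https:// endpoint. A quick way to verify the service itself from PowerShell (8.8.8.8 is just a well-known test address):

```powershell
# The free ip-api.com tier is HTTP-only; calling https:// yields the 'SSL unavailable' error
Invoke-RestMethod -Uri "http://ip-api.com/json/8.8.8.8"
```

If plain HTTP is acceptable, changing the URI scheme in the playbook's HTTP action is the usual workaround; otherwise a key from the members portal is needed.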
Playbook Triggering

Hi everyone! I'm working with playbooks and we want a copy of every incident created in Sentinel sent to a centralised location. I originally implemented this for every alert that fired, but that does not cover every incident; the incidents created from other Sentinel components (MCAS, O365, PIM, etc.) do not have the ability to launch a playbook like a regular analytics rule. I recently saw the following option for a playbook trigger (screenshot not included). However, I cannot figure out how to reference it within any analytics rule or anywhere within Sentinel! It does not seem to fire on its own when incidents are created. How is this supposed to work?