Microsoft Graph Security API - Issue with https://graph.microsoft.com/beta/security/tiIndicators
Hi all, I am trying to use the Microsoft Graph threat indicators API, following the Azure Sentinel recommended way of integrating threat intelligence sources for IOC ingestion into a Sentinel instance. I perform the following steps with curl on Linux to test the functionality.

Get the OAuth token from Microsoft:

curl -X POST -d 'grant_type=client_credentials&client_id=[myClientId]&client_secret=[myAppSecret]&scope=openid profile ThreatIndicators.ReadWrite.OwnedBy' https://login.microsoftonline.com/[myTenantId]/oauth2/token

Using the received bearer token, call the following API:

curl -X GET -H "Authorization: Bearer [access token]" https://graph.microsoft.com/beta/security/tiIndicators

I receive the error below:

{
  "error": {
    "code": "InvalidAuthenticationToken",
    "message": "Access token validation failure. Invalid audience.",
    "innerError": {
      "request-id": "########################",
      "date": "2019-12-19T07:41:51"
    }
  }
}

Does anybody have an idea how to use this? The main motive is to use a Graph API POST query to insert threat indicators into Azure Sentinel.
Sentinel Reports

We are switching over from an on-prem SIEM solution and moving to Sentinel. One very nice feature of our previous SIEM was that we could generate PDF reports for board meetings very easily. I know you can make wonderful graphs and dashboards within Sentinel using playbooks and such, but the only way I have found to get data out of Sentinel is in an Excel spreadsheet. Has anyone found a way to generate polished reports from Sentinel?
Combine 2 columns into a single column in KQL

Hi, I have data in the sign-in logs with a user name and a location. I want to combine the user name and location columns into a third column. How can I do this in KQL? I have data like the following:

User Name   Location
User-1      IN
User-2      US
User-3      GB
User-4      MX

I want it like this:

User Details
User-1 - IN
User-2 - US
User-3 - GB
User-4 - MX
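A minimal sketch of one way to do this with strcat(); the column names below are assumptions, so substitute the actual ones in your table (for example UserPrincipalName and Location in SigninLogs):

```kusto
// Build a combined "User Details" column from two existing columns.
// UserPrincipalName and Location are assumed names; adjust as needed.
SigninLogs
| extend ['User Details'] = strcat(UserPrincipalName, " - ", Location)
| project ['User Details']
```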
Pricing Calculator for Microsoft Sentinel

Hi everyone, I am using the pricing calculator for Microsoft Sentinel. I can see the pricing split into two parts: Azure Monitor and Microsoft Sentinel. In my understanding, Microsoft Sentinel processes the logs stored in the Log Analytics workspace, and the cost is based on the log volume in that workspace, so it should not relate to the Azure Monitor part. Does the pricing calculator charge the Azure Monitor part because Azure Monitor and Microsoft Sentinel share the same Log Analytics workspace? Basically, I am not using Azure Monitor. Is there any method to reduce the cost of the Azure Monitor part?
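Whatever the calculator shows, both charges are driven by ingested volume, so a common first step is to see which tables are driving ingestion. A hedged sketch using the standard Usage table in Log Analytics (Quantity is reported in MB):

```kusto
// Billable ingestion volume per table over the last 30 days.
// Usage, Quantity (MB), IsBillable, and DataType are standard
// Log Analytics Usage-table fields.
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize IngestedMB = sum(Quantity) by DataType
| order by IngestedMB desc
```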
Integration of Microsoft Sentinel & Microsoft Teams for alerts

What are some of the best methods and strategies for implementing an integration between Sentinel and Teams, so that when certain incidents or alerts occur, those alerts can be pushed to specific members on Microsoft Teams, for example through playbooks, automation rules, and an API connection between the two?
Reached the maximum limit of 512 analytics rules in Sentinel

Hello all, We have 539 total analytics rules in Sentinel: 478 enabled and 61 disabled. Today we noticed that we can't add new scheduled rules in the Analytics section of Sentinel. When we checked the Sentinel workspace's activity logs, we saw this error message: "The maximum number of Scheduled analytics rules (512) has already been reached for workspace xxxxxx". It looks like Microsoft Sentinel does indeed have a service limit of 512 analytics rules per workspace, as per this article: https://docs.microsoft.com/en-us/azure/sentinel/sentinel-service-limits

We need to add more rules to ensure that our Sentinel coverage is benchmarked against the MITRE ATT&CK framework. According to https://attack.mitre.org/techniques/enterprise/, there are 191 techniques and 385 sub-techniques in the latest ATT&CK framework, a total of 576. How are we supposed to have good analytics coverage with a limit of 512? That's without even considering new ransomware rules, threat intel rules, and general zero-day rules, e.g. Log4j.

We have a single workspace into which all data connectors feed (other Microsoft solutions, Defender products, as well as on-premises syslog servers). If we consider splitting our rules between two or three workspaces to cover all the MITRE ATT&CK techniques and sub-techniques (plus custom rules for our own environment), then we would need to duplicate the data across those additional workspaces. We could split the rules across multiple workspaces and work with incidents across all of them (per https://docs.microsoft.com/en-us/azure/sentinel/multiple-workspace-view), but that means paying for duplicated workspace storage. This can't be a realistic solution that Microsoft expects us to adopt!

Has anyone faced this challenge and hit the maximum analytics rule limit of 512? Any advice on how we might overcome it? Where do we go from here? I am surprised that this topic has not been discussed more widely by companies with mature SOCs built on Sentinel that have considered fully benchmarking their rules against the MITRE ATT&CK framework. Any help will be highly appreciated; thanks in advance for any comments.
Sentinel KQL query to retrieve last sign-in date

Can someone take a look at my queries and see if they can find any errors, please? My original query below outputs all accounts disabled during the previous month, including the admin who took the action, the disabled user along with their information, and the time the account was disabled.

//All user accounts that were disabled the previous month
let lastmonth = getmonth(datetime(now)) - 1;
let year = getyear(datetime(now));
let monthEnd = endofmonth(datetime(now), -1);
SecurityEvent
| where TimeGenerated >= make_datetime(year, lastmonth, 01) and TimeGenerated <= monthEnd
| extend Disabled_EST = datetime_utc_to_local(TimeGenerated, "US/Eastern")
| where EventID == "4725"
| where AccountType == "User"
| join IdentityInfo on $left.TargetSid == $right.AccountSID
| summarize by TimeGenerated, Disabled_EST, Account, Activity, MemberName, TargetAccount, Computer, AccountDisplayName, GivenName, Surname
| order by Disabled_EST asc

Now the auditors also want to see when each disabled account last signed in, so I need to add another column to the above. However, I could not find any values in the IdentityInfo, SecurityEvent, and SigninLogs tables that could be used to join them. So I started from scratch and used the split() function to create a field I could use as a key for the join operation. The query below works as expected, although I believe the built-in datetime_utc_to_local() function no longer works, as it appears I'm still getting UTC time.

SigninLogs
| extend LastLoginTimeEST = datetime_utc_to_local(TimeGenerated, "US/Eastern")
| extend NetAccount_ = tostring(split(AlternateSignInName, "@")[0])
| project-away AlternateSignInName
| summarize max(LastLoginTimeEST) by NetAccount_, OperationName, AuthenticationRequirement

The original query was then modified to include the above. Now I'm able to get the required columns and the user's last sign-in as output. However, as mentioned earlier, the results appear to be incorrect, as the date/time in this query does not match the output of the standalone SigninLogs query above.

//Working but incorrect results shown in LastLogin_EST column
let lastmonth = getmonth(datetime(now)) - 1;
let year = getyear(datetime(now));
let monthEnd = endofmonth(datetime(now), -1);
let SecurityEvents = SecurityEvent
| where TimeGenerated >= make_datetime(year, lastmonth, 01) and TimeGenerated <= monthEnd
//| extend EST_Disabled = datetime_utc_to_local(TimeGenerated, "US/Eastern")
| where EventID == "4725"
| where AccountType == "User"
| join kind=leftouter (IdentityInfo | project AccountName, AccountDisplayName, GivenName, Surname) on $left.TargetUserName == $right.AccountName;
let LastSigninLogs = SigninLogs
//| extend LastLogin_EST = datetime_utc_to_local(TimeGenerated, "US/Eastern")
| extend IdName = split(AlternateSignInName, "@", 0)
| extend NetAccount_ = tostring(IdName[0])
| project-away IdName
| summarize LastLogin_EST = max(TimeGenerated) by NetAccount_, OperationName, AuthenticationRequirement;
SecurityEvents
//| where Surname == "xyz" //use a last name to reduce output for verification
| join kind=leftouter LastSigninLogs on $left.TargetUserName == $right.NetAccount_
| summarize max(TimeGenerated) by TimeGenerated, Account, Activity, TargetAccount, Computer, AccountDisplayName, GivenName, Surname, LastLogin_EST, OperationName, AuthenticationRequirement
//| extend DisabledEST = datetime_utc_to_local(max_TimeGenerated, "US/Eastern")

Can someone take a look and help me find the bug, or is this actually correct?
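As a point of comparison, a minimal hedged sketch of where datetime_utc_to_local() takes effect relative to summarize: the conversion has to be applied to the value that is ultimately output (either before the summarize or to the aggregated result), otherwise the aggregate stays in UTC. UserPrincipalName here is just an illustrative grouping column:

```kusto
SigninLogs
// Aggregate the raw UTC timestamp first...
| summarize LastLogin = max(TimeGenerated) by UserPrincipalName
// ...then convert the aggregated value to Eastern time.
| extend LastLogin_EST = datetime_utc_to_local(LastLogin, "US/Eastern")
```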
How to prevent duplicate incidents from being generated due to a long data look-back

Hey everyone, We are facing an issue with our rules in Sentinel: when we create a rule whose query looks back over a long period, say the last 14 days, the rule triggers again every time the query runs and sees the same event within those 14 days, creating the same incident (with a different ID). For example, event X happens today; the query detects it and the rule generates an incident, which we analyse and close. If the query runs every 2 hours, then on the next run, since the rule looks back over the past 14 days, it sees event X again and creates another incident with the same attributes, only with a different incident ID. Alert grouping does not help here, since it doesn't work on closed alerts. Since we need the rule to look back 14 days, is there any way to prevent the creation of duplicate incidents for the same events on each query run? Thank you so much in advance for your kind help.
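One common pattern (not from this thread, just a hedged sketch) is to keep the 14-day look-back in the detection logic while excluding events that already produced an alert, for example with a leftanti join against the SecurityAlert table. The rule name, the EventKey construction, and the assumption that the key can be recovered from ExtendedProperties are all placeholders for illustration:

```kusto
// Hypothetical 14-day detection that suppresses events already
// covered by an earlier alert from the same rule.
let lookback = 14d;
let alerted =
    SecurityAlert
    | where TimeGenerated > ago(lookback)
    | where AlertName == "My long-lookback rule"   // placeholder rule name
    | extend EventKey = tostring(parse_json(ExtendedProperties)["EventKey"]);
SecurityEvent
| where TimeGenerated > ago(lookback)
// Placeholder key; use whatever uniquely identifies the event.
| extend EventKey = strcat(Computer, "|", EventID)
| join kind=leftanti alerted on EventKey
```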
More than 10 failed logins per user and device

Hello, I have been working with a query that is very useful, but I want it to show me the username of the person as well as the device used. I am using a pre-built query I found to detect more than 10 failed logons. I also want to be able to search for a specific person in our company by name. Thanks. Here is the query I have been using:

// Sample query to detect if there are more than 10 failed logon authentications on high-value assets.
// Update DeviceName to reflect your high-value assets.
// For questions @MiladMSFT on Twitter
DeviceLogonEvents
| where ActionType == "LogonFailed"
| summarize LogonFailures = count() by DeviceName, LogonType, InitiatingProcessCommandLine, AccountName, InitiatingProcessAccountUpn
| where LogonFailures > 10
| project LogonFailures, DeviceName, LogonType, InitiatingProcessCommandLine, AccountName, InitiatingProcessAccountUpn
| sort by LogonFailures desc
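The query above already groups by AccountName and InitiatingProcessAccountUpn, so the username and device appear in its output. To restrict it to one specific person, a where clause can be added before the summarize; a minimal sketch, where "jdoe" is a placeholder account name:

```kusto
DeviceLogonEvents
| where ActionType == "LogonFailed"
// Placeholder name; match on the account name or the UPN as needed.
| where AccountName =~ "jdoe" or InitiatingProcessAccountUpn has "jdoe"
| summarize LogonFailures = count() by DeviceName, LogonType, AccountName, InitiatingProcessAccountUpn
| where LogonFailures > 10
| sort by LogonFailures desc
```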
How to sync automation rules from GitHub to Sentinel

Hi, For analytics rules synced from GitHub to Sentinel, we can simply export the rules and import them into GitHub. However, I am not able to export automation rules to a JSON file and could not find a guide for syncing them. Could you provide some guidance on this? Thanks