Need to create monitoring queries to track the health status of data connectors
I'm working with Microsoft Sentinel and need to create monitoring queries to track the health status of data connectors. Specifically, I want to:

- Identify unhealthy or disconnected data connectors
- Determine when a data connector last lost connection
- Get historical connection status information

What I'm looking for:

- A KQL query that can be run in the Sentinel workspace to check connector status, OR a PowerShell script/command that can retrieve this information
- Ideally, something that can be automated for regular monitoring

I've been looking at the SentinelHealth table, but I'm unsure about the exact schema, which connectors it covers, etc. I've also been checking whether there are specific tables that track connector status changes, and whether Azure Resource Graph or the management APIs could help. I've tried multiple approaches (KQL, PowerShell, Resource Graph), but I somehow cannot get the information I'm after. I've seen this Microsoft docs page, https://learn.microsoft.com/en-us/azure/sentinel/monitor-data-connector-health#supported-data-connectors, but I would like my query to surface data such as:

- When did each table last ingest data?
- How much data has been ingested by specific tables and connectors?
- Which connectors are currently connected?
- What is the health of my connectors?

Please help!
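A starting point might be the two queries below. This is a minimal sketch, assuming the Sentinel health feature is enabled for the workspace (it only covers the connectors listed on the docs page above) and using the Usage table for per-table ingestion; verify the column names against your own workspace schema before automating anything.

```kusto
// Latest health status per data connector; anything not "Success" is suspect.
// Requires the Sentinel health feature to be enabled for the workspace.
SentinelHealth
| where TimeGenerated > ago(7d)
| where SentinelResourceType == "Data connector"
| summarize arg_max(TimeGenerated, Status, Description) by SentinelResourceName
| where Status != "Success"

// Last ingestion time and 24-hour volume (MB) per table, oldest first.
Usage
| where TimeGenerated > ago(24h)
| summarize LastIngestion = max(TimeGenerated), VolumeInMB = sum(Quantity) by DataType
| order by LastIngestion asc
```

Either query can be dropped into a scheduled analytics rule, or run on a timer from a Logic App or PowerShell (Invoke-AzOperationalInsightsQuery) for regular monitoring.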
Backing up Sentinel and the Security subscription

A lot of people ask how Security Operations can effectively back up all of the Sentinel-related objects. One option is to use GitHub or Azure DevOps pipelines to take a daily backup. I've been doing this for a very long time, and this seems like a good forum to share the code. The trick has been to use PowerShell to derive the current API versions for Azure objects. Once you do that, you can recursively download the whole subscription to a repo, and scripts can then generate reports using Markdown and YAML. I've been backing up my subscription reliably since 2021. The default project creates reports for all the Sentinel-related elements. Markdown lets the object reports be drilled into, and KQL is presented as YAML for readability. It's actually easy to redeploy all the backed-up JSON files through REST if needed, but for most of us, having readable KQL and a Git history of changes is probably all we need. The project is written entirely in PowerShell with no compiled modules, and anyone is freely welcome to it. I've written more about it here: https://www.laurierhodes.info/node/168 and the source code and install documentation can be found here: https://github.com/LaurieRhodes/PUBLIC-Subscription-Backup I hope this is of use to the community!

Best Regards, Laurie
Some accounts missing Azure AD Object ID

Hi all, There is something that has been annoying me for a while, and I felt it's finally time to post about it. We have a hybrid AD/AAD setup with a user sync that has been up and running for years; that particular feature is not my area, but from what I've heard the sync is working fine. My trouble is that Sentinel seems unable to resolve the AAD Object ID of some users. For example, if I use the Entity Behavior feature to look up one user, the entity page shows "-" as the Azure AD Object ID. Alerts and incidents are shown for the user, so Sentinel is at least able to tie the user to incidents. If I select another user, I might get the full AAD Object ID. This is driving me crazy, because I have a few playbooks that need the AAD ID, and they don't work as it is now. Could anyone shed some light on the process behind the correlation between a user and the AAD ID?

Regards, Fredrik
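One way to see how widespread the gap is might be to query the UEBA IdentityInfo table, which is where the entity pages pull account details from. A rough sketch, assuming UEBA is enabled in the workspace and that the column names match the documented IdentityInfo schema (check AccountSID in particular against your workspace):

```kusto
// Accounts whose latest IdentityInfo record carries no Azure AD object ID.
IdentityInfo
| summarize arg_max(TimeGenerated, *) by AccountUPN
| where isempty(AccountObjectId)
| project AccountUPN, AccountName, AccountSID, LastSeen = TimeGenerated
```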
Reached the maximum limit of Analytics Rules of 512 in Sentinel

Hello all, We have 539 total analytics rules in Sentinel: 478 enabled and 61 disabled. Today we noticed that we can't add new scheduled rules in the Analytics section of Sentinel. When we checked the Sentinel workspace's Activity logs, we saw this error message: "The maximum number of Scheduled analytics rules (512) has already been reached for workspace xxxxxx". Microsoft Sentinel does indeed have a service limit of 512 analytics rules per workspace, per this article: https://docs.microsoft.com/en-us/azure/sentinel/sentinel-service-limits We need to add more rules to ensure our Sentinel coverage is benchmarked against the MITRE ATT&CK framework. According to https://attack.mitre.org/techniques/enterprise/, there are 191 techniques and 385 sub-techniques in the latest ATT&CK framework, a total of 576. How are we supposed to have good analytics coverage with a limit of 512? That's without even considering new ransomware rules, threat intel rules, and general zero-day rules (e.g. Log4j). We have a single workspace where all data connectors land (other Microsoft solutions, Defender products, as well as on-premise Syslog servers). If we consider splitting our rules between two or three workspaces to cover all the MITRE ATT&CK techniques and sub-techniques (plus custom rules for our own environment), then we would need to duplicate the data across those additional workspaces and work with incidents across all of them (per this article: https://docs.microsoft.com/en-us/azure/sentinel/multiple-workspace-view), which means paying for duplicated workspace storage. This can't be a realistic solution that Microsoft expects us to adopt! Has anyone faced this challenge and hit the maximum analytics rule limit of 512? Any advice on how we might overcome it? Where do we go from here? I'm surprised this topic hasn't been discussed more widely by companies with mature Sentinel-based SOCs that have considered fully benchmarking their rules against the MITRE ATT&CK framework. Any help will be highly appreciated; thanks in advance for any comments.
Fetching alerts from Sentinel using logic apps

Hello everyone, I have a requirement to archive alerts from Sentinel. To do that, I need to retrieve the alerts from Sentinel and then send the data to an external file share. As a solution, I decided to use Logic Apps, where I will run a script to automate this process. My questions are:

-> Which Sentinel API endpoints are relevant for retrieving alerts, or for running KQL queries to get the needed data?
-> I know I will need some sort of permissions to interact with the API. What type of service account should I create in Azure, and what permissions should I provision for it?
-> Are there any existing examples of Logic Apps interacting with Microsoft Sentinel? That would be helpful, as I am new to Azure.

Any help is much appreciated!
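For the query route, one common pattern is to give the Logic App a managed identity, grant it the Log Analytics Reader (or Microsoft Sentinel Reader) role on the workspace, and use the Azure Monitor Logs connector's "Run query and list results" action against the SecurityAlert table. A sketch of the kind of query it would run (verify the columns in your workspace):

```kusto
// Alerts from the last 24 hours, shaped for export to a file share.
SecurityAlert
| where TimeGenerated > ago(1d)
| project TimeGenerated, AlertName, AlertSeverity, ProviderName, Description, Entities, AlertLink
```

The same query can also be run through the Log Analytics Query REST API if you would rather call an endpoint directly instead of using the connector.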
Can we deploy Bicep through Sentinel repo

Hi there, I'm new here, but here goes. The problem statement: deploying and managing Sentinel infrastructure through a Git repository. I had looked into the Sentinel Repositories feature, which is still in preview, with the added limitations of not being able to deploy watchlists or custom Log Analytics functions (custom parsers). There is also a limitation of deploying only ARM content. My guess would be that the product folks at MSFT are working on this. My hypothesized options (I just started the R&D as of writing this) would be:

1. Fully go above and beyond with Bicep: create Bicep deployment files for both the rules and their dependencies, like LAW functions, watchlists, and the whole nine yards. Need to write pipelines for the deployment; the CI/CD would also need extra work to implement.
2. Hit that sweet spot: deploy the currently supported resources using the Sentinel repo and write a pipeline to deploy the watchlists using Bicep. But I'm not sure this will be relevant for client solutions, when the whole point is that we update now so we don't have to later.
3. Go back to the dark ages: stick to the currently supported Sentinel content through ARM and the repo, and deploy the watchlists and dependencies using the GUI.

I will confirm the first two methods soon, but it may take some time. As you know, I may or may not be new to Sentinel... or DevOps. But I wanted to kick off the conversation, to see how close to being utterly wrong I am.

Thanks, mal_sec
Ingestion of AWS CloudWatch data to Microsoft Sentinel using S3 connector

Hello guys, I hope you are all doing well. I already posted this as a question, but I wanted to start a discussion, since perhaps some of you have had better experience. I want to send CloudWatch logs to an S3 bucket using a Lambda function and then ingest those logs into Microsoft Sentinel. Per the Microsoft documentation:

https://learn.microsoft.com/en-us/azure/sentinel/cloudwatch-lambda-function
https://learn.microsoft.com/en-us/azure/sentinel/connect-aws?tabs=s3

there is a way to do this, BUT the first link is from last year, and when I try to ingest logs the way it describes, there is always an error: "Unable to import module 'lambda_function': No module named 'pandas'". Also, as I understand it, the Lambda Python script only exports logs for the specific time range you set; I want the logs exported every day, every few minutes, and synchronized into Microsoft Sentinel. (The Lambda function .py script was run on Python 3.9 as mentioned in the Microsoft documentation, and all of the resources used were from the GitHub solution mentioned there.) When I ran the provided automation script, it created the S3 bucket, IAM role and SQS queue in AWS, which is fine, but even then the connector stays grey without any changes. I even tried changing the IAM role in AWS by adding Lambda permissions and using it for Lambda queries I found on the internet, and created a CloudWatch EventBridge rule for it; although I can see some .gz data ingested into the S3 bucket, no data is sent to Microsoft Sentinel. So, is there anyone here who can describe the full process needed to ingest logs from CloudWatch to Sentinel successfully? Maybe some people have experience with this process: what are the things I need to take care of, log ingestion volume (to be cost effective), etc.? I want to mention that I am performing this in my testing environment. Since the PowerShell automation script can create the necessary AWS resources automatically, here is what I tried in the test environment:

1. Downloaded the AWS CLI, ran aws configure, and provided the necessary keys with the default location for my resources.
2. Ran the automation script from PowerShell as the documentation describes and filled out all the required fields.
2.1 The automation script created:
2.1.1 An S3 bucket with an access policy: allow the IAM role to read the S3 bucket and s3:GetObject from it; allow CloudWatch to upload objects to the bucket with s3:PutObject; allow the CloudWatch ACL check against the bucket.
2.1.2 A notification event on the S3 bucket to send all logs from the bucket to SQS for objects with the .gz suffix (I later edited this manually and added all event types to make sure events are sent).
2.1.3 An SQS queue with an access policy allowing the S3 bucket to SendMessage to the SQS service.
2.1.4 An IAM user with the Sentinel workspace ID and Sentinel role ID.

Since this was deployed via the automation script, the Lambda function still has to be configured in order to send logs from CloudWatch. The script itself does not create these resources, so I created them manually:

1.1 Added IAM role assignments for the permission policies: S3 Full Access, AWS Lambda Execute, CloudWatchFullAccess, CloudWatchLogsFullAccess (later I also added CloudWatchFullAccessV2 and S3ObjectLambdaExecutionRolePolicy to try them out).
1.2 Added lambda.amazonaws.com to the trust relationship policy so I can use this role for Lambda execution.
2. Created a CloudWatch log group and log stream, with a log group per subscription filter for the Lambda function.
3. Created the Lambda function per the Microsoft documentation; I tried the newest article, https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/enhance-the-ingestion-of-aws-cloudwatch-logs-into-microsoft/ba-p/4100565 (chose the Lambda Python 3.12 runtime and used the existing role created above). I took CloudWatchLambdaFunction_V2.py, and there is an issue with the pandas module. I managed to work around it using this document: https://medium.com/@shandilya90apoorva/aws-cloud-pipeline-step-by-step-guide-241aaf059918 but even then I get this error:

Response { "errorMessage": "Unable to import module 'lambda_function': Error importing numpy: you should not try to import numpy from its source directory; please exit the numpy source tree, and relaunch your python interpreter from there.", "errorType": "Runtime.ImportModuleError", "requestId": "", "stackTrace": [] }

Anyway, this is what I tried, and I eventually arrive at the same error with the Lambda function provided by Microsoft.
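As an aside, the pandas/numpy import errors typically come from packaging the libraries by hand; attaching the AWS-managed AWSSDKPandas Lambda layer for the matching Python runtime is the usual fix rather than bundling pandas yourself. And once the connector does turn green, a quick way to verify end-to-end ingestion is a KQL check against the destination table (AWSCloudWatch is the table the S3 connector's CloudWatch data type lands in, per the connector docs):

```kusto
// Hourly event counts from the AWS S3 connector's CloudWatch table,
// to confirm data is actually arriving in the workspace.
AWSCloudWatch
| where TimeGenerated > ago(1d)
| summarize Events = count() by bin(TimeGenerated, 1h)
| order by TimeGenerated desc
```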
Threat Monitoring for GitHub Connector broken - 403 error

Hello, I can deploy the connector and all the other components successfully, but when I enter the org name and the API key, I get a 403 error. The permission in GitHub is the one requested, and I even added 80+ Azure IPs to our allowlist. I still get the same error. I appreciate any help.
Sending IIS logs to Sentinel

Hi everyone, We have multiple on-prem Windows application servers that need to forward IIS logs to Sentinel. Can we go with WEF and install the AMA on the WEF collector to send IIS logs to Sentinel, or do I need to onboard each Windows server to Azure through Azure Arc for the AMA installation? Any suggestions would be highly appreciated. Thanks
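Whichever collection path ends up being used, a quick sanity check that the agents are reporting and IIS records are landing might look like the sketch below (W3CIISLog is the table the AMA's IIS log collection writes to; column names assumed from the standard schema):

```kusto
// Agents that have reported in the last hour, with their agent type.
Heartbeat
| where TimeGenerated > ago(1h)
| summarize LastHeartbeat = max(TimeGenerated) by Computer, Category

// Volume of IIS log records per server over the last day.
W3CIISLog
| where TimeGenerated > ago(1d)
| summarize Records = count() by Computer
```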
Constant Noninteractive sign in attempts from Microsoft IPs

In the noninteractive sign-in logs, we're seeing a bunch of attempts to sign in to our admin accounts rejected with error codes 500131 and 500133, coming from 4.231.207.170 and 2a01:111:f400:fe13::100 (Microsoft datacentre IPs), device type "Windows 10", with the resources being ComplianceAuthServer / Office 365 Exchange Online. What are we seeing here? Is this a misconfiguration on the Microsoft side, or an attack?
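To scope how widespread the pattern is, something along these lines against the noninteractive sign-in table might help (a sketch using the standard Entra ID diagnostic schema; note that ResultType is a string in this table):

```kusto
// Failed noninteractive sign-ins with the two error codes in question,
// grouped to show which accounts, source IPs and resources are involved.
AADNonInteractiveUserSignInLogs
| where TimeGenerated > ago(7d)
| where ResultType in ("500131", "500133")
| summarize Attempts = count(), FirstSeen = min(TimeGenerated), LastSeen = max(TimeGenerated)
    by UserPrincipalName, IPAddress, ResultType, ResourceDisplayName
| order by Attempts desc
```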