User Profile
stianhoydal
Brass Contributor
Joined 6 years ago
Recent Discussions
Linux syslog agent initial setup on RHEL 8 machine
Greetings, I was trying to set up the log forwarder for a Fortinet firewall to ingest into Sentinel, but I can't figure out why the script is failing to do what it normally does. I usually run it on Ubuntu machines with no issues, but this time I had to do it on a Red Hat Enterprise Linux 8 machine. To be more specific, most of the script runs fine until I get this message:

    Job for rsyslog.service failed because the control process exited with error code.
    See "systemctl status rsyslog.service" and "journalctl -xe" for details.

The systemctl status output contains the following:

    ● rsyslog.service - System Logging Service
       Loaded: loaded (/usr/lib/systemd/system/rsyslog.service; enabled; vendor preset: enabled)
       Active: failed (Result: exit-code) since Sat 2022-02-19 18:17:44 CET; 3min 56s ago
         Docs: man:rsyslogd(8)
               https://www.rsyslog.com/doc/
      Process: 92657 ExecStart=/usr/sbin/rsyslogd -n $SYSLOGD_OPTIONS (code=exited, status=1/FAILURE)
     Main PID: 92657 (code=exited, status=1/FAILURE)

    Feb 19 18:17:44 machineName systemd[1]: rsyslog.service: Main process exited, code=exited, status=1/FAILURE
    Feb 19 18:17:44 machineName systemd[1]: rsyslog.service: Failed with result 'exit-code'.
    Feb 19 18:17:44 machineName systemd[1]: Failed to start System Logging Service.
    Feb 19 18:17:44 machineName systemd[1]: rsyslog.service: Service RestartSec=100ms expired, scheduling restart.
    Feb 19 18:17:44 machineName systemd[1]: rsyslog.service: Scheduled restart job, restart counter is at 7.
    Feb 19 18:17:44 machineName systemd[1]: Stopped System Logging Service.
    Feb 19 18:17:44 machineName systemd[1]: rsyslog.service: Start request repeated too quickly.
    Feb 19 18:17:44 machineName systemd[1]: rsyslog.service: Failed with result 'exit-code'.
    Feb 19 18:17:44 machineName systemd[1]: Failed to start System Logging Service.

Does anyone have a good idea why this is not working?
This part of the script is, from what I understand, responsible for the syslog daemon, so it's quite important that it works. Any help is much appreciated.
Re: No option to tune analytics rule with Microsoft 365 Defender connector

The problem with using automation rules (as far as I know) is that the incident would still be created. I work for an MSP and we run a SOC that gets all incidents forwarded to it continuously. I suppose I could try to create an automation rule that closes these incidents and put a check in the mail-forwarding playbook that verifies whether the incident is open or not (unless it does this by default).
Re: No option to tune analytics rule with Microsoft 365 Defender connector

Thijs Lecomte So the best way of solving this particular issue is to turn off the Microsoft 365 Defender connector for now and keep the connectors separated. Since the M365 Defender connector is in preview, I suppose there is hope for this functionality in the future.
No option to tune analytics rule with Microsoft 365 Defender connector (Solved)

Greetings, I have been working with a few different customers and was trying to configure the Defender for O365 alert "Email messages containing malicious URL removed after delivery". However, there is no option to add exclusions and minor tweaks to the analytics rule, as there used to be when not connected via the Microsoft 365 Defender connector. The option to click "Create incidents based on *product name* alerts" does not exist after activating the Microsoft 365 Defender connector. Is there any way to do similar tuning anyway? I do not wish to create informational incidents like the email messages, but I would still like to receive the alert in the background and instead create an incident if 20 or more of the same alert are received.
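For the threshold part of the question, a scheduled analytics rule can approximate the old behaviour by counting alert recurrences. This is a minimal sketch, assuming the alerts land in the SecurityAlert table and that the display name and 20-alert threshold below match your environment:

```kusto
// Fire only when the same informational alert repeats heavily within
// the lookback window, suggesting a campaign rather than a one-off.
SecurityAlert
| where TimeGenerated > ago(1d)
| where AlertName == "Email messages containing malicious URL removed after delivery"
| summarize AlertCount = count() by AlertName
| where AlertCount >= 20
```

Run as a scheduled rule, this stays silent on individual alerts but raises a single incident once the volume crosses the threshold.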
Add comment to incident with IP information (Solved)

Greetings everyone! I am currently trying to set up a playbook that takes the IP from an incident, looks this IP up (IP lookup or other similar services), and adds a comment to the incident with information about who owns the IP. I am doing this because there is extensive use of VPNs in the network, and I want to know whether logins occurring e.g. outside of Europe come from a known entity, such as Microsoft, or from something else. I do not know much about how Logic Apps are configured, so any pointers in the right direction are much appreciated.
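If part of the goal is just to flag sign-ins by country before the playbook runs, some of the enrichment can be sketched in KQL as well. This assumes the geo_info_from_ip_address() function is available in the workspace, and the in-region country list is a made-up example:

```kusto
// Tag each sign-in with the country derived from its IP address, so
// only out-of-region logins are handed to the comment playbook.
SigninLogs
| extend GeoInfo = geo_info_from_ip_address(IPAddress)
| extend Country = tostring(GeoInfo.country)
| where isnotempty(Country) and Country !in ("Norway", "Sweden", "Denmark")  // hypothetical in-region list
| project TimeGenerated, UserPrincipalName, IPAddress, Country
```

Working out who actually owns the IP still needs an external lookup service called from the Logic App; this only narrows down which incidents are worth enriching.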
Re: Workbook bug counting too many incidents?

Clive_Watson Thanks for the response. I had a similar thought when I first encountered this, but the bin does not seem to be the cause, or at least not in the way we think. This is a screenshot of events accumulated in the previous month, with the accompanying query and graph. To get a similar number of events I need to add several extra days to the time range if I remove the bin function: I added two extra days, to make sure the bin doesn't gather extra information, and set the time frame from 00:00, which leaves me with this result. The numbers were obviously not expected to be the same, but it proves the point that it is not just in the extra time that the bin function has found other incidents. Adding the bin function back to the last query gives this result: five extra events this time around. Could it be that the bin function somehow counts extra incidents? How does it treat, for example, an incident that has had its severity changed? I suppose it shouldn't show up, seeing as I use dcount(IncidentNumber), but I do summarize based on severity, and that might be a source of a duplicate. Or incidents that happened one day and were then updated the next? Just throwing out ideas.
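To take the boundary effect out of the comparison, both queries can be pinned to whole days. A sketch against the SecurityIncident table:

```kusto
// Pin the window to whole days so bin(TimeGenerated, 1d) buckets line
// up exactly with the query range and no partial bucket is included.
SecurityIncident
| where TimeGenerated between (startofday(ago(30d)) .. startofday(now()))
| summarize Incidents = dcount(IncidentNumber) by Severity, bin(TimeGenerated, 1d)
```

Note also that dcount(IncidentNumber) is computed per Severity group: an incident whose severity was changed during the period contributes one count to each severity it has held, which is exactly the kind of duplicate the reply speculates about.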
Workbook bug counting too many incidents?

Greetings, I am trying to put together a decent workbook for a customer to use for reporting purposes; however, I have come across what looks like a peculiar bug. When I count the number of incidents for a given time period, it doesn't match what is actually shown in the logs. This picture matches what I can see in the incidents tab and the logs for the time period. However, when I add the remainder of the summarize line to generate a chart, it shows extra counts. Notice the one extra Medium severity, one extra Informational severity and two extra Low severity. Any ideas as to where these four extras come from?
Re: Merge identical values from different variables

Aha, so if I understand this correctly, mv-expand unfolds the previously aggregated Entities into singular entries, making it possible to search across them without having to look through the different possible locations within the Entities category? Thanks for a quick and easy answer!
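That is the idea. A minimal sketch of the pattern, assuming alerts with a JSON Entities column as in the SecurityAlert table (the entity types filtered on are placeholders):

```kusto
// Each alert row carries a JSON array of entities; mv-expand turns
// that array into one row per entity, so a single where clause can
// reach a value regardless of which entity slot it was stored in.
SecurityAlert
| extend EntityList = todynamic(Entities)
| mv-expand Entity = EntityList
| extend EntityType = tostring(Entity.Type)
| where EntityType in ("mailbox", "mailMessage", "account")
| project TimeGenerated, AlertName, EntityType, Entity
```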
Merge identical values from different variables (Solved)

Greetings, I have recently been trying to figure out a decent way to create an alert when a certain number of informational alerts trigger from other Defender products, for example large numbers of emails with malicious URLs removed. This could indicate a phishing campaign that I would like to be notified about. The problem is this: the sender domains are stored in different parts of Entities even though they come from the same sender. Is there a way to merge these into one variable instead of having them separated like this?
Low information alert, Remote code execution attempt

Greetings, I have a customer running Defender for Identity, and this alert keeps showing up in their Azure Sentinel instance. I thought information might have been lost on the way from Defender for Identity -> Cloud App Security -> Sentinel, but the alert is just as inexpressive in the Defender for Identity portal. Is there a way to get more information sent with the alert?
Custom mass download alert (Solved)

Greetings, I have been messing around with Cloud App Security and have noticed its mass download alert. Unfortunately, I seem unable to add exclusions to this alert, so it triggers far too often on totally unimportant SharePoint sites. Therefore I have made my own query to check for mass downloads; however, I can't make the query count how many download operations a user has performed together with which sites they downloaded from. It's either the total number of downloads with no info on which sites they came from, or a per-SharePoint-site count, which is not very useful when some of the folders are very small and will never reach the set threshold. My query looks like this, where I have used the extract function to filter out the uninteresting SharePoint sites that the CAS alerts keep triggering on:

    let uninterestingPNNNNSites = OfficeActivity // Removes sites containing /p-NNNN, N being a number
    | where Operation contains "download"
    | extend pGroups = extract("(p+\\-+\\d{4}\\/$)", 1, Site_Url)
    | where pGroups != ""
    | summarize count() by Site_Url;
    let uninterestingPersonalSites = OfficeActivity // Removes /personal sites
    | where Operation contains "download"
    | extend personalGroups = extract("(\\/+personal+\\/)", 1, Site_Url)
    | where personalGroups != ""
    | summarize count() by Site_Url;
    let uninterestingSiteP = OfficeActivity // Removes the site /p/, an old site that is not going to be used
    | where Operation contains "download"
    | extend pGroups = extract("(/p/)", 1, Site_Url)
    | where pGroups != ""
    | summarize count() by Site_Url;
    OfficeActivity
    | where Operation contains "download"
    | where Site_Url !in (uninterestingPersonalSites)
    | where Site_Url !in (uninterestingPNNNNSites)
    | where Site_Url !in (uninterestingSiteP)
    | summarize count() by Site_Url, UserId, ClientIP // Remove Site_Url for total downloads per user
    | project-rename Number_of_downloadoperations = count_
    | where Number_of_downloadoperations > 300

Preferably I would be able to summarize by only UserId and ClientIP, giving a count of how many downloads they have done in a day, but also attach a list of the sites they have downloaded from, so analysts can act on it without having to run their own manual search.
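The last wish — one row per user with both a total count and the list of sites — can be met by putting make_set() alongside count() in the same summarize. A minimal sketch, assuming the same OfficeActivity filtering as above (the exclusion subqueries are omitted here for brevity):

```kusto
// One row per user/IP with the total download count and the distinct
// set of SharePoint sites involved, so analysts see both at once.
OfficeActivity
| where Operation contains "download"
| summarize Number_of_downloadoperations = count(),
            Sites = make_set(Site_Url) by UserId, ClientIP
| where Number_of_downloadoperations > 300
```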
Re: Monitoring specific list of users, belonging to an AD group

Ciyaresh Ah, well, that is because the query you found in the link was made by the original creator; it is more of a test to see that it works. I would probably do something like this:

    let HighriskUsers = HighRiskUsers_CL
    | distinct UserPrincipalName_s;
    SecurityEvent
    | where TargetAccount in (HighriskUsers)
    | where EventID == 4624

Just make sure the custom log table usernames match the SecurityEvent TargetAccount with regard to upper/lower case. You can use the toupper/tolower functions to make them match if they do not by default. I use the distinct operator to make sure I don't get duplicate values from the custom table.
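The upper/lower-case caveat can be folded straight into the query. A sketch, using the same (customer-specific) HighRiskUsers_CL custom table:

```kusto
// Normalise both sides to lower case so the membership check is not
// defeated by differing capitalisation between the two tables.
let HighriskUsers = HighRiskUsers_CL
    | extend User = tolower(UserPrincipalName_s)
    | distinct User;
SecurityEvent
| where EventID == 4624
| where tolower(TargetAccount) in (HighriskUsers)
```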
Re: Unable to utilize logic apps to feed data in a watchlist

abubakr786 GaryBushey Correct me if I'm wrong, but I believe you will never get any entities or other useful information into your logic app/watchlist unless you run the logic app on an actual alert. Just pressing "Run trigger" will return blanks regardless, since there is no information in the initial trigger "When a response to an Azure Sentinel alert is triggered".
Re: Monitoring specific list of users, belonging to an AD group

Ciyaresh I had a somewhat similar problem where I wanted to create a query alerting on brute-force attempts against users in specific "high risk groups". A user then came up with this solution: https://learnsentinel.blog/2021/07/04/enrich-hunting-with-data-from-ms-graph-and-azure-ad/ This way you can keep an updated table of the high-risk users from your AD, and then join other tables to cross-reference activity regarding changes to group membership.
Some predefined incidents do not have sufficient information

Hello, I have noticed that some of the predefined incidents, the ones from different Defender products, are sometimes missing crucial information. For example, in this alert from Defender for Identity, knowing which computers are affected is nice, but I would like to know what the "1 service" is. This information is not shown in Azure Sentinel, but if I check the alert from the Defender page, the information is available. How do I get that information forwarded to Sentinel correctly?
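When the portal shows more than Sentinel does, the detail is sometimes still present in the raw alert row. A sketch for inspecting it, assuming the alert reaches the SecurityAlert table (the ProviderName value is an assumption — check what actually appears in your data):

```kusto
// ExtendedProperties and Entities often carry details that the
// incident view does not render, such as service or process names.
SecurityAlert
| where ProviderName == "Azure Advanced Threat Protection"  // assumed value for Defender for Identity
| extend Props = parse_json(ExtendedProperties)
| project TimeGenerated, AlertName, Props, Entities
```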
Re: Azure Sentinel triggers incident when it shouldn't

For anyone else who might be wondering: the best way I found to make this work is to fetch the AAD group members into a custom table and update that table as often as you run the analytics rule, since the analytics rule wizard overrides any time references made in a query. If I want the query to run every hour over the latest hour of data, I need to update the custom table every hour or less.
Re: Azure Sentinel triggers incident when it shouldn't

So I figured out a simple workaround, but the query wizard still shows that the alarm would trigger several times although it shouldn't have.

    let excludedUsers = GuestAccountsExcludedFromCAPolicy_CL
    | distinct UserEmail_s;
    SigninLogs
    | where Location !in ( "AL","AD","AM","AT","BY","BE","BA","BG","CH","CY","CZ","DE","DK","EE","ES","FO","FI","FR","GB","GE","GI","GR","HU","HR","IE","IS","IT","LI","LT","LU","LV","MC","MK","MT","NO","NL","PL","PT","RO","RU","SE","SI","SK","SM","TR","UA","VA","SJ","") // List of country codes in Europe
    | where UserPrincipalName !in (excludedUsers)
    | extend AccountCustomEntity = Identity
    | extend IPCustomEntity = IPAddress

The GuestAccountsExcludedFromCAPolicy_CL table is simply filled with users fetched from AAD via Logic Apps. Still, the query wizard shows that it would trigger multiple alarms within the last 48 hours, although there should only be one. It seems to me as if the query is just ignoring the line "| where UserPrincipalName !in (excludedUsers)", because the result would be correct otherwise, and the whole point is to not get alerted when one of the excluded members tries to log on. Does anyone have ideas on why this is happening, or potential solutions?
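One common reason a !in filter appears to be ignored is that in/!in are case-sensitive, so a single capitalised letter on either side defeats the match. A sketch of a normalised version, using the same custom table:

```kusto
// Lower-case both sides before comparing; "User@contoso.com" and
// "user@contoso.com" are different values to the !in operator.
let excludedUsers = GuestAccountsExcludedFromCAPolicy_CL
    | extend User = tolower(UserEmail_s)
    | distinct User;
SigninLogs
| where tolower(UserPrincipalName) !in (excludedUsers)
```

Alternatively, the case-insensitive variant !in~ avoids the explicit tolower() calls.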