Firewall Rules programming with Defender XDR
We have our devices onboarded to Defender for Endpoint and want to manage Firewall Policy and Firewall Rules Policy through Defender onboarding alone. We know that we can onboard devices to Intune and use Intune MDM to program rules, but we don't want a full-blown MDM setup or license just for firewall programming. Is there a deployment scenario where we can do firewall programming using only Defender-onboarded machines? Any help is really appreciated.
Ingesting .CSV log files from Azure Blob Storage into Microsoft Sentinel

Overview:
Organizations generate vast amounts of log data from various applications, services, and systems. These logs are often stored in .CSV (comma-separated values) format in Azure Blob Storage, a scalable cloud-based storage service. To enhance security monitoring, compliance, and threat detection, it is important to bring this log data into a centralized security tool like Microsoft Sentinel. The main goal is to automatically collect and analyze .CSV log files stored in Azure Blob Storage using Sentinel's advanced analytics and automation capabilities. This enables better visibility into security events and helps in proactive threat management.

Benefits:
Flexible log ingestion via Logic App: Allows ingestion of logs from systems without built-in Sentinel connectors, including custom, third-party, or legacy systems.
Uses existing storage workflows: Reuses Azure Blob Storage where logs are already being saved, with no need to change current export methods.
Structured and clean data format: .CSV files offer a structured format that makes mapping and parsing data into Sentinel efficient and reliable.
Enables custom analysis: Once in Sentinel, the data can be queried using Kusto Query Language (KQL) for in-depth analysis and reporting.
Operational efficiency: Reduces manual effort in collecting, uploading, or processing logs, and saves time for IT and security teams by automating the data pipeline.
Improves threat visibility: Ingested data is available in near real time, and dashboards and visualizations make it easy to understand what's happening.

Pre-requisites:
Log Analytics workspace: a configured workspace to receive and analyze the ingested data.
Blob Storage path: the exact location in Azure Blob Storage where the CSV log files are stored.
Required roles and permissions: Microsoft Sentinel Contributor to manage Sentinel resources, Logic App Contributor to create and manage automation workflows, and access to the storage account to read and retrieve log files from Blob Storage.

Implementation Steps:
Trigger configuration: Configure the Logic App trigger to run whenever a new blob is added or an existing one is modified. Select the storage account and container details, then configure the recurrence based on how frequently data is uploaded to the storage account. Choose the authentication type to connect to the storage account.
CSV retrieval: Use the Logic App action to retrieve the CSV blob content by specifying the exact file path within the container.
CSV parsing: Use built-in Logic App actions along with regex to parse the CSV content. Apply a Compose action to split the file contents by new lines, converting them into an array for structured processing. Here is the expression used in the SplitLines Compose action:
split(body('Get_blob_content_(V2)'),decodeUriComponent('%0D%0A'))
Refer to the Microsoft Learn documentation on Logic Apps workflow expression functions when writing these expressions. Remove the last (empty) line from the previous output using another Compose action:
take(outputs('SplitLines'),add(length(outputs('SplitLines')),-1))
Separate the field names using a further Compose action:
split(first(outputs('SplitLines')), ',')
Column Mapping: Repeat the required expression in the Select action to map each column from the CSV file to its corresponding field in the structured output.
In the Select action, set From to skip(outputs('RemoveLastLine'), 1) and map each field name to its value, for example:
outputs('SplitFieldName')[0] : split(item(), ',')?[0]
outputs('SplitFieldName')[1] : split(item(), ',')?[1]

Data ingestion to Sentinel: Leverage the Microsoft Sentinel connector to ingest the parsed data into the appropriate table. Configure the connection using the workspace ID, shared key, and target table name.

Key Highlights:
The Logic App is triggered whenever a file is added or modified in the Blob container.
The CSV content is parsed within the Logic App before being ingested into Sentinel.
The Microsoft Sentinel connector is used to ingest the parsed data into Sentinel.
To support dynamic updates, we recommend overwriting the existing CSV file in the storage account.

Outcome: Log visibility in the Sentinel workspace. Once the Logic App is triggered, the custom table is created automatically in Microsoft Sentinel, and logs can be viewed by running a KQL query in the Sentinel workspace (a sample query follows the conclusion below).

Conclusion: Ingesting .CSV log files from Azure Blob Storage into Microsoft Sentinel is a powerful way to centralize and automate the organization's security monitoring. It enhances visibility, supports compliance, and empowers security teams with timely insights and alerts.
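Following on from the outcome above, here is a minimal sketch of such a query. The table name CsvLogs_CL and its columns are hypothetical placeholders for whatever custom table your Logic App actually creates (custom tables typically carry a _CL suffix, and string columns created this way often carry a _s suffix).

// Sketch: review rows recently ingested by the Logic App.
// CsvLogs_CL is a placeholder - substitute the custom table your workflow writes to.
CsvLogs_CL
| where TimeGenerated > ago(1d)        // rows ingested in the last 24 hours
| sort by TimeGenerated desc
| take 50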
Sensitivity Auto-labelling via Document Property

Why is this needed?
Sensitivity labels are generally relevant within an organisation only. If a file is labelled within one environment and then moved to another environment, sensitivity label content markings may be visible, but by default, the applied sensitivity label will not be understood. This can lead to scenarios where information that has been generated externally is not adequately protected.

My favourite analogy for these scenarios is to consider the parallels between receiving sensitive information and unpacking groceries. When unpacking groceries, you might sit your grocery bag on a counter or on the floor next to the pantry. You'll likely then unpack each item, take a look at it and then decide where to place it. Without looking at an item to determine its correct location, you might place it in the wrong location. Porridge might be safe from the kids on the bottom shelf. If you place items that need to be protected, such as chocolate, on the bottom shelf, it's not likely to last very long. So, I affectionately refer to information that hasn't been evaluated as 'porridge', as until it has been checked, it will end up on the bottom shelf of the pantry where it is quite accessible. Label-based security controls, such as Data Loss Prevention (DLP) policies using conditions of 'content contains sensitivity label', will not apply to these items.

To ensure the security of any contained sensitive information, we should look for potential clues to its sensitivity and then utilize these clues to ensure that the contained information is adequately protected - we take a closer look at the 'porridge', determine whether it's an item that needs protection and, if so, move it to a higher shelf in the pantry so that it's out of reach for the kids.

Effective use of Purview revolves around the use of 'know your data' strategies. We should be using as many methods as possible to try to determine the sensitivity of items. This can include the use of Sensitive Information Types (SITs) containing keyword or pattern-based classifiers, trainable classifiers, Exact Data Match, document fingerprinting, etc. Matching items via SITs present in an item's content can be problematic due to false positives. Keywords like 'Sensitive' or 'Protected' may be mentioned out of context, such as when referring to a classification or an environment. When classifications have been stamped via a property, it allows us to match via context rather than content. We don't need to guess at an item's sensitivity if another system has already established what the item's classification is. These methods are much less prone to false positives.

Why isn't everyone doing this?
Document properties are often not considered in Purview deployments. SharePoint metadata management seems to be a dying artform, and most compliance or security resources completing Purview configurations don't have this skill set. There's also a lack of understanding of the relevance of checking for item properties. Microsoft haven't helped, as the documentation in this space is somewhat lacking and needs to be unpicked via some aligning DLP guidance (Create a DLP policy to protect documents with FCI or other properties). Many of these configurations will also be tied to regional requirements. Document properties being used by systems where I'm from, in Australia, will likely be very different to those used in other parts of the world. In the following sections, we'll take a look at applicable use cases and walk through how to enable these configurations.
Scenarios for use
Labelling via document property isn't for everyone. If your organisation is new to classification or you don't have external partners that you collaborate with at higher sensitivity levels, then this likely isn't for you. For those that collaborate heavily and have a shared classification framework, as is often seen across government, this is a must! This approach will also be highly relevant to multi-tenant organisations or conglomerates where information is regularly shared between environments. The following scenarios are examples of where this configuration will be relevant:

1. Migrating from 3rd party classification tools
If an item has been previously stamped by a 3rd party classification tool, then evaluating its applied document properties will provide a clear picture of its security classification. These properties can then be used in service-based auto-labelling policies to effectively transition items from 3rd party tools to Microsoft Purview sensitivity labels. As labels are applied to items, they will be brought into scope of label-based controls.

2. Detecting data spill
Data spill is a term used to describe situations where information of a higher than permitted security classification lands in an environment. Consider a Microsoft 365 tenant that is approved for the storage of Official information, but Top Secret files are uploaded to it. Document properties that align with higher than permitted classifications provide us with an almost guaranteed method of identifying spilled items. Pairing this document property with an auto-labelling policy allows for the application of encryption to lock unauthorized users out of the items. Tools like Content Explorer and eDiscovery can then be used to easily perform cleanup activities. If using document properties and auto-labelling for this purpose, keep in mind that you'll need to create sensitivity labels for higher than permitted classifications in order to catch spilled items. These labels won't impact usability as you won't publish them to users. You will, however, need to publish them to a single user or break-glass account so that they're not ignored by auto-labelling.

3. Blocking access by AI tools
If your organization is concerned about items with certain properties applied being accessed by generative AI tools, such as Copilot, you could use auto-labelling to apply a sensitivity label that restricts EXTRACT permissions. You can find some information on this at Microsoft 365 Copilot data protection architecture | Microsoft Learn. This should be relevant for spilled data, but might also be useful in situations where there are certain records that have been marked via properties and which should not be Copilot accessible.

4. External Microsoft Purview configurations
Sensitivity labels are relevant internally only. A label, in its raw form, is essentially a piece of metadata with an ID (or GUID) that we stamp on pieces of information. These GUIDs are understood by your tenant only. If an item marked with a GUID shows up in another Microsoft 365 tenant, the GUID won't correspond with any of that tenant's labels or label-based controls. The art in Microsoft Purview lies in interpreting the sensitivity of items based on content markings and other identifiers, so that data security can be maintained. Document properties applied by Purview, such as ClassificationContentMarkingHeaderText, are not tied to a specific tenant, which makes them portable.
We can use these properties to help maintain classifications as items move between environments.

5. Utilizing metadata applied by Records Management solutions
Some EDRMS, records or content management solutions will apply properties to items. If an item has been previously managed and then stamped with properties, potentially including a security classification, via one of these systems, we could use this information to inform sensitivity label application.

6. 3rd party classification tools used externally
Even if your organisation hasn't been using 3rd party classification tools, you should consider that partner organisations, such as other Government departments, might be. Evaluating the properties applied by external organisations to items that you receive will allow you to extend protections to these items. If classification tools like Janus or Titus are used in your geography or industry, then you may want to consider checking for their properties.

Regarding the use of auto-classification tools
Some organisations, particularly those in Government, will have organisational policies that prevent the use of automatic classification capabilities. These policies are intended to ensure that each item is assessed by an actual person for risk of disclosure, rather than via an automated service that could be prone to error. However, when auto-labelling is used to interpret and honour existing classifications, we are lowering rather than raising the risk profile. If the item's existing classification (applied via property) is ignored, the item will be treated as porridge and is likely to be at risk. If auto-labelling is able to identify a high-risk item and apply the relevant label, it will then be within scope of Purview's data security controls, including label-based DLP, groups and sites data-out-of-place alerting, and potentially even item encryption. The outcome is that, through the use of auto-labelling, we are able to significantly reduce the risk of inappropriate or unintended disclosure.

Configuration Process
Setting up document property-based auto-labelling is fairly straightforward. We need to set up a managed property and then utilize it in an auto-labelling policy. Below, I've split this process into six steps.

Step 1 – Prepare your files
In order to make use of document properties, an item with the properties applied will first need to be indexed by SharePoint. SharePoint will record the properties as 'crawled properties', which we'll then need to convert into 'managed properties' to make them useful. If you already have items with the relevant properties stored in SharePoint, then they are likely already indexed. If not, you'll need to upload or create an item or items with the properties applied. For testing, you'll want to create a file with each property/value combination so that you can confirm that your auto-labelling policies are all working correctly. This could require quite a few files depending on the number of properties you're looking for. To kick off your crawled property generation, though, you could create or upload a single file with the correct properties applied. For example, I've created properties for ClassificationContentMarkingHeaderText and ClassificationContentMarkingFooterText, which you'll often see applied by Purview when an item has a sensitivity label content marking applied to it. I've also included properties to help identify items classified via JanusSeal, Titus and Objective.
Step 2 – Index the files
After creating or uploading your file, we then need SharePoint to index it. This should happen fairly quickly, depending on the size of your environment. I'd expect to wait somewhere between 10 minutes and 24 hours. If you're not in a hurry, then I'd recommend just checking back the next day. You'll know this has completed when you head into SharePoint Admin > Search > Managed Search Schema > Crawled Properties and can find your newly indexed properties.

Step 3 – Configure managed properties
Next, the properties need to be configured as managed properties. To do this, go to SharePoint Admin > More features > Search > Managed Search Schema > Managed Properties. Create a new managed property and give it a name. Note that there are some character restrictions in naming, but you should be able to get it close to your document property name. Set the property's type to text, and select queryable and retrievable. Under 'mappings to crawled properties', choose add mapping, then search for and select the property indexed from the file property. Note that the crawled property will have the same name as your document property, so there's no need to browse through all of them. Repeat this so that you have a managed property for each document property that you want to look for.

Step 4 – Configure auto-labelling policies
Next up, create some auto-labelling policies. You'll need one for each label that you want to apply, not one per property, as you can check multiple properties within the one auto-labelling policy.
- From within Purview, head to Information Protection > Policies > Auto-labelling policies.
- Create a new policy using the custom policy template.
- Give your policy an appropriate name (e.g. Label PROTECTED via property).
- Select the label that you want to apply (e.g. PROTECTED).
- Select SharePoint-based services (SharePoint and OneDrive).
- Name your auto-labelling rules appropriately (e.g. SPO – Contains PROTECTED property).
- Enter your conditions as a long string with property and value separated by a colon and multiple entries separated by a comma. For example:
ClassificationContentMarkingHeaderText:PROTECTED,ClassificationContentMarkingFooterText:PROTECTED,Objective-Classification:PROTECTED,PMDisplay:PROTECTED,TitusSEC:PROTECTED
Note that the properties you are referencing are the managed properties rather than the document properties. This will be relevant if your managed property ended up with a different name due to character restrictions. After pasting your string into the UI, the resultant rule should look something like this.
When done, you can either leave your policy in simulation mode or save it and then turn it on from the auto-labelling policies screen. Just be aware of any potential impacts, such as accidentally locking users out by automatically deploying a label with an encryption configuration. You can reduce any potential impact by targeting your auto-labelling policy at a site or set of sites initially and then expanding its scope after testing.

Step 5 – Test
Testing your configuration is as easy as uploading or creating a set of files with the relevant document properties in place. Once uploaded, you'll need to give SharePoint some time to index the items and then the auto-labelling policy some time to apply sensitivity labels to them. To confirm label application, you can head to the document library where your test files are located and enable the sensitivity column. Files that have been auto-labelled will have their label listed. You could also check for auto-labelling activity in Purview via Activity explorer.
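If your Microsoft 365 audit data also flows into a Log Analytics workspace (for example via Microsoft Sentinel's Microsoft 365 connector), a query along these lines can complement the Activity explorer check. This is only a sketch, and the operation filter is an assumption: confirm the exact operation names that appear in your own audit data before relying on it.

// Sketch: recent sensitivity label activity from Microsoft 365 audit data.
// The operation filter below is an assumption - check the values present in your tenant.
OfficeActivity
| where TimeGenerated > ago(7d)
| where Operation has "SensitivityLabel"          // e.g. label applied / changed operations
| project TimeGenerated, Operation, UserId, OfficeObjectId
| sort by TimeGenerated desc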
Step 6 – Expand into DLP
If you've spent the time setting up managed properties, then you really should consider capitalizing on them in your DLP configurations. DLP policy conditions can be configured in the same manner as the auto-labelling conditions in Step 4 above. The document property also gives us an anchor for DLP conditions that is independent of an item's sensitivity label. You may wish to consider the following:
- DLP policies blocking external sharing of items with certain properties applied. This might be handy for situations where auto-labelling hasn't yet labelled an item.
- DLP policies blocking the external sharing of items where the applied sensitivity label doesn't match the applied document property. This could provide an indication of a risky label downgrade.
You could extend such policies into Insider Risk Management (IRM) by creating IRM policies that are aligned with the above DLP policies. This allows document properties to be considered in user risk calculation, which can inform controls like Adaptive Protection. Here's an example of a policy from the DLP rule summary screen with conditions of 'item contains a label or one of our configured document properties'.
Thanks for reading and I hope this article has been of use. If you have any questions or feedback, please feel free to reach out.
Sentinel Playbook help required
Hi there, I am trying to create a Logic App for when a new Sentinel incident is triggered: it will check the entities in the incident, compare them with the members of a defined Entra ID group, and if they match, change the status to close the incident; if they don't match, it will send an email. Is this something someone in the forum has already built, or is there someone who could help me achieve this logic? Thank you.
[DevOps] dps.sentinel.azure.com no longer responds
Hello, I've been using repository connections in Sentinel to a central Azure DevOps organization for almost two years now. Today I got my first automated error email for a webhook related to my last commit from the central repo to my Sentinel instances. It's a webhook that is automatically created in connections made in the last year (the ones from two years ago don't have this webhook created automatically). The hook is found in DevOps -> Service hooks -> webhooks, "run state change", for each connected Sentinel workspace. However, after today's run (which was successful, all content deployed) this hook generates alerts. It says it can't reach (EU in my case) eu.prod.dps.sentinel.azure.com, full URL: https://eu.prod.dps.sentinel.azure.com/webhooks/ado/workspaces/[REDACTED]/sourceControls/[REDACTED]
So, what happened to this domain? Why is it no longer responding, and when did it go offline? I think this is the hook that sets the status under Sentinel -> Repositories in the GUI. The success status in my screenshot is from 2025/02/06, and no new success has been registered in the receiving Sentinel instance. For the Sentinel workspace that is two years old and doesn't have a hook in my DevOps, the last deployment status says "Unknown" - so I'm fairly sure that's what the webhook is doing. A second question would be: how can I set up a new webhook? It wants the ID and password of the "Azure Sentinel Content Deployment App" - I will never know that password - so I can't manually add one either (if the URL ever comes back online or a new one exists). Please let me know.
How to exclude IPs & accounts from Analytic Rule, with Watchlist?
We are trying to filter out some false positives from an Analytic rule called "Service accounts performing RemotePS". Using automation rules still gives a lot of false mail notifications we don't want, so we would like to try using a watchlist with the service account and IP combinations we want to exclude. Does anyone know where and what syntax we would need to exclude the items on the specific watchlist? Query:
let InteractiveTypes = pack_array( // Declare Interactive logon type names
    'Interactive',
    'CachedInteractive',
    'Unlock',
    'RemoteInteractive',
    'CachedRemoteInteractive',
    'CachedUnlock'
);
let WhitelistedCmdlets = pack_array( // List of whitelisted commands that don't provide a lot of value
    'prompt',
    'Out-Default',
    'out-lineoutput',
    'format-default',
    'Set-StrictMode',
    'TabExpansion2'
);
let WhitelistedAccounts = pack_array('FakeWhitelistedAccount'); // List of accounts that are known to perform this activity in the environment and can be ignored
DeviceLogonEvents // Get all logon events...
| where AccountName !in~ (WhitelistedAccounts) // ...where it is not a whitelisted account...
| where ActionType == "LogonSuccess" // ...and the logon was successful...
| where AccountName !contains "$" // ...and not a machine logon.
| where AccountName !has "winrm va_" // WinRM will have pseudo account names that match this if there is an explicit permission for an admin to run the cmdlet, so assume it is good.
| extend IsInteractive=(LogonType in (InteractiveTypes)) // Determine if the logon is interactive (True=1,False=0)...
| summarize HasInteractiveLogon=max(IsInteractive) // ...then bucket and get the maximum interactive value (0 or 1)...
    by AccountName // ...by the AccountNames
| where HasInteractiveLogon == 0 // ...and filter out all accounts that had an interactive logon.
// At this point, we have a list of accounts that we believe to be service accounts
// Now we need to find RemotePS sessions that were spawned by those accounts
// Note that we look at all powershell cmdlets executed to form a 29-day baseline to evaluate the data on today
| join kind=rightsemi ( // Start by dropping the account name and only tracking the...
    DeviceEvents // ...
    | where ActionType == 'PowerShellCommand' // ...PowerShell commands seen...
    | where InitiatingProcessFileName =~ 'wsmprovhost.exe' // ...whose parent was wsmprovhost.exe (RemotePS Server)...
    | extend AccountName = InitiatingProcessAccountName // ...and add an AccountName field so the join is easier
) on AccountName
// At this point, we have all of the commands that were ran by service accounts
| extend Command = tostring(extractjson('$.Command', tostring(AdditionalFields))) // Extract the actual PowerShell command that was executed
| where Command !in (WhitelistedCmdlets) // Remove any values that match the whitelisted cmdlets
| summarize (Timestamp, ReportId)=arg_max(TimeGenerated, ReportId), // Then group all of the cmdlets and calculate the min/max times of execution...
    make_set(Command, 100000), count(), min(TimeGenerated) by // ...as well as creating a list of cmdlets ran and the count...
    AccountName, AccountDomain, DeviceName, DeviceId // ...and have the commonality be the account, DeviceName and DeviceId
// At this point, we have machine-account pairs along with the list of commands run as well as the first/last time the commands were ran
| order by AccountName asc // Order the final list by AccountName just to make it easier to go through
| extend HostName = iff(DeviceName has '.', substring(DeviceName, 0, indexof(DeviceName, '.')), DeviceName)
| extend DnsDomain = iff(DeviceName has '.', substring(DeviceName, indexof(DeviceName, '.') + 1), "")
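For the exclusion itself, a commonly used pattern is to load the watchlist with _GetWatchlist() at the top of the query and filter matches out before the final summarize. The following is a hedged sketch only: the watchlist name RemotePS_Exclusions and its columns ServiceAccount and IPAddress are hypothetical, and the right place to filter depends on which fields (account, IP, or both) you carry through the query.

// Sketch only - watchlist name and column names below are hypothetical.
let Exclusions = _GetWatchlist('RemotePS_Exclusions')
    | project ServiceAccount = tostring(ServiceAccount), IPAddress = tostring(IPAddress);
let ExcludedAccounts = Exclusions | distinct ServiceAccount;
DeviceLogonEvents
| where AccountName !in~ (ExcludedAccounts)   // drop excluded service accounts early in the rule query
// ...the rest of the rule query continues as above. To exclude on the account + IP
// combination instead, carry the IP field through and use a leftanti join, e.g.:
// | join kind=leftanti (Exclusions) on $left.AccountName == $right.ServiceAccount, $left.RemoteIP == $right.IPAddress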
Unable to add Endpoints and Vulnerability management in XDR Permissions
Hi, I have Defender for Endpoint running on over 400 devices. I have 10 with Business Premium, 5 with E5, and the rest E3. I am getting incidents from DFE, and these are being sent to my SOAR platform for analysis, but when I pivot back using client sync, I cannot see DFE incidents. I have gone into Settings > XDR > Workload settings and can only see the below. There does not appear to be an option to grant the roles I have assigned to my SOAR user the ability to see Endpoint and Vulnerability management. Really scratching my head here. Help?
Automating Microsoft Sentinel: Playbook Fundamentals

Welcome to the third entry of our blog series on automating Microsoft Sentinel. In this series, we're showing you how to automate various aspects of Microsoft Sentinel, from simple automation of Sentinel Alerts and Incidents to more complicated response scenarios with multiple moving parts. So far, we've covered Part 1: Introduction to Automating Microsoft Sentinel, where we talked about why you would want to automate and gave an overview of the different types of automation you can do in Sentinel, and Part 2: Automation Rules, where we talked about automating the mundane away. In this post, we're going to start talking about Playbooks, which can be used for automating just about anything. Here is a preview of what you can expect in the upcoming posts [we'll be updating this post with links to new posts as they happen]:
Part 1: Introduction to Automating Microsoft Sentinel
Part 2: Automation Rules – Automate the mundane away
Part 3: Playbooks Part I – Fundamentals [You are here]
Part 4: Playbooks Part II – Diving Deeper
Part 5: Azure Functions / Custom Code
Part 6: Capstone Project (Art of the Possible) – Putting it all together

Part 3: Playbooks - Fundamentals

Pre-Built Playbooks in Content Hub
Before we dive any deeper into Playbooks, I want to first point out that there are many pre-built playbooks available in the Content Hub. As of this writing, there are 484 playbooks available from 195 providers, covering all manner of use cases like threat intelligence ingestion, incident response, operations integrations, and more, in both first-party Microsoft and third-party security tools. Before we dive into the internals of Playbooks and start creating our own, you really should do yourself a favor and take a look at the Content Hub to see if there isn't already a Playbook doing what you want. You can also review the list of solutions at the Microsoft Sentinel GitHub page at Azure-Sentinel/Solutions at master · Azure/Azure-Sentinel.

Basic Structure of a Playbook
Microsoft Sentinel Playbooks are built on Azure Logic Apps, which is a low-to-no-code workflow automation platform. We'll be diving into the details of how to create a Logic App from start to finish in the next installment of this series, but for now just know that there are two key "custom" features that Sentinel exposes for use in Playbooks: Triggers and Entities.

Triggers
The events or actions that can start a Playbook running are Triggers. These can be Incident, Alert, or Entity based.

Incident Triggers
Incident triggers fire when an incident is either created or updated in Sentinel. Incident triggers can be tied to Automation Rules (which were covered in Part 2 of this series) and can also be triggered manually by an analyst. Playbooks launched with Incident triggers receive the full incident object, including any entities it contains as well as the alerts it is comprised of.

Alert Triggers
Alert triggers are similar to Incident triggers, except they fire when an Alert is raised because an Analytic Rule has returned a result. This is especially useful when you have an Alert that is not configured to create an Incident. Alert triggers can also be tied to Automation Rules.

Entity Triggers
Entity triggers are different from Incident and Alert triggers as they cannot be tied to Automation Rules. Instead, they are triggered manually by an analyst. For example, let's say that there is a user account that is part of an Incident and, during the investigation, the analyst decided they wanted to disable that user account in Entra.
They could use an Entity Trigger to launch the Playbook, passing the Account Entity to the playbook for the account to be disabled.

Entities
We can't really talk about Entity Triggers without talking about Entities themselves. So, what is an Entity in Sentinel? Entities are data elements that identify components in an alert or incident. There are many different types of entities within Sentinel, but for Playbooks we only need to focus on five key ones:
IP
Host
Account
URL
FileHash
(For more information on Entities in general, please see: https://learn.microsoft.com/azure/sentinel/entities)

How do you use Entity Triggers?
When you are building an Analytic rule, you can identify the different Entities that it contains. These are then carried along as part of the Alert and exposed for further actions. This means that all you need to do is map the results of the Analytic Rule to the different Entity types using values returned from your query. For example, let's say we are creating an Analytic Rule to alert on a new CloudShell user being created in Azure with the following query:
let match_window = 3m;
AzureActivity
| where ResourceGroup has "cloud-shell"
| where (OperationNameValue =~ "Microsoft.Storage/storageAccounts/listKeys/action")
| where ActivityStatusValue =~ "Success"
| extend TimeKey = bin(TimeGenerated, match_window), AzureIP = CallerIpAddress
| join kind = inner (
    AzureActivity
    | where ResourceGroup has "cloud-shell"
    | where (OperationNameValue =~ "Microsoft.Storage/storageAccounts/write")
    | extend TimeKey = bin(TimeGenerated, match_window), UserIP = CallerIpAddress
) on Caller, TimeKey
| summarize count() by TimeKey, Caller, ResourceGroup, SubscriptionId, TenantId, AzureIP, UserIP, HTTPRequest, Type, Properties, CategoryValue, OperationList = strcat(OperationNameValue, ' , ', OperationNameValue1)
| extend Name = tostring(split(Caller,'@',0)[0]), UPNSuffix = tostring(split(Caller,'@',1)[0])
When we use this query as the basis for an Alert, we can then use Entity Mapping under Alert Enhancement to take the relevant fields returned and map them to Entity objects. This example maps the values "Caller", "Name", and "UPNSuffix" returned by the query to the "FullName", "Name", and "UPNSuffix" fields of an Account Entity. It also maps the UserIP result to the "Address" field of an IP Entity. When the Alert fires, it will include a collection of Account and IP Entities with the necessary values in its Entities field. Now if we wanted to, we could use a Playbook based on Entity Triggers to act on the Account or IP entities.

What is a "strong" identifier versus a "weak" identifier and why is it important?
Entities have fields that identify individual instances. Strong identifiers uniquely identify an entity, while weak identifiers may not. Often, combining weak identifiers can create a strong identifier. For example, Account entities can be identified by a strong identifier like a Microsoft Entra ID (GUID) or User Principal Name (UPN). Alternatively, a combination of weak identifiers like Name and NTDomain can be used. Different data sources might identify the same user differently. When Microsoft Sentinel recognizes two entities as the same based on their identifiers, it merges them into one for consistent handling. We'll be covering more details on using Entities and Triggers in the next article when we start building Playbooks from scratch.
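Once an analytic rule with entity mappings fires, the mapped entities travel with the alert and can be inspected in the workspace via the SecurityAlert table's Entities column. The query below is a quick sketch for seeing what those mapped entities look like; the alert display name is a placeholder, and the exact JSON shape of each entity varies by entity type and product.

// Sketch: inspect the entities attached to recent alerts from a specific analytic rule.
// "New CloudShell User" is a placeholder display name - use your own rule's name.
SecurityAlert
| where TimeGenerated > ago(7d)
| where AlertName == "New CloudShell User"
| extend EntityList = todynamic(Entities)       // Entities is stored as a JSON string
| mv-expand Entity = EntityList                 // one row per mapped entity
| project TimeGenerated, AlertName, EntityType = tostring(Entity.Type), Entity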
Conclusion
In this article we talked about the fundamentals of Playbooks in Sentinel, the Content Hub which is the home of pre-built Playbooks, and the different types of Triggers that can be used to launch a Playbook. In the next article we'll cover how to build a playbook from scratch and put these concepts to work.
Additional Resources
Supported triggers and actions in Microsoft Sentinel playbooks
Entities in Microsoft Sentinel
How to Connect MS Secure Scores to Power Query?
The Microsoft 365 Defender portal (https://security.microsoft.com/) has a 'Secure Score' page, which contains an overall secure score broken down into Identity, Data, Device, and Application secure scores. I would like to pull these four scores into a Power BI report; however, I have had some difficulty putting together a solution. This data seems like it could be found in the Microsoft Graph API, but the guidance at https://learn.microsoft.com/en-us/power-query/connecting-to-graph suggests connecting to Graph from Power Query isn't straightforward. I've tried other Defender APIs, but they all seem either outdated or out of scope for what I'm trying to pull. Can anyone advise? Thanks for reading.
Multi-trigger Playbooks & Renamed Triggers
Hello Sentinel enthusiasts, in some cases deploying a playbook with multiple triggers is a much easier solution than having nine playbooks which do the same thing. In the specific example I'm working on, I have developed a playbook which requires the user to change their password within a certain amount of time. We want three triggers for the playbook - incident, entity and HTTP (for external orchestration platforms) - and we also want the option for it to run instantly, or within a configurable number of days/hours (1 day, 7 days). In this situation the native way would be to have nine playbooks in total, which seems like a lot for a simple action. I initially attempted to develop these playbooks with all three triggers in a base template deployed using Bicep - success. But at first I could only see the playbook in the automation playbook list saying "Sentinel Action", and I couldn't trigger it from an actual incident or entity. It turns out this was because I had given the trigger a different display name than the default. This specific case seems a bit odd to me because the underlying data is the same; nonetheless, not a huge issue - change the display name back to the default and voila. My next surprise was when I realised that the Sentinel UI will only pick up the first trigger to run a playbook, i.e. if I define an entity trigger first and then an incident trigger, I cannot see it appear as a trigger for an incident, and vice versa. So I set out on a mission and was able to create a Chromium extension which modifies the resource response to duplicate a playbook once for every trigger it has (only in the Azure portal PWA), and everything works perfectly as if it were fully supported. It would be great if these UI bugs could be fixed, as they seem pretty trivial and don't appear to require a major change, considering it is solely a frontend bug - especially if I can create an extension which resolves the issue. Obviously, this is not an ideal scenario in production. Garnering some support to have this rectified would be great, and it would also be cool to hear people's opinions on this. ~Seb