security operations
Take Flight with Microsoft Security Copilot Flight School
Greetings pilots, and welcome to another pioneering year of AI innovation with Security Copilot. Find out how your organization can reach new heights with Security Copilot through the many exciting announcements on the way at both Microsoft Secure and RSA 2025. This is why now is the time to familiarize yourself and get airborne with Security Copilot.

Go to School
Microsoft Security Copilot Flight School is a comprehensive series charted to take students through fundamental concepts of AI definitions and architectures, take flight with prompting and automation, and hit supersonic speeds with Logic Apps and custom plugins. By the end of the course, students should be equipped with the requisite knowledge for how to successfully operate Security Copilot to best meet their organizational needs. The series contains 11 episodes, with each having a flight time of around 10 minutes. Security Copilot is something I really, really enjoy, whether I'm actively contributing to its improvement or advocating for the platform's use across security and IT workflows. Ever since I was granted access two years ago – which feels like a millennium in the age of AI – it's been a passion of mine, and it's why just recently I officially joined the Security Copilot product team. This series in many ways reflects not only my passion but the similar passion found in my marketing colleagues Kathleen Lavallee (Senior Product Marketing Manager, Security Copilot), Shirleyse Haley (Senior Security Skilling Manager), and Shateva Long (Product Manager, Security Copilot). I hope that you enjoy it just as much as we did making it. Go ahead and put on your favorite noise-cancelling headphones; it's time, pilots, to take flight.

Log Flight Hours
There are two options for watching Security Copilot Flight School: either on Microsoft Learn or via the YouTube playlist found on the Microsoft Security YouTube channel. The first two episodes focus on establishing core fundamentals of Security Copilot platform design and architecture – or perhaps attaining your instrument rating. The episodes thereafter are plotted differently, around a standard operating procedure. To follow the ideal flight path, Security Copilot should be configured and ready to go – head over to MS Learn and the Adoption Hub to get airborne. It's also recommended that pilots watch the series sequentially, and be prepared to follow along with resources found on GitHub, to maximize learning and best align with the material. This will mean that you'll need to coordinate with a pilot with owner permissions for your instance to create and manipulate the necessary resources.

Episode 1 - What is Microsoft Security Copilot?
Security is complex and requires highly specialized skills to face the challenges of today. Because of this, many of the people working to protect an organization work in silos that can be isolated from other business functions. Further, enterprises are highly fragmented environments with esoteric systems, data, and processes. All of which takes a tremendous amount of time, energy, and effort just to do the day-to-day. Security Copilot is a cloud-based, AI-powered security platform that is designed to address the challenges presented by complex and fragmented enterprise environments by redefining what security is and how security gets done. What is AI, and why exactly should it be used in a cybersecurity context?
Episode 2 - AI Orchestration with Microsoft Security Copilot
Why is The Paper Clip Pantry a 5-star restaurant renowned the world over for its Wisconsin Butter Burgers? Perhaps it's how a chef uses a staff with unique skills and orchestrates the sourcing of resources in real time, against specific contexts, to complete an order. After watching this episode you'll understand how AI orchestration works, why nobody eats a burger with only ketchup, and how the Paper Clip Pantry operates just like the Security Copilot orchestrator.

Episode 3 – Standalone and Embedded Experiences
Do you have a friend who eats pizza in an inconceivable way? Maybe they eat a slice crust-first, or dip it into a sauce you never thought compatible with pizza? They work with pizza differently, just like any one security workflow could be different from one task, team, or individual to the next. This philosophy is why Security Copilot has two experiences – solutions embedded within products, and a standalone portal – to augment workflows no matter their current state. This episode will begin covering those experiences.

Episode 4 – Other Embedded Experiences
Turns out you can also insist upon putting cheese inside of pizza crust, or bake it thick enough as to require a fork and knife. I imagine it's probably something Windows 95 Man would do. In this episode, the Microsoft Entra, Purview, Intune, and Microsoft Threat Intelligence products showcase how Security Copilot advances their workflows within their portals. Beyond reinforcing the concept of many workflows and many operators, the takeaway from this episode is that Security Copilot works with security-adjacent workflows – IT, Identity, and DLP.

Episode 5 – Manage Your Plugins
Like our chef in The Paper Clip Pantry, we should probably define what we want to cook, which chefs to use, and set permissions for those who can interact with any input or output from the kitchen. Find out what plugins add to Security Copilot and how you can set plugin controls for your team and organization.

Episode 6 – Prompting
Is this an improv lesson, or a baking show? Or maybe, if you watch this episode, you'll learn how Security Copilot handles natural language inputs to provide you meaningful answers known as responses.

Episode 7 – Prompt Engineering
For each response, consider your goal, the context needed, the sources available, and the final presentation of the information to achieve the best result. With the fundamentals of prompting in your flight log, it's time to soar a bit higher with prompt engineering. In this episode you will learn how to structure prompts in a way that maximizes the benefits of Security Copilot and begin building workflows. Congrats, pilot, your burgers will no longer come with just ketchup.

Episode 8 – Using Promptbooks
What would it look like to find a series of prompts and run them, in the same sequence with the same output every time? You guessed it, a promptbook, a repeatable workflow in the age of AI. See where to access promptbooks within the platform, and claw back some of your day to perfect your next butter burger.

Episode 9 – Custom Promptbooks
You've been tweaking your butter burger recipe for months now. You've finally landed at the perfect version by incorporating a secret nacho cheese recipe. The steps are defined, the recipe perfect. How do you repeat it? Just like your butter burger creation, you might discover or design workflows with Security Copilot. With custom promptbooks you can repeat and share them across your organization.
In this episode you'll learn about the different ways Security Copilot helps you develop your own custom AI workflows.

Episode 10 – Logic Apps
System automation, robot chefs? Actions? What if customers could order butter burgers with the click of a button, and the kitchen staff would automatically make one? Or perhaps every Friday at 2pm a butter burger was just delivered to you? Chances are there are different conditions across your organization that, when present, require a workflow to begin. With Logic Apps, Security Copilot can be used to automatically aid workflows across any system a Logic App can connect to. More automation, less mouse clicking – that's a flight plan everyone can agree on.

Episode 11 – Extending to Your Ecosystem
A famed restaurant critic stopped into The Paper Clip Pantry, ordered a butter burger, and it's now the burger everyone is talking about. Business is booming and it's time to expand the menu – maybe a butter burger pizza, perhaps a doughnut butter burger? But you'll need some new recipes and sources of knowledge to achieve this. Like a food menu, the possibilities for expanding Security Copilot's capabilities are endless. In this episode, learn how this can be achieved with custom plugins and knowledge bases. Once you have that in your log, you will be a certified Ace, and ready to take flight with Security Copilot.

Take Flight
I really hope that you not only learn something new but have fun taking flight with the Security Copilot Flight School. As with any new and innovative technology, the learning never stops, and there will be opportunities to log more flight hours from our expert flight crews. Stay tuned at the Microsoft Security Copilot video hub, Microsoft Secure, and RSA 2025 for more content in the next few months. If you think it's time to get the rest of your team and/or organization airborne, check out the Security Copilot adoption hub to get started: aka.ms/SecurityCopilotAdoptionHub

Carry-on Resources
Our teams have been hard at work building solutions to extend Security Copilot; you can find them on our community GitHub page at: aka.ms/SecurityCopilotGitHubRepo To stay close to the latest in product news and development, and to interact with our engineering teams, please join the Security Copilot CCP to get the latest information: aka.ms/JoinCCP

Empowering Security Copilot with NL2KQL: Transforming Natural Language into Insightful KQL queries
By leveraging NL2KQL, a powerful framework that translates natural language into KQL queries, Security Copilot makes querying in KQL as intuitive as a conversation. In this article, we'll explore the story behind NL2KQL, its potential to transform security operations, and why it matters for the future of cybersecurity.
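As a purely illustrative sketch (not an actual NL2KQL output; the prompt, table, and thresholds below are assumptions), a natural-language request such as "show me the users with the most failed sign-ins over the last 24 hours" could be translated into a KQL query along these lines:

// Hypothetical example of the kind of KQL an NL2KQL-style translation could produce
SigninLogs
| where TimeGenerated >= ago(24h)        // last 24 hours, as stated in the prompt
| where ResultType != "0"                // non-zero result codes indicate failed sign-ins
| summarize FailedSignIns = count() by UserPrincipalName, AppDisplayName
| order by FailedSignIns desc

The value of NL2KQL is that the analyst supplies only the natural-language intent; the schema lookup, filtering, and aggregation shown above are generated for them.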
Azure Lighthouse support for MSSP use of Security Copilot Sentinel scenarios in Public Preview
Security Copilot support for Azure Lighthouse Sentinel use cases for managed security service provider (MSSP) tenants is now in public preview. With this support, MSSPs can purchase SCUs, attach them to the managing tenant in Azure Lighthouse, and use those SCUs to run Security Copilot skills related to Microsoft Sentinel on their customer tenants via Azure Lighthouse. All the Sentinel skills available in Security Copilot will be invokable from the Azure Lighthouse tenant without the customer needing to have Security Copilot, thereby making Security Copilot available to MSSPs who manage multiple customers. Supported scenarios include querying customer Sentinel incidents, incident entities and details, querying Sentinel workspaces, and fetching Sentinel incident queries. These skills can be invoked on a per-customer Sentinel workspace basis. Managing tenants using Azure Lighthouse can now do the following, without their customers needing to provision SCUs:
Use the same natural language based prompts using Sentinel skills on customer data
Create custom promptbooks using Sentinel skills to automate their investigations
Use Logic Apps to trigger these promptbooks
While this release doesn't support all Security Copilot skills across customer tenants for MSSPs, it is an important development on the road to full support for Security Copilot for MSSPs using Azure Lighthouse. Read on to learn more about what this means for your practice, and how to get started.

What is Azure Lighthouse?
Azure Lighthouse is built into the Azure portal and allows IT partners to manage multiple tenants for Azure services. It provides a unified management experience, enabling partners to view and manage resources across all their customers' Azure environments from a single pane of glass. It supports multi-customer management, meaning partners can perform actions across multiple customer tenants simultaneously. This is particularly useful for Managed Service Providers (MSPs) who need to manage resources at scale.

What is changing?
We are introducing Azure Lighthouse support for MSSPs to use Security Copilot on their customer tenants without requiring customers to purchase Security Compute Units (SCUs). With Azure Lighthouse support, SCUs should be purchased by an MSSP admin for use on their customers' tenants. To get started, MSSPs can go to Azure to onboard onto Security Copilot and apply their purchased SCUs to their Azure Lighthouse subscription. In Azure Lighthouse, the MSSP needs to ensure that they have access set up to their customer's Sentinel environment. Once the setup is completed, MSSPs can invoke Sentinel skills on the customer tenant via the Security Copilot standalone portal and use the SCUs associated with the Azure Lighthouse subscription. MSSPs can further use custom promptbooks and Logic Apps to automate their workflows. In the future, managed service support will continue to expand to include other skills and capabilities, such as Entra, Intune, and Purview skills. We will also add support to run the skills in parallel on multiple workspaces across customer tenants, so that the same prompt can return the response from multiple tenants for better analysis.

What other access controls are supported?
As of December 2024, we now support M365 Partner Center GDAP (Granular Delegated Admin Privileges), which allows the managing tenant to operate directly in their customer's environment using their customer's Security Copilot tenant.
M365 Partner Center GDAP: GDAP is focused on Microsoft 365 services and is available through the Partner Center. It provides more granular and time-bound access to customer workloads, addressing security concerns by offering least-privileged access. Unlike Azure Lighthouse, GDAP relationships are more specific and time-bound, with a maximum duration of two years. Partners can request and manage these relationships through the Partner Center. GDAP is designed to help partners provide services to customers who have regulatory requirements or security concerns about high levels of partner access. MSSPs can get access to customer tenants via GDAP and log into the Security Copilot standalone portal or the embedded experience to get their jobs done. The MSSP will be able to execute all the skills in Security Copilot (Entra, Defender, Purview, Intune, XDR, etc.); a full list of skills is available here, as GDAP supports all these services. In this configuration, the customer is the one purchasing Security Copilot SCUs, and the MSSP uses these SCUs associated with the customer tenant, rather than SCUs associated with the MSSP's tenant. Since Entra, Defender, Purview, and Intune are not supported in Azure Lighthouse, the only way for MSSPs to use Security Copilot on their customer tenant for these products is by directly logging into the customer tenant and utilizing the SCUs purchased by customers.

Additional Resources
Understand authentication in Microsoft Security Copilot | Microsoft Learn
Grant MSSPs access to Microsoft Security Copilot | Microsoft Learn
Microsoft Security Copilot Frequently Asked Questions | Microsoft Learn
Microsoft 365 Lighthouse frequently asked questions (FAQs)
GDAP frequently asked questions - Partner Center | Microsoft Learn

Integrating Fluent Bit with Microsoft Sentinel
This guide will walk you through the steps required to integrate Fluent Bit with Microsoft Sentinel. Note that in this article we assume you already have a Sentinel workspace, a Data Collection Endpoint and a Data Collection Rule, an Entra ID application, and finally a Fluent Bit installation. As mentioned above, the Log Ingestion API supports ingestion into both custom tables and built-in tables, like CommonSecurityLog, Syslog, WindowsEvent, and more. In case you need to check which tables are supported, please see the following article: https://learn.microsoft.com/en-us/azure/azure-monitor/logs/logs-ingestion-api-overview#supported-tables

Prerequisites: Before beginning the integration process, ensure you have the following:
An active Azure subscription with Microsoft Sentinel enabled;
A Microsoft Entra ID application, taking note of the ClientID, TenantID and Client Secret – to create one, check this article: https://learn.microsoft.com/en-us/entra/identity-platform/quickstart-register-app?tabs=certificate
A Data Collection Endpoint (DCE) – to create a data collection endpoint, please check this article: https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/data-collection-endpoint-overview?tabs=portal
A Data Collection Rule (DCR) – fields from the Data Collection Rule need to match exactly what exists in the table columns and the fields from the log source. To create a DCR please check this article: https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/data-collection-rule-create-edit?tabs=cli Depending on the source, it might require a custom table to be created or an existing table from the Log Analytics workspace;
Fluent Bit installed on your server or container – in case you haven't yet installed Fluent Bit, the following article has the instructions per type of operating system: https://docs.fluentbit.io/manual/installation/getting-started-with-fluent-bit

High level architecture:

Step 1: Setting up the Fluent Bit configuration file
Before we step into the configuration: Fluent Bit has numerous output plugins, and one of them sends data through the Log Analytics Ingestion API, both to supported Sentinel tables and to custom tables. You can find more information about it in the Fluent Bit documentation: https://docs.fluentbit.io/manual/pipeline/outputs/azure_logs_ingestion Moving forward, in order to configure Fluent Bit to send logs into the Sentinel Log Analytics workspace, take note of the specific input plugin you are using (or intend to use) to receive logs and how you can use it to output the logs to the Sentinel workspace. For example, most of the Fluent Bit input plugins allow you to set a "tag" key, which can be used within the output plugin so that there's a match for the logs intended to be sent. On the other hand, in a scenario where multiple input plugins are used and all are required to send logs to Sentinel, a wildcard match "*" could be used as well. Another example: in a scenario where there are multiple input plugins of type "HTTP" and you want to send just a specific one into Sentinel, the "match" field must be set according to the position of the required input plugin, for example "match http.2" if the input plugin is the 3rd in the list of HTTP inputs. If nothing is specified in the "match" field, it will assume "http.0" by default.
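To make this concrete, here is a minimal sketch of such a configuration, combining a dummy input with the azure_logs_ingestion output described above. The parameter names are those documented for the Fluent Bit azure_logs_ingestion output plugin; all values shown (tenant, client ID, secret, DCE URL, DCR ID, and table name) are placeholders you would replace with your own:

[INPUT]
    # Dummy input that emits a static JSON record, tagged so the output can match it
    Name    dummy
    Tag     dummy.log
    Dummy   {"Message":"this is a sample message for testing fluent bit integration to Sentinel","Activity":"fluent bit dummy input plugin","DeviceVendor":"Ubuntu"}

[OUTPUT]
    # Sends matched records to Sentinel via the Log Ingestion API (DCE + DCR)
    Name            azure_logs_ingestion
    Match           dummy.log
    tenant_id       00000000-0000-0000-0000-000000000000
    client_id       11111111-1111-1111-1111-111111111111
    client_secret   <your app registration secret>
    dce_url         https://my-dce-name.westeurope-1.ingest.monitor.azure.com
    dcr_id          dcr-00000000000000000000000000000000
    table_name      Custom-FluentBit_CL
    time_generated  true

The field names emitted by the input (Message, Activity, DeviceVendor) must line up with the columns declared in the DCR and in the destination table, otherwise records will arrive with empty columns or be dropped.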
For a better understanding, here's how a Fluent Bit config file could look. First, the configuration file is located under the path "/etc/fluent-bit/fluent-bit.conf". The first part is the definition of all input plugins, then follow the filter plugins – which you can use, for example, to rename fields from the source to match what exists within the Data Collection Rule schema and the Sentinel table columns – and finally there are the output plugins. Below is a screenshot of a sample config file:

INPUT plugins section: In this example we're going to use the dummy input to send sample messages to Sentinel. However, in your scenario you could leverage other input plugins within the same config file. After everything is configured in the input section, make sure to complete the FILTER section if needed, and then move forward to the output plugin section, screenshot below.

OUTPUT plugins section: In this section, we have output plugins that write to a local file based on two tags ("dummy.log" and "logger"), an output plugin that prints the output in JSON format, and the output plugin responsible for sending data to Microsoft Sentinel. As you can see, this one is matching the tag "dummy.log", where we've set up the message {"Message":"this is a sample message for testing fluent bit integration to Sentinel", "Activity":"fluent bit dummy input plugin", "DeviceVendor":"Ubuntu"}. Make sure you insert the correct parameters in the output plugin, in this scenario the "azure_logs_ingestion" plugin.

Step 2: Fire Up Fluent Bit
When the file is ready to be tested, please execute the following:
sudo /opt/fluent-bit/bin/fluent-bit -c /etc/fluent-bit/fluent-bit.conf
Fluent Bit will start initializing all the plugins it has under the config file. Then your access token should be retrieved if everything is well set up under the output plugin (app registration details, Data Collection Endpoint URL, Data Collection Rule ID, Sentinel table – and, importantly, make sure the name of the output plugin is actually "azure_logs_ingestion"). In a couple of minutes you should see this data under your Microsoft Sentinel table, either an existing table or a custom table created for the specific log source purpose.

Summary
Integrating Fluent Bit with Microsoft Sentinel provides a powerful solution for log collection and analysis. By following this guide, we hope you can set up a seamless integration that enhances your organization's ability to monitor and respond to security threats – just carefully ensure that all fields processed in Fluent Bit are mapped exactly to the fields in the Data Collection Rule and the Sentinel table within the Log Analytics workspace. Special thanks to Bindiya Priyadarshini, who collaborated with me on this blog post. Cheers!

Introducing Threat Intelligence Ingestion Rules
Microsoft Sentinel just rolled out a powerful new public preview feature: Ingestion Rules. This feature lets you fine-tune your threat intelligence (TI) feeds before they are ingested into Microsoft Sentinel. You can now set custom conditions and actions on Indicators of Compromise (IoCs), Threat Actors, Attack Patterns, Identities, and their Relationships. Use cases include:
Filter out false positives: Suppress IoCs from feeds known to generate frequent false positives, ensuring only relevant intel reaches your analysts.
Extend IoC validity periods for feeds that need longer lifespans.
Tag TI objects to match your organization's terminology and workflows.

Get Started Today with Ingestion Rules
To create a new ingestion rule, navigate to "Intel Management" and click on "Ingestion rules". With the new Ingestion Rules feature, you have the power to modify or remove indicators even before they are integrated into Sentinel. These rules allow you to act on indicators currently in the ingestion pipeline. Note: it can take up to 15 minutes for a rule to take effect.

Use Case #1: Delete IoCs with low confidence scores during ingestion
When ingesting IoCs from TAXII, the Upload API, or file upload, indicators are imported continuously. With pre-ingestion rules, you can filter out indicators that do not meet a certain confidence threshold. Specifically, you can set a rule to drop all indicators in the pipeline with a confidence score of 0, ensuring that only reliable data makes it through.

Use Case #2: Extending IoC validity
The following rule can be created to automatically extend the expiration date for all indicators in the pipeline where the confidence score is greater than 75. This ensures that these high-value indicators remain active and usable for a longer duration, enhancing the overall effectiveness of threat detection and response.

Use Case #3: Bulk Tagging
Bulk tagging is an efficient way to manage and categorize large volumes of indicators based on their confidence scores. With pre-ingestion rules, you can set up a rule to tag all indicators in the pipeline where the confidence score is greater than 75. This automated tagging process helps in organizing indicators, making it easier to search, filter, and analyze them based on their tags. It streamlines the workflow and improves the overall management of indicators within Sentinel.

Managing Ingestion Rules
In addition to the specific use cases mentioned, managing ingestion rules gives you control over the entire ingestion process.
1. Reorder Rules: You can reorder rules to prioritize certain actions over others, ensuring that the most critical rules are applied first. This flexibility allows for a tailored approach to data ingestion, optimizing the system's performance and accuracy.
2. Create From: Creating new ingestion rules from existing ones can save you a significant amount of time and offers the flexibility to incorporate additional logic or remove unnecessary elements. Effectively duplicating these rules ensures you can quickly adapt to new requirements, streamline operations, and maintain a high level of efficiency in managing your data ingestion process.
3. Delete Ingestion Rules: Over time, certain rules may become obsolete or redundant as your organizational needs and security strategies evolve. It's important to note that each workspace is limited to a maximum of 25 ingestion rules.
Having a clean and relevant set of rules ensures that your data ingestion process remains streamlined and efficient, minimizing unnecessary processing and potential conflicts. Deleting outdated or unnecessary rules allows for a more focused approach to threat detection and response. It reduces clutter, which can significantly enhance performance. By regularly reviewing and purging obsolete rules, you maintain a high level of operational efficiency and ensure that only the most critical and up-to-date rules are in place.

Conclusion
By leveraging these pre-ingestion rules effectively, you can enhance the quality and reliability of the IoCs ingested into Sentinel, leading to more accurate threat detection and an improved security posture for your organization.

Next-Gen Device Incident Investigation & Threat Hunting with Custom Plugins
The Security Copilot custom plugin empowers you to extend Security Copilot functionalities beyond the preinstalled and third-party plugins. This blog introduces two custom plugins that you can install and use in your environment. An incident investigation case study will be used to demonstrate the features of these two custom plugins. Additionally, a step-by-step guide will walk you through the setup process, which only takes a few clicks. The first custom plugin, "Custom Plugin Defender Device Investigation", provides the following skills:
Title: File - Files Downloaded Description: Lists files downloaded to this device in specific timeframe within past 30 days.
Title: File - Last 15 Days Files Downloaded Description: Lists files downloaded to this device in the last 15 days.
Title: File - Any Device Events Related To This File Description: Display device events that include the filename, in specific timeframe.
Title: File - Sensitive Files Events Description: Lists sensitive files events on this device in the last 10 days.
Title: File - File Origin Description: Display the origin or source of the file, in past 30 days.
Title: Process - Process Executions Summary Description: Summary of process executions on this device in specific timeframe.
Title: Process - Detailed Process Executions Description: Detailed all process execution events on device within a brief period, e.g. an hour.
Title: Process - Detailed Process Events Description: Detailed specific process execution events on device within a defined time frame.
Title: Lateral Movement - RDP To Device Description: Inbound RDP connection to this device in a specific timeframe.
Title: Lateral Movement - Logon To Device Description: Logon events from other devices to this device in a specific timeframe.
Title: Lateral Movement - Logons To Device In Last 10 Days Description: Logon events from other devices to this device in the last 10 days.
Title: Network - Outbound Network Events Description: Device outbound network events, including attempts and failed connections.
Title: Network - Inbound Network Events Description: Device inbound network events and attempts in a specific timeframe.
Title: Network - Device Listening Ports Description: Displays device listening ports in specific timeframe.
Title: Device Events - Scheduled Task Events Description: Scheduled task events seen on a device in a specific timeframe.
Title: Device Events - User Account Events Description: User account events seen on a device in a specific timeframe.
Title: Device Events - User Account Added Or Removed From Local Group Description: User account added or removed from local group in a specific timeframe.
Title: Suspicious Activities - ASR Rules Triggered Description: ASR rules that were triggered on this device in the past 7 days.
Title: Suspicious Activities - ASMSI Script Detection Description: Script detection from Windows Antimalware Scan Interface (AMSI) in past 7 days.
Title: Suspicious Activities - Exploit Guard Events Description: Exploit Guard events detected on this device in past 7 days.
Title: Suspicious Activities - Network Protection Events Description: Network Protection events triggered on this device in the past 7 days.
Title: Suspicious Activities - Device Tampering Attempts Description: Possible tampering attempts on this device in the past 7 days.
The second custom plugin, "Custom Plugin Defender Device Info", offers specific device information often needed during an investigation.
Its skills include:
Title: Device OS Information Description: Latest device OS information with the device name as the input.
Title: Device Current and Past IPs Description: The current and past IPs assigned to this device in the last 10 days.
Title: Device Users and Login Counts Description: List users logged onto this device and the number of times, within the last 10 days.
Title: Device Alert Information Description: Alerts observed on this device in the last 30 days.
Title: Device Installed Applications Description: Currently installed applications on this device.
Title: Device Vulnerability Information Description: Vulnerabilities identified on this device.
Title: Device Critical Vulnerabilities Description: Vulnerability with CVSS score 7 or higher, or exploit is publicly available.
Both custom plugins are available for download from the Security Copilot GitHub repository at this link. Step-by-step guides on how to install the custom plugins will be covered later in this blog. Let's start by demonstrating some of the capabilities of the two custom plugins through a case study of a Microsoft Defender XDR incident. For this incident, the Security Copilot incident summary reveals that the threat actor used a credential phishing attack to gain initial access. Over the course of the incident, several instances of lateral movement, credential access, and privilege escalation were detected, impacting users and devices across the network. Key activities included the use of tools like Mimikatz and Rubeus, suspicious remote sessions, and evidence of system manipulation. From the Security Copilot incident summary, you learn that the attack started when user "jonaw" clicked on a malicious URL in an email. Following that, a suspicious remote session was detected on device "vnevado-win10v". To investigate the suspicious remote session on the device, one way is to leverage the "Lateral Movement – Logon To Device" skill from the "Custom Plugin Defender Device Investigation" plugin in Security Copilot's standalone mode. This skill presents the logon events that occurred on the device within the specified timeframe. The logon events include console logons, Remote Desktop logons, remote registry logons, scheduled task logons, and more. You can invoke this skill by navigating to the System Capabilities menu option from the prompt bar. To get to the System Capabilities menu option, select the Prompts option from the prompt bar, as shown next. Then the System Capabilities menu option appears. This skill is located under the plugin named "CUSTOM PLUGIN DEFENDER DEVICE INVESTIGATION", as shown next. Once this skill is selected, you will need to fill in three input fields: the device name, start time, and end time. For this case study, the alert for the suspicious remote session was triggered for device vnevado-win10v, occurring at approximately 9:42 UTC on November 22nd, 2024. For the investigation, let's set the start time to 2024-11-22 9:30 UTC and the end time to 9:50 UTC, as shown in the next screenshot. The next screenshot demonstrates how Security Copilot executes this skill. Using the "Export to Excel" option in the Copilot response, you can download and then manually review the logon events. Upon inspection, it is discovered that for device vnevado-win10v, there is a long list of logon events involving different user accounts within the 20-minute time frame. A screenshot showing a portion of the logon events is displayed next.
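For reference, a roughly equivalent advanced hunting query against Microsoft Defender XDR could look like the sketch below. This is an approximation of the kind of query such a skill might run, not the plugin's actual implementation; the device name and time window are taken from this case study:

// Sketch: inbound logon activity on vnevado-win10v during the investigation window
DeviceLogonEvents
| where Timestamp between (datetime(2024-11-22 09:30:00) .. datetime(2024-11-22 09:50:00))
| where DeviceName startswith "vnevado-win10v"
| project Timestamp, AccountName, AccountDomain, LogonType, ActionType,
          RemoteDeviceName, RemoteIP, IsLocalAdmin
| order by Timestamp asc

Running a query like this manually is exactly the kind of step the skill wraps up for you, returning the same data in an exportable form.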
You can then ask Security Copilot with this prompt: "Can you review the previous output of the logon events for the device vnevado-win10v between 2024-11-22 09:30 and 2024-11-22 09:50, summarize the logon events and also point out anything suspicious". The next screenshot displays the Security Copilot prompt along with the beginning of its response. The logon event summary provided by Security Copilot is thorough but a bit long. At the end, it includes the identified suspicious logon activities:
There are several instances where logon attempts are followed by successful logons within milliseconds, which could indicate automated or scripted logon attempts.
There are 10 logon events with an "Unknown" logon type, which is unusual and may warrant further investigation.
The account debrab has one logon event where it is marked as a local admin, which should be verified for legitimacy.
For your reference, the last section of Security Copilot's logon event summary is shown in the next screen capture. After reviewing the logon event summary for device vnevado-win10v, let's find out who might be the owner of this device. The "Device Users and Login Counts" skill from the "Custom Plugin Defender Device Info" plugin provides a summary of how many times each user has logged into the device over the past 30 days. Typically, the user with the most logins is likely the device owner. Once the skill is executed for device vnevado-win10v, Security Copilot reports that "user jonaw has logged onto the device vnevado-win10v a total of 189 times in the last 30 days", as shown in the next screen capture. This helps to identify user "jonaw" as the likely device owner, which in turn makes user "debrab" appear even more suspicious. Let's go back to the detailed logon events provided by Security Copilot earlier and take another look at user account "debrab". The next screenshot shows the logon events for device vnevado-win10v, filtered to display only those associated with the user "debrab". One notable observation is that the logon type for user "debrab" is either batch or unknown, which appears suspicious as well, especially with one batch logon with local admin privilege. What is a batch logon type? You can ask Security Copilot for more insights. The next screenshot displays Copilot's response, which explains that a batch logon type is typically used for scheduled tasks. The batch logon seems odd in this case. One of Security Copilot's key features is its ability to distinguish between normal and anomalous behavior in IT operations. In this case, let's ask Security Copilot whether it's common for someone with local admin privilege to log on to a device through a batch logon. As seen in the previous screenshot, Security Copilot points out that the batch logon is unusual, as it is typically used for scheduled tasks or automated processes, not for interactive sessions by administrators. Security Copilot's response further confirms that the batch logon events with user account "debrab" are suspicious. This information and the other Security Copilot observations can assist you in identifying the suspicious remote session detected on device "vnevado-win10v". The incident summary generated by Security Copilot not only mentions the detection of a suspicious remote session on device vnevado-win10v, but also reports the presence of suspicious files, including mimikatz.exe, rubeus.exe, xcopy.exe, and powershell.exe. The incident summary snippet is displayed next for reference.
Let's now examine what occurred on the device involving these suspicious files. A quick and easy way to start the investigation is to check for files downloaded to the device and review the device's process execution events around the time of the incident to identify anything suspicious. Manually checking for downloaded files and examining process execution events can be time-consuming and labor-intensive. However, with the help of Security Copilot, these tasks can be performed more quickly and efficiently. The "File - Files Downloaded" skill from the "Custom Plugin Defender Device Investigation" plugin can be used to quickly identify files that were downloaded onto a device within a specific time period. Then, the "Process - Process Executions Summary" skill from the same Security Copilot plugin can be used to list the processes that executed on the device during the same timeframe. You can then ask Security Copilot to analyze these processes to identify anything suspicious. After the "File - Files Downloaded" skill executes, Security Copilot identifies that a file named DomainDominance198.zip was downloaded to device vnevado-win10v. Another thing to keep in mind is that not all the information from the Copilot findings is directly visible in the Security Copilot console. You can expand the output result within the console or export the findings to Excel for a clearer view of the additional details. For this investigation, you can then more thoroughly review the URL from which the file was downloaded, verify the file location through its folder path, and locate the user account associated with the download. The next screenshot displays these additional details seen in the Excel spreadsheet. Then, the "Process - Process Executions Summary" skill provides a list of processes executed on the same device, vnevado-win10v, during the same period. Instead of manually reviewing all 128 processes, you can ask Security Copilot to analyze the processes and flag any suspicious ones. In addition, it's worth mentioning that earlier in the investigation, leveraging the Microsoft Entra plugin, Security Copilot reported that user account "jonaw" belongs to Jonathan Wolcott, an account executive in the Sales department. With this information, let's ask Security Copilot to identify any process execution that should typically not be carried out by someone outside of the IT department. Here is the Security Copilot prompt you can use: "User 'jonaw' is an account executive in the sales department; with this information, can you identify any processes that typically should not be carried out by someone outside of the IT department?" Security Copilot then identifies six suspicious processes and provides its reasoning along the way. Once again, you can export the Security Copilot findings to Excel for a more thorough review. The next screenshot displays the results in Excel, in a more readable format. Now that a few more suspicious processes have been identified, let's revisit the downloaded file, DomainDominance198.zip, to see if more details can be uncovered. The skill "File - Any Device Events Related To This File", part of the "Custom Plugin Defender Device Investigation" plugin in Security Copilot, is designed to identify any device events or activities related to a specific file. It uses the filename as a keyword to filter and display only the device events containing this keyword within a defined time period.
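As a rough approximation of what such a keyword-based lookup could do in Defender XDR advanced hunting (a sketch under assumptions, not the plugin's actual query; the file name and device are from this case study):

// Sketch: any device event on vnevado-win10v that references "DomainDominance198"
union DeviceFileEvents, DeviceProcessEvents, DeviceNetworkEvents, DeviceEvents
| where Timestamp > ago(30d)
| where DeviceName startswith "vnevado-win10v"
| where FileName contains "DomainDominance198"
    or InitiatingProcessFileName contains "DomainDominance198"
    or InitiatingProcessCommandLine contains "DomainDominance198"
| project Timestamp, ActionType, FileName, FolderPath,
          InitiatingProcessFileName, InitiatingProcessCommandLine
| order by Timestamp asc

The skill packages this kind of search behind a single prompt, so the analyst only supplies the filename and the time window.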
For this security incident, let's use this skill to search for device events containing the name of the downloaded file, DomainDominance198. Upon reviewing the Security Copilot response exported to Excel, you can see that a new file, DomainDominance198.ps1, has been created in the same directory as DomainDominance198.zip. In addition, the "File - File Origin" skill in the "Custom Plugin Defender Device Investigation" plugin provides details about a file's origin or source. It shows where the file came from, and any associated file or connection linked to it. In this case, as shown in the next screenshot, Security Copilot reveals that the file DomainDominance198.zip was downloaded from a specific URL, and that the file DomainDominance198.ps1 is associated with the file DomainDominance198.zip, as shown next. The additional details in Security Copilot's responses highlight the exact association, indicating that the File Origin Referrer URL for DomainDominance198.ps1 is DomainDominance198.zip, as shown in the next screen capture. With these insights, let's use another Security Copilot skill to conduct a more in-depth examination of PowerShell execution events on device vnevado-win10v. The skill "Process - Detailed Process Events" is also part of the "Custom Plugin Defender Device Investigation" plugin. It retrieves detailed process execution events, including process command line information and the parent process execution details, for the specified process on a given device within a defined time frame. When this skill is invoked, it requires four mandatory fields to be filled, as shown next. Security Copilot then displays the PowerShell execution events identified on device vnevado-win10v within the specified timeframe of 2024-11-22 09:30 to 2024-11-22 10:30, as shown next. From a more condensed text view of the responses from Security Copilot, a range of unusual or potentially harmful behaviors can be observed in the next screenshot. Some of these suspicious events are highlighted in yellow or displayed in bold in the next screenshot. The process execution events retrieved include command line details and the parent process; therefore you are able to see both the PowerShell executions and the processes launched with PowerShell as the parent process. The suspicious processes, such as mimikatz.exe, Rubeus.exe, xcopy.exe, PsExec.exe, and others mentioned in the Security Copilot incident summary, are identified here, allowing you to quickly recognize the correlation. Additionally, you can ask Security Copilot to assist you in reviewing the suspicious events. For instance, immediately after the xcopy command was used to copy the file "Rubeus.exe" to the remote device vnevado-win10b, a subsequent command involving "PsExec.exe" is observed in the detailed PowerShell execution events presented earlier by Security Copilot. The two command lines are shown in the next screen capture. Consulting with Security Copilot reveals that "PsExec.exe" executed a command remotely on the device vnevado-win10b. This command launched "Rubeus.exe" to dump Kerberos tickets for the user "nestorw" and saved the output to C:\Temp\AdminTicket.txt. Security Copilot notes that this action indicates credential dumping and potential lateral movement within the network. The next screenshot shows the prompt along with part of the responses from Security Copilot.
As there are many other potentially harmful behaviors also observed in the detailed PowerShell execution events presented by Security Copilot earlier, you can submit each of these suspicious events to Security Copilot and ask for insights.

Downloading and Installing the Custom Plugins
The configuration files for the custom plugins can be downloaded from this link. Once you have the configuration file (in YAML format), here are the steps to upload and install it to your Security Copilot instance.
Step 1: Select the Sources icon in the Prompt bar.
Step 2: Scroll to the bottom of the Manage Sources page; within the Custom section, you'll find the "Add a plugin" option.
Step 3: Click on "Add plugin" and then choose "Copilot for Security plugin", as illustrated in the next screenshot.
Step 4: Click on "Upload file" to install the configuration file, which is in YAML format.
Step 5: Click on Add.
And voilà, the new custom plugin appears along with the other plugins in the Manage sources section, as seen in the screen capture next. Now you can start using the custom plugins and they will appear in the "System Capabilities" section.

Accelerating the Anomalous Sign-Ins detection with Microsoft Entra ID and Security Copilot
Overview
In today's complex threat landscape, identity protection is critical for securing organizational assets. A common sign of compromise is user activity indicating connections from multiple locations separated by over X kilometers within a short period. Such events might represent risky sign-ins, requiring security analysts to determine whether they are true positives (indicating malicious activity) or false positives (such as misconfigured settings or benign anomalies). To enhance efficiency and accelerate the investigation process, organizations can leverage AI tools like Microsoft Security Copilot. By integrating Security Copilot with Microsoft Entra ID (mainly AADUserRiskEvents) and developing custom promptbooks, organizations can investigate risky sign-ins, reduce manual workloads, and enable proactive decision-making to boost SOC efficiency in such scenarios.

Use Case: Challenge and Solution
Challenge
Organizations face significant challenges in investigating and triaging identity protection alerts for sign-in anomalies, especially when users appear to log in from geographically disparate locations within hours. These challenges include:
Volume of Alerts: Large organizations generate numerous risky sign-in events daily.
False Positives: Legitimate activities, such as VPN connections or device relocations, may be flagged.
Resource Constraints: Security teams must efficiently prioritize true positives for investigation.
Solution
Using Microsoft Security Copilot with a tailored promptbook, security analysts can automate the initial triage process and focus on meaningful insights. This approach combines data querying, AI-driven analysis, and actionable recommendations to improve investigation workflows.

Promptbook Structure
The custom promptbook comprises two key prompts:
1. First Prompt: Data Retrieval from Defender XDR via KQL Query
This query retrieves users flagged for risky sign-ins within a given time window, focusing on events where the distance between locations exceeds a threshold, for example 500 kilometers within 3 hours. Retrieve Defender XDR information using this KQL query:

let riskyusers = AADUserRiskEvents
| where TimeGenerated >= ago(<TimeIntervalByDays>)
| project UserPrincipalName, TimeGenerated, Location, IpAddress, RiskState, Id, RiskEventType;
riskyusers
| join kind=inner (
    riskyusers
    | extend TimeGenerated1 = TimeGenerated, LocationDetails1 = Location
) on UserPrincipalName
| where TimeGenerated < TimeGenerated1 and datetime_diff('hour', TimeGenerated1, TimeGenerated) <= <ConnectionsIntervalByHrs>
| extend latyy = Location.geoCoordinates.latitude
| extend longy = Location.geoCoordinates.longitude
| extend latyy1 = LocationDetails1.geoCoordinates.latitude
| extend longy1 = LocationDetails1.geoCoordinates.longitude
| extend distance = geo_distance_2points(todouble(longy), todouble(latyy), todouble(longy1), todouble(latyy1))
| where distance >= <SeparatedDistanceByKM>
| summarize arg_max(TimeGenerated, *) by Id
| where RiskState != @"dismissed"
| project UserPrincipalName, TimeGenerated, IpAddress, Location, TimeGenerated1, IpAddress1, LocationDetails1, RiskEventType, distance

Please make sure to set values for the following input parameters:
<TimeIntervalByDays> example: 7d
<ConnectionsIntervalByHrs> example: 3
<SeparatedDistanceByKM> example: 5000
2. Second Prompt: AI Analysis for Patterns and Recommendations
This prompt enables Security Copilot to analyze the retrieved data, identify patterns (e.g., recurring IP addresses or anomalous locations), and suggest further investigative steps and mitigative actions.

/AnalyzeSecurityData Provide your insights as a Security Analyst about what anomalies or similarity patterns you can identify. Provide a list of prompts for Security Copilot to investigate further and a list of recommendations. Use as input security data the information in the table from the previous prompt in this session.

Automating the Process with Azure Logic Apps
Organizations can further streamline the process by automating risky sign-in investigations using Azure Logic Apps. Here's how:
Create a Logic App: Set up a Logic App in the Azure portal.
Trigger Configuration: Use a recurring schedule trigger to run the investigation daily.
Integration with Security Copilot: Configure the Logic App to execute the Security Copilot promptbook and automate prompts for insights and recommendations.
Notification Mechanism: Send results via email to the SOC team or log them in a ticketing system for further action.
Note: to send only the result of the last prompt in the promptbook, use:
last(body('Run_a_Security_Copilot_promptbook')?['evaluationResults'])['evaluationResultContent']

Benefits of the Approach
Efficiency: Reduces manual efforts by automating repetitive tasks.
Accuracy: AI analysis helps filter out false positives and prioritize true positives.
Scalability: Easily extendable for other security use cases.
Fast triage: Enables SOC teams to act quickly and decisively.

Conclusion
Incorporating Microsoft Security Copilot with a custom promptbook into daily operations empowers security analysts to efficiently investigate and triage risky sign-in events. By automating processes through Azure Logic Apps, organizations can maintain a proactive security posture and better protect their identities and assets. Try it out: if your organization is looking to enhance its SOC capabilities, consider implementing this solution to harness the power of AI for identity protection. The promptbook has been added to the GitHub Security Copilot repo: Click here

Boost SOC automation with AI: Speed up incident triage with Security Copilot and Microsoft Sentinel
The Solution
This solution leverages AI and automation to speed up incident triage by providing an automated response to an incident while infusing AI reasoning into the triage process, allowing the analyst to gain quick context about the gravity of the incident, detailed information about each entity involved, and any executed processes. It then goes on to recommend mitigation steps, leading to faster MTTR (Mean Time To Respond). Below are key highlights of the solution:
Accelerated triage: One of the scenarios in which analysts could spend a considerable amount of time is when the incident includes, for example, a process name that they have never encountered before. This challenge is compounded when the process execution includes command line elements. In this situation Security Copilot steps in to provide a rapid analysis of the process and associated command line elements, presenting the output to the analyst much faster than they would be able to produce it without AI's contribution. Similarly, in the case of the device entity, Copilot taps into Microsoft Intune to bring in a summary of OS information, compliance status, hardware information, etc., thereby accelerating triage. Additionally, the reality in the SOC is that incidents do not happen at convenient times; several incidents can be triggered at the same time, requiring analysts to triage them as quickly as possible. This is where AI and automation become a force multiplier. Having the Logic App trigger automatically upon incident creation and perform the core triage tasks saves the analysts precious time that they would have spent manually triaging several incidents that could trigger at the same time.
Insight consolidation: The Logic App brings together context from multiple sources, spanning both first-party and third-party. In this example we are tapping into AbuseIPDB as a third-party enrichment source. The Logic App offers this flexibility, giving customers the option to bring in enrichment data from third-party or custom sources and have Security Copilot build a holistic narrative for the triage summary. In doing so it helps the analyst get as much context as possible without needing to pivot into multiple security tools.
Streamlined incident management: Incident comments in Microsoft Sentinel are automatically updated, providing investigators with up-to-date information and reducing manual effort. These comments are also automatically synchronized to the Defender XDR portal and are therefore also accessible from that interface.
The automated incident investigation summary is structured with the following details:
Incident overview – Details matching those used to define the analytics rule
Incident description – A summary including the key highlights of the incident
Analysis on incident entities – AI-powered analysis of the IP, Account, Host and Process details as extracted from the incident
Possible mitigation steps – Depending on the nature of the incident, suggested mitigation steps for the incident
Conclusion
Below is a snapshot of the Logic App steps:

Sample output
Once attached to the selected analytics rules and the associated incident is created, you can expect the incident to be enriched in a manner similar to what is shown below and then added as a comment to the triggered Microsoft Sentinel incident.

Security Copilot skills used
Skill – Description
ProcessAnalyzer – Scrutinizes process names and command lines, providing detailed insights into potentially malicious activities.
GetEntraUserDetails – Retrieves comprehensive user information
GetIntuneDevices – Facilitates the extraction of device details from Intune, ensuring that all devices associated with an incident are thoroughly examined
AbuseIPDB – Performs IP address reputation checks, helping to identify and mitigate threats from suspicious IP addresses

Deployment prerequisites
Before deploying the Logic App, ensure the following prerequisites are met:
The user or service principal deploying this Logic App should have the Contributor role on the Azure Resource Group that will host the Logic App.
Microsoft Security Copilot should be enabled in the Azure tenant.
The user should have access to Microsoft Security Copilot to submit prompts by authenticating to the Microsoft Copilot for Security connector within the Logic App.
Microsoft Sentinel is configured and generates incidents.
Obtain an AbuseIPDB API key to perform IP address reputation analysis.
Follow the link below to our Security Copilot GitHub repo to obtain the solution: SecurityCopilot-Sentinel-Incident-Investigation automation on GitHub

Conclusion
The integration of AI and automation in the Security Operations Center (SOC) through tools like Security Copilot and Logic Apps in Microsoft Sentinel significantly enhances incident triage and management. By leveraging these technologies, organizations can achieve faster, more consistent, and reliable incident handling, ultimately strengthening their overall security posture.

Additional resources
Overview - Azure Logic Apps | Microsoft Learn
Logic Apps connectors in Microsoft Security Copilot | Microsoft Learn
Microsoft Sentinel - Cloud-native SIEM Solution | Microsoft Azure
Microsoft Security Copilot | Microsoft Security

Improve SecOps collaboration with case management
Are you using a 3rd party case management system for the SecOps work you do in Microsoft Sentinel or Defender XDR? Do you struggle to find a solution that encompasses the specific needs of your security team? We are excited to announce a new case management solution, now in public preview. This is our first step towards providing a native, security-focused case management system that spans all SecOps workloads in the Defender portal, removing customer reliance on 3rd party SIEM/XDR and ticketing systems. This will be available for all Microsoft Sentinel customers that have onboarded to the unified SecOps platform.