Redefining Cyber Defence with Microsoft Security Exposure Management (MSEM) and Security Copilot
Introduction

Microsoft Security Exposure Management (MSEM) provides the Cyber Defense team with a unified, continuously updated view of asset exposure and relevant attack paths, and classifies these findings. While MSEM continuously creates and updates these findings, the Security Operations Center (SOC) Engineering team needs to access this data and interact with it as part of their proactive discovery exercises. Microsoft Security Copilot (SCP), on the other hand, acts as an always-ready, AI-powered copilot for the SOC Engineering team. Combining the situational awareness from MSEM with the quick and consistent retrieval capabilities of SCP gives SOC Engineers a natural-language front door into exposure insights and attack paths. This combination also opens the door to including MSEM content, and the reasoning over this content, in Security Copilot prompts and promptbooks, and to using this content in automation scenarios that leverage Security Copilot.

Traditionally, a SOC analyst needs to navigate to Microsoft Security Advanced Hunting, retrieve data related to assets with a certain level of exposure, and then start building a plan for each asset to reduce its exposure. That plan needs to take into consideration the nature of the exposure, the location where the asset is hosted, and the characteristics of the asset, and it requires working knowledge of each impacted system. This approach: Is a time-consuming process, especially when taking into consideration the learning curve associated with understanding each exposure before deciding on the best course of exposure reduction; and Can result in some undesired habits, like adopting a reactive rather than proactive approach, prioritizing only assets with a certain exposure risk level, or attending only to exposures that are already familiar to the person reviewing the list of exposures and attack paths.

Overview of Exposure Management

Microsoft Security Exposure Management is a security solution that provides a unified view of security posture across company assets and workloads. Security Exposure Management enriches asset information with security context that helps you to proactively manage attack surfaces, protect critical assets, and explore and mitigate exposure risk.

Who uses Security Exposure Management? Security Exposure Management is aimed at: Security and compliance admins responsible for maintaining and improving organizational security posture. Security operations (SecOps) and partner teams who need visibility into data and workloads across organizational silos to effectively detect, investigate, and mitigate security threats. Security architects responsible for solving systematic issues in overall security posture. Chief Information Security Officers (CISOs) and security decision makers who need insights into organizational attack surfaces and exposure in order to understand security risk within organizational risk frameworks.

What can I do with Security Exposure Management? With Security Exposure Management, you can: Get a unified view across the organization Manage and investigate attack surfaces Discover and safeguard critical assets Manage exposure Connect your data

Reference links: Overview What is Microsoft Security Exposure Management (MSEM)?
What's new in MSEM Get started Start using MSEM MSEM prerequisites How to import data from external data connectors in MSEM Concept Learn about critical asset management in MSEM Learn about attack surface management in MSEM Learn about exposure insights in MSEM Learn about attack paths in MSEM How-To Guide Review and classify critical assets in MSEM Review security initiatives in MSEM Investigate security metrics in MSEM Review security recommendations in MSEM Query the enterprise exposure graph MSEM Explore with the attack surface map in MSEM Review potential attack paths in MSEM Integration and licensing for MSEM Compare MSEM with Secure Score Overview of Security Copilot plugins and skills Microsoft Security Copilot is a generative AI-powered assistant designed to augment security operations by accelerating detection, investigation, and response. Its extensibility through plugins and skills enables organizations to tailor the platform to their unique environments, integrate diverse data sources, and automate complex workflows. Plugin Architecture and Categories: Security Copilot supports a growing ecosystem of plugins categorized into: First-party plugins: Native integrations with Microsoft services such as Microsoft Sentinel, Defender XDR, Intune, Entra, Purview, and Defender for Cloud. Third-party plugins: Integrations with external security platforms and ISVs, enabling broader telemetry and contextual enrichment. Custom plugins: User-developed extensions using KQL, GPT, or API-based logic to address specific use cases or data sources. Plugins act as grounding sources—providing context, verifying responses, and enabling Copilot to operate across embedded experiences or standalone sessions. Users can toggle plugins on/off, prioritize sources, and personalize settings (e.g., default Sentinel workspace) to streamline investigations. Skills and Promptbooks Skills in Security Copilot are modular capabilities that guide the AI in executing tasks such as incident triage, threat hunting, or policy analysis. These are often bundled into promptbooks, which are reusable, scenario-driven workflows that combine plugins, prompts, and logic to automate investigations or compliance checks. Security analysts can create, manage, and share promptbooks across tenants, enabling consistent execution of best practices. Promptbooks can be customized to include plugin-specific logic, such as querying Microsoft Graph API or running KQL-based detections. Role-Based Access and Governance Security Copilot enforces role-based access through Entra ID security groups: Copilot Owners: Full access to manage plugins, promptbooks, and tenant-wide settings. Copilot Contributors: Can create sessions and use promptbooks but have limited plugin publishing rights. Each embedded experience may require additional service-specific roles (e.g., Sentinel Reader, Endpoint Security Manager) to access relevant data. Governance files and onboarding templates help teams align plugin usage with organizational policies. Connecting Exposure Management with Security Copilot There are multiple benefits of connecting MSEM with Security Copilot (as explained in section 1 [Introduction] of this paper). We wrote a plugin with two skills to harness the Exposure Management insights within Security Copilot and to eventually understand the exposure of assets hosted in a particular cloud platform by your organization and of assets belonging to a specific user. 
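To make the idea concrete before looking at the architecture, here is a rough KQL sketch of the kind of logic the first skill could run. This is not the published plugin query (that lives in the GitHub YAML linked below): the ExposureGraphNodes node label, the cloudPlatforms property path, and the availability of an ExposureLevel column on DeviceInfo are assumptions and may differ from the actual schema in your tenant.

// Illustrative sketch only - the shipped plugin query on GitHub may differ.
let cloud_platform = "Azure";      // e.g. Azure, AWS, GCP
let exposure_level = "Low";        // e.g. Low, Medium, High
ExposureGraphNodes
| where NodeLabel == "device"                                    // assumed label value
| extend raw = parse_json(tostring(NodeProperties)).rawData
| extend CloudPlatform = tostring(raw.cloudPlatforms)            // assumed property path
| where CloudPlatform has cloud_platform
| join kind=inner (
    DeviceInfo
    | summarize arg_max(Timestamp, ExposureLevel) by DeviceName  // latest exposure level per device
  ) on $left.NodeName == $right.DeviceName
| where ExposureLevel =~ exposure_level
| project DeviceName, CloudPlatform, ExposureLevel

The second skill follows the same pattern, except that it pivots on the asset owner (the user principal name) instead of the hosting cloud platform.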
A high-level architecture of the connectivity looks like this: The two skills of the plugin correspond to the following two use cases: Obtain the exposure of an asset hosted on a particular cloud platform by your organization Obtain the exposure of an asset belonging to a specific user In each of the above use cases, you can also specify the exposure level for which you want to extract the data. Plugin Code (YAML) GitHub - Microsoft Security Exposure Management plugin for Security Copilot - YAML Proof of Concept (screen video)

Conclusion

Here, we proposed an alternative approach that drives up the SOC's efficiency and helps the organization reduce the time from exposure discovery to exposure reduction. The proposed approach allows the SOC analyst to retrieve assets that fit a certain profile, i.e. prompt Security Copilot to "List all assets hosted on Azure with Low Exposure Level". After all affected assets are retrieved, the user can then prompt Security Copilot to "For each asset, help me create a 7-day plan to reduce these exposures" and can finally conclude with the prompt "Create an Executive Report, start by explaining to a non-technical audience the risks associated with the identified exposures, then list all affected assets, along with a summary of the steps needed to reduce the exposures identified". These prompts can also be organized in a promptbook, further reducing the burden on the SOC analyst, and can be run through automation at regular intervals, where the automation can later email the report to the intended audience or can be further extended to create relevant tickets in the IT Service Management system. An additional approach to risk management is to keep an eye on highly targeted personas within the organization. With the proposed integration, a SOC analyst can prompt Security Copilot to find "What are the exposure risks associated with the devices owned by the Contoso person john.doe@contoso.com". This helps the SOC analyst identify and remediate attack paths targeting devices used by highly targeted persons, and, within the same session, the analyst can start digging deeper into finding any potential exploitation of these exposures, get recommendations on how to reduce these exposures, and draft an action plan.

Smarter Prompts for Smarter Investigations: Dynamic Prompt Suggestions in Security Copilot
When a security analyst turns to an AI system for help—whether to hunt threats, investigate alerts, or triage incidents—the first step is usually a natural language prompt. But if that prompt is too vague, too general, or not aligned with the system’s capabilities, the response won’t be helpful. In high-stakes environments like cybersecurity, that’s not just a missed opportunity, it’s a risk. That’s exactly the problem we tackled in our recent paper, Dynamic Context-Aware Prompt Recommendations for Domain-Specific Applications, now published and deployed as a new skill in Security Copilot. Why Prompting Is a Bigger Problem in Security Than It Seems LLMs have made impressive progress in general-purpose settings—helping users write emails, summarize documents, or answer trivia. These systems often include smart prompt recommendations based on the flow of conversation. But when you shift into domain-specific systems like Microsoft Security Copilot, the game changes. Security analysts don’t ask open-ended questions. They ask task-specific ones: “List devices that ran a malicious file in the last 24 hours.” “Correlate failed login attempts across services.” “Visualize outbound traffic from compromised machines.” These questions map directly to skills—domain-specific functions that query data, connect APIs, or launch workflows. And that means prompt recommendations need to be tightly aligned with the available skills, underlying datasets, and current investigation context. General-purpose prompt systems don’t know how to do that. What Makes Domain-Specific Prompting Hard Designing prompt recommendations for systems like Security Copilot comes with unique constraints: Constrained Skill Set: The AI can only take actions it’s configured to support. Prompts must align with those skills—no hallucinations allowed. Evolving Context: A single investigation might involve multiple rounds of prompts, results, follow-ups, and pivots. Prompt suggestions must adapt dynamically. Deep Domain Knowledge: It’s not enough to suggest “Check network logs.” A useful prompt needs to reflect how real analysts work—across Defender, Sentinel, and more. Scalability: As new skills are added, prompt systems must scale without requiring constant manual curation or rewriting. Our Approach: Dynamic, Context-Aware, and Skill-Constrained We introduce a dynamic prompt recommendation system for Security Copilot. The key innovations include: Contextual understanding of the session: We track the user’s investigation path and surface prompts that are relevant to what they’re doing now, not just generic starters. Skill-awareness: The system knows what internal capabilities exist (e.g., “list devices,” “query login events”) and only recommends prompts that can be executed via those skills. Domain knowledge injection: By encoding metadata about products, datasets, and typical workflows (e.g., MITRE attack stages), the system produces prompts that make sense in security analyst workflows. Scalable prompt generation: Rather than relying on hardcoded lists, our system dynamically generates and ranks prompt suggestions. What It Looks Like in Action The dynamic prompt suggestion system is now live in Microsoft Entra, available in both Embedded and Immersive experiences. When a user enters a natural language prompt, the system automatically suggests several context-aware follow-up prompts, based on the user's prior interactions and the system’s understanding of the current task. 
These suggestions are generated in real time—users can simply click on a suggestion, and it's executed immediately, allowing for quick and seamless follow-up queries without needing to rephrase or retype. Let's walk through two examples: Embedded Experience We begin with the prompt: "How does Microsoft determine Risky Users?" The system returns the response and generates 3 follow-up suggestions, such as: "List dismissed risky detections." We click on that suggestion, which executes the query and shows the results. New suggestions continue to appear after each prompt execution, making it easy to explore related insights. Immersive Experience We start with a prompt: "Who am I?" Among the 5 suggested prompts, we select: "List the groups user nase74@woodgrove.ms is a member of." The user clicks, the query runs, and more follow-up suggestions appear, enabling a natural, guided flow throughout the session. Why This Matters for the Future of Security AI Prompting isn't just an interface detail—it's the entry point to intelligence. And in cybersecurity, where time, accuracy, and reliability matter, we need AI systems that are not just capable, but cooperative. Our research contributes to a future where security analysts don't have to be prompt engineers to get the most out of AI. By making prompt recommendations dynamic, contextual, and grounded in real domain knowledge, we help close the gap between LLM potential and security reality. Interested in learning more? Check out the full paper: Dynamic Context-Aware Prompt Recommendations for Domain-Specific Applications If you're using or building upon this work in your own research, we'd appreciate you citing our paper: @article{tang2025dynamic, title={Dynamic Context-Aware Prompt Recommendation for Domain-Specific AI Applications}, author={Tang, Xinye and Zhai, Haijun and Belwal, Chaitanya and Thayanithi, Vineeth and Baumann, Philip and Roy, Yogesh K}, journal={arXiv preprint arXiv:2506.20815}, year={2025} }

Automating Phishing Email Triage with Microsoft Security Copilot
This blog details automating phishing email triage using Azure Logic Apps, Azure Function Apps, and Microsoft Security Copilot. Deployable in under 10 minutes, this solution primarily analyzes email intent without relying on traditional indicators of compromise, accurately classifying benign/junk, suspicious, and phishing emails. Benefits include a reduced manual workload, improved threat detection, and optional seamless integration with Microsoft Sentinel – enabling analysts to see Security Copilot analysis within the incident itself. Designed for flexibility and control, this Logic App is a customizable solution that can be self-deployed from GitHub. It helps automate phishing response at scale without requiring deep coding expertise, making it ideal for teams that prefer a more configurable approach and want to tailor workflows to their environment. The solution streamlines response and significantly reduces manual effort. Access the full solution on the Security Copilot GitHub: GitHub - UserReportedPhishing Solution. For teams looking for a more sophisticated, fully integrated experience, the Security Copilot Phishing Triage Agent represents the next generation of phishing response. Natively embedded in Microsoft Defender, the agent autonomously triages phishing incidents with minimal setup. It uses advanced LLM-based reasoning to resolve false alarms, enabling analysts to stay focused on real threats. The agent offers step-by-step decision transparency and continuously learns from user feedback. Read the official announcement here.

Introduction: Phishing Challenges Continue to Evolve

Phishing continues to evolve in both scale and sophistication, but a growing challenge for defenders isn't just stopping phishing, it's scaling response. Thanks to tools like Outlook's "Report Phishing" button and increased user awareness, organizations are now flooded with user-reported emails, many of which are ambiguous or benign. This has created a paradox: better detection by users has overwhelmed SOC teams, turning email triage into a manual, rotational task dreaded for its repetitiveness and time cost, often taking over 25 minutes per email to review. Our solution addresses that problem by automating the triage of user-reported phishing through AI-driven intent analysis. It's not built to replace your secure email gateways or Microsoft Defender for Office 365; those tools have already done their job. This system assumes the email: Slipped past existing filters, Was suspicious enough for a user to escalate, and Lacks typical IOCs like malicious domains or attachments. As a former attacker, I spent years crafting high-quality phishing emails to penetrate the defenses of major banks. Effective phishing doesn't rely on obvious IOCs like malicious domains, URLs, or attachments… the infrastructure often appears clean. The danger lies in the intent. This is where Security Copilot's LLM-based reasoning is critical, analyzing structure, context, tone, and seasonal pretexts to determine whether an email is phishing, suspicious, spam, or legitimate. What makes this novel is that it's the first solution built specifically for the "last mile" of phishing defense, where human suspicion meets automation, and intent is the only signal left to analyze. It transforms noisy inboxes into structured intelligence and empowers analysts to focus only on what truly matters.
Solution Overview: How the Logic App Solution Works (and Why It's Different) Core Components: Azure Logic Apps: Orchestrates the entire workflow, from ingestion to analysis, and 100% customizable. Azure Function Apps: Parses and normalizes email data for efficient AI consumption. Microsoft Security Copilot: Performs sophisticated AI-based phishing analysis by understanding email intent and tactics, rather than relying exclusively on predefined malicious indicators. Key Benefits: Rapid Analysis: Processes phishing alerts and, in minutes, delivers comprehensive reports that empower analysts to make faster, more informed triage decisions – compared to manual reviews that can take up to 30 minutes. And, unlike analysts, Security Copilot requires zero sleep! AI-driven Insights: LLM-based analysis is leveraged to generate clear explanations of classifications by assessing behavioral and contextual signals like urgency, seasonal threats, Business Email Compromise (BEC), subtle language clues, and otherwise sophisticated techniques. Most importantly, it identifies benign emails, which are often the bulk of reported emails. Detailed, Actionable Reports: Generates clear, human-readable HTML reports summarizing threats and recommendations for analyst review. Robust Attachment Parsing: Automatically examines attachments like PDFs and Excel documents for malicious content or contextual inconsistencies. Integrated with Microsoft Sentinel: Optional integration with Sentinel ensures central incident tracking and comprehensive threat management. Analysis is attached directly to the incident, saving analysts more time. Customization: Add, move, or replace any element of the Logic App or prompt to fit your specific workflows. Deployment Guide: Quick, Secure, and Reliable Setup The solution provides Azure Resource Manager (ARM) templates for rapid deployment: Prerequisites: Azure Subscription with Contributor access to a resource group. Microsoft Security Copilot enabled. Dedicated Office 365 shared mailbox (e.g., phishing@yourdomain.com) with Mailbox.Read.Shared permissions. (Optional) Microsoft Sentinel workspace. Refer to the up to date deployment instructions on the Security Copilot GitHub page. Technical Architecture & Workflow: The automated workflow operates as follows: Email Ingestion: Monitors the shared mailbox via Office 365 connector. Triggers on new email arrivals every 3 minutes. Assumes that the reported email has arrived as an attachment to a "carrier" email. Determine if the Email Came from Defender/Sentinel: If the email came from Defender, it would have a prepended subject of “Phishing”, if not, it takes the “False” branch. Change as necessary. Initial Email Processing: Exports raw email content from the shared mailbox. Determines if .msg or .eml attachments are in binary format and converts if necessary. Email Parsing via Azure Function App: Extracts data from email content and attachments (URLs, sender info, email body, etc.) and returns a JSON structure. Prepares clean JSON data for AI analysis. This step is required to "prep" the data for LLM analysis due to token limits. Click on the “Parse Email” block to see the output of the Function App for any troubleshooting. You'll also notice a number of JSON keys that are not used but provided for flexibility. Security Copilot Advanced AI Reasoning: Analyzes email content using a comprehensive prompt that evaluates behavioral and seasonal patterns, BEC indicators, attachment context, and social engineering signals. 
Scores cumulative risk based on structured heuristics without relying solely on known malicious indicators. Returns validated JSON output (some customers are parsing this JSON and performing other actions). This is where you would customize the prompt, should you need to add some of your own organizational situations or if the Logic App needs to be tuned: JSON Normalization & Error Handling: A "normalization" Azure Function ensures output matches the expected JSON schema. Sometimes LLMs will stray from a strict output structure; this step aims to solve that problem. If you add or remove anything from the Parse Email code that alters the structure of the JSON, this and the next block will need to be updated to match your new structure. Detailed HTML Reporting: Generates a detailed HTML report summarizing AI findings, indicators, and recommended actions. Reports are emailed directly to SOC team distribution lists or ticketing systems. Optional Sentinel Integration: Adds the reasoning & output from Security Copilot directly to the incident comments. This is the ideal location for output since the analyst is already in the security.microsoft.com portal. It waits up to 15 minutes for logs to appear, in situations where the user reports before an incident is created. The solution works pretty well out of the box but may require some tuning, so give it a test. Here are some examples of the type of Security Copilot reasoning. Benign email detection: Example of phishing email detection: More sophisticated phishing with subtle clues:

Enhanced Technical Details & Clarifications

Attachment Processing: When multiple email attachments are detected, the Logic App processes each binary-format email sequentially. If PDF or Excel attachments are detected, they are parsed for content and are evaluated appropriately for content and intent. Security Copilot Reliability: The Security Copilot Logic App API call uses an extensive retry policy (10 retries at 10-minute intervals) to ensure reliable AI analysis despite intermittent service latency. If you run out of SCUs in an hour, it will pause until they are refreshed and continue. Sentinel Integration Reliability: Acknowledges inherent Sentinel logging delays (up to 15 minutes). Implements retry logic and explicit manual alerting for unmatched incidents, if the analysis runs before the incident is created. Security Best Practices: Compare the Function & Logic App to your company security policies to ensure compliance. Credentials, API keys, and sensitive details utilize Azure Managed Identities or secure API connections. No secrets are stored in plaintext. Azure Function Apps perform only safe parsing operations; attachments and content are never executed or opened insecurely. Be sure to check out how the Microsoft Defender for Office team is improving detection capabilities as well: Microsoft Defender for Office 365's Language AI for Phish: Enhancing Email Security | Microsoft Community Hub.

Using parameterized functions with KQL-based custom plugins in Microsoft Security Copilot
In this blog, I will walk through how you can build functions based on a Microsoft Sentinel Log Analytics workspace for use in custom KQL-based plugins for Security Copilot. The same approach can be used for Azure Data Explorer and Defender XDR, so long as you follow the specific guidance for either platform. A link to those steps is provided in the Additional Resources section at the end of this blog. But first, it's helpful to clarify what parameterized functions are and why they are important in the context of Security Copilot KQL-based plugins. Parameterized functions accept input details (variables) such as lookback periods or entities, allowing you to dynamically alter parts of a query without rewriting the entire logic. Parameterized functions are important in the context of Security Copilot plugins for several reasons: Dynamic prompt completion: Security Copilot plugins often accept user input (e.g., usernames, time ranges, IPs). Parameterized functions allow these inputs to be consistently injected into KQL queries without rebuilding query logic. Plugin reusability: By using parameters, a single function can serve multiple investigation scenarios (e.g., checking sign-ins, data access, or alerts for any user or timeframe) instead of hardcoding different versions. Maintainability and modularity: Parameterized functions centralize query logic, making it easier to update or enhance without modifying every instance across the plugin spec. To modify the logic, just edit the function in Log Analytics, test it, then save it, without needing to change the plugin at all or re-upload it into Security Copilot. It also significantly reduces the need to ensure that the query part of the YAML is perfectly indented and tabbed as required by the OpenAPI specification; you only need to worry about formatting a single line instead of several (potentially hundreds). Validation: Separating query logic from input parameters improves query reliability by avoiding the possibility of malformed queries. No matter what the input is, it's treated as a value, not as part of the query logic. Plugin Spec mapping: OpenAPI-based Security Copilot plugins can map user-provided inputs directly to function parameters, making the interaction between user intent and query execution seamless.

Practical example

In this case, we have a 139-line KQL query that we will reduce to exactly one line that goes into the KQL plugin. In other cases, this number could be even higher. Without using functions, this entire query would have to form part of the plugin. Note: The rest of this blog assumes you are familiar with KQL custom plugins: how they work and how to upload them into Security Copilot.
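To preview where this ends up, here is a heavily condensed sketch of the wrapped query. The function name GenAIDataEvents and the trimmed domain list are illustrative only; the real function wraps the full query shown next, but the way the lookback and User_Dept parameters slot in, and the one-line call the plugin ends up using, look like this:

// Condensed sketch - the actual function wraps the full 139-line query below.
let GenAIDataEvents = (User_Dept:string, lookback:timespan) {
    CloudAppEvents
    | where TimeGenerated > ago(lookback)
    | where RawEventData.TargetDomain has_any ('openai.com', 'gemini.google.com')   // trimmed list
    | extend UserId = tostring(RawEventData.UserId)
    | join kind=leftouter (
        IdentityInfo
        | where TimeGenerated > ago(lookback)
        | extend AccountUpn = tolower(AccountUPN)
      ) on $left.UserId == $right.AccountUpn
    | where Department == User_Dept
    | summarize Event_Count = count() by UserId, Department
};
// Once saved as a Log Analytics function, the plugin's KQL collapses to one line:
GenAIDataEvents('Sales', 30d)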
CloudAppEvents | where RawEventData.TargetDomain has_any ( 'grok.com', 'x.ai', 'mistral.ai', 'cohere.ai', 'perplexity.ai', 'huggingface.co', 'adventureai.gg', 'ai.google/discover/palm2', 'ai.meta.com/llama', 'ai2006.io', 'aibuddy.chat', 'aidungeon.io', 'aigcdeep.com', 'ai-ghostwriter.com', 'aiisajoke.com', 'ailessonplan.com', 'aipoemgenerator.org', 'aissistify.com', 'ai-writer.com', 'aiwritingpal.com', 'akeeva.co', 'aleph-alpha.com/luminous', 'alphacode.deepmind.com', 'analogenie.com', 'anthropic.com/index/claude-2', 'anthropic.com/index/introducing-claude', 'anyword.com', 'app.getmerlin.in', 'app.inferkit.com', 'app.longshot.ai', 'app.neuro-flash.com', 'applaime.com', 'articlefiesta.com', 'articleforge.com', 'askbrian.ai', 'aws.amazon.com/bedrock/titan', 'azure.microsoft.com/en-us/products/ai-services/openai-service', 'bard.google.com', 'beacons.ai/linea_builds', 'bearly.ai', 'beatoven.ai', 'beautiful.ai', 'beewriter.com', 'bettersynonyms.com', 'blenderbot.ai', 'bomml.ai', 'bots.miku.gg', 'browsegpt.ai', 'bulkgpt.ai', 'buster.ai', 'censusgpt.com', 'chai-research.com', 'character.ai', 'charley.ai', 'charshift.com', 'chat.lmsys.org', 'chat.mymap.ai', 'chatbase.co', 'chatbotgen.com', 'chatgpt.com', 'chatgptdemo.net', 'chatgptduo.com', 'chatgptspanish.org', 'chatpdf.com', 'chattab.app', 'claid.ai', 'claralabs.com', 'claude.ai/login', 'clipdrop.co/stable-diffusion', 'cmdj.app', 'codesnippets.ai', 'cohere.com', 'cohesive.so', 'compose.ai', 'contentbot.ai', 'contentvillain.com', 'copy.ai', 'copymatic.ai', 'copymonkey.ai', 'copysmith.ai', 'copyter.com', 'coursebox.ai', 'coverler.com', 'craftly.ai', 'crammer.app', 'creaitor.ai', 'dante-ai.com', 'databricks.com', 'deepai.org', 'deep-image.ai', 'deepreview.eu', 'descrii.tech', 'designs.ai', 'docgpt.ai', 'dreamily.ai', 'editgpt.app', 'edwardbot.com', 'eilla.ai', 'elai.io', 'elephas.app', 'eleuther.ai', 'essayailab.com', 'essay-builder.ai', 'essaygrader.ai', 'essaypal.ai', 'falconllm.tii.ae', 'finechat.ai', 'finito.ai', 'fireflies.ai', 'firefly.adobe.com', 'firetexts.co', 'flowgpt.com', 'flowrite.com', 'forethought.ai', 'formwise.ai', 'frase.io', 'freedomgpt.com', 'gajix.com', 'gemini.google.com', 'genei.io', 'generatorxyz.com', 'getchunky.io', 'getgptapi.com', 'getliner.com', 'getsmartgpt.com', 'getvoila.ai', 'gista.co', 'github.com/features/copilot', 'giti.ai', 'gizzmo.ai', 'glasp.co', 'gliglish.com', 'godinabox.co', 'gozen.io', 'gpt.h2o.ai', 'gpt3demo.com', 'gpt4all.io', 'gpt-4chan+)', 'gpt6.ai', 'gptassistant.app', 'gptfy.co', 'gptgame.app', 'gptgo.ai', 'gptkit.ai', 'gpt-persona.com', 'gpt-ppt.neftup.app', 'gptzero.me', 'grammarly.com', 'hal9.com', 'headlime.com', 'heimdallapp.org', 'helperai.info', 'heygen.com', 'heygpt.chat', 'hippocraticai.com', 'huggingface.co/spaces/tiiuae/falcon-180b-demo', 'humanpal.io', 'hypotenuse.ai', 'ichatwithgpt.com', 'ideasai.com', 'ingestai.io', 'inkforall.com', 'inputai.com/chat/gpt-4', 'instantanswers.xyz', 'instatext.io', 'iris.ai', 'jasper.ai', 'jigso.io', 'kafkai.com', 'kibo.vercel.app', 'kloud.chat', 'koala.sh', 'krater.ai', 'lamini.ai', 'langchain.com', 'laragpt.com', 'learn.xyz', 'learnitive.com', 'learnt.ai', 'letsenhance.io', 'letsrevive.app', 'lexalytics.com', 'lgresearch.ai', 'linke.ai', 'localbot.ai', 'luis.ai', 'lumen5.com', 'machinetranslation.com', 'magicstudio.com', 'magisto.com', 'mailshake.com/ai-email-writer', 'markcopy.ai', 'meetmaya.world', 'merlin.foyer.work', 'mieux.ai', 'mightygpt.com', 'mosaicml.com', 'murf.ai', 'myaiteam.com', 'mygptwizard.com', 'narakeet.com', 'nat.dev', 'nbox.ai', 
'netus.ai', 'neural.love', 'neuraltext.com', 'newswriter.ai', 'nextbrain.ai', 'noluai.com', 'notion.so', 'novelai.net', 'numind.ai', 'ocoya.com', 'ollama.ai', 'openai.com', 'ora.ai', 'otterwriter.com', 'outwrite.com', 'pagelines.com', 'parallelgpt.ai', 'peppercontent.io', 'perplexity.ai', 'personal.ai', 'phind.com', 'phrasee.co', 'play.ht', 'poe.com', 'predis.ai', 'premai.io', 'preppally.com', 'presentationgpt.com', 'privatellm.app', 'projectdecember.net', 'promptclub.ai', 'promptfolder.com', 'promptitude.io', 'qopywriter.ai', 'quickchat.ai/emerson', 'quillbot.com', 'rawshorts.com', 'read.ai', 'rebecc.ai', 'refraction.dev', 'regem.in/ai-writer', 'regie.ai', 'regisai.com', 'relevanceai.com', 'replika.com', 'replit.com', 'resemble.ai', 'resumerevival.xyz', 'riku.ai', 'rizzai.com', 'roamaround.app', 'rovioai.com', 'rytr.me', 'saga.so', 'sapling.ai', 'scribbyo.com', 'seowriting.ai', 'shakespearetoolbar.com', 'shortlyai.com', 'simpleshow.com', 'sitegpt.ai', 'smartwriter.ai', 'sonantic.io', 'soofy.io', 'soundful.com', 'speechify.com', 'splice.com', 'stability.ai', 'stableaudio.com', 'starryai.com', 'stealthgpt.ai', 'steve.ai', 'stork.ai', 'storyd.ai', 'storyscapeai.app', 'storytailor.ai', 'streamlit.io/generative-ai', 'summari.com', 'synesthesia.io', 'tabnine.com', 'talkai.info', 'talkpal.ai', 'talktowalle.com', 'team-gpt.com', 'tethered.dev', 'texta.ai', 'textcortex.com', 'textsynth.com', 'thirdai.com/pocketllm', 'threadcreator.com', 'thundercontent.com', 'tldrthis.com', 'tome.app', 'toolsaday.com/writing/text-genie', 'to-teach.ai', 'tutorai.me', 'tweetyai.com', 'twoslash.ai', 'typeright.com', 'typli.ai', 'uminal.com', 'unbounce.com/product/smart-copy', 'uniglobalcareers.com/cv-generator', 'usechat.ai', 'usemano.com', 'videomuse.app', 'vidext.app', 'virtualghostwriter.com', 'voicemod.net', 'warmer.ai', 'webllm.mlc.ai', 'wellsaidlabs.com', 'wepik.com', 'we-spots.com', 'wordplay.ai', 'wordtune.com', 'workflos.ai', 'woxo.tech', 'wpaibot.com', 'writecream.com', 'writefull.com', 'writegpt.ai', 'writeholo.com', 'writeme.ai', 'writer.com', 'writersbrew.app', 'writerx.co', 'writesonic.com', 'writesparkle.ai', 'writier.io', 'yarnit.app', 'zevbot.com', 'zomani.ai' ) | extend sit = parse_json(tostring(RawEventData.SensitiveInfoTypeData)) | mv-expand sit | summarize Event_Count = count() by tostring(sit.SensitiveInfoTypeName), CountryCode, City, UserId = tostring(RawEventData.UserId), TargetDomain = tostring(RawEventData.TargetDomain), ActionType = tostring(RawEventData.ActionType), IPAddress = tostring(RawEventData.IPAddress), DeviceType = tostring(RawEventData.DeviceType), FileName = tostring(RawEventData.FileName), TimeBin = bin(TimeGenerated, 1h) | extend SensitivityScore = case(tostring(sit_SensitiveInfoTypeName) in~ ("U.S. 
Social Security Number (SSN)", "Credit Card Number", "EU Tax Identification Number (TIN)","Amazon S3 Client Secret Access Key","All Credential Types"), 90, tostring(sit_SensitiveInfoTypeName) in~ ("All Full names"), 40, tostring(sit_SensitiveInfoTypeName) in~ ("Project Obsidian", "Phone Number"), 70, tostring(sit_SensitiveInfoTypeName) in~ ("IP"), 50,10 ) | join kind=leftouter ( IdentityInfo | where TimeGenerated > ago(lookback) | extend AccountUpn = tolower(AccountUPN) ) on $left.UserId == $right.AccountUpn | join kind=leftouter ( BehaviorAnalytics | where TimeGenerated > ago(lookback) | extend AccountUpn = tolower(UserPrincipalName) ) on $left.UserId == $right.AccountUpn //| where BlastRadius == "High" //| where RiskLevel == "High" | where Department == User_Dept | summarize arg_max(TimeGenerated, *) by sit_SensitiveInfoTypeName, CountryCode, City, UserId, TargetDomain, ActionType, IPAddress, DeviceType, FileName, TimeBin, Department, SensitivityScore | summarize sum(Event_Count) by sit_SensitiveInfoTypeName, CountryCode, City, UserId, Department, TargetDomain, ActionType, IPAddress, DeviceType, FileName, TimeBin, BlastRadius, RiskLevel, SourceDevice, SourceIPAddress, SensitivityScore

With parameterized functions, follow these steps to simplify the plugin that will be built based on the query above: 1. Define the variables/parameters upfront in the query (BEFORE creating the parameters in the UI). This will put the query in a "temporary" unusable state because the parameters will cause syntax problems at this stage. However, since the plan is to run the query as a function, this is OK. 2. Create the parameters in the Log Analytics UI. Give the function a name and define the parameters exactly as they show up in the query in step 1 above. In this example, we are defining two parameters: lookback, to store the lookback period to be passed to the time filter, and User_Dept, to store the user's department. 3. Test the query. Note the order of parameter definition in the UI, i.e. first User_Dept, THEN the lookback period. You can interchange them if you like, but this will determine how you submit the query using the function. If the User_Dept parameter was defined first, then it needs to come first when executing the function. See the screenshot below. Switching them will result in the wrong parameter being passed to the query and consequently 0 results will be returned. Effect of switched parameters: To edit the function, follow the steps below: Navigate to the Logs menu for your Log Analytics workspace, then select the function icon. Once satisfied with the query and function, build your spec file for the Security Copilot plugin. Note the parameter definition and usage in the sections highlighted in red below. And that's it, from 139 unwieldy KQL lines to one very manageable one! You are welcome 😊 Let's now put it through its paces once uploaded into Security Copilot. We start by executing the plugin using its default settings via the direct skill invocation method. We see that the prompt indeed returns results based on the default values passed as parameters to the function: Next, we still use direct skill invocation, but this time specify our own parameters: Lastly, we test it out with a natural language prompt: Tip: The function does not execute successfully if the default summarize function is used without creating a variable, i.e. if the summarize count() command is used in your query, it results in a system-defined output variable named count_.
To bypass this issue, ensure you use a user-defined variable such as Event_Count, as shown in line 77 below:

Conclusion

In conclusion, leveraging parameterized functions within KQL-based custom plugins in Microsoft Security Copilot can significantly streamline your data querying and analysis capabilities. By encapsulating reusable logic, improving query efficiency, and ensuring maintainability, these functions provide an efficient approach for tapping into data stored across Microsoft Sentinel, Defender XDR and Azure Data Explorer clusters. Start integrating parameterized functions into your KQL-based Security Copilot plugins today and let us have your feedback.

Additional Resources

Using parameterized functions in Microsoft Defender XDR Using parameterized functions with Azure Data Explorer Functions in Azure Monitor log queries - Azure Monitor | Microsoft Learn Kusto Query Language (KQL) plugins in Microsoft Security Copilot | Microsoft Learn Harnessing the power of KQL Plugins for enhanced security insights with Copilot for Security | Microsoft Community Hub

Take Flight with Microsoft Security Copilot Flight School
Greetings pilots, and welcome to another pioneering year of AI innovation with Security Copilot. Find out how your organization can reach new heights with Security Copilot through the many exciting announcements on the way at both Microsoft Secure and RSA 2025. This is why now is the time to familiarize yourself and get airborne with Security Copilot. Go to School Microsoft Security Copilot Flight School is a comprehensive series charted to take students through fundamental concepts of AI definitions and architectures, take flight with prompting and automation, and hit supersonic speeds with Logic Apps and custom plugins. By the end of the course, students should be equipped with the requisite knowledge for how to successfully operate Security Copilot to best meet their organizational needs. The series contains 11 episodes with each having a flight time of around 10 minutes. Security Copilot is something I really, really enjoy, whether I’m actively contributing to its improvement or advocating for the platform’s use across security and IT workflows. Ever since I was granted access two years ago – which feels like a millennium in the age of AI – it’s been a passion of mine, and it’s why just recently I officially joined the Security Copilot product team. This series in many ways reflects not only my passion but similar passion found in my marketing colleagues Kathleen Lavallee (Senior Product Marketing Manager, Security Copilot) Shirleyse Haley (Senior Security Skilling Manager), and Shateva Long (Product Manager, Security Copilot). I hope that you enjoy it just as much as we did making it. Go ahead, and put on your favorite noise-cancelling headphones, it’s time, pilots, to take flight. Log Flight Hours There are two options for watching Security Copilot Flight School: either on Microsoft Learn or via the Youtube Playlist found on the Microsoft Security Youtube Channel. The first two episodes focus on establishing core fundamentals of Security Copilot platform design and architecture – or perhaps attaining your instrument rating. The episodes thereafter are plotted differently, around a standard operating procedure. To follow the ideal flight path Security Copilot should be configured and ready to go – head over to MS Learn and the Adoption Hub to get airborne. It’s also recommended that pilots watch the series sequentially, and be prepared to follow along with resources found on Github, to maximize learning and best align with the material. This will mean that you’ll need to coordinate with a pilot with owner permissions for your instance to create and manipulate the necessary resources. Episode 1 - What is Microsoft Security Copilot? Security is complex and requires highly specialized skills to face the challenges of today. Because of this, many of the people working to protect an organization work in silos that can be isolated from other business functions. Further, enterprises are highly fragmented environments with esoteric systems, data, and processes. All of which takes a tremendous amount of time, energy, and effort just to do the day-to-day. Security Copilot is a cloud-based, AI-powered security platform that is designed to address the challenges presented by complex and fragmented enterprise environments by redefining what security is and how security gets done. What is AI, and why exactly should it be used in a cybersecurity context? 
Episode 2 - AI Orchestration with Microsoft Security Copilot Why is The Paper Clip Pantry a 5-star restaurant renowned the world over for its Wisconsin Butter Burgers? Perhaps it's how a chef uses a staff with unique skills and orchestrates the sourcing of resources in real time, against specific contexts, to complete an order. After watching this episode you'll understand how AI Orchestration works, why nobody eats a burger with only ketchup, and how the Paper Clip Pantry operates just like the Security Copilot Orchestrator. Episode 3 – Standalone and Embedded Experiences Do you have a friend who eats pizza in an inconceivable way? Maybe they eat a slice crust-first, or dip it into a sauce you never thought compatible with pizza? They work with pizza differently, just like any one security workflow could be different from one task, team, or individual to the next. This philosophy is why Security Copilot has two experiences – solutions embedded within products, and a standalone portal – to augment workflows no matter their current state. This episode will begin covering those experiences. Episode 4 – Other Embedded Experiences Turns out you can also insist upon putting cheese inside of pizza crust, or bake it thick enough to require a fork and knife. I imagine it's probably something Windows 95 Man would do. In this episode, the Microsoft Entra, Purview, Intune, and Microsoft Threat Intelligence products showcase how Security Copilot advances their workflows within their portals. Beyond baking in the concepts of many workflows, many operators, the takeaway from this episode is that Security Copilot works with security-adjacent workflows – IT, Identity, and DLP. Episode 5 – Manage Your Plugins Plugins source different insights across your environment. Like our chef in The Paper Clip Pantry, we should probably define what we want to cook, what chefs to use, and set permissions for those that can interact with any input or output from the kitchen. Find out what plugins add to Security Copilot and how you can set plugin controls for your team and organization. Episode 6 – Prompting Is this an improv lesson, or a baking show? Or maybe, if you watch this episode, you'll learn how Security Copilot handles natural language inputs to provide you meaningful answers known as responses. Episode 7 – Prompt Engineering With the fundamentals of prompting in your flight log, it's time to soar a bit higher with prompt engineering. In this episode you will learn how to structure prompts in a way that maximizes the benefits of Security Copilot and begin building workflows. Congrats, pilot, your burgers will no longer come with just ketchup. Episode 8 – Using Promptbooks What would it look like to find a series of prompts and run them, in the same sequence with the same output, every time? You guessed it, a promptbook, a repeatable workflow in the age of AI. See where to access promptbooks within the platform, and claw back some of your day to perfect your next butter burger. Episode 9 – Custom Promptbooks You've been tweaking your butter burger recipe for months now. You've finally landed at the perfect version by incorporating a secret nacho cheese recipe. The steps are defined, the recipe perfect. How do you repeat it? Just like your butter burger creation, you might discover or design workflows with Security Copilot. With custom promptbooks you can repeat and share them across your organization. In this episode you'll learn about the different ways Security Copilot helps you develop your own custom AI workflows.
Episode 10 – Logic Apps System automation, robot chefs? Actions? What if customers could order butter burgers with the click of a button, and the kitchen staff would automatically make one? Or perhaps every Friday at 2pm a butter burger was just delivered to you? Chances are there are different conditions across your organization that, when present, require a workflow to begin. With Logic Apps, Security Copilot can be used to automatically aid workflows across any system a Logic App can connect to. More automation, less mouse clicking, that's a flight plan everyone can agree on. Episode 11 – Extending to Your Ecosystem A famed restaurant critic stopped into The Paper Clip Pantry, ordered a butter burger, and it's now the burger everyone is talking about. Business is booming and it's time to expand the menu – maybe a butter burger pizza, perhaps a doughnut butter burger? But you'll need some new recipes and sources of knowledge to achieve this. Like a food menu, the possibilities for expanding Security Copilot's capabilities are endless. In this episode learn how this can be achieved with custom plugins and knowledgebases. Once you have that in your log, you will be a certified Ace, and ready to take flight with Security Copilot.

Take Flight

I really hope that you not only learn something new but have fun taking flight with the Security Copilot Flight School. As with any new and innovative technology, the learning never stops, and there will be opportunities to log more flight hours from our expert flight crews. Stay tuned at the Microsoft Security Copilot video hub, Microsoft Secure, and RSA 2025 for more content in the next few months. If you think it's time to get the rest of your team and/or organization airborne, check out the Security Copilot adoption hub to get started: aka.ms/SecurityCopilotAdoptionHub

Carry-on Resources

Our teams have been hard at work building solutions to extend Security Copilot; you can find them on our community GitHub page at: aka.ms/SecurityCopilotGitHubRepo To stay close to the latest in product news and development, and to interact with our engineering teams, please join the Security Copilot CCP to get the latest information: aka.ms/JoinCCP

Advancing Security Copilot with MAGIC: Automating Self-Correction in NL2KQL and Beyond
MAGIC, recently accepted at the Association for the Advancement of Artificial Intelligence (AAAI) 2025, introduces a breakthrough in automated self-correction for natural language-to-code systems.

Empowering Security Copilot with NL2KQL: Transforming Natural Language into Insightful KQL queries
By leveraging NL2KQL, a powerful framework that translates natural language into KQL queries, Security Copilot makes querying in KQL as intuitive as a conversation. In this article, we'll explore the story behind NL2KQL, its potential to transform security operations, and why it matters for the future of cybersecurity.

Next-Gen Device Incident Investigation & Threat Hunting with Custom Plugins
The Security Copilot custom plugin empowers you to extend Security Copilot functionalities beyond the preinstalled and third-party plugins. This blog introduces two custom plugins that you can install and use in your environment. An incident investigation case study will be used to demonstrate the features of these two custom plugins. Additionally, a step-by-step guide will walk you through the setup process, which only takes a few clicks. The first custom plugin, “Custom Plugin Defender Device Investigation”, provides the following skills: Title: File - Files Downloaded Description: Lists files downloaded to this device in specific timeframe within past 30 days. Title: File - Last 15 Days Files Downloaded Description: Lists files downloaded to this device in the last 15 days. Title: File - Any Device Events Related To This File Description: Display device events that include the filename, in specific timeframe. Title: File - Sensitive Files Events Description: Lists sensitive files events on this device in the last 10 days. Title: File - File Origin Description: Display the origin or source of the file, in past 30 days. Title: Process - Process Executions Summary Description: Summary of process executions on this device in specific timeframe. Title: Process - Detailed Process Executions Description: Detailed all process execution events on device within a brief period, e.g. an hour. Title: Process - Detailed Process Events Description: Detailed specific process execution events on device within a defined time frame. Title: Lateral Movement - RDP To Device Description: Inbound RDP connection to this device in a specific timeframe. Title: Lateral Movement - Logon To Device Description: Logon events from other devices to this device in a specific timeframe. Title: Lateral Movement - Logons To Device In Last 10 Days Description: Logon events from other devices to this device in the last 10 days. Title: Network - Outbound Network Events Description: Device outbound network events, including attempts and failed connections. Title: Network - Inbound Network Events Description: Device inbound network events and attempts in a specific timeframe. Title: Network - Device Listening Ports Description: Displays device listening ports in specific timeframe. Title: Device Events - Scheduled Task Events Description: Scheduled task events seen on a device in a specific timeframe. Title: Device Events - User Account Events Description: User account events seen on a device in a specific timeframe. Title: Device Events - User Account Added Or Removed From Local Group Description: User account added or removed from local group in a specific timeframe. Title: Suspicious Activities - ASR Rules Triggered Description: ASR rules that were triggered on this device in the past 7 days. Title: Suspicious Activities - ASMSI Script Detection Description: Script detection from Windows Antimalware Scan Interface (AMSI) in past 7 days. Title: Suspicious Activities - Exploit Guard Events Description: Exploit Guard events detected on this device in past 7 days. Title: Suspicious Activities - Network Protection Events Description: Network Protection events triggered on this device in the past 7 days. Title: Suspicious Activities - Device Tampering Attempts Description: Possible tampering attempts on this device in the past 7 days. The second custom plugin, “Custom Plugin Defender Device Info”, offers specific device information often needed during an investigation. 
Its skills include: Title: Device OS Information Description: Latest device OS information with the device name as the input. Title: Device Current and Past IPs Description: The current and past IPs assigned to this device in the last 10 days. Title: Device Users and Login Counts Description: List users logged onto this device and the number of times, within the last 10 days. Title: Device Alert Information Description: Alerts observed on this device in the last 30 days. Title: Device Installed Applications Description: Currently installed applications on this device. Title: Device Vulnerability Information Description: Vulnerabilities identified on this device. Title: Device Critical Vulnerabilities Description: Vulnerability with CVSS score 7 or higher, or exploit is publicly available. Both custom plugins are available for download from the Security Copilot GitHub repository at this link. Step-by-step guides on how to install the custom plugin will be covered later in this blog. Let's start by demonstrating some of the capabilities of the two custom plugins through a case study of a Microsoft Defender XDR incident. For this incident, the Security Copilot incident summary reveals that the threat actor used a credential phishing attack to gain initial access. Over the course of the incident, several instances of lateral movement, credential access, and privilege escalation were detected, impacting users and devices across the network. Key activities included the use of tools like Mimikatz and Rubeus, suspicious remote sessions, and evidence of system manipulation. From the Security Copilot incident summary, you learn that the attack started when user "jonaw" clicked on a malicious URL in an email. Following that, a suspicious remote session was detected on device "vnevado-win10v". To investigate the suspicious remote session on the device, one way is to leverage the "Lateral Movement – Logon To Device" skill from the "Custom Plugin Defender Device Investigation" plugin in Security Copilot's standalone mode. This skill presents the logon events that occurred on the device within the specified timeframe. The logon events include console logons, Remote Desktop logons, remote registry logons, scheduled task logons, and more. You can invoke this skill by navigating to the System Capabilities menu option from the prompt bar. To get to the System Capabilities menu option, select the Prompts option from the prompt bar, as shown next. Then the System Capabilities menu option appears. This skill is located under the plugin named "CUSTOM PLUGIN DEFENDER DEVICE INVESTIGATION", as shown next. Once this skill is selected, you will need to fill in three input fields: the device name, start time, and end time. For this case study, the alert for the suspicious remote session was triggered for device vnevado-win10v, occurring at approximately 9:42 UTC on November 22nd, 2024. For the investigation, let's set the start time to 2024-11-22 9:30 UTC and the end time to 9:50 UTC, as shown in the next screenshot. The next screenshot demonstrates that Security Copilot executes this skill. Using the "Export to Excel" option in the Copilot response, you can download then manually review the logon events. Upon inspection, it is discovered that for device vnevado-win10v, there is a long list of logon events involving different user accounts within the 20-minute time frame. A screenshot showing a portion of the logon events is displayed next.
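Before moving on, for readers curious about what such a skill typically wraps, here is a rough sketch of an advanced hunting query that could back "Lateral Movement - Logon To Device". The actual query shipped in the GitHub plugin may differ; the device name and time window below are simply the case-study values used above.

// Illustrative sketch - not necessarily the plugin's exact query.
let device_name = "vnevado-win10v";
let start_time  = datetime(2024-11-22 09:30:00);
let end_time    = datetime(2024-11-22 09:50:00);
DeviceLogonEvents
| where Timestamp between (start_time .. end_time)
| where DeviceName startswith device_name
| project Timestamp, DeviceName, ActionType, LogonType,
          AccountDomain, AccountName, RemoteDeviceName, RemoteIP, IsLocalAdmin
| order by Timestamp asc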
You can then ask Security Copilot with this prompt: “Can you review the previous output of the logon events for the device vnevado-win10v between 2024-11-22 09:30 and 2024-11-22 09:50, summarize the logon events and also point out anything suspicious”. The next screenshot displays the Security Copilot prompt along with the beginning of its response. The logon event summary provided by Security Copilot is thorough but a bit long. At the end, it includes the identified suspicious logon activities: There are several instances where logon attempts are followed by successful logons within milliseconds, which could indicate automated or scripted logon attempts. There are 10 logon events with an "Unknown" logon type, which is unusual and may warrant further investigation. The account debrab has one logon event where it is marked as a local admin, which should be verified for legitimacy. For your reference, the last section of the Security Copilot’s logon event summary is shown in the next screen capture. After reviewing the logon event summary for device vnevado-win10v, let’s find out who might be the owner of this device. The “Device Users and Login Counts” skill from the “Custom Plugin Defender Device Info” plugin provides a summary of how many times each user has logged into the device over the past 30 days. Typically, the user with the most logins is likely the device owner. Once the skill is executed for device vnevado-win10v, Security Copilot reports that “user jonaw has logged onto the device vnevado-win10v a total of 189 times in the last 30 days”, as shown in the next screen capture. This helps to identify user “jonaw” as the likely device owner, which in turn makes user “debrab” appear even more suspicious. Let’s go back to the detailed logon events provided by Security Copilot earlier and take another look at user account “debrab”. The next screenshot shows the logon events for device vnevado-win10v, filtered to display only those associated with the user “debrab”. One notable observation is that the logon type for user “debrab” is either batch or unknown, which appears suspicious as well, especially with one batch logon with local admin privilege. What is a batch logon type? You can ask Security Copilot for more insights. The next screenshot displays Copilot’s responses, which explains that a batch logon type is typically used for scheduled tasks. The batch logon seems odd in this case. One of Security Copilot's key features is its ability to distinguish between normal and anomalous behavior in IT operations. In this case, let’s ask Security Copilot whether it’s common for someone with local admin privilege to log on to a device through a batch logon. As seen in the previous screenshot, Security Copilot points out that the batch logon is unusual, as it is typically used for scheduled tasks or automated processes, not for interactive sessions by administrators. Security Copilot’s response further confirms that the batch logon events with user account “debrab” are suspicious. This information and the other Security Copilot observations can assist you in identifying the suspicious remote session detected on device “vnevado-win10v”. The incident summary generated by Security Copilot not only mentions the detection of a suspicious remote session on device vnevado-win10v, but also reports the presence of suspicious files, including mimikatz.exe, rubeus.exe, xcopy.exe, and powershell.exe. The incident summary snippet is displayed next for reference. 
Let's now examine what occurred on the device involving these suspicious files. A quick and easy way to start the investigation is to check for files downloaded to the device and to review the device's process execution events around the time of the incident to identify anything suspicious. Manually checking for downloaded files and examining process execution events can be time-consuming and labor-intensive. However, with the help of Security Copilot, these tasks can be performed more quickly and efficiently.

The "File - Files Downloaded" skill from the "Custom Plugin Defender Device Investigation" plugin can be used to quickly identify files that were downloaded onto a device within a specific time period. Then, the "Process - Process Executions Summary" skill from the same Security Copilot plugin can be used to list the processes that executed on the device during the same timeframe. You can then ask Security Copilot to analyze these processes to identify anything suspicious.

After the "File - Files Downloaded" skill executes, Security Copilot identifies that a file named DomainDominance198.zip was downloaded to device vnevado-win10v. Another thing to keep in mind is that not all the information from the Copilot findings is directly visible in the Security Copilot console. You can expand the output result within the console or export the findings to Excel for a clearer view of the additional details. For this investigation, you can then more thoroughly review the URL from which the file was downloaded, verify the file location through its folder path, and identify the user account associated with the download. The next screenshot displays these additional details as seen in the Excel spreadsheet.

Then, the "Process - Process Executions Summary" skill provides a list of processes executed on the same device, vnevado-win10v, during the same period. Instead of manually reviewing all 128 processes, you can ask Security Copilot to analyze the processes and flag any suspicious ones. In addition, it's worth mentioning that earlier in the investigation, leveraging the Microsoft Entra plugin, Security Copilot reported that user account "jonaw" belongs to Jonathan Wolcott, an account executive in the Sales department. With this information, let's ask Security Copilot to identify any process execution that should typically not be carried out by someone outside of the IT department. Here is the Security Copilot prompt you can use: User "jonaw" is an account executive in the sales department, with this information, can you identify any processes that typically should not be carried out by someone outside of the IT department?

Security Copilot then identifies six suspicious processes and provides its reasoning along the way. Once again, you can export the Security Copilot findings to Excel for a more thorough review. The next screenshot displays the results in Excel, in a more readable format.

Now that a few more suspicious processes have been identified, let's revisit the downloaded file, DomainDominance198.zip, to see if more details can be uncovered. The skill "File - Any Device Events Related To This File" is part of the "Custom Plugin Defender Device Investigation" plugin in Security Copilot. It is designed to identify any device events or activities related to a specific file. It uses the filename as a keyword to filter and display only the device events containing this keyword within a defined time period.
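Conceptually, this keyword-based skill behaves like the advanced-hunting sketch below. The table, keyword, and time window here are illustrative assumptions based on this case study, not the plugin's actual implementation:

DeviceEvents
| where DeviceName startswith "vnevado-win10v"
| where Timestamp between (datetime(2024-11-22 09:30:00) .. datetime(2024-11-22 10:30:00))
// Match the file name keyword in either the file itself or the initiating command line
| where FileName contains "DomainDominance198" or InitiatingProcessCommandLine contains "DomainDominance198"
| project Timestamp, ActionType, FileName, FolderPath, InitiatingProcessFileName, InitiatingProcessCommandLine
| order by Timestamp asc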
For this security incident, let's use this skill to search for device events containing the name of the downloaded file, DomainDominance198. Upon reviewing the Security Copilot response exported to Excel, you can see that a new file, DomainDominance198.ps1, has been created in the same directory as DomainDominance198.zip.

In addition, the "File - File Origin" skill in the "Custom Plugin Defender Device Investigation" plugin provides details about a file's origin or source. It shows where the file came from and any associated file or connection linked to it. In this case, as shown in the next screenshot, Security Copilot reveals that the file DomainDominance198.zip was downloaded from a specific URL, and that the file DomainDominance198.ps1 is associated with the file DomainDominance198.zip, as shown next. The additional details in Security Copilot's responses highlight the exact association, indicating that the File Origin Referrer URL for DomainDominance198.ps1 is DomainDominance198.zip, as shown in the next screen capture.

With these insights, let's use another Security Copilot skill to conduct a more in-depth examination of PowerShell execution events on device vnevado-win10v. The skill "Process - Detailed Process Events" is also part of the "Custom Plugin Defender Device Investigation" plugin. It retrieves detailed process execution events, including process command line information and the parent process execution details, for the specified process on a given device within a defined time frame. When this skill is invoked, it requires four mandatory fields to be filled, as shown next.

Security Copilot then displays the PowerShell execution events identified on device vnevado-win10v within the specified timeframe of 2024-11-22 09:30 to 2024-11-22 10:30, as shown next. From a more condensed text view of the responses from Security Copilot, a range of unusual or potentially harmful behaviors can be observed in the next screenshot. Some of these suspicious events are highlighted in yellow or displayed in bold in the next screenshot. Because the retrieved process execution events include command line details and the parent process, you are able to see both the PowerShell executions and the processes launched with PowerShell as the parent process. The suspicious processes, such as mimikatz.exe, Rubeus.exe, xcopy.exe, PsExec.exe, and others mentioned in the Security Copilot incident summary, are identified here, allowing you to quickly recognize the correlation.

Additionally, you can ask Security Copilot to assist you in reviewing the suspicious events. For instance, immediately after the xcopy command was used to copy the file "Rubeus.exe" to the remote device vnevado-win10b, a subsequent command involving "PsExec.exe" is observed in the detailed PowerShell execution events presented earlier by Security Copilot. The two command lines are shown in the next screen capture. Consulting with Security Copilot reveals that "PsExec.exe" executed a command remotely on the device vnevado-win10b. This command launched "Rubeus.exe" to dump Kerberos tickets for the user "nestorw" and saved the output to C:\Temp\AdminTicket.txt. Security Copilot notes that this action indicates credential dumping and potential lateral movement within the network. The next screenshot shows the prompt along with part of the responses from Security Copilot.
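For readers who prefer to validate these findings directly in advanced hunting, the PowerShell-focused view that the "Process - Detailed Process Events" skill produces could be approximated with a query along these lines. This is a sketch under the assumptions of this case study, not the plugin's exact KQL:

DeviceProcessEvents
| where DeviceName startswith "vnevado-win10v"
| where Timestamp between (datetime(2024-11-22 09:30:00) .. datetime(2024-11-22 10:30:00))
// Capture both PowerShell executions and child processes launched by PowerShell
| where FileName =~ "powershell.exe" or InitiatingProcessFileName =~ "powershell.exe"
| project Timestamp, FileName, ProcessCommandLine, AccountName, InitiatingProcessFileName, InitiatingProcessCommandLine
| order by Timestamp asc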
As there are many other potentially harmful behaviors also observed in the detailed PowerShell execution events presented by Security Copilot earlier, you can submit each of these suspicious events to Security Copilot and ask for insights.

Downloading and Installing the Custom Plugins

The configuration files for the custom plugins can be downloaded from this link. Once you have the configuration file (in YAML format), here are the steps to upload and install it to your Security Copilot instance.

Step 1: Select the Sources icon in the Prompt bar.
Step 2: Scroll to the bottom of the Manage Sources page; within the Custom section, you'll find the "Add a plugin" option.
Step 3: Click on "Add plugin" and then choose "Copilot for Security plugin", as illustrated in the next screenshot.
Step 4: Click on "Upload file" to upload the configuration file, which is in YAML format.
Step 5: Click on Add.

And voilà, the new custom plugin appears along with the other plugins in the Manage sources section, as seen in the next screen capture. Now you can start using the custom plugins, and they will appear in the "System Capabilities" section.

Accelerating the Anomalous Sign-Ins detection with Microsoft Entra ID and Security Copilot
Overview

In today's complex threat landscape, identity protection is critical for securing organizational assets. A common sign of compromise is user activity indicating connections from multiple locations separated by more than X kilometers within a short period. Such events might represent risky sign-ins, requiring Security Analysts to determine whether they are true positives (indicating malicious activity) or false positives (such as misconfigured settings or benign anomalies).

To enhance efficiency and accelerate the investigation process, organizations can leverage AI tools like Microsoft Security Copilot. By integrating Security Copilot with Microsoft Entra ID data (mainly the AADUserRiskEvents table) and developing custom Promptbooks, organizations can investigate risky sign-ins, reduce manual workloads, and enable proactive decision-making to boost SOC efficiency in such scenarios.

Use Case: Challenge and Solution

Challenge

Organizations face significant challenges in investigating and triaging identity protection alerts for sign-in anomalies, especially when users appear to log in from geographically disparate locations within hours. These challenges include:

- Volume of Alerts: Large organizations generate numerous risky sign-in events daily.
- False Positives: Legitimate activities, such as VPN connections or device relocations, may be flagged.
- Resource Constraints: Security teams must efficiently prioritize true positives for investigation.

Solution

Using Microsoft Security Copilot with a tailored Promptbook, Security Analysts can automate the initial triage process and focus on meaningful insights. This approach combines data querying, AI-driven analysis, and actionable recommendations to improve investigation workflows.

Promptbook Structure

The custom Promptbook comprises two key prompts:

1. First Prompt: Data Retrieval from Defender XDR via KQL Query

This query retrieves users flagged for risky sign-ins within a 1-day window, focusing on events where, as an example, the distance between locations exceeds 500 kilometers within 3 hours.

Retrieve Defender XDR information using this KQL query:

let riskyusers = AADUserRiskEvents
| where TimeGenerated >= ago(<TimeIntervalByDays>)
| project UserPrincipalName, TimeGenerated, Location, IpAddress, RiskState, Id, RiskEventType;
riskyusers
| join kind=inner (
    riskyusers
    | extend TimeGenerated1 = TimeGenerated, LocationDetails1 = Location
) on UserPrincipalName
| where TimeGenerated < TimeGenerated1 and datetime_diff('hour', TimeGenerated1, TimeGenerated) <= <ConnectionsInterbalByHrs>
| extend latyy = Location.geoCoordinates.latitude
| extend longy = Location.geoCoordinates.longitude
| extend latyy1 = LocationDetails1.geoCoordinates.latitude
| extend longy1 = LocationDetails1.geoCoordinates.longitude
| extend distance = geo_distance_2points(todouble(longy), todouble(latyy), todouble(longy1), todouble(latyy1))
| where distance >= <SepratedDistanceByKM>
| summarize arg_max(TimeGenerated, *) by Id
| where RiskState != @"dismissed"
| project UserPrincipalName, TimeGenerated, IpAddress, Location, TimeGenerated1, IpAddress1, LocationDetails1, RiskEventType, distance

Note that geo_distance_2points expects longitude before latitude for each point (the query above passes the coordinates in that order) and returns the distance in meters, so set the distance threshold with that unit in mind.

Please make sure to set values for the following input parameters:
- <TimeIntervalByDays>, example: 7d
- <ConnectionsInterbalByHrs>, example: 3
- <SepratedDistanceByKM>, example: 5000
2. Second Prompt: AI Analysis for Patterns and Recommendations

This prompt enables Security Copilot to analyze the retrieved data, identify patterns (e.g., recurring IP addresses or anomalous locations), and suggest further investigative steps and mitigative actions.

/AnalyzeSecurityData Provide your insights as Security Analyst about what anomalies or similarity patterns can you identify. Provide a list of prompts for Security Copilot to investigate further and a list of recommendations. Use as input security data the information in the table from the previous prompt in this session.

Automating the Process with Azure Logic Apps

Organizations can further streamline the process by automating risky sign-in investigations using Azure Logic Apps. Here's how:

1. Create a Logic App: Set up a Logic App in the Azure portal.
2. Trigger Configuration: Use a recurring schedule trigger to run the investigation daily.
3. Integration with Security Copilot: Configure the Logic App to execute the Security Copilot Promptbook and automate the prompts for insights and recommendations.
4. Notification Mechanism: Send results via email to the SOC team or log them in a ticketing system for further action.

Note: to send only the result of the last prompt in the promptbook, use: last(body('Run_a_Security_Copilot_promptbook')?['evaluationResults'])['evaluationResultContent']

Benefits of the Approach

- Efficiency: Reduces manual efforts by automating repetitive tasks.
- Accuracy: AI analysis helps filter out false positives and prioritize true positives.
- Scalability: Easily extendable for other security use cases.
- Fast triage: Enables SOC teams to act quickly and decisively.

Conclusion

Incorporating Microsoft Security Copilot with a custom Promptbook into daily operations empowers Security Analysts to efficiently investigate and triage risky sign-in events. By automating processes through Azure Logic Apps, organizations can maintain a proactive security posture and better protect their identities and assets.

Try it out: If your organization is looking to enhance its SOC capabilities, consider implementing this solution to harness the power of AI for identity protection. The Promptbook has been added to the Security Copilot GitHub repo: Click here

Hunt for identity-based threats with Security Copilot and Microsoft Sentinel
Enter Microsoft Sentinel and Security Copilot, a powerful duo that brings great value to your security operations. Microsoft Sentinel's User and Entity Behavior Analytics (UEBA) capabilities are designed to distill anomalies from vast amounts of raw data, providing clear and actionable insights. By leveraging advanced machine learning algorithms, Sentinel UEBA can identify unusual patterns and behaviors that may indicate potential security threats, allowing for proactive threat detection and response. For its part, Security Copilot further enhances this capability by offering investigators and threat hunters a head start: it analyzes the inputs from Sentinel UEBA and helps the analyst prioritize their investigation or hunting efforts. This streamlines the investigation process, enabling security teams to proactively and quickly identify potential threats.

In this blog we will showcase how Microsoft Sentinel UEBA can narrow down a set of anomalies associated with high blast radius users and how Security Copilot speeds up the investigation process, offering AI-enriched insights and recommendations through a Security Copilot Promptbook. This Promptbook brings together insights from 1st-party (built-in Microsoft) plugins, 3rd-party plugins from ISVs, and custom plugins leveraging the extensibility of the Security Copilot platform. The Microsoft Sentinel UEBA data is brought into the Security Copilot session via a custom KQL plugin.

Summary of requirements

1. High Blast Radius User investigation custom plugin – Security-Copilot/Plugins/Community Based Plugins/Microsoft Sentinel Custom Plugin Scenarios/High Blast Radius User investigation at plugins-blastradius · inwafula/Security-Copilot
2. AbuseIPDB plugin – AbuseIPDB and Microsoft Security Copilot | Microsoft Learn
3. Microsoft Intune – Microsoft Copilot in Intune features overview | Microsoft Learn
4. Cybersixgill Threat Intelligence – Cybersixgill and Microsoft Security Copilot | Microsoft Learn
5. Rare process running as a service detection – Security-Copilot/Plugins/Community Based Plugins/Microsoft Defender XDR Custom Plugin Scenarios/RareProcess at plugins-blastradius · inwafula/Security-Copilot
6. Promptbook – Security-Copilot/Promptbook samples/High Blast Radius User investigation.md at main · Azure/Security-Copilot

Below is a snapshot of the promptbook we shall step through, with the highlights called out.

The first prompt serves to make the connection between Security Copilot and Sentinel by executing a custom KQL plugin that identifies high blast radius users, as computed by Sentinel UEBA based on the user's relative position within Entra ID as well as the Azure roles assigned to the user. The higher the user is in the organization and the more impactful their Azure permissions are, the higher the blast radius. In this run we see that Copilot has come back with 17 users who fit the description of being "high blast radius" users; however, we want to narrow down which particular user we should prioritize. Once this information is brought into the Security Copilot session, we can begin to apply the power of AI over it and glean insights that will help us quickly prioritize which identity to focus on, getting useful guidance along the way.
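For context, a custom plugin of this kind typically wraps a Sentinel UEBA query. The sketch below illustrates one plausible shape of such a query against the BehaviorAnalytics table; the field names come from the documented UEBA schema, but the filters and thresholds are illustrative, and the actual plugin in the repository may differ:

BehaviorAnalytics
| where TimeGenerated > ago(7d)
// Keep only anomalies attributed to users UEBA scores as having a high blast radius
| where tostring(UsersInsights.BlastRadius) == "High"
| where InvestigationPriority > 0
| project TimeGenerated, UserPrincipalName, ActivityType, ActionType, SourceIPAddress, SourceIPLocation, InvestigationPriority, UsersInsights
| order by InvestigationPriority desc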
The output below is in response to the second prompt, which tasks Security Copilot with analyzing all the users that came back from the first prompt as high blast radius users, and then goes a step further to identify which specific one we should prioritize.

The third prompt gathers reputation information about the IP used for the anomalous activities, the fourth examines the user's device, while the fifth extends the investigation beyond the Entra ID tenant into the deep and dark web using the Cybersixgill (a Bitsight company) plugin for a comprehensive view of the threat exposure landscape.

*Note: This is a simulated breach for demo purposes, not an actual one

The 6th prompt reaches into Intune to obtain information about the compliance state of the device, while the 7th uses data from Microsoft Defender XDR to check for any unusual processes that may have been executed on the device, additional context needed to paint a holistic picture of the gravity of the incident. The final prompt brings it all together by summarizing the investigation, suggesting a verdict, recommending follow-up actions, and providing an explanation of Copilot's reasoning for both. Below is the final output from this Promptbook's run:

Jan 6, 3:39 PM

Investigation Summary
- User: Ismat Bekarevich (OPS)
- User Principal Name: isbe54@woodgrove.ms
- Blast Radius: High
- Recent Anomalous Activity: Logon from IP address 2601:xxxxx:7273:958b:e655:xxxx:b98c (City, United States)
- Group Memberships: AWS Admins, On-prem Admins, Azure ATP contosodemos2 Viewers, AAD DC Administrators
- Assigned Roles: Global Administrator
- Risk State: Remediated
- Credentials Compromised: Yes (Breach Date: 2024-11-26, Source: telegram)
- IP Address: 2601:xxxxx:c182:7273:958b:e655:xxxx:b98c
- Abuse Confidence Score: 0 (No known malicious activity)
- ISP: Comcast Cable Communications, LLC
- Country: United States

Devices Assigned
- CPC-isbe5-L1BOV
  - Compliance Status: Compliant
  - Compliance Policies: Default Device Compliance Policy, Defender for Endpoint - Device Risk
  - Rare Processes Executed: None over the past 7 days
- DESKTOP-VSA7FBL
  - Compliance Status: Noncompliant
  - Compliance Issues: Default Device Compliance Policy, Defender for Endpoint - Device Risk
- Demo's MacBook Pro (2)
  - Compliance Status: Compliant
- WoodgroveVM02
  - Compliance Status: Compliant

Verdict
- Threat Hunting Required: Yes
- Reasons:
  - High Blast Radius: The user has significant access and potential impact within the organization.
  - Compromised Credentials: The user's credentials have been compromised, posing a potential security risk.
  - Noncompliant Device: One of the user's devices (DESKTOP-VSA7FBL) is noncompliant with critical security policies.

Suggested Follow-Up Actions
- Monitor User Activity: Continuously monitor the user's activities for any further anomalies or suspicious behavior.
- Review and Remediate Noncompliant Device: Investigate and address the compliance issues on the device DESKTOP-VSA7FBL.
- Credential Reset: Perform a credential reset for the user to mitigate the risk of compromised credentials.
- Conditional Access Policies: Ensure that appropriate conditional access policies are in place to limit access from potentially risky locations or devices.
- Security Awareness Training: Provide the user with security awareness training to prevent future credential compromises.

Conclusion

Together, Security Copilot and Microsoft Sentinel provide a comprehensive security solution that empowers organizations to stay ahead of identity-based attacks, among others.
By combining the strengths of both platforms, you can achieve greater efficiencies in your security operations. Try it out and give us your feedback and recommendations for further improvement.

Additional resources
- Microsoft Security Copilot | Microsoft Security
- Microsoft Sentinel - Cloud-native SIEM Solution | Microsoft Azure
- Microsoft Sentinel UEBA reference | Microsoft Learn
- AbuseIPDB - IP address abuse reports - Making the Internet safer, one IP at a time
- Advanced Dark Web Threat Intelligence Platform | Cybersixgill
- Microsoft Digital Defense Report: 600 million cyberattacks per day around the globe