Building with Copilot
Redefining Cyber Defence with Microsoft Security Exposure Management (MSEM) and Security Copilot
Introduction Microsoft Security Exposure Management (MSEM) provides the cyber defense team with a unified, continuously updated view of asset exposure and relevant attack paths, and classifies these findings. While MSEM continuously creates and updates these findings, the Security Operations Center (SOC) engineering team needs to access this data and interact with it as part of its proactive discovery exercises. Microsoft Security Copilot, on the other hand, acts as an always-ready, AI-powered assistant to the SOC engineering team. When the situational awareness of MSEM is combined with the fast, consistent retrieval capabilities of Security Copilot, SOC engineers gain a natural-language front door into exposure insights and attack paths. The combination also makes it possible to include MSEM content, and the reasoning over that content, in Security Copilot prompts and promptbooks, and to use it in automation scenarios that leverage Security Copilot. Traditionally, a SOC analyst needs to navigate to Advanced Hunting in Microsoft Defender, retrieve data related to assets with a certain level of exposure, and then build a plan for each asset to reduce its exposure. Such a plan needs to take into consideration the nature of the exposure, where the asset is hosted, and the characteristics of the asset, and it requires working knowledge of each impacted system. This approach: Is time consuming, especially given the learning curve associated with understanding each exposure before deciding on the best course of exposure reduction; and Can encourage undesired habits, such as adopting a reactive rather than proactive approach, prioritizing only assets with a certain exposure risk level, or attending only to exposures that are already familiar to the person reviewing the list of exposures and attack paths. Overview of Exposure Management Microsoft Security Exposure Management is a security solution that provides a unified view of security posture across company assets and workloads. Security Exposure Management enriches asset information with security context that helps you proactively manage attack surfaces, protect critical assets, and explore and mitigate exposure risk. Who uses Security Exposure Management? Security Exposure Management is aimed at: Security and compliance admins responsible for maintaining and improving organizational security posture. Security operations (SecOps) and partner teams who need visibility into data and workloads across organizational silos to effectively detect, investigate, and mitigate security threats. Security architects responsible for solving systematic issues in overall security posture. Chief Information Security Officers (CISOs) and security decision makers who need insights into organizational attack surfaces and exposure in order to understand security risk within organizational risk frameworks. What can I do with Security Exposure Management? With Security Exposure Management, you can: Get a unified view across the organization Manage and investigate attack surfaces Discover and safeguard critical assets Manage exposure Connect your data
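Several of these capabilities are also available to hunters directly: the enterprise exposure graph that backs MSEM can be queried from Advanced Hunting in the Defender portal. The short sketch below only counts node types and is intentionally minimal; ExposureGraphNodes is the documented entry point, but the properties carried on each node vary by connector and tenant, so treat anything beyond the table and NodeLabel column as an assumption to adapt.
// Minimal look at the enterprise exposure graph from Advanced Hunting.
// ExposureGraphNodes holds the assets (devices, identities, cloud resources, ...);
// ExposureGraphEdges holds the relationships used to build attack paths.
ExposureGraphNodes
| summarize NodeCount = count() by NodeLabel
| order by NodeCount desc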
Reference links:
Overview: What is Microsoft Security Exposure Management (MSEM)? | What's new in MSEM
Get started: Start using MSEM | MSEM prerequisites | How to import data from external data connectors in MSEM
Concept: Learn about critical asset management in MSEM | Learn about attack surface management in MSEM | Learn about exposure insights in MSEM | Learn about attack paths in MSEM
How-To Guide: Review and classify critical assets in MSEM | Review security initiatives in MSEM | Investigate security metrics in MSEM | Review security recommendations in MSEM | Query the enterprise exposure graph in MSEM | Explore with the attack surface map in MSEM | Review potential attack paths in MSEM | Integration and licensing for MSEM | Compare MSEM with Secure Score
Overview of Security Copilot plugins and skills Microsoft Security Copilot is a generative AI-powered assistant designed to augment security operations by accelerating detection, investigation, and response. Its extensibility through plugins and skills enables organizations to tailor the platform to their unique environments, integrate diverse data sources, and automate complex workflows. Plugin Architecture and Categories: Security Copilot supports a growing ecosystem of plugins categorized into: First-party plugins: Native integrations with Microsoft services such as Microsoft Sentinel, Defender XDR, Intune, Entra, Purview, and Defender for Cloud. Third-party plugins: Integrations with external security platforms and ISVs, enabling broader telemetry and contextual enrichment. Custom plugins: User-developed extensions using KQL, GPT, or API-based logic to address specific use cases or data sources. Plugins act as grounding sources—providing context, verifying responses, and enabling Copilot to operate across embedded experiences or standalone sessions. Users can toggle plugins on/off, prioritize sources, and personalize settings (e.g., default Sentinel workspace) to streamline investigations. Skills and Promptbooks Skills in Security Copilot are modular capabilities that guide the AI in executing tasks such as incident triage, threat hunting, or policy analysis. These are often bundled into promptbooks, which are reusable, scenario-driven workflows that combine plugins, prompts, and logic to automate investigations or compliance checks. Security analysts can create, manage, and share promptbooks across tenants, enabling consistent execution of best practices. Promptbooks can be customized to include plugin-specific logic, such as querying the Microsoft Graph API or running KQL-based detections. Role-Based Access and Governance Security Copilot enforces role-based access through Entra ID security groups: Copilot Owners: Full access to manage plugins, promptbooks, and tenant-wide settings. Copilot Contributors: Can create sessions and use promptbooks but have limited plugin publishing rights. Each embedded experience may require additional service-specific roles (e.g., Sentinel Reader, Endpoint Security Manager) to access relevant data. Governance files and onboarding templates help teams align plugin usage with organizational policies. Connecting Exposure Management with Security Copilot There are multiple benefits to connecting MSEM with Security Copilot (as explained in the Introduction above). We wrote a plugin with two skills that harness Exposure Management insights within Security Copilot to surface the exposure of assets your organization hosts on a particular cloud platform and of assets belonging to a specific user.
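To make the first skill concrete before looking at the architecture, here is a minimal sketch of the kind of Advanced Hunting query such a skill could wrap. ExposureLevel is a documented DeviceInfo column; the CloudPlatforms filter and the hard-coded "Azure"/"Low" values stand in for the parameters the skill accepts, so treat them as assumptions to adjust for your tenant and inputs.
// Sketch: devices hosted on a given cloud platform at a given exposure level.
DeviceInfo
| where Timestamp > ago(7d)
| summarize arg_max(Timestamp, *) by DeviceId          // keep the latest record per device
| where ExposureLevel == "Low"                          // plugin input: exposure level
| where CloudPlatforms has "Azure"                      // plugin input: cloud platform (assumed column; drop if absent)
| project DeviceName, ExposureLevel, OSPlatform, CloudPlatforms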
A high-level architecture of the connectivity looks like this: The two skills of the plugin correspond to the following two use cases: Obtain the exposure of an asset hosted on a particular cloud platform by your organization Obtain the exposure of an asset belonging to a specific user In each of the above use cases, you can also specify the exposure level for which you want to extract the data. Plugin Code (YAML) GitHub - Microsoft Security Exposure Management plugin for Security Copilot - YAML Proof of Concept (screen video) Conclusion Here, we proposed an alternative approach that drives up the SOC's efficiency and helps the organization reduce the time from exposure discovery to exposure reduction. The proposed approach allows the SOC analyst to retrieve assets that fit a certain profile, i.e. prompt Security Copilot to "List all assets hosted on Azure with Low Exposure Level". After all affected assets are retrieved, the analyst can then prompt Security Copilot to "For each asset, help me create a 7-day plan to reduce these exposures" and can finally conclude with the prompt "Create an Executive Report, start by explaining to a non-technical audience the risks associated with the identified exposures, then list all affected assets, along with a summary of the steps needed to reduce the exposures identified". These prompts can also be organized in a promptbook, further reducing the burden on the SOC analyst, and can be run through automation at regular intervals, where the automation can email the report to the intended audience or be extended to create relevant tickets in the IT Service Management system. An additional approach to risk management is to keep an eye on highly targeted personas within the organization. With the proposed integration, a SOC analyst can prompt Security Copilot with "What are the exposure risks associated with the devices owned by the Contoso person john.doe@contoso.com". This helps the SOC analyst identify and remediate attack paths targeting devices used by highly targeted persons; within the same session, the analyst can start digging deeper into any potential exploitation of these exposures, get recommendations on how to reduce them, and draft an action plan.
Agentic security your way: Build your own Security Copilot agents
Microsoft Security Copilot is redefining how security and IT teams operate. Today at Microsoft Secure, we're unveiling powerful updates that put genAI and agent-driven automation at the center of modern defense. In a world where threats move faster than ever, alerts pile up, and resources stay tight, Security Copilot delivers the competitive edge: contextual intelligence, a growing network of agents, and the flexibility to build your own. The announcements focus on three key areas: building your own Security Copilot agents for tailored workflows, expanding the agent ecosystem with new Microsoft and partner solutions, and improving agent quality and performance. These updates build on the agents first introduced in March while giving security and IT teams more flexibility and control. This is the blueprint for the next era of agentic defense, and it starts now. Build your own Security Copilot agents, your way While we already offer a growing catalog of ready-to-use agents built by Microsoft and partners, we know that no two environments are alike. That's why Security Copilot empowers you to create custom agents your way for tailored workflows – whether you're an analyst with limited coding experience or a developer using your favorite platform – you can build agents that fit your needs. Build agents in the Security Copilot portal Users can now build agents with a simplified, no-code interface in the standalone Security Copilot experience. Simply describe the task or workflow in natural language, and Copilot automatically generates the agent code. You can edit components, add any additional tools, including Sentinel MCP tools from our rich tool catalog, test the agent, optimize its instructions, and publish directly to your tenant. Create dynamic, ready-to-use agents in minutes – without writing any code. Build agents in a preferred MCP server-enabled development environment For teams with experienced developers, you can also use natural language and vibe-coding to build agents in a preferred MCP server-enabled coding platform, such as VS Code using GitHub Copilot. By enabling the Sentinel MCP server, developers can access MCP tools to build, refine, and deploy custom agents directly within their workspace. This approach gives full control over code, tools, and deployment while keeping the process within familiar development platforms. These options empower both technical and non-technical teams to rapidly create, test, and deploy custom Security Copilot agents. Organizations can automate workflows faster, design agents for their unique needs, and improve security and IT operations across the board. Discover new Security Copilot agents Since Security Copilot agents were first introduced in March, we have delivered more than a dozen Microsoft and partner-developed agents that help organizations tackle real challenges in security and IT operations. Analysts using the Conditional Access Optimization Agent in Microsoft Entra have been able to quickly uncover policy gaps, closing an average of 26 gaps per customer in just one month, with 73% of early adopters acting on at least one recommendation. The Phishing Triage Agent in Microsoft Defender has allowed analysts to shift from reactive sifting to proactive resolution, reducing triage time by up to 78%. Read how St. Luke's University Health Network saves nearly 200 hours monthly in phishing alert triage and creates incident reports in minutes instead of hours. The Phishing Triage Agent is a game changer.
It's saving us nearly 200 hours monthly by autonomously handling and closing thousands of false positive alerts. - Krista Arndt, ACISO, St. Luke's University Health Network We're continuing to build on this momentum with new agents designed to address additional security and IT scenarios. The new Access Review Agent in Microsoft Entra tackles a common challenge: review fatigue that leads to access being approved without real scrutiny. It analyzes ongoing reviews, flags anomalies or unusual access patterns, and delivers actionable guidance in a conversational interface. Reviewers can approve, revoke, or request more details right in Microsoft Teams, helping them focus on the riskiest access, make faster decisions, and strengthen compliance. With innovations like this, we're not just reducing fatigue—we're redefining how access governance is done, setting the standard for security agents that adapt to the way people work. Learn more about the Access Review Agent here. And, with the growing range of agentic use cases, the new Microsoft Security Store is your one-stop shop to discover, purchase, and deploy Security Copilot agents built by Microsoft and trusted partners. Find solutions aligned to SOC, IT, privacy, compliance, and governance teams, all in one place. By uniting discovery, deployment, and publishing in a single experience, Security Store powers a thriving ecosystem that gives your team a unique advantage: access to an ever-expanding range of agent capabilities that evolve as fast as the challenges they face. In addition to helping customers find the right solutions, Security Store also enables partners to bring their innovations to market. Partners can build and publish Security Copilot agents and SaaS solutions to grow their business and reach new customers. Today, we are announcing 30 new partner-built agents as well as 50 partner SaaS solutions in the Security Store. The launch of 30 new partner-built agents brings forward solutions like: A Forensic Agent by glueckkanja AG delivers deep-dive analysis of Defender XDR incidents to accelerate investigations, while their Privileged Admin Watchdog Agent helps enforce zero standing privilege principles by getting rid of persistent admin identities. These innovations, along with their other 6 agents in the Security Store today, demonstrate how glueckkanja AG is empowering organizations to tackle a wide range of security and IT challenges. Three agents from adaQuest automate investigation and response so security teams can focus on what matters: a Ransomware Kill Chain Investigator Agent automates ransomware triage, an Entity Guard Investigator Agent investigates Defender incidents, and an Admin Guard Insight Agent analyzes administrative activity, detects anomalies, and evaluates risk exposure and compliance, offering actionable insights to improve administrative security posture. An Identity Workload ID Agent by Invoke empowers identity administrators and security teams to manage and secure Workload Identities in Microsoft Entra, helping to reduce risk, strengthen compliance, and provide more control over identity sprawl. To learn more about all new partner-built agents as well as partner SaaS offerings, read the blog or head to the Microsoft Security Store. Smarter, faster Security Copilot agents High-quality LLM instructions are critical to agent performance, yet manually fine-tuning them is time-consuming and error-prone.
We're excited to introduce tools that help improve custom-built agent quality and performance, starting with autotune instruction optimization. Autotune eliminates the need for manual tuning by automatically analyzing and refining agent instructions for optimal performance. Simply enable autotune during testing and submit, then receive a detailed results report with suggested prompt changes to boost your agent's AI quality score quickly and effortlessly. This optimization not only delivers better outcomes faster, but it also ensures that every agent in our ecosystem is always evolving - making them smarter, sharper, and more effective over time. But instructions are only part of the picture. To truly empower agents, context and data are key. By combining rich security signals from Microsoft Sentinel with advanced AI reasoning, Microsoft is setting a new standard for what agents can achieve—resolving incidents faster, optimizing workflows, and delivering deeper, more actionable insight. Security Copilot leverages a unified foundation of structured, graph, and semantic data from Sentinel to give agents the context they need to connect the dots across your environment. This deep integration transforms what AI can do, enabling agents to reason, adapt, and act with precision at machine speed. Read the Sentinel graph announcement here. Get Started Today With Security Copilot, the power of AI is now in your hands. Deploy ready-to-use agents from Microsoft and partners, or design custom agents built for your environment and workflows. These agents accelerate decision-making, surface critical insights, and let teams focus on strategic security work - turning complexity into clarity and speed. Explore Security Store today to experience how agentic automation is reshaping security operations and unlocking the full potential of your team. Learn more about how to create your own agents. Deep dive into these innovations at Microsoft Secure on Sept. 30, Oct. 1, or on demand. Then, join us at Microsoft Ignite, Nov. 17–21 in San Francisco, CA, or online—for more innovations, hands-on labs, and expert connections.
Supercharging Security Copilot with Logic Apps: Best practices and pro tips
Integrating Microsoft Security Copilot with Azure Logic Apps enables security teams to automate investigations, orchestrate fast incident response, and unify workflows across the modern enterprise. By leveraging the unique strengths of both platforms, organizations can achieve scalable, efficient, and actionable security automation. Why Integrate Security Copilot with Logic Apps? Security Copilot brings AI-powered reasoning, automation, and natural language-to-action workflow capabilities. When paired with Logic Apps, it enables: Seamless orchestration: Launch incident investigations or automated email analysis with a single trigger. Advanced automation: Integrate across Microsoft and third-party security tools without heavy coding. Consistent, repeatable outcomes: Use Security Copilot's prompts and promptbooks for security-centric routines and reduce the potential for error. Common scenarios include incident response initiation, scheduled security reports, and automated threat intelligence gathering. Best Practices for designing robust workflows: Identify your use case Not all scenarios require automation. Likewise, not all use cases benefit equally from combining automation with AI enrichment. The first step in unlocking value from Azure Logic Apps and Security Copilot is selecting the right use cases—those that align with both operational needs and the capabilities of these tools. To identify a suitable use case, we suggest the following guidelines: Start with repetitive tasks: Look for tasks that are performed frequently and follow a predictable pattern, such as alert enrichment, ticket creation, or user access reviews. These are ideal candidates for automation via Logic Apps. Assess the complexity of decision-making: If a task involves nuanced decision-making or contextual analysis—like investigating suspicious sign-ins or correlating threat indicators—Security Copilot's AI capabilities can add significant value. Evaluate data availability and integration points: Ensure the use case involves systems and data sources that Logic Apps can connect to easily (e.g., Microsoft Sentinel, Entra ID, Office 365 email). While it is possible to build your own custom connectors, the availability of built-in connectors is a key consideration for the success of the integration. Consider the impact on security operations: Prioritize use cases that reduce manual effort, accelerate response times, or improve accuracy in threat detection and remediation. Check for existing playbooks or templates: Use cases that align with existing Logic Apps templates or Security Copilot skills are easier to implement and test. Microsoft's GitHub repository for Copilot for Security and the Sentinel GitHub repos are great places to start. Validate with stakeholders: Collaborate with SOC managers, incident responders, and IT admins to confirm that the selected use case addresses a real pain point and fits within current workflows. Optimize for performance, cost, and scale Leverage direct skill invocation: This reduces cost and speeds up execution, because the planning step that natural-language prompts must go through is bypassed.
Optimize Security Copilot calls: Limit Copilot calls within workflows to actions that benefit from AI value-add, such as reducing cognitive load on the security analyst or providing reasoning over disparate sets of facts, while taking advantage of the investigation context powered by the wide range of Security Copilot skills that are native to the product.
Logic App tuning: Fine-tune trigger frequency and the need for AI value-add; for example, depending on the complexity of the expected incidents, you may only need to attach a Logic App that submits Security Copilot prompts as part of its flow to selected detection rules rather than to all detection rules and resulting incidents.
Pro Tips
i. Prototype cost-effective, complex workflows: Prototype complex workflows with test data before deploying to production environments. You can do this by simulating Security Copilot prompts with a variable instead of actual calls to Security Copilot during the testing phase. Follow these steps to do this:
a. Run the prompt or promptbook within Security Copilot to obtain the desired payload.
b. In this example we need to execute the following promptbook as part of a workflow that involves extraction of firewall device names and their owners, so that we can send them an e-mail alerting them to block public IPs exhibiting suspicious behaviors:
c. Execute the promptbook.
d. Next, we prompt Security Copilot to generate an output that can be used to generate a JSON-formatted payload, which we will eventually use to create a schema for our Logic App ParseJSON step.
e. Next, use an LLM, preferably an enterprise-grade one such as Microsoft 365 Copilot or Security Copilot, to generate the JSON payload.
f. Next, use the sample payload to create the input schema for the ParseJSON step in the Logic App.
g. Initialize a variable and save the sample JSON; this will act as the simulated output parsed from the EvaluationResult of the promptbook from Security Copilot, effectively avoiding any costs involved with submitting the promptbook multiple times while you test and refine your Logic App.
h. You can now run the Logic App several times without submitting any prompts to Security Copilot. If you must test with payloads that vary considerably, you can still do that by not saving the payload in the variable and selecting the "Run with payload" option, then pasting your payload in the resulting box. Fig. 7 Logic App snippet showing manual execution of Logic App
i. Once happy with the Logic App flow and output, you can replace the variable with the actual Security Copilot connection for your prompt or promptbook.
ii. Session management: Use the Session Id field to maintain investigative context—enabling multiple prompts within a workflow to share data without re-authentication. However, you can also spawn new sessions, which allows for parallel execution of tasks without dependency on the current session content.
iii. Provide descriptive connector names: Rename default connector names as you build out your Logic App. This helps with troubleshooting or maintaining the Logic App, especially when that is done by someone other than the person who built the original. The example below describes exactly what the step does vs the default connector names:
iv. Use custom code: Enhance workflows with inline Python or Function App steps for specialized operations, such as complex text transformations or data extractions. In the example below, a function app is used to apply a regex operation to extract the e-mail GUID. This comes in handy when you do not have a built-in connector for specific requirements, or when existing connectors are not as efficient or flexible as a function app would be.
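If the value you need to pull out already lives in a table you can query, a plain KQL extract() in a query step can sometimes stand in for the function app. The snippet below is a self-contained sketch using sample data; the GUID pattern and the hypothetical ReportUrl column are illustrative only.
// Self-contained example: pulling a GUID out of a URL-like string with extract().
let SampleUrls = datatable(ReportUrl:string)[
    "https://contoso.example/api/messages/3f2b8c9e-1a2d-4e5f-9b7c-0d1e2f3a4b5c/details"
];
SampleUrls
| extend EmailGuid = extract(@"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}", 0, ReportUrl)
| project ReportUrl, EmailGuid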
v. Secure your Logic App workflows
Managed identities: Leverage managed identities across all connectors that support this authentication method whenever you use them in your flows.
Obfuscate secrets in run histories: Actions that handle passwords, secrets, keys, or other sensitive information are visible by default from the run history of the Logic App. For example, if your Logic App gets a secret from Azure Key Vault to use when authenticating an HTTP action, you may want to hide that secret from view by enabling the toggle button for supported actions. See below: You may also use source IP address restrictions to limit access to this data. See details in this document.
Log and monitor activities: Enable logging for actions taken by Logic Apps in your environment for greater visibility and control. If using Microsoft Sentinel, you can send Logic App activities to your Log Analytics workspace and benefit from queries such as the one below:
SentinelHealth
| where TimeGenerated > ago(30d)
| where SentinelResourceType == "Playbook"
| extend triggeredBy = ExtendedProperties.TriggeredByName.UserDisplayName
vi. Use parameters
Parameters allow workflows to be dynamic and reusable by enabling the injection of context-specific data—such as usernames, incident IDs, or IP addresses—at runtime. This flexibility means a single Logic App can serve multiple scenarios without hardcoding values, improving maintainability and scalability. Additionally, parameters help enforce security best practices by supporting secure input/output handling, which protects sensitive information during execution. Conclusion Security Copilot and Logic Apps together unlock a flexible, AI-powered automation platform for any security operations team. By following these best practices—efficient prompt design, session context management, robust security controls, and scheduled automation—organizations can level up their security response and proactivity. To go even further, explore Microsoft's official documentation, the Security Copilot Adoption Hub, the Tech Community blog portal, and our GitHub repo. If you have any feedback or ideas on how you think we can further improve the value delivered by these solutions working together, please reach out. Always happy to hear back from you. Additional resources Security-Copilot/Logic Apps Microsoft Security Copilot – Microsoft Adoption Category: Security Copilot | Microsoft Community Hub
Automating Phishing Email Triage with Microsoft Security Copilot
This blog details automating phishing email triage using Azure Logic Apps, Azure Function Apps, and Microsoft Security Copilot. Deployable in under 10 minutes, this solution primarily analyzes email intent without relying on traditional indicators of compromise, accurately classifying benign/junk, suspicious, and phishing emails. Benefits include reduced manual workload, improved threat detection, and optional, seamless integration with Microsoft Sentinel – enabling analysts to see Security Copilot analysis within the incident itself. Designed for flexibility and control, this Logic App is a customizable solution that can be self-deployed from GitHub. It helps automate phishing response at scale without requiring deep coding expertise, making it ideal for teams that prefer a more configurable approach and want to tailor workflows to their environment. The solution streamlines response and significantly reduces manual effort. Access the full solution on the Security Copilot GitHub: GitHub - UserReportedPhishing Solution. For teams looking for a more sophisticated, fully integrated experience, the Security Copilot Phishing Triage Agent represents the next generation of phishing response. Natively embedded in Microsoft Defender, the agent autonomously triages phishing incidents with minimal setup. It uses advanced LLM-based reasoning to resolve false alarms, enabling analysts to stay focused on real threats. The agent offers step-by-step decision transparency and continuously learns from user feedback. Read the official announcement here. Introduction: Phishing Challenges Continue to Evolve Phishing continues to evolve in both scale and sophistication, but a growing challenge for defenders isn't just stopping phishing, it's scaling response. Thanks to tools like Outlook's "Report Phishing" button and increased user awareness, organizations are now flooded with user-reported emails, many of which are ambiguous or benign. This has created a paradox: better detection by users has overwhelmed SOC teams, turning email triage into a manual, rotational task dreaded for its repetitiveness and time cost, often taking over 25 minutes per email to review. Our solution addresses that problem by automating the triage of user-reported phishing through AI-driven intent analysis. It's not built to replace your secure email gateways or Microsoft Defender for Office 365; those tools have already done their job. This system assumes the email: slipped past existing filters, was suspicious enough for a user to escalate, and lacks typical IOCs like malicious domains or attachments. As a former attacker, I spent years crafting high-quality phishing emails to penetrate the defenses of major banks. Effective phishing doesn't rely on obvious IOCs like malicious domains, URLs, or attachments… the infrastructure often appears clean. The danger lies in the intent. This is where Security Copilot's LLM-based reasoning is critical, analyzing structure, context, tone, and seasonal pretexts to determine whether an email is phishing, suspicious, spam, or legitimate. What makes this novel is that it's the first solution built specifically for the "last mile" of phishing defense, where human suspicion meets automation, and intent is the only signal left to analyze. It transforms noisy inboxes into structured intelligence and empowers analysts to focus only on what truly matters.
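If you want to size this "last mile" problem in your own tenant before deploying anything, a rough Advanced Hunting query along the following lines counts user-reported messages per day. The alert title filter is an assumption based on the built-in user-reported-message alert policy, so match it to whatever title appears in your environment.
// Rough measure of user-reported phishing volume over the last 30 days.
AlertInfo
| where Timestamp > ago(30d)
| where Title has "reported by user"        // e.g. "Email reported by user as malware or phish" (title text assumed)
| summarize ReportedMessages = count() by bin(Timestamp, 1d)
| order by Timestamp asc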
Solution Overview: How the Logic App Solution Works (and Why It's Different) Core Components: Azure Logic Apps: Orchestrates the entire workflow, from ingestion to analysis, and 100% customizable. Azure Function Apps: Parses and normalizes email data for efficient AI consumption. Microsoft Security Copilot: Performs sophisticated AI-based phishing analysis by understanding email intent and tactics, rather than relying exclusively on predefined malicious indicators. Key Benefits: Rapid Analysis: Processes phishing alerts and, in minutes, delivers comprehensive reports that empower analysts to make faster, more informed triage decisions – compared to manual reviews that can take up to 30 minutes. And, unlike analysts, Security Copilot requires zero sleep! AI-driven Insights: LLM-based analysis is leveraged to generate clear explanations of classifications by assessing behavioral and contextual signals like urgency, seasonal threats, Business Email Compromise (BEC), subtle language clues, and otherwise sophisticated techniques. Most importantly, it identifies benign emails, which are often the bulk of reported emails. Detailed, Actionable Reports: Generates clear, human-readable HTML reports summarizing threats and recommendations for analyst review. Robust Attachment Parsing: Automatically examines attachments like PDFs and Excel documents for malicious content or contextual inconsistencies. Integrated with Microsoft Sentinel: Optional integration with Sentinel ensures central incident tracking and comprehensive threat management. Analysis is attached directly to the incident, saving analysts more time. Customization: Add, move, or replace any element of the Logic App or prompt to fit your specific workflows. Deployment Guide: Quick, Secure, and Reliable Setup The solution provides Azure Resource Manager (ARM) templates for rapid deployment: Prerequisites: Azure Subscription with Contributor access to a resource group. Microsoft Security Copilot enabled. Dedicated Office 365 shared mailbox (e.g., phishing@yourdomain.com) with Mailbox.Read.Shared permissions. (Optional) Microsoft Sentinel workspace. Refer to the up to date deployment instructions on the Security Copilot GitHub page. Technical Architecture & Workflow: The automated workflow operates as follows: Email Ingestion: Monitors the shared mailbox via Office 365 connector. Triggers on new email arrivals every 3 minutes. Assumes that the reported email has arrived as an attachment to a "carrier" email. Determine if the Email Came from Defender/Sentinel: If the email came from Defender, it would have a prepended subject of “Phishing”, if not, it takes the “False” branch. Change as necessary. Initial Email Processing: Exports raw email content from the shared mailbox. Determines if .msg or .eml attachments are in binary format and converts if necessary. Email Parsing via Azure Function App: Extracts data from email content and attachments (URLs, sender info, email body, etc.) and returns a JSON structure. Prepares clean JSON data for AI analysis. This step is required to "prep" the data for LLM analysis due to token limits. Click on the “Parse Email” block to see the output of the Function App for any troubleshooting. You'll also notice a number of JSON keys that are not used but provided for flexibility. Security Copilot Advanced AI Reasoning: Analyzes email content using a comprehensive prompt that evaluates behavioral and seasonal patterns, BEC indicators, attachment context, and social engineering signals. 
Scores cumulative risk based on structured heuristics without relying solely on known malicious indicators. Returns validated JSON output (some customers are parsing this JSON and performing other actions). This is where you would customize the prompt, should you need to account for organization-specific situations when the Logic App needs tuning: JSON Normalization & Error Handling: A "normalization" Azure Function ensures output matches the expected JSON schema. Sometimes LLMs will stray from a strict output structure; this aims to solve that problem. If you add or remove anything from the Parse Email code that alters the structure of the JSON, this and the next block will need to be updated to match your new structure. Detailed HTML Reporting: Generates a detailed HTML report summarizing AI findings, indicators, and recommended actions. Reports are emailed directly to SOC team distribution lists or ticketing systems. Optional Sentinel Integration: Adds the reasoning and output from Security Copilot directly to the incident comments. This is the ideal location for output since the analyst is already in the security.microsoft.com portal. It waits up to 15 minutes for logs to appear, in situations where the user reports before an incident is created. The solution works pretty well out of the box but may require some tuning, so give it a test. Here are some examples of the type of Security Copilot reasoning. Benign email detection: Example of phishing email detection: More sophisticated phishing with subtle clues: Enhanced Technical Details & Clarifications Attachment Processing: When multiple email attachments are detected, the Logic App processes each binary-format email sequentially. If PDF or Excel attachments are detected, they are parsed and evaluated appropriately for content and intent. Security Copilot Reliability: The Security Copilot Logic App API call uses an extensive retry policy (10 retries at 10-minute intervals) to ensure reliable AI analysis despite intermittent service latency. If you run out of SCUs in an hour, it will pause until they are refreshed and continue. Sentinel Integration Reliability: Acknowledges inherent Sentinel logging delays (up to 15 minutes). Implements retry logic and explicit manual alerting for unmatched incidents, if the analysis runs before the incident is created. Security Best Practices: Compare the Function & Logic App to your company security policies to ensure compliance. Credentials, API keys, and sensitive details utilize Azure Managed Identities or secure API connections. No secrets are stored in plaintext. Azure Function Apps perform only safe parsing operations; attachments and content are never executed or opened insecurely. Be sure to check out how the Microsoft Defender for Office team is improving detection capabilities as well: Microsoft Defender for Office 365's Language AI for Phish: Enhancing Email Security | Microsoft Community Hub.
Using parameterized functions with KQL-based custom plugins in Microsoft Security Copilot
In this blog, I will walk through how you can build functions based on a Microsoft Sentinel Log Analytics workspace for use in custom KQL-based plugins for Security Copilot. The same approach can be used for Azure Data Explorer and Defender XDR, so long as you follow the specific guidance for either platform. A link to those steps is provided in the Additional Resources section at the end of this blog. But first, it's helpful to clarify what parameterized functions are and why they are important in the context of Security Copilot KQL-based plugins. Parameterized functions accept input details (variables) such as lookback periods or entities, allowing you to dynamically alter parts of a query without rewriting the entire logic. Parameterized functions are important in the context of Security Copilot plugins because they provide: Dynamic prompt completion: Security Copilot plugins often accept user input (e.g., usernames, time ranges, IPs). Parameterized functions allow these inputs to be consistently injected into KQL queries without rebuilding query logic. Plugin reusability: By using parameters, a single function can serve multiple investigation scenarios (e.g., checking sign-ins, data access, or alerts for any user or timeframe) instead of hardcoding different versions. Maintainability and modularity: Parameterized functions centralize query logic, making it easier to update or enhance without modifying every instance across the plugin spec. To modify the logic, just edit the function in Log Analytics, test it, then save it, without needing to change the plugin at all or re-upload it into Security Copilot. It also significantly reduces the need to ensure that the query part of the YAML is perfectly indented and tabbed as required by the OpenAPI specification; you only need to worry about formatting a single line instead of several, potentially hundreds. Validation: Separating query logic from input parameters improves query reliability by avoiding the possibility of malformed queries. No matter what the input is, it's treated as a value, not as part of the query logic. Plugin spec mapping: OpenAPI-based Security Copilot plugins can map user-provided inputs directly to function parameters, making the interaction between user intent and query execution seamless. Practical example In this case, we have a 139-line KQL query that we will reduce to exactly one line that goes into the KQL plugin. In other cases, this number could be even higher. Without using functions, this entire query would have to form part of the plugin. Note: The rest of this blog assumes you are familiar with KQL custom plugins: how they work and how to upload them into Security Copilot.
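To preview where we end up, the single line that eventually goes into the plugin is nothing more than a call to the saved function. A minimal sketch follows; the function alias AIAppUsageByDept and the {{placeholder}} names are hypothetical and should match whatever you save in Log Analytics and declare in your YAML spec. The full query that the function will encapsulate is shown next.
// Invoking the saved function from Log Analytics, with parameters in the order they were
// defined in the UI (User_Dept first, then lookback in this walkthrough):
AIAppUsageByDept('Finance', 30d)
// And the corresponding one-line template inside the KQL plugin, with Security Copilot
// substituting the user-supplied values at run time (placeholder names are illustrative):
// AIAppUsageByDept('{{User_Dept}}', totimespan('{{lookback}}'))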
CloudAppEvents | where RawEventData.TargetDomain has_any ( 'grok.com', 'x.ai', 'mistral.ai', 'cohere.ai', 'perplexity.ai', 'huggingface.co', 'adventureai.gg', 'ai.google/discover/palm2', 'ai.meta.com/llama', 'ai2006.io', 'aibuddy.chat', 'aidungeon.io', 'aigcdeep.com', 'ai-ghostwriter.com', 'aiisajoke.com', 'ailessonplan.com', 'aipoemgenerator.org', 'aissistify.com', 'ai-writer.com', 'aiwritingpal.com', 'akeeva.co', 'aleph-alpha.com/luminous', 'alphacode.deepmind.com', 'analogenie.com', 'anthropic.com/index/claude-2', 'anthropic.com/index/introducing-claude', 'anyword.com', 'app.getmerlin.in', 'app.inferkit.com', 'app.longshot.ai', 'app.neuro-flash.com', 'applaime.com', 'articlefiesta.com', 'articleforge.com', 'askbrian.ai', 'aws.amazon.com/bedrock/titan', 'azure.microsoft.com/en-us/products/ai-services/openai-service', 'bard.google.com', 'beacons.ai/linea_builds', 'bearly.ai', 'beatoven.ai', 'beautiful.ai', 'beewriter.com', 'bettersynonyms.com', 'blenderbot.ai', 'bomml.ai', 'bots.miku.gg', 'browsegpt.ai', 'bulkgpt.ai', 'buster.ai', 'censusgpt.com', 'chai-research.com', 'character.ai', 'charley.ai', 'charshift.com', 'chat.lmsys.org', 'chat.mymap.ai', 'chatbase.co', 'chatbotgen.com', 'chatgpt.com', 'chatgptdemo.net', 'chatgptduo.com', 'chatgptspanish.org', 'chatpdf.com', 'chattab.app', 'claid.ai', 'claralabs.com', 'claude.ai/login', 'clipdrop.co/stable-diffusion', 'cmdj.app', 'codesnippets.ai', 'cohere.com', 'cohesive.so', 'compose.ai', 'contentbot.ai', 'contentvillain.com', 'copy.ai', 'copymatic.ai', 'copymonkey.ai', 'copysmith.ai', 'copyter.com', 'coursebox.ai', 'coverler.com', 'craftly.ai', 'crammer.app', 'creaitor.ai', 'dante-ai.com', 'databricks.com', 'deepai.org', 'deep-image.ai', 'deepreview.eu', 'descrii.tech', 'designs.ai', 'docgpt.ai', 'dreamily.ai', 'editgpt.app', 'edwardbot.com', 'eilla.ai', 'elai.io', 'elephas.app', 'eleuther.ai', 'essayailab.com', 'essay-builder.ai', 'essaygrader.ai', 'essaypal.ai', 'falconllm.tii.ae', 'finechat.ai', 'finito.ai', 'fireflies.ai', 'firefly.adobe.com', 'firetexts.co', 'flowgpt.com', 'flowrite.com', 'forethought.ai', 'formwise.ai', 'frase.io', 'freedomgpt.com', 'gajix.com', 'gemini.google.com', 'genei.io', 'generatorxyz.com', 'getchunky.io', 'getgptapi.com', 'getliner.com', 'getsmartgpt.com', 'getvoila.ai', 'gista.co', 'github.com/features/copilot', 'giti.ai', 'gizzmo.ai', 'glasp.co', 'gliglish.com', 'godinabox.co', 'gozen.io', 'gpt.h2o.ai', 'gpt3demo.com', 'gpt4all.io', 'gpt-4chan+)', 'gpt6.ai', 'gptassistant.app', 'gptfy.co', 'gptgame.app', 'gptgo.ai', 'gptkit.ai', 'gpt-persona.com', 'gpt-ppt.neftup.app', 'gptzero.me', 'grammarly.com', 'hal9.com', 'headlime.com', 'heimdallapp.org', 'helperai.info', 'heygen.com', 'heygpt.chat', 'hippocraticai.com', 'huggingface.co/spaces/tiiuae/falcon-180b-demo', 'humanpal.io', 'hypotenuse.ai', 'ichatwithgpt.com', 'ideasai.com', 'ingestai.io', 'inkforall.com', 'inputai.com/chat/gpt-4', 'instantanswers.xyz', 'instatext.io', 'iris.ai', 'jasper.ai', 'jigso.io', 'kafkai.com', 'kibo.vercel.app', 'kloud.chat', 'koala.sh', 'krater.ai', 'lamini.ai', 'langchain.com', 'laragpt.com', 'learn.xyz', 'learnitive.com', 'learnt.ai', 'letsenhance.io', 'letsrevive.app', 'lexalytics.com', 'lgresearch.ai', 'linke.ai', 'localbot.ai', 'luis.ai', 'lumen5.com', 'machinetranslation.com', 'magicstudio.com', 'magisto.com', 'mailshake.com/ai-email-writer', 'markcopy.ai', 'meetmaya.world', 'merlin.foyer.work', 'mieux.ai', 'mightygpt.com', 'mosaicml.com', 'murf.ai', 'myaiteam.com', 'mygptwizard.com', 'narakeet.com', 'nat.dev', 'nbox.ai', 
'netus.ai', 'neural.love', 'neuraltext.com', 'newswriter.ai', 'nextbrain.ai', 'noluai.com', 'notion.so', 'novelai.net', 'numind.ai', 'ocoya.com', 'ollama.ai', 'openai.com', 'ora.ai', 'otterwriter.com', 'outwrite.com', 'pagelines.com', 'parallelgpt.ai', 'peppercontent.io', 'perplexity.ai', 'personal.ai', 'phind.com', 'phrasee.co', 'play.ht', 'poe.com', 'predis.ai', 'premai.io', 'preppally.com', 'presentationgpt.com', 'privatellm.app', 'projectdecember.net', 'promptclub.ai', 'promptfolder.com', 'promptitude.io', 'qopywriter.ai', 'quickchat.ai/emerson', 'quillbot.com', 'rawshorts.com', 'read.ai', 'rebecc.ai', 'refraction.dev', 'regem.in/ai-writer', 'regie.ai', 'regisai.com', 'relevanceai.com', 'replika.com', 'replit.com', 'resemble.ai', 'resumerevival.xyz', 'riku.ai', 'rizzai.com', 'roamaround.app', 'rovioai.com', 'rytr.me', 'saga.so', 'sapling.ai', 'scribbyo.com', 'seowriting.ai', 'shakespearetoolbar.com', 'shortlyai.com', 'simpleshow.com', 'sitegpt.ai', 'smartwriter.ai', 'sonantic.io', 'soofy.io', 'soundful.com', 'speechify.com', 'splice.com', 'stability.ai', 'stableaudio.com', 'starryai.com', 'stealthgpt.ai', 'steve.ai', 'stork.ai', 'storyd.ai', 'storyscapeai.app', 'storytailor.ai', 'streamlit.io/generative-ai', 'summari.com', 'synesthesia.io', 'tabnine.com', 'talkai.info', 'talkpal.ai', 'talktowalle.com', 'team-gpt.com', 'tethered.dev', 'texta.ai', 'textcortex.com', 'textsynth.com', 'thirdai.com/pocketllm', 'threadcreator.com', 'thundercontent.com', 'tldrthis.com', 'tome.app', 'toolsaday.com/writing/text-genie', 'to-teach.ai', 'tutorai.me', 'tweetyai.com', 'twoslash.ai', 'typeright.com', 'typli.ai', 'uminal.com', 'unbounce.com/product/smart-copy', 'uniglobalcareers.com/cv-generator', 'usechat.ai', 'usemano.com', 'videomuse.app', 'vidext.app', 'virtualghostwriter.com', 'voicemod.net', 'warmer.ai', 'webllm.mlc.ai', 'wellsaidlabs.com', 'wepik.com', 'we-spots.com', 'wordplay.ai', 'wordtune.com', 'workflos.ai', 'woxo.tech', 'wpaibot.com', 'writecream.com', 'writefull.com', 'writegpt.ai', 'writeholo.com', 'writeme.ai', 'writer.com', 'writersbrew.app', 'writerx.co', 'writesonic.com', 'writesparkle.ai', 'writier.io', 'yarnit.app', 'zevbot.com', 'zomani.ai' ) | extend sit = parse_json(tostring(RawEventData.SensitiveInfoTypeData)) | mv-expand sit | summarize Event_Count = count() by tostring(sit.SensitiveInfoTypeName), CountryCode, City, UserId = tostring(RawEventData.UserId), TargetDomain = tostring(RawEventData.TargetDomain), ActionType = tostring(RawEventData.ActionType), IPAddress = tostring(RawEventData.IPAddress), DeviceType = tostring(RawEventData.DeviceType), FileName = tostring(RawEventData.FileName), TimeBin = bin(TimeGenerated, 1h) | extend SensitivityScore = case(tostring(sit_SensitiveInfoTypeName) in~ ("U.S. 
Social Security Number (SSN)", "Credit Card Number", "EU Tax Identification Number (TIN)","Amazon S3 Client Secret Access Key","All Credential Types"), 90, tostring(sit_SensitiveInfoTypeName) in~ ("All Full names"), 40, tostring(sit_SensitiveInfoTypeName) in~ ("Project Obsidian", "Phone Number"), 70, tostring(sit_SensitiveInfoTypeName) in~ ("IP"), 50,10 ) | join kind=leftouter ( IdentityInfo | where TimeGenerated > ago(lookback) | extend AccountUpn = tolower(AccountUPN) ) on $left.UserId == $right.AccountUpn | join kind=leftouter ( BehaviorAnalytics | where TimeGenerated > ago(lookback) | extend AccountUpn = tolower(UserPrincipalName) ) on $left.UserId == $right.AccountUpn //| where BlastRadius == "High" //| where RiskLevel == "High" | where Department == User_Dept | summarize arg_max(TimeGenerated, *) by sit_SensitiveInfoTypeName, CountryCode, City, UserId, TargetDomain, ActionType, IPAddress, DeviceType, FileName, TimeBin, Department, SensitivityScore | summarize sum(Event_Count) by sit_SensitiveInfoTypeName, CountryCode, City, UserId, Department, TargetDomain, ActionType, IPAddress, DeviceType, FileName, TimeBin, BlastRadius, RiskLevel, SourceDevice, SourceIPAddress, SensitivityScore With parameterized functions, follow these steps to simplify the plugin that will be built based on the query above: 1. Define the variables/parameters upfront in the query (BEFORE creating the parameters in the UI). This will put the query in a temporarily unusable state because the parameters will cause syntax problems at this point. However, since the plan is to run the query as a function, this is OK. 2. Create the parameters in the Log Analytics UI. Give the function a name and define the parameters exactly as they show up in the query in step 1 above. In this example, we are defining two parameters: lookback, to store the lookback period passed to the time filter, and User_Dept, to store the user's department. 3. Test the query. Note the order of parameter definition in the UI, i.e. first User_Dept, THEN the lookback period. You can interchange them if you like, but this will determine how you submit the query using the function. If the User_Dept parameter was defined first, then it needs to come first when executing the function. See the below screenshot. Switching them will result in the wrong parameter being passed to the query, and consequently 0 results will be returned. Effect of switched parameters: To edit the function, follow the steps below: Navigate to the Logs menu for your Log Analytics workspace, then select the function icon. Once satisfied with the query and function, build your spec file for the Security Copilot plugin. Note the parameter definition and usage in the sections highlighted in red below. And that's it, from 139 unwieldy KQL lines to one very manageable one! You are welcome 😊 Let's now put it through its paces once uploaded into Security Copilot. We start by executing the plugin using its default settings via the direct skill invocation method. We see indeed that the prompt returns results based on the default values passed as parameters to the function: Next, we still use direct skill invocation, but this time specify our own parameters: Lastly, we test it out with a natural language prompt: Tip: The function does not execute successfully if the default summarize function is used without creating a variable, i.e. if the summarize count() command is used in your query, it results in a system-defined output variable named count_.
To bypass this issue, be sure to use a user-defined variable such as Event_Count, as shown in line 77 below: Conclusion Leveraging parameterized functions within KQL-based custom plugins in Microsoft Security Copilot can significantly streamline your data querying and analysis capabilities. By encapsulating reusable logic, improving query efficiency, and ensuring maintainability, these functions provide an efficient approach for tapping into data stored across Microsoft Sentinel, Defender XDR, and Azure Data Explorer clusters. Start integrating parameterized functions into your KQL-based Security Copilot plugins today and let us have your feedback. Additional Resources Using parameterized functions in Microsoft Defender XDR Using parameterized functions with Azure Data Explorer Functions in Azure Monitor log queries - Azure Monitor | Microsoft Learn Kusto Query Language (KQL) plugins in Microsoft Security Copilot | Microsoft Learn Harnessing the power of KQL Plugins for enhanced security insights with Copilot for Security | Microsoft Community Hub
Busting myths on Microsoft Security Copilot
This blog aims to dispel common misconceptions surrounding Microsoft Security Copilot, a cutting-edge tool designed to enhance cybersecurity measures. By addressing these myths, we hope to provide clarity on how this innovative solution can be leveraged to strengthen your organization's security.
Using Security Copilot to Proactively Identify and Prioritize Vulnerabilities
Introduction There are many different approaches when it comes to prioritizing the vulnerabilities that need to be addressed urgently. Any information or guidance that helps you make better-informed decisions can be critical, but how can you stay informed? Leveraging all the information sources available to you can be the difference and allow you to be proactive when trying to protect your organization. One useful feed is offered by CISA (the Cybersecurity & Infrastructure Security Agency), which works with partners to defend against today's threats and collaborates to build a more secure and resilient infrastructure for the future. The Known Exploited Vulnerabilities (KEV) Catalog is a curated list maintained by CISA. It identifies vulnerabilities that have been actively exploited in the wild, posing significant risks to organizations and individuals. The catalog aims to enhance cybersecurity by providing timely information on these vulnerabilities, enabling proactive mitigation efforts. Key features of the KEV Catalog include: Identification: Lists vulnerabilities that are confirmed to be exploited. Details: Provides technical details, including affected products and versions. Mitigation: Offers guidance on how to address and remediate the vulnerabilities. Updates: Regularly updated to reflect new threats and exploited vulnerabilities. The KEV Catalog serves as a critical resource for cybersecurity professionals, helping them prioritize patching and defense strategies to protect against known threats. The feed is designed to help organizations stay informed about vulnerabilities that have been exploited in the wild, and it is part of CISA's efforts to defend against current threats and build a more secure and resilient infrastructure for the future. Workflow overview The automated CISA feed solution addresses prioritization challenges by streamlining the process of vulnerability management. This solution checks the latest CISA feed every 24 hours and queries the CVE findings against devices within Microsoft Defender for Endpoint. Security Copilot then checks for remediation actions and enriches the description, providing a comprehensive overview of the vulnerability. Key benefits of the Logic App include: Automated Updates: The Logic App automatically retrieves the latest CISA feed, ensuring that analysts have up-to-date information without manual intervention. This eliminates the need for manual checks and reduces the risk of missing critical updates. Device Vulnerability Assessment: It queries the CVE findings against devices within the organization, identifying which devices are vulnerable to the reported CVEs (a sketch of this lookup appears after this list). This targeted approach allows analysts to focus on the most critical vulnerabilities affecting their specific environment, enhancing the efficiency of the remediation process. Remediation Insights: Security Copilot provides detailed remediation actions, helping analysts understand the steps needed to mitigate the vulnerabilities. By enriching the description with actionable insights, it simplifies the decision-making process and accelerates the implementation of security measures. Email Notifications: An email with the findings is sent to a designated mailbox, allowing for easy review and follow-up. This ensures that all relevant stakeholders are informed promptly, facilitating coordinated responses and continuous monitoring of the organization's security posture. Click here to get started and install the Logic App today.
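Under the hood, the device lookup described above amounts to checking the KEV CVE list against the Defender vulnerability management tables. A minimal sketch, with a couple of example CVE IDs standing in for the values the Logic App pulls from the CISA feed:
// Which onboarded devices are exposed to CVEs from the CISA KEV feed?
let kevCves = dynamic(["CVE-2023-23397", "CVE-2024-21412"]);   // example values; the Logic App supplies the real list
DeviceTvmSoftwareVulnerabilities
| where CveId in (kevCves)
| summarize AffectedDevices = dcount(DeviceId), SampleDevices = make_set(DeviceName, 25)
    by CveId, SoftwareName, VulnerabilitySeverityLevel, RecommendedSecurityUpdate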
Conclusion To prioritize effectively, gather all necessary information for informed decisions. While the Logic App CISA workflow is one approach, other methods may better suit your organization. Function Apps can enhance decision making by automating and streamlining security operations with integrated tools and processes. The Security Copilot GitHub repository offers AI-powered solutions using machine learning and natural language processing to improve security. These tools help identify vulnerabilities, predict risks, and implement protective measures. Check it out!
Securely integrate On-Prem and Self-Hosted VM instances of Splunk with Microsoft Security Copilot
By leveraging Microsoft Entra ID Application Proxy and Azure Application Gateway with Web Application Firewall (WAF), you can securely connect on-premises or self-hosted Splunk instances to Microsoft Security Copilot—enabling seamless log analysis and threat investigation without exposing Splunk to the internet. This approach extends Security Copilot's reach beyond SaaS applications, broadening the context needed for effective investigations across hybrid environments.
Take Flight with Microsoft Security Copilot Flight School
Greetings pilots, and welcome to another pioneering year of AI innovation with Security Copilot. Find out how your organization can reach new heights with Security Copilot through the many exciting announcements on the way at both Microsoft Secure and RSA 2025. This is why now is the time to familiarize yourself and get airborne with Security Copilot. Go to School Microsoft Security Copilot Flight School is a comprehensive series charted to take students through fundamental concepts of AI definitions and architectures, take flight with prompting and automation, and hit supersonic speeds with Logic Apps and custom plugins. By the end of the course, students should be equipped with the requisite knowledge for how to successfully operate Security Copilot to best meet their organizational needs. The series contains 11 episodes with each having a flight time of around 10 minutes. Security Copilot is something I really, really enjoy, whether I’m actively contributing to its improvement or advocating for the platform’s use across security and IT workflows. Ever since I was granted access two years ago – which feels like a millennium in the age of AI – it’s been a passion of mine, and it’s why just recently I officially joined the Security Copilot product team. This series in many ways reflects not only my passion but similar passion found in my marketing colleagues Kathleen Lavallee (Senior Product Marketing Manager, Security Copilot) Shirleyse Haley (Senior Security Skilling Manager), and Shateva Long (Product Manager, Security Copilot). I hope that you enjoy it just as much as we did making it. Go ahead, and put on your favorite noise-cancelling headphones, it’s time, pilots, to take flight. Log Flight Hours There are two options for watching Security Copilot Flight School: either on Microsoft Learn or via the Youtube Playlist found on the Microsoft Security Youtube Channel. The first two episodes focus on establishing core fundamentals of Security Copilot platform design and architecture – or perhaps attaining your instrument rating. The episodes thereafter are plotted differently, around a standard operating procedure. To follow the ideal flight path Security Copilot should be configured and ready to go – head over to MS Learn and the Adoption Hub to get airborne. It’s also recommended that pilots watch the series sequentially, and be prepared to follow along with resources found on Github, to maximize learning and best align with the material. This will mean that you’ll need to coordinate with a pilot with owner permissions for your instance to create and manipulate the necessary resources. Episode 1 - What is Microsoft Security Copilot? Security is complex and requires highly specialized skills to face the challenges of today. Because of this, many of the people working to protect an organization work in silos that can be isolated from other business functions. Further, enterprises are highly fragmented environments with esoteric systems, data, and processes. All of which takes a tremendous amount of time, energy, and effort just to do the day-to-day. Security Copilot is a cloud-based, AI-powered security platform that is designed to address the challenges presented by complex and fragmented enterprise environments by redefining what security is and how security gets done. What is AI, and why exactly should it be used in a cybersecurity context? 
Episode 2 – AI Orchestration with Microsoft Security Copilot

Why is The Paper Clip Pantry a 5-star restaurant renowned the world over for its Wisconsin Butter Burgers? Perhaps it's how a chef uses a staff with unique skills and orchestrates the sourcing of resources in real time, against specific contexts, to complete an order. After watching this episode you'll understand how AI orchestration works, why nobody eats a burger with only ketchup, and how The Paper Clip Pantry operates just like the Security Copilot orchestrator.

Episode 3 – Standalone and Embedded Experiences

Do you have a friend who eats pizza in an inconceivable way? Maybe they eat a slice crust-first, or dip it into a sauce you never thought compatible with pizza? They work with pizza differently, just like any one security workflow can differ from one task, team, or individual to the next. This philosophy is why Security Copilot has two experiences – solutions embedded within products, and a standalone portal – to augment workflows no matter their current state. This episode begins covering those experiences.

Episode 4 – Other Embedded Experiences

Turns out you can also insist upon putting cheese inside of pizza crust, or bake it thick enough to require a fork and knife. I imagine it's probably something Windows 95 Man would do. In this episode, the Microsoft Entra, Purview, Intune, and Microsoft Threat Intelligence products showcase how Security Copilot advances their workflows within their portals. Beyond baking in the concepts of many workflows and many operators, the takeaway from this episode is that Security Copilot works with security-adjacent workflows – IT, identity, and DLP.

Episode 5 – Manage Your Plugins

Plugins source different insights across your environment. Like our chef in The Paper Clip Pantry, we should probably define what we want to cook, which chefs to use, and set permissions for those who can interact with any input or output from the kitchen. Find out what plugins add to Security Copilot and how you can set plugin controls for your team and organization.

Episode 6 – Prompting

Is this an improv lesson, or a baking show? Or maybe, if you watch this episode, you'll learn how Security Copilot handles natural language inputs to provide you meaningful answers known as responses.

Episode 7 – Prompt Engineering

With the fundamentals of prompting in your flight log, it's time to soar a bit higher with prompt engineering. In this episode you will learn how to structure prompts in a way that maximizes the benefits of Security Copilot and begins building workflows. Congrats, pilot, your burgers will no longer come with just ketchup.

Episode 8 – Using Promptbooks

What would it look like to find a series of prompts and run them in the same sequence with the same output every time? You guessed it: a promptbook, a repeatable workflow in the age of AI. See where to access promptbooks within the platform, and claw back some of your day to perfect your next butter burger.

Episode 9 – Custom Promptbooks

You've been tweaking your butter burger recipe for months now. You've finally landed on the perfect version by incorporating a secret nacho cheese recipe. The steps are defined, the recipe perfect. How do you repeat it? Just like your butter burger creation, you might discover or design workflows with Security Copilot. With custom promptbooks you can repeat and share them across your organization. In this episode you'll learn about the different ways Security Copilot helps you develop your own custom AI workflows.
Episode 10 – Logic Apps

System automation, robot chefs? Actions? What if customers could order butter burgers with the click of a button, and the kitchen staff would automatically make one? Or perhaps every Friday at 2pm a butter burger was simply delivered to you? Chances are there are different conditions across your organization that, when present, require a workflow to begin. With Logic Apps, Security Copilot can be used to automatically aid workflows across any system a Logic App can connect to (see the short sketch at the end of this post). More automation, less mouse clicking, that's a flight plan everyone can agree on.

Episode 11 – Extending to Your Ecosystem

A famed restaurant critic stopped into The Paper Clip Pantry, ordered a butter burger, and it's now the burger everyone is talking about. Business is booming and it's time to expand the menu – maybe a butter burger pizza, perhaps a doughnut butter burger? But you'll need some new recipes and sources of knowledge to achieve this. Like a food menu, the possibilities for expanding Security Copilot's capabilities are endless. In this episode, learn how this can be achieved with custom plugins and knowledgebases. Once you have that in your log, you will be a certified Ace, ready to take flight with Security Copilot.

Take Flight

I really hope that you not only learn something new but have fun taking flight with the Security Copilot Flight School. As with any new and innovative technology, the learning never stops, and there will be opportunities to log more flight hours from our expert flight crews. Stay tuned to the Microsoft Security Copilot video hub, Microsoft Secure, and RSA 2025 for more content in the next few months. If you think it's time to get the rest of your team and/or organization airborne, check out the Security Copilot adoption hub to get started: aka.ms/SecurityCopilotAdoptionHub

Carry-on Resources

Our teams have been hard at work building solutions to extend Security Copilot. You can find them on our community GitHub page: aka.ms/SecurityCopilotGitHubRepo

To stay close to the latest in product news and development, and to interact with our engineering teams, please join the Security Copilot CCP to get the latest information: aka.ms/JoinCCP
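As referenced in the Episode 10 notes, here is a minimal sketch of one way an external system could kick off a Logic Apps workflow that, in later steps, hands work to Security Copilot (for example, via the Security Copilot connector). It assumes a workflow that starts with an HTTP Request trigger; the callback URL, its SAS signature, and the payload fields are hypothetical placeholders, not values from this post.

```python
# Minimal sketch: fire a Logic Apps HTTP "Request" trigger from Python.
# Assumption: the workflow's later steps pass this payload to a Security Copilot
# action (for example, submitting a prompt about an incident).
import requests

# Hypothetical callback URL -- copy the real one from the Request trigger in the Azure portal.
LOGIC_APP_TRIGGER_URL = (
    "https://prod-00.eastus.logic.azure.com:443/workflows/<workflow-id>/triggers/"
    "manual/paths/invoke?api-version=2016-10-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=<sas>"
)

# Hypothetical payload shape -- match whatever schema your trigger defines.
payload = {
    "incidentId": "12345",
    "prompt": "Summarize this incident and recommend next steps.",
}

resp = requests.post(LOGIC_APP_TRIGGER_URL, json=payload, timeout=30)
resp.raise_for_status()
print("Workflow accepted with status:", resp.status_code)
```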