security operations
Microsoft Sentinel data lake FAQ
On September 30, 2025, Microsoft announced the general availability of the Microsoft Sentinel data lake, designed to centralize and retain massive volumes of security data in open formats like delta parquet. By decoupling storage from compute, the data lake supports flexible querying, while offering unified data management and cost-effective retention. The Sentinel data lake is a game changer for security teams, serving as the foundational layer for agentic defense, deeper security insights and graph-based enrichment. In this blog we offer answers to many of the questions we’ve heard from our customers and partners. General questions 1. What is the Microsoft Sentinel data lake? Microsoft has expanded its industry-leading SIEM solution, Microsoft Sentinel, to include a unified, security data lake, designed to help optimize costs, simplify data management, and accelerate the adoption of AI in security operations. This modern data lake serves as the foundation for the Microsoft Sentinel platform. It has a cloud-native architecture and is purpose-built for security—bringing together all security data for greater visibility, deeper security analysis and contextual awareness. It provides affordable, long-term retention, allowing organizations to maintain robust security while effectively managing budgetary requirements. 2. What are the benefits of Sentinel data lake? Microsoft Sentinel data lake is designed for flexible analytics, cost management, and deeper security insights. It centralizes security data in an open format like delta parquet for easy access. This unified view enhances threat detection, investigation, and response across hybrid and multi-cloud environments. It introduces a disaggregated storage and compute pricing model, allowing customers to store massive volumes of security data at a fraction of the cost compared to traditional SIEM solutions. It allows multiple analytics engines like Kusto, Spark, and ML to run on a single data copy, simplifying management, reducing costs, and supporting deeper security analysis. It integrates with GitHub Copilot and VS Code empowering SOC teams to automate enrichment, anomaly detection, and forensic analysis. It supports AI agents via the MCP server, allowing tools like GitHub Copilot to query and automate security tasks. The MCP Server layer brings intelligence to the data, offering Semantic Search, Query Tools, and Custom Analysis capabilities that make it easier to extract insights and automate workflows. Customers also benefit from streamlined onboarding, intuitive table management, and scalable multi-tenant support, making it ideal for MSSPs and large enterprises. The Sentinel data lake is purpose built for security workloads, ensuring that processes from ingestion to analytics meet cybersecurity requirements. 3. Is the Sentinel data lake generally available? Yes. The Sentinel data lake is generally available (GA) starting September 30, 2025. To learn more, see GA announcement blog. 4. What happens to Microsoft Sentinel SIEM? Microsoft is expanding Sentinel into an AI powered end-to-end security platform that includes SIEM and new platform capabilities - Security data lake, graph-powered analytics and MCP Server. SIEM remains a core component and will be actively developed and supported. Getting started 1. What are the prerequisites for Sentinel data lake? To get started: Connect your Sentinel workspace to Microsoft Defender prior to onboarding to Sentinel data lake. 
Once in the Defender experience see data lake onboarding documentation for next steps. Note: Sentinel is moving to the Microsoft Defender portal and the Sentinel Azure portal will be retired by July 2026. 2. I am a Sentinel-only customer, and not a Defender customer, can I use the Sentinel data lake? Yes. You must connect Sentinel to the Defender experience before onboarding to the Sentinel data lake. Microsoft Sentinel is generally available in the Microsoft Defender portal, with or without Microsoft Defender XDR or an E5 license. If you have created a log analytics workspace, enabled it for Sentinel and have the right Microsoft Entra roles (e.g. Global Administrator + Subscription Owner, Security Administrator + Sentinel Contributor), you can enable Sentinel in the Defender portal. For more details on how to connect Sentinel to Defender review these sources: Microsoft Sentinel in the Microsoft Defender portal 3. In what regions is Sentinel data lake available? For supported regions see: Geographical availability and data residency in Microsoft Sentinel | Microsoft Learn 4. Is there an expected release date for Microsoft Sentinel data lake in Government clouds? While the exact date is not yet finalized, we anticipate support for these clouds soon. 5. How will URBAC and Entra RBAC work together to manage the data lake given there is no centralized model? Entra RBAC will provide broad access to the data lake (URBAC maps the right permissions to specific Entra role holders: GA/SA/SO/GR/SR). URBAC will become a centralized pane for configuring non-global delegated access to the data lake. For today, you will use this for the “default data lake” workspace. In the future, this will be enabled for non-default Sentinel workspaces as well – meaning all workspaces in the data lake can be managed here for data lake RBAC requirements. Azure RBAC on the Log Analytics (LA) workspace in the data lake is respected through URBAC as well today. If you already hold a built-in role like log analytics reader, you will be able to run interactive queries over the tables in that workspace. Or, if you hold log analytics contributor, you can read and manage table data. For more details see: Roles and permissions in the Microsoft Sentinel platform | Microsoft Learn Data ingestion and storage 1. How do I ingest data into the Sentinel data lake? To ingest data into the Sentinel data lake, you can use existing Sentinel data connectors or custom connectors to bring data from Microsoft and third-party sources. Data can be ingested into the analytic tier and/or data lake tier. Data ingested into the analytics tier is automatically mirrored to the lake, while lake-only ingestion is available for select tables. Data retention is configured in table management. Note: Certain tables do not support data lake-only ingestion via either API or data connector UI. See here for more information: Custom log tables. 2. What is Microsoft’s guidance on when to use analytics tier vs. the data lake tier? Sentinel data lake offers flexible, built-in data tiering (analytics and data lake tiers) to effectively meet diverse business use cases and achieve cost optimization goals. Analytics tier: Is ideal for high-performance, real-time, end-to-end detections, enrichments, investigation and interactive dashboards. Typically, high-fidelity data from EDRs, email gateways, identity, SaaS and cloud logs, threat intelligence (TI) should be ingested into the analytics tier. 
Data in the analytics tier is best monitored proactively with scheduled alerts and scheduled analytics to enable security detections Data in this tier is retained at no cost for up to 90 days by default, extendable to 2 years. A copy of the data in this tier is automatically available in the data lake tier at no extra cost, ensuring a unified copy of security data for both tiers. Data lake tier: Is designed for cost-effective, long-term storage. High-volume logs like NetFlow logs, TLS/SSL certificate logs, firewall logs and proxy logs are best suited for data lake tier. Customers can use these logs for historical analysis, compliance and auditing, incident response (IR), forensics over historical data, build tenant baselines, TI matching and then promote resulting insights into the analytics tier. Customers can run full Kusto queries, Spark Notebooks and scheduled jobs over a single copy of their data in the data lake. Customers can also search, enrich and restore data from the data lake tier to the analytics tier for full analytics. For more details see documentation. 3. What does it mean that a copy of all new analytics tier data will be available in the data lake? When Sentinel data lake is enabled, a copy of all new data ingested into the analytics tier is automatically duplicated into the data lake tier. This means customers don’t need to manually configure or manage this process—every new log or telemetry added to the analytics tier becomes instantly available in the data lake. This allows security teams to run advanced analytics, historical investigations, and machine learning models on a single, unified copy of data in the lake, while still using the analytics tier for real-time SOC workflows. It’s a seamless way to support both operational and long-term use cases—without duplicating effort or cost. 4. Is there any cost for retention in the analytics tier? You will get 90 days of analytics retention free. Simply set analytics retention to 90 days or less. Total retention setting – only the mirrored portion that overlaps with the free analytics retention is free in the data lake. Retaining data in the lake beyond the analytics retention period incurs additional storage costs. See documentation for more details: Manage data tiers and retention in Microsoft Sentinel | Microsoft Learn 5. What is the guidance for Microsoft Sentinel Basic and Auxiliary Logs customers? If you previously enabled Basic or Auxiliary Logs plan in Sentinel: You can view Basic Logs in the Defender portal but manage it from the Log Analytics workspace. To manage it in the Defender portal, you must change the plan from Basic to Analytics. Existing Auxiliary Log tables will be available in the data lake tier for use once the Sentinel data lake is enabled. Prior to the availability of Sentinel data lake, Auxiliary Logs provided a long-term retention solution for Sentinel SIEM. Now once the data lake is enabled, Auxiliary Log tables will be available in the Sentinel data lake for use with the data lake experiences. Billing for Auxiliary Logs will switch to Sentinel data lake meters. Microsoft Sentinel customers are recommended to start planning their data management strategy with the data lake. While Basic and auxiliary Logs are still available, they are not being enhanced further. Please plan on onboarding your security data to the Sentinel data lake. Azure Monitor customers can continue to use Basic and Auxiliary Logs for observability scenarios. 6. What happens to customers that already have Archive logs enabled? 
If a customer has already configured tables for Archive retention, those settings will be inherited by the Sentinel data lake and will not change. Data in the Archive logs will continue to be accessible through Sentinel search and restore experiences. Mirrored data (in the data lake) will be accessible via lake explorer and notebook jobs. Example: If a customer has 12 months of total retention enabled on a table, 2 months after enabling ingestion into the Sentinel data lake, the customer will still have access to 12 months of archived data (through Sentinel search and restore experiences), but access to only 2 months of data in the data lake (since the data lake was enabled). Key considerations for customers that currently have Archive logs enabled: The existing archive will remain, with new data ingested into the data lake going forward; previously stored archive data will not be backfilled into the lake. Archive logs will continue to be accessible via the Search and Restore tab under Sentinel. If analytics and data lake mode are enabled on table, which is the default setting for analytics tables when Sentinel data lake is enabled, data will continue to be ingested into the Sentinel data lake and archive going forward. There will only be one retention billing meter going forward. Archive will continue to be accessible via Search and Restore. If Sentinel data lake-only mode is enabled on table, new data will be ingested only into the data lake; any data that’s not already in the Sentinel data lake won’t be migrated/backfilled. Data that was previously ingested under the archive plan will be accessible via Search and Restore. 7. What is the guidance for customers using Azure Data Explorer (ADX) alongside Microsoft Sentinel? Some customers might have set up ADX cluster to augment their Sentinel deployment. Customers can choose to continue using that setup and gradually migrate to Sentinel data lake for new data to receive the benefits of a fully managed data lake. For all new implementations it is recommended to use the Sentinel data lake. 8. What happens to the Defender XDR data after enabling Sentinel data lake? By default, Defender XDR retains threat hunting data in the XDR default tier, which includes 30 days of analytics retention, which is included in the XDR license. You can extend the table retention period for supported Defender XDR tables beyond 30 days. For more information see Manage XDR data in Microsoft Sentinel. Note: Today you can't ingest XDR tables directly to the data lake tier without ingesting into the analytics tier first. 9. Are there any special considerations for XDR tables? Yes, XDR tables are unique in that they are available for querying in advanced hunting by default for 30 days. To retain data beyond this period, an explicit change to the retention setting is required, either by extending the analytics tier retention or the total retention period. A list of XDR advanced hunting tables supported by Sentinel are documented here: Connect Microsoft Defender XDR data to Microsoft Sentinel | Microsoft Learn. KQL queries and jobs 1. Is KQL and Notebook supported over the Sentinel data lake? Yes, via the data lake KQL query experience along with a fully managed Notebook experience which enables spark-based big data analytics over a single copy of all your security data. Customers can run queries across any time range of data in their Sentinel data lake. In the future, this will be extended to enable SQL query over lake as well. 2. 
Why are there two different places to run KQL queries in Sentinel experience? Consolidating advanced hunting and KQL Explorer user interfaces is on the roadmap. Security analysts will benefit from unified query experience across both analytics and data lake tiers. 3. Where is the output from KQL jobs stored? KQL jobs are written into existing or new analytics tier table. 4. Is it possible to run KQL queries on multiple data lake tables? Yes, you can run KQL interactive queries and jobs using operators like join or union. 5. Can KQL queries (either interactive or via KQL jobs) join data across multiple workspaces? Yes, security teams can run multi-workspace KQL queries for broader threat correlation. Pricing and billing 1. How does a customer pay for Sentinel data lake? Sentinel data lake is a consumption-based service with disaggregated storage and compute business model. Customers continue to pay for ingestion. Customers set up billing as a part of their onboarding for storage and analytics over data in the data lake (e.g. Queries, KQL or Notebook Jobs). See Sentinel pricing page for more details. 2. What are the pricing components for Sentinel data lake? Sentinel data lake offers a flexible pricing model designed to optimize security coverage and costs. For specific meter definitions, see documentation. 3. What are the billing updates at GA? We are enabling data compression billed with a simple and uniform data compression rate of 6:1 across all data sources, applicable only to data lake storage. Starting October 1, 2025, the data storage billing begins on the first day data is stored. To support ingestion and standardization of diverse data sources, we are introducing a new Data Processing feature that applies a $0.10 per GB charge for all uncompressed data ingested into the data lake for tables configured for data lake only retention. (does not apply to tables configured for both analytic and data lake tier retention). 4. How is retention billed for tables that use data lake-only ingestion & retention? During the public preview, data lake-only tables included the first 30 days of retention at no cost. At GA, storage costs will be billed. In addition, when retention billing switches to using compressed data size (instead of ingested size), this will change, and charges will apply for the entire retention period. Because billing will be based on compressed data size, customers can expect significant savings on storage costs. 5. Does “Data processing” meter apply to analytics tier data duplicated in the data lake? No. 6. What happens to billing for customers that activate Sentinel data lake on a table with archive logs enabled? Customers will automatically be billed using the data lake storage meter. Note: This means that customers will be charged using the 6X compression rate for data lake retention. 7. How do I control my Sentinel data lake costs? Sentinel is billed based on consumption and prices vary based on usage. An important tool in managing the majority of the cost is usage of analytics “Commitment Tiers”. The data lake complements this strategy for higher-volume data like network and firewall data to reduce analytics tier costs. Use the Azure pricing calculator and the Sentinel pricing page to estimate costs and understand pricing. 8. How do I manage Sentinel data lake costs? We are introducing a new cost management experience (public preview) to help customers with cost predictability, billing transparency, and operational efficiency. 
These in-product reports provide customers with insights into usage trends over time, enabling them to identify cost drivers and optimize data retention and processing strategies. Customers will also be able to set usage-based alerts on specific meters to monitor and control costs. For example, you can receive alerts when query or notebook usage passes set limits, helping avoid unexpected expenses and manage budgets. See documentation to learn more.

9. If I’m an Auxiliary Logs customer, how will onboarding to the Sentinel data lake affect my billing?
Once a workspace is onboarded to Sentinel data lake, all Auxiliary Logs meters will be replaced by new data lake meters.

Thank you
Thank you to our customers and partners for your continued trust and collaboration. Your feedback drives our innovation, and we’re excited to keep evolving Microsoft Sentinel to meet your security needs. If you have any questions, please don’t hesitate to reach out—we’re here to support you every step of the way.

From idea to Security Copilot agent: Create, customize, and deploy
This week at Microsoft Secure, we announced the next big step forward in agentic security. In addition to Microsoft and partner-built agents, you can now create your own Security Copilot agents, extending the growing ecosystem of agents that help teams automate workflows, close gaps, and drive stronger security and IT outcomes. Why it matters: no two environments are the same. Out-of-the-box agents give you powerful starting points, but your workflows are unique. With custom agents, you get the flexibility to design and deploy solutions that fit your organization. Two ways to build: Your choice, your workflow Security Copilot gives you options. Analysts can easily build with a no-code interface. Developers can stay in their preferred coding environment. Either way, you end up with a fully functional, testable, and deployable agent. For full documentation and detailed guidance on building agents, check out the Microsoft Security Copilot documentation. But now, let’s walk through the key steps so you can get started building your own agent today. Option 1: Build in Security Copilot, no coding required Step 1: Create in natural language Click ‘Build’ in the left nav, describe what you want your agent to do in plain language, and submit. Security Copilot will engage in a back-and-forth conversation to clarify and capture your intent so you start with precision. Step 2: Auto-generate the configuration Security Copilot instantly creates a starter setup, giving you: An agent name and description Clear instructions and input parameters Recommended tools pulled from the catalog, including Microsoft, partner, and Sentinel MCP tools This saves time and generates a strong foundation you can build on Step 3: Customize to fit your needs Tailor the configuration to your needs, you can edit any part. Update instructions, swap tools, or add new ones from the tool catalog. If the right tool isn’t available, you can create one in natural language or a form-based experience. You’re in full control of how your agent works. Step 4: Keep YAML and no-code views aligned Every change you make is automatically reflected in the underlying YAML code. This ensures consistency between the no-code visual and code views, so both analysts and developers can work with confidence. Toggle on ‘view code’ to see it live. Step 5: Test and elevate with autotune instruction optimization Run full end-to-end tests or test individual components to see how your agent performs. Security Copilot shows detailed outputs and a step-by-step activity map of the agent’s dynamic plan, including the tools, inputs, and outputs. While you can test without it, turning on autotune instruction optimization delivers major advantages: Refined instruction recommendations you can copy directly into your config AI quality scoring on clarity, grounding, and detail to ensure your agent is effective before publishing Faster iteration with confidence your agent is tuned for real-world use Explore the activity graph tab to view a visual node map of the run, and click any node to see details of what happened at each step. Step 6: Publish and share When you’re ready, publish the agent into your Security Copilot instance at either a user or workspace scope (depending on admin permissions). If you’re a partner, you can also download the agent code, publish to the Microsoft Partner Center and contribute it to the Microsoft Security Store for broader visibility and adoption by customers. 
Benefit: Build production-ready agents in minutes without writing a single line of code. It’s that easy to build an agent tailored to your unique workflows, and you are not limited to the Security Copilot portal. If you prefer a developer-friendly environment, you can build entirely in VS Code using GitHub Copilot and Microsoft Sentinel MCP tools. You still get AI-powered guidance, YAML scaffolding, and testing support, along with rich context from Sentinel data and the full platform toolset, all while staying in the environment that works best for you. Option 2: Build in VS Code using GitHub Copilot + Microsoft Sentinel MCP Tools Step 1: Set up your development environment Enable the Microsoft Sentinel MCP server in VS Code. This gives you direct access to the collection of Security Copilot agent creation MCP tools and integrates with GitHub Copilot for code generation – all while staying in your preferred workspace. Step 2: Define agent behavior from natural language with platform context Describe the agent you want to build in natural language. GitHub Copilot interprets your intent, selects the relevant MCP tools, find relevant skills and tools in Security Copilot for your agent, and crafts the agent instructions. The agent YAML gets generated and outputted back to you. Because your agent is built on Microsoft Security Copilot and Sentinel, it automatically leverages rich data and tooling across the platform for context-aware, more effective results. Step 3: Iterate, customize and extend your agent Modify instructions, add tools, or create new tools as needed. Use prompts to vibe code your edits or copy the YAML into the code editor and directly modify the agent YAML there. GitHub Copilot keeps the chat and code in sync. Step 4: Deploy to Security Copilot for testing Once you’re ready to test your agent YAML, prompt GitHub Copilot to deploy the agent to your user scope. Then head to the Security Copilot portal to test and optimize your agent with autotune instruction optimization. Take advantage of detailed outputs, activity maps, and AI scoring to refine instructions and ensure your agent performs effectively in real-world scenarios. Step 5: Publish and share your agent Once validated, publish the agent into your Security Copilot instance at either user or workspace scope (depending on admin permissions). Partners can also download the agent code, publish to the Microsoft Partner Center, and contribute it to the Microsoft Security Store for broader discoverability and adoption. What you get: Full code-level control and the same AI-powered agent development experience while staying in your preferred workspace. Whichever approach you choose, you can build, test, and deploy agents that fit your workflows and environment. Microsoft Security Copilot and Microsoft Sentinel give you the tools and advanced AI guidance to create agents that work for your organization. Explore the Microsoft Security Store Automate your workflows with pre-built solutions. The Microsoft Security Store gives you a central place to discover and deploy agents and SaaS solutions created by Microsoft and partners. Browse ready-to-use solutions, learn from proven approaches, and adapt them with your own customizations. It’s the quickest way to expand your ecosystem of agents and accelerate impact. More resources about the Security Store: What is Security Store? Microsoft Learn Build, deploy, defend Security Copilot puts the power of agentic AI directly in your hands. 
Start with ready-to-use agents from Microsoft and partners, or create custom agents designed specifically for your environment and workflows. These agents streamline decision-making, surface critical insights, and free your team to focus on strategic security initiatives - making operations faster, smarter, and more responsive. Join us at Microsoft Ignite, online or in-person, for hands-on demos and insights on how Security Copilot agents empower teams to act faster and protect better.

More resources on building Security Copilot agents:
Watch the Mechanics video to see agents in action: Security Copilot agents Mechanics video
For more detailed guidance on building agents, check out the Microsoft Security Copilot documentation
Special thanks to my co-authors, Namrata Puri (Principal PM, Security Copilot) and Sherie Pan (PM, Security Copilot), for their insights and contributions.

Agentic security your way: Build your own Security Copilot agents
Microsoft Security Copilot is redefining how security and IT teams operate. Today at Microsoft Secure, we’re unveiling powerful updates that put genAI and agent-driven automation at the center of modern defense. In a world where threats move faster than ever, alerts pile up, and resources stay tight, Security Copilot delivers the competitive edge: contextual intelligence, a growing network of agents, and the flexibility to build your own. The announcements focus on three key areas: building your own Security Copilot agents for tailored workflows, expanding the agent ecosystem with new Microsoft and partner solutions, and improving agent quality and performance. These updates build on the agents first introduced in March while giving security and IT teams more flexibility and control. This is the blueprint for the next era of agentic defense, and it starts now. Build your own Security Copilot agents, your way While we already offer a growing catalog of ready-to-use agents built by Microsoft and partners, we know that no two environments are alike. That’s why Security Copilot empowers you to create custom agents your way for tailored workflows – whether you're an analyst with limited coding experience or a developer using your favorite platform – you can build agents that fit your needs. Build agents in the Security Copilot portal Users can now build agents with a simplified, no-code interface in the standalone Security Copilot experience. Simply describe the task or workflow in natural language, and Copilot automatically generates the agent code. You can edit components, add any additional tools, including Sentinel MCP tools from our rich tool catalog, test the agent, optimize its instructions, and publish directly to your tenant. Create dynamic, ready-to-use agents in minutes – without writing any code. Build agents in a preferred MCP server-enabled development environment For teams with experienced developers, you can also use natural language and vibe-coding to build agents in a preferred MCP server-enabled coding platform, such as VS Code using GitHub Copilot. By enabling the Sentinel MCP server, developers can access MCP tools to build, refine, and deploy custom agents directly within their workspace. This approach gives full control over code, tools, and deployment while keeping the process within familiar development platforms. These options empower both technical and non-technical teams to rapidly create, test, and deploy custom Security Copilot agents. Organizations can automate workflows faster, design agents to their unique needs, and improve security and IT operations across the board. Discover new Security Copilot agents Since Security Copilot agents were first introduced in March, we have delivered more than a dozen Microsoft and partner-developed agents that help organizations tackle real challenges in security and IT operations. Analysts using the Conditional Access Optimization Agent in Microsoft Entra have been able to quickly uncover policy gaps, closing an average of 26 gaps per customer in just one month, with 73% of early adopters acting on at least one recommendation. The Phishing Triage Agent in Microsoft Defender has allowed analysts to shift from reactive sifting to proactive resolution, reducing triage time by up to 78%. Read how St Lukes University saves nearly 200 hours monthly in phishing alert triage and creating incident reports in minutes instead of hours. The Phishing Triage Agent is a game changer. 
It’s saving us nearly 200 hours monthly by autonomously handling and closing thousands of false positive alerts.
- Krista Arndt, ACISO, St. Luke’s University Health Network

We’re continuing to build on this momentum with new agents designed to address additional security and IT scenarios. The new Access Review Agent in Microsoft Entra tackles a common challenge: reducing access review fatigue and the habit of approving access without review. It analyzes ongoing reviews, flags anomalies or unusual access patterns, and delivers actionable guidance in a conversational interface. Reviewers can approve, revoke, or request more details right in Microsoft Teams, helping them focus on the riskiest access, make faster decisions, and strengthen compliance. With innovations like this, we’re not just reducing fatigue—we’re redefining how access governance is done, setting the standard for security agents that adapt to the way people work. Learn more about the Access Review Agent here.

And, with the growing range of agentic use cases, the new Microsoft Security Store is your one-stop shop to discover, purchase, and deploy Security Copilot agents built by Microsoft and trusted partners. Find solutions aligned to SOC, IT, privacy, compliance, and governance teams, all in one place. By uniting discovery, deployment, and publishing in a single experience, Security Store powers a thriving ecosystem that gives your team a unique advantage: access to an ever-expanding range of agent capabilities that evolve as fast as the challenges they face. In addition to helping customers find the right solutions, Security Store also enables partners to bring their innovations to market. Partners can build and publish Security Copilot agents and SaaS solutions to grow their business and reach new customers. Today, we are announcing 30 new partner-built agents as well as 50 partner SaaS solutions in the Security Store. The launch of 30 new partner-built agents brings forward solutions like:
A Forensic Agent by glueckkanja AG delivers deep-dive analysis of Defender XDR incidents to accelerate investigations, while their Privileged Admin Watchdog Agent helps enforce zero standing privilege principles by getting rid of persistent admin identities. These innovations, along with their other 6 agents in the Security Store today, demonstrate how glueckkanja AG is empowering organizations to tackle a wide range of security and IT challenges.
Three agents from adaQuest focus on automating investigation and response so security teams can focus on what matters: a Ransomware Kill Chain Investigator Agent automates ransomware triage, an Entity Guard Investigator Agent investigates Defender incidents, and an Admin Guard Insight Agent analyzes administrative activity, detects anomalies, evaluates risk exposure and compliance, and offers actionable insights to improve administrative security posture.
An Identity Workload ID Agent by Invoke empowers identity administrators and security teams to manage and secure Workload Identities in Microsoft Entra, helping to reduce risk, strengthen compliance, and gain more control over identity sprawl.
To learn more about all new partner-built agents as well as partner SaaS offerings, read the blog or head to the Microsoft Security Store.

Smarter, faster Security Copilot agents
High-quality LLM instructions are critical to agent performance, yet manually fine-tuning them is time-consuming and error-prone.
We’re excited to introduce tools that help improve custom-built agent quality and performance, starting with autotune instruction optimization. Autotune eliminates the need for manual tuning by automatically analyzing and refining agent instructions for optimal performance. Simply enable autotune during testing and submit, then receive a detailed results report with suggested prompt changes that boost your agent’s AI quality score quickly and effortlessly. This optimization not only delivers better outcomes faster, but it also ensures that every agent in our ecosystem is always evolving - making them smarter, sharper, and more effective over time.

But instructions are only part of the picture. To truly empower agents, context and data are key. By combining rich security signals from Microsoft Sentinel with advanced AI reasoning, Microsoft is setting a new standard for what agents can achieve—resolving incidents faster, optimizing workflows, and delivering deeper, more actionable insight. Security Copilot leverages a unified foundation of structured, graph, and semantic data from Sentinel to give agents the context they need to connect the dots across your environment. This deep integration transforms what AI can do, enabling agents to reason, adapt, and act with precision at machine speed. Read the Sentinel graph announcement here.

Get Started Today
With Security Copilot, the power of AI is now in your hands. Deploy ready-to-use agents from Microsoft and partners, or design custom agents built for your environment and workflows. These agents accelerate decision-making, surface critical insights, and let teams focus on strategic security work - turning complexity into clarity and speed. Explore Security Store today to experience how agentic automation is reshaping security operations and unlocking the full potential of your team. Learn more about how to create your own agents. Deep dive into these innovations at Microsoft Secure on Sept. 30, Oct. 1 or on demand. Then, join us at Microsoft Ignite, Nov. 17–21 in San Francisco, CA or online—for more innovations, hands-on labs, and expert connections.
Automate Security Workflows in Microsoft Sentinel with BlinkOps

Security teams are under increasing pressure to respond faster to threats while managing growing complexity across their environments. Microsoft Sentinel’s elevated integration with BlinkOps helps address this challenge by enabling AI-powered, no-code automation that simplifies and accelerates security operations.

Introducing BlinkOps for Microsoft Sentinel
BlinkOps is a no-code security automation platform designed for security and platform operations teams. It allows users to build and scale workflows using natural language prompts and a library of over 30,000 pre-built actions. With BlinkOps, teams can automate incident response, compliance, and operational tasks—without writing a single line of code. Now with an enhanced integration with Microsoft Sentinel, BlinkOps enables customers to generate automated playbooks triggered by Sentinel alerts and incidents. This integration helps streamline threat response, reduce mean time to respond (MTTR), and improve operational efficiency.

Why BlinkOps?
Microsoft Sentinel customers may leverage Microsoft Sentinel’s SOAR capabilities through Logic Apps today. BlinkOps brings a new set of additional capabilities to Microsoft Sentinel-powered SOC teams, including:
AI-generated workflows: Create automation using natural language prompts.
Pre-built content: Access a rich library of templates tailored to Sentinel use cases.
No-code experience: Empower security analysts to build and manage workflows without engineering support.
Scalability: Deploy automation across multiple tenants and environments with ease.

Key Use Cases
The BlinkOps connector for Microsoft Sentinel supports several high-impact scenarios:
Automated response to alerts and incidents: Trigger sophisticated BlinkOps process workflows based on Sentinel signals to ensure swift, consistent action. Incorporate humans in interactive workflows so that automation is complemented with human judgment and decisions.
Template-driven playbooks: Leverage curated templates for common SOC tasks.

Examples
Consider this scenario: A SOC team wants an automation to help manage the response to phishing alerts in Microsoft Sentinel. The SOC team starts in BlinkOps by prompting the system to create a workflow. In this case a simple prompt is all it takes: “I would like an automation to respond to Phishing incidents in Microsoft Sentinel. We use Microsoft Security tooling (Teams, Defender, Entra etc.)”
Figure 1: BlinkOps Builder Prompt
BlinkOps then builds out a workflow for automating the handling of a phishing alert in a few seconds.
Figure 2: Building Workflow
A straightforward six-step set of actions is generated:
Figure 3: Phishing Workflow
Then, if the SOC team wants to refine or edit a specific workflow step, they can also use the BlinkOps builder AI to update individual steps. In this case, drafting the message to send to the broader security team.
Figure: Builder - Editing Action

Getting Started
To get started using BlinkOps and Microsoft Sentinel:
1. Visit https://www.blinkops.com/ to learn more about the platform.
2. Explore the BlinkOps connector in the Microsoft Sentinel Content Hub.
3. Use natural language to create your first workflow and start automating your SOC operations.

Case Management: Incidents, Cases, and When to Use Them
In March, Case Management went to GA status within the unified portal for customers. This introduced new functionality and experiences such as:
A new case queue
Custom statuses
A new case task experience
Linking incidents to cases
This can be a little confusing for existing users who are familiar with incidents and the incident experience for either Microsoft Defender or Sentinel. Let’s break this down into more detail.

What are Incidents?
Incidents are artifacts that act as containers for alerts to signal that a noteworthy event took place that involves one or more malicious activities. These serve as a single landing page for alerts, activities, entities, and more.

When to use Incidents?
Incidents are the default experience for analysts as they perform incident investigation and response. Incidents are where they will find any and all details available for alerts and entities while performing the basic tasks of a SOC analyst. Incidents should be used when investigating and responding to malicious activity within the environment. The current incident experience provides features such as:
Alert timeline
Entity mapping and tracking
Entity investigation graph
Copilot for Security
Pre-performed investigations and responses

What are Cases?
Cases are artifacts that represent an actionable or trackable item, such as an incident investigation, validating a threat hunting hypothesis, reviewing threat intelligence, managing endpoint vulnerabilities, and more. They can exist without alerts or incidents.

When to use Cases vs. Incidents?
This section is not meant to put one over the other, but is meant to clear up some confusion. Cases serve as items that can be created to track important activities within the SOC; they don’t have to be just for incident response. A case can be created for any notable activity that the SOC performs, as mentioned above. Cases can be used as a collaboration tool within your SOC team. While cases may seem redundant to incidents, they are not. Here are a few distinguishing points:
As incidents are a container for alerts, cases can be a container for incidents, allowing multiple incidents to be worked on at once if they are related by threat actor, impacted entities, and more.
Cases offer a native task experience, similar to the experience within Microsoft Sentinel in Azure.
Cases offer attachment support, allowing analysts a more traditional case management experience that incidents do not have.
Cases allow for more customization, such as custom statuses. Incidents do not offer custom statuses.
Let’s look at two example scenarios:

Cases with Incidents
I am a SOC analyst who is reviewing the incident queue. I find an incident that involves multiple threat types and scripts. I would like to work on this incident with my colleagues while tracking notable artifacts that we find in our investigation. For example:
I visit the unified incident queue and see that I have a multi-stage incident, involving multiple alerts for multiple assets.
I perform my initial triage and confirm that this is a true positive that should be addressed.
I will then cut a case and attach this incident to it for collaboration.
Within the case, I can add a code block to list any query that I have performed within Advanced Hunting, as well as paste results from my queries directly in the case for tracking.
If using Copilot for Security, I can copy and paste the Copilot incident summary in the case so that my colleagues can get an incident summary without having to leave the case.
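As an illustration, the kind of Advanced Hunting query documented in the case might look like the sketch below. The device name and process list are hypothetical placeholders; in practice they would come from the entities in the actual incident.

// Hypothetical hunting query pasted into the case: script interpreter activity
// on a device impacted by the incident
DeviceProcessEvents
| where Timestamp > ago(7d)
| where DeviceName == "workstation01.contoso.com" // placeholder for an impacted device
| where FileName in~ ("powershell.exe", "wscript.exe", "cscript.exe", "mshta.exe")
| project Timestamp, DeviceName, AccountName, FileName, ProcessCommandLine, InitiatingProcessFileName
| order by Timestamp desc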
Cases without Incidents
I am a SOC analyst who is responsible for remediating device vulnerabilities. I check our current CVEs within Exposure Management and see that I have several devices that are currently vulnerable to CVE-2025-5419, a Microsoft Edge Chromium vulnerability. I save my list of devices to a CSV file so that I can attach it to my case. I also copy the description of the CVE into the case notes to make it more convenient for my colleagues to join the case without needing to leave it. I then pivot to Advanced Hunting to review activities by any of these vulnerable devices (a sketch of this kind of query is shown below). I have a match and would like to connect that result to my case, so I use Export > Copy to Clipboard so that I can paste it in the case. Back within the case, I begin uploading the CSV of exposed devices as evidence, leave a message that is formatted to draw attention to the findings, and paste the findings from my query. Based on my findings, I begin generating new tasks for each device owner and pasting the instructions for remediation of the CVE.
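The Advanced Hunting query behind this step could look something like the following sketch. It uses the standard Defender vulnerability management table; adjust the projected columns to whatever evidence you want to capture in the case.

// Sketch: list devices still exposed to CVE-2025-5419 so the results can be
// exported and attached to the case
DeviceTvmSoftwareVulnerabilities
| where CveId == "CVE-2025-5419"
| project DeviceName, OSPlatform, SoftwareName, SoftwareVersion, VulnerabilitySeverityLevel, RecommendedSecurityUpdate
| sort by DeviceName asc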
These are just some examples of the many uses for cases within the Defender Portal. Hopefully this highlights the versatility of case management today and how it can operate both with and without an incident involved. Keep an eye out for more improvements as Case Management matures. If you are looking to learn more about case management, please check out the resources below:
Public documentation: Manage security operations cases natively in the Microsoft Defender portal - Unified security operations | Microsoft Learn
Video based learning: https://www.youtube.com/watch?v=G-vfMJSL11g
Demo: Case Management in Microsoft Defender

How to: Ingest Splunk alert data to Microsoft Sentinel SIEM

Thanks to Javier Soriano, Principal Product Manager - OneSOC Customer Experience Engineering, for the peer review.

Introduction
Although the recommended approach is to not have multiple SIEM solutions in place, many organizations are still running dual-SIEM setups, sometimes even introducing additional ones in the mix. The combination most often seen is running a legacy solution and pairing it with a modern SIEM solution like Microsoft Sentinel SIEM. Side-by-side architecture is recommended for just long enough to complete the migration, train people and update processes - but organizations might stay with the side-by-side configuration longer when they are not ready to move away from legacy solutions. In such situations, organizations usually opt for sending alerts from their legacy SIEM to Sentinel SIEM:
Cloud data is ingested and analyzed in Sentinel SIEM
On-premises data is ingested and analyzed in the legacy SIEM, which generates alerts
Alerts are forwarded from the legacy SIEM to Sentinel SIEM
With this approach, SOC teams have a single interface where they are able to cross-correlate and investigate alerts from their organizations while benefiting from the full value of unified security operations in Microsoft Defender.

Send Splunk alerts to Sentinel SIEM
Splunk side
When an alert is raised in Splunk, organizations have the option to set up the following alert actions:
Email notification action
Webhook alert action
Output results to a CSV lookup
Log events
Monitor triggered alerts
Send alerts and dashboards to Splunk Mobile Users
Interestingly, it is possible to work with webhooks to make an HTTP POST request to a URL. The data is in JSON format, which makes it easily consumable by Sentinel SIEM. For this to work, Splunk needs the hook URL from the target source (in this case, Sentinel SIEM).

{
  "result": {
    "sourcetype": "mongod",
    "count": "8"
  },
  "sid": "scheduler_admin_search_W2_at_14232356_132",
  "results_link": "http://web.example.local:8000/app/search/@go?sid=scheduler_admin_search_W2_at_14232356_132",
  "search_name": null,
  "owner": "admin",
  "app": "search"
}
Example: Splunk alert JSON payload

Microsoft side
From the Microsoft perspective, organizations can take advantage of the Logs Ingestion API, which allows any application that can make a REST API call to send data to Sentinel SIEM. To configure the Logs Ingestion API, a few supporting resources are needed:
A Microsoft Entra application which will authenticate against the API
A custom table in the Log Analytics workspace, where the data will be sent to and accessible from Sentinel SIEM
A Data Collection Rule (DCR) which will direct data to the target custom table
The Entra application from the first step needs to have RBAC assigned on the DCR resource
A solution to call the Logs Ingestion API so the data can be sent to the Sentinel SIEM.
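Once these resources are in place and alerts start flowing, a quick KQL check against the destination custom table confirms that data is arriving. The table name below matches the example used in the deployment parameters later in this post (SplunkAlertInfo_CL); apart from TimeGenerated, the column layout depends on how the Splunk JSON payload is mapped, so treat this as a sketch.

// Sketch: confirm Splunk alert data is landing in the custom table
SplunkAlertInfo_CL
| where TimeGenerated > ago(24h)
| sort by TimeGenerated desc
| take 50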
In order to make this process streamlined and easy to deploy, a solution has been developed which automates the creation of all of these supporting resources and gives you a webhook URL that can simply be placed in your Splunk solution: https://github.com/Azure/Azure-Sentinel/tree/master/Tools/SplunkAlertIngestion
Picture: Content of the solution
The script with supporting ARM templates can be run directly from the Azure Cloud Shell and configured with a few parameters:
./SplunkAlertIngestion.ps1 -ServicePrincipalName "" -tableName "" -workspaceResourceId "" -dataCollectionRuleName "" -location ""
ServicePrincipalName - mandatory, define a name for the SP
tableName - mandatory, define a name for the custom table with the suffix _CL (example: SplunkAlertInfo_CL)
workspaceResourceId - mandatory, the resourceId can be fetched from the Log Analytics Workspace Properties blade (/subscriptions/xxx-xxx/resourceGroups/xxx/providers/Microsoft.OperationalInsights/workspaces/xxx)
dataCollectionRuleName - mandatory, define the DCR name
location - mandatory, define the Azure location for the resources (example: northeurope, eastus2)
LogicAppName - not mandatory, define the name for the Logic App, otherwise it will be named SplunkAlertAutomationLogicApp

Result
The script will create all supporting resources that are needed and will provide the webhook URL as an output. Use this URL to configure the trigger action in Splunk:
Picture: Instructions for configuring webhook alert action in Splunk
Once the webhook is configured on the Splunk side, any time an alert is raised it will trigger the webhook, which will initiate the Logic App resource on the Azure side responsible for parsing the data and sending it through the Logs Ingestion API to the destination table in Sentinel SIEM.
Picture: Workflow of the Logic App

Conclusion
Ingesting alert data from other solutions in your organization into Sentinel SIEM allows security teams to take advantage of unified security operations in Microsoft Defender - easier cross-correlation between various data sources, comprehensive threat intelligence, and the case management experience.

Automating Phishing Email Triage with Microsoft Security Copilot
This blog details automating phishing email triage using Azure Logic Apps, Azure Function Apps, and Microsoft Security Copilot. Deployable in under 10 minutes, this solution primarily analyzes email intent without relying on traditional indicators of compromise, accurately classifying benign/junk, suspicious, and phishing emails. Benefits include reducing manual workload, improved threat detection, and (optional) integration seamlessly with Microsoft Sentinel – enabling analysts to see Security Copilot analysis within the incident itself. Designed for flexibility and control, this Logic App is a customizable solution that can be self-deployed from GitHub. It helps automate phishing response at scale without requiring deep coding expertise, making it ideal for teams that prefer a more configurable approach and want to tailor workflows to their environment. The solution streamlines response and significantly reduces manual effort. Access the full solution on the Security Copilot Github: GitHub - UserReportedPhishing Solution. For teams looking for a more sophisticated, fully integrated experience, the Security Copilot Phishing Triage Agent represents the next generation of phishing response. Natively embedded in Microsoft Defender, the agent autonomously triages phishing incidents with minimal setup. It uses advanced LLM-based reasoning to resolve false alarms, enabling analysts to stay focused on real threats. The agent offers step-by-step decision transparency and continuously learns from user feedback. Read the official announcement here. Introduction: Phishing Challenges Continue to Evolve Phishing continues to evolve in both scale and sophistication, but a growing challenge for defenders isn't just stopping phishing, it’s scaling response. Thanks to tools like Outlook’s "Report Phishing" button and increased user awareness, organizations are now flooded with user-reported emails, many of which are ambiguous or benign. This has created a paradox: better detection by users has overwhelmed SOC teams, turning email triage into a manual, rotational task dreaded for its repetitiveness and time cost, often taking over 25 minutes per email to review. Our solution addresses that problem, by automating the triage of user-reported phishing through AI-driven intent analysis. It's not built to replace your secure email gateways or Microsoft Defender for Office 365; those tools have already done their job. This system assumes the email: Slipped past existing filters, Was suspicious enough for a user to escalate, Lacks typical IOCs like malicious domains or attachments. As a former attacker, I spent years crafting high-quality phishing emails to penetrate the defenses of major banks. Effective phishing doesn't rely on obvious IOCs like malicious domains, URLs, or attachments… the infrastructure often appears clean. The danger lies in the intent. This is where Security Copilot’s LLM-based reasoning is critical, analyzing structure, context, tone, and seasonal pretexts to determine whether an email is phishing, suspicious, spam, or legitimate. What makes this novel is that it's the first solution built specifically for the “last mile” of phishing defense, where human suspicion meets automation, and intent is the only signal left to analyze. It transforms noisy inboxes into structured intelligence and empowers analysts to focus only on what truly matters. 
Solution Overview: How the Logic App Solution Works (and Why It's Different) Core Components: Azure Logic Apps: Orchestrates the entire workflow, from ingestion to analysis, and 100% customizable. Azure Function Apps: Parses and normalizes email data for efficient AI consumption. Microsoft Security Copilot: Performs sophisticated AI-based phishing analysis by understanding email intent and tactics, rather than relying exclusively on predefined malicious indicators. Key Benefits: Rapid Analysis: Processes phishing alerts and, in minutes, delivers comprehensive reports that empower analysts to make faster, more informed triage decisions – compared to manual reviews that can take up to 30 minutes. And, unlike analysts, Security Copilot requires zero sleep! AI-driven Insights: LLM-based analysis is leveraged to generate clear explanations of classifications by assessing behavioral and contextual signals like urgency, seasonal threats, Business Email Compromise (BEC), subtle language clues, and otherwise sophisticated techniques. Most importantly, it identifies benign emails, which are often the bulk of reported emails. Detailed, Actionable Reports: Generates clear, human-readable HTML reports summarizing threats and recommendations for analyst review. Robust Attachment Parsing: Automatically examines attachments like PDFs and Excel documents for malicious content or contextual inconsistencies. Integrated with Microsoft Sentinel: Optional integration with Sentinel ensures central incident tracking and comprehensive threat management. Analysis is attached directly to the incident, saving analysts more time. Customization: Add, move, or replace any element of the Logic App or prompt to fit your specific workflows. Deployment Guide: Quick, Secure, and Reliable Setup The solution provides Azure Resource Manager (ARM) templates for rapid deployment: Prerequisites: Azure Subscription with Contributor access to a resource group. Microsoft Security Copilot enabled. Dedicated Office 365 shared mailbox (e.g., phishing@yourdomain.com) with Mailbox.Read.Shared permissions. (Optional) Microsoft Sentinel workspace. Refer to the up to date deployment instructions on the Security Copilot GitHub page. Technical Architecture & Workflow: The automated workflow operates as follows: Email Ingestion: Monitors the shared mailbox via Office 365 connector. Triggers on new email arrivals every 3 minutes. Assumes that the reported email has arrived as an attachment to a "carrier" email. Determine if the Email Came from Defender/Sentinel: If the email came from Defender, it would have a prepended subject of “Phishing”, if not, it takes the “False” branch. Change as necessary. Initial Email Processing: Exports raw email content from the shared mailbox. Determines if .msg or .eml attachments are in binary format and converts if necessary. Email Parsing via Azure Function App: Extracts data from email content and attachments (URLs, sender info, email body, etc.) and returns a JSON structure. Prepares clean JSON data for AI analysis. This step is required to "prep" the data for LLM analysis due to token limits. Click on the “Parse Email” block to see the output of the Function App for any troubleshooting. You'll also notice a number of JSON keys that are not used but provided for flexibility. Security Copilot Advanced AI Reasoning: Analyzes email content using a comprehensive prompt that evaluates behavioral and seasonal patterns, BEC indicators, attachment context, and social engineering signals. 
Scores cumulative risk based on structured heuristics without relying solely on known malicious indicators.
Returns validated JSON output (some customers are parsing this JSON and performing other actions).
This is where you would customize the prompt, should you need to add your own organizational context or tune the Logic App.
JSON Normalization & Error Handling:
A “normalization” Azure Function ensures output matches the expected JSON schema. Sometimes LLMs will stray from a strict output structure; this aims to solve that problem.
If you add or remove anything from the Parse Email code that alters the structure of the JSON, this and the next block will need to be updated to match your new structure.
Detailed HTML Reporting:
Generates a detailed HTML report summarizing AI findings, indicators, and recommended actions.
Reports are emailed directly to SOC team distribution lists or ticketing systems.
Optional Sentinel Integration:
Adds the reasoning and output from Security Copilot directly to the incident comments. This is the ideal location for output since the analyst is already in the security.microsoft.com portal.
It waits up to 15 minutes for logs to appear, in situations where the user reports before an incident is created.
The solution works pretty well out of the box but may require some tuning - give it a test. Here are some examples of the type of Security Copilot reasoning.
Benign email detection:
Example of phishing email detection:
More sophisticated phishing with subtle clues:

Enhanced Technical Details & Clarifications
Attachment Processing: When multiple email attachments are detected, the Logic App processes each binary-format email sequentially. If PDF or Excel attachments are detected, they are parsed for content and evaluated appropriately for content and intent.
Security Copilot Reliability: The Security Copilot Logic App API call uses an extensive retry policy (10 retries at 10-minute intervals) to ensure reliable AI analysis despite intermittent service latency. If you run out of SCUs in an hour, it will pause until they are refreshed and continue.
Sentinel Integration Reliability: Acknowledges inherent Sentinel logging delays (up to 15 minutes). Implements retry logic and explicit manual alerting for unmatched incidents, if the analysis runs before the incident is created.
Security Best Practices: Compare the Function App and Logic App to your company security policies to ensure compliance. Credentials, API keys, and sensitive details utilize Azure Managed Identities or secure API connections. No secrets are stored in plaintext. Azure Function Apps perform only safe parsing operations; attachments and content are never executed or opened insecurely.
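The incident matching itself is handled by the Logic App, but if a report arrives before an incident exists you can also check manually whether the reported message has an associated Defender alert yet. The query below is an illustrative sketch, not part of the deployed solution; the message ID is a placeholder for the NetworkMessageId of the reported email.

// Sketch: check whether a reported email has an associated Defender alert yet
let reportedMessageId = "00000000-0000-0000-0000-000000000000"; // placeholder
EmailEvents
| where NetworkMessageId == reportedMessageId
| project Timestamp, NetworkMessageId, SenderFromAddress, RecipientEmailAddress, Subject
| join kind=leftouter (
    AlertEvidence
    | where isnotempty(NetworkMessageId)
    | project NetworkMessageId, AlertId, Title
) on NetworkMessageId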
Using parameterized functions with KQL-based custom plugins in Microsoft Security Copilot

In this blog, I will walk through how you can build functions based on a Microsoft Sentinel Log Analytics workspace for use in custom KQL-based plugins for Security Copilot. The same approach can be used for Azure Data Explorer and Defender XDR, as long as you follow the platform-specific guidance; links to those steps are provided in the Additional Resources section at the end of this blog.

But first, it is helpful to clarify what parameterized functions are and why they are important in the context of Security Copilot KQL-based plugins. Parameterized functions accept input values (variables) such as lookback periods or entities, allowing you to dynamically alter parts of a query without rewriting the entire logic.

Parameterized functions matter for Security Copilot plugins because of:
Dynamic prompt completion: Security Copilot plugins often accept user input (e.g., usernames, time ranges, IPs). Parameterized functions allow these inputs to be consistently injected into KQL queries without rebuilding query logic.
Plugin reusability: By using parameters, a single function can serve multiple investigation scenarios (e.g., checking sign-ins, data access, or alerts for any user or timeframe) instead of hardcoding different versions.
Maintainability and modularity: Parameterized functions centralize query logic, making it easier to update or enhance without modifying every instance across the plugin spec. To modify the logic, just edit the function in Log Analytics, test it, and save it, without needing to change the plugin or re-upload it into Security Copilot. It also significantly reduces the need to keep the query portion of the YAML perfectly indented and tabbed, as the OpenAPI specification requires: you only need to worry about formatting a single line instead of several, potentially hundreds.
Validation: Separating query logic from input parameters improves query reliability by avoiding malformed queries. No matter what the input is, it is treated as a value, not as part of the query logic.
Plugin spec mapping: OpenAPI-based Security Copilot plugins can map user-provided inputs directly to function parameters, making the interaction between user intent and query execution seamless.

Practical example
In this case, we have a 139-line KQL query that we will reduce to exactly one line in the KQL plugin; in other cases, the reduction could be even greater. Without a function, the entire query would have to form part of the plugin. A minimal sketch of the pattern follows the note below, after which the full query is shown.

Note: The rest of this blog assumes you are familiar with KQL custom plugins: how they work and how to upload them into Security Copilot.
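Before the full query, here is a minimal sketch of the pattern. The function name (FailedSignIns), its parameters, and the SigninLogs query body are illustrative and not part of the example that follows; once the query is saved in Log Analytics as a function with parameters UserUpn (string) and lookback (timespan), the plugin only needs to carry the one-line invocation, e.g. FailedSignIns('user@contoso.com', 7d).

```kusto
// Body of a hypothetical Log Analytics function saved as "FailedSignIns".
// Parameters (defined in the function UI): UserUpn:string, lookback:timespan
SigninLogs
| where TimeGenerated > ago(lookback)          // the lookback parameter drives the time filter
| where UserPrincipalName =~ UserUpn           // the UserUpn parameter scopes the query to one user
| where ResultType != "0"                      // a non-zero ResultType indicates a failed sign-in
| summarize FailedCount = count() by UserPrincipalName, bin(TimeGenerated, 1h)
```

The plugin's OpenAPI spec then maps the user-supplied username and lookback straight onto those two parameters, which is exactly what the larger example below does with User_Dept and lookback.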
CloudAppEvents | where RawEventData.TargetDomain has_any ( 'grok.com', 'x.ai', 'mistral.ai', 'cohere.ai', 'perplexity.ai', 'huggingface.co', 'adventureai.gg', 'ai.google/discover/palm2', 'ai.meta.com/llama', 'ai2006.io', 'aibuddy.chat', 'aidungeon.io', 'aigcdeep.com', 'ai-ghostwriter.com', 'aiisajoke.com', 'ailessonplan.com', 'aipoemgenerator.org', 'aissistify.com', 'ai-writer.com', 'aiwritingpal.com', 'akeeva.co', 'aleph-alpha.com/luminous', 'alphacode.deepmind.com', 'analogenie.com', 'anthropic.com/index/claude-2', 'anthropic.com/index/introducing-claude', 'anyword.com', 'app.getmerlin.in', 'app.inferkit.com', 'app.longshot.ai', 'app.neuro-flash.com', 'applaime.com', 'articlefiesta.com', 'articleforge.com', 'askbrian.ai', 'aws.amazon.com/bedrock/titan', 'azure.microsoft.com/en-us/products/ai-services/openai-service', 'bard.google.com', 'beacons.ai/linea_builds', 'bearly.ai', 'beatoven.ai', 'beautiful.ai', 'beewriter.com', 'bettersynonyms.com', 'blenderbot.ai', 'bomml.ai', 'bots.miku.gg', 'browsegpt.ai', 'bulkgpt.ai', 'buster.ai', 'censusgpt.com', 'chai-research.com', 'character.ai', 'charley.ai', 'charshift.com', 'chat.lmsys.org', 'chat.mymap.ai', 'chatbase.co', 'chatbotgen.com', 'chatgpt.com', 'chatgptdemo.net', 'chatgptduo.com', 'chatgptspanish.org', 'chatpdf.com', 'chattab.app', 'claid.ai', 'claralabs.com', 'claude.ai/login', 'clipdrop.co/stable-diffusion', 'cmdj.app', 'codesnippets.ai', 'cohere.com', 'cohesive.so', 'compose.ai', 'contentbot.ai', 'contentvillain.com', 'copy.ai', 'copymatic.ai', 'copymonkey.ai', 'copysmith.ai', 'copyter.com', 'coursebox.ai', 'coverler.com', 'craftly.ai', 'crammer.app', 'creaitor.ai', 'dante-ai.com', 'databricks.com', 'deepai.org', 'deep-image.ai', 'deepreview.eu', 'descrii.tech', 'designs.ai', 'docgpt.ai', 'dreamily.ai', 'editgpt.app', 'edwardbot.com', 'eilla.ai', 'elai.io', 'elephas.app', 'eleuther.ai', 'essayailab.com', 'essay-builder.ai', 'essaygrader.ai', 'essaypal.ai', 'falconllm.tii.ae', 'finechat.ai', 'finito.ai', 'fireflies.ai', 'firefly.adobe.com', 'firetexts.co', 'flowgpt.com', 'flowrite.com', 'forethought.ai', 'formwise.ai', 'frase.io', 'freedomgpt.com', 'gajix.com', 'gemini.google.com', 'genei.io', 'generatorxyz.com', 'getchunky.io', 'getgptapi.com', 'getliner.com', 'getsmartgpt.com', 'getvoila.ai', 'gista.co', 'github.com/features/copilot', 'giti.ai', 'gizzmo.ai', 'glasp.co', 'gliglish.com', 'godinabox.co', 'gozen.io', 'gpt.h2o.ai', 'gpt3demo.com', 'gpt4all.io', 'gpt-4chan+)', 'gpt6.ai', 'gptassistant.app', 'gptfy.co', 'gptgame.app', 'gptgo.ai', 'gptkit.ai', 'gpt-persona.com', 'gpt-ppt.neftup.app', 'gptzero.me', 'grammarly.com', 'hal9.com', 'headlime.com', 'heimdallapp.org', 'helperai.info', 'heygen.com', 'heygpt.chat', 'hippocraticai.com', 'huggingface.co/spaces/tiiuae/falcon-180b-demo', 'humanpal.io', 'hypotenuse.ai', 'ichatwithgpt.com', 'ideasai.com', 'ingestai.io', 'inkforall.com', 'inputai.com/chat/gpt-4', 'instantanswers.xyz', 'instatext.io', 'iris.ai', 'jasper.ai', 'jigso.io', 'kafkai.com', 'kibo.vercel.app', 'kloud.chat', 'koala.sh', 'krater.ai', 'lamini.ai', 'langchain.com', 'laragpt.com', 'learn.xyz', 'learnitive.com', 'learnt.ai', 'letsenhance.io', 'letsrevive.app', 'lexalytics.com', 'lgresearch.ai', 'linke.ai', 'localbot.ai', 'luis.ai', 'lumen5.com', 'machinetranslation.com', 'magicstudio.com', 'magisto.com', 'mailshake.com/ai-email-writer', 'markcopy.ai', 'meetmaya.world', 'merlin.foyer.work', 'mieux.ai', 'mightygpt.com', 'mosaicml.com', 'murf.ai', 'myaiteam.com', 'mygptwizard.com', 'narakeet.com', 'nat.dev', 'nbox.ai', 
'netus.ai', 'neural.love', 'neuraltext.com', 'newswriter.ai', 'nextbrain.ai', 'noluai.com', 'notion.so', 'novelai.net', 'numind.ai', 'ocoya.com', 'ollama.ai', 'openai.com', 'ora.ai', 'otterwriter.com', 'outwrite.com', 'pagelines.com', 'parallelgpt.ai', 'peppercontent.io', 'perplexity.ai', 'personal.ai', 'phind.com', 'phrasee.co', 'play.ht', 'poe.com', 'predis.ai', 'premai.io', 'preppally.com', 'presentationgpt.com', 'privatellm.app', 'projectdecember.net', 'promptclub.ai', 'promptfolder.com', 'promptitude.io', 'qopywriter.ai', 'quickchat.ai/emerson', 'quillbot.com', 'rawshorts.com', 'read.ai', 'rebecc.ai', 'refraction.dev', 'regem.in/ai-writer', 'regie.ai', 'regisai.com', 'relevanceai.com', 'replika.com', 'replit.com', 'resemble.ai', 'resumerevival.xyz', 'riku.ai', 'rizzai.com', 'roamaround.app', 'rovioai.com', 'rytr.me', 'saga.so', 'sapling.ai', 'scribbyo.com', 'seowriting.ai', 'shakespearetoolbar.com', 'shortlyai.com', 'simpleshow.com', 'sitegpt.ai', 'smartwriter.ai', 'sonantic.io', 'soofy.io', 'soundful.com', 'speechify.com', 'splice.com', 'stability.ai', 'stableaudio.com', 'starryai.com', 'stealthgpt.ai', 'steve.ai', 'stork.ai', 'storyd.ai', 'storyscapeai.app', 'storytailor.ai', 'streamlit.io/generative-ai', 'summari.com', 'synesthesia.io', 'tabnine.com', 'talkai.info', 'talkpal.ai', 'talktowalle.com', 'team-gpt.com', 'tethered.dev', 'texta.ai', 'textcortex.com', 'textsynth.com', 'thirdai.com/pocketllm', 'threadcreator.com', 'thundercontent.com', 'tldrthis.com', 'tome.app', 'toolsaday.com/writing/text-genie', 'to-teach.ai', 'tutorai.me', 'tweetyai.com', 'twoslash.ai', 'typeright.com', 'typli.ai', 'uminal.com', 'unbounce.com/product/smart-copy', 'uniglobalcareers.com/cv-generator', 'usechat.ai', 'usemano.com', 'videomuse.app', 'vidext.app', 'virtualghostwriter.com', 'voicemod.net', 'warmer.ai', 'webllm.mlc.ai', 'wellsaidlabs.com', 'wepik.com', 'we-spots.com', 'wordplay.ai', 'wordtune.com', 'workflos.ai', 'woxo.tech', 'wpaibot.com', 'writecream.com', 'writefull.com', 'writegpt.ai', 'writeholo.com', 'writeme.ai', 'writer.com', 'writersbrew.app', 'writerx.co', 'writesonic.com', 'writesparkle.ai', 'writier.io', 'yarnit.app', 'zevbot.com', 'zomani.ai' ) | extend sit = parse_json(tostring(RawEventData.SensitiveInfoTypeData)) | mv-expand sit | summarize Event_Count = count() by tostring(sit.SensitiveInfoTypeName), CountryCode, City, UserId = tostring(RawEventData.UserId), TargetDomain = tostring(RawEventData.TargetDomain), ActionType = tostring(RawEventData.ActionType), IPAddress = tostring(RawEventData.IPAddress), DeviceType = tostring(RawEventData.DeviceType), FileName = tostring(RawEventData.FileName), TimeBin = bin(TimeGenerated, 1h) | extend SensitivityScore = case(tostring(sit_SensitiveInfoTypeName) in~ ("U.S. 
Social Security Number (SSN)", "Credit Card Number", "EU Tax Identification Number (TIN)","Amazon S3 Client Secret Access Key","All Credential Types"), 90, tostring(sit_SensitiveInfoTypeName) in~ ("All Full names"), 40, tostring(sit_SensitiveInfoTypeName) in~ ("Project Obsidian", "Phone Number"), 70, tostring(sit_SensitiveInfoTypeName) in~ ("IP"), 50,10 ) | join kind=leftouter ( IdentityInfo | where TimeGenerated > ago(lookback) | extend AccountUpn = tolower(AccountUPN) ) on $left.UserId == $right.AccountUpn | join kind=leftouter ( BehaviorAnalytics | where TimeGenerated > ago(lookback) | extend AccountUpn = tolower(UserPrincipalName) ) on $left.UserId == $right.AccountUpn //| where BlastRadius == "High" //| where RiskLevel == "High" | where Department == User_Dept | summarize arg_max(TimeGenerated, *) by sit_SensitiveInfoTypeName, CountryCode, City, UserId, TargetDomain, ActionType, IPAddress, DeviceType, FileName, TimeBin, Department, SensitivityScore | summarize sum(Event_Count) by sit_SensitiveInfoTypeName, CountryCode, City, UserId, Department, TargetDomain, ActionType, IPAddress, DeviceType, FileName, TimeBin, BlastRadius, RiskLevel, SourceDevice, SourceIPAddress, SensitivityScore With parameterized functions, follow these steps to simplify the plugin that will be built based on the query above Define the variable/parameters upfront in the query (BEFORE creating the parameters in the UI). This will put the query in a “temporary” unusable state because the parameters will cause syntax problems in this state. However, since the plan is to run the query as a function this is ok Create the parameters in the Log Analytics UI Give the function a name and define the parameters exactly as they show up in the query in step 1 above. In this example, we are defining two parameters: lookback – to store the lookback period to be passed to the time filter and User_Dept to the user’s department. 3. Test the query. Note the order of parameter definition in the UI. i.e. first the User_Dept THEN the lookback period. You can interchange them if you like but this will determine how you submit the query using the function. If the User_Dept parameter was defined first then it needs to come first when executing the function. See the below screenshot. Switching them will result in the wrong parameter being passed to the query and consequently 0 results will be returned. Effect of switched parameters: To edit the function, follow the steps below: Navigate to the Logs menu for your Log Analytics workspace then select the function icon Once satisfied with the query and function, build your spec file for the Security Copilot plugin. Note the parameter definition and usage in the sections highlighted in red below And that’s it, from 139 unwieldy KQL lines to one very manageable one! You are welcome 😊 Let’s now put it through its paces once uploaded into Security Copilot. We start by executing the plugin using its default settings via the direct skill invocation method. We see indeed that the prompt returns results based on the default values passed as parameters to the function: Next, we still use direct skill invocation, but this time specify our own parameters: Lastly, we test it out with a natural language prompt: tment Tip: The function does not execute successfully if the default summarize function is used without creating a variable i.e. If the summarize count() command is used in your query, it results in a system-defined output variable named count_. 
Conclusion
Leveraging parameterized functions within KQL-based custom plugins for Microsoft Security Copilot can significantly streamline your data querying and analysis capabilities. By encapsulating reusable logic, improving query efficiency, and ensuring maintainability, these functions provide an efficient approach to tapping into data stored across Microsoft Sentinel, Defender XDR, and Azure Data Explorer clusters. Start integrating parameterized functions into your KQL-based Security Copilot plugins today, and let us know your feedback.

Additional Resources
Using parameterized functions in Microsoft Defender XDR
Using parameterized functions with Azure Data Explorer
Functions in Azure Monitor log queries - Azure Monitor | Microsoft Learn
Kusto Query Language (KQL) plugins in Microsoft Security Copilot | Microsoft Learn
Harnessing the power of KQL Plugins for enhanced security insights with Copilot for Security | Microsoft Community Hub