security operations
From alert overload to decisive action: How Security Copilot agents are transforming security and IT
Security and IT teams operate in a constant stream of alerts, incidents, and investigations. As environments expand across identities, endpoints, cloud, and data, the challenge becomes clear: identifying real risk quickly enough to act. Security Copilot agents bring AI directly into the flow of work, helping teams understand risk with greater context, investigate threats more efficiently, and take action sooner. Security Copilot is now included with Microsoft 365 E5 and E7 licenses at no additional cost, so teams can start using agents right away.

Over the past year, organizations have used Security Copilot to triage alerts, surface real threats earlier, and move faster from investigation to action. At RSA Conference 2026, we are announcing new capabilities that reflect a continuous wave of innovation, evolving from built-in AI assistance and automated summaries to new agents that can analyze signals, investigate incidents, and execute security workflows.

Real-world impact: measurable results

Security Copilot agents help security and IT teams identify and respond to risk more effectively, and customers are seeing that impact in their day-to-day operations. At St. Luke's University Health Network, the Phishing Triage Agent in Microsoft Defender saves security analysts more than 200 hours every month by automatically triaging phishing alerts and surfacing those that actually matter. Independent randomized controlled studies reinforce these results: security professionals using the Phishing Triage Agent triaged alerts up to 78% faster, delivered 77% more accurate verdicts, and identified 6.5 times more malicious emails.

That same impact extends beyond the SOC into other critical areas of security and IT. A data security team at a large telecommunications organization used the Data Security Triage Agent in Microsoft Purview to triage more than 40,000 Data Loss Prevention (DLP) alerts in 90 days, surfacing the 10% most critical alerts that required investigation.
Identity teams are seeing significant improvements with the Conditional Access Optimization Agent in Microsoft Entra, which continuously analyzes access policies against Zero Trust baselines and recommends actions. In controlled productivity studies, identity admins completed policy-related tasks 43% faster and 48% more accurately when identifying configuration weaknesses. IT teams are also seeing impact with the Vulnerability Remediation Agent in Microsoft Intune, which continuously detects new vulnerabilities as threats emerge. As one CTO at a renewable energy and technology company shared, the agent is "dramatically changing the way we approach working with vulnerabilities in our environment. A two-week process is now a two-minute process, really huge number for us."

Across these scenarios, teams begin investigations with clearer context and a better understanding of what actually matters. Instead of piecing together signals across dozens of tools, they can focus on the highest-risk issues and move from investigation to action with confidence. As environments continue expanding across identities, endpoints, applications, and data, quickly connecting signals and understanding risk becomes essential.

New Security Copilot agents and capabilities announced at RSA Conference

Our innovation continues. Microsoft is introducing new Security Copilot agents and expanded capabilities designed to help organizations analyze complex security data, triage alerts more effectively, and strengthen security posture across identity, endpoint, cloud, and data environments.

New and updated Security Copilot agents built by Microsoft

Security Analyst Agent in Microsoft Defender

Security teams are often sitting on enormous volumes of security data, but turning that data into answers takes time. The Security Analyst Agent helps teams move from raw telemetry to real understanding much faster.
By performing deep, multi-step investigations across Microsoft Defender and Microsoft Sentinel telemetry, the agent can analyze up to roughly 100 MB of security data to uncover anomalies, hidden risks, and high-impact threats that might otherwise stay buried. Analysts can chat directly with the agent to ask questions, explore hypotheses, and dig deeper into findings. The results include transparent reasoning and supporting evidence, helping teams quickly understand what matters and move forward with confidence.

Security Alert Triage Agent in Microsoft Defender

One of the biggest challenges for SOC teams is deciding which alerts actually deserve attention. The Security Alert Triage Agent helps cut through that noise so analysts can focus on the threats that truly matter. Building on its existing phishing triage capabilities, the agent now extends autonomous triage to identity and cloud alerts. Each verdict includes clear, transparent reasoning so analysts can quickly understand the outcome and prioritize the alerts that matter most.

New capabilities for the Conditional Access Optimization Agent in Microsoft Entra

Identity environments are constantly evolving as organizations add new apps, users, and authentication methods. New capabilities in the Conditional Access Optimization Agent help identity teams identify and close critical policy gaps faster, with recommendations tailored to their organization's needs. The agent now delivers business-context-aware recommendations, supports phased rollout of new policies, enables automated least-privilege enforcement for supported third-party agent identities, and helps drive passkey adoption. Together, these capabilities help organizations continuously strengthen identity security while maintaining productivity.

New capabilities for the Data Security Posture Agent in Microsoft Purview

Sensitive data often moves through documents, emails, chats, and collaboration tools, which makes it easy for credentials or secrets to end up where they shouldn't be.
A new credential scanning capability in the Data Security Posture Agent helps data security teams proactively identify exposed credentials within their data environment. By analyzing data signals and access patterns, the agent surfaces potential credential exposure risks and helps teams quickly investigate and remediate them. This gives organizations better visibility into hidden data risks and strengthens overall protection of critical systems.

New capabilities for the Data Security Triage Agent in Microsoft Purview Insider Risk Management

Investigating insider risk alerts often requires piecing together signals from many different sources to understand what is really happening. The Data Security Triage Agent now introduces an advanced AI reasoning layer that helps security teams evaluate those signals more holistically. By performing deeper, multi-step analysis across behavioral signals from users, devices, and data activity, the agent can surface the incidents that truly require investigation while filtering out noise. The result is faster, more accurate investigations and greater confidence when responding to potential insider risks.

New capabilities for the Data Security Triage Agent in Microsoft Purview Data Loss Prevention

Custom Sensitive Information Types (SITs) are often difficult for analysts to interpret quickly because the underlying definitions and patterns lack clear context at triage time. This latest enhancement makes custom SITs easier for both the agent and analysts to understand in Data Loss Prevention alerts. Purview interprets custom SIT definitions, generates semantic descriptions of the data, and surfaces that context directly within the agent. This allows the agent to classify and prioritize alerts involving custom data more accurately, helping analysts quickly recognize real risk and respond appropriately.
New Security Copilot agents built by partners

To meet customers where they are across their existing security stack, the Security Copilot ecosystem continues to grow, with more than 70 partner-built agents available today in the Security Store, bringing additional signals and investigation capabilities into the platform. These agents include the following:

- Security Investigation Agent by Commvault: Correlates backup anomalies with identity and security signals across platforms such as Entra, CrowdStrike, Netskope, and Darktrace.
- MITRE Attack Coverage Insight Agent by Inspira: Evaluates analytic rule coverage, calculates ATT&CK coverage, identifies detection gaps, generates detection recommendations, and provides SOC detection maturity scoring.
- Endpoint Risk Insights Agent by Avanade: Provides endpoint risk insights by correlating signals across security telemetry.
- Identity Role Mining Agent by Invoke: Allows users to discover and analyze administrator roles in Microsoft Entra ID with ease and precision.
- Identity Threat Triage Agent by Silverfort: Correlates Silverfort's identity risk signals with Entra ID and Defender for Endpoint data in the Sentinel data lake to surface risky sign-ins, MFA abuse, suspicious processes, and anomalies.

Together, these partner agents extend Security Copilot's ability to connect signals across Microsoft and third-party security platforms, giving organizations broader visibility and stronger investigation capabilities across their security environment. To explore all new Security Copilot agents, visit the Microsoft Security Store.

New Security Copilot innovations that turn insight into action

Security Copilot continues to integrate more deeply into the tools security and IT teams already use every day. These capabilities bring AI directly into the environments where investigations happen, helping teams explore threats, understand context, and take action without switching between tools.
Security Copilot interactive chat experience in Microsoft Defender

Analysts can ask questions, explore investigative hypotheses, and follow threat activity across incidents, alerts, identities, devices, and IPs without leaving their investigation. Copilot understands the context of the page analysts are working on and grounds responses in the relevant signals already available in Defender. As analysts ask questions, Copilot can run investigative steps, gather additional evidence, and surface new insights. This allows teams to iterate quickly, validate assumptions, and dig deeper into threats while staying in the same workflow.

Secret Finder skill in Security Copilot is now generally available

Available in the Security Copilot standalone portal, the Secret Finder skill can be invoked to analyze unstructured content such as emails, chats, documents, and investigation notes to identify exposed credentials hidden in real-world workflows. Using agentic capabilities such as multi-step reasoning rather than simple pattern matching, it detects real, usable secrets and the systems they unlock, helping security teams quickly understand potential exposure and respond with confidence. Additional integrations and use cases are planned to expand how this capability can be used across security workflows.

Security Copilot trigger in Logic Apps

Building on how many organizations already use Logic Apps to automate security workflows, a new connector action for Security Copilot in Logic Apps flows allows teams to easily invoke partner-built agents and custom agents they create as part of repeatable workflows. This brings deeper AI-driven investigation, context, and decision support into tasks such as incident triage, threat intelligence analysis, and policy validation.

See Security Copilot in action at RSA Conference

Join us at RSA Conference to see the latest Security Copilot agents and capabilities in action.
Stop by the Microsoft booth to connect with the team, explore new innovations, and experience how agents are helping security and IT teams investigate threats, understand risk, and strengthen security posture.

Hear from Microsoft Security product leaders in these booth sessions

- March 23 | 5:15 PM | Empowering the SOC with assistive and autonomous AI, Yuval Derman
- March 24 | 3:00 PM | Security Copilot agents: Insight. Action. Impact., Lizzie Heinze and Donna Lee
- March 25 | 10:30 AM | Turning Data Risk into Action with Security Copilot Agents, Paige Johnson and Tanay Baldua
- March 26 | 12:00 PM | Defend identity autonomously with agentic AI in Microsoft Entra, Mitch Muro, Rahul Prakash, Nikhil Reddy

Join our deep dive session

March 24 | 8:30 AM | The Palace Hotel | Security Copilot in action: An agentic approach to modern security
Register here: Microsoft Security RSAC Events | Microsoft Corporate

Stop by the Microsoft booth for a hands-on experience

Test out the latest Security Copilot agents at the demo station and connect with our experts. Agentic AI Arena: Try a fun, gamified experience that shows how Security Copilot agents investigate threats, surface risk, and help security teams respond faster.

Start using Security Copilot in your daily workflows

If you have received access to Security Copilot as part of your Microsoft 365 E5 plan, we recommend the following steps to get started quickly:

- Sign up for the Security Copilot skilling series
- Review new agentic scenarios and developer capabilities in the Security Copilot Adoption Hub
- Learn what's included with your Microsoft 365 E5 plan in the documentation
- Request assistance from a Microsoft 365 FastTrack specialist to unlock the full value of Security Copilot

Introducing Secret Finder: Finding Real Credentials Where Traditional Tools Fail
Secret Finder is an AI-powered capability in Microsoft Security Copilot that detects leaked credentials in unstructured content, such as emails, chat logs, documents, and screenshots, where traditional pattern-matching tools struggle. It relies on a multi-step, multi-agent reasoning workflow rather than a single-pass detector. Detection, verification, and contextual analysis are handled by distinct reasoning stages, allowing Secret Finder to find real credentials without flooding users with false positives. Unlike regex-based scanners, Secret Finder uses reasoning to identify not just credentials but also the systems they unlock, helping security teams understand exposure and respond faster.

In benchmark testing on synthetic datasets, Secret Finder achieved 98.33% true credential detection with zero false alarms on realistic emails, chats, notes, and documents, while traditional regex scanners detected only about 40% of the same credentials. Secret Finder is now generally available in Security Copilot, supporting 20+ credential types with high precision and actionable context.

The Problem: Credentials Hide Where Traditional Tools Can't See

When security incidents happen, leaked credentials don't always appear in clean, predictable formats. They show up buried in email threads, pasted into Teams messages, embedded in Word documents, or captured in screenshots of logs and terminals. These are exactly the places where security teams spend the most time, and where traditional secret scanning tools fail. Most existing tools rely on regular expressions or simple pattern matching. This works reasonably well for structured environments like source code repositories, where credentials follow predictable formats. But in real-world incidents, secrets look different. A storage key might be split across multiple messages in an email thread. A credential could be reformatted, partially redacted, or embedded alongside explanatory text.
In these situations, pattern matching produces two painful outcomes: it misses real credentials because the format doesn't match a known rule, or it floods analysts with false positives that waste time. Security teams are left manually reviewing content, guessing which findings are real, and piecing together what systems might actually be at risk. In practice, this failure mode has a real human cost: security analysts end up reviewing thousands of alerts, manually inspecting email threads and chat logs, and trying to determine whether a suspicious string actually unlocks a storage account, API, or production service. Teams can spend days reconstructing context across messages and documents just to understand what a credential grants access to, slowing containment and increasing risk during active incidents. This is the gap Secret Finder was built to close.

The Solution: Secret Finder Brings Reasoning to Secret Detection

Secret Finder approaches secret detection as a reasoning problem, not a string-matching exercise. Instead of asking "does this text match a pattern?", it asks human-like questions: Is this text describing a credential or access mechanism? Does the value look real and usable? What system or resource could this access?

This shift is subtle but powerful. Secret Finder doesn't just detect credentials; it connects them to doors: the specific targets those credentials unlock, such as API endpoints, storage accounts, applications, or services. This is critical for triage. Instead of stopping at "this looks like a credential," Secret Finder tells analysts what that credential actually opens. Without context, a credential triggers manual follow-up. When it's linked to a specific target, analysts can immediately assess impact and act. By understanding messy, real-world content the way a human investigator would, Secret Finder delivers findings that security teams can trust and act on immediately.
It's designed specifically for the unstructured, noisy environments where incidents actually unfold.

Why Secret Finder Outperforms Traditional Pattern Matching

Traditional secret scanners are built for clean data. Secret Finder is built for reality.

Traditional tools struggle when:
- Credentials appear in natural language descriptions rather than code
- Context determines whether a string is sensitive or benign
- Credentials are incomplete, malformed, or partially redacted

Secret Finder excels because it:
- Reasons through context, understanding surrounding text to identify what's truly sensitive
- Detects credentials and their associated resources together, providing the "what" and the "where" in a single pass
- Handles noisy, unstructured inputs like emails, chat logs, and documents
- Assigns confidence scores to help teams prioritize findings and reduce alert fatigue

What Secret Finder Can Do Today

Secret Finder is now generally available in Microsoft Security Copilot, with capabilities shaped directly by real security workflows across incident response, red teaming, and SOC operations. It detects over 20 major credential categories, spanning cloud provider credentials like Azure Storage keys and AWS access keys, authentication credentials including Microsoft Entra passwords and OAuth tokens, database connection strings, SSH private keys, API keys, and generic secrets that don't fit predefined patterns. This broad coverage means analysts can scan investigation artifacts without worrying whether the secret type is supported.

What makes Secret Finder particularly effective is where it works: email threads where credentials are discussed across multiple messages; Teams chats where credentials are pasted quickly during troubleshooting; Word documents and internal wikis where credentials are documented for operational handoffs; and incident reports and post-mortem notes written under pressure.
These are the environments where traditional pattern-matching tools fail, and where Secret Finder delivers the most value. In benchmark evaluations, Secret Finder achieved 100% recall with 0% false positives on synthetic datasets containing embedded Azure Storage credentials, compared to 40% recall from traditional regex-based tools such as CredScan. In more complex scenarios involving multiple credential types and noisy email content, Secret Finder maintained 98.33% recall with 0% false positives. These results were observed on synthetically generated evaluation datasets spanning emails, chats, notes, and documents, designed to reflect how engineers communicate and how credentials may be inadvertently shared in real-world workflows.

Scenario | Precision | Recall
Single credential type | 100% | 100%
Complex, multiple credential types | 100% | 98.33%

Secret Finder is currently integrated into Security Copilot, actively supporting incident response workflows, and working toward deeper integrations with developer platforms such as GitHub to bring contextual secret detection to source code analysis at scale.

Using Secret Finder in Security Copilot

Secret Finder is available as a skill in Microsoft Security Copilot, making credential detection a seamless part of analyst workflows. How to use Secret Finder:

1. Enable the Secret Finder skill in Security Copilot via "Manage Sources" → "Manage Plugins" (Figure 1)
2. Select "FindSecretInText" from the promptbook (Figure 2)
3. Submit unstructured content directly in the Copilot prompt: paste the text blob that might contain credentials
4. Secret Finder analyzes the content using its multi-agent workflow, detecting credentials and associated doors
5. Review actionable findings with contextual details

Figure 1. Enabling the Secret Finder skill in Microsoft Security Copilot. (Due to recent naming changes, users might see "Agentic secret finder" in Security Copilot; the naming changes will be reflected in a few weeks.)

Figure 2.
Selecting the FindSecretInText prompt, which invokes Secret Finder's multi-step credential detection and verification workflow.

Figure 3. Submitting a text blob containing embedded credentials for analysis (example is synthetic).

Figure 4. Secret Finder output with detected credentials and associated doors (example credentials and associated doors are synthetic).

What's Next for Secret Finder

Secret Finder is a living capability. Over the next six months, we are working toward expanding coverage and deepening integrations:

- Exploring integrations with GitHub to reduce false positives in secret scanning for code repositories
- Optimizing for large-scale analysis to handle enterprise-wide scans efficiently with reduced latency
- Exploring graph-based risk modeling to map relationships between credentials, services, and attack paths

Our long-term vision goes beyond detection: we want to help security teams understand how credentials are used, what risks exist if they're exposed, and what the impact of rotation or revocation would be. By moving from "what's leaked" to "what does it mean," Secret Finder will enable smarter prioritization, faster response, and more confident decision-making.

Acknowledgments

Secret Finder has been a cross-team effort over the past year, evolving from early research and prototyping through private preview, public preview, and now general availability. This milestone reflects contributions across many phases, from initial system design and technical direction to evaluation, product integration, and deployment at scale. Contributors include Mariko Wakabayashi, who led the effort from early research through to production, and Zixiao Chen and Avy Challa, who drove GA improvements and brought Secret Finder to production readiness.
We also appreciate Tony Twum-Barimah, Malachi Jones, and the Security Copilot team, including Austin Trapp and Vinod Jagannathan, for their technical and product support throughout the process, as well as Christian Rudnick and Helen Chang for guiding us through the responsible AI reviews before launch. Finally, a huge thanks to the incident responders and security researchers who shared valuable insights along the way. Secret Finder wouldn't have been possible without their work and feedback.

How to Become a Microsoft Security Copilot Ninja: The Complete Level 400 Training
Learn how to become a Microsoft Security Copilot (Copilot) Ninja! This blog will walk you through the resources you'll need to master and make best use of Microsoft's Security Copilot product!

Using parameterized functions with KQL-based custom plugins in Microsoft Security Copilot
In this blog, I will walk through how you can build functions based on a Microsoft Sentinel Log Analytics workspace for use in custom KQL-based plugins for Security Copilot. The same approach can be used for Azure Data Explorer and Defender XDR, so long as you follow the specific guidance for either platform. A link to those steps is provided in the Additional Resources section at the end of this blog. But first, it's helpful to clarify what parameterized functions are and why they are important in the context of Security Copilot KQL-based plugins.

Parameterized functions accept input details (variables), such as lookback periods or entities, allowing you to dynamically alter parts of a query without rewriting the entire logic.

Parameterized functions are important in the context of Security Copilot plugins because of:

- Dynamic prompt completion: Security Copilot plugins often accept user input (e.g., usernames, time ranges, IPs). Parameterized functions allow these inputs to be consistently injected into KQL queries without rebuilding query logic.
- Plugin reusability: By using parameters, a single function can serve multiple investigation scenarios (e.g., checking sign-ins, data access, or alerts for any user or timeframe) instead of hardcoding different versions.
- Maintainability and modularity: Parameterized functions centralize query logic, making it easier to update or enhance without modifying every instance across the plugin spec. To modify the logic, just edit the function in Log Analytics, test it, then save it, without needing to change the plugin at all or re-upload it into Security Copilot. This also significantly reduces the need to ensure that the query part of the YAML is perfectly indented and tabbed as required by the OpenAPI specification; you only need to worry about formatting a single line versus several, potentially hundreds.
- Validation: Separating query logic from input parameters improves query reliability by avoiding the possibility of malformed queries. No matter what the input is, it's treated as a value, not as part of the query logic.
- Plugin spec mapping: OpenAPI-based Security Copilot plugins can map user-provided inputs directly to function parameters, making the interaction between user intent and query execution seamless.

Practical example

In this case, we have a 139-line KQL query that we will reduce to exactly one line in the KQL plugin. In other cases, this number could be even higher. Without using functions, this entire query would have to form part of the plugin.

Note: The rest of this blog assumes you are familiar with KQL custom plugins: how they work and how to upload them into Security Copilot.

CloudAppEvents
| where RawEventData.TargetDomain has_any (
'grok.com', 'x.ai', 'mistral.ai', 'cohere.ai', 'perplexity.ai', 'huggingface.co', 'adventureai.gg', 'ai.google/discover/palm2', 'ai.meta.com/llama', 'ai2006.io', 'aibuddy.chat', 'aidungeon.io', 'aigcdeep.com', 'ai-ghostwriter.com', 'aiisajoke.com', 'ailessonplan.com', 'aipoemgenerator.org', 'aissistify.com', 'ai-writer.com', 'aiwritingpal.com', 'akeeva.co', 'aleph-alpha.com/luminous', 'alphacode.deepmind.com', 'analogenie.com', 'anthropic.com/index/claude-2', 'anthropic.com/index/introducing-claude', 'anyword.com', 'app.getmerlin.in', 'app.inferkit.com', 'app.longshot.ai', 'app.neuro-flash.com', 'applaime.com', 'articlefiesta.com', 'articleforge.com', 'askbrian.ai', 'aws.amazon.com/bedrock/titan', 'azure.microsoft.com/en-us/products/ai-services/openai-service', 'bard.google.com', 'beacons.ai/linea_builds', 'bearly.ai', 'beatoven.ai', 'beautiful.ai', 'beewriter.com', 'bettersynonyms.com', 'blenderbot.ai', 'bomml.ai', 'bots.miku.gg', 'browsegpt.ai', 'bulkgpt.ai', 'buster.ai', 'censusgpt.com', 'chai-research.com', 'character.ai', 'charley.ai', 'charshift.com', 'chat.lmsys.org', 'chat.mymap.ai', 'chatbase.co',
'chatbotgen.com', 'chatgpt.com', 'chatgptdemo.net', 'chatgptduo.com', 'chatgptspanish.org', 'chatpdf.com', 'chattab.app', 'claid.ai', 'claralabs.com', 'claude.ai/login', 'clipdrop.co/stable-diffusion', 'cmdj.app', 'codesnippets.ai', 'cohere.com', 'cohesive.so', 'compose.ai', 'contentbot.ai', 'contentvillain.com', 'copy.ai', 'copymatic.ai', 'copymonkey.ai', 'copysmith.ai', 'copyter.com', 'coursebox.ai', 'coverler.com', 'craftly.ai', 'crammer.app', 'creaitor.ai', 'dante-ai.com', 'databricks.com', 'deepai.org', 'deep-image.ai', 'deepreview.eu', 'descrii.tech', 'designs.ai', 'docgpt.ai', 'dreamily.ai', 'editgpt.app', 'edwardbot.com', 'eilla.ai', 'elai.io', 'elephas.app', 'eleuther.ai', 'essayailab.com', 'essay-builder.ai', 'essaygrader.ai', 'essaypal.ai', 'falconllm.tii.ae', 'finechat.ai', 'finito.ai', 'fireflies.ai', 'firefly.adobe.com', 'firetexts.co', 'flowgpt.com', 'flowrite.com', 'forethought.ai', 'formwise.ai', 'frase.io', 'freedomgpt.com', 'gajix.com', 'gemini.google.com', 'genei.io', 'generatorxyz.com', 'getchunky.io', 'getgptapi.com', 'getliner.com', 'getsmartgpt.com', 'getvoila.ai', 'gista.co', 'github.com/features/copilot', 'giti.ai', 'gizzmo.ai', 'glasp.co', 'gliglish.com', 'godinabox.co', 'gozen.io', 'gpt.h2o.ai', 'gpt3demo.com', 'gpt4all.io', 'gpt-4chan+)', 'gpt6.ai', 'gptassistant.app', 'gptfy.co', 'gptgame.app', 'gptgo.ai', 'gptkit.ai', 'gpt-persona.com', 'gpt-ppt.neftup.app', 'gptzero.me', 'grammarly.com', 'hal9.com', 'headlime.com', 'heimdallapp.org', 'helperai.info', 'heygen.com', 'heygpt.chat', 'hippocraticai.com', 'huggingface.co/spaces/tiiuae/falcon-180b-demo', 'humanpal.io', 'hypotenuse.ai', 'ichatwithgpt.com', 'ideasai.com', 'ingestai.io', 'inkforall.com', 'inputai.com/chat/gpt-4', 'instantanswers.xyz', 'instatext.io', 'iris.ai', 'jasper.ai', 'jigso.io', 'kafkai.com', 'kibo.vercel.app', 'kloud.chat', 'koala.sh', 'krater.ai', 'lamini.ai', 'langchain.com', 'laragpt.com', 'learn.xyz', 'learnitive.com', 'learnt.ai', 'letsenhance.io', 
'letsrevive.app', 'lexalytics.com', 'lgresearch.ai', 'linke.ai', 'localbot.ai', 'luis.ai', 'lumen5.com', 'machinetranslation.com', 'magicstudio.com', 'magisto.com', 'mailshake.com/ai-email-writer', 'markcopy.ai', 'meetmaya.world', 'merlin.foyer.work', 'mieux.ai', 'mightygpt.com', 'mosaicml.com', 'murf.ai', 'myaiteam.com', 'mygptwizard.com', 'narakeet.com', 'nat.dev', 'nbox.ai', 'netus.ai', 'neural.love', 'neuraltext.com', 'newswriter.ai', 'nextbrain.ai', 'noluai.com', 'notion.so', 'novelai.net', 'numind.ai', 'ocoya.com', 'ollama.ai', 'openai.com', 'ora.ai', 'otterwriter.com', 'outwrite.com', 'pagelines.com', 'parallelgpt.ai', 'peppercontent.io', 'perplexity.ai', 'personal.ai', 'phind.com', 'phrasee.co', 'play.ht', 'poe.com', 'predis.ai', 'premai.io', 'preppally.com', 'presentationgpt.com', 'privatellm.app', 'projectdecember.net', 'promptclub.ai', 'promptfolder.com', 'promptitude.io', 'qopywriter.ai', 'quickchat.ai/emerson', 'quillbot.com', 'rawshorts.com', 'read.ai', 'rebecc.ai', 'refraction.dev', 'regem.in/ai-writer', 'regie.ai', 'regisai.com', 'relevanceai.com', 'replika.com', 'replit.com', 'resemble.ai', 'resumerevival.xyz', 'riku.ai', 'rizzai.com', 'roamaround.app', 'rovioai.com', 'rytr.me', 'saga.so', 'sapling.ai', 'scribbyo.com', 'seowriting.ai', 'shakespearetoolbar.com', 'shortlyai.com', 'simpleshow.com', 'sitegpt.ai', 'smartwriter.ai', 'sonantic.io', 'soofy.io', 'soundful.com', 'speechify.com', 'splice.com', 'stability.ai', 'stableaudio.com', 'starryai.com', 'stealthgpt.ai', 'steve.ai', 'stork.ai', 'storyd.ai', 'storyscapeai.app', 'storytailor.ai', 'streamlit.io/generative-ai', 'summari.com', 'synesthesia.io', 'tabnine.com', 'talkai.info', 'talkpal.ai', 'talktowalle.com', 'team-gpt.com', 'tethered.dev', 'texta.ai', 'textcortex.com', 'textsynth.com', 'thirdai.com/pocketllm', 'threadcreator.com', 'thundercontent.com', 'tldrthis.com', 'tome.app', 'toolsaday.com/writing/text-genie', 'to-teach.ai', 'tutorai.me', 'tweetyai.com', 'twoslash.ai', 'typeright.com', 
'typli.ai', 'uminal.com', 'unbounce.com/product/smart-copy', 'uniglobalcareers.com/cv-generator', 'usechat.ai', 'usemano.com', 'videomuse.app', 'vidext.app', 'virtualghostwriter.com', 'voicemod.net', 'warmer.ai', 'webllm.mlc.ai', 'wellsaidlabs.com', 'wepik.com', 'we-spots.com', 'wordplay.ai', 'wordtune.com', 'workflos.ai', 'woxo.tech', 'wpaibot.com', 'writecream.com', 'writefull.com', 'writegpt.ai', 'writeholo.com', 'writeme.ai', 'writer.com', 'writersbrew.app', 'writerx.co', 'writesonic.com', 'writesparkle.ai', 'writier.io', 'yarnit.app', 'zevbot.com', 'zomani.ai' )
| extend sit = parse_json(tostring(RawEventData.SensitiveInfoTypeData))
| mv-expand sit
| summarize Event_Count = count() by tostring(sit.SensitiveInfoTypeName), CountryCode, City, UserId = tostring(RawEventData.UserId), TargetDomain = tostring(RawEventData.TargetDomain), ActionType = tostring(RawEventData.ActionType), IPAddress = tostring(RawEventData.IPAddress), DeviceType = tostring(RawEventData.DeviceType), FileName = tostring(RawEventData.FileName), TimeBin = bin(TimeGenerated, 1h)
| extend SensitivityScore = case(
    tostring(sit_SensitiveInfoTypeName) in~ ("U.S. Social Security Number (SSN)", "Credit Card Number", "EU Tax Identification Number (TIN)", "Amazon S3 Client Secret Access Key", "All Credential Types"), 90,
    tostring(sit_SensitiveInfoTypeName) in~ ("All Full names"), 40,
    tostring(sit_SensitiveInfoTypeName) in~ ("Project Obsidian", "Phone Number"), 70,
    tostring(sit_SensitiveInfoTypeName) in~ ("IP"), 50,
    10)
| join kind=leftouter (IdentityInfo | where TimeGenerated > ago(lookback) | extend AccountUpn = tolower(AccountUPN)) on $left.UserId == $right.AccountUpn
| join kind=leftouter (BehaviorAnalytics | where TimeGenerated > ago(lookback) | extend AccountUpn = tolower(UserPrincipalName)) on $left.UserId == $right.AccountUpn
//| where BlastRadius == "High"
//| where RiskLevel == "High"
| where Department == User_Dept
| summarize arg_max(TimeGenerated, *) by sit_SensitiveInfoTypeName, CountryCode, City, UserId, TargetDomain, ActionType, IPAddress, DeviceType, FileName, TimeBin, Department, SensitivityScore
| summarize sum(Event_Count) by sit_SensitiveInfoTypeName, CountryCode, City, UserId, Department, TargetDomain, ActionType, IPAddress, DeviceType, FileName, TimeBin, BlastRadius, RiskLevel, SourceDevice, SourceIPAddress, SensitivityScore

With parameterized functions, follow these steps to simplify the plugin that will be built based on the query above:

1. Define the variables/parameters upfront in the query (BEFORE creating the parameters in the UI). This puts the query in a temporarily unusable state, because the undefined parameters cause syntax errors. Since the plan is to run the query as a function, this is OK.
2. Create the parameters in the Log Analytics UI. Give the function a name and define the parameters exactly as they appear in the query in step 1. In this example, we define two parameters: lookback, which stores the lookback period passed to the time filter, and User_Dept, which stores the user's department.
3. Test the query.

Note the order of parameter definition in the UI, i.e. first User_Dept, THEN the lookback period. You can interchange them if you like, but the order determines how you submit parameters when executing the function: if User_Dept was defined first, it must come first when executing the function. See the below screenshot. Switching them results in the wrong parameter being passed to the query and, consequently, 0 results being returned.

Effect of switched parameters:

To edit the function, navigate to the Logs menu for your Log Analytics workspace, then select the function icon.

Once satisfied with the query and function, build your spec file for the Security Copilot plugin. Note the parameter definition and usage in the sections highlighted in red below.

And that's it: from 139 unwieldy KQL lines to one very manageable one! You are welcome 😊

Let's now put it through its paces once uploaded into Security Copilot. We start by executing the plugin using its default settings via the direct skill invocation method. We see that the prompt returns results based on the default values passed as parameters to the function:

Next, we still use direct skill invocation, but this time specify our own parameters:

Lastly, we test it out with a natural language prompt:

Tip: The function does not execute successfully if the default summarize output is used without creating a variable, i.e. if the summarize count() command is used in your query, it results in a system-defined output variable named count_. To bypass this issue, ensure you use a user-defined variable such as Event_Count, as shown in line 77 below.

Conclusion

In conclusion, leveraging parameterized functions within KQL-based custom plugins in Microsoft Security Copilot can significantly streamline your data querying and analysis capabilities.
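As a recap, the end state reduces to a saved function plus a one-line call. The function name and sample argument values below are illustrative:

```kusto
// The saved function wraps the full query above, which references the two
// parameters directly:
//   ... | where TimeGenerated > ago(lookback) ...
//   ... | where Department == User_Dept ...
// With User_Dept defined first in the UI, it is passed first on invocation:
SensitiveDataToAIApps("Sales", 30d)
```

Security Copilot's KQL skill then only needs to emit this single call, substituting the user-supplied department and lookback period.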
By encapsulating reusable logic, improving query efficiency, and ensuring maintainability, these functions provide an efficient approach for tapping into data stored across Microsoft Sentinel, Defender XDR and Azure Data Explorer clusters. Start integrating parameterized functions into your KQL-based Security Copilot plugins today and let us have your feedback.

Additional Resources

Using parameterized functions in Microsoft Defender XDR
Using parameterized functions with Azure Data Explorer
Functions in Azure Monitor log queries - Azure Monitor | Microsoft Learn
Kusto Query Language (KQL) plugins in Microsoft Security Copilot | Microsoft Learn
Harnessing the power of KQL Plugins for enhanced security insights with Copilot for Security | Microsoft Community Hub

Redefining Cyber Defence with Microsoft Security Exposure Management (MSEM) and Security Copilot
Introduction

Microsoft Security Exposure Management (MSEM) provides the Cyber Defense team with a unified, continuously updated view of asset exposure and relevant attack paths, and classifies these findings. While MSEM continuously creates and updates these findings, the Security Operations Center (SOC) Engineering team needs to access this data and interact with it as part of their proactive discovery exercises. Microsoft Security Copilot (SCP), on the other hand, acts as an always-ready, AI-powered copilot to the SOC Engineering team. Combined, the situational awareness from MSEM and the quick, consistent retrieval capabilities of SCP give SOC Engineers a natural-language front door into exposure insights and attack paths. This combination also makes MSEM content, and reasoning over that content, available in Security Copilot prompts and promptbooks, and allows its use in automation scenarios that leverage Security Copilot.

Traditionally, a SOC person needs to navigate to Microsoft Security Advanced Hunting, retrieve data related to assets with a certain level of exposure, and then build a plan for each asset to reduce its exposure. That plan needs to take into consideration the nature of the exposure, where the asset is hosted, and the characteristics of the asset, and it requires working knowledge of each impacted system. This approach:

Is time-consuming, especially given the learning curve of understanding each exposure before deciding on the best course of exposure reduction; and
Can result in undesired habits, like adopting a reactive rather than proactive approach, prioritizing only assets with a certain exposure risk level, or attending only to exposures that are already familiar to the person reviewing the list of exposures and attack paths.
Overview of Exposure Management

Microsoft Security Exposure Management is a security solution that provides a unified view of security posture across company assets and workloads. Security Exposure Management enriches asset information with security context that helps you to proactively manage attack surfaces, protect critical assets, and explore and mitigate exposure risk.

Who uses Security Exposure Management?

Security Exposure Management is aimed at:

Security and compliance admins responsible for maintaining and improving organizational security posture.
Security operations (SecOps) and partner teams who need visibility into data and workloads across organizational silos to effectively detect, investigate, and mitigate security threats.
Security architects responsible for solving systematic issues in overall security posture.
Chief Information Security Officers (CISOs) and security decision makers who need insights into organizational attack surfaces and exposure in order to understand security risk within organizational risk frameworks.

What can I do with Security Exposure Management?

With Security Exposure Management, you can:

Get a unified view across the organization
Manage and investigate attack surfaces
Discover and safeguard critical assets
Manage exposure
Connect your data

Reference links:

Overview
What is Microsoft Security Exposure Management (MSEM)?
What's new in MSEM

Get started
Start using MSEM
MSEM prerequisites
How to import data from external data connectors in MSEM

Concept
Learn about critical asset management in MSEM
Learn about attack surface management in MSEM
Learn about exposure insights in MSEM
Learn about attack paths in MSEM

How-To Guide
Review and classify critical assets in MSEM
Review security initiatives in MSEM
Investigate security metrics in MSEM
Review security recommendations in MSEM
Query the enterprise exposure graph MSEM
Explore with the attack surface map in MSEM
Review potential attack paths in MSEM
Integration and licensing for MSEM
Compare MSEM with Secure Score

Overview of Security Copilot plugins and skills

Microsoft Security Copilot is a generative AI-powered assistant designed to augment security operations by accelerating detection, investigation, and response. Its extensibility through plugins and skills enables organizations to tailor the platform to their unique environments, integrate diverse data sources, and automate complex workflows.

Plugin Architecture and Categories:

Security Copilot supports a growing ecosystem of plugins categorized into:

First-party plugins: Native integrations with Microsoft services such as Microsoft Sentinel, Defender XDR, Intune, Entra, Purview, and Defender for Cloud.
Third-party plugins: Integrations with external security platforms and ISVs, enabling broader telemetry and contextual enrichment.
Custom plugins: User-developed extensions using KQL, GPT, or API-based logic to address specific use cases or data sources.

Plugins act as grounding sources, providing context, verifying responses, and enabling Copilot to operate across embedded experiences or standalone sessions. Users can toggle plugins on/off, prioritize sources, and personalize settings (e.g., default Sentinel workspace) to streamline investigations.
Skills and Promptbooks

Skills in Security Copilot are modular capabilities that guide the AI in executing tasks such as incident triage, threat hunting, or policy analysis. These are often bundled into promptbooks, which are reusable, scenario-driven workflows that combine plugins, prompts, and logic to automate investigations or compliance checks. Security analysts can create, manage, and share promptbooks across tenants, enabling consistent execution of best practices. Promptbooks can be customized to include plugin-specific logic, such as querying Microsoft Graph API or running KQL-based detections.

Role-Based Access and Governance

Security Copilot enforces role-based access through Entra ID security groups:

Copilot Owners: Full access to manage plugins, promptbooks, and tenant-wide settings.
Copilot Contributors: Can create sessions and use promptbooks but have limited plugin publishing rights.

Each embedded experience may require additional service-specific roles (e.g., Sentinel Reader, Endpoint Security Manager) to access relevant data. Governance files and onboarding templates help teams align plugin usage with organizational policies.

Connecting Exposure Management with Security Copilot

There are multiple benefits of connecting MSEM with Security Copilot (as explained in the Introduction above). We wrote a plugin with two skills that harness Exposure Management insights within Security Copilot, ultimately surfacing the exposure of assets your organization hosts on a particular cloud platform and of assets belonging to a specific user.
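As a sketch of what a two-skill KQL plugin descriptor can look like, here is a minimal example following the documented Security Copilot KQL plugin YAML shape. The skill names, inputs, and query template are illustrative assumptions, not the published plugin; see the GitHub link that follows for the actual YAML:

```yaml
# Illustrative sketch only - skill names, inputs, and the KQL template are
# hypothetical; the published plugin on GitHub differs.
Descriptor:
  Name: MSEMExposureInsights
  DisplayName: MSEM Exposure Insights
  Description: Surfaces Exposure Management insights in Security Copilot.

SkillGroups:
  - Format: KQL
    Skills:
      - Name: GetExposedAssetsByCloudPlatform
        DisplayName: Get exposed assets by cloud platform
        Description: Lists assets hosted on a given cloud platform, filtered by exposure level.
        Inputs:
          - Name: cloud_platform
            Description: Hosting platform, e.g. Azure or AWS
            Required: true
          - Name: exposure_level
            Description: Exposure level to filter on (Low, Medium, High)
            Required: false
        Settings:
          Target: Defender
          # {{...}} placeholders are substituted with the skill inputs at run time
          Template: |-
            ExposureGraphNodes
            | where NodeProperties has '{{cloud_platform}}'
            | where NodeProperties has '{{exposure_level}}'
            | project NodeName, NodeLabel, Categories
```

The second skill (exposure by user) would follow the same pattern with a user principal name input.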
A high-level architecture of the connectivity looks like this:

The two skills of the plugin correspond to the following two use cases:

Obtain the exposure of assets hosted by your organization on a particular cloud platform
Obtain the exposure of assets belonging to a specific user

In each of the above use cases, you can also specify the exposure level for which you want to extract data.

Plugin Code (YAML): GitHub - Microsoft Security Exposure Management plugin for Security Copilot - YAML

Proof of Concept (screen video)

Conclusion

Here, we proposed an alternative approach that improves the SOC's efficiency and helps the organization reduce the time from exposure discovery to exposure reduction. The proposed approach allows the SOC person to retrieve assets that fit a certain profile, i.e. prompt Security Copilot to "List all assets hosted on Azure with Low Exposure Level". After all affected assets are retrieved, the user can then prompt Security Copilot to "For each asset, help me create a 7-day plan to reduce these exposures" and can conclude with the prompt "Create an Executive Report, start by explaining to a non-technical audience the risks associated with the identified exposures, then list all affected assets, along with a summary of the steps needed to reduce the exposures identified". These prompts can also be organized in a promptbook, further reducing the burden on the SOC person, and can be run on a regular schedule using automation, where the automation can later email the report to the intended audience or can be further extended to create relevant tickets in the IT Service Management system.

An additional approach to risk management is to keep an eye on highly targeted personas within the organization. With the proposed integration, a SOC person can prompt Security Copilot to find "What are the exposure risks associated with the devices owned by the Contoso person john.doe@contoso.com".
This helps the SOC person identify and remediate attack paths targeting devices used by highly targeted persons, where the SOC person can, within the same session, start digging deeper into finding any potential exploitation of these exposures, get recommendations on how to reduce these exposures, and draft an action plan.

From idea to Security Copilot agent: Create, customize, and deploy
This week at Microsoft Secure, we announced the next big step forward in agentic security. In addition to Microsoft and partner-built agents, you can now create your own Security Copilot agents, extending the growing ecosystem of agents that help teams automate workflows, close gaps, and drive stronger security and IT outcomes. Why it matters: no two environments are the same. Out-of-the-box agents give you powerful starting points, but your workflows are unique. With custom agents, you get the flexibility to design and deploy solutions that fit your organization.

Two ways to build: Your choice, your workflow

Security Copilot gives you options. Analysts can easily build with a no-code interface. Developers can stay in their preferred coding environment. Either way, you end up with a fully functional, testable, and deployable agent. For full documentation and detailed guidance on building agents, check out the Microsoft Security Copilot documentation. But now, let's walk through the key steps so you can get started building your own agent today.

Option 1: Build in Security Copilot, no coding required

Step 1: Create in natural language
Click 'Build' in the left nav, describe what you want your agent to do in plain language, and submit. Security Copilot will engage in a back-and-forth conversation to clarify and capture your intent so you start with precision.

Step 2: Auto-generate the configuration
Security Copilot instantly creates a starter setup, giving you:
An agent name and description
Clear instructions and input parameters
Recommended tools pulled from the catalog, including Microsoft, partner, and Sentinel MCP tools
This saves time and generates a strong foundation you can build on.

Step 3: Customize to fit your needs
Tailor the configuration to your needs; you can edit any part. Update instructions, swap tools, or add new ones from the tool catalog. If the right tool isn't available, you can create one in natural language or a form-based experience.
You're in full control of how your agent works.

Step 4: Keep YAML and no-code views aligned
Every change you make is automatically reflected in the underlying YAML code. This ensures consistency between the no-code visual and code views, so both analysts and developers can work with confidence. Toggle on 'view code' to see it live.

Step 5: Test and elevate with autotune instruction optimization
Run full end-to-end tests or test individual components to see how your agent performs. Security Copilot shows detailed outputs and a step-by-step activity map of the agent's dynamic plan, including the tools, inputs, and outputs. While you can test without it, turning on autotune instruction optimization delivers major advantages:
Refined instruction recommendations you can copy directly into your config
AI quality scoring on clarity, grounding, and detail to ensure your agent is effective before publishing
Faster iteration with confidence your agent is tuned for real-world use
Explore the activity graph tab to view a visual node map of the run, and click any node to see details of what happened at each step.

Step 6: Publish and share
When you're ready, publish the agent into your Security Copilot instance at either a user or workspace scope (depending on admin permissions). If you're a partner, you can also download the agent code, publish to the Microsoft Partner Center and contribute it to the Microsoft Security Store for broader visibility and adoption by customers.

Benefit: Build production-ready agents in minutes without writing a single line of code.

It's that easy to build an agent tailored to your unique workflows, and you are not limited to the Security Copilot portal. If you prefer a developer-friendly environment, you can build entirely in VS Code using GitHub Copilot and Microsoft Sentinel MCP tools.
You still get AI-powered guidance, YAML scaffolding, and testing support, along with rich context from Sentinel data and the full platform toolset, all while staying in the environment that works best for you.

Option 2: Build in VS Code using GitHub Copilot + Microsoft Sentinel MCP Tools

Step 1: Set up your development environment
Enable the Microsoft Sentinel MCP server in VS Code. This gives you direct access to the collection of Security Copilot agent creation MCP tools and integrates with GitHub Copilot for code generation, all while staying in your preferred workspace.

Step 2: Define agent behavior from natural language with platform context
Describe the agent you want to build in natural language. GitHub Copilot interprets your intent, selects the relevant MCP tools, finds relevant skills and tools in Security Copilot for your agent, and crafts the agent instructions. The agent YAML is generated and returned to you. Because your agent is built on Microsoft Security Copilot and Sentinel, it automatically leverages rich data and tooling across the platform for context-aware, more effective results.

Step 3: Iterate, customize and extend your agent
Modify instructions, add tools, or create new tools as needed. Use prompts to vibe-code your edits, or copy the YAML into the code editor and modify the agent YAML directly. GitHub Copilot keeps the chat and code in sync.

Step 4: Deploy to Security Copilot for testing
Once you're ready to test your agent YAML, prompt GitHub Copilot to deploy the agent to your user scope. Then head to the Security Copilot portal to test and optimize your agent with autotune instruction optimization. Take advantage of detailed outputs, activity maps, and AI scoring to refine instructions and ensure your agent performs effectively in real-world scenarios.
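As a side note on the environment setup in Step 1, enabling a remote MCP server in VS Code usually amounts to a small workspace configuration file. The sketch below is an assumption based on public preview documentation; the server endpoint URL in particular should be verified against the current Microsoft Sentinel MCP docs:

```json
{
  "servers": {
    "sentinel": {
      "type": "http",
      "url": "https://sentinel.microsoft.com/mcp/data-exploration"
    }
  }
}
```

Saved as `.vscode/mcp.json` in your workspace, this makes the Sentinel MCP tools available to GitHub Copilot after you authenticate.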
Step 5: Publish and share your agent
Once validated, publish the agent into your Security Copilot instance at either user or workspace scope (depending on admin permissions). Partners can also download the agent code, publish to the Microsoft Partner Center, and contribute it to the Microsoft Security Store for broader discoverability and adoption.

What you get: Full code-level control and the same AI-powered agent development experience while staying in your preferred workspace.

Whichever approach you choose, you can build, test, and deploy agents that fit your workflows and environment. Microsoft Security Copilot and Microsoft Sentinel give you the tools and advanced AI guidance to create agents that work for your organization.

Explore the Microsoft Security Store

Automate your workflows with pre-built solutions. The Microsoft Security Store gives you a central place to discover and deploy agents and SaaS solutions created by Microsoft and partners. Browse ready-to-use solutions, learn from proven approaches, and adapt them with your own customizations. It's the quickest way to expand your ecosystem of agents and accelerate impact.

More resources about the Security Store: What is Security Store? Microsoft Learn

Build, deploy, defend

Security Copilot puts the power of agentic AI directly in your hands. Start with ready-to-use agents from Microsoft and partners, or create custom agents designed specifically for your environment and workflows. These agents streamline decision-making, surface critical insights, and free your team to focus on strategic security initiatives, making operations faster, smarter, and more responsive. Join us at Microsoft Ignite, online or in-person, for hands-on demos and insights on how Security Copilot agents empower teams to act faster and protect better.
More resources on building Security Copilot agents:
Watch the Mechanics video to see agents in action: Security Copilot agents Mechanics video
For more detailed guidance on building agents, check out the Microsoft Security Copilot documentation

Special thanks to my co-authors, Namrata Puri (Principal PM, Security Copilot) and Sherie Pan (PM, Security Copilot), for their insights and contributions.

Agentic security your way: Build your own Security Copilot agents
Microsoft Security Copilot is redefining how security and IT teams operate. Today at Microsoft Secure, we're unveiling powerful updates that put genAI and agent-driven automation at the center of modern defense. In a world where threats move faster than ever, alerts pile up, and resources stay tight, Security Copilot delivers the competitive edge: contextual intelligence, a growing network of agents, and the flexibility to build your own. The announcements focus on three key areas: building your own Security Copilot agents for tailored workflows, expanding the agent ecosystem with new Microsoft and partner solutions, and improving agent quality and performance. These updates build on the agents first introduced in March while giving security and IT teams more flexibility and control. This is the blueprint for the next era of agentic defense, and it starts now.

Build your own Security Copilot agents, your way

While we already offer a growing catalog of ready-to-use agents built by Microsoft and partners, we know that no two environments are alike. That's why Security Copilot empowers you to create custom agents your way: whether you're an analyst with limited coding experience or a developer using your favorite platform, you can build agents that fit your needs.

Build agents in the Security Copilot portal

Users can now build agents with a simplified, no-code interface in the standalone Security Copilot experience. Simply describe the task or workflow in natural language, and Copilot automatically generates the agent code. You can edit components, add any additional tools, including Sentinel MCP tools from our rich tool catalog, test the agent, optimize its instructions, and publish directly to your tenant. Create dynamic, ready-to-use agents in minutes, without writing any code.
Build agents in a preferred MCP server-enabled development environment

Teams with experienced developers can also use natural language and vibe-coding to build agents in a preferred MCP server-enabled coding platform, such as VS Code using GitHub Copilot. By enabling the Sentinel MCP server, developers can access MCP tools to build, refine, and deploy custom agents directly within their workspace. This approach gives full control over code, tools, and deployment while keeping the process within familiar development platforms. These options empower both technical and non-technical teams to rapidly create, test, and deploy custom Security Copilot agents. Organizations can automate workflows faster, design agents for their unique needs, and improve security and IT operations across the board.

Discover new Security Copilot agents

Since Security Copilot agents were first introduced in March, we have delivered more than a dozen Microsoft and partner-developed agents that help organizations tackle real challenges in security and IT operations. Analysts using the Conditional Access Optimization Agent in Microsoft Entra have been able to quickly uncover policy gaps, closing an average of 26 gaps per customer in just one month, with 73% of early adopters acting on at least one recommendation. The Phishing Triage Agent in Microsoft Defender has allowed analysts to shift from reactive sifting to proactive resolution, reducing triage time by up to 78%. Read how St. Luke's University Health Network saves nearly 200 hours monthly in phishing alert triage and creates incident reports in minutes instead of hours.

The Phishing Triage Agent is a game changer. It's saving us nearly 200 hours monthly by autonomously handling and closing thousands of false positive alerts. - Krista Arndt, ACISO, St. Luke's University Health Network

We're continuing to build on this momentum with new agents designed to address additional security and IT scenarios.
The new Access Review Agent in Microsoft Entra tackles a common challenge: review fatigue that leads to approving access without real review. It analyzes ongoing reviews, flags anomalies or unusual access patterns, and delivers actionable guidance in a conversational interface. Reviewers can approve, revoke, or request more details right in Microsoft Teams, helping them focus on the riskiest access, make faster decisions, and strengthen compliance. With innovations like this, we're not just reducing fatigue; we're redefining how access governance is done, setting the standard for security agents that adapt to the way people work. Learn more about the Access Review Agent here.

And, with the growing range of agentic use cases, the new Microsoft Security Store is your one-stop shop to discover, purchase, and deploy Security Copilot agents built by Microsoft and trusted partners. Find solutions aligned for SOC, IT, privacy, compliance, and governance teams, all in one place. By uniting discovery, deployment, and publishing in a single experience, Security Store powers a thriving ecosystem that gives your team a unique advantage: access to an ever-expanding range of agent capabilities that evolve as fast as the challenges they face. In addition to helping customers find the right solutions, Security Store also enables partners to bring their innovations to market. Partners can build and publish Security Copilot agents and SaaS solutions to grow their business and reach new customers. Today, we are announcing 30 new partner-built agents as well as 50 partner SaaS solutions in the Security Store.

The launch of 30 new partner-built agents brings forward solutions like:

A Forensic Agent by glueckkanja AG delivers deep-dive analysis of Defender XDR incidents to accelerate investigations, while their Privileged Admin Watchdog Agent helps enforce zero standing privilege principles by eliminating persistent admin identities.
These innovations, along with their other 6 agents in the Security Store today, demonstrate how glueckkanja AG is empowering organizations to tackle a wide range of security and IT challenges.

Three agents from adaQuest automate investigation and response so security teams can focus on what matters: a Ransomware Kill Chain Investigator Agent automates ransomware triage, an Entity Guard Investigator Agent investigates Defender incidents, and an Admin Guard Insight Agent analyzes administrative activity, detects anomalies, and evaluates risk exposure and compliance, offering actionable insights to improve administrative security posture.

An Identity Workload ID Agent by Invoke empowers identity administrators and security teams to manage and secure Workload Identities in Microsoft Entra, helping to reduce risk, strengthen compliance, and gain more control over identity sprawl.

To learn more about all new partner-built agents as well as partner SaaS offerings, read the blog or head to the Microsoft Security Store.

Smarter, faster Security Copilot agents

High-quality LLM instructions are critical to agent performance, yet manually fine-tuning them is time-consuming and error-prone. We're excited to introduce tools that help improve custom-built agent quality and performance, starting with autotune instruction optimization. Autotune eliminates the need for manual tuning by automatically analyzing and refining agent instructions for optimal performance. Simply enable autotune during testing and submit, then receive a detailed results report with suggested prompt changes to boost your agent's AI quality score quickly and effortlessly. This optimization not only delivers better outcomes faster, it also ensures that every agent in our ecosystem is always evolving, making them smarter, sharper, and more effective over time. But instructions are only part of the picture. To truly empower agents, context and data are key.
By combining rich security signals from Microsoft Sentinel with advanced AI reasoning, Microsoft is setting a new standard for what agents can achieve: resolving incidents faster, optimizing workflows, and delivering deeper, more actionable insight. Security Copilot leverages a unified foundation of structured, graph, and semantic data from Sentinel to give agents the context they need to connect the dots across your environment. This deep integration transforms what AI can do, enabling agents to reason, adapt, and act with precision at machine speed. Read the Sentinel graph announcement here.

Get Started Today

With Security Copilot, the power of AI is now in your hands. Deploy ready-to-use agents from Microsoft and partners, or design custom agents built for your environment and workflows. These agents accelerate decision-making, surface critical insights, and let teams focus on strategic security work, turning complexity into clarity and speed. Explore Security Store today to experience how agentic automation is reshaping security operations and unlocking the full potential of your team. Learn more about how to create your own agents. Deep dive into these innovations at Microsoft Secure on Sept. 30, Oct. 1 or on demand. Then, join us at Microsoft Ignite, Nov. 17–21 in San Francisco, CA, or online, for more innovations, hands-on labs, and expert connections.

Automating Phishing Email Triage with Microsoft Security Copilot
This blog details automating phishing email triage using Azure Logic Apps, Azure Function Apps, and Microsoft Security Copilot. Deployable in under 10 minutes, this solution primarily analyzes email intent without relying on traditional indicators of compromise, accurately classifying benign/junk, suspicious, and phishing emails. Benefits include reduced manual workload, improved threat detection, and optional seamless integration with Microsoft Sentinel, enabling analysts to see Security Copilot analysis within the incident itself. Designed for flexibility and control, this Logic App is a customizable solution that can be self-deployed from GitHub. It helps automate phishing response at scale without requiring deep coding expertise, making it ideal for teams that prefer a more configurable approach and want to tailor workflows to their environment. The solution streamlines response and significantly reduces manual effort. Access the full solution on the Security Copilot GitHub: GitHub - UserReportedPhishing Solution.

For teams looking for a more sophisticated, fully integrated experience, the Security Copilot Phishing Triage Agent represents the next generation of phishing response. Natively embedded in Microsoft Defender, the agent autonomously triages phishing incidents with minimal setup. It uses advanced LLM-based reasoning to resolve false alarms, enabling analysts to stay focused on real threats. The agent offers step-by-step decision transparency and continuously learns from user feedback. Read the official announcement here.

Introduction: Phishing Challenges Continue to Evolve

Phishing continues to evolve in both scale and sophistication, but a growing challenge for defenders isn't just stopping phishing; it's scaling response. Thanks to tools like Outlook's "Report Phishing" button and increased user awareness, organizations are now flooded with user-reported emails, many of which are ambiguous or benign.
This has created a paradox: better detection by users has overwhelmed SOC teams, turning email triage into a manual, rotational task dreaded for its repetitiveness and time cost, often taking over 25 minutes per email to review. Our solution addresses that problem by automating the triage of user-reported phishing through AI-driven intent analysis. It's not built to replace your secure email gateways or Microsoft Defender for Office 365; those tools have already done their job. This system assumes the email:

- Slipped past existing filters,
- Was suspicious enough for a user to escalate,
- Lacks typical IOCs like malicious domains or attachments.

As a former attacker, I spent years crafting high-quality phishing emails to penetrate the defenses of major banks. Effective phishing doesn't rely on obvious IOCs like malicious domains, URLs, or attachments; the infrastructure often appears clean. The danger lies in the intent. This is where Security Copilot's LLM-based reasoning is critical, analyzing structure, context, tone, and seasonal pretexts to determine whether an email is phishing, suspicious, spam, or legitimate. What makes this novel is that it's the first solution built specifically for the "last mile" of phishing defense, where human suspicion meets automation, and intent is the only signal left to analyze. It transforms noisy inboxes into structured intelligence and empowers analysts to focus only on what truly matters.

Solution Overview: How the Logic App Solution Works (and Why It's Different)

Core Components:

- Azure Logic Apps: Orchestrates the entire workflow, from ingestion to analysis, and is 100% customizable.
- Azure Function Apps: Parse and normalize email data for efficient AI consumption.
- Microsoft Security Copilot: Performs sophisticated AI-based phishing analysis by understanding email intent and tactics, rather than relying exclusively on predefined malicious indicators.
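To make the Function App's parsing role concrete, here is a minimal, hypothetical Python sketch (standard library only; this is an illustration, not the actual code from the GitHub repo) of flattening a reported .eml into the kind of compact JSON structure an LLM can consume within token limits:

```python
import email
import json
import re
from email import policy

def parse_reported_email(raw_bytes: bytes) -> dict:
    """Flatten a raw .eml into a compact, JSON-ready dict for LLM analysis."""
    msg = email.message_from_bytes(raw_bytes, policy=policy.default)
    body = msg.get_body(preferencelist=("plain", "html"))
    body_text = body.get_content() if body else ""
    return {
        "subject": str(msg["Subject"] or ""),
        "from": str(msg["From"] or ""),
        "reply_to": str(msg["Reply-To"] or ""),
        "urls": re.findall(r"https?://[^\s\"'<>]+", body_text),
        "body": body_text[:8000],  # truncate to stay within token limits
        "attachments": [p.get_filename() for p in msg.iter_attachments()],
    }

# Hypothetical reported email: clean infrastructure, suspicious intent.
raw = (b"From: payroll@contoso-payments.example\r\n"
       b"Reply-To: attacker@evil.example\r\n"
       b"Subject: Urgent: update your direct deposit\r\n\r\n"
       b"Please verify at https://evil.example/login before 5pm.")
print(json.dumps(parse_reported_email(raw), indent=2))
```

Note the mismatch between `From` and `Reply-To` in the sample: the parser does not judge it, it simply surfaces both fields so the LLM prompt can reason about the intent.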
Key Benefits:

- Rapid Analysis: Processes phishing alerts and, in minutes, delivers comprehensive reports that empower analysts to make faster, more informed triage decisions, compared to manual reviews that can take up to 30 minutes. And, unlike analysts, Security Copilot requires zero sleep!
- AI-driven Insights: LLM-based analysis generates clear explanations of classifications by assessing behavioral and contextual signals such as urgency, seasonal threats, Business Email Compromise (BEC), subtle language clues, and otherwise sophisticated techniques. Most importantly, it identifies benign emails, which are often the bulk of reported emails.
- Detailed, Actionable Reports: Generates clear, human-readable HTML reports summarizing threats and recommendations for analyst review.
- Robust Attachment Parsing: Automatically examines attachments like PDFs and Excel documents for malicious content or contextual inconsistencies.
- Integrated with Microsoft Sentinel: Optional integration with Sentinel ensures central incident tracking and comprehensive threat management. Analysis is attached directly to the incident, saving analysts more time.
- Customization: Add, move, or replace any element of the Logic App or prompt to fit your specific workflows.

Deployment Guide: Quick, Secure, and Reliable Setup

The solution provides Azure Resource Manager (ARM) templates for rapid deployment.

Prerequisites:

- Azure Subscription with Contributor access to a resource group.
- Microsoft Security Copilot enabled.
- Dedicated Office 365 shared mailbox (e.g., phishing@yourdomain.com) with Mailbox.Read.Shared permissions.
- (Optional) Microsoft Sentinel workspace.

Refer to the up-to-date deployment instructions on the Security Copilot GitHub page.

Technical Architecture & Workflow

The automated workflow operates as follows:

- Email Ingestion: Monitors the shared mailbox via the Office 365 connector. Triggers on new email arrivals every 3 minutes.
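For orientation, the 3-minute polling corresponds to a standard recurrence setting on the Office 365 trigger. In the Logic App's code view it looks roughly like this (a simplified, illustrative fragment; the exact trigger name and properties come from the deployed ARM template):

```json
{
  "type": "ApiConnection",
  "recurrence": {
    "frequency": "Minute",
    "interval": 3
  }
}
```

Shortening the interval makes triage feel more real-time at the cost of more connector polls; lengthening it batches reported emails.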
The workflow assumes the reported email has arrived as an attachment to a "carrier" email.

- Determine Whether the Email Came from Defender/Sentinel: If the email came from Defender, it will have a prepended subject of "Phishing"; if not, the workflow takes the "False" branch. Change as necessary.
- Initial Email Processing: Exports raw email content from the shared mailbox. Determines whether .msg or .eml attachments are in binary format and converts them if necessary.
- Email Parsing via Azure Function App: Extracts data from email content and attachments (URLs, sender info, email body, etc.) and returns a JSON structure, preparing clean JSON data for AI analysis. This step is required to "prep" the data for LLM analysis due to token limits. Click on the "Parse Email" block to see the output of the Function App for any troubleshooting. You'll also notice a number of JSON keys that are not used but are provided for flexibility.
- Security Copilot Advanced AI Reasoning: Analyzes email content using a comprehensive prompt that evaluates behavioral and seasonal patterns, BEC indicators, attachment context, and social engineering signals. Scores cumulative risk based on structured heuristics without relying solely on known malicious indicators. Returns validated JSON output (some customers parse this JSON and perform further actions). This is where you would customize the prompt if you need to add your own organizational situations or tune the Logic App.
- JSON Normalization & Error Handling: A "normalization" Azure Function ensures the output matches the expected JSON schema; LLMs sometimes stray from a strict output structure, and this step aims to solve that problem. If you add or remove anything from the Parse Email code that alters the structure of the JSON, this block and the next will need to be updated to match your new structure.
- Detailed HTML Reporting: Generates a detailed HTML report summarizing AI findings, indicators, and recommended actions.
Reports are emailed directly to SOC team distribution lists or ticketing systems.

- Optional Sentinel Integration: Adds the reasoning and output from Security Copilot directly to the incident comments. This is the ideal location for the output, since the analyst is already in the security.microsoft.com portal. It waits up to 15 minutes for logs to appear, for situations where the user reports before an incident is created.

The solution works well out of the box but may require some tuning, so give it a test. Here are some examples of the type of Security Copilot reasoning:

- Benign email detection (screenshot)
- Phishing email detection (screenshot)
- More sophisticated phishing with subtle clues (screenshot)

Enhanced Technical Details & Clarifications

- Attachment Processing: When multiple email attachments are detected, the Logic App processes each binary-format email sequentially. If PDF or Excel attachments are detected, they are parsed and evaluated appropriately for content and intent.
- Security Copilot Reliability: The Security Copilot Logic App API call uses an extensive retry policy (10 retries at 10-minute intervals) to ensure reliable AI analysis despite intermittent service latency. If you run out of SCUs in an hour, the workflow pauses until they are refreshed and then continues.
- Sentinel Integration Reliability: Acknowledges inherent Sentinel logging delays (up to 15 minutes). Implements retry logic and explicit manual alerting for unmatched incidents, in case the analysis runs before the incident is created.
- Security Best Practices: Compare the Function & Logic App to your company security policies to ensure compliance. Credentials, API keys, and sensitive details use Azure Managed Identities or secure API connections; no secrets are stored in plaintext. Azure Function Apps perform only safe parsing operations; attachments and content are never executed or opened insecurely.
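The JSON normalization idea described in the workflow can be sketched in Python (a hypothetical example; the field names and allowed classifications here are illustrative, not the solution's exact schema): coerce whatever the LLM returned into the strict structure downstream blocks expect, and fail safe when it drifts.

```python
import json

# Illustrative schema: each key maps to its default (used when missing or mistyped).
EXPECTED_SCHEMA = {
    "classification": "suspicious",
    "confidence": 0,
    "reasoning": "",
    "indicators": [],
}
ALLOWED_CLASSES = {"phishing", "suspicious", "spam", "legitimate"}

def normalize_verdict(llm_output: str) -> dict:
    """Coerce free-form LLM output into the strict schema downstream steps expect."""
    try:
        raw = json.loads(llm_output)
    except json.JSONDecodeError:
        raw = {}  # unparseable output falls back to all defaults
    result = {}
    for key, default in EXPECTED_SCHEMA.items():
        value = raw.get(key, default)
        # Keep the value only if it has the expected type; otherwise use the default.
        result[key] = value if isinstance(value, type(default)) else default
    if result["classification"] not in ALLOWED_CLASSES:
        result["classification"] = "suspicious"  # fail safe: route to an analyst
    return result

# The model drifted: wrong casing fails the allow-list, wrong type fails the check.
print(normalize_verdict('{"classification": "Phishing!", "confidence": "high"}'))
```

Failing toward "suspicious" rather than dropping malformed output means a schema drift costs an analyst a look, not a missed phish.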
Be sure to check out how the Microsoft Defender for Office team is improving detection capabilities as well: Microsoft Defender for Office 365's Language AI for Phish: Enhancing Email Security | Microsoft Community Hub.

Busting myths on Microsoft Security Copilot
This blog aims to dispel common misconceptions surrounding Microsoft Security Copilot, a cutting-edge tool designed to enhance cybersecurity measures. By addressing these myths, we hope to provide clarity on how this innovative solution can be leveraged to strengthen your organization's security.