Blog Post

Microsoft Defender XDR Blog
4 MIN READ

Protect Copilot Studio AI Agents in Real Time with Microsoft Defender

Itai_Cohen's avatar
Itai_Cohen
Icon for Microsoft rankMicrosoft
Sep 08, 2025

Building AI agents has never been easier. Platforms like Microsoft Copilot Studio democratize the creation of AI agents and empower non-technical users to build intelligent agents that automate tasks and streamline business processes. These agents can answer questions, orchestrate complex tasks, and integrate with enterprise systems to boost productivity and creativity. Organizations are embracing a future where every team has AI agents working alongside them to increase efficiency and responsiveness.  

While AI agents unlock exciting new possibilities, they also introduce new security risks, most notably prompt injection attacks and a broader attack surface. Attackers are already testing ways to exploit them, such as abusing tool permissions, sneaking in malicious instructions, or tricking agents into sharing sensitive data. Prompt injection is especially concerning because it happens when an attacker feeds an agent malicious inputs to override the agent’s intended behavior. These risks aren’t due to flaws in Copilot Studio or any single platform — they’re a natural challenge that comes with democratizing AI development. As more people build and deploy agents, strong, real-time protection will be critical to keeping them secure. 

To help organizations safely unlock the potential of generative AI, Microsoft Defender has introduced innovations ranging from shadow AI discovery to out-of-the-box threat protection for both pre-built and custom-built generative AI apps. Today, we’re excited to take the next step in securing AI agents: Microsoft Defender now delivers real-time protection during agent runtime for AI agents built with Copilot Studio. It automatically stops agents from executing unsafe actions during runtime if suspicious behavior, such as a prompt injection attack attempt, is detected and notifies security teams with a detailed alert in the Defender portal. Defender’s AI agent runtime protection is part of our broader approach to securing Copilot Studio AI agents, as outlined in this blog post. 

Monitor AI agent runtime activities and detect prompt injection attacks

Prompt injections are particularly dangerous because they exploit the very AI logic that powers these agents. A well-crafted input can trick an agent’s underlying language model into ignoring its safety guardrails or revealing secrets it was supposed to keep. With thousands of agents operating and interacting with external inputs, the risk of prompt injection is not theoretical - it’s a pressing concern that grows with every new agent deployed. 

The new real-time protection for AI agents built with Copilot Studio adds a safety net at the most critical point when the agent is running and acting. It helps safeguard AI agents during their operation, reducing the chance that malicious inputs can exploit them during runtime.  

Microsoft Defender now monitors agent tool invocation calls in real time. If a suspicious or high-risk action is detected, such as a known prompt injection pattern, the action is blocked before it is executed. The agent halts processing and informs the user that their request was blocked due to a security risk. For example, if an HR chatbot agent is tricked by a hidden prompt to send out confidential salary information, Defender will detect this unauthorized action and block it before any tool is invoked.

Figure 1: Microsoft Defender blocks agent tool invocation in real time.

Investigate suspicious agent behaviors in a unified experience

See the full attack story, not just the alerts. Today’s attacks are targeted and multistage. When Defender stops risky Copilot Studio AI agent activity at runtime, it raises an alert - and immediately begins correlating related signals across email, endpoints, identities, apps, and cloud into a single incident. That builds the complete attack narrative, often before anyone even opens the queue, so the SOC can see how they’re being targeted and what to do next. 

In the Microsoft Defender portal, incidents arrive enriched with timelines, entity relationships, relevant TTPs, and threat intelligence. Automated investigation and response gathers evidence, determines scope, and recommends or executes remediation to cut triage time. With Security Copilot embedded, analysts get instant incident summaries, guided response and hunting in natural language, and contextualize threat intelligence to accelerate deeper analysis and stay ahead of threats. 

If you use Microsoft Sentinel, the unified SOC experience brings Defender XDR incidents together with thirdparty data. And with the new Microsoft Sentinel data lake (preview), teams can retain and analyze years of security data in one place, then hunt across that history using naturallanguage prompts that Copilot translates to KQL. 

Because runtime protection already stops the unsafe actions of Copilot Studio AI agents, most single alerts don’t require immediate intervention. But the SOC still needs to know when they’re being persistently targeted. Defender automatically flags emerging patterns, such as sustained activity from the same actor or technique, and, when warranted and a supporting scenario like ransomware, can trigger automatic attack disruption to contain active threats while analysts' review. 

For Copilot Studio builders, Defender extends the same protection to AI agents: realtime runtime protection helps prevent unsafe actions and promptinjection attempts, and detections are automatically correlated and investigated, without moving data outside a trusted, industryleading XDR. 

Figure 2: Defender XDR turns alerts into complete attack stories to enable fast, informed threat investigation.

By embedding security into the runtime of AI agents, Microsoft Defender helps organizations embrace the full potential of Copilot Studio while maintaining the trust and control they need. Real-time protection during agent runtime is a foundational step in Microsoft’s journey to secure the future of AI agents, laying the foundation for more advanced capabilities coming soon to Microsoft Defender. It reflects our belief that innovation and security go hand in hand. With this new capability, organizations can feel more confident using AI agents, knowing that Microsoft Defender is monitoring in real time to keep their environments protected.

 

Learn more:

 

Updated Sep 04, 2025
Version 1.0
No CommentsBe the first to comment