As enterprises rapidly adopt AI to drive productivity, automate decisions, and power intelligent agents, a new attack surface is emerging—one that traditional security controls were never designed to protect. AI models, training pipelines, plugins, and autonomous agents now sit directly in the path of sensitive data, business logic, and critical workflows. Organizations must protect the AI supply chain from model development and deployment to runtime behavior, tool access, and downstream actions.
At the same time, AI agents operating with broad privileges require runtime monitoring to ensure every tool invocation and action is safe. By combining proactive model scanning across the AI lifecycle with runtime enforcement that monitors and blocks risky agent behavior, security teams gain the visibility and control needed to prevent data exfiltration, misuse of automation, and silent manipulation of outcomes at machine speed.
Microsoft Defender helps organizations protect AI investments end-to-end by proactively identifying risks, detecting AI-specific attacks, and enabling investigation and response efforts. New innovations in Defender build on this foundation with threat protection and visibility capabilities for agents through Agent 365 and with AI model scanning.
Protect AI agents in Agent 365 from emerging threats
As AI agents become embedded in core business workflows, they introduce a new class of operational risk that traditional security controls were never designed to manage. AI agents don’t just process data—they take actions, invoke tools, and make decisions, often with broad access to sensitive systems and information. Without continuous visibility and protection of agent activity at runtime, organizations risk silent data exfiltration, misuse of automation, and manipulated outcomes that can directly impact business integrity, compliance, and trust.
Real-time protection integrates Microsoft Defender directly into Agent 365’s tools gateway (ATG) to evaluate every agent tool invocation before it executes.
The new capabilities provide critical runtime scrutiny to catch unsafe or manipulated actions that traditional build-time checks cannot. They focus on high-confidence threats such as attempts to extract system instructions, access or leak sensitive data, misuse internal-only tools, or route information to untrusted destinations.
If an action is determined to be risky, Defender blocks it immediately, before the tool is invoked, preventing data access, leakage, or other harmful actions. Each blocked action generates a comprehensive, SOC-ready alert that explains what was stopped, why it was considered risky, and which agent, user, and tool were involved.
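The gating pattern described above can be sketched as a pre-execution check: intercept the tool call, evaluate it against a set of high-confidence threat signals, and either allow it or block it with an alert. This is a minimal illustration only; the signal names, types, and decision shape below are hypothetical and not Defender's or the Agent 365 tools gateway's actual interfaces.

```python
from dataclasses import dataclass

@dataclass
class ToolInvocation:
    agent_id: str
    user_id: str
    tool_name: str
    arguments: dict

# Hypothetical signal names modeled on the threat classes named above.
HIGH_CONFIDENCE_THREATS = {
    "prompt_extraction",      # attempts to extract system instructions
    "sensitive_data_leak",    # accessing or leaking sensitive data
    "internal_tool_misuse",   # misuse of internal-only tools
    "untrusted_destination",  # routing information to untrusted destinations
}

def evaluate_invocation(invocation: ToolInvocation, detected_signals: set[str]) -> dict:
    """Gate a tool call before it executes: block on any high-confidence
    threat signal and emit an alert naming the agent, user, and tool."""
    threats = sorted(detected_signals & HIGH_CONFIDENCE_THREATS)
    if threats:
        return {
            "action": "block",
            "alert": {
                "agent": invocation.agent_id,
                "user": invocation.user_id,
                "tool": invocation.tool_name,
                "reasons": threats,
            },
        }
    return {"action": "allow", "alert": None}

call = ToolInvocation("agent-42", "user-7", "send_email", {"to": "ext@example.com"})
decision = evaluate_invocation(call, {"untrusted_destination"})
print(decision["action"])  # blocked before execution, with an explanatory alert
```

The key design point is that the decision happens before the tool runs, so a risky action is never partially executed and then rolled back.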
Identify risks across the AI model lifecycle
When we talk about securing AI, we need to start with the model itself. AI models go through a lifecycle from data sourcing and training, through packaging and deployment, all the way to production. At each stage, there are security risks that traditional application security doesn't address. Understanding where those risks live is the first step toward building the right controls.
Before any training begins, teams are pulling in pretrained models from registries like Hugging Face, consuming third-party datasets, and importing ML frameworks into their pipelines. A compromised pretrained model can carry embedded malware or backdoors that activate only under specific conditions. Teams that consume models from external sources without scanning them are trusting unknown actors with access to their environment.
AI model scanning in Microsoft Defender now provides scanning for models stored in Azure ML registries and workspaces, covering malware, unsafe operators, and backdoors across common model formats.
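To make "unsafe operators" concrete: pickle-based model formats can embed opcodes that execute arbitrary code the moment the file is loaded. A scanner can flag these statically, without ever loading the model. The sketch below uses Python's standard `pickletools` module to detect an illustrative subset of dangerous opcodes; it is a teaching example, not Defender's actual detection logic.

```python
import pickle
import pickletools

# Pickle opcodes that can trigger code execution on load -- the class of
# "unsafe operators" a model scanner looks for (illustrative subset only).
UNSAFE_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def scan_model_bytes(data: bytes) -> list[str]:
    """Statically walk a pickle stream and return any unsafe opcodes found.
    The stream is only disassembled, never deserialized, so nothing executes."""
    found = []
    for opcode, _arg, _pos in pickletools.genops(data):
        if opcode.name in UNSAFE_OPCODES:
            found.append(opcode.name)
    return found

class Exploit:
    # A classic pickle backdoor: __reduce__ makes loading run a command.
    def __reduce__(self):
        import os
        return (os.system, ("echo pwned",))

malicious = pickle.dumps(Exploit())          # serializing does not execute it
benign = pickle.dumps({"weights": [0.1, 0.2]})

print(scan_model_bytes(malicious))  # unsafe opcodes flagged
print(scan_model_bytes(benign))     # []
```

A plain dictionary of weights scans clean, while the backdoored object is flagged before anyone calls `pickle.load` on it.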
For security teams, recurring scans produce security recommendations tied to the specific model resource, enabling quick remediation. Additionally, high-confidence malware detections now generate Defender alerts that flow directly into SOC workflows via Defender XDR.
For developers, a new CLI integration enables on-demand, in-pipeline scanning of model artifacts during the build process, identifying risks before a model ships. Additionally, gating capabilities in CI/CD pipelines help prevent unsafe models from ever reaching a registry: if a model hasn't been scanned, it shouldn't be pushed.
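The gating idea reduces to a simple contract: the pipeline runs the scanner over the model artifact and refuses to push unless the scan comes back clean. The sketch below assumes a generic scanner CLI whose exit code signals the verdict; the command name and function are placeholders, not the actual Defender CLI syntax.

```python
import subprocess
import sys

def model_scan_passed(model_path: str, scanner_cmd: list[str]) -> bool:
    """Run a scanner CLI over a model artifact and report whether it is clean.
    `scanner_cmd` stands in for whatever scanning CLI the pipeline invokes
    (hypothetical); by convention, exit code 0 means the model scanned clean."""
    result = subprocess.run([*scanner_cmd, model_path], capture_output=True)
    return result.returncode == 0

# Stub scanners standing in for a real scanning CLI, so the gate logic can
# be demonstrated end to end: exit 0 = clean, nonzero = unsafe or unscanned.
clean_scanner = [sys.executable, "-c", "import sys; sys.exit(0)"]
dirty_scanner = [sys.executable, "-c", "import sys; sys.exit(1)"]

print(model_scan_passed("model.bin", clean_scanner))  # safe to push
print(model_scan_passed("model.bin", dirty_scanner))  # block the push
```

In a real pipeline the CI job would fail (and the registry push would be skipped) whenever this check returns false, enforcing "no scan, no push" mechanically rather than by policy alone.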
Visibility across the lifecycle ties it all together. The AI model lifecycle requires controls at every stage: supply chain integrity verification, artifact validation during development, automated scanning before deployment, runtime threat detection in production, and discovery and cleanup at end of life. The organizations that treat this as a continuous discipline, not a one-time checkpoint, are the ones building the foundation to scale AI securely.