AI adoption is accelerating across industries, bringing new risks that traditional security tools weren’t designed to address. Securing these systems has become a strategic priority. In the blog below, From Traditional Security to AI-Driven Cyber Resilience: Microsoft’s Approach to Securing AI, authored by Chirag Mehta of Constellation Research, you’ll see how Microsoft is evolving its security portfolio to meet these emerging challenges and protect AI workloads, data, and identities.
By Chirag Mehta, Vice President and Principal Analyst - Constellation Research
AI is changing the way organizations work. It helps teams write code, detect fraud, automate workflows, and make complex decisions faster than ever before. But as AI adoption increases, so do the risks, many of which traditional security tools were not designed to address.
Cybersecurity leaders are starting to see that AI security is not just another layer of defense. It is becoming essential to building trust, ensuring resilience, and maintaining business continuity. Earlier this year, after many conversations with CISOs and CIOs, I saw a clear need to bring more attention to this topic. That led to my report on AI Security, which explores how AI-specific vulnerabilities differ from traditional cybersecurity risks and why securing AI systems calls for a more intentional approach.
Why AI Changes the Security Landscape
AI systems do not behave like traditional software. They learn from data instead of following pre-defined logic. This makes them powerful, but also vulnerable. For example, an AI model can:
- Misinterpret input in ways that humans cannot easily detect
- Be tricked into producing harmful or unintended responses through crafted prompts
- Leak sensitive training data in its outputs
- Take actions that go against business policies or legal requirements
These are not coding flaws. They are risks that originate from how AI systems process information and act on it.
These risks become more serious with agentic AI. These systems act on behalf of humans, interact with other software, and sometimes with other AI agents. They can make decisions, initiate actions, and change configurations. If one is compromised, the consequences can spread quickly.
A key challenge is that many organizations still rely on traditional defenses to secure AI systems. While those tools remain necessary, they are no longer enough. AI introduces new risks across every layer of the stack, including data, networks, endpoints, applications, and cloud infrastructure. As I explained in my report, the security focus must shift from defending the perimeter to governing the behavior of AI systems, the data they use, and the decisions they make.
The Shift Toward AI-Aware Cyber Resilience
Cyber resilience is the ability to withstand, adapt to, and recover from attacks. Meeting that standard today requires understanding how AI is developed, deployed, and used by employees, customers, and partners.
To get there, organizations must answer questions such as:
- Where is our sensitive data going, and is it being used safely to train models?
- What non-human identities, such as AI agents, are accessing systems and data?
- Can we detect when an AI system is being misused or manipulated?
- Are we in compliance with new AI regulations and data usage rules?
Let’s look at how Microsoft has evolved its mature security portfolio to help protect AI workloads and support this shift toward resilience.
Microsoft’s Approach to Secure AI
Microsoft has taken a holistic and integrated approach to AI security. Rather than creating entirely new tools, it is extending existing products already used by millions to support AI workloads. These features span identity, data, endpoint, and cloud protection.
Figure 1: Microsoft product listing to protect identities, data, endpoints, and clouds1. Microsoft Defender: Treating AI Workloads as Endpoints
AI models and applications are emerging as a new class of infrastructure that needs visibility and protection.
- Defender for Cloud secures AI workloads across Azure and other cloud platforms such as AWS and GCP by monitoring model deployments and detecting vulnerabilities.
- Defender for Cloud Apps extends protection to AI-enabled apps running at the edge
- Defender for APIs supports AI systems that use APIs, which are often exposed to risks such as prompt injection or model manipulation
Additionally, Microsoft has launched tools to support AI red-teaming, content safety, and continuous evaluation capabilities to ensure agents operate safely and as intended. This allows teams identify and remediate risks such as jailbreaks or prompt injection before models are deployed.
2. Microsoft Entra: Managing Non-Human Identities
As organizations roll out more AI agents and copilots, non-human identities are becoming more common. These digital identities need strong oversight.
- Microsoft Entra helps create and manage identities for AI agents
- Conditional Access ensures AI agents only access the resources they need, based on real-time signals and context
- Privileged Identity Management manages, controls, and monitors AI agents access to important resources within an organization
3. Microsoft Purview: Securing Data Used in AI
Purview plays an important role in securing both the data that powers AI apps and agents, and the data they generate through interactions.
- Data discovery and classification helps label sensitive information and track its use
- Data Loss Prevention policies help prevent leaks or misuse of data in tools such as Copilot or agents built in Azure AI Foundry
- Insider Risk Management alerts security teams when employees feed sensitive data into AI systems without approval
Purview also helps organizations meet transparency and compliance requirements, extending the same policies they already use today to AI workloads, without requiring separate configurations, as regulations like the EU AI Act take effect.
Here's a video that explains the above Microsoft security products:
Securing AI Is Now a Strategic Priority
AI is evolving quickly, and the risks are evolving with it. Traditional tools still matter, but they were not built for systems that learn, adapt, and act independently. They also weren’t designed for the pace and development approaches AI requires, where securing from the first line of code is critical to staying protected at scale.
Microsoft is adapting its security portfolio to meet this shift. By strengthening identity, data, and endpoint protections, it is helping customers build a more resilient foundation.
Whether you are launching your first AI-powered tool or managing dozens of agents across your organization, the priority is clear. Secure your AI systems before they become a point of weakness.
You can read more in my AI Security report and learn how Microsoft is helping organizations secure AI supporting these efforts across its security portfolio.