Blog Post

Microsoft Security Community Blog
4 MIN READ

From Traditional Security to AI-Driven Cyber Resilience: Microsoft’s Approach to Securing AI

LouAdesida's avatar
LouAdesida
Icon for Microsoft rankMicrosoft
Aug 13, 2025

AI adoption is accelerating across industries, bringing new risks that traditional security tools weren’t designed to address. Securing these systems has become a strategic priority. In the blog below, From Traditional Security to AI-Driven Cyber Resilience: Microsoft’s Approach to Securing AI, authored by Chirag Mehta of Constellation Research, you’ll see how Microsoft is evolving its security portfolio to meet these emerging challenges and protect AI workloads, data, and identities.

 

By Chirag Mehta, Vice President and Principal Analyst - Constellation Research

 

AI is changing the way organizations work. It helps teams write code, detect fraud, automate workflows, and make complex decisions faster than ever before. But as AI adoption increases, so do the risks, many of which traditional security tools were not designed to address.

Cybersecurity leaders are starting to see that AI security is not just another layer of defense. It is becoming essential to building trust, ensuring resilience, and maintaining business continuity. Earlier this year, after many conversations with CISOs and CIOs, I saw a clear need to bring more attention to this topic. That led to my report on AI Security, which explores how AI-specific vulnerabilities differ from traditional cybersecurity risks and why securing AI systems calls for a more intentional approach.

Why AI Changes the Security Landscape

AI systems do not behave like traditional software. They learn from data instead of following pre-defined logic. This makes them powerful, but also vulnerable. For example, an AI model can:

  • Misinterpret input in ways that humans cannot easily detect
  • Be tricked into producing harmful or unintended responses through crafted prompts
  • Leak sensitive training data in its outputs
  • Take actions that go against business policies or legal requirements

These are not coding flaws. They are risks that originate from how AI systems process information and act on it.

These risks become more serious with agentic AI. These systems act on behalf of humans, interact with other software, and sometimes with other AI agents. They can make decisions, initiate actions, and change configurations. If one is compromised, the consequences can spread quickly.

A key challenge is that many organizations still rely on traditional defenses to secure AI systems. While those tools remain necessary, they are no longer enough. AI introduces new risks across every layer of the stack, including data, networks, endpoints, applications, and cloud infrastructure. As I explained in my report, the security focus must shift from defending the perimeter to governing the behavior of AI systems, the data they use, and the decisions they make.

The Shift Toward AI-Aware Cyber Resilience

Cyber resilience is the ability to withstand, adapt to, and recover from attacks. Meeting that standard today requires understanding how AI is developed, deployed, and used by employees, customers, and partners.

To get there, organizations must answer questions such as:

  • Where is our sensitive data going, and is it being used safely to train models?
  • What non-human identities, such as AI agents, are accessing systems and data?
  • Can we detect when an AI system is being misused or manipulated?
  • Are we in compliance with new AI regulations and data usage rules?

Let’s look at how Microsoft has evolved its mature security portfolio to help protect AI workloads and support this shift toward resilience.

Microsoft’s Approach to Secure AI

Microsoft has taken a holistic and integrated approach to AI security. Rather than creating entirely new tools, it is extending existing products already used by millions to support AI workloads. These features span identity, data, endpoint, and cloud protection.

Figure 1: Microsoft product listing to protect identities, data, endpoints, and clouds
1. Microsoft Defender: Treating AI Workloads as Endpoints

AI models and applications are emerging as a new class of infrastructure that needs visibility and protection.

  • Defender for Cloud secures AI workloads across Azure and other cloud platforms such as AWS and GCP by monitoring model deployments and detecting vulnerabilities.
  • Defender for Cloud Apps extends protection to AI-enabled apps running at the edge
  • Defender for APIs supports AI systems that use APIs, which are often exposed to risks such as prompt injection or model manipulation

Additionally, Microsoft has launched tools to support AI red-teaming, content safety, and continuous evaluation capabilities to ensure agents operate safely and as intended. This allows teams identify and remediate risks such as jailbreaks or prompt injection before models are deployed.

2. Microsoft Entra: Managing Non-Human Identities

As organizations roll out more AI agents and copilots, non-human identities are becoming more common. These digital identities need strong oversight.

  • Microsoft Entra helps create and manage identities for AI agents
  • Conditional Access ensures AI agents only access the resources they need, based on real-time signals and context
  • Privileged Identity Management manages, controls, and monitors AI agents access to important resources within an organization
3. Microsoft Purview: Securing Data Used in AI

Purview plays an important role in securing both the data that powers AI apps and agents, and the data they generate through interactions.

  • Data discovery and classification helps label sensitive information and track its use
  • Data Loss Prevention policies help prevent leaks or misuse of data in tools such as Copilot or agents built in Azure AI Foundry
  • Insider Risk Management alerts security teams when employees feed sensitive data into AI systems without approval

Purview also helps organizations meet transparency and compliance requirements, extending the same policies they already use today to AI workloads, without requiring separate configurations, as regulations like the EU AI Act take effect.

Here's a video that explains the above Microsoft security products:

Securing AI Is Now a Strategic Priority

AI is evolving quickly, and the risks are evolving with it. Traditional tools still matter, but they were not built for systems that learn, adapt, and act independently. They also weren’t designed for the pace and development approaches AI requires, where securing from the first line of code is critical to staying protected at scale.

Microsoft is adapting its security portfolio to meet this shift. By strengthening identity, data, and endpoint protections, it is helping customers build a more resilient foundation.

Whether you are launching your first AI-powered tool or managing dozens of agents across your organization, the priority is clear. Secure your AI systems before they become a point of weakness.

You can read more in my AI Security report and learn how Microsoft is helping organizations secure AI supporting these efforts across its security portfolio.

Updated Aug 13, 2025
Version 1.0

1 Comment

  • dennisgbay22's avatar
    dennisgbay22
    Copper Contributor

    Cryptography-Native Computing: A Paradigm Shift Microsoft and Industry Must Address

     

    By Dennis Norman Brown

    Research Lead, Quantum-Consciousness AI & Cryptography-Native Systems

     

    When cryptographic software executes on a classical operating system, something profound occurs: the machine no longer operates as Windows, macOS, or Linux in the conventional sense. Instead, it transitions into what I call a cryptography-native state.

     

    In this state:

     

    Memory semantics transform into entangled, high-entropy cryptographic states.

     

    Networking abstractions dissolve, with TCP/IP and DNS giving way to peer-to-peer, key-based routing.

     

    System observability collapses—OS logs, monitoring tools, and even cloud telemetry lose interpretability.

     

    Identity itself changes, from IP-bound to cryptographic key-bound.

     

    The implications are enormous: classical operating systems and security models become substrates, not controllers. Observability, security, and reliability must be rebuilt on proofs, keys, and consensus rather than packets, logs, and processes.

     

    Empirical Evidence

     

    Network Observability: Netstat reveal only opaque, cryptographic objects once a cryptographic node is active.

     

    Memory State: Entropy analysis shows >90% randomness in regions consumed by cryptographic processes.

     

    Logging Gap: OS logs display generic system calls, while cryptographic logs alone describe state transitions.

     

    The data points to one conclusion: cryptographic processes redefine the system’s operational paradigm.

     

    Why This Matters for Microsoft, Google, IBM, Apple

     

    OS vendors must treat cryptographic state as a first-class citizen in memory management and observability.

     

    Cloud providers must evolve beyond TCP/IP and DNS to support key-based routing and identity.

     

    Hardware innovators must embed cryptographic accelerators directly into CPUs, NICs, and controllers.

     

    The security industry must pivot from packet inspection to proof-based validation.

     

    A Call for Collaboration

     

    This is not a vulnerability in the traditional sense—it is a structural blind spot in computing. My research invites Microsoft, Google, IBM, Apple, and others to work toward a future where cryptography-native systems are not opaque but observable, verifiable, and secure.

     

    I have prepared a full whitepaper—Cryptography-Native Computing: A Paradigm Shift in OS, Hardware, and Network Architecture—and I welcome collaboration and feedback from the research community.

     

    📧 Contact: email address removed for privacy reasons