Blog Post

Marketplace blog
10 MIN READ

Security in the agentic era: A new paradigm

joonwoo's avatar
joonwoo
Icon for Microsoft rankMicrosoft
Mar 05, 2026

Trust is the new control plane: A security leader's guide to the age of AI agents

Artificial intelligence is no longer a background utility — it's becoming an autonomous workforce. As AI agents take on independent roles in business processes, organizations must confront a new class of security, governance, and compliance challenges that traditional frameworks were never designed to handle.

The question is no longer whether to adopt AI. It's whether your organization can do so without creating unacceptable risk. This guide examines the three pillars every security and technology leader must address: securing AI systems, governing AI behavior, and complying with a rapidly evolving regulatory landscape.

 

Why the Stakes Are Higher Than Ever

By 2027, Microsoft projects that more than 1.3 billion AI agents will be deployed globally — a number that rivals the entire US population. These aren't passive tools waiting for instructions. They are autonomous systems operating around the clock, accessing privileged data, executing decisions, and interacting with customers, suppliers, and internal workflows — all without a human explicitly in the loop for every action.

For security leaders, the implications are immediate and stark:

  • 1.3 billion non-human identities to manage and secure
  • 1.3 billion autonomous workflows capable of triggering real-world consequences
  • 1.3 billion new attack surfaces that existing security architectures weren't built to defend

The Microsoft Work Trend Index reinforces the urgency: 95% of organizations are already using AI in some form. Yet only a single-digit percentage of leaders feel confident enough to deploy generative or agentic AI at enterprise scale. The gap between adoption and readiness is significant — and it's widening.

Closing that gap starts with one thing: trust. Nobody builds a digital workforce on a foundation they can't rely on.

 

The Shifting Threat Landscape

The World Economic Forum's top global risks through 2035 paint a sobering picture of where we're headed. Misinformation and disinformation rank among the top five risks — and AI is accelerating both.

AI-generated phishing attacks now achieve click rates up to four times higher than traditional attempts. There are no spelling mistakes. The logos are perfect. The tone matches your organization's communication style. Employees who were trained to spot obvious red flags are encountering attacks that look entirely legitimate.

But phishing is only part of the threat surface. The broader risks include:

  • Data oversharing — AI agents inadvertently exposing sensitive information across organizational and jurisdictional boundaries
  • Adverse AI outcomes — biased, inaccurate, or non-compliant outputs that create legal liability and reputational damage
  • Cyber espionage — the same autonomous capabilities that make AI agents powerful make them attractive targets for nation-state actors and sophisticated adversaries

The fundamental principles of good security hygiene haven't changed. But the autonomy and scale at which AI systems now operate means any failure can propagate faster and further than anything we've dealt with before.

 

Pillar one — securing AI: Getting the foundation right

Rethinking data loss prevention for the agentic era

Controlling what data AI agents can access and share is one of the most immediate operational challenges organizations face. Traditional Data Loss Prevention (DLP) systems were built around known identifiers — matching patterns in email traffic, file transfers, and endpoint activity. Agentic systems operating through APIs and Model Context Protocol (MCP) servers don't fit those models.

To address this, organizations must retrofit existing DLP controls for API-based and agent-driven data flows, while also building new capabilities:

  • Discover and classify data before AI indexes it — legacy data that was never formally classified becomes a serious liability the moment an AI system can read and surface it at machine speed
  • Implement agent-aware role-based access controls (RBAC) — access policies designed for human users often fail to account for the broader reach of autonomous agents
  • Manage data lifecycle actively — data retained beyond its useful or legal life becomes an unnecessary risk vector; organizations need clear policies on retention, restriction, and secure deletion
Prompt injection and the new attack surface

Agentic AI has introduced a genuinely new class of attack. An adversary no longer needs to deploy malware in the conventional sense. Instead, they can embed hidden instructions in a document, an email, or an API response — and if an AI agent reads it, it may execute those instructions as legitimate commands.

Prompt injection attacks represent a fundamental shift in the threat model. Every input channel to an agent — a document upload, an API call, a user prompt, an external data feed — is a potential attack vector. Sanitizing and validating inputs before they reach the model is no longer optional security hygiene; it's a core architectural requirement.

Other significant threats emerging in this space include:

  • Model theft — adversaries targeting the fine-tuned intellectual property embedded in proprietary AI models
  • Hallucination exploitation — fabricated or omitted information leading to biased, incorrect, or legally non-compliant outputs
  • Autonomous decision overreliance — trusting agent outputs without adequate human review, creating exposure to data leakage and incorrect actions executed at scale
Tackling shadow AI

A decade ago, "shadow IT" — employees using unapproved technology outside of IT oversight — was a governance nightmare. Shadow AI is the next chapter of that problem, and it's already here.

Employees are adopting AI tools independently, often without any visibility from security or compliance teams. Without centralized oversight, there's no way to verify that those tools follow responsible AI principles, handle data appropriately, or comply with applicable regulations.

A centralized AI tool policy — clearly defining which systems are approved and why — is essential. The goal isn't to limit innovation. It's to ensure that when AI is used, it can be trusted, explained, and audited.

 

Pillar two — governing AI: Accountability at scale

Security controls protect the system from external threats. Governance determines how the system behaves — and who is accountable when things go wrong.

Data governance as the foundation

AI systems are only as trustworthy as the data they are trained on and operate against. Poor data governance doesn't just create compliance risk — it undermines the reliability of every AI output. Effective data governance in the agentic era requires:

  • Clear data ownership and access policies that explicitly cover non-human identities
  • Centralized data catalogs with lineage tracking, quality controls, and metadata management
  • AI agent identity lifecycle management — onboarding, access reviews, and offboarding for agents, managed with the same rigor applied to human employees

Some leading organizations are already registering AI agents in HR systems and assigning human managers to each one. It's an imperfect analogy, but it enforces one critical discipline: making someone responsible for what the agent does.

A framework for AI governance

Effective AI governance rests on two foundations: core principles and an implementation framework that spans the full AI lifecycle.

Core principles should address transparency in decision-making, clear accountability for outcomes, privacy protections for sensitive data, and meaningful human oversight for high-stakes decisions. These aren't abstract values — they need to be operationalized into specific technical controls and policy requirements.

Governance itself typically runs through three phases:

  • Selection — evaluating AI tools and models for safety, transparency, and compliance fit before adoption
  • Deployment — aligning policies and controls to measurable business objectives so success can actually be verified
  • Ongoing monitoring — continuously assessing performance, fairness, and regulatory compliance over the system's lifetime

One area that remains persistently underinvested: monitoring outputs, not just inputs. Most organizations build controls around what goes into an AI system. But AI operates at machine scale and speed — organizations equally need the capability to watch what's coming out, in real time.

The agent-as-employee mental model

One of the most practical governance frameworks emerging in this space is treating every AI agent like a new employee. Define its job description: what is it authorized to do? What data can it access? Under what conditions does it need a human manager's approval before acting?

If an agent needs access to a new database, who approves that request? If it begins exhibiting anomalous behavior, who receives the alert? If it needs to be decommissioned, what does offboarding look like?

These questions aren't just governance formalities. They are the building blocks of accountability in the agentic era — and the organizations that answer them clearly will be far better positioned than those that don't.

 

Pillar three — compliance: Shift left or fall behind

With 127 countries now enforcing privacy laws, and AI-specific regulations evolving roughly every four to five days, compliance has become a fast-moving target. The EU AI Act, GDPR, DORA, the Cyber Resilience Act — these frameworks overlap, interact, and frequently conflict in ways that create genuine decision fatigue for compliance teams.

The answer is not to wait for the landscape to stabilize. It won't. The only viable strategy is to shift compliance left — embedding regulatory requirements into architecture and process design from day one, rather than trying to retrofit them after deployment.

In practice, this means:

  • Aligning to established baselines such as ISO 42001 (AI Management Systems) rather than custom frameworks that can't be readily demonstrated to external regulators or auditors
  • Mapping extraterritorial regulatory scope — understanding which laws apply to your data flows based on where data originates, where it's processed, and where it's used, not just where your headquarters sits
  • Operationalizing responsible AI principles — translating commitments to privacy, explainability, and fairness into concrete technical controls, documented policies, and auditable practices

The principle of shared responsibility — well understood in cloud computing — applies equally to AI deployment. A platform provider can build AI systems assessed against responsible AI principles, with transparency records publicly available. But the organization deploying those systems still owns its governance choices, its data handling practices, and its accountability to regulators and customers.

 

Microsoft Marketplace: Where trust becomes a procurement decision

All of the security, governance, and compliance principles discussed above ultimately converge at a practical decision point: where and how you acquire AI agents matters as much as how you deploy them. This is where Microsoft Marketplace enters the picture — and it's more significant than simply a software storefront.

In September 2025, Microsoft unified Azure Marketplace and AppSource into a single Microsoft Marketplace, with a dedicated "AI Apps and Agents" category featuring over 3,000 solutions. The strategic intent is clear: make trusted AI agents as easy to discover, evaluate, and procure as any enterprise software — with governance and security built into the acquisition process itself.

Two agent types, different security profiles

For software development companies publishing agents and enterprises evaluating them, Microsoft Marketplace distinguishes between two categories, each with distinct security implications:

  • Azure agents — general-purpose AI solutions running on Azure infrastructure, either hosted by the publisher as a SaaS offering or deployed into the customer's own tenant via container offers. These are suited to cloud-based agentic workflows with custom compute requirements.
  • Microsoft 365 agents — agents integrated directly into Copilot and M365 applications like Teams, Outlook, Word, and Excel. These enhance productivity within the Microsoft 365 environment and are distributed through the Agent Store within the M365 Copilot experience.
Marketplace as a governance lever for buyers

From the enterprise buyer's perspective, Marketplace isn't just about convenience — it's a governance tool. When an organization acquires an agent through Microsoft Marketplace, it is provisioned and distributed aligned to the organization's existing security and governance standards. This means IT retains control over what gets deployed, to whom, and under what conditions — directly addressing the shadow AI problem discussed earlier.

What to Look for When Evaluating Marketplace Agents

Not all agents in the Marketplace are equal from a security standpoint. When evaluating third-party agents for enterprise deployment, security leaders should assess:

  • Responsible AI documentation — does the publisher provide transparent records of how the agent was assessed against safety and fairness principles?
  • Data handling disclosures — where is data processed, stored, and for how long? Does it leave your tenant?
  • Offer type architecture — is the agent SaaS-hosted or tenant-deployed, and does that match your data sovereignty requirements?
  • Compliance certifications — does the publisher hold relevant certifications (SOC 2, ISO 27001, regional data protection compliance) that align with your regulatory obligations?

The Marketplace's trusted channel doesn't eliminate due diligence — but it does provide a structured, governed starting point that unvetted direct procurement cannot.

 

Trust Is the New Control Plane

Behind every efficiency gain, every automated workflow, and every AI-powered product is a human being who needs to trust the system they're relying on. When that trust isn't present, adoption fails — and adoption failure is not a neutral outcome when your competitors are building a more capable AI-powered workforce around you.

But trust in this context means more than security compliance. It requires three things:

  • Explainability — the ability to show regulators, customers, and employees how a decision was reached
  • Resilience — the assurance that the system continues to operate safely even when interconnected components fail
  • Inclusivity — confidence that the system produces fair, unbiased outcomes across the full range of people it affects

Organizations that embed trust into the foundation of their AI strategy — not as a checkbox, but as a design principle — will hold a structural competitive advantage. Those that treat it as a formality will eventually face what's already playing out: billion-dollar lawsuits over unauthorized data use, regulatory fines for opaque deployment practices, and loss of the customer confidence that makes AI investment worthwhile in the first place.

 

A Practical Starting Point

The scope of this challenge can feel overwhelming. The path forward is not. Start here:

  • Establish security fundamentals first — data classification, access management, input sanitization, and output monitoring are non-negotiable foundations
  • Build your governance framework — define agent ownership, document responsibilities, and implement human-in-the-loop checkpoints for decisions that carry real-world consequences
  • Align to regulations proactively — adopt established frameworks, map your full regulatory exposure, and maintain documentation that can be externally verified
  • Invest in AI literacy across the organization — the single biggest barrier to responsible AI adoption is not technology. It is helping people understand how to use these systems well, when to trust them, and when to question them

The age of AI agents has arrived. The organizations that lead in this environment will be the ones that treat trust not as a constraint on innovation — but as the foundation it stands on.

 

This post is based on a presentation covering the security, governance, and compliance dimensions of AI in the agentic era. To view the full session recording, visit Security for SDC Series: Securing the Agentic Era Episode 1

For more on Microsoft's Responsible AI principles and Azure AI Foundry, visit Responsible AI: Ethical policies and practices | Microsoft AI.

__________________________________________________________________________________________________________________________________________________________________

 

 

Updated Mar 05, 2026
Version 1.0
No CommentsBe the first to comment