Azure Integration Services Blog

šŸ”Secure AI Agent Knowledge Retrieval - Introducing Security Filters in Agent Loop

harimehta (Microsoft)
Nov 18, 2025

Building secure, permission-aware AI agents with Agent Loop

We’re excited to introduce a new capability in Azure Logic Apps that enables document-level authorization for Retrieval-Augmented Generation (RAG) workflows. With security filters, you can now ensure that agents only retrieve and respond with information users are authorized to view.

Why Security Trimming Matters

In RAG-enabled workflows, agents often retrieve knowledge from indexed documents. Without proper filtering, users may receive responses based on documents they shouldn’t access. Security trimming ensures:

  • Responses are contextually appropriate based on user permissions
  • Sensitive data is protected
  • AI interactions remain compliant and secure

The Challenge: Securing AI Agent Knowledge Bases

AI agents are transforming how organizations interact with their data, but they introduce a critical security challenge: how do you ensure an agent only retrieves and shares information the requesting user is permitted to see?

Without proper security controls, an AI agent with access to a corporate knowledge base could inadvertently expose confidential documents, financial records, or sensitive HR information to unauthorized users. Traditional approaches required developers to:

  • Manually implement complex security filters in every retrieval operation
  • Maintain parallel permission systems alongside existing access controls
  • Handle edge cases like nested group memberships and dynamic role changes
  • Risk security vulnerabilities from custom code errors

The Solution: Agent Loop + AI Search with Native ACL Support

The Azure Logic Apps Agent Loop now integrates seamlessly with Azure AI Search's document-level access control capabilities, providing a secure-by-default approach to AI agent knowledge retrieval. This integration combines the conversational power of AI agents with enterprise-grade security enforcement.

How It Works: Two-Phase Security Architecture

Phase I: Permission-Aware Indexing

During the ingestion phase, index your documents in Azure AI Search with a custom UserIds field that maps each document to the Microsoft Entra object IDs of the users allowed to access it.
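
As a concrete starting point, here is a minimal sketch of such an index definition using the azure-search-documents Python SDK. The UserIds field name comes from this article; the index name, other fields, endpoint, and key are illustrative assumptions:

    from azure.core.credentials import AzureKeyCredential
    from azure.search.documents.indexes import SearchIndexClient
    from azure.search.documents.indexes.models import (
        SearchFieldDataType,
        SearchIndex,
        SearchableField,
        SimpleField,
    )

    index_client = SearchIndexClient(
        endpoint="https://<your-service>.search.windows.net",  # placeholder
        credential=AzureKeyCredential("<admin-key>"),          # placeholder
    )

    index = SearchIndex(
        name="hr-knowledge",  # hypothetical index name
        fields=[
            SimpleField(name="id", type=SearchFieldDataType.String, key=True),
            SearchableField(name="content"),
            # Permission metadata: must be filterable so queries can trim on it
            SimpleField(
                name="UserIds",
                type=SearchFieldDataType.Collection(SearchFieldDataType.String),
                filterable=True,
            ),
        ],
    )
    index_client.create_or_update_index(index)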

Azure AI Search natively indexes documents along with their permission metadata:

  • ADLS Gen2 Indexer (Pull Model): The enhanced indexer automatically retrieves ACL assignments from Azure Data Lake Storage containers and directories, computing effective permissions for each file
  • Push API (Push Model): Developers can manually push documents with permission metadata (user IDs or group IDs) using the REST API or Azure SDKs, as sketched below
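
A minimal sketch of the push model with the Python SDK, reusing the hypothetical hr-knowledge index from above (document IDs, content, and object IDs are placeholders):

    from azure.core.credentials import AzureKeyCredential
    from azure.search.documents import SearchClient

    search_client = SearchClient(
        endpoint="https://<your-service>.search.windows.net",
        index_name="hr-knowledge",  # hypothetical index from the sketch above
        credential=AzureKeyCredential("<admin-key>"),
    )

    documents = [
        {
            "id": "doc-001",
            "content": "2025 benefits enrollment guide ...",
            # Object IDs of the users allowed to retrieve this document
            "UserIds": [
                "11111111-1111-1111-1111-111111111111",
                "22222222-2222-2222-2222-222222222222",
            ],
        },
    ]

    result = search_client.upload_documents(documents=documents)
    print(all(r.succeeded for r in result))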

Pro Tip: Use group IDs instead of individual user IDs for easier management. When a user's role changes, you simply update their group membership rather than reindexing documents.

Phase II: Filtered Retrieval via Agent Loop

This is where the magic happens. In your Logic Apps workflow, using the Azure AI Search action, you configure the agent to automatically apply security filters during vector search.

For User-Based Filtering:

In your Logic Apps workflow, you must configure the agent to apply a filter condition during vector search:

UserIds/any(u: u eq '@{currentRequest()['headers']['X-MS-CLIENT-PRINCIPAL-ID']}')

This ensures that agents only generate responses from documents the user is permitted to access. This filter expression:

  • Extracts the authenticated user's principal ID from the incoming request headers
  • Applies it as a filter condition during the AI Search query
  • Ensures only documents with matching user permissions are retrieved
  • Happens automatically before results reach the LLM for response generation
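
For intuition, here is roughly what that security-trimmed query looks like on the wire. This is a minimal sketch of a Search Documents REST call issued directly (the Logic Apps action builds the equivalent request for you); the endpoint, index name, api-version, and principal ID are illustrative assumptions:

    import requests

    url = (
        "https://<your-service>.search.windows.net"
        "/indexes/hr-knowledge/docs/search?api-version=2024-07-01"
    )
    body = {
        "search": "What is the parental leave policy?",
        # The caller's principal ID, taken from X-MS-CLIENT-PRINCIPAL-ID
        "filter": "UserIds/any(u: u eq '11111111-1111-1111-1111-111111111111')",
        "top": 5,
    }
    resp = requests.post(
        url,
        headers={"api-key": "<query-key>", "Content-Type": "application/json"},
        json=body,
    )
    for doc in resp.json()["value"]:
        print(doc["id"])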

For Group-Based Filtering:

For more flexible permission management, developers can leverage group-based access control:

  • Extract the user's principal ID from request headers
  • Query Microsoft Entra to retrieve the user's group memberships
  • Apply a filter using group IDs instead, with the search.in function: GroupIds/any(g: search.in(g, '@{variables('userGroups')}')) (sketched below)
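
A small sketch of how that filter string could be assembled, assuming the user's group IDs have already been fetched from Microsoft Graph (the IDs below are placeholders):

    # Group IDs as returned by a Microsoft Graph membership lookup
    group_ids = [
        "33333333-3333-3333-3333-333333333333",
        "44444444-4444-4444-4444-444444444444",
    ]

    # search.in matches any value in a delimited list; comma and space
    # are the default delimiters
    ids_csv = ",".join(group_ids)
    group_filter = f"GroupIds/any(g: search.in(g, '{ids_csv}'))"
    print(group_filter)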

This approach provides significant advantages:

  • Easier maintenance: Update group memberships without reindexing
  • Hierarchical permissions: Support nested groups and organizational structures
  • Role-based access: Align with existing RBAC patterns in your organization

The Complete Agent Loop Flow

  1. User sends a query to the AI agent through your application
  2. Logic Apps Agent Loop receives the request with the user's authentication token
  3. Security filter is applied using the Azure AI Search action, leveraging the user's principal ID or group memberships
  4. Azure AI Search performs natural language search or vector search and returns only authorized documents
  5. LLM generates a response grounded exclusively in the user's permitted data
  6. Agent returns the answer with full confidence that no unauthorized information was accessed
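
To make the flow concrete, the sketch below walks these steps in plain Python. The Agent Loop performs this declaratively inside the workflow; the header and field names follow this article, while the endpoint, index, and answer_with_llm helper are hypothetical:

    from azure.core.credentials import AzureKeyCredential
    from azure.search.documents import SearchClient

    search_client = SearchClient(
        endpoint="https://<your-service>.search.windows.net",
        index_name="hr-knowledge",
        credential=AzureKeyCredential("<query-key>"),
    )

    def answer_with_llm(question: str, context: str) -> str:
        # Hypothetical helper: call your model of choice, grounding the
        # answer exclusively in the retrieved, authorized context
        ...

    def handle_agent_request(headers: dict, question: str) -> str:
        # Steps 1-2: the authenticated caller's identity arrives in headers
        principal_id = headers["X-MS-CLIENT-PRINCIPAL-ID"]

        # Steps 3-4: the security filter is applied before retrieval, so
        # only authorized documents ever come back
        results = search_client.search(
            search_text=question,
            filter=f"UserIds/any(u: u eq '{principal_id}')",
            top=5,
        )
        context = "\n".join(doc["content"] for doc in results)

        # Steps 5-6: the LLM sees only permitted data
        return answer_with_llm(question, context)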

Example: HR Knowledge Assistant

Imagine an HR AI agent built with Agent Loop that helps employees find information about benefits, policies, and procedures:

  • Executive team members can ask about confidential compensation strategies and merger discussions
  • People managers can inquire about performance review guidelines and team-specific policies
  • All employees can access general benefits information and company-wide policies

With the Agent Loop + AI Search integration, the same AI agent serves all these user types securely—automatically filtering knowledge retrieval based on each user's permissions. No separate agents, no custom code, no security gaps.

The Bottom Line

The integration of Agent Loop with Azure AI Search's ACL support transforms how organizations build secure AI agents. What once required complex custom security implementations now works through simple configuration in Logic Apps workflows.

By combining conversational AI capabilities with document-level access control, this solution enables organizations to deploy AI agents that users can trust—knowing every response respects their permissions and organizational security policies.

For developers, this means faster time-to-market for AI agent applications. For security teams, it means enforceable, auditable access controls. For end users, it means confident interaction with AI systems that understand boundaries.

Learn More

For a step-by-step guide on setting up security filters, indexing documents, and configuring your Logic App workflow, visit the full tutorial here:

Add security filters for agent knowledge trimming

For more information about document-access level control, refer to:

https://learn.microsoft.com/en-us/azure/search/search-document-level-access-overview
