
Azure AI Foundry Blog
8 MIN READ

RAG Time Journey 5: Enterprise-ready RAG

Angie-Silva-Pereyra
Apr 02, 2025

Introduction

Congratulations on making it this far and welcome to RAG Time Journey 5! This is the next step in our multi-format educational series on all things Retrieval Augmented Generation (RAG). Here we will explore how Azure AI Search integrates security measures while following safe AI principles to ensure secure RAG solutions.

 

Explore additional posts in our RAG Time series: Journey 1, Journey 2, Journey 3, Journey 4.

 

The development of AI and RAG is leading many companies to incorporate AI-driven solutions into their operations. This transition highlights the importance of embracing best practices for enterprise readiness to ensure long-term success.

 

But what is enterprise readiness?

Enterprise readiness is the state of being prepared to develop and manage a service, application, or product securely within an enterprise environment.

Let’s discover how Azure AI Search empowers you to build and maintain secure applications aligned with your enterprise security and compliance standards.

 

Building the most secure and robust Azure AI Search service

Security and compliance are areas where we should not cut corners when implementing solutions, which is why Microsoft is investing USD $20 billion in cybersecurity over five years and employs more than 8,500 security and threat intelligence experts across 77 countries to stay at the forefront. But how does this benefit your RAG solutions using Azure AI Search? Let’s find out.

Security

Security in RAG with Azure AI Search is essential to protect sensitive data during retrieval and response generation. This includes preventing unauthorized access or misuse and maintaining the confidentiality and integrity of enterprise information.

 

  • Secured by Design:

According to Charlie Bell, Executive Vice President of Security at Microsoft: "Microsoft runs on trust, and trust must be earned and maintained." In line with this, Azure AI Search is a state-of-the-art secured service that incorporates the security requirements of Microsoft's Secure Future Initiative (SFI) as a fundamental principle across its entire search infrastructure and features: indexing, data retrieval, use of search results, and monitoring.

 

  • Designed to Secure Your Data Flow

Azure AI Search is hosted on Azure and accessed over public networks by client applications. For a secure experience, it's crucial that you fully understand the entry points and outbound traffic flows in your RAG solution.

 

Evaluate access risks

Start by identifying who or what can access, execute, create, read, update, or delete components or data in your search service (inbound traffic). Using the Azure AI Search REST APIs, you can review all inbound requests handled by the service.

Next, identify outbound requests generated by a search service to other applications. These requests are typically made by indexers for AI enrichment through custom skills and by vectorization during query execution. If you want to explore further, a comprehensive list of operations that generate outbound requests can be found in Security overview - Azure AI Search | Microsoft Learn.

 

Mitigate access risks

Azure AI Search offers a variety of solutions to fit your needs. You can authenticate inbound requests using RBAC (Role-based Access Control) with Microsoft Entra identities or key-based authentication and enhance security by incorporating network security features to restrict endpoint access.

If you want to go beyond that and filter by IP, you can further limit access by configuring firewall rules in the portal or using the IpRule parameter to allow access from specific IP addresses or ranges. This ensures that only authorized IPs can reach the service, adding an extra layer of security to your data flow.
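To make the effect of an IP allow-list concrete, here is a minimal sketch using Python's standard `ipaddress` module. Note that this is only an illustration of the concept: the real enforcement happens inside the service's firewall, not in your client code, and the ranges below are example values.

```python
import ipaddress

# Hypothetical allow-list mirroring what you'd configure via IpRule;
# both ranges are documentation-reserved example addresses.
ALLOWED_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),    # e.g. an office network
    ipaddress.ip_network("198.51.100.42/32"),  # e.g. a single build agent
]

def is_allowed(client_ip: str) -> bool:
    """Return True if client_ip falls inside any allowed range."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in net for net in ALLOWED_RANGES)

print(is_allowed("203.0.113.17"))  # inside the /24 range -> True
print(is_allowed("192.0.2.9"))     # not in any range -> False
```

The service performs an equivalent check on every inbound request before any authentication logic runs, so blocked IPs never reach your data.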

 

For outbound connections, it is recommended to use either a resource's full-access connection string (which includes a key or a database login) or, if you're using Microsoft Entra ID and role-based access, a managed identity. If you need to reach Azure resources behind a firewall, you can create inbound rules on those resources that admit search service requests, or reach resources protected by Azure Private Link by creating a shared private link for the indexer’s connection.

 

There are a variety of options you could try depending on your scenario.

 

  • Unpacking Authorization and Authentication

While these terms are frequently used interchangeably, they serve distinct roles in securing your data. Think of authentication as the police officer checking your ID to verify who you are before allowing you into a secure building, whereas authorization answers “What are you allowed to do once you are in?” With these two concepts clear, let’s dive deeper into how Azure AI Search applies them.

 

Within Azure AI Search, authorization is divided into service management and content management. Management tasks, such as creating or deleting services, managing API keys, and scaling, fall under service management and are authorized through role-based access control (RBAC) in Microsoft Entra with roles like Owner, Contributor, and Reader.

Content management, on the other hand, determines access to objects within a search service. You can use role-based authorization for read-write permissions, while key-based authorization uses an API key and endpoint combination to control access. Admin keys provide full control, while query keys limit access to read-only query operations.
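The difference between the two key types can be sketched as a simple scope table. This is a conceptual illustration, not the SDK's API, and the operation names are illustrative:

```python
# Conceptual sketch of key-based authorization scopes in Azure AI Search.
# Admin keys cover all content operations; query keys are read-only.
KEY_SCOPES = {
    "admin": {"create_index", "delete_index", "upload_documents", "query"},
    "query": {"query"},  # limited to search/query operations
}

def is_permitted(key_type: str, operation: str) -> bool:
    """Return True if the given key type may perform the operation."""
    return operation in KEY_SCOPES.get(key_type, set())

print(is_permitted("query", "query"))         # True
print(is_permitted("query", "delete_index"))  # False
```

In practice this is why query keys are the ones you embed in client applications: even if leaked, they cannot modify or delete your indexes.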

 

If you need more fine-grained control over your data, you can also simulate document-level security (row-level security) by adding a filterable field to your index that identifies user or group identities, enabling the application to filter content based on the user’s access permissions.
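As a minimal sketch of this security-trimming pattern, the application builds an OData filter over the identity field and passes it with every query. The field name `group_ids` below is an assumption; use whatever filterable collection field your index defines:

```python
def build_security_filter(user_groups: list[str], field: str = "group_ids") -> str:
    """Build an OData filter that keeps only documents whose `field`
    collection contains at least one of the user's group IDs."""
    groups = ", ".join(user_groups)
    return f"{field}/any(g: search.in(g, '{groups}'))"

# A user belonging to the 'hr' and 'finance' groups:
flt = build_security_filter(["hr", "finance"])
print(flt)  # group_ids/any(g: search.in(g, 'hr, finance'))
```

The resulting string would be passed as the `filter` parameter of a search request, so documents the user's groups cannot see are trimmed before results are returned.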

 

I highly recommend reading this blog about Access Control in Generative AI applications with Azure AI Search | Microsoft Community Hub to gain a deeper understanding of the topic.

 

Compliance

Azure has one of the largest compliance certification portfolios in the industry. Enterprise readiness involves robust security and compliance standards to protect sensitive data. Using Azure AI Search helps enterprises maintain trust, reduce risks, and remain prepared for audits.

 

  • Privacy built for your data

When you use any Azure service, you are entrusting us with one of your most valuable assets: your data. With Azure, you are the owner of the data that you provide for storing and hosting in Azure services. We do not share your data with advertiser-supported services, nor do we mine it for purposes like marketing research or advertising. We process your data only with your agreement.

 

When setting up an Azure AI Search service, you select a region within a geography (Geo), which determines where your data will be stored and processed, helping you comply with data residency guidelines. The only data flowing outside the geo boundary is the search service metadata and object names (such as service name, index name, storage name, vector name, and others), which are used for supportability activities.

 

  • Regulatory Compliance controls

Certain regulatory controls must be implemented to ensure alignment with compliance requirements. In Azure AI Search, you can use Azure Policy to enforce Microsoft cloud security benchmark recommendations and address non-compliance findings.

Below are some of the policies and compliance controls used by the product:

 

  • CIS Microsoft Azure Foundations Benchmark 1.3.0, 1.4.0 and 2.0.0

  • CMMC Level 3

  • FedRAMP High and Moderate

  • HIPAA HITRUST 9.2

  • Microsoft cloud security benchmark

 

For further information, please refer to Azure Policy Regulatory Compliance controls for Azure AI Search | Microsoft Learn.

Data protection

In RAG scenarios, data protection is essential due to the specific challenges and complexities of handling sensitive or proprietary information. This is why Azure AI Search protects data both in transit and at rest.

 

  • Secure your Data in Transit

This refers to the data transferred between the Azure AI Search service and external sources (like databases, APIs, or storage) when retrieving information to enhance the model’s generation.

Azure AI Search uses HTTPS port 443 for secure client-to-service connections over the public internet, supporting TLS (Transport Layer Security) 1.2 and 1.3 for encryption. TLS 1.3 is the default on newer systems and .NET versions, while TLS 1.2 is used on older systems, with the option to explicitly set TLS 1.3. For more information, see Security overview - Azure AI Search | Microsoft Learn.
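As a client-side sketch, you can pin the minimum TLS version your application will negotiate using Python's standard `ssl` module, matching the versions the service accepts:

```python
import ssl

# Create a default client context and refuse anything older than TLS 1.2.
# Modern runtimes will still negotiate TLS 1.3 automatically when available.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.minimum_version)  # TLSVersion.TLSv1_2
```

To require TLS 1.3 exclusively, you would instead set `ctx.minimum_version = ssl.TLSVersion.TLSv1_3`; whether that is safe depends on every system in your connection path supporting it.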

 

  • Secure your Data at Rest

Data at rest includes documents, metadata, or other information stored within the Azure AI Search index used to retrieve relevant data to support the model's output.

In Azure AI Search, data encryption is crucial for safeguarding content and definitions. Server-side encryption can be managed either by Microsoft (using built-in keys) or by the customer (using customer-managed keys (CMK)) via Azure Key Vault.

 

CMK provides additional protection by encrypting content twice, with both customer-managed and Microsoft-managed keys. It should be noted that while CMK offers enhanced security, it can also reduce query performance by 30-60% (depending on the index definition and types of queries), so it is recommended for high-security use cases. However, if you prefer basic encryption, you can opt for service-managed keys, which automatically apply 256-bit AES encryption to all content, both long-term and temporary.
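As a sketch of how CMK is wired up, a customer-managed key is referenced from the index definition via an `encryptionKey` section pointing at your Azure Key Vault. The vault, key, and application values below are placeholders, and `accessCredentials` can be omitted when the search service uses a managed identity to reach the vault:

```json
{
  "name": "my-index",
  "fields": [ "..." ],
  "encryptionKey": {
    "keyVaultUri": "https://contoso-kv.vault.azure.net",
    "keyVaultKeyName": "search-cmk",
    "keyVaultKeyVersion": "<key-version>",
    "accessCredentials": {
      "applicationId": "<app-id>",
      "applicationSecret": "<app-secret>"
    }
  }
}
```

Because the key stays in your vault, revoking or rotating it there immediately affects the service's ability to decrypt the index, which is exactly the control high-security scenarios need.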

 

Responsible AI

It is essential to implement strong security measures and ensure that your RAG solution adheres to Responsible AI principles for maximum impact. Azure AI Search focuses on securing data while following best practices for fairness, accountability, and transparency in AI decisions. These practices help you create secure, ethical, and trustworthy AI solutions.

 

From principles to practice

Azure AI Search is built with Responsible AI (RAI) principles at its core, ensuring that every new feature—whether Generative or Non-Generative AI—is designed with security, transparency, and fairness in mind. To empower users and promote trust, Azure AI Search implements safeguards and provides comprehensive transparency notes detailing intended use, capabilities, limitations, and other critical considerations. This commitment to Responsible AI helps organizations confidently deploy AI solutions that align with ethical standards and ensure reliable outcomes.

 

Quick concepts:

Fairness: AI systems should treat all people fairly. How might an AI system allocate opportunities, resources, or information in ways that are fair to the humans who use it?

Accountability: People should be accountable for AI systems. How can we create oversight so that humans can be accountable and in control?

 

Conclusion

Looking back at the strategies and solutions we've discussed, we gain valuable insights into how we can stay ahead of emerging challenges while maintaining the integrity of our RAG data. The journey toward robust security involves continuously adapting to a changing environment and proactively safeguarding sensitive information.

But how can you build a trustworthy, secure, and compliant RAG solution?

Here are some tips to help you get started:

  1. Understand Your Data: Analyze what type of data will be used in your RAG solution so you can apply the proper security measures.
  2. Identify Potential Threats: Assess security risks and vulnerabilities in your AI system.
  3. Consider Legal Requirements: Ensure your solution is compliant with applicable laws and regulations, such as GDPR, to safeguard user privacy and maintain legal standards.
  4. Enable Encryption and Access Controls: Utilize Azure AI Search’s encryption features and role-based access control (RBAC) to protect data in transit and at rest.
  5. Monitor and Improve Continuously: Establish regular monitoring rhythms to assess system performance, detect potential security risks, and identify areas for improvement.
  6. Incorporate User Feedback: Establish feedback loops to gather insights and continuously refine the system, enhancing security, fairness, and overall effectiveness.

Let’s finish with a quick retrospective:

  • What security best practices are you using in your RAG solutions?
  • Are you fully leveraging all the capabilities that Azure AI Search offers to enhance your RAG system?
  • Is your RAG solution prepared to ensure long-term security and compliance?

Remember, ensuring that your AI system is not only powerful and efficient but also trustworthy, resilient, and aligned with industry standards is one of the most critical factors for achieving enterprise readiness.

Next Steps

Ready to explore further?

Have questions, insights, or RAG project experiences to share? Comment below or start a discussion on GitHub - your feedback shapes our future content!

Updated Apr 02, 2025
Version 2.0