Best practices to architect secure generative AI applications
Published May 01 2024

As development of applications powered by advanced generative AI (Gen AI) tools surges, offering unprecedented capabilities in processing and generating human-like content, so do security and privacy concerns. One of the biggest security risks is the exploitation of these tools to leak sensitive data or perform unauthorized actions. A critical aspect that must be addressed in your application is the prevention of information leaks and unauthorized API access due to weaknesses in your Gen AI app.

This blog post delves into best practices for securely architecting Gen AI applications, ensuring they operate within the bounds of authorized access and maintain the integrity and confidentiality of sensitive data.


Understanding the risks

Gen AI applications inherently require access to diverse data sets to process requests and generate responses. This access requirement spans from generally accessible to highly sensitive data, contingent on the application's purpose and scope. Without careful architectural planning, these applications could inadvertently facilitate unauthorized access to confidential information or privileged operations. The primary risks involve:

  • Information Leaks: Unauthorized access to sensitive data through the exploitation of the application's features.
  • Escalated Privileges: Unauthorized elevated access, enabling attackers or unauthorized users to perform actions beyond their standard permissions by assuming the Gen AI application's identity.

Mitigating these risks necessitates a security-first mindset in the design and deployment of Gen AI-based applications.


Best practices for granting permissions

Limit Application Permissions

Developers should operate under the assumption that any data or functionality accessible to the application can potentially be exploited by users through carefully crafted prompts. This includes reading fine-tuning data or grounding data and performing API invocations. Recognizing this, it is crucial to meticulously manage permissions and access controls around the Gen AI application, ensuring that only authorized actions are possible.

A fundamental design principle involves strictly limiting application permissions to data and APIs. Applications should not inherently access segregated data or execute sensitive operations. By constraining application capabilities, developers can markedly decrease the risk of unintended information disclosure or unauthorized activities. Instead of granting broad permissions to applications, developers should rely on the user's identity for data access and operations.


Utilizing User Identity for Data Access and Operations

Access to sensitive data and the execution of privileged operations should always occur under the user's identity, not the application's. This strategy ensures the application operates strictly within the user's authorization scope. By integrating existing authentication and authorization mechanisms, applications can securely access data and execute operations without increasing the attack surface.
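On the Microsoft identity platform, one way to realize this is the OAuth 2.0 on-behalf-of (OBO) flow, where the application exchanges the token the user presented for a downstream token that carries the user's permissions rather than the application's. Here is a minimal sketch using the MSAL library for Python; the tenant, client ID, secret, and Graph scope are placeholder assumptions, not values from this post.

```python
# A minimal sketch of the OAuth 2.0 on-behalf-of (OBO) flow using MSAL for
# Python. The tenant, client ID, client secret, and scope are placeholders.
import msal

confidential_app = msal.ConfidentialClientApplication(
    client_id="<CLIENT_ID>",
    client_credential="<CLIENT_SECRET>",
    authority="https://login.microsoftonline.com/<TENANT_ID>",
)

def get_user_scoped_token(incoming_user_token: str) -> str:
    """Exchange the token the user presented to our app for a downstream
    token that carries the user's permissions, not the application's."""
    result = confidential_app.acquire_token_on_behalf_of(
        user_assertion=incoming_user_token,
        scopes=["https://graph.microsoft.com/Mail.Read"],
    )
    if "access_token" not in result:
        # The exchange failed (e.g., expired token or missing consent).
        raise PermissionError(result.get("error_description", "OBO exchange failed"))
    return result["access_token"]
```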


Examples of insecure practices

Here are a few examples of practices that can lead to a data breach:

  1. Placing sensitive data in training files used for fine-tuning models, as such data could later be extracted through sophisticated prompts.
  2. Using the application identity to access segregated grounding data found in vector databases, APIs, files, or any other sources. This practice should be limited to data that should be available to all application users, since any user with access to the application can craft prompts to extract such information.
  3. Granting the application identity permissions to perform segregated operations, like reading or sending emails on behalf of users, reading or writing to an HR database, or modifying application configurations. Calling a segregated API without verifying the user's permissions can lead to security or privacy incidents.

To mitigate risk, always explicitly verify the end user's permissions when reading data or acting on their behalf. For example, in scenarios that require data from a sensitive source, like user emails or an HR database, the application should employ the user's identity for authorization, ensuring that users see only data they are authorized to view, as in the sketch below.
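For instance, here is a minimal sketch that reads the caller's mailbox through Microsoft Graph with the user-scoped token obtained via the OBO exchange above, so the mail service itself enforces the user's authorization; the helper name and page size are illustrative.

```python
# A minimal sketch: reading the caller's inbox with the *user's* token, so
# Microsoft Graph enforces the user's own permissions. Reuses the
# get_user_scoped_token helper sketched earlier; the page size is arbitrary.
import requests

def read_user_inbox(incoming_user_token: str) -> list[dict]:
    user_token = get_user_scoped_token(incoming_user_token)
    response = requests.get(
        "https://graph.microsoft.com/v1.0/me/messages?$top=10",
        headers={"Authorization": f"Bearer {user_token}"},
        timeout=30,
    )
    # A 401/403 here means the user is not authorized: the request fails
    # instead of silently falling back to the application's privileges.
    response.raise_for_status()
    return response.json()["value"]
```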


Applying best practices

In the diagram below, we see an application that uses its own application identity to access resources and perform operations. Users' credentials are not checked on API calls or data access. This creates a security risk: users without permissions can, by sending the "right" prompt, perform API operations or gain access to data they should not otherwise be allowed to see.


Figure 1: By sending the "right" prompt, users without permissions can perform API operations or access data they should not otherwise be allowed to see.


By explicitly validating user permissions to APIs and data using OAuth, you can remove those risks. A good approach is to leverage libraries like Semantic Kernel or LangChain. These libraries enable developers to define "tools" or "skills" as functions the Gen AI app can choose to invoke for retrieving additional data or executing actions. Such tools can use OAuth to authenticate on behalf of the end user, mitigating security risks while enabling applications to process user files intelligently. In the example below, we remove sensitive data from fine-tuning and static grounding data. All sensitive data and segregated APIs are accessed through a LangChain/Semantic Kernel tool, which passes the OAuth token for explicit validation of the user's permissions.


Figure 2: Explicitly validating user permissions to APIs and data using OAuth can help remove potential risks.
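To make this concrete, here is a minimal sketch of such a tool using LangChain's @tool decorator. The HR endpoint (hr.example.com), the token plumbing, and the agent wiring are illustrative assumptions, not a prescribed implementation.

```python
# A minimal sketch of a LangChain tool that calls a hypothetical HR API with
# the end user's OAuth token, so the API validates the user's permissions.
# The hr.example.com endpoint and token plumbing are illustrative assumptions.
import requests
from langchain_core.tools import tool

def make_hr_lookup_tool(user_token: str):
    @tool
    def lookup_employee(employee_id: str) -> str:
        """Look up an employee record the current user is allowed to see."""
        response = requests.get(
            f"https://hr.example.com/api/employees/{employee_id}",
            headers={"Authorization": f"Bearer {user_token}"},
            timeout=30,
        )
        if response.status_code in (401, 403):
            return "Access denied: the user is not authorized to view this record."
        response.raise_for_status()
        return response.text

    return lookup_employee

# The tool is built per request, binding the caller's token in a closure,
# and then registered with the agent, e.g.:
#   agent = create_react_agent(llm, tools=[make_hr_lookup_tool(request_token)])
```

Because the tool is constructed per request with the caller's token captured in a closure, the model can never invoke the HR API with more privilege than the signed-in user holds.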


Using Microsoft Azure AI Search for grounding

As an alternative, Microsoft provides an out-of-the-box solution for user authorization when accessing grounding data, leveraging Azure AI Search. You are invited to learn more about using your data with Azure OpenAI securely.
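For illustration, a common security-trimming pattern with Azure AI Search stores the set of groups allowed to read each document in a filterable collection field and filters every query by the caller's group memberships. The sketch below assumes a field named group_ids, an index named grounding-docs, and that the user's groups have already been resolved from their validated token.

```python
# A minimal sketch of security trimming in Azure AI Search. It assumes each
# indexed document has a filterable "group_ids" collection field listing the
# groups allowed to see it, and that the caller's group memberships were
# already resolved from their validated token.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

search_client = SearchClient(
    endpoint="https://<service>.search.windows.net",
    index_name="grounding-docs",
    credential=AzureKeyCredential("<QUERY_KEY>"),
)

def search_as_user(query: str, user_group_ids: list[str]):
    groups = ", ".join(user_group_ids)
    # Only return documents whose group_ids field intersects the user's groups.
    return search_client.search(
        search_text=query,
        filter=f"group_ids/any(g: search.in(g, '{groups}'))",
    )
```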


Conclusion

The integration of Gen AI into applications offers transformative potential, but it also introduces new challenges in ensuring the security and privacy of sensitive data. By adhering to the baseline best practices outlined above, developers can architect Gen AI-based applications that not only leverage the power of AI but do so in a manner that prioritizes security.


Roee Oz, Architect, Microsoft Defender for Cloud
