Build RAG Chat App using Azure Cosmos DB for MongoDB vCore and Azure OpenAI: Step-by-Step Guide

Introduction

Retrieval Augmented Generation (RAG) is a pattern in which a chat application retrieves a relevant subset of data from your data store and supplies it to the large language model as context, so the model answers a user's prompt with specific, grounded knowledge about your use case. Azure Cosmos DB for MongoDB vCore is one of the few Azure databases with built-in vector search at any scale, so you can store your semi-structured data and query it in the same place, with the speed and scalability Azure Cosmos DB provides. You no longer need to store your data in one service and search over it in another: Azure Cosmos DB was built for AI-driven applications to do everything you need in one place.

test-after.png

 

Scenario

Imagine you are experimenting with Azure OpenAI large language models to develop your company's RAG chat application. You want to integrate cutting-edge LLM technology into your apps quickly and easily. You've heard that the Semantic Kernel lets you orchestrate an AI-driven chat flow in a few simple steps, and you've built a powerful chat application with it, but now you want to ground it in your company's data. You may have heard about RAG and about generating vector embeddings: arrays of numbers whose similarity you can compare, so you can hand the LLM a specific source of information to answer from. You've also heard that not every database supports vector search, so you need a way around that; luckily, Azure Cosmos DB for MongoDB vCore is one of the few databases with built-in vector search. You decide to convert your company's data to JSON, store it in the database, and deploy a simple chat application on Azure App Service so you can compare plain vector search with the full RAG flow and share it with others to test.
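
To make the embedding-similarity idea concrete, here is a small, self-contained Python sketch (illustrative only, not part of the sample app) that compares two embedding vectors with cosine similarity:

import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Return the cosine similarity between two embedding vectors (closer to 1.0 means more similar)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings"; real text-embedding-ada-002 vectors have 1536 dimensions.
document_vector = [0.10, 0.30, 0.50]
query_vector = [0.12, 0.28, 0.55]
print(cosine_similarity(document_vector, query_vector))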

 

What will you learn?

In this blog, you'll learn to:

  • Create an Azure Cosmos DB for MongoDB vCore cluster.
  • Create an embedding and a chat model deployment in Azure OpenAI Studio.
  • Create an Azure App Service website with continuous deployment from GitHub.
  • Configure Azure App Service application settings to enable communication between Azure resources.
  • Configure the GitHub Actions workflow to deploy the application successfully.

 

What is the main objective?

Build a RAG chat web application using the Semantic Kernel, Azure OpenAI, and Azure Cosmos DB for MongoDB vCore.

architecture-thumbnail (1).png

 

Prerequisites

  • An Azure subscription.
  • Access to Azure OpenAI in the desired Azure subscription.
  • A GitHub Account.

 

Summary of the steps:

Step 1: Create an Azure Cosmos DB for MongoDB vCore Cluster

Step 2: Create an Azure OpenAI resource and Deploy chat and embedding Models

Step 3: Create an Azure App Service and Deploy the RAG Chat Application

 

Step 1: Create an Azure Cosmos DB for MongoDB vCore Cluster

In this step, you'll:

  • Open the Azure Portal.
  • Create an Azure Cosmos DB for MongoDB vCore Cluster.

 

Open the Azure Portal

 

1. Visit the Azure Portal https://portal.azure.com in your browser and sign in.

JohnAziz_0-1710266511135.png

 

Now you are inside the Azure portal!

JohnAziz_1-1710266511465.png

 

Create a new Azure Cosmos DB for MongoDB vCore Cluster

In this step, you create an Azure Cosmos DB for MongoDB vCore cluster to store your data and vector embeddings and to perform vector search over them.
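
For context, once the cluster is provisioned and contains documents with embeddings (you'll load data later in this guide), a vector index and a vector search query look roughly like the following pymongo sketch. The connection string, database, collection, and the contentVector field name are placeholders, not values taken from the sample app:

import pymongo

# Placeholder connection string; you'll copy the real one from the portal later in this guide.
client = pymongo.MongoClient("<your-connection-string>")
collection = client["<databaseName>"]["<collectionName>"]

# Create an IVF vector index on the field that stores the embeddings
# (1536 dimensions for text-embedding-ada-002).
client["<databaseName>"].command({
    "createIndexes": "<collectionName>",
    "indexes": [{
        "name": "VectorSearchIndex",
        "key": {"contentVector": "cosmosSearch"},
        "cosmosSearchOptions": {
            "kind": "vector-ivf",
            "numLists": 1,
            "similarity": "COS",
            "dimensions": 1536,
        },
    }],
})

# Retrieve the documents whose embeddings are closest to a query embedding.
results = collection.aggregate([{
    "$search": {
        "cosmosSearch": {"vector": [0.01] * 1536, "path": "contentVector", "k": 3},
        "returnStoredSource": True,
    }
}])
for doc in results:
    print(doc.get("content"))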

 

1. Type mongodb vcore in the search bar at the top of the portal page and select Azure Cosmos DB for MongoDB (vCore) from the available options.

search-mongo-db-vcore.png

 

2. Select Create from the toolbar to start provisioning your new cluster.

select-create-mongo-db-vcore.png

 

3. Add the following information to create a resource:

  • Subscription: Use your preferred subscription. It's advised to use the same subscription across all the resources that communicate with each other on Azure.
  • Resource group: Select Create new to create a new resource group, and enter a unique name for it.
  • Cluster name: Enter a globally unique name.
  • Location: Select a region close to you for the best response time. For example, select UK South.
  • MongoDB version: Select the latest available version of MongoDB.


create-mongo-db-vcore-basic.png

 

4. Select Configure to configure your cluster tier.

 

5. Add the following information to configure the cluster tier. You can scale it up later:

  • Cluster tier: Select M25 tier, 2 (Burstable) vCores.
  • Storage: Select 32 GiB.

6. Select Save.

configure-mongodb-cluster.png

 

7. Enter the cluster Admin Username and Password and store them in a secure location.

 

8. Select Next to configure the networking settings.

add-auth-mongodb-vcore.png

 

9. Select Allow public access from Azure services and resources within Azure to this cluster.

10. Select Add current IP address to the firewall rules to allow local access to the cluster.

 

11. Select Review + create.

cosmos-mongo-db-network.png

 

12. Confirm your configuration settings and select Create to start provisioning the resource.

 

Note: The cluster creation can take up to 10 minutes. It's recommended to move on with the rest of the steps and get back to it later.

 

Step 2: Create an Azure OpenAI resource and Deploy chat and embedding Models

In this step, you'll:

  • Create an Azure OpenAI resource.
  • Create chat and embedding model deployments.

 

Create an Azure OpenAI resource

In this step, you create an Azure OpenAI Service resource that enables you to interact with different large language models (LLMs).

 

1. Type openai in the search bar at the top of the portal page and select Azure OpenAI from the available options.

search-openai.png

 

2. Select Create from the toolbar to provision a new OpenAI resource.

select-create-openai.png

 

3. Add the following information to create a resource:

  • Subscription: Use the same subscription you used to apply for Azure OpenAI access.
  • Resource group: Use the resource group you created in the previous step.
  • Region: Select a region close to you for the best response time. For example, select UK South.
  • Name: Enter a globally unique name.
  • Pricing tier: Select S0. Currently, this is the only available pricing tier.

 

create-openai-basic-conf.png

 

4. Now that the basic information is added, select Next to confirm your details and proceed to the next page.

5. Select Next to confirm your network details.

6. Select Next to confirm your tag details.

 

7. Confirm your configuration settings and select Create to start provisioning the resource. Wait for the deployment to finish.

 

8. After the deployment finishes, select Go to resource to inspect your created resource. Here, you can manage your resource and find important information like the endpoint URL and API keys.

select-go-to-resource-openai.png

 

 

Create chat and embedding model deployments

In this step, you create an Azure OpenAI embedding model deployment and a chat model deployment. Creating deployments on the resource you just provisioned lets you generate text embeddings (numerical representations of text) and have a natural-language conversation with your data. A short code sketch at the end of this step shows how these deployments are called.

 

1. Select Go to Azure OpenAI Studio from the toolbar to open the studio.

select-go-to-openai-studio.png

 

2. Select Create new deployment to go to the deployments tab.

select-create-new-deployment-tab.png

 

3. Select + Create new deployment from the toolbar. A Deploy model window opens.

select-create-new-deployment.png

 

4. Add the following information to create a chat model deployment:

  • Select a model: Select gpt-35-turbo.
  • Model version: Select 0301.
  • Deployment name: Add a name that's unique for this cloud instance. For example, chat-model, because this model type is optimized for having conversations.

 

5. Select Create.

chat-model-deployment.png

 

6. Select + Create new deployment from the toolbar. A Deploy model window opens.

 

7. Add the following information to create an embedding model deployment:

  • Select a model: Select text-embedding-ada-002.
  • Model version: Select 2.
  • Deployment name: Add a name that's unique for this cloud instance. For example, embedding-model, because this model type is optimized for creating embeddings.

 

8. Select Create.

embedding-model-deployment.png
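
Once both deployments exist, your code calls them by deployment name rather than by model name. Here is a minimal sketch using the openai Python package (v1.x); the endpoint, key, and the deployment names chat-model and embedding-model are the example values used in this guide, so substitute your own:

from openai import AzureOpenAI

# Endpoint and key come from the Azure OpenAI resource's "Keys and Endpoint" page.
client = AzureOpenAI(
    azure_endpoint="https://<azureOpenAiResourceName>.openai.azure.com/",
    api_key="<azureOpenAiResourceKey>",
    api_version="2024-02-01",
)

# Generate an embedding (a list of 1536 floats for text-embedding-ada-002).
embedding = client.embeddings.create(
    model="embedding-model",  # the deployment name, not the underlying model name
    input="What is Azure Functions?",
).data[0].embedding

# Ask the chat deployment a question.
chat = client.chat.completions.create(
    model="chat-model",  # the chat deployment name
    messages=[{"role": "user", "content": "What is Azure Functions?"}],
)
print(len(embedding), chat.choices[0].message.content)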

 

 

Step 3: Create an Azure App Service and Deploy the RAG Chat Application

In this step, you'll:

  • Fork the sample Repository on GitHub.
  • Create an Azure App Service resource with a deployment from GitHub.
  • Modify the Azure App Service application settings in the Azure portal.
  • Configure the GitHub workflow to deploy your application.
  • Test the website before and after adding the data.

 

Fork the sample Repository on GitHub

In this step, you create a copy of the source code under your GitHub account so you can edit it and use it later.

 

1. Visit the sample repository github.com/john0isaac/rag-semantic-kernel-mongodb-vcore in your browser and sign in to GitHub.

github-repo-screenshot.png

 

2. Select Fork from the top of the sample page.

 

3. Select an owner for the fork, then select Create fork.

github-fork.png

 

Create an Azure App Service resource with a deployment from GitHub

In this step, you create an Azure App Service resource and connect it to your GitHub account to deploy a Python application.

 

1. Type app service in the search bar at the top of the portal page and select App Services from the available options.

search-app-service.png

 

2. Select Create Web App from the toolbar to start provisioning a new web application.

select-create-app-service.png

 

3.  Add the following information to fill in the basic configuration of the application:

  • Subscription: Use the same subscription you used to apply for Azure OpenAI access.
  • Resource group: Use the same resource group you created before.
  • Name: Enter a unique name for your website. For example, rag-mongodb-demo.
  • Publish: Select Code. This option specifies whether your deployment consists of code or a container.
  • Runtime stack: Select Python 3.10.
  • Operating System: Select Linux.
  • Region: Select UK South. This is the region where the rest of the resources you created reside.

 

basic-config-app-service.png

 

4. Add the following information to create the app service plan. You can scale it up later:

  • Linux Plan: Select a pre-existing plan or create a new plan.
  • Pricing Plan: Select Basic B1.

 

basic-config-app-service-plan.png

 

5. Select Deployment from the toolbar to move to the deployment configuration tab.

 

6. Add the following information to enable continuous deployment from GitHub:

  • Continuous deployment: Select Enable.
  • GitHub account: Select your GitHub account.
  • Organization: Select your organization. If you are using your personal account, select it.
  • Repository: Select rag-semantic-kernel-mongodb-vcore.
  • Branch: Select main.

continuous-deployment-github.png

 

7. Select Review + create.

 

8. Confirm your configuration settings and select Create to start provisioning the resource. Wait for the deployment to finish.

 

9. After the deployment finishes, select Go to resource to inspect your created resource. Here, you can manage your resource and find important information like the application settings and logs.

select-go-to-resource-app.png

 

Modify the Azure App Service application settings in the Azure portal

In this step, you configure the application settings so the website can communicate with the other cloud resources.
 
1. In the Web App resource, select Configuration from the left side menu.
application-configuration-settings.png
 
2. Select + New application setting to add new environment variables to the web app configuration.
 
3. Add the following names and values one by one and select OK. Make sure to use your own values.
These application settings are for the Azure OpenAI resource you created:
  • AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: <chatModelDeploymentName>
  • AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: <embeddingModelDeploymentName>
  • AZURE_OPENAI_DEPLOYMENT_NAME: <azureOpenAiResourceName>
  • AZURE_OPENAI_ENDPOINT: https://<azureOpenAiResourceName>.openai.azure.com/
  • AZURE_OPENAI_API_KEY: <azureOpenAiResourceKey>
 
You can get the Azure OpenAI key from the Azure OpenAI resource page.
Select Keys and Endpoint and copy any of the available keys.
openai-keys.png

 

These application settings are for Azure Cosmos DB for MongoDB vCore:
  • AZCOSMOS_API: mongo-vcore
  • AZCOSMOS_CONNSTR: mongodb+srv://<mongoAdminUser>:<mongoAdminPassword>@<mongoClusterName>.global.mongocluster.cosmos.azure.com/?tls=true&authMechanism=SCRAM-SHA-256&retrywrites=false&maxIdleTimeMS=120000
 
You can get the Cosmos DB connection string from the Azure Cosmos DB for MongoDB vCore resource page.
Select Connection strings and copy the connection string. Make sure to replace the <user> and <password> placeholders with the admin username and password you created.
connection-string-cosmos.png
 
These application settings name the database and collection, which are created automatically when the application starts:
  • AZCOSMOS_DATABASE_NAME: <cosmosDatabaseName>
  • AZCOSMOS_CONTAINER_NAME: <cosmosContainerName>

 

Any valid names work here.
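
App Service exposes each application setting to your code as an environment variable. As a rough sketch (assuming plain os.environ access, not the sample app's exact code), the application can read the settings you just added and check connectivity like this:

import os
import pymongo

# Azure OpenAI settings configured above.
chat_deployment = os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT_NAME"]
embeddings_deployment = os.environ["AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME"]
openai_endpoint = os.environ["AZURE_OPENAI_ENDPOINT"]
openai_api_key = os.environ["AZURE_OPENAI_API_KEY"]

# Azure Cosmos DB for MongoDB vCore settings.
cosmos_connstr = os.environ["AZCOSMOS_CONNSTR"]
database_name = os.environ["AZCOSMOS_DATABASE_NAME"]
collection_name = os.environ["AZCOSMOS_CONTAINER_NAME"]

# Quick connectivity check against the cluster.
pymongo.MongoClient(cosmos_connstr).admin.command("ping")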

 

4. Select Save.

configuration-settings-save.png

 

5. Select General settings to edit the application startup command.

select-general-settings.png

 

6. Type entrypoint.sh in the startup command field and select Save.

add-startup-command.png

 

Configure the Workflow to deploy your application from GitHub

In this step, you modify the GitHub deployment workflow to point to the folder that contains the application.
 
1. Visit your forked repository on GitHub and notice the failing workflow.
 
2. Open the deployment workflow file .github/workflows/main_ragmongodbdemo.yml.
workflow-file.png

 

3. Select the pen icon to edit the file.

select-pen-edit.png

4. Modify lines 31 and 36 to the following:

31 run: cd src && pip install -r ./requirements.txt
36 run: cd src && zip ../release.zip ./* -r


modify-workflow-lines.png

 

5. Select Commit changes, review your commit message and description, then select Commit changes again.

commit-changeds.png

 

6. Select Actions to review the workflow run status.

 

select-actions.png

 

Test the website before and after adding the data

In this step, you test the application before adding the data, add the data, and test again.
 
1. Select the workflow name to open it and get the website URL.
select-website-url.png

 

2. Type the chat message What is Azure Functions? The app should respond with I don't know because no data has been added yet.

test-before.png

 

3. Navigate to your Azure App Service resource page and select SSH.

 

ssh-page.png

 

4. Select Go to open a new SSH page.

select-go-shh.png

 

5. In the SSH terminal, run this command:

python ./scripts/add_data.py

 

run-add-data-command.png
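
The add_data.py script handles loading the sample data for you. Conceptually, loading data means embedding each JSON record and inserting it with its vector; the following is a simplified, hypothetical sketch (the data.json file name and the contentVector field are illustrative, not the actual script):

import json
import os

import pymongo
from openai import AzureOpenAI

openai_client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)
collection = pymongo.MongoClient(os.environ["AZCOSMOS_CONNSTR"])[
    os.environ["AZCOSMOS_DATABASE_NAME"]][os.environ["AZCOSMOS_CONTAINER_NAME"]]

# Hypothetical data file with records like {"title": ..., "content": ...}.
with open("data.json") as f:
    items = json.load(f)

for item in items:
    # Embed the record's text and store the vector alongside the document.
    item["contentVector"] = openai_client.embeddings.create(
        model=os.environ["AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME"],
        input=item["content"],
    ).data[0].embedding
    collection.insert_one(item)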

 

6. Navigate back to the live website and type the chat message What is Azure Functions? again. It should now respond with the correct answer.

test-after.png

 

Congratulations! You have successfully built the full application.

 

If you want to learn how to add your own data, see the guide in the repository's main README.

 

Clean Up

Once you finish experimenting on Microsoft Azure, you might want to delete the resources so they don't keep consuming money from your subscription.

You can delete the resource group, which deletes everything inside it, or delete the resources one by one; that's entirely up to you.

 

Conclusion

Congratulations! You've learned how to create an Azure Cosmos DB for MongoDB vCore cluster, create an Azure OpenAI resource, deploy an embedding model and a chat model from Azure OpenAI Studio, create an Azure App Service with continuous deployment from GitHub, and modify application settings to enable communication across Azure resources. With these technologies, you can build a RAG chat application, with the option to run vector search over your own data, and provide grounded (relevant) responses.

 

Next steps

 

Documentation

 

Training Content

 

Found this useful? Share it with others and follow me to get updates.

Feel free to share your comments and/or inquiries in the comment section below.

See you in future demos!
