Teach ChatGPT to Answer Questions: Using Azure AI Search & Azure OpenAI (Semantic Kernel)
Published Nov 19 2023 12:00 AM
Microsoft


Semantic Kernel vs. LangChain

This tutorial comes in two versions, so readers can explore two different approaches: Semantic Kernel or LangChain.

For those interested, here's the link to the LangChain version of this tutorial: Teach ChatGPT to Answer Questions: Using Azure AI Search & Azure OpenAI (LangChain.ver)

Can't I just copy and paste text from a PDF file to teach ChatGPT?

 

The purpose of this tutorial is to explain how to efficiently extract and use information from large numbers of PDFs. Dealing with a 5-page PDF can be straightforward, but it's a different story when you're dealing with complex documents of 100+ pages. In these situations, integrating Azure AI Search with Azure OpenAI enables fast and accurate information retrieval and processing. In this tutorial we handle 5 PDFs, but the same method scales to more than 10,000 files. In this two-part series, we will explore how to build an intelligent service using Azure. In Series 1, we'll use Azure AI Search to extract key phrases from unstructured data stored in Azure Blob Storage. In Series 2, we'll create a feature to answer questions based on PDF documents using Azure OpenAI. Here is an overview of this tutorial.


00-1-architecture.png

 

 

This tutorial is related to the following topics

 

- AI Engineer
- Developer
- Azure Blob Storage
- Azure AI Search
- Azure OpenAI

- Semantic Kernel


Learning objectives

 

In this tutorial, you'll learn the following:
- How to store your unstructured data in Azure Blob Storage.
- How to create search experiences with Azure AI Search based on data stored in Blob Storage.
- How to teach ChatGPT to answer questions based on your PDF content using Azure AI Search and Azure OpenAI.
 

Prerequisites

 

 

Microsoft Cloud Technologies used in this Tutorial

 

- Azure Blob Storage
- Azure AI Search
- Azure OpenAI Service
 

Table of Contents

 

Series 1: Extract Key Phrases for Search Queries Using Azure AI Search
1. Create a Blob Container
2. Store PDF Documents in Azure Blob Storage
3. Create an AI Search Service
4. Connect to Data from Azure Blob Storage
5. Add Cognitive Skills
6. Customize Target Index and Create an Indexer
7. Extract Key Phrases for Search Queries Using Azure AI Search

Series 2: Implement a ChatGPT Service with Azure OpenAI
1. Change your indexer settings to use Azure OpenAI
2. Create an Azure OpenAI
3. Set up the project and install the libraries
4. Set up the project in VS Code
5. Search with Azure AI Search
6. Get answers from PDF content using Azure OpenAI and AI Search
7. Note: Full code for example.py and config.py
 

Series 1: Extract Key Phrases for Search Queries Using Azure AI Search

In Series 1, we'll use Azure AI Search to extract key phrases from unstructured data stored in Azure Blob Storage.
 
series1-architecture1.png
 
This series is designed to guide you through the essential steps of storing, connecting, and searching data in the cloud.
 
Overview of Series 1
 
1. Store Unstructured Data in the Cloud

We'll begin by exploring how to store unstructured data, such as PDFs, in the cloud. This section covers the basics of uploading data to Azure Blob Storage.

2. Connect Stored Data to Azure AI Search

Once our data is stored in the cloud, the next step is to connect that data to Azure AI Search.

3. Search Stored Data

In this final step of Series 1, we'll set up an index on the connected data and run searches against it in Azure AI Search, so we can efficiently retrieve and use the data stored in Azure Blob Storage.
 

1. Create a Blob Container

 

Azure Blob Storage is a service designed for storing large amounts of unstructured data, such as PDFs.

1. To begin, create an Azure Storage account by typing `storage accounts` in the search bar and selecting Services - Storage accounts.

Minseok_Song_0-1703832568938.png

 


2. Select the +Create button.
 
Minseok_Song_1-1703832568794.png

 



3. Enter the resource group name that will serve as the folder for the storage account, enter the storage account name, and select a region. When you're done, select the Review button.
 
Minseok_Song_2-1703832569423.png

 

 
4. Select the Create button.
 
Minseok_Song_3-1703832568999.png

 

2. Store PDF Documents in Azure Blob Storage

 

1. After your storage account is set up, navigate to Storage Browser by typing `storage browser` in the search bar.
 
Minseok_Song_4-1703832646776.png

 

2. In the Storage Browser, select the blob storage you created.
 
Minseok_Song_5-1703832647383.png

 

3. Add a new container to store PDF documents.
 
- Select your storage account.
- Select the Blob containers button.
- Select the +Add Container button to create a new container.
 
Minseok_Song_6-1703832646945.png

 

4. Once the container is set up, upload your PDFs into this container.

- Select the container you created.
 
Minseok_Song_7-1703832646668.png

 

- Select the Upload button and upload your PDF documents. 
 
Minseok_Song_8-1703832646832.png

 

For the tutorial, I downloaded 5 PDF documents of recent papers on GPT from Microsoft Academic and uploaded them to the container.
 
Minseok_Song_9-1703832647303.png

 

3. Create an AI Search Service

 

1. Type `ai search` into the search bar and select Services – AI Search.
 
Minseok_Song_10-1703832646874.png

2. Select the +Create button.
Minseok_Song_11-1703832646841.png

 

3. Create a new AI Search Service.

- Select your Resource Group.
- Specify the Service name.
- Select your Location.
 

NOTE:

The Azure OpenAI resource is currently available only in limited regions.
If it is not available in your region, I recommend setting your location to East US.

- Choose a Pricing tier that suits your needs; since semantic ranker is available from the basic tier, I recommend setting your Pricing tier to basic for the tutorial.


NOTE:

In this tutorial we will use the Basic tier to explore semantic ranker with Azure AI Search. You can expect a cost of approximately $2.50 per 100 program runs with this tier.

If you plan to use the free tier, please note that the code demonstrated in this tutorial may differ from what you'll need.

(Azure AI Search is billed even when you're not using it. If you're just going through the tutorial for practice, I recommend deleting the Azure AI Search service you created once you've finished the whole tutorial.)


- Select the Review + create button.
 
Minseok_Song_12-1703832646831.png
 
4. Navigate to the AI Search service you created and select Semantic ranker, then select the Free plan. (If you chose the free pricing tier, you can skip this step.)
  Minseok_Song_13-1703832646680.png

 

4. Connect to Data from Azure Blob Storage


1. Navigate to the AI Search service you created and select Import data.
 
Minseok_Song_14-1703832647188.png

 

 

2. Select Azure Blob Storage as the data source and connect it to the Blob Storage where your PDFs are stored.
 
Minseok_Song_15-1703832647201.png

 

 
3. Specify your Data source name.

4. Select Choose an existing connection and select the blob storage container you created.

5. Select Next: Add cognitive skills button.
 
Minseok_Song_16-1703832647194.png

 

5. Add Cognitive Skills


1. Attach AI Services.
- To power your cognitive skills, select an existing AI Services resource or create a new one; the Free resource (default) is sufficient for this tutorial.
 
Minseok_Song_17-1703832647109.png

 


2. Specify the Skillset name.
 
TIP:
If you want to search for text in a photo, you need to check Enable OCR and merge all text into merged_content field. In this tutorial, we will not check it because we will search based on the text in the paper.

3. Select Enrichment granularity level. (In this tutorial, we'll use a page-by-page granularity, so we'll select Pages (5000 characters chunk).)

4. Select Extract Key phrases. (You can select additional checkboxes depending on the type of PDF data.)

5. Select Next: Customize target index button.
 
Minseok_Song_18-1703832646810.png

 

 

NOTE:
Why set the Enrichment granularity level to Pages (5000 characters chunk)?
To get ChatGPT responses based on a PDF, we need to call the GPT-3.5-turbo model through the ChatGPT API. The GPT-3.5-turbo model can handle up to 4096 tokens, including both the text you send as input and the answer the ChatGPT API returns. For this reason, documents that are too long cannot be sent all at once; they must be broken into multiple chunks and processed over multiple calls to the ChatGPT API. (Tokens can be words, punctuation, spaces, etc.)
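To make this concrete, here is a small illustrative snippet (not part of the tutorial's code) showing how a long text breaks into 5000-character pages, the same granularity the indexer uses:

```python
# Illustrative only: mimic the "Pages (5000 characters chunk)" granularity.
def split_into_pages(text, chunk_size=5000):
    """Split text into consecutive chunks of at most chunk_size characters."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

document = 'x' * 12000      # stand-in for the text extracted from a long PDF
pages = split_into_pages(document)
print(len(pages))               # 3
print([len(p) for p in pages])  # [5000, 5000, 2000]
```

Each chunk can then be sent to the ChatGPT API in a separate call.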
 
TIP:

How to keep sensitive data private?

 

To ensure the privacy of sensitive data, Azure Cognitive Search provides a Personally Identifiable Information (PII) detection skill. This cognitive skill is specifically designed to identify and protect PII in your data. To learn more about Azure Cognitive Search's PII detection skill, read the following article.

Personally Identifiable Information (PII) Detection cognitive skill

 

- To enable this feature, select Extract Personally identifiable information.

 

Minseok_Song_19-1703832646708.png

 

 

6. Customize Target Index and Create an Indexer


1. Customize target Index.

- Specify your Index name.
- Check the boxes as shown in the image below.

TIP:

You can change the fields to suit your data. I have attached a document with a description of each field in the index. (Depending on your settings for the index fields, the code you implement may differ from the tutorial.)

Index data from Azure Blob Storage

 
Minseok_Song_20-1703832646910.png

 

 
2. Add a new field.

In this tutorial, we selected the Enrichment granularity level of Pages (5000 characters chunk). So, we need to create a field for searching the pages that were split into 5000-character chunks.

- Select + Add field button.
- Create a field named `pages`.
- Select Collection(Edm.String) as the type for the `pages` field.
- Check the box Retrievable.
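For reference, the portal step above corresponds to an entry roughly like the following in the index definition JSON (a sketch; the attribute defaults the portal emits may differ):

```json
{
  "name": "pages",
  "type": "Collection(Edm.String)",
  "retrievable": true,
  "searchable": false,
  "filterable": false,
  "sortable": false,
  "facetable": false
}
```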
 
Minseok_Song_21-1703832646906.png

 

3. Delete unnecessary fields.
 
Minseok_Song_22-1703832646826.png

 

 
4. Create an Indexer.
 
- Specify your Indexer Name.
- Select the Schedule – Once.
(For data that arrives in real time, you'd need to set up a periodic schedule, but since we're dealing with unchanging PDF data in this tutorial, scheduling it Once is enough.)
- Select the Submit button.
 
Minseok_Song_23-1703832647162.png

 

 

7. Extract Key Phrases for Search Queries Using Azure AI Search

 

1. Once your indexer and index creation are complete, navigate to your AI Search service and select the Indexes page.
 
2. Select the index you created.
 
Minseok_Song_24-1703832646719.png

 

 
3. You can use a query string or simply enter text to perform a search.
Ex) In this tutorial, I entered the following question: `How to prompt GPT to be reliable?`
 
Minseok_Song_25-1703832646956.png

 

4. Set Semantic configurations.
- Semantic configurations are available from the basic price tier onwards. If you chose the free tier, you can skip it.
- Select Semantic configurations, then select + Add semantic configuration.
 
Minseok_Song_26-1703832647081.png

 

- Specify your semantic configuration Name.
- Select the Title field – content.
- Select the Save button.
 
Minseok_Song_27-1703832646849.png

 

- When you've finished setting up your semantic configuration, return and select the Save button.
 
Minseok_Song_28-1703832647302.png

 

We have now finished extracting key phrases based on our question using Azure AI Search.
In the next series, we'll connect this AI Search service with Azure OpenAI to build a ChatGPT that answers questions based on the PDFs stored in Blob Storage.

Series 2: Implement a ChatGPT Service with Azure OpenAI

 

In Series 2, we will use Azure AI Search and Azure OpenAI to answer questions based on PDFs, and implement this feature in code.

 

series2-architecture1.png

 

Intent of the Code Design

The primary goal of the code design in this tutorial is to construct the code in a way that is easy to understand, especially for first-time viewers. Each segment of the code is encapsulated as a separate function. This modularization ensures that the main function acts as a clear, coherent guide through the logic of the system.

 

Ex. Part of the main function. (Semantic Kernel.ver)

 

async def main():
…
    search_results = await search_documents(QUESTION)

    documents = await filter_documents(search_results)
…
    kernel = await create_kernel(sk)

    embeddings = await create_embeddings(kernel)

    memory = await create_vector_store(kernel, embeddings)

    await store_documents(memory, documents)

    related_page = await search_with_vector_store(memory, QUESTION)

    await add_chat_service(kernel)

    answer = await answer_with_sk(kernel, QUESTION, related_page)
…
 

Overview of the code

 

Part 1: Retrieving and Scoring Documents

We'll use Azure AI Search to retrieve documents related to our question, score them for relevance to our question, and extract documents with a certain score or higher.

 

Part 2: Document Embedding and Vector Database Storage

We'll embed the documents we extracted in part 1. We will then store these embedded documents in a vector database, organizing them into pages (chunks of 5,000 characters each).

 

What is Embedding?

What is a Vector Database?
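As a toy illustration of Part 2's idea (this is not the tutorial's code): each document chunk and the question are turned into vectors, and retrieval returns the chunk whose vector is most similar to the question's, typically by cosine similarity. Real embeddings come from the text-embedding-ada-002 model and have 1536 dimensions; here we use made-up 3-dimensional vectors:

```python
import math

# Made-up 3-dimensional "embeddings" for two document chunks and a question.
docs = {
    'prompting_paper': [0.9, 0.1, 0.0],
    'vision_paper':    [0.0, 0.2, 0.9],
}
question = [0.8, 0.2, 0.1]

def cosine_similarity(a, b):
    """Cosine of the angle between vectors a and b (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Pick the document most similar to the question.
best = max(docs, key=lambda name: cosine_similarity(question, docs[name]))
print(best)  # prompting_paper
```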

 

Part 3: Extracting Relevant Content and Implementing a Function to Answer Questions

We will extract the most relevant page from the vector database based on the question.

Then we will implement a function to generate answers from the extracted content.

 

1. Change your indexer settings to use Azure OpenAI

 

1. Navigate to your AI Search service and select the Indexers page.
 
2. Select the indexer you created.
 
Minseok_Song_0-1703833470875.png

 

3. Select the Indexer Definition (JSON)
 
4. In the JSON, modify the "outputFieldMappings" part as shown below.
 
  "outputFieldMappings": [
    {
      "sourceFieldName": "/document/content/pages/*/keyphrases/*",
      "targetFieldName": "keyphrases"
    },
    {
      "sourceFieldName": "/document/content/pages/*",
      "targetFieldName": "pages"
    }
]
 
 
Minseok_Song_1-1703833470607.png

 

5. Select the Save button.
 
NOTE:
You must click the Save button with your mouse. Using the shortcut Ctrl + S doesn't actually save your changes, it just changes the color of the icon.
 
6. Select the Reset button.
 
7. Select the Run button.
 
Minseok_Song_2-1703833470238.png

 

TIP:
Description of "outputFieldMappings"
"outputFieldMappings" are settings that map data processed by the Cognitive Search service to specific fields in the index.
For example, the path "/document/content/pages/*/keyphrases/*" extracts the key phrases from each page and maps them to the "keyphrases" field.
Similarly, for the "pages" field that we created earlier, we need to specify what data will be mapped to it. In this tutorial, we selected the Enrichment granularity level of Pages (5000 characters chunk), so we specify that the 5000-character chunks from "/document/content/pages/*" are mapped to the "pages" field. Adding this JSON mapping lets us send 5000-character chunks to OpenAI instead of entire documents.
 

2. Create an Azure OpenAI


Currently, access to the Azure OpenAI service is granted by request only. You can request access to the Azure OpenAI service by filling out the form at https://aka.ms/oai/access/.

1. Type 'azure openai' in the search bar and select Services - Azure OpenAI.
 
Minseok_Song_3-1703833539385.png

 

2. Select the + Create button.
 
Minseok_Song_4-1703833539267.png

 

3. Fill in Basics.
 
Minseok_Song_5-1703833539919.png

 

NOTE:
The Azure OpenAI resource is currently available only in limited regions.
If it is not available in your region, I recommend setting your location to East US.
 
4. Select a network security Type.
 
Minseok_Song_6-1703833540412.png

 

5. Select the Create button.
 
Minseok_Song_7-1703833539539.png

 

6. Deploy your Azure OpenAI model.
 
- Navigate to your Azure OpenAI, then Select the Go to Azure OpenAI Studio button.
 
Minseok_Song_8-1703833539283.png

 

 
- In Azure OpenAI Studio, select the Deployments button.
 
Minseok_Song_9-1703833540397.png

 

- Select the + Create new deployment button, then create the gpt-35-turbo and text-embedding-ada-002 models.

NOTE:
In this tutorial we will use the gpt-35-turbo and text-embedding-ada-002 models. I recommend using the same name for both the deployment name and the model name.
 
Minseok_Song_10-1703833539395.png

 

3. Set up the project and install the libraries


1. Create a folder where you can work.

- We will create an `azure-proj` folder inside the `User` folder and work inside the `gpt-proj1` folder.
- Open a command prompt window and create a folder named `azure-proj` in the default path.

mkdir azure-proj

- Navigate to the `azure-proj` folder you just created.

cd azure-proj

- In the same way, create a `gpt-proj1` folder inside the `azure-proj` folder, then navigate into it.

mkdir gpt-proj1
cd gpt-proj1
 
 
 LeeStott_36-1700167807537.jpeg

 

2. Create a virtual environment.

- Type the following command to create a virtual environment named `.venv`.

python -m venv .venv

- Once the virtual environment is created, type the following command to activate the virtual environment.

.venv\Scripts\activate.bat

- Once activated, the name of the virtual environment will appear on the far left of the command prompt window. 
LeeStott_37-1700167807538.jpeg

 

3. Install the required packages.
- At the Command prompt, type the following command.

pip install semantic-kernel==0.9.1b1

TIP:
How to use CMD in VS Code
Select TERMINAL at the bottom of VS Code, then select the + button, then select the Command Prompt.
LeeStott_38-1700167807539.jpeg

 

 

4. Set up the project in VS Code


1. In VS Code, select the folder that you have created.

- Open VS Code and select File > Open Folder from the menu. Select the `gpt-proj1` folder that you created earlier, which is located at C:\Users\yourUserName\azure-proj\gpt-proj1.
 
LeeStott_39-1700167807540.jpeg

 

2. Create a new file.

- In the left pane of VS Code, right-click and select 'New File' to create a new file named `example.py`.
 
LeeStott_40-1700167807540.jpeg

 

3. Import the required packages.

- Type the following code into the `example.py` file in VS Code.
 
# Library imports
from collections import OrderedDict
import requests

# Semantic Kernel library imports
import semantic_kernel as sk
import semantic_kernel.connectors.ai.open_ai as sk_oai
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion, AzureTextEmbedding
from semantic_kernel.memory.semantic_text_memory import SemanticTextMemory
from semantic_kernel.core_plugins.text_memory_plugin import TextMemoryPlugin
from semantic_kernel.prompt_template.input_variable import InputVariable
from semantic_kernel.functions.kernel_arguments import KernelArguments

4. Create a configuration file - `config.py`.

NOTE:
Complete folder structure:
└── YourUserName
    └── azure-proj
        └── gpt-proj1
            ├── example.py
            └── config.py

- Create a `config.py` file. This file should contain information about your Azure.
- Add the code below to your `config.py` file.
 
# Azure AI Search service settings
SEARCH_SERVICE_NAME = 'your-search-service-name' # 'teachchatgpt-search'
SEARCH_SERVICE_ENDPOINT = f'https://{SEARCH_SERVICE_NAME.lower()}.search.windows.net/'
SEARCH_SERVICE_KEY = 'your-search-service-key'
SEARCH_SERVICE_API_VERSION = 'your-API-version' # '2023-10-01-Preview'

# Azure AI Search service index settings
SEARCH_SERVICE_INDEX_NAME1 = 'your-search-service-index-name' # 'teachchatgpt-index'

# Azure AI Search service semantic configuration settings
SEARCH_SERVICE_SEMANTIC_CONFIG_NAME = 'your-semantic-configuration-name' # 'teachchatgpt-config'

# Azure OpenAI settings
AZURE_OPENAI_NAME = 'your-openai-name' # 'teachchatgpt-azureopenai'
AZURE_OPENAI_ENDPOINT = f'https://{AZURE_OPENAI_NAME.lower()}.openai.azure.com/'
AZURE_OPENAI_KEY = 'your-openai-key'
AZURE_OPENAI_API_VERSION = 'your-API-version' # '2024-02-15-preview'
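Note that the two endpoint URLs are derived from the resource names by the f-strings above, so you only need to fill in your names, keys, and versions. For example, with a hypothetical service name:

```python
# Hypothetical resource name; the endpoint is derived from it automatically.
SEARCH_SERVICE_NAME = 'TeachChatGPT-Search'
SEARCH_SERVICE_ENDPOINT = f'https://{SEARCH_SERVICE_NAME.lower()}.search.windows.net/'
print(SEARCH_SERVICE_ENDPOINT)  # https://teachchatgpt-search.search.windows.net/
```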
 
5. Fill in your `config.py` file with your Azure information.
 
NOTE:
You'll need to include information about your Azure AI Search Service name, index name, semantic configuration name, key, and API version, and Azure OpenAI name, key, and API version.

TIP:
Find your Azure information
1. Find the Azure AI Search Keys.
 
- Navigate to your AI Search service, then select Keys, then copy and paste your key into the `config.py` file.
 
Minseok_Song_11-1703833914373.png

 

2. Find the Azure AI Search Index name.
 
- Navigate to your AI Search service, then select Indexes, then copy and paste your index name into the `config.py` file.
 
Minseok_Song_12-1703833914250.png

 

3. Find the Azure AI Search Semantic configuration name.
- Navigate to your AI Search service, select Indexes, and then click your index name.
- Select Semantic configurations and copy and paste your Semantic configuration name into the `config.py` file.
 
Minseok_Song_13-1703833914231.png

 

4. Find the Azure OpenAI Keys.
 
- Navigate to your Azure OpenAI, then select Keys and Endpoint, then copy and paste your key into the config.py file
 
Minseok_Song_14-1703833914361.png

 

5. Choose your Azure AI Search API and Azure OpenAI version.
 
- Select your version of the Azure AI Search API and the Azure OpenAI API using the hyperlinks below.
- I have selected the 2023-10-01-Preview version of the Azure AI Search API and the 2024-02-15-preview version of the Azure OpenAI API, matching the examples in `config.py`.
 

5. Search with Azure AI Search

 

In this section, we'll use Azure AI Search within VS Code. We have already installed all the necessary packages in the previous chapter. Now we will focus on how to use Azure AI Search and Azure OpenAI in VS Code.
In Chapters 5 and 6, we'll create functions that use Azure AI Search and Azure OpenAI and use them in `example.py`.
To use Azure AI Search and Azure OpenAI, we need to import the information from Azure that we entered in `config.py` into `example.py` that we created earlier.
All the following code comes from `example.py`.
The full code is provided at the end of the chapter for your convenience.
 
1. Add code to `example.py` that imports the values from `config.py`.
 
# Configuration imports
from config import (
    SEARCH_SERVICE_ENDPOINT,
    SEARCH_SERVICE_KEY,
    SEARCH_SERVICE_API_VERSION,
    SEARCH_SERVICE_INDEX_NAME1,
    SEARCH_SERVICE_SEMANTIC_CONFIG_NAME,
    AZURE_OPENAI_ENDPOINT,
    AZURE_OPENAI_KEY,
    AZURE_OPENAI_API_VERSION,
)

2. Add the Azure AI Search Service header.
 
# Azure AI Search service header settings
HEADERS = {
    'Content-Type': 'application/json',
    'api-key': SEARCH_SERVICE_KEY
}
3. Now, we will create functions related to Azure AI Search and run them from the main function.

- Add the two functions.

 

async def search_documents(question):
    """Search documents using Azure AI Search."""
    # Construct the Azure AI Search service access URL.
    url = (SEARCH_SERVICE_ENDPOINT + 'indexes/' +
                SEARCH_SERVICE_INDEX_NAME1 + '/docs')
    # Create a parameter dictionary.
    params = {
        'api-version': SEARCH_SERVICE_API_VERSION,
        'search': question,
        'select': '*',
        # Extract the top 5 documents from your storage.
        '$top': 5,
        'queryLanguage': 'en-us',
        'queryType': 'semantic',
        'semanticConfiguration': SEARCH_SERVICE_SEMANTIC_CONFIG_NAME,
        '$count': 'true',
        'speller': 'lexicon',
        'answers': 'extractive|count-3',
        'captions': 'extractive|highlight-false'
        }
    # Make a GET request to the Azure AI Search service and store the response in a variable.
    resp = requests.get(url, headers=HEADERS, params=params)
    # Return the JSON response containing the search results.
    search_results = resp.json()

    return search_results
    
async def filter_documents(search_results):
    """Filter documents with a reranker score above a certain threshold."""
    documents = OrderedDict()
    for result in search_results['value']:
        # The '@search.rerankerScore' range is 0 to 4.00, where a higher score indicates a stronger semantic match.
        if result['@search.rerankerScore'] > 0.8:
            documents[result['metadata_storage_path']] = {
                'chunks': result['pages'][:10],
                'captions': result['@search.captions'][:10],
                'score': result['@search.rerankerScore'],
                'file_name': result['metadata_storage_name']
            }

    return documents
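Before wiring everything up, you can sanity-check `filter_documents` against a hand-made response shaped like the service's JSON (a hypothetical mock; real responses carry more fields):

```python
import asyncio
from collections import OrderedDict

# Minimal copy of the tutorial's filter_documents so it can be exercised
# locally, without calling the Azure AI Search service.
async def filter_documents(search_results):
    """Filter documents with a reranker score above a certain threshold."""
    documents = OrderedDict()
    for result in search_results['value']:
        if result['@search.rerankerScore'] > 0.8:
            documents[result['metadata_storage_path']] = {
                'chunks': result['pages'][:10],
                'captions': result['@search.captions'][:10],
                'score': result['@search.rerankerScore'],
                'file_name': result['metadata_storage_name']
            }
    return documents

# Hypothetical mock: one strong semantic match and one weak one.
mock_results = {'value': [
    {'@search.rerankerScore': 2.5, 'metadata_storage_path': 'path/a',
     'pages': ['chunk text'], '@search.captions': [], 'metadata_storage_name': 'a.pdf'},
    {'@search.rerankerScore': 0.3, 'metadata_storage_path': 'path/b',
     'pages': ['chunk text'], '@search.captions': [], 'metadata_storage_name': 'b.pdf'},
]}

documents = asyncio.run(filter_documents(mock_results))
print(list(documents))  # only the strong match passes the 0.8 cut
```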

 

 


4. Now we'll run the code using the above functions in the main function.

- Add the code below.
- When you run it, you'll see the total number of PDFs in the blob storage, the top few documents adopted, and the number of chunks.
- I asked the question 'Tell me about effective prompting strategies' based on the papers I stored in Blob Storage.
- If you want to see the full search results, add `print(search_results)` to your main function.
 
async def main():

    QUESTION = 'Tell me about effective prompting strategies'

    # Search for documents with Azure AI Search.

    search_results = await search_documents(QUESTION)

    documents = await filter_documents(search_results)

    print('Total Documents Found: {}, Top Documents: {}'.format(
        search_results['@odata.count'], len(search_results['value'])))

# Execute the main function.
if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
 

6. Get answers from PDF content using Azure OpenAI and AI Search

 

Now that Azure AI Search is working well in VS Code, it's time to start using Azure OpenAI.
In this chapter, we'll create functions related to Azure OpenAI and ultimately create and run a program in `example.py` that answers a question with Azure OpenAI based on the search information from Azure AI Search.
 

1. We will create functions related to Azure OpenAI and Semantic Kernel and run them from the main function.


- Add the following functions above the main function.

 

async def search_with_vector_store(memory, question):
    """Search for documents related to your question from the vector store."""
    related_page = await memory.search('TeachGPTtoPDF', question)
    return related_page

async def add_chat_service(kernel):
    """Add a chat service."""
    azure_chat_service = AzureChatCompletion(
        service_id = 'chat_service',
        deployment_name = 'gpt-35-turbo', # Azure OpenAI deployment name
        endpoint = AZURE_OPENAI_ENDPOINT,
        api_key = AZURE_OPENAI_KEY)
    return kernel.add_service(azure_chat_service)

async def answer_with_sk(kernel, question, related_page):
    """Answer question with related_page using Semantic Kernel."""

    prompt = """
    Provide a detailed answer to the <question> using the information from the <related_page>.

    <question>
    {{$question}}
    </question>

    <related_page>
    {{$related_page}}
    </related_page>

    Answer:
    """

    execution_settings = sk_oai.OpenAIChatPromptExecutionSettings(
        service_id = 'chat_service',
        ai_model_id = 'gpt-35-turbo',
        max_tokens = 500,
        temperature = 0.0
    )

    prompt_template_config = sk.PromptTemplateConfig(
        template = prompt,
        name = 'chat_prompt_template',
        template_format = 'semantic-kernel',
        input_variables = [
            InputVariable(name = 'question', description = 'The question that requires a detailed answer.', is_required = True),
            InputVariable(name = 'related_page', description = 'The text of the page that contains information relevant to the question.', is_required = True),
        ],
        execution_settings = execution_settings)

    chat_function = kernel.create_function_from_prompt(prompt_template_config = prompt_template_config,
                                                        plugin_name = 'chat_plugin',
                                                        function_name = 'chat_function')

    arguments = KernelArguments(question = question, related_page = related_page[0].text)

    answer = await kernel.invoke(chat_function, arguments)

    return answer
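The main function assembled in the next step also calls `create_kernel`, `create_embeddings`, `create_vector_store`, and `store_documents`, whose definitions appear in the omitted portion of the full code. As a rough, untested sketch of what they can look like with semantic-kernel 0.9 — treat the exact class and method names here as assumptions rather than the tutorial's verbatim code:

```python
# Untested sketch. Assumes semantic-kernel 0.9 and the imports/config already
# added to example.py; exact API names may differ from the tutorial's code.
from semantic_kernel.memory.volatile_memory_store import VolatileMemoryStore

async def create_kernel(sk):
    """Create a Semantic Kernel instance."""
    return sk.Kernel()

async def create_embeddings(kernel):
    """Register the Azure OpenAI embedding service with the kernel."""
    embeddings = AzureTextEmbedding(
        service_id = 'embedding',
        deployment_name = 'text-embedding-ada-002',  # Azure OpenAI deployment name
        endpoint = AZURE_OPENAI_ENDPOINT,
        api_key = AZURE_OPENAI_KEY)
    kernel.add_service(embeddings)
    return embeddings

async def create_vector_store(kernel, embeddings):
    """Create an in-memory vector store that uses the embedding service."""
    memory = SemanticTextMemory(storage = VolatileMemoryStore(),
                                embeddings_generator = embeddings)
    kernel.add_plugin(TextMemoryPlugin(memory), 'TextMemoryPlugin')
    return memory

async def store_documents(memory, documents):
    """Embed each 5000-character chunk and save it to the vector store."""
    for document in documents.values():
        for i, chunk in enumerate(document['chunks']):
            await memory.save_information(
                collection = 'TeachGPTtoPDF',  # matches search_with_vector_store
                id = f"{document['file_name']}_{i + 1}",
                text = chunk)
```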

 

2. Add the code below to your main function.

 

async def main():

    QUESTION = 'Tell me about effective prompting strategies'

    # Search for documents with Azure AI Search.

    search_results = await search_documents(QUESTION)

    documents = await filter_documents(search_results)

    print('Total Documents Found: {}, Top Documents: {}'.format(
        search_results['@odata.count'], len(search_results['value'])))

    # Answer your question using Semantic Kernel.

    kernel = await create_kernel(sk)

    embeddings = await create_embeddings(kernel)

    memory = await create_vector_store(kernel, embeddings)

    await store_documents(memory, documents)

    related_page = await search_with_vector_store(memory, QUESTION)

    await add_chat_service(kernel)

    answer = await answer_with_sk(kernel, QUESTION, related_page)

    print('Question: ', QUESTION)
    print('Answer: ', answer)
    print('Reference: ', related_page[0].id)

 

3. Now let's run it and see if it answers your question.
- The result of executing the code.


```

Total Documents Found: 5, Top Documents: 3

Question:  Tell me about effective prompting strategies

Answer:  Effective prompting strategies are techniques used to encourage individuals to engage in desired behaviors or complete tasks. These strategies can be particularly useful for individuals with disabilities or those who struggle with executive functioning skills. Some effective prompting strategies include:

  1. Visual prompts: These can include pictures, diagrams, or written instructions that provide a visual cue for the individual to follow.
  2. Verbal prompts: These can include verbal reminders or instructions given by a caregiver or teacher.
  3. Gestural prompts: These can include physical cues, such as pointing or gesturing, to guide the individual towards the desired behavior or task.
  4. Modeling: This involves demonstrating the desired behavior or task for the individual to imitate.
  5. Graduated guidance: This involves providing physical assistance to the individual as they complete the task, gradually reducing the amount of assistance as they become more independent.
  6. Time-based prompts: These can include setting a timer or providing a schedule to help the individual stay on task and complete the task within a designated time frame.

Overall, effective prompting strategies should be tailored to the individual's needs and abilities, and should be used consistently to help them develop independence and achieve success.

Reference:  Prompting GPT-3 To Be Reliable.pdf_1
```

 

7. Note: Full code for example.py and config.py

 
This chapter is designed to provide all the code used in the tutorial. It is a separate section from the rest of the tutorial.
For your convenience, I've attached the full code used in the tutorial.
 
1. config.py
 
# Azure AI Search service settings
SEARCH_SERVICE_NAME = 'your-search-service-name' # 'teachchatgpt-search'
SEARCH_SERVICE_ENDPOINT = f'https://{SEARCH_SERVICE_NAME.lower()}.search.windows.net/'
SEARCH_SERVICE_KEY = 'your-search-service-key'
SEARCH_SERVICE_API_VERSION = 'your-API-version' # '2023-10-01-Preview'

# Azure AI Search service index settings
SEARCH_SERVICE_INDEX_NAME1 = 'your-search-service-index-name' # 'teachchatgpt-index'

# Azure AI Search service semantic configuration settings
SEARCH_SERVICE_SEMANTIC_CONFIG_NAME = 'your-semantic-configuration-name' # 'teachchatgpt-config'

# Azure OpenAI settings
AZURE_OPENAI_NAME = 'your-openai-name' # 'teachchatgpt-azureopenai'
AZURE_OPENAI_ENDPOINT = f'https://{AZURE_OPENAI_NAME.lower()}.openai.azure.com/'
AZURE_OPENAI_KEY = 'your-openai-key'
AZURE_OPENAI_API_VERSION = 'your-API-version' # '2024-02-15-preview'
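Hardcoded keys are fine for a tutorial, but in practice you may prefer to read secrets from environment variables so they never land in source control. A minimal sketch of that pattern, assuming environment variable names of my own choosing (they are not part of the tutorial):

```python
import os

# Read each setting from the environment, falling back to a placeholder.
# The environment variable names below are an assumed convention.
SEARCH_SERVICE_NAME = os.environ.get('SEARCH_SERVICE_NAME', 'your-search-service-name')
SEARCH_SERVICE_ENDPOINT = f'https://{SEARCH_SERVICE_NAME.lower()}.search.windows.net/'
SEARCH_SERVICE_KEY = os.environ.get('SEARCH_SERVICE_KEY', 'your-search-service-key')

AZURE_OPENAI_NAME = os.environ.get('AZURE_OPENAI_NAME', 'your-openai-name')
AZURE_OPENAI_ENDPOINT = f'https://{AZURE_OPENAI_NAME.lower()}.openai.azure.com/'
AZURE_OPENAI_KEY = os.environ.get('AZURE_OPENAI_KEY', 'your-openai-key')
```

With this in place, the rest of config.py stays unchanged and the keys can be supplied per environment.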
 
2. example.py
 
# Library imports
from collections import OrderedDict
import requests

# Semantic Kernel library imports
import semantic_kernel as sk
import semantic_kernel.connectors.ai.open_ai as sk_oai
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion, AzureTextEmbedding
from semantic_kernel.memory.semantic_text_memory import SemanticTextMemory
from semantic_kernel.core_plugins.text_memory_plugin import TextMemoryPlugin
from semantic_kernel.prompt_template.input_variable import InputVariable
from semantic_kernel.functions.kernel_arguments import KernelArguments

# Configuration imports
from config import (
    SEARCH_SERVICE_ENDPOINT,
    SEARCH_SERVICE_KEY,
    SEARCH_SERVICE_API_VERSION,
    SEARCH_SERVICE_INDEX_NAME1,
    SEARCH_SERVICE_SEMANTIC_CONFIG_NAME,
    AZURE_OPENAI_ENDPOINT,
    AZURE_OPENAI_KEY,
    AZURE_OPENAI_API_VERSION,
)

# Azure AI Search service header settings
HEADERS = {
    'Content-Type': 'application/json',
    'api-key': SEARCH_SERVICE_KEY
}

async def search_documents(question):
    """Search documents using Azure AI Search."""
    # Construct the Azure AI Search service access URL.
    url = (SEARCH_SERVICE_ENDPOINT + 'indexes/' +
                SEARCH_SERVICE_INDEX_NAME1 + '/docs')
    # Create a parameter dictionary.
    params = {
        'api-version': SEARCH_SERVICE_API_VERSION,
        'search': question,
        'select': '*',
        '$top': 5,  # Extract the top 5 documents from your storage.
        'queryLanguage': 'en-us',
        'queryType': 'semantic',
        'semanticConfiguration': SEARCH_SERVICE_SEMANTIC_CONFIG_NAME,
        '$count': 'true',
        'speller': 'lexicon',
        'answers': 'extractive|count-3',
        'captions': 'extractive|highlight-false'
        }
    # Make a GET request to the Azure AI Search service and store the response in a variable.
    resp = requests.get(url, headers=HEADERS, params=params)
    # Return the JSON response containing the search results.
    search_results = resp.json()

    return search_results
    
async def filter_documents(search_results):
    """Filter documents with a reranker score above a certain threshold."""
    documents = OrderedDict()
    for result in search_results['value']:
        # The '@search.rerankerScore' range is 0 to 4.00, where a higher score indicates a stronger semantic match.
        if result['@search.rerankerScore'] > 0.8:
            documents[result['metadata_storage_path']] = {
                'chunks': result['pages'][:10],
                'captions': result['@search.captions'][:10],
                'score': result['@search.rerankerScore'],
                'file_name': result['metadata_storage_name']
            }

    return documents

async def create_kernel(sk):
    """Create a Semantic Kernel."""
    return sk.Kernel()

async def create_embeddings(kernel):
    """Create an embedding model."""
    embeddings = AzureTextEmbedding(
        deployment_name = "text-embedding-ada-002",
        endpoint = AZURE_OPENAI_ENDPOINT,
        api_key = AZURE_OPENAI_KEY
    )
    kernel.add_service(embeddings)
    return embeddings

async def create_vector_store(kernel, embeddings):
    """Create a vector store."""
    memory = SemanticTextMemory(storage = sk.memory.VolatileMemoryStore(), embeddings_generator = embeddings)
    kernel.import_plugin_from_object(TextMemoryPlugin(memory), "TextMemoryPlugin")
    return memory

async def store_documents(memory, file_content):
    """Store documents in the vector store."""
    for key, value in file_content.items():
        page_number = 1
        for page in value['chunks']:
            page_id = f"{value['file_name']}_{page_number}"
            await memory.save_information(
                collection = 'TeachGPTtoPDF',
                id = page_id,
                text = page
            )
            page_number += 1

async def search_with_vector_store(memory, question):
    """Search for documents related to your question from the vector store."""
    related_page = await memory.search('TeachGPTtoPDF', question)
    return related_page

async def add_chat_service(kernel):
    """Add a chat service."""
    azure_chat_service = AzureChatCompletion(
        service_id = 'chat_service',
        deployment_name = 'gpt-35-turbo', # Azure OpenAI deployment name
        endpoint = AZURE_OPENAI_ENDPOINT,
        api_key = AZURE_OPENAI_KEY)
    return kernel.add_service(azure_chat_service)

async def answer_with_sk(kernel, question, related_page):
    """Answer question with related_page using Semantic Kernel."""

    prompt = """
    Provide a detailed answer to the <question> using the information from the <related_page>.

    <question>
    {{$question}}
    </question>

    <related_page>
    {{$related_page}}
    </related_page>

    Answer:
    """

    execution_settings = sk_oai.OpenAIChatPromptExecutionSettings(
        service_id = 'chat_service',
        ai_model_id = 'gpt-35-turbo',
        max_tokens = 500,
        temperature = 0.0
    )

    prompt_template_config = sk.PromptTemplateConfig(
        template = prompt,
        name = 'chat_prompt_template',
        template_format = 'semantic-kernel',
        input_variables = [
            InputVariable(name = 'question', description = 'The question that requires a detailed answer.', is_required = True),
            InputVariable(name = 'related_page', description = 'The text of the page that contains information relevant to the question.', is_required = True),
        ],
        execution_settings = execution_settings)

    chat_function = kernel.create_function_from_prompt(prompt_template_config = prompt_template_config,
                                                        plugin_name = 'chat_plugin',
                                                        function_name = 'chat_function')

    arguments = KernelArguments(question = question, related_page = related_page[0].text)

    answer = await kernel.invoke(chat_function, arguments)

    return answer

async def main():

    QUESTION = 'Tell me about effective prompting strategies'

    # Search for documents with Azure AI Search.

    search_results = await search_documents(QUESTION)

    documents = await filter_documents(search_results)

    print('Total Documents Found: {}, Top Documents: {}'.format(
        search_results['@odata.count'], len(search_results['value'])))

    # Answer your question using Semantic Kernel.

    kernel = await create_kernel(sk)

    embeddings = await create_embeddings(kernel)

    memory = await create_vector_store(kernel, embeddings)

    await store_documents(memory, documents)

    related_page = await search_with_vector_store(memory, QUESTION)

    await add_chat_service(kernel)

    answer = await answer_with_sk(kernel, QUESTION, related_page)

    print('Question: ', QUESTION)
    print('Answer: ', answer)
    print('Reference: ', related_page[0].id)

# Execute the main function.
if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
 

Congratulations!

You've completed this tutorial

You've now learned how to integrate Azure AI Search with Azure OpenAI to answer questions based on your own documents.
 

 


In this tutorial, we have navigated through a practical journey of integrating Azure Blob Storage, Azure AI Search, and Azure OpenAI to create a powerful search and response mechanism.

1. Storing Data in Azure Blob Storage

Our first step was to efficiently store PDF files in Azure Blob Storage, an unstructured data store known for its scalability and security. This storage served as a foundational base, housing the search material that would later be indexed and queried to retrieve relevant information.
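If you are scripting the upload step rather than using the portal, the azure-storage-blob SDK's `ContainerClient` can push each PDF up with its file name as the blob name. The sketch below only covers local enumeration and the upload loop; creating the `container_client` from your connection string is omitted, and the helper names are my own, not part of the tutorial:

```python
from pathlib import Path

def list_pdfs(folder: str) -> list:
    """Enumerate the PDF files to upload (non-recursive), sorted by name."""
    return sorted(Path(folder).glob('*.pdf'))

def upload_pdfs(container_client, folder: str) -> int:
    """Upload each PDF with its file name as the blob name; returns the count."""
    pdfs = list_pdfs(folder)
    for pdf in pdfs:
        with open(pdf, 'rb') as data:
            container_client.upload_blob(name=pdf.name, data=data, overwrite=True)
    return len(pdfs)
```

Once the blobs are in place, the Azure AI Search indexer picks them up from the container without any further client-side work.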

2. Implementing Azure AI Search

Next, we used Azure AI Search to index the data stored in Azure Blob Storage and to run semantic queries against it, returning reranked, captioned results for each question.
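The reranker-score filtering from `filter_documents` can be exercised locally with mock search results, which is a handy way to sanity-check the threshold before calling the live service. A standalone (synchronous) sketch of the same logic:

```python
from collections import OrderedDict

def filter_by_reranker_score(results, threshold=0.8):
    """Keep results whose semantic reranker score exceeds the threshold.
    '@search.rerankerScore' ranges from 0 to 4.00; higher means a stronger match."""
    documents = OrderedDict()
    for result in results:
        if result['@search.rerankerScore'] > threshold:
            documents[result['metadata_storage_path']] = {
                'score': result['@search.rerankerScore'],
                'file_name': result['metadata_storage_name'],
            }
    return documents

# Exercise the filter with mock search results.
mock_results = [
    {'@search.rerankerScore': 2.5, 'metadata_storage_path': 'path/a.pdf',
     'metadata_storage_name': 'a.pdf'},
    {'@search.rerankerScore': 0.3, 'metadata_storage_path': 'path/b.pdf',
     'metadata_storage_name': 'b.pdf'},
]
kept = filter_by_reranker_score(mock_results)
```

Only the first mock result clears the 0.8 threshold, so `kept` contains a single entry keyed by its storage path.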

3. Integrating Azure OpenAI with VS Code

The final step of our tutorial was to integrate Azure OpenAI through a program created in VS Code. This program used the search results processed and refined by Azure AI Search to generate accurate and contextually relevant answers, illustrating the seamless interplay of storage, search, and response mechanisms.

I hope the knowledge and skills imparted here will serve as invaluable tools in your future projects. The harmonious integration of Azure Blob Storage, Azure AI Search, and Azure OpenAI represents a powerful approach to unstructured data management and utilization.
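Conceptually, what the Semantic Kernel prompt template did at render time is plain variable substitution: each `{{$name}}` placeholder in the prompt string is replaced with the matching argument. This toy re-implementation is not SK's actual template engine, just a sketch of the idea:

```python
import re

def render_template(template, **variables):
    """Replace each {{$name}} placeholder with the matching keyword argument."""
    def substitute(match):
        return variables[match.group(1)]
    return re.sub(r'\{\{\$(\w+)\}\}', substitute, template)

prompt = ('Answer the <question>{{$question}}</question> using '
          '<related_page>{{$related_page}}</related_page>.')
rendered = render_template(prompt, question='What is RAG?', related_page='...')
```

In the real pipeline, `KernelArguments(question=..., related_page=...)` plays the role of the keyword arguments here, and SK renders the template before sending it to the chat model.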

Thank you for your commitment and hard work throughout this learning journey.

Version history
Last update: Mar 26 2024 02:36 AM