
Educator Developer Blog
9 MIN READ

Create your own copilot using Azure Prompt flow and Streamlit

PascalBurume
May 16, 2024

LLMs such as GPT have certain limitations. They may lack up-to-date information because of the knowledge cutoff of their training data. This poses a significant challenge when we want our AI models to provide accurate, context-aware, and timely responses. Imagine asking an LLM about the latest technology trends or seeking real-time updates on a breaking news event; traditional language models might fall short in these scenarios.

 

In this blog, we will introduce you to a game-changing technique called retrieval-augmented generation (RAG). This approach empowers language models such as GPT to bridge the gap between their static knowledge and the dynamic real world. With RAG, we’ll show you how to equip your generative AI applications to pull in fresh information, ground responses in your organizational data, cross-reference facts to reduce hallucinations, and stay contextually aware, all in real time.

 

Generative AI technology has the potential to greatly enhance education in the health sector, particularly in fields like anatomy and physiology. This is because AI platforms can create highly detailed and interactive models of the human body, making complex systems like the cardiovascular or nervous systems easier to understand than with traditional methods.

 

Another benefit of generative AI is its ability to personalize the learning experience. By analyzing a student's performance, the AI can identify areas where the student needs improvement and generate customized practice questions to target those areas. Additionally, generative AI can simulate patient interactions, which is essential in enhancing diagnostic skills.

 

This blog will show how generative AI, using Azure AI Studio prompt flow with the Multi-Round Q&A on Your Data chat template, can make anatomy and physiology education more interactive, engaging, and effective, and help students prepare for their healthcare careers.

 

1. Architecture

2. Create an Azure AI Search resource

You need an Azure AI Search resource to index your data for your copilot solution. This will let you use custom data in a prompt flow.

  1. In a web browser, open the Azure portal at https://portal.azure.com and sign in using your Azure credentials.
  2. On the home page, select + Create a resource and search for Azure AI Search. Then create a new Azure AI Search resource with the following settings:
    • Subscription: Select your Azure subscription
    • Resource group: Select or create a resource group
    • Service name: Enter a unique service name
    • Location: Select any of the following regions:
      • Australia East
      • Canada East
      • East US
      • East US 2
      • France Central
      • Japan East
      • North Central US
      • Sweden Central
      • Switzerland
    • Pricing tier: Standard

  3. Wait for your Azure AI Search resource deployment to be completed.
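
If you prefer scripting this step to clicking through the portal, the snippet below is a minimal sketch using the azure-mgmt-search management SDK; the subscription ID, resource group, service name, and region are placeholder values you would replace with your own, and the portal steps above remain the path this walkthrough follows.

from azure.identity import DefaultAzureCredential
from azure.mgmt.search import SearchManagementClient
from azure.mgmt.search.models import SearchService, Sku

# Placeholder values - replace with your own subscription, resource group, and service name
subscription_id = "<your-subscription-id>"
resource_group = "<your-resource-group>"
service_name = "<your-unique-search-name>"

client = SearchManagementClient(DefaultAzureCredential(), subscription_id)

# Create (or update) a Standard-tier Azure AI Search service and wait for provisioning
poller = client.services.begin_create_or_update(
    resource_group,
    service_name,
    SearchService(location="eastus", sku=Sku(name="standard")),
)
print(poller.result().provisioning_state)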

 

3. Create an Azure AI project

 

Now you’re ready to create an Azure AI Studio project and the Azure AI resources to support it.

  1. In a web browser, open Azure AI Studio at https://ai.azure.com and sign in using your Azure credentials.
  2. On the Manage page, select + New AI hub. Then, in the Getting started wizard, create a project with the following settings:
    • AI Hub: Create a new resource with the following settings:
      • AI Hub name: A unique name
      • Azure Subscription: Your Azure subscription
      • Resource group: Select the resource group containing your Azure AI Search resource
      • Location: The same location as your Azure AI Search resource
      • Azure OpenAI: (New) Autofills with your selected hub name
    • Project name: Create a new project with the following settings:
      • The project name: A unique name
      • Hub: Choose the hub created above


 

  3. Wait for your project to be created.

 

4. Deploy models

 

To implement your solution, you will require two models:

  • An embedding model that turns text data into vectors for easy indexing and processing.
  • A model that can produce natural-language responses to queries grounded in your data.

  1. In Azure AI Studio, in your project, in the navigation pane on the left, under Components, select the Deployments page.
  2. Create a new deployment (using a real-time endpoint) of the text-embedding-ada-002 model with the following settings:
    • Deployment name: text-embedding-ada-002
    • Model version: Default
    • Advanced options:
      • Content filter: Default
      • Tokens per minute rate limit: 5K
  3. Repeat the previous steps to deploy a gpt-35-turbo model with the deployment name gpt-35-turbo.
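
Once both deployments are live, you can sanity-check them from Python with the openai package's AzureOpenAI client. The sketch below assumes the endpoint and key of the Azure OpenAI resource behind your hub are available as environment variables; the variable names and API version are illustrative.

import os
from openai import AzureOpenAI

# Assumed environment variables - use the values from your hub's Azure OpenAI resource
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_KEY"],
    api_version="2024-02-01",
)

# The embedding deployment turns text into a vector used for indexing and retrieval
embedding = client.embeddings.create(
    model="text-embedding-ada-002",  # deployment name
    input="The heart pumps blood through the circulatory system.",
)
print(len(embedding.data[0].embedding))  # vector length, e.g. 1536

# The chat deployment produces natural-language answers
chat = client.chat.completions.create(
    model="gpt-35-turbo",  # deployment name
    messages=[{"role": "user", "content": "In one sentence, what does the nervous system do?"}],
)
print(chat.choices[0].message.content)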

 

 

5. Add data to your project

 

The data for your copilot is an Essentials of Anatomy and Physiology textbook in PDF format, designed to provide a comprehensive introduction to human anatomy and physiology. Let’s add it to the project.

  1. In Azure AI Studio, in your project, select the Data page in the navigation pane on the left under Components.
  2. Select + New data.
  3. Expand the drop-down menu and select Upload files/folders in the Add your data wizard.
  4. Select Upload files/folder and select Upload files.
  5. Set the data name to “xxxxxxx”.

 

6. Create an index for your data

 

Now that you’ve added a data source to your project, you can use it to create an index in your Azure AI Search resource.

  1. In Azure AI Studio, in your project, select the Indexes page in the navigation pane on the left under Components.
  2. Add a new index with the following settings:
    • Source data:
      • Data source: Use existing project data
        • Select the “xxxxxx” data source
    • Index storage:
      • Select the AzureAISearch connection to your Azure AI Search resource
    • Search settings:
      • Vector settings: Add vector search to this search resource
      • Azure OpenAI Resource: Default_AzureOpenAI
      • Acknowledge that an embedding model will be deployed if not already there
    • Index settings:
      • Index name: “xxxxxxx”
      • Virtual machine: Auto select
  3. Wait for the indexing process to be completed, which can take several minutes. The index creation operation consists of the following jobs:
    • Crack, chunk, and embed the text tokens in your data.
    • Update Azure AI Search with the new index.
    • Register the index asset.
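
Once the index has been registered, you can also confirm from code that documents landed in Azure AI Search. The minimal sketch below uses the azure-search-documents package; the endpoint, key, and index name are placeholders for your own values, and the field names in the results depend on how the index was generated.

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

# Placeholder values - use your search service endpoint, key, and index name
search_client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",
    index_name="<your-index-name>",
    credential=AzureKeyCredential("<your-search-key>"),
)

# Simple keyword query over the chunked, embedded documents
results = search_client.search(search_text="cardiovascular system", top=3)
for doc in results:
    print(doc)  # inspect the chunk content and metadata fields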

 

 

7. Examine the index

 

Before using your index in a RAG-based prompt flow, let’s verify that it can be used to affect generative AI responses.

  1. In the navigation pane on the left, under Tools, select the Playground page.
  2. On the Setup panel, select the Add your data tab, and then add the brochures-index project index and select the hybrid (vector + keyword) search type.
  3. After the index has been added and the chat session has restarted, resubmit the prompt "What can you recommend for beginners?"
  4. Review the response, which should be based on data in the index. 
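
The Playground wires the index to the chat model for you. If you want to reproduce a similar grounded call from code, one option is the Azure OpenAI chat completions "on your data" extension, passed through the openai package's extra_body parameter. The sketch below is an illustration rather than part of this walkthrough: the endpoint, keys, and index name are placeholders, and the exact parameter set can vary by API version.

import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_KEY"],
    api_version="2024-02-01",
)

# Ask the chat deployment a question grounded in the Azure AI Search index
response = client.chat.completions.create(
    model="gpt-35-turbo",
    messages=[{"role": "user", "content": "What can you recommend for beginners?"}],
    extra_body={
        "data_sources": [
            {
                "type": "azure_search",
                "parameters": {
                    "endpoint": "https://<your-search-service>.search.windows.net",
                    "index_name": "<your-index-name>",
                    "authentication": {"type": "api_key", "key": "<your-search-key>"},
                },
            }
        ]
    },
)
print(response.choices[0].message.content)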

 

 

8. Use the index in your prompt flow

 

In this example, I’ll use a ready-made template from the Prompt Flow page. I’ll choose the “Multi-Round Q&A on Your Data” template, which applies retrieval-augmented generation over the retrieved Essentials of Anatomy and Physiology content, based on a user’s question.

 

You can easily clone this Flow and start exploring it.

 

 

Let us analyze the individual parts that comprise it and delve into their unique characteristics:

  • Inputs: the user’s query and the chat history, which contains the previous turns of the conversation
  • Output: the model’s augmented response

 

 

  • modify_query_with_history

Before the question is embedded, modify_query_with_history adds context by combining rewrite instructions with the chat history, turning the latest user message into a standalone search query. The template includes a default prompt for this step.
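
The exact wording lives in the node's Jinja template inside your cloned flow; the snippet below is a close paraphrase of that kind of query-rewrite prompt rather than the verbatim default, so check your own flow for the precise text.

Given the conversation history and the user's latest question, rewrite the question as a single, self-contained question that can be used to search the knowledge base. Do not answer it; return only the rewritten question.

{% for item in chat_history %}
user: {{ item.inputs.chat_input }}
assistant: {{ item.outputs.chat_output }}
{% endfor %}
user: {{ chat_input }}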

 

 

  • Examples of chat history:

Input: Hi, I'm struggling to understand how the nervous system interacts with other body systems. Can you help?

Output: Absolutely! The nervous system interacts with other body systems primarily through nerve impulses that control and coordinate actions. For example, the nervous system sends signals to the muscular system to initiate movement.

 

Input: That makes sense. Can you explain how the nervous system affects the digestive system?

Output: Sure! The nervous system regulates the speed at which food is digested through signals that increase or decrease the rate of muscle contractions in the gastrointestinal tract. This is crucial for efficient digestion and nutrient absorption.

 

Input: Thanks! How about the cardiovascular system? How does the nervous system interact with that?

Output: The nervous system controls the heart rate and blood pressure. It can send signals to the heart to speed up or slow down based on the body's needs, like during exercise or rest.
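
Inside the flow (and in the Streamlit client later in this post), that history travels as a list of turns, each holding an inputs/outputs pair. A shortened example of the structure:

chat_history = [
    {
        "inputs": {"chat_input": "Hi, I'm struggling to understand how the nervous system interacts with other body systems. Can you help?"},
        "outputs": {"chat_output": "Absolutely! The nervous system interacts with other body systems primarily through nerve impulses..."},
    },
    {
        "inputs": {"chat_input": "That makes sense. Can you explain how the nervous system affects the digestive system?"},
        "outputs": {"chat_output": "Sure! The nervous system regulates the speed at which food is digested..."},
    },
]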

 

  • lookup

We use lookup to link our data to the model. The data must be indexed first, which is done in the Azure AI Studio Indexes component in the sidebar.

In the lookup section, set the following parameter values:

  1. mlindex_content: Select the empty field to open the Generate pane
    1. index_type: Registered Index
    2. mlindex_asset_id: brochures-index:1
  2. queries: ${modify_query_with_history.output}
  3. query_type: Hybrid (vector + keyword)
  4. top_k: 2

 

To get the right path, go to Build / your_project_name / Data / your_index_name, click Index Data, and copy the Data connection URI from the Data links section.

 

  • generate_prompt_context 

generate_prompt_context receives the list of search result entities from lookup and turns them into a single string containing the content and source of each retrieved document. Adding these pertinent details to the prompt grounds the model in your data, enabling more knowledgeable and context-sensitive responses.
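
In the template this is a Python tool node. The sketch below is a simplified stand-in rather than the exact template code (the real node works with the vector store's result entities), but it shows the shape of the transformation: each retrieved chunk contributes its text and source to one combined context string.

def generate_prompt_context(search_result: list) -> str:
    # Each item from the lookup node carries the chunk text plus metadata;
    # plain dicts are used here for illustration.
    parts = []
    for entity in search_result:
        content = entity.get("text", "")
        source = entity.get("metadata", {}).get("source", {}).get("filename", "unknown")
        parts.append(f"Content: {content}\nSource: {source}")
    # The joined string feeds the contexts input of the chat prompt
    return "\n\n".join(parts)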

 

 

  • Prompt_variants

With prompt_variants, you can create alternative versions of the prompt and compare how each one affects the model’s responses.

 

 

  • chat_with_context

chat_with_context uses the context string produced by generate_prompt_context, together with the chat history, to generate the final answer. Because the model sees both the previous turns and the related document chunks, its replies are more coherent and better grounded.
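
Conceptually, the node assembles something like the message list below before calling the gpt-35-turbo deployment. This is a hand-written illustration of the structure, not the template's actual Jinja prompt:

# Illustrative only - the real node renders this from a Jinja prompt template
prompt_context = "Content: The nervous system...\nSource: essentials_of_anatomy_and_physiology.pdf"
chat_input = "How does the nervous system interact with the cardiovascular system?"

messages = [
    {
        "role": "system",
        "content": "You are an assistant that answers anatomy and physiology questions "
                   "using only the provided context.\n\nContext:\n" + prompt_context,
    },
    # Previous turns replayed from chat_history ...
    {"role": "user", "content": "Can you explain how the nervous system affects the digestive system?"},
    {"role": "assistant", "content": "Sure! The nervous system regulates the speed at which food is digested..."},
    # ... followed by the latest user question
    {"role": "user", "content": chat_input},
]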

 

 

Let's test the chat to see how it reacts.

 

 

After creating the flow, we can deploy it as a managed endpoint by clicking the "Deploy" button on the flow page; the endpoint can then be consumed through a REST API.

 

 

 

After that, you will need to choose a virtual machine size that will host the deployment:

 

 

Note that there is a feature available that you can opt to enable, called Inferencing data collection (currently in preview). When enabled, it automatically collects inputs and outputs as a data asset within your Azure AI Studio. This can be used later as a test dataset.

9. Consuming your Prompt Flow

 

After deploying your flow in Azure AI Studio, you can consume it as a managed endpoint. To access this feature, simply navigate to the "Deployments" tab and click on your flow's name. From there, you can also test your flow to ensure it's working properly before consumption.

 

We can use Streamlit in VS Code to write a small chat client for the endpoint. Go to the Consume tab, copy the endpoint URL and key, and make them available to your code (for example through a .env file, as sketched below).
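
Rather than hard-coding the key in the script, you can keep it in a .env file next to the app and load it with python-dotenv. The variable name below is the one the script expects; the value is a placeholder.

# .env (keep this file out of source control)
AZURE_ENDPOINT_KEY=<your-endpoint-key-from-the-Consume-tab>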

 

 

 

import streamlit as st
import urllib.request
import urllib.error
import json
import os
import ssl
from dotenv import load_dotenv

# Load environment variables (AZURE_ENDPOINT_KEY is read from the .env file)
load_dotenv()
AZURE_ENDPOINT_KEY = os.environ['AZURE_ENDPOINT_KEY']

def allowSelfSignedHttps(allowed):
    # Bypass the server certificate verification on the client side
    if allowed and not os.environ.get('PYTHONHTTPSVERIFY', '') and getattr(ssl, '_create_unverified_context', None):
        ssl._create_default_https_context = ssl._create_unverified_context

# Streamlit UI components (module level, so they render on every rerun)
st.image("education.png", width=600)
st.title('Welcome to your Essentials of Anatomy and Physiology Assistant!')
st.sidebar.title("Copilot for Anatomy and Physiology!")
st.sidebar.caption("Made by Pascal Burume")
st.sidebar.info("""
Generative AI technology has the potential to greatly enhance education in the health sector, particularly in fields like anatomy and physiology. This is because AI platforms can create highly detailed and interactive models of the human body, making complex systems like the cardiovascular or nervous systems easier to understand than with traditional methods.
""")

def main():
    allowSelfSignedHttps(True)
    # Initialize chat history
    if "chat_history" not in st.session_state:
        st.session_state.chat_history = []

    # Display chat history
    for interaction in st.session_state.chat_history:
        if interaction["inputs"]["chat_input"]:
            with st.chat_message("user"):
                st.write(interaction["inputs"]["chat_input"])
        if interaction["outputs"]["chat_output"]:
            with st.chat_message("assistant"):
                st.write(interaction["outputs"]["chat_output"])

    # React to user input
    if user_input := st.chat_input("Ask me anything..."):
        # Display user message in chat message container
        st.chat_message("user").markdown(user_input)

        # Query API
        data = {"chat_history": st.session_state.chat_history, 'chat_input': user_input}
        body = json.dumps(data).encode('utf-8')
        url = 'https://xxxxxxxxxxxxxxxxxxxxx.ml.azure.com/score'
        headers = {
            'Content-Type': 'application/json',
            'Authorization': f'Bearer {AZURE_ENDPOINT_KEY}',
            'azureml-model-deployment': 'xxxxxxxxxx-1'
        }
        req = urllib.request.Request(url, body, headers)

        try:
            response = urllib.request.urlopen(req)
            response_data = json.loads(response.read().decode('utf-8'))

            # Check if 'chat_output' key exists in the response_data
            if 'chat_output' in response_data:
                with st.chat_message("assistant"):
                    st.markdown(response_data['chat_output'])

                st.session_state.chat_history.append(
                    {"inputs": {"chat_input": user_input},
                     "outputs": {"chat_output": response_data['chat_output']}}
                )

            else:
                st.error("The response data does not contain a 'chat_output' key.")
        except urllib.error.HTTPError as error:
            st.error(f"The request failed with status code: {error.code}")

if __name__ == "__main__":
    main()
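
Save the script (for example as app.py) next to your .env file and the education.png image, then launch it locally with streamlit run app.py; the chat interface opens in your browser.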

 

 

 

 

Sample prompts:

  • What can you recommend for me today?
  • Give me a study plan for today
  • I want to dive into the muscular system

 

 

You can also view metrics for your flow in the “Monitoring” tab, which is backed by Azure Monitor.

 

 

10. Conclusion

As we wrap up this exploration into the transformative capabilities of generative AI technologies, particularly within the realms of education and healthcare, it's clear that the potential for innovation is immense. By leveraging retrieval-augmented generation (RAG), we have unlocked a path that bridges the gap between static data and the dynamic needs of real-world applications. This blog has outlined not just the theoretical possibilities but also practical steps to implement these technologies using Azure AI Studio.

 

Thank you for joining us on this insightful journey through the capabilities of modern AI technologies. We are excited about the future possibilities as we continue to push the boundaries of what AI can achieve in educational contexts. Let’s move forward into a future where technology and education merge to create enriching, empowering learning experiences.

 

11. Resources

 

  1. Streamlit • A faster way to build and share data apps
  2. Deploy a flow in prompt flow as a managed online endpoint for real-time inference - Azure Machine Learning | Microsoft Learn
  3. Get started in prompt flow - Azure Machine Learning | Microsoft Learn