From Prompts to Embeddings and Vector Stores - How to use OpenAI in real-life scenarios

LLMs (Large Language Models) are everywhere these days, but using them effectively can be a daunting task.
That’s why I decided to create this guide, where I will share examples of:

  • How to create effective prompts for many real-life scenarios
  • How to create embeddings and store them in Vector stores
  • How to use Azure AI services for image and speech recognition, and, as a bonus, how to combine the results with Azure OpenAI GPT.

I have created a public GitHub repository with examples that showcase all of the above, and below I will explain each example, what it solves, and how.

Let’s start from the beginning – The prompts

Most of us know that one of the secrets of getting the results we need with LLMs is creating effective prompts.

Prompts are used to provide the AI with the context it needs to generate a response. However, if the prompt is not specific enough, the AI may generate irrelevant or even harmful responses. To avoid this, it is important to create exact prompts that provide the AI with clear instructions on what to generate.

Due to the way instruction-following models are trained, and the data they are trained on, specific prompt formats work particularly well and align better with the task at hand. Best practices for prompt engineering with the OpenAI API include:

  • Use the latest model for best results.
  • Put instructions at the beginning of the prompt and use ### or """ to separate the instruction from the context.
  • Be specific, descriptive, and as detailed as possible about the desired context, outcome, length, format, style, etc.
  • Articulate the desired output format through examples, starting with zero-shot, then few-shot.
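For example, here is a minimal sketch of an instruction-first prompt with ### separators, using the openai Python package (pre-1.0 style) against Azure OpenAI; the endpoint, key, and deployment name are placeholders you would replace with your own:

    import openai

    # Assumption: the placeholders below stand in for your Azure OpenAI resource.
    openai.api_type = "azure"
    openai.api_base = "https://<your-resource>.openai.azure.com/"
    openai.api_version = "2023-05-15"
    openai.api_key = "<your-key>"

    # Instructions first, context after, separated by ### markers.
    prompt = """Summarize the text below as three bullet points.

    ###
    {text}
    ###"""

    response = openai.ChatCompletion.create(
        engine="gpt-35-turbo",  # assumption: your chat deployment name
        messages=[{"role": "user", "content": prompt.format(text="<your text here>")}],
        temperature=0,
    )
    print(response["choices"][0]["message"]["content"])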

Few-shot learning is another important technique when using OpenAI. Rather than fine-tuning the model on a large dataset, you include a small number of worked examples directly in the prompt. This is useful when you have limited data or want to steer the model toward a specific task, and it can improve the accuracy of the model’s responses and make it more versatile.
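As a rough sketch (reusing the Azure OpenAI configuration above), a few-shot prompt simply embeds a handful of labeled examples in the message list; the reviews and labels here are made up for illustration:

    # Few-shot: the examples live in the prompt itself; no fine-tuning involved.
    messages = [
        {"role": "system", "content": "Classify each review as positive or negative."},
        {"role": "user", "content": "Review: The battery lasts all day."},
        {"role": "assistant", "content": "positive"},
        {"role": "user", "content": "Review: It broke after a week."},
        {"role": "assistant", "content": "negative"},
        {"role": "user", "content": "Review: Setup was effortless and fast."},
    ]
    response = openai.ChatCompletion.create(
        engine="gpt-35-turbo", messages=messages, temperature=0
    )
    print(response["choices"][0]["message"]["content"])  # expected: "positive"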

 

Code examples

User’s Question to JSON – create a prompt that extracts entities by turning a user’s question into a JSON object that can be used to search a database.
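A minimal sketch of such a prompt might look like this; the entity keys are hypothetical and would depend on your database schema:

    prompt = """Extract the entities from the user's question as JSON with the
    keys "product", "color" and "max_price". Use null for missing values.

    ###
    Question: Do you have red sneakers under 80 dollars?
    ###

    JSON:"""

    response = openai.ChatCompletion.create(
        engine="gpt-35-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    # Expected shape: {"product": "sneakers", "color": "red", "max_price": 80}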

Food orders – speech to text to JSON – get a food order, use the Azure AI Speech SDK to transform speech to text, and then use Azure OpenAI GPT-3.5 to extract a JSON object that represents the order and can be used to search a database.
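The speech-to-text half could look roughly like this, using the Azure AI Speech SDK for Python; the key, region, and audio file name are placeholders:

    import azure.cognitiveservices.speech as speechsdk

    speech_config = speechsdk.SpeechConfig(subscription="<speech-key>", region="<region>")
    audio_config = speechsdk.audio.AudioConfig(filename="order.wav")  # hypothetical recording
    recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)

    result = recognizer.recognize_once()
    order_text = result.text  # e.g. "I'd like two large pepperoni pizzas"
    # order_text can then be fed to GPT-3.5 with a JSON-extraction prompt
    # like the one sketched above.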

Imitate someone’s writing style to generate emails – use Azure OpenAI GPT-3.5 with few-shot learning to imitate someone’s writing style.

Rephrase and Translate – use Azure OpenAI GPT-3.5 to rephrase a product description into proper English and extract tags for cataloguing, then translate it into Spanish and French.

Create a text ad from an image – use the Azure AI Vision SDK to extract dense captions from an image and then use Azure OpenAI GPT-3.5 to create a text ad based on the elements identified in the image.
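A rough sketch of the dense-captions step, using the azure-ai-vision-imageanalysis package (the repository may use a different SDK version); the endpoint, key, and image file are placeholders:

    from azure.ai.vision.imageanalysis import ImageAnalysisClient
    from azure.ai.vision.imageanalysis.models import VisualFeatures
    from azure.core.credentials import AzureKeyCredential

    client = ImageAnalysisClient(
        endpoint="<vision-endpoint>", credential=AzureKeyCredential("<vision-key>")
    )
    with open("product.jpg", "rb") as f:
        result = client.analyze(
            image_data=f.read(), visual_features=[VisualFeatures.DENSE_CAPTIONS]
        )

    # Collect the captions; these can be concatenated into a GPT-3.5 prompt
    # such as "Write a short ad for a product with these elements: ..."
    captions = [c.text for c in result.dense_captions.list]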

 

The need for Semantic Search

At some point, we have all experienced the frustration of trying to locate a specific piece of information in a database, only to come up empty-handed. This is often because many databases only allow keyword or phrase searches and may not return results that are similar or related to the query.
This is where semantic search proves useful. By utilizing embeddings and OpenAI, semantic search enables users to find items that are similar in meaning rather than just in their textual content.

 

What are embeddings?

Embeddings are another important aspect of using OpenAI.

Embeddings are mathematical representations of words or phrases that can be used to compare different pieces of text. This allows the model to understand the meaning behind the words and generate more accurate responses.
An OpenAI embedding is a vector of floating-point numbers that represents the “meaning” of a piece of text.

Embeddings are commonly used for:

  • Search (where results are ranked by relevance to a query string)
  • Clustering (where text strings are grouped by similarity)
  • Recommendations (where items with related text strings are recommended)
  • Anomaly detection (where outliers with little relatedness are identified)
  • Diversity measurement (where similarity distributions are analyzed)
  • Classification (where text strings are classified by their most similar label)

The distance between two vectors measures their relatedness. Small distances suggest high relatedness and large distances suggest low relatedness.

Azure OpenAI offers text-embedding-ada-002, and I recommend using it for creating embeddings.
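As a small sketch of how this looks in practice (assuming the model is deployed under the name text-embedding-ada-002, and that the openai package is configured for Azure as in the prompt examples above), cosine similarity is a common way to measure relatedness between two embeddings:

    import numpy as np
    # openai is imported and configured for Azure as shown earlier.

    def embed(text):
        response = openai.Embedding.create(engine="text-embedding-ada-002", input=text)
        return np.array(response["data"][0]["embedding"])

    def cosine_similarity(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    a = embed("How do I reset my password?")
    b = embed("I forgot my login credentials")
    c = embed("Best pizza toppings")

    print(cosine_similarity(a, b))  # higher: related meaning
    print(cosine_similarity(a, c))  # lower: unrelated meaning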

 

What are Vector stores?

Vector stores are databases that store embeddings for different phrases or words. By using a vector store, developers can quickly access pre-computed embeddings, which can save time and improve the accuracy of the model’s responses. Vector stores are especially useful for applications that require fast responses, such as chatbots or voice assistants.
Choosing the right vector store for your project depends on several factors, such as the size of your dataset, the complexity of your queries, the similarity measure you want to use, and the features you need. Some vector stores are designed to be easy to use and scalable, while others are more complex but offer more features. It is important to evaluate each vector store against your specific needs, and to weigh factors such as cost, performance, and support.
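To make this concrete, here is a minimal sketch with FAISS, the open-source vector store used in several examples below; the vectors are random stand-ins for real embeddings:

    import faiss
    import numpy as np

    dim = 1536  # text-embedding-ada-002 produces 1,536-dimensional vectors
    index = faiss.IndexFlatL2(dim)  # exact L2 search; fine for small datasets

    vectors = np.random.rand(100, dim).astype("float32")  # stand-in embeddings
    index.add(vectors)

    query = np.random.rand(1, dim).astype("float32")  # stand-in query embedding
    distances, ids = index.search(query, 5)  # 5 nearest neighbours
    print(ids[0], distances[0])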

What is LangChain?

Harrison Chase's LangChain is a powerful Python library that simplifies the process of building NLP applications using large language models. Its primary goal is to create intelligent agents that can understand and execute human language instructions. With LangChain, you can connect to a variety of data and computation sources and build applications that perform NLP tasks on domain-specific data sources, private repositories, and more.

As of May 2023, the LangChain GitHub repository has garnered over 42,000 stars and has received contributions from more than 270 developers worldwide.

In the examples below I use the LangChain library in many cases, mostly for:

  • LLMs and Prompts

This includes prompt management, prompt optimization, a generic interface for all LLMs, and common utilities for working with LLMs like Azure OpenAI. It supports a variety of LLMs, including OpenAI, Llama, and GPT4All.

  • Data Augmented Generation

Data Augmented Generation involves specific types of chains that first interact with an external data source to fetch data for use in the generation step. Examples include summarization of long pieces of text and question answering over specific data sources. LangChain’s Document Loaders and Utils modules facilitate connecting to sources of data and computation. If you have a mix of text files, PDF documents, HTML web pages, etc., you can use LangChain’s document loaders, as sketched below.
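A short sketch (with 2023-era LangChain imports; module paths have moved in later releases, and the file names are placeholders):

    from langchain.document_loaders import PyPDFLoader, TextLoader
    from langchain.text_splitter import RecursiveCharacterTextSplitter

    # Load a mix of sources, then split into chunks suitable for embedding.
    docs = TextLoader("notes.txt").load() + PyPDFLoader("report.pdf").load()
    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
    chunks = splitter.split_documents(docs)
    print(f"{len(chunks)} chunks ready for embedding")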

 

Code examples

Find duplicates using embeddings – use Azure OpenAI embeddings to find similarities between pieces of text.

Ask your file repository questions – use Azure OpenAI to scan candidates’ resumes, create embeddings, and save them to an in-memory vector store (FAISS, an open-source vector store), then query using GPT-3.5.

Ask your PDF files questions – use Azure OpenAI to scan PDF files, create embeddings, and save them to an in-memory vector store (FAISS, an open-source vector store), then query using GPT-3.5.

Index your documentation URLs and ask questions with GPT – use Azure OpenAI to scan URLs, create embeddings, and save them to an in-memory vector store (FAISS, an open-source vector store), then query using GPT-3.5.
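The three examples above share one pattern: load documents, embed them, store them in FAISS, and answer with a retrieval chain. A rough sketch with 2023-era LangChain (the deployment names are placeholders, and the OPENAI_* environment variables are assumed to be set for Azure):

    from langchain.embeddings import OpenAIEmbeddings
    from langchain.vectorstores import FAISS
    from langchain.chains import RetrievalQA
    from langchain.chat_models import AzureChatOpenAI

    # `chunks` would come from document loaders, as sketched earlier.
    embeddings = OpenAIEmbeddings(deployment="text-embedding-ada-002")
    store = FAISS.from_documents(chunks, embeddings)

    llm = AzureChatOpenAI(deployment_name="gpt-35-turbo", temperature=0)
    qa = RetrievalQA.from_chain_type(llm=llm, retriever=store.as_retriever())
    print(qa.run("Which candidates have Python experience?"))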

Ask your Analytical DB (Azure Data Explorer, aka Kusto) questions in English – generate KQL (Kusto Query Language) from the user’s input using the OpenAI GPT-3.5 Turbo model, then query Azure Data Explorer (Kusto) using the generated KQL.

Ask your MongoDB questions in English – generate MQL (MongoDB Query Language) from the user’s input using the OpenAI GPT-3.5 Turbo model, then query your MongoDB database using the generated MQL.

Ask your SQL DB questions in English using LangChain agents – this example shows how to retrieve data from Azure SQL DB using GPT: questions asked in plain English are “translated” by GPT into SQL using the LangChain SQLDatabaseToolkit, as sketched below.
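A rough sketch of the agent setup (2023-era LangChain; the connection string and deployment name are placeholders):

    from langchain.agents import create_sql_agent
    from langchain.agents.agent_toolkits import SQLDatabaseToolkit
    from langchain.sql_database import SQLDatabase
    from langchain.chat_models import AzureChatOpenAI

    db = SQLDatabase.from_uri(
        "mssql+pyodbc://<user>:<password>@<server>/<db>?driver=ODBC+Driver+18+for+SQL+Server"
    )
    llm = AzureChatOpenAI(deployment_name="gpt-35-turbo", temperature=0)
    toolkit = SQLDatabaseToolkit(db=db, llm=llm)

    # The agent inspects the schema, writes SQL, runs it, and answers in English.
    agent = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
    agent.run("How many orders were placed last month?")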


In conclusion, OpenAI is a powerful tool that can help businesses and developers make the most of large language models. By understanding how to create effective prompts, use embeddings, and work with vector stores and Azure AI services, developers can create more accurate and versatile AI applications.

 

Cheers

Denise
