Introduction
Prerequisites
- Azure OpenAI Service: the LLM we will be using for our simple application
- Visual Studio Code: the IDE
- The blog's GitHub Repository, for the accompanying code
What are they?
- Semantic Kernel: an open-source SDK that allows you to orchestrate your existing code and more with AI.
- LangChain: a framework for building LLM applications easily, which also gives you insight into how your application works
- PromptFlow: a set of developer tools that helps you build end-to-end LLM applications. Using PromptFlow, you can take your application from an idea to production.
Semantic Kernel
- Kernel: the kernel is at the center stage of your development process as it contains the plugins and services necessary for you to develop your AI application.
- Planners: special prompts that let an agent generate a plan for completing a task, for example by using function calling.
- Plugins: these allow you to give your copilot skills, using both code and prompts.
- Memories: in addition to connecting your application to LLMs and creating various tasks, Semantic Kernel has a memory feature that stores context and embeddings, giving additional information to your prompts (see the sketch after this list).
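The steps below exercise the kernel and plugins; for memories, here is a minimal sketch of the idea, assuming Semantic Kernel 1.0's semantic_kernel.memory module and an Azure OpenAI embedding deployment configured in your .env:
from semantic_kernel.connectors.ai.open_ai import AzureTextEmbedding
from semantic_kernel.memory import SemanticTextMemory, VolatileMemoryStore

# An in-memory store paired with an Azure OpenAI embedding service
# (assumes your embedding deployment details are set in .env)
memory = SemanticTextMemory(
    storage=VolatileMemoryStore(),
    embeddings_generator=AzureTextEmbedding(service_id="embedding"),
)

# Save a fact, then retrieve it later by semantic similarity
await memory.save_information(collection="notes", id="greeting", text="Habari ya mchana means good afternoon.")
results = await memory.search(collection="notes", query="How do I greet someone in the afternoon?")
print(results[0].text)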
1. Install the necessary libraries using:
pip install semantic-kernel==1.0.2
2. Add your keys and endpoint to .env, using the format in .env.example.
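For example, your .env will hold values along these lines (the variable names here are an assumption based on Semantic Kernel's default settings; replace the placeholders with your own):
AZURE_OPENAI_API_KEY="<your-api-key>"
AZURE_OPENAI_ENDPOINT="https://<your-resource>.openai.azure.com/"
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME="<your-deployment-name>"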
3. Create a services.py file to bring your LLM into your application:
"""
This module defines an enumeration representing different services.
"""
from enum import Enum
class Service(Enum):
"""
Attributes:
OpenAI (str): Represents the OpenAI service.
AzureOpenAI (str): Represents the Azure OpenAI service.
HuggingFace (str): Represents the HuggingFace service.
"""
OpenAI = "openai"
AzureOpenAI = "azureopenai"
HuggingFace = "huggingface"
4. Create a new Kernel where you will host your application, then import Service, which will allow you to add your LLM to the application.
# Import the Kernel class from the semantic_kernel module
from semantic_kernel import Kernel
# Create an instance of the Kernel class
kernel = Kernel()
from services import Service
# Select a service to use for this notebook (available services: OpenAI, AzureOpenAI, HuggingFace)
selectedService = Service.AzureOpenAI
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion
kernel.add_service(
    AzureChatCompletion(service_id="default"),
)
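Here AzureChatCompletion picks up your endpoint, API key and deployment name from the environment variables set via your .env file, which is why only a service_id is passed explicitly.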
5. Next, we will create and add our plugin. We have the plugin folder TranslatePlugin; within it is our Swahili plugin, with its config and prompt text files, which guide the model on how to perform its task. For illustration, the prompt file might look like the sketch below.
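This is a minimal sketch of what the Swahili prompt file could contain, not the repo's exact contents; {{$question}}, {{$time_of_day}} and {{$style}} use Semantic Kernel's prompt variable syntax:
Translate the following into Kiswahili: {{$question}}
Use the {{$time_of_day}} to choose an appropriate greeting and match the {{$style}} tone.
After translating, add an English translation of the task in parentheses.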
# Add the TranslatePlugin from a directory to the kernel
kernel.add_plugin(parent_directory="", plugin_name="TranslatePlugin")
# Invoke the plugin's Swahili function with our prompt variables
result = await kernel.invoke(
    plugin_name="TranslatePlugin",
    function_name="Swahili",
    question="what is the WiFi password",
    time_of_day="afternoon",
    style="professional",
)
print(result)
6. The output will be the requested translation.
LangChain
- Model I/O: this is where you can bring in your LLM and format its inputs and outputs
- Retrieval: In RAG applications, this component specifically helps you load your data, connect with vector databases and transform your documents to meet the needs of your application.
- Other higher-level components:
- Tools: these allow you to create integrations with external services and applications
- Agents: these act as a guide, deciding which step the application should take next
- Chains: these are a sequence of calls linking various components to create LLM apps
1. Install the necessary libraries:
pip install langchain langchain-openai azure-identity
2. Log in to the Azure CLI using az login --use-device-code and authenticate your connection.
3. Add your keys and endpoint from .env to your notebook, then set the environment variables for your API key and API type for authentication.
import os
from azure.identity import DefaultAzureCredential
# Get the Azure Credential
credential = DefaultAzureCredential()
# Set the API type to `azure_ad`
os.environ["OPENAI_API_TYPE"] = "azure_ad"
# Set the API_KEY to the token from the Azure credential
os.environ["OPENAI_API_KEY"] = credential.get_token("https://cognitiveservices.azure.com/.default").token
4. Create your model class and configure it to interact with Azure OpenAI:
# Import the necessary modules
from langchain_openai import AzureChatOpenAI
# Create the model using the API version and deployment name from your .env
model = AzureChatOpenAI(
    openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],
    azure_deployment=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT_NAME"],
)
5. Use ChatPromptTemplate to curate your prompt:
# Import the necessary modules
from langchain_core.prompts import ChatPromptTemplate
# Create a ChatPromptTemplate object with messages
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a helpful assistant that translates tasks into Kiswahili. Follow these guidelines:\n"
            "The translation must be accurate and culturally appropriate.\n"
            "Use the {time_of_day} to determine the appropriate greeting to use during translation.\n"
            "Be creative and accurate to communicate effectively.\n"
            "Incorporate the {style} suggestion, if provided, to determine the tone for the translation.\n"
            "After translating, add an English translation of the task in the specified language.\n"
            "For example, if the question is 'what is the WiFi password', your response should be:\n"
            "'Habari ya mchana! Tafadhali nipe nenosiri la WiFi.' (Translation: Good afternoon! Please provide me with the WiFi password.)"
        ),
        ("human", "{question}"),
    ]
)
6. Chain your model and prompt together to get a response:
# Chain the prompt and the model together
chain = prompt | model
# Invoke the chain with the input parameters
response = chain.invoke(
    {
        "question": "what is the WiFi password",
        "time_of_day": "afternoon",
        "style": "professional",
    }
)
# Print the translated response
print(response.content)
7. The output will be the requested translation.
PromptFlow
1. First, install the promptflow extension in Visual Studio Code.
2. Next, ensure you install the necessary dependencies and libraries you will need for the project.
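For the Python dependencies, this is typically the promptflow SDK and its built-in tools:
pip install promptflow promptflow-tools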
3. In our case, we will build a chat flow from a template. In the extension, create a new chat flow for the application.
4. Once the flow is ready, we can open flow.dag.yaml and click on the visual editor to see how our application is structured.
5. We will need to connect to our LLM; you can do this by creating a new connection. Update your Azure OpenAI endpoint and your connection name, then click create connection, and your connection will be ready.
6. Update the connection and run the flow to test your application.
7. Update the chat.jinja2 file to customize the prompt template.
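For reference, a trimmed-down chat.jinja2 might look like the following; this is illustrative, based on the default chat flow scaffold rather than the Tutor's exact prompt:
system:
You are a helpful AI tutor that translates tasks into Kiswahili.

{% for item in chat_history %}
user:
{{item.inputs.question}}
assistant:
{{item.outputs.answer}}
{% endfor %}

user:
{{question}}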
8. Edit the flow.dag.yaml file to add more functionality to your flow; in our case, for the Tutor, we will add more inputs, as sketched below.
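A sketch of the extra inputs in flow.dag.yaml, assuming the Tutor takes the same time_of_day and style inputs used in the earlier examples:
inputs:
  question:
    type: string
    is_chat_input: true
  time_of_day:
    type: string
    default: afternoon
  style:
    type: string
    default: professional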
9. Finally, run your flow in interactive mode and see your AI Tutor come to life.
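You can start interactive mode from the Visual Studio Code extension, or from the terminal with the promptflow CLI (run from the flow's directory):
pf flow test --flow . --interactive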
In Summary:
- GitHub Repository: https://github.com/BethanyJep/Swahili-Tutor
- Semantic Kernel: https://github.com/microsoft/semantic-kernel
- Semantic Kernel documentation: https://learn.microsoft.com/en-us/semantic-kernel/
- Promptflow documentation: https://microsoft.github.io/promptflow/
- LangChain documentation: https://python.langchain.com/