Building a Basic Chatbot with Azure OpenAI
Overview

In this tutorial, we'll build a simple chatbot that uses Azure OpenAI to generate responses to user queries. To create a basic chatbot, we need to set up a language model resource that enables conversation capabilities. In this tutorial, we will:

- Set up the Azure OpenAI resource using the Azure AI Foundry portal.
- Retrieve the API key needed to connect the resource to your chatbot application.

Once the API key is configured in your code, you will be able to integrate the language model into your chatbot and enable it to generate responses. By the end of this tutorial, you'll have a working chatbot that can generate responses using the Azure OpenAI model.

Signing In and Setting Up Your Azure AI Foundry Workspace

Signing In to Azure AI Foundry

1. Open the Azure AI Foundry page in your web browser.
2. Log in to your Azure account. If you don't have an account, you can sign up.

Setting Up Your Azure AI Foundry Workspace

1. Select + Create project to create a new project.
2. Perform the following tasks:
   - Enter Project name. It must be a unique value.
   - Select the Hub you'd like to use (create a new one if needed).
3. Select Create.

Setting Up the Azure OpenAI Resource in Azure AI Foundry

In this step, you'll learn how to set up the Azure OpenAI resource in Azure AI Foundry. Azure OpenAI provides pre-trained language models that can generate responses to user queries. We'll be using one of them in our chatbot.

1. Select Models + endpoints from the left side menu. On this page, you can deploy language models and set up Azure AI resources. In this step, we will deploy the Azure OpenAI GPT-4o language model.
2. Select + Deploy model.
3. Select Deploy base model. In this tutorial, we will deploy the GPT-4o model.
4. Select GPT-4o.
5. Select Confirm.
6. Select Deploy. The model will be deployed. Once the deployment is complete, you will see the model listed on the Models + endpoints page.
7. Now that the model is deployed, you can retrieve the API key needed to connect the model to your chatbot application. Select the model you deployed on the Models + endpoints page.
8. On the model details page, you can view information about the model, including the API key. We will come back to this page later to add the required information into the environment variables.

Setting Up the Project and Installing the Libraries

Now, you will create a folder to work in and set up a virtual environment to develop a program.

Creating a Folder to Work Inside It

1. Open a terminal window and type the following command to create a folder named basic-chatbot in the default path.

   mkdir basic-chatbot

2. Type the following command inside your terminal to navigate to the basic-chatbot folder you created.

   cd basic-chatbot

Creating a Virtual Environment

1. Type the following command inside your terminal to create a virtual environment named .venv.

   python -m venv .venv

2. Type the following command inside your terminal to activate the virtual environment.

   .venv\Scripts\activate.bat

   (On macOS or Linux, use source .venv/bin/activate instead.)

NOTE If it worked, you should see (.venv) before the command prompt.

Installing the Required Packages

Type the following command inside your terminal to install the required packages.

- openai: A Python library that provides integration with the Azure OpenAI API.
- python-dotenv: A Python library for managing environment variables stored in an .env file.

   pip install openai python-dotenv

Setting up the Project in Visual Studio Code

To create a basic chatbot program, you will need two files:

- example.py: This file will contain the code to interact with Azure resources.
- .env: This file will store the Azure credentials and configuration details.

NOTE Purpose of the .env File: The .env file is essential for storing the Azure information required to connect and use the resources you created. By keeping the Azure credentials in the .env file, you can ensure a secure and organized way to manage sensitive information.

Setting Up example.py File

1. Open Visual Studio Code.
2. Select File from the menu bar.
3. Select Open Folder.
4. Select the basic-chatbot folder that you created, which is located at C:\Users\yourUserName\basic-chatbot.
5. In the left pane of Visual Studio Code, right-click and select New File to create a new file named example.py.
6. Add the following code to the example.py file to import the required libraries, initialize the client, and run the chat loop.

```python
from openai import AzureOpenAI
from dotenv import load_dotenv
import os

# Load environment variables from the .env file
load_dotenv()

# Retrieve environment variables
AZURE_OPENAI_ENDPOINT = os.getenv("AZURE_OPENAI_ENDPOINT")
AZURE_OPENAI_API_KEY = os.getenv("AZURE_OPENAI_API_KEY")
AZURE_OPENAI_MODEL_NAME = os.getenv("AZURE_OPENAI_MODEL_NAME")
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME = os.getenv("AZURE_OPENAI_CHAT_DEPLOYMENT_NAME")
AZURE_OPENAI_API_VERSION = os.getenv("AZURE_OPENAI_API_VERSION")

# Initialize Azure OpenAI client
client = AzureOpenAI(
    api_key=AZURE_OPENAI_API_KEY,
    api_version=AZURE_OPENAI_API_VERSION,
    base_url=f"{AZURE_OPENAI_ENDPOINT}/openai/deployments/{AZURE_OPENAI_CHAT_DEPLOYMENT_NAME}"
)

print("Chatbot: Hello! How can I assist you today? Type 'exit' to end the conversation.")

while True:
    user_input = input("You: ")
    if user_input.lower() == "exit":
        print("Chatbot: Ending the conversation. Have a great day!")
        break

    response = client.chat.completions.create(
        model=AZURE_OPENAI_MODEL_NAME,
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_input}
        ],
        max_tokens=200
    )
    print("Chatbot:", response.choices[0].message.content.strip())
```

Setting Up .env File

To set up your development environment, we will create a .env file and store the necessary credentials directly.

NOTE Complete folder structure:

```
YourUserName
└── basic-chatbot
    ├── example.py
    └── .env
```

1. In the left pane of Visual Studio Code, right-click and select New File to create a new file named .env.
2. Add the following code to the .env file to include your Azure information.

```
AZURE_OPENAI_API_KEY=your_azure_openai_api_key
AZURE_OPENAI_ENDPOINT=https://your_azure_openai_endpoint
AZURE_OPENAI_MODEL_NAME=your_model_name
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME=your_deployment_name
AZURE_OPENAI_API_VERSION=your_api_version
```

Retrieving Environment Variables from Azure AI Foundry

Now, you will retrieve the required information from Azure AI Foundry and update the .env file.

1. Go to the Models + endpoints page and select your deployed model.
2. On the model details page, copy the following information into the .env file:
   - AZURE_OPENAI_API_KEY
   - AZURE_OPENAI_ENDPOINT
   - AZURE_OPENAI_MODEL_NAME
   - AZURE_OPENAI_CHAT_DEPLOYMENT_NAME
3. Paste this information into the .env file in the respective placeholders.

Running the Chatbot Program

Type the following command inside your terminal to run the program and see if it can answer questions.

   python example.py

Interact with the chatbot by typing your questions or messages. The chatbot will generate responses based on the Azure OpenAI model you deployed.
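The loop above sends each question on its own, so the model does not remember earlier turns. As a possible extension (a sketch that is not part of the tutorial code, but reuses the same client and environment variables from example.py), you can keep the conversation history in a messages list so the chatbot stays aware of context:

```python
# Sketch: conversation with memory, reusing `client` and AZURE_OPENAI_MODEL_NAME from example.py
messages = [{"role": "system", "content": "You are a helpful assistant."}]

print("Chatbot: Hello! Type 'exit' to end the conversation.")
while True:
    user_input = input("You: ")
    if user_input.lower() == "exit":
        print("Chatbot: Ending the conversation. Have a great day!")
        break

    # Append the user's turn, then send the whole history so far
    messages.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(
        model=AZURE_OPENAI_MODEL_NAME,
        messages=messages,
        max_tokens=200
    )
    reply = response.choices[0].message.content.strip()

    # Store the assistant's turn so the next request includes it
    messages.append({"role": "assistant", "content": reply})
    print("Chatbot:", reply)
```

Note that the history grows with every turn, so a real application would trim old messages to stay within the model's context window.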
NOTE You can find the full example of this chatbot, including the code and .env template, in my GitHub repository: GitHub Repository

Using Advanced Reasoning Model on EdgeAI Part 1 - Quantization, Conversion, Performance
DeepSeek-R1 is very popular, and it can achieve the same capabilities as OpenAI o1 in advanced reasoning. Microsoft has also added DeepSeek-R1 models to Azure AI Foundry and GitHub Models. We can compare DeepSeek-R1 with other available models through the GitHub Models Playground.

Note This series revolves around the deployment of SLMs to edge devices ("Edge AI"). We will focus on the deployment of advanced reasoning models, with different application scenarios. You can learn more in the following session: AI Tour BRK453.

In this experiment we want to deploy advanced reasoning models to the edge, so that they can run on edge devices with limited computing power and in offline environments. At this time, the recommendation is to use the traditional ONNX model. We can use Microsoft Olive to convert the DeepSeek-R1 Distill model. Getting started with Microsoft Olive is very straightforward. Install the Microsoft Olive library through the command line and Python 3.10+ (recommended):

   pip install olive-ai

The DeepSeek-R1 Distill model series comes in different parameter sizes such as 1.5B, 7B, 8B, 14B, 32B, 70B, etc. This article is mainly based on the 1.5B, 7B, and 14B models (so a Small Language Model).

CPU Inference

Let's discuss 1.5B and 7B, which are the models with lower parameter counts. We can directly use the CPU for inference to test the effect (hardware environment: Azure DevBox, AMD EPYC 7763 64-Core + 64GB Memory + 2T SSD).

Quantization conversion

```
olive auto-opt --model_name_or_path <Your DeepSeek-R1-Distill-Qwen-1.5B/7B local location> --output_path <Your Convert ONNX INT4 Model local location> --device cpu --provider CPUExecutionProvider --precision int4 --use_model_builder --log_level 1
```

You can download it directly from my Hugging Face repo (Note: This model is for testing and has not been fully tested by AI Content Safety or provided as an Official Model):

- DeepSeek-R1-Distill-Qwen-1.5B-ONNX-INT4-CPU
- DeepSeek-R1-Distill-Qwen-7B-ONNX-INT4-CPU

Running with ONNX Runtime GenAI

Install ONNX Runtime GenAI and the ONNX Runtime CPU support libraries (a minimal streaming example is sketched at the end of this section):

   pip install onnxruntime-genai
   pip install onnxruntime

Sample Code

- https://github.com/kinfey/EdgeAIForAdvancedReasoning/blob/main/notebook/demo-1.5b.ipynb
- https://github.com/kinfey/EdgeAIForAdvancedReasoning/blob/main/notebook/demo-7b.ipynb

Performance comparison 1.5B vs 7B

We compare two different inference scenarios:

1. explain 1+1=2
   - 1.5B quantized ONNX model: memory occupied, time consumption and number of tokens generated
   - 7B quantized ONNX model: memory occupied, time consumption and number of tokens generated
2. Find all pairwise different isomorphism groups with order 147 and no elements with order 49
   - 1.5B quantized ONNX model: memory occupied, time consumption and number of tokens generated
   - 7B quantized ONNX model: memory occupied, time consumption and number of tokens generated

Results of the numbers

Through the test, we can see that the 1.5B model of DeepSeek is more suitable for CPU inference and can be deployed on traditional PCs or IoT devices. As for 7B, although it produces better reasoning results, it is not very efficient when running on the CPU alone.
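For reference, here is a minimal streaming sketch of driving one of the converted INT4 models with ONNX Runtime GenAI. It is not taken from the linked notebooks; the model path is a placeholder, the prompt is not wrapped in the model's chat template, and the exact generator calls vary slightly between onnxruntime-genai releases (older versions use params.input_ids and generator.compute_logits() instead of append_tokens()):

```python
import onnxruntime_genai as og

# Placeholder path: point this at the folder produced by `olive auto-opt`
model = og.Model("./DeepSeek-R1-Distill-Qwen-1.5B-ONNX-INT4-CPU")
tokenizer = og.Tokenizer(model)
stream = tokenizer.create_stream()

params = og.GeneratorParams(model)
params.set_search_options(max_length=1024)

generator = og.Generator(model, params)
generator.append_tokens(tokenizer.encode("explain 1+1=2"))

# Print tokens as they are generated
while not generator.is_done():
    generator.generate_next_token()
    print(stream.decode(generator.get_next_tokens()[0]), end="", flush=True)
print()
```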
GPU Inference

It is ideal if we have a GPU on the edge device. We can quantize and convert the model to ONNX for CPU inference through Microsoft Olive; of course, it can also be converted to a model for GPU inference. Here I take the 14B DeepSeek-R1-Distill-Qwen-14B as an example and make an inference comparison with Microsoft's Phi-4-14B.

Quantization conversion

```
olive auto-opt --model_name_or_path <Your Phi-4-14B or DeepSeek-R1-Distill-Qwen-14B local path > --output_path <Your converted Phi-4-14B or DeepSeek-R1-Distill-Qwen-14B local path > --device gpu --provider CUDAExecutionProvider --precision int4 --use_model_builder --log_level 1
```

You can download it directly from my Hugging Face repo (Note: This model is for testing and has not been fully tested by AI Content Safety and is not an Official Model):

- DeepSeek-R1-Distill-Qwen-14B-ONNX-INT4-GPU
- Phi-4-14B-ONNX-INT4-GPU

Running with ONNX Runtime GenAI CUDA

Install ONNX Runtime GenAI and the ONNX Runtime GPU support libraries:

   pip install onnxruntime-genai-cuda
   pip install onnxruntime-gpu

Compare the results in the GPU environment with Gradio

It is recommended to use a GPU with more than 8 GB of memory. To broaden the comparison, we compare Phi-4-14B-ONNX-INT4-GPU and DeepSeek-R1-Distill-Qwen-14B-ONNX-INT4-GPU to see the different results. We also include OpenAI o1-mini (it is recommended to access o1-mini through GitHub Models).

Sample Code

- https://github.com/kinfey/EdgeAIForAdvancedReasoning/blob/main/notebook/Performance_AdvancedReasoning_ONNX_CPU.ipynb

You can test any prompt in Gradio to compare the results of Phi-4-14B-ONNX-INT4-GPU, DeepSeek-R1-Distill-Qwen-14B-ONNX-INT4-GPU and OpenAI o1-mini. DeepSeek-R1 reduces the cost of inference and produces more instructive results on professional problems, but Phi-4-14B also has advantages in reasoning and uses lower computing power to complete inference. As for OpenAI o1-mini, it is more comprehensive and can handle all kinds of problems. If you want to deploy to an edge device, Phi-4-14B and quantized DeepSeek-R1 are good choices for you.

This blog is just a simple test and the first in this series. Please share your feedback and continue the discussion in the Microsoft AI Discord Channel. Feel free to send me a message or comment. We look forward to sharing more around the opportunity of EdgeAI and more content in this series.

Resources

- DeepSeek-R1 in GitHub Models: https://github.com/marketplace/models/azureml-deepseek/DeepSeek-R1
- DeepSeek-R1 in Azure AI Foundry: https://ai.azure.com/explore/models/DeepSeek-R1/version/1/registry/azureml-deepseek
- Phi-4-14B on Hugging Face: https://huggingface.co/microsoft/phi-4
- Learn about Microsoft Olive: https://github.com/microsoft/olive
- Learn about ONNX Runtime GenAI: https://github.com/microsoft/onnxruntime-genai
- Microsoft AI Discord Channel
- BRK453 Exploring cutting-edge models: LLMs, SLMs, local development and more: https://aka.ms/aitour/brk453

Fine-Tuning and Deploying Phi-3.5 Model with Azure and AI Toolkit
What is Phi-3.5?

Phi-3.5 is a state-of-the-art language model with strong multilingual capabilities. It is designed to handle multiple languages with high proficiency, making it a versatile tool for Natural Language Processing (NLP) tasks across different linguistic backgrounds.

Key Features of Phi-3.5

The core features of the Phi-3.5 model:

- Multilingual Capabilities: The model supports a wide variety of languages, including major world languages such as English, Spanish, Chinese, French, and others. For example, it can translate a sentence or document from one language to another without losing context or meaning.
- Fine-Tuning Ability: The model can be fine-tuned for specific use cases. For instance, in a customer support setting, the Phi-3.5 model can be fine-tuned to understand the nuances of different languages used by customers across the globe, improving response accuracy.
- High Performance in NLP Tasks: Phi-3.5 is optimized for tasks like text classification, machine translation, summarization, and more. It has superior performance in handling large-scale datasets and producing coherent, contextually correct language outputs.

Applications in Real-World Scenarios

A few real-world applications where the Phi-3.5 model can be utilized:

- Customer Support Chatbots: For companies with global customer bases, the model's multilingual support can enhance chatbot capabilities, allowing for real-time responses in a customer's native language, no matter where they are located.
- Content Creation for Global Markets: Businesses can use Phi-3.5 to automatically generate or translate content for different regions. For example, marketing copy can be adapted to fit cultural and linguistic nuances in multiple languages.
- Document Summarization Across Languages: The model can be used to summarize long documents or articles written in one language and then translate the summary into another language, improving access to information for non-native speakers.

Why Choose Phi-3.5 for Your Project?

- Versatility: It's not limited to just one or two languages but performs well across many.
- Customization: The ability to fine-tune it for particular use cases or industries makes it highly adaptable.
- Ease of Deployment: With tools like Azure ML and Ollama, deploying Phi-3.5 in the cloud or locally is accessible even for smaller teams.

Objective of This Blog

Small Language Models (SLMs) are at the forefront of advancements in Natural Language Processing, offering fine-tuned, high-performance solutions for specific tasks and languages. Among these, the Phi-3.5 model has emerged as a powerful tool, excelling in its multilingual capabilities. Whether you're working with English, Spanish, Mandarin, or any other major world language, Phi-3.5 offers robust, reliable language processing that adapts to various real-world applications. This makes it an ideal choice for businesses looking to deploy multilingual chatbots, automate content generation, or translate customer interactions in real time. Moreover, its fine-tuning ability allows for customization, making Phi-3.5 versatile across industries and tasks.

Customization and Fine-Tuning for Different Applications

The Phi-3.5 model is not just limited to general language understanding tasks.
It can be fine-tuned for specific applications, industries, and languages, allowing users to tailor its performance to meet their needs.

- Customizable for Industry-Specific Use Cases: With fine-tuning, the model can be trained further on domain-specific data to handle particular use cases like legal document translation, medical records analysis, or technical support. Example: A healthcare company can fine-tune Phi-3.5 to understand medical terminology in multiple languages, enabling it to assist in processing patient records or generating multilingual health reports.
- Adapting for Specialized Tasks: You can train Phi-3.5 to perform specialized tasks like sentiment analysis, text summarization, or named entity recognition in specific languages. Fine-tuning helps enhance the model's ability to handle unique text formats or requirements. Example: A marketing team can fine-tune the model to analyse customer feedback in different languages to identify trends or sentiment across various regions. The model can quickly classify feedback as positive, negative, or neutral, even in less widely spoken languages.

Applications in Real-World Scenarios

To illustrate the versatility of Phi-3.5, here are some real-world applications where this model excels, demonstrating its multilingual capabilities and customization potential:

Case Study 1: Multilingual Customer Support Chatbots

Many global companies rely on chatbots to handle customer queries in real-time. With Phi-3.5's multilingual abilities, businesses can deploy a single model that understands and responds in multiple languages, cutting down on the need to create language-specific chatbots. Example: A global airline can use Phi-3.5 to power its customer service bot. Passengers from different countries can inquire about their flight status or baggage policies in their native languages, whether it's Japanese, Hindi, or Portuguese, and the model responds accurately in the appropriate language.

Case Study 2: Multilingual Content Generation

Phi-3.5 is also useful for businesses that need to generate content in different languages. For example, marketing campaigns often require creating region-specific ads or blog posts in multiple languages. Phi-3.5 can help automate this process by generating localized content that is not just translated but adapted to fit the cultural context of the target audience. Example: An international cosmetics brand can use Phi-3.5 to automatically generate product descriptions for different regions. Instead of merely translating a product description from English to Spanish, the model can tailor the description to fit cultural expectations, using language that resonates with Spanish-speaking audiences.

Case Study 3: Document Translation and Summarization

Phi-3.5 can be used to translate or summarize complex documents across languages. Its ability to preserve meaning and context across languages makes it ideal for industries where accuracy is crucial, such as legal or academic fields. Example: A legal firm working on cross-border cases can use Phi-3.5 to translate contracts or legal briefs from German to English, ensuring the context and legal terminology are accurately preserved. It can also summarize lengthy documents in multiple languages, saving time for legal teams.

Fine-Tuning Phi-3.5 Model

Fine-tuning a language model like Phi-3.5 is a crucial step in adapting it to perform specific tasks or cater to specific domains.
This section will walk through what fine-tuning is, its importance in NLP, and how to fine-tune the Phi-3.5 model using the Azure Model Catalog for different languages and tasks. We'll also explore a code example and best practices for evaluating and validating the fine-tuned model.

What is Fine-Tuning?

Fine-tuning refers to the process of taking a pre-trained model and adapting it to a specific task or dataset by training it further on domain-specific data. In the context of NLP, fine-tuning is often required to ensure that the language model understands the nuances of a particular language, industry-specific terminology, or a specific use case.

Why Fine-Tuning is Necessary

Pre-trained Large Language Models (LLMs) are trained on diverse datasets and can handle various tasks like text summarization, generation, and question answering. However, they may not perform optimally in specialized domains without fine-tuning. The goal of fine-tuning is to enhance the model's performance on specific tasks by leveraging its prior knowledge while adapting it to new contexts.

Challenges of Fine-Tuning

- Resource Intensiveness: Fine-tuning large models can be computationally expensive, requiring significant hardware resources.
- Storage Costs: Each fine-tuned model can be large, leading to increased storage needs when deploying multiple models for different tasks.

LoRA and QLoRA

To address these challenges, techniques like LoRA (Low-rank Adaptation) and QLoRA (Quantized Low-rank Adaptation) have emerged. Both methods aim to make the fine-tuning process more efficient:

- LoRA: This technique reduces the number of trainable parameters by introducing low-rank matrices into the model while keeping the original model weights frozen. This approach minimizes memory usage and speeds up the fine-tuning process.
- QLoRA: An enhancement of LoRA, QLoRA incorporates quantization techniques to further reduce memory requirements and increase the efficiency of the fine-tuning process. It allows for the deployment of large models on consumer hardware without the extensive resource demands typically associated with full fine-tuning.

```python
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments
from peft import get_peft_model, LoraConfig

# Load a pre-trained model
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

# Configure LoRA
lora_config = LoraConfig(
    r=16,  # Rank
    lora_alpha=32,
    lora_dropout=0.1,
)

# Wrap the model with LoRA
model = get_peft_model(model, lora_config)

# Define training arguments
training_args = TrainingArguments(
    output_dir="./results",
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
)

# Create a Trainer (train_dataset and eval_dataset are assumed to be
# pre-tokenized datasets prepared earlier)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)

# Start fine-tuning
trainer.train()
```

This code outlines how to set up a model for fine-tuning using LoRA, which can significantly reduce the resource requirements while still adapting the model effectively to specific tasks. In summary, fine-tuning with methods like LoRA and QLoRA is essential for optimizing pre-trained models for specific applications in NLP, making it feasible to deploy these powerful models in various domains efficiently.

Why is Fine-Tuning Important in NLP?

- Task-Specific Performance: Fine-tuning helps improve performance for tasks like text classification, machine translation, or sentiment analysis in specific domains (e.g., legal, healthcare).
- Language-Specific Adaptation: Since models like Phi-3.5 are trained on general datasets, fine-tuning helps them handle industry-specific jargon or linguistic quirks.
- Efficient Resource Utilization: Instead of training a model from scratch, fine-tuning leverages pre-trained knowledge, saving computational resources and time.

Steps to Fine-Tune Phi-3.5 in Azure AI Foundry

Fine-tuning the Phi-3.5 model in Azure AI Foundry involves several key steps. Azure provides a user-friendly interface to streamline model customization, allowing you to quickly configure, train, and deploy models.

Step 1: Setting Up the Environment in Azure AI Foundry

1) Access Azure AI Foundry: Log in to Azure AI Foundry. If you don't have an account, you can create one and set up a workspace.
2) Create a New Experiment: Once in Azure AI Foundry, create a new training experiment. Choose the Phi-3.5 model from the pre-trained models provided in the Azure Model Zoo.
3) Set Up the Data for Fine-Tuning: Upload your custom dataset for fine-tuning. Ensure the dataset is in a compatible format (e.g., CSV, JSON). For instance, if you are fine-tuning the model for a customer service chatbot, you could upload customer queries in different languages.

Step 2: Configure Fine-Tuning Settings

1) Select the Training Dataset: Select the dataset you uploaded and link it to the Phi-3.5 model.
2) Configure the Hyperparameters: Set up training hyperparameters like the number of epochs, learning rate, and batch size. You may need to experiment with these settings to achieve optimal performance.
3) Choose the Task Type: Specify the task you are fine-tuning for, such as text classification, translation, or summarization. This helps Azure AI Foundry understand how to optimize the model during fine-tuning.
4) Fine-Tuning for Specific Languages: If you are fine-tuning for a specific language or multilingual tasks, ensure that the dataset is labeled appropriately and contains enough examples in the target language(s). This will allow Phi-3.5 to learn language-specific features effectively.

Step 3: Train the Model

1) Launch the Training Process: Once the configuration is complete, launch the training process in Azure AI Foundry. Depending on the size of your dataset and the complexity of the model, this could take some time.
2) Monitor Training Progress: Use Azure AI Foundry's built-in monitoring tools to track performance metrics such as loss, accuracy, and F1 score. You can view the model's progress during training to ensure that it is learning effectively.

Code Example: Fine-Tuning Phi-3.5 for a Specific Use Case

Here's a code snippet for fine-tuning the Phi-3.5 model using Python and the Azure AI Foundry SDK. In this example, we are fine-tuning the model for a customer support chatbot in multiple languages.

```python
from azure.ai import Foundry
from azure.ai.model import Model

# Initialize Azure AI Foundry
foundry = Foundry()

# Load the Phi-3.5 model
model = Model.load("phi-3.5")

# Set up the training dataset
training_data = foundry.load_dataset("customer_queries_dataset")

# Fine-tune the model
model.fine_tune(training_data, epochs=5, learning_rate=0.001)

# Save the fine-tuned model
model.save("fine_tuned_phi_3.5")
```

Best Practices for Evaluating and Validating Fine-Tuned Models

Once the model is fine-tuned, it's essential to evaluate and validate its performance before deploying it in production.

- Split Data for Validation: Always split your dataset into training and validation sets. This ensures that the model is evaluated on unseen data to prevent overfitting.
- Evaluate Key Metrics: Measure performance using key metrics such as:
  - Accuracy: The proportion of correct predictions.
  - F1 Score: A measure of precision and recall.
  - Confusion Matrix: Helps visualize true vs. false predictions for classification tasks.
- Cross-Language Validation: If the model is fine-tuned for multiple languages, test its performance across all supported languages to ensure consistency and accuracy.
- Test in Production-Like Environments: Before full deployment, test the fine-tuned model in a production-like environment to catch any potential issues.
- Continuous Monitoring and Re-Fine-Tuning: Once deployed, continuously monitor the model's performance and re-fine-tune it periodically as new data becomes available.

Deploying Phi-3.5 Model

After fine-tuning the Phi-3.5 model, the next crucial step is deploying it to make it accessible for real-world applications. This section will cover two key deployment strategies: deploying in Azure for cloud-based scaling and reliability, and deploying locally with the AI Toolkit for simpler offline usage. Each deployment strategy offers its own advantages depending on the use case.

Deploying in Azure

Azure provides a powerful environment for deploying machine learning models at scale, enabling organizations to deploy models like Phi-3.5 with high availability, scalability, and robust security features. Azure AI Foundry simplifies the entire deployment pipeline.

1. Set Up Azure AI Foundry Workspace: Log in to Azure AI Foundry and navigate to the workspace where the Phi-3.5 model was fine-tuned. Go to the Deployments section and create a new deployment environment for the model.
2. Choose Compute Resources:
   - Compute Target: Select a compute target suitable for your deployment. For large-scale usage, it's advisable to choose a GPU-based compute instance. Example: Choose an Azure Kubernetes Service (AKS) cluster for handling large-scale requests efficiently.
   - Configure Scaling Options: Azure allows you to set up auto-scaling based on traffic. This ensures that the model can handle surges in demand without affecting performance.
3. Model Deployment Configuration:
   - Create an Inference Pipeline: In Azure AI Foundry, set up an inference pipeline for your model.
   - Specify the Model: Link the fine-tuned Phi-3.5 model to the deployment pipeline.
   - Deploy the Model: Select the option to deploy the model to the chosen compute resource.
4. Test the Deployment: Once the model is deployed, test the endpoint by sending sample requests to verify the predictions.

Configuration Steps (Compute, Resources, Scaling)

During deployment, Azure AI Foundry allows you to configure essential aspects like compute type, resource allocation, and scaling options.

- Compute Type: Choose between CPU or GPU clusters depending on the computational intensity of the model.
- Resource Allocation: Define the minimum and maximum resources to be allocated for the deployment. For real-time applications, use Azure Kubernetes Service (AKS) for high availability. For batch inference, Azure Container Instances (ACI) is suitable.
- Auto-Scaling: Set up automatic scaling of the compute instances based on the number of requests. For example, configure the deployment to start with 1 node and scale to 10 nodes during peak usage.

Cost Comparison: Phi-3.5 vs. Larger Language Models

When comparing the costs of using Phi-3.5 with larger language models (LLMs), several factors come into play, including computational resources, pricing structures, and performance efficiency.
Here's a breakdown:

Cost Efficiency

- Phi-3.5: Designed as a Small Language Model (SLM), Phi-3.5 is optimized for lower computational costs. It offers competitive performance at a fraction of the cost of larger models, making it suitable for budget-conscious projects. The smaller size (3.8 billion parameters) allows for reduced resource consumption during both training and inference.
- Larger Language Models (e.g., GPT-3.5): Typically require more computational resources, leading to higher operational costs. Larger models may incur additional costs for storage and processing power, especially in cloud environments.

Performance vs. Cost

- Performance Parity: Phi-3.5 has been shown to achieve performance parity with larger models on various benchmarks, including language comprehension and reasoning tasks. This means that for many applications, Phi-3.5 can deliver similar results to larger models without the associated costs.
- Use Case Suitability: For simpler tasks or applications that do not require extensive factual knowledge, Phi-3.5 is often the more cost-effective choice. Larger models may still be preferred for complex tasks requiring deep contextual understanding or extensive factual recall.

Pricing Structure

- Azure Pricing: Phi-3.5 is available through Azure with a pay-as-you-go billing model, allowing users to scale costs based on usage. Pricing details for Phi-3.5 can be found on the Azure pricing page, where users can customize options based on their needs.

Code Example: API Setup and Endpoints for Live Interaction

Below is a Python code snippet demonstrating how to interact with a deployed Phi-3.5 model via an API in Azure:

```python
import requests

# Define the API endpoint and your API key
api_url = "https://<your-azure-endpoint>/predict"
api_key = "YOUR_API_KEY"

# Prepare the input data
input_data = {
    "text": "What are the benefits of renewable energy?"
}

# Make the API request
response = requests.post(api_url, json=input_data, headers={"Authorization": f"Bearer {api_key}"})

# Print the model's response
if response.status_code == 200:
    print("Model Response:", response.json())
else:
    print("Error:", response.status_code, response.text)
```

Deploying Locally with AI Toolkit

For developers who prefer to run models on their local machines, the AI Toolkit provides a convenient solution. The AI Toolkit is a lightweight platform that simplifies local deployment of AI models, allowing for offline usage, experimentation, and rapid prototyping. Deploying the Phi-3.5 model locally using the AI Toolkit is straightforward and can be used for personal projects, testing, or scenarios where cloud access is limited.

Introduction to AI Toolkit

The AI Toolkit is an easy-to-use platform for deploying language models locally without relying on cloud infrastructure. It supports a range of AI models and enables developers to work in a low-latency environment. Advantages of deploying locally with the AI Toolkit:

- Offline Capability: No need for continuous internet access.
- Quick Experimentation: Rapid prototyping and testing without the delays of cloud deployments.

Setup Guide: Installing and Running Phi-3.5 Locally Using AI Toolkit

1. Install AI Toolkit: Go to the AI Toolkit website and download the platform for your operating system (Linux, macOS, or Windows). Install AI Toolkit by running the appropriate installation command in your terminal.
2. Download the Phi-3.5 Model: Once AI Toolkit is installed, you can download the Phi-3.5 model locally by running:
3. Run the Model Locally: After downloading the model, start a local session by running: This will launch a local server on your machine where the model will be available for interaction.

Code Example: Using Phi-3.5 Locally in a Project

Below is a Python code example demonstrating how to send a query to the locally deployed Phi-3.5 model running on the AI Toolkit.

```python
import requests

# Define the local endpoint
local_url = "http://localhost:8000/predict"

# Prepare the input data
input_data = {
    "text": "What are the benefits of renewable energy?"
}

# Make the API request
response = requests.post(local_url, json=input_data)

# Print the model's response
if response.status_code == 200:
    print("Model Response:", response.json())
else:
    print("Error:", response.status_code, response.text)
```

Comparing Language Capabilities

Test Results: How Phi-3.5 Handles Different Languages

The Phi-3.5 model demonstrates robust multilingual capabilities, effectively processing and generating text in various languages. Below are comparative examples showcasing its performance in English, Spanish, and Mandarin:

- English Example:
  Input: "What are the benefits of renewable energy?"
  Output: "Renewable energy sources, such as solar and wind, reduce greenhouse gas emissions and promote sustainability."
- Spanish Example:
  Input: "¿Cuáles son los beneficios de la energía renovable?"
  Output: "Las fuentes de energía renovable, como la solar y la eólica, reducen las emisiones de gases de efecto invernadero y promueven la sostenibilidad."
- Mandarin Example:
  Input: "可再生能源的好处是什么?"
  Output: "可再生能源，如太阳能和风能，减少温室气体排放，促进可持续发展。"

Performance Benchmarking and Evaluation Across Different Languages

Benchmarking Phi-3.5 across different languages involves evaluating its accuracy, fluency, and contextual understanding. For instance, using BLEU scores and human evaluations, the model can be assessed on its translation quality and coherence in various languages.

Real-World Use Case: Multilingual Customer Service Chatbot

A practical application of Phi-3.5's multilingual capabilities is in developing a customer service chatbot that can interact with users in their preferred language. For instance, the chatbot could provide support in English, Spanish, and Mandarin, ensuring a wider reach and better user experience.

Optimizing and Validating Phi-3.5 Model

Model Performance Metrics

To validate the model's performance in different scenarios, consider the following metrics:

- Accuracy: Measure how often the model's outputs are correct or align with expected results.
- Fluency: Assess the naturalness and readability of the generated text.
- Contextual Understanding: Evaluate how well the model understands and responds to context-specific queries.

Tools to Use in Azure and Ollama for Evaluation

- Azure Cognitive Services: Utilize tools like Text Analytics and Translator to evaluate performance.
- Ollama: Use local testing environments to quickly iterate and validate model outputs.

Conclusion

In summary, Phi-3.5 exhibits impressive multilingual capabilities, effective deployment options, and robust performance metrics. Its ability to handle various languages makes it a versatile tool for natural language processing applications. Phi-3.5 stands out for its adaptability and performance in multilingual contexts, making it an excellent choice for future NLP projects, especially those requiring diverse language support.
We encourage readers to experiment with the Phi-3.5 model using Azure AI Foundry or the AI Toolkit, explore fine-tuning techniques for their specific use cases, and share their findings with the community. For more information on optimized fine-tuning techniques, check out the Ignite Fine-Tuning Workshop.

References

- Customize the Phi-3.5 family of models with LoRA fine-tuning in Azure
- Fine-tune Phi-3.5 models in Azure
- Fine Tuning with Azure AI Foundry and Microsoft Olive Hands on Labs and Workshop
- Customize a model with fine-tuning: https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/fine-tuning?tabs=azure-openai%2Cturbo%2Cpython-new&pivots=programming-language-studio
- Microsoft AI Toolkit - AI Toolkit for VSCode

Make your own private ChatGPT
Introduction

Creating your own private ChatGPT allows you to leverage AI capabilities while ensuring data privacy and security. This guide walks you through building a secure, customized chatbot using tools like Azure OpenAI, Cosmos DB and Azure App Service.

Why Build a Private ChatGPT?

With the rise of AI-driven applications, organizations and individuals often face challenges related to data privacy, customization, and integration. Building a private ChatGPT addresses these concerns by:

- Maintaining Data Privacy: Keep sensitive information within your infrastructure.
- Customizing Responses: Tailor the chatbot's behavior and language to suit your requirements.
- Ensuring Security: Leverage enterprise-grade security protocols.
- Avoiding Data Sharing: Prevent your data from being used to train external models.

If organizations do not take these measures, their data may go into future model training and sensitive data can leak to the public. For example, ChatGPT collects personal data, as mentioned in its privacy policy.

Prerequisites

Before you begin, ensure you have:

- Access to Azure OpenAI Service.
- A development environment set up with Python.
- Basic knowledge of FastAPI and MongoDB.
- An Azure account with necessary permissions.

If you do not have an Azure subscription, try Azure for Students for FREE.

Step 1: Set Up Azure OpenAI

1. Log in to the Azure Portal and create an Azure OpenAI resource.
2. Deploy a model, such as GPT-4o (multimodal), and note down the endpoint and API key. Note there is also an option of keyless authentication.
3. Configure permissions to control access.

Step 2: Use a ChatGPT-like app sample

You can select any of these repositories as a base template for your app; in this guide I will be using the third option, AOAIchat, which I developed.

- GitHub - mckaywrigley/chatbot-ui: AI chat for any model.
- Azure-Samples/azure-search-openai-demo: A sample app for the Retrieval-Augmented Generation pattern running in Azure, using Azure AI Search for retrieval and Azure OpenAI large language models to power ChatGPT-style and Q&A experiences.
- sourabhkv/AOAIchat: Azure OpenAI chat

This architecture diagram represents a typical flow for a private ChatGPT application with the following components:

- App UX (User Interface): This is the front-end application (mobile, web, or desktop) where users interact with the chatbot. It sends the user's input (prompt) and displays the AI's responses.
- App Service: Acts as the backend application, handling user requests and coordinating with other services. Functions: Receives user inputs and prepares them for processing by the Azure OpenAI service. Streams AI responses back to the App UX. Reads from and writes to Cosmos DB to manage chat history.
- Azure OpenAI Service: This is the core AI service, processing the user input and generating responses using models like GPT-4o. The App Service sends the user input (along with context) to this service and receives the AI-generated responses.
- Cosmos DB: A NoSQL database used to store and manage chat history. Operations: Writes user messages and AI-generated responses for future reference or analysis. Reads chat history to provide context for AI responses, enabling more intelligent and contextual conversations.

Data Flow:

1. User inputs are sent from the App UX to the App Service.
2. The App Service forwards the input (with additional context, if needed) to Azure OpenAI.
3. Azure OpenAI generates a response, which is streamed back to the App UX via the App Service.
4. The App Service writes user inputs and AI responses to Cosmos DB for persistence.

A simplified sketch of this request-and-persist loop is shown below.
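The following is a minimal, hypothetical sketch of that backend loop, not the actual AOAIchat code: the route, database, and collection names are illustrative, and the environment variables match the configuration table later in this post.

```python
import os
from fastapi import FastAPI
from pydantic import BaseModel
from openai import AzureOpenAI
from pymongo import MongoClient

app = FastAPI()

# Azure OpenAI client built from the same environment variables the app expects
aoai = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version=os.getenv("API_VERSION"),
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
)

# Cosmos DB for MongoDB connection; database and collection names are placeholders
chats = MongoClient(os.getenv("MONGO_DETAILS"))["chatdb"]["history"]

class ChatRequest(BaseModel):
    session_id: str
    message: str

@app.post("/chat")
def chat(req: ChatRequest):
    # Read prior turns for this session so the model gets conversational context
    history = [
        {"role": d["role"], "content": d["content"]}
        for d in chats.find({"session_id": req.session_id}).sort("_id", 1)
    ]
    messages = (
        [{"role": "system", "content": "You are a helpful assistant."}]
        + history
        + [{"role": "user", "content": req.message}]
    )

    response = aoai.chat.completions.create(
        model=os.getenv("DEPLOYMENT_NAME"),
        messages=messages,
        max_tokens=int(os.getenv("MAX_TOKENS", "500")),
    )
    answer = response.choices[0].message.content

    # Persist both turns so the next request can rebuild the context
    chats.insert_many([
        {"session_id": req.session_id, "role": "user", "content": req.message},
        {"session_id": req.session_id, "role": "assistant", "content": answer},
    ])
    return {"reply": answer}
```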
This architecture ensures scalability, secure data handling, and the ability to provide contextual responses by integrating database and AI services.

What can you do with my template?

AOAIchat supports personal and enterprise chat enabled by RAG. People can enable RAG mode if they want to search within their database; otherwise it behaves like normal ChatGPT. It supports multimodality (image and text input), which also depends on the model deployed in Azure AI Foundry.

Step 3: Deploy to Azure

1. Deploy a Cosmos DB account in the nearest region.
2. Deploy an Azure OpenAI model (gpt-4o or gpt-4o-mini recommended).
3. Deploy an Azure App Service; try using a container. I would recommend the B1 plan in your nearest region. Select the Docker registry image sourabhkv/aoaichatdb:0.1 and set the startup command: uvicorn app:app --host 0.0.0.0 --port 80
4. After the App Service starts, set all environment variables.

The application requires the following environment variables to be set for proper configuration:

- AZURE_OPENAI_ENDPOINT: The endpoint for the Azure OpenAI API.
- AZURE_OPENAI_API_KEY: API key for accessing Azure OpenAI.
- DEPLOYMENT_NAME: Azure OpenAI deployment name.
- API_VERSION: API version for Azure OpenAI.
- MAX_TOKENS: Maximum tokens for API responses.
- MONGO_DETAILS: MongoDB connection string.

```
AZURE_OPENAI_ENDPOINT=<your_azure_openai_endpoint>
AZURE_OPENAI_API_KEY=<your_azure_openai_api_key>
DEPLOYMENT_NAME=<your_deployment_name>
API_VERSION=<your_api_version>
MAX_TOKENS=<max_tokens>
MONGO_DETAILS=<your_mongo_connection_string>
```

Optional feature: implement authentication to secure access. Within the App Service, select Authentication and choose an identity provider. I went with Entra-based authentication with a single tenant. There are options for multi-tenant and personal accounts as well.

Restart the App Service and within 2 minutes your private ChatGPT is ready.

Pricing

Pricing depends on the plan and region in which you deploy resources. Check the Azure calculator for a price estimation. My estimate (all resources deployed in Sweden Central):

- Cosmos DB - Cosmos DB for MongoDB (RU) serverless config with single write master, 2 GB transactional storage, 2 backup plan (FREE) ~ $0.75
- Azure OpenAI service - plan S0, model gpt-4o-mini global deployment, input 20,000 tokens, output 10,000 tokens ~ $9.00
- App Service plan - OS Linux, Tier B1, instance count 1 ~ $13.14
- Total monthly cost = $22.89

This price may vary in the future; I calculated this configuration for this region in the Azure calculator.

Governance

Azure OpenAI provides content filters to block any kind of input that violates responsible AI practices. Categories include:

- Hate and Fairness
- Sexual
- Violence
- Self-harm
- User Prompt Attacks (direct and indirect)

The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Azure OpenAI Service includes default safety settings, set to medium, applied to all models. Content filters can be adjusted to different levels depending on the use case. The template supports RAG; I have provided a detailed solution for it in my GitHub repository.

Practical implementation

GE Aerospace, in partnership with Microsoft and Accenture, has launched a company-wide generative AI platform, leveraging Microsoft Azure and Azure OpenAI Service. This solution aims to transform asset tracking and compliance in aviation, enabling quick access to maintenance records and reducing manual processing time from days to minutes.
It supports informed decision-making by providing insights into aircraft leasing, compliance gaps, and asset health. For enterprises implementing private ChatGPT solutions, this illustrates the potential of generative AI for streamlining document-intensive processes while ensuring data security and compliance through cloud-based infrastructure like Azure.

- GE Aerospace Launches Company-wide Generative AI Platform for Employees | GE Aerospace News
- Build your own private ChatGPT style app with enterprise-ready architecture - By Microsoft Mechanics

How to make private ChatGPT for FREE?

It can be FREE if all of the setup runs locally on your hardware:

- Cosmos DB <-> MongoDB.
- Azure OpenAI <-> Ollama / LM Studio. Refer to this. NOTE: I have used gpt-4o and gpt-4o-mini; these values are hardcoded in the webpage, so if you are using other models, you might have to change them in index.html.
- App Service <-> Local machine.
- Register for GitHub Models to access the API for FREE. Note: GitHub Models have rate limits for different models.

A minimal sketch of pointing the app at a local Ollama server appears after the links below.

Useful links

- sourabhkv/AOAIchat: Azure OpenAI chat
- What is RAG?
- Get started with Azure OpenAI API
- Chat with Azure OpenAI models using your own data
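Following up on the "for FREE" option above, here is a minimal, illustrative sketch of swapping Azure OpenAI for a local Ollama server. Ollama exposes an OpenAI-compatible endpoint on localhost, so the same chat-completions code works largely unchanged; the model name is a placeholder for whichever model you have pulled locally.

```python
from openai import OpenAI

# Ollama serves an OpenAI-compatible API on localhost; the api_key is required
# by the client library but ignored by Ollama.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

response = client.chat.completions.create(
    model="phi3.5",  # placeholder: any model pulled with `ollama pull`
    messages=[{"role": "user", "content": "What can you do for me offline?"}],
)
print(response.choices[0].message.content)
```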
GPT-4o Support and New Token Management Feature in Azure API Management

We're happy to announce new features coming to Azure API Management that enhance your experience with GenAI APIs. Our latest release brings expanded support for GPT-4o models, including text and image-based input, across all GenAI Gateway capabilities. Additionally, we're expanding our token limit policy with a token quota capability to give you even more control over your token consumption.

Token quota

This extension of the token limit policy is designed to help you manage token consumption more effectively when working with large language models (LLMs). Key benefits of token quota:

- Flexible quotas: In addition to rate limiting, set token quotas on an hourly, daily, weekly, or monthly basis to manage token consumption across clients, departments or projects.
- Cost management: Protect your organization from unexpected token usage costs by aligning quotas with your budget and resource allocation.
- Enhanced visibility: In combination with the emit-token-metric policy, track and analyze token usage patterns to make informed adjustments based on real usage trends.

With this new capability, you can empower your developers to innovate while maintaining control over consumption and costs. It's the perfect balance of flexibility and responsible consumption for your AI projects. Learn more about token quota in our documentation. (A client-side sketch of handling throttled requests appears at the end of this post.)

GPT-4o support

GPT-4o integrates text and images in a single model, enabling it to handle multiple content types simultaneously. Our latest release enables you to take advantage of the full power of GPT-4o with expanded support across all GenAI Gateway capabilities in Azure API Management. Key benefits:

- Cost efficiency: Control and attribute costs with token monitoring, limits, and quotas. Return cached responses for semantically similar prompts.
- High reliability: Enable geo-redundancy and automatic failovers with load balancing and circuit breakers.
- Developer enablement: Replace custom backend code with built-in policies. Publish AI APIs for consumption.
- Enhanced governance and monitoring: Centralize monitoring and logs for your AI APIs.

Phased rollout and availability

We're excited about these new features and want to ensure you have the most up-to-date information about their availability. As with any major update, we're implementing a phased rollout strategy to ensure safe deployment across our global infrastructure. Because of that, some of your services may not have these updates until the deployment is complete. These new features will be available first in the new SKUv2 of Azure API Management, followed by the SKUv1 rollout towards the end of 2024.

Conclusion

These new features in Azure API Management represent our step forward in managing and governing your use of GPT-4o and other LLMs. By providing greater control, visibility and traffic management capabilities, we're helping you unlock the full potential of Generative AI while keeping resource usage in check. We're excited about the possibilities these new features bring and are committed to expanding their availability. As we continue our phased rollout, we appreciate your patience and encourage you to keep an eye out for the updates.
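As a practical note for API consumers: when a token limit or quota is exceeded, the gateway rejects the call, typically with HTTP 429 and a Retry-After header. Below is a small, illustrative Python sketch of a client backing off in that case; the gateway URL, API path, and subscription key are placeholders, and your API may use a different path or header name depending on how it was imported into API Management.

```python
import time
import requests

# Placeholders: replace with your API Management gateway URL, API path, and key
APIM_URL = "https://contoso-apim.azure-api.net/openai/deployments/gpt-4o/chat/completions?api-version=2024-06-01"
HEADERS = {
    "Ocp-Apim-Subscription-Key": "<your-apim-subscription-key>",
    "Content-Type": "application/json",
}
payload = {"messages": [{"role": "user", "content": "Hello"}], "max_tokens": 100}

for attempt in range(5):
    resp = requests.post(APIM_URL, headers=HEADERS, json=payload)
    if resp.status_code != 429:
        resp.raise_for_status()
        print(resp.json()["choices"][0]["message"]["content"])
        break
    # Token limit or quota hit: honor Retry-After if the gateway returns it
    wait_seconds = int(resp.headers.get("Retry-After", 2 ** attempt))
    time.sleep(wait_seconds)
```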
Announcing AI building blocks in Logic Apps (Consumption)

We're thrilled to announce that the Azure OpenAI and AI Search connectors, along with the Parse Document and Chunk Text actions, are now available in the Logic Apps Consumption SKU! These capabilities, already available in the Logic Apps Standard SKU, can now be leveraged in serverless, pay-as-you-go workflows to build powerful AI-driven applications providing cost-efficiency and flexibility.

What's new in Consumption SKU?

This release brings almost all the advanced AI capabilities from Logic Apps Standard to Consumption SKU, enabling lightweight, event-driven workflows that automatically scale with your needs. Here's a summary of the operations now available:

Azure OpenAI connector operations
- Get Completions: Generate text with Azure OpenAI's GPT models for tasks such as summarization, content creation, and more.
- Get Embeddings: Generate vector embeddings from text for advanced scenarios like semantic search and knowledge mining.

AI Search connector operations
- Index Document: Add or update a single document in an AI Search index.
- Index Multiple Documents: Add or update multiple documents in an AI Search index in one operation.

*Note: The Vector Search operation for enabling the retrieval pattern will be highlighted in an upcoming release in December.*

Parse Document and Chunk Text actions

Under the Data operations connector:
- Parse Document: Extract structured data from uploaded files like PDFs or images.
- Chunk Text: Split large text blocks into smaller chunks for downstream processing, such as generating embeddings or summaries.

Demo workflow: Automating document ingestion with AI

To showcase these capabilities, here's an example workflow that automates document ingestion, processing, and indexing:

1. Trigger: Start the workflow with an HTTP request or an event like a file upload to Azure Blob Storage.
2. Get Blob Content: Retrieve the document to be processed.
3. Parse Document: Extract structured information, such as key data points from a service agreement.
4. Chunk Text: Split the document content into smaller, manageable text chunks.
5. Generate Embeddings: Use the Azure OpenAI connector to create vector embeddings for the text chunks.
6. Select array: Compose the inputs being passed to the Index Documents operation.
7. Index Data: Store the embeddings and metadata for downstream applications, like search or analytics.

Why choose Consumption SKU?

With this release, Logic Apps Consumption SKU allows you to:
- Build smarter, scalable workflows: Leverage advanced AI capabilities without upfront infrastructure costs.
- Pay only for what you use: Ideal for event-driven workloads where cost-efficiency is key.
- Integrate seamlessly: Combine AI capabilities with hundreds of existing Logic Apps connectors.

What's next?

In December, we'll be announcing the Vector Search operation for the AI Search connector, enabling retrieval capability in Logic Apps Consumption SKU to bring feature parity with Standard SKU. This will allow you to perform advanced search scenarios by matching queries with contextually similar content. Stay tuned for updates!

Announcing API Management and API Center Community Live Stream on Thursday, December 12
We're thrilled to announce a community stand-up – a live-stream event for users of Azure API Management and API Center, hosted on YouTube. Join us for an engaging session where we'll delve into the latest industry trends, product updates, and best practices. Event Details Date: Thursday, 12 December 2024 Time: 8 AM PST / 11 AM EST Format: Live stream on YouTube What to Expect Azure API Management and API Center updates and deep dive into Microsoft Ignite announcements: Discover the latest features in our services, including shared workspace gateway, Premium v2 tier, enhancements to GenAI gateway capabilities, and more. Learn how these advancements can benefit your organization and enhance your API management practices. Insights into the API industry: Our product team will share their perspectives on the new developments in the API industry. Interactive Q&A session: Do you have a burning question about our products or are you looking to provide feedback? This is your chance! Join our live Q&A session to get answers directly from our team. Networking opportunities: Connect with fellow API management practitioners in the chat, exchange ideas, and learn from each other's experiences. How to Join Simply tune into our live stream in the Microsoft Azure Developers channel on YouTube at the scheduled date and time. You can select the “Notify me” button to receive a reminder before the event starts. Don't miss out on this exciting opportunity to engage with our product team and fellow API Management and API Center users. Mark your calendars and we'll see you there!184Views0likes0Comments