Azure AI Studio
Building your own copilot – yes, but how? (Part 1 of 2)
Are you interested in building your own AI co-pilot? Check out the first of a two-part blog post from Carlotta Castelluccio that covers the basics of creating a virtual assistant that can help you with tasks like scheduling, email management, and more. Learn about the tools and technologies involved, including Microsoft's Bot Framework and Language Understanding Intelligent Service (LUIS). Whether you're a software developer or just curious about the possibilities of AI, this post is a great introduction to building your own co-pilot.

Fine-Tune and Integrate Custom Phi-3 Models with Prompt Flow in Azure AI Studio
Phi-3 is a family of small language models (SLMs) developed by Microsoft that delivers exceptional performance and cost-effectiveness. In this tutorial, you will learn how to fine-tune the Phi-3 model and integrate the custom Phi-3 model with Prompt flow in Azure AI Studio. By leveraging Azure AI / ML Studio, you will establish a workflow for deploying and utilizing custom AI models.

Evaluate Fine-tuned Phi-3 / 3.5 Models in Azure AI Studio Focusing on Microsoft's Responsible AI
Fine-tuning a model can sometimes lead to unintended or undesired responses. To ensure that the model remains safe and effective, it's important to evaluate its potential to generate harmful content and its ability to produce accurate, relevant, and coherent responses. In this tutorial, you will learn how to evaluate the safety and performance of a fine-tuned Phi-3 / Phi-3.5 model integrated with Prompt flow in Azure AI Studio. Before beginning the technical steps, it's essential to understand Microsoft's Responsible AI Principles: an ethical framework that guides the responsible design, development, and deployment of AI systems, ensuring that AI technologies are built in a way that is fair, transparent, and inclusive. These principles are the foundation for evaluating the safety of AI models.

Create Your Own Copilot Using Copilot Studio
Hello everyone, I am Suniti, a Beta Microsoft Learn Student Ambassador (MLSA) pursuing a degree in Data Science. Today, we're diving into creating our very own copilot to guide students towards becoming MLSAs. But first things first, let's explore Copilot Studio!

Exploring AI Development and Management: A Journey through Contoso Chat and LLM Ops
In this blog, we'll navigate the world of AI models, exploring Contoso Chat, prompt engineering and its limitations, and large language models. We'll introduce approaches like the RAG pattern and tools like Azure AI Studio that can boost AI responses and system performance. Ready to dive into the intricacies of AI development and management? Join us!

Journey Series for Generative AI Application Architecture - Foundation
At Build last year, Microsoft CTO Kevin Scott proposed the Copilot Stack to provide problem-solving ideas for generative AI applications. Based on the Copilot Stack, the community has developed many frameworks over the past year, such as Semantic Kernel, AutoGen, and LangChain. These frameworks lean toward front-end applications, while enterprises need a more complete engineering solution. This series hopes to give you some ideas based on Microsoft Cloud and related frameworks and tools.

Journey Series for Generative AI Application Architecture - Fine-tune SLM with Microsoft Olive
Some industries and traditional enterprises prefer to train their own industry models based on their own underlying infrastructure or applications, combined with their own data.

Step-by-step: Integrate Ollama Web UI to use Azure Open AI API with LiteLLM Proxy
Introduction

Ollama WebUI is a streamlined interface for deploying and interacting with open-source large language models (LLMs) like Llama 3 and Mistral, enabling users to manage models, test them via a ChatGPT-like chat environment, and integrate them into applications through Ollama's local API. While it excels for self-hosted models on platforms like Azure VMs, it does not natively support Azure OpenAI API endpoints; OpenAI's proprietary models (e.g., GPT-4) remain accessible only through OpenAI's managed API. However, tools like LiteLLM bridge this gap, allowing developers to combine Ollama-hosted models with OpenAI's API in hybrid workflows while maintaining compliance and cost-efficiency. This setup empowers users to leverage both self-managed open-source models and cloud-based AI services.

Problem Statement

As of February 2025, Ollama WebUI still does not support the Azure OpenAI API. It only supports the self-hosted Ollama API and the managed OpenAI API service (PaaS). This is a problem for users who want to use OpenAI models they have already deployed on Azure AI Foundry.

Objective

To integrate the Azure OpenAI API into Ollama Web UI via a LiteLLM proxy. LiteLLM exposes your Azure deployments behind an OpenAI-style endpoint that Ollama Web UI understands, allowing users to use OpenAI models deployed on Azure AI Foundry.

If you haven't hosted Ollama WebUI already, follow my other step-by-step guide to host Ollama WebUI on Azure. Proceed to the next step if you already have Ollama WebUI deployed.

Step 1: Deploy OpenAI models on Azure AI Foundry

1. If you haven't created an Azure AI Hub already, search for Azure AI Foundry in the Azure portal and click the "+ Create" button > Hub.
2. Fill out all the empty fields with the appropriate configuration and click "Create".
3. After the Azure AI Hub is successfully deployed, click on the deployed resource and launch the Azure AI Foundry service.
4. To deploy new models on Azure AI Foundry, find the "Models + Endpoints" section on the left-hand side and click the "+ Deploy Model" button > "Deploy base model".
5. A popup will appear where you can choose which model to deploy. Please note that the o-series models are only available to select customers at the moment; you can request access by completing the request access form and waiting until Microsoft approves it.
6. Click "Confirm", name the deployment in the popup that follows, and click "Deploy".
7. Wait a few moments for the model to deploy. Once it is successfully deployed, save the "Target URI" and the API key.
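If you prefer scripting over portal clicks, the same kind of model deployment can typically also be created with the Azure CLI. This is a sketch, not part of the original walkthrough: the resource group, account name, and model version below are placeholders you must replace with your own values, and available model versions vary by region.

```bash
# Sketch: create a model deployment with the Azure CLI (assumes you are
# already logged in via `az login`). "my-rg", "my-aoai-account", and the
# model version are placeholders for your own resource group, Azure
# OpenAI / AI Foundry resource, and the version available in your region.
az cognitiveservices account deployment create \
  --resource-group my-rg \
  --name my-aoai-account \
  --deployment-name o1-mini \
  --model-name o1-mini \
  --model-version "2024-09-12" \
  --model-format OpenAI \
  --sku-name "Standard" \
  --sku-capacity 1
```

Either way you end up with the same deployment; the endpoint and keys can then be read with `az cognitiveservices account show` and `az cognitiveservices account keys list`.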
Step 2: Deploy LiteLLM Proxy via Docker Container

Before pulling the LiteLLM image into the host environment, create a file named "litellm_config.yaml" and list the models you deployed on Azure AI Foundry, along with their API endpoints and keys. Replace "API_Endpoint" and "API_Key" with the "Target URI" and "Key" found in Azure AI Foundry, respectively. Template for the "litellm_config.yaml" file:

```yaml
model_list:
  - model_name: [model_name]
    litellm_params:
      model: azure/[model_name_on_azure]
      api_base: "[API_ENDPOINT/Target_URI]"
      api_key: "[API_Key]"
      api_version: "[API_Version]"
```

Tip: you can find the API version at the end of the model endpoint's Target URI:

Sample endpoint - https://example.openai.azure.com/openai/deployments/o1-mini/chat/completions?api-version=2024-08-01-preview

Run the docker command below to start LiteLLM Proxy with the correct settings:

```bash
docker run -d \
  -v $(pwd)/litellm_config.yaml:/app/config.yaml \
  -p 4000:4000 \
  --name litellm-proxy-v1 \
  --restart always \
  ghcr.io/berriai/litellm:main-latest \
  --config /app/config.yaml --detailed_debug
```

Make sure to run the docker command inside the directory where you created the "litellm_config.yaml" file. LiteLLM Proxy listens on port 4000.

Now that the LiteLLM proxy is deployed on port 4000, change the OpenAI API settings in Ollama WebUI. Navigate to Ollama WebUI's Admin Panel > Settings > Connections, and under the OpenAI API section enter http://127.0.0.1:4000 as the API endpoint and set any key (the field must not be left empty for the connection to work). Click the "Save" button to apply the changes.

Refresh the browser and you should see the AI models deployed on Azure AI Foundry listed in Ollama WebUI. Now test the chat completion and Web Search capability using the "o1-mini" model in Ollama WebUI (a direct API sanity check against the proxy is sketched at the end of this article).

Conclusion

Hosting Ollama WebUI on an Azure VM and integrating it with the Azure OpenAI API via LiteLLM offers a powerful, flexible approach to AI deployment, combining the cost-efficiency of open-source models with the advanced capabilities of managed cloud services. While Ollama itself doesn't support Azure OpenAI endpoints, the hybrid architecture empowers IT teams to balance data privacy (via self-hosted models on an Azure VM) and cutting-edge performance (via the Azure OpenAI API), all within Azure's scalable ecosystem. This guide covers every step required to deploy your OpenAI models on Azure AI Foundry, set up the required resources, deploy LiteLLM Proxy on your host machine, and configure Ollama WebUI to support Azure AI endpoints. You can take your AI models further in the Ollama WebUI interface with Web Search, Text-to-Image Generation, and more, all in one place.
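One last tip, not covered in the walkthrough above: if the models don't show up in the WebUI, you can verify the LiteLLM proxy independently with a plain OpenAI-style request. This is a minimal sketch assuming the proxy is listening on localhost:4000 and that "o1-mini" matches a model_name entry in your litellm_config.yaml; the bearer token is arbitrary unless you configured a LiteLLM master key.

```bash
# Sketch: send an OpenAI-format chat completion request straight to the
# LiteLLM proxy. "o1-mini" must match a model_name in litellm_config.yaml;
# the Authorization value is a dummy unless a master key is configured.
curl http://127.0.0.1:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer anything" \
  -d '{
        "model": "o1-mini",
        "messages": [{"role": "user", "content": "Say hello from Azure."}]
      }'
```

A JSON response containing a "choices" array confirms that the proxy is translating and forwarding requests to your Azure deployment correctly.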