Step-by-Step Tutorial: Building an AI Agent Using Azure AI Foundry
This blog post provides a comprehensive tutorial on building an AI agent using Azure AI Agent Service and the Azure AI Foundry portal. AI agents represent a powerful new paradigm in application development, offering a more intuitive and dynamic way to interact with software. They can understand natural language, reason about user requests, and take actions to fulfill those requests. This tutorial will guide you through the process of creating and deploying an intelligent agent on Azure. We'll cover setting up an Azure AI Foundry hub and crafting effective instructions to define the agent's behavior, including recognizing user intent, processing requests, and generating helpful responses. We'll also discuss testing the agent's conversational abilities and provide additional resources for expanding your knowledge of AI agents and the Azure AI ecosystem. This hands-on guide is perfect for anyone looking to explore the practical application of Azure's conversational AI capabilities and build intelligent virtual assistants. Join us as we dive into the exciting world of AI agents.

Step-by-step: Integrate Ollama Web UI to use Azure Open AI API with LiteLLM Proxy
Introduction
Ollama WebUI is a streamlined interface for deploying and interacting with open-source large language models (LLMs) like Llama 3 and Mistral, enabling users to manage models, test them in a ChatGPT-like chat environment, and integrate them into applications through Ollama's local API. While it excels for self-hosted models on platforms like Azure VMs, it does not natively support Azure OpenAI API endpoints; OpenAI's proprietary models (e.g., GPT-4) remain accessible only through OpenAI's managed API. However, tools like LiteLLM bridge this gap, allowing developers to combine Ollama-hosted models with OpenAI's API in hybrid workflows while maintaining compliance and cost-efficiency. This setup empowers users to leverage both self-managed open-source models and cloud-based AI services.

Problem Statement
As of February 2025, Ollama WebUI still does not support the Azure OpenAI API. It supports only the self-hosted Ollama API and the managed OpenAI API service (PaaS). This is a problem for users who want to use OpenAI models they have already deployed on Azure AI Foundry.

Objective
To integrate the Azure OpenAI API into Ollama Web UI via a LiteLLM proxy. LiteLLM translates Azure OpenAI API requests into the OpenAI-style requests Ollama Web UI expects, allowing users to use OpenAI models deployed on Azure AI Foundry. If you haven't hosted Ollama WebUI already, follow my other step-by-step guide to host Ollama WebUI on Azure. Proceed to the next step if you have Ollama WebUI deployed already.

Step 1: Deploy OpenAI models on Azure AI Foundry
If you haven't created an Azure AI Hub already, search for Azure AI Foundry on Azure and click the "+ Create" button > Hub. Fill out all the empty fields with the appropriate configuration and click "Create". After the Azure AI Hub is successfully deployed, click on the deployed resource and launch the Azure AI Foundry service.
To deploy new models on Azure AI Foundry, find the "Models + Endpoints" section on the left-hand side and click the "+ Deploy Model" button > "Deploy base model". A popup will appear where you can choose which models to deploy on Azure AI Foundry. Please note that the o-series models are only available to select customers at the moment; you can request access by completing the request access form and waiting for Microsoft to approve it. Click "Confirm" and another popup will emerge. Name the deployment and click "Deploy" to deploy the model. Wait a few moments for the model to deploy. Once it has successfully deployed, save the "Target URI" and the API key.

Step 2: Deploy LiteLLM Proxy via Docker Container
Before pulling the LiteLLM image into the host environment, create a file named "litellm_config.yaml" and list the models you deployed on Azure AI Foundry, along with their API endpoints and keys. Replace "API_Endpoint" and "API_Key" with the "Target URI" and "Key" from Azure AI Foundry respectively.

Template for the "litellm_config.yaml" file:

```yaml
model_list:
  - model_name: [model_name]
    litellm_params:
      model: azure/[model_name_on_azure]
      api_base: "[API_ENDPOINT/Target_URI]"
      api_key: "[API_Key]"
      api_version: "[API_Version]"
```

Tip: You can find the API version at the end of the Target URI of the model's endpoint. Sample endpoint: https://example.openai.azure.com/openai/deployments/o1-mini/chat/completions?api-version=2024-08-01-preview

Run the docker command below to start LiteLLM Proxy with the correct settings:

```bash
docker run -d \
  -v $(pwd)/litellm_config.yaml:/app/config.yaml \
  -p 4000:4000 \
  --name litellm-proxy-v1 \
  --restart always \
  ghcr.io/berriai/litellm:main-latest \
  --config /app/config.yaml --detailed_debug
```

Make sure to run the docker command inside the directory where you created the "litellm_config.yaml" file. The proxy listens for traffic on port 4000.
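Before wiring the proxy into Ollama WebUI, you can sanity-check it with a plain OpenAI-style request. The sketch below only assembles the request; the model name and prompt are examples, and the model name must match a `model_name` entry in your `litellm_config.yaml`. Send it with any HTTP client once the container is running.

```python
import json

# The proxy listens on port 4000, as configured in the docker command above.
PROXY_URL = "http://127.0.0.1:4000"

def build_chat_request(model: str, prompt: str) -> tuple[str, dict, bytes]:
    """Return (url, headers, body) for an OpenAI-style chat completion call."""
    url = f"{PROXY_URL}/chat/completions"
    headers = {
        "Content-Type": "application/json",
        # Any key works here, matching the "set any key" note for Ollama WebUI.
        "Authorization": "Bearer any-key",
    }
    body = json.dumps({
        "model": model,  # must match a model_name from litellm_config.yaml
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return url, headers, body

url, headers, body = build_chat_request("o1-mini", "Say hello")
print(url)  # http://127.0.0.1:4000/chat/completions
```

From here you can POST the body with `curl`, `requests`, or `urllib` and expect an OpenAI-format JSON response if the proxy and your Azure deployment are configured correctly.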
Now that the LiteLLM proxy has been deployed on port 4000, let's change the OpenAI API settings in Ollama WebUI. Navigate to Ollama WebUI's Admin Panel > Settings > Connections and, under the OpenAI API section, enter http://127.0.0.1:4000 as the API endpoint and set any key (the field must not be left empty). Click the "Save" button to apply the changes. Refresh the browser and you should see the AI models deployed on Azure AI Foundry listed in Ollama WebUI. Now let's test the chat completion and Web Search capability using the "o1-mini" model on Ollama WebUI.

Conclusion
Hosting Ollama WebUI on an Azure VM and integrating it with OpenAI's API via LiteLLM offers a powerful, flexible approach to AI deployment, combining the cost-efficiency of open-source models with the advanced capabilities of managed cloud services. While Ollama itself doesn't support Azure OpenAI endpoints, this hybrid architecture empowers IT teams to balance data privacy (via self-hosted models on Azure AI Foundry) and cutting-edge performance (using the Azure OpenAI API), all within Azure's scalable ecosystem. This guide covers every step required to deploy your OpenAI models on Azure AI Foundry, set up the required resources, deploy LiteLLM Proxy on your host machine, and configure Ollama WebUI to support Azure AI endpoints. You can test and improve your AI models even further with the Ollama WebUI interface's Web Search, Text-to-Image Generation, and other features, all in one place.
Showcasing Phi-4-Reasoning: A Game-Changer for AI Developers

Introduction
Phi-4-Reasoning is a state-of-the-art AI model developed by Microsoft Research, designed to excel in complex reasoning tasks. With its advanced capabilities, Phi-4-Reasoning is a powerful tool for AI developers, enabling them to tackle intricate problems with ease and precision.

What is Phi-4-Reasoning?
Phi-4-Reasoning is a 14-billion-parameter open-weight reasoning model that has been fine-tuned from the Phi-4 model using supervised fine-tuning on a dataset of chain-of-thought traces. We are also releasing Phi-4-reasoning-plus, a variant enhanced through a short phase of outcome-based reinforcement learning that offers higher performance by generating longer reasoning traces. This model is designed to handle complex reasoning tasks, including mathematical problems, scientific queries, and coding challenges.

Why Phi-4-reasoning and Phi-4-reasoning-plus Excel
- Advanced Reasoning Capabilities: Phi-4-Reasoning is trained to generate detailed reasoning chains, making it highly effective at solving complex problems. It leverages additional inference-time compute to enhance its reasoning abilities.
- High-Quality Data Curation: The model's training involved meticulous data curation and the use of high-quality synthetic datasets, allowing it to compete with much larger models.
- Performance: Despite its smaller size, Phi-4-Reasoning outperforms larger models like OpenAI's o1-mini and DeepSeek-R1-Distill-Llama-70B on various benchmarks, including mathematical reasoning and PhD-level science questions.
- Industry Applications: Phi-4-Reasoning is versatile and can be deployed in various industry scenarios, including medicine, where it can be used for reasoning tasks on edge devices.

Real-World Examples
Wordplay Riddle: Phi-4-Reasoning can solve wordplay riddles by recognizing patterns and applying logical reasoning.
For example, it deduced that collecting three strawberries would yield 9 r's because the word "strawberry" contains three r's.

Example 1: How many strawberries for 9 r's?
User: How many strawberries do I need to collect 9 r's?
Phi-4-Reasoning: <think> TRUNCATED </think> It's a wordplay riddle! Notice that the word "strawberry" has three r's in it. So if you "collect" three strawberries, you get 3 × 3 = 9 r's. Thus, you need 3 strawberries.

Solving a Riddle: The model can decrypt complex riddles by identifying patterns and performing arithmetic operations. It demonstrated this capability by solving a riddle involving foreign-language number-words and reversed number-words.

What This Problem Proves:
- Pattern Recognition: Phi-4-Reasoning demonstrates its ability to identify patterns in words and apply logical reasoning to solve the riddle.
- Wordplay Understanding: The model showcases its proficiency in understanding and solving wordplay riddles, which require a combination of linguistic knowledge and logical deduction.
- Efficiency: Phi-4-Reasoning quickly arrives at the correct solution by applying a straightforward multiplication operation, highlighting its efficiency in solving such problems.

Example 2: Solving a Riddle
User: I will give you a riddle to solve with a few examples, and something to complete at the end. What is a likely completion for X that is consistent with the examples above?
Phi-4-Reasoning: <think> TRUNCATED </think> We can "decrypt" the riddle by noticing that each line has five parts. In every example the parts are:
1. A "number-word" in a foreign language (or with an extra minus sign),
2. A "first name" (of any nationality),
3. A "reversed number-word" (or two, joined by a hyphen),
4. A "second name", and
5. A final number.

When you "reverse" the third word you get a familiar number in English:
In line 1 "evif" is "five" (5) and the first number is "uno" = 1, and 5 - 1 = 4.
In line 2 "neves" is "seven" (7) and "tres" = 3, and 7 - 3 = 4.
In line 3 "evlewt" is "twelve" (12) and "drei" = 3, and 12 - 3 = 9.
In lines 4 and 5, with two reversed "number-words":
- Line 4: "xis" → "six" (6) and "ytnewt" → "twelve" (12); 6 + 12 = 18, plus 3 = 21.
- Line 5: "eerht" → "three" (3) and "ytnewt" → "twelve" (12); 3 + 12 = 15, plus 3 = 18.

Now look at the final line: "minus one-point-two Satya eno Bill X". It follows the same five-part format. Notice that "eno" reversed is "one" (1), and the "first number" is "minus one-point-two" (-1.2). Since for a one-number-word line the rule is:

final number = (number from reversed word) - (first number)

we compute: final = 1 - (-1.2) = 1 + 1.2 = 2.2. Thus, a likely completion for X is 2.2.

Phi-4-Reasoning decrypts the riddle by identifying a pattern in the examples provided. Each line consists of five parts: a number-word in a foreign language, a first name, a reversed number-word, a second name, and a final number. By reversing the third word and performing arithmetic operations, Phi-4-Reasoning determines that the likely completion for X is 2.2.

What This Problem Proves:
- Complex Pattern Recognition: Phi-4-Reasoning excels at recognizing complex patterns involving multiple elements, such as foreign-language number-words and reversed number-words.
- Logical Deduction: The model applies logical deduction to solve the riddle by performing arithmetic operations based on the identified patterns.
- Versatility: Phi-4-Reasoning handles diverse types of riddles and problems, making it a valuable tool for AI developers.

Example 3 shows that the model can solve complex graph traversal tasks, such as finding a path through a maze. The maze is represented as an adjacency list, which is a common way to describe graphs in computer science.
Each node in the graph (or maze) is connected to other nodes, and the task is to find a valid path from the starting node (node 0) to the destination node.

Example 3: Solving a maze
User: Please solve the maze provided in adjacency list form below, by outputting a valid path from node 0 to node {destination}, in a space-separated list of integers. Adjacency list: {depicted as an image below}
Phi-4-Reasoning: <think> TRUNCATED </think> {path depicted as an image below}

What This Problem Proves:
- Graph Traversal Capability: Phi-4-Reasoning can effectively navigate graphs using algorithms like Depth-First Search (DFS) or Breadth-First Search (BFS). This capability is crucial for problems that involve finding paths, cycles, or connectivity in graphs.
- Logical Reasoning: The model applies logical reasoning to determine the correct sequence of nodes to traverse from the start to the destination. This involves understanding the structure of the graph and making decisions based on the connections between nodes.
- Pattern Recognition: Phi-4-Reasoning can recognize patterns in the adjacency list and use them to find a solution. This is important for tasks that require identifying and following specific paths or routes.
- Versatility: The ability to solve a maze from an adjacency list showcases the model's versatility in handling different data structures and problem-solving scenarios. This is beneficial for AI developers who need to work with various data representations and algorithms.
- Efficiency: The model's ability to quickly and accurately find a valid path through the maze highlights its efficiency at solving complex problems. This is valuable for applications that require fast and reliable solutions.

Conclusion: Phi-4-Reasoning's ability to solve a maze using an adjacency list demonstrates its advanced reasoning capabilities, making it a powerful tool for AI developers.
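For comparison, the traversal itself is classic computer science. Since the adjacency list in the transcript is shown as an image, the maze below is invented for illustration; an iterative depth-first search produces exactly the kind of space-separated path the prompt asks for.

```python
def find_path(adjacency, start, goal):
    """Depth-first search returning one valid path from start to goal, or None."""
    stack = [(start, [start])]
    visited = {start}
    while stack:
        node, path = stack.pop()
        if node == goal:
            return path
        for neighbor in adjacency.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                stack.append((neighbor, path + [neighbor]))
    return None

# A toy maze: node 0 connects to nodes 1 and 2, and so on.
maze = {0: [1, 2], 1: [3], 2: [4], 3: [], 4: [5], 5: []}
path = find_path(maze, 0, 5)
print(" ".join(map(str, path)))  # 0 2 4 5
```

A breadth-first variant (a queue instead of a stack) would additionally guarantee the shortest path.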
Its proficiency in graph traversal, logical reasoning, pattern recognition, versatility, and efficiency makes it well suited for tackling a wide range of complex problems.

Deployment and Integration
Phi-4-Reasoning can be deployed on various platforms, including Azure AI Foundry and Hugging Face. It supports quantization using tools like Microsoft Olive, making it suitable for deployment on edge devices such as IoT hardware, laptops, and mobile devices. Phi-4-Reasoning is a groundbreaking AI model that offers advanced reasoning capabilities, high performance, and versatility. Its ability to handle complex reasoning tasks makes it an invaluable tool for AI developers, enabling them to create innovative solutions across various industries.

References
- Make Phi-4-mini-reasoning more powerful with industry reasoning on edge devices | Microsoft Community Hub
- Phi-4 Reasoning Technical Paper
- Phi-4-Mini-Reasoning Technical Paper
- One year of Phi: Small language models making big leaps in AI | Microsoft Azure Blog
- PhiCookBook
- Access Phi-4-reasoning models
- Phi Models at Azure AI Foundry Models
- Phi Models on Hugging Face
- Phi Models on GitHub Marketplace Models

Multi-Agent Systems and MCP Tools Integration with Azure AI Foundry
The Power of Connected Agents: Building Multi-Agent Systems
Imagine trying to build an AI system that can handle complex workflows like managing support tickets, analyzing data from multiple sources, or providing comprehensive recommendations. Sounds challenging, right? That's where multi-agent systems come in! The Develop a multi-agent solution with Azure AI Foundry Agent Services module introduces you to the concept of connected agents: a game-changing approach that allows you to break down complex tasks into specialized roles handled by different AI agents.

Why Connected Agents Matter
As a student developer, you might wonder why you'd need multiple agents when a single agent can handle many tasks. Here's why this approach is transformative:
1. Simplified Complexity: Instead of building one massive agent that does everything (and becomes difficult to maintain), you can create smaller, specialized agents with clearly defined responsibilities.
2. No Custom Orchestration Required: The main agent naturally delegates tasks using natural language; there is no need to write complex routing logic or orchestration code.
3. Better Reliability and Debugging: When something goes wrong, it's much easier to identify which specific agent is causing issues than to debug a monolithic system.
4. Flexibility and Extensibility: Need to add a new capability? Just create a new connected agent without modifying your main agent or other parts of the system.

How Multi-Agent Systems Work
The architecture is surprisingly straightforward:
1. A main agent acts as the orchestrator, interpreting user requests and delegating tasks
2. Connected sub-agents perform specialized functions like data retrieval, analysis, or summarization
3. Results flow back to the main agent, which compiles the final response

For example, imagine building a ticket triage system.
When a new support ticket arrives, your main agent might:
- Delegate to a classifier agent to determine the ticket type
- Send the ticket to a priority-setting agent to determine urgency
- Use a team-assignment agent to route it to the right department

All this happens seamlessly, without you having to write custom routing logic!

Setting Up a Multi-Agent Solution
The module walks you through the entire process:
1. Initializing the agents client
2. Creating connected agents with specialized roles
3. Registering them as tools for the main agent
4. Building the main agent that orchestrates the workflow
5. Running the complete system

Taking It Further: Integrating MCP Tools with Azure AI Agents
Once you've mastered multi-agent systems, the next level is connecting your agents to external tools and services. The Integrate MCP Tools with Azure AI Agents module teaches you how to use the Model Context Protocol (MCP) to give your agents access to a dynamic catalog of tools.

What is Dynamic Tool Discovery?
Traditionally, adding new tools to an AI agent meant hardcoding each one directly into your agent's code. But what if tools change frequently, or if different teams manage different tools? That approach quickly becomes unmanageable. Dynamic tool discovery through MCP solves this problem by:
1. Centralizing Tool Management: Tools are defined and managed in a central MCP server
2. Enabling Runtime Discovery: Agents discover available tools at runtime through the MCP client
3. Supporting Automatic Updates: When tools are updated on the server, agents automatically get the latest versions

The MCP Server-Client Architecture
The architecture involves two key components:
1. MCP Server: Acts as a registry for tools, hosting tool definitions decorated with `@mcp.tool`. Tools are exposed over HTTP when requested.
2. MCP Client: Acts as a bridge between your MCP server and Azure AI Agent.
It discovers available tools, generates Python function stubs to wrap them, and registers those functions with your agent. This separation of concerns makes your AI solution more maintainable and adaptable to change.

Setting Up MCP Integration
The module guides you through the complete process:
1. Setting up an MCP server with tool definitions
2. Creating an MCP client to connect to the server
3. Dynamically discovering available tools
4. Wrapping tools in async functions for agent use
5. Registering the tools with your Azure AI agent

Once set up, your agent can use any tool in the MCP catalog as if it were a native function, without any hardcoding required!

Practical Applications for Student Developers
As a student developer, how might you apply these concepts in real projects?

Classroom Projects:
- Build a research assistant that delegates to specialized agents for different academic subjects
- Create a coding tutor that uses different agents for explaining concepts, debugging code, and suggesting improvements

Hackathons:
- Develop a sustainability app that uses connected agents to analyze environmental data from different sources
- Create a personal finance advisor with specialized agents for budgeting, investment analysis, and financial planning

Personal Portfolio Projects:
- Build a content creation assistant with specialized agents for brainstorming, drafting, editing, and SEO optimization
- Develop a health and wellness app that uses MCP tools to connect to fitness APIs, nutrition databases, and sleep-tracking services

Getting Started
Ready to dive in?
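Before you do, it may help to see the core discovery idea in miniature. The sketch below is plain Python, not the actual MCP SDK: a toy decorator registers a function (the tool name and demo data are invented), and a "client" then discovers it by name instead of hardcoding it.

```python
import inspect

# name -> {"fn": callable, "signature": str, "doc": str}
TOOL_REGISTRY = {}

def tool(fn):
    """Toy stand-in for @mcp.tool: record fn so clients can discover it at runtime."""
    TOOL_REGISTRY[fn.__name__] = {
        "fn": fn,
        "signature": str(inspect.signature(fn)),
        "doc": fn.__doc__ or "",
    }
    return fn

@tool
def check_inventory(item: str) -> int:
    """Return the stock level for an item (hardcoded demo data)."""
    return {"widget": 12, "gadget": 0}.get(item, 0)

# A "client" builds its catalog from the registry rather than from hardcoded names:
catalog = {name: meta["signature"] for name, meta in TOOL_REGISTRY.items()}
print(catalog)
print(TOOL_REGISTRY["check_inventory"]["fn"]("widget"))  # 12
```

In the real protocol the registry lives on the MCP server and the catalog is fetched over HTTP, but the maintainability win is the same: adding a tool means decorating one function, with no client changes.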
Both modules include hands-on exercises where you'll build real working examples:
- A ticket triage system using connected agents
- An inventory management assistant that integrates with MCP tools

The prerequisites are straightforward:
- Experience with deploying generative AI models in Azure AI Foundry
- Programming experience with Python or C#

Conclusion
Multi-agent systems and MCP tools integration represent the next evolution in AI application development. By mastering these concepts, you'll be able to build more sophisticated, maintainable, and extensible AI solutions: skills that will make you stand out in internship applications and job interviews. The best part? These modules are designed with practical, hands-on learning in mind, perfect for student developers who learn by doing. So why not give them a try? Your future AI applications (and your resume) will thank you for it! Want to learn more about the Model Context Protocol? See MCP for Beginners. Happy coding!

AI Agents: Planning and Orchestration with the Planning Design Pattern - Part 7
This blog post, Part 7 in a series on AI agents, focuses on the Planning Design Pattern for effective task orchestration. It explains how to define clear goals, decompose complex tasks into manageable subtasks, and leverage structured output (e.g., JSON) for seamless communication between agents. The post includes code snippets demonstrating how to create a planning agent, orchestrate multi-agent workflows, and implement iterative planning for dynamic adaptation. It also links to a practical example notebook (07-autogen.ipynb) and further resources like AutoGen Magentic-One, encouraging readers to explore advanced planning concepts. Links to the previous posts in the series are provided for easy access to foundational AI agent concepts.

Configure Embedding Models on Azure AI Foundry with Open Web UI
Introduction
Let's take a closer look at an exciting development in the AI space. Embedding models are the key to transforming complex data into usable insights, driving innovations like smarter chatbots and tailored recommendations. With Azure AI Foundry, Microsoft's powerful platform, you've got the tools to build and scale these models effortlessly. Add in Open Web UI, an intuitive interface for engaging with AI systems, and you've got a winning combo that's hard to beat. In this article, we'll explore how embedding models on Azure AI Foundry, paired with Open Web UI, are paving the way for accessible and impactful AI solutions for developers and businesses. Let's dive in!

Before configuring an embedding model from Azure AI Foundry in Open Web UI, please complete the requirements below.

Requirements:
- Set up an Azure AI Foundry hub/project
- Deploy Open Web UI (refer to my previous article on how to host Open Web UI on an Azure VM)
- Optional: Deploy LiteLLM with Azure AI Foundry models to work with Open Web UI (refer to my previous article on this as well)

Deploying Embedding Models on Azure AI Foundry
Navigate to the Azure AI Foundry site and deploy an embedding model from the "Models + Endpoints" section. For the purpose of this demonstration, we will deploy the "text-embedding-3-large" model by OpenAI. You should receive a URL endpoint and an API key for the deployed embedding model. Take note of those credentials, because we will use them in Open Web UI.

Configuring the Embedding Model on Open Web UI
Now head to the Open Web UI Admin Settings page > Documents and select Azure OpenAI as the embedding model engine. Copy and paste the base URL, the API key, the name of the embedding model deployed on Azure AI Foundry, and the API version (not the model version) into the corresponding fields. Click "Save" to apply the changes.
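After saving, you can verify the deployment independently of Open Web UI by calling the embeddings endpoint yourself. The sketch below only assembles a request in the REST shape Azure OpenAI expects; the endpoint, deployment name, API version, and key are placeholders, so substitute the Target URI pieces and key you saved from Azure AI Foundry before sending.

```python
import json

def build_embedding_request(endpoint, deployment, api_version, api_key, texts):
    """Build (url, headers, body) for an Azure OpenAI embeddings POST request."""
    url = (f"{endpoint.rstrip('/')}/openai/deployments/{deployment}"
           f"/embeddings?api-version={api_version}")
    headers = {"Content-Type": "application/json", "api-key": api_key}
    body = json.dumps({"input": texts})
    return url, headers, body

# Placeholder values; replace with your own Target URI components and key.
url, headers, body = build_embedding_request(
    "https://example.openai.azure.com",
    "text-embedding-3-large",
    "2024-02-01",
    "YOUR_API_KEY",
    ["hello world"],
)
print(url)
```

A successful POST returns JSON containing a `data` array whose entries hold the embedding vectors; an authentication or deployment-name error here usually means the same values will fail in Open Web UI too.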
Expected Output
Now let us compare the scenarios with and without the embedding model configured on Open Web UI:
- Without an embedding model configured.
- With the Azure OpenAI embedding model configured.

Conclusion
And there you have it! Embedding models on Azure AI Foundry, combined with the seamless interaction offered by Open Web UI, are truly revolutionizing how we approach AI solutions. This powerful duo not only simplifies the process of building and deploying intelligent systems but also makes cutting-edge technology more accessible to developers and businesses of all sizes. As we move forward, it's clear that such integrations will continue to drive innovation, breaking down barriers and unlocking new possibilities in the AI landscape. So, whether you're a seasoned developer or just stepping into this exciting field, now's the time to explore what Azure AI Foundry and Open Web UI can do for you. Let's keep pushing the boundaries of what's possible!

Fine-Tuning and Deploying Phi-3.5 Model with Azure and AI Toolkit
What is Phi-3.5?
Phi-3.5 is a state-of-the-art language model with strong multilingual capabilities. It is designed to handle multiple languages with high proficiency, making it a versatile tool for Natural Language Processing (NLP) tasks across different linguistic backgrounds.

Key Features of Phi-3.5
The core features of the Phi-3.5 model:
- Multilingual Capabilities: The model supports a wide variety of languages, including major world languages such as English, Spanish, Chinese, and French. For example, it can translate a sentence or document from one language to another without losing context or meaning.
- Fine-Tuning Ability: The model can be fine-tuned for specific use cases. For instance, in a customer support setting, Phi-3.5 can be fine-tuned to understand the nuances of the different languages used by customers across the globe, improving response accuracy.
- High Performance in NLP Tasks: Phi-3.5 is optimized for tasks like text classification, machine translation, summarization, and more. It has superior performance in handling large-scale datasets and producing coherent, contextually correct language outputs.

Applications in Real-World Scenarios
A few real-world applications where the Phi-3.5 model can be utilized:
- Customer Support Chatbots: For companies with global customer bases, the model's multilingual support can enhance chatbot capabilities, allowing for real-time responses in a customer's native language, no matter where they are located.
- Content Creation for Global Markets: Businesses can use Phi-3.5 to automatically generate or translate content for different regions. For example, marketing copy can be adapted to fit cultural and linguistic nuances in multiple languages.
- Document Summarization Across Languages: The model can summarize long documents or articles written in one language and translate the summary into another, improving access to information for non-native speakers.

Why Choose Phi-3.5 for Your Project?
- Versatility: It's not limited to just one or two languages but performs well across many.
- Customization: The ability to fine-tune it for particular use cases or industries makes it highly adaptable.
- Ease of Deployment: With tools like Azure ML and Ollama, deploying Phi-3.5 in the cloud or locally is accessible even for smaller teams.

Objective of This Blog
Specialized Language Models (SLMs) are at the forefront of advancements in Natural Language Processing, offering fine-tuned, high-performance solutions for specific tasks and languages. Among these, the Phi-3.5 model has emerged as a powerful tool, excelling in its multilingual capabilities. Whether you're working with English, Spanish, Mandarin, or any other major world language, Phi-3.5 offers robust, reliable language processing that adapts to various real-world applications. This makes it an ideal choice for businesses looking to deploy multilingual chatbots, automate content generation, or translate customer interactions in real time. Moreover, its fine-tuning ability allows for customization, making Phi-3.5 versatile across industries and tasks.

Customization and Fine-Tuning for Different Applications
The Phi-3.5 model is not limited to general language understanding tasks. It can be fine-tuned for specific applications, industries, and languages, allowing users to tailor its performance to meet their needs.
- Customizable for Industry-Specific Use Cases: With fine-tuning, the model can be trained further on domain-specific data to handle particular use cases like legal document translation, medical records analysis, or technical support.
Example: A healthcare company can fine-tune Phi-3.5 to understand medical terminology in multiple languages, enabling it to assist in processing patient records or generating multilingual health reports.
- Adapting for Specialized Tasks: You can train Phi-3.5 to perform specialized tasks like sentiment analysis, text summarization, or named entity recognition in specific languages. Fine-tuning enhances the model's ability to handle unique text formats or requirements.
Example: A marketing team can fine-tune the model to analyse customer feedback in different languages to identify trends or sentiment across various regions. The model can quickly classify feedback as positive, negative, or neutral, even in less widely spoken languages like Arabic or Korean.

Applications in Real-World Scenarios
To illustrate the versatility of Phi-3.5, here are some real-world applications where this model excels, demonstrating its multilingual capabilities and customization potential:

Case Study 1: Multilingual Customer Support Chatbots
Many global companies rely on chatbots to handle customer queries in real time. With Phi-3.5's multilingual abilities, businesses can deploy a single model that understands and responds in multiple languages, cutting down on the need to create language-specific chatbots.
Example: A global airline can use Phi-3.5 to power its customer service bot. Passengers from different countries can inquire about their flight status or baggage policies in their native languages, whether Japanese, Hindi, or Portuguese, and the model responds accurately in the appropriate language.

Case Study 2: Multilingual Content Generation
Phi-3.5 is also useful for businesses that need to generate content in different languages. For example, marketing campaigns often require creating region-specific ads or blog posts in multiple languages.
Phi-3.5 can help automate this process by generating localized content that is not just translated but adapted to fit the cultural context of the target audience.
Example: An international cosmetics brand can use Phi-3.5 to automatically generate product descriptions for different regions. Instead of merely translating a product description from English to Spanish, the model can tailor the description to fit cultural expectations, using language that resonates with Spanish-speaking audiences.

Case Study 3: Document Translation and Summarization
Phi-3.5 can be used to translate or summarize complex documents across languages. Its ability to preserve meaning and context across languages makes it ideal for industries where accuracy is crucial, such as legal or academic fields.
Example: A legal firm working on cross-border cases can use Phi-3.5 to translate contracts or legal briefs from German to English, ensuring the context and legal terminology are accurately preserved. It can also summarize lengthy documents in multiple languages, saving time for legal teams.

Fine-Tuning Phi-3.5 Model
Fine-tuning a language model like Phi-3.5 is a crucial step in adapting it to specific tasks or domains. This section walks through what fine-tuning is, why it matters in NLP, and how to fine-tune the Phi-3.5 model using the Azure Model Catalog for different languages and tasks. We'll also explore a code example and best practices for evaluating and validating the fine-tuned model.

What is Fine-Tuning?
Fine-tuning refers to the process of taking a pre-trained model and adapting it to a specific task or dataset by training it further on domain-specific data. In the context of NLP, fine-tuning is often required to ensure that the language model understands the nuances of a particular language, industry-specific terminology, or a specific use case.
Why Fine-Tuning is Necessary Pre-trained Large Language Models (LLMs) are trained on diverse datasets and can handle various tasks like text summarization, generation, and question answering. However, they may not perform optimally in specialized domains without fine-tuning. The goal of fine-tuning is to enhance the model's performance on specific tasks by leveraging its prior knowledge while adapting it to new contexts. Challenges of Fine-Tuning Resource Intensiveness: Fine-tuning large models can be computationally expensive, requiring significant hardware resources. Storage Costs: Each fine-tuned model can be large, leading to increased storage needs when deploying multiple models for different tasks. LoRA and QLoRA To address these challenges, techniques like LoRA (Low-rank Adaptation) and QLoRA (Quantized Low-rank Adaptation) have emerged. Both methods aim to make the fine-tuning process more efficient: LoRA: This technique reduces the number of trainable parameters by introducing low-rank matrices into the model while keeping the original model weights frozen. This approach minimizes memory usage and speeds up the fine-tuning process. QLoRA: An enhancement of LoRA, QLoRA incorporates quantization techniques to further reduce memory requirements and increase the efficiency of the fine-tuning process. It allows for the deployment of large models on consumer hardware without the extensive resource demands typically associated with full fine-tuning. 
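To make this concrete, the QLoRA setup described above can be sketched as a pair of configuration objects using Hugging Face transformers and peft. This is a configuration sketch rather than a full training script, and it assumes the bitsandbytes integration is installed; the model id, rank, and dtype choices are illustrative values to adapt to your environment:

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization config (QLoRA-style): base weights are stored
# in 4-bit precision while computations run in bfloat16
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Low-rank adapter config: only these small matrices are trained,
# the quantized base model stays frozen
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    task_type="CAUSAL_LM",
)

# The configs are then passed at load / wrap time, for example:
# model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)
# model = get_peft_model(prepare_model_for_kbit_training(model), lora_config)
```

This is what lets QLoRA fine-tune large models on consumer GPUs: the 4-bit base weights cut memory roughly fourfold, and only the adapter matrices receive gradients.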
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments
from peft import get_peft_model, LoraConfig

# Load a pre-trained model
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

# Configure LoRA
lora_config = LoraConfig(
    r=16,  # Rank of the low-rank update matrices
    lora_alpha=32,
    lora_dropout=0.1,
    task_type="SEQ_CLS",  # Sequence classification task
)

# Wrap the model with LoRA adapters
model = get_peft_model(model, lora_config)

# Define training arguments
training_args = TrainingArguments(
    output_dir="./results",
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
)

# Create a Trainer (train_dataset and eval_dataset are your prepared datasets)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)

# Start fine-tuning
trainer.train()

This code outlines how to set up a model for fine-tuning with LoRA, which significantly reduces resource requirements while still adapting the model effectively to a specific task. In summary, fine-tuning with methods like LoRA and QLoRA is essential for optimizing pre-trained models for specific NLP applications, making it feasible to deploy these powerful models efficiently across domains.

Why is Fine-Tuning Important in NLP?
Task-Specific Performance: Fine-tuning improves performance on tasks like text classification, machine translation, or sentiment analysis in specific domains (e.g., legal, healthcare).
Language-Specific Adaptation: Since models like Phi-3.5 are trained on general datasets, fine-tuning helps them handle industry-specific jargon and linguistic quirks.
Efficient Resource Utilization: Instead of training a model from scratch, fine-tuning leverages pre-trained knowledge, saving computational resources and time.

Steps to Fine-Tune Phi-3.5 in Azure AI Foundry
Fine-tuning the Phi-3.5 model in Azure AI Foundry involves several key steps.
Azure provides a user-friendly interface to streamline model customization, allowing you to quickly configure, train, and deploy models.

Step 1: Set Up the Environment in Azure AI Foundry
1) Access Azure AI Foundry: Log in to Azure AI Foundry. If you don't have an account, create one and set up a workspace.
2) Create a New Experiment: Once in Azure AI Foundry, create a new training experiment and choose the Phi-3.5 model from the pre-trained models in the model catalog.
3) Set Up the Data for Fine-Tuning: Upload your custom dataset, ensuring it is in a compatible format (e.g., CSV, JSON). For instance, if you are fine-tuning the model for a customer service chatbot, you could upload customer queries in different languages.

Step 2: Configure Fine-Tuning Settings
1) Select the Training Dataset: Select the dataset you uploaded and link it to the Phi-3.5 model.
2) Configure the Hyperparameters: Set training hyperparameters such as the number of epochs, learning rate, and batch size. You may need to experiment with these settings to achieve optimal performance.
3) Choose the Task Type: Specify the task you are fine-tuning for, such as text classification, translation, or summarization. This helps Azure AI Foundry optimize the model during fine-tuning.
4) Fine-Tune for Specific Languages: If you are fine-tuning for a specific language or multilingual tasks, ensure the dataset is labeled appropriately and contains enough examples in the target language(s). This allows Phi-3.5 to learn language-specific features effectively.

Step 3: Train the Model
1) Launch the Training Process: Once the configuration is complete, launch the training process in Azure AI Foundry. Depending on the size of your dataset and the complexity of the model, this can take some time.
2) Monitor Training Progress: Use Azure AI Foundry's built-in monitoring tools to track performance metrics such as loss, accuracy, and F1 score.
You can view the model's progress during training to ensure that it is learning effectively.

Code Example: Fine-Tuning Phi-3.5 for a Specific Use Case
Here's an illustrative snippet for fine-tuning the Phi-3.5 model with Python and an Azure AI Foundry SDK. In this example, we fine-tune the model for a customer support chatbot in multiple languages (treat the class and method names as pseudocode and adapt them to the SDK version you are using):

# Illustrative pseudocode -- adapt to your SDK version
from azure.ai import Foundry
from azure.ai.model import Model

# Initialize Azure AI Foundry
foundry = Foundry()

# Load the Phi-3.5 model
model = Model.load("phi-3.5")

# Set up the training dataset
training_data = foundry.load_dataset("customer_queries_dataset")

# Fine-tune the model
model.fine_tune(training_data, epochs=5, learning_rate=0.001)

# Save the fine-tuned model
model.save("fine_tuned_phi_3.5")

Best Practices for Evaluating and Validating Fine-Tuned Models
Once the model is fine-tuned, it's essential to evaluate and validate its performance before deploying it to production.
Split Data for Validation: Always split your dataset into training and validation sets so the model is evaluated on unseen data, which helps prevent overfitting.
Evaluate Key Metrics: Measure performance using metrics such as:
Accuracy: The proportion of correct predictions.
F1 Score: The harmonic mean of precision and recall.
Confusion Matrix: Visualizes true vs. false predictions for classification tasks.
Cross-Language Validation: If the model is fine-tuned for multiple languages, test its performance across all supported languages to ensure consistency and accuracy.
Test in Production-Like Environments: Before full deployment, test the fine-tuned model in a production-like environment to catch potential issues.
Continuous Monitoring and Re-Fine-Tuning: Once deployed, continuously monitor the model's performance and re-fine-tune it periodically as new data becomes available.

Deploying Phi-3.5 Model
After fine-tuning the Phi-3.5 model, the next crucial step is deploying it to make it accessible for real-world applications.
This section will cover two key deployment strategies: deploying in Azure for cloud-based scaling and reliability, and deploying locally with AI Toolkit for simpler offline usage. Each deployment strategy offers its own advantages depending on the use case. Deploying in Azure Azure provides a powerful environment for deploying machine learning models at scale, enabling organizations to deploy models like Phi-3.5 with high availability, scalability, and robust security features. Azure AI Foundry simplifies the entire deployment pipeline. Set Up Azure AI Foundry Workspace: Log in to Azure AI Foundry and navigate to the workspace where the Phi-3.5 model was fine-tuned. Go to the Deployments section and create a new deployment environment for the model. Choose Compute Resources: Compute Target: Select a compute target suitable for your deployment. For large-scale usage, it’s advisable to choose a GPU-based compute instance. Example: Choose an Azure Kubernetes Service (AKS) cluster for handling large-scale requests efficiently. Configure Scaling Options: Azure allows you to set up auto-scaling based on traffic. This ensures that the model can handle surges in demand without affecting performance. Model Deployment Configuration: Create an Inference Pipeline: In Azure AI Foundry, set up an inference pipeline for your model. Specify the Model: Link the fine-tuned Phi-3.5 model to the deployment pipeline. Deploy the Model: Select the option to deploy the model to the chosen compute resource. Test the Deployment: Once the model is deployed, test the endpoint by sending sample requests to verify the predictions. Configuration Steps (Compute, Resources, Scaling) During deployment, Azure AI Foundry allows you to configure essential aspects like compute type, resource allocation, and scaling options. Compute Type: Choose between CPU or GPU clusters depending on the computational intensity of the model. 
Resource Allocation: Define the minimum and maximum resources to be allocated for the deployment. For real-time applications, use Azure Kubernetes Service (AKS) for high availability. For batch inference, Azure Container Instances (ACI) is suitable. Auto-Scaling: Set up automatic scaling of the compute instances based on the number of requests. For example, configure the deployment to start with 1 node and scale to 10 nodes during peak usage. Cost Comparison: Phi-3.5 vs. Larger Language Models When comparing the costs of using Phi-3.5 with larger language models (LLMs), several factors come into play, including computational resources, pricing structures, and performance efficiency. Here’s a breakdown: Cost Efficiency Phi-3.5: Designed as a Small Language Model (SLM), Phi-3.5 is optimized for lower computational costs. It offers competitive performance at a fraction of the cost of larger models, making it suitable for budget-conscious projects. The smaller size (3.8 billion parameters) allows for reduced resource consumption during both training and inference. Larger Language Models (e.g., GPT-3.5): Typically require more computational resources, leading to higher operational costs. Larger models may incur additional costs for storage and processing power, especially in cloud environments. Performance vs. Cost Performance Parity: Phi-3.5 has been shown to achieve performance parity with larger models on various benchmarks, including language comprehension and reasoning tasks. This means that for many applications, Phi-3.5 can deliver similar results to larger models without the associated costs. Use Case Suitability: For simpler tasks or applications that do not require extensive factual knowledge, Phi-3.5 is often the more cost-effective choice. Larger models may still be preferred for complex tasks requiring deep contextual understanding or extensive factual recall. 
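The cost comparison above can be made concrete with simple per-token arithmetic. The per-1K-token prices below are placeholders chosen for illustration only, not actual Azure rates; check the Azure pricing page for current figures:

```python
def estimate_cost(input_tokens, output_tokens, price_in_per_1k, price_out_per_1k):
    """Estimate the cost of one call from token counts and per-1K-token prices."""
    return (input_tokens / 1000) * price_in_per_1k + (output_tokens / 1000) * price_out_per_1k

# Hypothetical list prices (USD per 1K tokens) -- placeholders, not real Azure rates
slm_call = estimate_cost(2000, 500, price_in_per_1k=0.0002, price_out_per_1k=0.0006)
llm_call = estimate_cost(2000, 500, price_in_per_1k=0.0100, price_out_per_1k=0.0300)

print(f"SLM call cost: ${slm_call:.4f}")
print(f"LLM call cost: ${llm_call:.4f}")
print(f"LLM/SLM cost ratio: {llm_call / slm_call:.0f}x")
```

Even with rough placeholder prices, multiplying the per-call difference by millions of monthly requests shows why an SLM like Phi-3.5 can be dramatically cheaper for workloads where it reaches performance parity.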
Pricing Structure
Azure Pricing: Phi-3.5 is available through Azure with a pay-as-you-go billing model, allowing users to scale costs based on usage. Pricing details for Phi-3.5 can be found on the Azure pricing page, where users can customize options based on their needs.

Code Example: API Setup and Endpoints for Live Interaction
Below is a Python code snippet demonstrating how to interact with a deployed Phi-3.5 model via an API in Azure:

import requests

# Define the API endpoint and your API key
api_url = "https://<your-azure-endpoint>/predict"
api_key = "YOUR_API_KEY"

# Prepare the input data
input_data = {"text": "What are the benefits of renewable energy?"}

# Make the API request
response = requests.post(
    api_url,
    json=input_data,
    headers={"Authorization": f"Bearer {api_key}"},
)

# Print the model's response
if response.status_code == 200:
    print("Model Response:", response.json())
else:
    print("Error:", response.status_code, response.text)

Deploying Locally with AI Toolkit
For developers who prefer to run models on their local machines, the AI Toolkit provides a convenient solution. The AI Toolkit is a lightweight platform that simplifies local deployment of AI models, allowing for offline usage, experimentation, and rapid prototyping. Deploying the Phi-3.5 model locally with the AI Toolkit is straightforward and well suited to personal projects, testing, or scenarios where cloud access is limited.

Introduction to AI Toolkit
The AI Toolkit is an easy-to-use platform for deploying language models locally without relying on cloud infrastructure. It supports a range of AI models and enables developers to work in a low-latency environment.

Advantages of deploying locally with AI Toolkit:
Offline Capability: No need for continuous internet access.
Quick Experimentation: Rapid prototyping and testing without the delays of cloud deployments.
Setup Guide: Installing and Running Phi-3.5 Locally Using AI Toolkit
1. Install AI Toolkit: Go to the AI Toolkit website and download the platform for your operating system (Linux, macOS, or Windows), then install it by running the appropriate installation command in your terminal.
2. Download the Phi-3.5 Model: Once AI Toolkit is installed, download the Phi-3.5 model locally using the model download command from the AI Toolkit documentation.
3. Run the Model Locally: After downloading the model, start a local session with the corresponding run command. This will launch a local server on your machine where the model will be available for interaction.

Code Example: Using Phi-3.5 Locally in a Project
Below is a Python code example demonstrating how to send a query to the locally deployed Phi-3.5 model running on the AI Toolkit.

import requests

# Define the local endpoint
local_url = "http://localhost:8000/predict"

# Prepare the input data
input_data = {"text": "What are the benefits of renewable energy?"}

# Make the API request
response = requests.post(local_url, json=input_data)

# Print the model's response
if response.status_code == 200:
    print("Model Response:", response.json())
else:
    print("Error:", response.status_code, response.text)

Comparing Language Capabilities
Test Results: How Phi-3.5 Handles Different Languages
The Phi-3.5 model demonstrates robust multilingual capabilities, effectively processing and generating text in various languages. Below are comparative examples showcasing its performance in English, Spanish, and Mandarin:

English Example:
Input: "What are the benefits of renewable energy?"
Output: "Renewable energy sources, such as solar and wind, reduce greenhouse gas emissions and promote sustainability."

Spanish Example:
Input: "¿Cuáles son los beneficios de la energía renovable?"
Output: "Las fuentes de energía renovable, como la solar y la eólica, reducen las emisiones de gases de efecto invernadero y promueven la sostenibilidad."

Mandarin Example:
Input: "可再生能源的好处是什么?"
Output: "可再生能源,如太阳能和风能,减少温室气体排放,促进可持续发展。" Performance Benchmarking and Evaluation Across Different Languages Benchmarking Phi-3.5 across different languages involves evaluating its accuracy, fluency, and contextual understanding. For instance, using BLEU scores and human evaluations, the model can be assessed on its translation quality and coherence in various languages. Real-World Use Case: Multilingual Customer Service Chatbot A practical application of Phi-3.5's multilingual capabilities is in developing a customer service chatbot that can interact with users in their preferred language. For instance, the chatbot could provide support in English, Spanish, and Mandarin, ensuring a wider reach and better user experience. Optimizing and Validating Phi-3.5 Model Model Performance Metrics To validate the model's performance in different scenarios, consider the following metrics: Accuracy: Measure how often the model's outputs are correct or align with expected results. Fluency: Assess the naturalness and readability of the generated text. Contextual Understanding: Evaluate how well the model understands and responds to context-specific queries. Tools to Use in Azure and Ollama for Evaluation Azure Cognitive Services: Utilize tools like Text Analytics and Translator to evaluate performance. Ollama: Use local testing environments to quickly iterate and validate model outputs. Conclusion In summary, Phi-3.5 exhibits impressive multilingual capabilities, effective deployment options, and robust performance metrics. Its ability to handle various languages makes it a versatile tool for natural language processing applications. Phi-3.5 stands out for its adaptability and performance in multilingual contexts, making it an excellent choice for future NLP projects, especially those requiring diverse language support. 
We encourage readers to experiment with the Phi-3.5 model using Azure AI Foundry or the AI Toolkit, explore fine-tuning techniques for their specific use cases, and share their findings with the community. For more information on optimized fine-tuning techniques, check out the Ignite Fine-Tuning Workshop.

References
Customize the Phi-3.5 family of models with LoRA fine-tuning in Azure
Fine-tune Phi-3.5 models in Azure
Fine Tuning with Azure AI Foundry and Microsoft Olive Hands on Labs and Workshop
Customize a model with fine-tuning (Microsoft Learn): https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/fine-tuning?tabs=azure-openai%2Cturbo%2Cpython-new&pivots=programming-language-studio
Microsoft AI Toolkit - AI Toolkit for VSCode
Introduction
Ever found yourself wishing your web interface could really talk and listen back to you? With a few clicks (and a bit of code), you can turn your plain Open WebUI into a full-on voice assistant. In this post, you'll see how to spin up an Azure Speech resource, hook it into your frontend, and watch as user speech transforms into text and your app's responses leap off the screen in a human-like voice. By the end of this guide, you'll have a voice-enabled web UI that actually converses with users, opening the door to hands-free controls, better accessibility, and a genuinely richer user experience. Ready to make your web app speak? Let's dive in.

Why Azure AI Speech?
We use the Azure AI Speech service in Open Web UI to enable voice interactions directly within web applications. This allows users to:
Speak commands or input instead of typing, making the interface more accessible and user-friendly.
Hear responses or information read aloud, which improves usability for people with visual impairments or those who prefer audio.
Enjoy a more natural, hands-free experience, especially on devices like smartphones or tablets.
In short, integrating the Azure AI Speech service into Open Web UI helps make web apps smarter, more interactive, and easier to use by adding speech recognition and voice output features. If you haven't hosted Open WebUI already, follow my other step-by-step guide to host Ollama WebUI on Azure. Proceed to the next step if you have Open WebUI deployed already. Learn more about Open WebUI here.

Deploy the Azure AI Speech service in Azure
Navigate to the Azure Portal and search for Azure AI Speech in the portal search bar. Create a new Speech service by filling out the fields on the resource creation page. Click on "Create" to finalize the setup. After the resource has been deployed, click on the "View resource" button and you should be redirected to the Azure AI Speech service page.
The page should display the API keys and endpoints for the Azure AI Speech service, which you can use in Open Web UI.

Setting things up in Open Web UI
Speech to Text settings (STT)
Head to the Open Web UI Admin page > Settings > Audio. Paste the API key obtained from the Azure AI Speech service page into the API key field below. Unless you use a different Azure region or want to change the default STT configuration, leave all other settings blank.
Text to Speech settings (TTS)
Next, configure the TTS settings in Open Web UI by switching the TTS Engine to the Azure AI Speech option. Again, paste the API key obtained from the Azure AI Speech service page and leave all other settings blank. You can change the TTS voice from the dropdown selection in the TTS settings, as depicted in the image below. Click Save to apply the changes.

Expected Result
Now, let's test that everything works. Open a new chat / temporary chat in Open Web UI and click on the Call / Record button. The STT engine (Azure AI Speech) should transcribe your voice and provide a response based on the voice input. To test the TTS feature, click on Read Aloud (the speaker icon) under any response from Open Web UI. The audio should now be generated by the Azure AI Speech service!

Conclusion
And that's a wrap! You've just given your Open WebUI the gift of capturing user speech, turning it into text, and then talking right back with Azure's neural voices. Along the way you saw how easy it is to spin up a Speech resource in the Azure portal, wire up real-time transcription in the browser, and pipe responses through the TTS engine. From here, it's all about experimentation. Try swapping in different neural voices or dialing in new languages. Tweak how you start and stop listening, play with silence detection, or add custom pronunciation tweaks for those tricky product names.
Before you know it, your interface will feel less like a web page and more like a conversation partner.1.1KViews2likes1CommentCreate Stunning AI Videos with Sora on Azure AI Foundry!
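As a follow-up experiment, the same Azure Speech resource that powers the UI can also be driven from a short script via its text-to-speech REST endpoint. This is a minimal sketch: the region, voice name, and output format below are assumptions you should match to your own resource before sending a request:

```python
import urllib.request

def build_ssml(text: str, voice: str = "en-US-JennyNeural") -> str:
    """Build a minimal SSML document for the Azure TTS REST endpoint."""
    return (
        "<speak version='1.0' xml:lang='en-US'>"
        f"<voice name='{voice}'>{text}</voice>"
        "</speak>"
    )

def synthesize(text: str, api_key: str, region: str = "eastus") -> bytes:
    """POST the SSML to the TTS endpoint and return the raw audio bytes."""
    url = f"https://{region}.tts.speech.microsoft.com/cognitiveservices/v1"
    req = urllib.request.Request(
        url,
        data=build_ssml(text).encode("utf-8"),
        headers={
            "Ocp-Apim-Subscription-Key": api_key,
            "Content-Type": "application/ssml+xml",
            "X-Microsoft-OutputFormat": "audio-16khz-32kbitrate-mono-mp3",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Usage (requires a real key; performs a network call):
# audio = synthesize("Hello from Azure AI Speech!", api_key="YOUR_API_KEY")
# with open("hello.mp3", "wb") as f:
#     f.write(audio)
```

The speech-to-text direction can be scripted in a similar way against the STT REST endpoint, or through the official Speech SDK if you prefer a higher-level API.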
Special credit to Rory Preddy for creating the GitHub resource that enables us to learn more about Azure Sora. Reach out to him on LinkedIn to say thanks.

Introduction
Artificial Intelligence (AI) is revolutionizing content creation, and video generation is at the forefront of this transformation. OpenAI's Sora, a groundbreaking text-to-video model, allows creators to generate high-quality videos from simple text prompts. When paired with the powerful infrastructure of Azure AI Foundry, you can harness Sora's capabilities with scalability and efficiency, whether on a local machine or a remote setup. In this blog post, I'll walk you through the process of generating AI videos using Sora on Azure AI Foundry. We'll cover the setup for both local and remote environments.

Requirements:
Azure AI Foundry with Sora model access
A Linux machine/VM with the following packages already installed: Java JRE 17 or later (17 recommended), and Maven.

Step Zero – Deploying the Azure Sora model on AI Foundry
Navigate to the Azure AI Foundry portal and head to the "Models + Endpoints" section (found on the left side of the Azure AI Foundry portal) > Click on the "Deploy Model" button > "Deploy base model" > Search for Sora > Click on "Confirm". Give a deployment name and specify the deployment type > Click "Deploy" to finalize the configuration. You should receive an API endpoint and key after successfully deploying Sora on Azure AI Foundry. Store these in a safe place because we will be using them in the next steps.

Step One – Setting up the Sora Video Generator on the local/remote machine
Clone the roryp/sora repository on your machine by running the commands below:

git clone https://github.com/roryp/sora.git
cd sora

Then, edit the application.properties file in the src/main/resources/ folder to include your Azure OpenAI credentials.
Change the configuration below:

azure.openai.endpoint=https://your-openai-resource.cognitiveservices.azure.com
azure.openai.api-key=your_api_key_here

If port 8080 is already used by another application and you want to change the port on which the web app runs, set the "server.port" property to the desired port. Next, make the "mvnw" wrapper script executable:

chmod +x mvnw

Run the application:

./mvnw spring-boot:run

Open your browser and enter your localhost/remote host address (format: [host-ip:port]) in the address bar. If you are running on a remote host, do not forget to update your firewall/NSG rules to allow inbound connections to the configured port. You should see the web app for generating videos with Sora using the API provided by Azure AI Foundry.

Now, let's generate a video with the Sora Video Generator. Enter a prompt in the first text field, choose the video resolution, and set the video duration. (Due to a technical limitation, Sora can only generate videos of up to 20 seconds.) Click on the "Generate video" button to proceed. The cost to generate the video is displayed below the "Generate Video" button for transparency; you can click on the "View Breakdown" button to learn more about the cost breakdown.

The video should be ready to download within a maximum of 5 minutes. You can check the status of the video by clicking the "Check Status" button in the web app; the page refreshes every 10 seconds to fetch real-time updates from Sora, and the app will inform you once the download is ready. Once it is ready, click the "Download Video" button to download the video.

Conclusion
Generating AI videos with Sora on Azure AI Foundry is a game-changer for content creators, marketers, and developers. By following the steps outlined in this guide, you can set up your environment, integrate Sora, and start creating stunning AI-generated videos.
Experiment with different prompts, optimize your workflow, and let your imagination run wild! Have you tried generating AI videos with Sora or Azure AI Foundry? Share your experiences or questions in the comments below. Don’t forget to subscribe for more AI and cloud computing tutorials!951Views0likes3CommentsModel Mondays S2:E2 - Understanding Model Context Protocol (MCP)
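For readers who want to script generation instead of using the bundled web app, submitting a job reduces to a small JSON payload POSTed to your deployment. The endpoint route, API version, and field names below follow the preview API at the time of writing and should be treated as assumptions to verify against the current Azure OpenAI documentation:

```python
import json
import urllib.request

def build_video_job(prompt: str, width: int = 480, height: int = 480, seconds: int = 5) -> dict:
    """Build the JSON payload for a Sora video-generation job (max 20 seconds)."""
    if seconds > 20:
        raise ValueError("Sora currently generates videos of at most 20 seconds")
    return {
        "model": "sora",  # use your deployment name if it differs
        "prompt": prompt,
        "width": width,
        "height": height,
        "n_seconds": seconds,
    }

def submit_job(endpoint: str, api_key: str, payload: dict) -> dict:
    """POST the job to the (preview) video-generation endpoint and return the job JSON."""
    url = f"{endpoint}/openai/v1/video/generations/jobs?api-version=preview"
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"api-key": api_key, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage (requires real credentials; performs a network call):
# job = submit_job("https://your-openai-resource.cognitiveservices.azure.com",
#                  "YOUR_API_KEY", build_video_job("A cat surfing a wave at sunset"))
```

Generation is asynchronous, so the returned job must be polled for completion, which is exactly what the web app's "Check Status" button does behind the scenes.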
This week in Model Mondays, we focus on the Model Context Protocol (MCP) and learn how to securely connect AI models to real-world tools and services using MCP, Azure AI Foundry, and industry-standard authorization. Read on for my recap.

About Model Mondays
Model Mondays is a weekly series designed to help you build your Azure AI Foundry Model IQ step by step. Here's how it works:
5-Minute Highlights – Quick news and updates about Azure AI models and tools on Monday
15-Minute Spotlight – Deep dive into a key model, protocol, or feature on Monday
30-Minute AMA on Friday – Live Q&A with subject matter experts from Monday's livestream
If you want to grow your skills with the latest in AI model development, Model Mondays is the place to start. Want to follow along?
Register Here - to watch upcoming Model Mondays livestreams
Watch Playlists - to replay past Model Mondays episodes
Register Here - to join the AMA on MCP on Friday, Jun 27
Visit The Forum - to view Foundry Friday AMAs and recaps

Spotlight On: Model Context Protocol (MCP)
This week, the Model Mondays spotlight was on the Model Context Protocol (MCP) with subject matter expert Den Delimarsky. Don't forget to check out the slides from the presentation for resource links! In this blog post, I'll talk about my five key takeaways from this episode:
What Is MCP and Why Does It Matter?
What Is MCP Authorization and Why Is It Important?
How Can I Get Started with MCP?
Spotlight: My Aha Moment
Highlights: What's New in Azure AI

1. What Is MCP and Why Is It Important?
MCP is a protocol that standardizes how AI applications connect their underlying AI models to required knowledge sources (data) and interaction APIs (functions) for more effective task execution. Because these models are pre-trained, they lack access to real-time or proprietary data sources (for knowledge) and real-world environments (for interaction).
MCP allows them to "discover and use" relevant knowledge and action tools that add the necessary context to the model for task execution.
Explore: The MCP Specification
Learn: MCP For Beginners
Want to learn more about MCP? Check out the AI Engineer World Fair 2025 "MCP and Keynotes" track. It kicks off with a keynote from Asha Sharma that gives you a broader vision for Azure AI Foundry. Then look for the talk from Harald Kirschner on MCP and VS Code.

2. What Is MCP Authorization and Why Does It Matter?
MCP (Model Context Protocol) authorization is a system that helps developers manage who can access their apps, especially when they are hosted in the cloud. The goal is to simplify the process of securing these apps by using common tools like OAuth and identity providers (such as Google or GitHub), so developers don't have to be security experts.
Key Takeaways:
The new MCP proposal uses familiar identity providers to simplify the authorization process.
It allows developers to secure their apps without requiring deep knowledge of security.
The update ensures better security controls and prepares the system for future authentication methods.
Related Reading:
Aaron Parecki, Let's Fix OAuth in MCP
Den Delimarsky, Improving The MCP Authorization Spec - One RFC At A Time
MCP Specification, Authorization protocol draft
On Monday, Den joined us live to talk about the work he did for the authorization protocol. Watch the session now to get a sense of what the MCP authorization protocol does, how it works, and why it matters. Have questions? Submit them to the forum or join the Foundry Friday AMA on Jun 27 at 1:30pm ET.

3. How Can I Get Started?
If you want to start working with MCP, here's how to do it easily:
Learn the Fundamentals: Explore MCP For Beginners
Use an MCP Server: Explore VS Code Agent Mode support.
Use MCP with AI Agents: Explore the Azure MCP Server

4. What's New in Azure AI Foundry?
Managed Compute for Cohere Models: Faster, secure AI deployments with low latency.
Prompt Shields: New Azure security system to protect against prompt injection and unsafe content.
OpenAI o3 Pro Model: A fast, low-cost model similar to GPT-4 Turbo.
Codex Mini Model: A smaller, quicker model perfect for developer command-line tasks.
MCP Security Upgrades: Now easier to secure AI apps using familiar OAuth identity providers.

5. My Aha Moment
Before this session, I used to think that connecting apps to AI was complicated and risky. I believed developers had to build their own security systems from scratch, which sounded tough. But this week, I learned that MCP makes it simple. We can now use trusted logins like Google or GitHub and securely connect AI models to real-world apps without extra hassle.

How I Learned This
To be honest, I also used Copilot to help me understand and summarize this topic in simple words. I wanted to make sure I really understood it well enough to explain it to my friends and peers. I believe in learning with the tools we have, and AI is one of them. By using Copilot and combining it with what I learned from the Model Mondays session, I was able to write this blog in a way that is easy to understand.
Takeaway for Beginners: It's okay to use AI to learn; what matters is that you grow, verify, and share the knowledge in your own way.

Coming Up Next Week:
Next week, we dive into SLMs & Reasoning (Phi-4) with Mojan Javaheripi, PhD, Senior Researcher at Microsoft Research. This session will explore how Small Language Models (SLMs) can perform advanced reasoning tasks, and what makes reasoning models like Phi-4 efficient, scalable, and useful in practical AI applications. Register Here!

Join The Community
Great devs don't build alone! In a fast-paced developer ecosystem, there's no time to hunt for help. That's why we have the Azure AI Developer Community. Join us today and let's journey together!
Join the Discord - for real-time chats, events & learning
Explore the Forum - for AMA recaps, Q&A, and help!
About Me: I'm Sharda, a Gold Microsoft Learn Student Ambassador interested in cloud and AI. Find me on GitHub, Dev.to, Tech Community and LinkedIn. In this blog series I have summarized my takeaways from this week's Model Mondays livestream.