Phi-4: Small Language Models That Pack a Punch
What Are Small Language Models, and Why Should You Care?

If you've been following AI development, you can probably recall "bigger is better" being the mantra for years. GPT-3.5 had 175 billion parameters, GPT-4 is even larger, and everyone seemed to be in an arms race to build the biggest model possible. But here's the thing: bigger models are expensive to run, slow to respond, and often overkill for what you actually need.

Small Language Models (SLMs) flip this script. These are models with fewer parameters (typically 1-15 billion) that are trained thoughtfully on high-quality data. The result is models that can run on your laptop, respond instantly, and still handle complex reasoning tasks, which translates into increased speed, privacy, and cost-effectiveness.

Microsoft's been exploring this space for a while. It started with Phi-1, which showed that small models trained on carefully curated "textbook-like" data could punch way above their weight class. Then came Phi-2 and Phi-3, each iteration getting better at reasoning and problem-solving.

Now we have Phi-4, and it's honestly impressive. At 14 billion parameters, it outperforms models that are 5 times its size on math and reasoning tasks. Microsoft trained it on 9.8 trillion tokens over three weeks, using a mix of synthetic data (generated by larger models like GPT-4o) and high-quality web content. The key innovation isn't just throwing more data at it: the team was incredibly selective about what to include, focusing on teaching reasoning patterns rather than memorizing facts.

The Phi family has also expanded recently. There's Phi-4-mini at 3.8 billion parameters for even lighter deployments, and Phi-4-multimodal at 5.6 billion parameters that can handle text, images, and audio all at once. Pretty cool if you're building something that needs to understand screenshots or transcribe audio.

How Well Does It Actually Perform?

Let's talk numbers, because that's where Phi-4 really shines. On MMLU (a broad test of knowledge across 57 subjects), Phi-4 scores 84.8%. That's better than Phi-3's 77.9% and competitive with models like GPT-4o-mini. On MATH (competition-level math problems), it hits 56.1%, significantly higher than Phi-3's 42.5%. For code generation on HumanEval, it achieves 82.6%.

| Model | Parameters | MMLU | MATH | HumanEval |
|---|---|---|---|---|
| Phi-3-medium | 14B | 77.9% | 42.5% | 62.5% |
| Phi-4 | 14B | 84.8% | 56.1% | 82.6% |
| Llama 3.3 | 70B | 86.0% | ~51% | ~73% |
| GPT-4o-mini | Unknown | ~82% | 52.2% | 87.2% |

Microsoft tested Phi-4 on the November 2024 AMC-10 and AMC-12 math competitions. These are tests that over 150,000 high school students take each year, and the questions appeared after all of Phi-4's training data was collected. Phi-4 beat not just similar-sized models, but also much larger ones. That suggests it has actually learned to reason, not just memorize benchmark answers.

The model also does well on GPQA (graduate-level science questions) and even outperforms its teacher model GPT-4o on certain reasoning tasks. That's pretty remarkable for a 14 billion parameter model.

If you're wondering about practical performance, Phi-4 runs about 2-4x faster than comparable larger models and uses significantly less memory. You can run it on a single GPU or even on newer AI-capable laptops with NPUs. That makes it practical for real-time applications where latency matters.

Try Phi-4 Yourself

You can start experimenting with Phi-4 right now without any complicated setup.

Azure AI Foundry

Microsoft's Azure AI Foundry is probably the quickest way to get started.
Once you're logged in:

1. Go to the Model Catalog and search for "Phi-4"
2. Click "Use this Model"
3. Select an active subscription in the subsequent pop-up and confirm
4. Deploy and start chatting or testing prompts

The playground lets you adjust parameters like temperature and see how the model responds. You can test it on math problems, coding questions, or reasoning tasks without writing any code. There's also a code view that shows you how to integrate it into your own applications (a hedged sketch of what that integration can look like is included at the end of this post).

Hugging Face (for open-source enthusiasts)

If you prefer to work with open-source tools, the model weights are available on Hugging Face. You can run it locally or use their hosted inference API:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="microsoft/phi-4")

messages = [
    {"role": "user", "content": "What's the derivative of x²?"},
]
print(pipe(messages))
```

Other Options

The Phi Cookbook on GitHub has tons of examples for different use cases like RAG (retrieval-augmented generation), function calling, and multimodal inputs. If you want to run it locally with minimal setup, you can use Ollama (ollama pull phi-4) or LM Studio, which provides a nice GUI. The Azure AI Foundry Labs also has experimental features where you can test Phi-4-multimodal with audio and image inputs.

What's Next?

Phi-4 is surprisingly capable for its size, and it's practical enough to run almost anywhere. Whether you're building a chatbot, working on educational software, or just experimenting with AI, it's worth checking out. We might explore local deployment in more detail later, including how to build multi-agent systems where several SLMs work together, and maybe even look at fine-tuning Phi-4 for specific tasks. But for now, give it a try and see what you can build with it.

The model weights are MIT licensed, so you're free to use them commercially. Microsoft's made it pretty easy to get started, so there's really no reason not to experiment.

Resources:
- Azure AI Foundry
- Phi-4 on Hugging Face
- Phi Cookbook
- Phi-4 Technical Report
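As promised above, here is a minimal sketch of what calling a Phi-4 deployment from Python can look like, along the lines of what the playground's code view generates. It assumes the azure-ai-inference package; the endpoint URL, key variable names, and deployment name are placeholders you would replace with values from your own Foundry project.

```python
# Minimal sketch: chat completion against an Azure AI Foundry deployment of Phi-4.
# Endpoint, key, and deployment name below are placeholders, not real values.
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint=os.environ["AZURE_AI_ENDPOINT"],  # endpoint shown on your deployment page
    credential=AzureKeyCredential(os.environ["AZURE_AI_KEY"]),
)

response = client.complete(
    model="Phi-4",  # the deployment name you chose in Azure AI Foundry
    messages=[
        SystemMessage(content="You are a concise math tutor."),
        UserMessage(content="What's the derivative of x²?"),
    ],
    temperature=0.2,
    max_tokens=256,
)

print(response.choices[0].message.content)
```

The same client pattern should work for other catalog models, which is handy if you later want to swap in Phi-4-mini for a lighter deployment.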
Foundry Fridays: Your Gateway to Azure AI Discovery

🎓 What Is Foundry Fridays?

Every Friday at 1:30 PM ET, join the Azure AI Foundry Discord Community (https://aka.ms/model-mondays/discord) for a 30-minute live Ask Me Anything (AMA) session. It's your chance to connect with the experts behind Azure AI—Principal PMs, researchers, and engineers—who are building the tools you'll use in classrooms, hackathons, and real-world projects. Whether you're experimenting with model fine-tuning, curious about local inference, or diving into agentic workflows and open-source tooling, this is where your questions get answered live and unscripted.

💡 Why Students & Educators Should Join

- Direct Access to Experts: Ask your questions live and get real-time insights from the people building Azure AI.
- Weekly Themes That Matter: From model routing and MCP registries to SAMBA architectures, AI agents, Model Router, and deployment templates, each week brings a new topic to explore.
- Community-Led Conversations: Hosted by leaders like Nitya Narasimhan and Lee Stott, these sessions are interactive, inclusive, and designed to spotlight your questions.
- No Slides, Just Substance: Skip the lectures—this is about real talk, real tech, and real learning.

📚 Bonus Learning: Model Mondays

Want even more? Catch up on the Model Mondays series on demand at https://aka.ms/model-mondays and get ready for Season 3, streaming every Monday at 1:30 PM ET.

🚀 How to Join

1. Join the Discord: https://aka.ms/model-mondays/discord
2. Find the AMA: Check the #community-calls and #model-mondays channels or look for pinned events.
3. Ask Anything: Bring your questions, ideas, or just listen in. No registration needed.

💬 Final Thoughts

Whether you're coding your first AI project, mentoring students, or researching the next big thing, listen in, ask the experts your questions, and hear from the wider community. Foundry Fridays is your space to learn, connect, and grow. So grab your headphones, jump into Discord, and let's shape the future of AI—together.

🗓️ Fridays | 1:30 PM ET
📍 Azure AI Foundry Discord
🔗 https://aka.ms/model-mondays/discord
Model Mondays S2E01 Recap: Advanced Reasoning Session

About Model Mondays

Want to know what reasoning models are and how you can build advanced reasoning scenarios like a Deep Research agent using Azure AI Foundry? Check out this recap from Model Mondays Season 2 Ep 1.

Model Mondays is a weekly series to help you build your model IQ in three steps:

1. Catch the 5-min Highlights on Monday, to get up to speed on model news
2. Catch the 15-min Spotlight on Monday, for a deep-dive into a model or tool
3. Catch the 30-min AMA on Friday, for a Q&A session with subject matter experts

Want to follow along?

- Register Here - to watch upcoming livestreams for Season 2
- Visit The Forum - to see the full AMA schedule for Season 2
- Register Here - to join the AMA on Friday Jun 20

Spotlight On: Advanced Reasoning

This week, the Model Mondays spotlight was on Advanced Reasoning with subject matter expert Marlene Mhangami. In this blog post, I'll talk about my five takeaways from this episode:

1. Why Are Reasoning Models Important?
2. What Is an Advanced Reasoning Scenario?
3. How Can I Get Started with Reasoning Models?
4. Spotlight: My Aha Moment
5. Highlights: What's New in Azure AI

1. Why Are Reasoning Models Important?

In today's fast-evolving AI landscape, it's no longer enough for models to just complete text or summarize content. We need AI that can:

- Understand multi-step tasks
- Make decisions based on logic
- Plan sequences of actions or queries
- Connect context across turns

Reasoning models are large language models (LLMs) trained with reinforcement learning techniques to "think" before they answer. Rather than simply generating a response based on probability, these models follow an internal thought process, producing a chain of reasoning before responding. This makes them ideal for complex problem-solving tasks, and they're the foundation of building intelligent, context-aware agents. They enable next-gen AI workflows in everything from customer support to legal research and healthcare diagnostics.

Reason: they allow AI to go beyond surface-level responses and deliver solutions that reflect understanding, not just language patterning.

2. What Does Advanced Reasoning Involve?

An advanced reasoning scenario is one where a model:

- Breaks a complex prompt into smaller steps
- Retrieves relevant external data
- Uses logic to connect dots
- Outputs a structured, reasoned answer

Example: A user asks, "What are the financial and operational risks of expanding a startup to Southeast Asia in 2025?" This is the kind of question that requires extensive research and analysis. A reasoning model might tackle this by:

- Retrieving reports on Southeast Asia market conditions
- Breaking down risks into financial, political, and operational buckets
- Cross-referencing data with recent trends
- Returning a reasoned, multi-part answer

3. How Can I Get Started with Reasoning Models?

To get started, you need to visit a catalog that has examples of these models. Try the GitHub Models Marketplace and look for the reasoning category in the filter. Try the Azure AI Foundry model catalog and look for reasoning models by name. Examples include:

- The o-series of models from Azure OpenAI
- The DeepSeek-R1 models
- The Grok 3 models
- The Phi-4 reasoning models

Next, you can use SDKs or the Playground for exploring the model capabilities (a small code sketch is included at the end of this recap).

1. Try Lab 331 - for a beginner-friendly guide.
2. Try Lab 333 - for an advanced project.
3. Try the GitHub Model Playground - to compare reasoning and GPT models.
4. Try the Deep Research Agent using LangChain sample - as a great starting project.

Have questions or comments?
Join the Friday AMA on the Azure AI Foundry Discord.

4. Spotlight: My Aha Moment

Before this session, I thought reasoning meant longer or more detailed responses. But this session helped me realize that reasoning means structured thinking: models now plan, retrieve, and respond with logic. This inspired me to think about building AI agents that go beyond chat and actually assist users like a teammate. It also made me want to dive deeper into LangChain + Azure AI workflows to build mini-agents for real-world use.

5. Highlights: What's New in Azure AI

Here's what's new in the Azure AI Foundry:

- Direct From Azure Models - Try hosted models like OpenAI GPT on PTU plans
- SORA Video Playground - Generate video from prompts via SORA models
- Grok 3 Models - Now available for secure, scalable LLM experiences
- DeepSeek R1-0528 - A reasoning-optimized, Microsoft-tuned open-source model

These are all available in the Azure Model Catalog and can be tried with your Azure account.

Did You Know?

Your first step is to find the right model for your task. But what if you could have the model automatically selected for you, based on the prompt you provide? That's the magic of Model Router, a deployable AI chat model that dynamically selects the best LLM based on your prompt. Instead of choosing one model manually, the Router makes that choice in real time. Currently, this works with a fixed set of Azure OpenAI models, including a reasoning model option. Keep an eye on the documentation for more updates.

Why it's powerful:

- Saves cost by switching between models based on complexity
- Optimizes performance by selecting the right model for the task
- Lets you test and compare model outputs quickly

Try it out in Azure AI Foundry or read more in the Model Catalog.

Coming Up Next

Next week, we dive into Model Context Protocol, an open protocol that empowers agentic AI applications by making it easier to discover and integrate knowledge and action tools with your model choices. Register Here to get reminded - and join us live on Monday!

Join The Community

Great devs don't build alone! In a fast-paced developer ecosystem, there's no time to hunt for help. That's why we have the Azure AI Developer Community. Join us today and let's journey together!

- Join the Discord - for real-time chats, events & learning
- Explore the Forum - for AMA recaps, Q&A, and help!

About Me

I'm Sharda, a Gold Microsoft Learn Student Ambassador interested in cloud and AI. Find me on GitHub, Dev.to, Tech Community and LinkedIn. In this blog series I have summarized my takeaways from this week's Model Mondays livestream.
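To close out the recap, here is the code sketch promised in the getting-started section: a minimal example of calling a reasoning model from the catalog with the azure-ai-inference package. The endpoint, key, and model name are placeholders (use your own GitHub Models or Azure AI Foundry values), and the <think>-tag parsing assumes a DeepSeek-R1-style output that wraps its chain of thought in <think> tags.

```python
# Minimal sketch: call a reasoning model and separate its "thinking" from the answer.
# Endpoint, key, and model name are placeholders; adjust for your own deployment.
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint=os.environ["INFERENCE_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["INFERENCE_KEY"]),
)

response = client.complete(
    model="DeepSeek-R1",  # model or deployment name as it appears in your catalog
    messages=[UserMessage(content=(
        "What are the financial and operational risks of expanding "
        "a startup to Southeast Asia in 2025? Answer in three bullets."
    ))],
    max_tokens=2048,
)

text = response.choices[0].message.content

# DeepSeek-R1-style models emit their reasoning trace between <think> tags.
if "</think>" in text:
    reasoning, answer = text.split("</think>", 1)
    print("Reasoning trace:\n", reasoning.replace("<think>", "").strip())
    print("\nFinal answer:\n", answer.strip())
else:
    print(text)
```

Separating the trace from the answer is a simple way to see the "structured thinking" discussed above without cluttering what you show to end users.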
Model Mondays S2:E3 Understanding SLMs and Reasoning with Mojan Javaheripi

This week in Model Mondays, we focus on Small Language Models (SLMs) and Reasoning. We learn how reasoning models leverage inference-time scaling to execute complex tasks, and ask how we can use them on resource-constrained devices. Read on for my recap of Mojan Javaheripi's insights on Phi-4 reasoning models that are redefining small language models (SLMs) for the agentic era of apps.

About Model Mondays

Model Mondays is a weekly series designed to help you build your Azure AI Foundry Model IQ step by step. Here's how it works:

- 5-Minute Highlights - Quick news and updates about Azure AI models and tools on Monday
- 15-Minute Spotlight - Deep dive into a key model, protocol, or feature on Monday
- 30-Minute AMA on Friday - Live Q&A with subject matter experts from Monday's livestream

If you want to grow your skills with the latest in AI model development, Model Mondays is the place to start.

Want to follow along?

- Register Here - to watch upcoming Model Monday livestreams
- Watch Playlists - to replay past Model Monday episodes
- Register Here - to join the AMA on SLMs and Reasoning on Friday Jul 03
- Visit The Forum - to view Foundry Friday AMAs and recaps

This post was generated with AI help and human revision & review. To learn more about our motivation and workflows, please refer to this document on our website. We are continuing to experiment with ideas here - feedback is welcome! Just drop us a comment and let us know!

Spotlight On: SLMs and Reasoning

Missed watching the livestream? Catch up on the episode below - and visit https://aka.ms/model-mondays/playlist to catch up on all the previous episodes in the series. And check out the Discussion Forum post here for all the resources and updates from the AMA and more.

1. What is this topic and why is it important?

Small Language Models (SLMs) like Phi-4 represent a breakthrough in making advanced reasoning capabilities accessible on resource-constrained devices. While large language models require massive computational resources, SLMs can deliver sophisticated reasoning while running on edge devices, mobile phones, and local hardware. This is crucial because it democratizes AI access, reduces latency, and enables privacy-preserving applications where data doesn't need to leave the device.

Reasoning models use inference-time scaling, meaning they can "think" longer about complex problems to arrive at better solutions. Phi-4 specifically excels at mathematical reasoning, code generation, and logical problem-solving while maintaining a smaller footprint than traditional large models.

2. What is one key takeaway from the episode?

The key insight is that Phi-4 proves that model size doesn't always correlate with reasoning capability. Through advanced training techniques and architectural improvements, SLMs can achieve reasoning performance that rivals much larger models while being practical for deployment in real-world, resource-constrained environments. This opens up entirely new possibilities for agentic applications that can run locally and respond quickly.

3. How can I get started?

To get started with SLMs and reasoning:

1. Explore Phi-4 on the Azure AI Foundry Model Catalog
2. Try the reasoning capabilities in the Azure AI Foundry Playground
3. Download and experiment with Phi-4 for local development (a minimal local-inference sketch is included at the end of this recap)
4. Check out the sample applications and use cases in the Azure AI Foundry documentation

What's new in Azure AI Foundry?

Azure AI Foundry continues to evolve to support the growing ecosystem of Small Language Models and agentic apps.
Recently, new capabilities have been added to make it easier to fine-tune SLMs like Phi-4 directly in Azure AI Studio. Updates include:

- Enhanced Model Catalog: Easier discovery of SLMs, reasoning models, and multi-modal models.
- Improved Prompt Flow Integration: Now with templates specifically designed for SLM-based reasoning tasks.
- New Evaluation Tools: Built-in model comparison dashboards to quickly test reasoning performance across different SLM variants.
- Edge Deployment Support: Simplified workflows for packaging and deploying SLMs to local devices and edge environments.

Want to get a summary of ALL the news from Jun 2025? Just visit this post on the Azure AI Foundry Discussions Forum for all the links!

My A-Ha Moment

The biggest aha moment for me was realizing that a model doesn't need to be huge to be smart. Phi-4 proved that small models can actually handle complex reasoning tasks just like big models. What really clicked for me:

- You don't need heavy GPUs or cloud servers. These models can run on mobile phones, edge devices, or small local machines.
- Your data stays on your device, which is great for privacy and faster responses.

It's a total game changer because now we can build intelligent apps that work even on low-resource devices.

Coming Up Next Week

Next week, we dive into AI Developer Experiences with Leo Yao. We'll explore how to streamline the AI developer journey from model selection and usage to evaluation and app deployment using the AI Toolkit and Azure AI Foundry extensions for Visual Studio Code. Discover the key capabilities they provide for generative AI app & agent development. Register Here to be notified - then watch live on YouTube below.

Join The Community

Great devs don't build alone! In a fast-paced developer ecosystem, there's no time to hunt for help. That's why we have the Azure AI Developer Community. Join us today and let's journey together!

1. Join the Discord - for real-time chats, events & learning
2. Explore the Forum - for AMA recaps, Q&A, and help!

About Me: I'm Sharda, a Gold Microsoft Learn Student Ambassador interested in cloud and AI. Find me on GitHub, Dev.to, Tech Community and LinkedIn. In this blog series I have summarized my takeaways from this week's Model Mondays livestream.
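Wrapping up with the local-development angle mentioned in step 3 above: here is a minimal sketch of chatting with a locally pulled Phi-4 through Ollama's REST API. It assumes Ollama is running on its default port; the model tag may be spelled phi4 or phi-4 depending on the registry listing, so match whatever your ollama pull command used.

```python
# Minimal sketch: local chat with a Phi-4 model served by Ollama.
# Assumes Ollama is running locally on its default port (11434) and the model
# has already been pulled; adjust the model tag to match your install.
import json
import urllib.request

payload = {
    "model": "phi4",
    "messages": [
        {"role": "user", "content": "Explain inference-time scaling in two sentences."}
    ],
    "stream": False,  # return one JSON response instead of a token stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

print(body["message"]["content"])
```

Nothing leaves your machine here, which is exactly the privacy and latency story this episode highlighted.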
Model Mondays S2:E6 Understanding Research & Innovation with SeokJin Han and Saumil Shrivastava

In this week's blog post, we dive into the cutting-edge research happening at Azure AI Foundry Labs. From the MCP Server that makes it easy to experiment with new models and tools, to Magentic-UI that brings human-centered agent workflows to life, there's a lot to unpack!
Power Up Your Open WebUI with Azure AI Speech: Quick STT & TTS Integration

Introduction

Ever found yourself wishing your web interface could really talk and listen back to you? With a few clicks (and a bit of code), you can turn your plain Open WebUI into a full-on voice assistant. In this post, you'll see how to spin up an Azure Speech resource, hook it into your frontend, and watch as user speech transforms into text and your app's responses leap off the screen in a human-like voice. By the end of this guide, you'll have a voice-enabled web UI that actually converses with users, opening the door to hands-free controls, better accessibility, and a genuinely richer user experience. Ready to make your web app speak? Let's dive in.

Why Azure AI Speech?

We use the Azure AI Speech service in Open WebUI to enable voice interactions directly within web applications. This allows users to:

- Speak commands or input instead of typing, making the interface more accessible and user-friendly.
- Hear responses or information read aloud, which improves usability for people with visual impairments or those who prefer audio.
- Enjoy a more natural and hands-free experience, especially on devices like smartphones or tablets.

In short, integrating the Azure AI Speech service into Open WebUI helps make web apps smarter, more interactive, and easier to use by adding speech recognition and voice output features.

If you haven't hosted Open WebUI already, follow my other step-by-step guide to host Ollama WebUI on Azure. Proceed to the next step if you have Open WebUI deployed already. Learn more about Open WebUI here.

Deploy Azure AI Speech service in Azure

1. Navigate to the Azure Portal and search for Azure AI Speech in the portal search bar.
2. Create a new Speech Service by filling in the fields on the resource creation page.
3. Click "Create" to finalize the setup.
4. After the resource has been deployed, click the "View resource" button and you should be redirected to the Azure AI Speech service page. The page should display the API keys and endpoints for the Azure AI Speech service, which you can use in Open WebUI.

Setting things up in Open WebUI

Speech to Text settings (STT)

Head to the Open WebUI Admin page > Settings > Audio. Paste the API key obtained from the Azure AI Speech service page into the API key field. Unless you use a different Azure region or want to change the default STT configuration, leave all other settings blank.

Text to Speech settings (TTS)

Now, configure the TTS settings in Open WebUI by toggling the TTS Engine to the Azure AI Speech option. Again, paste the API key obtained from the Azure AI Speech service page and leave all other settings blank. You can change the TTS voice from the dropdown selection in the TTS settings as depicted in the image below. Click Save to reflect the change.

Expected Result

Now, let's test if everything works well. Open a new chat / temporary chat in Open WebUI and click on the Call / Record button. The STT engine (Azure AI Speech) should identify your voice and provide a response based on the voice input. To test the TTS feature, click on Read Aloud (the speaker icon) under any response from Open WebUI. The TTS engine should reflect the Azure AI Speech service! If you're curious what those settings are doing under the hood, a standalone SDK sketch follows below.
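Here is a minimal standalone sketch of the two kinds of calls Open WebUI is making on your behalf, using the azure-cognitiveservices-speech Python package. The key and region values are placeholders for the ones on your own resource page, and the voice name is just one example from the catalog.

```python
# Minimal sketch: one-shot speech-to-text and text-to-speech with Azure AI Speech.
# SPEECH_KEY / SPEECH_REGION are placeholders for the values on your resource page.
import os

import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription=os.environ["SPEECH_KEY"],
    region=os.environ["SPEECH_REGION"],  # e.g. "eastus"
)

# Speech to text: listen once on the default microphone and print the transcript.
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
print("Say something...")
stt_result = recognizer.recognize_once_async().get()
print("You said:", stt_result.text)

# Text to speech: read a response aloud through the default speaker.
speech_config.speech_synthesis_voice_name = "en-US-JennyNeural"  # example voice
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
tts_result = synthesizer.speak_text_async(f"You said: {stt_result.text}").get()

if tts_result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Playback finished.")
```

Open WebUI's STT and TTS settings wrap essentially this kind of call, which is why the only things you really need to supply are the key, the region, and a voice.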
Conclusion

And that's a wrap! You've just given your Open WebUI the gift of capturing user speech, turning it into text, and then talking right back with Azure's neural voices. Along the way you saw how easy it is to spin up a Speech resource in the Azure portal, wire up real-time transcription in the browser, and pipe responses through the TTS engine.

From here, it's all about experimentation. Try swapping in different neural voices or dialing in new languages. Tweak how you start and stop listening, play with silence detection, or add custom pronunciation tweaks for those tricky product names. Before you know it, your interface will feel less like a web page and more like a conversation partner.
Model Mondays S2E9: Models for AI Agents

1. Weekly Highlights

This episode kicked off with the top news and updates in the Azure AI ecosystem:

- GPT-5 and GPT-OSS Models Now in Azure AI Foundry: Azure AI Foundry now supports OpenAI's GPT-5 lineup (including GPT-5, GPT-5 Mini, and GPT-5 Nano) and the new open-weight GPT-OSS models (120B, 20B). These models offer powerful reasoning, real-time agent tasks, and ultra-low latency Q&A, all with massive context windows and flexible deployment via the Model Router.
- Flux 1 Context Pro & Flux 1.1 Pro from Black Forest Labs: These new vision models enable in-context image generation, editing, and style transfer, now available in the Image Playground in Azure AI Foundry.
- Browser Automation Tool (Preview): Agents can now perform real web tasks—search, navigation, form filling, and more—via natural language, accessible through API and SDK.
- GitHub Copilot Agent Mode + Playwright MCP Server: Debug UIs with AI: Copilot's agent mode now pairs with Playwright MCP Server to analyze, identify, and fix UI bugs automatically.
- Discord Community: Join the conversation, share your feedback, and connect with the product team and other developers.

2. Spotlight On: Azure AI Agent Service & Agent Catalog

This week's spotlight was on building and orchestrating multi-agent workflows using the Azure AI Agent Service and the new Agent Catalog.

What is the Azure AI Agent Service? A managed platform for building, deploying, and scaling agentic AI solutions. It supports modular, multi-agent workflows, secure authentication, and seamless integration with Azure Logic Apps, OpenAPI tools, and more.

Agent Catalog: A collection of open-source, ready-to-use agent templates and workflow samples. These include orchestrator agents, connected agents, and specialized agents for tasks like customer support, research, and more.

Demo Highlights:

- Connected Agents: Orchestrate workflows by delegating tasks to specialized sub-agents (e.g., mortgage application, market insights). A rough code sketch of this pattern follows the customer story below.
- Multi-Agent Workflows: Design complex, hierarchical agent graphs with triggers, events, and handoffs (e.g., customer support with escalation to human agents).
- Workflow Designer: Visualize and edit agent flows, transitions, and variables in a modular, no-code interface.
- Integration with Azure Logic Apps: Trigger workflows from 1400+ external services and apps.

3. Customer Story: Atomic Work

Atomic Work showcased how agentic AI can revolutionize enterprise service management, making employees more productive and ops teams more efficient.

- Problem: Traditional IT service management is slow, manual, and frustrating for both employees and ops teams.
- Solution: Atomic Work's "Atom" is a universal, multimodal agent that works across channels (Teams, browser, etc.), answers L1/L2 questions, automates requests, and proactively assists users.

Technical Highlights:

- Multimodal & Cross-Channel: Atom can guide users through web interfaces, answer questions, and automate tasks without switching tools.
- Data Ingestion & Context: Regularly ingests up-to-date documentation and context, ensuring accurate, current answers.
- Security & Integration: Built on Azure for enterprise-grade security and seamless integration with existing systems.
- Demo: Resetting passwords, troubleshooting VPN, requesting GitHub repo access—all handled by Atom, with proactive suggestions and context-aware actions. Atom can even walk users through complex UI tasks (like generating GitHub tokens) by "seeing" the user's screen and providing step-by-step guidance.
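Circling back to the spotlight, here is a rough sketch of the connected-agents pattern with the azure-ai-agents package, where a main agent delegates to a specialized sub-agent registered as a tool. The endpoint, model name, and instructions are illustrative placeholders, and the exact client setup may differ slightly depending on your SDK version.

```python
# Rough sketch: a main "triage" agent that delegates to a connected sub-agent.
# Endpoint, model, and instructions are placeholders; adapt to your own project.
import os

from azure.ai.agents import AgentsClient
from azure.ai.agents.models import ConnectedAgentTool
from azure.identity import DefaultAzureCredential

agents_client = AgentsClient(
    endpoint=os.environ["PROJECT_ENDPOINT"],  # your Azure AI Foundry project endpoint
    credential=DefaultAzureCredential(),
)

# A specialized sub-agent that only sets ticket priority.
priority_agent = agents_client.create_agent(
    model="gpt-4o-mini",
    name="priority_agent",
    instructions="Assess the support ticket and reply with High, Medium, or Low priority.",
)

# Expose the sub-agent to the main agent as a connected-agent tool.
connected_tool = ConnectedAgentTool(
    id=priority_agent.id,
    name="priority_agent",
    description="Determines the priority of a support ticket.",
)

# The main agent delegates in natural language; no custom routing code needed.
triage_agent = agents_client.create_agent(
    model="gpt-4o-mini",
    name="triage_agent",
    instructions="Triage incoming tickets. Use the priority agent to set priority.",
    tools=connected_tool.definitions,
)

print("Created:", triage_agent.id)
```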
4. Key Takeaways

Here are the key learnings from this episode:

- Agentic AI is Production-Ready: Azure AI Agent Service and the Agent Catalog make it easy to build, deploy, and scale multi-agent workflows for real-world business needs.
- Modular, No-Code Workflow Design: The workflow designer lets you visually create and edit agent graphs, triggers, and handoffs—no code required.
- Open-Source & Extensible: The Agent Catalog provides open-source templates and welcomes community contributions.
- Real-World Impact: Solutions like Atomic Work show how agentic AI can transform IT, HR, and customer support, making organizations more efficient and employees more empowered.
- Community & Support: Join the Discord and Forum to connect, ask questions, and share your own agentic AI projects.

Sharda's Tips: How I Wrote This Blog

Writing this blog is like sharing my own learning journey with friends. I start by thinking about why the topic matters and how it can help someone new to Azure or agentic AI. I use simple language, real examples from the episode, and organize my thoughts with GitHub Copilot to make sure I cover all the important points. Here's the prompt I gave Copilot to help me draft this blog:

"Generate a technical blog post for Model Mondays S2E9 based on the transcript and episode details. Focus on Azure AI Agent Service, Agent Catalog, and real-world demos. Explain the concepts for students, add a section on practical applications, and share tips for writing technical blogs. Make it clear, engaging, and useful for developers and students."

After watching the video, I felt inspired to try out these tools myself. The way the speakers explained and demonstrated everything made me believe that anyone can get started, no matter their background. My goal with this blog is to help you feel the same way—curious, confident, and ready to explore what AI and Azure can do for you. If you have questions or want to share your own experience, I'd love to hear from you.

Coming Up Next Week

Next week: Document Processing with AI! Join us as we explore how to automate document workflows using Azure AI Foundry, with live demos and expert guests.

1️⃣ | Register For The Livestream – Aug 18, 2025
2️⃣ | Register For The AMA – Aug 22, 2025
3️⃣ | Ask Questions & View Recaps – Discussion Forum

About Model Mondays

Model Mondays is a weekly series designed to help you build your Azure AI Foundry Model IQ with three elements:

- 5-Minute Highlights – Quick news and updates about Azure AI models and tools on Monday
- 15-Minute Spotlight – Deep dive into a key model, protocol, or feature on Monday
- 30-Minute AMA on Friday – Live Q&A with subject matter experts from Monday livestream

Want to get started?

- Register For Livestreams – every Monday at 1:30pm ET
- Watch Past Replays to revisit other spotlight topics
- Register For AMA – to join the next AMA on the schedule
- Recap Past AMAs – check the AMA schedule for episode specific links

Join The Community

Great devs don't build alone! In a fast-paced developer ecosystem, there's no time to hunt for help. That's why we have the Azure AI Developer Community. Join us today and let's journey together!

- Join the Discord – for real-time chats, events & learning
- Explore the Forum – for AMA recaps, Q&A, and Discussion!

About Me

I'm Sharda, a Gold Microsoft Learn Student Ambassador interested in cloud and AI. Find me on GitHub, Dev.to, Tech Community, and LinkedIn.
In this blog series, I summarize my takeaways from each week's Model Mondays livestream.
Model Mondays S2E8: On-Device & Local AI

Welcome to Episode 8! This week, we explored how AI is moving from the cloud to your own device, making it faster, more private, and more accessible. We also saw a real-world customer story from Xander Glasses, showing how AI can help people with hearing loss. Highlights from the episode:

- Observability tools in Azure AI Foundry: Real-time model telemetry, auto evals, quick evals, Python grader.
- GitHub Copilot Pro with Spark: AI pair programmer for code explanation and workflow suggestions.
- Synthetic Data for Vision Models: Training accurate models with procedurally generated data.
- Agent-Friendly Websites: Making sites accessible to AI agents via APIs, semantic markup, and OpenAPI specs.
- MCP (Model Context Protocol): Standardizing agent memory and context for scalable AI.
Multi-Agent Systems and MCP Tools Integration with Azure AI Foundry

The Power of Connected Agents: Building Multi-Agent Systems

Imagine trying to build an AI system that can handle complex workflows like managing support tickets, analyzing data from multiple sources, or providing comprehensive recommendations. Sounds challenging, right? That's where multi-agent systems come in!

The Develop a multi-agent solution with Azure AI Foundry Agent Service module introduces you to the concept of connected agents, a game-changing approach that allows you to break down complex tasks into specialized roles handled by different AI agents.

Why Connected Agents Matter

As a student developer, you might wonder why you'd need multiple agents when a single agent can handle many tasks. Here's why this approach is transformative:

1. Simplified Complexity: Instead of building one massive agent that does everything (and becomes difficult to maintain), you can create smaller, specialized agents with clearly defined responsibilities.
2. No Custom Orchestration Required: The main agent naturally delegates tasks using natural language - no need to write complex routing logic or orchestration code.
3. Better Reliability and Debugging: When something goes wrong, it's much easier to identify which specific agent is causing issues rather than debugging a monolithic system.
4. Flexibility and Extensibility: Need to add a new capability? Just create a new connected agent without modifying your main agent or other parts of the system.

How Multi-Agent Systems Work

The architecture is surprisingly straightforward:

1. A main agent acts as the orchestrator, interpreting user requests and delegating tasks
2. Connected sub-agents perform specialized functions like data retrieval, analysis, or summarization
3. Results flow back to the main agent, which compiles the final response

For example, imagine building a ticket triage system. When a new support ticket arrives, your main agent might:

- Delegate to a classifier agent to determine the ticket type
- Send the ticket to a priority-setting agent to determine urgency
- Use a team-assignment agent to route it to the right department

All this happens seamlessly without you having to write custom routing logic!

Setting Up a Multi-Agent Solution

The module walks you through the entire process:

1. Initializing the agents client
2. Creating connected agents with specialized roles
3. Registering them as tools for the main agent
4. Building the main agent that orchestrates the workflow
5. Running the complete system

Taking It Further: Integrating MCP Tools with Azure AI Agents

Once you've mastered multi-agent systems, the next level is connecting your agents to external tools and services. The Integrate MCP Tools with Azure AI Agents module teaches you how to use the Model Context Protocol (MCP) to give your agents access to a dynamic catalog of tools.

What is Dynamic Tool Discovery?

Traditionally, adding new tools to an AI agent meant hardcoding each one directly into your agent's code. But what if tools change frequently, or if different teams manage different tools? This approach quickly becomes unmanageable.

Dynamic tool discovery through MCP solves this problem by:

1. Centralizing Tool Management: Tools are defined and managed in a central MCP server
2. Enabling Runtime Discovery: Agents discover available tools during runtime through the MCP client
3. Supporting Automatic Updates: When tools are updated on the server, agents automatically get the latest versions
The MCP Server-Client Architecture

The architecture involves two key components:

1. MCP Server: Acts as a registry for tools, hosting tool definitions decorated with `@mcp.tool`. Tools are exposed over HTTP when requested.
2. MCP Client: Acts as a bridge between your MCP server and Azure AI Agent. It discovers available tools, generates Python function stubs to wrap them, and registers those functions with your agent.

This separation of concerns makes your AI solution more maintainable and adaptable to change.

Setting Up MCP Integration

The module guides you through the complete process:

1. Setting up an MCP server with tool definitions (a small sketch of what such a server can look like is included at the end of this post)
2. Creating an MCP client to connect to the server
3. Dynamically discovering available tools
4. Wrapping tools in async functions for agent use
5. Registering the tools with your Azure AI agent

Once set up, your agent can use any tool in the MCP catalog as if it were a native function, without any hardcoding required!

Practical Applications for Student Developers

As a student developer, how might you apply these concepts in real projects?

Classroom Projects:
- Build a research assistant that delegates to specialized agents for different academic subjects
- Create a coding tutor that uses different agents for explaining concepts, debugging code, and suggesting improvements

Hackathons:
- Develop a sustainability app that uses connected agents to analyze environmental data from different sources
- Create a personal finance advisor with specialized agents for budgeting, investment analysis, and financial planning

Personal Portfolio Projects:
- Build a content creation assistant with specialized agents for brainstorming, drafting, editing, and SEO optimization
- Develop a health and wellness app that uses MCP tools to connect to fitness APIs, nutrition databases, and sleep tracking services

Getting Started

Ready to dive in? Both modules include hands-on exercises where you'll build real working examples:

- A ticket triage system using connected agents
- An inventory management assistant that integrates with MCP tools

The prerequisites are straightforward:

- Experience with deploying generative AI models in Azure AI Foundry
- Programming experience with Python or C#

Conclusion

Multi-agent systems and MCP tools integration represent the next evolution in AI application development. By mastering these concepts, you'll be able to build more sophisticated, maintainable, and extensible AI solutions - skills that will make you stand out in internship applications and job interviews. The best part? These modules are designed with practical, hands-on learning in mind - perfect for student developers who learn by doing. So why not give them a try? Your future AI applications (and your resume) will thank you for it!

Want to learn more about the Model Context Protocol (MCP)? See MCP for Beginners. Happy coding!
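To make step 1 of the setup above more concrete, here is a minimal sketch of an MCP server exposing a couple of inventory tools with the Python mcp package's FastMCP helper. The tool names, the in-memory data, and the HTTP transport choice are illustrative placeholders, not the module's actual lab code.

```python
# Minimal sketch of an MCP server: a registry of tools decorated with @mcp.tool.
# Tool names and the in-memory "inventory" are illustrative, not the lab's real code.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("inventory")

INVENTORY = {"widgets": 42, "gadgets": 7}  # stand-in for a real data source

@mcp.tool()
def get_stock_level(item: str) -> int:
    """Return the current stock level for an inventory item."""
    return INVENTORY.get(item, 0)

@mcp.tool()
def restock(item: str, quantity: int) -> str:
    """Add quantity units of an item and report the new stock level."""
    INVENTORY[item] = INVENTORY.get(item, 0) + quantity
    return f"{item} now at {INVENTORY[item]} units"

if __name__ == "__main__":
    # Expose the tool catalog over HTTP so an MCP client can discover it at runtime.
    mcp.run(transport="streamable-http")
```

On the agent side, the MCP client described in the module would list these tools, wrap each one in an async Python stub, and register the stubs with your Azure AI agent, so adding a third tool here requires no changes to the agent code at all.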