Function Calling with Small Language Models
In our previous article on running Phi-4 locally, we built a web-enhanced assistant that could search the internet and provide informed answers. Here's what that implementation looked like:

```python
def web_enhanced_query(question):
    # 1. ALWAYS search (hardcoded decision)
    search_results = search_web(question)

    # 2. Inject results into prompt
    prompt = f"""Here are recent search results:
{search_results}

Question: {question}

Using only the information above, give a clear answer."""

    # 3. Model just summarizes what it reads
    return ask_phi4(endpoint, model_id, prompt)
```

Today, we're upgrading to true function calling. With it, we can transform small language models from passive text generators into intelligent agents that can:

- Decide when to use external tools
- Reason about which tool best fits each task
- Execute real-world actions through APIs

Function calling represents a significant evolution in AI capabilities. Let's understand where this positions our small language models:

Agent Classification Framework

- Simple Reflex Agents (Basic): React to immediate input with predefined rules. Example: thermostat, basic chatbot. Without function calling, models operate here.
- Model-Based Agents (Intermediate): Maintain internal state and context. Example: robot vacuum with room mapping. Function calling enables this level.
- Goal-Based Agents (Advanced): Plan multi-step sequences to achieve objectives. Example: route planner, task scheduler. Function calling + reasoning enables this.
- Learning Agents (Expert): Adapt and improve over time. Example: recommendation systems. Future: function calling + fine-tuning.

As usual with these articles, let's get ready to get our hands dirty!

Project Setup

Let's set up our environment for building function-calling assistants.

Prerequisites

First, ensure you have Foundry Local installed and a model running. We'll use Qwen 2.5-7B for this tutorial as it has excellent function calling support.

Important: Not all small language models support function calling equally. Qwen 2.5 was specifically trained for this capability and provides a reliable experience through Foundry Local.

```bash
# 1. Check Foundry Local is installed
foundry --version

# 2. Start the Foundry Local service
foundry service start

# 3. Download and run Qwen 2.5-7B
foundry model run qwen2.5-7b
```

Python Environment Setup

```bash
# 1. Create Python virtual environment
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

# 2. Install dependencies
pip install openai requests python-dotenv

# 3. Get a free OpenWeatherMap API key
# Sign up at: https://openweathermap.org/api
```

Create `.env` file:

```
OPENWEATHER_API_KEY=your_api_key_here
```

Building a Weather-Aware Assistant

In this scenario, a user wants to plan outdoor activities but needs weather context. Without function calling, you will get something like this:

User: "Should I schedule my team lunch outside at 2pm in Birmingham?"
Model: "That depends on weather conditions. Please check the forecast for rain and temperature."

However, with function calling you get an answer that is able to look up the weather and reply with the needed context. We will build that now.

Understanding Foundry Local's Function Calling Implementation

Before we start coding, there's an important implementation detail to understand. Foundry Local uses a non-standard function calling format. Instead of returning function calls in the standard OpenAI tool_calls field, Qwen models return the function call as JSON text in the response content.
For example, when you ask about weather, instead of:

```python
# Standard OpenAI format
message.tool_calls = [
    {"name": "get_weather", "arguments": {"location": "Birmingham"}}
]
```

You get:

```python
# Foundry Local format
message.content = '{"name": "get_weather", "arguments": {"location": "Birmingham"}}'
```

This means we need to parse the JSON from the content ourselves. Don't worry—this is straightforward, and I'll show you exactly how to handle it!

Step 1: Define the Weather Tool

Create weather_assistant.py:

```python
import os
from openai import OpenAI
import requests
import json
import re
from dotenv import load_dotenv

load_dotenv()

# Initialize Foundry Local client
client = OpenAI(
    base_url="http://127.0.0.1:59752/v1/",
    api_key="not-needed"
)

# Define weather tool
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather information for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city or location name"
                    },
                    "units": {
                        "type": "string",
                        "description": "Temperature units",
                        "enum": ["celsius", "fahrenheit"],
                        "default": "celsius"
                    }
                },
                "required": ["location"]
            }
        }
    }
]
```

A tool definition is necessary because it gives the model a structured specification of what external functions are available and how to use them. It contains the function name, a description, the parameters schema, and a description of what the function returns.

Step 2: Implement the Weather Function

```python
def get_weather(location: str, units: str = "celsius") -> dict:
    """Fetch weather data from OpenWeatherMap API"""
    api_key = os.getenv("OPENWEATHER_API_KEY")
    url = "http://api.openweathermap.org/data/2.5/weather"
    params = {
        "q": location,
        "appid": api_key,
        "units": "metric" if units == "celsius" else "imperial"
    }

    response = requests.get(url, params=params, timeout=5)
    response.raise_for_status()
    data = response.json()

    temp_unit = "°C" if units == "celsius" else "°F"
    return {
        "location": data["name"],
        "temperature": f"{round(data['main']['temp'])}{temp_unit}",
        "feels_like": f"{round(data['main']['feels_like'])}{temp_unit}",
        "conditions": data["weather"][0]["description"],
        "humidity": f"{data['main']['humidity']}%",
        "wind_speed": f"{round(data['wind']['speed'] * 3.6)} km/h"
    }
```

The model calls this function to get the weather data. It contacts the OpenWeatherMap API, gets real weather data, and returns it as a Python dictionary.

Step 3: Parse Function Calls from Content

This is the crucial step where we handle Foundry Local's non-standard format:

```python
def parse_function_call(content: str):
    """Extract function call JSON from model response"""
    if not content:
        return None

    json_pattern = r'\{"name":\s*"get_weather",\s*"arguments":\s*\{[^}]+\}\}'
    match = re.search(json_pattern, content)
    if match:
        try:
            return json.loads(match.group())
        except json.JSONDecodeError:
            pass

    try:
        parsed = json.loads(content.strip())
        if isinstance(parsed, dict) and "name" in parsed:
            return parsed
    except json.JSONDecodeError:
        pass

    return None
```
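Before wiring the parser into the chat loop, it helps to check it against the kinds of output you are likely to see. This is a minimal sanity check added for illustration (the sample strings are hypothetical, not captured model output); you can append it temporarily to weather_assistant.py:

```python
# Quick sanity check for parse_function_call (illustrative inputs, not real model output)
samples = [
    '{"name": "get_weather", "arguments": {"location": "Birmingham", "units": "celsius"}}',
    'Sure, let me check that. {"name": "get_weather", "arguments": {"location": "Seattle"}}',
    "It is usually mild in spring.",  # plain text, no function call
]

for text in samples:
    print(parse_function_call(text))
# Expected: a dict for the first two samples (pure JSON and JSON embedded in text), None for the last
```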
Step 4: Main Chat Function with Function Calling

And lastly, calling the model. Notice the tools and tool_choice parameters: tools tells the model it is allowed to output a tool call requesting that a function be executed, while tool_choice instructs the model how to decide whether to call a tool.

```python
def chat(user_message: str) -> str:
    """Process user message with function calling support"""
    messages = [
        {"role": "user", "content": user_message}
    ]

    response = client.chat.completions.create(
        model="qwen2.5-7b-instruct-generic-cpu:4",
        messages=messages,
        tools=tools,
        tool_choice="auto",
        temperature=0.3,
        max_tokens=500
    )

    message = response.choices[0].message

    if message.content:
        function_call = parse_function_call(message.content)

        if function_call and function_call.get("name") == "get_weather":
            print(f"\n[Function Call] {function_call.get('name')}({function_call.get('arguments')})")

            args = function_call.get("arguments", {})
            weather_data = get_weather(**args)
            print(f"[Result] {weather_data}\n")

            final_prompt = f"""User asked: "{user_message}"

Weather data: {json.dumps(weather_data, indent=2)}

Provide a natural response based on this weather information."""

            final_response = client.chat.completions.create(
                model="qwen2.5-7b-instruct-generic-cpu:4",
                messages=[{"role": "user", "content": final_prompt}],
                max_tokens=200,
                temperature=0.7
            )
            return final_response.choices[0].message.content

    return message.content
```

Step 5: Run the script

Now put all of the above together and run the script:

```python
def main():
    """Interactive weather assistant"""
    print("\nWeather Assistant")
    print("=" * 50)
    print("Ask about weather or general questions.")
    print("Type 'exit' to quit\n")

    while True:
        user_input = input("You: ").strip()

        if user_input.lower() in ['exit', 'quit']:
            print("\nGoodbye!")
            break

        if user_input:
            response = chat(user_input)
            print(f"Assistant: {response}\n")


if __name__ == "__main__":
    if not os.getenv("OPENWEATHER_API_KEY"):
        print("Error: OPENWEATHER_API_KEY not set")
        print("Set it with: export OPENWEATHER_API_KEY='your_key_here'")
        exit(1)
    main()
```

Note: Make sure Qwen 2.5 is running in Foundry Local in a separate terminal.
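Before we wrap up, it's worth seeing how this pattern grows beyond a single function. The sketch below is a hypothetical extension (the get_forecast tool and its implementation are placeholders added for illustration, not part of the assistant above): each new capability means another entry in the tools list, another Python function, and a dispatch table so the chat loop can route any parsed call to the right implementation.

```python
# Hypothetical second tool: a short forecast (illustrative only)
tools.append({
    "type": "function",
    "function": {
        "name": "get_forecast",
        "description": "Get a short weather forecast for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "The city or location name"},
                "days": {"type": "integer", "description": "Number of days (1-3)", "default": 3}
            },
            "required": ["location"]
        }
    }
})

def get_forecast(location: str, days: int = 3) -> dict:
    # Placeholder implementation - a real version would call a forecast API
    return {"location": location, "days": days, "summary": "forecast data goes here"}

# Map tool names to Python functions so the chat loop can dispatch any parsed call
TOOL_FUNCTIONS = {
    "get_weather": get_weather,
    "get_forecast": get_forecast,
}

def execute_tool(function_call: dict):
    """Look up and run whichever tool the model asked for."""
    func = TOOL_FUNCTIONS.get(function_call.get("name"))
    if func is None:
        return {"error": f"Unknown tool: {function_call.get('name')}"}
    return func(**function_call.get("arguments", {}))
```

Note that parse_function_call above matches "get_weather" specifically, so a multi-tool version would also need a more general pattern. Even with just two tools you can see the boilerplate piling up, which is exactly the pain point the next section addresses.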
Now let's talk about Model Context Protocol!

Our weather assistant works beautifully with a single function, but what happens when you need dozens of tools? Database queries, file operations, calendar integration, email—each would require similar setup code. This is where Model Context Protocol (MCP) comes in.

MCP is an open standard that provides pre-built, standardized servers for common tools. Instead of writing custom integration code for every capability, you can connect to MCP servers that handle the complexity for you. With MCP, you only need one command per server to enable weather, database, and file access:

```bash
npx @modelcontextprotocol/server-weather
npx @modelcontextprotocol/server-sqlite
npx @modelcontextprotocol/server-filesystem
```

Your model automatically discovers and uses these tools without custom integration code. Learn more: Model Context Protocol Documentation | EdgeAI Course - Module 03: MCP Integration

Key Takeaways

- Function calling transforms models into agents - from passive text generators to active problem-solvers
- Qwen 2.5 has excellent function calling support - specifically trained for reliable tool use
- Foundry Local uses a non-standard format - parse JSON from content instead of the tool_calls field
- Start simple, then scale with MCP - build one tool to understand the pattern, then leverage standards

Documentation

- Running Phi-4 Locally with Foundry Local
- Phi-4: Small Language Models That Pack a Punch
- Microsoft Foundry Local GitHub
- EdgeAI for Beginners Course
- OpenWeatherMap API Documentation
- Model Context Protocol
- Qwen 2.5 Documentation

Thank you for reading! I hope this article helps you build more capable AI agents with small language models. Function calling opens up incredible possibilities—from simple weather assistants to complex multi-tool workflows. Start with one tool, understand the pattern, and scale from there.

Unleashing the Power of Model Context Protocol (MCP): A Game-Changer in AI Integration
Artificial Intelligence is evolving rapidly, and one of the most pressing challenges is enabling AI models to interact effectively with external tools, data sources, and APIs. The Model Context Protocol (MCP) solves this problem by acting as a bridge between AI models and external services, creating a standardized communication framework that enhances tool integration, accessibility, and AI reasoning capabilities.

What is Model Context Protocol (MCP)?

MCP is a protocol designed to enable AI models, such as Azure OpenAI models, to interact seamlessly with external tools and services. Think of MCP as a universal USB-C connector for AI, allowing language models to fetch information, interact with APIs, and execute tasks beyond their built-in knowledge.

Key Features of MCP

- Standardized Communication – MCP provides a structured way for AI models to interact with various tools.
- Tool Access & Expansion – AI assistants can now utilize external tools for real-time insights.
- Secure & Scalable – Enables safe and scalable integration with enterprise applications.
- Multi-Modal Integration – Supports STDIO, SSE (Server-Sent Events), and WebSocket communication methods.

MCP Architecture & How It Works

MCP follows a client-server architecture that allows AI models to interact with external tools efficiently. Here's how it works:

Components of MCP

- MCP Host – The AI model (e.g., Azure OpenAI GPT) requesting data or actions.
- MCP Client – An intermediary service that forwards the AI model's requests to MCP servers.
- MCP Server – Lightweight applications that expose specific capabilities (APIs, databases, files, etc.).
- Data Sources – Various backend systems, including local storage, cloud databases, and external APIs.

Data Flow in MCP

1. The AI model sends a request (e.g., "fetch user profile data").
2. The MCP client forwards the request to the appropriate MCP server.
3. The MCP server retrieves the required data from a database or API.
4. The response is sent back to the AI model via the MCP client.

(A code-level sketch of this flow appears at the end of this overview.)

Integrating MCP with Azure OpenAI Services

Microsoft has integrated MCP with Azure OpenAI Services, allowing GPT models to interact with external services and fetch live data. This means AI models are no longer limited to static knowledge but can access real-time information.

Benefits of Azure OpenAI Services + MCP Integration

✔ Real-time Data Fetching – AI assistants can retrieve fresh information from APIs, databases, and internal systems.
✔ Contextual AI Responses – Enhances AI responses by providing accurate, up-to-date information.
✔ Enterprise-Ready – Secure and scalable for business applications, including finance, healthcare, and retail.

Hands-On Tools for MCP Implementation

To implement MCP effectively, Microsoft provides two powerful tools: Semantic Workbench and AI Gateway.

Microsoft Semantic Workbench

A development environment for prototyping AI-powered assistants and integrating MCP-based functionalities. Features:

- Build and test multi-agent AI assistants.
- Configure settings and interactions between AI models and external tools.
- Supports GitHub Codespaces for cloud-based development.

Explore Semantic Workbench (Workbench interface examples)

Microsoft AI Gateway

A plug-and-play interface that allows developers to experiment with MCP using Azure API Management. Features:

- Credential Manager – Securely handle API credentials.
- Live Experimentation – Test AI model interactions with external tools.
- Pre-built Labs – Hands-on learning for developers.

Explore AI Gateway
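Before moving to the hands-on setup, here is a minimal sketch of the client side of the data flow described above, using the Python `mcp` SDK that also appears in the walkthrough below. The server script name and the `get_user_profile` tool are hypothetical placeholders added for illustration; the point is only to show the discover-then-call round trip between an MCP client and server.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical MCP server script exposing a "get_user_profile" tool
server_params = StdioServerParameters(command="python", args=["profile_server.py"])

async def demo_data_flow():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # 1. Discover what the server offers
            listing = await session.list_tools()
            print("Available tools:", [tool.name for tool in listing.tools])

            # 2. Forward a request to the server and wait for the result
            result = await session.call_tool("get_user_profile", {"user_id": "12345"})

            # 3. The response travels back to the caller (and on to the model)
            print("Server returned:", result.content)

if __name__ == "__main__":
    asyncio.run(demo_data_flow())
```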
Setting Up MCP with Azure OpenAI Services

Step 1: Create a Virtual Environment

First, create a virtual environment using Python:

```bash
python -m venv .venv
```

Activate the environment:

```bash
# Windows
.venv\Scripts\activate

# MacOS/Linux
source .venv/bin/activate
```

Step 2: Install Required Libraries

Create a requirements.txt file and add the following dependencies:

```
langchain-mcp-adapters
langgraph
langchain-openai
```

Then, install the required libraries:

```bash
pip install -r requirements.txt
```

Step 3: Set Up OpenAI API Key

Ensure you have your OpenAI API key set up:

```bash
# Windows
setx OPENAI_API_KEY "<your_api_key>"

# MacOS/Linux
export OPENAI_API_KEY=<your_api_key>
```

Building an MCP Server

This server performs basic mathematical operations like addition and multiplication.

Create the Server File

First, create a new Python file:

```bash
touch math_server.py
```

Then, implement the server:

```python
from mcp.server.fastmcp import FastMCP

# Initialize the server
mcp = FastMCP("Math")

@mcp.tool()
def add(a: int, b: int) -> int:
    return a + b

@mcp.tool()
def multiply(a: int, b: int) -> int:
    return a * b

if __name__ == "__main__":
    mcp.run(transport="stdio")
```

Your MCP server is now ready to run.

Building an MCP Client

This client connects to the MCP server and interacts with it.

Create the Client File

First, create a new file:

```bash
touch client.py
```

Then, implement the client:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from langchain_mcp_adapters.tools import load_mcp_tools
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

# Define server parameters
server_params = StdioServerParameters(
    command="python",
    args=["math_server.py"],
)

# Define the model
model = ChatOpenAI(model="gpt-4o")

async def run_agent():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await load_mcp_tools(session)
            agent = create_react_agent(model, tools)
            agent_response = await agent.ainvoke({"messages": "what's (4 + 6) x 14?"})
            return agent_response["messages"][3].content

if __name__ == "__main__":
    result = asyncio.run(run_agent())
    print(result)
```

Your client is now set up and ready to interact with the MCP server.

Running the MCP Server and Client

Step 1: Start the MCP Server

Open a terminal and run:

```bash
python math_server.py
```

This starts the MCP server, making it available for client connections.

Step 2: Run the MCP Client

In another terminal, run:

```bash
python client.py
```

Expected Output

```
140
```

This means the AI agent correctly computed (4 + 6) x 14 using both the MCP server and GPT-4o.

Conclusion

Integrating MCP with Azure OpenAI Services enables AI applications to securely interact with external tools, enhancing functionality beyond text-based responses. With standardized communication and improved AI capabilities, developers can build smarter and more interactive AI-powered solutions. By following this guide, you can set up an MCP server and client, unlocking the full potential of AI with structured external interactions.

Next Steps:

- Explore more MCP tools and integrations.
- Extend your MCP setup to work with additional APIs.
- Deploy your solution in a cloud environment for broader accessibility.

For further details, visit the GitHub repository for MCP integration examples and best practices. MCP GitHub Repository | MCP Documentation | Semantic Workbench | AI Gateway | MCP Video Walkthrough | MCP Blog | MCP Github End to End Demo

Level up your Python + AI skills with our complete series
We've just wrapped up our live series on Python + AI, a comprehensive nine-part journey diving deep into how to use generative AI models from Python. The series introduced multiple types of models, including LLMs, embedding models, and vision models. We dug into popular techniques like RAG, tool calling, and structured outputs. We assessed AI quality and safety using automated evaluations and red-teaming. Finally, we developed AI agents using popular Python agents frameworks and explored the new Model Context Protocol (MCP). To help you apply what you've learned, all of our code examples work with GitHub Models, a service that provides free models to every GitHub account holder for experimentation and education. Even if you missed the live series, you can still access all the material using the links below! If you're an instructor, feel free to use the slides and code examples in your own classes. If you're a Spanish speaker, check out the Spanish version of the series. Python + AI: Large Language Models 📺 Watch recording In this session, we explore Large Language Models (LLMs), the models that power ChatGPT and GitHub Copilot. We use Python to interact with LLMs using popular packages like the OpenAI SDK and LangChain. We experiment with prompt engineering and few-shot examples to improve outputs. We also demonstrate how to build a full-stack app powered by LLMs and explain the importance of concurrency and streaming for user-facing AI apps. Slides for this session Code repository with examples: python-openai-demos Python + AI: Vector embeddings 📺 Watch recording In our second session, we dive into a different type of model: the vector embedding model. A vector embedding is a way to encode text or images as an array of floating-point numbers. Vector embeddings enable similarity search across many types of content. In this session, we explore different vector embedding models, such as the OpenAI text-embedding-3 series, through both visualizations and Python code. We compare distance metrics, use quantization to reduce vector size, and experiment with multimodal embedding models. Slides for this session Code repository with examples: vector-embedding-demos Python + AI: Retrieval Augmented Generation 📺 Watch recording In our third session, we explore one of the most popular techniques used with LLMs: Retrieval Augmented Generation. RAG is an approach that provides context to the LLM, enabling it to deliver well-grounded answers for a particular domain. The RAG approach works with many types of data sources, including CSVs, webpages, documents, and databases. In this session, we walk through RAG flows in Python, starting with a simple flow and culminating in a full-stack RAG application based on Azure AI Search. Slides for this session Code repository with examples: python-openai-demos Python + AI: Vision models 📺 Watch recording Our fourth session is all about vision models! Vision models are LLMs that can accept both text and images, such as GPT-4o and GPT-4o mini. You can use these models for image captioning, data extraction, question answering, classification, and more! We use Python to send images to vision models, build a basic chat-with-images app, and create a multimodal search engine. Slides for this session Code repository with examples: openai-chat-vision-quickstart Python + AI: Structured outputs 📺 Watch recording In our fifth session, we discover how to get LLMs to output structured responses that adhere to a schema. 
In Python, all you need to do is define a Pydantic BaseModel to get validated output that perfectly meets your needs (a minimal sketch appears at the end of this post). We focus on the structured outputs mode available in OpenAI models, but you can use similar techniques with other model providers. Our examples demonstrate the many ways you can use structured responses, such as entity extraction, classification, and agentic workflows. Slides for this session Code repository with examples: python-openai-demos Python + AI: Quality and safety 📺 Watch recording This session covers a crucial topic: how to use AI safely and how to evaluate the quality of AI outputs. There are multiple mitigation layers when working with LLMs: the model itself, a safety system on top, the prompting and context, and the application user experience. We focus on Azure tools that make it easier to deploy safe AI systems into production. We demonstrate how to configure the Azure AI Content Safety system when working with Azure AI models and how to handle errors in Python code. Then we use the Azure AI Evaluation SDK to evaluate the safety and quality of output from your LLM. Slides for this session Code repository with examples: ai-quality-safety-demos Python + AI: Tool calling 📺 Watch recording In the last third of the series, we focus on the technologies needed to build AI agents, starting with the foundation: tool calling (also known as function calling). We define tool call specifications using both JSON schema and Python function definitions, then send these definitions to the LLM. We demonstrate how to properly handle tool call responses from LLMs, enable parallel tool calling, and iterate over multiple tool calls. Understanding tool calling is absolutely essential before diving into agents, so don't skip over this foundational session. Slides for this session Code repository with examples: python-openai-demos Python + AI: Agents 📺 Watch recording In the penultimate session, we build AI agents! We use Python AI agent frameworks such as the new agent-framework from Microsoft and the popular LangGraph framework. Our agents start simple and then increase in complexity, demonstrating different architectures such as multiple tools, supervisor patterns, graphs, and human-in-the-loop workflows. Slides for this session Code repository with examples: python-ai-agent-frameworks-demos Python + AI: Model Context Protocol 📺 Watch recording In the final session, we dive into the hottest technology of 2025: MCP (Model Context Protocol). This open protocol makes it easy to extend AI agents and chatbots with custom functionality, making them more powerful and flexible. We demonstrate how to use the Python FastMCP SDK to build an MCP server running locally and consume that server from chatbots like GitHub Copilot. Then we build our own MCP client to consume the server. Finally, we discover how easy it is to connect AI agent frameworks like LangGraph and Microsoft agent-framework to MCP servers. With great power comes great responsibility, so we briefly discuss the security risks that come with MCP, both as a user and as a developer. Slides for this session Code repository with examples: python-mcp-demo
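To give a flavor of the structured outputs session described above, here is a minimal sketch of the Pydantic approach. It is an illustration added here rather than code taken from the course repository, and it assumes a recent openai Python package (where the parse helper accepts a Pydantic model as response_format) and an OpenAI-compatible endpoint with OPENAI_API_KEY set:

```python
from openai import OpenAI
from pydantic import BaseModel

# Assumes OPENAI_API_KEY is set; point base_url at any OpenAI-compatible endpoint if needed
client = OpenAI()

class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]

completion = client.beta.chat.completions.parse(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Extract the event information."},
        {"role": "user", "content": "Alice and Bob are going to a science fair on Friday."},
    ],
    response_format=CalendarEvent,  # the SDK validates the output against this schema
)

event = completion.choices[0].message.parsed
print(event.name, event.date, event.participants)
```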
Model Mondays S2E01 Recap: Advanced Reasoning Session

About Model Mondays

Want to know what Reasoning models are and how you can build advanced reasoning scenarios like a Deep Research agent using Azure AI Foundry? Check out this recap from Model Mondays Season 2 Ep 1. Model Mondays is a weekly series to help you build your model IQ in three steps:

1. Catch the 5-min Highlights on Monday, to get up to speed on model news
2. Catch the 15-min Spotlight on Monday, for a deep-dive into a model or tool
3. Catch the 30-min AMA on Friday, for a Q&A session with subject matter experts

Want to follow along?

- Register Here - to watch upcoming livestreams for Season 2
- Visit The Forum - to see the full AMA schedule for Season 2
- Register Here - to join the AMA on Friday Jun 20

Spotlight On: Advanced Reasoning

This week, the Model Mondays spotlight was on Advanced Reasoning with subject matter expert Marlene Mhangami. In this blog post, I'll talk about my five takeaways from this episode:

1. Why Are Reasoning Models Important?
2. What Is an Advanced Reasoning Scenario?
3. How Can I Get Started with Reasoning Models?
4. Spotlight: My Aha Moment
5. Highlights: What's New in Azure AI

1. Why Are Reasoning Models Important?

In today's fast-evolving AI landscape, it's no longer enough for models to just complete text or summarize content. We need AI that can:

- Understand multi-step tasks
- Make decisions based on logic
- Plan sequences of actions or queries
- Connect context across turns

Reasoning models are large language models (LLMs) trained with reinforcement learning techniques to "think" before they answer. Rather than simply generating a response based on probability, these models follow an internal thought process, producing a chain of reasoning before responding. This makes them ideal for complex problem-solving tasks. And they're the foundation of building intelligent, context-aware agents. They enable next-gen AI workflows in everything from customer support to legal research and healthcare diagnostics.

Reason: They allow AI to go beyond surface-level responses and deliver solutions that reflect understanding, not just language patterning.

2. What Does Advanced Reasoning Involve?

An advanced reasoning scenario is one where a model:

- Breaks a complex prompt into smaller steps
- Retrieves relevant external data
- Uses logic to connect the dots
- Outputs a structured, reasoned answer

Example: A user asks: What are the financial and operational risks of expanding a startup to Southeast Asia in 2025? This is the kind of question that requires extensive research and analysis. A reasoning model might tackle this by:

- Retrieving reports on Southeast Asia market conditions
- Breaking down risks into financial, political, and operational buckets
- Cross-referencing data with recent trends
- Returning a reasoned, multi-part answer

3. How Can I Get Started with Reasoning Models?

To get started, you need to visit a catalog that has examples of these models. Try the GitHub Models Marketplace and look for the reasoning category in the filter. Try the Azure AI Foundry model catalog and look for reasoning models by name. Examples:

- The o-series of models from Azure OpenAI
- The DeepSeek-R1 models
- The Grok 3 models
- The Phi-4 reasoning models

Next, you can use SDKs or the Playground for exploring the model capabilities.

1. Try Lab 331 - for a beginner-friendly guide.
2. Try Lab 333 - for an advanced project.
3. Try the GitHub Model Playground - to compare reasoning and GPT models.
4. Try the Deep Research Agent using LangChain sample - as a great starting project.

Have questions or comments?
Join the Friday AMA on Azure AI Foundry Discord.

4. Spotlight: My Aha Moment

Before this session, I thought reasoning meant longer or more detailed responses. But this session helped me realize that reasoning means structured thinking — models now plan, retrieve, and respond with logic. This inspired me to think about building AI agents that go beyond chat and actually assist users like a teammate. It also made me want to dive deeper into LangChain + Azure AI workflows to build mini-agents for real-world use.

5. Highlights: What's New in Azure AI

Here's what's new in the Azure AI Foundry:

- Direct From Azure Models - Try hosted models like OpenAI GPT on PTU plans
- SORA Video Playground - Generate video from prompts via SORA models
- Grok 3 Models - Now available for secure, scalable LLM experiences
- DeepSeek R1-0528 - A reasoning-optimized, Microsoft-tuned open-source model

These are all available in the Azure Model Catalog and can be tried with your Azure account.

Did You Know?

Your first step is to find the right model for your task. But what if you could have the model automatically selected for you, based on the prompt you provide? That's the magic of Model Router, a deployable AI chat model that dynamically selects the best LLM based on your prompt. Instead of choosing one model manually, the Router makes that choice in real time. Currently, this works with a fixed set of Azure OpenAI models, including a reasoning model option. Keep an eye on the documentation for more updates.

Why it's powerful:
- Saves cost by switching between models based on complexity
- Optimizes performance by selecting the right model for the task
- Lets you test and compare model outputs quickly

Try it out in Azure AI Foundry or read more in the Model Catalog.

Coming Up Next

Next week, we dive into Model Context Protocol, an open protocol that empowers agentic AI applications by making it easier to discover and integrate knowledge and action tools with your model choices. Register Here to get reminded - and join us live on Monday!

Join The Community

Great devs don't build alone! In a fast-paced developer ecosystem, there's no time to hunt for help. That's why we have the Azure AI Developer Community. Join us today and let's journey together!

- Join the Discord - for real-time chats, events & learning
- Explore the Forum - for AMA recaps, Q&A, and help!

About Me: I'm Sharda, a Gold Microsoft Learn Student Ambassador interested in cloud and AI. Find me on Github, Dev.to, Tech Community and Linkedin. In this blog series I have summarized my takeaways from this week's Model Mondays livestream.

Model Mondays S2:E6 Understanding Research & Innovation with SeokJin Han and Saumil Shrivastava
In this week's blog post, we dive into the cutting-edge research happening at Azure AI Foundry Labs. From the MCP Server that makes it easy to experiment with new models and tools, to Magentic-UI that brings human-centered agent workflows to life, there's a lot to unpack!

The fantastic duo: How to build your modern APIs
🧠 Core Concept

The article introduces a Chat Playground System designed to streamline AI development by managing multiple chat scenarios (e.g., technical support, creative writing) from a single dashboard.

🔧 Key Features

- Scenario-Aware Sessions: Launch pre-configured chat contexts with one click.
- Dual Access Architecture: FastAPI for RESTful web apps; MCP (Model Context Protocol) for AI tool integration.
- Streamlit Integration: Wrapped with MCP to allow seamless interaction with AI tools.
- Automatic Resource Management: Smart port allocation and process cleanup.
- Context Passing: Uses environment variables and temp JSON files to transfer session data.

🚧 Challenges & Solutions

- Bridging MCP and Streamlit: Created a wrapper to translate protocol calls and maintain session state.
- Process Management: Built an async manager to handle multiple Streamlit sessions reliably.
- Context Transfer: Developed a hybrid system for passing rich context between processes.
- User Experience: Simplified interface with real-time feedback and intuitive controls.

💡 Lessons Learned

- Innovation thrives at protocol boundaries.
- Supporting both REST and MCP broadens adoption.
- Start simple, scale gradually.
- Process lifecycle management is critical.
- Contextual awareness enhances AI utility.
- Developer experience drives product success.

🔮 Future Directions

Curious About Model Context Protocol? Dive into MCP with Us!
Global Workshops for All Skill Levels

We're hosting a series of free online workshops to introduce you to MCP—available in multiple languages and programming languages! You'll get hands-on experience building your first MCP server, guided by friendly experts ready to answer your questions. Register now: https://aka.ms/letslearnmcp

Who Should Join?

This workshop is built for:

- Students exploring tech careers
- Beginner devs eager to learn how AI agents and MCP work
- Curious coders and seasoned pros alike

If you've got some code curiosity and a laptop, you're good to go.

Workshop Schedule (English Sessions)

| Date | Tech Focus | Registration Link |
|------|------------|-------------------|
| July 9 | C# | Join Here |
| July 15 | Java | Join Here |
| July 16 | Python | Join Here |
| July 17 | C# + Visual Studio | Join Here |
| July 21 | TypeScript | Join Here |

Multilingual Sessions

We're also hosting workshops in Spanish, Portuguese, Japanese, Korean, Chinese, Vietnamese, and more! Explore different tech stacks while learning in your preferred language:

| Date | Language | Technology | Link |
|------|----------|------------|------|
| July 15 | 한국어 (Korean) | C# | Join |
| July 15 | 日本語 (Japanese) | C# | Join |
| July 17 | Español | C# | Join |
| July 18 | Tiếng Việt | C# | Join |
| July 18 | 한국어 | JavaScript | Join |
| July 22 | 한국어 | Python | Join |
| July 22 | Português | Java | Join |
| July 23 | 中文 (Chinese) | C# | Join |
| July 23 | Türkçe | C# | Join |
| July 23 | Español | JavaScript/TS | Join |
| July 23 | Português | C# | Join |
| July 24 | Deutsch | Java | Join |
| July 24 | Italiano | Python | Join |

🗓️ Save your seat: https://aka.ms/letslearnmcp

What You'll Need

Before the event starts, make sure you've got:

- Visual Studio Code set up for your language of choice
- Docker installed
- A GitHub account (you can sign up for Copilot for free!)
- A curious mindset—no MCP experience required

You can check out the MCP for Beginners course at https://aka.ms/mcp-for-beginners

What's Next? MCP Dev Days!

Once you've wrapped up the workshop, why not go deeper? MCP Dev Days is happening July 29–30, and it's packed with pro sessions from the Microsoft team and beyond. You'll explore the MCP ecosystem, learn from insiders, and connect with other learners and devs. 👉 Info and registration: https://aka.ms/mcpdevdays

Whether you're writing your first line of code or fine-tuning models like a pro, MCP is a game-changer. Come learn with us, and let's build the future together.

Multi-Agent Systems and MCP Tools Integration with Azure AI Foundry
The Power of Connected Agents: Building Multi-Agent Systems Imagine trying to build an AI system that can handle complex workflows like managing support tickets, analyzing data from multiple sources, or providing comprehensive recommendations. Sounds challenging, right? That's where multi-agent systems come in! The Develop a multi-agent solution with Azure AI Foundry Agent Services module introduces you to the concept of connected agents a game changing approach that allows you to break down complex tasks into specialized roles handled by different AI agents. Why Connected Agents Matter As a student developer, you might wonder why you'd need multiple agents when a single agent can handle many tasks. Here's why this approach is transformative: 1. Simplified Complexity: Instead of building one massive agent that does everything (and becomes difficult to maintain), you can create smaller, specialized agents with clearly defined responsibilities. 2. No Custom Orchestration Required: The main agent naturally delegates tasks using natural language - no need to write complex routing logic or orchestration code. 3. Better Reliability and Debugging: When something goes wrong, it's much easier to identify which specific agent is causing issues rather than debugging a monolithic system. 4. Flexibility and Extensibility: Need to add a new capability? Just create a new connected agent without modifying your main agent or other parts of the system. How Multi-Agent Systems Work The architecture is surprisingly straightforward: 1. A main agent acts as the orchestrator, interpreting user requests and delegating tasks 2. Connected sub-agents perform specialized functions like data retrieval, analysis, or summarization 3. Results flow back to the main agent, which compiles the final response For example, imagine building a ticket triage system. When a new support ticket arrives, your main agent might: - Delegate to a classifier agent to determine the ticket type - Send the ticket to a priority-setting agent to determine urgency - Use a team-assignment agent to route it to the right department All this happens seamlessly without you having to write custom routing logic! Setting Up a Multi-Agent Solution The module walks you through the entire process: 1. Initializing the agents client 2. Creating connected agents with specialized roles 3. Registering them as tools for the main agent 4. Building the main agent that orchestrates the workflow 5. Running the complete system Taking It Further: Integrating MCP Tools with Azure AI Agents Once you've mastered multi-agent systems, the next level is connecting your agents to external tools and services. The Integrate MCP Tools with Azure AI Agents module teaches you how to use the Model Context Protocol (MCP) to give your agents access to a dynamic catalog of tools. What is Dynamic Tool Discovery? Traditionally, adding new tools to an AI agent meant hardcoding each one directly into your agent's code. But what if tools change frequently, or if different teams manage different tools? This approach quickly becomes unmanageable. Dynamic tool discovery through MCP solves this problem by: 1. Centralizing Tool Management: Tools are defined and managed in a central MCP server 2. Enabling Runtime Discovery: Agents discover available tools during runtime through the MCP client 3. Supporting Automatic Updates: When tools are updated on the server, agents automatically get the latest versions The MCP Server-Client Architecture The architecture involves two key components: 1. 
MCP Server: Acts as a registry for tools, hosting tool definitions decorated with `@mcp.tool`. Tools are exposed over HTTP when requested. 2. MCP Client: Acts as a bridge between your MCP server and Azure AI Agent. It discovers available tools, generates Python function stubs to wrap them, and registers those functions with your agent.

This separation of concerns makes your AI solution more maintainable and adaptable to change.

Setting Up MCP Integration

The module guides you through the complete process:

1. Setting up an MCP server with tool definitions
2. Creating an MCP client to connect to the server
3. Dynamically discovering available tools
4. Wrapping tools in async functions for agent use
5. Registering the tools with your Azure AI agent

Once set up, your agent can use any tool in the MCP catalog as if it were a native function, without any hardcoding required!

Practical Applications for Student Developers

As a student developer, how might you apply these concepts in real projects?

Classroom Projects:
- Build a research assistant that delegates to specialized agents for different academic subjects
- Create a coding tutor that uses different agents for explaining concepts, debugging code, and suggesting improvements

Hackathons:
- Develop a sustainability app that uses connected agents to analyze environmental data from different sources
- Create a personal finance advisor with specialized agents for budgeting, investment analysis, and financial planning

Personal Portfolio Projects:
- Build a content creation assistant with specialized agents for brainstorming, drafting, editing, and SEO optimization
- Develop a health and wellness app that uses MCP tools to connect to fitness APIs, nutrition databases, and sleep tracking services

Getting Started

Ready to dive in? Both modules include hands-on exercises where you'll build real working examples:
- A ticket triage system using connected agents
- An inventory management assistant that integrates with MCP tools

The prerequisites are straightforward:
- Experience with deploying generative AI models in Azure AI Foundry
- Programming experience with Python or C#

Conclusion

Multi-agent systems and MCP tools integration represent the next evolution in AI application development. By mastering these concepts, you'll be able to build more sophisticated, maintainable, and extensible AI solutions - skills that will make you stand out in internship applications and job interviews. The best part? These modules are designed with practical, hands-on learning in mind - perfect for student developers who learn by doing. So why not give them a try? Your future AI applications (and your resume) will thank you for it!

Want to learn more about Model Context Protocol (MCP)? See MCP for Beginners. Happy coding!

Model Mondays S2:E2 - Understanding Model Context Protocol (MCP)
This week in Model Mondays, we focus on the Model Context Protocol (MCP) — and learn how to securely connect AI models to real-world tools and services using MCP, Azure AI Foundry, and industry-standard authorization. Read on for my recap.

About Model Mondays

Model Mondays is a weekly series designed to help you build your Azure AI Foundry Model IQ step by step. Here's how it works:

- 5-Minute Highlights – Quick news and updates about Azure AI models and tools on Monday
- 15-Minute Spotlight – Deep dive into a key model, protocol, or feature on Monday
- 30-Minute AMA on Friday – Live Q&A with subject matter experts from Monday's livestream

If you want to grow your skills with the latest in AI model development, Model Mondays is the place to start. Want to follow along?

- Register Here - to watch upcoming Model Mondays livestreams
- Watch Playlists - to replay past Model Mondays episodes
- Register Here - to join the AMA on MCP on Friday Jun 27
- Visit The Forum - to view Foundry Friday AMAs and recaps

Spotlight On: Model Context Protocol (MCP)

This week, the Model Mondays spotlight was on the Model Context Protocol (MCP) with subject matter expert Den Delimarsky. Don't forget to check out the slides from the presentation for resource links! In this blog post, I'll talk about my five key takeaways from this episode:

1. What Is MCP and Why Does It Matter?
2. What Is MCP Authorization and Why Is It Important?
3. How Can I Get Started with MCP?
4. Spotlight: My Aha Moment
5. Highlights: What's New in Azure AI

1. What Is MCP and Why Is It Important?

MCP is a protocol that standardizes how AI applications connect the underlying AI models to required knowledge sources (data) and interaction APIs (functions) for more effective task execution. Because these models are pre-trained, they lack access to real-time or proprietary data sources (for knowledge) and real-world environments (for interaction). MCP allows them to "discover and use" relevant knowledge and action tools to add relevant context to the model for task execution.

Explore: The MCP Specification
Learn: MCP For Beginners

Want to learn more about MCP? Check out the AI Engineer World Fair 2025 "MCP and Keynotes" track. It kicks off with a keynote from Asha Sharma that gives you a broader vision for Azure AI Foundry. Then look for the talk from Harald Kirschner on MCP and VS Code.

2. What Is MCP Authorization and Why Does It Matter?

MCP (Model Context Protocol) authorization is a system that helps developers manage who can access their apps, especially when they are hosted in the cloud. The goal is to simplify the process of securing these apps by using common tools like OAuth and identity providers (such as Google or GitHub), so developers don't have to be security experts.

Key Takeaways:
- The new MCP proposal uses familiar identity providers to simplify the authorization process.
- It allows developers to secure their apps without requiring deep knowledge of security.
- The update ensures better security controls and prepares the system for future authentication methods.

Related Reading:
- Aaron Parecki, Let's Fix OAuth in MCP
- Den Delimarsky, Improving The MCP Authorization Spec - One RFC At A Time
- MCP Specification, Authorization protocol draft

On Monday, Den joined us live to talk about the work he did for the authorization protocol. Watch the session now to get a sense for what the MCP Authorization protocol does, how it works, and why it matters. Have questions? Submit them to the forum or join the Foundry Friday AMA on Jun 27 at 1:30pm ET.

3. How Can I Get Started?
If you want to start working with MCP, here's how to do it easily:

- Learn the Fundamentals: Explore MCP For Beginners
- Use an MCP Server: Explore VS Code Agent Mode support
- Use MCP with AI Agents: Explore the Azure MCP Server

4. What's New in Azure AI Foundry?

- Managed Compute for Cohere Models: Faster, secure AI deployments with low latency.
- Prompt Shields: New Azure security system to protect against prompt injection and unsafe content.
- OpenAI o3 Pro Model: A fast, low-cost model similar to GPT-4 Turbo.
- Codex Mini Model: A smaller, quicker model perfect for developer command-line tasks.
- MCP Security Upgrades: Now easier to secure AI apps using familiar OAuth identity providers.

5. My Aha Moment

Before this session, I used to think that connecting apps to AI was complicated and risky. I believed developers had to build their own security systems from scratch, which sounded tough. But this week, I learned that MCP makes it simple. We can now use trusted logins like Google or GitHub and securely connect AI models to real-world apps without extra hassle.

How I Learned This?

To be honest, I also used Copilot to help me understand and summarize this topic in simple words. I wanted to make sure I really understood it well enough to explain it to my friends and peers. I believe in learning with the tools we have, and AI is one of them. By using Copilot and combining it with what I learned from the Model Mondays session, I was able to write this blog in a way that is easy to understand.

Takeaway for Beginners: It's okay to use AI to learn; what matters is that you grow, verify, and share the knowledge in your own way.

Coming Up Next Week:

Next week, we dive into SLMs & Reasoning (Phi-4) with Mojan Javaheripi, PhD, Senior Researcher at Microsoft Research. This session will explore how Small Language Models (SLMs) can perform advanced reasoning tasks, and what makes models like Phi-4 reasoning efficient, scalable, and useful in practical AI applications. Register Here!

Join The Community

Great devs don't build alone! In a fast-paced developer ecosystem, there's no time to hunt for help. That's why we have the Azure AI Developer Community. Join us today and let's journey together!

- Join the Discord - for real-time chats, events & learning
- Explore the Forum - for AMA recaps, Q&A, and help!

About Me: I'm Sharda, a Gold Microsoft Learn Student Ambassador interested in cloud and AI. Find me on Github, Dev.to, Tech Community and Linkedin. In this blog series I have summarized my takeaways from this week's Model Mondays livestream.