agents
Level up your Python + AI skills with our complete series
We've just wrapped up our live series on Python + AI, a comprehensive nine-part journey into using generative AI models from Python. The series introduced multiple types of models, including LLMs, embedding models, and vision models. We dug into popular techniques like RAG, tool calling, and structured outputs. We assessed AI quality and safety using automated evaluations and red-teaming. Finally, we developed AI agents using popular Python agent frameworks and explored the new Model Context Protocol (MCP).

To help you apply what you've learned, all of our code examples work with GitHub Models, a service that provides free models to every GitHub account holder for experimentation and education. Even if you missed the live series, you can still access all the material using the links below! If you're an instructor, feel free to use the slides and code examples in your own classes. If you're a Spanish speaker, check out the Spanish version of the series.

Python + AI: Large Language Models
Watch recording
In this session, we explore Large Language Models (LLMs), the models that power ChatGPT and GitHub Copilot. We use Python to interact with LLMs using popular packages like the OpenAI SDK and LangChain. We experiment with prompt engineering and few-shot examples to improve outputs. We also demonstrate how to build a full-stack app powered by LLMs and explain the importance of concurrency and streaming for user-facing AI apps.
Slides for this session
Code repository with examples: python-openai-demos

Python + AI: Vector embeddings
Watch recording
In our second session, we dive into a different type of model: the vector embedding model. A vector embedding is a way to encode text or images as an array of floating-point numbers. Vector embeddings enable similarity search across many types of content. In this session, we explore different vector embedding models, such as the OpenAI text-embedding-3 series, through both visualizations and Python code. We compare distance metrics, use quantization to reduce vector size, and experiment with multimodal embedding models.
Slides for this session
Code repository with examples: vector-embedding-demos

Python + AI: Retrieval Augmented Generation
Watch recording
In our third session, we explore one of the most popular techniques used with LLMs: Retrieval Augmented Generation. RAG is an approach that provides context to the LLM, enabling it to deliver well-grounded answers for a particular domain. The RAG approach works with many types of data sources, including CSVs, webpages, documents, and databases. In this session, we walk through RAG flows in Python, starting with a simple flow and culminating in a full-stack RAG application based on Azure AI Search.
Slides for this session
Code repository with examples: python-openai-demos

Python + AI: Vision models
Watch recording
Our fourth session is all about vision models! Vision models are LLMs that can accept both text and images, such as GPT-4o and GPT-4o mini. You can use these models for image captioning, data extraction, question answering, classification, and more! We use Python to send images to vision models, build a basic chat-with-images app, and create a multimodal search engine.
Slides for this session
Code repository with examples: openai-chat-vision-quickstart
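As a taste of what the vision session covers, here is a minimal sketch of sending an image to a vision model with the OpenAI SDK. The GitHub Models endpoint, the gpt-4o-mini model name, and the local file name are illustrative assumptions, not code taken from the session's repository.

```python
import base64
import os

from openai import OpenAI

# GitHub Models exposes an OpenAI-compatible endpoint; a GitHub token works as the API key.
client = OpenAI(
    base_url="https://models.inference.ai.azure.com",
    api_key=os.environ["GITHUB_TOKEN"],
)

# Encode a local image as a data URL so it can travel alongside the text prompt.
with open("receipt.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is in this image."},
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```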
Python + AI: Structured outputs
Watch recording
In our fifth session, we discover how to get LLMs to output structured responses that adhere to a schema. In Python, all you need to do is define a Pydantic BaseModel to get validated output that perfectly meets your needs. We focus on the structured outputs mode available in OpenAI models, but you can use similar techniques with other model providers. Our examples demonstrate the many ways you can use structured responses, such as entity extraction, classification, and agentic workflows.
Slides for this session
Code repository with examples: python-openai-demos

Python + AI: Quality and safety
Watch recording
This session covers a crucial topic: how to use AI safely and how to evaluate the quality of AI outputs. There are multiple mitigation layers when working with LLMs: the model itself, a safety system on top, the prompting and context, and the application user experience. We focus on Azure tools that make it easier to deploy safe AI systems into production. We demonstrate how to configure the Azure AI Content Safety system when working with Azure AI models and how to handle errors in Python code. Then we use the Azure AI Evaluation SDK to evaluate the safety and quality of output from your LLM.
Slides for this session
Code repository with examples: ai-quality-safety-demos

Python + AI: Tool calling
Watch recording
In the final stretch of the series, we focus on the technologies needed to build AI agents, starting with the foundation: tool calling (also known as function calling). We define tool call specifications using both JSON schema and Python function definitions, then send these definitions to the LLM. We demonstrate how to properly handle tool call responses from LLMs, enable parallel tool calling, and iterate over multiple tool calls. Understanding tool calling is absolutely essential before diving into agents, so don't skip over this foundational session.
Slides for this session
Code repository with examples: python-openai-demos

Python + AI: Agents
Watch recording
In the penultimate session, we build AI agents! We use Python AI agent frameworks such as the new agent-framework from Microsoft and the popular LangGraph framework. Our agents start simple and then increase in complexity, demonstrating different architectures such as multiple tools, supervisor patterns, graphs, and human-in-the-loop workflows.
Slides for this session
Code repository with examples: python-ai-agent-frameworks-demos

Python + AI: Model Context Protocol
Watch recording
In the final session, we dive into the hottest technology of 2025: MCP (Model Context Protocol). This open protocol makes it easy to extend AI agents and chatbots with custom functionality, making them more powerful and flexible. We demonstrate how to use the Python FastMCP SDK to build an MCP server running locally and consume that server from chatbots like GitHub Copilot. Then we build our own MCP client to consume the server. Finally, we discover how easy it is to connect AI agent frameworks like LangGraph and Microsoft agent-framework to MCP servers. With great power comes great responsibility, so we briefly discuss the security risks that come with MCP, both as a user and as a developer.
Slides for this session
Code repository with examples: python-mcp-demo
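To make the structured outputs claim concrete, here is a minimal sketch of the Pydantic-based approach using the OpenAI SDK's parse helper. The GitHub Models endpoint, the gpt-4o-mini model name, and the CalendarEvent schema are illustrative assumptions rather than code from the python-openai-demos repository.

```python
import os

from openai import OpenAI
from pydantic import BaseModel


class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]


client = OpenAI(
    base_url="https://models.inference.ai.azure.com",  # GitHub Models endpoint
    api_key=os.environ["GITHUB_TOKEN"],
)

# The parse helper sends the Pydantic schema as the response format and
# returns an already-validated CalendarEvent instance.
completion = client.beta.chat.completions.parse(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Extract the event information."},
        {"role": "user", "content": "Alice and Bob are going to a science fair on Friday."},
    ],
    response_format=CalendarEvent,
)
event = completion.choices[0].message.parsed
print(event.name, event.date, event.participants)
```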
Intelligent Conversations: Building Memory-Driven Bots with Azure AI and Semantic Kernel

Discover how memory-driven AI reshapes the way we interact, learn, and collaborate. On October 20th at 8pm CET, we'll explore how Semantic Kernel, Azure AI Search, and Azure OpenAI models enable bots that remember context, adapt to users, and deliver truly intelligent conversations. Join Marko Atanasov and Bojan Ivanovski as they dive into the architecture behind context-aware assistants and the future of personalized learning powered by Azure AI.

Save your seat now: https://lnkd.in/dnZSj6Pb
Model Mondays S2E12: Models & Observability

1. Weekly Highlights
This week's top news in the Azure AI ecosystem included:
- GPT Real Time (GA): Azure AI Foundry now offers GPT Real Time (GA) with lifelike voices, improved instruction following, audio fidelity, and function calling, plus support for image context and lower pricing. Read the announcement and check out the model card for more details.
- Azure AI Translator API (Public Preview): Choose between fast Neural Machine Translation (NMT) or nuanced LLM-powered translations, with real-time flexibility for multilingual workflows. Read the announcement, then check out the Azure AI Translator documentation for more details.
- Azure AI Foundry Agents Learning Plan: Build agents with autonomous goal pursuit, memory, collaboration, and deep fine-tuning (SFT, RFT, DPO) on Azure AI Foundry. Read the announcement on what agentic AI involves, then follow this comprehensive learning plan with step-by-step guidance.
- CalcLM Agent Grid (Azure AI Foundry Labs): Project CalcLM: Agent Grid is a prototype and open-source experiment that illustrates how agents might live in a grid-like surface (like Excel). It's formula-first and lightweight, defining agentic workflows like calculations. Try the prototype and visit Foundry Labs to learn more.
- Agent Factory Blog: Observability in Agentic AI: Agentic AI tools and workflows are gaining rapid adoption in the enterprise, but delivering safe, reliable, and performant agents requires foundational support for observability. Read the 6-part Agent Factory series and check out the Top 5 agent observability best practices for reliable AI blog post for more details.

2. Spotlight On: Observability in Azure AI Foundry
This week's spotlight featured a deep dive and demo by Han Che (Senior PM, Core AI, Microsoft), showing observability end-to-end for agent workflows.

Why Observability?
- Ensures AI quality, performance, and safety throughout the development lifecycle.
- Enables monitoring, root cause analysis, optimization, and governance for agents and models.

Key Features & Demos (across the development lifecycle):
- Leaderboard: Pick the best model for your agent with real-time evaluation.
- Playground: Chat and prototype agents, view instant quality and safety metrics.
- Evaluators: Assess quality, risk, safety, intent resolution, tool accuracy, code vulnerability, and custom metrics.
- Governance: Integrate with partners like Cradle AI and SideDot for policy mapping and evidence archiving.
- Red Teaming Agent: Automatically test for vulnerabilities and unsafe behavior.
- CI/CD Integration: Automate evaluation in GitHub Actions and Azure DevOps pipelines.
- Monitoring Dashboard: Resource usage, application analytics, input/output tokens, request latency, cost breakdown, and evaluation scores.
- SDKs & Local Evaluation: Run evaluations locally or in the cloud with the Azure AI Evaluation SDK.

Demo Highlights:
- Chat with a travel planning agent, view run metrics and tool usage.
- Drill into run details, debugging, and real-time safety/quality scores.
- Configure and run large-scale agent evaluations in CI/CD pipelines.
- Compare agents, review statistical analysis, and monitor in production dashboards.
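The SDK bullet above mentions running evaluations locally; as a rough illustration, here is a minimal sketch of a single local quality check with the azure-ai-evaluation package. The evaluator choice, keyword arguments, and model_config values reflect my reading of the SDK docs and are assumptions to verify against the official documentation.

```python
import os

from azure.ai.evaluation import RelevanceEvaluator

# Configuration for the judge model; these values are placeholders for your own
# Azure OpenAI endpoint, key, and chat deployment name.
model_config = {
    "azure_endpoint": os.environ["AZURE_OPENAI_ENDPOINT"],
    "api_key": os.environ["AZURE_OPENAI_API_KEY"],
    "azure_deployment": "gpt-4o-mini",
}

relevance = RelevanceEvaluator(model_config)

# Score a single query/response pair; the same evaluator can also run over a
# JSONL dataset via azure.ai.evaluation.evaluate(...) for batch or CI/CD use.
result = relevance(
    query="What can I do in Seattle on a rainy day?",
    response="Visit the Museum of Pop Culture or the Seattle Aquarium.",
)
print(result)
```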
3. Customer Story: Saifr
Saifr is a RegTech company that uses artificial intelligence to streamline compliance for marketing, communications, and creative teams in regulated industries. Incubated at Fidelity Labs (Fidelity Investments' innovation arm), Saifr helps enterprises create, review, and approve content that meets regulatory standards, faster and with less manual effort.

What Saifr Offers:
- AI-Powered Compliance: Saifr's platform leverages proprietary AI models trained on decades of regulatory expertise to automatically detect potential compliance risks in text, images, audio, and video.
- Automated Guardrails: The solution flags risky or non-compliant language, suggests compliant alternatives, and provides explanations, all in real time.
- Workflow Integration: Saifr seamlessly integrates with enterprise content creation and approval workflows, including cloud platforms and agentic AI systems like Azure AI Foundry.
- Multimodal Support: Goes beyond text to check images, videos, and audio for compliance risks, supporting modern marketing and communications teams.

4. Key Takeaways
- Observability is Essential: Azure AI Foundry offers complete monitoring, evaluation, tracing, and governance for agentic AI, making production safe, reliable, and compliant.
- Built-In Evaluation and Red Teaming: Use leaderboards, evaluators, and red teaming agents to assess and continuously improve model safety and quality.
- CI/CD and Dashboard Integration: Automate evaluations in GitHub Actions or Azure DevOps, then monitor and optimize agents in production with detailed dashboards.
- Compliance Made Easy: Saifr's agents and models help financial services and regulated industries proactively meet compliance standards for content and communications.

Sharda's Tips: How I Wrote This Blog
I focus on organizing highlights, summarizing customer stories, and linking to official Microsoft docs and real working resources. For this recap, I explored the Azure AI Foundry Observability docs, tested CI/CD pipeline integration, and watched the customer demo to share best practices for regulated industries. Here's my Copilot prompt for this episode: "Generate a technical blog post for Model Mondays S2E12 based on the transcript and episode details. Focus on observability, agent dashboards, CI/CD, compliance, and customer stories. Add correct, working Microsoft links!"

Coming Up Next Week
Next week: Open Source Models! Join us for the final episode with Hugging Face VP of Product, live demos, and open model workflows. Register For The Livestream - Sep 15, 2025

About Model Mondays
Model Mondays is your weekly Azure AI learning series:
- 5-Minute Highlights: Latest AI news and product updates
- 15-Minute Spotlight: Demos and deep dives with product teams
- 30-Minute AMA Fridays: Ask anything in Discord or the forum
Start building: Watch Past Replays | Register For AMA | Recap Past AMAs

Join The Community
Don't build alone! The Azure AI Developer Community is here for real-time chats, events, and support: Join the Discord | Explore the Forum

About Me
I'm Sharda, a Gold Microsoft Learn Student Ambassador focused on cloud and AI. Find me on GitHub, Dev.to, Tech Community, and LinkedIn. In this blog series, I share takeaways from each week's Model Mondays livestream.
Azure AI Foundry Agents - Azure AI and APIM integration

Azure Innovators Hub & Global AI Athens Community present: Azure AI Foundry Agents - all you need to know about building agents with Azure AI and APIM integration!

Live Event Highlights
Join us for an immersive, hands-on experience where we'll explore:
- Creating and managing powerful Agents using Azure AI Foundry
- Handling threads, messages, and orchestrating Agent behaviors
- Implementing robust Agentic solutions with real-world scenarios
- Leveraging ready-to-use Templates to accelerate development
- Integrating APIM for seamless and secure API connectivity

Whether you're a developer, AI enthusiast, or solution architect, you'll leave with practical skills and an end-to-end Multi-Agent Solution built during the session. Perfect for tech professionals, innovators, newcomers, and community members looking to deepen their Azure AI expertise and connect with fellow thinkers in Athens.

Join Live Event
Model Mondays S2E10: Automating Document Processing with AI

1. Weekly Highlights
We kicked off with the top news and updates in the Azure AI ecosystem:
- Agent Factory Blog Series: A new 6-part blog series on designing reliable, agentic AI, exploring multi-step, collaborative agents that reflect, plan, and adapt using tool integrations and design patterns.
- Text PII Preview in Azure AI Language: Now redacts PII (like date of birth, license plates) in major European languages, with better accuracy for UK bank entities.
- Claude Opus 4.1 in Copilot Pro & Enterprise: Public preview brings smarter summaries, tool assistant thinking, and "Ask Mode" in VS Code.
- Document processing update: Stronger computer vision algorithms for table parsing now achieve 94-97% accuracy across Latin, Chinese, Japanese, and Korean, with sub-10ms latency.
- Mistral Document AI in Azure Foundry: Instantly turn PDFs, contracts, and scanned docs into structured JSON with tables, headings, and LaTeX support. Serverless, multilingual, secure, and perfect for regulated industries.

2. Spotlight On: Document Intelligence with Azure & Mistral
This week's spotlight was a hands-on exploration of document processing, featuring both Microsoft and Mistral AI experts.

Why Document Processing?
Unstructured data - receipts, forms, handwritten notes - is everywhere. Modern document AI can extract, structure, and even annotate this data, fueling everything from search to RAG pipelines.
- Azure Document Intelligence: State-of-the-art OCR and table extraction with super-high accuracy and speed. Handles multi-language, complex layouts, and returns structured outputs ready for programmatic use.
- Mistral Document AI: Transforms PDFs and scanned docs into JSON, retaining complex formatting, tables, images, and even LaTeX. Supports custom schema extraction, image/document annotations, and returns everything in one API call. Integrates seamlessly with Azure AI Foundry and developer workflows.

Demo Highlights:
- Extracting Receipts: OCR accurately pulls out store, date, and transaction details from photos.
- Handwriting Recognition: Even historical documents (like Thomas Jefferson's letters) are parsed with surprising accuracy.
- Tables & Structured Data: Financial statements and reports converted into structured markdown and JSON, ready for downstream apps.
- Advanced Annotations: Define your own schema (via JSON Schema or Pydantic), extract custom fields, classify images, summarize documents, and even translate summaries, all in a single call.
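For a flavor of the receipt demo, here is a minimal sketch of calling the prebuilt receipt model through the azure-ai-documentintelligence package. The environment variable names, the sample URL, and the exact field access are assumptions based on my reading of the SDK samples rather than the code shown in the episode.

```python
import os

from azure.core.credentials import AzureKeyCredential
from azure.ai.documentintelligence import DocumentIntelligenceClient
from azure.ai.documentintelligence.models import AnalyzeDocumentRequest

client = DocumentIntelligenceClient(
    endpoint=os.environ["DOCUMENTINTELLIGENCE_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["DOCUMENTINTELLIGENCE_API_KEY"]),
)

# Analyze a receipt image by URL with the prebuilt receipt model.
poller = client.begin_analyze_document(
    "prebuilt-receipt",
    AnalyzeDocumentRequest(url_source="https://example.com/receipt.png"),
)
result = poller.result()

# Each extracted receipt exposes structured fields such as merchant name and total.
for receipt in result.documents:
    merchant = receipt.fields.get("MerchantName")
    total = receipt.fields.get("Total")
    if merchant:
        print("Merchant:", merchant.content)
    if total:
        print("Total:", total.content)
```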
3. Customer Story: Oracle Health
Oracle Health shared how agentic AI and fine-tuned models are revolutionizing clinical workflows:
- Problem: Clinicians spend hours on documentation, searching records, and manual data entry, reducing time for patient care.
- Solution: Oracle's clinical AI agents automate chart reviews, data extraction, and even conversational Q&A, while keeping humans in the loop for safety.

Technical Highlights:
- Multi-agent architecture understands provider specialty and context.
- Orchestrator model "routes" requests to the right agent or plugin, extracting needed arguments from context.
- Fine-tuning was key: For low latency, Oracle used lightweight models (like GPT-4 Mini) and fine-tuned on their data, achieving sub-800ms responses with accuracy matching larger models.
- Fine-tuning also allowed for nuanced tool selection, argument extraction, and rule-based orchestration, better than prompt engineering alone.
- Used LoRA for efficient, targeted fine-tuning without erasing base model knowledge.

Live Demo:
- Agent summarizes patient history, retrieves lab results, filters for abnormals, and answers follow-up questions, all conversationally.
- Fine-tuned orchestrator chooses the right tool and context for each doctor's workflow.
- Result: 1-2 hours saved per day, more time for patients, and happier doctors!

4. Key Takeaways
Here are the key learnings from this episode:
- Document AI is Production-Ready: Azure Document Intelligence and Mistral Document AI offer fast, accurate, and customizable document parsing for real enterprise needs.
- Schema-Driven Extraction & Annotation: Define your own schemas and extract exactly what you want, no more one-size-fits-all.
- Fine-Tuning Unlocks Performance: For low latency and high accuracy, fine-tuning lightweight models beats prompt engineering in complex, rule-based agent workflows.
- Agentic Workflows in Action: Multi-agent systems can automate complex tasks, route requests, and keep humans in control, especially in regulated domains like healthcare.
- Community & Support: Join the Discord and Forum to ask questions, share use cases, and connect with the team.

Sharda's Tips: How I Wrote This Blog
Writing this recap is all about sharing what I learned and making it practical for the community! I start by organizing the key highlights, then walk through customer stories and demos, using simple language and real-world examples. Copilot helps me structure and clarify my notes, especially when summarizing technical sections. Here's the prompt I used for Copilot this week: "Generate a technical blog post for Model Mondays S2E10 based on the transcript and episode details. Focus on document processing with Azure AI and Mistral, include customer demos, and highlight practical workflows and fine-tuning. Make it clear and approachable for developers and students." Every episode inspires me to try these tools myself, and I hope this blog makes it easy for you to start, too. If you have questions or want to share your own experience, I'd love to hear from you!

Coming Up Next Week
Next week: Text & Speech AI Playgrounds! Learn how to build and test language and speech models, with live demos and expert guests.
- Register For The Livestream - Aug 25, 2025
- Register For The AMA - Aug 29, 2025
- Ask Questions & View Recaps - Discussion Forum

About Model Mondays
Model Mondays is a weekly series to build your Azure AI IQ with:
- 5-Minute Highlights: News & updates on Mondays
- 15-Minute Spotlight: Deep dives into new features, models, and protocols
- 30-Minute AMA Fridays: Live Q&A with product teams and experts
Get started: Register For Livestreams | Watch Past Replays | Register For AMA | Recap Past AMAs

Join The Community
Don't build alone! Join the Azure AI Developer Community for real-time chats, events, support, and more: Join the Discord | Explore the Forum

About Me
I'm Sharda, a Gold Microsoft Learn Student Ambassador focused on cloud and AI. Find me on GitHub, Dev.to, Tech Community, and LinkedIn. In this blog series, I share takeaways from each week's Model Mondays livestream.
Microsoft 365 Champion community call | July 2025 | AM

Join our next community call on July 22, 2025, to learn more about IT management controls and measurement for Copilot and agents, as well as People Skills and the Skills agent.

Host: Tiffany Lee
Guests: Samer Baroudi, Anirudh Bajaj
Moderator: Jessie Hwang

NOTE: Our community call format has changed to using Teams webinars to enable more dynamic discussions! The join link is still the same, but you must register to be able to join the call when it starts: https://aka.ms/M365ChampionCallAM

Each call includes an open Q&A discussion section, where you'll have a chance to ask your questions about Microsoft 365.

Join the Microsoft 365 Champion program today! Champions combine technical acumen with people skills to drive meaningful change. Our community calls are open to everyone, but only Champions have access to the presentation resources (access link in the initial welcome email and in the monthly newsletters). Join now: https://aka.ms/M365Champions.

Note: If you are unable to watch the recording on YouTube, try watching it here.
Microsoft 365 Champion community call | July 2025 | PM

Join our next community call on July 22, 2025, to learn more about IT management controls and measurement for Copilot and agents, as well as People Skills and the Skills agent.

Host: Tiffany Lee
Guests: Samer Baroudi, Anirudh Bajaj
Moderator: Jessie Hwang

NOTE: Our community call format has changed to using Teams webinars to enable more dynamic discussions! The join link is still the same, but you must register to be able to join the call when it starts: https://aka.ms/M365ChampionCallPM

Each call includes an open Q&A discussion section, where you'll have a chance to ask your questions about Microsoft 365.

Join the Microsoft 365 Champion program today! Champions combine technical acumen with people skills to drive meaningful change. Our community calls are open to everyone, but only Champions have access to the presentation resources (access link in the initial welcome email and in the monthly newsletters). Join now: https://aka.ms/M365Champions.

Note: If you are unable to watch the recording on YouTube, try watching it here.
CampusSphere: Building the Future of Campus AI with Microsoft's Agentic Framework

Project Overview
We are a team of Imperial College students committed to improving campus life through innovative multi-agent solutions. CampusSphere leverages Microsoft Azure AI capabilities to automate core university campus services. We created an end-to-end solution that allows both students and staff to access a multi-agent framework for room/gym booking, attendance tracking, calendar management, IoT monitoring, and more.

Our Initial Vision: Reimagining Campus Technology
When our team at Imperial College London embarked on the CampusSphere project as part of Microsoft's Agentic Campus initiative, we had one clear ambition: to create an intelligent campus ecosystem that would fundamentally change how students, faculty, and staff interact with university services. The inspiration came from a simple observation: despite living in an age of advanced AI, campus technology remained frustratingly fragmented. Students juggled multiple portals for course registration, room booking, dining services, and academic support. Faculty members navigated separate systems for teaching, research, and administrative tasks. The result? Countless hours wasted on mundane navigation tasks that could be better spent on learning, teaching, and innovation.

Our vision was ambitious: create a single, intelligent interface that could understand natural language, anticipate user needs, and seamlessly integrate with existing campus infrastructure. We didn't just want to build another campus app; we wanted to demonstrate how Microsoft's agentic AI technologies could create a truly intelligent campus companion.

Enter CampusSphere
CampusSphere is an intelligent campus assistant made up of multiple AI agents, each with a specific domain of expertise, all communicating seamlessly through a centralized architecture. Think of it as a digital concierge for campus life, where your calendar, attendance, IoT data, and facility bookings are coordinated by specialized GPT-powered agents. Here's what we built:
- TriageAgent - the brain of the system, using Retrieval-Augmented Generation (RAG) to understand user intent
- CalendarAgent - handles scheduling, bookings, and reminders
- AttendanceAgent - tracks check-ins automatically
- IoTAgent - monitors real-time sensor data from classrooms and labs
- GymAgent - manages access and reservations for sports facilities
- 30+ MCP Tools - perform SQL queries, scrape web data, and connect with external APIs
All of this is built on Microsoft Azure AI, Semantic Kernel, and the Model Context Protocol (MCP), making it scalable, secure, and lightning fast.

The Tech Stack
Our Azure-powered architecture showcases a modular and scalable approach to real-time data processing and intelligent agent coordination. The frontend is built using React with a Vite development server, providing a fast and responsive user interface. When users submit a prompt, it travels to a Flask backend server acting as the Triage agent, which intelligently delegates tasks to a FastAPI agent service. This FastAPI service asynchronously communicates with individual agents and handles responses efficiently. Complex queries are routed to MCP Tools, which interact with the CosmosDB-powered Campus Database. Simultaneously, real-time synthetic IoT data is pushed into the database via Azure Function Apps and Azure IoT Hub.
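To illustrate the hand-off described above (a Flask triage endpoint delegating prompts to a FastAPI agent service), here is a minimal sketch. Every route, port, and payload field is a hypothetical placeholder; none of it is taken from the CampusSphere repository.

```python
# A minimal Flask "triage" endpoint that forwards a user prompt to a separate
# FastAPI agent service and relays the result back to the frontend.
import httpx
from flask import Flask, jsonify, request

app = Flask(__name__)
AGENT_SERVICE_URL = "http://localhost:8001/agents/run"  # hypothetical FastAPI endpoint


@app.post("/api/prompt")
def triage():
    prompt = request.get_json().get("prompt", "")
    # Delegate the prompt to the agent service; in a real system this is where
    # intent classification would decide which agent(s) to involve.
    result = httpx.post(AGENT_SERVICE_URL, json={"prompt": prompt}, timeout=60.0)
    return jsonify(result.json())


if __name__ == "__main__":
    app.run(port=8000)
```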
Authentication is securely managed: users log in through the frontend, receive a token from the database API server, and use it for authorized access to MCP services, with permissions enforced based on user roles using our custom MCP server implementation. This robust architecture enables seamless integration, real-time data flow, and secure multi-agent collaboration across Azure services.

Our system leverages a multi-agent architecture designed to intelligently coordinate task execution across specialized services. At the core is the TriageAgent, which uses Retrieval-Augmented Generation (RAG) to interpret user prompts, enrich them with relevant context, and determine the optimal response path. Based on the nature of the request, it may handle the response directly, seek clarification, or delegate tasks to specific agents via FastAPI. Each specialized agent has a clearly defined role:
- AttendanceAgent: Interfaces with CosmosDB-backed FastAPI endpoints to check student attendance, using filters like event name, student ID, or date.
- IoTAgent: Monitors room conditions (e.g., temperature, CO2 levels) and flags anomalies using real-time data from Azure IoT Hub, processed via FastAPI.
- CalendarAgent: Handles scheduling, availability checks, and event creation by querying or updating CosmosDB through FastAPI. Future integration with the Microsoft Graph API is planned for direct calendar syncing.
- Gym Slot Agent: Checks available times for gym sessions using dedicated MCP tools.

The triage agent serves as the orchestrator, breaking down complex requests (like "Book a gym session") into subtasks. It consults relevant agents (e.g., calendar and gym slot agents), merges results, and then confirms the final action with the user. This distributed and asynchronous workflow reduces backend load and enhances both the responsiveness and reliability of the system.

What's Next?
Integrating CampusSphere with live systems via Microsoft OAuth is crucial for enhancing its capabilities. This integration will grant the agent authenticated access to a wider range of student data, moving beyond synthetic datasets. This expanded access to real-world information will enable deeply personalized advice, such as tailored course selection, scholarship recommendations, event suggestions, and deadline reminders, transforming CampusSphere into a sophisticated, proactive personal assistant.

Meet the Team Behind CampusSphere
Our success stemmed from a diverse team of innovators who brought together expertise from multiple domains:
- Benny Liu - https://www.linkedin.com/in/zong-benny-liu-393a4621b/
- Lucas Ng - https://www.linkedin.com/in/lucas-ng-11b317203/
- Lu Ju - https://www.linkedin.com/in/lu-ju/
- Bruno Duaso - https://www.linkedin.com/in/bruno-duaso-jimeno-744464262/
- Martim Coutinho - https://www.linkedin.com/in/martim-pereira-coutinho-116308233/
- Krischad Pourpongpan - https://www.linkedin.com/in/krischadpua/
- Yixu Pan - https://www.linkedin.com/in/yixu-pan/
Our collaborative approach enabled us to create a sophisticated agentic AI system that demonstrates the powerful potential of Microsoft's AI technologies in educational environments.

Project Repository: GitHub - Imperial-Microsoft-Agentic-Campus/CampusSphere

Have questions about implementing similar solutions at your institution?
Connect with our team members on LinkedIn; we're always excited to share knowledge and collaborate on innovative campus technology projects.

Get Started with Microsoft's AI Tools
Ready to explore the technologies that made CampusSphere possible? Here are essential resources:
- Microsoft Semantic Kernel: The core framework for building AI agent orchestration systems. Learn how to create, coordinate, and manage multiple AI agents working together seamlessly.
- AI Agents for Beginners: A comprehensive guide to understanding and building AI agents from the ground up. Perfect for getting started with agentic AI development.
- Model Context Protocol (MCP): Learn about the protocol that enables secure connections between AI models and external tools and services, essential for building integrated AI systems.
- Windows AI Toolkit: Microsoft's toolkit for developing AI applications on Windows, providing local AI model development capabilities and deployment tools.
- Azure Container Apps: Understand how to deploy and scale containerized AI applications in the cloud, perfect for hosting multi-agent systems.
- Azure Cosmos DB Security: Essential security practices for managing data in AI applications, covering encryption, access control, and compliance.
Multi-Agent Systems and MCP Tools Integration with Azure AI Foundry

The Power of Connected Agents: Building Multi-Agent Systems
Imagine trying to build an AI system that can handle complex workflows like managing support tickets, analyzing data from multiple sources, or providing comprehensive recommendations. Sounds challenging, right? That's where multi-agent systems come in! The Develop a multi-agent solution with Azure AI Foundry Agent Service module introduces you to the concept of connected agents, a game-changing approach that allows you to break down complex tasks into specialized roles handled by different AI agents.

Why Connected Agents Matter
As a student developer, you might wonder why you'd need multiple agents when a single agent can handle many tasks. Here's why this approach is transformative:
1. Simplified Complexity: Instead of building one massive agent that does everything (and becomes difficult to maintain), you can create smaller, specialized agents with clearly defined responsibilities.
2. No Custom Orchestration Required: The main agent naturally delegates tasks using natural language - no need to write complex routing logic or orchestration code.
3. Better Reliability and Debugging: When something goes wrong, it's much easier to identify which specific agent is causing issues rather than debugging a monolithic system.
4. Flexibility and Extensibility: Need to add a new capability? Just create a new connected agent without modifying your main agent or other parts of the system.

How Multi-Agent Systems Work
The architecture is surprisingly straightforward:
1. A main agent acts as the orchestrator, interpreting user requests and delegating tasks
2. Connected sub-agents perform specialized functions like data retrieval, analysis, or summarization
3. Results flow back to the main agent, which compiles the final response

For example, imagine building a ticket triage system. When a new support ticket arrives, your main agent might:
- Delegate to a classifier agent to determine the ticket type
- Send the ticket to a priority-setting agent to determine urgency
- Use a team-assignment agent to route it to the right department
All this happens seamlessly without you having to write custom routing logic!

Setting Up a Multi-Agent Solution
The module walks you through the entire process (a hedged sketch follows this list):
1. Initializing the agents client
2. Creating connected agents with specialized roles
3. Registering them as tools for the main agent
4. Building the main agent that orchestrates the workflow
5. Running the complete system
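Putting those five steps together, here is a minimal sketch of what the connected-agents pattern can look like with the azure-ai-agents package. The environment variable names, the single "priority" sub-agent, and its instructions are illustrative assumptions, and the class and method names reflect my reading of the SDK, so treat the module's exercise as the authoritative version.

```python
import os

from azure.ai.agents import AgentsClient
from azure.ai.agents.models import ConnectedAgentTool, ListSortOrder, MessageRole
from azure.identity import DefaultAzureCredential

agents_client = AgentsClient(
    endpoint=os.environ["PROJECT_ENDPOINT"],  # Azure AI Foundry project endpoint
    credential=DefaultAzureCredential(),
)
model = os.environ["MODEL_DEPLOYMENT_NAME"]

with agents_client:
    # Steps 1-2: create a specialized sub-agent with one narrow responsibility.
    priority_agent = agents_client.create_agent(
        model=model,
        name="priority_agent",
        instructions="Assess how urgent a support ticket is and reply with High, Medium, or Low.",
    )

    # Step 3: wrap the sub-agent as a connected-agent tool the main agent can delegate to.
    priority_tool = ConnectedAgentTool(
        id=priority_agent.id,
        name="priority_agent",
        description="Assesses the priority of a support ticket",
    )

    # Step 4: build the main (orchestrator) agent and register the connected agent as a tool.
    triage_agent = agents_client.create_agent(
        model=model,
        name="triage_agent",
        instructions="Triage support tickets. Use the connected tools to assess priority.",
        tools=priority_tool.definitions,
    )

    # Step 5: run the complete system on a sample ticket.
    thread = agents_client.threads.create()
    agents_client.messages.create(
        thread_id=thread.id,
        role=MessageRole.USER,
        content="Users can't log in to the portal after the latest update.",
    )
    run = agents_client.runs.create_and_process(thread_id=thread.id, agent_id=triage_agent.id)
    print("Run status:", run.status)

    for message in agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING):
        if message.text_messages:
            print(f"{message.role}: {message.text_messages[-1].text.value}")
```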
Taking It Further: Integrating MCP Tools with Azure AI Agents
Once you've mastered multi-agent systems, the next level is connecting your agents to external tools and services. The Integrate MCP Tools with Azure AI Agents module teaches you how to use the Model Context Protocol (MCP) to give your agents access to a dynamic catalog of tools.

What is Dynamic Tool Discovery?
Traditionally, adding new tools to an AI agent meant hardcoding each one directly into your agent's code. But what if tools change frequently, or if different teams manage different tools? This approach quickly becomes unmanageable. Dynamic tool discovery through MCP solves this problem by:
1. Centralizing Tool Management: Tools are defined and managed in a central MCP server
2. Enabling Runtime Discovery: Agents discover available tools during runtime through the MCP client
3. Supporting Automatic Updates: When tools are updated on the server, agents automatically get the latest versions

The MCP Server-Client Architecture
The architecture involves two key components:
1. MCP Server: Acts as a registry for tools, hosting tool definitions decorated with `@mcp.tool`. Tools are exposed over HTTP when requested.
2. MCP Client: Acts as a bridge between your MCP server and Azure AI agent. It discovers available tools, generates Python function stubs to wrap them, and registers those functions with your agent.
This separation of concerns makes your AI solution more maintainable and adaptable to change.

Setting Up MCP Integration
The module guides you through the complete process (a sketch of the server side follows):
1. Setting up an MCP server with tool definitions
2. Creating an MCP client to connect to the server
3. Dynamically discovering available tools
4. Wrapping tools in async functions for agent use
5. Registering the tools with your Azure AI agent
Once set up, your agent can use any tool in the MCP catalog as if it were a native function, without any hardcoding required!
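As a rough sketch of the server side of that process, here is a tiny MCP server. The import assumes the standalone FastMCP package, and the inventory tool is a made-up placeholder rather than the module's actual exercise code; an MCP client would discover and call this tool at runtime.

```python
# server.py - a minimal MCP server sketch
from fastmcp import FastMCP

mcp = FastMCP("inventory-tools")


@mcp.tool()
def get_stock_level(item_name: str) -> int:
    """Return the number of units in stock for the given item."""
    stock = {"laptop": 12, "monitor": 4, "keyboard": 27}  # stand-in for a real data source
    return stock.get(item_name.lower(), 0)


if __name__ == "__main__":
    # Runs over stdio by default; an HTTP transport can be selected for remote clients.
    mcp.run()
```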
Practical Applications for Student Developers
As a student developer, how might you apply these concepts in real projects?
Classroom Projects:
- Build a research assistant that delegates to specialized agents for different academic subjects
- Create a coding tutor that uses different agents for explaining concepts, debugging code, and suggesting improvements
Hackathons:
- Develop a sustainability app that uses connected agents to analyze environmental data from different sources
- Create a personal finance advisor with specialized agents for budgeting, investment analysis, and financial planning
Personal Portfolio Projects:
- Build a content creation assistant with specialized agents for brainstorming, drafting, editing, and SEO optimization
- Develop a health and wellness app that uses MCP tools to connect to fitness APIs, nutrition databases, and sleep tracking services

Getting Started
Ready to dive in? Both modules include hands-on exercises where you'll build real working examples:
- A ticket triage system using connected agents
- An inventory management assistant that integrates with MCP tools
The prerequisites are straightforward:
- Experience with deploying generative AI models in Azure AI Foundry
- Programming experience with Python or C#

Conclusion
Multi-agent systems and MCP tools integration represent the next evolution in AI application development. By mastering these concepts, you'll be able to build more sophisticated, maintainable, and extensible AI solutions - skills that will make you stand out in internship applications and job interviews. The best part? These modules are designed with practical, hands-on learning in mind - perfect for student developers who learn by doing. So why not give them a try? Your future AI applications (and your resume) will thank you for it!
Want to learn more about the Model Context Protocol (MCP)? See MCP for Beginners.
Happy coding!

Upcoming July 2025 Microsoft 365 Champion Community Call

Join our next community call on July 22, 2025, to learn more about IT management controls and measurement for Copilot and agents, as well as People Skills and the Skills agent. Our community calls are now in the Teams webinar format, which means you must register to ensure you will be able to join the call when it starts. An on-demand recording will still be available on our Driving Adoption > Events pages, as well as on our Microsoft Community Learning YouTube channel. The calls will still start at 5 minutes past the hour for both sessions (at 8:05 AM and 5:05 PM PT), and they will still end at the top of the hour (9:00 AM and 6:00 PM PT, respectively). The registration links for both sessions are below. Once you register, you will receive an email confirmation and calendar invite with the event join link.
- https://aka.ms/M365ChampionCallAM
- https://aka.ms/M365ChampionCallPM
While our calls are open to everyone, you must be a member of the Microsoft 365 Champion Program in order to access the presentation materials - the access link is in the initial welcome email and the monthly newsletter emails sent the week before the community calls. If you have not yet joined our Champion community, sign up here to get access to the monthly newsletters, calendar invites, and program assets (e.g., the presentations).