Educator Developer Blog
Streamlining Campus Life: A Multi-Agent System for Campus Event Management

junjiewan
Sep 23, 2025

Introduction

Managing campus events has long been a complex, manual process fraught with challenges. Traditional event management systems offer limited automation, placing a considerable workload on staff for tasks ranging from resource allocation to participant communication. This procedural friction presents a clear opportunity to build a more intelligent solution, leveraging the emerging paradigm of AI agents. To solve these challenges, I developed and evaluated a multi-agent system designed to automate the campus event workflow and improve productivity. In this blog, I’ll share the journey of building this system, detailing its architecture and how I leveraged the Semantic Kernel and Azure Services to create a team of specialized agents. 

Background

My name is Junjie Wan, and I’m an MSc student in Applied Computational Science and Engineering at Imperial College London. This research project, in collaboration with Microsoft, explores the development of a multi-agent solution for managing a university campus. The system's focus is on automating the event management workflow using Microsoft Azure AI Agent services. I would like to thank my supervisor, Lee Stott, for his guidance and mentorship during this project.

Methodology: Building the Agentic System

The Model Context Protocol (MCP) and Backend Integration

For agents to perform their duties effectively, they need access to a powerful set of tools. The system's backend is a high-performance API built with FastAPI, with Azure Cosmos DB serving as the scalable data store. To make these API functions usable by the agents, they are wrapped as tools using Semantic Kernel’s kernel_function decorator. These tools contain the necessary functions to utilize both the internal API and various Azure Services. The setup for making these tools accessible is straightforward: we first instantiate a central Kernel object, add the defined tools as plugins, and then convert this populated Kernel into a runnable MCP server. This approach creates an extensible system where new tools can be added as services without requiring changes to the agents themselves.

System Architecture

Frontend Implementation with Streamlit

To build a Python-based frontend powered by the AI features, I chose Streamlit for rapid prototyping. The frontend implements role-based access control, with different interfaces for admins, staff, and students. The system includes a dashboard, form-based pages, and a conversational chat interface as the primary entry point for the multi-agent system. To enhance the user experience, it supports multi-modal input through voice integration, which uses OpenAI Whisper for accurate speech-to-text transcription and the OpenAI TTS model in Azure AI Foundry for voice playback.

Chat Interface

Individual Agent Design

The system distributes responsibilities across a team of specialized agents, each targeting a specific operational aspect of event management. Each agent is initialized as a ChatCompletionAgent with OpenAI’s Model Router and MCP plugins. Here are some of the agents implemented to improve the event management process.

To address the operational challenge of manually reconciling room availability and event requirements, the system utilizes a Planning Agent and a Schedule Agent. The Planning Agent serves as the central coordinator, gathering event specifications from the user. It can even leverage the Azure Maps Weather service to provide organizers with weather forecasts that may influence venue selection. It then delegates to the Schedule Agent, which is responsible for generating conflict-free timetable entries by querying our FastAPI backend for real-time availability data stored in the database. This workflow directly replaces the error-prone manual process and prevents scheduling conflicts.

For financial planning, the Budget Agent functions as the system's dedicated financial analyst, designed to solve the problem of inaccurate cost estimation. When tasked with a budget, it first retrieves the event context from Cosmos DB. To ground its responses in verifiable data, the agent utilizes a Retrieval-Augmented Generation (RAG) pipeline built on Azure AI Search. This allows it to search internal documents, such as catering menus, for pricing information. If items are not found internally, the agent uses the Grounding with Bing Search tool to gather real-time market data, ensuring estimations are both accurate and current.

To automate the manual, time-consuming process of participant communication, the Communication Agent handles all interactions. It drafts personalized emails populated with event details retrieved from the database. The agent is equipped with a tool that directly interfaces with Azure Communication Service to send emails programmatically. This automates the communication workflow, from sending initial invitations with Microsoft Forms links for registration to distributing post-event feedback surveys, saving significant administrative effort.

Multi-Agent Collaboration

For collaboration between agents, I chose the AgentGroupChat pattern within Semantic Kernel. While orchestration patterns like sequential or handoff are suitable for linear tasks and dynamic delegation between agents, the multi-domain nature of event management required a more flexible approach. The group chat pattern allows for both structured sequential handoffs and dynamic contributions from other agents as needed.

Group Chat Design

The orchestration logic is governed by two dynamic, LLM-driven functions:

  1. Selection Function: This acts as a dynamic router, analyzing the conversation's context to determine the most appropriate agent to speak in the next round. It performs intent recognition for initial queries and routes subsequent turns based on the ongoing workflow.
  2. Termination Function: This function prevents infinite loops and ensures the system remains responsive. It evaluates each agent's turn to decide whether the conversation should conclude or if a clear handoff warrants its continuation, maintaining coherent system behavior.

Evaluation Framework and Performance

To evaluate whether the system could reliably execute domain-specific workflows, I used the LLM-as-a-Judge framework through the Azure AI Evaluation SDK, which provides a scalable and consistent assessment of agent performance.

Group Chat Performance Radar Chart

The evaluation focused on three main categories of metrics to get a holistic view of the system:

  • Functional Correctness: I used metrics such as IntentResolution, TaskAdherence, and ToolCallAccuracy to assess whether the agents correctly understood user requests, followed instructions, and called the appropriate tools with the correct parameters.
  • Response Quality: Metrics like Fluency, Coherence, Relevance, and Response Completeness were used to evaluate the linguistic quality of the agents' responses.
  • Operational Metrics: To assess the practical viability of the system, I also measured key operational metrics, including latency and token usage for each task.

The results confirmed the system's strong performance on the functional metrics, consistently exceeding the pass threshold of 3.0. This demonstrates that the agentic architecture can successfully decompose and execute event management tasks with high precision. In contrast, the linguistic metrics were lower, highlighting a potential trade-off: the multi-agent system prioritizes functional execution over conversational flow. The operational metrics also provided valuable insights into system behavior:

Response Time by Tag

Token vs Tool Call

  • Latency: The data showed that simpler queries, such as reading information, were consistently fast. However, complex, multi-step tasks exhibited significantly longer and more variable response times. This pattern reflected the expected accumulation of latency across multiple agent handoffs and tool calls within the agentic workflow.
  • Token usage: Analysis revealed a strong positive correlation between the number of tool calls and total token consumption, indicating that workflow complexity directly impacted computational cost. The baseline token usage for simple queries was high, largely due to the tool definitions injected into the context by the MCP server. Agents relying on RAG pipelines, like the Budget Agent, consumed notably more tokens due to the retrieved context chunks included in their prompts.

Limitations and Future Work

Despite its strong performance, the system has several limitations:

  • The system relies on carefully engineered prompts, making it less flexible when facing unexpected queries.
  • Multi-turn coordination between agents and the use of MCP servers results in high token consumption, raising concerns about efficiency and scalability in production deployments.
  • The system was tested with synthetic data and a relatively small set of test queries, which may not reflect the complexity of real-world scenarios.

Future work will focus on:

  • Enhancing error handling and recovery mechanisms within the group chat orchestration
  • Improving conversational quality while reducing token consumption
  • Deploying the agent system on a server for broader access and collecting user feedback
  • Testing the system with real-world data and conducting formal user studies

Conclusion

This project demonstrates that a multi-agent system, built on the integrated power of Microsoft Azure services, can offer an efficient solution for campus event management. By dividing the labor among specialized agents and enabling them with a powerful toolkit, we can automate complex workflows and reduce administrative burden. This work serves as a proof-of-concept that shows how agentic approaches can deliver more intelligent and streamlined solutions that improve the quality of events and the student experience.

Thank you for reading! If you have any questions or would like to discuss this work further, please feel free to contact me via email or on LinkedIn.

Updated Sep 19, 2025
Version 1.0