Edge AI for Beginners: Getting Started with Foundry Local
In Module 08 of the EdgeAI for Beginners course, Microsoft introduces Foundry Local, a toolkit that helps you deploy and test Small Language Models (SLMs) completely offline. In this blog, I'll share how I installed Foundry Local, ran the Phi-3.5-mini model on my Windows laptop, and what I learned through the process.

What Is Foundry Local?
Foundry Local allows developers to run AI models locally on their own hardware. It supports text generation, summarization, and code completion — all without sending data to the cloud. Unlike cloud-based systems, everything happens on your computer, so your data never leaves your device.

Prerequisites
Before starting, make sure you have:
Windows 10 or 11
Python 3.10 or newer
Git
Internet connection (for the first-time model download)
Foundry Local installed

Step 1 — Verify Installation
After installing Foundry Local, open Command Prompt and type:
foundry --version
If you see a version number, Foundry Local is installed correctly.

Step 2 — Start the Service
Start the Foundry Local service using:
foundry service start
You should see a confirmation message that the service is running.

Step 3 — List Available Models
To view the models supported by your system, run:
foundry model list
You'll get a list of the SLMs available on your machine. Note: Model availability depends on your device's hardware. For most laptops, phi-3.5-mini works smoothly on CPU.

Step 4 — Run the Phi-3.5 Model
Now let's start chatting with the model:
foundry model run phi-3.5-mini-instruct-generic-cpu:1
Once it loads, you'll enter an interactive chat mode. Try a simple prompt:
Hello! What can you do?
The model replies instantly — right from your laptop, no cloud needed. To exit, type:
/exit

How It Works
Foundry Local loads the model weights from your device and performs inference locally. This means text generation happens using your CPU (or GPU, if available). The result: complete privacy, no internet dependency, and instant responses. For a programmatic view of this, see the Python sketch at the end of this article.

Benefits for Students
For students beginning their journey in AI, Foundry Local offers several key advantages:
No need for high-end GPUs or expensive cloud subscriptions.
Easy setup for experimenting with multiple models.
Perfect for class assignments, AI workshops, and offline learning sessions.
Promotes a deeper understanding of model behavior by allowing step-by-step local interaction.
These factors make Foundry Local a practical choice for learning environments, especially in universities and research institutions where accessibility and affordability are important.

Why Use Foundry Local
Running models locally offers several practical benefits compared to using AI Foundry in the cloud. With Foundry Local, you do not need an internet connection, and all computations happen on your personal machine. This makes it faster for small models and more private, since your data never leaves your device. In contrast, AI Foundry runs entirely in the cloud, requiring internet access and charging based on usage. For students and developers, Foundry Local is ideal for quick experiments, offline testing, and understanding how models behave in real time. On the other hand, AI Foundry is better suited for large-scale or production-level scenarios where models need to be deployed at scale. In summary, Foundry Local provides a flexible and affordable environment for hands-on learning, especially when working with smaller models such as Phi-3, Qwen2.5, or TinyLlama.
It allows you to experiment freely, learn efficiently, and better understand the fundamentals of Edge AI development.

Optional: Restart Later
Next time you open your laptop, you don't have to reinstall anything. Just run these two commands again:
foundry service start
foundry model run phi-3.5-mini-instruct-generic-cpu:1

What I Learned
Following the EdgeAI for Beginners Study Guide helped me understand:
How edge AI applications work
How small models like Phi-3.5 can run on a local machine
How to test prompts and build chat apps with zero cloud usage

Conclusion
Running the Phi-3.5-mini model locally with Foundry Local gave me hands-on insight into edge AI. It's an easy, private, and cost-free way to explore generative AI development. If you're new to Edge AI, start with the EdgeAI for Beginners course and follow its Study Guide to get comfortable with local inference and small language models.

Resources:
EdgeAI for Beginners GitHub Repo
Foundry Local Official Site
Phi Model Link
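As promised above, here is a minimal Python sketch of talking to Foundry Local programmatically. It relies on Foundry Local exposing an OpenAI-compatible endpoint on your machine; the port and model alias below are assumptions, so check the output of foundry service status and foundry model list for your actual values.

import openai

# Minimal sketch: chat with a locally served model through Foundry Local's
# OpenAI-compatible endpoint. No cloud call is made; inference runs on-device.
client = openai.OpenAI(
    base_url="http://localhost:5273/v1",  # assumed port; verify with `foundry service status`
    api_key="not-needed-for-local",       # local inference does not require a real key
)

response = client.chat.completions.create(
    model="phi-3.5-mini-instruct-generic-cpu",  # assumed alias; verify with `foundry model list`
    messages=[{"role": "user", "content": "Hello! What can you do?"}],
)
print(response.choices[0].message.content)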
Level up your Python + AI skills with our complete series

We've just wrapped up our live series on Python + AI, a comprehensive nine-part journey diving deep into how to use generative AI models from Python. The series introduced multiple types of models, including LLMs, embedding models, and vision models. We dug into popular techniques like RAG, tool calling, and structured outputs. We assessed AI quality and safety using automated evaluations and red-teaming. Finally, we developed AI agents using popular Python agent frameworks and explored the new Model Context Protocol (MCP).

To help you apply what you've learned, all of our code examples work with GitHub Models, a service that provides free models to every GitHub account holder for experimentation and education. Even if you missed the live series, you can still access all the material using the links below! If you're an instructor, feel free to use the slides and code examples in your own classes. If you're a Spanish speaker, check out the Spanish version of the series.

Python + AI: Large Language Models
📺 Watch recording
In this session, we explore Large Language Models (LLMs), the models that power ChatGPT and GitHub Copilot. We use Python to interact with LLMs using popular packages like the OpenAI SDK and LangChain. We experiment with prompt engineering and few-shot examples to improve outputs. We also demonstrate how to build a full-stack app powered by LLMs and explain the importance of concurrency and streaming for user-facing AI apps.
Slides for this session
Code repository with examples: python-openai-demos

Python + AI: Vector embeddings
📺 Watch recording
In our second session, we dive into a different type of model: the vector embedding model. A vector embedding is a way to encode text or images as an array of floating-point numbers. Vector embeddings enable similarity search across many types of content. In this session, we explore different vector embedding models, such as the OpenAI text-embedding-3 series, through both visualizations and Python code. We compare distance metrics, use quantization to reduce vector size, and experiment with multimodal embedding models.
Slides for this session
Code repository with examples: vector-embedding-demos

Python + AI: Retrieval Augmented Generation
📺 Watch recording
In our third session, we explore one of the most popular techniques used with LLMs: Retrieval Augmented Generation. RAG is an approach that provides context to the LLM, enabling it to deliver well-grounded answers for a particular domain. The RAG approach works with many types of data sources, including CSVs, webpages, documents, and databases. In this session, we walk through RAG flows in Python, starting with a simple flow and culminating in a full-stack RAG application based on Azure AI Search.
Slides for this session
Code repository with examples: python-openai-demos

Python + AI: Vision models
📺 Watch recording
Our fourth session is all about vision models! Vision models are LLMs that can accept both text and images, such as GPT-4o and GPT-4o mini. You can use these models for image captioning, data extraction, question answering, classification, and more! We use Python to send images to vision models, build a basic chat-with-images app, and create a multimodal search engine.
Slides for this session
Code repository with examples: openai-chat-vision-quickstart

Python + AI: Structured outputs
📺 Watch recording
In our fifth session, we discover how to get LLMs to output structured responses that adhere to a schema. In Python, all you need to do is define a Pydantic BaseModel to get validated output that perfectly meets your needs. We focus on the structured outputs mode available in OpenAI models, but you can use similar techniques with other model providers. Our examples demonstrate the many ways you can use structured responses, such as entity extraction, classification, and agentic workflows. A minimal sketch follows below.
Slides for this session
Code repository with examples: python-openai-demos
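To make the structured outputs idea concrete, here is a minimal sketch using the OpenAI Python SDK's parse helper. The schema, prompt, and model name are illustrative assumptions, not code taken from the series repository.

from pydantic import BaseModel
from openai import OpenAI

class CalendarEvent(BaseModel):
    # Illustrative schema: the SDK converts this into a JSON schema for the model.
    name: str
    date: str
    participants: list[str]

# Assumes OPENAI_API_KEY is set; for GitHub Models, point base_url at its endpoint.
client = OpenAI()

completion = client.beta.chat.completions.parse(
    model="gpt-4o-mini",  # assumed model; any structured-outputs-capable model works
    messages=[{"role": "user", "content": "Alice and Bob are demoing RAG on Friday."}],
    response_format=CalendarEvent,
)
print(completion.choices[0].message.parsed)  # a validated CalendarEvent instance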
Python + AI: Quality and safety
📺 Watch recording
This session covers a crucial topic: how to use AI safely and how to evaluate the quality of AI outputs. There are multiple mitigation layers when working with LLMs: the model itself, a safety system on top, the prompting and context, and the application user experience. We focus on Azure tools that make it easier to deploy safe AI systems into production. We demonstrate how to configure the Azure AI Content Safety system when working with Azure AI models and how to handle errors in Python code. Then we use the Azure AI Evaluation SDK to evaluate the safety and quality of output from your LLM.
Slides for this session
Code repository with examples: ai-quality-safety-demos

Python + AI: Tool calling
📺 Watch recording
In the seventh session, we begin the final arc of the series: the technologies needed to build AI agents, starting with their foundation, tool calling (also known as function calling). We define tool call specifications using both JSON schema and Python function definitions, then send these definitions to the LLM. We demonstrate how to properly handle tool call responses from LLMs, enable parallel tool calling, and iterate over multiple tool calls. Understanding tool calling is absolutely essential before diving into agents, so don't skip over this foundational session.
Slides for this session
Code repository with examples: python-openai-demos

Python + AI: Agents
📺 Watch recording
In the penultimate session, we build AI agents! We use Python AI agent frameworks such as the new agent-framework from Microsoft and the popular LangGraph framework. Our agents start simple and then increase in complexity, demonstrating different architectures such as multiple tools, supervisor patterns, graphs, and human-in-the-loop workflows.
Slides for this session
Code repository with examples: python-ai-agent-frameworks-demos

Python + AI: Model Context Protocol
📺 Watch recording
In the final session, we dive into the hottest technology of 2025: MCP (Model Context Protocol). This open protocol makes it easy to extend AI agents and chatbots with custom functionality, making them more powerful and flexible. We demonstrate how to use the Python FastMCP SDK to build an MCP server running locally and consume that server from chatbots like GitHub Copilot. Then we build our own MCP client to consume the server. Finally, we discover how easy it is to connect AI agent frameworks like LangGraph and Microsoft agent-framework to MCP servers. With great power comes great responsibility, so we briefly discuss the security risks that come with MCP, both as a user and as a developer. A minimal server sketch follows below.
Slides for this session
Code repository with examples: python-mcp-demo
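To give a flavor of the MCP session, here is a hedged, minimal sketch of a local MCP server using the FastMCP API from the official Python MCP SDK. The tool itself is an invented example, not one from the series:

from mcp.server.fastmcp import FastMCP

# Minimal MCP server exposing one tool. Clients such as GitHub Copilot
# (or your own MCP client) can discover and call this tool.
mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers and return the sum."""
    return a + b

if __name__ == "__main__":
    mcp.run()  # defaults to stdio transport, suitable for local clients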
Redeeming Azure for Students from your GitHub Student Pack when you do not have an Academic Email

GitHub Student Developer Pack
Learn to ship software like a pro. There's no substitute for hands-on experience. But for most students, real-world tools can be cost-prohibitive. That's why we created the GitHub Student Developer Pack with some of our partners and friends.
Sign up for the Student Developer Pack
AI Career Navigator — Empowering Job Seekers with Azure OpenAI

AI Career Navigator is more than just a project — it's a mission to make career growth accessible, intelligent, and human. Powered by Azure OpenAI, it transforms uncertainty into direction and effort into achievement.
Author: Aryan Jaiswal — Gold Microsoft Learn Student Ambassador
Reviewer: Julia Muiruri (Microsoft)
Getting Started with AI Agents: A Student Developer's Guide to the Microsoft Agent Framework

AI agents are becoming the backbone of modern applications, from personal assistants to autonomous research bots. If you're a student developer curious about building intelligent, goal-driven agents, Microsoft's newly released Agent Framework is your launchpad. In this post, we'll break down what the framework offers, how to get started, and why it's a game-changer for learners and builders alike.

What Is the Microsoft Agent Framework?
The Microsoft Agent Framework is a modular, open-source toolkit designed to help developers build, orchestrate, and evaluate AI agents with minimal friction. It's part of the AI Agents for Beginners curriculum, which walks you through foundational concepts using reproducible examples. At its core, the framework helps you:
Define agent goals and capabilities
Manage memory and context
Route tasks through tools and APIs
Evaluate agent performance with traceable metrics
Whether you're building a research assistant, a coding helper, or a multi-agent system, this framework gives you the scaffolding to do it right.

What's Inside the Framework?
Here's a quick look at the key components and the purpose of each:
AgentRuntime: Manages agent lifecycle, memory, and tool routing
AgentConfig: Defines agent goals, tools, and memory settings
Tool Interface: Lets you plug in custom tools (e.g., web search, code execution)
MemoryProvider: Supports semantic memory and context-aware responses
Evaluator: Tracks agent performance and goal completion
The framework is built with Python and .NET and designed to be extensible, perfect for experimentation and learning.

Try It: Your First Agent in 10 Minutes
Here's a simplified walkthrough to get you started:
1. Clone the repo: git clone https://github.com/microsoft/ai-agents-for-beginners
2. Open the sample: cd ai-agents-for-beginners/14-microsoft-agent-framework
3. Install dependencies: pip install -r requirements.txt
4. Run the sample agent: python main.py
You'll see a basic agent that can answer questions using a web search tool and maintain context across turns. From here, you can customize its goals, memory, and tools.

Why Student Developers Should Care
Modular Design: Learn how real-world agents are structured—from memory to evaluation.
Reproducible Workflows: Build agents that can be debugged, traced, and improved over time.
Open Source: Contribute, fork, and remix with your own ideas.
Community-Ready: Perfect for hackathons, research projects, or portfolio demos.
Plus, it aligns with Microsoft's best practices for agent governance, making it a solid foundation for enterprise-grade development.

Why Learn?
Here are a few ideas to take your learning further:
Build a custom tool (e.g., a calculator or code interpreter); see the sketch at the end of this post
Swap in a different memory provider (like a vector DB)
Create an evaluation pipeline for multi-agent collaboration
Use it in a class project or student-led workshop
Join the Microsoft Azure AI Foundry Discord (https://aka.ms/Foundry/discord) to share your project and build your AI engineer and developer connections. Star and fork the AI Agents for Beginners repo for updates and new modules.

Final Thoughts
The Microsoft Agent Framework isn't just another library; it's a teaching tool, a playground, and a launchpad for the next generation of AI builders. If you're a student developer, this is your chance to learn by doing, contribute to the community, and shape the future of agentic systems. So fire up your terminal, fork the repo, and start building.
Your first agent is just a few lines of code away.
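As promised above, here is a minimal, framework-agnostic sketch of the "custom tool" idea: a plain Python function plus the JSON-schema description that agent frameworks typically expect when registering tools. All names here are illustrative assumptions, not part of the Agent Framework's API.

import json

def calculator(expression: str) -> str:
    """Evaluate a basic arithmetic expression and return the result as text."""
    # eval() is acceptable for a demo; an empty builtins dict limits it to arithmetic.
    return str(eval(expression, {"__builtins__": {}}, {}))

# JSON-schema style tool description, the shape most frameworks use for registration.
calculator_spec = {
    "name": "calculator",
    "description": "Evaluate a basic arithmetic expression, e.g. '2 * (3 + 4)'.",
    "parameters": {
        "type": "object",
        "properties": {"expression": {"type": "string"}},
        "required": ["expression"],
    },
}

if __name__ == "__main__":
    print(json.dumps(calculator_spec, indent=2))
    print(calculator("2 * (3 + 4)"))  # prints 14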
Foundry Fridays: Your Gateway to Azure AI Discovery

🎓 What Is Foundry Fridays?
Every Friday at 1:30 PM ET, join the Azure AI Foundry Discord Community (https://aka.ms/model-mondays/discord) for a 30-minute live Ask Me Anything (AMA) session. It's your chance to connect with the experts behind Azure AI—Principal PMs, researchers, and engineers—who are building the tools you'll use in classrooms, hackathons, and real-world projects. Whether you're experimenting with model fine-tuning, curious about local inference, or diving into agentic workflows and open-source tooling, this is where your questions get answered live and unscripted.

💡 Why Students & Educators Should Join
Direct Access to Experts: Ask your questions live and get real-time insights from the people building Azure AI.
Weekly Themes That Matter: From model routing and MCP registries to SAMBA architectures, AI agents, Model Router, and deployment templates, each week brings a new topic to explore.
Community-Led Conversations: Hosted by leaders like Nitya Narasimhan and Lee Stott, these sessions are interactive, inclusive, and designed to spotlight your questions.
No Slides, Just Substance: Skip the lectures—this is about real talk, real tech, and real learning.

📚 Bonus Learning: Model Mondays
Want even more? Catch up on the Model Mondays series on demand at https://aka.ms/model-mondays and get ready for Season 3, streaming every Monday at 1:30 PM ET.

🚀 How to Join
Join the Discord: https://aka.ms/model-mondays/discord
Find the AMA: Check the #community-calls and #model-mondays channels or look for pinned events.
Ask Anything: Bring your questions, ideas, or just listen in. No registration needed.

💬 Final Thoughts
Whether you're coding your first AI project, mentoring students, or researching the next big thing, listen in, ask the experts your questions, and hear from the wider community. Foundry Fridays is your space to learn, connect, and grow. So grab your headphones, jump into Discord, and let's shape the future of AI—together.
🗓️ Fridays | 1:30 PM ET
📍 Azure AI Foundry Discord
🔗 https://aka.ms/model-mondays/discord
Streamlining Campus Life: A Multi-Agent System for Campus Event Management

Introduction
Managing campus events has long been a complex, manual process fraught with challenges. Traditional event management systems offer limited automation, placing a considerable workload on staff for tasks ranging from resource allocation to participant communication. This procedural friction presents a clear opportunity to build a more intelligent solution, leveraging the emerging paradigm of AI agents. To solve these challenges, I developed and evaluated a multi-agent system designed to automate the campus event workflow and improve productivity. In this blog, I'll share the journey of building this system, detailing its architecture and how I leveraged Semantic Kernel and Azure services to create a team of specialized agents.

Background
My name is Junjie Wan, and I'm an MSc student in Applied Computational Science and Engineering at Imperial College London. This research project, in collaboration with Microsoft, explores the development of a multi-agent solution for managing a university campus. The system's focus is on automating the event management workflow using Microsoft Azure AI Agent services. I would like to thank my supervisor, Lee Stott, for his guidance and mentorship during this project.

Methodology: Building the Agentic System
The Model Context Protocol (MCP) and Backend Integration
For agents to perform their duties effectively, they need access to a powerful set of tools. The system's backend is a high-performance API built with FastAPI, with Azure Cosmos DB serving as the scalable data store. To make these API functions usable by the agents, they are wrapped as tools using Semantic Kernel's kernel_function decorator. These tools contain the necessary functions to utilize both the internal API and various Azure services. The setup for making these tools accessible is straightforward: we first instantiate a central Kernel object, add the defined tools as plugins, and then convert this populated Kernel into a runnable MCP server (a hedged sketch of this pattern appears at the end of this post). This approach creates an extensible system where new tools can be added as services without requiring changes to the agents themselves.

Frontend Implementation with Streamlit
To build a frontend powered by AI features and based on Python, I chose Streamlit for rapid prototyping. The frontend implements role-based access control, with different interfaces for admin, staff, and students. The system includes a dashboard, form-based pages, and a conversational chat interface as the primary entry point for the multi-agent system. To enhance user experience, it supports multi-modal input through voice integration, which uses OpenAI Whisper for accurate speech-to-text transcription and the OpenAI TTS model in Azure AI Foundry for voice playback.

Individual Agent Design
The system distributes responsibilities across a team of specialized agents, each targeting a specific operational aspect of event management. Each agent is initialized as a ChatCompletionAgent with OpenAI's Model Router and MCP plugins. Here are some of the agents implemented to improve the event management process.
To address the operational challenge of manually reconciling room availability and event requirements, the system utilizes a Planning Agent and a Schedule Agent. The Planning Agent serves as the central coordinator, gathering event specifications from the user. It can even leverage the Azure Maps Weather service to provide organizers with weather forecasts that may influence venue selection.
It then delegates to the Schedule Agent, which is responsible for generating conflict-free timetable entries by querying our FastAPI backend for real-time availability data stored in the database. This workflow directly replaces the error-prone manual process and prevents scheduling conflicts.
For financial planning, the Budget Agent functions as the system's dedicated financial analyst, designed to solve the problem of inaccurate cost estimation. When tasked with a budget, it first retrieves the event context from Cosmos DB. To ground its responses in verifiable data, the agent utilizes a Retrieval-Augmented Generation (RAG) pipeline built on Azure AI Search. This allows it to search internal documents, such as catering menus, for pricing information. If items are not found internally, the agent uses the Grounding with Bing Search tool to gather real-time market data, ensuring estimations are both accurate and current.
To automate the manual, time-consuming process of participant communication, the Communication Agent handles all interactions. It drafts personalized emails populated with event details retrieved from the database. The agent is equipped with a tool that directly interfaces with Azure Communication Services to send emails programmatically. This automates the communication workflow, from sending initial invitations with Microsoft Forms links for registration to distributing post-event feedback surveys, saving significant administrative effort.

Multi-Agent Collaboration
For collaboration between agents, I chose the AgentGroupChat pattern within Semantic Kernel. While orchestration patterns like sequential or handoff are suitable for linear tasks and dynamic delegation between agents, the multi-domain nature of event management required a more flexible approach. The group chat pattern allows for both structured sequential handoffs and dynamic contributions from other agents as needed.

Group Chat Design
The orchestration logic is governed by two dynamic, LLM-driven functions:
Selection Function: This acts as a dynamic router, analyzing the conversation's context to determine the most appropriate agent to speak in the next round. It performs intent recognition for initial queries and routes subsequent turns based on the ongoing workflow.
Termination Function: This function prevents infinite loops and ensures the system remains responsive. It evaluates each agent's turn to decide whether the conversation should conclude or if a clear handoff warrants its continuation, maintaining coherent system behavior.

Evaluation Framework and Performance
To evaluate whether the system could reliably execute domain-specific workflows, I used the LLM-as-a-Judge approach through the Azure AI Evaluation SDK, which provides a scalable and consistent assessment of agent performance.
[Figure: Group chat performance radar chart]
The evaluation focused on three main categories of metrics to get a holistic view of the system:
Functional Correctness: I used metrics such as IntentResolution, TaskAdherence, and ToolCallAccuracy to assess whether the agents correctly understood user requests, followed instructions, and called the appropriate tools with the correct parameters.
Response Quality: Metrics like Fluency, Coherence, Relevance, and Response Completeness were used to evaluate the linguistic quality of the agents' responses.
Operational Metrics: To assess the practical viability of the system, I also measured key operational metrics, including latency and token usage for each task.
The results confirmed the system's strong performance, consistently exceeding the pass threshold of 3.0. This demonstrates that the agentic architecture can successfully decompose and execute event management tasks with high precision. In contrast, linguistic metrics were lower, highlighting a potential trade-off in which our multi-agent system prioritized functionality over conversational flow. The operational metrics also provided valuable insights into system behavior:
[Figures: Response time by tag; token usage vs. tool calls]
Latency: The data showed that simpler queries, such as reading information, were consistently fast. However, complex, multi-step tasks exhibited significantly longer and more variable response times. This pattern reflected the expected accumulation of latency across multiple agent handoffs and tool calls within the agentic workflow.
Token usage: Analysis revealed a strong positive correlation between the number of tool calls and total token consumption, indicating that workflow complexity directly impacted computational cost. The baseline token usage for simple queries is high, largely due to the context of tool definitions injected by the MCP server. Agents relying on RAG pipelines, like the Budget Agent, notably consumed more tokens due to the inclusion of retrieved context chunks in their prompts.

Limitations and Future Work
Despite the good performance, the system has several limitations:
The system relies on carefully engineered prompts, making it less flexible when facing unexpected queries.
Multi-turn coordination between agents and the use of MCP servers results in high token consumption, raising concerns about efficiency and scalability in production deployments.
The system was tested with synthetic data and a relatively small set of test queries, which may not reflect the complexity of real-world scenarios.
Future work will focus on:
Enhancing error handling and recovery mechanisms within the group chat orchestration
Improving conversational quality while reducing token consumption
Deploying the agent system on a server for broader access and collecting user feedback
Testing the system with real-world data and conducting formal user studies

Conclusion
This project demonstrates that a multi-agent system, built on the integrated power of Microsoft Azure services, can offer an efficient solution for campus event management. By dividing the labor among specialized agents and enabling them with a powerful toolkit, we can automate complex workflows and reduce administrative burden. This work serves as a proof of concept showing how agentic approaches can deliver more intelligent and streamlined solutions that improve the quality of events and the student experience.
Thank you for reading! If you have any questions or would like to discuss this work further, please feel free to contact me via email or on LinkedIn.
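As referenced in the backend section above, here is a hedged sketch of wrapping a backend call as a Semantic Kernel tool and registering it as a plugin. The tool body is a stub and all names are illustrative; the MCP-server conversion helper is only noted in a comment, since its exact name may vary across Semantic Kernel versions.

from typing import Annotated
from semantic_kernel import Kernel
from semantic_kernel.functions import kernel_function

class EventTools:
    @kernel_function(description="Check whether a room is free at a given hour.")
    def check_room_availability(
        self,
        room: Annotated[str, "Room identifier, e.g. 'LT-101'"],
        hour: Annotated[int, "Hour of day, 0-23"],
    ) -> Annotated[str, "Availability message"]:
        # Stub: a real implementation would query the FastAPI backend / Cosmos DB.
        return f"Room {room} is free at {hour}:00."

kernel = Kernel()
kernel.add_plugin(EventTools(), plugin_name="event_tools")

# The post then converts the populated Kernel into a runnable MCP server,
# e.g. server = kernel.as_mcp_server(server_name="campus_tools")
# (helper name assumed from recent SK releases; verify against your version).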
Model Mondays S2E13: Open Source Models (Hugging Face)

1. Weekly Highlights
Here are the key updates we covered in the Season 2 finale:
O1 Mini Reinforcement Fine-Tuning (GA): Fine-tune models with as few as ~100 samples using built-in Python code graders.
Azure Live Interpreter API (Preview): Real-time speech-to-speech translation supporting 76 input languages and 143 locales with near human-level latency.
Agent Factory – Part 5: Connecting agents using open standards like MCP (Model Context Protocol) and A2A (Agent-to-Agent protocol).
Ask Ralph by Ralph Lauren: A retail example of agentic AI for conversational styling assistance, built on Azure OpenAI and Foundry's agentic toolset.
VS Code August Release: Brings auto-model selection, stronger safety guards for sensitive edits, and improved agent workflows through new agents.md support.

2. Spotlight – Open Source Models in Azure AI Foundry
Guest: Jeff Boudier, VP of Product at Hugging Face
Jeff showcased the deep integration between the Hugging Face community and Azure AI Foundry, where developers can access over 10,000 open-source models across multiple modalities—LLMs, speech recognition, computer vision, and even specialized domains like protein modeling and robotics.
Demo Highlights:
Discover models through Azure AI Foundry's task-based catalog filters.
Deploy directly from the Hugging Face Hub to Azure with one-click deployment.
Explore use cases such as multilingual speech recognition and vision-language-action models for robotics.
Jeff also highlighted notable models, including:
SmolLM3 – a 3B-parameter model with hybrid reasoning capabilities
Qwen 3 Coder – a mixture-of-experts model optimized for coding tasks
Parakeet ASR – multilingual speech recognition
The Microsoft Research protein-modeling collection
MAGMA – a vision-language-action model for robotics
Integration extends beyond deployment to programmatic access through the Azure CLI and Python SDKs, plus local development via new VS Code extensions.

3. Customer Story – DraftWise (BUILD 2025 Segment)
The finale featured a customer spotlight on DraftWise, where CEO James Ding shared how the company accelerates contract drafting with Azure AI Foundry.
Problem: Legal contract drafting is time-consuming and error-prone.
Solution: DraftWise uses Azure AI Foundry to fine-tune Hugging Face language models on legal data, generating contract drafts and redline suggestions.
Impact:
Faster drafting cycles and higher consistency
Easy model management and deployment with Foundry's secure workflows
Transparent evaluation for legal compliance

4. Community Story – Hugging Face & Microsoft
The episode also celebrated the ongoing collaboration between Hugging Face and Microsoft and the impact of open-source AI on the global developer ecosystem.
Community Benefits:
Access to state-of-the-art models without licensing barriers
Transparent performance through public leaderboards and benchmarks
Rapid innovation as improvements and bug fixes spread quickly
Education and empowerment via tutorials, docs, and active forums
Responsible AI practices encouraged through community oversight

5. Key Takeaways
Open Source AI Is Here to Stay: Azure AI Foundry and Hugging Face make deploying, fine-tuning, and benchmarking open models easier than ever.
Community Drives Innovation: Collaboration accelerates progress, improves transparency, and makes AI accessible to everyone.
Responsible AI and Transparency: Open-source models come with clear documentation, licensing, and community-driven best practices.
Easy Deployment & Customization: Azure AI Foundry lets you deploy, automate, and customize open models from a single, unified platform.
Learn, Build, Share: The open-model ecosystem is a great place for students, developers, and researchers to learn, build, and share their work.

Sharda's Tips: How I Wrote This Blog
For this final recap, I focused on capturing the energy of the open-source AI movement and the practical impact of the Hugging Face and Azure AI Foundry collaboration. I watched the livestream, took notes on the demos and interviews, and linked directly to official resources for models, docs, and community sites. Here's my Copilot prompt for this episode:
"Generate a technical blog post for Model Mondays S2E13 based on the transcript and episode details. Focus on open source models, Hugging Face, Azure AI Foundry, and community workflows. Include practical links and actionable insights for developers and students!"

Learn & Connect
Explore Open Models in Azure AI Foundry
Hugging Face Leaderboard
Responsible AI in Azure Machine Learning
Llama-3 by Meta
Hugging Face Community
Azure AI Documentation

About Model Mondays
Model Mondays is your weekly Azure AI learning series:
5-Minute Highlights: Latest AI news and product updates
15-Minute Spotlight: Demos and deep dives with product teams
30-Minute AMA Fridays: Ask anything in Discord or the forum
Start building:
Watch Past Replays
Register For AMA
Recap Past AMAs
Join The Community
Don't build alone! The Azure AI Developer Community is here for real-time chats, events, and support:
Join the Discord
Explore the Forum

About Me
I'm Sharda, a Gold Microsoft Learn Student Ambassador focused on cloud and AI. Find me on GitHub, Dev.to, Tech Community, and LinkedIn. In this blog series, I share takeaways from each week's Model Mondays livestream.
Model Mondays S2:E7 · AI-Assisted Azure Development

Welcome to Episode 7! This week, we explore how AI is transforming Azure development. We'll break down two key tools—Azure MCP Server and GitHub Copilot for Azure—and see how they make working with Azure resources easier for everyone. We'll also look at a real customer story from SightMachine, showing how AI streamlines manufacturing operations.
CampusSphere: Building the Future of Campus AI with Microsoft's Agentic Framework

Project Overview
We are a team of Imperial College students committed to improving campus life through innovative multi-agent solutions. CampusSphere leverages Microsoft Azure AI capabilities to automate core university campus services. We created an end-to-end solution that allows both students and staff to access a multi-agent framework for room/gym booking, attendance tracking, calendar management, IoT monitoring, and more.

🔭 Our Initial Vision: Reimagining Campus Technology
When our team at Imperial College London embarked on the CampusSphere project as part of Microsoft's Agentic Campus initiative, we had one clear ambition: to create an intelligent campus ecosystem that would fundamentally change how students, faculty, and staff interact with university services. The inspiration came from a simple observation—despite living in an age of advanced AI, campus technology remained frustratingly fragmented. Students juggled multiple portals for course registration, room booking, dining services, and academic support. Faculty members navigated separate systems for teaching, research, and administrative tasks. The result? Countless hours wasted on mundane navigation tasks that could be better spent on learning, teaching, and innovation.
Our vision was ambitious: create a single, intelligent interface that could understand natural language, anticipate user needs, and seamlessly integrate with existing campus infrastructure. We didn't just want to build another campus app—we wanted to demonstrate how Microsoft's agentic AI technologies could create a truly intelligent campus companion.

🧠 Enter CampusSphere
CampusSphere is an intelligent campus assistant made up of multiple AI agents, each with a specific domain of expertise — all communicating seamlessly through a centralized architecture. Think of it as a digital concierge for campus life, where your calendar, attendance, IoT data, and facility bookings are coordinated by specialized GPT-powered agents.
Here's what we built:
TriageAgent – the brain of the system, using Retrieval-Augmented Generation (RAG) to understand user intent
CalendarAgent – handles scheduling, bookings, and reminders
AttendanceAgent – tracks check-ins automatically
IoTAgent – monitors real-time sensor data from classrooms and labs
GymAgent – manages access and reservations for sports facilities
30+ MCP Tools – perform SQL queries, scrape web data, and connect with external APIs
All of this is built on Microsoft Azure AI, Semantic Kernel, and the Model Context Protocol (MCP) — making it scalable, secure, and lightning fast.

🖥️ The Tech Stack
Our Azure-powered architecture showcases a modular and scalable approach to real-time data processing and intelligent agent coordination. The frontend is built using React with a Vite development server, providing a fast and responsive user interface. When users submit a prompt, it travels to a Flask backend server acting as the triage agent, which intelligently delegates tasks to a FastAPI agent service (a minimal sketch of this delegation pattern follows below). This FastAPI service asynchronously communicates with individual agents and handles responses efficiently. Complex queries are routed to MCP tools, which interact with the CosmosDB-powered campus database. Simultaneously, real-time synthetic IoT data is pushed into the database via Azure Function Apps and Azure IoT Hub.
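Here is a hedged, minimal sketch of that triage-to-agent-service delegation pattern. The endpoint shape and names are illustrative assumptions; the real CampusSphere routes live in the team's repository.

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class AgentRequest(BaseModel):
    user_id: str
    prompt: str

@app.post("/agents/{agent_name}")
async def run_agent(agent_name: str, request: AgentRequest) -> dict:
    # Stub dispatcher: a real service would route to CalendarAgent,
    # AttendanceAgent, IoTAgent, etc., and await their responses.
    return {"agent": agent_name, "reply": f"Handled '{request.prompt}' for {request.user_id}"}

# Run locally with: uvicorn main:app --reload  (assuming this file is main.py)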
Authentication is securely managed: users log in through the frontend, receive a token from the database API server, and use it for authorized access to MCP services, with permissions enforced based on user roles using our custom MCP server implementation. This robust architecture enables seamless integration, real-time data flow, and secure multi-agent collaboration across Azure services.
Our system leverages a multi-agent architecture designed to intelligently coordinate task execution across specialized services. At the core is the TriageAgent, which uses Retrieval-Augmented Generation (RAG) to interpret user prompts, enrich them with relevant context, and determine the optimal response path. Based on the nature of the request, it may handle the response directly, seek clarification, or delegate tasks to specific agents via FastAPI. Each specialized agent has a clearly defined role:
AttendanceAgent: Interfaces with CosmosDB-backed FastAPI endpoints to check student attendance, using filters like event name, student ID, or date.
IoTAgent: Monitors room conditions (e.g., temperature, CO₂ levels) and flags anomalies using real-time data from Azure IoT Hub, processed via FastAPI.
CalendarAgent: Handles scheduling, availability checks, and event creation by querying or updating CosmosDB through FastAPI. Future integration with the Microsoft Graph API is planned for direct calendar syncing.
Gym Slot Agent: Checks available times for gym sessions using dedicated MCP tools.
The TriageAgent serves as the orchestrator, breaking down complex requests (like "Book a gym session") into subtasks. It consults the relevant agents (e.g., the calendar and gym slot agents), merges results, and then confirms the final action with the user. This distributed and asynchronous workflow reduces backend load and enhances both the responsiveness and reliability of the system.

🔮 What's Next?
Integrating CampusSphere with live systems via Microsoft OAuth is crucial for enhancing its capabilities. This integration will grant the agent authenticated access to a wider range of student data, moving beyond synthetic datasets. This expanded access to real-world information will enable deeply personalized advice, such as tailored course selection, scholarship recommendations, event suggestions, and deadline reminders, transforming CampusSphere into a sophisticated, proactive personal assistant.

🤝 Meet the Team Behind CampusSphere
Our success stemmed from a diverse team of innovators who brought together expertise from multiple domains:
Benny Liu - https://www.linkedin.com/in/zong-benny-liu-393a4621b/
Lucas Ng - https://www.linkedin.com/in/lucas-ng-11b317203/
Lu Ju - https://www.linkedin.com/in/lu-ju/
Bruno Duaso - https://www.linkedin.com/in/bruno-duaso-jimeno-744464262/
Martim Coutinho - https://www.linkedin.com/in/martim-pereira-coutinho-116308233/
Krischad Pourpongpan - https://www.linkedin.com/in/krischadpua/
Yixu Pan - https://www.linkedin.com/in/yixu-pan/
Our collaborative approach enabled us to create a sophisticated agentic AI system that demonstrates the powerful potential of Microsoft's AI technologies in educational environments.

🧑‍💻 Project Repository
GitHub: Imperial-Microsoft-Agentic-Campus/CampusSphere
Have questions about implementing similar solutions at your institution?
Connect with our team members on LinkedIn—we're always excited to share knowledge and collaborate on innovative campus technology projects.

📚 Get Started with Microsoft's AI Tools
Ready to explore the technologies that made CampusSphere possible? Here are essential resources:
Microsoft Semantic Kernel: The core framework for building AI agent orchestration systems. Learn how to create, coordinate, and manage multiple AI agents working together seamlessly.
AI Agents for Beginners: A comprehensive guide to understanding and building AI agents from the ground up. Perfect for getting started with agentic AI development.
Model Context Protocol (MCP): Learn about the protocol that enables secure connections between AI models and external tools and services—essential for building integrated AI systems.
Windows AI Toolkit: Microsoft's toolkit for developing AI applications on Windows, providing local AI model development capabilities and deployment tools.
Azure Container Apps: Understand how to deploy and scale containerized AI applications in the cloud, perfect for hosting multi-agent systems.
Azure Cosmos DB Security: Essential security practices for managing data in AI applications, covering encryption, access control, and compliance.