Building a Multi-Agent System with Azure AI Agent Service: Campus Event Management
Personal Background

My name is Peace Silly. I studied French and Spanish at the University of Oxford, where I developed a strong interest in how language is structured and interpreted. That curiosity about syntax and meaning eventually led me to computer science, which I came to see as another language built on logic and structure. In the academic year 2024–2025, I completed an MSc in Computer Science at University College London, where I developed this project as part of my Master’s thesis.

Project Introduction

Can large-scale event management be handled through a simple chat interface? This was the question that guided my Master’s thesis project at UCL. As part of the Industry Exchange Network (IXN) and in collaboration with Microsoft, I set out to explore how conversational interfaces and autonomous AI agents could simplify one of the most underestimated coordination challenges in campus life: managing events across multiple departments, societies, and facilities.

At large universities, event management is rarely straightforward. Rooms are shared between academic timetables, student societies, and one-off events. A single lecture theatre might host a departmental seminar in the morning, a society meeting in the afternoon, and a careers talk in the evening, each relying on different systems, staff, and communication chains. Double bookings, last-minute cancellations, and maintenance issues are common, and coordinating changes often means long email threads, manual spreadsheets, and frustrated users.

These inefficiencies do more than waste time; they directly affect how a campus functions day to day. When venues are unavailable or notifications fail to reach the right people, even small scheduling errors can ripple across entire departments. A smarter, more adaptive approach was needed: one that could manage complex workflows autonomously while remaining intuitive and human for end users.
The result was the Event Management Multi-Agent System, a cloud-based platform where staff and students can query events, book rooms, and reschedule activities simply by chatting. Behind the scenes, a network of Azure-powered AI agents collaborates to handle scheduling, communication, and maintenance in real time, working together to keep the campus running smoothly. The user scenario shown in the figure below exemplifies the vision that guided the development of this multi-agent system.

Starting with Microsoft Learning Resources

I began my journey with Microsoft’s tutorial Build Your First Agent with Azure AI Foundry, which introduced the fundamentals of the Azure AI Agent Service and provided an ideal foundation for experimentation. Within a few weeks, using the Azure AI Foundry environment, I extended those foundations into a fully functional multi-agent system.

Azure AI Foundry’s visual interface was an invaluable learning space. It allowed me to deploy, test, and adjust model parameters such as temperature, system prompts, and function calling while observing how each change influenced the agents’ reasoning and collaboration. Through these experiments, I developed a strong conceptual understanding of orchestration and coordination before moving to the command line for more complex development later.

When development issues inevitably arose, I relied on the Discord support community and the GitHub forum for troubleshooting. These communities were instrumental in addressing configuration issues and providing practical examples, ensuring that each agent performed reliably within the shared-thread framework. This early engagement with Microsoft’s learning materials not only accelerated my technical progress but also shaped how I approached experimentation, debugging, and iteration. It transformed a steep learning curve into a structured, hands-on process that mirrored professional software development practice.
A Decentralised Team of AI Agents

The system’s intelligence is distributed across three specialised agents, powered by OpenAI’s GPT-4.1 models through Azure OpenAI Service. Each performs a distinct role within the event management workflow:

Scheduling Agent – interprets natural language requests, checks room availability, and allocates suitable venues.
Communications Agent – notifies stakeholders when events are booked, modified, or cancelled.
Maintenance Agent – monitors room readiness, posts fault reports when venues become unavailable, and triggers rescheduling when needed.

Each agent operates independently but communicates through a shared thread, a transparent message log that serves as the coordination backbone. This thread acts as a persistent state space where agents post updates, react to changes, and maintain a record of every decision. For example, when a maintenance fault is detected, the Maintenance Agent logs the issue, the Scheduling Agent identifies an alternative venue, and the Communications Agent automatically notifies attendees. These interactions happen autonomously, with each agent responding to the evolving context recorded in the shared thread.

Interfaces and Backend

The system was designed with both developer-focused and user-facing interfaces, supporting rapid iteration and intuitive interaction.

The Terminal Interface

Initially, the agents were deployed and tested through a terminal interface, which provided a controlled environment for debugging and verifying logic step by step. This setup allowed quick testing of individual agents and observation of their interactions within the shared thread.

The Chat Interface

As the project evolved, I introduced a lightweight chat interface to make the system accessible to staff and students. This interface allows users to book rooms, query events, and reschedule activities using plain language.
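The shared-thread coordination described above can be sketched in plain Python. This is an illustrative stand-in only: the class names, event names, and the "Room B12" venue are invented for the example, and the real system relies on the Azure AI Agent Service to manage threads and agent runs.

```python
from dataclasses import dataclass, field

@dataclass
class SharedThread:
    """A persistent message log that every agent reads from and posts to."""
    messages: list = field(default_factory=list)

    def post(self, sender: str, event: str, payload: dict) -> None:
        self.messages.append({"sender": sender, "event": event, "payload": payload})

class SchedulingAgent:
    def react(self, thread: SharedThread) -> None:
        # Respond to a maintenance fault by reassigning the event to another venue.
        last = thread.messages[-1]
        if last["event"] == "fault_reported":
            thread.post("scheduler", "room_reassigned",
                        {"old": last["payload"]["room"], "new": "Room B12"})

class CommunicationsAgent:
    def react(self, thread: SharedThread) -> None:
        # Notify attendees whenever a booking changes.
        last = thread.messages[-1]
        if last["event"] == "room_reassigned":
            thread.post("comms", "attendees_notified", {"room": last["payload"]["new"]})

# A maintenance fault ripples through the team via the shared thread.
thread = SharedThread()
thread.post("maintenance", "fault_reported", {"room": "Engineering Auditorium"})
SchedulingAgent().react(thread)
CommunicationsAgent().react(thread)
print([m["event"] for m in thread.messages])
# fault_reported -> room_reassigned -> attendees_notified
```

The point of the sketch is the coordination style: no agent calls another directly; each one reacts to the evolving context recorded in the thread.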
Recognising that some users might still want to see what happens behind the scenes, I added an optional toggle that reveals the intermediate steps of agent reasoning. This transparency feature proved valuable for debugging and for more technical users who wanted to understand how the agents collaborated.

When a user interacts with the chat interface, they are effectively communicating with the Scheduling Agent, which acts as the primary entry point. The Scheduling Agent interprets natural-language commands such as “Book the Engineering Auditorium for Friday at 2 PM” or “Reschedule the robotics demo to another room.” It then coordinates with the Maintenance and Communications Agents to complete the process.

Behind the scenes, the chat interface connects to a FastAPI backend responsible for core logic and data access. A Flask + HTMX layer handles lightweight rendering and interactivity, while the Azure AI Agent Service manages orchestration and shared-thread coordination. This combination enables seamless agent communication and reliable task execution without exposing any of the underlying complexity to the end user.

Automated Notifications and Fault Detection

Once an event is scheduled, the Scheduling Agent posts the confirmation to the shared thread. The Communications Agent, which subscribes to thread updates, automatically sends notifications to all relevant stakeholders by email. This ensures that every participant stays informed without any manual follow-up.

The Maintenance Agent runs routine availability checks. If a fault is detected, it logs the issue to the shared thread, prompting the Scheduling Agent to find an alternative room. The Communications Agent then notifies attendees of the change, ensuring minimal disruption to ongoing events.

Testing and Evaluation

The system underwent several layers of testing to validate both functional and non-functional requirements.
Unit and Integration Tests

Backend reliability was evaluated through unit and integration tests to ensure that room allocation, conflict detection, and database operations behaved as intended. Automated test scripts verified end-to-end workflows for event creation, modification, and cancellation across all agents. Integration results confirmed that the shared-thread orchestration functioned correctly, with all test cases passing consistently.

However, coverage analysis revealed that approximately 60% of the codebase was tested, leaving some areas, such as Azure service integration and error-handling paths, outside automated validation. These trade-offs were deliberate, balancing test depth with project scope and the constraints of mocking live dependencies.

Azure AI Evaluation

While functional testing confirmed correctness, it did not capture the agents’ reasoning or language quality. To assess this, I used Azure AI Evaluation, which measures conversational performance across metrics such as relevance, coherence, fluency, and groundedness. The results showed high scores in relevance (4.33) and groundedness (4.67), confirming the agents’ ability to generate accurate and context-aware responses. However, slightly lower fluency scores and weaker performance in multi-turn tasks revealed a retrieval–execution gap typical of task-oriented dialogue systems.

Limitations and Insights

The evaluation also surfaced several key limitations:

Synthetic data: All tests were conducted with simulated datasets rather than live campus systems, limiting generalisability.
Scalability: Horizontal scalability, a non-functional requirement, was not tested. The architecture supports scaling conceptually but requires validation under heavier load.

Despite these constraints, the testing process confirmed that the system was both technically reliable and linguistically robust, capable of autonomous coordination under normal conditions.
The results provided a realistic picture of what worked well and what future iterations should focus on improving.

Impact and Future Work

This project demonstrates how conversational AI and multi-agent orchestration can streamline real operational processes. By combining the Azure AI Agent Service with modular design principles, the system automates scheduling, communication, and maintenance while keeping the user experience simple and intuitive. The architecture also establishes a foundation for future extensions:

Predictive maintenance to anticipate venue faults before they occur.
Microsoft Teams integration for seamless in-chat scheduling.
Scalability testing and real-user trials to validate performance at institutional scale.

Beyond its technical results, the project underscores the potential of multi-agent systems in real-world coordination tasks. It illustrates how modularity, transparency, and intelligent orchestration can make everyday workflows more efficient and human-centred.

Acknowledgements

What began with a simple Microsoft tutorial evolved into a working prototype that reimagines how campuses could manage their daily operations through conversation and collaboration. This was both a challenging and rewarding journey, and I am deeply grateful to Professor Graham Roberts (UCL) and Professor Lee Stott (Microsoft) for their guidance, feedback, and support throughout the project.

Kickstart Your AI Development with the Model Context Protocol (MCP) Course
Model Context Protocol is an open standard that acts as a universal connector between AI models and the outside world. Think of MCP as “the USB-C of the AI world,” allowing AI systems to plug into APIs, databases, files, and other tools seamlessly. By adopting MCP, developers can create smarter, more useful AI applications that access up-to-date information and perform actions like a human developer would. To help developers learn this game-changing technology, Microsoft has created the “MCP for Beginners” course, a free, open-source curriculum that guides you from the basics of MCP to building real-world AI integrations. Below, we’ll explore what MCP is, who this course is for, and how it empowers both beginners and intermediate developers to get started with MCP.

What is MCP and Why Should Developers Care?

Model Context Protocol (MCP) is an innovative framework designed to standardize interactions between AI models and client applications. In simpler terms, MCP is a communication bridge that lets your AI agent fetch live context from external sources (like APIs, documents, databases, or web services) and even take actions using tools. This means your AI apps are no longer limited to pre-trained knowledge; they can dynamically retrieve data or execute commands, enabling far more powerful and context-aware behavior.

Some key reasons MCP matters for developers:

Seamless Integration of Tools & Data: MCP provides a unified way to connect AI to various data sources and tools, eliminating the need for ad-hoc, fragile integrations. Your AI agent can, for example, query a database or call a web API during a conversation, all through a standardized protocol.
Stay Up-to-Date: Because AI models can use MCP to access external information, they overcome the training data cutoff problem. They can fetch the latest facts, figures, or documents on demand, ensuring more accurate and timely responses.
Industry Momentum: MCP is quickly gaining traction.
Originally introduced by Anthropic in late 2024, it has since been adopted by major AI platforms (Replit, Sourcegraph, Hugging Face, and more) and spawned thousands of open-source connectors by early 2025. It’s an emerging standard, and learning it now puts developers at the forefront of AI innovation.

In short, MCP is transformative for AI development, and being proficient in it will help you build smarter AI solutions that can interact with the real world. The MCP for Beginners course is designed to make mastering this protocol accessible, with a structured learning path and hands-on examples.

Introducing the MCP for Beginners Course

“Model Context Protocol for Beginners” is an open-source, self-paced curriculum created by Microsoft to teach the concepts and fundamentals of MCP. Whether you’re completely new to MCP or have some experience, this course offers a comprehensive guide from the ground up.

Key Features and Highlights:

Structured Learning Path: The curriculum is organized as a multi-part guide that gradually builds your knowledge. It starts with the basics of MCP (What is MCP? Why does standardization matter? What are the use cases?) and then moves through core concepts, security considerations, and getting started with coding, all the way to advanced topics and real-world case studies. This progression ensures you understand the “why” and “how” of MCP before tackling complex scenarios.
Hands-On Coding Examples: This isn’t just theory; practical coding examples are a cornerstone of the course. You’ll find live code samples and mini-projects in multiple languages (C#, Java, JavaScript/TypeScript, and Python) for each concept. For instance, you’ll build a simple MCP-powered Calculator application as a project, exploring how to implement MCP clients and servers in your preferred language. By coding along, you cement your understanding and see MCP in action.
Real-World Use Cases: The curriculum illustrates how MCP applies to real scenarios. It discusses practical use cases of MCP in AI pipelines (e.g., an AI agent pulling in documentation or database info on the fly) and includes case studies of early adopters. These examples help you connect what you learn to actual applications and solutions you might develop in your job.
Broad Language Support: A unique aspect of this course is its multi-language approach, both in terms of programming and human languages. The content provides code implementations in several popular programming languages (so you can learn MCP in the context of C#, Java, Python, JavaScript, or TypeScript, as you prefer). In addition, the learning materials themselves are available in multiple human languages (English, plus translations such as French, Spanish, German, Chinese, Japanese, Korean, and Polish) to support learners worldwide. This inclusivity ensures that more developers can comfortably engage with the material.
Up-to-Date and Open-Source: Hosted on GitHub under the MIT License, the curriculum is completely free to use and open for contributions. It’s maintained with the latest updates; for example, automated workflows keep translations in sync so all language versions stay current. As MCP evolves, the course content can evolve with it. You can even join the community to suggest improvements or add content, making this a living learning resource.
Official Resources & Community Support: The course links to official MCP documentation and specs for deeper reference, and it encourages learners to join the Discord community at https://aka.ms/ai/discord to discuss and get help. You won’t be learning alone; you can network with experts and peers, ask questions, and share progress. Microsoft’s open-source approach means you’re part of a community of practitioners from day one.
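To make the protocol concrete, here is the general shape of an MCP tool invocation. MCP messages are JSON-RPC 2.0; the structure below is simplified from the specification, and the tool name and arguments are illustrative rather than taken from any real server.

```python
import json

# An MCP client asks a server to invoke one of its tools (JSON-RPC 2.0).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "calculator",                             # illustrative tool name
        "arguments": {"operation": "add", "a": 2, "b": 3},  # illustrative inputs
    },
}

# A typical success response carries the tool output back as content blocks.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "5"}]},
}

# Both sides exchange these messages as serialized JSON.
wire_request = json.dumps(request)
print(wire_request)
```

The course's language-specific SDK modules wrap this message exchange for you, but seeing the raw shape makes it clear why MCP servers written in any language can serve any MCP-aware client.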
Course Outline (Modules at a Glance)

Introduction to MCP: Overview of MCP, why standardization matters in AI, and the key benefits and use cases of using MCP. (Start here to understand the big picture.)
Core Concepts: Deep dive into MCP’s architecture: understanding the client-server model, how requests and responses work, and the message schema. Learn the fundamental components that make up the protocol.
Security in MCP: Identify potential security threats when building MCP-based systems and learn best practices to secure your AI integrations. Important for anyone planning to deploy MCP in production environments.
Getting Started (Hands-On): Set up your environment and create your first MCP server and client. This module walks through basic implementation steps and shows how to integrate MCP with existing applications, so you get a service up and running that an AI agent can communicate with.
MCP Calculator Project: A guided project where you build a simple MCP-powered application (a calculator) in the language of your choice. This hands-on exercise reinforces the concepts by implementing a real tool: you’ll see how an AI agent can use MCP to perform calculations via an external tool.
Practical Implementation: Tips and techniques for using MCP SDKs across different languages. Covers debugging, testing, validation of MCP integrations, and how to design effective prompt workflows that leverage MCP’s capabilities.
Advanced Topics: Going beyond the basics: explore multi-modal AI workflows (using MCP to handle not just text but other data types), scalability and performance tuning for MCP servers, and how MCP fits into larger enterprise architectures. This is where intermediate users can really deepen their expertise.
Community Contributions: Learn how to contribute to the MCP ecosystem and the curriculum itself. This section shows you how to collaborate via GitHub, follow the project’s guidelines, and even extend the protocol with your own ideas.
It underlines that MCP is a growing, community-driven standard.
Insights from Early Adoption: Hear lessons learned from real-world MCP implementations. What challenges did early adopters face? What patterns and solutions worked best? Understanding these will prepare you to avoid pitfalls in your own projects.
Best Practices and Case Studies: A roundup of do’s and don’ts when using MCP. This includes performance optimization techniques, designing fault-tolerant systems, and testing strategies. Plus, detailed case studies that walk through actual MCP solution architectures with diagrams and integration tips, bringing everything you learned together in concrete examples.

Who Should Take This Course?

The MCP for Beginners course is geared towards developers: if you build or work on AI-driven applications, this course is for you. The content specifically welcomes:

Beginners in AI Integration: You might be a developer who’s comfortable with languages like Python, C#, or Java but new to AI/LLMs or to MCP itself. This course will take you from zero knowledge of MCP to a level where you can build and deploy your own MCP-enabled services. You do not need prior experience with MCP or machine learning pipelines; the introduction module will bring you up to speed on key concepts. (Basic programming skills and an understanding of client-server or API concepts are the only prerequisites.)
Intermediate Developers & AI Practitioners: If you have some experience building bots or AI features and want to enhance them with real-time data access, you’ll benefit greatly. The course’s later modules on advanced topics, security, and best practices are especially valuable for those looking to integrate MCP into existing projects or optimize their approach. Even if you’ve dabbled in MCP or a similar concept before, this curriculum will fill gaps in knowledge and provide structured insights that are hard to get from scattered documentation.
AI Enthusiasts & Architects: Perhaps you’re an AI architect or tech lead exploring new frameworks for intelligent agents. This course serves as a comprehensive resource to evaluate MCP for your architecture. By walking through it, you’ll understand how MCP can fit into enterprise systems, what benefits it brings, and how to implement it in a maintainable way. It’s perfect for getting a broad yet detailed view of MCP’s capabilities before adopting it within a team.

In essence, anyone interested in making AI applications more connected and powerful will find value here. From a solo hackathon coder to a professional solution architect, the material scales to your needs. The course starts with fundamentals in an easy-to-grasp manner and then deepens into complex topics, appealing to a wide range of skill levels.

Prerequisites: The official prerequisites for the course are minimal: you should have basic knowledge of at least one programming language (C#, Java, or Python is recommended) and a general understanding of how client-server applications or APIs work. Familiarity with machine learning concepts is optional but can help. In short, if you can write simple programs and understand making API calls, you have everything you need to start learning MCP.

Conclusion: Empower Your AI Projects with MCP

The Model Context Protocol for Beginners course is more than just a tutorial; it’s a comprehensive journey that empowers you to build the next generation of AI applications. By demystifying MCP and equipping you with hands-on experience, this curriculum turns a seemingly complex concept into practical skills you can apply immediately. With MCP, you unlock capabilities like giving your AI agents real-time information access and the ability to use tools autonomously. That means as a developer, you can create solutions that are significantly more intelligent and useful.
A chatbot that can search documents, a coding assistant that can consult APIs or run code, an AI service that seamlessly integrates with your database: all of these become achievable when you know MCP. And thanks to this beginner-friendly course, you’ll be able to implement such features with confidence.

Whether you are starting out in the AI development world or looking to sharpen your cutting-edge skills, the MCP for Beginners course has something for you. It condenses best practices, real-world lessons, and robust techniques into an accessible format. Learning MCP now will put you ahead of the curve, as this protocol rapidly becomes a cornerstone of AI integrations across the industry.

So, are you ready to level up your AI development skills? Dive into the course at https://aka.ms/mcp-for-beginners and start building AI agents that can truly interact with the world around them. With the knowledge and experience gained, you’ll be prepared to create smarter, context-aware applications and be part of the community driving AI innovation forward.

Choosing the Right Intelligence Layer for Your Application
Introduction

One of the most common questions developers ask when planning AI-powered applications is: "Should I use the GitHub Copilot SDK or the Microsoft Agent Framework?" It's a natural question: both technologies let you add an intelligence layer to your apps, both come from Microsoft's ecosystem, and both deal with AI agents. But they solve fundamentally different problems, and understanding where each excels will save you weeks of architectural missteps.

The short answer is this: the Copilot SDK puts Copilot inside your app, while the Agent Framework lets you build your app out of agents. They're complementary, not competing. In fact, the most interesting applications use both: the Agent Framework as the system architecture and the Copilot SDK as a powerful execution engine within it.

This article breaks down each technology's purpose, architecture, and ideal use cases. We'll walk through concrete scenarios, examine a real-world project that combines both, and give you a decision framework for your own applications. Whether you're building developer tools, enterprise workflows, or data analysis pipelines, you'll leave with a clear understanding of which tool belongs where in your stack.

The Core Distinction: Embedding Intelligence vs Building With Intelligence

Before comparing features, it helps to understand the fundamental design philosophy behind each technology. They approach the concept of "adding AI to your application" from opposite directions.

The GitHub Copilot SDK exposes the same agentic runtime that powers Copilot CLI as a programmable library. When you use it, you're embedding a production-tested agent, complete with planning, tool invocation, file editing, and command execution, directly into your application. You don't build the orchestration logic yourself. Instead, you delegate tasks to Copilot's agent loop and receive results. Think of it as hiring a highly capable contractor: you describe the job, and the contractor figures out the steps.
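The contractor analogy can be expressed as a tiny sketch: with the embed-an-agent approach you hand over a goal and receive a result, while with the build-with-agents approach you wire the steps yourself. The code below illustrates only the two calling patterns; every name in it is invented for illustration and is not either product's API.

```python
from typing import Callable

# Pattern 1: delegate to an embedded agent (Copilot SDK style).
# You describe the job; the agent plans and executes internally.
def delegate(agent: Callable[[str], str], task: str) -> str:
    return agent(task)

# Pattern 2: orchestrate explicit steps yourself (Agent Framework style).
# You define the roles and the order in which they run.
def orchestrate(steps: list[Callable[[str], str]], task: str) -> str:
    state = task
    for step in steps:
        state = step(state)
    return state

# Stand-ins for real agents: an opaque "contractor" vs explicit roles.
embedded_agent = lambda task: f"done: {task}"
plan = lambda s: s + " -> planned"
execute = lambda s: s + " -> executed"
review = lambda s: s + " -> reviewed"

delegated = delegate(embedded_agent, "scaffold service")
orchestrated = orchestrate([plan, execute, review], "scaffold service")
print(delegated)      # done: scaffold service
print(orchestrated)   # scaffold service -> planned -> executed -> reviewed
```

The difference in who owns the control flow, the agent or your code, is the crux of the comparison that follows.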
The Microsoft Agent Framework is a framework for building, orchestrating, and hosting your own agents. You explicitly model agents, workflows, state, memory, hand-offs, and human-in-the-loop interactions. You control the orchestration, policies, deployment, and observability. Think of it as designing the company that employs those contractors: you define the roles, processes, escalation paths, and quality controls. This distinction has profound implications for what you build and how you build it.

GitHub Copilot SDK: When Your App Wants Copilot-Style Intelligence

The GitHub Copilot SDK is the right choice when you want to embed agentic behavior into an existing application without building your own planning or orchestration layer. It's optimized for developer workflows and task automation scenarios where you need an AI agent to do things (edit files, run commands, generate code, interact with tools) reliably and quickly.

What You Get Out of the Box

The SDK communicates with the Copilot CLI server via JSON-RPC, managing the CLI process lifecycle automatically. This means your application inherits capabilities that have been battle-tested across millions of Copilot CLI users:

Planning and execution: The agent analyzes tasks, breaks them into steps, and executes them autonomously
Built-in tool support: File system operations, Git operations, web requests, and shell command execution work out of the box
MCP (Model Context Protocol) integration: Connect to any MCP server to extend the agent's capabilities with custom data sources and tools
Multi-language support: Available as SDKs for Python, TypeScript/Node.js, Go, and .NET
Custom tool definitions: Define your own tools and constrain which tools the agent can access
BYOK (Bring Your Own Key): Use your own API keys from OpenAI, Azure AI Foundry, or Anthropic instead of GitHub authentication

Architecture

The SDK's architecture is deliberately simple.
Your application communicates with the Copilot CLI running in server mode: Your Application ↓ SDK Client ↓ JSON-RPC Copilot CLI (server mode) The SDK manages the CLI process lifecycle automatically. You can also connect to an external CLI server if you need more control over the deployment. This simplicity is intentional, it keeps the integration surface small so you can focus on your application logic rather than agent infrastructure. Ideal Use Cases for the Copilot SDK The Copilot SDK shines in scenarios where you need a competent agent to execute tasks on behalf of users. These include: AI-powered developer tools: IDEs, CLIs, internal developer portals, and code review tools that need to understand, generate, or modify code "Do the task for me" agents: Applications where users describe what they want—edit these files, run this analysis, generate a pull request and the agent handles execution Rapid prototyping with agentic behavior: When you need to ship an intelligent feature quickly without building a custom planning or orchestration system Internal tools that interact with codebases: Build tools that explore repositories, generate documentation, run migrations, or automate repetitive development tasks A practical example: imagine building an internal CLI that lets engineers say "set up a new microservice with our standard boilerplate, CI pipeline, and monitoring configuration." The Copilot SDK agent would plan the file creation, scaffold the code, configure the pipeline YAML, and even run initial tests, all without you writing orchestration logic. Microsoft Agent Framework: When Your App Is the Intelligence System The Microsoft Agent Framework is the right choice when you need to build a system of agents that collaborate, maintain state, follow business processes, and operate with enterprise-grade governance. It's designed for long-running, multi-agent workflows where you need fine-grained control over every aspect of orchestration. 
What You Get Out of the Box

The Agent Framework provides a comprehensive foundation for building sophisticated agent systems in both Python and .NET:

Graph-based workflows: Connect agents and deterministic functions using data flows with streaming, checkpointing, human-in-the-loop, and time-travel capabilities
Multi-agent orchestration: Define how agents collaborate, hand off tasks, escalate decisions, and share state
Durability and checkpoints: Workflows can pause, resume, and recover from failures, essential for business-critical processes
Human-in-the-loop: Built-in support for approval gates, review steps, and human override points
Observability: OpenTelemetry integration for distributed tracing, monitoring, and debugging across agent boundaries
Multiple agent providers: Use Azure OpenAI, OpenAI, and other LLM providers as the intelligence behind your agents
DevUI: An interactive developer UI for testing, debugging, and visualizing workflow execution

Architecture

The Agent Framework gives you explicit control over the agent topology. You define agents, connect them in workflows, and manage the flow of data between them:

┌─────────────┐     ┌──────────────┐     ┌──────────────┐
│   Agent A   │────▶│   Agent B    │────▶│   Agent C    │
│  (Planner)  │     │  (Executor)  │     │  (Reviewer)  │
└─────────────┘     └──────────────┘     └──────────────┘
    Define             Execute              Validate
    strategy           tasks                output

Each agent has its own instructions, tools, memory, and state. The framework manages communication between agents, handles failures, and provides visibility into what's happening at every step. This explicitness is what makes it suitable for enterprise applications where auditability and control are non-negotiable.

Ideal Use Cases for the Agent Framework

The Agent Framework excels in scenarios where you need a system of coordinated agents operating under business rules.
These include:

Multi-agent business workflows: Customer support pipelines, research workflows, operational processes, and data transformation pipelines where different agents handle different responsibilities
Systems requiring durability: Workflows that run for hours or days, need checkpoints, can survive restarts, and maintain state across sessions
Governance-heavy applications: Processes requiring approval gates, audit trails, role-based access, and compliance documentation
Agent collaboration patterns: Applications where agents need to negotiate, escalate, debate, or refine outputs iteratively before producing a final result
Enterprise data pipelines: Complex data processing workflows where AI agents analyze, transform, and validate data through multiple stages

A practical example: an enterprise customer support system where a triage agent classifies incoming tickets, a research agent gathers relevant documentation and past solutions, a response agent drafts replies, and a quality agent reviews responses before they reach the customer, with a human escalation path when confidence is low.

Side-by-Side Comparison

To make the distinction concrete, here's how the two technologies compare across key dimensions that matter when choosing an intelligence layer for your application.
| Dimension | GitHub Copilot SDK | Microsoft Agent Framework |
|---|---|---|
| Primary purpose | Embed Copilot's agent runtime into your app | Build and orchestrate your own agent systems |
| Orchestration | Handled by Copilot's agent loop; you delegate | You define it explicitly: agents, workflows, state, hand-offs |
| Agent count | Typically a single agent per session | Multi-agent systems with agent-to-agent communication |
| State management | Session-scoped, managed by the SDK | Durable state with checkpointing, time-travel, persistence |
| Human-in-the-loop | Basic; user confirms actions | Rich approval gates, review steps, escalation paths |
| Observability | Session logs and tool call traces | Full OpenTelemetry, distributed tracing, DevUI |
| Best for | Developer tools, task automation, code-centric workflows | Enterprise workflows, multi-agent systems, business processes |
| Languages | Python, TypeScript, Go, .NET | Python, .NET |
| Learning curve | Low: install, configure, delegate tasks | Moderate: design agents, workflows, state, and policies |
| Maturity | Technical Preview | Preview with active development, 7k+ stars, 100+ contributors |

Real-World Example: Both Working Together

The most compelling applications don't choose between these technologies; they combine them. A perfect demonstration of this complementary relationship is the Agentic House project by my colleague Anthony Shaw, which uses an Agent Framework workflow to orchestrate three agents, one of which is powered by the GitHub Copilot SDK.

The Problem

Agentic House lets users ask natural language questions about their Home Assistant smart home data. Questions like "what time of day is my phone normally fully charged?" or "is there a correlation between when the back door is open and the temperature in my office?" require exploring available data, writing analysis code, and producing visual results: a multi-step process that no single agent can handle well alone.
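The answer to that kind of multi-step problem is a pipeline of specialized agents, and the shape of such a pipeline can be sketched in a few lines of plain Python. This is an illustrative sketch only: the `Agent` class, the `handle` callables, and `run_pipeline` below are hypothetical stand-ins for LLM-backed steps, not the Agent Framework or Copilot SDK APIs.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical minimal agent: real frameworks add tools, memory, and LLM calls.
@dataclass
class Agent:
    name: str
    instructions: str
    handle: Callable[[str], str]  # stand-in for an LLM-backed step
    state: dict = field(default_factory=dict)

def run_pipeline(agents: list[Agent], task: str) -> str:
    """Pass the task through each agent in order (plan -> execute -> review)."""
    payload = task
    for agent in agents:
        payload = agent.handle(payload)
        agent.state["last_output"] = payload  # each agent keeps its own state
    return payload

planner = Agent("Planner", "Define the analysis plan", lambda t: f"plan({t})")
executor = Agent("Executor", "Carry out the plan", lambda t: f"exec({t})")
reviewer = Agent("Reviewer", "Validate the output", lambda t: f"ok:{t}")

result = run_pipeline([planner, executor, reviewer], "charge-time question")
print(result)  # ok:exec(plan(charge-time question))
```

In a real system each `handle` would be a model call with its own tools and instructions, and the orchestration layer would add durability, retries, and observability around this loop.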
The Architecture

The project implements a three-agent pipeline using the Agent Framework for orchestration:

┌─────────────┐     ┌──────────────┐     ┌──────────────┐
│   Planner   │────▶│    Coder     │────▶│   Reviewer   │
│  (GPT-4.1)  │     │  (Copilot)   │     │  (GPT-4.1)   │
└─────────────┘     └──────────────┘     └──────────────┘
     Plan              Notebook            Approve/
   analysis           generation            Reject

Planner Agent: Takes a natural language question and creates a structured analysis plan: which Home Assistant entities to query, what visualizations to create, what hypotheses to test. This agent uses GPT-4.1 through Azure AI Foundry or GitHub Models.

Coder Agent: Uses the GitHub Copilot SDK to generate a complete Jupyter notebook that fetches data from the Home Assistant REST API via MCP, performs the analysis, and creates visualizations. The Copilot agent is constrained to only use specific tools, demonstrating how the SDK supports tool restriction.

Reviewer Agent: Acts as a security gatekeeper, reviewing the generated notebook to ensure it only reads and displays data. It rejects notebooks that attempt to modify Home Assistant state, import dangerous modules, make external network requests, or contain obfuscated code.

Why This Architecture Works

This design demonstrates several principles about when to use which technology:

- Agent Framework provides the workflow: The sequential pipeline with planning, execution, and review is a classic Agent Framework pattern. Each agent has a clear role, and the framework manages the flow between them.
- Copilot SDK provides the coding execution: The Coder agent leverages Copilot's battle-tested ability to generate code, work with files, and use MCP tools. Building a custom code generation agent from scratch would take significantly longer and produce less reliable results.
- Tool constraints demonstrate responsible AI: The Copilot SDK agent is constrained to specific tools, showing how you can embed powerful agentic behavior while maintaining security boundaries.
- Standalone agents handle planning and review: The Planner and Reviewer use simpler LLM-based agents; they don't need Copilot's code execution capabilities, just good reasoning.

While the Home Assistant data is a fun demonstration, the pattern is designed for something much more significant: applying AI agents to complex research against private data sources. The same architecture could analyze internal databases, proprietary datasets, or sensitive business metrics.

Decision Framework: Which Should You Use?

When deciding between the Copilot SDK and the Agent Framework, or both, consider these questions about your application.

Start with the Copilot SDK if:

- You need a single agent to execute tasks autonomously (code generation, file editing, command execution)
- Your application is developer-facing or code-centric
- You want to ship agentic features quickly without building orchestration infrastructure
- The tasks are session-scoped; they start and complete within a single interaction
- You want to leverage Copilot's existing tool ecosystem and MCP integration

Start with the Agent Framework if:

- You need multiple agents collaborating with different roles and responsibilities
- Your workflows are long-running, require checkpoints, or need to survive restarts
- You need human-in-the-loop approvals, escalation paths, or governance controls
- Observability and auditability are requirements (regulated industries, enterprise compliance)
- You're building a platform where the agents themselves are the product

Use both together if:

- You need a multi-agent workflow where at least one agent requires strong code execution capabilities
- You want Agent Framework's orchestration with Copilot's battle-tested agent runtime as one of the execution engines
- Your system involves planning, coding, and review stages that benefit from different agent architectures
- You're building research or analysis tools that combine AI reasoning with code generation

Getting Started

Both technologies are straightforward
to install and start experimenting with. Here's how to get each running in minutes.

GitHub Copilot SDK Quick Start

Install the SDK for your preferred language:

# Python
pip install github-copilot-sdk

# TypeScript / Node.js
npm install @github/copilot-sdk

# .NET
dotnet add package GitHub.Copilot.SDK

# Go
go get github.com/github/copilot-sdk/go

The SDK requires the Copilot CLI to be installed and authenticated. Follow the Copilot CLI installation guide to set that up. A GitHub Copilot subscription is required for standard usage, though BYOK mode allows you to use your own API keys without GitHub authentication.

Microsoft Agent Framework Quick Start

Install the framework:

# Python
pip install agent-framework --pre

# .NET
dotnet add package Microsoft.Agents.AI

The Agent Framework supports multiple LLM providers, including Azure OpenAI and OpenAI directly. Check the quick start tutorial for a complete walkthrough of building your first agent.

Try the Combined Approach

To see both technologies working together, clone the Agentic House project:

git clone https://github.com/tonybaloney/agentic-house.git
cd agentic-house
uv sync

You'll need a Home Assistant instance, the Copilot CLI authenticated, and either a GitHub token or an Azure AI Foundry endpoint. The project's README walks through the full setup, and the architecture provides an excellent template for building your own multi-agent systems with embedded Copilot capabilities.
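One piece you might prototype yourself when adapting this template is a deterministic version of the Reviewer's read-only gate described earlier. Agentic House's actual reviewer is an LLM agent; the sketch below only approximates the idea with Python's `ast` module, and the blocked-module and blocked-call lists are illustrative assumptions, not the project's real policy.

```python
import ast

# Illustrative, deterministic "reviewer" gate: reject generated code that
# imports risky modules or calls state-changing builtins. The blocklists
# here are assumptions for the sketch, not Agentic House's actual policy.
BLOCKED_MODULES = {"os", "subprocess", "socket", "requests"}
BLOCKED_CALLS = {"exec", "eval"}

def review(source: str) -> bool:
    """Return True only if the code passes the read-only policy."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            # Collect top-level module names from both import forms.
            names = [alias.name.split(".")[0] for alias in node.names]
            module = getattr(node, "module", None)
            if module:
                names.append(module.split(".")[0])
            if BLOCKED_MODULES & set(names):
                return False
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in BLOCKED_CALLS:
                return False
    return True

print(review("import pandas as pd\npd.DataFrame()"))       # True
print(review("import subprocess\nsubprocess.run(['ls'])"))  # False
```

A static check like this can run before the LLM reviewer as a cheap first filter, with the LLM agent catching the subtler cases (obfuscation, indirect network access) that simple blocklists miss.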
Key Takeaways

- Copilot SDK = "Put Copilot inside my app": Embed a production-tested agentic runtime with planning, tool execution, file edits, and MCP support directly into your application
- Agent Framework = "Build my app out of agents": Design, orchestrate, and host multi-agent systems with explicit workflows, durable state, and enterprise governance
- They're complementary, not competing: The Copilot SDK can act as a powerful execution engine inside Agent Framework workflows, as demonstrated by the Agentic House project
- Choose based on your orchestration needs: If you need one agent executing tasks, start with the Copilot SDK. If you need coordinated agents with business logic, start with the Agent Framework
- The real power is in combination: The most sophisticated applications use Agent Framework for workflow orchestration and the Copilot SDK for high-leverage task execution within those workflows

Conclusion and Next Steps

The question isn't really "Copilot SDK or Agent Framework?" It's "where does each fit in my architecture?" Understanding this distinction unlocks a powerful design pattern: use the Agent Framework to model your business processes as agent workflows, and use the Copilot SDK wherever you need a highly capable agent that can plan, code, and execute autonomously.

Start by identifying your application's needs. If you're building a developer tool that needs to understand and modify code, the Copilot SDK gets you there fast. If you're building an enterprise system where multiple AI agents need to collaborate under governance constraints, the Agent Framework provides the architecture. And if you need both, as most ambitious applications do, now you know how they fit together.

The AI development ecosystem is moving rapidly. Both technologies are in active development with growing communities and expanding capabilities.
The architectural patterns you learn today (embedding intelligent agents, orchestrating multi-agent workflows, combining execution engines with orchestration frameworks) will remain valuable regardless of how the specific tools evolve.

Resources

- GitHub Copilot SDK Repository – SDKs for Python, TypeScript, Go, and .NET with documentation and examples
- Microsoft Agent Framework Repository – Framework source, samples, and workflow examples for Python and .NET
- Agentic House – Real-world example combining the Agent Framework with the Copilot SDK for smart home data analysis
- Agent Framework Documentation – Official Microsoft Learn documentation with tutorials and user guides
- Copilot CLI Installation Guide – Setup instructions for the CLI that powers the Copilot SDK
- Copilot SDK Getting Started Guide – Step-by-step tutorial for SDK integration
- Copilot SDK Cookbook – Practical recipes for common tasks across all supported languages

Level up your Python + AI skills with our complete series
We've just wrapped up our live series on Python + AI, a comprehensive nine-part journey diving deep into how to use generative AI models from Python. The series introduced multiple types of models, including LLMs, embedding models, and vision models. We dug into popular techniques like RAG, tool calling, and structured outputs. We assessed AI quality and safety using automated evaluations and red-teaming. Finally, we developed AI agents using popular Python agent frameworks and explored the new Model Context Protocol (MCP).

To help you apply what you've learned, all of our code examples work with GitHub Models, a service that provides free models to every GitHub account holder for experimentation and education. Even if you missed the live series, you can still access all the material using the links below! If you're an instructor, feel free to use the slides and code examples in your own classes. If you're a Spanish speaker, check out the Spanish version of the series.

Python + AI: Large Language Models

📺 Watch recording

In this session, we explore Large Language Models (LLMs), the models that power ChatGPT and GitHub Copilot. We use Python to interact with LLMs using popular packages like the OpenAI SDK and LangChain. We experiment with prompt engineering and few-shot examples to improve outputs. We also demonstrate how to build a full-stack app powered by LLMs and explain the importance of concurrency and streaming for user-facing AI apps.

Slides for this session
Code repository with examples: python-openai-demos

Python + AI: Vector embeddings

📺 Watch recording

In our second session, we dive into a different type of model: the vector embedding model. A vector embedding is a way to encode text or images as an array of floating-point numbers. Vector embeddings enable similarity search across many types of content. In this session, we explore different vector embedding models, such as the OpenAI text-embedding-3 series, through both visualizations and Python code.
We compare distance metrics, use quantization to reduce vector size, and experiment with multimodal embedding models.

Slides for this session
Code repository with examples: vector-embedding-demos

Python + AI: Retrieval Augmented Generation

📺 Watch recording

In our third session, we explore one of the most popular techniques used with LLMs: Retrieval Augmented Generation. RAG is an approach that provides context to the LLM, enabling it to deliver well-grounded answers for a particular domain. The RAG approach works with many types of data sources, including CSVs, webpages, documents, and databases. In this session, we walk through RAG flows in Python, starting with a simple flow and culminating in a full-stack RAG application based on Azure AI Search.

Slides for this session
Code repository with examples: python-openai-demos

Python + AI: Vision models

📺 Watch recording

Our fourth session is all about vision models! Vision models are LLMs that can accept both text and images, such as GPT-4o and GPT-4o mini. You can use these models for image captioning, data extraction, question answering, classification, and more! We use Python to send images to vision models, build a basic chat-with-images app, and create a multimodal search engine.

Slides for this session
Code repository with examples: openai-chat-vision-quickstart

Python + AI: Structured outputs

📺 Watch recording

In our fifth session, we discover how to get LLMs to output structured responses that adhere to a schema. In Python, all you need to do is define a Pydantic BaseModel to get validated output that perfectly meets your needs. We focus on the structured outputs mode available in OpenAI models, but you can use similar techniques with other model providers. Our examples demonstrate the many ways you can use structured responses, such as entity extraction, classification, and agentic workflows.
Slides for this session
Code repository with examples: python-openai-demos

Python + AI: Quality and safety

📺 Watch recording

This session covers a crucial topic: how to use AI safely and how to evaluate the quality of AI outputs. There are multiple mitigation layers when working with LLMs: the model itself, a safety system on top, the prompting and context, and the application user experience. We focus on Azure tools that make it easier to deploy safe AI systems into production. We demonstrate how to configure the Azure AI Content Safety system when working with Azure AI models and how to handle errors in Python code. Then we use the Azure AI Evaluation SDK to evaluate the safety and quality of output from your LLM.

Slides for this session
Code repository with examples: ai-quality-safety-demos

Python + AI: Tool calling

📺 Watch recording

In the seventh session, we focus on the technologies needed to build AI agents, starting with the foundation: tool calling (also known as function calling). We define tool call specifications using both JSON schema and Python function definitions, then send these definitions to the LLM. We demonstrate how to properly handle tool call responses from LLMs, enable parallel tool calling, and iterate over multiple tool calls. Understanding tool calling is absolutely essential before diving into agents, so don't skip this foundational session.

Slides for this session
Code repository with examples: python-openai-demos

Python + AI: Agents

📺 Watch recording

In the penultimate session, we build AI agents! We use Python AI agent frameworks such as the new agent-framework from Microsoft and the popular LangGraph framework. Our agents start simple and then increase in complexity, demonstrating different architectures such as multiple tools, supervisor patterns, graphs, and human-in-the-loop workflows.
Slides for this session
Code repository with examples: python-ai-agent-frameworks-demos

Python + AI: Model Context Protocol

📺 Watch recording

In the final session, we dive into the hottest technology of 2025: MCP (Model Context Protocol). This open protocol makes it easy to extend AI agents and chatbots with custom functionality, making them more powerful and flexible. We demonstrate how to use the Python FastMCP SDK to build an MCP server running locally and consume that server from chatbots like GitHub Copilot. Then we build our own MCP client to consume the server. Finally, we discover how easy it is to connect AI agent frameworks like LangGraph and Microsoft agent-framework to MCP servers. With great power comes great responsibility, so we briefly discuss the security risks that come with MCP, both as a user and as a developer.

Slides for this session
Code repository with examples: python-mcp-demo

Drive demand for your offers with solution area campaigns in a box.
Take your marketing campaigns further with campaigns in a box (CiaBs), collections of partner-ready, high-quality marketing assets designed to deepen customer engagement, simplify your marketing efforts, and grow revenue. Microsoft offers both new and refreshed campaigns for the following solution areas: Data & AI (Azure), Modern Work, Business Applications, Digital & App Innovation (Azure), Infrastructure (Azure), and Security.

Check out the latest CiaBs below and get started today by visiting the Partner Marketing Center, where you'll find resources such as step-by-step execution guides, customizable nurture tactics, and assets including presentations, e-books, infographics, and more.

AI transformation

Generate interest in AI adoption among your customers. As AI technology grabs headlines and captures imaginations, use this campaign to illustrate your audience's unique opportunity to harness the power of AI to deliver value faster. Learn more about the campaign:

- AI Transformation (formerly Era of AI): Show audiences how to take advantage of the potential of AI to drive business value and showcase the usefulness of Microsoft AI solutions delivered and tailored by your organization.

Data & AI (Azure)

Our Data & AI campaigns demonstrate how your customers can win customers with AI-enabled differentiation. Show how they can transform their businesses with generative AI, a unified data estate, and solutions like Microsoft Fabric, Microsoft Power BI, and Azure Databricks. Campaigns include:

- Unify your intelligent data - Banking: Help your banking customers understand how you can help them break down data silos, meet compliance demands, and deliver on rising customer expectations.
- Innovate with the Azure AI platform: Help your customers understand the potential of generative AI solutions to differentiate themselves in the market, and inspire them to build GenAI solutions with the Azure AI platform.
- Unify your intelligent data and analytics platform - ENT: Show enterprise audiences how unifying data and analytics on an open foundation can help streamline data transformation and business intelligence.
- Unify your intelligent data and analytics platform - SMB: Create awareness and urgency for SMBs to adopt Microsoft Fabric as the AI-powered, unified data platform that will suit their analytics needs.

Modern Work

Our Modern Work campaigns help current and potential customers understand how they can effectively transform their businesses with AI capabilities. Campaigns include:

- Connect and empower your frontline workers: Empower your customers' frontline workers with smart, AI-enhanced workflows with solutions based on Microsoft Teams and Microsoft 365 Copilot. Use this campaign to show your customers how they can make their frontline workers feel more connected, leading to improved productivity and efficiency.
- Microsoft 365 Copilot SMB: Increase your audience's understanding of the larger potential of Microsoft 365 Copilot and how AI capabilities can accelerate growth and transform operations.
- Smart workplace with Teams: Use this campaign to show your customers how to use AI to unlock smarter communication and collaboration with Microsoft Teams and Microsoft 365 Copilot. This campaign demonstrates to customers how you can help them seamlessly integrate meetings, calls, chat, and collaboration to break down silos, gain deeper insights, and focus on the work that matters.
- Cloud endpoints: Help customers bring virtualized applications to the cloud by providing secure, AI-powered productivity and development on any device with Microsoft Intune Suite and Windows in the cloud solutions.

Business Applications

Nurture interest with audiences ready to modernize and transform their business operations with these BizApps go-to-market resources.
Campaigns include:

- AI-powered customer service: Highlight how AI-powered solutions like Microsoft Dynamics 365 are transforming customer service with more personalized experiences, smarter teamwork, and improved efficiency.
- Migrate and modernize your ERP with Microsoft Dynamics 365: Position yourself as the right partner to modernize or replace your customers' legacy on-premises ERP systems with a Copilot-powered ERP.
- Business Central for SMB: Offer customers Microsoft Dynamics 365 Business Central, a comprehensive business management solution that connects finance, sales, service, and operations teams with a single application to boost productivity and improve decision-making.
- AI-powered CRM: Help your customers enhance their customer experiences and close more deals with Microsoft Dynamics 365 Sales by making data AI-ready, which empowers them to create effective marketing content with Microsoft 365 Copilot and pass qualified leads on to sales teams. Use this campaign to show audiences how Copilot and AI can supercharge their CRM to increase productivity and efficiency, ultimately leading to better customer outcomes.
- Modernize at scale with AI and Microsoft Power Platform: This campaign is designed to introduce the business value unlocked with Microsoft Power Platform, show how low-code solutions can accelerate development and drive productivity, and position your company as a valuable asset in the deployment of these solutions.

Digital & App Innovation (Azure)

Position yourself as the strategic AI partner of choice and empower your customers to grow their businesses by helping them gain agility and build new AI applications faster with intelligent experiences. Campaigns include:

- Build and modernize AI apps: Help customers building new AI-infused applications and modernizing their application estate take advantage of the Azure AI application platform.
- Accelerate developer productivity: Help customers reimagine the developer experience with the world's most-adopted AI-powered platform. Use this campaign to show customers how you can use Microsoft and GitHub tools to help streamline workflows, collaborate better, and deliver intelligent apps faster.

Infrastructure (Azure)

Help customers tap into the cloud to expand capabilities and boost their return on investment by transforming their digital operations. Campaigns include:

- Modernize VDI to Azure Virtual Desktop - SMB: Show SMB customers how they can meet the challenges of virtual work with Azure Virtual Desktop and gain flexibility, reliability, and cost-effectiveness.
- Migrate VMware workloads to Azure: Help customers capitalize on the partnership between VMware and Microsoft so they can migrate VMware workloads to Azure efficiently and cost-effectively.
- Migrate and secure Windows Server and SQL Server and Linux - ENT: Showcase the high return on investment (ROI) of using an adaptive cloud purpose-built for AI workloads, and help customers understand the value of migrating to Azure at their own pace.
- Modernize SAP on the Microsoft Cloud: Reach SAP customers before the 2027 end-of-support deadline for SAP S/4HANA to show them the importance of having a plan to migrate to the cloud. This campaign also underscores the value of moving to Microsoft Azure in the era of AI.
- Migrate and secure Windows Server and SQL Server and Linux estate - SMB: Use this campaign to increase understanding of the value gained by migrating from an on-premises environment to a hybrid or cloud environment. Show small and medium-sized businesses that they can grow their business, save money, improve security, and more when they move their workloads from Windows Server, SQL Server, and Linux to Microsoft Azure.

Security

Demonstrate the power of modern security solutions and help customers understand the importance of robust cybersecurity in today's landscape.
Campaigns include:

- Defend against cybersecurity threats: Increase your audience's understanding of the powerful, AI-driven Microsoft unified security platform, which integrates Microsoft Sentinel, Microsoft Defender XDR, Security Exposure Management, Security Copilot, and Microsoft Threat Intelligence.
- Data security: Show customers how Microsoft Purview can help fortify data security in a world facing increasing cybersecurity threats.
- Modernize security operations: Use this campaign to sell Microsoft Sentinel, an industry-leading cloud-native SIEM that can help your customers stay protected and scale their security operations.

**Stay up to date on all Skilling announcements by subscribing to the skilling tag!**

Keep pace with the latest in AI!
Check out our fresh new video series, where leaders from Microsoft show their creative side as they share the latest updates in AI in a way that's both educational and fun! Watch the videos, share them with your teams, and bookmark aka.ms/AIOnTheGo to stay updated!

Join us for the Future of Cybersecurity Assessments with Microsoft Security Solutions!
✍ Register Now!

Date: 5 December, 4:00–5:00 PM CET
Location: Online via Cloud Champion

During this session our speakers will dive into our cybersecurity assessments and show how #MicrosoftSecurity solutions can revolutionize the way you safeguard your customers' digital assets, ensuring a proactive and resilient defense against evolving cyber threats. This initiative aims to provide you with a comprehensive view of the risks your organization and customers currently face in their daily operations.

In an era where the digital landscape is evolving rapidly, the necessity for robust #cybersecurity defenses has never been more pronounced. Over the past year, threats to digital peace have eroded trust in technology, underscoring the critical need for enhanced cyber defenses across all levels of operation.

Don't miss this exclusive opportunity to fortify your cybersecurity strategy and gain valuable insights into the tools and techniques that will define the future of digital resilience.

Partner Blog | Build customer trust at scale with role-based certifications
As AI moves from experimentation to production, customers are raising the standard for what "ready" looks like. They want partners who can design and deliver solutions with security and responsible AI in mind, scope work accurately, and execute reliably across geographies and teams. In this environment, certifications are no longer a nice-to-have. They are a signal of capability and a practical way to build consistency at scale.

A December 2025 Forrester Total Economic Impact™ study indicates that participating in Microsoft skilling and enablement initiatives can yield a 217% ROI and $67.9M USD in net present value over three years for a composite partner. For many partners, that maps to what customers are asking for right now: Prove you can deliver. Reduce risk. Do it consistently, even as technology and expectations keep moving.

Continue reading here

Americas & EMEA Fabric Engineering Connection
Excited to share what's ahead for this week's Fabric Engineering Connection sessions: your weekly opportunity to hear directly from the Microsoft Fabric engineering teams and stay ahead of what's coming next in the platform.

🎙️ Featured Topics & Speakers:

🔧 Updates on DBT Job
Abhishek Narain, Principal PM Manager

🤖 Upcoming Capabilities in Fabric Data Agents
Misha Desai, Principal Product Manager
Virginia Roman, Senior Product Manager
Shreyas Canchi Radhakrishna, Product Manager

🌍 Americas & EMEA
📅 Wednesday, February 25
⏰ 8:00–9:00 AM PT

🌏 APAC
📅 Thursday, February 26
⏰ 1:00–2:00 AM UTC (also available Wednesday, February 25, 5:00–6:00 PM PT)

Whether you're deep in deployment, scaling customer workloads, or exploring new Fabric capabilities, these sessions are packed with insights to help you accelerate your practice.

👉 Not yet part of the Fabric Partner Community? Join here: https://lnkd.in/g_PRdfjt

Let's keep learning, building, and shaping the future of Fabric, together. 💡