Choosing the Right Intelligence Layer for Your Application
Introduction

One of the most common questions developers ask when planning AI-powered applications is: "Should I use the GitHub Copilot SDK or the Microsoft Agent Framework?" It's a natural question: both technologies let you add an intelligence layer to your apps, both come from Microsoft's ecosystem, and both deal with AI agents. But they solve fundamentally different problems, and understanding where each excels will save you weeks of architectural missteps.

The short answer is this: the Copilot SDK puts Copilot inside your app, while the Agent Framework lets you build your app out of agents. They're complementary, not competing. In fact, the most interesting applications use both, with the Agent Framework as the system architecture and the Copilot SDK as a powerful execution engine within it.

This article breaks down each technology's purpose, architecture, and ideal use cases. We'll walk through concrete scenarios, examine a real-world project that combines both, and give you a decision framework for your own applications. Whether you're building developer tools, enterprise workflows, or data analysis pipelines, you'll leave with a clear understanding of which tool belongs where in your stack.

The Core Distinction: Embedding Intelligence vs Building With Intelligence

Before comparing features, it helps to understand the fundamental design philosophy behind each technology. They approach the concept of "adding AI to your application" from opposite directions.

The GitHub Copilot SDK exposes the same agentic runtime that powers Copilot CLI as a programmable library. When you use it, you're embedding a production-tested agent, complete with planning, tool invocation, file editing, and command execution, directly into your application. You don't build the orchestration logic yourself. Instead, you delegate tasks to Copilot's agent loop and receive results. Think of it as hiring a highly capable contractor: you describe the job, and the contractor figures out the steps.
The Microsoft Agent Framework is a framework for building, orchestrating, and hosting your own agents. You explicitly model agents, workflows, state, memory, hand-offs, and human-in-the-loop interactions. You control the orchestration, policies, deployment, and observability. Think of it as designing the company that employs those contractors: you define the roles, processes, escalation paths, and quality controls.

This distinction has profound implications for what you build and how you build it.

GitHub Copilot SDK: When Your App Wants Copilot-Style Intelligence

The GitHub Copilot SDK is the right choice when you want to embed agentic behavior into an existing application without building your own planning or orchestration layer. It's optimized for developer workflows and task automation scenarios where you need an AI agent to do things (edit files, run commands, generate code, interact with tools) reliably and quickly.

What You Get Out of the Box

The SDK communicates with the Copilot CLI server via JSON-RPC, managing the CLI process lifecycle automatically. This means your application inherits capabilities that have been battle-tested across millions of Copilot CLI users:

- Planning and execution: The agent analyzes tasks, breaks them into steps, and executes them autonomously
- Built-in tool support: File system operations, Git operations, web requests, and shell command execution work out of the box
- MCP (Model Context Protocol) integration: Connect to any MCP server to extend the agent's capabilities with custom data sources and tools
- Multi-language support: Available as SDKs for Python, TypeScript/Node.js, Go, and .NET
- Custom tool definitions: Define your own tools and constrain which tools the agent can access
- BYOK (Bring Your Own Key): Use your own API keys from OpenAI, Azure AI Foundry, or Anthropic instead of GitHub authentication

Architecture

The SDK's architecture is deliberately simple.
Your application communicates with the Copilot CLI running in server mode:

    Your Application
          ↓
      SDK Client
          ↓  (JSON-RPC)
    Copilot CLI (server mode)

The SDK manages the CLI process lifecycle automatically. You can also connect to an external CLI server if you need more control over the deployment. This simplicity is intentional: it keeps the integration surface small so you can focus on your application logic rather than agent infrastructure.

Ideal Use Cases for the Copilot SDK

The Copilot SDK shines in scenarios where you need a competent agent to execute tasks on behalf of users. These include:

- AI-powered developer tools: IDEs, CLIs, internal developer portals, and code review tools that need to understand, generate, or modify code
- "Do the task for me" agents: Applications where users describe what they want (edit these files, run this analysis, generate a pull request) and the agent handles execution
- Rapid prototyping with agentic behavior: When you need to ship an intelligent feature quickly without building a custom planning or orchestration system
- Internal tools that interact with codebases: Build tools that explore repositories, generate documentation, run migrations, or automate repetitive development tasks

A practical example: imagine building an internal CLI that lets engineers say "set up a new microservice with our standard boilerplate, CI pipeline, and monitoring configuration." The Copilot SDK agent would plan the file creation, scaffold the code, configure the pipeline YAML, and even run initial tests, all without you writing orchestration logic.

Microsoft Agent Framework: When Your App Is the Intelligence System

The Microsoft Agent Framework is the right choice when you need to build a system of agents that collaborate, maintain state, follow business processes, and operate with enterprise-grade governance. It's designed for long-running, multi-agent workflows where you need fine-grained control over every aspect of orchestration.
What You Get Out of the Box

The Agent Framework provides a comprehensive foundation for building sophisticated agent systems in both Python and .NET:

- Graph-based workflows: Connect agents and deterministic functions using data flows with streaming, checkpointing, human-in-the-loop, and time-travel capabilities
- Multi-agent orchestration: Define how agents collaborate, hand off tasks, escalate decisions, and share state
- Durability and checkpoints: Workflows can pause, resume, and recover from failures, essential for business-critical processes
- Human-in-the-loop: Built-in support for approval gates, review steps, and human override points
- Observability: OpenTelemetry integration for distributed tracing, monitoring, and debugging across agent boundaries
- Multiple agent providers: Use Azure OpenAI, OpenAI, and other LLM providers as the intelligence behind your agents
- DevUI: An interactive developer UI for testing, debugging, and visualizing workflow execution

Architecture

The Agent Framework gives you explicit control over the agent topology. You define agents, connect them in workflows, and manage the flow of data between them:

    ┌─────────────┐     ┌──────────────┐     ┌──────────────┐
    │   Agent A   │────▶│   Agent B    │────▶│   Agent C    │
    │  (Planner)  │     │  (Executor)  │     │  (Reviewer)  │
    └─────────────┘     └──────────────┘     └──────────────┘
        Define              Execute              Validate
        strategy            tasks                output

Each agent has its own instructions, tools, memory, and state. The framework manages communication between agents, handles failures, and provides visibility into what's happening at every step. This explicitness is what makes it suitable for enterprise applications where auditability and control are non-negotiable.

Ideal Use Cases for the Agent Framework

The Agent Framework excels in scenarios where you need a system of coordinated agents operating under business rules.
These include:

- Multi-agent business workflows: Customer support pipelines, research workflows, operational processes, and data transformation pipelines where different agents handle different responsibilities
- Systems requiring durability: Workflows that run for hours or days, need checkpoints, can survive restarts, and maintain state across sessions
- Governance-heavy applications: Processes requiring approval gates, audit trails, role-based access, and compliance documentation
- Agent collaboration patterns: Applications where agents need to negotiate, escalate, debate, or refine outputs iteratively before producing a final result
- Enterprise data pipelines: Complex data processing workflows where AI agents analyze, transform, and validate data through multiple stages

A practical example: an enterprise customer support system where a triage agent classifies incoming tickets, a research agent gathers relevant documentation and past solutions, a response agent drafts replies, and a quality agent reviews responses before they reach the customer, with a human escalation path when confidence is low.

Side-by-Side Comparison

To make the distinction concrete, here's how the two technologies compare across key dimensions that matter when choosing an intelligence layer for your application.
| Dimension         | GitHub Copilot SDK                                        | Microsoft Agent Framework                                      |
|-------------------|-----------------------------------------------------------|----------------------------------------------------------------|
| Primary purpose   | Embed Copilot's agent runtime into your app               | Build and orchestrate your own agent systems                   |
| Orchestration     | Handled by Copilot's agent loop; you delegate             | You define it explicitly: agents, workflows, state, hand-offs  |
| Agent count       | Typically a single agent per session                      | Multi-agent systems with agent-to-agent communication          |
| State management  | Session-scoped, managed by the SDK                        | Durable state with checkpointing, time-travel, persistence     |
| Human-in-the-loop | Basic; user confirms actions                              | Rich approval gates, review steps, escalation paths            |
| Observability     | Session logs and tool call traces                         | Full OpenTelemetry, distributed tracing, DevUI                 |
| Best for          | Developer tools, task automation, code-centric workflows  | Enterprise workflows, multi-agent systems, business processes  |
| Languages         | Python, TypeScript, Go, .NET                              | Python, .NET                                                   |
| Learning curve    | Low: install, configure, delegate tasks                   | Moderate: design agents, workflows, state, and policies        |
| Maturity          | Technical Preview                                         | Preview with active development; 7k+ stars, 100+ contributors  |

Real-World Example: Both Working Together

The most compelling applications don't choose between these technologies; they combine them. A perfect demonstration of this complementary relationship is the Agentic House project by my colleague Anthony Shaw, which uses an Agent Framework workflow to orchestrate three agents, one of which is powered by the GitHub Copilot SDK.

The Problem

Agentic House lets users ask natural language questions about their Home Assistant smart home data. Questions like "what time of day is my phone normally fully charged?" or "is there a correlation between when the back door is open and the temperature in my office?" require exploring available data, writing analysis code, and producing visual results: a multi-step process that no single agent can handle well alone.
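One reason a single agent struggles with this kind of task is that code generation and safety review pull in opposite directions, which is why a separate review step pays off. As a purely illustrative, framework-free sketch (not code from the Agentic House project, and not either SDK's API), a review stage might statically reject generated analysis code that imports modules a read-only notebook should never touch:

```python
import ast

# Modules a read-only analysis notebook should never import.
# This denylist is illustrative, not any project's actual policy.
FORBIDDEN_MODULES = {"os", "subprocess", "socket", "requests", "urllib"}

def review_code(source: str) -> tuple[bool, list[str]]:
    """Statically inspect generated code and flag disallowed imports.

    Returns (approved, reasons). A real reviewer agent would combine
    a static check like this with LLM-based review of intent.
    """
    reasons = []
    try:
        tree = ast.parse(source)
    except SyntaxError as exc:
        return False, [f"unparseable code: {exc}"]
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                if alias.name.split(".")[0] in FORBIDDEN_MODULES:
                    reasons.append(f"forbidden import: {alias.name}")
        elif isinstance(node, ast.ImportFrom):
            if (node.module or "").split(".")[0] in FORBIDDEN_MODULES:
                reasons.append(f"forbidden import: {node.module}")
    return (not reasons), reasons

approved, why = review_code("import requests\nrequests.get('http://x')")
print(approved, why)
```

A check in this spirit is deterministic and cheap to run on every generated notebook, which is exactly the kind of non-LLM gate a pipeline can place between a coder stage and execution.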
The Architecture

The project implements a three-agent pipeline using the Agent Framework for orchestration:

    ┌─────────────┐     ┌──────────────┐     ┌──────────────┐
    │   Planner   │────▶│    Coder     │────▶│   Reviewer   │
    │  (GPT-4.1)  │     │  (Copilot)   │     │  (GPT-4.1)   │
    └─────────────┘     └──────────────┘     └──────────────┘
        Plan                Notebook            Approve/
        analysis            generation          Reject

Planner Agent: Takes a natural language question and creates a structured analysis plan: which Home Assistant entities to query, what visualizations to create, what hypotheses to test. This agent uses GPT-4.1 through Azure AI Foundry or GitHub Models.

Coder Agent: Uses the GitHub Copilot SDK to generate a complete Jupyter notebook that fetches data from the Home Assistant REST API via MCP, performs the analysis, and creates visualizations. The Copilot agent is constrained to only use specific tools, demonstrating how the SDK supports tool restriction.

Reviewer Agent: Acts as a security gatekeeper, reviewing the generated notebook to ensure it only reads and displays data. It rejects notebooks that attempt to modify Home Assistant state, import dangerous modules, make external network requests, or contain obfuscated code.

Why This Architecture Works

This design demonstrates several principles about when to use which technology:

- Agent Framework provides the workflow: The sequential pipeline with planning, execution, and review is a classic Agent Framework pattern. Each agent has a clear role, and the framework manages the flow between them.
- Copilot SDK provides the coding execution: The Coder agent leverages Copilot's battle-tested ability to generate code, work with files, and use MCP tools. Building a custom code generation agent from scratch would take significantly longer and produce less reliable results.
- Tool constraints demonstrate responsible AI: The Copilot SDK agent is constrained to specific tools, showing how you can embed powerful agentic behavior while maintaining security boundaries.
- Standalone agents handle planning and review: The Planner and Reviewer use simpler LLM-based agents; they don't need Copilot's code execution capabilities, just good reasoning.

While the Home Assistant data is a fun demonstration, the pattern is designed for something much more significant: applying AI agents to complex research against private data sources. The same architecture could analyze internal databases, proprietary datasets, or sensitive business metrics.

Decision Framework: Which Should You Use?

When deciding between the Copilot SDK and the Agent Framework, or both, consider these questions about your application.

Start with the Copilot SDK if:

- You need a single agent to execute tasks autonomously (code generation, file editing, command execution)
- Your application is developer-facing or code-centric
- You want to ship agentic features quickly without building orchestration infrastructure
- The tasks are session-scoped: they start and complete within a single interaction
- You want to leverage Copilot's existing tool ecosystem and MCP integration

Start with the Agent Framework if:

- You need multiple agents collaborating with different roles and responsibilities
- Your workflows are long-running, require checkpoints, or need to survive restarts
- You need human-in-the-loop approvals, escalation paths, or governance controls
- Observability and auditability are requirements (regulated industries, enterprise compliance)
- You're building a platform where the agents themselves are the product

Use both together if:

- You need a multi-agent workflow where at least one agent requires strong code execution capabilities
- You want Agent Framework's orchestration with Copilot's battle-tested agent runtime as one of the execution engines
- Your system involves planning, coding, and review stages that benefit from different agent architectures
- You're building research or analysis tools that combine AI reasoning with code generation

Getting Started

Both technologies are straightforward
to install and start experimenting with. Here's how to get each running in minutes.

GitHub Copilot SDK Quick Start

Install the SDK for your preferred language:

    # Python
    pip install github-copilot-sdk

    # TypeScript / Node.js
    npm install @github/copilot-sdk

    # .NET
    dotnet add package GitHub.Copilot.SDK

    # Go
    go get github.com/github/copilot-sdk/go

The SDK requires the Copilot CLI to be installed and authenticated. Follow the Copilot CLI installation guide to set that up. A GitHub Copilot subscription is required for standard usage, though BYOK mode allows you to use your own API keys without GitHub authentication.

Microsoft Agent Framework Quick Start

Install the framework:

    # Python
    pip install agent-framework --pre

    # .NET
    dotnet add package Microsoft.Agents.AI

The Agent Framework supports multiple LLM providers including Azure OpenAI and OpenAI directly. Check the quick start tutorial for a complete walkthrough of building your first agent.

Try the Combined Approach

To see both technologies working together, clone the Agentic House project:

    git clone https://github.com/tonybaloney/agentic-house.git
    cd agentic-house
    uv sync

You'll need a Home Assistant instance, the Copilot CLI authenticated, and either a GitHub token or Azure AI Foundry endpoint. The project's README walks through the full setup, and the architecture provides an excellent template for building your own multi-agent systems with embedded Copilot capabilities.
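To see the shape of the combined approach in plain code, here is a minimal, framework-free sketch of a sequential, gated pipeline. Everything in it is hypothetical scaffolding, not the Agentic House implementation and not either SDK's API: `plan` and `review` stand in for LLM-backed agents, and `generate_notebook` stands in for a Copilot-SDK-backed coder agent. The point is only the control flow that an Agent Framework workflow would make explicit (and durable):

```python
from dataclasses import dataclass, field

@dataclass
class WorkflowState:
    """Shared state passed along the pipeline; a real Agent Framework
    workflow would also checkpoint state like this for durability."""
    question: str
    plan: list[str] = field(default_factory=list)
    notebook: str = ""
    approved: bool = False
    attempts: int = 0

def plan(state: WorkflowState) -> WorkflowState:
    # Stand-in for an LLM planner agent producing a structured plan.
    state.plan = [f"query data relevant to: {state.question}",
                  "visualize results"]
    return state

def generate_notebook(state: WorkflowState) -> WorkflowState:
    # Stand-in for the coder agent; here it just renders the plan as cells.
    state.notebook = "\n".join(f"# step: {step}" for step in state.plan)
    state.attempts += 1
    return state

def review(state: WorkflowState) -> WorkflowState:
    # Stand-in for the reviewer agent: approve read-only notebooks only.
    state.approved = "delete" not in state.notebook
    return state

def run_pipeline(question: str, max_attempts: int = 3) -> WorkflowState:
    # Sequential plan -> code -> review loop with a rejection/retry gate.
    state = plan(WorkflowState(question=question))
    while state.attempts < max_attempts:
        state = review(generate_notebook(state))
        if state.approved:
            break
    return state

result = run_pipeline("what time of day is my phone normally fully charged?")
print(result.approved, result.attempts)
```

Swapping the stub functions for real agents is where the two technologies slot in: the framework owns `run_pipeline` and `WorkflowState`, while an embedded agent runtime does the heavy lifting inside a single stage.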
Key Takeaways

- Copilot SDK = "Put Copilot inside my app": Embed a production-tested agentic runtime with planning, tool execution, file edits, and MCP support directly into your application
- Agent Framework = "Build my app out of agents": Design, orchestrate, and host multi-agent systems with explicit workflows, durable state, and enterprise governance
- They're complementary, not competing: The Copilot SDK can act as a powerful execution engine inside Agent Framework workflows, as demonstrated by the Agentic House project
- Choose based on your orchestration needs: If you need one agent executing tasks, start with the Copilot SDK. If you need coordinated agents with business logic, start with the Agent Framework
- The real power is in combination: The most sophisticated applications use the Agent Framework for workflow orchestration and the Copilot SDK for high-leverage task execution within those workflows

Conclusion and Next Steps

The question isn't really "Copilot SDK or Agent Framework?" It's "where does each fit in my architecture?" Understanding this distinction unlocks a powerful design pattern: use the Agent Framework to model your business processes as agent workflows, and use the Copilot SDK wherever you need a highly capable agent that can plan, code, and execute autonomously.

Start by identifying your application's needs. If you're building a developer tool that needs to understand and modify code, the Copilot SDK gets you there fast. If you're building an enterprise system where multiple AI agents need to collaborate under governance constraints, the Agent Framework provides the architecture. And if you need both, as most ambitious applications do, now you know how they fit together.

The AI development ecosystem is moving rapidly. Both technologies are in active development with growing communities and expanding capabilities.
The architectural patterns you learn today (embedding intelligent agents, orchestrating multi-agent workflows, combining execution engines with orchestration frameworks) will remain valuable regardless of how the specific tools evolve.

Resources

- GitHub Copilot SDK Repository – SDKs for Python, TypeScript, Go, and .NET with documentation and examples
- Microsoft Agent Framework Repository – Framework source, samples, and workflow examples for Python and .NET
- Agentic House – Real-world example combining Agent Framework with Copilot SDK for smart home data analysis
- Agent Framework Documentation – Official Microsoft Learn documentation with tutorials and user guides
- Copilot CLI Installation Guide – Setup instructions for the CLI that powers the Copilot SDK
- Copilot SDK Getting Started Guide – Step-by-step tutorial for SDK integration
- Copilot SDK Cookbook – Practical recipes for common tasks across all supported languages

Join a Partner Project Ready Workshop to turn your expertise into impact
With a new year just beginning, now is the time to build the skills that power key roles across your organization. Make skilling a priority to turn expertise into real business impact across AI, cloud, and security skill sets.

Check out the upcoming Project Ready Workshops for AI Business Solutions, Cloud & AI Platforms, and Security. These role-based sessions focus on real-world delivery scenarios and empower teams to implement solutions with confidence.

Check out the Partner Skilling Hub.

APAC Fabric Engineering Connection Call
The Fabric partner ecosystem is buzzing right now — and 2026 is already raising the bar. 🚀 On this week's Fabric Engineering Connection call, Tamer Farag will share what's next for partners across skilling, demos, FabCon + SQLCon Atlanta, and more. Highlights include:

🎓 More skilling momentum (DP‑600/DP‑700 vouchers, new Partner Project Ready workshops, and a new "Chat with your Data in a Day" xIAD workshop).
🔦 Fabric Certification Spotlight: partners who reach 100+ Fabric certifications will be recognized live in Arun's keynote at FabCon + SQLCon.
🤝 New ways to tell your story and win with customers through Fabric Demo eXperiences, Fabric Featured Partners + case studies, and FabCon experiences (Partner Elevator Pitch Search, 1:1 + executive meetings, testimonial videos, the Partner Social Sprint, and more).

If you're a Microsoft partner investing in Fabric, we'd love for you to join our next Fabric Engineering Connection call:

📅 Americas/EMEA – Wednesday, Feb 4, 8–9 AM PT
📅 APAC – Thursday, Feb 5, 1–2 AM UTC (Wednesday, Feb 4, 5–6 PM PT)

To join, you must be a member of the Fabric Partner Community in Teams: https://aka.ms/JoinFabricPartnerCommunity

Americas & EMEA Fabric Engineering Connection
The Fabric partner ecosystem is buzzing right now — and 2026 is already raising the bar. 🚀 On this week's Fabric Engineering Connection call, Tamer Farag will share what's next for partners across skilling, demos, FabCon + SQLCon Atlanta, and more. Highlights include:

🎓 More skilling momentum (DP‑600/DP‑700 vouchers, new Partner Project Ready workshops, and a new "Chat with your Data in a Day" xIAD workshop).
🔦 Fabric Certification Spotlight: partners who reach 100+ Fabric certifications will be recognized live in Arun's keynote at FabCon + SQLCon.
🤝 New ways to tell your story and win with customers through Fabric Demo eXperiences, Fabric Featured Partners + case studies, and FabCon experiences (Partner Elevator Pitch Search, 1:1 + executive meetings, testimonial videos, the Partner Social Sprint, and more).

If you're a Microsoft partner investing in Fabric, we'd love for you to join our next Fabric Engineering Connection call:

📅 Americas/EMEA – Wednesday, Feb 4, 8–9 AM PT
📅 APAC – Thursday, Feb 5, 1–2 AM UTC (Wednesday, Feb 4, 5–6 PM PT)

To join, you must be a member of the Fabric Partner Community in Teams: https://aka.ms/JoinFabricPartnerCommunity

Partner Blog | January 2026 skilling kickoff: Turn readiness into growth
A new year is a natural planning moment. Partners are navigating various pressures and opportunities, including fast-moving AI, shifting cloud workloads, rising security expectations, and data becoming a bigger part of every solution motion. In that environment, skilling can't be treated as an optional add-on. It's a core business priority that supports what partners care about most: winning work and delivering it with confidence.

That's the lens I'd encourage you to bring into 2026 planning. Not "what courses should we take," but "what technical and sales capabilities do we need to build so we can execute more consistently across sales and delivery."

How data makes the case for readiness and enablement

In December 2025, Forrester Consulting published a Total Economic Impact study, commissioned by Microsoft, on the partner opportunity for the Microsoft skilling and enablement offerings. The study modeled a composite organization based on interviews with partners who experienced the offerings.

Continue reading here

Elevate teaching and learning with AI-powered experiences on Surface Copilot+ PCs
Surface is a premium endpoint, designed and built by Microsoft to run Microsoft technology. When it comes to the classroom, Surface Copilot+ PCs bring the best of Microsoft—hardware, Windows, Microsoft 365 1, and Microsoft 365 Copilot 1—into one teaching device, delivering intelligent experiences more securely on‑device and in the cloud. For educators, this means every lesson and interaction is powered by a device purpose-built for teaching and learning in the digital era.

Why Surface for Education?

Surface Copilot+ PCs, combined with Windows 11, give teachers a powerful platform designed to simplify teaching and elevate learning. With lightning-fast performance, educators can create engaging lessons, generate content, and personalize instruction. Windows 11 features like Snap Layouts, Click to Do 2, and Copilot Voice streamline multitasking and lesson prep, while intuitive touch, pen 3, and voice input make teaching feel natural. Together, Surface and Windows 11 help deliver a more secure, AI-supported solution that can save time, support creativity, and help teachers focus on what matters most—students.

Engage students with AI-enhanced learning

Picture a classroom where every student is actively engaged, their curiosity sparked by lesson plans and quizzes thoughtfully designed by teachers with the help of AI. Microsoft Learning Zone, included with all Microsoft Education licenses at no extra cost 4, is an AI-powered learning app for Windows 11 designed to help educators create engaging lessons. The app's AI-powered lesson creation feature is designed specifically for Copilot+ PCs. Thanks to the Copilot+ PC's built-in Neural Processing Unit (NPU), these devices offer fast, reliable performance by running AI models directly on the device and combining them with cloud-based capabilities when needed. This hybrid approach helps Microsoft Learning Zone generate lessons quickly and keeps the experience smooth, secure, and ready for classroom use.
It also helps educators streamline lesson planning with interactive activities, instant feedback, and personalized learning pathways. For example, teachers can use Microsoft Learning Zone to develop Kahoot! quizzes for a whole class or to prepare individualized learning experiences like personalized practice questions based on recent student performance, while keeping student data more secure and private.

Surface Copilot+ PCs can also help teachers use AI in context to streamline lesson prep and administrative tasks, so they can spend less time on administration and more time inspiring students. For example, teachers can engage with Copilot in a single click using the Copilot key 5 on a Surface keyboard or by saying "Hey Copilot" out loud. And using pen, touch, and voice commands in conjunction with Teach in Microsoft 365 Copilot 6 running on Surface, teachers have a central hub for generating lesson plans, quizzes, rubrics, flashcards, feedback, and more. They can transform ideas and research into engaging lectures in moments, tailor instruction to meet the needs of every learner, and connect with colleagues to share best practices.

Recently, we also announced the Microsoft Elevate for Educators skilling program along with more AI-powered experiences purpose-built for education, including the Study and Learn agents in Microsoft 365 Copilot, and Study Guide. Microsoft Elevate helps equip educators with the AI skills they need for the classroom of the future. Delivered through platforms like Microsoft Learn and Minecraft Education, these flexible learning paths ensure that educators can build AI fluency at their own pace, whether they're just beginning their journey or advancing to specialized applications. Read more about Microsoft Elevate and other AI tools for education here.
In classrooms where both teachers and students are using Surface Copilot+ PCs, Live Captions 7 with on-device automatic translation can help make spoken content accessible to all students, including students with hearing impairments. The NPU transcribes and translates audio in real time, supporting 40+ languages into English, all processed on-device rather than in the cloud.

Ideally, with Surface Copilot+ PCs and Microsoft's AI-powered tools, technology fades into the background in the classroom, helping teacher creativity and connection take center stage and enabling student learning to be more dynamic, inclusive, and impactful.

Empower the classroom of the future, today

Surface Copilot+ PCs, purpose-built by Microsoft, are designed to be the foundation for today's classrooms and the launchpad for tomorrow's AI innovations. With Windows evolving as the canvas for intelligent AI and agents, Surface devices and Windows together form an essential, AI-assisted platform for educators. Now, every teacher can activate Copilot agents 8 directly on their Surface device to act as a digital teaching partner. These agents can adapt to each teacher's unique style, streamline daily routines, and unlock new possibilities for student learning. Building agents is simple: teachers or IT can use ready-made templates or create custom agents using natural language, all within the familiar Surface and Windows environment.

Surface's intuitive hardware—touchscreen, pen, voice, and the dedicated Copilot key—makes accessing AI support effortless. Teachers can get immediate answers to classroom questions, troubleshoot tech issues, or navigate school resources using natural input methods. By combining Copilot's intelligent capabilities with Surface's secure hardware, educators can gain a more personalized, efficient, and protected teaching experience ready for the future of learning.
In addition, Surface Copilot+ PCs help support the full range of learning needs by delivering the performance and experience required for the education tools students and educators depend on every day. Surface devices are designed to work with common education apps like TestNav for assessments, Minecraft Education for STEM, Adobe Express for creativity, and assistive technologies such as JAWS. And, going forward, the built-in NPU on Copilot+ PCs like Surface enables Microsoft and other educational software providers to develop innovative new AI experiences that can run on the device, in the cloud, or both.

Boost productivity and collaboration

In the classroom, Surface Copilot+ PCs can become the teaching command center: always ready, always responsive. With a simple voice command, Copilot Voice and improved Windows Search 9 instantly pull up lesson plans, student materials, or answers to unexpected questions, freeing instructors from frantic searches and giving back precious prep time.

Collaborating on Surface is intuitive and efficient. Teachers can quickly save lesson materials in Teams or OneDrive 1 and share them with students for interactive feedback. Whether teachers are leading a lively discussion in person, connecting with students remotely, or conferencing with colleagues via video, Windows Studio Effects on Copilot+ PCs ensure they're always seen and heard clearly. Subtle features like background blur, eye contact, and automatic framing help maintain a professional presence, so the focus stays on interacting with students or other teachers—not on the tech.

And Surface Copilot+ PCs are designed to empower mobility in and outside the classroom. With extended battery life and lightweight devices, teachers are no longer tethered to a desk or a charger. They can move about freely, interact with students, project and present seamlessly, and focus on teaching.
For example, on a Surface Pro with Surface Pen inking, Dual Studio Mics, and natural language prompts in Copilot, teachers can annotate readings, capture ideas by voice, and generate lesson materials on the fly—without breaking the flow of instruction.

Throughout the day, Surface Copilot+ PCs can also help teachers anticipate what's next. Context-aware Windows Search doesn't just find files—it suggests smart next steps, like opening a document in Word or sharing it with a colleague, streamlining workflows. When inspiration for a lesson strikes, Click to Do lets teachers quickly summarize, explain, or create new content on the fly, helping them build engaging lectures and materials in real time. Surface's vibrant touchscreen and Snap Layouts can help keep resources organized and visible, supporting a productive work experience.

And, most importantly, Surface devices help safeguard faculty, staff, students, and sensitive school data with advanced security and remote management features 1. As innovations advance in AI, Microsoft and Surface provide built-in protection at every layer—hardware, firmware, operating system, cloud, software applications, and identity.

Surface Copilot+ PCs are more than just devices—they're partners in teaching, learning, and innovation. By combining Microsoft's advanced hardware, intuitive software, and powerful AI, Surface empowers educators to engage students, boost productivity, and modernize their classrooms, all while keeping data more secure and private. Visit Surface.com/Business to learn more. Students, parents and educators can save up to 10% on select Surface devices and more at the Microsoft Store. 10

Disclaimers:

1. Sold separately. Software license required for some features.
2. Click to Do: Copilot+ PC feature. Image actions now available across devices; other actions vary by device, region, language, and character sets. Subscription required for some actions. Learn more.
3. Surface Pen sold separately.
- Surface Slim Pen (2nd Edition) experiences and compatibility vary by which Surface device you are using it with. Visit Surface Slim Pen Compatibility to learn more.
- Microsoft Learning Zone requires a Microsoft 365 Education Essentials, Core (A3), or Advanced (A5) license. Microsoft Education licenses.
- Copilot key feature availability varies by market, see aka.ms/keysupport.
- This feature is only available to Faculty/Staff with a Microsoft 365 for Education license. Teach in the Microsoft 365 Copilot App.
- Copilot+ PC feature. Live Captions translates video and audio subtitles into English from 40+ languages and from 25+ languages into Chinese (Simplified). See Copilot+ PC FAQs.
- Copilot with commercial data protection is available at no additional cost for users with an Entra ID with an enabled, eligible Microsoft 365 license. Copilot for Microsoft 365 sold separately and requires a qualifying volume license or subscription - Microsoft Copilot for Microsoft 365 | Microsoft 365. Minimum age limits apply to use of Copilot and certain AI features. Details.
- Copilot+ PC feature. Improved Windows search works with specific text, image, and document formats only; optimized for select languages (English, Chinese (Simplified), French, German, Japanese, and Spanish). See Copilot+ PC FAQs.
- Microsoft Store Education discount is available to K-12 and higher education students, faculty and parents. Education discount only valid on select products, and may not be combinable with other offers. See terms and conditions at Education & Student Discounts on Laptops, Microsoft 365, Windows, Surface | Microsoft Store

Partner Blog | What's new for Microsoft partners: January 2026 edition
Your voice continues to shape how the Microsoft AI Cloud Partner Program evolves. In this update, we’re bringing together the announcements, investments, and resources that matter most to partners right now. From the innovations unveiled at Microsoft Ignite to updates across benefits, Azure, AI business solutions, Marketplace, security, and skilling, this edition is designed to build capability, accelerate execution, and drive meaningful customer impact as AI becomes central to every workload. What follows highlights our focus and areas of partner opportunity, grounded in feedback from across the ecosystem and aligned to how customers are evolving their cloud and AI strategies.

Spotlight: Microsoft Ignite

Microsoft Ignite marked an important evolution for the Microsoft AI Cloud Partner Program, introducing new investments for partners to lead as Frontier Firms, translating their own AI experience into customer impact. Read Nicole Dezen’s blog for a partner-focused recap of Microsoft Ignite 2025, bringing together the announcements that matter most across AI, cloud platforms, security, and developer tools.

Continue reading here

Building a Multi-Agent System with Azure AI Agent Service: Campus Event Management
Personal Background

My name is Peace Silly. I studied French and Spanish at the University of Oxford, where I developed a strong interest in how language is structured and interpreted. That curiosity about syntax and meaning eventually led me to computer science, which I came to see as another language built on logic and structure. In the academic year 2024–2025, I completed the MSc Computer Science at University College London, where I developed this project as part of my Master’s thesis.

Project Introduction

Can large-scale event management be handled through a simple chat interface? This was the question that guided my Master’s thesis project at UCL. As part of the Industry Exchange Network (IXN) and in collaboration with Microsoft, I set out to explore how conversational interfaces and autonomous AI agents could simplify one of the most underestimated coordination challenges in campus life: managing events across multiple departments, societies, and facilities.

At large universities, event management is rarely straightforward. Rooms are shared between academic timetables, student societies, and one-off events. A single lecture theatre might host a departmental seminar in the morning, a society meeting in the afternoon, and a careers talk in the evening, each relying on different systems, staff, and communication chains. Double bookings, last-minute cancellations, and maintenance issues are common, and coordinating changes often means long email threads, manual spreadsheets, and frustrated users.

These inefficiencies do more than waste time; they directly affect how a campus functions day to day. When venues are unavailable or notifications fail to reach the right people, even small scheduling errors can ripple across entire departments. A smarter, more adaptive approach was needed, one that could manage complex workflows autonomously while remaining intuitive and human for end users.
The result was the Event Management Multi-Agent System, a cloud-based platform where staff and students can query events, book rooms, and reschedule activities simply by chatting. Behind the scenes, a network of Azure-powered AI agents collaborates to handle scheduling, communication, and maintenance in real time, working together to keep the campus running smoothly. The user scenario shown in the figure below exemplifies the vision that guided the development of this multi-agent system.

Starting with Microsoft Learning Resources

I began my journey with Microsoft’s tutorial Build Your First Agent with Azure AI Foundry, which introduced the fundamentals of the Azure AI Agent Service and provided an ideal foundation for experimentation. Within a few weeks, using the Azure Foundry environment, I extended those foundations into a fully functional multi-agent system.

Azure Foundry’s visual interface was an invaluable learning space. It allowed me to deploy, test, and adjust model parameters such as temperature, system prompts, and function calling while observing how each change influenced the agents’ reasoning and collaboration. Through these experiments, I developed a strong conceptual understanding of orchestration and coordination before moving to the command line for more complex development later.

When development issues inevitably arose, I relied on the Discord support community and the GitHub forum for troubleshooting. These communities were instrumental in addressing configuration issues and providing practical examples, ensuring that each agent performed reliably within the shared-thread framework. This early engagement with Microsoft’s learning materials not only accelerated my technical progress but also shaped how I approached experimentation, debugging, and iteration. It transformed a steep learning curve into a structured, hands-on process that mirrored professional software development practice.
A Decentralised Team of AI Agents

The system’s intelligence is distributed across three specialised agents, powered by OpenAI’s GPT-4.1 models through Azure OpenAI Service. They each perform a distinct role within the event management workflow:

- Scheduling Agent – interprets natural language requests, checks room availability, and allocates suitable venues.
- Communications Agent – notifies stakeholders when events are booked, modified, or cancelled.
- Maintenance Agent – monitors room readiness, posts fault reports when venues become unavailable, and triggers rescheduling when needed.

Each agent operates independently but communicates through a shared thread, a transparent message log that serves as the coordination backbone. This thread acts as a persistent state space where agents post updates, react to changes, and maintain a record of every decision. For example, when a maintenance fault is detected, the Maintenance Agent logs the issue, the Scheduling Agent identifies an alternative venue, and the Communications Agent automatically notifies attendees. These interactions happen autonomously, with each agent responding to the evolving context recorded in the shared thread.

Interfaces and Backend

The system was designed with both developer-focused and user-facing interfaces, supporting rapid iteration and intuitive interaction.

The Terminal Interface

Initially, the agents were deployed and tested through a terminal interface, which provided a controlled environment for debugging and verifying logic step by step. This setup allowed quick testing of individual agents and observation of their interactions within the shared thread.

The Chat Interface

As the project evolved, I introduced a lightweight chat interface to make the system accessible to staff and students. This interface allows users to book rooms, query events, and reschedule activities using plain language.
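The shared-thread coordination described above can be sketched as a small, self-contained simulation. Everything here is illustrative: the class names, the event vocabulary, and the in-memory message list are assumptions for the sketch rather than the actual Azure AI Agent Service API, and simple rule-based reactions stand in for the LLM-driven reasoning of the real agents.

```python
from dataclasses import dataclass, field

@dataclass
class SharedThread:
    """A persistent message log that all agents read from and post to."""
    messages: list = field(default_factory=list)
    subscribers: list = field(default_factory=list)

    def post(self, sender: str, event: str, payload: dict) -> None:
        message = {"sender": sender, "event": event, "payload": payload}
        self.messages.append(message)      # keep a record of every decision
        for agent in self.subscribers:     # fan out to the other agents
            if agent.name != sender:
                agent.on_message(message)

class SchedulingAgent:
    name = "scheduling"
    def __init__(self, thread: SharedThread, rooms: list):
        self.thread, self.rooms = thread, rooms
        thread.subscribers.append(self)
    def on_message(self, message: dict) -> None:
        # When a fault is reported, move the booking to another room.
        if message["event"] == "fault_reported":
            booking = message["payload"]
            free_room = next(r for r in self.rooms if r != booking["room"])
            self.thread.post(self.name, "event_rescheduled",
                             {**booking, "room": free_room})

class CommunicationsAgent:
    name = "communications"
    def __init__(self, thread: SharedThread):
        self.thread, self.sent = thread, []
        thread.subscribers.append(self)
    def on_message(self, message: dict) -> None:
        # Notify attendees whenever a booking changes.
        if message["event"] == "event_rescheduled":
            payload = message["payload"]
            self.sent.append(f"{payload['title']} moved to {payload['room']}")

class MaintenanceAgent:
    name = "maintenance"
    def __init__(self, thread: SharedThread):
        self.thread = thread
        thread.subscribers.append(self)
    def on_message(self, message: dict) -> None:
        pass  # in this sketch, maintenance only posts faults
    def report_fault(self, booking: dict) -> None:
        self.thread.post(self.name, "fault_reported", booking)

thread = SharedThread()
scheduler = SchedulingAgent(thread, rooms=["Engineering Auditorium", "Room 101"])
comms = CommunicationsAgent(thread)
maintenance = MaintenanceAgent(thread)

# A detected fault triggers the full chain: reschedule, then notify.
maintenance.report_fault({"title": "Robotics demo", "room": "Engineering Auditorium"})
```

After `report_fault` runs, the thread holds both the fault report and the reschedule decision, and `comms.sent` contains a notification for the new venue, mirroring the autonomous chain of reactions described above.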
Recognising that some users might still want to see what happens behind the scenes, I added an optional toggle that reveals the intermediate steps of agent reasoning. This transparency feature proved valuable for debugging and for more technical users who wanted to understand how the agents collaborated.

When a user interacts with the chat interface, they are effectively communicating with the Scheduling Agent, which acts as the primary entry point. The Scheduling Agent interprets natural-language commands such as “Book the Engineering Auditorium for Friday at 2 PM” or “Reschedule the robotics demo to another room.” It then coordinates with the Maintenance and Communications Agents to complete the process.

Behind the scenes, the chat interface connects to a FastAPI backend responsible for core logic and data access. A Flask + HTMX layer handles lightweight rendering and interactivity, while the Azure AI Agent Service manages orchestration and shared-thread coordination. This combination enables seamless agent communication and reliable task execution without exposing any of the underlying complexity to the end user.

Automated Notifications and Fault Detection

Once an event is scheduled, the Scheduling Agent posts the confirmation to the shared thread. The Communications Agent, which subscribes to thread updates, automatically sends notifications to all relevant stakeholders by email. This ensures that every participant stays informed without any manual follow-up.

The Maintenance Agent runs routine availability checks. If a fault is detected, it logs the issue to the shared thread, prompting the Scheduling Agent to find an alternative room. The Communications Agent then notifies attendees of the change, ensuring minimal disruption to ongoing events.

Testing and Evaluation

The system underwent several layers of testing to validate both functional and non-functional requirements.
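As a concrete illustration of the kind of logic those tests exercise, here is a simplified room-allocation conflict check. The function names and the in-memory booking records are assumptions made for this sketch; the real backend performs the equivalent check against its database.

```python
from datetime import datetime

def overlaps(start_a: datetime, end_a: datetime,
             start_b: datetime, end_b: datetime) -> bool:
    """Two bookings conflict when their time ranges intersect."""
    return start_a < end_b and start_b < end_a

def find_conflict(bookings: list, room: str,
                  start: datetime, end: datetime):
    """Return the first booking that clashes with the requested slot, if any."""
    for booking in bookings:
        if booking["room"] == room and overlaps(
            booking["start"], booking["end"], start, end
        ):
            return booking
    return None

bookings = [
    {"title": "Departmental seminar",
     "room": "Engineering Auditorium",
     "start": datetime(2025, 3, 7, 14, 0),
     "end": datetime(2025, 3, 7, 16, 0)},
]

# Requesting the same room from 15:00 clashes with the 14:00-16:00 seminar;
# a different room at the same time does not.
clash = find_conflict(bookings, "Engineering Auditorium",
                      datetime(2025, 3, 7, 15, 0), datetime(2025, 3, 7, 17, 0))
no_clash = find_conflict(bookings, "Room 101",
                         datetime(2025, 3, 7, 15, 0), datetime(2025, 3, 7, 17, 0))
```

Treating the end time as exclusive means back-to-back bookings (one ending at 15:00, the next starting at 15:00) are not flagged as conflicts, which is usually the desired behaviour for shared venues.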
Unit and Integration Tests

Backend reliability was evaluated through unit and integration tests to ensure that room allocation, conflict detection, and database operations behaved as intended. Automated test scripts verified end-to-end workflows for event creation, modification, and cancellation across all agents. Integration results confirmed that the shared-thread orchestration functioned correctly, with all test cases passing consistently.

However, coverage analysis revealed that approximately 60% of the codebase was tested, leaving some areas such as Azure service integration and error-handling paths outside automated validation. These trade-offs were deliberate, balancing test depth with project scope and the constraints of mocking live dependencies.

Azure AI Evaluation

While functional testing confirmed correctness, it did not capture the agents’ reasoning or language quality. To assess this, I used Azure AI Evaluation, which measures conversational performance across metrics such as relevance, coherence, fluency, and groundedness. The results showed high scores in relevance (4.33) and groundedness (4.67), confirming the agents’ ability to generate accurate and context-aware responses. However, slightly lower fluency scores and weaker performance in multi-turn tasks revealed a retrieval–execution gap typical in task-oriented dialogue systems.

Limitations and Insights

The evaluation also surfaced several key limitations:

- Synthetic data: All tests were conducted with simulated datasets rather than live campus systems, limiting generalisability.
- Scalability: A non-functional requirement in the form of horizontal scalability was not tested. The architecture supports scaling conceptually but requires validation under heavier load.

Despite these constraints, the testing process confirmed that the system was both technically reliable and linguistically robust, capable of autonomous coordination under normal conditions.
The results provided a realistic picture of what worked well and what future iterations should focus on improving.

Impact and Future Work

This project demonstrates how conversational AI and multi-agent orchestration can streamline real operational processes. By combining Azure AI Agent Services with modular design principles, the system automates scheduling, communication, and maintenance while keeping the user experience simple and intuitive. The architecture also establishes a foundation for future extensions:

- Predictive maintenance to anticipate venue faults before they occur.
- Microsoft Teams integration for seamless in-chat scheduling.
- Scalability testing and real-user trials to validate performance at institutional scale.

Beyond its technical results, the project underscores the potential of multi-agent systems in real-world coordination tasks. It illustrates how modularity, transparency, and intelligent orchestration can make everyday workflows more efficient and human-centred.

Acknowledgements

What began with a simple Microsoft tutorial evolved into a working prototype that reimagines how campuses could manage their daily operations through conversation and collaboration. This was both a challenging and rewarding journey, and I am deeply grateful to Professor Graham Roberts (UCL) and Professor Lee Stott (Microsoft) for their guidance, feedback, and support throughout the project.

Partner Blog | Preparing for the next stage of partner skilling in 2026
As 2025 draws to a close, we are seeing the impact of focus and follow-through across the partner ecosystem. This year, partners made skilling a deliberate priority. Across solution areas, teams deepened technical capability, strengthened sales readiness, and brought innovation into real customer engagements. That intent is translating into more confident delivery, faster time to value, and practices built to operate in an AI-driven market.

Our November skilling update outlined the latest resources, bootcamps, and learning paths available to partners. As we look ahead to 2026, this moment is about turning those investments into a scalable skilling foundation that supports consistent delivery, drives customer outcomes, and positions your organization to grow as demand for AI, cloud, security, and data solutions continues to accelerate.

Continue reading here

Don't miss session two, Adopting Copilot Chat and Agent Builder. Sign up now!
Our dynamic four-part webinar series, Agentic AI + Copilot Partner Skilling Accelerator, empowers you to harness the Microsoft AI ecosystem to unlock new revenue streams and enhance customer success. Across each of the four sessions, experts will deliver practical guidance, best practices, and proven strategies for applying AI tools across no-code, low-code, and pro-code scenarios.

Tune in to the second session, Adopting Copilot Chat and Agent Builder, to learn how Copilot and agents can help your business design, position, and sell AI solutions that drive customer success and revenue growth. This live virtual event is scheduled for December 1, 2025. Register today to reserve your spot.