GitHub Copilot Vibe Coding Workshop
Many of us do vibe coding these days, and GitHub Copilot (GHCP) plays a key role in it. You might simply enter prompts to GHCP like "Build a frontend app for a marketplace of camping gear" or even simpler ones like "Give me an app for a camping gear marketplace". This surely works: GHCP delivers an app for you. However, the deliverable might be different from what you initially expected. This happens because GHCP fills in uncertainties with its own imagination unless we provide clear and detailed prompts.

Let's recall the basics of product lifecycle management (PLM). You're a product owner or product manager about to launch a new product or develop a new business to sell value to your prospective customers. Where would you start? Yes, the first step is to perform market analysis – whether your idea is feasible, whether the market is profitable, and so on. Then, based on this analysis, you would write a product requirements document (PRD). The PRD describes what the product or service should look like, how it should work, and what it should deliver. In addition, the doc should also contain user stories and acceptance criteria. The user stories define what the app should expect, how it should behave, and what it should return. The acceptance criteria define how you test the app before accepting it as the final deliverable.

So, is a PRD important for vibe coding? YES, IT IS! As stated earlier, GHCP tries really hard to fill in missing parts with its own imagination. Therefore, the more context you provide to GHCP, the more accurately it works – and the closer the result is to what you initially had in mind. But how do you actually practise this type of vibe coding?

Introducing GitHub Copilot Vibe Coding Workshop

I'm more than happy to introduce this GitHub Copilot Vibe Coding Workshop, a resource available for everyone to use. It's based on a typical app development scenario – building a web application that consists of a frontend UI and a backend API with database transactions. This workshop has six steps:

1. Analyse a PRD and generate an OpenAPI document from it.
2. Build a FastAPI app in Python based on the OpenAPI doc.
3. Build a React app in JavaScript based on the OpenAPI doc.
4. Migrate the FastAPI app to a Spring Boot app in Java.
5. Migrate the React app to a Blazor app in .NET.
6. Containerise both the Spring Boot app and the Blazor app, and orchestrate them.

This workshop is self-paced, so you can complete it in your spare time. It's also designed to run on GitHub Codespaces, since not everyone has the full development environment set up locally. Throughout this workshop, you'll learn:

- How to activate GHCP Agent Mode in VS Code,
- How to customise GHCP to get better results, and
- How to integrate MCP servers for vibe coding (a minimal configuration sketch follows at the end of this post).

Do you prefer a language other than English? No problem! This workshop provides materials in seven languages: English, Chinese (Simplified), French, Japanese, Korean, Portuguese and Spanish, so you can choose your preferred language to complete the workshop.

It's your time for vibe coding! Now it's your turn to try this GitHub Copilot Vibe Coding Workshop on your own, or together with your friends and colleagues. If you have any questions about this workshop, please create an issue in the repository!

Want to know more about GitHub Copilot?

- GitHub Copilot in VS Code
- GitHub Copilot Agent Mode
- GitHub Copilot Customisation
- MCP Server Support in VS Code
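As a taster for the MCP integration step, here's roughly what registering an MCP server in VS Code can look like. This is an illustrative sketch, not part of the workshop materials: the file path (.vscode/mcp.json), the server name, and the filesystem server package are assumptions shown only as an example of a stdio-based server; swap in whichever servers your scenario needs.

```json
{
  "servers": {
    "filesystem": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "."]
    }
  }
}
```

Once a server is listed in the workspace configuration, GHCP Agent Mode can discover its tools and call them during a vibe coding session.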
Swagger Auto-Generation on MCP Server

Would you like to generate a swagger.json directly on an MCP server, on the fly? Using remote MCP servers is increasingly common. In particular, if you're using Azure API Management (APIM), Azure API Center (APIC) or Copilot Studio in Power Platform, integrating with remote MCP servers is inevitable.
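To make the idea concrete, here is a minimal sketch, assuming the TypeScript MCP SDK, of a server that exposes a generated OpenAPI document through a tool. This is not the post's actual implementation: the server name, tool name, and the trivial document builder are placeholders; a real implementation would assemble the document from the server's registered tools and their schemas.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

// Hypothetical server that serves its own OpenAPI description on demand.
const server = new McpServer({ name: "swagger-demo", version: "1.0.0" });

// A tiny swagger/OpenAPI document built on the fly (illustrative content only).
function buildSwaggerDocument() {
  return {
    openapi: "3.0.1",
    info: { title: "Swagger demo MCP server", version: "1.0.0" },
    paths: {},
  };
}

// Expose the generated document as a tool result.
server.tool(
  "get-swagger-json",
  "Returns this server's OpenAPI document, generated on the fly",
  async () => ({
    content: [
      { type: "text", text: JSON.stringify(buildSwaggerDocument(), null, 2) },
    ],
  })
);

// Connect over stdio for local testing; a remote server would use Streamable HTTP.
await server.connect(new StdioServerTransport());
```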
Let's Learn - MCP Events: A Beginner's Guide to the Model Context Protocol

The Model Context Protocol (MCP) has rapidly become the industry standard for connecting AI agents to a wide range of external tools and services in a consistent way. In a matter of months, this protocol has become a hot topic in developer events and forums and has been implemented by companies large and small. With such rapid change comes the need for training and upskilling to meet the moment! That's why we're planning a series of virtual training events across different languages (both natural and programming) to introduce you to MCP.

⭐ Register: https://aka.ms/letslearnmcp

👩‍💻 Who Should Join?

Whether you're a beginner developer, a university student, or a seasoned tech professional, this workshop was designed with you in mind. At each event, experts will guide you through an exciting and beginner-friendly workshop where we'll introduce you to MCP, show you how to build your first server, and answer all your questions along the way. We have an exciting lineup of sessions planned, each focusing on different programming languages and featuring expert presenters. All the events use Visual Studio Code, aside from the July 17th Visual Studio event.

Sessions

⭐ You can register for the events here: https://aka.ms/letslearnmcp

| Date | Language | Technology | Register |
| --- | --- | --- | --- |
| July 9 | English | C# | https://developer.microsoft.com/reactor/events/26114/ |
| July 15 | English | Java | https://developer.microsoft.com/reactor/events/26115/ |
| July 16 | English | Python | https://developer.microsoft.com/reactor/events/26116/ |
| July 17 | English | C# + Visual Studio | https://developer.microsoft.com/reactor/events/26117/ |
| July 21 | English | TypeScript | https://developer.microsoft.com/reactor/events/26118/ |

We're also running the event in Spanish, Portuguese, Italian, Korean, Japanese, Chinese, and more. See the event page for more details!

| Date | Language | Technology | Register |
| --- | --- | --- | --- |
| July 15 | 한국어 | C# | https://developer.microsoft.com/reactor/events/26124/ |
| July 15 | 日本語 | C# | https://developer.microsoft.com/reactor/events/26137/ |
| July 17 | Español | C# | https://developer.microsoft.com/reactor/events/26146/ |
| July 18 | Tiếng Việt | C# | https://developer.microsoft.com/reactor/events/26138/ |
| July 18 | 한국어 | JavaScript | https://developer.microsoft.com/reactor/events/26121/ |
| July 22 | 한국어 | Python | https://developer.microsoft.com/reactor/events/26125/ |
| July 22 | Português | Java | https://developer.microsoft.com/reactor/events/26120/ |
| July 23 | 中文 | C# | https://developer.microsoft.com/reactor/events/26142/ |
| July 23 | Türkçe | C# | https://developer.microsoft.com/reactor/events/26139/ |
| July 23 | Español | JavaScript/TypeScript | https://developer.microsoft.com/reactor/events/26119/ |
| July 23 | Português | C# | https://developer.microsoft.com/reactor/events/26123/ |
| July 24 | Deutsch | Java | https://developer.microsoft.com/reactor/events/26144/ |
| July 24 | Italiano | Python | https://developer.microsoft.com/reactor/events/26145/ |

Don't miss out on this opportunity to learn about MCP and enhance your skills. Mark your calendars and join us for the Let's Learn - MCP workshops. We look forward to seeing you there!

⭐ Register: https://aka.ms/letslearnmcp

Get ready for the event!

We recommend you set up your machine prior to the event so that you can follow along with the live session. Ensure you have:

- Visual Studio Code configured for your chosen programming language
- Docker
- Signed up for GitHub Copilot for FREE
- Checked out the MCP For Beginners course

If you're completely new to MCP, watch this video for an introduction: Introduction to Model Context Protocol (MCP) Servers | DEM517

But wait, there's more!
After the Let's Learn event, you'll be ready to join us for MCP Dev Days on July 29th and 30th. In this two-day virtual event, you'll explore the growing ecosystem around the Model Context Protocol (MCP), a standard that bridges AI models and the tools they rely on. The event will include sessions from MCP experts at Microsoft and beyond. For more information, check out the event page: https://aka.ms/mcpdevdays
Serverless MCP Agent with LangChain.js v1 — Burgers, Tools, and Traces 🍔

AI agents that can actually do stuff (not just chat) are the fun part nowadays, but wiring them cleanly into real APIs, keeping things observable, and shipping them to the cloud can get... messy. So we built a fresh end-to-end sample to show how to do it right with the brand new LangChain.js v1 and the Model Context Protocol (MCP). In case you missed it, MCP is a recent open standard that makes it easy for LLM agents to consume tools and APIs, and LangChain.js, a great framework for building GenAI apps and agents, has first-class support for it. You can quickly get up to speed with the MCP for Beginners course and the AI Agents for Beginners course.

This new sample gives you:

- A LangChain.js v1 agent that streams its result, along with reasoning and tool steps
- An MCP server exposing real tools (burger menu + ordering) from a business API
- A web interface with authentication, session history, and a debug panel (for developers)
- A production-ready multi-service architecture
- Serverless deployment on Azure in one command (azd up)

Yes, it's a burger ordering system. Who doesn't like burgers? Grab your favorite beverage ☕, and let's dive in for a quick tour!

TL;DR key takeaways

- New sample: full-stack Node.js AI agent using LangChain.js v1 + MCP tools
- Architecture: web app → agent API → MCP server → burger API
- Runs locally with a single npm start, deploys with azd up
- Uses streaming (NDJSON) with intermediate tool + LLM steps surfaced to the UI
- Ready to fork, extend, and plug into your own domain / tools

What will you learn here?

- What this sample is about and its high-level architecture
- What LangChain.js v1 brings to the table for agents
- How to deploy and run the sample
- How MCP tools can expose real-world APIs
- Reference links for everything we use: GitHub repo, LangChain.js docs, Model Context Protocol, Azure Developer CLI, MCP Inspector

Use case

You want an AI assistant that can take a natural language request like "Order two spicy burgers and show me my pending orders" and:

- Understand intent (query menu, then place order)
- Call the right MCP tools in sequence, calling in turn the necessary APIs
- Stream progress (LLM tokens + tool steps)
- Return a clean final answer

Swap "burgers" for "inventory", "bookings", "support tickets", or "IoT devices" and you've got a reusable pattern!

Sample overview

Before we play a bit with the sample, let's have a look at the main services implemented here:

| Service | Role | Tech |
| --- | --- | --- |
| Agent Web App (agent-webapp) | Chat UI + streaming + session history | Azure Static Web Apps, Lit web components |
| Agent API (agent-api) | LangChain.js v1 agent orchestration + auth + history | Azure Functions, Node.js |
| Burger MCP Server (burger-mcp) | Exposes burger API as tools over MCP (Streamable HTTP + SSE) | Azure Functions, Express, MCP SDK |
| Burger API (burger-api) | Business logic: burgers, toppings, orders lifecycle | Azure Functions, Cosmos DB |

Here's a simplified view of how they interact: web app → agent API → MCP server → burger API. There are also other supporting components like databases and storage, not shown here for clarity. For this quickstart we'll only interact with the Agent Web App and the Burger MCP Server, as they are the main stars of the show here.

LangChain.js v1 agent features

The recent release of LangChain.js v1 is a huge milestone for the JavaScript AI community! It marks a significant shift from experimental tools to a production-ready framework. The new version doubles down on what's needed to build robust AI applications, with a strong focus on agents. This includes first-class support for streaming not just the final output, but also intermediate steps like tool calls and agent reasoning. This makes building transparent and interactive agent experiences (like the one in this sample) much more straightforward.
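To give a feel for what that looks like in code, here is a rough sketch of a streaming LangChain.js v1 agent with a single tool. This is not the sample's actual code: the model identifier, the prompt, and the locally defined stand-in tool are assumptions for illustration; in the real sample the tools are loaded from the Burger MCP server rather than defined inline.

```typescript
import { createAgent } from "langchain";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// Stand-in tool; the sample exposes the real burger API through MCP instead.
const getBurgers = tool(
  async () =>
    JSON.stringify([{ id: 1, name: "Spicy Inferno", heat: "extra hot" }]),
  {
    name: "get_burgers",
    description: "Get a list of all burgers in the menu",
    schema: z.object({}),
  }
);

const agent = createAgent({
  model: "openai:gpt-4o-mini", // assumed model id; configure for your own deployment
  tools: [getBurgers],
});

// Stream intermediate updates (tool calls, agent steps) as they happen.
const stream = await agent.stream(
  { messages: [{ role: "user", content: "Recommend me an extra spicy burger" }] },
  { streamMode: "updates" }
);

for await (const update of stream) {
  console.log(update); // each update surfaces an agent or tool step
}
```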
Quickstart

Requirements

- GitHub account
- Azure account (free signup, or if you're a student, get free credits here)
- Azure Developer CLI

Deploy and run the sample

We'll use GitHub Codespaces for a quick zero-install setup here, but if you prefer to run it locally, check the README. Click on the following link or open it in a new tab to launch a Codespace: Create Codespace. This will open a VS Code environment in your browser with the repo already cloned and all the tools installed and ready to go.

Provision and deploy to Azure: open a terminal and run these commands:

```bash
# Install dependencies
npm install

# Login to Azure
azd auth login

# Provision and deploy all resources
azd up
```

Follow the prompts to select your Azure subscription and region. If you're unsure which one to pick, choose East US 2. The deployment will take about 15 minutes the first time to create all the necessary resources (Functions, Static Web Apps, Cosmos DB, AI models). If you're curious about what happens under the hood, you can take a look at the main.bicep file in the infra folder, which defines the infrastructure as code for this sample.

Test the MCP server

While the deployment is running, you can run the MCP server and API locally (even in Codespaces) to see how they work. Open another terminal and run:

```bash
npm start
```

This will start all services locally, including the Burger API and the MCP server, which will be available at http://localhost:3000/mcp. This may take a few seconds; wait until you see this message in the terminal:

🚀 All services ready 🚀

When these services are running without Azure resources provisioned, they will use in-memory data instead of Cosmos DB, so you can experiment freely with the API and MCP server, though the agent won't be functional as it requires an LLM resource.

MCP tools

The MCP server exposes the following tools, which the agent can use to interact with the burger ordering system:

| Tool Name | Description |
| --- | --- |
| get_burgers | Get a list of all burgers in the menu |
| get_burger_by_id | Get a specific burger by its ID |
| get_toppings | Get a list of all toppings in the menu |
| get_topping_by_id | Get a specific topping by its ID |
| get_topping_categories | Get a list of all topping categories |
| get_orders | Get a list of all orders in the system |
| get_order_by_id | Get a specific order by its ID |
| place_order | Place a new order with burgers (requires userId, optional nickname) |
| delete_order_by_id | Cancel an order if it has not yet been started (status must be pending, requires userId) |

You can test these tools using the MCP Inspector. Open another terminal and run:

```bash
npx -y @modelcontextprotocol/inspector
```

Then open the URL printed in the terminal in your browser and connect using these settings:

- Transport: Streamable HTTP
- URL: http://localhost:3000/mcp
- Connection Type: Via Proxy (should be the default)

Click on Connect, then try listing the tools first, and run the get_burgers tool to get the menu info.

Test the Agent Web App

After the deployment is completed, you can run the command npm run env to print the URLs of the deployed services. Open the Agent Web App URL in your browser (it should look like https://<your-web-app>.azurestaticapps.net).
You'll first be greeted by an authentication page; sign in with either your GitHub or Microsoft account, and you should then be able to access the chat interface. From there, you can start asking any question or use one of the suggested prompts, for example try asking: Recommend me an extra spicy burger. As the agent processes your request, you'll see the response streaming in real time, along with the intermediate steps and tool calls. Once the response is complete, you can also unfold the debug panel to see the full reasoning chain and the tools that were invoked.

Tip: our agent service also sends detailed tracing data using OpenTelemetry. You can explore these traces either in Azure Monitor for the deployed service, or locally using an OpenTelemetry collector. We'll cover this in more detail in a future post.

Wrap it up

Congratulations, you just finished spinning up a full-stack serverless AI agent using LangChain.js v1, MCP tools, and Azure's serverless platform. Now it's your turn to dive into the code and extend it for your use cases! 😎 And don't forget to run azd down once you're done to avoid any unwanted costs.

Going further

This was just a quick introduction to this sample, and you can expect more in-depth posts and tutorials soon. Since we're in the era of AI agents, we've also made sure that this sample can be explored and extended easily with code agents like GitHub Copilot. We even built a custom chat mode to help you discover and understand the codebase faster! Check out the Copilot setup guide in the repo to get started. You can quickly get up to speed with the MCP for Beginners course and the AI Agents for Beginners course. If you like this sample, don't forget to star the repo ⭐️! You can also join us in the Azure AI community Discord to chat and ask any questions. Happy coding and burger ordering! 🍔
How to Master GitHub Copilot: Build, Prompt, Deploy Smarter

Mastering GitHub Copilot: Build, Prompt, Deploy Smarter is a free, hands-on workshop designed to help developers go beyond autocomplete and unlock the true power of AI-assisted coding. Instead of toy examples, this course walks you through real-world software engineering challenges: messy codebases, multi-language projects, cloud deployments, and legacy system upgrades. You'll learn practical skills like prompt engineering, advanced Copilot features, and AI pair programming techniques that make you faster, sharper, and more creative.

Whether you're a junior developer or a seasoned architect, mastering GitHub Copilot will help you:

- Reduce cognitive load and focus on system design
- Accelerate onboarding for new engineers
- Write cleaner, more consistent code
- Automate repetitive tasks to free up time for innovation

AI coding tools like GitHub Copilot are no longer optional—they're essential. This workshop gives you the skills to collaborate with Copilot effectively and stay competitive in the age of AI-powered development.
Orchestrating Multi-Agent Intelligence: MCP-Driven Patterns in Agent Framework

Building reliable AI systems requires modular, stateful coordination and deterministic workflows that enable agents to collaborate seamlessly. The Microsoft Agent Framework provides these foundations, with memory, tracing, and orchestration built in. This implementation demonstrates four multi-agent patterns — Single Agent, Handoff, Reflection, and Magentic Orchestration — showcasing different interaction models and collaboration strategies. From lightweight domain routing to collaborative planning and self-reflection, these patterns highlight the framework's flexibility. At the core is the Model Context Protocol (MCP), connecting agents, tools, and memory through a shared context interface. Persistent session state, conversation thread history, and checkpoint support are handled via Cosmos DB when configured, with an in-memory dictionary as the default fallback. This setup enables dynamic pattern swapping, performance comparison, and traceable multi-agent interactions — all within a unified, modular runtime.

Business Scenario: Contoso Customer Support Chatbot

Contoso's chatbot handles multi-domain customer inquiries like billing anomalies, promotion eligibility, account locks, and data usage questions. These require combining structured data (billing, CRM, security logs, promotions) with unstructured policy documents processed via vector embeddings. Using MCP, the system orchestrates tool calls to fetch real-time structured data and relevant policy content, ensuring policy-aligned, auditable responses without exposing raw databases. This enables the assistant to explain anomalies, recommend actions, confirm eligibility, guide account recovery, and surface risk indicators—reducing handle time and improving first-contact resolution while supporting richer multi-agent reasoning.

Architecture & Core Concepts

The Contoso chatbot leverages the Microsoft Agent Framework to deliver a modular, stateful, and workflow-driven architecture. At its core, the system consists of:

- Base Agent: All agent patterns—single agent, reflection, handoff and magentic orchestration—inherit from a common base class, ensuring consistent interfaces for message handling, tool invocation, and state management.
- Backend: A FastAPI backend manages session routing, agent execution, and workflow orchestration.
- Frontend: A React-based UI (or Streamlit alternative) streams responses in real time and visualizes agent reasoning and tool calls.

Modular Runtime and Pattern Swapping

One of the most powerful aspects of this implementation is its modular runtime design. Each agentic pattern—Single, Reflection, Handoff, and Magentic—plugs into a shared execution pipeline defined by the base agent and MCP integration. By simply updating the .env configuration (e.g., agent_module=handoff), developers can swap entire coordination strategies in and out without touching the backend, frontend, or memory layers. This makes it easy to compare agent styles side by side, benchmark reasoning behaviors, and experiment with orchestration logic—all while maintaining a consistent, deterministic runtime. The same MCP connectors, FastAPI backend, and Cosmos/in-memory state management work seamlessly across every pattern, enabling rapid iteration and reliable evaluation.
```python
# Dynamic agent pattern loading
agent_module_path = os.getenv("AGENT_MODULE")
agent_module = __import__(agent_module_path, fromlist=["Agent"])
Agent = getattr(agent_module, "Agent")

# Common MCP setup across all patterns
async def _create_tools(self, headers: Dict[str, str]) -> List[MCPStreamableHTTPTool] | None:
    if not self.mcp_server_uri:
        return None
    return [
        MCPStreamableHTTPTool(
            name="mcp-streamable",
            url=self.mcp_server_uri,
            headers=headers,
            timeout=30,
            request_timeout=30,
        )
    ]
```

Memory & State Management

State management is critical for multi-turn conversations and cross-agent workflows. The system supports two options out of the box:

Persistent Storage (Cosmos DB)

- Acts as the durable, enterprise-ready backend.
- Stores serialized conversation threads and workflow checkpoints keyed by tenant and session ID.
- Ensures data durability and auditability across restarts.

In-Memory Session Store

- Default fallback when Cosmos DB credentials are not configured.
- Maintains ephemeral state per session for fast prototyping or lightweight use cases.

All patterns leverage the same thread-based state abstraction, enabling:

- Session isolation: Each user session maintains its own state and history.
- Checkpointing: Multi-agent workflows can snapshot shared and executor-local state at any point, supporting pause/resume and fault recovery.
- Model Context Protocol (MCP): Acts as the connector between agents and tools, standardizing how data is fetched and results are returned to agents, whether querying structured databases or unstructured knowledge sources.

Core Principles

Across all patterns, the framework emphasizes:

- Modularity: Components are interchangeable—agents, tools, and state stores can be swapped without disrupting the system.
- Stateful Coordination: Multi-agent workflows coordinate through shared and local state, enabling complex reasoning without losing context.
- Deterministic Workflows: While agents operate autonomously, the workflow layer ensures predictable, auditable execution of multi-agent tasks.
- Unified Execution: From single-agent Q&A to complex Magentic orchestrations, every agent follows the same execution lifecycle and integrates seamlessly with MCP and the state store.

Multi-Agent Patterns: Workflow and Coordination

With the architecture and core concepts established, we can now explore the agentic patterns implemented in the Contoso chatbot. Each pattern builds on the base agent and MCP integration but differs in how agents orchestrate tasks and communicate with one another to handle multi-domain customer queries. In the sections that follow, we take a deeper dive into each pattern's workflow and examine the under-the-hood communication flows between agents:

- Single Agent – A simple, single-domain agent handling straightforward queries.
- Reflection Agent – Allows agents to introspect and refine their outputs.
- Handoff Pattern – Routes conversations intelligently to specialized agents across domains.
- Magentic Orchestration – Coordinates multiple specialist agents for complex, parallel tasks.

For each pattern, the focus will be on how agents communicate and coordinate, showing the practical orchestration mechanisms in action.

Single Intelligent Agent

The Single Agent pattern represents the simplest orchestration style within the framework. Here, a single autonomous agent handles all reasoning, decision-making, and tool interactions directly — without delegation or multi-agent coordination.
When a user submits a request, the single agent processes the query using all tools, memory, and data sources available through the Model Context Protocol (MCP). It performs retrieval, reasoning, and response composition in a single, cohesive loop.

Communication Flow:

- User Input → Agent: The user submits a question or command.
- Agent → MCP Tools: The agent invokes one or more tools (e.g., vector retrieval, structured queries, or API calls) to gather relevant context and data.
- Agent → User: The agent synthesizes the tool outputs, applies reasoning, and generates the final response to the user.
- Session Memory: Throughout the exchange, the agent stores conversation history and extracted entities in the configured memory store (in-memory or Cosmos DB).

Key Communication Principles:

- Single Responsibility: One agent performs both reasoning and action, ensuring fast response times and simpler state management.
- Direct Tool Invocation: The agent has direct access to all registered tools through MCP, enabling flexible retrieval and action chaining.
- Stateful Execution: The session memory preserves dialogue context, allowing the agent to maintain continuity across user turns.
- Deterministic Behavior: The workflow is fully predictable — input, reasoning, tool call, and output occur in a linear sequence.

Reflection Pattern

The Reflection pattern introduces a lightweight, two-agent communication loop designed to improve the quality and reliability of responses through structured self-review. In this setup, a Primary Agent first generates an initial response to the user's query. This draft is then passed to a Reviewer Agent, whose role is to critique and refine the response—identifying gaps, inaccuracies, or missed context. Finally, the Primary Agent incorporates this feedback and produces a polished final answer for the user. This process introduces one round of reflection and improvement without adding excessive latency, balancing quality with responsiveness.

Communication Flow:

- User Input → Primary Agent: The user submits a query.
- Primary Agent → Reviewer Agent: The primary generates an initial draft and passes it to the reviewer.
- Reviewer Agent → Primary Agent: The reviewer provides feedback or suggested improvements.
- Primary Agent → User: The primary revises its response and sends the refined version back to the user.

Key Communication Principles:

- Two-Stage Dialogue: Structured interaction between Primary and Reviewer ensures each output undergoes quality assurance.
- Focused Review: The Reviewer doesn't recreate answers—it critiques and enhances, reducing redundancy.
- Stateful Context: Both agents operate over the same shared memory, ensuring consistency between draft and revision.
- Deterministic Flow: A single reflection round guarantees predictable latency while still improving answer quality.
- Transparent Traceability: Each step—initial draft, feedback, and final output—is logged, allowing developers to audit reasoning or assess quality improvements over time.

In practice, this pattern enables the system to reason about its own output before responding, yielding clearer, more accurate, and policy-aligned answers without requiring multiple independent retries.

Handoff Pattern

When a user request arrives, the system first routes it through an Intent Classifier (or triage agent) to determine which domain specialist should handle the conversation. Once identified, control is handed off directly to that Specialist Agent, which uses its own tools, domain knowledge, and state context to respond.
This specialist continues to handle the user interaction as long as the conversation stays within its domain. If the user's intent shifts — for example, moving from billing to security — the conversation is routed back to the Intent Classifier, which re-assigns it to the correct specialist agent. This pattern reduces latency and maintains continuity by minimizing unnecessary routing. Each handoff is tracked through the shared state store, ensuring seamless context carry-over and full traceability of decisions.

Key Communication Principles:

- Dynamic Routing: The Intent Classifier routes user input to the right specialist domain.
- Domain Persistence: The specialist remains active while the user stays within its domain.
- Context Continuity: Conversation history and entities persist across agents through the shared state store.
- Traceable Handoffs: Every routing decision is logged for observability and auditability.
- Low Latency: Responses are faster since domain-appropriate agents handle queries directly.

In practice, this means a user could begin a conversation about billing, continue seamlessly, and only be re-routed when switching topics — without losing any conversational context or history.

Magentic Pattern

The Magentic pattern is designed for open-ended, multi-faceted tasks that require multiple agents to collaborate. It introduces a Manager (Planner) Agent, which interprets the user's goal, breaks it into subtasks, and orchestrates multiple Specialist Agents to execute those subtasks. The Manager creates and maintains a Task Ledger, which tracks the status, dependencies, and results of each specialist's work. As specialists perform their tool calls or reasoning, the Manager monitors their progress, gathers intermediate outputs, and can dynamically re-plan, dispatch additional tasks, or adjust the overall workflow. When all subtasks are complete, the Manager synthesizes the combined results into a coherent final response for the user.

Key Communication Principles:

- Centralized Orchestration: The Manager coordinates all agent interactions and workflow logic.
- Parallel and Sequential Execution: Specialists can work simultaneously or in sequence based on task dependencies.
- Task Ledger: Acts as a transparent record of all task assignments, updates, and completions.
- Dynamic Re-planning: The Manager can modify or extend workflows in real time based on intermediate findings.
- Shared Memory: All agents access the same state store for consistent context and result sharing.
- Unified Output: The Manager consolidates results into one response, ensuring coherence across multi-agent reasoning.

In practice, Magentic orchestration enables complex reasoning where the system might combine insights from multiple agents — e.g., billing, product, and security — and present a unified recommendation or resolution to the user.

Choosing the Right Agent for Your Use Case

Selecting the appropriate agent pattern hinges on the complexity of the task and the level of coordination required. As use cases evolve from straightforward queries to intricate, multi-step processes, the need for specialized orchestration increases.
Below is a decision matrix to guide your choice:

| Feature / Requirement | Single Agent | Reflection Agent | Handoff Pattern | Magentic Orchestration |
| --- | --- | --- | --- | --- |
| Handles simple, domain-bound tasks | ✔ | ✔ | ✖ | ✖ |
| Supports review / quality assurance | ✖ | ✔ | ✖ | ✔ |
| Multi-domain routing | ✖ | ✖ | ✔ | ✔ |
| Open-ended / complex workflows | ✖ | ✖ | ✖ | ✔ |
| Parallel agent collaboration | ✖ | ✖ | ✖ | ✔ |
| Direct tool access | ✔ | ✔ | ✔ | ✔ |
| Low latency / fast response | ✔ | ✔ | ✔ | ✖ |
| Easy to implement / low orchestration | ✔ | ✔ | ✖ | ✖ |

Dive Deeper: Explore, Build, and Innovate

We've explored various agent patterns, from Single Agent to Magentic Orchestration, each tailored to different use cases and complexities. To see these patterns in action, we invite you to explore our GitHub repo. Clone the repo, experiment with the examples, and adapt them to your own scenarios. Additionally, beyond the patterns discussed here, the repository also features a Human-in-the-Loop (HITL) workflow designed for fraud detection. This workflow integrates human oversight into AI decision-making, ensuring higher accuracy and reliability. For an in-depth look at this approach, we recommend reading our detailed blog post: Building Human-in-the-loop AI Workflows with Microsoft Agent Framework | Microsoft Community Hub.

Engage with these resources, and start building intelligent, reliable, and scalable AI systems today! This repository and its content are developed and maintained by James Nguyen, Nicole Serafino, Kranthi Kumar Manchikanti, Heena Ugale, and Tim Sullivan.
It's time to secure your MCP servers. Here's how.

The Model Context Protocol (MCP) provides a powerful, standardized way for LLMs to interact with external tools. But as soon as you move from a local demo to a real-world application, a critical question arises: how do you secure it? Exposing an MCP server without security is like leaving the front door of your house wide open. Anyone could walk in and use your tools, access your data, or cause havoc. This guide will walk you through securing a Node.js MCP server from the ground up using JSON Web Tokens (JWT). We'll cover authentication (who are you?) and authorization (what are you allowed to do?), with practical code samples based on the project found at Azure-Samples/mcp-container-ts.

The Goal: From Unprotected to Fully Secured

Our goal is to take a basic MCP server and add a robust security layer that:

- Authenticates every request to ensure it comes from a known user.
- Authorizes the user, granting them specific permissions based on their role (e.g., admin vs. readonly).
- Protects individual tools, so only authorized users can access them.

Why JWT is Perfect for MCP Security

JWT is the industry standard for securing APIs, and it's an ideal fit for MCP servers for a few key reasons:

- Stateless: Each JWT contains all the information needed to verify a user. The server doesn't need to store session information, which makes it highly scalable—perfect for handling many concurrent requests from AI agents.
- Self-Contained: A JWT can carry user details, their role, and specific permissions directly within its payload.
- Tamper-Proof: JWTs are digitally signed. If a token is modified in any way, the signature becomes invalid, and the server will reject it.
- Portable: A single JWT can be used to access multiple secured services, which is common in microservice architectures.

Visualizing the Security Flow

For visual learners, a sequence diagram illustrates the complete authentication and authorization flow.

A Note on MCP Specification Compliance

It's important to note that this guide provides a practical, real-world implementation for securing an MCP server, but it does not fully implement the official MCP authorization specification. This implementation focuses on a robust, stateless, and widely understood pattern using traditional JWTs and role-based access control (RBAC), which is sufficient for many use cases. However, for full compliance with the MCP specification, you would need to implement additional features. In a future post, we may explore how to extend our JWT implementation to fully align with the MCP specification. We recommend starring the GitHub repository to stay updated and receive notifications about future improvements.

Step 1: Defining Roles and Permissions

Before writing any code, we must define our security rules. What roles exist? What can each role do? This is the foundation of our authorization system. In our src/auth/authorization.ts file, we define the UserRole and Permission enums. This makes our code clear, readable, and less prone to typos.
```typescript
// src/auth/authorization.ts
export enum UserRole {
  ADMIN = "admin",
  USER = "user",
  READONLY = "readonly",
}

export enum Permission {
  CREATE_TODOS = "create:todos",
  READ_TODOS = "read:todos",
  UPDATE_TODOS = "update:todos",
  DELETE_TODOS = "delete:todos",
  LIST_TOOLS = "list:tools",
}

// This interface defines the structure of our authenticated user
export interface AuthenticatedUser {
  id: string;
  role: UserRole;
  permissions: Permission[];
}

// A simple map to assign default permissions to each role
const rolePermissions: Record<UserRole, Permission[]> = {
  [UserRole.ADMIN]: Object.values(Permission), // Admin gets all permissions
  [UserRole.USER]: [
    Permission.CREATE_TODOS,
    Permission.READ_TODOS,
    Permission.UPDATE_TODOS,
    Permission.LIST_TOOLS,
  ],
  [UserRole.READONLY]: [Permission.READ_TODOS, Permission.LIST_TOOLS],
};

// Helpers referenced by the JWT service and request handlers later in this guide
export function getPermissionsForRole(role: UserRole): Permission[] {
  return rolePermissions[role] ?? [];
}

export function hasPermission(
  user: AuthenticatedUser,
  permission: Permission
): boolean {
  return user.permissions.includes(permission);
}
```

Step 2: Creating a JWT Service

Next, we need a centralized service to handle all JWT-related logic: creating new tokens for testing and, most importantly, verifying incoming tokens. This keeps our security logic clean and in one place. Here is the complete src/auth/jwt.ts file. It uses the jsonwebtoken library to do the heavy lifting.

```typescript
// src/auth/jwt.ts
import * as jwt from "jsonwebtoken";
import {
  AuthenticatedUser,
  getPermissionsForRole,
  UserRole,
} from "./authorization.js";

// These values should come from environment variables for security
const JWT_SECRET = process.env.JWT_SECRET!;
const JWT_AUDIENCE = process.env.JWT_AUDIENCE!;
const JWT_ISSUER = process.env.JWT_ISSUER!;
const JWT_EXPIRY = process.env.JWT_EXPIRY || "2h";

if (!JWT_SECRET || !JWT_AUDIENCE || !JWT_ISSUER) {
  throw new Error("JWT environment variables are not set!");
}

/**
 * Generates a new JWT for a given user payload.
 * Useful for testing or generating tokens on demand.
 */
export function generateToken(
  user: Partial<AuthenticatedUser> & { id: string }
): string {
  const payload = {
    id: user.id,
    role: user.role || UserRole.USER,
    permissions:
      user.permissions || getPermissionsForRole(user.role || UserRole.USER),
  };

  return jwt.sign(payload, JWT_SECRET, {
    algorithm: "HS256",
    expiresIn: JWT_EXPIRY,
    audience: JWT_AUDIENCE,
    issuer: JWT_ISSUER,
  });
}

/**
 * Verifies an incoming JWT and returns the authenticated user payload if valid.
 */
export function verifyToken(token: string): AuthenticatedUser {
  try {
    const decoded = jwt.verify(token, JWT_SECRET, {
      algorithms: ["HS256"],
      audience: JWT_AUDIENCE,
      issuer: JWT_ISSUER,
    }) as jwt.JwtPayload;

    // Ensure the decoded token has the fields we expect
    if (typeof decoded.id !== "string" || typeof decoded.role !== "string") {
      throw new Error("Token payload is missing required fields.");
    }

    return {
      id: decoded.id,
      role: decoded.role as UserRole,
      permissions: decoded.permissions || [],
    };
  } catch (error) {
    // Log the specific error for debugging, but return a generic message
    console.error("JWT verification failed:", error.message);
    if (error instanceof jwt.TokenExpiredError) {
      throw new Error("Token has expired.");
    }
    if (error instanceof jwt.JsonWebTokenError) {
      throw new Error("Invalid token.");
    }
    throw new Error("Could not verify token.");
  }
}
```

Step 3: Building the Authentication Middleware

A "middleware" is a function that runs before your main request handler. It's the perfect place to put our security check. This middleware will inspect every incoming request, look for a JWT in the Authorization header, and verify it. If the token is valid, it attaches the user's information to the request object for later use.
If not, it immediately sends a 401 Unauthorized error and stops the request from proceeding further. To make this type-safe, we'll also extend Express's Request interface to include our user object.

```typescript
// src/server-middlewares.ts
import { Request, Response, NextFunction } from "express";
import { verifyToken, AuthenticatedUser } from "./auth/jwt.js";

// Extend the global Express Request interface to add our custom 'user' property
declare global {
  namespace Express {
    interface Request {
      user?: AuthenticatedUser;
    }
  }
}

export function authenticateJWT(
  req: Request,
  res: Response,
  next: NextFunction
): void {
  const authHeader = req.headers.authorization;

  if (!authHeader || !authHeader.startsWith("Bearer ")) {
    res.status(401).json({
      error: "Authentication required",
      message: "Authorization header with 'Bearer' scheme must be provided.",
    });
    return;
  }

  const token = authHeader.substring(7); // Remove "Bearer "

  try {
    const userPayload = verifyToken(token);
    req.user = userPayload; // Attach user payload to the request
    next(); // Proceed to the next middleware or request handler
  } catch (error) {
    res.status(401).json({
      error: "Invalid token",
      message: error.message,
    });
  }
}
```

Step 4: Protecting the MCP Server

Now we have all the pieces. Let's put them together to protect our server. First, we apply our authenticateJWT middleware to the main MCP endpoint in src/index.ts. This ensures every request to /mcp must have a valid JWT.

```typescript
// src/index.ts
// ... other imports
import { authenticateJWT } from "./server-middlewares.js";
// ...

const MCP_ENDPOINT = "/mcp";
const app = express();

// Apply security middleware ONLY to the MCP endpoint
app.use(MCP_ENDPOINT, authenticateJWT);

// ... rest of the file
```

Next, we'll enforce our fine-grained permissions. Let's secure the ListTools handler in src/server.ts. We'll modify it to check whether the authenticated user has the Permission.LIST_TOOLS permission before returning the list of tools.

```typescript
// src/server.ts
// ... other imports
import { hasPermission, Permission } from "./auth/authorization.js";

// ... inside the StreamableHTTPServer class
private setupServerRequestHandlers() {
  this.server.setRequestHandler(ListToolsRequestSchema, async (request) => {
    // The user is attached to the request by our middleware
    const user = this.currentUser;

    // 1. Check for an authenticated user
    if (!user) {
      return this.createRPCErrorResponse("Authentication required.");
    }

    // 2. Check if the user has the specific permission to list tools
    if (!hasPermission(user, Permission.LIST_TOOLS)) {
      return this.createRPCErrorResponse(
        "Insufficient permissions to list tools."
      );
    }

    // 3. If checks pass, filter tools based on the user's permissions
    const allowedTools = TodoTools.filter((tool) => {
      const requiredPermissions = this.getToolRequiredPermissions(tool.name);
      // The user must have at least one of the permissions required for the tool
      return requiredPermissions.some((p) => hasPermission(user, p));
    });

    return {
      jsonrpc: "2.0",
      tools: allowedTools,
    };
  });

  // ... other request handlers
}
```

With this change, a user with a readonly role can list tools, but a user without the LIST_TOOLS permission would be denied access.
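To sanity-check the setup end to end, you can mint a token and hit the protected endpoint. The sketch below is hypothetical, not part of the sample repo: the script path, import paths, and the localhost port are assumptions, and a real MCP client would speak the full Streamable HTTP protocol rather than a single raw JSON-RPC POST. It is enough, however, to confirm that requests without a valid Bearer token are rejected with 401.

```typescript
// scripts/smoke-test.ts (hypothetical helper; adjust paths and port to your setup)
import { generateToken } from "../src/auth/jwt.js";
import { UserRole } from "../src/auth/authorization.js";

// Mint a short-lived token for a read-only user (assumes the JWT_* env vars are set).
const token = generateToken({ id: "demo-user", role: UserRole.READONLY });

// Call the protected MCP endpoint with the Bearer token.
const response = await fetch("http://localhost:3000/mcp", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Accept: "application/json, text/event-stream",
    Authorization: `Bearer ${token}`,
  },
  body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "tools/list" }),
});

console.log(response.status); // 401 without a token, 200-range with a valid one
console.log(await response.text());
```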
Conclusion and Next Steps

Congratulations! You've successfully implemented a robust authentication and authorization layer for your MCP server. By following these steps, you have:

- Defined clear roles and permissions.
- Created a centralized service for handling JWTs.
- Built a middleware to protect all incoming requests.
- Enforced granular permissions at the tool level.

Your MCP server is no longer an open door—it's a secure service. From here, you can expand on these concepts by adding more roles, more permissions, and even more complex business logic to your authorization system. Star our GitHub repository to stay updated and receive notifications about future improvements.
Use Copilot and MCP to query Microsoft Learn Docs

Are you ready to take your Azure development workflow to the next level? In this post, we'll walk through how to use GitHub Copilot in Agent Mode—paired with MCP (Model Context Protocol) servers—to get trusted, grounded answers from Microsoft Learn Docs, right inside your coding workspace. Whether you're tired of switching tabs to search documentation or want to ensure your AI assistant's answers are always accurate, this guide will show you how to streamline your workflow and boost your productivity.
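As a rough idea of the setup involved, a remote docs MCP server can be registered in a VS Code workspace with a small configuration like the one below. This is an illustrative sketch: the file location (.vscode/mcp.json), the server name, and the endpoint URL are assumptions on my part; check the official Microsoft Learn MCP server documentation for the current endpoint before relying on it.

```json
{
  "servers": {
    "microsoft-learn": {
      "type": "http",
      "url": "https://learn.microsoft.com/api/mcp"
    }
  }
}
```

With the server registered, Copilot Agent Mode can call its tools to ground answers in Microsoft Learn documentation rather than relying on the model's memory alone.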
Join Us for an AMA on Improving Your MCP Servers with Azure API Management

What Will We Cover?

In this interactive AMA, you'll learn how to:

- Expose Azure API Management instances as MCP servers, enabling remote access to your APIs using AI combined with the Model Context Protocol.
- Configure API Management policies to enhance your MCP servers with enterprise-grade capabilities such as rate limiting, authentication, and centralized monitoring (a minimal policy sketch follows at the end of this post).

Why This Matters

Model Context Protocol (MCP) bridges the gap between AI agents and the real-world data they need to be effective. By integrating MCP with Azure API Management, developers can expose tools to their AI agents while enforcing consistent policies and security standards. Whether you're deploying custom tools or remote services, this AMA will show you how Azure API Management can be your go-to platform for controlling and scaling MCP access.

How to Join

- Register to join the Azure AI Foundry Discord Community Event
- See the events channel
- 📅 Tuesday, July 22nd, 2025
- ⏰ 10:00 AM Pacific Time (UTC−07:00)

Event Highlights

- Learn how to expose MCP servers through Azure API Management
- See how to configure and test policies such as rate limiting
- Get answers directly from Microsoft product managers and engineers
- Connect with fellow developers building with MCP, Azure API Management, and Azure AI Foundry

Get a Head Start

Before the event, check out the documentation to learn how to expose a REST API in API Management as an MCP server, view the "Build and protect MCPs faster with governance in Azure API Manager" session from Build 2025, or explore the AI-Gateway labs on GitHub and learn how to use APIM and MCP in the MCP for Beginners course.

Don't miss this opportunity to deepen your understanding of API Management and MCP integration—and get your questions answered live!
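For a taste of what such a policy can look like, here is a minimal inbound policy fragment that rate-limits calls by subscription key, falling back to the caller's IP address. This is an illustrative sketch, not the AMA's sample: the limits and counter key are arbitrary, and you would attach the policy to the API or operation that fronts your MCP server.

```xml
<policies>
  <inbound>
    <base />
    <!-- Allow at most 10 calls per 60 seconds per caller -->
    <rate-limit-by-key
      calls="10"
      renewal-period="60"
      counter-key="@(context.Subscription?.Key ?? context.Request.IpAddress)" />
  </inbound>
  <backend>
    <base />
  </backend>
  <outbound>
    <base />
  </outbound>
  <on-error>
    <base />
  </on-error>
</policies>
```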
Level Up Your Python Game with Generative AI Free Livestream Series This October!

If you've been itching to go beyond basic Python scripts and dive into the world of AI-powered applications, this is your moment. Join Pamela Fox and Gwyneth Peña-Siguenza, who are thrilled to announce a brand-new free livestream series running throughout October, focused on Python + Generative AI, and this time we're going even deeper with Agents and the Model Context Protocol (MCP). Whether you're just starting out with LLMs or you're refining your multi-agent workflows, this series is designed to meet you where you are and push your skills to the next level.

🧠 What You'll Learn

Each session is packed with live coding, hands-on demos, and real-world examples you can run in GitHub Codespaces.

🎥 Why Join?

- Live coding: No slides-only sessions — we build together, step by step.
- All code shared: Clone and run in GitHub Codespaces or your local setup.
- Community support: Join weekly office hours and our AI Discord for Q&A and deeper dives.
- Modular learning: Each session stands alone, so you can jump in anytime.

🔗 Register for the full series

🌍 ¿Hablas español? We've got you covered! Gwyneth Peña-Siguenza will be leading a parallel series in Spanish, covering the same topics with localized examples and demos.

🔗 Regístrese para la serie en español

Whether you're building your first AI app or architecting multi-agent systems, this series is your launchpad. Come for the code, stay for the community — and leave with a toolkit that scales. Let's build something brilliant together. 💡 Join the discussions and share your experience at the Azure AI Discord Community.