Building Interactive Agent UIs with AG-UI and Microsoft Agent Framework
Introduction Picture this: You've built an AI agent that analyzes financial data. A user uploads a quarterly report and asks: "What are the top three expense categories?" Behind the scenes, your agent parses the spreadsheet, aggregates thousands of rows, and generates visualizations. All in 20 seconds. But the user? They see a loading spinner. Nothing else. No "reading file" message, no "analyzing data" indicator, no hint that progress is being made. They start wondering: Is it frozen? Should I refresh? The problem isn't the agent's capabilities - it's the communication gap between the agent running on the backend and the user interface. When agents perform multi-step reasoning, call external APIs, or execute complex tool chains, users deserve to see what's happening. They need streaming updates, intermediate results, and transparent progress indicators. Yet most agent frameworks force developers to choose between simple request/response patterns or building custom solutions to stream updates to their UIs. This is where AG-UI comes in. AG-UI is a fairly new event-based protocol that standardizes how agents communicate with user interfaces. Instead of every framework and development team inventing their own streaming solution, AG-UI provides a shared vocabulary of structured events that work consistently across different agent implementations. When an agent starts processing, calls a tool, generates text, or encounters an error, the UI receives explicit, typed events in real time. The beauty of AG-UI is its framework-agnostic design. While this blog post demonstrates integration with Microsoft Agent Framework (MAF), the same AG-UI protocol works with LangGraph, CrewAI, or any other compliant framework. Write your UI code once, and it works with any AG-UI-compliant backend. (Note: MAF supports both Python and .NET - this blog post focuses on the Python implementation.) TL;DR The Problem: Users don't get real-time updates while AI agents work behind the scenes - no progress indicators, no transparency into tool calls, and no insight into what's happening. The Solution: AG-UI is an open, event-based protocol that standardizes real-time communication between AI agents and user interfaces. Instead of each development team and framework inventing custom streaming solutions, AG-UI provides a shared vocabulary of structured events (like TOOL_CALL_START, TEXT_MESSAGE_CONTENT, RUN_FINISHED) that work across any compliant framework. Key Benefits: Framework-agnostic - Write UI code once, works with LangGraph, Microsoft Agent Framework, CrewAI, and more Real-time observability - See exactly what your agent is doing as it happens Server-Sent Events - Built on standard HTTP for universal compatibility Protocol-managed state - No manual conversation history tracking In This Post: You'll learn why AG-UI exists, how it works, and build a complete working application using Microsoft Agent Framework with Python - from server setup to client implementation. 
What You'll Learn This blog post walks through: Why AG-UI exists - how agent-UI communication has evolved and what problems current approaches couldn't solve How the protocol works - the key design choices that make AG-UI simple, reliable, and framework-agnostic Protocol architecture - the generic components and how AG-UI integrates with agent frameworks Building an AG-UI application - a complete working example using Microsoft Agent Framework with server, client, and step-by-step setup Understanding events - what happens under the hood when your agent runs and how to observe it Thinking in events - how building with AG-UI differs from traditional APIs, and what benefits this brings Making the right choice - when AG-UI is the right fit for your project and when alternatives might be better Estimated reading time: 15 minutes Who this is for: Developers building AI agents who want to provide real-time feedback to users, and teams evaluating standardized approaches to agent-UI communication To appreciate why AG-UI matters, we need to understand the journey that led to its creation. Let's trace how agent-UI communication has evolved through three distinct phases. The Evolution of Agent-UI Communication AI agents have become more capable over time. As they evolved, the way they communicated with user interfaces had to evolve as well. Here's how this evolution unfolded. Phase 1: Simple Request/Response In the early days of AI agent development, the interaction model was straightforward: send a question, wait for an answer, display the result. This synchronous approach mirrored traditional API calls and worked fine for simple scenarios. # Simple, but limiting response = agent.run("What's the weather in Paris?") display(response) # User waits... and waits... Works for: Quick queries that complete in seconds, simple Q&A interactions where immediate feedback and interactivity aren't critical. Breaks down: When agents need to call multiple tools, perform multi-step reasoning, or process complex queries that take 30+ seconds. Users see nothing but a loading spinner, with no insight into what's happening or whether the agent is making progress. This creates a poor user experience and makes it impossible to show intermediate results or allow user intervention. Recognizing these limitations, development teams began experimenting with more sophisticated approaches. Phase 2: Custom Streaming Solutions As agents became more sophisticated, teams recognized the need for incremental feedback and interactivity. Rather than waiting for the complete response, they implemented custom streaming solutions to show partial results as they became available. # Every team invents their own format for chunk in agent.stream("What's the weather?"): display(chunk) # But what about tool calls? Errors? Progress? This was a step forward for building interactive agent UIs, but each team solved the problem differently. Also, different frameworks had incompatible approaches - some streamed only text tokens, others sent structured JSON, and most provided no visibility into critical events like tool calls or errors. 
The problem:

- No standardization across frameworks - client code that works with LangGraph won't work with CrewAI, requiring separate implementations for each agent backend
- Each implementation handles tool calls differently - some send nothing during tool execution, others send unstructured messages
- Complex state management - clients must track conversation history, manage reconnections, and handle edge cases manually

The industry needed a better solution - a common protocol that could work across all frameworks while maintaining the benefits of streaming.

Phase 3: Standardized Protocol (AG-UI)

AG-UI emerged as a response to the fragmentation problem. Instead of each framework and development team inventing their own streaming solution, AG-UI provides a shared vocabulary of events that work consistently across different agent implementations.

```python
# Standardized events everyone understands
async for event in agent.run_stream("What's the weather?"):
    if event.type == "TEXT_MESSAGE_CONTENT":
        display_text(event.delta)
    elif event.type == "TOOL_CALL_START":
        show_tool_indicator(event.tool_name)
    elif event.type == "TOOL_CALL_RESULT":
        show_tool_result(event.result)
```

The key difference is structured observability. Rather than guessing what the agent is doing from unstructured text, clients receive explicit events for every stage of execution: when the agent starts, when it generates text, when it calls a tool, when that tool completes, and when the entire run finishes.

What's different: A standardized vocabulary of event types, complete observability into agent execution, and framework-agnostic clients that work with any AG-UI-compliant backend. You write your UI code once, and it works whether the backend uses Microsoft Agent Framework, LangGraph, or any other framework that speaks AG-UI.

Now that we've seen why AG-UI emerged and what problems it solves, let's examine the specific design decisions that make the protocol work. These choices weren't arbitrary - each one addresses concrete challenges in building reliable, observable agent-UI communication.

The Design Decisions Behind AG-UI

Why Server-Sent Events (SSE)?

| Aspect | WebSockets | SSE (AG-UI) |
| --- | --- | --- |
| Complexity | Bidirectional | Unidirectional (simpler) |
| Firewall/Proxy | Sometimes blocked | Standard HTTP |
| Reconnection | Manual implementation | Built-in browser support |
| Use case | Real-time games, chat | Agent responses (one-way) |

For agent interactions, you typically only need server→client communication, making SSE a simpler choice.

SSE solves the transport problem - how events travel from server to client. But once connected, how does the protocol handle conversation state across multiple interactions?

Why Protocol-Managed Threads?

```python
# Without protocol threads (client manages):
conversation_history = []
conversation_history.append({"role": "user", "content": message})
response = agent.complete(conversation_history)
conversation_history.append({"role": "assistant", "content": response})
# Complex, error-prone, doesn't work with multiple clients

# With AG-UI (protocol manages):
thread = agent.get_new_thread()            # Server creates and manages thread
agent.run_stream(message, thread=thread)   # Server maintains context
# Simple, reliable, shareable across clients
```

With transport and state management handled, the final piece is the actual messages flowing through the connection. What information should the protocol communicate, and how should it be structured?

Why Standardized Event Types?
Instead of parsing unstructured text, clients get typed events: RUN_STARTED - Agent begins (start loading UI) TEXT_MESSAGE_CONTENT - Text chunk (stream to user) TOOL_CALL_START - Tool invoked (show "searching...", "calculating...") TOOL_CALL_RESULT - Tool finished (show result, update UI) RUN_FINISHED - Complete (hide loading) This lets UIs react intelligently without custom parsing logic. Now that we understand the protocol's design choices, let's see how these pieces fit together in a complete system. Architecture Overview Here's how the components interact: The communication between these layers relies on a well-defined set of event types. Here are the core events that flow through the SSE connection: Core Event Types AG-UI provides a standardized set of event types to describe what's happening during an agent's execution: RUN_STARTED - agent begins execution TEXT_MESSAGE_START, TEXT_MESSAGE_CONTENT, TEXT_MESSAGE_END - streaming segments of text TOOL_CALL_START, TOOL_CALL_ARGS, TOOL_CALL_END, TOOL_CALL_RESULT - tool execution events RUN_FINISHED - agent has finished execution RUN_ERROR - error information This model lets the UI update as the agent runs, rather than waiting for the final response. The generic architecture above applies to any AG-UI implementation. Now let's see how this translates to Microsoft Agent Framework. AG-UI with Microsoft Agent Framework While AG-UI is framework-agnostic, this blog post demonstrates integration with Microsoft Agent Framework (MAF) using Python. MAF is available in both Python and .NET, giving you flexibility to build AG-UI applications in your preferred language. Understanding how MAF implements the protocol will help you build your own applications or work with other compliant frameworks. Integration Architecture The Microsoft Agent Framework integration involves several specialized layers that handle protocol translation and execution orchestration: Understanding each layer: FastAPI Endpoint - Handles HTTP requests and establishes SSE connections for streaming AgentFrameworkAgent - Protocol wrapper that translates between AG-UI events and Agent Framework operations Orchestrators - Manage execution flow, coordinate tool calling sequences, and handle state transitions ChatAgent - Your agent implementation with instructions, tools, and business logic ChatClient - Interface to the underlying language model (Azure OpenAI, OpenAI, or other providers) The good news? When you call add_agent_framework_fastapi_endpoint, all the middleware layers are configured automatically. You simply provide your ChatAgent, and the integration handles protocol translation, event streaming, and state management behind the scenes. Now that we understand both the protocol architecture and the Microsoft Agent Framework integration, let's build a working application. Hands-On: Building Your First AG-UI Application This section demonstrates how to build an AG-UI server and client using Microsoft Agent Framework and FastAPI. Prerequisites Before building your first AG-UI application, ensure you have: Python 3.10 or later installed Basic understanding of async/await patterns in Python Azure CLI installed and authenticated (az login) Azure OpenAI service endpoint and deployment configured (setup guide) Cognitive Services OpenAI Contributor role for your Azure OpenAI resource You'll also need to install the AG-UI integration package: pip install agent-framework-ag-ui --pre This automatically installs agent-framework-core, fastapi, and uvicorn as dependencies. 
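The server example in the next section loads its configuration from the environment via python-dotenv (load_dotenv). If you prefer a .env file over shell exports, a minimal sketch with placeholder values for the two required variables might look like this:

```
# .env - placeholder values; substitute your own Azure OpenAI resource and deployment
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
AZURE_OPENAI_DEPLOYMENT_NAME=gpt-4o
```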
With your environment configured, let's create the server that will host your agent and expose it via the AG-UI protocol.

Building the Server

Let's create a FastAPI server that hosts an AI agent and exposes it via AG-UI:

```python
# server.py
import os
from typing import Annotated

from dotenv import load_dotenv
from fastapi import FastAPI
from pydantic import Field

from agent_framework import ChatAgent, ai_function
from agent_framework.azure import AzureOpenAIChatClient
from agent_framework_ag_ui import add_agent_framework_fastapi_endpoint
from azure.identity import DefaultAzureCredential

# Load environment variables from .env file
load_dotenv()

# Validate environment configuration
openai_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
model_deployment = os.getenv("AZURE_OPENAI_DEPLOYMENT_NAME")

if not openai_endpoint:
    raise RuntimeError("Missing required environment variable: AZURE_OPENAI_ENDPOINT")
if not model_deployment:
    raise RuntimeError("Missing required environment variable: AZURE_OPENAI_DEPLOYMENT_NAME")

# Define tools the agent can use
@ai_function
def get_order_status(
    order_id: Annotated[str, Field(description="The order ID to look up (e.g., ORD-001)")]
) -> dict:
    """Look up the status of a customer order.

    Returns order status, tracking number, and estimated delivery date.
    """
    # Simulated order lookup
    orders = {
        "ORD-001": {"status": "shipped", "tracking": "1Z999AA1", "eta": "Jan 25, 2026"},
        "ORD-002": {"status": "processing", "tracking": None, "eta": "Jan 23, 2026"},
        "ORD-003": {"status": "delivered", "tracking": "1Z999AA3", "eta": "Delivered Jan 20"},
    }
    return orders.get(order_id, {"status": "not_found", "message": "Order not found"})

# Initialize Azure OpenAI client
chat_client = AzureOpenAIChatClient(
    credential=DefaultAzureCredential(),
    endpoint=openai_endpoint,
    deployment_name=model_deployment,
)

# Configure the agent with custom instructions and tools
agent = ChatAgent(
    name="CustomerSupportAgent",
    instructions="""You are a helpful customer support assistant.

You have access to a get_order_status tool that can look up order information.

IMPORTANT: When a user mentions an order ID (like ORD-001, ORD-002, etc.), you MUST call
the get_order_status tool to retrieve the actual order details. Do NOT make up or guess
order information.

After calling get_order_status, provide the actual results to the user in a friendly format.""",
    chat_client=chat_client,
    tools=[get_order_status],
)

# Initialize FastAPI application
app = FastAPI(
    title="AG-UI Customer Support Server",
    description="Interactive AI agent server using AG-UI protocol with tool calling"
)

# Mount the AG-UI endpoint
add_agent_framework_fastapi_endpoint(app, agent, path="/chat")

def main():
    """Entry point for the AG-UI server."""
    import uvicorn
    print("Starting AG-UI server on http://localhost:8000")
    uvicorn.run(app, host="0.0.0.0", port=8000, log_level="info")

# Run the application
if __name__ == "__main__":
    main()
```

What's happening here:

- We define a get_order_status tool with the @ai_function decorator
- We use Annotated and Field for parameter descriptions to help the agent understand when and how to use the tool
- We create an Azure OpenAI chat client with credential authentication
- The ChatAgent is configured with domain-specific instructions and the tools parameter
- add_agent_framework_fastapi_endpoint automatically handles SSE streaming and tool execution
- The server exposes the agent at the /chat endpoint

Note: This example uses Azure OpenAI, but AG-UI works with any chat model.
You can also integrate with Azure AI Foundry's model catalog or use other LLM providers. Tool calling is supported by most modern LLMs including GPT-4, GPT-4o, and Claude models.

To run this server:

```bash
# Set your Azure OpenAI credentials
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
export AZURE_OPENAI_DEPLOYMENT_NAME="gpt-4o"

# Start the server
python server.py
```

With your server running and exposing the AG-UI endpoint, the next step is building a client that can connect and consume the event stream.

Streaming Results to Clients

With the server running, clients can connect and stream events as the agent processes requests. Here's a Python client that demonstrates the streaming capabilities:

```python
# client.py
import asyncio
import os

from dotenv import load_dotenv

from agent_framework import ChatAgent, FunctionCallContent, FunctionResultContent
from agent_framework_ag_ui import AGUIChatClient

# Load environment variables from .env file
load_dotenv()

async def interactive_chat():
    """Interactive chat session with streaming responses."""
    # Connect to the AG-UI server
    base_url = os.getenv("AGUI_SERVER_URL", "http://localhost:8000/chat")
    print(f"Connecting to: {base_url}\n")

    # Initialize the AG-UI client
    client = AGUIChatClient(endpoint=base_url)

    # Create a local agent representation
    agent = ChatAgent(chat_client=client)

    # Start a new conversation thread
    conversation_thread = agent.get_new_thread()

    print("Chat started! Type 'exit' or 'quit' to end the session.\n")

    try:
        while True:
            # Collect user input
            user_message = input("You: ")

            # Handle empty input
            if not user_message.strip():
                print("Please enter a message.\n")
                continue

            # Check for exit commands
            if user_message.lower() in ["exit", "quit", "bye"]:
                print("\nGoodbye!")
                break

            # Stream the agent's response
            print("Agent: ", end="", flush=True)

            # Track tool calls to avoid duplicate prints
            seen_tools = set()

            async for update in agent.run_stream(user_message, thread=conversation_thread):
                # Display text content
                if update.text:
                    print(update.text, end="", flush=True)

                # Display tool calls and results
                for content in update.contents:
                    if isinstance(content, FunctionCallContent):
                        # Only print each tool call once
                        if content.call_id not in seen_tools:
                            seen_tools.add(content.call_id)
                            print(f"\n[Calling tool: {content.name}]", flush=True)
                    elif isinstance(content, FunctionResultContent):
                        # Only print each result once
                        result_id = f"result_{content.call_id}"
                        if result_id not in seen_tools:
                            seen_tools.add(result_id)
                            result_text = content.result if isinstance(content.result, str) else str(content.result)
                            print(f"[Tool result: {result_text}]", flush=True)

            print("\n")  # New line after response completes

    except KeyboardInterrupt:
        print("\n\nChat interrupted by user.")
    except ConnectionError as e:
        print(f"\nConnection error: {e}")
        print("Make sure the server is running.")
    except Exception as e:
        print(f"\nUnexpected error: {e}")

def main():
    """Entry point for the AG-UI client."""
    asyncio.run(interactive_chat())

if __name__ == "__main__":
    main()
```

Key features:

- The client connects to the AG-UI endpoint using AGUIChatClient with the endpoint parameter
- run_stream() yields updates containing text and content as they arrive
- Tool calls are detected using FunctionCallContent and displayed with [Calling tool: ...]
- Tool results are detected using FunctionResultContent and displayed with [Tool result: ...]
- Deduplication logic (seen_tools set) prevents printing the same tool call multiple times as it streams
- Thread management maintains conversation context across messages
- Graceful error handling for connection issues

To use the client:

```bash
# Optional: specify custom server URL
export AGUI_SERVER_URL="http://localhost:8000/chat"

# Start the interactive chat
python client.py
```

Example Session:

```
Connecting to: http://localhost:8000/chat

Chat started! Type 'exit' or 'quit' to end the session.

You: What's the status of order ORD-001?
Agent:
[Calling tool: get_order_status]
[Tool result: {"status": "shipped", "tracking": "1Z999AA1", "eta": "Jan 25, 2026"}]
Your order ORD-001 has been shipped!
- Tracking Number: 1Z999AA1
- Estimated Delivery Date: January 25, 2026
You can use the tracking number to monitor the delivery progress.

You: Can you check ORD-002?
Agent:
[Calling tool: get_order_status]
[Tool result: {"status": "processing", "tracking": null, "eta": "Jan 23, 2026"}]
Your order ORD-002 is currently being processed.
- Status: Processing
- Estimated Delivery: January 23, 2026
Your order should ship soon, and you'll receive a tracking number once it's on the way.

You: exit

Goodbye!
```

The client we just built handles events at a high level, abstracting away the details. But what's actually flowing through that SSE connection? Let's peek under the hood.

Event Types You'll See

As the server streams back responses, clients receive a series of structured events. If you were to observe the raw SSE stream (e.g., using curl), you'd see events like:

```bash
curl -N http://localhost:8000/chat \
  -H "Content-Type: application/json" \
  -H "Accept: text/event-stream" \
  -d '{"messages": [{"role": "user", "content": "What'\''s the status of order ORD-001?"}]}'
```

Sample event stream (with tool calling):

```
data: {"type":"RUN_STARTED","threadId":"eb4d9850-14ef-446c-af4b-23037acda9e8","runId":"chatcmpl-xyz"}
data: {"type":"TEXT_MESSAGE_START","messageId":"e8648880-a9ff-4178-a17d-4a6d3ec3d39c","role":"assistant"}
data: {"type":"TOOL_CALL_START","toolCallId":"call_GTWj2N3ZyYiiQIjg3fwmiQ8y","toolCallName":"get_order_status","parentMessageId":"e8648880-a9ff-4178-a17d-4a6d3ec3d39c"}
data: {"type":"TOOL_CALL_ARGS","toolCallId":"call_GTWj2N3ZyYiiQIjg3fwmiQ8y","delta":"{\""}
data: {"type":"TOOL_CALL_ARGS","toolCallId":"call_GTWj2N3ZyYiiQIjg3fwmiQ8y","delta":"order"}
data: {"type":"TOOL_CALL_ARGS","toolCallId":"call_GTWj2N3ZyYiiQIjg3fwmiQ8y","delta":"_id"}
data: {"type":"TOOL_CALL_ARGS","toolCallId":"call_GTWj2N3ZyYiiQIjg3fwmiQ8y","delta":"\":\""}
data: {"type":"TOOL_CALL_ARGS","toolCallId":"call_GTWj2N3ZyYiiQIjg3fwmiQ8y","delta":"ORD"}
data: {"type":"TOOL_CALL_ARGS","toolCallId":"call_GTWj2N3ZyYiiQIjg3fwmiQ8y","delta":"-"}
data: {"type":"TOOL_CALL_ARGS","toolCallId":"call_GTWj2N3ZyYiiQIjg3fwmiQ8y","delta":"001"}
data: {"type":"TOOL_CALL_ARGS","toolCallId":"call_GTWj2N3ZyYiiQIjg3fwmiQ8y","delta":"\"}"}
data: {"type":"TOOL_CALL_END","toolCallId":"call_GTWj2N3ZyYiiQIjg3fwmiQ8y"}
data: {"type":"TOOL_CALL_RESULT","messageId":"f048cb0a-a049-4a51-9403-a05e4820438a","toolCallId":"call_GTWj2N3ZyYiiQIjg3fwmiQ8y","content":"{\"status\": \"shipped\", \"tracking\": \"1Z999AA1\", \"eta\": \"Jan 25, 2026\"}","role":"tool"}
data: {"type":"TEXT_MESSAGE_START","messageId":"8215fc88-8cb6-4ce4-8bdb-a8715dcd26cf","role":"assistant"}
data: {"type":"TEXT_MESSAGE_CONTENT","messageId":"8215fc88-8cb6-4ce4-8bdb-a8715dcd26cf","delta":"Your"}
data: {"type":"TEXT_MESSAGE_CONTENT","messageId":"8215fc88-8cb6-4ce4-8bdb-a8715dcd26cf","delta":" order"}
data: {"type":"TEXT_MESSAGE_CONTENT","messageId":"8215fc88-8cb6-4ce4-8bdb-a8715dcd26cf","delta":" ORD"}
data: {"type":"TEXT_MESSAGE_CONTENT","messageId":"8215fc88-8cb6-4ce4-8bdb-a8715dcd26cf","delta":"-"}
data: {"type":"TEXT_MESSAGE_CONTENT","messageId":"8215fc88-8cb6-4ce4-8bdb-a8715dcd26cf","delta":"001"}
data: {"type":"TEXT_MESSAGE_CONTENT","messageId":"8215fc88-8cb6-4ce4-8bdb-a8715dcd26cf","delta":" has"}
data: {"type":"TEXT_MESSAGE_CONTENT","messageId":"8215fc88-8cb6-4ce4-8bdb-a8715dcd26cf","delta":" been"}
data: {"type":"TEXT_MESSAGE_CONTENT","messageId":"8215fc88-8cb6-4ce4-8bdb-a8715dcd26cf","delta":" shipped"}
data: {"type":"TEXT_MESSAGE_CONTENT","messageId":"8215fc88-8cb6-4ce4-8bdb-a8715dcd26cf","delta":"!"}
... (additional TEXT_MESSAGE_CONTENT events streaming the response) ...
data: {"type":"TEXT_MESSAGE_END","messageId":"8215fc88-8cb6-4ce4-8bdb-a8715dcd26cf"}
data: {"type":"RUN_FINISHED","threadId":"eb4d9850-14ef-446c-af4b-23037acda9e8","runId":"chatcmpl-xyz"}
```

Understanding the flow:

- RUN_STARTED - Agent begins processing the request
- TEXT_MESSAGE_START - First message starts (will contain tool calls)
- TOOL_CALL_START - Agent invokes the get_order_status tool
- Multiple TOOL_CALL_ARGS events - Arguments stream incrementally as JSON chunks ({"order_id":"ORD-001"})
- TOOL_CALL_END - Tool invocation structure complete
- TOOL_CALL_RESULT - Tool execution finished with result data
- TEXT_MESSAGE_START - Second message starts (the final response)
- Multiple TEXT_MESSAGE_CONTENT events - Response text streams word-by-word
- TEXT_MESSAGE_END - Response message complete
- RUN_FINISHED - Entire run completed successfully

This granular event model enables rich UI experiences - showing tool execution indicators ("Searching...", "Calculating..."), displaying intermediate results, and providing complete transparency into the agent's reasoning process.

Seeing the raw events helps, but truly working with AG-UI requires a shift in how you think about agent interactions. Let's explore this conceptual change.

The Mental Model Shift

Traditional API Thinking

```python
# Imperative: Call and wait
response = agent.run("What's 2+2?")
print(response)  # "The answer is 4"
```

Mental model: Function call with return value

AG-UI Thinking

```python
# Reactive: Subscribe to events
async for event in agent.run_stream("What's 2+2?"):
    match event.type:
        case "RUN_STARTED":
            show_loading()
        case "TEXT_MESSAGE_CONTENT":
            display_chunk(event.delta)
        case "RUN_FINISHED":
            hide_loading()
```

Mental model: Observable stream of events

This shift feels similar to:

- Moving from synchronous to async code
- Moving from REST to event-driven architecture
- Moving from polling to pub/sub

This mental shift isn't just philosophical - it unlocks concrete benefits that weren't possible with request/response patterns.

What You Gain

Observability

```
# You can SEE what the agent is doing
TOOL_CALL_START: "get_order_status"
TOOL_CALL_ARGS: {"order_id": "ORD-001"}
TOOL_CALL_RESULT: {"status": "shipped", "tracking": "1Z999AA1", "eta": "Jan 25, 2026"}
TEXT_MESSAGE_START: "Your order ORD-001 has been shipped..."
```

Interruptibility

```python
# Future: Cancel long-running operations
async for event in agent.run_stream(query):
    if user_clicked_cancel:
        await agent.cancel(thread_id, run_id)
        break
```

Transparency

```
# Users see the reasoning process
"Looking up order ORD-001..."
"Order found: Status is 'shipped'"
"Retrieving tracking information..."
"Your order has been shipped with tracking number 1Z999AA1..."
```
To put these benefits in context, here's how AG-UI compares to traditional approaches across key dimensions:

AG-UI vs. Traditional Approaches

| Aspect | Traditional REST | Custom Streaming | AG-UI |
| --- | --- | --- | --- |
| Connection Model | Request/Response | Varies | Server-Sent Events |
| State Management | Manual | Manual | Protocol-managed |
| Tool Calling | Invisible | Custom format | Standardized events |
| Framework | Varies | Framework-locked | Framework-agnostic |
| Browser Support | Universal | Varies | Universal |
| Implementation | Simple | Complex | Moderate |
| Ecosystem | N/A | Isolated | Growing |

You've now seen AG-UI's design principles, implementation details, and conceptual foundations. But the most important question remains: should you actually use it?

Conclusion: Is AG-UI Right for Your Project?

AG-UI represents a shift toward standardized, observable agent interactions. Before adopting it, understand where the protocol stands and whether it fits your needs.

Protocol Maturity

The protocol is stable enough for production use but still evolving. Ready now: Core specification stable, Microsoft Agent Framework integration available, FastAPI/Python implementation mature, basic streaming and threading work reliably.

Choose AG-UI If You...

- Are building new agent projects - No legacy API to maintain, want future compatibility with the emerging ecosystem
- Need streaming observability - Multi-step workflows where users benefit from seeing each stage of execution
- Want framework flexibility - Same client code works with any AG-UI-compliant backend
- Are comfortable with evolving standards - Can adapt to protocol changes as it matures

Stick with Alternatives If You...

- Have working solutions - Custom streaming working well, migration cost not justified
- Need guaranteed stability - Mission-critical systems where breaking changes are unacceptable
- Build simple agents - Single-step request/response without tool calling or streaming needs
- Work in a risk-averse environment - Large existing implementations where proven approaches are required

Beyond individual project decisions, it's worth considering AG-UI's role in the broader ecosystem.

The Bigger Picture

While this blog post focused on Microsoft Agent Framework, AG-UI's true power lies in its broader mission: creating a common language for agent-UI communication across the entire ecosystem. As more frameworks adopt it, the real value emerges: write your UI once, work with any compliant agent framework. Think of it like GraphQL for APIs or OpenAPI for REST - a standardization layer that benefits the entire ecosystem.

The protocol is young, but the problem it solves is real. Whether you adopt it now or wait for broader adoption, understanding AG-UI helps you make informed architectural decisions for your agent applications.

Ready to dive deeper? Here are the official resources to continue your AG-UI journey.
Resources

AG-UI & Microsoft Agent Framework

- Getting Started with AG-UI (Microsoft Learn) - Official tutorial
- AG-UI Integration Overview - Architecture and concepts
- AG-UI Protocol Specification - Official protocol documentation
- Backend Tool Rendering - Adding function tools
- Security Considerations - Production security guidance
- Microsoft Agent Framework Documentation - Framework overview
- AG-UI Dojo Examples - Live demonstrations

UI Components & Integration

- CopilotKit for Microsoft Agent Framework - React component library

Community & Support

- Microsoft Q&A - Community support
- Agent Framework GitHub - Source code and issues

Related Technologies

- Azure AI Foundry Documentation - Azure AI platform
- FastAPI Documentation - Web framework
- Server-Sent Events (SSE) Specification - Protocol standard

This blog post introduces AG-UI with Microsoft Agent Framework, focusing on fundamental concepts and building your first interactive agent application.

Agents League: Two Weeks, Three Tracks, One Challenge
We're inviting all developers to join Agents League, running February 16-27. It's a two-week challenge where you'll build AI agents using production-ready tools, learn from live coding sessions, and get feedback directly from Microsoft product teams. We've put together starter kits for each track to help you get up and running quickly; each kit also includes requirements and guidelines. Whether you want to explore what GitHub Copilot can do beyond autocomplete, build reasoning agents on Microsoft Foundry, or create enterprise integrations for Microsoft 365 Copilot, we have a track for you.

Important: Register first to be eligible for prizes and your digital badge. Without registration, you won't qualify for awards or receive a badge when you submit.

What Is Agents League?

It's a 2-week competition that combines learning with building:

- 📽️ Live coding battles – Watch product teams, MVPs, and community members tackle challenges in real-time on Microsoft Reactor
- 💻 Async challenges – Build at your own pace, on your schedule
- 💬 Discord community – Connect with other participants, join AMAs, and get help when you need it
- 🏆 Prizes – $500 per track winner, plus GitHub Copilot Pro subscriptions for top picks

The Three Tracks

- 🎨 Creative Apps — Build with GitHub Copilot (Chat, CLI, or SDK)
- 🧠 Reasoning Agents — Build with Microsoft Foundry
- 💼 Enterprise Agents — Build with M365 Agents Toolkit (or Copilot Studio)

More details on each track below, or jump straight to the starter kits.

The Schedule

Agents League starts on February 16th and runs through February 27th. Within these two weeks, we host live battles on Reactor and AMA sessions on Discord.

Week 1: Live Battles (Feb 17-19)

We're kicking off with live coding battles streamed on Microsoft Reactor. Watch experienced developers compete in real-time, explaining their approach and architectural decisions as they go.

- Tue Feb 17, 9 AM PT — 🎨 Creative Apps battle
- Wed Feb 18, 9 AM PT — 🧠 Reasoning Agents battle
- Thu Feb 19, 9 AM PT — 💼 Enterprise Agents battle

All sessions are recorded, so you can watch on your own schedule.

Week 2: Build + AMAs (Feb 24-26)

This is your time to build and ask questions on Discord. The async format means you work when it suits you: evenings, weekends, whatever fits your schedule. We're also hosting AMAs on Discord where you can ask questions directly to Microsoft experts and product teams:

- Tue Feb 24, 9 AM PT — 🎨 Creative Apps AMA
- Wed Feb 25, 9 AM PT — 🧠 Reasoning Agents AMA
- Thu Feb 26, 9 AM PT — 💼 Enterprise Agents AMA

Bring your questions, get help when you're stuck, and share what you're building with the community.

Pick Your Track

We've created a starter kit for each track with setup guides, project ideas, and example scenarios to help you get started quickly.

🎨 Creative Apps

Tool: GitHub Copilot (Chat, CLI, or SDK)

Build innovative, imaginative applications that showcase the potential of AI-assisted development. All application types are welcome: web apps, CLI tools, games, mobile apps, desktop applications, and more. The starter kit walks you through GitHub Copilot's different modes and provides prompting tips to get the best results. View the Creative Apps starter kit.

🧠 Reasoning Agents

Tool: Microsoft Foundry (UI or SDK) and/or Microsoft Agent Framework

Build a multi-agent system that leverages advanced reasoning capabilities to solve complex problems. This track focuses on agents that can plan, reason through multi-step problems, and collaborate.
The starter kit includes architecture patterns, reasoning strategies (planner-executor, critic/verifier, self-reflection), and integration guides for tools and MCP servers. View the Reasoning Agents starter kit.

💼 Enterprise Agents

Tool: M365 Agents Toolkit or Copilot Studio

Create intelligent agents that extend Microsoft 365 Copilot to address real-world enterprise scenarios. Your agent must work on Microsoft 365 Copilot Chat. Bonus points for: MCP server integration, OAuth security, Adaptive Cards UI, connected agents (multi-agent architecture). View the Enterprise Agents starter kit.

Prizes & Recognition

To be eligible for prizes and your digital badge, you must register before submitting your project.

Category Winners ($500 each):

- 🎨 Creative Apps winner
- 🧠 Reasoning Agents winner
- 💼 Enterprise Agents winner

GitHub Copilot Pro subscriptions:

- Community Favorite (voted by participants on Discord)
- Product Team Picks (selected by Microsoft product teams)

Everyone who registers and submits a project wins a digital badge to showcase their participation. Beyond the prizes, every participant gets feedback from the teams who built these tools, a valuable opportunity to learn and improve your approach to AI agent development.

How to Get Started

- Register first — This is required to be eligible for prizes and to receive your digital badge. Without registration, your submission won't qualify for awards or a badge.
- Pick a track — Choose one track. Explore the starter kits to help you decide.
- Watch the battles — See how experienced developers approach these challenges. Great for learning even if you're still deciding whether to compete.
- Build your project — You have until Feb 27. Work on your own schedule.
- Submit via GitHub — Open an issue using the project submission template.
- Join us on Discord — Get help, share your progress, and vote for your favorite projects on Discord.

Links

- Register: https://aka.ms/agentsleague/register
- Starter Kits: https://github.com/microsoft/agentsleague/starter-kits
- Discord: https://aka.ms/agentsleague/discord
- Live Battles: https://aka.ms/agentsleague/battles
- Submit Project: Project submission template
From Zero to 16 Games in 2 Hours: Teaching Prompt Engineering to Students with GitHub Copilot CLI Introduction What happens when you give a room full of 14-year-olds access to AI-powered development tools and challenge them to build games? You might expect chaos, confusion, or at best, a few half-working prototypes. Instead, we witnessed something remarkable: 16 fully functional HTML5 games created in under two hours, all from students with varying programming experience. This wasn't magic, it was the power of GitHub Copilot CLI combined with effective prompt engineering. By teaching students to communicate clearly with AI, we transformed a traditional coding workshop into a rapid prototyping session that exceeded everyone's expectations. The secret weapon? A technique called "one-shot prompting" that enables anyone to generate complete, working applications from a single, well-crafted prompt. In this article, we'll explore how we structured this workshop using CopilotCLI-OneShotPromptGameDev, a methodology designed to teach prompt engineering fundamentals while producing tangible, exciting results. Whether you're an educator planning STEM workshops, a developer exploring AI-assisted coding, or simply curious about how young people can leverage AI tools effectively, this guide provides a practical blueprint you can replicate. What is GitHub Copilot CLI? GitHub Copilot CLI extends the familiar Copilot experience beyond your code editor into the command line. While Copilot in VS Code suggests code completions as you type, Copilot CLI allows you to have conversational interactions with AI directly in your terminal. You describe what you want to accomplish in natural language, and the AI responds with shell commands, explanations, or in our case, complete code files. This terminal-based approach offers several advantages for learning and rapid prototyping. Students don't need to configure complex IDE settings or navigate unfamiliar interfaces. They simply type their request, review the AI's output, and iterate. The command line provides a transparent view of exactly what's happening, no hidden abstractions or magical "autocomplete" that obscures the learning process. For our workshop, Copilot CLI served as a bridge between students' creative ideas and working code. They could describe a game concept in plain English, watch the AI generate HTML, CSS, and JavaScript, then immediately test the result in a browser. This rapid feedback loop kept engagement high and made the connection between language and code tangible. Installing GitHub Copilot CLI Setting up Copilot CLI requires a few straightforward steps. Before the workshop, we ensured all machines were pre-configured, but students also learned the installation process as part of understanding how developer tools work. First, you'll need Node.js installed on your system. Copilot CLI runs as a Node package, so this is a prerequisite: # Check if Node.js is installed node --version # If not installed, download from https://nodejs.org/ # Or use a package manager: # Windows (winget) winget install OpenJS.NodeJS.LTS # macOS (Homebrew) brew install node # Linux (apt) sudo apt install nodejs npm These commands verify your Node.js installation or guide you through installing it using your operating system's preferred package manager. 
Next, install the GitHub CLI, which provides the foundation for Copilot CLI: # Windows winget install GitHub.cli # macOS brew install gh # Linux sudo apt install gh This installs the GitHub command-line interface, which handles authentication and provides the framework for Copilot integration. With GitHub CLI installed, authenticate with your GitHub account: gh auth login This command initiates an interactive authentication flow that connects your terminal to your GitHub account, enabling access to Copilot features. Finally, install the Copilot CLI extension: gh extension install github/gh-copilot This adds Copilot capabilities to your GitHub CLI installation, enabling the conversational AI features we'll use for game development. Verify the installation by running: gh copilot --help If you see the help output with available commands, you're ready to start prompting. The entire setup takes about 5-10 minutes on a fresh machine, making it practical for classroom environments. Understanding One-Shot Prompting Traditional programming education follows an incremental approach: learn syntax, understand concepts, build small programs, gradually tackle larger projects. This method is thorough but slow. One-shot prompting inverts this model—you start with the complete vision and let AI handle the implementation details. A one-shot prompt provides the AI with all the context it needs to generate a complete, working solution in a single response. Instead of iteratively refining code through multiple exchanges, you craft one comprehensive prompt that specifies requirements, constraints, styling preferences, and technical specifications. The AI then produces complete, functional code. This approach teaches a crucial skill: clear communication of technical requirements. Students must think through their entire game concept before typing. What does the game look like? How does the player interact with it? What happens when they win or lose? By forcing this upfront thinking, one-shot prompting develops the same analytical skills that professional developers use when writing specifications or planning architectures. The technique also demonstrates a powerful principle: with sufficient context, AI can handle implementation complexity while humans focus on creativity and design. Students learned they could create sophisticated games without memorizing JavaScript syntax—they just needed to describe their vision clearly enough for the AI to understand. Crafting Effective Prompts for Game Development The difference between a vague prompt and an effective one-shot prompt is the difference between frustration and success. We taught students a structured approach to prompt construction that consistently produced working games. Start with the game type and core mechanic. Don't just say "make a game"—specify what kind: Create a complete HTML5 game where the player controls a spaceship that must dodge falling asteroids. This opening establishes the fundamental gameplay loop: control a spaceship, avoid obstacles. The AI now has a clear mental model to work from. Add visual and interaction details. Games are visual experiences, so specify how things should look and respond: Create a complete HTML5 game where the player controls a spaceship that must dodge falling asteroids. The spaceship should be a blue triangle at the bottom of the screen, controlled by left and right arrow keys. Asteroids are brown circles that fall from the top at random positions and increasing speeds. 
These additions provide concrete visual targets and define the input mechanism. The AI can now generate specific CSS colors and event handlers.

Define win/lose conditions and scoring:

Create a complete HTML5 game where the player controls a spaceship that must dodge falling asteroids. The spaceship should be a blue triangle at the bottom of the screen, controlled by left and right arrow keys. Asteroids are brown circles that fall from the top at random positions and increasing speeds. Display a score that increases every second the player survives. The game ends when an asteroid hits the spaceship, showing a "Game Over" screen with the final score and a "Play Again" button.

This complete prompt now specifies the entire game loop: gameplay, scoring, losing, and restarting. The AI has everything needed to generate a fully playable game.

The formula students learned: Game Type + Visual Description + Controls + Rules + Win/Lose + Score = Complete Game Prompt.

Running the Workshop: Structure and Approach

Our two-hour workshop followed a carefully designed structure that balanced instruction with hands-on creation. We partnered with University College London and gave students access to GitHub Education resources designed specifically for classroom settings, including student accounts with Copilot access and tools like VS Code, Azure for Students, and Visual Studio Code for Education.

The first 20 minutes covered fundamentals: what is AI, how does Copilot work, and why does prompt quality matter? We demonstrated this with a live example, showing how "make a game" produces confused output while a detailed prompt generates playable code. This contrast immediately captured students' attention: they could see the direct relationship between their words and the AI's output.

The next 15 minutes focused on the prompt formula. We broke down several example prompts, highlighting each component: game type, visuals, controls, rules, scoring. Students practiced identifying these elements in prompts before writing their own. This analysis phase prepared them to construct effective prompts independently.

The remaining 85 minutes were dedicated to creation. Students worked individually or in pairs, brainstorming game concepts, writing prompts, generating code, testing in browsers, and iterating. Instructors circulated to help debug prompts (not code, an important distinction) and encourage experimentation.

We deliberately avoided teaching JavaScript syntax. When students encountered bugs, we guided them to refine their prompts rather than manually fix code. This maintained focus on the core skill: communicating with AI effectively. Surprisingly, this approach resulted in fewer bugs overall because students learned to be more precise in their initial descriptions.

Student Projects: The Games They Created

The diversity of games produced in 85 minutes of building time amazed everyone present. Students didn't just follow a template; they invented entirely new concepts and successfully communicated them to Copilot CLI.

One student created a "Fruit Ninja" clone where players clicked falling fruit to slice it before it hit the ground. Another built a typing speed game that challenged players to correctly type increasingly difficult words against a countdown timer. A pair of collaborators produced a two-player tank battle where each player controlled their tank with different keyboard keys.
Several students explored educational games: a math challenge where players solve equations to destroy incoming meteors, a geography quiz with animated maps, and a vocabulary builder where correct definitions unlock new levels. These projects demonstrated that one-shot prompting isn't limited to entertainment, students naturally gravitated toward useful applications. The most complex project was a procedurally generated maze game with fog-of-war mechanics. The student spent extra time on their prompt, specifying exactly how visibility should work around the player character. Their detailed approach paid off with a surprisingly sophisticated result that would typically require hours of manual coding. By the session's end, we had 16 complete, playable HTML5 games. Every student who participated produced something they could share with friends and family a tangible achievement that transformed an abstract "coding workshop" into a genuine creative accomplishment. Key Benefits of Copilot CLI for Rapid Prototyping Our workshop revealed several advantages that make Copilot CLI particularly valuable for rapid prototyping scenarios, whether in educational settings or professional development. Speed of iteration fundamentally changes what's possible. Traditional game development requires hours to produce even simple prototypes. With Copilot CLI, students went from concept to playable game in minutes. This compressed timeline enables experimentation, if your first idea doesn't work, try another. This psychological freedom to fail fast and try again proved more valuable than any technical instruction. Accessibility removes barriers to entry. Students with no prior coding experience produced results comparable to those who had taken programming classes. The playing field leveled because success depended on creativity and communication rather than memorized syntax. This democratization of development opens doors for students who might otherwise feel excluded from technical fields. Focus on design over implementation teaches transferable skills. Whether students eventually become programmers, designers, product managers, or pursue entirely different careers, the ability to clearly specify requirements and think through complete systems applies universally. They learned to think like system designers, not just coders. The feedback loop keeps engagement high. Seeing your words transform into working software within seconds creates an addictive cycle of creation and testing. Students who typically struggle with attention during lectures remained focused throughout the building session. The immediate gratification of seeing their games work motivated continuous refinement. Debugging through prompts teaches root cause analysis. When games didn't work as expected, students had to analyze what they'd asked for versus what they received. This comparison exercise developed critical thinking about specifications a skill that serves developers throughout their careers. Tips for Educators: Running Your Own Workshop If you're planning to replicate this workshop, several lessons from our experience will help ensure success. Pre-configure machines whenever possible. While installation is straightforward, classroom time is precious. Having Copilot CLI ready on all devices lets you dive into content immediately. If pre-configuration isn't possible, allocate the first 15-20 minutes specifically for setup and troubleshoot as a group. Prepare example prompts across difficulty levels. 
Some students will grasp one-shot prompting immediately; others will need more scaffolding. Having templates ranging from simple ("Create Pong") to complex (the spaceship example above) lets you meet students where they are. Emphasize that "prompt debugging" is the goal. When students ask for help fixing broken code, redirect them to examine their prompt. What did they ask for? What did they get? Where's the gap? This redirection reinforces the workshop's core learning objective and builds self-sufficiency. Celebrate and share widely. Build in time at the end for students to demonstrate their games. This showcase moment validates their work and often inspires classmates to try new approaches in future sessions. Consider creating a shared folder or simple website where all games can be accessed after the workshop. Access GitHub Education resources at education.github.com before your workshop. The GitHub Education program provides free access to developer tools for students and educators, including Copilot. The resources there include curriculum materials, teaching guides, and community support that can enhance your workshop. Beyond Games: Where This Leads The techniques students learned extend far beyond game development. One-shot prompting with Copilot CLI works for any development task: creating web pages, building utilities, generating data processing scripts, or prototyping application interfaces. The fundamental skill, communicating requirements clearly to AI applies wherever AI-assisted development tools are used. Several students have continued exploring after the workshop. Some discovered they enjoy the creative aspects of game design and are learning traditional programming to gain more control. Others found that prompt engineering itself interests them, they're exploring how different phrasings affect AI outputs across various domains. For professional developers, the workshop's lessons apply directly to working with Copilot, ChatGPT, and other AI coding assistants. The ability to craft precise, complete prompts determines whether these tools save time or create confusion. Investing in prompt engineering skills yields returns across every AI-assisted workflow. Key Takeaways Clear prompts produce working code: The one-shot prompting formula (Game Type + Visuals + Controls + Rules + Win/Lose + Score) reliably generates playable games from single prompts Copilot CLI democratizes development: Students with no coding experience created functional applications by focusing on communication rather than syntax Rapid iteration enables experimentation: Minutes-per-prototype timelines encourage creative risk-taking and learning from failures Prompt debugging builds analytical skills: Comparing intended versus actual results teaches specification writing and root cause analysis Sixteen games in two hours is achievable: With proper structure and preparation, young students can produce impressive results using AI-assisted development Conclusion and Next Steps Our workshop demonstrated that AI-assisted development tools like GitHub Copilot CLI aren't just productivity boosters for experienced programmers, they're powerful educational instruments that make software creation accessible to beginners. By focusing on prompt engineering rather than traditional syntax instruction, we enabled 14-year-old students to produce complete, functional games in a fraction of the time traditional methods would require. The sixteen games created during those two hours represent more than just workshop outputs. 
They represent a shift in how we might teach technical creativity: start with vision, communicate clearly, iterate quickly. Whether students pursue programming careers or not, they've gained experience in thinking systematically about requirements and translating ideas into specifications that produce real results.

To explore this approach yourself, visit the CopilotCLI-OneShotPromptGameDev repository for prompt templates, workshop materials, and example games. For educational resources and student access to GitHub tools including Copilot, explore GitHub Education. And most importantly, start experimenting. Write a prompt, generate some code, and see what you can create in the next few minutes.

Resources

- CopilotCLI-OneShotPromptGameDev Repository - Workshop materials, prompt templates, and example games
- GitHub Education - Free developer tools and resources for students and educators
- GitHub Copilot CLI Documentation - Official installation and usage guide
- GitHub CLI - Foundation tool required for Copilot CLI
- GitHub Copilot - Overview of Copilot features and pricing
Build an AI-Powered Space Invaders Game: Integrating LLMs into HTML5 Games with Microsoft Foundry Local Introduction What if your game could talk back to you? Imagine playing Space Invaders while an AI commander taunts you during battle, delivers personalized mission briefings, and provides real-time feedback based on your performance. This isn't science fiction it's something you can build today using HTML, JavaScript, and a locally-running AI model. In this tutorial, we'll explore how to create an HTML5 game with integrated Large Language Model (LLM) features using Microsoft Foundry Local. You'll learn how to combine classic game development with modern AI capabilities, all running entirely on your own machine—no cloud services, no API costs, no internet connection required during gameplay. We'll be working with the Space Invaders - AI Commander Edition project, which demonstrates exactly how to architect games that leverage local AI. Whether you're a student learning game development, exploring AI integration patterns, or building your portfolio, this guide provides practical, hands-on experience with technologies that are reshaping how we build interactive applications. What You'll Learn By the end of this tutorial, you'll understand how to combine traditional web development with local AI inference. These skills transfer directly to building chatbots, interactive tutorials, AI-enhanced productivity tools, and any application where you want intelligent, context-aware responses. Set up Microsoft Foundry Local for running AI models on your machine Understand the architecture of games that integrate LLM features Use GitHub Copilot CLI to accelerate your development workflow Implement AI-powered game features like dynamic commentary and adaptive feedback Extend the project with your own creative AI features Why Local AI for Games? Before diving into the code, let's understand why running AI locally matters for game development. Traditional cloud-based AI services have limitations that make them impractical for real-time gaming experiences. Latency is the first challenge. Cloud API calls typically take 500ms to several seconds, an eternity in a game running at 60 frames per second. Local inference can respond in tens of milliseconds, enabling AI responses that feel instantaneous and natural. When an enemy ship appears, your AI commander can taunt you immediately, not three seconds later. Cost is another consideration. Cloud AI services charge per token, which adds up quickly when generating dynamic content during gameplay. Local models have zero per-use cost, once installed, they run entirely on your hardware. This frees you to experiment without worrying about API bills. Privacy and offline capability complete the picture. Local AI keeps all data on your machine, perfect for games that might handle player information. And since nothing requires internet connectivity, your game works anywhere, on planes, in areas with poor connectivity, or simply when you want to play without network access. Understanding Microsoft Foundry Local Microsoft Foundry Local is a runtime that enables you to run small language models (SLMs) directly on your computer. It's designed for developers who want to integrate AI capabilities into applications without requiring cloud infrastructure. Think of it as having a miniature AI assistant living on your laptop. Foundry Local handles the complex work of loading AI models, managing memory, and processing inference requests through a simple API. 
You send text prompts, and it returns AI-generated responses, all happening locally on your CPU or GPU. The models are optimized to run efficiently on consumer hardware, so you don't need a supercomputer. For our Space Invaders game, Foundry Local powers the "AI Commander" feature. During gameplay, the game sends context about what's happening, your score, accuracy, current level, enemies remaining and receives back contextual commentary, taunts, and encouragement. The result feels like playing alongside an AI companion who actually understands the game. Setting Up Your Development Environment Let's get your machine ready for AI-powered game development. We'll install Foundry Local, clone the project, and verify everything works. The entire setup takes about 10-15 minutes. Step 1: Install Microsoft Foundry Local Foundry Local installation varies by operating system. Open your terminal and run the appropriate command: # Windows (using winget) winget install Microsoft.FoundryLocal # macOS (using Homebrew) brew install microsoft/foundrylocal/foundrylocal These commands download and install the Foundry Local runtime along with a default small language model. The installation includes everything needed to run AI inference locally. Verify the installation by running: foundry --version If you see a version number, Foundry Local is ready. If you encounter errors, ensure you have administrator/sudo privileges and that your package manager is up to date. Step 2: Install Node.js (If Not Already Installed) Our game's AI features require a small Node.js server to communicate between the browser and Foundry Local. Check if Node.js is installed: node --version If you see a version number (v16 or higher recommended), you're set. Otherwise, install Node.js: # Windows winget install OpenJS.NodeJS.LTS # macOS brew install node # Linux sudo apt install nodejs npm Node.js provides the JavaScript runtime that powers our proxy server, bridging browser code with the local AI model. Step 3: Clone the Project Get the Space Invaders project onto your machine: git clone https://github.com/leestott/Spaceinvaders-FoundryLocal.git cd Spaceinvaders-FoundryLocal This downloads all game files, including the HTML interface, game logic, AI integration module, and server code. Step 4: Install Dependencies and Start the Server Install the Node.js packages and launch the AI-enabled server: npm install npm start The first command downloads required packages (primarily for the proxy server). The second starts the server, which listens for AI requests from the game. You should see output indicating the server is running on port 3001. Step 5: Play the Game Open your browser and navigate to: http://localhost:3001 You should see Space Invaders with "AI: ONLINE" displayed in the game HUD, indicating that AI features are active. Use arrow keys or A/D to move, SPACE to fire, and P to pause. The AI Commander will start providing commentary as you play! Understanding the Project Architecture Now that the game is running, let's explore how the different pieces fit together. Understanding this architecture will help you modify the game and apply these patterns to your own projects. 
The project follows a clean separation of concerns, with each file handling a specific responsibility: Spaceinvaders-FoundryLocal/ ├── index.html # Main game page and UI structure ├── styles.css # Retro arcade visual styling ├── game.js # Core game logic and rendering ├── llm.js # AI integration module ├── sound.js # Web Audio API sound effects ├── server.js # Node.js proxy for Foundry Local └── package.json # Project configuration index.html: Defines the game canvas and UI elements. It's the entry point that loads all other modules. game.js: Contains the game loop, physics, collision detection, scoring, and rendering logic. This is the heart of the game. llm.js: Handles all communication with the AI backend. It formats game state into prompts and processes AI responses. server.js: A lightweight Express server that proxies requests between the browser and Foundry Local. sound.js: Synthesizes retro sound effects using the Web Audio API—no audio files needed! How the AI Integration Works The magic of the AI Commander happens through a simple but powerful pattern. Let's trace the flow from gameplay event to AI response. When something interesting happens in the game, you clear a wave, achieve a combo, or lose a life, the game logic in game.js triggers an AI request. This request includes context about the current game state: your score, accuracy percentage, current level, lives remaining, and what just happened. The llm.js module formats this context into a prompt. For example, when you clear a wave with 85% accuracy, it might construct: You are an AI Commander in a Space Invaders game. The player just cleared wave 3 with 85% accuracy. Score: 12,500. Lives: 3. Provide a brief, enthusiastic comment (1-2 sentences). This prompt travels to server.js , which forwards it to Foundry Local. The AI model processes the prompt and generates a response like: "Impressive accuracy, pilot! Wave 3 didn't stand a chance. Keep that trigger finger sharp!" The response flows back through the server to the browser, where llm.js passes it to the game. The game displays the message in the HUD, creating the illusion of playing alongside an AI companion. This entire round trip typically completes in 50-200 milliseconds, fast enough to feel responsive without interrupting gameplay. Using GitHub Copilot CLI to Explore and Modify the Code GitHub Copilot CLI accelerates your development workflow by letting you ask questions and generate code directly in your terminal. Let's use it to understand and extend the Space Invaders project. Installing Copilot CLI If you haven't installed Copilot CLI yet, here's the quick setup: # Install GitHub CLI winget install GitHub.cli # Windows brew install gh # macOS # Authenticate with GitHub gh auth login # Add Copilot extension gh extension install github/gh-copilot # Verify installation gh copilot --help With Copilot CLI ready, you can interact with AI directly from your terminal while working on the project. Exploring Code with Copilot CLI Use Copilot to understand unfamiliar code. Navigate to the project directory and try: gh copilot explain "How does llm.js communicate with the server?" Copilot analyzes the code and explains the communication pattern, helping you understand the architecture without reading every line manually. You can also ask about specific functions: gh copilot explain "What does the generateEnemyTaunt function do?" This accelerates onboarding to unfamiliar codebases, a valuable skill when working with open source projects or joining teams. 
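Returning to the request flow described in "How the AI Integration Works" above, here is a minimal sketch of that proxy pattern: a browser-side fetch plus an Express route that forwards the prompt to Foundry Local's OpenAI-compatible chat completions endpoint. Treat it as an illustration rather than the project's actual code; the /api/commander route, the requestCommanderLine name, the Foundry Local port (1234), and the model id are placeholders, and the real llm.js and server.js may be organized differently.

// Browser side: a sketch of what an llm.js-style module might do.
// '/api/commander' is a hypothetical route name, not necessarily the project's.
async function requestCommanderLine(gameState) {
  const response = await fetch('/api/commander', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(gameState) // e.g. { score, accuracy, wave, lives, event }
  });
  const { message } = await response.json();
  return message; // the game loop can display this in the HUD
}

// Server side: a sketch of a server.js-style Express proxy (Node 18+ for global fetch).
// The Foundry Local port and model id below are placeholders; the service reports
// its actual OpenAI-compatible endpoint and loaded model when it starts.
import express from 'express';

const app = express();
app.use(express.json());

app.post('/api/commander', async (req, res) => {
  const { score, accuracy, wave, lives, event } = req.body;
  const prompt = `You are an AI Commander in a Space Invaders game. ` +
    `Event: ${event}. Score: ${score}. Accuracy: ${accuracy}%. Wave: ${wave}. Lives: ${lives}. ` +
    `Reply with one or two enthusiastic sentences.`;

  // Forward the prompt to Foundry Local's OpenAI-compatible chat completions API.
  const llmResponse = await fetch('http://localhost:1234/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'phi-3.5-mini', // placeholder model id
      messages: [{ role: 'user', content: prompt }],
      max_tokens: 60
    })
  });
  const data = await llmResponse.json();
  res.json({ message: data.choices[0].message.content });
});

app.listen(3001, () => console.log('Game + AI proxy listening on http://localhost:3001'));

The proxy exists for a practical reason: the browser only ever talks to the game server, which keeps the Foundry Local endpoint, model selection, and prompt construction in one place and avoids cross-origin issues.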
Generating New Features Want to add a new AI feature? Ask Copilot to help generate the code: gh copilot suggest "Create a function that asks the AI to generate a mission briefing at the start of each level, including the level number and a random mission objective" Copilot generates starter code that you can customize and integrate. This combination of AI-powered development tools and AI-integrated gameplay demonstrates how LLMs are transforming both how we build games and how games behave. Customizing the AI Commander The default AI Commander provides generic gaming commentary, but you can customize its personality and responses. Open llm.js to find the prompt templates that control AI behavior. Changing the AI's Personality The system prompt defines who the AI "is." Find the base prompt and modify it: // Original const systemPrompt = "You are an AI Commander in a Space Invaders game."; // Customized - Drill Sergeant personality const systemPrompt = `You are Sergeant Blaster, a gruff but encouraging drill sergeant commanding space cadets. Use military terminology, call the player "cadet," and be tough but fair.`; // Customized - Supportive Coach personality const systemPrompt = `You are Coach Nova, a supportive and enthusiastic gaming coach. Use encouraging language, celebrate small victories, and provide gentle guidance when players struggle.`; These personality changes dramatically alter the game's feel without changing any gameplay code. It's a powerful example of how AI can add variety to games with minimal development effort. Adding New Commentary Triggers Currently the AI responds to wave completions and game events. You can add new triggers in game.js : // Add AI commentary when player achieves a kill streak if (killStreak >= 5 && !streakCommentPending) { requestAIComment('killStreak', { count: killStreak }); streakCommentPending = true; } // Add AI reaction when player narrowly avoids death if (nearMissOccurred) { requestAIComment('nearMiss', { livesRemaining: lives }); } Each new trigger point adds another opportunity for the AI to engage with the player, making the experience more dynamic and personalized. Understanding the Game Features Beyond AI integration, the Space Invaders project demonstrates solid game development patterns worth studying. Let's explore the key features. Power-Up System The game includes eight different power-ups, each with unique effects: SPREAD (Orange): Fires three projectiles in a spread pattern LASER (Red): Powerful beam with high damage RAPID (Yellow): Dramatically increased fire rate MISSILE (Purple): Homing projectiles that track enemies SHIELD (Blue): Grants an extra life EXTRA LIFE (Green): Grants two extra lives BOMB (Red): Destroys all enemies on screen BONUS (Gold): Random score bonus between 250-750 points Power-ups demonstrate state management, tracking which power-up is active, applying its effects to player actions, and handling timeouts. Study the power-up code in game.js to understand how temporary state modifications work. Leaderboard System The game persists high scores using the browser's localStorage API: // Saving scores localStorage.setItem('spaceInvadersScores', JSON.stringify(scores)); // Loading scores const savedScores = localStorage.getItem('spaceInvadersScores'); const scores = savedScores ? JSON.parse(savedScores) : []; This pattern works for any data you want to persist between sessions—game progress, user preferences, or accumulated statistics. It's a simple but powerful technique for web games. 
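If you extend the leaderboard, it helps to keep the stored list sorted and bounded. Here is a small helper built on the localStorage pattern shown above; the spaceInvadersScores key matches the snippet, but the addScore/getTopScores function names are illustrative rather than taken from the project.

// Minimal leaderboard helper: keeps only the top 10 scores, highest first.
const SCORES_KEY = 'spaceInvadersScores';
const MAX_ENTRIES = 10;

function getTopScores() {
  const saved = localStorage.getItem(SCORES_KEY);
  return saved ? JSON.parse(saved) : [];
}

function addScore(name, score) {
  const scores = getTopScores();
  scores.push({ name, score, date: new Date().toISOString() });
  scores.sort((a, b) => b.score - a.score); // highest score first
  const trimmed = scores.slice(0, MAX_ENTRIES); // cap the list at 10 entries
  localStorage.setItem(SCORES_KEY, JSON.stringify(trimmed));
  return trimmed;
}

// Usage: addScore('ACE', 12500); then render getTopScores() in the HUD.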
Sound Synthesis Rather than loading audio files, the game synthesizes retro sound effects using the Web Audio API in sound.js . This approach has several benefits: no external assets to load, smaller project size, and complete control over sound parameters. Examine how oscillators and gain nodes combine to create laser sounds, explosions, and victory fanfares. This knowledge transfers directly to any web project requiring audio feedback. Extending the Project: Ideas for Students Ready to make the project your own? Here are ideas ranging from beginner-friendly to challenging, each teaching valuable skills. Beginner: Customize Visual Theme Modify styles.css to create a new visual theme. Try changing the color scheme from green to blue, or create a "sunset" theme with orange and purple gradients. This builds CSS skills while making the game feel fresh. Intermediate: Add New Enemy Types Create a new enemy class in game.js with different movement patterns. Perhaps enemies that move in sine waves, or boss enemies that take multiple hits. This teaches object-oriented programming and game physics. Intermediate: Expand AI Interactions Add new AI features like: Pre-game mission briefings that set up the story Dynamic difficulty hints when players struggle Post-game performance analysis and improvement suggestions AI-generated names for enemy waves Advanced: Multiplayer Commentary Modify the game for two-player support and have the AI provide play-by-play commentary comparing both players' performance. This combines game networking concepts with advanced AI prompting. Advanced: Voice Integration Use the Web Speech API to speak the AI Commander's responses aloud. This creates a more immersive experience and demonstrates browser speech synthesis capabilities. Troubleshooting Common Issues If something isn't working, here are solutions to common problems. "AI: OFFLINE" Displayed in Game This means the game can't connect to the AI server. Check that: The server is running ( npm start shows no errors) You're accessing the game via http://localhost:3001 , not directly opening the HTML file Foundry Local is installed correctly ( foundry --version works) Server Won't Start If npm start fails: Ensure you ran npm install first Check that port 3001 isn't already in use by another application Verify Node.js is installed ( node --version ) AI Responses Are Slow Local AI performance depends on your hardware. If responses feel sluggish: Close other resource-intensive applications Ensure your laptop is plugged in (battery mode may throttle CPU) Consider that first requests may be slower as the model loads Key Takeaways Local AI enables real-time game features: Microsoft Foundry Local provides fast, free, private AI inference perfect for gaming applications Clean architecture matters: Separating game logic, AI integration, and server code makes projects maintainable and extensible AI personality is prompt-driven: Changing a few lines of prompt text completely transforms how the AI interacts with players Copilot CLI accelerates learning: Use it to explore unfamiliar code and generate new features quickly The patterns transfer everywhere: Skills from this project apply to chatbots, assistants, educational tools, and any AI-integrated application Conclusion and Next Steps You've now seen how to integrate AI capabilities into a browser-based game using Microsoft Foundry Local. 
The Space Invaders project demonstrates that modern AI features don't require cloud services or complex infrastructure; they can run entirely on your laptop, responding in milliseconds. More importantly, you've learned patterns that extend far beyond gaming. The architecture of sending context to an AI, receiving generated responses, and integrating them into user experiences applies to countless applications: customer support bots, educational tutors, creative writing tools, and accessibility features. Your next step is experimentation. Clone the repository, modify the AI's personality, add new commentary triggers, or build an entirely new game using these patterns. The combination of GitHub Copilot CLI for development assistance and Foundry Local for runtime AI gives you powerful tools to bring intelligent applications to life. Start playing, start coding, and discover what you can create when your games can think.

Resources
Space Invaders - AI Commander Edition Repository - Full source code and documentation
Play Space Invaders Online - Try the basic version without AI features
Microsoft Foundry Local Documentation - Official installation and API guide
GitHub Copilot CLI Documentation - Installation and usage guide
GitHub Education - Free developer tools for students
Web Audio API Documentation - Learn about browser sound synthesis
Canvas API Documentation - Master HTML5 game rendering

Seeking Guidance & Partners for Promoting Microsoft TSP Training
Hello everyone, I'm exploring effective ways to market our Microsoft TSP (Training Services Partner) courses and would really appreciate insights from the community. If you have experience promoting Microsoft training offerings, I'd love to learn about recommended approaches or useful resources that have worked for you. I'm also open to connecting with individuals or partners who support course marketing on a commission basis. If you have suggestions, guidance, or services you'd like to offer, please feel free to share. Your expertise would be greatly appreciated. Thank you in advance for your support. vk

Choosing the Right Intelligence Layer for Your Application
Introduction One of the most common questions developers ask when planning AI-powered applications is: "Should I use the GitHub Copilot SDK or the Microsoft Agent Framework?" It's a natural question, both technologies let you add an intelligence layer to your apps, both come from Microsoft's ecosystem, and both deal with AI agents. But they solve fundamentally different problems, and understanding where each excels will save you weeks of architectural missteps. The short answer is this: the Copilot SDK puts Copilot inside your app, while the Agent Framework lets you build your app out of agents. They're complementary, not competing. In fact, the most interesting applications use both, the Agent Framework as the system architecture and the Copilot SDK as a powerful execution engine within it. This article breaks down each technology's purpose, architecture, and ideal use cases. We'll walk through concrete scenarios, examine a real-world project that combines both, and give you a decision framework for your own applications. Whether you're building developer tools, enterprise workflows, or data analysis pipelines, you'll leave with a clear understanding of which tool belongs where in your stack. The Core Distinction: Embedding Intelligence vs Building With Intelligence Before comparing features, it helps to understand the fundamental design philosophy behind each technology. They approach the concept of "adding AI to your application" from opposite directions. The GitHub Copilot SDK exposes the same agentic runtime that powers Copilot CLI as a programmable library. When you use it, you're embedding a production-tested agent, complete with planning, tool invocation, file editing, and command execution, directly into your application. You don't build the orchestration logic yourself. Instead, you delegate tasks to Copilot's agent loop and receive results. Think of it as hiring a highly capable contractor: you describe the job, and the contractor figures out the steps. The Microsoft Agent Framework is a framework for building, orchestrating, and hosting your own agents. You explicitly model agents, workflows, state, memory, hand-offs, and human-in-the-loop interactions. You control the orchestration, policies, deployment, and observability. Think of it as designing the company that employs those contractors: you define the roles, processes, escalation paths, and quality controls. This distinction has profound implications for what you build and how you build it. GitHub Copilot SDK: When Your App Wants Copilot-Style Intelligence The GitHub Copilot SDK is the right choice when you want to embed agentic behavior into an existing application without building your own planning or orchestration layer. It's optimized for developer workflows and task automation scenarios where you need an AI agent to do things, edit files, run commands, generate code, interact with tools, reliably and quickly. What You Get Out of the Box The SDK communicates with the Copilot CLI server via JSON-RPC, managing the CLI process lifecycle automatically. 
This means your application inherits capabilities that have been battle-tested across millions of Copilot CLI users: Planning and execution: The agent analyzes tasks, breaks them into steps, and executes them autonomously Built-in tool support: File system operations, Git operations, web requests, and shell command execution work out of the box MCP (Model Context Protocol) integration: Connect to any MCP server to extend the agent's capabilities with custom data sources and tools Multi-language support: Available as SDKs for Python, TypeScript/Node.js, Go, and .NET Custom tool definitions: Define your own tools and constrain which tools the agent can access BYOK (Bring Your Own Key): Use your own API keys from OpenAI, Azure AI Foundry, or Anthropic instead of GitHub authentication Architecture The SDK's architecture is deliberately simple. Your application communicates with the Copilot CLI running in server mode: Your Application ↓ SDK Client ↓ JSON-RPC Copilot CLI (server mode) The SDK manages the CLI process lifecycle automatically. You can also connect to an external CLI server if you need more control over the deployment. This simplicity is intentional, it keeps the integration surface small so you can focus on your application logic rather than agent infrastructure. Ideal Use Cases for the Copilot SDK The Copilot SDK shines in scenarios where you need a competent agent to execute tasks on behalf of users. These include: AI-powered developer tools: IDEs, CLIs, internal developer portals, and code review tools that need to understand, generate, or modify code "Do the task for me" agents: Applications where users describe what they want—edit these files, run this analysis, generate a pull request and the agent handles execution Rapid prototyping with agentic behavior: When you need to ship an intelligent feature quickly without building a custom planning or orchestration system Internal tools that interact with codebases: Build tools that explore repositories, generate documentation, run migrations, or automate repetitive development tasks A practical example: imagine building an internal CLI that lets engineers say "set up a new microservice with our standard boilerplate, CI pipeline, and monitoring configuration." The Copilot SDK agent would plan the file creation, scaffold the code, configure the pipeline YAML, and even run initial tests, all without you writing orchestration logic. Microsoft Agent Framework: When Your App Is the Intelligence System The Microsoft Agent Framework is the right choice when you need to build a system of agents that collaborate, maintain state, follow business processes, and operate with enterprise-grade governance. It's designed for long-running, multi-agent workflows where you need fine-grained control over every aspect of orchestration. 
What You Get Out of the Box The Agent Framework provides a comprehensive foundation for building sophisticated agent systems in both Python and .NET: Graph-based workflows: Connect agents and deterministic functions using data flows with streaming, checkpointing, human-in-the-loop, and time-travel capabilities Multi-agent orchestration: Define how agents collaborate, hand off tasks, escalate decisions, and share state Durability and checkpoints: Workflows can pause, resume, and recover from failures, essential for business-critical processes Human-in-the-loop: Built-in support for approval gates, review steps, and human override points Observability: OpenTelemetry integration for distributed tracing, monitoring, and debugging across agent boundaries Multiple agent providers: Use Azure OpenAI, OpenAI, and other LLM providers as the intelligence behind your agents DevUI: An interactive developer UI for testing, debugging, and visualizing workflow execution Architecture The Agent Framework gives you explicit control over the agent topology. You define agents, connect them in workflows, and manage the flow of data between them: ┌─────────────┐ ┌──────────────┐ ┌──────────────┐ │ Agent A │────▶│ Agent B │────▶│ Agent C │ │ (Planner) │ │ (Executor) │ │ (Reviewer) │ └─────────────┘ └──────────────┘ └──────────────┘ Define Execute Validate strategy tasks output Each agent has its own instructions, tools, memory, and state. The framework manages communication between agents, handles failures, and provides visibility into what's happening at every step. This explicitness is what makes it suitable for enterprise applications where auditability and control are non-negotiable. Ideal Use Cases for the Agent Framework The Agent Framework excels in scenarios where you need a system of coordinated agents operating under business rules. These include: Multi-agent business workflows: Customer support pipelines, research workflows, operational processes, and data transformation pipelines where different agents handle different responsibilities Systems requiring durability: Workflows that run for hours or days, need checkpoints, can survive restarts, and maintain state across sessions Governance-heavy applications: Processes requiring approval gates, audit trails, role-based access, and compliance documentation Agent collaboration patterns: Applications where agents need to negotiate, escalate, debate, or refine outputs iteratively before producing a final result Enterprise data pipelines: Complex data processing workflows where AI agents analyze, transform, and validate data through multiple stages A practical example: an enterprise customer support system where a triage agent classifies incoming tickets, a research agent gathers relevant documentation and past solutions, a response agent drafts replies, and a quality agent reviews responses before they reach the customer, with a human escalation path when confidence is low. Side-by-Side Comparison To make the distinction concrete, here's how the two technologies compare across key dimensions that matter when choosing an intelligence layer for your application. 
Dimension: GitHub Copilot SDK / Microsoft Agent Framework
Primary purpose: Embed Copilot's agent runtime into your app / Build and orchestrate your own agent systems
Orchestration: Handled by Copilot's agent loop; you delegate / You define it explicitly: agents, workflows, state, hand-offs
Agent count: Typically a single agent per session / Multi-agent systems with agent-to-agent communication
State management: Session-scoped, managed by the SDK / Durable state with checkpointing, time-travel, persistence
Human-in-the-loop: Basic; user confirms actions / Rich approval gates, review steps, escalation paths
Observability: Session logs and tool call traces / Full OpenTelemetry, distributed tracing, DevUI
Best for: Developer tools, task automation, code-centric workflows / Enterprise workflows, multi-agent systems, business processes
Languages: Python, TypeScript, Go, .NET / Python, .NET
Learning curve: Low; install, configure, delegate tasks / Moderate; design agents, workflows, state, and policies
Maturity: Technical Preview / Preview with active development, 7k+ stars, 100+ contributors

Real-World Example: Both Working Together
The most compelling applications don't choose between these technologies; they combine them. A perfect demonstration of this complementary relationship is the Agentic House project by my colleague Anthony Shaw, which uses an Agent Framework workflow to orchestrate three agents, one of which is powered by the GitHub Copilot SDK.

The Problem
Agentic House lets users ask natural language questions about their Home Assistant smart home data. Questions like "what time of day is my phone normally fully charged?" or "is there a correlation between when the back door is open and the temperature in my office?" require exploring available data, writing analysis code, and producing visual results: a multi-step process that no single agent can handle well alone.

The Architecture
The project implements a three-agent pipeline using the Agent Framework for orchestration:

┌─────────────┐     ┌──────────────┐     ┌──────────────┐
│   Planner   │────▶│    Coder     │────▶│   Reviewer   │
│  (GPT-4.1)  │     │  (Copilot)   │     │  (GPT-4.1)   │
└─────────────┘     └──────────────┘     └──────────────┘
      Plan              Notebook             Approve/
    analysis            generation            Reject

Planner Agent: Takes a natural language question and creates a structured analysis plan: which Home Assistant entities to query, what visualizations to create, what hypotheses to test. This agent uses GPT-4.1 through Azure AI Foundry or GitHub Models.
Coder Agent: Uses the GitHub Copilot SDK to generate a complete Jupyter notebook that fetches data from the Home Assistant REST API via MCP, performs the analysis, and creates visualizations. The Copilot agent is constrained to only use specific tools, demonstrating how the SDK supports tool restriction.
Reviewer Agent: Acts as a security gatekeeper, reviewing the generated notebook to ensure it only reads and displays data. It rejects notebooks that attempt to modify Home Assistant state, import dangerous modules, make external network requests, or contain obfuscated code.

Why This Architecture Works
This design demonstrates several principles about when to use which technology:
Agent Framework provides the workflow: The sequential pipeline with planning, execution, and review is a classic Agent Framework pattern. Each agent has a clear role, and the framework manages the flow between them.
Copilot SDK provides the coding execution: The Coder agent leverages Copilot's battle-tested ability to generate code, work with files, and use MCP tools.
Building a custom code generation agent from scratch would take significantly longer and produce less reliable results. Tool constraints demonstrate responsible AI: The Copilot SDK agent is constrained to only specific tools, showing how you can embed powerful agentic behavior while maintaining security boundaries. Standalone agents handle planning and review: The Planner and Reviewer use simpler LLM-based agents, they don't need Copilot's code execution capabilities, just good reasoning. While the Home Assistant data is a fun demonstration, the pattern is designed for something much more significant: applying AI agents for complex research against private data sources. The same architecture could analyze internal databases, proprietary datasets, or sensitive business metrics. Decision Framework: Which Should You Use? When deciding between the Copilot SDK and the Agent Framework, or both, consider these questions about your application. Start with the Copilot SDK if: You need a single agent to execute tasks autonomously (code generation, file editing, command execution) Your application is developer-facing or code-centric You want to ship agentic features quickly without building orchestration infrastructure The tasks are session-scoped, they start and complete within a single interaction You want to leverage Copilot's existing tool ecosystem and MCP integration Start with the Agent Framework if: You need multiple agents collaborating with different roles and responsibilities Your workflows are long-running, require checkpoints, or need to survive restarts You need human-in-the-loop approvals, escalation paths, or governance controls Observability and auditability are requirements (regulated industries, enterprise compliance) You're building a platform where the agents themselves are the product Use both together if: You need a multi-agent workflow where at least one agent requires strong code execution capabilities You want Agent Framework's orchestration with Copilot's battle-tested agent runtime as one of the execution engines Your system involves planning, coding, and review stages that benefit from different agent architectures You're building research or analysis tools that combine AI reasoning with code generation Getting Started Both technologies are straightforward to install and start experimenting with. Here's how to get each running in minutes. GitHub Copilot SDK Quick Start Install the SDK for your preferred language: # Python pip install github-copilot-sdk # TypeScript / Node.js npm install @github/copilot-sdk # .NET dotnet add package GitHub.Copilot.SDK # Go go get github.com/github/copilot-sdk/go The SDK requires the Copilot CLI to be installed and authenticated. Follow the Copilot CLI installation guide to set that up. A GitHub Copilot subscription is required for standard usage, though BYOK mode allows you to use your own API keys without GitHub authentication. Microsoft Agent Framework Quick Start Install the framework: # Python pip install agent-framework --pre # .NET dotnet add package Microsoft.Agents.AI The Agent Framework supports multiple LLM providers including Azure OpenAI and OpenAI directly. Check the quick start tutorial for a complete walkthrough of building your first agent. 
Try the Combined Approach
To see both technologies working together, clone the Agentic House project:
git clone https://github.com/tonybaloney/agentic-house.git
cd agentic-house
uv sync
You'll need a Home Assistant instance, the Copilot CLI authenticated, and either a GitHub token or Azure AI Foundry endpoint. The project's README walks through the full setup, and the architecture provides an excellent template for building your own multi-agent systems with embedded Copilot capabilities.

Key Takeaways
Copilot SDK = "Put Copilot inside my app": Embed a production-tested agentic runtime with planning, tool execution, file edits, and MCP support directly into your application
Agent Framework = "Build my app out of agents": Design, orchestrate, and host multi-agent systems with explicit workflows, durable state, and enterprise governance
They're complementary, not competing: The Copilot SDK can act as a powerful execution engine inside Agent Framework workflows, as demonstrated by the Agentic House project
Choose based on your orchestration needs: If you need one agent executing tasks, start with the Copilot SDK. If you need coordinated agents with business logic, start with the Agent Framework
The real power is in combination: The most sophisticated applications use Agent Framework for workflow orchestration and the Copilot SDK for high-leverage task execution within those workflows

Conclusion and Next Steps
The question isn't really "Copilot SDK or Agent Framework?" It's "where does each fit in my architecture?" Understanding this distinction unlocks a powerful design pattern: use the Agent Framework to model your business processes as agent workflows, and use the Copilot SDK wherever you need a highly capable agent that can plan, code, and execute autonomously. Start by identifying your application's needs. If you're building a developer tool that needs to understand and modify code, the Copilot SDK gets you there fast. If you're building an enterprise system where multiple AI agents need to collaborate under governance constraints, the Agent Framework provides the architecture. And if you need both, as most ambitious applications do, now you know how they fit together. The AI development ecosystem is moving rapidly. Both technologies are in active development with growing communities and expanding capabilities. The architectural patterns you learn today, embedding intelligent agents, orchestrating multi-agent workflows, combining execution engines with orchestration frameworks, will remain valuable regardless of how the specific tools evolve.

Resources
GitHub Copilot SDK Repository – SDKs for Python, TypeScript, Go, and .NET with documentation and examples
Microsoft Agent Framework Repository – Framework source, samples, and workflow examples for Python and .NET
Agentic House – Real-world example combining Agent Framework with Copilot SDK for smart home data analysis
Agent Framework Documentation – Official Microsoft Learn documentation with tutorials and user guides
Copilot CLI Installation Guide – Setup instructions for the CLI that powers the Copilot SDK
Copilot SDK Getting Started Guide – Step-by-step tutorial for SDK integration
Copilot SDK Cookbook – Practical recipes for common tasks across all supported languages

Agents League: Build, Learn, and Level Up Your AI Skills
We're inviting the next generation of developers to join Agents League, running February 16-27. It's a two-week challenge where you'll build AI agents using production-ready tools, learn from live coding sessions, and get feedback directly from Microsoft product teams. We've put together starter kits for each track to help you get up and running quickly that also includes requirements and guidelines. Whether you want to explore what GitHub Copilot can do beyond autocomplete, build reasoning agents on Microsoft Foundry, or create enterprise integrations for Microsoft 365 Copilot, we have a track for you. Important: Register first to be eligible for prizes and your digital badge. Without registration, you won't qualify for awards or receive a badge when you submit. What Is Agents League? It's a 2-week competition where you learn by doing: 📽️ Live coding battles – Watch experts compete in real-time and explain their thinking 💻 Build at your pace – Two weeks to work on your project 💬 Get help on Discord – AMAs, community support, and a friendly crowd to cheer you on 🏆 Win prizes – $500 per track, GitHub Copilot Pro subscriptions, and digital badges for everyone who submits The Three Tracks 🎨 Creative Apps — Build with GitHub Copilot (Chat, CLI, or SDK) 🧠 Reasoning Agents — Build with Microsoft Foundry 💼 Enterprise Agents — Build with M365 Agents Toolkit (or Copilot Studio) More details on each track below, or jump straight to the starter kits. The Schedule Agents League starts on February 16th and runs through February 27th. Within 2 weeks, we host live battles on Reactor and AMA sessions on Discord. Week 1: Live Battles (Feb 17-19) We're kicking off with live coding battles streamed on Microsoft Reactor. Watch experienced developers compete in real-time, explaining their approach and architectural decisions as they go. Tue Feb 17, 9 AM PT — 🎨 Creative Apps battle Wed Feb 18, 9 AM PT — 🧠 Reasoning Agents battle Thu Feb 19, 9 AM PT — 💼 Enterprise Agents battle All sessions are recorded, so you can watch on your own schedule. Week 2: Build + AMAs (Feb 24-26) This is your time to build and ask questions on Discord. The async format means you work when it suits you, evenings, weekends, whatever fits your schedule. We're also hosting AMAs on Discord where you can ask questions directly to Microsoft experts and product teams: Tue Feb 24, 9 AM PT — 🎨 Creative Apps AMA Wed Feb 25, 9 AM PT — 🧠 Reasoning Agents AMA Thu Feb 26, 9 AM PT — 💼 Enterprise Agents AMA Bring your questions, get help when you're stuck, and share what you're building with the community. Pick Your Track We've created a starter kit for each track with setup guides, project ideas, and example scenarios to help you get started quickly. 🎨 Creative Apps Tool: GitHub Copilot (Chat, CLI, or SDK) Build innovative, imaginative applications that showcase the potential of AI-assisted development. All application types are welcome, web apps, CLI tools, games, mobile apps, desktop applications, and more. The starter kit walks you through GitHub Copilot's different modes and provides prompting tips to get the best results.View the Creative Apps starter kit. 🧠 Reasoning Agents Tool: Microsoft Foundry (UI or SDK) and/or Microsoft Agent Framework Build a multi-agent system that leverages advanced reasoning capabilities to solve complex problems. This track focuses on agents that can plan, reason through multi-step problems, and collaborate. 
The starter kit includes architecture patterns, reasoning strategies (planner-executor, critic/verifier, self-reflection), and integration guides for tools and MCP servers. View the Reasoning Agents starter kit.

💼 Enterprise Agents
Tool: M365 Agents Toolkit or Copilot Studio
Create intelligent agents that extend Microsoft 365 Copilot to address real-world enterprise scenarios. Your agent must work on Microsoft 365 Copilot Chat. Bonus points for: MCP server integration, OAuth security, Adaptive Cards UI, connected agents (multi-agent architecture). View the Enterprise Agents starter kit.

Prizes & Recognition
To be eligible for prizes and your digital badge, you must register before submitting your project.
Category Winners ($500 each):
🎨 Creative Apps winner
🧠 Reasoning Agents winner
💼 Enterprise Agents winner
GitHub Copilot Pro subscriptions:
Community Favorite (voted by participants on Discord)
Product Team Picks (selected by Microsoft product teams)
Everyone who registers and submits a project wins: a digital badge to showcase their participation.
Beyond the prizes, every participant gets feedback from the teams who built these tools, a valuable opportunity to learn and improve your approach to AI agent development.

Why This Matters
AI development is where the opportunities are right now. Building with GitHub Copilot, Microsoft Foundry, and M365 Agents Toolkit gives you:
A real project for your portfolio
Hands-on experience with production-grade tools
Connections with developers from around the world
Whether you're looking for your first internship, exploring AI, or just want to build something cool, this is two weeks well spent.

How to Get Started
Register first — This is required to be eligible for prizes and to receive your digital badge. Without registration, your submission won't qualify for awards or a badge.
Pick a track — Choose one track. Explore the starter kits to help you decide.
Watch the battles — See how experienced developers approach these challenges. Great for learning even if you're still deciding whether to compete.
Build your project — You have until Feb 27. Work on your own schedule.
Submit via GitHub — Open an issue using the project submission template.
Join us on Discord — Get help, share your progress, and vote for your favorite projects on Discord.

Links
Register: https://aka.ms/agentsleague/register
Starter Kits: https://github.com/microsoft/agentsleague/starter-kits
Discord: https://aka.ms/agentsleague/discord
Live Battles: https://aka.ms/agentsleague/battles
Submit Project: Project submission template

Benchmarking Local AI Models
Introduction Selecting the right AI model for your application requires more than reading benchmark leaderboards. Published benchmarks measure academic capabilities, question answering, reasoning, coding, but your application has specific requirements: latency budgets, hardware constraints, quality thresholds. How do you know if Phi-4 provides acceptable quality for your document summarization use case? Will Qwen2.5-0.5B meet your 100ms response time requirement? Does your edge device have sufficient memory for Phi-3.5 Mini? The answer lies in empirical testing: running actual models on your hardware with your workload patterns. This article demonstrates building a comprehensive model benchmarking platform using FLPerformance, Node.js, React, and Microsoft Foundry Local. You'll learn how to implement scientific performance measurement, design meaningful benchmark suites, visualize multi-dimensional comparisons, and make data-driven model selection decisions. Whether you're evaluating models for production deployment, optimizing inference costs, or validating hardware specifications, this platform provides the tools for rigorous performance analysis. Why Model Benchmarking Requires Purpose-Built Tools You cannot assess model performance by running a few manual tests and noting the results. Scientific benchmarking demands controlled conditions, statistically significant sample sizes, multi-dimensional metrics, and reproducible methodology. Understand why purpose-built tooling is essential. Performance is multi-dimensional. A model might excel at throughput (tokens per second) but suffer at latency (time to first token). Another might generate high-quality outputs slowly. Your application might prioritize consistency over average performance, a model with variable response times (high p95/p99 latency) creates poor user experiences even if averages look good. Measuring all dimensions simultaneously enables informed tradeoffs. Hardware matters enormously. Benchmark results from NVIDIA A100 GPUs don't predict performance on consumer laptops. NPU acceleration changes the picture again. Memory constraints affect which models can even load. Test on your actual deployment hardware or comparable specifications to get actionable results. Concurrency reveals bottlenecks. A model handling one request excellently might struggle with ten concurrent requests. Real applications experience variable load, measuring only single-threaded performance misses critical scalability constraints. Controlled concurrency testing reveals these limits. Statistical rigor prevents false conclusions. Running a prompt once and noting the response time tells you nothing about performance distribution. Was this result typical? An outlier? You need dozens or hundreds of trials to establish p50/p95/p99 percentiles, understand variance, and detect stability issues. Comparison requires controlled experiments. Different prompts, different times of day, different system loads, all introduce confounding variables. Scientific comparison runs identical workloads across models sequentially, controlling for external factors. Architecture: Three-Layer Performance Testing Platform FLPerformance implements a clean separation between orchestration, measurement, and presentation: The frontend React application provides model management, benchmark configuration, test execution, and results visualization. 
Users add models from the Foundry Local catalog, configure benchmark parameters (iterations, concurrency, timeout values), launch test runs, and view real-time progress. The results dashboard displays comparison tables, latency distribution charts, throughput graphs, and "best model for..." recommendations. The backend Node.js/Express server orchestrates tests and captures metrics. It manages the single Foundry Local service instance, loads/unloads models as needed, executes benchmark suites with controlled concurrency, measures comprehensive metrics (TTFT, TPOT, total latency, throughput, error rates), and persists results to JSON storage. WebSocket connections provide real-time progress updates during long benchmark runs. Foundry Local SDK integration uses the official foundry-local-sdk npm package. The SDK manages service lifecycle, starting, stopping, health checkin, and handles model operations, downloading, loading into memory, unloading. It provides OpenAI-compatible inference APIs for consistent request formatting across models. The architecture supports simultaneous testing of multiple models by loading them one at a time, running identical benchmarks, and aggregating results for comparison: User Initiates Benchmark Run ↓ Backend receives {models: [...], suite: "default", iterations: 10} ↓ For each model: 1. Load model into Foundry Local 2. Execute benchmark suite - For each prompt in suite: * Run N iterations * Measure TTFT, TPOT, total time * Track errors and timeouts * Calculate tokens/second 3. Aggregate statistics (mean, p50, p95, p99) 4. Unload model ↓ Store results with metadata ↓ Return comparison data to frontend ↓ Visualize performance metrics Implementing Scientific Measurement Infrastructure Accurate performance measurement requires instrumentation that captures multiple dimensions without introducing measurement overhead: // src/server/benchmark.js import { performance } from 'perf_hooks'; export class BenchmarkExecutor { constructor(foundryClient, options = {}) { this.client = foundryClient; this.options = { iterations: options.iterations || 10, concurrency: options.concurrency || 1, timeout_ms: options.timeout_ms || 30000, warmup_iterations: options.warmup_iterations || 2 }; } async runBenchmarkSuite(modelId, prompts) { const results = []; // Warmup phase (exclude from results) console.log(`Running ${this.options.warmup_iterations} warmup iterations...`); for (let i = 0; i < this.options.warmup_iterations; i++) { await this.executePrompt(modelId, prompts[0].text); } // Actual benchmark runs for (const prompt of prompts) { console.log(`Benchmarking prompt: ${prompt.id}`); const measurements = []; for (let i = 0; i < this.options.iterations; i++) { const measurement = await this.executeMeasuredPrompt( modelId, prompt.text ); measurements.push(measurement); // Small delay between iterations to stabilize await sleep(100); } results.push({ prompt_id: prompt.id, prompt_text: prompt.text, measurements, statistics: this.calculateStatistics(measurements) }); } return { model_id: modelId, timestamp: new Date().toISOString(), config: this.options, results }; } async executeMeasuredPrompt(modelId, promptText) { const measurement = { success: false, error: null, ttft_ms: null, // Time to first token tpot_ms: null, // Time per output token total_ms: null, tokens_generated: 0, tokens_per_second: 0 }; try { const startTime = performance.now(); let firstTokenTime = null; let tokenCount = 0; // Streaming completion to measure TTFT const stream = await 
this.client.chat.completions.create({ model: modelId, messages: [{ role: 'user', content: promptText }], max_tokens: 200, temperature: 0.7, stream: true }); for await (const chunk of stream) { if (chunk.choices[0]?.delta?.content) { if (firstTokenTime === null) { firstTokenTime = performance.now(); measurement.ttft_ms = firstTokenTime - startTime; } tokenCount++; } } const endTime = performance.now(); measurement.total_ms = endTime - startTime; measurement.tokens_generated = tokenCount; if (tokenCount > 1 && firstTokenTime) { // TPOT = time after first token / (tokens - 1) const timeAfterFirstToken = endTime - firstTokenTime; measurement.tpot_ms = timeAfterFirstToken / (tokenCount - 1); measurement.tokens_per_second = 1000 / measurement.tpot_ms; } measurement.success = true; } catch (error) { measurement.error = error.message; measurement.success = false; } return measurement; } calculateStatistics(measurements) { const successful = measurements.filter(m => m.success); const total = measurements.length; if (successful.length === 0) { return { success_rate: 0, error_rate: 1.0, sample_size: total }; } const ttfts = successful.map(m => m.ttft_ms).sort((a, b) => a - b); const tpots = successful.map(m => m.tpot_ms).filter(v => v !== null).sort((a, b) => a - b); const totals = successful.map(m => m.total_ms).sort((a, b) => a - b); const throughputs = successful.map(m => m.tokens_per_second).filter(v => v > 0); return { success_rate: successful.length / total, error_rate: (total - successful.length) / total, sample_size: total, ttft: { mean: mean(ttfts), median: percentile(ttfts, 50), p95: percentile(ttfts, 95), p99: percentile(ttfts, 99), min: Math.min(...ttfts), max: Math.max(...ttfts) }, tpot: tpots.length > 0 ? { mean: mean(tpots), median: percentile(tpots, 50), p95: percentile(tpots, 95) } : null, total_latency: { mean: mean(totals), median: percentile(totals, 50), p95: percentile(totals, 95), p99: percentile(totals, 99) }, throughput: { mean_tps: mean(throughputs), median_tps: percentile(throughputs, 50) } }; } } function mean(arr) { return arr.reduce((sum, val) => sum + val, 0) / arr.length; } function percentile(sortedArr, p) { const index = Math.ceil((sortedArr.length * p) / 100) - 1; return sortedArr[Math.max(0, index)]; } function sleep(ms) { return new Promise(resolve => setTimeout(resolve, ms)); } This measurement infrastructure captures: Time to First Token (TTFT): Critical for perceived responsiveness—users notice delays before output begins Time Per Output Token (TPOT): Determines generation speed after first token—affects throughput Total latency: End-to-end time—matters for batch processing and high-volume scenarios Tokens per second: Overall throughput metric—useful for capacity planning Statistical distributions: Mean alone masks variability—p95/p99 reveal tail latencies that impact user experience Success/error rates: Stability metrics—some models timeout or crash under load Designing Meaningful Benchmark Suites Benchmark quality depends on prompt selection. Generic prompts don't reflect real application behavior. 
Design suites that mirror actual use cases: // benchmarks/suites/default.json { "name": "default", "description": "General-purpose benchmark covering diverse scenarios", "prompts": [ { "id": "short-factual", "text": "What is the capital of France?", "category": "factual", "expected_tokens": 5 }, { "id": "medium-explanation", "text": "Explain how photosynthesis works in 3-4 sentences.", "category": "explanation", "expected_tokens": 80 }, { "id": "long-reasoning", "text": "Analyze the economic factors that led to the 2008 financial crisis. Discuss at least 5 major causes with supporting details.", "category": "reasoning", "expected_tokens": 250 }, { "id": "code-generation", "text": "Write a Python function that finds the longest palindrome in a string. Include docstring and example usage.", "category": "coding", "expected_tokens": 150 }, { "id": "creative-writing", "text": "Write a short story (3 paragraphs) about a robot learning to paint.", "category": "creative", "expected_tokens": 200 } ] } This suite covers multiple dimensions: Length variation: Short (5 tokens), medium (80), long (250)—tests models across output ranges Task diversity: Factual recall, explanation, reasoning, code, creative—reveals capability breadth Token predictability: Expected token counts enable throughput calculations For production applications, create custom suites matching your actual workload: { "name": "customer-support", "description": "Simulates actual customer support queries", "prompts": [ { "id": "product-question", "text": "How do I reset my password for the customer portal?" }, { "id": "troubleshooting", "text": "I'm getting error code 503 when trying to upload files. What should I do?" }, { "id": "policy-inquiry", "text": "What is your refund policy for annual subscriptions?" } ] } Visualizing Multi-Dimensional Performance Comparisons Raw numbers don't reveal insights—visualization makes patterns obvious. The frontend implements several comparison views: Comparison Table shows side-by-side metrics: // frontend/src/components/ResultsTable.jsx export function ResultsTable({ results }) { return ( {results.map(result => ( ))} Model TTFT (ms) TPOT (ms) Throughput (tok/s) P95 Latency Error Rate {result.model_id} {result.stats.ttft.median.toFixed(0)} (p95: {result.stats.ttft.p95.toFixed(0)}) {result.stats.tpot?.median.toFixed(1) || 'N/A'} {result.stats.throughput.median_tps.toFixed(1)} {result.stats.total_latency.p95.toFixed(0)} ms 0.05 ? 'error' : 'success'}> {(result.stats.error_rate * 100).toFixed(1)}% ); } Latency Distribution Chart reveals performance consistency: // Using Chart.js for visualization export function LatencyChart({ results }) { const data = { labels: results.map(r => r.model_id), datasets: [ { label: 'Median (p50)', data: results.map(r => r.stats.total_latency.median), backgroundColor: 'rgba(75, 192, 192, 0.5)' }, { label: 'p95', data: results.map(r => r.stats.total_latency.p95), backgroundColor: 'rgba(255, 206, 86, 0.5)' }, { label: 'p99', data: results.map(r => r.stats.total_latency.p99), backgroundColor: 'rgba(255, 99, 132, 0.5)' } ] }; return ( ); } Recommendations Engine synthesizes multi-dimensional comparison: export function generateRecommendations(results) { const recommendations = []; // Find fastest TTFT (best perceived responsiveness) const fastestTTFT = results.reduce((best, r) => r.stats.ttft.median < best.stats.ttft.median ? 
    r : best
  );
  recommendations.push({
    category: 'Fastest Response',
    model: fastestTTFT.model_id,
    reason: `Lowest median TTFT: ${fastestTTFT.stats.ttft.median.toFixed(0)}ms`
  });

  // Find highest throughput
  const highestThroughput = results.reduce((best, r) =>
    r.stats.throughput.median_tps > best.stats.throughput.median_tps ? r : best
  );
  recommendations.push({
    category: 'Best Throughput',
    model: highestThroughput.model_id,
    reason: `Highest tok/s: ${highestThroughput.stats.throughput.median_tps.toFixed(1)}`
  });

  // Find most consistent (lowest p95-p50 spread)
  const mostConsistent = results.reduce((best, r) => {
    const spread = r.stats.total_latency.p95 - r.stats.total_latency.median;
    const bestSpread = best.stats.total_latency.p95 - best.stats.total_latency.median;
    return spread < bestSpread ? r : best;
  });
  recommendations.push({
    category: 'Most Consistent',
    model: mostConsistent.model_id,
    reason: 'Lowest latency variance (p95-p50 spread)'
  });

  return recommendations;
}

Key Takeaways and Benchmarking Best Practices
Effective model benchmarking requires scientific methodology, comprehensive metrics, and application-specific testing. FLPerformance demonstrates that rigorous performance measurement is accessible to any development team. Critical principles for model evaluation:
Test on target hardware: Results from cloud GPUs don't predict laptop performance
Measure multiple dimensions: TTFT, TPOT, throughput, consistency all matter
Use statistical rigor: Single runs mislead—capture distributions with adequate sample sizes
Design realistic workloads: Generic benchmarks don't predict your application's behavior
Include warmup iterations: Model loading and JIT compilation affect early measurements
Control concurrency: Real applications handle multiple requests—test at realistic loads
Document methodology: Reproducible results require documented procedures and configurations
The complete benchmarking platform with model management, measurement infrastructure, visualization dashboards, and comprehensive documentation is available at github.com/leestott/FLPerformance. Clone the repository and run the startup script to begin evaluating models on your hardware.

Resources and Further Reading
FLPerformance Repository - Complete benchmarking platform
Quick Start Guide - Setup and first benchmark run
Microsoft Foundry Local Documentation - SDK reference and model catalog
Architecture Guide - System design and SDK integration
Benchmarking Best Practices - Methodology and troubleshooting

The Next One - SC200 Courseware unusable
Hi All, we are trying to deliver the SC200 course. Our MCTs tell us the courseware is not working at all.
Video links are not working, e.g. (Playback Error - We are sorry, something went wrong):
https://www.microsoft.com/en-us/videoplayer/embed/RE4yiO5?rel=0&postJsllMsg=true
https://www.microsoft.com/videoplayer/embed/RE4bOeh?rel=0
https://www.microsoft.com/en-us/videoplayer/embed/RE4bGqo?rel=0&postJsllMsg=true
ALH labs are not working because of the cloud slice architecture. We (the MCTs) spend hours trying to figure out how the labs are supposed to work in the ALH environment:
Missing permissions
Wrong links
I cannot find any contact to get answers to these issues. Can anyone give hints on how to raise these topics? Just playing ping-pong between the ALH and the "Content Creators" (one month feedback time) and getting no resolution doesn't help here. Can you please get back to a working environment (full Azure access, real-world training situations), not a video here and a simulation there.
Regards, Mario

How much does real-world practice matter compared to certifications when building skills?
Hi everyone, I've been spending time building skills through a mix of hands-on practice and certification preparation, and I've noticed there is often a gap between what exams test and what actually comes up in real scenarios. Certifications are useful for structure and validation, but in day-to-day work, troubleshooting, decision-making, and adapting to unexpected issues often seem just as important, sometimes even more. I'm interested in hearing how others here approach this and how they balance certification study with real-world practice. Did earning certifications help more in practical situations or mainly from a career and credibility point of view? And for someone early in their learning journey, what should be prioritized first?