Building Interactive Agent UIs with AG-UI and Microsoft Agent Framework
Introduction

Picture this: You've built an AI agent that analyzes financial data. A user uploads a quarterly report and asks: "What are the top three expense categories?" Behind the scenes, your agent parses the spreadsheet, aggregates thousands of rows, and generates visualizations. All in 20 seconds. But the user? They see a loading spinner. Nothing else. No "reading file" message, no "analyzing data" indicator, no hint that progress is being made. They start wondering: Is it frozen? Should I refresh?

The problem isn't the agent's capabilities - it's the communication gap between the agent running on the backend and the user interface. When agents perform multi-step reasoning, call external APIs, or execute complex tool chains, users deserve to see what's happening. They need streaming updates, intermediate results, and transparent progress indicators. Yet most agent frameworks force developers to choose between simple request/response patterns or building custom solutions to stream updates to their UIs.

This is where AG-UI comes in. AG-UI is a fairly new event-based protocol that standardizes how agents communicate with user interfaces. Instead of every framework and development team inventing their own streaming solution, AG-UI provides a shared vocabulary of structured events that work consistently across different agent implementations. When an agent starts processing, calls a tool, generates text, or encounters an error, the UI receives explicit, typed events in real time.

The beauty of AG-UI is its framework-agnostic design. While this blog post demonstrates integration with Microsoft Agent Framework (MAF), the same AG-UI protocol works with LangGraph, CrewAI, or any other compliant framework. Write your UI code once, and it works with any AG-UI-compliant backend. (Note: MAF supports both Python and .NET - this blog post focuses on the Python implementation.)

TL;DR

The Problem: Users don't get real-time updates while AI agents work behind the scenes - no progress indicators, no transparency into tool calls, and no insight into what's happening.

The Solution: AG-UI is an open, event-based protocol that standardizes real-time communication between AI agents and user interfaces. Instead of each development team and framework inventing custom streaming solutions, AG-UI provides a shared vocabulary of structured events (like TOOL_CALL_START, TEXT_MESSAGE_CONTENT, RUN_FINISHED) that work across any compliant framework.

Key Benefits:
- Framework-agnostic - Write UI code once; it works with LangGraph, Microsoft Agent Framework, CrewAI, and more
- Real-time observability - See exactly what your agent is doing as it happens
- Server-Sent Events - Built on standard HTTP for universal compatibility
- Protocol-managed state - No manual conversation history tracking

In This Post: You'll learn why AG-UI exists, how it works, and build a complete working application using Microsoft Agent Framework with Python - from server setup to client implementation.
What You'll Learn

This blog post walks through:
- Why AG-UI exists - how agent-UI communication has evolved and what problems current approaches couldn't solve
- How the protocol works - the key design choices that make AG-UI simple, reliable, and framework-agnostic
- Protocol architecture - the generic components and how AG-UI integrates with agent frameworks
- Building an AG-UI application - a complete working example using Microsoft Agent Framework with server, client, and step-by-step setup
- Understanding events - what happens under the hood when your agent runs and how to observe it
- Thinking in events - how building with AG-UI differs from traditional APIs, and what benefits this brings
- Making the right choice - when AG-UI is the right fit for your project and when alternatives might be better

Estimated reading time: 15 minutes

Who this is for: Developers building AI agents who want to provide real-time feedback to users, and teams evaluating standardized approaches to agent-UI communication.

To appreciate why AG-UI matters, we need to understand the journey that led to its creation. Let's trace how agent-UI communication has evolved through three distinct phases.

The Evolution of Agent-UI Communication

AI agents have become more capable over time. As they evolved, the way they communicated with user interfaces had to evolve as well. Here's how this evolution unfolded.

Phase 1: Simple Request/Response

In the early days of AI agent development, the interaction model was straightforward: send a question, wait for an answer, display the result. This synchronous approach mirrored traditional API calls and worked fine for simple scenarios.

    # Simple, but limiting
    response = agent.run("What's the weather in Paris?")
    display(response)  # User waits... and waits...

Works for: Quick queries that complete in seconds, simple Q&A interactions where immediate feedback and interactivity aren't critical.

Breaks down: When agents need to call multiple tools, perform multi-step reasoning, or process complex queries that take 30+ seconds. Users see nothing but a loading spinner, with no insight into what's happening or whether the agent is making progress. This creates a poor user experience and makes it impossible to show intermediate results or allow user intervention.

Recognizing these limitations, development teams began experimenting with more sophisticated approaches.

Phase 2: Custom Streaming Solutions

As agents became more sophisticated, teams recognized the need for incremental feedback and interactivity. Rather than waiting for the complete response, they implemented custom streaming solutions to show partial results as they became available.

    # Every team invents their own format
    for chunk in agent.stream("What's the weather?"):
        display(chunk)  # But what about tool calls? Errors? Progress?

This was a step forward for building interactive agent UIs, but each team solved the problem differently, and different frameworks had incompatible approaches - some streamed only text tokens, others sent structured JSON, and most provided no visibility into critical events like tool calls or errors.
The problems:
- No standardization across frameworks - client code that works with LangGraph won't work with CrewAI, requiring separate implementations for each agent backend
- Each implementation handles tool calls differently - some send nothing during tool execution, others send unstructured messages
- Complex state management - clients must track conversation history, manage reconnections, and handle edge cases manually

The industry needed a better solution - a common protocol that could work across all frameworks while maintaining the benefits of streaming.

Phase 3: Standardized Protocol (AG-UI)

AG-UI emerged as a response to the fragmentation problem. Instead of each framework and development team inventing their own streaming solution, AG-UI provides a shared vocabulary of events that work consistently across different agent implementations.

    # Standardized events everyone understands
    async for event in agent.run_stream("What's the weather?"):
        if event.type == "TEXT_MESSAGE_CONTENT":
            display_text(event.delta)
        elif event.type == "TOOL_CALL_START":
            show_tool_indicator(event.tool_name)
        elif event.type == "TOOL_CALL_RESULT":
            show_tool_result(event.result)

The key difference is structured observability. Rather than guessing what the agent is doing from unstructured text, clients receive explicit events for every stage of execution: when the agent starts, when it generates text, when it calls a tool, when that tool completes, and when the entire run finishes.

What's different: A standardized vocabulary of event types, complete observability into agent execution, and framework-agnostic clients that work with any AG-UI-compliant backend. You write your UI code once, and it works whether the backend uses Microsoft Agent Framework, LangGraph, or any other framework that speaks AG-UI.

Now that we've seen why AG-UI emerged and what problems it solves, let's examine the specific design decisions that make the protocol work. These choices weren't arbitrary - each one addresses concrete challenges in building reliable, observable agent-UI communication.

The Design Decisions Behind AG-UI

Why Server-Sent Events (SSE)?

    Aspect           WebSockets              SSE (AG-UI)
    Complexity       Bidirectional           Unidirectional (simpler)
    Firewall/Proxy   Sometimes blocked       Standard HTTP
    Reconnection     Manual implementation   Built-in browser support
    Use case         Real-time games, chat   Agent responses (one-way)

For agent interactions, you typically only need server→client communication, making SSE a simpler choice.

SSE solves the transport problem - how events travel from server to client. But once connected, how does the protocol handle conversation state across multiple interactions?

Why Protocol-Managed Threads?

    # Without protocol threads (client manages):
    conversation_history = []
    conversation_history.append({"role": "user", "content": message})
    response = agent.complete(conversation_history)
    conversation_history.append({"role": "assistant", "content": response})
    # Complex, error-prone, doesn't work with multiple clients

    # With AG-UI (protocol manages):
    thread = agent.get_new_thread()               # Server creates and manages thread
    agent.run_stream(message, thread=thread)      # Server maintains context
    # Simple, reliable, shareable across clients

With transport and state management handled, the final piece is the actual messages flowing through the connection. What information should the protocol communicate, and how should it be structured?

Why Standardized Event Types?
Instead of parsing unstructured text, clients get typed events:
- RUN_STARTED - Agent begins (start loading UI)
- TEXT_MESSAGE_CONTENT - Text chunk (stream to user)
- TOOL_CALL_START - Tool invoked (show "searching...", "calculating...")
- TOOL_CALL_RESULT - Tool finished (show result, update UI)
- RUN_FINISHED - Complete (hide loading)

This lets UIs react intelligently without custom parsing logic.

Now that we understand the protocol's design choices, let's see how these pieces fit together in a complete system.

Architecture Overview

Here's how the components interact: the UI client, the AG-UI protocol layer, and the agent backend communicate over a single SSE connection. The communication between these layers relies on a well-defined set of event types. Here are the core events that flow through the SSE connection:

Core Event Types

AG-UI provides a standardized set of event types to describe what's happening during an agent's execution:
- RUN_STARTED - agent begins execution
- TEXT_MESSAGE_START, TEXT_MESSAGE_CONTENT, TEXT_MESSAGE_END - streaming segments of text
- TOOL_CALL_START, TOOL_CALL_ARGS, TOOL_CALL_END, TOOL_CALL_RESULT - tool execution events
- RUN_FINISHED - agent has finished execution
- RUN_ERROR - error information

This model lets the UI update as the agent runs, rather than waiting for the final response. The generic architecture above applies to any AG-UI implementation. Now let's see how this translates to Microsoft Agent Framework.

AG-UI with Microsoft Agent Framework

While AG-UI is framework-agnostic, this blog post demonstrates integration with Microsoft Agent Framework (MAF) using Python. MAF is available in both Python and .NET, giving you flexibility to build AG-UI applications in your preferred language. Understanding how MAF implements the protocol will help you build your own applications or work with other compliant frameworks.

Integration Architecture

The Microsoft Agent Framework integration involves several specialized layers that handle protocol translation and execution orchestration. Understanding each layer:
- FastAPI Endpoint - Handles HTTP requests and establishes SSE connections for streaming
- AgentFrameworkAgent - Protocol wrapper that translates between AG-UI events and Agent Framework operations
- Orchestrators - Manage execution flow, coordinate tool calling sequences, and handle state transitions
- ChatAgent - Your agent implementation with instructions, tools, and business logic
- ChatClient - Interface to the underlying language model (Azure OpenAI, OpenAI, or other providers)

The good news? When you call add_agent_framework_fastapi_endpoint, all the middleware layers are configured automatically. You simply provide your ChatAgent, and the integration handles protocol translation, event streaming, and state management behind the scenes.

Now that we understand both the protocol architecture and the Microsoft Agent Framework integration, let's build a working application.

Hands-On: Building Your First AG-UI Application

This section demonstrates how to build an AG-UI server and client using Microsoft Agent Framework and FastAPI.

Prerequisites

Before building your first AG-UI application, ensure you have:
- Python 3.10 or later installed
- Basic understanding of async/await patterns in Python
- Azure CLI installed and authenticated (az login)
- Azure OpenAI service endpoint and deployment configured (setup guide)
- Cognitive Services OpenAI Contributor role for your Azure OpenAI resource

You'll also need to install the AG-UI integration package:

    pip install agent-framework-ag-ui --pre

This automatically installs agent-framework-core, fastapi, and uvicorn as dependencies.
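Because both the server and client below load their settings with load_dotenv(), you can keep the configuration in a .env file next to the code. Here's a minimal sketch - the values are placeholders, so substitute your own Azure OpenAI endpoint and deployment name:

    # .env - placeholder values, replace with your own resource details
    AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
    AZURE_OPENAI_DEPLOYMENT_NAME=gpt-4o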
With your environment configured, let's create the server that will host your agent and expose it via the AG-UI protocol.

Building the Server

Let's create a FastAPI server that hosts an AI agent and exposes it via AG-UI:

    # server.py
    import os
    from typing import Annotated

    from dotenv import load_dotenv
    from fastapi import FastAPI
    from pydantic import Field

    from agent_framework import ChatAgent, ai_function
    from agent_framework.azure import AzureOpenAIChatClient
    from agent_framework_ag_ui import add_agent_framework_fastapi_endpoint
    from azure.identity import DefaultAzureCredential

    # Load environment variables from .env file
    load_dotenv()

    # Validate environment configuration
    openai_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
    model_deployment = os.getenv("AZURE_OPENAI_DEPLOYMENT_NAME")

    if not openai_endpoint:
        raise RuntimeError("Missing required environment variable: AZURE_OPENAI_ENDPOINT")
    if not model_deployment:
        raise RuntimeError("Missing required environment variable: AZURE_OPENAI_DEPLOYMENT_NAME")

    # Define tools the agent can use
    @ai_function
    def get_order_status(
        order_id: Annotated[str, Field(description="The order ID to look up (e.g., ORD-001)")]
    ) -> dict:
        """Look up the status of a customer order.

        Returns order status, tracking number, and estimated delivery date.
        """
        # Simulated order lookup
        orders = {
            "ORD-001": {"status": "shipped", "tracking": "1Z999AA1", "eta": "Jan 25, 2026"},
            "ORD-002": {"status": "processing", "tracking": None, "eta": "Jan 23, 2026"},
            "ORD-003": {"status": "delivered", "tracking": "1Z999AA3", "eta": "Delivered Jan 20"},
        }
        return orders.get(order_id, {"status": "not_found", "message": "Order not found"})

    # Initialize Azure OpenAI client
    chat_client = AzureOpenAIChatClient(
        credential=DefaultAzureCredential(),
        endpoint=openai_endpoint,
        deployment_name=model_deployment,
    )

    # Configure the agent with custom instructions and tools
    agent = ChatAgent(
        name="CustomerSupportAgent",
        instructions="""You are a helpful customer support assistant.
        You have access to a get_order_status tool that can look up order information.

        IMPORTANT: When a user mentions an order ID (like ORD-001, ORD-002, etc.),
        you MUST call the get_order_status tool to retrieve the actual order details.
        Do NOT make up or guess order information.

        After calling get_order_status, provide the actual results to the user
        in a friendly format.""",
        chat_client=chat_client,
        tools=[get_order_status],
    )

    # Initialize FastAPI application
    app = FastAPI(
        title="AG-UI Customer Support Server",
        description="Interactive AI agent server using AG-UI protocol with tool calling",
    )

    # Mount the AG-UI endpoint
    add_agent_framework_fastapi_endpoint(app, agent, path="/chat")

    def main():
        """Entry point for the AG-UI server."""
        import uvicorn
        print("Starting AG-UI server on http://localhost:8000")
        uvicorn.run(app, host="0.0.0.0", port=8000, log_level="info")

    # Run the application
    if __name__ == "__main__":
        main()

What's happening here:
- We define a tool, get_order_status, with the ai_function decorator
- Annotated and Field provide parameter descriptions that help the agent understand when and how to use the tool
- We create an Azure OpenAI chat client with credential authentication
- The ChatAgent is configured with domain-specific instructions and the tools parameter
- add_agent_framework_fastapi_endpoint automatically handles SSE streaming and tool execution
- The server exposes the agent at the /chat endpoint

Note: This example uses Azure OpenAI, but AG-UI works with any chat model.
You can also integrate with Azure AI Foundry's model catalog or use other LLM providers. Tool calling is supported by most modern LLMs, including GPT-4, GPT-4o, and Claude models.

To run this server:

    # Set your Azure OpenAI credentials
    export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
    export AZURE_OPENAI_DEPLOYMENT_NAME="gpt-4o"

    # Start the server
    python server.py

With your server running and exposing the AG-UI endpoint, the next step is building a client that can connect and consume the event stream.

Streaming Results to Clients

With the server running, clients can connect and stream events as the agent processes requests. Here's a Python client that demonstrates the streaming capabilities:

    # client.py
    import asyncio
    import os

    from dotenv import load_dotenv

    from agent_framework import ChatAgent, FunctionCallContent, FunctionResultContent
    from agent_framework_ag_ui import AGUIChatClient

    # Load environment variables from .env file
    load_dotenv()

    async def interactive_chat():
        """Interactive chat session with streaming responses."""
        # Connect to the AG-UI server
        base_url = os.getenv("AGUI_SERVER_URL", "http://localhost:8000/chat")
        print(f"Connecting to: {base_url}\n")

        # Initialize the AG-UI client
        client = AGUIChatClient(endpoint=base_url)

        # Create a local agent representation
        agent = ChatAgent(chat_client=client)

        # Start a new conversation thread
        conversation_thread = agent.get_new_thread()

        print("Chat started! Type 'exit' or 'quit' to end the session.\n")

        try:
            while True:
                # Collect user input
                user_message = input("You: ")

                # Handle empty input
                if not user_message.strip():
                    print("Please enter a message.\n")
                    continue

                # Check for exit commands
                if user_message.lower() in ["exit", "quit", "bye"]:
                    print("\nGoodbye!")
                    break

                # Stream the agent's response
                print("Agent: ", end="", flush=True)

                # Track tool calls to avoid duplicate prints
                seen_tools = set()

                async for update in agent.run_stream(user_message, thread=conversation_thread):
                    # Display text content
                    if update.text:
                        print(update.text, end="", flush=True)

                    # Display tool calls and results
                    for content in update.contents:
                        if isinstance(content, FunctionCallContent):
                            # Only print each tool call once
                            if content.call_id not in seen_tools:
                                seen_tools.add(content.call_id)
                                print(f"\n[Calling tool: {content.name}]", flush=True)
                        elif isinstance(content, FunctionResultContent):
                            # Only print each result once
                            result_id = f"result_{content.call_id}"
                            if result_id not in seen_tools:
                                seen_tools.add(result_id)
                                result_text = content.result if isinstance(content.result, str) else str(content.result)
                                print(f"[Tool result: {result_text}]", flush=True)

                print("\n")  # New line after response completes

        except KeyboardInterrupt:
            print("\n\nChat interrupted by user.")
        except ConnectionError as e:
            print(f"\nConnection error: {e}")
            print("Make sure the server is running.")
        except Exception as e:
            print(f"\nUnexpected error: {e}")

    def main():
        """Entry point for the AG-UI client."""
        asyncio.run(interactive_chat())

    if __name__ == "__main__":
        main()

Key features:
- The client connects to the AG-UI endpoint using AGUIChatClient with the endpoint parameter
- run_stream() yields updates containing text and content as they arrive
- Tool calls are detected using FunctionCallContent and displayed with [Calling tool: ...]
- Tool results are detected using FunctionResultContent and displayed with [Tool result: ...]
- Deduplication logic (the seen_tools set) prevents printing the same tool call multiple times as it streams
- Thread management maintains conversation context across messages
- Graceful error handling for connection issues

To use the client:

    # Optional: specify custom server URL
    export AGUI_SERVER_URL="http://localhost:8000/chat"

    # Start the interactive chat
    python client.py

Example Session:

    Connecting to: http://localhost:8000/chat
    Chat started! Type 'exit' or 'quit' to end the session.

    You: What's the status of order ORD-001?
    Agent:
    [Calling tool: get_order_status]
    [Tool result: {"status": "shipped", "tracking": "1Z999AA1", "eta": "Jan 25, 2026"}]
    Your order ORD-001 has been shipped!
    - Tracking Number: 1Z999AA1
    - Estimated Delivery Date: January 25, 2026
    You can use the tracking number to monitor the delivery progress.

    You: Can you check ORD-002?
    Agent:
    [Calling tool: get_order_status]
    [Tool result: {"status": "processing", "tracking": null, "eta": "Jan 23, 2026"}]
    Your order ORD-002 is currently being processed.
    - Status: Processing
    - Estimated Delivery: January 23, 2026
    Your order should ship soon, and you'll receive a tracking number once it's on the way.

    You: exit
    Goodbye!

The client we just built handles events at a high level, abstracting away the details. But what's actually flowing through that SSE connection? Let's peek under the hood.

Event Types You'll See

As the server streams back responses, clients receive a series of structured events. If you were to observe the raw SSE stream (e.g., using curl), you'd see events like:

    curl -N http://localhost:8000/chat \
      -H "Content-Type: application/json" \
      -H "Accept: text/event-stream" \
      -d '{"messages": [{"role": "user", "content": "What'\''s the status of order ORD-001?"}]}'

Sample event stream (with tool calling):

    data: {"type":"RUN_STARTED","threadId":"eb4d9850-14ef-446c-af4b-23037acda9e8","runId":"chatcmpl-xyz"}
    data: {"type":"TEXT_MESSAGE_START","messageId":"e8648880-a9ff-4178-a17d-4a6d3ec3d39c","role":"assistant"}
    data: {"type":"TOOL_CALL_START","toolCallId":"call_GTWj2N3ZyYiiQIjg3fwmiQ8y","toolCallName":"get_order_status","parentMessageId":"e8648880-a9ff-4178-a17d-4a6d3ec3d39c"}
    data: {"type":"TOOL_CALL_ARGS","toolCallId":"call_GTWj2N3ZyYiiQIjg3fwmiQ8y","delta":"{\""}
    data: {"type":"TOOL_CALL_ARGS","toolCallId":"call_GTWj2N3ZyYiiQIjg3fwmiQ8y","delta":"order"}
    data: {"type":"TOOL_CALL_ARGS","toolCallId":"call_GTWj2N3ZyYiiQIjg3fwmiQ8y","delta":"_id"}
    data: {"type":"TOOL_CALL_ARGS","toolCallId":"call_GTWj2N3ZyYiiQIjg3fwmiQ8y","delta":"\":\""}
    data: {"type":"TOOL_CALL_ARGS","toolCallId":"call_GTWj2N3ZyYiiQIjg3fwmiQ8y","delta":"ORD"}
    data: {"type":"TOOL_CALL_ARGS","toolCallId":"call_GTWj2N3ZyYiiQIjg3fwmiQ8y","delta":"-"}
    data: {"type":"TOOL_CALL_ARGS","toolCallId":"call_GTWj2N3ZyYiiQIjg3fwmiQ8y","delta":"001"}
    data: {"type":"TOOL_CALL_ARGS","toolCallId":"call_GTWj2N3ZyYiiQIjg3fwmiQ8y","delta":"\"}"}
    data: {"type":"TOOL_CALL_END","toolCallId":"call_GTWj2N3ZyYiiQIjg3fwmiQ8y"}
    data: {"type":"TOOL_CALL_RESULT","messageId":"f048cb0a-a049-4a51-9403-a05e4820438a","toolCallId":"call_GTWj2N3ZyYiiQIjg3fwmiQ8y","content":"{\"status\": \"shipped\", \"tracking\": \"1Z999AA1\", \"eta\": \"Jan 25, 2026\"}","role":"tool"}
    data: {"type":"TEXT_MESSAGE_START","messageId":"8215fc88-8cb6-4ce4-8bdb-a8715dcd26cf","role":"assistant"}
    data: {"type":"TEXT_MESSAGE_CONTENT","messageId":"8215fc88-8cb6-4ce4-8bdb-a8715dcd26cf","delta":"Your"}
    data: {"type":"TEXT_MESSAGE_CONTENT","messageId":"8215fc88-8cb6-4ce4-8bdb-a8715dcd26cf","delta":" order"}
{"type":"TEXT_MESSAGE_CONTENT","messageId":"8215fc88-8cb6-4ce4-8bdb-a8715dcd26cf","delta":" ORD"} data: {"type":"TEXT_MESSAGE_CONTENT","messageId":"8215fc88-8cb6-4ce4-8bdb-a8715dcd26cf","delta":"-"} data: {"type":"TEXT_MESSAGE_CONTENT","messageId":"8215fc88-8cb6-4ce4-8bdb-a8715dcd26cf","delta":"001"} data: {"type":"TEXT_MESSAGE_CONTENT","messageId":"8215fc88-8cb6-4ce4-8bdb-a8715dcd26cf","delta":" has"} data: {"type":"TEXT_MESSAGE_CONTENT","messageId":"8215fc88-8cb6-4ce4-8bdb-a8715dcd26cf","delta":" been"} data: {"type":"TEXT_MESSAGE_CONTENT","messageId":"8215fc88-8cb6-4ce4-8bdb-a8715dcd26cf","delta":" shipped"} data: {"type":"TEXT_MESSAGE_CONTENT","messageId":"8215fc88-8cb6-4ce4-8bdb-a8715dcd26cf","delta":"!"} ... (additional TEXT_MESSAGE_CONTENT events streaming the response) ... data: {"type":"TEXT_MESSAGE_END","messageId":"8215fc88-8cb6-4ce4-8bdb-a8715dcd26cf"} data: {"type":"RUN_FINISHED","threadId":"eb4d9850-14ef-446c-af4b-23037acda9e8","runId":"chatcmpl-xyz"} Understanding the flow: RUN_STARTED - Agent begins processing the request TEXT_MESSAGE_START - First message starts (will contain tool calls) TOOL_CALL_START - Agent invokes the get_order_status tool Multiple TOOL_CALL_ARGS events - Arguments stream incrementally as JSON chunks ({"order_id":"ORD-001"}) TOOL_CALL_END - Tool invocation structure complete TOOL_CALL_RESULT - Tool execution finished with result data TEXT_MESSAGE_START - Second message starts (the final response) Multiple TEXT_MESSAGE_CONTENT events - Response text streams word-by-word TEXT_MESSAGE_END - Response message complete RUN_FINISHED - Entire run completed successfully This granular event model enables rich UI experiences - showing tool execution indicators ("Searching...", "Calculating..."), displaying intermediate results, and providing complete transparency into the agent's reasoning process. Seeing the raw events helps, but truly working with AG-UI requires a shift in how you think about agent interactions. Let's explore this conceptual change. The Mental Model Shift Traditional API Thinking # Imperative: Call and wait response = agent.run("What's 2+2?") print(response) # "The answer is 4" Mental model: Function call with return value AG-UI Thinking # Reactive: Subscribe to events async for event in agent.run_stream("What's 2+2?"): match event.type: case "RUN_STARTED": show_loading() case "TEXT_MESSAGE_CONTENT": display_chunk(event.delta) case "RUN_FINISHED": hide_loading() Mental model: Observable stream of events This shift feels similar to: Moving from synchronous to async code Moving from REST to event-driven architecture Moving from polling to pub/sub This mental shift isn't just philosophical - it unlocks concrete benefits that weren't possible with request/response patterns. What You Gain Observability # You can SEE what the agent is doing TOOL_CALL_START: "get_order_status" TOOL_CALL_ARGS: {"order_id": "ORD-001"} TOOL_CALL_RESULT: {"status": "shipped", "tracking": "1Z999AA1", "eta": "Jan 25, 2026"} TEXT_MESSAGE_START: "Your order ORD-001 has been shipped..." Interruptibility # Future: Cancel long-running operations async for event in agent.run_stream(query): if user_clicked_cancel: await agent.cancel(thread_id, run_id) break Transparency # Users see the reasoning process "Looking up order ORD-001..." "Order found: Status is 'shipped'" "Retrieving tracking information..." "Your order has been shipped with tracking number 1Z999AA1..." 
To put these benefits in context, here's how AG-UI compares to traditional approaches across key dimensions:

AG-UI vs. Traditional Approaches

    Aspect               Traditional REST     Custom Streaming     AG-UI
    Connection Model     Request/Response     Varies               Server-Sent Events
    State Management     Manual               Manual               Protocol-managed
    Tool Calling         Invisible            Custom format        Standardized events
    Framework Coupling   Varies               Framework-locked     Framework-agnostic
    Browser Support      Universal            Varies               Universal
    Implementation       Simple               Complex              Moderate
    Ecosystem            N/A                  Isolated             Growing

You've now seen AG-UI's design principles, implementation details, and conceptual foundations. But the most important question remains: should you actually use it?

Conclusion: Is AG-UI Right for Your Project?

AG-UI represents a shift toward standardized, observable agent interactions. Before adopting it, understand where the protocol stands and whether it fits your needs.

Protocol Maturity

The protocol is stable enough for production use but still evolving. Ready now: the core specification is stable, the Microsoft Agent Framework integration is available, the FastAPI/Python implementation is mature, and basic streaming and threading work reliably.

Choose AG-UI If You
- Are building new agent projects - no legacy API to maintain, and you want future compatibility with the emerging ecosystem
- Need streaming observability - multi-step workflows where users benefit from seeing each stage of execution
- Want framework flexibility - the same client code works with any AG-UI-compliant backend
- Are comfortable with evolving standards - you can adapt to protocol changes as it matures

Stick with Alternatives If You
- Have working solutions - custom streaming is working well and the migration cost isn't justified
- Need guaranteed stability - mission-critical systems where breaking changes are unacceptable
- Build simple agents - single-step request/response without tool calling or streaming needs
- Work in a risk-averse environment - large existing implementations where proven approaches are required

Beyond individual project decisions, it's worth considering AG-UI's role in the broader ecosystem.

The Bigger Picture

While this blog post focused on Microsoft Agent Framework, AG-UI's true power lies in its broader mission: creating a common language for agent-UI communication across the entire ecosystem. As more frameworks adopt it, the real value emerges: write your UI once, work with any compliant agent framework. Think of it like GraphQL for APIs or OpenAPI for REST - a standardization layer that benefits the entire ecosystem.

The protocol is young, but the problem it solves is real. Whether you adopt it now or wait for broader adoption, understanding AG-UI helps you make informed architectural decisions for your agent applications.

Ready to dive deeper? Here are the official resources to continue your AG-UI journey.
Resources AG-UI & Microsoft Agent Framework Getting Started with AG-UI (Microsoft Learn) - Official tutorial AG-UI Integration Overview - Architecture and concepts AG-UI Protocol Specification - Official protocol documentation Backend Tool Rendering - Adding function tools Security Considerations - Production security guidance Microsoft Agent Framework Documentation - Framework overview AG-UI Dojo Examples - Live demonstrations UI Components & Integration CopilotKit for Microsoft Agent Framework - React component library Community & Support Microsoft Q&A - Community support Agent Framework GitHub - Source code and issues Related Technologies Azure AI Foundry Documentation - Azure AI platform FastAPI Documentation - Web framework Server-Sent Events (SSE) Specification - Protocol standard This blog post introduces AG-UI with Microsoft Agent Framework, focusing on fundamental concepts and building your first interactive agent application.How We Built an AI Operations Agent Using MCP Servers and Dynamic Tool Routing
Modern operations teams are turning to AI agents to resolve shipping delays faster and more accurately. In this article, we build a "two-brain" AI agent on Azure - one MCP server that reads policies from Blob Storage and another that updates order data in Azure SQL - to automate decisions like hazardous-material handling and delivery prioritization. You'll see how these coordinated capabilities transform a simple user query into a fully automated operational workflow.

Partner Blog | Cloud & AI Platforms, Part 5: Innovating with Azure AI apps and agents
As we wrap up this Cloud & AI Platforms series, I want to focus on where many partners are investing next: building AI apps and agents that move beyond experiments and into production, at scale, with trust built in.

Across industries, AI is reshaping how applications are designed, built, and delivered. We are witnessing the shift from point solutions to systems, with Copilots and agents that can reason, act, and collaborate across workflows. That shift is creating both urgency and opportunity for partners. According to IDC research, 68% of global enterprises surveyed are already using generative AI, and another 26% plan to adopt it in the next 12 months. At the same time, organizations are rapidly moving from prebuilt generative AI apps toward customized or custom-built solutions, with IDC projecting that shift from 40% today to 70% in the next 24 months.

This is exactly where Azure AI apps and agents come in, empowering customers to operationalize AI faster while keeping security, governance, and oversight front and center.

Continue reading here.

Strengthen your cloud security expertise with new AI security training
As organizations accelerate their adoption of AI, the need for rigorous, cloud-ready security skills has never been greater. To help meet that need, we're releasing a wave of new and updated Microsoft Learn offerings this month, designed to help you secure AI workloads, modernize incident response, strengthen identity governance, optimize security operations across the Microsoft Cloud, and more.

New Microsoft Applied Skills: Secure AI Solutions in the Cloud

Microsoft Applied Skills: Secure AI Solutions in the Cloud validates your ability to secure AI workloads across Microsoft Defender for Cloud and Microsoft Foundry. In a live, lab-based assessment, you configure AI workload protections, apply model guardrails, secure Foundry environments, and implement identity and access controls. The assessment focuses on real tasks - like enabling Defender for Cloud AI plans, creating responsible AI safeguards, and securing connected resources - making this credential a practical way for you to prove that you can operationalize AI security in the Microsoft Cloud.

To prepare to earn this Applied Skills credential, complete the new learning path, Protect Microsoft Foundry solutions by using Microsoft Defender for Cloud.

Updated Microsoft Virtual Training Days event

Microsoft Virtual Training Days continue to expand as important upskilling opportunities, and the latest, Predict and Defend Against Cybersecurity Threats, brings an even richer learning experience for security professionals. This free, instructor-led, online training is designed to help you strengthen your expertise across Microsoft Defender and Microsoft Sentinel. In this session, you:
- Learn how to investigate, respond to, and hunt for threats using Microsoft Defender, the unified extended detection and response (XDR) security platform, and Microsoft Sentinel.
- Take a deep dive into how to deploy Microsoft Sentinel alongside Microsoft Defender for Endpoint and Microsoft Defender XDR to create a unified security operations center (SOC) experience.
- Find out about cyberattack mechanisms, alerts, and incident flows, along with how to use Microsoft Security Copilot to automate incident management.
- Explore how to improve your security posture by connecting Microsoft Defender with Microsoft Sentinel for end-to-end threat visibility.
- Engage with Microsoft security experts to get answers to real-world defense challenges.

This updated Microsoft Virtual Training Days event reinforces experiential learning and operational readiness across SOC workflows, making it a powerful entry point for analysts, administrators, and chief information security officers (CISOs) working to elevate their threat defense strategies. Sign up for Microsoft Virtual Training Days.

Updated modules

Mitigate threats using Microsoft Defender XDR

Expanded incident response depth. We've updated the modules in this learning path to give security analysts a richer, more unified operational view across Defender XDR. The refreshed content now emphasizes how the Microsoft Defender portal provides a single incident queue that correlates signals from Defender for Endpoint, Microsoft Defender for Office 365, Microsoft Defender for Identity, and Microsoft Defender for Cloud Apps. In the modules, you now practice:
- Managing and triaging incidents through the unified Defender incident experience.
- Conducting guided and advanced investigations using device, user, and asset context.
- Executing advanced hunting queries to surface lateral movement and suspicious behavior.
These updates strengthen real-world SOC alignment by teaching you how to move seamlessly from alert to incident to investigation and then to remediation inside Defender XDR.

Use Search Jobs in Microsoft Sentinel

Improved long-term log investigation. This module now teaches analysts how to perform deep historical investigations using Search Jobs, a capability purpose-built for large, long-retention datasets. Module updates include:
- How Search Jobs scan up to one year of data across extremely large datasets for events that exceed standard KQL query timeouts.
- How to restore archived logs into interactive mode for high-performance KQL analysis.
- The distinction between Simple Mode and KQL Mode when building long-term searches.
- How search results are written into dedicated _SRCH-suffixed tables for downstream analysis.

These additions help teach you how to perform back-in-time investigations, threat hunts, and compliance lookbacks at enterprise scale.

Conditional Access: Continuous Access Evaluation

Real-time policy enforcement. We've refreshed the Microsoft Entra ID Continuous Access Evaluation (CAE) module in this learning path to reflect how CAE enforces Conditional Access in near real time rather than waiting for tokens to expire. In the updated content, you explore:
- Why traditional OAuth tokens create security lag.
- How CAE establishes a two-way conversation between Microsoft Entra and workloads to instantly react to critical events.
- Real-time enforcement for conditions such as password resets, account disablement, high-risk user flags, and network location changes.
- Immediate session revocation for sensitive workloads, such as Exchange, SharePoint, and Microsoft Teams.

These updates help you understand how CAE closes the gap between identity events and policy enforcement, reducing the window for token replay and unauthorized access.

Explore the Access Review Agent in Microsoft Entra

Expanded AI-driven governance automation. The updated Access Review Agent module introduces the new Security Copilot-powered access governance assistant. This significantly expands the original module by explaining:
- How the agent automatically gathers sign-in activity, usage patterns, and peer-group behaviors.
- How it generates AI-driven approve/deny recommendations with justification summaries.
- How reviewers complete access reviews in Teams via natural language guidance.
- Supported review scenarios, limitations (such as decision count limits), and licensing prerequisites.

This update shifts access reviews from manual checkbox exercises to intelligent, context-aware workflows, helping to reduce reviewer fatigue and improve the accuracy of least-privilege governance.

Automate identity lifecycle using Lifecycle Workflows in Microsoft Entra

More templates and broader governance coverage. We've expanded this module to reflect the growing capabilities of Lifecycle Workflows in Microsoft Entra ID Governance for identity administrators. Updated content now explores:
- Automation across the complete Joiner, Mover, Leaver (JML) lifecycle.
- Enhanced workflow templates for onboarding, offboarding, prehire setup, and role changes.
- A deeper explanation of workflow components, including triggers, scopes, conditions, and tasks.
- How Lifecycle Workflows complement human resources-driven (HR-driven) provisioning by automating tasks such as license assignment, group membership updates, notifications, disabling accounts, and scheduled deletions.
These updates give you clearer, more actionable guidance for building scalable, no-code identity automation across cloud and hybrid environments.

Whether you're pursuing the new Applied Skills credential, securing AI workloads in Microsoft Foundry, or updating your Defender, Microsoft Sentinel, or Microsoft Entra expertise, AI Skills Navigator helps you build practical expertise that you can apply in real-world environments today - while staying current as technology evolves. To stay ahead of evolving cyberthreats, explore the revamped Security hub, where you can find security learning paths, credentials, events, and resources - all in one place - to help you skill up faster.

Adding AI Personality to Browser Games
Introduction

Browser games traditionally follow predictable patterns - fixed text messages, static tutorials, scripted NPC responses. Players see the same "Game Over" message whether they nearly won or failed spectacularly. Tutorial text remains identical regardless of player skill level. The game experience, while fun, lacks the dynamic reactivity of human-moderated gameplay.

What if your Space Invaders game could comment on gameplay in real time? Taunt players when they miss easy shots? Celebrate close victories with personalized messages? Adjust difficulty suggestions based on actual performance metrics?

This article demonstrates exactly that: integrating AI-powered dynamic commentary into a browser game using Spaceinvaders-FoundryLocal, vanilla JavaScript, and Microsoft Foundry Local. You'll learn how to integrate local AI into client-side games, design AI personality systems that enhance rather than distract, implement context-aware commentary generation, and architect optional AI features that don't break core gameplay when unavailable. Whether you're building educational games, interactive training simulations, or simply adding personality to entertainment projects, this approach provides a blueprint for AI-enhanced gaming experiences.

Why Local AI Transforms Browser Gaming

Adding AI to games sounds expensive - cloud API costs scale with player counts, introducing per-gameplay pricing that makes free-to-play models challenging. Privacy concerns emerge when gameplay data leaves user devices. Latency affects real-time experiences - waiting 2 seconds for commentary after an action breaks immersion. Network requirements exclude offline play.

Local AI solves all these challenges simultaneously. Foundry Local runs Small Language Models (SLMs) entirely on player devices - no API costs, no data leaving the machine, no network dependency. Inference happens in milliseconds, enabling truly real-time responses. Games work offline after initial load, perfect for mobile or low-connectivity scenarios.

SLMs excel at personality-driven tasks like game commentary. They don't need perfect factual recall or complex reasoning - they generate entertaining, contextually relevant text based on game state. A 1.5B-parameter model produces engaging taunts and celebration messages indistinguishable from hand-written content, while running easily on mid-range laptops.

Integrating AI as an optional enhancement demonstrates good architecture. Core gameplay must function perfectly without AI - commentary enhances the experience, but failure doesn't break the game. This graceful degradation pattern ensures maximum compatibility while offering AI features to capable devices.

Architecture: Progressive Enhancement with AI

The Spaceinvaders-FoundryLocal implementation uses progressive enhancement - the game fully works without AI, but adds dynamic personality when available.

The base game implements classic Space Invaders mechanics entirely in vanilla JavaScript. Player ship movement, bullet physics, enemy patterns, collision detection, scoring, and power-up systems all operate independently of AI. This ensures universal compatibility across browsers, devices, and network conditions.

The AI layer adds dynamic commentary through a backend Node.js proxy. The proxy runs locally, communicates with Foundry Local, and provides game context to the AI for generating personalized messages. The game polls the proxy periodically, sending current game state (score, accuracy, wave number, power-up usage) and receiving commentary responses.
The architecture flow for AI-enhanced gameplay:

    Player Action (e.g., destroys enemy)
        ↓
    Game Updates State (score += 100, accuracy tracked)
        ↓
    Game Checks AI Status (polling every 5 seconds)
        ↓
    If AI Available: Send Game Context to Backend
        → { event: 'wave_complete', score: 2500, accuracy: 78%, wave: 3 }
        ↓
    Backend builds prompt with context
        ↓
    Foundry Local generates comment
        ↓
    Return commentary to game
        → "Wave 3 conquered! Your 78% accuracy shows improving skills."
        ↓
    Display in game UI (animated text bubble)

This design demonstrates several key patterns:
- Zero-dependency core: Game playable immediately, AI adds value incrementally
- Graceful degradation: If AI unavailable, game shows generic messages
- Asynchronous enhancement: AI runs in background, never blocks gameplay
- Context-aware generation: Commentary reflects actual player performance
- Local-first architecture: Everything runs on the player's machine - no servers, no tracking

Implementing Context-Aware AI Commentary

Effective game commentary requires understanding current gameplay context. The AI needs to know what just happened, how the player is performing, and what makes this moment interesting:

    // llm.js - AI integration module
    export class GameAI {
        constructor() {
            this.baseURL = 'http://localhost:3001'; // Local proxy server
            this.available = false;
            this.checkAvailability();
        }

        async checkAvailability() {
            try {
                // Note: fetch has no "timeout" option; AbortSignal.timeout
                // enforces the 2-second limit instead.
                const response = await fetch(`${this.baseURL}/health`, {
                    method: 'GET',
                    signal: AbortSignal.timeout(2000)
                });
                this.available = response.ok;
                return this.available;
            } catch (error) {
                console.log('AI server not available (optional feature)');
                this.available = false;
                return false;
            }
        }

        async generateComment(gameContext) {
            if (!this.available) {
                return this.getFallbackComment(gameContext.event);
            }
            try {
                const response = await fetch(`${this.baseURL}/api/comment`, {
                    method: 'POST',
                    headers: { 'Content-Type': 'application/json' },
                    body: JSON.stringify(gameContext)
                });
                if (!response.ok) {
                    throw new Error('AI request failed');
                }
                const data = await response.json();
                return data.comment;
            } catch (error) {
                console.error('AI comment generation failed:', error);
                return this.getFallbackComment(gameContext.event);
            }
        }

        getFallbackComment(event) {
            // Static messages when AI unavailable
            const fallbacks = {
                'wave_complete': 'Wave cleared!',
                'player_hit': 'Shields damaged!',
                'game_over': 'Game Over. Try again!',
                'high_score': 'New high score!',
                'power_up': 'Power-up collected!'
            };
            return fallbacks[event] || 'Good job!';
        }
    }

The backend processes game context and generates contextually relevant commentary:

    // server.js - Node.js backend proxy
    import express from 'express';
    import cors from 'cors'; // Express has no built-in CORS; use the cors package
    import { FoundryLocalClient } from 'foundry-local-sdk';

    const app = express();
    const foundry = new FoundryLocalClient({
        endpoint: process.env.FOUNDRY_LOCAL_ENDPOINT || 'http://127.0.0.1:5272'
    });

    app.use(express.json());
    app.use(cors()); // Allow browser game to connect

    app.get('/health', (req, res) => {
        res.json({ status: 'AI available', model: 'phi-3.5-mini' });
    });

    app.post('/api/comment', async (req, res) => {
        const { event, score, accuracy, wave, lives, combo } = req.body;

        // Build context-rich prompt
        const prompt = buildCommentPrompt(event, { score, accuracy, wave, lives, combo });

        try {
            const completion = await foundry.chat.completions.create({
                model: 'phi-3.5-mini',
                messages: [
                    {
                        role: 'system',
                        content: `You are an AI commander providing brief, encouraging
    commentary for a Space Invaders game. Be energetic, supportive, and sometimes humorous.
    Keep responses to 1-2 sentences maximum.
    Reference specific game metrics when relevant.`
                    },
                    { role: 'user', content: prompt }
                ],
                temperature: 0.9, // High temperature for creative variety
                max_tokens: 50
            });

            const comment = completion.choices[0].message.content.trim();
            res.json({
                comment,
                model: 'phi-3.5-mini',
                timestamp: new Date().toISOString()
            });
        } catch (error) {
            console.error('AI generation error:', error);
            res.status(500).json({ error: 'Commentary generation failed' });
        }
    });

    function buildCommentPrompt(event, context) {
        switch (event) {
            case 'wave_complete':
                return `The player just completed wave ${context.wave} with score ${context.score}.
    Their shooting accuracy is ${context.accuracy}%. ${context.lives} lives remaining.
    Generate an encouraging comment about their progress.`;

            case 'player_hit':
                return `The player got hit by an enemy! They now have ${context.lives} lives left.
    Score: ${context.score}. Provide a brief motivational comment to keep them engaged.`;

            case 'game_over':
                if (context.accuracy > 70) {
                    return `Game over at wave ${context.wave}, score ${context.score}.
    The player had ${context.accuracy}% accuracy - pretty good!
    Generate an encouraging comment acknowledging their skill.`;
                } else {
                    return `Game over at wave ${context.wave}, score ${context.score}.
    Accuracy was ${context.accuracy}%. Provide a supportive comment with a tip for improvement.`;
                }

            case 'combo_streak':
                return `Player achieved a ${context.combo}x combo streak! Score: ${context.score}.
    Generate an excited celebration comment.`;

            case 'power_up_used':
                return `Player activated a ${context.power_up_type} power-up.
    Generate a brief tactical comment about using it effectively.`;

            default:
                return `General gameplay comment. Score: ${context.score}, Wave: ${context.wave}.`;
        }
    }

    const PORT = 3001;
    app.listen(PORT, () => {
        console.log(`✓ Game AI server running on http://localhost:${PORT}`);
        console.log(`✓ Foundry Local endpoint: ${process.env.FOUNDRY_LOCAL_ENDPOINT || 'http://127.0.0.1:5272'}`);
    });

This backend demonstrates several best practices:
- Context-sensitive prompting: Different events get different prompt templates with relevant metrics
- Personality consistency: The system message establishes tone and style guidelines
- Brevity constraints: max_tokens: 50 ensures comments don't overwhelm the UI
- Creative variety: High temperature (0.9) produces diverse commentary on repeated events
- Performance-aware feedback: Comments adapt based on accuracy, lives remaining, and combo streaks

Integrating AI into Game Loop Without Performance Impact

Games require 60 FPS to feel smooth - any blocking operation creates stutter. AI integration must be completely asynchronous and non-blocking:

    // game.js - Main game loop
    class SpaceInvadersGame {
        constructor() {
            this.ai = new GameAI();
            this.lastAIUpdate = 0;
            this.aiUpdateInterval = 5000; // Poll AI every 5 seconds
            this.pendingAIRequest = false;
            // ... other game state
        }

        update(deltaTime) {
            // Core game logic (always runs)
            this.updatePlayer(deltaTime);
            this.updateEnemies(deltaTime);
            this.updateBullets(deltaTime);
            this.checkCollisions();
            this.updatePowerUps(deltaTime);

            // AI commentary (optional, async)
            this.updateAI(deltaTime);
        }

        updateAI(deltaTime) {
            this.lastAIUpdate += deltaTime;

            // Only check AI periodically, never block gameplay
            if (this.lastAIUpdate >= this.aiUpdateInterval && !this.pendingAIRequest) {
                this.requestAICommentary();
            }
        }

        async requestAICommentary() {
            // Check if there's an interesting event to comment on
            const event = this.getSignificantEvent();
            if (!event) return;

            this.pendingAIRequest = true;

            // Fire-and-forget async request
            this.ai.generateComment({
                event: event.type,
                score: this.score,
                accuracy: this.calculateAccuracy(),
                wave: this.currentWave,
                lives: this.lives,
                combo: this.comboMultiplier
            })
                .then(comment => {
                    this.displayAIComment(comment);
                    this.lastAIUpdate = 0;
                })
                .catch(error => {
                    console.log('AI comment failed (non-critical):', error);
                })
                .finally(() => {
                    this.pendingAIRequest = false;
                });
        }

        getSignificantEvent() {
            // Determine what's worth commenting on
            if (this.justCompletedWave) {
                this.justCompletedWave = false;
                return { type: 'wave_complete' };
            }
            if (this.justGotHit) {
                this.justGotHit = false;
                return { type: 'player_hit' };
            }
            if (this.comboMultiplier >= 5) {
                return { type: 'combo_streak' };
            }
            return null; // Nothing interesting right now
        }

        displayAIComment(comment) {
            // Show comment in animated text bubble
            const bubble = document.createElement('div');
            bubble.className = 'ai-comment-bubble';
            bubble.textContent = comment;
            document.getElementById('game-container').appendChild(bubble);

            // Animate in
            setTimeout(() => bubble.classList.add('show'), 50);

            // Remove after 4 seconds
            setTimeout(() => {
                bubble.classList.remove('show');
                setTimeout(() => bubble.remove(), 500);
            }, 4000);
        }

        calculateAccuracy() {
            if (this.shotsFired === 0) return 0;
            return Math.round((this.shotsHit / this.shotsFired) * 100);
        }
    }

This integration pattern ensures:
- Zero gameplay impact: AI runs completely asynchronously - the game never waits for AI
- Periodic updates only: Check AI every 5 seconds, not every frame (60 FPS → minimal CPU overhead)
- Event-driven commentary: Only request comments for significant moments, not continuous chatter
- Non-blocking display: Comments appear as animated overlays that don't interrupt gameplay
- Graceful failure: AI errors are logged but never shown to players - the game continues normally

Designing AI Personality Systems

Effective game AI has a consistent personality that enhances rather than distracts. The system message establishes tone, response templates ensure variety, and context awareness makes commentary relevant:

    // Enhanced system message for consistent personality
    const AI_COMMANDER_PERSONALITY = `
    You are AEGIS, an AI defense commander providing tactical commentary
    for a Space Invaders-style game.

    Your personality traits:
    - Enthusiastic but professional military commander tone
    - Celebrate victories with tactical language ("Excellent flanking maneuver!")
    - Acknowledge defeats with constructive feedback ("Regroup and maintain formation!")
    - Reference specific metrics to show you're paying attention
    - Keep responses to 1-2 sentences maximum
    - Use occasional humor but stay in character
    - Be encouraging even when the player struggles

    Examples of your style:
    - "Wave neutralized! Your 85% accuracy shows precision targeting."
    - "Shield integrity compromised! Fall back and reassess the battlefield."
- "Impressive combo multiplier! Sustained fire superiority achieved." - "That power-up spread pattern cleared the sector perfectly." `; // Context-aware response variety const RESPONSE_TEMPLATES = { wave_complete: { high_performance: [ "Your {accuracy}% accuracy led to decisive victory, Commander!", "Wave {wave} eliminated with tactical excellence!", "Strategic brilliance! {accuracy}% hit rate maintained." ], medium_performance: [ "Wave {wave} cleared. Solid tactics, Commander.", "Sector secured. Your {accuracy}% accuracy shows improvement potential.", "Objective achieved. Recommend tightening shot discipline." ], low_performance: [ "Wave {wave} cleared, but {accuracy}% accuracy needs work.", "Victory secured. Focus on accuracy in next engagement.", "Mission accomplished, though your hit rate needs improvement." ] }, player_hit: { lives_critical: [ "Critical damage! Only {lives} lives remain - exercise extreme caution!", "Shields failing! {lives} backup systems active.", "Red alert! Hull integrity at {lives} units." ], lives_okay: [ "Shields damaged. {lives} lives remaining. Stay focused!", "Hit sustained. {lives} backup systems online.", "Damage taken. Maintain defensive posture." ] } }; function selectResponseTemplate(event, context) { const templates = RESPONSE_TEMPLATES[event]; if (!templates) return; // Choose template category based on context let category; if (event === 'wave_complete') { if (context.accuracy >= 75) category = templates.high_performance; else if (context.accuracy >= 50) category = templates.medium_performance; else category = templates.low_performance; } else if (event === 'player_hit') { category = context.lives <= 2 ? templates.lives_critical : templates.lives_okay; } // Randomly select from category for variety const template = category[Math.floor(Math.random() * category.length)]; // Fill in context variables return template .replace('{accuracy}', context.accuracy) .replace('{wave}', context.wave) .replace('{lives}', context.lives); } This personality system creates: Consistent character: AEGIS always sounds like a military commander, never breaks character Context-appropriate responses: Different situations trigger different tones (celebration vs concern) Natural variety: Template randomization prevents repetitive commentary Metric awareness: Specific references to accuracy, lives, waves show AI is "watching" Encouraging feedback: Even in failure scenarios, provides constructive guidance Key Takeaways and Game AI Design Patterns Integrating AI into browser games demonstrates that advanced features don't require cloud services or complex infrastructure. Local AI enables personality-driven enhancements that run entirely on player devices, cost nothing at scale, and work offline. 
Essential principles for game AI integration:
- Progressive enhancement architecture: Core gameplay must work perfectly without AI - commentary enhances but isn't required
- Asynchronous-only integration: Never block the game loop for AI - 60 FPS gameplay is non-negotiable
- Context-aware generation: Commentary reflecting actual game state feels intelligent; generic messages feel robotic
- Personality consistency: A well-defined character voice creates memorable experiences
- Graceful failure handling: AI errors should be invisible to players - fall back to static messages
- Performance-conscious polling: Check AI every few seconds, not every frame
- Event-driven commentary: Only generate responses for significant moments

This pattern extends beyond games - any interactive application benefits from context-aware AI personality: educational software providing personalized encouragement, fitness apps offering adaptive coaching, productivity tools giving motivational feedback.

The complete implementation with game engine, AI integration, backend proxy, and deployment instructions is available at github.com/leestott/Spaceinvaders-FoundryLocal. Clone the repository to experience AI-enhanced gaming - just open index.html and start playing immediately, then optionally enable AI features for dynamic commentary.

Resources and Further Reading
- Space Invaders with AI Repository - Complete game with AI integration
- Quick Start Guide - Play immediately or enable AI features
- Microsoft Foundry Local Documentation - SDK and model reference
- MDN Game Development - Browser game development patterns
- HTML5 Game Devs Forum - Community discussions and techniques

Accelerate your AI innovation with App Advisor
Accelerate your AI innovation with App Advisor

Have a great idea for an AI app or agent but unsure how to take it to market? Microsoft's newly released App Advisor provides a clear, guided path from concept to commercialization. Whether you're shaping an early idea, navigating build decisions, or preparing your solution for the Microsoft commercial marketplace, App Advisor helps streamline each step, reducing friction and increasing confidence.

This walkthrough highlights how App Advisor connects the dots:

- From concept to execution, with structured guidance
- From development to best practices, with informed build recommendations
- From finished product to revenue, through proven Marketplace readiness insights

If you're looking to build and sell AI solutions faster and more effectively, this resource is well worth your time.

Read the full article: Build an AI app or agent faster to sell on Marketplace with App Advisor | Microsoft Community Hub
Mondays at Microsoft | Episode 68

Join Heather Cook as she takes you through the latest in AI news, Microsoft 365 product updates, and community news & events.

🎥 Episode 68 of "Mondays at Microsoft" goes LIVE on Monday, February 9th at 8:00 AM PT!

#MondaysAtMicrosoft #MicrosoftCommunity #MicrosoftEvents #AI #Microsoft365

Links from Today's Episode:

Big Rocks & Product Updates:
- Enhancing Teams video quality with Super Resolution — https://aka.ms/Teams/SuperResolution
- "GitHub Copilot: The agent awakens" by Thomas Dohmke — https://aka.ms/GitHubAwakens
- "Use Microsoft 365 Copilot to drive growth for businesses of all sizes" by Brenna Robinson — https://aka.ms/Copilot/SMBgrowth
- "What's new in Power Apps: January 2025 Feature Update" by Clay Wesener — https://aka.ms/PowerApps/Jan2025updates
- Powerful Devs Conference + Hack Together 2025 — https://aka.ms/PowerfulDevsConference
- "New Security Features for Desktop flows" by Quentin Sele Barancy & Ken Seong Tan — https://aka.ms/PowerPlatform/NewSecurityFeatures

Events:
- Community Days | Exchange Summit 2025 | Feb 17-19 | Würzburg, Germany
- Community Days | Microsoft Fabric Community Conference | March 31 - April 2 | Las Vegas, Nevada
- Community Days | Microsoft 365 Community Conference | April 21 - 23 | Orlando, Florida
- Community Days | TechCon365 and PWRCON Seattle 2025 | June 23-27 | Seattle, Washington

Our Show Links:
📅 Mondays at Microsoft links, including a calendar download: https://aka.ms/MondaysAtMicrosoft
📝 Our Microsoft Tech Community News Desk for Monday's recaps, the Meet the Makers show, and more: https://aka.ms/CommunityNewsDesk
🎙️ Listen to Mondays at Microsoft on Apple Podcasts or Spotify — https://aka.ms/MondaysAtMicrosoft
🌍 Discover the Microsoft Global Community Initiative (MGCI) — https://aka.ms/MGCI
Microsoft Community page on Facebook: https://aka.ms/MSCommunityFB
🔔 Microsoft Community on LinkedIn: https://aka.ms/MSCommunityLIMM
MSFT Adoption on Twitter: https://x.com/MSFTAdoption
Choosing the Right Model in GitHub Copilot: A Practical Guide for Developers

AI-assisted development has grown far beyond simple code suggestions. GitHub Copilot now supports multiple AI models, each optimized for different workflows: from quick edits to deep debugging to multi-step agentic tasks that generate or modify code across your entire repository. As developers, this flexibility is powerful, but only if we know how to choose the right model at the right time.

In this guide, I'll break down:

- Why model selection matters
- The four major categories of development tasks
- A simplified, developer-friendly model comparison table
- Enterprise considerations and practical tips

This is written from the perspective of real-world customer conversations, GitHub Copilot demos, and enterprise adoption journeys.

Why Model Selection Matters

GitHub Copilot isn't tied to a single model. Instead, it offers a range of models, each with different strengths:

- Some are optimized for speed
- Others are optimized for reasoning depth
- Some are built for agentic workflows

Choosing the right model can dramatically improve:

- The quality of the output
- The speed of your workflow
- The accuracy of Copilot's reasoning
- The effectiveness of Agents and Plan Mode
- Your usage efficiency under enterprise quotas

Model selection is now a core part of modern software development, just like choosing the right library, framework, or cloud service.

The Four Task Categories (and Which Model Fits)

To simplify model selection, I group tasks into four categories. Each category aligns naturally with specific types of models; the short sketch after this list condenses the heuristic into code.

1. Everyday Development Tasks
Examples: writing new functions, improving readability, generating tests, creating documentation
Best fit: general-purpose coding models (e.g., GPT-4.1, GPT-5-mini, Claude Sonnet). These models offer the best balance between speed and quality.

2. Fast, Lightweight Edits
Examples: quick explanations, JSON/YAML transformations, small refactors, regex generation, short Q&A tasks
Best fit: lightweight models (e.g., Claude Haiku 4.5). These models give near-instant responses and keep you "in flow."

3. Complex Debugging & Deep Reasoning
Examples: analyzing unfamiliar code, debugging tricky production issues, architecture decisions, multi-step reasoning, performance analysis
Best fit: deep reasoning models (e.g., GPT-5, GPT-5.1, GPT-5.2, Claude Opus). These models handle large context, produce structured reasoning, and give the most reliable insights for complex engineering tasks.

4. Multi-step Agentic Development
Examples: repo-wide refactors, migrating a codebase, scaffolding entire features, implementing multi-file plans in Agent Mode, automated workflows (Plan → Execute → Modify)
Best fit: agent-capable models (e.g., GPT-5.1-Codex-Max, GPT-5.2-Codex). These models are ideal when you need Copilot to execute multi-step tasks across your repository.
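The heuristic is simple enough to express as a lookup. The following is purely illustrative: the category names, task labels, and pickModelCategory helper are assumptions for this sketch, not a real GitHub Copilot API.

```javascript
// Illustrative heuristic only - not an actual GitHub Copilot API.
// Condenses the four task categories above into a simple lookup.
const TASK_CATEGORIES = [
  { category: 'lightweight',     tasks: ['quick explanation', 'small refactor', 'regex'] },
  { category: 'general-purpose', tasks: ['write function', 'generate tests', 'documentation'] },
  { category: 'deep-reasoning',  tasks: ['debug production issue', 'architecture decision'] },
  { category: 'agentic',         tasks: ['repo-wide refactor', 'scaffold feature'] },
];

function pickModelCategory(task) {
  const match = TASK_CATEGORIES.find(({ tasks }) => tasks.includes(task));
  // Default to general-purpose models for anything unrecognized
  return match ? match.category : 'general-purpose';
}

console.log(pickModelCategory('debug production issue')); // "deep-reasoning"
```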
GitHub Copilot Models - Developer-Friendly Comparison

The set of models you can choose from depends on your Copilot subscription, and the available options may evolve over time. Each model also has its own premium request multiplier, which reflects the compute resources it requires. If you're using a paid Copilot plan, the multiplier determines how many premium requests are deducted whenever that model is used: ten chat requests to a 3x model, for example, consume 30 premium requests.

| Model Category | Example Models (Premium Request Multiplier for Paid Plans) | What They're Best At | When to Use Them |
|---|---|---|---|
| Fast Lightweight Models | Claude Haiku 4.5, Gemini 3 Flash (0.33x); Grok Code Fast 1 (0.25x) | Low latency, quick responses | Small edits, Q&A, simple code tasks |
| General-Purpose Coding Models | GPT-4.1, GPT-5-mini (0x); GPT-5-Codex, Claude Sonnet 4.5 (1x) | Reliable day-to-day development | Writing functions, small tests, documentation |
| Deep Reasoning Models | GPT-5.1 Codex Mini (0.33x); GPT-5, GPT-5.1, GPT-5.1 Codex, GPT-5.2, Claude Sonnet 4.0, Gemini 2.5 Pro, Gemini 3 Pro (1x); Claude Opus 4.5 (3x) | Complex reasoning and debugging | Architecture work, deep bug diagnosis |
| Agentic / Multi-step Models | GPT-5.1-Codex-Max, GPT-5.2-Codex (1x) | Planning + execution workflows | Repo-wide changes, feature scaffolding |

Enterprise Considerations

For organizations using Copilot Enterprise or Business:

- Admins can control which models employees can use
- Model selection may be restricted due to security, regulation, or data governance
- You may see fewer available models depending on your organization's Copilot policies

Using "Auto" Model Selection in GitHub Copilot

GitHub Copilot's Auto model selection automatically chooses the best available model for your prompts, reducing the mental load of picking a model and helping you avoid rate-limiting. When enabled, Copilot prioritizes model availability and selects from a rotating set of eligible models (such as GPT-4.1, GPT-5 mini, GPT-5.2-Codex, Claude Haiku 4.5, and Claude Sonnet 4.5) while respecting your subscription level and any administrator-imposed restrictions. Auto also excludes models blocked by policies, models with premium multipliers greater than 1, and models unavailable in your plan. For paid plans, Auto provides an additional benefit: a 10% discount on premium request multipliers when used in Copilot Chat. Overall, Auto offers a balanced, cost-efficient experience by dynamically selecting a performant model without requiring developers to switch models manually.

Read more about Auto model selection here: About Copilot auto model selection - GitHub Docs

Final Thoughts

GitHub Copilot is becoming a core part of developer workflows. Choosing the right model can dramatically improve:

- Your productivity
- The accuracy of Copilot's responses
- Your experience with multi-step agentic tasks
- Your ability to navigate complex codebases

Whether you're building features, debugging complex issues, or orchestrating repo-wide changes, picking the right model helps you get the best out of GitHub Copilot.

References and Further Reading

To explore each model further, visit the GitHub Copilot model comparison documentation or try switching models in Copilot Chat to see how they impact your workflow.

- AI model comparison - GitHub Docs
- Requests in GitHub Copilot - GitHub Docs
- About Copilot auto model selection - GitHub Docs
Demystifying GitHub Copilot Security Controls: easing concerns for organizational adoption

At a recent developer conference, I delivered a session on Legacy Code Rescue using GitHub Copilot App Modernization. Throughout the day, conversations with developers revealed a clear divide: some have fully embraced agentic AI in their daily coding, while others remain cautious. Often, this hesitation isn't due to reluctance but stems from organizational concerns around security and regulatory compliance. Having witnessed similar patterns during past technology shifts, I understand how these barriers can slow adoption. In this blog, I'll demystify the most common security concerns about GitHub Copilot and explain how its built-in features address them, empowering organizations to confidently modernize their development workflows.

GitHub Copilot Model Training

A common question I received at the conference was whether GitHub uses your code as training data for GitHub Copilot. I always direct customers to the GitHub Copilot Trust Center for clarity, but the answer is straightforward: "No. GitHub uses neither Copilot Business nor Enterprise data to train the GitHub model." Notice this restriction also applies to third-party models (e.g., Anthropic, Google).

GitHub Copilot Intellectual Property Indemnification Policy

A frequent concern I hear is that, since GitHub Copilot's underlying models are trained on sources that include public code, it might simply "copy and paste" code from those sources. Let's clarify how this actually works. Does GitHub Copilot "copy/paste"? "The AI models that create Copilot's suggestions may be trained on public code, but do not contain any code. When they generate a suggestion, they are not 'copying and pasting' from any codebase." To provide an additional layer of protection, GitHub Copilot includes a duplicate detection filter. This feature helps prevent suggestions that closely match public code from being surfaced. (Note: duplicate detection currently does not apply to the Copilot coding agent.) More importantly, customers are protected by an Intellectual Property indemnification policy: if you receive an unmodified suggestion from GitHub Copilot and face a copyright claim as a result, Microsoft will defend you in court.

GitHub Copilot Data Retention

Another frequent question I hear concerns GitHub Copilot's data retention policies. For organizations on GitHub Copilot Business and Enterprise plans, retention practices depend on how and where the service is accessed:

Access through the IDE for Chat and Code Completions:
- Prompts and Suggestions: not retained
- User Engagement Data: kept for two years
- Feedback Data: stored for as long as needed for its intended purpose

Other GitHub Copilot access and use:
- Prompts and Suggestions: retained for 28 days
- User Engagement Data: kept for two years
- Feedback Data: stored for as long as needed for its intended purpose

For the Copilot coding agent, session logs are retained for the life of the account in order to provide the service.

Excluding Content from GitHub Copilot

To prevent GitHub Copilot from indexing sensitive files, you can configure content exclusions at the repository or organization level. In VS Code, use the .copilotignore file to exclude files client-side. Note that files listed in .gitignore are not indexed by default but may still be referenced if they are open in the editor or explicitly referenced, unless they're excluded through .copilotignore or content exclusions.
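As a quick illustration, a client-side ignore file can shield common sensitive paths. This is a minimal sketch assuming gitignore-style patterns; the paths are hypothetical, and you should check the documentation for the exact syntax your client supports.

```
# Hypothetical .copilotignore - keep secrets and credentials out of Copilot's context
.env
secrets/
**/*.pem
config/production.yml
```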
The Life Cycle of a GitHub Copilot Code Suggestion

Here are the key protections at each stage of the life cycle of a GitHub Copilot code suggestion:

- In the IDE: content exclusions prevent files, folders, or patterns from being included.
- GitHub proxy (pre-model safety): prompts go through a GitHub proxy hosted in Microsoft Azure for pre-inference checks, screening for toxic or inappropriate language, relevance, and hacking attempts or jailbreak-style prompts before reaching the model.
- Model response: with the public code filter enabled, some suggestions are suppressed. The vulnerability protection feature blocks insecure coding patterns like hardcoded credentials or SQL injection in real time.

Disable Access to GitHub Copilot Free

Due to the varying policies associated with GitHub Copilot Free, it is crucial for organizations to ensure it is disabled both in the IDE and on GitHub.com. Since not all IDEs currently offer a built-in option to disable Copilot Free, the most reliable way to prevent both accidental and intentional access is to implement firewall rule changes, as outlined in the official documentation.

Agent Mode Allow List

Accidental file-system deletion by agentic AI assistants can happen. With GitHub Copilot agent mode, the "Terminal auto approve" setting in VS Code can be used to prevent this. This setting can be managed centrally using a VS Code policy.

MCP Registry

Organizations often want to restrict access to allow only trusted MCP servers. GitHub now offers an MCP registry feature for this purpose. The feature isn't available in all IDEs and clients yet, but support is being developed.

Compliance Certifications

The GitHub Copilot Trust Center page lists GitHub Copilot's broad compliance credentials, surpassing many competitors in financial, security, privacy, cloud, and industry coverage:

- SOC 1 Type 2: assurance over internal controls for financial reporting
- SOC 2 Type 2: in-depth report covering Security, Availability, Processing Integrity, Confidentiality, and Privacy over time
- SOC 3: general-use version of SOC 2 with broad executive-level assurance
- ISO/IEC 27001:2013: certification for a formal Information Security Management System (ISMS), based on risk management controls
- CSA STAR Level 2: includes a third-party attestation combining ISO 27001 or SOC 2 with additional Cloud Controls Matrix (CCM) requirements
- TISAX: Trusted Information Security Assessment Exchange, covering automotive-sector security standards

In summary, while the adoption of AI tools like GitHub Copilot in software development can raise important questions around security, privacy, and compliance, the safeguards in place address these concerns. By understanding the configurable controls and robust compliance certifications offered, organizations and developers alike can feel more confident in embracing GitHub Copilot to accelerate innovation while maintaining trust and peace of mind.
Microsoft 365 & Power Platform Community call

💡 The Microsoft 365 & Power Platform Development bi-weekly community call focuses on different use cases and features within Microsoft 365 and the Power Platform, across Microsoft 365 Copilot, Copilot Studio, SharePoint, Power Apps, and more.

👏 Looking to catch up on the latest news and updates, including cool community demos? This call is for you!

📅 On the 12th of February we'll have the following agenda:

- Copilot prompt of the week
- CommunityDays.org update
- Microsoft 365 Maturity Model
- Latest on PnP Framework and Core SDK extension
- Latest on PnP PowerShell
- Latest on script samples
- Latest Copilot pro-dev samples
- Latest on Power Platform samples
- Picture time with Together Mode!
- Mohammed Amer (Atea Global Services) – Reverse Engineering: Teaching GitHub Copilot to Configure Vitest Unit Testing for your SPFx apps
- Peter Paul Kirschner (ACP Cubido) – Creating a React Office Breakout game with SPFx - Vision, Motion, and a Little Chaos

📅 Download the recurring invite from https://aka.ms/community/m365-powerplat-dev-call-invite
📞 & 📺 Join the Microsoft Teams meeting live at https://aka.ms/community/m365-powerplat-dev-call-join
👋 See you in the call!

💡 Building something cool for Microsoft 365 or Power Platform (Copilot, SharePoint, Power Apps, etc.)? We are always looking for presenters - volunteer for a community call demo at https://aka.ms/community/request/demo

📖 Resources:
- Previous community call recordings and demos from the Microsoft Community Learning YouTube channel at https://aka.ms/community/youtube
- Microsoft 365 & Power Platform samples from Microsoft and community - https://aka.ms/community/samples
- Microsoft 365 & Power Platform community details - https://aka.ms/community/home

🧡 Sharing is caring!