Integrating Microsoft Foundry with OpenClaw: Step-by-Step Model Configuration
Step 1: Deploying Models on Microsoft Foundry

Let's kick things off in the Azure portal. To get our OpenClaw agent thinking like a genius, we need to deploy our models in Microsoft Foundry. For this guide, we will focus on deploying gpt-5.2-codex on Microsoft Foundry for use with OpenClaw. Navigate to your AI Hub, head over to the model catalog, choose the model you wish to use with OpenClaw, and hit deploy.

Once your deployment is successful, head to the endpoints section. Important: grab your Endpoint URL and your API keys right now and save them in a secure note. We will need these exact values to connect OpenClaw in a few minutes.

Step 2: Installing and Initializing OpenClaw

Next up, we need to get OpenClaw running on your machine. Open your terminal and run the official installation script:

```bash
curl -fsSL https://openclaw.ai/install.sh | bash
```

The wizard will walk you through a few prompts. Here is exactly how to answer them to link up with our Azure setup:

- First page (Model Selection): choose "Skip for now".
- Second page (Provider): select azure-openai-responses.
- Model Selection: select gpt-5.2-codex. For now, only the models listed in this guide (hosted on Microsoft Foundry) are available for use with OpenClaw.

Follow the rest of the standard prompts to finish the initial setup.

Step 3: Editing the OpenClaw Configuration File

Now for the fun part. We need to manually configure OpenClaw to talk to Microsoft Foundry. Open your configuration file located at ~/.openclaw/openclaw.json in your favorite text editor.
Replace the contents of the models and agents sections with the following code block:

```json
{
  "models": {
    "providers": {
      "azure-openai-responses": {
        "baseUrl": "https://<YOUR_RESOURCE_NAME>.openai.azure.com/openai/v1",
        "apiKey": "<YOUR_AZURE_OPENAI_API_KEY>",
        "api": "openai-responses",
        "authHeader": false,
        "headers": { "api-key": "<YOUR_AZURE_OPENAI_API_KEY>" },
        "models": [
          {
            "id": "gpt-5.2-codex",
            "name": "GPT-5.2-Codex (Azure)",
            "reasoning": true,
            "input": ["text", "image"],
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
            "contextWindow": 400000,
            "maxTokens": 16384,
            "compat": { "supportsStore": false }
          },
          {
            "id": "gpt-5.2",
            "name": "GPT-5.2 (Azure)",
            "reasoning": false,
            "input": ["text", "image"],
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
            "contextWindow": 272000,
            "maxTokens": 16384,
            "compat": { "supportsStore": false }
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": { "primary": "azure-openai-responses/gpt-5.2-codex" },
      "models": { "azure-openai-responses/gpt-5.2-codex": {} },
      "workspace": "/home/<USERNAME>/.openclaw/workspace",
      "compaction": { "mode": "safeguard" },
      "maxConcurrent": 4,
      "subagents": { "maxConcurrent": 8 }
    }
  }
}
```

You will notice a few placeholders in that JSON. Here is exactly what you need to swap out:

| Placeholder | What It Is | Where to Find It |
|---|---|---|
| `<YOUR_RESOURCE_NAME>` | The unique name of your Azure OpenAI resource. | In the Azure Portal, under the Azure OpenAI resource overview. |
| `<YOUR_AZURE_OPENAI_API_KEY>` | The secret key required to authenticate your requests. | In Microsoft Foundry under your project endpoints, or the Azure Portal keys section. |
| `<USERNAME>` | Your local computer's user profile name. | Open your terminal and type `whoami` to find this. |

Step 4: Restart the Gateway

After saving the configuration file, you must restart the OpenClaw gateway for the new Foundry settings to take effect.
Run this simple command:

```bash
openclaw gateway restart
```

Configuration Notes & Deep Dive

If you are curious about why we configured the JSON that way, here is a quick breakdown of the technical details.

Authentication Differences

Azure OpenAI uses the api-key HTTP header for authentication. This is entirely different from the standard OpenAI Authorization: Bearer header. Our configuration file addresses this in two ways:

- Setting "authHeader": false completely disables the default Bearer header.
- Adding "headers": { "api-key": "<key>" } forces OpenClaw to send the API key via Azure's native header format.

Important note: your API key must appear in both the apiKey field and the headers.api-key field within the JSON for this to work correctly.

The Base URL

Azure OpenAI's v1-compatible endpoint follows this specific format: https://<your_resource_name>.openai.azure.com/openai/v1. The beautiful thing about this v1 endpoint is that it is largely compatible with the standard OpenAI API and does not require you to manually pass an api-version query parameter.

Model Compatibility Settings

- "compat": { "supportsStore": false } disables the store parameter, since Azure OpenAI does not currently support it.
- "reasoning": true enables the thinking mode for GPT-5.2-Codex. This supports low, medium, high, and xhigh levels.
- "reasoning": false is set for GPT-5.2 because it is a standard, non-reasoning model.

Model Specifications & Cost Tracking

If you want OpenClaw to accurately track your token usage costs, you can update the cost fields from 0 to the current Azure pricing.
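To see how those per-token cost fields translate into a dollar figure, here is a small illustrative sketch. It is not part of OpenClaw; the function and price table are our own, using the per-1M-token Azure prices quoted in this article for gpt-5.2-codex and gpt-5.2.

```python
# Illustrative sketch (not OpenClaw code): estimate one request's cost from
# token counts using per-1M-token prices. Figures match the article's tables;
# adjust them to your current Azure pricing.

PRICES_PER_MILLION = {
    "gpt-5.2-codex": {"input": 1.75, "output": 14.00, "cacheRead": 0.175},
    "gpt-5.2": {"input": 2.00, "output": 8.00, "cacheRead": 0.50},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int,
                  cached_input_tokens: int = 0) -> float:
    """Return the estimated USD cost for a single request."""
    p = PRICES_PER_MILLION[model]
    billable_input = input_tokens - cached_input_tokens  # cached tokens bill at the lower rate
    cost = (billable_input * p["input"]
            + cached_input_tokens * p["cacheRead"]
            + output_tokens * p["output"]) / 1_000_000
    return round(cost, 6)

# Example: 10k input tokens (2k of them cached) and 1k output tokens
print(estimate_cost("gpt-5.2-codex", 10_000, 1_000, cached_input_tokens=2_000))  # → 0.02835
```

Keeping the same numbers in your openclaw.json cost fields lets OpenClaw report equivalent figures automatically.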
Here are the specs and costs for the models we just deployed:

Model Specifications

| Model | Context Window | Max Output Tokens | Image Input | Reasoning |
|---|---|---|---|---|
| gpt-5.2-codex | 400,000 tokens | 16,384 tokens | Yes | Yes |
| gpt-5.2 | 272,000 tokens | 16,384 tokens | Yes | No |

Current Cost (Adjust in JSON)

| Model | Input (per 1M tokens) | Output (per 1M tokens) | Cached Input (per 1M tokens) |
|---|---|---|---|
| gpt-5.2-codex | $1.75 | $14.00 | $0.175 |
| gpt-5.2 | $2.00 | $8.00 | $0.50 |

Conclusion

And there you have it! You have successfully bridged the gap between the enterprise-grade infrastructure of Microsoft Foundry and the local autonomy of OpenClaw. By following these steps, you are not just running a chatbot; you are running a sophisticated agent capable of reasoning, coding, and executing tasks with the full power of GPT-5.2-Codex behind it.

The combination of Azure's reliability and OpenClaw's flexibility opens up a world of possibilities. Whether you are building an automated DevOps assistant, a research agent, or just exploring the bleeding edge of AI, you now have a robust foundation to build upon.

Now it is time to let your agent loose on some real tasks. Go forth, experiment with different system prompts, and see what you can build. If you run into any interesting edge cases or come up with a unique configuration, let me know in the comments below. Happy coding!
From Zero to 16 Games in 2 Hours: Teaching Prompt Engineering to Students with GitHub Copilot CLI

Introduction

What happens when you give a room full of 14-year-olds access to AI-powered development tools and challenge them to build games? You might expect chaos, confusion, or at best, a few half-working prototypes. Instead, we witnessed something remarkable: 16 fully functional HTML5 games created in under two hours, all from students with varying programming experience.

This wasn't magic; it was the power of GitHub Copilot CLI combined with effective prompt engineering. By teaching students to communicate clearly with AI, we transformed a traditional coding workshop into a rapid prototyping session that exceeded everyone's expectations. The secret weapon? A technique called "one-shot prompting" that enables anyone to generate complete, working applications from a single, well-crafted prompt.

In this article, we'll explore how we structured this workshop using CopilotCLI-OneShotPromptGameDev, a methodology designed to teach prompt engineering fundamentals while producing tangible, exciting results. Whether you're an educator planning STEM workshops, a developer exploring AI-assisted coding, or simply curious about how young people can leverage AI tools effectively, this guide provides a practical blueprint you can replicate.

What is GitHub Copilot CLI?

GitHub Copilot CLI extends the familiar Copilot experience beyond your code editor into the command line. While Copilot in VS Code suggests code completions as you type, Copilot CLI allows you to have conversational interactions with AI directly in your terminal. You describe what you want to accomplish in natural language, and the AI responds with shell commands, explanations, or in our case, complete code files.

This terminal-based approach offers several advantages for learning and rapid prototyping. Students don't need to configure complex IDE settings or navigate unfamiliar interfaces.
They simply type their request, review the AI's output, and iterate. The command line provides a transparent view of exactly what's happening: no hidden abstractions or magical "autocomplete" that obscures the learning process.

For our workshop, Copilot CLI served as a bridge between students' creative ideas and working code. They could describe a game concept in plain English, watch the AI generate HTML, CSS, and JavaScript, then immediately test the result in a browser. This rapid feedback loop kept engagement high and made the connection between language and code tangible.

Installing GitHub Copilot CLI

Setting up Copilot CLI requires a few straightforward steps. Before the workshop, we ensured all machines were pre-configured, but students also learned the installation process as part of understanding how developer tools work.

First, you'll need Node.js installed on your system. Copilot CLI runs as a Node package, so this is a prerequisite:

```bash
# Check if Node.js is installed
node --version

# If not installed, download from https://nodejs.org/
# Or use a package manager:

# Windows (winget)
winget install OpenJS.NodeJS.LTS

# macOS (Homebrew)
brew install node

# Linux (apt)
sudo apt install nodejs npm
```

These commands verify your Node.js installation or guide you through installing it using your operating system's preferred package manager.

Next, install the GitHub CLI, which provides the foundation for Copilot CLI:

```bash
# Windows
winget install GitHub.cli

# macOS
brew install gh

# Linux
sudo apt install gh
```

This installs the GitHub command-line interface, which handles authentication and provides the framework for Copilot integration.

With GitHub CLI installed, authenticate with your GitHub account:

```bash
gh auth login
```

This command initiates an interactive authentication flow that connects your terminal to your GitHub account, enabling access to Copilot features.
Finally, install the Copilot CLI extension:

```bash
gh extension install github/gh-copilot
```

This adds Copilot capabilities to your GitHub CLI installation, enabling the conversational AI features we'll use for game development.

Verify the installation by running:

```bash
gh copilot --help
```

If you see the help output with available commands, you're ready to start prompting. The entire setup takes about 5-10 minutes on a fresh machine, making it practical for classroom environments.

Understanding One-Shot Prompting

Traditional programming education follows an incremental approach: learn syntax, understand concepts, build small programs, gradually tackle larger projects. This method is thorough but slow. One-shot prompting inverts this model: you start with the complete vision and let AI handle the implementation details.

A one-shot prompt provides the AI with all the context it needs to generate a complete, working solution in a single response. Instead of iteratively refining code through multiple exchanges, you craft one comprehensive prompt that specifies requirements, constraints, styling preferences, and technical specifications. The AI then produces complete, functional code.

This approach teaches a crucial skill: clear communication of technical requirements. Students must think through their entire game concept before typing. What does the game look like? How does the player interact with it? What happens when they win or lose? By forcing this upfront thinking, one-shot prompting develops the same analytical skills that professional developers use when writing specifications or planning architectures.

The technique also demonstrates a powerful principle: with sufficient context, AI can handle implementation complexity while humans focus on creativity and design. Students learned they could create sophisticated games without memorizing JavaScript syntax; they just needed to describe their vision clearly enough for the AI to understand.
Crafting Effective Prompts for Game Development

The difference between a vague prompt and an effective one-shot prompt is the difference between frustration and success. We taught students a structured approach to prompt construction that consistently produced working games.

Start with the game type and core mechanic. Don't just say "make a game"; specify what kind:

```text
Create a complete HTML5 game where the player controls a spaceship that must dodge falling asteroids.
```

This opening establishes the fundamental gameplay loop: control a spaceship, avoid obstacles. The AI now has a clear mental model to work from.

Add visual and interaction details. Games are visual experiences, so specify how things should look and respond:

```text
Create a complete HTML5 game where the player controls a spaceship that must dodge falling asteroids. The spaceship should be a blue triangle at the bottom of the screen, controlled by left and right arrow keys. Asteroids are brown circles that fall from the top at random positions and increasing speeds.
```

These additions provide concrete visual targets and define the input mechanism. The AI can now generate specific CSS colors and event handlers.

Define win/lose conditions and scoring:

```text
Create a complete HTML5 game where the player controls a spaceship that must dodge falling asteroids. The spaceship should be a blue triangle at the bottom of the screen, controlled by left and right arrow keys. Asteroids are brown circles that fall from the top at random positions and increasing speeds. Display a score that increases every second the player survives. The game ends when an asteroid hits the spaceship, showing a "Game Over" screen with the final score and a "Play Again" button.
```

This complete prompt now specifies the entire game loop: gameplay, scoring, losing, and restarting. The AI has everything needed to generate a fully playable game.
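The layered construction above can be captured in a tiny helper. This is an illustrative sketch only, not part of Copilot CLI or the workshop materials; the function name `build_game_prompt` and its fields are our own.

```python
# Illustrative sketch: assemble a one-shot game prompt from the workshop's
# component formula. This helper is our own invention, not Copilot tooling.

def build_game_prompt(game_type: str, visuals: str, controls: str,
                      rules: str, win_lose: str, score: str) -> str:
    """Join the formula's components into one complete one-shot prompt."""
    parts = [game_type, visuals, controls, rules, win_lose, score]
    # Normalize each non-empty component to end with exactly one period.
    return " ".join(p.strip().rstrip(".") + "." for p in parts if p.strip())

prompt = build_game_prompt(
    game_type="Create a complete HTML5 game where the player controls a spaceship that must dodge falling asteroids",
    visuals="The spaceship is a blue triangle at the bottom of the screen; asteroids are brown circles",
    controls="Move the spaceship with the left and right arrow keys",
    rules="Asteroids fall from the top at random positions and increasing speeds",
    win_lose='The game ends when an asteroid hits the spaceship, showing a "Game Over" screen with a "Play Again" button',
    score="Display a score that increases every second the player survives",
)
print(prompt)
```

Filling in each argument forces exactly the upfront thinking the workshop teaches: a blank field is a missing specification the AI will have to guess at.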
The formula students learned: Game Type + Visual Description + Controls + Rules + Win/Lose + Score = Complete Game Prompt.

Running the Workshop: Structure and Approach

Our two-hour workshop followed a carefully designed structure that balanced instruction with hands-on creation. We partnered with University College London and gave students access to GitHub Education, which provides resources specifically designed for classroom settings, including student accounts with Copilot access, plus tools like VS Code, Azure for Students and for Schools, and VS Code for Education.

The first 20 minutes covered fundamentals: what is AI, how does Copilot work, and why does prompt quality matter? We demonstrated this with a live example, showing how "make a game" produces confused output while a detailed prompt generates playable code. This contrast immediately captured students' attention: they could see the direct relationship between their words and the AI's output.

The next 15 minutes focused on the prompt formula. We broke down several example prompts, highlighting each component: game type, visuals, controls, rules, scoring. Students practiced identifying these elements in prompts before writing their own. This analysis phase prepared them to construct effective prompts independently.

The remaining 85 minutes were dedicated to creation. Students worked individually or in pairs, brainstorming game concepts, writing prompts, generating code, testing in browsers, and iterating. Instructors circulated to help debug prompts (not code, an important distinction) and encourage experimentation.

We deliberately avoided teaching JavaScript syntax. When students encountered bugs, we guided them to refine their prompts rather than manually fix code. This maintained focus on the core skill: communicating with AI effectively. Surprisingly, this approach resulted in fewer bugs overall because students learned to be more precise in their initial descriptions.
Student Projects: The Games They Created

The diversity of games produced in 85 minutes of building time amazed everyone present. Students didn't just follow a template; they invented entirely new concepts and successfully communicated them to Copilot CLI.

One student created a "Fruit Ninja" clone where players clicked falling fruit to slice it before it hit the ground. Another built a typing speed game that challenged players to correctly type increasingly difficult words against a countdown timer. A pair of collaborators produced a two-player tank battle where each player controlled their tank with different keyboard keys.

Several students explored educational games: a math challenge where players solve equations to destroy incoming meteors, a geography quiz with animated maps, and a vocabulary builder where correct definitions unlock new levels. These projects demonstrated that one-shot prompting isn't limited to entertainment; students naturally gravitated toward useful applications.

The most complex project was a procedurally generated maze game with fog-of-war mechanics. The student spent extra time on their prompt, specifying exactly how visibility should work around the player character. Their detailed approach paid off with a surprisingly sophisticated result that would typically require hours of manual coding.

By the session's end, we had 16 complete, playable HTML5 games. Every student who participated produced something they could share with friends and family, a tangible achievement that transformed an abstract "coding workshop" into a genuine creative accomplishment.

Key Benefits of Copilot CLI for Rapid Prototyping

Our workshop revealed several advantages that make Copilot CLI particularly valuable for rapid prototyping scenarios, whether in educational settings or professional development.

Speed of iteration fundamentally changes what's possible. Traditional game development requires hours to produce even simple prototypes.
With Copilot CLI, students went from concept to playable game in minutes. This compressed timeline enables experimentation: if your first idea doesn't work, try another. This psychological freedom to fail fast and try again proved more valuable than any technical instruction.

Accessibility removes barriers to entry. Students with no prior coding experience produced results comparable to those who had taken programming classes. The playing field leveled because success depended on creativity and communication rather than memorized syntax. This democratization of development opens doors for students who might otherwise feel excluded from technical fields.

Focus on design over implementation teaches transferable skills. Whether students eventually become programmers, designers, product managers, or pursue entirely different careers, the ability to clearly specify requirements and think through complete systems applies universally. They learned to think like system designers, not just coders.

The feedback loop keeps engagement high. Seeing your words transform into working software within seconds creates an addictive cycle of creation and testing. Students who typically struggle with attention during lectures remained focused throughout the building session. The immediate gratification of seeing their games work motivated continuous refinement.

Debugging through prompts teaches root cause analysis. When games didn't work as expected, students had to analyze what they'd asked for versus what they received. This comparison exercise developed critical thinking about specifications, a skill that serves developers throughout their careers.

Tips for Educators: Running Your Own Workshop

If you're planning to replicate this workshop, several lessons from our experience will help ensure success.

Pre-configure machines whenever possible. While installation is straightforward, classroom time is precious.
Having Copilot CLI ready on all devices lets you dive into content immediately. If pre-configuration isn't possible, allocate the first 15-20 minutes specifically for setup and troubleshoot as a group.

Prepare example prompts across difficulty levels. Some students will grasp one-shot prompting immediately; others will need more scaffolding. Having templates ranging from simple ("Create Pong") to complex (the spaceship example above) lets you meet students where they are.

Emphasize that "prompt debugging" is the goal. When students ask for help fixing broken code, redirect them to examine their prompt. What did they ask for? What did they get? Where's the gap? This redirection reinforces the workshop's core learning objective and builds self-sufficiency.

Celebrate and share widely. Build in time at the end for students to demonstrate their games. This showcase moment validates their work and often inspires classmates to try new approaches in future sessions. Consider creating a shared folder or simple website where all games can be accessed after the workshop.

Access GitHub Education resources at education.github.com before your workshop. The GitHub Education program provides free access to developer tools for students and educators, including Copilot. The resources there include curriculum materials, teaching guides, and community support that can enhance your workshop.

Beyond Games: Where This Leads

The techniques students learned extend far beyond game development. One-shot prompting with Copilot CLI works for any development task: creating web pages, building utilities, generating data processing scripts, or prototyping application interfaces. The fundamental skill, communicating requirements clearly to AI, applies wherever AI-assisted development tools are used.

Several students have continued exploring after the workshop. Some discovered they enjoy the creative aspects of game design and are learning traditional programming to gain more control.
Others found that prompt engineering itself interests them; they're exploring how different phrasings affect AI outputs across various domains.

For professional developers, the workshop's lessons apply directly to working with Copilot, ChatGPT, and other AI coding assistants. The ability to craft precise, complete prompts determines whether these tools save time or create confusion. Investing in prompt engineering skills yields returns across every AI-assisted workflow.

Key Takeaways

- Clear prompts produce working code: the one-shot prompting formula (Game Type + Visuals + Controls + Rules + Win/Lose + Score) reliably generates playable games from single prompts
- Copilot CLI democratizes development: students with no coding experience created functional applications by focusing on communication rather than syntax
- Rapid iteration enables experimentation: minutes-per-prototype timelines encourage creative risk-taking and learning from failures
- Prompt debugging builds analytical skills: comparing intended versus actual results teaches specification writing and root cause analysis
- Sixteen games in two hours is achievable: with proper structure and preparation, young students can produce impressive results using AI-assisted development

Conclusion and Next Steps

Our workshop demonstrated that AI-assisted development tools like GitHub Copilot CLI aren't just productivity boosters for experienced programmers; they're powerful educational instruments that make software creation accessible to beginners. By focusing on prompt engineering rather than traditional syntax instruction, we enabled 14-year-old students to produce complete, functional games in a fraction of the time traditional methods would require.

The sixteen games created during those two hours represent more than just workshop outputs. They represent a shift in how we might teach technical creativity: start with vision, communicate clearly, iterate quickly.
Whether students pursue programming careers or not, they've gained experience in thinking systematically about requirements and translating ideas into specifications that produce real results.

To explore this approach yourself, visit the CopilotCLI-OneShotPromptGameDev repository for prompt templates, workshop materials, and example games. For educational resources and student access to GitHub tools including Copilot, explore GitHub Education. And most importantly, start experimenting. Write a prompt, generate some code, and see what you can create in the next few minutes.

Resources

- CopilotCLI-OneShotPromptGameDev Repository - Workshop materials, prompt templates, and example games
- GitHub Education - Free developer tools and resources for students and educators
- GitHub Copilot CLI Documentation - Official installation and usage guide
- GitHub CLI - Foundation tool required for Copilot CLI
- GitHub Copilot - Overview of Copilot features and pricing

Choosing the Right Intelligence Layer for Your Application
Introduction

One of the most common questions developers ask when planning AI-powered applications is: "Should I use the GitHub Copilot SDK or the Microsoft Agent Framework?" It's a natural question: both technologies let you add an intelligence layer to your apps, both come from Microsoft's ecosystem, and both deal with AI agents. But they solve fundamentally different problems, and understanding where each excels will save you weeks of architectural missteps.

The short answer is this: the Copilot SDK puts Copilot inside your app, while the Agent Framework lets you build your app out of agents. They're complementary, not competing. In fact, the most interesting applications use both: the Agent Framework as the system architecture and the Copilot SDK as a powerful execution engine within it.

This article breaks down each technology's purpose, architecture, and ideal use cases. We'll walk through concrete scenarios, examine a real-world project that combines both, and give you a decision framework for your own applications. Whether you're building developer tools, enterprise workflows, or data analysis pipelines, you'll leave with a clear understanding of which tool belongs where in your stack.

The Core Distinction: Embedding Intelligence vs Building With Intelligence

Before comparing features, it helps to understand the fundamental design philosophy behind each technology. They approach the concept of "adding AI to your application" from opposite directions.

The GitHub Copilot SDK exposes the same agentic runtime that powers Copilot CLI as a programmable library. When you use it, you're embedding a production-tested agent, complete with planning, tool invocation, file editing, and command execution, directly into your application. You don't build the orchestration logic yourself. Instead, you delegate tasks to Copilot's agent loop and receive results. Think of it as hiring a highly capable contractor: you describe the job, and the contractor figures out the steps.
The Microsoft Agent Framework is a framework for building, orchestrating, and hosting your own agents. You explicitly model agents, workflows, state, memory, hand-offs, and human-in-the-loop interactions. You control the orchestration, policies, deployment, and observability. Think of it as designing the company that employs those contractors: you define the roles, processes, escalation paths, and quality controls.

This distinction has profound implications for what you build and how you build it.

GitHub Copilot SDK: When Your App Wants Copilot-Style Intelligence

The GitHub Copilot SDK is the right choice when you want to embed agentic behavior into an existing application without building your own planning or orchestration layer. It's optimized for developer workflows and task automation scenarios where you need an AI agent to do things (edit files, run commands, generate code, interact with tools) reliably and quickly.

What You Get Out of the Box

The SDK communicates with the Copilot CLI server via JSON-RPC, managing the CLI process lifecycle automatically. This means your application inherits capabilities that have been battle-tested across millions of Copilot CLI users:

- Planning and execution: the agent analyzes tasks, breaks them into steps, and executes them autonomously
- Built-in tool support: file system operations, Git operations, web requests, and shell command execution work out of the box
- MCP (Model Context Protocol) integration: connect to any MCP server to extend the agent's capabilities with custom data sources and tools
- Multi-language support: available as SDKs for Python, TypeScript/Node.js, Go, and .NET
- Custom tool definitions: define your own tools and constrain which tools the agent can access
- BYOK (Bring Your Own Key): use your own API keys from OpenAI, Azure AI Foundry, or Anthropic instead of GitHub authentication

Architecture

The SDK's architecture is deliberately simple.
Your application communicates with the Copilot CLI running in server mode: Your Application ↓ SDK Client ↓ JSON-RPC Copilot CLI (server mode) The SDK manages the CLI process lifecycle automatically. You can also connect to an external CLI server if you need more control over the deployment. This simplicity is intentional, it keeps the integration surface small so you can focus on your application logic rather than agent infrastructure. Ideal Use Cases for the Copilot SDK The Copilot SDK shines in scenarios where you need a competent agent to execute tasks on behalf of users. These include: AI-powered developer tools: IDEs, CLIs, internal developer portals, and code review tools that need to understand, generate, or modify code "Do the task for me" agents: Applications where users describe what they want—edit these files, run this analysis, generate a pull request and the agent handles execution Rapid prototyping with agentic behavior: When you need to ship an intelligent feature quickly without building a custom planning or orchestration system Internal tools that interact with codebases: Build tools that explore repositories, generate documentation, run migrations, or automate repetitive development tasks A practical example: imagine building an internal CLI that lets engineers say "set up a new microservice with our standard boilerplate, CI pipeline, and monitoring configuration." The Copilot SDK agent would plan the file creation, scaffold the code, configure the pipeline YAML, and even run initial tests, all without you writing orchestration logic. Microsoft Agent Framework: When Your App Is the Intelligence System The Microsoft Agent Framework is the right choice when you need to build a system of agents that collaborate, maintain state, follow business processes, and operate with enterprise-grade governance. It's designed for long-running, multi-agent workflows where you need fine-grained control over every aspect of orchestration. 
What You Get Out of the Box

The Agent Framework provides a comprehensive foundation for building sophisticated agent systems in both Python and .NET:

- Graph-based workflows: connect agents and deterministic functions using data flows with streaming, checkpointing, human-in-the-loop, and time-travel capabilities
- Multi-agent orchestration: define how agents collaborate, hand off tasks, escalate decisions, and share state
- Durability and checkpoints: workflows can pause, resume, and recover from failures, essential for business-critical processes
- Human-in-the-loop: built-in support for approval gates, review steps, and human override points
- Observability: OpenTelemetry integration for distributed tracing, monitoring, and debugging across agent boundaries
- Multiple agent providers: use Azure OpenAI, OpenAI, and other LLM providers as the intelligence behind your agents
- DevUI: an interactive developer UI for testing, debugging, and visualizing workflow execution

Architecture

The Agent Framework gives you explicit control over the agent topology. You define agents, connect them in workflows, and manage the flow of data between them:

```text
┌─────────────┐     ┌──────────────┐     ┌──────────────┐
│   Agent A   │────▶│   Agent B    │────▶│   Agent C    │
│  (Planner)  │     │  (Executor)  │     │  (Reviewer)  │
└─────────────┘     └──────────────┘     └──────────────┘
    Define              Execute              Validate
   strategy              tasks                output
```

Each agent has its own instructions, tools, memory, and state. The framework manages communication between agents, handles failures, and provides visibility into what's happening at every step. This explicitness is what makes it suitable for enterprise applications where auditability and control are non-negotiable.

Ideal Use Cases for the Agent Framework

The Agent Framework excels in scenarios where you need a system of coordinated agents operating under business rules.
These include:

- Multi-agent business workflows: customer support pipelines, research workflows, operational processes, and data transformation pipelines where different agents handle different responsibilities
- Systems requiring durability: workflows that run for hours or days, need checkpoints, can survive restarts, and maintain state across sessions
- Governance-heavy applications: processes requiring approval gates, audit trails, role-based access, and compliance documentation
- Agent collaboration patterns: applications where agents need to negotiate, escalate, debate, or refine outputs iteratively before producing a final result
- Enterprise data pipelines: complex data processing workflows where AI agents analyze, transform, and validate data through multiple stages

A practical example: an enterprise customer support system where a triage agent classifies incoming tickets, a research agent gathers relevant documentation and past solutions, a response agent drafts replies, and a quality agent reviews responses before they reach the customer, with a human escalation path when confidence is low.

Side-by-Side Comparison

To make the distinction concrete, here's how the two technologies compare across key dimensions that matter when choosing an intelligence layer for your application.
| Dimension | GitHub Copilot SDK | Microsoft Agent Framework |
| --- | --- | --- |
| Primary purpose | Embed Copilot's agent runtime into your app | Build and orchestrate your own agent systems |
| Orchestration | Handled by Copilot's agent loop; you delegate | You define explicitly: agents, workflows, state, hand-offs |
| Agent count | Typically a single agent per session | Multi-agent systems with agent-to-agent communication |
| State management | Session-scoped, managed by the SDK | Durable state with checkpointing, time-travel, persistence |
| Human-in-the-loop | Basic; user confirms actions | Rich approval gates, review steps, escalation paths |
| Observability | Session logs and tool call traces | Full OpenTelemetry, distributed tracing, DevUI |
| Best for | Developer tools, task automation, code-centric workflows | Enterprise workflows, multi-agent systems, business processes |
| Languages | Python, TypeScript, Go, .NET | Python, .NET |
| Learning curve | Low: install, configure, delegate tasks | Moderate: design agents, workflows, state, and policies |
| Maturity | Technical Preview | Preview with active development, 7k+ stars, 100+ contributors |

Real-World Example: Both Working Together

The most compelling applications don't choose between these technologies; they combine them. A perfect demonstration of this complementary relationship is the Agentic House project by my colleague Anthony Shaw, which uses an Agent Framework workflow to orchestrate three agents, one of which is powered by the GitHub Copilot SDK.

The Problem

Agentic House lets users ask natural language questions about their Home Assistant smart home data. Questions like "what time of day is my phone normally fully charged?" or "is there a correlation between when the back door is open and the temperature in my office?" require exploring available data, writing analysis code, and producing visual results: a multi-step process that no single agent can handle well alone.
The Architecture

The project implements a three-agent pipeline using the Agent Framework for orchestration:

```
┌─────────────┐     ┌──────────────┐     ┌──────────────┐
│   Planner   │────▶│    Coder     │────▶│   Reviewer   │
│  (GPT-4.1)  │     │  (Copilot)   │     │  (GPT-4.1)   │
└─────────────┘     └──────────────┘     └──────────────┘
    Plan               Notebook             Approve/
    analysis           generation           Reject
```

- Planner Agent: takes a natural language question and creates a structured analysis plan: which Home Assistant entities to query, what visualizations to create, what hypotheses to test. This agent uses GPT-4.1 through Azure AI Foundry or GitHub Models.
- Coder Agent: uses the GitHub Copilot SDK to generate a complete Jupyter notebook that fetches data from the Home Assistant REST API via MCP, performs the analysis, and creates visualizations. The Copilot agent is constrained to only use specific tools, demonstrating how the SDK supports tool restriction.
- Reviewer Agent: acts as a security gatekeeper, reviewing the generated notebook to ensure it only reads and displays data. It rejects notebooks that attempt to modify Home Assistant state, import dangerous modules, make external network requests, or contain obfuscated code.

Why This Architecture Works

This design demonstrates several principles about when to use which technology:

- Agent Framework provides the workflow: the sequential pipeline with planning, execution, and review is a classic Agent Framework pattern. Each agent has a clear role, and the framework manages the flow between them.
- Copilot SDK provides the coding execution: the Coder agent leverages Copilot's battle-tested ability to generate code, work with files, and use MCP tools. Building a custom code generation agent from scratch would take significantly longer and produce less reliable results.
- Tool constraints demonstrate responsible AI: the Copilot SDK agent is constrained to only specific tools, showing how you can embed powerful agentic behavior while maintaining security boundaries.
- Standalone agents handle planning and review: the Planner and Reviewer use simpler LLM-based agents; they don't need Copilot's code execution capabilities, just good reasoning.

While the Home Assistant data is a fun demonstration, the pattern is designed for something much more significant: applying AI agents to complex research against private data sources. The same architecture could analyze internal databases, proprietary datasets, or sensitive business metrics.

Decision Framework: Which Should You Use?

When deciding between the Copilot SDK and the Agent Framework, or both, consider these questions about your application.

Start with the Copilot SDK if:

- You need a single agent to execute tasks autonomously (code generation, file editing, command execution)
- Your application is developer-facing or code-centric
- You want to ship agentic features quickly without building orchestration infrastructure
- The tasks are session-scoped: they start and complete within a single interaction
- You want to leverage Copilot's existing tool ecosystem and MCP integration

Start with the Agent Framework if:

- You need multiple agents collaborating with different roles and responsibilities
- Your workflows are long-running, require checkpoints, or need to survive restarts
- You need human-in-the-loop approvals, escalation paths, or governance controls
- Observability and auditability are requirements (regulated industries, enterprise compliance)
- You're building a platform where the agents themselves are the product

Use both together if:

- You need a multi-agent workflow where at least one agent requires strong code execution capabilities
- You want Agent Framework's orchestration with Copilot's battle-tested agent runtime as one of the execution engines
- Your system involves planning, coding, and review stages that benefit from different agent architectures
- You're building research or analysis tools that combine AI reasoning with code generation

Getting Started

Both technologies are straightforward
to install and start experimenting with. Here's how to get each running in minutes.

GitHub Copilot SDK Quick Start

Install the SDK for your preferred language:

```shell
# Python
pip install github-copilot-sdk

# TypeScript / Node.js
npm install @github/copilot-sdk

# .NET
dotnet add package GitHub.Copilot.SDK

# Go
go get github.com/github/copilot-sdk/go
```

The SDK requires the Copilot CLI to be installed and authenticated. Follow the Copilot CLI installation guide to set that up. A GitHub Copilot subscription is required for standard usage, though BYOK mode allows you to use your own API keys without GitHub authentication.

Microsoft Agent Framework Quick Start

Install the framework:

```shell
# Python
pip install agent-framework --pre

# .NET
dotnet add package Microsoft.Agents.AI
```

The Agent Framework supports multiple LLM providers including Azure OpenAI and OpenAI directly. Check the quick start tutorial for a complete walkthrough of building your first agent.

Try the Combined Approach

To see both technologies working together, clone the Agentic House project:

```shell
git clone https://github.com/tonybaloney/agentic-house.git
cd agentic-house
uv sync
```

You'll need a Home Assistant instance, the Copilot CLI authenticated, and either a GitHub token or Azure AI Foundry endpoint. The project's README walks through the full setup, and the architecture provides an excellent template for building your own multi-agent systems with embedded Copilot capabilities.
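As a closing illustration of that architecture, the Reviewer agent's gatekeeping role described earlier can be approximated with a static scan. This is a toy stand-in, not Agentic House's actual reviewer (which is an LLM agent): the blocklist and the function name are assumptions made for this sketch, and a regex scan is far weaker than a real review.

```python
import re

# A toy stand-in for a Reviewer-style safety check: reject generated code
# that imports modules which could modify state or reach the network.
# The module blocklist is an illustrative assumption, not an exhaustive or
# production-grade policy.
FORBIDDEN_MODULES = {"os", "subprocess", "socket", "shutil"}

def review_notebook_code(source: str):
    """Return (approved, violations) for a string of generated code."""
    violations = []
    # Match top-of-line `import x` and `from x import ...` statements.
    for match in re.finditer(r"^\s*(?:import|from)\s+(\w+)", source, re.MULTILINE):
        module = match.group(1)
        if module in FORBIDDEN_MODULES:
            violations.append(module)
    return (len(violations) == 0, violations)

safe = "import pandas\nimport matplotlib"
risky = "import pandas\nimport subprocess"
print(review_notebook_code(safe))   # -> (True, [])
print(review_notebook_code(risky))  # -> (False, ['subprocess'])
```

In practice you would combine a deterministic check like this with the LLM review, since static rules are cheap to run on every generated notebook and catch the obvious cases before a model ever sees them.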
Key Takeaways

- Copilot SDK = "Put Copilot inside my app": embed a production-tested agentic runtime with planning, tool execution, file edits, and MCP support directly into your application
- Agent Framework = "Build my app out of agents": design, orchestrate, and host multi-agent systems with explicit workflows, durable state, and enterprise governance
- They're complementary, not competing: the Copilot SDK can act as a powerful execution engine inside Agent Framework workflows, as demonstrated by the Agentic House project
- Choose based on your orchestration needs: if you need one agent executing tasks, start with the Copilot SDK; if you need coordinated agents with business logic, start with the Agent Framework
- The real power is in combination: the most sophisticated applications use Agent Framework for workflow orchestration and the Copilot SDK for high-leverage task execution within those workflows

Conclusion and Next Steps

The question isn't really "Copilot SDK or Agent Framework?" It's "where does each fit in my architecture?" Understanding this distinction unlocks a powerful design pattern: use the Agent Framework to model your business processes as agent workflows, and use the Copilot SDK wherever you need a highly capable agent that can plan, code, and execute autonomously.

Start by identifying your application's needs. If you're building a developer tool that needs to understand and modify code, the Copilot SDK gets you there fast. If you're building an enterprise system where multiple AI agents need to collaborate under governance constraints, the Agent Framework provides the architecture. And if you need both, as most ambitious applications do, now you know how they fit together.

The AI development ecosystem is moving rapidly. Both technologies are in active development with growing communities and expanding capabilities.
The architectural patterns you learn today (embedding intelligent agents, orchestrating multi-agent workflows, combining execution engines with orchestration frameworks) will remain valuable regardless of how the specific tools evolve.

Resources

- GitHub Copilot SDK Repository – SDKs for Python, TypeScript, Go, and .NET with documentation and examples
- Microsoft Agent Framework Repository – framework source, samples, and workflow examples for Python and .NET
- Agentic House – real-world example combining Agent Framework with Copilot SDK for smart home data analysis
- Agent Framework Documentation – official Microsoft Learn documentation with tutorials and user guides
- Copilot CLI Installation Guide – setup instructions for the CLI that powers the Copilot SDK
- Copilot SDK Getting Started Guide – step-by-step tutorial for SDK integration
- Copilot SDK Cookbook – practical recipes for common tasks across all supported languages

What's New in Microsoft EDU, Bett Edition January 2026
Welcome to our update for Microsoft Education and our special Bett 2026 edition! The Bett conference takes place in London during the week of January 21st - January 23rd, and Microsoft Education has 18 exciting updates to share! Check out the official Bett News blog here, and for our full Bett schedule and session times, be sure to check out our Microsoft EDU Bett 2026 guide.

January 2026 topics:

1. Microsoft 365 Updates for Educators
2. Microsoft Learning Zone
3. Microsoft 365 Updates for Students
4. Teams EDU and OneNote EDU Updates
5. Microsoft 365 LTI Updates
6. Minecraft EDU

1. New Educator tools coming to the Teach Module in Microsoft 365

Unit Plans

Soon educators will be able to create unit plans in Teach. Using a familiar interface, educators will be able to describe their unit, ground it in existing content and educational standards, and attach any existing lesson plans. Unit plans will be created as Microsoft Word documents to facilitate easy edits and sharing.

When: Preview in Spring 2026

Minecraft Lesson Plans

Minecraft Education prepares students for the future workplace by helping build skills like collaboration, creative problem-solving, communication, and computational thinking. Coming soon, you will be able to create lesson plans in Teach that are fully teachable in Minecraft Education. And if you're new to Minecraft Education, the lesson plan includes step-by-step instructions to get started. Just like the existing lesson plan tool in Teach, Minecraft Lessons can be grounded on your class details, existing content, and educational standards from 35+ countries.

When: Preview in February 2026

Modify Content

When: In Preview now

Teach supports educators in modifying their existing teaching materials using AI-powered tools that save time and help meet the diverse needs of learners.
With Modify existing content, educators can quickly adapt lessons they already use, without starting from scratch, by aligning materials to standards, differentiating instructions, adjusting reading levels, and enhancing text with supporting examples. Each modification tool accepts direct text input or file uploads from cloud storage, making it easy to transform current curriculum resources. These tools help educators maintain instructional intent while ensuring content is accessible, standards-aligned, and effective for all learners.

Align materials to standards

Aligning instructional content to educational standards helps ensure lessons clearly support required learning goals and set the right expectations for learners. The Align to Standards tool rewrites existing lesson instructions so they reflect the intent of the selected standard, focusing on what learners should understand or be able to do, without copying the standard's wording.

Scenario: An educator has a lesson instruction for a reading activity on ecosystems. After selecting a state science standard, the educator uses Align to Standards to produce a revised instruction that emphasizes system interactions and evidence-based explanations while preserving the lesson's original purpose. This allows the educator to strengthen alignment quickly without rewriting the lesson from scratch.

Differentiate instructions

Differentiation helps ensure every learner, regardless of readiness, background knowledge, or support needs, can access and engage with instructional tasks. The Differentiate Instructions tool adapts existing instructions based on specific supports an educator selects, such as adjusting reading level, including a single type of scaffold, or targeting a desired length. Because this tool is designed for single-shot use, it produces a clear, accurate adaptation that adheres directly to the selected inputs.
Scenario: A secondary biology educator has lab instructions written for general education learners but needs versions for learners requiring additional scaffolding. Using Differentiate Instructions, the educator quickly generates modified instructions that include step-by-step breakdowns, sentence starters, or graphic organizers, making the lab more accessible without changing the learning goal.

Modify reading level

Adjusting the reading level helps ensure instructional content remains accessible while preserving essential vocabulary and core concepts. The Modify reading level tool rewrites text to match a specified grade level, simplifying or increasing complexity as needed while maintaining meaning. Educators can also choose to generate a glossary with clear, age-appropriate definitions of key terms.

Scenario: A social studies educator wants students to work with a primary source written at a university reading level. Using Modify reading level, the educator creates a version that maintains the document's key ideas and important historical terms while simplifying sentence structure for lower secondary learners. By adding a glossary, students can access learner-friendly definitions alongside the adapted text.

Add supporting examples

Concrete examples strengthen understanding by connecting abstract ideas to real-world applications. The Add Supporting Examples tool enhances existing instructional content by appending relevant, accurate, and age-appropriate examples, without altering the original paragraph.

Scenario: An educator teaching thermal energy transfer has a paragraph explaining that heat moves from warmer objects to cooler ones, but the concept feels abstract. Using Add Supporting Examples, the educator adds real-world examples, such as a metal spoon warming in hot soup or an ice cube melting on a countertop, to help learners visualize how heat transfer works. These examples reinforce understanding and make the concept more accessible for secondary learners.
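Grade-level targets like the ones the Modify reading level tool works with are commonly grounded in readability formulas. As a rough illustration of how a reading level can be quantified, here is a minimal Flesch-Kincaid grade estimator; the syllable counter is a crude vowel-group heuristic (an assumption made for this sketch), so treat its output as approximate, and note this is not how the Teach tools themselves are implemented.

```python
import re

# Minimal Flesch-Kincaid grade-level estimate:
#   grade = 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
# The syllable count below uses a crude vowel-group heuristic, so results
# are approximate.
def count_syllables(word: str) -> int:
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fk_grade(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

simple = "The cat sat. The dog ran."
print(round(fk_grade(simple), 1))  # -> -2.6 (very simple text scores below grade 0)
```

A rewriting tool can use a score like this as a feedback signal: rewrite, re-score, and stop once the text lands near the requested grade level.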
Fill in the Blanks, Matching and Quizzing

New Learning Activities are coming soon! We're excited to introduce three new Learning Activities designed to make classroom experiences more dynamic and personalized: Fill in the Blanks, Matching, and Quizzes. Whether it's completing paragraphs to strengthen comprehension, pairing terms with definitions in a timed matching game, or testing knowledge through quick self-assessments, these activities bring variety and fun to learning.

Fill in the Blanks creates paragraphs where learners can check their understanding by filling in missing terms. Matching is a game where learners can match terms and definitions while racing against the clock, aiming for fast completion and accuracy. And Quizzes allows students to quiz themselves and assess their comprehension.

Learning Activities are available across our education products: in a standalone web app, in the Teach Module, in Teams for Education, in the Study and Learn agent, and in Study Guides.

When: Spring 2026

Teach Module updates in Teams Classwork

In Teams Classwork, you can already use Copilot to create Lesson Plans, Flashcards, and Fill in the Blank activities. Coming this spring, you will see the ability to create and modify more content, better matching the capabilities of Teach in the Microsoft 365 Copilot App. This includes modifying content with AI, Minecraft Lessons, and more!

When: Coming soon

Teach Module and Class Notebook integration

We're bringing Copilot-powered creation tools directly into OneNote Class Notebook. Teachers will be able to generate Learning Activities and quizzes or modify existing content (like adjusting reading level or adding supporting examples) without leaving the page where they're already planning.

When: Coming soon

2.
Spark classroom engagement with Microsoft Learning Zone

Educators worldwide are always looking for innovative ways to engage students, personalize learning, and support individual growth, yet limited time and resources often stand in their way. Microsoft Learning Zone, a new Windows app, empowers educators to transform any idea or resource into an interactive, personalized lesson using AI on Copilot+ PCs. The app also provides actionable insights to guide instruction and support every student's progress. Learning Zone is now available to download from the Windows app store and is included at no additional cost with all Microsoft Education licenses.

Just in time for Bett 2026, Learning Zone has earned the prestigious ISTE Seal of Alignment, a recognized mark of quality, accessibility, and inclusive design. This recognition reflects our commitment to delivering meaningful, inclusive, and research-backed digital learning experiences for every learner. As noted by ISTE reviewers: "Microsoft Learning Zone saves educators valuable time while delivering personalized instruction that addresses individual learning needs."

Getting started with Microsoft Learning Zone is simple. Educators begin by defining their lesson goals and preferences and can also choose to reference their teaching materials or trusted in-app resources by OpenStax. From there, AI does the heavy lifting, generating a complete, interactive lesson with engaging content slides and a variety of practice activities. Educators can also quickly create Kahoot! quizzes using AI, bringing live classroom gamification into their lessons with just a few clicks.

Learning Zone is more than content creation; it provides a full classroom-ready solution, from assignment to actionable insights. Once a lesson is created and reviewed, educators can assign it to students.
Students complete lessons at their own pace, on any device, while the lesson flow adapts to their responses, helping reinforce understanding, revisit missed concepts, and build confidence over time. Educators, in turn, gain clear, actionable insights into student progress and mastery, enabling them to personalize instruction and better support every learner's growth.

[Image: Learning Zone is a classroom-ready solution including management and actionable insights]

Learning Zone also includes an extensive library of ready-to-learn lessons developed in collaboration with leading global organizations, including the Nobel Peace Center, PBS NewsHour, the World Wildlife Fund (WWF), NASA, OpenStax, Figma, and Minecraft Education. Ready-to-learn lessons are available to educators and students on any Windows device and are a great way to inspire curiosity and bring meaningful learning of different subjects into the classroom.

[Image: Ready-to-learn library in partnership with trusted global organizations]

Learning Zone is available today: visit https://learningzone.microsoft.com to learn more and download the app.

3. New AI-powered tools for student learning in Microsoft 365

Study and Learn Agent

Bring the interactive, conversational Study and Learn Agent in the Microsoft 365 Copilot App to your students. Available to all Microsoft EDU customers, the agent does not require an additional Copilot license. It is going into preview now, in January 2026. Join the Microsoft Education Insiders community at https://aka.ms/joinEIP to get information about accessing the Preview.

Study and Learn helps learners understand concepts, practice skills with activities like flashcards, and prepare for tests with study guides and quizzes. Additional activities, including fill-in-the-blanks and matching, will continue to be added. Purpose-built for learning in collaboration with learning science experts, Study and Learn aims to help foster reflective and critical thinking.
Over time, it will provide a more personalized, adaptive, inclusive experience to make learning relevant and bolster motivation.

When: January 2026 Preview

Learning Activities app

The Learning Activities web app is now here! This web-based experience brings all your favorite activities together in one place, making it easier than ever to create, customize, and share engaging content. Whether you're an educator designing lessons or a student building study sets, the web app offers a streamlined interface for finding or creating Flashcards and Fill in the Blanks, with Matching and Quizzes coming soon. You can easily access all the activities you have created in other products from the web app, too.

When: Available now!

4. Updates for your favorite teaching tools - Teams EDU and OneNote EDU

Set AI Guidelines in Teams

To help bring clarity to AI use in the classroom, AI Guidelines in Assignments allow educators to set clear expectations for when and how students can use AI, directly within the assignment experience. Educators start with a set of default, standardized AI use levels, and can apply them at the class or assignment level, with the ability to customize descriptions to reflect their school or district guidelines. These guidelines are clearly visible to students, reducing confusion and supporting responsible, transparent AI use, while also encouraging learners to use secure, education-ready Copilot.

When: In Preview Q1

Add Learning Activities to Teams Assignments

Learning Activities are coming to Teams Assignments and supported LMS platforms in preview, helping educators integrate interactive practice into the assignment workflows they already use. Educators can add activities such as Flashcards, Fill in the Blanks, and Matching, and share resource documents that enable students to create their own learning activities within an assignment or the Classwork module.
Students complete activities seamlessly within Assignments or their LMS, with progress captured as part of the assignment experience, supporting active, student-driven learning while keeping setup, instruction, and review in one familiar place. Students can create their own learning activities from educator-shared resources within an assignment or Classwork.

When: In Preview Q1

New information literacy features in Search Progress in Teams Assignments

Now students don't just gather sources; they investigate them. Four new research prompts (Source Reputation, Factual Importance, Cross-check, Source Purpose) make their thinking visible as they research. Read more about these new features in the preview blog here, and stay tuned for Microsoft Learn course updates to come.

When: Available now

Add Learning Zone lessons to Teams Assignments and LMS

Learning Zone lessons are coming to Teams Assignments and Microsoft 365 LTI for LMS platforms in preview, allowing educators to bring interactive lessons directly into the assignments and grading workflows they already use. Educators can attach Learning Zone lessons during assignment creation, while students complete them fully embedded within Assignments or their LMS, with progress and scores automatically synchronized for review. This preview helps educators save time, reduce manual setup and grading steps, and confidently deliver interactive learning experiences, while keeping assignment creation, student work, and review all in one place.

When: Preview in February

Embed Learning Activities in OneNote

You asked, we're building it. Soon, learners and educators alike will be able to copy a Learning Activity link, paste it into any OneNote classic page, and have it render inline, all to help folks engage without leaving the page.

When: Spring 2026

5.
Create with Copilot in your LMS

In addition to supporting the new Learning Zone lessons in assignments, we are adding exciting new Create with Copilot options in Microsoft 365 LTI, which bring the AI-powered capabilities of the Teach Module directly into LMS content creation workflows. From within their course, educators can use Copilot to draft lesson materials and other instructional content, which is seamlessly published to the course using familiar Microsoft 365 tools. Create with Copilot is also available in LMS content editors to help educators compose content, discussion posts, and more. This includes the ability to modify existing content, if supported by the LMS platform.

By embedding the creation experience where courses are designed and managed, Microsoft 365 LTI helps educators preserve instructional intent, reduce context switching, and move more quickly from planning to teaching. Microsoft 365 LTI is available to any Microsoft Education customer without additional licensing. LMS administrators can deploy the integration to an LTI 1.3-compatible LMS like Canvas, Blackboard, PowerSchool Schoology Learning, D2L/Brightspace, and Moodle to get started!

When: Preview in February

6. Dedicated servers coming to Minecraft Education

Minecraft Education is launching a new feature that enables IT administrators and educators to run dedicated servers to host persistent worlds for use in classrooms and after-school programs, similar to Minecraft Bedrock's dedicated servers (for the consumer version of the game). Dedicated servers enable cross-tenant gameplay, which is a game-changer for expanding multiplayer experiences in the classroom or running Minecraft esports programs with other schools. This feature is currently in Beta and will release in February for general availability for all Minecraft Education users. (Minecraft Education is available in Microsoft A3 and A5 software subscriptions for schools.)
___________________________________________________________________________________

And finally, just to recap all the news we have for you this month, here's a quick review of all the features that are generally available or are rolling out soon:

Teach Module / Microsoft 365 Updates for Educators
• Unit Plans – available in spring
• Minecraft Lesson Plans – preview in February
• Modify content – align to standards: private preview now
• Modify content – modify reading level: private preview now
• Modify content – add supporting examples: private preview now
• Modify content – differentiate instructions: private preview now
• Teach Module integration into OneNote Class Notebooks – preview in spring

Microsoft Learning Zone
• Available to download from the Windows store, at no additional cost
• Provides a full classroom-ready solution including lesson management and insights
• Teach Module, Teams Assignments, and LMS integration in March

Microsoft 365 Updates for Students
• Study and Learn Agent – preview in late January
• Learning Activities – Fill in the Blanks generally available
• Learning Activities – Matching activities in private preview now
• Learning Activities – self-quizzing in private preview in February

Teams and OneNote EDU Updates
• Set expected AI use in Assignments – private preview end of January
• Add Flashcards to Assignments – private preview in February
• New information literacy features in Search Progress
• Embed Learning Activities in OneNote – private preview in spring

Copilot in your Learning Management System

Dedicated Minecraft EDU servers

Have any feedback to share with us? As always, we'd love to hear it!

Mike Tholfsen
Group Product Manager
Microsoft Education

Microsoft Education Solutions Guide - available for all IT Admins, partners and schools
What is ESG - Education Solutions Guide

The Microsoft Education Solution Guide (ESG) was developed to simplify and enhance understanding of the license features and products available to you in a Microsoft 365 tenant. The initial focus and intent were to create better tools to understand tenant setup and configuration in order to enhance security and compliance. A link to the full guide is here. It consists of three stages.

Goals and Objectives for ESG

Goals

- Develop prescriptive deployment guides that provide a centralized resource with education-specific scenarios to assist organizations in defining, managing, and organizing their tenant and appropriate applications.
- Reduce the overall complexity of tenant and service deployment.
- Establish baseline recommended pathways to facilitate a common and agreed-upon configuration based on subject-matter experts.
- Utilize AI technology to uncover and compare recommended settings against user requirements based on documented configurations.
- Implement phased configurations to aid customers and partners in understanding what they may not know or should consider during discovery to meet customer expectations.
- Highlight unused features and products to ensure customers fully leverage the potential and benefits of their purchased product licenses.
- Identify opportunities for partner participation in achieving customer goals and expectations based on customer requirements and ESG findings.
- Create an easy pathway for customer change management to enhance control, security, compliance, and privacy of tenants.
- Develop custom assessments to evaluate product entry for items such as Copilot, Defender, Purview, Intune, Zero Trust, and Microsoft Entra ID.

Objectives

- Deliver information on features available (used/unused) to users based on the license model.
- Provide prescriptive recommendations based on education scenarios.
- Present upgrade license opportunities from A1 to A3 to A5.
• Security analysis exposing gaps and issues proactively, allowing modifications before it's too late.
• Promote partner access to customers that have defined gaps based on assessments and are requesting partner assistance.
• Better discovery and assessment analysis with new tools.
• More self-service access management for customers and partners.
• Speed up user adoption for educators and IT Admins alike.

Recommended Roles for Implementation
• IT Admin
• Identity Admin
• Security Admin
• Compliance Admin
• OneDrive Admin
• SharePoint Admin
• Exchange Online Admin
• Teams Admin

Navigation

ESG has seven main sections in the navigation:
• ESG – the Microsoft Education Solutions Guide overall, Microsoft Education licenses (A1–A3–A5), and Microsoft add-on licenses.
• Baseline Phase – Overview, Products and Features, Licenses. All five sections (Setup, Identity, Applications, Security and Compliance, and Devices).
• Standard Phase – Overview, Products and Features, Licenses. All five sections (Setup, Identity, Applications, Security and Compliance, and Devices).
• Advanced Phase – Overview, Products and Features, Licenses. Three sections (Identity, Applications, and Security and Compliance).
• Addons – Any add-on products and when/where they can be added to your current license configuration.
• Windows – Education-specific Windows-based products and features.
• References – Any process or configuration that is not outlined in the three license phases is included here.

Phases

Deployment Sequence

The Baseline phase is the first step in the three-phase approach: Baseline (A1) → Standard (A3) → Advanced (A5).

The Baseline phase is the foundational first phase in the Microsoft Education Solutions Guide deployment sequence, aligned with the Microsoft 365 A1 education license.

The Standard phase is the second phase in the Microsoft Education Solutions Guide deployment sequence, aligned with the Microsoft 365 A3 education license.
The Advanced phase is the third and most comprehensive phase in the Microsoft Education Solutions Guide deployment sequence, aligned with the Microsoft 365 A5 education license.

Sections

Setup
Tenant setup is key to establishing a secure and valid tenant. Setup goes through domain assignments, administration, and service management.

Identity
Establishing an identity via Microsoft Entra ID, including authentication methods, Single Sign-On, and user procurement methodologies.

Applications
Applications like Microsoft Teams, SharePoint, OneDrive, and Exchange Online are the core of a Microsoft tenant. Getting these applications set up is essential to allow users in education to access services and apps like Learning Accelerators.

Security and Compliance
Security in each phase is essential to maintain order and block access for bad actors, along with compliance and privacy considerations established to adhere to a multitude of local and government requirements worldwide.

Devices
Managed and unmanaged devices are another key to securing the network and addressing the potential cyber-security risks that enter the network via these devices.

How do you use ESG?

ESG uses deployment guidelines for content that contain education-scenario specifics. These prescriptive "Purple Boxes" give education organizations the ability to see prescriptive guidelines and recommendations based on an edu scenario. ESG has a linked path for each module based on the phase (Baseline, Standard, Advanced). Users can follow the deployment content to establish or redefine the tenant configuration in order to enable additional services and products.

What's Next

Go to https://aka.ms/esg to access the Microsoft Education Solutions Guide.

Stage 2 – A custom agent to discover the tenant configuration settings and give customers and partners the ability to qualify what is set to the standard recommendation.
This agent will also have the ability to access partner content and internal Microsoft resources. It will use an AI agent to discover tenant settings via Microsoft Graph API calls and the Microsoft MCP Server for Enterprise to evaluate them against known Microsoft recommended settings.

Stage 3 – Change management from discovery data baselines, to access and deliver any updates, changes, or modifications on a daily, weekly, or monthly cadence.
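To make the Stage 2 idea concrete (discovering tenant settings and qualifying them against a recommended baseline), here is a minimal sketch of the comparison step. The setting names, baseline values, and the find_gaps helper are illustrative assumptions for this example, not the actual ESG agent or an official Microsoft baseline; in practice the discovered values would come from Microsoft Graph API responses.

```python
# Sketch of the Stage 2 evaluation step: compare discovered tenant settings
# (e.g. parsed from Microsoft Graph responses) against a recommended baseline
# and report the gaps. Setting names and values below are illustrative only.

RECOMMENDED_BASELINE = {
    "allowLegacyAuthentication": False,
    "requireMfaForAdmins": True,
    "guestInviteRestriction": "adminsAndGuestInviters",
}

def find_gaps(discovered: dict, baseline: dict = RECOMMENDED_BASELINE) -> dict:
    """Return the settings whose discovered value differs from the baseline."""
    return {
        key: {"current": discovered.get(key), "recommended": recommended}
        for key, recommended in baseline.items()
        if discovered.get(key) != recommended
    }

# Example: a tenant that still allows legacy authentication
tenant = {
    "allowLegacyAuthentication": True,
    "requireMfaForAdmins": True,
    "guestInviteRestriction": "adminsAndGuestInviters",
}
gaps = find_gaps(tenant)
print(gaps)
```

A report like this is what the phased guidance is meant to act on: each gap maps back to a prescriptive recommendation in the Baseline, Standard, or Advanced phase.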
Hands-On Session: Teach Module in Copilot (available for all educators)

Join us on Wednesday, December 10th, 2025 @ 8am Pacific Time for an in-depth professional development webinar on the new AI-powered "Teach" module in Microsoft 365, which is fully rolled out and available to all educators. This will be a 45-minute hands-on webinar where the Product team will walk through the new updates in detail, and you can follow along at home with your own M365 account!

Sign up for the session at this link.

And don't worry – we'll be recording these and posting on our Microsoft Education YouTube channel, so you'll always be able to watch later or share with others!

Here is our training agenda for the webinar 👇

How to use the new AI-powered "Teach" module in M365. Includes:
✅ Lesson plans
✅ Copilot Quizzes
✅ Standards integration
✅ Learning Activities
✅ Differentiate reading materials
✅ Teams EDU integration and Classwork
✅ First look at OneNote EDU and Teach Module integration

We look forward to having you attend the event!

Mike Tholfsen
Group Product Manager
Microsoft Education

AI-powered Teach Module in M365 now rolled out to all educators!
Here is an end-to-end tutorial video on the new AI-powered "Teach" module in M365 Copilot. This is available today for ALL Microsoft 365 educators globally!

Includes:
📘 Lesson plans
📊 Rubrics
📝 Quizzes
🌍 Standards grounding (35 countries)
🎯 Learning Activities
➕ Lots more

YouTube 📺 https://youtu.be/WkXVukl62KU?si=UT3FzN5fqXQ7KlBd

Calling all IT Admins - webinar on configuring Microsoft 365 Copilot Chat and new updates for 13+
Calling all IT admins — join us for a workshop where we'll walk through how to configure Microsoft 365 Copilot Chat, from your tenant all the way to the end-user experience. We'll cover the latest features now available — including the new 13+ student configuration options, CSV, SDS, and PowerShell uploads, Copilot agents & extensibility, and updated licensing & security controls. By the end, you'll be able to confidently deploy, manage, and optimize Copilot Chat in your environment so your users can safely harness AI productivity from day one.

This webinar will be led by Bill Sluss and Jethro Seghers, Principal Product Managers from the Microsoft Education team.

When: Wednesday, October 29th @ 8am Pacific Time
Register: https://msit.events.teams.microsoft.com/event/7b0cfbc2-e169-461b-9ea2-48effc009d4c@72f988bf-86f1-41af-91ab-2d7cd011db47

Step-by-Step: Setting Up GitHub Student and GitHub Copilot as an Authenticated Student Developer
To become an authenticated GitHub Student Developer, follow these steps: create a GitHub account, verify student status through a school email or by contacting GitHub support, sign up for the Student Developer Pack, connect to Copilot, and activate the GitHub Student Developer Pack benefits.

The GitHub Student Developer Pack includes hundreds of free software offers and other benefits such as Azure credit, Codespaces, a student gallery, the Campus Experts program, and a learning lab. Copilot provides autocomplete-style suggestions from AI as you code. The Visual Studio Marketplace also offers GitHub Copilot Labs, a companion extension with experimental features, alongside GitHub Copilot itself for autocomplete-style suggestions.

Setting up your GitHub Student and GitHub Copilot as an authenticated GitHub Student Developer
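As an illustration of the autocomplete-style suggestions described above: with Copilot enabled in your editor, typing a descriptive comment and a function signature typically prompts a suggested body like the one below. The completion shown here is a hand-written example of the kind of code Copilot suggests, not an actual or guaranteed Copilot output.

```python
# is_palindrome: return True if s reads the same forwards and backwards,
# ignoring case and non-alphanumeric characters.
def is_palindrome(s: str) -> bool:
    # Copilot-style suggested body: normalize, then compare with the reverse.
    cleaned = "".join(ch.lower() for ch in s if ch.isalnum())
    return cleaned == cleaned[::-1]

print(is_palindrome("A man, a plan, a canal: Panama"))  # True
```

You accept, reject, or edit each suggestion as it appears, so treat completions like this as a starting point to review rather than finished code.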
What's New in Microsoft EDU - October 2025 (AI for all edition)

Join us on Wednesday, October 22nd, 2025 for our latest "What's New in Microsoft EDU" webinar! This will be a special one where we go into depth about all of the AI-powered tools we just announced for educators and students who use Microsoft 365 in Education.

These 30-minute webinars are put on by the Microsoft Education Product Management group and happen once per month; this month we are running it at both 8:00am and 4:00pm Pacific Time to cover as many global time zones as possible around the world. And don't worry – we'll be recording these and posting on our Microsoft Education YouTube channel in the new "What's New in Microsoft EDU" playlist, so you'll always be able to watch later or share with others!

Here is our October 2025 webinar agenda:
M365 Copilot and AI updates for Educators and Students
Learning Zone public preview and the Copilot+ PC
Microsoft 365 LTI for Learning Management Systems
AMA - Ask Microsoft EDU Anything (Q&A)

We look forward to having you attend the event!

How to sign up

OPTION 1: October 22nd, Wednesday @ 8:00am Pacific Time
Register here

OPTION 2: October 22nd, Wednesday @ 4:00pm Pacific Time
Register here

This is what the webinar portal will look like when you register:

We look forward to seeing you there!

Mike Tholfsen
Group Product Manager
Microsoft Education