# Choosing the Right Model in GitHub Copilot: A Practical Guide for Developers
AI-assisted development has grown far beyond simple code suggestions. GitHub Copilot now supports multiple AI models, each optimized for different workflows, from quick edits to deep debugging to multi-step agentic tasks that generate or modify code across your entire repository. As developers, this flexibility is powerful… but only if we know how to choose the right model at the right time.

In this guide, I'll break down:

- Why model selection matters
- The four major categories of development tasks
- A simplified, developer-friendly model comparison table
- Enterprise considerations and practical tips

This is written from the perspective of real-world customer conversations, GitHub Copilot demos, and enterprise adoption journeys.

## Why Model Selection Matters

GitHub Copilot isn't tied to a single model. Instead, it offers a range of models, each with different strengths:

- Some are optimized for speed
- Others are optimized for reasoning depth
- Some are built for agentic workflows

Choosing the right model can dramatically improve:

- The quality of the output
- The speed of your workflow
- The accuracy of Copilot's reasoning
- The effectiveness of Agents and Plan Mode
- Your usage efficiency under enterprise quotas

Model selection is now a core part of modern software development, just like choosing the right library, framework, or cloud service.

## The Four Task Categories (and Which Model Fits)

To simplify model selection, I group tasks into four categories. Each category aligns naturally with specific types of models.

### 1. Everyday Development Tasks

Examples:

- Writing new functions
- Improving readability
- Generating tests
- Creating documentation

Best fit: general-purpose coding models (e.g., GPT‑4.1, GPT‑5‑mini, Claude Sonnet). These models offer the best balance between speed and quality.

### 2. Fast, Lightweight Edits

Examples:

- Quick explanations
- JSON/YAML transformations
- Small refactors
- Regex generation
- Short Q&A tasks

Best fit: lightweight models (e.g., Claude Haiku 4.5). These models give near-instant responses and keep you "in flow."

### 3. Complex Debugging & Deep Reasoning

Examples:

- Analyzing unfamiliar code
- Debugging tricky production issues
- Architecture decisions
- Multi-step reasoning
- Performance analysis

Best fit: deep reasoning models (e.g., GPT‑5, GPT‑5.1, GPT‑5.2, Claude Opus). These models handle large context, produce structured reasoning, and give the most reliable insights for complex engineering tasks.

### 4. Multi-step Agentic Development

Examples:

- Repo-wide refactors
- Migrating a codebase
- Scaffolding entire features
- Implementing multi-file plans in Agent Mode
- Automated workflows (Plan → Execute → Modify)

Best fit: agent-capable models (e.g., GPT‑5.1‑Codex‑Max, GPT‑5.2‑Codex). These models are ideal when you need Copilot to execute multi-step tasks across your repository.

## GitHub Copilot Models: A Developer-Friendly Comparison

The set of models you can choose from depends on your Copilot subscription, and the available options may evolve over time. Each model also has its own premium request multiplier, which reflects the compute resources it requires. If you're using a paid Copilot plan, the multiplier determines how many premium requests are deducted whenever that model is used. For example, a 0.33x model consumes roughly one premium request across three uses, while a 3x model consumes three premium requests every time it is used.
| Model Category | Example Models (premium request multiplier for paid plans) | What They're Best At | When to Use Them |
|---|---|---|---|
| Fast Lightweight Models | Claude Haiku 4.5, Gemini 3 Flash (0.33x); Grok Code Fast 1 (0.25x) | Low latency, quick responses | Small edits, Q&A, simple code tasks |
| General-Purpose Coding Models | GPT‑4.1, GPT‑5‑mini (0x); GPT‑5‑Codex, Claude Sonnet 4.5 (1x) | Reliable day-to-day development | Writing functions, small tests, documentation |
| Deep Reasoning Models | GPT‑5.1‑Codex‑Mini (0.33x); GPT‑5, GPT‑5.1, GPT‑5.1‑Codex, GPT‑5.2, Claude Sonnet 4.0, Gemini 2.5 Pro, Gemini 3 Pro (1x); Claude Opus 4.5 (3x) | Complex reasoning and debugging | Architecture work, deep bug diagnosis |
| Agentic / Multi-step Models | GPT‑5.1‑Codex‑Max, GPT‑5.2‑Codex (1x) | Planning + execution workflows | Repo-wide changes, feature scaffolding |

## Enterprise Considerations

For organizations using Copilot Enterprise or Business:

- Admins can control which models employees can use
- Model selection may be restricted due to security, regulation, or data governance
- You may see fewer available models depending on your organization's Copilot policies

## Using "Auto" Model Selection in GitHub Copilot

GitHub Copilot's Auto model selection automatically chooses the best available model for your prompts, reducing the mental load of picking a model and helping you avoid rate limiting. When enabled, Copilot prioritizes model availability and selects from a rotating set of eligible models such as GPT‑4.1, GPT‑5 mini, GPT‑5.2‑Codex, Claude Haiku 4.5, and Claude Sonnet 4.5, while respecting your subscription level and any administrator-imposed restrictions. Auto also excludes models blocked by policies, models with premium multipliers greater than 1, and models unavailable in your plan. For paid plans, Auto provides an additional benefit: a 10% discount on premium request multipliers when used in Copilot Chat. Overall, Auto offers a balanced, optimized experience by dynamically selecting a performant and cost-efficient model without requiring developers to switch models manually. Read more in About Copilot auto model selection - GitHub Docs.

## Final Thoughts

GitHub Copilot is becoming a core part of developer workflows. Choosing the right model can dramatically improve:

- Your productivity
- The accuracy of Copilot's responses
- Your experience with multi-step agentic tasks
- Your ability to navigate complex codebases

Whether you're building features, debugging complex issues, or orchestrating repo-wide changes, picking the right model helps you get the best out of GitHub Copilot.

## References and Further Reading

To explore each model further, visit the GitHub Copilot model comparison documentation or try switching models in Copilot Chat to see how they impact your workflow.

- AI model comparison - GitHub Docs
- Requests in GitHub Copilot - GitHub Docs
- About Copilot auto model selection - GitHub Docs

# Demystifying GitHub Copilot Security Controls: easing concerns for organizational adoption
At a recent developer conference, I delivered a session on Legacy Code Rescue using GitHub Copilot App Modernization. Throughout the day, conversations with developers revealed a clear divide: some have fully embraced agentic AI in their daily coding, while others remain cautious. Often, this hesitation isn't due to reluctance but stems from organizational concerns around security and regulatory compliance. Having witnessed similar patterns during past technology shifts, I understand how these barriers can slow adoption. In this blog, I'll demystify the most common security concerns about GitHub Copilot and explain how its built-in features address them, empowering organizations to confidently modernize their development workflows.

## GitHub Copilot Model Training

A common question I received at the conference was whether GitHub uses your code as training data for GitHub Copilot. I always direct customers to the GitHub Copilot Trust Center for clarity, but the answer is straightforward: "No. GitHub uses neither Copilot Business nor Enterprise data to train the GitHub model." Notice that this restriction applies to third-party models as well (e.g., Anthropic, Google).

## GitHub Copilot Intellectual Property indemnification policy

A frequent concern I hear is that, since GitHub Copilot's underlying models are trained on sources that include public code, it might simply "copy and paste" code from those sources. Let's clarify how this actually works. Does GitHub Copilot "copy/paste"? "The AI models that create Copilot's suggestions may be trained on public code, but do not contain any code. When they generate a suggestion, they are not 'copying and pasting' from any codebase."

To provide an additional layer of protection, GitHub Copilot includes a duplicate detection filter. This feature helps prevent suggestions that closely match public code from being surfaced. (Note: duplicate detection currently does not apply to the Copilot coding agent.) More importantly, customers are protected by an Intellectual Property indemnification policy. This means that if you receive an unmodified suggestion from GitHub Copilot and face a copyright claim as a result, Microsoft will defend you in court.

## GitHub Copilot Data Retention

Another frequent question I hear concerns GitHub Copilot's data retention policies. For organizations on GitHub Copilot Business and Enterprise plans, retention practices depend on how and where the service is accessed from.

Access through the IDE for Chat and Code Completions:

- Prompts and Suggestions: not retained.
- User Engagement Data: kept for two years.
- Feedback Data: stored for as long as needed for its intended purpose.

Other GitHub Copilot access and use:

- Prompts and Suggestions: retained for 28 days.
- User Engagement Data: kept for two years.
- Feedback Data: stored for as long as needed for its intended purpose.

For the Copilot coding agent, session logs are retained for the life of the account in order to provide the service.

## Excluding content from GitHub Copilot

To prevent GitHub Copilot from indexing sensitive files, you can configure content exclusions at the repository or organization level. In VS Code, use the .copilotignore file to exclude files client-side. Note that files listed in .gitignore are not indexed by default but may still be referenced if open or explicitly referenced (unless they're excluded through .copilotignore or content exclusions).
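To make this concrete, here is a minimal sketch of what these exclusions might look like. The paths shown are hypothetical examples, and the exact syntax should be verified against the GitHub content exclusion documentation.

```yaml
# Hypothetical repository-level content exclusion
# (Repository Settings > Copilot > Content exclusion); paths are illustrative.
- "secrets.json"
- "/src/internal/crypto/**"
- "**/*.pem"
```

```text
# Hypothetical .copilotignore (client-side, VS Code), using gitignore-style patterns
.env
config/credentials/**
```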
## The life cycle of a GitHub Copilot code suggestion

Here are the key protections at each stage of the life cycle of a GitHub Copilot code suggestion:

- In the IDE: content exclusions prevent files, folders, or patterns from being included.
- GitHub proxy (pre-model safety): prompts go through a GitHub proxy hosted in Microsoft Azure for pre-inference checks, screening for toxic or inappropriate language, relevance, and hacking attempts or jailbreak-style prompts before reaching the model.
- Model response: with the public code filter enabled, some suggestions are suppressed. The vulnerability protection feature blocks insecure coding patterns like hardcoded credentials or SQL injection in real time.

## Disable access to GitHub Copilot Free

Due to the varying policies associated with GitHub Copilot Free, it is crucial for organizations to ensure it is disabled both in the IDE and on GitHub.com. Since not all IDEs currently offer a built-in option to disable Copilot Free, the most reliable method to prevent both accidental and intentional access is to implement firewall rule changes, as outlined in the official documentation.

## Agent Mode Allow List

Accidental file system deletion by agentic AI assistants can happen. With GitHub Copilot agent mode, the "terminal auto approve" setting in VS Code can be used to prevent this (a hedged example of the setting appears after the summary below). This setting can be managed centrally using a VS Code policy.

## MCP registry

Organizations often want to restrict access to allow only trusted MCP servers. GitHub now offers an MCP registry feature for this purpose. This feature isn't available in all IDEs and clients yet, but it's being developed.

## Compliance Certifications

The GitHub Copilot Trust Center page lists GitHub Copilot's broad compliance credentials, surpassing many competitors in financial, security, privacy, cloud, and industry coverage:

- SOC 1 Type 2: assurance over internal controls for financial reporting.
- SOC 2 Type 2: in-depth report covering Security, Availability, Processing Integrity, Confidentiality, and Privacy over time.
- SOC 3: general-use version of SOC 2 with broad executive-level assurance.
- ISO/IEC 27001:2013: certification for a formal Information Security Management System (ISMS), based on risk management controls.
- CSA STAR Level 2: includes a third-party attestation combining ISO 27001 or SOC 2 with additional Cloud Controls Matrix (CCM) requirements.
- TISAX: Trusted Information Security Assessment Exchange, covering automotive-sector security standards.

In summary, while the adoption of AI tools like GitHub Copilot in software development can raise important questions around security, privacy, and compliance, the safeguards in place help address these concerns. By understanding those safeguards, the configurable controls, and the robust compliance certifications offered, organizations and developers alike can feel more confident in embracing GitHub Copilot to accelerate innovation while maintaining trust and peace of mind.
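As promised in the Agent Mode Allow List section above, here is a sketch of what the terminal auto-approve setting might look like in settings.json. The setting key and schema have evolved across VS Code releases, so treat the exact names below as assumptions and verify them against the current VS Code documentation.

```jsonc
// Hypothetical VS Code settings.json entry; key name and schema may differ by version.
{
  "chat.tools.terminal.autoApprove": {
    "git status": true,          // allow harmless read-only commands
    "ls": true,
    "rm": false,                 // explicitly deny destructive commands
    "/^git (push|reset)\\b/": false
  }
}
```

Managed centrally through a VS Code policy, the same allow/deny list can be enforced for every developer in the organization.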
# GitHub Copilot SDK and Hybrid AI in Practice: Automating README to PPT Transformation

## Introduction

In today's rapidly evolving AI landscape, developers often face a critical choice: should we use powerful cloud-based Large Language Models (LLMs) that require internet connectivity, or lightweight Small Language Models (SLMs) that run locally but have limited capabilities? The answer isn't either-or. Hybrid models combine the strengths of both to create AI solutions that are secure, efficient, and powerful.

This article explores hybrid model architectures through the lens of GenGitHubRepoPPT, demonstrating how to elegantly combine Microsoft Foundry Local, the GitHub Copilot SDK, and other technologies to automatically generate professional PowerPoint presentations from GitHub README files.

## 1. Hybrid Model Scenarios and Value

### 1.1 What Are Hybrid Models?

Hybrid AI models strategically combine locally-running Small Language Models (SLMs) with cloud-based Large Language Models (LLMs) within the same application, selecting the most appropriate model for each task based on its unique characteristics.

Core principles:

- Local processing for sensitive data: privacy-critical content analysis happens on-device
- Cloud for value creation: complex reasoning and creative generation leverage cloud power
- Balancing cost and performance: high-frequency, simple tasks run locally to minimize API costs

### 1.2 Typical Hybrid Model Use Cases

| Use Case | Local SLM Role | Cloud LLM Role | Value Proposition |
|---|---|---|---|
| Intelligent Document Processing | Text extraction, structural analysis | Content refinement, format conversion | Privacy protection + professional output |
| Code Development Assistant | Syntax checking, code completion | Complex refactoring, architecture advice | Fast response + deep insights |
| Customer Service Systems | Intent recognition, FAQ handling | Complex issue resolution | Reduced latency + enhanced quality |
| Content Creation Platforms | Keyword extraction, outline generation | Article writing, multilingual translation | Cost control + creative assurance |

### 1.3 Why Choose Hybrid Models?

Three core advantages:

1. Privacy and security: sensitive data never leaves local devices; compliant with GDPR, HIPAA, and other regulations; ideal for internal corporate documents and personal information.
2. Cost optimization: reduces cloud API call frequency; local models have zero usage fees; predictable operational costs.
3. Performance and reliability: local processing eliminates network latency; partial functionality remains available offline; cloud models ensure high-quality output.

## 2. Core Technology Analysis

### 2.1 Large Language Models (LLMs): Cloud Intelligence Representatives

What are LLMs? Large Language Models are deep learning-based natural language processing models, typically with billions to trillions of parameters. Through training on massive text datasets, they've acquired powerful language understanding and generation capabilities.
Representative models:

- Claude Sonnet 4.5: Anthropic's flagship model, excelling at long-context processing and complex reasoning
- GPT-5.2 series: OpenAI's general-purpose language models
- Gemini: Google's multimodal large models

LLM advantages:

- ✅ Exceptional text generation quality
- ✅ Powerful contextual understanding
- ✅ Support for complex reasoning tasks
- ✅ Continuous model updates and optimization

Typical applications: professional document writing (technical reports, business plans), code generation and refactoring, multilingual translation, and creative content creation.

### 2.2 Small Language Models (SLMs) and Microsoft Foundry Local

#### 2.2.1 SLM Characteristics

Small Language Models typically have 1B-7B parameters, designed specifically for resource-constrained environments.

Mainstream SLM model families:

- Microsoft Phi family: inference-optimized efficient models
- Alibaba Qwen family: excellent Chinese language capabilities
- Mistral series: outstanding performance with small parameter counts

SLM advantages:

- ⚡ Low-latency response (millisecond-level)
- 💰 Zero API costs
- 🔒 Fully local; data stays on-device
- 📱 Suitable for edge device deployment

#### 2.2.2 Microsoft Foundry Local: The Foundation of Local AI

Foundry Local is Microsoft's local AI runtime tool, enabling developers to easily run SLMs on Windows or macOS devices.

Core features:

OpenAI-compatible API:

```python
# Using Foundry Local feels like using the OpenAI API
from openai import OpenAI
from foundry_local import FoundryLocalManager

manager = FoundryLocalManager("qwen2.5-7b-instruct")
client = OpenAI(
    base_url=manager.endpoint,
    api_key=manager.api_key
)
```

Hardware acceleration support:

- CPU: general computing support
- GPU: NVIDIA, AMD, Intel graphics acceleration
- NPU: Qualcomm, Intel AI-specific chips
- Apple Silicon: Neural Engine optimization

Based on ONNX Runtime:

- Cross-platform compatibility
- Highly optimized inference performance
- Supports model quantization (INT4, INT8)

Convenient model management:

```bash
# View available models
foundry model list

# Run a model
foundry model run qwen2.5-7b-instruct-generic-cpu:4

# Check running status
foundry service ps
```

Foundry Local application value:

- 🎓 Educational scenarios: students can learn AI development without cloud subscriptions
- 🏢 Enterprise environments: process sensitive data while maintaining compliance
- 🧪 R&D testing: rapid prototyping without API cost concerns
- ✈️ Offline environments: works on planes, subways, and other no-network scenarios

### 2.3 GitHub Copilot SDK: The Express Lane from Agent to Business Value

#### 2.3.1 What is the GitHub Copilot SDK?

The GitHub Copilot SDK, released as a technical preview on January 22, 2026, is a game-changer for AI agent development. Unlike other AI SDKs, the Copilot SDK doesn't just provide API calling interfaces; it delivers a complete, production-grade agent execution engine.

Why is it revolutionary? Traditional AI application development requires you to build:

- ❌ Context management systems (multi-turn conversation state)
- ❌ Tool orchestration logic (deciding when to call which tool)
- ❌ Model routing mechanisms (switching between different LLMs)
- ❌ MCP server integration
- ❌ Permission and security boundaries
- ❌ Error handling and retry mechanisms

The Copilot SDK provides all of this out of the box, letting you focus on business logic rather than underlying infrastructure.
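To round out the Foundry Local snippet above, here is a minimal sketch of a complete local chat call. It assumes the qwen2.5-7b-instruct alias is available on the machine; get_model_info follows the pattern in the Foundry Local samples, so verify the exact API against the current SDK.

```python
from openai import OpenAI
from foundry_local import FoundryLocalManager

alias = "qwen2.5-7b-instruct"  # assumed locally available model alias
manager = FoundryLocalManager(alias)  # starts the local service if needed

client = OpenAI(base_url=manager.endpoint, api_key=manager.api_key)

# Resolve the alias to the concrete local model id, then chat as with any OpenAI endpoint
response = client.chat.completions.create(
    model=manager.get_model_info(alias).id,
    messages=[{"role": "user", "content": "Summarize this README section in three bullets: ..."}],
)
print(response.choices[0].message.content)
```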
#### 2.3.2 Core Advantages: The Ultra-Short Path from Concept to Code

Production-grade agent engine, battle-tested reliability. The Copilot SDK uses the same agent core as GitHub Copilot CLI, which means:

- ✅ Validated in millions of real-world developer scenarios
- ✅ Capable of handling complex multi-step task orchestration
- ✅ Automatic task planning and execution
- ✅ Built-in error recovery mechanisms

Real-world example: in the GenGitHubRepoPPT project, we don't need to hand-write the "how to convert an outline to a PPT" logic. We simply tell the Copilot SDK the goal, and it automatically analyzes the outline structure, plans slide layouts, calls file creation tools, applies formatting logic, and handles multilingual adaptation.

```python
# Traditional approach: requires hundreds of lines of code for the logic
def create_ppt_traditional(outline):
    slides = parse_outline(outline)
    for slide in slides:
        layout = determine_layout(slide)
        content = format_content(slide)
        apply_styling(content, layout)
        # ... more manual logic
    return ppt_file

# Copilot SDK approach: focus on business intent
session = await client.create_session({
    "model": "claude-sonnet-4.5",
    "streaming": True,
    "skill_directories": [skills_dir]
})
await session.send_and_wait({"prompt": prompt}, timeout=600)
```

Custom Skills: reusable encapsulation of business knowledge. This is one of the Copilot SDK's most powerful features. In traditional AI development, you need to provide complete prompts and context with every call. Skills allow you to define once and reuse forever:

```markdown
# .copilot_skills/ppt/SKILL.md
# PowerPoint Generation Expert Skill

## Expertise
You are an expert in business presentation design, skilled at transforming
technical content into easy-to-understand visual presentations.

## Workflow
1. **Structure Analysis**
   - Identify outline hierarchy (titles, subtitles, bullet points)
   - Determine topic and content density for each slide
2. **Layout Selection**
   - Title slide: use large title + subtitle layout
   - Content slides: choose single/dual column based on bullet count
   - Technical details: use code block or table layouts
3. **Visual Optimization**
   - Apply a professional color scheme (corporate blue + accent colors)
   - Ensure each slide has a visual focal point
   - Keep bullets to 5-7 items per page
4. **Multilingual Adaptation**
   - Choose appropriate fonts based on language (Chinese: Microsoft YaHei, English: Calibri)
   - Adapt text direction and layout conventions

## Output Requirements
Generate .pptx files meeting these standards:
- 16:9 widescreen ratio
- Consistent visual style
- Editable content (not images)
- File size < 5MB
```

Business code generation capability: this is the core value of this project. Unlike generic LLM APIs, the Copilot SDK with Skills can generate truly executable business code.
Comparison example:

| Aspect | Generic LLM API | Copilot SDK + Skills |
|---|---|---|
| Task description | Requires detailed prompt engineering | Concise business intent suffices |
| Output quality | May need multiple adjustments | Professional-grade on first try |
| Code execution | Usually example code | Directly generates runnable programs |
| Error handling | Manual implementation required | Agent automatically handles and retries |
| Multi-step tasks | Manual orchestration needed | Automatic planning and execution |

Comparison of manual coding workload:

| Task | Manual Coding | Copilot SDK |
|---|---|---|
| Processing logic code | ~500 lines | ~10 lines of configuration |
| Layout templates | ~200 lines | Declared in Skill |
| Style definitions | ~150 lines | Declared in Skill |
| Error handling | ~100 lines | Automatically handled |
| Total | ~950 lines | ~10 lines + Skill file |

Tool calling & MCP integration: connecting to the real world. The Copilot SDK doesn't just generate code; it can directly execute operations:

- 🗃️ File system operations: create, read, modify files
- 🌐 Network requests: call external APIs
- 📊 Data processing: use pandas, numpy, and other libraries
- 🔧 Custom tools: integrate your business logic

## 3. GenGitHubRepoPPT Case Study

### 3.1 Project Overview

GenGitHubRepoPPT is an innovative hybrid AI solution that combines local AI models with cloud-based AI agents to automatically generate professional PowerPoint presentations from GitHub repository README files in under 5 minutes. (The technical architecture diagram is available in the project repository.)

### 3.2 Why Adopt a Hybrid Model?

Stage 1: the local SLM processes sensitive data. Task: analyze the GitHub README, extract key information, and generate a structured outline. Reasons for choosing Qwen-2.5-7B + Foundry Local:

1. Privacy protection: a README may contain internal project information; local processing ensures data doesn't leave the device and complies with data compliance requirements.
2. Cost effectiveness: each analysis processes thousands of tokens; cloud API costs are significant in high-frequency scenarios; local models have zero additional fees.
3. Performance: Qwen-2.5-7B excels at text analysis tasks, has outstanding Chinese support, and its CPU inference latency is acceptable (typically 2-3 seconds).

Stage 2: the cloud LLM + Copilot SDK creates business value. Task: create well-formatted PowerPoint files based on the outline. Reasons for choosing Claude Sonnet 4.5 + Copilot SDK:

1. Automated business code generation. Traditional approach pain points: hand-writing 500+ lines of code for PPT layout logic, deep knowledge of the python-pptx library APIs, error-prone style and formatting code, and extra conditional logic for multilingual support. The Copilot SDK solution: declare business rules and best practices through Skills; the agent automatically generates and executes the required code; complex layout logic is implemented with zero hand-written code; development time drops from 2-3 days to 2-3 hours.
2. Ultra-short path from intent to execution: a comparison of the different ways to implement "generate a professional PPT."
3. Production-grade reliability and quality assurance. Battle-tested agent engine: uses the same core as GitHub Copilot CLI, validated in millions of real-world scenarios, and automatically handles edge cases and errors. Consistent output quality: professional standards ensured through Skills, automatic validation of generated files, and built-in retry and error recovery mechanisms.
4. Rapid iteration and optimization capability, for example when a client requests a PPT style adjustment.

The GitHub repo: https://github.com/kinfey/GenGitHubRepoPPT
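Putting the two stages together, here is a hedged sketch of what the end-to-end pipeline might look like. The Foundry Local call mirrors the snippet earlier in this post, and the CopilotClient session shape follows the article's examples; helper names like readme_to_ppt and the skill path are hypothetical.

```python
import asyncio
from openai import OpenAI
from foundry_local import FoundryLocalManager
from copilot import CopilotClient  # github-copilot-sdk, as in the examples above

async def readme_to_ppt(readme_text: str) -> None:
    # Stage 1: local SLM turns the README into a structured outline (data stays on-device)
    alias = "qwen2.5-7b-instruct"
    manager = FoundryLocalManager(alias)
    local = OpenAI(base_url=manager.endpoint, api_key=manager.api_key)
    outline = local.chat.completions.create(
        model=manager.get_model_info(alias).id,
        messages=[{"role": "user",
                   "content": f"Extract a slide-by-slide outline from this README:\n{readme_text}"}],
    ).choices[0].message.content

    # Stage 2: cloud agent renders the outline into a .pptx via the PPT Skill
    client = CopilotClient()
    await client.start()
    session = await client.create_session({
        "model": "claude-sonnet-4.5",
        "streaming": True,
        "skill_directories": ["./.copilot_skills/ppt"],  # hypothetical skill path
    })
    await session.send_and_wait(
        {"prompt": f"Create a 16:9 PowerPoint from this outline:\n{outline}"}
    )

asyncio.run(readme_to_ppt(open("README.md").read()))
```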
## 4. Summary

### 4.1 Core Value of Hybrid Models + Copilot SDK

The GenGitHubRepoPPT project demonstrates how combining hybrid models with the Copilot SDK creates a new paradigm for AI application development.

Privacy and cost balance. The hybrid approach allows sensitive README analysis to happen locally using Qwen-2.5-7B, ensuring data never leaves the device while incurring zero API costs. Meanwhile, the value-creating work, generating professional PowerPoint presentations, leverages Claude Sonnet 4.5 through the Copilot SDK, delivering quality that justifies the per-use cost.

From code to intent. Traditional AI development required writing hundreds of lines of code to handle PPT generation logic, layout selection, style application, and error handling. With the Copilot SDK and Skills, developers describe what they want in natural language, and the agent automatically generates and executes the necessary code. What once took 3-5 days now takes 3-4 hours, with 95% less code to maintain.

Automated business code generation. The Copilot SDK doesn't just provide code examples; it generates complete, executable business logic. When you request a multilingual PPT, the agent understands the requirement, selects appropriate fonts, generates the implementation code, executes it with error handling, validates the output, and returns a ready-to-use file. Developers focus on business intent rather than implementation details.

### 4.2 Technology Trends

The shift to intent-driven development. We're witnessing a fundamental change in how developers work. Rather than mastering every programming language detail and framework API, developers are increasingly defining what they want through declarative Skills. The Copilot SDK represents this future: you describe capabilities in natural language, and AI agents handle the code generation and execution automatically.

Edge AI and cloud AI integration. The evolution from pure cloud LLMs (powerful but privacy-concerning) to pure local SLMs (private but limited) has led to today's hybrid architectures. GenGitHubRepoPPT exemplifies this trend: local models handle data analysis and structuring, while cloud models tackle complex reasoning and professional output generation. This combination delivers fast, secure, and professional results.

Democratization of agent development. The Copilot SDK dramatically lowers the barrier to building AI applications. Senior engineers see 10-20x productivity gains. Mid-level engineers can now build sophisticated agents that were previously beyond their reach. Even junior engineers and business experts can participate by writing Skills that capture domain knowledge without deep technical expertise. The future isn't about whether we can build AI applications; it's about how quickly we can turn ideas into reality.

## References

Projects and code:

- GenGitHubRepoPPT GitHub Repository - Case study project
- Microsoft Foundry Local - Local AI runtime
- GitHub Copilot SDK - Agent development SDK
- Copilot SDK Getting Started Tutorial - Official quick start

Deep dive: Copilot SDK:

- Build an Agent into Any App with GitHub Copilot SDK - Official announcement
- GitHub Copilot SDK Cookbook - Practical examples
- Copilot CLI Official Documentation - CLI tool documentation

Learning resources:

- Edge AI for Beginners - Edge AI introductory course
- Azure AI Foundry Documentation - Azure AI documentation
- GitHub Copilot Extensions Guide - Extension development guide

# Writing Effective Prompts for Testing Scenarios: AI Assisted Quality Engineering
AI-assisted testing is no longer an experiment confined to innovation labs. Across enterprises, quality engineering teams are actively shifting from manual-heavy testing approaches to AI-first QA, where tools like GitHub Copilot participate throughout the SDLC, from requirement analysis to regression triage. Yet, despite widespread adoption, most teams are only scratching the surface. They use AI to "generate test cases" or "write automation," but struggle with inconsistent outputs, shallow coverage, and trust issues. The root cause is rarely the model; it's prompt design.

This blog moves past basic prompting tips to cover QA practices, focusing on effective prompt design and common pitfalls. It notes that adopting AI in testing is a gradual process of ongoing transformation rather than a quick productivity gain.

## Why Effective Prompting Is Necessary in Testing

At its core, testing is about asking the right questions of a system. When AI enters the picture, prompts become the mechanism through which those questions are asked. A vague or incomplete prompt is no different from an ambiguous test requirement: it leads to weak coverage and unreliable results. Poorly written prompts often result in generic or shallow test cases, incomplete UI or API coverage, incorrect automation logic, or superficial regression analysis. This increases rework and reduces trust in AI-generated outputs.

In contrast, well-crafted prompts dramatically improve outcomes. They help expand UI and API test coverage, accelerate automation development, and enable faster interpretation of regression results. More importantly, they allow testers to focus on risk analysis and quality decisions instead of repetitive tasks. In this sense, effective prompting doesn't replace testing skills; it amplifies them.

## Industry Shift: Manual QA to AI-First Testing Lifecycle

Modern QA organizations are undergoing three noticeable shifts. First, there is a clear move away from manual test authoring toward AI-augmented test design. Testers increasingly rely on AI to generate baseline coverage, allowing them to focus on risk analysis, edge cases, and system behavior rather than repetitive documentation.

Second, enterprises are adopting agent-based and MCP-backed testing, where AI systems are no longer isolated prompt responders. They operate with access to application context (OpenAPI specs, UI flows, historical regressions, and even production telemetry), making outputs significantly more accurate and actionable.

Third, teams are seeing tangible SDLC impact. Internally reported metrics across multiple organizations show faster test creation, reduced regression cycle time, and earlier defect detection when Copilot-style tools are used correctly. The key word here is correctly: poor prompting neutralizes these benefits almost immediately.

## Prerequisites

- GitHub Copilot access in a supported IDE (VS Code, JetBrains, Visual Studio)
- An appropriate model (advanced reasoning models for workflows and analysis)
- Basic testing fundamentals (AI amplifies skill; it does not replace it)
- (Optional but powerful) Context providers / MCP servers for specs, docs, and reports

## Prompting: A Design Skill with Examples

Most testers treat prompts as instructions. Mature teams treat them as design artifacts. Effective prompts should be intentional, layered, and defensive. They should not just ask for output, but control how the AI reasons, what assumptions it can make, and how uncertainty is handled.
### Pattern 1: Role-Based Prompting

Assigning a role fundamentally changes the AI's reasoning depth. Instead of:

"Generate test cases for login."

Use something like:

"You are a senior QA engineer testing a consumer web application. Generate prioritized test cases for the login flow, covering positive, negative, and security scenarios, and state any assumptions you make."

This pattern consistently results in better prioritization, stronger negative scenarios, and fewer superficial cases.

### Pattern 2: Few-Shot Prompting with Test Examples

AI aligns faster when shown what "good" looks like. Providing even a single example test case or automation snippet dramatically improves consistency in AI-generated outputs, especially when multiple teams are involved. Concrete examples help align the AI with the expected automation structure, enforce naming conventions, influence the depth and quality of assertions, and standardize reporting formats. By showing what "good" looks like, teams reduce variation, improve maintainability, and make AI-generated assets far easier to review and extend.

### Pattern 3: Provide Rich Context and Clear Instructions

Copilot works best when it understands the surrounding context of what you are testing. The richer the context, the higher the quality of the output, whether you are generating manual test cases, automation scripts, or regression insights. When writing prompts, clearly describe the application type (web, mobile, UI, API), the business domain, the feature or workflow under test, and the relevant user roles or API consumers. Business rules, constraints, assumptions, and exclusions should also be explicitly stated. Where possible, include structured instructions in an Instructions.md file and pass it as context to the Copilot agent (a sketch of such a file appears at the end of this post). You can also attach supporting assets, such as Swagger screenshots or UI flow diagrams, to further ground the AI's understanding. The result is more concise, accurate output that aligns closely with your system's real behavior and constraints, and clear instructions also tell the AI how to handle uncertainty and exceptions.

## Prompt Anti-Patterns to Avoid

Most AI failures in QA are self-inflicted. The following anti-patterns show up repeatedly in enterprise teams:

- Overloaded prompts that request UI tests, API tests, automation, and analysis in one step
- Natural language overuse where structured output (tables, JSON, code templates) is required
- Automation prompts without environment details (browser, framework, auth, data)
- Contradictory instructions, such as asking for "detailed coverage" and "keep it minimal" simultaneously

## The AI-Assisted QA Maturity Model

Prompting is not a one-time tactic; it is a capability that matures over time. The levels below represent how increasing sophistication in prompt design directly leads to more advanced, reliable, and impactful testing outcomes.

### Level 1 – Prompt-Based Test Generation

AI is primarily used to generate manual test cases, scenarios, and edge cases from requirements or user stories. This level improves test coverage and speeds up test design but still relies heavily on human judgment for validation, prioritization, and execution.

### Level 2 – AI-Assisted Automation

AI moves beyond documentation and actively supports automation by generating framework-aligned scripts, page objects, and assertions. Testers guide the AI with clear constraints and patterns, resulting in faster automation development while retaining full human control over architecture and execution.
### Level 3 – AI-Led Regression Analysis

At this stage, AI assists in analyzing regression results by clustering failures, identifying recurring patterns, and suggesting likely root causes. Testers shift from manually triaging failures to validating AI-generated insights, significantly reducing regression cycle time.

### Level 4 – MCP-Integrated, Agentic Testing

AI operates with deep system context through MCP servers, accessing specifications, historical test data, and execution results. It can independently generate, refine, and adapt tests based on system changes, enabling semi-autonomous, context-aware quality engineering with human oversight.

## Best Practices for Prompt-Based Testing

- Prioritize context over brevity
- Treat prompts as test specifications
- Iterate instead of rewriting from scratch
- Experiment with models when outputs miss intent
- Always validate AI-generated automation and analysis
- Maintain reusable prompt templates for UI testing, API testing, automation, and regression analysis

## Final Thoughts: Prompting as a Core QA Capability

Effective prompting improves coverage, accelerates delivery, and elevates QA from execution to engineering. It turns Copilot from a code generator into a quality partner. The next use cases in line go beyond functional flows: automation framework enhancements, performance testing prompts, accessibility testing prompts, and data quality testing prompts. Stay tuned for upcoming blogs!
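As referenced under Pattern 3, here is a hedged sketch of what a reusable instructions file might look like. The structure and field names are illustrative assumptions, not a prescribed Copilot format.

```markdown
<!-- Instructions.md: hypothetical context file passed to the Copilot agent -->
# Test Generation Instructions

## Application Context
- Type: web application (REST API + React UI)
- Domain: retail banking; user roles: customer, teller, admin

## Rules and Constraints
- Follow the Page Object Model for UI automation (Playwright + TypeScript)
- Every API test must assert status code, response schema, and at least one business rule
- Do not invent endpoints; if a spec is missing, ask instead of assuming

## Output Format
- Test cases as a table: ID | Priority | Preconditions | Steps | Expected Result
```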
# Building Agents with GitHub Copilot SDK: A Practical Guide to Automated Tech Update Tracking

## Introduction

In the rapidly evolving tech landscape, staying on top of key project updates is crucial. This article explores how to leverage GitHub's newly released Copilot SDK to build intelligent agent systems, featuring a practical case study on automating daily update tracking and analysis for Microsoft's Agent Framework.

## GitHub Copilot SDK: Embedding AI Capabilities into Any Application

### SDK Overview

On January 22, 2026, GitHub officially launched the GitHub Copilot SDK technical preview, marking a new era in AI agent development. The SDK provides these core capabilities:

- Production-grade execution loop: the same battle-tested agentic engine powering GitHub Copilot CLI
- Multi-language support: Node.js, Python, Go, and .NET
- Multi-model routing: flexible model selection for different tasks
- MCP server integration: native Model Context Protocol support
- Real-time streaming: support for streaming responses and live interactions
- Tool orchestration: automated tool invocation and command execution

### Core Advantages

Building agentic workflows from scratch presents numerous challenges:

- Context management across conversation turns
- Orchestrating tools and commands
- Routing between models
- Handling permissions, safety boundaries, and failure modes

The Copilot SDK encapsulates all this complexity. As Mario Rodriguez, GitHub's Chief Product Officer, explains: "The SDK takes the agentic power of Copilot CLI and makes it available in your favorite programming language... GitHub handles authentication, model management, MCP servers, custom agents, and chat sessions plus streaming. That means you are in control of what gets built on top of those building blocks."

### Quick Start Examples

Here's a simple TypeScript example using the Copilot SDK:

```typescript
import { CopilotClient } from "@github/copilot-sdk";

const client = new CopilotClient();
await client.start();

const session = await client.createSession({
  model: "gpt-5",
});

await session.send({ prompt: "Hello, world!" });
```

And in Python, it's equally straightforward:

```python
from copilot import CopilotClient

client = CopilotClient()
await client.start()

session = await client.create_session({
    "model": "claude-sonnet-4.5",
    "streaming": True,
    "skill_directories": ["./.copilot_skills/pr-analyzer/SKILL.md"]
})

await session.send_and_wait({
    "prompt": "Analyze PRs from microsoft/agent-framework merged yesterday"
})
```

## Real-World Case Study: Automated Agent Framework Daily Updates

### Project Background

agent-framework-update-everyday is an automated system built with the GitHub Copilot SDK and CLI that tracks daily code changes in Microsoft's Agent Framework and generates high-quality technical blog posts.
### System Architecture

The project leverages the following technology stack:

- GitHub Copilot CLI (@github/copilot): command-line AI capabilities
- GitHub Copilot SDK (github-copilot-sdk): programmatic AI interactions
- Copilot Skills: custom PR analysis behaviors
- GitHub Actions: CI/CD automation pipeline

### Core Workflow

The system runs fully automated via GitHub Actions, executing Monday through Friday at UTC 00:00 with these steps:

| Step | Action | Description |
|---|---|---|
| 1 | Checkout repository | Clone the repo using actions/checkout@v4 |
| 2 | Setup Node.js | Configure the Node.js 22 environment for Copilot CLI |
| 3 | Install Copilot CLI | Install via npm i -g @github/copilot |
| 4 | Setup Python | Configure the Python 3.11 environment |
| 5 | Install Python dependencies | Install the github-copilot-sdk package |
| 6 | Run PR analysis | Execute pr_trigger_v2.py with Copilot authentication |
| 7 | Commit and push | Auto-commit generated blog posts to the repository |

### Technical Implementation Details

#### 1. Copilot Skill Definition

The project uses a custom Copilot Skill (.copilot_skills/pr-analyzer/SKILL.md) to define:

- PR analysis behavior patterns
- Blog post structure requirements
- Breaking-changes priority strategy
- Code snippet extraction rules

This skill-based approach enables the AI agent to focus on domain-specific tasks and produce higher-quality outputs.

#### 2. Python SDK Integration

The core script pr_trigger_v2.py demonstrates Python SDK usage:

```python
from copilot import CopilotClient

# Initialize client
client = CopilotClient()
await client.start()

# Create session with model and skill specification
session = await client.create_session({
    "model": "claude-sonnet-4.5",
    "streaming": True,
    "skill_directories": ["./.copilot_skills/pr-analyzer/SKILL.md"]
})

# Send analysis request
await session.send_and_wait({
    "prompt": "Analyze PRs from microsoft/agent-framework merged yesterday"
})
```

#### 3. CI/CD Integration

The GitHub Actions workflow (.github/workflows/daily-pr-analysis.yml) ensures automated execution:

```yaml
name: Daily PR Analysis

on:
  schedule:
    - cron: '0 0 * * 1-5'  # Monday-Friday at UTC 00:00
  workflow_dispatch:        # Support manual triggers

jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - name: Setup and Run Analysis
        env:
          COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
        run: |
          npm i -g @github/copilot
          pip install github-copilot-sdk --break-system-packages
          python pr_trigger_v2.py
```

### Output Results

The system automatically generates structured blog posts saved in the blog/ directory with the naming convention blog/agent-framework-pr-summary-{YYYY-MM-DD}.md. Each post includes:

- Breaking changes (highlighted first)
- Major updates (with code examples)
- Minor updates and bug fixes
- Summary and impact assessment

## Latest Advancements in GitHub Copilot CLI

Released alongside the SDK, Copilot CLI has also received major updates, making it an even more powerful development tool.

Enhanced core capabilities:

- Persistent memory: cross-session context retention and intelligent compaction
- Multi-model collaboration: choose different models for explore, plan, and review workflows
- Autonomous execution: custom agent support, an agent skill system, full MCP support, and async task delegation

Development teams have already built innovative applications using the SDK:

- YouTube chapter generators
- Custom GUI interfaces for agents
- Speech-to-command workflows for desktop apps
- Games where you compete with AI
- Content summarization tools

These examples showcase the flexibility and power of the Copilot SDK.
## SDK vs CLI: Complementary, Not Competing

Understanding the relationship between the SDK and the CLI is important:

- CLI: an interactive tool for end users, providing a complete development experience
- SDK: a programmable layer for developers to build customized applications

The SDK essentially provides programmatic access to the CLI's core capabilities, enabling developers to:

- Integrate Copilot agent capabilities into any environment
- Build graphical user interfaces
- Create personal productivity tools
- Run custom internal agents in enterprise workflows

GitHub handles the underlying authentication, model management, MCP servers, and session management, while developers focus on building value on top of these building blocks.

## Best Practices and Recommendations

Based on experience from the agent-framework-update-everyday project, here are practical recommendations:

1. Leverage Copilot Skills effectively. Define clear skill files that specify input and output formats for tasks, rules for handling edge cases, and quality standards and priorities.
2. Choose models wisely. Use different models for different tasks: more powerful models (e.g., GPT-5) for exploratory tasks, faster models (e.g., Claude Sonnet) for execution tasks, and balance performance against budget for cost-sensitive tasks.
3. Implement robust error handling. AI calls in CI/CD environments need to consider network timeout and retry strategies, API rate limit handling, and output validation with fallback mechanisms.
4. Secure authentication management. Use fine-grained Personal Access Tokens (PATs): create dedicated Copilot access tokens, set the minimum permission scope (Copilot Requests: Read), and store them securely using GitHub Secrets.
5. Version control and traceability. Automated systems should log metadata for each execution, preserve historical outputs for comparison, and implement auditable change tracking.

## Future Outlook

The release of the GitHub Copilot SDK marks the democratization of AI agent development. Developers can now:

- Lower development barriers: no need to deeply understand complex AI infrastructure
- Accelerate innovation: focus on business logic rather than underlying implementation
- Integrate flexibly: embed AI capabilities into any application scenario
- Ship production-ready systems: leverage proven execution loops and security mechanisms

As the SDK moves from technical preview to general availability, we can expect official support for more languages, a richer tool ecosystem, more powerful MCP integration capabilities, and community-driven best-practice libraries.

## Conclusion

This article demonstrates how to build practical automation systems using the GitHub Copilot SDK through the agent-framework-update-everyday project. This case study not only validates the SDK's technical capabilities but, more importantly, showcases a new development paradigm: using AI agents as programmable building blocks, integrated into daily development workflows, to liberate developer creativity. Whether you want to build personal productivity tools, enterprise internal agents, or innovative AI applications, the Copilot SDK provides a solid technical foundation. Visit github/copilot-sdk to start your AI agent journey today!

## Reference Resources

- GitHub Copilot SDK Official Repository
- Agent Framework Update Everyday Project
- GitHub Copilot CLI Documentation
- Microsoft Agent Framework
- Build an agent into any app with the GitHub Copilot SDK
# From Cloud to Chip: Building Smarter AI at the Edge with Windows AI PCs

As AI engineers, we've spent years optimizing models for the cloud: scaling inference, wrangling latency, and chasing compute across clusters. But the frontier is shifting. With the rise of Windows AI PCs and powerful local accelerators, the edge is no longer a constraint; it's a canvas. Whether you're deploying vision models to industrial cameras, optimizing speech interfaces for offline assistants, or building privacy-preserving apps for healthcare, Edge AI is where real-world intelligence meets real-time performance.

## Why Edge AI, Why Now?

Edge AI isn't just about running models locally; it's about rethinking the entire lifecycle:

- Latency: decisions in milliseconds, not round-trips to the cloud.
- Privacy: sensitive data stays on-device, enabling HIPAA/GDPR compliance.
- Resilience: offline-first apps that don't break when the network does.
- Cost: reduced cloud compute and bandwidth overhead.

With Windows AI PCs powered by Intel and Qualcomm NPUs, and tools like ONNX Runtime, DirectML, and Olive, developers can now optimize and deploy models with unprecedented efficiency.

## What You'll Learn in Edge AI for Beginners

The Edge AI for Beginners curriculum is a hands-on, open-source guide designed for engineers ready to move from theory to deployment.

### Multi-Language Support

This content is available in over 48 languages, so you can read and study in your native language.

### What You'll Master

This course takes you from fundamental concepts to production-ready implementations, covering:

- Small Language Models (SLMs) optimized for edge deployment
- Hardware-aware optimization across diverse platforms
- Real-time inference with privacy-preserving capabilities
- Production deployment strategies for enterprise applications

### Why EdgeAI Matters

Edge AI represents a paradigm shift that addresses critical modern challenges:

- Privacy & security: process sensitive data locally without cloud exposure
- Real-time performance: eliminate network latency for time-critical applications
- Cost efficiency: reduce bandwidth and cloud computing expenses
- Resilient operations: maintain functionality during network outages
- Regulatory compliance: meet data sovereignty requirements

### Edge AI

Edge AI refers to running AI algorithms and language models locally on hardware, close to where data is generated, without relying on cloud resources for inference. It reduces latency, enhances privacy, and enables real-time decision-making.

Core principles:

- On-device inference: AI models run on edge devices (phones, routers, microcontrollers, industrial PCs)
- Offline capability: functions without persistent internet connectivity
- Low latency: immediate responses suited for real-time systems
- Data sovereignty: keeps sensitive data local, improving security and compliance

### Small Language Models (SLMs)

SLMs like Phi-4, Mistral-7B, Qwen, and Gemma are optimized versions of larger LLMs, trained or distilled for:

- Reduced memory footprint: efficient use of limited edge device memory
- Lower compute demand: optimized for CPU and edge GPU performance
- Faster startup times: quick initialization for responsive applications

They unlock powerful NLP capabilities while meeting the constraints of:

- Embedded systems: IoT devices and industrial controllers
- Mobile devices: smartphones and tablets with offline capabilities
- IoT devices: sensors and smart devices with limited resources
- Edge servers: local processing units with limited GPU resources
- Personal computers: desktop and laptop deployment scenarios

## Course Modules & Navigation
Course duration: 10 hours of content.

| Module | Topic | Focus Area | Key Content | Level | Duration |
|---|---|---|---|---|---|
| 📖 00 | Introduction to EdgeAI | Foundation & context | EdgeAI Overview • Industry Applications • SLM Introduction • Learning Objectives | Beginner | 1-2 hrs |
| 📚 01 | EdgeAI Fundamentals | Cloud vs Edge AI comparison | EdgeAI Fundamentals • Real World Case Studies • Implementation Guide • Edge Deployment | Beginner | 3-4 hrs |
| 🧠 02 | SLM Model Foundations | Model families & architecture | Phi Family • Qwen Family • Gemma Family • BitNET • μModel • Phi-Silica | Beginner | 4-5 hrs |
| 🚀 03 | SLM Deployment Practice | Local & cloud deployment | Advanced Learning • Local Environment • Cloud Deployment | Intermediate | 4-5 hrs |
| ⚙️ 04 | Model Optimization Toolkit | Cross-platform optimization | Introduction • Llama.cpp • Microsoft Olive • OpenVINO • Apple MLX • Workflow Synthesis | Intermediate | 5-6 hrs |
| 🔧 05 | SLMOps Production | Production operations | SLMOps Introduction • Model Distillation • Fine-tuning • Production Deployment | Advanced | 5-6 hrs |
| 🤖 06 | AI Agents & Function Calling | Agent frameworks & MCP | Agent Introduction • Function Calling • Model Context Protocol | Advanced | 4-5 hrs |
| 💻 07 | Platform Implementation | Cross-platform samples | AI Toolkit • Foundry Local • Windows Development | Advanced | 3-4 hrs |
| 🏭 08 | Foundry Local Toolkit | Production-ready samples | Sample applications | Expert | 8-10 hrs |

Each module includes Jupyter notebooks, code samples, and deployment walkthroughs, perfect for engineers who learn by doing.

## Developer Highlights

- 🔧 Olive: Microsoft's optimization toolchain for quantization, pruning, and acceleration.
- 🧩 ONNX Runtime: cross-platform inference engine with support for CPU, GPU, and NPU (a minimal inference sketch appears at the end of this post).
- 🎮 DirectML: GPU-accelerated ML API for Windows, ideal for gaming and real-time apps.
- 🖥️ Windows AI PCs: devices with built-in NPUs for low-power, high-performance inference.

## Local AI: Beyond the Edge

Local AI isn't just about inference; it's about autonomy. Imagine agents that:

- Learn from local context
- Adapt to user behavior
- Respect privacy by design

With tools like Agent Framework, Azure AI Foundry, Windows Copilot Studio, and Foundry Local, developers can orchestrate local agents that blend LLMs, sensors, and user preferences, all without cloud dependency.

## Try It Yourself

Ready to get started? Clone the Edge AI for Beginners GitHub repo, run the notebooks, and deploy your first model to a Windows AI PC or IoT device. Whether you're building smart kiosks, offline assistants, or industrial monitors, this curriculum gives you the scaffolding to go from prototype to production.
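To make the ONNX Runtime highlight above concrete, here is a minimal inference sketch. The model path and input tensor name are assumptions; real models define their own input names and shapes.

```python
import numpy as np
import onnxruntime as ort

# Load an exported ONNX model; "model.onnx" and the "input" tensor name are hypothetical
session = ort.InferenceSession(
    "model.onnx",
    providers=["CPUExecutionProvider"],  # swap in DirectML/NPU providers where available
)

# Run inference on a dummy batch; adjust shape and dtype to match your model
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {"input": x})
print(outputs[0].shape)
```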
# Essential Microsoft Resources for MVPs & the Tech Community from the AI Tour

Unlock the power of Microsoft AI with redeliverable technical presentations, hands-on workshops, and open-source curriculum from the Microsoft AI Tour! Whether you're a Microsoft MVP, developer, or IT professional, these expertly crafted resources empower you to teach, train, and lead AI adoption in your community.

Explore top breakout sessions covering GitHub Copilot, Azure AI, generative AI, and security best practices, designed to simplify AI integration and accelerate digital transformation. Dive into interactive workshops that provide real-world applications of AI technologies. Take it a step further with Microsoft's open-source AI curriculum, offering beginner-friendly courses on AI, machine learning, data science, cybersecurity, and GitHub Copilot, perfect for upskilling teams and fostering innovation.

Don't just learn; lead. Access these resources, host impactful training sessions, and drive AI adoption in your organization. Start sharing today! Explore now: Microsoft AI Tour Resources.
# How to Master GitHub Copilot: Build, Prompt, Deploy Smarter

Mastering GitHub Copilot: Build, Prompt, Deploy Smarter is a free, hands-on workshop designed to help developers go beyond autocomplete and unlock the true power of AI-assisted coding. Instead of toy examples, this course walks you through real-world software engineering challenges: messy codebases, multi-language projects, cloud deployments, and legacy system upgrades. You'll learn practical skills like prompt engineering, advanced Copilot features, and AI pair programming techniques that make you faster, sharper, and more creative.

Whether you're a junior developer or a seasoned architect, mastering GitHub Copilot will help you:

- Reduce cognitive load and focus on system design
- Accelerate onboarding for new engineers
- Write cleaner, more consistent code
- Automate repetitive tasks to free up time for innovation

AI coding tools like GitHub Copilot are no longer optional; they're essential. This workshop gives you the skills to collaborate with Copilot effectively and stay competitive in the age of AI-powered development.
# One MCP Server, Two Transports: STDIO and HTTP

Let's think about a situation for using MCP servers. Most MCP servers run on a local machine, either directly or in a container. But in other integration scenarios, such as using Copilot Studio, enterprise-wide MCP servers, or environments with stricter security requirements, the MCP server should run remotely over HTTP. As long as the core logic lives in a shared layer, wrapping it in a console (STDIO) or web (HTTP) host is straightforward. However, maintaining two hosts can duplicate code. What if a single MCP server supported both STDIO and HTTP, controlled by a simple switch? That would remove a significant amount of management overhead. This post shows how to build a single MCP server that supports both transports, selected at runtime with a --http switch, using the .NET builder pattern.

## .NET Builder Pattern

A .NET console app starts the builder pattern using Host.CreateApplicationBuilder(args):

```csharp
var builder = Host.CreateApplicationBuilder(args);
```

The builder instance is of type HostApplicationBuilder, which implements the IHostApplicationBuilder interface. On the other hand, an ASP.NET web app starts the builder pattern using WebApplication.CreateBuilder(args):

```csharp
var builder = WebApplication.CreateBuilder(args);
```

This builder instance is of type WebApplicationBuilder, which also implements the IHostApplicationBuilder interface. Now, both builder instances have IHostApplicationBuilder in common, and this is the key to this post. If we decide the hosting mode before creating the builder instance, the server can run as either STDIO or HTTP.

## The --http Switch as an Argument

As you can see, both Host.CreateApplicationBuilder(args) and WebApplication.CreateBuilder(args) take the list of arguments passed from the command line. Therefore, before initializing the builder instance, we can identify the server type. Let's use a --http switch as the selector, passed when running the server:

```bash
dotnet run --project MyMcpServer -- --http
```

Then, before creating the builder instance, check whether the switch is present. The helper below looks at the environment variables first, then checks the arguments passed:

```csharp
public static bool UseStreamableHttp(IDictionary env, string[] args)
{
    // Environment variable takes effect when no arguments are supplied
    var useHttp = env.Contains("UseHttp") &&
                  bool.TryParse(env["UseHttp"]?.ToString()?.ToLowerInvariant(), out var result) &&
                  result;
    if (args.Length == 0)
    {
        return useHttp;
    }

    // Otherwise, the --http switch decides the hosting mode
    useHttp = args.Contains("--http", StringComparer.InvariantCultureIgnoreCase);
    return useHttp;
}
```

Here's the usage:

```csharp
var useStreamableHttp = UseStreamableHttp(Environment.GetEnvironmentVariables(), args);
```

We've identified whether to use HTTP or not. Therefore, the builder instance is built in this way:

```csharp
IHostApplicationBuilder builder = useStreamableHttp
    ? WebApplication.CreateBuilder(args)
    : Host.CreateApplicationBuilder(args);
```

With this builder instance, we can add more dependencies specific to the web app or console app, depending on the scenario.

## The Transport Type

Let's add the MCP server to the builder instance:

```csharp
var mcpServerBuilder = builder.Services.AddMcpServer()
                              .WithPromptsFromAssembly()
                              .WithResourcesFromAssembly()
                              .WithToolsFromAssembly();
```

We haven't told mcpServerBuilder which transport to use yet. Use useStreamableHttp to select the transport:

```csharp
if (useStreamableHttp)
{
    mcpServerBuilder.WithHttpTransport(o => o.Stateless = true);
}
else
{
    mcpServerBuilder.WithStdioServerTransport();
}
```

## Type Casting to Run the Server

While configuring an ASP.NET web app, middlewares are added. The HTTP host also needs middleware, and the builder must be cast.
After the builder instance is built, the webApp instance adds middleware, including the endpoint mapping:

```csharp
IHost app;
if (useStreamableHttp)
{
    var webApp = (builder as WebApplicationBuilder)!.Build();
    webApp.UseHttpsRedirection();
    webApp.MapMcp("/mcp");
    app = webApp;
}
else
{
    var consoleApp = (builder as HostApplicationBuilder)!.Build();
    app = consoleApp;
}
```

Note that WebApplication implements IHost, so you can assign it to an IHost variable. The console host built from HostApplicationBuilder is already an IHost. Use this app instance to run the MCP server:

```csharp
await app.RunAsync();
```

That's it! Now you can run the MCP server with the STDIO transport or the HTTP transport by providing a single switch, --http.

## Sample apps

Sample apps are available for you to check out. Visit the MCP Samples in .NET repository, and you'll find MCP server apps. All server apps in the repo support both STDIO and HTTP via the switch.

## More resources

If you'd like to learn more about MCP in .NET, here are some additional resources worth exploring:

- Let's Learn MCP
- MCP Workshop in .NET
- MCP Samples in .NET
- MCP Samples
- MCP for Beginners
# MCP Bootcamp: APAC, LATAM and Brazil

The Model Context Protocol (MCP) is transforming how AI systems interact with real-world applications. From intelligent assistants to real-time streaming, MCP is already being adopted by leading companies, and now is your chance to get ahead. Join us for a four-part technical series designed to give you practical, production-ready skills in MCP development, integration, and deployment. Whether you're a developer, AI engineer, or cloud architect, this series will equip you with the tools to build and scale MCP-based solutions.

## 📅 English edition - 6PM IST (India Standard Time)

✅ Register at MCP Bootcamp APAC

| Session Title | Description | Date & Time (IST) |
|---|---|---|
| Creating Your First MCP Server | Learn the fundamental concepts of the protocol and test your implementation using official tools. | August 28, 6:00 PM |
| MCP Integration with LLMs | Set up an intelligent MCP client that uses an LLM to interpret natural commands, and integrate everything with VS Code and GitHub Copilot. | September 2, 6:00 PM |
| Real-Time with SSE and HTTP Streaming | Add real-time communication to your MCP server using Server-Sent Events and streamable HTTP. | September 4, 6:00 PM |
| Deploy MCP on Azure | Add real-time communication with Server-Sent Events to your MCP server and deploy it professionally on Azure Container Apps. | September 9, 6:00 PM |

## 📅 Spanish edition - 9AM CST (Central Standard Time, Mexico City)

✅ Check the time in your location: 11am ET, 8am PT, 9am CST and 5pm CET - Register at MCP Bootcamp LATAM

| Session Title | Description | Date & Time (CST) |
|---|---|---|
| Creating Your First MCP Server | Build a functional MCP server in Python from scratch. Learn the fundamental concepts of the protocol and test your implementation using official tools. | August 18, 9:00 AM |
| MCP Integration with LLMs | Set up an intelligent MCP client that uses an LLM to interpret natural-language commands, and integrate it with VS Code and GitHub Copilot. | August 20, 9:00 AM |
| Real-Time MCP and Deployment on Azure | Add real-time communication with Server-Sent Events to your MCP server and deploy it professionally on Azure Container Apps. | August 25, 9:00 AM |
| Real-Time Communication with SSE and HTTP Streaming | Learn how to add real-time communication to your MCP server using Server-Sent Events (SSE) and HTTP streaming. | September 1, 9:00 AM |

## 📅 Portuguese edition - 12PM BRT (Brasília Time)

✅ Register at MCP Bootcamp | Brasil

| Session Title | Description | Date & Time (BRT) |
|---|---|---|
| Creating Your First MCP Server | Build a functional MCP server in Python from scratch. Learn the fundamental concepts of the protocol and test your implementation using official tools. | August 19, 12:00 PM |
| MCP Integration with LLMs | Set up an intelligent MCP client that uses an LLM to interpret natural commands, and integrate everything with VS Code and GitHub Copilot. | August 21, 12:00 PM |
| Deploy on Azure | Add real-time communication with Server-Sent Events to your MCP server and deploy it professionally on Azure Container Apps. | August 26, 12:00 PM |
| Real-Time Communication with SSE and HTTP Streaming | Learn how to add real-time communication to your MCP server using Server-Sent Events (SSE) and HTTP streaming. | August 28, 12:00 PM |