Microsoft Developer Community Blog

GitHub Copilot SDK and Hybrid AI in Practice: Automating README to PPT Transformation

kinfey (Microsoft)
Jan 28, 2026

Introduction

In today's rapidly evolving AI landscape, developers often face a critical choice: should we use powerful cloud-based Large Language Models (LLMs) that require internet connectivity, or lightweight Small Language Models (SLMs) that run locally but have limited capabilities? The answer isn't either-or: hybrid models combine the strengths of both to create AI solutions that are secure, efficient, and powerful.

This article explores hybrid model architectures through the lens of GenGitHubRepoPPT, demonstrating how to elegantly combine Microsoft Foundry Local, GitHub Copilot SDK, and other technologies to automatically generate professional PowerPoint presentations from GitHub README files.

1. Hybrid Model Scenarios and Value

1.1 What Are Hybrid Models?

Hybrid AI Models strategically combine locally-running Small Language Models (SLMs) with cloud-based Large Language Models (LLMs) within the same application, selecting the most appropriate model for each task based on its unique characteristics.

Core Principles:

  • Local Processing for Sensitive Data: Privacy-critical content analysis happens on-device
  • Cloud for Value Creation: Complex reasoning and creative generation leverage cloud power
  • Balancing Cost and Performance: High-frequency, simple tasks run locally to minimize API costs
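
To make this routing principle concrete, here is a minimal sketch that sends privacy-sensitive or simple tasks to a local OpenAI-compatible endpoint (such as Foundry Local, covered in section 2.2) and everything else to a cloud model. The endpoint URL, the model names, and the is_sensitive / is_complex flags are illustrative assumptions, not code from the GenGitHubRepoPPT project.

from openai import OpenAI

# Local OpenAI-compatible server (e.g. Foundry Local) and a cloud provider.
# Both the URL and the model names below are assumptions for this sketch.
local_client = OpenAI(base_url="http://localhost:5273/v1", api_key="not-needed")
cloud_client = OpenAI()  # reads OPENAI_API_KEY from the environment

def run_task(prompt: str, *, is_sensitive: bool, is_complex: bool) -> str:
    """Route a task to the local SLM or the cloud LLM based on its characteristics."""
    if is_sensitive or not is_complex:
        # Privacy-critical or high-frequency, simple work stays on-device.
        client, model = local_client, "qwen2.5-7b-instruct"
    else:
        # Complex reasoning and creative generation go to the cloud.
        client, model = cloud_client, "gpt-4o"
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content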

1.2 Typical Hybrid Model Use Cases

Use Case | Local SLM Role | Cloud LLM Role | Value Proposition
Intelligent Document Processing | Text extraction, structural analysis | Content refinement, format conversion | Privacy protection + Professional output
Code Development Assistant | Syntax checking, code completion | Complex refactoring, architecture advice | Fast response + Deep insights
Customer Service Systems | Intent recognition, FAQ handling | Complex issue resolution | Reduced latency + Enhanced quality
Content Creation Platforms | Keyword extraction, outline generation | Article writing, multilingual translation | Cost control + Creative assurance

1.3 Why Choose Hybrid Models?

Three Core Advantages:

  1. Privacy and Security
    • Sensitive data never leaves local devices
    • Compliant with GDPR, HIPAA, and other regulations
    • Ideal for internal corporate documents and personal information
  2. Cost Optimization
    • Reduces cloud API call frequency
    • Local models have zero usage fees
    • Predictable operational costs
  3. Performance and Reliability
    • Local processing eliminates network latency
    • Partial functionality in offline environments
    • Cloud models ensure high-quality output

2. Core Technology Analysis

2.1 Large Language Models (LLMs): Cloud Intelligence Representatives

What are LLMs?

Large Language Models are deep learning-based natural language processing models, typically with billions to trillions of parameters. Through training on massive text datasets, they've acquired powerful language understanding and generation capabilities.

Representative Models:

  • Claude Sonnet 4.5: Anthropic's flagship model, excelling at long-context processing and complex reasoning
  • GPT-5.2 Series: OpenAI's general-purpose language models
  • Gemini: Google's multimodal large models

LLM Advantages:

  • ✅ Exceptional text generation quality
  • ✅ Powerful contextual understanding
  • ✅ Support for complex reasoning tasks
  • ✅ Continuous model updates and optimization

Typical Applications:

  • Professional document writing (technical reports, business plans)
  • Code generation and refactoring
  • Multilingual translation
  • Creative content creation

2.2 Small Language Models (SLMs) and Microsoft Foundry Local

2.2.1 SLM Characteristics

Small Language Models typically have 1B-7B parameters, designed specifically for resource-constrained environments.

Mainstream SLM Model Families:

  • Microsoft Phi family: Inference-optimized, efficient models
  • Alibaba Qwen family: Excellent Chinese language capabilities
  • Mistral Series: Outstanding performance with small parameter counts

SLM Advantages:

  • ⚡ Low-latency response (millisecond-level)
  • 💰 Zero API costs
  • 🔒 Fully local, data stays on-device
  • 📱 Suitable for edge device deployment

2.2.2 Microsoft Foundry Local: The Foundation of Local AI

Foundry Local is Microsoft's local AI runtime tool, enabling developers to easily run SLMs on Windows or macOS devices.

Core Features:

  1. OpenAI-Compatible API

       # Using Foundry Local is like using the OpenAI API
       from openai import OpenAI
       from foundry_local import FoundryLocalManager

       manager = FoundryLocalManager("qwen2.5-7b-instruct")
       client = OpenAI(
           base_url=manager.endpoint,
           api_key=manager.api_key
       )
  2. Hardware Acceleration Support
    • CPU: General computing support
    • GPU: NVIDIA, AMD, Intel graphics acceleration
    • NPU: Qualcomm, Intel AI-specific chips
    • Apple Silicon: Neural Engine optimization
  3. Based on ONNX Runtime
    • Cross-platform compatibility
    • Highly optimized inference performance
    • Supports model quantization (INT4, INT8)
  4. Convenient Model Management
       # View available models
       foundry model list
       
       # Run a model
       foundry model run qwen2.5-7b-instruct-generic-cpu:4
       
       # Check running status
       foundry service ps

    Foundry Local Application Value:

    • 🎓 Educational Scenarios: Students can learn AI development without cloud subscriptions
    • 🏢 Enterprise Environments: Process sensitive data while maintaining compliance
    • 🧪 R&D Testing: Rapid prototyping without API cost concerns
    • ✈️ Offline Environments: Works on planes, subways, and other no-network scenarios
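
Putting these pieces together, the following end-to-end sketch runs a chat completion against the local Foundry Local endpoint, extending the snippet from item 1 above. The model alias and the get_model_info lookup follow the foundry-local-sdk documentation, but treat the exact names as assumptions and substitute whichever model you have pulled locally.

from openai import OpenAI
from foundry_local import FoundryLocalManager

alias = "qwen2.5-7b-instruct"         # model alias; downloaded on first use
manager = FoundryLocalManager(alias)  # starts the local service if it is not running

client = OpenAI(base_url=manager.endpoint, api_key=manager.api_key)
response = client.chat.completions.create(
    model=manager.get_model_info(alias).id,  # resolve the alias to the local model id
    messages=[{"role": "user", "content": "Explain ONNX Runtime in two sentences."}],
)
print(response.choices[0].message.content)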

2.3 GitHub Copilot SDK: The Express Lane from Agent to Business Value

2.3.1 What is GitHub Copilot SDK?

GitHub Copilot SDK, released as a technical preview on January 22, 2026, is a game-changer for AI Agent development. Unlike other AI SDKs, Copilot SDK doesn't just provide API calling interfaces—it delivers a complete, production-grade Agent execution engine.

Why is it revolutionary?

Traditional AI application development requires you to build:

  • ❌ Context management systems (multi-turn conversation state)
  • ❌ Tool orchestration logic (deciding when to call which tool)
  • ❌ Model routing mechanisms (switching between different LLMs)
  • ❌ MCP server integration
  • ❌ Permission and security boundaries
  • ❌ Error handling and retry mechanisms

Copilot SDK provides all of this out-of-the-box, letting you focus on business logic rather than underlying infrastructure.

2.3.2 Core Advantages: The Ultra-Short Path from Concept to Code

  1. Production-Grade Agent Engine: Battle-Tested Reliability

Copilot SDK uses the same Agent core as GitHub Copilot CLI, which means:

  • ✅ Validated in millions of real-world developer scenarios
  • ✅ Capable of handling complex multi-step task orchestration
  • ✅ Automatic task planning and execution
  • ✅ Built-in error recovery mechanisms

Real-World Example: In the GenGitHubRepoPPT project, we don't need to hand-write the "how to convert outline to PPT" logic—we simply tell Copilot SDK the goal, and it automatically:

  1. Analyzes outline structure
  2. Plans slide layouts
  3. Calls file creation tools
  4. Applies formatting logic
  5. Handles multilingual adaptation
# Traditional approach: requires hundreds of lines of code for logic
def create_ppt_traditional(outline):
    slides = parse_outline(outline)
    for slide in slides:
        layout = determine_layout(slide)
        content = format_content(slide)
        apply_styling(content, layout)
        # ... more manual logic
    return ppt_file

# Copilot SDK approach: focus on business intent
session = await client.create_session({
    "model": "claude-sonnet-4.5",
    "streaming": True,
    "skill_directories": [skills_dir]
})
session.send_and_wait({"prompt": prompt}, timeout=600)

 

  2. Custom Skills: Reusable Encapsulation of Business Knowledge

This is one of Copilot SDK's most powerful features. In traditional AI development, you need to provide complete prompts and context with every call. Skills allow you to:

Define once, reuse forever:

# .copilot_skills/ppt/SKILL.md

# PowerPoint Generation Expert Skill

## Expertise
You are an expert in business presentation design, skilled at transforming
technical content into easy-to-understand visual presentations.

## Workflow
1. **Structure Analysis**
   - Identify outline hierarchy (titles, subtitles, bullet points)
   - Determine topic and content density for each slide
   
2. **Layout Selection**
   - Title slide: Use large title + subtitle layout
   - Content slides: Choose single/dual column based on bullet count
   - Technical details: Use code block or table layouts
   
3. **Visual Optimization**
   - Apply professional color scheme (corporate blue + accent colors)
   - Ensure each slide has a visual focal point
   - Keep bullets to 5-7 items per page
   
4. **Multilingual Adaptation**
   - Choose appropriate fonts based on language (Chinese: Microsoft YaHei, English: Calibri)
   - Adapt text direction and layout conventions

## Output Requirements
Generate .pptx files meeting these standards:
- 16:9 widescreen ratio
- Consistent visual style
- Editable content (not images)
- File size < 5MB

  3. Business Code Generation Capability

This is the core value of this project. Unlike generic LLM APIs, Copilot SDK with Skills can generate truly executable business code.

Comparison Example:

Aspect | Generic LLM API | Copilot SDK + Skills
Task Description | Requires detailed prompt engineering | Concise business intent suffices
Output Quality | May need multiple adjustments | Professional-grade on first try
Code Execution | Usually example code | Directly generates runnable programs
Error Handling | Manual implementation required | Agent automatically handles and retries
Multi-step Tasks | Manual orchestration needed | Automatic planning and execution

Comparison of manual coding workload:

Task | Manual Coding | Copilot SDK
Processing logic code | ~500 lines | ~10 lines configuration
Layout templates | ~200 lines | Declared in Skill
Style definitions | ~150 lines | Declared in Skill
Error handling | ~100 lines | Automatically handled
Total | ~950 lines | ~10 lines + Skill file

  4. Tool Calling & MCP Integration: Connecting to the Real World

Copilot SDK doesn't just generate code—it can directly execute operations:

  • 🗃️ File System Operations: Create, read, modify files
  • 🌐 Network Requests: Call external APIs
  • 📊 Data Processing: Use pandas, numpy, and other libraries
  • 🔧 Custom Tools: Integrate your business logic

3. GenGitHubRepoPPT Case Study

3.1 Project Overview

GenGitHubRepoPPT is an innovative hybrid AI solution that combines local AI models with cloud-based AI agents to automatically generate professional PowerPoint presentations from GitHub repository README files in under 5 minutes.

Technical Architecture: a local SLM (Qwen-2.5-7B on Foundry Local) analyzes the README into a structured outline, then GitHub Copilot SDK with Claude Sonnet 4.5 turns that outline into the finished PowerPoint.

3.2 Why Adopt a Hybrid Model?

Stage 1: Local SLM Processes Sensitive Data

Task: Analyze GitHub README, extract key information, generate structured outline

Reasons for choosing Qwen-2.5-7B + Foundry Local:

  1. Privacy Protection
    • README may contain internal project information
    • Local processing ensures data doesn't leave the device
    • Complies with data compliance requirements
  2. Cost Effectiveness
    • Each analysis processes thousands of tokens
    • Cloud API costs are significant in high-frequency scenarios
    • Local models have zero additional fees
  3. Performance
    • Qwen-2.5-7B excels at text analysis tasks
    • Outstanding Chinese support
    • Acceptable CPU inference latency (typically 2-3 seconds)
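
A minimal sketch of what this local analysis stage could look like, reusing the Foundry Local client pattern from section 2.2.2; the prompt wording and the outline.md file name are illustrative assumptions rather than the project's actual implementation.

from pathlib import Path
from openai import OpenAI
from foundry_local import FoundryLocalManager

alias = "qwen2.5-7b-instruct"
manager = FoundryLocalManager(alias)
client = OpenAI(base_url=manager.endpoint, api_key=manager.api_key)

readme = Path("README.md").read_text(encoding="utf-8")

# Stage 1: the README never leaves the device; only the resulting outline
# is forwarded to Stage 2.
response = client.chat.completions.create(
    model=manager.get_model_info(alias).id,
    messages=[
        {"role": "system",
         "content": "You extract a structured slide outline from a project README."},
        {"role": "user",
         "content": "Produce a markdown outline with one heading per slide "
                    "and 3-5 bullet points under each:\n\n" + readme},
    ],
)
Path("outline.md").write_text(response.choices[0].message.content, encoding="utf-8")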

Stage 2: Cloud LLM + Copilot SDK Creates Business Value

Task: Create well-formatted PowerPoint files based on outline

Reasons for choosing Claude Sonnet 4.5 + Copilot SDK:

  1. Automated Business Code Generation
    • Traditional approach pain points:
      • Need to hand-write 500+ lines of code for PPT layout logic
      • Require deep knowledge of python-pptx library APIs
      • Style and formatting code is error-prone
      • Multilingual support requires additional conditional logic
    • Copilot SDK solution:
      • Declare business rules and best practices through Skills
      • Agent automatically generates and executes required code
      • Zero-code implementation of complex layout logic
      • Development time reduced from 2-3 days to 2-3 hours
  2. Ultra-Short Path from Intent to Execution
     Comparison: different ways to implement "Generate professional PPT" (see the sketch after this list)

  3. Production-Grade Reliability and Quality Assurance

    • Battle-tested Agent engine:
      • Uses the same core as GitHub Copilot CLI
      • Validated in millions of real-world scenarios
      • Automatically handles edge cases and errors
    • Consistent output quality:
      • Professional standards ensured through Skills
      • Automatic validation of generated files
      • Built-in retry and error recovery mechanisms

  4. Rapid Iteration and Optimization Capability
     Scenario: a client requests a PPT style adjustment
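
As a rough sketch of that intent-to-execution path (referenced in item 2 above), Stage 2 hands the Stage 1 outline to a Copilot SDK session that carries the PowerPoint Skill. The session options mirror the snippet in section 2.3.2; the client setup, file names, and prompt wording are assumptions for illustration, not the project's definitive code.

from pathlib import Path

async def outline_to_ppt(client, outline_file: str = "outline.md") -> None:
    # Directory containing .copilot_skills/ppt/SKILL.md (the Skill shown in 2.3.2)
    skills_dir = str(Path(".copilot_skills").resolve())

    # `client` is assumed to be an already-configured Copilot SDK client;
    # the option names below mirror the earlier snippet in this post.
    session = await client.create_session({
        "model": "claude-sonnet-4.5",
        "streaming": True,
        "skill_directories": [skills_dir],
    })

    outline = Path(outline_file).read_text(encoding="utf-8")
    # The business intent is stated in natural language; the Agent plans the
    # layout, generates and executes the required code, and writes the file.
    session.send_and_wait(
        {"prompt": "Create a professional 16:9 .pptx from this outline:\n\n" + outline},
        timeout=600,
    )

In the item 4 scenario, a style adjustment would typically be made by editing the rules in SKILL.md (for example, the color scheme or font guidance) and re-running the same call, rather than by rewriting any business code.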

GitHub repository: https://github.com/kinfey/GenGitHubRepoPPT

4. Summary

4.1 Core Value of Hybrid Models + Copilot SDK

The GenGitHubRepoPPT project demonstrates how combining hybrid models with Copilot SDK creates a new paradigm for AI application development.

Privacy and Cost Balance

The hybrid approach allows sensitive README analysis to happen locally using Qwen-2.5-7B, ensuring data never leaves the device while incurring zero API costs. Meanwhile, the value-creating work—generating professional PowerPoint presentations—leverages Claude Sonnet 4.5 through Copilot SDK, delivering quality that justifies the per-use cost.

From Code to Intent

Traditional AI development required writing hundreds of lines of code to handle PPT generation logic, layout selection, style application, and error handling. With Copilot SDK and Skills, developers describe what they want in natural language, and the Agent automatically generates and executes the necessary code. What once took 3-5 days now takes 3-4 hours, with 95% less code to maintain.

Automated Business Code Generation

Copilot SDK doesn't just provide code examples—it generates complete, executable business logic. When you request a multilingual PPT, the Agent understands the requirement, selects appropriate fonts, generates the implementation code, executes it with error handling, validates the output, and returns a ready-to-use file. Developers focus on business intent rather than implementation details.

4.2 Technology Trends

The Shift to Intent-Driven Development

We're witnessing a fundamental change in how developers work. Rather than mastering every programming language detail and framework API, developers are increasingly defining what they want through declarative Skills. Copilot SDK represents this future: you describe capabilities in natural language, and AI Agents handle the code generation and execution automatically.

Edge AI and Cloud AI Integration

The evolution from pure cloud LLMs (powerful but privacy-concerning) to pure local SLMs (private but limited) has led to today's hybrid architectures. GenGitHubRepoPPT exemplifies this trend: local models handle data analysis and structuring, while cloud models tackle complex reasoning and professional output generation. This combination delivers fast, secure, and professional results.

Democratization of Agent Development

Copilot SDK dramatically lowers the barrier to building AI applications. Senior engineers see 10-20x productivity gains. Mid-level engineers can now build sophisticated agents that were previously beyond their reach. Even junior engineers and business experts can participate by writing Skills that capture domain knowledge without deep technical expertise.

The future isn't about whether we can build AI applications—it's about how quickly we can turn ideas into reality.

