Microsoft Mission Critical Blog

Getting Started with GitHub Copilot SDK

anishekkamal
Apr 09, 2026

Transform your applications with intelligent, autonomous AI capabilities.

GitHub Copilot has been a staple in developer workflows for a while — it suggests code, completes functions, and generally keeps you from looking up that one syntax for the hundredth time. But what if you could take that same intelligence and embed it directly into your own applications? That's exactly what the GitHub Copilot SDK lets you do.

Launched in technical preview in January 2026 and entering public preview on April 2nd, 2026, the SDK gives you programmatic access to Copilot's agentic engine. It's the same runtime that powers the Copilot CLI — just exposed as a library you can import into your own code, in your language of choice.

What Is the GitHub Copilot SDK?

The SDK is a multi-language library — Python, TypeScript, Go, .NET, and Java — that lets your application talk directly to Copilot's agent runtime. You don't have to build your own orchestration layer, manage model contexts, or figure out tool invocation protocols from scratch. All of that is handled for you.

Three core concepts are worth understanding upfront:

  • CopilotClient — your main entry point. It manages the connection to the Copilot CLI running in server mode.
  • Sessions — hold a persistent conversational context, meaning the agent remembers what's been said across multiple turns and can handle genuinely stateful workflows.
  • Tools — regular Python functions you register with the session. The agent calls them autonomously when it needs to interact with the outside world: query a database, hit an API, read a file.

For Python, getting started is a single command:

pip install github-copilot-sdk

You'll also need the Copilot CLI (https://docs.github.com/en/copilot/how-tos/set-up/install-copilot-cli) installed and accessible in your PATH, plus Python 3.11 or higher.

For full setup instructions, see: copilot-sdk/python
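If you want a quick preflight before writing any agent code, a few lines of plain Python can confirm both prerequisites are in place. This is an optional convenience sketch, and it assumes the CLI binary on your PATH is named `copilot`, which may differ depending on how you installed it:

```python
import shutil
import sys

def check_prerequisites(cli_name: str = "copilot") -> list[str]:
    """Return a list of missing prerequisites; an empty list means ready."""
    problems = []
    if sys.version_info < (3, 11):
        problems.append("Python 3.11 or higher is required")
    if shutil.which(cli_name) is None:
        problems.append(f"'{cli_name}' was not found on PATH")
    return problems

if __name__ == "__main__":
    for problem in check_prerequisites():
        print("Missing:", problem)
```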

Sending Your First Message

import asyncio

from copilot import CopilotClient
from copilot.session import PermissionHandler

async def main():
    async with CopilotClient() as client:
        async with await client.create_session(
            on_permission_request=PermissionHandler.approve_all,
            model="gpt-5",
        ) as session:
            done = asyncio.Event()

            def on_event(event):
                if event.type.value == "assistant.message":
                    print(event.data.content)
                elif event.type.value == "session.idle":
                    done.set()

            session.on(on_event)

            await session.send("Explain the difference between a list and a tuple in Python.")
            await done.wait()

asyncio.run(main())

A couple of things to notice. The `async with` pattern handles all setup and teardown — no manual cleanup required. The `on_permission_request` parameter is required for every session; it's a handler the SDK calls before the agent executes any tool, allowing you to approve or deny the action. `PermissionHandler.approve_all` is the simplest option and perfect for getting started, but in production you'll want something more selective. More on that below.

Giving Your Agent Real Capabilities

Text in, text out is fine. But the real value of the SDK is that you can give the agent *tools* — functions it can call to interact with real systems. The `@define_tool` decorator makes this clean using Pydantic for parameter validation:

import asyncio

from pydantic import BaseModel, Field

from copilot import CopilotClient, define_tool
from copilot.session import PermissionHandler

class GetPriceParams(BaseModel):
    ticker: str = Field(description="Stock ticker symbol, e.g. MSFT")

@define_tool(description="Fetch the current stock price for a given ticker")
async def get_stock_price(params: GetPriceParams) -> str:
    # Replace with a real API call
    return f"The current price of {params.ticker} is $150.00"

async def main():
    async with CopilotClient() as client:
        async with await client.create_session(
            on_permission_request=PermissionHandler.approve_all,
            model="gpt-5",
            tools=[get_stock_price],
        ) as session:
            done = asyncio.Event()

            def on_event(event):
                if event.type.value == "assistant.message":
                    print(event.data.content)
                elif event.type.value == "session.idle":
                    done.set()

            session.on(on_event)

            await session.send("What's the current price of Microsoft stock?")
            await done.wait()

asyncio.run(main())

When the prompt arrives, the agent works out that it should call `get_stock_price` with `ticker="MSFT"`, runs your function, and folds the result into its response. You don't wire up the function-calling logic yourself — the SDK handles dispatch, parameter parsing, and return value handling. Your job is just writing the function.
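To build intuition for what the SDK is doing on your behalf, here's a toy version of that dispatch loop, purely illustrative and not the SDK's actual internals: a registry maps tool names to functions, the model's chosen arguments arrive as JSON, and the dispatcher parses and invokes. The names `TOOL_REGISTRY` and `dispatch` are inventions for this sketch:

```python
import asyncio
import json

# Toy illustration of tool dispatch (not the SDK's internals).
# The agent runtime picks a tool name and JSON arguments; the
# dispatcher looks up the registered function and invokes it.

async def get_stock_price(ticker: str) -> str:
    return f"The current price of {ticker} is $150.00"

TOOL_REGISTRY = {"get_stock_price": get_stock_price}

async def dispatch(tool_name: str, raw_args: str) -> str:
    fn = TOOL_REGISTRY[tool_name]
    args = json.loads(raw_args)  # parse the model's chosen arguments
    return await fn(**args)

result = asyncio.run(dispatch("get_stock_price", '{"ticker": "MSFT"}'))
print(result)  # The current price of MSFT is $150.00
```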

Streaming Responses in Real Time

If you're building anything interactive, waiting for a complete response before displaying anything feels slow. Setting `streaming=True` and listening for `assistant.message_delta` events fixes that immediately:

async with await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-5",
    streaming=True,
) as session:
    done = asyncio.Event()

    def on_event(event):
        match event.type.value:
            case "assistant.message_delta":
                print(event.data.delta_content or "", end="", flush=True)
            case "session.idle":
                done.set()

    session.on(on_event)

    await session.send("Write a Python function that validates an email address.")
    await done.wait()

Each chunk arrives as a `delta_content` string. Print it directly for a terminal UI, or accumulate chunks if you need the full response as a single string.
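If you do need the complete response as one string, a tiny accumulator is enough. The sketch below is plain Python; only the `delta_content` field name comes from the event shape shown above:

```python
class DeltaAccumulator:
    """Collects streamed delta chunks and joins them on demand."""

    def __init__(self):
        self._chunks = []

    def add(self, delta_content):
        # Skip None/empty deltas, mirroring the `or ""` guard above
        if delta_content:
            self._chunks.append(delta_content)

    @property
    def text(self) -> str:
        return "".join(self._chunks)

acc = DeltaAccumulator()
for chunk in ["def is_valid_email(", "address):", None, " ..."]:
    acc.add(chunk)
print(acc.text)  # def is_valid_email(address): ...
```

Inside `on_event`, you'd call `acc.add(event.data.delta_content)` in the `assistant.message_delta` case and read `acc.text` once `session.idle` fires.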

A Few Things Worth Knowing Before You Build

Billing: Every prompt counts against your GitHub Copilot subscription's premium request quota. If you're building automated workflows that fire off many requests — think CI pipelines or scheduled jobs — monitor usage. The SDK also supports BYOK (Bring Your Own Key), so you can plug in your own API keys from OpenAI, Azure AI Foundry, or Anthropic, which is a good option if you already have model deployments or want to separate usage billing.

Stability: The SDK is in public preview. It follows semantic versioning, so breaking changes come with a major version bump, but check the release notes between upgrades.

Permissions: For anything beyond experiments, replace `PermissionHandler.approve_all` with a custom handler. The SDK lets you inspect each tool request by kind — `shell`, `write`, `read`, `url`, `custom-tool` — and return `approved` or `denied` per request. That's where your security posture lives.
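As a sketch of what "more selective" can look like, here's a handler policy expressed as a plain function. The `ToolRequest` class below is a hypothetical stand-in for the SDK's permission-request object; the kind values (`shell`, `write`, `read`, `url`, `custom-tool`) and the approved/denied outcomes come from the description above, but the actual SDK types and field names may differ:

```python
from dataclasses import dataclass

@dataclass
class ToolRequest:
    # Hypothetical stand-in for the SDK's permission-request object
    kind: str          # "shell", "write", "read", "url", or "custom-tool"
    description: str = ""

READ_ONLY_KINDS = {"read", "url"}

def selective_handler(request: ToolRequest) -> str:
    """Approve read-only actions; deny anything that can mutate state."""
    return "approved" if request.kind in READ_ONLY_KINDS else "denied"

print(selective_handler(ToolRequest(kind="read")))   # approved
print(selective_handler(ToolRequest(kind="shell")))  # denied
```

In a real session you'd pass a function like this as `on_permission_request` instead of `PermissionHandler.approve_all`, and log every decision for audit purposes.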

If You Want to Start — Start Here

One thing I've found is that the best way to help customers adopt a technology is to actually use it yourself first. The Copilot SDK is a good candidate for that approach.

On the internal side, there are a handful of workflows that translate really well to agents.

Customer health reviews, for example — instead of manually pulling data from multiple tools before a call, you could build an agent that gathers recent Azure consumption, Copilot seat usage, and open support tickets, then produces a plain-language summary. Account preparation used to mean 30 minutes of tab-switching; an agent with the right custom tools can reduce that to a prompt.

Incident prep is another one. When a customer hits an issue and needs a root cause summary fast, an agent that can read recent deployment logs, scan for known patterns, and draft a timeline is genuinely useful — both internally and as something you can walk through with the customer.

Building these tools yourself also gives you hands-on credibility when the architecture conversation comes up. You've already worked through the permission model, you've thought about BYOK, and you know where the rough edges are. That context matters more than any slide.

How to Help Customers Get Started

Most enterprise customers land in one of two places: they see Copilot as a developer IDE tool and haven't thought about embedding it in applications, or they've heard about agentic AI and don't know what a framework like this actually handles versus what they need to build themselves.

The clearest entry point is to start with a specific, bounded use case — not "let's build an AI agent" but "your support team answers the same 40 questions every week; let's route those through an agent that queries your internal knowledge base." That scope is small enough to deliver in a few days, concrete enough to measure, and immediately demonstrates how custom tools connect to real systems.

A few things worth surfacing early in the architecture conversation:

  • BYOK vs. Copilot subscription: Customers with existing Azure AI Foundry or OpenAI contracts can connect their own models. A quick win for enterprises who already have model deployments and don't want to provision Copilot seats for non-developer workloads.
  • Permission governance: The `on_permission_request` handler is where the security conversation lives. For customers in regulated industries, showing that every tool action can be audited and restricted at the code level — not just policy — tends to land well.
  • MCP integration: Customers with existing tool ecosystems (Jira, ServiceNow, internal APIs) can expose those as MCP servers rather than rewriting everything as custom tools. Worth raising early to avoid unnecessary rework.

Customer Use Cases

  • DevOps and platform engineering — Agents that validate infrastructure-as-code before deployment, flag security misconfigurations, or triage incidents by reading runbooks and change logs. These are high-value because they touch production workflows and have clear, measurable ROI.
  • Internal knowledge and support — An agent over internal documentation — wikis, policies, architecture decisions — that answers employee questions without requiring someone to search three separate systems. Especially valuable for large organizations where institutional knowledge is fragmented.
  • Developer productivity — Automating pull request summaries, generating release notes from commit history, or flagging potential issues in code changes. These compound fast: save 10 minutes per PR across a 500-developer org and you notice it quickly.
  • Reporting and operations — Generating weekly status reports, customer-facing summaries, or executive briefings by pulling from live data sources. The agent handles gathering and formatting; the human handles the judgment call.

The common thread is that the best use cases aren't about replacing people. They're about removing the repetitive connective tissue between tasks — so that your team, and your customers' teams, spend more time on the work that actually requires their expertise.

Where to Go from Here

The official SDK repo (https://github.com/github/copilot-sdk) has a Python cookbook with practical recipes, active documentation, and an Issues page that the team monitors closely. Session hooks, MCP server integration, and the system message API are all worth exploring once you're comfortable with the basics.

The hardest part is usually just the first 20 lines. Once the client is running and you've got a session sending messages, the rest clicks pretty quickly — and that first working agent is a compelling starting point for the customer conversation too.

The GitHub Copilot SDK is available in public preview at https://github.com/github/copilot-sdk. Python 3.11+ required.
