# Six Coding Agents, One Production System: A Field Guide to AgenticOps with AKS-Lab-GitHubCopilot
## The shift: from "AI helps me code" to "AI authors my repo"

For two years we've been talking about GitHub Copilot as an inline pair programmer — a clever autocomplete that lives in your editor. That framing is officially out of date. The new reality is agentic delivery: a team of named, scoped AI agents owns slices of your repository, each with its own tools, skills, and refusal rules. They produce pull requests. They run tests. They roll deployments. And when one finishes its turn, it hands off to the next.

Across the microsoft/AKS-Lab-GitHubCopilot workshop's five labs you ship ZavaShop — a multi-agent retail supply-chain control plane running on AKS + Azure Container Apps — and along the way you internalize an operating model you can carry to any project. Everything in the repo (specs, agents, MCP servers, tests, Bicep, Helm, GitHub Actions) is authored by six GitHub Copilot Custom Coding Agents working from your IDE, plus the remote GitHub Copilot Coding Agent that closes the PR loop on GitHub. This is what AgenticOps looks like in practice.

## Two layers of agents — don't confuse them

The first cognitive hurdle in this lab is keeping two very different agent populations straight:

| Layer | What it is | Where it lives | Examples |
| --- | --- | --- | --- |
| Application agents | The product you ship — the runtime ZavaShop fleet that solves a business problem | Production (AKS + ACA) | InventoryAgent, SupplierAgent, LogisticsAgent, PricingAgent, OrchestratorAgent |
| Coding agents | The dev-time team that writes the application agents | Your IDE + GitHub | requirements-analyst, mcp-builder, agent-builder, orchestrator-architect, test-author, deploy-engineer |

Both are built with the Microsoft Agent Framework (MAF). Both use the GitHub Copilot SDK as their model provider. But they exist at different layers of the development lifecycle, and the entire lab is structured around that distinction. If you only remember one thing from this post: the coding agents are how you build the application agents.
That is the whole AgenticOps loop, compressed into one sentence.

## GitHub Copilot Coding Agent vs. Custom Coding Agents

There are two flavors of "coding agent" in the GitHub Copilot ecosystem, and this lab uses both.

### 1. The remote GitHub Copilot Coding Agent

This is the GitHub-side, asynchronous, PR-driven agent. You assign it an issue, it spins up a sandboxed environment, writes the code, runs the tests, and opens a PR for human review. You don't watch it work — you review what it produces. In ZavaShop, Lab 04 (Testing) explicitly uses this agent: you take a failing eval scenario, file it as an issue, assign it to Copilot, and the agent comes back with a PR. Your job is the human bar, not the keystrokes.

An important governance choice from AGENTS.md: the remote Coding Agent is allowed to open PRs against src/ and tests/ only — never against infra/ without human review. That single rule is a textbook example of agent-aware policy.

### 2. The local Custom Coding Agents

These are scoped, in-IDE specialist agents you invoke by selecting <agent name> in Copilot Chat. They live as *.agent.md files inside .github/agents/ and are discovered by VS Code on reload. Each one owns exactly one slice of the repository. Six of them ship in this lab:

| Phase | Agent | Owns | Refusal rule |
| --- | --- | --- | --- |
| Requirements | requirements-analyst | specs/*.md | Refuses to write code |
| MCP tools | mcp-builder | src/mcp_servers/* | One server per turn |
| Specialist agents | agent-builder | src/agents/<specialist>/* | One specialist per turn |
| Orchestration | orchestrator-architect | src/agents/orchestrator/*, src/shared/*, docker-compose.yml | Owns wiring, not business logic |
| Tests | test-author | tests/** | Never edits src/ |
| Deploy | deploy-engineer | infra/**, .github/workflows/** | Won't touch application code |

The pattern that matters here isn't just "we made some custom agents." It's that every agent declares what it owns and what it refuses to do. That refusal envelope is what makes the system safe to delegate to. Without it, you'd just have a noisier autocomplete.
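The refusal envelope is easy to enforce mechanically. Here is a minimal sketch (a hypothetical helper, not code from the lab) of how an ownership table like the one above can gate which paths an agent may touch:

```python
from fnmatch import fnmatch

# Hypothetical ownership map mirroring the lab's agent table.
# The agent names come from the post; the helper itself is illustrative.
OWNERSHIP = {
    "test-author": ["tests/*"],
    "deploy-engineer": ["infra/*", ".github/workflows/*"],
}

def may_edit(agent: str, path: str) -> bool:
    """Return True only if the path falls inside the agent's owned slice."""
    # fnmatch's '*' crosses '/' boundaries, so "tests/*" covers the whole tree.
    return any(fnmatch(path, pattern) for pattern in OWNERSHIP.get(agent, []))
```

Anything outside the allowlist is refused by construction, which is exactly what makes delegation safe.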
Three workflow prompts in .github/prompts/ chain the agents together so you don't have to remember the sequence:

- /feature-from-issue — issue → spec → code → tests → PR → deploy
- /spec-to-code — drive an existing spec through code + tests
- /ship-it — quality gate → build → push → ACR/ACA/AKS rollout → smoke + evals

This is the closest thing I've seen to a programmable software development lifecycle.

## Where AgenticOps fits in

DevOps gave us repeatable infrastructure. MLOps gave us repeatable model lifecycles. AgenticOps is what you need when the thing you're operating is itself a fleet of autonomous agents — both at build time and at runtime. The lab makes the four pillars of AgenticOps concrete:

1. Specs as the contract. /requirements-analyst produces specs/<slug>.md files with goals, contracts, and eval scenarios. Nothing else in the repo is built until that spec exists. Specs are the source of truth that human reviewers actually read.
2. Skills as living documentation. .github/skills/<skill>/SKILL.md files hold shared, agent-agnostic knowledge — Python conventions, Kubernetes patterns, MAF idioms. Every coding agent declares which skills it must consult before writing code. This is how you stop drift: knowledge lives in one place and is pulled in on demand.
3. Evals as the quality gate. The repo runs a four-layer test pyramid plus five golden eval scenarios (S1–S5). uv run poe check runs locally and in GitHub Actions. Copilot-authored PRs must pass the same bar a human does — no exceptions.
4. Observability tied to agent identity. Every agent emits agent.name, agent.run_id, and agent.span_id through structlog. When something misbehaves in production, you can trace the line from "this evaluation failed" all the way back to "this version of this agent, on this run, called this tool with these arguments."

These four pillars aren't ZavaShop-specific. They're the contract for any AgenticOps system: scoped ownership, contracts as code, evals as gates, identity in every span.
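Pillar 4 is worth making concrete. The lab uses structlog; the dependency-free sketch below shows the shape of the events with the same agent.name / agent.run_id / agent.span_id fields (the factory function is an illustrative stand-in, not the lab's code):

```python
import json
import uuid

def make_agent_logger(agent_name: str):
    """Bind a stable identity to every line this agent emits."""
    run_id = str(uuid.uuid4())  # one id per agent run

    def log(event: str, **fields) -> str:
        record = {
            "agent.name": agent_name,
            "agent.run_id": run_id,
            "agent.span_id": str(uuid.uuid4()),  # one id per emitted span
            "event": event,
            **fields,
        }
        return json.dumps(record)

    return log

log = make_agent_logger("InventoryAgent")
line = log("tool_call", tool="check_stock", sku="ZV-1001")
```

Because every line carries the triple, "this evaluation failed" can be joined back to the exact agent, run, and tool call.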
## Walking through the workshop: which agent does what, when

The five labs are five chapters of one story — ZavaShop going from an empty Azure subscription to a live retail control plane. Each lab activates a different subset of coding agents.

### Lab 01 — Environment Setup (no coding agents yet)

You provision the platform: AKS cluster, ACA environment, Azure Container Registry, Key Vault, and the Workload Identity that every agent will wear. Then you install the six Custom Coding Agents into your IDE. Think of this as hiring the development team and giving them their badges.

### Lab 02 — Agent Creation (four agents in play)

This is where it clicks. You start by invoking requirements-analyst in Copilot Chat to produce the spec for each ZavaShop application agent. Then mcp-builder is invoked four times to scaffold the four MCP servers — one per domain (inventory DB, supplier API, shipping API, pricing API). Then agent-builder runs four more times to build the typed ChatAgent specialists. Finally orchestrator-architect wires them together with a MAF Workflow.

What's stunning about this lab is the handoff discipline. Every coding agent ends its turn with a line naming the next agent to invoke. You're not orchestrating the work — the agents are.

### Lab 03 — Multi-Agent Orchestration & Config (two agents)

The orchestrator stops being a one-shot LLM call and becomes a deterministic Workflow. Secrets move from .env to Key Vault. The whole fleet boots locally with Docker Compose. This is orchestrator-architect's star turn — wiring A2A endpoints, MCP tool registration, Key Vault hydration, OpenTelemetry. Specs come from requirements-analyst; the rest is orchestration.

### Lab 04 — Testing (both coding agent flavors)

/test-author writes the four-layer pyramid (unit, MCP contract, integration, eval). Then you switch gears: take a failing eval scenario, file it as a GitHub issue, and assign it to the remote GitHub Copilot Coding Agent.
The agent works asynchronously, opens a PR, and uv run poe check decides whether it passes. This is the lab where the local-vs-remote distinction stops being abstract and starts being operational.

### Lab 05 — Deployment & Run (deployment specialist)

/deploy-engineer writes the Helm chart for the AKS orchestrator and the Bicep modules for the ACA specialists. The /ship-it workflow prompt then runs the full pipeline: quality gate → ACR build → ACA deploy → AKS rollout → smoke tests → evals. GitHub Actions OIDC re-runs the same pipeline on every main push.

Notice the pattern across all five labs: at no point does a human write production code from scratch. Humans set goals, review specs, approve PRs, and run quality gates. The keystrokes belong to agents.

## How Coding Agents transform the DevOps pipeline

Take a step back from the lab and ask: what actually changes in your DevOps flow when you adopt this model?

The atomic unit of work shifts. In classic DevOps the unit is the commit. In AgenticOps the unit is the spec. A spec drives one or more agents; agents produce commits; commits trigger CI; CI gates promotion. The commit becomes a derived artifact, not the starting point.

Code review changes shape. You're no longer reviewing "did this human understand the codebase?" — you're reviewing "did this agent follow its refusal rules, consult its skills, and produce something that passes the evals?" Reviewers spend less time on style and more time on intent. The diff is often less interesting than the spec it came from.

Governance becomes structural, not procedural. Instead of writing a wiki page that says "don't touch infra without review," you encode that rule in AGENTS.md and refuse to let the agent's tool set include infra paths. Policy becomes part of the agent definition, not a checklist humans hopefully remember.

The CI pipeline expands. Beyond build/test/deploy, you now have an eval stage that asks "does the system still behave correctly on the golden scenarios?"
— and a Copilot-authored PR has to pass the same eval stage as a human-authored one. The pipeline is the great equalizer.

Onboarding compresses. A new engineer doesn't need to read 50 wiki pages to be productive. They read AGENTS.md, select the relevant agent, and it walks them through. Institutional knowledge lives in .agent.md and SKILL.md files instead of senior engineers' heads.

The net effect is a pipeline that's faster, more uniform, and easier to audit. Faster because agents parallelize what humans serialize. More uniform because every change goes through the same six-agent template. Easier to audit because every artifact has a named author and a refusal rule it had to respect.

## What to take away

The AKS-Lab-GitHubCopilot workshop teaches three things at once. The surface lesson is "how to build a multi-agent retail system on AKS." The middle lesson is "how to use GitHub Copilot Custom Agents and the remote Coding Agent." The deepest lesson — and the one I'd argue matters most — is how to design a development process where AI agents are first-class citizens with bounded responsibilities, not free-form copilots.

If you take the model and walk away from the lab, three patterns are worth keeping:

- Scope before capability. Don't give an agent every tool; give it the smallest surface that makes it useful.
- Specs are the API between humans and agents. Invest in requirements-analyst-style flows even if the rest of your stack isn't there yet.
- Evals are non-negotiable. The moment an agent can open a PR, you need a quality gate that doesn't care who the author is.

Clone the repo microsoft/AKS-Lab-GitHubCopilot, hit Developer: Reload Window, select agents in Copilot Chat, and watch six teammates show up. That's the future of the DevOps pipeline — and it's already shipping.

## Resources

- microsoft/AKS-Lab-GitHubCopilot — The repository this post is built on.
- Best practices for using Copilot to work on tasks — Governance patterns for delegating issues to Copilot.
- GitHub Copilot SDK (Python) — The provider used by every agent in this lab.

# Giving the Copilot SDK Agent a "hardware-level helmet" using Kata microVM on AKS
## A Moment That Made Me Pause

I was recently building an Agent service with the GitHub Copilot SDK. After getting it up and running, I went back through the execution logs and something jumped out at me: in a single conversation turn, the Agent had executed a shell command, read several files, and pulled down a third-party MCP server from npm via npx — all on its own.

I didn't hard-code any of that. The model decided at runtime to run those commands, read those files, and install that package. That's when it hit me: a significant chunk of the code running inside this container was written on the fly — by the model, not by me.

This is fundamentally different from a traditional web service. With a regular app, every line of code is written by a human, reviewed, and tested before it reaches production. But an AI Agent? Part of its behavior is generated at runtime. You don't know in advance what it's going to execute. So the question becomes: is the container we put it in actually strong enough?

## How Container Isolation Actually Works (And Where It Falls Short)

Let me use an analogy. Think of a traditional container as an apartment in a building. Each apartment has its own walls — namespaces and cgroups keep things separated. From the inside, it feels like you have your own place. But every apartment shares the same roof — the host Linux kernel.

Most of the time, this is fine. But if someone finds a crack in the roof — a kernel vulnerability — they can climb up from their apartment, walk across the roof, and drop into any other apartment in the building. That's a container escape.

For a standard web service, this risk is manageable — the code inside your container is predictable. But an AI Agent is different. The code running inside the container is inherently unpredictable — it's not an external attacker you're worried about, it's the tenant itself.
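You can see the "shared roof" directly from inside a container. A process only ever observes one kernel, and in a traditional container that kernel is the host's:

```python
import platform

# Namespaces isolate what a process can see (files, PIDs, network),
# but not which kernel it runs on. In a plain container this release
# string is the host node's kernel; in a Kata microVM pod it is the
# private guest kernel's instead.
kernel = platform.release()
print("kernel seen by this process:", kernel)
```

This same one-liner is the basis of the verification step later in this post.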
Docker laid this out clearly in Comparing Sandboxing Approaches for AI Agents: AI Agents are a class of workload that inherently requires stronger sandboxing. The shared-kernel model of traditional containers isn't enough. So what is enough?

## Meet the microVM: A Private Roof for Every Apartment

Sticking with the building analogy — if the problem is a shared roof, the fix is obvious: give every apartment its own roof. You still live in an apartment (container). The building is still managed the same way (Kubernetes). But the ceiling above your head is now yours alone. Even if you punch through it, you only reach your own roof — not your neighbor's. That's the core idea behind a microVM.

Koyeb published a great explainer called What Is a microVM. Here's the essence:

- It's a virtual machine — with its own independent guest kernel, fully isolated from the host kernel. This is where the security comes from.
- But it's a stripped-down VM — only the bare essentials: CPU, memory, network, block storage. No USB controllers, no sound cards, no GPU passthrough.
- So it's fast and light — millisecond boot times, small memory footprint, close to the container experience.

One-line summary: microVM = VM-grade isolation + near-container-grade lightness.

## How Does Kubernetes Use microVMs? Enter Kata Containers

Knowing microVMs are great is one thing — but Kubernetes schedules Pods and containers, not VMs. How do you bridge these two worlds? That's exactly what Kata Containers does. Their tagline nails it: "The speed of containers, the security of VMs."

Kata acts as a translation layer between Kubernetes and microVMs:

- From Kubernetes' perspective, it's still a standard Pod — scheduled, managed, and monitored normally.
- Under the hood, that Pod is actually running inside a lightweight VM with its own kernel.

You don't change your application code. You don't change your CI/CD pipeline. You just tell Kubernetes: "Run this Pod with Kata's RuntimeClass." Kata handles the rest.
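That opt-in is a single field on the Pod spec. A minimal fragment, assuming the RuntimeClass name used by the AKS sample described below; the image reference is a placeholder:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: copilot-agent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: copilot-agent
  template:
    metadata:
      labels:
        app: copilot-agent
    spec:
      # The one line that moves this Pod into a dedicated microVM:
      runtimeClassName: kata-vm-isolation
      containers:
        - name: agent
          image: myacr.azurecr.io/copilot-agent:latest  # placeholder image
```

Everything else about the Deployment stays ordinary Kubernetes.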
On AKS, Microsoft has integrated Kata out of the box under the name Pod Sandboxing. The hypervisor is Microsoft Hyper-V (not QEMU), and the RuntimeClass is called kata-vm-isolation. You create a special node pool, and AKS sets everything up automatically.

## Now Let's Look at a Real Example

Enough theory — let me walk you through something concrete. I built a sample called AKS_MicroVM that does one thing: run a GitHub Copilot SDK Agent service on AKS, enforced to run inside kata-vm-isolation — a microVM sandbox. Here's the architecture:

```
HTTPS request comes in
└─ AKS Node Pool (KataVmIsolation enabled)
   └─ Pod (runtimeClassName: kata-vm-isolation)
      └─ Dedicated Hyper-V microVM
         └─ FastAPI service (Python / uvicorn)
            └─ GitHubCopilotAgent
               └─ Copilot CLI (Node.js)
                  └─ MCP servers / tools

Isolated guest kernel + seccomp + cgroup
Egress restricted by NetworkPolicy
```

From the outside, it's just an ordinary AKS Pod. On the inside, the app runs in its own micro virtual machine with a dedicated kernel.

## Project Structure

The entire sample is just these files:

```
app/                    ← Agent service (Python)
  main.py               ← FastAPI endpoints
  agent.py              ← Copilot Agent wrapper
  tools.py              ← Example function tools
  requirements.txt
Dockerfile              ← Python 3.12 + Node 20 + Copilot CLI
k8s/                    ← Kubernetes manifests
  namespace.yaml
  runtimeclass.yaml     ← Reference (AKS auto-creates this)
  secret.example.yaml   ← Token placeholder
  deployment.yaml       ← The key file: enforces kata-vm-isolation
  service.yaml
  networkpolicy.yaml    ← Locks down ingress/egress
infra/                  ← Infrastructure scripts
  01-create-aks.sh      ← Create the cluster
  02-build-push.sh      ← Build image, push to ACR
  03-deploy.sh          ← Deploy everything
```

Three shell scripts to set up infrastructure, six YAML files to deploy the service. That's it.

## Not Just a microVM: Five Layers of Defense

I want to emphasize this: the sample doesn't just slap on a microVM and call it a day.
It stacks five layers of protection:

| What you're worried about | How this layer addresses it |
| --- | --- |
| Malicious code escaping the container | kata-vm-isolation → dedicated microVM with its own kernel |
| Privilege escalation inside the container | runAsNonRoot + drop ALL caps + read-only filesystem + seccomp |
| Agent phoning home to unauthorized endpoints | NetworkPolicy allowlist — only Copilot/GitHub/MCP egress permitted |
| Token leakage | K8s Secret injection (upgradeable to Key Vault via CSI) |
| Model instructing the Agent to do something dangerous | on_permission_request defaults to deny; only allowlisted operations proceed |

The microVM is the outermost wall — hardware-grade isolation. But inside that wall, there are still guards, access controls, and surveillance cameras. You need all of them.

## Six Steps to Deploy

```bash
# ① Create an AKS cluster with Kata support
bash infra/01-create-aks.sh

# ② Verify the RuntimeClass is ready
kubectl get runtimeclass kata-vm-isolation

# ③ Build the image and push to ACR (script auto-detects your ACR)
bash infra/02-build-push.sh

# ④ Add your GitHub Copilot token
# Edit k8s/secret.example.yaml → rename to secret.yaml (don't commit it!)

# ⑤ Deploy everything
bash infra/03-deploy.sh

# ⑥ Access via API server proxy
kubectl proxy --port=8001
```

Then chat with the Agent:

```bash
curl -s -X POST \
  http://localhost:8001/api/v1/namespaces/copilot-agent/services/copilot-agent:80/proxy/chat \
  -H 'content-type: application/json' \
  -d '{"message":"Briefly introduce Kata Containers."}'
```

Want streaming output?
Use the stream endpoint:

```bash
curl -N -X POST \
  http://localhost:8001/api/v1/namespaces/copilot-agent/services/copilot-agent:80/proxy/chat/stream \
  -H 'content-type: application/json' \
  -d '{"message":"List 3 Linux kernel hardening tips","stream":true}'
```

## How to Verify It's Actually Running in a microVM

One command:

```bash
kubectl -n copilot-agent exec deploy/copilot-agent -- uname -r
```

If the kernel version differs from the node's kernel — your Pod is running in its own guest kernel, not sharing the host's. Proof done.

## Gotchas I Hit So You Don't Have To

- kubectl port-forward doesn't work with Kata Pods. This is the easiest trap to fall into. The app listener runs inside the microVM, but port-forward connects to the empty sandbox netns on the host — you'll get connection refused. Use kubectl proxy instead.
- Token environment variable names. The Copilot CLI expects GH_TOKEN or GITHUB_TOKEN — not a custom name. The Deployment already injects both from the same Secret.
- Read-only filesystem needs emptyDir mounts. The container runs with readOnlyRootFilesystem: true, but the Copilot CLI needs to write to /home/agent/.cache at startup. The Deployment mounts emptyDir volumes at .cache, .copilot, and /tmp — miss one and the CLI won't start.
- Keep on_permission_request on deny-by-default. The Agent's tool calls go through a permission gate that defaults to deny, with an allowlist for approved operations. Don't switch this to approve-all in production — ever.
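That last gotcha is worth a sketch. The shape below is illustrative (the actual Copilot SDK callback signature may differ), but the policy it encodes, deny unless explicitly allowlisted, is the one to keep:

```python
# Hypothetical allowlist of (operation kind, target) pairs the Agent
# may perform; everything not listed is denied.
ALLOWED = {
    ("shell", "uname -r"),
    ("read", "/workspace"),
}

def on_permission_request(kind: str, target: str) -> bool:
    """Deny-by-default permission gate for Agent tool calls."""
    return (kind, target) in ALLOWED
```

The important property is that adding capability requires an explicit edit to the allowlist, not flipping a global "approve all" switch.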
## Wrapping Up: The Thread That Ties It All Together

Let me trace the logic one more time:

① Scenario: AI Agents inherently run model-generated, untrusted code inside containers
② Problem: Traditional containers share the host kernel — one escape compromises the entire node
③ Insight: We need hardware-grade isolation, stronger than namespaces alone
④ Solution: microVMs — a dedicated guest kernel for every Pod
⑤ Integration: Kata Containers brings microVM support to Kubernetes natively; AKS Pod Sandboxing makes it turnkey
⑥ Practice: The AKS_MicroVM sample — six steps to deploy, five layers of defense

In the age of AI Agents, a container isn't just a box for your application — it's a box for uncertainty. It needs a stronger shell. The microVM is that shell.

Full source code: https://github.com/kinfey/Multi-AI-Agents-Cloud-Native/tree/main/code/AKS_MicroVM

Further reading:

- What is a microVM? — Koyeb
- Comparing Sandboxing Approaches for AI Agents — Docker
- Kata Containers

# Turning GitHub Copilot into a "Best Practices Coach" with Copilot Spaces + a Markdown Knowledge Base
## Why Copilot Spaces + Markdown repos work so well

When you ask Copilot generic questions ("How should we log errors?" "What's our API versioning approach?"), the model will often respond with reasonable defaults. But reasonable defaults are not the same as your standards.

Copilot Spaces solve the context problem by allowing you to attach a curated set of sources (files, folders, repos, PRs/issues, uploads, free text) plus explicit instructions — so Copilot answers in the context of your team's rules and artifacts. Spaces can be shared with your team and stay updated as the underlying GitHub content changes — so your "best practices coach" stays evergreen.

## The architecture (high level)

Here's the mental model:

- Engineering Knowledge Base Repo: a dedicated repo containing your standards as Markdown (coding style, architecture decisions, security rules, testing conventions, examples, templates).
- Copilot Space "Engineering Standards Coach": a Space that attaches the knowledge base repo (or key folders/files within it), optionally your main application repo(s), and a short set of "rules of engagement" (instructions).
- In-repo reinforcement (optional but powerful): custom instruction files (repo-wide + path-specific) and prompt files (slash commands) inside your production repos to standardize behavior and workflows.
## Step 1: Create a Knowledge Base repo (Markdown-first)

Create a repo such as:

- engineering-knowledge-base
- platform-playbook
- org-standards

A practical starter structure:

```
engineering-knowledge-base/
  README.md
  standards/
    coding-style.md
    logging.md
    error-handling.md
    performance.md
  security/
    secure-coding.md
    secrets.md
    threat-modeling.md
  architecture/
    overview.md
    adr/
      0001-service-boundaries.md
      0002-api-versioning.md
  testing/
    unit-testing.md
    integration-testing.md
    contract-testing.md
  templates/
    pr-review-checklist.md
    api-design-checklist.md
    definition-of-done.md
```

Tip: keep these docs opinionated, concrete, and example-heavy — Copilot works best when it can point to specific patterns rather than abstract principles.

## Step 2: Create a Copilot Space and attach your sources

Create a space, name it, choose an owner (personal or organization), then add sources and instructions. Inside the Space, add two types of context: Instructions (how Copilot should behave) and Sources (your actual code and docs).

### 2.1 Instructions (how Copilot should behave in this Space)

Example instructions you can paste:

```
You are the Engineering Standards Coach for this organization.

Goals:
- Recommend solutions that follow our standards in the attached knowledge base.
- When proposing code, align with our logging, error-handling, security, and testing guidelines.
- When uncertain, ask for the missing repo context or point to the exact standard that applies.

Output format:
- Start with the standard(s) you are applying (with a link or filename reference).
- Then provide the recommended implementation.
- Include a lightweight checklist for reviewers.
```
### 2.2 Sources (your real "knowledge base")

Attach:

- The knowledge base repo (or just the folders that matter)
- Your main code repo(s) (or select folders)
- PR checklist and Definition of Done templates
- Key architecture docs, runbooks, or troubleshooting guides

## Step 3 (Optional): Add instruction files to your production repos

Spaces are excellent for curated context and team-wide "ask me anything about our standards." But you can reinforce consistency directly inside each repo by adding custom instruction files.

### 3.1 Repo-wide instructions (.github/copilot-instructions.md)

Create: your-app-repo/.github/copilot-instructions.md

```markdown
# Repository Copilot Instructions

## Tech stack
- Language: TypeScript (strict)
- Framework: Node.js + Express
- Testing: Jest
- Lint/format: ESLint + Prettier

## Engineering rules
- Use structured logging as defined in /docs/logging.md
- Never log secrets or raw tokens
- Prefer small, composable functions
- All new endpoints must include: input validation, authz checks, unit tests, and consistent error handling

## Build & test
- Install: npm ci
- Test: npm test
- Lint: npm run lint
```

### 3.2 Path-specific instructions (.github/instructions/*.instructions.md)

Create: your-app-repo/.github/instructions/security.instructions.md

```markdown
---
applyTo: "**/*.ts"
---

# Security rules (TypeScript)
- Never introduce dynamic SQL construction; use parameterized queries only.
- Any new external HTTP call must enforce timeouts and retry policy.
- Any auth logic must include negative tests.
```

## Step 4 (Optional): Turn your best practices into "slash commands" with prompt files

To standardize repeatable workflows like code review, test scaffolding, or endpoint scaffolding, create prompt files (slash commands) as .prompt.md files — commonly in .github/prompts/. Engineers invoke them manually in chat by typing /.
Create: your-app-repo/.github/prompts/standards-code-review.prompt.md

```markdown
---
description: Review code against our org standards (security, perf, style, tests)
---

You are a senior engineer performing a standards-based review. Use these checks:
1) Security: input validation, authz, secrets, injection risks
2) Reliability: error handling, retries/timeouts, edge cases
3) Observability: structured logs, metrics, tracing where relevant
4) Testing: required coverage, negative tests, naming conventions
5) Style: follow repository rules in .github/copilot-instructions.md

Output:
- Summary (2-3 lines)
- Issues (severity: BLOCKER/REQUIRED/SUGGESTION)
- Suggested patch snippets (only where confident)
- "Ready to merge?" verdict
```

Now any engineer can type /standards-code-review and get the same structured output every time, without rewriting the prompt.

## How teams actually use this day-to-day

- Recipe A: Onboarding a new engineer. Ask inside the Space: "Summarize our service architecture and coding conventions for onboarding. Link the key docs."
- Recipe B: Writing a feature with best-practice guardrails. Ask in the Space: "We're adding endpoint X. Generate a plan that follows our API versioning ADR and error-handling standard."
- Recipe C: Enforcing review standards consistently. In the repo, run the prompt file: /standards-code-review.

## Governance and best practices (what to do / what to avoid)

- Keep Spaces purpose-built. Avoid dumping an entire org into one Space if your goal is consistent, grounded output.
- Prefer linking the "golden source." Keep standards in a single repo and update them via PR — treat it like code.
- Make instructions short but strict. Detailed rules belong in your Markdown standards.
- Avoid conflicting instruction files. If instructions contradict each other, results can be inconsistent.
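Because the prompt file pins the output format, downstream tooling can rely on it. As an illustration (a hypothetical helper, not an official feature), a review bot could derive the merge verdict from the issue severities the review emits:

```python
def merge_verdict(severities: list[str]) -> str:
    """Map /standards-code-review issue severities to a verdict.

    The severity labels (BLOCKER/REQUIRED/SUGGESTION) come from the
    prompt file above; the mapping itself is an assumption.
    """
    if "BLOCKER" in severities:
        return "NOT READY: blockers present"
    if "REQUIRED" in severities:
        return "NOT READY: required changes outstanding"
    return "READY: suggestions only"
```

A stable output contract is what turns a chat prompt into something you can automate against.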
## References (official docs for further reading)

- About GitHub Copilot Spaces: https://docs.github.com/en/copilot/concepts/context/spaces
- Creating GitHub Copilot Spaces: https://docs.github.com/en/copilot/how-tos/provide-context/use-copilot-spaces/create-copilot-spaces
- Adding custom instructions for GitHub Copilot: https://docs.github.com/en/copilot/how-tos/copilot-cli/customize-copilot/add-custom-instructions
- Use custom instructions in VS Code: https://code.visualstudio.com/docs/copilot/customization/custom-instructions
- Use prompt files in VS Code: https://code.visualstudio.com/docs/copilot/customization/prompt-files

## Closing: the "best practices" flywheel

Once you implement this pattern, you get a virtuous cycle: teams encode standards as Markdown; Copilot Spaces ground answers in those standards; prompt files and instruction files standardize execution; and code reviews shift from style policing to design and correctness.

# From Test Cases to Trust: Elevating Enterprise Quality with GitHub Copilot
## The Traditional QA Bottleneck

In complex enterprise systems, QA teams often face familiar challenges:

- Time-consuming test case creation from evolving requirements
- Repetitive automation scripting and refactoring
- Heavy regression cycles under tight release timelines
- Limited bandwidth for deeper risk analysis and exploratory testing

None of these issues are caused by lack of skill — they're symptoms of scale and complexity. This is where GitHub Copilot entered our workflow — not as a "magic button," but as a thinking partner.

## Where GitHub Copilot Actually Helped

Used responsibly, Copilot added value in very specific QA scenarios.

### Faster Test Design from Requirements

Transforming business or technical requirements into structured test scenarios is intellectually demanding but time-intensive. Copilot helped accelerate:

- Initial test scenario drafting
- Gherkin-style acceptance criteria
- Coverage identification for edge and negative cases

The result wasn't "auto-generated tests," but faster starting points, reviewed and refined by humans.

### Accelerating Automation Without Losing Control

Whether working with UI automation or API tests, a significant portion of effort goes into boilerplate code, assertions, and structuring. Copilot assisted with:

- Suggesting test skeletons
- Refactoring repetitive code
- Improving readability and consistency

This freed engineers to focus on test intent, not syntax.

### Supporting Debugging and Maintenance

Automation maintenance is often underestimated. Copilot helped:

- Identify potential fixes during test failures
- Suggest improvements during refactoring
- Reduce turnaround time during regression cycles

Again, nothing was auto-merged. Human review remained non-negotiable.

## The Most Important Shift: QA Mindset

The real impact of Copilot wasn't just efficiency — it changed how QA engineers spent their time.
Instead of:

- Writing repetitive scripts
- Manually expanding similar test cases

the team could focus more on:

- Risk-based testing
- Failure pattern analysis
- Cross-team quality discussions
- Improving test strategy and coverage depth

In short, AI didn't remove QA effort — it redirected it to higher-value work.

## Responsible AI Was Not Optional

In an enterprise setup, responsible AI usage matters. Key principles we followed:

- No blind acceptance of AI suggestions
- Strict human validation of all test logic
- Awareness of data sensitivity and compliance boundaries
- Treating Copilot as an assistant, not an authority

This balance ensured quality and trust were never compromised in the pursuit of speed.

## What This Means for QA Teams

From this experience, one thing became clear: AI won't replace QA engineers. But QA engineers who use AI effectively will redefine quality. GitHub Copilot helped shift QA from execution to enablement — from writing tests faster to thinking about quality better. For enterprise teams, this is a powerful evolution.

## Final Thoughts

Quality engineering is no longer just about finding defects — it's about enabling confidence in delivery. Tools like GitHub Copilot, when used responsibly, can become catalysts in that transformation. The future of QA isn't manual vs automation. It's human judgment amplified by AI assistance. And that's where real quality lives.

## References

- Create and manage manual test cases - Azure Test Plans | Microsoft Learn
- What is Azure Test Plans? Manual, exploratory, and automated test tools. - Azure Test Plans | Microsoft Learn
- Create and manage test plans - Azure Test Plans | Microsoft Learn

# From Terminal to Autonomous Coding: Mastering GitHub Copilot CLI ACP Server
## Introduction

The rise of AI-powered development is no longer just about autocomplete—it’s about autonomous agents that can think, act, and collaborate. At the center of this transformation is the Agent Client Protocol (ACP) and its integration with GitHub Copilot CLI.

If you’ve ever wanted to:

- Integrate Copilot into your own tools
- Build custom AI-driven developer workflows
- Orchestrate coding agents in CI/CD

then understanding the GitHub Copilot CLI ACP Server is a game-changer. This article will take you from zero to advanced, covering concepts, architecture, setup, and real-world use cases.

## What Is Agent Client Protocol (ACP)?

The Agent Client Protocol (ACP) is an open standard designed to connect clients (like IDEs or tools) with AI agents in a consistent and interoperable way.

### Why ACP Exists

Before ACP:

- Every IDE needed a custom integration for each AI agent
- Every agent needed custom APIs per editor

ACP solves this by introducing a universal communication layer. The key idea: “Any editor can talk to any agent.”

### Core Capabilities

ACP enables:

- Standardized messaging between client and agent
- Streaming responses
- Tool execution with permissions
- Session lifecycle management
- Multi-agent coordination

This makes ACP a foundation layer for the agentic developer ecosystem.

## What Is GitHub Copilot CLI ACP Server?

GitHub Copilot CLI can run as an ACP-compatible server, exposing its AI capabilities programmatically. 👉 In simple terms: it turns Copilot into a backend AI agent service that any tool can connect to.
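To make the "standardized messaging" idea concrete, here is a minimal sketch of the newline-delimited JSON (NDJSON) framing that ACP-style clients and agents exchange over stdio. Note that the message fields and method names below are illustrative assumptions, not the actual ACP schema — in a real integration the official SDKs handle this framing for you:

```python
import io
import json

def encode_message(msg: dict) -> str:
    """Frame one message as a single NDJSON line (one JSON object per line)."""
    return json.dumps(msg) + "\n"

def read_messages(stream):
    """Parse an NDJSON stream back into message objects, one per line."""
    for line in stream:
        line = line.strip()
        if line:
            yield json.loads(line)

# A client would write frames like these to the agent process's stdin.
# "session/new" and "session/prompt" are made-up method names for illustration.
outgoing = io.StringIO()
outgoing.write(encode_message({"id": 1, "method": "session/new",
                               "params": {"cwd": "/repo"}}))
outgoing.write(encode_message({"id": 2, "method": "session/prompt",
                               "params": {"text": "Generate tests for utils.py"}}))

# The agent reads the same stream from its stdin and handles each message.
outgoing.seek(0)
received = list(read_messages(outgoing))
print(f"agent received {len(received)} messages, first method: {received[0]['method']}")
```

In practice the streams are the spawned Copilot process's stdin/stdout, and responses arrive incrementally as further NDJSON lines, which is what enables the streaming behavior described below.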
According to GitHub Docs:

- Copilot CLI can run in ACP mode using a flag
- It supports standardized communication via ACP
- It enables integration with IDEs, pipelines, and custom tools

## Architecture: How ACP + Copilot CLI Works

| Component | Role |
|---|---|
| Client | Sends prompts, receives responses |
| ACP Protocol | Standard communication layer |
| Copilot CLI | AI agent executing tasks |
| System | Files, repos, tools |

## Getting Started (Beginner Level)

### Install GitHub Copilot CLI

Ensure:

- Your Copilot subscription is active
- The CLI is installed and authenticated

### Start the ACP Server

Two modes are available:

- Default (stdio mode – recommended): best for IDE integration
- TCP mode (for remote systems): best for distributed systems

### Connect Using an ACP Client (Example)

Using the TypeScript SDK, you:

1. Start Copilot as a process
2. Create streams
3. Initialize the connection
4. Send a prompt
5. Receive a streaming response

ACP uses NDJSON streams over stdin/stdout and event-driven communication.

## ACP Workflow Explained

A typical flow looks like this:

1. Initialize connection
2. Create session
3. Send prompt
4. Agent processes task
5. Streaming updates returned
6. Optional tool execution (with permissions)
7. Session ends

ACP supports:

- Text + multimodal inputs
- Incremental responses
- Cancellation and control

## Real-World Use Cases

### IDE Integration (Custom Editors)

Build your own AI-powered editor: connect via ACP, send code context, receive suggestions.

### CI/CD Automation

In CI/CD pipelines, you can use ACP to:

- Auto-fix bugs
- Generate tests
- Refactor code

### Multi-Agent Systems

ACP enables:

- Copilot + other agents working together
- Task delegation
- Workflow orchestration

### Custom Developer Tools

Examples:

- AI code review dashboards
- Internal dev assistants
- ChatOps integrations

## Advanced Concepts

### Session Management

ACP allows isolated sessions, custom working directories, and context persistence.

### Streaming Responses

Instead of waiting for a full reply, you receive responses in chunks and can build real-time UIs.

### Permission Handling

ACP includes tool execution approvals, security boundaries, and controlled automation.

### Extensibility

ACP supports:
- Multiple SDKs (TypeScript, Python, Rust, Kotlin)
- Custom clients
- Future protocol evolution

## ACP vs Traditional Integration

| Feature | Traditional APIs | ACP |
|---|---|---|
| Integration | Custom per tool | Standardized |
| Streaming | Limited | Native |
| Multi-agent | Hard | Built-in |
| Extensibility | Low | High |
| Interoperability | Poor | Excellent |

## Why ACP + Copilot CLI Is a Big Deal

This combination unlocks:

- ✅ Platform-level AI integration: no more vendor lock-in per editor
- ✅ True agentic workflows: agents don’t just suggest—they act
- ✅ Ecosystem growth: any tool can plug into Copilot

## Challenges & Considerations

- ACP is still in public preview
- It requires an understanding of streams and async communication
- Debugging agent workflows can be complex

## Future of Developer Experience

ACP represents a shift toward “AI-native development platforms”. Future possibilities include:

- Fully autonomous CI/CD pipelines
- Cross-agent collaboration
- Self-healing codebases

## Final Thoughts

The GitHub Copilot CLI ACP Server is not just a feature—it’s a foundation for the next generation of software development. If you are:

- A developer → build smarter tools
- A tech lead → design AI-driven workflows
- A CTO aspirant → understand this deeply

then ACP is something you should master early.

## Quick Summary

- ACP = standard protocol for AI agents
- Copilot CLI = can run as an ACP server
- Enables = IDEs, CI/CD, multi-agent systems
- Key power = interoperability + automation

# 🏆 Agents League Winner Spotlight – Reasoning Agents Track
Agents League was designed to showcase what agentic AI can look like when developers move beyond single‑prompt interactions and start building systems that plan, reason, verify, and collaborate. Across three competitive tracks—Creative Apps, Reasoning Agents, and Enterprise Agents—participants had two weeks to design and ship real AI agents using production‑ready Microsoft and GitHub tools, supported by live coding battles, community AMAs, and async builds on GitHub.

Today, we’re excited to spotlight the winning project for the Reasoning Agents track, built on Microsoft Foundry: CertPrep Multi‑Agent System — Personalised Microsoft Exam Preparation by Athiq Ahmed.

## The Reasoning Agents Challenge Scenario

The goal of the Reasoning Agents track challenge was to design a multi‑agent system capable of effectively assisting students in preparing for Microsoft certification exams. Participants were asked to build an agentic workflow that could understand certification syllabi, generate personalized study plans, assess learner readiness, and continuously adapt based on performance and feedback.

The suggested reference architecture modeled a realistic learning journey: starting from free‑form student input, a sequence of specialized reasoning agents collaboratively curated Microsoft Learn resources, produced structured study plans with timelines and milestones, and maintained learner engagement through reminders. Once preparation was complete, the system shifted into an assessment phase to evaluate readiness and either recommend the appropriate Microsoft certification exam or loop back into targeted remediation—emphasizing reasoning, decision‑making, and human‑in‑the‑loop validation at every step.

All details are available here: agentsleague/starter-kits/2-reasoning-agents at main · microsoft/agentsleague.
## The Winning Project: CertPrep Multi‑Agent System

The CertPrep Multi‑Agent System is an AI solution for personalized Microsoft certification exam preparation, supporting nine certification exam families. At a high level, the system turns free‑form learner input into a structured certification plan, measurable progress signals, and actionable recommendations—demonstrating exactly the kind of reasoned orchestration this track was designed to surface.

## Inside the Multi‑Agent Architecture

At its core, the system is designed as a multi‑agent pipeline that combines sequential reasoning, parallel execution, and human‑in‑the‑loop gates, with traceability and responsible AI guardrails. The solution is composed of eight specialized reasoning agents, each focused on a specific stage of the learning journey, including:

- LearnerProfilingAgent – Converts free‑text background information into a structured learner profile using the Microsoft Foundry SDK (with deterministic fallbacks).
- StudyPlanAgent – Generates a week‑by‑week study plan using a constrained allocation algorithm to respect the learner’s available time.
- LearningPathCuratorAgent – Maps exam domains to curated Microsoft Learn resources with trusted URLs and estimated effort.
- ProgressAgent – Computes a weighted readiness score based on domain coverage, time utilization, and practice performance.
- AssessmentAgent – Generates and evaluates domain‑proportional mock exams.
- CertificationRecommendationAgent – Issues a clear “GO / CONDITIONAL GO / NOT YET” decision with remediation steps and next‑cert suggestions.

Throughout the pipeline, a 17‑rule Guardrails Pipeline enforces validation checks at every agent boundary, and two explicit human‑in‑the‑loop gates ensure that decisions are made only when sufficient learner confirmation or data is present.
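To make the ProgressAgent's weighted readiness score and the CertificationRecommendationAgent's decision gate concrete, here is a minimal sketch. The field names, weights, and thresholds are assumptions for illustration only (the real project defines its contracts with Pydantic; a standard-library dataclass stands in here):

```python
from dataclasses import dataclass

# Hypothetical typed contract passed between agents; CertPrep's actual
# schemas (Pydantic) and weighting are not published in this article.
@dataclass(frozen=True)
class ProgressSnapshot:
    domain_coverage: float       # share of exam domains covered, 0..1
    time_utilization: float      # planned study time actually used, 0..1
    practice_performance: float  # mock-exam score, 0..1

WEIGHTS = {"domain_coverage": 0.4, "time_utilization": 0.2,
           "practice_performance": 0.4}  # assumed weights

def readiness_score(s: ProgressSnapshot) -> float:
    """Weighted blend of the three signals, scaled to 0-100."""
    return 100 * (WEIGHTS["domain_coverage"] * s.domain_coverage
                  + WEIGHTS["time_utilization"] * s.time_utilization
                  + WEIGHTS["practice_performance"] * s.practice_performance)

def recommend(score: float) -> str:
    """Decision gate in the spirit of the CertificationRecommendationAgent
    (thresholds here are assumptions, not the project's actual cutoffs)."""
    if score >= 80:
        return "GO"
    if score >= 60:
        return "CONDITIONAL GO"
    return "NOT YET"

snapshot = ProgressSnapshot(domain_coverage=0.9, time_utilization=0.7,
                            practice_performance=0.8)
print(f"readiness {readiness_score(snapshot):.1f} -> "
      f"{recommend(readiness_score(snapshot))}")  # prints: readiness 82.0 -> GO
```

The value of the typed snapshot is that every agent boundary receives a validated object rather than free-form text, which is exactly what the guardrail checks at each boundary rely on.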
CertPrep leverages Microsoft Foundry Agent Service and related tooling to run this reasoning pipeline reliably and observably:

- Managed agents via the Foundry SDK
- Structured JSON outputs using GPT‑4o (JSON mode) with conservative temperature settings
- Guardrails enforced through Azure Content Safety
- Parallel agent fan‑out using concurrent execution
- Typed contracts with Pydantic for every agent boundary
- AI-assisted development with GitHub Copilot, used throughout for code generation, refactoring, and test scaffolding

Notably, the full pipeline is designed to run in under one second in mock mode, enabling reliable demos without live credentials.

## User Experience: From Onboarding to Exam Readiness

Beyond its backend architecture, CertPrep places strong emphasis on clarity, transparency, and user trust through a well‑structured front‑end experience. The application is built with Streamlit and organized as a 7‑tab interactive interface, guiding learners step‑by‑step through their preparation journey. From a user’s perspective, the flow looks like this:

- **Profile & Goals Input** – Learners start by describing their background, experience level, and certification goals in natural language. The system immediately reflects how this input is interpreted by displaying the structured learner profile produced by the profiling agent.
- **Learning Path & Study Plan Visualization** – Once generated, the study plan is presented using visual aids such as Gantt‑style timelines and domain breakdowns, making it easy to understand weekly milestones, expected effort, and progress over time.
- **Progress Tracking & Readiness Scoring** – As learners move forward, the UI surfaces an exam‑weighted readiness score, combining domain coverage, study plan adherence, and assessment performance—helping users understand why the system considers them ready (or not yet).
- **Assessments and Feedback** – Practice assessments are generated dynamically, and results are reported alongside actionable feedback rather than just raw scores.
- **Transparent Recommendations** – Final recommendations are presented clearly, supported by reasoning traces and visual summaries, reinforcing trust and explainability in the agent’s decision‑making.

The UI also includes an Admin Dashboard and demo‑friendly modes, enabling judges, reviewers, or instructors to inspect reasoning traces, switch between live and mock execution, and demonstrate the system reliably without external dependencies.

## Why This Project Stood Out

This project embodies the spirit of the Reasoning Agents track in several ways:

- ✅ Clear separation of reasoning roles, instead of prompt‑heavy monoliths
- ✅ Deterministic fallbacks and guardrails, critical for educational and decision‑support systems
- ✅ Observable, debuggable workflows, aligned with Foundry’s production goals
- ✅ Explainable outputs, surfaced directly in the UX

It demonstrates how agentic patterns translate cleanly into maintainable architectures when supported by the right platform abstractions.

## Try It Yourself

Explore the project, architecture, and demo here:

- 🔗 GitHub Issue (full project details): https://github.com/microsoft/agentsleague/issues/76
- 🎥 Demo video: https://www.youtube.com/watch?v=okWcFnQoBsE
- 🌐 Live app (mock data): https://agentsleague.streamlit.app/

# Supercharge Your Dev Workflows with GitHub Copilot Custom Skills
## The Problem

Every team has those repetitive, multi-step workflows that eat up time:

- Running a sequence of CLI commands, parsing output, and generating a report
- Querying multiple APIs, correlating data, and summarizing findings
- Executing test suites, analyzing failures, and producing actionable insights

You've probably documented these in a wiki or a runbook. But every time, you still manually copy-paste commands, tweak parameters, and stitch results together. What if your AI coding assistant could do all of that — triggered by a single natural language request? That's exactly what GitHub Copilot Custom Skills enable.

## What Are Custom Skills?

A skill is a folder containing a SKILL.md file (instructions for the AI), plus optional scripts, templates, and reference docs. When you ask Copilot something that matches the skill's description, it loads the instructions and executes the workflow autonomously. Think of it as giving your AI assistant a runbook it can actually execute, not just read.

| Without Skills | With Skills |
|---|---|
| Read the wiki for the procedure | Copilot loads the procedure automatically |
| Copy-paste 5 CLI commands | Copilot runs the full pipeline |
| Manually parse JSON output | Script generates a formatted HTML report |
| 15-30 minutes of manual work | One natural language request, ~2 minutes |

## How It Works

The key insight: the skill file is the contract between you and the AI. You describe what to do and how, and Copilot handles the orchestration.

## Prerequisites

| Requirement | Details |
|---|---|
| VS Code | Latest stable release |
| GitHub Copilot | Active Copilot subscription (Individual, Business, or Enterprise) |
| Agent mode | Select "Agent" mode in the Copilot Chat panel (the default in recent versions) |
| Runtime tools | Whatever your scripts need — Python, Node.js, .NET CLI, az CLI, etc. |

Note: Agent Skills follow an open standard — they work across VS Code, GitHub Copilot CLI, and GitHub Copilot coding agent. No additional extensions or cloud services are required for the skill system itself.
## Anatomy of a Skill

```
.github/skills/my-skill/
├── SKILL.md                      # Instructions (required)
└── references/
    ├── resources/
    │   ├── run.py                # Automation script
    │   ├── query-template.sql    # Reusable query template
    │   └── config.yaml           # Static configuration
    └── reports/
        └── report_template.html  # Output template
```

### The SKILL.md File

Every skill has the same structure:

```markdown
---
name: my-skill
description: 'What this does and when to use it. Include trigger phrases so Copilot knows when to load it. USE FOR: specific task A, task B. Trigger phrases: "keyword1", "keyword2".'
argument-hint: 'What inputs the user should provide.'
---

# My Skill

## When to Use
- Situation A
- Situation B

## Quick Start
\```powershell
cd .github/skills/my-skill/references/resources
py run.py <arg1> <arg2>
\```

## What It Does
| Step | Action | Purpose |
|------|--------|---------|
| 1 | Fetch data from source | Gather raw input |
| 2 | Process and transform | Apply business logic |
| 3 | Generate report | Produce actionable output |

## Output
Description of what the user gets back.
```

### Key Design Principles

- **Description is discovery.** The description field is the only thing Copilot reads to decide whether to load your skill. Pack it with trigger phrases and keywords.
- **Progressive loading.** Copilot reads only name + description (~100 tokens) for all skills. It loads the full SKILL.md body only for matched skills. Reference files load only when the procedure references them.
- **Self-contained procedures.** Include everything the AI needs to execute — exact commands, parameter formats, file paths. Don't assume prior knowledge.
- **Scripts do the heavy lifting.** The AI orchestrates; your scripts execute. This keeps the workflow deterministic and reproducible.

## Example: Build a Deployment Health Check Skill

Let's build a skill that checks the health of a deployment by querying an API, comparing against expected baselines, and generating a summary.
### Step 1 — Create the folder structure

```
.github/skills/deployment-health/
├── SKILL.md
└── references/
    └── resources/
        ├── check_health.py
        └── endpoints.yaml
```

### Step 2 — Write the SKILL.md

```markdown
---
name: deployment-health
description: 'Check deployment health across environments. Queries health endpoints, compares response times against baselines, and flags degraded services. USE FOR: deployment validation, health check, post-deploy verification, service status. Trigger phrases: "check deployment health", "is the deployment healthy", "post-deploy check", "service health".'
argument-hint: 'Provide the environment name (e.g., staging, production).'
---

# Deployment Health Check

## When to Use
- After deploying to any environment
- During incident triage to check service status
- Scheduled spot checks

## Quick Start
\```bash
cd .github/skills/deployment-health/references/resources
python check_health.py <environment>
\```

## What It Does
1. Loads endpoint definitions from `endpoints.yaml`
2. Calls each endpoint, records response time and status code
3. Compares against baseline thresholds
4. Generates an HTML report with pass/fail status

## Output
HTML report at `references/reports/health_<environment>_<date>.html`
```

### Step 3 — Write the script

```python
# check_health.py
import sys, yaml, requests, time, json
from datetime import datetime

def main():
    env = sys.argv[1]
    with open("endpoints.yaml") as f:
        config = yaml.safe_load(f)
    results = []
    for ep in config["endpoints"]:
        url = ep["url"].replace("{env}", env)
        start = time.time()
        resp = requests.get(url, timeout=10)
        elapsed = time.time() - start
        results.append({
            "service": ep["name"],
            "status": resp.status_code,
            "latency_ms": round(elapsed * 1000),
            "threshold_ms": ep["threshold_ms"],
            "healthy": resp.status_code == 200 and elapsed * 1000 < ep["threshold_ms"]
        })
    healthy = sum(1 for r in results if r["healthy"])
    print(f"Health check: {healthy}/{len(results)} services healthy")
    # ... generate HTML report ...

if __name__ == "__main__":
    main()
```

### Step 4 — Use it

Just ask Copilot in agent mode: "Check deployment health for staging"

Copilot will:

1. Match against the skill description
2. Load the SKILL.md instructions
3. Run `python check_health.py staging`
4. Open the generated report
5. Summarize findings in chat

## More Skill Ideas

Skills aren't limited to any specific domain. Here are patterns that work well:

| Skill | What It Automates |
|---|---|
| Test Regression Analyzer | Run tests, parse failures, compare against last known-good run, generate diff report |
| API Contract Checker | Compare OpenAPI specs between branches, flag breaking changes |
| Security Scan Reporter | Run SAST/DAST tools, correlate findings, produce prioritized report |
| Cost Analysis | Query cloud billing APIs, compare costs across periods, flag anomalies |
| Release Notes Generator | Parse git log between tags, categorize changes, generate changelog |
| Infrastructure Drift Detector | Compare live infra state vs IaC templates, flag drift |
| Log Pattern Analyzer | Query log aggregation systems, identify anomaly patterns, summarize |
| Performance Benchmarker | Run benchmarks, compare against baselines, flag regressions |
| Dependency Auditor | Scan dependencies, check for vulnerabilities and outdated packages |

The pattern is always the same: instructions (SKILL.md) + automation script + output template.
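As a concrete illustration of that pattern, the `endpoints.yaml` consumed by the `check_health.py` script earlier might look like this. The service names, URLs, and thresholds are placeholder values, but the keys (`endpoints`, `name`, `url`, `threshold_ms`, and the `{env}` token) match exactly what the script reads:

```yaml
# endpoints.yaml — illustrative values only
endpoints:
  - name: api-gateway
    url: "https://{env}.example.com/health"
    threshold_ms: 500
  - name: orders-service
    url: "https://{env}.example.com/orders/health"
    threshold_ms: 800
```

Keeping this configuration next to the script (rather than hardcoding URLs) is what lets the same skill serve every environment with a single argument.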
## Tips for Writing Effective Skills

### Do

- Front-load the description with keywords — this is how Copilot discovers your skill
- Include exact commands — `cd path/to/dir && python script.py <args>`
- Document input/output clearly — what goes in, what comes out
- Use tables for multi-step procedures — easier for the AI to follow
- Include time zone conversion notes if dealing with timestamps
- Bundle HTML report templates — rich output beats plain text

### Don't

- Don't use vague descriptions — "A useful skill" won't trigger on anything
- Don't assume context — include all paths, env vars, and prerequisites
- Don't put everything in SKILL.md — use `references/` for large files
- Don't hardcode secrets — use environment variables or Azure Key Vault
- Don't skip error guidance — tell the AI what common errors look like and how to fix them

## Skill Locations

Skills can live at the project or personal level:

| Location | Scope | Shared with team? |
|---|---|---|
| `.github/skills/<name>/` | Project | Yes (via source control) |
| `.agents/skills/<name>/` | Project | Yes (via source control) |
| `.claude/skills/<name>/` | Project | Yes (via source control) |
| `~/.copilot/skills/<name>/` | Personal | No |
| `~/.agents/skills/<name>/` | Personal | No |
| `~/.claude/skills/<name>/` | Personal | No |

Project-level skills are committed to your repo and shared with the team. Personal skills are yours and roam with your VS Code settings sync. You can also configure additional skill locations via the `chat.skillsLocations` VS Code setting.

## How Skills Fit in the Copilot Customization Stack

Skills are one of several customization primitives.
Here's when to use what: Primitive Use When Workspace Instructions (.github/copilot-instructions.md) Always-on rules: coding standards, naming conventions, architectural guidelines File Instructions (.github/instructions/*.instructions.md) Rules scoped to specific file patterns (e.g., all *.test.ts files) Prompts (.github/prompts/*.prompt.md) Single-shot tasks with parameterized inputs Skills (.github/skills/<name>/SKILL.md) Multi-step workflows with bundled scripts and templates Custom Agents (.github/agents/*.agent.md) Isolated subagents with restricted tool access or multi-stage pipelines Hooks (.github/hooks/*.json) Deterministic shell commands at agent lifecycle events (auto-format, block tools) Plugins Installable skill bundles from the community (awesome-copilot) Slash Commands & Quick Creation Skills automatically appear as slash commands in chat. Type / to see all available skills. You can also pass context after the command: /deployment-health staging /webapp-testing for the login page Want to create a skill fast? Type /create-skill in chat and describe what you need. Copilot will ask clarifying questions and generate the SKILL.md with proper frontmatter and directory structure. You can also extract a skill from an ongoing conversation: after debugging a complex issue, ask "create a skill from how we just debugged that" to capture the multi-step procedure as a reusable skill. Controlling When Skills Load Use frontmatter properties to fine-tune skill availability: Configuration Slash command? Auto-loaded? 
Use case Default (both omitted) Yes Yes General-purpose skills user-invocable: false No Yes Background knowledge the model loads when relevant disable-model-invocation: true Yes No Skills you only want to run on demand Both set No No Disabled skills The Open Standard Agent Skills follow an open standard that works across multiple AI agents: GitHub Copilot in VS Code — chat and agent mode GitHub Copilot CLI — terminal workflows GitHub Copilot coding agent — automated coding tasks Claude Code, Gemini CLI — compatible agents via .claude/skills/ and .agents/skills/ Skills you write once are portable across all these tools. Getting Started Create .github/skills/<your-skill>/SKILL.md in your repo Write a keyword-rich description in the YAML frontmatter Add your procedure and reference scripts Open VS Code, switch to Agent mode, and ask Copilot to do the task Watch it discover your skill, load the instructions, and execute Or skip the manual setup — type /create-skill in chat and describe what you need. That's it. No extension to install. No config file to update. No deployment pipeline. Just markdown and scripts, version-controlled in your repo. Custom Skills turn your documented procedures into executable AI workflows. Start with your most painful manual task, wrap it in a SKILL.md, and let Copilot handle the rest. Further Reading: Official Agent Skills docs Community skills & plugins (awesome-copilot) Anthropic reference skillsFrom CI/CD to Continuous AI: The Future of GitHub Automation
## Introduction

For over a decade, CI/CD (Continuous Integration and Continuous Deployment) has been the backbone of modern software engineering. It helped teams move from manual, error-prone deployments to automated, reliable pipelines. But today, we are standing at the edge of another transformation—one that is far more powerful. Welcome to the era of Continuous AI.

This new paradigm is not just about automating pipelines—it’s about building self-improving, intelligent systems that can analyze, decide, and act with minimal human intervention. With the emergence of AI-powered workflows inside GitHub, automation is evolving from rule-based execution to context-aware decision-making.

This article explores:

- What Continuous AI is
- How it differs from CI/CD
- Real-world use cases
- Architecture patterns
- Challenges and best practices
- What the future holds for engineering teams

## The Evolution: From CI to CI/CD to Continuous AI

### 1. Continuous Integration (CI)

- Developers merge code frequently
- Automated builds and tests validate changes
- Goal: Catch issues early

### 2. Continuous Deployment (CD)

- Code automatically deployed to production
- Reduced manual intervention
- Goal: Faster delivery

### 3. Continuous AI (The Next Step)

- Systems don’t just execute—they think and improve
- AI agents analyze code, detect issues, suggest fixes, and even implement them
- Goal: Autonomous software evolution

## What is Continuous AI?

Continuous AI is a model where software systems continuously improve themselves using AI-driven insights and automated actions.
Instead of static pipelines, you get:

- Intelligent workflows
- Context-aware automation
- Self-healing repositories
- Autonomous decision-making systems

### Key Characteristics

| Feature | CI/CD | Continuous AI |
|---|---|---|
| Execution | Rule-based | AI-driven |
| Flexibility | Low | High |
| Decision-making | Predefined | Dynamic |
| Learning | None | Continuous |
| Output | Build & deploy | Improve & optimize |

## Why Continuous AI Matters

Traditional automation has limitations:

- It cannot adapt to new patterns
- It cannot reason about code quality
- It cannot proactively improve systems

Continuous AI addresses these problems by introducing context awareness, learning from past data, and proactive optimization. This leads to:

- Faster development cycles
- Higher code quality
- Reduced operational overhead
- Smarter engineering teams

## Core Components of Continuous AI in GitHub

### 1. AI Agents

AI agents act as autonomous workers inside your repository. They can:

- Review pull requests
- Suggest improvements
- Generate tests
- Fix bugs

### 2. Agentic Workflows

Unlike YAML pipelines, these workflows:

- Are written in natural language or simplified formats
- Use AI to interpret intent
- Adapt based on context

### 3. Event-Driven Intelligence

Workflows trigger on events like pull request creation, issue updates, and failed builds. But instead of just reacting, they analyze the situation and decide the best course of action.

### 4. Feedback Loops

Continuous AI systems improve over time using past PR data, test failures, and deployment outcomes.

## CI/CD vs Continuous AI: A Deep Comparison

### Traditional CI/CD Pipeline

1. Developer pushes code
2. Pipeline runs tests
3. Build is generated
4. Code is deployed

➡️ Everything is predefined and static

### Continuous AI Workflow

1. Developer creates PR
2. AI agent reviews code
3. Suggests improvements
4. Generates missing tests
5. Fixes minor issues automatically
6. Learns from feedback

➡️ Dynamic, intelligent, and evolving

## Real-World Use Cases

### 1. Automated Pull Request Reviews

AI agents can detect code smells, suggest optimizations, and ensure coding standards.

### 2. Self-Healing Repositories

Automatically fix failing builds, update dependencies, and resolve merge conflicts.

### 3. Intelligent Test Generation

Generate test cases based on code changes and improve coverage over time.

### 4. Issue Triage Automation

Categorize issues, assign priorities, and route them to the correct teams.

### 5. Documentation Automation

Auto-generate README updates and keep documentation in sync with code.

## Architecture of Continuous AI Systems

A typical architecture includes:

- Layer 1: Event Sources — GitHub events (PRs, commits, issues)
- Layer 2: AI Decision Engine — LLM-based agents, context analysis, task planning
- Layer 3: Action Layer — GitHub Actions, scripts, automation tools
- Layer 4: Feedback Loop — logs, metrics, model improvement

## Multi-Agent Systems: The Next Level

Continuous AI becomes more powerful when multiple agents collaborate. An example setup:

- Code Review Agent → reviews PRs
- Test Agent → generates tests
- Security Agent → scans vulnerabilities
- Docs Agent → updates documentation

These agents communicate with each other, share context, and coordinate tasks. ➡️ This creates a virtual AI engineering team.

## Benefits for Engineering Teams

1. Increased Productivity: developers spend less time on repetitive tasks.
2. Better Code Quality: continuous improvements ensure cleaner codebases.
3. Faster Time-to-Market: automation reduces bottlenecks.
4. Reduced Burnout: engineers focus on innovation instead of maintenance.

## Challenges and Risks

1. Over-Automation: too much automation can reduce human oversight.
2. Security Concerns: AI workflows may misuse permissions if not controlled.
3. Trust Issues: teams may hesitate to rely on AI decisions.
4. Cost of AI Operations: running AI agents continuously can increase costs.

## Best Practices for Implementing Continuous AI

1. Start Small: begin with PR review automation and test generation.
2. Human-in-the-Loop: ensure critical decisions require approval.
3. Use Least Privilege: restrict workflow permissions.
4. Monitor and Measure: track accuracy, impact, and cost.
5. Build Feedback Loops: continuously improve models and workflows.

## Future of GitHub Automation

The future is heading toward fully autonomous repositories, AI-driven engineering teams, and continuous optimization of software systems. We may soon see:

- Repos that refactor themselves
- Systems that predict failures before they occur
- AI architects designing system improvements

## Conclusion

CI/CD transformed how we build and deliver software. But Continuous AI is set to transform how software evolves. It moves us from “automating tasks” to “automating intelligence.” For engineering leaders, this is not just a technical shift—it’s a strategic advantage. Early adopters of Continuous AI will build faster, smarter, and more resilient systems.

The question is no longer “Should we adopt AI in our workflows?” but “How fast can we transition to Continuous AI?”

# Demystifying GitHub Copilot Security Controls: easing concerns for organizational adoption
At a recent developer conference, I delivered a session on Legacy Code Rescue using GitHub Copilot App Modernization. Throughout the day, conversations with developers revealed a clear divide: some have fully embraced agentic AI in their daily coding, while others remain cautious. Often, this hesitation isn't due to reluctance but stems from organizational concerns around security and regulatory compliance. Having witnessed similar patterns during past technology shifts, I understand how these barriers can slow adoption. In this blog, I'll demystify the most common security concerns about GitHub Copilot and explain how its built-in features address them, empowering organizations to confidently modernize their development workflows.

## GitHub Copilot Model Training

A common question I received at the conference was whether GitHub uses your code as training data for GitHub Copilot. I always direct customers to the GitHub Copilot Trust Center for clarity, but the answer is straightforward: “No. GitHub uses neither Copilot Business nor Enterprise data to train the GitHub model.” Notice this restriction also applies to third-party models (e.g. Anthropic, Google).

## GitHub Copilot Intellectual Property Indemnification Policy

A frequent concern I hear is that, since GitHub Copilot’s underlying models are trained on sources that include public code, it might simply “copy and paste” code from those sources. Let’s clarify how this actually works. Does GitHub Copilot “copy/paste”? “The AI models that create Copilot’s suggestions may be trained on public code, but do not contain any code. When they generate a suggestion, they are not ‘copying and pasting’ from any codebase.”

To provide an additional layer of protection, GitHub Copilot includes a “duplicate detection filter”. This feature helps prevent suggestions that closely match public code from being surfaced. (Note: this duplicate detection currently does not apply to the Copilot coding agent.)
More importantly, customers are protected by an Intellectual Property indemnification policy. This means that if you receive an unmodified suggestion from GitHub Copilot and face a copyright claim as a result, Microsoft will defend you in court.

## GitHub Copilot Data Retention

Another frequent question I hear concerns GitHub Copilot’s data retention policies. For organizations on GitHub Copilot Business and Enterprise plans, retention practices depend on how and where the service is accessed from.

Access through the IDE for Chat and Code Completions:

- Prompts and Suggestions: Not retained.
- User Engagement Data: Kept for two years.
- Feedback Data: Stored for as long as needed for its intended purpose.

Other GitHub Copilot access and use:

- Prompts and Suggestions: Retained for 28 days.
- User Engagement Data: Kept for two years.
- Feedback Data: Stored for as long as needed for its intended purpose.

For the Copilot Coding Agent, session logs are retained for the life of the account in order to provide the service.

## Excluding Content from GitHub Copilot

To prevent GitHub Copilot from indexing sensitive files, you can configure content exclusions at the repository or organization level. In VS Code, use the .copilotignore file to exclude files client-side. Note that files listed in .gitignore are not indexed by default but may still be referenced if open or explicitly referenced (unless they’re excluded through .copilotignore or content exclusions).

## The Life Cycle of a GitHub Copilot Code Suggestion

Here are the key protections at each stage of the life cycle of a GitHub Copilot code suggestion:

- In the IDE: Content exclusions prevent files, folders, or patterns from being included.
- GitHub proxy (pre-model safety): Prompts go through a GitHub proxy hosted in Microsoft Azure for pre-inference checks: screening for toxic or inappropriate language, relevance, and hacking attempts/jailbreak-style prompts before reaching the model.
- **Model response:** With the public code filter enabled, suggestions that closely match public code are suppressed. The vulnerability protection feature blocks insecure coding patterns, such as hardcoded credentials or SQL injection, in real time.

## Disable Access to GitHub Copilot Free

Due to the varying policies associated with GitHub Copilot Free, it is crucial for organizations to ensure it is disabled both in the IDE and on GitHub.com. Since not all IDEs currently offer a built-in option to disable Copilot Free, the most reliable way to prevent both accidental and intentional access is to implement firewall rule changes, as outlined in the official documentation.

## Agent Mode Allow List

Accidental file-system deletion by agentic AI assistants can happen. With GitHub Copilot agent mode, the "terminal auto approve" setting in VS Code can be used to prevent this. The setting can be managed centrally using a VS Code policy.

## MCP Registry

Organizations often want to restrict access to trusted MCP servers only. GitHub now offers an MCP registry feature for this purpose. The feature isn't available in all IDEs and clients yet, but support is being developed.

## Compliance Certifications

The GitHub Copilot Trust Center lists GitHub Copilot's broad compliance credentials, surpassing many competitors in financial, security, privacy, cloud, and industry coverage:

- **SOC 1 Type 2:** Assurance over internal controls for financial reporting.
- **SOC 2 Type 2:** In-depth report covering Security, Availability, Processing Integrity, Confidentiality, and Privacy over time.
- **SOC 3:** General-use version of SOC 2 with broad executive-level assurance.
- **ISO/IEC 27001:2013:** Certification for a formal Information Security Management System (ISMS), based on risk-management controls.
- **CSA STAR Level 2:** A third-party attestation combining ISO 27001 or SOC 2 with additional Cloud Controls Matrix (CCM) requirements.
- **TISAX:** Trusted Information Security Assessment Exchange, covering automotive-sector security standards.
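Returning to the agent-mode allow list mentioned above: the idea is a per-command approval map in VS Code's `settings.json`, where safe commands auto-run and destructive ones always require confirmation. The exact setting key and value shape have changed across VS Code versions, so treat this fragment as an illustrative sketch and verify against the current VS Code documentation:

```jsonc
// settings.json (illustrative; confirm the exact key in current VS Code docs)
{
  "chat.tools.terminal.autoApprove": {
    "npm test": true,            // low-risk commands can run without prompting
    "git status": true,
    "rm": false,                 // destructive commands always require approval
    "/^git (push|reset)/": false // regex entries gate whole command families
  }
}
```

Because the setting can be driven by a centrally managed VS Code policy, an organization can enforce the deny entries fleet-wide rather than relying on individual developer configuration.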
In summary, while the adoption of AI tools like GitHub Copilot in software development can raise important questions around security, privacy, and compliance, the safeguards described above go a long way toward addressing those concerns. By understanding the built-in protections, configurable controls, and robust compliance certifications on offer, organizations and developers alike can feel confident embracing GitHub Copilot to accelerate innovation while maintaining trust and peace of mind.

# GitHub Copilot SDK and Hybrid AI in Practice: Automating README to PPT Transformation
## Introduction

In today's rapidly evolving AI landscape, developers often face a critical choice: should we use powerful cloud-based Large Language Models (LLMs) that require internet connectivity, or lightweight Small Language Models (SLMs) that run locally but have limited capabilities? The answer isn't either-or. Hybrid models combine the strengths of both to create AI solutions that are secure, efficient, and powerful.

This article explores hybrid model architectures through the lens of GenGitHubRepoPPT, demonstrating how to elegantly combine Microsoft Foundry Local, the GitHub Copilot SDK, and other technologies to automatically generate professional PowerPoint presentations from GitHub README files.

## 1. Hybrid Model Scenarios and Value

### 1.1 What Are Hybrid Models?

Hybrid AI models strategically combine locally running Small Language Models (SLMs) with cloud-based Large Language Models (LLMs) within the same application, selecting the most appropriate model for each task based on its characteristics.

Core principles:

- **Local processing for sensitive data:** Privacy-critical content analysis happens on-device.
- **Cloud for value creation:** Complex reasoning and creative generation leverage cloud power.
- **Balancing cost and performance:** High-frequency, simple tasks run locally to minimize API costs.

### 1.2 Typical Hybrid Model Use Cases

| Use Case | Local SLM Role | Cloud LLM Role | Value Proposition |
|---|---|---|---|
| Intelligent document processing | Text extraction, structural analysis | Content refinement, format conversion | Privacy protection + professional output |
| Code development assistant | Syntax checking, code completion | Complex refactoring, architecture advice | Fast response + deep insights |
| Customer service systems | Intent recognition, FAQ handling | Complex issue resolution | Reduced latency + enhanced quality |
| Content creation platforms | Keyword extraction, outline generation | Article writing, multilingual translation | Cost control + creative assurance |

### 1.3 Why Choose Hybrid Models?
Three core advantages:

1. **Privacy and security**
   - Sensitive data never leaves local devices
   - Compliant with GDPR, HIPAA, and other regulations
   - Ideal for internal corporate documents and personal information
2. **Cost optimization**
   - Reduces cloud API call frequency
   - Local models have zero usage fees
   - Predictable operational costs
3. **Performance and reliability**
   - Local processing eliminates network latency
   - Partial functionality in offline environments
   - Cloud models ensure high-quality output

## 2. Core Technology Analysis

### 2.1 Large Language Models (LLMs): Cloud Intelligence Representatives

**What are LLMs?** Large Language Models are deep-learning-based natural language processing models, typically with billions to trillions of parameters. Through training on massive text datasets, they have acquired powerful language understanding and generation capabilities.

Representative models:

- **Claude Sonnet 4.5:** Anthropic's flagship model, excelling at long-context processing and complex reasoning
- **GPT-5.2 series:** OpenAI's general-purpose language models
- **Gemini:** Google's multimodal large models

LLM advantages:

- ✅ Exceptional text generation quality
- ✅ Powerful contextual understanding
- ✅ Support for complex reasoning tasks
- ✅ Continuous model updates and optimization

Typical applications:

- Professional document writing (technical reports, business plans)
- Code generation and refactoring
- Multilingual translation
- Creative content creation

### 2.2 Small Language Models (SLMs) and Microsoft Foundry Local

#### 2.2.1 SLM Characteristics

Small Language Models typically have 1B-7B parameters and are designed specifically for resource-constrained environments.
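To make "resource-constrained" concrete: weight memory is roughly the parameter count times bytes per weight, so a 7B model needs about 14 GB at FP16 but only about 3.5 GB after INT4 quantization. The figures below are illustrative back-of-envelope numbers that ignore the KV cache and runtime overhead:

```python
# Rough weight-memory estimate for a language model: params * bytes per weight.
# Illustrative only: ignores KV cache, activations, and runtime overhead.
def weight_memory_gb(num_params: float, bits_per_weight: int) -> float:
    bytes_total = num_params * bits_per_weight / 8
    return bytes_total / 1e9  # decimal gigabytes

fp16 = weight_memory_gb(7e9, 16)  # 7B weights at 2 bytes each -> 14.0 GB
int4 = weight_memory_gb(7e9, 4)   # 7B weights at 0.5 bytes each -> 3.5 GB
print(f"7B model: {fp16:.1f} GB at FP16, {int4:.1f} GB at INT4")
```

This is why quantized SLMs fit comfortably on a laptop or NPU-equipped edge device, while frontier-scale LLMs remain practical only behind a cloud API.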
Mainstream SLM model families:

- **Microsoft Phi family:** inference-optimized, efficient models
- **Alibaba Qwen family:** excellent Chinese-language capabilities
- **Mistral series:** outstanding performance at small parameter counts

SLM advantages:

- ⚡ Low-latency responses (millisecond-level)
- 💰 Zero API costs
- 🔒 Fully local; data stays on-device
- 📱 Suitable for edge-device deployment

#### 2.2.2 Microsoft Foundry Local: The Foundation of Local AI

Foundry Local is Microsoft's local AI runtime tool, enabling developers to easily run SLMs on Windows or macOS devices.

Core features:

**OpenAI-compatible API**

```python
# Using Foundry Local feels like using the OpenAI API
from openai import OpenAI
from foundry_local import FoundryLocalManager

manager = FoundryLocalManager("qwen2.5-7b-instruct")
client = OpenAI(
    base_url=manager.endpoint,
    api_key=manager.api_key
)
```

**Hardware acceleration support**

- CPU: general computing support
- GPU: NVIDIA, AMD, and Intel graphics acceleration
- NPU: Qualcomm and Intel AI-specific chips
- Apple Silicon: Neural Engine optimization

**Based on ONNX Runtime**

- Cross-platform compatibility
- Highly optimized inference performance
- Supports model quantization (INT4, INT8)

**Convenient model management**

```shell
# View available models
foundry model list

# Run a model
foundry model run qwen2.5-7b-instruct-generic-cpu:4

# Check running status
foundry service ps
```

Foundry Local application value:

- 🎓 **Educational scenarios:** Students can learn AI development without cloud subscriptions
- 🏢 **Enterprise environments:** Process sensitive data while maintaining compliance
- 🧪 **R&D testing:** Rapid prototyping without API cost concerns
- ✈️ **Offline environments:** Works on planes, subways, and other no-network scenarios

### 2.3 GitHub Copilot SDK: The Express Lane from Agent to Business Value

#### 2.3.1 What Is the GitHub Copilot SDK?

The GitHub Copilot SDK, released as a technical preview on January 22, 2026, is a game-changer for AI agent development.
Unlike other AI SDKs, the Copilot SDK doesn't just provide API-calling interfaces; it delivers a complete, production-grade agent execution engine.

**Why is it revolutionary?** Traditional AI application development requires you to build:

- ❌ Context management systems (multi-turn conversation state)
- ❌ Tool orchestration logic (deciding when to call which tool)
- ❌ Model routing mechanisms (switching between different LLMs)
- ❌ MCP server integration
- ❌ Permission and security boundaries
- ❌ Error handling and retry mechanisms

The Copilot SDK provides all of this out of the box, letting you focus on business logic rather than underlying infrastructure.

#### 2.3.2 Core Advantages: The Ultra-Short Path from Concept to Code

**Production-grade agent engine: battle-tested reliability**

The Copilot SDK uses the same agent core as the GitHub Copilot CLI, which means:

- ✅ Validated in millions of real-world developer scenarios
- ✅ Capable of handling complex multi-step task orchestration
- ✅ Automatic task planning and execution
- ✅ Built-in error recovery mechanisms

Real-world example: in the GenGitHubRepoPPT project, we don't hand-write the "how to convert an outline to a PPT" logic. We simply tell the Copilot SDK the goal, and it automatically:

1. Analyzes the outline structure
2. Plans slide layouts
3. Calls file-creation tools
4. Applies formatting logic
5. Handles multilingual adaptation

```python
# Traditional approach: requires hundreds of lines of hand-written logic
def create_ppt_traditional(outline):
    slides = parse_outline(outline)
    for slide in slides:
        layout = determine_layout(slide)
        content = format_content(slide)
        apply_styling(content, layout)
        # ... more manual logic
    return ppt_file

# Copilot SDK approach: focus on business intent
session = await client.create_session({
    "model": "claude-sonnet-4.5",
    "streaming": True,
    "skill_directories": [skills_dir]
})
session.send_and_wait({"prompt": prompt}, timeout=600)
```

**Custom Skills: reusable encapsulation of business knowledge**

This is one of the Copilot SDK's most powerful features.
In traditional AI development, you need to provide complete prompts and context with every call. Skills let you define once and reuse forever:

```markdown
# .copilot_skills/ppt/SKILL.md

# PowerPoint Generation Expert Skill

## Expertise
You are an expert in business presentation design, skilled at transforming
technical content into easy-to-understand visual presentations.

## Workflow
1. **Structure Analysis**
   - Identify outline hierarchy (titles, subtitles, bullet points)
   - Determine topic and content density for each slide
2. **Layout Selection**
   - Title slide: use a large title + subtitle layout
   - Content slides: choose single/dual column based on bullet count
   - Technical details: use code block or table layouts
3. **Visual Optimization**
   - Apply a professional color scheme (corporate blue + accent colors)
   - Ensure each slide has a visual focal point
   - Keep bullets to 5-7 items per page
4. **Multilingual Adaptation**
   - Choose appropriate fonts based on language (Chinese: Microsoft YaHei, English: Calibri)
   - Adapt text direction and layout conventions

## Output Requirements
Generate .pptx files meeting these standards:
- 16:9 widescreen ratio
- Consistent visual style
- Editable content (not images)
- File size < 5MB
```

**Business code generation capability**

This is the core value of this project. Unlike generic LLM APIs, the Copilot SDK with Skills can generate truly executable business code.
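Since skills are just `SKILL.md` files under a conventional directory, an application can gather the candidate paths to pass as `skill_directories` with a few lines of standard-library Python. The helper below is hypothetical (the SDK itself does the actual loading); it only sketches the discovery step:

```python
from pathlib import Path

def find_skill_dirs(root: str) -> list[str]:
    """Return directories under `root` that contain a SKILL.md file.

    Hypothetical helper: the Copilot SDK loads and interprets the skills;
    this merely collects candidate paths to pass as `skill_directories`.
    """
    return sorted(str(p.parent) for p in Path(root).rglob("SKILL.md"))

# e.g. find_skill_dirs(".copilot_skills") might return [".copilot_skills/ppt", ...]
```

Keeping each skill in its own directory this way makes them easy to version-control and share across projects.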
Comparison example:

| Aspect | Generic LLM API | Copilot SDK + Skills |
|---|---|---|
| Task description | Requires detailed prompt engineering | Concise business intent suffices |
| Output quality | May need multiple adjustments | Professional-grade on first try |
| Code execution | Usually example code | Directly generates runnable programs |
| Error handling | Manual implementation required | Agent automatically handles and retries |
| Multi-step tasks | Manual orchestration needed | Automatic planning and execution |

Comparison of manual coding workload:

| Task | Manual Coding | Copilot SDK |
|---|---|---|
| Processing logic code | ~500 lines | ~10 lines of configuration |
| Layout templates | ~200 lines | Declared in Skill |
| Style definitions | ~150 lines | Declared in Skill |
| Error handling | ~100 lines | Automatically handled |
| **Total** | ~950 lines | ~10 lines + Skill file |

**Tool calling and MCP integration: connecting to the real world**

The Copilot SDK doesn't just generate code; it can directly execute operations:

- 🗃️ **File system operations:** create, read, and modify files
- 🌐 **Network requests:** call external APIs
- 📊 **Data processing:** use pandas, numpy, and other libraries
- 🔧 **Custom tools:** integrate your business logic

## 3. GenGitHubRepoPPT Case Study

### 3.1 Project Overview

GenGitHubRepoPPT is an innovative hybrid AI solution that combines local AI models with cloud-based AI agents to automatically generate professional PowerPoint presentations from GitHub repository README files in under 5 minutes.

Technical architecture:

### 3.2 Why Adopt a Hybrid Model?
**Stage 1: Local SLM processes sensitive data**

Task: analyze the GitHub README, extract key information, and generate a structured outline.

Reasons for choosing Qwen-2.5-7B + Foundry Local:

1. **Privacy protection**
   - The README may contain internal project information
   - Local processing ensures data doesn't leave the device
   - Complies with data-compliance requirements
2. **Cost effectiveness**
   - Each analysis processes thousands of tokens
   - Cloud API costs are significant in high-frequency scenarios
   - Local models have zero additional fees
3. **Performance**
   - Qwen-2.5-7B excels at text-analysis tasks
   - Outstanding Chinese support
   - Acceptable CPU inference latency (typically 2-3 seconds)

**Stage 2: Cloud LLM + Copilot SDK creates business value**

Task: create a well-formatted PowerPoint file based on the outline.

Reasons for choosing Claude Sonnet 4.5 + Copilot SDK:

1. **Automated business code generation**

   Traditional approach pain points:
   - Need to hand-write 500+ lines of code for PPT layout logic
   - Require deep knowledge of the python-pptx library APIs
   - Style and formatting code is error-prone
   - Multilingual support requires additional conditional logic

   Copilot SDK solution:
   - Declare business rules and best practices through Skills
   - The agent automatically generates and executes the required code
   - Zero-code implementation of complex layout logic
   - Development time reduced from 2-3 days to 2-3 hours

2. **Ultra-short path from intent to execution**

   Comparison: different ways to implement "generate a professional PPT"

3. **Production-grade reliability and quality assurance**
   - Battle-tested agent engine: uses the same core as the GitHub Copilot CLI, validated in millions of real-world scenarios, and automatically handles edge cases and errors
   - Consistent output quality: professional standards ensured through Skills, automatic validation of generated files, and built-in retry and error-recovery mechanisms

4. **Rapid iteration and optimization capability**
   - Scenario: a client requests a PPT style adjustment

The GitHub repo: https://github.com/kinfey/GenGitHubRepoPPT
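The two-stage split above amounts to a small routing layer: privacy-sensitive analysis stages go to the local SLM, creative generation stages go to the cloud LLM. The stage names and rules below are an illustrative sketch, not code from the GenGitHubRepoPPT project itself:

```python
# Sketch of the two-stage hybrid routing described above.
# Stage names and routing rules are illustrative, not from the project.

SENSITIVE_STAGES = {"readme_analysis", "outline_extraction"}  # keep on-device
CREATIVE_STAGES = {"ppt_generation", "content_polish"}        # send to cloud

def route_stage(stage: str) -> str:
    """Decide which model tier handles a pipeline stage."""
    if stage in SENSITIVE_STAGES:
        return "local-slm"   # e.g. Qwen-2.5-7B via Foundry Local
    if stage in CREATIVE_STAGES:
        return "cloud-llm"   # e.g. Claude Sonnet 4.5 via the Copilot SDK
    return "local-slm"       # default to the private, zero-cost tier

pipeline = ["readme_analysis", "outline_extraction", "ppt_generation"]
plan = [(stage, route_stage(stage)) for stage in pipeline]
```

Defaulting unknown stages to the local tier is a deliberate fail-safe choice: a misclassified task costs some quality, never a privacy leak or an API bill.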
## 4. Summary

### 4.1 Core Value of Hybrid Models + Copilot SDK

The GenGitHubRepoPPT project demonstrates how combining hybrid models with the Copilot SDK creates a new paradigm for AI application development.

**Privacy and cost balance.** The hybrid approach allows sensitive README analysis to happen locally using Qwen-2.5-7B, ensuring data never leaves the device while incurring zero API costs. Meanwhile, the value-creating work of generating professional PowerPoint presentations leverages Claude Sonnet 4.5 through the Copilot SDK, delivering quality that justifies the per-use cost.

**From code to intent.** Traditional AI development required writing hundreds of lines of code to handle PPT-generation logic, layout selection, style application, and error handling. With the Copilot SDK and Skills, developers describe what they want in natural language, and the agent automatically generates and executes the necessary code. What once took 3-5 days now takes 3-4 hours, with 95% less code to maintain.

**Automated business code generation.** The Copilot SDK doesn't just provide code examples; it generates complete, executable business logic. When you request a multilingual PPT, the agent understands the requirement, selects appropriate fonts, generates the implementation code, executes it with error handling, validates the output, and returns a ready-to-use file. Developers focus on business intent rather than implementation details.

### 4.2 Technology Trends

**The shift to intent-driven development.** We're witnessing a fundamental change in how developers work. Rather than mastering every programming-language detail and framework API, developers are increasingly defining what they want through declarative Skills. The Copilot SDK represents this future: you describe capabilities in natural language, and AI agents handle the code generation and execution automatically.
**Edge AI and cloud AI integration.** The evolution from pure cloud LLMs (powerful but privacy-concerning) to pure local SLMs (private but limited) has led to today's hybrid architectures. GenGitHubRepoPPT exemplifies this trend: local models handle data analysis and structuring, while cloud models tackle complex reasoning and professional output generation. The combination delivers fast, secure, and professional results.

**Democratization of agent development.** The Copilot SDK dramatically lowers the barrier to building AI applications. Senior engineers see 10-20x productivity gains. Mid-level engineers can now build sophisticated agents that were previously beyond their reach. Even junior engineers and business experts can participate by writing Skills that capture domain knowledge without deep technical expertise.

The future isn't about whether we can build AI applications; it's about how quickly we can turn ideas into reality.

## References

Projects and code:

- GenGitHubRepoPPT GitHub Repository - case study project
- Microsoft Foundry Local - local AI runtime
- GitHub Copilot SDK - agent development SDK
- Copilot SDK Getting Started Tutorial - official quick start

Deep dive: Copilot SDK:

- Build an Agent into Any App with GitHub Copilot SDK - official announcement
- GitHub Copilot SDK Cookbook - practical examples
- Copilot CLI Official Documentation - CLI tool documentation

Learning resources:

- Edge AI for Beginners - edge AI introductory course
- Azure AI Foundry Documentation - Azure AI documentation
- GitHub Copilot Extensions Guide - extension development guide