Turn repetitive multi-step tasks into one-command AI workflows — no extensions, no plugins, just markdown and scripts.
## The Problem
Every team has those repetitive, multi-step workflows that eat up time:
- Running a sequence of CLI commands, parsing output, and generating a report
- Querying multiple APIs, correlating data, and summarizing findings
- Executing test suites, analyzing failures, and producing actionable insights
You've probably documented these in a wiki or a runbook. But every time, you still manually copy-paste commands, tweak parameters, and stitch results together.
What if your AI coding assistant could do all of that — triggered by a single natural language request?
That's exactly what GitHub Copilot Custom Skills enable.
## What Are Custom Skills?
A skill is a folder containing a SKILL.md file (instructions for the AI), plus optional scripts, templates, and reference docs. When you ask Copilot something that matches the skill's description, it loads the instructions and executes the workflow autonomously.
Think of it as giving your AI assistant a runbook it can actually execute, not just read.
| Without Skills | With Skills |
|---|---|
| Read the wiki for the procedure | Copilot loads the procedure automatically |
| Copy-paste 5 CLI commands | Copilot runs the full pipeline |
| Manually parse JSON output | Script generates a formatted HTML report |
| 15-30 minutes of manual work | One natural language request, ~2 minutes |
## How It Works
The key insight: the skill file is the contract between you and the AI. You describe what to do and how, and Copilot handles the orchestration.
## Prerequisites
| Requirement | Details |
|---|---|
| VS Code | Latest stable release |
| GitHub Copilot | Active Copilot subscription (Individual, Business, or Enterprise) |
| Agent mode | Select "Agent" mode in the Copilot Chat panel (the default in recent versions) |
| Runtime tools | Whatever your scripts need — Python, Node.js, .NET CLI, az CLI, etc. |
Note: Agent Skills follow an open standard — they work across VS Code, GitHub Copilot CLI, and GitHub Copilot coding agent. No additional extensions or cloud services are required for the skill system itself.
## Anatomy of a Skill

```
.github/skills/my-skill/
├── SKILL.md                      # Instructions (required)
└── references/
    ├── resources/
    │   ├── run.py                # Automation script
    │   ├── query-template.sql    # Reusable query template
    │   └── config.yaml           # Static configuration
    └── reports/
        └── report_template.html  # Output template
```
## The SKILL.md File
Every skill has the same structure:
````markdown
---
name: my-skill
description: 'What this does and when to use it. Include trigger
  phrases so Copilot knows when to load it. USE FOR: specific
  task A, task B. Trigger phrases: "keyword1", "keyword2".'
argument-hint: 'What inputs the user should provide.'
---

# My Skill

## When to Use

- Situation A
- Situation B

## Quick Start

```powershell
cd .github/skills/my-skill/references/resources
py run.py <arg1> <arg2>
```

## What It Does

| Step | Action | Purpose |
|------|--------|---------|
| 1 | Fetch data from source | Gather raw input |
| 2 | Process and transform | Apply business logic |
| 3 | Generate report | Produce actionable output |

## Output

Description of what the user gets back.
````
## Key Design Principles
- Description is discovery. The description field is the only thing Copilot reads to decide whether to load your skill. Pack it with trigger phrases and keywords.
- Progressive loading. Copilot reads only name + description (~100 tokens) for all skills. It loads the full SKILL.md body only for matched skills. Reference files load only when the procedure references them.
- Self-contained procedures. Include everything the AI needs to execute — exact commands, parameter formats, file paths. Don't assume prior knowledge.
- Scripts do the heavy lifting. The AI orchestrates; your scripts execute. This keeps the workflow deterministic and reproducible.
## Example: Build a Deployment Health Check Skill
Let's build a skill that checks the health of a deployment by querying an API, comparing against expected baselines, and generating a summary.
### Step 1 — Create the folder structure

```
.github/skills/deployment-health/
├── SKILL.md
└── references/
    └── resources/
        ├── check_health.py
        └── endpoints.yaml
```
### Step 2 — Write the SKILL.md

````markdown
---
name: deployment-health
description: 'Check deployment health across environments. Queries
  health endpoints, compares response times against baselines, and
  flags degraded services. USE FOR: deployment validation, health
  check, post-deploy verification, service status. Trigger phrases:
  "check deployment health", "is the deployment healthy",
  "post-deploy check", "service health".'
argument-hint: 'Provide the environment name (e.g., staging, production).'
---

# Deployment Health Check

## When to Use

- After deploying to any environment
- During incident triage to check service status
- Scheduled spot checks

## Quick Start

```bash
cd .github/skills/deployment-health/references/resources
python check_health.py <environment>
```

## What It Does

1. Loads endpoint definitions from `endpoints.yaml`
2. Calls each endpoint, records response time and status code
3. Compares against baseline thresholds
4. Generates an HTML report with pass/fail status

## Output

HTML report at `references/reports/health_<environment>_<date>.html`
````
### Step 3 — Write the script

```python
# check_health.py
import sys
import time
import json
from datetime import datetime

import requests
import yaml


def main():
    env = sys.argv[1]
    with open("endpoints.yaml") as f:
        config = yaml.safe_load(f)

    results = []
    for ep in config["endpoints"]:
        url = ep["url"].replace("{env}", env)
        start = time.time()
        resp = requests.get(url, timeout=10)
        elapsed = time.time() - start
        results.append({
            "service": ep["name"],
            "status": resp.status_code,
            "latency_ms": round(elapsed * 1000),
            "threshold_ms": ep["threshold_ms"],
            "healthy": resp.status_code == 200 and elapsed * 1000 < ep["threshold_ms"],
        })

    healthy = sum(1 for r in results if r["healthy"])
    print(f"Health check: {healthy}/{len(results)} services healthy")
    # ... generate HTML report ...


if __name__ == "__main__":
    main()
```
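The script reads its endpoint definitions from `endpoints.yaml`, expecting a list of entries with `name`, `url` (containing an `{env}` placeholder), and `threshold_ms` keys. A minimal example — the service names and URLs here are illustrative, not part of any real deployment:

```yaml
endpoints:
  - name: api-gateway
    url: https://{env}.example.com/health
    threshold_ms: 500
  - name: auth-service
    url: https://{env}.example.com/auth/health
    threshold_ms: 300
```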
### Step 4 — Use it
Just ask Copilot in agent mode: "Check deployment health for staging"
Copilot will:
- Match against the skill description
- Load the SKILL.md instructions
- Run `python check_health.py staging`
- Open the generated report
- Summarize findings in chat
## More Skill Ideas
Skills aren't limited to any specific domain. Here are patterns that work well:
| Skill | What It Automates |
|---|---|
| Test Regression Analyzer | Run tests, parse failures, compare against last known-good run, generate diff report |
| API Contract Checker | Compare OpenAPI specs between branches, flag breaking changes |
| Security Scan Reporter | Run SAST/DAST tools, correlate findings, produce prioritized report |
| Cost Analysis | Query cloud billing APIs, compare costs across periods, flag anomalies |
| Release Notes Generator | Parse git log between tags, categorize changes, generate changelog |
| Infrastructure Drift Detector | Compare live infra state vs IaC templates, flag drift |
| Log Pattern Analyzer | Query log aggregation systems, identify anomaly patterns, summarize |
| Performance Benchmarker | Run benchmarks, compare against baselines, flag regressions |
| Dependency Auditor | Scan dependencies, check for vulnerabilities and outdated packages |
The pattern is always the same: instructions (SKILL.md) + automation script + output template.
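To make one of these patterns concrete, here is a hedged sketch of the categorization step a Release Notes Generator script might perform. The conventional-commit prefixes, category names, and `categorize` helper are illustrative assumptions, not any official tooling:

```python
import re
from collections import defaultdict

# Map conventional-commit prefixes to changelog sections (illustrative mapping).
CATEGORIES = {"feat": "Features", "fix": "Bug Fixes", "docs": "Documentation"}


def categorize(commit_subjects):
    """Group commit subject lines by their conventional-commit prefix."""
    sections = defaultdict(list)
    for subject in commit_subjects:
        match = re.match(r"^(\w+)(?:\([^)]*\))?:\s*(.+)", subject)
        if match and match.group(1) in CATEGORIES:
            sections[CATEGORIES[match.group(1)]].append(match.group(2))
        else:
            # Anything unrecognized lands in a catch-all section.
            sections["Other"].append(subject)
    return dict(sections)


if __name__ == "__main__":
    subjects = ["feat(api): add pagination", "fix: handle null token", "chore: bump deps"]
    for section, items in categorize(subjects).items():
        print(f"## {section}")
        for item in items:
            print(f"- {item}")
```

A real skill would feed this from `git log --pretty=%s <tag>..<tag>` and render the result through a changelog template.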
## Tips for Writing Effective Skills
### Do
- Front-load the description with keywords — this is how Copilot discovers your skill
- Include exact commands — `cd path/to/dir && python script.py <args>`
- Document input/output clearly — what goes in, what comes out
- Use tables for multi-step procedures — easier for the AI to follow
- Include time zone conversion notes if dealing with timestamps
- Bundle HTML report templates — rich output beats plain text
### Don't
- Don't use vague descriptions — "A useful skill" won't trigger on anything
- Don't assume context — include all paths, env vars, and prerequisites
- Don't put everything in SKILL.md — use references/ for large files
- Don't hardcode secrets — use environment variables or Azure Key Vault
- Don't skip error guidance — tell the AI what common errors look like and how to fix them
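On the secrets point, a bundled script can read credentials from the environment and fail fast with actionable guidance when they are missing. A minimal sketch — the `HEALTH_API_TOKEN` variable name is hypothetical:

```python
import os
import sys


def require_env(name):
    """Read a required secret from the environment; exit with guidance if absent."""
    value = os.environ.get(name)
    if not value:
        sys.exit(
            f"Missing required environment variable {name}. "
            f"Set it before running this skill, e.g. export {name}=..."
        )
    return value


# token = require_env("HEALTH_API_TOKEN")  # hypothetical variable name
```

Failing fast like this also gives the AI a clear error message to relay back to you instead of a stack trace.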
## Skill Locations
Skills can live at project or personal level:
| Location | Scope | Shared with team? |
|---|---|---|
| .github/skills/<name>/ | Project | Yes (via source control) |
| .agents/skills/<name>/ | Project | Yes (via source control) |
| .claude/skills/<name>/ | Project | Yes (via source control) |
| ~/.copilot/skills/<name>/ | Personal | No |
| ~/.agents/skills/<name>/ | Personal | No |
| ~/.claude/skills/<name>/ | Personal | No |
Project-level skills are committed to your repo and shared with the team. Personal skills are yours alone and follow you across machines via VS Code Settings Sync.
You can also configure additional skill locations via the `chat.skillsLocations` VS Code setting.
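As a rough illustration, a workspace `settings.json` entry might look like the following — the extra path is invented for this example, and you should check the setting's documentation for the exact value format it expects:

```json
{
  "chat.skillsLocations": ["./tools/skills"]
}
```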
## How Skills Fit in the Copilot Customization Stack
Skills are one of several customization primitives. Here's when to use what:
| Primitive | Use When |
|---|---|
| Workspace Instructions (.github/copilot-instructions.md) | Always-on rules: coding standards, naming conventions, architectural guidelines |
| File Instructions (.github/instructions/*.instructions.md) | Rules scoped to specific file patterns (e.g., all *.test.ts files) |
| Prompts (.github/prompts/*.prompt.md) | Single-shot tasks with parameterized inputs |
| Skills (.github/skills/<name>/SKILL.md) | Multi-step workflows with bundled scripts and templates |
| Custom Agents (.github/agents/*.agent.md) | Isolated subagents with restricted tool access or multi-stage pipelines |
| Hooks (.github/hooks/*.json) | Deterministic shell commands at agent lifecycle events (auto-format, block tools) |
| Plugins | Installable skill bundles from the community (awesome-copilot) |
## Slash Commands & Quick Creation
Skills automatically appear as slash commands in chat. Type `/` to see all available skills. You can also pass context after the command:

```
/deployment-health staging
/webapp-testing for the login page
```
Want to create a skill fast? Type `/create-skill` in chat and describe what you need. Copilot will ask clarifying questions and generate the SKILL.md with proper frontmatter and directory structure.
You can also extract a skill from an ongoing conversation: after debugging a complex issue, ask "create a skill from how we just debugged that" to capture the multi-step procedure as a reusable skill.
## Controlling When Skills Load
Use frontmatter properties to fine-tune skill availability:
| Configuration | Slash command? | Auto-loaded? | Use case |
|---|---|---|---|
| Default (both omitted) | Yes | Yes | General-purpose skills |
| user-invocable: false | No | Yes | Background knowledge the model loads when relevant |
| disable-model-invocation: true | Yes | No | Skills you only want to run on demand |
| Both set | No | No | Disabled skills |
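For instance, a background-knowledge skill that should load automatically but never appear as a slash command would set the first option in its frontmatter. The skill name and description here are hypothetical:

```yaml
---
name: team-glossary
description: 'Internal terminology and acronyms used across our services.'
user-invocable: false
---
```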
## The Open Standard
Agent Skills follow an open standard that works across multiple AI agents:
- GitHub Copilot in VS Code — chat and agent mode
- GitHub Copilot CLI — terminal workflows
- GitHub Copilot coding agent — automated coding tasks
- Claude Code, Gemini CLI — compatible agents via .claude/skills/ and .agents/skills/
Skills you write once are portable across all these tools.
## Getting Started
- Create .github/skills/<your-skill>/SKILL.md in your repo
- Write a keyword-rich description in the YAML frontmatter
- Add your procedure and reference scripts
- Open VS Code, switch to Agent mode, and ask Copilot to do the task
- Watch it discover your skill, load the instructions, and execute
Or skip the manual setup — type `/create-skill` in chat and describe what you need.
That's it. No extension to install. No config file to update. No deployment pipeline. Just markdown and scripts, version-controlled in your repo.
Custom Skills turn your documented procedures into executable AI workflows. Start with your most painful manual task, wrap it in a SKILL.md, and let Copilot handle the rest.
Further Reading: