What if your Learning Management System didn't just host lecture documents, assignments, and grades - but actually understood them?
Every time I sit through a lecture, a constant thought lingers: "I love what I'm studying, don't get me wrong - but it's a lot!" These are 3-hour lectures packed with content, plus piles of reference materials - how do I create efficient study routines beyond these lectures? With the world moving toward an agentic future, AI should help, but having read so many posts on AI personalization for education, in my experience that personalized support isn't here - YET!
Here is the catch though! I don't have weeks to design an architecture, plan every component, and slowly build my way there. I have a problem and a rough idea of a solution, and I need a working prototype fast!
Enter GitHub Copilot CLI
[GIF: typing `copilot --banner` in the terminal]

Staring at an empty folder with a half-baked idea and not exactly sure where to start, I spun up the terminal and launched a Copilot Agent in /plan mode for a brainstorming session. You know - to help me think.
This was less a building session and more an interactive brainstorm: the agent asked clarifying questions about features, stack preferences, and constraints, then returned a comprehensive implementation plan in seconds.
That step alone was incredibly valuable: it didn't just give the agent a picture of what I wanted to build, it also surfaced scenarios I hadn't even thought of. Even without the full implementation, it was enough to move my idea forward, and it has reshaped my regular ideation routine, which is now: idea → brainstorm with Copilot /plan mode → save the plan → iterate.
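If you want to try this yourself, the flow is as simple as launching the CLI and dropping into plan mode (the prompt below is an illustration, not my exact one):

```
$ copilot
> /plan An LMS that actually understands my course material: upload lecture
  notes and references, then quiz me and track my exercise progress.
```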
The Solution
With the plan ready, you might tell the agent to “Start Implementation,” and it'll likely do a great job, but I prefer a five-phase workflow that balances speed, structure, and my desired level of involvement in the project (phases may vary by use case):
[Image: 5 phases - Brainstorm, Research, Project Setup, Core Logic, Test & Frontend]

Here is how I think about the stages:
1. Brainstorm
The goal here is to ensure the idea is crystal clear, not just to the builder (me) but to the agent(s), and more importantly, that we are all on the same page.
2. Research
This phase surfaces the latest docs, announcements, and decision factors, so even though most of the implementation is delegated to the agent(s), the builder (me) clearly understands why database/framework/provider X was chosen over Y.
3. Project Setup
This is where the agent focuses on installs, project scaffolding, configuration, and defining how the components in the architecture communicate.
4. Core Functionality
The main goal here is to implement the core logic behind the system’s essential behavior, followed by a thorough validation that APIs and DB schemas map to the target features.
5. Frontend
Language models rarely struggle with UI design work. In my experience, the trick to getting the perfect frontend from a single prompt is to save this task for last: the agent will not only factor in the features already implemented, but also build a design that anticipates and accommodates the future enhancements you thought about and noted in the brainstorm notes (plan docs).
With these phases documented, plus the plan docs stored in my project directory, I'm confident that when I switch to different agents working on my project, they'll all have a clear, common and referenceable north star and can work on whatever component or feature I delegate to them with the right context.
After the first iteration of this workflow, in a matter of minutes I had a full-stack application with a beautiful UI: I could browse through the courses and upload notes (PDF and text files), which were stored in the database.
Hooray! Happy that it worked, but - is there anything extraordinary about that? Not really, since most current LMSs can already do this.
But, here is where we step up the game.
Instead of uploading school docs and letting them sit there, a file upload kicks off an ingestion pipeline that builds a knowledge base language models can reason over.
[Image: School Agent - ingestion pipeline for RAG]

So the backend:
- extracts the file content. Output: one large, long block of text
- applies a chunking strategy to break that long block into smaller groups of text. Output: chunks of roughly 512 tokens each, with a 100-token overlap for context continuity
- generates vector embeddings. Output: embeddings stored in a single DB (alongside my existing data) using the pgvector extension
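To make the pipeline concrete, here's a minimal sketch of what that flow could look like. A note on assumptions: the embedding model, tokenizer, table name, and connection string are illustrative choices, not necessarily what School Agent's backend actually uses.

```python
# Ingestion sketch: extract -> chunk (512 tokens, 100 overlap) -> embed -> pgvector.
# Model, table, and connection details are assumptions for illustration.
import os

import psycopg   # psycopg 3
import tiktoken
from openai import OpenAI

CHUNK_TOKENS = 512    # chunk size from the pipeline above
OVERLAP_TOKENS = 100  # overlap for context continuity

enc = tiktoken.get_encoding("cl100k_base")
client = OpenAI()  # reads OPENAI_API_KEY from the environment

def chunk_text(text: str) -> list[str]:
    """Split one long text block into ~512-token chunks with 100-token overlap."""
    tokens = enc.encode(text)
    chunks, start = [], 0
    while start < len(tokens):
        chunks.append(enc.decode(tokens[start : start + CHUNK_TOKENS]))
        start += CHUNK_TOKENS - OVERLAP_TOKENS
    return chunks

def ingest(document_id: int, text: str) -> None:
    """Embed each chunk and store it alongside existing data via pgvector."""
    with psycopg.connect(os.environ["DATABASE_URL"]) as conn:
        for chunk in chunk_text(text):
            emb = client.embeddings.create(
                model="text-embedding-3-small", input=chunk
            ).data[0].embedding
            conn.execute(
                "INSERT INTO document_chunks (document_id, content, embedding)"
                " VALUES (%s, %s, %s)",
                (document_id, chunk, str(emb)),  # pgvector parses '[0.1, 0.2, ...]'
            )
```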
Now, with my data in a format that language models can understand, the next part involves adding an intelligent layer, which we achieve in two steps:
- expose API endpoints in a format that language models can use (tools), and
- create an autonomous AI workflow that handles tool orchestration and determines when to use what.
Enter Model Context Protocol (MCP)
APIs are designed with humans as the primary users: the discovery path is optimized for reading API docs to find endpoints and writing custom integrations to consume them. That doesn't work for language models, which need a more dynamic, self-discoverable, runtime approach encapsulated in a standardized interface for AI.
This is exactly what the Model Context Protocol provides: a standard that connects AI-native apps/agents to data and tools dynamically.
In the steps above, Copilot CLI uses this very protocol to pull data from external sources, accessing documentation on how the MCP architecture works and how to build and connect to MCP servers. From a single prompt, it extended my existing backend (API layer) into an MCP server with tools that let the agent perform actions dynamically: reading course material, generating question-answer pairs from the course content for quizzes, extracting coding exercises, and updating my completion progress, among other functions.
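To give you a feel for the result, here's a minimal sketch of one such tool using the official MCP Python SDK (FastMCP). The `embed` and `db` helpers are hypothetical stand-ins for the backend pieces built earlier, and the retrieval query is illustrative:

```python
# One MCP tool on top of the existing backend, via the official MCP Python SDK.
# `embed` and `db` are hypothetical stand-ins for the ingestion-time helpers.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("school-agent")

@mcp.tool()
def get_course_documents(course_id: int, query: str, limit: int = 5) -> list[str]:
    """Return the course document chunks most relevant to the query."""
    emb = embed(query)  # same embedding model used at ingestion time
    rows = db.execute(
        "SELECT content FROM document_chunks"
        " WHERE document_id IN (SELECT id FROM documents WHERE course_id = %s)"
        " ORDER BY embedding <=> %s LIMIT %s",  # pgvector cosine distance
        (course_id, str(emb), limit),
    )
    return [row[0] for row in rows]

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport
```

The nice part is that each tool is just a typed function with a docstring; the protocol handles discovery, so any MCP client can find and call it at runtime.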
The quickest way to test this MCP setup is with GitHub Copilot as the MCP client, since I'm yet to build any agentic workflows. I'm already in VS Code, so I simply (1) add the MCP server configuration to my .vscode/mcp.json, and the tools become (2) accessible within the Copilot chat window. I start testing with my (3) custom School Agent, comparing different prompts and (4) tool-use accuracy to get a feel for what the agent experience would look like in the app. And of course, you can do all of this through the Copilot CLI if you prefer working from the terminal.
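For reference, a minimal `.vscode/mcp.json` entry for a local stdio server could look like the following (the server name, command, and file path are assumptions about the project layout):

```json
{
  "servers": {
    "school-agent": {
      "type": "stdio",
      "command": "python",
      "args": ["backend/mcp_server.py"]
    }
  }
}
```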
[Screenshot: VS Code with the mcp.json configuration, the configure-tools view, and GitHub Copilot using the getCourseDocuments tool to ground responses in school documents]

That's step 1. Step 2 is building the agent itself.
Enter GitHub Copilot SDK
When it comes to building agents, there are plenty of agent development kits and frameworks that make it easier to create and manage agent execution loops, but a recent (and exciting) announcement from GitHub is the new GitHub Copilot SDK. I'll link to the repo in the resources section, but in short: you can let the same infrastructure that powers today's GitHub Copilot handle all the building blocks of an agent - tool discovery and orchestration, session management, real-time streaming, multi-turn loops, etc. - and simply call that agent workflow programmatically from your application.
There wasn't much to go on in terms of documentation, as this is still very new, but from what I read in the announcement blog and the SDK repo, this was mind-blowing. I had to try it!
I'll admit that when I started this project, I had a different idea of how to approach the agentic part, but luckily the SDK was announced before I got there, and I decided it was worth a try. I am proud to say that I wasn't disappointed.
I jumped into a brainstorming session with my buddy Copilot CLI, which had context from the SDK repo, and settled on an approach built around specialization. Instead of having one agent handle all tasks, let's have smaller, specialized agents for each. i.e.,
- I like to frequently quiz myself on topics - let's have an agent that does that one task PERFECTLY!
- I'm struggling to track the completion of exercises provided in a course PDF - let's have an agent specialized in extracting coding exercises from the eBook, tracking my progress, and helping out when I'm stuck or when I need a quick review of my code attempts.
The beauty of using the Copilot SDK is that if you have such an idea, you won't have to build it from scratch, because chances are someone has already thought it out and there is likely a feature or a Copilot-native pattern ready for you to use. This case is no exception: the idea of giving Copilot specialized capabilities for specific tasks is already implemented through Agent Skills.
So all I needed was to define a `SKILL.md` document for each specialized task I needed - a flashcard generator (`.github/skills/flashcard-generator/SKILL.md`) and a Java practice tracker (`.github/skills/java-practice-tracker/SKILL.md`) - and pass a `skill` property to the agent in code, which, again, the Copilot CLI implemented in minutes.
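To give you a sense of the shape, here's a minimal sketch of the flashcard generator's `SKILL.md`. The frontmatter fields follow the Agent Skills format; the instructions themselves are illustrative, not my exact ones:

```markdown
---
name: flashcard-generator
description: Generates question-answer flashcards from course material for self-quizzing.
---

# Flashcard Generator

When asked to quiz the user on a topic:

1. Pull the most relevant course chunks via the getCourseDocuments tool.
2. Generate concise question-answer pairs grounded ONLY in that material.
3. Ask one question at a time, reveal the answer after the user responds,
   and track missed questions for a follow-up round.
```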
In just a couple of hours (with so many breaks in between), I ended up with School Agent: an app that takes learning management systems to the next level. And this is just the beginning.
[Image: School Agent working architecture with frontend, agent, backend API, MCP server, and DB (PostgreSQL + pgvector) components]

With tools like the Copilot CLI, the SDK, and other AI dev tools, experimentation has never been easier. I have so many ideas (I'm sure you do too) for making this system even more useful, and I'm confident that in a short while I'll be back with the next set of features built out and working to perfection.
I'm evolving School Agent into a program-agnostic architecture, and I hope to share it with you soon so you can try it out and make it your own.
Yes, things are moving fast in the AI space, but at least this way I have AI working with me and for me to improve an actual real-world experience (and so can you). I encourage you not to just take other people's word for it - maybe you saw a cool demo on YouTube/X recently, or you enjoyed this post (I hope you did) - but don't settle for that. Find an immediate problem you're having today and tinker around. Build something. Anything. Everything!
To students: Would you use School Agent? What does it need to do to be even more useful to you?
To educators: How can your students benefit from such a tool? What else would you like to see implemented to support you?
Resources
- Check out this video walk-through of School Agent
- Get started with Copilot CLI
- Get started with the Copilot SDK
- Agent Skills
If you enjoyed this post, let's connect on LinkedIn, X and Bsky