Microsoft Developer Community Blog
🏆 Agents League Winner Spotlight – Reasoning Agents Track

carlottacaste
Apr 17, 2026

In this post, we celebrate the winning project of the Agents League Reasoning Agents track—a smart, multi‑agent system built on Microsoft Foundry that turns certification exam prep into a well‑orchestrated, reasoning‑driven experience.

Agents League was designed to showcase what agentic AI can look like when developers move beyond single‑prompt interactions and start building systems that plan, reason, verify, and collaborate.

Across three competitive tracks—Creative Apps, Reasoning Agents, and Enterprise Agents—participants had two weeks to design and ship real AI agents using production‑ready Microsoft and GitHub tools, supported by live coding battles, community AMAs, and async builds on GitHub.

Today, we’re excited to spotlight the winning project for the Reasoning Agents track, built on Microsoft Foundry: CertPrep Multi‑Agent System — Personalised Microsoft Exam Preparation by Athiq Ahmed.

 

The Reasoning Agents Challenge Scenario

The goal of the Reasoning Agents challenge was to design a multi‑agent system that effectively assists students in preparing for Microsoft certification exams. Participants were asked to build an agentic workflow that could understand certification syllabi, generate personalized study plans, assess learner readiness, and continuously adapt based on performance and feedback.

The suggested reference architecture modeled a realistic learning journey: starting from free‑form student input, a sequence of specialized reasoning agents collaboratively curated Microsoft Learn resources, produced structured study plans with timelines and milestones, and maintained learner engagement through reminders. Once preparation was complete, the system shifted into an assessment phase to evaluate readiness and either recommend the appropriate Microsoft certification exam or loop back into targeted remediation, emphasizing reasoning, decision‑making, and human‑in‑the‑loop validation at every step.

All details are available in the microsoft/agentsleague repository, under starter-kits/2-reasoning-agents.

 

The Winning Project: CertPrep Multi‑Agent System

The CertPrep Multi‑Agent System is an AI solution for personalized Microsoft certification exam preparation, supporting nine certification exam families.

At a high level, the system turns free‑form learner input into a structured certification plan, measurable progress signals, and actionable recommendations—demonstrating exactly the kind of reasoned orchestration this track was designed to surface.

Inside the Multi‑Agent Architecture

At its core, the system is designed as a multi‑agent pipeline that combines sequential reasoning, parallel execution, and human‑in‑the‑loop gates, with traceability and responsible AI guardrails.

The solution is composed of eight specialized reasoning agents, each focused on a specific stage of the learning journey. The key agents include:

  • LearnerProfilingAgent – Converts free‑text background information into a structured learner profile using the Microsoft Foundry SDK (with deterministic fallbacks).
  • StudyPlanAgent – Generates a week‑by‑week study plan using a constrained allocation algorithm to respect the learner’s available time.
  • LearningPathCuratorAgent – Maps exam domains to curated Microsoft Learn resources with trusted URLs and estimated effort.
  • ProgressAgent – Computes a weighted readiness score based on domain coverage, time utilization, and practice performance.
  • AssessmentAgent – Generates and evaluates domain‑proportional mock exams.
  • CertificationRecommendationAgent – Issues a clear “GO / CONDITIONAL GO / NOT YET” decision with remediation steps and next‑cert suggestions.
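The readiness-scoring and decision steps above can be sketched as a small Python example. The signal names, weights, and decision thresholds below are illustrative assumptions, not the project's actual values:

```python
from dataclasses import dataclass

@dataclass
class ProgressSignals:
    domain_coverage: float       # fraction of exam domains studied, 0..1
    time_utilization: float      # study time logged vs. planned, 0..1
    practice_performance: float  # mean mock-exam score, 0..1

def readiness_score(signals: ProgressSignals,
                    weights: tuple = (0.4, 0.2, 0.4)) -> float:
    """Weighted blend of the progress signals, clamped to [0, 1]."""
    w_cov, w_time, w_perf = weights
    raw = (w_cov * signals.domain_coverage
           + w_time * signals.time_utilization
           + w_perf * signals.practice_performance)
    return max(0.0, min(1.0, raw))

def recommend(readiness: float) -> str:
    """Map a readiness score to the GO / CONDITIONAL GO / NOT YET decision."""
    if readiness >= 0.8:
        return "GO"
    if readiness >= 0.6:
        return "CONDITIONAL GO"
    return "NOT YET"
```

A learner with strong coverage but middling mock-exam results would land in "CONDITIONAL GO" under these hypothetical thresholds, which is exactly the loop-back-to-remediation path the track scenario calls for.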

Throughout the pipeline, a 17‑rule Guardrails Pipeline enforces validation checks at every agent boundary, and two explicit human‑in‑the‑loop gates ensure that decisions are made only when sufficient learner confirmation or data is present.
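As a rough illustration of such a boundary check, the sketch below shows what one guardrail plus a human-in-the-loop gate might look like in Python. The field names and gate semantics are assumptions for illustration, not any of the project's actual 17 rules:

```python
def require_confirmed_profile(profile: dict) -> dict:
    """Guardrail at an agent boundary: block downstream agents until the
    required fields exist and the learner has explicitly confirmed the
    profile (a human-in-the-loop gate)."""
    required = {"target_exam", "experience_level", "hours_per_week"}
    missing = required - profile.keys()
    if missing:
        raise ValueError(f"profile incomplete: {sorted(missing)}")
    if not profile.get("confirmed_by_learner", False):
        raise PermissionError("waiting for learner confirmation")
    return profile
```

Because the check raises instead of silently passing partial data along, a downstream agent such as the study planner can assume a validated, learner-approved input.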

CertPrep leverages Microsoft Foundry Agent Service and related tooling to run this reasoning pipeline reliably and observably:

  • Managed agents via Foundry SDK
  • Structured JSON outputs using GPT‑4o (JSON mode) with conservative temperature settings
  • Guardrails enforced through Azure Content Safety
  • Parallel agent fan‑out using concurrent execution
  • Typed contracts with Pydantic for every agent boundary
  • AI-assisted development with GitHub Copilot, used throughout for code generation, refactoring, and test scaffolding
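To show the idea behind those typed contracts, here is a minimal standard-library stand-in (the project itself uses Pydantic; the model and field names are invented for illustration). One agent's JSON-mode output is parsed and validated into a typed object before it crosses the boundary to the next agent:

```python
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class StudyWeek:
    """Typed contract for one week of a generated study plan."""
    week: int
    domain: str
    hours: float

def parse_study_week(raw: str) -> StudyWeek:
    """Validate a model's JSON output against the contract, failing fast
    on missing fields or out-of-range values."""
    data = json.loads(raw)
    wk = StudyWeek(int(data["week"]), str(data["domain"]), float(data["hours"]))
    if wk.hours <= 0:
        raise ValueError("hours must be positive")
    return wk
```

Failing fast at the boundary means a malformed LLM response surfaces as an explicit error rather than propagating silently through the rest of the pipeline.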

Notably, the full pipeline is designed to run in under one second in mock mode, enabling reliable demos without live credentials.
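The parallel fan-out listed above can be sketched with a thread pool; `curate` here is a hypothetical stand-in for a per-domain agent call such as the LearningPathCuratorAgent:

```python
from concurrent.futures import ThreadPoolExecutor

def curate(domain: str) -> dict:
    # Placeholder for an independent per-domain agent call.
    return {"domain": domain,
            "resources": [f"https://learn.microsoft.com/training/{domain}"]}

def fan_out(domains: list) -> list:
    """Run independent agent calls concurrently, preserving input order."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(curate, domains))
```

Since each domain's curation is independent, fanning out like this keeps wall-clock time close to the slowest single call rather than the sum of all calls.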

 

User Experience: From Onboarding to Exam Readiness

Beyond its backend architecture, CertPrep places strong emphasis on clarity, transparency, and user trust through a well‑structured front‑end experience. The application is built with Streamlit and organized as a 7‑tab interactive interface, guiding learners step‑by‑step through their preparation journey.

From a user’s perspective, the flow looks like this:

  1. Profile & Goals Input
    Learners start by describing their background, experience level, and certification goals in natural language. The system immediately reflects how this input is interpreted by displaying the structured learner profile produced by the profiling agent.
  2. Learning Path & Study Plan Visualization
    Once generated, the study plan is presented using visual aids such as Gantt‑style timelines and domain breakdowns, making it easy to understand weekly milestones, expected effort, and progress over time.
  3. Progress Tracking & Readiness Scoring
    As learners move forward, the UI surfaces an exam‑weighted readiness score, combining domain coverage, study plan adherence, and assessment performance—helping users understand why the system considers them ready (or not yet).
  4. Assessments and Feedback
    Practice assessments are generated dynamically, and results are reported alongside actionable feedback rather than just raw scores.
  5. Transparent Recommendations
    Final recommendations are presented clearly, supported by reasoning traces and visual summaries, reinforcing trust and explainability in the agent’s decision‑making.

The UI also includes an Admin Dashboard and demo‑friendly modes, enabling judges, reviewers, or instructors to inspect reasoning traces, switch between live and mock execution, and demonstrate the system reliably without external dependencies.

 

Why This Project Stood Out

This project embodies the spirit of the Reasoning Agents track in several ways:

  • ✅ Clear separation of reasoning roles, instead of prompt‑heavy monoliths
  • ✅ Deterministic fallbacks and guardrails, critical for educational and decision‑support systems
  • ✅ Observable, debuggable workflows, aligned with Foundry’s production goals
  • ✅ Explainable outputs, surfaced directly in the UX

It demonstrates how agentic patterns translate cleanly into maintainable architectures when supported by the right platform abstractions.

 

Try It Yourself

Explore the project, architecture, and demo here:

Updated Apr 14, 2026
Version 1.0